Research


I currently focus on online algorithms, in particular scheduling under random-order arrival.


Below is a selection of papers I have worked on.

In my field, conference publications are valued more highly than journal publications.
Authors are listed in alphabetical order.



Refereed Conference Proceedings



Coming soon.


Susanne Albers, Maximilian Janke

To appear in: 41st Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS) 2021

Download: arXiv, from this website.



Coming soon.


Susanne Albers, Waldo Gálvez, Maximilian Janke

To appear in: The 32nd International Symposium on Algorithms and Computation (ISAAC) 2021



We study Online Makespan Minimization with uncertain job processing times. Jobs are assigned to m parallel and identical machines; preemption is not allowed. Each job has a regular processing time, while up to Γ jobs fail and require additional processing time. The goal is to minimize the makespan, the time it takes to process all jobs if these Γ failing jobs are chosen in the worst possible way. This models real-world applications where acts of nature beyond our control have to be accounted for. So far, Makespan Minimization with Budgeted Uncertainty has only been studied as an offline problem. We are the first to provide a comprehensive analysis of the corresponding online problem. We provide a lower bound of 2 for general deterministic algorithms, showing that the problem is more difficult than its special case, classical Online Makespan Minimization. We further analyze Graham's Greedy strategy and show that it is precisely (3 − 2/m)-competitive; this bound is tight. We finally provide a more sophisticated deterministic algorithm whose competitive ratio approaches 2.9052.
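Graham's Greedy strategy mentioned in the abstract assigns each arriving job to a machine of currently smallest load. A minimal sketch of this classical rule (function names are mine for illustration; the regular/failing distinction of the paper is omitted):

```python
import heapq

def greedy_schedule(jobs, m):
    """Graham's Greedy: assign each job to a currently least loaded machine.

    `jobs` is a sequence of processing times, `m` the number of machines.
    Returns the resulting makespan.
    """
    loads = [0] * m              # min-heap of machine loads
    heapq.heapify(loads)
    for p in jobs:
        least = heapq.heappop(loads)   # machine with smallest current load
        heapq.heappush(loads, least + p)
    return max(loads)

# Example: four unit jobs followed by one job of size 2 on m = 2 machines.
# Greedy reaches makespan 4, while an optimal schedule achieves 3.
print(greedy_schedule([1, 1, 1, 1, 2], 2))  # -> 4
```

This already illustrates why Greedy is not optimal: spreading the small jobs evenly leaves no machine free for the large job that arrives last.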


Susanne Albers, Maximilian Janke

The 17th Algorithms and Data Structures Symposium (WADS) 2021

Download: Conference, from this website.



Makespan minimization on identical machines is a fundamental problem in online scheduling. The goal is to assign a sequence of jobs to m identical parallel machines so as to minimize the maximum completion time of any job. Already in the 1960s, Graham showed that Greedy is (2 − 1/m)-competitive. The best deterministic online algorithm currently known achieves a competitive ratio of 1.9201. No deterministic online strategy can obtain a competitiveness smaller than 1.88. In this paper, we study online makespan minimization in the popular random-order model, where the jobs of a given input arrive as a random permutation. It is known that Greedy does not attain a competitive factor asymptotically smaller than 2 in this setting. We present the first improved performance guarantees. Specifically, we develop a deterministic online algorithm that achieves a competitive ratio of 1.8478. The result relies on a new analysis approach. We identify a set of properties that a random permutation of the input jobs satisfies with high probability. Then we conduct a worst-case analysis of our algorithm for the respective class of permutations. The analysis implies that the stated competitiveness holds not only in expectation but with high probability. Moreover, it provides mathematical evidence that job sequences leading to higher performance ratios are extremely rare, pathological inputs. We complement these results with lower bounds for the random-order model. We show that no deterministic online algorithm can achieve a competitive ratio smaller than 4/3. Moreover, no deterministic online algorithm can attain a competitiveness smaller than 3/2 with high probability.
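The random-order model can be simulated directly: shuffle the input and compare Greedy's makespan to the standard lower bound max(total load / m, largest job) on the optimum. A small experiment sketch (names and the particular input are my own, chosen as the classical bad instance for Greedy):

```python
import heapq
import random

def greedy_makespan(jobs, m):
    """Graham's Greedy on the given job order; returns the makespan."""
    loads = [0] * m
    heapq.heapify(loads)
    for p in jobs:
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)

def opt_lower_bound(jobs, m):
    # Every schedule needs at least the average load and the largest job.
    return max(sum(jobs) / m, max(jobs))

def random_order_ratio(jobs, m, trials=1000, seed=0):
    """Average Greedy/lower-bound ratio over random permutations of `jobs`."""
    rng = random.Random(seed)
    jobs = list(jobs)
    total = 0.0
    for _ in range(trials):
        rng.shuffle(jobs)
        total += greedy_makespan(jobs, m) / opt_lower_bound(jobs, m)
    return total / trials

# Adversarial input for Greedy: m(m-1) unit jobs, then one job of size m.
# In the worst order the ratio is 2 - 1/m; a random order typically does better.
m = 4
jobs = [1] * (m * (m - 1)) + [m]
print(random_order_ratio(jobs, m))
```

Such experiments are consistent with the paper's message: orders on which Greedy performs near its worst-case ratio of 2 are rare under a random permutation.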


Susanne Albers, Maximilian Janke

The 47th International Colloquium on Automata, Languages and Programming (ICALP) 2020

Download: Journal, Conference, arXiv, from this website.



We study the fundamental list update problem in the paid exchange model Pd. This cost model was introduced by Manasse, McGeoch and Sleator as well as by Reingold, Westbrook and Sleator. Here the given list of items may only be rearranged using paid exchanges; each swap of two adjacent items in the list incurs a cost of d. Free exchanges of items are not allowed. The model is motivated by the fact that, when executing search operations on a data structure, key comparisons are less expensive than item swaps. We develop a new randomized online algorithm that achieves an improved competitive ratio against oblivious adversaries. For large d, the competitiveness tends to 2.2442. Technically, the analysis of the algorithm relies on a new approach of partitioning request sequences and charging expected cost. Furthermore, we devise lower bounds on the competitiveness of randomized algorithms against oblivious adversaries; no such lower bounds were known before. Specifically, we prove that no randomized online algorithm can achieve a competitive ratio smaller than 2 in the partial cost model, where an access to the i-th item in the current list incurs a cost of i − 1 rather than i. All algorithms proposed in the literature attain their competitiveness in the partial cost model. Furthermore, we show that no randomized online algorithm can achieve a competitive ratio smaller than 1.8654 in the standard full cost model. Again, the lower bounds hold for large d.
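To make the cost model concrete: accessing the i-th item of the current list costs i (full cost model), and each paid exchange of two adjacent items costs d. The following sketch serves a request sequence with a simple Move-To-Front-style strategy that pays for every swap; this is an illustration of the cost accounting only, not the paper's algorithm, and the names are mine:

```python
def serve_mtf_paid(requests, initial_list, d):
    """Serve `requests` on `initial_list` in the paid exchange model Pd.

    Accessing the item at position i (1-based) costs i; afterwards the item
    is moved to the front via i - 1 paid exchanges of adjacent items, each
    costing d. Returns the total cost. Illustrative only.
    """
    lst = list(initial_list)
    cost = 0
    for x in requests:
        i = lst.index(x) + 1       # 1-based position of the requested item
        cost += i                  # access cost (full cost model)
        cost += d * (i - 1)        # paid exchanges to move x to the front
        lst.insert(0, lst.pop(i - 1))
    return cost

# Example: list [a, b, c], request c twice with d = 2.
# First access: cost 3 + 2*2 = 7; second access: c is in front, cost 1.
print(serve_mtf_paid(["c", "c"], ["a", "b", "c"], 2))  # -> 8
```

For large d, moving items to the front after every access is clearly too aggressive, which is why algorithms for this model rearrange the list far more conservatively.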


Susanne Albers, Maximilian Janke

The 37th International Symposium on Theoretical Aspects of Computer Science (STACS) 2020

Download: Conference, from this website.