#### New measures for the analysis of online algorithms

Competitive analysis has been the method of choice for the analysis of online algorithms. However, despite its extreme success, it has long been known that it leads to unsatisfactory results for certain fundamental online problems. For instance, for the well-known problem of paging in a virtual memory system, there exists a vast class of algorithms all of which have the same competitive ratio. In order to bridge this gap between theoretical analysis and empirical performance, we introduced *bijective analysis* as a natural and intuitive framework for comparing the performance of online algorithms. Informally, this technique is based on pairwise comparison of the costs incurred by two algorithms on sets of request sequences of the same size. Using this technique, we proved that Least-Recently-Used is the unique optimal algorithm for the paging problem, a result that is in full agreement with the experimental evaluation of paging algorithms.
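As a toy illustration of this pairwise comparison, the sketch below enumerates all request sequences of a fixed length and tests bijective dominance between two paging algorithms. The cache size, page universe, sequence length, and all function names are illustrative choices of mine, not from the text; it relies on the standard fact that a cost-dominating bijection exists exactly when the two sorted cost lists compare pointwise.

```python
from itertools import product

def lru_faults(seq, k):
    """Page faults of Least-Recently-Used with cache size k."""
    cache = []  # most recently used page kept at the end
    faults = 0
    for p in seq:
        if p in cache:
            cache.remove(p)          # hit: refresh recency
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)         # evict least recently used
        cache.append(p)
    return faults

def fifo_faults(seq, k):
    """Page faults of First-In-First-Out with cache size k."""
    cache = []
    faults = 0
    for p in seq:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.pop(0)         # evict oldest resident page
            cache.append(p)
    return faults

def bijectively_dominates(costs_a, costs_b):
    """True iff some bijection pi on the sequence set satisfies
    cost_A(s) <= cost_B(pi(s)) for every s; equivalently, the
    sorted cost lists compare pointwise."""
    return all(a <= b for a, b in zip(sorted(costs_a), sorted(costs_b)))

pages, k, n = (0, 1, 2), 2, 6
seqs = list(product(pages, repeat=n))
lru = [lru_faults(s, k) for s in seqs]
fifo = [fifo_faults(s, k) for s in seqs]
print(bijectively_dominates(lru, fifo))
```

Pairing the i-th smallest cost of one algorithm with the i-th smallest cost of the other realizes the bijection, which is why the sorted pointwise check suffices.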

#### Parameterized analysis of online graph optimization problems

Several graph and network optimization problems have interesting online variants. For instance, in online Steiner problems the set of terminals is not known in advance, but is rather revealed as a sequence of requests. Although traditional competitive analysis can be applied in the context of online Steiner tree problems, for many variants it is often overly pessimistic. More specifically, it is often the case that extremely naive algorithms achieve the same optimal competitive ratio as more sophisticated ones (typically, linear in the number of terminals). We got around this problem by applying a *parameterized* competitive analysis of Steiner tree problems. The main idea is to identify key parameters that affect the performance of algorithms, and express the competitive ratio as a function of these parameters (in addition to the number of terminals). We are interested in intuitive parameters that reflect real settings (for instance, parameters motivated by the study of real networks).
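The standard greedy baseline in this setting can be sketched as follows: each arriving terminal buys a shortest path to the closest previously connected node. This is a simplified variant for illustration; the graph encoding and all names are mine, not from the text.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src in a weighted graph given as
    {node: {neighbor: weight}}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def greedy_online_steiner(graph, terminals):
    """Serve terminals online: connect each new terminal to the
    closest previously arrived terminal; return total cost bought."""
    connected = {terminals[0]}
    total = 0
    for t in terminals[1:]:
        dist = dijkstra(graph, t)
        total += min(dist[c] for c in connected)
        connected.add(t)
    return total

graph = {
    "a": {"b": 1, "c": 4},
    "b": {"a": 1, "c": 2},
    "c": {"a": 4, "b": 2},
}
print(greedy_online_steiner(graph, ["a", "c", "b"]))
```

On this instance the second terminal pays the two-hop path of cost 3 rather than the direct edge of cost 4, illustrating why the order of arrivals, and not just the terminal set, drives the online cost.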

#### Black-box techniques for efficient interruptible algorithms

Is it possible to design algorithms that produce reasonable and satisfactory solutions even if they are interrupted during their execution? This is a central question in the context of *bounded-resource computation* in AI, with numerous applications such as medical diagnostic systems, robotic motion planning and game-playing programs. An *interruptible* algorithm has exactly this desirable property: it can be interrupted (queried), at which point it must report its current solution. Such algorithms are, in general, more desirable than *contract algorithms*, in which the allowable computation time is known in advance. We investigated optimal strategies for converting a contract algorithm to its interruptible version. This can be accomplished, in “black-box” fashion, by scheduling executions of contract algorithms on either a single processor or multiple processors.
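On a single processor, the classical strategy of this kind runs contracts of exponentially increasing lengths back to back; when interrupted, it reports the result of the longest contract that has fully completed. A minimal sketch, with names of my own choosing:

```python
def doubling_schedule(interrupt_time):
    """Run contracts of lengths 1, 2, 4, ... back to back; on
    interruption, return the length of the longest contract that
    finished, i.e., the useful computation time available."""
    t, length, best = 0.0, 1.0, 0.0
    while t + length <= interrupt_time:
        t += length          # this contract completes in time
        best = length
        length *= 2          # next contract doubles the budget
    return best
```

For any interruption time T >= 1, the longest completed contract has length more than T/4, so this doubling schedule achieves the known acceleration ratio of 4 on one processor; the worst case occurs when the interruption arrives just before the current contract would have finished.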

#### Computational models of greedy-like algorithms

Greedy algorithms are a popular approach for solving optimization problems,
mainly due to their conceptual simplicity and amenability to analysis.
We usually identify an algorithm as “greedy” based on our intuition and experience
(or even our personal “taste”). Suppose, however, that one wants to argue that a given
optimization problem cannot be solved efficiently by any greedy algorithm.
Clearly, any informal understanding of what makes an algorithm greedy cannot yield such statements.
Instead, we require a formal model that abstracts the essential properties of “greediness”.
In my Ph.D. thesis
I investigated the approximation power of *priority algorithms*:
a class of algorithms that captures several intuitive properties of
greedy algorithms and provides a framework for showing that many optimization problems
do not admit efficient greedy-like algorithms.
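The fixed-priority flavor of such a model can be sketched generically: the algorithm commits to an ordering of the input items before seeing the instance, then makes one irrevocable accept/reject decision per item. The sketch below (all names are mine) instantiates this with the classic earliest-finish-time rule for interval scheduling, which fits the model.

```python
def fixed_priority(items, priority, decide):
    """Generic fixed-priority algorithm: fix an ordering of the items
    up front via `priority`, then process them in that order, making
    one irrevocable decision per item via `decide`."""
    accepted = []
    for item in sorted(items, key=priority):
        if decide(item, accepted):
            accepted.append(item)   # decision is never revoked
    return accepted

# Instance of the model: interval scheduling with priority =
# earliest finishing time, accepting an interval iff it is disjoint
# from everything accepted so far.
intervals = [(1, 4), (3, 5), (0, 6), (5, 7), (6, 8)]
chosen = fixed_priority(
    intervals,
    priority=lambda iv: iv[1],
    decide=lambda iv, acc: all(iv[0] >= a[1] or iv[1] <= a[0] for a in acc),
)
print(chosen)
```

Lower bounds in this framework argue against every choice of `priority` and `decide` at once, which is what makes statements like "no greedy-like algorithm approximates this problem well" formally meaningful.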