Abstracts for COGA-Seminar SS2013

Torsten Gellert: Scheduling Multiple Cranes on a Shared Pathway

In many logistics applications, transport requests are conducted in parallel by several vehicles moving along a fixed shared pathway. Examples include cranes mounted on a common rail, like gantry cranes loading and unloading containers in intermodal transportation, or forklifts moving along a narrow passageway in large warehouses. Such systems of rail-mounted vehicles play a key role in various logistic questions, and the efficiency of their operation frequently has a significant impact on the overall performance of the surrounding production environment.

We introduce a model for this type of transport request. Each transport consists of an origin-destination pair in the two-dimensional plane. Executing the transports amounts to finding a tour for every vehicle, where the vehicles are restricted to remain in their initial sequence w.r.t. the x-axis---a constraint enforced in practice by the rail. The goal is to minimize the makespan for a given set of transport requests.

Since it is easy to see that the problem is strongly NP-hard in general, we examine which additional knowledge would make it polynomially solvable. Throughout the talk, we keep an eye on the differences to the one-dimensional version of the problem. Furthermore, we present some approximation algorithms.

Julie Meißner: An Aggregated Time-Space Network for Airline Crew Scheduling

The airline crew scheduling problem asks to assign the flights of a planning period to crew members. Each flight has a fixed start and end time and changes the location of the crew member assigned to it. Each crew member's work schedule has to respect daily and weekly working time bounds and allow sufficient rest periods.

We model the flight set as a time-space network and use a mix of expansion and generation strategies to include the constraints on work schedules. This network flow model allows us to find a crew schedule by computing a minimum cost flow respecting an additional set of cover constraints.

We will describe the model construction, present theoretical results about the model, and report a computational study comparing our solutions to historic crew schedules.

Laura Vargas Koch: The Influence of Combinatorial Structure on Nash Equilibria in Congestion Games

Traffic and communication are two important areas of our lives. In both, we interact with many other users without being controlled from outside. Congestion games are an interesting concept for modeling such complex systems. When studying congestion games, one is interested in the existence of stable states, since the real-world systems they model usually settle into stable states. It is desirable that the players reach a stable state through a realistic course of play.

Nash equilibria are one concept of stable states. In my bachelor's thesis, I collected results on the existence of Nash equilibria, on convergence towards Nash equilibria, and on the computability of Nash equilibria in (unweighted) congestion games as well as in weighted and player-specific congestion games. The properties of the congestion games were examined in dependence on the strategy spaces. The two main sources were the articles "On the Impact of Combinatorial Structure on Congestion Games" and "Pure Nash Equilibria in Player-Specific and Weighted Congestion Games" by Ackermann, Röglin and Vöcking. In my talk, I will give an overview of the considered results and present one selected theorem in more detail.
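A classical background fact for these existence results (Rosenthal's theorem, standard in the literature and presumably among the collected results): every unweighted congestion game admits a pure Nash equilibrium because it is an exact potential game with potential

```latex
\Phi(s) \;=\; \sum_{r \in R} \sum_{i=1}^{n_r(s)} c_r(i),
```

where $n_r(s)$ is the number of players using resource $r$ in profile $s$ and $c_r$ its cost function; every unilateral improving move strictly decreases $\Phi$, so better-response dynamics converge to a pure Nash equilibrium.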

Myriam v. Mirbach: The Price of Anarchy in Congestion Games

In game theory, the price of anarchy is the ratio between the total cost of all players in an equilibrium and the total cost in the social optimum. In his paper "Intrinsic Robustness of the Price of Anarchy", Roughgarden developed the smoothness argument, which yields an upper bound on the price of anarchy within a class of games. It applies not only to pure Nash equilibria but also to further equilibrium concepts. If the class of games contains a game whose price of anarchy with respect to a pure Nash equilibrium attains the upper bound of the smoothness argument, the class is called tight.
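For reference, the formal shape of the smoothness argument (standard notation from Roughgarden's paper, not spelled out in the abstract): a cost-minimization game is called $(\lambda,\mu)$-smooth if for all strategy profiles $s, s^*$

```latex
\sum_{i} C_i(s_i^*, s_{-i}) \;\le\; \lambda\, C(s^*) + \mu\, C(s),
\qquad \lambda > 0,\; 0 \le \mu < 1,
```

and in every $(\lambda,\mu)$-smooth game the price of anarchy is bounded by $\lambda/(1-\mu)$, a bound that extends beyond pure Nash equilibria to mixed, correlated, and coarse correlated equilibria.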

In my bachelor's thesis, which I will present, I studied Roughgarden's results. Furthermore, I considered congestion games and showed, for several subclasses, that they are tight, using game constructions following "Weighted Congestion Games: The Price of Anarchy, Universal Worst-Case Examples, and Tightness" by Bhawalkar, Gairing and Roughgarden.

Jeanette Schnake: Two-source Unsplittable Flow

The unsplittable flow problem is a multicommodity flow problem in which the task is to find a demand-satisfying flow that routes each commodity along a single path from its source to its sink. Edge capacities are given that must not be exceeded.

As a special case, one can consider instances in which all commodities share the same source. Even this special case of the problem is NP-complete. In the talk, I will present a 3-approximation algorithm for this single-source unsplittable flow problem by means of an example and give an idea of how to extend it to the case of two sources.

Finally, I will summarize the known results for the single-source case and discuss my goals.

Stefan König: Approximation in Geometric Optimization Problems via Core Sets

The minimum enclosing ball problem (MEB), where a finite set of points is to be covered by a Euclidean ball of minimal radius, is probably one of the most studied geometric optimization problems. This talk is concerned with the class of optimization problems where both the finite set of points and the Euclidean ball are allowed to be replaced by arbitrary convex bodies.

MEB and its relatives frequently arise in shape fitting or facility location problems, often as subproblems of the more complex k-center problem, where a point set is to be covered by the union of k balls. Since the latter problem is NP-hard, the concept of core sets has proved very powerful for deriving a PTAS for k-center. The key ingredient is that, for solving the MEB problem, it suffices to know a "very small" subset of the input points, the core set.

In order to give tight bounds on the size of these core sets for MEB and in the general case, we connect the problems to results in classic convex geometry. We give sharp inequalities relating certain radii of convex bodies which are then used to derive sharp upper bounds on the size of core sets. In the MEB case, this yields a tight (dimension independent) bound. In the general case, we show that there are core sets of size linear in the dimension and that this bound is best possible even when restricted to normed spaces. This is joint work with René Brandenberg.
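For illustration, a minimal sketch of the classical core-set iteration for MEB in the point-set case (the Bădoiu–Clarkson scheme; the general convex-body setting of the talk is not covered by this toy code):

```python
import numpy as np

def approx_meb(points, eps=0.05):
    """Approximate the minimum enclosing ball of `points` (n x d array).

    Badoiu-Clarkson core-set iteration: repeatedly move the center a
    shrinking step toward the farthest point.  After O(1/eps^2) steps,
    the ball around the current center with radius equal to the maximum
    distance is a (1+eps)-approximation of the MEB, and the farthest
    points visited form a core set whose size is independent of d.
    """
    c = points[0].astype(float)             # start at an arbitrary input point
    iterations = int(np.ceil(1.0 / eps**2))
    core = set()
    for i in range(1, iterations + 1):
        dists = np.linalg.norm(points - c, axis=1)
        far = int(np.argmax(dists))         # current farthest point
        core.add(far)
        c = c + (points[far] - c) / (i + 1)  # step size 1/(i+1)
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius, sorted(core)
```

For points sampled from the unit circle, the returned center converges toward the origin and the radius toward 1, matching the dimension-independent guarantee mentioned in the abstract.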

Andreas Schütz: Congestion Games with Multi-Dimensional Demands

Weighted congestion games are a significant and extensively studied class of strategic games, in which the players compete for subsets of shared resources in order to minimize their private costs. Each player assigns a strictly positive demand to the resources of its selected strategy where the cost function of each resource depends on the cumulated demands of all players sharing it. A typical application is routing in computer networks, in which each player sends an amount of data through a network and the delay of each arc depends on the total amount of data that is sent through it. However, what if the delay of each arc additionally depends on the number of its users or the number of files that are sent through it?

To model such situations, we generalize weighted congestion games by introducing a new class of congestion games with multi-dimensional demands. For a given integer k > 1, each player has a positive k-dimensional demand vector, and the cost function of each resource is a function of k variables. While previous research focused on weighted congestion games with 1-dimensional demands, we investigate the equilibrium existence problem in congestion games with multi-dimensional demands. For games with arbitrary strategy spaces, we give a complete characterization of all types of resource cost functions that guarantee the existence of a pure Nash equilibrium. Furthermore, we discuss the equilibrium existence problem in congestion games with multi-dimensional demands and restricted strategy spaces.

Giacomo Lanza: A new Algorithm for Stable Allocation in Two-Sided Markets without a Central Authority of Control

As defined by Gale and Shapley in 1962, a two-sided market consists of two disjoint groups of agents. They introduced this notion to model matching problems such as college admissions. In the simplest version of the problem, a two-sided market is given in which each agent has strict preferences over the agents on the other side and can be matched to one of them. Since then, stability problems have been widely studied and extended in several directions.

Among these extensions, the most widely studied is the stable allocation problem. In this presentation, a new algorithm for finding stable allocations in uncoordinated two-sided markets will be introduced, i.e., two-sided markets in which there is no central authority whose task is to match agents so as to produce a stable matching. The agents are instead free to play selfish strategies while trying to reach some match. Our algorithm has to find a feasible and stable allocation starting from an arbitrary one. Analyses describing the characteristics and the performance of the algorithm will also be presented.

Kevin Schewior: Routing Games with Progressive Filling

Max-min fairness (MMF) is a widely known approach to a fair allocation of bandwidth to each of the users in a network. This allocation can be computed by uniformly raising the bandwidths of all users without violating capacity constraints. We consider an extension of these allocations by raising the bandwidths with arbitrary, not necessarily uniform, time-dependent velocities (allocation rates). These allocations are used in a game-theoretic context for routing choices, which we formalize in progressive filling games (PFGs).

We present a comprehensive analysis of equilibria in PFGs. We show that these games possess pure Nash and strong equilibria. While computation is NP-hard in general, there are polynomial-time algorithms for prominent classes of Max-Min-Fair Games (MMFGs), including the case where all users have the same source-destination pair. We characterize the prices of anarchy and stability for pure Nash and strong equilibria in PFGs and MMFGs when players have different or identical source-destination pairs.
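For intuition about the uniform special case, a minimal sketch (my illustration, not the authors' formulation) of computing the classical max-min fair allocation by progressive filling: all unfrozen users raise their rate at the same speed, and whenever a link saturates, every user crossing it is frozen at the current rate.

```python
def max_min_fair(paths, capacity):
    """Max-min fair bandwidth allocation by uniform progressive filling.

    paths:    dict user -> set of links on that user's route
    capacity: dict link -> link capacity
    """
    rate = {u: 0.0 for u in paths}
    residual = dict(capacity)
    active = set(paths)
    while active:
        # smallest uniform rate increase that saturates some link
        step = None
        for link in residual:
            users = [u for u in active if link in paths[u]]
            if users:
                s = residual[link] / len(users)
                if step is None or s < step:
                    step = s
        if step is None:          # remaining users use no capacitated link
            break
        for u in active:
            rate[u] += step
        saturated = set()
        for link in residual:
            users = [u for u in active if link in paths[u]]
            if users:
                residual[link] -= step * len(users)
                if residual[link] <= 1e-12:
                    saturated.add(link)
        # freeze every user that crosses a saturated link
        active = {u for u in active if not (paths[u] & saturated)}
    return rate
```

Each round saturates at least one link and freezes its users, so the loop terminates after at most as many rounds as there are links.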

Max Klimm: Congestion Games, Network Design, and Liverpool

In my talk, I will present and discuss some of the main results obtained during the bilateral cooperation with Martin Gairing (University of Liverpool). First, I will present a characterization of the existence of pure Nash equilibria in congestion games with player-specific cost functions. Then, I will show how to buy capacities on the edges of a network so as to minimize the weighted sum of the costs of installing the capacities and the total latency cost of the resulting equilibrium.

Prof. Patvardhan: Quantum-inspired Evolutionary Algorithms

Quantum computing is the product of two of the most significant advances in the history of science: the theory of quantum mechanics, which describes the universe at the smallest scale with great accuracy, and the theory of computation, which has resulted in digital computers with exponentially increasing power and many efficient algorithms that are capable of solving a wide range of interesting problems.

Evolutionary Algorithms have been inspired by nature and have proven useful for a wide range of real-world engineering problems. However, problems of slow or premature convergence remain and have to be tackled with an implementation suited to the particular optimization problem at hand.

Quantum-inspired Evolutionary Algorithms (QEAs) are a recent branch of Evolutionary Algorithms (EAs). A QEA is a population-based probabilistic Evolutionary Algorithm that integrates concepts from quantum computing for higher representational power and robust search. It maintains a population of individuals encoded in quantum bits, or qubits. A qubit-coded individual can probabilistically represent a linear superposition of states in the search space and maintains population diversity better than other representations.

The primary advantage of QEA is that it provides the overall framework that can then be tailored for the individual application at hand. The broad applicability of QEAs is clear from the fact that these ideas have been applied with advantage for a variety of problems including continuous functions optimization as well as combinatorial optimization.
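A toy sketch of the qubit-probability representation on the OneMax problem (an illustrative simplification with a probability vector standing in for the qubit amplitudes and rotation gate, not the authors' algorithm):

```python
import random

def qea_onemax(n=20, pop=5, generations=150, delta=0.02, seed=1):
    """Toy quantum-inspired EA maximizing OneMax (the number of ones).

    Each individual is a vector of "qubit" probabilities p[i] = Pr(bit i = 1),
    a simplified stand-in for the amplitude representation in a QEA.
    Observing an individual samples a classical bitstring; a rotation-like
    update then nudges each probability toward the best solution seen so far.
    """
    rng = random.Random(seed)
    population = [[0.5] * n for _ in range(pop)]  # balanced "superposition"
    best, best_fit = None, -1
    for _ in range(generations):
        for p in population:
            x = [1 if rng.random() < pi else 0 for pi in p]  # "observe"
            fit = sum(x)
            if fit > best_fit:
                best, best_fit = x, fit
            for i in range(n):            # rotation-gate analogue
                p[i] += delta if best[i] == 1 else -delta
                p[i] = min(1.0 - 1e-3, max(1e-3, p[i]))  # keep diversity
    return best, best_fit
```

The clamping away from 0 and 1 is one simple device against the premature convergence mentioned above: no bit ever becomes deterministic.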

The talk will introduce QEAs and present some of our recent work on QEAs and their application to various problems, such as the discrepancy of hypergraph colorings, knapsack problems, and parameter optimization.

Vasantha Lakshmi: Optical Character Recognition of printed text: Challenges and Case studies

The process of converting a document image into an editable text format is known as Optical Character Recognition (OCR). Text in image form is not editable. An OCR system enables us to manipulate printed or handwritten text by importing it into a computer with minimal effort and time. Even though the Roman script is quite simple, with a limited character set, the OCR problem is far from being completely solved.

Indic scripts are varied and complicated wherein modifiers are attached to the base characters to generate a huge number of ligatures that need to be recognized. This makes the OCR problem more difficult. Multiple scripts, fonts, sizes, document layouts, text orientation, noise and backgrounds in text add further complications in meeting the twin requirements of high recognition accuracy and fast recognition.

In this talk we present the OCR problem and identify the various challenges, particularly in the context of Indic scripts. Two important sub-problems are document background removal and the selection of an optimal set of features for recognition. We present some work done by our group at Dayalbagh Educational Institute using wavelets for background removal and evolutionary algorithms for the selection of optimal features. Work on the OCR of two scripts, Telugu and Urdu, is also presented. The two scripts are very different, and the case studies serve to highlight the variety of issues that arise in OCR.

Christian Döblin: Comparison of Two Algorithms for the Pickup and Delivery Problem with Time Windows

In the pickup and delivery problem with time windows, a set of transport requests is given, where each request consists of picking up goods at one customer and delivering them to another customer. Each customer can only be visited within its time window. The goal is to serve all requests with a given vehicle fleet based at a depot such that the total distance traveled is as small as possible. Besides the time windows, capacity and precedence constraints must also be respected.

We consider a heuristic by Ropke and Pisinger as well as an exact method by Ropke, Cordeau and Laporte for this problem, and compare both methods with respect to running time and solution quality on downsized real-world instances.
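To make the constraints concrete, a minimal feasibility check for a single vehicle route (an illustrative sketch with a hypothetical data layout, not code from the compared papers):

```python
def route_feasible(route, requests, capacity, travel, windows):
    """Feasibility check for one vehicle route in the PDPTW.

    route:    stop sequence, starting at the depot
    requests: dict request id -> (pickup_stop, delivery_stop, load)
    travel:   dict (stop, stop) -> driving time
    windows:  dict stop -> (earliest, latest) service start time
    Verifies time windows (waiting until a window opens is allowed),
    vehicle capacity along the route, and pickup-before-delivery.
    """
    load, time, picked = 0.0, 0.0, set()
    for prev, stop in zip(route, route[1:]):
        time = max(time + travel[(prev, stop)], windows[stop][0])
        if time > windows[stop][1]:
            return False                 # time window missed
        for req, (pick, drop, q) in requests.items():
            if stop == pick:
                picked.add(req)
                load += q
            elif stop == drop:
                if req not in picked:
                    return False         # delivery before pickup
                load -= q
        if load > capacity + 1e-9:
            return False                 # capacity exceeded
    return True
```

Both compared methods must, in one way or another, enforce exactly these three constraint families on every route they construct.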

Philipp von Falkenhausen: Quantitative Comparative Statics for a Multimarket Paradox

Comparative statics is a well established research field where one analyzes how changes in parameters of a strategic game affect the resulting equilibria. Examples of such parameter changes include tax or subsidy changes in oligopoly models or trade changes. While classic comparative statics is mainly concerned with qualitative approaches (e.g., deciding whether a parameter change improves or hurts equilibrium profits or welfare), we aim at quantifying this effect.

We consider the famous multimarket oligopoly model introduced by Bulow, Geanakoplos and Klemperer. In this model, there are two firms competing on two markets with one firm having a monopoly on one market. Bulow et al. describe the counterintuitive example of a positive price shock in the firm’s monopoly market resulting in a reduction of the firm’s new equilibrium profit. We quantify for the first time the worst-case profit reduction for the case of two markets with affine price functions and firms with convex cost technologies. We show that the relative loss of the monopoly player is at most 25% no matter how many firms compete on the second market. In particular we show for the setting of Bulow et al. involving affine price functions and only one additional firm on the second market that the worst case loss in profit is bounded by 6.25%. We further investigate a dual effect: How much can a firm gain from a negative price shock in its monopoly market? Our results imply that this gain is at most 33%. We complement our bounds by concrete examples of markets where these bounds are attained.

Mario Meißner: A Practice-Oriented Study of Sorting Rules in Online Scheduling to Avoid Excessive Waiting Times

A Berlin brake manufacturer operates a measuring station for gauging various production parts. Time and again, more jobs arrive at the station than it can handle at once. The company wants to know in which order waiting parts should be measured so that no disproportionately long waiting times occur.

In my master's thesis, this task is modeled as a scheduling problem on identical parallel machines, where the jobs are weighted and have both a release date and a due date. The goal is to find sorting rules that minimize the total weighted squared tardiness.
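As an illustration of the kind of sorting rule studied (my sketch, not the thesis implementation): list scheduling on identical parallel machines by earliest due date, evaluated by the total weighted squared tardiness sum over jobs of w * max(0, C - d)^2.

```python
import heapq

def edd_squared_tardiness(jobs, machines):
    """List scheduling by earliest due date on identical parallel machines.

    jobs: list of (release, processing, due, weight) tuples.
    Whenever a machine becomes idle, it starts the released job with the
    earliest due date; if no job is available, it waits for the next
    release.  Returns the total weighted squared tardiness.
    """
    by_release = sorted(jobs)                # sorted by release date
    free = [0.0] * machines                  # next idle time per machine
    heapq.heapify(free)
    ready = []                               # heap of (due, release, proc, weight)
    total, i = 0.0, 0
    while i < len(by_release) or ready:
        t = heapq.heappop(free)              # earliest idle machine
        while i < len(by_release) and by_release[i][0] <= t:
            r, p, d, w = by_release[i]
            heapq.heappush(ready, (d, r, p, w))
            i += 1
        if not ready:                        # machine idle: jump to next release
            t = by_release[i][0]
            while i < len(by_release) and by_release[i][0] <= t:
                r, p, d, w = by_release[i]
                heapq.heappush(ready, (d, r, p, w))
                i += 1
        d, r, p, w = heapq.heappop(ready)    # earliest due date first
        completion = t + p
        total += w * max(0.0, completion - d) ** 2
        heapq.heappush(free, completion)
    return total
```

Swapping the `ready`-heap key (e.g. to weight over processing time) yields other classical sorting rules for the same evaluation pipeline.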

Robert Schweitzer: Cost-Sharing Functions for Shared Resources

In the cost-sharing model, several parties are assumed to share a resource. The players act through their demands and try to maximize their individual profits, computed as the difference between utility and cost. The central observation in this model is that the total profit is not automatically maximized when every player maximizes his individual profit.

Previous investigations of this model were restricted to individual cost-sharing functions, for which the respective efficiency values were computed for various cost functions and numbers of players. What remained open was the question of a general lower bound on the efficiency for arbitrary cost functions, numbers of players n, and utility profiles, given a free choice of the cost-sharing method. Several approaches to this problem are considered, which in the end allow the value 1/n to be established as a lower bound. Furthermore, it is shown that no everywhere-dominant cost-sharing function can exist.
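One way to formalize the model described above (notation mine; the thesis may use different symbols): players choose demands $x_i$, a cost-sharing method $\xi$ splits the joint cost $C$ of the shared resource, and player $i$ maximizes

```latex
\pi_i(x) = u_i(x_i) - \xi_i(x),
\qquad \sum_{i=1}^{n} \xi_i(x) = C\Big(\sum_{i=1}^{n} x_i\Big),
```

while the efficiency of an equilibrium compares its total welfare $\sum_i u_i(x_i) - C\big(\sum_i x_i\big)$ to the optimal total welfare; the result above says that a suitable cost-sharing method guarantees at least the fraction $1/n$ of the optimum.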

Lukasz Jez: Online Knapsack Revisited (joint work with M. Cygan)

In a natural online variant of the (Max) Knapsack Problem, we are given one item after another and after each item have to irrevocably decide whether we place it in the knapsack or not, without knowing the future items.

Formulated this way, the problem admits no constant-competitive algorithm. To get around this, several approaches, including stochastic ones, have been proposed. We focus on one by Iwama et al., which allows the algorithm to remove items from the knapsack (and lose them forever).

The setting is as follows: an algorithm is to pack items, of arbitrary sizes and profits, in k knapsacks (bins) without exceeding the capacity of any bin. We study two objective functions: the sum and the maximum of profits over all bins. The first generalizes the dual bin packing problem, whereas the second is the one (partially) studied by Iwama et al.

We consider two variants, depending on whether the algorithm is allowed to remove items (forever) from its bins or not, and two special cases where the profit of an item is a function of its size, in addition to the general setting. We study both deterministic and randomized algorithms; for the latter, we consider both the oblivious and the adaptive adversary model. We classify each variant as either admitting O(1)-competitive algorithms or not. Surprisingly, we develop simple O(1)-competitive algorithms for some cases of the max-objective variant that were believed to be infeasible, because only 1-bin deterministic algorithms had been considered for them before.
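As a minimal illustration of the removable-items model (a natural density-greedy sketch of my own, not one of the algorithms from the paper):

```python
def online_knapsack_removable(items, capacity):
    """Process (size, profit) items online; removal from the knapsack allowed.

    Density-greedy sketch: tentatively accept each arriving item, then,
    while over capacity, permanently discard the packed item with the
    lowest profit/size ratio (possibly the new item itself).
    Sizes are assumed strictly positive.
    """
    packed, used = [], 0.0
    for size, profit in items:
        if size > capacity:
            continue                     # can never fit in any packing
        packed.append((size, profit))
        used += size
        while used > capacity:
            worst = min(packed, key=lambda it: it[1] / it[0])
            packed.remove(worst)         # removed items are lost forever
            used -= worst[0]
    return packed, sum(p for _, p in packed)
```

The point of the model is visible already here: without removal, the algorithm would be stuck with early low-density items, whereas removal lets it trade them for denser ones later.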
