Lecture-wise summaries will be added here.
Room 327, Mon 9:30 AM and Tue 9:30 AM (Tentative) (IMSc)
Kernelization is a vibrant and rapidly developing sub-area of parameterized complexity. This course on kernelization aims to consolidate the results achieved in the area over the last two decades.
Many important optimization problems arising in applications are intractable (e.g., NP-hard), i.e., algorithms that solve them optimally are expected to require a prohibitive amount of time in the worst case. Much of algorithms research is directed toward designing and analysing methods that cope with this intractability, e.g., using heuristics and approximation, looking at special cases, etc. In this course we look at another approach, one that appears in almost every practical implementation aimed at such NP-hard problems, namely preprocessing. Using relatively fast (say, polynomial-time) data reduction steps that keep the answer to the problem invariant but reduce the size of the instance, we aim to build a (hopefully) smaller but equivalent instance. A slower exact algorithm can then be run on this smaller instance. In this course we study the mathematical analysis of such preprocessing algorithms, now termed kernelization algorithms. The history of preprocessing, such as applying reduction rules to simplify truth functions, can be traced back to the origins of Computer Science - the 1950s work of Quine, and much more. A modern example showing the striking power of efficient preprocessing is the commercial integer linear program solver CPLEX. The goal of a preprocessing subroutine is to solve the ``easy parts'' of a problem instance efficiently and shrink it to its computationally difficult ``core'' structure (the problem kernel of the instance).
How can we measure the efficiency of such a kernelization subroutine? For a long time, polynomial-time preprocessing algorithms were not subjected to mathematical analysis. The basic reason was that if we start with an instance I of an NP-hard problem and ask for an efficient (polynomial-time) subroutine that is guaranteed to replace I with an equivalent instance I' of strictly smaller size, then success would imply P = NP - which discouraged efforts in this research direction from a mathematical point of view. The situation regarding the systematic, mathematically sophisticated investigation of preprocessing subroutines changed drastically with the advent of parameterized complexity, where the issues are naturally framed. More specifically, we ask for upper bounds on the size of the reduced instance as a function of a parameter of the input, assuming a polynomial-time reduction/preprocessing algorithm. A typical example is the famous Nemhauser-Trotter kernel for the Vertex Cover problem, showing that a "kernel" with at most 2k vertices can be obtained, where k is the requested maximum size of a solution. A large number of results have been obtained in recent years, and research in this area shows rapid growth, not only in terms of the number of papers appearing in top Theoretical Computer Science and Algorithms conferences and journals, but also in terms of techniques. Important recent developments are new lower bound techniques, showing (under complexity-theoretic assumptions) that certain problems cannot have kernels below certain sizes, and meta-results showing that large classes of problems all have small (e.g., linear) kernels --- these include a large collection of problems on planar graphs. In this course, the recent and not-so-recent developments in the area of Kernelization will be discussed.
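To make the notion of a reduction rule concrete, here is a minimal sketch (in Python) of the classical Buss kernelization for Vertex Cover. It gives a weaker bound than the Nemhauser-Trotter 2k-vertex kernel mentioned above, but already illustrates how a few polynomial-time rules shrink an instance to a size depending only on k. The adjacency-dictionary representation and the function name buss_kernel are illustrative choices, not part of any fixed library.

    def buss_kernel(adj, k):
        """A sketch of Buss's reduction rules for Vertex Cover.
        Input: graph as a dict {vertex: set of neighbours} and budget k.
        Output: an equivalent reduced instance (adj', k'), or None if the
        instance can already be rejected.
        Rule 1: a vertex of degree > k must be in every cover of size <= k;
                delete it and decrease k by one.
        Rule 2: an isolated vertex is never needed; delete it."""
        adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
        changed = True
        while changed:
            changed = False
            for v in list(adj):
                if len(adj[v]) > k:          # Rule 1: high-degree vertex
                    for u in adj[v]:
                        adj[u].discard(v)
                    del adj[v]
                    k -= 1
                    changed = True
                    if k < 0:                # more forced vertices than budget
                        return None
                elif not adj[v]:             # Rule 2: isolated vertex
                    del adj[v]
                    changed = True
        edges = sum(len(nbrs) for nbrs in adj.values()) // 2
        if edges > k * k:                    # every remaining vertex has degree <= k,
            return None                      # so a size-k cover covers at most k^2 edges
        return adj, k

Applied exhaustively, these rules run in polynomial time and either answer ``no'' outright or return an equivalent instance with at most k^2 edges and, since isolated vertices have been removed, at most 2k^2 vertices, independent of the size of the original graph.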
This is the official home page of a course on Kernelization, held at the Institute of Mathematical Sciences and taught by Saket Saurabh. It is intended as an introduction to the emerging field of kernelization. Despite being a first course, it aims to cover considerable ground. In particular, the course will quickly leave the textbook phase to cover recent work and breakthrough techniques that have arisen in the last few years. Some familiarity with algorithms and classical complexity is assumed. Some comfort with rigorous mathematical writing will be useful.