Program

(All talks will be in Alladi Ramakrishnan Hall.)

Monday, Jan 5, 2015
10:00 - 10:10 Opening remarks
10:10 - 11:00 Neeraj Kayal A survey of recent results on depth reduction and lower bounds in arithmetic complexity.
11:00 - 11:30 Coffee Break
11:30 - 11:55 Srikanth Srinivasan Lower bounds for non-commutative skew circuits.
12:00 - 12:25 Nutan Limaye The shifted partial derivative complexity of elementary symmetric polynomials.
12:30 - 14:00 Lunch
14:00 - 14:50 Chandan Saha Lower bounds for small depth arithmetic circuits.
14:50 - 15:30 Coffee Break
15:30 - 16:00 Sagnik Mukhopadhyay Tribes is hard in the message passing model
16:00 - 16:30 Swagato Sanyal Fourier dimension and sparsity
16:30 - 17:00 Coffee



Tuesday, Jan 6, 2015
10:00 - 10:50 Leslie Goldberg Phase transitions and the complexity of approximating partition functions.
10:50 - 11:30 Coffee Break
11:30 - 12:25 Stanislav Zivny The complexity of Valued CSPs.
12:30 - 14:00 Lunch
14:00 - 14:50 Discussion
14:50 - 15:30 Coffee Break
15:30 - 16:20 Iain Stewart A survey of logical reductions between problems.
16:30 - 17:30 Open Problems Session
17:30 - 18:00 Coffee
19:00 - 21:30 Workshop Dinner



Wednesday, Jan 7, 2015
10:00 - 10:50 Kousha Etessami Fixed Point Computation Problems for Algebraically-Defined Functions, and their Computational Complexity.
10:50 - 11:30 Coffee Break
11:30 - 12:20 Samir Datta The Dynamic Complexity of Reachability and related problems.
12:30 - 12:55 Rahul Savani The Complexity of the Simplex Method.
13:00 - 14:00 Lunch



Thursday, Jan 8, 2015
10:00 - 10:50 Olaf Beyersdorff Proof complexity.
10:50 - 11:30 Coffee Break
11:30 - 11:55 Leroy Chew Proof complexity of resolution-based QBF calculi.
12:00 - 12:25 Ruiwen Chen Satisfiability algorithms and lower bounds for boolean formulas.
12:30 - 14:00 Lunch
14:00 - 14:50 Anuj Dawar Symmetric Circuits.
14:50 - 15:30 Coffee Break
15:30 - 16:00 Balagopal Komarath Comparator Circuits over Lattices
16:00 - 16:30 Arpita Korwar Polynomial Identity Testing for a sum of ROABPs
16:30 - 17:00 Coffee



Friday, Jan 9, 2015
10:00 - 10:50 Prahladh Harsha Role of composition in PCP constructions.
10:50 - 11:30 Coffee Break
11:30 - 12:00 Arkadev Chattopadhyay The power of super-logarithmic number of players.
12:10 - 12:55 Jayalal Sarma Weighing Schemes and NL vs UL problem.
12:55 - 13:00 Closing Remarks
13:00 Lunch



Some of the open slots will be dynamically allotted for talks by graduate students.


Talk Titles and Abstracts

Olaf Beyersdorff, Univ. of Leeds

Proof complexity


Arkadev Chattopadhyay, TIFR

The power of super-logarithmic number of players.

In the `Number-on-Forehead' (NOF) model of multiparty communication, the input is a k times m boolean matrix A (where k is the number of players), Player i sees all bits except those in the i-th row, and the players communicate by broadcast in order to evaluate a specified function f at A. We discover new computational power when k exceeds log m. We give a protocol with communication cost poly-logarithmic in m for block-composed functions with limited block width. These are functions of the form f o g, where f is a symmetric b-variate function, g is a (kr)-variate function, and (f o g)(A) is defined, for a k times br matrix A, to be f(g(A^1),\ldots,g(A^b)), where A^i is the i-th k times r block of A. Our protocol works provided that k > 1 + \ln b + 2^r. Ada et al. previously obtained *simultaneous* and deterministic efficient protocols for composed functions of block-width r=1. The new protocol is the first to work for block-composed functions with r>1. Moreover, it is simultaneous, with vanishingly small error probability, if public-coin randomness is allowed. The deterministic and zero-error version barely uses interaction.

This is joint work with Michael Saks.
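The block composition (f o g)(A) above can be made concrete with a small sketch. The particular choices f = parity (a symmetric function) and g = AND of all k*r bits are illustrative, not taken from the paper:

```python
def g_and(block):
    """Inner function g: AND of all k*r bits of one k x r block."""
    return int(all(bit == 1 for row in block for bit in row))

def f_parity(bits):
    """Outer symmetric function f: parity of its b inputs."""
    return sum(bits) % 2

def block_compose(A, b, r):
    """(f o g)(A) for a k x (b*r) boolean matrix A: apply g to each
    k x r block A^i, then f to the b results."""
    blocks = [[row[i * r:(i + 1) * r] for row in A] for i in range(b)]
    return f_parity([g_and(blk) for blk in blocks])

A = [[1, 1, 0, 1],
     [1, 0, 1, 1]]  # k = 2 players, b = 2 blocks of width r = 2
print(block_compose(A, 2, 2))  # g(A^1) = 0, g(A^2) = 0, so parity gives 0
```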


Ruiwen Chen, Univ. of Edinburgh

Satisfiability algorithms and lower bounds for boolean formulas.

The random restriction approach was initially used to prove formula size lower bounds, where it was shown that formulas shrink on average under random restrictions. In several recent works, this approach was revived and improved to a concentrated version, where formulas shrink with high probability. This concentrated shrinkage property leads to nontrivial satisfiability algorithms, average-case lower bounds, and pseudorandom generators. In this talk, we will survey techniques for proving concentrated shrinkage and its applications in designing algorithms and proving lower bounds.


Leroy Chew, Univ. of Leeds

Proof complexity of resolution-based QBF calculi


Samir Datta, CMI

The Dynamic Complexity of Reachability and related problems


Anuj Dawar, Univ. of Cambridge

Symmetric Circuits

We are interested in Boolean circuits that decide properties of graphs (or similar relational structures). In particular, the output of the circuit is invariant under permutations of the inputs corresponding to permutations of the vertices. We consider symmetric circuits - i.e. those where this invariance is witnessed by automorphisms of the circuit - and show a close connection between these and definability in logic. This provides a means of establishing lower bounds for symmetric circuits and, e.g., separating Boolean symmetric circuits from those with threshold gates.

This is joint work with Matthew Anderson.


Kousha Etessami, Univ. of Edinburgh

Fixed Point Computation Problems for Algebraically-Defined Functions, and their Computational Complexity

A variety of fundamental computational problems in many fields (in game theory, economics, stochastic processes, verification, and other areas), which on the surface may look quite different, can all be boiled down to computing or approximating a fixed point for an algebraically-defined function which maps a subset of R^n to itself, and which is specified by an algebraic circuit (or formula) using standard gates from among { + , *, - , / , max , min } and using rational coefficients and constants.
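As a toy illustration of such an algebraically defined function (the particular map and all names are invented for this sketch), here is a map on [0,1]^2 built from {+, *, max} gates with rational constants, together with naive iteration. The iteration converges here only because this specific map happens to be a contraction (so Banach's theorem applies); that is not the case in general:

```python
def F(x, y):
    """An algebraically defined map [0,1]^2 -> [0,1]^2 using only
    {+, *, max} gates and rational constants."""
    return (x * y / 2 + 1 / 4, max(x, y) / 2)

def iterate_to_fixed_point(F, x0, y0, tol=1e-12, max_iters=200):
    """Naive fixed-point iteration; converges when F is a contraction."""
    x, y = x0, y0
    for _ in range(max_iters):
        nx, ny = F(x, y)
        if abs(nx - x) < tol and abs(ny - y) < tol:
            return nx, ny
        x, y = nx, ny
    return x, y

x, y = iterate_to_fixed_point(F, 0.0, 0.0)
print(x, y)  # close to (2 - sqrt(3), (2 - sqrt(3)) / 2)
```

Solving x = x*y/2 + 1/4 together with y = x/2 by hand gives x = 2 - sqrt(3), which the iteration approaches.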

The problems in question share the property that the *existence* of a fixed point is not in doubt: it is guaranteed by some classic fixed point theorem (e.g., Brouwer's, Banach's, Tarski's, etc.). But the proofs of existence do not in general yield an efficient way to find a fixed point. For many of these problems, we neither have efficient (P-time) algorithms, nor do we know them to be NP-hard. Indeed, total search complexity classes such as Papadimitriou's PPAD (= piecewise-linear-FIXP) and FIXP have been developed to characterize their complexity.

For some problems in FIXP, by now we know that any non-trivial approximation of an actual fixed point, even in NP, would resolve long-standing open problems in arithmetic-vs.-Turing complexity, in particular the PosSLP problem and the square-root-sum problem. For other such problems, by now we have P-time approximation algorithms, using a variety of methods (including variants of Newton's method combined with linear programming).

What makes such a fixed point problem "hard" or "easy"?

In this talk, I will survey a number of such problems, and I will delineate a taxonomy of the (relative) complexity of such problems, based on both their combinatorial and their numerical "difficulty", and based on the nature of their defining algebraic circuits.

This talk is based on joint works in a series of papers with Mihalis Yannakakis and with Alistair Stewart.


Leslie Goldberg, Univ. of Oxford

Phase transitions and the complexity of approximating partition functions

A "partition function" is a generating function for counting combinatorial objects. For example, the partition function of the so-called "hard-core lattice gas model" counts the weighted independent sets of a graph, where the weight of an independent set is an (exponential) function of its size. Recent results have shown deep connections between the complexity of approximating such partition functions and phase transitions which arise when the relevant models are viewed on infinite trees. I will tell you about some of the results that are known, focussing especially on the basic connections between phase transitions and inapproximability.
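For a concrete (if brute-force) view of the hard-core partition function described above, the following sketch sums lam^{|I|} over the independent sets I of a small graph; the function name and example graph are illustrative:

```python
from itertools import combinations

def hard_core_Z(n, edges, lam):
    """Partition function of the hard-core model: the sum over all
    independent sets I of lam**|I| (brute force, fine for small graphs)."""
    E = {frozenset(e) for e in edges}
    Z = 0.0
    for size in range(n + 1):
        for I in combinations(range(n), size):
            # I is independent iff no pair of its vertices is an edge
            if all(frozenset(p) not in E for p in combinations(I, 2)):
                Z += lam ** size
    return Z

# 4-cycle: independent sets are {}, 4 singletons, and 2 opposite pairs,
# so Z = 1 + 4*lam + 2*lam**2
print(hard_core_Z(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 1.0))  # -> 7.0
```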


Prahladh Harsha, TIFR

Role of composition in PCP constructions

Proof composition is an essential ingredient of all known constructions of probabilistically checkable proofs (PCPs), first introduced by Arora and Safra in the proof of the PCP Theorem. Informally speaking, proof composition is a recursive procedure applied to PCP constructions to reduce the alphabet size: it is applied (possibly several times over) to PCPs over a large alphabet to obtain PCPs over a small (even binary) alphabet. In the last decade, our understanding of proof composition has considerably improved.
(1) Composition of PCPs with high soundness error (greater than 1/2) is by now well understood using the notion of PCPs of proximity. These allow for modular composition, which in turn led to Dinur's alternate proof of the PCP Theorem and constructions of shorter PCPs.
(2) Composition of PCPs with low soundness error (less than 1/2) is obtained using the notion of decodable PCPs. These gave rise to PCPs with sub-constant error and nearly linear size, improving over parameters previously obtained via the parallel repetition theorem.
(3) Recently, alternate composition techniques have been proposed towards resolving the sliding scale conjecture.

In this talk, I'll survey these composition techniques and indicate how several improvements in PCP constructions have in fact arisen from a better understanding of proof composition.


Neeraj Kayal, MSR Bangalore

A survey of recent results on depth reduction and lower bounds in arithmetic complexity

Two basic questions in arithmetic complexity are the following:
(i) Can explicit polynomials be efficiently computed?
(ii) Can computation be efficiently parallelized?
We will first motivate these two questions and then survey some recent results that give some partial answers.

Nutan Limaye, IIT Bombay

The shifted partial derivative complexity of elementary symmetric polynomials.

The shifted partial derivative measure was introduced by Kayal (ECCC 2012) and has been used to prove many strong depth-4 circuit lower bounds starting from the works of Kayal and Gupta et al. (CCC 2013). We show a strong lower bound on the dimension of the shifted partial derivative space of the Elementary Symmetric Polynomials (under upper bounds on their degree). This extends the work of Nisan and Wigderson (Computational Complexity 1997), who studied the partial derivative space of these polynomials. Prior to our work, there were no results on the shifted partial derivative measure of these polynomials.

This is a joint work with Hervé Fournier, Meena Mahajan, and Srikanth Srinivasan.
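The shifted partial derivative measure can be computed by brute force on tiny examples. The sketch below is purely an illustration (not the authors' method): it forms all order-k partial derivatives of e_d, multiplies each by every degree-ell monomial, and returns the dimension of their span over the rationals:

```python
from fractions import Fraction
from itertools import combinations, combinations_with_replacement

def elem_sym(n, d):
    """Elementary symmetric polynomial e_d in x_0..x_{n-1}, stored as
    {exponent-tuple: coefficient}."""
    poly = {}
    for S in combinations(range(n), d):
        mono = tuple(1 if i in S else 0 for i in range(n))
        poly[mono] = poly.get(mono, Fraction(0)) + 1
    return poly

def partial(poly, i):
    """Formal partial derivative with respect to x_i."""
    out = {}
    for mono, c in poly.items():
        if mono[i] > 0:
            m = list(mono)
            e = m[i]
            m[i] -= 1
            m = tuple(m)
            out[m] = out.get(m, Fraction(0)) + c * e
    return out

def shift(poly, mono):
    """Multiply poly by the monomial with exponent vector mono."""
    return {tuple(a + b for a, b in zip(m, mono)): c for m, c in poly.items()}

def monomials(n, deg):
    """All degree-deg monomials in n variables, as exponent vectors."""
    for S in combinations_with_replacement(range(n), deg):
        m = [0] * n
        for i in S:
            m[i] += 1
        yield tuple(m)

def span_dim(polys):
    """Dimension of the span of the given polynomials over Q
    (Gaussian elimination with exact arithmetic)."""
    monos = sorted({m for p in polys for m in p})
    rows = [[Fraction(p.get(m, 0)) for m in monos] for p in polys]
    r = 0
    for col in range(len(monos)):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def shifted_partials_dim(poly, n, k, ell):
    """dim span { x^m * (order-k partial of poly) : deg(x^m) = ell }."""
    parts = [poly]
    for _ in range(k):
        parts = [partial(p, i) for p in parts for i in range(n)]
    shifted = [shift(p, m) for p in parts for m in monomials(n, ell)]
    return span_dim([p for p in shifted if p])

# e_2 in 3 variables: the first-order partials x1+x2, x0+x2, x0+x1
# are linearly independent, so the (unshifted) dimension is 3
print(shifted_partials_dim(elem_sym(3, 2), 3, 1, 0))  # -> 3
```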


Chandan Saha, IISc

Lower bounds for small depth arithmetic circuits

An approach to proving a super-polynomial lower bound for arithmetic circuits reduces the problem to proving `strong enough' lower bounds for small depth circuits, in particular (nonhomogeneous) depth-3 circuits and (homogeneous) depth-4 circuits. The depth of a circuit is the number of layers of gates in it.

In this talk, we plan to discuss an exponential lower bound for (homogeneous) depth-4 circuits that comes close to being `strong enough'. The techniques also yield exponential lower bounds for certain (nonhomogeneous) depth-3 circuits, in particular depth-3 circuits with low bottom fanin, which also answers a question posed by Shpilka and Wigderson.

Based on joint works with Neeraj Kayal, Srikanth Srinivasan and Nutan Limaye.


Jayalal Sarma, IIT Madras

Weighing Schemes and NL vs UL problem

For a graph G(V,E) with |V| = n, and a vertex s in V, a weighting scheme (w : E -> N) is called a min-unique (resp. max-unique) weighting scheme if, for any vertex v of the graph G, there is a unique path of minimum (resp. maximum) weight from s to v. Instead, if the number of paths of minimum (resp. maximum) weight is bounded by n^c for some constant c, then the weighting scheme is called a min-poly (resp. max-poly) weighting scheme.
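The min-unique condition can be checked on small DAGs with a standard DP that counts minimum-weight paths; the graph, weights, and function names below are illustrative:

```python
def min_weight_path_counts(n, weighted_edges, s):
    """DP over a DAG whose edges (u, v, w) all satisfy u < v (so sorting
    by source vertex gives a topological order): returns, for each vertex,
    the minimum path weight from s and the number of paths attaining it."""
    INF = float('inf')
    dist = [INF] * n
    count = [0] * n
    dist[s], count[s] = 0, 1
    for u, v, w in sorted(weighted_edges):
        if dist[u] == INF:
            continue  # u unreachable from s
        if dist[u] + w < dist[v]:
            dist[v], count[v] = dist[u] + w, count[u]
        elif dist[u] + w == dist[v]:
            count[v] += count[u]
    return dist, count

def is_min_unique(n, weighted_edges, s):
    """True iff every vertex has at most one minimum-weight path from s."""
    _, count = min_weight_path_counts(n, weighted_edges, s)
    return all(c <= 1 for c in count)

# Diamond DAG 0 -> {1, 2} -> 3: these weights give two min-weight paths to 3
print(is_min_unique(4, [(0, 1, 1), (0, 2, 2), (1, 3, 2), (2, 3, 1)], 0))  # -> False
```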

In this work, we show an unambiguous non-deterministic log-space (UL) algorithm for the problem of testing reachability in layered directed acyclic graphs (DAGs) augmented with a min-poly weighting scheme. This improves the result due to Allender and Reinhardt (2000) where a UL algorithm was given for the case when the weighting scheme is min-unique. At the other end of the spectrum, we propose a UL algorithm for testing reachability in layered DAGs augmented with max-poly weighting schemes. To achieve this, we first reduce reachability in DAGs to the longest path problem for DAGs with a unique source, such that the reduction also preserves the max-poly property of the graph. We then design a UL algorithm for the longest path problem on DAGs with a unique source and augmented with a max-poly weighting scheme.

An important consequence of our results is that, to show NL = UL, it suffices to design log-space computable min-poly (or max-poly) weighting schemes for DAGs.

This is joint work with Anant Dhayal (IITM) and Saurabh Sawlani (IITM).


Rahul Savani, Univ. of Liverpool

The Complexity of the Simplex Method

The simplex method is a well-studied and widely-used pivoting method for solving linear programs. When Dantzig originally formulated the simplex method, he gave a natural pivot rule that pivots into the basis a variable with the most violated reduced cost. In their seminal work, Klee and Minty showed that this pivot rule takes exponential time in the worst case. We prove two main results on the simplex method. Firstly, we show that it is PSPACE-complete to find the solution that is computed by the simplex method using Dantzig's pivot rule. Secondly, we prove that deciding whether Dantzig's rule ever chooses a specific variable to enter the basis is PSPACE-complete. We use the known connection between Markov decision processes (MDPs) and linear programming, and an equivalence between Dantzig's pivot rule and a natural variant of policy iteration for average-reward MDPs. We construct MDPs and show PSPACE-completeness results for single-switch policy iteration, which in turn imply our main results for the simplex method.
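As a concrete reference point for Dantzig's pivot rule, here is a minimal tableau simplex for LPs of the form max c.x subject to Ax <= b, x >= 0, with b >= 0 (so the all-slack basis is feasible). This is an illustrative sketch, not the construction from the paper:

```python
def simplex_dantzig(A, b, c):
    """Tableau simplex with Dantzig's pivot rule for
    max c.x subject to A x <= b, x >= 0, with b >= 0."""
    m, n = len(A), len(c)
    # Constraint rows: [A | I | b]; objective row: [-c | 0 | 0].
    T = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    T.append([-float(ci) for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))  # start from the all-slack basis
    while True:
        obj = T[-1]
        # Dantzig's rule: enter the variable with the most negative
        # (i.e. most violated) reduced cost.
        col = min(range(n + m), key=lambda j: obj[j])
        if obj[col] >= -1e-9:
            break  # optimal
        # Ratio test chooses the leaving row.
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, row = min(ratios)
        piv = T[row][col]
        T[row] = [x / piv for x in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col] != 0.0:
                f = T[i][col]
                T[i] = [x - f * y for x, y in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, v in enumerate(basis):
        if v < n:
            x[v] = T[i][-1]
    return x, T[-1][-1]

# max x0 + x1 subject to x0 <= 2, x1 <= 3, x0 + x1 <= 4
print(simplex_dantzig([[1, 0], [0, 1], [1, 1]], [2, 3, 4], [1, 1]))
```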

Joint work with John Fearnley.


Srikanth Srinivasan, IIT Bombay

Lower bounds for non-commutative skew circuits

We consider lower bounds for non-commutative arithmetic circuits, i.e. arithmetic circuits that compute polynomials in the restrictive model where variables do not commute. In contrast to the more popular commutative setting, we have strong explicit lower bounds for non-commutative formulas and, more generally, for non-commutative algebraic branching programs, due to Nisan (STOC 1991). However, proving general non-commutative circuit lower bounds remains open.

It has been observed that Nisan’s hard polynomial is in fact computable by linear sized "skew" circuits (skew circuits are circuits where every multiplication gate has the property that all but one of its children is an input variable or a scalar) and hence, proving lower bounds for such circuits is a natural question to now consider. We resolve this question by showing that any non-commutative skew circuit which computes the square of the polynomial defined by Nisan must have exponential size.

As a further step towards proving lower bounds for general non-commutative circuits, we also extend our techniques to prove a superpolynomial lower bound for any class of circuits that has a smaller than maximal "non-skew-depth". As far as we know, this is the strongest model of computation for which we have superpolynomial lower bounds.

Joint work with Nutan Limaye and Guillaume Malod.


Iain Stewart, Univ. of Durham

A survey of logical reductions between problems

In this survey, I will explain and motivate reductions like first-order translations and, in particular, first-order projections. Taking a logical view of complexity theory, where instead of classifying problems according to the resources required for their solution, we classify according to how difficult it is to specify the problems in logic, we obtain a very different perspective on complexity theory. Amazingly, this perspective dovetails beautifully with more traditional complexity theory. We can associate complexity classes with logics and we can obtain alternative notions of completeness where the reductions involve logical translations.


Stanislav Zivny, Univ. of Oxford

The complexity of Valued CSPs

In this talk, we survey recent results on the computational complexity of optimisation problems that can be cast as Valued Constraint Satisfaction Problems (VCSPs). We will focus on problems that can, or provably cannot (under standard complexity-theoretic assumptions), be solved optimally in polynomial time.