$$ \newcommand{\eps}{\epsilon} \newcommand{\F}{\mathbb F} \newcommand{\from}{\colon} \newcommand{\E}{\mathbb E} \newcommand{\bits}{\{0,1\}} \newcommand{\sbits}{\{\pm1\}} \newcommand{\R}{\mathbb R} \renewcommand{\P}{\mathbb P} \newcommand{\sse}{\subseteq} \newcommand{\cA}{\mathcal A} \newcommand{\cB}{\mathcal B} \newcommand{\cC}{\mathcal C} \newcommand{\cD}{\mathcal D} \newcommand{\cE}{\mathcal E} \newcommand{\cF}{\mathcal F} \newcommand{\cG}{\mathcal G} \newcommand{\cH}{\mathcal H} \newcommand{\cI}{\mathcal I} \newcommand{\cJ}{\mathcal J} \newcommand{\cK}{\mathcal K} \newcommand{\cL}{\mathcal L} \newcommand{\cM}{\mathcal M} \newcommand{\cN}{\mathcal N} \newcommand{\cO}{\mathcal O} \newcommand{\cP}{\mathcal P} \newcommand{\cQ}{\mathcal Q} \newcommand{\cR}{\mathcal R} \newcommand{\cS}{\mathcal S} \newcommand{\cT}{\mathcal T} \newcommand{\cU}{\mathcal U} \newcommand{\cV}{\mathcal V} \newcommand{\cW}{\mathcal W} \newcommand{\cX}{\mathcal X} \newcommand{\cY}{\mathcal Y} \newcommand{\cZ}{\mathcal Z} \newcommand{\fa}{\mathfrak a} \newcommand{\fb}{\mathfrak b} \newcommand{\fc}{\mathfrak c} \newcommand{\fd}{\mathfrak d} \newcommand{\fe}{\mathfrak e} \newcommand{\ff}{\mathfrak f} \renewcommand{\leq}{\leqslant} \renewcommand{\le}{\leqslant} \renewcommand{\geq}{\geqslant} \renewcommand{\ge}{\geqslant} \newcommand{\sleq}{\preceq} \newcommand{\sle}{\preceq} \newcommand{\sgeq}{\succeq} \newcommand{\sge}{\succeq} \newcommand{\norm}[1]{\|#1\|} \newcommand{\snorm}[1]{\|#1\|^2} \newcommand{\iprod}[1]{\langle#1\rangle} \newcommand{\bbB}{\mathbb B} \newcommand{\bbS}{\mathbb S} \newcommand{\bbR}{\mathbb R} \newcommand{\bbN}{\mathbb N} \newcommand{\bbZ}{\mathbb Z} \newcommand{\bbI}{\mathbb I} \newcommand{\bbQ}{\mathbb Q} \newcommand{\bbP}{\mathbb P} \newcommand{\bbE}{\mathbb E} \newcommand{\bbF}{\mathbb F} \newcommand{\bbG}{\mathbb G} \newcommand{\sfE}{\mathsf E} \newcommand{\poly}{\mathop{\mathrm{poly}}} \newcommand{\polylog}{\mathop{\mathrm{polylog}}} \newcommand{\Opt}{\mathop{\mathrm{Opt}}} \newcommand{\ra}{\rightarrow} \newcommand{\la}{\leftarrow} \newcommand{\e}{\varepsilon} $$

FOCS 2014 Accepted papers list

  • Mark Braverman (Princeton University), Klim Efremenko (University of Chicago)

    Abstract:
    In this paper we extend the notion of list decoding to the setting of interactive communication and study its limits. In particular, we show that any protocol can be encoded, with a constant rate, into a list-decodable protocol which is resilient
    to a noise rate of up to $1/2-\varepsilon$, and that this is tight.

    Using our list-decodable construction, we study a more nuanced model of noise where the adversary can corrupt up to an $\alpha$ fraction of Alice's communication and up to a $\beta$ fraction of Bob's communication. We use list decoding to fully characterize the region $\mathcal{R}_U$ of pairs $(\alpha, \beta)$ for which unique decoding with constant rate is possible. The region $\mathcal{R}_U$ turns out to be quite unusual in its shape. In particular, it is bounded by a piecewise-differentiable curve with infinitely many pieces.
    We show that outside this region the communication rate must be exponentially small. This suggests that in some error regimes list decoding is necessary for optimal unique decoding.
    We also consider the setting where only one party of the communication must output the correct answer. We precisely characterize the region of all pairs $(\alpha,\beta)$ for which one-sided unique decoding is possible in such a way that Alice outputs the correct answer.

  • Itai Benjamini (Weizmann Institute), Gil Cohen (Weizmann Institute), Igor Shinkar (Weizmann Institute)

    Abstract:
    We construct a bi-Lipschitz bijection from the Boolean cube to the Hamming ball of equal volume. More precisely, we show that for all even $n$ there exists an explicit bijection $f$ from the $n$-dimensional Boolean cube to the Hamming ball of equal volume embedded in the $(n+1)$-dimensional Boolean cube, such that for all $x$ and $y$ it holds that $\mathrm{distance}(x,y)/5 \le \mathrm{distance}(f(x),f(y)) \le 4\,\mathrm{distance}(x,y)$, where $\mathrm{distance}(\cdot,\cdot)$ denotes the Hamming distance. In particular, this implies that the Hamming ball is bi-Lipschitz transitive.

    This result gives a strong negative answer to an open problem of Lovett and Viola [CC 2012], who raised the question in the context of sampling distributions in low-level complexity classes. The conceptual implication is that the problem of proving lower bounds in the context of sampling distributions requires ideas beyond the sensitivity-based structural results of Boppana [IPL 97].

    We study the mapping f further and show that both it and its inverse are computable in DLOGTIME-uniform TC0, but not in AC0. Moreover, we prove that f is "approximately local" in the sense that all but the last output bit of f are essentially determined by a single input bit.

  • Oded Lachish (Birkbeck, University of London)

    Abstract:
    In the Matroid Secretary Problem (MSP), the elements of the ground set of a Matroid are revealed on-line one by one, each together with its value. An algorithm for the MSP is Matroid-Unknown if, at every stage of its execution: (i) it only knows the elements that have been revealed so far and their values, and (ii) it has access to an oracle for testing whether or not any subset of the elements that have been revealed so far forms an independent set. An algorithm is Known-Cardinality if, in addition to (i) and (ii), it also knows, from the start, the cardinality $n$ of the ground set of the Matroid.

    We present here a Known-Cardinality algorithm with a competitive-ratio of $O(\log \log \rho)$, where $\rho$ is the rank of the Matroid. The algorithm is also Order-Oblivious as defined by Azar et al. (2013). The prior known results for a Known-Cardinality algorithm are a competitive-ratio of $O(\log \rho)$, by Babaioff et al. (2007), and a competitive-ratio of $O(\sqrt{\log \rho})$, by Chakraborty and Lachish (2012).

  • Moritz Hardt (IBM Research Almaden)

    Abstract:
    Alternating Minimization is a widely used and empirically successful heuristic
    for Matrix Completion and related low-rank optimization problems. Theoretical
    guarantees for Alternating Minimization have been hard to come by and are
    still poorly understood. This is in part because the heuristic is iterative
    and non-convex in nature. We give a new algorithm based on Alternating
    Minimization that provably recovers an unknown low-rank matrix from a random
    subsample of its entries under a standard incoherence assumption. Our results
    reduce the sample size requirements of the Alternating Minimization approach
    by at least a quartic factor in the rank and the condition number of the
    unknown matrix. These improvements apply even if the matrix is only close to
    low-rank in the Frobenius norm. Our algorithm runs in nearly linear time in
    the dimension of the matrix and, in a broad range of parameters, gives the
    strongest sample bounds among all subquadratic time algorithms that we are aware
    of.

    Underlying our work is a new robust convergence analysis of the well-known
    Power Method for computing the dominant singular vectors of a matrix. This
    viewpoint leads to a conceptually simple understanding of Alternating
    Minimization. In addition, we contribute a new technique for controlling the
    coherence of intermediate solutions arising in iterative algorithms based on a
    smoothed analysis of the QR factorization. These techniques may be of interest
    beyond their application here.

  • Konstantin Makarychev (Microsoft Research), Maxim Sviridenko (Yahoo! Labs)

    Abstract:
    We present a new framework for solving optimization problems with a diseconomy of scale. In such problems, our goal is to minimize the cost of resources used to perform a certain task. The cost of resources grows superlinearly, as x^q, where x is the amount of resource used. We define a novel linear programming relaxation for such problems, and then show that the integrality gap of the relaxation is equal to the q-th moment of the Poisson random variable with parameter 1. Using our framework, we obtain approximation algorithms for the Minimum Energy Efficient Routing, Minimum Degree Balanced Spanning Tree, Load Balancing on Unrelated Parallel Machines, and Unrelated Parallel Machine Scheduling with Nonlinear Functions of Completion Times problems.

    Our analysis relies on the decoupling inequality for nonnegative random variables. The
    inequality states that the q-th norm of the sum of n random variables X_i is at most C_q times the q-th norm of the sum of n random variables Y_i,
    where the X_i are independent nonnegative random variables, the Y_i are possibly dependent
    nonnegative random variables, and each Y_i has the same distribution as X_i.
    The inequality was proved by de la Peña in 1990. However,
    the optimal constant C_q was not known. We show that the optimal constant
    is equal to the fractional q-th Bell number raised to the power of 1/q.
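
    In symbols (our transcription of the statement above, writing $\|Z\|_q = (\E|Z|^q)^{1/q}$), the decoupling inequality reads

    $$\Big\|\sum_{i=1}^n X_i\Big\|_q \;\le\; C_q\,\Big\|\sum_{i=1}^n Y_i\Big\|_q .$$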

  • Thomas Rothvoss (University of Washington)

    Abstract:
    A classical theorem of Spencer shows that any set system with n sets and n elements
    admits a coloring of discrepancy O(n^{1/2}).
    Recent exciting work of Bansal, Lovett and Meka shows that such colorings can be found in
    polynomial time. In fact, the Lovett-Meka algorithm finds a half-integral point in
    any ``large enough'' polytope. However, their algorithm crucially relies on
    the facet structure and does not apply to general convex sets.

    We show that for any symmetric convex set K with measure at least exp(-n/500),
    the following algorithm finds a point y in K \cap [-1,1]^n with Omega(n) coordinates in {-1,+1}:
    (1) take a random Gaussian vector x; (2) compute the point y in K \cap [-1,1]^n that is closest to x; (3) return y.
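
    The three steps above are easy to experiment with once one has a projection oracle for K. Below is a minimal, hypothetical instantiation in Python (our illustration, not the paper's code): we take K to be a Euclidean ball of radius 0.95*sqrt(n), whose Gaussian measure comfortably exceeds exp(-n/500), and compute the nearest point of K \cap [-1,1]^n via Dykstra's alternating-projection algorithm. The choice of K, the radius, and all parameters are ours.

      import numpy as np

      def project_ball(z, r):
          # Euclidean projection onto the origin-centered ball of radius r.
          nrm = np.linalg.norm(z)
          return z if nrm <= r else (r / nrm) * z

      def project_cube(z):
          # Euclidean projection onto the cube [-1, 1]^n.
          return np.clip(z, -1.0, 1.0)

      def project_intersection(x, r, iters=500):
          # Dykstra's algorithm: nearest point of (ball of radius r) ∩ [-1,1]^n.
          p = np.zeros_like(x)
          q = np.zeros_like(x)
          y = x.copy()
          for _ in range(iters):
              a = project_ball(y + p, r)
              p = y + p - a
              y_new = project_cube(a + q)
              q = a + q - y_new
              y = y_new
          return y

      n = 200
      r = 0.95 * np.sqrt(n)            # illustrative K: a ball with Gaussian measure >> exp(-n/500)
      x = np.random.randn(n)           # step (1): random Gaussian vector
      y = project_intersection(x, r)   # step (2): closest point of K ∩ [-1,1]^n
      print(np.sum(np.abs(y) > 1 - 1e-9), "of", n, "coordinates are in {-1,+1}")

    In a typical run of this toy, roughly a third of the coordinates get clipped to ±1, in line with the Omega(n) guarantee.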

    This provides another truly constructive proof of Spencer's theorem and the first constructive
    proof of a theorem of Giannopoulos.

  • Oded Goldreich (Weizmann Institute), Dana Ron (Tel-Aviv University)

    Abstract:
    We initiate a study of learning and testing dynamic environments,
    focusing on environments that evolve according to a fixed local rule.
    The (proper) learning task consists of obtaining the initial configuration
    of the environment, whereas for non-proper learning it suffices to predict
    its future values. The testing task consists of checking whether
    the environment has indeed evolved from some initial configuration
    according to the known evolution rule.
    We focus on the temporal aspect of these computational problems,
    which is reflected in the requirement that only a small portion
    of the environment is inspected in each time slot
    (i.e., the time period between two consecutive applications
    of the evolution rule).

    We present some general observations, an extensive study of two special cases,
    two separation results, and a host of open problems.
    The two special cases that we study refer to linear rules of evolution
    and to rules of evolution that represent simple movement of objects.
    Specifically, we show that evolution according to any linear rule
    can be tested within a total number of queries that is sublinear
    in the size of the environment, and that evolution according to
    a simple one-dimensional movement can be tested within a total number
    of queries that is independent of the size of the environment.

  • Moritz Hardt (IBM Research Almaden), Jonathan Ullman (Harvard University SEAS)

    Abstract:
    We show that, under a standard hardness assumption, there is no computationally
    efficient algorithm that, given n samples from an unknown distribution, can give
    valid answers to n^{3+o(1)} adaptively chosen statistical queries. A
    statistical query asks for the expectation of a predicate over the underlying
    distribution, and an answer to a statistical query is valid if it is "close" to
    the correct expectation over the distribution.

    Our result stands in stark contrast to the well known fact that exponentially
    many statistical queries can be answered validly and efficiently if the
    queries are chosen non-adaptively (no query may depend on the answers to
    previous queries). Moreover, Dwork et al. [DworkFHRR14] showed how to
    accurately answer exponentially many adaptively chosen statistical queries via
    a computationally inefficient algorithm. They also gave an efficient algorithm
    that can answer $\tilde{\Omega}(n^2)$ adaptively chosen queries, which shows that
    our result is almost quantitatively tight.

    Conceptually, our result demonstrates that achieving statistical validity alone can be a source of computational intractability in adaptive settings. For example, in the modern large collaborative research environment, data analysts typically choose a particular approach based on previous findings. False discovery occurs if a research finding is supported by the data but not by the underlying distribution. While the study of preventing false discovery in Statistics is decades old, to the best of our knowledge our result is the first to demonstrate a computational barrier. In particular, our result suggests that the perceived difficulty of preventing false discovery in today's collaborative research environment may be inherent.

  • George Barmpalias (State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing and School of Mathematics, Statistics and Operations Research, Victoria University, Wellington, New Zealand), Richard Elwes (Department of Mathematics, University of Leeds, UK), Andy Lewis-Pye (Department of Mathematics, London School of Economics, UK)

    Abstract:
    Schelling's model of segregation looks to explain the way in which particles or agents of two types may come to arrange themselves spatially into configurations consisting of large homogeneous clusters, i.e. connected regions consisting of only one type.
    As one of the earliest agent based models studied by economists and perhaps the most famous model of self-organising behaviour, it also has direct links to areas at the interface between computer science and statistical mechanics, such as the Ising model and the study of contagion and cascading phenomena in networks.

    While the model has been extensively studied, it has largely resisted rigorous analysis, with prior results from the literature generally pertaining to variants of the model which are tweaked so as to be amenable to standard techniques from statistical mechanics or stochastic evolutionary game theory. In \cite{BK}, Brandt, Immorlica, Kamath and Kleinberg provided the first rigorous analysis of the unperturbed model, for a specific set of input parameters. Here we provide a rigorous analysis of the model's behaviour much more generally and establish some surprising forms of threshold behaviour, notably the existence of situations where an \emph{increased} level of intolerance for neighbouring agents of opposite type leads almost certainly to \emph{decreased} segregation.

  • Marcin Pilipczuk (Department of Informatics, University of Bergen, Norway), Michał Pilipczuk (Department of Informatics, University of Bergen, Norway), Piotr Sankowski (Institute of Informatics, University of Warsaw, Poland), Erik Jan van Leeuwen (Max-Planck-Institut für Informatik, Saarbrücken, Germany)

    Abstract:
    We propose polynomial-time algorithms that sparsify planar and bounded-genus graphs while preserving optimal or near-optimal solutions to Steiner problems. Our main contribution is a polynomial-time algorithm that, given an unweighted graph G embedded on a surface of genus g and a designated face f bounded by a simple cycle of length k, uncovers a set F of edges of G of size polynomial in g and k that contains an optimal Steiner tree for any set of terminals that is a subset of the vertices of f. We apply this general theorem to prove that:

    * given an unweighted graph G embedded on a surface of genus g and a terminal set S in V(G), one can in polynomial time find a set F in E(G) that contains an optimal Steiner tree T for S and that has size polynomial in g and |E(T)|;
    * an analogous result holds for an optimal Steiner forest for a set S of terminal pairs;
    * given an unweighted planar graph G and a terminal set S in V(G), one can in polynomial time find a set F in E(G) that contains an optimal (edge) multiway cut C separating S (i.e., a cutset that intersects any path with endpoints in different terminals from S) and that has size polynomial in |C|.

    In the language of parameterized complexity, these results imply the first polynomial kernels for Steiner Tree and Steiner Forest on planar and bounded-genus graphs (parameterized by the size of the tree and forest, respectively) and for (Edge) Multiway Cut on planar graphs (parameterized by the size of the cutset). Steiner Tree and similar "subset" problems were identified in [Demaine, Hajiaghayi, Computer J., 2008] as important to the quest to widen the reach of the theory of bidimensionality ([Demaine et al., JACM 2005], [Fomin et al., SODA 2010]). Therefore, our results can be seen as a leap forward to achieve
    this broader goal.

    Additionally, we obtain a weighted variant of our main contribution: a polynomial-time algorithm that, given an edge-weighted plane graph G, a designated face f bounded by a simple cycle of weight w(f), and an accuracy parameter eps > 0, uncovers a set F in E(G) of total weight at most poly(1/eps)w(f) that, for any set of terminal pairs that lie on f, contains a Steiner forest within additive error eps*w(f) from the optimal Steiner forest. This result deepens the understanding of the recent framework of approximation schemes for network design problems on planar graphs ([Klein, SICOMP 2008], [Borradaile, Klein, Mathieu, ACM TALG 2009], and later works) by explaining the structure of the solution space within a brick of the so-called mortar graph, the central notion of this framework.

  • Sian-Jheng Lin (Research Center for Information Technology Innovation, Academia Sinica), Wei-Ho Chung (Research Center for Information Technology Innovation, Academia Sinica), Yunghsiang S. Han (Department of Electrical Engineering, National Taiwan University of Science and Technology)

    Abstract:
    In this paper, we present a practical algorithm for (n=2^r,k) systematic Reed-Solomon (RS) erasure codes over characteristic-2 finite fields F_{2^r}, r in Z^+. First, we define a new transform over F_{2^r} and propose a fast transform algorithm. Based on this transform, we then develop encoding and erasure decoding algorithms for the (n=2^r,k) Reed-Solomon codes. Thanks to the efficiency of the transform, the encoding can be completed in O(n\log_2(k)) finite field operations, and the erasure decoding in O(n\log_2(n)) finite field operations. To the best of our knowledge, this is the first approach supporting Reed-Solomon erasure codes over characteristic-2 finite fields to achieve a complexity of O(n\log_2(n)), in both additions and multiplications. It is worth noting that the proposed algorithm does not rely on finite field FFTs. Due to the small leading constant in the complexity and ease of implementation, the algorithms are advantageous in practical applications.

  • Toby Cubitt (University of Cambridge), Ashley Montanaro (University of Bristol)

    Abstract:
    The calculation of ground-state energies of physical systems can be formalised as the k-local Hamiltonian problem, which is the natural quantum analogue of classical constraint satisfaction problems. One way of making the problem more physically meaningful is to restrict the Hamiltonian in question by picking its terms from a fixed set S. Examples of such special cases are the Heisenberg and Ising models from condensed-matter physics.

    In this work we characterise the complexity of this problem for all 2-local qubit Hamiltonians. Depending on the subset S, the problem falls into one of the following categories: in P; NP-complete; polynomial-time equivalent to the Ising model with transverse magnetic fields; or QMA-complete. The third of these classes contains NP and is contained within StoqMA. The characterisation holds even if S does not contain any 1-local terms; for example, we prove for the first time QMA-completeness of the Heisenberg and XY interactions in this setting. If S is assumed to contain all 1-local terms, which is the setting considered by previous work, we have a characterisation that goes beyond 2-local interactions: for any constant k, all k-local qubit Hamiltonians whose terms are picked from a fixed set S correspond to problems either in P; polynomial-time equivalent to the Ising model with transverse magnetic fields; or QMA-complete.

    These results are a quantum analogue of Schaefer's dichotomy theorem for boolean constraint satisfaction problems.

  • Sungjin Im (University of California at Merced), Janardhan Kulkarni (Duke University), Kamesh Munagala (Duke University), Kirk Pruhs (University of Pittsburgh)

    Abstract:
    We consider the classical problem of minimizing the total weighted flow-time for unrelated machines in the online non-clairvoyant setting. In this problem, a set of jobs $J$ arrives over time to be scheduled on a set of $M$ machines. Each job $j$ has processing length $p_j$, weight $w_j$, and is processed at a rate of $\ell_{ij}$ when scheduled on machine $i$. The online scheduler knows the values of $w_j$ and $\ell_{ij}$ upon arrival of the job, but is not aware of the quantity $p_j$. We present the first online algorithm that is scalable ($(1+\eps)$-speed $O(\frac{1}{\eps^2})$-competitive for any constant $\eps > 0$) for the total weighted flow-time objective. No non-trivial results were known for this setting, except for the most basic case of identical machines. Our result resolves a major open problem in online scheduling theory. Moreover, we also show that no job needs more than a logarithmic number of migrations.

    We further extend our result and give a scalable algorithm for the objective of minimizing total weighted flow-time plus energy cost for the case of unrelated machines. In this problem, each machine can be sped up by a factor of $f^{-1}_i(P)$ when consuming power $P$, where $f_i$ is an arbitrary strictly convex power function. In particular, we get an $O(\gamma^2)$-competitive algorithm when all power functions are of form $s^{\gamma}$. These are the first non-trivial non-clairvoyant results in any setting with heterogeneous machines.

    The key algorithmic idea is to let jobs migrate selfishly until they converge to an equilibrium. Towards this end, we define a game where each job's utility is closely tied to the instantaneous increase in the objective that the job is responsible for, and each machine declares a policy that assigns priorities to jobs based on when they migrate to it, and the execution speeds. This has a spirit similar to coordination mechanisms that attempt to achieve near optimum welfare in the presence of selfish agents (jobs). To the best of our knowledge, this is the first work that demonstrates the usefulness of ideas from coordination mechanisms and Nash equilibria for designing and analyzing online algorithms.

  • Francois Le Gall (The University of Tokyo)

    Abstract:
    In this paper we present a quantum algorithm solving the triangle finding problem in unweighted graphs with query complexity $\tilde O(n^{5/4})$, where $n$ denotes the number of vertices in the graph. This improves the previous upper bound $O(n^{9/7})=O(n^{1.287...})$ recently obtained by Lee, Magniez and Santha. Our result shows, for the first time, that in the quantum query complexity setting unweighted triangle finding is easier than its edge-weighted version, since for finding an edge-weighted triangle Belovs and Rosmanis proved that any quantum algorithm requires $\Omega(n^{9/7}/\sqrt{\log n})$ queries. Our result also illustrates some limitations of the non-adaptive learning graph approach used to obtain the previous $O(n^{9/7})$ upper bound since, even over unweighted graphs, any quantum algorithm for triangle finding obtained using this approach requires $\Omega(n^{9/7}/\sqrt{\log n})$ queries as well. To bypass the obstacles characterized by these lower bounds, our quantum algorithm uses combinatorial ideas exploiting the graph-theoretic properties of triangle finding, which cannot be used when considering edge-weighted graphs or the non-adaptive learning graph approach.

  • Per Austrin (KTH Royal Institute of Technology), Venkatesan Guruswami (Carnegie Mellon University), Johan Håstad (KTH Royal Institute of Technology)

    Abstract:
    We prove the following hardness result for a natural promise variant of the classical CNF-satisfiability problem: Given a CNF-formula where each clause has width $w$ and the guarantee that there exists an assignment satisfying at least $g = \lceil \frac{w}{2}\rceil -1$ literals in each clause, it is NP-hard to find a satisfying assignment to the formula (that sets at least one literal to true in each clause). On the other hand, when $g = \lceil \frac{w}{2}\rceil$, it is easy to find a satisfying assignment via simple generalizations of the algorithms for \textsc{$2$-Sat}.

    Viewing \textsc{$2$-Sat} $\in \mathrm{P}$ as easiness of \textsc{Sat} when $1$-in-$2$ literals are true in every clause, and NP-hardness of \textsc{$3$-Sat} as intractability of \textsc{Sat} when $1$-in-$3$ literals are true, our result shows, for any fixed $\eps > 0$, the hardness of finding a satisfying assignment to instances of ``\textsc{$(2+\eps)$-Sat}'' where the density of satisfied literals in each clause is promised to exceed $\frac{1}{2+\eps}$.

    We also strengthen the results to prove that given a $(2k+1)$-uniform hypergraph that can be 2-colored such that each edge has perfect balance (at most $k+1$ vertices of either color), it is NP-hard to find a 2-coloring that avoids a monochromatic edge. In other words, a set system with discrepancy $1$ is hard to distinguish from a set system with worst possible discrepancy.

    Finally, we prove a general result showing intractability of promise CSPs based on the paucity of certain ``weak polymorphisms''. The core of the above hardness results is the claim that the only weak polymorphisms in these particular cases are juntas depending on few variables.

  • Xi Chen (Columbia University), Rocco A. Servedio (Columbia University), Li-Yang Tan (Columbia University)

    Abstract:
    We consider the problem of testing whether an unknown Boolean function $f:\{-1,1\}^n \to \{-1,1\}$ is monotone versus $\eps$-far from every monotone function. The two main results of this paper are a new lower bound and a new algorithm for this well-studied problem.

    Lower bound: We prove an $\tilde{\Omega}(n^{1/5})$ lower bound on the query complexity of any non-adaptive two-sided error algorithm for testing whether an unknown Boolean function $f$ is monotone versus constant-far from monotone. This gives an exponential improvement on the previous lower bound of $\Omega(\log n)$ due to Fischer et al.~\cite{FLN+02}. We show that the same lower bound holds for monotonicity testing of Boolean-valued functions over hypergrid domains $\{1,\dots,m\}^n$ for all $m\ge 2$.

    Upper bound: We give an $\tilde{O}(n^{5/6})\poly(1/\eps)$-query algorithm that tests whether an unknown Boolean function $f$ is monotone versus $\eps$-far from monotone. Our algorithm, which is non-adaptive and makes one-sided error, is a modified version of the algorithm of Chakrabarty and Seshadhri~\cite{CS13a}, which makes $\tilde{O}(n^{7/8})\poly(1/\eps)$ queries.
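
    For context (our sketch, not the paper's algorithm), the classic ``edge tester'' illustrates the non-adaptive, one-sided-error style of tester discussed here: it samples random hypercube edges and rejects only upon finding a violation of monotonicity. The paper's upper bound comes from a refinement of the Chakrabarty-Seshadhri tester, which is more involved than this; all parameters below are illustrative.

      import random

      def edge_tester(f, n, queries):
          # Classic edge tester for monotonicity of f: {-1,1}^n -> {-1,1}.
          # One-sided error: rejects only if a violated hypercube edge is found.
          for _ in range(queries):
              x = [random.choice((-1, 1)) for _ in range(n)]
              i = random.randrange(n)
              lo, hi = list(x), list(x)
              lo[i], hi[i] = -1, 1
              if f(tuple(lo)) > f(tuple(hi)):  # f decreases along coordinate i
                  return "reject"
          return "accept"

      # Toy usage: f(x) = -x_1 is far from monotone and is rejected w.h.p.
      print(edge_tester(lambda x: -x[0], n=8, queries=100))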

  • Tali Kaufman (Bar-Ilan University), David Kazhdan (Hebrew University), Alexander Lubotzky (Hebrew University)

    Abstract:
    Expander graphs have been a focus of attention in computer science in the last four decades. In recent years a high dimensional theory of expanders is emerging. There are several possible generalizations of the theory of {\em expansion} to simplicial complexes, among them stand out {\em coboundary expansion} and {\em topological overlapping}. It is known that for every $d$ there are {\em unbounded degree} simplicial complexes of dimension $d$ with these properties. However, a major open problem (asked by Gromov and others) is whether {\em bounded degree} high dimensional expanders according to these definitions exist for $d \geq 2$. We show bounded degree complexes of dimension $d=2$ which are high dimensional expanders. More precisely, our main result says that the $2$-skeletons of the $3$-dimensional Ramanujan complexes have the topological overlapping property. Assuming a conjecture of Serre on the congruence subgroup property, they are also coboundary expanders.

  • Ilario Bonacina (Sapienza University of Rome), Nicola Galesi (Sapienza University of Rome), Neil Thapen (Academy of Sciences of the Czech Republic)

    Abstract:
    We show $\Omega(n^2)$ lower bounds on the total space used in resolution refutations of random $k$-CNFs over $n$ variables, and of the graph pigeonhole principle and the bit pigeonhole principle for $n$ holes. This answers the long-standing open problem of whether there are families of $k$-CNF formulas of size $O(n)$ requiring total space $\Omega(n^2)$ in resolution, and gives the first truly quadratic lower bounds on total space. The results follow from a more general theorem showing that, for formulas satisfying certain conditions, in every resolution refutation there is a memory configuration containing many clauses of large width.

  • Zengfeng Huang (MADALGO, Aarhus University), Ke Yi (HKUST)

    Abstract:
    Data summarization is an effective approach to dealing with the ``big data'' problem.
    While data summarization problems traditionally have been studied in the streaming
    model, the focus is starting to shift to distributed models, as distributed/parallel
    computation seems to be the only viable way to handle today's massive data sets. In
    this paper, we study epsilon-approximations, a classical data summary that,
    intuitively speaking, preserves approximately the density of the underlying data set
    over a certain range space. We consider the problem of computing
    epsilon-approximations for a data set which is held jointly by k players, and give
    general communication upper and lower bounds that hold for any range space whose
    discrepancy is known.

  • Piotr Indyk (MIT), Michael Kapralov (MIT)

    Abstract:
    We give an algorithm for $\ell_2/\ell_2$ sparse recovery from Fourier measurements using $O(k\log N)$ samples, matching the lower bound of \cite{DIPW} for non-adaptive algorithms up to constant factors for any $k\leq N^{1-\delta}$. The algorithm runs in $\tilde O(N)$ time. Our algorithm extends to higher dimensions, leading to sample complexity of $O_d(k\log N)$, which is optimal up to constant factors for any $d=O(1)$. These are the first sample optimal algorithms for these problems.

    A preliminary experimental evaluation indicates that our algorithm has empirical sampling complexity comparable to that of other recovery methods known in the literature, while providing strong provable guarantees on the recovery quality.

  • Gilles Brassard (Université de Montréal, CIFAR and ETH-ITS), Ashwin Nayak (University of Waterloo), Alain Tapp (Université de Montréal), Dave Touchette (Université de Montréal), Falk Unger

    Abstract:
    We study the problem of simulating protocols in a quantum communication setting over noisy channels. This problem falls at the intersection of quantum information theory and quantum communication complexity, and will be of importance for eventual real-world applications of interactive quantum protocols, which can be proved to have exponentially lower communication costs than their classical counterparts for some problems. To the best of our knowledge, these are the first results regarding the quantum version of this problem, first studied by Schulman in a classical setting (FOCS '92, STOC '93).

    We simulate a length N quantum communication protocol by a length O(N) protocol with arbitrarily small error. Our simulation strategy has a far higher communication rate than a naive one that encodes each round of communication separately to achieve comparable success. Such a strategy would have a communication rate going to 0 in the worst case as the length of the protocol increases, in contrast to our strategy, whose communication rate is proportional to the capacity of the channel used. Under adversarial noise, our strategy can withstand, for arbitrarily small \epsilon > 0, error rates as high as 1/2 - \epsilon when parties preshare perfect entanglement but the classical channel is noisy. We show that this is optimal. Note that in this model, the naive strategy would not work for any constant fraction of errors.

    We provide extensions of these results in several other models of communication, including when the entanglement is also noisy, and when there is no pre-shared entanglement but communication is quantum and noisy. We also study the case of random noise, for which we provide simulation protocols with positive communication rates and no pre-shared entanglement over some quantum channels with quantum capacity Q=0, proving that Q is in general not the right characterization of a channel's capacity for interactive quantum communication. Our results are stated for a general quantum communication protocol in which Alice and Bob collaborate, and hold in particular in the quantum communication complexity settings of the Yao and Cleve-Buhrman models.

  • Leonid Gurvits (The City College of New York), Alex Samorodnitsky (The Hebrew University)

    Abstract:
    We show that the permanent of a doubly stochastic $n \times n$ matrix
    $A = (a_{ij})$ is at least as large as $\prod_{i,j} (1-a_{ij})^{1-a_{ij}}$
    and at most as large as $2^n$ times this number. Combined with previous
    work, this improves on the deterministic approximation factor for the
    permanent, giving a $2^n$-approximation instead of $e^n$.
    We also give a combinatorial application of the lower bound, proving S. Friedland's "Asymptotic Lower
    Matching Conjecture" for the monomer-dimer problem.
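
    As a quick sanity check of these bounds (ours, not from the abstract), take the uniform doubly stochastic matrix $A = J_n/n$ with all entries $1/n$. The lower bound evaluates to

    $$\prod_{i,j} \Big(1-\tfrac{1}{n}\Big)^{1-\frac{1}{n}} = \Big(1-\tfrac{1}{n}\Big)^{n(n-1)} \approx e^{-(n-1)},$$

    while Stirling's formula gives $\mathrm{per}(J_n/n) = n!/n^n \approx \sqrt{2\pi n}\, e^{-n}$, which indeed lies between $e^{-(n-1)}$ and $2^n e^{-(n-1)}$ for all $n \ge 2$.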

  • Gregory Valiant (Stanford), Paul Valiant (Brown University)

    Abstract:
    We consider the problem of verifying the identity of a distribution: Given the description of a distribution over a discrete support $p=(p_1,p_2,\ldots,p_n)$, how many samples (independent draws) must one obtain from an unknown distribution, $q$, to distinguish, with high probability, the case that $p=q$ from the case that the total variation distance ($L_1$ distance) $||p-q||_1 \ge \eps$? We resolve this question, up to constant factors, on an instance-by-instance basis: there exist universal constants $c,c'$ and a function $f(p,\eps)$ on distributions and error parameters, such that our tester distinguishes $p=q$ from $||p-q||_1\ge \eps$ using $f(p,\eps)$ samples with success probability $>2/3$, but no tester can distinguish $p=q$ from $||p-q||_1\ge c\cdot \epsilon$ when given $c'\cdot f(p,\epsilon)$ samples. The function $f(p,\eps)$ is upper-bounded by a multiple of $\frac{||p||_{2/3}}{\epsilon^2}$, but is more complicated, and is significantly smaller in cases when $p$ has many small domain elements, or a single large one. This result significantly generalizes and tightens previous results: since distributions of support at most $n$ have $L_{2/3}$ norm bounded by $\sqrt{n},$ this result immediately shows that for such distributions, $O(\sqrt{n}/{\eps^2})$ samples suffice, tightening the previous bound of $O(\frac{\sqrt{n} \polylog n}{\eps^4})$ for this class of distributions, and matching the (tight) known results for the case that $p$ is the uniform distribution over support $n$.

    The analysis of our very simple testing algorithm involves several hairy inequalities. To facilitate this analysis, we give a complete characterization of a general class of inequalities---generalizing Cauchy-Schwarz, Hölder's inequality, and the monotonicity of $L_p$ norms. Specifically, we characterize the set of sequences $(a)_i=a_1,\ldots,a_r,$ $(b)_i=b_1,\ldots,b_r,$ $(c)_i=c_1\ldots,c_r,$ for which it holds that for all finite sequences of positive numbers $(x)_j=x_1,\ldots$ and $(y)_j=y_1,\ldots,$ $$\prod_{i=1}^r \left( \sum_j x_j^{a_i} y_j^{b_i}\right)^{c_i} \ge 1.$$ For example, the standard Cauchy-Schwarz inequality corresponds to the sequences $a=(1,0,1/2),$ $b=(0,1,1/2),$ $c=(1/2, 1/2, -1).$ Our characterization is of a non-traditional nature in that it uses linear programming to compute a derivation that may otherwise have to be sought through trial and error, by hand. We do not believe such a characterization has appeared in the literature, and hope its computational nature will be useful to others, and facilitate analyses like the one here.
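
    To unpack the stated example (our verification): substituting $a=(1,0,1/2)$, $b=(0,1,1/2)$, $c=(1/2,1/2,-1)$ into the product gives

    $$\Big(\sum_j x_j\Big)^{1/2}\Big(\sum_j y_j\Big)^{1/2}\Big(\sum_j x_j^{1/2} y_j^{1/2}\Big)^{-1} \ge 1 \quad\Longleftrightarrow\quad \sum_j \sqrt{x_j y_j} \le \Big(\sum_j x_j\Big)^{1/2}\Big(\sum_j y_j\Big)^{1/2},$$

    which is exactly Cauchy-Schwarz applied to the vectors $(\sqrt{x_j})_j$ and $(\sqrt{y_j})_j$.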

  • Jin-Yi Cai (University of Wisconsin-Madison), Heng Guo (University of Wisconsin-Madison), Tyson Williams (University of Wisconsin-Madison)

    Abstract:
    We show that an effective version of Siegel's Theorem on finiteness of integer solutions and an application of elementary Galois theory are key ingredients in a complexity classification of some Holant problems. These Holant problems, denoted by Holant(f), are defined by a symmetric ternary function f that is invariant under any permutation of the k >= 3 domain elements. We prove that Holant(f) exhibits a complexity dichotomy. This dichotomy holds even when restricted to planar graphs. A special case of this result is that counting edge k-colorings is #P-hard over planar 3-regular graphs for k >= 3. In fact, we prove that counting edge k-colorings is #P-hard over planar r-regular graphs for all k >= r >= 3. The problem is polynomial-time computable in all other parameter settings. The proof of the dichotomy theorem for Holant(f) depends on the fact that a specific polynomial p(x, y) has an explicitly listed finite set of integer solutions, and the determination of the Galois groups of some specific polynomials. In the process, we also encounter the Tutte polynomial, medial graphs, Eulerian partitions, Puiseux series, and a certain lattice condition on the (logarithms of the) roots of polynomials.

  • Amirali Abdullah (University of Utah), Alexandr Andoni (Microsoft Research), Ravi Kannan (Microsoft Research), Robert Krauthgamer (Weizmann Institute)

    Abstract:
    We study spectral algorithms for the high-dimensional Nearest Neighbor
    Search problem (NNS). In particular, we consider a semi-random setting,
    where a dataset $P$ in $\R^d$ is chosen arbitrarily from an unknown subspace of low dimension $k \ll d$, and then perturbed by full-dimensional Gaussian noise. We design spectral NNS algorithms whose query time depends polynomially
    on $d$ and $\log n$ (where $n=|P|$) for large ranges of $k$, $d$ and $n$.
    Our algorithms use a repeated computation of the top PCA subspace,
    and are effective even when the random-noise magnitude is much larger than the interpoint distances in $P$.
    Our motivation is that, in practice, a number of NNS algorithms use
    spectral methods, which still lack a good theoretical justification,
    rather than random-projection methods that seem to be theoretically optimal.

  • Mohsen Ghaffari (MIT), Bernhard Haeupler (Microsoft Research)

    Abstract:
    We study coding schemes for error correction in interactive communications. Such interactive coding schemes simulate any n-round interactive protocol using N rounds over an adversarial channel that corrupts up to \delta N transmissions. Important performance measures for a coding scheme are its maximum tolerable error rate \rho, communication complexity N, and computational complexity.

    We give the first coding scheme for the standard setting which performs optimally in all three measures: Our randomized non-adaptive coding scheme has a near-linear computational complexity and tolerates any error rate \delta < 1/4 with a linear N = \Theta(n) communication complexity. This improves over prior results which each performed well in two of these measures.

    We also give results for other settings of interest, namely, the first computationally and communication efficient schemes that tolerate \rho < 2/7 adaptively, \rho < 1/3 if only one party is required to decode, and \rho < 1/2 if list decoding is allowed. These are the optimal tolerable error rates for the respective settings. These coding schemes also have near linear computational and communication complexity.

    These results are obtained via two techniques: We give a general black-box reduction which reduces unique decoding, in various settings, to list decoding. We also show how to boost the computational and communication efficiency of any list decoder to become near linear.

  • Amir Abboud (Stanford University), Virginia Vassilevska Williams (Stanford University)

    Abstract:
    We consider several well-studied problems in dynamic algorithms and prove that sufficient progress on any of them would imply a breakthrough on one of five major open problems in the theory of algorithms:
    \begin{enumerate}
    \item Is the $3$SUM problem on $n$ numbers in $O(n^{2-\eps})$ time for some $\eps>0$?
    \item Can one determine the satisfiability of a CNF formula on $n$ variables in $O((2-\eps)^n\poly n)$ time for some $\eps>0$?
    \item Is the All Pairs Shortest Paths problem for graphs on $n$ vertices in $O(n^{3-\eps})$ time for some $\eps>0$?
    \item Is there a linear time algorithm that detects whether a given graph contains a triangle?
    \item Is there an $O(n^{3-\eps})$ time combinatorial algorithm for $n\times n$ Boolean matrix multiplication?
    \end{enumerate}
    The problems we consider include dynamic versions of bipartite perfect matching, bipartite maximum weight matching, single source reachability, single source shortest paths, strong connectivity, subgraph connectivity, diameter approximation, and some non-graph problems such as Pagh's problem, defined in a recent paper by Patrascu [STOC 2010].
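
    For concreteness, here is the standard quadratic-time algorithm for 3SUM, the baseline that question (1) above asks to beat. This is folklore, shown only to make the conjectured barrier tangible; it is not code from the paper.

      def has_3sum(nums):
          # Sort, then for each fixed element run a two-pointer scan: O(n^2) total.
          nums = sorted(nums)
          n = len(nums)
          for i in range(n - 2):
              lo, hi = i + 1, n - 1
              while lo < hi:
                  s = nums[i] + nums[lo] + nums[hi]
                  if s == 0:
                      return True
                  if s < 0:
                      lo += 1      # sum too small: advance the left pointer
                  else:
                      hi -= 1      # sum too large: retreat the right pointer
          return False

      print(has_3sum([-5, 1, 4, 2, -1]))   # True: (-5) + 1 + 4 = 0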

  • Daniel Lokshtanov (Department of Informatics, University of Bergen, Norway), Marcin Pilipczuk (Department of Informatics, University of Bergen, Norway), Michał Pilipczuk (Department of Informatics, University of Bergen, Norway), Saket Saurabh (Institute of Mathematical Sciences, India, and Department of Informatics, University of Bergen, Norway)

    Abstract:
    We give a fixed-parameter tractable algorithm that, given a parameter $k$ and two graphs $G_1,G_2$, either concludes that one of these graphs has treewidth at least $k$, or determines whether $G_1$ and $G_2$ are isomorphic. The running time of the algorithm on an $n$-vertex graph is $2^{O(k^5\log k)} n^5$, and this is the first fixed-parameter algorithm for Graph Isomorphism parameterized by treewidth.

    Our algorithm in fact solves the more general canonization problem. Namely, we design a procedure, working in $2^{O(k^5\log k)} n^5$ time, that, for a given graph $G$ on $n$ vertices, either concludes that the treewidth of $G$ is at least $k$, or finds an isomorphism-invariant construction term: an algebraic expression that encodes $G$ together with a tree decomposition of $G$ of width $O(k^4)$. Hence, a canonical graph isomorphic to $G$ can be constructed by simply evaluating the obtained construction term, while the isomorphism test reduces to verifying whether the computed construction terms for $G_1$ and $G_2$ are equal.

  • Nima Anari (University of California, Berkeley), Gagan Goel (Google Research), Afshin Nikzad (Stanford University)

    Abstract:
    In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester. Moreover each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is: if the requester has a limited budget, how should one design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility?

    We note that although previous work has studied this problem, a crucial point on which we deviate from earlier work is the notion of *large-scale* markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve an approximation factor better than $0.414$ and $0.5$ for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve approximation ratios of $0.292$ and $0.33$ respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves an approximation factor of $1-1/e$. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. We also show that our mechanism is optimal in a large and natural class of mechanisms, by showing that no mechanism can achieve a factor better than $1-1/e$ in this class. Finally, we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the markets are large.

  • Neeraj Kayal (Microsoft Research India), Nutan Limaye (Indian Institute of Technology Bombay), Chandan Saha (Indian Institute of Science Bangalore), Srikanth Srinivasan (Indian Institute of Technology Bombay)

    Abstract:
    We show here an exp(\Omega(\sqrt{d} \log N)) size lower bound for homogeneous depth four arithmetic formulas. That is, we give an explicit family of polynomials of degree d on N variables (with N=d^3 in our case) with 0,1-coefficients such that for any representation of a polynomial f in this family of the form f = \sum_i \prod_j Q_{ij}, where the Q_{ij}'s are homogeneous polynomials (recall that a polynomial is said to be homogeneous if all its monomials have the same degree), the total number of monomials in all the Q_{ij}'s put together must be at least exp(\Omega(\sqrt{d} \log N)).

    The above-mentioned family, which we refer to as the Nisan-Wigderson design-based family of polynomials, is in the complexity class VNP. Our work builds on the recent lower bound results and yields an improved quantitative bound.

  • Bernhard Haeupler (Microsoft Research)

    Abstract:
    We provide the first capacity-approaching coding schemes that robustly simulate any interactive protocol over an adversarial channel that corrupts any eps fraction of the transmitted symbols. Our coding schemes achieve a communication rate of 1 - O(sqrt{eps log log 1/eps}) over any adversarial channel. This can be improved to 1 - O(sqrt{eps}) for random, oblivious, and computationally bounded channels, or if parties have shared randomness unknown to the channel.

    Surprisingly, these rates exceed and thus break the 1 - Omega(sqrt{H(eps)}) = 1 - Omega(sqrt{eps log 1/eps}) interactive channel capacity bound which [Kol and Raz; STOC'13] recently proved for random errors. Equally interesting, we conjecture our two rates to be optimal for their respective settings and therefore capture the interactive channel capacity for both random and adversarial errors.

    In addition to being very communication efficient, our randomized coding schemes have multiple other advantages. They are computationally efficient, extremely natural, and significantly simpler than prior (non-capacity approaching) schemes. In particular, our protocols do not employ any coding but allow the original protocol to be performed as-is, interspersed only by short exchanges of hash values. When hash values do not match, the parties backtrack. Our approach is, as we feel, by far the simplest and most natural explanation for why and how robust interactive communication in a noisy environment is even possible.
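
    The hash-and-backtrack idea lends itself to a toy simulation. The sketch below is ours: it simplifies aggressively (one-way streaming over a binary symmetric channel, hash comparison over an assumed noiseless side channel, rewinding exactly one block) and is not the paper's scheme; eps and block are arbitrary illustrative parameters.

      import hashlib, random

      def h(symbols):
          # Short transcript hash (collisions are ignored in this toy).
          return hashlib.sha256("".join(symbols).encode()).hexdigest()[:8]

      def noisy_send(bit, eps):
          # Binary symmetric channel: flip the transmitted bit with probability eps.
          return bit if random.random() > eps else ("1" if bit == "0" else "0")

      def simulate(message, eps=0.05, block=4):
          sent, received, pos = [], [], 0
          while pos < len(message):
              chunk = list(message[pos:pos + block])
              got = [noisy_send(b, eps) for b in chunk]
              # Compare hashes of the tentative transcripts; we assume this
              # comparison itself is noiseless (a simplifying assumption).
              if h(sent + chunk) == h(received + got):
                  sent += chunk
                  received += got
                  pos += block      # hashes match: commit the block
              # else: hashes differ, so the block is discarded and resent
              # (the "backtrack"); committed prefixes stay uncorrupted.
          return "".join(received)

      random.seed(0)
      msg = "0110100110010110"
      print(simulate(msg) == msg)   # True: the message survives the noise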

  • Monika Henzinger (University of Vienna), Sebastian Krinninger (University of Vienna), Danupon Nanongkai (University of Vienna)

    Abstract:
    The decremental single-source shortest paths (SSSP) problem concerns maintaining the distances from a given source node s to every node in an n-node m-edge graph G undergoing edge deletions. It is one of the most important problems with many potential applications. While its static counterpart can be easily solved in linear time, this decremental problem is much more challenging even in the undirected unweighted case. In this case, the classic O(mn) total update time of Even and Shiloach (JACM 1981) remained the best for three decades, until it was recently improved to O(n^{2+o(1)}) by a (1+ε)-approximation algorithm of Bernstein and Roditty (SODA 2011), and more recently to O(n^{1.8+o(1)}+m^{1+o(1)}) by us (SODA 2014). In this paper, we finally bring the running time of this case down to near-linear: We give a (1+ε)-approximation algorithm with O(m^{1+o(1)}) total update time. Moreover, we obtain O(m^{1+o(1)} log W) time for the weighted case, where the edge weights are integers from 1 to W. The only prior work on weighted graphs in o(mn log W) time is our O(mn^{0.986} log W)-time result from STOC 2014, which works for the general weighted directed case.

    In contrast to the previous results which rely on maintaining a sparse emulator, our algorithm relies on maintaining a so-called sparse (d, ε)-hop set introduced by Cohen (JACM 2000) in the PRAM literature. A (d, ε)-hop set of a graph G=(V, E) is a set E' of weighted edges such that the distance between any pair of nodes in G can be (1+ε)-approximated by their d-hop distance (given by a path containing at most d edges) on G'=(V, E ∪ E'). Our algorithm can maintain an (n^{o(1)}, ε)-hop set of near-linear size in near-linear time. It is the first of its kind to the best of our knowledge. To maintain the distances on this hop set, we develop a monotone bounded-hop Even-Shiloach tree. It results from extending and combining our previous monotone Even-Shiloach tree (FOCS 2013) with the bounded-hop SSSP technique of Bernstein (STOC 2013). These two new tools might be of independent interest.
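
    For intuition about the definition (our example, not from the paper): on an unweighted path v_1, ..., v_n, adding a shortcut edge {v_i, v_j} of weight |i-j| for every pair of indices differing by a power of 2 yields a set E' of O(n log n) weighted edges on which every distance is realized exactly by a path of O(log n) hops, via the binary expansion of |i-j|; this makes E' an (O(log n), 0)-hop set. The algorithm above maintains the analogous object with n^{o(1)} hops and (1+ε) stretch, under deletions.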

  • Subhash Khot (New York University), Rishi Saket (IBM India Research Lab)

    Abstract:
    We show that it is quasi-NP-hard to color 2-colorable 12-uniform hypergraphs with 2^{(log n)^{Omega(1)}} colors where n is the number of vertices. Previously, Guruswami et al. [GHHSV14] showed that it is quasi-NP-hard to color 2-colorable 8-uniform hypergraphs with 2^{2^{Omega(sqrt{log log n})}} colors. Their result is obtained by composing a standard Outer PCP with an Inner PCP based on the Short Code of super-constant degree. Our result is instead obtained by composing a new Outer PCP with an Inner PCP based on the Short Code of degree two.

  • Mihai Patrascu, Mikkel Thorup (University of Copenhagen)

    Abstract:
    We present a data structure representing a dynamic set S of
    w-bit integers on a w-bit word RAM. With |S|=n and w > log n and
    space O(n), we support the following standard operations in O(log n/log w) time:

    - insert(x) sets S=S+{x}.

    - delete(x) sets S=S-{x}.

    - predecessor(x) returns max{y in S|y< x}.

    - successor(x) returns min{y in S|y >= x}.

    - rank(x) returns #{y in S|y< x}.

    - select(i) returns y in S with rank(y)=i, if any.
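
    To pin down these semantics, here is a reference sketch (ours, for illustration only): a Python class implementing the six operations with a plain sorted list. It shows what each operation returns, not the paper's data structure; updates here cost O(n) rather than O(log n/log w).

      import bisect

      class DynamicSet:
          # A sorted list of distinct keys; bisect gives O(log n) searches.
          def __init__(self):
              self.keys = []

          def insert(self, x):                 # S = S + {x}
              i = bisect.bisect_left(self.keys, x)
              if i == len(self.keys) or self.keys[i] != x:
                  self.keys.insert(i, x)

          def delete(self, x):                 # S = S - {x}
              i = bisect.bisect_left(self.keys, x)
              if i < len(self.keys) and self.keys[i] == x:
                  self.keys.pop(i)

          def predecessor(self, x):            # max{y in S | y < x}
              i = bisect.bisect_left(self.keys, x)
              return self.keys[i - 1] if i > 0 else None

          def successor(self, x):              # min{y in S | y >= x}
              i = bisect.bisect_left(self.keys, x)
              return self.keys[i] if i < len(self.keys) else None

          def rank(self, x):                   # #{y in S | y < x}
              return bisect.bisect_left(self.keys, x)

          def select(self, i):                 # y in S with rank(y) = i, if any
              return self.keys[i] if 0 <= i < len(self.keys) else None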

    Our O(log n/log w) bound is optimal for dynamic rank and select,
    matching a lower bound of Fredman and Saks [STOC'89]. More precisely,
    this is a lower bound on the query time that holds even with polylogarithmic
    update times. Conceivably, the update time could be improved. Our
    optimality is for the maximal operation time, including both queries and
    updates.

    When the word length is large, our time bound is also optimal for
    dynamic predecessor, matching a lower bound of Beame and Fich
    [STOC'99] whenever log n/log w=O(log w/loglog w). In contrast to
    rank and select, predecessor admits faster solutions when the word
    length is small, e.g., van Emde Boas' [FOCS'75] O(log w) time bound. The
    predecessor bounds for smaller word lengths are essentially understood
    if we allow randomization [Patrascu Thorup STOC'06], and then
    our new bound completes the picture for dynamic predecessor search.

    Our clean O(log n/log w) bound for dynamic predecessor should
    be compared with Fredman and Willard's [STOC'90] bound of
    O(log n/log w + sqrt(log n)). For larger word lengths, it was improved to
    O(log n/log w + loglog n) by Andersson and Thorup [FOCS'96 STOC'00], but
    the extra additive term still renders the bound suboptimal.

    Technically, the most interesting aspect of our data structure is that it
    operates in constant time for sets of size n=w^{O(1)}. This
    resolves a main open problem of Ajtai, Komlos, and Fredman [FOCS'83].
    Ajtai et al. presented such a data structure in Yao's abstract
    cell-probe model with w-bit cells/words, but pointed out that the
    functions used could not be implemented. As a partial solution to the
    problem, Fredman and Willard [STOC'90] introduced a "fusion node" that
    could handle queries in constant time, but used polynomial time on the
    updates. Despite inefficient updates, this led them to the first
    sub-logarithmic bounds for dynamic searching mentioned above.
    We call our data structure for small sets a "dynamic fusion node"
    as it does both queries and updates in constant time.

    Fredman and Willard [FOCS'90] later introduced atomic
    heaps that, using tables of size s, can handle all operations in
    constant time for sets of size (log s)^{O(1)}. Our focus
    is on understanding the asymptotic impact of a large word length. We only use space linear in the number of stored keys, but our capacity for sets of
    size w^{O(1)} with constant operation time is the largest
    possible regardless of the available space. Such a large set
    size is new even in the simple case of a deterministic dynamic dictionary with
    just membership and updates in constant time.

  • Mrinal Kumar (Rutgers University), Shubhangi Saraf (Rutgers University)

    Abstract:
    We prove exponential lower bounds on the size of homogeneous depth 4 circuits computing an explicit polynomial in $\mathrm{VP}$. Our results hold for the {\it Iterated Matrix Multiplication} polynomial: in particular, we show that any homogeneous depth 4 circuit computing the $(1,1)$ entry in the product of $n$ generic matrices of dimension $n^{O(1)}$ must have size $n^{\Omega(\sqrt{n})}$.

    Our results strengthen previous works in two significant ways.
    \begin{enumerate}
    \item Our lower bounds hold for a polynomial in $\mathrm{VP}$. Prior to our work, Kayal et al.~\cite{KLSS14} proved an exponential lower bound for homogeneous depth 4 circuits (over fields of characteristic zero) computing a polynomial in $\mathrm{VNP}$. The best known lower bound for a depth 4 homogeneous circuit computing a polynomial in $\mathrm{VP}$ was the bound of $n^{\Omega(\log n)}$ by~\cite{LSS13, KLSS14}.

    Our exponential lower bounds also give the first exponential separation between general arithmetic circuits and homogeneous depth 4 arithmetic circuits. In particular they imply that the depth reduction results of Koiran~\cite{koiran} and Tavenas~\cite{Tavenas13} are tight even for reductions to general homogeneous depth 4 circuits (without the restriction of bounded bottom fanin).
    \item Our lower bound holds over all fields. The lower bound of~\cite{KLSS14} worked only over fields of characteristic zero. Prior to our work, the best lower bound for homogeneous depth 4 circuits over fields of positive characteristic was $n^{\Omega(\log n)}$~\cite{LSS13, KLSS14}.
    \end{enumerate}

  • Karl Bringmann (Max Planck Institute for Informatics, Saarbrücken, Germany)

    Abstract:
    The Frechet distance is a well-studied and very popular measure of similarity of two curves. Many variants and extensions have been studied since Alt and Godau introduced this measure to computational geometry in 1991. Their original algorithm to compute the Frechet distance of two polygonal curves with n vertices has a runtime of O(n^2 log n). More than 20 years later, the state-of-the-art algorithms for most variants still take time n^2 / log n or more. The problem has been conjectured to be 3SUM-hard, but recently this was shown to be unlikely [Buchin et al., SODA'14].

    As an alternative approach to obtain conditional lower bounds, in this paper we assume the Strong Exponential Time Hypothesis or, more precisely, that there is no O*((2-delta)^N) algorithm for CNF-SAT for any delta > 0. Under this assumption we show that the Frechet distance cannot be computed in strongly subquadratic time, i.e., in time O(n^{2-delta}) for any delta > 0. This means that finding faster algorithms for the Frechet distance is as hard as finding faster CNF-SAT algorithms, and the existence of a strongly subquadratic algorithm can be considered unlikely.

    Our result holds for both the continuous and the discrete Frechet distance. We extend the main result in various directions. Based on the same assumption we (1) show non-existence of a strongly subquadratic 1.001-approximation, (2) present tight lower bounds in case the numbers of vertices of the two curves are imbalanced, and (3) examine realistic input assumptions (c-packed curves).
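
    For reference, the discrete Frechet distance admits a textbook quadratic dynamic program, sketched below in Python (math.dist needs Python 3.8+); the result above says that improving substantially on this quadratic behavior would refute SETH.

        import math

        def discrete_frechet(P, Q):
            """O(nm) dynamic program for the discrete Frechet distance of
            two sequences of points P and Q (tuples of coordinates)."""
            n, m = len(P), len(Q)
            d = [[0.0] * m for _ in range(n)]
            for i in range(n):
                for j in range(m):
                    cost = math.dist(P[i], Q[j])
                    if i == 0 and j == 0:
                        reach = 0.0
                    elif i == 0:
                        reach = d[0][j - 1]
                    elif j == 0:
                        reach = d[i - 1][0]
                    else:
                        reach = min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
                    d[i][j] = max(cost, reach)
            return d[n - 1][m - 1]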

  • Joshua A. Grochow (Santa Fe Institute) , Toniann Pitassi (University of Toronto, Department of Computer Science)

    Abstract:
    We introduce a new and very natural algebraic proof system, which has tight connections to (algebraic) circuit complexity. In particular, we show that any super-polynomial lower bound on any Boolean tautology in our proof system implies that the permanent does not have polynomial-size algebraic circuits (VNP is not equal to VP). Note that, prior to our work, there was no proof system for which lower bounds on an arbitrary tautology implied *any* computational lower bound.

    Our proof system helps clarify the relationships between previous algebraic proof systems, and begins to shed light on why proof complexity lower bounds for various proof systems have been so much harder than lower bounds on the corresponding circuit classes. In doing so, we highlight the importance of polynomial identity testing (PIT) for understanding proof complexity.

    More specifically, we introduce certain propositional axioms satisfied by any Boolean circuit computing PIT. (The existence of efficient proofs for our PIT axioms appears to be somewhere in between the major conjecture that PIT is in P and the known result that PIT is in P/poly.) We use these PIT axioms to shed light on AC^0[p]-Frege lower bounds, which have been open for nearly 30 years, with no satisfactory explanation as to their apparent difficulty. We show that either:

    a. Proving super-polynomial lower bounds on AC^0[p]-Frege implies VNP does not have polynomial-size circuits of depth d---a notoriously open question for any d at least 4---thus explaining the difficulty of lower bounds on AC^0[p]-Frege, or

    b. AC^0[p]-Frege cannot efficiently prove the depth d PIT axioms, and hence we have a lower bound on AC^0[p]-Frege.

    We also get many variants on this statement for other proof systems and other computational lower bounds.

    Finally, using the algebraic structure of our proof system, we propose a novel way to extend techniques from algebraic circuit complexity to prove lower bounds in proof complexity. Although we have not yet succeeded in proving such lower bounds, this proposal should be contrasted with the difficulty of extending AC^0[p] circuit lower bounds to AC^0[p]-Frege lower bounds.

  • Shaddin Dughmi (University of Southern California)

    Abstract:
    There has been a recent surge of interest in the role of information in strategic interactions. Much of this work seeks to understand how the realized equilibrium of a game is influenced by uncertainty in the environment and the information available to players in the game. Lurking beneath this literature is a fundamental, yet largely unexplored, algorithmic question: how should a ``market maker'' who is privy to additional information, and equipped with a specified objective, inform the players in the game? This is an informational analogue of the mechanism design question, and views the \emph{information structure} of a game as a mathematical object to be designed, rather than an exogenous variable.


    We initiate a complexity-theoretic examination of the design of optimal information structures, a task often referred to as \emph{signaling}. We examine one of the simplest instantiations of the signaling question: Bayesian zero-sum games, and a principal who must choose an information structure maximizing the equilibrium payoff of one of the players. In this setting, we show that optimal signaling is computationally intractable, and in some cases hard to approximate, assuming that it is hard to recover a planted clique from an Erd\H{o}s-R\'{e}nyi random graph. This is despite the fact that equilibria in these games are computable in polynomial time, and therefore suggests that the hardness of optimal signaling is a distinct phenomenon from the hardness of equilibrium computation.


    Necessitated by the non-local nature of information structures, en route to our results we prove an ``amplification lemma'' for the planted clique problem which may be of independent interest. Specifically, we show that even if we plant many cliques in an Erd\H{o}s-R\'{e}nyi random graph, so much so that most nodes in the graph are in some planted clique, recovering a constant fraction of the planted cliques is no easier than the traditional planted clique problem.

  • Andris Ambainis (University of Latvia and Institute for Advanced Study, Princeton) , Ansis Rosmanis (Institute for Quantum Computing, School of Computer Science, University of Waterloo) , Dominique Unruh (University of Tartu)

    Abstract:
    Quantum zero-knowledge proofs and quantum proofs of knowledge are
    inherently difficult to analyze because their security analysis uses
    rewinding. Certain cases of quantum rewinding are handled by the
    results by Watrous (SIAM J Comput, 2009) and Unruh (Eurocrypt 2012),
    yet in general the problem remains elusive. We show that this is not
    only due to a lack of proof techniques: relative to an oracle,
    classically secure proofs and proofs of knowledge are insecure in
    the quantum setting.

    More specifically, sigma-protocols, the Fiat-Shamir construction, and
    Fischlin's proof system are quantum insecure under assumptions that
    are sufficient for classical security. Additionally, we show that for
    similar reasons, computationally binding commitments provide almost no
    security guarantees in a quantum setting.

    To show these results, we develop the "pick-one trick", a general
    technique that allows an adversary to find one value satisfying a
    given predicate, but not two.

  • Allan Grønlund (MADALGO, Aarhus University) , Seth Pettie (University of Michigan)

    Abstract:
    The 3SUM problem is to decide, given a set of $n$ real numbers, whether any three sum to zero.
    We prove that the decision tree complexity of 3SUM is $O(n^{3/2}\sqrt{\log n})$, that there is a randomized 3SUM algorithm running in $O(n^2 (\log \log n)^2/ \log n)$ time, and a deterministic algorithm running in $O(n^2(\log \log n)^{5/3}/(\log n)^{2/3})$ time.
    These results refute the strongest version of the 3SUM conjecture,
    namely that its decision tree (and algorithmic) complexity is $\Omega(n^2)$.

    Our results lead directly to improved algorithms for
    $k$-variate linear degeneracy testing for all odd $k\ge 3$.
    The problem is to decide, given a linear function
    $f(x_1,\ldots,x_k) = \alpha_0 + \sum_{1\le i\le k} \alpha_i x_i$ and a set $S\subset \mathbb{R}$,
    whether $0\in f(S^k)$.
    We show the decision tree complexity is $O(n^{k/2}\sqrt{\log n})$
    and give algorithms running in time $O(n^{(k+1)/2} / \poly(\log n))$.

    Finally, we give a subcubic algorithm for a generalization of the $(\min,+)$-product over real-valued matrices and apply it
    to the problem of finding zero-weight triangles in weighted graphs. A depth-$O(n^{5/2}\sqrt{\log n})$ decision tree
    is given for this problem, as well as an algorithm running in time $O(n^3 (\log\log n)^2/\log n)$.
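
    For context, the baseline these results improve on is the classic quadratic-time two-pointer decision procedure for 3SUM, sketched here in Python:

        def three_sum(xs):
            """Decide whether any three numbers in xs sum to zero, in O(n^2)
            time after sorting; the results above shave subpolynomial factors
            off this bound and give an O(n^{3/2} sqrt(log n)) decision tree."""
            xs = sorted(xs)
            for i in range(len(xs) - 2):
                lo, hi = i + 1, len(xs) - 1
                while lo < hi:
                    s = xs[i] + xs[lo] + xs[hi]
                    if s == 0:
                        return True
                    if s < 0:
                        lo += 1
                    else:
                        hi -= 1
            return False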

  • Craig Gentry (IBM Research) , Shai Halevi (IBM Research) , Mariana Raykova (SRI) , Daniel Wichs (Northeastern University)

    Abstract:
    We construct the first schemes that allow a client to privately outsource arbitrary program executions to a remote server while ensuring that: (I) the client's work is small and essentially independent of the complexity of the computation being outsourced, and (II) the server's work is only proportional to the run-time of the computation on a random access machine (RAM), rather than its potentially much larger circuit size. Furthermore, our solutions are non-interactive and have the structure of reusable garbled RAM programs, addressing an open question of Lu and Ostrovsky (Eurocrypt 2013). We also construct schemes for an augmented variant of the above scenario, where the client can initially outsource a large private and persistent database to the server, and later outsource arbitrary program executions with read/write access to this database.

    Our solutions are built from non-reusable garbled RAM in conjunction with new types of reusable garbled circuits that are more efficient than prior solutions but only satisfy weaker security. For the basic setting without a persistent database, we can instantiate the required reusable garbled circuits using indistinguishability obfuscation. For the more complex setting with a persistent database we need stronger notions of obfuscation. Our basic solution also requires the client to perform a one-time preprocessing step to garble a program at the cost of its RAM run-time, and we can avoid this cost using stronger notions of obfuscation. It remains an open problem to instantiate these new types of reusable garbled circuits under weaker assumptions, possibly avoiding obfuscation altogether.

  • Yin Tat Lee (MIT) , Aaron Sidford (MIT)

    Abstract:
    In this paper we present a new algorithm for solving linear programs that requires only O~(sqrt(rank(A))L) iterations, where A is the constraint matrix of a linear program with m constraints and n variables and L is the bit complexity of the linear program. Each iteration of our method consists of solving O~(1) linear systems and additional nearly linear time computation.

    Our method improves upon the previous best iteration bound by a factor of (m/rank(A))^(1/4) for methods with polynomial time computable iterations and by sqrt(m/rank(A)) for methods which solve at most O~(1) linear systems in each iteration. Our method is parallelizable and amenable to linear algebraic techniques for accelerating the linear system solver. As such, up to polylogarithmic factors we either match or improve upon the best previous running times for solving linear programs in both depth and work for different ratios of m and rank(A).

    Moreover, our method matches up to polylogarithmic factors a theoretical limit established by Nesterov and Nemirovski in 1994 regarding the use of a ``universal barrier'' for interior point methods, thereby resolving a long-standing open question regarding the running time of polynomial time interior point methods for linear programming.

  • Yuan Li (University of Chicago) , Alexander Razborov (University of Chicago) , Benjamin Rossman (National Institute of Informatics)

    Abstract:
    Let $P$ be a fixed graph (hereafter called a "pattern"), and let $\Subgraph(P)$ denote the problem of deciding whether a given graph $G$ contains a subgraph isomorphic to $P$. We are interested in the $\AC^0$-complexity of this problem, determined by the smallest possible exponent $C(P)$ for which $\Subgraph(P)$ possesses bounded-depth circuits of size $n^{C(P)+o(1)}$. Motivated by previous research in the area, we also consider its "colored" version $\Subgraph_\col(P)$ in which the target graph $G$ is $V(P)$-colored, and the average-case version $\Subgraph_\ave(P)$ under the distribution $G(n,n^{-\theta(P)})$, where $\theta(P)$ is the threshold exponent of $P$. Defining $C_\col(P)$ and $C_\ave(P)$ analogously to $C(P)$, our main contributions can be summarized as follows.

    * $C_\col(P)$ coincides with the tree-width of the pattern $P$ within a logarithmic factor. This shows that the previously known upper bound by Alon, Yuster, Zwick \cite{AYZ95} is almost tight.

    * We give a characterization of $C_\ave(P)$ in purely combinatorial terms within a factor of 2. This shows that the lower bound technique of Rossman \cite{Ros08} is essentially tight, for any pattern $P$ whatsoever.

    * We prove that if $Q$ is a minor of $P$ then $\Subgraph_\col(Q)$ is reducible to $\Subgraph_\col(P)$ via a linear-size monotone projection. At the same time, we prove that there is {\em no} monotone projection whatsoever that reduces $\Subgraph(M_3)$ to $\Subgraph(P_3 + M_2)$ ($P_3$ is a path on 3 vertices, $M_k$ is a matching with $k$ edges, and "+" stands for the disjoint union). This result strongly suggests that the colored version of the subgraph isomorphism problem is much better structured and well-behaved than the standard (worst-case, uncolored) one.

  • Michael Kapralov (MIT) , Yin Tat Lee (MIT) , Cameron Musco (MIT) , Christopher Musco (MIT) , Aaron Sidford (MIT)

    Abstract:
    We present the first single pass algorithm for computing spectral sparsifiers of graphs in the dynamic semi-streaming model. Given a single pass over a stream containing insertions and deletions of edges to a graph G, our algorithm maintains a randomized linear sketch of the incidence matrix of G into dimension O((1/epsilon^2) n polylog(n)). Using this sketch, at any point, the algorithm can output a (1 +/- epsilon) spectral sparsifier for G with high probability.

    While O((1/epsilon^2) n polylog(n)) space algorithms are known for computing *cut sparsifiers* in dynamic streams [AGM12b, GKP12] and spectral sparsifiers in *insertion-only* streams [KL11], prior to our work, the best known single pass algorithm for maintaining spectral sparsifiers in dynamic streams required sketches of dimension Omega((1/epsilon^2) n^(5/3)) [AGM14].

    To achieve our result, we show that, using a coarse sparsifier of G and a linear sketch of G's incidence matrix, it is possible to sample edges by effective resistance, obtaining a spectral sparsifier of arbitrary precision. Sampling from the sketch requires a novel application of ell_2/ell_2 sparse recovery, a natural extension of the ell_0 methods used for cut sparsifiers in [AGM12b]. Recent work of [MP12] on row sampling for matrix approximation gives a recursive approach for obtaining the required coarse sparsifiers.

    Under certain restrictions, our approach also extends to the problem of maintaining a spectral approximation for a general matrix A^T A given a stream of updates to rows in A.
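
    To convey the effective-resistance sampling step in an offline setting, here is a minimal numpy sketch (our own illustration; the paper's contribution is performing such sampling from a small linear sketch under edge insertions and deletions, which this offline code does not attempt, and the oversampling constant below is illustrative):

        import numpy as np

        def resistance_sample(n, edges, eps, seed=0):
            """Sample edges of an unweighted graph on vertices 0..n-1 with
            probability proportional to effective resistance, reweighted so
            the sampled graph is an unbiased Laplacian estimator."""
            rng = np.random.default_rng(seed)
            B = np.zeros((len(edges), n))              # signed incidence matrix
            for k, (u, v) in enumerate(edges):
                B[k, u], B[k, v] = 1.0, -1.0
            Lpinv = np.linalg.pinv(B.T @ B)            # Laplacian pseudoinverse
            R = np.einsum('ij,jk,ik->i', B, Lpinv, B)  # R[e] = eff. resistance
            q = int(np.ceil(8 * n * np.log(n + 1) / eps ** 2))
            p = R / R.sum()
            counts = rng.multinomial(q, p)
            return [(edges[k], counts[k] / (q * p[k]))
                    for k in range(len(edges)) if counts[k] > 0]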

  • Uriel Feige (Weizmann Institute) , Tomer Koren (Technion) , Moshe Tennenholtz (Microsoft Research and Technion)

    Abstract:
    We consider sequential decision making in a setting where regret is measured with respect to a set of {\em stateful} reference policies, and feedback is limited to observing the rewards of the actions performed (the so-called ``bandit'' setting). If either the reference policies are stateless rather than stateful, or the feedback includes the rewards of all actions (the so-called ``expert'' setting), previous work shows that the optimal regret grows like $\Theta(\sqrt{T})$ in terms of the number of decision rounds $T$.

    The difficulty in our setting is that the decision maker unavoidably loses track of the internal states of the reference policies, and thus cannot reliably attribute rewards observed in a certain round to any of the reference policies.
    In fact, in this setting it is impossible for the algorithm to estimate which policy gives the highest (or even approximately highest) total reward.
    Nevertheless, we design an algorithm that achieves expected regret that is sublinear in $T$, of the form $O(T/\log^{1/4}{T})$. Our algorithm is based on a certain {\em local repetition lemma} that may be of independent interest. We also show that no algorithm can guarantee expected regret better than
    $O(T/\log^{3/2}{T})$.

  • Ilan Komargodski (Weizmann Institute) , Tal Moran (IDC) , Moni Naor (Weizmann Institute) , Rafael Pass (Cornell University) , Alon Rosen (IDC) , Eylon Yogev (Weizmann Institute)

    Abstract:
    A program obfuscator takes a program and outputs an "obfuscated" version of it, where the goal is that the obfuscated program will not reveal much about its structure beyond what is apparent from executing it. There are several ways of formalizing this goal. Specifically, in indistinguishability obfuscation, first defined by Barak et al. (CRYPTO 2001), the requirement is that the results of obfuscating any two functionally equivalent programs (circuits) will be computationally indistinguishable. Recently, a fascinating candidate construction for indistinguishability obfuscation was proposed by Garg et al. (FOCS 2013). This has led to a flurry of discoveries of intriguing constructions of primitives and protocols whose existence was not previously known (for instance, fully deniable encryption by Sahai and Waters, STOC 2014). Most of them explicitly rely on additional hardness assumptions, such as one-way functions.

    Our goal is to get rid of this extra assumption. We cannot argue that indistinguishability obfuscation of all polynomial-time circuits implies the existence of one-way functions, since if $P = NP$, then program obfuscation (under the indistinguishability notion) is possible. Instead, the ultimate goal is to argue that if $P \neq NP$ and program obfuscation is possible, then one-way functions exist.

    Our main result is that if $NP \not\subseteq ioBPP$ and there is an efficient (even imperfect) indistinguishability obfuscator, then there are one-way functions. In addition, we show that the existence of an indistinguishability obfuscator implies (unconditionally) the existence of SZK-arguments for NP. This, in turn, provides an alternative version of our main result, based on the assumption of hard-on-average NP problems. To get some of our results we need obfuscators for simple programs such as CNFs.

  • Moshe Babaioff (Microsoft Research) , Nicole Immorlica (Microsoft Research) , Brendan Lucier (Microsoft Research) , S. Matthew Weinberg (MIT)

    Abstract:
    We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and his value for a set of items is additive. The seller aims to maximize his revenue. It is known that an optimal mechanism in this setting may be quite complex, requiring randomization [Hart-Reny12] and menus of infinite size [Daskalakis-Deckelbaum-Tzamos13]. Hart and Nisan [Hart-Nisan12] have initiated a study of two very simple pricing schemes for this setting: item pricing, in which each item is priced at its monopoly reserve; and bundle pricing, in which the entire set of items is priced and sold as one bundle. Hart and Nisan [Hart-Nisan12] have shown that neither scheme can guarantee more than a vanishingly small fraction of the optimal revenue. In sharp contrast, we show that for any distributions, the better of item and bundle pricing is a constant-factor approximation to the optimal revenue. We further discuss extensions to multiple buyers and to valuations that are correlated across items.
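
    The two schemes are easy to compare empirically. The following Monte Carlo sketch (our own illustrative code, not the paper's analysis) estimates the revenue of item pricing and of bundle pricing from samples of the value distributions:

        import numpy as np

        def srev_brev(samplers, trials=100000, seed=0):
            """Estimate SRev (each item at its empirical monopoly price) and
            BRev (the grand bundle at its empirical monopoly price) for an
            additive buyer; each sampler maps (rng, n) to n value draws."""
            rng = np.random.default_rng(seed)
            vals = np.column_stack([draw(rng, trials) for draw in samplers])

            def monopoly_revenue(v):
                v = np.sort(v)[::-1]     # price v[i] sells w.p. (i+1)/trials
                return float(np.max(v * np.arange(1, len(v) + 1) / len(v)))

            srev = sum(monopoly_revenue(vals[:, i]) for i in range(vals.shape[1]))
            brev = monopoly_revenue(vals.sum(axis=1))
            return srev, brev

        # e.g. two independent items, uniform on [0,1] and on [0,2]:
        # print(srev_brev([lambda r, n: r.uniform(0, 1, n),
        #                  lambda r, n: r.uniform(0, 2, n)]))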

  • Arkadev Chattopadhyay (Tata Institute of Fundamental Research) , Jaikumar Radhakrishnan (Tata Institute of Fundamental Research) , Atri Rudra (University at Buffalo, SUNY)

    Abstract:
    We provide the first communication lower bounds that are sensitive to the network topology for computing natural and simple functions by point-to-point message-passing protocols in the `Number in Hand' model. All previous lower bounds were either for the broadcast model or assumed full connectivity of the network. As a special case, we obtain bounds of the form \Omega(k^2 n) on the randomized communication complexity of many simple functions for a broad class of networks with k distributed nodes, each holding an n-bit input string. The best previous bounds were of the form \Omega(kn).

    The main tool that we use for deriving our bounds is a new connection with the theory of metric embeddings. This enables us to prove a variety of results that include the following: a distributed XOR lemma that generalizes the recent work of Woodruff and Zhang (SODA'14); a tight bound (up to poly-log factors) on the randomized complexity of Element Distinctness, answering a question of Phillips, Verbin and Zhang (SODA'12); and new lower bounds for composed functions that were also left open in the work of Phillips et al. Finally, these bounds yield new topology-dependent bounds for several natural graph problems considered by Woodruff and Zhang (DISC'13).

  • Dana Moshkovitz (MIT)

    Abstract:
    The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two-prover game G^k in terms of the value of the game G and the number of repetitions k.
    Contrary to what one might have guessed, the value of G^k is not val(G)^k, but rather a more complicated expression depending on properties of G.
    Indeed, there are numerous works aiming to understand the value of repeated games, both for general games G and for special cases.
    A case of particular interest is that of projection games, where the answer of the first prover determines at most one acceptable answer for the second prover.

    In this work we give a simple transformation, which we call "fortification", that can be applied to any projection game.
    We show that for fortified games G, the value of the k-repeated game is approximately val(G)^k.
    This results in nearly a quadratic improvement in the size of the projection PCP with the lowest error known today.
    Unlike previous proofs of the parallel repetition theorem that relied on information theory or linear algebra, our proof is purely combinatorial and quite short.

    We then discuss the problem of derandomizing parallel repetition, and the limitations of the fortification idea in this setting.
    We point out a connection between the problem of derandomizing parallel repetition and the problem of composition.
    This connection could shed light on the so-called Projection Games Conjecture, which asks for projection PCP with minimal error.

  • Avishay Tal (Weizmann Institute)

    Abstract:
    We give a new and improved proof that the shrinkage exponent of De Morgan formulae is $2$. Namely, we show that for any Boolean function $f: \{-1,1\}^n \to \{-1,1\}$, setting each variable out of $x_1, \ldots, x_n$ with probability $1-p$ to a randomly chosen constant, reduces the expected formula size of the function by a factor of $O(p^2)$. This result is tight and improves the work of H{\aa}stad [SIAM J. C., 1998] by removing logarithmic factors.

    As a consequence of our results, the function defined by Andreev [MUMB., 1987], $A: \{-1,1\}^n \to \{-1,1\}$, which is in P, has formula size at least $\Omega(\frac{n^3}{\log^2 n \log^3\log n})$. This lower bound is tight up to the $\log^3\log n$ factor, and is the best known lower bound for functions in P. In addition, the functions defined by Komargodski et al. [FOCS, 2013], $h_r: \{-1,1\}^n \to \{-1,1\}$, which are also in P, cannot be computed correctly on a fraction greater than $1/2 + 2^{-r}$ of the inputs, by De Morgan formulae of size at most $\frac{n^3}{r^2 \mathrm{poly}\log n}$, for any parameter $r \le n^{1/3}$.

    The proof relies on a result from quantum query complexity by Laplante et al. [CC, 2006], H{\o}yer et al. [STOC, 2007] and Reichardt [SODA, 2011]: for any Boolean function $f$, $Q_2(f) \le O(\sqrt{L(f)})$, where $Q_2(f)$ is the bounded-error quantum query complexity of $f$, and $L(f)$ is the minimal size of a De Morgan formula computing $f$.

  • Tobias Christiani (IT University of Copenhagen) , Rasmus Pagh (IT University of Copenhagen)

    Abstract:
    The generation of pseudorandom elements over finite fields is fundamental to the space, time and randomness complexity of randomized algorithms and data structures.
    We consider the problem of generating $k$-independent random values in a word RAM model equipped with constant time addition and multiplication in $\F$, and present the first nontrivial construction of a generator that outputs each value in \emph{constant time}, independent of $k$.
    Our generator has period length $|\F|\poly \log k$ and uses $O(k \poly(\log k) \log |\F| )$ bits of space, which is optimal up to a $\poly \log k$ factor.
    We are able to bypass Siegel's lower bound on the time-space tradeoff for $k$-independent functions by a restriction to sequential evaluation.
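
    For contrast, the classical k-independent generator evaluates a uniformly random polynomial of degree k-1 over the field, paying Theta(k) time per value, precisely the per-value cost the new construction avoids. A minimal Python sketch, assuming p is prime:

        import random

        def poly_hash(k, p, seed=None):
            """Return a k-wise independent function from F_p to F_p:
            a random polynomial of degree < k, evaluated by Horner's rule
            in Theta(k) arithmetic operations per call."""
            rng = random.Random(seed)
            coeffs = [rng.randrange(p) for _ in range(k)]

            def h(x):
                acc = 0
                for c in coeffs:
                    acc = (acc * x + c) % p
                return acc

            return h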

  • Dimitris Achlioptas (University of California Santa Cruz and RACTI) , Fotis Iliopoulos (National Technical University of Athens)

    Abstract:
    We give an algorithmic local lemma by establishing a sufficient condition for the uniform random walk on a directed graph to reach a sink quickly. Our work is inspired by Moser's entropic method proof of the Lov\'{a}sz Local Lemma for satisfiability and completely bypasses the Probabilistic Method formulation of the LLL. In particular our method works when the set of underlying objects is entirely unstructured, fully dispensing with the notion of variables. Similarly to Moser's argument, the key point is that algorithmic progress is measured in terms of entropy rather than energy (number of violated constraints) so that termination can be established even under the proliferation of states in which \emph{every} step of the algorithm (random walk) increases the total number of violated constraints.

  • Umang Bhaskar (Caltech) , Katrina Ligett (Caltech, MC305-16, Pasadena, CA 91125, USA) , Leonard J. Schulman (Caltech, MC305-16, Pasadena, CA 91125, USA) , Chaitanya Swamy (Dept. of Combinatorics and Optimization, Univ. Waterloo, Waterloo, ON N2L 3G1, Canada)

    Abstract:
    The typical approach for analyzing network routing games assumes, right at the onset, that one has precise, detailed information about the underlying latency functions. Such information may however be unavailable and/or difficult to obtain. Moreover, in various situations, one is primarily interested in enforcing a desirable target flow as the equilibrium by suitably influencing players’ behavior through limited stimuli applied to the underlying routing game. For example, one may seek to enforce a desired traffic pattern in a road network by imposing suitable tolls on the roads. We raise the question of whether one can achieve target flows as equilibria without knowing the underlying latency functions.

    Our main result gives a crisp positive answer to this question. We show that, under fairly general settings, one can efficiently compute edge tolls that induce a given target multicommodity flow in a nonatomic routing game, using a polynomial number of queries to an oracle that takes candidate tolls as input and returns the resulting equilibrium flow. This result is obtained via a novel application of the ellipsoid method, applies to arbitrary multicommodity settings, and to both linear and non-linear latency functions (given an approximate-equilibrium oracle). Our algorithm extends easily to a variety of settings, most notably: (i) the setting where certain edges cannot be tolled and/or there is an upper bound on the total toll paid by a user; and (ii) general nonatomic congestion games. We obtain tighter bounds on the query complexity for series-parallel networks and single-commodity routing games with linear latency functions, and complement these with a query-complexity lower bound applicable even to single-commodity routing games on parallel-link graphs with linear latency functions.

    We also explore the use of Stackelberg routing to achieve target equilibria. We show that on a series-parallel graph with m edges, a Stackelberg routing inducing the given target flow as an equilibrium can be computed in polytime and with at most m queries. For a stronger problem that roughly corresponds to determining the delay functions, we obtain strong query- and computational-complexity lower bounds.

    Our results build upon various new techniques and results that we obtain pertaining to computation of approximate equilibrium, connections between different notions of approximate equilibrium, properties of multicommodity flows and tolls in series-parallel graphs, and sensitivity of equilibrium flow with respect to tolls. Our results demonstrate that it is indeed possible to circumvent the potentially-onerous task of modeling latency functions, and yet obtain meaningful results for the underlying routing game. Our array of upper and lower bounds indicate the richness of our query model, and suggest a promising direction for further research.

  • Hyung-Chan An (School of Computer and Communication Sciences, EPFL) , Mohit Singh (Microsoft Research) , Ola Svensson (School of Computer and Communication Sciences, EPFL)

    Abstract:
    Linear programming has played a key role in the study of algorithms for combinatorial optimization problems. In the field of approximation algorithms, this is well illustrated by the uncapacitated facility location problem. A variety of algorithmic methodologies, such as LP-rounding and the primal-dual method, have been applied to and evolved from algorithms for this problem. Unfortunately, this collection of powerful algorithmic techniques had not yet been applicable to the more general capacitated facility location problem. In fact, all of the known algorithms with good performance guarantees were based on a single technique, local search, and no linear programming relaxation was known to efficiently approximate the problem.

    In this paper, we present a linear programming relaxation with constant integrality gap for capacitated facility location. We demonstrate that fundamental concepts from matching theory, including alternating paths and residual networks, provide key insights that lead to the strong relaxation. Our algorithmic proof of the integrality gap is obtained by finally accessing the rich toolbox of LP-based methodologies: we present a constant-factor approximation algorithm based on LP-rounding. Our results resolve one of the ten open problems selected by the textbook on approximation algorithms of Williamson and Shmoys.

  • Radu Curticapean (Department of Computer Science, Saarland University, Saarbrücken, Germany) , Dániel Marx (Computer and Automation Research Institute, Hungarian Academy of Sciences (MTA SZTAKI), Budapest, Hungary)

    Abstract:
    For a class \H of graphs, #Sub(\H) is the counting problem
    that, given a graph H\in \H and an arbitrary graph G, asks for
    the number of subgraphs of G isomorphic to H. It is
    known that if \H has bounded vertex-cover number (equivalently,
    the size of the maximum matching in \H is bounded), then #Sub(\H) is
    polynomial-time solvable. We complement this result with a corresponding
    lower bound: if \H is any recursively enumerable class of
    graphs with unbounded vertex-cover number, then #Sub(\H) is
    #W[1]-hard parameterized by the size of H and hence not
    polynomial-time solvable and not even fixed-parameter tractable,
    unless FPT=#W[1].

    As a first step of the proof, we show that counting k-matchings in
    bipartite graphs is #W[1]-hard. Recently, Curticapean [ICALP 2013]
    proved the #W[1]-hardness of
    counting k-matchings in general graphs; our result strengthens
    this statement to bipartite graphs with a considerably simpler proof
    and even shows that, assuming the
    Exponential Time Hypothesis (ETH), there is no f(k)n^{o(k/\log k)}
    time algorithm for counting k-matchings in bipartite graphs for
    any computable function f(k). As a consequence, we obtain an
    independent and somewhat simpler proof of the classical result of
    Flum and Grohe [SICOMP 2004] stating that counting
    paths of length k is #W[1]-hard, as well as a similar
    almost-tight ETH-based lower bound on the exponent.

  • Nabil H. Mustafa (ESIEE Paris, France) , Rajiv Raman (Indraprastha Institute of Information Technology (IIIT) Delhi, India.) , Saurabh Ray (Computer Science Department, New York University, Abu Dhabi, U.A.E.)

    Abstract:
    Weighted geometric set-cover problems arise naturally in several geometric and non-geometric settings (e.g. the breakthrough of Bansal-Pruhs (FOCS 2010) reduces a wide class of machine
    scheduling problems to weighted geometric set-cover). More than two decades of research has succeeded in settling the $(1+\eps)$-approximability status for most geometric set-cover problems, except for four basic scenarios, which remain unsettled. One is that of weighted disks in the plane for which, after a series of papers, Varadarajan (STOC 2010) presented a clever
    \emph{quasi-sampling} technique, which together with improvements by Chan et al. (SODA 2012), yielded an $O(1)$-approximation algorithm. Even for the unweighted case, a PTAS for a fundamental class of objects called pseudodisks (which includes disks, unit-height rectangles, translates of convex sets etc.) is currently unknown. Another fundamental case is weighted halfspaces in $\R^3$, for which a PTAS is currently lacking. In this paper, we present a QPTAS for all of these remaining problems. Our results are based on the separator framework of Adamaszek-Wiese (FOCS 2013, SODA 2014), who recently obtained a QPTAS for weighted independent set of polygonal regions. This rules out the possibility that these problems are APX-hard,
    assuming $\textbf{NP} \nsubseteq \textbf{DTIME}(2^{polylog(n)})$. Together with the recent work of Chan-Grant (CGTA 2014), this settles the APX-hardness status for all natural geometric set-cover problems.

  • Eshan Chattopadhyay (University of Texas at Austin) , David Zuckerman (University of Texas at Austin)

    Abstract:
    Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs \cite{DPW10} as an elegant generalization of the classical notions of error correction and error detection, where the corruption of a codeword is viewed as a tampering function acting on it. Informally, a non-malleable code with respect to a family of tampering functions $\mathcal{F}$ consists of a randomized encoding function $\Enc$ and a deterministic decoding function $\Dec$ such that for any $m$, $\Dec(\Enc(m))=m$. Further, for any tampering function $f \in \mathcal{F}$ and any message $m$, $\Dec(f(\Enc(m)))$ is either $m$ or is $\epsilon$-close to a distribution $D_f$ independent of $m$.

    Of particular importance are non-malleable codes in the $C$-split-state model. In this model, the codeword is partitioned into $C$ equal sized blocks and the tampering function family consists of functions $(f_1,\ldots,f_C)$ such that $f_i$ acts on the $i^{th}$ block. For $C=1$ there cannot exist non-malleable codes. For $C=2$, the best known explicit construction is by Aggarwal, Dodis and Lovett \cite{ADL13} who achieve rate $\Omega(n^{-6/7})$ and error $2^{-\Omega(n^{1/7})}$.

    In our main result, we construct efficient non-malleable codes in the $C$-split-state model for $C=10$ that achieve constant rate and error $=2^{-\Omega(n)}$. These are the first explicit codes of constant rate in the $C$-split-state model for any $C=o(n)$, that do not rely on any unproven assumptions. We also improve the error in the explicit non-malleable codes constructed in the bit tampering model in \cite{CG14b}.

    Our constructions use an elegant connection found between seedless non-malleable extractors and non-malleable codes by Cheraghchi and Guruswami \cite{CG14b}. We explicitly construct such seedless non-malleable extractors for $10$ independent sources and deduce our results on non-malleable codes based on this connection. Our constructions of extractors use encodings and a new variant of the sum-product theorem.


  • Tim Roughgarden (Stanford)

    Abstract:
    When and how do communication and complexity lower bounds for an
    optimization problem $\Pi$ yield lower bounds on the quality of
    equilibria in games derived from $\Pi$? This paper gives three
    answers, each motivated by well-studied problems in mechanism or
    network game design.
    \begin{enumerate}

    \item Suppose $\Pi$ is the problem of welfare-maximization in a
    combinatorial auction. We prove that the worst-case price of anarchy (POA) of
    approximate mixed Nash equilibria of every
    ``simple'' mechanism is bounded below by the communication hardness
    of approximating $\Pi$, where ``simple'' means that the action space
    of every player is sub-doubly-exponential in the number of items $m$.
    This result gives senses in which certain well-studied auction formats,
    such as simultaneous first-price auctions and greedy combinatorial
    auctions, are optimal among all simple mechanisms.

    \item Consider the same problem $\Pi$, but with succinctly described
    bidder valuations. We prove that, assuming $coNP \not\sse MA$, the
    worst-case POA of approximate mixed Nash equilibria of every
    ``tractable'' mechanism is bounded below by the
    $NP$-hardness of approximating the underlying optimization
    problem, where the most stringent
    requirement for ``tractability'' is that a bidder can compute in
    polynomial time an approximate best response given mixed strategies
    of the other players.
    For example, if $coNP \not\sse MA$, then greedy combinatorial auctions
    have essentially optimal worst-case POA for
    single-minded bidders among all tractable mechanisms.
    \item For games derived from a problem $\Pi$ that have
    action set sizes polynomial in the description of the corresponding
    instance $I \in \Pi$ --- including congestion and scheduling games ---
    $NP$-hardness of approximating $\Pi$ translates under mild
    conditions to lower bounds on the worst-case POA of approximate
    mixed Nash equilibria and exact correlated equilibria.

    As an example, we show how techniques for proving hardness of
    approximation can be used to prove an exponential (in the degree $d$)
    lower bound on the POA in congestion games (and several variants) with
    bounded-degree
    polynomial cost functions with nonnegative coefficients. This lower
    bound is conditional on appropriate complexity assumptions but
    bypasses the need for an explicit construction.
    \end{enumerate}

  • Barna Saha (University of Massachusetts Amherst)

    Abstract:
    Given a string $\sigma$ over alphabet $\Sigma$ and a grammar $G$ defined over the same alphabet, what is the minimum number of repairs (insertions, deletions and substitutions) required to map $\sigma$ into a valid member of $G$? The study of this {\em language edit distance problem} has very early roots. In 1972, Aho and Peterson provided a dynamic programming algorithm for context free languages that runs in $O(|G|^2n^3)$ time, $n$ being the string length and $|G|$ the size of the grammar. While later improvements reduced the running time to $O(|G|n^3)$ (Myers'95), the cubic running time in the input length was a major bottleneck in applying these algorithms to the multitude of applications of the language edit distance problem.

    In many of these applications, either as a modeling grammar or as a major building block, one fundamental context free language plays a very important role--this is the language of well-balanced parentheses, also known as the Dyck language. The Dyck$(s)$ language (where $s$ is the number of distinct types of parentheses) has been pivotal in the development of formal language theory, and its nondeterministic counterpart is known to be the hardest context free grammar. Checking whether a string of parentheses is well-balanced is a prototypical example of stack-based algorithms. Dyck$(s)$ language edit distance is a {\em significant generalization} of the well-studied {\em string edit distance problem}, and has numerous applications ranging from data quality in databases and generating automated error-correcting parsers in compiler optimization to structure prediction problems in biological sequences.
    Recognizing the Dyck language has attracted attention in other models of computation (property testing: FOCS'99, RANDOM-APPROX'01; streaming: STOC'10, FOCS'10), but recognizing a language is much easier than computing edit distance to a language. In this work, we provide the first efficient (near-linear time) algorithm to tightly compute the edit distance of any string to the class of well-balanced strings, beating the cubic running time of the dynamic programming approach.
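
    Recognition is indeed a one-pass stack exercise, as the following Python sketch shows; the paper's subject, computing edit distance to the language, is far harder:

        def recognize_dyck(word, pairs):
            """Linear-time stack recognizer for Dyck(s), where pairs maps
            each opening symbol to its closing one, e.g.
            pairs = {'(': ')', '[': ']'} gives s = 2."""
            stack, closers = [], set(pairs.values())
            for c in word:
                if c in pairs:                 # opener: expect its closer later
                    stack.append(pairs[c])
                elif c in closers:             # closer: must match last opener
                    if not stack or stack.pop() != c:
                        return False
                else:
                    return False               # symbol outside the alphabet
            return not stack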


    Our main result is an algorithm for edit distance computation to the Dyck$(s)$ language for any positive integer $s$ that runs in $\tilde{O}(n^{1+\epsilon})$\footnote{$\tilde{O}(n)=O(n\polylog(n))$} time and achieves an approximation factor of $O(\frac{1}{\epsilon}\beta(n)\log{|OPT|})$, for any $\epsilon > 0$. Here $OPT$ is the optimal edit distance to Dyck$(s)$ and $\beta(n)$ is the best approximation factor known for the simpler problem of string edit distance running in analogous time. If we allow $O(n^{1+\epsilon}+|OPT|^2n^{\epsilon})$ time, then the approximation factor can be reduced to $O(\frac{1}{\epsilon}\log{|OPT|})$. Since the best known near-linear time algorithm for the string edit distance problem has $\beta(n)=\polylog(n)$, under the near-linear time computation model both the string edit distance and the Dyck$(s)$ language edit distance problem have a $\polylog(n)$ approximation factor. This comes as a surprise since Dyck$(s)$ is a significant generalization of the string edit distance problem and their exact computations via dynamic programming show a marked difference in time complexity.

    Rather less surprisingly, we show that the framework for efficiently approximating edit distance to Dyck$(s)$ can be utilized for many other languages. We illustrate this by considering various memory checking languages (studied extensively under distributed verification) such as Stack, Queue, PQ and Deque, which consist of valid transcripts of stacks, queues, priority queues and double-ended queues respectively. Therefore, any language that can be recognized by these data structures can also be repaired efficiently by our algorithm.

  • Adi Livnat (Virginia Tech) , Christos Papadimitriou (University of California at Berkeley) , Aviad Rubinstein (University of California at Berkeley) , Andrew Wan (Simons Institute, UC Berkeley) , Gregory Valiant (Stanford University)

    Abstract:
    We show that, if truth assignments on n variables reproduce through recombination so that satisfaction of a particular Boolean function confers a small evolutionary advantage, then a polynomially large population over polynomially many generations (polynomial in n and the inverse of the initial satisfaction probability) will end up almost surely consisting exclusively of satisfying truth assignments. We argue that this theorem sheds light on the problem of the evolution of complex adaptations.
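
    A toy simulation of the process is easy to write down (the parameters below are our own illustrative choices; the theorem concerns polynomially large populations, polynomially many generations, and a small advantage):

        import random

        def evolve(f, n, pop_size, generations, advantage=0.05, seed=0):
            """Assignments reproduce through recombination: parents are drawn
            with weight 1 + advantage if they satisfy f, and each child bit
            comes from a uniformly random parent. Returns the fraction of
            the final population satisfying f."""
            rng = random.Random(seed)
            pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
            for _ in range(generations):
                w = [1.0 + advantage if f(x) else 1.0 for x in pop]
                nxt = []
                for _ in range(pop_size):
                    a, b = rng.choices(pop, weights=w, k=2)
                    nxt.append([a[i] if rng.random() < 0.5 else b[i]
                                for i in range(n)])
                pop = nxt
            return sum(1 for x in pop if f(x)) / pop_size

        # e.g. majority as the Boolean function:
        # print(evolve(lambda x: sum(x) > len(x) // 2, n=11,
        #              pop_size=500, generations=200))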

  • Vipul Goyal (Microsoft Research India) , Silas Richelson (UCLA) , Alon Rosen (IDC Herzliya) , Margarita Vald (Tel Aviv University)

    Abstract:
    In their seminal work on non-malleable cryptography, Dolev, Dwork and Naor, showed how to construct a non-malleable commitment with logarithmically-many "rounds"/"slots", the idea being that any adversary may successfully maul in some slots but would fail in at least one. Since then new ideas have been introduced, ultimately resulting in constant-round protocols based on any one-way function. Yet, in spite of this remarkable progress, each of the known constructions of non-malleable commitments leaves something to be desired.

    In this paper we propose a new technique that allows us to construct a non-malleable protocol with only a single ``slot'', and to improve in at least one aspect over each of the previously proposed protocols. Two direct byproducts of our new ideas are a four round non-malleable commitment and a four round non-malleable zero-knowledge argument, the latter matching the round complexity of the best known zero-knowledge argument (without the non-malleability requirement). The protocols are based on the existence of one-way permutations (or alternatively one-way functions with an extra round) and admit very efficient instantiations via standard homomorphic commitments and sigma protocols.

    Our analysis relies on algebraic reasoning, and makes use of error correcting codes in order to ensure that committers' tags differ in many coordinates. One way of viewing our construction is as a method for combining many atomic sub-protocols in a way that simultaneously amplifies soundness and non-malleability, thus requiring much weaker guarantees to begin with, and resulting in a protocol which is much trimmer in complexity compared to the existing ones.

  • Dorit Aharonov (Hebrew University) , Aram Harrow (MIT) , Zeph Landau (UC Berkeley) , Daniel Nagaj (UC Berkeley) , Mario Szegedy (Rutgers) , Umesh Vazirani (UC Berkeley)

    Abstract:
    We introduce a technique for applying quantum expanders in a distributed fashion, and use it to
    solve two basic questions: testing whether a bipartite quantum state shared by two parties is
    the maximally entangled state and disproving a generalized area law. In the process these two
    questions, which appear completely unrelated, turn out to be two sides of the same coin. Strikingly,
    in both cases a constant amount of resources is used to verify a global property.

  • George Giakkoupis (INRIA Rennes) , Philipp Woelfel (University of Calgary)

    Abstract:
    In this paper we settle an open question by determining the remote memory reference (RMR) complexity of randomized mutual exclusion on the Distributed Shared Memory (DSM) model with atomic registers in a weak (slightly stronger than oblivious) adversary model.
    In particular, we present a mutual exclusion algorithm for the DSM model that has constant expected amortized RMR complexity in the oblivious adversary model and is deterministically deadlock-free.
    The algorithm is fairly simple to implement.
    Prior to this work, no randomized algorithm that achieves $o(\log n/\log\log n)$ RMR complexity was known in this model.
    Our algorithm compares favorably with one by Bender and Gilbert (FOCS 2011), which has expected amortized RMR complexity of $O(\log^2\log n)$ in the CC model, and provides only probabilistic deadlock-freedom (with high probability).

  • Bartlomiej Bosek (Jagiellonian University) , Dariusz Leniowski (University of Warsaw) , Piotr Sankowski (University of Warsaw) , Anna Zych (University of Warsaw)

    Abstract:
    In this paper we study the problem of maintaining maximum size matchings in incremental
    bipartite graphs. In this problem a bipartite graph $G$ between $n$ clients and $n$ servers is
    revealed online. The clients arrive in an arbitrary order and request to be matched to
    a subset of servers. In our model we want to maximize the matching size between clients and servers
    and allow clients to be switched between servers, i.e., after a client arrives we find an augmenting path
    from a client to a free server. Our goals in this model are twofold. First, we want to minimize the number of times
    clients are reallocated between the servers. Second, we want to give fast algorithms
    that recompute such reallocations.

    As for the number of changes, we propose a greedy algorithm that chooses an augmenting path $\pi$
    that minimizes the maximum number of times each edge in $\pi$ was used by augmenting paths so far.
    We show that this algorithm reallocates each client $O(\sqrt{n})$ times. This gives an $O(n^{3/2})$ bound on the total
    number of changes, which resolves the main open
    question raised by Chaudhuri {\it et al.} (INFOCOM'09), who asked whether the naive $O(n^2)$ upper bound could be improved.
    Our bound of $O(n^{3/2})$ is accompanied by a tight example. Next, we argue that
    the same bound holds in the decremental case. Moreover, we show incremental and decremental algorithms
    that maintain a $(1-\varepsilon)$-approximate matching with a total of $O(\varepsilon^{-1}n)$ reallocations, for
    any $\varepsilon>0$.
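
    One concrete way to realize this greedy rule is a bottleneck variant of Dijkstra's algorithm over alternating paths, sketched below in Python (the data layout and names are our own illustration, not the paper's implementation; usage is a collections.defaultdict(int) over (client, server) pairs, and clients and servers are integer ids):

        import heapq
        from collections import defaultdict

        def greedy_augment(adj, usage, match_c, match_s, new_client):
            """Augment from new_client along an alternating path that
            minimizes the maximum usage count over its edges, then flip
            the path and record the new usage counts."""
            INF = float('inf')
            best = {('c', new_client): 0}
            parent = {}
            pq = [(0, 'c', new_client)]
            target = None
            while pq:
                b, side, v = heapq.heappop(pq)
                if b > best.get((side, v), INF):
                    continue                    # stale heap entry
                if side == 'c':                 # leave a client via a non-matching edge
                    for s in adj[v]:
                        if match_c.get(v) == s:
                            continue
                        nb = max(b, usage[(v, s)])
                        if nb < best.get(('s', s), INF):
                            best[('s', s)], parent[('s', s)] = nb, v
                            heapq.heappush(pq, (nb, 's', s))
                else:                           # at a server
                    c = match_s.get(v)
                    if c is None:               # first free server popped is optimal
                        target = v
                        break
                    nb = max(b, usage[(c, v)])  # traverse its matching edge back
                    if nb < best.get(('c', c), INF):
                        best[('c', c)], parent[('c', c)] = nb, v
                        heapq.heappush(pq, (nb, 'c', c))
            if target is None:
                return False                    # matching size cannot grow
            s = target
            while True:                         # unwind parents, flipping the path
                c = parent[('s', s)]
                usage[(c, s)] += 1
                match_c[c], match_s[s] = s, c
                if c == new_client:
                    return True
                s_prev = parent[('c', c)]       # server c was matched to before
                usage[(c, s_prev)] += 1         # matching edges on the path count too
                s = s_prev

        # usage = defaultdict(int); match_c, match_s = {}, {}
        # greedy_augment({1: [10], 2: [10, 11]}, usage, match_c, match_s, 1)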

    Finally, we address the question of how to efficiently compute paths given by this greedy
    algorithm. We show that by introducing proper amortization we can obtain an incremental
    algorithm that maintains the maximum size matching in total $O(\sqrt{n}m)$ time. This matches
    the running time of one of the fastest static maximum matching algorithms that was given by Hopcroft and
    Karp (SIAM J. Comput '73). We extend our result to the decremental case, where we give the same total bound
    on the running time. Additionally, we show $O(\varepsilon^{-1}m)$ time incremental and decremental
    algorithms that maintain $(1-\varepsilon)$-approximate matching for any $\varepsilon>0$. Observe
    that this bound matches the running time of the fastest approximate solution as well.

  • Anat Ganor (Weizmann Institute) , Gillat Kol (IAS) , Ran Raz (Weizmann Institute & IAS)

    Abstract:
    We show an exponential gap between communication complexity and information complexity, by giving an explicit example for a communication task (relation), with information complexity <= O(k), and distributional communication complexity >= 2^k. This shows that a communication protocol cannot always be compressed to its internal information. By a result of Braverman, our gap is the largest possible. By a result of Braverman and Rao, our example shows a gap between communication complexity and amortized communication complexity, implying that a tight direct sum result for distributional communication complexity cannot hold.

  • Parinya Chalermsook (Max-Planck-Institut fur Informatik) , Bundit Laekhanukit (McGill University & Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA)) , Danupon Nanongkai (University of Vienna)

    Abstract:
    The study of graph products is a major research topic and typically concerns the term f(G ∗ H), e.g., to show that f(G ∗ H) = f(G)f(H). In this paper, we study graph products in a non-standard form f(R[G ∗ H]) where R is a “reduction”, a transformation of any graph into an instance of an intended optimization problem. We resolve some open problems as applications.

    The first problem is minimum consistent deterministic finite automaton (DFA). We show a tight n^{1-epsilon}-approximation hardness, improving the n^{1/14-epsilon}-hardness of [Pitt and Warmuth, STOC 1989 and JACM 1993], where n is the sample size. (In fact, we also give improved hardnesses for the case of acyclic DFA and NFA.) Due to Board and Pitt [Theoretical Computer Science 1992], this implies the hardness of properly learning DFAs assuming NP != RP (the weakest possible assumption). This affirmatively answers an open problem raised 25 years ago in the paper of Pitt and Warmuth and the survey of Pitt [AII 1989]. Prior to our results, this hardness only followed from the stronger hardness of improperly learning DFAs, which requires stronger assumptions, i.e., either a cryptographic or an average-case complexity assumption [Kearns and Valiant STOC 1989 and JACM 1994; Daniely et al. STOC 2014].

    The second problem is edge-disjoint paths (EDP) on directed acyclic graphs (DAGs). This problem is a “cousin” of the arguably most important problem in routing, EDP on undirected graphs, in the sense that both problems admit an O(sqrt{n})-approximation algorithm [Chekuri, Khanna, and Shepherd, Theory of Computing 2006] and a matching Omega(sqrt{n}) integrality gap, but so far only small hardness factors were known (n^{1/26-epsilon} for the case of DAGs [Chuzhoy et al., STOC 2007] and log^{1/2-epsilon} n for the case of undirected graphs [Andrews et al., Combinatorica 2010]). (n denotes the number of vertices.) Our techniques give a tight n^{1/2-epsilon}-hardness for EDP on DAGs, removing one of the barriers that mark the lack of our understanding of EDP.

    As by-products of our techniques: (i) We give a tight hardness of packing vertex-disjoint k-cycles for large k, complementing [Guruswami and Lee, ECCC 2014] and matching [Krivelevich et al., SODA 2005 and ALG 2007]. (ii) We give an alternative (and perhaps simpler) proof for the hardness of properly learning DNF, CNF and intersection of halfspaces [Alekhnovich et al. FOCS 2004 and JCSS 2008]. Our new concept reduces the task of proving hardnesses to merely analyzing graph product inequalities, which are often as simple as textbook exercises. This concept was inspired by, and can be viewed as a generalization of, the graph product subadditivity technique we previously introduced in SODA 2013. This more general concept might be useful in proving other hardness results as well.

  • Raef Bassily (Pennsylvania State University) , Adam Smith (Pennsylvania State University) , Abhradeep Thakurta (Yahoo! Labs)

    Abstract:
    In this paper, we initiate a systematic investigation of \emph{differentially private} algorithms for convex empirical risk minimization. Various instantiations of this problem have been studied before. We provide new algorithms and matching lower bounds for private ERM assuming only that each data point's contribution to the loss function is Lipschitz bounded and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex.

    Our algorithms run in polynomial time, and in some cases even match the optimal nonprivate running time (as measured by oracle complexity [Nemirovski and Yudin, 1983; Agarwal et al., 2012]). We give separate algorithms (and lower bounds) for $(\eps,0)$- and $(\eps,\delta)$-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different.

    Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contribution of each data point to the loss function is \emph{smooth}. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median.

  • Constantinos Daskalakis (EECS, MIT) , Qinxuan Pan (EECS, MIT)

    Abstract:
    Fictitious play is a natural dynamic for equilibrium play in zero-sum games, proposed by [Brown 1949], and shown to converge by [Robinson 1951]. Samuel Karlin conjectured in 1959 that fictitious play converges at rate O(1/\sqrt{t}) with the number of steps t. We disprove this conjecture, showing that, when the payoff matrix of the row player is the n x n identity matrix, fictitious play may converge with rate as slow as Omega(t^{-1/n}).
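
    The dynamic itself is straightforward to simulate; the following numpy sketch (the alternating-update variant, with arbitrary opening moves and numpy's default tie-breaking) tracks the duality gap of the empirical strategies, the quantity whose rate of convergence is in question:

        import numpy as np

        def fictitious_play_gap(A, T):
            """Run T rounds of fictitious play on a zero-sum game with
            row-player payoff matrix A; each player best-responds to the
            opponent's empirical mixture of past plays. Returns the duality
            gap of the empirical strategies (0 at an exact equilibrium)."""
            n, m = A.shape
            rc, cc = np.zeros(n), np.zeros(m)
            rc[0] = cc[0] = 1.0                           # arbitrary opening moves
            for _ in range(T - 1):
                rc[np.argmax(A @ (cc / cc.sum()))] += 1.0   # row best response
                cc[np.argmin((rc / rc.sum()) @ A)] += 1.0   # column best response
            x, y = rc / rc.sum(), cc / cc.sum()
            return float((A @ y).max() - (x @ A).min())

        # With A = np.eye(n), the gap's decay slows as n grows, in the
        # spirit of the Omega(t^{-1/n}) lower bound above.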


  • Sponsors

    • IEEE Computer Society Technical Committee on Mathematical Foundations of Computing
    • Microsoft Research
    • Bell Labs, Alcatel Lucent
    • DIMACS
    • National Science Foundation (NSF)
