Computational applications of noise sensitivity
Publisher:
  • Massachusetts Institute of Technology
  • 201 Vassar Street, W59-200 Cambridge, MA
  • United States
Order Number: AAI0805205
Pages: 1
Abstract

This thesis is concerned with the study of the noise sensitivity of boolean functions and its applications in theoretical computer science. Noise sensitivity is defined as follows: Let f be a boolean function and let ε ∈ (0, ½) be a parameter. Suppose a uniformly random string x is picked, and y is formed by flipping each bit of x independently with probability ε. Then the noise sensitivity of f at ε is defined to be the probability that f(x) and f(y) differ.
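The definition above translates directly into a Monte Carlo estimator: sample x uniformly, rerandomize each bit with probability ε, and count disagreements. A minimal illustrative sketch (function and parameter names are my own, not from the thesis):

```python
import random

def noise_sensitivity(f, n, eps, trials=100_000, seed=0):
    """Estimate NS_eps(f) for f: {0,1}^n -> {0,1} by sampling.

    Picks x uniformly at random, forms y by flipping each bit of x
    independently with probability eps, and returns the empirical
    frequency of f(x) != f(y).
    """
    rng = random.Random(seed)
    disagreements = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = [b ^ (rng.random() < eps) for b in x]
        disagreements += f(x) != f(y)
    return disagreements / trials

# Majority of 3 bits is noise-stable; parity is maximally sensitive.
maj3 = lambda x: int(sum(x) >= 2)
par3 = lambda x: sum(x) % 2
print(noise_sensitivity(maj3, 3, 0.1))
print(noise_sensitivity(par3, 3, 0.1))
```

At ε = 0.1 the estimates land near the exact values 0.136 for majority and 0.244 for parity, illustrating the stability gap that the thesis exploits.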

In this thesis we investigate the noise sensitivity of various classes of boolean functions, including majorities and recursive majorities, boolean threshold functions, and monotone functions. Following this we give new complexity-theoretic and algorithmic applications of noise sensitivity: (1) Regarding computational hardness amplification, we prove a general direct product theorem that tightly characterizes the hardness of a composite function g ∘ f in terms of an assumed hardness of f and the noise sensitivity of g. The theorem lets us prove a new result about the hardness on average of NP: If NP is (1 − 1/poly(n))-hard for circuits of polynomial size, then it is in fact (½ + o(1))-hard for circuits of polynomial size. (2) In the field of computational learning theory, we show that any class whose functions have low noise sensitivity is efficiently learnable. Using our noise sensitivity estimates for functions of boolean halfspaces we obtain new polynomial and quasipolynomial time algorithms for learning intersections, thresholds, and other functions of halfspaces. From noise sensitivity considerations we also give a polynomial time algorithm for learning polynomial-sized DNFs under the “Random Walk” model; we also give the first algorithm that learns the class of “junta” functions with efficiency better than that of the brute force algorithm. (3) Finally, we introduce a new collective coin-flipping problem whose study is equivalent to the study of “higher moments” of the noise sensitivity problem. We prove several results about this extension, and find optimal or near-optimal choices for the coin-flipping function for all asymptotic limits of the parameters. Our techniques include a novel application of the reverse Bonami-Beckner inequality. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
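The link between noise sensitivity and learnability in point (2) runs through the standard Fourier-analytic identity NS_ε(f) = ½ − ½ Σ_S (1 − 2ε)^{|S|} f̂(S)², which shows that low noise sensitivity concentrates Fourier weight on low-degree characters (where the low-degree learning algorithm can find it). A brute-force check of the identity for small n, as an illustrative sketch (names are my own, not the thesis's):

```python
from itertools import product

def fourier_coeffs(f, n):
    """Brute-force Fourier expansion of f: {0,1}^n -> {0,1},
    viewed as F(x) = 1 - 2*f(x) taking values in {-1, +1}.
    Returns a dict mapping each subset S (as a frozenset of
    coordinates) to the coefficient F_hat(S)."""
    points = list(product([0, 1], repeat=n))
    coeffs = {}
    for mask in product([0, 1], repeat=n):
        S = frozenset(i for i in range(n) if mask[i])
        total = 0.0
        for x in points:
            chi = (-1) ** sum(x[i] for i in S)  # character chi_S(x)
            total += (1 - 2 * f(list(x))) * chi
        coeffs[S] = total / len(points)
    return coeffs

def exact_noise_sensitivity(f, n, eps):
    """NS_eps(f) = 1/2 - 1/2 * sum_S (1-2eps)^|S| * F_hat(S)^2."""
    rho = 1 - 2 * eps
    stability = sum(rho ** len(S) * c * c
                    for S, c in fourier_coeffs(f, n).items())
    return 0.5 - 0.5 * stability

maj3 = lambda x: int(sum(x) >= 2)
print(exact_noise_sensitivity(maj3, 3, 0.1))  # 0.136
```

A dictator function f(x) = x_1 has all its weight on one degree-1 coefficient, so NS_ε = ε exactly; parity on n bits has all its weight at degree n, giving NS_ε = ½(1 − (1 − 2ε)^n), which tends to the maximal value ½.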

Cited By

  1. Blanc G, Gupta N, Lange J and Tan L Universal guarantees for decision tree induction via a higher-order splitting criterion Proceedings of the 34th International Conference on Neural Information Processing Systems, (9475-9484)
  2. Wei C and Ermon S General bounds on satisfiability thresholds for random CSPs via fourier analysis Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, (3958-3965)
  3. Dachman-Soled D, Feldman V, Tan L, Wan A and Wimmer K Approximate resilience, monotonicity, and the complexity of agnostic learning Proceedings of the twenty-sixth annual ACM-SIAM symposium on Discrete algorithms, (498-511)
  4. Khot S and Vishnoi N (2015). The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative-Type Metrics into ℓ1, Journal of the ACM, 62:1, (1-39), Online publication date: 2-Mar-2015.
  5. O’Donnell R, Wu Y and Zhou Y (2014). Optimal Lower Bounds for Locality-Sensitive Hashing (Except When q is Tiny), ACM Transactions on Computation Theory, 6:1, (1-13), Online publication date: 1-Mar-2014.
  6. O'Donnell R and Zhou Y Approximability and proof complexity Proceedings of the twenty-fourth annual ACM-SIAM symposium on Discrete algorithms, (1537-1556)
  7. Ré C and Suciu D (2008). Approximate lineage for probabilistic databases, Proceedings of the VLDB Endowment, 1:1, (797-808), Online publication date: 1-Aug-2008.
  8. Klivans A and Sherstov A (2007). Unconditional lower bounds for learning intersections of halfspaces, Machine Learning, 69:2-3, (97-114), Online publication date: 1-Dec-2007.
Contributors
  • Carnegie Mellon University
  • Harvard University