Complexity Zoo Glossary
"Now you too can speak Theorese!"
On this page I've collected terms that appear (or don't appear) throughout the Complexity Zoo, and that theoretical computer scientists tend to use without defining them, assuming everyone just knows what they mean. The list is by no means comprehensive; if you find something missing or in error, let someone know or jump in and help out!
- adaptive
- Each question you ask (say, to an oracle) can depend on the answers to the previous questions. If A and B are complexity classes, then A^B is the class of languages decidable by an A machine that can make adaptive queries to a B oracle.
- alphabet
- Typically denoted Σ, an alphabet is a finite set of characters. Common examples include {0,1} and {a, b, ..., z}.
- asymptotic
- Concerning the rate at which a function grows, ignoring constant factors. For example, if an algorithm uses 3n+5 steps on inputs of size n, we say its running time is "asymptotically linear," emphasizing that the linear dependence on n is what's important, not the 3 or the 5.
- block sensitivity
- Given a Boolean function f: {0,1}^n → {0,1}, the block sensitivity bs_x(f) of an input x is the maximum number of disjoint blocks B_1, ..., B_k of variables such that f(x^{B_i}) does not equal f(x) for each i, where x^{B_i} denotes x with the variables in B_i flipped. Then bs(f) is the maximum of bs_x(f) over all x. Defined by Nisan in 1991. See also sensitivity, certificate complexity.
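For small n, block sensitivity can be computed by brute force straight from the definition. A sketch in Python (the function name and bitmask encoding are my own), representing inputs and blocks as n-bit masks:

```python
from itertools import combinations

def block_sensitivity(f, n):
    """Brute-force bs(f) for f: {0,1}^n -> {0,1}, with inputs given as bitmasks."""
    best = 0
    for x in range(2 ** n):
        # every nonempty block B (as a bitmask) whose flip changes f's value at x
        blocks = [B for B in range(1, 2 ** n) if f(x ^ B) != f(x)]
        # look for ever-larger families of pairwise disjoint such blocks
        for r in range(best + 1, len(blocks) + 1):
            for family in combinations(blocks, r):
                if all(a & b == 0 for a, b in combinations(family, 2)):
                    best = r
                    break
    return best

# OR on 3 bits: at x = 000 the single-bit blocks {1}, {2}, {4} all flip the output
print(block_sensitivity(lambda x: int(x != 0), 3))  # -> 3
```

The search is exponential in 2^n and is only meant to make the definition concrete, not to be efficient.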
- Blum integer
- A product of two distinct primes, both of which are congruent to 3 mod 4.
- Boolean formula
- A circuit in which each gate has fanout 1.
- Boolean function
- Usually, a function f: {0,1}^n → {0,1}, that takes an n-bit string as input and produces a bit as output.
- certificate complexity
- Given a Boolean function f: {0,1}^n → {0,1}, the certificate complexity C_x(f) of an input x is the minimum size of a set S of variables such that f(y) = f(x) whenever y agrees with x on every variable in S. Then C(f) is the maximum of C_x(f) over all x. See also block sensitivity.
- circuit
- To engineers, a "circuit" is a closed loop. But in theoretical computer science, a circuit never has loops: instead, it starts with an input, then applies a sequence of simple operations (or gates) to produce an output. For example, an OR gate outputs 1 if either of its input bits is 1, and 0 otherwise. The output of a gate can then be used as an input to other gates.
- closure
- The closure of a set S under some operations is the set of everything you can get by starting with S, then repeatedly applying the operations. For instance, the closure of the set {1} under addition and subtraction is the set of integers.
- CNF
- Conjunctive Normal Form. A special kind of Boolean formula, consisting of an AND of ORs of negated or non-negated literals. For instance: (x ∨ y) ∧ (¬x ∨ z) ∧ (¬y ∨ ¬z). See also DNF.
- collapse
- An infinite sequence (hierarchy) of complexity classes collapses if all but finitely many of them are actually equal to each other.
- complement
- The complement of a language is the set of all instances not in the language. The complement of a complexity class consists of the complement of each language in the class. (Not the set of all languages not in the class!)
- complete
- A problem is complete for a complexity class if (1) it's in the class, and (2) everything in the class can be reduced to it (under some notion of reduction). So, if you can solve the complete problems for some class, then you can solve every problem in the class. In that sense, the complete problems are the hardest problems in the class.
- constructible
- Basically, a function f is 'constructible' if it's nondecreasing, and if, given an input n, f(n) can be computed in time linear in f(n). Informally, constructible functions are those that are sufficiently well-behaved to appear as complexity bounds. This is a technical notion that is almost never needed in practice.
- decision problem
- A problem for which the desired answer is a single bit (1 or 0, yes or no). For simplicity, theorists often restrict themselves to talking about decision problems.
- decision tree
- A (typically) binary tree where each non-leaf vertex is labeled by a query, each edge is labeled by a possible answer to the query, and each leaf is labeled by an output (typically yes or no). A decision tree represents a function in the obvious way.
- decision tree complexity
- Given a Boolean function f, the decision tree complexity D(f) of f is the minimum height of a decision tree representing f (where height is the maximum length of a path from the root to a leaf). Also called deterministic query complexity.
- depth
- When referring to a circuit, the maximum number of gates along any path from an input to the output. (Note that circuits never contain loops.)
- deterministic
- Not randomized.
- DNF
- Disjunctive Normal Form. A Boolean formula consisting of an OR of ANDs of negated or non-negated literals. For instance: (x ∧ y) ∨ (¬x ∧ z) ∨ (¬y ∧ ¬z). See also CNF.
- downward self-reducible
- A problem is downward self-reducible if an oracle for instances of size n-1 enables one to solve instances of size n.
- equivalence class
- A maximal set of objects that can all be transformed to each other by some type of transformation.
- family
- Usually an infinite sequence of objects, one for each input size n. For example, a "family of circuits."
- fanin
- The maximum number of input wires that any gate in a circuit can have. A "bounded fanin" circuit is one in which each gate has a constant number of input wires (often assumed to be 2).
- fanout
- The maximum number of output wires any gate in a circuit can have. When talking about "circuits," one usually assumes unbounded fanout unless specified otherwise.
- finite automaton
- An extremely simple model of computation. In the most basic form, a machine reads an input string once, from left to right. At any step, the machine is in one of a finite number of states. After it reads an input character (symbol), it transitions to a new state, determined by its current state as well as the character it just read. The machine outputs 'yes' or 'no' based on its state when it reaches the end of the input.
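As a minimal concrete example, the following Python sketch simulates a two-state automaton for the language of binary strings ending in 1 (the state names are my own invention):

```python
def accepts(s):
    """Simulate a two-state DFA accepting binary strings that end in 1."""
    state = 'last_was_0'          # start state
    for ch in s:
        # the next state depends only on the current state and the symbol just read
        state = 'last_was_1' if ch == '1' else 'last_was_0'
    return state == 'last_was_1'  # accept iff we halt in the accepting state

print(accepts('0101'))  # -> True
print(accepts('110'))   # -> False
```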
- FOCS
- IEEE Symposium on Foundations of Computer Science (held every fall).
- function problem
- A problem where the desired output is not necessarily a single bit, but could belong to a set with more than 2 elements. Contrast with decision problem.
- formula
- A circuit where each gate has fanout 1.
- gate
- A basic component used to build a circuit. Usually performs some elementary logical operation: for example, an AND gate takes a collection of input bits, and outputs a '1' bit if all the input bits are '1', and a '0' bit otherwise. See also fanin, fanout.
- Hamming distance
- Given two bit strings x, y ∈ {0,1}^n for some n, their Hamming distance d(x, y) is the number of bits on which the two strings differ. This function is a metric on the vector space {0,1}^n, since it is non-negative, is symmetric, satisfies d(x, y) = 0 if and only if x = y, and satisfies the triangle inequality.
- Hamming weight
- Given a bit string x, the Hamming weight of x is the number of non-zero bits in x. Equivalently, the Hamming weight of x is the Hamming distance between x and the all-zero string of the same length.
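Both notions are one-liners in code; a sketch in Python (the function names are my own), with strings given as '0'/'1' text:

```python
def hamming_distance(x, y):
    """Number of positions at which bit strings x and y differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def hamming_weight(x):
    """Number of 1s in x, i.e. its distance from the all-zero string."""
    return hamming_distance(x, '0' * len(x))

print(hamming_distance('10110', '11100'))  # -> 2
print(hamming_weight('10110'))             # -> 3
```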
- hard
- A problem is hard for a class if everything in the class can be reduced to it (under some notion of reduction). If a problem is in a class and hard for the class, then it's complete for the class. Beware: hard and complete are not synonyms!
- instance
- A particular case of a problem.
- language
- Another term for decision problem (but only a total decision problem, not a promise decision problem). An instance is in a language if and only if the answer to the decision problem is "yes."
- An alternate characterization of a language L is as a set of words over an alphabet Σ: L ⊆ Σ*. This is equivalent, since determining membership in L is a decision problem (whose indicator is called the characteristic function of L). A simple example of a language is ODD, the set of all strings over {0,1} which end in 1.
- Las Vegas algorithm
- A zero-error randomized algorithm, i.e. one that always returns the correct answer, but whose running time is a random variable. The term was introduced by Babai in 1979. Contrast with Monte Carlo.
- low
- A complexity class C is low for D if D^C = D; that is, adding C as an oracle does not increase the power of D. C is self-low if C^C = C.
- lower bound
- A result showing that a function grows at least at a certain asymptotic rate. Thus, a lower bound on the complexity of a problem implies that any algorithm for the problem requires at least a certain amount of resources. Lower bounds are much harder to come by than upper bounds.
- many-one reduction
- A reduction from problem A to problem B, in which an algorithm converts an instance of A into an instance of B having the same answer. Also called a Karp reduction. (Contrast with Turing reduction.)
- monotone
- A function is monotone (or monotonic) if, when one increases any of the inputs, the output never decreases (it can only increase or stay the same). A Boolean circuit is monotone if it consists only of AND and OR gates, no NOT gates.
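For a Boolean function on a few variables, monotonicity can be checked directly from the definition; a brute-force sketch (naming mine), with inputs encoded as bitmasks:

```python
def is_monotone(f, n):
    """Check that turning any 0-bit of the input into a 1 never decreases f."""
    return all(f(x) <= f(x | (1 << i))
               for x in range(2 ** n) for i in range(n))

print(is_monotone(lambda x: int(x == 0b111), 3))        # AND of 3 bits -> True
print(is_monotone(lambda x: bin(x).count('1') % 2, 3))  # parity -> False
```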
- monotone-nonmonotone gap
- A Boolean function has a monotone-nonmonotone gap if it has nonmonotone Boolean circuits (using AND, OR, and NOT gates) that are smaller than any monotone Boolean circuits (without NOT gates) for it.
- Monte Carlo algorithm
- A bounded-error randomized algorithm, i.e. one that returns the correct answer only with some specified probability. The error probability can be either one-sided or two-sided. (In physics and engineering, the term refers more broadly to any algorithm based on random sampling.) The term was introduced by Metropolis and Ulam around 1945. Contrast with Las Vegas.
- nondeterministic machine
- A hypothetical machine that, when faced with a choice, is able to make all possible choices at once - i.e. to branch off into different 'paths.' In the end, the results from all the paths must be combined somehow into a single answer. One can obtain dozens of different models of computation, depending on the exact way this is stipulated to happen. For example, an NP machine answers 'yes' if any of its paths answer 'yes.' By contrast, a PP machine answers 'yes' if the majority of its paths answer 'yes.'
- non-negligible
- A probability is non-negligible if it's greater than 1/p(n) for some polynomial p, where n is the size of the input.
- nonuniform
- This means that a different algorithm can be used for each input size. Boolean circuits are a nonuniform model of computation: one might have a circuit for input instances of size 51 that looks completely different from the circuit for instances of size 50.
- o ("little-oh")
- For a function f(n) to be o(g(n)) means that for all positive constants c, f(n) is less than c·g(n) for all sufficiently large n.
- O ("big-oh")
- For a function f(n) to be O(g(n)) means that for some positive constant c, f(n) is less than c·g(n) for all sufficiently large n.
- ω (little omega)
- For a function f(n) to be ω(g(n)) means that for all positive constants c, f(n) is greater than c·g(n) for all sufficiently large n.
- Ω (Omega)
- For a function f(n) to be Ω(g(n)) means that for some positive constant c, f(n) is greater than c·g(n) for all sufficiently large n.
- oracle
- Also called "black box." An imaginary device that solves some computational problem immediately.
- Note: An oracle is specified by the answers it gives to every possible question you could ask it. So in some contexts, 'oracle' is more or less synonymous with 'input'—but usually an input so long that the algorithm can only examine a small fraction of it.
- O-optimal or O-speedup
- Informally, an O-optimal algorithm is one that is optimal in big-O notation. More formally, an algorithm A accepting a language L is O-optimal if for any other A' accepting L, there exists a constant c such that for all inputs x: time_A(x) ≤ c·(time_{A'}(x) + |x|). A language with no O-optimal A is said to have O-speedup. See Speedup.
- path
- A single sequence of choices that could be made by a nondeterministic machine.
- p-measure
- A game-theoretic reformulation of the classical Lebesgue measure, whose full definition is too long to fit here; please see the survey paper by Lutz, [Lut93], wherein the term is formally defined. The measure is useful in proving zero-one laws. Also known as "Lutz's p-measure."
- polylogarithmic
- (log n)^c, where c is a constant. Also an adverb ("polylogarithmically").
- polynomial
- To mathematicians, a polynomial in n is a sum of multiples of nonnegative integer powers of n: for example, 3n^2 + 2n + 4. To computer scientists, on the other hand, polynomial often means upper-bounded by a polynomial: so n log n, for example, is "polynomial." Also an adverb ("polynomially"). A function that grows polynomially is considered to be 'reasonable,' unlike, say, one that grows exponentially.
- postselection
- The process of accepting or rejecting an input conditioned on some random event occurring in a desired fashion. For example, guessing the solution to an NP-complete problem then killing yourself if the solution was incorrect could be viewed as an anthropic form of post-selection, as in any universes in which you are still alive, your random choice of a solution was correct. This intuition leads to classes such as BPPpath and PostBQP.
- problem
- A function from inputs to outputs, which we want an algorithm to compute. A crossword puzzle is not a problem; it's an instance. The set of all crossword puzzles is a problem. See also: decision problem, language.
- promise problem
- A problem for which the input is guaranteed to have a certain property. I.e. if an input doesn't have that property, then we don't care what the algorithm does when given that input.
- p-optimal or p-speedup
- (aka polynomially optimal or polynomial speedup) A Turing machine M accepting a language L is polynomially optimal if for any other M' accepting L, there exists a polynomial p such that for all inputs x: time_M(x) ≤ p(time_{M'}(x) + |x|). A language with no p-optimal M is said to have p-speedup. p-optimal was defined by Krajicek and Pudlak [KP89]; see also Messner [Mes99].
- P-uniform or P-nonuniform
- A family of Boolean circuits, one for each input size n, is P-uniform if a Turing machine given input string 1^n (1 repeated n times) can output the member of the family with n inputs in time polynomial in n. A problem is P-nonuniform if no family of minimal Boolean circuits for the problem is P-uniform.
- quantum
- Making use of quantum-mechanical superposition, which is a particular kind of parallel and very fast nondeterministic algorithmic processing that collapses to one value (usually the answer to a problem instance) when its output is observed, captured, or used. If you don't know what that means, well, I can't explain it in this sentence (try lectures 9 and 10 from the Democritus course taught by the Zookeeper). But it has nothing to do with the original meaning of the word 'quantum' (i.e. a discrete unit).
- quasipolynomial
- 2^((log n)^c), for some constant c.
- random access
- This means that an algorithm can access any element x_i of a sequence immediately (by just specifying i). It doesn't have to go through x_1, ..., x_{i-1} first. Note that this has nothing directly to do with randomness.
- randomized
- Making use of randomness (as in 'randomized algorithm'). This is probably an unfortunate term, since it doesn't imply that one starts with something deterministic and then 'randomizes' it. See also Monte Carlo and Las Vegas.
- random self-reducible
- A problem is random self-reducible if the ability to solve a large fraction of instances enables one to solve all instances. For example, the discrete logarithm problem is random self-reducible.
- reduction
- A result of the form, "Problem A is at least as hard as Problem B." This is generally shown by giving an algorithm that transforms any instance of Problem B into an instance of Problem A.
- relativize
- To add an oracle. We say a complexity class inclusion (or technique) is relativizing if it works relative to all oracles. Since there exist oracles A, B such that P^A = NP^A and P^B ≠ NP^B [BGS75], any technique that resolves P versus NP will need to be nonrelativizing.
- satisfiability (SAT)
- One of the central problems in computer science. The problem is, given a Boolean formula, does there exist a setting of variables that satisfies the formula (that is, makes it evaluate to true)? For example, (x ∨ y) ∧ (¬x ∨ ¬y) is satisfiable: x=1, y=0 and x=0, y=1 are both satisfying assignments. But x ∧ ¬x is unsatisfiable. See also: Garden entry on satisfiability.
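For formulas over a handful of variables, satisfiability can be decided by exhaustive search over all 2^n assignments; a brute-force sketch in Python (names mine), with formulas given as predicates:

```python
from itertools import product

def is_satisfiable(formula, n):
    """Decide satisfiability by trying every assignment of n Boolean variables."""
    return any(formula(*bits) for bits in product([False, True], repeat=n))

print(is_satisfiable(lambda x, y: (x or y) and (not x or not y), 2))  # -> True
print(is_satisfiable(lambda x: x and not x, 1))                       # -> False
```

Of course, no polynomial-time algorithm for the general problem is known; that is what the P versus NP question asks.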
- self-reducible
- A problem is self-reducible if an oracle for the decision problem enables one to solve the associated function problem efficiently. For example, NP-complete problems are self-reducible. See also: downward self-reducible, random self-reducible.
- sensitivity
- Given a Boolean function f: {0,1}^n → {0,1}, the sensitivity s_x(f) of an input x is the number of variables x_i such that flipping x_i changes the value of f(x). Then s(f) is the maximum of s_x(f) over all x. See also block sensitivity, certificate complexity.
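Like block sensitivity, this is easy to compute by brute force for small n; a sketch (name and bitmask encoding mine):

```python
def sensitivity(f, n):
    """Max over inputs x of the number of single-bit flips that change f(x)."""
    return max(sum(f(x) != f(x ^ (1 << i)) for i in range(n))
               for x in range(2 ** n))

# OR on 3 bits: at x = 000, flipping any one bit changes the output
print(sensitivity(lambda x: int(x != 0), 3))  # -> 3
```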
- size
- When referring to a string, the number of bits. When referring to a circuit, the number of gates.
- space
- The amount of memory used by an algorithm (as in space complexity).
- STOC
- ACM Symposium on Theory of Computing (held every spring).
- string
- A sequence of 1s and 0s. (See, it's not just physicists who plumb Nature's deepest secrets -- we computer scientists theorize about strings as well!)
Note: For simplicity, one usually assumes that every character in a string is either 1 or 0, but strings over larger alphabets can also be considered.
- subexponential
- Growing slower (as a function of n) than any exponential function. Depending on the context, this can either mean 2^(n^o(1)) (so that the Number Field Sieve factoring algorithm, which runs in about 2^(n^(1/3)) time, is "subexponential"); or smaller than 2^(εn) for every ε > 0.
- superpolynomial
- Growing faster (as a function of n) than any polynomial in n. This is not the same as exponential: for example, n^(log n) is superpolynomial, but not exponential.
- tape
- The memory used by a Turing machine.
- Θ (Theta)
- For a function f(n) to be Θ(g(n)) means that f(n) is O(g(n)) and f(n) is Ω(g(n)) (i.e. they grow at the same rate).
- tight bound
- An upper bound that matches the lower bound, or vice versa. I.e. the best possible bound for a function.
- total
- A total function is one that is defined on every possible input.
- truth table
- A table of all possible inputs to a Boolean function, together with the corresponding outputs.
- truth table reduction
- A Turing reduction in which the oracle queries must be nonadaptive.
- Turing reduction
- A reduction from problem A to problem B, in which the algorithm for problem A can make queries to an oracle for problem B. (Contrast with many-one reduction.)
- unary
- An inefficient encoding system, in which the integer n is denoted by writing n 1's in sequence.
- uniform
- A single algorithm is used for all input lengths. For example, Turing machines are a uniform model of computation: one just has to design a single Turing machine for multiplication, and it can multiply numbers of any length. (Contrast with the circuit model.)
- upper bound
- A result showing that a function grows at most at a certain asymptotic rate. For example, any algorithm for a problem yields an upper bound on the complexity of the problem.
- with high probability (w.h.p.)
- Usually this means with probability at least 2/3 (or any constant greater than 1/2). If an algorithm is correct with 2/3 probability, one can make the probability of correctness as high as one wants by just repeating several times and taking a majority vote.
Note: Sometimes people say "high probability" when they mean "non-negligible probability."
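The repeat-and-take-majority amplification described above can be sketched in a few lines of Python (the names and the simulated 2/3-correct trial are my own):

```python
import random

def majority_vote(trial, k):
    """Run a randomized yes/no trial k times (k odd) and return the majority answer."""
    return sum(trial() for _ in range(k)) * 2 > k

# A simulated algorithm that answers correctly (True) with probability 2/3;
# voting over 101 independent runs drives the error probability close to 0.
print(majority_vote(lambda: random.random() < 2 / 3, 101))
```

By a Chernoff bound, the error probability of the majority vote falls off exponentially in the number of repetitions.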
- word
- Given an alphabet Σ, a word w is a string w ∈ Σ*; that is, a string of characters drawn from Σ. Using the alphabet {0, 1}, some valid words include 0, 10, and 1101.
- zero-one law
- Any theorem which specifies that the probability of an event is either zero or one is known as a zero-one law.