Difference between revisions of "User:Argentpepper/Sandbox/Glossary"

 
===== A =====

; adaptive : Each question you ask (say to an oracle) can depend on the answers to the previous questions.  If A and B are complexity classes, then A<sup>B</sup> is the class of languages decidable by an A machine that can make adaptive queries to a B oracle.

; <span id="alphabet">alphabet</span> : Typically denoted <math>\Sigma</math>, an alphabet is a finite set of characters. Common examples include <math>\{a, b, \dots, z\}</math> and <math>\{0, 1\}</math>.

; asymptotic : Concerning the rate at which a function grows, ignoring constant factors.  For example, if an algorithm uses 3n+5 steps on inputs of size n, we say its running time is "asymptotically linear" - emphasizing that the linear dependence on n is what's important, not the 3 or the 5.
  
 
===== B =====

; <span id="block-sensitivity">block sensitivity</span> : Given a Boolean function <math>f:\{0,1\}^n\to\{0,1\}</math>, the block sensitivity <math>\mathrm{bs}^X(f)</math> of an input <math>X=x_1\dots x_n</math> is the maximum number of disjoint blocks <math>B</math> of variables such that <math>f(X)</math> does not equal <math>f\left(X^{(B)}\right)</math>, where <math>X^{(B)}</math> denotes <math>X</math> with the variables in <math>B</math> flipped.  Then <math>\mathrm{bs}(f)</math> is the maximum of <math>\mathrm{bs}^X(f)</math> over all <math>X</math>.  Defined by Nisan in 1991.  See also [[#sensitivity|sensitivity]], [[#certificate-complexity|certificate complexity]].
 
; Blum integer : A product of two distinct primes, both of which are congruent to 3 mod 4.

; Boolean formula : A circuit in which each gate has fanout 1.

; Boolean function : Usually, a function <math>f:\{0,1\}^n\to\{0,1\}</math> that takes an <math>n</math>-bit string as input and produces a bit as output.
  
 
===== C =====

; <span id="certificate-complexity">certificate complexity</span> : Given a Boolean function <math>f:\{0,1\}^n\to\{0,1\}</math>, the certificate complexity <math>C^X(f)</math> of an input <math>X=x_1\dots x_n</math> is the minimum size of a set <math>S</math> of variables such that <math>f(Y)=f(X)</math> whenever <math>Y</math> agrees with <math>X</math> on every variable in <math>S</math>.  Then <math>C(f)</math> is the maximum of <math>C^X(f)</math> over all <math>X</math>.  See also [[#block-sensitivity|block sensitivity]].
 
 
 
[[Image:Example-circuit.jpg|thumb|200px|right|Example of a Boolean [[#circuit|circuit]] of [[#depth|depth]] 4 that checks if two 2-bit strings are different.]]

; <span id="circuit">circuit</span> : To engineers, a "circuit" is a closed loop.  But in theoretical computer science, a circuit never has loops: instead, it starts with an input, then applies a sequence of simple operations (or gates) to produce an output.  For example, an OR gate outputs 1 if either of its input bits are 1, and 0 otherwise.  The output of a gate can then be used as an input to other gates.

; closure : The closure of a set <math>S</math> under some operations is the set of everything you can get by starting with <math>S</math>, then repeatedly applying the operations.  For instance, the closure of the set <math>\{1\}</math> under addition and subtraction is the set of integers.
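
The closure in the example above is infinite, but for operations over a finite domain the same idea can be computed by a simple fixed-point loop.  A minimal Python sketch (illustrative; it terminates only when the closure is finite):

<pre>
def closure(start, operations):
    """Closure of `start` under binary `operations`: repeatedly apply the
    operations to elements already obtained until nothing new appears.
    Terminates only if the closure is finite."""
    closed = set(start)
    changed = True
    while changed:
        changed = False
        new = {op(a, b) for op in operations for a in closed for b in closed}
        if not new <= closed:
            closed |= new
            changed = True
    return closed

# Example: the closure of {3} under addition mod 12 is the multiples of 3,
# a finite analogue of "the closure of {1} under + and - is the integers."
print(sorted(closure({3}, [lambda a, b: (a + b) % 12])))   # [0, 3, 6, 9]
</pre>
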
<font color="red"><b>closure:</b></font> The closure of a set <math>S</math> under some operations, is the set of everything you can get by starting with <math>S</math>, then repeatedly applying the operations.  For instance, the closure of the set <math>\{1\}</math> under addition and subtraction is the set of integers.
+
; <span id="cnf">CNF</span> : Conjunctive Normal Form.  A special kind of Boolean formula, consisting of an AND of ORs of negated or non-negated literals.  For instance: <math>(a\vee\neg b)\wedge(b\vee\neg c\vee d)</math>.  See also [[#dnf|DNF]].
 
+
; collapse : An infinite sequence (hierarchy) of complexity classes <i>collapses</i> if all but finitely many of them are actually equal to each other.
<font color="red" id="cnf"><b>CNF:</b></font> Conjunctive Normal Form.  A special kind of Boolean formula, consisting of an AND of ORs of negated or non-negated literals.  For instance: <math>(a\vee\neg b)\wedge(b\vee\neg c\vee d)</math>.  See also [[#dnf|DNF]].
+
; complement : The complement of a language is the set of all instances not in the language.  The complement of a complexity class consists of the complement of each language in the class.  (Not the set of all languages not in the class!)
 
+
; complete : A problem is complete for a complexity class if (1) it's in the class, and (2) everything in the class can be reduced to it (under some notion of reduction).  So, if you can solve the complete problems for some class, then you can solve every problem in the class.  The complete problems are the hardest.
<font color="red"><b>collapse:</b></font> An infinite sequence (hierarchy) of complexity classes <i>collapses</i> if all but finitely many of them are actually equal to each other.
+
; constructible : Basically, a function <math>f</math> is 'constructible' if it's nondecreasing, and if, given an input <math>x</math>, <math>f(\left\vert x\right\vert)</math> can be computed in time linear in <math>\left\vert x\right\vert+f(\left\vert x\right\vert)</math>.  Informally, constructible functions are those that are sufficiently well-behaved to appear as complexity bounds.  This is a technical notion that is almost never needed in practice.
 
 
<font color="red"><b>complement:</b></font> The complement of a language is the set of all instances not in the language.  The complement of a complexity class consists of the complement of each language in the class.  (Not the set of all languages not in the class!)
 
 
 
<font color="red"><b>complete:</b></font> A problem is complete for a complexity class if (1) it's in the class, and (2) everything in the class can be reduced to it (under some notion of reduction).  So, if you can solve the complete problems for some class, then you can solve every problem in the class.  The complete problems are the hardest.
 
 
 
<font color="red"><b>constructible:</b></font> Basically, a function <math>f</math> is 'constructible' if it's nondecreasing, and if, given an input <math>x</math>, <math>f(\left\vert x\right\vert)</math> can be computed in time linear in <math>\left\vert x\right\vert+f(\left\vert x\right\vert)</math>.  Informally, constructible functions are those that are sufficiently well-behaved to appear as complexity bounds.  This is a technical notion that is almost never needed in practice.
 
  
 
===== D =====

; <span id="decision-problem">decision problem</span> : A problem for which the desired answer is a single bit (1 or 0, yes or no).  For simplicity, theorists often restrict themselves to talking about decision problems.

; decision tree : A (typically) binary tree where each non-leaf vertex is labeled by a query, each edge is labeled by a possible answer to the query, and each leaf is labeled by an output (typically yes or no).  A decision tree represents a function in the obvious way.

; decision tree complexity : Given a Boolean function <math>f:\{0,1\}^n\to\{0,1\}</math>, the decision tree complexity <math>D(f)</math> of <math>f</math> is the minimum height of a decision tree representing <math>f</math> (where height is the maximum length of a path from the root to a leaf).  Also called deterministic query complexity.
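
For very small <math>n</math>, <math>D(f)</math> can be computed straight from the definition: a subcube on which <math>f</math> is constant needs no further queries, and otherwise one extra query is charged for the best variable to ask next.  The Python sketch below is a brute-force illustration only (exponential time; the 0/1-tuple input encoding is just a convention of the example).

<pre>
from itertools import product

def decision_tree_complexity(f, n):
    """D(f): minimum height of a decision tree computing f on n bits.
    Brute force over all query orders; feasible only for tiny n."""
    def fill(assignment, bits, free):
        x = [0] * n
        for i, v in assignment.items():
            x[i] = v
        for i, b in zip(free, bits):
            x[i] = b
        return tuple(x)
    def solve(assignment):
        free = [i for i in range(n) if i not in assignment]
        values = {f(fill(assignment, bits, free))
                  for bits in product((0, 1), repeat=len(free))}
        if len(values) == 1:           # f is constant on this subcube: no queries needed
            return 0
        return 1 + min(                # otherwise query the best variable next
            max(solve({**assignment, i: 0}), solve({**assignment, i: 1}))
            for i in free)
    return solve({})

# Examples: every bit of OR and of PARITY must be queried in the worst case,
# so D = n for both.
OR     = lambda x: int(any(x))
PARITY = lambda x: sum(x) % 2
print(decision_tree_complexity(OR, 3), decision_tree_complexity(PARITY, 3))   # 3 3
</pre>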
 
; <span id="depth">depth</span> : When referring to a circuit, the maximum number of gates along any path from an input to the output.  (Note that circuits never contain loops.)

; deterministic : Not randomized.

; <span id="dnf">DNF</span> : Disjunctive Normal Form.  A Boolean formula consisting of an OR of ANDs of negated or non-negated literals.  For instance: <math>(a\wedge c)\vee(b\wedge\neg c)</math>.  See also [[#cnf|CNF]].

; <span id="downward-self-reducible">downward self-reducible</span> : A problem is downward self-reducible if an oracle for instances of size <math>n-1</math> enables one to solve instances of size <math>n</math>.
 
===== E =====

; equivalence class : A maximal set of objects that can all be transformed to each other by some type of transformation.
  
 
===== F =====

; family : Usually an infinite sequence of objects, one for each input size n.  For example, a "family of circuits."

; fanin : The maximum number of input wires that any gate in a circuit can have.  A "bounded fanin" circuit is one in which each gate has a constant number of input wires (often assumed to be 2).

; fanout : The maximum number of output wires any gate in a circuit can have.  When talking about "circuits," one usually assumes unbounded fanout unless specified otherwise.

; finite automaton : An extremely simple model of computation.  In the most basic form, a machine reads an input string once, from left to right.  At any step, the machine is in one of a finite number of states.  After it reads an input character (symbol), it transitions to a new state, determined by its current state as well as the character it just read.  The machine outputs 'yes' or 'no' based on its state when it reaches the end of the input.
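
A finite automaton is easy to simulate directly.  The Python sketch below (the transition-table encoding is just one possible convention) implements a two-state machine that accepts exactly the binary strings ending in 1 - the language called ODD in the entry for [[#language|language]] below.

<pre>
def run_dfa(transitions, start, accepting, input_string):
    """Simulate a deterministic finite automaton: read the input once,
    left to right, updating the state after each character."""
    state = start
    for ch in input_string:
        state = transitions[(state, ch)]
    return state in accepting

# A two-state automaton over {0,1} that accepts exactly the strings ending in 1.
transitions = {('a', '0'): 'a', ('a', '1'): 'b',
               ('b', '0'): 'a', ('b', '1'): 'b'}
print(run_dfa(transitions, 'a', {'b'}, '10110'))   # False
print(run_dfa(transitions, 'a', {'b'}, '10011'))   # True
</pre>
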
<font color="red"><b>fanout:</b></font> The maximum number of output wires any gate in a circuit can have.  When talking about "circuits," one usually assumes unbounded fanout unless specified otherwise.
+
; FOCS : IEEE Symposium on Foundations of Computer Science (held every fall).
 
+
; function problem : A problem where the desired output is not necessarily a single bit, but could belong to a set with more than 2 elements.  Contrast with decision problem.
<font color="red"><b>finite automaton:</b></font> An extremely simple model of computation.  In the most basic form, a machine reads an input string once, from left to right.  At any step, the machine is in one of a finite number of states.  After it reads an input character (symbol), it transitions to a new state, determined by its current state as well as the character it just read.  The machine outputs 'yes' or 'no' based on its state when it reaches the end of the input.
+
; formula : A circuit where each gate has fanout 1.
 
 
<font color="red"><b>FOCS:</b></font> IEEE Symposium on Foundations of Computer Science (held every fall).
 
 
 
<font color="red"><b>function problem:</b></font> A problem where the desired output is not necessarily a single bit, but could belong to a set with more than 2 elements.  Contrast with decision problem.
 
 
 
<font color="red"><b>formula:</b></font> A circuit where each gate has fanout 1.
 
  
 
===== G =====

; gate : A basic component used to build a circuit.  Usually performs some elementary logical operation: for example, an AND gate takes a collection of input bits, and outputs a '1' bit if all the input bits are '1', and a '0' bit otherwise.  See also fanin, fanout.
  
 
===== H =====

; Hamming distance : Given two bit strings <math>x,y\in\{0,1\}^n</math> for some <math>n</math>, their Hamming distance <math>d(x,y)</math> is the number of bits that are different between the two strings. This function satisfies the properties of a metric on the vector space <math>\{0,1\}^n</math>, since it is non-negative, is symmetric, satisfies <math>d(x,x)=0</math>, and satisfies the triangle inequality.

; Hamming weight : Given a bit string <math>x\in\{0,1\}^*</math>, the Hamming weight of <math>x</math> is the number of non-zero bits. Equivalently, the Hamming weight of <math>x</math> is the Hamming distance <math>d(x,\mathbf{0})</math> between <math>x</math> and a string of all zeros having the same length.
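
Both notions are one-liners in code.  A small Python sketch, assuming strings of '0'/'1' characters of equal length:

<pre>
def hamming_distance(x, y):
    """Number of positions where the bit strings x and y differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def hamming_weight(x):
    """Number of 1 bits, i.e. the Hamming distance from the all-zero string."""
    return hamming_distance(x, '0' * len(x))

print(hamming_distance('10110', '11100'))   # 2
print(hamming_weight('10110'))              # 3
</pre>
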
; hard : A problem is hard for a class if everything in the class can be reduced to it (under some notion of reduction).  If a problem is in a class <i>and</i> hard for the class, then it's complete for the class.  Beware: hard and complete are not synonyms!
 
  
 
===== I =====

; instance : A particular case of a problem.
  
 
===== L =====

; <span id="language">language</span> : Another term for [[#decision-problem|decision problem]] (but only a total decision problem, not a promise decision problem).  An instance is in a language if and only if the answer to the decision problem is "yes."
: An alternate characterization of a language is as a set of [[#word|words]] over an alphabet <math>\Sigma</math>: <math>L\subseteq \Sigma^*</math>. This is equivalent, since determining membership in <math>L</math> is a decision problem (given by the characteristic function of <math>L</math>). A simple example of a language is ODD, the set of all strings over <math>\{0,1\}</math> which end in 1.

; Las Vegas algorithm : A zero-error randomized algorithm, i.e. one that always returns the correct answer, but whose running time is a random variable.  The term was introduced by Babai in 1979.  Contrast with Monte Carlo.
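
A minimal illustration of the zero-error/random-running-time trade-off in Python (the promise that the input contains many 1s is an assumption of this particular example, not part of the general definition):

<pre>
import random

def find_a_one(bits):
    """Las Vegas search: given a bit string promised to contain many 1s,
    probe random positions until a 1 is found.  The answer is always correct,
    but the number of probes is a random variable (expected O(1) probes if
    at least half the positions are 1)."""
    probes = 0
    while True:
        probes += 1
        i = random.randrange(len(bits))
        if bits[i] == '1':
            return i, probes

index, probes = find_a_one('0110' * 250)   # a 1000-bit string, half of them 1s
print(index, probes)
</pre>
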
<font color="red" id="lasvegas-algorithm"><b>Las Vegas algorithm:</b></font> A zero-error randomized algorithm, i.e. one that always returns the correct answer, but whose running time is a random variable.  The term was introduced by Babai in 1979.  Contrast [http://www.cheatcodesforsime3.com/ with] Monte Carlo.
+
; low : A complexity class C is low for D if D<sup>C</sup> = D; that is, adding C as an oracle does not increase the power of D.  C is <i>self-low</i> if C<sup>C</sup> = C.
 
+
; lower bound : A result showing that a function grows <i>at least</i> at a certain asymptotic rate.  Thus, a lower bound on the complexity of a problem implies that any algorithm for the problem requires <i>at least</i> a certain amount of resources.  Lower bounds are much harder to come by than upper bounds.
<font color="red" id="low"><b>low:</b></font> A complexity class C is low for D if D<sup>C</sup> = D; that is, adding C as an oracle does not increase the power of D.  C is <i>self-low</i> if C<sup>C</sup> = C.
 
 
 
<font color="red" id="lower bound"><b>lower bound:</b></font> A result showing that a function grows <i>at least</i> at a certain asymptotic rate.  Thus, a lower bound on the complexity of a problem implies that any algorithm for the problem requires <i>at least</i> a certain amount of resources.  Lower bounds are much harder to come by than upper bounds.
 
  
 
===== M =====

; many-one reduction : A reduction from problem A to problem B, in which an algorithm converts an instance of A into an instance of B having the same answer.  Also called a Karp reduction.  (Contrast with Turing reduction.)

; monotone : A function is monotone (or monotonic) if, when one increases any of the inputs, the output never decreases (it can only increase or stay the same).  A Boolean circuit is monotone if it consists only of AND and OR gates, no NOT gates.

; monotone-nonmonotone gap : A Boolean function has a monotone-nonmonotone gap if it has nonmonotone Boolean circuits (using AND, OR, and NOT gates) that are smaller than any monotone Boolean circuits (without NOT gates) for it.

; Monte Carlo algorithm : A bounded-error randomized algorithm, i.e. one that returns the correct answer only with some specified probability.  The error probability can be either one-sided or two-sided.  (In physics and engineering, the term refers more broadly to any algorithm based on random sampling.)  The term was introduced by Metropolis and Ulam around 1945.  Contrast with Las Vegas.
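
The entry above doesn't single out any particular algorithm; a standard textbook example of a one-sided-error Monte Carlo algorithm is Freivalds' check for matrix multiplication, sketched below in Python with plain lists of lists.

<pre>
import random

def freivalds_check(A, B, C, trials=20):
    """Freivalds' algorithm: a one-sided-error Monte Carlo test of whether
    A*B == C for n x n matrices.  Each trial picks a random 0/1 vector r and
    checks A(Br) == Cr in O(n^2) time; if A*B != C, a single trial catches the
    difference with probability at least 1/2, so the error probability after
    `trials` independent trials is at most 2^-trials."""
    n = len(A)
    def mat_vec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False        # definitely A*B != C
    return True                 # probably A*B == C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]     # the true product A*B
bad  = [[19, 22], [43, 51]]
print(freivalds_check(A, B, good), freivalds_check(A, B, bad))   # True False (w.h.p.)
</pre>
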
<font color="red" id="gap"><b>monotone-nonmonotone gap:</b></font> A Boolean function has a monotone-nonmonotone gap if it has nonmonotone Boolean circuits (using AND, OR, and NOT gates) that are smaller than any monotone Boolean circuits (without NOT gates) for it.
+
; nondeterministic machine : A hypothetical machine that, when faced with a choice, is able to make all possible choices at once - i.e. to branch off into different 'paths.'  In the end, the results from all the paths must be combined somehow into a single answer.  One can obtain dozens of different models of computation, depending on the exact way this is stipulated to happen.  For example, an {{zcls|n|np}} machine answers 'yes' if <i>any</i> of its paths answer 'yes.'  By contrast, a {{zcls|p|pp}} machine answers 'yes' if the <i>majority</i> of its paths answer 'yes.'
 
+
; non-negligible : A probability is non-negligible if it's greater than <math>1/p(n)</math> for some polynomial <math>p</math>, where <math>n</math> is the size of the input.
<font color="red"><b>Monte Carlo algorithm:</b></font> A bounded-error randomized algorithm, i.e. one that returns the correct answer only with some specified probability.  The error probability can be either one-sided or two-sided.  (In physics and engineering, the term refers more broadly to any algorithm based on random sampling.)  The term was introduced by Metropolis and Ulam around 1945.  Contrast with Las Vegas.
+
; nonuniform : This means that a different algorithm can be used for each input size.  Boolean circuits are a nonuniform model of computation --- one might have a circuit for input instances of size 51, that looks completely different from the circuit for instances of size 50.
 
 
<font color="red"><b>nondeterministic machine:</b></font> A hypothetical machine that, when faced with a choice, is able to make all possible choices at once - i.e. to branch off into different 'paths.'  In the end, the results from all the paths must be combined somehow into a single answer.  One can obtain dozens of different models of computation, depending on the exact way this is stipulated to happen.  For example, an {{zcls|n|np}} machine answers 'yes' if <i>any</i> of its paths answer 'yes.'  By contrast, a {{zcls|p|pp}} machine answers 'yes' if the <i>majority</i> of its paths answer 'yes.'
 
 
 
<font color="red"><b>non-negligible:</b></font> A probability is non-negligible if it's greater than <math>1/p(n)</math> for some polynomial <math>p</math>, where <math>n</math> is the size of the input.
 
 
 
<font color="red"><b>nonuniform:</b></font> This means that a different algorithm can be used for each input size.  Boolean circuits are a nonuniform model of computation -- one might have a circuit for input instances of size 51, that looks completely different from the circuit for instances of size 50.
 
  
 
===== O =====

; o ("little-oh") : For a function <math>f(n)</math> to be <math>o(g(n))</math> means that for every positive constant <math>k</math>, <math>f(n)</math> is less than <math>kg(n)</math> for all sufficiently large <math>n</math> (i.e. <math>f(n)</math> grows more slowly than <math>g(n)</math>).

; O ("big-oh") : For a function <math>f(n)</math> to be <math>O(g(n))</math> means that for some positive constant <math>k</math>, <math>f(n)</math> is less than <math>kg(n)</math> for all sufficiently large <math>n</math>.

; &Omega; (Omega) : For a function <math>f(n)</math> to be <math>\Omega(g(n))</math> means that for some positive constant <math>k</math>, <math>f(n)</math> is greater than <math>kg(n)</math> for all sufficiently large <math>n</math>.

; oracle : Also called "black box."  An imaginary device that solves some computational problem immediately.
: ''Note:'' An oracle is specified by the answers it gives to every possible question you could ask it.  So in some contexts, 'oracle' is more or less synonymous with 'input' - but usually an input so long that the algorithm can only examine a small fraction of it.

; O-optimal or O-speedup : Informally, an O-optimal algorithm is one that is optimal up to constant factors, i.e. in big-O notation. More formally, an algorithm <math>A</math> accepting a language <math>L</math> is O-optimal if for any other algorithm <math>A'</math> accepting <math>L</math>, there exists a constant <math>c</math> such that for all inputs <math>x</math>: <math>\mathrm{time}_A(x)\leq c(|x|+\mathrm{time}_{A'}(x))</math>. A language with no O-optimal <math>A</math> is said to have O-speedup. See [[Speedup]].
 
  
 
===== P =====

; path : A single sequence of choices that could be made by a nondeterministic machine.

; ''p''-measure : A game-theoretic reformulation of the classical Lebesgue measure, whose full definition is too long to fit here; please see the survey paper by Lutz, {{zcite|Lut93}}, wherein the term is formally defined. The measure is useful in proving [[#zeroone-law|zero-one laws]]. Also known as "Lutz's ''p''-measure."

; polylogarithmic : <math>(\log(n))^c</math>, where <math>c</math> is a constant.  Also an adverb ("polylogarithmically").

; polynomial : To mathematicians, a polynomial in <math>n</math> is a sum of multiples of nonnegative integer powers of <math>n</math>: for example, <math>3n^2-8n+4</math>.  To computer scientists, on the other hand, polynomial often means <i>upper-bounded</i> by a polynomial: so <math>n+\log(n)</math>, for example, is "polynomial."  Also an adverb ("polynomially").  A function that grows polynomially is considered to be 'reasonable,' unlike, say, one that grows exponentially.

; post-selection : The process of accepting or rejecting an input conditioned on some random event occurring in a desired fashion. For example, guessing the solution to an NP-complete problem then killing yourself if the solution was incorrect could be viewed as an anthropic form of post-selection, as in any universes in which you are still alive, your random choice of a solution was correct. This intuition leads to classes such as {{zcls|b|bpppath|BPP<sub>path</sub>}} and {{zcls|p|postbqp|PostBQP}}.

; problem : A function from inputs to outputs, which we want an algorithm to compute.  A crossword puzzle is not a problem; it's an <i>instance</i>.  The set of <i>all</i> crossword puzzles is a problem. ''See also:'' [[#decision-problem|decision problem]], [[#language|language]].

; promise problem : A problem for which the input is guaranteed to have a certain property.  I.e. if an input doesn't have that property, then we don't care what the algorithm does when given that input.

; p-optimal or p-speedup : (aka polynomially optimal or polynomial speedup) A Turing machine <math>M</math> accepting a language <math>L</math> is polynomially optimal if for any other <math>M'</math> accepting <math>L</math>, there exists a polynomial <math>p</math> such that for all inputs <math>x</math>: <math>\mathrm{time}_M(x)\leq p(|x|,\mathrm{time}_{M'}(x))</math>. A language with no p-optimal <math>M</math> is said to have p-speedup. p-optimal was defined by Krajicek and Pudlak [[zooref#kp89|[KP89]]]; see also Messner [[zooref#mes99|[Mes99]]].

; P-uniform or P-nonuniform : A family of Boolean circuits is P-uniform if a Turing machine given input string <math>1^n</math> (1 repeated n times) can output the member of the family with <math>n</math> inputs in time polynomial in n. A ''problem'' is P-nonuniform if no family of minimal Boolean circuits for the problem is P-uniform.
 
  
 
===== Q =====

; quantum : Making use of quantum-mechanical superposition, which is a particular kind of parallel and very fast nondeterministic algorithmic processing that collapses to one value (usually the answer to a problem instance) when its output is observed, captured, or used.  If you don't know what that means, well, I can't explain it in this sentence (try lectures [http://www.scottaaronson.com/democritus/lec9.html 9] and [http://www.scottaaronson.com/democritus/lec10.html 10] from the [http://www.scottaaronson.com/democritus/default.html Democritus course] taught by the Zookeeper).  But it has nothing to do with the original meaning of the word 'quantum' (i.e. a discrete unit).

; quasipolynomial : <math>O(2^{\log^c n})</math>, for some constant c.
 
  
 
===== R =====

; random access : This means that an algorithm can access any element x<sub>i</sub> of a sequence immediately (by just specifying i).  It doesn't have to go through x<sub>1</sub>,...,x<sub>i-1</sub> first.  Note that this has nothing directly to do with randomness.

; randomized : Making use of randomness (as in 'randomized algorithm').  This is probably an unfortunate term, since it doesn't imply that one starts with something deterministic and then 'randomizes' it.  See also Monte Carlo and Las Vegas.

; <span id="random-self-reducible">random self-reducible</span> : A problem is random self-reducible if the ability to solve a large fraction of instances enables one to solve <i>all</i> instances.  For example, the discrete logarithm problem is random self-reducible.

; reduction : A result of the form, "Problem A is at least as hard as Problem B."  This is generally shown by giving an algorithm that transforms any instance of Problem B into an instance of Problem A.

; relativize : To add an oracle.  We say a complexity class inclusion (or technique) is <i>relativizing</i> if it works relative to all oracles.  Since there exist oracles A,B such that {{zcls|p|p}}<sup>A</sup> = {{zcls|n|np}}<sup>A</sup> and {{zcls|p|p}}<sup>B</sup> does not equal {{zcls|n|np}}<sup>B</sup> {{zcite|BGS75}}, any technique that resolves {{zcls|p|p}} versus {{zcls|n|np}} will need to be nonrelativizing.
 
  
 
===== S =====

; satisfiability (SAT) : One of the central problems in computer science.  The problem is, given a Boolean formula, does there exist a setting of variables that <i>satisfies</i> the formula (that is, makes it evaluate to true)?  For example, <math>(a\vee b)\wedge(\neg a\vee\neg b)</math> is satisfiable: <math>a=\mathrm{true}</math>, <math>b=\mathrm{false}</math> and <math>a=\mathrm{false}</math>, <math>b=\mathrm{true}</math> are both satisfying assignments.  But <math>(a\vee b) \wedge (\neg a) \wedge (\neg b)</math> is unsatisfiable. ''See also'': [[Complexity Garden#sat|Garden entry on satisfiability]].
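
For a handful of variables, satisfiability can of course be decided by trying every assignment; the interesting question is whether anything dramatically better is possible in general.  A brute-force Python sketch (the encoding of formulas as Python predicates is just a convenience of the example):

<pre>
from itertools import product

def brute_force_sat(formula, variables):
    """Check satisfiability by trying all 2^n assignments (exponential time).
    `formula` is any Python predicate over a dict of truth values."""
    for values in product((False, True), repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment          # a satisfying assignment
    return None                        # unsatisfiable

# (a OR b) AND (NOT a OR NOT b) is satisfiable ...
f1 = lambda v: (v['a'] or v['b']) and (not v['a'] or not v['b'])
# ... but (a OR b) AND (NOT a) AND (NOT b) is not.
f2 = lambda v: (v['a'] or v['b']) and (not v['a']) and (not v['b'])
print(brute_force_sat(f1, ['a', 'b']))   # e.g. {'a': False, 'b': True}
print(brute_force_sat(f2, ['a', 'b']))   # None
</pre>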
 
; self-reducible : A problem is self-reducible if an oracle for the decision problem enables one to solve the associated function problem efficiently.  For example, {{zcls|n|npc|NP-complete}} problems are self-reducible.  See also: [[#downward-self-reducible|downward self-reducible]], [[#random-self-reducible|random self-reducible]].

; <span id="sensitivity">sensitivity</span> : Given a Boolean function {{bfunc}}, the sensitivity <math>s^X(f)</math> of an input <math>X=x_1\dots x_n</math> is the number of variables such that flipping them changes the value of <math>f(X)</math>.  Then <math>s(f)</math> is the maximum of <math>s^X(f)</math> over all <math>X</math>.
 
; size : When referring to a string, the number of bits.  When referring to a circuit, the number of gates.

; space : The amount of memory used by an algorithm (as in space complexity).

; STOC : ACM Symposium on Theory of Computing (held every spring).

; string : A sequence of 1s and 0s.  (See, it's not just physicists who plumb Nature's deepest secrets -- we computer scientists theorize about strings as well!)<br><i>Note</i>: For simplicity, one usually assumes that every character in a string is either 1 or 0, but strings over larger alphabets can also be considered.

; subexponential : Growing slower (as a function of n) than any exponential function.  Depending on the context, this can either mean <math>2^{o(n)}</math> (so that the Number Field Sieve factoring algorithm, which runs in about <math>2^{n^{1/3}}</math> time, is "subexponential"); or <math>2^{o(n^{\epsilon})}</math> for every <math>\epsilon>0</math>.

; superpolynomial : Growing faster (as a function of <math>n</math>) than any polynomial in <math>n</math>.  This is <i>not</i> the same as exponential: for example, <math>n^{\log n}</math> is superpolynomial, but not exponential.
 
  
 
===== T =====

; tape : The memory used by a Turing machine.

; &Theta; (Theta) : For a function <math>f(n)</math> to be <math>\Theta(g(n))</math> means that <math>f(n) = O(g(n))</math> and <math>f(n) = \Omega(g(n))</math> (i.e. they grow at the same rate).

; tight bound : An upper bound that matches the lower bound, or vice versa.  I.e. the best possible bound for a function.

; total : A total function is one that is defined on every possible input.

; truth table : A table of all <math>2^n</math> possible inputs to a Boolean function, together with the corresponding outputs.
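
A truth table is trivially generated by enumerating all inputs, as in the short Python sketch below (exponential in <math>n</math> by definition; the 0/1-tuple encoding is just a convention of the example):

<pre>
from itertools import product

def truth_table(f, n):
    """List all 2^n inputs of an n-bit Boolean function together with its outputs."""
    return [(x, f(x)) for x in product((0, 1), repeat=n)]

# Truth table of 2-bit XOR.
XOR = lambda x: x[0] ^ x[1]
for row in truth_table(XOR, 2):
    print(row)   # ((0, 0), 0)  ((0, 1), 1)  ((1, 0), 1)  ((1, 1), 0)
</pre>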
 
; truth table reduction : A Turing reduction in which the oracle queries must be nonadaptive.

; Turing reduction : A reduction from problem A to problem B, in which the algorithm for problem A can make queries to an oracle for problem B.  (Contrast with many-one reduction.)
 
  
 
===== U =====

; unary : An inefficient encoding system, in which the integer n is denoted by writing n 1's in sequence.

; uniform : A single algorithm is used for all input lengths.  For example, Turing machines are a uniform model of computation -- one just has to design a single Turing machine for multiplication, and it can multiply numbers of any length.  (Contrast with the circuit model.)

; upper bound : A result showing that a function grows <i>at most</i> at a certain asymptotic rate.  For example, any algorithm for a problem yields an upper bound on the complexity of the problem.
 
  
 
===== W =====

; with high probability (w.h.p.) : Usually this means with probability at least 2/3 (or any constant greater than 1/2).  If an algorithm is correct with 2/3 probability, one can make the probability of correctness as high as one wants by just repeating several times and taking a majority vote.<br><i>Note</i>: Sometimes people say "high probability" when they mean "non-negligible probability."
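
The repeat-and-take-a-majority-vote trick mentioned above is short enough to write out.  The Python sketch below is illustrative only: the 'noisy' subroutine is a made-up stand-in for any randomized decision procedure that is correct with probability 2/3.

<pre>
import random
from collections import Counter

def amplify(randomized_decision, x, repetitions=25):
    """Boost a 2/3-correct randomized decision procedure by independent
    repetition and majority vote; the failure probability drops
    exponentially in the number of repetitions (Chernoff bound)."""
    votes = Counter(randomized_decision(x) for _ in range(repetitions))
    return votes.most_common(1)[0][0]

# A toy subroutine that answers correctly with probability 2/3.
def noisy_is_even(n):
    correct = (n % 2 == 0)
    return correct if random.random() < 2/3 else not correct

print(amplify(noisy_is_even, 10))   # True, except with tiny probability
</pre>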
 
; <span id="word">word</span> : Given an [[#alphabet|alphabet]] <math>\Sigma</math>, a word is a string <math>w \in \Sigma^*</math>. That is, a string of characters drawn from <math>\Sigma</math>. Using the alphabet <math>\Sigma = \{a, b, \dots, z\}</math>, some valid words include <math>word</math>, <math>science</math>, <math>foobar</math> and <math>fotmewwi</math>.
 
  
 
===== X =====

===== Y =====

===== Z =====

; <span id="zeroone-law">zero-one law</span> : Any theorem which specifies that the probability of an event is either zero or one is known as a zero-one law.

Latest revision as of 23:10, 1 February 2013

A
adaptive
Each question you ask (say to an oracle) can depend on the answers to the previous questions. If A and B are complexity classes, then AB is the class of languages decidable by an A machine that can make adaptive queries to a B oracle.
alphabet
Typically denoted , an alphabet is a finite set of characters. Common examples include and .
asymptotic
Concerning the rate at which a function grows, ignoring constant factors. For example, if an algorithm uses 3n+5 steps on inputs of size n, we say its running time is "asymptotically linear" - emphasizing that the linear dependence on n is what's important, not the 3 or the 5.
B
block sensitivity
Given a Boolean function , the block sensitivity of an input is the maximum number of disjoint blocks of variables such that does not equal , where denotes with the variables in flipped. Then is the maximum of over all . Defined by Nisan in 1991. See also sensitivity, certificate complexity.
Blum integer
A product of two distinct primes, both of which are congruent to 3 mod 4.
Boolean formula
A circuit in which each gate has fanout 1.
Boolean function
Usually, a function , that takes an -bit string as input and produces a bit as output.
C
certificate complexity
Given a Boolean function , the certificate complexity of an input is the minimum size of a set of variables such that whenever agrees with on every variable in . Then is the maximum of over all . See also block sensitivity.
Example of a Boolean circuit of depth 4 that checks if two 2-bit strings are different.
circuit
To engineers, a "circuit" is a closed loop. But in theoretical computer science, a circuit never has loops: instead, it starts with an input, then applies a sequence of simple operations (or gates) to produce an output. For example, an OR gate outputs 1 if either of its input bits are 1, and 0 otherwise. The output of a gate can then be used as an input to other gates.
closure
The closure of a set under some operations, is the set of everything you can get by starting with , then repeatedly applying the operations. For instance, the closure of the set under addition and subtraction is the set of integers.
CNF
Conjunctive Normal Form. A special kind of Boolean formula, consisting of an AND of ORs of negated or non-negated literals. For instance: . See also DNF.
collapse
An infinite sequence (hierarchy) of complexity classes collapses if all but finitely many of them are actually equal to each other.
complement
The complement of a language is the set of all instances not in the language. The complement of a complexity class consists of the complement of each language in the class. (Not the set of all languages not in the class!)
complete
A problem is complete for a complexity class if (1) it's in the class, and (2) everything in the class can be reduced to it (under some notion of reduction). So, if you can solve the complete problems for some class, then you can solve every problem in the class. The complete problems are the hardest.
constructible
Basically, a function is 'constructible' if it's nondecreasing, and if, given an input , can be computed in time linear in . Informally, constructible functions are those that are sufficiently well-behaved to appear as complexity bounds. This is a technical notion that is almost never needed in practice.
D
decision problem
A problem for which the desired answer is a single bit (1 or 0, yes or no). For simplicity, theorists often restrict themselves to talking about decision problems.
decision tree
A (typically) binary tree where each non-leaf vertex is labeled by a query, each edge is labeled by a possible answer to the query, and each leaf is labeled by an output (typically yes or no). A decision tree represents a function in the obvious way.
decision tree complexity
Given a Boolean function , the decision tree complexity of is the minimum height of a decision tree representing (where height is the maximum length of a path from the root to a leaf). Also called deterministic query complexity.
depth
When referring to a circuit, the maximum number of gates along any path from an input to the output. (Note that circuits never contain loops.)
deterministic
Not randomized.
DNF
Disjunctive Normal Form. A Boolean formula consisting of an OR and ANDs of negated or non-negated literals. For instance: . See also CNF.
downward self-reducible
A problem is downward self-reducible if an oracle for instances of size enables one to solve instances of size .
E
equivalence class
A maximal set of objects that can all be transformed to each other by some type of transformation.
F
family
Usually an infinite sequence of objects, one for each input size n. For example, a "family of circuits."
fanin
The maximum number of input wires that any gate in a circuit can have. A "bounded fanin" circuit is one in which each gate has a constant number of input wires (often assumed to be 2).
fanout
The maximum number of output wires any gate in a circuit can have. When talking about "circuits," one usually assumes unbounded fanout unless specified otherwise.
finite automaton
An extremely simple model of computation. In the most basic form, a machine reads an input string once, from left to right. At any step, the machine is in one of a finite number of states. After it reads an input character (symbol), it transitions to a new state, determined by its current state as well as the character it just read. The machine outputs 'yes' or 'no' based on its state when it reaches the end of the input.
FOCS
IEEE Symposium on Foundations of Computer Science (held every fall).
function problem
A problem where the desired output is not necessarily a single bit, but could belong to a set with more than 2 elements. Contrast with decision problem.
formula
A circuit where each gate has fanout 1.
G
gate
A basic component used to build a circuit. Usually performs some elementary logical operation: for example, an AND gate takes a collection of input bits, and outputs a '1' bit if all the input bits are '1', and a '0' bit otherwise. See also fanin, fanout.
H
Hamming distance
Given two bit strings for some , their Hamming distance is the number of bits that are different between the two strings. This function satisfies the properties of a metric on the vectorspace , since it is non-negative, is symmetric, satisfies and the satisfies the triangle inequality.
Hamming weight
Given a bit string , the Hamming weight of is the number of non-zero bits. Equivalently, the Hamming weight of is the Hamming distance between and a string of all zeros having the same length.
; hard : A problem is hard for a class if everything in the class can be reduced to it (under some notion of reduction). If a problem is in a class and hard for the class, then it's complete for the class. Beware: hard and complete are not synonyms!
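For concreteness, a short Python sketch (illustrative only) of the Hamming distance and Hamming weight of bit strings given as Python strings of '0's and '1's:
<pre>
def hamming_distance(x, y):
    """Number of positions at which the equal-length bit strings x and y differ."""
    assert len(x) == len(y)
    return sum(1 for a, b in zip(x, y) if a != b)

def hamming_weight(x):
    """Number of non-zero bits; equals the distance from the all-zero string."""
    return hamming_distance(x, '0' * len(x))

print(hamming_distance("10110", "11100"))  # 2
print(hamming_weight("10110"))             # 3
</pre>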
===== I =====
; instance : A particular case of a problem.
===== L =====
; language : Another term for decision problem (but only a total decision problem, not a promise decision problem). An instance is in a language if and only if the answer to the decision problem is "yes."
: An alternate characterization of a language is as a set of words over an alphabet <math>\Sigma</math>: <math>L \subseteq \Sigma^*</math>. This is equivalent since determining membership in <math>L</math> is a decision problem, called the characteristic function of <math>L</math>. A simple example of a language is ODD, the set of all strings over <math>\{0,1\}</math> which end in 1.
; Las Vegas algorithm : A zero-error randomized algorithm, i.e. one that always returns the correct answer, but whose running time is a random variable. The term was introduced by Babai in 1979. Contrast with Monte Carlo. (A toy example appears at the end of this section.)
; low : A complexity class C is low for D if D<sup>C</sup> = D; that is, adding C as an oracle does not increase the power of D. C is self-low if C<sup>C</sup> = C.
; lower bound : A result showing that a function grows at least at a certain asymptotic rate. Thus, a lower bound on the complexity of a problem implies that any algorithm for the problem requires at least a certain amount of resources. Lower bounds are much harder to come by than upper bounds.
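As a toy illustration of the Las Vegas entry above (the setup is hypothetical: a bit string promised to contain at least one 1), the following Python sketch always returns a correct answer, while only its number of iterations is random:
<pre>
import random

def find_a_one(x):
    """Las Vegas search: x is a bit string promised to contain at least one '1'.
    The returned index is always correct; only the number of samples is random."""
    n = len(x)
    while True:
        i = random.randrange(n)
        if x[i] == '1':
            return i

print(find_a_one("00010010"))  # always 3 or 6; the iteration count varies run to run
</pre>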
===== M =====
; many-one reduction : A reduction from problem A to problem B, in which an algorithm converts an instance of A into an instance of B having the same answer. Also called a Karp reduction. (Contrast with Turing reduction.)
; monotone : A function is monotone (or monotonic) if, when one increases any of the inputs, the output never decreases (it can only increase or stay the same). A Boolean circuit is monotone if it consists only of AND and OR gates, no NOT gates.
; monotone-nonmonotone gap : A Boolean function has a monotone-nonmonotone gap if it has nonmonotone Boolean circuits (using AND, OR, and NOT gates) that are smaller than any monotone Boolean circuits (without NOT gates) for it.
; Monte Carlo algorithm : A bounded-error randomized algorithm, i.e. one that returns the correct answer only with some specified probability. The error probability can be either one-sided or two-sided. (In physics and engineering, the term refers more broadly to any algorithm based on random sampling.) The term was introduced by Metropolis and Ulam around 1945. Contrast with Las Vegas.
===== N =====
; nondeterministic machine : A hypothetical machine that, when faced with a choice, is able to make all possible choices at once - i.e. to branch off into different 'paths.' In the end, the results from all the paths must be combined somehow into a single answer. One can obtain dozens of different models of computation, depending on the exact way this is stipulated to happen. For example, an NP machine answers 'yes' if any of its paths answer 'yes.' By contrast, a PP machine answers 'yes' if the majority of its paths answer 'yes.'
; non-negligible : A probability is non-negligible if it's greater than <math>1/p(n)</math> for some polynomial <math>p</math>, where <math>n</math> is the size of the input.
; nonuniform : This means that a different algorithm can be used for each input size. Boolean circuits are a nonuniform model of computation -- one might have a circuit for input instances of size 51 that looks completely different from the circuit for instances of size 50.
===== O =====
; o ("little-oh") : For a function <math>f(n)</math> to be <math>o(g(n))</math> means that <math>f(n)</math> is <math>O(g(n))</math> and is not <math>\Omega(g(n))</math> (i.e. <math>f(n)</math> grows more slowly than <math>g(n)</math>).
O ("big-oh")
For a function to be means that for some nonnegative constant , is less than for all sufficiently large .
; Ω (Omega) : For a function <math>f(n)</math> to be <math>\Omega(g(n))</math> means that for some positive constant <math>c</math>, <math>f(n)</math> is greater than <math>c \cdot g(n)</math> for all sufficiently large <math>n</math>.
; oracle : Also called "black box." An imaginary device that solves some computational problem immediately.
: Note: An oracle is specified by the answers it gives to every possible question you could ask it. So in some contexts, 'oracle' is more or less synonymous with 'input' -- but usually an input so long that the algorithm can only examine a small fraction of it.
; O-optimal or O-speedup : Informally, an O-optimal algorithm is one that is optimal in big-O notation. More formally, an algorithm A accepting a language L is O-optimal if for any other A' accepting L, there exists a constant <math>c</math> such that for all inputs <math>x</math>: <math>\mathrm{time}_A(x) \le c \cdot \mathrm{time}_{A'}(x)</math>. A language with no O-optimal A is said to have O-speedup. See Speedup.
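A worked example tying these notations together (an illustration, not part of the glossary proper): let <math>f(n) = 3n^2 + 5n</math>. Then <math>f(n)</math> is <math>O(n^2)</math> (since <math>f(n) \le 8n^2</math> for all <math>n \ge 1</math>), is <math>\Omega(n^2)</math> (since <math>f(n) \ge 3n^2</math>), hence is <math>\Theta(n^2)</math>, and is <math>o(n^3)</math> (it is <math>O(n^3)</math> but not <math>\Omega(n^3)</math>).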
===== P =====
; path : A single sequence of choices that could be made by a nondeterministic machine.
; p-measure : A game-theoretic reformulation of the classical Lebesgue measure, whose full definition is too long to fit here; please see the survey paper by Lutz, [Lut93], wherein the term is formally defined. The measure is useful in proving zero-one laws. Also known as "Lutz's p-measure."
; polylogarithmic : <math>(\log n)^c</math>, where <math>c</math> is a constant. Also an adverb ("polylogarithmically").
; polynomial : To mathematicians, a polynomial in <math>n</math> is a sum of multiples of nonnegative integer powers of <math>n</math>: for example, <math>3n^2 + 5n + 17</math>. To computer scientists, on the other hand, polynomial often means upper-bounded by a polynomial: so <math>n \log n</math>, for example, is "polynomial." Also an adverb ("polynomially"). A function that grows polynomially is considered to be 'reasonable,' unlike, say, one that grows exponentially.
; post-selection : The process of accepting or rejecting an input conditioned on some random event occurring in a desired fashion. For example, guessing the solution to an NP-complete problem then killing yourself if the solution was incorrect could be viewed as an anthropic form of post-selection, as in any universes in which you are still alive, your random choice of a solution was correct. This intuition leads to classes such as BPP<sub>path</sub> and PostBQP.
; problem : A function from inputs to outputs, which we want an algorithm to compute. A crossword puzzle is not a problem; it's an instance. The set of all crossword puzzles is a problem. See also: decision problem, language.
; promise problem : A problem for which the input is guaranteed to have a certain property. I.e. if an input doesn't have that property, then we don't care what the algorithm does when given that input.
; p-optimal or p-speedup : (aka polynomially optimal or polynomial speedup) A Turing machine <math>M</math> accepting a language <math>L</math> is polynomially optimal if for any other <math>M'</math> accepting <math>L</math>, there exists a polynomial <math>p</math> such that for all inputs <math>x</math>: <math>\mathrm{time}_M(x) \le p\left(\mathrm{time}_{M'}(x)\right)</math>. A language with no p-optimal <math>M</math> is said to have p-speedup. p-optimal was defined by Krajicek and Pudlak [KP89]; see also Messner [Mes99].
; P-uniform or P-nonuniform : A family of Boolean circuits is P-uniform if a Turing machine given input string <math>1^n</math> (1 repeated n times) can output the member of the family with <math>n</math> inputs in time polynomial in n. A problem is P-nonuniform if no family of minimal Boolean circuits for the problem is P-uniform.
===== Q =====
; quantum : Making use of quantum-mechanical superposition, which is a particular kind of parallel and very fast nondeterministic algorithmic processing that collapses to one value (usually the answer to a problem instance) when its output is observed, captured, or used. If you don't know what that means, well, I can't explain it in this sentence (try lectures 9 and 10 from the Democritus course taught by the Zookeeper). But it has nothing to do with the original meaning of the word 'quantum' (i.e. a discrete unit).
; quasipolynomial : <math>2^{(\log n)^c}</math>, for some constant c.
===== R =====
; random access : This means that an algorithm can access any element <math>x_i</math> of a sequence immediately (by just specifying <math>i</math>). It doesn't have to go through <math>x_1, \dots, x_{i-1}</math> first. Note that this has nothing directly to do with randomness.
; randomized : Making use of randomness (as in 'randomized algorithm'). This is probably an unfortunate term, since it doesn't imply that one starts with something deterministic and then 'randomizes' it. See also Monte Carlo and Las Vegas.
; random self-reducible : A problem is random self-reducible if the ability to solve a large fraction of instances enables one to solve all instances. For example, the discrete logarithm problem is random self-reducible.
; reduction : A result of the form, "Problem A is at least as hard as Problem B." This is generally shown by giving an algorithm that transforms any instance of Problem B into an instance of Problem A.
; relativize : To add an oracle. We say a complexity class inclusion (or technique) is relativizing if it works relative to all oracles. Since there exist oracles A, B such that P<sup>A</sup> = NP<sup>A</sup> and P<sup>B</sup> does not equal NP<sup>B</sup> [BGS75], any technique that resolves P versus NP will need to be nonrelativizing.
===== S =====
; satisfiability (SAT) : One of the central problems in computer science. The problem is, given a Boolean formula, does there exist a setting of variables that satisfies the formula (that is, makes it evaluate to true)? For example, <math>(x \vee y) \wedge (\neg x \vee \neg y)</math> is satisfiable: <math>x=1, y=0</math> and <math>x=0, y=1</math> are both satisfying assignments. But <math>x \wedge \neg x</math> is unsatisfiable. See also: Garden entry on satisfiability.
; self-reducible : A problem is self-reducible if an oracle for the decision problem enables one to solve the associated function problem efficiently. For example, NP-complete problems are self-reducible. See also: downward self-reducible, random self-reducible.
; <span id="sensitivity">sensitivity</span> : Given a Boolean function <math>f:\{0,1\}^n\to\{0,1\}</math>, the sensitivity <math>s^X(f)</math> of an input <math>X=x_1\dots x_n</math> is the number of variables <math>x_i</math> such that flipping <math>x_i</math> changes the value of <math>f(X)</math>. Then <math>s(f)</math> is the maximum of <math>s^X(f)</math> over all <math>X</math>. (A brute-force code sketch appears at the end of this section.)
; size : When referring to a string, the number of bits. When referring to a circuit, the number of gates.
; space : The amount of memory used by an algorithm (as in space complexity).
; STOC : ACM Symposium on Theory of Computing (held every spring).
; string : A sequence of 1s and 0s. (See, it's not just physicists who plumb Nature's deepest secrets -- we computer scientists theorize about strings as well!)
: Note: For simplicity, one usually assumes that every character in a string is either 1 or 0, but strings over larger alphabets can also be considered.
; subexponential : Growing slower (as a function of n) than any exponential function. Depending on the context, this can either mean <math>2^{o(n)}</math> (so that the Number Field Sieve factoring algorithm, which runs in about <math>2^{n^{1/3}}</math> time, is "subexponential"); or <math>O\left(2^{n^{\epsilon}}\right)</math> for every <math>\epsilon > 0</math>.
; superpolynomial : Growing faster (as a function of <math>n</math>) than any polynomial in <math>n</math>. This is not the same as exponential: for example, <math>n^{\log n}</math> is superpolynomial, but not exponential.
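To make the sensitivity definition concrete, here is a brute-force Python sketch (illustrative only, and exponential in <math>n</math>, so suitable only for tiny functions) that computes <math>s(f)</math> for a Boolean function supplied as a Python callable:
<pre>
from itertools import product

def sensitivity(f, n):
    """Brute-force s(f) for f: {0,1}^n -> {0,1}, given as a Python callable."""
    best = 0
    for X in product((0, 1), repeat=n):
        s_X = 0
        for i in range(n):
            Y = list(X)
            Y[i] ^= 1                      # flip the i-th variable
            if f(tuple(Y)) != f(X):
                s_X += 1
        best = max(best, s_X)
    return best

# The n-bit OR function has sensitivity n, witnessed by the all-zero input.
print(sensitivity(lambda x: int(any(x)), 3))  # 3
</pre>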
===== T =====
; tape : The memory used by a Turing machine.
; Θ (Theta) : For a function <math>f(n)</math> to be <math>\Theta(g(n))</math> means that <math>f(n)</math> is <math>O(g(n))</math> and <math>\Omega(g(n))</math> (i.e. <math>f</math> and <math>g</math> grow at the same rate).
; tight bound : An upper bound that matches the lower bound, or vice versa. I.e. the best possible bound for a function.
; total : A total function is one that is defined on every possible input.
; truth table : A table of all possible inputs to a Boolean function, together with the corresponding outputs. (A small code sketch appears at the end of this section.)
; truth table reduction : A Turing reduction in which the oracle queries must be nonadaptive.
; Turing reduction : A reduction from problem A to problem B, in which the algorithm for problem A can make queries to an oracle for problem B. (Contrast with many-one reduction.)
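As a small illustration of the truth table entry, a Python sketch (illustrative only) that prints the truth table of a Boolean function supplied as a Python callable:
<pre>
from itertools import product

def print_truth_table(f, n):
    """Print every input in {0,1}^n together with the corresponding output of f."""
    for X in product((0, 1), repeat=n):
        print(''.join(map(str, X)), f(X))

# Example: the two-bit AND function.
print_truth_table(lambda x: int(all(x)), 2)
# 00 0
# 01 0
# 10 0
# 11 1
</pre>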
===== U =====
; unary : An inefficient encoding system, in which the integer n is denoted by writing n 1's in sequence.
; uniform : A single algorithm is used for all input lengths. For example, Turing machines are a uniform model of computation -- one just has to design a single Turing machine for multiplication, and it can multiply numbers of any length. (Contrast with the circuit model.)
; upper bound : A result showing that a function grows at most at a certain asymptotic rate. For example, any algorithm for a problem yields an upper bound on the complexity of the problem.
===== W =====
; with high probability (w.h.p.) : Usually this means with probability at least 2/3 (or any constant greater than 1/2). If an algorithm is correct with 2/3 probability, one can make the probability of correctness as high as one wants by just repeating several times and taking a majority vote. (A toy simulation appears at the end of this section.)
: Note: Sometimes people say "high probability" when they mean "non-negligible probability."
; word : Given an alphabet <math>\Sigma</math>, a word is a string <math>w \in \Sigma^*</math>. That is, a string of characters drawn from <math>\Sigma</math>. Using the alphabet <math>\Sigma = \{0, 1\}</math>, some valid words include <math>0</math>, <math>10</math>, and <math>1101</math>.
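To illustrate the amplification claim in the 'with high probability' entry, here is a toy Python simulation (the subroutine and its 2/3 success probability are assumed purely for illustration): repeating a two-sided-error subroutine an odd number of times and taking a majority vote drives the error probability down.
<pre>
import random

def noisy_answer(correct_bit, p_correct=2/3):
    """Stand-in for a randomized algorithm that is right with probability 2/3."""
    return correct_bit if random.random() < p_correct else 1 - correct_bit

def amplified_answer(correct_bit, repetitions=51):
    """Run the noisy subroutine an odd number of times and take a majority vote."""
    votes = sum(noisy_answer(correct_bit) for _ in range(repetitions))
    return 1 if 2 * votes > repetitions else 0

trials = 10000
errors = sum(amplified_answer(1) != 1 for _ in range(trials))
print(errors / trials)  # well under 1%; more repetitions push the error down further
</pre>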
===== X =====
===== Y =====
===== Z =====
; zero-one law : Any theorem which specifies that the probability of an event is either zero or one is known as a zero-one law.