---
title: "Syntactic sugar, and computing every function"
filename: "lec_03a_computing_every_function"
chapternum: "4"
---
- Get comfortable with syntactic sugar, or the automatic translation of higher-level logic to low-level gates. \
- Learn the proof of a major result: every finite function can be computed by a Boolean circuit. \
- Start thinking quantitatively about the number of lines required for computation.
"[In 1951] I had a running compiler and nobody would touch it because, they carefully told me, computers could only do arithmetic; they could not do programs.", Grace Murray Hopper, 1986.
"Syntactic sugar causes cancer of the semicolon.", Alan Perlis, 1982.
The computational models we considered thus far are as "bare bones" as they come.
For example, our NAND-CIRC "programming language" has only the single operation `foo = NAND(bar,blah)`.
In this chapter we will see that these simple models are actually equivalent to more sophisticated ones.
The key observation is that we can implement more complex features using our basic building blocks, and then use these new features themselves as building blocks for even more sophisticated features.
This is known as "syntactic sugar" in the field of programming language design since we are not modifying the underlying programming model itself, but rather we merely implement new features by syntactically transforming a program that uses such features into one that doesn't.
This chapter provides a "toolkit" that can be used to show that many functions can be computed by NAND-CIRC programs, and hence also by Boolean circuits.
We will also use this toolkit to prove a fundamental theorem: every finite function $f:\{0,1\}^n \rightarrow \{0,1\}^m$ can be computed by a Boolean circuit.
::: {.nonmath}
In this chapter, we will see our first major result: every finite function can be computed by some Boolean circuit (see circuit-univ-thm{.ref} and finitecomputation{.ref}).
This is sometimes known as the "universality" of Boolean circuits.
Despite being an important result, circuit-univ-thm{.ref} is actually not that hard to prove. seccomputalternative{.ref} presents a relatively simple direct proof of this result.
However, in secsyntacticsugar{.ref} and seclookupfunc{.ref} we derive this result using the concept of "syntactic sugar" (see synsugar{.ref}).
This is an important concept for programming languages theory and practice.
The idea behind "syntactic sugar" is that we can extend a programming language by implementing advanced features from its basic components.
For example, we can take the AON-CIRC and NAND-CIRC programming languages we saw in compchap{.ref}, and extend them to achieve features such as user-defined functions (e.g., def Foo(...)
), conditional statements (e.g., if blah ...
), and more.
Once we have these features, it is not that hard to show that we can take the "truth table" (table of all inputs and outputs) of any function, and use that to create an AON-CIRC or NAND-CIRC program that maps each input to its corresponding output.
We will also get our first glimpse of quantitative measures in this chapter. While circuit-univ-thm{.ref} tells us that every function can be computed by some circuit, the number of gates in this circuit can be exponentially large. (We are not using here "exponentially" as some colloquial term for "very very big" but in a very precise mathematical sense, which also happens to coincide with being very very big.) It turns out that some functions (for example, integer addition and multiplication) can in fact be computed using far fewer gates. We will explore this issue of "gate complexity" more deeply in codeanddatachap{.ref} and following chapters.
:::
We now present some examples of "syntactic sugar" transformations that we can use in constructing straightline programs or circuits. We focus on the straight-line programming language view of our computational models, and specifically (for the sake of concreteness) on the NAND-CIRC programming language. This is convenient because many of the syntactic sugar transformations we present are easiest to think about in terms of applying "search and replace" operations to the source code of a program. However, by equivalencemodelsthm{.ref}, all of our results hold equally well for circuits, whether ones using NAND gates or Boolean circuits that use the AND, OR, and NOT operations. Enumerating the examples of such syntactic sugar transformations can be a little tedious, but we do it for two reasons:
- To convince you that despite their seeming simplicity and limitations, simple models such as Boolean circuits or the NAND-CIRC programming language are actually quite powerful.

- So you can realize how lucky you are to be taking a theory of computation course and not a compilers course... :)
One staple of almost any programming language is the ability to define and then execute procedures or subroutines. (These are often known as functions in some programming languages, but we prefer the name procedures to avoid confusion with the function that a program computes.) The NAND-CIRC programming language does not have this mechanism built in. However, we can achieve the same effect using the time-honored technique of "copy and paste". Specifically, we can replace code which defines a procedure such as
```python
def Proc(a,b):
    proc_code
    return c
some_code
f = Proc(d,e)
some_more_code
```
with the following code where we "paste" the code of `Proc`:

```python
some_code
proc_code'
some_more_code
```

and where `proc_code'` is obtained by replacing all occurrences of `a` with `d`, `b` with `e`, and `c` with `f`.
When doing so, we need to ensure that all other variables appearing in `proc_code'` don't interfere with other variables. We can always do so by renaming variables to new names that were not used before.
The above reasoning leads to the proof of the following theorem:
Let NAND-CIRC-PROC be the programming language NAND-CIRC augmented with the syntax above for defining procedures.
Then for every NAND-CIRC-PROC program $P$, there exists a standard (i.e., "sugar-free") NAND-CIRC program $P'$ that computes the same function as $P$.
::: {.remark title="No recursive procedure" #norecursion}
NAND-CIRC-PROC only allows non-recursive procedures. In particular, the code of a procedure `Proc` cannot call `Proc` itself, but can only use procedures that were defined before it.
Without this restriction, the above "search and replace" procedure might never terminate and functionsynsugarthm{.ref} would not be true.
:::
functionsynsugarthm{.ref} can be proven using the transformation above, but since the formal proof is somewhat long and tedious, we omit it here.
::: {.example title="Computing Majority from NAND using syntactic sugar" #majcircnand} Procedures allow us to express NAND-CIRC programs much more cleanly and succinctly. For example, because we can compute AND, OR, and NOT using NANDs, we can compute the Majority function as follows:
```python
def NOT(a):
    return NAND(a,a)

def AND(a,b):
    temp = NAND(a,b)
    return NOT(temp)

def OR(a,b):
    temp1 = NOT(a)
    temp2 = NOT(b)
    return NAND(temp1,temp2)

def MAJ(a,b,c):
    and1 = AND(a,b)
    and2 = AND(a,c)
    and3 = AND(b,c)
    or1 = OR(and1,and2)
    return OR(or1,and3)

print(MAJ(0,1,1))
# 1
```
progcircmajfig{.ref} presents the "sugar-free" NAND-CIRC program (and the corresponding circuit) that is obtained by "expanding out" this program, replacing the calls to procedures with their definitions.
:::
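Since the programs above use Python syntax, we can sanity-check them directly in Python by supplying an implementation of `NAND` and comparing `MAJ` against the direct definition of majority. This is a quick test sketch, not itself a NAND-CIRC program:

```python
from itertools import product

def NAND(a, b):
    # NAND outputs 0 only when both inputs are 1
    return 1 - (a & b)

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def MAJ(a, b, c):
    and1 = AND(a, b)
    and2 = AND(a, c)
    and3 = AND(b, c)
    return OR(OR(and1, and2), and3)

# compare against the direct definition of majority on all 8 inputs
for a, b, c in product([0, 1], repeat=3):
    assert MAJ(a, b, c) == (1 if a + b + c >= 2 else 0)

print(MAJ(0, 1, 1))
# 1
```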
::: { .bigidea #synsugar}
Once we show that a computational model $X$ is equivalent to a model that has feature $Y$, we can assume we have $Y$ whenever we need to show that some function is computable by $X$.
:::
::: {.remark title="Counting lines" #countinglines}
While we can use syntactic sugar to present NAND-CIRC programs in more readable ways, we did not change the definition of the language itself.
Therefore, whenever we say that some function $f$ can be computed by an $s$-line NAND-CIRC program, we mean a standard "sugar-free" program, obtained after all the syntactic sugar has been expanded out.
:::
We can write a Python program that implements the proof of functionsynsugarthm{.ref}. That is, a Python program that takes a NAND-CIRC-PROC program that includes the definition of a procedure `Proc` of two arguments `x` and `y`, and transforms it into a standard NAND-CIRC program: whenever we see a line of the form `foo = Proc(bar,blah)`, we replace it by:
- The body of the procedure `Proc` (replacing all occurrences of `x` and `y` with `bar` and `blah` respectively).

- A line `foo = exp`, where `exp` is the expression following the `return` statement in the definition of the procedure `Proc`.
To make this more robust we can add a prefix to the internal variables used by `Proc`, to ensure they don't conflict with the variables of the rest of the program.
The code of the Python function `desugar` below achieves such a transformation.
```python
import re

def desugar(code, func_name, func_args, func_body):
    """
    Replaces all occurrences of
        foo = func_name(func_args)
    with
        func_body[x->a,y->b]
        foo = [result returned in func_body]
    """
    # Uses Python regular expressions to simplify the search and replace,
    # see https://docs.python.org/3/library/re.html and Chapter 9 of the book

    # regular expression for capturing a list of variable names separated by commas
    arglist = ",".join([r"([a-zA-Z0-9\_\[\]]+)" for i in range(len(func_args))])
    # regular expression for capturing a statement of the form
    # "variable = func_name(arguments)"
    regexp = fr'([a-zA-Z0-9\_\[\]]+)\s*=\s*{func_name}\({arglist}\)\s*$'
    while True:
        m = re.search(regexp, code, re.MULTILINE)
        if not m: break
        newcode = func_body
        # replace function arguments by the variables from the function invocation
        for i in range(len(func_args)):
            newcode = newcode.replace(func_args[i], m.group(i+2))
        # Splice the new code inside
        newcode = newcode.replace('return', m.group(1) + " = ")
        code = code[:m.start()] + newcode + code[m.end()+1:]
    return code
```
progcircmajfig{.ref} shows the result of applying `desugar` to the program of majcircnand{.ref} that uses syntactic sugar to compute the Majority function.
Specifically, we first apply `desugar` to remove usage of the `OR` function, then apply it to remove usage of the `AND` function, and finally apply it a third time to remove usage of the `NOT` function.
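To see the transformation in action, here is a self-contained run of a copy of `desugar` on a small two-line program that uses `OR`; the program text and the `OR` body are our own illustrative choices. (As noted above, a robust version would also prefix the internal `temp` variables so that different expansions don't collide; here the collision is harmless since each expansion writes its temporaries before using them.)

```python
import re

def desugar(code, func_name, func_args, func_body):
    """Replace every line 'foo = func_name(...)' with func_body,
    substituting arguments and splicing the return value into foo."""
    arglist = ",".join([r"([a-zA-Z0-9\_\[\]]+)" for i in range(len(func_args))])
    regexp = fr'([a-zA-Z0-9\_\[\]]+)\s*=\s*{func_name}\({arglist}\)\s*$'
    while True:
        m = re.search(regexp, code, re.MULTILINE)
        if not m: break
        newcode = func_body
        for i in range(len(func_args)):
            newcode = newcode.replace(func_args[i], m.group(i + 2))
        newcode = newcode.replace('return', m.group(1) + " = ")
        code = code[:m.start()] + newcode + code[m.end() + 1:]
    return code

# body of OR in terms of NAND (note the trailing newline)
or_body = "temp1 = NAND(a,a)\ntemp2 = NAND(b,b)\nreturn NAND(temp1,temp2)\n"
sugared = "u = OR(x,y)\nv = OR(u,z)\n"
desugared = desugar(sugared, "OR", ["a", "b"], or_body)
print(desugared)  # six lines, all of them NAND statements
```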
::: {.remark title="Parsing function definitions (optional)" #parsingdeg}
The function `desugar` in desugarcode{.ref} assumes that it is given the procedure already split up into its name, arguments, and body.
It is not crucial for our purposes to describe precisely how to scan a definition and split it up into these components, but in case you are curious, it can be achieved in Python via the following code:
```python
import re

def parse_func(code):
    """Parse a function definition into name, arguments and body"""
    lines = [l.strip() for l in code.split('\n')]
    regexp = r'def\s+([a-zA-Z\_0-9]+)\(([\sa-zA-Z0-9\_,]+)\)\s*:\s*'
    m = re.match(regexp, lines[0])
    return m.group(1), m.group(2).split(','), '\n'.join(lines[1:])
```
:::
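For instance, applying a copy of `parse_func` to a definition of `OR` splits it into its three components (illustrative usage):

```python
import re

def parse_func(code):
    """Parse a function definition into name, arguments and body"""
    lines = [l.strip() for l in code.split('\n')]
    regexp = r'def\s+([a-zA-Z\_0-9]+)\(([\sa-zA-Z0-9\_,]+)\)\s*:\s*'
    m = re.match(regexp, lines[0])
    return m.group(1), m.group(2).split(','), '\n'.join(lines[1:])

name, args, body = parse_func(
    "def OR(a,b):\ntemp1 = NAND(a,a)\ntemp2 = NAND(b,b)\nreturn NAND(temp1,temp2)")
print(name, args)
# OR ['a', 'b']
```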
Another sorely missing feature in NAND-CIRC is a conditional statement, such as the `if`/`then` constructs that are found in many programming languages.
However, using procedures, we can obtain an ersatz if/then construct.
First we can compute the function $IF:\{0,1\}^3 \rightarrow \{0,1\}$ such that $IF(a,b,c)$ equals $b$ if $a=1$ and $c$ if $a=0$.
Before reading onward, try to see how you could compute the `if`/`then` types of constructs.
The $IF$ function can be implemented from NANDs as follows:
```python
def IF(cond,a,b):
    notcond = NAND(cond,cond)
    temp = NAND(b,notcond)
    temp1 = NAND(a,cond)
    return NAND(temp,temp1)
```
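We can verify that this NAND-based `IF` behaves as a conditional by checking all eight inputs (a test sketch with `NAND` implemented directly):

```python
from itertools import product

def NAND(a, b):
    return 1 - (a & b)

def IF(cond, a, b):
    notcond = NAND(cond, cond)
    temp = NAND(b, notcond)
    temp1 = NAND(a, cond)
    return NAND(temp, temp1)

# IF(cond,a,b) should return a when cond=1 and b when cond=0
for cond, a, b in product([0, 1], repeat=3):
    assert IF(cond, a, b) == (a if cond == 1 else b)
```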
The `IF` function allows us to implement conditional assignments: we can replace code of the form

```python
if (condition): assign blah to variable foo
```

with code of the form

```python
foo = IF(condition, blah, foo)
```

that assigns to `foo` its old value when `condition` equals $0$, and the value of `blah` otherwise.
More generally we can replace code of the form
```python
if (cond):
    a = ...
    b = ...
    c = ...
```
with code of the form
```python
temp_a = ...
temp_b = ...
temp_c = ...
a = IF(cond,temp_a,a)
b = IF(cond,temp_b,b)
c = IF(cond,temp_c,c)
```
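To double-check this transformation, we can verify for every value of `cond` that `IF(cond, new, old)` agrees with the semantics of the conditional assignment (a small test sketch reusing the NAND-based `IF` from above):

```python
def NAND(a, b):
    return 1 - (a & b)

def IF(cond, a, b):
    notcond = NAND(cond, cond)
    temp = NAND(b, notcond)
    temp1 = NAND(a, cond)
    return NAND(temp, temp1)

for cond in [0, 1]:
    for old in [0, 1]:
        for new in [0, 1]:
            # desired semantics of "if (cond): var = new"
            expected = new if cond == 1 else old
            assert IF(cond, new, old) == expected
```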
Using such transformations, we can prove the following theorem. Once again we omit the (not too insightful) full formal proof, though see functionsynsugarthmpython{.ref} for some hints on how to obtain it.
Let NAND-CIRC-IF be the programming language NAND-CIRC augmented with `if`/`then`/`else` statements that allow code to be conditionally executed based on whether a variable is equal to $0$ or $1$.
Then for every NAND-CIRC-IF program $P$, there exists a standard (i.e., "sugar-free") NAND-CIRC program $P'$ that computes the same function as $P$.
Using "syntactic sugar", we can write the integer addition function as follows:
```python
# Add two n-bit integers
# Use LSB first notation for simplicity
def ADD(A,B):
    Result = [0]*(n+1)
    Carry = [0]*(n+1)
    Carry[0] = zero(A[0])
    for i in range(n):
        Result[i] = XOR(Carry[i],XOR(A[i],B[i]))
        Carry[i+1] = MAJ(Carry[i],A[i],B[i])
    Result[n] = Carry[n]
    return Result

print(ADD([1,1,1,0,0],[1,0,0,0,0]))
# [0, 0, 0, 1, 0, 0]
```
where `zero` is the constant zero function, and `MAJ` and `XOR` correspond to the majority and XOR functions respectively.
While we use Python syntax for convenience, for every fixed $n$ the function `ADD` is a finite function, and we can "unroll" the loop `for i in range(n)` by simply repeating the code $n$ times, replacing the variable `i` with the values $0,1,\ldots,n-1$.
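For concreteness, here is a self-contained Python rendering of the program with $n$ fixed to $5$, where `zero`, `XOR`, and `MAJ` are built from `NAND`; the particular NAND implementations of `zero` and `XOR` below are our own illustrative choices:

```python
def NAND(a, b):
    return 1 - (a & b)

def NOT(a): return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b): return NAND(NOT(a), NOT(b))

def zero(a):
    # constant 0, computed with NANDs only: NAND(a, NOT(a)) is always 1
    return NOT(NAND(a, NOT(a)))

def XOR(a, b):
    # XOR from four NANDs
    u = NAND(a, b)
    return NAND(NAND(a, u), NAND(b, u))

def MAJ(a, b, c):
    return OR(OR(AND(a, b), AND(a, c)), AND(b, c))

n = 5

def ADD(A, B):
    Result = [0] * (n + 1)
    Carry = [0] * (n + 1)
    Carry[0] = zero(A[0])
    for i in range(n):
        Result[i] = XOR(Carry[i], XOR(A[i], B[i]))
        Carry[i + 1] = MAJ(Carry[i], A[i], B[i])
    Result[n] = Carry[n]
    return Result

print(ADD([1, 1, 1, 0, 0], [1, 0, 0, 0, 0]))   # 7 + 1 = 8 in LSB-first notation
# [0, 0, 0, 1, 0, 0]
```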
By going through the above program carefully and accounting for the number of gates, we can see that it yields a proof of the following theorem (see also addnumoflinesfig{.ref}):
For every $n \in \N$, the function $ADD_n$ that computes the addition of two $n$-bit numbers can be computed by a NAND-CIRC program of $O(n)$ lines.
Once we have addition, we can use the grade-school algorithm to obtain multiplication as well, thus obtaining the following theorem:
For every $n \in \N$, the function $MULT_n$ that computes the multiplication of two $n$-bit numbers can be computed by a NAND-CIRC program of $O(n^2)$ lines.
We omit the proof, though in multiplication-ex{.ref} we ask you to supply a "constructive proof" in the form of a program (in your favorite programming language) that on input a number $n$, outputs a NAND-CIRC program computing $MULT_n$.
The $LOOKUP$ function of order $k$ takes as input a table $X \in \{0,1\}^{2^k}$ and an index $i \in \{0,1\}^k$ (which we identify with a number in $[2^k]$), and outputs the $i$-th entry of the table. That is, $LOOKUP_k:\{0,1\}^{2^k+k} \rightarrow \{0,1\}$ is defined by $LOOKUP_k(X,i) = X_i$.
See lookupfig{.ref} for an illustration of the LOOKUP function.
It turns out that for every $k$, we can compute $LOOKUP_k$ using a number of lines that is linear in $2^k$:

For every $k>0$, the function $LOOKUP_k$ can be computed by a NAND-CIRC program of $O(2^k)$ lines.
An immediate corollary of lookup-thm{.ref} is that for every $k$, $LOOKUP_k$ can also be computed by a Boolean circuit of $O(2^k)$ gates.
We prove lookup-thm{.ref} by induction.
For the case $k=1$, the function $LOOKUP_1$ maps $(X_0,X_1,i) \in \{0,1\}^3$ to $X_i$; this is (up to the order of the inputs) the same as the $IF$ function above, and hence can be computed by an $O(1)$-line NAND-CIRC program.
As a warm-up for the case of general $k$, let us see how to compute $LOOKUP_2$ using $LOOKUP_1$:
```python
def LOOKUP2(X[0],X[1],X[2],X[3],i[0],i[1]):
    if i[0]==1:
        return LOOKUP1(X[2],X[3],i[1])
    else:
        return LOOKUP1(X[0],X[1],i[1])
```
or in other words,
```python
def LOOKUP2(X[0],X[1],X[2],X[3],i[0],i[1]):
    a = LOOKUP1(X[2],X[3],i[1])
    b = LOOKUP1(X[0],X[1],i[1])
    return IF(i[0],a,b)
```
More generally, as shown in the following lemma, we can compute $LOOKUP_k$ using two invocations of $LOOKUP_{k-1}$ and one invocation of $IF$:

For every $k \geq 2$, $LOOKUP_k(X_0,\ldots,X_{2^k-1},i_0,\ldots,i_{k-1})$ is equal to
$$IF\bigl(i_0,\; LOOKUP_{k-1}(X_{2^{k-1}},\ldots,X_{2^k-1},i_1,\ldots,i_{k-1}),\; LOOKUP_{k-1}(X_0,\ldots,X_{2^{k-1}-1},i_1,\ldots,i_{k-1})\bigr) \;.$$

If the most significant bit $i_0$ of the index is equal to $1$, then the index points into the second half of the table, and so $LOOKUP_k$ equals $LOOKUP_{k-1}$ applied to the second half of $X$ with index $i_1,\ldots,i_{k-1}$; otherwise it equals $LOOKUP_{k-1}$ applied to the first half of $X$ with the same index.
Proof of lookup-thm{.ref} from lookup-rec-lem{.ref}. Now that we have lookup-rec-lem{.ref},
we can complete the proof of lookup-thm{.ref}.
We will prove by induction on $k$ that $LOOKUP_k$ can be computed by a NAND-CIRC program of $O(2^k)$ lines. The base case $k=1$ was handled above, and for the induction step we use lookup-rec-lem{.ref}, computing $LOOKUP_k$ via the following pseudocode:

```python
a = LOOKUP_(k-1)(X[0],...,X[2^(k-1)-1],i[1],...,i[k-1])
b = LOOKUP_(k-1)(X[2^(k-1)],...,X[2^k-1],i[1],...,i[k-1])
return IF(i[0],b,a)
```
If we let $L(k)$ denote the number of lines needed to compute $LOOKUP_k$, then the above shows that $L(k) \leq 2L(k-1) + c$ for some constant $c$, and this recursion solves to $L(k) = O(2^k)$.
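The recursive construction translates directly into Python. The sketch below (our own rendering, with the index bits given most-significant-first as in the lemma) checks it against direct indexing:

```python
def LOOKUP(X, i):
    """Compute X[i] where X has 2^k entries and i is a list of k bits,
    with i[0] the most significant bit of the index."""
    if len(i) == 1:
        return X[1] if i[0] == 1 else X[0]
    half = len(X) // 2
    if i[0] == 1:
        return LOOKUP(X[half:], i[1:])   # second half of the table
    return LOOKUP(X[:half], i[1:])       # first half of the table

# check against direct indexing for k=3
X = [0, 1, 1, 0, 1, 0, 0, 1]
for idx in range(8):
    bits = [(idx >> (2 - j)) & 1 for j in range(3)]  # MSB first
    assert LOOKUP(X, bits) == X[idx]
```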
At this point we know the following facts about NAND-CIRC programs (and so equivalently about Boolean circuits and our other equivalent models):
- They can compute at least some non-trivial functions.

- Coming up with NAND-CIRC programs for various functions is a very tedious task.
Thus I would not blame the reader if they were not particularly looking forward to a long sequence of examples of functions that can be computed by NAND-CIRC programs. However, it turns out we are not going to need this, as we can show in one fell swoop that NAND-CIRC programs can compute every finite function:
There exists some constant $c>0$ such that for every $n,m>0$ and every function $f:\{0,1\}^n \rightarrow \{0,1\}^m$, there is a NAND-CIRC program with at most $c \cdot m \cdot 2^n$ lines that computes $f$.
By equivalencemodelsthm{.ref}, the models of NAND circuits, NAND-CIRC programs, AON-CIRC programs, and Boolean circuits, are all equivalent to one another, and hence NAND-univ-thm{.ref} holds for all these models. In particular, the following theorem is equivalent to NAND-univ-thm{.ref}:
There exists some constant $c>0$ such that for every $n,m>0$ and every function $f:\{0,1\}^n \rightarrow \{0,1\}^m$, there is a Boolean circuit with at most $c \cdot m \cdot 2^n$ gates that computes $f$.
::: { .bigidea #finitecomputation } Every finite function can be computed by a large enough Boolean circuit. :::
Improved bounds. Though it will not be of great importance to us, it is possible to improve on the proof of NAND-univ-thm{.ref} and shave an extra factor of $n$ from the number of gates needed (see NAND-univ-thm-improved{.ref} below).
To prove NAND-univ-thm{.ref}, we need to give a NAND circuit, or equivalently a NAND-CIRC program, for every possible function.
We will restrict our attention to the case of Boolean functions (i.e., $m=1$); the general case follows by combining the programs for each of the $m$ output bits (see mult-bit-ex{.ref}).
| Input ($x$) | Output ($G(x)$) |
|---|---|
| $0000$ | 1 |
| $1000$ | 1 |
| $0100$ | 0 |
| $1100$ | 0 |
| $0010$ | 1 |
| $1010$ | 0 |
| $0110$ | 0 |
| $1110$ | 1 |
| $0001$ | 0 |
| $1001$ | 0 |
| $0101$ | 0 |
| $1101$ | 0 |
| $0011$ | 1 |
| $1011$ | 1 |
| $0111$ | 1 |
| $1111$ | 1 |

Table: An example of a function $G:\{0,1\}^4 \rightarrow \{0,1\}$, specified by its truth table.
For every $x \in \{0,1\}^4$, the value $G(x)$ is listed in the table above, and so we can compute $G$ using the following pseudocode, which makes use of a `LOOKUP_4` procedure:
```python
G0000 = 1
G1000 = 1
G0100 = 0
...
G0111 = 1
G1111 = 1
Y[0] = LOOKUP_4(G0000,G1000,...,G1111,
                X[0],X[1],X[2],X[3])
```
We can translate this pseudocode into an actual NAND-CIRC program by adding three lines to define variables `zero` and `one` that are initialized to $0$ and $1$ respectively, and then replacing a statement such as `Gxxx = 0` with `Gxxx = NAND(one,one)` and a statement such as `Gxxx = 1` with `Gxxx = NAND(zero,zero)`.
The call to `LOOKUP_4` will be replaced by the NAND-CIRC program that computes $LOOKUP_4$, obtained via lookup-thm{.ref}.
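The same recipe is easy to express in Python: hardwire the truth table as a list and reduce evaluation to a lookup. Here we use the LSB-first ordering of the pseudocode above together with the values of $G$ from this section:

```python
# truth table of G in the LSB-first order G0000, G1000, G0100, ...
G_table = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1]

def G(x0, x1, x2, x3):
    # the position of input (x0,x1,x2,x3) in the LSB-first enumeration
    return G_table[x0 + 2 * x1 + 4 * x2 + 8 * x3]

assert G(0, 0, 0, 0) == 1   # G0000
assert G(0, 1, 0, 0) == 0   # G0100
assert G(1, 1, 1, 1) == 1   # G1111
```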
There was nothing about the above reasoning that was particular to the function $G$. Given any function $F: \{0,1\}^n \rightarrow \{0,1\}$, we can write a NAND-CIRC program that does the following:
- Initialize $2^n$ variables of the form `F00...0` till `F11...1` so that for every $z\in\{0,1\}^n$, the variable corresponding to $z$ is assigned the value $F(z)$.

- Compute $LOOKUP_n$ on the $2^n$ variables initialized in the previous step, with the index variables being the input variables `X[`$0$`]`,...,`X[`$n-1$`]`. That is, just like in the pseudocode for `G` above, we use `Y[0] = LOOKUP(F00..00,...,F11..1,X[0],..,X[`$n-1$`])`.
The total number of lines in the resulting program is $O(2^n)$: roughly $2^n$ lines for initializing the variables, plus the $O(2^n)$ lines of the program computing $LOOKUP_n$. This completes the proof of NAND-univ-thm{.ref} for the case $m=1$.
While NAND-univ-thm{.ref} seems striking at first, in retrospect, it is perhaps not that surprising that every finite function can be computed with a NAND-CIRC program. After all, a finite function can be fully specified by its truth table, and the proof above simply "hardwires" this table into the program.
By being a little more careful, we can improve the bound of NAND-univ-thm{.ref} and show that every function $f:\{0,1\}^n \rightarrow \{0,1\}$ can be computed by a NAND-CIRC program of at most $O(2^n/n)$ lines:

There exists a constant $c>0$ such that for every function $f:\{0,1\}^n \rightarrow \{0,1\}$, there is a NAND-CIRC program of at most $c \cdot 2^n/n$ lines that computes $f$.
::: {.proof data-ref="NAND-univ-thm-improved"}
As before, it is enough to prove the case that
We let
{#efficient_circuit_allfuncfig}
eqcomputefusinggeffcircuit{.eqref} means that for every
To complete the proof we need to give a bound on
{#computemanyfunctionsfig .margin }
In our case, because there are at most
$$
O\left(\tfrac{2^n}{n^2} \cdot n + \tfrac{2^n}{n-2\log n} \right)
\leq
O\left(\tfrac{2^n}{n} + \tfrac{2^n}{0.5n} \right) = O\left( \tfrac{2^n}{n} \right)
$$
which is what we wanted to prove. (We used above the fact that $n - 2\log n \geq 0.5 n$ for sufficiently large $n$.)
:::
Using the connection between NAND-CIRC programs and Boolean circuits, an immediate corollary of NAND-univ-thm-improved{.ref} is the following improvement to circuit-univ-thm{.ref}:
There exists some constant $c>0$ such that for every $n>0$ and every function $f:\{0,1\}^n \rightarrow \{0,1\}$, there is a Boolean circuit with at most $c \cdot 2^n / n$ gates that computes $f$.
circuit-univ-thm{.ref} is a fundamental result in the theory (and practice!) of computation. In this section, we present an alternative proof of this basic fact that Boolean circuits can compute every finite function. This alternative proof gives a somewhat worse quantitative bound on the number of gates but it has the advantage of being simpler, working directly with circuits and avoiding the usage of all the syntactic sugar machinery. (However, that machinery is useful in its own right, and will find other applications later on.)
There exists some constant $c>0$ such that for every $n>0$ and every function $f:\{0,1\}^n \rightarrow \{0,1\}$, there is a Boolean circuit with at most $c \cdot n \cdot 2^n$ gates that computes $f$.
{#computeallfuncaltfig .margin }
The idea of the proof is illustrated in computeallfuncaltfig{.ref}. As before, it is enough to focus on the case of functions with a single output bit (i.e., $f:\{0,1\}^n \rightarrow \{0,1\}$).
::: {.proof data-ref="circuit-univ-alt-thm"}
We prove the theorem for the case of a function $f:\{0,1\}^n \rightarrow \{0,1\}$ with a single output bit; the general case follows as before. The proof consists of two steps:

- We show that for every $\alpha\in \{0,1\}^n$, there is an $O(n)$-sized circuit that computes the function $\delta_\alpha:\{0,1\}^n \rightarrow \{0,1\}$, where $\delta_\alpha(x)=1$ iff $x=\alpha$.

- We then show that this implies the existence of an $O(n\cdot 2^n)$-sized circuit that computes $f$, by writing $f(x)$ as the OR of $\delta_\alpha(x)$ for all $\alpha\in \{0,1\}^n$ such that $f(\alpha)=1$. (If $f$ is the constant zero function and hence there is no such $\alpha$, then we can use the circuit $f(x) = x_0 \wedge \overline{x}_0$.)
We start with Step 1:
CLAIM: For every $\alpha \in \{0,1\}^n$, there is an $O(n)$-sized circuit that computes the function $\delta_\alpha:\{0,1\}^n \rightarrow \{0,1\}$ defined above.
PROOF OF CLAIM: The proof is illustrated in deltafuncfig{.ref}.
As an example, consider the function $\delta_\alpha$ for some fixed $\alpha \in \{0,1\}^n$. This function outputs $1$ on input $x$ if and only if $x_i = \alpha_i$ for every $i$, and so it can be computed as the AND of $n$ terms, where the $i$-th term is $x_i$ if $\alpha_i=1$ and $NOT(x_i)$ if $\alpha_i=0$. This requires at most $2n$ gates, proving the claim.
Now for every function $f:\{0,1\}^n \rightarrow \{0,1\}$, we can write
$$f(x) = \delta_{\alpha_0}(x) \vee \delta_{\alpha_1}(x) \vee \cdots \vee \delta_{\alpha_{N-1}}(x)$$
where $\alpha_0,\ldots,\alpha_{N-1}$ are the inputs on which $f$ outputs $1$ (i.e., $f^{-1}(1)=\{\alpha_0,\ldots,\alpha_{N-1}\}$).
Therefore we can compute $f$ by combining the $N \leq 2^n$ circuits for $\delta_{\alpha_0},\ldots,\delta_{\alpha_{N-1}}$ with $N-1$ OR gates, obtaining a circuit of at most $O(n \cdot 2^n)$ gates in total.
:::
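The two-step construction in this proof can be sketched in Python: build $\delta_\alpha$ as an equality test and take the OR of one copy per input on which $f$ equals $1$. We use XOR on two bits as an illustrative example function:

```python
from itertools import product

def delta(alpha):
    # delta_alpha(x) = 1 iff x = alpha; an AND of n literals, so O(n) gates
    return lambda x: int(all(xi == ai for xi, ai in zip(x, alpha)))

def circuit_for(f, n):
    # OR of delta_alpha over all alpha with f(alpha) = 1
    ones = [alpha for alpha in product([0, 1], repeat=n) if f(*alpha) == 1]
    return lambda x: int(any(delta(alpha)(x) for alpha in ones))

xor2 = lambda a, b: a ^ b
C = circuit_for(xor2, 2)
for x in product([0, 1], repeat=2):
    assert C(x) == xor2(*x)
```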
We have seen that every function $f:\{0,1\}^n \rightarrow \{0,1\}^m$ can be computed by a circuit, but some functions require far fewer gates than others. This motivates the following definition:
For every natural numbers $n$, $m$, and $s$, we denote by $SIZE_{n,m}(s)$ the set of all functions $f:\{0,1\}^n \rightarrow \{0,1\}^m$ that can be computed by NAND circuits of at most $s$ gates.
funcvscircfig{.ref} depicts the set $SIZE_{n,m}(s)$.
While we defined $SIZE_{n,m}(s)$ using NAND circuits, the particular choice of gate set does not make much of a difference, as the following lemma shows:
Let $SIZE^{AON}_{n,m}(s)$ denote the set of all functions $f:\{0,1\}^n \rightarrow \{0,1\}^m$ that can be computed by an AND/OR/NOT Boolean circuit of at most $s$ gates. Then,
$$ SIZE_{n,m}(s/2) \subseteq SIZE^{AON}_{n,m}(s) \subseteq SIZE_{n,m}(3s) $$
::: {.proof data-ref="nandaonsizelem"}
If $f$ can be computed by a NAND circuit of at most $s/2$ gates, then since $NAND(a,b)=NOT(AND(a,b))$, we can replace every NAND gate with an AND gate followed by a NOT gate, obtaining an AND/OR/NOT circuit of at most $s$ gates that computes $f$. In the other direction, each of the gates AND, OR, and NOT can be computed using at most three NAND gates, and hence an AND/OR/NOT circuit of at most $s$ gates can be translated into a NAND circuit of at most $3s$ gates.
:::
The results we have seen in this chapter can be phrased in this notation: for example, $ADD_n \in SIZE_{2n,n+1}(O(n))$, and by NAND-univ-thm-improved{.ref}, every function $f:\{0,1\}^n \rightarrow \{0,1\}$ is in $SIZE_{n,1}(O(2^n/n))$.
::: {.remark title="Finite vs infinite functions" #infinitefunc}
Unlike programming languages such as Python, C or JavaScript, the NAND-CIRC and AON-CIRC programming languages do not have arrays.
A NAND-CIRC program $P$ has a fixed number $n$ of input variables and $m$ of output variables, and hence can only compute a single finite function $f:\{0,1\}^n \rightarrow \{0,1\}^m$.
For the time being, our focus will be on finite functions, but we will discuss how to extend the definition of size complexity to functions with unbounded input lengths later on in nonuniformcompsec{.ref}. :::
::: {.solvedexercise title="$SIZE$ closed under complement." #sizeclosundercomp}
In this exercise we prove a certain "closure property" of the class $SIZE_{n,1}(s)$: that it is closed (up to a small additive overhead) under complementing the output.

Prove that there is a constant $c$ such that for every $f:\{0,1\}^n \rightarrow \{0,1\}$ and $s\in\N$, if $f \in SIZE_{n,1}(s)$ then the function $g$ defined by $g(x)=1-f(x)$ is in $SIZE_{n,1}(s+c)$.
::: {.solution data-ref="sizeclosundercomp"}
If $f \in SIZE_{n,1}(s)$ then there is an $s$-line NAND-CIRC program $P$ that computes $f$. We can rename the output variable `Y[0]` in $P$ to `temp` and add the line

```python
Y[0] = NAND(temp,temp)
```

at the very end to obtain a program $P'$ that computes the negation of $f$ using at most $s+1$ lines.
:::
:::
- We can define the notion of computing a function via a simplified "programming language", where computing a function $F$ in $T$ steps would correspond to having a $T$-line NAND-CIRC program that computes $F$.

- While the NAND-CIRC programming language only has one operation, other operations such as functions and conditional execution can be implemented using it.

- Every function $f:\{0,1\}^n \rightarrow \{0,1\}^m$ can be computed by a circuit of at most $O(m 2^n)$ gates (and in fact at most $O(m 2^n/n)$ gates).

- Sometimes (or maybe always?) we can translate an efficient algorithm to compute $f$ into a circuit that computes $f$ with a number of gates comparable to the number of steps in this algorithm.
::: {.exercise title="Pairing" #embedtuples-ex}
This exercise asks you to give a one-to-one map from $\N^2$ to $\N$.

- Prove that the map $F(x,y)=2^x3^y$ is a one-to-one map from $\N^2$ to $\N$.

- Show that there is a one-to-one map $F:\N^2 \rightarrow \N$ such that for every $x,y$, $F(x,y) \leq 100\cdot \max\{x,y\}^2+100$.

- For every $k$, show that there is a one-to-one map $F:\N^k \rightarrow \N$ such that for every $x_0,\ldots,x_{k-1} \in \N$, $F(x_0,\ldots,x_{k-1}) \leq 100 \cdot (x_0+x_1+\ldots+x_{k-1}+100k)^k$.
:::
::: {.exercise title="Computing MUX" #mux-ex}
Prove that the NAND-CIRC program below computes the function $MUX:\{0,1\}^3 \rightarrow \{0,1\}$ (the multiplexer), where $MUX(a,b,c)$ equals $a$ if $c=0$ and equals $b$ if $c=1$:
```python
t = NAND(X[2],X[2])
u = NAND(X[0],t)
v = NAND(X[1],X[2])
Y[0] = NAND(u,v)
```
:::
::: {.exercise title="At least two / Majority" #atleasttwo-ex}
Give a NAND-CIRC program of at most 6 lines to compute the function $MAJ:\{0,1\}^3 \rightarrow \{0,1\}$ where $MAJ(a,b,c)=1$ if and only if $a+b+c \geq 2$.
:::
::: {.exercise title="Conditional statements" #conditionalsugarthmex}
In this exercise we will explore conditionalsugarthm{.ref}: transforming NAND-CIRC-IF programs that use code such as `if .. then .. else ..` to standard NAND-CIRC programs.
- Give a "proof by code" of conditionalsugarthm{.ref}: a program in a programming language of your choice that transforms a NAND-CIRC-IF program $P$ into a "sugar-free" NAND-CIRC program $P'$ that computes the same function. See footnote for hint.^[You can start by transforming $P$ into a NAND-CIRC-PROC program that uses procedure statements, and then use the code of desugarcode{.ref} to transform the latter into a "sugar-free" NAND-CIRC program.]

- Prove the following statement, which is the heart of conditionalsugarthm{.ref}: suppose that there exists an $s$-line NAND-CIRC program to compute $f:\{0,1\}^n \rightarrow \{0,1\}$ and an $s'$-line NAND-CIRC program to compute $g:\{0,1\}^n \rightarrow \{0,1\}$. Prove that there exists a NAND-CIRC program of at most $s+s'+10$ lines to compute the function $h:\{0,1\}^{n+1} \rightarrow \{0,1\}$ where $h(x_0,\ldots,x_{n-1},x_n)$ equals $f(x_0,\ldots,x_{n-1})$ if $x_n=0$ and equals $g(x_0,\ldots,x_{n-1})$ otherwise. (All programs in this item are standard "sugar-free" NAND-CIRC programs.)
:::
::: {.exercise title="Half and full adders" #halffulladderex}
- A *half adder* is the function $HA:\{0,1\}^2 \rightarrow \{0,1\}^2$ that corresponds to adding two binary bits. That is, for every $a,b \in \{0,1\}$, $HA(a,b)=(e,f)$ where $2e+f = a+b$. Prove that there is a NAND circuit of at most five NAND gates that computes $HA$.

- A *full adder* is the function $FA:\{0,1\}^3 \rightarrow \{0,1\}^{2}$ that takes in two bits and a "carry" bit and outputs their sum. That is, for every $a,b,c \in \{0,1\}$, $FA(a,b,c)=(e,f)$ such that $2e+f = a+b+c$. Prove that there is a NAND circuit of at most nine NAND gates that computes $FA$.

- Prove that if there is a NAND circuit of $c$ gates that computes $FA$, then there is a circuit of $cn$ gates that computes $ADD_n$ where (as in addition-thm{.ref}) $ADD_n:\{0,1\}^{2n} \rightarrow \{0,1\}^{n+1}$ is the function that outputs the addition of two input $n$-bit numbers. See footnote for hint.^[Use a "cascade" of adding the bits one after the other, starting with the least significant digit, just like in the elementary-school algorithm.]

- Show that for every $n$ there is a NAND-CIRC program to compute $ADD_n$ with at most $9n$ lines.
:::
Write a program using your favorite programming language that on input of an integer
Write a program using your favorite programming language that on input of an integer
Write a program using your favorite programming language that on input of an integer
::: {.exercise title="Multibit function" #mult-bit-ex}
In the text NAND-univ-thm{.ref} is only proven for the case $m=1$. In this exercise you will extend the proof to every $m$.

Prove that

- If there is an $s$-line NAND-CIRC program to compute $f:\{0,1\}^n \rightarrow \{0,1\}$ and an $s'$-line NAND-CIRC program to compute $f':\{0,1\}^n \rightarrow \{0,1\}$ then there is an $s+s'$-line program to compute the function $g:\{0,1\}^n \rightarrow \{0,1\}^2$ such that $g(x)=(f(x),f'(x))$.

- For every function $f:\{0,1\}^n \rightarrow \{0,1\}^m$, there is a NAND-CIRC program of at most $10m\cdot 2^n$ lines that computes $f$. (You can use the $m=1$ case of NAND-univ-thm{.ref}, as well as Item 1.)
:::
::: {.exercise title="Simplifying using syntactic sugar" #usesugarex}
Let $P$ be the following NAND-CIRC program:
```python
Temp[0] = NAND(X[0],X[0])
Temp[1] = NAND(X[1],X[1])
Temp[2] = NAND(Temp[0],Temp[1])
Temp[3] = NAND(X[2],X[2])
Temp[4] = NAND(X[3],X[3])
Temp[5] = NAND(Temp[3],Temp[4])
Temp[6] = NAND(Temp[2],Temp[2])
Temp[7] = NAND(Temp[5],Temp[5])
Y[0] = NAND(Temp[6],Temp[7])
```
- Write a program $P'$ with at most three lines of code that uses both `NAND` as well as the syntactic sugar `OR` and that computes the same function as $P$.

- Draw a circuit that computes the same function as $P$ and uses only $AND$ and $NOT$ gates.
:::
In the following exercises you are asked to compare the power of pairs of programming languages.
By "comparing the power" of two programming languages $X$ and $Y$ we mean doing the following:
- Either prove that for every program
$P$ in$X$ there is a program$P'$ in$Y$ that computes the same function as$P$ , or give an example for a function that is computable by an$X$ -program but not computable by a$Y$ -program.
and
- Either prove that for every program
$P$ in$Y$ there is a program$P'$ in$X$ that computes the same function as$P$ , or give an example for a function that is computable by a$Y$ -program but not computable by an$X$ -program.
When you give an example as above of a function that is computable in one programming language but not the other, you need to prove that the function you showed is (1) computable in the first programming language and (2) not computable in the second programming language.
::: {.exercise title="Compare IF and NAND" #compareif}
Let IF-CIRC be the programming language where we have the following operations: `foo = 0`, `foo = 1`, and `foo = IF(cond,yes,no)` (that is, we can use the constants $0$ and $1$, and the $IF:\{0,1\}^3 \rightarrow \{0,1\}$ function that maps $(a,b,c)$ to $b$ if $a=1$ and to $c$ if $a=0$). Compare the power of the NAND-CIRC and IF-CIRC programming languages.
:::
::: {.exercise title="Compare XOR and NAND" #comparexor}
Let XOR-CIRC be the programming language where we have the following operations: `foo = XOR(bar,blah)`, `foo = 1`, and `bar = 0` (that is, we can use the constants $0$, $1$ and the $XOR$ function that maps $a,b \in \{0,1\}$ to $a + b \mod 2$). Compare the power of the NAND-CIRC and XOR-CIRC programming languages. See footnote for hint.^[Note that if `d = XOR(a,b)` and `e = XOR(d,c)` then `e` gets the sum modulo $2$ of `a`, `b` and `c`.]
:::
::: {.exercise title="Circuits for majority" #majasymp}
Prove that there is some constant $c$ such that for every $n>1$, there is a NAND circuit of at most $c \cdot n$ gates that computes the majority function $MAJ_n:\{0,1\}^n \rightarrow \{0,1\}$, where $MAJ_n(x)=1$ if and only if $\sum_{i=0}^{n-1} x_i > n/2$.
:::
::: {.exercise title="Circuits for threshold" #thresholdcirc}
Prove that there is some constant $c$ such that for every $n>1$ and $0 \leq k \leq n$, there is a NAND circuit of at most $c \cdot n$ gates that computes the threshold function $THR_{n,k}:\{0,1\}^n \rightarrow \{0,1\}$, where $THR_{n,k}(x)=1$ if and only if $\sum_{i=0}^{n-1} x_i \geq k$.
:::
See Jukna's and Wegener's books [@Jukna12, @wegener1987complexity] for much more extensive discussion on circuits.
Shannon showed that every Boolean function can be computed by a circuit of exponential size [@Shannon1938]. The improved bound of $O(2^n/n)$ mentioned above is due to Lupanov.
The concept of "syntactic sugar" is also known as "macros" or "meta-programming" and is sometimes implemented via a preprocessor or macro language in a programming language or a text editor. One modern example is the Babel JavaScript syntax transformer, which converts JavaScript programs written using the latest features into a format that older browsers can accept. It even has a plug-in architecture that allows users to add their own syntactic sugar to the language.