There are several concerns that are raised by this definition:
First and foremost, this definition is indeed too informal. We do not specify exactly what each step does, nor what it means to “feed x as input”.
Second, the choice of AND, OR and NOT seems rather arbitrary. Why not XOR and MAJ? Why not allow operations like addition and multiplication? What about any other logical constructions such as if/then or while?
Third, do we even know that this definition has anything to do with actual computing? If someone gave us a description of such an algorithm, could we use it to actually compute the function in the real world?
Where is the explanation for the 2nd issue raised here in the text? I am unable to fathom how another function can be used as a basic operation; I got lost in the proofs. This book is super amazing, but it gets a bit difficult to get a big-picture idea among the proofs and theorems. Is there a simplified explanation for the 2nd point? I tried finding it in the following chapters but could not.
Other Boolean functions were used as basic operations, but not the addition or multiplication that is being talked about here. I would love to understand how this could be possible. Is it really possible to use any symbol-manipulating technique as the basic steps of an algorithm?
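Not from the book, but here is a minimal Python sketch of the big-picture idea behind the 2nd point: if each basic operation of one collection can itself be computed by a small circuit over another collection, then any algorithm written using the first collection can be mechanically rewritten using only the second. That substitution argument is why the choice of AND/OR/NOT versus NAND versus XOR/MAJ is arbitrary — they all compute the same class of functions. The function names below are just illustrations, not code from the textbook:

```python
from itertools import product

def NAND(a, b):
    # NAND as the single primitive: outputs 1 unless both inputs are 1
    return 1 - (a & b)

# AND, OR and NOT, each built from NAND alone:
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))

# Conversely, XOR and MAJ built from AND/OR/NOT (hence from NAND):
def XOR(a, b):    return OR(AND(a, NOT(b)), AND(NOT(a), b))
def MAJ(a, b, c): return OR(OR(AND(a, b), AND(b, c)), AND(a, c))

# Exhaustive sanity check over all bit inputs:
for a, b in product((0, 1), repeat=2):
    assert NOT(a) == 1 - a
    assert AND(a, b) == (a & b)
    assert OR(a, b) == (a | b)
    assert XOR(a, b) == (a ^ b)
for a, b, c in product((0, 1), repeat=3):
    assert MAJ(a, b, c) == (1 if a + b + c >= 2 else 0)
```

The same idea applies to arithmetic: on single bits, a*b is exactly AND and 1-a is exactly NOT (the NAND above is written that way), so addition and multiplication could in principle serve as basic steps too — they can simulate, and be simulated by, the Boolean operations.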
Introduction to Theoretical Computer Science: Defining Computation
Textbook on Theoretical Computer Science by Boaz Barak
https://introtcs.org/public/lec_03_computation.html