I remember when I first read this sentence:

*“Our models, on the other hand, will encompass all computers that do exist, will exist and that can ever be dreamed of”*

It was during a lecture on the Theory of Automata, a mandatory course during my CS degree. I never paid too much attention to that course, and consequently I passed with a C.

However, that sentence has been at the back of my mind for a few years now, and I decided to finally get to the bottom of it. So I read the course’s textbook cover to cover, and I believe it has clarified quite a few of my concepts regarding the theory of computers.

Cohen opens with an overview of what we are getting into: the historical mathematical models of computation that preceded the actual invention of the computer.

## Automata Theory

The book starts with recursive definitions and regular expressions, which form the basis of recognizing patterns (read: strings) of characters and validating whether they are accepted (read: matched) by a particular regular expression.
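As a quick illustration of my own (not an example from the book), this is what “accepting” a string looks like in practice, using JavaScript’s built-in regex engine:

```javascript
// The language of one or more repetitions of "ab".
// ^ and $ anchor the pattern so the WHOLE string must match.
const pattern = /^(ab)+$/;

console.log(pattern.test("abab")); // true  -> accepted by the language
console.log(pattern.test("aba"));  // false -> rejected
```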

Cohen then talks about **state machines**, which are mathematical models that recognize the exact patterns of strings that regular expressions recognize.

Examples of such machines are **Finite Automata** (FA) and **Transition Graphs** (TG). These machines can be directly translated into electronic circuits, which implement the regex-based pattern. I don’t know about you, but I find that very, very cool.
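To make that concrete, here is a toy finite automaton of my own, written as a plain transition table. It accepts the same language as the regex `(ab)+`, which is exactly the kind of correspondence the book is describing:

```javascript
// A finite automaton equivalent to the regex /^(ab)+$/.
// States are 0, 1, 2; state 2 is the only accepting state.
const delta = {
  0: { a: 1 }, // start state: expect an 'a'
  1: { b: 2 }, // after 'a': expect a 'b'
  2: { a: 1 }, // after "ab": either stop here or read another 'a'
};

function accepts(input) {
  let state = 0;
  for (const ch of input) {
    state = (delta[state] || {})[ch];
    if (state === undefined) return false; // no transition: reject
  }
  return state === 2; // accept only if we halt in the accepting state
}

console.log(accepts("abab")); // true
console.log(accepts("abb"));  // false
```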

The shortcomings of regular expressions are also discussed. There are some patterns of strings that can never be recognized by regular expressions no matter how we try. One such example is:

`a^n b^n`

The pattern above means all those strings where `n` number of `a`’s are followed by `n` number of `b`’s.

Examples would be: `aaaabbbb`, `aaaaaaabbbbbbb`

As things stand, it is impossible for any regular expression to match such strings. To match them, new and more powerful machines are introduced.

## Pushdown Automata

Pushdown automata (PDA) are the next class of state machines. Here, **Context-Free Grammars** (CFGs) are used to define the languages that need to be accepted by the **PDA**s.

These machines are more powerful in the sense that they can recognize more languages than **FA**s can. For example, we can easily write a machine to recognize all strings of the type:

`a^n b^n`
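As a rough sketch (mine, not Cohen’s formal construction), a PDA-style recognizer for this language just uses a stack to match every `b` against an earlier `a` — exactly the “counting” ability a finite automaton lacks:

```javascript
// A pushdown-automaton-style recognizer for a^n b^n (n >= 1):
// push a marker for every 'a', pop one for every 'b'.
function acceptsAnBn(input) {
  const stack = [];
  let i = 0;
  while (input[i] === "a") { // phase 1: read the a's
    stack.push("A");
    i++;
  }
  while (input[i] === "b") { // phase 2: read the b's
    if (stack.pop() === undefined) return false; // more b's than a's
    i++;
  }
  // Accept only if the whole string was consumed and the counts matched.
  return i === input.length && i > 0 && stack.length === 0;
}

console.log(acceptsAnBn("aaabbb")); // true
console.log(acceptsAnBn("aaabb"));  // false
```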

But there are still languages that cannot be recognized by PDAs, for example:

`a^n b^n a^n`

**CFG**s and **PDA**s have direct applications in programming languages. Every programming language has a grammar associated with it, which recursively defines the entire syntax of the language. CFGs also aid in the development of parsers for these languages. So, if anyone is looking to write a programming language, studying CFGs would be a good start.
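As a small illustration of my own, here is a toy grammar for balanced parentheses, `S -> ( S ) S | ε`, together with a recursive-descent recognizer whose code mirrors the grammar rule for rule:

```javascript
// Recursive-descent recognizer for the CFG:  S -> ( S ) S | ε
function matchesBalanced(input) {
  let pos = 0;
  function S() {
    // The "( S ) S" production, applied as many times as it fits:
    while (input[pos] === "(") {
      pos++;                                 // consume '('
      if (!S()) return false;                // parse the nested S
      if (input[pos] !== ")") return false;  // expect the matching ')'
      pos++;                                 // consume ')'
    }
    return true; // the ε (empty) production
  }
  return S() && pos === input.length;
}

console.log(matchesBalanced("(())()")); // true
console.log(matchesBalanced("(()"));    // false
```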

I have previously developed a mini-parser for Pascal using CFGs and JavaScript. Anyone interested can find the source code here:

https://github.com/asamolion/lexparser

But Cohen doesn’t end with programming languages. He goes on to define the actual model of a complete computer.

## Turing Theory

The final part of the book moves on to Turing Machines. Now, Turing machines were proposed by Alan Turing during the 1930s, while FAs are an invention of the 1950s and PDAs of the 1960s. However, Turing Machines are far more powerful than either FAs or PDAs, because they are the model of the general-purpose computer: the one you are using to read this article right now.

In this part, the author introduces several more models of computation: 2PDAs, Post machines, and nPDAs. However, it is also proved that all of these machines are exactly as powerful as the Turing machine. In other words, Turing machines offer us all the power we need to build a mathematical computer.

And here is my favorite part of the book: Cohen describes how it is possible to encode any Turing machine into binary code, and then create a TM that takes the encoded TM and an input string, executes the logic of the encoded TM on that input, and gives us the required output.

Sound familiar? That’s exactly how a compiler or interpreter of a programming language works. After all, computers are just bits of binary code executing other binary code on some input (which is also binary code).
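To sketch the idea in code (a loose analogy of my own, not Cohen’s actual binary encoding), here is a tiny simulator in which the machine itself is just data handed to a generic “run” function, the way a universal machine takes an encoded TM plus an input:

```javascript
// A minimal Turing-machine simulator. The machine is plain data:
// transitions map "state,symbol" -> [nextState, writeSymbol, move].
function runTM(machine, input) {
  const tape = input.split("");
  let state = machine.start;
  let head = 0;
  while (state !== machine.accept && state !== machine.reject) {
    const symbol = tape[head] ?? "_"; // "_" is the blank symbol
    const rule = machine.delta[`${state},${symbol}`];
    if (!rule) return false;          // no rule: halt and reject
    const [next, write, move] = rule;
    tape[head] = write;
    head += move === "R" ? 1 : -1;
    if (head < 0) return false;       // fell off the left end of the tape
    state = next;
  }
  return state === machine.accept;
}

// An "encoded" machine that accepts strings of even length over {a}.
const evenAs = {
  start: "even", accept: "acc", reject: "rej",
  delta: {
    "even,a": ["odd",  "a", "R"],
    "odd,a":  ["even", "a", "R"],
    "even,_": ["acc",  "_", "R"],
    "odd,_":  ["rej",  "_", "R"],
  },
};

console.log(runTM(evenAs, "aaaa")); // true
console.log(runTM(evenAs, "aaa"));  // false
```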

## Turing Completeness

TMs are the yardstick by which we measure whether something is possible for a computer to compute. If we can theoretically design a Turing machine that performs a specific set of calculations for us, then it is proven that we can design a computer to do so as well. This concept is known as Turing Completeness. However, real computers are less powerful than Turing machines in the sense that we have limited computing power and memory in the real world.

It boggles my mind how something so sophisticated could have even been conceived almost a century ago, when such things as computers, or even Computer Science, didn’t exist. And it’s still those same basic ideas and models of computers that drive the entire world today.

## My final verdict

All in all, it was a good book, and a lengthy one at that, but I believe it has at least given me a basic understanding of Computer Science and how the theoretical side of computers works. There are still some questions in my mind that this book hasn’t answered, but I’m hopeful I’ll find another book that does.