Well, that’s the end of our journey through the theory of computation. We’ve designed languages and machines with various capabilities, teased computation out of unusual systems, and crashed headlong into the theoretical limits of computer programming.
Aside from exploring specific machines and techniques, we’ve seen some more general ideas along the way:
Anyone can design and implement a programming language. The basic ideas of syntax and semantics are simple, and tools like Treetop can take care of the uninteresting details.
Every computer program is a mathematical object. Syntactically, a program is just a large number; semantically, it can represent a mathematical function, or a hierarchical structure that can be manipulated by formal reduction rules. This means that many techniques and results from mathematics, like Kleene’s recursion theorem or Gödel’s incompleteness theorem, can equally be applied to programs.
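The “program as a large number” idea can be made concrete in a few lines. This is only an illustrative sketch (the variable names are my own, not anything from earlier chapters): it reads the bytes of a tiny program’s source text as the digits of a single base-256 number, then peels those digits off again to recover the source exactly.

```ruby
source = 'puts 2 + 2'

# Each byte of the source becomes one base-256 digit, so the whole
# program collapses into a single (large) integer.
number = source.bytes.inject(0) { |n, byte| n * 256 + byte }

# Reversing the process recovers the original program, digit by digit.
digits = []
n = number
while n > 0
  digits.unshift(n % 256)
  n /= 256
end
decoded = digits.pack('C*')

decoded == source # => true
```

Since every program corresponds to a number and vice versa, any mathematical statement about numbers can, in principle, be read as a statement about programs.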
Computation, which we initially described as just “what a computer does,” has turned out to be something of a force of nature. It’s tempting to think of computation as a sophisticated human invention that can only be performed by specially-designed systems with many complicated parts, but it also shows up in systems that don’t seem complex enough to support it. So computation isn’t a sterile, artificial process that only happens inside a microprocessor, but rather a pervasive phenomenon that crops up in many different places and in many different ways.
Computation is not all-or-nothing. Different machines have different amounts of computational power, giving us a continuum of usefulness: DFAs and NFAs have limited capabilities, DPDAs are more powerful, NPDAs more powerful still, and Turing machines are the most powerful we know of.
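The bottom of that continuum is small enough to sketch in full. Here is a minimal DFA (the rulebook format and helper name are illustrative, not a fixed API) that accepts strings containing an even number of as; its fixed set of states is exactly why a DFA can count parity but can never match balanced brackets, which needs the unbounded memory of a pushdown automaton’s stack.

```ruby
# States :even and :odd track the parity of the number of 'a's read so far.
rulebook = {
  [:even, 'a'] => :odd,  [:even, 'b'] => :even,
  [:odd,  'a'] => :even, [:odd,  'b'] => :odd
}

# Run the machine: follow one rule per character, then check whether
# the state we end up in is an accept state.
def accepts?(rulebook, start_state, accept_states, string)
  final_state = string.chars.inject(start_state) { |state, char| rulebook[[state, char]] }
  accept_states.include?(final_state)
end

accepts?(rulebook, :even, [:even], 'abba') # => true  (two 'a's)
accepts?(rulebook, :even, [:even], 'ab')   # => false (one 'a')
```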
Encodings and levels of abstraction are essential to harnessing the power of computation. Computers are machines for maintaining a tower of abstractions, beginning at the very low level of semiconductor physics and rising to the much higher level of multitouch graphical user interfaces. To make computation useful, we need to be able to encode complex ideas from the real world in a simpler form that machines can manipulate, and then be able to decode the results back into a meaningful high-level representation.
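One classic example of such an encoding is Cantor’s pairing function, which packs a pair of natural numbers into a single number and then unpacks it again without losing anything. The method names below are my own; the formula itself is the standard one.

```ruby
# Encode the pair (x, y) as a single natural number.
def pair(x, y)
  (x + y) * (x + y + 1) / 2 + y
end

# Decode a number back into the unique pair it came from.
def unpair(z)
  w = ((Math.sqrt(8 * z + 1) - 1) / 2).floor
  y = z - w * (w + 1) / 2
  [w - y, y]
end

pair(3, 4)   # => 32
unpair(32)   # => [3, 4]
```

A machine that only manipulates single numbers can therefore work with pairs, and by repeating the trick, with lists, trees, or any structure we care to encode.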
There are limits to what computation can do. We don’t know how to build a computer that is fundamentally more capable than a Turing machine, but there are well-defined problems that a Turing machine can’t solve, and a lot of those problems involve discovering information about the programs we write. We can cope with these limitations by learning to make use of vague or incomplete answers to questions about our programs’ behavior.
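A deliberately crude sketch of what “vague or incomplete answers” can look like in practice: we can’t decide in general whether an arbitrary program will ever produce output, but a conservative syntactic check can still give a safe three-valued answer. (This checker is purely illustrative; it only looks for two method names and would miss other ways of producing output.)

```ruby
# Answer :no only when we're sure; answer :maybe rather than guess.
def might_print?(source)
  if source.include?('puts') || source.include?('print')
    :maybe # the text mentions a printing method, so output is possible
  else
    :no    # no obvious printing call, so we conservatively rule it out
  end
end

might_print?('x = 1 + 2')    # => :no
might_print?('puts "hello"') # => :maybe
```

The `:maybe` answer is the price of decidability: by giving up on always answering yes or no, the analysis becomes something a machine can actually finish.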
These ideas may not immediately change the way you work, but I hope they’ve satisfied some of your curiosity, and that they’ll help you to enjoy the time you spend making computation happen in the universe.