Computation in Science (Second Edition). Konrad Hinsen


and that much of today’s scientific knowledge exists in fact only in the form of computer programs, because the traditional scientific knowledge representations cannot handle complex structured information. This raises important questions for the future of computational science, which I will return to in chapter 7.

      Computers are physical devices that are designed by engineers to perform computation. Many other engineered devices perform computation as well, though usually with much more limited capacity. The classic example from computer science textbooks is a vending machine, which translates operator input (pushing buttons, inserting coins) into actions (delivering goods), a task that requires computation. Of course, a vending machine does more than compute, and as users we are most interested in that additional behavior. Nevertheless, information processing, and thus computation, is an important aspect of the machine’s operation.
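      To illustrate the computational aspect, here is a minimal sketch, in Python, of a vending machine reduced to its information-processing core: a small state machine that maps inputs (coins, button presses) to actions. The price, the event encoding, and the function name are invented for this example; they are not part of any standard description.

```python
# Minimal sketch of a vending machine as a state machine (illustrative only;
# the price, item handling, and event format are assumptions for this example).

PRICE = 150  # price of one item, in cents

def vending_machine(events):
    """Process a sequence of events ('coin:<cents>' or 'button') and
    return the actions the machine takes in response."""
    credit = 0          # the machine's entire state: the amount inserted so far
    actions = []
    for event in events:
        if event.startswith("coin:"):
            credit += int(event.split(":")[1])   # inserting a coin adds credit
        elif event == "button":
            if credit >= PRICE:
                actions.append("deliver item")
                actions.append(f"return change: {credit - PRICE}")
                credit = 0
            else:
                actions.append("display: insufficient credit")
    return actions

print(vending_machine(["coin:100", "button", "coin:100", "button"]))
# ['display: insufficient credit', 'deliver item', 'return change: 50']
```

      Everything the physical machine adds (motors, coin validation, display hardware) sits outside this core, which is pure information processing.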

      The same is true of many systems that occur in nature. A well-known example is the process of cell division, common to all biological organisms, which involves copying and processing information stored in the form of DNA [8]. Another example of a biological process that relies on information processing is plant growth [9]. Most animals have a nervous system, a part of the body that is almost entirely dedicated to information processing. Neuroscience, which studies the nervous system, has close ties to both biology and computer science. This is also true of cognitive science, which deals with processes of the human mind that are increasingly modeled using computation.
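      To make the symbol-processing character of DNA copying concrete, here is a minimal sketch, assuming only the standard base-pairing rules: producing the complementary strand of a DNA sequence is a purely symbolic operation on the letters A, T, C, and G.

```python
# Illustrative sketch: copying information stored in DNA amounts to symbol
# processing. Each base pairs with its complement, so copying a strand means
# producing the complementary sequence.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand):
    """Return the complementary strand of a DNA sequence given as a string."""
    return "".join(COMPLEMENT[base] for base in strand)

print(complementary_strand("ATGC"))  # 'TACG'
```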

      Of course, living organisms are not just computers. Information processing in organisms is inextricably combined with other processes. In fact, the identification of computation as an isolated phenomenon, and its realization by engineered devices that perform a precise computation any number of times, with as little dependence on their environment as is technically possible, is a hallmark of human engineering that has no counterpart in nature. Nevertheless, focusing on the computational aspects of life, and writing computer programs to simulate information processing in living organisms, has significantly contributed to a better understanding of their function.

      On a much grander scale, one can consider all physical laws as rules for information processing, and conclude that the whole Universe is a giant computer. This idea was first proposed in 1967 by German computer pioneer Konrad Zuse [10] and has given rise to a field of research called digital physics, situated at the intersection of physics, philosophy, and computer science [11].

      What I have discussed above, and what I will discuss in the rest of this book, is computation in the tradition of arithmetic and Boolean logic, automated by digital computers. There is, however, a very different approach to tackling some of the same problems, which is known as analog computing. Its basic idea is to construct systems whose behavior is governed by the mathematical relations one wishes to explore, and then perform experiments on these systems. The simplest analog computer is the slide rule, which was a common tool to perform multiplication and division (plus a few more complex operations) before the general availability of electronic calculators.
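      The slide rule exploits the fact that adding lengths proportional to logarithms amounts to multiplying numbers. A minimal sketch of that principle (a digital illustration, not a simulation of the physical device) might look like this:

```python
# Sketch of the slide rule's principle: multiplication becomes addition of
# lengths proportional to logarithms. The function name is chosen for this
# example only.

import math

def slide_rule_multiply(x, y):
    """Multiply x and y the way a slide rule does: add the logarithmic
    'lengths' of the two factors and read the result off the log scale."""
    length = math.log10(x) + math.log10(y)   # sliding the scales adds lengths
    return 10 ** length                      # reading the result back off

print(slide_rule_multiply(3.0, 7.0))   # ~21.0, up to rounding error
```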

      Today, analog computers have almost disappeared from scientific research, because digital computers perform most tasks better and at a much lower cost. This is also the reason why this book’s focus is on digital computing. However, analog computing is still used for some specialized applications. More importantly, the idea of computation as a form of experiment has persisted in the scientific community. Whereas I consider it inappropriate in the context of software-controlled digital computers, as I will explain in section 5.1, it is a useful point of view to adopt in looking at emerging alternative computing techniques, such as artificial neural networks.

      Computation has its roots in numbers and arithmetic, a story that is told by Georges Ifrah in The Universal History of Numbers [12].

      Video courses explaining basic arithmetic are provided by the Khan Academy. It is instructive to follow them with an eye on the algorithmic symbol-processing aspect of each technique.

      The use of computation for understanding has been emphasized in the context of physics by Sussman and Wisdom [13]. They developed an original approach to teaching classical mechanics by means of computation [14], which is available online, as is complementary material from a corresponding MIT course. The authors used the same approach in a course on differential geometry [15], available online as well.

      The Bootstrap project uses games programming for teaching mathematical concepts to students in the 12–16 year age range. An example of computing as a teaching aid in university-level mathematics has been described by Ionescu and Jansson [16].

      Computer programming is also starting to be integrated into science curricula because of its utility for understanding. See the textbook by Langtangen [17] for an example.

      A very accessible introduction to the ideas of digital physics is given by Stephen Wolfram in his essay ‘What is ultimately possible in physics?’ [18].