Empirical CS

Computability, Turing, Peter Wegner's Interaction Machines etc.

I've always made a big detour around computability, Big-O calculations etc. I used to program operating systems, and integers are pretty much the only kind of numbers you need for that purpose. And I knew what my new machines and their soul could do. But David Harel's "Computers Ltd.: What They Really Can't Do" proved a nice (and short) read on computability. It proved extremely valuable for understanding why public keys really work - primes are much more frequent than you would believe. Harel shows that there are many problems that we simply don't know how to solve, and where the size and speed of computers do not matter. Of course his basic computing model is a classic Turing machine - I'll come back to this shortly.
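To see just how frequent, here is a minimal Python sketch (my own illustration, not from Harel's book) that counts primes with a sieve of Eratosthenes and compares the count against the n/ln(n) estimate from the prime number theorem - roughly one in ln(n) numbers near n is prime, which is why hunting for the large random primes a public key needs terminates quickly:

```python
import math

def count_primes(n):
    """Count primes up to n with a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return sum(sieve)

for n in (10**4, 10**5, 10**6):
    actual = count_primes(n)
    estimate = n / math.log(n)  # prime number theorem: pi(n) ~ n/ln(n)
    print(f"pi({n}) = {actual}, n/ln(n) estimate = {estimate:.0f}")
```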

And there was this beautiful paragraph in Harel's book which I would like to repeat here: "Nevertheless, even under these generous conditions (endless time and resources, WK), we shall see interesting and important problems for which there simply are no algorithms, and it does not matter how smart we are, or how sophisticated and powerful our computers, our software, our programming languages and our algorithmic methods. [..] These facts have deep philosophical implications, not only on the limits of machines like computers but also on our own limits as beings with finite mass. Even if we were given unlimited amounts of paper and pencil, and an unlimited lifespan, there would be well-defined problems we could not solve. It is also important to stress that this is not just a fact about computing, by brain or by machine. It is a fact about KNOWING (emphasis by D.H.). In a strong sense what we can compute is what we are able to figure out by careful step by step processes from what we already know. The limits of computation are the limits of knowledge. We may have insight and depth, and some people have astonishing brilliance, but there is a strong case to the effect that what is deducible from facts is what can be computed from them algorithmically." Now ain't this something?
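The canonical example of such a problem is the halting problem, and the diagonal argument behind it fits in a few lines. Here is a Python sketch of the proof idea - `halts` is the hypothetical decider, and the `paradox` function shows why it cannot exist:

```python
def halts(program_source, input_data):
    """Hypothetical oracle: True iff the program halts on the given input.
    The diagonal argument below shows that no such function can exist."""
    raise NotImplementedError("undecidable in general")

def paradox(program_source):
    """Feed a program its own source, then do the opposite of the prediction."""
    if halts(program_source, program_source):
        while True:   # predicted to halt -> loop forever
            pass
    # predicted to loop forever -> halt immediately

# If halts() existed, asking whether paradox() halts on its own source
# would yield a contradiction either way - so halts() cannot be written.
```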

It is the last part of the whole paragraph that hit home. Turn the argument around: what we know is what we can compute. Do you remember Joseph Weizenbaum's famous book "Computer Power and Human Reason", where he described his parody of a psychiatrist, done with a short computer program? People responded to the program's almost nonsensical output because it sounded like a (very simple) psychiatrist asking those questions - and with them you never know (;-). Weizenbaum wanted to demonstrate how people overestimated computers, and he certainly did a good job there. But somehow his arguments never sounded absolutely convincing to me. I thought like this: IFF psychiatry is a SCIENCE, then it has to produce rules (things learned about humans, e.g.). Those rules (facts) could be programmed into a computer - not at that time, but theoretically yes. Then why should people not use a computer program to do self-diagnosis? If we transfer the problem to cancer or heart-related diseases it immediately sounds more plausible: use an expert system to diagnose yourself. Sounds improbable? I've been to doctors who wouldn't have been able to come up with the right diagnosis if it had been written on my face. And I've been to hospitals where the only person on duty was a student in her first years...
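Weizenbaum's program was ELIZA running the famous DOCTOR script, and it was little more than keyword matching. A toy sketch in the same spirit - the rules below are made up for illustration, not Weizenbaum's originals:

```python
import random
import re

# Illustrative ELIZA-style rules; not Weizenbaum's original DOCTOR script.
RULES = [
    (r"\bI am (.*)", ["Why do you say you are {0}?",
                      "How long have you been {0}?"]),
    (r"\bI feel (.*)", ["Tell me more about feeling {0}."]),
    (r"\bmy (mother|father)\b", ["Tell me more about your family."]),
]

def respond(sentence):
    """Return the first matching canned response, echoing back keywords."""
    for pattern, answers in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return random.choice(answers).format(*match.groups())
    return "Please go on."

print(respond("I am unhappy"))   # e.g. "Why do you say you are unhappy?"
print(respond("I feel lost"))    # "Tell me more about feeling lost."
```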

Our chess masters lose to computers on a regular basis. If we know it, we can compute it... Some people were shocked about humans losing to computers. But do you really want to compete with your pocket calculator? Is chess really different? If we know it...

Let's come back to Turing machines: Harel uses traditional ones, without "oracles" - in other words, without external interrupts, surprising actions or inputs. But what if you use those other kinds of Turing machines? I don't know whether and how they could solve Harel's computability problems. But there are other problems, like steering a car through traffic, that they might be able to solve much better than the traditional models.
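To make the distinction concrete, here is a minimal Turing machine interpreter with an optional oracle hook - my own sketch, not a formal definition from Harel or anyone else. The classic machine answers only from its tape and transition table; the oracle variant can be steered from outside in mid-run:

```python
def run_tm(transitions, tape, state="start", oracle=None, max_steps=10_000):
    """Minimal Turing machine interpreter.
    transitions maps (state, symbol) -> (new_state, symbol_to_write, move).
    A classic machine has oracle=None; passing an oracle callback models
    the external interrupts that the traditional model deliberately excludes."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape))
        symbol = tape.get(head, "_")
        if oracle is not None:
            override = oracle(state, symbol)  # external input may intervene
            if override is not None:
                state, symbol = override
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("step budget exceeded")

# A trivial machine that flips 0s and 1s until it reads a blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "0110"))  # prints 1001_
```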

This leads over to Peter Wegner's provocative claim that interaction is computationally much more powerful than algorithms. Interaction machines are more like Turing machines with oracles, and Wegner tries to prove that their computational capabilities are way beyond those of regular Turing machines because they INTERACT with the environment or other machines. In essence they are distributed computing machines - my favorites.

In "The paradigm shift from Algorithms to Interaction" , Wegner shows that object oriented programming, distributed objects etc. are not really algorithmic. Many Objects offer an interface that can be called in any order. Like Operating systems if they are interruppt driven. This creates a non-algorithmic space. I don't think that Wegner really proves that interactions are more powerful but he certainly questions algorithms as the base of modern computer science. There are some distributed algorithms like voting in a two-phase-commit process. But I always felt that distributed algorithms are different: They don't exist locally! Only in the process of interaction between machines do they come into existence - they are emergent phenomenons. Very unlike good old local and sequential programming. Good distributed algorithms are FOUND, not so much derived or deduced in a mathematical sense. I guess that makes me a empirical computer scientist.

I should not lend my books to anybody! I had this paperback by a former director of Thinking Machines Corporation (yes, that's the Connection Machine guys) which a colleague in Basle borrowed and never returned - shame on him. The book, which I cannot locate anywhere, dealt with emergent phenomena, and after reading it I was convinced that the way we programmed computers was geared towards understanding and controlling the way they solved our problems - but that this was not really the way to get the most out of computers.

An example: object-oriented computing in many cases creates tightly coupled networks of objects which are hard to change if the focus of the problem changes. OO is categorization, after all. Event-driven systems which communicate through buses, channels or topics (the environment, so to speak) are loosely coupled. Unfortunately they are very fast and interrupt driven - something that is very hard for us humans to follow. We need a lot of tooling - in effect creating something we can understand from the running system. It seems that hardware engineers have fewer problems accepting the fact that without simulation tools they would be unable to do their job. We software people are more like the chess lovers: how could Deep Blue do this?
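For contrast, the core of such a loosely coupled system is tiny - here is a minimal topic-based event bus sketch (the names are illustrative). Publishers and subscribers share only a topic name, never references to each other:

```python
from collections import defaultdict

class EventBus:
    """Minimal topic-based publish/subscribe: components share only
    a topic name, never object references."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.handlers[topic]:
            handler(event)

bus = EventBus()
bus.subscribe("orders", lambda e: print("billing saw:", e))
bus.subscribe("orders", lambda e: print("shipping saw:", e))
bus.publish("orders", {"id": 42})
# Replacing billing or shipping touches nothing else - that is the loose
# coupling; the price is that control flow becomes implicit and hard to follow.
```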

Anyway, since reading Wegner and others I am even more convinced that there is a lot of computer science outside of algorithms. What we know we can compute - but who computes what we don't know?