January 10, 2010

Important Point, with More Important Implications

Stephen Wolfram notes that computers do mathematics very differently from humans (I've read similar things about chess programs: they don't play like a human with enhanced speed and memory):
When Mathematica was young, we actually used to include as part of our software distribution a lot of source code for doing things like symbolic integration. It happened to be easier to do the software engineering that way. And we had the idea that perhaps occasionally someone would look at the code, and give us a good suggestion about it.

But that turned out to be completely unrealistic. The code was pretty clean. But it was mathematically very sophisticated. And only a very small number of experts (quite a few of whom had actually worked on the code for us) could understand it.

And the only tangible thing that happened were a few tech support calls from people who thought they could modify some piece of the code—and were about to horribly mess things up.
So what are the implications for AI? And for the techno-optimism that future AIs will be human-friendly?

This is a worthwhile historical overview.
