That actually started when I was in high school and I had a chance to
observe some of the very primitive card sorters: they were more than sorters,
and less than calculators, but they were able to perform very ingenious
arithmetic calculations when the only memory was punch cards.
That was highly amusing to me to start with, that you could
do the equivalent of a numeral on a punch card, and so I thought
I better keep an eye on these things, they may end up being interesting
critters one day. I learned how to program that machine and then over
successive years sort of dipped into what the current state of the art was.
I learned plugboard programming in the next generation, but didn't find
anything very interesting to do with them. And then the 701, and soon after
that the 7090, came along, the IBM machines, and with them FORTRAN programming
and so on, so there was a little higher-level language capability. I decided
I had better learn those skills. I could use computation
directly in some aspects of my work, but more importantly I saw that they
had a future, so I took the first programming courses when I was at Stanford,
in '61 I believe. I started thinking about what kind of really challenging
intellectual problems these machines could solve, and got together with John
McCarthy and then later on Ed Feigenbaum and discovered they were involved
in some of the early traditions of what's now called artificial intelligence,
but kind of looking for good object areas, and I suggested organic chemical
analysis was one of them. This was also connected to the designs
that we'd been working on for the Mars lander, so NASA was willing to fund
some of this software research as well. So we organized a team to apply
artificial intelligence to molecular analysis, and we ended up inventing
the first comprehensive expert system. It wasn't really all that clever
in terms of how to think, but just a lot of engineering in how you introduce
a knowledge base and get it to work smoothly, and something that you can debug
dynamically. So that was the DENDRAL project. ...Quite a few people
were involved in it. I think its major impact has been the generic one,
that it was a successful expert system.
LP -
Have you been continuing with this pursuit since then?
JL -
Indirectly, I think the other thing I started on was the use of the
Internet for scientific communication, and while it had been invented
by ARPA as a means of facilitating that, I think our research group
was one of the very first to make really systematic use of it as a way of
managing research projects. There was one time we had at least 20 people
exchanging notes, pieces of programs, and using ARPAnet for that purpose,
and I did foresee this: I wrote a paper in the mid-70s where I anticipated
that we would have electronic publication by that route,
and I'm still involved in trying to see that that happens in a disciplined
way. I'm chairing a UNESCO committee on how to improve global Internet
communications for science, help third-world people get onto the Net
so they can be part of the process, and so on.
LP -
Do you have any other predictions about the future of computers?
JL -
Well, so many of the things I've predicted were technologies that were
just sitting right in front of us. It was sort of obvious how they would
come through. A lot of them have materialized. I'm sort of running
out of raw material, but I think much more of the same. Today's supercomputer
will be tomorrow's PC, and the day after that will be on a single chip, so
that process I think will go on inexorably, maybe with occasionally some
bigger jumps. I think software is a very big hurdle: how to make use
of that computational capability and still keep some control over it,
and there I think we have a very serious phenomenon, because if you
want to solve very complex problems, you will have to end up letting
machines work out a lot of the details for themselves, in ways where
we don't really understand what they are doing. Neural nets
are an example of that, and I think those are probably a passing fad,
but genetic programming is a broader version of the same idea: basically,
you let the machine thrash around in some semi-disciplined way and
recognize when it has found a step in a solution, rather than trying
to program it yourself.
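The "semi-disciplined thrashing" he describes can be illustrated with a toy evolutionary search. This is only a minimal sketch of the general idea, not any system from the interview: it evolves bit-strings by mutating candidates at random and keeping the fitter ones, recognizing a step toward a solution instead of programming the solution directly. The fitness function, population size, and mutation rate are all illustrative choices.

```python
import random

def evolve(fitness, genome_len, pop_size=50, generations=200,
           mutation_rate=0.05, seed=0):
    """Toy evolutionary search over bit-strings: mutate at random
    ("thrash around") and keep the fitter candidates ("recognize
    when it has found a step in a solution")."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population unchanged.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Variation: each survivor yields a randomly mutated child
        # (each bit flips with probability mutation_rate).
        children = [[bit ^ (rng.random() < mutation_rate) for bit in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy problem ("OneMax"): fitness is simply the number of 1-bits,
# so the search should drift toward the all-ones string.
best = evolve(fitness=sum, genome_len=20)
print(sum(best))
```

Nothing here "knows" the answer in advance; selection pressure alone pulls the population toward it, which is the contrast with conventional programming that he draws.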
LP -
Aren't there higher level programming languages that do that?
JL -
Yeah, but I think the net result of that is going to be a lot of
computer programs that we believe accomplish the goals that we have
established, but which almost certainly will have all kinds of second-order
effects. They'll be like organisms, with a life of their own. And we had
better recognize those possibilities in deciding what we entrust to machines
that operate in that way, and put more effort into understanding them (even
if it has to be done ex post facto).
If a neural net has contrived a solution to a pattern-matching
problem, don't be content with that, try and figure out how it did it,
try to get back to what principles you can learn from those discoveries,
and then you'll be better off in two ways: (1) you'll be more secure that
there are no hidden Trojan horses, introduced maliciously or just
inadvertently by the way the program operates, and (2) you may
learn some operating principles about how to accomplish your goals better
next time around.
So, big leaps and big headaches with software, that's one prediction.
Molecular biology
has a pretty clear-cut path outlined for itself: there are 100,000
human genes, every one of them is going to be investigated one way or
another, and 10% of them will have some useful application. There won't
be enough resources to test everything, so we have to learn how to prioritize
what's really important; right now we have almost too much technology push
rather than demand pull about where these applications ought to go,
and I think that'll come around, with very large implications
for what it means to be a human being. If lifespan jumps by 30 or 40 years,
that already has enormous implications for how you have to organize
your economy, and we are only beginning to think about that: providing
enough work when we have machines doing a lot of it.
We are just beginning to see that now. The enormous enhancement in
productivity results in downsizing, which then results in
unemployment. For the first time we are seeing authentic technological
unemployment on a significant scale. With any luck we'll be smart enough
to run faster and operate on still higher and higher levels of technological
output, put the rest of the world out of a job, which is what we've been
trying to do for quite a while.