I went to the morning half of the Living Heritage AI Workshop held at CSAIL today, and it turned out to be a cool get-together of many of the founders of artificial intelligence from the MIT AI Lab's heyday. Marvin Minsky spoke for a while, and a small but interesting part of his talk was a rundown of why he thought AI research essentially came to a halt in the 80s: the focus shifted from solving small, focused problems to trying to come up with a single approach that would solve all problems. Here is his rundown of these one-size-fits-all approaches and their shortcomings:

  • Neural Networks - tend to get stuck on local peaks
  • Rule-based Systems - don't yet use enough reflective layers
  • Baby Machines - all, so far, have failed to keep growing
  • Statistical Methods - fail to explain what causes exceptions
  • Genetic Programs - fail because they lack explicit goals
  • Situated Action - needs higher-level representations
  • Formal Logic - can't exploit reasoning by analogy
  • Fuzzy Logic - cannot support reflective thinking
  • Simulated Evolution - fails to learn the causes of failures
  • Algorithmic Probability - more general, but needs approximations
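
The "local peaks" critique in the first bullet is the classic local-optimum problem in greedy optimization. A minimal hill-climbing sketch (my own illustration, not from the talk — the function and names are made up) shows how a greedy searcher that only accepts improving moves gets stranded on a nearby low peak:

```python
import math
import random

random.seed(0)  # make the run reproducible

def hill_climb(f, x, step=0.1, iterations=1000):
    """Greedy hill climbing: accept a random neighbor only if it improves f."""
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

# A toy landscape with a local peak near x = -1 (height ~1)
# and a taller global peak near x = 2 (height ~2).
def f(x):
    return math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)

# Starting near the small peak, greedy search climbs it and stays:
# no single small step across the valley ever looks like an improvement.
x_local = hill_climb(f, -1.5)   # ends up near -1, the local peak
x_global = hill_climb(f, 1.5)   # ends up near 2, the global peak
```

Gradient descent on a neural network's error surface behaves analogously: it follows local improvement only, so which peak (or valley) it settles in depends entirely on where it starts.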

He followed with the comment: “Physicists prosper by showing where old theories fail. AI-researchers seldom publish their programs’ faults.”

Another cool tidbit: he said he couldn't remember ever formally admitting anyone to the AI Lab back in the day. They had plenty of funding at the time, and people would just show up from universities overseas; sometimes they would stay for a week, and sometimes they would stick around for good.