An older friend frequently asks me, as a technologist, when computers will have human-like intelligence, and what the social/economic effects of that will be.
I struggle to take the question seriously; AI is something that was dropped as a major research goal around the time I was a student twenty years ago, and it’s not an area I’m well-informed about. As I mentioned in my review of the rebooted “Knight Rider” TV series, a car that could hold up a conversation is a more futuristic idea in 2008 than it was back when David Hasselhoff was doing the driving.
And yet for all that, it’s hard to say what’s really wrong with the layman’s view that since computing power is increasing rapidly, it is an inevitability that whatever the human brain can do in the way of information processing, a computer should be able to do, quite possibly within the next few decades.
But what is “human-like intelligence”? It seems to me that it is not all that different from what the likes of Google search or Siri do: absorb vast amounts of associations between data items, without really being systematic about what the associations mean or selective about their quality, and apply some statistical algorithm to the associations to pick the most relevant.
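To make that concrete, here is a rough sketch in Python of what I mean by an association-plus-statistics mechanism. The toy corpus and the respond function are invented purely for illustration, and are certainly not how Google or Siri actually work: the point is only that the mechanism absorbs co-occurrences indiscriminately, then answers a query by picking whatever is most strongly associated with the query terms, with no model of what the associations mean.

```python
from collections import defaultdict
from itertools import combinations

# Toy "experience": each entry is just a bag of items that occurred together.
# (Invented example data, purely for illustration.)
corpus = [
    ["car", "talks", "knight", "rider", "driver"],
    ["car", "engine", "driver", "road"],
    ["computer", "talks", "siri", "phone"],
    ["computer", "search", "google", "ranking"],
    ["search", "google", "ranking", "relevance"],
]

# Absorb associations indiscriminately: every pair that co-occurs gets its
# association count bumped, with no notion of *why* they co-occur.
assoc = defaultdict(lambda: defaultdict(int))
for bag in corpus:
    for a, b in combinations(set(bag), 2):
        assoc[a][b] += 1
        assoc[b][a] += 1

def respond(query_terms, top_n=3):
    """Pick the items most strongly associated with the query clump.

    The "statistics" here is nothing cleverer than summing raw
    co-occurrence counts and sorting.
    """
    scores = defaultdict(int)
    for term in query_terms:
        for other, count in assoc[term].items():
            if other not in query_terms:
                scores[other] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(respond(["car", "talks"]))      # e.g. ['driver', 'knight', 'rider']
print(respond(["google", "search"]))  # e.g. ['ranking', 'computer', 'relevance']
```

Nothing in the sketch understands anything; “relevance” is just an artefact of which counts happen to be largest.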
There must be more to it than that; for one thing, trained humans can sort of do actual proper logic, about a billion times less well than this netbook can, and there’s a lot of effectively hand-built (i.e. specifically evolved) functionality in a few selected pattern-recognition areas. But I think the general-purpose associationist mechanism is the most important from the point of view of building artificial intelligence.
If that is true, then a couple of things follow. First, the Google/Siri approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling human-like ability.
But it also suggests that the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.
Humans can reach conclusions that no logic-based intelligence can get close to, but humans get a lot of stuff wrong nearly all the time. Google Search can do some very impressive things, but it also gets a lot of stuff wrong. That might not change, however much the technology improves.
There are good reasons to suspect that human intelligence is very close to being as good as it can get.
One is that thinking about things longer doesn’t reliably produce better conclusions. That is the point of Malcolm Gladwell’s “Blink” (as far as I understand it; I take Gladwell to be the champion of what Neal Stephenson called “those American books where once you’ve heard the title you don’t even need to read it”).
The next, related, reason is that human intelligence doesn’t scale out very well; having more people think about a problem doesn’t reliably give better answers than having just one do it.
Finally, the fact that, in spite of evolutionary pressure, there is enormous variation in the practical usefulness of human intelligences suggests that making it better is not simply a case of improving the design. If the variation were down to different design, then the better designs would have driven out the worse ones long ago. I think it is far more to do with circumstances, and with the fundamental difficulty of identifying the correct problems to solve.
The major limitation on conventional computing is that it can only do so much per second; only render so many triangles, only price so many positions or simulate so many grid cells. Improving the speed and density of the hardware is pushing back that major limitation.
The major limitation on human intelligence, particularly when it is augmented with computers as it generally is now, is how much it is wrong. Being faster or bigger doesn’t push back the major limitation unless it can make the intelligence wrong less often, and I don’t think it would.
What I’m saying is that the major cost of human intelligence is not in the scarce resources required to execute the decision-making, but the damage caused by all the bad decisions that humans make.
The major real-world expense in obtaining high-quality human decision-makers is identifying which of the massive surplus available are actually any good. Being able to supply vastly bigger numbers of AI candidates would not drive that cost down.
Even the specialisms that humans have might be limited more by the cost they impose on the quality of general decision-making than by the cost of actually implementing the capability.
If that’s the situation, then throwing more computing resources at AI-type activity might not change things that much: computers can be as intelligent as humans, but not more intelligent. That’s not nothing, of course: it opens the door to replacing a lot of human activity with automated activity, with all the economic effects that implies.
There will be limitations in application, because if human-like intelligence really is what I think it is, then the goals being sought by an AI are necessarily as vague as everything else: they will be clumps of associations, and the “intelligence” will just do the things that are associated with the goal clump. We won’t be able to “program” it the way we program a logic-based system, just kind of point it in the right direction in the same way we do when we type something into a Google search box.
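By way of illustration, here is another toy sketch of the contrast I have in mind; the Action class, the cost rule, and the goal terms are all invented for the example and stand in for nothing in particular.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    cost: int
    reversible: bool

actions = [
    Action("ship_feature", 80, True),
    Action("delete_database", 5, False),
    Action("buy_company", 900, False),
]

# Programming a logic-based system: the goal is a precise, checkable rule,
# and the machine either satisfies it or it doesn't.
def acceptable(a: Action) -> bool:
    return a.cost < 100 and a.reversible

print([a.name for a in actions if acceptable(a)])  # ['ship_feature']

# "Programming" an association-based system: the goal is just a clump of
# terms we point the machine at, like a search query, and what comes back
# is whatever happens to be most strongly associated with that clump.
goal_clump = ["cheap", "safe", "reversible"]
# chosen = respond(goal_clump)   # hypothetical reuse of the earlier sketch
```

The first kind of goal can be checked; the second can only be pointed at and hoped for, which is the limitation I mean.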
I don’t know if what I’ve put here is new: I think this view of what the major issue in intelligence is (“associationism”?) is fairly widespread, but in all previous discussions I’ve seen or participated in, there’s been an assumption that if x years from now we have artificial human-like intelligence, then 2x years from now, or probably much less, we will have amazing superhuman artificial intelligence. That is what I am now doubting.
With intelligences available “in the lab” we might be able to prepare and direct them more effectively than we do now. But even that’s not obviously helpful: with human education, again, the limitation is not so much how long it takes and how much work it is, but rather how sure we are that it is actually doing any good at all. We may be able to give an artificial intelligence the equivalent of a hundred years of university education, but is a person with that experience really going to make better decisions? The things we humans work hardest at learning and doing, accumulating raw information and reasoning logically, are the things that computers are already much better at than we are. The things that only humans can do are the things we simply don’t know how to do better, even if we were to re-implement them on an electronic platform, speeded up, scaled up, scaled out.
Note that all the above is the product of making statistical guesses using masses of ill-understood unreliable associations, and is very likely to be wrong.
(Further thoughts: Relevance of AI)
I recommend reading The Emperor's New Mind by Roger Penrose.
"opens the door to replacing a lot of human activity with automated activity"
I can't help wondering whether this is really necessary, though. Don't you think we're already way past the point where the diminishing returns of replacing human activity with automated activity kicked in? Most people are just not that smart; they can't all be designers and scientists (or can't be made smart quickly; it doesn't matter which is true, since the practical results are similar either way), and it appears to me that we, the societies of the developed countries, don't know how to employ these people. Instead we let the matter be, and cast a perplexed eye on the recrudescence of developing-country societies in our midst.
I meant rather that more intelligence will hardly help the less-than-super-smart people. An AI network more intelligent than humans will have even less use for them than Ivy Leaguers do now. "You can teach a bear to ride a bicycle, but will it be useful or enjoyable for the bear?"
Although the translation leaves much to be desired, I recommend "The Time Wanderers" by the Strugatsky brothers (text available on Scribd).
Your points on the general and pervasive wrongness and non-utility of most human thought are on target, I think, and largely ignored in most discussions on AI. Imagine a weakly godlike AI designed by Aretae.
The one thing that leads me to suspect that human-level intelligence is possible in a machine is how quickly insect-like behaviors in autonomous robots became possible as soon as the computer passed the insect-intellectual-capacity benchmark. The next milestone is rat/mouse level, in the next year or so, and if that trend continues I think we'll at least have to consider that intelligence is in some senses an emergent property of a sufficiently interconnected and powerful engine.
Two interesting discussions of the mind/brain/computer problem, and tangentially the AI problem, are given by Hubert Dreyfus (What Computers Still Can't Do, 1992, MIT) and Raymond Tallis (Aping Mankind, 2011, Acumen). Both treatments are more philosophical than technical, and both highlight failures rather than pose solutions. If these guys are anywhere near right, the prospects for AI and discovering what consciousness is are very dim. Of course, you can go with Dennett and merely assert there is no problem.
It has often occurred to me that one roadblock to AI is the digital computer. All the successful AI machines (i.e., humans) use analog computers committed to specific problems. Analog computers have the great virtue that whatever (and whenever) the state of the machine is, that IS the solution. All digital machines iterate their way to an approximate solution. Maybe AI would make more progress by building analog machines.