Meritocracy versus Loyalty

This has been sitting in my drafts directory for three months, since I read this Ross Douthat column on Corzine. But it goes with some of what I was writing yesterday, so I’ve dusted it off.

Douthat points out, I think rightly, that the defining features of our modern elite are its arrogance and its recklessness.

Arrogance is perhaps an inevitable weakness of any elite, but I think he is right to identify the recklessness as something new since the days of a hereditary upper class.

For one thing, someone who has been elevated from a humble background wholly or mostly by their own efforts and ability is likely to have a very high opinion of that ability. That again seems almost an inevitable side-effect of having the most able people in positions of power.

I think it’s more significant that a large number of people in positions of serious power have absolutely no-one above them. If you are Governor of a state, or CEO of a company, you are theoretically responsible to voters or shareholders, but they do not play the role of a superior in a social or psychological sense; they are more the material a politician or manager works with than the patron he works for.

If the most significant person you know of is yourself, then the brutal one-sided logic of excessive risk-taking kicks in. You’re already successful, you’ve got a well-upholstered safety net, so when you take a big gamble, if it comes off you’re a hero and move up to the next level of achievement, and if it doesn’t you take a break for a bit to play golf and then try something new.

That unbalanced incentive is widely recognised now, but in itself it is not what’s new. Limited Liability has been around a good while, as have the country houses of disastrous politicians. What is new is the end of loyalty. In the past, the bulk of those wielding power were tied not just by their rolling contracts but by bonds of loyalty to superiors. A failed gamble would impact not merely a crowd of insignificant peasants, voters or shareholders, but would hit the status and reputation of those whose approval or disapproval actually matters.

Obviously there were always a few who were beyond any such limitations, but think about how many there are now who have no practical superiors. In the past, it would have been hard to draw up such a list and have it include “CEO of MF Global”.

Nor is the concept limited to business. To whom does Hillary Clinton, or the head of an agency, look up as a superior? To the President who appointed them? I don’t think so. He’s just another punter. What about Paul Krugman, or some pressure-group head?

The distinction I’m getting at, between a technical superior and a psychological superior, is whether the superior’s opinion matters beyond the immediate game being played. If you’re a department head in a company or a government agency, your boss can fire you. But that’s all he can do, and that’s the only risk you’re taking. Once he’s done that, he’s not your boss any more. On the other hand, if your boss is your lifelong mentor, then he’s a psychological superior. Even if he fires you, he doesn’t stop being your superior; you still need his approval at some level. I think such relationships were once the norm, and have been becoming steadily less common for a few hundred years.

A response has been to try to build up abstractions to which powerful people feel loyalty. Many companies try hard to impress on their people the idea of being part of something bigger than themselves, but that’s a tall order for an institution which itself is required to operate by cold logic.

The replacement of mentor-protégé relationships by meritocracy has had two drivers: first, modern communications, record-keeping, and the broadening of trust up to recent times have meant that positions are filled from much wider pools of candidates than before; second, as I described yesterday, the concepts of personal loyalty and rewards for loyalty have come to be seen as suspect, even corrupt.

I therefore propose a two-pronged response to the problem of meritocratic recklessness: first, personal loyalty to a mentor should be recognised as something moral and admirable; second, the most senior positions should be held by individuals on a longer-term schedule, to encourage the maintenance of such relationships.

Basic Power and Political Power

I ran into a terminological problem in the previous posts. I was making the argument that it is more acceptable for non-sovereigns to demand a share in the spoils of government than to demand a share in the actual decision-making of government.

To do that I had to classify those people who have power₁ to make demands on government, but who don’t use that power₁ to actually share in power₂ by influencing government policy. I need a different word to distinguish the capability to influence policy from the exercise of that capability. I would like to call the first “potential power”, except that Etymology Man would come crashing through the window, and it’s too cold in here as it is.

The best I’ve come up with is “Basic power” versus “Political power”. So I can say that those with basic power owe loyalty to the sovereign, but can expect to be rewarded for that loyalty. Any attempt to gain political power, rather than wealth and status, is disloyalty, and should be opposed by all right-thinking people.

The terms aren’t really obvious though, and I’m hoping to find better ones.

Honour Given and Taken

Not long ago, Fred Goodwin was a Knight, his successor Stephen Hester was in line for a £900K bonus, and Chris Huhne was a cabinet minister.

It would be neat in a literary way to show that these three withdrawn honours are part of the same thing, but it’s more interesting, and more true, to see how they’re all different.

Going in reverse chronological order, Huhne is in some ways the most straightforward. He was in a position of trust, and he is accused of criminal dishonesty.

On more detailed reflection, oddities emerge. For one thing, while it would be nice to think that laws and policies are being made by people who are honest and trustworthy, the idea that any of his rivals or colleagues are honest enough to admit their mistakes or crimes is laughable.

For another thing, why is it the decision to prosecute that triggers his resignation? The facts are not really any better known than they were before.

I suspect that what forced him out was the media deciding to claim that he must be forced out. That doesn’t necessarily indicate any particular animus towards him on the part of the media; a cabinet resignation is worth pushing for just for story value. It might be that earlier there were reasons for the press not to try to do him in, but those are now gone.

I could suggest a couple of possible reasons: one is that the media seemed somewhat invested in the coalition, but has now soured on it. (The 2010 story of David Laws tells against that theory somewhat, but he might have been more specifically unpopular with the media.) Another theory might be that Huhne’s activity on climate change protected him, but that has mysteriously become less of a concern.

Ultimately, I don’t think we can know what’s really going on, and that’s why day-to-day party politics isn’t worth paying attention to.

On to Goodwin then. On the one hand, if Goodwin was rewarded for benefiting British banking, it is fair to say that any benefit he bestowed was more than undone. On the other, the whole process did not seem to have much to do with either justice or wise decision-making; rather it had all the appearance of a stampede.

Whatever knighthoods are for these days, it can’t be what they were originally for. It’s a bit murky. Interestingly, knighthoods would fit well into a formalist system, as a treatment of the coalition problems I just wrote about. It could serve as a formalisation of informal power: a recognition that the recipient has some power, is loyal to the sovereign, and is being rewarded for that loyalty. If that were the basis of honours, they would not be withdrawn for incompetence, or even for criminality, but only for disloyalty. It would mean that that person ought not be permitted to obtain any power again.

Finally Hester. Hester is CEO of a bank which is making modest profits in a difficult market. As such, he would normally expect a substantial bonus. The same stampede which took away his predecessor’s knighthood took that as well.

There are legitimate questions about the amount of money made by banks and their employees, which I am not going to address — anyone worth reading on the issue would be either more knowledgeable or less personally interested than me.

The question of bonuses per se is a separate one, though. What it amounts to is that companies that award large bonuses (relative to salary) are run in a more formalist manner than most other corporations. In many organisations, valuable employees are rewarded with more responsibilities, or better job security. Arnold Kling recently raised the point that this can produce bad outcomes. Bonus-paying companies avoid that, giving responsibilities as tasks rather than as rewards, and rewarding valuable employees more directly with cash. This is the appropriate response to the sort of issue Kling raised, and which Aretae picked up on as a widely applicable example of bad governance.

The fact that this formalist measure to improve governance arouses such opposition (again, independently of the actual sums involved; Hester’s salary for 2011 was over a million pounds, and attracted little attention) says a lot about what is wrong with modern political culture.

So, three very different honours: a minor position in our corrupted and ineffective system of government, an anachronism that might once have been a formalist recognition of power and reward for loyalty, and a straightforward, honest payment for value. All removed, for better or worse, in the same way, by an unthinking popular stampede, triggered by a media driven not primarily by ideology but by a need for drama.

Formalism and Coalition

Aretae insists that all government is coalitional.

Maybe so, but that doesn’t mean it’s a good thing to widen the coalition further, and spread power about randomly.

The point of formalism is that power should be aligned with some form of responsibility, so that the powerful not benefit from destructive behaviour, and that attempting to obtain more power should be illegitimate, so that energies not be directed to destructive competition for power.

Formalists tend to believe that stable, effective and responsible government would follow a largely libertarian policy, choosing to limit government action to maintaining order and protecting private property, and taking its own loot in the form of predictably and efficiently levied taxation rather than by making arbitrary demands of random subjects. Such a policy would maximise the long-term revenue stream from the state.

Given a policy which sets limits on government, it becomes reasonably straightforward to deal with those centres of power which are not sovereign but which cannot be eliminated. They get subsidies, but not power over policy. Given that the sovereign chooses, for reasons of efficiency, to take taxes and buy food with them rather than to take food directly from wherever he fancies, there is no problem in giving pensions or subsidies to those whose support is needed.

The key formalist idea is that if those with informal power go beyond what they are entitled to and seek to influence general government policy, then they are doing something anti-social and immoral. All those who have an interest in the continuation of stable, effective and responsible government will see such an attempt as a threat. Fnargl does not have a ring, and I do not much fancy engineering weapon locks implementing a bitcoin-like voting protocol, so a combination of popular will and, in due course, force of tradition is all we have to fill the gap. In as much as there is a general interest in anything, there is a general interest in good government, and I do not think it is all that far-fetched to see sovereign authority as something that people would reflexively stand to defend, were it not that they have been taught for 250 years to do the opposite.

What’s striking is that our current political morality holds the opposite view: that attempting to influence policy is everyone’s right, but to receive direct payoffs is unjust. The powerful are therefore rewarded indirectly via policies with enormously distorting effects on the economy or on the administration of government, whose general costs greatly outweigh the gains obtained by the beneficiaries. Further, it is easier for them to seek to protect and increase their power, than to seek reward for giving it up, even if the general interest would benefit from the latter.

I could do with an example to illustrate this — if a person has necessary power, such as a military officer, then he should keep his power and be rewarded for it. If alternatively his arm of the military is no longer needed, but he still has power because he could potentially use the arm against the sovereign, then it is preferable to pay him extra to cooperate in disbanding the arm, rather than to maintain it just to keep him loyal. The same logic might apply in the organisation of key industries, or sections of the bureaucracy.

It would not necessarily be easy to resolve these things perfectly, but it would be made easier by recognising that concentrating power over general policy — sovereignty — is a good thing, as far as it is possible, and that the sovereign who has control over policy has the right to use it in whichever way he sees fit: to hand out cash presents as much as to award monopolies.

The exercise of democracy makes things very much worse, by adding to the number of those with necessary power anybody who can sway a bloc of voters, and enabling them to make demands for more inefficient indirect sharing of the loot.

A Case for Ispettore Zen

I’ve probably mentioned before that I read a lot of crime novels. My favourites of the modern era are probably the Aurelio Zen series by Michael Dibdin. Zen, a detective of the Polizia di Stato, solves his cases with a blend of staggering luck and an involuntary bloody-mindedness which distracts him from his more important tasks of attempting to understand and navigate the women in his life and the political machinations of the Italian bureaucracy.

I have no idea how realistic Dibdin’s grotesque presentation of the corruption and hidden motivations of Italian life really is, but I have not been able to see the Costa Concordia story in any other context than as an Aurelio Zen mystery. The captain who accidentally fell into a lifeboat and then argued with the coastguard on the phone, the mysterious blonde on the bridge, the cruise line that was blaming their own captain for everything even while the passengers were still being rescued:  all we can be sure of is that nothing is what it seems to be, and nobody is telling the truth. Only Zen can actually get to the truth of it, and even if he does, we probably won’t know, because the official story might be completely different…

Monarchism and Stability in the Middle East / North Africa

Tyler Cowen at Marginal Revolution posts a link to a paper by Victor Menaldo, The Middle East and North Africa’s Resilient Monarchs.

It’s well worth a read; it’s not long, though frankly I’ll need to spend more time with it than I have this evening.

First and foremost, it’s a challenge to the Bueno de Mesquita theory that all that matters is the size of the ruling coalition and the selectorate — a theory that I found valuable but simplistic. Menaldo addresses political culture, observing that it serves to distinguish regime insiders from outsiders. He finds that monarchical governments have less conflict and better economic development.

Particularly interesting to me is the account of elites within the monarchical society. These kingdoms are not the absolute autocracies of my “degenerate formalism”, but actually existing monarchies, in which the extended royal family and other important groups hold significant power. Menaldo’s argument is that because the political culture defines who shares in power, the struggles between in-groups are limited. Unlike a faction in a revolutionary republic, you can lose a power struggle and still be an insider with some power.

In my view, this is also the strength of our somewhat corrupted democracies: if you’re an insider but you’re losing, it’s still not worth being extremely destructive. Better to admit defeat and preserve the system that keeps you an insider even as a loser.

Because of that, this paper doesn’t really make my argument: it shows that monarchy is better than a revolutionary republic, but not that it is better than a western democracy. Still, it’s useful that it’s showing some of the strengths that monarchy has.

It’s not without weaknesses, either. As with other work of this kind, I don’t really take the mathematics seriously. Checking that a statistical analysis bears out the impression you get from drawing a couple of graphs and watching CNN is not what I call verifying a testable hypothesis. And a relatively small data set of somewhat subjective categorisations of events seems inadequate for the amount of analysis being done on it.

Also, the paper, as far as I have seen, does not explore the possibility that foreign influence is the explanation for the difference in violence. Bahrain faced nothing like the outside pressure that Libya or Syria did. I don’t think foreign action is affected directly by whether the regime is monarchical or republican, but there might be an indirect link with foreign policy stance.

Diane Abbott

@bimadew White people love playing “divide & rule” We should not play their game #tacticasoldascolonialism

Offensive? Of course not. How can that possibly be offensive? Just because it implies that it is possible to generalise about what “white people” like? You mean like this? What rubbish.

Well, is it wrong, then? I think so, but so what? She’s a Labour MP — saying things that are wrong is her job. Further, it’s worth arguing about.

Speaking on behalf of white people, we do not love playing “divide & rule”. It’s strictly a last resort — keeping track of different groups of black people gives us a headache. Which ones are the Tutsis again? We much prefer to have “community leaders” deal with all that stuff for us¹.

I would not have been able to say that had Diane Abbot not raised the issue. She was right to raise the issue, despite being wrong: like I said, that’s her job. She should not have been shut up or made to apologise.

The reflex to hang her out to dry is understandable: we are frustrated at not being allowed to say things about race, and when one of “them” does it, we take revenge. But I think that is a bad mistake — ironically, this is one time where we have to risk that headache and play “divide & rule”. Abbott is not one of “them” that want us to shut up about race. Rod Liddle says that she has used the same tactics in the past, but when he talked about black crime, she at least disagreed with him on the merits. Probably wrongly, mind, but, Labour MP, etc. Yes, she used the R-word as well, but if everyone complaining had also engaged with the argument as she did, they wouldn’t have been able to shout it down. It is the likes of Alex Massie and Bonnie Greer weighing in that make it near impossible to have such a discussion.

Non-white politicians are generally willing to talk about race. (Sometimes at enormous length). Being offended is Stuff White People Like. And that’s not something I’m going to apologise for saying.

¹ If it turns out that the “community leaders” are all from one group, and are using the power we give them to exterminate another, we would rather not know about it, thank you very much.

AI, Human Capital, Betterness

Let me just restate the thought experiment I embarked on this week. I am hypothesising that:

  • “Human-like” artificial intelligence is bounded in capability 
  • The bound is close to the level of current human intelligence  
  • Feedback is necessary to achieving anything useful with human-like intelligence 
  • Allowing human-like intelligence to act on a system always carries risk to that system

Now remember, when I set out I did admit that AI wasn’t a subject I was up to date on or paid much attention to.

On the other hand, I did mention Robin Hanson in my last post. The thing is, I don’t actually read Hanson regularly: I am aware of his attention to systematic errors in human thinking; I quite often read discussions that refer to his articles on the subject, and sometimes follow links and read them. But I was quite unaware of the amount he has written over the last three years on the subject of AI, specifically “whole brain emulations” or Ems.

More importantly, I did actually read, but had forgotten, “The Betterness Explosion”, a piece of Hanson’s, which is very much in line with my thinking here, as it emphasises that we don’t really know what it means to suggest we should achieve super-human intelligence. I now recall agreeing with this at the time, and although I had forgotten it, I suspect it at the very least encouraged my gut-level scepticism towards superhuman AI and the singularity.

In the main, Hanson’s writing on Ems seems to avoid the questions of motivation and integration that I emphasised in part 2. Because the Ems are actual duplicates of human minds, there is no assumption that they will be tools under our control; from the beginning they will be people with whom we will need to negotiate — there is discussion of the viability and morality of their market wages being pushed down to subsistence level.

There is an interesting piece “Ems Freshly Trained” which looks at the duplication question, which might well be a way round the integration issue (as I wrote in part 1, “it might be as hard to produce and identify an artificial genius as a natural one, but then perhaps we could duplicate it”, and the same might go for an AI which is well-integrated into a particular role).

There is also discussion of cities which consist mainly of computer hardware hosting brains. I have my doubts about that: because of the “feedback” assumption at the top, I don’t think any purpose can be served by intelligences that are entirely isolated from the physical world. Not that they have to be directly acting on the physical world — I do precious little of that myself — but they have to be part of a real-world system and receive feedback from that system. That doesn’t rule out billion-mind data centre cities, but the obstacles to integrating that many minds into a system are severe. As per part 2, I do not think the rate of growth of our systems is limited by the availability of intelligences to integrate into them, since there are so many going spare.

Apart from the Hanson posts, I should also have referred to a post I had read by Half Sigma, on Human Capital. I think that post, and the older one linked from it, make the point well that the most valuable (and most remunerated) humans are those who have been successfully (and expensively) integrated into important systems.

Relevance of AI

I felt a bit bad writing the last post on artificial intelligence: it’s outside my usual area of writing, and as I’d just admitted, there are a number of other points within my area that I haven’t got round to  properly putting in order.

However, the questions raised in the AI post aren’t as far from the debates Anomaly UK routinely deals in as I first thought.

Like the previous post, this falls firmly in the category of “speculations”.  I’m concerned with telling a consistent story; I’m not even arguing at this stage that what I’m describing is true of the real world today.  I’ll worry about that when the story is complete.

Most obviously, the emphasis on error relates directly to the Robin Hanson area of biases and wrongness in human thinking. It’s not surprising that Aretae jumped straight on it. If my hypothesis is correct, it would mean that Aretae’s category of “monkeybrains”, while of central importance, is very badly named: the problem with our brains is not their ape ancestry, but their very purpose: attempting to reach practical conclusions from vastly inadequate data. That is what we do; it is what intelligence is, and the high error rate is not an implementation bug but an essential aspect of the problem.

(I suppose there are real “monkeybrains” issues in that we retain too high an error rate even when there actually is adequate data. But that’s not the normal situation)

The AI discussion relates to another of Aretae’s primary issues: motivation. Motivation is getting an intelligence to do what it ought to be doing, rather than something pointless or counterproductive. When working with human intelligence, it’s the difficult bit. If artificial intelligence is subject to the problems I have suggested, then properly specifying the goals that the AI is to seek will quite likely also turn out to be the difficult bit.

I’m reminded in a vague way of Daniel Dennett’s writings on meaning and intentionality. Dennett’s argument, if I remember it accurately, is that all “meaning” in human intelligence ultimately derives from the externally-imposed “purpose” of evolutionary survival. Evolutionarily successful designs behave as if seeking the goal of producing surviving descendants, and seeking this goal implies seeking sub-goals of feeding, defence, reproduction, etc. etc. etc. In humans, this produces an organ that explicitly/symbolically expresses and manipulates subgoals, but that organ’s ultimate goal is implicit in its construction, and not subject to symbolic manipulation.

The hard problem of motivating a human to do something, then, is the problem of getting their brain to treat that something as a subgoal of its non-explicit ultimate goal.

I wonder (in a very handwavy way) whether building an artificial intelligence might involve the same sort of problem of specifying what the ultimate goal actually is, and making the things we want it to do register properly as subgoals.
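
To make that handwaving slightly more concrete, here is a toy sketch in Python of the structure I have in mind (every name and number in it is invented for illustration; it is not a claim about how any real AI is or should be built): an ultimate goal that is implicit in the agent’s construction and not open to symbolic manipulation, and an explicit subgoal layer where a task only registers if it lines up with that hidden goal.

    # Toy illustration only: an agent whose ultimate goal is an opaque scoring
    # function it cannot inspect or rewrite, and whose explicit reasoning works
    # only on named subgoals. "Motivating" the agent means getting a task to
    # register as a subgoal, i.e. associating it with outcomes the hidden goal
    # scores highly. All names and numbers here are invented.

    def hidden_ultimate_goal(outcome):
        # Implicit in the agent's construction; never symbolically manipulated.
        return outcome.get("security", 0) + outcome.get("status", 0)

    class Agent:
        def __init__(self):
            self.subgoals = {}  # the explicit, symbolically manipulable layer

        def register_subgoal(self, name, predicted_outcome):
            # A task only "takes" as a subgoal if its predicted outcome scores
            # well under the hidden goal; otherwise it is quietly ignored.
            if hidden_ultimate_goal(predicted_outcome) > 0:
                self.subgoals[name] = predicted_outcome

        def choose(self):
            if not self.subgoals:
                return None
            # Pursue whichever explicit subgoal the implicit goal values most.
            return max(self.subgoals,
                       key=lambda n: hidden_ultimate_goal(self.subgoals[n]))

    agent = Agent()
    agent.register_subgoal("finish the quarterly report", {"status": 1})
    agent.register_subgoal("alphabetise the stationery", {})  # never registers
    print(agent.choose())  # -> "finish the quarterly report"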

The next issue is what an increased supply of intelligence would do to the economy.  Though an apostate libertarian, I have continued to hold to the Julian Simon line that “Human inventiveness is the Ultimate Resource”. To doubt that AI will have a revolutionarily beneficial effect is to reject Simon’s claim.

Within this hypothesis, the availability of humanlike (but not superhuman) AI is of only marginal benefit, so Simon is wrong. Then, what is the ultimate resource?

Simon is still closer than his opponents; the ultimate resource (that is, the minimum resource, as per the law of the minimum) is not raw materials or land. But it is not intelligence per se either; it is more the capacity to endure that intelligence within the wider system.

I write conventional business software. What is it I spend my time actually doing? The hard bit certainly isn’t getting the computer to do what I want. With modern programming languages and tools, that’s really easy — once I know what it is I want. There used to be people with the job title “programmer” whose job it was to do that, with separate “analysts” who told them what the computer needed to do, but the programmer was pretty much an obsolete role when I joined the workforce twenty years ago.

Conventional wisdom is that the hard bit is now working out what the computer needs to do — working with users and defining precisely how the computer fits into the wider business process. That certainly is a significant part of my job. But it’s not the hardest or most time-consuming bit.

The biggest part of the job is dealing with errors: testing software before release to try to find them; monitoring it after release to identify them, and repairing the damage they cause. The testing is really hard because the difficult bits of the software interact with multiple outside people and systems, and it’s not possible to fully simulate them. New software can be tested against pale imitations of the real world, and if it’s particularly risky, real users can be reluctantly drafted in to “user acceptance” testing of the software. But all that — simulating the world to test software, having users effectively simulate themselves to test software, and running not-entirely-tested software in the real world with a finger hovering over the kill button — is what takes most of the work.
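
As a concrete illustration of that gap (entirely invented; the service and numbers below are made up, and real test setups are far messier), the usual shape of the problem looks something like this in Python: the code under test depends on an external system, the test substitutes a simplified stand-in, and the expensive, risky part is everything the stand-in fails to imitate.

    # A minimal sketch of the "pale imitation" problem, assuming a made-up
    # rate-lookup service (none of these names correspond to anything real).
    # The production code calls an external system; the test substitutes a
    # simplified stand-in, and everything the stand-in fails to imitate
    # (latency, outages, stale or malformed data) goes untested.

    class RealRateService:
        def latest_rate(self, currency_pair):
            # In reality: a network call that can time out, return stale data,
            # or change format without warning.
            raise NotImplementedError("network call omitted from the sketch")

    class StubRateService:
        """The pale imitation used in tests: always up, always well-formed."""
        def latest_rate(self, currency_pair):
            return {"GBPUSD": 1.58}.get(currency_pair, 1.0)

    def price_order(amount, currency_pair, rate_service):
        rate = rate_service.latest_rate(currency_pair)
        return round(amount * rate, 2)

    def test_price_order():
        # Passes against the stub; proves little about the real service.
        assert price_order(100, "GBPUSD", StubRateService()) == 158.0

    test_price_order()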

This factor is brought out more by the improvements I mentioned in the actual writing of software, but it is by no means new. Fred Brooks wrote in The Mythical Man-Month that if writing a program took n days, integrating it into a system would take 3n days, properly productionising it (so that it would run reliably unsupervised) would take 3n days, and these are cumulative, so that a productionised, integrated version of the program would take something like ten times as long as a stand-alone developer-run version to produce.

Adding more intelligences, natural or artificial, to the system is the same sort of problem. Yes, they can add value. But they can do damage also. Testing of them cannot really be done outside the system, it has to be done by the system itself.

If completely independent systems exist, different ideas can be tried out in them.  But we don’t want those: we want the benefits of the extra intelligence in our system.  A separate “test environment” that doesn’t actually include us is not a very good copy of the “production environment” that does include us.

All this relates to another long-standing issue in our corner of the blogosphere: education, signalling and credentialism. The argument is that the main purpose of higher education is not to improve the abilities of the students, but merely to indicate those students who can first get into and then endure the education system itself. The implication is that there is something very wrong with this. But one way of looking at it is that the major cost is not either producing or preparing intelligent people, but testing and safely integrating them into the system. The signalling in the education system is part of that integration cost.

Back on the Julian Simon question, what that means is that neither population nor raw materials are limiting the growth and advance of civilisation. Rather, civilisation is growing and advancing roughly as fast as it can integrate new members and new ideas. There is no ultimate resource.

It is not an original observation that the things that most hurt our civilisation are self-inflicted. The organisation of mass labour that produced industrialisation also produced the 20th century world wars. The flexible allocation of capital that drove the rapid development of the last quarter century gave us the spectacular misallocations with the results we’re now suffering.

The normal attitude is that these accidents are avoidable; that we can find ways to stop messing up so badly. We can’t.  As the external restrictions on our advance recede, we approach the limit where the benefits of increases in the rate of advance are wiped out by more and more damaging mistakes.

Twentieth-century science-fiction writers recognised at least the catastrophic-risk aspect of this situation. The idea that intelligence is scarce in the universe because it tends to destroy itself is suggested frequently.

SF authors and others emphasised the importance of space travel as a way of diversifying the risk to the species. But even that doesn’t initially provide more than one system into which advances can be integrated; at best it reduces the probability that a catastrophe becomes an extinction event. Even if we did achieve diversity, that wouldn’t help our system to advance faster, unless it encouraged more recklessness — we could take a riskier path, knowing that if we were destroyed other systems could carry on. I’m not sure I want that; it raises the same sort of philosophical questions as duplicating individuals for “backup” purposes. In any case, I don’t think even that recklessness would help: my point is not just that faster development creates catastrophic risk, but that it increases the frequency of more moderate disasters, like the current financial crisis, and so wipes out its own benefits.

Speculations regarding limitations of Artificial Intelligence

An older friend frequently asks me, as a technologist, when computers will have human-like intelligence, and what the social/economic effects of that will be.

I struggle to take the question seriously; AI is something that was dropped as a major research goal around the time I was a student twenty years ago, and it’s not an area I’m well-informed about. As I mentioned in my review of the rebooted “Knight Rider” TV series, a car that could hold up a conversation is a more futuristic idea in 2008 than it was back when David Hasselhoff was doing the driving.

And yet for all that, it’s hard to say what’s really wrong with the layman’s view that since computing power is increasing rapidly, it is an inevitability that whatever the human brain can do in the way of information processing, a computer should be able to do, quite possibly within the next few decades.

But what is “human-like intelligence”?  It seems to me that it is not all that different from what the likes of Google search or Siri do: absorb vast amounts of associations between data items, without really being systematic about what the associations mean or selective about their quality, and apply some statistical algorithm to the associations to pick the most relevant.

There must be more to it than that; for one thing, trained humans can sort of do actual proper logic, about a billion times less well than this netbook can, and there’s a lot of effectively hand-built (i.e. specifically evolved) functionality in some selected pattern-recognition areas. But I think the general-purpose associationist mechanism is the most important from the point of view of building artificial intelligence.
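
For what it’s worth, here is the sort of mechanism I have in mind, reduced to a Python toy (this is a caricature, and certainly nothing to do with how Google or Siri actually work internally): soak up co-occurrences indiscriminately, then answer a query by picking whatever is most strongly associated with it.

    # A toy version of the "associations plus statistics" picture: absorb
    # co-occurrences without being systematic about what they mean, then
    # answer queries by frequency-weighted association. No logic, no model,
    # and it will confidently return wrong answers. Purely illustrative.

    from collections import defaultdict
    from itertools import combinations

    documents = [
        "the cat sat on the mat",
        "the cat chased the mouse",
        "the dog chased the cat",
    ]

    association = defaultdict(lambda: defaultdict(int))
    for doc in documents:
        words = set(doc.split())
        for a, b in combinations(words, 2):
            association[a][b] += 1
            association[b][a] += 1

    def respond(query_word, n=3):
        # Pick the items most strongly associated with the query, nothing more.
        scores = association[query_word]
        return sorted(scores, key=scores.get, reverse=True)[:n]

    print(respond("cat"))    # plausible: ['the', 'chased', ...]
    print(respond("mouse"))  # thin data, but it answers anyway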

If that is true, then a couple of things follow. First, the Google/Siri approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling humanlike ability.

But it also suggests that the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.

Humans can reach conclusions that no logic-based intelligence can get close to, but humans get a lot of stuff wrong nearly all the time. Google Search can do some very impressive things, but it also gets a lot of stuff wrong. That might not change, however much the technology improves.

There are good reasons to suspect that human intelligence is very close to being as good as it can get.

One is that thinking about things longer doesn’t reliably produce better conclusions. That is the point of Malcolm Gladwell’s “Blink” (as far as I understand it; I take Gladwell to be the champion of what Neal Stephenson called “those American books where once you’ve heard the title you don’t even need to read it”).

The next, related, reason is that human intelligence doesn’t scale out very well; having more people think about a problem doesn’t reliably give better answers than having just one do it.

Finally, the fact that, in spite of evolutionary pressure, there is enormous variation in the practical usefulness of human intelligences suggests that making it better is not simply a case of improving the design. If the variation were down to different design, then the better designs would have driven out the worse ones long ago. I think it is far more to do with circumstances, and with the fundamental difficulty of identifying the correct problems to solve.

The major limitation on conventional computing is that it can only do so much per second; only render so many triangles, only price so many positions or simulate so many grid cells. Improving the speed and density of the hardware is pushing back that major limitation.

The major limitation on human intelligence, particularly when it is augmented with computers as it generally is now, is how much it is wrong.  Being faster or bigger doesn’t push back the major limitation unless it can make the intelligence wrong less often, and I don’t think it would.

What I’m saying is that the major cost of human intelligence is not in the scarce resources required to execute the decision-making, but the damage caused by all the bad decisions that humans make.

The major real-world expense in obtaining high-quality human decision-makers is identifying which of the massive surplus available are actually any good.  Being able to supply vastly bigger numbers of AI candidates would not drive that cost down.

Even the specialisms that humans have might be limited more by the cost they impose on the quality of general decision-making than by the cost of actually implementing the capability.

If that’s the situation, then throwing more computing resources at AI-type activity might not change things that much: computers can be as intelligent as humans, but not more intelligent. That’s not nothing, of course: it opens the door to replacing a lot of human activity with automated activity, with all the economic effects that implies.

There will be limitations in application, because if human-like intelligence really is what I think it is, then the goals being sought by an AI are necessarily as vague as everything else: they will be clumps of associations, and the “intelligence” will just do the things that are associated with the goal clump. We won’t be able to “program” it the way we program a logic-based system, just kind of point it in the right direction in the same way we do when we type something into a Google search box.
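
To illustrate what “pointing it in the right direction” might amount to (again a toy in the same spirit as the earlier sketch, with invented data, and not a claim about any real system), a “goal” here is just a clump of associated terms, and the system does whatever is most strongly associated with the clump; we steer it by rephrasing the clump, not by programming it.

    # Continuing the toy: a "goal" is a clump of associated terms, and the
    # system picks whichever candidate action overlaps most with the clump.
    # There is no model of what the goal "means", only association strength.
    # Invented data, purely illustrative.

    goal_clump = {"reduce", "costs", "office"}

    candidate_actions = {
        "renegotiate the office lease": {"office", "lease", "costs", "contract"},
        "hire more staff": {"staff", "hiring", "growth"},
        "turn the heating down": {"office", "heating", "costs", "reduce"},
    }

    def pick_action(goal, actions):
        # Rank actions by overlap with the goal clump and take the strongest.
        return max(actions, key=lambda name: len(actions[name] & goal))

    print(pick_action(goal_clump, candidate_actions))  # -> "turn the heating down"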

I don’t know if what I’ve put here is new: I think the view of what the major issue in intelligence is (“associationism”?) is fairly widespread, but in all previous discussions I’ve seen or participated in, there’s been an assumption that if, x years from now, we have artificial human-like intelligence, then 2x years from now, or probably much less, we will have amazing superhuman artificial intelligence. That is what I am now doubting.

With intelligences available “in the lab” we might be able to prepare and direct them more effectively than we do now. But even that’s not obviously helpful: with human education, again, the limitation is not so much how long it takes and how much work it is, but how sure we are that it is actually doing any good at all. We may be able to give an artificial intelligence the equivalent of a hundred years of university education, but is a person with that experience really going to make better decisions? The things we humans work hardest at learning and doing, accumulating raw information and reasoning logically, are the things that computers are already much better at than we are. The things that only humans can do are the things we simply don’t know how to do better, even if we were to re-implement them on an electronic platform, speeded up, scaled up, scaled out.

Note that all the above is the product of making statistical guesses using masses of ill-understood unreliable associations, and is very likely to be wrong.

(Further thoughts: Relevance of AI)