"End-to-end encryption"

The question of regulating encrypted communication has come up again. I was going to write, once again, about how politicians don’t understand the technologies. They probably don’t, but even if they did, what would they do about it? The details are too complex to debate on TV news. What percentage of the viewing public even knows what public-key encryption is?

Politicians often talk as if “end-to-end encryption” is a technology, and one which is rare and might practically be banned. There are then huge arguments about whether such banning would be good or bad, which leave me somewhat bemused.

Of course, “end-to-end encryption” is no more a technology than “driving to a friend’s house” is a technology. Cars, roads and driving are technologies; driving to a friend’s house, or to a restaurant, or to work, is a social or economic practice that makes use of the technology.

Similarly, sending encrypted messages is a technology. Sending “end-to-end” encrypted messages is not a technology; it’s just sending encrypted messages to an intended end recipient. Whether a particular message is “end-to-end” encrypted depends on who the end is.

The soundbites talk about one kind of messaging: messages sent person-to-person from a sender to a recipient via a service provider like Whatsapp, Microsoft or Google.

In 2017, most data sent over the internet that is at all personal is encrypted. Huge efforts have been made over the last five or so years to get to this stage, yet the debates about encryption have not even touched on the fact. Data in motion seems to be invisible. The encryption used to send the messages is very strong; again, a few years ago, there were quite a few bugs in commonly used implementations, but efforts have been made to find and fix such bugs, and while there are likely to be some left, it is plausible that nearly all such encrypted messages are unbreakable even by the most powerful national security organisations.

However, the way most of these services work today is that the sender makes a connection to the Service Provider and authenticates themselves with a password. The Service Provider also authenticates itself to the sender with a certificate, though that’s mostly invisible. The sender then sends their message, encrypted, to the Service Provider, which decrypts it and stores it. Later (or simultaneously) the recipient makes a connection to the Service Provider in the same way, and the Service Provider encrypts the message and sends it to the recipient. This is fundamentally the same whether we are talking about messaging apps, chat, or email, and whether the devices used are computers, phones or tablets.

Anyway, call this Method 1: Service Provider Mediated.
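
To make the mechanics concrete, here is a minimal sketch of that pattern in Python, using only the standard library. The hostname, port and line-based commands are invented for illustration; real services run their own protocols over the same TLS transport.

```python
import socket
import ssl

# Hypothetical provider endpoint; real services use their own hosts and protocols.
PROVIDER_HOST = "messages.example-provider.com"
PROVIDER_PORT = 443

context = ssl.create_default_context()  # verifies the provider's certificate

with socket.create_connection((PROVIDER_HOST, PROVIDER_PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=PROVIDER_HOST) as tls:
        # The link to the provider is encrypted (TLS), but the provider itself
        # sees the plaintext once it decrypts: it is one of the "ends" here.
        tls.sendall(b"AUTH alice hunter2\n")
        tls.sendall(b"SEND bob Hello Bob, see you at eight\n")
```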

A few of these services now have an extra feature. The sender’s app first encrypts the message in a way that can only be decrypted by the recipient, then encrypts it again to send to the Service Provider. The Service Provider decrypts one level of encryption, but not the second. When the recipient connects, the Service Provider re-encrypts the already encrypted message and sends it to the recipient. The recipient decrypts the message twice, once to get what the Service Provider had stored, and then again to get what the sender originally wrote.

That is why the politicians are talking about Whatsapp, Telegram and so on.

This is Method 2: Service Provider Mediated, with provided end-to-end encryption.
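
A toy illustration of the layering, using the Fernet recipe from the pyca/cryptography library for both layers. Real end-to-end messengers use asymmetric keys and ratcheting protocols rather than shared Fernet keys, so this is a sketch of the structure only, not of any actual service.

```python
from cryptography.fernet import Fernet

# Inner key: shared (somehow) between sender and recipient only.
recipient_key = Fernet.generate_key()
# Outer key: shared between sender and Service Provider (in practice this is TLS).
transport_key = Fernet.generate_key()

# Sender: encrypt for the recipient, then again for transport to the provider.
inner = Fernet(recipient_key).encrypt(b"meet at eight")
outer = Fernet(transport_key).encrypt(inner)

# Service Provider: can strip the transport layer, but what it stores is still
# ciphertext it cannot read (it would re-encrypt this for delivery later).
stored = Fernet(transport_key).decrypt(outer)
assert stored == inner  # opaque to the provider without recipient_key

# Recipient: removes the final layer.
plaintext = Fernet(recipient_key).decrypt(stored)
assert plaintext == b"meet at eight"
```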

An important question here is who keeps track of the encryption keys. If the Service Provider has that responsibility, then it can support interception by giving the sender the wrong encryption key: one that it, or the government, can reverse. If the sender keeps the recipient’s encryption key, that is not possible; the Service Provider receives no messages that it is able to decrypt.

Going back to method 1, if the Service Provider doesn’t provide the end-to-end encryption, it’s still possible to add it with special software for the sender and recipient. This is awkward for the users and has never caught on in a big way, but it’s the method that the authorities used to worry about, decades back.

Method 3: Service Provider Mediated with independent end-to-end encryption.

There are plenty more variations. In the next one, the sender connects to the Service Provider and indicates, via an encrypted message, which recipient they want to message. The Service Provider replies with an endpoint that the sender can connect to. The sender then connects directly to the recipient and transmits an encrypted message, which the recipient decrypts.
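
A rough sketch of that “look up the endpoint, then connect directly” flow. The directory host and its little JSON protocol are invented for illustration; systems such as WebRTC do the same job with a signalling server and ICE.

```python
import json
import socket
import ssl

context = ssl.create_default_context()

# Step 1: ask the Service Provider (hypothetical host and protocol) where the recipient is.
with socket.create_connection(("directory.example-provider.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="directory.example-provider.com") as tls:
        tls.sendall(b'{"lookup": "bob"}\n')
        endpoint = json.loads(tls.recv(4096))  # e.g. {"host": "203.0.113.7", "port": 5222}

# Step 2: connect directly to the recipient and send the (separately encrypted) message.
with socket.create_connection((endpoint["host"], endpoint["port"])) as peer:
    peer.sendall(b"<ciphertext prepared for bob>")
```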

This peer-to-peer messaging isn’t fundamentally different in technology from the end-to-end encrypted scenario. In both cases the actual networking is “store-and-forward”: an intermediary receives data, stores it, and then transmits it to either another intermediary or the recipient. The only difference is how long the data is stored for; a typical router will store the data for only a fraction of a second before transmitting and deleting it, whereas a Service Provider’s application server will store it at least until the recipient connects to retrieve it, and quite likely will archive it permanently. (Note there are regulations in some jurisdictions that require Service Providers to archive it permanently, but that applies to their application servers and not to routers, which handle orders of magnitude more data, most of which is transient.)

It’s not always obvious to the user whether a real-time connection is mediated or not. Skype calls were originally peer-to-peer, and Microsoft changed it to mediated after they bought Skype. The general assumption is that this was at the behest of the NSA to enable interception, though I’ve not seen any definitive evidence.

Another thing about this kind of service is that the Service Provider does not need nearly as many resources as one that actually receives all the messages its users send. There could be a thousand different P2P services, in any jurisdiction. With WebRTC now built into browsers, it’s easy to set one up.

Method 4: Service Provider directed peer-to-peer.

It’s not actually hard to be your own Service Provider. The sender can put the message on his own server, and the recipient can connect to the sender’s server to receive it. Or, the sender can connect to the recipient’s server, and send the message to that. In either case, the transmission of the messages (and it’s only one transmission over the public internet, not two as in the previous cases) will be encrypted.

As with method 2, the server might manage the encryption keys for its users, or the user’s app might retain encryption keys for the correspondents it has in its directory.

The software is all free and common. Creating a service requires a little knowledge, but not real expertise. I estimate it would take me 90 minutes and cost £10 to set up a publicly-accessible email, forum and/or instant messaging service, using software that has been widespread for many years, and that uses the same secure encryption that everything else on the internet uses. Whether this counts as “end to end encryption” depends entirely on what you count as an “end”.  If I want the server to be in my house instead of a cloud data centre in the country of my choice, it might cost me £50 instead of £10, and it’s likely to have a bit more downtime. That surely would make it “end-to-end”, at least for messages for which I am either the sender or the recipient.
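
As an indication of how little machinery an owned server needs, here is a toy message drop in standard-library Python. It is not hardened, and in any real deployment you would put it behind TLS (for example with a reverse proxy and a Let’s Encrypt certificate), but it is the whole of the store-and-forward function described above.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

MAILBOX = []  # messages waiting for the recipient, held only on this server


class DropHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Sender connects and leaves a message.
        length = int(self.headers.get("Content-Length", 0))
        MAILBOX.append(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()

    def do_GET(self):
        # Recipient connects and collects whatever is waiting.
        body = b"\n".join(MAILBOX)
        MAILBOX.clear()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DropHandler).serve_forever()
```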

This is getting easier and more common, as internet speeds improve, connected devices proliferate, and distrust of the online giants’ commercial surveillance practices grows. There have been one or two “server in a box” products offered which you can just buy and plug in to get this kind of service — so far they have been dodgy, but there is no technical barrier to making them much better. Even if such a server is intended and marketed simply as a personal backup/archive solution, it is nevertheless in practice a completely functional messaging platform. The difference between an application that saves your phone photos to your backup drive and a full chat application is just a little bit of UI decoration, and so software like owncloud designed to do the first just throws in the second because it’s trivial.

That is Method 5: Owned server.

There are several variants covered there. The user’s own server might be on their own premises, or might be rented from a cloud provider. If rented, it might be a physical machine or a virtual machine. The messages might be encrypted with a key owned by the recipient, or encrypted with a key configured for the service, or both, or neither. Whether owned or rented, the server might be in the same country as the user, or a different country. Each of these makes a significant difference from the point of view of an investigating agency wanting to read the messages.

Investigating authorities aren’t only concerned with encryption, though; they also want to know who is sending or receiving a message, even if they can’t read it. This could make the politicians’ opposition to mediated end-to-end encryption more reasonable: the Service Providers allow users to connect to their servers more or less anonymously. Using peer-to-peer or personal cloud services, the data is secure but the identity of the recipients of messages is generally easier to trace. The Service Providers give the users that the authorities are interested in a crowd of ordinary people to hide among.

It’s easy to sneer at Amber Rudd, but can you imagine trying to describe a policy on this in a TV interview, or in the House of Commons? Note I’ve skipped over some subtle questions.

Even if you could, you probably wouldn’t want to. Why spell out, “We want to get cooperation from Facebook to give us messages, but we’re not stupid, we know that if the terrorists buy a £100 off-the-shelf NAS box and use that to handle their messages, that won’t help us”?

Summary: kinds of messaging practice

Service Provider mediated non-end-to-end

Data accessible to authorities: with co-operation of Service Provider
Identity accessible to authorities: IP addresses obtainable with co-operation of Service Provider but can be obscured by onion routing / using public wifi etc
User convenience: very convenient

Service Provider mediated end-to-end

Data accessible to authorities: No
Identity accessible to authorities: IP addresses obtainable with co-operation of Service Provider but can be obscured by onion routing / using public wifi etc
User convenience: very convenient

End-to-end layered over Service Provider (e.g. PGP mail)

Data accessible to authorities: No
Identity accessible to authorities: IP addresses obtainable with co-operation of Service Provider but can be obscured by onion routing / using public wifi etc
User convenience: very inconvenient, all users must use special software, do key management

Peer-to-peer

Data accessible to authorities: No
Identity accessible to authorities: IP addresses directly accessible by surveillance at either endpoint or at ISP
User convenience: fiddly to use, need to manage directories of some kind

Personal Internet Service (Hosted)

Data accessible to authorities: With the cooperation of the host, which could be in any country
Identity accessible to authorities: IP addresses directly accessible by surveillance at either endpoint or at ISP
User convenience: Significant up-front work required by one party, but very easy to use by all others. Getting more convenient.

Personal Internet Service (on-site)

Data accessible to authorities: If they physically seize the computer
Identity accessible to authorities: IP addresses directly accessible by surveillance at either endpoint or at ISP
User convenience: Significant up-front work required by one party, but very easy to use by all others. Getting more convenient.

Appendix: Things I can think of but have skipped over to simplify
  • Disk encryption — keys stored or provided from outside at boot
  • Certificate spoofing, certificate pinning
  • Client applications versus web applications 
  • Hostile software updates
  • Accessing data on virtual servers through hypervisor

Revisiting the Program

Alrenous has played the Thesis 11 card:

Alrenous @Alrenous
Finally, if you’re really confident in your philosophy, it should move you to action. Or why bother?
You moved to China. Good work.

Edit: I totally misread Alrenous here: he’s not saying “Change the world”, he’s saying “change your own life/environment”. So the below, while still, in my view, true and important, is not particularly relevant to his point. Oh well.

He makes a valid point that good knowledge cannot be achieved without trying things:

Alrenous @Alrenous
Have to be willing to fail to do something new. Something new is patently necessary. NRx isn’t willing to fail. That’s embarrassing.

The problem with this is that neoreaction is the science of sovereignty. As with, say, the science of black holes, it is not really possible for a researcher with modest resources to proceed by experiment, valuable though that would be.

We have ideas on how to use and retain sovereignty, but less to say about how to achieve it. There is a great deal of prior art on how to gain power via elections, guerrilla warfare, coup d’état, infiltration; we don’t really have much of relevance to add to it.

We could do experiments in this area, by forming a political party or a guerrilla army or whatever, but that’s a long way from our core expertise, and though we would like to experiment with sovereignty, attempting to get sovereignty over the United States to enable our experiments is possibly over-ambitious. We could hope to gain some small share of power, but we believe that a share of power is no good unless it can be consolidated into sovereignty.

Given that we do not have special knowledge of achieving power, it seems reasonable that we should produce theory of how power should be used, and someone better-placed to get power and turn it into sovereignty should run their military coup or whatever, and then take our advice. That’s what we care about, even if cool uniforms would be better for getting chicks.

I put this forward as a goal in 2012. 

This is an ambitious project, but I think it is genuinely a feasible route to implementing our principles. Marxism’s successes in the 20th Century didn’t come because its theories were overwhelmingly persuasive; they came because Marxism had theories and nobody else did.

Since then, we have seen Steve Bannon, who apparently has at least read about and understood Moldbug, in a position of significant power in the Trump administration. We have seen Peter Thiel also with some kind of influence, also with at least sympathies towards NRx. These are not achievements in the sense that in themselves they make anything better. But they are experimental validations of the strategy of building a body of theory and waiting for others to consume it.

I have for the last few days been suggesting that Mark Zuckerberg could win the presidency as a moderate technocrat who will save the country from Trump and the Alt-Right Nazis, consolidate power beyond constitutional limits, as FDR did, and reorganise FedGov along the lines of Facebook Inc. This outcome is, frankly, not highly probable, but I insist that it is not absurd. One of the things that controls the possibility of this sort of outcome is whether people in positions of influence think it would be a good thing or a bad thing. If, with our current level of intellectual product we can get to the point of 2017 Bannon, is it not plausible that with much more product, of higher quality, much more widely known and somewhat more respectable, the environment in DC (or London or Paris) could be suitable for this sort of historically unremarkable development to be allowed to happen?

This, presumably, is the strategy the Hestia guys are pursuing with Social Matter and Jacobite, and I think it is the right one. We are at a very early stage, and we have a long way to go before a smooth takeover of the United States would be likely, though in the event of some exceptional crisis or collapse, even our immature ideas might have their day. But we do have experimental feedback of the spread of our ideas to people of intelligence and influence: if we had ten Ross Douthats, and ten Ed Wests, and ten Peter Thiels, discussing the same ideas and putting them into the mainstream, we would have visible progress towards achieving our goals.

Trophic Cascade

I’ve been blogging for 13 years, and my first post was about Islam in Europe:

I believed then that the danger of Islam was exaggerated by people I normally agreed with, such as Eric Raymond.

I’ve changed my view on many things since then, from being a by-the-book Libertarian to something I had to find a new name for.

Only one thing that I wrote back then is definitely now not true:
The Muslim immigrants to Britain are integrating slowly into British culture.

This 2005 piece by me comes off looking especially bad now:

This does not mean that Islam is dying out, just that, like Christianity, it is evolving into a form that makes less conflict with the practicalities of living in a developed society. I expect that in a hundred years Moslems will continue to recite the Koran and observe Ramadan, but what I am calling the “primitive” elements — intolerance of Western practices of commerce, sexual behaviour, freedom of expression, whatever — will have died out.

Among Moslems in the West, as well as the more Westernised Moslem countries like Turkey, this is already the case for the majority. And this is why the “primitives” are angry.

File that under “overtaken by events.” I did say then that it was more important for the West to be seen to win in Iraq than to achieve anything concrete, so maybe if that had been done then things would look different today. Perhaps what I predicted was at that time still possible, but whether I was wrong about that or not, the reality today is utterly different. It is moderate Islam that is declining, globally, not Islamism.

“Integration” is now going backwards. Possibly that had already begun in 2004 and I hadn’t noticed, but I suspect it is something new.

Many of my online homies say that “moderate Islam” is a myth or mirage — that the history of Islam shows that it is inherently and inevitably violent and expansionist. Pitched against liberals who say that Christianity has an equally violent and aggressive history, they certainly have the better of their argument. But while the leftists are ignoring everything before the 1800s, the rightists are ignoring everything since. There was very little Islamist violence in the 20th Century. The Partition of India was a free-for-all. The major Islamic states, Egypt and Turkey, were secular socialist-nationalist in character.

Contrary to my previous assertions, the situation is getting worse, not better, but it is still noticeable that Islamist terrorists in Britain are not, in their national origins, representative of Britain’s Muslim population. The ringleader of the 2005 train bombers was from a typical British-Pakistani background, but most of the others have come from Africa or the Middle East. Even Butt seems atypical since he came to the country as a refugee — most British Pakistanis did not come as refugees, but as Commonwealth migrants back in the 70s and families thereafter. Britain has been granting asylum to very few Pakistanis — 77 in the last quarter [pdf].

Pakistani immigration was encouraged for economic reasons up until 1971, and since then it has been family-based. However, their numbers have increased tenfold over those 45 years, from 120,000 to 1.2 million. That’s plausible as bringing in existing family members plus marrying more and having two generations of children, but it’s towards the high end of what you would estimate. If there’s another significant contributor to that tenfold expansion I don’t know what it is. 
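
As a rough sanity check on that claim (my arithmetic, not a demographic model): a tenfold increase over 45 years works out at an average compound growth rate of a little over 5% a year.

```python
# Rough check: what annual growth rate turns 120,000 into 1.2 million over 45 years?
start, end, years = 120_000, 1_200_000, 45
annual_growth = (end / start) ** (1 / years) - 1
print(f"{annual_growth:.1%} per year")  # about 5.3%
```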

Striking as those numbers are, my point is that those “normal British Pakistanis” are not the Islamic terrorists in Britain. They really are the “moderate Muslims” that are alleged not to exist (the child prostitution gangs such as the Rotherham one, on the other hand, are exactly from that typical background, one reason why I see that as a totally separate issue). My biggest worry is that by adding significant numbers of African and Middle Eastern jihadis into the mix, the whole British Pakistani culture could be shifted. The Muslim population of Britain doubled between 2005 and 2015 (per Ed West) and the non-Pakistani Muslim population was probably multiplied several times. This was the effect of the “rubbing noses in diversity” — the Labour government changing the demographics of the country not even out of strategy but out of vulgar spite. That was a development I failed to imagine.

Waiting for Islam to become more moderate is no longer on the table. Forcing Islam to become more moderate is, I believe, thoroughly achievable with sensible policies. The fundamental is for law and society to be at least as tough on expression of tribalism from Muslims as they are on expression of tribalism from natives. This is currently very far from the case. I try to stay out of day-to-day politics, so when I retweet other right-wingers, it’s usually because they’re highlighting this disparity:

Twitter Moment

The other side of that is this story: In Germany, Syrians find mosques too conservative

Mosques in Western countries are now more extremist than those elsewhere in the world. This is a straightforward holiness spiral — within a community, you can gain status by professing stronger allegiance to that community’s symbols than anyone else does. In a functioning community, this tendency is moderated by the practical demands of society. But even the large, stable Pakistani communities in Britain are not truly functional — they are subsidised and supported by the wider society.

The wider society — the liberal West — is deeply opposed to putting any restraint whatsoever on the puritanism growing within the community. They are like the naive conservationists of the past who believed that by keeping out all predators they were allowing an ecosystem to flourish naturally, when in fact they were unbalancing it towards a destructive tipping point. It is natural and universal for religious extremism to come into conflict with its neighbours and be pushed back by them.

Basically, what I’m saying is that Tommy Robinson is a natural predator, and by suppressing him, liberal society is producing a Trophic Cascade in the extremist ecosystem.

It’s not only in a minority community that this mechanism should happen. I asked on Twitter, is there any Islamic country where the mosques are not subject to state supervision of doctrine? In majority Islamic communities, the pushback in favour of practicality comes from the state. Again, a liberal Western state disclaims any responsibility for pushing back on Islam, though it is a job that I understand most Islamic states consider necessary.

Update: It should go without saying that continuing to increase the Muslim population is also destabilising. As well as increasing the imbalance, in itself it is a sign of weakness which makes extremism more attractive and moderation less attractive. I am not saying any more than that it is not (yet) necessary to undertake more drastic measures such as mass deportations of long-standing residents. Since the continued importation of Muslims is the same political process as the active protection of extremism from its natural opposition, ending one means also ending the other.

Democracy and Hacking

The New York Times has published a long analysis of the effects of the hacking of Democratic Party organisations and operatives in the 2016 election campaign.

The article is obviously trying to present a balanced view, eschewing the “OMG we are at war with Russia” hyperbole and questioning the value of different pieces of evidence. It does slip here and there, for instance jumping from the involvement of “a team linked to the Russian government” (for which there is considerable evidence) to “directed from the Kremlin” without justification.

The evidence that the hackers who penetrated the DNC systems and John Podesta’s email account are linked to the Russian Government is that the same tools were used as have been used in other pro-Russian actions in the past.

(Update 4th Jan 2017: that is a bit vague; infosec regular @pwnallthethings goes into very clear detail in a twitter thread.)

One important consideration is the sort of people who do this kind of thing. Being able to hack systems requires some talent, but not any weird Hollywood-esque genius. It also takes a lot of experience, which goes out of date quite quickly. Mostly, the people who have the talent and experience are the people who have done it for fun.

Those people are difficult to recruit into military or intelligence organisations. They tend not to get on well with concepts such as wearing uniforms, turning up on time, or passing drug tests.

It is possible in theory to bypass the enthusiasts and have more professional people learn the techniques. One problem is that becoming skilled requires practice, and that generally means practice on innocent victims. More significantly, the first step in any action is to work through cut-out computers to avoid being traced, and those cut-outs are also hacked computers belonging to random victims. That’s the way casual hackers, spammers and other computer criminals work, and espionage hackers have to use the same techniques. They have to be doing it all the time, to keep a base of operations, and to keep their techniques up to date.

For all these reasons, it makes much more sense for state agencies to stay at arm’s length from the actual hackers. The agencies will know about the hackers, maybe fund them indirectly, cover for them, and make suggestions, but there won’t be any official chain of command.

So the hackers who got the data from the DNC were probably somewhat associated with the Russian Government (though a comprehensive multi-year deception by another organisation deliberately appearing to be Russian is not completely out of the question).

They may have had explicit (albeit off-the-record) instructions, but that’s not necessary. As the New York Times itself observed, Russia has generally been very alarmed by Hillary Clinton for years. The group would have known to oppose her candidacy without being told.

“It was conventional wisdom… that Mrs. Clinton considered her husband’s efforts to reform Russia in the 1990s an unfinished project, and that she would seek to finish it by encouraging grass-roots efforts that would culminate with regime change.”

Dealing with the product is another matter. It might well have gone to a Russian intelligence agency, either under an agreement with the hackers or ad-hoc from a “concerned citizen”: you would assume they would want to see anything and everything of this kind that they could get. While hacking is best treated as deniable criminal activity, it would be much more valuable to agencies to have close control over the timing and content of releases of data.

So I actually agree with the legacy media that the extraction and publication of Democratic emails was probably a Russian intelligence operation. There is a significant possibility it was not, but was done by some Russians independent of government, and a remote possibility it was someone completely unrelated who has a practice of deliberately leaving false clues implicating Russia.

I’ve often said that the real power of the media is not the events that they report but the context to the events that they imply. Governments spying on each other is completely normal. Governments spying on foreign political movements is completely normal. Governments attempting to influence foreign elections by leaking intelligence is completely normal. Points to Nydwracu for finding this by William Safire:

“The shrewd Khrushchev came away from his personal duel of words with Nixon persuaded that the advocate of capitalism was not just tough-minded but strong-willed; he later said that he did all he could to bring about Nixon’s defeat in his 1960 presidential campaign.”

The major restraint on interference in foreign elections is generally the danger that if the candidate you back loses then you’ve substantially damaged your own relations with the winner. The really newsworthy aspect of all this is that the Russians had such a negative view of Clinton that they thought this wouldn’t make things any worse. It’s been reported that the Duma broke into applause when the election result was announced.

The other thing that isn’t normal is a complete public dump of an organisation’s emails. That’s not normal because it’s a new possibility, one that people generally haven’t begun to get their heads around. I was immediately struck by the immense power of such an attack the first time I saw it, in early 2011. No organisation can survive it: this is an outstanding item that has to be solved. I wouldn’t rule out a new recommended practice to destroy all email after a number of weeks, forcing conversation histories to be boiled down to more sterile and formal documents that are far less potentially damaging if leaked.

It is just about possible for an organisation to adequately secure its corporate data, but that’s both a technical problem and a management problem. However, the first impression you get of the DNC is one of amateurism. That of course is not a surprise. As I’ve observed before, if you consider political parties to be an important part of the system of government, their lack of funding and resources is amazing, even if American politics is better-funded than British. That the DNC were told they had been hacked and didn’t do anything about it is still shocking. Since 2011, this is something that any organisation sensitive to image should be living in fear of.

This is basically evidence-free speculation, but it seems possible that the Democratic side is deficient in actual organisation builders: the kind of person who will set up systems, make rules, and get a team of people to work together. A combination of fixation on principles rather than practical action, and on diversity and “representativeness” over extraordinary competence meant that the campaign didn’t have the equivalent of a Jared Kushner to move in, set up an effective organisation and get it working.

Or possibly the problem is more one of history: the DNC is not a political campaign set up to achieve a task, but a permanent bureaucracy bogged down by inferior personnel and a history of institutional compromises.  Organisations become inefficient naturally.

Possibly Trump in contrast benefited from his estrangement from the Republican party establishment, since it meant he did not have legacy organisations to leak his secrets and undermine his campaign’s efficiency. He had a Manhattan Project, not an ITER.

The task of building–or rebuilding–an organisation is one that few people are suited to. Slotting into an existing structure is very much easier. Clinton’s supporters particularly are liable to have the attitude that a job is something you are given, rather than something you make. Kushner and Brad Parscale seem to stand out as people who have the capability of making a path rather than following one. As an aside, Obama seems to have had such people also, but Clinton may have lacked them. Peter Thiel described Kushner as “the Chief Operating Officer” of Trump’s campaign. Maybe the real estate business that Trump and Kushner are in, which consists more of separate from-scratch projects than most other businesses, orients them particularly to that style.

Actually Existing Capitalism

Something that’s cropped up a few times with recent discussion of neocameralism as a concept is the role of shareholders in existing firms.

Conflicts of interest between principals and agents are one of the most significant forces acting on the structure of any kind of organisation, so it is essential, when discussing how to apply structures from one kind of organisation to another, to have a feel for how the conflicts are playing out in existing structures and organisations.

In particular, I have seen more than one person on twitter put forward the idea that present-day joint-stock companies totally fail to resolve the conflict of interest between shareholders and managers, with the result that shareholders are powerless and managers run companies purely in their own interest:

In discussion of this piece by Ron Carrier from November 24th the author said on twitter,

“Because they are non-contractual, shares are a useful way of financing a company without ceding control…. Contrary to shareholder theory, power in the corporation is actually located in mgmt. and the board of directors.”

More recently (December 9th), Alrenous followed the same path: from the suggestion that dividend payments from public companies are in aggregate very low, he draws the conclusion that stocks are “worthless” and that those who buy them are effectively just giving their money away for managers to do what they want with.

I’m sure Alrenous understands that the theory is that a profitable company can be delivering value to shareholders by reinvesting its profits and becoming a more valuable company, capable of returning larger amounts of cash in future. And of course I understand that just because someone believes that a company has become more valuable in consequence of reinvested profits, doesn’t mean it is necessarily true.

Discussions like this among people not involved with investment professionally carry a risk of being based on factoids or rumour. In particular, mainstream journalists are fantastically ignorant of the whole subject. But in the end everything to do with public companies is actually public, if you can find the information and not misunderstand it. (Note that I am not including myself among the professionals, though I’ve worked with them in the past in an IT role).

At any rate, here is a publication dealing with aggregate dividends for the S&P 500: factset.com

“Aggregate quarterly dividends for the S&P 500 amounted to $105.8 billion in the second quarter, which represented a 0.8% increase year-over-year. The dividend total in Q2 marked the second largest quarterly dividend amount in at least ten years (after Q1 2016). The total dividend payout for the trailing twelve months ending in Q2 amounted to $427.5 billion, which was a 7.1% increase from the same time period a year ago.”

So, that’s getting on for half a trillion dollars in dividends paid out by the S&P 500 over the last year. Throwing numbers around without any indication of scale is another media trope, but that’s about 2-3% of US GDP, which seems like the right sort of scale.
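
The scale check is simple arithmetic; I am taking US GDP for the period to be roughly $18.5 trillion, which is an approximation but close enough for a sense of scale.

```python
# Trailing-twelve-month S&P 500 dividends (from the factset figures quoted above).
dividends = 427.5e9
us_gdp = 18.5e12  # approximate US GDP for the period; an assumption, for scale only

print(f"{dividends / us_gdp:.1%}")  # roughly 2.3% of GDP
```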

As an aside, if some of these companies hold shares in others, the dividends are effectively double-counted: one company in the set is paying out to another, which may or may not then be paying out to its shareholders. I would assume this is not more than a few percent of the total—even investment companies like Berkshire Hathaway are likely to invest more in private companies than other S&P 500 members—but it’s an indication of the pitfalls available in this sort of analysis.

In addition to dividends, as I pointed out, share buybacks—where a company purchases its own shares on the open market—are economically equivalent to dividends: the company is giving cash to its own shareholders. If every shareholder sells an equal proportion of their holdings back to the company, then the result is that each shareholder continues to hold the same fraction of the company’s outstanding shares, and each has been paid cash by the company. Of course, some will sell and some not, but the aggregate effect is the same. The choice of whether to take cash by selling a proportion of one’s holding, or whether to simply hold shares, thereby effectively increasing one’s holding as a fraction of the company, enables shareholders to minimise their tax liability more efficiently, which is apparently why share buybacks have become more significant compared to dividends.
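
A small worked example of that equivalence, with invented numbers: a dividend and a pro-rata buyback at fair value leave a given shareholder with the same cash and the same fraction of the (now smaller) company.

```python
# Toy numbers (invented) showing dividend vs pro-rata buyback equivalence.
company_value, shares, payout = 1_000.0, 100, 100.0
my_shares = 5                                  # I own 5% of the company

# Dividend: cash per share; the firm is worth `payout` less afterwards.
dividend_cash = payout * my_shares / shares    # 5.0
dividend_stake = my_shares / shares            # still 5% of a 900.0 company

# Buyback at fair value: the firm retires payout/price shares; I sell pro-rata.
price = company_value / shares                 # 10.0
retired = payout / price                       # 10 shares bought back
sold = my_shares * retired / shares            # 0.5 of my shares
buyback_cash = sold * price                    # 5.0 -- same cash as the dividend
buyback_stake = (my_shares - sold) / (shares - retired)   # 4.5 / 90 = 5%

assert abs(dividend_cash - buyback_cash) < 1e-9
assert abs(dividend_stake - buyback_stake) < 1e-9
```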

Alrenous found this article from Reuters, which says “In the most recent reporting year, share purchases reached a record $520 billion.”. That’s not the same period as the one I found for aggregate dividends, so adding them together might be a bit off, but it looks like we can roughly double that 3% of GDP. As I said on twitter, as a general rule, large companies are making profits and paying shareholders.

The reason neocameralism makes sense is that joint-stock companies basically work.

That is not to suggest that the principal-agent conflicts are insignificant. They are always significant, and managing the problem is a large part of any organisational practice. That is what the bulk of corporate law is there to deal with.

I picked up a recent article in Investor’s Chronicle in which Chris Dillow suggests that management is simply overpaid:

“…bosses plunder directly from shareholders by extracting big wages for themselves. The High Pay Centre estimates that CEOs are now paid 150 times the salary of the average worker, a ratio that has tripled since the 1990s – an increase which, it says, can’t be justified by increased management efficiency.”

However, Dillow also links other sources with other suggestions: the 1989 Harvard Business Review article by Michael Jensen is particularly fascinating.

Jensen claims that regulation brought in after the Great Depression had the effect of limiting the control of shareholders over management:

“These laws and regulations—including the Glass-Steagall Banking Act of 1933, the Securities Act of 1933, the Securities Exchange Act of 1934, the Chandler Bankruptcy Revision Act of 1938, and the Investment Company Act of 1940—may have once had their place. But they also created an intricate web of restrictions on company ‘insiders’ (corporate officers, directors, or investors with more than a 10% ownership interest), restrictions on bank involvement in corporate reorganizations, court precedents, and business practices that raised the cost of being an active investor. Their long-term effect has been to insulate management from effective monitoring and to set the stage for the eclipse of the public corporation.

“…The absence of effective monitoring led to such large inefficiencies that the new generation of active investors arose to recapture the lost value. These investors overcome the costs of the outmoded legal constraints by purchasing entire companies—and using debt and high equity ownership to force effective self-monitoring.”

A quarter of a century on from Jensen’s paper, the leveraged buyout looks not so much like an alternative form of organisation for a business, but rather an extra control mechanism available to shareholders of a public joint-stock company. The aim of a buyout today is, as Jensen describes, to replace inefficient management and change the firm’s strategy, but there is now normally an exit strategy: the plan is that having done those things the company will be refloated with new management and a new strategy.

The “Leveraged” of LBO obviously refers to debt: that takes us to the question of debt-to-equity ratio. A firm needs capital: it can raise that from shareholders or from lenders. If all its capital is shareholders’, that limits the rate of profit it can offer them: the shares become less volatile. If the firm raises some of its capital needs from lenders, the shares become riskier but potentially more profitable.

Under the theory of the Capital Asset Pricing Model (CAPM), the choice is arbitrary: leverage can be applied by the shareholders just as by the company itself. Buying shares on margin of a company without debt is equivalent to buying shares of a leveraged company for cash. However, this equivalency is disrupted by transaction costs, and also by tax law.
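
A toy version of that equivalence, with invented numbers and ignoring tax, transaction costs and bankruptcy risk, which is exactly where it breaks down in practice. The point is that, written out, the investor’s outlay and payoff are the same formula whichever side of the balance sheet the debt sits on.

```python
# Homemade leverage (toy numbers, my illustration): investor-side margin
# replicates firm-side debt, absent tax, transaction costs and bankruptcy.
assets, r_debt = 1_000.0, 0.04
debt = 500.0  # half the firm's value, whether the firm borrows it or I do

for asset_return in (-0.10, 0.0, 0.10):
    end_assets = assets * (1 + asset_return)

    # (a) Unlevered firm: its equity costs 1000; I pay 500 cash and borrow 500.
    cash_in_a = assets - debt
    payoff_a = end_assets - debt * (1 + r_debt)   # I repay my own loan

    # (b) Levered firm: it owes 500, so its equity costs 500; I pay cash.
    cash_in_b = assets - debt
    payoff_b = end_assets - debt * (1 + r_debt)   # the firm repays its loan

    assert cash_in_a == cash_in_b and abs(payoff_a - payoff_b) < 1e-12
```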

There is considerable demand in the market for safe fixed-income investments. A large profitable company is exceptionally well-placed to meet that demand by issuing bonds or borrowing from banks, and therefore can probably do so much more efficiently than its shareholders would be able to individually, were it to hold its cash and leave shareholders to borrow against the more expensive shares.

The transaction costs the other way, the ones caused by corporate indebtedness, come through bankruptcy. Bankruptcy is essential to capitalism, but it involves a lot of expensive lawyers, and can be disruptive. For an extreme example, see the Hanjin Shipping case in September. It’s clearly in the interest of the owners of the cargo to get the cargo unloaded, but the international complications of the bankruptcy of the shipping line means that it’s unclear who is going to end up paying for the docking and unloading. If Hanjin had a capital structure that gave it spare cash instead of debt, all this expensive inconvenience would be avoided.

Aside from transaction costs, the argument in Jensen’s paper is that the management of a company with spare cash is better able to conceal the company’s activities from shareholders. In his account, once the company has been bought out and restructured with debt, any expansion in the cost base has to be directly justified to shareholders and creditors, since capital will have to be raised to pay for it. This improvement in the monitoring of the management is part of what produces the increased value (in his 1980s figures, the average LBO price was 50% above the previous market value).

A quarter of a century later, we frequently read the opposite criticism, that pressure from investors makes management too focused on short-term share prices, which is a bad thing. I linked this article by Lynn Stout, and while I think the argument is very badly stated, it is not entirely wrong. The problem in my opinion is not with the idea of managing in order to maximise shareholder value: that is absolutely how a company should be managed. The problem is with equating shareholder value to the price at which a share of the company was most recently traded. Though that is most probably the best measure we have of the value of the company to its shareholders, it is, nonetheless, not a very accurate measure. Given that the markets have a relatively restricted view of the state of the company, maximising the short-term share price relies on optimising those variables which are exposed to view: chiefly the quarterly earnings.

If outside shareholders had perfect knowledge of the state of the company, then maximising the share price would be the same as maximising shareholder value. Because of the information asymmetry, they are not the same. Value added to the company will not increase the share price unless it is visible to investors, and some forms of value are more visible than others. Management are certainly very concerned by the share price. As I mentioned on twitter, “in any company I worked for, management were (very properly) terrified of shareholders”.

But this is a well-known problem. There are various approaches that have been tried to improve the situation. Where a company has a long-established leadership that has the confidence of investors, shareholding can be divided between classes of shares with different voting rights, so that the trusted, established leadership have control over the company without owning a majority of the equity. This is the situation with Facebook, for instance, where Mark Zuckerberg owns a majority of the voting shares, and most other shareholders hold class B or C shares with reduced or zero voting rights. Buying such shares is an act of faith in Mr Zuckerberg, more than owning shares in a more conventionally structured business. The justification is that it allows him to pursue long-term strategy without the risk of being interrupted by a takeover or by activist investors.

In fact, this year Zuckerberg increased the relative voting power of his holding, by introducing the non-voting class C shares. That has been challenged in court, and is the subject of ongoing litigation.

In summary, the arrangements of public companies consist of a set of complex compromises. There are many criticisms, but they tend to come in opposing pairs. For everyone who, like Alrenous, claims that shares are worthless because companies do not pay dividends, there are some like the Reuters article he found which complain that companies pay out all their profits and do not invest enough in growth. For everyone who, like Chris Dillow, complains that managements are undersupervised and extract funds for self-aggrandizement and private gain, there are others like Lynn Stout who complain that managements are over-constrained by short-term share price moves and unable to plan strategically.

The arrangements which implement the compromises between these failings are flexible: they change over time and adapt to circumstances. A hundred-year-old resource extraction business like Rio Tinto is not structured in exactly the same way as a web business like Facebook. The point of Chris Dillow’s article is that fewer businesses are publicly traded today than in the past (though even that is difficult to measure meaningfully).

The joint-stock company is not a magic bullet; it is a range of institutional forms, evolved over time, and part of a large range of institutional forms that make up Actually Existing Capitalism. They are ways of coping with, rather than solving, the basic conflict-of-interest and asymmetric-information issues that are fundamental to everything from a board of directors appointing a CEO to a coder-turned-rancher hiring a farm hand.

My worry is that Moldbug’s form of Neocameralism is an inflexible snapshot of one particular corporate arrangement, which only works as well as it does because it can be adapted to meet changing demands. That’s why I tend to think of it as one item on a menu of management options (including hereditary monarchy!)

Modelling Failures

Nothing really new here, but pulling a few things together.

Start with Joseph K’s observation:

This is a good point, and I added that the failure of financial risk models in 2008 was essentially the same thing.

The base problem is overconfidence. “People do not have enough epistemic humility”, as Ben Dixon put it.

The idea in all these fields is that you want to make some estimate about the future of some system. You make a mathematical model of the system, relating the visible outputs to internal variables. You also include a random variable in the model.

You then compare the outputs of your model to the visible outputs of the system being modelled, and modify the parameters until they match as closely as possible. They don’t match exactly, but you make the effects of your random variable just big enough that your model could plausibly produce the outputs you have seen.

If that means your random variable basically dominates, then your model is no good and you need a better one. But if the random element is fairly small, you’re good to go.
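
As a deliberately simple-minded version of that procedure: fit a line plus Gaussian noise to some observed outputs, size the noise term from the residuals, and if the residuals are small you are “good to go”. (Synthetic data, just to show the shape of the exercise.)

```python
import numpy as np

rng = np.random.default_rng(0)

# "Visible outputs" of the system being modelled (here, synthetic data).
x = np.arange(50, dtype=float)
y = 2.0 * x + 5.0 + rng.normal(scale=3.0, size=x.size)

# The model: output = a*x + b + noise. Fit a and b; size the noise from residuals.
a, b = np.polyfit(x, y, 1)
residual_sd = np.std(y - (a * x + b), ddof=2)

# The noise term is small relative to the fitted structure, so the model "works".
print(f"fit: {a:.2f}*x + {b:.2f}, residual sd {residual_sd:.2f}")
```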

In polling, your visible effects are how people answer polling questions and how they vote. In social science, it’s how subjects behave in experiments, or how they answer questions, or how they do things that come out in published statistics. In finance, it’s the prices at which people trade various instruments.

The next step is where it all goes wrong. In the next step, you assume that your model—including its random variable to account for the unmeasured or unpredictable—is exactly correct, and make predictions about what the future outputs of the system will be. Because of the random variable, your predictions aren’t certain; they have a range and a probability. You say, “Hillary Clinton has an 87% chance of winning the election”. You say “Reading these passages changes a person’s attitude to something-or-other in this direction 62% of the time, with a probability of 4.6% that the effect could have been caused randomly”. You say, “The total value of the assets held by the firm will not decrease by more than 27.6 million dollars in a day, with a probability of 99%”.

The use of probabilities suggests to an outsider that you have epistemic humility–you are aware of your own fallibility and are taking account of the possibility of having gone wrong. But that is not the case. The probabilities you quote are calculated on the basis that you have done everything perfectly, that your model is completely right, and that nothing has changed in between the production of the data you used to build the model and the events that you are attempting to predict. The unpredictability that you account for is that which is caused by the incompleteness of your model—which is necessarily a simplification of the real system—not the possibility that what your model is doing is actually wrong.

In the case of the polling, what that means is that the margin of error quoted with the poll is based on the assumptions that the people polled answered honestly; that they belong to the demographic groups that the pollsters thought they belonged to, that the proportion of demographic groups in the electorate are what the pollsters thought they were. The margin of error is based on the random variables in the model: the fact that the random selection of people polled might be atypical of the list they were taken from, possibly, if the model is sophisticated enough, that the turnout of different demographics might vary from what is predicted (but where does the data come from to model that?)
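
For reference, the quoted margin of error is essentially just that sampling term. For a simple random sample of n respondents reporting a proportion p, it is roughly 1.96 × sqrt(p(1−p)/n) at 95% confidence; nothing in it speaks to honesty, weighting or turnout.

```python
import math

def sampling_margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error from sampling randomness alone (simple random sample)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll showing 50% support: about +/- 3.1 points...
print(f"{sampling_margin_of_error(0.5, 1000):.1%}")
# ...which assumes honest answers, correct weighting and known turnout.
```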

In the social sciences, the assumptions are that the subjects are responding to the stimuli you are describing, and not to something else. Also that people will behave the same outside the laboratory as they do inside. The stated probabilities and uncertainties again are not reflecting any doubt as to those assumptions: only to the modelled randomness of sampling and measurement.

On the risk modelling used by banks, I can be more detailed, because I actually did it. It is assumed that the future price changes of an instrument follow the same probability distributions as in the past. Very often, because the instruments do not have a sufficient historical record, a proxy is used; one which is assumed to be similar. Sometimes instead of a historical record or a proxy there is just a model, a normal distribution plus a correlation with the overall market, or a sector of it. Again, lots of uncertainty in the predictions, but none of it due to the possibility of having the wrong proxy, or of there being something new about the future which didn’t apply to the past.
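
A minimal sketch of the historical-simulation flavour of that calculation, with invented numbers. Real desks use much richer models, but the structural assumption is the same one described above: tomorrow is drawn from the same distribution as the recorded past, or the past of a proxy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a few years of daily P&L history for a position (or for its proxy).
daily_pnl = rng.normal(loc=0.0, scale=250_000.0, size=750)

# 99% one-day Value at Risk: the loss exceeded on only 1% of historical days.
var_99 = -np.percentile(daily_pnl, 1)
print(f"99% 1-day VaR: ${var_99:,.0f}")

# Everything above assumes the history (or the proxy) is the right distribution
# for tomorrow; none of the quoted 99% refers to the chance that it isn't.
```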

Science didn’t always work this way. The way you do science is that you propose the theory, then it is tested against observations over a period of time. That’s absolutely necessary: the model, even with the uncertainty embedded within it, is a simplification of reality, and the only justification for assuming that the net effects of the omitted complexities are within error bounds is that that is seen to happen.

If the theory is about the emission spectra of stars, or the rate of a chemical reaction, then once the theory is done it can be continually tested for a long period. In social sciences or banking, nobody is paying attention for long enough, and the relevant environment is changing too much over a timescale of years for evidence that a theory is sound to build up. It’s fair enough: the social scientists, pollsters and risk managers are doing the best they can. The problem is not what they are doing, it is the excessive confidence given to their results. I was going to write “their excessive confidence”, but that probably isn’t right: they know all this. Many of them (there are exceptions) know perfectly well that a polling error margin, or a p-value, or a VaR are not truly what the definitions say, but only the closest that they can get. It is everyone who takes the numbers at face value that is making the mistake. However, none of these analysts, of whichever flavour, are in a position to emphasise the discrepancy. They always have a target to aim for.

A scientist has to get a result with a p-value to publish a paper. That is their job: if they do it, they have succeeded; otherwise, they have not. A risk manager, similarly, has a straightforward day-to-day job of persuading the regulator that the bank is not taking too much risk. I don’t know the ins and outs of polling, but there is always pressure. In fact Nate Silver seems to have done exactly what I suggest: his pre-election announcement seems to have been along the lines of “Model says Clinton 85%, but the model isn’t reliable, I’m going to call it 65%”. And he got a lot of shit for it.

Things go really bad when there is a feedback loop from the result of the modelling to the system itself. If you give a trader a VaR budget, he’ll look to take risks that don’t show in the VaR. If you campaign so as to maximise your polling position, you’ll win the support of the people who don’t bother to vote, or you’ll put people off saying they’ll vote for the other guy without actually stopping them voting for the other guy. Nasty.

Going into the election, I’m not going to say I predicted the result. But I didn’t fall for the polls. Either there was going to be a big differential turnout between Trump supporters and Clinton supporters, or there wasn’t. Either there were a lot of shy Trump supporters, or there weren’t. I thought there was a pretty good chance of both, but no amount of Data was going to tell me. Sometimes you just don’t know.

That’s actually an argument for not “correcting” the polls. At least if there is a model—polling model, VaR model, whatever—you can take the output and then think about it. If the thinking has already been done, and corrections already applied, that takes the option away from you. I didn’t know to what extent the polls had already been corrected for the unquantifiables that could make them wrong. The question wasn’t so much “are there shy Trump voters?” as “are there more shy Trump voters than some polling organisation guessed there are?”

Of course, every word of all this applies just the same to that old obsession of this blog, climate. The models have not been proved; they’ve mostly been produced honestly, but there’s a target, and there are way bigger uncertainties than those which are included in the models. But the reason I don’t blog about climate any more is that it’s over. The Global Warming Scare was fundamentally a social phenomenon, and it has gone. Nobody other than a few activists and scientists takes it seriously any more, and mass concern was an essential part of the cycle. There isn’t going to be a backlash or a correction; there won’t be papers demolishing the old theories and getting vast publicity. Rather, the whole subject will just continue to fade away. If Trump cuts the funding, as seems likely, it will fade away a bit quicker. Lip service will occasionally be paid, and summits will continue to be held, but less action will result from them. The actual exposure of the failure of science won’t happen until the people who would have been most embarrassed by it are dead. That’s how these things go.

President Trump

I have long ago observed that, whatever its effect on government, democracy has great entertainment value. We are certainly being entertained by the last couple of days, and that looks like going on for a while.

From one point of view, the election is a setback for neoreaction. The overreach of progressivism, particularly in immigration, was in danger of toppling the entire system, and that threat is reduced if Trump can restrain the demographic replacement of whites.

On the other hand, truth always has value, and the election result has been an eye-opener all round. White American proles have voted as a block and won. The worst of the millennial snowflakes have learned for the first time that their side isn’t always bound to win elections, and have noticed many flaws of the democratic process that possibly weren’t as visible to them when they were winning. Peter Thiel’s claims that democracy is incompatible with freedom will look a bit less like grumblings of a bad loser once Thiel is in the cabinet. Secession is being talked about, the New York Times has published an opinion column calling for Monarchy. One might hope that Lee Kuan Yew’s observations on the nature of democracy in multi-racial states might get some currency over the next few months or years.

So, yes, President Trump may save the system for another two or three decades (first by softening its self-destructive activities, and later by being blamed for every problem that remains). But Anomaly UK is neutral on accelerationism; if the system is going to fail, there is insufficient evidence to say whether it is better it fail sooner or later. If later, it can do more damage to the people before it fails, but on the other hand, maybe we will be better prepared to guide the transition to responsible effective government.

We will soon be reminded that we don’t have responsible effective government. Enjoyable as fantasies of “God Emperor Trump” have been, of course the man is just an ordinary centre-left pragmatist, and beyond immigration policy and foreign policy becoming a bit more sane, there is no reason to expect any significant change at all. The fact that some people were surprised by the conciliatory tone of his victory speech is only evidence that they were believing their own propaganda. He is not of the Alt-Right, and the intelligent part of the Alt-Right never imagined that he was.

For the Alt-Right, if he merely holds back the positive attacks on white culture, he will have done what they elected him to do. Progressives can argue that there can be no such thing as anti-white racism, and that whites cannot be allowed the same freedoms as minority groups since their historical privilege will thereby be sustained. But even if one accepts that argument, it doesn’t mean that those who reject it are White Nationalists. Blurring the two concepts might make for useful propaganda, but it will not help to understand what is happening.

My assessment of what is happening is the same as it was in March: I expect real significant change in US immigration policy, and pretty much no other changes at all. I expect that Trump will be allowed to make those changes. It is an indication of the way that progressive US opinion dominates world media that people in, say, Britain, are shocked by the “far-right” Americans electing a president who wants to make America’s immigration law more like Britain’s–all while a large majority in Britain want to make Britain’s immigration law tougher than it is.

The fact that US and world markets are up is a clue that much of the horror expressed at Trump’s candidacy was for show, at least among those with real influence.

The polls were way off again. The problem with polling is that it is impossible. You simply can’t measure how people are going to vote. The proxies that are used–who people say they support, what they say they are going to do–don’t carry enough information, and no amount of analysis will supply the missing information. The output of the polling analysis is based on assumptions about the difference between what people say and what they will do–the largest variable being whether they will actually go and vote at all. (So while this analyst did a better job and got this one right, the fundamental problems remain.)
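
As an illustration of how much work those assumptions do, here is a minimal sketch (Python, with made-up figures rather than any actual poll) in which the same raw responses produce opposite headline results under two different turnout assumptions:

    # Stated preference among respondents (hypothetical figures).
    raw_support = {"Clinton": 0.48, "Trump": 0.44}

    def projected_share(support, turnout):
        # Weight each candidate's stated support by the turnout rate assumed for
        # their supporters, then renormalise over the projected actual voters.
        votes = {c: support[c] * turnout[c] for c in support}
        total = sum(votes.values())
        return {c: round(v / total, 3) for c, v in votes.items()}

    # Assumption A: both sides turn out at the same rate -> Clinton leads.
    print(projected_share(raw_support, {"Clinton": 0.60, "Trump": 0.60}))
    # Assumption B: differential turnout (or "shy" supporters) -> Trump leads.
    print(projected_share(raw_support, {"Clinton": 0.55, "Trump": 0.65}))

The raw numbers never change; only the analyst’s guess about who actually turns out does.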

In a very homogeneous society, polling may be easier, because there’s less correlation between what candidate a person supports and how they behave. But the more voting is driven by demographics, the less likely the errors are to cancel out.

If arbitrary assumptions have to be made, then the biases of the analysts come into play. But that doesn’t mean the polls were wrong because they were biased–it just means they were wrong because they weren’t biased right.

On to the election itself, obviously the vital factor in the Republican victory was race. Hillary lost because she’s white. Trump got pretty much the same votes Romney did; Hillary got the white votes that Obama did in 2012, but she didn’t get the black votes because she isn’t black, so she lost.

So what of the much-talked-of emergence of white identity politics? The thing is, that really happened, but it happened in 2012 and before. It was nothing to do with Trump. The Republican party has been the party of the white working class for decades. Obama took a lot of those votes in 2008, on his image as a radical and a uniter, but that was exceptional, and he didn’t keep them in 2012.

The exit polls show Trump “doing better” among black people than Romney or McCain, but that probably doesn’t mean they like him more: it’s an artifact of the lower turnout. The Republican minority of black voters voted in 2016 mostly as before, but the crowds who came out to vote for their man in 2008 and 2012 stayed home, so the percentage of black voters voting Republican went up.
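
A minimal worked example (Python, with invented round numbers rather than real vote totals) shows how a group’s Republican share can rise purely because turnout on the other side falls:

    # Invented round numbers, in millions of votes.
    rep_black_votes = 1.0          # assumed constant across both elections
    dem_black_votes_2012 = 17.0    # hypothetical high-turnout year
    dem_black_votes_2016 = 14.0    # hypothetical lower-turnout year

    share_2012 = rep_black_votes / (rep_black_votes + dem_black_votes_2012)
    share_2016 = rep_black_votes / (rep_black_votes + dem_black_votes_2016)

    print(f"2012 Republican share: {share_2012:.1%}")  # 5.6%
    print(f"2016 Republican share: {share_2016:.1%}")  # 6.7%, without winning over a single voter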

The big increase over Romney in Trump’s support among Hispanics probably can’t be explained the same way. A pet theory (unsupported by evidence) is that they’ve been watching Trump on TV for years and years and they like him.

The lesson of all this is that, since 2000, the Democratic party cannot win a presidential election with a white candidate. There’s a reason they’re already talking about running Michelle Obama. They’ve lost the white working class, and the only way to beat those votes is by getting black voters out to vote for a black candidate. While we’re talking about precedents, note that the last time a Democrat won a presidential election without either being the incumbent or running from outside the party establishment was 1960.

Update: taking Nate Silver’s point about the closeness of the result, my statements about what’s impossible are probably overconfident: Hillary might have squeaked a win without the Obama black vote bonus, maybe if her FBI troubles had been less. Nevertheless, I think if the Democrats ever nominate a white candidate again, they’ll be leaving votes on the table unnecessarily.

Personal and Collective Power

In the context of my writing concerning division of power, I want to make a distinction between personal power and collective power.

That is not the same as the distinction between absolute power and limited power. Absolute power can be collective, for example if a state is under the control of a committee, and limited power can be personal, if an individual has control over a particular department or aspect of policy.

There is a continuum of collective power, depending on the amount of personal influence. At one extreme there is a situation where a group of two or three people who know each other can make decisions by discussion; at the other is the ordinary voter, whose opinion is aggregated with those of millions of strangers.

Towards the latter extreme, collective power is no power at all. A collective does not reach decisions the same way an individual does. An individual can change his mind, but that has little chance of altering the action of the collective. To change the action of a collective, some more significant force than an individual impulse normally has to act on it. That’s why, when we attempt to predict the action of a collective, we do not talk about states of mind; we talk about outside forces: media, economics, events.

In many cases, we can predict the action of the collective with virtual certainty. The current US presidential election is finely balanced, but we can be sure Gary Johnson will not win.

This feature of collective power has implications for the consideration of divided power, because in the right circumstances a collective power can be completely neutralised. An absolute ruler is not omnipotent, in that he depends on the cooperation of many others, most importantly his underlings and armed forces. But as a rule they do not have personal power; they have collective power. Any one of them can be replaced. An individual can turn against the sovereign, but if he would just be dismissed (or killed) and replaced, that is not a realistic power. If too many of them do not act as the sovereign orders, he would be helpless, but that requires a collective decision, and one which with a bit of work can be made effectively impossible.

There are exceptions to this. If the sovereign is utterly dependent on a single particular individual, that individual has personal power. There have been historical cases of sovereigns in that position, and it is observed that that constitutes a serious qualitative change in the nature of the government.

Where a person can covertly act against the sovereign’s power, that is a personal power. Competent institutional design is largely a matter of making sure that rogue individuals cannot exercise power undetected by anyone. As long as there are any others who can detect this abuse, the power once again becomes collective power, held by the individual and those placed to stop him. Again, where collectives do act in this way, it is a sign of a breakdown of government institutions. As an example, see this article describing the upper ranks of the army working together to deceive the president. If the president had absolute power and a moderate amount of sense, this sort of conspiracy would be suicidally dangerous. Once power is formally divided, the capability to prevent this kind of ad-hoc assumption of power is massively eroded.

That is the fundamental reason why division of power is bad: whatever formal division of power is made, it will tend to open up these gaps for further informal division, because limited power denies the power to enforce the necessary limits on others. If anyone has the power to punish those who take powers they are not formally entitled to, then that person is effectively absolute. If nobody has that power to punish, then any ambitious crooks can run wild.

If there is no single person other than the sovereign who has personal power, then I would call the sovereign absolute. His power is not infinite: he has to maintain control over the collectives which necessarily have power, but that is a lesser constraint than having to cope with personal power held in other hands. It is more akin to the other constraints on his power imposed by such things as the laws of physics and the existence of foreigners and wild animals.

Note that the nature of feudalism is that feudal aristocrats are not replaceable, and do have personal power—limited, but not collective. Feudalism is thus not a system of absolute power even under my refined definition.

The great significance of collective power is that it is subject to coordination problems. Or, since from the point of view of the sovereign, the problems of coordinating a collective can be an advantage, I will call them coordination obstacles. That is why it is not voters who have power, it is those who mediate the coordination of the voters: parties and media. A change in the way that voters can be coordinated is a thoroughly material change in what I have called the Structure of the state. The US does not have the Structure that it had 25 years ago, because (among other reasons) social media is part of the current Structure. That is an actual revolution, and why the fights over use of social media for political coordination are so significant. Note that since the Constitution doesn’t say anything about social media, the constitution in itself obviously does not define the Structure.

It also means that for a formally absolute ruler, obstructing collectives from coordinating is an important tool. In the period of formally absolute monarchy, any attempt by people of importance to coordinate in confidence was suspect: prima facie treason. The most basic right claimed by parliaments was the right to meet: simply allowing aristocrats and city leaders to meet together and discuss their interests was giving them a power that they wouldn’t otherwise have.

This is the problem with the formalism that Urielo advocates: formally establishing any power that anyone in a given Structure happens to have. Power that is held collectively and is not legitimate is often neutralised by coordination obstacles. If you make that power legitimate, that goes some way to dissolving the coordination obstacles, and thereby increases the effective collective power.

Modern political thought does not generally respect the idea that coordination by those with informal power is not legitimate (though we retain the historical unfavourable associations of the word “conspiracy”) but it went without saying for most of history. Organisations that have existed in England for hundreds of years, such as guilds and the older schools and colleges, generally have royal charters: the charter is their permission to exist.

There are a couple of interesting exceptions to the modern toleration of conspiracy: one is anti-trust law, and another is insider trading law. Those both deal with economic activities.

They do show, however, that legal obstacles to coordination have not been made obsolete by technology. Indeed, modern communication doesn’t mean that coordination obstacles are easily overcome, especially if the obstacles are considered legitimate. No matter what messaging options are available, if you need to identify yourself for the communication to be useful, and you cannot trust the other party not to expose your attempt to conspire, then attempting to conspire is dangerous.

Here is another example: in investment banks, it is generally not permitted for employees to coordinate on pay. It is a disciplinary offence to tell anyone how much you are paid. This is taken seriously, and is, in my experience, effective. That is an example of an obstacle to coordination imposed as part of a power structure.

Legal obstacles to treasonous coordination were removed for ideological reasons, because division of power and competition for power were considered legitimate. Effectively, “freedom of association” was one more way to undermine the ancien régime and unleash the mob. As with the other historical destabilising demands of progressives, things are starting to change now that the progressives have taken permanent control of the central power structures.

You no longer need a Royal Charter for your golf club or trade association, but that doesn’t mean you are free to coordinate: if you don’t have sufficient female or minority members, you may need to account for yourself in the modern Star Chamber.  The Mannerbund is the same kind of threat to today’s status quo as a trade union was to that of 1799.

The useful point is that it is not proved that you can run a stable society with complete freedom of association. That makes it more acceptable for me to recommend my form of absolutism, where people other than the sovereign inevitably have the capability to act against his policy by acting collectively, but such collective action is both illegitimate and made difficult by deliberate obstacles put in their way.

That underlies my view that absolute rule is more achievable than Urielo thinks, and that making divided rule stable is more difficult than he thinks. As he says, “we agree on the fundamentals, and disagree on the specifics”. 

Update: just come across this 2004 piece from Nick Szabo, where he talks about dividing power to produce “the strategy of required conspiracy, since abusing the power requires two or more of the separated entities to collude”. However, as I see it, doing that is only half the job: the other half is actually preventing the separated entities from colluding.

Separation

No matter how big you grow, you are still vulnerable to a single accident. This includes a single self-inflicted accident.

For robustness, growing is helpful but not sufficient. You need to reproduce.

However, reproduction is not merely making copies. That is barely different from growth.  Again, redundant structures and information help, but they’re not sufficient.

To survive longer periods and greater risks, you need to duplicate and separate.

The bigger you are, the further you have to separate.

Your “size” is not your mass, it is the space you occupy. If you are frequently highly mobile, that is like being large, and means you have to separate further.

There are two ways to separate: either you use a different mechanism of movement for separation than for all other purposes, (like a plant seed blowing on the wind), or you make a sustained determined effort to escape, to run far away from all of your kind, with a high speed and consistency of direction that you do not use for any purpose other than separation.

Constitutions

At last I have set out the necessary prerequisites to discuss Urielo / @cyborg_nomade’s treatment of constitutions.

It is possible I could have been more concise about the prerequisites: what it really amounts to is:

  • Division of power is dangerous and to be avoided
  • It’s better to have less division than more
  • Sometimes that isn’t possible

Within the context set by those propositions, the difficult parts of “neocameralism and constitutions“, as well as Land’s “A Republic, If You Can Keep It“, start to appear at least relevant. So too the considerations of control and property in Land’s “Quibbles with Moldbug“.

Let’s say that in some given situation, it is impossible to effectively unify power. The next best thing is to nearly unify power. Some small number of people have some small amounts of power, but the main power-holder can set rules about how they are allowed to use that power, and threaten to crush them like a bug if they break them. That’s workable too, provided the mechanisms of supervision and bug-crushing are adequate.

However, that’s not always the case. Sometimes, power is too divided, and crushing like a bug isn’t on the table. That’s when the hard bit starts.

What you need to do is find a pattern of division of power that is stable, and compatible with effective government. The second implies the first: if the pattern of division of power is unstable, then those in power will be incentivised to protect and expand their power, rather than to govern effectively.

Part of setting up this stable pattern might be to write a lot of rules on a long sheet of paper. I can’t see, though, how you could ever start with the paper and get to the actual division of power.

“Actual division of power” is such a mouthful. The word I wanted to use for this is “constitution”, but I suppose I will have to give in and call it something else. (I had this idea that the original sense of “constitution” meant  what I mean, and the idea of a constitution as a higher set of laws was derived from that. But it seems my idea was completely wrong). Let’s just call it the “Structure“.

So how should one design a Structure? You have to start from where you are. If at t=0 one power is effectively unchallenged, then they should just keep it that way. You don’t need a Structure.

Urielo really hits the nail on the head here:

eventually, a constitution always arise out of a multiplayer game, because conflict eventually ends with an agreement – @cyborg_nomade

A non-autocratic Structure is the result of a peace settlement between potential or actual rivals, and a Constitution represents the terms of that peace settlement.

The aims of the settlement should be that it will last, that those who came into the settlement with power are willing to accept it, and will be incentivised to maintain it into the future and to preserve those things that incentivise the others to maintain it into the future.

The simplest peace settlements consist of a line on a map. What happens on one side is the responsibility of one party, and on the other is the responsibility of another. The two (or more) sides invest appropriately in either defensive or retaliatory weaponry, to provide incentive to each other to keep to the agreement.

This is not normally what we think of as a Structure within a society, though it is an option (https://en.wikipedia.org/wiki/Partition_(politics)). If the powers of the participants cannot be easily separated by a line on a map, a more detailed agreement is necessary.

Another of Urielo’s tweets:
pretty much all working societies recognized some sort of power division. the estates of the realm being the European version – @cyborg_nomade

I’ve written before about the vital elements of feudalism as I see them. It somewhat resembles the “line on the map” kind of settlement: each feudal vassal had practical authority over a defined region, subject to certain duties he owed to his Lord. The Lord would spend his time travelling between his vassals, resolving disputes between them, collecting his share of the loot, and checking that they weren’t betraying him.

This worked practically, most of the time. As I wrote before, the crucial fact that necessitated a settlement between the King and his vassals was that he wasn’t physically able to administer the whole kingdom, because of limitations of communication and transport. Whoever he sent to run its regions would in fact have considerable autonomy (whether the constitution gave it to them or not), and so the Structure had to accommodate that fact.

I say it worked most of the time, but it didn’t work all the time, or even nearly all the time. Conflict between King and nobles was pretty common.

If we’re talking estates of the realm, of course, then there’s more to it than the nobles. The Medieval English Structure basically treated the church as a sort of noble. Bishops and Abbots had similar rights to Barons, but fewer duties. (That meant it would be a problem if their power increased relative to the nobles.) The other group recognised as holding power within the Structure was the small landholders. At a guess, I’d put their claim to power as follows:

Fighting enemies was the responsibility of the King, and in the King’s interest. His vassals were required to supply men and/or funds to him to do this. The actual fighting would be done by Knights and men known to and under the direct control of Knights. It was therefore in the King’s interest that the Knights be incentivised to fight effectively, and would see honour and/or profit in doing so. However, to the Lords the Knights were just farmers and taxpayers; it was not in the Lord’s interest to have his Knights flourishing and strong. Therefore, the King had an interest in defending the status of Knights against their Lords.

That’s kind of a just-so story; I’m open to disagreement on specifics. In any case, this Medieval English Structure obviously depends on an agricultural economy, and military technology that relies on a relatively small number of expensively-equipped, skilled soldiers. It’s not coming back.

The commoners and serfs basically had no power recognised by the Structure. That’s probably an oversimplification, at least after the Black Death, when their economic power became more significant (and serfdom faded out). But in any case, the point of the Structure is not some abstract fairness; it’s stability and efficiency.

The Structure was quite flexible and changed significantly over time. Burghers were accepted into it once trade became economically significant enough for their power to need to be preserved. But even there the simple fix was geographic: towns were made Boroughs, lines were drawn around them on the map, and the Burghers were allowed to run the towns, with a limited and transparent set of rights and duties with regard to goings-on outside the borough.

The King, Nobles and Knights form a triangle: that’s popularly considered to be stable, for the reason that if any one of the three starts to get too strong (or weak), the other two can see it and, with their combined superiority, attempt to correct it. With two, or with more than three, large power centres, it’s too easy for a theoretically weaker coalition to unexpectedly show itself strong enough to reconfigure the Structure. That’s a guideline of Structure Design that one might expect to be durable. One wonders whether Structures that are designed to have many powers (Neocameralism, bitcoin) might coalesce into three. Just a thought.

Now we come to Parliament. I don’t see the medieval English parliament as “part of government” in the sense that the modern UK Parliament is. It wasn’t responsible for law, or for any routine act of government. Its role seems to me to have been the constitutional watchdog, checking on behalf of the Lords and Knights (and later Burghers too) that the King was sticking to the constitution. Running the country was the job of the nobles, within their lines-on-the-map, and of the King, regarding defence. The power of parliament didn’t come from any constitution; it came from the fact that it could reach an agreement, and then go to the country and say “The King is infringing on his subjects’ rights”. (Or, conversely, it could say “Lord Splodgeberry has defied the King and the King is justified in going and kicking his arse”). It makes sense as a transparency mechanism rather than as a power in its own right.

Transparency, even more than Triangles, seems like a durable guideline for Structure Design. You want people with power to be working for good government, not for enhancing their own power, and you need to be able to see that that’s what they’re doing.

Having said that, I don’t think there are many general principles for Structure Design. I’ve spent this piece looking in detail at one historical Structure, to say why it was the way it was and why it worked. I think that’s what you have to do: Structure Design is a boundary value problem. You have to start from where you are.

But then again, Structure Design is a thing. Where two or more powers come together, reaching an agreement is more than just recognising their existing position. It may mean one or both giving up some power that they really hold to cement a durable deal. The establishment of rights of Knights I described above follows that pattern: the King needed it to happen so it was added to the Structure by negotiation. (That may be a stylised version of what really happened, but it could have gone that way).

So I think you can say a bit more than this:
the estates of the realm don’t arise from nowhere. they were supposed 2 formalize the *actual* structure of power that underlied sovereignty – @cyborg_nomade

What you can’t do is just dream up some “constitution” and assume that anyone will follow it. The half-life of a Structure designed that way is generally measured in weeks. Even a constitution that worked somewhere else will fail immediately if the power on the ground doesn’t match the Structure that the constitution is designed to support.

Decolonisation of Africa produced a number of experiments to demonstrate that process.

Once the holders of actual power have been identified, “constitutional design” can take place to create an arrangement by which they are incentivised to participate in an efficient government. However, “constitutional design” in a vacuum is worthless. Democracies with deviations from “one-man-one-vote” have been moderately successful in the past, but I do not think this example is rooted in any realistic assessment of power.

Similarly, various people from time to time (including even myself, long ago) have suggested random jury-type selection of decision-makers. This has attractive efficiency features, but nobody with vested power would have a clear interest in keeping it running fairly, and the scope for corrupting it would be enormous.

The way to think of creating a stable government Structure where there is intractable division of power is midway between diplomats negotiating a peace and lawyers negotiating a contract. Neither of those are trivial or negligible occupations. (At the completely rigorous level, Structure Design is a matter of game theory, but I doubt real-world situations are tractable to mathematics).

Constitutions need to resemble contracts in that they have to cover detailed interactions unambiguously, but they need to resemble peace treaties in that they need to provide for their own enforcement.

The whole Gödel amending process is a bit of a red herring. In the words of Taylor Swift, nothing lasts for ever. Circumstances change, and new Structures have to accommodate them. A new Structure can be built out of an old one–such as representatives of Boroughs being included in the House of Commons alongside Knights–if the parties with power agree the changes are necessary. Making a change to the constitution is not the hard bit; making the Structure stay the same from one year to the next is the hard bit.

Sometimes a Structure has to go. Gnon has the last word.