Patri Friedman points out in a comment that, since “correlation is not causation”, using the correlation between my vote and those of others to estimate an amplified effect for my vote is bogus.
Oh yes, so it is.
That almost disposes of the question. But my thought experiment about identical robots all voting the same way is still valid, I believe. And while I and some other voter I pick out are not robots and not identical, we are phenomena in a physical universe with some strong mechanical resemblances.
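To make the robot picture concrete, here is a minimal sketch in Python; the population size and the shared decision bit are made up for illustration, not anything from the original argument. The point is that when one program is shared by every voter, "changing my decision" is not a change to one vote but to all of them.

```python
# A minimal sketch of the identical-robots thought experiment; the
# population size and the decision function are illustrative only.

N_ROBOTS = 1000  # hypothetical number of identical robots

def decide(shared_decision: bool) -> bool:
    # Every robot runs the same program, so each robot's vote is
    # determined by the same single shared decision.
    return shared_decision

def count_votes(shared_decision: bool) -> int:
    # Each robot evaluates the identical decision procedure.
    return sum(decide(shared_decision) for _ in range(N_ROBOTS))

# "Changing my decision" changes the one shared program, and with it
# every robot's vote at once.
print(count_votes(True) - count_votes(False))  # -> 1000, not 1
```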
Like Newcomb’s paradox, it comes down to the nature of human choice. The traditional view is that each person is an independent entity that can make uncaused choices at any point in time.
That traditional view is implicit in the question, “what difference does it make whether I vote or not?”. The assumption is that, in imagination at least, we can hold the whole world constant and consider it with or without me voting.
As I have implied by talking about robots, the traditional view is not true. My mind is part of the world, and you cannot "hold the world constant" without also holding my decision constant.
One response to the problem is to say that the whole question is invalid, humans do not make choices, they are “moist robots” (as Scott Adams would say) following their predetermined programs.
But the question clearly is valid. Perhaps we cannot hold the world constant in every last detail while varying my decision, but surely we can come close enough for the question to still make sense. We will just have to assume some small changes to the world, enough to be consistent with my decision being changed.
Now if we vary, for instance, how much of an idiot the candidate is, we will get an answer to my question very much greater than one. But that’s silly. Whatever the question really means (because I’ve demonstrated it’s not quite as unambiguous as it looks), it doesn’t mean that. Facts we have observed must be held constant.
It would be a more sensible interpretation of the question to, for instance, hold the universe outside my skin constant, while varying the inside as far as necessary to be physically consistent with different votes.
If we do that, then the answer we will come up with is that my vote makes exactly one vote of difference – the whole argument I made in the first place is wrong.
But varying my brain is not straightforward, even in principle, because it breaks continuity over time. In order to imagine a physically possible universe that is nonetheless consistent with the history we have observed, I might have to vary unobserved facts that extend beyond my brain and body. Those facts may even extend into other voters' brains and bodies, possibly giving me the >1 answer I wanted.

This is what was nagging at me in the first place: the notion that "my mind" is not quite something that can have a neat boundary drawn around it, that it is some kind of extended phenotype. In the identical robots example, there is really only one mind, duplicated or distributed in space, which is why one decision produces many votes. As Dennett says in Freedom Evolves, "if you make yourself very large, you can internalize anything". In order to internalize the decision to vote, that is, to be able to describe it as something I have done, might I need to make myself large enough that I overlap with others?
That is a coherent possibility, but it seems much more likely that we could create the hypothetical implied by the original question, varying my vote without varying past observed facts, merely by varying quantum randomness in my brain between now and when I vote, or, failing that, by varying unobserved facts in my brain going back to my birth. In either case, one is a reasonable answer to the question "How many votes of difference does my decision to vote make?"
Summary
The question is: How many more votes will my candidate get if I vote for him than if I don’t?
The question is too vague to give an absolutely rigorous answer: changing my vote requires, for physics to remain consistent, that other things (by implication, things too small for us to have observed) change as well. Depending on which other things are changed, the answer could vary.
However, it is very probable that the most straightforward answer to the question is one vote, meaning that unnoticeable changes inside my body are enough to change my vote without being inconsistent with the observed past.
I’m slightly disappointed (I liked the idea of getting free extra votes), but, on the other hand, the answer is the one that is consistent with “free will”, so if you’re insecure about whether you have free will, the answer is good news for you.
And I’m pretty sure I’m close to having a good answer to Newcomb’s paradox, which is the same kind of question. It’s an attempt to turn the question of free will into a motivated question. Asking about things like free will in the abstract tends to degenerate into arguing what the words mean, and unless there’s some reason to care, one meaning is as good as another. Taking both boxes is an assertion that you have independent free will, that you are not just a cog in a machine, but at the same time it’s a choice that matters and could cost you money if you’re wrong.
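For concreteness, here is a rough expected-value sketch of the paradox in Python, using the payoffs usually quoted for it ($1,000 in the visible box, $1,000,000 in the opaque box); the predictor accuracies I loop over are arbitrary.

```python
# Expected payoffs in Newcomb's paradox, with the standard prizes and a
# predictor that is correct with probability p (p is a free parameter).

def expected_one_box(p: float) -> float:
    # Predictor correct (prob p): opaque box holds $1,000,000.
    # Predictor wrong (prob 1-p): opaque box is empty.
    return p * 1_000_000 + (1 - p) * 0

def expected_two_box(p: float) -> float:
    # Predictor correct (prob p): opaque box empty, keep only the $1,000.
    # Predictor wrong (prob 1-p): both boxes pay out.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.99):
    print(p, expected_one_box(p), expected_two_box(p))
```

Once p exceeds about 0.5005, one-boxing has the higher expectation, which is exactly the sense in which asserting your independence by taking both boxes is a bet that could cost you money.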