As a fan of logic puzzles and rational decision theory, I’d encountered Newcomb’s Paradox before. The puzzle goes as follows:
Omega, a powerful (but not supernatural or causality-violating) logic-puzzle-creating entity, has set up two boxes. Box A contains $1,000. Box B contains either $1,000,000 or nothing. Omega offers the choice of taking just Box B or taking both boxes. But Omega has made a prediction about the subject’s choice (and Omega’s predictions are almost always correct), and it put the million dollars in Box B if and only if the subject was predicted to take just Box B without using an external source of randomness; people who flip a coin and choose based on that do even worse than those who simply take both boxes.
This is one of the most contentious problems in the philosophy of decision theory. One of the things that’s interesting about it is that it’s hard to simply deny that the premises are logically coherent. You can even sustain the paradox without Omega being perfect in its predictions: so long as Omega is usually right, increasing the amount that may be placed in Box B restores the dilemma.
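To make that concrete, here is the back-of-the-envelope expected-value arithmetic (a sketch only; the 90% accuracy figure is an illustrative assumption, not part of the puzzle):

```python
# Expected payout of each strategy against a predictor with accuracy p.
A = 1_000        # amount in Box A (always present)
B = 1_000_000    # amount Omega may place in Box B
p = 0.9          # Omega's accuracy; anything above 0.5 works

ev_one_box = p * B            # you get B only if Omega correctly foresaw one-boxing
ev_two_box = A + (1 - p) * B  # you always get A, plus B if Omega guessed wrong

print(ev_one_box, ev_two_box)  # 900000.0 vs. 101000.0
# One-boxing comes out ahead whenever B * (2*p - 1) > A, so for any
# accuracy above 50%, a large enough B restores the dilemma.
```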
Newcomb’s Paradox is one of the problems that the denizens of Less Wrong discuss extensively, because rationality is their raison d’être and decision theory is (in one sense) the theory of what it means to make rational decisions. The consensus there is that the right solution is to one-box (that is, to take just Box B), and Eliezer Yudkowsky makes a compelling argument for it, which is essentially this: given the premises of the problem, people who take just Box B walk away with $1,000,000, while people who take both boxes walk away with $1,000. Therefore, it’s best to put aside qualms about strategic dominance, (the illusion of) backwards causality, and whether or not this Omega fellow is generally a jerk; just do the thing that reliably wins.
To put it another way: it’s a premise of Newcomb’s Paradox that one-boxers usually win, and it’s a pretty poor decision theory that gives advice that contradicts a scenario’s premises.
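That “reliably wins” claim is easy to check by simulation. Here’s a minimal sketch, again assuming a predictor that is right 90% of the time (both the accuracy and the payoffs are stand-in numbers):

```python
import random

def average_payout(strategy, accuracy=0.9, trials=100_000):
    """Average winnings for a fixed strategy ('one' or 'two') against a
    predictor that identifies the strategy with the given accuracy."""
    total = 0
    for _ in range(trials):
        correct = random.random() < accuracy
        predicted_one_box = (strategy == "one") == correct
        # Omega fills Box B only when it predicted one-boxing.
        box_b = 1_000_000 if predicted_one_box else 0
        # Two-boxers also collect Box A's guaranteed $1,000.
        total += box_b if strategy == "one" else 1_000 + box_b
    return total / trials

print(average_payout("one"))  # roughly 900,000
print(average_payout("two"))  # roughly 101,000
```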
I was reminded of the puzzle recently because Chris Bertram at Crooked Timber made this unusual observation about it:
I was reading a postgraduate dissertation on decision theory today […] and it suddenly occurred to me that Max Weber’s Protestant Ethic has exactly the structure of a Newcomb problem.
[…] place yourself in the position of Max Weber’s Calvinist. An omniscient being (God) has already placed you among the elect or has consigned you to damnation, and there is nothing you can do about that. But you believe that there is a correlation between living a hard-working and thrifty life and being among the elect, notwithstanding that the decision is already made. Though partying and having a good time is fun, certainly more fun than living a life of hard work and self-denial, doing so would be evidence that you are in a state of the world such that you are damned. So you work hard and save.
[…] you work hard and reinvest, despite the dominance of partying, because you really really want to be in that state of the world such that you get to heaven.
The Calvinist’s predicament does seem to follow from its premises in the same way, so presumably the conclusion is analogous: work hard and save. That makes sense. When dealing with omnipotent and omniscient entities, trying to find loopholes is widely regarded as a bad idea.
I guess the problem for Less Wrongians (and here I really must credit Crooked Timber commenter Prosthetic Conscience for the link, though some of the overlap in our ideas was independent) is that despite usually being atheists, they are often singularitarians, so they may genuinely worry about effectively omni* entities messing with them (or at least with some version of future-them). Sinners who could yet end up in the hands of an angry god-like entity.