“Do What You Love” as a Weapon and Shield

A recent article in Slate had a critical take on the ideology of work as self-actualization:

There’s little doubt that “do what you love” (DWYL) is now the unofficial work mantra for our time. […]

DWYL is a secret handshake of the privileged and a worldview that disguises its elitism as noble self-betterment. According to this way of thinking, labor is not something one does for compensation but is an act of love. […] Its real achievement is making workers believe their labor serves the self and not the marketplace.

DWYL ideology sweeps non-elite work under the carpet:

Think of the great variety of work that allowed [Steve] Jobs to spend even one day as CEO. His food harvested from fields, then transported across great distances. His company’s goods assembled, packaged, shipped. Apple advertisements scripted, cast, filmed. Lawsuits processed. Office wastebaskets emptied and ink cartridges filled. Job creation goes both ways. Yet with the vast majority of workers effectively invisible to elites busy in their lovable occupations, how can it be surprising that the heavy strains faced by today’s workers—abysmal wages, massive child care costs, etc.—barely register as political issues even among the liberal faction of the ruling class?

And it makes elite work more exploitative:

The reward for answering this higher calling is an academic employment marketplace in which about 41 percent of American faculty are adjunct professors—contract instructors who usually receive low pay, no benefits, no office, no job security, and no long-term stake in the schools where they work.

There are many factors that keep Ph.D.s providing such high-skilled labor for such low wages, including path dependency and the sunk costs of earning a Ph.D., but one of the strongest is how pervasively the DWYL doctrine is embedded in academia. Few other professions fuse the personal identity of their workers so intimately with the work output. Because academic research should be done out of pure love, the actual conditions of and compensation for this labor become afterthoughts, if they are considered at all.  [links theirs]

Robin Hanson had a simple explanation for the popularity of this ideology in a much earlier essay that refers to the same Steve Jobs commencement address:

Now notice: doing what you love, and never settling until you find it, is a costly signal of your career prospects. Since following this advice tends to go better for really capable people, they pay a smaller price for following it. So endorsing this strategy in a way that makes you more likely to follow it is a way to signal your status.

It sure feels good to tell people that you think it is important to “do what you love”; and doing so signals your status. You are in effect bragging. Don’t you think there might be some relation between these two facts?

Hanson also has this more recent post about status and advice:

We get status in part from the status of our associates, which is a credible signal of how others see us. Because of this, we prefer to associate with high status folks. But it looks bad to be overt about this. […] Since association seems a good thing in general […] we mainly need good excuses for pushing away those whose status has recently fallen. Such opportunistic rejection, just when our associates most need us, seems especially wrong and mean. So how do we manage it?

One robust strategy is to offer random specific advice. You acknowledge their problems, express sympathy, and then take extra time to “help” them by offering random specific advice about how to prevent or reverse their status fall. Especially advice that will sound good if quoted to others, but is hard for them to actually follow, and is unlikely to be the same as what other associates advise.

Then when your former friend fails to follow your advice, you get annoyed at them and present that (instead of their lowered status) to yourself and others as the reason why you’re not on such good terms with them anymore.

I think that advice can certainly play such a distancing role.  But I think Hanson’s explanation misses some of the things that make this mechanism complicated: the proactive nature of advice, the loose coupling between the emotional drives behind status and actual status, and the way status-related drives aren’t isolated from other psychological drives (not coincidentally, that blending makes hypocrisy more effective).

When a friend is worried that they will suffer a setback, people want to help them avoid that setback (both out of empathy and out of a desire not to be associated with failure).  They also want to create emotional distance so that the setback, if it happens, will hurt them less (pain felt both through empathy and through status anxiety).  Both of these motivations underlie a variety of biases in favor of advice-giving.

When offering advice, I’m tempted to overestimate the effectiveness of advice-giving for the following false reasons:

  1. When thinking about my own problems, my emotions cloud my judgment, but I see other people’s problems objectively.  Also, being outside of their psychology makes their problems easier to understand, even if those problems involve things like their emotions and goals.
  2. I recognize that my imperfect understanding of my own emotions and goals makes my problems harder to solve, but I can solve this problem with second-order advice about what their emotions or goals should be.
  3. An understanding of what worked (or would work) for me is generally applicable.
  4. I could solve my problems effectively through sheer intelligence if only I was better at “following my own advice”.  And maybe seeing someone else succeed based on my advice will motivate me!

I think all of those biases increase the tenacity of “do what you love”.

A Random Dance Down Wall Street

In the last few weeks, I’ve been reading A Random Walk Down Wall Street by Burton Malkiel.  Definitely an investment classic, required reading for anyone with more than a few months’ expenses in their savings.  The very simple conclusion of the book can be summed up as follows:

The strategy for the average investor with the highest expected value is to invest in broad-based low-cost index funds.  Though that’s obviously not the strategy with the optimum return, it’s impossible to reliably identify a better strategy in advance.

(If you think you’re an above-average investor, the advice still applies to you if: You want some of your investments to be relatively low-risk, you don’t want to put substantial time and effort into researching investments (you’d rather spend it increasing and/or enjoying your income), or you’re willing to consider that your assessment of your investment abilities may be incorrectly high.) 

This book suggests that markets approximate the effect of the efficient market hypothesis (the disclaimer is important, since it’s unlikely that even the weak form of that hypothesis is precisely true; that would, among other things, imply P=NP).  Prices may not reflect all public information right away.  But there’s an incentive for prices to match public information eventually, so over a long enough term there’s no easy augury that can profit reliably.  (And in practice that term is fairly short; if you’re looking for easy money, it’s probably shorter than you’d like.)  That’s not to say there’s nothing you can do to accurately predict the future.  It’s just really hard.

There are strong logical arguments in favor of both components of the book’s theory.  As to why the average institutional investor fails to beat the market:

  1. Every time someone chooses to buy an asset that beats the market, someone else chooses to sell it.  (It takes two to tango.)
  2. Generally, both the buyer and the seller are institutional investors because institutional investors do basically all of the buying and selling.
  3. Thus any time an institutional investor gains a relative advantage, another is likely to gain a relative disadvantage.  Taken together, they’d do as well as the market average, excluding the cost of all this buying and selling.
  4. The cost of all this buying and selling is substantial, so on average they do worse.
  5. Thus, buying an index fund also gets you a pre-expenses expected performance of the market average, but with substantially lower expenses.
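A minimal sketch of that argument in numbers (all figures hypothetical, not from the book): if active and passive investors both earn the market’s gross return on average, expenses fully determine the expected difference.

```python
# Hypothetical numbers, not data from the book: both fund types are assumed
# to earn the market's gross return on average (steps 1-3), so expenses
# alone determine the expected gap (steps 4-5).
market_gross = 0.07   # assumed average annual market return
active_fee = 0.01     # assumed expense ratio for an actively managed fund
index_fee = 0.001     # assumed expense ratio for a broad index fund

active_net = market_gross - active_fee   # expected net return, active fund
index_net = market_gross - index_fee     # expected net return, index fund

# Compounded over 30 years, the small fee gap becomes a large wealth gap:
years = 30
active_growth = (1 + active_net) ** years
index_growth = (1 + index_net) ** years
```

The point of the compounding step is that a fee difference that looks trivial in any single year dominates the outcome over a saver’s time horizon.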

As to why you (probably) can’t easily beat the market:

  1. If something is predictably under-valued, people will buy it until it isn’t.  (It only takes a few people who realize it’s undervalued to bid the price up, so there’s little time lag on this component.)
  2. If something is predictably over-valued, at some point people will start to sell it.  (There’s more of a time-lag on this component because they might expect to be able to sell it at an even higher price to a “greater fool”, but savers want to spend eventually and fools have finite money.)
  3. If some investor is predictably good, people will copy them until they no longer beat the market.  (Also, past performance doesn’t guarantee future results.)

So, good stuff.  The book focuses on presenting empirical evidence to back up those arguments, along with a lot of interesting history and a critical look at competing theories.

One weakness of the book is that, for an economics book, it seems to underestimate the effects of supply and demand.  It notes that the value the market places on earnings and dividends varies across different “eras” of the market, and that it was particularly low during the “Age of Angst”, 1969-1981.  During that era, stocks failed to keep up with inflation not because earnings failed to keep up with inflation, but because the P/E multiple fell sharply.  The book mentions that the era was characterized by “demand-pull inflation” (demand for spending, not for saving!) and suggests that investors were rationally “scared” and thus demanded higher risk premiums, which the market provided via lower P/E multiples.  But that sounds like an ultra-roundabout way to say that lower demand for savings, and lower demand for holding savings in stocks relative to other (perceived as less “risky”) assets, caused the P/E multiple to drop.

That’s odd if you’ve accepted the author’s frame of referring to earnings times the market-average P/E multiple as a “firm foundation of value”: that multiple is not nearly so firm, and may have as much to do with aggregate demand for stocks in general as with anything about the specific companies being priced.  Something like gold, which Malkiel describes as “impossible to predict”, has no dividends or earnings, only that unpredictable multiple.
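To make the arithmetic concrete (the numbers below are hypothetical, not the book’s): price is earnings per share times the P/E multiple, so a falling multiple can drag prices down even while earnings keep pace with inflation.

```python
# Hypothetical illustration of the "Age of Angst" pattern: price = EPS * P/E,
# so prices can fall even as earnings rise with inflation.
eps_start, pe_start = 5.00, 18.0
price_start = eps_start * pe_start   # 90.0

# Suppose earnings double along with the general price level, while the
# market-wide multiple (the demand for holding stocks) falls by more than half:
eps_end, pe_end = 10.00, 8.0
price_end = eps_end * pe_end         # 80.0: lower despite doubled earnings
```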

The Silk Road Unraveled

Wow, what a story!

A while back, I wrote a bit about the technology behind the online pseudonymous black market called Silk Road. I talked a bit about the site from the perspective of a security problem. Now it seems that the site’s security was not so good. The FBI has arrested the site’s owner, the notorious black-marketeer known as Dread Pirate Roberts, in real life Ross William Ulbricht. Further reading here, including the formal complaint. (Edited to add: Another great analysis here.)

In the previous post, I talked about the mechanism that minimized knowledge buyers have about sellers. I didn’t really talk about the site administrator. (Though I did mention the administrator could strengthen the site against attack by minimizing the data the site holds on to. Which Ulbricht doesn’t seem to have done.) To be secure, the site administrator would want to minimize their connections with the site. They would log in from an unpredictable place, via TOR. They would communicate only over encrypted channels. They would keep their private key somewhere separate from the servers for the site. Ditto for their bitcoin wallet. Above all, they would minimize their connection to the site, and they would minimize their visibility to law enforcement.

Ulbricht didn’t do any of that, and it proved his undoing. He wasn’t just the president of the Silk Road for Criminals Club, he was also a customer! Using an account clearly labeled as the administrator’s, no less, to buy illegal goods and services directly related to the running of the site. That included packages of physical goods (fake IDs) that could be tracked to his house, and allegedly went as far as paying hitmen to murder a turncoat former employee (there’s a separate indictment for that one) and a potential blackmailer.

In my last post, I suggested:

Acquiring new accounts to do individual stings is too high cost for too little gain, especially since the value of “flipping” a Silk Road buyer is very low (there’s little they can do to get information on Silk Road sellers).

But I failed to note that this does not apply if the buyer in question happens to host the whole site out of his basement.

(Edited to add: That’s hyperbole, of course. The site was hosted outside of the US. It wasn’t being operated from Ulbricht’s house, either. But he was signing in through a VPN gateway at an internet cafe near his home. And not via TOR, either. He also advertised the site soon after it started, and looked for employees for a bitcoin-related startup soon before it started, both under pseudonyms that could be traced to his real identity.)

Now that Silk Road has been seized, any records of sales can be traced. Any buyers and sellers whose records were compromised will be very quickly screwed if they didn’t employ additional money-laundering techniques. Bitcoin may be pseudonymous, but every transaction is intensely public; every node in the network has the complete transaction record.

(Edited to add: The Silk Road itself included a coin tumbler that protected buyers and sellers from knowing one another’s bitcoin addresses. However, it’s not clear if this will protect either buyers or sellers from the authorities now that they have control over whatever data Silk Road retained.)

As far as the value of Bitcoin as a whole goes? Depends, I think, on how much of the price is based on future versus present or past utility. I still think Silk Road is an edge case in the set of things Bitcoin could be used for. But it’s a large portion of the set of things Bitcoin has been used for.

Examining Narrative Games as Art

I recently finished playing Telltale Games’ The Walking Dead, a wonderfully constructed game and a tremendously moving piece of interactive fiction.  That got me thinking about video games as narrative art, and that had my mind wander back to an old debate between the late film critic Roger Ebert and Clive Barker.  The fundamental disagreement is characterized here:

Barker: “I think that Roger Ebert’s problem is that he thinks you can’t have art if there is that amount of malleability in the narrative. In other words, Shakespeare could not have written ‘Romeo and Juliet’ as a game because it could have had a happy ending, you know? If only she hadn’t taken the damn poison. If only he’d have gotten there quicker.”

 Ebert: He is right again about me. I believe art is created by an artist. If you change it, you become the artist. Would “Romeo and Juliet” have been better with a different ending? […]

Barker: […] Let’s invent a world where the player gets to go through every emotional journey available. That is art. […]

Ebert: If you can go through “every emotional journey available,” doesn’t that devalue each and every one of them? […]

And this is something that I wish Barker had characterized better because it’s so obviously a poor description of great narrative games.

Incidentally, I’m not trying to criticize or argue with Ebert.  The former would be piling on to an argument that’s long since over.  As for the latter, I’m obviously too late.  I just want to discuss that central point, because Ebert’s claim does seem reasonable on the face of it.  It’s hard to imagine a great film where control over direction, camera work, and even script is (sometimes) essentially handed over to the audience.  It was clear to me from experience that this didn’t destroy the narrative intent of the game creators, and that something powerful was gained in return.  But at the time, Ebert’s remarks seemed so off the mark that I didn’t give that central point the nuanced response it deserved.

Constrained Choice

Barker failed to make a crucial distinction between an art-form as a whole and individual instances of that art-form.  Cinema could also be described as allowing the viewer to go through “any emotional journey available”, but an individual movie does not.  Great narrative (or you could say “cinematic”) video games also don’t present the player with “any emotional journey available”.  Telltale’s The Walking Dead is a tragedy.  Like Ebert’s example of Romeo and Juliet, the structure of the story does not permit a (satisfying) happy ending.  Telltale’s representation of The Walking Dead is more interactive than (most) stage presentations of Romeo and Juliet, but the interactivity of that representation of the story still doesn’t permit a happy ending.  A central question of Telltale’s The Walking Dead is whether it is more important to protect a child’s physical safety or their ethical/emotional humanity, in situations where you can’t protect both (or maybe not even either).

The Power of Dialog

Barker failed to address Ebert’s point about directorial control head-on.  Given that audience-members are (probably) not great directors, and they haven’t even looked over the script in advance, interactivity implies a loss of directorial control that is clearly a loss in terms of ease of conveying a specific artistic vision.  The correct question is:  What is gained in return?

One answer is that cinematic games have the potential to engage in actual dialog with the audience.  They can have different reactions to different player choices.  And, importantly, this is generally a distinct, small set of reactions to a distinct set of constrained choices.  Dichotomies (and false dichotomies) are a very important feature of human thought, and a key bit of artistic potential that cinematic games have that cinema does not is the ability to explore that feature through interactivity.


The dialog of interactivity (often, but not always, achieved by games through interactive dialog) gives games a powerful way of putting the audience in the shoes of a perspective character.  This is done in several ways:

1. Collaborative character interpretation: The player’s interpretation of a character influences interactive character decisions, which in turn influence how the character is portrayed as the narrative continues.

2. Forced parallel between audience emotions and character emotions: Games can use interactivity to force a parallel between character emotion and player emotion through game mechanics.  Well-crafted game mechanics can induce a whole range of emotions, including hope, disappointment, triumph, frustration, suspense, tedium, flow, surprise, and epiphany.  That goes for non-narrative games as well, but in a narrative game you can deploy those mechanics simultaneously with scenes where the character is feeling the relevant emotion.

It’s not simple; there are real costs to doing so.  If you want to create the emotion of suspense or triumph, you probably need to back that up with a real possibility of failure, often with no better way to get back to the story than “back up a bit and try that again”.  And sometimes the objectives are contradictory; it’s hard to produce a mechanic that makes the player feel the character’s frustration without thwarting the forward progress of the narrative, or making the player so frustrated that focus is drawn away from the narrative instead of into it.  Still, there are tricks that can be employed to have mechanics work one way in the game-as-game and another way in the game-as-narrative.  Often this involves concealing the true nature of a game mechanic, or setting up player expectations and then thwarting them.  Telltale’s The Walking Dead does so with quicktime events, which are generally “press X to not die” sorts of affairs, but uses them in other situations to get player emotions to mirror character emotions as diverse as suspense (a character doesn’t know if rescue will arrive on time), false hope (a character thinks they can struggle onwards if they try hard enough, but they can’t), and blind rage (a character thinks a fight they are in is a life-and-death struggle even after their opponent is helpless).

If you think that it’s somehow inartistic to layer over a (narratively) disconnected art-form in order to get the audience in a particularly receptive emotional frame of mind, note that cinema does the same thing with music.  Of course, games can use that trick, too.

3. False interactivity: Games can find moments where they have their cake and eat it too when it comes to directorial control.  If you do a good enough job of getting the audience in the right state of mind, you can create a situation where the player’s action is invisibly constrained, offering a false choice that seems like a real choice, where the player really gets inside the character’s head when they realize that there is only one thing they can do in this situation.

Probably the most powerful example of this I’ve encountered is this scene from Ico, which occurs just after the “second act” in the game’s story.  The cutscene breaks back into interactive gameplay right in the middle, where the protagonist has been separated from his friend and must quickly decide whether to leap a widening chasm to join her or to leave her behind and flee for safety.  It’s a false choice; there’s no significant narrative for players who choose the latter, or even those who hesitate too long, just a game over screen.  But that usually doesn’t matter: everything in the narrative and the mechanics of the gameplay up to that point sets the player up to make the right choice for the narrative, without hesitation.  Instead of being a loss of directorial control, it’s a powerful moment of congruence.

Predicting the Present

Idea #7: The best way to accurately predict the future is to accurately predict the present.

I was listening to Democracy Now! this morning, covering the NSA scandal (ongoing) and the (now long-established) use of private contractors to analyze digital records, the sort of activity that would be obviously illegal if physical documents were involved instead of digital ones, when I was suddenly struck by the memory of Cory Doctorow’s comment about science fiction writers predicting the present. Because, in fact, Cory Doctorow wrote this one before, in a short story called “The Things That Make Me Weak and Strange Get Engineered Away” (after the Jonathan Coulton song), published in 2008.

The story hits all the key points: private contractors analyzing vast quantities of metadata for the surveillance state, and the sort of conflict between hired geeks and their authoritarian masters that results. Of course, in that story the private contractors are a cloistered society of lifehacking monks, but obviously a good science fiction story has to push those predictions of the present a little in a future-weird direction. Doctorow’s story is a bit of a warning, too. It at least raises the question of whether the withdrawal of the nerds into their own sousveillance society removed their effectiveness as an obstacle to the security state (in more ways than one).

Well worth a read. And worth pointing out, especially since I’m not the only one thinking about fiction as warning in light of recent revelations.

Extremist Terrorism’s False Flag

As a resident of the Boston area in the aftermath of the marathon bombings, I have to say the conspiracy theories have already gotten really annoying.  In this case, the simple hypothesis is actually very well supported, while conspiracy theorists tend to support their hypotheses with observations that would be just as likely, or almost as likely, if those hypotheses were completely incorrect.

But I do want to say a little bit about this concept of a false flag operation in the context of terrorists like the Tsarnaevs.  One of the things that’s odd about such a terrorist attack is it’s extremely unclear what sort of goals it might hope to achieve.  At least, it seems unlikely to frighten the US towards an isolationist policy, or achieve any end that directly supports the goals of (the violent extremist flavor du jour) militant Islamists.

The proliferation of this sort of tactic might be best understood under the concept of a false flag.  In a false flag operation, an attack is disguised so as to provoke a misdirected response.  In the archetypal case, this involves a government falsifying an enemy attack (or secretly facilitating a real enemy attack) to bolster public support for military action against that enemy.  But there’s an alternative scenario, in which an enemy seeks to have one of their potential allies blamed for the attack.  Even if the ally is not fooled by this ploy, the provoked counter-attack could give the attacker and the blamed ally a common enemy to unite against.

The best counter-attack against terrorism, therefore, is as restrained as it is effective.  I don’t mind that the police and military told people to stay home on April 19.  I don’t mind that they searched Watertown house by house.  Yes, it’s costly and disruptive, but having a bomber on the loose is also costly and disruptive.  Yes, the guy wasn’t found in the initial search, but there’s only so much you can do with limited information.

Ultimately, though, the town is getting back to normal.  We feel no need to buy the extremists’ implicit declaration that there’s a war on.  We can treat them as ordinary criminals.  Boston has dealt with those before.

Real Life Cypherpunk

No, the hurricane didn’t blow this blog away, but I’ve been hosed nonetheless.  Still, I want to get back to writing, so will maybe stick to something a bit shorter-form.

Lately, I’ve been fascinated with the rise in value of Bitcoin (BTC), a distributed, anonymous, cryptographic token transaction system intended for use as a currency.  My original thought on the technology was “nifty idea”, but I never would have thought it would have much in the way of real value (not that virtual goods can’t have real value, but BTC isn’t, by itself, much of a game).  I certainly didn’t see it rising again after the initial bubble and crash, but if you look at the charts, you’ll see that the value is now above the peak of the June 2011 bubble.  That crash was precipitated by a security breach and subsequent flash-crash at Mt. Gox (the largest Bitcoin exchange).  Subsequent high-profile security breaches in the immediate months following surely didn’t help, but it’s worth noting that such incidents didn’t cease in November 2011; BTC was able to regain its value despite the occasional digital bank robbery.

So given my interest, and my surprise, I was fascinated by this essay by Gwern on anonymous black-market website Silk Road (the site itself can be found here, I link to this for educational/informative purposes only and not to encourage you to do anything illegal).  The essay is a very detailed, down-to-brass-tacks look at how Silk Road works and what its weaknesses might be.

Silk Road is designed to conduct business with only the minimum amount of information possible.  A normal e-commerce website ends up with the following information:

  1. Payment information for the buyer
  2. Payment information for the seller
  3. Reviews left by the buyer for the seller
  4. Information sent by buyer to seller (including at least a shipping address)
  5. Information sent by seller to buyer (if sent via site)
  6. The seller’s name / pseudonym
  7. Users’ IP addresses
  8. Metadata about users’ connections

Making the process anonymous involves several technologies:

  • Bitcoin, so payments don’t pass through the ordinary (identifying) financial system
  • Public-key encryption of buyer-seller messages, so the site never holds their plaintext
  • TOR, so the site never sees users’ real IP addresses

So Silk Road actually ends up with:

  1. Bitcoin addresses the buyer used to transfer bitcoins to Silk Road
  2. Bitcoin addresses the seller used to transfer bitcoins from Silk Road
  3. The reviews left by the buyer for the seller
  4. Encrypted gibberish sent by the buyer to the seller (including at least the buyer’s address), plus a public key for the seller (which everyone can see)
  5. Encrypted gibberish sent by the seller to the buyer, if any (the buyer has no need to post a public key, they can send it to the seller in their message if they need a reply)
  6. The seller’s pseudonym
  7. The last hop of the connection path users take to access the site

Silk Road can also strengthen their resilience against outside attack by only keeping recent data for items 1, 2, 4, and 5, and no data for item 7 (there is, however, no way for users to verify that they are in fact doing so).
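The mapping between those two lists can be sketched as code. Everything below is hypothetical and purely illustrative; none of these names reflect Silk Road’s actual schema.

```python
# Hypothetical sketch of the data-minimization mapping between the two
# lists above: reduce a conventional e-commerce record to what the
# design retains.

def silk_road_view(record, encrypt, buyer_btc, seller_btc, pseudonym, last_hop):
    """Return only the fields the minimized design keeps."""
    return {
        "buyer_btc_address": buyer_btc,                       # 1: throwaway address, not a card
        "seller_btc_address": seller_btc,                     # 2: throwaway address, not a bank
        "review": record["review"],                           # 3: kept verbatim
        "buyer_message": encrypt(record["buyer_message"]),    # 4: ciphertext only
        "seller_message": encrypt(record["seller_message"]),  # 5: ciphertext only
        "seller_name": pseudonym,                             # 6: pseudonym, not a legal name
        "user_ips": [last_hop],                               # 7: last TOR hop, not the real IP
        # 8: connection metadata is simply never retained
    }

# A conventional site's record (all values invented):
normal_record = {
    "buyer_payment": "visa 4111-xxxx",
    "seller_payment": "bank DE89-xxxx",
    "review": "5/5, fast shipping",
    "buyer_message": "ship to 12 Main St",
    "seller_message": "shipped Tuesday",
    "seller_name": "Jane Doe",
    "user_ips": ["203.0.113.7"],
    "connection_metadata": "timings, packet sizes",
}

minimized = silk_road_view(
    normal_record,
    encrypt=lambda plaintext: "<ciphertext>",  # stand-in for real public-key encryption
    buyer_btc="1BuyerThrowaway", seller_btc="1SellerThrowaway",
    pseudonym="SomeSeller", last_hop="tor-exit.example",
)
```

The point of the sketch is that the shipping address and the real identities never appear in the minimized record; only the review and the pseudonym survive in the clear.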

Silk Road also employs several technologies / methods to mitigate the effects of anonymity:

  • Pseudonymous escrow
  • Reputation economy (presumably the reason they allow for pronounceable seller pseudonyms (6), while keeping information to an absolute minimum in so many other ways), plus methods for quantitative and qualitative analysis of buyer feedback data
  • Seller account auctions (SR admins say the primary reason for this is to make the sort of attacks (note that includes scams or stings) that can be done with new accounts at least very costly to do repeatedly; of course, this also makes money for whoever’s running Silk Road)

So Silk Road is not just a straightforward application of Bitcoin.  Bitcoin is just a main ingredient in the whole cypherpunk stew!

Also, this is not to imply that the system doesn’t have weaknesses.  It still falls short of the goal of full cryptographic anonymity.  For one thing, the seller ends up with a physical post address for the buyer.  Postal addresses are a lot harder to generate and anonymize than Bitcoin addresses or private keys, and the movement of physical packages is a lot easier to inspect and trace than TOR connections.

Gwern suggests that Silk Road could be brought down through DDoS or acquiring a large number of accounts for some coordinated scam.  Acquiring new accounts to do individual stings is too high cost for too little gain, especially since the value of “flipping” a Silk Road buyer is very low (there’s little they can do to get information on Silk Road sellers).  Perhaps law enforcement will decide to do some stings anyways to make an example of a few cypherpunk drug-purchasers; the ineffectiveness of that tactic as a deterrent doesn’t stop people from trying.

Gwern doesn’t mention the demise of Bitcoin scenario described by Moldbug in this post, where the value of Bitcoins is brought down by a broad-scale legal attack on the Bitcoin exchanges, indicting them all for money laundering (Bitcoin tumblers might be more deserving of this attack, but targeting the exchanges will be easier and more effective).  That wouldn’t prevent people from trading Bitcoins for goods.  But Silk Road’s selection still isn’t as good as Amazon’s, and Bitcoins are still not sufficiently liquid when it comes to things like rent and groceries, so the value of a Bitcoin in rent and groceries still depends on the exchange rate with less science-fictiony currencies.  Not that it would be impossible to find someone on Silk Road to ship you food, but you really don’t want to buy your necessities at black market prices if you can help it.  Being able to spend money earned at a black market premium on things not sold at a black market premium is a big advantage of illicit trafficking.

The Rationalist Elect

As a fan of logic puzzles and rational decision theory, I’d encountered Newcomb’s Paradox before.  The puzzle goes as follows:

Omega (a powerful (but not supernatural or causality-violating) logic-puzzle creating entity) has set up two boxes.  Box A contains $1000.  Box B contains $1,000,000 or nothing.  Omega offers the choice of taking just Box B or taking both boxes.  But Omega has made a prediction (and Omega’s predictions are almost always correct) about the subject’s choice, and put the million dollars in Box B if and only if the subject was predicted to take just Box B (without using an external source of randomness; people who flip a coin and choose based on that do even worse than those who just choose both boxes).

This is one of the most contentious philosophical problems in decision theory.  One of the things that’s interesting about it is that it’s hard to just deny that the premises are logically coherent.  You can sustain the paradox without Omega being perfect in its predictions, so long as Omega is usually right, by increasing the amount that may be placed in Box B.
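To make the imperfect-predictor case concrete, here is a quick expected-value sketch.  The 90% accuracy figure is my own illustrative assumption, not part of the standard problem statement:

```python
# Expected payoffs against a fallible Omega that predicts the
# subject's choice correctly with probability p.
# p = 0.9 is an illustrative assumption, not a canonical value.
p = 0.9
big, small = 1_000_000, 1_000

# One-boxing: Box B is full iff Omega correctly predicted one-boxing.
ev_one_box = p * big                # ≈ 900,000

# Two-boxing: always get Box A; Box B is full only if Omega erred.
ev_two_box = small + (1 - p) * big  # ≈ 101,000

print(ev_one_box, ev_two_box)
```

Even with Omega wrong one time in ten, one-boxing comes out far ahead, and raising the Box B prize widens the gap for any accuracy above chance.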

Newcomb’s Paradox is one of the problems that the denizens of Less Wrong discuss extensively because rationality is their raison d’être and decision theory is (in one sense) the theory of what it means to make rational decisions.  The consensus there is that the right solution to the problem is to one-box (that is, to take just Box B), and Eliezer Yudkowsky makes a compelling argument for that, which is essentially this: Given the premises of the problem, people who take just Box B walk away with $1,000,000, while people who take both boxes walk away with $1000.  Therefore, it’s best to put aside qualms about strategic dominance, (the illusion of) backwards causality, and whether or not this Omega fellow is generally a jerk; just do the thing that reliably wins.

To put it another way: It’s a premise of Newcomb’s paradox that one-boxers usually win, and it’s a pretty poor game theory that gives advice that contradicts a scenario’s premises.
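The “reliably wins” claim can be checked with a toy Monte Carlo simulation.  The accuracy level and payoffs below are assumptions for illustration, and the simulation only covers agents with fixed, predictable strategies (as the problem premises require):

```python
import random

def simulate(strategy, trials=100_000, accuracy=0.9, seed=0):
    """Average payoff for a fixed strategy ('one-box' or 'two-box')
    against an Omega that predicts it correctly with probability `accuracy`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # Omega predicts the player's strategy, erring occasionally.
        if rng.random() < accuracy:
            predicted = strategy
        else:
            predicted = 'two-box' if strategy == 'one-box' else 'one-box'
        box_b = 1_000_000 if predicted == 'one-box' else 0
        # Two-boxers also collect the guaranteed $1000 in Box A.
        total += box_b if strategy == 'one-box' else 1_000 + box_b
    return total / trials

print(simulate('one-box'))   # close to 900,000
print(simulate('two-box'))   # close to 101,000
```

Under these assumed numbers, one-boxers average far more than two-boxers, which is exactly the premise the two-boxing advice runs up against.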

I was reminded of this puzzle again recently because Chris Bertram at Crooked Timber has this unusual observation on it:

I was reading a postgraduate dissertation on decision theory today […] and it suddenly occurred to me that Max Weber’s Protestant Ethic has exactly the structure of a Newcomb problem.

[…] place yourself in the position of Max Weber’s Calvinist. An omniscient being (God) has already placed you among the elect or has consigned you to damnation, and there is nothing you can do about that. But you believe that there is a correlation between living a hard-working and thrifty life and being among the elect, notwithstanding that the decision is already made. Though partying and having a good time is fun, certainly more fun than living a life of hard work and self-denial, doing so would be evidence that you are in a state of the world such that you are damned. So you work hard and save.

[…] you work hard and reinvest, despite the dominance of partying, because you really really want to be in that state of the world such that you get to heaven.

It does seem to follow from the premises in a similar way, so presumably the conclusion would be analogous.  That makes sense.  When dealing with omnipotent and omniscient entities, trying to find loopholes is widely regarded as a bad idea.

I guess the problem for Less Wrongians (and here, I really must give credit to Crooked Timber commenter Prosthetic Conscience for the link, though some of the overlap in our ideas was independent) is that despite usually being atheists, they are often singularitarians, so they may genuinely worry about effectively omni* entities messing with them (or at least some version of future-them).  Sinners who could yet end up in the hands of an angry god-like entity.


I’ve been away from here too long, hosed by work and politics.  The presidential debates sure are interesting.  Wait… what was that about Barack Obama?  No, no, I didn’t mean that debate.  I meant this debate:

Round two:

Who the heck is moderating these?!  I guess it makes sense when you see the guy’s campaign ad:

The political cartoon has a venerable history, but I’m beginning to think the political remix is really capturing the zeitgeist of modern political satire.  Here’s something a bit more musical:

More from MC R-Money:

But before you think Romney’s the only one who’s been taking a turn for the musical, I had to find some quality musical remix satire for Obama.  And not just the different, though also funny, type of remix that’s not political satire per se.  (This sort of thing is somewhere in the middle.)

Here’s one that’s pretty good (though probably cheating a bit and NSFW for swears):

What makes for a great political remix?  What’s your favorite example?