Chill Out!

This post is entirely unrelated to my last post; expect that to be a theme.

So, I’ve been reading a lot from Less Wrong lately; it’s a blog on “human rationality” and quite the wiki walk.  One of the major posters is Eliezer Yudkowsky, an AI researcher for SIAI, the creator of the AI Box thought experiment, and a fiction writer of considerable skill.  The reason I’ve been reading Less Wrong recently is that I ran into Yudkowsky’s work in a rather unexpected place and followed it back.

Anyways, I was going to write about some of the logic puzzles from Less Wrong, but then I ran into something more interesting: this post from some months ago, in which Yudkowsky talks about attending a conference for those signed up for post-mortem cryonics (the really short definition: a procedure in which the body is frozen immediately after legal death, in the hope that improvements in technology will allow the person in question to be somehow revived at some future point):

after attending this event, and talking to the perfectly ordinary parents who signed their kids up for cryonics like the goddamn sane people do, I’m going to come out and say it:  If you don’t sign up your kids for cryonics then you are a lousy parent. [emphasis mine]

That claim struck me as irritating, frustrating.  Pushed my buttons, you could say.  Some things that bother me:

  • Claims that all people not following some fringe idea are lousy people.  Whatever the merits of the idea in question, it’s actually quite hard to distinguish between fringe ideas that are great and fringe ideas that are terrible; that most people favor the status quo is not surprising.  (Not that I necessarily expect the mainstream to be correct, but being on the fringe doesn’t necessarily mean Yudkowsky is right, either.)
  • Assertions that one is (approximately) the Only Sane Man, especially when they come immediately after an appeal to emotion, i.e. an explanation of why the speaker might not be thinking clearly on the topic in question.  Yes, it’s appropriate to feel emotions that fit the facts, but strong emotion can cloud thinking as well, and people who feel emotions based on false premises tend to think that their emotions fit the facts, too.
  • Lauding one’s self as “a hero”.  (Literally!)
  • Overstatement before consensus.  That is, it’s not enough to state that one is correct while (most of) the rest of the world is wrong; one must state that one’s chosen conclusion is obvious, True, “massively overdetermined”.

The above isn’t to say that Yudkowsky’s position in favor of cryonics is wrong, necessarily, just that his rhetoric is terrible.  And I don’t think his argument is as strong as he thinks.

The arguments for cryonics strike me as a sort of cross between Pascal’s Wager* and the Drake Equation.  Take a bunch of numbers: the odds that one will be successfully cryopreserved; the odds that it will be done in a way that allows some sort of reanimation; the odds that the organization doing the preserving will remain economically stable enough to continue to function; the odds that technology will get good enough in the future that they’ll actually start to revive people (even given the potential for legal consequences in the event of failure… or success!).  Multiply all that by infinity (live forever in a future paradise!), and disregard equally (im)plausible dystopian possibilities as “exotic” and unlikely.  Is it worth paying a few hundred dollars a year for some small (how small? who knows?) chance at something really good, contingent on the development of some plausible (but not necessarily possible) future technology?  Maybe?
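To make the shape of that calculation concrete, here’s a minimal sketch in Python.  Every number in it is a placeholder I’ve made up for illustration; none of them come from Yudkowsky, Alcor, or anyone else.

```python
# A back-of-the-envelope expected-value calculation in the style of the
# Drake Equation.  All probabilities below are invented placeholders;
# the point is the structure of the argument, not the output.

p_preserved = 0.5   # odds of being successfully cryopreserved at death
p_intact    = 0.2   # odds the preservation permits any reanimation
p_solvent   = 0.3   # odds the organization stays economically viable
p_revival   = 0.1   # odds future technology/society actually revives you

p_success = p_preserved * p_intact * p_solvent * p_revival

annual_cost  = 300           # "a few hundred dollars a year"
years_paying = 40
total_cost   = annual_cost * years_paying

benefit = float("inf")       # "live forever in a future paradise!"

expected_value = p_success * benefit - total_cost
print(f"P(success)     = {p_success:.4f}")   # 0.0030
print(f"Expected value = {expected_value}")  # inf, for any p_success > 0
```

Notice where the work gets done: as long as the payoff is treated as infinite and the combined probability as nonzero, the product swamps any finite cost, so the conclusion follows from the assumptions rather than from the (unknowable) probabilities.  That’s the Pascal’s Wager flavor; the Drake Equation flavor is that each factor is a guess multiplied by more guesses.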

The FAQ of Alcor (a cryopreservation organization) states (excerpted for brevity):

Q: Why isn’t vitrification reversible now?

A: To vitrify an organ as large as the brain, Alcor must expose tissue to higher concentrations of cryoprotectant for longer periods of time than are used in conventional organ and tissue banking research. The result of this exposure is biochemical toxicity that prevents spontaneous recovery of cell function. In essence, Alcor is trading cell viability (by current criteria) in exchange for the excellent structural preservation achievable with vitrification.

The nature of the injury caused by cryoprotectant exposure is currently unknown. […]

Q: Has an animal ever been cryopreserved and revived?

A: […] it should be obvious that no large animal has ever been cryopreserved and revived. Such an achievement is still likely decades in the future. […]

(An actual example of a human being put into cryostasis and revived would strengthen Yudkowsky’s argument quite a bit, putting the lower bound on the probability of successful reanimation above zero.)

As for getting you to this state of “biochemical toxicity that prevents spontaneous recovery of cell function”, from which no mind has ever been recovered, those odds at least seem a bit better.  Of the nine patients Alcor preserved in 2009 and early 2010, three were cryopreserved within hours; the rest were moved to moderately cold storage within hours and actually cryopreserved within days.

A recent post by Will Newsome critiquing the position of Yudkowsky and others on cryonics puts it well:

Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time) […] That said, the reverse is true: not getting signed up for cryonics is also not obviously correct. The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong. Strong arguments are being ignored on both sides. The common enemy is certainty. [emphasis theirs]

They go on to note:

I don’t disagree with Roko’s real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one’s real values would endorse signing up for cryonics, it’s not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn’t come up with a confident no.  Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn’t obviously correct.

That’s an interesting thought.  What should be the threshold for giving ideas “extremely careful reasoning”?  Plausibility?  Potential?  I think that might be a hard standard to live up to; there could be a lot of plausible ideas with “staggering potential benefit”.  Plenty of utopian schemes, for example.  Even if you insist on that benefit including immortality for yourself, there are still lots of ideas to consider.

However, I will agree that scientists in the field, at least, should be seriously thinking about plausible ideas with staggering potential benefit in their field, which is not happening in this case.  The paper at that link mentions ethical and PR concerns, often under-informed by actual science, as reasons cryobiologists reject cryonicists.  But it fails to note how cryonicists’ own rhetoric might also contribute to that effect.

Idea #1:  Have an idea that could change the world?  Pay attention to rhetoric.

* Since Yudkowsky has a reply to an argument along that line, I will say that’s not the argument I’m making; I don’t think the probability of being revived into a future (approximate) utopia is vanishingly small merely because that outcome would be really awesome.
