Entries in futurism (15)

Monday
Aug 16, 2010

Compensating for Bias

“It’s tough to make predictions, especially about the future.”  - Yogi Berra

As a sort of follow-up to my last post, I’ll say that being aware of one’s cognitive biases is a necessary but not sufficient condition for counteracting them.  Even if you’re aware of your biases, those biases still apply, both in how you perceive your own bias and in how you judge your own counteraction.  To put it another way:

Idea #4: Predicting the future is harder than you think, even if you know why predicting the future is harder than you think.

(With thanks to Douglas Hofstadter.)

Friday
Aug 13, 2010

Speedily, Speedily, In Our Days, Soon

Predicting significant social or technological changes would be hard enough even for a rational actor.  After all, any individual has limited (and inaccurate) information about the current state of affairs, many of the systems involved appear chaotic, and the time-scales involved can be very short indeed.  For humans, it’s even harder.

So when a Reddit post on this comic led me to this article, I was intrigued.

The relevant bit of the article:

Pattie Maes, a researcher at the MIT Media Lab noticed something odd about her colleagues. A subset of them were very interested in downloading their brains into silicon machines. Should they be able to do this, they believed, they would achieve a kind of existential immortality. […]

[…] her colleagues really, seriously expected this bridge to immortality to appear soon. How soon? Well, curiously, the dates they predicted for the Singularity seem to cluster right before the years they were expected to die.  […]

Joel Garreau, a journalist who reported on the cultural and almost religious beliefs surrounding the Singularity in his book Radical Evolution, noticed the same hope that Maes did. But Garreau widened the reach of this desire to other technologies. He suggested that when people start to imagine technologies which seem plausibly achievable, they tend to place them in the near future – within reach of their own lifespan.

The author of the article, Kevin Kelly, refers to this observation as the “Maes-Garreau Law”.
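To make the claimed pattern concrete, here’s a minimal sketch of how one might check a list of predictions for it.  Every number below is invented for illustration (Kelly’s article tabulates real predictions; these are not them):

    # Does each prediction land just inside its maker's expected lifetime?
    # Birth years and predicted dates below are made up for illustration.
    LIFE_EXPECTANCY = 85  # crude assumption: every predictor expects to live to 85

    predictions = [
        # (predictor's birth year, predicted year of the Singularity)
        (1948, 2030),
        (1960, 2042),
        (1971, 2055),
    ]

    for born, predicted in predictions:
        expected_death = born + LIFE_EXPECTANCY
        gap = expected_death - predicted
        print(f"predicted {predicted}, expected death {expected_death}, gap {gap:+d} years")

    # A Maes-Garreau pattern would show up as gaps clustering at small
    # positive values: the predicted date falls just before the expected death.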

The most obvious explanation for that observation is probably wishful thinking, especially (but not always) if the change in question seems positive.  Or perhaps the desire for personal significance that comes with messianic or apocalyptic thinking.  Kelly comes up with a slightly different explanation, though:

Singularity or not, it has become very hard to imagine what life will be like after we are dead.  The rate of change appears to accelerate, and so the next lifetime promises to be unlike our time, maybe even unimaginable. Naturally, then, when we forecast the future, we will picture something we can personally imagine, and that will thus tend to cast it within range of our own lives.

If I had to guess, I would say that this is not tied to observations about the rate of change.  Rather, since consciousness is centered around narrative, we expect our lives to have narrative continuity.  Thus, predictions about the future that seem hopelessly alien tend to get placed after the imaginer is dead, while predictions that seem like they could fit into the stories of our lives also seem like they could plausibly happen within our lifetimes.

An interesting logical inference:  If the above is true, then one would expect predictions of the timing of the singularity to differ wildly depending on which side of the singularity one is trying to imagine, and on how good a job one does of imagining the alien-ness of a post-singularity world.

Friday
Aug 6, 2010

Wave Goodbye?

Google Wave is being discontinued as a standalone product.  I’m not sure whether to be surprised.  On the one hand, it seemed like if anyone could solve some of the flaws of email and get people to actually adopt it, it would be Google.  On the other hand, I was tremendously excited about Wave… but I never used it.

It seems that with networks as big as email, there are no good ways to push out a new protocol.  If you let everyone in right away, it doesn’t scale.  If you slowly add users, people’s friends are not on it when it’s fresh in their minds.  If you make it a separate product, it’s an inconvenience.  If you make it part of an existing product, users object to having it foisted upon them.

Still, Wave contained some fundamentally good ideas.  It makes sense to have an email client that can handle scheduling or collaborative document editing or shared to-do lists or threaded discussions; that is, instead of sending an email with a link to a web app, why not send an email with a web app in it?  It also makes sense to create open protocols instead of closed systems, especially if you want to build off of something as widely adopted as email.  (Not that open protocols are guaranteed winners.  Many open-source proponents would like to paint the history of the internet as a steady progression away from “walled gardens”, but that’s not necessarily the case.)

Google Wave isn’t dead yet.  It’s already used by at least two pieces of enterprise collaboration software.  Hopefully, some of Wave’s features will find their way into GMail and other mail clients.

What do you think?  Will Wave rise again, or sink into obscurity?  Will the email client of some decades hence look much like one today, or will email’s role be filled by something different?  Will it be in FULL 3D?  It’s the future, after all.

Wednesday
Jul 28, 2010

The Future in the News

If I listed organizations exemplifying significant near-future trends, Wikileaks would certainly be towards the top.  Wikileaks is a platform for the anonymous submission, verification, and publication of classified or otherwise secret documents.  Because it operates online, with servers in multiple journalism-friendly jurisdictions, information given to Wikileaks becomes incredibly hard to suppress.  The fact that Wikileaks tries (to whatever extent possible under their journalistic ethics) to publish full documents instead of processed stories allows multiple news organizations to do their own analysis of the raw data.  Wikileaks suffered a funding crisis earlier this year, but after a donation drive, their document submission site and their published archives are back online.

Last April, Wikileaks was rocketed into the headlines when they released a video from July 2007 showing a helicopter gunship attack on suspected insurgents.  Reuters journalists with the group were also killed in the attack, as were civilians who attempted to rescue the wounded.  Two children in the rescuers’ vehicle were also seriously wounded.  The video was leaked by Private Bradley Manning, who was arrested and charged this July.

This week, Wikileaks released tens of thousands of pages of classified documents on the Afghanistan war, launching US strategy in the war back into the news and the political spotlight (or so anti-war politicians hope).

That of course means that the US Government has intensified its efforts to capture and question Julian Assange, Wikileaks founder and spokesperson.  That didn’t stop him from showing up to speak at TED Global 2010 in Oxford, but he didn’t show at The Next HOPE Conference (where he was to be the keynote speaker) last week in NYC.

So, this is one to watch.  It’s not clear to what extent Assange’s arrest would hinder Wikileaks.  It is clear that the Anthony Russos of the world now have far better technology at their disposal than a Xerox machine, and that this will be a force for governments and businesses to contend with, since the issues of secrecy, security, and democracy are deeply intertwined.

Friday
May 28, 2010

Chill Out!

This post is entirely unrelated to my last post; expect that to be a theme.

So, I’ve been reading a lot from Less Wrong lately; it’s a blog on “human rationality” and quite the wiki walk.  One of the major posters is Eliezer Yudkowsky, an AI researcher for SIAI, the creator of the AI Box thought experiment, and a fiction writer of considerable skill.  The reason I’ve been reading Less Wrong recently is that I ran into Yudkowsky’s work in a rather unexpected place and followed it back.

Anyways, I was going to write about some of the logic puzzles from Less Wrong, but then I ran into something more interesting: this post from some months ago, in which Yudkowsky talks about attending a conference for those signed up for post-mortem cryonics (the really short definition: a procedure in which the body is frozen immediately after legal death, in the hope that improvements in technology will allow the person in question to be somehow revived at some future point):

after attending this event, and talking to the perfectly ordinary parents who signed their kids up for cryonics like the goddamn sane people do, I’m going to come out and say it:  If you don’t sign up your kids for cryonics then you are a lousy parent. [emphasis mine]

That claim struck me as irritating, frustrating.  Pushed my buttons, you could say.  Some things that bother me:

  • Claims that all people not following some fringe idea are lousy people.  Whatever the merits of the idea in question, it’s actually quite hard to distinguish the fringe ideas that are great from the fringe ideas that are terrible; that most people favor the status quo is not surprising.  (Not that I necessarily expect the mainstream to be correct, but being on the fringe doesn’t necessarily mean he’s right, either.)
  • Assertions that one is (approximately) the Only Sane Man, especially immediately following an appeal to emotion / explanation why the speaker might not be thinking clearly on the topic in question.  Yes, it’s appropriate to feel emotions that fit the facts, but strong emotion can cloud thinking as well, and people who feel emotions based on false premises tend to think that their emotions fit the facts, too.
  • Lauding oneself as “a hero”.  (Literally!)
  • Overstatement before consensus.  That is, it’s not enough to state that one is correct while (most of) the rest of the world is wrong; one must state that one’s chosen conclusion is obvious, True, “massively overdetermined”.

The above isn’t to say that Yudkowsky’s position in favor of cryonics is wrong, necessarily, just that his rhetoric is terrible.  And I don’t think his argument is as strong as he thinks.

The arguments for cryonics strike me as a sort of cross between Pascal’s Wager* and the Drake Equation.  Take a bunch of numbers: the odds that one will be successfully cryopreserved; the odds that it will be done in a way that allows some sort of reanimation; the odds that the organization doing the preserving will remain economically stable enough to keep functioning; the odds that technology will get good enough that they’ll actually start reviving people (even given the potential for legal consequences in the event of failure… or success!).  Multiply them together, multiply that by infinity (live forever in a future paradise!), and disregard the equally (im)plausible dystopian possibilities as “exotic” and unlikely.  Is it worth paying a few hundred dollars a year for some small (how small? who knows?) chance at something really good, contingent on the development of some plausible (but not necessarily possible) future technology?  Maybe?
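For what it’s worth, the structure of that argument is easy to make concrete.  Here’s a toy version, with every number invented for illustration (the whole problem being that nobody knows the real values):

    # A Drake-Equation-style expected-value argument for cryonics.
    # Every probability below is made up purely for illustration.
    p_preserved = 0.5   # successfully cryopreserved at legal death
    p_intact    = 0.1   # preserved well enough that revival is possible in principle
    p_solvent   = 0.3   # the organization stays solvent long enough
    p_revived   = 0.05  # the technology arrives and actually gets used on you

    p_success = p_preserved * p_intact * p_solvent * p_revived
    print(f"chance of revival: {p_success:.4%}")  # 0.0750% with these numbers

    # The Pascal's Wager move: multiply any nonzero p_success by an unbounded
    # payoff (immortality in a future paradise) and the expected value swamps a
    # few hundred dollars a year, no matter how small p_success is.

And that’s why the “disregard the dystopian possibilities” step is doing so much work: an equally unbounded negative branch would wreck the arithmetic.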

The Alcor (cryopreservation organization) FAQ states (excerpted for brevity):

Q: Why isn’t vitrification reversible now?

A: To vitrify an organ as large as the brain, Alcor must expose tissue to higher concentrations of cryoprotectant for longer periods of time than are used in conventional organ and tissue banking research. The result of this exposure is biochemical toxicity that prevents spontaneous recovery of cell function. In essence, Alcor is trading cell viability (by current criteria) in exchange for the excellent structural preservation achievable with vitrification.

The nature of the injury caused by cryoprotectant exposure is currently unknown. […]

Q: Has an animal ever been cryopreserved and revived?

A: […] it should be obvious that no large animal has ever been cryopreserved and revived. Such an achievement is still likely decades in the future. […]

(An actual example of a human being put into cryostasis and revived would strengthen Yudkowsky’s argument quite a bit, putting the lower bounds on the probability of successful reanimation above zero.)

As far as getting you to this state of “biochemical toxicity that prevents spontaneous recovery of cell function” from which no mind has ever been recovered, well, those odds at least seem a bit better.  Of the nine patients Alcor preserved in 2009 and early 2010, three were cryopreserved within hours; the rest were moved to moderately cold storage within hours and actually cryopreserved within days.

A recent post by Will Newsome critiquing Yudkowsky’s and others’ positions on cryonics puts it well:

Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time) […] That said, the reverse is true: not getting signed up for cryonics is also not obviously correct. The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong. Strong arguments are being ignored on both sides. The common enemy is certainty. [emphasis theirs]

They go on to note:

I don’t disagree with Roko’s real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one’s real values would endorse signing up for cryonics, it’s not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn’t come up with a confident no.  Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn’t obviously correct.

That’s an interesting thought.  What should be the threshold for giving ideas “extremely careful reasoning”?  Plausibility?  Potential?  I think that might be a hard standard to live up to; there could be a lot of plausible ideas with “staggering potential benefit”.  Plenty of utopian schemes, for example.  Even if you insist that the benefit include immortality for yourself, there are still lots of ideas to consider.

However, I will agree that scientists in the field, at least, should be seriously thinking about plausible ideas with staggering potential benefit in their field, which is not happening in this case.  The paper at that link mentions ethical and PR concerns, often under-informed by actual science, as reasons for cryobiologists’ rejection of cryonicists.  But it fails to note how cryonicists’ rhetoric might also contribute to that effect.

Idea #1:  Have an idea that could change the world?  Pay attention to rhetoric.

* Since Yudkowsky has a reply to an argument along that line, I will say that’s not the argument I’m making; I don’t think the probability of being revived into a future (approximate) utopia is vanishingly small merely because that outcome would be really awesome.
