Entries in medicine (3)

Monday
Sep 19, 2011

Second City Epidemiologist

I watched The Interrupters this weekend, and I second this review: it’s well worth seeing.  The documentary chronicles the front-line agents of the organization CeaseFire, the Violence Interrupters.  CeaseFire’s founder, Gary Slutkin, is an epidemiologist who formerly worked for the World Health Organization, and he takes the analogy of the “violence epidemic” very seriously.  The approach is similar:

  1. Identify outbreaks (violent incidents)
  2. Respond at the center with a focus on limiting transmission (discouraging new retaliation by those not already involved)
  3. Build long-term resilience with vaccinations, sanitation, and so on (change norms)

A more comprehensive approach also fits into this analogy:  The infected are quarantined (criminals captured) and treated (rehabilitated) or institutionalized.  CeaseFire’s efforts, though, are mostly focused on the steps above, particularly step two.

On the non-metaphorical health front, similar efforts have been similarly successful.  One example from The Checklist Manifesto was particularly vivid in my mind while watching the movie: a study in which soap, along with simple instruction on handwashing methods and habits, was distributed to impoverished communities.  The results were dramatic.  But those results relied on the cooperation of the participants, and it would be a mistake to assume that their behavior was influenced primarily by the mere availability of soap.  The instruction was also a factor.  And one reason found in a follow-up study for why that program had been more successful than some similar efforts was that the soap used was of particularly high quality.  It smelled good, felt good on the hands.  Washing with it was pleasant.

One question for CeaseFire, then, is not just how best to educate about nonviolence, or how to bring social pressure to bear in favor of nonviolence, but how to make nonviolent conflict resolution “smell good”.  (The movie contains some interesting ideas in relation to this question, I think, though it doesn’t address it directly.)

For further reading, see this post on CeaseFire as applied anthropology.  Also related to the topic of violence in Chicago, and the source of this post’s title: this blog.

Wednesday
Aug 31, 2011

FLY THE PLANE

I know it’s been a while since I’ve posted here, and I still don’t have a full post together.  But I would like to write briefly about a book I read recently, Atul Gawande’s The Checklist Manifesto.

In the book, Gawande advocates for the use of checklists as a means of improving outcomes in medicine.  He bases his analysis on three key cases:

  1. The use of coordination checklists to ensure essential communication between the parts of a construction team
  2. The use of “read-do” and “do-confirm” checklists (routine and contingency) in the airline industry, with a particular look at the case of US Airways Flight 1549, the recent “Miracle on the Hudson”
  3. The design and testing of the World Health Organization’s Safe Surgery Checklist

The book is a great example of popular nonfiction:  The information is interesting, the narrative is compelling, and the argument is sound.  The tradeoffs involved in the WHO’s design process were also interesting to me as an engineer.  A checklist (in this use) isn’t an algorithm for amateurs, but a tool to help someone who already has a great deal of expertise.  The key is to identify the tasks where a reminder is of greatest benefit: maximize the product of the likelihood that a task would be missed without a reminder and the magnitude of the consequences if it is.  Extremely high-level goals often end up omitted, since they won’t be forgotten in any case.  On the other hand, sometimes important things are easy to forget in crisis situations; the subject line of this post comes from a checklist for restarting a dead jet engine (the result, one hopes, of some embarrassing simulator incidents).  When the tasks themselves are unknown, the key is identifying which communication tasks have the highest probability of catching serious potential problems before they actually occur, so the risk can be mitigated.
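To make that prioritization concrete, here’s a minimal sketch of the reminder-value heuristic described above.  The checklist items and all of the numbers are hypothetical, invented purely for illustration:

```python
# Rank candidate checklist items by the product of (probability the
# step is missed without a reminder) and (severity of the consequences
# if it is).  Items and numbers are hypothetical, for illustration only.

candidates = [
    # (item, p_missed_without_reminder, severity on a 0-10 scale)
    ("confirm antibiotics given within 60 minutes", 0.10, 9),
    ("introduce all team members by name and role", 0.40, 4),
    ("verify patient identity and surgical site",   0.01, 10),
    ("remember the overall goal of the operation",  0.001, 10),
]

# Items with high reminder value float to the top; the high-level goal
# ranks last despite its severity, since it is almost never forgotten.
for item, p_miss, severity in sorted(
        candidates, key=lambda c: c[1] * c[2], reverse=True):
    print(f"{p_miss * severity:6.3f}  {item}")
```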

If you’re interested in medicine or engineering, or just like reading nonfiction in general, read it.


Friday
May 28, 2010

Chill Out!

This post is entirely unrelated to my last post, expect that to be a theme.

So, I’ve been reading a lot from Less Wrong lately; it’s a blog on “human rationality” and quite the wiki walk.  One of the major posters is Eliezer Yudkowsky: an AI researcher for SIAI, the creator of the AI Box thought experiment, and a fiction writer of considerable skill.  The reason I’ve been reading Less Wrong recently is that I ran into Yudkowsky’s work in a rather unexpected place and followed it back.

Anyways, I was going to write about some of the logic puzzles from Less Wrong, but then ran into something more interesting: this post from some months ago in which Yudkowsky talks about attending a conference for those signed up for post-mortem cryonics (the really short definition: a procedure in which the body is frozen immediately after legal death in the hopes that improvements in technology will allow the person in question to be somehow revived at some future point):

after attending this event, and talking to the perfectly ordinary parents who signed their kids up for cryonics like the goddamn sane people do, I’m going to come out and say it:  If you don’t sign up your kids for cryonics then you are a lousy parent. [emphasis mine]

That claim struck me as irritating, frustrating.  Pushed my buttons, you could say.  Some things that bother me:

  • Claims that all people not following some fringe idea are lousy people.  Whatever the merits of the idea in question, it’s actually quite hard to distinguish between fringe ideas that are great and fringe ideas that are terrible; that most people favor the status quo is not surprising.  (Not that I necessarily expect the mainstream to be correct, but being on the fringe doesn’t necessarily mean he’s right, either.)
  • Assertions that one is (approximately) the Only Sane Man, especially immediately following an appeal to emotion / explanation why the speaker might not be thinking clearly on the topic in question.  Yes, it’s appropriate to feel emotions that fit the facts, but strong emotion can cloud thinking as well, and people who feel emotions based on false premises tend to think that their emotions fit the facts, too.
  • Lauding one’s self as “a hero”.  (Literally!)
  • Overstatement before consensus.  That is, it’s not enough to state that one is correct while (most of) the rest of the world is wrong; one must state that one’s chosen conclusion is obvious, True, “massively overdetermined”.

The above isn’t to say that Yudkowsky’s position in favor of cryonics is wrong, necessarily, just that his rhetoric is terrible.  And I don’t think his argument is as strong as he thinks.

The arguments for cryonics strike me as a sort of cross between Pascal’s Wager* and the Drake Equation.  Take a bunch of numbers (odds that one will be successfully cryopreserved, odds that it will be done in such a way that allows some sort of reanimation, odds that the organization doing the preserving will remain economically stable enough to continue to function, odds that technology will get good enough in the future that they’ll start to revive people (even given the potential for legal consequences in the event of failure… or success!)), multiply that by infinity (live forever in a future paradise!), disregard equally (im)plausible dystopian possibilities as “exotic” and unlikely.  Is it worth paying a few hundred dollars a year to have access to some small (how small? who knows?) chance at something really good in the event of the development of some plausible (but not necessarily possible) future technology?  Maybe?
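Just to make the shape of that calculation visible, here’s a back-of-the-envelope sketch.  Every number in it is invented for illustration; the point is the multiplicative structure, not any particular estimate:

```python
# Chained odds, Drake Equation style.  All probabilities below are
# made-up placeholders, not estimates from any actual source.

p_preserved   = 0.5   # successfully cryopreserved soon after death
p_info_intact = 0.3   # preservation retains whatever matters for revival
p_org_endures = 0.3   # the organization stays solvent long enough
p_revived     = 0.1   # revival becomes possible and is actually attempted

p_success = p_preserved * p_info_intact * p_org_endures * p_revived
print(f"chance of revival: {p_success:.4f}")  # 0.0045 with these guesses

# The Pascal's Wager flavor comes from the payoff side: multiply any
# nonzero p_success by an unbounded benefit ("live forever in a future
# paradise!") and the expected value dominates a few hundred dollars a
# year in dues, no matter how small the chain of odds makes p_success.
```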

The Alcor (a cryopreservation organization) FAQ states (excerpted for brevity):

Q: Why isn’t vitrification reversible now?

A: To vitrify an organ as large as the brain, Alcor must expose tissue to higher concentrations of cryoprotectant for longer periods of time than are used in conventional organ and tissue banking research. The result of this exposure is biochemical toxicity that prevents spontaneous recovery of cell function. In essence, Alcor is trading cell viability (by current criteria) in exchange for the excellent structural preservation achievable with vitrification.

The nature of the injury caused by cryoprotectant exposure is currently unknown. […]

Q: Has an animal ever been cryopreserved and revived?

A: […] it should be obvious that no large animal has ever been cryopreserved and revived. Such an achievement is still likely decades in the future. […]

(An actual example of a human being put into cryostasis and revived would strengthen Yudkowsky’s argument quite a bit, putting the lower bound on the probability of successful reanimation above zero.)

As far as getting you to this state of “biochemical toxicity that prevents spontaneous recovery of cell function” from which no mind has ever been recovered, well, those odds at least seem a bit better.  Of the nine patients Alcor preserved in 2009 and early 2010, three were cryopreserved within hours; the rest were moved to moderately cold storage within hours and fully cryopreserved within days.

A recent post by Will Newsome critiquing Yudkowsky’s and others’ positions on cryonics puts it well:

Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time) […] That said, the reverse is true: not getting signed up for cryonics is also not obviously correct. The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong. Strong arguments are being ignored on both sides. The common enemy is certainty. [emphasis theirs]

They go on to note:

I don’t disagree with Roko’s real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one’s real values would endorse signing up for cryonics, it’s not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn’t come up with a confident no.  Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn’t obviously correct.

That’s an interesting thought.  What should be the threshold for giving ideas “extremely careful reasoning”?  Plausibility?  Potential?  I think that might be a hard standard to live up to; there could be a lot of plausible ideas with “staggering potential benefit”.  Plenty of utopian schemes, for example.  Even if you insist that the benefit include immortality for yourself, there are still lots of ideas to consider.

However, I will agree that scientists in the field, at least, should be seriously thinking about plausible ideas with staggering potential benefit in their field, and that is not happening in this case.  The paper at that link mentions ethical and PR concerns, often under-informed by actual science, as reasons cryobiologists reject cryonicists.  But it fails to note how cryonicists’ rhetoric might also contribute to that effect.

Idea #1:  Have an idea that could change the world?  Pay attention to rhetoric.

* Since Yudkowsky has a reply to an argument along that line, I will say that’s not the argument I’m making; I don’t think the probability of being revived into a future (approximate) utopia is vanishingly small merely because that would be really awesome.