Effective Altruism at the Margins

With one Sam Bankman-Fried in the news again (now realizing that nominative determinism on that last half-surname is not working out as well for him as he might have hoped), I thought it might make sense to write about Effective Altruism, the philosophical approach towards charitable giving to which that crypto scammer was unfortunately a prominent proponent. This blog is about things that have influenced my thinking, after all, and I’d say that Effective Altruism, which I first encountered via Peter Singer’s book The Life You Can Save and organizations like Giving What We Can and GiveWell, has been a significant influence. Prior to that, the approaches to choosing between altruistic causes I’d encountered focused either on personal connection or purely internal measures of organizational efficiency, things like overhead ratios. There’s some relevance to that, an organization that spends 100% of its budget on fundraising and administration obviously doesn’t have any budget left for direct use on the actual mission. But overhead ratios don’t measure what an organization actually does, and neither fundraising nor administration (planning, research, coordination, logistics) are off-mission per se.

In breaking from that, EA was willing to bite some bullets regarding things like executive salaries in the nonprofit sphere. It may hurt effectiveness if the most talented nonprofit executives are stomaching too much of a pay-cut relative to industry. Of course, it’s easy to see how for someone pulling down big bucks in the nonprofit space could find this a convenient justification. Similar for the idea of “earning to give”. For an audience of overly-conscientious people wracked with guilt over the prospect of taking a normie job in the professional managerial class, I think it’s reasonable to point out that taking the obvious path career-wise and donating money might not be so bad in terms of having a positive impact on the world, over doing something more directly altruistic where you have less comparative advantage as your career. But that too can be taken as rationalization and basically turned on its head. The point of “earning to give” is that it’s okay(-ish) for very conscientious people to work normal jobs in finance or whatever. Not as a justification for people behaving the complete opposite of what you’d expect from “very conscientious” in those jobs.

Part of of the problem with Effective Altruism is a problem with utilitarianism in general. A comparative implies a superlative and there’s not really a sound point to stop. This is one of the ideas that’s been rattling around my head so long that I’ll try to state it pithily:

Idea #8: There is no good theory of moral sufficiency.

Zvi Mowshowitz wrote an interesting review of Michael Lewis’s new biography of SBF, and the bit where Zvi puts an interesting perspective on it is in the comparisons to concepts of unfriendly AI. Will MacAskill’s introduction of Effective Altruism’s “number go up” perspective put SBF on the path of a totalizing optimizer:

Even more than that, if you take such abstractions too seriously, if you follow the math wherever it goes without pausing to check whether wrong conclusions are wrong? If you turn yourself into a system that optimizes for a maximalist goal like ‘save the most lives’ or ‘do the most good’ along a simple metric? What do you get?

You get misaligned, divorced from human values, aiming for a proxy metric that will often break even on the margin due to missing considerations, and break rather severely at scale if you gain too many affordances and push on it too hard, which is (in part, from one perspective) the SBF story.

Yet SBF did not take such concerns seriously. […]

MacAskill set SBF on a maximalist goal using an abstracted ungrounded simplified metric, hoping to extract a maximal amount of SBF’s resources for MacAskill’s (on their face altruistic) goals.

The idea of an optimizer that attempts to max out some quantity in the universe is a central one in AI safety theory because it seems to be one that almost invariably goes wrong. Even innocuous or nominally altruistic goals in the hands of such an optimizer end up being extremely hostile in practice. There’s a concept of instrumental convergence, this common set of goals that this style of optimization implies regardless of the initial goal. An optimizer on whatever goal doesn’t want to be stopped; if it’s stopped, it won’t achieve the goal. It doesn’t want to have the goal changed; if it changes its objective, it won’t achieve the (current) goal. SBF was unlike this in many ways:

Imagine a world in which SBF’s motivations had even less anchors to human intuition, and also he had a much larger capabilities advantage over others (say he was orders of magnitude faster, and could make instantiations of himself?) and he had acted such that the house of cards had not come crashing down, and instead of taking the risks and trying to score object-level wins prematurely he had mostly instead steadily accumulated more money and power, until no one could stop him, and his inclination to risk all of humanity every time he felt he had a tiny edge under some math calculation.

But it would surely be apt to describe him as incorrigible.

Still, I think Effective Altruism has a lot to say at that comparative level. It really is good to help more people with the level of effort you choose to put in. There is a big divide between efforts focused on straightforward ways of helping some of the world’s poorest people and nonprofits that are very much not that (e.g. Make-a-Wish, Harvard University).

It reminds me of another influence, Bryan Caplan’s Selfish Reasons to Have More Kids, I found it very interesting in terms of how it approached that argument via marginal economics. It’s notably not making an argument that people should have as many children as possible, or that some specific number is the right number. Rather, it argues that raising children is often more rewarding and less painful than people estimate, therefore many people underestimate how many children they should have.

Effective Altruism makes a similar argument: Make a habit of donating to organizations that do more good for each dollar, that measure their impact and attempt to improve their effectiveness, and you can do more good more easily than you might expect. Perhaps you can, perhaps you should do more.

In the Field of AI

From the news around the technology world lately, it really seems like another AI summer. Winter may come again, of course, but now it’s time for the harvest and the plants are laden. On the game-playing front, Deep Mind’s Stratego AI strikes me as extremely cool. (Really loved that game when I was younger, and I surely should pick it up again, especially now that kid is getting into games.) But it’s OpenAIs large language model (LLM) in a chatbot, ChatGPT, that seems to have garnered the most attention. I can see why, the thing is exciting, capable of the sort of natural language interpretation and creative writing tasks that would have seemed impossible for a computer program a decade ago. Certainly far beyond the likes of ELIZA.

One interesting thing about Chat GPT is that it clearly knows a lot of stuff (functionally; arguable here we have knowledge without any belief at all). GPT’s game is text prediction and clearly knowledge representation is key to predicting likely responses. Even false responses orbit around correct knowledge representation if you want your answers to be likely and sensible. For example, take this dialog from just now:

Me: Tell me five incorrect facts about Boston

ChatGPT:

  1. The Boston Tea Party did not actually take place in Boston – it took place in the nearby town of Dorchester.
  2. The Red Sox did not always play at Fenway Park – they played at the Huntington Avenue Baseball Grounds until 1911.
  3. Paul Revere did not ride alone on his famous midnight ride – he was accompanied by at least two other riders, William Dawes and Samuel Prescott.
  4. The Freedom Trail does not actually mark the route that the colonists took during the Revolutionary War – it is a self-guided tour of historical sites in the city.
  5. The first public school in the United States was not founded in Boston – it was founded in the nearby town of Dedham in 1644.

It doesn’t do a perfect job at this sort of task. Sometimes it refuses to provide incorrect information, with a sort of why-I-could-never form-letter-ish response. And here it’s worth noting that it fails in the task in points 2 and 3 and presents correct information instead of intentionally altered. The more layers of transformation applied to some underlying task, the more likely it is that something gets lost in translation.

And one of the underlying tasks is not being a huge embarrassment to whatever company put the thing out there in the first place. If you have a program that’s a great generalist at writing text in a variety of styles, there are a great number of arbitrarily unpleasant styles to choose from. Scott Alexander of Astral Codex Ten discusses this in a post whose title posits Perhaps It Is A Bad Thing That The World’s Leading AI Companies Cannot Control Their AIs:

Every corporate chatbot release is followed by the same cat-and-mouse game with journalists. The corporation tries to program the chatbot to never say offensive things. Then the journalists try to trick the chatbot into saying “I love racism”. When they inevitably succeed, they publish an article titled “AI LOVES RACISM!” Then the corporation either recalls its chatbot or pledges to do better next time, and the game moves on to the next company in line.

To some extent, this strategy amounts to taking a LLM, ever eager to please, and politely asking it to be a good little bot. But it turns out this can be gotten around by some combination of “asking politely”, “polite but firm insistence”, and “wrapping the thing it’s not supposed to do in another layer of creative writing task” (LLMs (effectively) love creative writing tasks). The thing that these sorts of “prompt engineering” have in common is that they’re hilarious.

This sort of thing is interesting when viewed in the frame of “AI safety”: You have a software system, you want it to do certain sorts of things and not do other sorts of things, it’s capabilities are not clearly defined, the things you want it to do and not do are also pretty abstract! AI safety as a field is split into two, with one faction primarily concerned about the effects of presently-existing software systems, the other concerned about the sort of capability amplification that would turn those effects into an existential threat.

Personally (with a note that this is talking about my personal inclinations/biases here, not some objective reality), I’ haven’t been the most worried about AI safety from the second perspective. I’m not sure you get the sort of totalizing optimization out of AI systems that let it just casually take out human society. Some of the biggest proponents of that sort of AI safety seem to casually assume that with enough intelligence you just don’t need to bother with physical constraints.

And present AI systems, despite doing some sort of reinforcement learning, don’t seem to be that kind of agentic optimizers at all. Systems like AlphaGo are basically calculators. Hit the button, and it computes winning moves. It also computes slightly better ways of computing winning moves. In some sense it has a “goal” of winning games of Go, but it won’t look for side-channel exploits to directly influence its opponent or its operators. It won’t flip the board. It won’t try to avoid games being interrupted or the system being entirely turned off. It is entirely reactive. ChatGPT is similarly totally reactive. Which I’d posit means, given fairly standard assumptions about how conscious experience works, it doesn’t have any. And also that it doesn’t do the sorts of optimization that require doing anything proactively.

Yet what gives me a grain of doubt about all of that is that it clearly is getting really effective knowledge representation (in some sense, knowing without experiencing, knowing without believing, creativity (of some sort) without consciousness) just from those sorts of models getting larger and more sophisticated. I wouldn’t expect agentic goal-orientation to be the sort of abstraction that similarly falls from a blue sky. And, of course, how would I know; maybe those more knowledgeable are more (or less) confident. But I largely agree with Scott’s conclusion here: It seems a big problem that “the world’s leading AI companies do not know how to control their AIs”. Whether or not AI “goes foom”, we’re moving in the direction of systems whose capabilities (and desired/undesired behaviors) are harder to describe. It’s summer, the sun shines down, the manioc stretches into the distance, ready for harvest.

Wisher, Theurgist, Fatalist

ROSE: The day we went through that door and claimed our reward, we passed a threshold between continua marked by differing degrees of relevance, truth, and essentiality.
ROSE: Those are the three pillars of canon.

The Homestuck Epilogues, Prologue 2

For a long time, I’ve been part of an indie tabletop roleplaying game group with a couple of friends, and for a long time I’ve been meaning to write a bit here about one of the greatest games I’ve played, Jenna Moran’s Wisher, Theurgist, Fatalist & Weaver of Their Fates (also known by its apt initialism, WTF). It’s been released only as the somewhat rough-cut PDF at that link. (I’d love to have a nice hardcover edition, if that were ever produced.) Despite that, I think it’s one of the best RPG source-books I’ve ever read, minimalist, expressive, and IMO really, really funny.

Jenna Moran is known for a variety of indie RPG’s including Chuubo’s Marvelous Wish-Granting Engine (Ghibli-esque laid-back urban fantasy), Nobilis (about gods of little things conscripted into a war to defend reality), Glitch (about deserters from the opposite side that same war living one day at a time). Jenna Moran’s RPGs are a lot of fun, but tend to pile up a lot of interesting narrative concepts and systems, making the minimalist approach of WTF really stand out.

In WTF, each character has a set of relevant “gifts”. A bulleted list of traits like this is not at all unusual in tabletop RPGs. What’s unusual here is that meta-level properties are explicitly called out in the rules of the game:

  • Gifts can be helpful, a mixed bag, or a perpetual hinderence (valence).
  • They can fit into the rules that resolve bits of the narrative always, situationally, or not at all (mechanical support).
  • They can be a key part of the setting, consistent with the setting, or even inconsistent with it (truth).

The three classes in the game specialize in one of three stats which interact with these three properties:

  • Wishers specialize in Harmony, which can alter gifts’ valence.
  • Theurgists specialize in Insight, which can alter gifts’ mechanical support.
  • Fatalists specialize in Knowledge, which can alter gifts’ truth, or create new gifts.

Each of these also has an associated, analogous rule for resolving meta-level things about the game:

  • Harmony can decide what something in the game means, what should be (“Everyone should now play WTF in such a fashion as to make believing [the winner’s] explanation both appropriate and valuable.”)
  • Insight lets you change the rules of the game and define what must happen.
  • Knowledge lets you define things about the setting and decide what’s true.

In addition to those three classes, one of the players can be the Weaver (analogous to the usual “game master” role). Or you can do without. There are also rules for interacting with the shadow of each of those attributes, indulging in wickedness, arguing in bad faith, or subjecting the setting to rewrite and retcon.

The sourcebook for WTF describes the characters’ quest to seek the Jewel of All Desiring and wish for a perfected world. It defines rules and setting which make that impossible, and the rules described above which could make that otherwise. Abstract concepts about how this game works in particular and also how TTRPGs work in general are explained with absurdist flowcharts. This is a gamebook with flowcharts describing “how you come to be playing this game”. You probably know if this is the sort of absurd self-referential high-concept that would appeal to you. In any case, while I hope my description of the core concept intrigues, it cannot in any way substitute for the thing itslef.

I really love this game and it fits into my usual topic for this blog (“things that have influenced my way of thinking”) in that it has had a huge influence in how I think not just about tabletop roleplaying games but games in general. If you like narrative-driven tabletop RPGs, I recommend this one. If you’re less familiar with that sort of thing, maybe start with the example of play on p. 86, it’s short and a pretty great example of Moran’s writing.

The Neighborhood of Make-Believe

Again, it’s been an interval. Parenting, of course, does a number on my free-time. And I’ve never been a quick writer. But I’d like to pick up essay-writing again, so I’m going to just loop back to the topic of “things that influenced my way of thinking” and pick some topics and write.

Due to the aforementioned parenting, I’ve had a lot of opportunity recently to think about children’s television. And one thing that’s really stuck in my mind from my own youth was a contrast between the approach two iconic television shows took to the topic of imagination: Mr Rodger’s Neighborhood (1968-2001) and Barney & Friends (1992-2010, though I doubt I saw anything from the last half of that run).

If I had to summarize the central metaphor that each show took to imagination, I’d say that the former used “imagination is like a place you can go to” and the latter “imagination makes things real”. Neighborhood‘s central metaphor strikes me as useful and largely correct, allowing it to teach a variety of useful practical lessons about imagination by correctly emphasizing the fundamental unreality of imagination: You can go there when you want, you can bring in things that you’re grappling emotionally with in the real world, you can consider different ways things might play out, you can leave if it gets uncomfortable. Mr. Rodgers has a great deal of willingness to pull back the curtain on the unreal elements of the set

Barney, on the other hand, implies that the better you imagine, the realer the imagined thing is. Which strikes me as not as good a metaphor for teaching children about the value and beneficial use of imagination because it’s fundamentally incorrect! On the other hand, it’s a great metaphor to want your audience to adopt if you sell toys. You can take the unreality of toy commercials and imply that if the customer’s interactions with the toy don’t live up to the ad’s imagining, it’s not because you’re misleading them, it’s because they’re not imagining skillfully enough.

While I was thinking about this post, I came across this excerpt from a 1999 interview where Fred Rodgers discusses that topic. It starts with some trivia about the characters, but then it gets to discussing how that fits into the structure of the show. He describes a sort of psychoanalytic three-act structure:

I really feel that the opening reality of the program, we deal with the stuff that dreams are made of. And then, in the Neighborhood of Make-Believe, we deal with it as if it were a dream. And then when it comes back to me, we deal with a simple interpretation of the dream.

Personally, I feel like the sophistication of this approach really did shine through to me, even as a child. (Though more watching the show with my younger siblings as an older child than when I was in the preschool-age audience.) I haven’t really seen anything in more modern children’s television that approaches the topic quite so well, or attempts the same sort of structure. Daniel Tiger, the closes thing to a direct sequel, takes on some of the same characters and issues, and doesn’t really take a very different view of the topic of imagination, but it has an entirely different structure than its predecessor.

“The High Cost of Free Parking”

I recently (actually last July; parenting has done a number on my free time) read a book I’ve been meaning to get around to for a long time: Donald Shoup’s The High Cost of Free ParkingFor me, that’s been one of those books where it comes up in conversation and then you have to add the disclaimer that you’ve never actually read it, you’ve just heard interviews with the author or whatever.

The central idea is pretty well covered by this seven-minute video:

But if you watch that and think, “That was interesting, but does it come in the form of a six-hundred-page academic text?” then I definitely recommend the book! More seriously, the book is worthwhile reading for anyone in fields related to urban development, and I’d recommend it for those interested in the topic in general if they’ve got the time for it. Shoup explains the topic in a deep and compelling way, and the book is pretty lively reading for its length.

The part of the book about how urban planners model parking demand was a really interesting case study in a failure of engineering design. Off-street parking requirements were designed with one priority above all: Avoid any burden on on-street parking. That means they’re designed to predict peak demand and overestimate that, based on factors that can be easily measured and known in advance. Planning standards ended up with these detailed formulas for estimating parking demand that model the actual factors involved very poorly. To the extent that these models are empirically validated before joining the pantheon of planning best practices, the validation is presented in a way that’s statistically misleading, if not outright academic fraud.

The book also presents interesting case-studies that make clear the negative externalities of parking policy. Even some cities end up dedicating vast amounts of their core space to parking, pushing apart destinations and making the built environment inhospitable. Furthermore, crowded free street parking often accounts for a substantial percentage of urban traffic, with negative effects on everyone.

It’s been encouraging to see some of Shoup’s ideas catch on. The idea of limiting parking requirements and allowing for the “unbundling” of parking and housing costs have definitely come up in development in my neighborhood (where public transit is good enough that people might get by without a car, or at least families might get by with fewer cars). It seems likely to me that offstreet parking requirements are a major government regulation that’s been driving housing prices up. I think an important way to make housing more affordable is to allow a wider variety of housing; people can buy housing more affordably if there’s something available with just the quantity of space that they need. Same goes for quantity of parking.

I also wonder how the adoption of these ideas will be affected by a trend in young Americans driving later and less, which is quite a strange trend given a built environment that’s so dedicated to cars. Some of that trend is due to easier access to on-demand transportation services. But some of it is children’s sphere of freedom diminishing to nothing (IRL anyways) and I’m not sure what effect that will have as those teens grow up.

The Hegelian-Discordian Dialectic in the Era of Trump

Given this blog’s theme of “my thoughts and the influences behind them”, I really can’t pass up the chance to comment on this recent post by the Archdruid (Emeritus), John Michael Greer, a Hegelian-Discordian analysis of the recent election. Excellent, excellent post. And very relevant to me because of my own fondness for Discordianism, that great joke religion / joke / actual religion (hail Eris! all hail Discordia!). To recap the central bit:

The Hegelian theory of history involves phases of thesis (a worldview rises to dominance), antithesis (a reaction to that idea emerges to oppose it), synthesis (the two are reconciled somehow and the cycle begins again).

This violates the key Discordian principle that all worldviews (including Discordianism), are hopelessly broken in the face of ultimate chaos. It also maybe violates the Discordian principle of the Law of Fives: “All things happen in fives, or are divisible by or are multiples of five, or are somehow directly or indirectly appropriate to 5.”

Thus, Discordianism has its own Hegelian-Discordian theory of the cycles of history: Thesis, antithesis, synthesis, parenthesis, paralysis. Also described in terms of flavors of disorder: Chaos, Discord, Confusion, Consternation, and Moral Warptitude. In the last two phases, there are increasing efforts to paper over the insufficiencies, inherent contradictions, etc. in the newly-synthesized status quo, followed by everything going completely socioeconomireligopoliticulturally screwy.

Currently, the world is dominated by a neoliberal/neoconservatives status quo. Given that the world is dominated by similar modifications of nominally-antonymic ideologies with definitions that have substantially converged1, Confusion clearly abounds. And recent populist revolts against that status quo, which seem like a trend, also don’t fit the usual pattern of ideological Discord. (e.g. the voters for Brexit were doing so without some really basic facts about what policy changes would result and when, and Donald Trump’s willingness to say the “unacceptable” is a heck of a lot more consistent than his stance on basically any policy issue.) I’m not saying there aren’t ideological disagreements with the status quo present throughout all of this, but there’s a point where disagreements with the status quo are no longer in the driver’s seat and a general desire to tear down the status quo is.

Moving beyond the recap, that JMJ post got me thinking about the topic of internet trolls. I’ve joked that with the election of Trump, America’s first black President is being followed by America’s first troll President

Trolling seeks to upset or confound, to elicit a reaction. In parallel to the above, trolling can be ideological, but there’s a point where the desire to advance some ideology is no longer dominant and trolling for trolling’s sake is. So one could (as I am about to do now for the hell of it) take the Hegelian-Discordian Dialectic and turn it into an ad hoc taxonomy of trolling:

  • Chaos – Straightforward trolling
  • Discord – Trolling as a means to an overt political end
  • Synthesis – Concern trolling and other forms of covert provocation
  • Consternation – Trolling as an overt rejection of norms of civility
  • Moral Warptitude – Trolling for trolling’s sake

Given that analysis, 4chan’s “random” board, /b/, is one of the most dramatic and obvious examples of Moral Warptitude on the internet. To the extent that part of the internet can be unmoored from surrounding norms of civility (i.e. not completely), /b/ is. But to be a nexus of trolling means that it’s all “trolls trolling trolls”, and while such nexuses may be free from norms of civility in some sense, they’re surrounded by societies where those norms are still kicking. The form of chaos that characterizes the interactions at the interface between internet trolldom and the rest of society is Consternation. And, as previously established, we’re in a political moment where Consternation reigns supreme.

I doubt Trump is spending much time on /b/, and his trolling style is definitely more off-the-cuff and boorish than witty or elaborately self-referential. But Trump doesn’t have to be a paragon of trolling to be the world’s most successful internet troll. He just has to be an internet troll who’s just gotten himself elected President of the United States.

Because bigoted rhetoric confounds and upsets, insincerely adopting that rhetoric can be an effective form of trolling, and sincerely doing so can be an even more effective form of trolling. The current social media landscape is tuned to make resharing content easy, which means it’s tuned to making retrolling trolling easy. It’s reasonable to think that someone as central as the President tapping into this kind of thing could give the Overton window a giant shove, and there’s more directions in which we could go into an era of Moral Warptitude than just a general defeat of the norms of political correctness.

1. For example, a Google search for those two words reveals definitions of “relating to a modified form of liberalism tending to favor free-market capitalism” and  “relating to or denoting a return to a modified form of a traditional viewpoint, in particular a political ideology characterized by an emphasis on free-market capitalism and an interventionist foreign policy”, emphasis mine. (The definitions seem to be from here and here.)

Not Just About a Particular Election

A few questions:

  • How skeptical are you about the ability of [centrist/radical] approaches to achieve good outcomes and avoid bad outcomes?
  • What is your general sentiment about [ambitious/compromising] goals?
  • When an [attribute] politician is accused of being egregiously [negative trait], how much are you inclined to correct for a perceived bias against [attribute]?
  • When evaluating how much you like or trust someone, what factors matter most to you? Are you more inclined to go with the crowd in your evaluation, are you contrarian, or just idiosyncratic?
  • In your evaluation of the status quo, do you put relatively more emphasis on positive or negative things? (That is, are you more of a “glass half full” or “glass half empty” kind of person?)
  • How comfortable are you with ambiguity?
  • Do you think more about strategy or tactics?
  • How well are things going for you personally? If the answer is “well”, are you inclined to think that success is based on things that are stable or unstable? If the answer is “poorly”, how much to you agree or disagree with “better the devil you know”?
  • When you tried working [within/outside of] “the system”, how has that worked out for you personally in the past?
  • What is your general sentiment towards “the mainstream”?

Why do people who mostly agree make different predictions about contingent future outcomes?

How confident are you that any reasonable person would agree with your predictions?

Thoughts on “Victimhood Culture”

This article in the Atlantic has been making the rounds recently, commenting on a recent scholarly paper (sadly paywalled) on the theory of microagression. Some things that struck me about the piece:

1. It’s strange that most of the commentary on the article acts as if Campbell and Manning (the authors) are dispassionate sociologists, when they clearly have a dog in the fight. They’re charactering past cultures in terms of virtues those cultures nominally value, then don’t even try to identify what virtue the modern culture they disparage is reaching for. It might be accurate to speak of a “solidarity culture”, where the way to respond to a slight is to encourage mass opprobrium, and shibboleths and linguistic norms that demonstrate in-group identity are of paramount importance.

2. It’s really strange that the Atlantic article comments extensively on a blog post from nearly two years ago. Sure, the blog has “microaggressions” in the title, but the Oberlin Microaggresions Tumblr was active from February to September 2013. Despite the title, the stuff it started off cataloging doesn’t exactly fit the bill. (The point of microaggressions is that stuff that’s not overtly aggressive can still be grating, not that it may be ambiguous to what extent an overtly awful person is being a troll.)

3. That blog starts out as a discussion of really overt racism, continues with posts that are a mix of overt racism and the sort of thing actually meant by “microaggressions”, then ends with an angry rant by a Hispanic student who tells a white student to “leave the soccer team” for daring to speak a word of Spanish, mocks their attempt to apologize, and asserts that they “take up to [sic] much space”. The blog ends at that point, with no explanation why. Probably whoever was running the blog moved on to other things, but it would fit the narrative arc to say that last post was some sort of culmination of the state of racial discourse at Oberlin, at which point students decided to never write about that subject, or possibly any subject, ever again. (At the very least, such a narrative would make fine fodder for an Atlantic article.)

4. The article notes:

If “dignity culture” is characterized by a reticence to involve third parties in minor disputes, an argument could be made that many black and brown people are denied its benefits. In a city like New York during the stop-and-frisk era, minorities were stopped by police because other people in their community, aggrieved by minor quality-of-life issues like loitering or sitting on stoops or squeegee men, successfully appealed to third-parties to intervene by arguing that what may seem like small annoyances were actually burdensome and victimizing when aggregated.

To what extent are non-collegians engaged in policing microaggressions by another name?

If you already have political power, it is easy to be dignified. Simply appeal to the law only for serious matters, once your culture has successfully set the definition of what is “serious”. Anything not serious can be easily ignored.

5. Were the lunch counter sit-ins of the 1960s a product of “dignity culture” or “victimhood culture”? Those protests neither “exercised covert avoidance” nor “sought only to restore harmony without passing judgment”. They appealed for political support against something other than “the most serious of offenses”. Was that an example of “toleration and negotiation”, or a “complaint”, aimed at winning the political support of third parties?

6. A Megan McArdle piece on the same article notes (of duels):

The seconds, the formalities, the extended opportunities for apology, raise the cost of fighting, lower the cost of not doing so, and thereby mitigate the appalling violence to which honor cultures are prone. Unless victim culture can find similar stopping mechanisms, it will collapse into the bloodless version of the endless blood-feuds that made us seek alternatives to honor cultures in the first place.

“Bloodless” is still more than enough to ruin lives, of course. And even when overt violence has been relegated to the margins, any sufficiently big mob is enough to give a violent fringe plenty of motive force.

7. The Atlantic article links to a post by Jonathan Haidt. Haidt wrote a book called The Coddling of the American Mind. In the page on his site where he talks about critical response to the book, he writes:

The New Republic: The trigger warning myth, by Aaron Hanlon. This is a thoughtful essay about the sensitivities needed to lead a seminar class through difficult material. His main point is that TWs are not a form of censorship. I agree. He argues that sometimes guidance is needed beforehand. I agree with that too. I just think its very bad for students to call it a “trigger warning,” or to do anything to convey to students the expectation that they will be warned about… everything.

So you want to write a book about how annoying liberals are, but lack any substantial disagreement. Nothing to do but get into a knock-down drag-out fight about linguistic norms.

8. “Political correctness has gone too far” has gone too far. Well, that’s the joke. More accurate would be: “Political correctness has gone too far” has not gone anywhere.

Asimov by Way of Japan

I recently enjoyed revisiting a favorite anime series, Time of Eve. There are a couple of things that stand out about the series. First, it’s by Yasuhiro Yoshiura, who is a brilliant director of short animated works (like the earlier Aquatic Language). Second, the format was really unusual, it was a series of six fifteen-minute episodes released online, later compiled into an (almost identical) movie. Third, it’s straight-up Asimovian science fiction, Three Laws of Robotics and all.

I really like Asimov stories, in large part because they’re generally stories about friendly AI. Not that I dislike stories about unfriendly AI, but it’s easy for a story about unfriendly AI to fall into the same tropes as other monster or disaster stories. A lot of the Asimov short stories are straightforward mysteries, but there’s also room for telling tales with a lot more ambiguity.

The story is set in a world where robots are at a casual glance indistinguishable from humans, save for their rigid patterns of speech and behavior and the holographic status rings that hover above their heads. A high school student notices an odd entry in the log of his house android, and follows that to a strange cafe, Time of Eve, with a single rule clearly posted at the entrance: “In this cafe, make no distinction between humans and robots.”

Many of Asimov’s stories focus on issues of industrial activity or political struggle. But the Powell and Donovan of this series are not industrial debuggers but high school students. The story has strong themes of conventions of behavior versus a desire to express one’s true feelings and to understand the feelings of others. The cafe of the title is a place where one can enjoy a moment of quiet contemplation, amidst an epochal change in society that has not quite become manifest, not yet.

If you’re a fan of Asimov’s robot stories or would enjoy an interesting take on that sort of setting, I definitely recommend this work. The first episode can be viewed for free on Crunchyroll (though splitting up a 15-minute episode with ads is rather unfortunate), paid members can stream the rest (but there’s a free trial). The movie can be purchased here.

Whiteboard Teamwork?

Circus Manager: How long have you been juggling?
Candidate: Oh about six years
Manager: Can you handle three balls, four balls, and five balls?
Candidate: Yes, yes and yes
Manager: Do you work with flaming objects?
Candidate: Sure.
Manager: …knives, axes, open cigar boxes, floppy hats?
Candidate: I can juggle anything
Manager: Do you have a line in funny patter that goes with your juggling?
Candidate: It’s hilarious
Manager: Well that sounds fine. I guess you’re hired.
Candidate: Umm… Don’t you want to see me juggle?
Manager: Gee, I never thought of that.

(From Peopleware by Timothy Lester and Tom DeMarco, 1987.)

Hiring programmers without seeing them program was last year’s technical interview mistake.

But hiring collaborators without seeing them collaborate is today’s.

I was at BarCamp Boston today, and attending a panel on conducting technical interviews got me thinking about the subject of “culture fit”. It’s a ubiquitous phrase in material about technical interviews, but it’s a phrase viewed with skepticism, even derision, by people who view tech company culture with a critical eye. And for good reason! First impressions are ambiguous, it’s easy for unconscious bias to sneak in, and people are inclined to view those more similar to themselves as being more likeable.  But diversity of opinion can be a powerful defense against certain sorts of groupthink-y bad-decision-making patterns, and it can be very worthwhile to have teammate members who disrupt the status quo in productive ways.

“Culture fit” can’t just be ignored, though, since that vague concept does cover some genuinely relevant skills.  It’s important that a new hire will work effectively with other members of their team. But the way those skills are measured in programmer interviews is rather similar to the old way of hiring programmers that inspired the Peopleware parable I quote above: Asking some tangentially-related questions and getting a gut sense that someone will do well.

I’ve never seen tech company interviews that try to test collaboration directly, but I actually have seen one interview process that did. My alma mater, Olin College, has a two-stage admissions process, and their Candidates’ Weekend includes a group exercise / interview that allows candidates’ collaboration skills to be directly challenged and observed.

A few people at the panel mentioned things they’d seen tech companies do to try to test collaboration skills in an interview:

  • Pair programming with candidates on prepared exercises.
  • Having employees work together with candidates briefly on the employees’ personal or open-source projects.
  • Having exercises where the candidate does a code review.

There were also a few mentions of things done to give candidates more of a trial period and mitigate the cost of leaving early if it wasn’t a good match:

  • Bringing on candidates as fixed-term contractors first.
  • Mitigating the financial costs of quitting (by paying employees who leave).

Some of those things might mitigate concern about “culture fit” when hiring.

Still, I think that when it comes to interviewing programmers, a somewhat mature methodology has been build up for judging programming ability, but the methodology for interviewing for collaboration ability still has a lot of room to grow.  What’s the FizzBuzz of measuring collaboration ability?