Entries in psychology (9)

Thursday, Apr 7, 2016

Not Just About a Particular Election

A few questions:

  • How skeptical are you about the ability of [centrist/radical] approaches to achieve good outcomes and avoid bad outcomes?
  • What is your general sentiment about [ambitious/compromising] goals?
  • When an [attribute] politician is accused of being egregiously [negative trait], how much are you inclined to correct for a perceived bias against [attribute]?
  • When evaluating how much you like or trust someone, what factors matter most to you? Are you more inclined to go with the crowd in your evaluation, are you contrarian, or just idiosyncratic?
  • In your evaluation of the status quo, do you put relatively more emphasis on positive or negative things? (That is, are you more of a “glass half full” or “glass half empty” kind of person?)
  • How comfortable are you with ambiguity?
  • Do you think more about strategy or tactics?
  • How well are things going for you personally? If the answer is “well”, are you inclined to think that success is based on things that are stable or unstable? If the answer is “poorly”, how much do you agree or disagree with “better the devil you know”?
  • When you tried working [within/outside of] “the system”, how has that worked out for you personally in the past?
  • What is your general sentiment towards “the mainstream”?

Why do people who mostly agree make different predictions about contingent future outcomes?

How confident are you that any reasonable person would agree with your predictions?

Monday, Sep 14, 2015

Thoughts on "Victimhood Culture"

This article in the Atlantic has been making the rounds recently, commenting on a recent scholarly paper (sadly paywalled) on the theory of microaggression. Some things that struck me about the piece:

1. It’s strange that most of the commentary on the article acts as if Campbell and Manning (the authors) are dispassionate sociologists, when they clearly have a dog in the fight. They’re characterizing past cultures in terms of virtues those cultures nominally value, but don’t even try to identify what virtue the modern culture they disparage is reaching for. It might be accurate to speak of a “solidarity culture”, where the way to respond to a slight is to encourage mass opprobrium, and shibboleths and linguistic norms that demonstrate in-group identity are of paramount importance.

2. It’s really strange that the Atlantic article comments extensively on a blog post from nearly two years ago. Sure, the blog has “microaggressions” in the title, but the Oberlin Microaggressions Tumblr was active from February to September 2013. Despite the title, the stuff it started off cataloging doesn’t exactly fit the bill. (The point of microaggressions is that stuff that’s not overtly aggressive can still be grating, not that it may be ambiguous to what extent an overtly awful person is being a troll.)

3. That blog starts out as a discussion of really overt racism, continues with posts that are a mix of overt racism and the sort of thing actually meant by “microaggressions”, then ends with an angry rant by a Hispanic student who tells a white student to “leave the soccer team” for daring to speak a word of Spanish, mocks their attempt to apologize, and asserts that they “take up to [sic] much space”. The blog ends at that point, with no explanation why. Probably whoever was running the blog moved on to other things, but it would fit the narrative arc to say that last post was some sort of culmination of the state of racial discourse at Oberlin, at which point students decided to never write about that subject, or possibly any subject, ever again. (At the very least, such a narrative would make fine fodder for an Atlantic article.)

4. The article notes:

If “dignity culture” is characterized by a reticence to involve third parties in minor disputes, an argument could be made that many black and brown people are denied its benefits. In a city like New York during the stop-and-frisk era, minorities were stopped by police because other people in their community, aggrieved by minor quality-of-life issues like loitering or sitting on stoops or squeegee men, successfully appealed to third-parties to intervene by arguing that what may seem like small annoyances were actually burdensome and victimizing when aggregated.

To what extent are non-collegians engaged in policing microaggressions by another name?

If you already have political power, it is easy to be dignified. Simply appeal to the law only for serious matters, once your culture has successfully set the definition of what is “serious”. Anything not serious can be easily ignored.

5. Were the lunch counter sit-ins of the 1960s a product of “dignity culture” or “victimhood culture”? Those protests neither “exercised covert avoidance” nor “sought only to restore harmony without passing judgment”. They appealed for political support against something other than “the most serious of offenses”. Was that an example of “toleration and negotiation”, or a “complaint”, aimed at winning the political support of third parties?

6. A Megan McArdle piece on the same article notes (of duels):

The seconds, the formalities, the extended opportunities for apology, raise the cost of fighting, lower the cost of not doing so, and thereby mitigate the appalling violence to which honor cultures are prone. Unless victim culture can find similar stopping mechanisms, it will collapse into the bloodless version of the endless blood-feuds that made us seek alternatives to honor cultures in the first place.

“Bloodless” is still more than enough to ruin lives, of course. And even when overt violence has been relegated to the margins, any sufficiently big mob is enough to give a violent fringe plenty of motive force.

7. The Atlantic article links to a post by Jonathan Haidt, who co-wrote (with Greg Lukianoff) an earlier Atlantic piece called “The Coddling of the American Mind”. On the page on his site where he discusses critical responses to that piece, he writes:

The New Republic: The trigger warning myth, by Aaron Hanlon. This is a thoughtful essay about the sensitivities needed to lead a seminar class through difficult material. His main point is that TWs are not a form of censorship. I agree. He argues that sometimes guidance is needed beforehand. I agree with that too. I just think it’s very bad for students to call it a “trigger warning,” or to do anything to convey to students the expectation that they will be warned about… everything.

So you want to write a book about how annoying liberals are, but lack any substantial disagreement. Nothing to do but get into a knock-down drag-out fight about linguistic norms.

8. “Political correctness has gone too far” has gone too far. Well, that’s the joke. More accurate would be: “Political correctness has gone too far” has not gone anywhere.

Sunday, Feb 9, 2014

"Do What You Love" as a Weapon and Shield

A recent article in Slate had a critical take on the ideology of work as self-actualization:

There’s little doubt that “do what you love” (DWYL) is now the unofficial work mantra for our time. […]

DWYL is a secret handshake of the privileged and a worldview that disguises its elitism as noble self-betterment. According to this way of thinking, labor is not something one does for compensation but is an act of love. […] Its real achievement is making workers believe their labor serves the self and not the marketplace.

DWYL ideology sweeps non-elite work under the carpet:

Think of the great variety of work that allowed [Steve] Jobs to spend even one day as CEO. His food harvested from fields, then transported across great distances. His company’s goods assembled, packaged, shipped. Apple advertisements scripted, cast, filmed. Lawsuits processed. Office wastebaskets emptied and ink cartridges filled. Job creation goes both ways. Yet with the vast majority of workers effectively invisible to elites busy in their lovable occupations, how can it be surprising that the heavy strains faced by today’s workers—abysmal wages, massive child care costs, etc.—barely register as political issues even among the liberal faction of the ruling class?

And it makes elite work more exploitative:

The reward for answering this higher calling is an academic employment marketplace in which about 41 percent of American faculty are adjunct professors—contract instructors who usually receive low pay, no benefits, no office, no job security, and no long-term stake in the schools where they work.

There are many factors that keep Ph.D.s providing such high-skilled labor for such low wages, including path dependency and the sunk costs of earning a Ph.D., but one of the strongest is how pervasively the DWYL doctrine is embedded in academia. Few other professions fuse the personal identity of their workers so intimately with the work output. Because academic research should be done out of pure love, the actual conditions of and compensation for this labor become afterthoughts, if they are considered at all.  [links theirs]

Robin Hanson had a simple explanation for the popularity of this ideology in a much earlier essay that refers to the same Steve Jobs commencement address:

Now notice: doing what you love, and never settling until you find it, is a costly signal of your career prospects. Since following this advice tends to go better for really capable people, they pay a smaller price for following it. So endorsing this strategy in a way that makes you more likely to follow it is a way to signal your status.

It sure feels good to tell people that you think it is important to “do what you love”; and doing so signals your status. You are in effect bragging. Don’t you think there might be some relation between these two facts?

Hanson also has this more recent post about status and advice:

We get status in part from the status of our associates, which is a credible signal of how others see us. Because of this, we prefer to associate with high status folks. But it looks bad to be overt about this. […] Since association seems a good thing in general […] we mainly need good excuses for pushing away those whose status has recently fallen. Such opportunistic rejection, just when our associates most need us, seems especially wrong and mean. So how do we manage it?

One robust strategy is to offer random specific advice. You acknowledge their problems, express sympathy, and then take extra time to “help” them by offering random specific advice about how to prevent or reverse their status fall. Especially advice that will sound good if quoted to others, but is hard for them to actually follow, and is unlikely to be the same as what other associates advise.

Then when your former friend fails to follow your advice, you get annoyed at them and present that (instead of their lowered status) to yourself and others as the reason why you’re not on such good terms with them anymore.

I think that advice can certainly play such a distancing role.  But I think Hanson’s explanation misses some of the things that make this mechanism complicated:  The proactive nature of advice, the loose coupling between the emotional drives behind status and actual status, and the way status-related drives aren’t isolated from other psychological drives (not by coincidence, that form makes hypocrisy more effective).

When a friend is worried that they will suffer a setback, people want to help them avoid that setback (both due to empathy and a desire to not be associated with failure).  They also want to create emotional distance so that the setback, if it happens, will hurt them less (pain caused both due to empathy and status-anxiety). Both of these motivations underlie a variety of biases in favor of advice-giving.

When offering advice, I’m tempted to overestimate the effectiveness of advice-giving for the following false reasons:

  1. When thinking about my own problems, my emotions cloud my judgment, but I see other people’s problems objectively.  Also, being outside of their psychology makes their problems easier to understand, even if those problems involve things like their emotions and goals.
  2. I recognize that my imperfect understanding of my own emotions and goals makes my problems harder to solve, but I can solve this problem with second-order advice about what emotions or goals should be.
  3. An understanding of what worked (or would work) for me is generally applicable.
  4. I could solve my problems effectively through sheer intelligence if only I was better at “following my own advice”.  And maybe seeing someone else succeed based on my advice will motivate me!

I think all of those biases increase the tenacity of “do what you love”.

Sunday, Oct 28, 2012

The Rationalist Elect

As a fan of logic puzzles and rational decision theory, I’d encountered Newcomb’s Paradox before.  The puzzle goes as follows:

Omega (a powerful (but not supernatural or causality-violating) logic-puzzle-creating entity) has set up two boxes.  Box A contains $1000.  Box B contains $1,000,000 or nothing.  Omega offers the choice of taking just Box B or taking both boxes.  But Omega has made a prediction (and Omega’s predictions are almost always correct) about the subject’s choice, and put the million dollars in Box B if and only if the subject was predicted to take just Box B.  (Choices must be made without an external source of randomness; people who flip a coin and choose based on that do even worse than those who just take both boxes.)

This is one of the most contentious philosophical problems in decision theory.  One of the things that’s interesting about it is that it’s hard to just deny that the premises are logically coherent.  You can even sustain the paradox without Omega being perfect in its predictions, so long as Omega is usually right, by increasing the amount that may be placed in Box B.

Newcomb’s Paradox is one of the problems that the denizens of Less Wrong discuss extensively because rationality is their raison d’être and decision theory is (in one sense) the theory of what it means to make rational decisions.  The consensus there is that the right solution to the problem is to one-box (that is, to take just Box B), and Eliezer Yudkowsky makes a compelling argument for that, which is essentially this: Given the premises of the problem, people who take just Box B walk away with $1,000,000, while people who take both boxes walk away with $1000.  Therefore, it’s best to put aside qualms about strategic dominance, (the illusion of) backwards causality, and whether or not this Omega fellow is generally a jerk; just do the thing that reliably wins.

To put it another way: It’s a premise of Newcomb’s paradox that one-boxers usually win, and it’s a pretty poor game theory that gives advice that contradicts a scenario’s premises.
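That premise is easy to check with a quick simulation. Here’s a toy sketch (not from the Less Wrong discussion; the 90% predictor accuracy and the dollar amounts are assumptions chosen to match the usual statement of the puzzle):

```python
import random

def play(strategy, accuracy=0.9, small=1_000, big=1_000_000):
    """One round of Newcomb's problem: Omega predicts the subject's
    strategy with the given accuracy, and fills Box B only if it
    predicts one-boxing."""
    prediction_correct = random.random() < accuracy
    predicted_one_box = (strategy == "one-box") == prediction_correct
    box_b = big if predicted_one_box else 0
    return box_b if strategy == "one-box" else small + box_b

random.seed(0)
for strategy in ("one-box", "two-box"):
    avg = sum(play(strategy) for _ in range(100_000)) / 100_000
    print(f"{strategy}: average winnings ${avg:,.0f}")
```

With a 90%-accurate predictor, one-boxers average about $900,000 while two-boxers average about $101,000. And for any accuracy better than a coin flip, a large enough Box B keeps one-boxing ahead, which is the earlier point about sustaining the paradox without a perfect Omega.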

I was reminded of this puzzle again recently because Chris Bertram at Crooked Timber has this unusual observation on it:

I was reading a postgraduate dissertation on decision theory today […] and it suddenly occurred to me that Max Weber’s Protestant Ethic has exactly the structure of a Newcomb problem.

[…] place yourself in the position of Max Weber’s Calvinist. An omniscient being (God) has already placed you among the elect or has consigned you to damnation, and there is nothing you can do about that. But you believe that there is a correlation between living a hard-working and thrifty life and being among the elect, notwithstanding that the decision is already made. Though partying and having a good time is fun, certainly more fun than living a life of hard work and self-denial, doing so would be evidence that you are in a state of the world such that you are damned. So you work hard and save.

[…] you work hard and reinvest, despite the dominance of partying, because you really really want to be in that state of the world such that you get to heaven.

It does seem to follow from the premises in a similar way, so presumably the conclusion would be analogous.  That makes sense.  When dealing with omnipotent and omniscient entities, trying to find loopholes is widely regarded as a bad idea.

I guess the problem for Less Wrongians (and here, I really must give credit to Crooked Timber commenter Prosthetic Conscience for the link, though some of the overlap in our ideas was independent) is that despite usually being atheists, they are often singularitarians, so they may genuinely worry about effectively omni* entities messing with them (or at least some version of future-them).  Sinners who could yet end up in the hands of an angry god-like-entity.

Wednesday, Aug 31, 2011

FLY THE PLANE

I know it’s been a while since I’ve posted here, and I still don’t have a full post together.  But I would like to write briefly about a book I read recently, Atul Gawande’s The Checklist Manifesto.

In the book, Gawande advocates for the use of checklists as a means of improving outcomes in medicine.  He bases his analysis on three key cases:  The use of coordination checklists to ensure essential communication between the parts of a construction team, the use of “read-do” and “do-confirm” checklists (routine and contingency) in the airline industry (with a particular look at the case of US Airways Flight 1549, the recent “Miracle on the Hudson”), and the design and testing of the World Health Organization’s Safe Surgery Checklist.

The book is a great example of popular nonfiction:  The information is interesting, the narrative is compelling, and the argument is sound.  The tradeoffs involved in the WHO’s design process were also interesting to me as an engineer.  A checklist (in this use) isn’t an algorithm for amateurs, but a tool to help someone who already has a great deal of expertise.  The key is to identify the tasks where a reminder is of greatest benefit: maximize the product of the likelihood that a checklist item will prevent a task from being missed and the magnitude of the consequences if it were overlooked.  Extremely high-level goals often end up omitted, since they won’t be forgotten in any case.  On the other hand, sometimes important things are easy to forget in crisis situations; the subject line of this post comes from a checklist for restarting a dead jet engine (the result, one hopes, of some embarrassing simulator incidents).  When the tasks themselves are unknown, the key is identifying which communication tasks have the highest probability of identifying serious potential problems before they actually occur, so the risk can be mitigated.

If you’re interested in medicine or engineering, or like reading nonfiction in general, read it.

Saturday, Nov 6, 2010

On Grades and Unschooling

During my long public school career, I didn’t think much about the structure of public school.  The reasons for this are not exactly flattering for me.  I viewed school as the “one thing” I was good at (though that was not actually true), and I used my focus on academics to avoid paying attention to many of my problems.  If there was one thing I might have done dramatically differently with my academic career, it would have been skipping grades, but I didn’t pursue that with any sort of determination, since I was happy to take the path of least resistance.

It wasn’t until college that I began to think about the issue seriously.  I came at it initially from the subject of grades.  I was obsessed with grades during my primary school career (obsessed with getting grades that were just barely good enough to be called “perfect” by some carefully chosen definition (e.g. 90%, “an A”); I told you this wasn’t flattering for me), and that got worse and worse until high school.  When I entered college, I resolved to not look at my grades for any class.

The college I went to, Olin, was new, and we took pride in being “innovative”.  But its grading system was strikingly conventional.  Evidently, the issue of grading came up during the school’s design process.  A substantial discussion led to a rough consensus favoring a very minimal “Pass/Fail/Excellence” grading system.  But the result was a temporary compromise on letter grades without +/- gradations, followed by a wholesale adoption of the conventional grading system.

The thing is, in my view, the conventional grading system is glaringly flawed.  There’s ample psychological research showing that rewards produce a lasting decrease in intrinsic motivation, long-term recall of information, and lateral thinking, and that to the extent that “good grades” are perceived as desirable, they produce the same effect.  Grades are also only minimally useful as feedback.  They’re not very useful in comparing students from different classes, much less different institutions.  (Concerns about “grade inflation” get some of that, but talk about focusing on the mote and missing the beam!)  To some extent, grades measure how well students conform to the idiosyncratic preferences of individual professors.

In other words, pretty much anything that puts grades less in the spotlight is a win in terms of the nominal goals of academia.

(Another point that would be particularly worrying for Moldbug:  To the extent that people change their behavior for the sake of grades while not believing that “good grades” are inherently worthwhile, that could serve as the insufficient justification that would make any ideological content contained in the lessons far more persuasive than it would otherwise be.  Also worth keeping that effect in mind when people emphasize how useful grades are to graduate schools and employers.)

So grades are interesting for a few reasons:

  1. It’s an example of academia pursuing a policy that doesn’t fit well with the nominal goals of academia.
  2. It’s a policy that promotes things that are very much not the overt goal of academic idealists (rote memorization, obedience, tolerance of pointless tasks, ideological conformity).
  3. It’s an example of academia conforming to an (in my opinion) obviously broken status quo because no one wants to take the cost of defecting first.

And as it turns out, there’s a movement that would apply those three points to many (if not all) of the structural features of the entire “education system”.

The Unschooling movement is heavily influenced by the teacher and educational philosopher John Taylor Gatto.  A good introduction to his view is the essay Against School, originally published in Harper’s Magazine in September 2003.  This one is hard to excerpt; read the whole thing.  But here’s my attempt at extracting the kernel of it:

Do we really need school? I don’t mean education, just forced schooling: six classes a day, five days a week, nine months a year, for twelve years. Is this deadly routine really necessary? And if so, for what? Don’t hide behind reading, writing, and arithmetic as a rationale, because 2 million happy homeschoolers have surely put that banal justification to rest. Even if they hadn’t, a considerable number of well-known Americans never went through the twelve-year wringer our kids currently go through, and they turned out all right. George Washington, Benjamin Franklin, Thomas Jefferson, Abraham Lincoln? Someone taught them, to be sure, but they were not products of a school system, and not one of them was ever “graduated” from a secondary school. […]

In the 1934 edition of his once well-known book Public Education in the United States, Ellwood P. Cubberley detailed and praised the way the strategy of successive school enlargements had extended childhood by two to six years, and forced schooling was at that point still quite new. This same Cubberley - who was dean of Stanford’s School of Education, a textbook editor at Houghton Mifflin, and Conant’s friend and correspondent at Harvard - had written the following in the 1922 edition of his book Public School Administration:   “Our schools are … factories in which the raw products (children) are to be shaped and fashioned …. And it is the business of the school to build its pupils according to the specifications laid down.”

It’s perfectly obvious from our society today what those specifications were. Maturity has by now been banished from nearly every aspect of our lives. Easy divorce laws have removed the need to work at relationships; easy credit has removed the need for fiscal self-control; easy entertainment has removed the need to learn to entertain oneself; easy answers have removed the need to ask questions. We have become a nation of children, happy to surrender our judgments and our wills to political exhortations and commercial blandishments that would insult actual adults. We buy televisions, and then we buy the things we see on the television. We buy computers, and then we buy the things we see on the computer. We buy $150 sneakers whether we need them or not, and when they fall apart too soon we buy another pair. We drive SUVs and believe the lie that they constitute a kind of life insurance, even when we’re upside-down in them. And, worst of all, we don’t bat an eye when Ari Fleischer tells us to “be careful what you say,” even if we remember having been told somewhere back in school that America is the land of the free. We simply buy that one too. Our schooling, as intended, has seen to it.

One frequent observation of mine is this:

Idea #5: A naive compromise between ideologies can produce worse results than a coherent implementation of either’s favored policy.

But naive compromises are often politically expedient, so they happen anyways.

To put it more cynically, liberals (especially) love half-measures.  (I’m not immune to this, myself.)

Gatto essentially agrees with Moldbug about the purpose of the schools, but Gatto’s description of the present state of affairs misses something crucial when he overstates his case:

The reason given for this enormous upheaval of family life and cultural traditions was, roughly speaking, threefold:

1) To make good people. 2) To make good citizens. 3) To make each person his or her personal best. These goals are still trotted out today on a regular basis, and most of us accept them in one form or another as a decent definition of public education’s mission, however short schools actually fall in achieving them. But we are dead wrong. [emphasis mine]

But I’d say that view of the goals of the education system is in fact not totally wrong, which is what makes it pernicious.  Public schools do teach something about critical thinking, analyzing primary sources, the scientific method, distinguishing fact from opinion, and so on.  To the extent that schooling is “education” (to some extent it is) and a diploma is (economically) valuable, schools convince people that education is valuable.  “Good people” believe that education is valuable.  Actually, I do believe that statement if taken at face value: Good people do (correctly) believe that education is valuable.  Convincing someone to agree with a true statement on a technicality, however, does not make them better people in any meaningful sense.

A strategy of “make education a good investment and people will realize it’s intrinsically valuable on its own” also starts having some serious problems when there’s a mismatch between the educational system and the economy.  I’ve read a lot of talk recently about a “higher education bubble” and of a mismatch between the education system and the “21st century economy”.  The former discussion includes some talk of unschoolers (under the name “edupunks”).  Unfortunately for unschoolers (and for reactionaries like Moldbug), the dominant force among education reformers seems to be the neoconservative/neoliberal/“free market” types (a faction that needs a better name, which probably deserves another post).  Their favored solution, charter schools, is the usual mix of privatized profits and socialized costs.  Charter schools may be more willing to be “innovative”, but I wouldn’t expect them to solve the problems above or defect from educational norms, no matter how counter-productive, when defecting first is costly.

(I wish I had a better way to wrap things up here.  I still need to read more Gatto and Holt and Illich and Llewellyn.  I want to know more about how compulsory education (in particular, as opposed to public education in general) took off—not just what interests that might work well for, but how it was argued and implemented politically.  I read and would recommend Llewellyn’s Teenage Liberation Handbook to parents, adolescents, and those interested in unschooling.)

Monday, Aug 16, 2010

Compensating for Bias

“It’s tough to make predictions, especially about the future.”  - Yogi Berra

As a sort of follow-up to my last post, I’ll say that being aware of one’s cognitive biases is a necessary but not sufficient condition for counteracting them.  Even if you’re aware of your biases, those biases still apply, both in how you perceive your own bias and in how you judge your own counter-action.  To put it another way:

Idea #4: Predicting the future is harder than you think, even if you know why predicting the future is harder than you think.

(With thanks to Douglas Hofstadter.)

Friday, Aug 13, 2010

Speedily, Speedily, In Our Days, Soon

Predicting significant social or technological changes would be hard enough even for a rational actor.  After all, any individual has limited (and inaccurate) information about the current state of affairs, many of the systems involved appear chaotic, and the time-scales involved can be very short indeed.  For humans, it’s even harder.

So when a Reddit post on this comic led me to this article, I was intrigued.

The relevant bit of the article:

Pattie Maes, a researcher at the MIT Media Lab noticed something odd about her colleagues. A subset of them were very interested in downloading their brains into silicon machines. Should they be able to do this, they believed, they would achieve a kind of existential immortality. […]

[…] her colleagues really, seriously expected this bridge to immortality to appear soon. How soon? Well, curiously, the dates they predicted for the Singularity seem to cluster right before the years they were expected to die.  […]

Joel Garreau, a journalist who reported on the cultural and almost religious beliefs surrounding the Singularity in his book Radical Evolution, noticed the same hope that Maes did. But Garreau widened the reach of this desire to other technologies. He suggested that when people start to imagine technologies which seem plausibly achievable, they tend to place them in the near future – within reach of their own lifespan.

The author of the article, Kevin Kelly, refers to this observation as the “Maes-Garreau Law”.
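The claimed pattern is easy to state quantitatively: for each forecaster, compare the predicted date against the year they could expect to die. Here is a toy sketch (the forecasters are hypothetical, not the real predictions Kelly tabulated in his article, and the 85-year lifespan is an assumption):

```python
# Hypothetical forecasters: (year born, year they predict the Singularity).
# These pairs are invented for illustration only.
forecasters = [(1948, 2030), (1927, 2005), (1960, 2040), (1975, 2055)]

ASSUMED_LIFESPAN = 85  # rough stand-in for each forecaster's life expectancy

for born, predicted in forecasters:
    expected_death = born + ASSUMED_LIFESPAN
    gap = expected_death - predicted
    print(f"born {born}: prediction falls {gap} years before expected death")
```

A Maes-Garreau-style clustering would show up as gaps that are consistently small and positive – the predicted date sitting just inside the forecaster’s expected lifetime.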

The most obvious explanation for that observation is probably wishful thinking, especially (but not always) if the change in question seems positive.  Or perhaps the desire for personal significance that comes with messianic or apocalyptic thinking.  Kelly comes up with a slightly different explanation, though:

Singularity or not, it has become very hard to imagine what life will be like after we are dead.  The rate of change appears to accelerate, and so the next lifetime promises to be unlike our time, maybe even unimaginable. Naturally, then, when we forecast the future, we will picture something we can personally imagine, and that will thus tend to cast it within range of our own lives.

If I had to guess, I would say that this is not bound by observations about the rate of change.  Rather, since consciousness is centered around narrative, we expect our lives to have narrative continuity.  Thus, predictions about the future that seem hopelessly alien are more likely to be placed after the imaginer is dead, while predictions that seem like they could fit into the stories of our lives seem like they could also plausibly happen within our lifetimes.

An interesting logical inference:  If the above is true, then one would expect predictions of the timing of the singularity to differ wildly depending on which side of the singularity one is trying to imagine, and how good a job one does of imagining the alien-ness of a post-singularity world.

Thursday, Jul 15, 2010

The Politics of Behavioral Economics

A while back, someone posted this video (talking a bit about some modern research into the psychology of motivation) to the Liberal community on LiveJournal.

They asked:

Is this a liberal or a conservative idea? I mean, if we’re increasing productivity and creating more effective work places, isn’t that basically conservative? But we’re talking about empowering individuals and normalizing pay scales, and isn’t that basically liberal?

Which seemed to me like a silly question.  I wouldn’t attribute political views to the result of research unless I were making accusations of bias.  The truth itself isn’t ideological; what sort of political policies you promote based on the truth is ideological.

It’s popular for conservatives and liberals to accuse one another of “legislating morality”, but the truth is that both do.  Politics is making value judgments about what the government should or shouldn’t do.  And once you get beyond the sort of pure voluntarism that few (anarchists and hardcore libertarians) think should define the political process, that includes constraints on what people in general can or cannot do.

The morality in question simply has a different focus.  Conservatives tend to focus on deontological ethics, since if you seek to preserve traditional institutions, it makes sense for your morality to flow from the authority of traditional institutions.  Liberals favor teleological ethics, since if you believe that traditional institutions run the gamut from pretty good to hopelessly immoral and corrupt, you’d better focus on an ethical system that can tell the difference.

(That’s not quite the same explanation discussed in the essay Red Family, Blue Family by Doug Muder, which I list as one of my influences.  (That essay is in turn discussing the book Moral Politics: How Liberals and Conservatives Think by George Lakoff.)  However, I think that the “government as strict father” / “government as nurturant parents” and “inherited obligation” / “negotiated commitment” distinctions are related to the deontological/teleological distinction.)

Of course, that doesn’t cover the whole “conservative” / “liberal” distinction, since there’s more to politics than where people stand on traditional institutions in general (for Americans, especially given the way all political difference is crammed into a dichotomy in a two-party system).  A lot of “conservatives” are fine with traditional institutions being substantially reformed (especially when discussing past examples), so long as the government isn’t involved (in my opinion, a tricky distinction to defend).