There are no easy shortcuts to ending obesity.

I can’t remember the last time I was as annoyed reading an article as I was reading “How Junk Food Can End Obesity” by David Freedman, which I believe was the cover story of The Atlantic’s July/August issue. Don’t read it (I know, you probably weren’t going to read it at this point anyway; I’m a bit behind the curve). It’s 10,000 words of mostly anecdotes and poor logic. Now that I think about it, I can remember the last time I was so annoyed by an article: when I wrote my first-ever blog post in response to some eugenics garbage back in May.

Anyway, I came across this Atlantic article when Steven Pinker posted the following on Twitter in late September: “@sapinker The 1st article on food & obesity I’ve read in years that makes sense – forget Bittman & Pollan.” I honestly know very little about Dr. Pinker and actually appreciate his oft-inflammatory links, but I couldn’t get over the inanity in this article. Yes, to some degree it’s foodie-baiting. And after reading it and seeing that dozens of people/internet-news-types have already critiqued the piece, Mr. Freedman probably got what he wanted. That said, I think there’s some interesting science to talk about here that disputes much of this article’s ‘logic’ and might make you a little more aware of what we can say about food, nutrition and obesity.

To begin, I’ll mention that Mr. Freedman does get one thing correct: there’s nothing inherently wrong with ‘processing’. Just like there’s nothing inherently wrong with “GMOs” or “drugs” or “chemicals”. Each can be bad or good depending on the circumstances and use. Labeling foods as bad because of semantics on what is ‘processed’ or ‘natural’ is indeed a poor way to go about nutrition. And Mr. Freedman goes on for a while with anecdotes about how this ‘natural’ food isn’t as healthy as you might think and that ‘real’ food is just as bad for you as a processed one. This is hardly a new idea, but I agree with his premise in this case.

What Mr. Freedman gets wrong are the assumptions the rest of the article is based upon. He takes certain facts for granted largely because they’re the status quo, and NOT because they’ve been unequivocally shown to be true by science. Science is hard, and what I’ve come to find out is nutrition science is EXTRA hard. There are a lot of reasons for this, but suffice it to say that different goals for different people with different genetics and different environments at different ages create a ton of noise in any studies on what’s good for us. In short: humans are complicated organisms.

I find Mr. Freedman’s reliance on the status quo extra ironic because he’s written a book called “Wrong: Why Experts Keep Failing Us—And How to Know When Not to Trust Them.” Daniel Engber wrote about this at Slate, and it appears he’s actually read the book, so his word is better than mine. But in short, Mr. Freedman’s inability to recognize how few objective truths we have when it comes to the science of obesity is nothing short of hypocritical considering he wrote a book on how wrong ‘experts’ often are.

This is most apparent from Mr. Freedman’s continued use of the word ‘obesogenic’ throughout the article. 10 times, in fact. Well, here’s some not-all-that-surprising-considering-the-obesity-epidemic news for everyone: we don’t know what makes people fat! Some people think it’s sugar. For decades government ‘experts’ agreed it was because we were eating too much fat (more on this oversimplification later). David Berreby wrote a fascinating article about our animal roots and obesity that gets at a lot of the science behind our fatness, but he concludes by saying that the answer’s not that easy: obesity probably has dozens of causes that run the gamut from our evolutionary propensity for a quick calorie fix to hormone-imitating chemicals to microorganisms. Now THAT is a good, well-researched article.

Here’s one particular assumption being made by Mr. Freedman that blows a hole right through his basic premise. IT’S NOT ALL ABOUT CALORIES IN/CALORIES OUT. This seems logical and everything, so much so that my college biology teacher (who, ironically, was fat) was spouting it to hundreds of us naïve ankle-biters, but it is extremely misleading. Yes, in terms of ‘thermodynamics’, eating too many calories will make you fat. But food isn’t just calories. Food is varied and sets off feedback mechanisms and hormonal responses that can make you want to eat more or less food depending on what you’ve eaten already, the context of when/where you’re eating it and if YOU’RE eating it (i.e., people respond differently due to their own genes, microbiota and environments). To quote that superb David Berreby article, “Chemicals [with hormonal properties] ingested on Tuesday might promote more fat retention on Wednesday.”

For example, Mr. Freedman goes on at length about how we need to work with Big Food companies because they can trick us into eating lower calorie foods that taste as good or satisfy us just as much. Well, the problem is: the calories themselves are a major factor in being satisfied. One cool 2008 study in particular used mice that had their taste receptors for sugar knocked out (knockout mice have a specific gene disabled so that the trait it supports is eliminated) and had them choose between either sugar water or plain water. At first, the mice didn’t care which they drank from, since they couldn’t taste the sugar. However, over time the researchers found the mice would drink significantly more from the sugar water. This means their bodies actually could ‘sense’ the calories and send a message back to the brain saying ‘dude, drink THAT one’. From an evolutionary perspective this makes total sense: your taste buds probably don’t know what nutrients you’re lacking but your body does. While your taste buds are tuned to more immediate problems like ‘is this poison?’ or ‘these berries aren’t as sweet as THOSE berries’ that you need to act on immediately, what your body actually cares about is essential nutrients like calories (or protein, as shown in a later study). As a result, when you’re drinking non-caloric diet soda with sugar substitutes that are hundreds of times sweeter than sugar, it might satisfy you for a bit but you’re hardly fooling your body into being less hungry. Which helps explain why regular consumers of diet drinks are actually fatter than non-drinkers. Edit: there’s also a new study directly linking the consumption of artificial sweeteners to obesity/diabetes.
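To make the logic of that experiment a bit more concrete, here’s a toy sketch of how a two-bottle preference test is typically scored. The intake numbers are completely made up (they are not from the actual study); the point is just that a sweet-blind mouse starts out indifferent and drifts toward the calorie-containing bottle.

```python
# Toy two-bottle preference test: how strongly do sweet-blind mice "prefer" the
# sugar bottle over plain water, day by day? The intake volumes (mL) below are
# invented for illustration; they are NOT data from the actual study.
daily_intake = [
    # (sugar_water_mL, plain_water_mL) across consecutive days
    (2.1, 2.0),   # day 1: no taste cue, so no preference yet
    (2.6, 1.9),
    (3.4, 1.6),
    (4.2, 1.1),   # later days: post-ingestive calorie sensing kicks in
]

for day, (sugar, water) in enumerate(daily_intake, start=1):
    preference = sugar / (sugar + water)   # 0.5 = indifferent, 1.0 = sugar only
    print(f"day {day}: preference for sugar water = {preference:.2f}")
```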

There’s even more to it than that, since those taste receptors do report on how sweet that diet soda is, and there are feedback mechanisms that alter how you take in future calories. A 2013 Journal of Neuroscience paper found that rats given a 3% saccharin solution for 7 days had similar brain changes (specifically, trafficking of AMPA postsynaptic receptors in the nucleus accumbens—sort of like your brain’s pleasure center) to rats given a 25% solution of sucrose for 7 days. This means those obesogenic effects (notice how when I use the term there is actually a study behind it) from diet soda could very well be a direct result of your brain thinking there are way more calories coming in than it actually gets.

Something else that annoyed me even more about Mr. Freedman’s article was his later retort to the many, many articles telling him how wrong he was. While arguing on the internet is hardly productive anyway, his responses further show how little he really knows about modern thinking on nutrition. In particular, here is what he said to a nice response piece from Tom Philpott at Mother Jones:

It was really sad to see this piece. If any publication should have empathy for the plight of the unhealthy poor, you’d think it would be Mother Jones. But no, this was just a dopey, rote screed by an Atkinite, that small but incredibly loud cult of ultra-low-carbers who have become the LaRouchians of the dietary world. Calories don’t matter! Exercise doesn’t help! Eat all the fatty foods you want! It’s all about the carbs! The Atkinites like to claim that everyone else is stuck in the “low-fat craze” of the 1980s. They don’t like to mention that the low-carb craze dates to the 1860s. For the record: It’s best to trim both carbs and fat. Ask your doctor, or any obesity expert.

NO NO NO NO NO. Tom Philpott says nothing about the Atkins diet. Instead, he points out REAL, MODERN research that shows how little eating fat correlates with being fat (this might turn out to be English’s most costly homonym). There is a wealth of modern evidence indicating that low-carb diets work quite well (point 10 here has a nice, quick compilation of evidence from the last 10 years, while I can give you days of additional reading material. I recently enjoyed Denise Minger’s exhaustive takedown of Forks over Knives, for one). Meanwhile, Mr. Freedman thinks we should just ask our doctor. Ugh. Is he trolling us at this point? As an expert on experts you’d think he would be aware how little ‘your doctor’ knows about nutrition. Or most things for that matter—that’s what specialists are for.

In fact the more I read the article the more it just seemed like a giant, thickly-veiled troll. For example:

 I finally hit the sweet spot just a few weeks later, in Chicago, with a delicious blueberry-pomegranate smoothie that rang in at a relatively modest 220 calories. It cost $3 and took only seconds to make. Best of all, I’ll be able to get this concoction just about anywhere. Thanks, McDonald’s! If only the McDonald’s smoothie weren’t, unlike the first two, so fattening and unhealthy. Or at least that’s what the most-prominent voices in our food culture today would have you believe.

You’re drinking a cup of sugar water. Maybe it has some remnant of fruit in it, but honestly I wouldn’t suggest eating non-organic berries anyway. And I’m sure Mr. Freedman knows it’s sugar water.  If he’s trying to claim that sugar water in smoothie form should cost $3 instead of $8 that’s fine by me. But when he’s trying to act like fast food enterprises have anything but profits in mind he’s quite mistaken. As I’ve mentioned above, your body is designed to detect calories, and if these companies don’t know this scientifically they know it from their profit margins. McDonald’s smoothies are a perfect example of this, in fact. Only 220 calories! Which means you’ll need to eat a bunch more of their food to feel full. And since it’s sugar, it’ll probably just make you more hungry anyway. But since it has the words ‘blueberry’, ‘pomegranate’ and ‘smoothie’ it hides under the guise of healthy. And I know HE knows this, so his point is completely moot. Maybe he was getting paid by the word?

I suppose that’s enough vitriol for now. I suggest a few blogs below that cite legitimate scientific studies and use their expertise to critique the modern literature. It’s not easy reading though, and as I said earlier, given how complicated nutrition is, the findings are often quite incremental. That said, modern techniques are starting to uncover some fascinating pieces here and there (e.g. here’s a great study on a direct link between eating red meat and accelerating heart disease), and contrary to what Mr. Freedman would have you believe, we’re making progress without having to resort to tricking ourselves into thinking we can trick our bodies into becoming thinner.

Stephan Guyenet’s site is a good source of well-referenced, science-based info.

Mark’s Daily Apple looks kind of ridiculous because the site is littered with pictures of him with his shirt off (my friends and I like to refer to it as Mark’s Daily Ab-shot) but the info is pretty legit and usually well-referenced.

New neuroscience on why we dream.

I thought I’d write a post on a topic I love to talk about: dreams. You wouldn’t believe this dream I had last night: there was this nutria and he was riding on a surfboard made of cheetos and…wait sorry I’m getting off topic. What I really want to talk about is the neuroscience (not the usual pseudoscience) behind dreams. I study memory and a lot of work has shown the heavy importance of sleep in forming memories, but dreaming is a bit more of a mystery. Does it serve a purpose, or is it just an accidental offshoot from memory consolidation? To be honest: we really don’t know yet. But we do know some interesting pieces here and there that serve as first steps to unraveling the mystery of dreaming.

To talk about dreaming you have to start with sleep. We don’t even really know the purpose of sleep. For certain you don’t do very well without it and can even die from enough insomnia, but why sleep is so important that we spend approximately 1/3 of our lives doing it is a bit of a mystery. Oh, and this isn’t just true for us; as far as I know, all higher animals sleep (including sharks and dolphins, contrary to popular belief—they just do it half a brain at a time). A recent theory proposed that neurons need to be turned off at night for “prophylactic cellular maintenance,” which seems reasonable considering long-term sleep deprivation appears linked to higher mortality rates, and your brain IS shuttling around all sorts of charged ions all day. If you want to read more about why it’s a good idea for YOU to sleep more, check out my buddy Pascal’s pensées on sleep.

While we haven’t proven why you need to sleep, it is clear that it helps us form memories. As far as I know the first papers that linked sleep to memory were from the mid-90s out of the Weizmann Institute in Israel. Karni et al showed in 1994 that memory consolidation (the process of short term memories being stored) specifically happened during REM sleep. They did this by disrupting people as they entered REM, which led to their memories for a visual task being compromised. Many other studies have gone on to show how sleep improves memory, even when the memory tests are significantly further away in time. If you have subjects learn a finger-tapping task (e.g. press keys in the order 5-2-4-3-1 and remember to do it hours later) and then test them after 8 hours of being awake versus 8 hours awake plus 8 hours sleep, the group that slept will be both more accurate and faster at repeating the correct key order.

Now we can start talking about dreams a little bit. Radiolab had a good episode on sleep with a segment on dreams that I’m referencing some anecdotes from. Robert Stickgold started things off in 2000 by publishing the first (real) scientific study on dreaming in decades (as Wikipedia points out, the last notable one, from 1977, has ‘compromised factual accuracy’, which sounds about right). Stickgold had personally noticed that after he went hiking, when he later fell asleep he would dream of continuously climbing up slopes. Knowing that trucking a bunch of undergraduates to whatever mountains are near Harvard and then bringing them back to test them falling asleep wasn’t feasible, he sat on the idea for a while until someone mentioned Tetris to him. Apparently when you play Tetris a bunch you see falling blocks connecting to each other in your sleep. He quantified this with some undergrads and found that a large percentage of both ‘experts’ (I would have liked to see the signs posted for that: ‘Wanted: Tetris experts’) and novices had dreams about Tetris after playing for a few hours. More anecdotally, Stickgold also described how people had more abstract dreams when they slept longer outside the lab. As opposed to the more literal Tetris dreams measured by Stickgold when people were awoken soon after falling asleep (these quick dreams when falling asleep are referred to as hypnagogic dreams), subjects returning days later would describe things like ‘[I was] thinking about a project I have for work that involves designing a garden space indoors and as I was thinking about it in my mind little Tetris pieces kept falling down into the garden spaces’. This implies that the Tetris dreams go from being replayed quite literally at the beginning of sleep to being incorporated into abstract episodic memories associated with other aspects of life later in the night. REM sleep is also enriched the longer you’ve been sleeping, which is taken as a reason for increased dreaming right before you wake up. I, for one, have tried on many occasions to fall back asleep in the morning to reenter a sweet dream!

Also interestingly, they got a group of amnesiacs to play. While the amnesiacs didn’t remember playing the game or the experimenter and didn’t get any better at the game like normal subjects did, some of them did still report Tetris-related dreams when awoken right after falling asleep. This implies that dreaming can take place outside the hippocampus/medial temporal lobe, which is where the damaged brain tissue in those amnesiacs sits. This is a bit of a surprise, since we think of the medial temporal lobe as the initiator of the short- to long-term consolidation process of memory (see step 3 in a few paragraphs to understand what ‘consolidation’ is). Plus, studies of rats ‘dreaming’ find many of those seemingly dream-related signals in the hippocampus (the best part of the medial temporal lobe. Just kidding. ALL the brain is cool! But mostly the hippocampus).

Speaking of those rat studies: Matt Wilson’s work was also featured in that Radiolab episode. Wilson discovered a process now termed ‘memory replay’ back in the early 90s. He describes in the episode how he was training a rat to run around a track and the hippocampal neurons would fire spikes in order: one cell after another cell after another cell. As a neurophysiologist you get used to the signature sound of these events as the voltage measurements are attached to a speaker. While recording from these rats he was surprised to hear the cells firing in sequence when he hadn’t put any food in the maze to motivate the rat to run. When he looked over at the rat it had dozed off on the track. The ‘replay’ is these groups of hippocampal cells firing in order while the rat is sleeping, in the same way they did when it was awake. Here’s a visualization:

[Figure: spike rasters from 9 hippocampal neurons during RUN vs. SLEEP]

(from: http://neuro.bcm.edu/jilab/?m=static&id=3)

“RUN” is what 9 hippocampal neurons do when the rat is running through a maze. Notice how they fire one after another, with each vertical tick representing a single neuron firing. “SLEEP” is from when the rat is sleeping. When the same 9 neurons are recorded, they fire in the exact same order, an indication of what the rat is thinking about when it was previously RUNning. If you look closely you’ll notice the replay timescale is 0.2 s while the running timescale is 1.0 s. This means the replay is done at approximately 5x the speed of the actual event, which is believable to me since dreams often seem like they’re not in real-time, but runs counter to some other dream research that I’m not even sure has been published.
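If you’re curious how researchers decide whether a sleep event counts as ‘replay’, one common trick is a rank-order correlation: does the order in which the cells fire during the sleep event match the order they fired in on the track? Here’s a minimal sketch with invented spike times (an illustration of the general approach, not the analysis from any particular paper):

```python
# Toy "replay detection" via rank-order correlation: does the firing order during
# a candidate sleep event match the order seen while the rat ran the track?
# Spike times are invented for illustration; real analyses are far more involved.
from scipy.stats import spearmanr

run_order = [0, 1, 2, 3, 4, 5, 6, 7, 8]   # order the 9 cells fired in during RUN

# First spike time (s) of each cell within one ~200 ms candidate sleep event
sleep_first_spike = {0: 0.01, 1: 0.03, 2: 0.05, 3: 0.08, 4: 0.10,
                     5: 0.13, 6: 0.15, 7: 0.17, 8: 0.19}

# Rank the cells by when they fired during the sleep event...
sleep_order = sorted(sleep_first_spike, key=sleep_first_spike.get)

# ...and compare that order to the RUN order.
rho, p = spearmanr([run_order.index(cell) for cell in sleep_order],
                   range(len(sleep_order)))
print(f"rank-order correlation = {rho:.2f} (p = {p:.3g})")
```

A correlation near +1 looks like a forward replay of the run; a value near -1 would be a replay in reverse order, which actually comes up a bit later in this post.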

If you had the rat run in a different enclosure those same cells might not even be active. And if they were they almost certainly would not be firing in the same sequence. This gives us some idea of why the “dream” is taking place—the order of cell firing must be important for memory and the rat is replaying that order to store the memory. We don’t know exactly why the rat has to replay the sequence for the memory to hold, but later work has indicated that these sequences of cell firing also take place in the cortex, an indication that memories are being transferred to this region for long-term storage.

This is in line with the canonical theory of how memories are consolidated, which I’ll describe briefly since it’s pretty interesting and will help explain some dream theory later:

1.) you experience the stimuli with the cortex on the outer parts of your brain (e.g. your auditory cortex first interprets Tubthumping by Chumbawamba).

2.) the stimuli are passed on to your medial temporal lobe (including the hippocampus) for contextualization. This region unites the disparate senses into episodes and also can link them to other important facets of the experience like emotion/reward/anxiety (e.g. you remember hearing Tubthumping at a super awkward but tasty middle school pizza party).

3.) the important stimuli are selected to be kept and the cortical region that originally experienced the stimuli then stores them permanently (i.e. your auditory cortex still remembers what Tubthumping sounds like, sadly, even to this day). This is called ‘consolidation’ of the memory.

Therefore, the replays might work as some kind of signal being sent from the hippocampus indicating which memory was important enough to keep, with the sequence of cellular firing possibly the code itself that represents that specific episode. There’s evidence the hippocampus then later indexes these memories so you can form associations with other memories, sort of like a really, really good Pandora Radio for your experiences that robustly finds the right memory when given a related experience as a suggestion. You know, much like Pandora Radio fails to do for music.

Anyway, in the Radiolab episode Wilson goes on to speculate that the actual physical replay of cell firing that happens when the rat is acting out the experience is a way to combine information from various memories for storage. He claims they’ve seen a rat run around one track, run around a second track and then have replays that combined the sequential firing from both tracks. Frankly, there has been no real scientific evidence of this from what I can find. I actually even emailed him once and he pointed to a Loren Frank paper that didn’t really describe this idea. So, while it’s a nice thought, we’re not really sure if these replays actually are done to help make connections between distinct experiences*. More on this later.

One thing that has confused these ideas is the discovery of more immediate replay. Rats won’t just replay events when they fall asleep, they’ll also do it right after experiencing the event when they’re standing still. Strangely, some of these replays are even in reverse, with the cells firing in the opposite order in the awake replays from when they were running on the track. So while replays are likely happening when you dream, they don’t seem exclusive to dreams themselves but are more ubiquitous in any kind of memory-related process.

A new paper from the Wilson group in 2012 took another step by actually biasing the rat’s ‘dreams’. They trained rats to go left or right in an enclosure for reward (often chocolate sprinkles—I’m told rats love chocolate sprinkles) depending on which of two sounds they heard. One sound would net them reward for going left, the other for going right. They found that while the rat was sleeping, if they played these tones they were able to get the rat to preferentially replay (or think about) running left or right in response to the left and right tones. Again, like in previous work, these replays were sequences of cells firing one after another. They were even able to estimate where the rat was running in its dream by matching up the replay to the sequences from when it was running in the enclosure. Here’s their graph showing this Inception-like dream-reading, possibly even colored to remind you of Inception:

[Figure: bias in the Bayesian decoding of replayed positions, colored panels for left vs. right cues]

(from http://www.nature.com/neuro/journal/v15/n10/fig_tab/nn.3203_F7.html …and IMDB)

I don’t want to go into details about their model, but the darker colors basically indicate that the neurons predict the rat to be on the right side of the track in this case. Again, this doesn’t tell us a ton about why events need to be replayed during dreams. But it does indicate that dreams can be influenced by external stimuli even while sleeping, which hints at your brain assigning valence to certain memories that it decides are important during sleep. Interestingly, all this ‘dream biasing’ happened during the rats’ slow-wave sleep (SWS). This is a bit confusing, since people usually remember dreams when awoken from REM sleep (again, the reason you tend to dream right before you wake up is that you get more and more REM sleep throughout the night). Apparently in humans it has been shown that biasing dreams doesn’t work during REM sleep, and even though you don’t necessarily remember dreams during SWS they’re still happening. This speaks to how ALL stages of sleep are important, and why all that crazytalk about the da Vinci sleeping pattern doesn’t make a lot of sense. But that’s really for another blog post.
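For the curious, here’s roughly what decoding a position from spikes involves, sketched in a few lines. Each cell is assumed to have a ‘place field’ (a preferred position where it fires most), and given the spike counts in a short window you ask which position best explains them under a Poisson firing model. All of the tuning curves and counts below are invented; this is a generic illustration, not the model from the paper.

```python
# Toy Bayesian position decoding from hippocampal spike counts. Each cell is
# assumed to fire as a Poisson process whose rate depends on the rat's position.
# All tuning curves and spike counts are made up for illustration.
import numpy as np

positions = np.linspace(0.0, 1.0, 50)        # 50 position bins along a linear track
centers = np.linspace(0.1, 0.9, 9)           # place-field centers of 9 cells
width, peak_rate, dt = 0.1, 20.0, 0.2        # field width, peak rate (Hz), window (s)

# rate[i, j] = expected firing rate of cell i when the rat is at position j
rate = peak_rate * np.exp(-((positions[None, :] - centers[:, None]) ** 2)
                          / (2 * width ** 2))

spikes = np.array([0, 0, 1, 3, 4, 2, 0, 0, 0])   # spike counts in one 200 ms event

# Poisson log-likelihood of those counts at each position (flat prior over bins)
log_like = (spikes[:, None] * np.log(rate * dt + 1e-9) - rate * dt).sum(axis=0)
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

print(f"decoded position: {positions[posterior.argmax()]:.2f} (on a 0-1 track)")
```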

Alright, so now that you’re up to speed on some recent dream research, you still probably don’t feel all that satisfied. Sure, dreams could be important for memory, but what’s the purpose of semi-consciously rehearsing only select events? And why are they so freakin’ weird? And what about all those stories of people having insights in dreams (e.g. Mendeleyev claimed the layout for the periodic table came to him in a dream), isn’t that related in some way? I’ll start on the speculation train in a moment, but first one more supercool study to help answer this last question.

Jan Born’s group has a paper from 2006 titled simply: “Sleep Inspires Insight.” They had people learn a numbers game where they had to predict the next numbers in a sequence based on previous strings of numbers. What they were not told was that there was a ‘hidden rule’ that, once discovered, would allow them to figure out the next numbers in the sequence considerably faster. And, as the title spoiled for you already, this rule was discovered significantly more often by people that were able to sleep on it for 8 hours versus people that did the task again after 8 hours of being awake. That is so cool—I love it when science backs up anecdotal intuition!

Now you can see where Matt Wilson was going with his idea about rats combining replays of different tracks together into single representations: he was trying to describe how these distinct memories could be combined and analyzed in an offline state while the rat was ‘dreaming’. This still doesn’t explain why the dreams have to be conscious, since a lot of our problem-solving takes place subconsciously, but it does assign them a useful enough purpose for us (and our wall-headbutting pets) to be actively rehearsing daily events during sleep.

I also think the word ‘offline’ is key in that previous paragraph. One idea is that your brain can only process so much remembered information when it’s currently engaged in inputting new information. Since, as I described before, the same bit of cortex that first experiences the memory is the place where the memory is stored long-term, there’s clearly going to be some interference screwing things up as your brain tries to pull double duty. For example, you probably have experienced how hard it is to remember how a song goes when loud music is playing. The reason is your auditory cortex is being engaged by the loud music, likely by many of the same cells that hold the memory of the previous song you’re trying to remember, and this makes pulling the song out of your head significantly more difficult than when everything is silent. It’s actually quite impressive you can do this at all—it speaks to the robustness of the cellular encoding process. But it would certainly be a lot easier for your brain to recall ‘true’ memories without distraction while nothing new is coming in–like when you’re sleeping. Further, it would be particularly useful for there to be no new interference coming in if you were trying to combine multiple memories together and form new insights. This includes stimuli like songs playing, but could also include such distractions as emotional reactions not necessarily relevant to a memory you experienced earlier.

This is in line with new work by Matt Walker (not Matt Wilson—I know it’s confusing) that postulates that REM sleep serves as ‘emotionally safe’ periods of reactivation. He found that people had reduced responses to emotionally charged photos after sleep, which also showed up as a reduction in activity in their emotional processing centers from fMRI scans. To directly quote him, “We know that during REM sleep there is a sharp decrease in levels of norepinephrine, a brain chemical associated with stress,” Walker said. “By reprocessing previous emotional experiences in this neuro-chemically safe environment…we wake up the next day, and those experiences have been softened in their emotional strength.” Could replaying such events with reduced emotions during dreams make ‘truer’, more objective memories?

Alright, over 2500 words later and maybe dreaming makes a bit more sense to you. As I said in the beginning (and harped on a few more times throughout), we still don’t really know why semi-consciously recalling daily events during sleep is useful. It’s even less clear why crazy, invented dreams occur. But the evidence at least points to some kind of process that both selects for important memories and does some problem-solving to boot while you’re not being distracted by new experiences. Further, more and more evidence seems to indicate that the brain is consistently biasing and predicting future events, so the possibility that these dreams might be some kind of exploration into the vast realms of ideas we form every day seems like a reasonable one.

*I stumbled upon this paper from David Redish’s group recently where they managed to show neuronal signatures of rats envisioning “shortcuts” on parts of a maze that they had never traversed. This is pretty good evidence that the brain is using hippocampal replays to combine past experiences to imagine future events, although it doesn’t necessarily say that dreams have anything to do with such problem-solving “nexting”.

Anosognosia and the Unknown Unknowns

This post is more of a long-winded suggestion to read something else.  That something else is very long, so it might take a bit of convincing.  This series of articles, entitled the Anosognosic’s Dilemma and written by well-known documentarian Errol Morris, is one of the best things I’ve read on the internet (and the internet is obviously the best place to read things, right?).  It’s long, but you should read it.  There are many, many interesting parts.  It’s largely about ‘unknown unknowns’; essentially: what you don’t even know you don’t know.  I’ll talk about a couple interesting parts of this five part essay below.  To keep you reading, some teasers: Part 1 is about a fascinating psychology study that more or less proved how ALL people (not just their mothers) think they’re slightly above average.  Part 2 is about how one of the presidents of the United States probably wasn’t mentally competent for over a year (I promise this isn’t a long-form GW Bush joke)!  Which one?!  Read on…

First, what is now called the Dunning-Kruger effect.  Pretty badass when you write a paper and they name the phenomenon after you.  Since I’m trying to be all scientific on here, I’ll go ahead and do a mini-synopsis of the actual paper.  It was published in 1999 by Cornell psychologist David Dunning and his graduate student Justin Kruger.  The study was designed to quantify a fairly everyday phenomenon: the idea that the majority of people think they’re above average (for example, I am amongst the majority that thinks of himself as an above average driver; I’m probably not).  Here’s a summary of the studies right from the paper:

“We explored these predictions in four studies.  In each, we presented participants with tests that assessed their ability in a domain in which knowledge, wisdom, or savvy was crucial: humor (Study 1), logical reasoning (Studies 2 and 4), and English grammar (Study 3).  We then asked participants to assess their ability and test performance.”

Obviously as I’ve talked about before test-taking is hardly an end-all-be-all, but as the authors are largely studying learned phenomena like ‘knowledge’ and ‘wisdom’ we’re not having an IQ debate; more an assessment of how good people are within the specific realm of each study.  And the subjects were dozens of Cornell University undergraduates, so we’re talking about a somewhat homogenous population of Ivy Leaguers that they’re divvying up into quartiles (obviously using the population at large might extend the absolute scores quite a bit, although the logic applies regardless).  Study 1 compared the undergraduates’ assessment of the funniness of jokes to those of 7 professional comedians.  Study 2 gave them 20 LSAT questions.  Study 3 was a 20 question test on grammar.  The 4th was another logic test, the Wason selection task.

The nice thing about the paper is that even though there are 4 separate studies, all 4 graphs look almost identical.  Here’s one:

[Figure: perceived ability and perceived test score vs. actual test performance, by quartile (Study 3)]

(source: doi: 10.1037/0022-3514.77.6.1121)

This graph is hilarious!  It’s the grammar one from Study 3 if you care.  It tested 84 subjects.  So, the average value you see in the bottom quartile is for the worst-performing 21 subjects.  They tested out at the 15th percentile, so the average person in that group did better than only 15% of the entire group.  The top 21 subjects answered better than 90% of the entire group.  The actual data (grey line) has a nice slope up, meaning the test did a pretty good job of separating people.

The hilarious part is how uniform the participants’ perceptions are.  All four groups, no matter how poorly or well (I almost said ‘bad’ or ‘good’—I probably belong in the bottom quartile) they actually did, on average thought they performed in the 60th-70th percentile.  They also perceived their overall ability to be around the 60th-70th percentile.  And as I said if you look at the other 3 graphs for the other 3 tests they’re all largely the same.  In fact, for all 4 studies none of the 16 quartiles ever gave themselves a perceived score below the 50th percentile (or above the 75th).  We all like to think we’re above average (but usually only a little above average; most of us have some humility).
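If you want a feel for how that figure gets made, here’s a little simulation: give everyone an actual test score plus a self-estimate that hovers around ‘a bit above average’, split people into quartiles by actual score, and average each quartile’s actual vs. perceived percentile.  The numbers are simulated, not the paper’s data, but the tabulation is the same basic idea.

```python
# Simulate the basic Dunning-Kruger tabulation: split people into quartiles by
# actual test score, then compare each quartile's actual percentile to the
# percentile they *think* they're at. Data are simulated, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 84
actual = rng.normal(0, 1, n)                                   # actual test scores
# Everyone guesses they're a bit above average, with only weak self-insight:
perceived_pct = np.clip(65 + 5 * actual + rng.normal(0, 10, n), 0, 100)

actual_pct = 100 * (np.argsort(np.argsort(actual)) + 0.5) / n  # true percentile rank
quartile = np.digitize(actual_pct, [25, 50, 75])               # 0 = bottom, 3 = top

for q in range(4):
    idx = quartile == q
    print(f"quartile {q + 1}: actual {actual_pct[idx].mean():5.1f}th percentile, "
          f"perceived {perceived_pct[idx].mean():5.1f}th percentile")
```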

While the data is cleaner than I would have imagined, the basic principles of the study make sense to me.  As humans we like to normalize ourselves to others, so sprinkle in a little confirmation bias and psychological optimism and you got a Dunning-Kruger stew going.  I think it says a lot about us as a species though.  Tying this back into the Errol Morris article that alerted me to this study, if you haven’t looked up what an anosognosic is yet, it’s someone with a psychological deficit in recognizing what he/she doesn’t understand.  An extreme example is people that cannot move one of their arms but don’t realize it.  You ask them to move their left arm, and they’ll pick it up with their right.  You accuse them of this cheat, and they claim they’re not doing it.  It seems unbelievable, but how our brain shapes our reality is not as objective as we like to think.  And this is the ‘anosognosic’s dilemma’: you don’t know what you don’t even know!  And as this paper by Kruger & Dunning shows, in this varied and complicated world of ours, we all probably know less than we think we do.

Now, if you made it this far, it’s probably only because you wanted to hear about our mysterious anosognosic president (or you skipped ahead.  Ctrl-F is a great hotkey).  I had never heard about this before from anywhere else, and frankly it just sounds like there wasn’t reporting back in the day like there is now for us to know what was going on.  And this anosognosic phenomenon likely shaped world history more than we’ll ever know.  I make it sound like this president must have been from a while ago.  But he was in office within the last century!

Directly borrowing from Errol’s article to further the build-up:

“. . . I waited up there until Doctor Grayson came, which was but a few minutes at most.  A little after nine, I should say, Doctor Grayson attempted to walk right in, but the door was locked.  He knocked quietly and, upon the door being opened, he entered.  I continued to wait in the outer hall. In about ten minutes Doctor Grayson came out and with raised arms said, “My God, the President is paralyzed!”

Woodrow Wilson had a stroke on October 2, 1919.  We have the day-to-day notes of his condition from his doctor, Admiral Cary T. Grayson.  He suffered from complete paralysis of the left side of his body and a loss of vision in the left field in both eyes.  He had difficulty swallowing and speaking from the left side due to paralysis in his throat/mouth.  He also incessantly recited limericks to his doctors.  He also denied there was anything wrong with him.

At the least read the 3rd part of this Errol Morris article to get the whole account.  It’s unbelievable (but actually believable).  Wilson’s anosognosia—not realizing he was not his true self—led to capricious behavior and the firing of many of his associates.  Meanwhile any talk of his potential disabilities was covered up.  Meaning the leader of the most powerful nation in the world (even in 1919) might well have not been of sound mind for well over a year of his presidency.  And hardly anyone knew.  This very well might have contributed to the failure of the League of Nations, which in turn might have helped set the stage for WWII.  This is all speculation now, much like our speculation on how disabled Woodrow Wilson really was post-stroke.  At the least, I’ve likely turned an unknown unknown for you into a known unknown; hopefully you can appreciate the intricacies of the anosognosia we all experience every day.  Your brain does NOT give you an objective account of the real world.

Sugar is…HFCS is…Sugar

Spent a little time reading about high fructose corn syrup (HFCS) after my cousin posted a few things on Facebook about its evils. Thought I might as well share a couple interesting things I learned.

I recently stopped drinking purple grape juice. I love (loved :_( ) purple grape juice. But after seeing a few scary (and seemingly reliable) things about sugar I’ve been trying to cut much of it from my diet. I stopped drinking soda a few years ago. And I finally just admitted that juice–100% or not–is no better.

But is this actually true? The anti-HFCS argument is that the enrichment in fructose makes it worse for you than sugar, the latter of which is ‘natural’. First off, refined sugar is not natural. It takes a bunch of processing and some absurd number of feet of sugar-beets to make that much sugar (I forgot the actual foot to grams conversion, I think it’s in here but am too lazy to listen again). Same with juice: it takes a lot of processing to get all that filling fiber out of there.

Second, what I just read about is how the ‘high’ in HFCS can be a misnomer. People think sucrose is ‘better’ than HFCS because your body breaks it down into 50% glucose & 50% fructose. Glucose triggers feedback responses that tell you to slow down; fructose’s are apparently much weaker [Edit: as @Pascallisch points out, there are a LOT of other things wrong with fructose, as Dr. Lustig talks about in the video I linked to above]. Now, according to Wiki, the kind of HFCS used in soft drinks (HFCS 55, which is conveniently a liquid for soda makers) is 57% fructose & 43% glucose. So, yes, the fructose % is higher, but not by that much. I doubt a 7% decrement in glucose is really going to make you stop eating that sugary treat.

But here’s the funny thing: the HFCS used in most food has LESS fructose than glucose. The most common form is really about 42% fructose & 58% glucose. Just like honey, actually. It’s still called HFCS (HFCS 42, specifically) because it’s still ‘high’ in fructose relative to regular corn syrup (which is essentially all glucose), but if you believe in the feedback mechanism stuff then it’s actually ‘better’ for you than sugar.

One final thing people often complain about with HFCS is this study (here’s a good summary) that showed obese people who ate 25% of their calories from fructose gained more bad (visceral) fat than obese people that ate 25% from glucose. However, they still both gained the same net amount of weight, just in different places. But here’s the thing: they each ate pure solutions of their respective sugar. HFCS, as I noted above, is not 100% fructose. It’s not even close. The name always made me think it was. Meaning that the results of this study only marginally apply to eating HFCS vs. regular sugar. And, as I said, if you’re not getting HFCS in soft drink form, it’s actually giving you less net fructose than sugar is.
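To put some rough numbers on it, here’s a back-of-the-envelope comparison using the approximate fructose fractions quoted above (these describe the sugar content only, ignoring the water in liquid syrups, and the exact percentages vary a bit by source):

```python
# Back-of-the-envelope fructose load per 40 g of sugars (roughly a 12 oz soda),
# using the approximate fructose fractions quoted above (of the sugars only).
sweeteners = {
    "sucrose (table sugar)": 0.50,
    "HFCS 55 (soft drinks)": 0.57,   # ~55-57% fructose depending on the source
    "HFCS 42 (most foods)":  0.42,
}

grams_of_sugar = 40
for name, fructose_fraction in sweeteners.items():
    print(f"{name:22s} -> {grams_of_sugar * fructose_fraction:4.1f} g fructose")
```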

Anyway, long story short, I’m not all about defending HFCS: ALL SUGAR IS BAD FOR YOU. And so tasty. Believe me, giving up (100% natural, haha) grape juice has been hard.

PS–N.b.: As Dave Chappelle aptly points out, as a stereotypical white guy, I was into grape juice, not grape drink.

How your glia make mice—and maybe you (and maybe Einstein)—smarter

It would be interesting to take a poll of non-neuroscientists on whether or not they are aware of glia.  I’m guessing <5% of people know they are the other half of the ~170 billion cells in your brain (besides neurons).  I for one had never heard of them until I took my first (and only, actually) neuroscience course.  And it turns out that since Prof. Lubischer (whose splendid class helped further my interest in neuroscience) studied glia, she gave us a running list of all the jobs they performed in the brain.  It’s probably a bit outdated 5 years later, but for those interested I put all 12 of them at the end of this post.

I think the main reason hardly anyone knows about glia is that they aren’t so easy to study.  Unlike neurons, which scream to be measured with their significant and persistent electrical discharges, glia act largely in a support role—almost like little helper robots to the master neurons.  Their main functions involve regulating ions, neurotransmitter levels and therefore the electrical signals of neurons themselves at various key points (e.g. along the neuronal axon, at the synapses between neurons).  And since these little chemists work with ~attoliter amounts of product, measuring their effects is tricky.

Now, for the crazy, mad scientist stuff I hinted at in the title.  Despite their passive role relative to neurons, glia potentially can have effects on a cognitive level.  In a paper published this year in Cell Stem Cell, a group of neuroscientists led by Stephen Goldman and Maiken Nedergaard at the U. of Rochester Medical Center took human glial progenitor cells (straight from ~5 month old aborted fetuses) and grafted them into the brains of immunodeficient (I presume so the mouse brain wouldn’t attack the invading human glia) baby mice.  Humans have notably larger and, frankly, better glia than mice when it comes to their role in regulating ions and speeding up neurotransmission.  Therefore, they probably went in thinking the human glia might alter the performance of the native mouse neurons.

The left picture below shows some of these human glia (in green; the nuclei of human glia are in white) that were incorporated into my favorite part of the brain: the hippocampus.  The authors note the human cells—after 14 months in the mouse brain—are particularly enriched in the dentate gyrus region of the mouse hippocampus.  Dentate gyrus granule neurons are the strong c-shaped band of blue cells in the left figure.  These neurons are one of only two groups that can be newly born in mammalian adults and are heavily implicated in memory and emotions (I study this region in primates as it is also thought to be an essential brain region for how we separately encode new memories).  Human glia were also present, to a lesser degree, throughout the mouse cortex (the cortex is kind of like a heating helmet for the important hippocampus underneath—haha I’m just kidding. ALL of the brain is important).

[Figure: human glia (green, nuclei in white) engrafted into the mouse hippocampus; human vs. mouse astrocyte size comparison]

(from http://dx.doi.org/10.1016/j.stem.2012.12.015)

In the right picture are a few of these human glia (green) compared to the native mouse glia (red arrows)—specifically, astrocytes.  As you can see, this subclass of glia gets its name from its distinctive star-like shape.  I’ll largely refer to the human glia as human astrocytes from here on since the authors used this obvious morphology to select for glial progenitor cells that had ‘grown up’ within the mouse brain into developed astrocytes.  And when these human astrocytes did grow up, they managed to grow to the size regular astrocytes do in the human brain—much bigger than the native mouse ones (see middle graph). This confirmed previous work that had shown how human glia incorporated themselves into mouse brains, performed a usual glia task as I explained above (in this previous case—insulating ‘leaky’ axons to heal congenitally deformed mice), and maintained the typical size and morphology normally seen in human brains.  Basically: the human astrocytes are like the honey badger—they don’t care what brain they’re in, they’re gonna go regulate some neurons!  Already kinda sweet, right?

Well it gets better.  A quick overview of some nitty-gritty details: the authors then went about seeing if there were any biophysical changes in the human vs. mouse astrocytes by studying slices from the postmortem chimeric mouse hippocampus (a common in vitro method).  Indeed, they found waves of calcium (an important ion that astrocytes use to regulate neuronal transmission) were 3x faster in the human glia compared to the mouse glia. They also found excitatory postsynaptic potentials (EPSPs) were stronger in the chimeric brains.  Further, long-term potentiation (LTP)—a commonly measured signal in the hippocampus that is important in learning and forming memories through the strengthening of neuronal connections—was enriched in the mice with human astrocytes.  The authors were even able to show how the human astrocytes specifically did it: by releasing a chemical called TNFα that told the neurons to make more excitatory receptors (the place where neurotransmitters have their effect—more receptors equals easier excitation, hence the boost in LTP).  The human astrocytes essentially were able to improve the ability of the mouse brain to transmit signals.

So, you can probably guess what’s next: test these super-brained mice on some typical mousey tasks and see if their more potent brains lead to cognitive enhancements.  And amazingly: they did!  Chimeric mice showed better memory for a context they had previously been shocked in vs. a similar one where they had not: indicative of a learned, hippocampal memory.  In another test of hippocampally-dependent learning, the chimeric mice achieved greater success in the Barnes maze (where the mice must remember the location of a small, dark escape hole–mice hate open, well-lighted places.  And Hemingway.).  Finally, a third test of hippocampal learning showed the chimeric mice were better able to recognize a familiar object in a novel location.  These three tasks might not sound that exciting to prove your supermouse’s worth, but they’re standard tests that depend on the hippocampus.  Mice with lesioned hippocampi perform worse on such tasks.  Interestingly, these tests are similar to those used to screen for antidepressants (that often then work when given to humans—we’re not so concerned about mouse depression) as the hippocampus is also tied into emotional centers of the brain.

Crazy/sexy/cool right?  Okay maybe just 1&3.  But what does this mean for you, science-interested human that made it this far?  For one thing, we can no longer just think of glia as passive, boring helpers.  They’re more like strong lobbyists that may be essential for our cognitive function!  And since all it took to enhance the learning and memory of mice was the strategic deployment of one chemical (TNFα), we can now start to target both this cytokine and glial functionality in disorders of cognitive processes.  Even better, Stephen Goldman’s lab has been able to induce pluripotent human stem cells from skin cells.  This not only eliminates the need for fetal stem cells, but also allows for stem cells already tailored for a specific person to be created (using foreign stem cells could cause an immune response; hence the use of immunodeficient mice in this study).  His lab is reportedly already using this system to study mouse models of schizophrenia and Huntington’s, and I’d bet you a greenback that Alzheimer’s won’t be far behind.  Depression seems like an awesome candidate for glial therapy as well.  Practically no new class of antidepressants has been made for 50 years—most work has concentrated on the same few monoamines in the same neuronal pathway.  But it is possible that glia could be used to modulate any number of factors, and do so with the attoscale specificity that popping in a pill of Prozac lacks.

Finally, as a fun aside, this isn’t the first time glia have been implicated in improving cognitive function.  Specifically, Marian Diamond’s lab analyzed tissue from Albert Einstein’s brain and found he had an enhanced number of glia compared to an average person.  This study was done a while ago and is under some debate (and is possibly not the only abnormality in Einstein’s brain), but it makes for a fun story.  Plus, when research started indicating glia could send long-distance chemical signals to each other, their potential role in cognitive functions was no longer that crazy.  I, for one, welcome our new glia overlords.  And can’t wait to hear about more glial therapies.

List of glial functions:

Glia can…

…regulate the extracellular milieu (e.g. potassium buffering).

…influence axonal propagation through myelin.

…direct localization of membrane proteins.

…uptake neurotransmitters at synapses.

…express neurotransmitter receptors and directly modulate synaptic transmission.

…synthesize neurosteroids.

…provide guidance for migrating neuroblasts (specifically radial glia).

…provide chemical clues for axonal pathfinding.

…modulate synaptogenesis.

…make trophic factors for neurons.

…mediate some forms of synaptic plasticity.

…influence the neuronal response to injury and disease.

Problems with using “IQ” in…anything

In the interest of actively keeping a blog, I thought I’d elaborate on a topic I touched on in my last post about the potential perils of eugenics.  In particular: inborn ‘intelligence’.  Hopefully in the future I’ll keep these relatively short so people actually care enough to read them.  This one goes on a tad (spoiler alert!).

First off, as a disclaimer, I don’t study intelligence.  I study memory and learning.  I do study cognitive processes though and I guess learning is tied in pretty heavily with intelligence studies.  Anyway, my point is I think my arguments here are ripe to be argued.  And since I am not an intelligence expert, some passersby might find what I say unoriginal and under-cited.  I welcome comments.

Part of what spurred this post was an interesting link to a graph tweeted by my friend and NYU CNS colleague Pascal Wallisch.  I have not discussed this with him yet—so am not actually sure of his intent in this case—but suspect HE is a bit suspect (maybe I should use fewer homonyms in the same sentence for those foreign readers out there?) of what this graph really measures.  I went ahead and embedded it here:

[Figure: motion suppression index plotted against IQ score]

(Citation: http://dx.doi.org/10.1016/j.cub.2013.04.053)

As you can see, this graph shows a clear correlation between people’s ability to suppress background motion and their scores on a “standard” IQ test.  The interpretation in the linked write-up is that suppressing background motion is very similar to ignoring distractors, specifically in this case ignoring the world at large around you while focusing on smaller objects in the foreground.  On its face this makes a bit of sense: separating the wheat from the chaff is an important everyday task that successful people are likely to excel at.  But (I should probably just call this blog “but…”) there are so many confounding factors here (as there can be with any correlation without evidence of causation).  I’d like to explain what I think some major ones are as exemplars for why ‘intelligence’ isn’t so easily quantifiable.  I should point out that there’s nothing wrong with this study per se; I’m more using it as an example of the perils of correlating anything to IQ as a “standard” measure.

1.) The obvious one first: measuring one’s IQ.  I haven’t taken any full, true IQ tests, but think I get the gist from my CAPT/PSAT/SAT/GRE days and various fun things I’ve come across online like Wonderlic and IQ test examples.  And I hardly need to have taken any to tell you the problem: they’re written by people just like me.  College-educated fellow native-born Americans.  I’d even venture that they’re largely written by white guys living well above the poverty line.  And since my Venn diagram of experience is bound to overlap quite highly with these test-writing folks, I have a natural advantage on the tests they create.  The flip side is the claim I’m making here: doing well on a test written by someone quite unlike you (this again) is much harder, simply because you have a harder time understanding where it’s coming from (are double negatives a sign of intelligence?  Shakespeare used them, right?).

A further problem with the IQ-quantification-by-test fallacy: you can get better at them.  This is certainly the reason for SAT/LSAT/all-AT preparatory courses and I have little doubt it’s true for any ‘aptitude’ test.  Practice in the realm of the test—whether spatial or linguistic or mathematical—or even just in the form of similar test questions themselves is a huge component of how well you’ll score.  So, bringing this back to the graph above, any stated “IQ” is a rather arbitrary measure, and is likely determined more by environmental factors (like how or where you were brought up) than anything biologically innate.

2.) I get the impression that what the authors of these kinds of works are trying to get at, though, IS this biological innateness.  I think that using a visual task is hardly a way to do this, though, if your end goal is to inform on intelligence.  You know what makes a great student?  One who isn’t distracted by the background environment when they’re taking a test.  Or when they’re studying for a test.  Or when they’re learning.  Or when they’re trying to enhance how many marshmallows they’re given.  But are people less ‘smart’ because they can’t force themselves to sit down and concentrate on some rather arbitrary measures of their spatial/linguistic/marshmallow-withholding/whatever qualities?  Certainly not.  For whatever reason, biological or environmental (or nonshared environmental—I can think of so many random, unlucky things that could negatively bias people in test-taking situations), concentrating on the specified goal at hand is not one of their stronger abilities.  So for these people, OF COURSE their poor IQ would correlate with an inability to suppress background motion: you’re just seeing the same thing measured twice (or as Ben Gibbard would say, different names for the same thing).
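To see how easily ‘the same thing measured twice’ produces a tidy-looking correlation, here’s a toy simulation: give everyone a single underlying ‘ability to concentrate on arbitrary lab tasks’, generate both an IQ score and a motion-suppression score as noisy readouts of that one trait, and the two readouts correlate nicely even though neither causes the other.  Every number here is invented.

```python
# Toy demonstration that two noisy readouts of one underlying trait (here,
# "ability to concentrate on arbitrary lab tasks") correlate with each other
# even though neither causes the other. Every number is invented.
import numpy as np

rng = np.random.default_rng(42)
n = 60                                              # made-up sample size

concentration = rng.normal(0, 1, n)                 # the shared latent trait
iq_score = 100 + 15 * (0.7 * concentration + 0.7 * rng.normal(0, 1, n))
motion_suppression = 0.7 * concentration + 0.7 * rng.normal(0, 1, n)

r = np.corrcoef(iq_score, motion_suppression)[0, 1]
print(f"correlation between 'IQ' and motion suppression: r = {r:.2f}")
```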

3.) Now, for the speculative (and most fun!) part.  Let’s say on the graph above, instead of “suppression of large moving objects” on the Y-axis it said ‘reaction time at completing eye fixation task’.  There are certainly studies on this (here’s an interesting recent one from a commenter on the Wiring the Brain blog).  Now we’re getting at something seemingly more innate.  I get the impression what IQ-jockeys want to hang their hat on is the idea that ‘smart people’ are wired better and think quicker, and that this is where the genetic component of IQ comes into play.  And, for the sake of argument, let’s assume this trend—namely, that reaction times correlate to IQ—exists for people with a similar environmental background.  Like similarly-educated and reared people from the same place.  Or, even better, young babies, wherein you’re hoping to minimize the environmental factors.  In these cases: does an innate (almost wholly genetic) ability to complete a fixation task make them smarter?  Or more prepared for any and all tasks that humans need to accomplish?

I most certainly think not.  Yeah if you’re pushing tin for a living it probably helps to have an innate knack for inputting spatial info a little faster.  But there are so many other things that weigh into how ‘smart’ our decisions are.  It’s entirely possible a faster reaction to such stimuli limits the amount of bandwidth being used for a decision.  Sure for a simple task like looking at where a dot moves this isn’t going to matter.  There are certainly many situations where a too-fast decision without sufficient time for processing could come at a cost though.  In fact, your brain already limits the stream of components that contribute to your consciousness.  It doesn’t separately inform you of what you’re hearing, seeing and feeling in succession.  It accumulates this information, writes a little report on it and then passes it on to upper management only AFTER multiple subsystems pare out what they consider useless information.

A great related example comes to mind: chimpanzees KICK OUR ASS at short-term memory tasks. In one test, information was flashed very briefly and the chimps recalled it far better than some slow-brained university students (of course I’m being facetious, Kyoto University has a rather good reputation). It took us a long time to design a task that would reveal this—probably because we knew it would hurt our anthropocentric egos—but chimpanzees have brains better wired for such tasks. We hardly think they’re going to start writing novels, however. The quick assimilation of visual information is just one of many streams of input, and just because they answer faster in a relatively complicated task certainly doesn’t give them a leg up on us in a whole bunch of other things.

In conclusion, these are the kinds of problems with measuring innate “intelligence” via tasks like the one in the graph above. You could just be measuring spurious correlations. Or you could be measuring something hardly indicative of what we’d consider important for success. I’d be fascinated to know if ‘successful’ (not IQ-high) people have faster reaction times. Don’t get me wrong: more data gives us a more complete understanding of our brain, so I hardly think this kind of research is a waste of time (for example, reaction times may prove useful in diagnosing Alzheimer’s). But did Charles Darwin have enhanced visual reaction times? (ok, maybe—he was a birdwatcher). Did Albert Einstein? Did Virginia Woolf (speaking of stream of consciousness)? I don’t know. And even if they did: did they also have them as babies? And most importantly: was this crucial to their success? My suspicion is that what made them geniuses in their chosen fields were brains developed to handle and manipulate multiple high-level processes at once, not brains that just spit out answers to simple tasks a hair quicker than the average person (but not a chimpanzee).
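To make the ‘same thing measured twice’ point from #2 concrete, here’s a minimal toy simulation (entirely my own, with made-up numbers, not anything from the actual study behind the graph): give everyone a single ‘ability to concentrate on arbitrary tasks’ score, let both the IQ test and the motion-suppression task depend on it, and the two measures correlate just fine even though nothing resembling innate intelligence connects them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# One latent "ability to concentrate on arbitrary tasks" score per person (hypothetical)
concentration = rng.normal(size=n)

# Both measures lean on concentration plus their own independent noise;
# there is no shared "innate intelligence" term linking them.
iq_score    = 0.7 * concentration + rng.normal(scale=0.7, size=n)
suppression = 0.7 * concentration + rng.normal(scale=0.7, size=n)

print(np.corrcoef(iq_score, suppression)[0, 1])  # ~0.5, driven entirely by the shared factor
```

That’s all a spurious correlation needs: one shared factor sitting upstream of both measurements.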

Some thoughts on the newfangled eugenics stuff

I decided that there was an unelaborated side to this recent eugenics debate that’s been cropping up on the twittersphere. As a brief summary, a few people (@matingmind [Geoffrey Miller] and @razibkhan) have been defending recent work on “China engineering genius babies”. Vice might have overstepped a bit with the phrasing—no surprise there—but reading that interview with @matingmind leaves a pretty bad taste in one’s mouth. I don’t want this to be a personal attack, so I’ll leave the immodesty stuff out of this, but I have some nits to pick.

To start off, Kevin Mitchell already has a great post on the underlying genetics of trying to select for ‘intelligence’. If you haven’t read it, you should probably go read that first (here’s another good post from Ed Yong). He describes how traits—particularly advanced cognitive ones—are likely influenced by thousands of mutations. Some of these could be quite rare, making the basic task of selecting for any of them Herculean. To his credit, Razib Khan did concede this point, saying “It makes sense to be skeptical of the scientific possibilities in the near to medium term.”
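Just to put a rough number on ‘Herculean’, here’s a back-of-the-envelope sketch (the figures are my own illustrative assumptions, not anything from Mitchell’s or Yong’s posts): suppose a polygenic predictor captured 5% of the variance in some cognitive trait and you picked the best-scoring of 10 embryos. Standard selection math says the expected payoff is modest at best.

```python
import numpy as np

rng = np.random.default_rng(0)

n_embryos = 10        # embryos to choose among (assumed)
r2 = 0.05             # fraction of trait variance the predictor explains (assumed)
n_trials = 100_000    # Monte Carlo repetitions

# Rank candidates by predictor score; the expected trait advantage of the top scorer,
# in population standard deviations, is sqrt(r2) * E[max of n_embryos standard normals].
scores = rng.normal(size=(n_trials, n_embryos))
expected_gain = np.sqrt(r2) * scores.max(axis=1).mean()
print(f"expected gain: ~{expected_gain:.2f} SD")  # about 0.34 SD with these assumptions
```

And that’s being generous, since siblings vary less among themselves than the full population does, which would shrink the gain further.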

For my first point, I wanted to add something from the scientific perspective: I’ve recently had my eyes opened on the nature vs. nurture stuff. Sure, we’ve all had it beaten into our heads that it’s not nature vs. nurture, it’s nature AND nurture and different proportions of each. But here’s the thing: there’s a (sometimes HUGE) third component. As explained by Eric Turkheimer in this fascinating paper, the ‘nonshared environment’ of an organism will have a large impact on how it turns out. Here’s a key snippet:

“Why don’t identical twins raised in the same family have identical outcomes? There are two important reasons. The first is measurement error. The second is the self-determinative ability of humans to chart a course for their own lives, constrained but not determined by the genes, family, and culture, and in response to the vagaries of environmental experience with which they are presented. The nonshared environment, in a phrase, is free will.”

And the more complicated a trait is, the more this nonshared environment variable is going to come into play. Your nonshared environment isn’t going to influence your height much. But it has the potential to have a large impact on your personality. You can certainly imagine a person who has a bunch of fluky things go wrong turning into an embittered individual. This isn’t just conjecture anymore either, as there’s a brand new study by Gerd Kempermann’s group that quantifies this in genetically identical mice that grow up in the same place (hat tip to Virginia Hughes for the explanation). In short, stochasticity plays a role in who you are. From the quantum mechanical level on up, there is a lot of randomness to screw up the best-laid genes of mice and men. You might still call this ‘nurture’, but it’s nurture at a different level. Or you might just call it ‘shit happens’.
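If you want to see Turkheimer’s point as arithmetic, here’s a minimal sketch of the standard twin-study decomposition (the variance numbers are toy values I picked for illustration, not estimates from Turkheimer or Kempermann): an outcome gets contributions from genes (A), shared environment (C) and nonshared environment (E). Identical twins reared together share A and C, so whatever differs between them is E plus measurement error, and the bigger E is for a trait, the further the twins drift apart.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 50_000

def twin_sd_of_differences(var_E):
    """Simulate identical twin pairs reared together: outcome = A + C + E."""
    A = rng.normal(scale=np.sqrt(0.5), size=n_pairs)   # genes, identical within a pair
    C = rng.normal(scale=np.sqrt(0.2), size=n_pairs)   # shared (family) environment
    twin1 = A + C + rng.normal(scale=np.sqrt(var_E), size=n_pairs)  # nonshared environment
    twin2 = A + C + rng.normal(scale=np.sqrt(var_E), size=n_pairs)
    return np.std(twin1 - twin2)   # within-pair spread comes entirely from E

print(twin_sd_of_differences(var_E=0.05))  # "height-like" trait: twins land very close
print(twin_sd_of_differences(var_E=0.50))  # "personality-like" trait: much wider spread
```

Same genes, same household, different outcomes: that’s the ‘shit happens’ term doing the work.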

So, bringing this back to the eugenics debate: not only is it hard to measure an advanced trait like intelligence (even if you believed in one test of it), but plenty can still go wrong even if you COULD select for some kind of ‘smart’ gene.

This gets at my second point: how I interpret ‘intelligence’ as a neuroscientist, and why trying to ‘improve’ humans is totally misguided.

I’m probably being simplistic, but eugenics made sense when I was 12. Back then I thought intelligence was a quantifiable thing. Some people were really smart and hey, there are tests to prove it! But, as anyone who reads about these kinds of things knows, it is not so simple. There are so many environmental factors and, personally, I now know how lucky I had it to grow up as a white male in America. This of course leads to the ways intelligence testing itself is flawed. And if you don’t agree with the comic, here’s a great post from Frans de Waal about how even our studies of animal intelligence have been misguided, and how animals’ capabilities are a lot more nuanced than we realized.

Again, I don’t want to get into the Howard Gardner stuff and make a bunch of hippie arguments about ‘no one can even know anything’, but here’s a simple example I like to tell people: LeBron James is a genius. Have you seen the way he drives to the hole? No one else his size has ever moved like that on a basketball court (or maybe anywhere). Just because he probably couldn’t pass a Mensa test, does he not still possess a staggering intelligence in his chosen field? He’s also a pretty good public speaker (maybe not always content-wise), so he hardly has a one-track mind.

Now, let’s bring this back to eugenics. And we’ll use my favorite movie as an example: Gattaca. In case you haven’t seen it, the basic premise of Gattaca is that even in a world where people are selected for and typecast based on ‘superior’ genetics, the human spirit can overcome such boundaries. This is epitomized by a few tremendous scenes where Ethan Hawke’s character, the natural-born runt of the litter, defeats his genetically-selected-for brother in harrowing games of chicken based on who can swim the farthest out into the ocean. So, let’s say my imaginary eugenicist friend (not really my friend) points out: ‘well, we can just select for the traits that gave Ethan Hawke’s character his enhanced human spirit!’ On its face that doesn’t sound so bad. Successful people are motivated and have some gumption; heck, I’d argue that’s most of what comprises intelligence (does one read a lot because one is smart, or is one smart because one is motivated to read?).

BUT is this really what we want? I can just imagine selecting for a bunch of Ethan Hawkes. Sure, THAT one in Gattaca was a good guy, but you know what you get when you take an average guy with a lot of gumption? A hedge fund douche. Suddenly we’d be selecting for a bunch of competitive people, and with all the different ways our environments can shape us (both by chance and by the families/places we’re raised in), you’re going to get some good ones and some bad ones. And then you’re going to get the housing crisis.

So the answer is NO. No matter what trait you come up with, there are problems. Further, that evolution stuff works prettttty well. Creating a spectrum of humans with a spectrum of strengths and weaknesses is what makes the world go ‘round. Limiting people by selecting for some kind of trait that maximizes ‘g’ is truly that: limiting.

I should point out: I’m not against trying to weed out mutations for debilitating diseases. No one should have to live through Huntington’s if it can be avoided. But when @razibkhan says “So sorry to turn this upside down, but personal eugenics may in fact be a boon for the ugly, stupid, and psychologically unstable, because it gives them a opportunity to close much of the gap with those who were lucky in the genetic lottery”, we’re dealing with some very QUALITATIVE traits that make humans what they are. Having to overcome adversity is part of what makes Ethan Hawke’s character swim farther, and part of that adversity came because he did NOT win the genetic lottery. That was kinda the point of the movie. As for psychological instability, it is not easy to quantify, and many of the great works and creations of all time came from people on the edge. You want to select AGAINST this? If we had done so all along, we probably wouldn’t be the advanced human race we are today.

So, to sum up, when @matingmind says “Obviously you should make babies genetically healthier, happier, and brighter,” I’m willing to agree with the first of those (physically, at least). Hopefully, as I’ve explained here, you’ll understand why I think the latter two are fool’s errands, ones that will probably harm humankind.