Is there really such a thing as a lazy person?

I recently read an interesting post titled "Laziness Does Not Exist" by social psychologist Devon Price. I was fairly skeptical on first read, but it's definitely a thought-provoking idea. A quick summary is that you have to consider the context before you assume someone is lazy, since you don't know what (mental/physical/familial/bureaucratic/situational/etc.) barriers they are facing. They use procrastination as an everyday example of why some of their students might at first blush appear "lazy." It's pretty unlikely that the graduate students taking their classes would be in those classes if they were inherently lazy, but once you consider reasons like 1) the anxiety of measuring up to a classroom of fellow graduate students, 2) confusion about how to start a project involving upper-level graduate material, or 3) any number of outside influences like mental health or family problems, you can begin to understand why, from the teacher's perspective, some people seem hard-working and others seem unmotivated. As someone who is constantly worried about free-loaders and people getting undue advantages, I'll admit this is challenging to my own worldview.

Anyway, read the piece if you want a fuller elaboration. I began to think of it again when I recently read a Twitter thread/Bloomberg piece from economist Noah Smith arguing that "poverty is NOT mainly caused by bad behavior." I often disagree with Smith since he's one of those 'capitalism in his bones' types of economists, but I really appreciate the case he builds here. Briefly, he argues that many Americans think people become poor through some combination of "laziness, criminality, or irresponsibility." And we can assess these factors by measuring things like "violence, drugs, out-of-wedlock births, and unwillingness to work." So, if we find a place that scores relatively low on those metrics, does it no longer have poor people?

As you may have guessed from the title: Japan is a perfect example of such a place. It's low in violence:

[Figure: violence rates in Japan compared to other countries]

people don't do drugs, single parenthood is relatively rare (4.6 times lower than in the US), and Japanese people have higher employment rates and work longer hours than Americans.

And guess what? Japan still has a poverty problem. It’s slightly better than the US, but significantly worse than other comparable countries:

[Figure: poverty rates in Japan, the US, and other comparable countries]

Smith continues with more evidence of Japan’s poverty problem if you’re interested.

In conclusion: even in a country where laziness and the other indicators of bad behavior largely don't exist, poverty is still rampant. This really blows a hole in the conservative trope that poor people should just work harder. And, in my mind, it lends quantitative support to Devon Price's idea that "laziness does not exist".

Now that I’ve gotten all that quantitative gobbledygook out of the way, a thought experiment came to mind. Imagine a litter of puppies. After a few months of motor development they start playing like puppies do by roughhousing and other physical interactions.

[Image: a litter of puppies playing]

Now imagine you see one of the puppies doesn't do this. When the other puppies are playing, he avoids all contact, doesn't roughhouse with his littermates, and sits off to the side. Would you consider this puppy lazy? Would you jump to the conclusion that some puppies are just inherently lazier than others and leave him to his lot? I don't think so. I imagine you'd try to get that puppy to interact with his mates like a normal puppy. And if you were unsuccessful, I bet you'd take him to the vet to try to figure out what the problem is. Certainly it can't be the puppy's choice; no, you'd attribute it to something being physically wrong with him.

Which ties back into both of the above pieces. If you think puppies are a poor proxy, all the same arguments could be made for toddlers. Maybe you'd say this is more of a semantic argument about what "lazy" means, but my point (along with Price's and Smith's) is that what we consider laziness is hardly an intrinsic trait. Most of the variability in laziness can be explained by upbringing, environment, and situation. Like a puppy that wants to play, it is in our nature to interact with the world and find our place in it, not to make a conscious choice to do nothing.

 

 

 

What if memories are stored WITHIN neurons???

For those in the know: this is a bold claim! Neuroscience dogma holds that memories are stored (somehow) within the synapses between neurons, and that memories become stronger or weaker with the concomitant strengthening and weakening of those synapses. However, there is surprisingly little direct evidence for this largely unquestioned theory. We have solid proof that small groups of neurons can form a memory, so the engram must be somewhere within that circuit! But does it really make sense for it to be held in the synapses between those neurons? Charles Gallistel (who admittedly is in the bold-idea phase of a legitimately impressive career) was willing to take a stand and say no: from a computational perspective it doesn't make sense for memories to be stored in synapses. He points out that "No code has been proposed for use with plastic synapses" and cites seminal work from the father of information theory himself in asserting: "We cannot have a computationally relevant understanding of the hypothesized synaptic engram until we have a testable hypothesis about the code by which altered synaptic conductances specify numbers."

An alternative possibility for how the brain stores memories is within the intrinsic machinery of neurons themselves. After all, we certainly know there are codes within cells capable of storing and reading out huge amounts of information (hint, it starts with a D ‘n ends with ‘n A). But is there any evidence that single neurons can hold memories?

I'm glad you asked! Here I'll briefly go over the results of Johansson et al. (PNAS 2014), which marks a triumphant culmination of over two decades of research by Germund Hesslow studying the formation of conditioned memories in ferret Purkinje neurons. I presented this paper in lab meeting and we found the results hard to question. In fact, considering the iconoclastic nature of the findings, I'm surprised more people don't know about this paper. It only has 59 citations since its publication in 2014, and literally every neuroscientist I've ever brought it up to hasn't heard of it (admittedly a limited sample size; let's face it, I don't have that many neurofriends). Meanwhile, Gallistel's "The Coding Question" TICS piece that features Johansson et al. has only 7 citations. So I'm not even sure if people take this paper or Gallistel seriously. In fact, I'd love to hear opinions to the contrary! As it is, I'll summarize where this paper came from and its relatively straightforward results to help people understand the evidence showing that the memory for a conditioned length of time can be stored within a single neuron.

First, to understand the results, it’s important to understand conditioned and unconditioned stimuli (CS and US, respectively). I never found these terms particularly intuitive, so it helps to use a real-life example. In previous work, the CS is a touch to the ferret’s paw, while the US is a puff of air on its eye. You then create a temporal memory by doing the CS first (touch!), waiting a couple hundred milliseconds, and then doing the US (puff!). After a number of trials, the ferret implicitly learns to close its eye after the touch (about 50 milliseconds before the expected air puff in anticipation of this negative stimulus).

As a second piece of key background info, let me introduce the key neural signature of this memory. Hesslow did this experiment all the way back in 1994, and found that single Purkinje neurons would silence their action potentials for the exact stretch of time from the touch of the ferret's paw (the CS) until roughly when the air puff was due (the US), i.e., when the ferret would start to close its eye. A typical neuron's response looks like this:

[Figure: a typical Purkinje cell response, showing the pause in firing between the CS and US]

You can see the clear conditioned response in the Purkinje cell from just after the CS (which begins at the left of the blue bar) until just before the US, as action potential firing is inhibited during this time period. The pause in firing during this key period made researchers think that these neurons were part of the circuit that forms the memory necessary to keep track of this time gap between the CS and US. Further work over the next few decades has helped to confirm the necessity of these Purkinje cells in such conditioned memories, as using either optogenetic or chemical methods to stop the CS-US pause in the Purkinje cell firing coincides with removal of the behavior (in this case, touching the ferret paw would no longer lead to the just-trained eye blink).
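If the idea of "a pause in firing" feels abstract, here's a minimal sketch (my own toy simulation in Python with made-up numbers, not the authors' analysis) of what quantifying it could look like: simulate a Purkinje-like cell that goes quiet during the CS-US interval, then compare its firing rate in that window to baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(baseline_hz=60.0, pause_start=0.0, pause_end=0.2,
                   pause_hz=5.0, t_start=-0.5, t_end=0.7, dt=0.001):
    """Return spike times (s, relative to CS onset) for one simulated trial."""
    t = np.arange(t_start, t_end, dt)
    rate = np.where((t >= pause_start) & (t < pause_end), pause_hz, baseline_hz)
    return t[rng.random(t.size) < rate * dt]   # crude Poisson-like spiking

def mean_rate(trials, window):
    """Average firing rate (Hz) inside a (start, end) window across trials."""
    start, end = window
    counts = [np.sum((s >= start) & (s < end)) for s in trials]
    return np.mean(counts) / (end - start)

trials = [simulate_trial() for _ in range(50)]
print(f"baseline rate: {mean_rate(trials, (-0.5, 0.0)):.1f} Hz")
print(f"CS-US window:  {mean_rate(trials, (0.0, 0.2)):.1f} Hz")
# A big drop in the CS-US window is the 'pause' signature described above.
```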

Now that we've gone over this background, we can talk about the result that's crazy enough for me to blog about. Over a few decades of experiments, as you can tell from the paper trail of the Hesslow lab, researchers kept creating the CS and US by stimulating closer and closer to the actual Purkinje cell itself. Remember, the first experiment involved touching the paw and blowing on the eye. Well, if you go a few synapses up from the paw (♫ the paw neuron's connected to the: spinal neuron; the spinal neuron's connected to the: brain neuron ♫), the neurons that touch the Purkinje cells on one side are parallel fibers from cerebellar granule cells. And if you go a few synapses up from the eye, the neurons that touch the Purkinje cell on the other side are climbing fibers from the olivary nucleus.

The names and regions aren't important; what is important is that we can now do three things simultaneously: 1) create the CS by sticking an electrode on one side of the Purkinje cell and zapping it (consider it a fake paw touch), 2) create the US by sticking an electrode on the other side of the cell and zapping it (consider it a fake air puff), and 3) record the electrical signal from the Purkinje cell itself while this experiment happens. And what do they see? The same pause in firing that was seen when they used the paw touch and the air puff as the CS and US:

[Figure: Purkinje cell response to direct parallel fiber (CS) and climbing fiber (US) stimulation, showing the same learned pause in firing]

The blue box represents the expected 200 millisecond pause in firing between the CS and US. You might have noticed that in this case they also injected gabazine during the experiment. This was to control for one more possible confound: maybe when they zap the parallel fiber to create the CS, local inhibitory neurons connected to the parallel fiber silence the whole region (autoinhibition of this sort could cause a fake pause). I'll ignore the remaining details, but this figure shows that even with gabazine, which was shown to stop this local inhibition, the key CS-US pause in the Purkinje cell was still there!

Okay, so at this point you might be wondering: soooooooo? Allow me to contextualize. Somehow, these Purkinje cells are holding a temporal memory; in this case, a 200 ms pause in firing. Numerous computational neuroscientists have come up with circuit models (involving many neurons) to explain this Purkinje cell temporal response. It appears they were all overthinking it. The answer was much simpler than a complicated circuit somehow keeping track of time: the neuron itself appears to be keeping track of time. As in, within the messy milieu of the neuron, some sequence of chemical reactions is likely able to form a temporal memory (in fact, Fredrik Johansson from the Hesslow lab, the first author of the paper described here, has since published a paper showing one key receptor protein involved).

The implications of this finding could turn neuroscience on its head (does neuroscience's head have a brain?). As I explained in the intro, consensus thinking is that memories are stored in the synapses between neurons. The paper presented here says: well, in at least one well-studied location in the brain, the memory is intrinsic to the cell. And it seems they're not the only neuroscientists with results that question this dogma. Now the question becomes: what kinds of memories can be stored inside neurons themselves? Are they limited to short-term conditioned-response memories in the cerebellum, or are there more complicated forms of memory that can be stored intrinsically in neurons in other parts of the brain? Charles Gallistel featured this paper, in the opinion piece I detailed above, in support of the latter idea. I'm not sure what to make of his argument that all memories come down to numbers, and that computationally it is unlikely for synapses to contain these numbers and read them out in a rate code. But I am intrigued by the possibility that the storage and retrieval of memories in our brain might be way different than the neuroscience world currently believes.

9 Reasons why we suck at eating

Long time, no post. I’ve been a little hesitant on this one since it might read as something like an advice column. My main goal is to dispel a number of nutritional status quos/myths that a lot of people adhere to despite little hard evidence. Unfortunately, as I’ve discussed in a previous post, studying human nutrition is REALLY HARD.

That said, I find nutrition interesting and want to do what I can to figure out what (and when!) we should eat. But to be clear, you shouldn’t take my word as gospel, so if something piques your curiosity please check out the reference I provide (and after doing so feel free to comment if you think I’m in error).

MYTH 1: You should eat lots of small meals throughout the day
MYTH 2: Breakfast is the most important meal of the day
MYTH 3: Eating fats is bad for you
MYTH 3.5: Eating cholesterol is bad for you
MYTH 4: Sugar is better for you than corn syrup
MYTH 5: Salt consumption raises your blood pressure
MYTH 6: Organic foods don’t have pesticides
MYTH 7: “Natural” is better than artificial
MYTH 8: MSG gives you headaches
MYTH 9: Diet sodas help you diet


MYTH 1: You should eat lots of small meals throughout the day: I'm not really clear on where this one came from. In terms of really basic biochemistry, it makes sense to have a constant stream of input calories that are then instantly utilized for output energy, keeping free radical electrons from hanging around. But the fact is our bodies didn't evolve with bowls of nuts and fruit lying around the savannah. So having your digestive system constantly working throughout the day doesn't seem like something your body was meant to do.

On the contrary, studies in animals (from yeast to mammals) show strong evidence that consistently experiencing starvation conditions makes them age better (as in, they stay healthy for longer). I realize this is counterintuitive, but it helps to think of things in an evolutionary sense. Our ancient ancestor, let's go as far back as one much like a rodent, only lived for a couple of years before dying of old age/cancer. And if he/she happened to be born in the middle of a drought, which could last for months or even years, that'd be devastating for a species with such a small window before death. It appears evolution's solution to this was to express certain survival genes—known as sirtuins—which slow down your metabolism and therefore your entire aging process, so you can put off worrying about procreating until more bountiful times.

So why'd I tell you all of that? Because these sirtuin genes seem to have stuck around in higher species, where they have much the same effect in dogs and monkeys as they do in rodents. And the good news is "starvation" might not be as difficult to achieve as you think. As outlined in this NY Times piece and detailed in PNAS in 2014, even relatively small windows of starvation can cause sirtuin expression and possibly give you long-term longevity benefits. In short, there are 3 ways to achieve such starvation conditions, in what I'd consider increasing order of ease:

[Hard]: Calorie restriction: eat something like 30% fewer calories than a typical person over the course of your life. This is the approach that has shown longevity benefits in extensive rodent testing, along with limited evidence in monkeys. Some people actually do this.

[Moderately Hard?]: Intermittent fasting: go through long periods of minimal calories each week. For example, Jimmy Kimmel said he maintains his weight by doing a 5 and 2 diet, where he eats what he wants on weekdays but only 500 calories a day on weekends. Hugh Jackman and Benedict Cumberbatch are also apparently fans. But no women. Just kidding, there must be some. Right?

[Not Bad]: Time-restricted feeding: go through a moderate period of fasting every day. Recent research in rodents has shown this can be as short as 15 hours a day (which sounds like a lot, but it’s basically just skipping breakfast or dinner).

I’ve actually been loosely doing this one for almost a year now, and I can confirm it’s “Not Bad”. I happen to never be hungry for breakfast anyway and just eat a late lunch around 2-3pm. I like to think each time I get hungry around 11am (which goes away fairly quickly as your body goes into ketosis and starts burning fat stores—your brain can’t go without glucose for long!) I’m adding another healthy day to the end of my life. To be clear though: a lot of research still needs to be done to understand the long-term effects in humans.

MYTH 2: Breakfast is the most important meal of the day: Considering I just told you that I don’t eat breakfast, this might seem like a conflict of interest. I’ll let Aaron Carroll (whose NY Times pieces are definitely worth keeping up with if you’re interested in sound, reasonable nutritional advice) take this one: Sorry, There’s Nothing Magical about Breakfast. The bottom line: “… evidence for the importance of breakfast is something of a mess. If you’re hungry, eat it. But don’t feel bad if you’d rather skip it, and don’t listen to those who lecture you.”

MYTH 3: Eating fats is bad for you: This probably won't come as a big surprise to anyone voluntarily reading a health-related blog post on the internet, but it turns out fat doesn't make you fat (as I've put it in the past, this might be the most costly homonym in language). If you don't believe me, one of the best long reads of 2016 explains how egotistical scientists, starting all the way back in the 1960s, mistakenly maligned fat instead of sugar for decades. Fat is a calorie-dense substance with feedback mechanisms that tell you you're full. As a result, it's more difficult to overeat with a high-fat meal. Saturated fat, which gets particularly blamed, is not your enemy! Of course, there are bad fats (in fact, I plan a future post pontificating on why fast food is bad for you; my hypothesis: an abundance of vegetable oils). We've all heard about trans fats, and hydrogenated oils seem dangerous as well.

But the fact is there are fats in every cell of your body. So some of them are certainly good! And it appears the ratio of fats that comprise your membranes likely has long-term health consequences. This is where essential fatty acids come in, and more specifically, why people often talk about the omega 3:6 ratio. Getting too much of one of these building blocks without getting enough of the other could be harmful. Fortunately, contrary to what many sources will tell you, your body can make its own long-chain omega-3 fatty acids from precursors. However, the enzymes responsible for this conversion don't appear to be particularly efficient, and since these fats are essential in your brain, retina, etc., it's important to seek out a dietary source like fish, flax, chia seeds, etc.

For more reading, this blog has a definitive guide to fats with some well-referenced information. And there are certainly mainstream sources too.

Some take-home points of my consolidated impressions:

-animal fats like lard or butter are pretty neutral nutritionally. They’re full of calorie-dense saturated fat, which makes you feel full. However, they’re relatively low in vitamins/antioxidants (except for vitamins A and D) so I largely think of them as filler. Tasty, tasty filler.

-fats from NATURALLY OILY FOODS like nuts/plants are winners! Olive oil, coconut oil, avocado oil, macadamia nut oil, etc. are high in healthy fats and, unlike animal fats, tend to be enriched in vitamins/antioxidants to boot.

-fats from plants that require major extraction methods aren't so good for you. Hydrogenation was invented to make use of the oil squeezed out of cotton seeds, so Procter and Gamble could make money on something they had been throwing away. The hydrogenation process makes many seed oils like this edible, but they're almost certainly not good for you in large quantities. You didn't think those french fries cooked in rapeseed oil were good for you, did you? Also of note: hydrogenated oils are often labeled "vegetable oils" for marketing purposes, even though "vegetable" is a misnomer (they come from seeds).

-fats from fish are good for you! Sure, you can certainly overeat fish (particularly if it’s high in mercury), but getting a source of omega-3s a few times a week seems important for long term health. The American diet almost certainly has too many omega-6s (highly enriched in vegetable oils) and too few omega-3s.

MYTH 3.5: Eating cholesterol is bad for you: Another one you've probably heard about already, but just in case I'll slide it in here since it's similar to Myth 3. The relatively conservative Dietary Guidelines for Americans found that "available evidence shows no appreciable relationship between consumption of dietary cholesterol and serum (blood) cholesterol, consistent with the AHA/ACC (American Heart Association / American College of Cardiology) report. Cholesterol is not a nutrient of concern for overconsumption." 85% of the cholesterol in your blood is made in your liver, so issues with cholesterol overwhelmingly have to do with your genetics.

MYTH 4: Sugar is better for you than corn syrup: I wrote a whole post on why sugar is corn syrup is sugar. All sugar is bad for you. There are articles and talks galore if you don't believe it. It also recently came to light that the "fat is bad, carbs are good" craze was perpetuated by Harvard scientists paid off by the sugar industry in 1967. If you're going to ruin millions of lives, at least do it for more than $50k! One of the main problems with sugar is that the feedback mechanisms meant to tell your body you've had enough calories get stymied. To put it another way: sugar is nothing less than an addictive substance. Fortunately, our bodies are awesome at getting us to breeding age without many deleterious effects (#1 pick and former NBA MVP Derrick Rose apparently survived largely on gummy bears and sour straws through college. Of course, then he blew out his knee. Coincidence?). However, if you're looking to live a healthy life past 25, I'd suggest trying to limit your intake.

MYTH 5: Salt consumption raises your blood pressure: Now for some good news: there's no convincing evidence that high salt consumption is bad for most people! The link between salt intake and blood pressure looks like a myth perpetuated by decades-old science. The jury is still out on this one, and I'm sure we'll be seeing back-and-forth pieces for decades (including a recent NEJM article which says too little OR too much salt is bad for you—which is good advice for most nutrients!). It seems obesity is a far more important factor for unhealthy blood pressure, as you might expect.

MYTH 6: Organic foods don't have pesticides: They often do, but were just grown with pesticides that made it onto the USDA's organic-approved pesticide list. Some of these "organic" pesticides are almost certainly bad for you if you get enough of them (mineral oil, borax, and copper fungicides definitely don't sound healthy) and there's no way to really know how much was used. Which isn't to say you should totally abandon organic food, since it does tend to have less pesticide residue, but be aware that some foods are just harder than others to grow without pesticides (organic or not). Something with a rind like an orange/avocado/mango/banana not only is harder for pests to get through but also keeps the actual pesticide from touching the part you're going to eat. Meanwhile, produce with thin skin like berries or apples (which have their own problems, since grafting reduces their ability to evolve their own natural pesticides) tends to require more pesticides, and even worse, these get absorbed more easily into the part you're going to eat. I'm not saying you should stop eating fruits and veggies! But be aware of the most and least pesticide-laden fruits (EWG keeps a convenient list for you) and try not to eat too many of the bad ones.

MYTH 7: “Natural” is better than artificial: Fairly self-explanatory, but it’s another pet peeve of mine just like “organic” pesticides. Just because something is natural doesn’t mean it’s good for you. Arsenic is natural. Piranhas are natural. And just because something is synthetic doesn’t mean it’s bad for you. We can synthesize the sleep aid melatonin, and it’s the same exact chemical as what we extract from animals’ pineal glands. “Natural” is a mostly meaningless marketing word. Much the same way “chemicals” and “GMOs” are. There are good and bad kinds of each, since they describe a huge range of different products.

MYTH 8: MSG gives you headaches: This old wives' tale was largely propagated by an NEJM article from 1968 written by one cranky old doctor diagnosing his own symptoms (N=1!). 538 did a deep dive on this, but for the vast majority of people MSG, which is just sodium paired with glutamate (an amino acid we all have in abundance), has no effect in the doses used to give foods that tasty umami kick. Spread it on!

MYTH 9: Diet sodas help you diet: This isn't to say that sugar water is good for you either (I clearly indicated in Myth 4 that it's not), but it turns out that despite avoiding calories, artificial sweeteners seem to have their own deleterious effects that can lead to obesity. I went over more details in a previous post about how your brain isn't tricked so easily, but the key finding is that regular consumers of diet drinks are actually fatter than non-drinkers. New work seems to indicate that part of the problem is sugar substitutes changing your gut bacteria in a way that might lead to obesity.

Where do long “lost” memories come from?

A little while back (853 days ago(!) WHAT HAVE I BEEN DOING WITH MY LIFE?!), I wrote a post on the neuroscience of dreaming. After writing that post, rereading it approximately 15x (it’s amazing how little you remember from your own writing) and thinking about my buddy Peytonz’s* comments, a related question crept into my mind: where do “lost” memories come from? And not just random lost memories, but random ones from long ago? What spurs their retrieval?

Let me give some examples to make sure we're on the same page. I started thinking about such "lost" memories a few years ago, and now I catch myself recalling them with surprising frequency. When I notice that I've recalled one of these obscure memories, I'll try sifting through recent thoughts to see if I can figure out how it was cued. For example, a few weeks ago I was eating pasta, thinking about the bitterness of the sauce, and remembered something I hadn't thought of in over 20 years: the time my aunt served us homemade tomato sauce, I made a fuss because it tasted weird to me, and she heated up Ragu to put on my spaghetti. In this case the cue was clear: I was eating pasta with a bitter sauce, and this reminded me of a time when I didn't have a taste for good, homemade tomato sauces (sorry Aunt RoRo!**). But this isn't quite the kind of memory I want to discuss.

The kind of “lost” memory I’d really like to explore is when you recall something out of the blue that as far as you know is totally unrelated to what you were thinking about. An uncued memory, essentially. A recent one I had: I was at work sorting through pictures of generic shapes that I was creating for a task and I suddenly recalled throwing a baseball with my buddy Brianz* in Raleigh’s Jaycee Park ~7 years ago. In this case, as hard as I tried, I couldn’t come up with how ‘baseball’, ‘Brianz’, ‘Jaycee Park’ or anything related had gotten into my head. In essence: the memory appeared to come from nowhere. Now that you’ve read this, try catching yourself uncovering an uncued memory in your own life. They happen surprisingly often (note I’m not talking about “repressed” memories, which might not even be a real thing).

And now that I’ve identified this exact kind of memory—those of the “lost” uncued variety—let’s explore where I think they might come from. Please note that these are speculations based only somewhat on established science.

Idea 1: There was a cue, but you didn't have conscious access to it. Your brain has many different parts with many different 'jobs' being performed. Since you take in so much data through your senses at every moment of every day, in addition to a whole bunch of data in storage for you to access, your brain needs to make complicated decisions by sorting through what all these data streams tell you. And because you take in more data than you could possibly use, your conscious brain only has access to a limited stream of (highly processed) information. As an analogy, when you're the boss of a large company, you don't have time to micromanage every single person/report/price/whatever, so you have various managers accumulate information and execute smaller decisions for you, only informing you when grand decisions need to be made (feel a bit more important now, don't you?). With this idea in mind (consciousness meta-alert), we can imagine how your senses frequently pick up information that you are not privy to, which can then spur memories to be recalled and brought to your attention without you ever having conscious access to the cue itself. What kind of "unknown" sensations am I talking about? Take subliminal images as an example. Studies have shown that when you flash an image very quickly on a screen, people will report they have not seen it. However, fMRI scans show that visual brain regions do register these images, and people are even able to win money by using information from these images without actually being able to recognize them. At some point in our evolutionary past our ancestors had no use for single images that showed up and then disappeared in 10 milliseconds, so the visual stream in your brain, which does have access to such quickly-disappearing images, can decide that they're not important enough to pass on to higher regions. So, to tie things up, when I had that memory of throwing a baseball with my friend, maybe one of the shapes I was looking at reminded me of a baseball. Or the shape was weird like Brianz's throwing motion. In which case, even though I didn't consciously make the connection, the memory had an explainable cue and only seemed to come out of nowhere.

Idea 2: You’re dumb and forgot the connection between what you sensed and what you remembered. Just kidding. You’re smart, as evidenced by you making it this far into my post. And good-looking!

Idea 3: Coincident connections in your neural network. In my post on the neuroscience of dreaming, I brought up replay events that we’ve found occur in rodents when they seem to be reimagining their behavior during rest. Here’s an image of one of these events:

[Figure: a replay event, with place cells firing in the same sequence during running and again later during rest]

In these events, each neuron fires spikes one after another, with the key being that they fire in this same order both when the rodent was running AND later on when it was resting, as if the rodent's brain were recapitulating the rodent's path in memory by replaying the sequential cell activity. There isn't causal evidence yet for sequential cell firing actually being an essential signature of memories (Loren Frank is trying to make this happen, though!), but the specific code doesn't matter for this thought experiment. Let's just take it for granted that some network of N neurons is responsible for one of your episodic (autobiographical) memories. In this case, being fussy about bitter pasta sauce at your aunt's house when you were little. Now, I sensed the bitter pasta sauce in the present day by seeing/tasting/smelling it, which activated mnemonic representations in my visual/gustatory/olfactory long-term storage systems. Our brains likely store memories redundantly, with similar memories stored as similar engrams by overlapping groups of neurons. And some percentage of those N neurons were activated by 'pasta with sauce'—in this case 'bitter pasta sauce.'

There's a putative process in the hippocampus called "pattern completion", in which a partial cue can recapitulate a whole memory. So when I was sensing the present-day bitter pasta sauce I was eating, maybe 5% of those N neurons were active (many of which are active any time you eat pasta sauce, much like the neurons found in human brains that respond reliably to specific concepts like Rachel from Friends or The Simpsons). And, by some chance, this small group of neurons "pattern completed" from the initial cue into the full reinstatement of the N neurons that collaboratively hold the episodic memory of me rejecting bitter pasta in my aunt's house. This gives some intuition for how related memories can be recalled.

Now, that gives a possible explanation for related memories. But what about seemingly unrelated memories? What I'm postulating is that with the redundancy necessary in our mnemonic system, in which each neuron is almost certainly part of many memories that are completely unrelated, the right pattern of activation of some subset of neurons could 'pattern complete' into a memory unrelated to current sensations. So, for my example above of tossing a baseball with my friend: let's say 127 neurons are key for me to recall that memory. And, purely by chance, some stimulus or thought I had happened to activate 42 of those same neurons. Through pattern completion my hippocampus could have taken the signal from those 42 and, due in part to the randomness of neural spiking within the structure of a neural network, happened to reactivate the other 85 neurons necessary to recollect that specific episodic memory. The fact that this coincident neural firing occurred 7 years after the fact would therefore be a chance event.
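To make "pattern completion" a little more concrete, here's a toy sketch in Python of the classic Hopfield-style version of the idea (this is only an illustration, not a model of the hippocampus; the 127 and 42 simply echo the made-up numbers above). A cue that matches only 42 of 127 neurons settles into the full stored pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_memories = 127, 5

# Store a few random +1/-1 patterns with the classic Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(n_memories, n_neurons))
W = sum(np.outer(p, p) for p in patterns) / n_neurons
np.fill_diagonal(W, 0)

# Build a partial cue: 42 neurons match memory #0, the other 85 are random noise.
cue = rng.choice([-1, 1], size=n_neurons)
matching = rng.choice(n_neurons, size=42, replace=False)
cue[matching] = patterns[0, matching]

# Let the network settle: each update pulls the state toward the closest stored pattern.
state = cue.copy()
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1

print(f"cue matches memory #0 on {np.mean(cue == patterns[0]):.0%} of neurons")
print(f"after settling, the match is {np.mean(state == patterns[0]):.0%}")
```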

These thoughts on chance happenings in the brain remind me of a fascinating topic in neuroscience: randomness in our neural code might well be an essential feature for survival. In fact, some smart cookies even consider the concept of randomness in the brain one of the major paradigm shifts in neuroscience of the past 15 years. While this seems a bit counterintuitive, if an organism is 100% predictable in its actions, then its predators will be 100% likely to learn how to eat it (as Kevin Mitchell put it, the collective noun for animals with no randomness: lunch). And beyond tricking predators, randomness can also help animals like rats explore new strategies or even aid fruit flies in achieving more successful love songs. And in the case of memories, maybe coming up with random ideas once in a while provides some survival advantage as well. Obviously you want your most salient memories to be robustly recalled (from what I remember, baby bears usually mean there are mother bears nearby, and mother bears EAT people like me), but for memories of less obvious importance, it's interesting to think the randomness of our recall—our creativity, to some degree—might show up in these occasional 'lost' memories.

*names changed for anonymity.

**name not changed for anonymity, since this is a funny name.

Why giving my sequencing data to 23andme doesn’t bother me.

Recently 23andme, a company that's made a name for itself by genotyping hundreds of thousands of key sites in each customer's genome for $99, signed a $60m deal with biotech company Genentech to allow access to data from their 800,000 customers. This seemed to put off people on the internet; see, for example, an antagonistic Gizmodo article titled "Of course 23andme's business plan has been to sell your genetic data all along." To which I'll argue: so what?

The main thing that seems to bother people is that a piece of them is being used for profit. That somehow their data—because it's of the kind embedded in DNA—shouldn't be tapped for material gain. I want to expound on why this really doesn't bother me at all. And, unless you are somehow immune to modern technology, why I don't think it should bother you either.

1.) You already give companies plenty of your data. This one is pretty self-explanatory, but Google, Yahoo, Twitter, et al. already make plenty of money selling your data. Why is giving up small sections of your DNA any worse than letting them know the location of the rash you want to get rid of? Or what route you're taking to Maine tomorrow? Or what kind of porn you like? I'd argue this type of data could be used in far worse ways than knowing you have a few irregular polymorphisms. And trusting Google to anonymize our searches is no different from trusting 23andme to anonymize those polymorphisms. Also, to be clear: 23andme does not sequence your whole genome. It only looks at key places known to have phenotypic effects.

I also get the impression that people are personally offended since it's their DNA. Let me be clear: there's nothing special about your DNA. Unless you're literally a one-in-a-billion person (like her), you're just a big mixed bag of genes like everyone else. And since your data is anonymized anyway, there's no way for your employer/health insurance company to see it. You're just 1 of a sample size of 800,000 used in these studies to learn how certain mutations lead to measurable changes.

Edit: here's a new article called "Your DNA is Nothing Special".

2.) Progress is progress. Government funding of science is not unlimited. And while the NIH has taken steps to require publications from government funding to be open after 1 year, most data generated from government grants is never released to the public anyway. That's not so different from biotech and pharmaceutical companies, which invest billions of dollars to create their own data in an attempt to find the next hot technology or pill. Sure, they do it to make money, but they are creating new products that (generally) make people healthier, largely with their own research funding. Edit: In fact, companies spend more money than the government on research:

[Figure: research spending by industry vs. government]
(from: http://jama.jamanetwork.com/article.aspx?articleid=2089358)

Further, sometimes government funding just isn't enough. If you come up with a great idea and want to implement it at a large scale, you have to start a company. Here's a great example: Oxitec. I heard about this company when someone I know posted on Facebook that an evil company was going to kill all the mosquitoes in the Everglades. Instead they suggested that this was a job for scientists—not some company trying to make a profit. Well, when I looked into it, it turns out Oxitec was founded by Oxford University scientists who invented a way to prevent mosquitoes from having (viable) babies (specifically: they release modified male mosquitoes whose offspring don't survive to adulthood). Since the threat of dengue fever has become worrisome in Florida—and tests of the technology had gone well in Brazil and other countries—the state decided it was in its best interest to lower the mosquito population. The key is: this would not have been possible if these Oxford scientists hadn't acquired investor funding to grow their product and do field testing. And this is just one example where the science needed funding to be tested at a large scale to improve our world (interesting side note: Radiolab did a fun piece on whether eliminating mosquitoes would hurt the world in some way, and even the entomologists they interviewed didn't see a problem with it. And mosquitoes do kill more people than any other organism–even other people!).

3.) Companies aren't evil. It's trendy to vilify companies. I can't go a week on Facebook without seeing someone blame Monsanto for something (despite the fact they might be the only thing keeping people in the developing world from starving). Heck, as a New Yorker with a liberal slant I do plenty of company-bashing myself. But the fact of the matter is companies aren't evil. Companies try to make money. And since when do Americans hate capitalism? And trying new technology? And jobs? 23andme came up with a novel idea, acquired seed funding, implemented the idea, and gave you some cool information in the process; don't they deserve to make money for implementing their great idea? Sure, individual people at companies can be evil, but I don't get the impression that the founder of 23andme is looking to take over the world (and she's not exactly hurting for money herself as the separated wife of Google founder Sergey Brin); instead she seems interested in changing it for the better.

So, if you've submitted your data to 23andme: feel good about it. I do. And I wholeheartedly recommend their service: I used it to discover who my great-great-great-grandfather was, what diseases I should be concerned about as I get older, and that I'm resistant to noroviruses! For a science guy like me: that's nerd gold!

My new paper on time-context neurons in the primate hippocampus

I often come across papers from other fields (and even within my own field) and think to myself “yeah, but why’s this important?” It’s not that I don’t believe that it’s important, it’s just that I don’t know enough to get why. With that in mind, I thought I’d contextualize my new paper for the lay reader in a few paragraphs. For the tl;dr version: scroll down to the parts IN BOLD AND ALL CAPITALS WHERE IT LOOKS LIKE I’M SHOUTING BUT I’M REALLY NOT where I try to highlight the main findings.

The title is "Context-dependent incremental timing cells in the primate hippocampus." In normal-speak: I find monkey neurons that seem to help us (yes us—presumably we're just fancy monkeys) keep track of time in certain contexts by changing their firing rates incrementally. To be fair: I don't show anything proving how these neurons track time. My former colleague (and paper coauthor) Yuji Naya did that with some amazing previous work. In short, Yuji recorded from multiple brain regions in a task where monkeys had to keep a couple of pictures in order. He found that neurons would track time incrementally (by slowly rising or falling in firing rate) during the delay between the two pictures, when the monkey had to hold the first picture in memory in preparation for learning the second one. The key was: this incremental signal was strongest in the hippocampus, and the farther you looked from the hippocampus into other nearby (more "vision-related") brain regions, the weaker the signal got. This, plus a bunch of other work, points to the hippocampus as important for how our brain keeps track of time.

So, my first question was: do neurons do this in all tasks, or just when the monkey has to keep pictures in order? Therefore I went digging through some data taken by Sylvia Wirth, in which monkeys were tasked with memorizing associations between objects in certain places for reward (think animal pictures in different locations on a computer screen). When I started looking at this data, I saw cells rising or falling in firing rate during a delay period similar to Yuji's (we look during delay periods because this is when the animal is holding the information in memory before responding later). There aren't many standard ways to statistically analyze firing rates that change over time like this, so I used resampling methods (a.k.a. bootstrapping, a.k.a. Monte Carlo) to find groups of these cells (see Fig. S4 for details; here's another paper dealing with much the same problems I did that might have been useful for my analysis if it'd come out earlier). I also spent a lot of time trying to make sure these changing signals weren't just signaling anticipation like in other brain regions (e.g. amygdala, parietal cortex). This stuff is all in the paper and kinda boring, so look there for details.
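To give a flavor of the resampling approach (a generic sketch of the idea, not the exact procedure from the paper's Fig. S4): take a cell's delay-period spike counts, measure how steeply its average rate ramps across the delay, and compare that slope to a null distribution built by shuffling the time bins.

```python
import numpy as np

rng = np.random.default_rng(2)

def ramp_p_value(binned_counts, n_shuffles=2000):
    """binned_counts: trials x time_bins spike counts from the delay period.
    Returns (observed slope, two-sided shuffle p-value)."""
    _, n_bins = binned_counts.shape
    t = np.arange(n_bins)
    obs_slope = np.polyfit(t, binned_counts.mean(axis=0), 1)[0]

    null_slopes = np.empty(n_shuffles)
    for i in range(n_shuffles):
        # Shuffling the order of time bins within each trial destroys any
        # consistent ramp while keeping each trial's spike counts intact.
        shuffled = np.array([rng.permutation(trial) for trial in binned_counts])
        null_slopes[i] = np.polyfit(t, shuffled.mean(axis=0), 1)[0]

    p = np.mean(np.abs(null_slopes) >= np.abs(obs_slope))
    return obs_slope, p

# Fake example: 40 trials, 10 delay bins, firing that ramps up across the delay.
fake_cell = rng.poisson(lam=np.linspace(2, 8, 10), size=(40, 10))
slope, p = ramp_p_value(fake_cell)
print(f"slope = {slope:.2f} spikes/bin per bin, shuffle p = {p:.3f}")
```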

Anyway, the key point is: I found a bunch of these cells rising and falling in firing rate while the monkey remembered objects in certain places. AND MANY OF THESE NEURONS DID THIS TIME TRACKING ONLY WHEN CERTAIN OBJECTS WERE IN CERTAIN PLACES. This is where the “contextual” part comes in, as these single neurons seem to be uniting temporal information with contextual information.

Now, at this point you might be like: "what's the big deal, man/mate/eh? You just found a bunch of moving firing rates." What I think is really cool about this is what we found when we looked at the animal's behavior. This task was pretty darn hard for a monkey, so the animal only learned a new object in a new place in 71% of daily sessions. But when I found these time-context cells, the monkey had a 93% chance of learning during that session. Therefore, THESE SPECIAL TIME-CONTEXT NEURONS ALMOST EXCLUSIVELY SHOWED UP WHEN THE MONKEY LEARNED NEW CONTEXTS. And since contextual encoding is something known to require the hippocampus as part of our episodic memories, it makes some sense that this part of the brain is keeping track of context. What is novel about this is that SINGLE NEURONS SEEM TO BE KEEPING TRACK OF TIME AND CONTEXT SIMULTANEOUSLY.

I also looked at what happened when the monkey got the answer right or wrong. What I found was the timing signal dissipated when the monkey was about to answer incorrectly. And since this incremental timing signal occurs during a delay period where the monkey is keeping the object-place combination in its mind, this could show that THESE TIME-CONTEXT NEURONS MIGHT STRENGTHEN CORRECT MEMORIES BY INCREASING THEIR INCREMENTAL SIGNAL.

For someone outside of neuroscience (or even someone insi…in neuroscience), this might not be so surprising ("Yeah, so neurons keep track of things: duh."). But these results hint at just how complicated the neural code might be. Let me give you an example.

There are multiple studies that use fMRI with humans while they do things like learn new objects in new places (here's a new one). They'll look at, say, 1000 voxels (3-dimensional pixels—each usually about a cubic millimeter with thousands of neurons inside) in the person's brain and find that maybe 400 go up and 400 go down in activity during the period the object is shown (usually summing activity over however many seconds the image is shown). Then when the person recalls the same object, maybe 425 of those voxels will go up and 375 will go down. And using some advanced stats methods they can "predict" which object the person is remembering by showing that a similar group of voxels goes up or down (this is called training a classifier). People do this kind of thing all the time with individual neurons instead of voxels: measure dozens or hundreds of them, ask if they go up or down, and run some linear regression to correlate how the neuronal network changes during a given task.
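Here's a bare-bones sketch of that "train a classifier" idea, with simulated data rather than anything from a real study: give two remembered objects slightly different patterns of ups and downs across 1000 fake voxels, then see how well a standard linear classifier can tell them apart.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials_per_object, n_voxels = 100, 1000

# Each object gets its own (slightly different) pattern of ups and downs.
pattern_a = rng.normal(0, 0.5, n_voxels)
pattern_b = rng.normal(0, 0.5, n_voxels)

X = np.vstack([
    pattern_a + rng.normal(0, 2.0, (n_trials_per_object, n_voxels)),  # object A trials
    pattern_b + rng.normal(0, 2.0, (n_trials_per_object, n_voxels)),  # object B trials
])
y = np.array([0] * n_trials_per_object + [1] * n_trials_per_object)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated decoding
print(f"decoding accuracy: {scores.mean():.0%} (chance = 50%)")
```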

The problem is: they're assuming that all neurons do is go "up" or "down" within a single timeframe. The neurons I found do go up or down, but if these studies analyzed them that way they'd be throwing away information, since their firing rates change gradually over time (instead of just peaking and dissipating quickly like we often see in other neurons):

[Figure: firing rate of an example time-context neuron across the delay period for each object-place combination]

Each color represents a different combination of objects+places. The neuron is tuned to the red object-place combination in this case as the firing rate (FR) increases during the delay period.

Therefore, NEURONS CAN CODE FOR SIGNALS IN TWO DIMENSIONS. Studies that only look at firing rates in a given timeframe are only working in one dimension: firing activity. But if you look at how this neuronal activity changes over time, that represents a second dimension of signaling. This doesn't mean that the studies using only one dimension don't work: it's very possible that with enough signal from many voxels/neurons you can accomplish perfectly good decoding for whatever your goal is (like interfacing with a machine). But it's also possible that until we understand the real code we'll always be playing a game of "you're getting warmer" with each and every neuron, without ever getting the right answer, because we're not directly measuring how the brain actually works. This is the goal of "basic" (I realize it's a bad term) research: to understand the fundamental principles of the brain so that in the future we know enough to improve human lives as technology develops.
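Here's a toy demonstration of that second dimension (again with made-up numbers): two conditions whose delay-period firing has the same average rate but opposite ramps. A single summed window can't tell them apart, while the time course separates them cleanly.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_bins = 200, 10

rate_up = np.linspace(2, 8, n_bins)      # condition 1: ramps up (mean rate 5)
rate_down = np.linspace(8, 2, n_bins)    # condition 2: ramps down (mean rate 5)

counts_up = rng.poisson(rate_up, size=(n_trials, n_bins))
counts_down = rng.poisson(rate_down, size=(n_trials, n_bins))

# One-dimensional view: total spikes per trial (what a single-window analysis keeps).
print("mean total spikes:",
      counts_up.sum(axis=1).mean(), "vs", counts_down.sum(axis=1).mean())

# Two-dimensional view: the slope of the time course tells the conditions apart.
t = np.arange(n_bins)
print("slopes:",
      round(np.polyfit(t, counts_up.mean(axis=0), 1)[0], 2), "vs",
      round(np.polyfit(t, counts_down.mean(axis=0), 1)[0], 2), "spikes/bin per bin")
```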

One great example of this kind of payoff from basic research is recent work in which a tetraplegic woman fed herself for the first time using her thoughts. I can't find the link at the moment, but the authors noted that there were about a dozen essential animal studies (starting in the 60s) that led up to this achievement, each one building off the last until we understood enough about the system to tap into how the brain controls movement. Of course, we didn't know those dozen were the seminal ones until hundreds had been done. So, in conclusion, the hope is that work like mine will help inform future neuroscientists/neuroengineers about key principles of how our brain works. But at the moment it's just an intriguing way that the brain seems to encode information.

New neurons are born in your brain every day, which we know because of…nuclear bomb testing?

Long before I started in neuroscience I'd heard that you are born with all the neurons you'll ever have. I always found this a bit disconcerting, so I was pleased when I first heard of studies showing that mammals are able to produce new neurons in adulthood. Surprisingly, the first of these studies came all the way back in 1967 from Joseph Altman, who showed the first evidence of new adult neurons born in the hippocampus of guinea pigs (how cliché). The topic wasn't picked up (or believed, I guess) by the neuroscience mainstream until the 1980s-90s though, maybe because these methods couldn't be used (for ethical reasons) to show adult neurogenesis in humans. Another reason is that there wasn't unequivocal proof that these neurons were new and not just repaired. I'll save you most of the biochemistry, but the methods involved injecting either radioactive or brominated thymidine (the T in the A T C G of DNA) into the brain and then measuring how much of these special Ts were later incorporated into neuronal DNA. The problem with these methods is that your DNA is always breaking, and instead of throwing away every cell with a DNA mutation, your body has repair mechanisms in place to fix most of them. Meaning the neurons could have just been refurbished (which everyone knows doesn't sound as good as new) with the injected special Ts.

Now, this is where the eye-catching title—and some really cool science—comes in. From 1945 to 1998 there were 2,053 nuclear bomb tests worldwide. Here's an eerie video made by Japanese artist Isao Hashimoto showing all of them in a time-lapse map (make sure the sound is on). A by-product of nuclear explosions is carbon-14, a carbon isotope with 8 neutrons. Carbon-14 also occurs naturally via cosmic rays acting on nitrogen in the atmosphere, but levels spiked sharply during the peak of nuclear bomb testing in the 60s (even though carbon-14 still makes up only about 1 part per trillion of the atmosphere's carbon). See the black line in this graph to visualize how carbon-14 levels changed over time:

[Figure: atmospheric carbon-14 levels over time (black line) alongside carbon-14 measured in neuronal DNA from individual donors]
(from Cell: http://www.cell.com/abstract/S0092-8674%2814%2900137-8)

Now, based on my intro, you might guess where this is going: can we measure how carbon-14 incorporates itself into our DNA? Indeed, since carbon-14 is mixed into all the carbon we ingest throughout our lives, if new neurons were being born, we should be able to compare how much carbon-14 is in our DNA with the carbon-14 in the atmosphere. Jonas Frisén's lab at the Karolinska Institute in Sweden has pursued this work over the last decade, using mass spectrometry to measure carbon-14 levels in DNA samples as small as 30 micrograms (about 15 million neurons' worth).

The studies go like this: donated, post-mortem human brains are measured for the amount of carbon-14 in the neuronal DNA of different brain regions. The authors can then graph how much carbon-14 they found versus the year that person was born. For example, look at the blue dot in the graph above, which is the carbon-14 from an example person's hippocampus. This person was born in 1928 and shows MORE carbon-14 in their brain than was in the atmosphere in 1928. If this person had been born with all their neurons, they should have the same amount of carbon-14 in their neurons as was in the atmosphere at that time. Instead, this dot shows that this person's hippocampus has excess carbon-14, which must* have come from new neurons that incorporated the rising levels of carbon-14 later in their lifetime (again, the black line). Now, look at the pink square representing carbon-14 in a person's cortex. This person was born in 1968, and their cortex has the same carbon-14 as the atmosphere did then. Therefore this brain region DOES still have the neurons they were born with (their first paper, in 2005, used this approach to show that adult neurogenesis does NOT happen in the human cortex). Just to make sure you get what's going on, look at one more hippocampus data point, the red dot. In this case, a person born in 1976 has less carbon-14 than was in the atmosphere when they were born. Therefore, some of these hippocampal neurons must have been created after they were born, when carbon-14 levels had dropped below the atmospheric level of their birth year.
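As a toy version of that logic (with made-up numbers, not the real atmospheric curve or the paper's measurements), the inference boils down to a comparison: does a region's DNA carbon-14 match what the atmosphere held in the donor's birth year, or not?

```python
from bisect import bisect_right

# Pretend atmospheric carbon-14 levels (arbitrary units) by year; purely illustrative.
ATMOSPHERE = {1930: 1.00, 1950: 1.02, 1965: 1.90, 1980: 1.45, 2000: 1.15}

def atmospheric_c14(year):
    """Crude lookup: use the nearest earlier reference year."""
    years = sorted(ATMOSPHERE)
    idx = max(bisect_right(years, year) - 1, 0)
    return ATMOSPHERE[years[idx]]

def interpret(region, birth_year, dna_c14, tolerance=0.05):
    at_birth = atmospheric_c14(birth_year)
    if abs(dna_c14 - at_birth) <= tolerance:
        verdict = "matches birth-year atmosphere: no sign of adult neurogenesis"
    else:
        verdict = "differs from birth-year atmosphere: some DNA was made after birth"
    print(f"{region} (born {birth_year}): DNA {dna_c14:.2f} vs atmosphere {at_birth:.2f} -> {verdict}")

interpret("hippocampus", 1928, 1.20)  # more C14 than a 1928 birth would give
interpret("cortex",      1968, 1.90)  # matches the (elevated) 1968 atmosphere
interpret("hippocampus", 1976, 1.30)  # less C14 than the atmosphere at birth
```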

Now you can understand the main graph of their most recent (2014) paper:

[Figure: the main graph of the 2014 paper: carbon-14 in neuronal DNA vs. each donor's birth year, compared to the atmospheric curve]

(from Cell http://www.cell.com/abstract/S0092-8674%2814%2900137-8)

Again, each dot is one person. And again, people born before the extensive rise in nuclear testing in the mid-1950s show increased carbon-14, indicative that some of their neurons were new. And again, people born after the peak of nuclear testing in 1965 show less carbon-14 than was present in the atmosphere at their birth, indicative that they had new neurons born after birth.

What’s extra fascinating about this most recent result from 2014 is where they found these new adult neurons: in the human striatum. No one had ever found newly born adult striatal neurons before (in any animal)!! In most mammals (largely rodents), new adult neurons had been found in the hippocampus and olfactory (smell) bulb. Frisén’s lab confirmed human hippocampal adult neurogenesis in 2013, and had surprisingly shown that humans—unlike their mammalian ancestors—had lost the ability to make new olfactory neurons. In fact what seems to have happened is that at some point in our recent evolutionary past, as our primate ancestors lost the smelling capabilities of their predecessor mammals, the neuroblasts (kind of like baby neurons) born in the lateral ventricle migrated into the striatum instead of the olfactory bulb. This means one unique feature of the primate (or maybe even just human) brain is new neurons in this region!

Now what does this mean, exactly? What's a striatum (pronounced str-eye-ate-um)? Well, it turns out that this is a rather complex region with a lot of functions. Historically the striatum has been associated with motor control, but it has more recently been shown to be important for a number of cognitive functions such as motivation, reward salience, and working memory. In fact, a subregion of the striatum called the nucleus accumbens is considered to be the seat of substance dependency, including all addictive drugs like alcohol and cocaine, as well as a key region in reward for food and sex. What these new neurons do is, at this point, only speculative, although this paper gives us a few leads. First, consistent with previous work, these new adult striatal neurons appear to be interneurons (I'll trust you to read these graphs by now):

[Figure: carbon-14 birth dating of striatal neuron subtypes, showing turnover in interneurons but not medium spiny neurons]

(from Cell http://www.cell.com/abstract/S0092-8674%2814%2900137-8)

About 75% of striatal neurons are medium spiny neurons, but it's the other 25% that show evidence of new adult neurons: the interneurons. Interneurons are typically inhibitory, meaning their job tends to be more regulatory, tempering excitatory motor/sensory neurons. And in fact, they looked at the brains of a few people who had Huntington's disease and found a relative decrease in these new (tempering) interneurons in their striatums (striata?). This makes some intuitive sense, as the disease is first characterized by a loss of movement control before it progresses into higher cognitive decline. Therefore these results provide an interesting new avenue for therapies in neurodegenerative disorders such as this and Parkinson's, both of which are largely rooted in/near the striatum, wherein rescuing adult neurogenesis has the potential to reverse symptoms (as has actually been shown in mouse models).

One other novel result from these adult neurogenesis papers you might find interesting: they can model the turnover rate of neurons across the lifespan by comparing how many new neurons are present in the brains of people who died at different ages. And, with numbers similar to their previous work in the hippocampus, they show a notable decline in the turnover rate of striatal neurons as a person ages:

Image

(from Cell http://www.cell.com/abstract/S0092-8674%2814%2900137-8)

At the extremes, about 10% of a 20-year-old's striatal neurons are turned over each year, while <1% of an 80-year-old's striatal neurons are replaced with new ones. As shown in the 3rd graph above, only some types of striatal neurons are replaced by new neurons (the interneurons), and they estimate that, within this "renewing fraction", about 2.7% of neurons are turned over per year. I don't know about you, but I feel younger already! Thanks, brain.
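To get a feel for what a 2.7% annual turnover adds up to, here's a back-of-the-envelope sketch. It assumes, purely for illustration, that replacement is random and the rate stays constant; the paper's actual model is more sophisticated than this.

```python
# Back-of-the-envelope: if a fixed ~2.7% of the "renewing fraction" of striatal
# interneurons is replaced each year (random replacement, constant rate -- a
# simplifying assumption, not the paper's full model), what fraction of the
# neurons you had at, say, age 20 are still around later on?

annual_turnover = 0.027  # the paper's estimate for the renewing fraction

for years in (10, 20, 40, 60):
    remaining = (1 - annual_turnover) ** years
    print(f"after {years:2d} years: ~{remaining:.0%} of the original renewing "
          f"neurons remain, ~{1 - remaining:.0%} have been swapped for new ones")
```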

In the end, these papers that unequivocally prove and expand upon our knowledge of human adult neurogenesis might be the singular good thing to come from nuclear bomb testing (at least until we have to blow up an earth-targeted asteroid). And since adult neurogenesis has become a BIG topic with tons of implications for things such as exercise, antidepressants and new learning/memory, hopefully future improvements in our lives will provide some solace in the fact that we've nuked our own world over 2,000 times.

 

References:

Human cortex has no adult neurogenesis (2005): http://www.cell.com/cell/abstract/S0092-8674%2805%2900408-3

Humans don’t produce new adult olfactory neurons (unlike rodents) (2012): http://www.cell.com/neuron/abstract/S0896-6273%2812%2900341-8

Dynamics of hippocampal neurogenesis in humans (2013): http://www.cell.com/cell/abstract/S0092-8674%2813%2900533-3

The first proof of striatal neurogenesis (2014): http://www.cell.com/abstract/S0092-8674%2814%2900137-8

 

*Clever readers might have picked up a problem here: what about those pesky DNA repair mechanisms? Couldn't the carbon-14 have been integrated into old neurons by those? They rebut this possibility in the methods of the most recent paper (4th reference) by noting that:

A.) only some of the neurons have altered levels of carbon-14 (while all neurons should have had some DNA repair).

B.) they've looked for excess carbon-14 in many other brain regions over the past decade and found none, as you would expect if DNA repair mechanisms account for very little carbon-14 integration. This includes cortical neurons after a stroke, when massive DNA damage would have taken place.

C.) they used other biochemical markers to show evidence of neuroblasts (baby neurons) in these newly born striatal cells, as well as a lack of lipofuscin, a pigment that accumulates in aging cells.

Worried about waiting to have children?

Edit: It’s been pointed out that my post is largely from the perspective of the father/child. Some relevant reading for mothers weighing this question includes decreasing chances of becoming pregnant as you age (http://pensees.pascallisch.net/?p=1010) and increasing chances of fetal loss as mothers age (http://www.bmj.com/content/320/7251/1708 …I’ve attached the relevant (rather dismaying) graph at the bottom of this post). For the sake of your kid’s mental health, however, read on…

Edit 2: A reader pointed out that mental retardation, which I didn’t touch on, HAS been linked with maternal age (even though schizophrenia/autism have not). I added another relevant graph at the end of this post. Recurring theme: risks increase for mothers older than 35.

I'm a big fan of science that has an effect on our everyday lives. Of course, I'm a big fan of science in general, but in this case what I'm specifically referring to are studies I can tell friends (or enemies) about and change their lives for the better (or worse). This is largely why I personally decided to switch fields from molecular biophysics to cognitive neuroscience—I wanted to study things with more everyday, macroscopic applicability.

I came across this short paper on twitter and think it has some implications that will be interesting to pretty much anyone. Moreover, I hope it alleviates some worry for my NYC/city-living/northern brethren out there, who tend to skew older in their decision of when to have children. It's called "The importance of father's age to schizophrenia risk" and was published last May in Molecular Psychiatry. It analyzes a dataset I had not heard of before—even though it appears to be a big deal in genetics research—called the Danish nationwide registers. This database of all Danish people born from 1955-1996, which is apparently linked to their medical information, provides an incredible sample size for looking at diseases across populations.

Before I get into the details of what this study shows, a little background on what we know about older parents. I get the impression the status quo is that having children as an older mom raises the chances of cognitive disease in the progeny (whether it be schizophrenia, autism, ADHD, etc.—such higher-level cognitive diseases are not easy to define and MANY seem to be related). However, this never made much sense, since these kinds of diseases are thought to come from genetic mutations, and females are born with their lifetime supply of eggs, so there's no reason to think the female germ line accumulates new mutations as the mother ages. Therefore blame has come to rest on the (older) fathers, who produce new sperm every 15-16 days (I've read these numbers in at least 3 places although I can't find the original source; for argument's sake we can at least assume men regenerate their sperm throughout life). This results in about two point mutations per year in the germline of men. And since there's strong correlative evidence that older fathers have a greater chance of schizophrenic or autistic children, it seems like a reasonable conclusion that these mutations, accumulated throughout a dad's life, progressively worsen his odds of having a normal-functioning child the older he gets.
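Just to put rough numbers on that accumulation: here's a crude bit of arithmetic using the ~2 mutations/year figure quoted above. Treating the accumulation as linear in paternal age is my simplification, not something from the paper.

```python
# Crude arithmetic on the "~2 new point mutations per year" figure quoted above.
# Treating accumulation as linear with paternal age is a simplification on my part.

mutations_per_year = 2

def extra_mutations(older_dad_age, younger_dad_age):
    """Roughly how many more de novo point mutations an older father's sperm
    would carry compared to a younger father's."""
    return mutations_per_year * (older_dad_age - younger_dad_age)

print(extra_mutations(50, 25))  # ~50 additional point mutations
print(extra_mutations(40, 25))  # ~30 additional point mutations
```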

Wait wait don’t tab over to match.com and settle just yet!! We still have the results of this aforementioned Danish study. They took all second or later born children from the 40 year period I mentioned and determined if they developed schizophrenia from 1970-2001 (at least 15 years after birth). 8,865/1,334,883 (0.66%) of this group were diagnosed as such. Fortunately there’s only one figure in this paper, so I’ll just go ahead and show it. First, as shown in blue in Figure a below, even taking only these second or later born children, they reproduce the previously found association between schizophrenia and increasing paternal age at the birth of this child (proband means the child with schizophrenia, in this case). Older father==higher incidence of schizophrenia==confirmed.

schizo

(from Molecular Psychiatry)

Now, you'll notice in green they also graphed the father's age when he had his first child. This is a little confusing, so I'll try to spell it out (in the present tense, so it sounds FRESH). Take the 8,865 schizophrenic children born second or later. Now, look at how old their dad was when these same parents had their firstborn. What the authors find is that there's a greater incidence of schizophrenia (in later children) when dads had their first child at an older age.

The second graph (Figure b) makes this clearer. Again, this is a graph showing the incidence of schizophrenia in later-born kids, but this time split by the age of the dad when he had his firstborn kid (different colors) against the age when he had the later kid that has schizophrenia (Paternal age on the bottom axis). To explain this intuitively, let's focus on the blue line. If a dad has his first child before age 24, no matter what children he has in the future there is no indication of increased schizophrenia risk (the line stays near the population average value of 1). Now, the black line: if a dad has his first child after age 50, his later children are about 2.5x more likely to have schizophrenia. For the orange line, having a first child after 40 raised the chance of a later child having schizophrenia by ~1.5x (note: it's likely there was an increased rate of schizophrenia among the firstborn children of these older dads too, but this has already been shown and is not what they're interested in proving).
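If relative risks feel abstract, here's a rough conversion to absolute numbers, using the cohort-wide 0.66% rate quoted above as a stand-in for the baseline. The paper's ratios are actually relative to the youngest-first-child reference group, so treat this as ballpark only.

```python
# Ballpark conversion of the figure's incidence rate ratios into absolute risk.
# Baseline: the cohort-wide 0.66% (8,865 / 1,334,883) quoted above. The paper's
# ratios are relative to a reference group, so this is only a rough guide.

baseline = 8865 / 1334883  # ~0.66%

for label, ratio in [("first child before 24", 1.0),
                     ("first child after 40", 1.5),
                     ("first child after 50", 2.5)]:
    print(f"{label}: ~{baseline * ratio:.2%} risk of schizophrenia in a later child")
```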

You probably understand the data by now, but what does this mean? As explained above, if dads were having schizophrenic children because they were gaining mutations over time, you'd expect each successive kid from the same dad to have a higher chance of schizophrenia. The blue line in Figure b disputes this. Instead, when a dad doesn't have any children until he is quite old, the chances of schizophrenia increase. This implies that there's something inherently wrong with fathers who are unable to have children until later in life. This could be for a lot of reasons, but I'm willing to guess it doesn't have to do with successful, career-driven people making conscious decisions not to start a family. What seems more plausible is that these older fathers are people who have a harder time fitting in socially and finding a partner willing to have their child (or maybe it's just that someone who suddenly decides to have his first child at 50 has a few screws loose). While I'm out on this interpretation limb: the factors making it difficult for such people to find a mate might well be a mild or precursor form of a cognitive disease like schizophrenia, the genetics of which would give a better-than-average chance of more obvious, full-blown symptoms expressing themselves in progeny. For me this fits into the theoretical framework of the genetics of emergent phenotypes, wherein multiple assaults on the genetic system can push development down progressively worse pathways that result in cognitive disorders (here's a great read for an explanation of the genetics of emergent phenotypes).

These results are great for us never-want-to-settle-down types, right? I'd like to think I choose not to settle down as I build my career, and that as I delay potential progeny, my reproductive prospects aren't dwindling away. I mean, two random mutations a year can't be that bad, right? It also means I don't have to worry about freezing my sperm, as was suggested in Nature. Although not being able to beat my child in basketball when he's 10 (and I'm 60) would hit me hard for more egotistical reasons, so I'm not exactly suggesting waiting too long.

For those not yet satisfied, a few more thoughts:

If you're the questioning sort like me you might be wondering if there could be another biological force at play influencing these results. I agree that 'age of the father when he has his first kid' isn't the most satisfying quantitative measure. This study did account for the following factors, though: "Incidence rate ratios were adjusted for age, sex, calendar time, birth year, maternal age, maternal age at her first child's birth, degree of urbanization at the place of birth and history of mental illness in a parent or sibling." Further, a very recent (2014) study out of Johns Hopkins that I came across while researching this topic came to a similar conclusion: that de novo mutations from the father are likely not the cause of increased schizophrenia risk in older fathers. More confirmation that waiting to have kids doesn't look like such a bad idea.

These controlled conditions aside, I came up with one alternate biological theory that I (just now, after writing all of this) realized is actually supported by this 2014 study. One of the pillars of the genetic basis for homosexuality is the "birth order effect," wherein each additional male child born to the same mother is about 33% more likely to be gay than the one before. The hypothesis for this effect is called the "maternal immunization hypothesis." In short, the mother is thought to respond to male-specific proteins from the in utero male child, which are novel to her because they come from the male (Y) chromosome, and her body begins an immune response by forming antibodies against these invaders. Theoretically such antibodies could make it back into the developing male brain and alter the chemistry of maleness in the developing fetus.
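As a quick aside on what that ~33% figure compounds to, here's a toy calculation that treats the increase as multiplicative per older brother, which is how the birth order effect is usually described; exact numbers vary across studies.

```python
# Quick compounding of the ~33%-per-older-brother figure mentioned above
# (treating the increase as multiplicative; exact estimates vary by study).

increase_per_older_brother = 1.33

for older_brothers in range(4):
    odds_multiplier = increase_per_older_brother ** older_brothers
    print(f"{older_brothers} older brother(s): ~{odds_multiplier:.2f}x the baseline odds")
```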

Now you can imagine a similar process happening for any child in the womb. No matter what, when there are foreign genes present (in fact the placenta, which fights with the mother's body for resources, is largely driven by paternally expressed genes), the mother's body could mount some kind of immune response. And with each successive child, the response could become stronger in the mother and affect the developing brain of her later-born children. And indeed, this 2014 study finds that birth order is more important for the mom than the dad (which makes some sense, since the sperm don't know about birth order, but the mother's body could certainly be keeping track). To quote the paper: "The strong maternal age association might suggest that parity [birth order] adversely influences intrauterine development, and obstetrical complications (OCs) are also a risk factor for schizophrenia." I left that last part in to mention one more thing found to be associated with schizophrenia: problems in the physical birth of the child. In this case, schizophrenia is more likely in firstborn children, although this probably isn't saying very much, since parents who experience such complications are less likely to have more children (which would theoretically have been at a higher risk for birth complications, and therefore schizophrenia, as well).

The fact is: assaults on the brain at any point in development could lead it down the wrong path and support the emergence of the cognitive disorders I mentioned earlier. The 2014 paper even points out that schizophrenia could be fostered by bad parenting after birth (again, an effect tied to older parents; those effects are real), truly broadening the scope of the developmental window for diseases like schizophrenia. So, as with most things as complicated as cognitive disorders, there probably isn't one cause and there are probably a lot of ways to get there. Hopefully new science will sharpen our knowledge of these paths so we can make more informed decisions on how to live our lives and raise our families. I, for one, after reading this work, am not so concerned about popping out children any time soon…which is also probably not the right way to put it since I'm a guy.

Edit 1:

Image: Maternal age and fetal loss: population based register linkage study

(from BMJ)

Again from the Danish registry data, with 634,272 women and 1,221,546 pregnancy outcomes. The wintergreen line indicates mothers are not at much increased risk until 35. I'm dismayed by the amount of fetal loss in general! The authors point out that older mothers might be more likely to be admitted to the hospital (and therefore show up in the registry), so hopefully these numbers are skewed a little high. It's also pointed out in the comments that many Danish women smoke, and I don't think they correct for that. More recent numbers of fetal loss in America don't look nearly this bad: http://www.cdc.gov/nchs/data/databriefs/db169.htm

Edit 2:

Image: Maternal age-specific live birth prevalence of Down's syndrome in the United States

(from Wikipedia)

Sleep while you’re awake. Really!

When I was a young lad I remember being tired in the middle of a long day. My dad told me to go lie down for a few minutes before we had to leave, and I told him I wouldn't be able to nap in such a short period. He countered that just vegging out for a bit without falling asleep would be almost as good as a nap. Well, it turns out that, while I've come to find many of my dad's tales weren't quite correct, he was actually onto something with his awake-resting idea.

Giulio Tononi's group at U. of Wisconsin published a fascinating paper back in 2011 entitled 'Local Sleep in Awake Rats'. The title pretty much says it all: sleep-deprived rats had cortical neurons that showed signatures of being asleep while the rats were still doing awake rat things. In particular, neurons in local regions stopped firing while slow waves—which are characteristic of the sleep state—increased. See the figure below for a visual explanation.

Image

“Off” period in an awake rat. Notice the slow-wave (boxed) in the LFP while multiple neurons in the MUA (multi-unit activity) quiet their firing. Source.

This actually wasn't a huge surprise, since we've known for some time that some animals (dolphins and many birds, for example) sleep one half of their brain at a time. In fact, from an evolutionary perspective, I'm more surprised we've survived so long while spending 1/3 of our lives in an unconscious, assailable state.

But of course this study was only in rats, which aren't always the best models for primates like us. In this spirit, Valentin Dragoi's group at the University of Texas Medical School presented results at the 2013 Society for Neuroscience conference titled 'Occurrence of local sleep in awake humans using electrocorticography' (don't you just love these simple, straightforward titles?). They showed that sleep-deprived humans in a 'drowsy' state show these symptoms because local parts of their brain go to sleep, just like in Tononi's rat study! Specifically, they found that increases in slow delta waves correlated with decreases in faster, more cognitively active theta waves the longer people were awake.
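For the signal-processing-curious, here's roughly what "more delta, less theta" means in practice: an illustrative sketch on a fake signal using standard band-power estimation with SciPy. This is not the pipeline either group actually used, and all the numbers are made up.

```python
# Illustrative only: estimate delta (1-4 Hz) and theta (4-8 Hz) band power from
# a field-potential-like trace, the kind of measure behind "more slow waves,
# fewer theta waves." NOT either group's actual analysis.
import numpy as np
from scipy.signal import welch

fs = 500                       # sampling rate in Hz (arbitrary for this demo)
t = np.arange(0, 30, 1 / fs)   # 30 seconds of fake data

# Fake "drowsy" signal: a strong 2 Hz slow wave, a weak 6 Hz theta rhythm, noise.
signal = (3.0 * np.sin(2 * np.pi * 2 * t)
          + 0.5 * np.sin(2 * np.pi * 6 * t)
          + 0.5 * np.random.randn(t.size))

def band_power(x, fs, lo, hi):
    """Sum the power spectral density between lo and hi Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=4 * fs)
    df = freqs[1] - freqs[0]
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * df

delta = band_power(signal, fs, 1, 4)
theta = band_power(signal, fs, 4, 8)
print(f"delta/theta power ratio: {delta / theta:.1f}  (higher = 'sleepier' signal)")
```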

Tying this all together: 'sleep' isn't a binary state. When you're 'half asleep' and performing more poorly as a result, it's literally because parts of your brain are catching some Zs. This supports something conventional wisdom already knows: you should be making decisions after a good night's rest, and getting sufficient sleep is key to your daily productivity. Finally, I don't think it's too much of a stretch to say you don't have to actually fall asleep to rest your brain; as my dad once suggested, chilling out for a bit will still have some restorative power.

One question these results raise: what is so important about sleep, anyway, that all animals do it? In my earlier post about why we dream, I talked about how scientists were beginning to theorize that sleep is important for prophylactic cellular maintenance. Neurons are just like little machines, transmitting electricity and firing robustly enough to keep your mind and body going. And like any electricity-run machine, step number one is to turn it off before playing around with it, since you can't change the car battery while the car's on (and you also don't want to get electrocuted). Stretched metaphors aside, since my last post on dreams there's been a major paper supporting this prophylaxis theory. Maiken Nedergaard's lab at the U. of Rochester Medical Center published a paper in October titled 'Sleep Drives Metabolite Clearance from the Adult Brain'. In particular, they found that the brain clears "neurotoxic waste" like β-amyloid much more readily during sleep. You might have heard of this particular toxin before, as it is heavily linked to Alzheimer's (although it does have "significant non-pathological activity", so don't hate the player, hate the game).

These are the kinds of findings I love about science: research that can inform how we live our daily lives, or at the least point us in the direction of living a 'better' life. This is largely what pushed me into science, and further compelled me to move from molecular biophysics research to studying a higher-level topic like the neuroscience of memory.

There are no easy shortcuts to ending obesity.

I can't remember the last time I was as annoyed reading an article as I was with "How Junk Food Can End Obesity" by David Freedman, which I believe was the cover story of The Atlantic's July/August issue. Don't read it (I know, you probably weren't going to read it at this point anyway; I'm a bit behind the curve). It's 10,000 words of mostly anecdotes and poor logic. Now that I think about it, I can remember the last time I was so annoyed by an article: when I wrote my first-ever blog post in response to some eugenics garbage back in May.

Anyway I came across this Atlantic article when Steven Pinker posted the following on twitter in late September: “@sapinker The 1st article on food & obesity I’ve read in years that makes sense – forget Bittman & Pollan.” I honestly know very little about Dr. Pinker and actually appreciate his oft-inflammatory links but couldn’t get over the inanity in this article. Yes, to some degree it’s foodie-baiting. And after reading it and seeing that dozens of people/internet-news-types have already critiqued the piece Mr. Freedman probably got what he wanted. That said, I think there’s some interesting science to talk about here that disputes much of this article’s ‘logic’ and might make you a little more aware of what we can say about food, nutrition and obesity.

To begin, I’ll mention that Mr. Freedman does get one thing correct: there’s nothing inherently wrong with ‘processing’. Just like there’s nothing inherently wrong with “GMOs” or “drugs” or “chemicals”. Each can be bad or good depending on the circumstances and use. Labeling foods as bad because of semantics on what is ‘processed’ or ‘natural’ is indeed a poor way to go about nutrition. And Mr. Freedman goes on for a while with some anecdotal stories about how this ‘natural’ food isn’t as healthy as you might think and this ‘real’ food is just as bad for you as a processed food. This is hardly a new idea but I agree with his tenet in this case.

What Mr. Freedman gets wrong are the assumptions the rest of the article is based upon. He takes certain facts for granted largely because they're the status quo, and NOT because they've been unequivocally shown to be true by science. Science is hard, and what I've come to find out is that nutrition science is EXTRA hard. There are a lot of reasons for this, but suffice it to say that different goals for different people with different genetics and different environments at different ages create a ton of noise in any study of what's good for us. In short: humans are complicated organisms.

I find Mr. Freedman's reliance on the status quo extra ironic because he's written a book called "Wrong: Why Experts Keep Failing Us—And How to Know When Not to Trust Them." Daniel Engber wrote about this at Slate, and it appears he's actually read the book, so his word is better than mine. But in short, Mr. Freedman's inability to recognize how few objective truths we have when it comes to the science of obesity is nothing short of hypocritical, considering he wrote a book on how wrong 'experts' often are.

This is most apparent from Mr. Freedman’s continued use of the word ‘obesogenic’ throughout the article. 10 times, in fact. Well, here’s some not-all-that-surprising-considering-the-obesity-epidemic news for everyone: we don’t know what makes people fat! Some people think it’s sugar. For decades government ‘experts’ agreed it was because we were eating too much fat (more on this oversimplification later). David Berreby wrote a fascinating article about our animal roots and obesity that gets at a lot of the science behind our fatness, but he concludes by saying that the answer’s not that easy: obesity probably has dozens of causes that run the gamut from our evolutionary propensity for a quick calorie fix to hormone-imitating chemicals to microorganisms. Now THAT is a good, well-researched article.

Here’s one particular assumption being made by Mr. Freedman that blows a hole right through his basic premise. IT’S NOT ALL ABOUT CALORIES IN/CALORIES OUT. This seems logical and everything, so much so that my college biology teacher (who, ironically, was fat) was spouting it to hundreds of us naïve ankle-biters, but it is extremely misleading. Yes, in terms of ‘thermodynamics’, eating too many calories will make you fat. But food isn’t just calories. Food is varied and sets off feedback mechanisms and hormonal responses that can make you want to eat more or less food depending on what you’ve eaten already, the context of when/where you’re eating it and if YOU’RE eating it (i.e., people respond differently due to their own genes, microbiota and environments). To quote that superb David Berreby article, “Chemicals [with hormonal properties] ingested on Tuesday might promote more fat retention on Wednesday.”

For example, Mr. Freedman goes on at length about how we need to work with Big Food companies because they can trick us into eating lower-calorie foods that taste as good or satisfy us just as much. Well, the problem is: the calories themselves are a major factor in being satisfied. One cool 2008 study in particular used mice that had their taste receptors for sugar knocked out (knockout mice have their genes modified such that selected traits can be eliminated) and had them choose between sugar water and plain water. At first, the mice didn't care which they drank from, since they couldn't taste the sugar. Over time, however, the mice drank significantly more from the sugar water. This means their bodies actually could 'sense' the calories and send a message back to the brain saying 'dude, drink THAT one'. From an evolutionary perspective this makes total sense: your taste buds probably don't know what nutrients you're lacking, but your body does. While your taste buds are tuned to more immediate problems like 'is this poison?' or 'these berries aren't as sweet as THOSE berries', what your body actually cares about is essential nutrients like calories (or protein, as shown in a later study). As a result, when you're drinking non-caloric diet soda with sugar substitutes that are hundreds of times sweeter than sugar, they might satisfy you for a bit, but you're hardly fooling your body into being less hungry. Which helps explain why regular consumers of diet drinks are actually fatter than non-drinkers. Edit: there's also a new study directly linking the consumption of artificial sweeteners to obesity/diabetes.

There’s even more to it than that, since those taste receptors do report on how sweet that diet soda is, and there are feedback mechanisms that alter how you take in future calories. A 2013 Journal of Neuroscience paper found that rats given a 3% saccharin solution for 7 days had similar brain changes (specifically, trafficking of AMPA postsynaptic receptors in the nucleus accumbens—sort of like your brain’s pleasure center) to rats given a 25% solution of sucrose for 7 days. This means those obesogenic effects (notice how when I use the term there is actually a study behind it) from diet soda could very well be a direct result of your brain thinking there are way more calories coming in than it actually gets.

Something else that annoyed me even more about Mr. Freedman’s article was his later retort to the many, many articles telling him how wrong he was. While arguing on the internet is hardly productive anyway, his responses further show how little he really knows about modern thinking on nutrition. In particular, here is what he said to a nice response piece from Tom Philpott at Mother Jones:

It was really sad to see this piece. If any publication should have empathy for the plight of the unhealthy poor, you’d think it would be Mother Jones. But no, this was just a dopey, rote screed by an Atkinite, that small but incredibly loud cult of ultra-low-carbers who have become the LaRouchians of the dietary world. Calories don’t matter! Exercise doesn’t help! Eat all the fatty foods you want! It’s all about the carbs! The Atkinites like to claim that everyone else is stuck in the “low-fat craze” of the 1980s. They don’t like to mention that the low-carb craze dates to the 1860s. For the record: It’s best to trim both carbs and fat. Ask your doctor, or any obesity expert.

NO NO NO NO NO. Tom Philpott says nothing about the Atkins diet. Instead, he points out REAL, MODERN research that shows how little eating fat correlates with being fat (this might turn out to be English's most costly homonym). There is a wealth of modern evidence indicating that low-carb diets work quite well (point 10 here has a nice, quick compilation of evidence from the last 10 years, and I can give you days of additional reading material; I recently enjoyed Denise Minger's exhaustive takedown of Forks over Knives, for one). Meanwhile, Mr. Freedman thinks we should just ask our doctor. Ugh. Is he trolling us at this point? As an expert on experts, you'd think he would be aware of how little 'your doctor' knows about nutrition. Or most things, for that matter—that's what specialists are for.

In fact the more I read the article the more it just seemed like a giant, thickly-veiled troll. For example:

 I finally hit the sweet spot just a few weeks later, in Chicago, with a delicious blueberry-pomegranate smoothie that rang in at a relatively modest 220 calories. It cost $3 and took only seconds to make. Best of all, I’ll be able to get this concoction just about anywhere. Thanks, McDonald’s! If only the McDonald’s smoothie weren’t, unlike the first two, so fattening and unhealthy. Or at least that’s what the most-prominent voices in our food culture today would have you believe.

You're drinking a cup of sugar water. Maybe it has some remnant of fruit in it, but honestly I wouldn't suggest eating non-organic berries anyway. And I'm sure Mr. Freedman knows it's sugar water. If he's trying to claim that sugar water in smoothie form should cost $3 instead of $8, that's fine by me. But when he tries to act like fast food enterprises have anything but profits in mind, he's quite mistaken. As I've mentioned above, your body is designed to detect calories, and if these companies don't know this scientifically, they know it from their profit margins. McDonald's smoothies are a perfect example of this, in fact. Only 220 calories! Which means you'll need to eat a bunch more of their food to feel full. And since it's sugar, it'll probably just make you hungrier anyway. But since it has the words 'blueberry', 'pomegranate' and 'smoothie', it hides under the guise of being healthy. And I know HE knows this, so his point is completely moot. Maybe he was getting paid by the word?

I suppose that's enough vitriol for now. I suggest a few blogs below that cite legitimate scientific studies and use their expertise to critique the modern literature. It's not easy reading, though, and as I said earlier, nutrition is complicated and the findings are often quite incremental. That said, modern techniques are starting to uncover some fascinating pieces here and there (e.g. here's a great study on a direct link between eating red meat and accelerating heart disease), and contrary to what Mr. Freedman would have you believe, we're making progress without having to resort to tricking ourselves into thinking we can trick our bodies into becoming thinner.

Stephan Guyenet’s site is a good source of well-referenced, science-based info.

Mark’s Daily Apple looks kind of ridiculous because the site is littered with pictures of him with his shirt off (my friends and I like to refer to it as Mark’s Daily Ab-shot) but the info is pretty legit and usually well-referenced.