9 Reasons why we suck at eating

Long time, no post. I’ve been a little hesitant on this one since it might read as something like an advice column. My main goal is to dispel a number of nutritional status quos/myths that a lot of people adhere to despite little hard evidence. Unfortunately, as I’ve discussed in a previous post, studying human nutrition is REALLY HARD.

That said, I find nutrition interesting and want to do what I can to figure out what (and when!) we should eat. But to be clear, you shouldn't take my word as gospel, so if something piques your curiosity please check out the references I provide (and after doing so feel free to comment if you think I'm in error).

MYTH 1: You should eat lots of small meals throughout the day
MYTH 2: Breakfast is the most important meal of the day
MYTH 3: Eating fats is bad for you
MYTH 3.5: Eating cholesterol is bad for you
MYTH 4: Sugar is better for you than corn syrup
MYTH 5: Salt consumption raises your blood pressure
MYTH 6: Organic foods don’t have pesticides
MYTH 7: “Natural” is better than artificial
MYTH 8: MSG gives you headaches
MYTH 9: Diet sodas help you diet


MYTH 1: You should eat lots of small meals throughout the day: I'm not really clear on where this one came from. In terms of really basic biochemistry, it makes some sense to have a constant stream of input calories that are instantly utilized for output energy, to keep free radicals from hanging around. But the fact is our bodies didn't evolve with bowls of nuts and fruit lying about the savannah. So having your digestive system constantly working throughout the day doesn't seem like something your body was meant to do.

On the contrary, studies in organisms from yeast to mammals show strong evidence that consistently experiencing starvation conditions makes them age better (as in, they stay healthy for longer). I realize this is counterintuitive, but it helps to think of things in an evolutionary sense. Our ancient ancestor, let's go as far back as one much like a rodent, only lived for a couple of years before dying of old age/cancer. And if he/she happened to be born in the middle of a drought, which could last for months or even years, that'd be devastating for a species with such a small window before death. It appears evolution's solution to this was to express certain survival genes, known as sirtuins, which slow down your metabolism and therefore your entire aging process, so you can wait to worry about procreating in more bountiful times.

So why'd I tell you all of that? Because these sirtuin genes seem to have stuck around in higher species, where they have much the same effect in dogs and monkeys as they do in rodents. And the good news is "starvation" might not be as difficult to achieve as you think. As outlined in this NY Times piece and detailed in PNAS in 2014, even relatively small windows of starvation can cause sirtuin expression and possibly give you long-term longevity benefits. In short, there are 3 ways to achieve such starvation conditions, in what I'd consider increasing order of ease:

[Hard]: Calorie restriction: eat something like 30% fewer calories than a typical person over the course of your life. This is what has shown longevity benefits in extensive testing in rodents, in addition to limited evidence in monkeys. Some people actually do this.

[Moderately Hard?]: Intermittent fasting: go through long periods of minimal calories each week. For example, Jimmy Kimmel said he maintains his weight by doing a 5:2 diet, where he eats what he wants on weekdays but only 500 calories a day on weekends. Hugh Jackman and Benedict Cumberbatch are also apparently fans. But no women. Just kidding, there must be some. Right?

[Not Bad]: Time-restricted feeding: go through a moderate period of fasting every day. Recent research in rodents has shown this can be as short as 15 hours a day (which sounds like a lot, but it’s basically just skipping breakfast or dinner).

I've actually been loosely doing this one for almost a year now, and I can confirm it's "Not Bad". I happen to never be hungry for breakfast anyway and just eat a late lunch around 2-3pm. I like to think each time I get hungry around 11am (the hunger goes away fairly quickly as your body dips into fat stores and makes ketones, since your brain can't go without fuel for long!) I'm adding another healthy day to the end of my life. To be clear though: a lot of research still needs to be done to understand the long-term effects in humans.
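If you want to put rough numbers on those three options, here's a quick back-of-the-envelope sketch in Python (the 2,000-calorie baseline is just the standard label value I'm assuming for illustration, not a number from any of the studies above):

```python
# Rough weekly calorie comparison of the three "starvation" strategies.
# The 2,000 kcal/day baseline is an assumption for illustration only.
BASELINE = 2000  # kcal per day

normal_week = 7 * BASELINE                 # eat normally every day
restricted_week = 7 * BASELINE * 0.70      # ~30% fewer calories every day
five_two_week = 5 * BASELINE + 2 * 500     # 5 normal days + 2 days at 500 kcal

# Time-restricted feeding doesn't necessarily cut calories at all; it just
# compresses them into a daily eating window.
fast_hours = 15
eating_window = 24 - fast_hours            # e.g. roughly noon to 9pm

print(f"normal week:       {normal_week:,.0f} kcal")
print(f"30% restriction:   {restricted_week:,.0f} kcal ({normal_week - restricted_week:,.0f} fewer)")
print(f"5:2 fasting:       {five_two_week:,.0f} kcal ({normal_week - five_two_week:,.0f} fewer)")
print(f"time-restricted:   same calories, just squeezed into a {eating_window}-hour window each day")
```

The point being: the first two approaches actually cut calories, while time-restricted feeding mostly just moves them around.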

MYTH 2: Breakfast is the most important meal of the day: Considering I just told you that I don’t eat breakfast, this might seem like a conflict of interest. I’ll let Aaron Carroll (whose NY Times pieces are definitely worth keeping up with if you’re interested in sound, reasonable nutritional advice) take this one: Sorry, There’s Nothing Magical about Breakfast. The bottom line: “… evidence for the importance of breakfast is something of a mess. If you’re hungry, eat it. But don’t feel bad if you’d rather skip it, and don’t listen to those who lecture you.”

MYTH 3: Eating fats is bad for you: This probably won't come as a big surprise to anyone voluntarily reading a health-related blog post on the internet, but it turns out fat doesn't make you fat (as I've put it in the past, this might be the most costly homonym in language). If you don't believe me, one of the best long reads of 2016 explains how egotistical scientists, starting all the way back in the 1960s, mistakenly maligned fat instead of sugar for decades. Fat is a calorie-dense substance with feedback mechanisms that tell you you're full. As a result, it's more difficult to overeat with a high-fat meal. Saturated fat, which gets particular blame, is not your enemy! Of course, there are bad fats (in fact, I'm planning a future post pontificating on why fast food is bad for you. My hypothesis: an abundance of vegetable oils). We've all heard about trans fats, and hydrogenated oils seem dangerous as well.

But the fact is there are fats in every cell of your body. So some of them are certainly good! And it appears the ratio of fats that make up your membranes likely has long-term health consequences. This is where essential fatty acids come in, and more specifically, why people often talk about the omega 3:6 ratio. Getting too much of one of these building blocks without getting enough of the other could be harmful. Fortunately, contrary to what many sources will tell you, your body can convert the shorter omega-3s into the longer ones it needs. However, the enzymes responsible for this conversion don't appear to be particularly efficient, and since omega-3s are critical in your brain, retina, etc., it's important to seek out a dietary source like fish, flax, or chia seeds.

For more reading, this blog has a definitive guide to fats with some well-referenced information. And there are certainly mainstream sources too.

Some take-home points, consolidating my impressions:

-animal fats like lard or butter are pretty neutral nutritionally. They’re full of calorie-dense saturated fat, which makes you feel full. However, they’re relatively low in vitamins/antioxidants (except for vitamins A and D) so I largely think of them as filler. Tasty, tasty filler.

-fats from NATURALLY OILY FOODS like nuts/plants are winners! Olive oil, coconut oil, avocado oil, macadamia nut oil, etc. are high in healthy fats and, unlike animal fats, tend to be enriched in vitamins/antioxidants to boot.

-fats from plants that require major extraction methods aren't so good for you. Hydrogenation was adopted to squeeze value out of cotton seeds so Procter and Gamble could make money on something they were throwing away. The hydrogenation process makes many seed oils like this edible, but they're almost certainly not good for you in large quantities. You didn't really think those french fries cooked in rapeseed oil were good for you, did you? Also of note: these oils are often labeled "vegetable oils" for marketing purposes, even though "vegetable" is a misnomer.

-fats from fish are good for you! Sure, you can certainly overeat fish (particularly if it’s high in mercury), but getting a source of omega-3s a few times a week seems important for long term health. The American diet almost certainly has too many omega-6s (highly enriched in vegetable oils) and too few omega-3s.

MYTH 3.5: Eating cholesterol is bad for you: Another one you've probably heard about already, but just in case I'll slide it in here since it's similar to Myth 3. The relatively conservative Dietary Guidelines for Americans found "available evidence shows no appreciable relationship between consumption of dietary cholesterol and serum (blood) cholesterol, consistent with the AHA/ACC (American Heart Association / American College of Cardiology) report. Cholesterol is not a nutrient of concern for overconsumption." About 85% of the cholesterol in your blood is made in your liver, so issues with cholesterol overwhelmingly have to do with your genetics.

MYTH 4: Sugar is better for you than corn syrup: I wrote a whole post on why sugar is corn syrup is sugar. All sugar is bad for you. There are articles and talks galore if you don't believe it. It also recently came to light that the "fat is bad, carbs are good" craze was perpetuated by Harvard scientists paid off by the sugar industry in 1967. If you're going to ruin millions of lives, at least do it for more than $50k! One of the main problems with sugar is that the feedback mechanisms meant to tell your body you've had enough calories are stymied. To put it another way: sugar is nothing less than an addictive substance. Fortunately, our bodies are awesome at getting us to breeding age without many deleterious effects (#1 pick and former NBA MVP Derrick Rose apparently survived largely on gummy bears and sour straws through college. Of course then he blew out his knee. Coincidence?). However, if you're looking to live a healthy life past 25, I'd suggest trying to limit your intake.

MYTH 5: Salt consumption raises your blood pressure: Now for some good news: there's no convincing evidence that high salt consumption is bad for most people! The link between salt intake and blood pressure looks like a myth perpetuated by decades-old science, though the jury is still out and I'm sure we'll be seeing back-and-forth pieces for decades (including a recent NEJM article which says too little OR too much salt is bad for you, which is good advice for most nutrients!). It seems obesity is a far more important factor for unhealthy blood pressure, as you might expect.

MYTH 6: Organic foods don't have pesticides: They often do; they were just grown with pesticides that made it onto the USDA's organic-approved pesticide list. Some of these "organic" pesticides are almost certainly bad for you if you get enough of them (mineral oil, borax, and copper fungicides definitely don't sound healthy) and there's no way to really know how much was used. Which isn't to say that you should totally abandon organic food, since it does tend to have less pesticide residue, but be aware that some foods are just harder than others to grow without pesticides (organic or not).

Something with a rind, like an orange, avocado, mango, or banana, is not only harder for pests to get through but also keeps the actual pesticide from touching the part you're going to eat. Meanwhile, produce with thin skin like berries or apples (which have their own problems, since grafting reduces their ability to evolve their own natural pesticides) tends to require more pesticides, and, even worse, those pesticides get absorbed more easily into the part you're going to eat. I'm not saying you should stop eating fruits and veggies! But be aware of the most and least pesticide-heavy produce (EWG keeps a convenient list for you) and try not to eat too many of the bad ones.

MYTH 7: “Natural” is better than artificial: Fairly self-explanatory, but it’s another pet peeve of mine just like “organic” pesticides. Just because something is natural doesn’t mean it’s good for you. Arsenic is natural. Piranhas are natural. And just because something is synthetic doesn’t mean it’s bad for you. We can synthesize the sleep aid melatonin, and it’s the same exact chemical as what we extract from animals’ pineal glands. “Natural” is a mostly meaningless marketing word. Much the same way “chemicals” and “GMOs” are. There are good and bad kinds of each, since they describe a huge range of different products.

MYTH 8: MSG gives you headaches: This old wives' tale was largely propagated by an NEJM article from 1968 written by one cranky old doctor diagnosing his own symptoms (N=1!). 538 did a deep dive on this, but for the vast majority of people MSG, which is just sodium paired with glutamate (an amino acid our bodies make and use in abundance), has no effect in the doses used to give foods that tasty umami kick. Spread it on!

MYTH 9: Diet sodas help you diet: This isn't to say that sugar water is good for you either (I clearly indicated in Myth 4 that it's not), but it turns out that despite avoiding calories, artificial sweeteners have their own deleterious effects that can lead to obesity. I went over more details in a previous post about how your brain isn't tricked so easily, but the key finding is that regular consumers of diet drinks are actually fatter than non-drinkers. New work seems to indicate part of the problem is that sugar substitutes change your gut bacteria in a way that might lead to obesity.

Where do long “lost” memories come from?

A little while back (853 days ago(!) WHAT HAVE I BEEN DOING WITH MY LIFE?!), I wrote a post on the neuroscience of dreaming. After writing that post, rereading it approximately 15x (it’s amazing how little you remember from your own writing) and thinking about my buddy Peytonz’s* comments, a related question crept into my mind: where do “lost” memories come from? And not just random lost memories, but random ones from long ago? What spurs their retrieval?

Let me give some examples to make sure we're on the same page. I started thinking about such "lost" memories a few years ago, and now I catch myself (with surprising frequency) having recalled them. When I notice that I've recalled one of these obscure memories, I'll try sifting through recent thoughts to see if I can figure out how it was cued. For example, a few weeks ago I was eating pasta, thinking about the bitterness of the sauce, and remembered something I hadn't thought of in over 20 years: the time my aunt served us homemade tomato sauce, I made a fuss because it tasted weird to me, and she heated up Ragu to put on my spaghetti. In this case the cue was clear: I was eating pasta with a bitter sauce, and this reminded me of a time when I didn't have a taste for good, homemade tomato sauces (sorry Aunt RoRo!**). But this isn't quite the kind of memory I want to discuss.

The kind of “lost” memory I’d really like to explore is when you recall something out of the blue that as far as you know is totally unrelated to what you were thinking about. An uncued memory, essentially. A recent one I had: I was at work sorting through pictures of generic shapes that I was creating for a task and I suddenly recalled throwing a baseball with my buddy Brianz* in Raleigh’s Jaycee Park ~7 years ago. In this case, as hard as I tried, I couldn’t come up with how ‘baseball’, ‘Brianz’, ‘Jaycee Park’ or anything related had gotten into my head. In essence: the memory appeared to come from nowhere. Now that you’ve read this, try catching yourself uncovering an uncued memory in your own life. They happen surprisingly often (note I’m not talking about “repressed” memories, which might not even be a real thing).

And now that I’ve identified this exact kind of memory—those of the “lost” uncued variety—let’s explore where I think they might come from. Please note that these are speculations based only somewhat on established science.

Idea 1: There was a cue, but you didn't have conscious access to it. Your brain has many different parts with many different 'jobs' being performed. Since you take in so much data through your senses at every moment of every day, in addition to a whole bunch of data in storage for you to access, your brain needs to make complicated decisions by sorting through what all these data streams tell you. And because you take in more data than you could possibly use, your conscious brain only has access to a limited stream of (highly processed) information. As an analogy, when you're the boss of a large company, you don't have time to micromanage every single person/report/price/whatever, so you have various managers accumulate information and execute smaller decisions for you, only informing you when grand decisions need to be made (feel a bit more important now, don't you?).

With this idea in mind (consciousness meta-alert), we can imagine how your senses frequently pick up information that you are not privy to, which can then spur memories to be recalled and brought to your attention without you ever having conscious access to the cue itself. What kind of "unknown" sensations am I talking about? Take subliminal images as an example. Studies have shown that when you flash an image very quickly on a screen, people will report they have not seen it. However, fMRI scans show that brain regions do register these images, and people are even able to win money by using information from these images without being able to recognize them. At some point in our evolutionary past our ancestors had no use for single images that showed up and then disappeared in 10 milliseconds, so the visual stream in your brain, which does have access to such quickly-disappearing images, can decide they're not important enough to pass on to higher regions.

So, to tie things up: when I had that memory of throwing a baseball with my friend, maybe one of the shapes I was looking at reminded me of a baseball. Or a shape was weird like Brianz's throwing motion. In which case, even though I didn't consciously make the connection, the memory had an explainable cue and only seemed to come out of nowhere.

Idea 2: You’re dumb and forgot the connection between what you sensed and what you remembered. Just kidding. You’re smart, as evidenced by you making it this far into my post. And good-looking!

Idea 3: Coincident connections in your neural network. In my post on the neuroscience of dreaming, I brought up replay events that we’ve found occur in rodents when they seem to be reimagining their behavior during rest. Here’s an image of one of these events:

[Image: a replay event, in which neurons fire in the same sequence while the rodent runs and again later while it rests]

In these events, each neuron fires one after another, and the key is that they fire in this same order both when the rodent was running AND later when it was resting, as if the rodent's brain were recapitulating the rodent's path in memory by replaying the sequential cell activity. There isn't causal evidence yet for sequential cell firing actually being an essential signature of memories (Loren Frank is trying to make this happen, though!), but the specific code doesn't matter for this thought experiment. Let's just take it for granted that some network of N neurons is responsible for one of your episodic (autobiographical) memories. In my case: being fussy about bitter pasta sauce at my aunt's house when I was little. Now, I sensed the bitter pasta sauce in the present day by seeing/tasting/smelling it, which activated mnemonic representations in my visual/gustatory/olfactory long-term storage systems. Our brains likely store memories redundantly, in which similar memories are stored in similar engrams by a similar group of neurons. And some percentage of those N neurons were activated by 'pasta with sauce', in this case 'bitter pasta sauce.'

There's a putative process in the hippocampus called "pattern completion", in which a partial cue can recapitulate a whole memory. So when I sensed that present-day bitter pasta sauce, maybe 5% of those N neurons were active (many of which are active any time you eat pasta sauce, much like the neurons found in human brains that respond to specific concepts like Rachel from Friends or The Simpsons). And, by some chance, this small group of neurons "pattern completed" from the initial cue into the full reinstatement of the N neurons that collaboratively hold the episodic memory of me rejecting bitter pasta at my aunt's house. This gives some intuition for how related memories can be recalled.

Now, that gives a possible explanation for related memories. But what about seemingly unrelated memories? What I’m postulating is that with the redundancy necessary in our mnemonic system, in which each neuron is almost certainly part of many memories that are completely unrelated, the right pattern of activation of some subset of neurons could ‘pattern complete’ into a memory unrelated to current sensations. So, for my example above of tossing a baseball with my friend: let’s say 127 neurons are key for me to recall that memory. And, based purely by chance, some stimulus or thought I had happened to activate 42 of those same neurons. Through pattern completion my hippocampus could have taken the signal from those 42 and, due in part to the randomness of neural spiking within the structure of a neural network, happened to reactivate the other 85 neurons necessary to recollect that specific episodic memory. The fact this coincident neural firing occurred 7 years after the fact would therefore be a chance event.
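If you like toy models, here's a minimal Hopfield-style sketch of pattern completion in Python. To be clear, this is a cartoon (a classic textbook attractor network, not the hippocampus itself), and the 127/42 numbers are the made-up ones from my example above; it just shows how cueing a fraction of a stored pattern's neurons can pull the whole pattern back out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random binary "memories" across N neurons (+1 = active, -1 = silent).
N, n_memories = 127, 3
memories = rng.choice([-1, 1], size=(n_memories, N))

# Hebbian learning: neurons that are co-active in a memory get a positive connection.
W = sum(np.outer(m, m) for m in memories) / N
np.fill_diagonal(W, 0)

# Build a partial cue for memory 0: 42 of its 127 neurons are set correctly,
# the rest start out random (mirroring the made-up numbers in the post).
cue = rng.choice([-1, 1], size=N)
known = rng.choice(N, size=42, replace=False)
cue[known] = memories[0][known]

# Let the network settle: each neuron aligns with the summed input from its neighbors.
state = cue.copy()
for _ in range(10):
    state = np.where(W @ state >= 0, 1, -1)

print(f"match with memory 0 before settling: {np.mean(cue == memories[0]):.2f}")
print(f"match with memory 0 after settling:  {np.mean(state == memories[0]):.2f}")  # usually ~1.0
```

The design point: with only a few stored patterns, even a noisy, partial cue overlaps the right attractor enough to fall all the way into it.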

These thoughts on chance happenings in the brain remind me of a fascinating topic in neuroscience: randomness in our neural code might well be an essential feature for survival. In fact, some smart cookies even consider the concept of randomness in the brain one of the major paradigm shifts in neuroscience over the past 15 years. While this seems a bit counterintuitive, if an organism is 100% predictable in its actions, then its predators will be 100% likely to learn how to eat it (as Kevin Mitchell put it, the collective noun for animals with no randomness: lunch). And beyond tricking predators, randomness can also help animals like rats explore new strategies, or even aid fruit flies in achieving more successful love songs. And in the case of memories, maybe coming up with random ideas once in a while could provide some survival advantage as well. Obviously you want your most salient memories to be robustly recalled (from what I remember, baby bears usually mean there are mother bears nearby, and mother bears EAT people like me), but for memories of less obvious importance, it's interesting to think the randomness of our recall, our creativity, to some degree, might show up in our occasional 'lost' memories.

*names changed for anonymity.

**name not changed for anonymity, since this is a funny name.

Why giving my sequencing data to 23andme doesn’t bother me.

Recently 23andme, a company that's made a name for itself by genotyping hundreds of thousands of key spots in the genome for $99 a person, signed a $60m deal with biotech company Genentech to allow access to data from their 800,000 customers. This seemed to put off people on the internet, for example an antagonistic article on gizmodo titled "Of course 23andme's business plan has been to sell your genetic data all along." To which I'll argue: so what?

The main thing that seems to bother people is that a piece of them is being used for profit. That somehow their data, because it's of the kind embedded in DNA, shouldn't be tapped for material gain. I want to explain why this really doesn't bother me at all, and, unless you've sworn off modern technology, why I don't think it should bother you either.

1.) You already give companies plenty of your data. This one is pretty self-explanatory, but Google, Yahoo, Twitter, et al. already make plenty of money selling your data. Why is giving up small sections of your DNA any worse than letting them know the location of the rash you want to get rid of? Or what route you're taking to Maine tomorrow? Or what kind of porn you like? I'd argue that type of data could be used in far worse ways than knowing you have a few irregular polymorphisms. And trusting Google to anonymize our searches is no different than trusting 23andme to anonymize those polymorphisms. Also, to be clear: 23andme does not sequence your whole genome. It only looks at key places known to have phenotypic effects.

I also get the impression that people are personally offended since it's their DNA. Let me be clear: there's nothing special about your DNA. Unless you're literally a 1 in a billion person (like her), you're just a big mixed bag of genes like everyone else. And since your data is anonymized anyway, there's no easy way for your employer/health insurance company to see it. You're just 1 of a sample size of 800,000 used in these studies to learn how certain mutations lead to measurable changes.

Edit: here’s a new article called “Your DNA is Nothing Special“.

2.) Progress is progress. Government funding of science is not unlimited. And while the NIH has taken steps to require publications from government funding to be open after 1 year, most data generated under government grants is still not released to the public. That's not so different from biotech and pharmaceutical companies, which invest billions of dollars to create their own data in an attempt to find the next hot technology or pill. Sure, they do it to make money, but they are creating new products that (generally) make people healthier, largely with their own research funding. Edit: In fact, companies spend more money than the government on research:

[Chart: U.S. research and development spending by industry versus government over time]
(from: http://jama.jamanetwork.com/article.aspx?articleid=2089358)

Further, sometimes government funding just isn't enough. If you come up with a great idea and want to implement it at a large scale, you have to start a company. Here's a great example: Oxitec. I heard about this company when someone I know posted on Facebook that an evil company was going to kill all the mosquitos in the Everglades. They suggested that this was a job for scientists, not some company trying to make a profit. Well, when I looked into it, it turns out Oxitec was founded by Oxford University scientists who invented a way to keep mosquitoes from producing viable offspring (specifically: they release engineered male mosquitoes that effectively sterilize the wild females they mate with). Since the threat of dengue fever has become worrisome in Florida, and tests of the technology had gone well in Brazil and other countries, the state decided it was in its best interest to lower the mosquito population. The key is: this would not have been possible if these Oxford scientists hadn't acquired investor funding to grow their product and do field testing. And this is just one example where the science needed private funding to be tested at a large scale to improve our world (interesting side note: Radiolab did a fun piece on whether eliminating mosquitos would hurt the world in some way, and even the entomologists they interviewed didn't see a problem with it. And mosquitoes do kill more people than any other animal, humans included!).

3.) Companies aren't evil. It's trendy to vilify companies. I can't go a week on Facebook without seeing someone blame Monsanto for something (despite the fact they might be one of the few things keeping people in the developing world from starving). Heck, as a New Yorker with a liberal slant I do plenty of company-bashing myself. But the fact of the matter is companies aren't evil. Companies try to make money. And since when do Americans hate capitalism? And trying new technology? And jobs? 23andme came up with a novel idea, acquired seed funding, implemented it and gave you some cool information in the process; don't they deserve to make money from that? Sure, individual people at companies can be evil, but I don't get the impression that the founder of 23andme is looking to take over the world (and she's not exactly hurting for money herself as the separated wife of Google founder Sergey Brin); instead she seems interested in changing it for the better.

So, if you've submitted your data to 23andme: feel good about it. I do. And I wholeheartedly recommend their service: I used it to discover who my great-great-great grandfather was, which diseases I should be concerned about as I get older, and that I'm resistant to noroviruses! For a science guy like me, that's nerd gold!

My new paper on time-context neurons in the primate hippocampus

I often come across papers from other fields (and even within my own field) and think to myself “yeah, but why’s this important?” It’s not that I don’t believe that it’s important, it’s just that I don’t know enough to get why. With that in mind, I thought I’d contextualize my new paper for the lay reader in a few paragraphs. For the tl;dr version: scroll down to the parts IN BOLD AND ALL CAPITALS WHERE IT LOOKS LIKE I’M SHOUTING BUT I’M REALLY NOT where I try to highlight the main findings.

The title is "Context-dependent incremental timing cells in the primate hippocampus." In normal-speak: I find monkey neurons that seem to help us (yes us—presumably we're just fancy monkeys) keep track of time in certain contexts by changing their firing rates incrementally. To be fair: I don't show anything proving how these neurons track time. My former colleague (and paper coauthor) Yuji Naya did that with some amazing previous work. In short, Yuji recorded from multiple brain regions in a task where monkeys had to keep a couple of pictures in order. He found that neurons would incrementally "time" (by slowly rising or falling in firing rate) during the delay between the two pictures, when the monkey had to hold the first picture in memory in preparation for seeing the second. The key was: this incremental signal was strongest in the hippocampus, and the farther you looked from the hippocampus into other nearby (more "vision-related") brain regions, the weaker the signal got. This, plus a bunch of other work, points to the hippocampus as important for how our brain keeps track of time.

So, my first question was: do neurons do this in all tasks, or just when the monkey has to keep pictures in order? Therefore I went digging through some data taken by Sylvia Wirth in which monkeys were tasked with memorizing associations between objects and certain places for reward (think animal pictures in different locations on a computer screen). When I started looking at this data, I saw cells rising or falling in firing rate during a delay period similar to Yuji's (we look during delay periods because this is when the animal is holding the information in memory before responding later). There aren't many standard ways to statistically analyze firing rates that change over time like this, so I used resampling methods (aka bootstrapping aka Monte Carlo) to find groups of these cells (See Fig. S4 for details; here's another paper dealing with much the same problems I did that might have been useful for my analysis if it'd come out earlier). I also spent a lot of time trying to make sure these changing signals weren't just signaling anticipation like in other brain regions (e.g. amygdala, parietal cortex). This stuff is all in the paper and kinda boring, so look there for details.
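For the curious, here's roughly what a resampling test looks like in practice. This is a generic permutation-test sketch on fake spike counts, not the actual analysis in Fig. S4, but the logic is similar: measure the ramp in the real data, then ask how often shuffled data ramps that strongly by chance:

```python
import numpy as np

rng = np.random.default_rng(1)

def ramp_slope(trials):
    """Slope of the trial-averaged firing rate across delay-period time bins."""
    t = np.arange(trials.shape[1])
    return np.polyfit(t, trials.mean(axis=0), 1)[0]

# Fake data: 50 trials x 20 time bins of spike counts, with a weak upward ramp baked in.
n_trials, n_bins = 50, 20
true_ramp = np.linspace(0, 2, n_bins)
spike_counts = rng.poisson(lam=3 + true_ramp, size=(n_trials, n_bins))

observed = ramp_slope(spike_counts)

# Null distribution: shuffle the time bins within each trial, destroying any real ramp
# while preserving each trial's overall firing rate.
null = np.array([
    ramp_slope(np.array([rng.permutation(trial) for trial in spike_counts]))
    for _ in range(2000)
])

p_value = np.mean(np.abs(null) >= abs(observed))
print(f"observed slope = {observed:.3f} spikes/bin, permutation p = {p_value:.4f}")
```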

Anyway, the key point is: I found a bunch of these cells rising and falling in firing rate while the monkey remembered objects in certain places. AND MANY OF THESE NEURONS DID THIS TIME TRACKING ONLY WHEN CERTAIN OBJECTS WERE IN CERTAIN PLACES. This is where the “contextual” part comes in, as these single neurons seem to be uniting temporal information with contextual information.

Now, at this point you might be like: "what's the big deal, man/mate/eh? You just found a bunch of moving firing rates." What I think is really cool is what happened when we looked at the animal's behavior. This task was pretty darn hard for a monkey, so the animal only learned a new object in a new place in 71% of daily sessions. But when I found these time-context cells, the monkey had a 93% chance of learning during that session. Therefore, THESE SPECIAL TIME-CONTEXT NEURONS ALMOST EXCLUSIVELY SHOWED UP WHEN THE MONKEY LEARNED NEW CONTEXTS. And since contextual encoding is known to require the hippocampus as part of our episodic memories, it makes some sense that this part of the brain is keeping track of context. What is novel here is that SINGLE NEURONS SEEM TO BE KEEPING TRACK OF TIME AND CONTEXT SIMULTANEOUSLY.

I also looked at what happened when the monkey got the answer right or wrong. What I found was the timing signal dissipated when the monkey was about to answer incorrectly. And since this incremental timing signal occurs during a delay period where the monkey is keeping the object-place combination in its mind, this could show that THESE TIME-CONTEXT NEURONS MIGHT STRENGTHEN CORRECT MEMORIES BY INCREASING THEIR INCREMENTAL SIGNAL.

For someone outside of neuroscience (or even someone insi…in neuroscience), this might not be so surprising (“Yeah so neurons keep track of things: duh.”). But what these results imply is how complicated the neural code might be. Let me give you an example.

There are multiple studies that use fMRI with humans while they do things like learn new objects in new places (here's a new one). They'll look at, say, 1000 voxels (3-dimensional pixels, each usually about a cubic millimeter with thousands of neurons inside) in the person's brain and find that maybe 400 go up and 400 go down in activity during the period the object is shown (usually summing activity for as many seconds as the image is shown). Then when the person recalls the same object, maybe 425 voxels will go up and 375 will go down. And using some advanced stats methods they can "predict" which object the person is remembering by showing that a similar group of voxels goes up or down (this is called training a classifier). People do this kind of thing all the time for individual neurons instead of voxels: measure dozens or hundreds of them, ask if they go up or down, and run some linear regression to correlate how the neuronal network changes during a given task.

The problem is: they're assuming that all neurons do is go "up" or "down" during a single timeframe. While the neurons I show go up or down, if these studies used the neurons I found they'd be throwing away information, since my neurons actually change over time (instead of just peaking and dissipating quickly like we often show in other neurons):

Each color represents a different combination of objects+places. The neuron is tuned to the red object-place combination in this case as the firing rate (FR) increases during the delay period.

Therefore, NEURONS CAN CODE FOR SIGNALS IN TWO DIMENSIONS. Studies that only look at firing rates in a given timeframe are working in just one dimension: firing activity. But if you look at how this neuronal activity changes over time, that represents a second dimension of signaling. This doesn't mean that the studies using only one dimension don't work: it's very possible that with enough signal from many voxels/neurons you can accomplish perfectly good decoding for whatever your goal is (like interfacing with a machine). But it's also possible that until we understand the real code we'll always be playing a game of "you're getting warmer" with each and every neuron without ever getting the right answer, because we're not directly measuring how the brain actually works. This is the goal of "basic" (I realize it's a bad term) research: to understand the fundamental principles of the brain so that in the future we know enough to improve human lives as technology develops.
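Here's a toy demonstration of that second dimension (made-up data and a plain logistic-regression classifier, nothing like the actual analyses in those fMRI papers): two conditions with essentially identical average rates but opposite ramps are invisible to a decoder that only sees one number per trial, and trivial for one that sees the time course:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

n_trials, n_bins = 200, 20
t = np.linspace(0, 1, n_bins)

# Condition A: firing rate ramps UP across the delay; condition B ramps DOWN.
# Both have the same mean rate, so a "one-number-per-trial" decoder sees nothing.
rates_a = rng.poisson(lam=5 + 4 * t, size=(n_trials, n_bins))
rates_b = rng.poisson(lam=9 - 4 * t, size=(n_trials, n_bins))

X_time = np.vstack([rates_a, rates_b])          # keep the full time course
X_mean = X_time.mean(axis=1, keepdims=True)     # collapse to one number per trial
y = np.array([0] * n_trials + [1] * n_trials)

clf = LogisticRegression(max_iter=1000)
acc_mean = cross_val_score(clf, X_mean, y, cv=5).mean()
acc_time = cross_val_score(clf, X_time, y, cv=5).mean()

print(f"decoding from mean rate only:   {acc_mean:.2f}  (chance = 0.50)")
print(f"decoding from time-binned rate: {acc_time:.2f}")
```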

One great example of basic research paying off is recent work in which a tetraplegic woman fed herself for the first time using her thoughts. I can't find the link at the moment, but the authors noted that there were about a dozen essential animal studies (starting in the 60s) that led up to this achievement, each one building off the last until we understood enough about the system to tap into how the brain controls movement. Of course, we didn't know those dozen were the seminal ones until hundreds were done. So, in conclusion, the hope is that work like mine will help inform future neuroscientists/neuroengineers about key principles of how our brain works. But at the moment it's just an intriguing way that the brain seems to encode information.

New neurons are born in your brain every day, which we know because of…nuclear bomb testing?

Long before I started in neuroscience I'd heard that you are born with all the neurons you'll ever have. I always found this a bit disconcerting, so I was pleased when I first heard of studies showing that mammals are able to produce new neurons in adulthood. Surprisingly, the first of these studies was all the way back in 1967, when Joseph Altman showed evidence of new adult neurons born in the hippocampus of guinea pigs (how cliché). The topic wasn't picked up (or believed, I guess) by the neuroscience mainstream until the 1980s-90s though, maybe because these methods couldn't be used (for ethical reasons) to show adult neurogenesis in humans. Another reason is that there wasn't unequivocal proof that these neurons were new and not just repaired. I'll save you most of the biochemistry, but the methods involved injecting either radioactive or brominated thymidine (the T in the A T C G of DNA) into the brain and then measuring how much of these special Ts were later incorporated into neuronal DNA. The problem with these methods is that your DNA is always breaking, and instead of throwing away every cell with a DNA mutation, your body has repair mechanisms in place to fix most of these mutations. Meaning the neurons could have just been refurbished (which everyone knows doesn't sound as good as new) with the injected special Ts.

Now, this is where the eye-catching title, and some really cool science, comes in. From 1945 to 1998 there were 2,053 nuclear bomb tests worldwide. Here's an eerie video made by Japanese artist Isao Hashimoto showing all of them in a time-lapse map (make sure the sound is on). A by-product of nuclear explosions is carbon-14, a carbon atom with 8 neutrons instead of the usual 6. Even though carbon-14 also occurs naturally via cosmic rays acting on nitrogen in the atmosphere, levels spiked sharply during the peak of nuclear bomb tests in the 60s (even though it's still only 1 part per trillion of the atmosphere's carbon). See the black line in this graph to visualize how carbon-14 amounts changed over time:

[Graph: atmospheric carbon-14 over time (black line), with measurements from individual donors' brains overlaid]
(from Cell http://www.cell.com/abstract/S0092-8674%2814%2900137-8)

Now, based on my intro, you might guess where this is going: can we measure how carbon-14 incorporates itself into our DNA? Indeed, since carbon-14 is mixed into all the carbon we ingest throughout our lives, if new neurons were being born, we should be able to compare how much carbon-14 is in our DNA with the carbon-14 in the atmosphere. Jonas Frisén’s lab at the Karolinska Institute in Sweden has pursued this work over the last decade, using mass spectrometry to measure carbon-14 levels in DNA amounts as small as 30 micrograms (about 15 million neurons worth).

The studies go like this: donated, post-mortem human brains are measured for the amount of carbon-14 in the neuronal DNA of different brain regions. The authors can then graph how much carbon-14 they found versus the year that person was born. For example, look at the blue dot in the graph above, which is the carbon-14 from an example person's hippocampus. This person was born in 1928, and shows MORE carbon-14 in their neurons than was in the atmosphere in 1928. If this person was born with all their neurons, they should have the same amount of carbon-14 in their neurons as was in the atmosphere at that time. Instead, the excess carbon-14 must* have come from new neurons that incorporated the elevated atmospheric carbon-14 later in their lifetime (again, the black line). Now, look at the pink square representing carbon-14 in a person's cortex. This person was born in 1968, and their cortex has the same carbon-14 as the atmosphere did at their birth. Therefore this brain region DOES seem to keep the same neurons it was born with (their first, 2005 paper showed in this way that adult neurogenesis does NOT happen in the human cortex). Just to make sure you get what's going on, look at the red dot, also from a hippocampus. This person, born in 1976, has LESS carbon-14 than was in the atmosphere when they were born. Therefore, some of these hippocampal neurons must have been created after they were born, once carbon-14 levels had dropped below the atmospheric level of their birth year.
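To boil the logic down, here's a toy version of the comparison in Python. The atmospheric carbon-14 values are placeholders I made up (the real curve is in the Cell paper); the point is just the comparison between DNA carbon-14 and the atmosphere in the birth year:

```python
# Toy version of the carbon-14 birth-dating logic. These atmospheric values are
# invented placeholders for illustration; the real curve is in the Cell paper.
atmospheric_c14 = {1928: 1.00, 1963: 1.90, 1968: 1.75, 1976: 1.55}

def check_neurogenesis(birth_year, dna_c14, tolerance=0.02):
    """Compare carbon-14 in neuronal DNA to the atmosphere in the birth year."""
    at_birth = atmospheric_c14[birth_year]
    if abs(dna_c14 - at_birth) <= tolerance:
        return f"born {birth_year}: DNA matches birth-year atmosphere -> no sign of new neurons"
    direction = "above" if dna_c14 > at_birth else "below"
    return f"born {birth_year}: DNA carbon-14 is {direction} the birth-year level -> neurons made after birth"

print(check_neurogenesis(1928, dna_c14=1.20))  # hippocampus example: excess carbon-14
print(check_neurogenesis(1968, dna_c14=1.75))  # cortex example: matches birth-year atmosphere
print(check_neurogenesis(1976, dna_c14=1.35))  # hippocampus example: below birth-year level
```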

Now you can understand the main graph of their most recent (2014) paper:

[Main graph: carbon-14 in neuronal DNA versus each donor's birth year, plotted against the atmospheric curve]

(from Cell http://www.cell.com/abstract/S0092-8674%2814%2900137-8)

Again, each dot is one person. And again, people born before the extensive rise in nuclear testing in the mid-1950s show increased carbon-14, indicating that some of their neurons were new. And again, people born after the peak of nuclear testing in the mid-1960s show less carbon-14 than was present in the atmosphere at their birth, indicating that they had new neurons born after birth.

What’s extra fascinating about this most recent result from 2014 is where they found these new adult neurons: in the human striatum. No one had ever found newly born adult striatal neurons before (in any animal)!! In most mammals (largely rodents), new adult neurons had been found in the hippocampus and olfactory (smell) bulb. Frisén’s lab confirmed human hippocampal adult neurogenesis in 2013, and had surprisingly shown that humans—unlike their mammalian ancestors—had lost the ability to make new olfactory neurons. In fact what seems to have happened is that at some point in our recent evolutionary past, as our primate ancestors lost the smelling capabilities of their predecessor mammals, the neuroblasts (kind of like baby neurons) born in the lateral ventricle migrated into the striatum instead of the olfactory bulb. This means one unique feature of the primate (or maybe even just human) brain is new neurons in this region!

Now what does this mean, exactly? What's a striatum (pronounced str-eye-ate-um)? Well, it turns out that this is a rather complex region with a lot of functions. Historically the striatum has been associated with motor control, but it has more recently been shown to be important for a number of cognitive functions such as motivation, reward salience and working memory. In fact, a subregion of the striatum called the nucleus accumbens is considered to be the seat of substance dependency, including all addictive drugs like alcohol and cocaine, as well as a key region in reward for food and sex. What these new neurons do is at this point only speculative, although this paper gives us a few leads. First, as shown in previous work, these new adult striatal neurons were shown to be interneurons (I'll trust you to read these graphs by now):

[Graph: carbon-14 dating by striatal cell type, with evidence of turnover in interneurons but not medium spiny neurons]

(from Cell http://www.cell.com/abstract/S0092-8674%2814%2900137-8)

About 75% of striatal neurons are medium spiny neurons, but it's the other 25% that show evidence of new adult neurons: the interneurons. Interneurons are typically inhibitory, meaning they tend to play a regulatory role, tempering excitatory motor/sensory neurons. And in fact, the authors looked at the brains of a few people who had Huntington's disease and found a relative decrease in these new (tempering) interneurons in their striatums (striata?). This makes some intuitive sense, as the disease is first characterized by a loss of movement control before it progresses into higher cognitive decline. Therefore these results provide an interesting new avenue for therapies in neurodegenerative disorders like this and Parkinson's, both of which are largely rooted in/near the striatum, wherein rescuing adult neurogenesis has the potential to reverse symptoms (as has actually been shown in mouse models).

One other novel finding from these adult neurogenesis papers you might find interesting is that they can model the turnover rate of new neurons across the lifespan by comparing carbon-14 levels in the brains of people who died at various ages. And, with numbers similar to their previous work in the hippocampus, they show a notable decline in the turnover rate of striatal neurons as a person ages:

[Graph: modeled annual turnover rate of striatal neurons as a function of age]

(from Cell http://www.cell.com/abstract/S0092-8674%2814%2900137-8)

At the extremes, about 10% of a 20-year-old's striatal neurons are turned over each year, while <1% of an 80-year-old's striatal neurons are replaced with new ones. As shown in the 3rd graph above, only some types of striatal neurons are replaced by new neurons (the interneurons), and the authors estimate that within this "renewing fraction" about 2.7% of neurons are turned over per year. I don't know about you, but I feel younger already! Thanks, brain.
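As a rough sanity check on what that 2.7% figure implies, here's a little arithmetic sketch. It assumes a constant turnover rate and takes the renewing interneurons as ~25% of striatal neurons, which is a simplification since the paper shows turnover actually slowing with age:

```python
# Rough turnover arithmetic. Assumes a constant 2.7%/year exchange rate within the
# "renewing fraction" (the interneurons, ~25% of striatal neurons). This is a
# simplification, since the paper shows turnover actually slowing with age.
annual_turnover = 0.027
renewing_fraction = 0.25

for years in (10, 20, 40, 60):
    never_replaced = (1 - annual_turnover) ** years   # chance a renewing neuron is still original
    exchanged_overall = renewing_fraction * (1 - never_replaced)
    print(f"after {years:2d} years: ~{exchanged_overall:.1%} of all striatal neurons "
          f"have been exchanged at least once")
```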

In the end, these papers that unequivocally prove and expand upon our knowledge of human adult neurogenesis might be the singular good thing to come from nuclear bomb testing (at least until we have to blow up an earth-targeted asteroid). And since adult neurogenesis has become a BIG topic with tons of implications in things such as exercise, antidepressants and new learning/memory, hopefully the future improvements in our lives will provide some solace in the fact we’ve nuked our own world over 2000 times.

 

References:

Human cortex has no adult neurogenesis (2005): http://www.cell.com/cell/abstract/S0092-8674%2805%2900408-3

Humans don’t produce new adult olfactory neurons (unlike rodents) (2012): http://www.cell.com/neuron/abstract/S0896-6273%2812%2900341-8

Dynamics of hippocampal neurogenesis in humans (2013): http://www.cell.com/cell/abstract/S0092-8674%2813%2900533-3

The first proof of striatal neurogenesis (2014): http://www.cell.com/abstract/S0092-8674%2814%2900137-8

 

*Clever readers might have picked up on a problem here: what about those pesky DNA repair mechanisms? Couldn't the carbon-14 have been integrated into old neurons by those? The authors rebut this possibility in the methods of the most recent paper (4th reference) by noting that:

A.) only some of the neurons have altered levels of carbon-14 (while all neurons should have had some DNA repair).

B.) they've looked for carbon-14 changes in many other brain regions over the past decade and found none, as you would expect if DNA repair mechanisms account for very little carbon-14 integration. This includes cortical neurons after a stroke, when massive DNA damage would have taken place.

C.) they used other biochemical markers to show evidence of neuroblasts (baby neurons) as well as a lack of pigmentation that increases with aged cells (lipofuscin) in these newly born striatal cells.

Worried about waiting to have children?

Edit: It’s been pointed out that my post is largely from the perspective of the father/child. Some relevant reading for mothers weighing this question includes decreasing chances of becoming pregnant as you age (http://pensees.pascallisch.net/?p=1010) and increasing chances of fetal loss as mothers age (http://www.bmj.com/content/320/7251/1708 …I’ve attached the relevant (rather dismaying) graph at the bottom of this post). For the sake of your kid’s mental health, however, read on…

Edit 2: A reader pointed out that mental retardation, which I didn’t touch on, HAS been linked with maternal age (even though schizophrenia/autism have not). I added another relevant graph at the end of this post. Recurring theme: risks increase for mothers older than 35.

I'm a big fan of science that has an effect on our everyday lives. Of course, I'm a big fan of science in general, but in this case what I'm specifically referring to are studies I can tell friends (or enemies) about and change their lives for the better (or worse). This is largely why I personally decided to switch fields from molecular biophysics to cognitive neuroscience—I wanted to study things with more everyday, macroscopic applicability.

I came across this short paper on twitter and think it has some implications that will be interesting to pretty much anyone. What's more, I hope it alleviates some worry for my NYC/city-living/northern brethren out there, who tend to skew older in their decision of when to have children. It's called "The importance of father's age to schizophrenia risk" and was published last May in Molecular Psychiatry. It analyzes a dataset I had not heard of before—even though it appears to be a big deal in genetics research—called the Danish nationwide registers. This database of all Danish people born from 1955-1996, which is apparently linked to their medical information, provides an incredible sample size for looking at diseases across populations.

Before I get into the details of what this study shows, a little background on what we know about older parents. I get the impression the status quo is that having children as an older mom is risky for chances of cognitive disease in the progeny (whether it be schizophrenia, autism, ADHD, etc.—such higher-level cognitive diseases are not easy to define and MANY seem to be related). However, this never made much sense, since these kinds of diseases are thought to come from genetic mutations, and females are born with their lifetime supply of eggs so there’s no reason to think that the female germ line will be degraded as the mother ages. Therefore blame has come to rest on the (older) fathers, who produce new sperm every 15-16 days (I’ve read these numbers in at least 3 places although can’t find the original source. For argument’s sake we can at least assume men regenerate their sperm throughout life). This results in about two point mutations per year in the germline of men. And since there’s strong correlative evidence in older fathers having a greater chance of schizophrenic or autistic children, it seems like a reasonable conclusion that these mutations accumulated throughout a dad’s life progressively make his odds worse for having a normal-functioning child the older he gets.
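To put that mutation rate in perspective, a quick back-of-the-envelope count (using the ~2 mutations per year figure quoted above, and assuming, purely for illustration, a 25-year-old dad as the baseline):

```python
# Back-of-the-envelope: extra de novo mutations passed on by older dads, using the
# ~2 point mutations per year figure quoted above and an assumed 25-year-old baseline.
mutations_per_year = 2
baseline_age = 25

for dad_age in (25, 35, 45, 55):
    extra = mutations_per_year * (dad_age - baseline_age)
    print(f"dad aged {dad_age}: ~{extra} more de novo mutations than a {baseline_age}-year-old dad")
```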

Wait wait don’t tab over to match.com and settle just yet!! We still have the results of this aforementioned Danish study. They took all second or later born children from the 40 year period I mentioned and determined if they developed schizophrenia from 1970-2001 (at least 15 years after birth). 8,865/1,334,883 (0.66%) of this group were diagnosed as such. Fortunately there’s only one figure in this paper, so I’ll just go ahead and show it. First, as shown in blue in Figure a below, even taking only these second or later born children, they reproduce the previously found association between schizophrenia and increasing paternal age at the birth of this child (proband means the child with schizophrenia, in this case). Older father==higher incidence of schizophrenia==confirmed.

[Figure: incidence rate ratios for schizophrenia by paternal age at the proband's birth (a) and by paternal age at first child (b)]

(from Molecular Psychiatry)

Now, you'll notice in green they also graphed the father's age when he had his first child. This is a little confusing, so I'll try to spell it out (in the present tense, so it sounds FRESH). Take the 8,865 schizophrenic children born second or later. Now, look at how old their dad was when he had his first child with the same mother. What the authors find is that there's a greater incidence of schizophrenia (in later children) when dads had their first child at an older age.

The second graph (Figure b) makes this clearer. Again, this is a graph showing the incidence of schizophrenia in later kids, but this time split by the age of the dad when he had his firstborn (different colors) against the age when he had the later kid that has schizophrenia (Paternal age on the bottom axis). To explain this intuitively, let's focus on the blue line. If a dad has his first child before age 24, no matter what children he has in the future there is no indication of increased schizophrenia risk (the line stays near the population-average value of 1). Now, the black line: if a dad has his first child after 50 years old, his later children are about 2.5x more likely to have schizophrenia. For the orange line, having a first child after 40 raised the chance of a later child having schizophrenia by ~1.5x (note: it's likely there was an enhancement of firstborn children with schizophrenia in these older dads too, but this has already been shown and is not what they're interested in proving here).
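To translate those rate ratios into rough absolute numbers, here's my own back-of-the-envelope (applying the ratios to the 0.66% cohort-wide rate quoted above, which is a simplification, so treat the outputs as ballpark figures rather than the paper's estimates):

```python
# Back-of-the-envelope absolute risks implied by the incidence rate ratios in Figure b.
# The 0.66% baseline is the cohort-wide rate quoted above; applying the ratios to it
# directly is a rough approximation, not the paper's own calculation.
baseline_rate = 0.0066   # 8,865 / 1,334,883 second-or-later-born children

rate_ratios = {
    "first child before 24": 1.0,
    "first child after 40":  1.5,
    "first child after 50":  2.5,
}

for group, irr in rate_ratios.items():
    print(f"{group:22s}: ~{baseline_rate * irr:.2%} chance of schizophrenia in later children")
```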

You probably understand the data by now, but what does this mean? As explained above, if dads were having schizophrenic children because they were gaining mutations over time, you'd expect each successive kid of a given dad to have a higher chance of schizophrenia. The blue line in Figure b disputes this. Instead, when a dad doesn't have any children until he is quite old, the chances of schizophrenia increase. This implies that there's something inherently wrong with fathers who are unable to have children until later in life. This could be for a lot of reasons, but I'm willing to guess it doesn't have to do with successful, career-driven people making conscious decisions not to start a family. What seems more plausible is that these older fathers are people with a harder time fitting in socially and finding a partner willing to have their child (or maybe it's just that someone who suddenly decides to have his first child at 50 has a few screws loose). While I'm out on this interpretation limb, the factors making it difficult for such people to find a mate might well be a mild or precursor form of a cognitive disease like schizophrenia, the genetics of which would lead to a better-than-average chance of more obvious, full-blown symptoms expressing themselves in progeny. For me this fits into the theoretical framework of the genetics of emergent phenotypes, wherein multiple assaults on the genetic system can lead to progressively worse pathways that result in cognitive disorder (here's a great read for an explanation of the genetics of emergent phenotypes).

These results are great for us never-want-to-settle-down types, right? I'd like to think I'm choosing not to settle down while I build my career, and that as I delay potential progeny, my normal fecundity isn't dwindling away. I mean, two random mutations a year can't be that bad, right? It also means I don't have to worry about freezing my sperm, as was suggested in Nature. Although not being able to beat my child in basketball when he's 10 (and I'm 60) would hit me hard for more egotistical reasons, so I'm not exactly suggesting you wait too long.

For those not yet satisfied, a few more thoughts:

If you’re the questioning sort like me you might be wondering if there could be another biological force at play influencing these results. I agree that ‘age of father when he has his first kid’ isn’t the most satisfying, quantitative result. This study did account for the following factors though: “Incidence rate ratios were adjusted for age, sex, calendar time, birth year, maternal age, maternal age at her first child’s birth, degree of urbanization at the place of birth and history of mental illness in a parent or sibling.” Further, a very recent study (2014) out of Johns Hopkins I just came across while researching this topic came to a similar conclusion: that de novo mutations from the father are likely not the cause of increased schizophrenia risk in older fathers. Further confirmation that waiting to have kids doesn’t look like such a bad idea.

These controlled conditions aside, I came up with one alternate biological theory that I (just now, after writing all of this) realized is actually supported by this 2014 study. One of the pillars of the biological basis for homosexuality is the "birth order effect," wherein each additional male child from one mother is about 33% more likely to be gay. The leading hypothesis for this effect is called the "maternal immunization hypothesis." In short, the mother is thought to respond to proteins from the in utero male child, which are novel to her since they're encoded on the Y chromosome, and her body begins an immune response by forming antibodies against these invaders. Theoretically such antibodies could make it back into the developing male brain and alter the chemistry of maleness in the developing fetus.
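To get a feel for what a 33%-per-older-brother effect means in absolute terms, here's a quick compounding calculation (the ~3% base rate is my own assumption for illustration, not a number from these studies):

```python
# Compounding the fraternal birth order effect. The ~3% base rate is an assumption
# for illustration; only the 33%-per-older-brother multiplier comes from the post.
base_rate = 0.03

for older_brothers in range(5):
    rate = base_rate * (1.33 ** older_brothers)
    print(f"{older_brothers} older brothers: ~{rate:.1%}")
```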

Now you can imagine a similar process happening for any child in the womb. No matter what, when there are foreign genes present (in fact the placenta—which fights with the mother's body for resources—is largely driven by paternally expressed genes), the mother's body could mount some kind of immune response. And with each successive child, the response could become stronger in the mother and affect the developing brain of her later-born children. And indeed, this 2014 study finds that birth order matters more on the mom's side than the dad's (which makes some sense, since sperm don't know about birth order, but the mother's body could certainly be keeping track). To quote the paper: "The strong maternal age association might suggest that parity [birth order] adversely influences intrauterine development, and obstetrical complications (OCs) are also a risk factor for schizophrenia." I left that last part in to mention one more thing associated with schizophrenia: problems in the physical birth of the child. In this case schizophrenic children are more likely to be firstborns, although this probably isn't saying very much, since anyone with such complications is less likely to have more children (which would theoretically have been at higher risk for birth complications and therefore schizophrenia as well).

The fact is, assaults on the brain at any point in development could lead it down the wrong path and support the emergence of the cognitive disorders I mentioned earlier. The 2014 paper even points out that schizophrenia could be fostered by bad parenting after birth (again, with older parents implicated; those effects are real), truly broadening the developmental window for diseases like schizophrenia. So, as with most things as complicated as cognitive disorders, there probably isn’t one cause, and there are probably a lot of ways to get there. Hopefully new research will map these paths more precisely so we can make more informed decisions about how to live our lives and raise our families. I, for one, after reading this work, am not so concerned about popping out children any time soon…which is probably not the right way to put it since I’m a guy.

Edit 1:

[Figure: Maternal age and fetal loss: population based register linkage study (from BMJ)]

This is again from the Danish registry data, covering 634,272 women and 1,221,546 pregnancy outcomes. The wintergreen line shows that mothers aren’t at much increased risk of fetal loss until about age 35. I’m dismayed by the amount of fetal loss in general! The authors point out that older mothers might be more likely to be admitted to the hospital (and so have their losses recorded), so hopefully these numbers are skewed a little high. It’s also pointed out in the comments that many Danish women smoke, and I don’t think the authors corrected for that. More recent numbers for fetal loss in America don’t look nearly this bad: http://www.cdc.gov/nchs/data/databriefs/db169.htm

Edit 2:

[Figure: Maternal age-specific live birth prevalence of Down's syndrome in the United States (from Wikipedia)]

Sleep while you’re awake. Really!

When I was a young lad, I remember being tired in the middle of a long day. My dad told me to go lie down for a few minutes before we had to leave, and I told him I wouldn’t be able to nap in such a short period. He countered that just vegging out for a bit without falling asleep would be almost as good as a nap. Well, it turns out that while I’ve come to find many of my dad’s tales weren’t quite correct, he was actually onto something with his awake-resting idea.

Giulio Tononi’s group at the U. of Wisconsin published a fascinating paper back in 2011 entitled ‘Local Sleep in Awake Rats’. The title pretty much says it all: sleep-deprived rats had cortical neurons showing signatures of sleep while the rats were still doing awake rat things. In particular, neurons in local regions stopped firing while slow waves, which are characteristic of the sleep state, increased. See the figure below for a visual explanation.


“Off” period in an awake rat. Notice the slow-wave (boxed) in the LFP while multiple neurons in the MUA (multi-unit activity) quiet their firing. Source.
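To give a flavor of how an ‘off’ period like the one in the figure might be flagged programmatically, here’s a toy sketch. It is emphatically not the analysis pipeline from the paper (which also leaned on the LFP slow wave); it just bins pooled multi-unit spike times and marks stretches of near-silence long enough to look sleep-like. All of the thresholds below are my own assumptions.

```python
# Toy "off"-period detector: not the paper's method, just the idea.
# Bin pooled multi-unit spike times and flag stretches of near-silence
# that last long enough to look like a cortical off period.
import numpy as np

def find_off_periods(spike_times_s, bin_ms=10, min_off_ms=80, max_spikes_per_bin=0):
    """Return (start_s, end_s) windows where pooled firing nearly stops."""
    bin_s = bin_ms / 1000.0
    edges = np.arange(0.0, spike_times_s.max() + bin_s, bin_s)
    counts, _ = np.histogram(spike_times_s, bins=edges)
    silent = counts <= max_spikes_per_bin

    off_periods, start = [], None
    for i, is_silent in enumerate(silent):
        if is_silent and start is None:
            start = i
        elif not is_silent and start is not None:
            if (i - start) * bin_ms >= min_off_ms:
                off_periods.append((edges[start], edges[i]))
            start = None
    # close out a silent run that reaches the end of the recording
    if start is not None and (len(silent) - start) * bin_ms >= min_off_ms:
        off_periods.append((edges[start], edges[-1]))
    return off_periods

# Fake pooled spike train: steady ~200 Hz firing with a ~150 ms gap near t = 1 s
rng = np.random.default_rng(1)
spikes = np.sort(rng.uniform(0, 2, 400))
spikes = spikes[(spikes < 0.95) | (spikes > 1.10)]
print(find_off_periods(spikes))  # should report roughly (0.95, 1.10)
```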

This actually wasn’t a huge surprise, since we’ve known for some time that all kinds of animals sleep half a brain at a time. In fact, from an evolutionary perspective, I’m more surprised we’ve survived so long while spending 1/3 of our lives in an unconscious, assailable state.

But of course this study was only in rats, which aren’t always the best models for primates like us. In this spirit, Valentin Dragoi’s group at the University of Texas Medical School presented results at the 2013 Society for Neuroscience conference titled ‘Occurrence of local sleep in awake humans using electrocorticography’ (don’t you just love these simple, straightforward titles?). They showed that sleep-deprived humans in a ‘drowsy’ state exhibit the same signatures, with local parts of the brain going to sleep just as in Tononi’s rat study! Specifically, the longer people were awake, the more slow delta waves increased while faster, more cognitively active theta waves decreased.
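To make the delta-versus-theta comparison concrete, here’s a rough sketch of how band power is typically pulled out of a recording. This is a generic Welch-PSD approach on a synthetic signal, not the pipeline from that SfN abstract, and the band edges are just the conventional ones.

```python
# Generic band-power computation from a 1-D signal using Welch's PSD.
# The trace is synthetic; band definitions (delta ~1-4 Hz, theta ~4-8 Hz)
# are conventional and may not match the study's exact choices.
import numpy as np
from scipy.signal import welch

fs = 500                      # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)  # 30 s of data
rng = np.random.default_rng(2)

# Synthetic "drowsy" trace: strong 2 Hz (delta) + weaker 6 Hz (theta) + noise
sig = (2.0 * np.sin(2 * np.pi * 2 * t)
       + 0.5 * np.sin(2 * np.pi * 6 * t)
       + rng.normal(0, 1, t.size))

freqs, psd = welch(sig, fs=fs, nperseg=4 * fs)  # 4 s windows -> 0.25 Hz resolution

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

delta = band_power(freqs, psd, 1, 4)
theta = band_power(freqs, psd, 4, 8)
print(f"delta: {delta:.2f}, theta: {theta:.2f}, delta/theta: {delta / theta:.1f}")
```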

Tying this all together: ‘sleep’ isn’t a binary state. When you’re ‘half asleep’ and performing more poorly as a result, it’s literally because parts of your brain are catching some Zs. This supports something conventional wisdom already knows: you should be making decisions after a good night’s rest. Further, getting sufficient sleep is key to your daily productivity. Finally, I don’t think it’s too much of a stretch to say that, as my dad once suggested, you don’t have to actually fall asleep to rest your brain; chilling out for a bit will still have some restorative power.

One question these results raise: what is so important about sleep that virtually all animals do it? In my earlier post about why we dream, I talked about how scientists were beginning to theorize that sleep is important for prophylactic cellular maintenance. Neurons are just like little machines, transmitting electricity and firing robustly enough to keep your mind and body going. And like any electricity-run machine, step one of maintenance is to turn it off before playing around with it; you don’t change the car battery while the engine’s running (and you also don’t want to get electrocuted). Stretched metaphors aside, since my last post on dreams there’s been a major paper supporting this prophylaxis theory. Maiken Nedergaard’s lab at the U. of Rochester Medical Center published a paper in October 2013 titled ‘Sleep Drives Metabolite Clearance from the Adult Brain’. In particular, they found that the brain clears “neurotoxic waste” like β-amyloid much more efficiently during sleep. You might have heard of this particular toxin before, as it is heavily linked to Alzheimer’s (although it does have “significant non-pathological activity”, so don’t hate the player, hate the game).
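As a toy illustration of the clearance idea (my own invented numbers, nothing measured in the Nedergaard paper): if waste were removed by simple first-order kinetics and the clearance rate constant were higher during sleep, the difference after a night would look like this.

```python
# Toy first-order clearance model. The rate constants are invented for
# illustration; the actual paper measured tracer clearance, not these numbers.
import numpy as np

def remaining_fraction(k_per_hr, hours):
    """Fraction of a metabolite left after `hours` of clearance at rate k (1/hr)."""
    return np.exp(-k_per_hr * hours)

k_wake, k_sleep = 0.05, 0.15   # hypothetical clearance rate constants (1/hr)
for label, k in [("awake", k_wake), ("asleep", k_sleep)]:
    print(f"8 hours {label}: {remaining_fraction(k, 8):.0%} of the waste remains")
```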

These are the kinds of findings I love about science: research that can inform how we live our daily lives, or at the least point us in the direction of living a ‘better’ life. This is largely what pushed me into science, and what further compelled me to move from molecular biophysics research to studying a higher-level topic like the neuroscience of memory.