Dear Unsettled Science Subscribers,
As I mentioned in my last post, I’ll be moving my writing over to my new Substack, Uncertainty Principles, while Nina will continue writing here for Unsettled Science.
For those who want to continue reading my thoughts and observations, I’ve sprinkled sign-up buttons throughout this post. They look like this:
My next few posts are likely to be on both sites—cross-posted in the Substack lingo—and then exclusively on U.P.
I look forward to seeing you all on the new site.
Now, on to the commentary.
What’s The Problem with Nutrition Education in Medical School: Quantity or Quality?
There is a modern military cliché, "Never reinforce failure," which means broadly that to thrust reinforcements in among soldiers who have failed in an attack, feel themselves beaten and are trying to run away is merely to waste the newcomers' energies in a struggle against the thrust of the crowd and to risk infecting them with its despair.
John Keegan, The Face of Battle, 1976
Anxiety about reinforcing failure should be a common theme when writing critically about nutrition science. Not because we’re worried about infecting a new generation of nutritionists with despair, but because the failure is so blindingly obvious, as essentially everyone agrees: the ongoing epidemics of obesity and diabetes and their related chronic diseases.
Not obvious is the reason for the failure, which means neither is the solution.
Both physicians and nutritionists would like to believe that the problem lies, at least in part, with the failure to properly disseminate the message of healthy eating to the patients who suffer from diet-related diseases. This lays the blame on the paucity of nutrition education that physicians receive in their training. How can they pass on sage nutrition advice to their patients, if they’re never formally educated on what that is?
But here’s another possibility: what if the failure is one of quality, not quantity—of what’s being taught, not how much?
Can we know, in effect, whether or not we’re merely reinforcing failure, if we haven’t done the hard work necessary to understand, beyond reasonable doubt, why we failed in the first place?
I wrote about this issue last April for Unsettled Science when the New England Journal of Medicine announced a new series of review articles on “Nutrition in Medicine.” The series was motivated, as the NEJM editors explained, by the ever-growing number of diet-related diseases and deaths: “Worldwide, 11 million deaths per year are attributed to suboptimal diets,” including those related to obesity, heart disease, stroke, type 2 diabetes and cancers. A series of reviews recapitulating what nutritionists had come to believe about diet-related diseases and therapies, so the editors reasoned, would help solve this public health crisis, empowering physicians on the frontlines with the necessary confidence and expertise to effectively and safely treat their patients. I was not optimistic.
Last month, the reinforcement-of-failure scenario appeared again in a “consensus statement” in the journal JAMA Network Open. Now the specific problem to be solved, according to the JAMA article, is “the discrepancy between nutrition-related medical education and national health outcomes,” i.e., the ongoing epidemics. The consensus statement is about solving that discrepancy at its source—the medical school curricula and the physician training programs of internships and residencies. Whether that will improve “national health outcomes,” of course, is the salient question. Once again, I’m less than optimistic.
The consensus statement is the latest step in a process that began in May 2022 with a resolution passed by the U.S. House of Representatives “calling for meaningful nutrition education in medical schools, residency, and fellowship programs.”
Concerned about the increasing prevalence of diet-related disease and Medicare costs, totaling $800 billion in 2019, and conscious that federal funds are the single largest source of graduate medical education (GME) funding, the resolution urged medical training programs to meaningfully increase nutrition education to “ensure competency in nutrition for physicians and other health professionals.”
The 2022 resolution led to a “nutrition summit,” hosted by the Accreditation Council for Graduate Medical Education, and then to the formation of a multidisciplinary “cross-continental” expert panel composed of 22 “subject matter experts who teach nutrition and/or have authored curricula relating to culinary medicine and nutrition education involving teaching kitchens” plus another 15 medical residency directors. This was the panel that authored the consensus statement and came to agree on 36 “nutrition competencies” that should be required learning for students in medical school or newly minted MDs in their training programs.
Many of these nutrition competencies are so benign that they brook no argument from me: “Starts a sensitive, nonjudgmental conversation about food and lifestyle in a brief consultation within a primary or secondary setting,” for instance, as all but one of the experts agreed was necessary. Or “Listens carefully, compassionately, and nonjudgmentally while taking a nutrition history.” Or “Demonstrates empathy and sensitivity when counseling patients with obesity and diabetes.”
But a significant number of the proposed competencies are based implicitly or explicitly on the interpretation of the evidence base for nutrition research: what works, what doesn’t. The very first on the list, for instance, is “Provide evidence-based, culturally sensitive nutrition and food recommendations to patients for the prevention and treatment of disease.” All of the assembled experts signed off on this, as would I. All but one expert agreed that medical students and physician trainees should “[Integrate] evidence-based nutrition information from national nutrition guidelines, scientific publications and other sources into patient care.”
Now consider the magnitude of the problem these nutrition competencies are expected to solve—i.e., the nature of the failure—as described in the consensus statement:
Dietary patterns are one of the strongest behavioral influences on disease risk regardless of individual genetics. Seven of the 10 leading causes of death in the US are directly affected by diet. In 2020, 42% of US adults and nearly 20% of US children were classified as having obesity. More than 10% of US adults have diabetes, and 38% have prediabetes. It is estimated that 60% of current US children will develop obesity before the age of 35 years. … The collective effect of all of these statistics comes at a great cost, with the US currently spending $4.3 trillion on health care annually, 90% of which is spent on care for patients with chronic diseases. Diet plays a key role in the pathogenesis of many of these chronic diseases; therefore, food and dietary interventions offer an opportunity for improving population health and reducing morbidity, mortality, and health care costs.
Establishing nutritional competency in the treatment of these diseases—the successful communication from physician to patient of safe and effective “food and dietary interventions”—depends almost entirely on the quality and interpretation of the evidence base, on whether those “national nutrition guidelines” got it right, and perhaps what those “other sources” happen to be.1
Now we’ve opened a can of worms, and the prospect of reinforcing failure looms large.
The root causes of a public health failure of the magnitude of the ongoing obesity and diabetes epidemics may be far more profound than the thinking that led to the consensus statement. A seemingly obvious possibility, as suggested above, is that the failure in this context is not the limited nutrition-related education disseminated to medical students and young doctors, but the fact that the nutrition-related medical education itself is ill-conceived. Maybe physicians are not failing to counsel patients on how to eat healthy for obesity or diabetes; maybe they’re failing to counsel them to eat in a way that actually helps.
I can understand why nutrition authorities would find this proposition hard to accept. After all, they did get sufficient nutrition-related education and that’s what they’ve been passing along to the rest of us. But that does not mean the proposition is wrong.
The question, as I’m always asking, is how would we know?
How about reinforcing uncertainty?
Step one is not doubling down on the existing nutrition-related dogma but acknowledging instead the existence of considerable uncertainty. If physicians are taught that justifiable uncertainty exists, or at least a lack of agreement about what works and why—i.e., about which dietary interventions return their patients to health—they are more likely to pay attention to what their patients do when a treatment does work. They might be less likely to blame diet-related treatment failures on their patients rather than on the nature of the dietary advice they’d been giving. They can learn from their own clinical experience, and then so can we all.
Back in the mid-1960s, the editors of the NEJM published two volumes of a book, Controversy in Internal Medicine, that itself created a bit of controversy. The subject matter was a range of topics about which medical authorities could not agree, from the nature of a healthy diet and the value (or lack thereof) of the epidemiological tools employed to answer that question, to a host of issues about the appropriateness of various drug therapies for different diseases. Each chapter discussed one controversy and included articles on the competing perspectives. If nothing else, the intellectual honesty of this approach communicated to physicians, young and old, the need to keep an open mind about what they were being taught.
In the 1980s, the evidence-based medicine movement was born, launched by physicians who went looking for the evidence to support what they were telling their patients and learned instead that “medical decision making was not built on a bedrock of evidence or formal analysis, but was standing on Jell-O.”2
Now, after the prevalence of obesity and diabetes has reached (long ago already) a level that would have been unimaginable to either the NEJM editors of the 1960s or the EBM founders of the 1980s, we have everyone from Congress to the nutritionists and the journal editors insisting that we know the answers, that a bedrock of evidence exists when it doesn’t, and that if we can just get physicians properly educated, we can make progress in solving this problem.
I’d be a trifle more optimistic if the consensus statement even acknowledged the existence of uncertainty on the nature of a healthy diet. But the expert panel fails us here. Their article does include a box “on possible gaps in the recommended competencies,” but that uncertainty isn’t among even the possible gaps.
In the introduction to the second volume of Controversy in Internal Medicine, Franz Ingelfinger, the legendary NEJM editor-in-chief (for whom the Ingelfinger rule was named), summed up their 1960s-era thinking this way:
…the airing of a controversy has as its purpose not merely the exhibition in public of a to-do. It should serve to uncover areas where evidence is lacking, whether in quantity or quality. It should help to force reexamination of old tenets, to test the validity of shibboleths, and to discourage that specious base for so much medical dogma, “it is generally acknowledged that.”… Let the publication of these consequences of our uncertainties serve the function, not of promoting nihilism or a “plague o’ both your houses” attitude, but rather of fostering a healthy and objective attitude of self-appraisal, leading to a wiser and less dogma-determined interpretation and care of our patients’ difficulties.
That pretty much sums up the first nutrition competency that I would want to see hammered into the minds of medical students and young physicians, whose waiting rooms—and so whose careers—will be overwhelmed by patients suffering from the very failure that confronts us. If we can start there, we might make progress.
Sugar Anxieties and a Very Clever Quasi-Experiment
So the real question for me as an educator is, if I go out and tell people that I think they are eating too much sugar, if I go out and tell mothers I think they should stop their kids from eating so much sugar because it is bad for them, am I going to get flak from the scientists? Or am I going to be allowed to make that statement without travail, on the grounds that even though we do not have hard evidence to link sugar with a specific disease, we do know that a dietary pattern containing considerably less sugar, in which sugar is replaced by a complex carbohydrate, would be a much healthier diet?
Joan Gussow, chairman, Columbia University nutrition department, 19753
The latest news is that Gussow’s “real question” was dead on. Although, regrettably, we’re still in the process of learning exactly how bad sugar might be.
Is sugar a long-term toxin causing disease over the course of decades, or is it just a benign source of excess calories?
An article published early this month in the journal Science suggests the former. The fact that the article appeared in Science and not a nutrition journal suggests that there’s something unique about it to which we should pay attention. The authors were a trio of economists, led by Tadeja Gracner of the University of Southern California and the RAND Corporation. The title gave the game away, albeit without conveying quite how clever this study was or how potentially important the observation: “Exposure to sugar rationing in the first 1000 days of life protected against chronic disease.”
The sugar rationing of the title was that instituted by British authorities during World War 2. When it came to an end, in September 1953, the UK’s per capita sugar consumption doubled over the course of a single year. Gracner and her colleagues looked at the health records of British citizens who were born either before or after the end of rationing. Here’s what they found:
…early-life rationing reduced diabetes and hypertension risk by about 35% and 20%, respectively, and delayed disease onset by 4 and 2 years. Protection was evident with in-utero exposure and increased with postnatal sugar restriction, especially after six months when solid foods likely began. In-utero sugar rationing alone accounted for about one third of the risk reduction.
That sugar consumption might cause diabetes is an idea that’s been around for at least a few hundred years, although the idea that in-utero exposure (the mother’s sugar consumption during pregnancy) could impact diabetes incidence 50 or 60 years later is rarely discussed. It certainly should be. That sugar in the diet could be a major cause of hypertension, or cause a predisposition to becoming hypertensive, is also rarely discussed in nutrition circles. Significant evidence exists to support both propositions (as I discuss in my books Good Calories, Bad Calories and The Case Against Sugar). This Science report may be the first time I’ve seen them discussed in an influential journal and, as a result, reported widely by the media.
Even with the wide dissemination, the more profound implications tended to get lost in the media translation. (Here’s The New York Times write-up, for instance, and here’s The Guardian’s.) The media presented the result in the context of the title of the paper, in line with the prescriptive tendencies of nutrition journalism these days: keeping kids away from sugar in early life will make them healthier later.
If sugar avoidance reduces risk, then…?
But this isn’t the only implication of the research: if sugar avoidance in early life reduces risk of diabetes and hypertension half a century later, then consuming sugar in early life either causes these diseases or predisposes us to them. Now we’re speculating about sugar causing diabetes and hypertension with the same meaning of the word “cause” that we use when we say cigarette smoking causes lung cancer, a process that also takes decades.
The difference between these two perspectives was evident in the USC press release in quotes from Gracner’s two co-authors. Here’s Claire Boone, from McGill University and the University of Chicago, giving the diet- or parenting-advice perspective:
Parents need information about what works, and this study provides some of the first causal evidence that reducing added sugar early in life is a powerful step towards improving children’s health over their lifetimes.
And here’s Paul Gertler of UC Berkeley and the National Bureau of Economic Research4 giving the more dire public-health context:
Sugar early in life is the new tobacco, and we should treat it as such by holding food companies accountable to reformulate baby foods with healthier options and regulate the marketing and tax sugary foods targeted at kids.
Now it’s not just a question of what we advise parents, and particularly expectant mothers, regarding sugar avoidance, but of how we discuss the long-term harms of sugar consumption and whether federal and state governments should regulate sugar sales, tax them, and perhaps go after the marketing of sugary beverages and foods.
From this perspective (assuming the implications are true), doctors can tell patients that if they don’t want to become hypertensive or diabetic, they should cut out sugar; if they don’t want their kids to contract these diseases when they’re in middle age, they should keep sugary beverages and sugar-rich foods and snacks out of the house and explain to the kids why they’re harmful. Treat them like cigarettes, in effect.
This is very different messaging from the “avoid too much sugar” our public health agencies have been preaching since the 1980s. After all, physicians don’t tell their patients to moderate cigarette consumption; they don’t tell them not to smoke “too much.” They strongly advocate that they quit. Saying as much for sugar is an extreme perspective, but maybe it shouldn’t be. The Science article by Gracner and her colleagues is one of the very few I’ve seen that brings this issue to the table.
A brief history of sugar and disease: the nutrition transition effect
The reason we’re only having these discussions now, rather than a few decades ago, is because nutritionists have been averse to considering sugar anything more than empty calories. If sugar causes harm, they’ve preferred to think, it’s because the excess calories make people fat and that’s the worst they’ll say of it.
But back in the 1960s and 1970s, before nutritionists and physicians settled on the dogma we’ve been living with ever since—dietary fat causes heart disease; salt causes hypertension; eating too much and so getting fat causes type 2 diabetes—influential British nutritionists led by John Yudkin were arguing persuasively that sugar was a primary cause of these chronic diseases. The diseases tended to cluster together in patients and in populations and became known as western diseases: obesity, diabetes, hypertension, heart disease, stroke and cancer, most notably. They are still the chronic disorders that are most likely to kill us prematurely in the modern world.
That these western diseases might have a single dietary or lifestyle cause, or perhaps a related few that cluster together in populations, was initially based on a simple observation. These diseases appeared in populations only when they transitioned from eating whatever their traditional diets were to eating like us in the west, when they went through what nutritionists would later come to call (thanks to the University of North Carolina economist Barry Popkin) a nutrition transition.
This concept, too, would later be influenced by the nutritionists’ assumptions about the harms of fat, salt, and overeating, but initially it was based only on the unbiased observations of physicians working with these populations at the time. And what they were observing, first and foremost, were relatively massive increases in sugar consumption that came with the process of westernization and urbanization.
This was true of the U.S. in the second half of the 19th Century, when diabetes went from being an extremely rare disease to one that was relatively common, coincident with the birth of the candy, chocolate and soft drink industries. It was reported in Native American populations and the First Nations People in Canada in the mid-20th Century, although the diabetes epidemics then were far more dramatic, the disease transitioning from very rare to afflicting perhaps one in three or even one in two adults over the course of just a few decades.
It was reported in studies from Israel, Africa, Australia, and the South Pacific, all linking the significant increase in sugar consumption that went with embracing western lifestyles and diets to the transformation of diabetes from a rare to a common or very common disease.5 “A veritable explosion of diabetes is taking place in these people,” as the South African diabetologist George Campbell said of a local population in 1966. Similar observations were made about the emergence of hypertension in these populations, as well.6
The catch, and it’s a hell of a catch, is that this sugar-disease association tends not to appear in modern epidemiological studies. So maybe it’s not real. Maybe sugar does not predispose entire populations to acquiring obesity and these chronic diseases. Or maybe the problem is with the modern epidemiological surveys and what they study, which is not a nutrition transition.
Since the 1970s, nutrition researchers and epidemiologists have been trying to establish diet-disease relationships by surveying populations that long ago went through their nutrition transitions: ours, for instance, in the 21st Century. Now most everyone is eating relatively large to huge amounts of sugar compared to pre-transition populations and, if they’re not, they still did when they were children. Perhaps more importantly, the ones who are not eating sugar-rich diets are avoiding sugar out of choice.
Why does freedom of choice make life so difficult?
This is a (the?) major source of confounding in nutritional epidemiology: these studies detect associations between disease and the kind of people who eat more or less of a particular food or macronutrient, not between the disease and the food itself. Whatever the association observed, it could be a result of the type of people who make these particular dietary choices, not a result of the actual foods consumed or avoided.
In the case of sugar, those who abstain or eat little may already be struggling to maintain a healthy weight. Weighing more, or being the kind of person who puts on weight easily, can be a cause of sugar avoidance. Hence, the more sugar participants in these studies eat—or admit to eating—the leaner they might be. This reverse causality could make sugar look like a veritable health food in these studies, an ideal food to eat for weight loss. On the other hand, the sugar-avoiders may be particularly health-conscious. If so, their sugar abstinence would likely be accompanied by a host of other health-conscious behaviors, not to mention higher socio-economic status and higher education, all of which can influence their health status in beneficial ways. This healthy-user bias could make sugar look more harmful than it is.
These complications make modern epidemiologic research almost impossible to interpret reliably.
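To make the reverse-causality half of this concrete, here’s a minimal toy simulation in Python. Everything about it—the numbers, the model, the variable names—is invented for illustration and drawn from no real survey. The point is only that when weight-prone people cut back on sugar, a naive cross-sectional analysis will find that the people who eat more sugar are leaner.

```python
# Toy simulation of reverse causality: people who gain weight easily cut back
# on sugar, so in the survey data sugar intake ends up correlating with leanness.
# All numbers are invented for illustration; this is not any real dataset.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent tendency to put on weight easily (arbitrary units).
weight_proneness = rng.normal(0.0, 1.0, n)

# Reverse causality: the more weight-prone someone is, the less sugar they eat.
sugar_intake = 50.0 - 10.0 * weight_proneness + rng.normal(0.0, 10.0, n)

# Body weight is driven mostly by the latent tendency, only weakly by sugar itself.
body_weight = 70.0 + 8.0 * weight_proneness + 0.05 * sugar_intake + rng.normal(0.0, 5.0, n)

# The naive cross-sectional association comes out negative: more sugar, less weight.
r = np.corrcoef(sugar_intake, body_weight)[0, 1]
print(f"Observed sugar-weight correlation: {r:.2f}")
```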
So, the question: Is there a way to minimize these confounders and biases without doing the kind of exceedingly expensive and time-consuming randomized controlled trial that we require to establish the safety and efficacy of drugs?
One possible solution: find a sugar-related nutrition transition in a population that has already been westernized—that is already eating all the other foods that go with western diets and modern living—and see what happened to them. With such a population, we could have (some) faith that the decision whether or not to eat a sugar-rich diet was not a personal one—i.e., the folks who avoided sugar did so because they had no choice. Whether or not they were health conscious or fattened easily would not (necessarily) enter into it.
That kind of quasi-experiment is what Gracner and her colleagues found, and that’s what they reported in Science.
Early in World War 2, the British government instituted nationwide rationing of a host of food items, from butter and meat to sugar. The rations stayed in effect until years after the war ended: September 1953 for sugar, and six months later for foods like meat, butter and dairy. As the British population went back to eating sugar and candy without restrictions, their total sugar consumption doubled over a single year.
“Sugar rationing offered a rare opportunity,” Gracner and her colleagues write: “Babies conceived on either side of 1953 would have had very different early-life sugar exposure, but were similar in all other respects. Although other products such as butter were also derationed in the mid-1950s, none saw such a leap in consumption.”
Using the health records of British residents born between 1951 and 1956 (UK Biobank data), Gracner and her colleagues found that the earlier these men and women were born before sugar rationing ended—the more months of their early childhood were spent in a sugar-rationed world—the lower their risk of diabetes and hypertension later in life. Kids born a year and a half before the rationing ended had a “40% lower risk of diabetes and a 20% lower risk of hypertension” than those born after.
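For readers curious how such a birth-cohort cutoff might be analyzed in practice, here is a bare-bones sketch in Python. The file name, column names, and the simple logistic model are all my own assumptions for illustration; the actual analysis in Science uses UK Biobank records and far more sophisticated event-study and survival methods.

```python
# Bare-bones sketch of a birth-cohort cutoff analysis around the end of UK sugar
# rationing (September 1953). Hypothetical data and column names; NOT the
# specification used by Gracner et al.
import pandas as pd
import statsmodels.formula.api as smf

RATIONING_END = pd.Timestamp("1953-09-01")

# Assumed columns: birth_date, diabetes (0/1), sex.
df = pd.read_csv("cohort.csv", parse_dates=["birth_date"])  # hypothetical file

# Approximate months of the "first 1000 days" (~9 months in utero plus the first
# two years of life) spent under sugar rationing, capped at the full window.
months_before_end = (RATIONING_END - df["birth_date"]).dt.days / 30.44
df["rationed_months"] = (months_before_end + 9).clip(lower=0, upper=33)

# Crude question: does more rationed exposure predict lower diabetes risk?
model = smf.logit("diabetes ~ rationed_months + C(sex)", data=df).fit()
print(model.summary())
```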
Even in utero exposure to a low sugar environment (when the mother’s sugar consumption would be the determining factor) was associated with a lower risk of disease. This could happen through a mechanism known as fetal programming. The mother’s sugar consumption when pregnant “could affect health by altering physiological programming in utero,” Gracner et al write.
Our findings on in-utero sugar effects are consistent with animal studies demonstrating that high-sugar diets during pregnancy increase risk factors for T2DM and hypertension, such as insulin resistance and glucose intolerance in adulthood, and with studies on humans demonstrating an association between sugar-rich diet during pregnancy and lactation with increased offspring’s obesity risk.
There’s another interpretation of this association, which is not quite so grim: maybe early life exposure to sugar triggers a sweet tooth, in effect, “intensifying a lifelong preference for sweetness.” Hence, one possible explanation for the association observed is that the amount of sugar to which the children were exposed in early life, or their mother’s exposure when pregnant, leads to a greater preference for sugary foods and beverages, and so greater sugar consumption or even greater calorie consumption later in life.
In this sweet-tooth scenario, the association observed does not reflect a direct biologic effect of sugar exposure in utero or in infancy, but rather confounding or mediation by sugar or calorie consumption throughout the life course. Maybe what’s programmed is the tastes of those with higher exposure to sugar; they get accustomed to higher levels at an early age, and that follows them through life. Now it’s cumulative intake over their lifetime that does the actual damage, starting perhaps with early obesity and caused simply by the excess calories consumed.
The viability of this latter scenario is why we can’t use even this clever quasi-experiment to argue with conviction that sugar causes diabetes and hypertension just as cigarettes cause lung cancer (much as I might like to). It might, but that jury remains out.
But now let’s speculate about the implications if this association is due to a biological effect of sugar in early life. If so, then a third of all cases of diabetes and a fifth of all cases of hypertension can be attributed to just the sugar consumption by the mother when pregnant and by the child in the first couple of years of life.
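To spell out the back-of-the-envelope arithmetic behind that attribution (my own reasoning, not a calculation from the paper): if virtually everyone today gets substantial early-life sugar exposure, then a roughly 35% lower risk among the unexposed translates into roughly a third of cases being attributable to the exposure, and a 20% lower risk into roughly a fifth.

```python
# Back-of-the-envelope population attributable fraction (PAF), assuming
# near-universal early-life sugar exposure. Illustrative only; the relative
# risks are inferred from the reported risk reductions, not taken from the paper.

def paf(relative_risk: float, exposure_prevalence: float) -> float:
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = exposure_prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# A ~35% lower risk among the unexposed implies RR of about 1 / (1 - 0.35) for the exposed.
rr_diabetes = 1.0 / (1.0 - 0.35)      # about 1.54
rr_hypertension = 1.0 / (1.0 - 0.20)  # about 1.25

print(f"Diabetes PAF:     {paf(rr_diabetes, 1.0):.0%}")      # ~35%, about a third
print(f"Hypertension PAF: {paf(rr_hypertension, 1.0):.0%}")  # ~20%, about a fifth
```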
But what about the rest of childhood and the entirety of adult life?
In doing the research for my 2016 book, The Case Against Sugar, one observation that I found revelatory was the explosion in sugar consumption in the 1950s and 1960s among children in the United States. Refrigerators had only started to become common home appliances in the 1930s. Not until the post-war years did children in the U.S. have fruit juices and sugary sodas readily accessible at home.
Sugary cereals also made their appearance in this era, beginning with Sugar Crisp (Post), Ranger Joe (Nabisco) and Sugar Corn Pops and Sugar Frosted Flakes (Kellogg’s) all between 1949 and 1951. As I wrote in the book:
Over the next twenty years, the cereal industry would create dozens of sugar-coated cereals, some with half their calories derived from sugar. The greatest advertising minds in the country would not only create animated characters to sell the cereals to children—Tony the Tiger, Mr. Magoo, Huckleberry Hound and Yogi Bear, Sugar Bear and Linus the Lionhearted, the Flintstones, Rocky and Bullwinkle—but give them entire Saturday-morning television shows dedicated to the task of doing so.
These companies would spend enormous sums marketing each cereal—six hundred million dollars total in a single year by the late 1960s.
All of this was for children old enough to eat with a spoon, and so older than those in the Science study, but young enough nonetheless to possibly experience physiological effects that might last a lifetime.
Duly noting that this is speculation, I’ve often wondered if the increase in diabetes prevalence in the U.S. that can be seen in the data in the 1960s, and in obesity, which begins to noticeably turn upward in the 1970s and explodes in the 1980s, was the result of what was done to the diet of American children in these post-war decades—not just the normalization of juices and sugary sodas as beverages to be consumed multiple times a day, but the conversion of the American breakfast into a minor variation on dessert.
It was the kids from the 1960s, after all, who had become the adults of the 1980s, when the obesity rates shot upward and the obesity epidemic became undeniable. And if the fetal programming observations hold true, then each generation of mothers born in this sugar-rich environment would pass on the problem to their children. Obesity and diabetes prevalence would increase with each passing generation, even if sugar consumption itself plateaued, as it eventually did by 1999.
What we need now are nutritionists seeking out other such quasi-experiments to see if the observations from this one are replicated, which means we need them to care about the science.
That the Harvard T.H. Chan School of Public Health served as coordinating center for this expert panel and its consensus statement is probably a give-away, a bad sign, for those who have read my writing and Nina Teicholz’s.
National Academy of Sciences (NAS). 1975. Sweeteners: Issues and Uncertainties. Washington, D.C.: National Academy of Sciences, p. 96.
Are the dual associations here a new trend? Specific to economists? Or just a coincidence?
For those who want the full story on these nutrition transition studies and the relevant references, see my 2016 book The Case Against Sugar.
The hypothesis that seed oils are the primary cause of these diseases—obesity, diabetes and heart disease, most notably—is based largely on a similar argument. One of the reasons I find that argument uncompelling (although, of course, I could be wrong) is because the observations of disease appearance in association with increases in sugar consumption were made contemporaneously by observers on the scene and of the era – typically colonial or missionary physicians. If seed oils were present or new additions to these traditional diets when the western diseases began to appear, I could find no mention of it at the time.