Sleuth or Cynic?
A sleuth gathers and sifts evidence, employing critical faculties in search of the truth. The sleuth isn't overly attached to any conclusion and is open to reasoned debate. A cynic delights in endless deconstruction and disputation, ending with contempt for all who disagree.
Sleuthing is commendable, but rarely practiced. It is simple to shop for a prepackaged opinion online that confirms my biases. It is so much easier to be a confirmed critic than to painstakingly construct a personal position persuaded by reason and evidence.
In this episode Dr. Daniel L. Smith of the University of Alabama at Birmingham Department of Nutrition Sciences positions himself as approaching science from the vantage point of healthy skepticism. Healthy skepticism is propelled by questions. It is not gullible, but neither is it implacable.
Learn how science works and when its conclusions deserve deference.
An approximate transcript of the podcast episode follows.
[00:03] Mike Gray: Welcome to Season Nine of the Deep and Durable Learning podcast. This season I'm going to concentrate on issues that demand the highest level of thinking, wisdom. Knowledge and understanding are crucial, of course, but they're distinct from wisdom. Wisdom is often defined as discerning the best means to the best end. Wisdom requires experience, and experience is only gained over time. Wisdom is aligned with the way God made things. I've been encouraged to head this direction by some of you, my listeners. I make no claim to infallibility, but I hope these explorations will pique your interest and challenge your assumptions, catalyzing you personally to deeper deliberation. My guest today is a research scientist at a leading university who describes himself as a science skeptic. A skeptic, not a cynic. Stay tuned.
I welcome today Daniel Smith, who's an associate professor in the Department of Nutrition Sciences at the University of Alabama at Birmingham. His PhD is in biochemistry and molecular genetics from the University of Virginia. Welcome to the podcast, Daniel.
[01:28] Daniel Smith: Thank you for having me.
[01:30] Mike Gray: I’ve been looking forward to it. So, you are in the department of Nutrition Sciences. Can you tell us a little bit about your research and what motivated you with a degree that could have taken you a lot of places to move in the direction of human nutrition?
[01:49] Daniel Smith: Certainly. So, as you know, though others may not, I actually did a good part of my undergraduate at Bob Jones University in the Department of Biology, with a pre-med emphasis on the biology side rather than the chemistry or physics side. At the end of my time in undergrad, I actually pursued a master's in counseling, fully intending at some point to go on to med school, and during the process realized that my natural-born tendencies toward asking questions and being inquisitive were not fully aligning with the medical practice of just learning and then practicing, as much as they might with research, which I really hadn't done heavily in undergrad. So, I started looking at both opportunities and ended up applying to both med school and graduate school for a research position. I was very blessed to be accepted to a couple and chose to go to Virginia, largely on the advice of some older friends. And really, when you ask about what motivated that direction, I honestly think that's the way God created me. I've always had this natural inclination toward asking questions and wanting to understand things, and I grew up in a home where that was encouraged rather than suppressed. My parents allowed us to ask questions as long as we did so respectfully, and we deferred to what's known: all right, let's look it up. Let's find out. So, I got to go to grad school, got to be in a department where I tried different routes.
It was a feeder program, so you could go into the cell and molecular biology training program, but they let you rotate through micro, pathology, wherever you wanted to go. I ended up settling down in biochemistry and molecular genetics, spent my graduate research training with one mentor through my whole time there, and ended up indirectly becoming interested in nutrition, because it turns out when you feed yeast different types of media, you get very different outcomes. And it just intrigued me: wow, a single-cell organism, you change the nutrients just a little bit and everything else changes. So, I became very interested in how that might impact us as humans eating different types of foods.
[04:07] Mike Gray: Okay, so what about your actual PhD research? I know it was with yeast, right? Was it nutritional in focus or something else?
[04:19] Daniel Smith: So, the leader of the lab I joined actually had a background in chromatin/heterochromatin formation. And at that time, there were a lot of studies coming out saying that genes involved in the maintenance of chromatin were important for aging. So, we got interested in using the power of yeast genetics to understand what other genetic contributors might be relevant for aging. We did some screens across the full gene deletion collection. And sure enough, as for every other cell type, you have to grow the cells in certain types of media. The media has names, but it also has ingredients. And what we realized was, wow, if we change some of these ingredients, it completely changes which genes are involved in aging in the yeast cell model. So, I really began to pursue that as a pillar within my dissertation, continued it as a postdoc with some yeast work, and still do today, but I also branched out to other organisms, mice, and even some clinical human studies.
[05:20] Mike Gray: And if I remember right, that got you thinking for a while (maybe still) about calorie restriction. So not just what they eat, but how much they eat. I'm not sure how you constrain the appetite of a yeast. (I do have some ideas there, but.)
[05:38] Daniel Smith: Yeah. And that actually becomes part of the challenge with science in general, right? The model, and how do you actually invoke the question that you're wanting to ask? So, in the yeast model, they're absorbing nutrients through transporters in the cell membrane, not like us with the GI tract. But if you actually change the proportionality of the ingredients within the media, in other words, if you change the amount of glucose that you have in the starting culture, you can mimic the effects of calorie reduction, because they need some type of carbon and nitrogen source, as well as the micronutrients, for growth and reproduction, but also for long-term maintenance and survival. So, you can individually manipulate everything from amino acids to vitamins and minerals to types and complexity of carbon sources, like you would in other organisms. It was a very fun time to explore and think about these larger questions: wow, nutrition is pretty complex. We think about calorie restriction as eating less of something, but wow, that could be very different for different people. So, what does that really mean when you start to try to study it in other organisms, including humans? And how do you implement and interpret that when everybody's diets are so different?
[06:55] Mike Gray: Yeah, I think we're going to be talking about the difference in complexity between humans and yeast as research subjects. We'll get there in a few minutes, I'm sure. A weekly news magazine that I subscribe to used to have a column. They still have the column, but they changed the title. The old title, which I thought was terrible, was "Health Scare of the Week"; this is The Week magazine. So, I think all of us have experienced a confusing welter of claims like: coffee's bad for you. No, wait, it's good. It's actually full of antioxidants, and those are definitely good. Or red meat's a death sentence. Or, well, maybe that's an overreaction. I just read an article this morning in the New York Times summarizing a clinical study at NIH on the dangers of so-called ultra-processed foods, which the article claims constitute 58% of the average American diet. So, I know even as a professional, you have some skepticism about the field of human nutrition. What's the basis for your skepticism?
[08:12] Daniel Smith: Yeah. So, to start off, to be very clear with anybody listening, I do not have an RD. I'm not a registered dietitian or a dietetic intern. My coming to the field is very much from a nutritional biochemistry standpoint. I am interested in the foods we eat, the compositions they have, how we process them, and everything else. But certainly, I'm not a registered dietitian. The skepticism, though: that word can have two different meanings. The one I think of is critically evaluating claims, asking, is this really founded on truth? Versus there's also this idea of skepticism as saying there is no such thing as truth. I'm more in the first camp: okay, so you've done some experiment, you made observations, you might have even randomized. But what was the design? What was the observation? What was the intervention, if there was one? And what was the outcome? And then how do you interpret that? In some of the classes I teach, we actually ask students: if you read a paper that's been published, like you're talking about (sometimes the media takes things a little farther than what the paper actually says), but if you go to the actual paper, what do you read first? The title, abstract, introduction and background, the methods, results, and then discussion. Which one do you choose? Most people start with the title. That's what gets their interest. They go to the abstract, which is kind of a summary, but from that point on, it's very different. Some people jump right into the background. They want to learn more about why the study was done, then they go to the methods, and then they just work their way through the paper. Other people jump in with: okay, I've read the abstract, I know what's going on, now I'm going to the methods. What did you actually do? And honestly, I'm one of those people.
Because when you say skepticism, sometimes what you did changes the outcome—the interpretation of what you observe. And I'd like to know how you went about that. And then I'll read the other stuff too, the results. But quite frankly, the results should stand the test of time, even if I misinterpret them and my discussion is off for some reason. So, I really spend more time in those two sections of the paper than the others. But the skepticism comes because I've seen this happen over and over where, intentionally or not, we may choose a design trying to ask a question, and then when we get our result, we have to interpret it with some amount of background knowledge and context. But that's limited to that time and place, and to our individual capacity, or the authors' collective capacity. As new information comes along, wow, that same result may be interpreted a completely different way in the future. So there's, personally, a healthy dose of skepticism about claims, because I know some of the designs can be misleading in the sense that they don't prove something so much as they give additional information to either support or reject a hypothesis; they can't be determined to be causal from the observation alone. But yet sometimes we want to overly interpret, or aggressively interpret, our efforts to say we've made a difference. And there's a balance in being honest about what we know and what we still don't know.
[11:28] Mike Gray: Yeah. I don't know if I would have called it skepticism or not, but when I was finishing my PhD many decades ago, I remember the discussion section in particular was one where I tended to play it fairly close to the vest: this is what I definitely proved. And my mentor definitely wanted me to extend. I felt more like pressure to be a creative writer: what might this mean, rather than what I'm sure of. As far as I can tell, the way I designed the experiments, this much I was willing to defend in my dissertation, as it were. But there was always this pressure to extend the results beyond what you'd actually demonstrated.
[12:22] Daniel Smith: Yeah. And some of that gets to both a personal desire to make a contribution and a difference. But honestly, sometimes it can get over into the idea of trying to elevate the significance of your work, to be able to continue your work and to get additional funding, support and things like that. So, it is something that I think the field has been becoming more aware of, more talked about, more cautious of, and trying to develop policies in ways that reduce the impetus for that, so that we do get on the same page. Now, that being said, as soon as you publish something and it's out in the public, the media can do whatever they want with it.
[13:06] Mike Gray: They do.
[13:07] Daniel Smith: Bloggers can do whatever they want with it. You never know exactly how your paper, your results, even if you write something a particular way, will be interpreted by others. And that can be a challenge: trying to write precisely, but at the same time with relevance, so people who are not in the field can understand what was done. So, it is hard. Communication is part of good science, and doing good science is hard.
[13:35] Mike Gray: Probably the most provocative thing you told me in the last few years: you once said something like (you can correct me if my memory is faulty), you could tell a scientist how to design an experiment so they could get the results they wanted. Now, I can't just let that stand. There are several parts of that I think are kind of provocative. Would you limit that statement in any way, say, perhaps to certain scientific fields, or do you think that's a generalization that holds?
[14:12] Daniel Smith: So, I think some fields may be more resistant to that than others, given that in some fields, certain types of experiments require a huge amount of effort, planning, and agreement among individuals providing input, because the experiment can only be done a set number of times, in a certain way, at a certain place. So some are probably less susceptible. Whereas when you get down to some of the molecular biology or even in vivo experiments that are performed, there are so many choices that have to be made: the design of the study, the selection of the conditions, the variables, the performance of it, the duration and collection of outcome measures, all these things. There are so many pieces. It does, unfortunately, lend itself to the potential that you can choose things that would favor, or even predict, an outcome or not, just by that design. So, an easy example is in the yeast I was telling you about. When we were doing experiments, we were doing them with what's called a synthetic complete media. And that simply means that the ingredients are known. Some of them are actually synthesized or extracted from other sources; they're purified. So, you know what is being put into the media, and that includes the carbon sources and the amino acids for the proteins. So, if you read papers, and a group says, we used synthetic complete media, you would think, all right, I know what this is. But if you actually go into the details of the paper and look: well, which recipe? It's kind of like saying, I made peach cobbler. Well, which recipe did you use? They're all peach cobbler, right? When you get down to it, well, what were the levels of particular amino acids? Or maybe we only used the essential amino acids, because the non-essentials can be made. Or we autoclaved the ingredients before use to sterilize them, or we filter-sterilized them so they weren't heat-treated.
So, it ends up there are so many variables within what can still be called the exact same thing, synthetic complete media, that you can actually get differences in outcomes between labs simply through the way they have made the formula for their media. In the same way, with the yeast strains, some strains have different gene mutations in the background that interact with your gene of interest. So, I can say I studied Gene X, but in fact, I may have Genes A, B, C, D, and E also mutated, and Gene X may do something, but only in the context of Gene D. And I didn't test that because I just assumed the background things didn't matter. So this is the point: it becomes a real challenge to be careful about what is known, to consider things that maybe you don't think are important but could be, and test those in a rigorous and systematic way, and to be clear about what you have done rather than just use a term that other people may interpret. The NIH is trying to address this in some of its policies about how you report, and other institutes have come along and said, here are some reporting guidelines; these are essentials that you really need to have. And then even data sharing: okay, what is the formulation for this media? For the mice, okay, you fed this diet; what is the exact diet number? What are all the ingredients? What is the full composition for the calories, for the micronutrients, all that stuff? Small differences can have profound effects and can change outcomes if you just select variables to move them one direction or another.
[18:05] Mike Gray: So, the scenario you're talking about, how does that play into getting the results you wanted?
[18:15] Daniel Smith: So sometimes it can play in intentionally. There are clearly situations where people are wanting a particular result, and sometimes that is rightfully so. So, for instance, if you say, well, I want to study a mouse model of obesity, then you need to choose a particular type of diet where that strain of mouse has been shown to be susceptible to inducing a phenotype of excess adiposity. Otherwise, your experiment that you're testing, if it relates to an obese phenotype, will be irrelevant, because some strains of mice will not develop excess adiposity when fed as much as they want of a given diet, whereas others will. And even if that strain has been shown to be susceptible to developing excess adiposity from a particular diet, it may be the females respond differently than the males. It may be that it works at a particular age. So sometimes you actually need to intentionally choose those variables to set up the experiment design to ask the question of interest. Other times, you may unintentionally or intentionally have biased the design to give an outcome that is predicted or that is expected. That becomes the challenge of knowing and applying a process to design and implement studies that limits the intentional or unintentional bias in the way that it's performed. To actually really ask fundamental questions and to get a new knowledge and new data that can be informative for the scientific community.
[19:57] Mike Gray: Intriguing that even among mice, there are some that can get away with eating anything they want and not put on weight, right? Just like humans.
[20:05] Daniel Smith: Yeah, it is fascinating.
[20:07] Mike Gray: Those envied people out there, but not many.
[20:10] Daniel Smith: And what's really fascinating about that is even within the same strain of mouse, much like dog breeds, right, you can have mice that have been inbred until they're 99-point-whatever percent genetically identical, at least as tracked by nuclear identity, meaning they don't really have large differences in their genome. But even if you take, say, 50 male mice of the same strain from the same breeding colony at the same facility, and you put them all in individual cages and let them eat as much as they want of a high-fat diet, you're going to get a bell-curve distribution. Some of the mice are going to be really susceptible to the diet and develop obesity, and others don't develop it at all, even though they're genetically mostly identical from a nuclear genome standpoint. So, there is this difficulty when you design studies. Like in that example, you could unintentionally bias an outcome by doing a treatment: if you don't randomize carefully within the distribution, you could end up with a lot of animals resistant to diet-induced obesity in a particular group, which then makes it look like a treatment is actually fixing obesity. But it has nothing to do with the treatment; it actually has to do with inappropriate or unsuccessful randomization. Stuff like that can occur, too. So, yeah, it is fascinating and challenging. The context in which experiments are done and results are obtained is actually probably the most informative part, because that really helps to understand and explain and produce new questions and new hypotheses that should be tested.
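The randomization pitfall Dr. Smith describes is easy to see in a toy simulation. Everything below is hypothetical (the means, spreads, and group sizes are invented, not from any real mouse study): 50 "identical" mice whose weight gain still follows a bell curve are split into a do-nothing "treatment" group two different ways, and only the sloppy split produces a large apparent effect.

```python
import random
import statistics

random.seed(1)

# Hypothetical illustration (all numbers invented): 50 "genetically
# identical" mice whose weight gain on a high-fat diet still follows
# a bell curve, as described above.
weight_gain = [random.gauss(10.0, 3.0) for _ in range(50)]  # grams

# Sloppy split: the 25 most diet-resistant mice all land in the
# "treated" group (say, cages were filled in order of body size).
ranked = sorted(weight_gain)
sloppy_treated, sloppy_control = ranked[:25], ranked[25:]
sloppy_diff = statistics.mean(sloppy_control) - statistics.mean(sloppy_treated)

# Proper randomization: shuffle first, then split.
shuffled = weight_gain[:]
random.shuffle(shuffled)
rand_treated, rand_control = shuffled[:25], shuffled[25:]
random_diff = statistics.mean(rand_control) - statistics.mean(rand_treated)

# Neither "treatment" does anything at all, yet the sloppy split
# manufactures a large apparent effect; the randomized split does not.
print(f"sloppy split apparent effect: {sloppy_diff:.2f} g")
print(f"randomized apparent effect:  {random_diff:.2f} g")
```

The point is the one from the interview: with heterogeneous subjects, a large "treatment effect" can come entirely from how animals were assigned to groups.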
[21:52] Mike Gray: So, I suspect most people who are listening are more intrigued about human studies and implications for human diets. So, we move from yeast to mice. So, what makes research on human subjects particularly challenging? There's already this underlying—well, if we had problems with yeast and we have problems with mice, can you bar the door for human subjects?
[22:20] Daniel Smith: Yeah. So, I have a spouse and four children, and I tell people if I had a few more, I could do fully randomized studies on the kids.
[22:29] Mike Gray: Yeah, we had five. And we could have done that.
[22:32] Daniel Smith: Yeah, yeah.
[22:33] Mike Gray: With a few more. Yeah, definitely.
[22:35] Daniel Smith: So part of the challenge is, I'll draw a parallel, right? With a yeast, I put it in this place; it doesn't crawl anywhere. I know where I can go back and find it. With a mouse, I can put it in a cage. Any animal within a research facility is contained, and it gets what I give it, right? It doesn't have the option to leave the facility, go acquire food somewhere else, or pick up some other exposure. It's contained, it's controlled, it's rigorous. With people, you've got the whole gamut of experimental possibilities. Honestly, most nutrition experiments are done observationally, meaning that there's a population somewhere, observational data is collected indirectly or directly through surveys or other types of measures, and some type of correlation analysis is run, or, if you get more sophisticated, longitudinal analysis with a variety of time-series data analyses. But it doesn't often include randomization, fundamentally. So, there's the issue of cause and effect, direction of interpretation, confounding, all types of things, because that individual lives a life of free choice. Even with the coffee example you mentioned, you can say, okay, who drank coffee? And how much did they drink? Okay, that's two pieces of information, but you now have to adjust or control for all this other stuff. What was their age? What was their sex? What was their body weight? What other foods did they eat? How active were they? And then, depending on your outcome, it becomes very, very complicated. How do you actually adjust for all these things appropriately? There are statistical models where that is attempted, but they assume that we know what to adjust for. Some of the machine learning approaches are more naive, saying, even if we don't know, what other things might we be looking for?
Could we look for? Even then, though, that is still just an observation of a correlation. When you really get rigorous in human nutrition research, you have people come into a facility and live on the ward, like in a dorm room, and you do measures on them. You provide all of their food. You make sure nothing is getting in from the outside, and you have an observation, data points, right? But that doesn't represent the real world, because we don't live in dorms where all of our food is given to us and we never make choices. So, you have this balance: how do you do a study with the highest-quality, most rigorous implementation and yet have relevance and generalizability to real-world experience? And on top of that, we're not all genetic twins like these mice are, right? We have this huge amount of genetic diversity. We have all these previous life exposures. We are different. So, you try to design studies with inclusion/exclusion criteria, recruitment, implementation. And even if I do all that, if I do the study in Alabama, the population is going to be slightly different than if I do it in Massachusetts or California. The dietary backgrounds are going to be very different, depending on cultural and ethnic heritage. So, it is a huge, huge challenge to try to understand.
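The "adjust for all this other stuff" problem can be sketched in a few lines. This is a deliberately artificial model (every probability here is invented for illustration): suppose smoking drives both coffee drinking and a bad health outcome, while coffee itself does nothing. A naive comparison makes coffee look harmful; stratifying by the confounder makes the effect vanish.

```python
import random

random.seed(0)

# Hypothetical toy model of confounding (all numbers invented):
# smoking drives both coffee drinking and the bad outcome;
# coffee itself does nothing here.
people = []
for _ in range(10000):
    smoker = random.random() < 0.3
    coffee = random.random() < (0.8 if smoker else 0.3)
    outcome = random.random() < (0.30 if smoker else 0.10)
    people.append((smoker, coffee, outcome))

def rate(group):
    """Fraction of the group with the bad outcome."""
    return sum(o for _, _, o in group) / len(group)

# Naive comparison: coffee drinkers look substantially worse off.
naive_gap = (rate([p for p in people if p[1]])
             - rate([p for p in people if not p[1]]))
print(f"naive coffee 'effect': {naive_gap:.3f}")

# Stratify by the confounder: within smokers alone, or non-smokers
# alone, the apparent coffee effect is near zero.
strata_gaps = {}
for s in (True, False):
    yes = [p for p in people if p[0] == s and p[1]]
    no = [p for p in people if p[0] == s and not p[1]]
    strata_gaps[s] = rate(yes) - rate(no)
    print(f"smoker={s}: gap {strata_gaps[s]:.3f}")
```

Stratification only works here because we wrote the confounder into the model; in real observational nutrition data, as Dr. Smith notes, the hard part is that you have to know what to adjust for.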
[26:08] Mike Gray: So, in your mind, does that make the findings, the conclusions, the claims suspect? What do you do with that? Do we just throw all human-oriented research out?
[26:23] Daniel Smith: So, I would say not so much suspect as calling for caution, for sure, in the claims, and for a willingness to acknowledge the limitations based on the design. So, you gave the example of calorie restriction, where I have done some research. We actually had a large mouse study being performed where we were feeding them the diet, right? They only get mouse chow every single day for their whole life. They don't get variability; they just get the mouse chow. And we were implementing different levels of calorie restriction, but there was also a group that was fed as much as they wanted, an ad lib group. So, the theory, right, is that if you restrict the calories, they eat less, they weigh less, they have lower body fat, better metabolic profiles, less stress, all these different things, and they live longer. We were testing that in a variety of ways, but we also decided to look at the group that got to eat as much as they wanted. And sure enough, some animals ate way more than others. So, the question becomes, if I did an observational study on those mice, do the animals that voluntarily eat less, that voluntarily calorie-restrict, actually live longer? And the answer is no. If I had done that observational experiment first and then designed the study, I would have predicted the exact opposite. So, there's this kind of disconnect sometimes between our observations, making the hypothesis and the prediction, and then doing a randomized study and trying to interpret it. So, when it comes to suspect claims in human nutrition, I think there are some things where there's so much data, so much experience, so many observations, so many tests, that it's like, yes, this is well supported. But even in that case, there are individual differences in how one might respond.
So, a new initiative that the National Institutes of Health has funded is this Nutrition for Precision Health program. And the idea is that, you know, if you were to recruit subjects for a weight-loss study and say, we're going to cut back your calories by x amount, some individuals in that study will actually stay the same weight or even gain weight, while many of them will lose weight. So, you'll have a population mean of weight loss because of the calorie reduction, but individually, people may respond very differently. So the idea that I can tell you exactly what you should eat, to the nth detail, to give you x outcome: I can tell you what has generally worked in studies, and then we may need to test that in you, and we may need to refine it through a series of observations, watching how you respond and whether you're able to follow it. That's a big one with humans, too, right? Compliance. The mice don't get a choice; they have to comply. Humans have free will; we do have a choice in what we eat. So even if you say this is the best thing for you, you may say, I really don't like that food, and I don't eat it. So, yes, there are some things we can say; there are some fundamental observations that are really important. But in the areas where we're not sure, let's not overinterpret. And let's allow for the space that there may be genetic differences, physiologic differences in people, where even though we think something works in general, it's not working for them. So, okay, let's reconsider, let's adapt, let's modify, let's test something new. Hope that makes sense.
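The mean-versus-individual point above is worth seeing in numbers. This is a hypothetical sketch (the 3 kg average loss and 4 kg spread are invented, not from any real trial): a calorie cut whose *average* effect is clearly weight loss still leaves a sizable fraction of individual subjects gaining weight.

```python
import random
import statistics

random.seed(42)

# Hypothetical sketch (invented numbers, not from any real trial):
# individual responses to the same calorie cut, roughly normal around
# a 3 kg loss but with wide person-to-person spread.
n = 200
changes = [random.gauss(-3.0, 4.0) for _ in range(n)]  # kg over the study

mean_change = statistics.mean(changes)
gainers = sum(1 for c in changes if c > 0)

print(f"population mean change: {mean_change:.2f} kg")
print(f"{gainers} of {n} subjects gained weight despite the calorie cut")
```

A headline would report the mean; a clinician working with one of the "gainers" would need exactly the adapt-and-retest approach described above.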
[30:03] Mike Gray: Yeah. So occasionally, though, isn't there upheaval? I'm thinking of the red meat story over 30 years of my lifetime, where there seemed to be this strong consensus, and people made recommendations to the diet, and some people decided, well, if that's true, then I'm going to modify my diet, even though I like red meat. And a lot of people said, well, you know, you got to die sometime, and so I'm going to die eating red meat. So, can you trace a little bit about that story?
[30:41] Daniel Smith: So, yeah, maybe not in the exact detail I would like, but it's part of a theme that happens in research. The great thing about data is that you can interact with it, you can test, and you can develop hypotheses. And over the last hundred years, that has become a much more tractable and achievable thing to do. Data on human nutrition and health outcomes have really benefited from the development of hospitals and medical records, as well as large national surveys: food, agricultural, and agronomy records, consumption records, things like that. And it becomes possible, in populations, looking at specific health outcomes, to merge together these different types of data and make predictions, or make observations that are very valid observations, right? So, you could say, oh, wow, in this window of time, in this population, those who ate more of x had a higher incidence of y, and it becomes a reasonable hypothesis. The reasoning, therefore, is that if you eat less of x, you should have a lower amount of y. Well, that's a hypothesis. You actually need to go back and test it with some randomized studies and find out, because it could be that those in that population who ate more of x also frequently did something else that you didn't measure. So what ends up happening over time is you have these observations, you have these refined theories and hypotheses that develop, and then you have studies that start to be performed that actually test that directly: all right, now we're going to randomize a population where, hopefully, the background variables are not confounded and are controlled, and we're going to see if more of x causes y. Sometimes it does, but a lot of times it doesn't.
So, with the coffee studies, you randomize people: oh, you're going to drink x amount of coffee. And then it's like, we don't see that outcome here. But with nutrition, it actually becomes particularly difficult, because unlike a drug study, where most people are not on the drug, so you can randomize them to be on it or not, there's an inherent background exposure in the population. Like with the red meat question you're talking about: you could have observed the association in a population, but now, if you want to do a randomized study, do you want your control group to have any red meat consumption at all? If they don't, that means they're actually different from the normal population, who have some amount of red meat consumption in the background. So having the ability to separate out those effects results, over time, in more and more data coming forward to say, well, this study, this study, this study, they're really not supporting that primary hypothesis or observation anymore, and in fact, maybe we need to reconsider it. That takes way more time than generating a hypothesis from some observational data, which often gets picked up and drives the field. And you can do that over and over and over: here's a new hypothesis, here's a new observation, here's blah, blah, blah. But then people go back and say, well, let's actually test it. Okay, well, now it's a lot harder to say for sure that it's actually cause and effect, a true influence.
[34:15] Mike Gray: Is it simplistic to say there's a scientific method?
[34:19] Daniel Smith: It's simplistic, but it's fundamental. And it's funny, too. As scientists, we say, oh, we do the scientific method, but my kids do the scientific method. When they play video games, they're often applying it: they observe something, they interact with it, they have an avatar do something, they see the outcome, and they go back and try again. In their minds, they're figuring out the relationship between the actions they took and their performance in the game, they modify it, and they keep going. Eventually, they get better and better at it. It's that same process. We interact with the world around us. We have data and observations. We can make hypotheses about the relationships between them. We can design and implement an intervention or experiment, measure the outcomes, interpret the results, and then say yes, or no, or we need to modify it and test it a different way. So, fundamentally, yes, there is a scientific method. The agreement about the exact process and steps, and what qualifies as a real scientific study versus a kind of quasi-scientific one, is a little more intangible. But yes, definitely, in nutrition you can do real science with the scientific method.
[35:39] Mike Gray: So, I've heard you categorize problems in scientific research, and you've listed some already, under three headings, so maybe we can use those to organize what's been said and what still needs to be said. Those are rigor, reproducibility, and transparency. Let's take them one at a time. What do you mean by rigor in the context of scientific research? Is human-oriented research intrinsically less rigorous than, say, physics research, or research on yeast if we're staying in the biological realm, or, in my realm, E. coli, the most thoroughly researched organism of all time so far? So, what's rigor?
[36:32] Daniel Smith: Yeah, so those came up partly in my training and background through the 2000s up to today, as part of a focus of the National Institutes of Health. They fund a large amount of research in the United States through federal tax dollars, and part of being a responsible steward of that money is making sure it's used for scientific research in a productive way. And they came up with some themes, because one of the observations was, you know, group A can do a set of experiments, but how well does that predict what the outcome would be if group B did the same experiments? And what kinds of variables are important? There was a whole initiative regarding this, and rigor, reproducibility, and transparency were among the primary outcomes: these are things we really need to consider carefully if we're going to work as a collective to move the scientific field forward. Specifically, rigor is a strict application of the scientific method, and the idea is to limit the amount of bias both in the performance and in the reporting of the results. So it requires a certain amount of control in the selection as well as the execution of the study design. It can be done very well in some study designs, but others, because of their general nature, are a little more susceptible to being less rigorous, and therefore the quality of the evidence that comes from a particular study can be ranked based on that level of rigor. But to be fair, like I was mentioning with the human studies, bringing somebody in and having them live on the ward, disrupting their lifestyle and their normal diet patterns and everything else, that has a high level of rigor, potentially, but it may come at the sacrifice of a bit of generalizability for those who aren't going to be living in that condition in the future.
So, it is a really important concept, and, like I mentioned, it's something we think we can do well. But sometimes I'm not sure we fully understand all of the relevant variables. I can give you an example here. You mentioned E. coli and yeast. Instead of doing a single test tube of a particular yeast culture, with a lot of the robotics and the scaling down of molecular biology, you can use what's called a multi-well plate. Instead of having one test tube, it's like having 96 or 384 or even more little bitty test tubes on a single plate about the size of a three-by-five index card. In doing some of those experiments, though, the volumes of fluid being manipulated get remarkably small, so you have to have highly precise, highly accurate instrumentation to execute your experiments consistently across the full totality of the wells being tested. But even beyond that, what we've observed in some of our own studies is that, oh my goodness, if you put a lid on the plate to keep other organisms from falling into the wells, growing, and contaminating your culture, the fluid dynamics and the amount of hydration loss across the surface of the plate are variable. Because the lid is not completely sealed, it allows a little air exchange to provide gases for the organism, so the wells closest to the edge dry out a little faster, and in fact some of the outcomes influenced by osmolarity or other factors are a little bit different there.
So rigor really is getting down to the level of trying to measure everything you can, and knowing that if you did this experiment and put all of your control group in the top row, right on the edge of the plate, that's actually not as good a design as randomizing, so that all five of your groups beyond the control also share in that top row, because it has a different outcome. Rigor is getting down to the details: what did you choose for your design, what are the variables, how did you select them? And then telling people what those are, so that others understand them in both the performance and the interpretation.
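The plate-layout point above can be sketched in a few lines of Python. This is my illustration, not the speaker's protocol, and the group names and well counts are hypothetical: rather than parking the control group along the fast-drying top row, shuffle all assignments across the 96 wells so every group, control included, gets a share of the edge positions.

```python
import random

random.seed(42)

ROWS, COLS = 8, 12                               # a standard 96-well plate
GROUPS = ["control", "A", "B", "C", "D", "E"]    # six groups, 16 wells each

# Build the full assignment list and shuffle it, so no single group
# ends up concentrated along the plate edge.
assignments = GROUPS * (ROWS * COLS // len(GROUPS))
random.shuffle(assignments)

plate = {
    (row, col): assignments[row * COLS + col]
    for row in range(ROWS)
    for col in range(COLS)
}

def is_edge(row, col):
    # Edge wells lose moisture faster under a loosely sealed lid
    return row in (0, ROWS - 1) or col in (0, COLS - 1)

edge_counts = {g: 0 for g in GROUPS}
for (row, col), group in plate.items():
    if is_edge(row, col):
        edge_counts[group] += 1

# With a shuffled layout, the edge wells are typically spread across
# all groups instead of falling entirely on one of them.
print(edge_counts)
```

The design question Dr. Smith raises is exactly what this randomization addresses: the edge effect still exists, but it is distributed across groups instead of biasing one of them.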
[41:17] Mike Gray: So, circling back, is human research intrinsically less rigorous?
[41:24] Daniel Smith: By the definition, it doesn't have to be, but it often can be, mainly because there are things beyond your control. You could say, well, I really want to control the genetics. Okay, now you're limited: you're going to have to do twin studies, or you're going to have to decide what level of genetic control you want. Is it at a population or cultural level, or at a familial level? Previous environmental exposures? Okay, now you're at the limit of whether you want to take into account geographic location, proximity to a particular river or water source, or some type of air exposure, like those who live along the highway. So it can be rigorous, and it's a challenge, but the more people document, and the more that's incorporated into the design and interpretation, the better it can be performed. I just don't think we're at the same level in a lot of the human biological and clinical sciences as in some of the other disciplines you were talking about, like physics.
[42:29] Mike Gray: So, reproducibility. I think it's probably the simplest of the three.
[42:34] Daniel Smith: It is, but there's actually a nuance here. There are two words sometimes used together in the field: reproducibility versus replicability. Reproducibility, more strictly, is the idea that if I gave you my results, the manuscript, and all of the raw data files for all the outcomes, you could go in and, step by step, reanalyze and reproduce the figures, the graphs, the statistical analysis, and everything else, and you would come back and say yes or no: yes, you gave me everything, I did what you said you did, and I got the same result you said you got. Replicability takes it a step further and says, now that I know what you did, I'm going to redo it. I'm going to start from scratch in a completely different experiment and see if I get the same outcome from the same design, the same implementation, and everything else. Sometimes reproducibility is taken to mean, okay, I did the experiment more than once and got the same result, in the same lab, for the same set of experiments, for the same paper. That is a version of reproducibility, but by some definitions in the field it's actually a bit closer to replicability of the findings. And unfortunately, and this gets back to rigor, in many studies it's not always stated clearly how many times an experiment was performed. With a large clinical study, you register it, and usually you're going to do it once, because it's a lot of time, effort, people, man-hours, and money. Whereas in a molecular biology experiment, I could walk into the lab and decide, today I'm going to do four different experiments. Oh, well, one of those worked, so we're going to put that one in the paper. But then the paper doesn't say that we tried this three other times and it didn't work.
And it may very well be that there were good reasons it didn't work. It may be that the solvent used was a problem, and you realized that on the third attempt, and after you fixed the electrical conditions or whatever, you finally got it to work. But the fact is, the things that didn't work, and the conditions under which they didn't work, often go untold. It's just: this is the result, this worked, and all the other stuff is left out. And that gets back to the issue of transparency that you mentioned as the third one, because you really have a hard time being reproducible if you're not transparent at the same time.
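The reproducibility-versus-replicability distinction can be shown in a tiny Python sketch. This is my illustration with made-up numbers, not anything from the conversation: rerunning the analysis on the same shared raw data must give an identical result (reproducibility), while rerunning the experiment itself collects fresh data that should agree only approximately (replicability).

```python
import random
import statistics

# Reproducibility: re-running the *analysis* on the same shared raw data.
raw_data = [4.9, 5.1, 5.0, 5.3, 4.8]         # hypothetical published data file

def analyze(data):
    # The full analysis pipeline, stated explicitly so anyone can rerun it
    return round(statistics.mean(data), 2)

# Same data plus same analysis always yields the identical figure.
assert analyze(raw_data) == analyze(raw_data)

# Replicability: re-running the *experiment* from scratch generates new data.
def run_experiment(rng):
    # A fresh, independent experiment measuring the same underlying quantity
    return [rng.gauss(5.0, 0.2) for _ in range(5)]

replicate_1 = analyze(run_experiment(random.Random(1)))
replicate_2 = analyze(run_experiment(random.Random(2)))

# Independent replicates agree only approximately, never exactly.
print(analyze(raw_data), replicate_1, replicate_2)
```

The asymmetry is the point: failing the first check means the reporting was incomplete or wrong, while scatter in the second is expected and is what repeated experiments are meant to quantify.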
[45:22] Mike Gray: Okay, so that's kind of an overarching one: your communication about your design is as complete as you know how to make it.
[45:33] Daniel Smith: And that in that you're actually sharing that with people.
[45:37] Mike Gray: Right, right.
[45:37] Daniel Smith: So there have been a couple of studies that have really highlighted this since the 2000s. One of them was with the National Cancer Institute. They funded some groups to try to reproduce some of the big findings in the field, having other labs take the papers, the designs, the work, try to do it, and see if they got the same experimental outcomes. And there have been others who have written more generally about how well we can trust our interpretation of scientific findings. Some people come to the conclusion that, hey, most of what we interpret today is probably not true, it's probably false. And that's for a whole host of different reasons, but one of them is the ability and willingness to be fully transparent. You will hear it sometimes. Somebody will be talking at a meeting and they'll say, yeah, we never could get that result either, and then we changed x and we got the result. But they don't tell you: did you keep doing the experiment to see if, after you changed x, you could get the same result two or three times? Or did you do it once, publish it, and never go back to test it again? If that's the case, how do you know x was what actually changed it? That's part of the challenge of resources and the amount of detail. In grad school, people used to talk about how we need a journal for scientific findings that don't show any significant results, the journal of non-significant results. Well, there are some where you can publish that, but really, those aren't the ones that get the attention; it's the ones that find a significant finding. But in fact, the non-significant results are just as informative in many cases.
If you really know what was done and it wasn't significant, well, that's important, but it doesn't get emphasized.
[47:34] Mike Gray: Thought you were going to talk about the famous Journal of Irreproducible Results, which I subscribed to for a while.
[47:40] Daniel Smith: Yeah, that's, that was a little bit different.
[47:44] Mike Gray: Yes. Well, as we move this toward application for the average listener who's not a scientist: I don't think this is putting words in your mouth, though it may not use exactly the same words, but skepticism is not the same thing as cynicism. Right?
[48:04] Daniel Smith: Right.
[48:05] Mike Gray: And I think a significant fraction of the general population has actually moved from skepticism about science to cynicism about science, and is unwilling to grant it any significant authority. Let's take one concrete example that's been in the news the last couple of weeks: an outbreak of salmonella infections in California due to the consumption of raw milk. Are you familiar with that at all?
[48:35] Daniel Smith: Just from the headlines. Like many things, I haven't had the time to go in. I actually do like to see headlines and then read the actual paper, the result and observation. But I did see it.
[48:50] Mike Gray: That's part of the beauty of my stage of life. If I want to read something more, I can probably arrange my schedule to read something more.
[48:58] Daniel Smith: And honestly, it's a superpower I would encourage everybody to use, instead of just assuming that whatever the interpretation is, is correct.
[49:06] Mike Gray: Yes.
[49:06] Daniel Smith: Yeah, read it. That's what it's there for.
[49:08] Mike Gray: That's part of the problem. I think people are shopping for a prepackaged opinion that reinforces what they thought already.
[49:16] Daniel Smith: Confirmation bias. Yeah.
[49:18] Mike Gray: Yes, indeed. But on this idea: I would say that, in general, I'm wired to be skeptical up front, and I don't think that's particularly a bad thing. My wife thinks I'm sometimes overly skeptical, but I'm here to protect her with my skepticism sometimes, whether it's "I think it's probably too late to go out for a walk tonight" or whatever it is. But cynicism is quite a different place to live than skepticism. So, in the raw milk situation, pasteurization of milk ranks as one of the major public health breakthroughs of the 1900s. I'm speaking as a microbiologist, but milk was an ideal vehicle for all kinds of infectious agents before the advent of pasteurization. So I find it interesting that we have a group of people who want to go back to that, because their authorities, who are generally not on the cutting edge scientifically, tell them there are vital factors in raw milk that are destroyed by pasteurization along with the infectious agents. And they're willing to take their chances with infectious agents because some dairies, like the one in California, assure them that they take elaborate precautions. Well, the hygiene of a cow is never going to be exemplary, I just have to put it that way. So there's a sense in which they're playing Russian roulette with every act of milk consumption. You don't know. It's erratic, it's not reproducible. We have 100-plus years of history of pasteurizing milk, and it demonstrably changed the whole landscape of childhood infectious diseases, and yet there's a willingness to say, we'll chuck all that, I'm a cynic about the validity of all of that. Their attitude is: you're messing around with something, and God didn't produce pasteurizers on cows, so we want it the way God made it, and we don't think your hazards amount to anything. I would say that's cynicism.
When you've got a body of established knowledge and you're willing to ignore all that and be the judge and jury on that. So, what do you say about this unwillingness to grant significant authority to science? Even science that has been demonstrated in large populations over long periods of time.
[52:16] Daniel Smith: Yeah. And to be fair, I've seen it too, in the people I interact with and the questions they have. Man, I share that. Like I've told you from the beginning, I'm an inherent skeptic, in the sense that if a claim is being made, I would like to know what information underlies the support for that claim.
[52:38] Mike Gray: It's just really critical thinking. That's really all we're talking about.
[52:42] Daniel Smith: Right. And in some situations, like the one you're describing, the body of evidence that exists is sufficient that even if you come up with an elaborate theory, like you're saying, that other things are being lost, even if that's true, the data from time and exposure through multiple generations suggest that the tradeoffs might still be favorable, and you can get those factors from somewhere else. But there is this cynicism that does much of what you're saying. It rejects A, but what ends up happening is you become susceptible to anything else. It's not just, I'm rejecting A because B is actually true. It's, I reject A, and therefore I'll take whatever else you have to offer, and I'm not going to be the same type of critic of B as I was of A, whatever the reason I gave you for rejecting A. So there's an inconsistency in how people are sometimes applying their… And many of them are asking questions. I think whenever the authorities or the leaders in a scientific field have not done a good job, or have been unwilling, to answer questions in an understandable way, it creates further separation, which is a problem because it causes distrust. So it turns into, well, now I'm not even sure I trust you, so no matter what you say, I'm going to go the opposite way, when in fact the opposite may not be good. You've done a logic trick on yourself that has actually put you in harm's way. That's on the science and communication side, and it sometimes happens unintentionally. People get a perception that, oh, this person has had a lot of education and thinks they're better than me, and I know better, so I'm going to go do X, I'm going to read this. And it is absolutely true.
I grew up in farm country, and I can say without a doubt that I know individuals, some of whom never even finished high school, who I consider to be extremely intelligent. Had they been given the opportunity, they would outshine most of the people I know in research, people with advanced degrees and multiple years of schooling. They're intelligent people. I think the disconnect occurs whenever people forget that: just because I have an education doesn't mean somebody else doesn't understand, and so they treat them in a way that doesn't engage them usefully and productively. And you end up with this response where people become very cynical, very disconnected, very alienated, saying, I don't trust anything you're saying. And it's sad, because a lot of the material could be helpful. I have some family members who will send me things all the time and say, hey, I'd like you to watch this, or I'd like you to read this, because this is really good. And then they'll come back with, what did you think? And I'm like, well, here's what I truly think. Honestly, most of what was being said, about 95% of it, is supported by evidence; 5% of it, probably not. And the problem is that the 5% has the potential to do some real harm. What? Well, then it's good, because 95% of it is true. And I'm like, no, it's not, because you don't know which 5% has the potential to do harm. So how are you going to implement whatever it is they're saying? And they're like, well, you can tell me. And I'm like, I could go through it with you, but you really should be looking at the evidence yourself, and that's the time constraint we all face. Like I told you, I haven't read those papers; I've seen the headlines. But if that were a lifestyle I was really going to intentionally adopt, I would want to know. I would be looking at the evidence.
I would be asking those questions. And unfortunately, I think people do a lot of what you're saying: I don't trust A, but I am going to trust B, even though I don't apply the same level of critical thinking to B as I should. So it's happening, and I wish there were an easy answer. I encourage my colleagues: having people ask questions is a good thing. It means they're interested. And sometimes, and this is another challenge for science, the answer is, I don't know for sure, or we have never tested that, or it's inconclusive, versus, I'm going to give you a dogmatic answer and you can't question me at all because I'm the authority. That becomes a problem. That is never a good place.
[57:28] Mike Gray: Well, we'll leave it there. Those are some words of wisdom, some balance that I think all of us would do well to work toward. Thanks for being with us today, Daniel. Appreciate your time and your expertise.
[57:42] Daniel Smith: It was great to be able to see you. Thank you for the invitation.
[57:49] Mike Gray: A good scientist desires objectivity, which includes a willingness to change his or her mind when confronted with persuasive evidence. In two weeks, we'll tackle another issue that demands discernment—the consumption of beverage alcohol. Within the conservative Christian church, the younger generation has largely swept aside the teetotalism of the past 100 years. At the same time, the medical community has begun to urge caution, fingering alcohol as a carcinogen. In 2024, authorities in Ireland (of all places) decided to place strong warning labels on all alcoholic beverages beginning in 2026. Join me in two weeks as we objectively sort through this important issue.