The idea that science is a “value-free” enterprise is deeply entrenched. “Under standard conditions, water boils at 100°C.” This and countless other facts about nature are mind-independent; that is, they do not depend on what you or I think or feel. And the procedures by which we discover such facts are available to and respected by a diverse public, man or woman, black or white, rich or poor. It may seem, then, that the activities and results of science are inherently insulated from racism, sexism, political agendas, financial interests, and other value-laden biases that permeate the larger social context. Some even vigorously insist on keeping values out of science.
Do you agree? Many philosophers of science do not. Indeed, the idea that science is or should aim to be value free — even as an ideal — has been widely challenged in recent decades, with some arguing that values in fact influence scientific practice in all manner of ways. I would go so far as to say that this is a feature of science that we cannot afford to ignore.
Getting Beyond the Value-Free Ideal
Value judgments are clearly involved in decisions about the application of scientific findings, about how we ought to use the insights of scientists. And almost everyone acknowledges that values also influence certain aspects of scientific practice itself. For instance, values influence the scientist’s choices about whether to inquire into nature in the first place, which phenomena are significant enough to study, what methods to adopt, and how limited resources should be allocated in conducting research. Furthermore, there are questions about moral responsibilities in practicing science with integrity, about the ethical constraints we impose on research, and about how the personal biases and background assumptions of scientists can affect their work in subtle ways. Values might also influence the choices scientists make about how to frame their research questions, which concepts to employ in their theories, and how to describe their observations.
If we accept that values play at least some role in scientific practice, a case can be made for not treating science as if it were a value-free enterprise. Rather than trying to eliminate values from science, we should attend self-consciously to the particular values that exert an influence on scientific practice and facilitate public discussion about them. Being mindful of the potential influences of social, political, moral, and religious values, we should squarely face questions about the most effective ways both to critique bad or illegitimate influences and to decide which values should play a role at those points where value judgments are appropriate or even necessary.
One might argue that, far from compromising the objectivity of science, recognizing the many ways in which aspects of scientific practice are value-laden is an important step to securing it. Ensuring that the empirically grounded methods and the many other processes on which science relies are subject to intense public scrutiny from a diverse range of interests is at least part of what allows it to yield results that are more, rather than less, objective than other types of inquiry.
The Nature of Scientific Reasoning
What about the content of science? If the reasoning involved in formulating scientific conclusions or in appraising scientific theories — deciding which theory is correct — were even partially shaped by values, wouldn’t we end up with lousy science, with subjective opinions rather than scientific facts?
According to an influential picture of science, scientists put forward theories with testable predictions and then attempt to refute — or “falsify” — them directly through experiments. Those theories that withstand rigorous attempts at falsification are the ones we should accept. This process, articulated by philosopher Karl Popper, would seem to follow a clear-cut logical pattern, what philosophers call “modus tollens,” a rule of logic that can be represented like this: “If H, then O; Not O; therefore, Not H.” Consider, for example, some specific prediction entailed by Maxwell’s equations for electricity and magnetism: “If Maxwell’s equations are correct, then a test charge will respond in such and such a manner in the presence of this electric field.” If our observations then fail to match the prediction, it might seem that Maxwell’s equations are in trouble. Even more to the point, it might seem that pure logic and observational evidence together can decisively falsify a scientific theory, so that values play no necessary role in judging which scientific theories we should accept.
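Rendered formally, the schema looks like this (a schematic summary of the pattern just described, with H standing for the hypothesis under test and O for the predicted observation):

```latex
% Modus tollens, applied to the Maxwell example:
%   H: Maxwell's equations are correct.
%   O: the test charge responds in such and such a manner.
\[ (H \rightarrow O) \wedge \neg O \;\therefore\; \neg H \]
```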
However, cases of real-world experimental testing are typically more complex than this, and not only because scientists often continue to work with, rather than discard, theories that face contravening evidence. The reason is that scientific hypotheses are not tested in isolation from other theories and background assumptions. When a scientist tests whether water boils at 100°C, say, she must make a number of assumptions, many of which remain implicit. For example, in addition to assuming that her laboratory conditions are sufficiently stable and standardized, that the sample water is sufficiently pure and that the quantity of air dissolved in it falls in a specified range, and that her instruments for measuring temperature and air pressure are reliable, she also assumes that the theories we draw upon to make these scientific claims and explain these tools — such as thermodynamics and physical chemistry, in this case — are correct. To return to the logical characterization above, the relevant antecedent is not “H” by itself, but rather “H” together with a host of other, “auxiliary” assumptions. (Logically, we would represent this not by “If H, then O,” but by “If H and A1 … and An, then O.”) Thus, when a scientific theory yields a prediction that conflicts with observations — say, the water does not boil at 100°C — all the scientist can deduce (assuming the observations are reliable and the interpretation of the evidence is satisfactory) is that something is amiss in that total package of theory and background assumptions. In other words, something is wrong; but the observation does not tell the scientist precisely what. Is it her hypothesis or one of her many assumptions? Does water actually boil at some other temperature? Or is there perhaps a degree of indefiniteness in the phenomenon we are trying to characterize? Or were her laboratory conditions inadequate? Or, less plausibly, is the theory of thermodynamics wrong?
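In the same notation, the point can be put schematically like this (a standard rendering of the Duhem-style argument, not a claim about any particular experiment):

```latex
% Testing always involves auxiliary assumptions A_1, ..., A_n
% (a pure sample, reliable instruments, stable laboratory
% conditions, correct background theory, and so on):
\[ ((H \wedge A_1 \wedge \cdots \wedge A_n) \rightarrow O) \wedge \neg O
   \;\therefore\; \neg (H \wedge A_1 \wedge \cdots \wedge A_n) \]
% The conclusion indicts only the total package; logic and
% observation alone do not say whether H or some A_i is the
% faulty member.
```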
In principle, there might be many ways to adjust the theoretical package to restore the fit with the evidence. But because, strictly speaking, observation and logic alone are not enough to tell us what is wrong, this task falls to the good sense of the practicing scientist. For instance, it would not make good sense to throw out the theory of thermodynamics based simply on the observation that the water in the scientist’s lab didn’t boil at 100°C. The important point is that in making this decision about which part of the theoretical package to reject, the scientist is making a kind of value judgment: “It is more plausible that my laboratory conditions were inadequate or that my own hypothesis is false than that thermodynamics is wrong.”
In practice, it is often easy enough to see where the problem lies, and the roles that values play in theorizing will be different in some areas of research than in others. We shouldn’t expect values to exert the same level of influence in every branch of inquiry or at every level of theorization. For example, the way that empirical data constrain the role that values play in appraising a given theory may be different in theoretical physics or molecular biology compared to the social or environmental sciences. But the scientist might well find herself faced with competing theories, each of which is just as compatible with all the available observational data as the other. In such a situation, the choice between rival theories may remain unsettled — whether only temporarily and in particular cases, or perhaps even in the long run with respect to theories of the world at large. In the choice between competing theories, value judgments will play a greater role than we often realize in determining which scientific theories we accept as true.
Which Values?
What considerations are relevant to the “good sense” by which scientists choose between theories in such circumstances? As our discussion has already supposed, adequacy to observations — what philosophers call empirical adequacy — is an essential criterion when judging the success of scientific theories. Indeed, one might even argue that the construction of theories that match all of the observable phenomena is the chief aim of natural science. Other criteria that are often called upon in the choice between theories include internal consistency, fit with other well-established theories, scope, explanatory power, and a track record of successful new predictions. “Don’t forget simplicity!” fans of Ockham’s razor will remind us.
But the degree to which a given theory fits these rather vague criteria is often itself a matter of dispute. What counts as simple, and to whom? Moreover, these values can stand in tension: What if the simplest theory is the least adequate to empirical observations? This raises questions about the relative weight to assign to these criteria. Should we value explanatory power over simplicity or predictive power? Here, again, it appears that certain value judgments will play a significant role in selecting one theory as overall better than another.
A number of questions lurk in these deeper philosophical waters. In ordinary usage, to “value” something is to take it to have importance or worth — to care about it. Leave aside questions about the clarity of this notion, or about whether the common distinction between facts and values is tenable. How are values to be justified as criteria when formulating and appraising scientific theories? And who is to say that some values are permitted to play a role but not others? And if appealing to values is permitted in science, why not also appeal to those that fit with our liberal democratic sensibilities, or other social, moral, and religious values, as criteria for theory choice? Is this not the path to an “anything-goes” relativism that would undermine the very objectivity that makes science a paradigm of rational inquiry?
One way to make sense of the idea that criteria such as simplicity, predictive power, and internal consistency play a role in science is to consider the role they play in helping us to achieve our goals. For instance, one reason for valuing the internal consistency of scientific theories is that, in science, we are aiming at truth; that is, we are trying to come up with true theories, and a logically incoherent theory cannot possibly be true. Thus “values” such as internal consistency or a track record of successful predictions might be included among our criteria for theory choice because we have learned over time that theories that fail to exhibit these properties are less likely to be true.
Some philosophers see a contrast between more truth-oriented values — what they call “epistemic values” — and so-called “non-epistemic values,” for example the interests of social justice. Precisely which values are to be regarded as epistemic is a matter of ongoing dispute, as are questions about how both kinds of values should or should not influence theory appraisal. Is there any reason to think, for example, that the fact that a theory conflicts with the interests of social justice makes it less likely to be true? Or, are simple scientific theories really more likely to be true than complex theories, or are simple theories just more useful, or easier to revise when they fail to fit the data?
Which kinds of values are relevant to theory choice will depend on what sort of a choice we are talking about. Is the choice about whether to accept or reject a scientific theory best seen as a question about what to think or how to act? If our aim is to form an opinion concerning the likely truth or falsity of a theory, non-epistemic values will not be directly relevant. For instance, if we are aiming for truth, we might come to believe a theory because of its explanatory power and fit with the available data, even though it appears to conflict with the interests of social justice.
But the distinction between “what to think” and “how to act” is not always so clear either. Most of us are good “fallibilists” these days, allowing that even our best scientific theories are, in principle, open to revision. How much evidence is sufficient for flat-out acceptance? Practical consequences and non-epistemic values may be relevant to where we set this threshold. While in some cases it won’t matter much if scientists make a mistake, sometimes the risks of being wrong are quite considerable. For example, even if we are ninety-five percent confident that a new chemical is not carcinogenic, we might, given the potentially serious moral and practical consequences of being mistaken in this case, think it wise to gather more evidence and to hold ourselves to a higher standard before classifying it as safe, particularly if we are aware that this classification will likely lead to its approval for use as a widely distributed food preservative.
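To see how the stakes can move the evidential threshold, consider a minimal decision-theoretic sketch; the symbols and the illustrative numbers here are mine, chosen only for the sake of the example:

```latex
% Let p = our probability that the chemical is not carcinogenic,
%     L = the loss incurred if we wrongly classify a carcinogen
%         as safe, and
%     c = the cost of withholding judgment and gathering more
%         evidence instead.
% Outright acceptance of "not carcinogenic" is reasonable only
% when the expected loss of being wrong stays below the cost of
% waiting:
\[ (1 - p)\,L < c \quad \Longleftrightarrow \quad p > 1 - \frac{c}{L} \]
% Hypothetical numbers: with L = 1000 and c = 10, we would need
% p > 0.99, so ninety-five percent confidence falls short; the
% higher the stakes L, the higher the required threshold.
```

Nothing in the inequality itself dictates the values of L or c; fixing them is precisely the kind of non-epistemic value judgment at issue.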
One solution to these problems is to argue that scientists should confine their judgments about theories to the probability that they are true — that is, to how likely they are to be true given the evidence — and should leave aside questions about practical consequences. Of course, when deciding how to act, we generally do also consider the possible outcomes and their utility. In deciding whether to accept or reject a scientific theory, then, we are well-advised to take into account what the potential consequences would be of getting things right and also of being mistaken. But this is to shift away from the theoretical aspect of science toward the realm of action. Thus, according to this view, scientists should merely assign probabilities to their theories, leaving non-epistemic values to the rest of us when deciding which theories to take as the basis for action.
Science and Democracy
But can judgments about the likely truth of scientific theories be so cleanly separated from decisions about the acceptance or rejection of those theories? And even if they can, does the act of acceptance or rejection really take us outside the proper domain of science? We can imagine, for example, that assigning a very high likelihood to a theory already presupposes that this theory should be accepted as the basis for action. Moreover, one might wonder whether the assigning of probabilities to scientific theories really is so free of, or shielded from, the prior influence of non-epistemic values. After all, even if non-epistemic values are excluded from playing a direct role in assigning probabilities to theories, might not earlier value-laden choices — for example about what areas of research are most important for helping to solve certain social problems — have downstream consequences for what evidence is available? Or might these prior value-laden choices also have consequences for the range of available theories between which we are making comparative evaluations?
Such considerations have led some to reject the idea that it is the responsibility of scientists only to inform the public about how probable a theory is, given the available evidence, leaving policymaking to citizens or their elected representatives. Instead of this proposed division of labor, these scholars argue that scientific experts are citizens themselves, and since they are often best attuned to the relevant social and moral considerations, they should take a more active role in offering policy recommendations as part of their civic duty.
If we chart the latter course, we benefit from having scientists serve in more significant advisory roles, but we risk embroiling science in the worst manifestations of our political disagreements and disenfranchising non-scientific voices. Might there be a way to navigate this path in a democratic society, learning from scientists while also empowering ordinary citizens to participate in serious conversations about the role of values in science and related matters of moral concern?
Discussion Questions:
- What concrete examples come to mind of values influencing scientific practice, for good or for ill?
- If values play a role in scientific practice, what does this mean for the objectivity of science? Are we better off if we aspire to the ideal of value-free science?
- Can scientists avoid making value judgments when appraising scientific theories? Does considering only the probability of a theory, given the evidence, help in this regard?
- What sorts of ethical and social obligations do experts have in communicating scientific results? Should we ask them to “just stick to the facts” and report only how well supported current theories are? Or are we better off asking scientists to take more responsibility in advocating for particular policies?
- How do we discern when to properly defer to experts and when experts are making overly confident pronouncements on matters outside of their area of expertise?
Discussion Summary
In our discussion, readers raised a number of interesting questions about the place of values in science as well as in other disciplines, such as philosophy, and about the role that non-scientists should play in our public debates about these issues. A few key points I noted in my responses:
- A healthy respect for science and mathematics need not prevent us from seeing the value of philosophy, which sometimes begins with the Socratic insight that matters we once took to be obvious often, on reflection, turn out to be more complex than they initially appeared.
- From Galileo’s day to our own, it’s been clear that we can do a disservice to science by failing to acknowledge the fallibility of our methods, at least in principle.
- There is a need to have structures in place that enable us to recognize areas of scientific activity where values are playing significant roles and ways of identifying which values are operating.
- Perhaps because I tend to take it for granted that the credibility of good science does not stand in much need of defense, I think that being more rather than less transparent about the role of values in science contributes to the credibility of science. The more pressing threat to scientific credibility, it seems to me, is when we pretend that science is more value-free than it is.
Superb article. I have been in basic molecular biology research and seen all of this. But I have not ever seen it so well and accurately described.
There are tremendous forces at work that create an incredible amount of bias. Some scientific questions are almost entirely neglected and promising avenues closed down because of political correctness. Another powerful aspect of our sin nature that seems to have strong effects on the subjectivity of science is the need for recognition and success (pride and the desire for acceptance). This might be the most powerful force that overtly or subtly, even subconsciously, affects outcomes and distorts (or manipulates) statistics. Other potent factors that exert pressure to get the ‘right results’ are the hugely wasteful and dysfunctional grant-awarding process and the academic tenure process. As I said, the need for politically correct research and results is rampant (work on global warming is a classic example).
Thanks William! I agree that the desire for recognition and various kinds of influence on funding processes, alongside a host of other internal and external factors, can distort aspects of scientific practice and judgment in all sorts of ways. I would add what I take to be the uncontroversial point that some ways of organizing scientific practice, such as appropriate recognition of distinguished achievements or placing moral and legal constraints on the testing of human subjects, can also channel such impulses in tremendously productive ways.
In his “History of Western Philosophy,” Bertrand Russell said “not much of anything happened until science showed up.” Look at how math and science are used to study cause-and-effect relations and we can pretty much skip the discussion of all other topics in philosophy. Philosophizing about the obvious is a waste of time. Math and science are philosophy. It’s obvious that science is value laden.
Tom, although I think I understand the sentiment behind your comment, I couldn’t find your quotation in my copy of Russell’s A History of Western Philosophy. However, I did notice that Russell says in the introduction that “Science tells us what we can know, but what we can know is little, and if we forget how much we cannot know we become insensitive to many things of very great importance.” He also recognizes and appreciates the fact that we encounter mathematics and various inquiries into nature in ancient Egypt, Babylon, and Greece. Anyway, thanks for prompting me to look back at Russell’s book. His interest in clarity and rigor, combined with a willingness to reflect carefully on the big questions of his time, was one inspiration for my interest in philosophy and I’ve long thought of him as a kind of mentor. As I see it, a healthy respect for science and mathematics need not prevent us from seeing the value of philosophy, which sometimes begins with the Socratic insight that matters we once took to be obvious often, on reflection, turn out to be more complex than they initially appeared.
In short, isn’t this why most scientific teachings are theories versus laws? And isn’t it a problem when non-scientific folk fail to understand exactly how much data is behind something like the ‘theory’ of evolution?
Yes, evolutionary biology is an extremely well-developed scientific field, as is environmental science, and it is regrettable that we have not been more successful in conveying this to the public, particularly in the US. I’m interested in thinking constructively about ways that science educators and communicators can more effectively convey the evidential status of scientific information. From Galileo’s day to our own, it’s been clear that we can do a disservice to science by failing to acknowledge the fallibility of our methods, at least in principle. In using the term “theory,” for example, we do well to remind folks that many of our latest and best theories are supported by a great deal of evidence, even if something short of “proof” is all that can be claimed. Don Howard strikes an admirable chord regarding these issues in his blog post on “How to Talk about Science to the Public — 2. Speak Honestly about Uncertainty,” which is also relevant to issues raised in my response to Dr. Kessler below.
Science depends on how accurately one can measure these values.
I’m not quite sure what sort of measure you have in mind here, but perhaps some of what I say in response to Stephanie below will also be relevant. It seems like we could call for a scientific study of any empirical phenomenon, although some topics are more easily amenable to scientific investigation than others. Maybe what you are getting at concerns the need to have structures in place that enable us to recognize areas of scientific activity where values are playing significant roles and ways of identifying which values are operating? If so, I think that a lot of philosophers of science would agree about the importance of addressing just these sorts of issues.
Could the converse be true, that is, could scientific theories play a role in inquiry about values? For instance, if a scientific theory tells us that the emission of greenhouse gases will cause flooding and drought for our descendants in a hundred years, or that smoking causes cancer, do these theories only supplement pre-existing values (such as that floods are bad and the interests of people not yet alive matter, or that cancer is bad) in a kind of value + belief = decision equation? Or might these theories also shape our other values, such as one’s position on the justice of government intervention in the economy, if government intervention seems to be the most effective way of addressing these problems?
Good questions, Stephanie! Here are a few thoughts. First, one would hope that our experiences and what we learn about the world can and often do shape our values. To run with your example, suppose someone who values limited government also, on consideration, values the well-being of future generations. Surely bringing to light information about how likely particular outcomes (e.g., severe environmental damage) are, given certain courses of action (e.g., no government regulation in a particular industry) might lead an open-minded person to make all sorts of adjustments in the relative weight she gives to these values (e.g., be more open to government oversight of the industry) and might well lead her to change her mind on policy issues. Second, it seems to me that we can understand those sorts of changes independently of whatever we make of the prospects for a scientific study of values. Your example of smoking also reminds us that these changes in our values aren’t forced upon us — since many people choose to smoke while fully aware of its potentially harmful consequences. Third, even if, as Hume maintains, we cannot derive an ought from an is or values from facts, there are nevertheless facts about what you and I value. Given our goals, say, of keeping our lungs healthy for years to come, or of protecting the environment for future generations, there are objectively better and worse ways of trying to realize those goals.
If values play a role in scientific inquiry, does that give non-scientists more of a role in criticizing scientific theories? For instance, could moral philosophers or other scholars in the humanities criticize the values implicit in a scientific theory and thereby undermine its credibility?
I think that I see your concern Dr. Kessler. But, perhaps because I tend to take it for granted that the credibility of good science does not stand in much need of defense, I think that being more rather than less transparent about the role of values in science contributes to the credibility of science. If socially relevant values are playing a significant role in areas of scientific practice and application that are of public interest, then — yes — ordinary folk should have a say. There are some notable cases where, for example, feminist critiques of biological theories/models have resulted not just in different science but in better science. But it strikes me that influencing not just the practice and application, but the content of science substantially in that sort of way will most often require both mastery of the relevant science and careful reflection on the role that values might play. The more pressing threat to scientific credibility, it seems to me, is when we pretend that science is more value-free than it is. Don Howard has a great blog post about this entitled “How to Talk about Science to the Public — 2. Speak Honestly about Uncertainty.”
Doesn’t it depend on which science (or sciences) we’re talking about? I might well concede that climate change modeling, say, involves value judgments of the sort you describe but still think that quantum or relativistic physics does not (unless we’re just talking about “epistemic” values — a less controversial question).
I think that we agree here, TM. As I affirmed: “We shouldn’t expect values to exert the same level of influence in every branch of inquiry or at every level of theorization. For example, the way that empirical data constrain the role that values play in appraising a given theory may be different in theoretical physics or molecular biology compared to the social or environmental sciences.”
Brilliant piece! Reminds me of a recent Nature Physics commentary “Science needs a reason to be trusted”. I appreciate the reminder of the effect of values on scientific inquiry and possibly on scientific integrity. The challenge I think will be effectively addressing the effect of values and using them. Thanks again!