The process depends both on making careful observations of phenomena and on inventing theories for making sense out of those observations. Change in knowledge is inevitable because new observations may challenge prevailing theories. No matter how well one theory explains a set of observations, it is possible that another theory may fit just as well or better, or may fit a still wider range of observations.
In science, the testing and improving and occasional discarding of theories, whether new or old, go on all the time.
Scientists assume that even if there is no way to secure complete and absolute truth, increasingly accurate approximations can be made to account for the world and how it works. Although scientists reject the notion of attaining absolute truth and accept some uncertainty as part of nature, most scientific knowledge is durable.
The modification of ideas, rather than their outright rejection, is the norm in science, as powerful constructs tend to survive and grow more precise and to become widely accepted. For example, in formulating the theory of relativity, Albert Einstein did not discard the Newtonian laws of motion but rather showed them to be only an approximation of limited application within a more general concept.
The National Aeronautics and Space Administration uses Newtonian mechanics, for instance, in calculating satellite trajectories. Moreover, the growing ability of scientists to make accurate predictions about natural phenomena provides convincing evidence that we really are gaining in our understanding of how the world works.
Continuity and stability are as characteristic of science as change is, and confidence is as prevalent as tentativeness. There are many matters that cannot usefully be examined in a scientific way. In other cases, a scientific approach that may be valid is likely to be rejected as irrelevant by people who hold to certain beliefs such as in miracles, fortune-telling, astrology, and superstition.
Nor do scientists have the means to settle issues concerning good and evil, although they can sometimes contribute to the discussion of such issues by identifying the likely consequences of particular actions, which may be helpful in weighing alternatives.
Fundamentally, the various scientific disciplines are alike in their reliance on evidence, the use of hypotheses and theories, the kinds of logic used, and much more. Nevertheless, scientists differ greatly from one another in what phenomena they investigate and in how they go about their work; in the reliance they place on historical data or on experimental findings and on qualitative or quantitative methods; in their recourse to fundamental principles; and in how much they draw on the findings of other sciences.
Still, the exchange of techniques, information, and concepts goes on all the time among scientists, and there are common understandings among them about what constitutes an investigation that is scientifically valid.
Scientific inquiry is not easily described apart from the context of particular investigations. There simply is no fixed set of steps that scientists always follow, no one path that leads them unerringly to scientific knowledge. There are, however, certain features of science that give it a distinctive character as a mode of inquiry.
Although those features are especially characteristic of the work of professional scientists, everyone can exercise them in thinking scientifically about many matters of interest in everyday life.
Sooner or later, the validity of scientific claims is settled by referring to observations of phenomena. Hence, scientists concentrate on getting accurate data. Such evidence is obtained by observations and measurements taken in situations that range from natural settings such as a forest to completely contrived ones such as the laboratory. To make their observations, scientists use their own senses, instruments such as microscopes that enhance those senses, and instruments that tap characteristics quite different from what humans can sense such as magnetic fields.
Scientists observe passively (earthquakes, bird migrations), make collections (rocks, shells), and actively probe the world (as by boring into the earth's crust or administering experimental medicines). In some circumstances, scientists can control conditions deliberately and precisely to obtain their evidence. They may, for example, control the temperature, change the concentration of chemicals, or choose which organisms mate with which others.
By varying just one condition at a time, they can hope to identify its exclusive effects on what happens, uncomplicated by changes in other conditions. Often, however, control of conditions may be impractical (as in studying stars), or unethical (as in studying people), or likely to distort the natural phenomena (as in studying wild animals in captivity). In such cases, observations have to be made over a sufficiently wide range of naturally occurring conditions to infer what the influence of various factors might be.
Because of this reliance on evidence, great value is placed on the development of better instruments and techniques of observation, and the findings of any one investigator or group are usually checked by others. However much scientists differ in how they work, they tend to agree about the principles of logical reasoning that connect evidence and assumptions with conclusions. Scientists do not work only with data and well-developed theories.
Often, they have only tentative hypotheses about the way things may be. Such hypotheses are widely used in science for choosing what data to pay attention to and what additional data to seek, and for guiding the interpretation of data. In fact, the process of formulating and testing hypotheses is one of the core activities of scientists. To be useful, a hypothesis should suggest what evidence would support it and what evidence would refute it.
A hypothesis that cannot in principle be put to the test of evidence may be interesting, but it is not likely to be scientifically useful. The use of logic and the close examination of evidence are necessary but not usually sufficient for the advancement of science. Scientific concepts do not emerge automatically from data or from any amount of analysis alone. Inventing hypotheses or theories to imagine how the world works and then figuring out how they can be put to the test of reality is as creative as writing poetry, composing music, or designing skyscrapers.
Sometimes discoveries in science are made unexpectedly, even by accident. But knowledge and creative insight are usually required to recognize the meaning of the unexpected. Aspects of data that have been ignored by one scientist may lead to new discoveries by another. Scientists strive to make sense of observations of phenomena by constructing explanations for them that use, or are consistent with, currently accepted scientific principles.
The credibility of scientific theories often comes from their ability to show relationships among phenomena that previously seemed unrelated. The theory of moving continents, for example, has grown in credibility as it has shown relationships among such diverse phenomena as earthquakes, volcanoes, the match between types of fossils on different continents, the shapes of continents, and the contours of the ocean floors.
The essence of science is validation by observation. But it is not enough for scientific theories to fit only the observations that are already known. Theories should also fit additional observations that were not used in formulating the theories in the first place; that is, theories should have predictive power. Demonstrating the predictive power of a theory does not necessarily require the prediction of events in the future.
The predictions may be about evidence from the past that has not yet been found or studied. A theory about the origins of human beings, for example, can be tested by new discoveries of human-like fossil remains. Statistical reasoning also plays a role in deciding what the data show: when it is known that similar data can be produced by factors that have nothing to do with the phenomenon of interest, Monte Carlo simulations, regression analyses of sample data, and a variety of other statistical techniques sometimes provide investigators with their best chance of deciding how seriously to take a putatively illuminating feature of their data.
But statistical techniques are also required for purposes other than causal analysis. To calculate the magnitude of a quantity like the melting point of lead from a scatter of numerical data, investigators throw out outliers, calculate the mean and the standard deviation, etc.
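To make the routine concrete, here is a minimal Python sketch of that kind of calculation. The readings are hypothetical, and the two-standard-deviation cutoff is only one common rule of thumb for flagging outliers, not a prescription.

```python
import statistics

# Hypothetical repeated measurements of the melting point of lead (in °C);
# the accepted value is near 327.5 °C, and one reading is a gross outlier.
readings = [327.3, 327.6, 327.4, 327.8, 327.5, 341.2, 327.2, 327.7]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Discard readings more than two standard deviations from the mean,
# then recompute the estimate and its spread from the surviving data.
kept = [x for x in readings if abs(x - mean) <= 2 * stdev]
estimate = statistics.mean(kept)
spread = statistics.stdev(kept)

print(f"raw mean = {mean:.2f}, raw stdev = {stdev:.2f}")
print(f"after outlier removal: {estimate:.2f} ± {spread:.2f} (n = {len(kept)})")
```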
Regression and other techniques are applied to the results to estimate how far from the mean the magnitude of interest can be expected to fall in the population of interest. The fact that little can be learned from data without causal, statistical, and related argumentation has interesting consequences for received ideas about how the use of observational evidence distinguishes science from pseudoscience, religion, and other non-scientific cognitive endeavors.
First, scientists are not the only ones who use observational evidence to support their claims; astrologers and medical quacks use it too. To find epistemically significant differences, one must carefully consider what sorts of data they use, where it comes from, and how it is employed. The virtue of scientific as opposed to non-scientific theory evaluation depends not only on its reliance on empirical data, but also on how the data are produced, analyzed, and interpreted to draw conclusions against which theories can be evaluated.
Second, data are produced and used in far too many different ways to treat informatively as instances of any single method.
Third, it is usually, if not always, impossible for investigators to draw conclusions to test theories against observational data without explicit or implicit reliance on theoretical resources. Bokulich has helpfully outlined a taxonomy of the various ways in which data can be model-laden to increase their epistemic utility. She focuses on seven categories: data conversion, data correction, data interpolation, data scaling, data fusion, data assimilation, and synthetic data.
Of these categories, conversion and correction are perhaps the most familiar. In more complicated cases, such as processing the arrival times of acoustic signals in seismic reflection measurements to yield values for subsurface depth, data conversion may involve models (ibid.). In this example, models of the composition and geometry of the subsurface are needed in order to account for differences in the speed of sound in different materials.
Bokulich rightly points out that involving models in these ways routinely improves the epistemic uses to which data can be put. Interpolation involves filling in missing data in a patchy data set, under the guidance of models.
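As a toy illustration of interpolation, the sketch below (in Python, with hypothetical numbers) fills gaps in a patchy temperature record. The guiding "model" here is deliberately trivial (temperature is assumed to vary linearly between neighbouring observations), whereas real cases typically recruit physically informed models.

```python
import numpy as np

# Hypothetical daily temperature record (°C); NaN marks days with no reading.
temps = np.array([12.1, 12.4, np.nan, np.nan, 13.0, 13.2, np.nan, 13.5])
days = np.arange(len(temps))

observed = ~np.isnan(temps)

# Fill the gaps under the modelling assumption that temperature varies
# linearly between neighbouring observed days.
filled = np.interp(days, days[observed], temps[observed])

print(filled)  # the NaN entries are replaced by interpolated values
```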
Data are scaled when they have been generated at a particular scale (temporal, spatial, energy) and modeling assumptions are recruited to transform them to apply at another scale. Data fusion combines data collected in different ways or contexts into a joint dataset, as when data from ice cores, tree rings, and the historical logbooks of sea captains are merged into a joint climate dataset. Scientists must take care in combining data of diverse provenance, and model new uncertainties arising from the very amalgamation of datasets (ibid.).
Synthetic data are virtual, or simulated, data; they are not produced by physical interaction with worldly research targets. Bokulich emphasizes the role that simulated data can usefully play in testing and troubleshooting aspects of data processing that are eventually to be deployed on empirical data (ibid.). It can be incredibly useful for developing and stress-testing a data processing pipeline to have fake datasets whose characteristics are already known, having been produced by the researchers themselves and being available for their inspection at will.
When the characteristics of a dataset are known, or indeed can be tailored according to need, the effects of new processing methods can be more readily traced than without.
In this way, researchers can familiarize themselves with the effects of a data processing pipeline, and make adjustments to that pipeline in light of what they learn by feeding fake data through it, before attempting to use that pipeline on actual science data.
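A minimal sketch of that practice, with every number hypothetical: a fake dataset is built around a signal whose location is known in advance, and a stand-in processing pipeline is then judged by how well it recovers that known answer.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic ("fake") data with known characteristics: a Gaussian bump of
# known position and amplitude, buried in noise of known level.
x = np.linspace(0.0, 10.0, 500)
true_center, amplitude, noise_level = 4.0, 1.0, 0.3
clean = amplitude * np.exp(-((x - true_center) ** 2) / 0.5)
fake_data = clean + rng.normal(0.0, noise_level, size=x.size)

# A stand-in processing pipeline: smooth with a moving average,
# then report the location of the peak.
def pipeline(xs, ys, window=25):
    kernel = np.ones(window) / window
    smoothed = np.convolve(ys, kernel, mode="same")
    return xs[np.argmax(smoothed)]

# Because the input was manufactured, the right answer is known, so the
# pipeline's behaviour can be checked before it ever touches real data.
recovered = pipeline(x, fake_data)
print(f"true peak at {true_center:.2f}, pipeline recovers {recovered:.2f}")
```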
Such investigations can be critical to eventually arguing for the credibility of the final empirical results and their appropriate interpretation and use. Data assimilation is perhaps a less widely appreciated aspect of model-based data processing among philosophers of science (Parker being a notable exception). Data assimilation involves balancing the contributions of empirical data and the output of models in an integrated estimate, according to the uncertainties associated with these contributions.
Bokulich argues that the involvement of models in these various aspects of data processing does not necessarily lead to better epistemic outcomes. Done wrong, integrating models and data can introduce artifacts and make the processed data unreliable for the purpose at hand (ibid.). Philosophers have worried that empirical results are laden with values and theoretical commitments.
They have worried about the extent to which human perception itself is distorted by our commitments. They have worried that drawing upon theoretical resources from the very theory to be appraised or its competitors in the generation of empirical results yields vicious circularity or inconsistency.
Do the theory- and value-ladenness of empirical results render them hopelessly parochial? That is, when scientists leave theoretical commitments behind and adopt new ones, must they also relinquish the fruits of the empirical research imbued with their prior commitments? In this section, we discuss these worries and the responses that philosophers have offered to assuage them. If you believe that observation by human sense perception is the objective basis of all scientific knowledge, then you ought to be particularly worried about the potential for human perception to be corrupted by theoretical assumptions, wishful thinking, framing effects, and so on.
Working in the late nineteenth century, Worthington investigated the hydrodynamics of falling fluid droplets and their evolution upon impacting a hard surface. At first, he tried to track the drop dynamics carefully with a strobe light, burning a sequence of images into his own retinas. The images he drew to record what he saw were radially symmetric, with rays of the drop splashes emanating evenly from the center of the impact.
However, when Worthington transitioned from using his eyes and his capacity to draw from memory to using photography, he was shocked to find that the splashes he had been observing were in fact irregular splats (ibid.). Even more curiously, when Worthington returned to his drawings, he found that he had indeed recorded some unsymmetrical splashes. He had evidently dismissed them as uninformative accidents instead of regarding them as revelatory of the phenomenon he was intent on studying (ibid.).
In attempting to document the ideal form of the splashes, a general and regular form, he had subconsciously downplayed the irregularity of individual splashes. The perceptual psychologists Bruner and Postman found that subjects who were briefly shown anomalous playing cards (e.g., a black four of hearts) reported having seen their normal counterparts (e.g., a red four of hearts). For a more up-to-date discussion of theory and conceptual perceptual loading, see Lupyan. By analogy, Kuhn supposed, when observers working in conflicting paradigms look at the same thing, their conceptual limitations should keep them from having the same visual experiences (Kuhn).
It is plausible that observers' expectations influence their reports. Nevertheless, it is possible for scientists to share empirical results, not just across diverse laboratory cultures, but even across serious differences in worldview.
Much as they disagreed about the nature of respiration and combustion, Priestley and Lavoisier gave quantitatively similar reports of how long their mice stayed alive and their candles kept burning in closed bell jars. Priestley taught Lavoisier how to obtain what he took to be measurements of the phlogiston content of an unknown gas. A sample of the gas to be tested is run into a graduated tube filled with water and inverted over a water bath, and the change in the water level in the tube is recorded. Priestley, who thought there was no such thing as oxygen, believed the change in water level indicated how much phlogiston the gas contained.
Lavoisier reported observing the same water levels as Priestley even after he abandoned phlogiston theory and became convinced that changes in water level indicated free oxygen content (Conant). A related issue is that of salience. Kuhn claimed that if Galileo and an Aristotelian physicist had watched the same pendulum experiment, they would not have looked at or attended to the same things. The quantities Galileo measured were salient to him because he treated pendulum swings as constrained circular motions.
The Galilean quantities would be of no interest to an Aristotelian who treats the stone as falling under constraint toward the center of the earth (ibid.). Thus Galileo and the Aristotelian would not have collected the same data. Absent records of Aristotelian pendulum experiments, we can think of this as a thought experiment. Interests change, however. Scientists may eventually come to appreciate the significance of data that had not originally been salient to them in light of new presuppositions.
The moral of these examples is that although paradigms or theoretical commitments sometimes have an epistemically significant influence on what observers perceive or what they attend to, it can be relatively easy to nullify or correct for their effects. When presuppositions cause epistemic damage, investigators are often able to eventually make corrections.
Thus, paradigms and theoretical commitments actually do influence saliency, but their influence is neither inevitable nor irremediable. Thomas Kuhn, Norwood Hanson, Paul Feyerabend, and others cast suspicion on the objectivity of observational evidence in another way, by arguing that one cannot use empirical evidence to test a theory without committing oneself to that very theory. This would be a problem if it led to dogmatism, but assuming the theory to be tested is often benign and even necessary.
For instance, Laymon demonstrates the manner in which the very theory that the Michelson-Morley experiments are considered to test is assumed in the experimental design, and argues that this does not engender deleterious epistemic effects. The experiments were designed to detect a difference in the distance effectively traveled by light in the interferometer's two arms, a difference the aether theory predicted; this difference in path length would show up as a displacement of the interference fringes of light in the interferometer.
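For orientation, the classical aether-theoretic prediction can be sketched in the standard textbook way (this is not Laymon's own presentation); here L is the common arm length, v the presumed speed of the apparatus through the aether, c the speed of light, and λ the wavelength of the light used.

```latex
t_{\parallel} = \frac{L}{c-v} + \frac{L}{c+v} = \frac{2L/c}{1-\beta^{2}}, \qquad
t_{\perp} = \frac{2L/c}{\sqrt{1-\beta^{2}}}, \qquad \beta = v/c

\Delta t = t_{\parallel} - t_{\perp} \approx \frac{L}{c}\,\beta^{2}
\quad \text{(higher-order terms neglected)}

N \approx \frac{c \cdot 2\Delta t}{\lambda} \approx \frac{2L}{\lambda}\,\frac{v^{2}}{c^{2}}
\quad \text{(expected fringe shift on rotating the apparatus by } 90^{\circ})
```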
In particular, the null results of these experiments were taken as evidence against the existence of the aether. Naively, one might suppose that whatever assumptions were made in the calculation of the results of these experiments, it should not be the case that the theory under the gun was assumed nor that its negation was. Although Michelson assumed no contraction in the arms of the interferometer, Laymon argues that he could have assumed contraction, with no practical impact on the results of the experiments.
The predicted fringe shift, calculated from the anticipated difference in the distance traveled by light in the two arms, is the same under either assumption when higher-order terms are neglected.
Thus, in practice, the experimenters could assume either that the contraction thesis was true or that it was false when determining the length of the arms. Either way, the results of the experiment would be the same. Morley and Miller then set out specifically to test the contraction thesis, and still assumed no contraction in determining the length of the arms of their interferometer (ibid.). Epistemological hand-wringing about the use of the very theory to be tested in the generation of the evidence to be used for testing seems to spring primarily from a concern about vicious circularity.
How can we have a genuine trial if the theory in question has been presumed innocent from the outset? While it is true that there would be a serious epistemic problem in a case where the use of the theory to be tested conspired to guarantee that the evidence would turn out to be confirmatory, this is not always the case when theories are invoked in their own testing. Woodward summarizes a tidy case in point.
For any given case, determining whether the theoretical assumptions being made are benign, or instead straitjacket the results it will be possible to obtain, requires investigating the particular relationships between the assumptions and the results in that case. When data production and analysis processes are complicated, this task can get difficult.
But the point is that merely noting the involvement of the theory to be tested in the generation of empirical results does not by itself imply that those results cannot be objectively useful for deciding whether the theory to be tested should be accepted or rejected. Kuhn argued that theoretical commitments exert a strong influence on observation descriptions and on what they are understood to mean (Kuhn; Longino). Observers committed to different theories might all use the same words to report an observation without understanding them in the same way. It is important to bear in mind that observers do not always use declarative sentences to report observational and experimental results.
Instead, they often draw, photograph, make audio recordings, etc. But disagreements about the epistemic import of a graph, picture or other non-sentential bit of data often turn on causal rather than semantical considerations.
Anatomists may have to decide whether a dark spot in a micrograph was caused by a staining artifact or by light reflected from an anatomically significant structure. Physicists may wonder whether a blip in a Geiger counter record reflects the causal influence of the radiation they wanted to monitor, or a surge in ambient radiation.
Chemists may worry about the purity of samples used to obtain data. Such questions are not, and are not well represented as, semantic questions to which semantic theory loading is relevant.
Late 20th-century philosophers may have ignored such cases and exaggerated the influence of semantic theory loading because they thought of theory testing in terms of inferential relations between observation and theoretical sentences. Nevertheless, some empirical results are reported as declarative sentences.
Looking at a patient with red spots and a fever, an investigator might report having seen the spots, or measles symptoms, or a patient with measles. Watching an unknown liquid dripping into a litmus solution, an observer might report seeing a change in color, a liquid with a pH of less than 7, or an acid. The appropriateness of a description of a test outcome depends on how the relevant concepts are operationalized.
What entitles an observer to report having observed a case of measles under one operationalization might entitle her to report no more than measles symptoms, or just red spots, under another. Operationalizations are sometimes treated as definitions of the relevant concepts, but it is more faithful to actual scientific practice to think of them as defeasible rules for the application of a concept, such that both the rules and their applications are subject to revision on the basis of new empirical or theoretical developments.
So understood, to operationalize is to adopt verbal and related practices for the purpose of enabling scientists to do their work. Operationalizations are thus sensitive to, and subject to change on the basis of, findings that influence their usefulness (Feest). Definitional or not, investigators in different research traditions may be trained to report their observations in conformity with conflicting operationalizations.
Thus, instead of training observers to describe what they see in a bubble chamber as a whitish streak or a trail, one might train them to say they see a particle track or even a particle. This may reflect what Kuhn meant by suggesting that some observers might be justified or even required to describe themselves as having seen oxygen, transparent and colorless though it is, or atoms, invisible though they are (Kuhn).
To the contrary, one might object that what one sees should not be confused with what one is trained to say when one sees it, and therefore that talking about seeing a colorless gas or an invisible particle may be nothing more than a picturesque way of talking about what certain operationalizations entitle observers to say.
Some would expect enough agreement to secure the objectivity of observational data. Others would not. Still others would try to supply different standards for objectivity. With regard to sentential observation reports, the influence of semantic theory loading is less pervasive than one might expect. The interpretation of verbal reports often depends on ideas about causal structure rather than the meanings of signs. Rather than worrying about the meaning of words used to describe their observations, scientists are more likely to wonder whether the observers made up or withheld information, whether one or more details were artifacts of observation conditions, whether the specimens were atypical, and so on.
Note that the worry about semantic theory loading extends beyond observation reports of the sort that occupied the logical empiricists and their close intellectual descendants.
Combining results of diverse methods for making proxy measurements of paleoclimate temperatures in an epistemically responsible way requires careful attention to the variety of operationalizations at play. Happily, the remedy for the worry about semantic loading in this broader sense is likely to be the same—investigating the provenance of those results and comparing the variety of factors that have contributed to their causal production.
Kuhn placed too much emphasis on the discontinuity between evidence generated in different paradigms. Even if we accept a broadly Kuhnian picture, according to which paradigms are heterogeneous collections of experimental practices, theoretical principles, problems selected for investigation, approaches to their solution, and so on, it does not follow that empirical results are confined to the paradigm in which they were generated.
As we discussed above, the success that scientists have in repurposing results generated by others for different purposes speaks against the confinement of evidence to its native paradigm. Even when scientists working with radically different core theoretical commitments cannot make the same measurements themselves, with enough contextual information about how each conducts research, it can be possible to construct bridges that span the theoretical divides. One could worry that the intertwining of the theoretical and empirical would open the floodgates to bias in science.
Human cognizing, both historical and present day, is replete with disturbing commitments including intolerance and narrow mindedness of many sorts. If such commitments are integral to a theoretical framework, or endemic to the reasoning of a scientist or scientific community, then they threaten to corrupt the epistemic utility of empirical results generated using their resources.
While proponents of the value-free ideal might admit that the motivation to pursue a theory, or the legal protection of human subjects in permissible experimental methods, involves non-epistemic values, they would contend that such values ought not enter into the constitution of empirical results themselves, nor into the adjudication or justification of scientific theorizing in light of the evidence (see Intemann). As a matter of fact, values do enter into science at a variety of stages.
Like theory-ladenness, values can and sometimes do affect judgments about the salience of certain evidence and the conceptual framing of data. Indeed, on a permissive construal of the nature of theories, values can simply be understood as part of a theoretical framework. Consider, for example, research on the relative safety of home and hospital births. Studies reporting that home births are less safe typically attend to infant and birthing-parent mortality rates—which are low for these subjects whether at home or in hospital—but leave out of consideration rates of c-section and episiotomy, which are both relatively high in hospital settings.
Thus, a value-laden decision about whether a possible outcome counts as a harm worth considering can influence the outcome of the study—in this case tipping the balance towards the conclusion that hospital births are more safe (ibid.). Note that the birth safety case differs from the sort of cases at issue in the philosophical debate about risk and thresholds for acceptance and rejection of hypotheses.
In accepting a hypothesis, a person makes a judgment that the risk of being mistaken is sufficiently low (Rudner). When the consequences of being wrong are deemed grave, the threshold for acceptance may be correspondingly high. Thus, in evaluating the epistemic status of a hypothesis in light of the evidence, a person may have to make a value-based judgment. However, in the birth safety case, the judgment comes into play at an earlier stage, well before the decision to accept or reject the hypothesis is to be made.
The fact that values do sometimes enter into scientific reasoning does not by itself settle the question of whether it would be better if they did not. Anderson, for instance, distinguishes a series of stages of research at which values can enter; in paraphrase: (1) orientation in a field, (2) framing a research question, (3) conceptualizing the target, (4) identifying relevant data, (5) data generation, (6) data analysis, (7) deciding when to cease data analysis, and (8) drawing conclusions (Anderson). Ward presents a streamlined and general taxonomy of four ways in which values relate to choices: as reasons motivating choices, as reasons justifying choices, as causal effectors of choices, or as goods affected by choices.
By investigating the role of values at these particular stages or aspects of research, philosophers of science can offer higher-resolution insights than the bare observation that values are involved in science at all, and can untangle crosstalk between distinct issues. Similarly, fine-grained points can be made about the nature of the values involved in these various contexts. Such clarification is likely important for determining whether the contribution of certain values in a given context is deleterious or salutary, and in what sense.
Consider a laboratory toxicology study in which animals exposed to dioxins are compared to unexposed controls. Douglas discusses researchers who want to determine the threshold for safe exposure. Admitting false positives can be expected to lead to overregulation of the chemical industry, while false negatives yield underregulation and thus pose greater risk to public health.
The decision about where to set the unsafe exposure threshold, that is, the threshold for a statistically significant difference between exposed and control animal populations, involves balancing the acceptability of these two types of errors. That scientists do as a matter of fact sometimes make such decisions is clear. They judge, for instance, whether a specimen slide of a rat liver is tumorous or not, and whether borderline cases should count as benign or malignant (ibid.).
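The trade-off can be made vivid with a small simulation. Everything below is a hypothetical stand-in (the tumour rates, group sizes, and choice of Fisher's exact test are illustrative, not Douglas's example), but it shows how lowering the significance threshold buys fewer false positives at the price of more false negatives.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(seed=1)
n_animals = 50          # animals per group (hypothetical)
control_rate = 0.10     # tumour rate in unexposed controls (hypothetical)
harmful_rate = 0.22     # tumour rate if the chemical really is harmful (hypothetical)

def one_study(exposed_rate):
    """Simulate one exposed-vs-control study; return the Fisher exact p-value."""
    exposed = rng.binomial(n_animals, exposed_rate)
    control = rng.binomial(n_animals, control_rate)
    table = [[exposed, n_animals - exposed],
             [control, n_animals - control]]
    return fisher_exact(table)[1]

n_trials = 2000
null_p = np.array([one_study(control_rate) for _ in range(n_trials)])   # chemical harmless
harm_p = np.array([one_study(harmful_rate) for _ in range(n_trials)])   # chemical harmful

for alpha in (0.10, 0.05, 0.01):
    fp = np.mean(null_p < alpha)    # harmless chemical flagged as unsafe
    fn = np.mean(harm_p >= alpha)   # harmful chemical passed over as safe
    print(f"alpha = {alpha:.2f}: false-positive rate ~ {fp:.2f}, false-negative rate ~ {fn:.2f}")
```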
Moreover, in such cases, it is not clear that the responsibility of making such decisions could be offloaded to non-scientists.
Many philosophers accept that values can contribute to the generation of empirical results without spoiling their epistemic utility. Sometimes, however, the background assumptions in play include theoretical commitments that lead experimentalists to produce non-illuminating or misleading evidence. In other cases they may lead experimentalists to ignore, or even fail to produce, useful evidence.
For example, in order to obtain data on orgasms in female stumptail macaques, one researcher wired up females to produce radio records of orgasmic muscle contractions, heart rate increases, etc. Although female stumptail orgasms occurring during sex with males are atypical, the experimental design was driven by the assumption that what makes features of female sexuality worth studying is their contribution to reproduction (ibid.). This assumption influenced experimental design in such a way as to preclude learning about the full range of female stumptail orgasms.
Anderson presents an influential analysis of the role of values in research on divorce. Much of this research has proceeded on the assumption that divorce is essentially a harm to the family. This background assumption, which is rooted in a normative appraisal of a certain model of good family life, could lead social science researchers to restrict the questions with which they survey their research subjects to ones about the negative impacts of divorce on their lives, thereby curtailing the possibility of discovering ways that divorce may have actually made the ex-spouses' lives better (ibid.).
This is an example of an epistemically detrimental influence that values can have on the nature of the results that research ultimately yields. In this case, the values in play biased the research outcomes so as to preclude recognition of countervailing evidence.
Fortunately, such dogmatism is not ubiquitous and when it occurs it can often be corrected eventually. Above we noted that the mere involvement of the theory to be tested in the generation of an empirical result does not automatically yield vicious circularity—it depends on how the theory is involved.
Furthermore, even if the assumptions initially made in the generation of empirical results are incorrect, future scientists will have opportunities to reassess those assumptions in light of new information and techniques. Thus, as long as scientists continue their work there need be no time at which the epistemic value of an empirical result can be established once and for all.
This should come as no surprise to anyone who is aware that science is fallible, but it is no grounds for skepticism. It can be perfectly reasonable to trust the evidence available at present even though it is logically possible for epistemic troubles to arise in the future.
A similar point can be made regarding values (although cf. Yap). Moreover, while the inclusion of values in the generation of an empirical result can sometimes be epistemically bad, values properly deployed can also be harmless, or even epistemically helpful.
As in the cases of research on female stumptail macaque orgasms and the effects of divorce, certain values can sometimes serve to illuminate the way in which other epistemically problematic assumptions have hindered potential scientific insight.
By valuing knowledge about female sexuality beyond its role in reproduction, scientists can recognize the narrowness of an approach that only conceives of female sexuality insofar as it relates to reproduction.
By questioning the absolute value of one traditional ideal for flourishing families, researchers can garner evidence that might end up destabilizing the empirical foundation supporting that ideal.
Empirical results are most obviously put to epistemic work in their contexts of origin. Scientists conceive of empirical research, collect and analyze the relevant data, and then bring the results to bear on the theoretical issues that inspired the research in the first place.
However, philosophers have also discussed ways in which empirical results are transferred out of their native contexts and applied in diverse and sometimes unexpected ways see Leonelli and Tempini Cases of reuse, or repurposing of empirical results in different epistemic contexts raise several interesting issues for philosophers of science.
For one, such cases challenge the assumption that theory- and value-ladenness confines the epistemic utility of empirical results to a particular conceptual framework. Ancient Babylonian eclipse records inscribed on cuneiform tablets have been used to generate constraints on contemporary geophysical theorizing about the causes of the lengthening of the day on Earth (Stephenson, Morrison, and Hohenkerk). This is surprising since the ancient observations were originally recorded for the purpose of making astrological prognostications.
Nevertheless, with enough background information, the records as inscribed can be translated, the layers of assumptions baked into their presentation peeled back, and the results repurposed using resources of the contemporary epistemic context, the likes of which the Babylonians could have hardly dreamed.
Furthermore, the potential for reuse and repurposing feeds back on the methodological norms of data production and handling. In light of the difficulty of reusing or repurposing data without sufficient background information about the original context, Goodman et al. recommend that researchers share, alongside the data themselves, detailed information about how those data were produced and processed.
For example, general relativity doesn't mesh with what we know about the interactions between extremely tiny particles, which the theory of quantum mechanics addresses. Will physicists develop a new theory that simultaneously helps us understand the interactions between the very large and the very small? Time will tell, but they are certainly working on it! Classical mechanics, by the way, is still what engineers use to design airplanes and bridges, since it is so accurate in explaining how large (i.e., macroscopic) objects behave.
Nevertheless, the theories described above did change. A well-supported theory may be accepted by scientists, even if the theory has some problems. In fact, few theories fit our observations of the world perfectly. There is usually some anomalous observation that doesn't seem to fit with our current understanding.
Scientists assume that by working on such anomalies, they'll either disentangle them to see how they fit with the current theory or contribute to a new theory.
And eventually that does happen: a new or modified theory is proposed that explains everything that the old theory explained plus other observations that didn't quite fit with the old theory.
When that new or modified theory is proposed to the scientific community, over a period of time (it might take years), scientists come to understand the new theory, see why it is a superior explanation to the old theory, and eventually accept the new theory. However, occasionally, special interest groups try to misrepresent a non-scientific idea, which meets none of these standards, as inspiring scientific controversy.