The history of science is an ancient pursuit, but a relatively young discipline. From Aristotle through the early nineteenth century, practitioners of one or another branch of knowledge have variously used the history of their field to argue for its dignity and importance, introduce it to beginners, situate it within a broader cultural milieu, summarize the literature to date, position themselves in relationship to that literature, praise and blame predecessors, give evidence of progress, extrapolate a program for future research, and draw lessons concerning the nature of knowledge and the conditions for its flourishing. In works like Joseph Priestley's The History and Present State of Electricity (1767) or Georges Cuvier's Rapport historique sur les progrès des sciences naturelles depuis 1789 (1808), the history was inseparable from the science. Particularly in fields dominated by empirical research, the judicious sifting and ordering of past results was a precondition for coherence and a means for achieving consensus on what was reliably known and where the major challenges for future research lay. These functions are preserved today in the scientific review article and the posing of key ‘problems’ (e.g., the celebrated Hilbert problems in mathematics), and practicing scientists occasionally still appeal to the history of their field for present guidance, especially in times of crisis (Graham et al., 1983). By the mid-nineteenth century, however, histories of science had become distinct from scientific publications, although they were still written primarily by scientists, including prominent figures such as William Whewell, Marcellin Berthelot, Ernst Mach, and Pierre Duhem. 
Their histories often criticized the current state of science by establishing the genealogy of a controversial hypothesis (e.g., atomism), analyzing the origins of a suspect concept (e.g., absolute space) for hidden flaws, or pleading the superiority of one approach to science over another (e.g., Kantian ideas over Comtean facts). By 1900, histories of science had become a genre distinct from science, but they were still motivated by, and deeply engaged with, contemporary scientific developments (Laudan, 1993).

History of science coalesced only gradually as a recognizable discipline in the twentieth century, with its own distinctive program of training, institutions (journals, professional societies, university positions), and scholarly standards. Spurred by the organizational efforts of such scholars as Paul Tannery in France and Karl Sudhoff in Germany, there was a spurt of professionalizing activity around 1900, when the first international congress on the subject was held in Paris. The German Gesellschaft für Geschichte der Medizin und der Naturwissenschaften was established in 1901; the first chairs in the history of science also appeared at European universities around this time (Kragh, 1987). The Belgian historian George Sarton (Pyenson and Verbruggen, 2009), who emigrated to the USA after World War I, brought with him the journal Isis (est. 1912), which became the official organ of the North American History of Science Society after its establishment in 1924 and remains the specialist journal with the largest circulation. Other journals dedicated to the history of science (including the history of the human sciences, which now boasts several specialist journals and societies) in all its aspects have since proliferated in many languages, to which the entries in the annual installments of the Isis Current Bibliography provide at least a partial guide.

Although Sarton's Comtean view of the history of science as a saga of intellectual and moral progress that should evaluate the past in light of present scientific understanding (Sarton, 1948 [1927]) has left little trace in current historiography (Dear, 2009), his tireless attempts to institutionalize and organize the discipline in the form of university departments, specialized journals, and research tools like the Isis Current Bibliography (earlier the Critical Bibliography; History of Science Society, 1967) and the Dictionary of Scientific Biography (Gillispie, 1981) had a lasting impact on the field (Thackray and Merton, 1972; Holton, 2009). During the 1950s, graduate programs dedicated to the history of science (sometimes in conjunction with the philosophy or sociology of science) multiplied; by 1990, there existed almost 200 degree-granting programs in 25 countries.

By no means is all history of science done under the auspices of university departments or published in journals expressly devoted to the subject. The disciplinary affinities of the history of science are many and mutable, with distinctive national differences. Until the 1970s, the primary connection, both in Europe and North America, was probably with the sciences themselves. This connection was rooted deep in the origins of the field and sustained by the continuing importance of some form of the history of science in the intellectual and social initiation of young scientists, as well as by the recruitment of historians of science from the ranks of those with scientific training. In the UK and North America, history largely replaced philosophy during the 1980s and 1990s as the discipline to which historians of science felt most akin (and within which the majority of them found employment), although interactions with sociology and, more recently, anthropology have also exerted a powerful influence, especially in connection with programs in science studies. In Western Europe, philosophy has remained a significant framework for the history of science, in diverse forms ranging from neo-Kantianism to logical empiricism, though science and technology studies have also been powerful influences since the 1980s. In Eastern Europe, Russia, Taiwan, and China, the history of science, sometimes with a national focus, has been cultivated within state-sponsored academies of science; in Latin America, Africa, and India, science studies, postcolonial studies, and the history of medicine have often been the loci of academic teaching and research. In addition to its disciplinary polyvalence with respect to choice of problems and methods, the history of science is closely linked by its subject matter to the histories of medicine and technology, albeit to differing degrees depending on the historical period in question.
The phrase ‘science and technology studies’ bears witness to these criss-crossing ties to other disciplines, serving as an abbreviation for the conglomerate ‘history, philosophy, sociology, and anthropology of science, medicine, and technology,’ which, however cumbersome, accurately reflects the ecumenical perspective of many historians of science.

Transfer of Learning

Robert E. Haskell, in Encyclopedia of Applied Psychology, 2004

7.3 Transfer in Science

The history of science and invention is replete with those who were skilled at transfer. For example, Darwin transferred the idea of farmers' selective (i.e., artificial) breeding of animals to his development of the principle of natural selection, in which natural processes, rather than the farmer, did the selecting of heritable traits. Physicist Louis de Broglie noticed that the mathematical equations another well-known physicist, Niels Bohr, had used to describe the orbits of an electron were the same equations used to describe the vibrating waves of a violin string. With this transfer, de Broglie revolutionized atomic physics and laid the foundations of quantum mechanics.

As these two examples show, many advances in science rest on what, after the fact, seems to be a simple "it's like …" type of transfer. These examples are included in this section because theoretical thinking and a complex knowledge base were required before such higher-order transfers could be made.

Ethical Issues in Artificial Intelligence

Richard O. Mason, in Encyclopedia of Information Systems, 2003

III.D. Moral Implications of Using AI as a Tool or Instrument

The history of technology is the story of humanity's efforts to control its environment for its own benefit by creating tools. Tools are artifacts constructed to help a human being solve a problem. Tools thus amplify human behavior, but they are morally malleable: inherently, they are neither good nor evil, and their social value depends on how those who employ them use them. Put to use as a tool, a technology both shapes its users, as subjects, and affects other parties, in its role as an agent. That is, tools serve as a means to an end.

Computers have often been treated as tools. In a 1984 article, psychologist Donald Norman, for example, argues that "computers are tools, and should be treated as such; they are neither monsters nor savants, simply tools, in the same category as the printing press, the automobile, and the telephone." Historically, computers were developed only to serve people. For example, Charles Babbage (circa 1821) intended his Difference Engine to calculate accurate tables of mathematical results, such as logarithms, for the people who needed them. The engine was to be a tool. The trend continues: circa 2000, more than 100 million people were using computers as aids. These workers and citizens from all walks of life use the technology for all sorts of personal and professional tasks. Many of these computer programs, such as Microsoft's FrontPage, have routines embedded in them that are derived from the results of AI research. In addition, most word processors anticipate problems and provide help functions, and most spreadsheet programs incorporate some intelligence to structure calculations.

When AI is used as a tool, the moral onus rests on the user: the user dominates the tool. A user who uses an AI-aided spreadsheet for nefarious purposes, for example, bears the burden of responsibility. In some cases, however, the computer manufacturer and the software programmer are clearly responsible for the program performing as advertised. Even under human control, some programs go beyond their use as mere tools. It is possible to incorporate decision-making functions and, with robots, decision-taking functions within these computer-based systems. Used in this way, AI-based systems can assume the social roles of slaves or of partners.

The Very Existence of Personality

David C. Funder, in Personality Judgment, 1999

PERSONALITY REAFFIRMED

The history of science has included several interesting periods when it was unclear whether a key concept would be redefined, replaced, or declared not to exist. For instance, more than two centuries ago Lavoisier convinced the scientific community that phlogiston did not exist and that the phenomena phlogiston was meant to explain were better accounted for by his new concept of "oxygen." But later observers have noted that he could almost as well have argued that phlogiston simply had some properties not hitherto recognized, and kept the old term (Stich, 1996).

The term “personality trait” may be undergoing a similar crisis at present. Some psychologists are eager to discard the term altogether, doing research on individual differences that in principle is little different from trait research (including the widespread use of self-report questionnaires) except that it avoids like the plague any use of the term “trait.” Thus the literature has filled with questionnaires to measure any number of “schemas,” “strategies,” “implicit theories,” and other putatively nontrait individual difference variables.

This shift in terminology is not completely empty; it gives investigators a fresh empirical and theoretical start. But it also loses much. The empirical and theoretical fresh start is so fresh that it has led some researchers to neglect elementary principles of reliability, homogeneity, and convergent and discriminant validity that apply to individual difference constructs no matter what they are called (recall the discussion in Chapter 1). And it has led to a neglect of the possibility that some and perhaps many of these new constructs are simply relabelings of constructs that have been around a very long time. For instance, Fuhrman and Funder (1995) showed that the procedures used by Markus (1977, 1983) to identify individuals with “sociability schemas” were equivalent to identification of high scorers on the sociability scale of the California Psychological Inventory (CPI; Gough, 1990). Specifically, individuals who scored high on CPI sociability manifested the same patterns of self-descriptive reaction time Markus found for her so-called schematic individuals.

The newer constructs of cognitive personality psychology include some properties not always included in older conceptualizations of traits, such as imputed processes of perception and cognition. But to the extent these properties prove to be useful they can of course be incorporated into our understanding of the nature of personality traits, an understanding that has been evolving for most of the 20th century. So I end this chapter with a conservative conclusion, not that phlogiston be revived as a viable concept for chemistry, but that the ordinary terms of personality description be broadened in their meaning and application to include new properties as they are discovered and understood. For only if we keep in our scientific lexicon terms like sociability, conscientiousness, and impulsiveness will we be able to assess the degree to which we are accurate—or inaccurate—when we think we perceive these traits in the people we meet and get to know.

The Dynamics of Brain–Body–Environment Systems

Randall D. Beer, in Handbook of Cognitive Science, 2008

Introduction

The history of science can often be characterized as a sequence of revolutions and reactions. The birth of cognitive science can be traced to the cognitive revolution of the mid-20th century, which was a reaction against the behaviorist revolution of the early 20th century. Behaviorism in turn was a reaction to the introspectionist tradition of the late 19th and early 20th centuries. More recently, the connectionist revolution was a reaction to some of the symbolic assumptions of the computational core of cognitive science.

In the mid-1980s, just as mainstream cognitive science was becoming aware of connectionism, two new ideas appeared on the intellectual landscape: situatedness and embodiment. These were quickly followed in the early 1990s by a third: dynamics. Broadly speaking, situatedness concerns the role played in an agent's behavior by its ongoing interactions with its immediate environment. Embodiment, in contrast, concerns the role of the physical properties of an agent's body in its behavior. Finally, dynamical approaches emphasize the temporal dimension of behavior, seeking to apply the concepts and tools of dynamical systems theory to the analysis of agents. Of course, none of these ideas is really new. As is often the case in science, each had important historical precedents, including cybernetics (Walter, 1953; Ashby, 1960; Braitenberg, 1984), phenomenology (Heidegger, 1962; Merleau-Ponty, 1962; Dreyfus, 1992), and ecological psychology (Gibson, 1979).

Historically, these three ideas entered cognitive science somewhat independently (Beer, in press). Situatedness arose primarily as a reaction against the classical AI planning view of action (Agre & Chapman, 1987; Suchman, 1987). In contrast, embodiment arose primarily from a dissatisfaction with the inability of symbolic AI approaches to cope with the sorts of problems encountered by real robots moving around in real environments (Brooks, 1991). Finally, dynamics arose from a rejection of the discreteness (in both time and state) of classical computationalism (Thelen & Smith, 1994; Van Gelder, 1995). Even today, there are people who hold each of these positions individually without necessarily committing themselves to the others.

However, it is becoming increasingly clear that situatedness, embodiment, and dynamics work much better as a unit. Combining these three ideas leads to the notion of a brain–body–environment system, wherein an agent's nervous system, its body, and its environment are each conceptualized as dynamical systems that are in continuous interaction (Beer, 1992, 1995a; Figure 6.1). Taking such a perspective seriously has fundamental implications across the cognitive, behavioral, and brain sciences, but it also raises many difficult empirical and theoretical challenges. Exploring these implications and addressing these challenges has been a major focus of my research program for almost 20 years (Beer, 1990, 1992, 1995a, b, 1997, 2003). In this chapter, I review both the experimental and the theoretical accomplishments of this research program to date, and then discuss some of the major challenges that remain.