Mike Knight. The Psychological Record. Volume 44, Issue 2, Spring 1994.
Darwinian functionalism integrates psychology’s past with the present to create an interdisciplinary research paradigm that promotes hypothetico-deductive theory construction that is computational in nature and constrained by a Volkerpsychologie knowledge base encompassing the evolutionary history of the organism. This paper summarizes ideas abstracted from evolutionary psychology, cognitive science, and paleoanthropology in an attempt to describe such a paradigm.
Anderson (1990) has argued that cognitive science is in need of a shift in emphasis away from modeling mental architecture in favor of an evolutionary or “adaptionist” perspective on human cognition. The implication is that cognitive psychology’s neostructuralist research paradigm is giving way to a neofunctionalist one. It is also a signal that cognitivism has progressed through what Kuhn (1970) labeled normal science and is currently experiencing a crisis of confidence. A shift in emphasis is necessary because cognitivism is in danger of degenerating into a kind of mentalism where unconstrained model building leads to an ever-increasing need for new models to account for the anomalies resulting from new data (Amsel, 1989; Knight, 1990). Ultimately an unrestrained emphasis on explanation will overwhelm a discipline with more answers than there were questions to begin with. What is needed is not more answers in the form of more models, but better questions.
If psychological theory is fragmented and incoherent it is because we have not been asking good (well-formed) questions. We have not been asking good questions because we have not had a theoretical basis from which to deduce expectancies. A return to functionalism, in the form of evolutionary psychology, has the potential to provide a hypothetico-deductive theoretical base that achieves prediction, provides explanation through meaningful organization, and, in addition, possesses the aesthetic quality that has historically characterized successful theorizing in science. What follows is an exposition of Darwinian functionalism as the elusive paradigm required to meet Thomas Kuhn’s criterion for a science: to achieve psychology as science.
Arnold Buss (1973) suggested that advances in genetics, evolutionary theory, and ethology had made possible a comprehensive theory which for the first time could effectively organize all of psychology around it. The missing ingredient seems to have been supplied by cognitive science with its unique perspective on the brain as a thinking machine. The functioning of cognitive mechanisms supplied the common ground for rapprochement. As James (1890) said in his Principles of Psychology, “the only thing which psychology has a right to postulate at the outset is the fact of thinking itself”. The last 20 years of cognitive science research have been spent developing a viable language for the description of mental modeling. Cosmides and Tooby (1987) were in a position to declare that, “there is emerging a new method, here called evolutionary psychology, which is made possible by the simultaneous maturation of evolutionary biology, paleoanthropology, and cognitive psychology”.
Darwinian Functionalism
Even if all the hawks in the world were to vanish, their image would still sleep in the soul of the chick. — Joseph Campbell.
At the heart of evolutionary psychology is the question of function. It makes little sense to question an organism’s structure without a consideration of how it functions. In like manner, how something functions can only be understood within the context of what it was designed to do. The functionalism of James and Dewey emphasized the importance of defining something in terms of what it does. Darwinian functionalism goes a step further and recognizes the inseparability of function and design. Understanding design makes it possible to ask meaningful questions regarding function and guides the study of structure. The internal mechanics of a telephone and the ways in which it can be used are meaningless apart from the knowledge that humans are a socially intelligent species with a need for information. The logical deduction from this knowledge is that an enhanced ability to communicate will be selected for and, as a consequence, will form an integral part of any social ecology.
Evolutionary design provides the knowledge base from which theoretical constructs, in the form of cognitive mechanisms, can be deduced (Anderson, 1990; Cosmides & Tooby, 1987). It is for this reason that the first premise of Darwinian functionalism is that behavior is the product of evolved cognitive mechanisms. As numerous others have pointed out it makes little sense to speak of evolved behavior. Behavior did not evolve any more than vision evolved. The brain and the eye have evolved and vision is the consequence. Similarly, behavior must be conceptualized as the result of evolved cognitive mechanisms in the functioning brain. Whether we refer to these mechanisms as mental organs (Chomsky, 1980), stimulus analyzers (Mackintosh, 1977), stimulus detectors (Eibl-Eibesfeldt, 1989), psychological mechanisms (Cosmides & Tooby, 1987), P-cognitions (Margolis, 1987), or Kant’s a priori (innate) categories of thought (Hergenhahn, 1986), the emphasis is on the filtering of information through these devices for the purpose of structuring perception (Gregory, 1983).
A theory is an algorithmic description of how cognitive mechanisms function. To build such a theory requires an understanding of the evolution of cognitive mechanisms as adaptations–adaptations reflecting the demand characteristics of particular physical and social ecologies. Meaningful questions in the form of postulates or hypotheses can be deduced from the theory and translated into the if/then logical consequences associated with experimentation. A particularly good example of this process is provided by social exchange theory (Cosmides, 1989; Cosmides & Tooby, 1989b). There, knowledge of the constraints inherent in a paleolithic social environment revealed that humans must have evolved unique abilities to process information if they were to succeed in solving their adaptive problems, and that the design features of these evolved cognitive mechanisms must operate on a cost-benefit representation of the exchange interaction and perform an inferential procedure that enhances the probability of detecting cheating on a social contract. From the algorithmic description of the cognitive mechanisms which govern human reasoning about social exchange, Cosmides (1989) designed an elaborate series of experiments to test the sometimes anomalous predictions regarding how humans use logic to solve problems. The interested reader is referred to this theory and the accompanying experiments as a model for research which exemplifies the principles outlined in this article and inherent in Darwinian functionalism.
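By way of illustration only, the kind of cost-benefit representation and cheater-detection inference described above can be sketched in a few lines of code; the record fields and names below are hypothetical and are not Cosmides’ experimental materials.

    # A minimal sketch, with hypothetical names, of a cost-benefit
    # representation of social exchange and a cheater-detection inference.
    from dataclasses import dataclass

    @dataclass
    class ExchangeRecord:
        agent: str
        took_benefit: bool      # accepted the benefit of the contract
        met_requirement: bool   # satisfied the agreed cost or requirement

    def is_cheater(r: ExchangeRecord) -> bool:
        # Cheating = taking the benefit without paying the required cost.
        return r.took_benefit and not r.met_requirement

    def detect_cheaters(records):
        return [r.agent for r in records if is_cheater(r)]

    # Toy usage: only Bob has violated the contract.
    records = [ExchangeRecord("Alice", True, True),
               ExchangeRecord("Bob", True, False),
               ExchangeRecord("Carol", False, False)]
    assert detect_cheaters(records) == ["Bob"]

The point of the sketch is simply that the hypothesized mechanism is specified at the level of an inferential procedure, which is what makes it testable.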
As a paradigm for research Darwinian functionalism can be conceptualized as a five-phase process: (1) naturalistic observation, (2) prediction of evolutionary contingencies, (3) hypothetico-deductive theory construction, (4) experimental design, and (5) empirical observation. Each of these phases will be considered separately.
The Research Paradigm
1. Asking Questions about Behavior
We begin, as always, with naturalistic observation. In the past psychologists have been notoriously bad observers of ongoing nonlaboratory behavior. It is the biologists, in particular the ethologists, who have developed the requisite skills of the trained observer of behavior in its natural context. The beginnings of paradigmatic psychology as science must start here. In this regard I highly recommend Eibl-Eibesfeldt’s book Human Ethology (1989).
Wundt referred to the empirical but nonexperimental (nonmanipulated) study of behavior as Geisteswissenschaft. From our empirical observations we first, to use William James’s term, “psychologize” about causes and ultimately formulate expectancies in the form of working hypotheses. The error that has been made in the past is to postulate either the organism or something inside the organism as the initiating agent and therefore the cause of the observed behavior.
Skinner (1981) identified this error, explaining the unknown by postulating the still more unknown, as ignotum per ignotius (see also Stephenson, 1980). We saw this error at its most blatant in the medical model of psychiatry and McDougall’s instinct psychology.
For many years we convinced ourselves that neobehaviorism’s use of theoretical constructs somehow circumvented this error, when in fact the error was simply made less obvious.
The problem is still one of circularity because the theory is constructed from the behavior it is supposed to explain. For well-formed questions to be asked, the theory must be deducible from a knowledge base independent of that which is to be explained. That knowledge base for Darwinian functionalism is the kind of psychology Kant called rationalistic anthropology and Wundt called Volkerpsychologie; it is the evolutionary history of the organism.
2. Understanding the Past
Hilgard (1987) points out that Wundt’s Volkerpsychologie is best described as historical-cultural because “its method is historical and its substance is the data of cultural residues”. Wundt’s 10-volume anthropological study of how the human mind was designed to function has a distinctly modern flavor. Hilgard compares Volkerpsychologie to Kant’s rationalism because of the emphasis on innate cognitive structures. For example, Wundt’s empirical study of language acquisition is strikingly similar to Chomsky’s. This is particularly impressive because Chomsky’s conceptualization of a language acquisition device as a mental organ which has evolved as an information-processing mechanism for a specialized cognitive function epitomizes the approach of Darwinian functionalism.
As with Chomsky the Darwinian functionalist uses evolutionary theory to define the organism’s adaptive problems.
Natural selection theory allows one to develop computational theories for adaptive information processing problems, because for humans, an evolved species, natural selection in a particular ecological situation defines and constitutes “valid constraints” on the way the world is structured. (Cosmides & Tooby, 1989a, p. 39)
That is, evolved cognitive mechanisms will mirror ecological constraints. Conversely, we can “see” the cognitive mechanism by looking at the environment which did the selecting for it in the first place. To understand the past is to understand how cognitive mechanisms function. Paleoanthropological evidence can and should be used to construct psychological theories.
Evolutionary history can also be used to evaluate predictions derived from these theories once formulated. An egregious error we have made in doing science in psychology is to think of prediction as synonymous with predicting the future. It is possible to formulate hypotheses that make specific testable predictions about the past. The criterion is refutability regardless of whether the evidence is obtained from the past or an experiment yet to be conducted.
Notice from Figure 1 that the evolutionary past is represented as a three-circle Venn diagram with three main effects (genes, habits, and ideas), three two-way interactions, and a three-way interaction. For two of these main effects, genes and habits, the applicability of the Darwinian model of evolution through a process of variation and selection (chance and necessity) has long been recognized. Darwin had a profound influence on both James and Thorndike and, as a result, on all subsequent theories of learning.
Modeling from phylogeny to ontogeny is particularly appealing (Skinner, 1966). Imagine a hypothetical individual named Ono. Ono is 42 years old. He has changed in many ways during his 42 years as a result of the experiences he has had. He has learned about the contingencies in his environment and successfully adapted to them. Now imagine an individual named Phylo who has lived for millions of years. He too has learned about the contingencies in his environment and been successful in adapting to them. From this analogy it becomes clear that we can conceptualize phylogenetic adaptation in the same way that we think about ontogenetic learning. Their commonality is neurophysiological change as a result of experience with environmental contingencies. More significantly the cybernetic mechanism for the evolutionary process of change at each level is identical.
Cybernetic communication with the environment in the form of functional feedback is the “glue” which bonds the organism and its ecological niche creating a unified whole (Maturana, 1975). It also provides a basis for understanding cultural evolution (Dawkins, 1990). A triarchic theory of evolution emphasizing variation and selection as the mechanism for change has gained widespread popularity in recent years (Knight, 1983).
We can distinguish between three levels of adaptation: genetic evolution; adaptive behavioral learning; and scientific discovery. On all three levels the mechanism of adaptation is fundamentally the same: variation and selection (Popper, 1974, p. 73).
Human behavior is the joint product of (i) the contingencies of survival responsible for the natural selection of the species and (ii) the contingencies of reinforcement responsible for the repertories acquired by its members, including (iii) the special contingencies maintained by an evolved social environment. (Ultimately, of course, it is all a matter of natural selection.) (Skinner, 1981, p. 502)
In A Matter of Consequences (Skinner, 1983) these three levels are given the labels Life, Mind, and Zeitgeist. Popper (see Eibl-Eibesfeldt, 1989) has characterized the products of the world of ideas, as represented in scientific theories, books, rituals, and so forth, as World 3 knowledge, with the mind and acquired behavioral repertories as World 2, and the physical world as World 1. Dawkins (1990) takes a more reductionistic, and perhaps more pragmatic approach when he describes a meme (derived from the Greek word mimeme, meaning imitation) as the cultural analog of a gene. Regardless of the terminology used it is clear that the emphasis is on cultural evolution.
Although the process of evolutionary change for genes and habits is typically described by the terms phylogeny and ontogeny, where the suffix -geny means the production or development of, there is no comparable term to describe the evolution of cultural knowledge. If we extend the use of the -geny suffix to include Popper’s World 3, the world of knowledge, the term epistemogeny seems appropriate. However, it should be reemphasized that it is the genes, habits, and ideas that are varying and being selected by consequences.
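Because the claim is that the same variation-and-selection cycle operates on genes, habits, and ideas alike, the cycle itself can be written down once in a substrate-neutral form. The sketch below is schematic only; the variation and fitness functions are placeholders, not claims about any particular biological or cultural mechanism.

    # A schematic variation-and-selection loop; the same cycle is meant to
    # apply whether the "population" holds genes, habits, or ideas.
    # 'variant' and 'fitness' are placeholder callables, purely illustrative.
    import random

    def evolve(population, variant, fitness, generations=100, keep=0.5):
        for _ in range(generations):
            # Variation: each member yields an offspring with chance changes.
            offspring = [variant(member) for member in population]
            pool = population + offspring
            # Selection: the environment retains the better-fitting members.
            pool.sort(key=fitness, reverse=True)
            population = pool[:max(1, int(len(pool) * keep))]
        return population

    # Toy usage: "ideas" are numbers, fitness is closeness to a target value.
    target = 42
    result = evolve(population=[random.uniform(0, 100) for _ in range(10)],
                    variant=lambda x: x + random.gauss(0, 1),
                    fitness=lambda x: -abs(x - target))
    # After repeated selection the surviving "ideas" drift toward the target.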
With regard to the two-way interactions the greatest attention has been given to biological constraints on learning. Skinner’s 1966 article, “The Phylogeny and Ontogeny of Behavior,” stressed the importance of understanding the development of habits (learning) in the context of genetic predispositions. The biology of behavior has always been the unique specialty of ethology. Although the model identifies interactions between phylogeny x epistemogeny and ontogeny x epistemogeny, their parameters are not as well articulated in the literature. It is interesting that Piaget describes his theory of cognitive development as a theory of genetic epistemology. It also seems probable that the O x E interaction is compatible with the constructs of social learning theory, cognitive behaviorism, and social cognition. In this context Alexander (1989) identifies scenario-building in a social ecology as the defining characteristic of the human psyche.
As with the possible outcomes of a three-way analysis of variance, any, all, or none of these combinations may contribute significantly to an identified behavior. What is important is that we think in terms of this triarchic theory of evolution and the resulting seven-factor knowledge base when attempting to model cognitive mechanisms.
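For concreteness, the seven sources of the knowledge base implied by the three-circle model (three main effects, three two-way interactions, and one three-way interaction) can be enumerated directly; the labels simply follow the phylogeny, ontogeny, and epistemogeny terms used above.

    # Enumerating the seven factors of the triarchic knowledge base:
    # three main effects, three two-way interactions, one three-way interaction.
    from itertools import combinations

    sources = ["phylogeny", "ontogeny", "epistemogeny"]
    factors = [combo for size in (1, 2, 3)
               for combo in combinations(sources, size)]
    assert len(factors) == 7
    for combo in factors:
        print(" x ".join(combo))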
3. Building Models of Mental Life
In its purest form the modeling of cognitive mechanisms follows the hypothetico-deductive method of Newton, Darwin, and Hull.
The method of evolutionary psychology is hypothetico-deductive, rather than speculative. In the speculative approach, one first discovers a psychological mechanism, and then one speculates about what adaptive problem it evolved to solve. The approach advocated here is the reverse: first, one uses existing and validated theories from evolutionary biology to define an adaptive problem that the human mind must be able to solve, and to deduce what properties a psychological mechanism capable of solving that problem must have. Then one tests to see whether there is evidence for a psychological mechanism having the hypothesized properties. It is a constrained and predictive approach, rather than a compilation of post hoc explanations for known phenomena. (Cosmides, 1989, p. 190)
Cosmides (1989) characterized a computational theory as “an answer to the question: what must happen if a particular function is to be accomplished”. David Marr’s pioneering work in visual perception revealed that the brain contains cognitive mechanisms which have evolved specific design features as solutions to adaptive problems in particular ecological domains. A computational theory first evaluates the nature of the problem the cognitive mechanism was designed to solve and then specifies an algorithmic solution given the constraints of the environment. The isomorphism between the design features of the cognitive mechanism and the constraints manifested in the way the world is structured means that the organism has an innate frame of reference which provides an expectancy of what is likely to be true in a particular domain of experience.
Our theories, then, are models of how we think the demand characteristics of a specified ecological niche are reflected in the functioning of the cognitive mechanism. The cognitive mechanism is conceptualized as a perceptual filter which organizes information according to the biological parameters derived from these demand characteristics. Our model of this mechanism explicitly defines how we think incoming sensory information is analyzed. Ideally the model is an algorithmic one emphasizing the sequential flow of information as it is processed.
Because they are concrete, models enhance visualization of the functioning cognitive mechanism and can be used to “elucidate operational relationships and make predictions which can be experimentally tested” (Eibl-Eibesfeldt, 1989, p. 165).
4. The Necessary and Sufficient Conditions for Defining a Cognitive Mechanism
The previous two sections have emphasized that a theory should be constructed from, and constrained by, the Volkerpsychologie knowledge base. The virtue of hypothetico-deductive theory construction is that the resulting postulates can then be tested experimentally. A postulate or hypothesis is a conjectural statement regarding a possible relationship between variables, and an experiment is defined in terms of the manipulative control of the variables contained in the postulate.
Wundt referred to the doing of experiments as Naturwissenschaft, the complement of Geisteswissenschaft, the empirical but nonexperimental study of behavior. Naturwissenschaft represents an experimental test of the computational theory. The variables are operationally defined and the hypothesis offers an if/then prediction which can be refuted through empirical observation. Because predictions are explicitly defined as logical consequences deduced from a theory, the postulate is, in effect, an operational definition of how the variables are related. In this sense the postulate specifies the necessary and sufficient conditions defining the cognitive mechanism represented in the computational theory.
Naturwissenschaft consists of developing a methodology for the purpose of manipulating the environment in order to test the predictions derived from a theory. The overriding emphasis is on predictions resulting from observed correlations, not causal explanation. Skinner (1981), Maturana (1975), and Bateson (Dell, 1985) have forcefully refuted the concept of causality in the classical Newtonian sense, where Event A is assumed to cause, and therefore explain, Event B. When billiard ball A strikes billiard ball B and B is set in motion, the Newtonian belief is that A caused B’s motion. However, given the nature of interactions, it is obvious that it is the structure of B as much as the action of A that determines the outcome. For example, what if B were glued to the table? Now billiard ball A caroms away, and the Newtonian would explain this effect by positing B as the cause. The error is in looking for linear causal explanations in the form of initiating agents. Looking for causal agents in science makes no more sense than trying to assign fault or blame in domestic disputes; in both situations the result is an infinite regress of ever more obscure causes. Postulating an initiating agent has the singular purpose of achieving explanation and eliminating questions, and this is antithetical to science.
Classical mechanics is inadequate for a science of psychology because it neglects relativity, assumes objectivity, and is contradicted by Heisenberg’s uncertainty principle. If linear causality is misguided, what replaces it? Maturana (1975) argues that the world is structure determined.
With this single bold insight (i.e., structure determinism) Maturana has retrieved the grand mechanistic universe that was envisioned by Newton–but with a difference. Maturana’s determinism differs from that of Newton in a way that elegantly suits today’s relativistic, Einsteinian world. Newton portrayed a mechanistic world wherein forces and impacts causally determined the behavior of objects. Maturana insists that this view of causal determinism is ontologically impossible. Forces and impacts cannot and do not determine, specify, or instruct the behavior of an object. They are merely the historical occasion for the system to continue its structure-determined behavior. (Dell, 1985, p. 7)
Maturana’s mechanistic view means that for any interaction there exists a complementarity of structure: like a lock and its key, one component cannot be functionally understood without reference to the other. The principle of complementarity has a long history in science, beginning with William James in 1890 and introduced into quantum physics by Niels Bohr in 1927 (Bohr, 1950; Stephenson, 1986). Maturana’s incorporation of this principle in the form of structure determinism is identical to Marr’s observation regarding the fit between a cognitive mechanism and the ecological niche in which it is functioning. What the frog’s eye tells the frog’s brain (Lettvin, Maturana, McCulloch, & Pitts, 1959) is, in effect: if it is small, dark, and moving, it is good to eat. This unity of structures means that frogs have evolved bug detectors that reflect bug structure. In like fashion the nature and characteristics of bugs tell us something about frogs.
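As a toy rendering of this lock-and-key fit, the frog’s bug detector can be written as a predicate over exactly the stimulus features named above: size, darkness, and movement. The feature encoding and thresholds are illustrative assumptions, not a model of the retinal physiology reported by Lettvin et al.

    # A toy "bug detector": a perceptual filter keyed to the stimulus
    # features named above (small, dark, moving). Thresholds are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Stimulus:
        size: float       # relative size in the visual field (0..1)
        darkness: float   # 0 = bright, 1 = dark
        moving: bool

    def looks_edible(s: Stimulus,
                     max_size: float = 0.1,
                     min_darkness: float = 0.6) -> bool:
        # "If it is small, dark, and moving, it is good to eat."
        return s.size <= max_size and s.darkness >= min_darkness and s.moving

    assert looks_edible(Stimulus(size=0.05, darkness=0.9, moving=True))
    assert not looks_edible(Stimulus(size=0.4, darkness=0.9, moving=True))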
It should be emphasized that the coupling of structures is by no means a static thing: “Structures alter with every interaction: this is especially true with regard to dynamic living systems which are constantly undergoing changes in their components and the relations among those components” (Dell, 1985, p. 7). Through cybernetic communication the structure of each defines the other. Notice that this reciprocity of influence in a dynamic system is nonlinear and renders questions about causes meaningless.
This unity of structures is depicted in the Darwinian functionalism paradigm. In a temporal sense evolution produces the intact organism, represented by the ubiquitous “black box,” but the environment (stimulus) is shown as occurring simultaneously with the organism. This is quite different from the old S → R model, where causality is implied by linear precedence (post hoc ergo propter hoc: after this, therefore because of this).
5. Issues of Measurement in the Observation of Behavior
As we have seen the necessary and sufficient conditions for defining a cognitive mechanism are contained in the operational definitions for the manipulation of variables and the observation of outcomes. In science a phenomenon is defined by specifying the conditions of observation such that the reliability of correlated events can be empirically verified. Specifying conditions of observation is analogous to writing a recipe. The assertion is that an observed outcome is correlated with a specified sequence of manipulations.
As is true of recipes in general, quantification is indispensable in science. For both, the purpose is the successful replication of a desired outcome. Can you imagine the difficulty of writing a recipe for carrot cake without the use of numbers? Questions of measurement are of paramount importance because effective science requires measures which are valid, reliable, and sensitive. All too often in psychology, however, the tail has ended up wagging the dog. In the absence of good questions, and because of our concern for good measurement, we have done experiments simply because they were doable.
More than anyone else Wundt transformed psychology into an empirical science. In large measure this transformation was accomplished by emphasizing to his students the importance of thinking in terms of measurement. Hilgard (1987) reports that Wundt’s labs were filled to overflowing with equipment and each student was assigned to an apparatus upon arrival. Pedagogically this is extremely valuable because it inculcates an attitude regarding operationalism, but perhaps even more importantly it ensures that the student, from the outset, “makes contact with behavior,” an experience that most psychologists recall as significant in their own development as scientists (Skinner, 1956, 1976). The unfortunate thing is that once you achieve expertise as a methodologist it is all too easy to become functionally fixated at this level of doing science. The end result is that you are, in effect, using the organism to learn about the behavior of your apparatus rather than the other way around (Gibson, 1985).
Like the beginning physics lab experience, behaviorism taught us how to do experiments, but it also had a stultifying effect on theory. What cognitive psychology attempted to do was correct this imbalance by returning to more meaningful questions regarding the contents of the mind, but this too was misguided and degenerated into mentalism (Amsel, 1989; Knight, 1990). Psychology is not a science of mental architecture but rather of mental life (James, 1890; Anderson, 1990).
In 1989 Richard Alexander ended his essay, “The Evolution of the Human Psyche,” by stating,
When I described to a friend the problem of titling this essay about the human psyche, he suggested with a sly smile that I might call it “Psychology.” Having completed the essay, I discover that, indeed, I have argued, in agreement with Humphrey (1976, 1986) and using his phrase, that the function of the human psyche is to “do psychology”–that is, to study itself as a phenomenon, in ourselves and other conspecific individuals, and to manipulate, in particular, the versions of itself found in those other individuals. When I read this statement back to the same friend, he nodded and added, ‘Unconsciously.’
This is an excellent Darwinian description of mental life. It emphasizes our sentience as social animals whose primary function is to build scenarios (tell stories) with self-reference. Such an ability would be selected for because stories exist as coherent units that subsume a host of specific details and as a consequence enhance memory, and because scenario building enables us to simulate the future and vicariously experience alternative outcomes.
Humphrey, Alexander, and others in anthropology are suggesting that the human evolved as a machine with a very large brain because its primary activity is subjective behavior, which raises an interesting question: If psychology is the science of mental life, how do we measure phenomenological behavior, which is by definition unobservable? For Skinner the answer to this question was obvious–introspection. Radical behaviorism did not reject introspection; what it rejected was an appeal to initiating agents in an attempt to achieve causal explanation (Skinner, 1974). For Skinner this type of mentalism is similar to the error of linear causality in classical mechanics (Skinner, 1981). It was Skinner’s contention that individuals achieve awareness of self through a process of attribution whereby they come to know themselves through experience, that is, by standing apart from themselves and observing their own behavior. Their verbal behavior with self-reference reflects these subjective observations (Knight, Frederickson, & Martin, 1987). Interestingly, this is exactly what Wundt meant by Selbstbeobachtung, which was subsequently translated as introspection (Marx & Cronan-Hillix, 1987).
Stephenson (1987) has astutely pointed out that the terms self-perception or self-awareness can only perpetuate Descartes’ error regarding the mind as an observer. As with S → R psychology, self → awareness implies a belief in a reality apart from the process of observation and establishes the self as an initiating agent. The importance of this distinction is made even more apparent if we contrast the phrases self-perceiving and perceiving self. When we say self-perceiving we are conceptualizing an entity who is riding around in the body observing reality. In contrast, to say perceiving self emphasizes that this is a process of communication which is structure determined and should not be misinterpreted simply because the communication involves self-reference.
Verbal behavior with reference to self reflects subjective observation of a world model in which self is a construct. Dennett (1991) argues that verbal behavior, in the form of textual narrative, is incontrovertible evidence for consciousness. In agreement with Minsky (1986) and Gazzaniga (1985), he envisions a society of selves in which SELF is the center of gravity in a narrative world. As with Alexander, Dennett seems to be identifying scenario-building as a uniquely human activity in which the texts we create can be studied as objective data reflecting mental life. This perspective on sentience as narrative dependent is also supported by Rumelhart (1975), who has proposed a cognitive mechanism for textual narrative which functions in accord with an inherent story grammar analogous to the rules of grammar postulated for Chomsky’s language acquisition device.
With regard to measurement, the Selbstbeobachtung perspective in the study of textual narrative means that verbal behavior can be objectified as operant behavior. It is not coincidental that Stephenson’s Q-methodology is called operant subjectivity. In physics the rejection of classical mechanics led to a reliance on measurement techniques which are similar to those of Q-methodology: “It happens that factor theory in psychology is the same as quantum theory in physics, both rooted in the same mathematics, and for comparable purposes in the two disciplines alike” (Stephenson, 1980, p. 97). Those comparable purposes are to achieve probability descriptions of subjective reality given the ambiguity barrier posited by Heisenberg’s uncertainty principle (Knight et al., 1987).
In their concordance regarding Selbstbeobachtung, Wundt, Skinner, and Stephenson share a methodological concern for the importance of the individual case to measurement in science. Stephenson (1987) observes that psychometry based on group averages has flooded us with “pseudo-testing of every conceivable kind” and, further, that “it is basically categorical only, and will one day disappear, one may hope, into a ‘black hole’ of grand illusions”. Skinner’s assessment was equally harsh: “it is the function of learning theory to create an imaginary world of law and order and thus to console us for the disorder we observe in behavior itself” (Skinner, 1956, p. 226), and further, “When you have the responsibility of making absolutely sure that a given organism will engage in a given sort of behavior at a given time, you quickly grow impatient with theories of learning”. One of my favorite Stephenson quotes in this regard is “When a physicist theorizes about a particular metal any piece of it will serve his experimental purposes” (Stephenson, 1953, p. 3). In operant research and psychophysics alike, intense study of the individual provides expectancies to be tested in the next experiment with the next individual. For this reason it seems probable that signal detection theory (Green & Swets, 1966), in combination with the operant procedures of Q-methodology, would provide an opportunity to develop ecologically valid measures to assess the functioning of cognitive mechanisms.
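As one concrete possibility for such individual-level measurement, the standard sensitivity index of signal detection theory, d' = z(hit rate) - z(false-alarm rate), can be estimated from a single observer’s responses. The sketch below assumes the conventional equal-variance Gaussian model and a simple correction for extreme rates; it is illustrative, not a prescription.

    # A minimal sketch of the signal-detection sensitivity index
    # d' = z(hit rate) - z(false-alarm rate), computed for one observer
    # under the conventional equal-variance Gaussian model.
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        z = NormalDist().inv_cdf
        # A small correction keeps rates away from 0 and 1, where z is undefined.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return z(hit_rate) - z(fa_rate)

    # Example: one observer over 100 signal trials and 100 noise trials.
    print(round(d_prime(hits=80, misses=20,
                        false_alarms=15, correct_rejections=85), 2))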
Conclusions
Thomas Kuhn’s (1970) definition of science in terms of an existing paradigm has haunted psychology for 20 years as we have vainly searched for a unifying perspective (Amsel, 1989). However, in 1982 Woodward argued that
Controversies are sometimes a telltale indication that the parties agree on the existence of that ultimate intellectual prize, a scientific “discovery”….
My contention is that the very existence of these disagreements reveals an underlying agreement, a “discovery,” as it were, of Newtonian proportions for our discipline. The tacit agreement on a functional mechanism derives from the impact and assimilation of evolutionary thought in psychology. If psychology is concerned about its status as a science, it would do well to appreciate the distinguished heritage it already has. This heritage may be the bearer of a revolution as profound as those of the physical, chemical, and biological sciences during the past three centuries.
Woodward is quite correct in identifying functional feedback (i.e., cybernetic communication between structures) as the force which attracts all elements of psychology; however, this identification alone is not enough to define our paradigm. What is missing is a description of how theories are constructed and function as a part of the paradigm. This lack is critically important because the absence of coherent theories is the signal weakness of psychology. With regard to the study of perception, Gibson (1985) observed,
The conclusions that can be reached from a century of research on perception are insignificant. We have no adequate theory of perception, and what we have found in the search for physical sensations is a mixed batch of illusions, physiological curiosities, and bodily feelings. The implications are discouraging. A fresh start has to be made on the problem of perception.
In many areas of psychology the absence of well-formed experimental questions deduced from theory has resulted in similar trivialities. But how do we avoid this problem? Pragmatically, a theory is defined in terms of how it functions. We must construct our theories to be compatible with three primary functions.
1. Theory has a generative function in that it makes predictions which are refutable. Theories generate new knowledge by providing the expectations which enable us to be wrong, and being wrong is the lifeblood of science because it means something new is about to be learned.
2. Theory has a meaningfulness function acquired through organization and explanation. A theory makes information meaningful by functioning as a scientific advance organizer (Ausubel, 1963) or schema which defines interrelationships and in the process achieves explanation.
3. Theory has an aesthetic function. In physics this is called the beauty principle because, as Keats said, “Beauty is truth, truth beauty.” Gell-Mann (Judson, 1980) stresses the point that our theories are models of reality and, if accurate, will mirror the symmetry and balance we sense in nature.
In the past our strength has been methodology and our weakness theory construction. If science is truly a schemapiric activity as S. S. Stevens has asserted (Marx & Cronan-Hillix, 1987) then psychology as science must achieve a reciprocity between its theories and its methodologies such that each provides sustenance for the other. This coupling represents an epistemological unity of structures without which science is not viable.
Clearly our methodologist persona has dominated the theorist in psychological science. This is partly because, when the theorist has rebelled, the resulting theories have not been successful; examples include psychoanalytic theory, drive theory, and the theories of cognitivism. The worth of a theory, like that of research, should be judged relative to the ratio of questions to answers. A worthy theory will raise more questions than it provides answers. A malfunctioning theory will give you more answers than you have questions. In the past our theories have ultimately malfunctioned because of their circular nature. That is to say, they were not constrained by a knowledge base apart from the behavior from which they were constructed. In contrast, the theories of Darwinian functionalism can be refuted at two levels. Geisteswissenschaft predictions can be refuted by knowledge obtained from the Volkerpsychologie knowledge base, and Naturwissenschaft predictions deduced from the computational theory can be refuted by empirical observation. Refutability is the cardinal virtue of science and the distinctive puissance of Darwinian functionalism.
What is psychology’s new paradigm? It is an interdisciplinary compilation of all that has gone before. From evolutionary biology, to Wundt’s Volkerpsychologie, to Wiener’s cybernetic communication, to Maturana’s structure determinism, the past has been progressing toward this future. Like the wise men studying the elephant, we have all been conceptualizing the same “truths” from different perspectives. It was Louis Pasteur who said, “Chance favors the prepared mind.” We seem to be prepared at last to recognize Darwinian functionalism as psychology’s once and future paradigm.