Theories of Knowledge: What We Know about What We Know

Eliezer Geisler. Knowledge and Knowledge Systems: Learning from the Wonders of the Mind. Hershey, PA: IGI Publishing, 2008.

This chapter reviews the main streams of research on knowledge, assembled from a diversity of academic disciplines: philosophy (epistemology), philosophy of science, psychology, economics, and the management and organization sciences. For the sake of continuity in the style in which this book is written, and for the comfort of the reader, I shall do my best to refrain from overusing the esoteric terminologies of these streams or research movements. Terms such as “positivism,” “phenomenology,” “modernism,” “deconstruction,” and similar descriptions will be avoided, as will “post” and “neo” in conjunction with any of the above. Simply, the review will address what I believe to be the key arguments and the foundational components of prior scholarship, and their contributions to the modeling of the structure of knowledge.

It all began some two million years ago, when one of our ancestors, Homo habilis, found a way to make tools, thus paving the way for Homo sapiens (the “wise” human). As humans evolved, they continued to aggregate knowledge about their surroundings and increasingly generated knowledge of how to survive in those environments, even how to master them. They developed a pool of knowledge on the making and use of tools and artifacts for hunting and gathering, and rudimentary language with which to maintain these skills and diffuse them to their offspring.

Some ten thousand years ago (it is not clear why), the Paleolithic age ended and the Neolithic age appeared. Humans dramatically improved their tool-making, converting from hunting-gathering to food cultivation and animal husbandry as the key method for survival. To safeguard their growing populations and the surpluses of food and artifacts, the Neolithic humans developed communities, then cities, then the knowledge and technologies to administer them and to record their possessions.

Clearly, the knowledge our ancestors possessed was practical knowledge inherent in skills that allowed them to perform simple tasks that kept them alive and made them prosper. The questions posed by their inquisitive minds were perhaps directed toward the “how”: How to start a fire? How to record the hunt on a cave’s wall? How to prepare and use a hunting device?

With the growth of communities and the widespread appearance of farming, transportation, and storage of food surpluses, there was a need to deal with more complex stocks of knowledge. Solutions had to be found to problems never before encountered. The complexity of growing communities and the contemporaneous development of language and writing became evident forces that propelled people with public responsibilities to increasingly ask “why?” They observed patterns that were more complex than nature’s rhythms of celestial movements, the seasons of the year, or the flooding calendars of mighty rivers.

To a large extent, they procured the answers to “why” from divine or supernatural sources. But these solutions were not entirely satisfactory, thus forcing the ancients to engage in a more systematic study of their natural surroundings. At first they transferred to their offspring their knowledge of hunting, gathering, and tool making and use, based on experience and imitation. Paleolithic humans gained knowledge from direct experience, by their close proximity to nature. Then, at some point roughly 15,000 years ago, there was a marked increase in applications and innovations: in knowledge about farming, and in the social congregation of larger groups of humans sharing in the means of production and distribution of food and artifacts. Scholars who have examined this change generally favor the explanation of a spike or revolutionary change in behavior, caused by a cataclysmic event or some other unforeseen circumstance; the proposed explanations range from dramatic changes to an explosion in population.

Practical knowledge, learned by hands-on experience and imitation, gave way to conceptual knowledge, transferred and learned by a method that is “one step removed” from actual practice. This was done with the help of language, writing, and drawings. It meant that practical knowledge of how to start a fire or kill prey with a sharp instrument was replaced by the notion or concept of creating fire with artifacts, and of obtaining food from prey by “long-distance” hunting. Stories around the communal fire in the cave or other habitation may have been accompanied by movements that imitated an impressive hunting episode, but the listeners “were not there,” and thus had to imagine the encounter via notions of “hunting with instruments” and concepts of “bravery” and “communal good.”

Perhaps the progress of knowledge, from practical to conceptual, was not so sudden. Even prehistoric humans had the capacity to imagine concepts such as deities, beyond the patterns they observed in nature. Through cumulative practical knowledge, they and the Neolithic humans who followed introduced small yet meaningful improvements to the ways they procured food and shelter, and to the means by which they governed the allocation of their resources.

Our human ancestors drew pictures from memory, requiring them to reconstruct shapes and events. They had to know and remember where food, in the form of edible vegetables and fruits, could be found and the dangers inherent in getting there and returning without harm. All this required accumulation of a variety of items of knowledge, and their positioning in a framework that would be amenable to reconstruction and to transfer to others. It seems that early humans possessed the abilities to generate and process knowledge beyond the rudimentary practical or “hands-on” experience and imitation of key activities necessary for their survival.

Theories and Key Streams of Research on Human Knowledge

Gorman (2002) has offered an interesting summary of types of knowledge. He started with the main classification of knowledge as explicit and tacit. The former is knowledge that can be clearly and perhaps even completely told or transferred by the knower. Tacit or implicit knowledge is that knowledge which is embedded in the knower, and which the knower is unable or unwilling to exchange.

Gorman proposed four types of knowledge under these main categories. The first is “information,” answering questions of what. This type of knowledge includes the accretion of facts via memorization and the reconstruction of reality from bits of information embedded in human memory. It is often supported by external memory aids, which help the knower to “find” the information needed.

A second type of knowledge refers to “skills,” answering questions of how. This knowledge is also described as procedural, so that algorithms may be established and the procedures codified. Gorman argues that under the explicit category such procedural knowledge consists of algorithms, whereas under the tacit (implicit) category it consists of heuristics and hands-on knowledge.

The third type is “judgment,” answering questions of when. The knower recognizes “that a problem is similar to one whose solution path is known and knowing when to apply a particular procedure” (p. 222). Under the explicit category, such knowledge relies on rules, whereas under the implicit (tacit) category, it is based on cases, mental models, and mental frameworks.

The fourth type is “wisdom” knowledge, answering questions of why. Under the explicit category this knowledge relies on codes and under the tacit category it is based on moral imagination.
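Gorman’s scheme is, in effect, a two-by-four matrix: each of the four types of knowledge (information, skills, judgment, wisdom) takes a different form under the explicit and the tacit categories. For readers who think in structures, the following sketch restates the typology as a small Python data structure; the layout and labels are my editorial condensation, not Gorman’s own notation:

```python
# An editorial condensation of Gorman's (2002) typology of knowledge.
# The dictionary layout and labels are illustrative, not Gorman's notation.

GORMAN_TYPOLOGY = {
    # type:        (question, explicit form,                        tacit form)
    "information": ("what",  "facts aided by external memory aids", "reconstruction from remembered bits"),
    "skills":      ("how",   "algorithms, codified procedures",     "heuristics, hands-on knowledge"),
    "judgment":    ("when",  "rules",                               "cases, mental models and frameworks"),
    "wisdom":      ("why",   "codes",                               "moral imagination"),
}

def form_of(knowledge_type: str, category: str) -> str:
    """Return the explicit or tacit form of a given type of knowledge."""
    question, explicit, tacit = GORMAN_TYPOLOGY[knowledge_type]
    return explicit if category == "explicit" else tacit

# Example: the tacit side of procedural ("skills") knowledge.
print(form_of("skills", "tacit"))   # heuristics, hands-on knowledge
```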

Gorman’s summary is an excellent illustration of the focus on knowledge as a utilitarian tool or mechanism, employed to answer questions and to achieve individual and organizational goals. As models of knowledge emerged stressing its functionality and utility, so did arguments linking these models to the context of cultural and social influences. Knowledge, it is argued, can be classified and its utility categorized only in the context of these cultural and social values and customs. Hence, perhaps we cannot, or should not, at this juncture, propose general models of how knowledge is utilized.

The focus on applications and utility that characterizes more recent scholarship on the nature of knowledge limits the exploration to somewhat vague, and certainly quite broad, taxonomies. As recent as this scholarship may be, it has not advanced our examination of the nature of knowledge to the level of its components. This is not only the gap in current scholarship; it has also been a constant gap in the models and theories offered by the philosophers who studied epistemology. These philosophical works, beginning with Aristotle and flowering during the Enlightenment, continue to this day. This body of work, although containing a variety of approaches, nonetheless illustrates a key stream in the study of knowledge, with a focus on the overall nature of knowledge, its morphology, and its ethical implications. The basic elements of knowledge were not adequately explored. The emphasis was, and continues to be, on what constitutes knowledge and whether the knower indeed “knows.” Thus, epistemologists are concerned with how we know and how we attest to or recognize what we know. The other stream of scholarship, anchored in knowledge management, concerns the notion of knowledge and, in particular, its diffusion and manipulation.

The Search for the Nature of Knowledge

Even this subtitle would require an entire book to adequately describe the research involved in such a search. I chose a selected set of authors, from Immanuel Kant to Karl Popper and F. A. Hayek in our times. In the two centuries I will attempt to cover, there has been a marvelous crop of scholars and philosophers who engaged in the study of knowledge, representing a wide array of viewpoints and approaches.

It would be unnecessary and certainly unwise to engage the reader in a full discourse of the various “schools of thought” and streams in philosophical and epistemological studies. I chose instead to briefly describe the main models, arguments, and findings of the key scholars.

The earliest philosophers concerned with knowledge were the Greek scholars known as the Sophists. They posited that knowledge is wholly derived from experience. Plato (428-348 B.C.) disagreed with the Sophists. His theory of knowledge contended that knowledge based on sensorial experience is a low level of awareness: such knowledge consists merely of opinions, whereas the true level of knowledge or awareness is made of unchanging ideas or immutable forms that can be attained by reason and intellectual pursuit rather than by empirical experience.

Aristotle (384-322 B.C.) was a student of Plato, later also becoming a teacher in Plato’s Academy in Athens. Aristotle initially supported Plato’s view that abstract or ideal knowledge is indeed a superior mode of awareness, but he later contended that there are two ways of acquiring knowledge. The first applies to most forms of knowledge and can only be achieved from experience. Another mode is a deductive method, by which superior knowledge is gained by following the rules of logic, such as in the form of the syllogism.

These early Greek philosophers had framed the debate about the nature of knowledge for centuries to come. The core issue of the debate was the mode by which knowledge is acquired by the human mind. Is it by means of experience, through our senses, or by means of deductive reasoning? This debate, somewhat dormant in the Middle Ages (to an extent discussed by Thomas Aquinas, 1225-1274), was intensely reignited during the Age of Enlightenment in Europe, from 1600 to the early twentieth century.

Two groups of philosophers emerged in the three centuries of this dichotomous intellectual struggle. One group, the rationalists, argued that human knowledge is obtained, and can be verified, by conducting a logical or rational exercise on principles of nature. These are given, self-evident postulates or, in their terminology, “axioms.” Derived from the Greek word for “worthy,” axioms are accepted as true statements because of their intrinsic value, which one must therefore honor. The main method by which knowledge is gained, starting with these axioms, is the system of deduction.

The best-known rationalists are René Descartes (1596-1650), Baruch Spinoza (1632-1677), and Gottfried Leibniz (1646-1716). The main assertion of this school of philosophy (concerning the issue of epistemology, or knowledge) was that the human mind has the ability to recognize and be cognizant of the world outside it by means of the application of rational processes, without direct recourse to empirical experience.

Descartes argued that rational manipulations allow for the identification of universal principles or truths. These are innate to the mind and are independent of external events such as individual experiences. In the current terminology of knowledge management, Descartes proposed that we possess tacit knowledge of rational exercises that, by deduction, allow us to arrive at knowledge about all other aspects of the physical world. In other words, Descartes believed that we possess, in our mind, the formula that allows us, by mental processes of rational thinking, to know the physical world that exists outside of our own self.

Descartes held that there is a fundamental separation between intellect and body, and that knowledge is based on absolutes derived from rational deduction. He therefore refused to accept any belief unless it was the product of rational examination, doubting even his own existence. Hence his famous “Cogito, ergo sum” (“I think, therefore I am”). This rational deduction allowed him to declare it an axiom and to proceed from there to the deduction of the principles and laws of the natural world.

Another rationalist was the Dutch philosopher Baruch Spinoza, the son of Portuguese Jews who emigrated to Holland. Spinoza lived a life of solitude and contemplation. Because of his philosophical ideas, he was excommunicated in 1656 by the rabbinical council of Amsterdam.

Spinoza argued that knowledge can be deduced from basic laws and axioms, in the same way as in geometry and in mathematics in general. He critically examined Descartes’ duality of mind and matter, and arrived at the conclusion that mind and matter are two manifestations of the same phenomenon, existing along parallel trajectories. Hence, knowledge is itself a mode or form of matter or substance. This conclusion did not solve the problem of the duality of mind and matter, but it allowed Spinoza to suggest that, by virtue of their parallel existence, mind and matter “appear” to interact because we perceive them as traveling along the same wavelengths.

Leibniz also believed that a rational format or plan is responsible for the natural world. He proposed a system by which the physical world is made of “monads,” which are items or centers of energy, acting as microcosmic representations of nature. They exist in harmony in light of the plan that God has predetermined. Perfectly rational knowledge would be to understand God’s plan for the harmonious coexistence of the monads. However, the human mind is not capable of grasping such a perfectly divine plan and is therefore limited in its rational capability.

On the other side of the quest for the nature of knowledge was a group of philosophers commonly known as the empiricists. By chance rather than by cultural design, the most famous empiricists were British, whereas the key rationalists came from the continent. Empiricists believe that knowledge is primarily based on experience and on our ability to sensorially capture the empirical world. As a school, the early empiricists rejected the notion of axioms and self-evident principles from which the mind can deduce the most valued knowledge. John Locke (1632-1704) was an early proponent of this approach. He received his education at Oxford University and held public office, but never a professorial position. Locke’s theory of knowledge is closely intertwined with his political theory. He was an ardent Protestant and strongly opposed the divine right of kings as being inconsistent with his philosophical belief that pre-existing ideas or notions are not valid. His theory of knowledge proposed that the human mind is in its origin a “tabula rasa” (a blank slate) onto which sensorial inputs are imprinted as empirical manifestations of human experiences. Since there are no preconceived ideas, Locke believed that each mind (hence each individual) is equal in its attempt to gain and to utilize knowledge. Although he died almost a century before the American Revolution, Locke’s ideas featured prominently in the deliberations of the first Congress and in the drafts of the American Constitution.

But the more influential empiricists were Berkeley and, particularly, Hume. George Berkeley (1685-1753) was a clergyman who taught at the Universities of Dublin and Oxford, where he is also interred. In his treatise on human knowledge, Berkeley rejected Locke’s distinction between ideas and the physical world (empirical objects). He argued that knowledge is confined to the ideas that we form in our mind about the empirical world. The physical world outside the human mind is irrelevant, since the things in such a world cannot be construed by the mind as concrete and real. The mind can only contemplate its own ideas.

Berkeley was deeply concerned with the skepticism and atheism of the philosophical approaches of his time. He therefore arrived at the conclusion that the thoughts in the human mind are there by transfer from a more able mind, that of God. He also argued that: “The ideas of sense are allowed to have more reality to them…but this is no argument that they exist without the mind” (Berkeley, 1957, p. 38).

Berkeley is considered the founder of “idealism” due to his belief that objects of the real world only exist if the human mind perceives them as such. What if the mind does not (temporarily) perceive the objects in the physical world outside the mind? Berkeley then argued that they are being perceived by God, hence at any given time objects are perceived. His famous phrase was: “esse est percipi” (to be is to be perceived). The knower’s mind does not have evidence of the “true” existence of the physical world of objects outside the knower, because the knower perceives by means of a stream of sensorial inputs from this world. But these sensorial inputs are lodged in the mind—hence we are confined to the reality as it is perceived by and within the mind.

A more radical view of empiricism was advanced by David Hume (1711-1776). Born in Edinburgh, Hume spent many years in France, where he befriended Rousseau and other French scholars. Hume believed, as did Berkeley, that true knowledge of the natural world is impossible, and that knowledge is only possible by means of experience. The knower perceives such experiences with all his flaws and subjectivity. Hume’s skepticism is generally exemplified by his view of causality and inductive reasoning; he doubted both. Laws of cause and effect, he argued, are mere beliefs, and there are no logical or rational grounds for drawing inferences from past events to the future.

Hume made a very influential distinction between what he called “impressions” and “ideas,” a distinction that served as the background to Kant’s criticism of Hume and to the development of Kant’s perspective on knowledge. Hume defined impressions as those experiences that we receive directly from our senses, as a sensorial representation of the external universe. Ideas, on the other hand, are those experiences that we know because we are able to extract them from impressions we have already experienced. They are, in a way, “derivatives” of the more powerful and real impressions, which come directly from our senses.

Reconciliation and a Brilliant Step Forward

At the core of the dispute between the two schools searching for the nature of knowledge (rationalism and empiricism) was the distinction between the roles of sensorial inputs and of rational manipulations of ideas, notions, and concepts. The search also had two distinct, yet complementary aims. One was “What is knowledge?” and the other was “How do we know?” The quest for what knowledge is followed a path of philosophical inquiry into “true” knowledge, and into the human ability to “really” know the physical world outside the individual self and outside the mind. This line of inquiry has produced perspectives on the ontology of knowledge, on the ethical and religious implications of what it means to “really” know, and a fertile field of conjectures concerning the link that true knowing provides between mind and universe.

Very little came out of this line of inquiry that could illuminate the question of the structure of knowledge as an ontological entity (i.e., having its distinct form). The second school did not fare much better. The quest for understanding how we know followed a path of inquiry into how the mind processes whatever inputs it receives, from the external world (senses) and from itself (logic and reasoning). As I described earlier, the two schools of thought (rationalists and empiricists) held extreme views, favoring either inputs from sensorial experiences or rational manipulations of ideas and concepts. There came a time when the need for the reconciliation and synthesis of these views became urgent and timely.

The first and monumental effort to reconcile and synthesize the divergent approaches was the work of a professor at the German University of Königsberg, Immanuel Kant (1724-1804). He wrote several books, two of which I will reference here: the Critique of Pure Reason (published in 1781) and the Critique of Judgment (published in 1790).

Kant was dissatisfied with the state of the philosophy of knowledge of his time. He believed that in order to reconcile the distinct schools of thought, he needed to construct a unique and new framework of the nature of knowledge, with its very own terminology and concepts. Such a logical framework should also address the questions “What is knowledge?” and “How do we know?”—that is, how the combined effects of sensorial inputs and rational manipulations create knowledge in the mind.

Kant’s framework is based on the distinction he makes (in the human mind) between perception and thinking: perception deals with sensorial inputs, and understanding deals with concepts. He classified concepts into three types: a posteriori, a priori, and ideas. Kant now faced the challenge of explaining how the two seemingly diametrically opposed scenarios or models of the processing of knowledge actually work together in human cognition. This was not an easy task. He started by proposing that the human mind possesses “intuitions,” the criteria of time and space by which perceptions can be ordered. Another attribute or capability of the human mind is a set of a priori concepts called categories. So, the external world becomes knowable when sensorial perceptions are posited in the categories, within the criteria of the intuitions, thus forming judgments as to whether these sensorial inputs represent the external reality. The world outside the mind exists in the form of what Kant called “noumena,” or things-in-themselves, but those are not knowable unless we can apply our perceptions of them to the categories.

There are, according to Kant, four groups of categories, each having three subcategories. These are:

  • Quantity (unity, plurality, totality)
  • Quality (reality, negation, limitation)
  • Relation (substance and accident, causality and dependence, community or interaction)
  • Modality (possibility-impossibility, existence-nonexistence, necessity-contingency)

By means of the categories, we are able to perceive objects in the physical world around us in a way that they seem to interact with each other and have causal relationships with each other and with us, the knower. This was the Kantian framework for applying empirical inputs in the creation of knowledge. But the mind also knows abstract notions or “ideas,” which are higher-level constructs. Ideas, Kant posited, are not the outcome of sensorial or empirical perceptions that had been applied to the categories. Rather, they are the result of logical inference—the rationalist perspective.
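To make the flow of Kant’s scheme easier to follow, here is a deliberately loose sketch in Python of the processing it describes: raw sensory input, already ordered by the intuitions of space and time, is subsumed under one of the categories to yield a judgment about the phenomenal world. Kant’s system is not, of course, an executable procedure; every name below is an illustrative gloss:

```python
from dataclasses import dataclass

# Kant's four groups of categories, each with three subcategories.
CATEGORIES = {
    "quantity": ["unity", "plurality", "totality"],
    "quality":  ["reality", "negation", "limitation"],
    "relation": ["substance-accident", "causality-dependence", "community"],
    "modality": ["possibility-impossibility", "existence-nonexistence",
                 "necessity-contingency"],
}

@dataclass
class Perception:
    content: str    # the sensorial input itself
    place: str      # ordered by the intuition of space
    moment: str     # ordered by the intuition of time

def judge(p: Perception, group: str, category: str) -> str:
    """Subsume a spatiotemporally ordered perception under a category,
    yielding a judgment about phenomena (never about noumena)."""
    assert category in CATEGORIES[group], "not a Kantian category"
    return f"{p.content} (at {p.place}, {p.moment}) judged under {group}:{category}"

# Example: perceiving one event as the cause of another.
p = Perception("spark precedes flame", "the hearth", "t0 -> t1")
print(judge(p, "relation", "causality-dependence"))
```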

Kant also proposed two types of judgments: analytic and synthetic (each of which may be a priori or a posteriori). Analytic judgments, propositions, or statements are inherently “true,” hence they are known, but they do not provide us with knowledge about the world; the statement “All bachelors are unmarried,” for example, is true by virtue of the meanings of its terms alone. Synthetic judgments are a “synthesis” between the knower and the world outside the knower. The statement “The house on Main Street is of prairie-style architecture” is a synthetic judgment.

In Kant’s framework of knowledge, synthetic a posteriori statements are based on the processing of sensorial data by the platform of the categories. However, Kant struggled with the issue of synthetic a priori judgments or propositions. He argued that they do exist. The problem is to explain how such judgments can produce knowledge about the world without any input from sensorial data, knowledge that we are as certain is true as we are of analytic statements.

Kant argued that synthetic a priori knowledge is the mainstay of knowledge in mathematics and in the sciences (his own example was the arithmetic proposition 7 + 5 = 12). General laws of science are not the result of sensorial inputs from our universe, but synthetic a priori knowledge that allows us to organize our perceptions of the physical world into a meaningful set of connections. As the individual knower applies the general rules to sensory perceptions, the Kantian categories allow the knower to identify the connections and form a meaningful (or knowable) and nonchaotic perception of the world. This, in essence, is what Kant called “transcendental logic.”

Kant’s influence extended beyond his contribution to the scholarship on knowledge. He influenced the work of Marx, Hegel, Schopenhauer, Fries, Heidegger, Hayek, Popper, and a host of other philosophers and political scientists. His framework of how knowledge is processed in human cognition, and his synthesis of the rationalist and empiricist perspectives, turned out to be a very viable platform for understanding the nature of knowledge, albeit one that also drew selective criticism.

From Kant to the Present

After Kant we find a hiatus in the pursuit of the nature of knowledge, as the emphasis shifted from exploration of the structure of knowledge to a focus on the linguistic and symbolic exchange of knowledge. The comprehensiveness of Kant’s scheme had a long-standing impact, so that scholars of knowledge were largely content with examining the meaning, ramifications, and implications of Kant’s contributions. Kant’s scheme not only bridged the conflicting approaches to knowledge processing (rationalism and empiricism), but also created a system that attempted to describe and explain the elements of human cognition. This was such an all-encompassing effort that it provided a platform for a diverse group of followers to pick and choose aspects of the scheme and to build their specific theories upon them. Kant’s logical framework was like a “supermarket” of possible avenues for exploration: political or philosophical, ethical or economic, religious or sociological. The spin-offs from Kant’s framework were essentially limitless, thus occupying the attention of scholars for decades afterwards.

In the two centuries since Kant, the exploration of the nature of knowledge has been carried out by a mixed bag of sociologists (such as Durkheim), social anthropologists (Lévi-Strauss), communication scientists, psychologists, and, more recently, information scientists. This trend inevitably led to the emergence of the linguistic philosophy of knowledge and to Wittgenstein, Quine, Russell, and Chomsky.

Analytic and Linguistic Approaches to Knowledge

The general trend in the pursuit of the structure of knowledge in the twentieth century focused on propositions or statements and their characterization of knowledge. The emphasis was on how people exchange and communicate what they know, rather than on the structure of what they know. Statements in the language that people use include concepts and notions in their entirety, and therefore do not require a more in-depth exploration of what makes these statements bearers of knowledge. This means that the onus is now on determining whether such knowledge-laden statements are true or false, and on the modes or procedures one would use to ascertain their veracity.

Ludwig Wittgenstein (1889-1951) was a student of Bertrand Russell (1872-1970), and both may be credited with founding the school of philosophy known as logical positivism. Russell, an ardent mathematician, believed that the complex physical world can be explained by reducing its components to precise and meaningful propositions, which he named atomic propositions. In cooperation with Alfred North Whitehead (1861-1947), Russell applied mathematical symbols to simple propositions that describe the physical world. He argued that such logical propositions are meaningful in that they correspond to the elements of nature, in what Russell called logical atomism. This one-to-one correspondence between the logic of language and the universe allows us to gain knowledge about our universe and to characterize it in a form that is meaningful and exchangeable with others.
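Russell’s claim that complex statements decompose into truth-functional combinations of atomic propositions is easy to mimic in code. The sketch below is only an analogy, not Russell’s formalism: atomic propositions receive truth values, and every compound proposition is evaluated from them alone:

```python
# A toy analogy (not Russell's notation) for logical atomism:
# compound propositions are truth-functions of atomic ones.

atoms = {
    "p": True,    # "Snow is white"  -- an atomic proposition
    "q": False,   # "Grass is red"   -- another atomic proposition
}

def NOT(x: bool) -> bool: return not x
def AND(x: bool, y: bool) -> bool: return x and y
def IMPLIES(x: bool, y: bool) -> bool: return (not x) or y

# "If grass is red, then snow is white" is evaluated entirely
# from the truth values of its atoms.
print(IMPLIES(atoms["q"], atoms["p"]))   # True
print(AND(atoms["p"], NOT(atoms["q"])))  # True
```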

Wittgenstein was strongly influenced by Russell. He and other philosophers of his time (such as Mach and Schlick) formed what became known as the Vienna Circle of linguistic philosophy. Wittgenstein believed, as did Russell, that language can be reduced to elementary propositions that describe the physical world. These propositions are meaningful when they describe facts, such as the propositions of scientific knowledge.

However, in a later book, Philosophical Investigations, Wittgenstein (1968) recognized that language is used differently to give meaning in scientific analysis than in religious, commercial, and other contexts. He argued, therefore, that propositions must be understood within the context in which they are utilized, and introduced the notion of “language games” that people play by using language as a tool in their dealings with their universe (Mounce, 1990).

More recently, Willard Quine (1908-2000) extended Wittgenstein’s notions of the uses of language (Quine, 1951). He criticized the distinction made between synthetic and analytic statements or propositions. Quine addressed the issue of how one knows the world by suggesting that the use of language and the choices one makes in linguistic varieties have a great effect on the way one perceives the external world. Knowledge is therefore a reflection of the use of language.

The linguistic perspective has so dominated the pursuit of the structure of human knowledge that the search became bogged down in issues of the form and usage (functionality) of language. Even the study of cognition has a strong bias toward the role of language in the processes of the human mind.

Noam Chomsky is a leader in the study of linguistics. He challenged existing theories on the structure of language by suggesting that such theories should also explain how language is used in processes of the human mind. His contribution had to do with the distinction between the knowledge of language skills and the specific uses that humans make of these skills. He spoke of “generative grammar,” which is the link between the structure of language and its applications in human cognition (Chomsky, 1972, 2002).

These theories that emphasized the linguistic perspective have sidetracked the pursuit of the nature and structure of knowledge. The focus was on propositions, statements, and the nature and verifiability of complex descriptions of nature. At the level of language, these scholars already started from a complex point: concepts and notions that can be expressed in words and arranged in propositions and statements. Moreover, this line of scholarship led them to believe that the cognitive processing of knowledge occurs in the form of propositions.

At issue was the seemingly conflicting view of how the mind perceives the external world. Does the human mind form a pictorial or analog image of external reality, or does it perceive it by means of statements that indirectly describe the external world (digital representation)? Although this conflict continues to exist, some recent studies have attempted to better explore this issue by focusing on human problem solving and its similarity with how computers operate.

Management, Problem Solving, and Psychology

Comparison of human cognition with the newly developed computers has been a catalyst for a large portion of the scholarship in the areas of decision making, problem solving, and psychology. Some early work after the Second World War was carried out by Herbert Simon (1916-2001) and his colleagues at Carnegie Mellon University. They contributed to the emergence of the fields of artificial intelligence, automata, and robotics, endeavoring to create computer programs and machines capable of reasoning that approaches human thought.

Simon also developed the concept of bounded rationality. He argued that in human (particularly managerial) decision-making processes, it is impossible to gather, absorb, and analyze all the information one would need to make a completely rational decision that would maximize the benefits of that decision (Simon, 1991). Instead, Simon suggested that managers make decisions on the basis of the amount and quality of information that satisfies their level of comfort with the decision and its outcomes, rather than continually pursuing the “maximized” or “optimized” level of decision making; he called this strategy “satisficing.”
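The contrast between fully rational optimizing and Simon’s satisficing can be expressed in a few lines. The sketch below illustrates the general idea and is not drawn from Simon’s own work: the optimizer must evaluate every alternative, while the satisficer stops at the first alternative that meets an aspiration level:

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def optimize(options: Iterable[T], value: Callable[[T], float]) -> T:
    """Fully rational choice: evaluate every option and return the best.
    Presumes the decision maker can gather and analyze all information."""
    return max(options, key=value)

def satisfice(options: Iterable[T], value: Callable[[T], float],
              aspiration: float) -> Optional[T]:
    """Boundedly rational choice: accept the first option that is
    'good enough' relative to the decision maker's aspiration level."""
    for option in options:
        if value(option) >= aspiration:
            return option
    return None  # no option met the aspiration level

# Example: choosing among candidate suppliers scored 0-100.
scores = {"A": 62, "B": 88, "C": 75, "D": 93}
print(optimize(scores, scores.get))        # D: the best, but the costliest search
print(satisfice(scores, scores.get, 80))   # B: the first "good enough" option
```

The satisficer may well miss the optimum (here, D), but it reaches an acceptable decision after examining far fewer alternatives, which is precisely Simon’s point about decision making under bounded rationality.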

More recently, there have been developments in psychology and managerial cognition addressing issues of human perception, cognition, and imaging. Kahneman, Slovic, and Tversky (1982), for example, demonstrated that humans make choices and exhibit preferences in uncertain environments based on mental or psychological representations that generally differ from the logical rules of inference of rational decision theory.
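One concrete formalization that grew out of this research program is the value function of Kahneman and Tversky’s prospect theory (developed in work separate from the 1982 volume cited above). A minimal sketch, using the parameter estimates commonly cited from Tversky and Kahneman (1992), shows the asymmetry between gains and losses that rational decision theory does not predict:

```python
# Prospect-theory value function (Kahneman & Tversky). The parameter
# values are the commonly cited Tversky & Kahneman (1992) estimates,
# used here purely for illustration.

def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Subjective value of a gain or loss x relative to a reference point.
    Losses loom larger than equivalent gains (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# Rational expected-value reasoning treats +100 and -100 symmetrically;
# the psychological valuation does not.
print(round(value(100), 1))    # 57.5
print(round(value(-100), 1))   # -129.5  -> |v(-100)| > v(100)
```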

The State of Affairs

The study of human knowledge and its ramifications into larger systems such as organizational and managerial systems of knowledge has been a multi-disciplinary effort. There has been little cross-fertilization or inter-disciplinary research (Hazlett, McAdam, & Gallagher, 2005; Nonaka & Teece, 2001; Patriotta, 2004).

This has constrained researchers to delve ever more deeply into their own conceptual frameworks and their parochial methodologies. The emphasis has largely been on how knowledge is processed, rather than on an exploration of its elemental structure and its unit of analysis. Adherence to the chain of data-information-knowledge has also led to a focus on the information-knowledge flow. As information scientists extended their exploration into the concept of knowledge (as a natural continuation of the chain), they also inflicted upon the study of knowledge the ideas, methods, and focus of information science.

The combination of a diversified disciplinary landscape and the emphasis on process and later on relevance and applications has created a state of affairs in which knowledge has become an orphaned creature of the massive research effort in the fields of information, cognitive sciences, and management. Even in the emerging literature that specifically targets knowledge management, the focus of research remained within processes, value, and utilization (Agarwal & Lucas, 2005; Cheng et al., 2004; Lockett & McWilliams, 2005).

This book is one small step aimed at remedying this state of affairs. The first two parts of the book focus on the basic unit of the structure of knowledge and on a model of its progress. The latter part of the book follows the extant literature by linking the model thus developed to the world of knowledge systems and their applications. If I defiantly stray from the mainstream in the initial half of the book, I then obsequiously return the narrative to the mainstream body of research on applications and utilization.

What Do We Know?

This intense intellectual effort we have witnessed in the recent past has not resulted in much progress in the quest for understanding the structure of knowledge. The combination of research on linguistics and semiotics, and on rationality and the architecture of reason has been bogged down in trends that lead away from investigation of knowledge, its structure, and its dynamics.

A very revealing book by Zeno Vendler (1972) provides a good illustration of the state of affairs in our understanding of knowledge. Following Chomsky, Vendler attempted to relate language to ideas or mental images. He concluded that:

“One could argue that although this theory might explain the ease children display in learning a language and thus may have some importance for scientific psychology, with respect to the philosophical problem of ideas it offers no solution—it merely pushes the problem further back in time. By suggesting that these ideas are native in individual humans (as we know them now), one does not say anything about the absolute origin of these ideas…In consequence, we are still up in the air concerning their relation to the world.” (p. 217)

Vendler argues that such native ideas are subject to human evolution, and are a tool with which human beings are able to confront and understand their external reality. He quipped: “It is bad enough that we are born as a ‘naked ape’ in the body; why should we start out with a tabula rasa for a mind as well?”

Vendler is correct. Although progress has been made in several ancillary intellectual areas, we have merely “pushed the problem back in time.” Having embarked in recent years on the study of propositions and their linguistic and rational meanings, we are still very much in the dark about what constitutes knowledge, how it is structured, and what its elemental constituents are.

We do recognize that the structure of human knowledge is composed of two major elements: the processing of signals from our environment and the conceptual tools (ideas, categories, etc.) with which we undertake such processing. We also recognize the roles that beliefs, biases, perception, and other psychological phenomena of our mind play in processing inputs from the external world. Finally, we understand the role of language, semantics, and semiotics in portraying and describing the external world and our knowledge of what we consider to be reality.

Emerging Interest in the Working of the Mind

Since the mid-1990s there has been a surge in the levels of both popular and academic interest in the human mind. This resulted in a flurry of books and scholarly publications. This phenomenon may be credited to the converging effects of three factors. The first was the increasing ubiquitousness of medical imaging and diagnostic technologies. There has been a dramatic leap in the uses of such technologies as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission computed tomography (SPECT), and X-ray tomography. These technologies offered much more advanced, more focused, and more discriminating pictures of the brain, its activity levels, and the positioning of selected emotions and cognitive functions within the geography of the brain.

The second factor has been the innovative developments in research and applications in the cognitive sciences. There were increasing discoveries concerning various forms of cognitive impairment, such as Alzheimer’s disease. These advances captivated the public’s imagination and diverted the limelight to the functioning and mysteries of the human brain.

Thirdly, the unparalleled developments in human genetics have contributed to the revolutionary belief in public opinion that humanity—driven by scientific progress—is on the verge of finding cures for many hitherto poorly understood and untreatable maladies. This belief has been extended to the complexities of the human brain and to its deficiencies and pathologies. The phenomenon gained prominence in particular as the “baby boomers” began to age.

The combined impacts of these factors have led to clinical advances in the imaging of the brain and the resulting improvements in diagnostic techniques and successes in the discriminate identification of cognitive impairment. In addition, advances in research into the cognitive sciences and new discoveries in pharmacology have created a host of “miracle” drugs for the treatment of ailments such as depression, eating disorders, and schizophrenia.

As scientists continue their explorations into the workings of the human brain, we are entering, in the early years of the 21st century, a clinical revolution of discoveries in diagnostics and therapeutics.

In parallel, there have been advances in economics, management, and organization theory, discussed further in this book. These disciplinary areas identified the emergence of the knowledge economy and of knowledge workers as the new assets of the post-industrial world. Within the span of a few years, there has been rapid growth in the interest of academics and practitioners in how to harness knowledge and how to construct effective knowledge systems for use by managers and their work organizations. Yet this combination of phenomena still sorely wants a basic understanding of knowledge itself.

The complex structure and ubiquitousness of knowledge systems are some of the key forces that challenge us to “look inside the box” and to gain a better understanding of how knowledge is structured. To this end I embarked on the journey described in this book. The starting point is the next chapter, where I examine the seeds of knowledge: What is the basic unit of that which we call “knowledge?”