Knowledge and Knowledge Systems: Learning from the Wonders of the Mind. Eliezer Geisler. Hershey, PA: IGI Publishing, 2008.
What is Knowledge?
In virtually every aspect of our lives, we deal with knowledge. From the moment of birth, we act as a sieve through which data constantly flows and from which knowledge is created. We are told that “knowledge is power” and the gateway to prosperity (Allee, 2002; Burke, 1999; Leonard, 1998). Since antiquity, people with inquisitive minds have attempted to define, classify, and measure this elusive notion of knowledge.
It is definitely not an easy task. Philosophers and scholars from a variety of disciplines have failed for many centuries to make this notion amenable to taxonomies and explanation. But is the topic of knowledge also of interest to the intelligent general reader? The editor of a respected publisher once expressed doubts that such readers would be interested in an abstract notion such as knowledge. However well it may be illustrated with examples, the editor argued, its focus is so broad a question that it can only be explained with esoteric terminology; the general reader, however curious and educated, would find it uncongenial. I beg to differ. I have much more confidence in the general reader. True, some of the terms in this book are scholarly and perhaps unfamiliar. Terms such as epistemology, although they represent complex phenomena, are neither off-putting nor complicated. Epistemology is the branch of philosophy concerned with knowledge; it derives from two Greek words: episteme (knowledge) and logos (theory). When accompanied by direct and concise narrative, such terms become companions to a delightful understanding of the notions they define. Hawking (2002) made physics and astrophysics a popular theme. Gould (2002) made evolutionary theory an exciting topic of general interest. It has been done, and I attempt to do so in this book.
So, the question, What is knowledge?, can be answered in three different yet complementary streams. The first would explore the way in which knowledge is structured. That is, what are the elements that make it up? What is knowledge composed of? Is it data, information, or some other component? The second stream would examine the nature of the dynamics and progress of knowledge. It would attempt to answer questions such as: How does knowledge progress? How does knowledge accumulate and grow? What principles may we discover that explain the patterns of growth and progress? The third stream would examine the uses of knowledge in the lives of individuals and how they apply their knowledge to their involvement in the economy and social affairs of their communities and their nations. This stream would focus on the ethics of using knowledge and the means by which knowledge is put to the test of human activities (Bali, 2005; Bock, Zmud, Kim, & Lee, 2005; Kankanhalli & Tan, 2005).
Indeed, this third stream has been the focus of attention of scholars for over two millennia. They concentrated on the ethics of knowledge possession and utilization, and on the ways and means in which it serves as a tool for human action (Artigas & Slade, 1999; Cassirer, 1950; Feyerabend, 2000; Kant, 1999). They formulated theories and arguments that explored the role played by knowledge in religion, ethical behavior, and social involvement of individuals and organizations.
But, despite all this intellectual effort, very little was learned about the structure of knowledge and its progress. As the reader will find in the following pages, only very recently has there been some intellectual work toward theories of the evolution of knowledge and its more fundamental classification. For instance, Kant (1999), the influential German philosopher (1724-1804), critically addressed the nature of knowledge. He proposed a structure of knowledge composed of two distinct forms of “knowing” the world. One was empirical statements or propositions, which depend upon sensory perception (what Kant called “intuitions”). The other was a priori propositions (categories), which are fundamentally valid and are not the result of sensory perception. But Kant stopped short of asking: What are the components or basic elements of knowledge? How is knowledge created in the human mind (beyond intuitions and categories)? How does knowledge grow, accumulate, and progress? To Kant’s credit (and this is a good indicator of how broad and conceptual his work was), he influenced political theorists such as Karl Marx, and his philosophical work on knowledge influenced a substantial number of theorists, such as Hegel, Fichte, and Cassirer.
Kant and many other scholars who followed were engaged in this tremendous effort to classify knowledge and to try to understand it as a function of human existence. Although some of my colleagues will certainly disagree with the following statements, this effort remained very broad. The literature, since the Age of Enlightenment in the seventeenth and eighteenth centuries, has focused on arguments, counter-arguments, and constructs of knowledge that, although very insightful, were nevertheless at a high level of abstraction (Jacobs, 1999; Meek, 2003). The tasks of defining knowledge by its components and measuring its growth and progress remained woefully unfinished. It is the first two streams that I cover in this book: first the structure of knowledge, followed by a discussion of its progress.
It is no coincidence that there is an increased interest in defining knowledge, and that such a definition includes data and information as key ingredients of the answer to what is knowledge. Let me put into perspective the recent developments in the scientific inquiry and applications of knowledge.
Why the Recent Surge in Interest?
Much has been published in the late 1990s about the “information age” and the “information society.” Indeed, the twenty-first century began with the effects of the information age spreading to almost every corner of the world and to virtually every aspect of economic and social living. But this phenomenon began almost half a century earlier.
After the Second World War, the invention and proliferation of computers generated changes in the way industry and government conducted their operations. Initially, in the 1950s and 1960s, computers (or, as they were later called, “information and telecommunication technology”—ITC) were confined to large scientific, business, and government applications for the purpose of “data crunching.” Scientists dealt with massive amounts of data they needed to calculate, and business and government organizations struggled with accounts payable, receivables, payrolls, and inventory control. All of this was confined to what we call “backroom operations,” hidden from the public eye but crucial to the routine activities of large organizations. The software at that time was relatively uncomplicated. Languages had emerged with names such as Fortran (science-oriented) and COBOL (business-oriented). Later, other languages began to appear (BASIC, IBM’s PL/1, and the C family of languages).
The impetus for the emergence of the information revolution came in the form of three converging phenomena that began in the 1970s. First, the performance of computing hardware soared, due to the introduction of integrated circuits. This, in turn, led to a continuous relative decline in the cost of computing (Moore’s Law). The second phenomenon was the invention and rapid proliferation of desktop computing (personal computers, or PCs). Invented in the 1970s, these machines soon appeared in corporate and government offices, thus venturing beyond the “backroom” to the “front offices” of managers in all the functional departments of their organizations. A tremendous boost to the diffusion of PCs and to the ease of their usage was delivered by Microsoft and its widely adopted operating system. The third phenomenon, partly a consequence of the first two, was the advent of hypertext software, which facilitated the proliferation of electronic communication in the form of the Internet and the World Wide Web. The combined result was an enormous spike in the level of investments in ITC by business and government organizations. In the last fifteen years of the twentieth century, companies invested heavily in information and telecommunication technologies and made them ubiquitous throughout their organizations. More hardware, better software, and growing maintenance budgets extended these systems into such areas as manufacturing, marketing, and even customer relations.
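The compounding effect described by Moore’s Law can be made concrete with a back-of-the-envelope calculation. The short Python sketch below is purely illustrative: it assumes a doubling of computing capability (equivalently, a halving of the cost of a fixed amount of computing) roughly every two years, a common rule of thumb rather than a figure taken from this chapter.

```python
# Illustrative arithmetic only: Moore's Law treated as a rough halving of the
# cost of a fixed amount of computing every two years (the period is an assumption).
def relative_cost(years_elapsed, doubling_period_years=2.0):
    """Cost of a fixed amount of computing relative to the starting year."""
    return 0.5 ** (years_elapsed / doubling_period_years)

for year in (0, 10, 20, 30):
    print(f"After {year:2d} years: {relative_cost(year):.4%} of the original cost")
```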
We sometimes tend to look at the sudden rise and precipitous fall of the Internet-based companies of the 1990s with awe, perhaps even incredulity. How could all this happen in one decade? Yet this story of “boom and bust” obscures the fact that the basic elements of information technology and the Internet were not the only factors that hastened the information age. As electronic networking became a reality in businesses, government, and private homes, investments in these systems continued to soar. The more we had, the more we used, and the more we wanted of the digital world that had become an integral and indispensable part of our lives.
As a result, business and government organizations were faced with the annoying problem of huge investments in information technologies, at a rate of growth that seemed intractable and unstoppable. Simultaneously, they began to question the benefits that supposedly were being derived from these runaway expenditures.
The Productivity Paradox
Even before the attempt to link investments in information technology to productivity, economists and other scholars pondered the link between investments in research and development (R&D) and industrial productivity. In general, it was very difficult to show with actual statistics that investments in technology engender corresponding increases in corporate productivity. Although in the mid-1980s there emerged a widespread view that we were entering “the age of productivity” in the U.S. and world economies, the numbers simply did not add up. Productivity is measured as real output per worker. When multiyear investments in information and telecommunication technologies were correlated with gains in productivity in the American economy, the results failed to show an increase in productivity that could be attributed to these investments. This phenomenon was termed the “productivity paradox.”
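The form of the analysis behind the paradox can be sketched in a few lines. The Python fragment below is only an illustration of the measurement just described—real output per worker correlated against multiyear ITC investment; every number in it is hypothetical.

```python
# Illustrative sketch of the productivity measurement described above.
# All figures are hypothetical; the point is the form of the calculation,
# not any empirical result.
from statistics import correlation  # available in Python 3.10+

itc_investment = [100, 130, 170, 220, 290, 380]       # index of ITC spending by year
real_output    = [5000, 5060, 5030, 5045, 5090, 5070] # real output, hypothetical units
workers        = [100, 101, 100, 101, 102, 101]       # workforce size

productivity = [o / w for o, w in zip(real_output, workers)]  # output per worker
r = correlation(itc_investment, productivity)
print(f"Correlation of ITC investment with productivity: r = {r:.2f}")
```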
If indeed such a paradox exists, there are several explanations for it. Some economists believed that despite massive investments in ITC (which should have boosted plant and office productivity), the costs of running businesses had increased in those years to an extent that they tended to offset gains from ITC investments. Another explanation pointed to the cost and effort in implementation and absorption of ITC into corporate routines—so that gains in productivity are delayed or overshadowed by the “learning curve” effect of the introduction of new technologies. Another explanation focused on the apparent problems in measuring productivity as a consequence of investments in ITC. That is, we lack adequate instruments to measure accurately the link between investments in ITC and the resulting changes in productivity.
From Information Technology to Knowledge Management
The “productivity paradox” had two important outcomes. As the 1990s came to a close, it became abundantly clear to both scholars and managers that investments in information technologies are a fixture in economic life. Regardless of the level of benefits they produce, the trend for the foreseeable future is for more, not less, investments in these technologies. Therefore, if productivity is not the best indicator of the benefits from ITC, we must look elsewhere.
All this led to the second outcome: the shift to a focus on ITC and its benefits as intellectual assets. The concept is hardly new, but its application gained vigor in the late 1990s as a strong alternative explanation of the contributions derived from information technologies. The basic idea was the application of the capabilities of current information technologies to collect, assemble, store, manipulate, and diffuse business-related knowledge. The quest thus intensified to understand how knowledge can be harnessed in corporate missions and activities. If the 1990s were the period in which ITC systems were created and introduced into organizational operations, the twenty-first century would be the age in which we put these systems to use so that human and organizational knowledge can be utilized as an economic asset. This seemed a natural extension of the previous period, in which investments in ITC had laid the foundation for such a pursuit of the role of knowledge in organizations.
The net result was a massive entry by information and management scholars into the study of knowledge as an organizational and managerial discipline. These scholars took over where philosophers had dwelt for centuries. Their focus was on applications, manipulation, results, and efficiencies. The questions they asked had shifted from “What is knowledge and how does it affect beliefs, behavior, and ethics?” to “How can knowledge become a factor in organizational success?”
This historical account places the development in knowledge management within the context of the key trends in the study of information technologies and their applications. As a nascent effort, the literature on knowledge management shows very little focus on the need to identify or measure the elements of knowledge. Much of the effort is on taxonomical attempts to get a general idea of what the types of knowledge are, mainly in the framework of organizational life.
The Quest for Knowledge as an Asset
What I described above is the current effort to understand human and organizational knowledge as a technological and economic asset. Defined in terms of intellectual property, knowledge is thus viewed by current scholars as a variable similar to capital, land, and equipment—as a factor in the production of goods and services. Broader terms such as “The Knowledge Economy” and “The Age of Knowledge” are even stronger indicators of this trend.
Yet, the process by which knowledge does impact the economic behavior of organizations is not adequately understood. The quest remains within the limited region of very broad models of the utilization of knowledge (and information) by economic and social organizations. Knowledge is viewed as a quantity, at best defined in terms of “nuggets,” propositions, and predicates. The focus is not on what constitutes knowledge, but on how it flows, how it is absorbed, and how it is implemented to create wealth.
The operational answer to this quest is the knowledge management system (KMS). Designed and established as an organizational creation, such systems are supposed to be the vehicles for knowledge exchange, as well as the physical container for the storage and flow of knowledge. Synnott (1987) argued in the early days of this quest that the information revolution had three phases: hardware (1980-1985), software (1985-1990), and knowledgeware (1990s and beyond). Encapsulated in management systems, knowledge could then be utilized, directed, and put to good use as are other forms of economic factors, such as capital and land. Once hardware and software were in place, the next logical step would be to harness knowledge. This did not happen as well or as fast as was hoped. Knowledge management systems are a far cry from the aims and dreams of their creators. Rubenstein and Geisler (2003), for example, listed several categories of factors that seem to act as barriers to the successful performance of these systems. They include factors inherent in the system itself (focus, search capabilities), human factors (such as fear and unwillingness of organizational members to use them), and organizational factors (how the system is implemented).
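To make the notion of a container for the storage and flow of knowledge more concrete, here is a deliberately toy sketch in Python. It does not describe any system studied by Rubenstein and Geisler (2003); the class names and the naive keyword search are my own illustrative assumptions, and the weakness of that search echoes the “search capabilities” barrier mentioned above.

```python
# A toy sketch (not any particular commercial KMS) of the storage-and-retrieval
# core that knowledge management systems promise: capture an item, tag it,
# and search it later. Names and fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class KnowledgeItem:
    author: str
    text: str
    tags: set[str] = field(default_factory=set)

class KnowledgeRepository:
    def __init__(self):
        self._items: list[KnowledgeItem] = []

    def contribute(self, item: KnowledgeItem) -> None:
        """Store an item shared by an organizational member."""
        self._items.append(item)

    def search(self, keyword: str) -> list[KnowledgeItem]:
        """Naive keyword search -- the weak 'search capability' barrier noted above."""
        kw = keyword.lower()
        return [i for i in self._items
                if kw in i.text.lower() or kw in {t.lower() for t in i.tags}]

repo = KnowledgeRepository()
repo.contribute(KnowledgeItem("engineer A", "Vendor X parts fail in humid climates", {"quality"}))
print(len(repo.search("humid")))  # -> 1
```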
Thus, the failure to create and implement truly workable knowledge management systems has led to an even more vigorous focus on the effectiveness and applications of these systems. This, of course, is at the expense of an effort to better measure the elements of knowledge and to improve our understanding of how knowledge is structured. To an extent, the previous effort at metrics of information and information theory has not been fully translated into a similar program aimed at knowledge. It is lamentable that such a natural progression has not occurred. Knowledge, in the context of managerial and organizational studies, has remained merely a variable in the search for performance and effectiveness. I return to this topic in Chapter XIII to revisit the applications of knowledge systems as seen through the prism of the model I advance in this book of how knowledge is structured and how it progresses.
Bridging Philosophy and Management
The quest for the definition of knowledge has now assumed two seemingly independent routes. Traditional epistemology (the study of knowledge) is a mature area of philosophical inquiry. Simultaneously, as I described above, management scholars are increasingly addressing the topic of knowledge and its role in organizations. It seems to me that there is a need to bridge the two streams of intellectual pursuit. What is it that we know about knowledge from philosophers, and how can we join this with what we know about knowledge from management scholarship?
A review of the relevant literature will show that there has been little exchange between the two streams. Partly because of disciplinary isolationism and the different objectives of their pursuit, the two streams lack workable bridges that would allow them to share ideas and methodologies. Some approaches to knowledge definition are at the margins of the two streams. For example, research on epistemic value, information theory, and information ethics may produce results that would be useful to scholars who study knowledge management systems.
In this book I embark on the path of trying to bridge the two streams. If, I pondered, we can advance a plausible model of the structure and growth of knowledge (as an outgrowth of current philosophical models) and connect this model to knowledge systems, the claim to relevance will be satisfied. Thus, the definition of knowledge, as a start, will be based on prior art in philosophy, but will also be developed to frame a model with tremendous implications for the design and application of knowledge-based systems. The first step will require a review of the widely used typology of knowledge and its constituents.
The Great Taxonomy: Data, Information, and Knowledge
Anytime we approach the construct of knowledge, it will inevitably be linked to information or described in terms of information and its attributes. The most prevalent taxonomy is the triad: data, information, and knowledge. Presented as a hierarchy of the components of information systems, this taxonomy defines data as “streams of raw facts representing events occurring in organizations or the physical environment before they have been organized and arranged into a form that people can understand and use.” I prefer the following: “Data are observations on the physical or human world coded in a form that can be stored, manipulated, transferred, and shared.”
However we define data, these are items that describe the world in a form that people and machines can both understand and use. The utilitarian approach distinguishes “just raw facts, or observations” from data, which are usable within a well-defined context of human or machine capabilities. If, for example, a tree falls in the forest, this fact by itself does not constitute a datum unless it can be both understood and potentially used. This means that, by force of the definition, a data point or unit (datum) can only be so defined in relation to an entity external to the datum: a biological being or a machine capable of understanding it and possibly also using it.
Information is usually defined as “data that have been shaped into a form that is meaningful and useful to human beings” (Laudon & Laudon, 2002). This definition contains two parts. The first describes the flow or volume of information that moves through a channel. This is flow without inherent meaning, simply measuring the capacity of a channel to move volumes of data, also known as “syntactic” information. The second describes an inherent meaning (semantic) to the flow of data.
These definitions are important inasmuch as they guided the research that followed. For example, Shannon’s theory of information initiated a stream of research into the flow of information, transmission channels, and the mathematical representation of communication between humans, between machines, and between humans and machines. For Shannon, the unit of information is the “bit” (a zero or a one) in the digital world of computers. He showed how to compute the capacity of a channel in bits per second, thus providing a quantitative model of communication and information transmittal.
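A small numerical example may help fix the “syntactic” sense of information. The Python sketch below computes Shannon entropy—the average number of bits per symbol a source produces—without any reference to what the symbols mean; it is a minimal illustration, not a treatment of channel capacity itself.

```python
# A minimal numerical sketch of Shannon's "syntactic" view of information:
# entropy in bits of a source, independent of any meaning the symbols carry.
from math import log2

def entropy_bits(probabilities):
    """Average information per symbol, in bits (Shannon entropy)."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

fair_coin = [0.5, 0.5]
biased_coin = [0.9, 0.1]
print(entropy_bits(fair_coin))    # 1.0 bit per toss
print(entropy_bits(biased_coin))  # about 0.47 bits per toss
```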
As soon as the definition includes conceptual meaning in the information, the result is a dependence on the interpretation of information and the creation of “meaning”—to and by a knower who is able to make such an interpretation. In turn, this ability requires some criteria for analysis of the information, in order to make sense of it and extract its meaning. Thus, the meaning is not embedded in the information, but in the receiver (or knower), who has the analytical or rational tools to extract meaning.
Knowledge is often defined as “actionable information” (Nonaka & Takeuchi, 1995). Here there is a more extensive reliance on the external entity, who not only must be able to extract meaning, but also must have the capability of integrating such information into a plan of action—that is, being able to act on the basis of the information received. This definition is as concise as it is broad. It does not include a clear reference to what action is, nor to how information would be used in action-taking by the knower.
What then do we know about knowledge and its unit? From the taxonomy of data-information-knowledge, we have learned very little. This taxonomy fails to offer a robust hierarchy of complexity or a tractable flow from the elemental to the compound. The chief difference that separates data from information and from knowledge is the reference to an external entity and to the potential of what we are categorizing to perform a given function for that external entity (the knower). To summarize, data are raw items describing reality in a form that can be understood. Once we add “meaning” to this definition, we are in the realm of information, whereas when we add a tenuous relation to the ability to use it within a given scheme of action, we are describing knowledge.
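The distinctions summarized above can be caricatured in a few lines of code. The example below is only a toy rendering of the data-information-knowledge hierarchy; the temperature reading, field names, and action rule are my illustrative inventions, not definitions drawn from the literature cited.

```python
# A toy rendering of the data -> information -> knowledge hierarchy discussed
# above. The example, field names, and threshold are illustrative assumptions.
datum = 38.9  # a raw reading: understandable, but context-free

# Information: the datum placed in a meaningful context for a knower.
information = {"patient": "hypothetical #12",
               "measurement": "body temperature (C)",
               "value": datum}

# "Actionable information": a rule that lets a knower act on it.
def knowledge(info):
    if info["measurement"] == "body temperature (C)" and info["value"] >= 38.0:
        return "fever: consider clinical evaluation"
    return "no action indicated"

print(knowledge(information))
```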
There is a tremendous gap between Claude Shannon’s definition of bits, and the trinity of definitions from data to information to knowledge. The over-reliance on the external entity brings with it too many variables to consider. For example, who is the entity who will make information actionable? How would such a transformation occur? Under what circumstances, constraints, and conditions will there be another round of interpretation and extraction of meaning as we move from the level of information to that of knowledge? Are these levels hierarchical? How we long for the simplicity of Shannon’s Theory where information resembles a physical quantity amenable to mathematical measurement.
“The Big Rift”: Severing the Tie Between Information and Knowledge
The current literature conceives of knowledge as part of a continuum of data-information-knowledge-wisdom. This notion has various interpretations and definitions, but consistently positions the ontology of knowledge as a “transformation” or as a more complex conception of information (Lueg, 2001; Tsoukas & Vladimirou, 2001). Kakabadse, Kakabadse, and Kouzmin (2003) reviewed the literature and offered the following definitions: “Data reports observations or facts out of context that are, therefore, not directly meaningful” (p. 77). Information is largely defined as “placing data within some meaningful context, often in the form of a message” (p. 77). By extension, knowledge is defined as “information put to productive use.” Thus, when one acts upon information, extracts value from it, and makes it useful for a given end, knowledge is generated. Finally, “through action and reflection one may also gain wisdom. Knowing how to use information in any given context requires wisdom” (p. 77).
These and similar theoretical and empirical definitions of knowledge confine the notion to the analytical and conceptual space of information. They also resort to an external anchor, such as the utilization of information, as a key dimension of what constitutes knowledge (Kankanhalli & Tan, 2005; Zack, 1999). The outcome is that knowledge is simply the processing of information into a form that produces value to a user and can be utilized for a purpose (Guah & Currie, 2004; Xirogiannis, Glykas, & Staikouras, 2004).
I reject this constrained definition on two grounds. The first is the lack of a clear boundary between the conceptions of information and knowledge. Where does information end and knowledge begin? Even when construed as distinct stages in the flow or chain (from data to wisdom), these stages lack empirical boundaries that distinguish between what constitutes information and the subsequent notion or stage of knowledge (Hunt, 2003).
The criterion of “productive use” or utilization towards a given purpose fails to sufficiently define knowledge as distinct from information. When we define knowledge as a variant of “useful information,” this exercise in terminology does not make an advance towards a distinct concept. Information and useful information are similar definitions of the same notion.
Second, I reject the definitional link between information and knowledge because it prevents knowledge from being defined as an independent entity, with its own ontological integrity. As a recombinant version of information, knowledge is not conceived of as a distinct notion. The problem is exacerbated because the literature on knowledge management considers knowledge to be a reified concept possessing a “stand-alone” format (Day, 2005; Muthusamy & Palanisamy, 2004; Reisman & Xu, 1992; Zack, 1999). Although the literature on knowledge management valiantly attempts to disengage itself from the information systems and information research literature, it remains bounded by the information-laden definition, hence by the methodology of this more established area of research (Hendriks & Vriens, 1999; Pritchard, Hull, Chumer, & Willmott, 2000). When knowledge is viewed as intellectual “assets” or “capital,” there is a tendency to reify it as a distinctly measurable entity with organizational antecedents and strategic implications (Bontis, 2001; Glazer, 1998).
Thus we find ourselves inexorably caught in the claws of the ontological conception of data and information. Yet we fail to demonstrate where knowledge begins and what independent form it assumes within the information-bound perspectives. Finally, how can the nascent literature on knowledge and knowledge management address the metrics of knowledge without being shackled to the information systems literature, its methodology, and its research topics (Alavi & Leidner, 2001; Earl, 2001; Grover & Davenport, 2001; Martin, 2004)?
The answer to effecting a “rift,” or disengagement, from the information framework is to assume a radically different perspective on the ontology of knowledge. In the current conception of knowledge, the notion of information—extended to knowledge—is accepted as it was devised and developed by information scientists. There is little, if any, conceptual modification or novel contribution to the notion of information-turned-knowledge, except for minor changes to the definition.
The model of the structure and generation of knowledge proposed in this book contends that the starting point of a flow in which knowledge and information participate is knowledge, not data or information. What we know originates in the human mind, where sensorial inputs are clustered, and where perceptible distortions are conjoined to form the basic units of knowledge.
The cognitive processes of the mind in which sensorial inputs (such as a taste, touch, or sound) from our five senses are clustered into knowledge are entirely different from the current conception of information being transformed into knowledge. Rather, humans possess the ability (albeit not always the will) to share, transfer, and transcribe their knowledge in a form usable to others. This transcribed and shared form of knowledge may be termed “information,” but its origin and ontology lie in the knowledge clustered in the mind from inputs derived from the five senses. I do not define such inputs as “data” because, in my opinion, they are not what we commonly define as data, and also because of the bias inherent in the history of data processing and information systems.
By the current definitions, “data” are representations of “facts” or “ideas,” whereas sensorial inputs are a very crude form of cognitive manipulation of inputs from the body’s internal and external environments. The mental conjoining of a flicker of light, a sound, and a sharp momentary pain is different from a “fact” that there are six cars on the road or that freedom is a right of all people.
To grasp these facts and to make sense of them and the “information” they form (in the data → information flow), there is a need for certain structural foundations that will allow such processing. This has been the core of the philosophical and later informational search for such a process, architecture, and categories of the mind (Dalkir, 2005; Kant, 1999; Tsoukas, 2005; Vail, 1999).
These attempts to explain how data converge into information and into knowledge have been largely unsuccessful (Perry, 2005). When the approach is reversed and knowledge is viewed as the initial ontology generated by the human mind, then the constructs of data and information become artificial notions that are contingent upon the processes by which humans diffuse, share, and exchange knowledge. Data and information in the model proposed in this book are the reification of the knowledge being transferred from one individual to another (and, by extension, from an individual to an organized knowledge system and between such systems).
The direction of the flow is now reversed. Knowledge is the origin of cognition, formed as a clustering of sensorial inputs (see Chapters IV and V). Once such clustering forms what we term “knowledge,” and it becomes amenable to transfer and to sharing outside the mind, we may define whatever is transferred as “information.”
To transfer “facts” or “information” that there are six cars on the road requires the foundational knowledge of what constitutes cars, road, the number six, and any implications such “facts” or “information” are conveying. In the human mind the only way to form knowledge—hence to be able to absorb such external “information”—is by converting sensorial signals into clusters, which then form the “nuggets” or elemental units of knowledge.
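As a purely numerical metaphor—not the author’s formal model, which is developed in Chapters IV and V—one can picture the clustering of sensorial signals as the grouping of nearby points. In the toy Python sketch below, the “signals,” the distance threshold, and the greedy grouping rule are all my own illustrative assumptions.

```python
# Purely a numerical metaphor for the clustering of sensorial inputs into
# "nuggets", not the author's formal model. Inputs are toy 2-D vectors and
# the grouping rule (a distance threshold) is an illustrative assumption.
from math import dist

signals = [(0.1, 0.2), (0.15, 0.22), (0.9, 0.8), (0.88, 0.83), (0.12, 0.18)]

def cluster(points, threshold=0.2):
    """Greedy single-pass grouping of nearby signals into clusters ('nuggets')."""
    nuggets = []
    for p in points:
        for nugget in nuggets:
            if dist(p, nugget[0]) < threshold:
                nugget.append(p)
                break
        else:
            nuggets.append([p])
    return nuggets

print(len(cluster(signals)))  # -> 2 "nuggets" from five raw signals
```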
Although the existence of a “flow” of sequential events (such as data-information-knowledge-wisdom) is an attractive construct, once the direction of this flow is reversed and knowledge is the initial construct to be generated, there is no need for a flow of notions. Whether such a flow obeys criteria of complexity (from the simple to the complex) or of temporal distinction (from the past to the present), it is a superfluous and irrelevant explanation of the transfer and sharing of human knowledge.
Tacit and Explicit Knowledge
Ever since Polanyi (1966) there has been a widespread acceptance of the distinction between “tacit” or subjective knowledge, and “explicit” or objective knowledge. Nonaka and Takeuchi (1995) extended Polanyi’s typology. They defined tacit knowledge as a mode of experience, being simultaneous and analog (practical), whereas they defined explicit knowledge as rational, sequential (there and then), and digital (theoretical).
Michael Polanyi (1891-1976) argued that humans “know more than they can tell,” thus posing a challenge to subsequent researchers to identify the processes, means, and ways in which we can exercise “knowledge conversion”—from tacit to explicit. The differences between tacit and explicit knowledge (what we know and what we are able to transcribe and to share) became the fundamental content of many models attempting to describe such transcription or conversion (Chua, 2002; Earl, 2001; Eddington et al., 2004; Geisler, 2006). Nonaka and Takeuchi (1995) suggested four modes of conversion: socialization (from tacit to tacit), externalization (from tacit to explicit), combination (from explicit to explicit), and internalization (from explicit to tacit). Chua (2002) proposed a taxonomy of organizational knowledge in which the classification scheme entails individual and collective knowledge, and these are further classified into tacit and explicit knowledge.
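Because the Nonaka and Takeuchi typology is a simple two-by-two scheme, it can be written out directly as a lookup table. The sketch below merely restates their four modes in Python; the function and variable names are my own.

```python
# The four conversion modes of Nonaka and Takeuchi (1995), written out as a
# lookup table over the (source form, target form) of knowledge.
CONVERSION_MODES = {
    ("tacit", "tacit"): "socialization",
    ("tacit", "explicit"): "externalization",
    ("explicit", "explicit"): "combination",
    ("explicit", "tacit"): "internalization",
}

def conversion_mode(source: str, target: str) -> str:
    """Name the mode that converts knowledge from one form to the other."""
    return CONVERSION_MODES[(source, target)]

print(conversion_mode("tacit", "explicit"))  # -> externalization
```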
All knowledge is tacit, the result of the clustering of sensorial inputs as they are cumulatively created in the mind. Hence, the distinction between tacit and explicit knowledge is at best an artificial differentiation. The terms “tacit” and “explicit” simply denote a temporary location of knowledge once it is defined, rather than describing a different ontology.
If knowledge is still in the form of clustered sensorial inputs, encloistered in the human mind, then the distinction is irrelevant, because at this point there is no apparent means or mode of accessing this knowledge. If, however, the knowledge has been shared, in the form of nuggets, for example, then it simply exists in the accessible universe of human interactions and communication. Thus, whatever is or is not encloistered or embedded in the human mind is irrelevant, immaterial, and inconsequential to shared knowledge. We now need to better define the clustering of knowledge in the mind into elemental units of knowledge, and to adequately describe the means by which knowledge is shared among minds. Perhaps we may even arrive at the conclusion that whatever is clustered in the mind is indeed “knowledge,” and whatever is shared is not knowledge but should be termed “information” or given some other designation.
“Tacit” knowledge may at best refer to the potential of the human mind to generate knowledge from sensorial signals, and to the repository of such clustered formations in the human mind over the person’s lifetime. This may mean that the longer a person lives and the mind operates, the more knowledge is generated and cumulatively deposited in the individual’s knowledge base in the mind. This does not take into consideration the quality and other attributes of the content of this knowledge base. One must also take into account the levels of attrition and loss of content due to the aging of the brain and other detrimental biological factors of decay.
Such “tacit” knowledge is only meaningful when it can be measured, and useful when it can be accessed and shared. Much of the intellectual effort of scholars and practitioners in the areas of knowledge and knowledge management has been devoted to improving the flow from “tacit” to “explicit” (Alavi & Leidner, 2001; Bock & Kim, 2002; Davenport & Prusak, 1998; Lee & Choi, 2003; Rothberg & Erickson, 2004; Sharp, 2003). On the whole this effort produced puny results, leading to several crises in knowledge management and to the continuing failure of many organizational knowledge systems to perform at the level promised by the discipline. As I describe in Part IV of this book, individuals in organizations are reluctant and even averse to sharing what they know.
Michael Polanyi was correct when he asserted that people know much more than they share, but for the wrong reason. We fail to share more than a small portion of what we know not because we refuse to do so (for personal, organizational, or other reasons), but because we are, by design, unable to do more than we have been doing. Hence, attempts to “improve” such a phenomenon of transfer and sharing of knowledge are bound to fail.
Where Knowledge Resides
The effort to define what we know and what we understand by the notion of “knowledge” also leads to questions about the locus of knowledge. Who has it and where does it reside? The notion of the knower will be amply discussed in this book. This notion has two venues: human cognition in the mind, and machines that host knowledge.
The appearance of computers in the second half of the twentieth century introduced a new dimension to our view of candidates for hosting knowledge. In addition to their ability to receive and process information, these machines engendered a new area of study—human-machine interaction—that raises many complex questions. Among these are: How well can machines perform as information (or knowledge) processors when compared with the human mind? How similar are these machines to a human mind? How do humans interact with these machines?
In 1936 Alan Turing proposed an effective method of computation known as the “Turing Machine.” Independently, Alonzo Church (1941) had offered a similar conjecture. A Turing Machine is a notion of a computing device or method able to compute any recursive function. The impact of this notion of machine capability has been substantial. It led to a growing appreciation of the power of machines to harness and to process information, hence knowledge. A natural development was the suggestion that the human mind is essentially a Turing Machine. Conversely, with adequate programming that emulates human mental processes, machines (computers) could eventually think. This concept is known as “machine functionalism.”
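A Turing Machine is simple enough to sketch in a few lines: a tape, a read/write head, and a table of transitions. The minimal Python simulator below is illustrative only; the example machine (which flips every bit and halts at the blank symbol) is my own choice, not one discussed in this chapter.

```python
# A minimal sketch of a Turing Machine as a table-driven device: a tape, a
# head, and a transition table mapping (state, symbol) -> (state, write, move).
def run_turing_machine(tape, transitions, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Example machine: flip 0 <-> 1 moving right; halt on the blank symbol.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("10110_", flip_bits))  # -> 01001_
```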
The human mind may thus be modeled as a machine. Mental states are viewed as “automatic formal systems” whose outcomes are continuously interpreted as a system of relationships. Knowledge is produced as a result of these computations, which define mental states at the level of calculations—similar to what a Turing Machine would be doing.
The computer model of the human mind was extended to the area of exploration known as artificial intelligence (AI). Since the early 1980s, a considerable effort in both financing and intellectual resources has been expended in developing AI devices, components, and models. An earlier incarnation, known as “expert systems,” consisted of computer programs aimed at performing complex tasks, to the extent that meaning, concept formation, and understanding could be achieved. In medicine, for example, expert systems such as the pioneering Mycin were designed to provide a diagnosis of a disease based on symptoms, the medical history of the patient, and the results of relevant tests performed on the patient.
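The core of such rule-based expert systems can be suggested with a toy fragment. The Python sketch below is in the spirit of Mycin-style rules but does not reproduce any actual medical knowledge base; the rules, symptoms, and certainty values are invented purely for illustration.

```python
# A toy illustration of the rule-based reasoning behind early expert systems
# such as Mycin. Rules, symptoms, and certainty values are invented and do
# not reproduce any real medical knowledge base.
RULES = [
    ({"fever", "stiff neck"}, ("possible meningitis", 0.7)),
    ({"fever", "cough"}, ("possible respiratory infection", 0.6)),
    ({"rash"}, ("possible allergic reaction", 0.4)),
]

def diagnose(observed_symptoms):
    """Return every conclusion whose conditions are all present, with its certainty."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= observed_symptoms]

print(diagnose({"fever", "cough", "fatigue"}))
# -> [('possible respiratory infection', 0.6)]
```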
Turing himself had suggested a test by which a machine could be defined as possessing humanlike intelligence. The machine would be interviewed by an educated human being (without direct physical contact between human and machine), and if the interviewer is unable to tell whether the entity on the other side is human or machine, the machine passes the test of intelligence.
The basic notion of this effort in extending cognition to machines is that “intelligent beings are semantic engines—in other words, automatic formal systems with interpretations under which they consistently make sense” (Beedle, 1998, p. 243). But, although AI systems have become more sophisticated, there is still a very wide gap between their performance and human intelligence capabilities. In over three decades of the existence of expert systems, those that survived are used only for limited tasks rather than in a broad capacity as sources of knowledge. In particular, this applies to such systems as programs that contain and process knowledge: in effect they contain little knowledge, as defined here. None has passed the “Turing test,” nor have computers achieved a level of thought comparable to the human mind. More on these machines, expert systems, and artificial intelligence appears in Part IV of this book, where I consider the relation and impact of the structure and progress of knowledge on data and knowledge bases and systems.
How Much Do We Really Know About Knowledge?
The effort described above did little to bring together the stream of scholarship by philosophers engaged in epistemology with that of the nascent field of knowledge management. Both streams operated at the level of propositions, language, concepts, and predicates. Epistemologists explore the truth of these propositions, their value, and their role in ethics—among other goals. Knowledge management is concerned with the usage and effectiveness of systems of knowledge. Neither stream ventured to the level of the elemental structure of knowledge. To borrow an analogy from physics, they remained in Newtonian physics, unwilling or unable to delve into sub-atomic explorations and the realm of quantum mechanics.
The issue of definitions, as elaborated in the trinity classifying data, information, and knowledge, is a practice largely favored by management scholars and practitioners. The definitions lead to a better operationalization of concepts, processes, and practices. Therefore, the focus on definitions below the traditional level of higher discourse by epistemologists can be credited to the advent of management scholarship and its pursuit of data and information as components of organizational analysis. Such emphasis was, and continues to be, guided by organizational variables of concern, to the extent that there are few, if any, incentives to explore the basic components of knowledge—so long as the existing definitions adequately support models and notions of organizational performance and similar analytical pursuits.
In the current state of affairs, we find two streams of research centered on distinct objectives, with inconsequential confluence or cross-fertilization. The knowledge management “movement” is concerned with its managerial and organizational issues, in pursuit of applications, utility, and the exploration of the systemic attributes of knowledge. It is also overly concerned with transforming “tacit” knowledge into “explicit” knowledge. This stream seems bent on inventing its own notions, research questions, hypotheses, and terminology. Thus far the harvest from this endeavor is relatively puny. We know somewhat more about knowledge systems, but little, if anything, more about knowledge itself: its structure and its progress.
What Else Needs to Be Known About Knowledge
I am obliged, in a manner similar to those engaged in knowledge management, to create some generic terms and to define or redefine basic notions of knowledge structure and its dynamics. From epistemology we learned of knowledge as a rational intercourse, conducted and exchanged in language, and argued within human experience and the ontological aspects of what is known. Combined with outcomes from the management quarters, the result is a discourse of knowledge at high levels of consideration, and as a utilitarian “thing” employed by the rational mind and by the organizations that such a mind can create.
What we do not know at this juncture, and what needs to be known about knowledge, can be summarized in three items. First, we need to know how knowledge is structured, beyond the trinity of hierarchical definitions of data, information, and knowledge. We need to know how knowledge is created, what its basic components are, and whether we can regard it as an ontological unit rather than an assembly of lower-level components. Can knowledge be considered an entity with its own structure and characteristics, or are we merely reifying an agglomeration of, say, items of useful information? This question has been amply discussed by epistemologists. What we need to know is centered around the problem of the structure of knowledge, assuming that it is indeed ontologically viable.
Second, we need to know how knowledge progresses and how it grows in size and magnitude. If knowledge is indeed ontologically acceptable as a unit of analysis by whatever taxonomy is applied, size or volume would be one of its characteristics. This would mean not only the actual growth of the stock of knowledge by individuals and their social groupings, but also the diffusion mechanisms by which knowledge is exchanged and transmitted among individuals. Finally, we need to know how a modified model that links structure and progress impacts databases and knowledge systems. In fact, these three items are the three basic themes of this book.
Why is this important and what might be the contributions of this book to gaining such an improvement in the stock of what we already know? In our daily lives we are surrounded by databases and knowledge systems. They not only contain much information and knowledge about us—who we are and what we do—but they also serve as the basis for our actions. We rely on these systems to make inferences, to interpret our surroundings and the forces that confront us, and to make judgments on what we should and can do.
In the latter aspect of knowledge utilization, we have gained many insights from the work of such researchers as Leon Festinger on cognitive dissonance, and Amos Tversky and Daniel Kahneman on decision making under uncertainty, to list only a few. The psychology of using knowledge to make decisions is better illuminated by these scholars, and we are hence better equipped to understand how knowledge is stored, diffused, and manipulated.
Although we have learned a substantial amount about knowledge and its utilization, there is ample room for a model or theory that links structure and progress. I am not referring to a unifying theory (a “theory of everything,” as physicists call it), but to a modest attempt to advance a model that links the basic structure of knowledge with a corresponding perspective on how such knowledge progresses and grows.
In the following pages the first section of this book reviews the existing theories and models of knowledge. This is then followed by a description of the model I propose for the structure of knowledge. The second section offers the model as it relates to the progress of knowledge. The third section describes the area of “epistemetrics,” and the fourth section of the book addresses the impact of the model on databases and knowledge systems, and how we utilize them to interpret our reality and to act upon it.