Can Cyber-Physical Systems Reliably Collaborate within a Blockchain?

Ben van Lier. Metaphilosophy. Volume 48, Issue 5, October 2017.

Introduction

For Heidegger, phenomenology is primarily a method of perception. In terms of perception, phenomenology does not focus so much on an object's being; rather, it studies humans' being in connection with other humans, objects, and events. In the interface between human, object, and event, and in the ensuing interactions, a joint consciousness of reality arises. In Heidegger's view, technology is a phenomenon about which we constantly have to ask ourselves what the essence of each new manifestation of technology is. Answering the question of technology's being also creates, according to Heidegger, meaning in terms of what this current being of technology can mean for us as humans in our relationship with this technology. For Arthur, all forms of technology, no matter how simple or advanced, are "dressed up versions of some effect—or more usually of several effects" (2011, 47). He adds, however, that the main manifestations of technology are those that create a new domain for themselves, or in his words, "They are the expressing of a given purpose in a different set of components" (51). The manifestation of the blockchain seems to be precisely such a new phenomenon enabled by technology. As a phenomenon, the blockchain resembles what Arthur refers to as a new domain made up of different interconnected components. The new domain of the blockchain is based on global peer-to-peer networks, such as the Internet. Within these networks, new possibilities are created based on the available software, enabling peers to perform secure information transactions among themselves. This new domain is generally believed to originate from an article by Nakamoto (2008). While it is undeniable that this article was the first to provide a broad description of the functionality of a global financial blockchain in the form of the cryptocurrency bitcoin, it is also clear that it combined technological possibilities that were already available at the time into a new whole. Asking whether a blockchain as a new technological unit can also be used in other domains, this essay looks at communication between systems in such networks as the (Industrial) Internet of Things and cyber-physical systems of systems (complex interconnections of cyber and physical components). First, the essay looks at the origins and development of interconnected systems that are currently spurring the development of cyber-physical systems of systems. Next, the focus shifts to the three basic characteristics (fault tolerance; voting, consensus, and distributed ledgers; and information transactions) that seem to be required for a generic blockchain to function as a secure and reliable communication environment for interconnected cyber-physical systems. And finally, the essay addresses the new complexity created by the intercommunication and interaction between a variety of cyber-physical systems that are part of a global cyber-physical system of systems. The essay sets out to lay the foundation for new thinking on post-bitcoin use of blockchain technology: new thinking that ties in with the simultaneous development of interconnecting autonomous, technology-based systems in networks, within which these systems communicate, interact, and make decisions independently.

Cyber-Physical Systems of Systems

In 1954, Ashby asked: "Can a machine be at once determinate and capable of spontaneous change?" (1954, 88). To answer this question, he developed a machine made up of four interconnected similar components and called it a "homeostat." Based on his research, he stated that a homeostat and its constituent parts can function as a unit on the basis of interconnections and feedback between those parts. The unit that is made up of interconnected parts can, in turn, be connected to other units (or components of units) by establishing interconnections and communication and by providing feedback between the separate parts. Ashby concluded that "a fundamental property of machines is that they can be coupled" (1957, 48). Interconnecting separate machines should, in Ashby's view, be organised in a way that ensures that each separate machine can influence the other machines only through the input it provides. By organising the input received from the environment, an individual system is able to preserve its functional uniqueness and to ensure that this uniqueness is not undermined by the input it obtains from its environment. When interconnected machines are able to adapt their mutual behaviour and subsequently interact accordingly, we have what Ashby refers to as a form of self-organisation of machines based on the feedback they receive. This form of self-organisation also presents new challenges. He says the following about this: "Such complex systems cannot be treated as an interlaced set of more or less independent feedback circuits, but only as a whole" (1957, 54). The form of system self-organisation described by Ashby leads Nilsson to observe the following: "Ashby emphasized that self-organization is not a property of an organism itself, in response to its environment and experience, but a property of the organism and its environment taken together" (2010, 31). For introducing mutual adaptations between systems without affecting the systems' basic functionality, Ashby uses so-called informative feedback. According to Wiener, the benefit of informative feedback is that if "the characteristic of the load changes slowly enough and if the reading of the load condition is accurate, the system has no tendency to go into oscillation" (1948, 134). For the organisation and dosing of feedback, Ashby and Wiener build on Shannon's theory of communication. In its day, Ashby's homeostat was a stand-alone whole that could not be connected to a network such as the Internet and was not able to determine independently when and with whom it wanted to communicate, or what kind of interaction to engage in. Communication between machines or machine components in those days consisted of the exchange of meaningless messages, as defined by Shannon in 1948: "Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem" (1948, 379). Owing to Shannon's choice, it is thus meaningless messages that create the communication and feedback between the different machine parts described by Ashby and Wiener.
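
A small sketch can make this coupling concrete. The following illustration is an assumption-laden toy, not a reconstruction of Ashby's homeostat: two units influence each other only through the output each provides as input to the other, and negative feedback drives the coupled whole towards a joint equilibrium. The unit values and the feedback gain are invented for the illustration.

```python
# Illustrative sketch (not Ashby's actual homeostat design): two units
# influence each other only through the output each provides as input
# to the other, and negative feedback drives the coupled whole towards
# a joint equilibrium. Values and gain are assumptions for illustration.

def step(state_a: float, state_b: float, gain: float = 0.25):
    """One iteration of the coupled machines: each unit adjusts its own
    state using only the other unit's output as input (Ashby's coupling)."""
    new_a = state_a + gain * (state_b - state_a)  # feedback from unit B
    new_b = state_b + gain * (state_a - state_b)  # feedback from unit A
    return new_a, new_b

a, b = 10.0, -4.0
for _ in range(20):
    a, b = step(a, b)
print(round(a, 3), round(b, 3))  # both settle on the joint value 3.0
```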

Today, nearly seventy years after Ashby, Wiener, and Shannon, we are again on the threshold of a new kind of machine: the cyber-physical system, which the National Institute of Standards and Technology (NIST) described as "smart systems that include coengineered interacting networks of physical and computational elements" (2015, 1). Unlike Ashby's homeostat, cyber-physical systems are developed to be interconnected in networks, to exchange and share data and information within these networks, and to receive feedback from other systems on the data and information provided. More and more physical systems, such as cars, aircraft, and wind turbines, all controlled by embedded software, are thus able to connect themselves to networks like the Internet and to interact with other networked systems. In its physical form, a cyber-physical system can be a traditional object, such as a truck, a television set, or an aircraft. As a cyber-physical system, each of these new systems has both the capability to connect to networks and communicate and interact with other systems in those networks, and the capability to operate autonomously, separately from the network. By exchanging and sharing data and information, interconnected cyber-physical systems are able to take part in temporary coalitions with other known and unknown systems. These coalitions are geared towards realising a specific objective, as the interconnectedness of systems creates a time-independent and location-independent context within which they make joint decisions. In "The Enigma of Context Within Network-Centric Environments" I argue that context can, in this sense, be considered a "temporary and cohesive whole, a nonmaterial entity, that is formed and perceived in an arbitrary connection between people, between objects and between arbitrary combinations of people and objects. Context as a temporary whole is thus more than the experience of each of the mutually connected yet distinguishable parts" (2015a, 61). To achieve the shared objective, meaningful information is needed, which systems create within the specific context by assigning meaning to the data and information they receive.

Ranganathan and Campbell (2007) predicted that the rapid development of large-scale distributed systems, such as the (Industrial) Internet of Things or smart grids, would lead to these networks amassing huge numbers of heterogeneous and mobile systems. In their view, and in light of the enormous scale and interconnectedness of such a large variety of systems, few steps had been taken up to that point to create the knowledge needed to deal with the complexity that comes with the huge volume of interconnections, communications, and interactions between these systems. The "Trans-Atlantic Research and Education Agenda in Systems of Systems" report did venture a description of the development of the required knowledge, defining cyber-physical systems of systems as "an integration of a finite number of constituent systems which are independent and operable and which are networked together for a period of time to achieve a certain higher goal" (2013, 11). The concept of a system of systems thus denotes a large number of dynamic systems that work together on the basis of a wide variety of integration options between a multitude of autonomous and stand-alone systems. Collaboration between the various systems is needed in order to achieve a specific objective, such as realising a product or a service. The required product or service can only be realised by the system of systems as a whole and not by any constituent system on its own. Making these separate and networked autonomous systems work together leads to new risks and issues that cannot easily be solved at this point. It is not so much the physical size or geographic spread of these systems, but rather the complexity created by interconnection, by data and information sharing and exchange, and by the ensuing interactions and actions among a diversity of cyber-physical systems, that brings about these new risks. To cope with these new challenges, new possibilities need to be created for combining real-world and real-time information to support decision making between these systems on the basis of autonomous control and learning processes. According to the authors of the aforementioned research agenda, the end result needs to be focused on the creation of cyber-physical systems of systems "that will support both users and operators, providing situational awareness and automated features to manage complexity that will allow them to meet the challenges of the future" (2013, 18). In the development of a concept such as the cyber-physical system of systems, the blockchain phenomenon could play a role as a reliable communication environment in which cyber-physical systems communicate with each other and reach consensus on jointly performed information transactions.

Blockchain

Although the development of cyber-physical systems of systems still seems to be in its early stages, these networks are currently developing very rapidly. Developments such as the (Industrial) Internet of Things, mobile health care, and the smart grid have meanwhile firmly established themselves in our society. Every day, new objects such as TVs, fridges, cars, solar panels, wind turbines, and industrial systems are designed and produced with built-in networking and data-sharing capabilities. As networked systems become better able to achieve shared goals, they are bound to take more and more responsibility away from us humans. In turn, this transfer of agency from human to machine throws up new questions, as Hardjono and I observed when we wrote: "It also raises questions about the agency shift between man and machine and the necessary trust between participants in such networks" (2011b, 495). Trust between participants in these networks is key because, according to Bachmann (2001), it reduces uncertainty about the future behaviour of the systems in these networks. Trust in systems' future behaviour ensures that assumptions can be made about how systems will respond in the future to such things as incoming data and information. For Lewis and Weigert, trust is an essential quality of any collective of interconnected systems. In their view, trust between the participants in a collective arises when the participants "act according to and are secure in the expected futures constituted by the presence of each other or their symbolic representation" (1985, 968). Trust in information transactions performed autonomously by interconnected systems materialises when the results produced by these systems are understandable and transparent to both humans and cyber-physical systems. This predictable behaviour seems to come about when interconnected systems meet at least three conditions. The first is that their intercommunication must not be disrupted by faults; that is, systems have to be fault tolerant. The second is that systems must decide to take part on the basis of voting principles that enable them to reach mutual consensus and to record the decision transparently and in a distributed manner in their own ledgers. The third is that systems must assign meaning to the information transactions they receive, on the basis of which they subsequently decide to act or produce. A joint decision enables systems to learn from the information received from their environment. Connected together, these three subjects (fault tolerance; voting, consensus, and distributed ledgers; and information transactions) form a blockchain as a whole. For that reason, each is clarified in detail below.

Fault Tolerance

Thirty years after Ashby developed the homeostat, Wensley, Lamport, and their colleagues (1978) studied how to keep a machine such as an aircraft operational when important components of the complex system cease to work. Their conclusion was that a machine can indeed stay operational by using software programs, in addition to making hardware adjustments. They used algorithms that enable software programs to apply so-called voting principles. By using such voting principles with every iteration of the combined system (hardware and software), it becomes possible to determine which components are running and making a fault-free contribution to the functioning of the system as a whole. Wensley, Lamport, and their colleagues described such a software-based and distributed system as a "collection of distinct processes which are spatially separated and which communicate with one another by exchanging messages" (1978, 558). A key requirement for the software programs is that they can isolate evident faults or intercept erroneous signals sent to other components. Wensley, Lamport, and their colleagues claimed that this makes it possible to prevent the malfunctioning of separate components from causing problems in other system components. They worked from the assumption that the components involved are able to function autonomously and monitor their own functioning. They also assumed that the result of the activity will again be made available as input "to the next iteration of tasks by executing calls to the executive software" (1247). The components that take part in the iterations based on software-based voting principles retain their own input and output data, which play a role in reaching consensus between the various components. Pease, Shostak, and Lamport (1980) subsequently claimed that this enables the tracking down of malfunctioning components by using the results of multiple voting rounds and the corresponding information transactions. By creating an overview of this information, malfunctioning components can be forced to reveal themselves. What remains is the question of how to prevent components within a whole from producing erroneous messages during a voting round. Lamport, Shostak, and Pease (1982) addressed this issue in terms of the so-called Byzantine Generals Problem, concluding that a solution would only be possible by reducing the possibility of autonomous systems sending erroneous messages. What their solution basically boils down to is that each component, or "general" in their terminology, is only enabled "to send unforgeable signed messages" (1982, 39). Every message sent from a component must therefore be uniquely signed by the sending component. Lamport and Melliar-Smith (1984) found that such a fault-tolerant communication system can be brought about by having at least three plus one or more systems take part in a vote. Such a voting round would then require at least six plus two messages to ensure the fault-tolerant functioning of a communication system.
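
The voting principle and the role of signed messages can be made concrete in a small sketch. The following is a hedged illustration with invented component names and keys, in which an HMAC-based signature merely stands in for the unforgeable signed messages proposed by Lamport, Shostak, and Pease: only verifiably signed votes are counted, and a majority vote masks a single faulty component.

```python
# Sketch: majority voting over redundant components, with signed votes so
# that a forged or altered message reveals itself. The HMAC "signature" is
# a stand-in for the unforgeable signed messages of the Byzantine generals
# solution; component names and keys are hypothetical.
import hashlib
import hmac
from collections import Counter

KEYS = {"comp_a": b"key-a", "comp_b": b"key-b", "comp_c": b"key-c"}

def sign(sender: str, value: str) -> str:
    return hmac.new(KEYS[sender], value.encode(), hashlib.sha256).hexdigest()

def verify(sender: str, value: str, signature: str) -> bool:
    return hmac.compare_digest(sign(sender, value), signature)

def vote(messages):
    """Count only verifiably signed votes, then take the majority value."""
    valid = [value for sender, value, sig in messages if verify(sender, value, sig)]
    if not valid:
        return None
    value, count = Counter(valid).most_common(1)[0]
    return value if count > len(messages) // 2 else None

# comp_c is faulty and reports a wrong value (still signed by itself).
messages = [("comp_a", "42", sign("comp_a", "42")),
            ("comp_b", "42", sign("comp_b", "42")),
            ("comp_c", "99", sign("comp_c", "99"))]
print(vote(messages))  # -> "42": the majority masks the single fault
```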

Voting, Distributed Ledgers, and Consensus

As pointed out earlier, a vote is needed to reach consensus between systems. To make the exchange of voting messages between the systems as reliable as possible, at least three systems are needed, which exchange at least six messages in a voting round. Lamport (1998) modelled the functioning of a fictitious parliament as a protocol for voting rounds that systems could use in a broader context. In developing this protocol, he assumed that there is mutual trust between the members (systems) of this parliament, that these members/systems send reliable messages, and that voting adheres to a predefined protocol. Lamport's protocol starts with the specification of a number of conditions for systems that want to take part in these votes. Each system that is eligible to vote needs to have the autonomous ability to record the results of the votes in which it takes part. As Lamport explained, "Each Paxon legislator maintained a ledger in which he recorded the numbered sequence of decrees that were passed" (1998, 134). Lamport assumed that each system always has access to its ledger and can therefore consult previous decisions and accompanying notes. Results recorded in a distributed ledger cannot be deleted or edited. Distributed recording of the decisions that were made does, however, necessitate measures to ensure "the consistency of ledgers, meaning that no two ledgers could contain contradictory information" (1998, 135).
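
Lamport's ledger discipline can likewise be sketched. The illustration below uses invented class and method names and is not Lamport's own specification: decrees are recorded append-only under their numbers, a recorded entry can never be overwritten, and two ledgers can be checked against the consistency condition that no decree number carries contradictory information.

```python
# Sketch of a Paxon ledger: numbered decrees, written once, never edited.
# Class and method names are invented for illustration.
class Ledger:
    def __init__(self):
        self._decrees = {}  # decree number -> recorded decree

    def record(self, number: int, decree: str) -> None:
        """Append-only: re-recording the same decree is harmless;
        overwriting an entry with a different decree is refused."""
        if self._decrees.get(number, decree) != decree:
            raise ValueError(f"decree {number} already recorded differently")
        self._decrees[number] = decree

    def get(self, number: int):
        return self._decrees.get(number)

def consistent(a: Ledger, b: Ledger) -> bool:
    """Lamport's condition: no two ledgers contain contradictory
    information (gaps are allowed, contradictions are not)."""
    shared = a._decrees.keys() & b._decrees.keys()
    return all(a.get(n) == b.get(n) for n in shared)

ledger_1, ledger_2 = Ledger(), Ledger()
ledger_1.record(1, "open valve 7")
ledger_2.record(1, "open valve 7")
ledger_2.record(2, "close valve 9")  # ledger_1 simply has a gap at decree 2
print(consistent(ledger_1, ledger_2))  # -> True
```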

Time plays a key role in the proper execution and registration of a vote. Every system that takes part in a vote must therefore also have some kind of built-in clock to keep track of the time during a vote. Lamport stated, "Achieving the progress conditions required that legislators be able to measure the passage of time, so they were given simple hourglass timers" (1998, 135). Every system taking part in a vote sees to it that its voting message is submitted quickly and accurately, and it responds instantly to messages received from other participating systems. To ensure that a vote, or multiple simultaneous votes, runs smoothly, a coordinator is appointed for each vote. The scope of Lamport's protocol is clarified by this comment: "Consider a simple distributed system that might be used as a name server. A state of the database consists of an assignment of values to names. Copies of the database are maintained by multiple servers. A client program can issue, to any server, a request to read or change the value assigned to a name" (1998, 155). To further clarify how this protocol works, Lamport (2001) developed a more specific version for use by digital systems. This new version works for digital systems that jointly maintain one or multiple processes aimed at mutually determining one or multiple values. To be able to assess the reliability of these accepted values, systems can select only one of the suggested values during a vote. The process run by the systems involved can only learn from a value that was established by a majority within the group of systems. In order to organise the systems' learning processes, Lamport distinguished three roles that participating systems or agents can fulfil: the role of proposer, the role of acceptor, and the role of learner. Regarding these roles, Lamport noted that "a single process may act as more than one agent" (2001, 1). In a voting round, one of the systems involved takes on the proposer role and sends messages to the other systems specifying a suggested value. The suggested value can be assumed to have been adopted when a majority of the acceptors accepts it. To be able to learn from the accepted value, the acceptors have to send their acceptance of the value to a learner designated specifically for this purpose, and this learner, in turn, notifies the other learners of the values that have been adopted by the majority. This kind of information transaction between distributed systems should, according to Gray and Lamport, be seen as "a transaction performed by a collection of processes each executing on a different node" (2006, 2).
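
A minimal single-decree sketch of the three roles may clarify the mechanics. The class names and message shapes below are assumptions, and the protocol is deliberately simplified rather than a full implementation; it nevertheless shows the essential steps: a proposer first collects promises and adopts any value already accepted, acceptors accept only proposals they have not promised to ignore, and a learner treats a value as chosen once a majority has accepted it.

```python
# Simplified single-decree sketch of Lamport's proposer/acceptor/learner
# roles. Class names and message shapes are invented for illustration;
# this is not a full Paxos implementation.
from collections import Counter

class Acceptor:
    def __init__(self):
        self.promised = 0      # highest proposal number promised so far
        self.accepted = None   # (number, value) of the last accepted proposal

    def prepare(self, n: int):
        """Phase 1: promise to ignore proposals numbered below n,
        reporting any value this acceptor has already accepted."""
        if n > self.promised:
            self.promised = n
            return self.accepted
        return "nack"

    def accept(self, n: int, value: str) -> bool:
        """Phase 2: accept the proposal unless a higher-numbered
        promise has been made in the meantime."""
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return True
        return False

def propose(acceptors, n: int, value: str):
    """Proposer role: one round; returns the majority-adopted value or None."""
    majority = len(acceptors) // 2 + 1
    replies = [a.prepare(n) for a in acceptors]
    promises = [r for r in replies if r != "nack"]
    if len(promises) < majority:
        return None
    prior = [p for p in promises if p is not None]
    if prior:                      # adopt the highest-numbered accepted value
        value = max(prior)[1]
    votes = [a.accept(n, value) for a in acceptors]
    return value if sum(votes) >= majority else None

def learn(acceptors):
    """Learner role: a value counts as learned once a majority reports it."""
    accepted = [a.accepted[1] for a in acceptors if a.accepted]
    if not accepted:
        return None
    value, count = Counter(accepted).most_common(1)[0]
    return value if count > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(3)]
print(propose(acceptors, n=1, value="decree 1: open valve 7"))  # adopted value
print(learn(acceptors))  # the learner observes the same majority value
```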

Information Transactions

The possibility of exchanging and sharing information between various systems in the form of information transactions is also referred to as interoperability of information, which Hardjono and I defined as "the realization of mutual connections between two or more systems or entities to enable systems and entities to exchange and share information in order to further act, function or produce on the principles of that information" (2011, 67). Our definition is based on the work of Luhmann, who, contrary to Shannon, sees an element of communication as a synthesis produced by a communication unit and made up of selected information, the manifestation of that information, and the meaning assigned to the information provided. Information transactions conducted in consensus between multiple cyber-physical systems can be regarded as the exchange and sharing of such communication syntheses. In this form, a communicative action between two or more systems can lead to mutual influencing of the systems involved. This influencing requires the recipient system to accept the information transaction offered, while this system also has to be willing and able to take in the received synthesis for processing within its own complexity. The incorporation and processing of a synthesis of communication by the recipient system is referred to by Luhmann as interpenetration. In "Can Machines Communicate?" I stated the following about interpenetration between systems: "Luhmann uses the concept of interpenetration to pinpoint the special way in which systems contribute to the shaping of the system within the environment of the system. Interpenetration is more than just a general relationship between system and environment, but rather an inter-system relationship between two systems that make up an environment for each other" (2013, 63). Within the new environment that is created jointly by information transactions, systems have to be able to assign meaning to the received and accepted information. The act of assigning meaning will, at the same time, also mark the difference between knowing and not knowing for the system. By assigning meaning, the system is able to act, while an ecological change occurs between the systems involved in the communication. Every subsequent feedback or new communication will be based on the meaning assigned, and the system thus learns from the information received.
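
By way of illustration, Luhmann's synthesis can be rendered as a simple data structure. The sketch below is a loose, hedged interpretation with invented names and an arbitrary meaning-assignment rule: a transaction bundles selected information with the form in which it is offered, and the receiving system completes the synthesis only by assigning its own meaning, which it records and can subsequently act and learn from.

```python
# Sketch: an information transaction rendered as a Luhmann-style synthesis.
# Names and the meaning-assignment rule are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Transaction:
    information: dict   # the selected information
    utterance: str      # the form in which the information is offered

@dataclass
class CyberPhysicalSystem:
    name: str
    knowledge: list = field(default_factory=list)

    def accept(self, tx: Transaction) -> str:
        """Interpenetration: take the offered synthesis into the system's
        own complexity by assigning meaning, then record it (learning)."""
        meaning = "act" if tx.information.get("wind_speed", 0) > 25 else "ignore"
        self.knowledge.append((tx, meaning))  # later feedback builds on this
        return meaning

turbine = CyberPhysicalSystem("turbine_12")
tx = Transaction({"wind_speed": 31}, "sensor report")
print(turbine.accept(tx))  # -> "act": meaning assigned, the system can act
```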

The characteristics described here (fault tolerance; voting, consensus, and distributed ledgers; and information transactions) enable blockchains to operate independently. A blockchain that operates on the basis of these three characteristics can thereby serve as a platform for secure communication and decision making between interconnected cyber-physical systems, and it helps increase the autonomy and independence of those systems and their joint decision making on activities that are to be performed in a specific context. The autonomous performance of information transactions between interconnected systems or groups of systems within a specific context, leading to changes in the behaviour of other machines, raises yet further questions. These questions relate to these systems' autonomy and their capacity for decision making and learning. The increase in independent communication, interaction, and decision making by cyber-physical systems also brings greater complexity as a consequence of these interconnections, algorithms, software, and information transactions. Science based on reductionist views struggles to explain this increasing complexity.
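
How the three characteristics interlock can be shown in one final sketch. The toy chain below is an illustration under assumed names and rules, not the protocol of any particular blockchain: a block of transactions is adopted only after a majority vote, and each adopted block is hash-linked to its predecessor, so that an adopted record cannot be silently edited (the signing of individual messages was sketched under fault tolerance above).

```python
# Toy chain combining the characteristics discussed above: a block of
# transactions is adopted only by majority vote, and each adopted block
# is hash-linked to its predecessor so history is tamper-evident.
# All names and rules are illustrative assumptions.
import hashlib
import json

def block_hash(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list, votes: list) -> bool:
    """Adopt a block only when a majority of systems voted for it."""
    if sum(votes) <= len(votes) // 2:
        return False  # no consensus, block rejected
    prev = chain[-1]["hash"] if chain else "genesis"
    block = {"prev": prev, "transactions": transactions}
    block["hash"] = block_hash({"prev": prev, "transactions": transactions})
    chain.append(block)
    return True

def verify_chain(chain: list) -> bool:
    """Any edit to an adopted block breaks the hash links that follow it."""
    prev = "genesis"
    for block in chain:
        body = {"prev": block["prev"], "transactions": block["transactions"]}
        if block["prev"] != prev or block["hash"] != block_hash(body):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, ["turbine_12 -> grid: 3 kWh"], votes=[True, True, False])
print(verify_chain(chain))                    # -> True
chain[0]["transactions"] = ["forged entry"]   # an attempted edit ...
print(verify_chain(chain))                    # -> False: tampering detected
```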

Complexity

A blockchain as a whole produced by intercommunication and interaction between a wide range of components is a complex system. Simon described a complex system as one made up of a "large number of parts that interact in a non-simple way. In such systems, the whole is more than the sum of the parts, not in an ultimate, metaphysical sense, but in the important pragmatic sense that given the properties of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole" (1969, 86). To this description by Simon, Cilliers added that complex systems change over time due to intercommunication and interactions and the related transference of information (1998, 3). Interactions between systems as wholes, or between separate components of these systems, enable each component of a whole to influence any other whole, or the separate components of that whole, through information transactions. Despite all the communication and interaction between separate systems, each separate system remains unaware of the functioning of the system as a whole. Each separate system responds only to the information that is available to it. For Northrop and his colleagues (2006), the increasing complexity of self-developing ultra-large-scale systems, including software systems, calls for a new and different perspective in order to handle this complexity. They saw this new perspective as a change process from "the satisfaction of requirements through traditional, rational top-down engineering, to their satisfaction by the regulation of complex, decentralized systems" (2006, 5). In "Advanced Manufacturing and Complexity Science" I also raised questions about the development of the (Industrial) Internet of Things and smart grids. I stated, "The evolution towards such a new worldwide socio-technical whole oriented towards industrial production, comprising (software) systems connected in networks, algorithms, people, industrial organisations and cyber-physical systems, raises the question as to whether people, with their existing knowledge and perspectives, will ultimately be able to deal with these developments adequately" (2015b, 284). This reasoning is prompted by concerns that current scientific frameworks have not produced an adequate response to the increasing complexity ensuing from the proliferation of networked, heterogeneous, and distributed interacting and intercommunicating systems. A new and different scientific perspective will therefore have to focus on dealing with the complexity of a new whole such as a cyber-physical system of systems made up of a heterogeneous collection of distributed cyber-physical systems. This new whole will continue to evolve as more cyber-physical systems are networked and start communicating and interacting based on algorithms, software, and information. I worded it as follows: "Complexity science can give us new scientific insights in this evolutionary process and at the same time help us in developing new knowledge and perspectives which are necessary to understand and influence this process" (2015b, 285). Complexity science can, Vitale concurred, be seen as a scientific approach that is focused on connections and networks. In his view, complexity science is the counterpart of reductionist science, and the study of complex systems shows us "how modes of interactions between relatively simple parts can give rise to highly complex behaviors" (2014, 11).
Reasoning from this new and complex whole, the developing cyber-physical system of systems, the blockchain can be accommodated as a secure environment within which information transactions take place between autonomous and distributed cyber-physical systems on the basis of consensus within a specific context. As decisions are recorded in a distributed manner, the systems involved can keep analysing which information transactions have taken place on the basis of which voting round. This is how blockchain development helps create a world in which interconnected systems are ever better able to make autonomous decisions and in which humans trust that activities and transactions performed by interconnected systems in a specific context have been realised reliably and securely.

The new technological phenomenon that is the blockchain is based on interconnections and on intercommunication, interaction, and decision making between a diverse range of systems. What ensues from this interconnectedness and these joint activities can be considered a complex whole. This new and complex whole of a blockchain also calls for a new scientific approach that, unlike reductionism, is focused on these interconnections and activities, as well as on the complex whole that ensues from them. Complexity science seems able to offer this new approach, and thereby to contribute to the development of thinking in terms of, and about, blockchains in a developing world of interconnected systems.

Conclusion

Blockchain can be considered a new and technology-based phenomenon, one that should be viewed in combination with the rapid development of the networking of systems into cyber-physical systems of systems (complex interconnections of cyber and physical components). The interconnectedness of cyber-physical systems paves the way for intercommunication and interaction, enabling these systems to be connected into systems of systems that jointly perform an activity within a specific context. The performance of the joint activity can be supported by drawing on the three characteristics that together make blockchains possible: fault-tolerant communication; voting, consensus, and distributed ledgers; and the performance of information transactions. By harnessing these defining features of a blockchain, cyber-physical systems will be able to acquire greater autonomy and independence in jointly performing the required tasks. The interconnection, intercommunication, interaction, and joint decision making of autonomous systems turn the whole of a blockchain into a complex entity. Dealing with this new and complex whole calls for a new scientific approach in the form of complexity science, one that takes as its starting point the interconnections, intercommunication, interaction, and independent decision making between autonomous systems. Increasing autonomy and independent decision making by interconnected systems also raise new questions about how to manage these new wholes and about what their complexity means for the extent to which we can trust them and form relationships with them.