Issue 3 · Winter 2020

Current Affairs

The Modern Memory Hole: Cyberethics Unchained

John McClellan Marshall 

“Those who cannot remember the past are condemned to repeat it.”
George Santayana, 1905

In 1949, George Orwell published what is perhaps his most famous and prophetic work, 1984. The protagonist of the novel was Winston Smith, a minor bureaucrat employed in the Ministry of Truth in the government of Oceania, a superstate whose province of Airstrip One is a thinly disguised United Kingdom. His task was to review documents relating to the prior history of the world, and of Oceania in particular. The purpose of this process was to determine what information should be retained as part of the “official” history that supported the regime of Big Brother and what prior history should be removed. The portion to be removed was put down the “memory hole,” where it was burned so that none of it could later be retrieved to contradict the “official” history. In reality, therefore, the Ministry of Truth was a propaganda machine in service to the state.

Indeed, human society is in the midst of the next step, the unchaining, if you will, of the concept of cyberethics and of the ability of machines not merely to dominate, but to dictate, the future path of human society on both an individual and a global scale.[1] The transformation taking place in this technologically driven context is a function of the almost daily changes that occur in the operating systems and applications common to modern computer systems, some of which are in fact being created by machines. In those cases, the creation of the algorithms that underlie the changes may or may not be traceable in the final format. Indeed, many of these changes are seemingly so minor as to go unnoticed by the average user of the machine. Yet, when compounded over a period of time, the impact on the user is demonstrable and sometimes vexing. To some extent the true impact is to render the user irrelevant to the operation of the machine.

In the years since the publication of 1984, the world has undergone a major technological revolution that has spawned some unanticipated consequences reminiscent of the world created by Orwell. The “modern memory hole” is one of them. Perhaps one of the simplest illustrations is the evolution that followed the development of musical reproduction through the eight-track tape system developed in 1964 by the Lear Jet Corporation. This system provided a mechanism for converting a wide range of relatively high-fidelity music from vinyl recordings to a format that would be available to a mass consumer public. Through various stages, it was added to automobiles and home sound systems over the next sixteen years. Toward the end of that time, it faced competition first from compact cassettes, which were even smaller, and then, in 1982, from the compact disc, or CD, on which the music was digitally encoded and read by laser. The advent of the CD not only rendered both the compact cassette and the eight-track tape system obsolete, it virtually completed the elimination of vinyl disc recordings as a commercially viable medium. What was not anticipated at the time was that the rather extensive catalog of music, particularly rock music, that had been copied exclusively onto vinyl records or eight-track tape was no longer available because of the lack of players to reproduce it. One of the finest examples of the latter format was Fleetwood Mac’s 1988 album Greatest Hits. The demise of the CD is happening even now because of the ability to stream audio and video from the online “cloud” to any laptop, tablet or smartphone. It is perhaps an irony that in modern times it has become fashionable to have a collection of vinyl disc recordings and the turntables to play them. So it goes.

One of the major innovations that accompanied the rise of personal computing devices was the “floppy disk.” Originally, these were magnetically recorded disks housed in a jacket that could be placed into the computer to be “read” so that the contents could be displayed on the monitor. Initially, they were several inches in diameter and, within the limits of the technology, could hold a considerable amount of information for the time. Later, they evolved into a smaller disk, about three and a half inches in diameter, that nevertheless held at least as much information as the larger versions. They were portable and compatible with a wide range of programs, and so they formed a major part of commercial activity from 1980 until the turn of the 21st century, when they were replaced by the “flash drive,” which could hold up to 64 gigabytes of information and, as a further advantage, had no moving parts.

An unintended result, due to changes in formatting driven in part by the change in storage medium, was that much of the information stored on floppy disks, regardless of size, was no longer easily accessible. The information could be recovered only by a drive capable of reading the floppy disk, connected to a properly programmed computer that could transfer the contents to a flash drive, and such drives soon became increasingly difficult to obtain. At the same time, changes in computer programming made it even more difficult to retrieve the information from the floppy disks. For example, starting in the late 1980s, Apple computers had a program called ClarisWorks that was used for graphics. This program, by then renamed AppleWorks, was “no longer supported” after 2007 and was replaced by iWork ’08. Again, much of the vast array of graphics and data created and stored under ClarisWorks and its successor could no longer be retrieved.

Approximately 16.3 zettabytes of information, roughly the equivalent of 16.3 trillion gigabytes, is being produced each year.

Facebook Data Center, Prineville, Oregon, 2011. Photo: Chuck Goolsbee/Facebook.

During the latter half of the 20th century, one of the original computer languages created for the American space program to handle the complexities of launching its vehicles was ATOLL, the Acceptance, Test, Or Launch Language. Computers at the Kennedy Space Center (KSC) were initially programmed with this system, and it was, within limits, successful. Of course, the programs were recorded on magnetic tape and stored in large machines that were controlled from a simple keyboard. The environmental support requirements for these “computers” were extensive, sometimes involving major air conditioning facilities to keep them cool. By the time the Apollo/Saturn V space vehicles were actually being launched in the late 1960s, however, ATOLL was no longer used, almost forgotten in its obsolescence except by historians. A modern laptop, and even the latest smartphone, contains memory and computing capability greater than that of the entire complex created at KSC by the Apollo program, yet it cannot read ATOLL. Eight-track tapes (and the catalogues of music that they contained), vinyl recordings, and ATOLL have all gone down the modern “memory hole” almost without a trace.

In addition to these hardware and application software changes, the expansion of the “memory hole” is further accelerated by innovations in the operating systems of computers of all types. However the various operating systems are created, the reality is that the installation of a new Mac operating system, such as Mojave in place of High Sierra, Sierra, or El Capitan, has the effect of altering how the machine views and manages the information stored on it. This requires that some application programs, such as Microsoft Office, be modified to accommodate the change in viewpoint. The most obvious example of this problem is the inability of a laptop manufactured and programmed after 2010 to read a document created in WordPerfect without extensive augmentation, if at all. Again, the amount and type of information that has been “lost” in this process is simply unknowable.

On a more practical and professional level, both the legal and medical communities stand in the path of this express train of destruction of corporate memory. While modern medicine has made great strides in diagnostic capabilities due to technology during the past twenty years, the storage of patient records has undergone relatively little evolution during the same period. Records, including insurance information, stored on systems that were state-of-the-art in 1999 may not be available in 2019 because of changes in hardware, operating systems, and reading software. As a result, the ability of a physician to track the progress of a patient with a long-term condition could well be compromised. Similarly, the creation of a “corporate memory” related to certain diseases, as part of medical education, could be negatively affected by changes in the technological mechanisms that keep records. Diagnostic skills would be in danger of disappearing altogether, possibly putting physicians at a disadvantage in anticipating new illnesses. The lesson of the eight-track tape should not be forgotten.

In an extension of the “machine-machine interface” that is integral to the evolution of the “corporate memory” and the ability of technology to obliterate it, there is an emerging “transhuman” element.[2] In this context, “transhumanism” implies that the human being, when connected directly or indirectly to the operating systems that control computers, can be enhanced by becoming an extension of the machine for some purposes. One example of the impact of transhumanism is the experimental use of emerging psychotropic drugs to relieve the symptoms of PTSD in military personnel. In another case, in March 2012 it was announced that a researcher in the United Kingdom had installed in his own arm a “telepathy chip” that, when connected to the nerves in his arm, allowed his brain to communicate wirelessly with a robotic hand that moved as his brain dictated. The robotic hand that was demonstrated had sensors that allowed it to pick up a glass gently and to put it down without breaking it. The life-altering benefit to someone who has lost the use of an extremity, or of the mind’s control over it, through misfortune is obvious. This is particularly apparent in the practice of medicine and the use of biometric technology. Similarly, recent devices allow the implantation into a blood vessel of a sensor that monitors the sugar content of a diabetic’s bloodstream. If the sugar level goes outside a set range, the sensor signals an insulin pump implanted in the patient to dispense an appropriate amount of insulin without any input from the patient. Obviously, this technology can be monitored externally so as to provide the physician with a record of the patient’s treatment, or the readings can even be sent to a “smart watch” for the patient to monitor on his or her own. The problem is that the evolution of this technology and the programming of such devices could allow the patient’s private life to be invaded by hacking in such a way that his or her prior medical issues could be erased. Even worse, the record could be deliberately modified with severe consequences, such as the creation of a false history of infectious disease. Indeed, the entire medical history of a person or a group of persons could be erased or rendered unreadable merely by technological change, such as operating system updates that do not include compatibility algorithms, with the result that the existence of the patient as a medical reality simply ceases.
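The closed-loop logic just described can be made concrete with a brief sketch. What follows is a hypothetical illustration in Python, not the firmware of any actual device: the thresholds, dose size, and class names are invented, and the point of interest for the present argument is the persistent log, since it is that record whose readability, or erasure, years later raises the cyberethical concern.

    # Hypothetical sketch of a closed-loop glucose monitor / insulin pump controller.
    # The thresholds, dose size, and class names are invented for illustration only;
    # they do not correspond to any real medical device or clinical guideline.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List


    @dataclass
    class LogEntry:
        timestamp: datetime
        glucose_mg_dl: float
        action: str          # e.g. "none", "dose 2.0 units"


    @dataclass
    class ClosedLoopController:
        low_mg_dl: float = 80.0        # below this, alert the patient rather than dose
        high_mg_dl: float = 180.0      # above this, dispense insulin
        units_per_dose: float = 2.0    # fixed hypothetical dose size
        history: List[LogEntry] = field(default_factory=list)

        def on_reading(self, glucose_mg_dl: float) -> str:
            """Decide what to do with one sensor reading and record it."""
            if glucose_mg_dl > self.high_mg_dl:
                action = f"dose {self.units_per_dose} units"
            elif glucose_mg_dl < self.low_mg_dl:
                action = "alert patient: low glucose"
            else:
                action = "none"
            # The persistent record is the part most exposed to format obsolescence
            # or tampering, which is the concern raised in the text above.
            self.history.append(LogEntry(datetime.now(), glucose_mg_dl, action))
            return action


    if __name__ == "__main__":
        controller = ClosedLoopController()
        for reading in (95.0, 210.0, 72.0):
            print(reading, "->", controller.on_reading(reading))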

Recycling, whether it be trash or data stored digitally in unrecoverable formats, effectively detaches the present from the past.

Server farm, 2017. Photo: Flickr/laboratoriolinux.

At its most negative, in the medical context, this shifts the paradigm somewhat to a “machine-machine-man interface” in which the machine is no longer an extension of the person; rather, the person potentially becomes its agent, responding to its decisions as to the health of the “host.” The cyberethical issue becomes one of defining the limits beyond which the machine should not be capable of manipulating the data and making decisions for people. The blurring of the line between people and machines in this context is undeniably complex, in part because of the ability of time and the evolution of the algorithms to create a disconnect with the past. The larger cyberethical issue is whether technology can be employed to rescue humanity from this impending disaster, or whether the learning and decision-making function of humanity has been usurped beyond recall by this modern Frankenstein creation.

An example of the problem created by the growing ability of machines to learn from each other as well as from external circumstances has already appeared in automobiles that self-correct without human input. The systems now include sensors that will, it is hoped, prevent one car from striking another in a rear-end collision or warn the driver before the car drifts into another lane. In the latter situation, the driver and the passengers have become increasingly dependent upon the decision-making process of the automobile and, for their own safety, should obey the machine. Does the warning system “learn” from sensors that detect the nature of the driving conditions, e.g., rain or ice, rough pavement, and the like? More important, can the warning system tell the driver that it is not working? It is here that, rather than abdicate trustingly to machines in this process, our society needs to consider Isaac Asimov’s First Law of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm. From a legal standpoint, the question could well become whether it is negligence for the driver to ignore the warning. Put another way, how dependable is the warning system, and how accessible is the data on which the system based its decision to warn the driver? On the answers to these questions could well rest the decision as to who is to be sued.
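As a purely hypothetical sketch of what an answer to the last two questions might look like, the following Python fragment shows a warning module that treats a missing sensor reading as something to report to the driver rather than ignore. The sensor names and thresholds are invented for illustration and do not describe any manufacturer’s system.

    # Hypothetical sketch of a driver-warning module that can report its own failure,
    # addressing the question "can the warning system tell the driver that it is not
    # working?" Sensor names and thresholds are invented for illustration.

    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class SensorReport:
        lane_offset_m: Optional[float]      # None means the camera produced no reading
        closing_speed_mps: Optional[float]  # None means the radar produced no reading


    def evaluate(report: SensorReport) -> str:
        """Return a message for the driver, including a fault message if sensors fail."""
        # Self-diagnosis first: a missing reading is surfaced, not silently ignored.
        if report.lane_offset_m is None or report.closing_speed_mps is None:
            return "WARNING SYSTEM FAULT: sensor data unavailable, resume full attention"
        if report.closing_speed_mps > 8.0:
            return "BRAKE: closing too fast on vehicle ahead"
        if abs(report.lane_offset_m) > 0.5:
            return "LANE DEPARTURE: steering correction advised"
        return "normal"


    if __name__ == "__main__":
        print(evaluate(SensorReport(lane_offset_m=0.7, closing_speed_mps=2.0)))
        print(evaluate(SensorReport(lane_offset_m=None, closing_speed_mps=2.0)))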

If it is accepted that the justice system is essentially human in both its processes and objectives, then the increasing dependence of human society on technology, and technology’s ability to engage in decision-making for humans, presents a serious question. At the investigative level, there is no doubt of the utility of technology. An interesting, and positive, intersection of the legal and medical aspects of technological progress occurred in the context of a recent homicide. The victim, who was wearing a “fitness watch,” was killed with an axe. The watch’s recording of date, time, increased heart rate, and the rapid decline of that heart rate confirmed the time of the killing and led to the identification of the killer, who was present at the scene. Once a case proceeds to trial, historically it can be said that a jury is a multi-headed lie detector (or truth seeker) whose job it is to receive information and determine the facts of a case. Characteristically, the process involves live testimony by witnesses who are presumed to be telling what they saw, smelled, felt, or heard to the best of their ability. The test of the veracity and accuracy of a given witness has been the ability of the opposing party to cross-examine the witness. In the modern courtroom, however, there is now an increasing tendency for the jury to depend on technologically created testimony provided by an “expert.” This is sometimes referred to as the “CSI Effect,” so called after a popular television program in which the decisive evidence is often contained in a DNA report. Increasingly, criminal court juries tend to acquit defendants if such “conclusive” evidence is not produced by the prosecution. Such a situation effectively undermines the ability of the jury, with the help of cross-examination by attorneys, to find the truth of a matter using human tools such as common sense. The pace at which technology can change the outcomes of some matters for the better is best illustrated by the recent reversals of convictions based upon improvements in DNA screening. The overall result, however, is, in fact, a sea change in the jurisprudential structure of the justice system as we thought we knew it.

In the legal community, the advent of electronic filing systems in courthouses, together with their storage systems, some of which involve laser disc technology, makes case records vulnerable to hacking, of course. More of a problem, however, is the situation in which a court reporter’s record of proceedings may be effectively erased with the passage of only a few years because of changes in the technology. In cases that are by their nature long-term, such as criminal appellate matters, the inability of the courts and law enforcement agencies to access records from trials and court proceedings decades in the past can mean the difference between life and death for a defendant. Similarly, the danger of records stored in obsolete formats and on obsolete hardware becoming inaccessible cannot be overstated. In the modern legal context, if the record of a trial should be unavailable, then the appellate court is compelled to order a new trial. The cost in resources, together with the potential for a negative outcome for the defendant, makes such a prospect quite daunting. In such a situation, it is not just the record, but a person, who could go down the “memory hole.”

Vinton Cerf spoke in favor of the creation of “digital vellum”: a system that would be capable of preserving the meaning of the digital records that we create and making them retrievable over periods of hundreds or thousands of years.

Server farm, CERN, Switzerland, 2009. Photo: Creative Commons/Wikimedia.

A corollary to this issue is the debate over “Big Data.” One estimate is that approximately 16.3 zettabytes of information, roughly the equivalent of 16.3 trillion gigabytes, is being produced each year. By 2025, this number is expected to increase tenfold.[3] In the modern day, the ability of technology to collect and retain enormous amounts of data is staggering. It has been said that “he who measures a lot measures garbage,” so the problem then becomes “What do we save, and what do we discard?” It is in the attempt to answer this question that 1984’s Winston Smith comes directly into the 21st century. In part, the question from the cyberethical perspective is “How do we determine what to save and what to discard?” In reality, the answer seems to be “keep all of it, because what seems to us trash now could be treasure in the future,” subject of course to the limitations of technology’s capacity to store it all. An archaeologist would be quite comfortable with this viewpoint applied to the technological problem, because much of what we know of ancient civilizations is derived from the trash heaps and broken pottery that they left behind. Recycling, though, whether it be trash or data stored digitally in unrecoverable formats, effectively detaches the present from the past.
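To make the scale of these figures concrete, the arithmetic behind the estimate is a worked example rather than new data, using the decimal definitions of the storage units; the 2025 figure simply applies the cited tenfold increase to the 16.3-zettabyte baseline:

    \begin{aligned}
    1~\text{zettabyte (ZB)} &= 10^{21}~\text{bytes} = 10^{12}~\text{gigabytes (GB)},\\
    16.3~\text{ZB} &= 16.3 \times 10^{12}~\text{GB} \approx 16.3~\text{trillion GB per year},\\
    \text{projected for 2025:}\quad 10 \times 16.3~\text{ZB} &\approx 163~\text{ZB per year}.
    \end{aligned}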

In a broader sense, the cyberethical issue is not so much one of “data rot” as it is the ability of technology to erase the past without a trace, thus effectively severing modern society from its own roots. Cyberethics also addresses a much broader problem in society; namely, the growing use of “political correctness in speech and thought” to modify the contemporary view of historical events. In its simplest form, this is expressed by the application of “modern” societal norms and values to events that took place decades or even centuries ago as a means of defining them in contemporary terms as “good” or “bad.” This analysis is sometimes driven by momentary political considerations that reflect passion rather than fact. Aside from the logical fallacy inherent in that process, such an approach to historical scholarship ignores the underlying reality that the society that exists today is in part a product of those events that are now being condemned because they do not suit modern norms. In effect, it can be used to redefine the context of historical events in such a way as to render them irrelevant to the modern world. The academic community in particular should assume the responsibility of making sure that, while being open to new ideas and ways of thinking, the educational process does not sanction the creation of a fictional history by redefining the past as something different from what it was. The past simply is, and the study of it as it was is what can be instructive.

In the educational system as it exists today, there has been a growing emphasis on STEM (science, technology, engineering, and mathematics), which is primarily focused on the sciences and engineering. This emphasis has shifted the liberal arts into a secondary position in the modern world, in part because of the economic opportunities that the advance of technology has offered graduates. The societal danger posed by this situation is that, without the liberal arts as an integral part of the educational process, science and technology may well, over time, become so detached from humanity that the needs of human beings are subordinated to the desire “to do because it can be done.”

In the cyberethical problem, the most negative characteristic is that technology can actually facilitate the modification, if not the erasure, of the past, so that society becomes more and more rudderless, moving simply with the contemporary breezes. For example, modern programming permits the erasure of the metadata that could otherwise allow the reconstruction of deleted information. In an effort to avoid this outcome, some effort should be made to build into the latest version of a piece of software, whether an operating system such as macOS or an application such as Word, code that would allow it to read the files created by the immediately previous version. Word already does this through “compatibility mode.” By doing this in each iteration, one could simply keep scrolling back in “time” to the earlier versions and read what was written decades ago. It may not be a “Rosetta Stone,” but in any case, it would not be buried in the desert. That way, nothing gets lost, and society avoids mechanical censorship, however unintended. Also, if we start now, we can avoid losing what was written perhaps as recently as forty years ago (before the people who wrote it and can read it die off).
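A minimal sketch of the kind of backward-compatibility code proposed here might look like the following Python fragment. It is an illustration of the idea, not an existing API: each release registers a reader for its own (invented) file-format version and retains the readers for earlier versions, so the current program can “scroll back in time” through older files.

    # Minimal sketch of the backward-compatibility idea proposed above: each new
    # release keeps the readers of earlier file-format versions so old files stay
    # legible. Version numbers and the toy file format are invented for illustration.

    from typing import Callable, Dict

    # A registry mapping a format version to a function that turns raw bytes into text.
    READERS: Dict[int, Callable[[bytes], str]] = {}


    def reader(version: int):
        """Decorator that registers a reader for one historical format version."""
        def register(fn: Callable[[bytes], str]):
            READERS[version] = fn
            return fn
        return register


    @reader(1)
    def read_v1(raw: bytes) -> str:
        # Hypothetical oldest format: plain Latin-1 text.
        return raw.decode("latin-1")


    @reader(2)
    def read_v2(raw: bytes) -> str:
        # Hypothetical newer format: a one-byte version header followed by UTF-8 text.
        return raw[1:].decode("utf-8")


    def open_document(raw: bytes) -> str:
        """Try the newest reader first, then fall back through older ones."""
        for version in sorted(READERS, reverse=True):
            try:
                return READERS[version](raw)
            except (UnicodeDecodeError, IndexError):
                continue
        raise ValueError("no registered reader understands this file")


    if __name__ == "__main__":
        print(open_document(b"\x02hello from version 2"))
        print(open_document("café in version 1".encode("latin-1")))

The design choice of trying the newest reader first and falling back to older ones is simply the “compatibility mode” behavior described above, extended indefinitely backward.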

The pace of the evolution of both hardware and software has increased with the passage of time, to the point that even Winston Smith could not have imagined the possibilities for changing history. Fortunately, there has been some interest in this problem of late. At the annual meeting of the American Association for the Advancement of Science in 2015, a vice president of Google, Vinton Cerf, spoke in favor of the creation of “digital vellum.” By this he meant a system that would be capable of preserving the meaning of the digital records that we create and making them retrievable over periods of hundreds or thousands of years. Of course, such a project would of necessity be a matter of preserving not only the bytes but also their meaning, with its linguistic “thesaurus” component, an expansion of the “Rosetta Stone” concept for the modern age.

Should society move toward the goal of “efficiency,” driven by transhumanist considerations and technological changes in the mechanical storage of information—at the expense of the ability to express a wide variety of thoughts and retain them in some form from which they can be retrieved by succeeding generations?

15th-century Italian manuscript of Aristotle’s Nicomachean Ethics. Vienna, Österreichische Nationalbibliothek, Cod. phil. gr. 4, fol. 45v. Photo: Creative Commons/Wikimedia.

While there is no question that the “digital vellum” concept has merit, there is an additional issue that makes the “memory hole” even more of a threat. This is the growth of functional illiteracy in civilization in general. Professor Mihai Nadin of The University of Texas at Dallas has addressed this to some extent in his book The Civilization of Illiteracy.[4] He asserts that while literacy was essential to the development of an efficient society, it is no longer adequate to sustain that efficiency. It is too slow, too ambiguous, and too subjective to be efficient. The result is that new languages, more efficient ones, are coming into the world, some driven by, or even created by, machine-generated programs. On the broader view of society, this would of necessity involve attention to translation, because the words used in earlier records might or might not accurately reflect what is meant today by the same word.

Over the past twenty-plus years, in Europe generally and in the international legal context, there has been considerable growth of English as the lingua franca of most of the “civilized” world. Certainly, it is the language of economic opportunity throughout the world, even in China and the Arab world. The adaptability of English to a wide range of cultures is reflected, for example, in Polish, where words from English (such as “hotel”) are incorporated wholesale, both spelling and pronunciation, and merely given Polish declension endings to let them fit into Polish grammar. If that linguistic synthesis and the ease of communication that it has fostered should dissolve because of “transhumanist-driven efficiency,” then a systemic collapse of world society very well might follow, with catastrophic results, a sort of “Transhumanist Tower of Babel.”

For example, in a recent international arbitration, one party refused to honor a contract because it said that the words did not have the same meaning that the other side said they had. The conflict arose from both linguistic and cultural differences between the parties. Although both sides used the same words, the result was, as the prison warden says in the film Cool Hand Luke, a “failure to communicate.” Regardless of its origins, this phenomenon has become known as the “Humpty Dumpty Rule,” after the passage in Lewis Carroll’s Through the Looking-Glass: “When I use a word, it means just what I choose it to mean, neither more nor less.” The implications of such a viewpoint spreading throughout human communication are of concern, and it certainly has a corrosive impact on the orderly development and application of the law. In international relations between governments, such a linguistically relativistic situation could have both unforeseen and catastrophic consequences.

The very adaptability of English shows that the tension between efficiency and literary clarity is already emerging, even in “formal” written English, in the increasing misuse of homonyms such as “their,” “there,” “they’re,” “lead,” and “led,” clearly demonstrating the impact of efficiency on literacy. Perhaps the simplest example of this “efficient” new linguistic pattern is texting. The common sentence “See you later” becomes “C u latr,” and this illustrates the problem. As this type of communication spreads, the lexicon by which society communicates will likely tend to shrink, to the point that expressive phrases such as those provided by Shakespeare, the King James Bible, the Polish playwright Stanisław Wyspiański, or T. S. Eliot simply will not exist in the future. Indeed, such authors may become the sole province of academe (a form of “memory hole” in itself), if they remain identifiable as significant at all, because the mass of the population may not be able to read them, much less evaluate them. One of the most obvious casualties of the linguistic drive toward efficiency is, of course, poetry, and its loss inevitably would tend to diminish the human spirit. After all, Mark Antony’s oration over Caesar, “Friends, Romans, countrymen. . .” is simply not “efficient” in its phrasing. The “efficient” version could be “pepl he ded b sad.”

Within the context of cyberethics, the question must be asked: should society move toward the goal of “efficiency,” driven by transhumanist considerations and technological changes in the mechanical storage of information, at the expense of the ability to express a wide variety of thoughts and retain them in some form from which they can be retrieved by succeeding generations? The consequence of making that shift without considering what might be lost, and how to retain it, is the entrance to the “modern memory hole.” Such a transition might well produce a society that allows for little or no individuality and from which no discernible history can emerge. It is not so much that society would be creating the “dustbin of history” as that it would be in danger of becoming the “dustbin” itself.

Aristotle teaching his pupil Alexander the Great. 15th-century Bruges manuscript of Aristotle’s Ethics. British Museum, Egerton 737.

[1] Here I use the following definition of “cyberethics,” first presented in my paper “The Terminator Missed A Chip! Cyberethics” at the International Astronautical Congress in Oslo, 1995: the relationship between the ethical (including legal) systems that have been developed by human beings to undergird civilization from ancient times to the present, as expressed in societal norms of right and wrong and of justice and injustice, and the ability of computer-driven technology to operate outside those conventions with almost no limits. The corollary is the ability of technology to drive alterations in those conventions without regard to human input, in a societal “default” to the machines.

[2] As defined in the Oxford English Dictionary, “transhumanism” is the belief or theory that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology.

[3] J. Engebretson, “Data, Data, Everywhere”, Baylor Arts and Sciences (Fall 2018), 24.

[4] Dresden University Press, 1997.

This article appears in Athenaeum Review Issue 3 (Winter 2020), pp. 94-101.