Essay

The Modern Barbarians at the Gate

John McClellan Marshall and Roger Malina

Things are not always as they seem; the first appearance deceives many.

—Phaedrus

Modern social systems are often ill-adapted to the growing dominance of a kind of “reality” derived from technology independently of our senses. The conclusion is that humans, given the physical limitations of the species, may not be well designed to understand the world. This suggests that both the social and physical sciences need to rethink and redesign legal and other social systems so that they no longer rely primarily on human evidence. Put another way, the adoption of technological innovation effectively promotes the societal changes that accommodate those innovations. While the physical sciences have worked through such changes and, among other things, discovered that our senses are filters on the world, not windows, society has had some difficulty adjusting its filters.

In this endeavor, scientists have developed new kinds of detectors, one of which is known as the extreme ultraviolet color detector. At one point in its development, the device “hallucinated” a very bright spot. When it was turned off and on again, the bright spot (a speck of dust on the lens) that had been detected disappeared. Similarly, when NASA first turned on its Extreme Ultraviolet Explorer satellite, it revealed thousands of stars that had never been catalogued. Each such discovery had to be double-checked to be sure that the machine was not “hallucinating.” It took a few hours to repoint the telescope to the same location, and, in some cases, the bright star was no longer there. Yet, without such verification the results could never have been announced to the scientific community.

Despite such occasional setbacks, the scientific community has gone through centuries of evolutionary steps in the extension of the senses (for example, the development of microscopes and telescopes), augmentation of the senses (through X-ray imaging), and development of new sensory inputs that cannot be derived from extension or augmentation of the senses (gravitational wave observatories). Thanks to such innovations, for example, we now know that our bodies “shrink and expand” at scales so small that none of the human senses can be augmented or extended to study them.

On the societal level, for as long as anyone can remember (in the law this is referred to as “time whereof the memory of man runneth not to the contrary”), courts have depended almost exclusively on evidence elicited by the five human senses: sight, smell, sound, taste, and “feel” (which includes “rough,” “heavy,” “light,” and so on). This was axiomatic in the judicial process until the advent of science and the expert testimony that follows it. Since the end of World War II, the steady advance of technology has extended the perceptual capabilities of the senses beyond what could have been imagined before then. As a result, the question has become, “Is this real, or is it merely a fabrication enabled by technology?”

Put another way, “The issue is actually the shift from knowledge, as an attempt to understand reality, to data as a description of reality. We don’t need to know, we only need to measure—this is the process we are experiencing.”[1] Such data is often “translated” into forms that the human senses can grasp. Further, it has been noted that “we have gone from an age that was meaning rich but data poor, to one that is data rich but meaning poor. . . [, and] this is an epistemological revolution as fundamental as the Copernican revolution.”[2] As a result of this shift, there has been a growing tendency to look to technology to “fill in” what our senses cannot provide. A consequence is that we tend to place excessive trust in the data provided by machines.

This is analogous to the parallax effect, which alters the perception of an object in three-dimensional space depending upon the position of the viewer. On a more fundamental level, the definition of reality has historically been an essentially binary concept, directly opposed to the imaginary or the abstract. Plato outlined this clearly in the Allegory of the Cave.[3] In the Allegory, the focus was on the contrast between what the mind saw and the source of that vision. To the modern reader, the shadows on the wall of the cave are not to be confused with the physical items of which they are merely shadows; yet, to the viewers whose heads were immobilized, these shadows were “real.”

In the modern world, the omnipresence of technology and its advances raise the question of where to draw the line between “perception” and “reality” in the presentation of evidence before a court. In our everyday lives, there is a customary default to machines, such as pocket calculators, as authoritative. While that suffices for routine matters, it simply is not appropriate as a standard for evidentiary acceptance in the modern judicial process. At the center of this modern environment is the tension between technology and the humanities as filtered through the legal process. It is the collision between human perception and that of the “deep neural networks” at the heart of the AI world that places the “gatekeeper” increasingly at risk of error.[4]

Part of the problem is that recent studies have shown how flawed eyewitness testimony can be; indeed, since the emergence of modern science, the extent of the fallibility of human memory has become well known. The mere phraseology of a police officer’s questioning can affect the accuracy of a witness’s memory.[5] Proprioception, likewise, can alter the perception of information received by a witness through the normal senses.[6] Further, the difficulty of distinguishing between a true memory and a false memory that has been made to seem true undercuts the concept of authenticity that should be before the judge or jury.[7] In modern parlance, this would be “gaslighting” the witness, and perhaps the court, with the aid of technology.[8] Once that line is crossed, the question arises whether the witness will ever be able to return to an authentic memory of the event.[9] Thus, it is through this “default” that the technology enters the gate to impact the “liberal artist” judge.

This presents the two-fold issue of the source of the evidence and the method by which it is adduced. Not only does this reflect the double-edged sword of technology in the judicial process, but it also highlights the willingness of judges and juries to default to the machine in order to reach a verdict. As Tacitus once said, “Because they didn’t know better, they called it ‘civilization,’ when it was part of their slavery [idque apud imperitos humanitas vocabatur, cum pars servitutis esset].”[10] Such a default should necessarily give rise to an examination of the machine, including its programming, calibration, and method of operation, as part of laying the foundation for the introduction of technoevidence in court.

Technoevidence is defined simply as information that, in the absence of modern technology, would not be available to the trier of fact (whether judge or jury), no matter how smart the witness. The scientifically logical, and legal, corollary is that it is evidence that cannot be adduced by the sheer brain power of a human being. Indeed, humans are not well designed to “understand” the world. The trustworthiness of such evidence will necessarily be based upon technological considerations, and its precision will likely depend directly upon the sophistication of the instruments employed to analyze the factual input.[11]

Figure 1. The Cheese. © Ramon Guardans. Used by permission.

Figure 1 illustrates the vector by which technology can, and does, impact both the sourcing and production of evidence in the modern world.[12]

The chart is derived from a study of molecular biology and illustrates “oriented flow,” in this case from the lower left to the upper right. The circle in the center, “The Cheese,” represents the range of phenomena that normally can be perceived by the human senses, namely sight, smell, sound, taste, and touch, without technological augmentation.

To the lower left are “nano” matters that cannot be perceived independently by the senses without technological assistance. This represents the lower end of the technoevidentiary trajectory. As the “flow” moves across the circle toward the upper right-hand corner, the perceptive range of the observer becomes steadily broader. The evidentiary picture evolves in complexity from the microscopic to the visible, then to the “macro,” if not galactic, scale. It is true that much of the evidence in a conventional trial will be confined to the circle. On either side of the circle, though, technology may well be of importance in the presentation of the “facts” that comprise the case at hand. It follows that the farther the source of the information lies from the circle in either direction, the greater the need for technology. The portion of the chart that lies outside the circle but on the vector represents the field in which technoevidence moves from being an assistant to being dominant.

The simplest example in modern times, DNA, found at the lower left-hand side of the diagram, is a factor in the resolution of many types of lawsuits. The most obvious are criminal cases, particularly those involving rape. Since the early 1990s, there has been a trend of acquittal in up to 95 percent of rape cases where the DNA evidence was either inconclusive or non-existent; evidence supporting conviction, on the other hand, almost always included DNA linking the defendant to the event. It is this presence or absence of forensic evidence that gives rise to the “CSI Effect” in criminal cases.[13] The advance of technology, however, has altered those trends considerably. In cases where DNA was the basis of a conviction in the 1990s and early 2000s, there has been a recent effort to re-examine the samples. The result has been that, with the increased precision of the definition of the markers in the genome, there has been a startling number of reversals of rape convictions from those earlier days.

In a somewhat more practical context, the examination of DNA also has become crucial in the determination of parentage. A child inherits the genetic characteristics of its parents, as transmitted through DNA. Mitochondrial DNA (mtDNA), which a child inherits only from its mother, can clearly determine who the mother is. Half of the child’s nuclear DNA, in turn, comes from the father. The problem arises when there is more than one claimant to male parentage, so that the court must obtain DNA from all of the candidates to determine which one is the father. This has the practical effect of determining who is going to support that child into adulthood.
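The logic of such a comparison can be caricatured in a few lines of code. The sketch below is purely illustrative, not a forensic procedure: the genotypes are invented, and real paternity testing types many more loci and reports statistical likelihood ratios rather than a simple yes or no. The point it captures is only that, at each tested marker, the child carries one allele from the mother and one from the father, so a candidate who cannot supply the non-maternal allele at some marker is excluded.

```python
# Hypothetical sketch of paternity exclusion by STR allele matching.
# Marker names are real CODIS loci, but all repeat counts are invented.

# Genotypes: marker -> pair of alleles (repeat counts at each STR locus).
child  = {"D8S1179": (12, 14), "D21S11": (28, 30), "FGA": (21, 24)}
mother = {"D8S1179": (12, 13), "D21S11": (30, 31), "FGA": (21, 22)}
candidates = {
    "candidate_A": {"D8S1179": (14, 15), "D21S11": (28, 29), "FGA": (24, 25)},
    "candidate_B": {"D8S1179": (11, 15), "D21S11": (28, 29), "FGA": (24, 25)},
}

def obligate_paternal(child_pair, mother_pair):
    """Alleles the father must have supplied: those the mother could not have."""
    non_maternal = {a for a in child_pair if a not in mother_pair}
    # If the mother could have supplied either allele, the father supplied
    # one of the two, so both remain possible paternal alleles.
    return non_maternal or set(child_pair)

def is_excluded(candidate, child, mother):
    """Excluded if, at any marker, the candidate carries no allele
    that could be the paternal contribution."""
    for marker, child_pair in child.items():
        if not obligate_paternal(child_pair, mother[marker]) & set(candidate[marker]):
            return True
    return False

for name, genotype in candidates.items():
    verdict = "excluded" if is_excluded(genotype, child, mother) else "not excluded"
    print(f"{name}: {verdict}")
```

In the terms of Figure 1, every step of this comparison takes place at the “nano” end of the vector: none of it is available to the unaided senses.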

While that sounds simple enough, the technology that allows for the close examination of the human genome also allows for the modification of the genome prior to birth in order to remove certain genetically transmitted defects, such as those for sickle cell anemia or a tendency to high cholesterol. In such a case, the question would be whether such a modification alters the determination of who the father might be. At the present time, such modifications are highly experimental and, in some parts of the world, illegal. Even so, close regulation, such as the prohibition of experimentation on an embryo beyond fourteen days of development, allows some experimentation in some jurisdictions. That said, there is an argument that prenatal genetic engineering could very well lead to a healthier human population globally. A potential problem, however, is that engineering humanity to meet the problems of the current environment may expose the species to extinction should that environment change radically. The dinosaurs are instructive in this matter.

At the “macro” level of the evidentiary spectrum, the modern tendency is to employ what is generally referred to as “artificial intelligence,” or “AI.” Setting aside the oxymoronic nature of the phrase, at its most basic this concept can be defined as the cyber extension of human intelligence as augmented by at least one other computer. To that extent, the human factor has the potential simply to disappear as the computer complex performs its own analysis in response to the algorithms supplied by the last human to operate the machine. The flaws in such a concept, as to the authenticity of the output, are self-evident.

There is an increasing tendency to use AI software to draw conclusions from data without being able to explain how the discovery was made in the first place, so that it can be repeated and verified by others. There is even a view that the software should be named a co-author of the discoveries. Considering the wide range of software creators and operators who may have had input to a given machine at any particular time, the evidentiary gatekeeper can be at sea.

Most recently, “chatbots” such as ChatGPT have become fashionable in the day-to-day practice of law. This form of “research assistant” is both fast and convenient in its ability to search databases or compose documents, and some lawyers see it as transformational.[14] The resulting debate, a very timely one in the legal community, is already injecting caution into discussions of the authenticity of what the chatbot discovers.[15] There is a further problem arising from the distinction between digital and analog data. Since the chatbot searches digitally and not by analog means, the potential for manipulation of the data that is retrieved is considerable. The point is that the product of a chatbot search that is digital in origin should be viewed from the outset with caution.

For a trial court, the difficulty is at the threshold, because it is the existential intersection of technology and humanity under the umbrella of the search for truth. In fact, the overwhelming majority of lawyers and judges are products of what might be called the “liberal arts,” i.e., literature and history, with some economics and business administration as an adjunct. Very few have any academic or professional background in computer science or biotechnology. Ironically, the judge is, legally, the “gatekeeper” as to the admission or suppression of evidence. Thus, there is clearly a very small information base from which a judge may evaluate evidence that is a product of the extension of human senses by technology.[16] The problem is not whether to receive such expanded-sense evidence, but how to do so with some authenticity and accuracy. To some extent, the issue is one of how “perception” of what is factual can be expanded by technology in such a way as to alter the perception of “reality.”

Indeed, this problem can have a major impact in the real world of the trial court in terms of misleading the “gatekeeper.” Perhaps one of the most flagrant examples arose in a federal court in New York, in which the lawyers for a plaintiff used ChatGPT to research authorities in preparation for a presentation in court. When confronted by the judge as to their use of “bogus” and “nonexistent case law,” it became clear that the lawyers had simply allowed the machine to do the “research” and produce the pleadings; they never checked the citations of the cases to verify their authenticity. After two weeks’ reflection, the judge sanctioned the attorneys, finding that they had “abandoned their responsibilities” to check their work, to a level of “bad faith.” The bottom line was that the existing rules impose a “gatekeeping” responsibility on lawyers to ensure the accuracy of their filings.[17] There is no question that this case illustrates the statement of Schiller that “Against stupidity the gods themselves contend in vain.”[18] The import of this decision is to expand the “gatekeeper” function to include attorneys who practice before the court, and that in itself raises the question of whether the initial responsibility rests solely on the attorney or extends to the creator of the software that led to such an outcome.[19]

One of the factors that makes this relationship more complicated is that the law tends to adapt slowly to changing societal conditions. This is in part a result of the principle of stare decisis, which grounds the judicial decisions of the present in those of the past. For example, the full recognition of corporations, admittedly non-human institutions, as legal “persons” only happened in the 20th Century. Now that we are in the 21st Century, we face the possibility that, like animals, AI-led and AI-controlled entities may be entering the legal system as well.[20]

One such “evolutionary step” was dramatized prophetically in the original Star Trek episode “The Ultimate Computer,” in which a computer was programmed with the brain patterns [“engrams”] of a scientist who turned out to be insane. The result was a disaster that cost many lives. In the modern context of the societal default to machines, the placement of such near-absolute trust in an entity that presents itself as “competent” could have disastrous consequences.[21]

Similarly, in the medical community, the ability of chatbots such as ChatGPT and similar platforms to collect information and prioritize data, even at the “nano” level, about a patient in a hospital has been adequately established. In that connection, it would be important that the data be limited to the information derived from an examination of the patient, rather than an internet-wide search for information. If the chatbot were permitted to scan the internet as well as the patient data, the capacity of the AI algorithm to misinterpret and, hence, misdiagnose the condition of the patient, the “macro” end product, would seem to inhere in the process. To suggest that life and death would hang in the balance under such circumstances is to state the obvious and to reveal a genuine potential danger looming for the medical profession.
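The design choice at stake can be sketched in miniature. The code below is a hypothetical illustration, not any actual medical system: the record structure, the allow_internet flag, and the retrieval functions are all invented. It shows only the architectural point the paragraph makes, namely that whether the algorithm reasons over the examined patient’s chart alone, or over the open internet as well, is a single, auditable decision.

```python
# Hypothetical sketch: scoping an AI assistant's inputs to the patient chart.
# All names, flags, and data structures here are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class PatientChart:
    patient_id: str
    observations: list[str] = field(default_factory=list)  # examination findings

def search_open_web(query: str) -> list[str]:
    # Stand-in for an internet-wide search; returns unvetted text.
    return [f"unverified web claim about '{query}'"]

def retrieve_context(chart: PatientChart, query: str,
                     allow_internet: bool = False) -> list[str]:
    """Gather the evidence the model is allowed to reason over."""
    # Default scope: only findings from the examination of this patient.
    context = [obs for obs in chart.observations if query.lower() in obs.lower()]
    if allow_internet:
        # Widening the scope admits unvetted material into the diagnosis --
        # the failure mode the essay warns against.
        context += search_open_web(query)
    return context

chart = PatientChart("pt-001", ["elevated troponin noted on admission",
                                "troponin trending down at 12 hours"])
print(retrieve_context(chart, "troponin"))                       # chart-only scope
print(retrieve_context(chart, "troponin", allow_internet=True))  # widened scope
```

A court, or a hospital review board, asking how such a system was configured would in effect be asking which branch of this function had been enabled.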

From the perspective of the legal and medical communities, indeed any professional body, there is unquestionably a gap between what will be, and may already be, required in the exercise of a profession and the academic preparation for that activity. Indeed, with the acceleration in the pace of technological development, that gap is likely to widen just as rapidly. Certainly some academic institutions have already expanded the concept of “STEM” [Science, Technology, Engineering, and Mathematics] to include a humanist “arts” component in the curriculum, making it “STEAM” at the undergraduate level.

A related possible solution to “closing the gap” is to require in professional core curricula a graduate-level course in computer science and related biological processes, such as brain development. This would, one hopes, make for a more literate and critical understanding of the data generated by the machine and enhance the authenticity of the result, whether in court or in the operating room. Admittedly, that might at first seem too complex for some academic programs. At the very least, however, professions such as the law and medicine, and their related professions such as paralegals and physician assistants, already have a built-in continuing education system. In such cases, it should not be difficult to integrate a lecture on current developments in technology as they impact the profession. This would provide a basis for avoiding the Mata problem, though it would place a somewhat higher burden on the professional to “keep up.”

Just as genetically engineering human beings could produce “dinosaurs,” so a failure to expand the paradigm by which human beings interface with machines might retard the ability of society itself to avoid becoming a dinosaur.

The authors express their appreciation to Dr. Karolina Czapla, Department of Biochemistry and Molecular Biology, Medical University of Lublin, Aleje Racławickie 1, 20-059 Lublin, Poland, for her input into the preparation of this article.

This research did not receive any grant from funding agencies in the public, commercial, or not-for-profit sectors.

Copyright © 2023 by John McClellan Marshall and Roger F. Malina. All Rights Reserved.

[1]Mihai Nadin, Comment to Judge Marshall, July 2023. Italics in original.

[2]Daniel J. Boorstin, Cleopatra’s Nose: Essays on the Unexpected. New York: Random House, 1994.

[3]Plato, Republic (c. 375 BC), Book VII.

[4]See Anne Trafton, “Study: Deep Neural Networks Don’t See the World the Way We Do,” MIT News (October 16, 2023).

[5]See Elizabeth Loftus and Katherine Ketcham, Witness for the Defense: The Accused, the Eyewitness, and the Expert Who Puts Memory on Trial (Macmillan, 1991).

[6]Proprioception, also called kinæsthesia (or kinesthesia), is the sense of self-movement, force, and body position. Proprioceptive signals are transmitted to the central nervous system, where they are integrated with information from other sensory systems, such as the visual system and the vestibular system, to create an overall representation of body position, movement, and acceleration.

[7]See Elizabeth Loftus, The Myth of Repressed Memory (St. Martin’s Press, 1994) for a discussion of the problem.

[8]The term derives from the famous movie “Gaslight” in which a character is subjected to a series of experiences that have no rational explanation and is convinced that she is going insane.

[9]“Once you are real you can’t become unreal again. It lasts for always.” Margery Williams, The Velveteen Rabbit (George H. Doran Co., 1922).

[10]Tacitus, Agricola (98), Book 1, paragraph 21.

[11]See John McClellan Marshall, “Technoevidence: The ‘Turing Limit’ 2020,” AI & Society (2021). doi: 10.1007/s00146-020-01139-z

[12] Guardans and Czeglédy, “Oriented Flows: The Molecular Biology and Political Economy of the Stew,” 42 Leonardo 2 (2009), 145.

[13]This was in part a function of a television program dealing with violent crime known as “CSI”; the resulting impact, “the CSI effect,” led to the “no DNA” acquittals.

[14]Brenda Derouen, “The Future of Law? How Chat GPT Is Changing My Law Practice”, 86 Tex.BarJ. 542 (September 2023).

[15]See John G. Browning, “New and Improved? The Risks of Using ChatGPT”, 86 Tex.BarJ. 544 (September 2023).

[16]Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 113 S. Ct. 2786, 125 L. Ed. 2d 469 (1993).

[17]Mata v. Avianca, Case No. 1:22-cv-01461 (U.S. District Court for the Southern District of New York), June 2023, cited in Law360, June 23, 2023.

[18]Die Jungfrau von Orleans (The Maid of Orleans) (1801), Act III, sc. vi (trans. Anna Swanwick).

[19]For a discussion of the problem of updating software, see Sadegh Farhang, Jake Weidman, Mohammad Mahdi Kamani, Jens Grossklags, and Peng Liu, “Take It or Leave It: A Survey Study on Operating System Upgrade Practices,” 34th Annual Computer Security Applications Conference [ACSAC ’18], 490-504 (2018). doi: 10.1145/3274694.3274733

[20]Daniel J. Gervais and John J. Nay, “Artificial Intelligence and Interspecific Law,” 382 Science No. 6669 at 376 (October 27, 2023). doi: 10.1126/science.adi8678

[21]This point was summed up by Spock when discussing those norms that can be incorporated into a machine: “Practical, Captain? Perhaps, but not desirable. Computers make excellent and efficient servants; but I have no desire to serve under them.”