By Associate Professor Kieran Tranter
Griffith Law School
What if this is not the end?
This is the question chased in my new book Living in Technical Legality (Edinburgh University Press, 2018).
Many feel deeply insecure about the future in the face of rapid technological change. News feeds scroll by suggesting that humans as a species face a degraded future; a future of uncertain employment, automation and increased surveillance. A future where life-changing decisions are made by algorithms, with humans becoming passengers – or possibly just livestock – on a planet controlled by big data processing systems.
In short, an end to human life, autonomy and responsibility as it has been known.
In this book I try to tell a different story; a more hopeful story about human futures. I do so through science fiction.
Science fiction is the cultural repository for hopes and fears about the future of human life with technology. This much is readily apparent whenever there is talk of ‘science fiction becoming science fact’, or worries about robots becoming the ‘terminator’ or GMOs as ‘Frankenfoods.’ These concerns often become linked to law. Law is seen as a way that humans in the present can avoid a future composed of science fiction’s nightmares, while also legislating for some of science fiction’s daydreams to come to pass.
This identification of cultural moments when law and science fiction become intertwined is a critical starting point for the book. There is a narrative at play – law is to save humans from the monster of technology. This narrative has a particular science fiction pedigree; its antecedents lie in what is often considered the first work of science fiction, Mary Shelley’s Frankenstein.
In Frankenstein technology is a monster: external, threatening and different from the human. It is the product of an amoral science that both fails to consider the consequences of its research and eschews responsibility once the monster escapes into the world. In the story humans are passive and vulnerable to the monster’s murderous pursuit. Yet the monster is not inherently evil. Shelley’s monster is the product of its treatment and circumstances. With guidance, rules and nurturing – care that was denied it – it could have been beneficial to humanity. In this the novel suggests that law can regulate technology. The lesson that Frankenstein has been teaching for 200 years is that humans need to be proactive in managing technology.
However, there is a point that is often missed when Frankenstein surfaces in responses to technological change. It has to do with how law is being conceived in relation to technology. It is an account of law as an instrument of regulation. Instruments are tools: knowledge combined with materials to facilitate human doing in the world. In other words, when faced with anxieties over technological futures, law comes to be regarded as technological. This reveals a fundamental irony. In Frankenstein’s terms the monster has won. Human society can only save itself from technology, ironically, with more technology.
This seems to lead back to the hopelessness and endings that I wanted to get away from.
Except.
What if we accept that to be human is to be fundamentally engaged with technology? Human history is defined by technological epochs (Stone Age, Steam Age, Information Age). Our daily lives are lived through and with technological entanglements. This is most obvious in the ubiquity of highly complex technological objects such as the smart phone or the motor vehicle. These objects are a physical focus for a set of networks that establish and empower forms of human interaction. However, technological entanglements can be seen beyond the obvious machines of consumer capitalism, glimpsed in the foods we eat, the clothes we wear, the air we breathe, and the dreams we dream.
Whether humans were always so entwined with technology is a moot point. Some identify the palaeolithic emergence of tools and language as the decisive turn to technology. In this story humans as a species have evolved as technological beings. Others blame the classical Greek philosophers who set up the intellectual resources for the West to count, theorise and reconstruct the world. Regardless of the origin story, many consider that the reality of the present is that humans occupy a world of ‘natureculture’ where divisions between nature and culture must properly be seen as merged in a thoroughly technological space.
Human life, autonomy and responsibility have been mediated by and through technology for a very long time. If there ever was an ‘end’ to some pre-technological sense of the human, it happened a long, long time ago. Nevertheless, life and living have endured after this supposed apocalypse. Humans have found, and do find, meaning and worth in the technological world.
So the question becomes: how can a meaningful and worthwhile life be lived in the technological world? Instead of anxiety and calls for legislation, how do we plan and live as technological beings? This involves two questions. The first is to establish what it means to be a technological being. The second is how that technological being can live a worthwhile life.
Science fiction is particularly useful in crafting responses to these questions. Science fiction is the place where technological beings dream themselves, their society and its future. This is why science fiction references accompany public discussions about disruptive technologies. It also explains the popularity of science fiction within mainstream contemporary culture, with science fiction being the basis for much franchised cinema and digital gaming.
In the book I take the reality of a science fiction infused culture and go further. I argue that science fiction imagines technological beings as nodes within networks. Science fiction shows embodied locations where complex systems of information and material meet; locations where networks constrain, but also empower, doing in the world. The being that convention calls the ‘human’ is a hybrid entity composed of biology and culture that changes as it moves through time. This is a different entity from many of the past-focused, nature-based accounts of the human inherited from the Western intellectual tradition. Where older conceptions of the human were static, concerned with natural essences that gave rise to rights, the human as a technological being is a fluxing node where multiple networks meet. It is less a discrete ‘entity’ and more a ‘possibility.’
Which can be terrifying. Absent from this response to the question of what it means to be a technological being is a sense of certainty. There are two dangers. The first is over-determination: that the node in the network is pre-programmed. There is no choice of action; everything is set by the wider context, leaving no scope for discrete, purposeful doing in the world. This is often the experience of the contemporary worker-node, where automated systems, surveillance and performance reviews mean that the scope for subjective decision-making has been eliminated: ‘The computer says no’. This is understandable but false. The complexity of being a node in the network means that rarely is there unmitigated compulsion to do a certain action. While it might be unrealistic and unreasonable to use terms that imply total freedom of choice, like ‘autonomy’, there is always a form of structured agency. Central to science fiction has been the screening of this sense that technological beings are constrained and limited but are still capable of purposeful action.
This awareness of ‘structured agency’ opens onto the second terrifying danger. The first danger was terrifying in that there can often be the impression of no choice. The second is the reverse. A node in the network exercising structured agency could be considered to face an amoral set of choices. It has been commented that technological decisions are amoral, that they are indifferent to outcomes. In this perspective technological decisions are, at best, concerned with efficiency of process. This is the second terrifying danger: that the exercise of structured agency is a value-free zone where any choice is equally valid. The fact of choice and doing – of tweeting hateful bile in an impulsive rage as opposed to thoughtful kindness – could be seen as a good in itself. The act rather than its consequences becomes prioritised. This is the final contribution of the book: how nodes in the network can exercise their structured agency to do well.
As nodes in the network there is a tendency towards connection. For current thinkers on ethics for technological beings, this tendency towards connection provides the values through which choices and actions can be judged. There is responsibility to the becoming of the world. For some this is expressed in terms of making the world more complex and forging more connections. For others there is a tangible sense of connection and empathy with other nodes in the network that can be nurtured and developed. It is this sense of responsibility that should inform the exercise of the structured agency available to technological beings.
In the book I examine three specific node locations for the exercise of responsibility to the becoming of the world. Each has a connection with law and each is explained through a specific science fiction. For the node in the network that is a hybrid of biology and rights and responsibilities under law – the entity that convention calls the legal subject – Octavia E. Butler’s Xenogenesis story is discussed. This critically acclaimed trilogy of novels from the late 1980s explores the limits and possibilities of action in a colonised, biologically over-determined space. The novels dream how to exercise structured agency in the personal, intimate and everyday. For the node in the network that convention would call a lawyer, the BBC’s enduring and ever popular Doctor Who is invoked. Doctor Who has, for over 50 years, materialised into the world a complex account of how to be responsible in the making and breaking of networks. For the nodes that are legal scholars, the deserts and visceral interactions of machines and bodies in the Mad Max films are explored. Through this post-apocalyptic landscape, the responsibility of the legal scholar to map the networks in their complexity becomes prioritised. The scholar has an empowering responsibility to identify the opportunities for the exercise of structured agency; to show how technological beings can live well after the end.
In short, this is not the end, notwithstanding ever-present anxieties about technological futures. But there is a need to let go of older forms of thought. To live well with technology involves embracing the science fictionality of the present. Rather than passively deferring to the machines that are making the world, it involves seeing and seizing the opportunities for making a difference.