Knowledge Engineering: Computer-Written Fiction

By Peter Swirski
Published on February 28, 2014
“From Literature to Biterature,” by Peter Swirski, is a compelling look into the possibilities of technoevolution and its inevitable impact on human life.

From Literature to Biterature (McGill-Queen’s University Press, 2013) is based on the premise that, in the foreseeable future, computers will become capable of creating works of fiction. Author Peter Swirski considers hundreds of questions, among them: Under what conditions would machines become capable of writing fiction on their own? Can machines be genuinely creative, or is artificial creativity merely an extension of technological capabilities yet to be discovered? In this excerpt, Swirski introduces some of the thought-provoking questions and theories surrounding this potential technology.

To find more books that pique our interest, visit the Utne Reader Bookshelf.

Narrative Intelligence

The first general-purpose — Turing-complete, in geekspeak — electronic brain was a behemoth of thirty-plus tons, roughly the same as an eighteen-wheeler truck. With twenty thousand vacuum tubes in its belly, it occupied a room the size of a college gym and consumed two hundred kilowatts, or about half the power of a roadworthy semi. Turn on the ignition, gun up the digital rig, and off you go, roaring and belching smoke on the information highway.

Running, oddly, on the decimal rather than the binary number system, the world’s first Electronic Numerical Integrator and Computer also boasted a radical new feature: it was reprogrammable. It could, in other words, execute a variety of tasks by means of what we would call different software (in reality, its instructions were stored on kludgy manual plug-and-socket boards). Soldered together in 1946 by John Mauchly and J. Presper Eckert at the University of Pennsylvania, the ENIAC was a dream come true.

It was also obsolete before it was completed. The computer revolution had begun.

The rest is history as we know it. In less than a single lifetime, ever more powerful computing machines have muscled in on almost all facets of our lives, opening new vistas for operations and research on a daily basis. As I type this sentence, there are more than seven billion people in the world and more than two billion computers — including the one on which I have just typed this sentence. And, by dint of typing it, I have done my bit to make the word “computer” come up in written English more frequently than 99 per cent of all the nouns in the language.

In the blink of an eye, computers have become an industry, not merely in terms of their manufacture and design but in terms of the analysis of their present and future potential. The key factor behind this insatiable interest in these icons of our civilization is their cross-disciplinary utility. The computer and the cognitive sciences bestraddle an ever-expanding miscellany of disciplines, with fingers in everything from alphanumerical regex to zettascale linguistics.

Chief among them are artificial intelligence, artificial emotion, artificial life, machine learning, knowledge engineering, software engineering, robotics, electronics, vision, computability, and information science — all with their myriad subfields. Towards the periphery, they snake in and out of genetic algorithms, Boolean logic, neurology and neuropathology, natural language processing, cognitive and evolutionary psychology, decision and game theory, linguistics, philosophy of mind, and translation theory.

So swift has been the expansion of computers’ algorithmic and robotic capacities that quite a few of the assumptions forged in their formative decades no longer suffice to grasp their present-day potential, to say nothing of the future. This has never been truer than in our millennium, in which research subdisciplines such as Narrative Intelligence are finally, if slowly, beginning to crawl out from the shadow of more established domains of Artificial Intelligence.

Underwriting this new field is mounting evidence from the biological and social sciences that a whole lot of cognitive processing is embedded in our natural skill for storytelling. As documented by psychologists, sociobiologists, and even literary scholars who have placed Homo narrativus under the microscope, we absorb, organize, and process information better when it is cast in the form of a story. We remember and retrieve causally framed narratives much better than atomic bits of RAM.

The power of the narrative is even more apparent in our striking bias toward contextual framing at the expense of the underlying logic of a situation. People given fifty dollars experience a sense of gain or loss — and change their behavior accordingly — depending on whether they get to keep twenty or must surrender thirty. Equally, we fall prey to cognitive illusions when juggling percentages and natural frequencies. A killer disease that wipes out 1,280 people out of 10,000 looms worse than one that kills 24.14 per cent, even though bug number two is actually nearly twice as lethal.
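To see the arithmetic the illusion obscures, here is a minimal sketch (in Python; the figures come from the paragraph above, and the code itself is purely illustrative, not from the book) that converts both framings to a common scale:

```python
# Put both disease framings on the same scale to expose the illusion.
deaths, population = 1_280, 10_000   # "frequency" framing: 1,280 of 10,000
rate_a = deaths / population         # 0.128, i.e. 12.8 per cent
rate_b = 24.14 / 100                 # "percentage" framing: 24.14 per cent

print(f"Disease A kills {rate_a:.2%} of the population")      # 12.80%
print(f"Disease B kills {rate_b:.2%} of the population")      # 24.14%
print(f"Disease B is {rate_b / rate_a:.2f} times as lethal")  # 1.89
```

Stated side by side, the percentages leave no room for doubt; it is only when the formats are mixed that intuition misfires.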

Given such deep-seated cognitive lapses, the idea of grafting an artsy-fartsy domain such as storytelling onto digital computing may at first appear to be iffy, if not completely stillborn. Appearances, however, can be deceiving. The marriage of the computer sciences and the humanities is only the next logical step in the paradigm shift that is inclining contemporary thinkers who think about thinking to think more and more in terms of narratives rather than logic-gates. The narrative perspective on the ghost in the machine, it turns out, is not a speculative luxury but a pressing necessity.

Test of Time

This is where From Literature to Biterature comes in. Underlying my explorations is the premise that, at a certain point in the already foreseeable future, computers will be able to create works of literature in and of themselves. What conditions would have to obtain for machines to become capable of creative writing? What would be the literary, cultural, and social consequences of these singular capacities? What role would evolution play in this and related scenarios? These are some of the central questions that preoccupy me.

Fortunately, even if the job is enormous, I am not starting from zero. In Of Literature and Knowledge (2007) I devoted an entire book to the subject of narrative intelligence — human narrative intelligence. Evolution has bequeathed our species a cunning array of cognitive skills with which to navigate the currents of life. Human intelligence is uniquely adapted to thought-experiment about the future and to data-mine the past — in short, to interpret and reinterpret context-sensitive information.

But what about machine intelligence? Is it sui generis, a different kind of animal altogether, or just an offshoot of the main branch? And how does one calibrate intelligence in computers or even determine that it is there at all? Most of all, isn’t machine intelligence, like military intelligence, an archetypal oxymoron? Jokes aside, it is not, because human and machine evolutions are ultimately two phases of the same process.

Bioevolution may have spawned technoevolution, but the accelerating pace of scientific discovery makes it all but certain that humanity is destined to take evolution into its own hands. From that point on, the difference between what is natural and artificial will begin to dissolve. Bioevolution will become a subfield of technoevolution inasmuch as the latter will facilitate steered — i.e., teleological — autoevolution. The evolutionary equilibrium of our species will have been punctuated in a most dramatic way. Biology will have become a function of biotech.

On the literary front, computers capable of writing fiction bring us face to face with the gnarly problems of authorship and intentionality. Both lie at the heart of my Literature, Analytically Speaking (2010), which brings order to the riot of positions and presuppositions around the matter of human art. This time around, my targets are authorial intention and extension as they pertain to machines, albeit machines of a very different order from the mindless text-compilers of today: creative, artful, and causally independent.

I made a first pass at unlocking the riddles of computer authorship (or computhorship) and bitic literature (or biterature) in Between Literature and Science (2000). Much of what I wrote there has weathered the test of time better than I could have imagined. To take only one example, my analysis of the pragmatic factors in the administration of the Turing test, such as the role of perfect and complete information available to the judges, was independently developed in the same year by the cognitive scientist Ayse Saygin.

The success of these earlier studies gives me the confidence to trespass onto the territory staked out by computer scientists and roboticists, even if it is in pursuit of inquiries that might be, on balance, of more interest to literary scholars and philosophers. Seen in this light, From Literature to Biterature is not a book of science, even though some humanists may find it overly technical for their taste. Nor is it a book of literary theory, even as it liberally trucks in narrative cases and critical lore. It is rather an old-fashioned book of discovery or, if you prefer, a modern adventure story of the mind.

Avocados or Wombats

While books on digital literature are no longer as hard to find as they were in the days of Espen Aarseth’s Cybertext (1997), literary-critical voyages to the digital shores of tomorrow are still as rare as a steak that runs on four legs. This includes the self-styled exception to the rule, a cheerleading overview of computer-assisted writing, N. Katherine Hayles’s Electronic Literature (2008). Limited by and large to a survey of second-order mashing protocols of yesterday, it stands in sharp contrast to my anatomy of electronic literature of tomorrow.

The aim of my explorations, the first of their kind, is simple. I want to draw a synoptic map of the future in which the gulf concealed by the term “bitic literature” — that is, the gulf between mere syntax-driven processing and semantically rich understanding — has been bridged. As it happens, the gulf in question is greater than that which separates tree-climbing pygmy marmosets from the Mbenga and Mbuti human pygmy tribes of Central Africa. As such, the ramifications of this single lemma could hardly be more far-reaching.

Most conspicuously, acknowledging biterature as a species of literature entails adopting the same range of attitudes to computhors as to human writers. Specifically, it entails approaching them as agents with internal states — creative intentions, for instance. Here, however, we find ourselves on shaky ground. Computers with creative intentions are perforce computers that think, but what is the status of such thinking? Are machine thoughts and intentions real, like avocados or wombats, or are they merely useful theoretical fictions, like centres of gravity or actuarial averages?

These questions breed other questions like neutrons in a runaway chain reaction. Is thinking in computers different from that in humans? What would it mean if it turned out to be radically different or, for that matter, not different at all? Can machines really think, or is thinking merely what we could attribute to advanced future systems? And what does “really” really mean in this context, given that thinking about thinking machines segues seamlessly into thinking about machines with personality, identity, karma, and God knows what else?

Institute for Advanced Studies

As I work my way through this thicket of questions, I hitch a ride on the shoulders of three thinkers who have, in their respective ways, mapped out the contours of this terra incognita. First on the list is Stanislaw Lem. One of the greatest novelists of the twentieth century, he was also one of the most far-sighted philosophers of tomorrow, lionized by Mr Futurology himself, Alvin Toffler (of Future Shock fame). Not coincidentally, Lem is also the author of the only existing map of the literary future, patchy and dated though it may seem today.

But even as From Literature to Biterature is a book about literature, it is not a book of literary exegesis. The difference is fundamental. Throughout Lem’s career, reviewers marvelled at the complexity and scientific erudition of his scenarios, eulogizing him as a literary Einstein and a science-fiction Bach. His novels and thought experiments, exclaimed one critic, read “as if they were developed at Princeton’s Institute for Advanced Studies.”

Maybe, maybe not. For how would you know? Literary critics hardly flock to Princeton to get a degree in the computer or evolutionary sciences in order to find out if Lem’s futuristic scenarios are right or, indeed, wrong. Not a few contend, in fact, that this is the way it ought to be, inasmuch as literary studies are not even congruent with the orientation of the sciences — cognitive, biological, or any other. If so, I purposely step outside the confines of literary criticism by affording Lem the conceptual scrutiny he deserves, in the process tweaking his hypotheses or rewriting them altogether.

If Alan Turing’s name is synonymous with machine intelligence, it is as much for his contributions to cognitive science as for his unshakable belief in intelligent machines — contra John von Neumann, who was as unshakably certain they would never think. From Turing machines to the Turing Award (the most prestigious prize for computer scientists), down to the Turing Police, who lock horns with a supercomputer that plots to boost its intelligence in William Gibson’s modern sci-fi classic Neuromancer — his name pops up everywhere.

Celebrated as one of Time’s “100 Persons of the [Twentieth] Century,” Turing is regarded to this day as a prophet of machine intelligence. More than sixty years after it was unveiled, his test remains the single most consequential contribution to our thinking about thinking computers. Appropriately, the 2012 celebrations of the Turing Year even included a special Turing test conducted at Bletchley Park outside London, where the British mathematician played such a decisive role in decoding German military ciphers during World War II.

In many ways, the Turing test forms the backbone of the book in your hands. Although the protocol itself and all the Chinese Room-type critiques of it are by now nothing new, I squeeze a fair amount of juice out of the old orange by going at it from a novel angle, while adding a bunch of new twists to old themes. Indeed, as I argue throughout, the test itself is a categorically different tool — inductive rather than analytic — from what almost every analyst claims it to be. 

As for Charles Darwin, I assume you don’t need to be told who he was and why he matters. Let me just say that, even though From Literature to Biterature is not a book of fiction, it fashions a hero of epic proportions: evolution. No one can tell what role evolution will play in future computer design, but my bet is that it will be substantial. My other bet is that natural selection itself may be due for an overhaul as laws of self-organization and autocatalysis supplement what we know of this fundamental force that orders our living universe. 

Chiefly, although not exclusively, the last part of the book thus spins a series of scenarios that leave the present-day world far behind. My basic premise is that computer evolution will by its very nature exceed the pace of natural evolution a thousand- or even a million-fold. Being, at the same time, directed by machines themselves, it will leave its imprint on every page of the book of life as we know it. Whether these scenarios qualify as literary futurology, evolutionary speculation, or philosophy of the future I leave for you to judge. 

This excerpt has been reprinted with permission from From Literature to Biterature: Lem, Turing, Darwin, and Explorations in Computer Literature, Philosophy of Mind, and Cultural Evolution by Peter Swirski, published by McGill-Queen’s University Press, 2013.
