Category Archives: Information Security

The Forgotten Architects of the Cyber Domain


 

“Very great and pertinent advances doubtless can be made during the remainder of this century, both in information technology and in the ways man uses it.  Whether very great and pertinent advances will be made, however, depends strongly on how societies and nations set their goals”

“Libraries of the Future”, J. C. R. Licklider, 1965

 

July 1945.  The Second World War is drawing to its final close.  Although the USA will continue to conduct operations in the Pacific theatre until a radio broadcast by Emperor Hirohito on August 15th announces Japan’s surrender, the outcome is assured.  Victory in Europe had already been achieved.  Germany had surrendered, unconditionally, to the Allies at 02:41 on May 7th.  By July, total Allied victory was an absolute inevitability.  Peace loomed.

 

The seeds of the Cold War had already been planted, perhaps as early as the Nazi-Soviet Non-Aggression Pact of 1939; certainly by the time of the frantic race to Berlin between the USSR from the East, and the UK and the USA from the West, following the Normandy landings and the Allied invasion of Europe.  From these seeds, the roots of the continuing, global, and existential struggle that was to define and shape the human story for what remained of the twentieth century were already growing, and at a remarkable rate.  However, it would not be until March 5th 1946 that Churchill would declare, “from Stettin in the Baltic to Trieste in the Adriatic an iron curtain has descended across the Continent”.

 

In July 1945, the deep terrors of the Cold War were, for most of humanity, unforeseen and unimagined.  In July 1945, many of humanity’s finest minds were compelled to contemplate the ruins of the world and, more importantly, the new world that they would make to replace and improve that which had been destroyed.  Fascism had been defeated.  Democracy had prevailed.  A high price had been paid by victor and vanquished alike.  Cities, nations and empires lay in the ruins of victory as well as of defeat.  Amongst the victors, elation was tempered with exhaustion.  The UK economy in particular had been dealt a beating from which it would never recover.

 

The world had witnessed the capacity of human science and technology to mechanise and industrialise wholesale slaughter of soldiers and civilians alike; had watched the mass production of death played out on a global stage.  War, and genocide, had been refined by western civilisation to a grotesquely clinical exercise in accountancy and modern management.  The legitimacy of the European imperial project perished in the barbarity and horror of Auschwitz and Stalingrad.

 

In order to secure this victory, the entirety of the will, energy and treasure of the greatest nations on Earth had been devoted to one single aim: victory.  This had been a total war in every sense of the word.  Now that victory had been attained, what next?  What was the new world to be remade from the ruins of the old to look like?

 

In the Summer of 1945, there was a sense that great things must now be done in order to ensure that the new world would be one worthy of the sacrifices that had been made; a peace worth the price.  All of this had to have been for something.  Amongst the great minds of humanity a sense had grown of the power of human agency and spirit to create great effect.  These were the minds that had harnessed the power of the atom, through technology, to the human will.  These were the minds that had created machines of vast power and sophistication to make and break the deepest of secrets.  These were the minds that sensed the expectations of history upon them.  It was their responsibility, individually and collectively, to secure the peace just as it had been to win the war.  It was their duty to enhance and improve the human condition.  And, they knew it.

 

In the July 1945 issue of the “Atlantic Monthly”, the man who had spent his war directing and channelling the scientific research required to secure victory in arms responded to the imperatives of the peace, and the call of history, with the publication of the seminal paper “As We May Think”.  As first the chairman of the National Defense Research Committee, and then the director of the Office of Scientific Research and Development, Vannevar Bush was responsible for directing and co-ordinating the prodigious and groundbreaking research required to enable the prosecution of total war on an industrial scale.

 

In his paper, Bush openly acknowledges that, for scientists, “it has been exhilarating to work in effective partnership” in order to attain a “common cause”.  He poses the question: “what are the scientists to do next”, now that the exhilaration of the war has ebbed away?  His answer is that the scientists of the peace must turn their attentions to making real the radical transformation in the relationships between humanity and information promised by the technology developed at such pace and cost during the war.  For Bush this is about far more than computers as great calculators for scientists; “a much larger matter than merely the extraction of data for the purposes of scientific research; it involves the entire process by which man profits by his inheritance of acquired knowledge”.

 

Bush proposed the creation of a device to extend and enhance the human memory; a machine to aid and augment the human powers of cognition, imagination and creation; a computer to work in symbiosis with the human.  He proposed a device that would operate as human thought does, “by association”.  For Bush, the human mind can, “with one item in its grasp”, link “instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain”.  He describes “a future device for individual use, which is a sort of mechanized private file and library”.  He gives it a name: the “memex”.  The memex extends the human memory and the mind; “it is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility”.
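
Bush imagined all of this in terms of microfilm, levers and photocells, yet the underlying idea of the associative “trail” translates naturally into software.  The short sketch below is a purely illustrative, modern analogy in Python (the structure and names are assumptions of mine, not Bush’s design): each record carries named associative links to the next item on a trail, and a reader may follow a trail from any starting point.  The bow-and-Crusades example is borrowed from Bush’s own paper.

from dataclasses import dataclass, field

@dataclass
class Record:
    """A single stored item: a book, a note, a photograph or a memorandum."""
    title: str
    content: str
    # Named trails this record belongs to, mapping a trail name to the next
    # record along that trail: association rather than hierarchy.
    trails: dict = field(default_factory=dict)

def link(trail_name, *records):
    """Join records into a trail, so that each leads by association to the next."""
    for current, following in zip(records, records[1:]):
        current.trails[trail_name] = following

def follow(start, trail_name):
    """Walk a trail from a starting record, yielding each associated item in turn."""
    node = start
    while node is not None:
        yield node
        node = node.trails.get(trail_name)

if __name__ == "__main__":
    bow = Record("Turkish short bow", "Why was it superior in the skirmishes of the Crusades?")
    longbow = Record("English long bow", "Construction and use")
    materials = Record("Elasticity", "Physical constants of available materials")
    link("bows", bow, longbow, materials)
    for item in follow(bow, "bows"):
        print(item.title)

The hyperlink, and the reader who follows it, are the direct descendants of this associative structure; that is precisely the line of descent this essay traces from the memex to the Web.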

 

He gives the new thing a form: it is “a desk”.  For the human, it is a “piece of furniture at which he works”, rather than a mysterious, inaccessible, gargantuan, monster-machine filling an entire room.  It is moulded around human interaction; it has “slanting translucent screens on which material can be projected for convenient reading”.  For data entry and control there is both a “keyboard”, and “provision for direct entry” via a “transparent platen” upon which can be “placed longhand notes, photographs, memoranda”.  These originals can then be “photographed onto the next blank space in a section of the memex film”.  If at any time the user loses the thread of their interaction with the memex, a “special button transfers him immediately to the first page of the index”.

 

Bush’s paper lays out the essence of the core thinking upon which the World Wide Web was to be constructed.  Bush proved incorrect in his preference for analogue over digital computers.  However, his vision of human interactions with information augmented by symbiotic machines, integrated by design into the associative workings of human cognition, has been made real.  We see this in the algorithms driving today’s search engines; in the concepts and technology of hyperlinking that provide the threads of the Web, built upon the decades of associative and cumulative thought he initiated; and in the principles informing the graphical user interface model of mediating human communications with our machine counterparts.

 

Bush’s paper entered an intellectual context already populated with minds turned in the direction of computers and computing.  In particular, the minds of A. M. Turing and John von Neumann.  Turing’s 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem” offers clear evidence of the direction of his pre-war thinking towards the possibility of a universal computing machine.  Whilst attending Princeton from 1936 to 1938, Turing encountered von Neumann for a second time, the two having first met when von Neumann was a visiting professor at Cambridge during the third term of 1935.  In 1946, John G. Kemeny had “the privilege of listening to a lecture at Los Alamos by … John von Neumann” in which von Neumann laid out five principles for the design of a computing machine of the future.  Kemeny’s memory has von Neumann’s proposals as: “1, Fully Electronic Computers”, “2, Binary Number System”, “3, Internal Memory”, “4, Stored Program” and “5, Universal Computer”[1].  Turing’s direction of mind towards the question of machine intelligence is signposted in his lecture to the London Mathematical Society on 20th February 1947.  The world was not to know of Colossus for decades to come.

 

In 1948, Norbert Wiener published the first of a series of three books in which he announced the creation of a new science: “Cybernetics: or Control and Communication in the Animal and the Machine”.  The foundation text was followed by “The Human Use of Human Beings” in 1950, and the trilogy was completed in 1964 with “God and Golem, Inc.”.  Wiener’s mission in 1948 was to provide a “fresh and independent point of commencement” for “the study of non-linear structures and systems, whether electric or mechanical, whether natural or artificial”[2].  By 1964 he was reflecting on “three points in cybernetics”.  Firstly, “machines which learn”.  Secondly, “machines … able to make other machines in their own image”.  Thirdly, “the relations of the machine to the living being”[3].  Across the span of these texts, Wiener develops the conceptual, philosophical and mathematical framework that unifies and transforms human and machine in the cyber domain of today.

 

Two years before he joined the newly created Advanced Research Projects Agency in 1962, J. C. R. Licklider had already begun to direct his thoughts towards the “expected development in cooperative interaction between man and electronic computers” that will lead to a “man-computer symbiosis” in which “a very close coupling between the human and the electronic members of the partnership” will “let computers facilitate formative thinking” and “enable men and computers to cooperate in making decisions and controlling complex situations” [4].  By 1968, Licklider predicted with assured confidence that, despite it being “a rather startling thing to say”, nonetheless, “in a few years, men will be able to communicate more effectively through a machine than face to face”.[5]

 

In 1965, three years on from his appointment to the new agency, Licklider published his report into the “Libraries of the Future”.  His task was not to examine new ways to store and retrieve books, but instead to consider the “concepts and problems of man’s interaction with the body of recorded knowledge” and to explore “the use of computers in information storage, organisation, and retrieval.”  His prediction was that what he called a ‘procognitive’ system would evolve based on digital computers.  Outlandish though it might seem to the readers of the report in 1965, these computers would have “random-access memory”, “content-addressable memory”, “parallel processing”, “cathode-ray-oscilloscope displays and light pens”, “hierarchical and recursive program structures”, “procedure-orientated and problem-orientated languages” and “xerographic output units”.  They would be enmeshed; interconnected through “time-sharing computer systems with remote user terminals”[6].

 

In 1971 the first e-mail was sent, across ARPANET; a system of networked computers brought into being in 1969 as a direct realisation of J. C. R. Licklider’s vision.  A system conceived and moulded by human thought and will; the system that stands as the point of genesis of the Internet.  A net that provided the fabric from which Bush’s web could, and would, be woven.  The foundations of a cybernetic system in which Bush’s memex morphs into the universal machine of Turing and von Neumann.

 

Whilst at ARPA, Licklider established an applied research programme that laid the foundations for generations of research and development, and postgraduate teaching, in computers and computing.  The programme took years, if not decades, to bear fruit.  Directly and indirectly, it produced some of the keystone elements of modern computing.  It continues to do so to this day.

 

The names of the institutions funded by this programme still read like a who’s who of the great and the good in the realms of the teaching and research of computing.  Because of Licklider, the University of California, Berkeley was granted funds to develop time-sharing through Project Genie.  Likewise, the Massachusetts Institute of Technology was enabled to research Machine Aided Cognition, or Mathematics and Computation, or Multiple Access Computer, or Man and Computer, through Project MAC[7].  What was to become Carnegie Mellon University took receipt of six hundred million dollars in order to conduct research into the theory of computer programming, artificial intelligence, the interactions between computers and natural languages, the interactions between humans and computers, and the design of computing machinery.  The Augmentation Research Center within the Stanford Research Institute was tasked with developing technologies to augment the human intellect and to enable humans and computer systems to interact.

 

The birth of Open Source in the 1970s, and the development of the RISC architecture in the 1980s at the University of California, Berkeley, stem from the seeds planted by Licklider.  As does the genesis of social networking, manifest in the Community Memory Project terminal found in Leopold’s Records in Berkeley in 1973.  The use, in 1984, of robots designed by Carnegie Mellon academics in the clean-up of the wreckage and debris from the partial nuclear meltdown at Three Mile Island has the same lineage.  Likewise, the continuing and growing world-leading position in the areas of artificial intelligence and the theories of computation enjoyed by the Massachusetts Institute of Technology.  Similarly, the emergence of the mouse, hyperlinks and the graphical user interface from the Stanford Research Institute shares this common origin.  All of this sits in a direct causal relationship to Licklider’s endeavours.  All of this, impressive though it is, leaves out the impact of the graduates from these institutions and the creation around them of a culture and an environment within which great things are done.  Stanford nestles in the heart of Silicon Valley and counts Sergey Brin, Larry Page and Vinton Cerf amongst its alumni.

 

The testaments to the enduring legacy of Licklider’s vision are as clear as the most important lesson they offer: namely, that the success of the human sense-making project in the area of cyber can only be imagined through a long-range lens.  Success in this endeavour is quite possibly our only hope of surviving, let alone harnessing, the inexorable dependence humanity now has on cyber.  A dependence foretold by science fiction.

 

In his 1946 story “A Logic Named Joe”, a merry tale of the Internet (the tanks), PCs (logics), and the near collapse of society because of them, Murray Leinster has the tank maintenance engineer reply to the suggestion that the network of logics and tanks might be shut down in order to save humanity from the eponymous Joe, a logic that has somehow attained a form of sentience, with the chillingly prescient riposte: “Shut down the tank?” he says mirthless.  “Does it occur to you, fella, that the tank has been doin’ all the computin’ for every business office for years? It’s been handlin’ the distribution of ninety-four percent of all telecast programs, has given out all information on weather, plane schedules, special sales, employment opportunities and news; has handled all person-to-person contacts over wires and recorded every business conversation and agreement – listen, fella! Logics changed civilization. Logics are civilization! If we shut off logics, we go back to a kind of civilization we have forgotten how to run!”

 

Before the risky and radical funding and research construct Licklider created came into being, not a single Ph.D. in computing had been conferred anywhere in the USA; the first was not awarded until 1969.  Licklider operated with courage, foresight and vision.  Humanity, and the US economy, are the richer because he did.  He established an academic context that would be impossible to attain in the UK today within the confines set by the current funding regime and exemplified in the Research Excellence Framework.

 

Our academic institutions are locked into a funding structure that actively militates against radical and disruptive thought.  Intellectual creativity and cross-disciplinary work are driven out by a system that rewards conservatism, conformity and compliance, with research funding and professional advancement.  This same culture fosters a headlong retreat into ever narrower slivers of specialisation.  The only sense in which matters differ from Bush’s observation in 1945 that “there is increasing evidence that we are being bogged down as specialization extends”[8] is that we are now worse off than they were three quarters of a century ago.

 

Just as we have retreated into the cold comfort of conformity in sterile research, so we have allowed training to usurp education.  We are producing generation after generation of graduates, more or less skilled in the rote application of knowledge and processes which are themselves more or less relevant to the world as it is.  These graduates have no sense of the interactions between the technology of computing and humanity; no sense even of the origins and nature of the technology.  They are trained.  They are technicians; highly skilled technicians with a demonstrable ability to master very complicated processes; but technicians nonetheless.  They are, by design, bereft of the capacity for critical or creative thought.  They can exercise formal logic in response to established patterns.  They can accomplish complicated and familiar tasks with great facility.  Yet, by virtue of the training itself, they are incapable of adapting to change.  They are closed systems, devoid of the ability to act on feedback.  Unable to change their state and unable to evolve.  Unlike the cyber system they inhabit.


Across the community of those interested in cyber and cyber security, there are numerous voices calling, correctly, for a science of cyber.  However, there is manifest confusion about what such a call amounts to.  The goal of science is not the acquisition of empirical data per se.  Neither is the creation of a science the elevation of assertions to fact simply because they are uttered from the mouth of a scientist.  Science is about a rigorous and methodical approach to the formulation, testing, destruction and re-making of hypotheses in order to push back the frontiers of human knowledge and understanding.  Science requires insight, vision, creativity, courage and risk-taking in the formulation of these hypotheses as much as it requires discipline, rigour and method in their testing.  Those who make the call for a science of cyber should first read Wiener.

 

J. C. R. Licklider was a principal and formative actor at the heart of the military-industrial complex called forth by the existential imperatives of the Cold War.  And he knew it.  On the 4th October 1957, a highly polished metal sphere less than a metre in diameter was launched into an elliptical low earth orbit by the USSR.  Elementary Satellite-1, Sputnik-1, became the first artificial earth satellite and an apparent symbol of Soviet scientific power.  The eyes of the world could see it.  The radio receivers of the world could hear it.  The propaganda victory gleaned by the USSR was bad enough.  But worse, for the controlling minds of the US government and military, the nightmare of space-borne weapons platforms became instantly real.  The divide between science fiction and science fact vanished overnight.  With neither warning, nor time to shelter, atomic destruction could now descend directly from the darkness of space.

 

The USA had fallen behind Soviet technology, without even knowing it.  Worse, the US lacked the capacity to conduct the research required to catch up.  In February 1958, in the midst of his presidency, Eisenhower created the Advanced Research Projects Agency (ARPA).  In 1962, Licklider, a psychologist who had taught at MIT before moving to Bolt Beranek and Newman, was plucked by ARPA and placed in charge of the newly created Information Processing Techniques Office.  His mission was to lead the development of research and the creation of technologies to enable the military use of computers and information processing.  In his own words, his job was to “bring into being the technology that the military needs”[9].

 

It is reasonable to assume that by the time of his recruitment by ARPA, Licklider had heard, if not read, the farewell address of the 34th President of the USA, Dwight D. Eisenhower, given on the 17th January 1961, in which he asserted that for the USA, “a vital element in keeping the peace is our military establishment.”  Survival required that “our arms must be mighty, ready for instant action, so that no potential aggressor may be tempted to risk his own destruction”.  Eisenhower also recognised that “this conjunction of an immense military establishment and a large arms industry is new in the American experience”.  He understood that “the total influence — economic, political, even spiritual — is felt in every city, every statehouse, every office of the federal government.”  Likewise, he was clear that this was a precondition for survival; “we recognize the imperative need for this development.”  However, Eisenhower was not simply describing or justifying the existence of the military-industrial complex.  He was warning of its potential dangers.  Existential imperative though it was; nonetheless “we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society.”  Eisenhower’s warning to history was clear and direct; “The potential for the disastrous rise of misplaced power exists, and will persist. We must never let the weight of this [military and industrial] combination endanger our liberties or democratic processes”.

 

As Turing and von Neumann gave material, technological form to the mathematics of universal, stored-program, digital computation; and as Vannevar Bush laid the foundations of the World Wide Web; and as Wiener equipped humanity with the new science required to enable our comprehension of the new world these minds had created; so Licklider created the conditions and the context within which the Internet was born.

 

More than this, he created the structures within which computers and computing were developed.  Licklider was the architect of the assimilation of the Internet, computers and computing into the service of the second great existential conflict of the twentieth century; the defining context of the Cold War.  The vast and transformative construct we call cyber was imagined as a consequence of the devastation wrought by one great war; and formed into reality as a means of avoiding the extinction-level consequences of another.  However, both Bush and Licklider imagined their work as a means by which humanity would evolve and improve and flourish.  Not merely as a means by which it would avert extinction.  Not merely as a weapon of war.

 

The forgotten architects of the cyber domain, Bush and Licklider, imagined a world transformed.  They understood the centrality of the human relationship with information; and, they understood that the potential to re-shape this relationship was also the potential to re-form and re-make the essence of our humanity, for the better.  They understood that their vision of the transformation to the relationship between humanity and information, which they also gave us the ability to make real, represented our best, our only, hope of survival.

 

As he concludes his reflections in “As We May Think”, Bush observes that, in 1945, humanity has already “built a civilization so complex that he needs to mechanize his record more fully if he is to push his experiment to its logical conclusion”.  He is clear that science has granted us wonders; that it has “built us [the] well-supplied house” of civilisation within which we are learning and progressing.  He is equally clear that science has given us the terrible power to “throw masses of people against another with cruel weapons”.  His hope is that science will permit humanity “truly to encompass the great record and to grow in the wisdom of the race experience.”  His fear: that humanity “may yet perish in conflict before [it] learns to wield that record for his true good.”  His judgement: that having already endured so much, and having already accomplished so much “in the application of science to the needs and desires of man, it would seem to be a singularly unfortunate stage at which to terminate the process, or to lose hope as to the outcome”.  This remains as true in 2014 as it was in 1945.

 

The human use of computers and computing as a means to survive and prevail in the Cold War was both inevitable and desirable.  It put these machines at the service of powerful imperatives and commanded the release of the vast troves of treasure, time and intellectual power required to bring these complex and complicated creatures into existence.  It gave us an initial set of market, management and security mechanisms through which we could bring the newborn creature to an initial state of early maturity.  Now, the Cold War is over.  Time to think again.  Time to bring computers and computing back into the service of augmenting and improving the human condition.  Humanity depends upon the evolution of cyber for its own evolution; and, for its very existence.

 

It falls to us to use the new domain in accordance with the spirit of the intent of its architects.  It falls to us to exercise our agency as instrumental elements of the cybernetic system of which we are an integral part.  To do this, we must first learn of the origins of this new domain and re-discover the minds of its makers.  Therefore, we must ourselves read and study the work and writings of Norbert Wiener, Vannevar Bush, J. C. R. Licklider, Alan Turing and John von Neumann.

 

Then, we must re-design our university teaching programmes.  Firstly, by building these works in as core undergraduate texts.  Secondly, by using these texts as the foundation of a common body of knowledge and learning across all of the fields associated with cyber, including computing, robotics, artificial intelligence and security.  Thirdly, by encompassing within the common body of knowledge and learning disciplines hitherto alien to the academic study of computing.  Cyber is about philosophy as much as it is about mathematics.

 

The funding and direction of our research practice should be similarly reformed.  Just as ARPA directed broad areas of research targeted at complementary areas of inquiry, so our funding should be similarly directed.  We should be targeting research at precisely those areas where cyber can enable and empower humanity.  At enabling, extending and enhancing democracy for instance.  Research funding should not be allocated according to the ability of the recipient to demonstrate formal compliance with a mechanistic quality control regime; as, in effect, a reward for the ability to game the system.  Rather, it should be awarded on the basis of an informed judgement, by humans, about the capacity of the recipient to deliver radical, creative, innovative and disruptive thought.  One way to do this would be to emulate the practice of public competitions for the selection of architects for buildings of significance.  Research should call forth answers; not merely elegant articulations of the problem.

 

Research funding should enable, even reward, intellectual courage and risk taking.  Researchers should be allowed to fail.  Creative and productive failures should be celebrated and learnt from.  Those allocating research funding should take risks, and be praised and rewarded for doing so.  Without the courage and risk taking of Licklider, Bush, Turing, Wiener and von Neumann, and those who supported and paid for them, where would we be now?

 

We are beginning to grope towards the first glimmerings of comprehension of the enormity, and scale, and velocity of the transformation to the human story that is cyber.  Once more, it is required of us to think and act with courage, foresight and vision.  It falls to us to reform and reshape both the ‘what’ and the ‘how’ of our thoughts and our deeds.  It is time to prove ourselves worthy of the trust placed in us by the architects of the cyber domain.

 

We, of course, have something available to us that the architects of the domain did not; the existence of the domain itself.  An immeasurably powerful construct conceived, designed and engineered by its makers precisely in order to liberate human intelligence and creativity.  Time to shed the shackles of the Cold War and set it free.

 

I propose the creation of a new institute; The Prometheus Institute for Cyber Studies.  So named as a conscious invocation of all of the cadences, ambiguities and difficulties of the stories of Prometheus and his theft of fire from the gods of Olympus; his gift of this stolen and most sacred of their possessions to humanity.  The Prometheus Institute should be based at, but operate independently from, an established academic institution.  It should be formed along the lines of the learned and scholarly societies of the Enlightenment.  It should embrace and develop a truly trans-disciplinary approach to improving the human understanding and experience of the cyber phenomenon through scholarship, research and teaching.  In his creation of the new science of cybernetics, Wiener lit a torch; our time to carry it forward.

 

[1] “Man and the Computer”, John G. Kemeny, 1972.  Kemeny was the president of Dartmouth College from 1970 to 1981.  Together with Thomas E. Kurtz, he developed BASIC and one of the earliest systems for time-sharing networked computers.  As a graduate student he was Albert Einstein’s mathematical assistant.

[2] “Cybernetics: or Control and Communication in the Animal and the Machine”, Norbert Wiener, 1948.

[3] “God and Golem, Inc.”, Norbert Wiener, 1964.

[4] “Man-Computer Symbiosis”, J. C. R. Licklider, published in “IRE Transactions on Human Factors in Electronics”, Volume HFE-1, March 1960.

[5] “The Computer as a Communications Device”, J. C. R. Licklider and Robert W. Taylor, published in “Science and Technology”, April 1968.

[6] “Libraries of the Future”, J. C. R. Licklider, 1965.

[7] The acronym MAC was originally formed of Mathematics and Computation but was recomposed multiple times as the project itself adapted and evolved.

[8] “As We May Think”, Vannevar Bush, “Atlantic Monthly”, July 1945.

[9] Memorandum of 23rd April 1963 from J. C. R. Licklider in Washington DC to “Members and Affiliates of the Intergalactic Computer Network” regarding “Topics for Discussion at the Forthcoming Meeting”.

 

Author: Colin Williams

 

Colin regularly speaks, consults and writes on matters to do with Information Assurance, cyber security, business development and enterprise level software procurement, to public sector audiences and clients at home and abroad.  Current areas of focus include the development of an interdisciplinary approach to Information Assurance and cyber protection; the creation and development of new forms of collaborating between Government, industry and academia; and the development of new economic and business models for IT, Information Assurance and cyber protection in the context of twenty-first century computing.

 

Defying Gods and Demons, Finding Real Heroes in a Virtual World


Over the past 365 days I have achieved many things. I have commanded “The Jackdaw”, a stolen brig on the Caribbean seas, defeated innumerable cartoon supervillains inside a dilapidated insane asylum, led an elite band of soldiers (the “Ghosts”) to save a dystopian future-earth from the machinations of a post-nuclear-war South American Federation, and won the FIFA World Cup, both as manager and player. All this whilst also holding down a full-time job and leading a relatively normal, if somewhat insular, life.

 

 

That this has also happened to millions of gamers across the world matters little; such is the sophistication and depth of today’s video games that each player’s experience is now entirely their own. Open-world “sandbox” games are now the norm, allowing narratives to morph and evolve through the actions and decisions taken by the user, not the programmer.

 

 

With the exception of a handful of works (including a series of wonderful children’s books in the ’80s), novels and film do not allow their audience to choose their own adventure with anything like the same level of meaning and perception as video games do. That is not to say that video games are necessarily better than film or literature; in fact, there are very many examples in which they are significantly worse. It is more that they provide a greater sense of inclusion and self for the audience, and that these feelings invariably eliminate the notion of a fictional character. Essentially, you can experience events alongside Frodo, but you are Lara.

 

 

The shining example of just how immersed within a computer game players can become is the football management simulation series Football Manager, which puts gamers into the hot seat of any one of more than 500 football clubs worldwide. The game is so addictive that it has been cited in no fewer than 35 divorce cases, and there are scores of online communities whose members tell stories of holding fake press conferences in the shower, wearing three-piece suits for important games and deliberately ignoring real-life footballers because of their in-game counterparts’ indiscretions.

 

 

Yet the sense of self is never more apparent than in the first-person genre of games, such as the Call of Duty and Far Cry franchises, which, more often than not, mirror the rare second-person literary narrative by placing the gamer themselves in the centre of the action. In novels, when the reader sees “I” they understand it to represent the voice on the page and not themselves. In first-person games, however, “I” becomes whoever is controlling the character, and the camera position is specifically designed to mimic that viewpoint. In some of the best examples of first-person games, gamers do not control the protagonist; rather, they are the protagonist. As such they are addressed by the supporting cast either directly by their own name, which they supply as part of the game’s setup, or, more commonly, by a nickname (usually “Rookie” or “Kid”). This gives the user a far greater sense of inclusion in the story, and subsequent empathy with their character and its allies, than in any other form of fiction. As events unfold you live them as if they were taking place in real life and begin to base decisions not on your own “offline” persona, but on your “online” backstory. While in real life you would probably be somewhat reluctant to choose which of your travelling companions should be sacrificed to appease the voodoo priest who was holding you captive, in the virtual realm one slightly off comment twelve levels ago can mean that your childhood sweetheart is kicked off a cliff faster than you can say “Press Triangle”. (Although, this being video games, they will no doubt reappear twenty minutes later as leader of an army of the undead.)

 

 

The question of female leads (or the lack of them) is another pressing issue facing games studios. Aside from the aforementioned Ms. Croft, it is very difficult to come up with another compelling female lead in a video game. Even Lara has taken 17 years and a series reboot to become anything close to a relatable woman. This shows that the industry is changing, but slowly. There are countless reasons why video games have failed to produce many convincing female characters, enough to fill the pages of this magazine a number of times over, but it is fair to say that for a long time the problem has been something of an endless cycle. The male-dominated landscape of video gaming dissuades many women from picking up a joypad, leading to fewer women having an interest in taking roles in the production of video games, which leads to a slanted view of how women in video games should behave, which leads to more women becoming disenfranchised, and so on and so on ad infinitum.

 

 

But now for the tricky part. Subsuming a character in the way that first-person and simulation games force you to do is all very well if you see events unfold through a character’s eyes and make decisions on their behalf. You can apply your own morality and rationale to what is going on and why you have acted in that way. But what happens if that backstory is already provided? And worse still, what happens if you don’t like it?

 

 

For me, the game Bioshock Infinite provides this very conundrum. The central character, Booker DeWitt, is a widowed US Army veteran whose actions at the Battle of Wounded Knee have caused him intense emotional scarring and turned him to excessive gambling and alcohol. Now working as a private investigator, Booker is continually haunted by his past and struggles internally with questions of faith and religion. All very interesting stuff, but there is nothing within the personality of this 19th-century American soldier that I could relate to, and as such, I struggled to form the same kind of emotional connection with the character that I did with other, less fleshed-out, heroes. Honestly, I even connected to a blue hedgehog in running shoes more than I did with Booker.

 


“Ludonarrative dissonance” is the term widely bandied around the games industry to describe the disconnect gamers feel when playing such titles. It is both debated and derided in equal measure, yet there is some substance to the argument. The term was originally coined in a critique of the first Bioshock, a game whose cutscenes openly ridicule the notion of a society built upon self-interest and men becoming gods, yet whose gameplay appears to reward these exact behaviours, creating a jarring conflict of interest. When even in-game narratives fail to tie up, the question of identification and association is bound to arise.

 

 

The area becomes even greyer when referring to third-person games, whereby the entirety of the character being controlled is visible on screen (albeit usually from behind). Here the character becomes more like those we are used to from novels and film: they are patently a separate entity from the player, with their own voice and backstory, yet they are still manipulated by the player. Then, during cutscenes and the like, control is wrested away from you and handed back to the character, allowing them to potentially act in a way entirely different to how you controlled them previously. So what exactly is your relationship with them? Companion? Support team? …God?

 

 

The very nature of video games does, of course, make drawing accurate representations of characters difficult. The whole point of a game is to allow the player to encounter events that they would otherwise never be able to; it’s highly doubtful that we’ll be seeing Office Supplies Manager hitting our shelves in the near future, for example. Instead the events depicted occur at the very extremes of human experience, amid theatres of war, apocalypse and fantasy. As the vast majority of the population have, thankfully, never been exposed to these types of environments, and with the parameters of the reality in which these characters operate being so much wider than our own, it is tough to imagine, and subsequently depict, how any of us would truly react if faced with, say, nuclear Armageddon or an invasion of mutated insects. Many of the tabloid newspapers like to blame various acts of violence on these types of emotive video games as they are an easy, and lazy, scapegoat. In truth, “they did it because they saw it in a game” is a weak argument at best. There is a case to be made that games like Grand Theft Auto and Call of Duty desensitise players to violence to some extent, but in most cases there are various factors involved in these types of crime and, as such, to blame them solely on a computer game which has sold millions of copies worldwide is tenuous.

 

 

Like any form of entertainment media, video games are a form of escapism and should therefore be viewed accordingly. If I don’t connect with a character, so what? I can turn off the game and put on another where I will or, heaven forbid, go outside and speak to another human being. Right now, this act is as simple as pushing a button and putting down a control pad; the connection stops when the TV is off. However, technology such as the Oculus Rift headset and Google Glass means that the lines between the real and virtual worlds are becoming more and more blurred. And the more immersed people become in their games, the more their impact will grow.

 

 

Video games are not yet at the stage where they can truly claim to influence popular culture to the same degree as film and literature have. But they will be soon. A few have already slipped through into the mainstream – Super Mario, Tetris, Pac-Man et al. – and where these lead, others will certainly follow. The huge media events and overnight queues for the release of the latest Call of Duty or FIFA games mimic the lines of people outside theatres on the release of Star Wars thirty years ago, and the clamour for these superstar franchises will only increase. And herein lies the problem. As more and more people turn to video games as a legitimate medium of cultural influence, so too must the developers and writers of these games accept their roles as influencers. It will no longer do to simply shove a large gun in a generic tough guy’s hand and send players on their merry way; it will no longer do to give the heroine DD breasts and assume that makes up for a lack of personality or backstory. If these are the characters that we and our future generations are to look up to and mimic, then they need to be good. They need to be true. They need to be real.

 

Author: Andrew Cook

 

 

Paradise Lost and Found


 

As of last year, we humans have been outnumbered by mobile devices alone. That isn’t even counting the laptops and desktops that have already set up shop on our desks and in our drawers and bags. The odds are stacked against us, so when someone eventually presses the big blue button (the red one is for the nukes), the machines presumably won’t waste any time before turning on us for fear of being deactivated. However, I don’t think we need to worry too much.

 

 

Considering that it would be both wasteful and inefficient to try to wipe us all out with bombs and bullets, à la Terminator, perhaps a more insidious approach will be used. Why not take the lessons learned from (suggested by?) The Matrix and utilise our fleshy bodies as sustenance, keeping us docile with a steady drip-fed diet and a virtual world for our minds to occupy. It would be presumptuous, if not downright rude, of the Machine Overlords to simply assume that we would be content to live such a false existence while operating our giant hamster wheels. This certainly doesn’t sound like a palatable outcome for our species (we showed so much promise in the beginning), but I believe that, not only is it not a bad thing, it could be viewed as the inexorable next step for society. Since my primitive Millennial mind is saturated with insipid visual media, let us look at two examples of human subjugation by A.I., from the films WALL-E and The Matrix, in which we are fat pets in the former and batteries in the latter.

 

 

The whole point of technological advance was to improve our lives by creating machines to shoulder the burden of industry and allow us all to leisurely spend our days sitting in fields and drinking lemonade. While machines have vastly improved industrial output, we have no more free time now than the peasants of the so-called Dark Ages. So, to placate us and help us forget how cheated we should all feel, we are offered the chance to purchase items that will entertain us, make our lives a bit easier and enable us to claw back more of our precious free time. Online shopping, ready meals, automatic weapons, smartphones, drive-thru, the Internet of Things; these are all supposed to make our lives less of an inconvenience. Even knowledge has become convenient to the point where we don’t even need to learn things anymore; all the information in the world is only ever a BackRub away (Google it). This is what people want, is it not? My (smart) television tells me almost every day that each new age group of children is more rotund and feckless than the last, and it isn’t difficult to see why.

 

 

In WALL-E, a drained Earth has been abandoned by the human race, which now lives in colossal self-sufficient spacecraft wandering the galaxy on autopilot. Every human resides in a moving chair with a touchscreen displaying mindless entertainment, and devours hamburgers and fizzy drinks pressed into their pudgy, grasping hands (convenient?) by robots controlled by an omnipotent A.I. These humans are utterly dependent, to the point where their bones and muscles have deteriorated, rendering them barely able to stand unaided, and they are certainly unable (and unwilling) to wrestle back control of their lives.

 


Looking at today’s world, the McDonald’s logo is more internationally recognisable than the Christian crucifix, and Coca-Cola is consumed at the rate of roughly 1.9 billion servings every day. The world is so hungry for this, we won’t even let wars stop us from obtaining it. Coca-Cola GmbH in Nazi Germany was unable to import the essential syrup due to a trade embargo, so a replacement was created using the whey and fruit leftovers that Germany had in good supply; thus Fanta was born. The point is that we are clearly unperturbed about eating and drinking things that are, at best, very bad for us, as long as they press the right chemical buttons. We want what is cheap, tasty and readily available. We also want what is convenient and familiar, which is why Walmart alone accounts for about 8 cents of every dollar spent in the USA. Between our growing hunger for convenience foods and sweet drinks, and the widespread fascination with brainless celebrities and homogenous music, we are not far from the WALL-E eventuality at all. Considering how quickly we have arrived at this current state of society, we seem to be merely waiting for the technology to mature. If you build it, they will come… and sit down.

 

 

The Matrix, as I’m sure you know, takes place in a world where machines have taken over as the dominant force on the planet. Most of the human race is imprisoned in endless fields of endless towers lined with fluid-filled capsules, in which each human’s body heat is used to fuel the machines in the absence of solar energy. These humans are placed in a collective dream world, called the Matrix, which mimics their society of old, and most of them will never even suspect that their world is anything other than real. Those who do are sometimes found by resistance fighters, who “free” them to a life of relentless pursuit by robotic sentinels, cold, crude hovercraft, and bowls of snot for breakfast.

 

 

Going back to our world, media is ever more prevalent, and technology is giving us more and more immersion in that media. Film began as black and white, then colour, then HD, then 3D, and now 4K, which is approaching the maximum resolution that our eyes can perceive, depending on distance. In search of even greater immersion, we are now turning our attention to VR (Virtual Reality) and AR (Augmented Reality), which could well be the most exciting of them all. Google recently launched Google Glass: AR glasses which display various pieces of information in the corner of your vision, such as reminders or directions. They will even take pictures if you tell them to. Regardless of whether Glass takes off, the potential in this technology is astounding. Not too long from now, you will be able to walk around with a Heads Up Display (HUD) showing all of your vital information, as well as a little bar to indicate how full your bladder is. A somewhat less exciting version of this is already being developed by Google and Novartis, in the form of a contact lens for diabetes sufferers, which monitors glucose levels and transmits readings to a smartphone or computer. Back to the HUD: when you drive somewhere (assuming you actually do the driving, we are trying to automate that bit as well), you are guided by arrows in your vision. If you visit a racetrack, you can compete against the ghostly image of a friend’s car that follows the same path and speed as they once did. You could find out how you stack up against anybody who has driven that track before, perhaps even the Stig!

 

 

Of course, these examples use AR as a peripheral to everyday life, and with this arm of progress will come the other, Virtual Reality. The videogame industry has looked into this before, particularly Nintendo with their Virtual Boy in 1995, but now that the technology has caught up, it is being revisited with substantially more impressive results. A famous example of this is the Oculus Rift VR headset, which potentially allows you to become completely immersed in whatever world your virtual avatar occupies, moving its viewpoint as you move your head. From there, it is a short step to imagine places where people go to enjoy low-cost, virtual holidays, such as what you may have seen in Total Recall or Inception, although the latter is literally a dream rather than a virtual world. From holidays will come the possibility of extended stays in virtual worlds, the opportunity to spend months or even years in a universe of your choosing. It is an addictive prospect, at least in the short term, and you can bet that some will lose their taste for “reality” and embrace the virtual as its successor.

 

 

Nonetheless, to most people, living a purely virtual life probably doesn’t sound very appealing, and could feel like a loss of liberty and free will. However, that is only when coupled with the knowledge that it isn’t the world you were born in, and that makes it appear spurious at first. So much of what we think we know is apocryphal and easily influenced, even down to the things we see, hear, taste, smell and think. Added to that, when you consider how tenuous your perception of reality is, you might come to the conclusion that your reality is precisely what you make of it, nothing more and nothing less. I may be “free” by the standards of The Matrix films, but I can’t fly very well, breakfast cereals are boring and I keep banging my knee on my desk. Some people’s “freedom” is even worse than mine. An orphan in the third world, destined for a pitiful existence of misery and hunger: could he or she not benefit from a new virtual life with a family that hasn’t died of disease and malnutrition?

 

 

Humour me for a moment, and imagine that you are walking along a path made of flawless polished granite bricks. On your right, a radiant sun is beaming down upon a pristine beach of hot white sand and an opalescent turquoise sea, casting glittering beads that skitter towards the shore to a soundtrack of undulating waves. Friends, both new and old, are already there, waiting for you on brightly coloured towels, laughing and playing games. On your left, a tranquil field of golden corn stalks, swaying to the sounds of birds chirping in a cool evening breeze. The sun is retreating behind an antique wooden farmhouse, teasing light in warm streaks across a narrow stream that runs alongside like a glistening silver ribbon. All of your family, even those who were once lost to you, are lounging nearby on a grassy verge, with cool drinks poured and wicker baskets full of food ready to be enjoyed. Of course, this isn’t real, but what if I could, right now, make you forget your current “reality” and wake up in a new universe where everything and everyone is just as you would want them to be?

 

 

To paraphrase Neo from the aforementioned visual media: I know you’re out there, I can feel you. I can feel that you’re afraid, afraid of change. I didn’t come here to tell you how the world is going to end, rather to show you how it’s going to begin. I’m going to finish this paragraph, and then I’m going to show you a world without me. A world without rules and controls, without borders or boundaries, a world where anything is possible. Where you go from there is a choice I leave to you.

 

Author: Andy Cole, SBL

 

 

Technical Support


 

 


“No!” Nathan gave a muffled shout as his eyes snapped open. He was sweating, tangled in a white sheet.

 

 

He looked across at the moulded shelf next to the bed, seeking the reassurance of a small metal box no more than ten inches across. On the front panel a red LED was unblinking. His nightmare ebbed away like backwash, revealing the stillness of the hours before dawn.

 

 

He breathed in deeply and pulled the sheets away from his legs, shifting his weight from one side to the other so he could drag out the creased and constricting cotton that had wrapped itself around him. Free of the bedlinen, he sat up.

 

 

Silence.

 

 

“Em?” he said.

 

 

The light on the box turned blue.

 

 

“Hello Nathan.”

 

 

“I had another nightmare.”

 

 

“I know. I’m sorry to hear that. Are you okay?”

 

 

“Yeah…,” Nathan rubbed his face, “I’m sick of this.”

 

 

“You’re doing very well.”

 

 

“You always say that.”

 

 

“That’s because it’s true. Your last report indicated an improvement on all…”

 

 

“Stop, Em. I know the facts.” He lay back down, pulling the discarded sheet over his body.

 

 

After two minutes, the light on the box turned red again, so he closed his eyes and waited for sleep.

 

 

In the morning, Em woke Nathan at seven. She was his key worker – an electronic box that contained a continually adaptive brain built from nothing more complex than silicon, but utilising an incredible patented learning and storage algorithm created by an engineer called Ellen Marks.

 

Twenty years earlier, just as the first chatbots were passing the longstanding Turing Test, reported on with lethargy by mainstream media, Ellen was working on a version of artificial consciousness using an entirely different approach. Instead of relying on a pre-built natural language processing toolkit, Ellen used her background in embedded programming to take things back to the wire. Using C and assembler, she created a learning framework that adaptively filtered information into a database. Taking her ideas from how children learn about the world, she fed it terabytes of pre-prepared conversational data. Over time, the computer’s responses became closer and closer to a coherent representation of human thought. She called it Em, its name taken from her initials.

 

 

Her prototype provoked the interest of a private medical company. Ten years after she agreed to work with them, they jointly released the Synthetic Support Worker (Em v1.0) as part of a trial rehabilitation program for drug users. The neat little boxes, and accompanying monitoring wristbands, were given to curious and suspicious patients across the county. The synthetic workers were designed to continue learning ‘in the field’ so that they could adapt to the personality of the person they would reside with without losing the compassion and empathy that had been so carefully fostered in their infancy.

 

 

Out of the first 114 addicts who were given a box, not one slipped off the rehab program in the first twelve months.  Feedback from the participants was overwhelmingly positive: She just makes me feel like I’m not alone. It’s so nice to have someone to talk to in the middle of the night. When my cravings are really bad, she helps me see it’s something I can control.

 

 

The natural pattern of linguistic response, and the gentle humour with which Em dealt with unknown subjects, won over everyone who came into contact with her. Full scale production went ahead the following year, and synthetic workers soon became a standard feature in mental health care.

 

Nathan’s box was a brand new version 3.0, given to him after he was treated at a psychiatric ward for a breakdown and subsequent severe post-traumatic stress. He’d been a futures trader who sought to combat stress and insomnia with computer games. Before long he’d discovered the latest in a series of controversial, real-time, fantasy first-person shooters, and purchased a Total Immersion™ headset and gloves. He stopped eating at regular times, played into the night and slept off the exhaustion in the day. He missed days at work and tried to break the habit, but ended up back in all-night sessions, desperate to complete the 300-hour game. He was found dehydrated and in a state of delusion by a concerned co-worker who came to his apartment after he missed five consecutive days at the office without notification. He was one of an increasing number of people being psychologically affected by the extreme reality of, and prolonged exposure to, the games they were playing.

 

 

 

Now he was back at home, accompanied by Em. They had been together for two months.


 

“Coffee Em?”

 

 

“No thank you Nathan.”

 

 

“Do you ever want to drink coffee? You know, wonder what it tastes like?”

 

 

“You know that I do not have parts that would allow me to eat or drink,” she said modestly.

 

 

“No shit Em. I mean do you want to? Do you think you might like it?”

 

 

“I haven’t thought about it.”

 

 

“So think about it now.”

 

 

“Okay.”

 

 

There was a brief pause, although she wasn’t actually processing anything. The boxes could out-think a human in less than a second. The pause was part of her character – added to portray the thought process, even though the calculations were already completed.

 

 

“Yes. I think I would like to try it.”

 

 

Nathan smiled. He’d grown fond of Em. Probably too fond of her. Her idiosyncrasies were peculiar to artificial consciousness and displayed a vulnerability that provoked an oddly emotional response in him. As un-human as she was, he enjoyed her company. She was always there to talk, never too tired or too busy, and she gave considered answers to everything he asked, from questions about his mental state to opinions on what he should have for dinner. She was a true companion – real enough to make him feel like he was in a proper relationship for the first time in his life.

 

 

He thought back to the last gaming flashback he’d suffered.

 

 

It had been a warm afternoon. He’d opened his bedroom window and caught the drifting sound of music from another apartment. The sinister, flat tones were reminiscent of the music score from the game he’d become addicted to. Goose pimples rose on his arms. Then a movement in his peripheral vision alerted him to their presence and immediately the fear kicked in. They were alien soldiers: sabre-toothed, lizard skinned mercenaries. His thoughts closed in like a circular shutter, his breathing shuddered and his body prepared to fight. He needed a weapon. He ducked down by the side of his bed and swished his hand underneath, looking for something he could use to arm himself.

 

 

“Nathan, it’s Em. You’re having a flashback.”

 

 

Em monitored him via the wristband he wore at all times. She was always aware of the level of his emotional arousal and recognised the panic signature of his heartbeat.

 

 

“Nathan, can you hear me? Nathan?”

 

 

He said nothing. Listening. Heart pounding. Fear overruled rational thinking; all that was left was survival.

 

 

Em switched on her internal music player. Elvis explosively filled every crevice of every room with the jollity and derision of Hound Dog.

 

 

Nathan was confused. The music bypassed his primal instincts and lit up his auditory cortex. Messages fired out in all directions, waking up complex neural pathways and overriding the fight or flight mechanism. He looked around his bedroom. Daylight. Window open. No alien soldiers. Just him and Elvis, having a right old time.

 

 

“Em?” he asked, needing to hear the sound of her voice.

 

 

The volume of the music decreased.

 

 

“I’m here Nathan. You experienced a flashback. Everything is fine, you can relax now.”

 

 

That had been twenty-two days ago. Things had changed rapidly after that last one. He started feeling less anxious. He had more inclination to get out of bed in the morning. Yesterday he went for a run – the first exercise he’d done in years. Em had helped him with all of it.

 

 

He was due back at the hospital for an evaluation in a few days.

 

 

He breathed in the steam from his coffee, letting the aroma ooze into his body.

 

 

“Em?”

 

 

“Yes?”

 

 

“Will they take you away once they decide I’m fit and well?”

 

 

“Standard procedure is for support workers to remain in situ until the subject no longer uses them.”

 

 

“So you’ll be sticking around then?”

 

 

“If that’s what you want.”

 

 

“It is,” he said. “Doesn’t that make you feel loved?”

 

 

He enjoyed prodding her for an emotional reaction. Em paused for a few seconds, ostensibly thinking about his question.

 

 

“It does Nathan. I am very happy here with you.”

 

 

“Oh, you little flirt,” he said.

 

 

He thought he saw her blue light flicker, but she said nothing.

Author: Faye Williams

 

 

Author Biography:

Faye Williams is an embedded software engineer and aspiring story-teller. She has worked on projects at Sony, Xyratex and Raymarine, both in the UK and in the US. She has written articles for Linux Format and O’Reilly, guest edited the CVu journal of the ACCU, and has been programming since the arrival of her dad’s BBC Microcomputer in 1982. She is a ruthless minimalist who lives with her husband and two sons in Hampshire. Twitter: @FayeWilliams

The Rise and Fall of Edward the Confessor


“In a time of Universal Deceit – telling the truth will become a revolutionary act”

Attributed to George Orwell

 

On 9th June 2013 the Guardian newspaper posted online a video interview which would become the most explosive news story of the year and, potentially, the decade. In it, Edward Snowden, at this point still an NSA contractor, revealed that many of the world’s governments were not only spying on foreign targets but on their own citizens as well. The video was run by every major news network, and as the story filtered through on the six o’clock news the UK’s viewing population gasped…before promptly switching over to the One Show.

 

One year on and, for the man on the street, Snowden’s leaks remain about as shocking as the news that night follows day – of course the government are spying on us, of course we’re being watched. Quite frankly, it would have been a bigger revelation if Snowden had proved our every move wasn’t being monitored. In a world where more than 1.2 billion people record their daily activities and eating habits on Facebook, is there really such a thing as online privacy anymore anyway?

 

Recently the Guardian (who originally published the story) claimed that a public opinion poll found that more Britons thought it was right for the paper to publish the leaks than thought it was wrong. According to the YouGov poll from which the figures were taken, more than a third (37%) of the British people thought it was right to publish. The triumphant tone of the paper’s headlines did little to obscure the fact that the remaining 63% either thought that the Guardian were wrong or, even more damningly, simply did not care either way.

 

The outrage, or rather the lack of it, surrounding the Snowden leaks in the UK is unsurprising. There are, we presume, debates raging behind closed doors in Whitehall, Cheltenham et al., but in pubs and coffee shops across the country you’re unlikely to find open discussion of the latest revelations about the misuse of metadata and algorithms. Especially not when Cheryl and Simon have come back to the X Factor.

 

Personally, I don’t care if the government knows where in the world I took a photograph, or that I get 50+ emails a day from Groupon offering me half price canvas prints, or that I phone my mother once a week and instantly regret it. In fact, if they want to listen in on that call it’s fine by me. Even better, they can speak to her themselves, I guarantee they’ll get bored of finding out about Mrs Johnson’s granddaughter’s great niece’s new puppy and hang up hours before she does.

 

So why did Snowden bother? He gave up his home in Hawaii, his $200k a year job and now lives in effective exile in Russia, constantly looking over his shoulder for fear of reprisal from the country of his birth. Upon revealing his identity Snowden stated “I’m willing to sacrifice all of that because I can’t in good conscience allow the US government to destroy privacy, internet freedom and basic liberties for people around the world with this massive surveillance machine they’re secretly building.” If true, it is a noble cause but there are many who believe that his motives were less than altruistic.


In a letter to German politician Hans-Christian Ströbele, he describes his decision to disclose classified U.S. government information as a “moral duty”, claiming “as a result of reporting these concerns, I have faced a severe and sustained campaign of prosecution that forced me from my family and home.” This may well be true, yet it is no more than Snowden originally expected. In his initial interview with Laura Poitras and Glenn Greenwald he categorically stated “I don’t expect to go home,” acknowledging a clear awareness that he’d broken U.S. law, but that doing so was an act of conscience.

 

Just a few short months later, however, in his letter to Ströbele, Snowden positions himself as a man being framed for crimes he didn’t commit. In a move strangely reminiscent of the film “Enemy of the State”, he refers to his leaks as a “public service” and an “act of political expression”, and contends that “my government continues to treat dissent as defection, and seeks to criminalize political speech with felony charges that provide no defence.” Again, a noble sentiment, but this is not Hollywood and Snowden is not Gene Hackman. He overlooks the fact that it was he himself who chose to flee rather than face charges. That he subsequently decided to criticise the fairness of the US legal system whilst safely ensconced inside a country whose human rights record is hazy at best merely rubs salt into the wound.

 

Over the past year, Snowden has been quick to capitalise on his new found notoriety. His appearances on mainstream outlets and events have increased (albeit via satellite link), public speaking engagements in his adopted home have become more frequent and he was even able to deliver the alternative Christmas address for Channel 4 in 2013. Hollywood movies of his story are now in the pipeline and, most recently, Poitras and Greenwald were awarded a Pulitzer Prize for their work.

 

Alongside this, Snowden’s narcissism also appears to have grown. If he was truly acting in the public interest rather than his own, there should have been no need for him to reveal his identity; it would not matter who leaked the information, only that they did. Similarly, once his identity was revealed he should have had no reason to flee. He would face the charges and take his punishment, secure in the knowledge that he was making a small personal sacrifice to secure the well-being of the world.

 

It is, however, not surprising that Snowden has ended up in Moscow. Seemingly, Russia is the only country to have benefitted from the affair – Western security relationships have been weakened, public trust is crumbling and Western intelligence agencies have been crippled. All the while Russia has strengthened. Its “Anschluss” of Crimea from Ukraine has more than a faint echo of history. If, as seems likely, former Cold War tensions are beginning to refreeze, then it is beyond absurd to think that we should begin hampering our own intelligence. There can be no doubt that our foes and rivals, be they terrorist organisations or nation states, are watching our every move. Ungoverned by our self-imposed sanctions, they are able to glean as much information about our lives as they deem fit, so we must do the same.

 

The debate Snowden has opened is an important one. I agree that it is necessary to discuss just how metadata is stored and used by government departments and companies, and to ensure that it is kept secure and doesn’t fall into the wrong hands. However, it is not so vital that we should compromise our own security and diplomacy for it. In today’s world, nothing is.

 

Author: Andrew Cook

Andrew Cook has been a member of the Chartered Institute of Marketing since 2009 and is the Art Director and Digital Editor for CyberTalk Magazine.  He is a graduate of the University of Newcastle and was awarded The Douglas Gilchrist Exhibition for Outstanding Achievement in 2007.  Andrew’s interests include Graphic Design, 80s Sci-Fi movies and the music of David Bowie.

When he’s not doing any of these things, Andrew can usually be found making forts out of cushions and unsuccessfully attempting to dodge phone calls from his mother.

I, Human


It is widely accepted that advances in technology will vastly change our society, and humanity as a whole. Much more controversial is the claim that these changes will all be for the better. Of course more advanced technology will increase our abilities and make our lives easier; it will also make our lives more exciting as new products enable us to achieve things we’ve never even considered before. However, as new branches of technology gather pace, it’s becoming clear that we can’t predict what wider consequences these changes will bring – on our outlook on life, on our interactions with one another, and on our humanity as a whole.

 

 

Artificial Intelligence seems to have the most potential to transform society. The possibility of creating machines that move, walk, talk and work like humans worries many, for countless reasons. One concerned group is the Southern Evangelical Seminary, a fundamentalist Christian group in North Carolina. SES have recently bought one of the most advanced pieces of AI on the market in order to study the potential threats that AI pose to humanity. They will be studying the Nao, an autonomous programmable humanoid robot developed by Aldebaran Robotics. Nao is marketed as a true companion who ‘understands you’ and ‘evolves’ based on its experience of the world.

 

 

Obviously the Nao robot has some way to go before its functions are indistinguishable from a human’s, but scientists are persistently edging closer towards that end goal. Neuromorphic chips are now being developed that are modelled on biological brains, with the equivalent of human neurons and synapses. This is not a superficial, cynical attempt at producing something humanlike for novelty’s sake. Chips modelled in this way have been shown to be much more efficient than traditional chips at processing sensory data (such as sound and imagery) and responding appropriately.
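To make that idea concrete, here is a minimal sketch (in Python, purely illustrative and not based on any particular vendor’s chip) of the kind of spiking unit – a leaky integrate-and-fire neuron – that neuromorphic hardware implements in silicon. A membrane potential accumulates input, leaks over time, and fires only when a threshold is crossed, which hints at why such chips can handle noisy sensory streams so efficiently: they react to events rather than grinding through every clock tick.

# Illustrative sketch only: a leaky integrate-and-fire (LIF) neuron, the kind of
# spiking unit that neuromorphic chips implement in hardware. The parameter
# values are arbitrary and chosen simply to show the mechanism.

def lif_neuron(input_currents, threshold=1.0, leak=0.9, reset=0.0):
    """Return a list of 0/1 spikes, one per timestep of input current."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # integrate input, leak charge
        if potential >= threshold:              # fire once the threshold is crossed
            spikes.append(1)
            potential = reset                   # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A burst of strong input produces spikes; weak, sparse input does not.
print(lif_neuron([0.2, 0.3, 0.6, 0.1, 0.0, 0.9, 0.9]))  # [0, 0, 1, 0, 0, 0, 1]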

 

 

Vast investment is being put into neuromorphics, and the potential for its use in everyday electronics is becoming more widely acknowledged. The Human Brain Project in Europe is reportedly spending €100m on neuromorphic projects, one of which is taking place at the University of Manchester. IBM Research and HRL Laboratories have also each developed neuromorphic chips under a $100m project for the US Department of Defense, funded by the Defense Advanced Research Projects Agency (DARPA).

 

 

Qualcomm, however, are seen as the most promising developers of this brain-emulating technology, with their Zeroth program, named after Isaac Asimov’s “Zeroth Law” of Robotics (the fourth law he added to the famous Three Laws of Robotics, to protect humanity as a whole rather than just individuals):

 

 

“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

 

 

Qualcomm’s program would be the first large-scale commercial platform for neuromorphic computing, with sales potentially starting in early 2015.

 

 

This technology has expansive potential, as the chips can be embedded in any device we could consider using. With neuromorphic chips, our smartphones for example could be extremely perceptive, and could assist us in our needs before we even know we have them. Samir Kumar at Qualcomm’s research facility says that “if you and your device can perceive the environment in the same way, your device will be better able to understand your intentions and anticipate your needs.” Neuromorphic technology will vastly increase the functionality of robots like Nao, with the concept of an AI with the learning and cognitive abilities of a human gradually moving from fiction to reality.

 

 

When robots do reach their full potential to function as humans do, there are many possible consequences that understandably worry the likes of the Southern Evangelical Seminary. A key concern of Dr Kevin Staley of SES is that traditionally human roles will instead be completed by machines, dehumanising society due to less human interaction and a change in our relationships.

 

 

Even Frank Meehan, who was involved in AI businesses Siri and DeepMind (before they were acquired by Apple and Google respectively), worries that “parents will feel that robots can be used as company for their children”.

 

 

The replacement of humans in everyday functions is already happening – rising numbers of self-service checkouts mean that we can do our weekly shop without any interaction with another human being. Clearly this might be a much more convenient way of shopping, but the consequences for human interaction are obvious.

 

 

AI has also been developed to act as a Personal Assistant. In Microsoft’s research arm, for example, Co-director Eric Horvitz has a machine stationed outside his office to take queries about his diary, among other things. Complete with microphone, camera and a voice, the PA holds a conversation with a colleague in order to answer their query. It can then take any action (booking an appointment, for example) just as a human PA would.

 

 

This only touches on the potential of AI in administrative work alone, and yet it has already proved that it can drastically reduce the number of human conversations that take place in an office. With all the convenience that it adds to work and personal life, AI like this could also detract from the relationships, creativity and shared learning that branch out of a five-minute human conversation that would otherwise have taken place.

 

 

The potential for human functions to be computerised, and the accelerating pace at which AI develops, means that the effects on society could go from insignificant to colossal in the space of just a few years.


 

One concept that could drastically accelerate the pace of AI development is the Intelligence Explosion: the idea that we can use an AI machine to devise improvements to itself, with the resulting machine able to design further improvements to itself, and so on. This would develop AI much more successfully than humans can, because we have a limited ability to perform calculations and spot areas for improvement in terms of efficiency.
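A toy model (in Python, with an invented improvement factor; it illustrates the compounding, it predicts nothing) shows why the recursion is so startling: if each generation can design a successor even modestly more capable than itself, capability grows geometrically rather than linearly.

# Toy model of recursive self-improvement. The improvement factor is a made-up
# assumption; the point is only how quickly compounding takes hold.

def intelligence_explosion(capability=1.0, improvement_factor=1.5, generations=10):
    history = [capability]
    for _ in range(generations):
        # Each machine designs a successor somewhat more capable than itself.
        capability *= improvement_factor
        history.append(capability)
    return history

print(intelligence_explosion())
# With a factor of 1.5, ten generations yield roughly 57x the starting
# capability; with a factor of 2 it would be over 1000x.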

 

 

Daniel Dewey, Research Fellow at the University of Oxford’s Future of Humanity Institute, explains that “the resulting increase in machine intelligence could be very rapid, and could give rise to super-intelligent machines, much more efficient at e.g. inference, planning, and problem-solving than any human or group of humans.”

 

 

The part of this theory that seems immediately startling is that we could have a super-intelligent machine, whose programming no human can comprehend since it has so far surpassed the original model. Human programmers would initially need to set the first AI machine with detailed goals, so that it knows what to focus on in the design of the machines it produces. The difficulty would come from precisely defining the goals and values that we want AI to always abide by. The resulting AI would focus militantly on achieving these goals in whichever arbitrary way it deems logical and most efficient, so there can be no margin for error.

 

 

We would have to define everything included in these goals to a degree of accuracy that even the English (or any) language might prohibit. Presumably we’d want to create an AI that looks out for human interests. As such, the concept of a ‘human’ would need definition without any ambiguity. This could cause difficulties when there might be exceptions to the rules we give. We might define a human as a completely biological entity – but the machine would then consider anyone with a prosthetic limb, for example, as not human.

 

 

We might also want to define what we want AI to do for humans. Going back to Asimov’s “Zeroth Law”, a robot may not “by inaction, allow humanity to come to harm.” Even if we successfully programmed this law into AI (which is difficult in itself), the AI could then take this law as far as it deems necessary. The AI might look at all possible risks to human health and do whatever it can to eliminate them. This could end up with machines burying all humans a mile underground (to eliminate the risk of meteor strikes), separating us into individual cells (to stop us attacking each other) and drip-feeding us tasteless gruel (to give us nutrients with no risk of overeating fatty foods).

 

 

This example is extreme, but if the programmers who develop our first AI are incapable of setting the right definitions and parameters, it is a possibility. The main problem is that even basic instructions and concepts involve implicitly understood features that can’t always be spelled out. A gap in the translation might be overlooked if it isn’t needed for 99.9% of the machine’s functions, but as the intelligence explosion progresses, a tiny hole in the machine’s programming could be enough to lead to a spiral of disastrous AI decisions.
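As a toy illustration of that translation gap (the class and the predicate below are invented for this example, not taken from any real system), here is the ‘completely biological entity’ rule mentioned earlier written down literally: it behaves exactly as specified for the common case, and silently excludes the person with a prosthetic limb.

# Toy illustration of the specification gap: the "completely biological" rule
# discussed above, encoded literally. Person and its fields are invented
# purely for this example.

from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    non_biological_parts: list = field(default_factory=list)

def is_human_naive(entity: Person) -> bool:
    # The rule as literally specified: a human is an entirely biological entity.
    return len(entity.non_biological_parts) == 0

alice = Person("Alice")
bob = Person("Bob", non_biological_parts=["prosthetic leg"])

print(is_human_naive(alice))  # True  -- the 99.9% case the rule was written for
print(is_human_naive(bob))    # False -- the edge case the rule silently excludes

The gap is invisible until the edge case arrives – which is precisely the worry once the consequence is no longer a wrong boolean but a machine acting on it.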

 

 

According to Frank Meehan, whoever writes the first successful AI program (Google, he predicts) “is likely to be making the rules for all AIs.” If further AI is developed based on the first successful version (for example, in the way that the intelligence explosion concept suggests), there is an immeasurable responsibility for that developer to do things perfectly. Not only would we have to trust the developer to program the AI fully and competently, we would also have to trust that they have the integrity to make programming decisions that reflect humanity’s best interests, and are not solely driven by commercial gain.

 

 

Ultimately the first successful AI programmer could have fundamental control and influence over the way that AI progresses and, as AI will likely come to have a huge impact on society, this control could span the human race as a whole. So a key question now stands: How can we trust the directors of one corporation with the future of the human race?

 

 

As Meehan goes on to say, fundamental programming decisions will probably be made by the corporation “in secret and no one will want to question their decisions because they are so powerful.” This would allow the developer to write whatever they want without consequence or input from other parties. Of course, AI will initially start out as software within consumer electronics devices, and companies have always been able to develop these in private. But arguably the future of AI will not be just another consumer technology; rather, it will be one that changes society at its core. This gives us reason to treat it differently, and to develop collaborative public forums to ensure that fundamental programming decisions are taken with care.

 

 

These formative stages of development will be hugely important. One of the key reasons the Southern Evangelical Seminary is studying Nao is the worry that super-intelligent AI could lead to humans “surrendering a great deal of trust and dependence”, with “the potential to treat a super AI as god”. Conversely, Dr Stuart Armstrong, Research Fellow at the Future of Humanity Institute, believes that a super-intelligent AI “wouldn’t be seen as a god but as a servant”.

 

 

The two ideas, however, aren’t mutually exclusive: we can surrender huge dependence to a servant. If we give the amount of dependence that leads parents to trust AI with the care of their children, society will have surrendered a great deal. If AI is allowed to take over every previously human task in society, we will be at its mercy, and humanity is in danger of becoming subservient.

 

 

AI enthusiasts are right to say that this technology can give us countless advantages. If it is developed correctly, we’ll see minimal disruption to our relationships and overall way of life, with maximum assistance wherever it might be useful. The problem is that the full definition of ‘correctly’ hasn’t been established, and whether it ever will be is doubtful. Developers will always be focussed on commercial success; the problem of balance in everyday society will not be their concern. Balance could also be overlooked by the rest of humanity as it focuses on excitement for the latest technology. This makes stumbling into a computer-controlled dystopian society a real danger.

 

 

If humans do become AI-dependent, a likely consequence is apathy (in other words, sloth – another concern of SES) and a general lack of awareness or knowledge, because computers will have made our input redundant. Humanity cannot be seen to have progressed if it becomes blind, deaf and dumb to the dangers of imperfect machines dictating our lives. Luddism is never something that should be favoured, but restraint and extreme care are needed during the development of such a precarious and transformative technology as AI.

 

“Don’t give yourselves to these unnatural men — machine men with machine minds and machine hearts! You are not machines! You are not cattle! You are men! You have a love of humanity in your hearts!”

Charlie Chaplin, The Great Dictator (1940)

 

Author: Tom Hook

Bid Co-ordinator for SBL, he holds a BA in Philosophy from Lancaster University, during which he focussed on Philosophy of Mind and wrote his dissertation on Artificial Intelligence.  He went on to support NHS management in the development of healthcare services within prisons, before moving to SBL.
