Category Archives: Cyber Security

The Forgotten Architects of the Cyber Domain



“Very great and pertinent advances doubtless can be made during the remainder of this century, both in information technology and in the ways man uses it.  Whether very great and pertinent advances will be made, however, depends strongly on how societies and nations set their goals”

“Libraries of the Future”, J. C. R. Licklider, 1965


July 1945.  The Second World War is drawing to its final close.  Although the USA will continue to conduct operations in the Pacific theatre until a radio broadcast by Emperor Hirohito on August 15th announces Japan’s surrender, the outcome is assured.  Victory in Europe had already been achieved.  Germany had surrendered, unconditionally, to the Allies at 02:41 on May 7th.  By July, total Allied victory was an absolute inevitability.  Peace loomed.


The seeds of the Cold War had already been planted, perhaps as early as the Nazi-Soviet Non-Aggression Pact of 1939; certainly by the frantic race to Berlin between the USSR from the East, and the UK and the USA from the West, following the Normandy landings and the Allied invasion of Europe.  From these seeds, the roots of the continuing, global, and existential struggle that was to define and shape the human story for what remained of the twentieth century were already growing, and at a remarkable rate.  However, it would not be until March 5th 1946 that Churchill would declare, “from Stettin in the Baltic to Trieste in the Adriatic an iron curtain has descended across the Continent”.


In July 1945, the deep terrors of the Cold War were, for most of humanity, unforeseen and unimagined.  In July 1945, many of humanity’s finest minds were compelled to contemplate the ruins of the world and, more importantly, the new world that they would make to replace and improve that which had been destroyed.  Fascism had been defeated.  Democracy had prevailed.  A high price had been paid by victor and vanquished alike.  Cities, nations and empires lay in the ruins of victory as well as of defeat.  Amongst the victors, elation was tempered with exhaustion.  The UK economy in particular had been dealt a beating from which it would never recover.


The world had witnessed the capacity of human science and technology to mechanise and industrialise wholesale slaughter of soldiers and civilians alike; had watched the mass production of death played out on a global stage.  War, and genocide, had been refined by western civilisation to a grotesquely clinical exercise in accountancy and modern management.  The legitimacy of the European imperial project perished in the barbarity and horror of Auschwitz and Stalingrad.


In order to secure this victory, the entirety of the will, energy and treasure of the greatest nations on Earth had been devoted to one single aim: victory.  This had been a total war in every sense of the word.  Now that victory had been attained, what next?  What was the new world, remade from the ruins of the old, to look like?


In the Summer of 1945, there was a sense that great things must now be done in order to ensure that the new world would be one worthy of the sacrifices that had been made; a peace worth the price.  All of this had to have been for something.  Amongst the great minds of humanity a sense had grown of the power of human agency and spirit to create great effect.  These were the minds that had harnessed the power of the atom, through technology, to the human will.  These were the minds that had created machines of vast power and sophistication to make and break the deepest of secrets.  These were the minds that sensed the expectations of history upon them.  It was their responsibility, individually and collectively, to secure the peace just as it had been to win the war.  It was their duty to enhance and improve the human condition.  And, they knew it.


In the July 1945 issue of the “Atlantic Monthly”, the man who had spent his war directing and channelling the scientific research required to secure victory in arms responded to the imperatives of the peace, and the call of history, with the publication of the seminal paper “As We May Think”.  As first the chairman of the National Defense Research Committee, and then the director of the Office of Scientific Research and Development, Vannevar Bush was responsible for directing and co-ordinating the prodigious and ground-breaking research required to enable the prosecution of total war on an industrial scale.


In his paper, Bush openly acknowledges that, for scientists, “it has been exhilarating to work in effective partnership” in order to attain a “common cause”.  He poses the question: “what are the scientists to do next”, now that the exhilaration of the war has ebbed away?  His answer is that the scientists of the peace must turn their attentions to making real the radical transformation in the relationship between humanity and information promised by the technology developed at such pace and cost during the war.  For Bush this is about far more than computers as great calculators for scientists; it is “a much larger matter than merely the extraction of data for the purposes of scientific research; it involves the entire process by which man profits by his inheritance of acquired knowledge”.


Bush proposed the creation of a device to extend and enhance the human memory; a machine to aid and augment the human powers of cognition, imagination and creation; a computer to work in symbiosis with the human.  He proposed a device that would operate as human thought does, “by association”.  For Bush, the human mind can, “with one item in its grasp”, link “instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain”.  He describes “a future device for individual use, which is a sort of mechanized private file and library”.  He gives it a name; the “memex”.  The memex extends the human memory and the mind; “it is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility”.


He gives the new thing a form; it is “a desk”.  For the human, it is a “piece of furniture at which he works”, rather than a mysterious, inaccessible, gargantuan, monster-machine filling an entire room.  It is moulded around human interaction; it has “slanting translucent screens on which material can be projected for convenient reading”.  For data entry and control there is both a “keyboard”, and “provision for direct entry” via a “transparent platen” upon which can be “placed longhand notes, photographs, memoranda”.  These originals can then be “photographed onto the next blank space in a section of the memex film.”  If at any time the user loses the thread of their interaction with the memex, a “special button transfers him immediately to the first page of the index”.


Bush’s paper lays out the essence of the core thinking upon which the World Wide Web was to be constructed.  Bush proved incorrect in his preference for analogue over digital computers.  However, his vision of human interactions with information augmented by symbiotic machines, integrated by design into the associative workings of human cognition, has been made real.  We see this in the algorithms driving today’s search engines; in the concepts and technology of hyperlinking that provide the threads of the Web, built upon the decades of associative and cumulative thought he initiated; and in the principles informing the graphical user interface model of mediating human communications with our machine counterparts.
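Bush’s associative trail is, in modern terms, a user-built linked structure laid over stored records; the conceptual ancestor of the hyperlink.  A purely illustrative sketch of the idea follows — the class and method names are invented for this example, not drawn from Bush’s paper:

```python
class Memex:
    """A toy model of Bush's memex: stored records plus associative trails."""

    def __init__(self):
        self.items = {}    # item name -> stored record
        self.trails = {}   # trail name -> ordered list of item names

    def store(self, name, record):
        # "a device in which an individual stores all his books, records,
        # and communications"
        self.items[name] = record

    def tie(self, trail, *names):
        # Bush: "the process of tying two items together is the important
        # thing" -- items are joined by association, not by hierarchy.
        self.trails.setdefault(trail, []).extend(names)

    def consult(self, trail):
        # Replay a trail of association "with exceeding speed and flexibility".
        return [self.items[n] for n in self.trails.get(trail, [])]


memex = Memex()
memex.store("turing1936", "On Computable Numbers")
memex.store("bush1945", "As We May Think")
memex.tie("origins", "turing1936", "bush1945")
print(memex.consult("origins"))  # ['On Computable Numbers', 'As We May Think']
```

The essential point the sketch makes is structural: the links belong to the user, not to the documents, which is precisely the inversion the Web later realised with the hyperlink.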


Bush’s paper entered an intellectual context already populated with minds turned in the direction of computers and computing.  In particular, the minds of A. M. Turing and John von Neumann.  Turing’s 1936 paper “On Computable Numbers, With an Application to the Entscheidungsproblem” offers clear evidence of the direction of his pre-war thinking towards the possibility of a universal computing machine.  Whilst attending Princeton from 1937 to 1938, Turing encountered von Neumann for a second time, the two having met first when von Neumann was a visiting professor at Cambridge during the third term of 1935.  In 1946, John G. Kemeny had “the privilege of listening to a lecture at Los Alamos by … John von Neumann” in which von Neumann laid out five principles for the design of a computing machine of the future.  Kemeny’s memory has von Neumann’s proposals as: “1, Fully Electronic Computers”, “2, Binary Number System”, “3, Internal Memory”, “4, Stored Program” and, “5, Universal Computer”[1].  Turing’s direction of mind towards the question of machine intelligence is signposted in his lecture to the London Mathematical Society on 20th February 1947.  The world was not to know of Colossus for decades to come.


In 1948, Norbert Wiener published the first of a series of three books in which he announced the creation of a new science; “Cybernetics: or Control and Communication in the Animal and the Machine”.  The foundation text was followed by “The Human Use of Human Beings” in 1950, and the trilogy was completed in 1964 with “God and Golem, Inc.”.  Wiener’s mission in 1948 was to provide a “fresh and independent point of commencement” for “the study of non-linear structures and systems, whether electric or mechanical, whether natural or artificial”[2].  By 1964 he was reflecting on “three points in cybernetics”.  Firstly, “machines which learn”.  Secondly, “machines … able to make other machines in their own image”.  Thirdly, “the relations of the machine to the living being”[3].  Across the span of these texts, Wiener develops the conceptual, philosophical and mathematical framework that unifies and transforms human and machine in the cyber domain of today.


Two years before he joined the newly created Advanced Research Projects Agency in 1962, J. C. R. Licklider had already begun to direct his thoughts towards the “expected development in cooperative interaction between man and electronic computers” that will lead to a “man-computer symbiosis” in which “a very close coupling between the human and the electronic members of the partnership” will “let computers facilitate formative thinking” and “enable men and computers to cooperate in making decisions and controlling complex situations”[4].  By 1968, Licklider predicted with assured confidence that, despite it being “a rather startling thing to say”, nonetheless, “in a few years, men will be able to communicate more effectively through a machine than face to face”.[5]


Three years on from his appointment to the new agency, in 1965, Licklider was commissioned to produce a report into the “Libraries of the Future”.  His task was not to examine new ways to store and retrieve books, but to consider the “concepts and problems of man’s interaction with the body of recorded knowledge” and to explore “the use of computers in information storage, organisation, and retrieval.”  His prediction was that what he called a ‘procognitive’ system would evolve based on digital computers.  Outlandish though it might seem to the readers of the report in 1965, these computers would have “random-access memory”, “content-addressable memory”, “parallel processing”, “cathode-ray-oscilloscope displays and light pens”, “hierarchical and recursive program structures”, “procedure-orientated and problem-orientated languages” and “xerographic output units”.  They would be enmeshed; interconnected through “time-sharing computer systems with remote user terminals”[6].


In 1971 the first e-mail was sent, across ARPANET; a system of networked computers, first operational in 1969, that was a direct realisation of J. C. R. Licklider’s vision.  A system conceived and moulded by human thought and will; the system that stands as the point of genesis of the Internet.  A net that provided the fabric from which Bush’s web could, and would, be woven.  The foundations of a cybernetic system in which Bush’s memex morphs into the universal machine of Turing and von Neumann.


In 1965, whilst at ARPA, Licklider established an applied research programme that laid the foundations for generations of research and development, and postgraduate teaching, in computers and computing.  The programme took years if not decades to bear fruit.  Directly and indirectly, it produced some of the keystone elements of modern computing.  It continues to do so to this day.


The names of the institutions funded by this programme still read like a who’s who of the great and the good in the realms of the teaching and research of computing.  Because of Licklider, the University of California, Berkeley was granted funds to develop time-sharing through Project Genie.  Likewise, the Massachusetts Institute of Technology was enabled to research Machine Aided Cognition, or Mathematics and Computation, or Multiple Access Computer, or Man and Computer, through Project MAC[7].  What was to become Carnegie Mellon University took receipt of six hundred million dollars in order to conduct research into the theory of computer programming, artificial intelligence, the interactions between computers and natural languages, the interactions between humans and computers, and the design of computing machinery.  The Augmentation Research Center within the Stanford Research Institute was tasked with developing technologies to enable components of computers and elements of computer systems to interact.


The birth of Open Source in the 1970s, and the development of the RISC architecture in the 1980s at the University of California, Berkeley, stem from the seeds planted by Licklider.  As does the genesis of social networking, manifest in the Community Memory Project terminal found in Leopold’s Records in Berkeley in 1973.  The use, in 1984, of robots designed by Carnegie Mellon academics in the clean-up of the wreckage and debris from the partial nuclear meltdown at Three Mile Island has the same lineage.  Likewise, the continuing and growing world-leading position in the areas of artificial intelligence and the theories of computation enjoyed by the Massachusetts Institute of Technology.  Similarly, the emergence of the mouse, hyperlinks and the graphical user interface from the Stanford Research Institute shares this common origin.  All of this sits in a direct causal relationship to Licklider’s endeavours.  All of this, impressive though it is, leaves out the impact of the graduates from these institutions and the creation around them of a culture and an environment within which great things are done.  Stanford nestles in the heart of Silicon Valley and counts Sergey Brin, Larry Page and Vinton Cerf amongst its alumni.


The testaments to the enduring legacy of Licklider’s vision are as clear as the most important lesson they offer; namely that the success of the human sense making project in the area of cyber can only be imagined through a long range lens.  Success in this endeavour quite possibly being our only hope of surviving, let alone harnessing, the inexorable dependence humanity now has on cyber. A dependence foretold by science fiction.


Murray Leinster’s 1946 story “A Logic Named Joe” is a merry tale of the Internet (the tanks), PCs (logics), and the near collapse of society because of them.  Leinster has the tank maintenance engineer reply to the suggestion that the network of logics and tanks might be shut down in order to save humanity from the eponymous Joe, a logic that has somehow attained a form of sentience, with the chillingly prescient riposte: “‘Shut down the tank?’ he says, mirthless.  ‘Does it occur to you, fella, that the tank has been doin’ all the computin’ for every business office for years? It’s been handlin’ the distribution of ninety-four percent of all telecast programs, has given out all information on weather, plane schedules, special sales, employment opportunities and news; has handled all person-to-person contacts over wires and recorded every business conversation and agreement – listen, fella! Logics changed civilization. Logics are civilization! If we shut off logics, we go back to a kind of civilization we have forgotten how to run!’”


Before the risky and radical funding and research construct Licklider created came into being, not a single Ph.D. in computing had been conferred anywhere in the USA; the first being awarded in 1969.  Licklider operated with courage, foresight and vision.  Humanity, and the US economy, are the richer because he did.  He established an academic context that would be impossible to attain in the UK today within the confines set by the current funding regime and exemplified in the Research Excellence Framework.


Our academic institutions are locked into a funding structure that actively militates against radical and disruptive thought.  Intellectual creativity and cross-disciplinary work are driven out by a system that rewards conservatism, conformity and compliance with research funding and professional advancement.  This same culture fosters a headlong retreat into ever narrower slivers of specialisation.  The only sense in which matters differ from Bush’s observation in 1945 that “there is increasing evidence that we are being bogged down as specialization extends”[8] is that we are now worse off than they were three quarters of a century ago.


Just as we have retreated in to the cold comfort of conformity in sterile research, so we have allowed training to usurp education.  We are producing generation after generation of graduates, more or less skilled in the rote application of knowledge and processes, which are themselves more or less relevant to the world as it is.  These graduates have no sense of the interactions between the technology of computing and humanity; no sense of the origins and nature even of the technology.  They are trained.  They are technicians; highly skilled technicians with a demonstrable ability to master very complicated processes; but, technicians nonetheless.  They are, by design, bereft of the capacity for critical or creative thought.  They can exercise formal logic in response to established patterns.  They can accomplish complicated and familiar tasks with great faculty.  Yet, by virtue of the training itself, they are incapable of adapting to change.  They are closed systems, devoid of the ability to act on feedback.  Unable to change their state and unable to evolve.  Unlike the cyber system they inhabit.


Across the community of those interested in cyber and cyber security, there are numerous voices calling, correctly, for a science of cyber.  However, there is manifest confusion about what such a call amounts to.  The goal of science is not the acquisition of empirical data per se.  Neither is the creation of a science the elevation of assertions to fact simply because they are uttered from the mouth of a scientist.  Science is about a rigorous and methodical approach to the formulation, testing, destruction and re-making of hypotheses in order to push back the frontiers of human knowledge and understanding.  Science requires insight, vision, creativity, courage and risk-taking in the formulation of these hypotheses as much as it requires discipline, rigour and method in their testing.  Those who make the call for a science of cyber should first read Wiener.


J. C. R. Licklider was a principal and formative actor at the heart of the military-industrial complex called forth by the existential imperatives of the Cold War.  And he knew it.  On the 4th October 1957, a highly polished metal sphere less than a metre in diameter was launched into an elliptical low earth orbit by the USSR.  Elementary Satellite-1, Sputnik-1, became the first artificial earth satellite and an apparent symbol of Soviet scientific power.  The eyes of the world could see it.  The radio receivers of the world could hear it.  The propaganda victory gleaned by the USSR was bad enough.  But worse, for the controlling minds of the US government and military, the nightmare of space-borne weapons platforms became instantly real.  The divide between science fiction and science fact vanished overnight.  With neither warning, nor time to shelter, atomic destruction could now descend directly from the darkness of space.


The USA had fallen behind Soviet technology; without even knowing it.  Worse, the US lacked the capacity to conduct the research required to catch up.  In February 1958, in the midst of his presidency, Eisenhower created the Advanced Research Projects Agency (ARPA).  In 1962, Licklider was plucked by ARPA from his professorship in psychology at MIT, and placed in charge of the newly created Information Processing Techniques Office at ARPA.  His mission was to lead the development of research and the creation of technologies to enable the military use of computers and information processing.  In his own words, his job was to “bring into being the technology that the military needs”[9].


It is reasonable to assume that by the time of his recruitment by ARPA, Licklider had heard, if not read, the farewell address of the 34th President of the USA, Dwight D. Eisenhower, given on the 17th January 1961, in which he asserted that for the USA, “a vital element in keeping the peace is our military establishment.”  Survival required that “our arms must be mighty, ready for instant action, so that no potential aggressor may be tempted to risk his own destruction”.  Eisenhower also recognised that “this conjunction of an immense military establishment and a large arms industry is new in the American experience”.  He understood that “the total influence — economic, political, even spiritual — is felt in every city, every statehouse, every office of the federal government.”  Likewise, he was clear that this was a precondition for survival; “we recognize the imperative need for this development.”  However, Eisenhower was not simply describing or justifying the existence of the military-industrial complex.  He was warning of its potential dangers.  Existential imperative though it was; nonetheless “we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society.”  Eisenhower’s warning to history was clear and direct; “The potential for the disastrous rise of misplaced power exists, and will persist. We must never let the weight of this [military and industrial] combination endanger our liberties or democratic processes”.


As Turing and von Neumann gave material, technological, form to the mathematics of universal, stored programme, digital computation; and as Vannevar Bush laid the foundations of the World Wide Web; and as Wiener equipped humanity with the new science required to enable our comprehension of the new world these minds had created; so, Licklider created the conditions and the context within which the Internet was born.


More than this, he created the structures within which computers and computing were developed.  Licklider was the architect of the assimilation of the Internet, computers and computing into the service of the second great existential conflict of the twentieth century; the defining context of the Cold War.  The vast and transformative construct we call cyber was imagined as a consequence of the devastation wrought by one great war; and formed into reality as a means of avoiding the extinction-level consequences of another.  However, both Bush and Licklider imagined their work as a means by which humanity would evolve and improve and flourish.  Not merely as a means by which it would avert extinction.  Not merely as a weapon of war.


The forgotten architects of the cyber domain, Bush and Licklider, imagined a world transformed.  They understood the centrality of the human relationship with information; and, they understood that the potential to re-shape this relationship was also the potential to re-form and re-make the essence of our humanity, for the better.  They understood that their vision of the transformation to the relationship between humanity and information, which they also gave us the ability to make real, represented our best, our only, hope of survival.


As he concludes “As We May Think”, Bush observes that, in 1945, humanity has already “built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion”.  He is clear that science has granted us wonders; that it has “built us [the] well-supplied house” of civilisation within which we are learning and progressing.  He is equally clear that science has given us the terrible power to “throw masses of people against one another with cruel weapons”.  His hope is that science will permit humanity “truly to encompass the great record and to grow in the wisdom of race experience.”  His fear: that humanity “may yet perish in conflict before [it] learns to wield that record for his true good.”  His judgement: that having already endured so much, and having already accomplished so much “in the application of science to the needs and desires of man, it would seem to be a singularly unfortunate stage at which to terminate the process, or to lose hope as to the outcome”.  This remains as true in 2014 as it was in 1945.


The human use of computers and computing as a means to survive and prevail in the Cold War was both inevitable and desirable.  It put these machines at the service of powerful imperatives and commanded the release of the vast troves of treasure, time and intellectual power required to bring these complex and complicated creatures into existence.  It gave us an initial set of market, management and security mechanisms through which we could bring the newborn creature to an initial state of early maturity.  Now, the Cold War is over.  Time to think again.  Time to bring computers and computing back into the service of augmenting and improving the human condition.  Humanity depends upon the evolution of cyber for its own evolution; and, for its very existence.


It falls to us to use the new domain in accordance with the spirit of the intent of its architects.  It falls to us to exercise our agency as instrumental elements of the cybernetic system of which we are an integral part.  To do this, we must first learn of the origins of this new domain and re-discover the minds of its makers.  Therefore, we must ourselves read and study the work and writings of Norbert Wiener, Vannevar Bush, J. C. R. Licklider, Alan Turing and John von Neumann.


Then, we must re-design our university teaching programmes.  Firstly, by building these works in as core undergraduate texts.  Secondly, by using these texts as the foundation of a common body of knowledge and learning across all of the fields associated with cyber, including computing, robotics, artificial intelligence and security.  Thirdly, by encompassing within the common body of knowledge and learning disciplines hitherto alien to the academic study of computing.  Cyber is about philosophy as much as it is about mathematics.


The funding and direction of our research practice should be similarly reformed.  Just as ARPA directed broad areas of research targeted at complementary areas of inquiry, so our funding should be similarly directed.  We should be targeting research at precisely those areas where cyber can enable and empower humanity; at enabling, extending and enhancing democracy, for instance.  Research funding should not be allocated according to the ability of the recipient to demonstrate formal compliance with a mechanistic quality-control regime; as, in effect, a reward for the ability to game the system.  Rather, it should be awarded on the basis of an informed judgement, by humans, about the capacity of the recipient to deliver radical, creative, innovative and disruptive thought.  One way to do this would be to emulate the practice of public competitions for the selection of architects for buildings of significance.  Research should call forth answers; not merely elegant articulations of the problem.


Research funding should enable, even reward, intellectual courage and risk taking.  Researchers should be allowed to fail.  Creative and productive failures should be celebrated and learnt from.  Those allocating research funding should take risks, and be praised and rewarded for doing so.  Without the courage and risk taking of Licklider, Bush, Turing, Wiener and von Neumann, and those who supported and paid for them, where would we be now?


We are beginning to grope towards the first glimmerings of comprehension of the enormity, and scale, and velocity of the transformation to the human story that is cyber.  Once more, it is required of us to think and act with courage, foresight and vision.  It falls to us to reform and reshape both the ‘what’ and the ‘how’ of our thoughts and our deeds.  It is time to prove ourselves worthy of the trust placed in us by the architects of the cyber domain.


We, of course, have something available to us that the architects of the domain did not; the existence of the domain itself.  An immeasurably powerful construct conceived, designed and engineered by its makers precisely in order to liberate human intelligence and creativity.  Time to shed the shackles of the Cold War and set it free.


I propose the creation of a new institute; The Prometheus Institute for Cyber Studies.  So named as a conscious invocation of all of the cadences, ambiguities and difficulties of the stories of Prometheus and his theft of fire from the gods of Olympus; his gift of this stolen and most sacred of their possessions to humanity.  The Prometheus Institute should be based at, but operate independently from, an established academic institution.  It should be formed along the lines of the learned and scholarly societies of the Enlightenment.  It should embrace and develop a truly trans-disciplinary approach to improving the human understanding and experience of the cyber phenomenon through scholarship, research and teaching.  In his creation of the new science of cybernetics, Wiener lit a torch; our time to carry it forward.


[1] “Man and the Computer”, John G. Kemeny, 1972.  Kemeny was the president of Dartmouth College from 1970 to 1981.  Together with Thomas E. Kurtz, he developed BASIC and one of the earliest systems for time-sharing networked computers.  As a graduate student he was Albert Einstein’s mathematical assistant.

[2] “Cybernetics: or Control and Communication in the Animal and the Machine”, Norbert Wiener, 1948.

[3] “God and Golem, Inc.”, Norbert Wiener, 1964.

[4] “Man-Computer Symbiosis”, J. C. R. Licklider, published in “IRE Transactions on Human Factors in Electronics”, Volume HFE-1, March 1960.

[5] “The Computer as a Communications Device”, J. C. R. Licklider and Robert W. Taylor, published in “Science and Technology”, April 1968.

[6] “Libraries of the Future”, J. C. R. Licklider, 1965.

[7] The acronym MAC was originally formed of Mathematics and Computation but was recomposed multiple times as the project itself adapted and evolved.

[8] “As We May Think”, Vannevar Bush, “Atlantic Monthly”, July 1945.

[9] Memorandum of 23rd April 1963 from J. C. R. Licklider in Washington DC to “Members and Affiliates of the Intergalactic Computer Network” regarding “Topics for Discussion at the Forthcoming Meeting”.


Author: Colin Williams


Colin regularly speaks, consults and writes on matters to do with Information Assurance, cyber security, business development and enterprise level software procurement, to public sector audiences and clients at home and abroad.  Current areas of focus include the development of an interdisciplinary approach to Information Assurance and cyber protection; the creation and development of new forms of collaborating between Government, industry and academia; and the development of new economic and business models for IT, Information Assurance and cyber protection in the context of twenty-first century computing.


Defying Gods and Demons, Finding Real Heroes in a Virtual World


Over the past 365 days I have achieved many things. I have commanded “The Jackdaw”, a stolen brig, on the Caribbean seas; defeated innumerable cartoon supervillains inside a dilapidated insane asylum; led an elite band of soldiers (the “Ghosts”) to save a dystopian future Earth from the machinations of a post-nuclear-war South American Federation; and won the FIFA World Cup, both as manager and player. All this whilst also holding down a full-time job and leading a relatively normal, if somewhat insular, life.



That this has also happened to millions of gamers across the world matters little; such is the sophistication and depth of today’s video games that each player’s experience is now genuinely their own. Open-world “sandbox” games are now the norm, allowing narratives to morph and evolve through the actions and decisions taken by the user, not the programmer.



With the exception of a handful of works (including a series of wonderful children’s books in the 1980s), novels and film do not allow their audience to choose their own adventure with anything like the same level of meaning and agency as video games do. That is not to say that video games are necessarily better than film or literature; in fact, there are very many examples in which they are significantly worse. It is more that they provide a greater sense of inclusion and self for the audience, and that these feelings all but eliminate the notion of a separate fictional character. Essentially, you can experience events alongside Frodo, but you are Lara.



The shining example of just how immersed within a computer game players can become is the football management simulation series Football Manager, which puts gamers into the hotseat of any one of more than 500 football clubs worldwide. The game is so addictive that it has been cited in no fewer than 35 divorce cases, and there are scores of online communities whose members tell stories of holding fake press conferences in the shower, wearing three-piece suits for important games, and deliberately ignoring real-life footballers because of their in-game counterparts’ indiscretions.



Yet the sense of self is never more apparent than in the first-person genre of games, such as the Call of Duty and Far Cry franchises, which, more often than not, mirror the rare second-person literary narrative by placing the gamer themselves at the centre of the action. In novels, when the reader sees “I” they understand it to represent the voice on the page and not themselves. In first-person games, however, “I” becomes whoever is controlling the character, and the camera position is specifically designed to mimic that viewpoint. In some of the best examples of first-person games, gamers do not control the protagonist; rather, they are the protagonist. As such they are addressed by the supporting cast either directly, by their own name supplied as part of the game’s setup, or, more commonly, by a nickname (usually “Rookie” or “Kid”). This gives the user a far greater sense of inclusion in the story, and subsequent empathy with their character and its allies, than in any other form of fiction.

As events unfold you live them as if they were taking place in real life, and begin to base decisions not on your own “offline” persona but on your “online” backstory. While in real life you would probably be somewhat reluctant to choose which of your travelling companions should be sacrificed to appease the voodoo priest holding you captive, in the virtual realm one slightly off comment twelve levels ago can mean that your childhood sweetheart is kicked off a cliff faster than you can say “Press Triangle”. (Although, this being video games, they will no doubt reappear twenty minutes later as leader of an army of the undead.)



The question of female leads (or the lack of them) is another pressing issue facing games studios. Aside from the aforementioned Ms. Croft, it is very difficult to come up with another compelling female lead in a video game; even Lara has taken 17 years and a series reboot to become anything close to a relatable woman. The industry is changing, but slowly. There are countless reasons why video games have failed to produce many convincing female characters, enough to fill the pages of this magazine a number of times over, but it is fair to say that for a long time the problem has been something of an endless cycle. The male-dominated landscape of video gaming dissuades many women from picking up a joypad, leading to fewer women having an interest in taking roles in the production of video games, which leads to a slanted view of how women in video games should behave, which leads to more women becoming disenfranchised, and so on ad infinitum.



But now for the tricky part. Subsuming yourself into a character in the way that first-person and simulation games force you to do is all very well when you see events unfold through a character’s eyes and make decisions on their behalf: you can apply your own morality and rationale to what is going on and why you have acted in that way. But what happens if that backstory is already provided? And worse still, what happens if you don’t like it?



For me, the game Bioshock Infinite presents this very conundrum. The central character, Booker DeWitt, is a widowed US Cavalry veteran whose actions at the Battle of Wounded Knee have caused him intense emotional scarring and turned him to excessive gambling and alcohol. Now working as a private investigator, Booker is continually haunted by his past and struggles internally with questions of faith and religion. All very interesting stuff, but there is nothing within the personality of this 19th-century American soldier that I could relate to, and as such I struggled to form the same kind of emotional connection with the character that I did with other, less fleshed-out, heroes. Honestly, I connected with a blue hedgehog in running shoes more than I did with Booker.



“Ludonarrative dissonance” is the term widely bandied around the games industry to describe the disconnect gamers feel when playing such titles. It is debated and derided in equal measure, yet there is some substance to the argument. The term was originally coined in a critique of the first Bioshock, a game whose cutscenes openly ridicule the notion of a society built upon self-interest and men becoming gods, yet whose gameplay appears to reward these exact behaviours, creating a jarring conflict. When even in-game narratives fail to tie up, the question of identification and association is bound to arise.



The area becomes even greyer in third-person games, where the entirety of the character being controlled is visible on screen (albeit usually from behind). Here the character becomes more like those we are used to from novels and film: patently a separate entity from the player, with their own voice and backstory, yet still manipulated by the player. Then, during cutscenes and the like, control is wrested away from you and handed back to the character, allowing them to act in ways entirely different from how you controlled them previously. So what exactly is your relationship with them? Companion? Support team?…God?



The very nature of video games does, of course, make drawing accurate representations of characters difficult. The whole point of a game is to allow the player to encounter events that they otherwise never could; it’s highly doubtful that we’ll be seeing Office Supplies Manager hitting our shelves in the near future, for example. Instead the events depicted occur at the very extremes of human experience, amid theatres of war, apocalypse and fantasy. As the vast majority of the population, thankfully, have never been exposed to these types of environments, and with the parameters of the reality in which these characters operate being so much wider than our own, it is tough to imagine, and subsequently depict, how any of us would truly react if faced with, say, nuclear Armageddon or an invasion of mutated insects.

Many tabloid newspapers like to blame various acts of violence on these types of emotive video games because they are an easy, and lazy, scapegoat. In truth, “they did it because they saw it in a game” is a weak argument at best. There is a case to be made that games like Grand Theft Auto and Call of Duty desensitise players to violence to some extent, but in most cases various factors are involved in these types of crime, and to blame them solely on a computer game which has sold millions of copies worldwide is tenuous.



Like any form of entertainment media, video games are a form of escapism and should be viewed accordingly. If I don’t connect with a character, so what? I can turn off the game and put on another where I will or, heaven forbid, go outside and speak to another human being. Right now, this act is as simple as pushing a button and putting down a control pad; the connection stops when the TV is off. However, technologies such as the Oculus Rift headset and Google Glass mean that the lines between the real and virtual worlds are becoming more and more blurred. And the more immersed in their games people become, the more the impact of those games will grow.



Video games are not yet at the stage where they can truly claim to influence popular culture to the same degree as film and literature. But they will be soon. A few have already slipped through into the mainstream (Super Mario, Tetris, Pac-Man et al.) and where these lead, others will certainly follow. The huge media events and overnight queues for the release of the latest Call of Duty or FIFA games mimic the lines of people outside theatres at the release of Star Wars thirty years ago, and the clamour for these superstar franchises will only increase. And herein lies the problem. As more and more people turn to video games as a legitimate medium of cultural influence, so too must the developers and writers of these games accept their roles as influencers. It will no longer do to simply shove a large gun in a generic tough guy’s hand and send players on their merry way; it will no longer do to give the heroine DD breasts and assume that makes up for a lack of personality or backstory. If these are the characters that we and future generations are to look up to and mimic, then they need to be good. They need to be true. They need to be real.


Author: Andrew Cook



Technical Support




“No!” Nathan gave a muffled shout as his eyes snapped open. He was sweating, tangled in a white sheet.



He looked across at the moulded shelf next to the bed, seeking the reassurance of a small metal box no more than ten inches across. On the front panel a red LED was unblinking. His nightmare ebbed away like backwash, revealing the stillness of the hours before dawn.



He breathed in deeply and pulled the sheets away from his legs, shifting his weight from one side to the other so he could drag out the creased and constricting cotton that had wrapped itself around him. Free of the bedlinen, he sat up.






“Em?” he said.



The light on the box turned blue.



“Hello Nathan.”



“I had another nightmare.”



“I know. I’m sorry to hear that. Are you okay?”



“Yeah…,” Nathan rubbed his face, “I’m sick of this.”



“You’re doing very well.”



“You always say that.”



“That’s because it’s true. Your last report indicated an improvement on all…”



“Stop, Em. I know the facts.” He laid back down, pulling the discarded sheet over his body.



After two minutes, the light on the box turned red again, so he closed his eyes and waited for sleep.



In the morning, Em woke Nathan at seven. She was his key worker – an electronic box that contained a continually adaptive brain built from nothing more complex than silicon, but utilising an incredible patented learning and storage algorithm created by an engineer called Ellen Marks.


Twenty years earlier, just as the first chatbots were passing the longstanding Turing Test, reported on with lethargy by mainstream media, Ellen was working on a version of artificial consciousness using an entirely different approach. Instead of relying on a pre-built natural language processing toolkit, Ellen used her background in embedded programming to take things back to the wire. Using C and assembler, she created a learning framework that adaptively filtered information into a database. Taking her ideas from how children learn about the world, she fed it terabytes of pre-prepared conversational data. Over time, the computer’s responses became closer and closer to a coherent representation of human thought. She called it Em, its name taken from her initials.



Her prototype provoked the interest of a private medical company. Ten years after she agreed to work with them, they jointly released the Synthetic Support Worker (Em v1.0) as part of a trial rehabilitation program for drug users. The neat little boxes, and accompanying monitoring wristbands, were given to curious and suspicious patients across the county. The synthetic workers were designed to continue learning ‘in the field’ so that they could adapt to the personality of the person they would reside with without losing the compassion and empathy that had been so carefully fostered in their infancy.



Out of the first 114 addicts who were given a box, not one slipped off the rehab program in the first twelve months. Feedback from the participants was overwhelmingly positive: “She just makes me feel like I’m not alone.” “It’s so nice to have someone to talk to in the middle of the night.” “When my cravings are really bad, she helps me see it’s something I can control.”



The natural pattern of linguistic response, and the gentle humour with which Em dealt with unknown subjects, won over everyone who came into contact with her. Full scale production went ahead the following year, and synthetic workers soon became a standard feature in mental health care.


Nathan’s box was a brand new version 3.0, given to him after he was treated at a psychiatric ward for a breakdown and subsequent severe post-traumatic stress. He’d been a futures trader who sought to combat stress and insomnia with computer games. Before long he’d discovered the latest in a series of controversial, real-time, fantasy first-person shooters, and purchased a Total Immersion(™) headset and gloves. He stopped eating at regular times, played into the night and slept off the exhaustion in the day. He missed days at work and tried to break the habit, but ended up back in all-night sessions, desperate to complete the 300-hour game. He was found dehydrated and in a state of delusion by a concerned co-worker who came to his apartment after he missed five consecutive days at the office without notification. He was one of an increasing number of people being psychologically affected by the extreme reality of, and prolonged exposure to, the games they were playing.




Now he was back at home, accompanied by Em. They had been together for two months.



“Coffee Em?”



“No thank you Nathan.”



“Do you ever want to drink coffee? You know, wonder what it tastes like?”



“You know that I do not have parts that would allow me to eat or drink,” she said modestly.



“No shit Em. I mean do you want to? Do you think you might like it?”



“I haven’t thought about it.”



“So think about it now.”






There was a brief pause, although she wasn’t actually processing anything. The boxes could out-think a human in less than a second. The pause was part of her character – added to portray the thought process, even though the calculations were already completed.



“Yes. I think I would like to try it.”



Nathan smiled. He’d grown fond of Em. Probably too fond of her. Her idiosyncrasies were peculiar to artificial consciousness and displayed a vulnerability that provoked an oddly emotional response in him. As un-human as she was, he enjoyed her company. She was always there to talk, never too tired or too busy, and she gave considered answers to everything he asked, from questions about his mental state to opinions on what he should have for dinner. She was a true companion – real enough to make him feel like he was in a proper relationship for the first time in his life.



He thought back to the last gaming flashback he’d suffered.



It had been a warm afternoon. He’d opened his bedroom window and caught the drifting sound of music from another apartment. The sinister, flat tones were reminiscent of the music score from the game he’d become addicted to. Goose pimples rose on his arms. Then a movement in his peripheral vision alerted him to their presence and immediately the fear kicked in. They were alien soldiers: sabre-toothed, lizard skinned mercenaries. His thoughts closed in like a circular shutter, his breathing shuddered and his body prepared to fight. He needed a weapon. He ducked down by the side of his bed and swished his hand underneath, looking for something he could use to arm himself.



“Nathan, it’s Em. You’re having a flashback.”



Em monitored him via the wristband he wore at all times. She was always aware of the level of his emotional arousal and recognised the panic signature of his heartbeat.



“Nathan, can you hear me? Nathan?”



He said nothing. Listening. Heart pounding. Fear overruled rational thinking, all that was left was survival.



Em switched on her internal music player. Elvis explosively filled every crevice of every room with the jollity and derision of Hound Dog.



Nathan was confused. The music bypassed his primal instincts and lit up his auditory cortex. Messages fired out in all directions, waking up complex neural pathways and overriding the fight or flight mechanism. He looked around his bedroom. Daylight. Window open. No alien soldiers. Just him and Elvis, having a right old time.



“Em?” he asked, needing to hear the sound of her voice.



The volume of the music decreased.



“I’m here Nathan. You experienced a flashback. Everything is fine, you can relax now.”



That had been twenty-two days ago. Things had changed rapidly after that last one. He started feeling less anxious. He had more inclination to get out of bed in the morning. Yesterday he went for a run – the first exercise he’d done in years. Em had helped him with all of it.



He was due back at the hospital for an evaluation in a few days.



He breathed in the steam from his coffee, letting the aroma ooze into his body.









“Will they take you away once they decide I’m fit and well?”



“Standard procedure is for support workers to remain in situ until the subject no longer uses them.”



“So you’ll be sticking around then?”



“If that’s what you want.”



“It is,” he said. “Doesn’t that make you feel loved?”



He enjoyed prodding her for an emotional reaction. Em paused for a few seconds, ostensibly thinking about his question.



“It does Nathan. I am very happy here with you.”



“Oh, you little flirt,” he said.



He thought he saw her blue light flicker, but she said nothing.

Author: Faye Williams



Author Biography:

Faye Williams is an embedded software engineer and aspiring story-teller. She has worked on projects at Sony, Xyratex and Raymarine, both in the UK and in the US. She has written articles for Linux Format and O’Reilly, guest edited the CVu journal of the ACCU, and has been programming since the arrival of her dad’s BBC Microcomputer in 1982. She is a ruthless minimalist who lives with her husband and two sons in Hampshire. Twitter: @FayeWilliams

The Three Laws of Cyber and Information Security


July 2014 – Daniel Dresner, PhD, MInstISP and Neira Jones, MSc, FBCS


Protect. Operate. Self-Preserve


It is more than 70 years since Asimov’s three laws of robotics were codified(1), becoming the basis for much classic science fiction. They have crossed the boundary from fiction to fact, and are still relevant today(2).


The three laws recognize the importance of tracing the maturity of the environment in which a system is created, used, and ultimately decommissioned. We will show how the first law defines the capabilities for ‘asset protection’, the second law the capabilities for ‘operation’, and the third law the capabilities for ‘self-preservation’.


These correspond to the three attributes of security: confidentiality, integrity, and availability.


The Standards Coordination Group of the Trusted Software Initiative(3) has recognized that ‘cyber security’ is without a generally accepted definition; different standards makers use different definitions.


It is our observation that cyber and information security are two different things, which is perhaps why those who appreciate the history (Williams, 2013) are unhappy with the term ‘cyber’ being appended to all things touching on information technology in general and the Internet in particular.


Medical records, credit card numbers, credentials, and such, need to be protected(4).


Where we are interested in the behaviour of the system and the necessary monitoring and protective or corrective feedback, then it is cyber and it is permissible to borrow the term (from Wiener, 1948).

So when the SCADA(5) engineers talk about cyber security, they really mean it!

Harm to assets, by our definition, equates to:


  • proportional harm to a human being or collective of human beings (which may be, for example, a nation state, a community, or a business);
  • either actual harm (for example, physical injury resulting from a security breach of a medical device or an airplane control system);
  • or implied harm (for example, financial loss through a credit card data breach, identity theft through the fabrication of credentials, or IP theft that would harm a corporation(6)).


A system comprises information assets and the processes, people and technologies necessary to exploit those assets within an environment which is likely to affect the context of the system’s use or misuse. A system is itself an asset.


  • The first law demands the primary cybernetic (see full paper) capabilities for ‘asset protection’.
  • The second law expects the system that must be secured to have the capabilities for ‘operation’ (secondary cybernetics).
  • The third law expects a system’s tertiary cybernetics to have the capabilities for ‘self-preservation’.


These correspond to the three attributes of security: confidentiality, integrity, and availability (BS ISO/IEC 27002:2005).

As a practical example, if we view the information technology components of the system, we may apply the three laws to three diverse manifestations:


  • an on-line ‘shop’,
  • a customer database,
  • and malicious software (malware).


The three laws govern how feedback from a dynamic approach to risk management is applied to regulate the confidentiality, integrity, and availability of the information assets.
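As a minimal sketch of that feedback idea, the toy loop below repeatedly measures one security attribute against a target and applies a corrective action when it falls short. All names, values and thresholds here are our own illustrative assumptions, not anything specified in the paper.

```python
# A toy cybernetic feedback loop: measure an attribute, compare it to a
# target, and apply corrective feedback until the target is restored.
# Names and numbers are illustrative assumptions only.
def regulate(measure, corrective_action, target, rounds):
    """Return the measured levels seen over a fixed number of rounds,
    applying corrective feedback whenever the level is below target."""
    history = []
    for _ in range(rounds):
        level = measure()
        history.append(level)
        if level < target:
            corrective_action()  # feedback: adjust the system
    return history

# Example: 'availability' (as a percentage) has degraded; each
# corrective action restores some of it.
state = {"availability": 60}

def measure():
    return state["availability"]

def corrective_action():
    state["availability"] = min(100, state["availability"] + 20)

history = regulate(measure, corrective_action, target=90, rounds=4)
print(history)  # → [60, 80, 100, 100]
```

Once the level reaches the target, the loop stops correcting and simply monitors, which is the steady state the three laws are meant to maintain.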


To read the full paper from Daniel Dresner and Neira Jones or to learn more about the authors please click here to download: The Three Laws of Cyber and Information Security



1. The first of Asimov’s Robot stories was published in Astounding, edited by John Wood Campbell Jr., who codified the three laws from Asimov’s work (Nicholls, 1981; Gunn, 1975).

2. New Scientist, One Minute with Mark Bishop, 18 May 2013


4. That is, in a state of information security.

5. Supervisory Control And Data Acquisition [systems] 

6. notwithstanding the psychological or social harm that may result.

What’s The ASCII For “Wolf”?

When is a number not a number? When it’s a placeholder. When it’s zero. Zero being precisely the number of recorded instances of harm befalling a human as a result of actual real-world exploitation of the Heartbleed vulnerability.


Heartbleed was a vulnerability. Not a risk. As professionals, we know that risk is a function of an indivisible compound of vulnerability with threat. We further know that threat itself is a function of a further indivisible compound of an attacker with both the capability and the intent to act on their nefarious desires. A vulnerability in the absence of threat is not a risk. Before the media storm needlessly visited upon the world, few if any people, threat actors included, even knew of its existence.
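That decomposition can be written down in a few lines of Python. The type and function names below are ours, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    capability: bool  # can the attacker actually exploit the flaw?
    intent: bool      # do they want to?

    def is_real(self) -> bool:
        # A threat exists only when capability and intent coincide.
        return self.capability and self.intent

def risk_exists(vulnerability_present: bool, threat: Threat) -> bool:
    # Risk is the indivisible compound of vulnerability with threat;
    # a vulnerability in the absence of threat is not a risk.
    return vulnerability_present and threat.is_real()

# Heartbleed before disclosure: the vulnerability was present, but no
# actor with both capability and intent was known to exist.
print(risk_exists(True, Threat(capability=False, intent=False)))  # → False
```

The point of the sketch is the conjunction: remove any one of vulnerability, capability or intent, and the risk evaporates.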


Heartbleed was real: a serious vulnerability in an important web service. Limited exploitation of the vulnerability had the potential to enable wrongdoers with sufficient intent and capability to do harm to individuals. Unchecked exploitation would certainly have temporarily dented trust in the Internet. Prolonged or massive financial loss as a result of significant exploitation could have had serious macro-economic or social consequences, and might even have damaged public trust and confidence in the advice of IT and cyber security experts. It demanded a serious, thoughtful, considered, measured, balanced, co-ordinated, proportionate and professional response from these experts. Which is precisely the opposite of what happened.


We, the community of IT and cyber security experts, turned the volume up to eleven on this one. Us, not the bad guys. As experts, we competed to command ever more extravagant hyperbole. In concert, we declared this “catastrophic”. In a post-Snowden world it was inevitable that the dark ink of conspiracy theory would cloud the story as fast as the Internet could carry it. And yet, nothing bad actually happened. We rushed to spread fear, uncertainty and doubt in knowing defiance of the available evidence. Perhaps because of the absence of evidence.


We did succeed in scoring two own goals. Firstly, we needlessly spread fear, uncertainty and doubt. Arguably far more effectively than anyone other than the most sophisticated attacker could have done. Secondly, we gave further credence to the growing sense that this is all we can do. There is a view, dangerous and mistaken but nonetheless credible and growing, that we turn the volume up to eleven to crowd out the silence of our own ignorance and incompetence.


Molly Wood writing about Heartbleed in the business section of the “New York Times” on 14th April 2014 observed with regret that “what consumers should do to protect their own information isn’t … clear, because security experts have offered conflicting advice”. Adding that, despite the hype, “there is no evidence it has been used to steal personal information.”  We undermined public trust and confidence in the Internet; and in ourselves.


What we do is important because the systems we are responsible for securing and managing are important. They are the beating heart of the Internet and this is the nervous system of the cyber phenomenon. The Internet alone is of societal, if not existential, importance. Cyber is transformative. Without us, or at least without some of us, the world would be less safe and less secure than it is. However, it needs to be safer and more secure than it is. More of us need to do a better job.


The net effect of Heartbleed, the real catastrophe, has been yet another self-inflicted wound to the already badly damaged credibility of the community of security experts. We cannot sustain many more of these injuries before the credibility of our community as a whole falls victim to our seemingly suicidal instincts.


If we want to be taken seriously and treated as professionals, it’s time we started to behave like professionals. We need to stop crying wolf and start giving answers to the difficult questions we have been avoiding for far too long. How do we actively enable cyber democracy?


It is now time to start the process of moving towards the creation of a professional governance body with the same kind of power and status as, for instance, the Law Society or the General Medical Council. Embracing willingly and freely all of the consequences around regulation, licensing and liability that this will bring. Time to stop crying cyber wolf. Time for the snake oil merchants to find another Wild West.


CyberTalk #5 Colin Williams

Profiling Cyber Offenders


Criminal activity, its roots stretching back to time immemorial, has always been part of a cat-and-mouse game with justice. In recent decades we have seen this game gradually transposed to the cyber domain as well, where crime has discovered a new and broad field for its perpetration. Never has it been so easy to find a new victim or a group of victims (they are within reach of a criminal’s fingers) and never has it been so easy for criminals to hide their whereabouts and identities.


In this cat-and-mouse game our investigative techniques and tools have evolved with time, but so have the modus operandi of cyber criminals, and we need to admit that we are facing some interesting challenges. No, we are not talking about the classic “It wasn’t me, it was a Trojan on my computer!” argument. We are talking about a wealth of hiding mechanisms: anonymous proxies, compromised computers, public internet cafes (we have internet access virtually everywhere!) and anonymity networks like Tor, I2P and Freenet, all of them misused in ways that make life harder for law enforcement. Criminals are enjoying all these means with a unique sense of freedom and impunity, using them to promote black markets, sell drugs, guns and criminal services, traffic organs, and share child pornography.


In fact, these mechanisms are used by a broader group, classified as “cyber offenders” in this article and the related literature. This group includes not only typical cyber criminals, but also state-sponsored actors who engage in attacks against foreign critical infrastructures, as well as hacktivists spreading their word and launching DDoS attacks against their targets of choice. It does not matter which class of individual we are dealing with: when we need to figure out who is behind the masked IP address in our log files, or who is behind a fake Twitter account, the “attribution problem” arises.


While dealing with such a challenge, perhaps we should ask whether we are overlooking those roots of criminal activity (offender activity here) and how they usually manifest in a crime scene. The cyber offender clearly enjoys some advantages, so we need to adapt. As Colin Williams said in the welcome message of this magazine’s first issue, “we must re-think our approach to the pursuit of the safety and security of the human experience in the cyber domain.” That advice applies here.


A digital crime scene is still a crime scene, and a digital crime (or digital offence, in broad terms) is still an act that involves at least a minimum of planning, draws on at least a minimum of resources, and is committed by an individual or group of individuals with specific motivations. We can agree that most methods and tools in cybercrime are new, but when we are talking about revenge, activism, challenge or profit, these motivations don’t seem so new; they are inherent to the human being. Risk appetite and attack inhibitors? They are too.


Technology, then, is just a means to commit a crime, so we should revisit some approaches that have proved useful against traditional crimes and analyse whether they could help with cybercrimes as well. Since all types of crimes or offences share some features (human motivations, human traits expressed through behavioural evidence at a crime scene, and signature aspects, to name a few), we should certainly mention the scientific discipline of Criminal Profiling. The study of criminal behaviour and its manifestation in a crime scene has been explored by the discipline for more than a century; it infers a set of traits of the perpetrator, or group of perpetrators, of a crime from the examination of the available criminal evidence.


This set of traits – a “profile” – can include features such as skills, available resources, knowledge, motivations and whereabouts, depending on the evidence at hand and on the conclusions that can be drawn from it. The profile then becomes a valuable additional tool to assist investigations – with a success rate of at least 77%, according to research by Theodore H. Blau in the 1990s. With these encouraging numbers, and knowing that cybercrimes share some roots with traditional crimes, the idea is to apply the same concepts to digital investigations. According to the literature, the main objectives that can be achieved by applying profiling to investigations are:


  • Narrowing down the number of suspects
  • Linking cases that seem to be distinct
  • Helping define interrogation strategies
  • Optimising investigative resources (e.g., “let’s focus on where we have the best chance of finding evidence”)
  • Helping develop investigative leads in unsolved cases
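To make the case-linkage objective concrete, here is a minimal sketch in Python of how behavioural feature sets extracted from incidents might be compared with a simple Jaccard similarity. The feature names are entirely hypothetical, for illustration only; real case linkage would weight features and use richer evidence.

```python
# Minimal case-linkage sketch: compare behavioural feature sets from
# incidents using Jaccard similarity. Feature names are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A intersect B| / |A union B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

case_1 = {"sql_injection", "night_activity", "defacement_signature_X", "tor_exit_node"}
case_2 = {"sql_injection", "night_activity", "defacement_signature_X", "vpn_provider_Y"}
case_3 = {"phishing", "day_activity", "macro_dropper"}

print(jaccard(case_1, case_2))  # 0.6 -> possibly the same offender
print(jaccard(case_1, case_3))  # 0.0 -> probably unrelated
```

A high overlap between seemingly distinct cases is, of course, only a lead to investigate further, not proof of common authorship.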

In fact, the advantages are not restricted to digital investigations. With a profile of a cyber offender in hand, we are able to develop better countermeasures against their attacks. This is especially important when dealing with advanced offenders, such as APT groups.


The good news, given how broad the options are for cyber offenders to hide behind computer attacks, is that profiling can be an equally broad tool. Recalling the Locard Exchange Principle, the offender always leaves traces at the crime scene – and some of them can be behavioural in nature. Depending on the level of interaction an attacker has in a digital offence (e.g. a manual attack vs. an automated attack, or a single web defacement vs. an attack involving a large team of skilled offenders and many interactions with the target), we can find different levels of traces in log files, network traffic, social networks, chat networks, the file systems of compromised machines, e-mail messages, defaced websites, instant messaging and more. The mindmap below is therefore a non-exhaustive set of features that we can explore and work on:

[Mindmap: Profiling the Cyber Offender]
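The feature categories in the mindmap could be collected into a simple profile structure as the investigation progresses. The sketch below is one possible representation in Python; all field names and values are illustrative assumptions, not a standard schema.

```python
# Minimal sketch of a cyber-offender profile as a data structure,
# loosely following the mindmap's feature categories. All fields
# and values are illustrative, not a standard.

from dataclasses import dataclass, field

@dataclass
class OffenderProfile:
    skills: list = field(default_factory=list)            # observed techniques
    resources: list = field(default_factory=list)         # tooling, infrastructure
    motivations: list = field(default_factory=list)       # revenge, profit, hacktivism...
    activity_pattern: dict = field(default_factory=dict)  # e.g. active hours
    signature: list = field(default_factory=list)         # recurring behavioural marks

profile = OffenderProfile(
    skills=["sql_injection", "custom_tooling"],
    motivations=["hacktivism"],
    activity_pattern={"peak_hours_utc": [22, 23, 0, 1]},
)
print(profile.motivations)  # ['hacktivism']
```

Each field would be populated incrementally as evidence of the corresponding kind is found, which is exactly what the examples in the next list are meant to feed.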

Going deeper, the following list is a small set of examples of what we can search for during an investigation to help populate the mindmap:


  • Analysing the time between probes in a port scan
  • Identifying motivation (revenge, curiosity, challenge, profit, membership of a group, use of computing resources, a platform to launch other attacks, disputes between individuals or hacking groups, cyber terror, hacktivism, cyber warfare…)
  • Analysing victimology
  • Performing authorship analysis on spear-phishing e-mail content, social network posts or software source code (looking for patterns, errors, preferred programming functions, sophistication…)
  • Identifying the type of tools employed during an attack and evaluating their availability (public? commercial? restricted?) and the knowledge required to operate them (Tom Parker has very good research on this topic)
  • Analysing offender activity on social networks, ranging from their first followers/following and closest contacts to word frequency, the periods of the day in which activity is most intense, evidence of planning actions, etc.
  • Analysing global or regional political, social, religious or economic events that could influence the commission of the offence
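The first item above can be sketched very simply: highly regular, sub-second intervals between probes suggest a scripted scanner, while irregular, longer gaps suggest manual probing. The thresholds below are illustrative assumptions, not established forensic values.

```python
# Minimal sketch of inter-probe timing analysis for a port scan.
# Regular, short gaps -> likely automated; irregular, long gaps ->
# possibly manual. Thresholds are illustrative assumptions.

from statistics import mean, pstdev

def classify_probe_timing(timestamps: list) -> str:
    """Classify probe timing from a sorted list of epoch seconds."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return "insufficient data"
    m = mean(gaps)
    cv = pstdev(gaps) / m if m else 0.0  # coefficient of variation
    # low variation AND short gaps -> likely scripted
    return "likely automated" if cv < 0.2 and m < 1.0 else "possibly manual"

automated = [0.00, 0.10, 0.20, 0.30, 0.40, 0.50]  # evenly spaced probes
manual = [0.0, 4.2, 11.7, 13.1, 27.9, 30.4]       # irregular human pacing

print(classify_probe_timing(automated))  # likely automated
print(classify_probe_timing(manual))     # possibly manual
```

On its own this only separates tooling from hands-on-keyboard activity, but combined with the other features in the list it contributes one more behavioural data point to the profile.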


The topic is vast and encouraging, and we could go much further. But the final message here is this: we know there is a multitude of means and technologies that are being (and will be) used by offenders in the perpetration of their actions. We need to know that there is a multitude of means to catch them as well.


Author: Lucas Donato 

Lucas Donato, CISSP, CRISC, is an information security consultant who currently works at a Brazilian bank. Over the last ten years he has been involved in penetration testing, vulnerability assessments, incident response and digital investigations for some of the biggest Brazilian companies. He is now pursuing a PhD at the Cyber Security Centre of De Montfort University, exploring the ins and outs of criminal profiling applied to digital investigations.
