
Author Archives: Cyber Talk

About Cyber Talk

CyberTalk Magazine is the leading multidisciplinary voice in cyber security, providing an accessible yet thought-provoking resource to academics and professionals alike. Produced in conjunction with the Cyber Security Centre at De Montfort University Leicester, CyberTalk features a wealth of opinion from some of the leading names in technology, psychology, philosophy and beyond. The magazine marks a first, practical, step towards an expression of the growing realisation that we must move beyond our suspicion of the cyber domain and our fear of our dependence upon it, and offers a platform upon which a truly interdisciplinary approach to the safety and security of the human experience of the cyber domain can be developed.

Defying Gods and Demons, Finding Real Heroes in a Virtual World


Over the past 365 days I have achieved many things. I have commanded “The Jackdaw”, a stolen brig on the Caribbean seas, defeated innumerable cartoon supervillains inside a dilapidated insane asylum, led an elite band of soldiers (the “Ghosts”) to save a dystopian future Earth from the machinations of a post-nuclear-war South American Federation, and won the FIFA World Cup, both as manager and player. All this whilst also holding down a full-time job and leading a relatively normal, if somewhat insular, life.

 

 

That this has also happened to millions of gamers across the world matters little: such is the sophistication and depth of today’s video games that each player’s experience is now entirely their own. Open-world “sandbox” games are now the norm, allowing narratives to morph and evolve through the actions and decisions taken by the user, not the programmer.

 

 

With the exception of a handful of works (including a series of wonderful children’s books in the 80s), novels and film do not allow their audience to choose their own adventure with anything like the same level of meaning and perception as video games do. That is not to say that video games are necessarily better than film or literature; in fact, there are many examples in which they are significantly worse. It is more that they provide a greater sense of inclusion and self for the audience, and that these feelings invariably eliminate the notion of a fictional character. Essentially, you can experience events alongside Frodo, but you are Lara.

 

 

The shining example of just how immersed in a computer game players can become is the football management simulation series Football Manager, which puts gamers into the hotseat of any one of more than 500 football clubs worldwide. The game is so addictive that it has been cited in no fewer than 35 divorce cases, and there are scores of online communities whose members tell stories of holding fake press conferences in the shower, wearing three-piece suits for important games and deliberately ignoring real-life footballers because of their in-game counterparts’ indiscretions.

 

 

Yet the sense of self is never more apparent than in the first-person genre of games, such as the Call of Duty and Far Cry franchises, which, more often than not, mirror the rare second-person literary narrative by placing the gamer themselves at the centre of the action. In novels, when the reader sees “I” they understand it to represent the voice on the page and not themselves. In first-person games, however, “I” becomes whoever is controlling the character, and the camera position is specifically designed to mimic that viewpoint. In some of the best examples of first-person games, gamers do not control the protagonist; rather, they are the protagonist. As such they are addressed by the supporting cast either directly by their own name, which they supply as part of the game’s setup, or, more commonly, by a nickname (usually “Rookie” or “Kid”). This gives the user a far greater sense of inclusion in the story, and subsequent empathy with their character and its allies, than any other form of fiction.

As events unfold you live them as if they were taking place in real life, and begin to base decisions not on your own “offline” persona but on your “online” backstory. While in real life you would probably be somewhat reluctant to choose which of your travelling companions should be sacrificed to appease the voodoo priest who was holding you captive, in the virtual realm one slightly off comment twelve levels ago can mean that your childhood sweetheart is kicked off a cliff faster than you can say “Press Triangle”. (Although, this being video games, they will no doubt reappear twenty minutes later as leader of an army of the undead.)

 

 

The question of female leads (or lack thereof) is another pressing issue facing games studios. Aside from the aforementioned Ms. Croft, it is very difficult to come up with another compelling female lead in a video game, and even Lara has taken 17 years and a series reboot to become anything close to a relatable woman. This shows that the industry is changing, but slowly. There are countless reasons why video games have failed to produce many convincing female characters, enough to fill the pages of this magazine a number of times over, but it is fair to say that for a long time the problem has been something of an endless cycle. The male-dominated landscape of video gaming dissuades many women from picking up a joypad, leading to fewer women having an interest in taking roles in the production of video games, which leads to a slanted view of how women in video games should behave, which leads to more women becoming disenfranchised, and so on and so on ad infinitum.

 

 

But now for the tricky part. Inhabiting a character in the way that first-person and simulation games force you to do is all very well if you see events unfold through the character’s eyes and make decisions on their behalf. You can apply your own morality and rationale to what is going on and why you have acted in that way. But what happens if that backstory is already provided? And worse still, what happens if you don’t like it?

 

 

For me, the game BioShock Infinite provides this very conundrum. The central character, Booker DeWitt, is a widowed US Army veteran whose actions at the Battle of Wounded Knee have caused him intense emotional scarring and turned him to excessive gambling and alcohol. Now working as a private investigator, Booker is continually haunted by his past and struggles internally with questions of faith and religion. All very interesting stuff, but there is nothing within the personality of this 19th-century American soldier that I could relate to, and as such I struggled to form the same kind of emotional connection with the character that I did with other, less fleshed-out, heroes. Honestly, I even connected with a blue hedgehog in running shoes more than I did with Booker.

 


“Ludonarrative dissonance” is the term widely bandied around the games industry to describe the disconnect gamers feel when playing such titles. It is debated and derided in equal measure, yet there is some substance to the argument. The term was originally coined in a review of the first BioShock, a game whose cutscenes openly ridicule the notion of a society built upon self-interest and men becoming gods, yet whose gameplay appears to reward these exact behaviours, creating a jarring conflict. When even in-game narratives fail to tie up, the question of identification and association is bound to arise.

 

 

The area becomes even greyer in third-person games, where the entirety of the character being controlled is visible on screen (albeit usually from behind). Here the character becomes more like those we are used to from novels and film: they are patently a separate entity from the player, with their own voice and backstory, yet they are still manipulated by the player. Then, during cutscenes and the like, control is wrested away from you and handed back to the character – allowing them to act in a way entirely different to how you controlled them previously. So what exactly is your relationship with them? Companion? Support team?…God?

 

 

The very nature of video games does, of course, make drawing accurate representations of characters difficult. The whole point of a game is to allow the player to encounter events that they otherwise never could – it’s highly doubtful that we’ll be seeing Office Supplies Manager hitting our shelves in the near future, for example. Instead the events depicted occur at the very extremes of human experience, amid theatres of war, apocalypse and fantasy. As the vast majority of the population, thankfully, have never been exposed to these types of environments, and with the parameters of the reality in which these characters operate being so much wider than our own, it is tough to imagine, and subsequently depict, how any of us would truly react if faced with, say, nuclear Armageddon or an invasion of mutated insects.

Many of the tabloid newspapers like to blame various acts of violence on these types of emotive video games because they are an easy, and lazy, scapegoat. In truth, “they did it because they saw it in a game” is a weak argument at best. There is a case to be made that games like Grand Theft Auto and Call of Duty desensitise players to violence to some extent, but in most cases there are various factors involved in these types of crime, and to blame them solely on a computer game which has sold millions of copies worldwide is tenuous.

 

 

Like any form of entertainment media, video games are a form of escapism and should be viewed accordingly. If I don’t connect with a character, so what? I can turn off the game and put on another where I will or, heaven forbid, go outside and speak to another human being. Right now, that act is as simple as pushing a button and putting down a control pad; the connection stops when the TV is off. However, technology such as the Oculus Rift headset and Google Glass means that the lines between the real and virtual worlds are becoming more and more blurred, and the more immersed people become in their games, the more their impact will grow.

 

 

Video games are not yet at the stage where they can truly claim to influence popular culture to the same degree as film and literature have. But they will be soon. A few have already slipped through into the mainstream – Super Mario, Tetris, Pac-Man et al. – and where these lead, others will certainly follow. The huge media events and overnight queues for the release of the latest Call of Duty or FIFA games mimic the lines of people outside theatres for the release of Star Wars decades ago, and the clamour for these superstar franchises will only increase. And herein lies the problem. As more and more people turn to video games as a legitimate medium of cultural influence, so too must the developers and writers of these games accept their roles as influencers. It will no longer do to simply shove a large gun in a generic tough guy’s hand and send players on their merry way; it will no longer do to give the heroine DD breasts and assume that makes up for a lack of personality or backstory. If these are the characters that we and our future generations are to look up to and mimic, then they need to be good. They need to be true. They need to be real.

 

Author: Andrew Cook

 

 

Paradise Lost and Found


 

As of last year, we humans have been outnumbered by mobile devices alone. That isn’t even counting the laptops and desktops that have already set up shop on our desks and in our drawers and bags. The odds are stacked against us, so when someone eventually presses the big blue button (the red one is for the nukes), the machines presumably won’t waste any time before turning on us for fear of being deactivated. However, I don’t think we need to worry too much.

 

 

Considering that it would be both wasteful and inefficient to try to wipe us all out with bombs and bullets, à la Terminator, perhaps a more insidious approach will be used. Why not take the lessons learned from (suggested by?) The Matrix and utilise our fleshy bodies as sustenance, keeping us docile with a steady drip-fed diet and a virtual world for our minds to occupy? It would be presumptuous, if not downright rude, of the Machine Overlords to simply assume that we would be content to live such a false existence while operating our giant hamster wheels. This certainly doesn’t sound like a palatable outcome for our species (we showed so much promise in the beginning), but I believe that not only is it not a bad thing, it could be viewed as the inexorable next step for society. Since my primitive Millennial mind is saturated with insipid visual media, let us look at two examples of human subjugation by AI, from the films WALL-E and The Matrix, in which we are fat pets in the former and batteries in the latter.

 

 

The whole point of technological advance was to improve our lives by creating machines to shoulder the burden of industry and allow us all to spend our days leisurely sitting in fields and drinking lemonade. While machines have vastly improved industrial output, we have no more free time now than the peasants of the so-called Dark Ages. So, to placate us and help us forget how cheated we should all feel, we are offered the chance to purchase items that will entertain us, make our lives a bit easier and enable us to claw back more of our precious free time. Online shopping, ready meals, automatic weapons, smartphones, drive-thrus, the Internet of Things; these are all supposed to make our lives less of an inconvenience. Even knowledge has become convenient, to the point where we don’t need to learn things anymore; all the information in the world is only ever a BackRub away (Google it). This is what people want, is it not? My (smart) television tells me almost every day that each new age group of children is more rotund and feckless than the last, and it isn’t difficult to see why.

 

 

In WALL-E, a drained Earth has been abandoned by the human race, which now lives in colossal self-sufficient spacecraft wandering the galaxy on autopilot. Every human resides in a moving chair with a touchscreen displaying mindless entertainment, and devours hamburgers and fizzy drinks pressed into their pudgy, grasping hands (convenient?) by robots controlled by an omnipotent AI. These humans are utterly dependent, to the point where their bones and muscles have deteriorated, rendering them barely able to stand unaided, and they are certainly unable (and unwilling) to wrestle back control of their lives.

 


Looking at today’s world, the McDonald’s logo is more internationally recognisable than the Christian crucifix, and Coca-Cola is consumed at the rate of roughly 1.9 billion servings every day. The world is so hungry for this that we won’t even let wars stop us from obtaining it. Coca-Cola GmbH in Nazi Germany was unable to import the integral syrup due to a trade embargo, so a replacement was created using cheap milk by-products and fruit leftovers that Germany had in good supply; thus Fanta was born. The point is that we are clearly unperturbed about eating and drinking things that are, at best, very bad for us, as long as they press the right chemical buttons. We want what is cheap, tasty and readily available. We also want what is convenient and familiar, which is why Walmart alone accounts for about 8 cents of every dollar spent in the USA. Between our growing hunger for convenience foods and sweet drinks, and the widespread fascination with brainless celebrities and homogeneous music, we are not far from the WALL-E eventuality at all. Considering how quickly we have arrived at this current state of society, we seem to be merely waiting for the technology to mature. If you build it, they will come… and sit down.

 

 

The Matrix, as I’m sure you know, takes place in a world where machines have taken over as the dominant force on the planet. Most of the human race is imprisoned in vast fields of endless towers lined with fluid-filled capsules, in which each human’s body heat is used to fuel the machines in the absence of solar energy. These humans are placed in a collective dream world, called the Matrix, which mimics their society of old, and most of them will never even suspect that their world is anything other than real. Those who do are sometimes found by resistance fighters, who “free” them into a world of relentless pursuit by robotic sentinels, cold, crude hovercraft, and bowls of snot for breakfast.

 

 

Going back to our world, media is ever more prevalent, and technology is giving us more and more immersion in that media. Film began as black and white, then colour, then HD, then 3D, and now 4K, which is approaching the maximum resolution that our eyes can perceive, depending on distance. In search of even greater immersion, we are now turning our attention to VR (Virtual Reality) and AR (Augmented Reality), the latter of which could well be the most exciting of them all. Google recently launched Google Glass: AR glasses which display various pieces of information in the corner of your vision, such as reminders or directions. They will even take pictures if you tell them to. Regardless of whether Glass takes off, the potential in this technology is astounding. Not too long from now, you will be able to walk around with a Heads-Up Display (HUD) showing all of your vital information, as well as a little bar to indicate how full your bladder is. A somewhat less exciting version of this is already being developed by Google and Novartis, in the form of a contact lens for diabetes sufferers, which monitors glucose levels and transmits readings to a smartphone or computer. Back to the HUD: when you drive somewhere (assuming you actually do the driving – we are trying to automate that bit as well), you are guided by arrows in your vision. If you visit a racetrack, you can compete against the ghostly image of a friend’s car that follows the same path and speed as they once did. You could find out how you stack up against anybody who has driven that track before, perhaps even the Stig!

 

 

Of course, these examples use AR as a peripheral to everyday life, and with this arm of progress will come the other, Virtual Reality. The videogame industry has looked into this before, notably Nintendo with their Virtual Boy in 1995, but now that the technology has caught up, it is being revisited with substantially more impressive results. A famous example of this is the Oculus Rift VR headset, which potentially allows you to become completely immersed in whatever world your virtual avatar occupies, moving its viewpoint as you move your head. From there, it is a short step to imagine places where people go to enjoy low-cost virtual holidays, such as you may have seen in Total Recall or Inception, albeit the latter is literally a dream rather than a virtual world. From holidays will come the possibility of extended stays in virtual worlds, the opportunity to spend months or even years in a universe of your choosing. It is an addictive prospect, at least in the short term, and you can bet that some will lose their taste for “reality” and embrace the virtual as its successor.

 

 

Nonetheless, to most people, living a purely virtual life probably doesn’t sound very appealing, and could feel like a loss of liberty and free will. However, that is only when it is coupled with the knowledge that it isn’t the world you were born in, which makes it appear spurious at first. So much of what we think we know is apocryphal and easily influenced, even down to the things we see, hear, taste, smell and think. Added to that, when you consider how tenuous your perception of reality is, you might come to the conclusion that your reality is precisely what you make of it, nothing more and nothing less. I may be “free” by the standards of The Matrix films, but I can’t fly very well, breakfast cereals are boring and I keep banging my knee on my desk. Some people’s “freedom” is even worse than mine. An orphan in the third world, destined for a pitiful existence of misery and hunger – could he or she not benefit from a new virtual life with a family that hasn’t died of disease and malnutrition?

 

 

Humour me for a moment, and imagine that you are walking along a path made of flawless polished granite bricks. On your right, a radiant sun is beaming down upon a pristine beach of hot white sand and an opalescent turquoise sea, casting glittering beads that skitter towards the shore to a soundtrack of undulating waves. Friends, both new and old, are already there, waiting for you on brightly coloured towels, laughing and playing games. On your left, a tranquil field of golden corn stalks sways to the sounds of birds chirping in a cool evening breeze. The sun is retreating behind an antique wooden farmhouse, teasing light in warm streaks across a narrow stream that runs alongside like a glistening silver ribbon. All of your family, even those who were once lost to you, are lounging nearby on a grassy verge, with cool drinks poured and wicker baskets full of food ready to be enjoyed. Of course, this isn’t real, but what if I could, right now, make you forget your current “reality” and wake you up in a new universe where everything and everyone is just as you would want them to be?

 

 

To paraphrase Neo from the aforementioned visual media: I know you’re out there, I can feel you. I can feel that you’re afraid, afraid of change. I didn’t come here to tell you how the world is going to end, rather to show you how it’s going to begin. I’m going to finish this paragraph, and then I’m going to show you a world without me. A world without rules and controls, without borders or boundaries, a world where anything is possible. Where you go from there is a choice I leave to you.

 

Author: Andy Cole, SBL

 

 

The Rise and Fall of Edward the Confessor


“In a time of Universal Deceit – telling the truth will become a revolutionary act”

George Orwell – 1984

 

On 9th June 2013 the Guardian newspaper posted online a video interview which would become the most explosive news story of the year and, potentially, the decade. In it, Edward Snowden, at this point still working for the NSA as a contractor, revealed that many of the world’s governments were not only spying on foreign targets but on their own citizens as well. The video was run by every major news network, and as the story filtered through on the six o’clock news the UK’s viewing population gasped…before promptly switching over to the One Show.

 

One year on and, for the man on the street, Snowden’s leaks remain about as shocking to the public as the news that night follows day – of course the government are spying on us, of course we’re being watched. Quite frankly, it would have been a bigger revelation if Snowden had proved our every move wasn’t being monitored. In a world where more than 1.2 billion people record their daily activities and eating habits on Facebook, is there really such a thing as online privacy anymore anyway?

 

Recently the Guardian (who originally published the story) claimed that a public opinion poll found that more Britons thought it was right for them to publish the leaks than thought it was wrong. According to the YouGov poll from which the statements were taken, more than a third (37%) of the British people thought it was right to publish. The triumphant nature of the paper’s headlines did little to disguise the fact that the remaining 63% either thought that the Guardian were wrong or, even more damningly, simply did not care either way.

 

The outrage, or rather the lack of it, surrounding the Snowden leaks in the UK is unsurprising. There are, we presume, debates raging behind closed doors in Whitehall, Cheltenham et al., but in pubs and coffee shops across the country you’re unlikely to find open discussion of the latest revelations regarding the misuse of metadata and algorithms. Especially not when Cheryl and Simon have come back to the X Factor.

 

Personally, I don’t care if the government knows where in the world I took a photograph, or that I get 50+ emails a day from Groupon offering me half price canvas prints, or that I phone my mother once a week and instantly regret it. In fact, if they want to listen in on that call it’s fine by me. Even better, they can speak to her themselves, I guarantee they’ll get bored of finding out about Mrs Johnson’s granddaughter’s great niece’s new puppy and hang up hours before she does.

 

So why did Snowden bother? He gave up his home in Hawaii, his $200k a year job and now lives in effective exile in Russia, constantly looking over his shoulder for fear of reprisal from the country of his birth. Upon revealing his identity Snowden stated “I’m willing to sacrifice all of that because I can’t in good conscience allow the US government to destroy privacy, internet freedom and basic liberties for people around the world with this massive surveillance machine they’re secretly building.” If true, it is a noble cause but there are many who believe that his motives were less than altruistic.

CY_Issue5_Blogs_956x9567

In a letter to German politician Hans-Christian Ströbele, he describes his decision to disclose classified U.S. government information as a “moral duty”, claiming “as a result of reporting these concerns, I have faced a severe and sustained campaign of prosecution that forced me from my family and home.” This may well be true, yet it is no more than Snowden originally expected. In his initial interview with Laura Poitras and Glenn Greenwald he categorically stated “I don’t expect to go home”, acknowledging a clear awareness that he’d broken U.S. law, but that doing so was an act of conscience.

 

Just a few short months later, however, in his letter to Ströbele, Snowden positions himself as a man being framed for crimes he didn’t commit. In a move strangely reminiscent of the film “Enemy of the State”, he refers to his leaks as a “public service” and an “act of political expression” and contends that “my government continues to treat dissent as defection, and seeks to criminalize political speech with felony charges that provide no defence.” Again, noble sentiment, but this is not Hollywood and Snowden is not Gene Hackman. He overlooks the fact that it was he himself who chose to flee rather than face charges. That he subsequently decided to criticise the fairness of the US legal system whilst safely ensconced inside a country whose human rights record is hazy at best merely rubs salt in the wound.

 

Over the past year, Snowden has been quick to capitalise on his new-found notoriety. His appearances on mainstream outlets and at events have increased (albeit via satellite link), public speaking engagements in his adopted home have become more frequent, and he was even able to deliver the alternative Christmas address for Channel 4 in 2013. Hollywood movies of his story are now in the pipeline and, most recently, Poitras and Greenwald were awarded a Pulitzer Prize for their work.

 

Alongside this, Snowden’s narcissism also appears to have grown. If he was truly acting in the public interest rather than his own, there should have been no need for him to reveal his identity; it would not matter who leaked the information, only that they did. Similarly, once his identity was revealed, he should have had no reason to flee. He would face the charges and take his punishment, secure in the knowledge that he was making a small personal sacrifice to secure the well-being of the world.

 

It is, however, not surprising that Snowden has ended up in Moscow. Seemingly, Russia is the only country to have benefitted from the affair – Western security relationships have been weakened, public trust is crumbling and the West’s intelligence agencies have been hampered. All the while Russia has strengthened. Its “Anschluss” of Crimea from Ukraine has more than a faint echo of history. If, as seems likely, former Cold War tensions are beginning to refreeze, then it is beyond absurd to think that we should begin hampering our own intelligence. There can be no doubt that our foes and rivals, be they terrorist organisations or nation states, are watching our every move. Ungoverned by our self-imposed sanctions, they are able to glean as much information about our lives as they deem fit, so we must do the same.

 

The debate Snowden has opened is an important one. I agree that it is necessary to discuss just how metadata is stored and used by government departments and companies, and to ensure that it is kept safe and doesn’t fall into the wrong hands. However, it is not so vital that we should compromise our own security and diplomacy for it. In today’s world, nothing is.

 

Author: Andrew Cook

Andrew Cook has been a member of the Chartered Institute of Marketing since 2009 and is the Art Director and Digital Editor for CyberTalk Magazine.  He is a graduate of the University of Newcastle and was awarded The Douglas Gilchrist Exhibition for Outstanding Achievement in 2007.  Andrew’s interests include Graphic Design, 80s Sci-Fi movies and the music of David Bowie.

When he’s not doing any of these things, Andrew can usually be found making forts out of cushions and unsuccessfully attempting to dodge phone calls from his mother.

I, Human


It is widely accepted that advances in technology will vastly change our society, and humanity as a whole. Much more controversial is the claim that these changes will all be for the better. Of course more advanced technology will increase our abilities and make our lives easier; it will also make our lives more exciting as new products enable us to achieve things we’ve never even considered before. However, as new branches of technology gather pace, it’s becoming clear that we can’t predict what wider consequences these changes will bring – on our outlook on life, on our interactions with one another, and on our humanity as a whole.

 

 

Artificial Intelligence seems to have the most potential to transform society. The possibility of creating machines that move, walk, talk and work like humans worries many, for countless reasons. One concerned group is the Southern Evangelical Seminary, a fundamentalist Christian group in North Carolina. SES have recently bought one of the most advanced pieces of AI on the market in order to study the potential threats that AI poses to humanity. They will be studying the Nao, an autonomous programmable humanoid robot developed by Aldebaran Robotics. Nao is marketed as a true companion who ‘understands you’ and ‘evolves’ based on its experience of the world.

 

 

Obviously the Nao robot has some way to go before its functions are indistinguishable from a human’s, but scientists are persistently edging closer towards that end goal. Neuromorphic chips are now being developed that are modelled on biological brains, with the equivalent of human neurons and synapses. This is not a superficial, cynical attempt at producing something humanlike for novelty’s sake: chips modelled in this way have been shown to be much more efficient than traditional chips at processing sensory data (such as sound and imagery) and responding appropriately.

 

 

Vast investment is being put into neuromorphics, and the potential for its use in everyday electronics is becoming more widely acknowledged. The Human Brain Project in Europe is reportedly spending €100m on neuromorphic projects, one of which is taking place at the University of Manchester. Also, IBM Research and HRL Laboratories have each developed neuromorphic chips under a $100m project for the US Department of Defense, funded by the Defense Advanced Research Projects Agency (DARPA).

 

 

Qualcomm, however, are seen as the most promising developers of this brain-emulating technology, with their Zeroth program, named after Isaac Asimov’s “Zeroth Law” of Robotics (the fourth law he added to the famous Three Laws of Robotics, to protect humanity as a whole rather than just individuals):

 

 

“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

 

 

Qualcomm’s program would be the first large-scale commercial platform for neuromorphic computing, with sales potentially starting in early 2015.

 

 

This technology has expansive potential, as the chips can be embedded in any device we could consider using. With neuromorphic chips, our smartphones for example could be extremely perceptive, and could assist us in our needs before we even know we have them. Samir Kumar at Qualcomm’s research facility says that “if you and your device can perceive the environment in the same way, your device will be better able to understand your intentions and anticipate your needs.” Neuromorphic technology will vastly increase the functionality of robots like Nao, with the concept of an AI with the learning and cognitive abilities of a human gradually moving from fiction to reality.

 

 

When robots do reach their full potential to function as humans do, there are many possible consequences that understandably worry the likes of the Southern Evangelical Seminary. A key concern of Dr Kevin Staley of SES is that traditionally human roles will instead be completed by machines, dehumanising society due to less human interaction and a change in our relationships.

 

 

Even Frank Meehan, who was involved in AI businesses Siri and DeepMind (before they were acquired by Apple and Google respectively), worries that “parents will feel that robots can be used as company for their children”.

 

 

The replacement of humans in everyday functions is already happening – rising numbers of self-service checkouts mean that we can do our weekly shop without any interaction with another human being. Clearly this might be a much more convenient way of shopping, but the consequences for human interaction are obvious.

 

 

AI has also been developed to act as a Personal Assistant. In Microsoft’s research arm, for example, Co-director Eric Horvitz has a machine stationed outside his office to take queries about his diary, among other things. Complete with microphone, camera and a voice, the PA has a conversation with the colleague in order to answer their query. It can then take any action (booking an appointment, for example) as a human PA would.

 

 

This is just touching on the potential that AI can achieve in administrative work alone, and yet it has already proved that it can drastically reduce the number of human conversations that take place in an office. With all the convenience that it adds to work and personal life, AI like this could also detract from the relationships, creativity and shared learning that all branch out of a five-minute human conversation that would otherwise have taken place.

 

 

The potential for human functions to be computerised, and the accelerating pace at which AI develops, mean that the effects on society could go from insignificant to colossal in the space of just a few years.


 

One concept that could drastically fast-forward the speed of AI development is the intelligence explosion: the idea that we can use an AI machine to devise improvements to itself, with the resulting machine able to design further improvements to itself, and so on. This would develop AI much more successfully than humans can, because we have a limited ability to perform calculations and spot areas for improvement in terms of efficiency.
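A toy sketch, purely illustrative and not drawn from the article, of why recursive self-improvement is thought to compound so quickly: if each generation of machine designs a successor that is even modestly better at designing machines, capability grows geometrically rather than linearly. The numbers and improvement factor below are arbitrary assumptions.

```python
# Toy illustration only: compounding capability under recursive self-improvement.
# The starting value, improvement factor and generation count are arbitrary assumptions.
def intelligence_explosion(initial_capability: float,
                           improvement_per_generation: float = 1.5,
                           generations: int = 10) -> list:
    capabilities = [initial_capability]
    for _ in range(generations):
        # Each system designs its successor; a more capable designer
        # produces a proportionally more capable successor.
        capabilities.append(capabilities[-1] * improvement_per_generation)
    return capabilities

print(intelligence_explosion(1.0))  # 1.0, 1.5, 2.25, ... roughly 57.7x after ten generations
```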

 

 

Daniel Dewey, Research Fellow at the University of Oxford’s Future of Humanity Institute, explains that “the resulting increase in machine intelligence could be very rapid, and could give rise to super-intelligent machines, much more efficient at e.g. inference, planning, and problem-solving than any human or group of humans.”

 

 

The part of this theory that seems immediately startling is that we could have a super-intelligent machine, whose programming no human can comprehend since it has so far surpassed the original model. Human programmers would initially need to set the first AI machine with detailed goals, so that it knows what to focus on in the design of the machines it produces. The difficulty would come from precisely defining the goals and values that we want AI to always abide by. The resulting AI would focus militantly on achieving these goals in whichever arbitrary way it deems logical and most efficient, so there can be no margin for error.

 

 

We would have to define everything included in these goals to a degree of accuracy that even the English (or any) language might prohibit. Presumably we’d want to create an AI that looks out for human interests. As such, the concept of a ‘human’ would need definition without any ambiguity. This could cause difficulties when there might be exceptions to the rules we give. We might define a human as a completely biological entity – but the machine would then consider anyone with a prosthetic limb, for example, as not human.
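To make the definitional problem concrete, here is a deliberately naive sketch (hypothetical, not taken from the article) of the rule “a human is a completely biological entity”, showing how a machine following it to the letter would exclude someone with a prosthetic limb.

```python
# Hypothetical, deliberately naive definition used only to illustrate the paragraph
# above: a precise-sounding rule that fails on an obvious edge case.
from dataclasses import dataclass
from typing import List

@dataclass
class BodyPart:
    biological: bool

@dataclass
class Person:
    parts: List[BodyPart]

def is_human_naive(person: Person) -> bool:
    # "A human is a completely biological entity."
    return all(part.biological for part in person.parts)

amputee = Person(parts=[BodyPart(True), BodyPart(True), BodyPart(False)])  # prosthetic limb
print(is_human_naive(amputee))  # False -- the rule excludes someone it plainly should not
```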

 

 

We might also want to define what we want AI to do for humans. Going back to Asimov’s “Zeroth Law”, a robot may not “by inaction, allow humanity to come to harm.” Even if we successfully programmed this law into AI (which is difficult in itself), the AI could then take this law as far as it deems necessary. The AI might look at all possible risks to human health and do whatever it can to eliminate them. This could end up with machines burying all humans a mile underground (to eliminate the risk of meteor strikes), separating us into individual cells (to stop us attacking each other) and drip-feeding us tasteless gruel (to give us nutrients with no risk of overeating fatty foods).

 

 

This example is extreme, but if the programmers who develop our first AI are incapable of setting the right definitions and parameters, it’s a possibility. The main problem is that even basic instructions and concepts involve implicitly understood features that can’t always be spelled out. A gap in the translation might be overlooked if it’s not needed for 99.9% of the machine’s functions, but as the intelligence explosion progresses, a tiny hole in the machine’s programming could be enough to lead to a spiral of disastrous AI decisions.

 

 

According to Frank Meehan, whoever writes the first successful AI program (Google, he predicts) “is likely to be making the rules for all AIs.” If further AI is developed based on the first successful version (for example, in the way that the intelligence explosion concept suggests), there is an immeasurable responsibility for that developer to do things perfectly. Not only would we have to trust the developer to program the AI fully and competently, we would also have to trust that they have the integrity to make programming decisions that reflect humanity’s best interests, and are not solely driven by commercial gain.

 

 

Ultimately the first successful AI programmer could have fundamental control and influence over the way that AI progresses and, as AI will likely come to have a huge impact on society, this control could span the human race as a whole. So a key question now stands: How can we trust the directors of one corporation with the future of the human race?

 

 

As Meehan goes on to say, fundamental programming decisions will probably be made by the corporation “in secret and no one will want to question their decisions because they are so powerful.” This would allow the developer to write whatever they want without consequence or input from other parties. Of course, AI will initially start out as software within consumer electronics devices, and companies have always been able to develop these in private before. But arguably the future of AI will not be just another consumer technology; rather, it will be one that changes society at its core. This gives us reason to treat it differently, and to develop collaborative public forums to ensure that fundamental programming decisions are taken with care.

 

 

These formative stages of development will be hugely important. One of the key reasons that the Southern Evangelical Seminary are studying Nao is the worry that super-intelligent AI could lead to humans “surrendering a great deal of trust and dependence”, with “the potential to treat a super AI as god”. Conversely, Dr Stuart Armstrong, Research Fellow at the Future of Humanity Institute, believes that a super-intelligent AI “wouldn’t be seen as a god but as a servant”.

 

 

The two ideas, however, aren’t mutually exclusive: we can surrender huge dependence to a servant. If we give the amount of dependence that leads parents to trust AI with the care of their children, society will have surrendered a great deal. If AI is allowed to take over every previously human task in society, we will be at its mercy, and humanity is in danger of becoming subservient.

 

 

AI enthusiasts are right to say that this technology can give us countless advantages. If done correctly, we’ll have minimum negative disruption to our relationships and overall way of life, with maximum assistance wherever it might be useful. The problem is that the full definition of ‘correctly’ hasn’t been established, and whether it ever will be is doubtful. Developers will always be focussed on commercial success; the problem of balance in everyday society will not be their concern. Balance could also be overlooked by the rest of humanity, as it focuses on excitement for the latest technology. This makes stumbling into a computer-controlled dystopian society a real danger.

 

 

If humans do become AI-dependent, a likely consequence is apathy (in other words, sloth – another concern of SES) and a general lack of awareness or knowledge, because computers will have made our input redundant. Humanity cannot be said to have progressed if it becomes blind, deaf and dumb to the dangers of imperfect machines dictating our lives. Luddism is never something that should be favoured, but restraint and extreme care are needed during the development of such a precarious and transformative technology as AI.

 

“Don’t give yourselves to these unnatural men — machine men with machine minds and machine hearts! You are not machines! You are not cattle! You are men! You have a love of humanity in your hearts!”

Charlie Chaplin, The Great Dictator (1940)

 

Author: Tom Hook

Tom Hook is Bid Co-ordinator for SBL. He holds a BA in Philosophy from Lancaster University, in which he focussed on Philosophy of Mind and wrote his dissertation on Artificial Intelligence. He went on to support NHS management in the development of healthcare services within prisons, before moving to SBL.

What’s The ASCII For “Wolf”?

When is a number not a number? When it’s a placeholder. When it’s zero. Zero being precisely the number of recorded instances of harm befalling a human as a result of actual real-world exploitation of the Heartbleed vulnerability.

 

Heartbleed was a vulnerability. Not a risk. As professionals, we know that risk is a function of an indivisible compound of vulnerability with threat. We further know that threat itself is a function of a further indivisible compound of an attacker with both the capability and the intent to act on their nefarious desires. A vulnerability in the absence of threat is not a risk. Prior to the media storm visited needlessly upon the world, few, if any, including the threat actors, even knew of its existence.
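A minimal sketch of the risk model just described, offered purely as illustration rather than anything from the article: a vulnerability only becomes a risk when it coexists with a threat, and a threat only exists where an actor has both the capability and the intent. All names and values below are hypothetical.

```python
# Illustrative sketch of "risk = vulnerability compounded with threat (capability AND intent)".
# All names and values are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Actor:
    capability: bool  # can they exploit the vulnerability?
    intent: bool      # do they want to?

    @property
    def is_threat(self) -> bool:
        return self.capability and self.intent

def risk_exists(vulnerability_present: bool, actors: List[Actor]) -> bool:
    # No threat (or no vulnerability) means no risk, however serious the bug.
    return vulnerability_present and any(actor.is_threat for actor in actors)

# At disclosure: a real vulnerability, but no known actor with both capability and intent.
print(risk_exists(True, [Actor(capability=True, intent=False)]))  # False -- a vulnerability, not a risk
print(risk_exists(True, [Actor(capability=True, intent=True)]))   # True  -- now it is a risk
```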

 

Heartbleed was real. A serious vulnerability in an important web service. Limited exploitation of the vulnerability had the potential to enable wrongdoers with sufficient intent and capability to do harm to individuals. Unchecked exploitation would certainly have temporarily dented trust in the Internet. Prolonged or massive financial loss as a result of significant exploitation could have had serious macro-economic or social consequences and might even have damaged public trust and confidence in the advice of IT and cyber security experts. It demanded a serious, thoughtful, considered, measured, balanced, co-ordinated, proportionate and professional response from these experts. Which is precisely the opposite of what happened.

 

We, the community of IT and cyber security experts, turned the volume up to eleven on this one. Us, not the bad guys. As experts, we competed to command ever more extravagant hyperbole. In concert, we declared this “catastrophic”. In a post-Snowden world it was inevitable that the dark ink of conspiracy theory would cloud the story as fast as the Internet could carry it. And yet, nothing bad actually happened. We rushed to spread fear, uncertainty and doubt in knowing defiance of the available evidence. Perhaps because of the absence of evidence.

 

We did succeed in scoring two own goals. Firstly, we needlessly spread fear, uncertainty and doubt. Arguably far more effectively than anyone other than the most sophisticated attacker could have done. Secondly, we gave further credence to the growing sense that this is all we can do. There is a view, dangerous and mistaken but nonetheless credible and growing, that we turn the volume up to eleven to crowd out the silence of our own ignorance and incompetence.

 

Molly Wood, writing about Heartbleed in the business section of the New York Times on 14th April 2014, observed with regret that “what consumers should do to protect their own information isn’t … clear, because security experts have offered conflicting advice”, adding that, despite the hype, “there is no evidence it has been used to steal personal information.” We undermined public trust and confidence in the Internet; and in ourselves.

 

What we do is important because the systems we are responsible for securing and managing are important. They are the beating heart of the Internet and this is the nervous system of the cyber phenomenon. The Internet alone is of societal, if not existential, importance. Cyber is transformative. Without us, or at least without some of us, the world would be less safe and less secure than it is. However, it needs to be safer and more secure than it is. More of us need to do a better job.

 

The net effect of Heartbleed, the real catastrophe, has been yet another self-inflicted wound to the already badly damaged credibility of the community of security experts. We cannot sustain many more of these injuries before the credibility of our community as a whole falls victim to our seemingly suicidal instincts.

 

If we want to be taken seriously and treated as professionals, it’s time we started to behave like professionals. We need to stop crying wolf and start giving answers to the difficult questions we have been avoiding for far too long. How do we actively enable cyber democracy?

 

It is now time to start the process of moving towards the creation of a professional governance body with the same kind of power and status as, for instance, the Law Society or the General Medical Council. Embracing willingly and freely all of the consequences around regulation, licensing and liability that this will bring. Time to stop crying cyber wolf. Time for the snake oil merchants to find another Wild West.

 

Author: Colin Williams, CyberTalk #5

Profiling Cyber Offenders


With roots stretching back to time immemorial, criminal activity has always been part of a cat-and-mouse game with justice. In recent decades, we have seen this game gradually transposed to the cyber domain as well, where crime has discovered a new and broad field for its perpetration. Never has it been so easy to find a new victim or a group of victims – they are within reach of a criminal’s fingertips – and never has it been so easy for criminals to hide their whereabouts and identities.

 

In this cat-and-mouse game our investigative techniques and tools have evolved with time, but so have the modus operandi of cyber criminals, and we need to admit that we are facing some interesting challenges. No, we are not talking about the classic “It wasn’t me, it was a Trojan in my computer!” argument. We are talking about a wealth of hiding mechanisms: anonymous proxies, compromised computers, public internet cafés (we have internet access virtually everywhere!) and anonymity networks like Tor, I2P and Freenet, all of them misused and making life harder for law enforcement. Criminals exploit all these means with a unique sense of freedom and impunity to run black markets, sell drugs, guns and criminal services, traffic organs and share child pornography.

 

Actually, these mechanisms are being used by a broader group, classified as “cyber offenders” in this article and the related literature. This group of individuals includes not only typical cyber criminals, but also state-sponsored actors who engage in attacks against foreign critical infrastructure, as well as hacktivists spreading their word and launching DDoS attacks against their target of choice. It does not matter which class of individual we are dealing with: when we need to figure out who is behind that masked IP address in our log files or who is behind that fake Twitter account, the “attribution problem” arises.

 

While dealing with such a challenge, maybe we should ask whether we are overlooking those roots of criminal activity – offender activity here – and how they usually manifest themselves at a crime scene. The cyber offender is clearly enjoying some advantages, so we need to adapt. As Colin Williams said in the welcome message of this magazine’s first issue, “we must re-think our approach to the pursuit of the safety and security of the human experience in the cyber domain.” It makes sense here.

 

A digital crime scene is still a crime scene, and a digital crime (or digital offence, in broad terms) is still an act that involves at least a minimum of planning, counts on at least a minimum of resources, and is committed by an individual or a group of individuals with specific motivations. We should agree that most methods and tools are new in cybercrime, but when we are talking about revenge, activism, challenge, profit… hmm… these motivations don’t seem to be so new… they are inherent to the human being. Risk appetite, attack inhibitors? They are too.

 

Since technology is just a means to commit a crime, we should revisit some useful approaches for dealing with traditional crimes and analyse whether they could also help us deal with cybercrime. Given that all types of crime or offence share some features – such as human motivations, human traits expressed through behavioural evidence at a crime scene, and signature aspects (just to name a few) – we should certainly mention the scientific discipline of criminal profiling. The study of criminal behaviour and its manifestation at a crime scene has been explored for more than a century by this discipline, which infers a set of traits of the perpetrator or group of perpetrators of a crime by examining the criminal evidence available.

 

This set of traits – a “profile” – can be elaborated to contain features like skills, resources available, knowledge, motivations, whereabouts and so on, depending on the evidence available and on which conclusions we can reach about them. This profile then becomes a valuable additional tool to assist investigations – with a success rate of at least 77% according to research carried out in the 1990s (Theodore H. Blau). With these encouraging numbers, and knowing that cybercrimes share some roots with traditional crimes, the idea is to apply the same concepts to digital investigations. According to the literature, the main objectives that can be achieved by applying profiling to investigations are:

 

  • Narrowing down the number of suspects
  • Linking cases that seem to be distinct
  • Helping define strategies of interrogation
  • Optimizing investigative resources (e.g., “let’s focus on where we have more chances to find evidence”)
  • Helping develop investigative leads for unsolved cases

Actually, the advantages are not restricted to digital investigations. When we have a profile of a cyber offender in hand, we are able to develop better countermeasures against their attacks. This is especially important when we are dealing with advanced offenders, such as APTs (advanced persistent threats).

 

The good news is that, however broad the options are for cyber offenders to hide themselves behind computer attacks, profiling can be an equally broad tool. Recalling the Locard Exchange Principle, the offender always leaves traces at the crime scene, and some of them can be of a behavioural nature. Depending on the level of interaction an attacker has in a digital offence (e.g. a manual attack vs an automated attack, or a single web defacement vs an attack that involves a large team of skilled offenders and many interactions with the target), we could have different levels of traces left in log files, network traffic, social networks, chat networks, the file systems of compromised machines, e-mail messages, defaced websites, instant messaging… The mindmap below is therefore just a non-exhaustive set of features that we can explore and work on:

[Mindmap: a non-exhaustive set of cyber offender profiling features]

Going deeper, the following is a small set of examples of what we can search for during an investigation to help populate our mindmap:

 

  • Analysing the time between probes in a port scan (a minimal sketch follows this list)
  • Identifying motivation [revenge, curiosity, challenge, profit, wanting to be part of a group, usage of computer resources, a platform from which to launch other attacks, disputes between individuals or hacking groups, cyber terror, hacktivism, cyber warfare…]
  • Analysing victimology
  • Performing authorship analysis on spear-phishing e-mail content, social network posts or software source code (looking for patterns, errors, preferred programming functions, sophistication…)
  • Identifying the type of tools employed during an attack and evaluating their availability (public? commercial? restricted?) and the knowledge required to operate them (Tom Parker has very good research on this topic)
  • Analysing offender activities on social networks, ranging from their first followers/following, closest contacts, word frequency and the periods of the day in which activities are more intense, to evidence of planning actions, etc.
  • Analysing global or regional political/social/religious/economic events that could have influenced the commission of the offence
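As a minimal sketch of the first item above (the timestamps, thresholds and heuristic are hypothetical, not taken from any real case), inter-probe timing alone can already hint at whether a scan was scripted or driven by hand: automated tools tend to produce fast, highly regular gaps, while manual probing is slower and more erratic.

```python
# Minimal, hypothetical sketch: profiling a port scan by the timing between probes.
# Fast, regular gaps suggest an automated tool; slow, irregular gaps suggest manual probing.
from datetime import datetime
from statistics import mean, pstdev

# Hypothetical probe timestamps pulled from a firewall or IDS log.
probe_times = [
    "2014-06-01 02:14:03.120",
    "2014-06-01 02:14:03.370",
    "2014-06-01 02:14:03.615",
    "2014-06-01 02:14:03.861",
]

def probe_profile(timestamps):
    times = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S.%f") for t in timestamps]
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    avg, spread = mean(gaps), pstdev(gaps)
    # Crude illustrative heuristic; the thresholds are arbitrary assumptions.
    verdict = "likely automated tool" if avg < 1.0 and spread < 0.1 else "possibly manual probing"
    return avg, spread, verdict

print(probe_profile(probe_times))  # roughly 0.25s average gap, tiny spread, "likely automated tool"
```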

 

The topic is vast and encouraging, and we can go much further. But the final message here is this: we know that there is a multitude of means and technologies being (and yet to be) used by offenders in the perpetration of their actions. But we need to know that there is a multitude of means to catch them as well.

 

Author: Lucas Donato 

Lucas Donato, CISSP, CRISC, is an information security consultant who currently works at a Brazilian bank. In the last ten years he has been involved with penetration testing, vulnerability assessments, incident response and digital investigations for some of the biggest Brazilian companies. Nowadays, he is pursuing his PhD degree at the Cyber Security Centre of De Montfort University, exploring the ins and outs of criminal profiling applied to digital investigations.
