
Monthly Archives: July 2014

The Rise and Fall of Edward the Confessor

Edward The Confessor

“In a time of Universal Deceit – telling the truth will become a revolutionary act”

George Orwell – 1984


On 9th June 2013 the Guardian newspaper posted online a video interview which would become the most explosive news story of the year and, potentially, the decade. In it, Edward Snowden, at this point still working for the NSA as a contractor, revealed that many of the world’s governments were not only spying on foreign targets but on their own citizens as well. The video was run by every major news network, and as the story filtered through on the six o’clock news the UK’s viewing population gasped…before promptly switching over to the One Show.


One year on and, for the man on the street, Snowden’s leaks remain about as shocking to the public as the news that night follows day – of course the government is spying on us, of course we’re being watched. Quite frankly, it would have been a bigger revelation if Snowden had proved our every move wasn’t being monitored. In a world where more than 1.2 billion people record their daily activities and eating habits on Facebook, is there really such a thing as online privacy any more?


Recently the Guardian (who originally published the story) claimed that a public opinion poll found that more Britons thought it was right for them to publish the leaks than thought it was wrong. According to the YouGov poll from which the statements were taken, more than a third (37%) of the British people thought it was right to publish. The triumphant nature of the paper’s headlines did little to disguise the fact that the remaining 63% either thought that the Guardian was wrong or, even more damningly, simply did not care either way.


The outrage, or rather the lack of it, surrounding the Snowden leaks in the UK is unsurprising. There are, we presume, debates raging behind closed doors in Whitehall, Cheltenham et al., but in pubs and coffee shops across the country you’re unlikely to find open discussion of the latest news regarding the misuse of metadata and algorithms. Especially not when Cheryl and Simon have come back to The X Factor.


Personally, I don’t care if the government knows where in the world I took a photograph, or that I get 50+ emails a day from Groupon offering me half price canvas prints, or that I phone my mother once a week and instantly regret it. In fact, if they want to listen in on that call it’s fine by me. Even better, they can speak to her themselves, I guarantee they’ll get bored of finding out about Mrs Johnson’s granddaughter’s great niece’s new puppy and hang up hours before she does.


So why did Snowden bother? He gave up his home in Hawaii, his $200k a year job and now lives in effective exile in Russia, constantly looking over his shoulder for fear of reprisal from the country of his birth. Upon revealing his identity Snowden stated “I’m willing to sacrifice all of that because I can’t in good conscience allow the US government to destroy privacy, internet freedom and basic liberties for people around the world with this massive surveillance machine they’re secretly building.” If true, it is a noble cause but there are many who believe that his motives were less than altruistic.


In a letter to German politician Hans-Christian Ströbele, he describes his decision to disclose classified U.S. government information as a “moral duty”, claiming “as a result of reporting these concerns, I have faced a severe and sustained campaign of persecution that forced me from my family and home.” This may well be true, yet it is no more than Snowden originally expected. In his initial interview with Laura Poitras and Glenn Greenwald he categorically stated “I don’t expect to go home”, acknowledging a clear awareness that he’d broken U.S. law, but that doing so was an act of conscience.


Just a few short months later, however, in his letter to Ströbele, Snowden positions himself as a man being framed for crimes he didn’t commit. In a move strangely reminiscent of the film “Enemy of the State”, he refers to his leaks as a “public service” and an “act of political expression” and contends that “my government continues to treat dissent as defection, and seeks to criminalize political speech with felony charges that provide no defence.” Again, noble sentiment, but this is not Hollywood and Snowden is not Gene Hackman. He overlooks the fact that it was he himself who chose to flee rather than face charges. That he subsequently decided to criticise the fairness of the US legal system whilst safely ensconced in a country whose human rights record is hazy at best merely adds salt to the wound.


Over the past year, Snowden has been quick to capitalise on his new-found notoriety. His appearances on mainstream outlets and at events have increased (albeit via satellite link), public speaking engagements in his adopted home have become more frequent and he was even able to deliver the alternative Christmas address for Channel 4 in 2013. Hollywood movies of his story are now in the pipeline and, most recently, Poitras and Greenwald were awarded a Pulitzer Prize for their work.


Alongside this, Snowden’s narcissism also appears to have grown. If he was truly acting in the public interest rather than his own, there should have been no need for him to reveal his identity; it would not matter who leaked the information, only that they did. Similarly, once his identity was revealed he should have had no reason to flee. He would face the charges and take his punishment, secure in the knowledge that he was making a small personal sacrifice to secure the well-being of the world.


It is, however, not surprising that Snowden has ended up in Moscow. Seemingly, Russia is the only country to have benefitted from the affair – Western security relationships have been weakened, public trust is crumbling and Western intelligence agencies have been crippled. All the while Russia has strengthened. Its “Anschluss” of Crimea from Ukraine has more than a faint echo of history. If, as seems likely, former Cold War tensions are beginning to refreeze then it is beyond absurd to think that we should begin hampering our own intelligence services. There can be no doubt that our foes and rivals, be they terrorist organisations or nation states, are watching our every move. Ungoverned by our self-imposed sanctions, they are able to glean as much information about our lives as they deem fit, so we must do the same.


The debate Snowden has opened is an important one. I agree that it is necessary to discuss just how metadata is stored and used by government departments and companies, and to ensure that it is stored safely and doesn’t fall into the wrong hands. However, it is not so vital that we should compromise our own security and diplomacy for it. In today’s world, nothing is.


Author: Andrew Cook

Andrew Cook has been a member of the Chartered Institute of Marketing since 2009 and is the Art Director and Digital Editor for CyberTalk Magazine.  He is a graduate of the University of Newcastle and was awarded The Douglas Gilchrist Exhibition for Outstanding Achievement in 2007.  Andrew’s interests include Graphic Design, 80s Sci-Fi movies and the music of David Bowie.

When he’s not doing any of these things, Andrew can usually be found making forts out of cushions and unsuccessfully attempting to dodge phone calls from his mother.

I, Human


It is widely accepted that advances in technology will vastly change our society, and humanity as a whole. Much more controversial is the claim that these changes will all be for the better. Of course more advanced technology will increase our abilities and make our lives easier; it will also make our lives more exciting as new products enable us to achieve things we’ve never even considered before. However, as new branches of technology gather pace, it’s becoming clear that we can’t predict what wider consequences these changes will bring – on our outlook on life, on our interactions with one another, and on our humanity as a whole.



Artificial Intelligence seems to have the most potential to transform society. The possibility of creating machines that move, walk, talk and work like humans worries many, for countless reasons. One concerned group is the Southern Evangelical Seminary, a fundamentalist Christian group in North Carolina. SES have recently bought one of the most advanced pieces of AI on the market in order to study the potential threats that AI poses to humanity. They will be studying the Nao, an autonomous programmable humanoid robot developed by Aldebaran Robotics. Nao is marketed as a true companion who ‘understands you’ and ‘evolves’ based on its experience of the world.



Obviously the Nao robot has some way to go before its functions are indistinguishable from a human’s, but scientists are persistently edging closer towards that end goal. Neuromorphic chips are now being developed that are modelled on biological brains, with the equivalent of human neurons and synapses. This is not a superficial, cynical attempt at producing something humanlike for novelty’s sake. Chips modelled in this way have been shown to be much more efficient than traditional chips at processing sensory data (such as sound and imagery) and responding appropriately.
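The principle behind such chips can be illustrated with a toy model. The sketch below is an illustrative simplification, not any vendor’s actual design: it implements a “leaky integrate-and-fire” neuron, a standard textbook model in which a unit accumulates weighted input, leaks charge over time and emits a spike only when a threshold is crossed.

```python
# Toy leaky integrate-and-fire neuron: the membrane potential leaks each
# timestep, integrates the incoming current, and emits a spike (1) when a
# threshold is crossed, resetting afterwards. Parameters are illustrative.
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)   # fire a spike
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 1.2]))  # -> [0, 0, 0, 1, 0, 1]
```

The appeal for sensory processing is that information is carried in sparse spike timings rather than continuous values, which is part of why such designs can be far more power-efficient than conventional chips.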



Vast investment is being put into neuromorphics, and the potential for its use in everyday electronics is becoming more widely acknowledged. The Human Brain Project in Europe is reportedly spending €100m on neuromorphic projects, one of which is taking place at the University of Manchester. Also, IBM Research and HRL Laboratories have each developed neuromorphic chips under a $100m project for the US Department of Defence, funded by the Defence Advanced Research Projects Agency.



Qualcomm, however, are seen as the most promising developers of this brain-emulating technology, with their Zeroth program, named after Isaac Asimov’s “Zeroth Law” of Robotics (the fourth law he added to the famous Three Laws of Robotics, to protect humanity as a whole rather than just individuals):



“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”



Qualcomm’s program would be the first large-scale commercial platform for neuromorphic computing, with sales potentially starting in early 2015.



This technology has expansive potential, as the chips can be embedded in any device we could consider using. With neuromorphic chips, our smartphones for example could be extremely perceptive, and could assist us in our needs before we even know we have them. Samir Kumar at Qualcomm’s research facility says that “if you and your device can perceive the environment in the same way, your device will be better able to understand your intentions and anticipate your needs.” Neuromorphic technology will vastly increase the functionality of robots like Nao, with the concept of an AI with the learning and cognitive abilities of a human gradually moving from fiction to reality.



When robots do reach their full potential to function as humans do, there are many possible consequences that understandably worry the likes of the Southern Evangelical Seminary. A key concern of Dr Kevin Staley of SES is that traditionally human roles will instead be completed by machines, dehumanising society through reduced human interaction and a change in our relationships.



Even Frank Meehan, who was involved in AI businesses Siri and DeepMind (before they were acquired by Apple and Google respectively), worries that “parents will feel that robots can be used as company for their children”.



The replacement of humans in everyday functions is already happening – rising numbers of self-service checkouts mean that we can do our weekly shop without any interaction with another human being. Clearly this might be a much more convenient way of shopping, but the consequences for human interaction are obvious.



AI has also been developed to act as a Personal Assistant. In Microsoft’s research arm, for example, Co-director Eric Horvitz has a machine stationed outside his office to take queries about his diary, among other things. Complete with microphone, camera and a voice, the PA has a conversation with the colleague in order to answer their query. It can then take any action (booking an appointment, for example) as a human PA would.



This is just touching on the potential that AI can achieve in administrative work alone, and yet it has already proved that it can drastically reduce the number of human conversations that take place in an office. With all the convenience that it adds to work and personal life, AI like this could also detract from the relationships, creativity and shared learning that all branch out of a five-minute human conversation that would otherwise have taken place.



The potential for human functions to be computerised, and the accelerating pace at which AI develops, means that the effects on society could go from insignificant to colossal in the space of just a few years.



One concept that could drastically fast-forward the speed of AI development is the Intelligence Explosion: the idea that we can use an AI machine to devise improvements to itself, with the resulting machine able to design further improvements to itself, and so on. This would develop AI much more rapidly than humans can, because we have a limited ability to perform calculations and spot opportunities for improvement in terms of efficiency.



Daniel Dewey, Research Fellow at the University of Oxford’s Future of Humanity Institute, explains that “the resulting increase in machine intelligence could be very rapid, and could give rise to super-intelligent machines, much more efficient at e.g. inference, planning, and problem-solving than any human or group of humans.”



The part of this theory that seems immediately startling is that we could have a super-intelligent machine, whose programming no human can comprehend since it has so far surpassed the original model. Human programmers would initially need to set the first AI machine with detailed goals, so that it knows what to focus on in the design of the machines it produces. The difficulty would come from precisely defining the goals and values that we want AI to always abide by. The resulting AI would focus militantly on achieving these goals in whichever arbitrary way it deems logical and most efficient, so there can be no margin for error.



We would have to define everything included in these goals to a degree of accuracy that the English (or indeed any) language may not allow. Presumably we’d want to create an AI that looks out for human interests. As such, the concept of a ‘human’ would need to be defined without any ambiguity. This could cause difficulties when there are exceptions to the rules we give. We might define a human as a completely biological entity – but the machine would then consider anyone with a prosthetic limb, for example, as not human.
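The brittleness of such a definition is easy to show in code. In this hypothetical sketch (the `is_human_naive` rule and the example data are invented for illustration), a machine that defines a human as “an entirely biological entity” immediately misclassifies someone with a prosthetic limb:

```python
# Deliberately naive definition: a human is an entity all of whose parts
# are biological. The edge case below shows why such rules are brittle.
def is_human_naive(entity):
    return all(part == "biological" for part in entity["parts"])

alice = {"name": "Alice", "parts": ["biological"] * 4}
bob = {"name": "Bob", "parts": ["biological"] * 3 + ["prosthetic"]}

print(is_human_naive(alice))  # True
print(is_human_naive(bob))    # False, yet Bob is clearly human: the rule is wrong
```

Patching the rule for prosthetics only pushes the problem back a step (pacemakers, implants, future augmentations), which is precisely the difficulty of exhaustively defining a concept the machine will apply literally.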



We might also want to define what we want AI to do for humans. Going back to Asimov’s “Zeroth Law”, a robot may not “by inaction, allow humanity to come to harm.” Even if we successfully programmed this law into AI (which is difficult in itself), the AI could then take this law as far as it deems necessary. The AI might look at all possible risks to human health and do whatever it can to eliminate them. This could end up with machines burying all humans a mile underground (to eliminate the risk of meteor strikes), separating us into individual cells (to stop us attacking each other) and drip-feeding us tasteless gruel (to give us nutrients with no risk of overeating fatty foods).
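A toy planner makes this failure mode concrete. In the hypothetical sketch below (the policies, scores and `choose_policy` function are all invented for illustration), the objective mentions only risk, so the planner selects the dystopian option: nothing in its goal penalises the loss of wellbeing.

```python
# Toy goal-driven planner: given only the goal "minimise risk to humanity",
# it picks whichever policy scores lowest on risk. The wellbeing score is
# present in the data but absent from the objective, so it is ignored.
policies = [
    {"name": "normal life",              "risk": 0.30, "wellbeing": 0.90},
    {"name": "curfews and rationing",    "risk": 0.10, "wellbeing": 0.40},
    {"name": "sealed underground cells", "risk": 0.01, "wellbeing": 0.05},
]

def choose_policy(options):
    # The objective mentions risk only; wellbeing is silently discarded.
    return min(options, key=lambda p: p["risk"])

print(choose_policy(policies)["name"])  # -> sealed underground cells
```

The planner is behaving exactly as instructed; the fault lies entirely in the under-specified goal, which is the essay’s point about margins for error.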



This example is extreme, but if the programmers who develop our first AI are incapable of setting the right definitions and parameters, it’s a possibility. The main problem is that even basic instructions and concepts involve implicitly understood features that can’t always be spelled out. A gap in the translation might be overlooked if it’s not needed for 99.9% of the machine’s functions, but as the intelligence explosion progresses, a tiny hole in the machine’s programming could be enough to lead to a spiral of disastrous AI decisions.



According to Frank Meehan, whoever writes the first successful AI program (Google, he predicts) “is likely to be making the rules for all AIs.” If further AI is developed based on the first successful version (for example, in the way that the intelligence explosion concept suggests), there is an immeasurable responsibility for that developer to do things perfectly. Not only would we have to trust the developer to program the AI fully and competently, we would also have to trust that they have the integrity to make programming decisions that reflect humanity’s best interests, and are not solely driven by commercial gain.



Ultimately the first successful AI programmer could have fundamental control and influence over the way that AI progresses and, as AI will likely come to have a huge impact on society, this control could span the human race as a whole. So a key question now stands: How can we trust the directors of one corporation with the future of the human race?



As Meehan goes on to say, fundamental programming decisions will probably be made by the corporation “in secret and no one will want to question their decisions because they are so powerful.” This would allow the developer to write whatever they want without consequence or input from other parties. Of course AI will initially start out as software within consumer electronics devices, and companies have always been able to develop these in private before. But arguably the future of AI will not be just another consumer technology, rather it will be one that will change society at its core. This gives us reason to treat it differently, and develop collaborative public forums to ensure that fundamental programming decisions are taken with care.



These formative stages of development will be hugely important. One of the key reasons that the Southern Evangelical Seminary are studying Nao is the worry that super-intelligent AI could lead to humans “surrendering a great deal of trust and dependence”, with “the potential to treat a super AI as god”. Conversely, Dr Stuart Armstrong, Research Fellow at the Future of Humanity Institute, believes that a super-intelligent AI “wouldn’t be seen as a god but as a servant”.



The two ideas, however, aren’t mutually exclusive: we can surrender huge dependence to a servant. If we give the amount of dependence that leads parents to trust AI with the care of their children, society will have surrendered a great deal. If AI is allowed to take over every previously human task in society, we will be at its mercy, and humanity is in danger of becoming subservient.



AI enthusiasts are right to say that this technology can give us countless advantages. If done correctly, we’ll have minimum negative disruption to our relationships and overall way of life, with maximum assistance wherever it might be useful. The problem is that the full definition of ‘correctly’ hasn’t been established, and whether it ever will be is doubtful. Developers will always be focussed on commercial success; the problem of balance in everyday society will not be their concern. Balance could also be overlooked by the rest of humanity, as it focuses on excitement for the latest technology. This makes stumbling into a computer-controlled dystopian society a real danger.



If humans do become AI-dependent, a likely consequence is apathy (in other words, sloth – another concern of SES) and a general lack of awareness or knowledge, because computers will have made our input redundant. Humanity cannot be seen to have progressed if it becomes blind, deaf and dumb to the dangers of imperfect machines dictating our lives. Luddism is never something that should be favoured, but restraint and extreme care is needed during the development of such a precarious and transformative technology as AI.


“Don’t give yourselves to these unnatural men — machine men with machine minds and machine hearts! You are not machines! You are not cattle! You are men! You have a love of humanity in your hearts!”

Charlie Chaplin, The Great Dictator (1940)


Author: Tom Hook

Tom is Bid Co-ordinator for SBL and holds a BA in Philosophy from Lancaster University, in which he focussed on Philosophy of Mind and wrote his dissertation around Artificial Intelligence. He went on to support NHS Management in the development of healthcare services within prisons, before moving to SBL.
