by James Whitehead
Earlier this year, Dmitry Rogozin, Russia’s Deputy Prime Minister, felt obliged to point out that the latest robotic creation from Android Technics and the state Advanced Research Fund was ‘not a terminator.’ The anthropomorphic FEDOR (Final Experimental Demonstration Object Research) was shown in a video wielding a pair of handguns and firing at targets on a range – what seemed to many a real-life glimpse of Skynet.
Whilst FEDOR’s other competencies are more benign (they include light DIY, driving and, eventually, space travel), the ease with which the latest advancements in Artificial Intelligence can apparently be harnessed to lethal ends is a growing concern.
The technology world has long benefitted from modern society’s widespread acceptance of a form of technological determinism – the sense that we are all swept along by new technologies whether we like it or not. This has been largely unproblematic for the majority of the populace, as faster, smaller and more intelligent devices have made our lives significantly easier and more comfortable. Particularly in the realm of Artificial Intelligence, we have seen how professionals such as doctors, scientists and emergency responders can benefit from the assistance of AI tools and robotics which learn from their mistakes. However, what happens when AI learns to say ‘no’?
This is a question that has recently moved beyond the pages of science fiction and onto the minds of the world’s leading technologists and industrialists. In 2015, prominent figures such as Stephen Hawking and Elon Musk joined leading AI researchers in signing an open letter calling on the research community to find ways of avoiding certain ‘pitfalls’ related to undesirable developments in ‘strong’ AI. The letter emphasised the need to safeguard against the existential risk that a super-intelligent AI could pose, and the importance of ensuring that AI ‘does what we want it to do.’
The letter seems alarmist when viewed in the context of our current technological landscape – presently, much AI is confined to narrow applications that automate individual processes. However, considering our willingness to usher in the latest technologies without question (many of which originated from military development), our ability to turn back the tide once AI becomes ‘strong’ is unproven to say the least.
FEDOR is an excellent example of how AI could be hijacked to threaten society rather than to help it. The video released by the Russian research agency showed FEDOR with a variety of objects in its hands – at one point it uses a power drill, cleans with a feather duster and even controls the steering in a car. It is only when the operator chooses to put the handgun into FEDOR’s hand that it becomes a fearsome killing machine. The question is, when will FEDOR be able to make its own choice?