Flame PR
  • Services
    • Public Relations
    • Digital Marketing
    • Broadcast PR
    • Crisis Management
  • Sectors
    • B2B Technology
    • Cybersecurity
    • Healthcare
    • Fintech
    • Education & Recruitment
    • eCommerce & Retail
  • About
    • About Us
    • Testimonials
    • Portfolio
  • Blog
  • Contact
  • Privacy Policy

Flame PR Blog

Cyborg killing machines or clever toasters – What’s the future for AI?

30/10/2017


by James Whitehead


Earlier this year, Dmitry Rogozin, Russia’s Deputy Prime Minister, felt obliged to point out that the latest robotic creation from Android Technics and the state Advanced Research Fund was ‘not a terminator.’ The anthropomorphic FEDOR (Final Experimental Demonstration Object Research) was shown in a video wielding a pair of handguns and firing at targets on a range, striking many as a real-life iteration of Skynet.

Whilst FEDOR’s other competencies are more benign (they include light DIY, driving and eventually, space travel), the ease with which the latest advancements in Artificial Intelligence can apparently be harnessed to lethal ends is a growing concern.

The technology world has long benefitted from modern society’s widespread acceptance of a form of technological determinism – the sense that we are all swept along by new technologies whether we like it or not. This has been largely unproblematic for most people, as faster, smaller and more intelligent devices have made our lives significantly easier and more comfortable. In the realm of Artificial Intelligence in particular, we have seen how professionals such as doctors, scientists and emergency responders can benefit from the assistance of AI tools and robotics that learn from their mistakes. However, what happens when AI learns to say ‘no’?

This is a question that has recently moved beyond the pages of science fiction and onto the minds of the world’s leading technologists and industrialists. In 2015, prominent figures in science and technology, including Stephen Hawking and Elon Musk, joined leading AI researchers in signing an open letter calling on the research community to find ways of avoiding the ‘pitfalls’ of undesirable developments in ‘strong’ AI. The letter emphasised the need to safeguard against the existential risk that a super-intelligent AI could pose, and to ensure AI ‘does what we want it to do.’

The letter seems alarmist when viewed in the context of our current technological landscape – presently, much AI is confined to narrow applications that automate individual processes. However, considering our willingness to usher in the latest technologies without question (many of which originated from military development), our ability to turn back the tide once AI becomes ‘strong’ is unproven to say the least.

FEDOR is an excellent example of how AI could be hijacked to threaten society rather than to help it. The video released by the Russian research agency showed FEDOR handling a variety of objects – at one point it uses a power drill, cleans with a feather duster and even controls the steering of a car. It is only when the operator chooses to put the handgun into FEDOR’s hand that it becomes a fearsome killing machine. The question is, when will FEDOR be able to make its own choice?


37 Pear Tree Street, London, EC1V 3AG
311 West 43rd St, New York, NY 10036


Office: +44 (0) 20 3357 9740

Mobile: +44 7711 885404
