Cyborg Soldiers, Artificial Intelligence, and Robotic Mass Surveillance May be Here Sooner Than You Think

Carolanne Wright – Straight out of the science fiction film The Terminator, a 72-page Pentagon document lays out the department's plan for the future of combat and war, one that will utilize artificial intelligence (AI), robotics, information technology and biotechnology.

Proponents of advanced technology such as robot soldiers and artificial intelligence argue that both can be made ethically superior to humans, drastically reducing, if not eliminating, atrocities such as rape, pillaging and the destruction of towns in fits of rage. Many in the science community are casting a wary eye toward this technology, however, warning that it could easily surpass human control, leading to unpredictable and even catastrophic consequences.

Defense Innovation Initiative — The Future of War

The Department of Defense (DoD) has announced that the United States will enter a brave new world of automated combat in a little over a decade, in which wars will be fought entirely with advanced weaponized robotic systems. We’ve already had a glimpse of what’s to come with the use of drones. But, according to the DoD, we haven’t seen anything yet.

In a quest to establish “military-technological superiority”, the Pentagon ultimately has its sights set on monopolizing “transformational advances” in robotics, artificial intelligence and information technology. This effort, known as the Defense Innovation Initiative, is a plan to identify and develop pioneering technological breakthroughs for military use.

Disturbingly, a new study from the National Defense University (a higher-education institution funded by the Pentagon) has urged the DoD to take drastic action to avoid the downfall of US military might, even as the report warns that accelerating technological advances will “flatten the world economically, socially, politically, and militarily” and “could also increase wealth inequality and social stress.”

The NDU report explores several areas where technological advances could benefit the military. One is the mass collection of data from social media platforms, analyzed by artificial intelligence instead of humans. Another is “embedded systems [in] automobiles, factories, infrastructure, appliances and homes, pets, and potentially, inside human beings, [where] the line between conventional robotics and intelligent everyday devices will become increasingly blurred.” These systems will help the government monitor both individuals and whole populations, and “will provide detection and predictive analytics.”

Armies of “Kill Bots that can autonomously wage war” are also a real possibility, as unmanned robotic systems become increasingly intelligent and less expensive to manufacture. These robots could be deployed in civilian life as well, to carry out “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”

To counteract public outcry over autonomous robots having the capacity to kill on their own, the authors recommend the Pentagon be “highly proactive” in ensuring “it is not perceived as creating weapons systems without a ‘human in the loop.’”

Strong AI, which simulates human cognition — including self-awareness, sentience and consciousness — is just on the horizon, some say as early as the 2020s.

But not everyone is over the moon about these advances, especially where AI is concerned. Leaders in the field of technology, journalists and inventors are all sounding the alarm about the devastating consequences of AI technology that’s allowed to flourish unchecked.

AI Technology — What Could Possibly Go Wrong?

As the DoD charges ahead with its plan to dominate the military and surveillance sphere with unbridled advances in technology, many are questioning the serious ramifications of such a path.

Journalist R. Michael Warren writes:

“I’m with Bill Gates, Stephen Hawking and Elon Musk. Artificial intelligence (A.I.) promises great benefits. But it also has a dark side. And those rushing to create robots smarter than humans seem oblivious to the consequences.

Ray Kurzweil, director of engineering at Google, predicts that by 2029 computers will be able to outsmart even the most intelligent humans. They will understand multiple languages and learn from experience.

Once they can do that, we face two serious issues.

First, how do we teach these creatures to tell right from wrong — in our own self defense?

Second, robots will self-improve faster than we slow evolving humans. That means outstripping us intellectually with unpredictable outcomes.” [source]


At a 1999 conference of AI experts, attendees were polled on when they thought a computer would pass the Turing test (in which a machine’s conversational responses become indistinguishable from a human’s). The general consensus was about 100 years; many believed it could never be achieved. Today, Kurzweil thinks we are already on the brink of intellectually superior computers.

British theoretical physicist and Cambridge University professor Stephen Hawking doesn’t mince words about the dangers of artificial intelligence:

“I think the development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC. “Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate.” He adds, “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

At the MIT Aeronautics and Astronautics department’s Centennial Symposium in October 2014, Tesla founder Elon Musk issued a stark warning about the unregulated development of AI:

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

Furthermore, in a 2014 tweet, Musk warned: “We need to be super careful with AI. Potentially more dangerous than nukes.” In the same year, he said on CNBC that he believes a Terminator-like scenario could actually come to pass.

Likewise, British inventor Clive Sinclair believes artificial intelligence will be the downfall of mankind:

“Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he told the BBC. “It’s just an inevitability.”

Microsoft billionaire Bill Gates agrees.

“I am in the camp that is concerned about super intelligence,” he says. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

That said, Gates’ Microsoft Research has dedicated “over a quarter of all attention and resources” to artificial intelligence development, while Musk has invested in AI companies in order to “keep an eye on where the technology is headed”.

Article sources

  • https://www.defense.gov/News/Article/Article/603658/
  • https://motherboard.vice.com/en_us/article/8qxvvg/how-the-pentagons-skynet-would-automate-war
  • https://www.washingtonpost.com/…/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned
  • https://www.cnet.com/news/elon-musk-artificial-intelligence-could-be-more-dangerous-than-nukes
  • https://www.thestar.com/opinion/commentary/2017/07/14/beware-the-dark-side-of-artificial-intelligence.html
  • https://www.washingtonpost.com/news/speaking-of-science/wp/2014/12/02/stephen-hawking-just-got-an-artificial-intelligence-upgrade-but-still-thinks-it-could-bring-an-end-to-mankind/?utm_term=.32281226139a

SF Source Wake Up World Jan 2018

2 thoughts on “Cyborg Soldiers, Artificial Intelligence, and Robotic Mass Surveillance May be Here Sooner Than You Think”

  1. There is a brilliant show on Netflix called Black Mirror. Every episode is a standalone that explores potential future outcomes of technology, usually scary ones, and at the same time they make you think, “that could actually happen”.

    There is an episode called “Metalhead” that shows the terrifying potential of the robot dogs that have been in development for years by the military. Another, called “Men Against Fire”, shows a very realistic and equally terrifying possibility of soldiers augmented with neural implants.

    All episodes are worth watching, but those two specifically relate to your article.

    Have a great weekend!
    Jesse
