PDA

View Full Version : AI goes wild: Slaughterbots



AdamG
11-15-2017, 11:19 AM
Anyone else bothered by the potential for other-than-intended operators seizing the reins of pilotless death machines?



Perhaps the most nightmarish, dystopian film of 2017 didn't come from Hollywood. Autonomous weapons critics, led by a college professor, put together a horror show.
It's a seven-minute video, a collaboration between University of California-Berkeley professor Stuart Russell and the Future of Life Institute that shows a future in which palm-sized, autonomous drones use facial recognition technology and on-board explosives to commit untraceable massacres.

The film is the researchers' latest attempt to build support for a global ban on autonomous weapon systems, which kill without meaningful human control.
http://money.cnn.com/2017/11/14/technology/autonomous-weapons-ban-ai/index.html

Note: other threads mentioning/having elements approved by SKYNET
http://council.smallwarsjournal.com/search.php?searchid=6616517

davidbfpo
11-16-2017, 03:40 PM
Thanks to a "lurker" for a pointer to a scary YouTube clip (8 mins), which starts with a presentation and then moves to a fictional scenario; it has been viewed 790K times since its release on the 12th. Described by a SME here as:
both plausible and terrifying
The origin is shown as either Stop Autonomous Weapons (https://www.youtube.com/channel/UCNaTkhskiEVg5vK3fxlluCQ) or https://futureoflife.org/, hence the separate counts of 491k and 288k views.

Link:https://www.youtube.com/watch?v=HipTO_7mUOw

From another "lurker":
Rather a long way to go in terms of miniature power generation, let alone AI.

Bill Moore
11-17-2017, 02:59 PM
The Singularity will radically transform not only the character of war but its very nature. Clausewitz's trinity will become less relevant, leaving many leading strategists without a fundamental basis for theory and resulting in a series of reactive responses to threats that will fundamentally transform our society and the concept of freedom we claim to defend. It will just happen; we won't see it until it is in our rear-view mirror, and then it will be too late to reverse the damage.

The trinity addresses passion, reason, and chance. These are the human elements of war, elements that will either be transformed or eliminated altogether by artificial intelligence. The age of super-empowered individuals and small groups gives these entities the ability to wage war or non-war autonomously, leaving those attacked only able to defend. Forget addressing root causes; Mao is no longer relevant. Forget about centers of gravity and decisive points. It is time to think anew, yet the risk of legacy war will persist, resulting in trillions of dollars being spent on legacy military capabilities that are not only worthless for the new forms of war but can be defeated by technology that is exponentially less expensive and available to a wider range of actors. Traditionalists don't like the term asymmetric warfare, but until we achieve symmetry in our capabilities, doctrine, and strategic thinking against these threats, it will be asymmetric warfare, and we'll have to run faster than we do now to adapt at the speed of war.

flagg
11-18-2017, 05:41 AM

I’m afraid you are correct.

I’ve been writing on this specific topic for about 6 months:

https://www.cove.org.au/trenchline/article-the-v-twin-effect/
https://www.cove.org.au/author/chriselles/

I’m a big fan of Steve Blank (who has written blog post articles here):
https://steveblank.com/category/hacking-for-defense/

He has worked closely with Pete Newell (Rapid Equipping Force) and Joe Felter (now Deputy Assistant Secretary of Defense for South and Southeast Asia) to develop the Hacking4Defense program (I had the chance to go through their H4D Educators Course).

I’m also a fan of Stan McChrystal’s and Chris Fussell’s books Team of Teams and One Mission.

I think the (or “a”) answer may be found in a mashup of the two.

An innovation platform and pipeline along the lines of H4D built on top of a hybrid organisational network that values and balances not just hierarchical power but referent/reputational influence.

I believe we need to develop a high level of deployment-focused innovation capacity and capability organic to the defence force.

I’m just a Reserve NCO, but I’m trying to take a stab at something I call Innovation Art (publishing next week) to describe how and where innovation integrates with Operational Art and informs Strategy.

I even reference Clausewitz. Not his trinity, but friction.

Continuous decisive advantage can be found in continuous cumulative innovation.

AdamG
11-18-2017, 01:31 PM
Yo.
From the 15th -
http://council.smallwarsjournal.com/showthread.php?t=26238&highlight=slaughterbots

Moderator adds: Ah, missed that thread and so now merged here - your post becomes No.1 (with 315 views). Thanks.

Bill Moore
11-18-2017, 07:47 PM

Flagg,

Thanks for sharing those links; both sites look interesting. To clarify my intent, I only mentioned Clausewitz to challenge the prevailing view on the enduring nature of war. His concept of friction will most likely endure indefinitely; however, if the trinity is no longer in play, then the U.S. military definition of the enduring nature of war will need to be re-examined. I worked for a senior officer who said too many officers throw Clausewitz into their articles and papers because they feel it gives them legitimacy. I agree; as long as we cling to a 19th-century description of the nature of war, our ability to adapt will be hindered.

As much as it frustrates me to point to a former Air Force officer as someone who gets it and can explain the strategic environment in 21st-century terms, John Robb at Global Guerrillas is probably the best I have seen. He comes across as a bit flippant in his writing style, which is why he isn't taken seriously in some circles, but his concepts of global guerrillas, global bazaars, open-source war, etc., provide a framework for understanding that others do not.

http://globalguerrillas.typepad.com/globalguerrillas/

I look forward to seeing how you suggest we integrate innovation into operational art. Failure to do so will result in innovation for innovation's sake, rather than solving real problems and then applying that innovation.

flagg
11-19-2017, 02:31 AM

Cheers for the link Bill, on first pass it looks like a real goldmine.

As a Reserve NCO, sometimes the divide between professional senior leadership and part-time enlisted can feel like a labyrinth filled with minotaurs in the O5-O6 rank bracket.

I hope I don't get hopes up too much about Innovation Art before it's published.

I should be able to post a link here once published this upcoming week.

You'll get an early peek via email.

Thanks again!

AdamG
11-19-2017, 05:25 AM
UN discusses ban on Slaughterbots
https://futurism.com/un-discusses-banning-killer-robots/

AdamG
11-20-2017, 02:27 AM
Related, in a Hank Scorpio sorta way.

From September -

Many people in Silicon Valley believe in the Singularity—the day in our near future when computers will surpass humans in intelligence and kick off a feedback loop of unfathomable change.
When that day comes, Anthony Levandowski will be firmly on the side of the machines. In September 2015, the multi-millionaire engineer at the heart of the trade secrets lawsuit between Uber and Waymo, Google’s self-driving car company, founded a religious organization called Way of the Future. Its purpose, according to previously unreported state filings, is nothing less than to “develop and promote the realization of a Godhead based on Artificial Intelligence.”

https://www.wired.com/story/god-is-a-bot-and-anthony-levandowski-is-his-messenger/

From November -

The documents state that WOTF’s activities will focus on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” That includes funding research to help create the divine AI itself. The religion will seek to build working relationships with AI industry leaders and create a membership through community outreach, initially targeting AI professionals and “laypersons who are interested in the worship of a Godhead based on AI.” The filings also say that the church “plans to conduct workshops and educational programs throughout the San Francisco/Bay Area beginning this year.”
https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/

davidbfpo
11-28-2017, 07:38 PM
Two academics debate slaughterbots after the recent film; its two postings now have 1.8m and 575k views.

Link:https://theconversation.com/should-we-fear-the-rise-of-drone-assassins-two-experts-debate-87699

slapout9
11-28-2017, 09:25 PM

Put the warhead on the forehead! This technology exists now, IMO; it is not "on the horizon" as the video says.

davidbfpo
11-29-2017, 10:36 AM
An academic article that is, well, deeply pessimistic about mankind's future when automation and its successors make so many people surplus to requirements. The full title is: The urbanization of drone warfare: policing surplus populations in the dronepolis.
Link:https://www.geogr-helv.net/71/19/2016/gh-71-19-2016.pdf

AdamG
01-02-2018, 02:38 PM
From the DailyWail


...a report last week sounded alarm bells over the implications of rapidly improving artificial intelligence.
The study, from the Institute for Public Policy Research (IPPR) warns of thousands of jobs being lost to robots – with those on lowest wages likely to be hardest hit.
Around 44% of jobs accounting for about £290 million in wages risk being automated in the coming decades – mostly in low-paid sectors such as call centres, offices and factories.
Mathew Lawrence, a senior researcher at the IPPR, said: “Managed badly, the benefits of automation could be narrowly concentrated, benefiting those who own capital. Inequality would spiral.”

https://www.sundaypost.com/fp/planet-of-the-apps-experts-warn-of-a-tech-take-over-as-robots-with-artificial-intelligence-seize-control/

SKYNET is pleased... (https://www.technobuffalo.com/wp-content/uploads/2012/05/Google-Cyberdyne.jpg)

AdamG
04-27-2018, 06:11 PM
Reading music https://www.youtube.com/watch?v=6Cwi0pkhoSE


Killer robots have been a staple of TV and movies for decades, from Westworld to The Terminator series. But in the real world, killer robots are officially known as "autonomous weapons."

At the Pentagon, Paul Scharre helped create the U.S. policy for such weapons. In his new book, Army of None: Autonomous Weapons and the Future of War, Scharre discusses the state of these weapons today.
https://www.npr.org/sections/alltechconsidered/2018/04/23/604438311/autonomous-weapons-would-take-warfare-to-a-new-domain-without-humans


Drone swarms. Self-driving tanks. Autonomous sentry guns. Sometimes it seems like the future of warfare arrived on our doorstep overnight, and we’ve all been caught unprepared. But as Paul Scharre writes in his new book Army of None: Autonomous Weapons and the Future of War, this has been a long time coming; we're actually seeing the slow culmination of decades of development in military technology. That doesn’t mean it’s not scary, though.
https://www.theverge.com/2018/4/24/17274372/ai-warfare-autonomous-weapons-paul-scharre-interview-army-of-none

System hack? Unpossible! It could never happen to us...

https://i.imgur.com/p9RIyjD.jpg

Bill Moore
05-05-2018, 10:51 PM
Terrorists Are Going to Use Artificial Intelligence

https://www.defenseone.com/ideas/2018/05/terrorists-are-going-use-artificial-intelligence/147944/?oref=defenseone_today_nl


Max Tegmark’s book Life 3.0 notes the concern of UC Berkeley computer scientist Stuart Russell, who worries that the biggest winners from an AI arms race would be “small rogue states and non-state actors such as terrorists” who can access these weapons through the black market. Tegmark writes that after they are “mass-produced, small AI-powered killer drones are likely to cost little more than a smartphone.” Would-be assassins could simply “upload their target’s photo and address into the killer drone: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure that nobody knows who was responsible.”

Thinking beyond trigger-pulling, artificial intelligence could boost a wide range of violent non-state actors’ criminal activities, including extortion and kidnapping, through the automation of social engineering attacks.

The article also discusses using AI for social profiling, and how criminals will employ it. The future is unknown, so we must think about a range of potential futures and how they should shape the design of future security forces.

Bill Moore
06-30-2018, 08:53 PM
This book looks promising. AI is a reality, so we need to have the hard discussions and debates on how it will shape the character, and even the nature, of war if you take the human out of the loop. It is hard for long-term professionals to envision a future where fighter pilots, submarine skippers, and special operators increasingly lose relevance, but that ridge line is rapidly approaching. Furthermore, the U.S. and its partners won't define the future of AI alone; even non-state actors will shape its future.

https://techcrunch-com.cdn.ampproject.org/c/s/techcrunch.com/2018/06/23/in-army-of-none-a-field-guide-to-the-coming-world-of-autonomous-warfare/amp/

In Army of None, a field guide to the coming world of autonomous warfare


All that said, Army of None is a one-stop guide book to the debates, the challenges, and yes, the opportunities that can come from autonomous warfare. Scharre ends on exactly the right note, reminding us that ultimately, all of these machines are owned by us, and what we choose to build is within our control. “The world we are creating is one that will have intelligent machines in it, but it is not for them. It is a world for us.” We should continue to engage, and petition, and debate, but always with a vision for the future we want to realize.

AdamG
11-08-2018, 06:19 PM
China’s brightest children are being recruited to develop AI ‘killer bots’
Beijing Institute of Technology recruits 31 ‘patriotic’ youngsters for new AI weapons development programme
Expert in international science policy describes course as ‘extremely powerful and troubling’

https://www.scmp.com/news/china/science/article/2172141/chinas-brightest-children-are-being-recruited-develop-ai-killer

AdamG
11-16-2018, 06:25 PM
Double-tapping this nightmare. Sweet dreams, y'all.

https://dcdirtylaundry.com/surgical-robot-botches-surgery-kills-man-on-operating-table-while-doctors-sipped-lattes/

(Natural News) In the name of scientific “progress,” Newcastle’s Freeman Hospital in the United Kingdom recently tried to pioneer the use of a surgical robot that it tasked with repairing a patient’s damaged heart valve, only to have the machine go completely bonkers and ultimately kill the man on the operating table.

According to reports, this first-time-use robot not only physically assaulted a living medic while attempting to conduct its programmed surgery, but also implanted stitches into the patient’s heart in a manner that physicians present during the fiasco described as not being in “an organised fashion.”

A situation that can only be described as total chaos, with human surgeons, doctors, and nurses having to scream at each other in order to overcome the “tinny” sound coming from the robot as they were trying to control it, the attempted surgery ended up being nothing short of a complete failure. And in the end, retired music teacher and conductor, Stephen Pettitt, the guinea pig patient in this medical experiment, ultimately lost his life.

davidbfpo
11-17-2018, 10:23 AM
I was puzzled at only now spotting this story in the UK media, and on looking there are a number of reports. First, this has become public because there is a coroner's inquest underway; second, the actual death was in March 2015, 3.5 years ago (the operation was in February 2015). That is a long time to wait for an inquest (though far more political and contentious matters can wait a very long time for an inquest).

The inquest's final BBC report:https://www.bbc.co.uk/news/uk-england-tyne-46143940

This linked article has a video advert for the robot:https://www.dailymail.co.uk/news/article-6363243/Pioneering-robot-KNOCKED-medics-hand-middle-heart-operation.html



AdamG
12-07-2018, 05:32 PM
Twenty-four Amazon workers in New Jersey have been hospitalized after a robot accidentally tore a can of bear repellent spray in a warehouse, officials said. The two dozen workers were treated at five local hospitals, Robbinsville Township communications and public information officer John Nalbone told ABC News. One was in critical condition while 30 additional workers were treated at the scene.
https://abcnews.go.com/US/24-amazon-workers-hospital-bear-repellent-accident/story?id=59625712

https://media.makeameme.org/created/accident.jpg

AdamG
03-06-2019, 03:19 AM
Last month, the U.S. Army put out a call to private companies for ideas about how to improve its planned semi-autonomous, AI-driven targeting system for tanks. In its request, the Army asked for help enabling the Advanced Targeting and Lethality Automated System (ATLAS) to “acquire, identify, and engage targets at least 3X faster than the current manual process.” But that language apparently scared some people who are worried about the rise of AI-powered killing machines. And with good reason.

In response, the U.S. Army added a disclaimer to the call for white papers in a move first spotted by news website Defense One. Without modifying any of the original wording, the Army simply added a note that explains Defense Department policy hasn’t changed. Fully autonomous American killing machines still aren’t allowed to go around murdering people willy nilly. There are rules—or policies, at least. And their robots will follow those policies.

Yes, the Defense Department is still building murderous robots. But those murderous robots must adhere to the department’s “ethical standards.”

https://gizmodo.com/u-s-army-assures-public-that-robot-tank-system-adheres-1833061674

I've seen this movie before (https://thumbs.gfycat.com/ComfortableRichGemsbok.webp)