When we discuss technology in all its forms, and the perceptions of the people this tech is meant to help (or hurt), wouldn't it be important not to put the cart before the horse?

What I mean is this: take, for example, tech systems that use behavioral analysis to flag possible points of contention or estimate the probability of a given thing happening (bot detection, anti-spam, antivirus, social network mapping, etc.). These should be met with a high level of skepticism when accepted as singular solutions. Each needs, somewhere in the process, the DUH check: only a human being actually puts the final approval or denial on its results.
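To put that in concrete terms, here's a rough sketch of the kind of gate I mean, written in Python with made-up names (analyze, human_review, FlaggedItem and so on are placeholders for illustration, not any real product's API): the automated system can flag whatever it wants, but no action gets taken until a person signs off.

```python
# Minimal sketch of a human-in-the-loop gate: the automated analyzer can
# flag items all day long, but nothing is acted on until a person signs
# off. All names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class FlaggedItem:
    item_id: str
    reason: str        # why the analyzer flagged it (e.g. "bot-like posting rate")
    confidence: float  # the analyzer's own probability estimate, 0.0-1.0

def analyze(items: list[str]) -> list[FlaggedItem]:
    """Stand-in for the behavioral-analysis step (spam filter, bot
    detector, etc.). In reality this is the automated system."""
    return [FlaggedItem(i, "suspicious pattern", 0.9) for i in items if "spam" in i]

def human_review(flag: FlaggedItem) -> bool:
    """The DUH check: a person, not the model, makes the final call."""
    answer = input(f"Block {flag.item_id}? ({flag.reason}, "
                   f"confidence {flag.confidence:.0%}) [y/N] ")
    return answer.strip().lower() == "y"

for flag in analyze(["spam-account-42", "normal-user-7"]):
    if human_review(flag):
        print(f"Blocked {flag.item_id}")     # action taken only after approval
    else:
        print(f"Left {flag.item_id} alone")  # the analyzer alone never decides
```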

This is not to say they aren't effective on their own; they can and do identify, and at least minimize, issues in the realm they monitor.

The downside is, I don't know about the rest of you, but I'll be darned if I can get most of my family or friends to actually leave these tools turned on, because they get in the way of how easily they can interact with the world through the internet.

The same generally goes for more advanced robotic applications performing missions of great variety in place of human beings. Sure, those robots at the factory can build a whole lot more cars each day than humans could, and may be more accurate. But on the flip side, if one of them gets set wrong, everything has to stop so it can be put right before the whole line starts again; or, even worse, nobody notices and 6,000 cars hit the road with a defect that may cost lives.
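A toy example of those two failure modes (all numbers and names here are invented, not real manufacturing code): with a per-unit tolerance check the line halts the moment a robot drifts out of spec, which is expensive but contained; without it, the same mis-set robot quietly ships every defective unit.

```python
# Toy sketch: a tolerance check that halts the line catches a mis-set
# robot early; without the check, the same mis-calibration quietly ships
# thousands of defective units.
TOLERANCE_MM = 0.5  # allowed deviation from spec (assumed value)

def weld_offset(robot_bias_mm: float) -> float:
    """Stand-in for one robot operation; the bias models a mis-set robot."""
    return robot_bias_mm  # deviation from the specified weld position

def run_line(units: int, robot_bias_mm: float, check_each: bool) -> int:
    """Returns how many defective units make it off the line."""
    defects = 0
    for unit in range(units):
        if abs(weld_offset(robot_bias_mm)) > TOLERANCE_MM:
            if check_each:
                print(f"Unit {unit}: out of tolerance, halting line for recalibration")
                return defects  # line stops; costly, but nothing defective ships
            defects += 1        # nobody notices; the defect rolls out the door
    return defects

print(run_line(6000, robot_bias_mm=0.8, check_each=True))   # halts at unit 0
print(run_line(6000, robot_bias_mm=0.8, check_each=False))  # 6000 defects ship
```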

Take that to the next step and apply it to UAVs, ground systems, etc. A disposable bot sent in to check for a bomb? Great. But a bot that's supposed to determine whether a house is dangerous or not, versus a human? What are you going to do when all the right ingredients for a dangerous concoction happen to exist in a small enough area that the bot presumes they're there to make what could be made from them, and simply disposes of them without asking?

Long and short: it's the same argument you've probably heard a million times, but it seems worth restating. These systems may help you do what you do better, or even do what you do faster and more effectively, but that means they can also screw it up faster and more effectively than you ever could.

Until we actually figure out our own brains, it's probably a really bad idea to work too hard at replicating, in digital autonomy, that which we have yet to sufficiently explain about ourselves.