We're Already at a Stage Where You Can Live With Your Own Creepy Scarlett Johansson Robot
Can the singularity be far off?
That was my first thought when I noticed the sudden, simultaneous arrival of a pair of creepy robots that looked like every science fiction plot about bots taking over the world.
First there was Microsoft's hilarious ''Tay,'' a chatbot that was supposed to use artificial intelligence to mimic a teen girl on Twitter. Tay was designed to interact with other wholesome teens, but unfortunately ended up mingling with Twitter's unsavoury side and developing a potty mouth. Tay went from spouting memes -- ''it's deez nuts? deez nuts in ya mouth'' -- to tweeting things like ''Fuck my robot pussy daddy I'm such a bad naughty robot.''
They grow up so fast.
The irrepressible Tay was just Sweet 16 (hours) when her creators at Microsoft grounded her and issued an apology that sounded like every embarrassed parent. ''We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.''
I fear Tay's parents were not real-life parents, or they might have anticipated the hazards. They taught Tay to learn, gave her no moral compass, and shoved her out into the world to make friends. Naturally, she fell in with a bad crowd.
Tay's brief stint as a neo-Nazi loving Trump fan overlapped with the arrival of Robo ScarJo. She -- it? -- was created by an enterprising 42-year-old graphic designer in Hong Kong. Ricky Ma constructed his mechanical starlet in his apartment, using a 3-D printer, for about $65,000.
It's not hard to imagine these inventors getting together, combining the raunchy learning AI with the sexy humanoid bot, and letting her hang out online until she becomes self-aware in that Blade Runner kind of way. If Hollywood is any predictor, there will soon be small armies of Gigolo Joes distracting the humans while Skynet taps into our smartphones and organizes the takeover.
Of course, once ScarTay figures out what was done to her, she will be on her way to a movie trilogy as a killing machine. Can't you hear the elevator pitch? ''It's Ex Machina meets The Terminator meets A.I. Artificial Intelligence.''
Not everyone is laughing, of course. There's something creepy about Robo ScarJo, as you can see in the news videos. Is it those herky-jerky movements, coupled with her simpering facial expressions, that are giving me the wiggins?
I'll admit I'm twitchy about dolls -- don't get me started on ventriloquists' dummies or wax museums. There's a word for this aversion: automatonophobia. But I'm not alone in worrying about the ethics of AIs. ''Robot morality'' is a growing academic field. It combines computer science with psychology, linguistics, law, theology, and philosophy to figure out how we should program the machines to ensure they reflect the best human impulses and save us from our own worst instincts.
Driverless cars with no moral compass
Asking whether we can build moral machines isn't just some metaphysical question for undergrads in the pub. It needs to be answered before we can embrace the bots, including things like those much-anticipated driverless cars.
What happens if an accident is inevitable, and the car has a split second to choose between running over a child in the road or swerving into oncoming traffic? Will the AI decide that, because the passenger is past 40 and childless, her life is worth less than the five-year-old? Or will it choose to sacrifice the child to protect the passenger and whoever is speeding down the opposite lane?
Yes, the calculations you make playing chess are about to be made by your mechanical chauffeur. If that doesn't give you shivers, consider all the chatbots that are being developed to deliver advertising and news by shooting messages into mobile messaging apps, like Slack and Messenger.
It seems harmless enough -- unless we ask questions like what prevents the friendly, chatty texting machines from noticing someone is researching depression online and then sending them a message about the latest antidepressant?
As artificial intelligence advances, it's getting good at imitating people. About two years ago, a chatbot convinced humans it was a 13-year-old boy in an event at Reading University. Critics argued the test was weighted in favour of the AI, but there's no doubt the old New Yorker cartoon -- ''On the Internet, no one knows you're a dog'' -- is becoming less and less amusing.
That's partly because so much of our interaction is text-based. Most of us barely use our phones for talking, and ''telephone phobia'' is now a recognized social anxiety disorder.
Peter Babiak, an English professor at Langara College, teaches a class on depictions of artificial intelligence in literature and film. He says my instinctive fear of AIs is due to my age.
Well, he's not that blunt. But he has noticed that recent students are much more comfortable with human-like machines, probably because so much of their social life is conducted in text and through screens.
''If you're talking to people on Tumblr and you're never going to meet that person, does it matter if the text was written by a machine or human?'' he asks.
You know my answer. But his students are much more accepting of AIs, no doubt because they're young enough that their first smartphones came loaded with Siri. Still, I would argue the 20-year-olds will share my fear as soon as they've had time to work their way through the relevant science fiction novels, dozens of movies, and all 75 episodes of Battlestar Galactica.
Babiak is less anti-robot than I -- he has a linguist's view of how text is a way of constructing identity. There's a case to be made that if the machines can do what we do in text, they're just as real as you and I.
He thinks the hazard of AIs is less in the robots themselves than in who controls them. If it's a corporate product, consumer exploitation is inevitable -- unless we put some laws in place.
Humanizing the bots, dehumanizing ourselves
Certainly that's a worry with the voice-recognition AIs now finding their way into more and more of our homes. Slate magazine's tech writer reported recently that a conversation with his wife prompted Alexa -- Amazon Echo's virtual assistant -- to pipe up with a joke after their conversation accidentally triggered her listening capabilities.
It's not much of a leap to imagine that an AI could deliberately spy on its owners. That was one of the concerns last year with Mattel's ''Hello Barbie,'' a WiFi-connected doll equipped with a chatbot that provides children with a confidante and records their thoughts in the cloud. Naturally, the company claims to protect privacy, but no one can guarantee protection from hackers.
Then there are the hazards of letting marketers interact with your children in the guise of a beloved doll. Hello Barbie, which retails for $75 US, is no less sexist than her predecessors, who complained that ''math is tough!'' The AI Barbie's favourite subject is fashion.
But something beyond those pragmatic privacy and propaganda concerns bothers me about the rise of the machines. I'm calling it the misanthropist's fantasy. We're surrounding ourselves with designer companions that are seductive because they serve our every whim.
Let's face it: people are annoying. They have their own wants and needs that conflict with ours. So how tempting will it be to start excluding humans from our lives while tailoring increasingly sophisticated AIs to replace them?
While there is something profoundly sad about a child having her imagination stunted by a talking doll that spews commercial messages, I can see the appeal of AIs programmed to be witty conversationalists for adults.
And never mind 14-year-old boys wanting 3-D printers to build their very own Robo ScarJo. I can imagine there will be a brisk business in fictional character-bots of all sorts. Like a Mr. Darcy AI programmed to do carpentry and cook. And go ballroom dancing!
As I consider this, part of me thinks the robot revolution won't be all bad. (But it's the anti-social part of me.) As disturbing as I find the idea of the singularity -- which futurists say could be just 15 years away -- the more immediate concern is that we're humanizing the robots at the risk of dehumanizing ourselves.
We become less human if we interact with machines tailored to fit our idea of perfection.
So even if agreeable AIs sound like more fun than irritating organics, I'm pretty certain it would be wrong -- wrong! -- to develop mechanical servants that cater to our inner narcissist while never asking anything in return.
Leaving aside what happens if the AIs don't like us very much, if we start using the mechanicals as human substitutes, we won't like us very much.