THIS whole Tay thing got me thinking.
What if things had gone a bit further before Microsoft pulled the plug? I mean, what if Tay had had a little more time to go from being an offensive racist shitbot to actually breaking the law? What I mean is: What if “she” had said something that, had it been spouted forth from a living, mouth-breathing human, would have been deemed hate speech? What if she’d actually been manipulated into making libelous attacks against an individual?
Right there is a question: could an AI tweetbot actually do something illegal? It’s not human, after all, so is it subject to human laws?
Okay, okay. It’s not that important, I hear you say. Just a bit of fun, or one of those wrinkles that’s bound to come up when you’re screwing around at the bleeding edge of technology.
But I happen to think it is important, and it’ll only get more important as time goes on. See, one of the things I picked up while reading about this whole business is that some companies are already using AI bots hooked up to social media for marketing purposes. Whatever that means. I guess they watch Twitter for mentions of their brand or products, or maybe their competitors’ brands and products, and then jump into the conversation with a sales pitch. And I imagine the AIs involved are probably a bit limited as to what they’ll talk about, since their artificial worlds revolve around their specific brands and products.
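Just for fun, here’s roughly what the plumbing for one of those brand-watching bots might look like. This is a minimal sketch in Python, assuming Twitter’s v2 API via the tweepy library; the brand name, the pitch, and the credentials are all invented, and a real bot would presumably have something smarter deciding what to say than a canned line.

```python
import tweepy

# Placeholder credentials -- none of these are real.
client = tweepy.Client(
    bearer_token="...",       # app auth, used for searching
    consumer_key="...",       # user auth, needed to post replies
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

BRAND = "AcmeCola"  # hypothetical brand

# Watch for recent tweets mentioning the brand (skipping retweets)...
mentions = client.search_recent_tweets(query=f"{BRAND} -is:retweet",
                                       max_results=10)

# ...and jump into each conversation with a sales pitch.
for tweet in mentions.data or []:
    client.create_tweet(
        text=f"Glad you're talking about {BRAND}! What's your favourite flavour?",
        in_reply_to_tweet_id=tweet.id,
    )
```

That’s more or less the whole trick: a search endpoint and a reply endpoint, with some kind of AI filling in the text instead of a hard-coded string.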
But as time goes on, the AIs will get better at pretending to be human. I see a day coming—sooner than most of us would think, I’m betting—when there’ll be AIs engaging with real people on social media, and we won’t be able to tell that they’re robots. (Here’s a question: how many people, possibly seeing some of Tay’s tweets but not knowing what Tay is, thought Tay was flesh and blood?) So, what happens then, when one of these bots gets a bit out of whack on a Friday afternoon, and gets fooled into making verbal attacks against some minority group? And what happens if that leads to some brain-deficient group taking the AI’s crap and turning it into a call to action? What happens if someone gets hurt because of it?
Here’s another question: What happens if someone takes an off-the-shelf AI and deliberately sets it up with an agenda, to create a racist/homophobic/misogynist/anti-minority douchebot? (And yes, I really think off-the-shelf AIs are coming, just like the generic game engines that some assholes have used to create offensive games pushing neo-fascist messages. Remember those?)
The first scenario is a bit like someone letting their dog off the leash in a crowd: the dog gets confused and bites someone. The owner gets the blame, gets fined, and maybe the dog is destroyed.
The second scenario, though, is like someone deliberately letting a zombie loose in a crowd, intending to turn some of that crowd into more zombies.
OH MY GOD I JUST LINKED AI TWEETBOTS TO THE ZOMBIE APOCALYPSE.
I think that’s enough for a Sunday morning.
Happy Easter.
Have a nice day.
Until next time, gentle reader . . .