Artificial intelligence can be incredibly annoying. Like when you realize the customer-service chatbot has you in a reply loop. Or when your voice assistant keeps giving you irrelevant answers to your question.
Or when the automated phone system has none of the choices you need and no way to speak to a human.

Sometimes when dealing with technology, the temptation to unleash anger is understandable. But as such encounters become more common with artificial intelligence, what does our emotional response accomplish? Does it cost more in civility than it benefits us in catharsis?

We wondered what WSJ readers think of this emerging dilemma, as part of our ongoing series on the ethics of AI. So we asked: Is it OK to address a chatbot or virtual assistant in a manner that is harsh or abusive, even though it doesn't actually have feelings? Does that change if the AI can feign emotions? Could bad behavior toward chatbots encourage us to behave worse toward real people?

Here is some of what they told us.
A question of civility

There is no excuse for bad behavior. If we claim to be civilized, then surely we must act so, regardless of provocation or fear of oversight. One consequence of being harsh or abusive in a virtual setting is that it inevitably leads to similar behavior in the physical world, and that is where the greater damage lies.
Kaleem Ahmad, Ankara, Turkey

Feel free to blow your top

Is this a real question? These notions are preposterous. Of course it is OK. In fact, this is another potential therapeutic use of AI.
We all feel the need to let loose to relieve stress. Having a reactive robot take it in place of a spouse or a child is exactly the kind of life-enriching tool a machine is supposed to be.