Why Artificial Intelligence Must Disclose That It's AI

Google recently repitched Duplex to explicitly disclose to restaurant hosts and salon staff that they're speaking with the Google Assistant and are being recorded.

Google omitted this small but important detail when it first introduced Duplex at its I/O developer conference in May. A media backlash followed, and critics renewed old fears about the implications of unleashing AI agents that can imitate human behavior in indistinguishable ways.

By tweaking Duplex, Google puts some of that criticism to rest. But why is it so important that companies be transparent about the identity of their AI agents?

Can AI Assistants Serve Evil Purposes?


“There is a growing expectation that when you receive communications from a business, you could be interacting with an AI-powered chatbot. But when you actually hear a human voice, you generally expect it to be a real person,” says Joshua March, CEO of Conversocial.

March says we're at the very beginning of people having meaningful interactions with AI on a regular basis, and advances in the field have created a fear that hackers could exploit AI agents for malicious purposes.

“In a best-case scenario, AI bots used with malicious intentions could offend people,” says Marcio Avillez, SVP of Networks at Cujo AI. But Avillez adds that we could face more dire threats. For example, AI can learn specific linguistic patterns, making it easier to adapt the technology to manipulate people, impersonate victims, and stage vishing (voice phishing) attacks and similar activities.

Many experts agree the threat is real. In a column for CIO, Steven Brykman laid out the different ways a technology such as Duplex can be exploited: “At the very least, with humans calling humans, there is still a limiting factor: a human can only make so many calls per hour, per day. Humans have to be paid, take breaks, and so forth. But an AI chatbot could literally make an unlimited number of calls to an unlimited number of people in an unlimited variety of ways!”

At this stage, most of what we hear is speculation; we still don't know the extent and severity of the threats that may emerge with the advent of voice assistants. But many of the potential attacks involving voice-based assistants can be defused if the company behind the technology explicitly tells users when they're interacting with an AI agent.

Privacy

Another problem surrounding technologies such as Duplex is the potential risk to privacy. AI-powered systems need user data to train and improve their algorithms, and Duplex is no exception. How that data is stored, secured, and used is very important.

New regulations are emerging that require companies to obtain explicit consent from users when they want to collect their information, but those rules were mostly designed to cover technologies in which users intentionally initiate interactions. This makes sense for AI assistants like Siri and Alexa, which are activated by the user. But it's not clear how the new rules would apply to AI assistants that reach out to users without being triggered by them.

In his article, Brykman stresses the need for regulatory safeguards, such as laws that require companies to declare the presence of an AI agent, or a law stating that when you ask a chatbot whether it's a chatbot, it must say, “Yes, I am a chatbot.” Such measures would give the human interlocutor the chance to disengage, or at least to decide whether they want to interact with an AI system that records their voice.

Even with such laws, privacy concerns won't go away. “The biggest risk I foresee in the technology's current incarnation is that it will give Google yet more data on our private lives that it didn't already have. Up to this point, they only knew of our online communications; they will now gain real insight into our real-world conversations,” says Vian Chinner, founder and CEO of Xineoh.

Recent privacy scandals, in which large tech companies have used user data in questionable ways for their own gain, have created a sense of distrust about giving them more windows into our lives. “People in general feel that the big Silicon Valley companies view them as inventory instead of customers and have a large degree of distrust towards almost anything they do, no matter how ground-breaking and life-changing it will end up being,” Chinner says.

Functional Failures

Despite its natural voice and tone and the use of human-like sounds such as “mmhm” and “ummm,” Duplex is no different from other contemporary AI technologies and suffers from the same limitations.

Whether voice or text is used as the interface, AI agents are good at solving specific problems. That's why we call them “narrow AI” (as opposed to “general AI,” the kind of artificial intelligence that can engage in general problem-solving, as the human mind does). While narrow AI can be exceptionally good at performing the tasks it's programmed for, it can fail spectacularly when given a scenario that deviates from its problem domain.

“If the consumer thinks they're speaking to a human, they'll probably ask something that's outside of the AI's normal script, and will then get a frustrating response when the bot doesn't understand,” says Conversocial's March.

In contrast, when a person knows they're talking to an AI that has been trained to reserve tables at a restaurant, they'll try to avoid using language that will confuse the AI and cause it to behave in unexpected ways, especially if it's bringing them a customer.

“Staff members who receive calls from Duplex should also be given a straightforward introduction that this isn't a real person. This would help communication between the staff and the AI be more conservative and clear,” says Avillez.

For this reason, until (and if) we develop AI that can perform on par with human intelligence, it is in companies' own interest to be transparent about their use of AI.

At the end of the day, part of the fear of voice assistants such as Duplex is caused by the fact that they're new, and we're still becoming accustomed to encountering them in new settings and use cases. “At least some people seem very uncomfortable with the idea of talking to a robot without knowing it, so for now it should probably be disclosed to the counterparts in the conversation,” says Chinner.

But in the long run, we'll get used to interacting with AI agents that are smarter and more capable of performing tasks previously seen as the exclusive domain of human operators.

“The next generation won't care if they're talking to an AI or a human when they contact a business. They'll just want their answer quickly and easily, and they'll have grown up speaking to Alexa. Waiting on hold to speak to a human will be much more frustrating than just interacting with a bot,” says Conversocial's March.

This article originally appeared on PCMag.com.
