SAN FRANCISCO (Reuters) – Technology executives were put on the spot at an artificial intelligence summit this week, each faced with a simple question growing out of increased public scrutiny of Silicon Valley: “When will you put ethics before your business interests?”
FILE PHOTO: A research support officer and a PhD student work on an artificial intelligence project to train robots to autonomously perform various tasks at the Department of Artificial Intelligence at the Faculty of Information Communications Technology at the University of Malta, Msida, Malta, February 8, 2019. REUTERS/Darrin Zammit Lupi
A Microsoft Corp executive pointed to how the company weighed whether it should sell nascent facial recognition technology to certain customers, while a Google executive spoke about the company’s decision not to market a face identification service.
The big news at the summit, held in San Francisco, came from Google, which announced the launch of a council of public policy and other external experts to make recommendations on AI ethics to the company.
The discussions at EmTech Digital, organized by the MIT Technology Review, highlighted how companies are making a greater show of their moral compasses.
For activists critical of Silicon Valley, the question is whether large companies can deliver on promises to address ethical issues. How much teeth the companies’ efforts have could strongly influence how governments regulate the firms in the future.
“It’s really good to see the community holding companies accountable,” David Budden, research engineering team lead at Alphabet Inc’s DeepMind, said of the debates at the conference. “Companies are thinking about the ethical and moral implications of their work.”
Kent Walker, Google’s senior vice president for global affairs, said the internet giant debated whether to publish research on automated lip-reading. While beneficial for people with disabilities, the technology threatened to help authoritarian governments surveil people, he said.
Ultimately, the company found the research was “more suited for person-to-person lip-reading than surveillance, and so on that basis decided to publish” it, Walker said. The study was published in July last year.
Kebotix, a Cambridge, Massachusetts startup that wants to use AI to speed up the development of new chemicals, used part of its time on stage to discuss ethics. Chief Executive Jill Becker said the company reviews its clients and partners to guard against misuse of its technology.
Still, Rashida Richardson, director of policy research at the AI Now Institute, said little around ethics has changed since Amazon.com Inc, Facebook Inc, Microsoft and others launched the nonprofit Partnership on AI to engage the public on AI issues.
“There is a real imbalance in priorities” for tech companies, Richardson said. Considering “the amount of resources and the level of acceleration that goes into commercial products, I don’t think the same level of investment is going into making sure their products are also safe and not discriminatory.”
Google’s Walker said the company has about 300 people working to address issues such as racial bias in algorithms, but that the company has a long way to go.
“Baby steps is probably a fair characterization,” he said.
Reporting by Jeffrey Dastin and Paresh Dave in San Francisco; Editing by Greg Mitchell