News

Think AI is too scary? This expert wants to calm your fears

How do you feel about artificial intelligence? Excited? Worried?

Maybe you wish there were some "grown-up supervision" on hand. Relax, there is: founded in 1979, the Association for the Advancement of Artificial Intelligence (AAAI) has kept a fairly low profile outside the academic arena, emerging when called on to add a voice of reason behind closed doors in Washington.

But the organization now has 4,000 members worldwide, and its purpose is to promote research into, and responsible use of, AI; to improve public understanding of the field; to raise standards in the training of AI practitioners; and to provide guidance on funding major AI initiatives.

To learn more, we spoke by phone with Dr. Yolanda Gil, who just took over as AAAI's 24th president, at her office at USC's Information Sciences Institute (ISI). Dr. Gil joined ISI in 1992 and is currently Director of Knowledge Technologies and Associate Division Director; she is also a Research Professor in Computer Science and Spatial Sciences, with a focus on intelligent interfaces for knowledge capture and discovery. Here are edited and condensed excerpts from our conversation.

Dr. Gil, can you tell us why you accepted the role of AAAI president? What was it about the organization and its mission that really drives you?
These are exciting times, as AI increasingly permeates our lives. We see it in systems from chatbots to self-driving cars to scientific discovery and many other applications doing useful work. I believe AAAI is the leading forum for coordinating the many areas of AI, and that we also have a strong responsibility to guide the design of AI systems and to encourage ethical and responsible behavior. My career has always included a focus on service to the AI and computer science communities, so this was a natural step for me. I am very excited about it.

Can you talk about three important goals you have for AAAI moving forward?
The first thing to say is that I see this as a listening experience, at least at first, so I can respond to what the community is looking for. That said, one large area is to improve and strengthen AAAI's ties with the business community. Our annual conference has a lot of participants from industry, but I would like to see a stronger presence from industrial research laboratories. Traditionally, this is a very academic conference, but today many professors spend time in industry, and we would like that sector to have much more of a presence. That is one important concern. I am also looking to include underrepresented communities in our membership and to diversify more broadly; to launch K-12 initiatives to grow the pipeline; and to ensure that we reach professionals in other fields.

So as young students learn about AI but then go into other fields, it will spread understanding instead of fear?
Exactly. Many K-12 students will go on to be doctors, entrepreneurs, engineers, or whatever they choose. But through exposure to AI, they will know more about its possibilities in their chosen field.

Your predecessor started with a focus on AI and ethics.
Yes, and we are continuing that into 2019, especially with the second Conference on AI, Ethics, and Society, held during our annual conference. We should look at ethics within AI at every level: how systems should be designed with mechanisms to respond ethically to events; understanding whether an AI system could cause harm; and so on. I am very enthusiastic about this initiative, run in cooperation with the ACM. As a community we need to take a leadership role and do more research in this area, correctly, clearly, and creatively, instead of letting circumstances shape AI.

Pivoting to your own research for a moment: we were first introduced to your work back in 2015, at DARPA. How has the project progressed since then?
At that time [at DARPA], we were still at the beginning of the project on using intelligent systems for scientific discovery: assisting scientists with intelligent systems that analyze data, test hypotheses, and make new discoveries. In the beginning we focused on capturing scientific processes as semantic workflows. We have worked in many different areas of science, and now we have a number of related projects and are seeing the results.

Those results are not yet published, but can you share some insights and specific areas of focus for your intelligent systems?
Yes. For example, in proteomics, the biochemical study of the proteins in an organism, we have now captured a lot of workflows for this type of analysis. And, interestingly enough, what we noticed is that, when studies are published, they have often used a single model [for identifying the proteins] without exploring others, which means that many proteins are missed.

And your AI system is smart enough to recognize this shortcoming and correct it?
Yes. Our systems are intelligent enough to be diligent and to keep trying other methods, other algorithms, to detect hundreds of proteins that would otherwise be left behind. The system itself works to make new discoveries.

That is amazing. Your AI systems could produce new scientific breakthroughs. A popular lab partner, one assumes.
[Laughs] We hope so. We are now working on a mechanism for measuring "interestingness," so that when the system finds something new, it can check whether it is a major breakthrough. This is very challenging; it requires that the system have knowledge about the current state of the field, the latest thinking on these proteins.

Otherwise it might just get excited about something, and proteomics researchers would be able to say, "Yes, but that's only a prosaic protein."
That is correct. And a good lab partner would not bother a scientist with minutiae or unimportant findings. So we want it to be an "interesting" lab partner.

Can your AI system take in and analyze multiple forms of data input?
Yes. We now automatically generate machine-learning workflows. We give the system the data and a goal, i.e. the desired target, and it starts by looking at what kind of data it is. If it is audio, it searches for a way to process it; if it is visual data, it finds a way to understand and catalog it. Then it applies algorithms to maximize a metric (for example, the accuracy of the solution). It is very systematic.
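
To make the idea concrete, here is a minimal sketch of that kind of automated workflow selection, written in Python with scikit-learn. This is not the ISI system itself; the dataset, candidate algorithms, and accuracy metric below are assumptions chosen purely for illustration. Given data and a target metric, the program systematically tries candidate workflows and keeps whichever scores best.

    # Conceptual sketch only: automated selection among candidate ML workflows.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Stand-in tabular dataset (an assumption for the example).
    X, y = load_breast_cancer(return_X_y=True)

    # Candidate workflows the system could explore automatically.
    candidates = {
        "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "nearest_neighbors": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    }

    # Score each candidate on the desired target metric (here, accuracy)
    # via cross-validation, then keep whichever maximizes it.
    scores = {name: cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
              for name, model in candidates.items()}
    best = max(scores, key=scores.get)
    print(f"Best workflow: {best} (mean accuracy {scores[best]:.3f})")

A real system along the lines Dr. Gil describes would also inspect the data type first (audio, images, tables) and assemble the preprocessing steps accordingly, rather than assuming tabular input as this sketch does.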

Beyond the proteomics lab, aren't your systems also tackling problems in the field with your new $13 million DARPA award?
Yes, that award is for a four-year project called MINT (Model INTegration), part of DARPA's World Modelers program. We build workflows that integrate complex models of the world relating to hydrology, food production, climate, society, and the economy. We are asking our intelligent systems to help us understand, in particular, potential food shortages and the communities at risk of poverty and food insecurity throughout the world.

So far we have worked with Kimetrica, a data company that provides large-scale investment evaluations and statistics from government reports, to apply our automated machine-learning workflows, and it turns out they are better than the ones built by hand. This is still at an early stage, but we are getting good results. It is exciting to combine their domain expertise with our AI research to find solutions to important problems in the world.

In the current issue of AI Magazine, Dr. Lynne Parker, co-leader of the National Artificial Intelligence Research and Development Strategic Plan, writes: "It is the responsibility of us as technologists to focus on the positive, ethical development and use of AI, to ensure that everyone can benefit from the practical application of AI in society, regardless of which nation leads in the strategic development of the technology." What was the significance of this document?
AI has always been very international and distributed, with cultural richness embedded from all regions, many of which have different approaches to AI. For example, in Europe there is an incredible tradition of logic-based approaches; in Asia, strength in advanced computational mathematics; while in Africa and Australia there is a strong focus on applied research. The document you refer to is very important because it means the US recognizes the importance of a national plan and of making strategic investments.

An increased focus on competing with China and others?
To be clear, the US government has always made AI an important area for investment, but traditionally it has not been a very large investment, while the EU and China have been prioritizing government-led research investment in AI for a while. That report is a recognition that the US must continue to play a leading role. It was a large, very thorough document, and its recommendations are very sound.

One thing it notes is that machine learning is important, but human-computer collaboration is something we cannot neglect. It also explains the emphasis on fundamental research in AI, because when you invest for 10, 20, or 30 years in the hard problems we have in AI, such as speech recognition, you can see astonishing results, but only when investments are sustained for a considerable period of time.

It must be very frustrating for veteran speech-recognition AI researchers, since the popular belief is that some magical Google Assistant lab thought it all up in a late-night, pizza-fueled session in 2018.
Right! [Laughs] Speech-recognition research took decades, and some of the directions didn't work and had to be abandoned. But because there was significant investment, people stayed with the problem, remained creative, and pressed forward.

What about the current administration, given that the national AI plan was a document prepared under President Obama?
Actually, the current administration just released a statement that they are going to invest in AI, and they have started a special study to come up with a roadmap for AI research. I will be co-chairing that effort.

Well, that is a cheering thought. When, as co-chair of the new AI strategic plan for the United States, will you deliver your first report?
We are going to build on the National Artificial Intelligence Research and Development Strategic Plan and think very carefully about fundamental research, human-computer collaboration, and understanding the benefits to society as we invest, particularly around health, scientific discovery, and so on, to create a comprehensive roadmap by the spring of 2019. We also want to highlight how innovation in AI can boost competitiveness and benefit industry and governments.

For more information: the AAAI Fall Symposium is scheduled for Oct. 18-20 in Arlington, Virginia. The registration deadline is Sept. 21.

This article originally appeared on PCMag.com.
