Google DeepMind investigates competitive behavior of artificial intelligence
Google subsidiary DeepMind, which specializes in artificial intelligence, is using a number of simple games to investigate how competitively or cooperatively self-learning computer systems behave toward each other.
This was revealed in a new blog post on the DeepMind website, in which the new research is presented.
The research builds on the so-called "prisoner's dilemma", in which two prisoners each have the choice of confessing to the police or remaining silent. By cooperating and both staying silent, they each receive a lower sentence, but the individually smartest choice for each prisoner still turns out to be betraying the other. That leads to a suboptimal outcome.
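The dilemma above can be made concrete with a small sketch. The payoff values (years of prison, lower is better) are illustrative assumptions chosen to match the standard textbook dilemma, not figures from the DeepMind research.

```python
# A minimal sketch of the prisoner's dilemma described above.
# Sentences in years; lower is better. Values are illustrative assumptions.
SILENT, BETRAY = "silent", "betray"

# sentence[(choice_a, choice_b)] -> (years for A, years for B)
sentence = {
    (SILENT, SILENT): (1, 1),   # both cooperate: light sentence for both
    (SILENT, BETRAY): (3, 0),   # A is betrayed: A serves 3, B goes free
    (BETRAY, SILENT): (0, 3),
    (BETRAY, BETRAY): (2, 2),   # mutual betrayal: worse than mutual silence
}

def best_response(opponent_choice):
    """Return the choice that minimizes prisoner A's sentence
    against a fixed choice by the opponent."""
    return min((SILENT, BETRAY),
               key=lambda c: sentence[(c, opponent_choice)][0])

# Whatever the other prisoner does, betraying is individually better...
assert best_response(SILENT) == BETRAY
assert best_response(BETRAY) == BETRAY
# ...yet mutual betrayal (2, 2) leaves both worse off than mutual silence (1, 1).
```

This is exactly the suboptimal outcome the article refers to: each prisoner's best response is to betray, so both end up with a longer sentence than if they had cooperated.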
DeepMind wants to explore this branch of game theory by having self-learning computer systems play such games and observing how they react. The company has shared two examples: the simple games Gathering and Wolfpack.
In Gathering, both systems try to collect as many apples as possible. However, only a limited number of apples are available. The computers can either collect them as quickly as possible, or choose to use a laser beam to temporarily knock the competing opponent out of the game. As apples became scarcer, the systems more often began to work against each other. Increasing the systems' computing power also led to less cooperation.
In the second game, Wolfpack, the two systems must work together to corner a moving object so that they can catch it. Here, an increase in computing power led to more cooperation in particular.
According to DeepMind, the experiments offer a new way of looking at classic game theory, and the research is therefore also useful for understanding both human behavior and artificial intelligence.
The behavior of artificial intelligence in particular is a hotly debated topic among the major tech companies, because of the risks that may be attached to handing over tasks to automated, self-learning systems.
Last year, a number of tech giants even started a separate consortium in which they can talk freely about the developments and risks of artificial intelligence, without compromising trade secrets that could fall into the wrong hands.