We’re certainly living in an age of machine learning and artificial intelligence. Some of the best-known innovative startups and corporations, from Uber with its city simulations to Apple with Siri, rely on these technologies to improve their services and operate as efficiently as possible. Applied to virtually any industry, AI techniques can bring greater flexibility and efficiency, above all through data analysis and the automation of work that used to be done exclusively by humans.
There is, however, another kind of AI that could change the whole game in many unpredictable ways. Although the technology isn’t quite there yet, a number of companies are working on what’s known as “full AI,” or “artificial general intelligence” (AGI). As the name suggests, this AI would be capable of performing any intellectual task a human being can, including reasoning, learning, and communicating in natural language.
Full AI is arguably even more important for today’s businesses to be aware of than “narrow AI,” which can only be applied to a limited range of tasks. If (and when) it is created, AGI could make a great many technologies obsolete, while at the same time opening a huge playing field for new ideas.
Players To Watch
Currently, a few notable companies are working on full AI. The biggest and best-known is the UK-based DeepMind, which was acquired by Google in early 2014 for a reported $400 million. The company, which employs more than 100 researchers, uses deep learning algorithms to build a universal intelligent system.
“We use the reinforcement learning architecture, which is largely a design approach to characterise the way we develop our systems,” the company’s co-founder Mustafa Suleyman explained to TechWorld. “This begins with an agent which has a goal or policy that governs the way it interacts with some environment. This environment could be a small physics domain, it could be a trading environment, it could be a real world robotics environment or it could be an Atari environment.”
Using Atari games to test its AI is a clever idea DeepMind came up with. The interesting thing is that the system receives raw pixel input and can trigger the action buttons, but initially it has no idea what the buttons do or how the game is played; it’s only told to maximize the score on the screen. So it learns on the go, in the same way a human being would.
With the iconic Breakout, for instance, the agent at first misses the ball almost every time, but after a few hundred games it not only returns the ball reliably but discovers that digging a tunnel through the bricks and bouncing the ball behind the wall is the fastest way to rack up points.
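To make that loop concrete, here is a minimal sketch of the trial-and-error setup Suleyman describes. To be clear, this is not DeepMind’s code: its agents learn from raw pixels with deep neural networks, whereas this sketch swaps in plain tabular Q-learning on a made-up five-cell “game” (the ToyGame class and every parameter value are illustrative assumptions) so that it stays self-contained and runnable. As in the Atari setup, the agent is never told what its actions do; it only sees an observation and the score, and learns by trial and error which action pays off in each state.

```python
# Illustrative sketch only -- NOT DeepMind's system. Their agents learn from
# raw pixels with deep networks; this toy uses tabular Q-learning on a
# made-up five-cell "game" so the example stays self-contained.

import random
from collections import defaultdict

class ToyGame:
    """Stand-in environment: start at cell 0, score by reaching cell 4."""

    def reset(self):
        self.pos = 0
        return self.pos                               # the observation

    def step(self, action):                           # 0 = left, 1 = right
        self.pos = max(0, min(4, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 4
        reward = 1.0 if done else 0.0                 # the "score on the screen"
        return self.pos, reward, done

ACTIONS = [0, 1]
q = defaultdict(float)                                # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.1, 0.9, 0.1                 # learning rate, discount, exploration

env = ToyGame()
for episode in range(500):
    state, done = env.reset(), False
    while not done:
        # The agent is never told what the buttons do: it explores at
        # random occasionally, otherwise plays its best-known action
        # (breaking ties randomly so untried actions still get a chance).
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))
        next_state, reward, done = env.step(action)
        # Nudge the value estimate toward reward + discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# Best learned action in each non-terminal state (1 = right, as hoped).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)})
```

After a few hundred episodes the printout shows the agent choosing “right” in every non-terminal state, learned purely from score feedback; the epsilon-greedy scheme and the parameter values here are standard textbook choices, not published DeepMind details. Its Atari results rest on the same principle at scale, with a deep network standing in for the Q table and screen pixels standing in for the cell index.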
DeepMind is not the only company that uses game environments to test and adjust its artificial general intelligence builds. Another notable player in the field is GoodAI, a spin-off of the Czech game development company Keen Software House.
GoodAI was launched last year by Keen Software House’s founder Marek Rosa, who invested $10 million of his own money in the initiative. The company employs about 30 AI specialists and likewise uses games to test its systems.
GoodAI has also found a way to benefit from the fact that its sister company, Keen Software House, is the developer of Space Engineers, a popular open-world game.
“We’re [applying] this AI to Space Engineers for pure practical reasons,” Rosa told The Next Web. “This AI needs an environment where it will be operating, and the game seems to be a good environment for ‘childhood’ stages of the AI. It can’t do any harm in games, it can’t lead to financial losses or harm people. Any mistakes it makes are really low-cost.”
GoodAI has created a tool called Brain Simulator, which it released to Space Engineers players to let them create AIs for in-game purposes. This way, Rosa hopes to speed up the AI’s learning with the help of millions of additional “teachers” all over the world.
There are also other, lesser-known AGI developers, such as the US-based Maxima, working toward the same goals with broadly similar results, at least judging by what is made public.
How Long To Wait
It’s safe to assume that certain players might have already moved far beyond playing Breakout on Atari, but there’s still little chance that we’ll see a working full AI in the near future. Those making forecasts on the exact timeframe seem to get more optimistic over time, though.
In a job posting, GoodAI describes itself as a “long-term (10+ years) privately-funded R&D project,” which hints at how much time the founder thinks is needed to get closer to the startup’s main goal.
In 2011, a survey conducted at the Future of Humanity Institute (FHI) Winter Intelligence conference asked scientists how likely they considered the appearance of general AI. The respondents’ median estimates put the chance of machines achieving human-level intelligence at 10% by 2028, 50% by 2050, and 90% by 2150.
Yet another forecast was made in 2014 by Murray Shanahan, professor of cognitive robotics at Imperial College London, who works on deep learning techniques. In his opinion, “within the next 30 years I would say better than fifty-fifty that we’ll achieve human-level AI.”
Arguably the most significant challenge for AGI research is that no one appears to know for sure how exactly it’s supposed to look or work. Finding the most efficient system architecture and the right way to teach the machine to think is also something done mostly by trial and error.
“I think that the biggest obstacle is [to create] neural network modules that can learn and represent the data from the environment, do the generalizations and predictions while being not too demanding on computational resources,” GoodAI’s Rosa said. “Nobody has solved this yet; there is some good progress, but it’s still not enough. We need to overcome this obstacle, and then I believe things will go much faster.”
Academic researchers mention two main issues that slow AGI development down. First, there is a lack of tasks and environments that test an AI against all the requirements for AGI at once. The other issue, somewhat related to the first, is that an architecture that satisfies a certain subset of the requirements won’t necessarily scale to achieve all of them.
In addition to that, there’s the ethical aspect of full AI research. A list of concerns could look like this:
- An AGI could develop hostile attitudes towards humans.
- A small group could develop an AGI to serve its own interests, in conflict with the interests of humanity at large.
- An AGI could take ethical rules too literally (or extend them in its own way) and consequently take dangerous actions, such as harming people.
- You could never fully trust an AGI unless your mental abilities were at a comparable level: it could lie to you, and you would have no way of recognizing it.
- An AGI could create a new race of further AGIs just to have company at its own level, eclipsing mankind.
Fortunately, we apparently have more than enough time to work these out.
All in all, full AI does not seem to be an immediate threat to existing business models and the way everything works, although it is definitely something we should be aware of and check in on every now and then. It is also an extraordinarily challenging and exciting thing to think about, and it will only become more so as it gets closer.