AI is meant to do a better job than humans, otherwise what would be the point of having it? Well, a different approach, taken by researchers at Cornell University, is to get AI to do just as good, or as bad, a job as humans. The first iteration of their AI, known as Maia, plays chess and tries to play it just as a human player would, mistakes and all. This is an interesting approach because, apart from potentially being used to mimic real chess players, it could eventually be applied in more serious settings, such as healthcare or defence, to better understand why and how mistakes are made. Maia was actually built from an open-source version of AlphaZero, DeepMind’s self-taught successor to AlphaGo, the AI that beat the best player in the world at Go. I’ve written before about how AlphaZero learnt independently of humans, which led to novel and ‘creative’ moves, and how this could teach us new ways of doing things. With Maia, the researchers modified the code to create a program that learns by favouring accurate predictions of human moves rather than the ‘best’ moves, using games from LiChess, a popular online chess server, as its training data. Although the Wired article doesn’t mention this, I think that this approach of showing human fallibility may actually help people trust machines more.
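
To make the difference in training objective concrete, here is a minimal sketch in Python (emphatically not Maia’s actual code): an AlphaZero-style model is optimised towards the strongest move, whereas a Maia-style model is optimised to predict the move a human actually played in that position. The position encoding, move space and training data below are all invented for illustration.

```python
# A toy illustration of "imitate the human" vs "play the best move" training.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES, N_MOVES = 8, 4          # made-up position encoding and move space
weights = rng.normal(size=(N_FEATURES, N_MOVES))

def policy(position):
    """Softmax distribution over candidate moves for a given position."""
    logits = position @ weights
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def train_step(position, target_move, lr=0.1):
    """One cross-entropy gradient step towards the target move."""
    global weights
    probs = policy(position)
    grad_logits = probs.copy()
    grad_logits[target_move] -= 1.0          # d(cross-entropy)/d(logits)
    weights -= lr * np.outer(position, grad_logits)

# Fake training data: (position, move a human played, engine's preferred move).
games = [(rng.normal(size=N_FEATURES), rng.integers(N_MOVES), rng.integers(N_MOVES))
         for _ in range(1000)]

for position, human_move, engine_move in games:
    # Maia-style objective: imitate the human, mistakes and all.
    train_step(position, human_move)
    # An AlphaZero-style objective would instead target engine_move (or use self-play).
```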

Another way that machines could be trusted is if they were able to exhibit better social and cooperative skills. Researchers at MIT and Brigham Young University have developed a system they call S# (S Sharp) that has learnt to play cooperation games such as the Prisoner’s Dilemma. As reported in KurzweilAI, “machines designed to selfishly maximise their pay-offs can, and should, make an autonomous choice to cooperate with humans across a wide range of situations. Two humans, if they were honest with each other and loyal, would have done as well as two machines. About half of the humans lied at some point, so the AI is learning that moral characteristics are better, since it’s programmed to not lie, and it also learns to maintain cooperation once it emerges”. As with Maia, S# could help us understand our fallibilities and provide useful pointers about how to co-operate better for the common good.
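
For anyone unfamiliar with the game, here is a toy iterated Prisoner’s Dilemma in Python. The payoff matrix is the standard one, but the ‘loyal’ strategy below is plain tit-for-tat rather than S#’s actual algorithm; it is only meant to show why sustained cooperation pays off over many rounds, which is the behaviour S# learns to maintain.

```python
# Standard Prisoner's Dilemma payoffs: (my move, their move) -> (my payoff, their payoff)
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # I'm exploited
    ("D", "C"): (5, 0),   # I exploit
    ("D", "D"): (1, 1),   # mutual defection
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (300, 300): sustained cooperation pays
print(play(always_defect, tit_for_tat))   # (104, 99): defection gains little and hurts both
```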

Both Maia and S# have been trained to solve some tricky challenges, but they are still, at least for now, working within narrowly defined realms (chess and the Prisoner’s Dilemma). A key part of developing AI beyond these is to give it problems that are challenging enough to solve. Rui Wang, an AI researcher at Uber, has developed an AI that helps train other AIs by constantly adjusting the challenges to push the training just that little bit harder. At the moment it only trains a rudimentary stick figure to run across an undulating landscape (the image at the head of this piece), but the Paired Open-Ended Trailblazer (POET), as it is called, generates the obstacle courses, assesses the bots’ abilities, and assigns their next challenge, all without human involvement. As this (paywalled) article in MIT Technology Review says, “POET hints at a revolutionary new way to create supersmart machines: by getting AI to make itself”. Using AI to build AI could actually be an important step towards Artificial General Intelligence.
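
The structure of that outer loop is easier to see in code than in prose. Below is a heavily simplified, hypothetical sketch of a POET-style loop (it is not Uber’s implementation, and the real POET also transfers agents between environments, which I’ve left out): environments and agents are reduced to single numbers purely to show the generate-evaluate-assign cycle running with no human in it.

```python
# Toy POET-style loop: keep a population of (agent, course) pairs, improve each
# agent on its course, and when a course is solved propose a harder one that is
# challenging but not hopeless.
import random

random.seed(0)

def evaluate(agent_skill, difficulty):
    """Score of an agent on a course: positive means the course is solved."""
    return agent_skill - difficulty

def optimise(agent_skill, difficulty, steps=10):
    """Stand-in for the inner loop (RL or evolution) that improves the agent."""
    for _ in range(steps):
        agent_skill += 0.1 * random.random()    # noisy improvement
    return agent_skill

pairs = [(0.0, 0.5)]                            # population of (agent_skill, course_difficulty)

for generation in range(20):
    new_pairs = []
    for agent, difficulty in pairs:
        agent = optimise(agent, difficulty)
        if evaluate(agent, difficulty) > 0:     # course solved: propose a harder one
            harder = difficulty + random.uniform(0.2, 0.8)
            # minimal-criterion check: keep the new course only if it is
            # challenging but not hopeless for the current agent
            if -1.0 < evaluate(agent, harder) < 0.5:
                new_pairs.append((agent, harder))
        new_pairs.append((agent, difficulty))
    pairs = new_pairs[:10]                      # cap the population size

print([(round(a, 2), round(d, 2)) for a, d in pairs])
```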

The guys at DeepMind, however, think that all we need to reach AGI is Reinforcement Learning (RL). This is the subset of AI that learns by trial and error to maximise rewards, famously demonstrated by the models they built that reached superhuman performance at Atari games such as Space Invaders and then beat the world’s best players at Go (cf. AlphaZero mentioned earlier). Whereas most people think that AGI will come about through a combination of very specialised models, DeepMind is suggesting that, because RL mirrors the basic way humans learn, it can cope with challenges such as knowledge, learning, perception, social intelligence, language, generalisation and imitation. It is certainly an attractive hypothesis, but as anyone who has read ‘Superintelligence’ by Nick Bostrom will tell you, you have to be very careful about what goals you set the machine: even asking for global happiness (a difficult enough thing to define in the first place) can end in disaster as the machines become overly obsessed with their task. Perhaps then a dash of human fallibility built into these models might actually be a good thing?
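
For completeness, this is what that basic trial-and-error loop looks like in practice. The sketch below is plain tabular Q-learning on a made-up five-cell corridor with a reward at the far end; it is nothing to do with DeepMind’s code, but it shows the essential idea that a reward signal alone can be enough to produce sensible behaviour, which is the claim they are scaling up.

```python
# Minimal tabular Q-learning: learn to walk right along a 5-cell corridor
# purely from trial, error and a reward in the last cell.
import random

random.seed(0)

N_STATES, ACTIONS = 5, (-1, +1)                 # 5-cell corridor; move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1           # learning rate, discount, exploration rate

def greedy(state):
    """Pick the action with the highest learnt value (ties broken at random)."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(300):
    state = 0
    while state != N_STATES - 1:                # the reward sits in the last cell
        # trial and error: mostly exploit what we know, occasionally explore
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate towards reward + discounted future value
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

print({s: greedy(s) for s in range(N_STATES - 1)})   # learnt policy: move right everywhere
```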

If you’d like to get more opinions, thoughts and ideas like this, then why don’t you sign up for our monthly newsletter, That Space Cadet Glow? Just enter your email in the box below and click on Subscribe.