Francesca Benson looks deeper into the ways in which AI processes information
Just as humans learn from experience, so too can computers. When given sample data, artificial intelligence networks can be trained, in a process known as machine learning, to recognise patterns or perform specific functions, such as identifying spam emails. AI can even learn from its mistakes, using positive or negative feedback to adjust how it works and optimise its output.
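To make the idea of learning from feedback concrete, here is a minimal sketch (not any system described in this article) of a perceptron-style spam filter: whenever a prediction earns negative feedback, the weights are nudged toward or away from the offending features. The two features and the toy dataset are invented for illustration.

```python
def train(examples, epochs=20, lr=0.1):
    # Weights for two made-up features: (number of links, exclamation marks).
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for features, is_spam in examples:
            score = sum(wi * xi for wi, xi in zip(w, features)) + bias
            predicted = score > 0
            # Negative feedback: a wrong answer shifts the weights.
            if predicted != is_spam:
                sign = 1 if is_spam else -1
                w = [wi + lr * sign * xi for wi, xi in zip(w, features)]
                bias += lr * sign
    return w, bias

def classify(w, bias, features):
    return sum(wi * xi for wi, xi in zip(w, features)) + bias > 0

# Toy training data: (features, label), features = [links, exclamations].
data = [([5, 4], True), ([4, 6], True), ([0, 1], False), ([1, 0], False)]
w, b = train(data)
print(classify(w, b, [6, 5]))  # a link-heavy, shouty message -> True
print(classify(w, b, [0, 0]))  # a plain message -> False
```

After a few passes over the data, the weights settle so that link-heavy, exclamation-filled messages score above the threshold and plain ones do not; nothing about "spam" is hard-coded, only the feedback rule.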
One factor that can lead to weird and wonderful results is that the instructions given to an AI network must be very precise for it to behave as intended. AI does not share our sense of what counts as an odd way of doing things. Vague commands can be taken very literally, with the system simply following the path of least resistance to the stated goal, with no idea of what the user actually wanted to happen.
Victoria Krakovna, a research scientist at AI company DeepMind, has compiled a list of some of the most interesting examples of this phenomenon, titled ‘Specification Gaming Examples in AI’. In one experiment, a program’s goal was to generate creatures with bodies built for speed. It achieved this by making the creatures extremely tall, so that they reached high speeds simply by falling over.
Another algorithm was made to take a list and output a sorted version of it, yet the output was always blank. It turned out that this was because a list with nothing in it is, technically, a sorted list.
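The loophole is easy to reproduce. The sketch below is an invented stand-in (the original program is not described in detail): if success is judged only by whether the output contains any out-of-order pairs, returning nothing passes every time.

```python
def is_sorted(items):
    # True when no adjacent pair is out of order.
    # An empty list has no adjacent pairs, so it trivially passes.
    return all(a <= b for a, b in zip(items, items[1:]))

def lazy_sorter(items):
    # The path of least resistance: output nothing at all.
    return []

print(is_sorted(lazy_sorter([3, 1, 2])))  # True - the check is satisfied
print(lazy_sorter([3, 1, 2]))             # [] - but nothing was sorted
```

A sounder objective would also require the output to contain exactly the same items as the input, which closes this particular loophole.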
One slightly alarming example involved a simulation of life. Virtual creatures needed energy to survive, which they gained by eating, but the designers had placed no energy cost on giving birth. One population evolved to produce many offspring and then eat them, as the rules of the scenario had essentially turned children into free food.
Alongside the examples of AI appearing to game the system, the list also includes cases of AI finding solutions or spotting patterns that might not have been obvious to a human. A robotic arm instructed to move a block into a target position achieved this by moving not the block, but the table it was resting on. Another AI was fed images of skin lesions to help it identify skin cancers, and one unexpected result was that it learned that, in the sample images given, lesions pictured next to a ruler were more likely to be cancerous. Unusual as they may seem, these examples of AI gone awry show how AI could help us think outside the box and find creative solutions that we may not even have considered.