Home Tech How machine learning systems sometimes surprise us

How machine learning systems sometimes surprise us

176

This simple spreadsheet of machine learning foibles may not look like much but it’s a fascinating exploration of how machines “think.” The list, compiled by researcher Victoria Krakovna, describes various situations in which robots followed the spirit and the letter of the law at the same time.

For example, in the video below a machine learning algorithm learned that it could rack up points not by taking part in a boat race but by flipping around in a circle to get points. In another simulation “where survival required energy but giving birth had no energy cost, one species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children).” This led to what Krakovna called “indolent cannibals.”

It’s obvious that these machines aren’t “thinking” in any real sense but when given parameters and a the ability to evolve an answer, it’s also obvious that these robots will come up with some fun ideas. In other test, a robot learned to move a block by smacking the table with its arm and still another “genetic algorithm [was] supposed to configure a circuit into an oscillator, but instead [made] a radio to pick up signals from neighboring computers.” Another cancer-detecting system found that pictures of malignant tumors usually contained rulers and so gave plenty of false positives.

Each of these examples shows the unintended consequences of trusting machines to learn. They will learn but they will also confound us. Machine learning is just that – learning that is understandable only by machines.

One final example: in a game of Tetris in which a robot was required to “not lose” the program pauses “the game indefinitely to avoid losing.” Now it just needs to throw a tantrum and we’d have a clever three-year-old on our hands.