Blair Frank reports in VentureBeat:
One of the most important things I have learned while reporting on AI is that systems built on machine learning may be able to beat humans, or approximate their results, but they can often fail in ways that humans never would, and with unintended consequences.
Consider the tale of a Redditor who asked earlier this week whether the Google Home had an internal temperature sensor. He was feeling a bit chilly at home and asked the assistant, “What’s the temperature inside?” He expected the system to pull the temperature from his Nest thermostat and report it back to him.
Instead, the Google Assistant went and fetched the weather report for Side, a resort town in Turkey, apparently parsing “inside” as “in Side.” Asking that same question through the Google Assistant app shows users a card clearly noting that the result comes from a Turkish weather report, but the voice response gives no indication of the country of origin.
(Google has since changed the Assistant’s response to look for a temperature sensor in a user’s home.)
This seems like a case of the Google Assistant trying to be helpful by providing the most precise answer it could find; the Redditor was, after all, able to get the correct reading by explicitly asking for the Nest thermostat’s temperature. But a human in the same situation, with access to both the home temperature and the weather in Turkey, would be unlikely to stumble into the same pitfall.
It’s not just the Google Assistant, either. A video released earlier this week shows how people can build 3D-printed models that fool image classification systems in ways that would never confuse a human. For example, the researchers created a turtle that’s classified as a rifle, even though it looks basically like a turtle.
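For a sense of how such attacks work in principle, here is a minimal sketch of a classic gradient-based adversarial example, the Fast Gradient Sign Method, using PyTorch and a pretrained ImageNet classifier. To be clear, this is not the technique behind the 3D-printed turtle, which has to remain adversarial across viewpoints and lighting; it only illustrates the underlying idea that tiny, deliberately chosen pixel changes can change a model’s prediction while the image looks unchanged to a human. The file name “turtle.jpg” is a placeholder.

# Minimal illustrative sketch of a gradient-based adversarial example
# (Fast Gradient Sign Method). Not the 3D-printed-turtle researchers'
# actual technique; it only demonstrates that small, deliberately chosen
# pixel changes can often flip a classifier's prediction.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(pretrained=True).eval()

# "turtle.jpg" is a hypothetical input file; any photo will do.
# (A production pipeline would also normalize with ImageNet statistics.)
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
image = preprocess(Image.open("turtle.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# The model's prediction on the clean image.
logits = model(image)
label = logits.argmax(dim=1)

# Take one gradient step that increases the loss for that prediction.
loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 0.01  # per-pixel perturbation budget; small enough to be hard to see
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    new_label = model(adversarial).argmax(dim=1)

print("clean prediction:      ", label.item())
print("adversarial prediction:", new_label.item())

A single step like this will not always change the label; iterative variants and larger perturbation budgets are more reliable, and, as the video shows, attacks can even be made robust enough to survive 3D printing and real-world camera angles.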
While some failures are funny, others have serious consequences, even in seemingly innocuous cases. See the story of a Palestinian man who was arrested as a terrorism suspect after Facebook incorrectly translated the text of a post showing him posing with a bulldozer. He had written “Good morning” in Arabic, only to have it translated as “Attack them” in Hebrew.
These issues are only going to become more frequent and more widespread as time goes on. As AI assumes an ever-bigger role in all our lives, these sorts of errors will become annoyances that we have to work around, as well as opportunities for humans to try to hoodwink the systems being brought in to automate key tasks.