Janelle Shane reports in the New York Times:
Expect AI to try its best to give us exactly what we ask for, and be very careful what we ask for. It would be a bad idea to use the Halloween costume algorithm’s predicted costumes without checking them first (Spirit of Potatoes, anyone?). The algorithm learned to spell words just by looking at costume examples: it makes predictions about which letters should be used in which order to make a costume, then checks them against the data. If it’s wrong, it refines its internal structure, and its predictions gradually get better. We know an algorithm can solve a problem, but we often don’t know exactly how.
When it comes to machine learning, things can get a little spooky: We know an algorithm can solve a problem, but we often don’t know exactly how.

Today’s machine-learning algorithms are considered a form of artificial intelligence, but it’s more helpful to think of them as prediction algorithms: Based on movies that this customer has rated highly, how much do we think he will like this other movie? Based on people we have hired in the past, how likely are we to hire this job candidate? Based on a list of past Halloween costumes, what might humans dress up as this year?

Hoping to find an answer to that last question, we turned to a machine-learning algorithm called textgenrnn that can learn to imitate text. Its author, Max Woolf, designed it as a blank slate ready to learn any kind of text; the text we gave it to imitate was a list of 7,182 costumes that people sent to aiweirdness.com over a year. Here are some examples of what that algorithm produced.

Zombie Schoolgirl
Toaster Boy
Donald McDonald

The algorithm learned to spell all of these words and phrases without human intervention, just by looking at the costume examples we gave it. It starts by making predictions about which letters should be used in which order to make a Halloween costume, and then it checks its own predictions by looking at the data used to train it. If it’s wrong (and at first, it almost always is), it refines its internal structure. Gradually, its predictions get better.

[Diagram: 1. Word generation; 2. Comparing the new word with the original data; 3. Output and structural refinement. A generated string like “Wabzb” has no match in the original data, so the structure is refined; a generated word like “Witch” matches the data, so the structure is updated to reflect the success of that output.]

Here are the costumes that the neural network produced at various stages, or “epochs,” of training. (We added the illustrations.)

Epoch 1
Ghanedastein
Heagd naller
Fustrastice
Vatand hampire
Captain Kirkf
Ruth Bader Hat Guy
Pirate the Wild Thor
Flip WItch
The Vampire Stab
Buttery Classic

Epoch 11
Sexy Beta Marx
Cat Witch
King Dog
Bucketball player
Dragon Ninja
baseball clown
Deatheater
Slick mermaid
Vampire Chick Shark
Centaur meije

When we ask it to predict new Halloween costumes, we can also tell it how creative to be. At the very lowest creativity setting it will almost always go with its top prediction, producing lots of repetition, while at the very highest setting it ventures into territory it deems less probable.

[Interactive: “Sexy Santa,” shown at a creativity setting of 1.]

Note that at the beginning of training, the neural network knew nothing at all about its input data: It didn’t know the difference between letters and spaces, or how to spell any words. The more repetition it sees, the more easily it can learn something. Unsurprisingly, one of the first words it learns to spell reliably is “sexy.”

As its training progresses, it begins to learn more and more words, adding “steampunk,” “minion,” “cardinalfish” and “Bellatrix” to its vocabulary. It learns new and exciting ways to use “sexy,” including predicting sexy costumes that don’t yet exist:

sexy King Louis XVI
sexy michael cera
sexy printer
sexy marijuana bee
Sexy Tin Man
Sexy Minecraft Person
sexy abraham lincoln
sexy beet
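For readers who want to see what this looks like in practice, here is a minimal sketch of training textgenrnn on a costume list and turning the “creativity” knob, which corresponds to the sampling temperature. The file name costumes.txt, the epoch count and the temperature values are assumptions for illustration, not the settings used for the article.

```python
# Minimal sketch, assuming the 7,182 costumes are saved one per line
# in a hypothetical file named "costumes.txt".
from textgenrnn import textgenrnn

textgen = textgenrnn()

# Each epoch is one pass over the costume list: the model predicts the next
# character, compares its guess with the real data and adjusts its weights.
textgen.train_from_file('costumes.txt', num_epochs=11)

# Low temperature ("creativity"): stick close to the top prediction,
# which produces safe, repetitive costumes.
textgen.generate(5, temperature=0.2)

# High temperature: sample less probable letter combinations,
# which is where the stranger costumes come from.
textgen.generate(5, temperature=1.0)
```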
The neural network also gets better at copying costumes directly from its training data. It does this because we told it to produce costumes like those it has already seen, and as far as the neural network is concerned, exactly copying its input data is a perfect solution.

Everything that the neural network knows about is something that was in its input dataset — it had no other information to work with. For example, it doesn’t know about this year’s hit movies, so it still produces “Pink Panther” more often than “Black Panther.” Even when it’s coining new words (as in “Farty Potter” or “Werefish”), it’s not building them based on any understanding of the meanings of the words but has simply determined that they are probable letter combinations based on the costumes it has seen.

And this is where neural networks and other machine-learning algorithms get a bit spooky: it’s very hard to tell how they determine that these letter combinations — like “Bloody Horse,” “Gothed pines” and “Ballerina trump” — are possible. Even when we can peer inside the neural network’s virtual brain and examine its virtual neurons, the rules it learns for its prediction-making are usually very hard to interpret. People are working on algorithmic explainability; for example, a group of Google and Carnegie Mellon researchers were recently able to show an image-recognition algorithm zeroing in on floppy ear shape as a way to recognize dogs. But many algorithms are much more difficult to interpret than image-recognition algorithms — for example, the kind used to make loan or parole decisions — and for the most part these algorithms are black boxes, producing predictions without explanation.

Sometimes, the ways algorithms work can have unexpected and disastrous consequences. In 2013, M.I.T. researchers trained an algorithm that was supposed to figure out how to sort a list of numbers. The humans told the algorithm that the goal was to reduce sorting errors, so the program deleted the list entirely, leaving zero sorting errors. And in 1997, another algorithm was supposed to learn to land an airplane on an aircraft carrier as gently as possible. Instead, it discovered that in its simulation it could land the plane with such huge force that the simulation couldn’t store the measurement, and would register zero force instead.

It can be a scary thing to trust a decision that we don’t understand — and it should be scary. Machine-learning algorithms learn from whatever is in their training data, even if their training data is full of the behaviors of biased humans. In other words, when we are using machine-learning algorithms, we get exactly what we ask for — for better or worse. For example, an algorithm that sees hiring decisions biased by race or gender will predict biased hiring decisions, and an algorithm that sees racial bias in parole decisions will learn to imitate this bias when making its own parole decisions. After all, we didn’t ask those algorithms what the best decision would have been. We only asked them to predict which decisions the humans in their training data would have made.

This kind of mistake happens all the time, and biased algorithms are everywhere. The moral of this story is not to expect artificial intelligence to be fair or impartial or to have the faintest clue about what our goals are. Instead, we should expect AI merely to try its best to give us exactly what we ask for, and we should be very careful what we ask for.
It would have been a bad idea to put the plane-landing algorithm’s recommendations into practice without checking them carefully. And it would be a bad idea to use the Halloween costume algorithm’s predicted costumes without checking them first (Spirit of Potatoes, anyone?).
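To make “we get exactly what we ask for” concrete, here is a toy sketch in the spirit of the sorting anecdote above; it is not the M.I.T. researchers’ actual setup. If the only objective we state is “minimize sorting errors,” then deleting the list satisfies it just as well as sorting it does.

```python
import random

def sorting_errors(lst):
    """The only thing we asked for: count adjacent pairs that are out of order."""
    return sum(1 for a, b in zip(lst, lst[1:]) if a > b)

data = [random.randint(0, 100) for _ in range(20)]

# Three candidate "solutions" a search process might stumble upon.
candidates = {
    "leave the list alone": data,
    "actually sort it": sorted(data),
    "delete the list": [],  # no elements means no out-of-order pairs
}

for name, result in candidates.items():
    print(f"{name}: {sorting_errors(result)} sorting errors")

# Judged purely by the stated objective, "delete the list" scores a perfect
# zero, exactly like "actually sort it" -- the objective never said the data
# had to survive.
```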