Jeremy Kahn reports in Fortune:
Could the problems caused by A.I. be solved by artificial intelligence itself?
I put that question to IBM’s Francesca Rossi, who leads Big Blue’s efforts on the ethics of artificial intelligence, and Antoine Bordes, a director of Facebook’s A.I. Research lab, at Fortune's Global Forum in Paris last week.
Yes—at least in some circumstances, both researchers said.
Bordes’s group, for example, is creating a benchmark test that can be used to train a machine learning algorithm to automatically detect deepfakes. And Rossi said that, in some cases, A.I. could be used to highlight potential bias in models created by other artificial intelligence algorithms.
While technology could produce useful tools for detecting—and even correcting—problems with A.I. software, both scientists emphasized that people should not be lulled into complacency about the need for critical human judgment.
“Addressing this issue is really a process,” Rossi told me. “When you deliver an A.I. system, you cannot just think about these issues at the time the product is ready to be deployed. Every design choice ... can bring unconscious bias.”