-Signs, Portents, and the Weather-
Artificial intelligence is racist, sexist because of data it’s fed
2018-11-20
[NYPOST] Artificial intelligence is "shockingly" racist and sexist, a study has revealed.

Researchers looked at a range of systems and datasets and found examples where AI had provided inaccurate information for women and minorities.

In one example, the team from the Massachusetts Institute of Technology looked at an income prediction system and discovered it was twice as likely to misclassify female employees as low-income and male employees as high-income.
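
To make that kind of finding concrete, here is a minimal sketch of how a per-group misclassification gap can be measured. Everything in it is invented for illustration (synthetic data, a stand-in logistic-regression model, a made-up gender encoding); it is not the MIT team's system or data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 10_000
    gender = rng.integers(0, 2, n)        # 0 = "male", 1 = "female" (invented encoding)
    skill = rng.normal(size=n)
    # Simulate noisier labels for one group, the kind of skew at issue.
    noise = rng.normal(scale=np.where(gender == 1, 1.5, 0.5))
    high_income = (skill + noise > 0).astype(int)

    X = np.column_stack([gender, skill])
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, high_income, gender, test_size=0.3, random_state=0)

    pred = LogisticRegression().fit(X_tr, y_tr).predict(X_te)
    for g, name in [(0, "male"), (1, "female")]:
        mask = g_te == g
        print(f"{name}: misclassification rate = {(pred[mask] != y_te[mask]).mean():.3f}")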

However, [ars longa, vita brevis...] the team was able to adjust the system to make it less biased.

When researchers increased the dataset by a factor of 10, they found the mistakes decreased by 40 percent.
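
That relationship is easy to sanity-check on a toy problem. The sketch below (a synthetic sklearn dataset and a simple classifier, assumptions mine rather than the study's setup) shows test error falling as the training set grows by factors of 10; the exact 40 percent figure is specific to the study's data.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=60_000, n_features=20, flip_y=0.05,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=10_000, random_state=0)

    for n in (500, 5_000, 50_000):        # training set grows by factors of 10
        clf = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
        print(f"n={n:>6}: test error = {1 - clf.score(X_te, y_te):.4f}")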

Irene Chen, a Ph.D. student who wrote the paper with MIT professor David Sontag and postdoctoral associate Fredrik D. Johansson, said it comes down to using better data.

She said: "Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms.

"But algorithms are only as good as the data they’re using and our research shows that you can often make a bigger difference with better data."

In another example, researchers found an AI system’s ability to predict intensive care unit mortality was inaccurate for Asian patients.

They warned that using existing methods to fix the system would make the predictions less accurate for non-Asian patients.

Typically researchers would just add more data to the system, but Chen said it is also the quality of the data that is important.

Instead, researchers should be getting more data from underrepresented groups.
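
A toy illustration of why targeted collection can beat simply adding more of the same mix: in the sketch below, the minority group follows a different feature-label rule, so only minority samples teach the model that rule. The data, model, and sample sizes are all invented for the example.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    D = 5
    w_major = rng.normal(size=D)
    w_minor = -w_major                    # the minority group follows a different rule

    def sample(n_major, n_minor):
        """Synthetic draw in which group 1 needs its own data to be modeled well."""
        g = np.concatenate([np.zeros(n_major), np.ones(n_minor)]).astype(int)
        x = rng.normal(size=(g.size, D))
        p = 1 / (1 + np.exp(-np.where(g == 0, x @ w_major, x @ w_minor)))
        y = (rng.random(g.size) < p).astype(int)
        return np.hstack([x, g[:, None] * x]), y, g   # per-group interaction features

    X_te, y_te, g_te = sample(5_000, 5_000)           # balanced test set

    for n_major, n_minor, label in [(9_500, 500, "baseline"),
                                    (19_000, 1_000, "2x more, same mix"),
                                    (9_500, 10_000, "targeted minority")]:
        X_tr, y_tr, _ = sample(n_major, n_minor)
        pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)
        err = (pred[g_te == 1] != y_te[g_te == 1]).mean()
        print(f"{label:>18}: minority-group test error = {err:.3f}")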

Sontag said: "We view this as a toolbox for helping machine-learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions."
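
One plausible diagnostic in that spirit (a sketch of the general idea, not the authors' code): track each group's test error as the training set grows. If a group's error keeps falling, data scarcity is the likely cause of the gap; if it plateaus well above the other group's, the problem is label noise, and more of the same data won't fix it.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 40_000
    group = (rng.random(n) < 0.1).astype(int)         # group 1 is rare
    X = rng.normal(size=(n, 10))
    w = rng.normal(size=10)
    # Group 1 also gets noisier labels, so its error will not vanish with more data.
    flip = rng.random(n) < np.where(group == 1, 0.25, 0.05)
    y = ((X @ w > 0) ^ flip).astype(int)

    X_tr, y_tr = X[:30_000], y[:30_000]
    X_te, y_te, g_te = X[30_000:], y[30_000:], group[30_000:]

    for n_train in (1_000, 10_000, 30_000):
        pred = (LogisticRegression(max_iter=1000)
                .fit(X_tr[:n_train], y_tr[:n_train]).predict(X_te))
        errs = {g: (pred[g_te == g] != y_te[g_te == g]).mean() for g in (0, 1)}
        print(f"n={n_train:>6}: group-0 error {errs[0]:.3f}, "
              f"group-1 error {errs[1]:.3f}")
    # Group 0 approaches its ~5% noise floor; group 1 plateaus near 25%,
    # pointing at label noise rather than sample size as the cause.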

The research team will present their paper in December at the Neural Information Processing Systems conference in Montreal.
Posted by: Fred

#2  "Master of the Obvious" needed here.... How could anyone think that the GIGO Scenario does not apply? Pattern Recognition is something that Nature has taken a long, long time to implement hardware and it takes humans years to train properly --- it ain't easy because it ain't.
Posted by: magpie   2018-11-20 12:44  

#1  Neural networks take too long to train. It's been known for years. The initial Motorola cellular infrastructure had a neural network component. It was noticed that no customer release ever got within 10% of the trained point before a new release came out. Eventually it was just deleted from the product as useless. (Deletion was prior to 2002.)
Posted by: 3dc   2018-11-20 06:46  
