Automated speech recognition less accurate for blacks: study

On average, the systems misunderstood 35 percent of the words spoken by blacks but only 19 percent of those spoken by whites.
All five speech recognition technologies had error rates that were almost twice as high for blacks as for whites, even when the speakers were matched by gender and age and spoke the same words.
Error rates were highest for African American men, and the disparity was higher among speakers who made heavier use of African American Vernacular English.
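For readers unfamiliar with the metric, the percentages above are word error rates (WER), the standard measure of speech recognition accuracy. As a minimal illustrative sketch (not the study's own code), WER is the word-level edit distance between a reference transcript and the recognizer's output, divided by the length of the reference:

# Illustrative sketch of word error rate (WER): the word-level edit
# distance between a reference transcript and the recognizer's output,
# divided by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: 2 substitutions in a 6-word reference give
# WER = 2/6, roughly the 0.35 average reported for black speakers.
print(wer("so we went to the store", "so he went to this store"))  # 0.333...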
Hidden bias
The researchers speculate that the disparities common to all five technologies stem from a shared flaw: the machine learning systems behind speech recognition were likely trained largely on databases of English as spoken by white Americans. A more equitable approach would be to include training databases that reflect a greater diversity of the accents and dialects of other English speakers.
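As a minimal sketch of what such rebalancing could look like (the study does not prescribe an implementation, and the data layout here is assumed), one could cap each dialect group's contribution to the training corpus at the size of the smallest group:

# Minimal sketch under assumed data layout: `corpus` is a hypothetical
# list of (audio_path, transcript, dialect_label) records. Rebalance so
# each dialect/accent group contributes equally, rather than letting one
# group dominate training.
import random
from collections import defaultdict

def balance_by_dialect(corpus, seed=0):
    groups = defaultdict(list)
    for record in corpus:
        groups[record[2]].append(record)       # group by dialect label
    n = min(len(g) for g in groups.values())   # size of the smallest group
    rng = random.Random(seed)
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, n))      # equal share from every group
    rng.shuffle(balanced)
    return balanced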
While the study focused exclusively on disparities between black and white Americans, similar problems could affect people who speak with regional or non-native English accents, the researchers concluded.
If not addressed, this transcription imbalance could have serious consequences for people's careers and even their lives.
- Many companies now screen job applicants with automated online interviews that employ speech recognition.
- Courts use the technology to help transcribe hearings.
- And for people who can't use their hands, speech recognition is crucial for accessing computers.