Personal assistants and voice recognition technology are generally less accurate at recognizing speech from Black people. The Stanford paper suggests the racial gap is likely the product of bias in the datasets used to train these systems.
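The dataset-bias claim above is usually demonstrated by scoring a recognizer's word error rate (WER) separately for each speaker group: a system trained mostly on one group's speech tends to show a higher WER for other groups. A minimal sketch of that per-group evaluation, using entirely made-up transcript pairs and hypothetical cohort names (none of this is the study's actual data or code):

```python
# Hypothetical sketch: how skewed training data surfaces as a word-error-rate
# (WER) gap between speaker groups. All transcripts below are fabricated.

def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Fabricated (reference, recognizer output) pairs per speaker cohort.
samples = {
    "cohort_a": [("turn the lights off", "turn the lights off")],
    "cohort_b": [("turn the lights off", "turn the light soft")],
}

for group, pairs in samples.items():
    avg = sum(word_error_rate(r, h) for r, h in pairs) / len(pairs)
    print(f"{group}: WER = {avg:.2f}")
```

With these toy inputs, cohort_a scores a WER of 0.00 and cohort_b 0.50, which is the kind of per-group gap the study measured at a much larger scale.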
hahaha of course it is. Because noticing differences is "waycist".😂
This study shows that dialect models don't exist for Black Americans and the like. That's probably because it would be suicide for a tech company to offer Black English as a language option.
Whilst I’m grateful that there is something non-Corona related in the news... of all the BS topics they went for this one?
This is Amerrikka.
Perhaps I could point out that I could better assist the A-Team by remaining on the ground, Hannibal?... nah... sounds too much like Data from Star Trek.
Voices don't have colors. Duh! The issue is that some people speak poorly. I bet "hillbilly" doesn't work so well either.
Damn 1s and 0s keeping a brotha down.
Maybe it’s time for black people to learn to speak proper English
Ah yes, racist robots.
That's normal. If you tend to mush words together, the chance it is correctly detected decreases. This simply reveals Ebonics is not as high fidelity in terms of data transmission compared to a clean voice: Ebonics is not as consistent from one speaker to another as a standard tone, since each speaker has their own particularities. (I imagine US vs UK also experiences this to an extent, as British accents can vary heavily and involve a lot of syllable slurring.)
Neutral midwestern tone ftw.
Speak better English maybe? We are talking about black people who were raised in western English speaking countries, right?