AI in the Driver's Seat: Bias Risks in Pedestrian Detection
The rise of artificial intelligence is revealing a concerning pattern: biases in AI training data translate into real-world discrimination. From AI recruitment tools sidelining women to bias problems in ChatGPT, the repercussions are widening. Alarmingly, police facial-recognition errors predominantly misidentify Black individuals.
Now, fresh research highlights another bias: self-driving cars' pedestrian detection may be less reliable for people of color and for children because of biases in the underlying AI, potentially raising their safety risks on the road.
A collaborative research project between the UK and China analyzed eight leading pedestrian detectors for biases related to race, gender, and age. The study revealed concerning discrepancies: the detectors were 19.67% more accurate for adults than for children, and 7.52% more accurate for lighter-skinned individuals than for darker-skinned ones.
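To make the finding concrete, here is a minimal sketch of how a fairness audit like this might compare per-group detection rates. The group labels, toy outcomes, and the `detection_rate` helper are all illustrative assumptions, not the study's actual data or methodology.

```python
# Hypothetical sketch of a per-group detection-rate comparison.
# The outcomes below are toy data, not the study's results.

def detection_rate(results):
    """Fraction of labeled pedestrians the detector found."""
    return sum(results) / len(results)

# 1 = pedestrian detected, 0 = missed (illustrative only)
outcomes = {
    "adult":        [1, 1, 1, 0, 1, 1, 1, 1],
    "child":        [1, 0, 1, 0, 1, 1, 0, 1],
    "lighter_skin": [1, 1, 0, 1, 1, 1, 1, 1],
    "darker_skin":  [1, 1, 0, 1, 0, 1, 1, 0],
}

rates = {group: detection_rate(r) for group, r in outcomes.items()}

# Disparity = difference in detection rates between groups
adult_child_gap = rates["adult"] - rates["child"]
skin_tone_gap = rates["lighter_skin"] - rates["darker_skin"]

print(f"adult vs. child gap: {adult_child_gap:.2%}")
print(f"skin-tone gap:       {skin_tone_gap:.2%}")
```

Reported gaps like the study's 19.67% and 7.52% figures are differences of exactly this kind, computed over large labeled test sets rather than a handful of toy samples.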
Jie Zhang of King's College London, a member of the research team, noted that the stakes have risen: where AI bias once meant minorities being unfairly denied services, it can now result in serious physical harm.
The underlying problem lies in the open-source AI systems that businesses commonly use as the foundation for their detectors. The study could not examine proprietary software such as Tesla's because it is confidential, but the systems it did test are built on those same open-source foundations.
The researchers urge legislative action to mitigate biases in self-driving car software, emphasizing the importance of safeguarding individual rights.