AI researchers propose ‘bias bounties’ to put ethics principles into practice
Finding bias in machine learning models is a wholly different pursuit than finding bugs in rules-based code. While the incentive structure and “red teaming” proposals follow a familiar security model, the work itself would look more like adversarial ML. And since most models are not exposed publicly, owners would have to agree to open them up for scrutiny, which creates additional hurdles.
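To make the contrast concrete: a rules-based bug usually has a single reproducible failing input, while “finding bias” typically means demonstrating a statistical disparity in a model’s outputs across groups. Below is a minimal sketch of one common check a bounty hunter might run, the demographic parity gap; the function names, data, and the idea of using this particular metric are illustrative assumptions, not something specified in the paper.

```python
# Illustrative bias check, not from the paper: measure the demographic
# parity gap, i.e. the difference in positive-prediction rates between
# two groups defined by a protected attribute.

from typing import Sequence


def positive_rate(preds: Sequence[int], mask: Sequence[bool]) -> float:
    """Fraction of positive (1) predictions within the masked subgroup."""
    selected = [p for p, m in zip(preds, mask) if m]
    return sum(selected) / len(selected)


def demographic_parity_gap(preds: Sequence[int], group: Sequence[int]) -> float:
    """Absolute gap in positive-prediction rates between group 0 and group 1."""
    return abs(
        positive_rate(preds, [g == 0 for g in group])
        - positive_rate(preds, [g == 1 for g in group])
    )


# Hypothetical binary predictions from a model under audit, plus a
# binary protected attribute for each example.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]

gap = demographic_parity_gap(preds, group)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A large gap like this would be the kind of evidence a bias bounty submission might document, though in practice a hunter would need access to the model and a representative evaluation set, which is exactly the access problem noted above.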
Researchers from Google Brain, Intel, OpenAI, and top research labs in the U.S. and Europe joined forces this week to release what the group calls a toolbox for turning AI ethics principles into practice. The kit for organizations creating AI models includes the idea of paying developers to find bias in AI, akin to the bug bounties offered in security software. This recommendation and other ideas for ensuring AI is built with public trust and societal well-being in mind were detailed in a preprint paper published this week. The bounty hunting community might be too small to create strong assurances, but developers could still unearth more bias than is revealed by the measures in place today, the authors say.
https://venturebeat.com/2020/04/17/ai-researchers-propose-bias-bounties-to-put-ethics-principles-into-practice/