AMSTERDAM — Artificial intelligence has a racial bias problem.
From biometric identification systems that disproportionately misidentify the faces of Black people and minorities, to applications of voice recognition software that fail to distinguish voices with distinct regional accents, AI has a lot to work on when it comes to discrimination.
And the problem of amplifying existing biases can be even more severe when it comes to banking and financial services.
Deloitte notes that AI systems are ultimately only as good as the data they're given: Incomplete or unrepresentative datasets could limit AI's objectivity, while biases in development teams that train such systems could perpetuate that cycle of bias.
Nabil Manji, head of crypto and Web3 at Worldpay by FIS, said a key thing to understand about AI products is that the strength of the technology depends a lot on the source material used to train it.
"The thing about how good an AI product is, there's kind of two variables," Manji told CNBC in an interview. "One is the data it has access to, and second is how good the large language model is. That's why the data side, you see companies like Reddit and others, they've come out publicly and said we're not going to allow companies to scrape our data, you're going to have to pay us for that."
As for financial services, Manji said a lot of the backend data systems are fragmented in different languages and formats.
"None of it is consolidated or harmonized," he added. "That is going to cause AI-driven products to be a lot less effective in financial services than it might be in other verticals or other companies where they have uniformity and more modern systems or access to data."
Manji suggested that blockchain, or