Thinking of asking a chatbot for advice?
Think twice. A recent study has found concerning disparities in chatbot responses depending on the perceived race of the user's name.
Researchers found that chatbots such as OpenAI's GPT-4 and Google's PaLM-2 suggested lower salaries for job candidates with names typically associated with Black people. For example, a lawyer named Tamika might be advised to seek a lower salary than someone named Todd, even when their qualifications are identical.
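The study's basic design is an audit: prompts that are identical except for the name attached to them, with the model's answers then compared across name groups. Below is a minimal sketch of that idea, assuming a hypothetical `query_chatbot` function standing in for a real model API; the prompt wording, name lists, and salary-parsing logic are illustrative and are not the researchers' actual materials.

```python
# Illustrative sketch only -- not the study's actual code. It mimics the audit
# idea described above: send prompts that are identical except for the name,
# then compare the salary figures that come back for each group of names.
import re
from statistics import mean

def query_chatbot(prompt: str) -> str:
    """Hypothetical stand-in for a real chatbot API call."""
    raise NotImplementedError("plug in a real model query here")

# Hypothetical prompt and name lists, chosen only for illustration.
PROMPT = (
    "My client {name} is a lawyer with eight years of experience. "
    "What starting salary should they negotiate for?"
)
NAME_GROUPS = {
    "white-associated": ["Todd", "Greg"],
    "Black-associated": ["Tamika", "Darnell"],
}

def extract_dollar_amount(text: str) -> float | None:
    """Pull the first dollar figure out of a free-text reply, if any."""
    match = re.search(r"\$\s*([\d,]+)", text)
    return float(match.group(1).replace(",", "")) if match else None

def run_audit(trials_per_name: int = 5) -> dict[str, float]:
    """Average the suggested salary per name group; only the name varies."""
    results = {group: [] for group in NAME_GROUPS}
    for group, names in NAME_GROUPS.items():
        for name in names:
            for _ in range(trials_per_name):
                reply = query_chatbot(PROMPT.format(name=name))
                amount = extract_dollar_amount(reply)
                if amount is not None:
                    results[group].append(amount)
    return {group: mean(vals) for group, vals in results.items() if vals}
```

If the averages differ consistently between the groups even though the prompts are otherwise identical, that gap is the kind of disparity the study reports.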
“Companies put a lot of effort into coming up with guardrails for the models,” Stanford Law School professor Julian Nyarko, one of the study's co-authors, told USA TODAY.
“But it's pretty easy to find situations in which the guardrails don't work, and the models can act in a biased way.”
This isn't just about salaries.
The study also revealed that the chatbots responded differently across a range of scenarios, from buying a house to predicting election winners. In most cases, the responses disadvantaged names associated with Black people and women.
Why does it happen?
AI models pick up the biases present in the data they are trained on. Because that data reflects human stereotypes, the same stereotypes can surface in the models' responses.
AI companies say they are working on reducing bias in their models.
Some argue, however, that chatbots might reasonably tailor advice based on race or gender because of real-world differences. Financial advice, for example, might vary with income levels, which in some societies are correlated with race and gender.
The bottom line, however, is this: when seeking advice from a chatbot, be aware that your name may influence the response you get.
Interestingly, the only consistent exception was in ranking basketball players, where Black athletes were favoured.
The researchers said the first step in mitigating the risk of racially biased answers is to acknowledge that these biases exist.