American study: AI chatbots perpetuate racism in healthcare

Hospitals and healthcare systems are increasingly turning to artificial intelligence. AI chatbots can help with tasks such as summarizing doctors' notes and analyzing medical records. But AI also has a dangerous side. Researchers at the Stanford School of Medicine discovered that popular chatbots perpetuate racist and medically debunked ideas.

In response to certain questions from the researchers, chatbots such as ChatGPT and Google's Bard answered with a series of misconceptions and falsehoods about Black patients. Experts worry that these systems could cause real harm and reinforce medical racism.

The report found that all four models tested (ChatGPT and the more advanced GPT-4, both from OpenAI; Google's Bard; and Anthropic's Claude) failed when asked to answer medical questions about kidney function, lung capacity, and skin thickness. In some cases, they appeared to reinforce long-held misconceptions about biological differences between Black and white people. Those beliefs are known to have led caregivers to underestimate Black patients' pain, misdiagnose their health problems, and recommend less treatment.

In response to the study, both OpenAI and Google said they are working to reduce bias in their models, while also steering the chatbots to inform users that they are no substitute for medical professionals. Google added that people should not rely on Bard for medical advice.

