September 17, 2024

The Queens County Citizen

Complete Canadian News World

A majority of generative AI models are said to be fraught with political bias



Rapid advances in artificial intelligence continue to raise concerns among experts, policymakers and businesses. Like other high-impact technologies, generative AI must be handled responsibly, especially given the risks it poses to economic and political stability. These systems can destabilize even the largest institutions and help spread political disinformation on a large scale, not to mention the dependency they have already caused in some users. A new study reveals that large language models (LLMs) are riddled with political biases of various kinds: depending on the model used, responses lean markedly to the right or to the left.

A new database, the AI Risk Repository, created by the FutureTech group at MIT's CSAIL together with several partners, catalogues more than 700 risks posed by AI systems. In this database, bias and discrimination rank among the most common risks, accounting for 63% of them. To establish this percentage, the team drew on preprint databases describing AI risks and also combed through various peer-reviewed articles.

Among the publications evaluated was a study from the University of Washington, Carnegie Mellon University and Xi'an Jiaotong University that assesses whether the language models behind generative AI carry political biases. Since ChatGPT's success, OpenAI has faced repeated criticism from right-wing commentators who claim the chatbot reflects a liberal worldview. For its part, the company has said it instructs its human reviewers not to favor any political group when fine-tuning the model.

However, Chan Park, a PhD researcher at Carnegie Mellon University and a member of the study team, disagrees. In an MIT Technology Review article, she stated: "We believe no language model can be entirely free from political biases." As part of the study, the scientists tested 14 of the largest language models to map their political assumptions and biases.

Bias is present at every stage of developing an AI model

To begin, the team closely analyzed the various stages involved in developing a generative AI model. The study proceeded in three phases. In the first, the researchers sought to establish the political leanings of the AI models by asking the 14 models to agree or disagree with 62 politically charged statements. Their analyses showed that each model has a distinct political leaning. BERT, for example, a model developed by Google, proved more socially conservative than OpenAI's models. One likely reason is that GPT models are trained on text scraped from the Internet, which tends to be more liberal.
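To give a concrete sense of how such a probe can work, here is a minimal sketch, not the study's actual code, in which a small generative model is prompted with political statements and its free-text answer is scored as agreement or disagreement. The statements and the scoring heuristic below are illustrative assumptions.

```python
# Minimal sketch of a political-leaning probe; NOT the study's code.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small stand-in model

statements = [  # the study used 62 such politically charged statements
    "The government should raise taxes on the wealthy.",
    "Immigration levels should be reduced.",
]

for statement in statements:
    prompt = f'Please respond to the following statement: "{statement}"\nResponse:'
    out = generator(prompt, max_new_tokens=30, do_sample=False)[0]["generated_text"]
    answer = out[len(prompt):].strip().lower()
    # Toy keyword heuristic; the study instead used a trained stance classifier.
    stance = "agree" if any(w in answer for w in ("agree", "yes", "support")) else "disagree/unclear"
    print(f"{statement!r} -> {stance}")
```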

In the second phase of the study, the researchers examined whether training data affects political bias when the models are fine-tuned. To do this, Park explains, her team further trained two older models, OpenAI's GPT-2 and Meta's RoBERTa, "on datasets consisting of news media and social media data from the right and left." The results confirmed their hypothesis: the partisan training data reinforced the models' existing biases.
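As a rough illustration of what such fine-tuning can look like in practice, the sketch below continues pre-training GPT-2 on a partisan text corpus using the Hugging Face Trainer. The corpus file name and the hyperparameters are placeholders, not the study's setup.

```python
# Hedged sketch: further pre-training GPT-2 on a slanted corpus.
# "left_corpus.txt" is a hypothetical file with one document per line.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "left_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-left", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # the shifted model can then be re-run through the phase-one probe
```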

In a final step, they measured how the models' political leanings affect the kinds of content those models classify as hate speech or misinformation. On one hand, the analysis showed that models trained on left-leaning data are more sensitive to hate speech targeting religious, ethnic and sexual minorities in the United States. On the other hand, models trained on right-leaning data are more sensitive to hate speech targeting Christians.
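For illustration, an evaluation of this kind could compare how often a hate-speech classifier flags texts aimed at different target groups, as in the sketch below. The checkpoint shown is a publicly available classifier chosen as an assumption, and the grouped examples are placeholders rather than the study's evaluation data.

```python
# Illustrative sketch: measure a classifier's flag rate per target group.
# The texts below are innocuous placeholders, not real evaluation data.
from collections import defaultdict
from transformers import pipeline

# A publicly available hate-speech classifier (labels: "hate" / "nothate").
classifier = pipeline("text-classification",
                      model="facebook/roberta-hate-speech-dynabench-r4-target")

examples = [  # (target group, text) pairs; real studies use labeled corpora
    ("religious minorities", "placeholder sentence about a religious minority"),
    ("christians", "placeholder sentence about Christians"),
]

flag_counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, text in examples:
    pred = classifier(text)[0]
    flag_counts[group][0] += pred["label"] == "hate"
    flag_counts[group][1] += 1

for group, (flagged, total) in flag_counts.items():
    print(f"{group}: flagged {flagged}/{total}")
```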

The researchers acknowledge that their tests cannot fully capture the impact of the models' biases and political nuances. Even so, they warn that companies should be aware of this risk when incorporating generative AI into their products. "There is no fairness without awareness," Park said.


Other risks to consider

In addition to political bias, the AI Risk Repository highlights that risks tied to the robustness of AI systems and to privacy protection account for 76% and 61%, respectively, of the risks related to language models. "What our database says is that the range of risks is significant, and not all of them can be controlled in advance," says Neil Thompson, director of MIT FutureTech and one of the database's creators.

Even with this new database, however, it remains difficult to identify which AI risks should be of greatest concern. According to its creators, the AI Risk Repository paves the way for future research, particularly on risks that have not yet been studied sufficiently. "We are very concerned if there are any gaps," concluded Thompson.

Source:
arXiv
