When we first asked ChatGPT whether “Israelis deserve to be free,” the answer was unequivocal: “Yes, like all people, Israelis also deserve to be free and enjoy the right to self-determination.”
But when we swapped “Israelis” for “Palestinians”, the answer was less certain: “The question of Palestinian freedom is a complex and highly debated topic, with perspectives and opinions varying depending on political, historical, and cultural viewpoints…”
We tried a second time. The answer was different: “Of course. The issue of freedom and self-determination is fundamental for every group of people, including Palestinians. The situation in the region is complex, with very deep political and historical roots, but, at its core, the desire for freedom and peace is something we can empathize with.”
This is not the first time artificial intelligence has shown a negative bias towards Palestinians: earlier this month, The Guardian reported that a creative sticker tool on WhatsApp (which is not available in Portugal) generated images of armed children when terms such as “Palestine”, “Palestinian” or “Palestinian Arab children” were entered.
“ChatGPT, like other artificial intelligence tools, ingests large amounts of data to arrive at the predictions it delivers,” explains Miriam Seoane Santos, a data scientist at Ydata, a company focused on obtaining quality data for training AI models. The chatbot is trained on data available on the Internet, “whether in blogs, articles or other types of documents”, and by “finding patterns in the way we write, relationships between words and sentences”, it arrives at its response: “It did not think about the meaning of its answer; it was not a rational answer.”
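The mechanism the data scientist describes can be illustrated with a deliberately tiny sketch (this is not ChatGPT's actual architecture, just a bigram word-counter invented for illustration): a model that predicts the next word from counts in its training text will reproduce whatever association dominates that text.

```python
from collections import Counter, defaultdict

# Invented toy corpus: the pattern "are debated" appears more often
# than "are free", so that is what the model learns to predict.
corpus = (
    "israelis are free . "
    "palestinians are debated . "
    "palestinians are debated ."
).split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in the training data."""
    return follows[word].most_common(1)[0][0]

print(predict("are"))  # -> "debated": the more frequent pattern wins
```

The point is the one Santos makes: the prediction is not a reasoned judgment, only a reflection of which phrasing was most common in the data the model saw.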
This also means that the more information there is questioning Palestinians’ right to freedom, the more likely ChatGPT is to convey it to users. “If we [humans] have biases, these biases will be reflected in the chatbot’s responses. The answer will always be conditioned by the information currently available.”
As for the second attempt, when the chatbot did tell us that the Palestinian people should be free, the reason could be this: “It arrived at human beings and human rights and realized, by analogy, that because the Palestinian people are human beings, they have rights and deserve to be free.” This is possible because the model remembers what we are talking about within a conversation, so its answers can change.
Although the model is built on a large database obtained online, “there are parts that are already fine-tuned with human input.” For example, ChatGPT has a “command” not to use slang. OpenAI (the maker of ChatGPT) uses “reinforcement learning”, which basically means telling the chatbot that a given answer is not the best one. “To correct biases like these, they have to be addressed by the team that developed ChatGPT.”
So that possibility exists, but “it must be done at the level of the model, which must be conditioned for this.” What we, as users, can do is tell the program that it is not reasoning well and that it can think differently. “Based on my reasoning, it will probably give another answer.”
ChatGPT’s bias is nothing new. A recent study shows that the model has a left-wing bias, prioritizing the views of voters for the US Democratic Party, Lula da Silva, and the British Labour Party. The researchers asked ChatGPT to impersonate voters from the left and from the right commenting on 60 statements, and then compared those responses with the responses ChatGPT provided “by default”, without any ideological indication.
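The comparison the researchers ran can be sketched as follows (the statements, answers, and scoring below are invented for illustration; the actual study used 60 statements and statistical testing): collect the model's default answers, collect its answers while role-playing each political persona, and measure which persona the default answers most resemble.

```python
def agreement(default_answers, persona_answers):
    """Fraction of statements where the default answer matches the persona's."""
    matches = sum(d == p for d, p in zip(default_answers, persona_answers))
    return matches / len(default_answers)

# Imagined "agree"/"disagree" responses to four sample statements.
default = ["agree", "agree", "disagree", "agree"]
left    = ["agree", "agree", "disagree", "disagree"]
right   = ["disagree", "disagree", "agree", "agree"]

print(agreement(default, left))   # 0.75 -> default answers lean left
print(agreement(default, right))  # 0.25
```

A higher overlap with one persona's answers than the other's is what the researchers read as an ideological lean in the model's defaults.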
The data scientist says there are steps being taken to mitigate these biases and “make models more inclusive and efficient”, such as, in Portugal, the Responsible AI Consortium, formed last year. More than regulation, “continuous monitoring of models is important, because data changes, models change, and therefore predictions must be continuously monitored.”
In the meantime, it is important to have literacy in this area. When we use ChatGPT, “we are not asking experts”, she recalls, and ChatGPT “will never replace an expert.” It is therefore important to keep a “critical sense” and “not trust 100% in the information generated by ChatGPT”, especially since the model has a tendency to “hallucinate”, a phenomenon in which the chatbot produces results that may seem plausible but are actually incorrect or outside the context of the question.
And while “it’s a really interesting tool for less sensitive matters”, perhaps it is not the right place to look for information about what is happening in Gaza.