ChatGPT: A wolf in sheep’s clothing
Introduction:
In the realm of artificial intelligence, chatbots and conversational agents have become increasingly prevalent. These AI-powered systems, such as ChatGPT, aim to simulate human-like conversation and provide assistance across many domains.
However, beneath their seemingly helpful and friendly facade lie concerns about the pitfalls and ethical implications of these conversational agents. In this blog, we will look at ChatGPT as a wolf in sheep’s clothing, exploring its limitations, its risks, and the need for responsible AI development.
Generative artificial intelligence
At a time when university professors around the world are grappling with the challenges that accompany generative artificial intelligence, coinciding with the start of the new school year, the term “critical artificial intelligence” is gaining momentum.
Katherine Conrad, a professor of English at the University of Kansas, asserts that these generative technologies are having a significant impact on the world.
She also highlights the ethical challenges they bring, including labor exploitation in the Global South and the potential reinforcement of a Western, Global North perspective, owing to the particular data extracted to train the models.
“I believe that a good grounding in the culture of critical artificial intelligence is necessary for everyone, with an emphasis on the word critical,” she says. Conrad adds that Maha Bali, a professor at the American University in Cairo, coined the term.
Bali is a pioneer in the field of educational technology, and since 2017 she has been lecturing on open education, digital pedagogy, and social justice. While Bali publishes most of her writing on her blog, she has also published two co-authored articles.
She is also among a group of experts around the world driving discussions about critical approaches to technology, and she is one of the most prominent scholars on the subject in the Arab world.
ChatGPT
In a Critical AI talk last March, Bali described how OpenAI created ChatGPT without any transparency, likening it to a “wolf in sheep’s clothing.”
It appears, she says, to be a highly ethical AI, since it declines to answer certain questions that conflict with ethical standards. However, Time magazine published an investigation last January revealing that OpenAI, as part of its effort to keep ChatGPT from producing violent, abusive, or insulting content, asked workers in Kenya, through a contractor, to review large volumes of offensive texts and images in order to flag them.
Mental health problems
These workers are underpaid and suffer from serious mental health problems because of the work they do to make ChatGPT a more ethical artificial intelligence. This is one of the issues the company has not addressed.
Because this labor is largely unknown to the general public, Bali has spoken about it often in private circles and decided to tell as many people as possible. She has stopped using AI for “fun” and only uses it when she is giving a workshop or really needs to test something.
Bali has also talked about the topic with her eleven-year-old daughter, her students, and other teachers, so that they are aware of what is happening, and they feel disgusted by it.
Inequality
In addition, Bali talks a lot about the inequalities that generative artificial intelligence produces, including the fact that it is available in some countries but not others.
She was initially unaware that ChatGPT was unavailable in certain countries, her own among them, and that this was decided by OpenAI, not by the countries themselves.
VPN
To access it, Bali used a virtual private network (VPN) and an incognito browser window. She also asked a friend in the United States to receive the verification code on his personal phone number.
This, of course, leads to inequality in the use of artificial intelligence. Another inequality is that, in certain countries, only some people can pay to use GPT-4. People’s awareness of this, and their ability to use artificial intelligence critically, varies greatly, according to Bali.
American University of Beirut
Bali completed her master’s degree at the American University of Beirut and spent one year at the American University in Cairo. It is worth noting that both institutions have been criticized as somewhat elitist.
This raises questions about whether they employ artificial intelligence researchers and whether discussions about AI take place publicly within them.
The elitism of both universities, according to Bali, lies in the fact that their environments resemble those of many American institutions while differing from the environment outside their walls.
Artificial intelligence
As a result, she finds it somewhat easier to take part in a global conversation about artificial intelligence, since it is sometimes difficult to adapt her talks to audiences at Egyptian public universities.
These universities operate at a different scale (faculty-to-student ratio, level of teacher autonomy, available resources), may not receive the same level of support for educational development, and face greater concerns about academic integrity, according to Bali.
I’m Hassan Saeed, a Clinical Psychology graduate deeply engaged in the realms of WordPress, blogging, and technology. I enjoy merging my psychological background with the digital landscape. Let’s connect and explore these exciting intersections!