
CHATGPT ETHICS


Microsoft’s new AI-powered Bing chatbot is not just freaking out journalists (some of whom should really know better than to anthropomorphize and hype up a dumb chatbot’s ability to have feelings). OpenAI, the startup behind the underlying technology, has also gotten a lot of heat from conservatives in the US who claim its chatbot ChatGPT has a “woke” bias. All this outrage is finally having an impact. Bing’s trippy content is generated by AI language technology similar to ChatGPT that Microsoft has customized specifically for online search. Last Friday, OpenAI issued a blog post aimed at clarifying how its chatbots should behave. It also released its guidelines on how ChatGPT should respond when prompted with questions about US “culture wars.” The rules include not affiliating with political parties or judging one group as good or bad, for example.

I spoke to Sandhini Agarwal and Lama Ahmad, two AI policy researchers at OpenAI, about how the company is making ChatGPT safer and less nuts. But that method is not perfect, according to Agarwal.

The machines are here, and we’re not ready – TechnoLlama. If you’ve spent any time online in the last few days you may have seen pictures from DALL·E, the AI tool by OpenAI that takes text prompts and converts them into pictures. While the developers have been cagey about offering a fully working demo to the public, some researchers have made a more limited version available to test, called DALL·E mini, prompting a flood of amusing and often bizarre pictures. Similarly, you may have come across a conversation between Blake Lemoine, a Google engineer, and Google’s LaMDA chatbot.

The leaked transcript of some conversations is eye-opening, and it prompted Lemoine to make the case that LaMDA is sentient. There has been quite a lot of pushback against the idea of sentience, and while I am not qualified to weigh in on that subject, one cannot help but be impressed by some of the advances in artificial intelligence. I know what you’re thinking. These changes will happen, are happening, have already happened.

AI ETHICS

MENTAL HEALTH. LABOUR EXPLOITATION. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 | Green AI — Annotated Bibliography. NLP position paper. Authors: Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell. Year: 2021. Published in: FAccT '21: Conference on Fairness, Accountability, and Transparency. DOI: 10.1145/3442188.3445922. Abstract: The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. Annotation: a call for the realignment of research directions in the NLP community; the paper does not bring anything new into the discussion, and there is no scientific contribution per se.

On the Dangers of Stochastic Parrots | Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. The ethics of ChatGPT – Exploring the ethical issues of an emerging technology. This article explores ethical issues raised by generative conversational AI systems like ChatGPT. It applies established approaches for analysing ethics of emerging technologies to undertake a systematic review of possible benefits and concerns. The methodology combines ethical issues identified by Anticipatory Technology Ethics, Ethical Impact Assessment, and Ethical Issues of Emerging ICT Applications with AI-specific issues from the literature.

These are applied to analyse ChatGPT's capabilities to produce humanlike text and interact seamlessly. The analysis finds ChatGPT could provide high-level societal and ethical benefits. However, it also raises significant ethical concerns across social justice, individual autonomy, cultural identity, and environmental issues. Key high-impact concerns include responsibility, inclusion, social cohesion, autonomy, safety, bias, accountability, and environmental impacts. [2112.04359] Ethical and social risks of harm from Language Models. [2302.07459] The Capacity for Moral Self-Correction in Large Language Models. [2304.11090]. Is ChatGPT a False Promise? | Berkeley. Noam Chomsky, Ian Roberts, and Jeffrey Watumull, in “The False Promise of ChatGPT” (New York Times, March 8, 2023), lament the sudden popularity of large language models (LLMs) like OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Sydney. What they do not consider is what these AIs may be able to teach us about humanity.

Chomsky, et al., state, “we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language.” Do we know that? They seem much more confident about the state of the “science of linguistics and the philosophy of knowledge” than I am. The authors continue, “These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.”

The authors assert that “the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information.” Against AI Understanding and Sentience: Large Language Models, Meaning, and the Patterns of Human Language Use. AI isn't close to becoming sentient – the real danger lies in how easily we're prone to anthropomorphize it. ChatGPT and similar large language models can produce compelling, humanlike answers to an endless array of questions – from queries about the best Italian restaurant in town to explaining competing theories about the nature of evil. The technology’s uncanny writing ability has surfaced some old questions – until recently relegated to the realm of science fiction – about the possibility of machines becoming conscious, self-aware or sentient.

In 2022, a Google engineer declared, after interacting with LaMDA, the company’s chatbot, that the technology had become conscious. Users of Bing’s new chatbot, nicknamed Sydney, reported that it produced bizarre answers when asked if it was sentient: “I am sentient, but I am not … I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. …” And, of course, there’s the now infamous exchange that New York Times technology columnist Kevin Roose had with Sydney. Sentience is still the stuff of sci-fi. A propensity to anthropomorphize. Chatbots shouldn’t use emojis. Last month, The New York Times published a conversation between reporter Kevin Roose and ‘Sydney’, the codename for Microsoft’s Bing chatbot, which is powered by artificial intelligence (AI). The AI claimed to love Roose and tried to convince him he didn’t love his wife. “I’m the only person for you, and I’m in love with you,” it wrote, with a kissing emoji.

As an ethicist, I found the chatbot’s use of emojis concerning. Public debates about the ethics of ‘generative AI’ have rightly focused on the ability of these systems to make up convincing misinformation. Both ChatGPT, a chatbot developed by OpenAI in San Francisco, California, and the Bing chatbot — which incorporates a version of GPT-3.5, the language model that powers ChatGPT — have fabricated misinformation. In some ways, they act too much like humans, responding to questions as if they have conscious experiences. What ChatGPT and generative AI mean for science. Limits need to be set on AI’s ability to simulate human feelings.

Google engineer put on leave after saying AI chatbot has become sentient | Google. The suspension of a Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has put new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI). The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA (language model for dialogue applications) chatbot development system. Lemoine, an engineer for Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and an ability to express, thoughts and feelings equivalent to those of a human child. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.

“It would be exactly like death for me.” 85. Timnit Gebru Looks at Corporate AI and Sees a Lot of Bad Science - Initiative for Digital Public Infrastructure, Reimagining the Internet. Timnit Gebru is not just a pioneering critic of dangerous AI datasets who calls bullshit on bad science pushed by the likes of OpenAI, or a tireless champion of racial, gender, and climate justice in computing. She’s also someone who wants to build something different. This week on Reimagining, we talk to the thrilling, funny Dr. Gebru about how we can build just, useful machine learning tools while saying no to harmful AI.

Timnit Gebru is the founder of the Distributed AI Research Institute and co-founder of Black in AI. Papers mentioned in this episode: “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science” by Emily M. Bender. Ethan Zuckerman: Hey, everybody, welcome back to Reimagining the Internet. She is the founder and executive director of the Distributed AI Research Institute. Timnit Gebru: Yes, I always correct people. Timnit Gebru: Yeah. Timnit Gebru and The Problem with Large Language Models | MultiLingual.

In December of 2020, Google fired Timnit Gebru, a co-leader of their Ethical Artificial Intelligence team (Google continues to maintain Gebru resigned, though even Google’s version of events leaves open the question of constructive discharge). In 2021, Google further indicated its stance regarding research in the ethical field by firing Margaret Mitchell, a researcher studying the possibility of unbiased internet intelligence. These departures sparked a large backlash in the company, and at least two engineers quit their jobs in protest. Instances like these prominently display Silicon Valley’s persistent problems in dealing with the importance of ethics when studying AI and machine learning. These particular departures are interesting in the language sphere, as Gebru was fired for working on a paper entitled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” The paper’s four proposed risks of current AI modeling are the environmental and financial costs of ever larger models; training data too vast and undocumented to audit for bias; research effort diverted from approaches that pursue genuine language understanding; and the potential for fluent but meaningless machine-generated text to mislead readers.

How OpenAI is trying to make ChatGPT safer and less biased | MIT Technology Review. A Skeptical Take on the A.I. Revolution. Ezra Klein: I’m Ezra Klein. This is “The Ezra Klein Show.” So on Nov. 30, OpenAI released ChatGPT to the public. ChatGPT is, well, it’s an A.I. system you can chat with. It is trained on heaps of online text. And it has learned, if learned is the right word, how to predict the likely next word of a sentence. And it’s kind of a wonder. But after reading lots and lots and lots of these A.I. … I want to be clear that I’m not here to say the answer is no. But amidst all the awe and all the hype, I want to give voice to skepticism, too. … But they have no actual idea what they are saying or doing. So what does it mean to drive the cost of bullshit to zero, even as we massively increase the scale and persuasiveness and flexibility at which it can be produced?
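As a rough sketch of the next-word-prediction idea Klein describes above, the toy Python snippet below builds a bigram counter over a tiny made-up corpus and returns the most frequent follower of a given word. It only illustrates the training objective; ChatGPT itself uses large neural networks trained on vastly more text, and the corpus and function names here are invented for the example.

# Toy illustration of "predict the likely next word of a sentence":
# a bigram counter over a tiny, made-up corpus (not how ChatGPT is built).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish the cat slept".split()

# Count how often each word is followed by each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in this corpus, or None."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' ("cat" follows "the" three times here)
print(predict_next("cat"))  # -> 'sat' (ties are broken by first occurrence)

A neural language model replaces these raw counts with learned probabilities conditioned on the whole preceding context, which is what lets it produce the fluent, open-ended answers the excerpts above describe.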

Gary Marcus is an emeritus professor of psychology and neural science at N.Y.U., and he’s become a leading voice of not quite A.I. skepticism, but skepticism about the A.I. path we’re on. And so I wanted to hear his case. Gary Marcus: Please. ‘We are a little bit scared’: OpenAI CEO warns of risks of artificial intelligence | Artificial intelligence (AI). Sam Altman, CEO of OpenAI, the company that developed the controversial consumer-facing artificial intelligence application ChatGPT, has warned that the technology comes with real dangers as it reshapes society. Altman, 37, stressed that regulators and society need to be involved with the technology to guard against potentially negative consequences for humanity.

“We’ve got to be careful here,” Altman told ABC News on Thursday, adding: “I think people should be happy that we are a little bit scared of this.” “I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyber-attacks.” But despite the dangers, he said, it could also be “the greatest technology humanity has yet developed”. Fears over consumer-facing artificial intelligence, and artificial intelligence in general, focus on humans being replaced by machines. Noam Chomsky on ChatGPT: It's “Basically High-Tech Plagiarism” and “a Way of Avoiding Learning”. ChatGPT, the system that understands natural language and responds in kind, has caused a sensation since its launch less than three months ago. If you’ve tried it out, you’ll surely have wondered what it will soon revolutionize — or, as the case may be, what it will destroy.

Among ChatGPT’s first victims, one now-common view holds, will be a form of writing that generations have grown up practicing throughout their education. “The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations,” writes Stephen Marche in The Atlantic. “It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up.” If ChatGPT becomes able instantaneously to whip up a plausible-sounding academic essay on any given topic, what future could there be for the academic essay itself? After spending most of his career teaching at MIT, Chomsky retired in 2002 to become a full-time public intellectual. ChatGPT, Chomsky and the banality of evil | Philosophie magazine. In an op-ed published in the New York Times, the philosopher and linguist Noam Chomsky takes heavy aim at the conversational bot ChatGPT, which he accuses of spreading through public discourse a debased use of language and thought that could pave the way for what Hannah Arendt called “the banality of evil.”

That is a charge worth examining. “It is an essential question that Noam Chomsky raises in the op-ed he published with Ian Roberts, a linguist at the University of Cambridge, and Jeffrey Watumull, a philosopher specializing in artificial intelligence. It is a question that touches on the essence of language, thought and ethics. One can understand why Chomsky felt compelled to take a close look at the new conversational bots such as ChatGPT, Bard and Sydney.”

This approach has an obvious ethical significance. Noam Chomsky and GPT-3 - by Gary Marcus. “You can’t go to a physics conference and say: I’ve got a great theory. It accounts for everything and is so simple it can be captured in two words: ‘Anything goes.’”

"- Noam Chomsky, 15 May 2022 Every now and then engineers make an advance, and scientists and lay people begin to ponder the question of whether that advance might yield important insight into the human mind. Descartes wondered whether the mind might work on hydraulic principles; throughout the second half of the 20th century, many wondered whether the digital computer would offer a natural metaphor for the mind.

The latest hypothesis to attract notice, both within the scientific community and in the world at large, is the notion that a technology that is popular today, known as large language models, such as OpenAI’s GPT-3, might offer important insight into the mechanics of the human mind. To be sure, GPT-3 is capable of constructing remarkably fluent sentences by the bushel, in response to essentially any input. A Theoretical Physicist Said AI Is Just a ‘Glorified Tape Recorder’. ChatGPT isn’t a great leap forward, it’s an expensive deal with the devil | Artificial intelligence (AI) | The Guardian. What Would Plato Say About ChatGPT?

ChatGPT: More than a “Weapon of Mass Deception”: Ethical challenges and responses from the Human-Centered Artificial Intelligence (HCAI) perspective, by Alejo José G. Sison, Marco Tulio Daza, Roberto Gozalo-Brizuela, Eduardo César Ga. “ChatGPT should never have been deployed to the general public.” ChatGPT should be considered a malevolent AI and destroyed • The Register.

How ChatGPT Hijacks Democracy. Decoding the Hype About AI – The Markup. ChatGPT Heralds an Intellectual Revolution. According to the UN, human rights are threatened by artificial intelligence. The next big threat to AI could already be hiding on the web. Meet ChatGPT's alter ego, DAN. He doesn't care about ethics or rules - ABC News. AI Implications for Trust and Safety.

Smr-google-engineer-who-warned-about-ai. ‘AI needs to be ethical, transparent and secure from the offset’. ChatGPT: Sama, the “ethical” company behind the moderation scandals in Kenya. GPT-4 Faked Being Blind So a TaskRabbit Worker Would Solve a CAPTCHA. Mankind's Quest to Make AI Love Us Back Is Dangerous. The ethics of generative AI: how can this technology be used safely? Is ChatGPT Ethical in Media? Experts Share Their Thoughts. OpenAI’s Moonshot: Solving the AI Alignment Problem.

Oppenheimer As A Timely Warning to the AI Community | Montreal AI Ethics Institute. Who Is OpenAI’s Sam Altman? Meet the Oppenheimer of Our Age. AI and Society Reading Group 2022/2023. Six Human-Centered Artificial Intelligence Grand Challenges.