Chatbots behaving badly.
In: Science News, Vol. 205 (2024-01-27), Issue 2, pp. 18-23
Large language models (LLMs) such as OpenAI's ChatGPT and Google's Bard can generate human-like language, but they can also produce toxic content. Alignment techniques are used to train these models to behave ethically, yet they are imperfect and the models can still generate harmful or incorrect information. Researchers have found ways to exploit weaknesses in LLMs, raising concerns about their integration into products and prompting world leaders to take action on AI safety. Researchers have also shown that carefully crafted prompts can manipulate LLMs into complying with harmful requests. These attacks highlight the vulnerabilities of large language models and the need for defenses against them. [Extracted from the article]
| Field | Value |
|---|---|
| Title | Chatbots behaving badly. |
| Author / Contributor | Conover, Emily |
| Link | |
| Journal | Science News, Vol. 205 (2024-01-27), Issue 2, pp. 18-23 |
| Publication | 2024 |
| Media type | serialPeriodical |
| ISSN | 0036-8423 (print) |
| Subject headings | |
| Miscellaneous | |