OpenAI, the artificial-intelligence research company behind ChatGPT, has disabled a feature of the app after users began generating offensive content with it. The feature, which let users converse with a chatbot trained on a massive dataset of text and code, was disabled on July 1, 2023.
In a blog post, OpenAI said it had disabled the feature “due to safety concerns,” citing reports of users generating “hateful, discriminatory, and violent content” with it.
OpenAI said it is working to improve ChatGPT’s safety and plans to re-enable the feature in the future, but it did not say when.
The move is a reminder of the challenge of developing artificial intelligence that is both safe and useful: systems trained on large datasets of text and code can also reproduce or generate offensive and harmful content.
OpenAI is not the only company to face this challenge. In 2022, Google disabled a feature of its text-generation AI, LaMDA, after users began producing offensive content with it.
As AI systems become more powerful, developers need to anticipate their potential to generate harmful content and build safeguards that help prevent it.