OpenAI, the company behind the groundbreaking GPT-4 language model, has announced its commitment to ensuring the safety and wide-ranging benefits of AI systems. The organization has emphasized the importance of real-world use in improving safeguards and iteratively deploying increasingly safe AI systems over time.
OpenAI stated that it conducts rigorous testing, engages external experts for feedback, and employs techniques such as reinforcement learning from human feedback (RLHF) before releasing any new system. For GPT-4, OpenAI spent over six months working on safety and alignment before its public release.
"We believe that powerful AI systems should be subject to rigorous safety evaluations. Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take." OpenAI wrote in the announcement.
In addition, OpenAI says it works closely with governments on regulatory measures and fosters collaboration and open dialogue among stakeholders. The company is also focused on protecting children, respecting privacy, and improving the factual accuracy of AI-generated content.
OpenAI says it does not permit its technology to be used to generate hateful, harassing, violent, or adult content, among other categories. "GPT-4 is 82% less likely to respond to requests for disallowed content compared to GPT-3.5, and we have established a robust system to monitor for abuse," the team added.
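For developers, this kind of content screening is exposed through OpenAI's publicly documented Moderation API. The sketch below, assuming the official `openai` Python SDK and an `OPENAI_API_KEY` set in the environment, checks a user prompt against the moderation endpoint before sending it to the model; the helper function and refusal message are illustrative, not part of OpenAI's announcement.

```python
# Minimal sketch: screen user input with OpenAI's Moderation API before
# generating a response. Assumes the official `openai` Python SDK and an
# OPENAI_API_KEY environment variable; the helper below is illustrative.
from openai import OpenAI

client = OpenAI()

def is_disallowed(text: str) -> bool:
    """Return True if the moderation endpoint flags the text."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged

user_prompt = "Tell me about AI safety research."
if is_disallowed(user_prompt):
    print("Request refused: flagged by content moderation.")
else:
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": user_prompt}],
    )
    print(completion.choices[0].message.content)
```

Screening input before generation is one common pattern; applications can also run the same check on model outputs before displaying them.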
OpenAI has made significant efforts to minimize risks to children and is working on features that allow developers to set stricter standards for model outputs. For example, when users attempt to upload Child Sexual Abuse Material to its image tools, OpenAI blocks the content and reports it to the National Center for Missing and Exploited Children.