OpenAI and Anthropic have agreed to let the US government access major new AI models before release to help improve their safety.
The companies signed memorandums of understanding with the US AI Safety Institute to provide access to the models both before and after their public release, the agency announced Thursday. The agency says the arrangement will allow it to work with the companies to evaluate safety risks and mitigate potential issues, and that it will provide feedback on safety improvements in collaboration with its counterpart agency in the UK.
Sharing access to AI models is a significant step at a time when federal and state legislatures are considering what kinds of guardrails to place on the technology without stifling innovation. On Wednesday, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which would require AI companies in California to take specific safety measures before training advanced foundation models. The bill has garnered pushback from AI companies, including OpenAI and Anthropic, which warn it could harm smaller open-source developers; it has since undergone some changes and is still awaiting a signature from Governor Gavin Newsom.
US AI Safety Institute director Elizabeth Kelly said in a statement that the new agreements were “just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”