Today’s AI systems are remarkable and exciting, and they offer the promise of substantial benefits to society. They also create risks: some are well understood, while others we are still learning about as the technology advances. Vigilance and adaptability are central to managing those risks.
Major AI developers have already voluntarily agreed to publish safety and security protocols that describe how they assess, test for, and mitigate the potential for severe risks associated with their models.
We believe the AI ecosystem will be stronger, more secure, and more robust if large AI developers are legally required to do so; if whistleblowers who reveal unsafe or noncompliant practices are protected from retaliation; and if developers have clear incentives to mitigate risk in line with industry best practices. We are advocating in state and federal legislatures to put these principles into practice.
If you are interested in partnering with us or want to learn more about our work, please visit our contact page or email us at info@secureaiproject.org.