Today’s AI systems are remarkable, exciting, and offer the promise of substantial benefits to society. They also create risks, some of which are well understood and others that we are still learning about as the technology advances. Vigilance and adaptability are central to managing those risks.
Major AI developers have already agreed, on a voluntary basis, to publish safety and security protocols that describe how they assess, test for, and mitigate the potential for severe risks associated with their models. Thanks in part to our work and the work of many others, they are now required to publish such protocols and to follow them if they operate in California or New York.
However, there is still no requirement that companies' safety practices be reasonable or reflect industry best practices, nor that anyone outside the company verify that they are followed. We are advocating in state and federal legislatures for these principles to be put into practice.
Our work is funded by individual donors and nonprofit institutions who believe in our mission. We do not accept corporate funding or funds from foreign governments.
If you are interested in partnering with us or want to learn more about our work, please visit our contact page or email us at info@secureaiproject.org.