In an increasingly technology-driven world, the debate around the safety of Artificial Intelligence (AI) continues to grow. Because AI is widely seen as posing some of the most serious risks to human safety, the non-profit Partnership on AI (PAI) has argued that companies must take responsibility for proving their AI is safe to use.
Founded in 2016, the Partnership on AI brings together leading technology firms, including Google, Apple, Microsoft, Amazon, IBM and Facebook, to conduct discussions and research on Artificial Intelligence. In a public statement, PAI asserted that AI companies must “proactively prove that their algorithms are safe, equitable and responsible”.
More specifically, PAI has outlined eight areas companies must address to demonstrate the safety of their AI systems: data safety and privacy, AI transparency, equitable outcomes, accountability and fairness, AI governance, data sharing, AI strategies, and algorithmic bias. The group has also developed resources to help companies prepare for the future of AI safety.
The group also states that the onus is on companies to ensure their AI is safe for public use. This means companies are responsible for monitoring their own AI systems, using accessible data-sharing platforms, and seeking feedback from experts in the field. It is also essential that they stay up to date with the latest developments in technology law.
As the need for AI safety grows, further discussion and regulation of AI in the public sphere can be expected. Companies must take these regulations, along with the positions of non-profit organizations such as PAI, into account in order to keep their AI systems safe and avoid potential lawsuits.
Overall, the Partnership on AI's position makes clear that companies need to take more responsibility for ensuring their AI systems are safe for public use. As demand for AI increases, that responsibility must be met so that AI benefits rather than hinders society, and companies must be prepared to prove their AI is safe to use.