As you have likely noticed over the past few months, there has been a frenzy of concern about the ethical risks of new AI approaches, especially generative AI and OpenAI's ChatGPT.
The Vector Institute, a globally renowned AI institute headquartered in Toronto, Canada, has just released its updated AI Ethical Principles. The principles build on international themes gathered from multiple sectors and reflect the values of AI practitioners in Vector's ecosystem, across Canada, and around the world.
See the list below, distributed by Vector's president, Tony Gaffney, just a few minutes ago.
1. AI should benefit humans and the planet.
We are committed to developing AI that drives inclusive growth, sustainable development, and the well-being of society. The responsible development and deployment of AI systems must consider equitable access to them along with their impact on the workforce, education, market competition, environment, and other spheres of society. This commitment entails an explicit refusal to develop harmful AI such as lethal autonomous weapons systems and manipulative methods to drive engagement, including political coercion.
2. AI systems should be designed to reflect democratic values.
We are committed to building appropriate safeguards into AI systems to ensure they uphold human rights, the rule of law, equity, diversity, and inclusion, and contribute to a fair and just society. AI systems should comply with laws and regulations and align with multi-jurisdictional requirements that support international interoperability for AI systems.
3. AI systems must reflect the privacy and security interests of individuals.
We recognize the fundamental importance of privacy and security, and we are committed to ensuring that AI systems reflect these values appropriately for their intended uses.
4. AI systems should remain robust, secure, and safe throughout their life cycles.
We recognize that maintaining safe and trustworthy AI systems requires the continual assessment and management of their risks. This means implementing responsibility across the value chain throughout an AI system’s lifecycle.
5. AI system oversight should include responsible disclosure.
We recognize that citizens and consumers must be able to understand AI-based outcomes and challenge them. This requires the responsible transparency and disclosure of information about AI systems – and support for AI literacy – for all stakeholders.
6. Organizations should be accountable.
We recognize that organizations should be accountable throughout the life cycles of AI systems they deploy or operate in accordance with these principles, and that government legislation and regulatory frameworks are necessary.
The Vector Institute’s First Principles for AI build upon the approach to ethical AI developed by the OECD. Along with trust and safety principles, clear definitions are necessary for the responsible deployment of AI systems. As a starting point, the Vector Institute recognizes the Organisation for Economic Co-operation and Development (OECD) definition of an AI system. As of May 2023, the OECD defines an AI system as follows:
“An AI system is a machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives. It uses machine and/or human-based data and inputs to (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through analysis in an automated manner (e.g., with machine learning), or manually; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate with varying levels of autonomy.”
Vector also acknowledges that widely accepted definitions of AI systems may be revised over time; the rapid development of AI models can change both expert insight and public opinion on the risks of AI. Through Vector’s Managing AI Risk project, the Institute collaborated with many organizations and regulators to assess several types of AI risk, and these discussions informed the language around risks and impact in the principles.
The dynamic nature of this challenge means that companies and organizations must be prepared to revise their principles as AI technology changes.
Research Notes of Interest
- According to a white paper from the Berkman Klein Center for Internet and Society at Harvard, the OECD’s statement of AI principles is among the most balanced approaches to articulating ethical and rights-based principles for AI.
- AI labs working on AI ethical issues include Mila in Montreal, the Future of Humanity Institute at Oxford, the Center for Human-Compatible Artificial Intelligence at Berkeley, DeepMind in London, OpenAI in San Francisco, and the Machine Intelligence Research Institute in Berkeley, California.
- Other research groups include: AI Safety Support, which works to reduce existential and catastrophic risks from AI; the Alignment Research Center, which works to align future machine learning systems with human interests; Anthropic, an AI safety and research company working to build reliable, interpretable, and steerable AI systems; the Center for Human-Compatible Artificial Intelligence; the Center on Long-Term Risk, which addresses worst-case risks from the development and deployment of advanced AI systems; and DeepMind, one of the largest research groups developing general machine intelligence in the Western world.
- OpenAI was founded in 2015 with a goal of conducting research into how to make AI safe.
- Redwood Research conducts applied research to help align future AI systems with human interests.
- Helpful AI Reading List
Research Source Acknowledgements
The six AI Ethical Principles on the Vector Institute website can be found here and were a major research source for this article.
Read the full article here