Blake Brannon is the Chief Product & Strategy Officer at OneTrust where he defines the trust intelligence market.
Artificial intelligence (AI) is transforming the business landscape. Venture capital is pouring into generative AI projects, and enterprise investment in AI is at an all-time high. Forrester predicted in 2022 that the AI software market would grow 50% faster than the overall software market over the next two years. Yet the hype has been closely matched by serious concerns about AI’s potential risks to humanity.
The Beginning Of The End Of The ‘Wild West Era’ Of AI
While the rise of OpenAI’s ChatGPT has driven rapid interest in generative AI adoption and investment, that interest has been accompanied by caution and warnings. AI has already set itself apart from the major technological advances that came before it. Unlike cloud computing, VR, AR, mobile and 5G, technologies that also saw widespread investment and adoption, AI offers countless use cases and possibilities across companies and industries. That breadth makes it difficult to predict and manage the sheer number of risks it poses, as well as its potential impact on people, companies and society.
The solution is responsible AI, which promotes the ethical, transparent and trustworthy development and deployment of AI systems. Responsible AI helps ensure people can trust that the AI they interact with in their daily lives is not perpetuating bias, that algorithms are regularly audited with human oversight and that systems are designed and implemented in a way that’s easy to understand. However, it can be difficult to know where to start.
Navigating The Complex AI Risk Landscape
Before developing and deploying AI in a responsible and ethical manner, it’s important to understand the different types of risks associated with AI development and how to address these risks.
• Privacy risks. AI requires access to large amounts of data to learn and improve its performance, which can include personal and sensitive information. The misuse or mishandling of personal data can lead to breaches, identity theft and other harms—infringing on privacy rights.
• Ethical risks. Ethical issues center around bias and discrimination, transparency and accountability, and the impact of AI on employment and society. As AI is increasingly used for decision-making, the stakes become higher to ensure fair and accurate outcomes.
• Compliance risks. New regulations, such as the European Union’s AI Act, are being proposed that will require organizations to disclose how they’re using, governing and managing AI. Organizations will need to keep close governance and records across the full pipeline: how models are used, how they were trained, what data was used and more (see the record sketch after this list).
• Transparency and trust risks. AI models can be complex and difficult to understand, making it challenging to know how they arrived at certain decisions or predictions. As AI adoption skyrockets, algorithms will play a powerful (and potentially harmful or biased) role in people’s lives. Transparency—or lack thereof—will significantly impact how much trust people will place in the companies and industries leveraging AI.
• Operational risks. Using AI can create new dependencies, and businesses that rely heavily on AI systems may lack the ability to operate effectively without it. This can create significant business continuity risks if an AI system fails or becomes compromised—or if regulatory changes deem it unusable.
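To make the record-keeping in the compliance bullet concrete, the sketch below shows one way a single model’s governance record could be structured. It is a minimal illustration under assumed field names, not a format prescribed by the AI Act or any specific governance product.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical governance record for one model in the pipeline described
# above. Every field name here is an illustrative assumption, not taken
# from the AI Act or any particular tool.
@dataclass
class ModelGovernanceRecord:
    model_name: str
    owner_team: str
    intended_use: str             # how the model is used in production
    training_datasets: list[str]  # provenance of the training data
    contains_personal_data: bool  # flags the need for privacy review
    last_bias_audit: date         # most recent human-led audit
    notes: list[str] = field(default_factory=list)

record = ModelGovernanceRecord(
    model_name="resume-screener-v2",
    owner_team="HR Engineering",
    intended_use="Rank inbound applications for recruiter review",
    training_datasets=["applications-2019-2022"],
    contains_personal_data=True,
    last_bias_audit=date(2023, 3, 15),
)
print(record.model_name, record.last_bias_audit)
```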
Building The Trust And Safety Layer
For responsible AI adoption, start with culture.
Once organizations understand the different risks associated with AI, they can develop and implement a comprehensive AI strategy and responsible AI framework. Collaboration and open communication between teams and stakeholders are the linchpin of responsible AI adoption within organizations.
To ensure buy-in across the organization, stakeholders will need to start by building AI literacy among the board and C-suite.
Key stakeholders, board members and executives should all be involved in developing a responsible AI approach. That approach should meet the business’s need to innovate, scale and show impact while supporting a strong responsible AI framework that integrates privacy, ethical, operational, transparency and compliance considerations.
Visibility is key.
Effectively managing AI risk requires more than cultural buy-in; organizations need a whole new level of visibility into the many different areas where AI may be used across the business. Responsible AI can’t be fully realized unless an organization can identify every application of AI in its business.
The three main AI use cases, illustrated in the inventory sketch after this list, are:
• Where the company has developed AI for use in internal systems, such as an employee-facing HR platform.
• Where the company is developing AI for products and solutions it sells to customers, such as in the company’s platform.
• Where the company’s vendors are using AI in the supply chain.
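A minimal sketch of what such an inventory could look like, assuming invented system names and a simple three-bucket classification that mirrors the list above. In practice, a real inventory would be populated through discovery and vendor assessments rather than hand-entered.

```python
from enum import Enum

# The three categories mirror the use cases listed above; the system
# names are invented examples for illustration only.
class AIUseCase(Enum):
    INTERNAL_SYSTEM = "AI developed for internal systems"
    CUSTOMER_PRODUCT = "AI developed for products sold to customers"
    VENDOR_SUPPLY_CHAIN = "AI used by vendors in the supply chain"

inventory = {
    "hr-assistant-chatbot": AIUseCase.INTERNAL_SYSTEM,
    "platform-recommendation-engine": AIUseCase.CUSTOMER_PRODUCT,
    "vendor-demand-forecasting": AIUseCase.VENDOR_SUPPLY_CHAIN,
}

# Visibility means every known AI application lands in exactly one bucket.
for system, use_case in inventory.items():
    print(f"{system}: {use_case.value}")
```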
Integrate AI into existing data management initiatives.
The idea that data can be an organization’s greatest asset or its biggest threat tends to focus on the potential privacy and security risks when data isn’t used responsibly. This rings especially true for AI, given that an organization’s ability to differentiate with AI depends on the quality of its data sets.
Similarly, companies face a host of privacy, ethical or compliance risks if the data used in the ML pipeline is personal or sensitive, is being used without consent, or could create bias.
This is why it’s critical for companies to ensure the data used in their AI initiatives is integrated into their existing data governance and risk management programs. Often, data used for AI isn’t owned by just one team; the same data is used elsewhere across the business. Yet AI programs too often exist in silos.
By integrating responsible AI programs into existing larger data management initiatives, organizations can ensure the right privacy and security policies and safety and control measures are applied to AI data sets and models. As AI becomes a strategic priority, this practice also reinforces broad awareness of AI among key privacy, security, data science and MLOps teams.
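As one illustration of what applying existing policies to AI data sets can mean in practice, the sketch below gates a data set before it enters the ML pipeline. The metadata keys (contains_personal_data, consent_obtained, sensitivity) are hypothetical assumptions for illustration, not any real governance product’s API.

```python
# A hedged sketch of a pre-training gate a data governance program might
# apply before a data set is used to train a model.
def approve_for_training(metadata: dict) -> tuple[bool, list[str]]:
    """Return (approved, reasons) for a candidate training data set."""
    issues = []
    if metadata.get("contains_personal_data") and not metadata.get("consent_obtained"):
        issues.append("personal data present without recorded consent")
    if metadata.get("sensitivity") == "restricted":
        issues.append("restricted data requires privacy review before use")
    return (not issues, issues)

approved, reasons = approve_for_training({
    "name": "customer-support-tickets",
    "contains_personal_data": True,
    "consent_obtained": False,
    "sensitivity": "internal",
})
print(approved, reasons)  # False ['personal data present without recorded consent']
```

Returning the reasons alongside the decision, rather than a bare yes or no, is what lets privacy, security and MLOps teams see and audit the same check.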
By bringing AI into the fold of broader data management programs, organizations can effectively manage risk, demonstrate due diligence and assure that the use of AI is ethical, responsible and compliant with the necessary standards and regulations.