In late May, an image showing thick, black billows of smoke rising near the Pentagon, the headquarters of the U.S. armed forces, popped up on a prominent social media platform.
The image was determined to be a false report of an explosion near the federal building. Local and national officials quickly refuted the claim, but the post still circulated nationally and internationally in investment circles, causing the S&P 500 to drop briefly before rebounding. The image, along with similar images claiming a White House explosion, was likely created using generative AI.
Only days later more than 350 AI experts, public figures and industry leaders warned in an open letter that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
None of this is meant to scare business leaders, but to illustrate a few key points:
- Generative AI is already here, and there’s no turning back. Even reining it in will be difficult. Creating a culture of AI awareness and preparing your team will be critical in navigating uncharted waters.
- Legislative guardrails will take time to develop in the United States. We can’t wait for legislation before creating plans around AI’s use, implementation, security, or disaster response. Companies need to realistically assess threats and build defenses now.
- For bad actors, AI has significantly lowered the barrier to entry. Those with bad intentions who previously lacked the technical know-how to carry out attacks can now engineer something that looks and sounds authentic.
Business leaders should ask themselves, “Is my organization ready, and if not, how can I prepare?”
AI Security Considerations for Enterprises
Writer, a generative AI platform, recently revealed that nearly half of senior executives believe corporate data has been unintentionally shared with ChatGPT, the most widely used generative AI platform among enterprises. These concerns aren’t baseless.
In fact, cybersecurity veteran David Lefever, founder, principal and CEO of The Mako Group and one of Centric Consulting’s business partners, has found that many business leaders today are concerned about a growing number of threats.
Among those is “leaky data,” or the unintentional sharing of information with a third-party system without proper documentation and authorization. This can lead to privacy breaches, invalid and unreliable information, accidental security risks, and other threats.
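One defensive pattern for leaky data is to screen outbound prompts before they ever reach a third-party AI service. The sketch below is illustrative only: the pattern names, regexes, and the `screen_prompt` and `safe_to_send` functions are assumptions, and a production deployment would rely on a proper data loss prevention (DLP) engine tuned to the organization’s data classification policy.

```python
import re

# Illustrative patterns only -- a real deployment would use a dedicated
# DLP engine tuned to the organization's data classification policy.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """Gate a prompt before it leaves for a third-party AI service."""
    return not screen_prompt(prompt)
```

For example, `screen_prompt("Summarize the contract for jane.doe@example.com")` returns `["email"]`, so the prompt would be held back for review rather than sent.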
At a minimum, all AI security plans should include:
- Vulnerability management: Zero-day attacks could become more commonplace as AI enables vulnerabilities to be found more rapidly. As the name implies, this type of attack leaves zero days between the time a vulnerability becomes known and when an attack takes place, so no patch is available when the exploit hits.
- Fraud and threat detection: AI can enable advanced fraud. Fraud and threat detection are key to creating a cybersecurity program that reduces the risk of an attack and minimizes impacts should one happen.
- Continuous penetration testing: Conducting internal and external penetration testing isn’t one and done. Company leaders are starting to realize that even quarterly testing is not enough and that ongoing monitoring is required.
- IP risks: Generative AI poses unique IP risks in that your information could be exposed without your knowledge. Further, if you ask an AI tool to create something and use it, you may be inadvertently infringing on trademarks or copyrights of other companies.
- Monitoring and maintaining compliance: Protecting data isn’t simply an expectation, it’s now becoming law. Monitoring and maintaining compliance are other important considerations in your organization’s overall security strategy.
How to Prepare for AI Security Impacts Now
The best ways to prepare for the security implications of artificial intelligence are to educate, create governance, remain vigilant and plan for recovery in case of a breach:
Create Security Awareness Across Your Organization
With any risk or vulnerability, there’s a software component and a people component. To successfully leverage AI in a secure manner, your organization will have to address both, starting with creating awareness and providing comprehensive and ongoing training for the workforce.
“AI can create such convincing content to the average person that it’s going to be difficult for them to discern what’s real without intensive training,” Lefever said. “Social engineering approaches will become much more sophisticated and convincing, and it will require teaching the workforce to be critical thinkers around security.”
Communicating policies, guidelines, best practices and updates to these living documents will be critical in creating and maintaining a security mindset.
Set Up AI Guardrails and Governance
A key part to creating a security mindset is establishing a governance plan that promotes the responsible and ethical use of AI tools while helping ensure compliance and managing risk in a continually evolving landscape.
- Designate a cross-functional AI governance committee
- Define AI guidelines and best practices
- Establish a decision-making process for using AI
- Audit AI tool usage and monitor performance
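As a minimal sketch of the last step, AI tool usage can be audited from existing web proxy logs by counting requests to known generative AI endpoints. The domain list, log format, and `audit_ai_usage` function below are assumptions for illustration; a real audit would draw on the governance committee’s approved-tools registry and the proxy’s actual log schema.

```python
from collections import Counter

# Hypothetical watchlist of generative AI endpoints; in practice this
# would come from the governance committee's approved-tools registry.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}

def audit_ai_usage(log_lines: list[str]) -> Counter:
    """Count requests per user to known AI domains.

    Assumes a simple space-delimited proxy log: "<user> <domain> <path>".
    """
    usage = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_DOMAINS:
            usage[parts[0]] += 1
    return usage
```

A report like this gives the committee a starting point for conversations about which tools are in use and by whom, rather than a punitive surveillance mechanism.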
Provide a Safe Environment for Your Team to Be Innovative
Every company and employee has a certain level of responsibility when they begin interacting with AI tools. Organizations should promote innovation while ensuring secure collaboration.
In cybersecurity, this is often known as a “sandbox,” a place to execute ideas separate from network resources, production systems and infrastructure that could otherwise be affected. These testing environments can also be used to test solutions or custom code before deploying them to a broader audience.
Companies should not only provide a safe place to explore these tools and their capabilities, but they should also make sure employees know about the space and encourage them to make use of it.
Continually Perform AI Risk Assessments
As AI usage climbs, companies should conduct penetration testing frequently. Leaders must also provide their teams with tools to help distinguish human-generated from AI-generated content.
Keep close tabs on your technology investments and build in tools that screen and flag AI-generated content in communications such as email, a common vehicle for phishing and other scams. Other new and evolving technology can help teams better catch and guard against malware and bad code.
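As a toy illustration of where such screening can slot into a mail pipeline, the snippet below flags two classic phishing indicators: urgency language and links whose display text shows a different domain than the actual target. The word list, regex, and `flag_email` function are assumptions; production tools rely on trained classifiers, not hand-written heuristics like these.

```python
import re

# Hand-picked examples of urgency language common in phishing lures.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}

# Matches an HTML link whose href domain can be compared against the
# URL shown in the link's display text.
LINK_RE = re.compile(r'<a href="https?://([^/"]+)[^>]*>\s*https?://([^<\s]+)')

def flag_email(subject: str, body: str) -> list[str]:
    """Return reasons an email looks suspicious (toy heuristics only)."""
    reasons = []
    text = f"{subject} {body}".lower()
    if any(word in text for word in URGENCY_WORDS):
        reasons.append("urgency language")
    for href_domain, shown_url in LINK_RE.findall(body):
        # Display text claims one destination, href points elsewhere.
        if not shown_url.startswith(href_domain):
            reasons.append("mismatched link")
    return reasons
```

An email screened this way would be quarantined or labeled for the recipient rather than silently delivered, giving the workforce training described earlier something concrete to act on.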
It’s also important to reevaluate current technology you’re using to ensure it can support the sophistication AI brings.
Create an AI Incident and Disaster Recovery Response Plan
No matter how proactive you are with cybersecurity, you’re never 100 percent “safe.” Bad actors can slip past defenses, and even employee mistakes can cause damage at the most diligent companies. CIOs and business leaders must plan for an AI incident and prepare appropriate responses.
Lefever suggested attending or hosting an incident response or disaster recovery tabletop where you present several scenarios and brainstorm together how they might be managed. “Response scenarios should represent realistic but challenging threats, resulting in better communication and more mature controls. The key is to challenge your leadership team to think through scenarios and execute swiftly with a well-planned response,” he said.
Depending on the security framework your company has in place, you may have to not only expand your process or framework but expedite the process to build maturity more quickly than you originally planned.
Security Impacts From AI Aren’t Inevitable
While we will continue to hear about attacks and security breaches from bad actors with malicious intent, it’s important to remember the proverb “an ounce of prevention is worth a pound of cure.”
Governance, defined policies, guidelines and best practices, communication, a clear decision-making process, and regular audits are all ways to help minimize security risks when leveraging AI tools.
Create living documents and response plans with an understanding that they can (and will) change rapidly over the next several years. And don’t be afraid to ask for help from an outside expert. If there’s one thing business leaders have learned over the past three years, it’s that we’re all in this together, and by supporting one another, we can create a secure business landscape where innovation thrives.