Kris is Co-Founder and Chief Security Officer at Egnyte, responsible for the company’s security, compliance and core infrastructure.
Generative AI tools such as ChatGPT represent one of the latest tech waves to proliferate among businesses and consumers alike, used for everything from summarizing a multi-page contract to writing a poem for a loved one.
According to IDC, worldwide spending on AI is expected to reach $154 billion in 2023. And in a recent Gartner poll, 45% of executive leaders indicated that the rise of ChatGPT has prompted them to increase their AI investments.
Generative AI can be especially beneficial for small- to mid-sized businesses (SMBs) with limited resources or personnel. However, it also raises concerns about the security and privacy of data, whether a user unknowingly uploads sensitive information or a cybercriminal produces an authentic-looking phishing email.
As SMBs explore the potential of generative AI, it is imperative that they also take into account the data security and privacy implications. For example, organizations need to be able to distinguish between human- and AI-generated content, a capability Google has announced it is working on. Below are a few other key considerations.
Stay up to date with data privacy regulations.
Maintaining compliance with data privacy requirements is an area of concern when it comes to generative AI. Regulations like the European Union’s General Data Protection Regulation (GDPR) are intended to prevent personal data from being processed without the consent of the data subjects. These regulations are actively enforced, with organizations facing fines and regulatory action for noncompliance.
In March 2023, the Italian Data Protection Authority ordered ChatGPT maker OpenAI to temporarily stop processing Italian users’ personal information, citing that this data was collected unlawfully.
Meta was also fined $1.3 billion by European Union regulators for data transfers to the U.S., the largest fine for violating GDPR to date (since then, there has been an adequacy decision for the EU-U.S. Data Privacy Framework regarding data sharing).
As such, SMBs should carefully review data privacy regulations and how they apply to their respective businesses. While the European Union and others have taken steps to regulate generative AI, it is in organizations' best interest to act proactively now to protect their data, especially as global data privacy laws expand and strict enforcement continues.
The reality is that some SMBs may not be able to afford the fines or loss of brand reputation that could result from noncompliance.
On another note, it is encouraging to see the National Artificial Intelligence Advisory Committee’s (NAIAC’s) May 2023 report, which recommends that U.S. government agencies adopt the NIST Artificial Intelligence Risk Management Framework toward creating a safe and “responsible” AI that will emphasize “human centricity, social responsibility, and sustainability.”
Ask yourself these questions.
When determining whether generative AI is right for their business, SMBs should make sure they ask the following types of questions.
• How will my data be classified?
• What are the potential risks of sensitive data being exposed?
• What are the potential risks of data privacy violations?
• What measures should be taken to remove potential bias from the AI model?
• How will my data be leveraged for other learning models?
• What company policies should be implemented to ensure AI is used responsibly and securely?
Some organizations may decide that the risks of generative AI outweigh the benefits, or opt to develop internal tools instead. For instance, in May 2023, Samsung banned the use of ChatGPT and other external generative AI services after employees accidentally uploaded sensitive code to the platform. The company was concerned about the security risks of data stored on external servers, so it is now working on an in-house AI solution for employees.
This goes to show that generative AI is not a one-size-fits-all solution.
Cybersecurity hygiene is still mission-critical.
Generative AI has created more data complexity than ever before, but managing and securing increasing volumes of content has already been a challenge for organizations over the past couple of years—especially for SMBs—as evidenced by a surge in cyberattacks.
Therefore, it’s important that companies continue to practice good cybersecurity hygiene, including:
• Providing employees with the minimum level of system access needed to do their jobs (the principle of least privilege).
• Deleting or archiving data that is no longer needed while also disabling inactive accounts.
• Conducting ongoing cybersecurity training for all employees, specifically focusing on social engineering and phishing attacks.
• Encouraging employees to speak up if they see a potential IT security issue. In the case of generative AI, organizations should be on the lookout for instances of shadow IT.
Stay vigilant and be responsible.
Simply put, generative AI is here to stay. While it may still be seen as a shiny new object, the data privacy and security concerns surrounding it are real. SMBs should approach generative AI cautiously, experimenting with particular use cases and understanding the potential risks before making a long-term decision on whether the technology makes sense for their business.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.