Implement AI without data risks
Tuesday, September 3, 2024
Richard Harris
Andrew Smith, Kyocera's CISO, shares his top five tips for implementing AI without data risks, so that any organization can safely leverage AI without compromising security. As organizations continue to adopt generative AI, outdated security protocols pose significant risks, making guidelines like these essential for data protection.
The Gen AI bubble might not be growing as quickly as it was in 2023, but as adoption continues apace, organizations across the globe are still being caught out by outdated security protocols.
Tips to implement AI without data risks
To combat the risks associated with AI and to help more organizations take advantage of it, Andrew Smith, CISO for Kyocera Document Solutions UK, has shared his top five tips for making sure any organization can implement AI without putting its data security at risk:
Tip 1: Avoid using personal or proprietary information in Gen AI LLMs
It is not widely understood how and where data is used by generative AI models. End users often do not appreciate the sensitivity of the data they are uploading because they are focused on the output the technology can generate. The right approach for business leaders is not to restrict AI use outright, which only drives shadow use, but to educate users on how to use AI safely and to provide AI models that are approved for use in the business domain.
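As a rough illustration of what that kind of pre-upload check could look like, here is a minimal Python sketch that strips anything resembling an email address or phone number from a prompt before it leaves the organization. The patterns and the redact helper are hypothetical placeholders, not any particular vendor's tooling; a production scanner would use a proper DLP or PII-detection service.

```python
import re

# Hypothetical patterns for illustration only; real deployments would rely on
# a dedicated DLP / PII-detection service rather than a few regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace anything that looks like PII before the prompt is sent to an external model."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise the complaint from jane.doe@example.com, phone +44 20 7946 0958."
    print(redact(raw))
    # -> Summarise the complaint from [REDACTED-EMAIL], phone [REDACTED-PHONE].
```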
Tip 2: Create a company policy on AI & Privacy
In my experience, the challenge colleagues face here is the lack of reference material and established best practice to build from. The best source of reference is instead existing best practice in data use, safety, and privacy, applied to how AI is used. That way the core concern, the data being used and generated, is protected by the foundation of well-established data and privacy policies.
Tip 3: Manage data privacy settings
Managing data privacy settings is challenging in this space, with many different web-based AI toolsets being launched on a daily basis.
Our approach is to apply broader data privacy controls and to define data boundaries and approved sources, so that any extraction of data is understood and controlled before it can be uploaded to an insecure destination.
As more private AI tools and models are released, IT has the ability to control the use cases and capabilities of the toolsets while expanding what the technology can deliver. This is where we believe mainstream adoption may be achieved.
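To make the idea of data boundaries concrete, the sketch below shows one possible approach, assuming a simple allowlist of approved AI endpoints. The host names are invented for the example, and in practice this kind of control would typically be enforced at the network or proxy layer rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would enforce this at a secure
# web gateway or proxy, not inside every application.
APPROVED_AI_HOSTS = {
    "ai.internal.example.com",          # private, company-hosted model
    "copilot.approved-vendor.example",  # vetted SaaS tool under contract
}

def is_approved_destination(url: str) -> bool:
    """Return True only if the AI endpoint is on the approved list."""
    return urlparse(url).hostname in APPROVED_AI_HOSTS

for endpoint in ("https://ai.internal.example.com/v1/chat",
                 "https://random-free-ai-tool.example/api"):
    verdict = "allow" if is_approved_destination(endpoint) else "block"
    print(f"{verdict}: {endpoint}")
```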
Tip 4: Regularly change passwords and use data access controls
It is important that companies have strong IT policies that guide and control how users interact with systems, and in particular the rules they must comply with. Modern IT platforms and data loss prevention (DLP) policies and controls give IT greater influence over user behaviour, but end-user education is always essential to ensure the best possible protection for corporate IT systems.
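One way to picture how data access controls might gate what reaches an AI tool is a simple classification-versus-role check, sketched below with made-up classification levels and role mappings. A real organization would drive these values from its IAM and DLP platforms rather than hard-coding them.

```python
from dataclasses import dataclass

# Hypothetical classification levels and role clearances, purely illustrative.
CLEARANCE = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}
ROLE_MAX_LEVEL = {"contractor": 0, "employee": 1, "analyst": 2, "dpo": 3}

@dataclass
class Document:
    name: str
    classification: str  # one of the CLEARANCE keys

def can_share_with_ai_tool(role: str, doc: Document) -> bool:
    """Allow a document into an AI workflow only if the user's role clears its label."""
    return CLEARANCE[doc.classification] <= ROLE_MAX_LEVEL.get(role, -1)

print(can_share_with_ai_tool("employee", Document("press-release.docx", "public")))        # True
print(can_share_with_ai_tool("employee", Document("customer-list.xlsx", "confidential")))  # False
```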
Tip 5: Audit AI interactions and monitor data breaches
The important first step in auditing AI use, and any subsequent data breaches, is to ensure there is strong guidance around permitted use cases and to use working groups that understand how users want to develop business operations with AI.
Depending on the AI use case, and particularly with new private AI models, IT has options for much greater control and insight.
For data breaches, it is essential to use IT controls alongside industry-leading cybersecurity toolsets to monitor and spot potential data leaks or breaches.
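As an illustration of what auditing AI interactions could involve, the sketch below logs who queried which model and when, recording only a hash of the prompt so the audit trail does not itself become a store of sensitive data. The function and field names are assumptions for the example, not a specific product's API.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def log_ai_interaction(user: str, model: str, prompt: str) -> None:
    """Record who queried which model and when, storing only a prompt hash
    so the audit record itself cannot leak the original content."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }))

log_ai_interaction("a.smith", "internal-gpt", "Draft a response to the tender from ...")
```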