When customers ask me what the best way to secure their organisations is, my response is just two words: “zero trust”.
This is the increasingly dominant approach to improving cybersecurity maturity, and it rests on the premise that you can’t trust anything or anyone connecting to or accessing your network, applications and devices. Instead, you assume your systems are already compromised and that everyone is a potential risk. You verify everything, all the time.
That’s a shift away from the “trust but verify” model that long dominated cybersecurity, which assumed that once you were logged in, you could be trusted. That assumption has become unsustainable in a hyper-connected world of increasingly sophisticated and frequent cyber-attacks.
Zero trust implementations are still evolving, and every day at Datacom we are helping our customers apply zero trust principles for better cyber resilience. But we have also, quite suddenly, found ourselves in the era of generative AI.
Services based on large language models (LLMs) from the likes of OpenAI, Microsoft, AWS, Google and Meta are already powering customer service chatbots, writing computer code, and summarising contracts and legal documents.
But our trust model for AI is incredibly immature. Even OpenAI, creator of the generative pre-trained transformer (GPT) technology that underpins ChatGPT, the most rapidly adopted technology in history, can’t fully explain the answers it comes up with.
We’ve been told to expect generative AI systems to make and repeat mistakes, and occasionally to ‘hallucinate’ and generate false information. That doesn’t mean we shouldn’t use these systems: they offer the potential to boost productivity and power compelling new products and services. But we need methods to ensure we can trust what they produce and to protect how we interact with them.
The field of AI needs to go through the evolution cybersecurity did to get to zero trust, but on an accelerated timeline. If the zero trust triangle in cybersecurity rests on verifying devices, users and applications (all actors and actions), then for AI systems it rests on verifying input data, outputs, and the users and machines that access those outputs.
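To make that concrete, here is a rough sketch, in Python, of what a zero trust gate around a generative AI call could look like. The helper names (check_user_entitlement, classify_input, screen_output, call_model) are illustrative placeholders rather than any vendor’s API, and a real implementation would plug into your identity provider, data loss prevention tooling and moderation services.

```python
# A minimal sketch of a zero trust gate around a generative AI call.
# Every helper here is a hypothetical placeholder, not a real vendor API.

from dataclasses import dataclass


@dataclass
class AIRequest:
    user_id: str
    prompt: str


def check_user_entitlement(user_id: str) -> bool:
    # Placeholder: look the user up in your identity provider and confirm
    # they are entitled to use this AI service and see its outputs.
    return user_id in {"alice", "bob"}


def classify_input(prompt: str) -> bool:
    # Placeholder: reject prompts that appear to contain sensitive data,
    # for example via a DLP scan or pattern matching.
    sensitive_markers = ("confidential", "password", "api_key")
    return not any(marker in prompt.lower() for marker in sensitive_markers)


def screen_output(text: str) -> bool:
    # Placeholder: verify the model's answer before it reaches the user,
    # e.g. against approved sources or a moderation filter.
    return len(text.strip()) > 0


def call_model(prompt: str) -> str:
    # Placeholder for the actual LLM call (OpenAI, Azure OpenAI, Bedrock, ...).
    return f"Model response to: {prompt}"


def handle_request(request: AIRequest) -> str:
    # Verify the user, the input data and the output - never just one of them.
    if not check_user_entitlement(request.user_id):
        raise PermissionError("User is not entitled to use this AI service")
    if not classify_input(request.prompt):
        raise ValueError("Prompt appears to contain sensitive data")
    response = call_model(request.prompt)
    if not screen_output(response):
        raise ValueError("Response failed output screening")
    return response
```

The point is the shape of it: the user, the input data and the output are each verified, on every request.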
The best AI minds in the world have developed these incredible LLMs and transformer technologies. But AI is fundamentally still about data stewardship. It’s about being very clear that you understand the data you are feeding into an AI system and whether it is appropriate for the intended use case. Determining who should have access to that data, and to the outputs generated from it, is just as essential.
There’s a fairly good understanding in the business community that feeding proprietary and sensitive data into AI systems like ChatGPT is a bad idea: the data can be absorbed by the underlying model and potentially exposed in the results served up to other users.
But even when your data is ‘ring-fenced’, there are still a host of questions you need to ask about what data AI systems should draw on and when. What data will be relevant to the intended outcome? Do we have permission to use the data? Are there likely to be biases in there? Could the data be unintentionally exposed?
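One way to keep those questions from being skipped is to record the answers as an explicit, auditable check before any data source is used. The sketch below is purely illustrative; the DataSource fields and approve_for_ai helper are assumptions, not a standard tool.

```python
# A minimal sketch of a pre-ingestion data review, recorded as a simple check
# that must pass before a data source is fed to an AI system.

from dataclasses import dataclass


@dataclass
class DataSource:
    name: str
    relevant_to_use_case: bool    # Will this data help produce the intended outcome?
    permission_granted: bool      # Do we have the right to use it for this purpose?
    bias_reviewed: bool           # Has someone looked for likely biases?
    exposure_risk_assessed: bool  # Could it be unintentionally exposed in outputs?


def approve_for_ai(source: DataSource) -> bool:
    """Only approve a data source once every question has been answered positively."""
    return all((
        source.relevant_to_use_case,
        source.permission_granted,
        source.bias_reviewed,
        source.exposure_risk_assessed,
    ))


hr_records = DataSource("hr_records", True, True, False, True)
assert not approve_for_ai(hr_records)  # blocked until the bias review is done
```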
We need to constantly verify and check the data fed into AI systems, and to do the same with the outputs they serve up. This can’t be a one-off exercise: outputs need to be monitored over time. The AI system in the middle may, essentially, be a black box to many organisations, particularly those using systems from third-party vendors like OpenAI, Microsoft or AWS.
But we can take a zero trust approach to data stewardship and to verifying exactly who has access to the data at every step of the process. That requires human oversight, though tools are increasingly available to help automate parts of it.
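As a rough illustration of what “not a one-off” might look like in practice, the sketch below logs every AI response and flags a sample for human review. The sampling rate and record format are assumptions; in a real deployment the records would feed a SIEM or audit store, and the review queue would sit with your security or data governance team.

```python
# A minimal sketch of ongoing output monitoring: log every response and flag a
# random sample for human review. Printing keeps the sketch self-contained.

import json
import random
import time


def monitor_output(user_id: str, prompt: str, response: str,
                   review_rate: float = 0.05) -> None:
    """Record an AI response and flag roughly review_rate of them for human review."""
    record = {
        "timestamp": time.time(),
        "user_id": user_id,
        "prompt": prompt,
        "response": response,
        "flagged_for_review": random.random() < review_rate,
    }
    print(json.dumps(record))


monitor_output("alice", "Summarise this contract", "Here is a summary of the key clauses...")
```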
“Is a zero-trust AI mentality always necessary?” IEEE Fellows Phil Laplante and Jeffrey Voas asked in an editorial for Computer last year, before the arrival of GPT-4.
“No. But we are suggesting that you should extend this instinctual, self-preserving distrust to AI. And we suggest that any critical AI-based product or service should be continuously questioned and evaluated; however, we acknowledge that there will be an overhead cost in doing so.”
At Datacom, we are helping our customers make the most of AI while ensuring the process of protecting their systems and data is manageable and affordable. We’ve kicked off a series of proof-of-concept projects in the AI space. One explores uses of Microsoft’s new AI-powered Copilot product, which will soon be in the hands of millions of users.
In our cybersecurity practice, we are looking at it through a risk lens: examining data access and potential leakage, and making sure verification can take place continuously so that AI tools serve up trustworthy results to the right people. This work is giving us insights into the opportunities and the challenges products like Copilot represent.
AI is also changing the face of cybersecurity itself. We’ve already seen extensive use of AI to automate network, device and data security. Generative AI will allow for faster and more intuitive security assessments and help tackle the cyber skills shortage.
On the flipside, AI is being adopted by the threat actors trying to exploit our customers’ networks, devices, data, and the identities of their employees. There are already ChatGPT equivalents for hackers. For us to keep up, our use of AI has to augment our response, making us faster and more effective.
The onus is on us, our customers and the vendors we partner with to harness AI to stay ahead of the threat actors who are employing it to their own ends.
The zero trust approach to cybersecurity is helping us do that. It will serve us equally well in the world of AI.