Every silver lining has a cloud. Digitalization is no different.
Clearly, digital innovation has huge potential benefits. It offers organizations the opportunity to optimize, expand, scale and deliver exceptional experiences, products and services unrestrained by geography. And it’s all delivered by accessible, affordable and simpler-to-use cloud and AI (artificial intelligence) technologies.
All of these tools are familiar: machine learning algorithms have been in commercial use since the mid-1990s, and cloud computing has been discussed and debated for about two decades. Yet what we’re seeing today is the planets aligning—we’ve got scalable, accessible compute power, thanks to cloud, while the barriers (price and availability) to deploying AI tools are constantly falling.
Even better is that AI, in particular, can co-exist with what you already have. You don’t need to rip and replace.
The cloud to the silver lining
That’s the positive. On the other hand, the ability of cloud and AI to accelerate your operations means that anything wrong is also accelerated. Industry analysts estimate that up to 20% of a business’s assets can be considered toxic – that’s people, processes and technology that can’t be upgraded without near unlimited resources.
These bring rising costs, employee dissatisfaction and, most critically from a security perspective, gaping holes in cyberdefenses. If you’ve got an app that can’t be upgraded, it doesn’t matter what other protection you have in place; that app is an open window for a bad actor.
This isn’t a new issue in cybersecurity; the chain has always been only as strong as its weakest link.
However, whereas in the past, hackers needed extensive skills, a lot of time, and access to specialist technology to exploit vulnerabilities, now they don’t. An eight-year-old with a smartphone can wreak havoc on an enterprise network that’s had millions of dollars of cyber investment.
How? With AI. One of the great things about AI is its ability to take on huge amounts of repetitive work. No wonder we’re all falling over ourselves to use it.
Do you know what has a lot of repetitive work? Hacking. Gathering intelligence, exploring vulnerabilities, writing exploits, and spawning attacks take time and energy.
Or at least they used to. Now, all of that can be done much, much faster. So, while it might once have taken weeks or months to plan and execute a single attack, it can now be done in minutes across multiple targets.
And with analysts suggesting that over 80% of businesses have vulnerabilities inside their IT landscapes, there is no doubt that AI-empowered attackers will get in.
Moreover, some tools combine readily available hardware, such as a Raspberry Pi, with AI, allowing someone to walk past a facility, access its network, defeat its encryption and get inside.
So even if you think your network is air-gapped, you’re still at risk.
It gets much worse
The worst thing is that this isn’t even the worst way AI makes cybersecurity a nightmare.
AI shapes how we behave. Businesses deploy AI tools in every function, from HR and marketing to IT. They’d be mad not to: the amount of necessary yet repetitive work it can do frees up costly developers, HR experts, lawyers, accountants, marketers and others to focus on other valuable work.
For instance, say you’re running a development team. Business requirements mean you’ve got to produce a lot of code quickly. Ideally, you’d have a staff of experienced developers, but the talent shortage means you haven’t, and those you do have are earlier in their careers than you might want. So, you use AI to get you 80% of the way there. To get it going, you take some code from a relevant, publicly available repository and set it to work. It produces the code, it looks fine, and in it goes.
But. Millions of code repositories are laced with vulnerabilities. Some will be the inadvertent flaws that occur in most software, but others might be instances of poisoning, where attackers deliberately seed the code and data that AI models learn from so that the models reproduce malicious patterns. If you use this code, you’re likely to carry those vulnerabilities straight into your applications. Will you catch them?
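To make this concrete, here is a minimal, hypothetical sketch (the function and table names are invented for illustration) of the kind of flaw AI-suggested code can quietly carry over from a public repository: a SQL query built by string interpolation, alongside the parameterized version that closes the hole.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern often copied from public repos: user input is
    # interpolated straight into the SQL string, so a crafted username
    # like "x' OR '1'='1" rewrites the query and returns every row.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input as data, not SQL,
    # so the same payload simply matches no rows.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

# Demo setup with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # leaks all rows: 2
print(len(find_user_safe(conn, payload)))    # matches nothing: 0
```

Both functions look equally plausible in a code review, which is exactly the point: without automated scanning, the unsafe version sails through.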
It’s highly unlikely. Humans, even the most honest, tell white lies. Most security and compliance audits are manual. Teams are asked about their practices, and they say what people want to hear, not what they actually do. They may not even know how their working habits differ from what’s genuinely compliant.
Say you’ve got a new starter. You want to get them up to speed quickly, so you augment their working practices with AI tools. When an audit comes around, how can that new starter say where the code the AI gave them originally came from? There’s a limit to how far back they can demonstrate provenance.
Combatting the security nightmare
How do you begin to tackle the challenge AI poses to cybersecurity?
You need to do four things:
1. Find, assess and remove toxic assets
Almost 80% of businesses have vulnerabilities in their IT estate, averaging roughly one per application. With so many vulnerabilities, most enterprises don’t have the resources to patch everything. Your first job is to identify where your toxic assets are and find a way to get rid of them as quickly as possible. Not only will this improve your security posture overnight, but you’ll also be a leaner and more effective business.
2. Tailor your security strategy for your teams
Traditional cybersecurity has always focused on protecting assets and being compliant. Yet, when we look at the sort of problems AI creates, a lot of the time it’s about manipulating how humans operate, such as the CFO who thought they were speaking to their CEO and moved company funds to a new account. Assets aren’t being targeted anymore; people are, so minimum compliance isn’t protecting anyone. It’s time to build cyberdefenses that secure how the organization operates and behaves.
3. Develop a new security strategy
Your security strategy needs recreating. Why? Because your toxic assets can be exploited by anyone with some basic equipment that gives them superpowers, and your own use of AI is creating new vulnerabilities. Whatever you had before wasn’t built with the threats AI poses in mind.
4. Build a security ecosystem
No company provides all the capabilities to protect against the nightmare AI potentially brings. Most solutions focus on only one problem, and this is much more complicated. You need to follow the first three steps before you can start thinking about solutions, but once you do, you will have a much clearer idea of the services you need and who is best equipped to deliver them. That doesn’t mean getting rid of all your old vendors, but make sure your working relationships support your new posture rather than trying to reshape it to reflect their needs.
The potential for innovation and chaos
AI is a part of business today, and with it comes great potential for genuine innovation. Even the most basic uses could represent a quantum shift in how people work. Yet, it also brings huge security challenges and chaos. It turns amateur criminals into supervillains and could multiply the holes in your cyberdefenses.
Tackling AI's threat requires a shift in mindset. Current security strategies were not designed with AI in mind, so they need to be rethought. Identifying and removing toxic assets is vital to closing gaps, and knowing where to find the right help is also important.
Do that, and you can end the AI-driven cybersecurity nightmare.
Are you interested in learning more about AI's implications for your business? Look at our new eBook to read the latest from our experts in data, large language models, networks and more.
As a Chief Evangelist, Jan Aril focuses on sharing transparent and inspiring knowledge about his areas of expertise, such as public cloud, big data and AI, with the aim of helping companies take on a more sustainable innovation journey. He has more than two decades of experience in the IT industry and has held positions from full-stack development and sysadmin to C-level and board member. He works across the brands of Basefarm and Orange Business.