
The Future of Cybersecurity: Five Predictions from an Expert

“Like many technologies in an adversarial environment, offensive and defensive AI tools will become locked in a technological arms race, the outcome of which is unknown.”

David Aucsmith, Senior Principal Research Scientist for the Applied Physics Laboratory

Welcome to the decade of cyberattacks. In 2020 alone, monetary losses from cybercrime amounted to nearly a trillion dollars, roughly one percent of global GDP.

The trend isn’t slowing: a 2022 McKinsey report forecast that annual costs related to cybercrime will reach $10.5 trillion by 2025. It also found more than 3.5 million open cybersecurity positions worldwide, and that small and midsize enterprises intended to increase their IT security spending in 2023. Financial loss is no longer the only concern, either, as malicious actors, both state-sponsored and independent, now target critical infrastructure with their attacks.

In May 2021, President Biden signed Executive Order 14028, which focused on improving the nation’s cybersecurity. This was followed in January 2022 by a National Security Memorandum to improve the cybersecurity of the Department of Defense and intelligence community systems.

But the cyberattacks of the past aren’t necessarily the same as those we’ll face in the future: cybersecurity is a rapidly evolving field, with multiple attack vectors and increasingly complex hacks.

“When making any assessment of the ramifications of an unknown threat to cybersecurity, it is important to realize that cybersecurity is about identifying, preventing, and mitigating failures in either the fabric of cyberspace or the people who use it,” says David Aucsmith, senior principal research scientist for the Applied Physics Laboratory at the University of Washington. “The fabric of cyberspace includes the hardware, software, and algorithms that enable us to move bits about in a reliable and predictable way. The people, of course, have to interpret what they see and make informed decisions. Adversaries can attack both the fabric and the people, and both may fail.”

In such a dynamic industry, perhaps the only thing that can be predicted with relative certainty is that cybersecurity’s importance will continue to grow in the coming years. Still, there are emergent areas of significant influence that tomorrow’s cybersecurity professionals must be paying attention to today.

Read on to learn what the experts predict to be the top areas of importance in cybersecurity going forward.

Meet the Expert: David Aucsmith

David Aucsmith is the senior principal research scientist for the Applied Physics Laboratory at the University of Washington. He is a senior computer scientist and technology leader with more than 30 years of experience in industry, government, and academia.

Aucsmith’s current research interests include the security of cyber-physical systems. He has previously worked on secure computer systems, secure communications systems, security architecture, random number generation, cryptography and cryptographic systems, steganography (i.e., the practice of concealing a message within another message or physical object), and network intrusion detection.

Aucsmith is a former officer in the US Navy who has written extensively on cybercrime, cyberespionage, and cyberwarfare. He has also participated in cybersecurity advisory groups with several national entities, including the Defense Department, the Pacific Northwest National Laboratory, the FBI, and a Presidential Task Force.

Machine Learning (ML) and Artificial Intelligence (AI)

ML and AI are major trends in practically every industry, and cybersecurity is no different. But their adoption by cyber-attackers will continue to have profound effects on the cybersecurity industry as a whole.

As pointed out in the McKinsey report, the days of the lone cyber-attacker are largely over: today’s attackers are better funded and more sophisticated than ever before, and their adoption of ML and AI applications will continue to expedite and complicate cyberattacks.

“Developers are creating AI-based tools to defend against cyberattacks that can identify patterns and anomalies indicative of attacks,” Aucsmith says. “Conversely, attackers are developing AI tools to search for opportunities and vulnerabilities, and to conduct reconnaissance for future attacks. Like many technologies in an adversarial environment, offensive and defensive AI tools will become locked in a technological arms race, the outcome of which is unknown.”
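
To make the defensive side of that arms race concrete, here is a minimal sketch of the anomaly-detection idea Aucsmith describes, using scikit-learn’s IsolationForest. The per-session features, numbers, and thresholds are invented for illustration; a real detector would be trained on far richer telemetry.

```python
# Hypothetical sketch: train an anomaly detector on "normal" traffic so that
# unusual activity stands out. Feature names and numbers are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Per-session features: [requests per minute, bytes sent (KB), failed logins]
normal_traffic = rng.normal(loc=[60.0, 500.0, 0.2],
                            scale=[10.0, 80.0, 0.5],
                            size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst of requests with many failed logins looks nothing like the baseline.
suspicious_session = np.array([[900.0, 20.0, 35.0]])
print(detector.predict(suspicious_session))  # -1 means "anomaly", 1 means "normal"
```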

Aucsmith also forecasts the growth of what is known as adversarial AI: tools and processes designed to disrupt the algorithms behind AI-based applications already deployed in the real world. Adversarial attacks “poison” or “contaminate” AI/ML models with inaccurate or maliciously crafted data to deceive them into making false predictions; they can also disguise malicious content so that it slips past an algorithm’s filters.

In 2018, a report from the Office of the Director of National Intelligence highlighted adversarial ML as one of the most pressing threats, particularly in the area of computer vision algorithms. As self-driving cars enter the mainstream, the issue will only grow more important. Already, something as simple as a few strips of tape can cause a self-driving car to misread a stop sign as a speed limit sign.
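
The stop-sign example boils down to a small, deliberate perturbation that flips a model’s decision. The toy sketch below illustrates that evasion idea on a made-up linear classifier; the weights, inputs, and the exaggerated epsilon are hypothetical, and real attacks operate on images and deep networks rather than three-number feature vectors.

```python
# Toy evasion example: a tiny, targeted change to the input flips the decision
# of a made-up linear classifier. Weights, input, and epsilon are hypothetical,
# and epsilon is exaggerated so the effect is easy to see.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # "trained" weights of the toy classifier
b = 0.1

def predict(x):
    return "stop sign" if x @ w + b > 0 else "speed limit sign"

x = np.array([0.8, 0.3, 0.6])    # an input the classifier gets right
print(predict(x))                # -> stop sign

# FGSM-style step: move each feature against the sign of its weight.
epsilon = 0.4
x_adv = x - epsilon * np.sign(w)
print(np.round(x_adv, 2))        # [0.4 0.7 0.2] -- still close to the original
print(predict(x_adv))            # -> speed limit sign
```

In real attacks the perturbation is spread across thousands of pixels and computed from the model’s own gradients, which is why a change as modest as a few strips of tape can be enough.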

“AI/ML application development today is in a similar position to computer operating system development in the 1990s,” Aucsmith says. “Back then, we developed operating systems without the expectation that anyone would intentionally attack or otherwise try to induce the systems to fail. We designed them for a benign environment. Writing ‘secure code’ as a concept was still a decade away. Similarly, we do not currently develop AI/ML systems assuming a hostile environment. We do not develop them with a threat model.”

Tomorrow’s cybersecurity professionals will need to develop mature AI/ML systems that are analyzable, verifiable, and designed with potential adversaries in mind.

Deep Fakes

In 2021, ten videos of Tom Cruise popped up on TikTok. In them, Cruise spoke and made gestures one would likely associate with the actor, but none of the videos were real: they were deep fakes generated by an AI company, and 61 percent of people couldn’t tell the difference.

The technology has come a startlingly long way in the last four years, becoming not only more convincing but also more widely available. It’s likely to only get better at imitating reality: most deep fakes are generated by pitting two AI systems against one another, refining a fake until it’s no longer detected as one.
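
That “two systems pitted against one another” setup is, in essence, a generative adversarial network (GAN). The sketch below shows the skeleton of such a training loop in PyTorch on one-dimensional toy data; real deep-fake pipelines use far larger models and video data, so treat this only as an illustration of the adversarial loop.

```python
# Bare-bones adversarial training loop: a generator learns to produce samples a
# discriminator can no longer distinguish from "real" data. The tiny networks
# and 1-D data are stand-ins; real deep-fake pipelines are vastly larger.
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 3.0          # "real" samples ~ N(3, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # 1. Teach the discriminator to separate real from generated samples.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Teach the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))  # generator wants D to say "real"
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples drift toward the real distribution's mean (3.0).
print(G(torch.randn(1000, 8)).mean().item())
```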

“The emergence of consumer-available tools to create synthetic video and audio that is indistinguishable from actual real-world recorded audio and video is clearly a potential problem for society as it will become more and more difficult to distinguish truth from intentional manipulation,” Aucsmith says.

In cybersecurity, deep fakes have the potential to facilitate new and convincing phishing schemes. Phishing attacks are crude compared to more complex technical hacks, but they’re remarkably successful, Aucsmith notes. Targeting the social layer has been a reliable method for cyber-attackers, and deep fakes give them yet another method of doing so.

“Introducing deep fakes used to fool the user into thinking a trusted person has requested they make an erroneous decision is but a small leap from telephone scams and fake emails,” Aucsmith says. “The problem is that the people part of cyberspace is both the less reliable component and the most difficult to change. How people make trust decisions is rooted firmly in social interactions, and deep fakes have the potential to cause great confusion.”

Cryptojacking

Cryptojacking is a novel form of cyber-attack: instead of stealing data or impairing functionality, it steals computing power and repurposes it towards mining cryptocurrencies. Typically those cryptocurrencies are privacy-oriented coins like Zcash and Monero, which natively obfuscate their transaction histories.

The purpose of cryptojacking malware is to run in the background, undetected, while earning cryptocurrencies for malicious actors. According to Interpol, signs of cryptojacking include device slowdown, battery overheating, and increases in electricity and cloud computing costs.
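
Those indicators suggest simple monitoring heuristics. The sketch below, which assumes the psutil library is available, flags processes that keep the CPU busy across repeated samples; the threshold and sampling window are arbitrary illustrative choices, and real detection also weighs network destinations, GPU load, and known miner signatures.

```python
# Hypothetical monitoring heuristic: flag processes that stay CPU-hungry across
# repeated samples. Requires the psutil library; the threshold and sampling
# window are arbitrary illustrative choices, not tuned values.
import time
import psutil

CPU_THRESHOLD = 80.0        # percent of one core, per process
SAMPLES, INTERVAL = 6, 10   # roughly one minute of observation

hits = {}
for _ in range(SAMPLES):
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            # First call per process returns 0.0; later calls report usage
            # since the previous call, which is what we want here.
            if proc.cpu_percent(interval=None) > CPU_THRESHOLD:
                key = (proc.info["pid"], proc.info["name"])
                hits[key] = hits.get(key, 0) + 1
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    time.sleep(INTERVAL)

for (pid, name), count in hits.items():
    if count >= SAMPLES - 1:  # busy in nearly every sample
        print(f"Sustained high CPU usage: pid={pid} name={name}")
```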

Cryptojacking is not currently at its peak. That is partly because cryptocurrency isn’t in the midst of the mania it saw in 2021; partly because of an industry-wide shift away from the proof-of-work consensus mechanism that relies on mining; and partly because of the programmed increases in mining difficulty for the currencies that still use proof-of-work. But cryptojacking will be used as long as it is profitable, and it will keep adapting to the crypto landscape as a whole.

“While cryptojacking attacks are currently on the decline, they will remain or become a lucrative target,” Aucsmith says. “I do not see cryptojacking diminishing in the long term.”

Internet of Things (IOT) and Supervisory Control and Data Acquisition (SCADA)

Both IOT and SCADA systems function primarily to connect cyberspace to real-world objects. While that enables many practical benefits, it also opens up new attack vectors and widens the possible consequences of an attack.

In a SCADA attack, a cyber-connected pump or motor can be made to run faster than normal, or valves and switches may be turned on and off; the Stuxnet attack on the Natanz nuclear complex in Iran was a SCADA attack that damaged physical machinery.

An IOT attack may use sensors or other internet-connected devices as an entry point; the Mirai botnet harnessed vulnerable IOT devices to launch what was, at the time, the largest distributed denial of service (DDoS) attack in history.

“Two factors contribute to increasing IOT/SCADA attacks,” Aucsmith says. “First, the sheer number and variety of devices and manufacturers guarantee some systems will always be vulnerable. Second, for people who truly wish to do harm, it is in the real world that actual harm can be done. This is also the domain where cyber warfare is most meaningful and cyber warfare is not going away.”

Attacks on Trust Systems

Aucsmith also predicts an increase in attacks on trust systems: X.509 certificates, two-factor authentication (2FA), and other forms of verification. People remain the weakest component of cyberspace and are more easily fooled than hardware or software. Social engineering takes that idea a step further and treats the user’s mind as another attack surface. Cybersecurity experts can upgrade their defenses all they like, but every end user remains a potential point of attack.
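
As one illustration of how such trust systems work under the hood, the sketch below derives a time-based one-time password (TOTP), the mechanism behind many 2FA apps, using only Python’s standard library. The shared secret is a dummy value for illustration.

```python
# How a TOTP code (RFC 6238) is derived: HMAC over the current 30-second time
# window using a shared secret, then dynamic truncation to six digits.
# The secret below is a dummy value for illustration only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period              # current time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # dummy shared secret; prints a six-digit code
```

The cryptography here is sound; the weak link is the person who can be talked into reading that six-digit code to an attacker over the phone.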

“The fabric of cyberspace will continually evolve to meet threats,” Aucsmith says. “But people have great difficulty changing in the same way. More and more, the surest way to attack something in cyberspace will be to fool the user into making a mistake.”

Matt Zbrog
Writer

Matt Zbrog is a writer and researcher from Southern California. Since 2018, he’s written extensively about the increasing digitization of investigations, the growing importance of forensic science, and emerging areas of investigative practice like open source intelligence (OSINT) and blockchain forensics. His writing and research are focused on learning from those who know the subject best, including leaders and subject matter specialists from the Association of Certified Fraud Examiners (ACFE) and the American Academy of Forensic Science (AAFS). As part of the Big Employers in Forensics series, Matt has conducted detailed interviews with forensic experts at the ATF, DEA, FBI, and NCIS.