“Being a Good Hacker Is as Much About Understanding People as It Is About Understanding Tech.”
Geoff, as we speak, a massive cyber-attack on servers in the EU, the US, and even Canada is unfolding. According to the Italian cybersecurity agency chief, who warned about it last week, it’s succeeding because of outdated software running on the affected servers. That got me thinking: these days, everything gets auto-updated every other second. So how is outdated server software even possible?
Geoff White: Well, it’s pretty common. In this case, a ransomware gang targeted server software made by VMware, which is used to run a lot of servers as virtual machines around the globe. The hackers managed to compromise some of these servers, scrambled the files and then demanded a ransom to unscramble them. They asked for two Bitcoin in this case, about €40,000, which doesn’t seem like a massive amount of money compared to other attacks. VMware said, “We patched this hole back in February 2021, and these servers should have been updated years ago.” For end-users, this is hard to understand because, for them, everything usually happens automatically. But for many companies, it’s much harder to keep track of those updates.
They run an incredible number of different pieces of software. They’ll have a patch kit from Microsoft, a kit from VMware, a kit from Cisco. And these pieces are so interdependent that changing one of them can sometimes cause whole other bits of software to crash. So computer engineers spend a lot of their time working out what installing one update might mean for all the other bits they’re running. And the more complex organizations get, the more software they rely on, and the harder this challenge becomes. That is why updating causes enormous problems and massive headaches. Yet probably the primary headache for IT security remains the users.
A few days ago, cybercriminals attacked the University of Zurich by first compromising individual accounts and then scrambling the underlying systems. How does such an attack unfold?
This case is an excellent example because the way it unfolded followed a standard pattern. Put yourself in the shoes of the computer hacker: you’ve got a couple of options. You can try to attack your target’s infrastructure. That means going after its website or servers, which may be running some outdated software, like in the case we just discussed. But you’ve only got one shot there. So, if the infrastructure isn’t vulnerable, then, well, that’s it. You’ve lost your chance. But:
Every organization or company has members or employees. If it has 1000 employees or members, well, that’s 1000 shots, 1000 chances. Hacking is a numbers game.
And going after individuals is the more promising way?
Exactly. All you need is one of those 1000 employees who’s not paying attention or who, frankly, doesn’t care, and you can target them with a crafted email or an engaging social media post. For example, at a university, it might be some “urgent academic research” they “must look at.” Now you send them an email or a message on social media, and you get them to download a document. Often, what hackers want is for them to click “Enable macros” when opening that Word document or Excel spreadsheet. It’s that little button at the top, bugging you all the time, asking, “Do you want to enable content?” Some people might open a hundred documents a day, so they become conditioned to click that message away without a second thought. Especially if the mail appears to come “from your organization” or one of “your peers” – right?
Right. So, the document makes people download a script or virus, and now the hacker is inside that individual’s computer. But where do they go from there?
One of the great things about organizations now is we’re all connected. I can log in to the system at work, and I can get access to these files and those servers. The same applies to the hackers now: They get from that computer to another computer to another computer, and they work their way up through the organization. The easier the access for an employee or student, the easier the entry for a cybercriminal.
So, it’s all about fake emails and careless users. As a rule, however, such messages are immediately deleted or land in the spam folder. And the ones that make it through are usually easy to see through, too. What tricks get users to click anyway?
There are two levels to this strategy. First, there are the blanket emails that hackers send out all the time. They get a list of, let’s say, a million email addresses, so they’ve got to send out an email that will hopefully look convincing to a million people. Sometimes it’s a faked “iTunes update,” “Your Amazon account’s been locked,” or something like that. They write and send a lot of those, so they sometimes spell Amazon wrong – an attentive user may recognize that it’s not from Jeff Bezos. (Laughs.) But there is a second level:
As you, as a hacker, start to drill down, you decide not to target a million people. You choose to go after one of them.
As an example, let’s go after you, Michael.
How do you compromise me?
I’ll do some research about you. I’ll start sniffing around everything I can get about you as an individual, and I’ll begin targeting my attack: Where do you work? What sort of things are you working on? I check your social media profiles, your coworkers, and your bosses. And now, there are two things that human beings respond to: the stick and the carrot. It’s either making you afraid of something or making you want something.
So, what’s next?
Let’s say you put up a post on social media that says, “Hey, I’m struggling with this particular project or with this piece of software. Can anyone help me?” And I respond by waving the stick: “Michael, the software you mentioned in your post on LinkedIn is out of date, and that’s dangerous. You must update it urgently.” Or I can try the carrot approach by saying, “Hey, I can help you with this piece of software. Just download this document. It’s going to help you through it.” Or, when it comes to a conference you were invited to and talked about on Twitter: “Michael, it’s all been canceled. We urgently need to talk to you. I attached our communication for a new date and time for the conference. Please review it today.”
Yet, still, I would think about it and do my checks.
To short-circuit those checks, cybercriminals put people under time pressure. “If you don’t do this, your account will be locked.” Or “You’re not going to get that delivery from DHL.” “The conference is tomorrow, but you can’t get on the plane.” Putting people under time pressure dramatically raises the hackers’ chances of “success.” One of the most astonishing tricks, though, is to nurture guilt and shame. I work with a guy who’s an ethical hacker, Glenn Wilkinson, and he does this for a living: He sends phishing emails for companies to test their employees’ defenses. So, what he’ll do is pick his target for a dodgy email. First, he’ll work out who their colleagues are in the organization, which, again, using things like LinkedIn is very easy. Then, he’ll send that person an email that looks as though they’ve been accidentally cc’d into it and it’s been sent to the senior team, the subject being something like “Redundancy Plans 2024.”
So, I receive this email, it’s gone to what looks like my senior team of people, and it is suspicious. But because I am curious, I click on it and maybe download something dodgy.
Right. And then, are you going to turn to the IT department and say, “Hey, I got this email, and I clicked on it?” No. Because you feel ashamed and guilty that you received and opened it. And while you think about that, the crime unfolds without anybody noticing anything.
So hacking is about psychological tricks.
Being a good hacker is as much about understanding people as it is about understanding tech.
What percentage of successful cybercrimes is due to direct attacks on the infrastructure – and what percentage to attacks on the users?
My sense from most of the reports I read from technology security companies is that most successful hacks are social engineering hacks going after people. But there’s a bit of crossover where, if you successfully hacked somebody’s computer and you get their password, you might be able to use that password to, for example, log in to the company’s iCloud account or the company’s Amazon Web Services account. And from there, you might be able to do one of those technical attacks on a large scale.
What should companies do to educate their people about this threat and to prevent it?
First, get familiar with multi-factor authentication. Make sure an email address and a password are no longer enough and that your people need an app – or a text message on their mobiles – that gives them a login code to get access. Hackers hoover up email-and-password combinations that people have used and that were compromised somewhere else, and they will try those combinations to log in to work accounts. With multi-factor authentication done right, you keep them at bay. Second, have a program in place for training your employees: let them know how bad this could be and what the consequences are, and educate them about the latest threats.
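Not part of the interview, but for readers curious how those one-time login codes work under the hood: authenticator apps typically implement the standard HOTP/TOTP algorithms (RFC 4226 and RFC 6238). A minimal Python sketch:

```python
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-based one-time password for a given counter value."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(key: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: time-based code, as displayed by a typical authenticator app."""
    return hotp(key, int(time.time()) // step, digits)


# RFC 4226 test vector: shared secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # → 755224
```

The point for defenders is that the code proves possession of a second secret that never travels alongside the password, so a leaked email-and-password combination alone is not enough to log in.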
There are many threats out there. How much training is necessary?
To be honest: shorter and more frequent is better. It’s like exercise: instead of doing a three-hour gym session once a week, doing ten minutes now and again is a bit better. Do it often and keep it light, easy, and short, keeping your employees up to date on the latest threats.
Are there any best practices you can share on how to do that?
By gamifying cybersecurity training – let’s say, doing escape rooms and that kind of thing. These make it a bit more interesting and can take awareness to a new level. At the same time, you should incentivize the positives wherever you can:
When people fall for phishing or have a bad password, the IT department usually gets mad at them. How about turning things around: a bottle of champagne or a box of great Swiss chocolates for the person with the best password or the best result in a phishing test?
Yes, you spend 40 dollars on a bottle or a box every now and then. But that money is well invested as long as your staff recognizes that a good password is essential.
And on the infrastructure side? A bottle of champagne for the smoothest software updates?
Why not? In addition, I’d suggest sectioning off bits of your computer network, making sure that somebody who gets in – let’s say into the HR system – can’t move across the network and access, for example, the website host from there. A bottle of champagne for the person who finds out, and raises a red flag, that there is no practical reason for an HR person to access the server, the host, or the website! By working out the linkages, you draw walls between the systems and put up new barriers for hackers.
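As a toy sketch (not from the interview – the zone names are invented for illustration), the segmentation Geoff describes boils down to a default-deny policy between network zones:

```python
# Hypothetical zone-to-zone policy for a segmented network.
# Default deny: traffic passes only if the (source, destination) pair
# is explicitly allowed.
ALLOWED_FLOWS = {
    ("hr-workstations", "hr-system"),
    ("hr-workstations", "mail-server"),
    ("web-dmz", "website-host"),
}


def is_allowed(src_zone: str, dst_zone: str) -> bool:
    """Return True only for explicitly allowed zone pairs."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS


# An HR workstation has no practical reason to reach the website host,
# so that flow is simply absent from the policy:
print(is_allowed("hr-workstations", "website-host"))  # → False
```

Under such a policy, an attacker who compromises one zone still has to breach the policy itself before moving laterally, rather than roaming freely across a flat network.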
When it comes to compartmentalizing: Are proprietary systems, i.e., those programmed by organizations themselves, actually more secure than those that are outsourced – such as SAP and others?
That’s an important question. Unfortunately, organizations aren’t always able to choose. Sometimes you have proprietary software for a reason, and sometimes migrating is difficult or too expensive. And there are advantages and disadvantages on both sides. If you build your own software – you own, run, and audit it – and if you’ve got good people in place who know what they’re doing, they should be able to plug all the holes. Yet all of that takes time and money: hiring and retaining a good team of people who can build and secure your in-house software is not cheap. By contrast, outsourcing is probably less labor- and cost-intensive, and giants like SAP do security at scale. They can and do throw massive money at it.
Yet, they are the biggest target.
For a reason: Everybody’s eggs are in that one basket. Loads of people are using these outsourced giant software providers, and if there’s a hole in one of their systems, someone out there will exploit it to the absolute max. That’s why I always suggest going somewhere in between: You’ll need somebody in-house who understands the security vulnerabilities and stays up to speed. So, if there’s a hole found in, for example, VMware server software that we discussed earlier, somebody in your organization will scan the wires and check if it’s been patched. So, even if you outsource stuff, I think having somebody or, ideally, a team of people inside keeping an eye on defending you from vulnerabilities is crucial.
Crime Dot Com was published two years ago and sums up cybercrime’s history over the last decades. What has changed in recent years regarding motivations and victims?
The fall of the Soviet Union had a significant influence on hacking. Remember, the USSR was putting more than half of its graduates through science, engineering, and technology! It was a very technically skilled society. And those young graduates came out in droves in the nineties to find that there were no jobs for them. A small minority turned to fraud. And as the Internet took off, they turned to Internet fraud. A bit later, hacktivism emerged: low-level, malicious, generally cheeky, younger hackers took potshots at organizations for reputation or kudos. There was this idea that they were exposing weaknesses in organizations that should know better. But what really changed over the last 20 years and put cybersecurity and cybercrime on the agenda was the emergence of nation-state and government hackers.
Do you mean espionage?
Espionage has always existed. Governments have always spied, and inevitably they turned to technological means to do it. But what you got from the late 1990s was a collision of the organized cybercriminals who were just out to earn money, the hacktivists who were breaking in to damage reputations, and the government hackers who have the time and the resources to run a long-term campaign. They all started borrowing from each other, trading tactics and strategies.
Government hackers now will use the tools that cybercriminals use, and it’s hard to tell when you get hacked if a cybercrime gang or a government has hacked you.
But what is the motivation behind it if it is no longer classic espionage?
Governments started understanding that you could ruin somebody’s reputation by hacking them, which happened with the Russian intrusion into the US presidential election in 2016, or when North Korea attacked Sony Pictures Entertainment to damage the overall “Western” reputation. Recently, we’ve seen government hacking at all levels hit high gear around coronavirus research and communication. Same with Russia’s invasion of Ukraine: even if Russian criminal hackers are only in it for the money, as long as they can use the fig leaf of Russian patriotism to secure the president’s support, they are more likely to attack Western institutions and organizations. Those are all pages taken out of the hackers’ playbook. The whole nation-state espionage piece went into absolute overdrive within the last five to ten years.
What has changed for the attacked companies and organizations?
Cybercrime is small sums but high volumes. And to make those high volumes work, you need to hit everybody at once. You need to carpet-bomb everybody: gather as many email addresses as possible and send out as many spam emails as possible. Now that nation-states have taken a leaf out of that book, companies and organizations – and their employees – increasingly find themselves caught up in indiscriminate nation-state attacks. The most famous example was the WannaCry attack in 2017, attributed to North Korean hackers: a ransomware attack that spread worldwide automatically and indiscriminately. In the UK, it ended up disproportionately hitting the NHS, simply because the NHS is a massive employer.
None of those victims were particularly targeted?
No. It’s not that North Korea had a problem with the NHS. It’s just they unleash their virus, and it hits loads and loads of people. There is, I think, a sense in large-scale hacking of “Hit them all and sort it out later.” A second example was an attack on a US software provider called SolarWinds. And what was interesting about that attack was many, many SolarWinds customers got caught out with that hack because they all used SolarWinds software. In the end, the hackers didn’t have much interest in most of the thousands of victims. What they were trying to do was get through to the few dozen, maybe a few hundred victims who worked for the US government and who use SolarWinds software. Again:
Thousands of people get hit, but the people the hackers are interested in are a tiny minority.
A currently confidential UN report is about to reveal that North Korean hackers like the ones you just described stole more cryptocurrency in 2022 than in any other year. They targeted the networks of foreign aerospace and defense companies and stole cryptocurrency worth more than $1 billion.
That’s right. And the reported $1 billion worth of cryptocurrency is an astonishing amount of money, don’t you think?
Of course. Your most recent book, The Lazarus Heist, reports on North Korea’s “Lazarus Group,” a bunch of particularly successful hackers, and how they work. Why are they so successful?
First, North Korea is a top-level cyber threat alongside Russia and China. And part of the success story of the Lazarus Group is that North Korea has gone broke. It’s hard to understand how North Korea can function as a normal country, right? They can’t buy stuff. They can’t sell things. So, the accusation is they’ve taken to hacking – to steal money, to stay afloat. Having said that, second, the reported $1 billion worth of cryptocurrency is even more astonishing from a North Korean perspective! And third, after allegedly stealing it, they’re laundering and selling it, turning it into real dollars – and, allegedly, into material for nuclear missiles. The Lazarus Group is one of the state’s hacking teams, and they are extremely well organized: They are part of the military. They have a rank. They have a bureaucracy behind them. A particular set of people runs them. They are tasked day in and day out to work very, very long shifts targeting organizations, using all the tricks we’ve talked about.
But if they are that good, why does the whole world know they are responsible for their crimes?
Sure, many hacking organizations will take great care not to be caught and not leave traces. The weird thing here is: The North Koreans don’t seem to care. Because if they get found out, they’re not going to get arrested.
They’re in Pyongyang, doing their job, getting their paychecks from Kim Jong-un.
There’s no extradition treaty from Pyongyang to anywhere. As long as they can get away with the hacking work and make it back to Pyongyang, they feel, I think, safe to leave their fingerprints on stuff. It’s a bit of a PR exercise. I think they want the world to see what they’re capable of.
Well, the world is indeed watching. And with the pandemic and increased remote working, more and more employees seem vulnerable to such threats because their security at home may not be as good as it was in the office. So, has work-from-home increased the attack surface for hackers?
Certainly, but there is a silver lining as well.
When companies send people to work from home, the security messages you’re trying to get out to them – alerting them to phishing emails and so on – apply as much to people’s home systems, their friends, family, and housemates as to their work. So, instead of bringing people into the office and saying, “Please don’t click on phishing emails, you know, it’s bad for work…,” you can today say: “Hey, you’re working from home. By the way, all the things we’re telling you to protect your workplace will also help you protect your family and your home environment.” Usually, this works wonders.
And how is the hacking game foreseeably changing with the widespread use of AI?
Currently, I think there’s a bit of an AI arms race going on. The sense I got when researching Crime Dot Com, talking to a lot of hackers and cybercriminals, was that they hadn’t invested massively in AI because they didn’t need to: if you’re making millions out of just sending spam emails, why would you? But they were very interested in how security companies were using AI. So, if you’re using artificial intelligence to spot spam emails, for example, or to spot viruses, the hackers want to know how your AI works so they can change their spam emails and viruses to get around it. They’re interested from an adversarial perspective. Now, with ChatGPT, Bard, and all the other competitors, the big players have moved the game forward, and the other side must react and adapt.
With AI chatbots, however, it would be pretty easy to finally clean up the poorly written spam emails – and thus increase the hit rate.
Sure. Some researchers I talked to the other day had used ChatGPT to craft a malicious piece of software, and apparently, it did an excellent job. So it’s possible that at a low level, where somebody’s not very good at coding, they get ChatGPT to improve their malicious code. But on the plus side, companies are increasingly interested in using AI to spot how hackers work, how they get in, and how they move around the network.
So, every development that’s useful for hackers can also be put to use by the defenders.
About the Author
Geoff White is an investigative journalist and cybercrime expert. He covered technology for the BBC and Channel 4 News. Recently, he published The Lazarus Heist, based on the popular BBC podcast series he co-created and co-hosted.