The Twitter hack that gripped the tech world last week was odd because it wasn’t perpetrated by the usual state-sponsored actors or even a sophisticated criminal group – it was a group of young people who managed to get access to Twitter’s internal systems. The hackers compromised the accounts of huge names – including Elon Musk, Barack Obama, Jeff Bezos, Bill Gates and more – and tweeted out scams to their audiences. The tweets more or less used the same language: “I am giving back to the community. All Bitcoin sent to the address below will be sent back doubled! If you send $1,000, I will send back $2,000. Only doing this for 30 minutes.” Hundreds of people fell for the scam, and the hackers walked away with about $120,000 worth of bitcoin. So how did this group of hackers pull it off, and what lessons should security-conscious companies take away from the attack?
The details of the attack are still being unraveled as Twitter, Congress, cybersecurity experts and even the FBI investigate, but what we do know is this: one of the hackers somehow gained access to Twitter’s Slack workspace, where he found credentials for Twitter’s internal systems. It’s still not entirely clear how the hacker got into the Slack workspace, but Twitter has stated that it believes the hackers used social engineering tactics (phishing being a prime example) that targeted employees with access to internal systems and tools, wresting that sensitive information away from them.
It’s often said that employees are the biggest threat to security – breaches frequently result from employees accidentally sharing sensitive information after falling for hackers’ phishing scams. But that doesn’t mean employees are to blame when there’s a breach. Whether employees were tricked or bribed into sharing information, the key issue is that they were able to share it at all. It should not even be possible for them to share the information, even if they wanted to – and when it is possible, that’s a problem with the communication tool, not the employee. It’s also a risk we already knew Slack carried. In April 2019, when Slack was preparing to go public, one of the risk factors the company listed was hackers gaining access to customer Slack accounts and the accompanying fallout. In other words, it’s risky to share sensitive information over communication platforms that can be compromised by human error.
The takeaway from the Twitter hack, then, is that companies should rely on the tools they use – not humans – to protect their information. Within the cybersecurity community, there is a lot of emphasis on training employees to recognize social engineering schemes like phishing and to practice strong password hygiene. That should all happen, but it isn’t the be-all and end-all. Even with hours of training, people err – just look at the Twitter hack, which stemmed entirely from social engineering that let hackers reach sensitive information. For sensitive information, organizations should use communication tools built with Privacy by Design. That means privacy is built into the system as the default: employees don’t have to take active steps to guard sensitive information shared there – protection is built in.
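To make “privacy as the default” concrete, here is a minimal, hypothetical sketch in Python – not any vendor’s actual implementation – of a message store where every message expires automatically. The class name, TTL values, and methods are all illustrative assumptions; the point is that the sender never has to opt in to protection, and expired content is purged rather than merely hidden.

```python
import time

class EphemeralMessageStore:
    """Toy message store illustrating privacy by default: all messages expire."""

    def __init__(self, default_ttl_seconds=60):
        # Expiration is not opt-in: every message gets a TTL automatically.
        self.default_ttl = default_ttl_seconds
        self._messages = {}  # message id -> (text, expiration timestamp)
        self._next_id = 0

    def send(self, text, ttl_seconds=None):
        """Store a message; it inherits the default TTL unless one is given."""
        ttl = self.default_ttl if ttl_seconds is None else ttl_seconds
        msg_id = self._next_id
        self._next_id += 1
        self._messages[msg_id] = (text, time.time() + ttl)
        return msg_id

    def read(self, msg_id, now=None):
        """Return the message text, or None once it has expired."""
        now = time.time() if now is None else now
        entry = self._messages.get(msg_id)
        if entry is None:
            return None
        text, expires_at = entry
        if now >= expires_at:
            del self._messages[msg_id]  # expired content is removed, not just hidden
            return None
        return text

store = EphemeralMessageStore(default_ttl_seconds=5)
mid = store.send("internal admin credentials -- do not share")
print(store.read(mid))                        # readable within its lifetime
print(store.read(mid, now=time.time() + 10))  # None: expired with no user action
```

The design choice to make: expiry happens whether or not the employee remembers to do anything – which is exactly the property the paragraph above argues for.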
At Vaporstream, we are huge proponents of the idea that you shouldn’t have to put all your trust in the person – you should be able to place your full trust in the system. For us, Privacy by Design means that information does not remain on devices but automatically expires from them, while still being stored in a single secure repository for compliance purposes, and that conversations cannot be forwarded, copied, or screenshotted and leaked – whether on purpose or accidentally – to hackers. We put privacy first – but don’t take our word for it. See why third-party assessor NowSecure says the very same thing about us.