
Tuesday, June 27, 2017

Once Again, the NSA Makes Us All Less Safe

A new ransomware attack similar to last month's self-replicating WCry outbreak is sweeping the world, with at least 80 large companies infected, including drug maker Merck, international shipping company Maersk, law firm DLA Piper, UK advertising firm WPP, and snack food maker Mondelez International. It has attacked at least 12,000 computers, according to one security company.

PetyaWrap, as some researchers are calling the ransomware, uses a cocktail of potent techniques to break into a network and from there spread from computer to computer. Like the WCry worm that paralyzed hospitals, shipping companies, and train stations around the globe in May, Tuesday's attack made use of EternalBlue, the code name for an advanced exploit that was developed and used by, and later stolen from, the National Security Agency.

According to a blog post published by antivirus provider Kaspersky Lab, Tuesday's attack also repurposed a separate NSA exploit dubbed EternalRomance. Microsoft patched the underlying vulnerabilities for both of those exploits in March, precisely four weeks before a still-unknown group calling itself the Shadow Brokers published the advanced NSA hacking tools. The leak gave people with only moderate technical skills a powerful vehicle for delivering virtually any kind of digital warhead to systems that had yet to install the updates.

Besides use of EternalRomance, Tuesday's attack showed several other impressive improvements over WCry. One, according to Kaspersky, was the use of the Mimikatz hacking tool to extract passwords from other computers on a network. With those network credentials in hand, infected computers would then use the PSExec remote-execution tool, the Windows Management Instrumentation (a legitimate Windows component), and possibly other command-line utilities to infect other machines, even when they weren't vulnerable to the EternalBlue and EternalRomance exploits. For added effectiveness, at least some of the attacks also exploited the update mechanism of a third-party Ukrainian software product called MeDoc, Kaspersky Lab said. A researcher who posts under the handle MalwareTech speculated here that MeDoc was itself compromised by malware that took control of the mechanism that sends updates to end users.

The fact that the NSA does not do a good job on cybersecurity should surprise no one. Its job is not to keep our computers safe, but to break into as many systems as it can and hoover up data.

The ACLU has accurately described the problem:
Last month, a massive ransomware attack hit computers around the globe, and the government is partly to blame.

The malicious software, known as “WannaCry,” encrypted files on users’ machines, effectively locking them out of their information, and demanded a payment to unlock them. This attack spread rapidly through a vulnerability in a widely deployed component of Microsoft's Windows operating system, and placed hospitals, local governments, banks, small businesses, and more in harm's way.

This happened in no small part because of U.S. government decisions that prioritized offensive capabilities — the ability to execute cyberattacks for intelligence purposes — over the security of the world’s computer systems. The decision to make offensive capabilities the priority is a mistake. And at a minimum, this decision is one that should be reached openly and democratically. A bill has been proposed to try to improve oversight on these offensive capabilities, but oversight alone may not address the risks and perverse incentives created by the way they work. It’s worth unpacking the details of how these dangerous weapons come to be.

………

When researchers discover a previously unknown bug in a piece of software (often called a “zero day”), they have several options:
  1. They can report the problem to the supplier of the software (Microsoft, in this case).
  2. They can write a simple program to demonstrate the bug (a “proof of concept”) to try to get the software supplier to take the bug report seriously.
  3. If the flawed program is free or open source software, they can develop a fix for the problem and supply it alongside the bug report.
  4. They can announce the problem publicly to bring attention to it, with the goal of increasing pressure to get a fix deployed (or getting people to stop using the vulnerable software at all).
  5. They can try to sell exclusive access to information about the vulnerability on the global market, where governments and other organizations buy this information for offensive use.
  6. They can write a program to aggressively take advantage of the bug (an “exploit”) in the hopes of using it later to attack an adversary who is still using the vulnerable code.
Note that these last two actions (selling information or building exploits) are at odds with the first four. If the flaw gets fixed, exploits aren't as useful and knowledge about the vulnerability isn't as valuable.

………

The NSA knew about a disastrous flaw in a widely used piece of software – as well as code to exploit it – for over five years without trying to get it fixed. In the meantime, others may have discovered the same vulnerability and built their own exploits.

The people handling our offensive cyber capabilities cannot be trusted to protect us, because it is not their job.

Their job is to hack into other people's systems, and any consequences are seen as irrelevant.

It's the blind men and the elephant, and it's the rest of us who suffer as a result.
