‘Trusted Platform Module’: Oh, the Irony!

In This Issue:

  • C-minus for CVSS?
  • Fear doesn't make people patch


It was a really rough week for Intel. There was this: Intel warning: Critical flaw in BMC firmware affects a ton of server products. And this: True to its name, Intel CPU flaw ZombieLoad comes shuffling back with new variant. Oh, and this can't have been fun: 
Flaw in Intel PMx driver gives 'near-omnipotent control over a victim device' 

But the one that's gotten the most attention so far has been this: TPM-FAIL vulnerabilities impact TPM chips in desktops, laptops, servers. From the article:

"TPM stands for Trusted Platform Module. In the early days of computing, TPMs were separate chips added to a motherboard where a CPU would store and manage sensitive information such as cryptographic keys.

…However, as the hardware ecosystem evolved with modern smartphones and "smart" embedded devices, there was no room for a separate TPM chipset on all devices, and a 100% software-based solution was developed in the form of firmware-based TPMs -- also known as fTPMs.

Nowadays, it's hard to find a device that's not using a TPM, either in the form of a hardware-isolated chip, or a software-based solution. TPMs are at the heart of most devices, even in tiny electronics, such as some IoT "smart" devices."

Here's the full research. TPM-FAIL: TPM meets Timing and Lattice Attacks.
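The core of TPM-FAIL is a timing side channel: on the affected TPMs, ECDSA signing time varies with the bit length of the secret per-signature nonce, so an attacker who times enough signatures can pick out the ones with unusually short nonces and feed them to a lattice solver to recover the private key. Here is a toy Python simulation of that first filtering step — the timing constants, noise model, and threshold are all made up for illustration and have nothing to do with real TPM hardware:

```python
import random

CURVE_BITS = 256  # e.g. NIST P-256

def leaky_sign_time(nonce: int) -> float:
    """Toy model of a leaky signer: signing time grows with the
    nonce's bit length, mimicking an implementation that skips
    leading-zero limbs. Constants are invented for illustration."""
    base, per_bit, jitter = 50.0, 1.0, 0.5
    return base + per_bit * nonce.bit_length() + random.gauss(0, jitter)

def collect_fast_signatures(n_samples: int, threshold: float):
    """Time many signatures and keep only the fastest ones.
    Fast signatures tend to come from short nonces, which is
    exactly the biased input a lattice attack needs."""
    kept = []
    short_nonce_hits = 0
    for _ in range(n_samples):
        nonce = random.getrandbits(CURVE_BITS)
        t = leaky_sign_time(nonce)
        if t < threshold:
            kept.append((t, nonce))
            if nonce.bit_length() <= CURVE_BITS - 8:
                short_nonce_hits += 1
    return kept, short_nonce_hits
```

Running this with a threshold a few "bit-times" below the typical signing time shows the kept set is heavily enriched in short nonces compared to the roughly 1-in-256 base rate — the statistical foothold the researchers exploited.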




Intel's TPM and especially their out-of-band management platform (the Intel Management Engine) have been the source of numerous vulnerabilities for well over a decade. Intel hasn't addressed many of these problems, and it's increasingly looking like they simply aren't going to. AMD isn't perfect—they’ve had their share of vulnerabilities too—but they're far more responsive than Intel to these sorts of issues, and they don't seem to have quite so many.

While we here at The Countermeasure don't generally take sides on which vendors one should choose, the time has come to seriously consider alternatives to Intel. For most organizations, contemplating leaving Intel for another vendor is a serious supply chain headache. That said, the past few years have plainly demonstrated the lackadaisical approach Intel has toward security, and the only way they will ever change is if enough organizations abandon them and let them know quite clearly that their lack of concern for security is the reason.


Read More >

C-Minus for CVSS?

Do you prioritize patches? Some organizations, and many individuals, simply allow automatic updaters to take care of things. Out of sight, out of mind. But things work differently for developers, or for complicated or large-scale deployments.

For those organizations that test, canary-trial, and then mass-deploy patches, a means of prioritizing which patches to focus on is simply a matter of survival. Recently, the industry-standard approach to patch prioritization has come into question: We're almost into the third decade of the 21st century and we're still grading security bugs out of 10 like kids. Why?

From the article: “For example, while a business would, ideally, swiftly patch a remote-code execution flaw that has a high CVSS score, lower-scored bugs, such as elevation-of-privilege and information-disclosure holes, may not be treated as a priority.

And yet hackers could, for instance, exploit a data-leak vulnerability to obtain enough information to log into a system, and then exploit the privilege escalation flaw to fully hijack that box. Thus, the two low-scoring bugs could wind up being as bad as, if not worse than, the scary remote-code execution flaw, and yet may not be seen as a priority due to their CVSS rating.

"It is complex, but there is nothing in the assessment process to deal with that," Rogers said. "It has lulled us into a false sense of security where we look at the score, and so long as it is low we don't allocate the resources."
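The article's core point — that a chain of individually low-scored bugs can achieve what one critical bug does — can be sketched as a toy prioritization heuristic. To be clear, the chaining rule below is our own illustration, not part of the CVSS standard, and the example scores are invented:

```python
from dataclasses import dataclass

@dataclass
class Flaw:
    name: str
    cvss: float  # standalone base score, 0-10 (illustrative values)

def chain_priority(chain: list[Flaw]) -> float:
    """Toy heuristic: rate a chain by what it achieves end to end,
    not by its weakest link. We treat (10 - score)/10 as a residual
    'safety margin' and multiply the margins along the chain, so
    several medium bugs erode safety much like one critical bug.
    This is an illustrative rule, not CVSS math."""
    safety = 1.0
    for flaw in chain:
        safety *= (10.0 - flaw.cvss) / 10.0
    return round(10.0 * (1.0 - safety), 1)

info_leak = Flaw("credential disclosure", 5.3)
priv_esc = Flaw("local privilege escalation", 6.7)
rce = Flaw("remote code execution", 9.8)

print(chain_priority([info_leak]))            # 5.3 on its own
print(chain_priority([info_leak, priv_esc]))  # 8.4 chained -- rivals the RCE
print(chain_priority([rce]))                  # 9.8
```

Neither medium bug would top a triage queue sorted by raw CVSS, but the chain lands in "critical" territory — which is precisely the blind spot Rogers is describing.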





Read the article and circulate it if you can. This is an important concept, but it's more complicated than the kind of check-box compliance preferred by some organizations.



Read More >

Fear Doesn't Make People Patch

Surprising absolutely no one, hackers are now actively exploiting the BlueKeep vulnerability. (Have you patched Windows? We told you to patch Windows. You should patch Windows.) At the moment, the attacks are being brushed off as nothing serious—they cause affected computers to crash, but do little else. Of course, that's going to change... and fast. 

This makes us sad. Not surprised, just sad. Windows security: Have BlueKeep fears led to jump in patching? Nope. People are getting tired of the constant stream of fear messages bombarding them from the world of infosec, and they're tuning out. Or that's one hypothesis, anyway. 

So when we casually mention that this week saw multiple stories about how security flaws can and sometimes do literally kill people… meh. Whatever. Heard that before. US-CERT warns of critical flaws in Medtronic equipment. We've definitely heard that one before. The original was better than the sequel. 

So, instead of fear, here's a metaphor about a boat. Maybe challenging people to live up to the engineering standards of five hundred years ago will get better results than counting corpses. Breaches Are Inevitable, So Embrace the Chaos. (The subhead is "Avoid sinking security with principles of shipbuilding known since the 15th century.")



We sincerely hope you don't need this advice, but here's a list of what to do if machines at your organization are still unpatched. BlueKeep: What You Need to Know

What to do about people not following security advice is a more complex issue. There aren't known good answers, but at least people are actively looking for them: Airbus Launches Human-Centric Cybersecurity Accelerator. This is a good thing. Even if the name of the initiative sounds like an unpleasant midway ride. 


Read More >

Fail(s) of the Week

In honor of TPM-FAIL, here's a quick collection of the week's worst security missteps.

Retailer Orvis.com Leaked Hundreds of Internal Passwords on Pastebin. From the article: "Holden said this particular exposure also highlights the issue with third parties, as the issue most likely originated not from Orvis staff itself." Fine, but Orvis still gets a fail for giving a careless subcontractor that much sensitive data, including the combination to a locked safe in the company's server room.

Company discovered it was hacked after a server ran out of free space. The subhead says it all: "Hacker was detected after creating a giant archive file that took up all the free disk space. Had been inside the company's network for almost two years, undetected." Ouch.

Boeing's poor information security posture threatens passenger safety, national security, researcher says. This is actually from last week, but is still worth a look for the… comprehensive nature of the fails. Protip: if a researcher approaches you to report a bug, don't threaten them with a lawsuit and a smear campaign. It's not terribly hard for said researcher to pass the juicy story to a tech journalist, who can demolish you in print. At length. With relish.

Thread of the Week

"When the news broke about BlueKeep exploitation in the wild, most of the reactions were basically 'it's not a worm, so it doesn't matter'. I decided I'd do a thread on why that's wrong, and why a worm isn't even a worst case scenario." – Marcus Hutchins (@MalwareTech)

Resource of the Week

PortSwigger Launches Web Security Academy. Free security training materials from the author of The Web Application Hacker's Handbook, Dafydd Stuttard.


Tools of the Week

Intel, Mozilla, Red Hat, and Fastly partner to make WebAssembly a cross-platform runtime. They also made a bunch of WebAssembly tools publicly available. 

Quick Links

Get Your Copy.