Safety Eats The World™ Notes

Safety Eats The World™ is now trademarked by Everbridge. I used to have a blog with that title and was taking notes on it — I’ve archived them here.

Favorite Quotes

If you don’t like change, you’re going to like irrelevance even less.
—General Eric Shinseki

100% is the wrong reliability target for basically everything (pacemakers and anti-lock brakes being notable exceptions).
—Betsy Beyer

You should study risk taking, not just risk management. They’re not separable.
—Nassim Taleb

Elimination is not a point in time; it’s a sustained effort.
—Jacinda Ardern

Classifications Of Risk

A 2014 report from the CDC lays out the six principles of crisis and emergency risk communication (“CERC”) as:

  1. Be first
  2. Be right
  3. Be credible
  4. Express empathy
  5. Promote action
  6. Show respect

A few factors pointed out as leading to more disasters are:

  • Increased population density in high risk areas
    • Greater density means more people impacted at the same time
    • Flooding, earthquakes, hurricanes, landslides, wildfire impacts
    • Adjacency to hazardous waste landfills, airports, power plants
  • Increased technological risks
    • Hazardous chemical transport over decaying railroad tracks
    • Dependency on tech makes it vulnerable at scale when disrupted
    • Complex technologies can interact in chaotic ways, adding danger
  • Our aging U.S. population
    • Disasters of all kinds disproportionately impact older adults
    • By 2030, the number of U.S. adults over 65 will double to about 71 million
    • Chronic conditions account for about 95% of healthcare expenditures for older adults
  • Emerging infectious diseases and antibiotic resistance
  • Increased international travel
  • Increased terrorism

Communicators must inform and persuade the public in the hope that they will plan for and respond appropriately to risks and threats.

CDC (2014)

There are four types of communication:

  1. Crisis communication: For managing the unexpected emergency
  2. Risk communication: For pre-adjusting or adjusting to a crisis
  3. Issues management communication: For managing public questions (like vaccine safety) as an influencing response. In some cases, issues can become a crisis.
  4. Crisis and emergency risk communication: Combines crisis communication and risk communication to help individuals make the best choices possible while helping them accept the imperfect nature of choices available.

The CDC also lists specific public health threats. For disasters in general, Wikipedia has a list of threats by cost, with earthquakes in Japan and China at the top, followed by hurricanes in North America:

  1. 2011 Tohoku earthquake — $411B
  2. 1995 Great Hanshin earthquake — $329B
  3. 2008 Sichuan earthquake — $176B
  4. 2005 Hurricane Katrina — $165B
  5. 2017 Hurricane Harvey — $130B
  6. 2017 Hurricane Maria — $95B
  7. 2019–20 Australian bushfires — $70B

The CERC 2018 update introduces this table comparing crisis communication, issues management, risk communication, and crisis and emergency risk communication:

Communicator:
  • Crisis communication: Member of the organization impacted by the crisis
  • Issues management: Member of the organization impacted by the crisis
  • Risk communication: Expert who is not directly impacted by the crisis
  • Crisis and emergency risk communication: Expert who is impacted by the crisis

Timing:
  • Crisis communication: Urgent and unexpected
  • Issues management: Anticipated; timing is somewhat controlled by the communicator
  • Risk communication: Anticipated with little or no time pressure
  • Crisis and emergency risk communication: Urgent and unexpected

Message purpose:
  • Crisis communication: Explain and …
  • Issues management: Explain and …
  • Crisis and emergency risk communication: Explain, persuade, and empower

Another useful framework, from Coombs and Holladay (2002), describes crises by attribution of who’s responsible:

Victim cluster: In these crisis types, the organization is also a victim of the crisis.
(Weak attributions of crisis responsibility = Mild reputational threat)
  • Natural disaster: Acts of nature, such as an earthquake, damage an organization.
  • Rumor: False and damaging information about an organization is being circulated.
  • Workplace violence: A current or former employee attacks current employees onsite.
  • Product tampering/Malevolence: An external agent causes damage to an organization.

Accidental cluster: In these crisis types, the organizational actions leading to the crisis were unintentional.
(Minimal attributions of crisis responsibility = Moderate reputational threat)
  • Challenges: Stakeholders claim an organization is operating in an inappropriate manner.
  • Technical-error accidents: A technology or equipment failure causes an industrial accident.
  • Technical-error product harm: A technology or equipment failure causes a product to be recalled.

Intentional cluster (also called the “Preventable cluster”): In these crisis types, the organization knowingly placed people at risk, took inappropriate actions, or violated a law or regulation.
(Strong attributions of crisis responsibility = Severe reputational threat)
  • Human-error accidents: Human error causes an industrial accident.
  • Human-error product harm: Human error causes a product to be recalled.
  • Organizational misdeed with no injuries: Stakeholders are deceived without injury.
  • Organizational misdeed, management misconduct: Laws or regulations are violated by management.
  • Organizational misdeed with injuries: Stakeholders are placed at risk by management and injuries occur.

via Wikipedia
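As a note to self, the clusters above reduce to a simple lookup from crisis type to (cluster, reputational threat). The key strings below are my own shorthand, not an official encoding of Coombs and Holladay’s taxonomy:

```python
# Coombs & Holladay (2002) clusters as a lookup table:
# crisis type -> (cluster, reputational threat). Labels are my own shorthand.

CRISIS_CLUSTERS = {
    "natural disaster": ("victim", "mild"),
    "rumor": ("victim", "mild"),
    "workplace violence": ("victim", "mild"),
    "product tampering/malevolence": ("victim", "mild"),
    "challenge": ("accidental", "moderate"),
    "technical-error accident": ("accidental", "moderate"),
    "technical-error product harm": ("accidental", "moderate"),
    "human-error accident": ("intentional", "severe"),
    "human-error product harm": ("intentional", "severe"),
    "organizational misdeed (no injuries)": ("intentional", "severe"),
    "organizational misdeed (management misconduct)": ("intentional", "severe"),
    "organizational misdeed (with injuries)": ("intentional", "severe"),
}

def reputational_threat(crisis_type: str) -> str:
    """Map a crisis type to its cluster and the expected reputational threat."""
    cluster, threat = CRISIS_CLUSTERS[crisis_type]
    return f"{cluster} cluster -> {threat} reputational threat"

print(reputational_threat("rumor"))  # victim cluster -> mild reputational threat
```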

Known Knowns, Known Unknowns, Unknown Knowns, Unknown Unknowns

via Waybackmachine

Donald Rumsfeld, in a 2002 press conference, gave birth to the memes of “known knowns,” “known unknowns,” and “unknown unknowns,” which, to be frank, have long been a mystery to me. So I thought I’d unpack it today, as I can see it’s pretty important when considering the nature of black swans (versus white swans). Apparently the idea reaches back to the Greek era … I wish I had studied history better when I was younger.

Apparently there’s a 13th-century Persian poet named Ibn Yamin who wrote:

One who knows and knows that he knows… his horse of wisdom will reach the skies.

One who knows, but doesn’t know that he knows… he is fast asleep, so you should wake him up!

One who doesn’t know, but knows that he doesn’t know… his limping mule will eventually get him home.

One who doesn’t know and doesn’t know that he doesn’t know… he will be eternally lost in his hopeless oblivion!

Ibn Yamin

My own distillation of this matter with how people might feel about knowing versus not knowing is the following:

The more you know, the more afraid you’ll be.

OR alternatively:

The more you know, the more prepared you’ll be.


It’s easier to look backwards at the past than it is to muster the strength to look forward into the future, because looking forward you’ll likely be wrong, while looking backwards comes easily.

But let’s get back to Donald Rumsfeld.

Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.

Donald Rumsfeld (2002)

I scoured the Internet for different interpretations of this space, and came to a much better understanding than when I started. I’m feeling that this is a very KNOWN KNOWN table on this topic :+).

Unknown Knowns (low problem understanding, high data availability):
  • Your data lacks some accuracy
  • Know solution if know problem
  • Untapped knowledge
  • Hidden facts
  • Brainstorm, group sketching
  • Findability, collective memory
  • We understand but aren’t aware
  • “IDK but someone does—get them.”

Known Knowns (high problem understanding, high data availability):
  • The data you currently track
  • Know problem + solution
  • Not risks and manageable
  • Facts and requirements
  • Analogies, lateral thinking
  • Reporting, news
  • We are aware and understand
  • “I know and I’ve got this.”

Unknown Unknowns (low problem understanding, low data availability):
  • Triangulating all your data
  • Be prepared to react well
  • We know nothing
  • Unknown risks
  • Research, explore
  • Big data, ML
  • Neither aware nor understand
  • “Huh? How did that happen?”

Known Unknowns (high problem understanding, low data availability):
  • Data you want but can’t track
  • Startup-style build.test.learn
  • Classic risks and predominant
  • Known risks
  • Build hypothesis, measure, iterate
  • Analytics, data mining
  • We are aware but don’t understand
  • “I know that I don’t know it all.”

Notes to self: Distilling many thoughts on this topic from the Internet, I came to a way to understand it. The vertical axis is: top, high data availability; bottom, low data availability. Ideally I can find the original again … Sources: [1] [2] [3] [4] [5] [6] [7]
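The 2×2 above reduces to two booleans. Here’s a toy sketch; the parameter names and the mapping are my own reading of the table, not anything official:

```python
# A toy classifier for the knowns/unknowns 2x2. "aware" maps to the
# known/unknown prefix, "understood" to the knowns/unknowns suffix.

def quadrant(aware: bool, understood: bool) -> str:
    prefix = "known" if aware else "unknown"
    suffix = "knowns" if understood else "unknowns"
    return f"{prefix} {suffix}"

assert quadrant(aware=True, understood=True) == "known knowns"        # "I've got this."
assert quadrant(aware=True, understood=False) == "known unknowns"     # classic, manageable risks
assert quadrant(aware=False, understood=True) == "unknown knowns"     # untapped knowledge
assert quadrant(aware=False, understood=False) == "unknown unknowns"  # black swans
```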

Resilience Globally

“Interest in resilience is global …” —Buzzanell & Houston

The five contexts:

  1. Individual
  2. Family
  3. Organizational
  4. Community
  5. National

4Ps and 4Ds of the UK Home Office

I was reading materials in the U.S. curriculum on homeland defense, and noted the 4Ps and 4Ds construct from the UK, which I’d caught on Wikipedia earlier this week as their framework for counterterrorism.

  1. Prevent (radicalization into extremism)
  2. Protect (defend infrastructure and borders)
  3. Prepare (develop responses to attack)
  4. Pursue (detect, disrupt, stop)

For WMD threats, there are the 4Ds:

  1. Dissuade (countries to not stock materials)
  2. Detect (expose bad actors/agents)
  3. Deny (access to materials and know-how)
  4. Defend (with appropriate ops responses)

More on CONTEST is here.

US DHS Framework for Security

From the DHS’ CTTV (Countering Terrorism and Targeted Violence) work on integrating modern technology in a forward-looking way, there is a comprehensive framework for safety that’s outlined in rough detail.

There are four goals that line up against six specific “lines of effort.”

Core capabilities manifest as five types of activities that follow a generally linear chronology:

  • Prevention
  • Protection
  • Mitigation
  • Response
  • Recovery

In reality these all overlap in phases with different time constants of effectiveness and impact. The four goals map to the five different phased activities.

Global Risk Is Increasing

A popular article at the end of 2020 claimed that social unrest is the new norm. It references an article from earlier in the year from the CSIS that is super interesting.

The root causes it lays out are:

  1. Global ICT (Information Communication Technologies)
  2. Global youth unemployment and underemployment
  3. Perceptions of inequality and corruption
  4. Environmental stress and climate change
  5. Global literacy and education
  6. Cities and urbanization

I strongly suggest you download the full free report.

Bats as Virus Supercarriers

Ever since I heard the stat that bats’ body temperature in flight reaches 40 ℃ (104 ℉), and that bats represent one-fifth of all mammal species on Earth, it’s put things in perspective for me. It means a bat can fly while carrying a virus pre-adapted to human fever temperatures — without the flight killing the virus. Not great, huh?

Other researchers have suggested that bats’ super-tolerance might have something to do with their ability to generate large repertoires of naïve antibodies, or the fact that when bats fly, their internal temperatures are increased to around 40 deg C (104 deg F), which is not ideal for many viruses. Only the viruses that have evolved tolerance mechanisms survive in bats. These hardy viruses can therefore tolerate human fever. What is a good thing for bats is a bad thing for humans.

Dr. Melvin Sanicas

Some bat stats for you:

Reach Out And Touch Someone Remotely

This article on FastCo gives useful context on how we got used to remote work in the last century — and in the process crashed the telephone system.

Universities Collaborating Across

The OmniSOC project makes a lot of sense as a means to connect universities together and to share information. I learned about it from this CSO Perspectives podcast episode.

The main takeaway of the podcast was that:

  • Detect
  • Respond

are the key JTBD (jobs to be done) for the SOC, but there’s agreement that the greater JTBD is:

  • Reduce the probability of material impact to the organization

Four Cybersecurity Models

Keeping a notion of an “adversary” and more of an OODA-mindset lies at the core of managing human-made incidents whether they’re digital or physical. In the case of battling nature, I think that knowing physics and any kind of *science* is what can drive an advantage in how critical event managers think.

The strong interconnectedness of digital-to-digital and physical-to-physical and all the various ways they intersect is what results in complexity. I like to think that the goal of managing complexity is to reduce it to what’s truly just complicated (i.e. understandable) versus truly complex (i.e. not-understandable). And then tackle what’s complicated, first.

Lockheed Model: Cyber Kill Chain

Mental model of hacker

  1. Reconnaissance
  2. Weaponization
  3. Delivery
  4. Exploitation
  5. Installation
  6. Command and Control
  7. Actions on Objectives
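One way the ordering matters to a defender: if you detect activity at a given stage, every later stage is still open to disruption. A small sketch of that idea (the helper function is my own illustration, not part of the Lockheed model):

```python
# The Lockheed Cyber Kill Chain stages, in order, from the list above.
KILL_CHAIN = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command and Control", "Actions on Objectives",
]

def stages_remaining(detected_at: str) -> list[str]:
    """Stages the adversary has not yet reached -- where defenders can
    still break the chain. Illustrative helper, not part of the model."""
    return KILL_CHAIN[KILL_CHAIN.index(detected_at) + 1:]

print(stages_remaining("Delivery"))
# ['Exploitation', 'Installation', 'Command and Control', 'Actions on Objectives']
```

The earlier in the chain you detect, the more chances you keep to disrupt the attack.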

Diamond Model

Nodes in a graph structure

  • Adversary
  • Infrastructure
  • Capabilities
  • Victim
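A Diamond Model event is essentially a four-field record. A minimal sketch in code, where the field names follow the four nodes above and all the values are made up for illustration:

```python
# Minimal sketch of a Diamond Model event as a data structure.
from dataclasses import dataclass

@dataclass(frozen=True)
class DiamondEvent:
    adversary: str       # who is attacking
    infrastructure: str  # what they attack from (C2 domains, IPs)
    capability: str      # what they attack with (malware, exploit, kit)
    victim: str          # who or what they attack

event = DiamondEvent(
    adversary="unknown-actor-1",
    infrastructure="198.51.100.7",  # documentation-range IP, illustrative
    capability="phishing kit",
    victim="payroll server",
)
# Analysts pivot along the edges: e.g., a shared C2 IP across two events
# suggests the same adversary is behind both.
```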

MITRE Model: ATT&CK (Adversarial Tactics, Techniques and Common Knowledge)

Mental model of hacker

  1. Initial access
  2. Execution
  3. Persistence
  4. Privilege escalation
  5. Defense evasion
  6. Credential access
  7. Discovery
  8. Lateral movement
  9. Collection
  10. Command and control
  11. Exfiltration
  12. Impact


Zero Trust Model: ZTA (Zero Trust Architecture)

Removes the idea of protecting a castle with a moat, and assumes that adversaries are already present within the blurred boundaries of a network and its service providers. The goal, then, is to shrink trust zones around a specific role to minimize the potential negative impact. Core components of ZTA include:

  • Enterprise identities and devices: First, let them in with authentication.
  • Trust Verification Systems (Policy Decision Points (PDP) & Policy Enforcement Points (PEP) and policy engine): Second, determine your confidence level for them based on their device, time of day when connecting, and anything outside a normal pattern.
  • Enterprise Resources: Anything to be protected like data, applications, devices, etc.

Seven basic tenets of ZTA:

  • “All data sources and computing services are considered as ‘resources’
  • All communication is secured (internal or external)
  • All access is provided ‘per-session’
  • Access is provided based on a dynamic risk-based policy
  • All devices should be in the most secure state possible. They should be monitored for this
  • Dynamic authentication and authorization is strictly enforced before granting access
  • Collect as much information about the network and infrastructure as possible”

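To make the tenets concrete, here’s a toy Policy Decision Point. The signals, weights, and thresholds are invented for illustration; they are not taken from any official ZTA specification:

```python
# A toy Policy Decision Point (PDP) in the spirit of the tenets above.
# Signals, weights, and thresholds are made up for illustration.

def decide_access(authenticated: bool, device_compliant: bool,
                  usual_hours: bool, anomaly_score: float) -> str:
    """Return a per-session access decision from a dynamic risk score."""
    if not authenticated:
        return "deny"  # tenet: authenticate before granting anything
    risk = 0.0
    risk += 0.0 if device_compliant else 0.5  # tenet: device security state matters
    risk += 0.0 if usual_hours else 0.2       # dynamic signal: time of day
    risk += anomaly_score                     # anything outside the normal pattern
    if risk < 0.3:
        return "allow"            # granted for this session only
    if risk < 0.6:
        return "allow-with-mfa"   # step-up authentication
    return "deny"

print(decide_access(True, True, True, 0.1))    # allow
print(decide_access(True, False, False, 0.1))  # deny
```

A real Policy Enforcement Point would sit in front of each enterprise resource and re-evaluate a decision like this for every session.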

Seasonality Of Disasters (US)

NOAA Billion dollar disasters:


Peak flooding is June and July:


Peak hurricane season is September and usually runs 1 June to 30 November:



Tornado season usually refers to the time of year the U.S. sees the most tornadoes. The peak “tornado season” for the southern Plains (e.g., Texas, Oklahoma, and Kansas) is from May into early June. On the Gulf coast, it is earlier in the spring. In the northern Plains and upper Midwest (North and South Dakota, Nebraska, Iowa, Minnesota), tornado season is in June or July. But, remember, tornadoes can happen at any time of year. Tornadoes can also happen at any time of day or night, but most tornadoes occur between 4–9 p.m.


Note that tornadoes can extend into southern Canada. Information on fatality trends here.

More tornado stats by NOAA here

Information on Tornado Alley via NOAA.

Tornado Alley via Severe Weather

There is a “second season” of tornadoes in September through October.

October tornadoes via


via AMS

Blizzards have been reported in all months except July, August, and September. Monthly blizzard occurrence highlighted a more active blizzard season (December, January, February, and March; Fig. 6) and a less active blizzard period during the transitional seasons (October, November, April, and May; Fig. 7). 

via AMS

Wildfires (US)

Regional Fire Seasonality

via NWCG

Cyberresilience, IOCs, and IOAs

Studying the terrible events of the July 4, 2021 weekend and what happened to Kaseya in the ransomware attack is instructive. In particular, the reference in their blog post to how they needed to determine IOCs, or “Indicators of Compromise,” caught my attention. It took me down a rabbit hole to 2013 and a post on FireEye’s blog about the concept of “OpenIOC,” an attempt to imagine a standard format for recording and disseminating compromise indicators. The site no longer exists, but it’s instructive to see what folks were thinking back then via the Wayback Machine. It’s an entire network of schemas that were relevant back in 2013; I imagine it still drives a lot of thinking today. I found a list of 15 indicators that is useful to note:

  • Anomalies in privileged user activity
  • Red flags in login activity
  • Unexpected DNS requests
  • Web traffic that doesn’t look human, or “inhuman behavior”
  • Unusual outbound traffic
  • Geographic abnormalities in traffic
  • Increased DB read volume
  • Unusual HTML response sizes
  • Mobile profiles that are odd
  • DDoS activity evidence
  • Bundles of data in the wrong place
  • Unusual port activity
  • Unusual number of requests for a file
  • Unusual registry or system file changes
  • Patches happening abruptly

There’s also a notion of an IOA (Indicator of Attack) versus an IOC (Indicator of Compromise). IOCs are about detection, whereas IOAs are about understanding intent. IOAs follow a common execution path:

  1. Reconnaissance
  2. Weaponization
  3. Delivery
  4. Exploitation
  5. Installation
  6. Command & Control
  7. Lateral Movement

The difference makes a little more sense in table form:

  • Reactive (Indicator of Compromise): Malware, signatures, exploits, vulnerabilities, IP addresses
  • Proactive (Indicator of Attack): Code execution, persistence, stealth, command and control, lateral movement

Enterprise Resilience Beginnings

Mass notification technology was a distinguishing factor in early thoughts about enterprise resilience:

“The use of mass notification technology to inform stakeholders has become a best practice for leading risk management and resilience programs.”

Security Magazine (2013)

The term was used in the manufacturing space as well:

“Enterprise Resilience = f(Vulnerability, Adaptability, Recoverability)”

Sanchis & Poler (2013) in the 7th IFAC Conference on Manufacturing Modelling, Management, and Control
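That relation is qualitative; here’s one hypothetical way to turn it into a score, with weights that are entirely made up, just to make the functional form concrete:

```python
# A hypothetical scoring of Enterprise Resilience = f(V, A, R).
# Weights are invented for illustration; only the shape matters:
# lower vulnerability and higher adaptability/recoverability -> higher score.

def enterprise_resilience(vulnerability: float, adaptability: float,
                          recoverability: float) -> float:
    """All inputs in [0, 1]; returns a score in [0, 1]."""
    return (1.0 - vulnerability) * 0.4 + adaptability * 0.3 + recoverability * 0.3

score = enterprise_resilience(vulnerability=0.2, adaptability=0.7, recoverability=0.8)
# 0.8*0.4 + 0.7*0.3 + 0.8*0.3, roughly 0.77
```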

The Octopus and Resilience

I’ve been an octopus fan for a long while, and when considering Rafe Sagarin’s work on the topic of octopi, it dawned on me how much I have thought about this topic, albeit completely indirectly.

Apparently the octopus is thought to be resilient to climate change:

“… octopuses may be better able to withstand changes in ocean-acidity levels, which may have long-term bearings on our understanding of climate change”

Science Daily (2021)

And there’s also a weird and cool thing about how you can sever an arm and it will still operate in a collaborative manner with the other arms … somehow. It has something to do with their skin.

This is an interesting piece on how the hammerhead shark is powerful, while the octopus is adaptive and eminently flexible.

There’s a TinyMBA post on the octopus as well — that speaks to how the decentralized nature of an octopus’ intelligence architecture enables their resilience.

“Businesses trying to become more resilient focus on things like decentralization, redundancy, and independent decision making. Can parts of your company function if they get cut off from the other parts? Do you have backup systems in place in case your main systems are attacked? Can you re-route your data and re-grow your tentacles?”

Tiny MBA

Kintsugi (金継ぎ) and Resilience

The Japanese art of Kintsugi is a good example of how resilience can play out in real life — and sometimes over centuries. It is the act of taking what is broken and using gold to glue it back together again. It makes something old into something completely new.

Public Domain image via Wikipedia

You can see this way of restoring what is broken in this mini-documentary on the BBC with the theme of “embracing the imperfect.”

On the NY Met’s site they have a page dedicated to Kintsugi, and the School of Life does a nice job describing it as well—pointing out that it means, “to join with gold.” The SoL has TED-style education videos now, and this is their walk through:

From a less stilted point of view, I like how this video author refers to it as “the art of embracing damage” — that’s quite poetic.

You might notice how different this approach is from the runaway hit popularity of Marie Kondo’s notion of “throwing away anything that doesn’t spark joy” …