DEAD IN THE WATER | Maersk, Sandworm, and the $10 Billion Breach

The realization

that everything

is down.

Feeling this heavy weight on your shoulders, that the organization is down.

Because I

did not do

my job well.

I'm sure that's

what the CISO

there was

thinking

the whole time.

That gives me

the chills.

And that's why,

again, in my role,

I always have

in some part

of the brain

the thought of,

okay, am I ready

for something

that can happen?

Did I do everything

to prepare for the

unexpected and the improbable?

Welcome to The CISO

Signal, the true

cyber crime

podcast.

I'm Jeremy Ladner.

On this episode,

we descend

into the shadows

of one of the

most devastating

cyber attacks

in modern history: a digital assault that spread across 60 countries, caused an estimated $10 billion in damages,

and brought one

of the world's

largest

shipping companies

to a halt.

Dead in the Water.

This is the story

of Maersk

and the

threat actor

known as Sandworm.

Joining us

to help unpack

what went wrong

and what could

have been done

is our special

guest, Chief

Information

Security Officer

and veteran

defender

of the enterprise edge, Shlomi Avivi.

Shlomi,

welcome to

the show.

Can you tell us

a little bit

about yourself?

Hi, Jeremy, I'm Shlomi Avivi. I'm the CISO at Clear Street. We are a US-based fintech company.

I've been in cybersecurity for about 20-something years, the last ten years in CISO roles, mostly in hypergrowth companies.

The last role before Clear Street was CISO of One Zero Bank, a relatively new digital bank in Israel.

Shlomi, it's

good to

have you aboard.

Now, let's

begin the

investigation.

We are in the midst

of a ceaseless war,

not of bombs

or bullets,

but of breaches,

firewalls

and silent

incursions.

The targets:

our borders,

our banks,

our commerce

and the

critical

infrastructure

that underpins

a free

civilization.

The enemy

is cloaked in

code, fueled

by greed, glory

and a desire

for chaos.

This is the story

of the unseen

protectors,

the nameless

generals,

the CISOs,

Chief

Information

Security Officers.

They are

the guardians

at the gate.

The watchers

on the wall.

Ever vigilant

and always

listening

for The CISO Signal.

The date is

June 27th, 2017,

in Copenhagen.

It's an ordinary,

sunny summer day

inside

the centuries

old white stone

headquarters

of Maersk,

the global

titan responsible

for nearly one

fifth of

the world's

freight transport.

A quiet war

is about to break

the surface.

At his desk,

an unassuming IT administrator, we'll call him Petr, prepares for a software update that will be pushed across Maersk's global network

of nearly

80,000 employees.

Then his computer

suddenly

shuts down.

Annoyed, he

glances around

the open workspace

and watches

in disbelief

as one screen

after another

after another

goes dark.

Petr doesn't

know it yet,

but he's

just stepped onto

the front lines

of a conflict

that started more

than a thousand

miles away:

a brutal,

bloody shadow war

between Russia

and Ukraine

that has already

claimed over

10,000 lives.

But

this battlefield

isn't drawn

in trenches.

It's coded

in silence,

and Maersk,

neutral, commercial,

far from

the front,

has just become

collateral damage.

This wasn't

a heist,

and it wasn't

ransomware.

It was a scorched

earth campaign

disguised

as extortion.

The motive?

Obfuscation.

The method?

Weaponized malware.

The cost?

When all was said

and done, over

$10 billion.

This is not fiction.

This is not theory.

This is the true

story of NotPetya,

a cyber weapon

launched

during a war

most of the world

barely noticed.

And the company

that found itself

in the blast

radius.

This is

The CISO Signal.

I want you

to take us

behind the scenes

into the

security operations

center that day.

What do you think

was happening?

What do you think

these folks

were feeling?

What's going

through the CISO’s

mind as he

suddenly realizes

that this is not

your typical

morning?

This is not

just a few

alerts and alarms

that are going off.

This is it.

The dam has broken.

They've breached. They're inside.

Try to explain

to us, if you can,

what the mood was

and what

maybe those

first steps were.

At first, it's

a feeling of the

end of the

world is here.

Because basically all they're doing is triaging alerts and possible breaches.

And now

it happened.

And they did not

pick it up on time.

That's the

initial feeling.

But then, you know,

if it's

a trained SOC

they just go

into action mode

and do their

job

by collecting

data,

investigating,

thinking together,

brainstorming,

a lot

of brainstorming,

trying to pull in

whoever's

needed to be in

and try to

get some order

in this whole mess.

All right.

So at this point,

you know

you're under

attack.

You just don't know

the scale

and scope

of the attack.

So what do you do?

So you try

to first collect

a lot

of information

aiming at

just one point.

And that is

where are

we impacted

and where

are we not.

Act 1:

The Spark

in the East.

The first

tremor didn't

shake Copenhagen.

It struck Kyiv.

Around the same time Petr's screen went dark, across Ukraine monitors were vanishing into black, terminals locking, entire systems falling into silence.

Banks, TV networks,

government

ministries.

The cause?

A routine-looking software

update for a tax

application called

MEDoc

for Ukrainian

businesses.

MEDoc

was unavoidable,

a de facto

requirement

for financial

compliance.

But for any

global company

doing business

inside Ukraine,

that same software

sat silently,

quietly

waiting in

their network,

including Maersk.

The update came

from the

official servers.

It had

valid certificates,

it passed

security scans

and within its lines of code a ticking time bomb lay waiting. That bomb would soon be known to the world as NotPetya. In form, it resembled ransomware: victims saw familiar warnings and demands

like “your files

are encrypted,

pay $300

in Bitcoin

to recover them.”

But the encryption

was permanent.

There was

no recovery,

no keys, no ransom

process at all,

just devastation

and destruction.

It wasn't about money,

it was about chaos.

And from a

small tax office

in Ukraine,

NotPetya began

to move, to jump

borders.

Riding company

VPNs,

shared servers,

global networks.

Now

a piece of software

required

for doing business

in a war-torn region had

become the vector

for one of the

most destructive

cyber weapons

ever unleashed.

Back in

Denmark, the

scope

and scale

of the devastation were beginning

to come into focus.

The blast

radius had

ricocheted

across oceans.

What no one

knew yet

was that the

domain controllers,

the nerve

center of Maersk’s

Active Directory,

were gone.

Not encrypted,

not held hostage,

destroyed,

burned to

the ground.

And with them,

the map of Maersk’s

vast network

had been wiped

from existence.

We love making

this podcast

and we really hope

that shows

in the care

and quality

that we invest

in it,

and we would

really

appreciate it

if you could

take a moment

to ‘LIKE’

and share it

with your fellow

security

professionals,

as well

as dropping us

a comment,

letting us

know what

stories and guests

you'd like to have

on the podcast

in future episodes.

Now back

to the story.

Okay,

so you said step

one for you

in a situation

like this is

collect

information,

essentially

a damage report,

an incident report.

Where are we hit

and how bad is it? After that, what do you do?

How do you,

as a leader,

turn the chaos

and carnage

of cyber war

into clarity

of purpose?

So after drinking

a glass of water

and understanding

that the worst

nightmare

has happened,

my first move

was to get everyone

that I think

can help

in the room

or involved.

And that means

both technical

people,

representatives

in the

different branches

and different areas

that I may need

at my disposal.

And as important,

get senior

management

involved.

Because in cases like that, it's

no longer

just a

security problem.

It's a business

problem.

It can have impacts

on the business

itself,

on reputation,

on regulatory

aspects of it.

You need

everyone

to be involved.

You need everyone

working on it.

So alongside

having the

technical people

restoring

infrastructure

and

having incident

response

specialists

investigate and understand what happened.

You want

the business people

to start

understanding how

we're running

the business

in an hour

and in five hours,

and tomorrow

with no systems.

And you want

our marketing

and PR

people

understanding

what type

of messaging

and how

we communicate

that to

our customers,

to the world,

to the press,

because that's

going to come.

And you want

your top level

senior management

on the case, aware,

and managing

these aspects

of the incident.

Okay,

so you mentioned

the marketing

and PR folks

and preparing them,

and that's

my world.

So let's say

you were working

at a massive

global organization

like Maersk

with billions

of dollars

of annual revenue,

tens of thousands

of employees,

and everything

is down, and

you as the

security leader are,

in a very real way,

responsible

for that.

There's a

press conference.

The PR

communication

person speaks,

the CEO

delivers their

prepared statement.

But you're

there to field

the really tough

security questions. The world is watching.

How do you handle

that?

Being honest

is super important.

Not because...

I mean also

because it's the

right thing to do.

But not just that,

because we've

seen many cases

where companies

try to

say something

that wasn't

entirely true

and were corrected

by the attackers,

and there's nothing

that destroys

your reputation

more than that.

So be honest,

be humble,

communicate what

you know so far.

And mostly,

I think, convey

the message

that yes,

it happened,

but we are

all over it.

We have our top

guys here connected

with the

right vendors,

and we've

brought in anyone

that can

actually help

with this.

And we are

taking this

very seriously

and we will get

through this.

And that, I think, is the proper messaging.

So for some

of the younger

security

professionals

that are listening

in the

context of 2017,

what does it mean

to lose your

domain controllers?

Back then,

domain controllers

were the heart

of everything.

You could

not log

into any system.

You could

not perform

administrative

actions,

you could not do

almost anything

without the

domain controllers

being up

and running

correctly.

And so that's

the starting point.

Understanding that the

domain controllers

are down

directs

you operationally

to start

from scratch.

And that's

what they did.

They moved

back to paper.

They did

a full recovery of systems from backups, where they were lucky enough to have the one specific domain controller that was not hit to sync from. But it really means that there's no point in trying to contain this thing and somehow stay online.

It means

that you need

to start

from scratch

and rebuild

everything.
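
Shlomi's point, that once the domain controllers are gone you rebuild from scratch, is why many teams now keep a standing check that at least one isolated domain controller backup is recent. Below is a minimal sketch of that idea in Python; it assumes a hypothetical inventory file, dc_backups.csv, with columns name and last_offline_backup_utc exported from whatever backup tooling is in use, and a 24-hour freshness threshold. All of those are illustrative assumptions, not Maersk's setup or any vendor's actual interface.

```python
"""Flag domain controllers whose last offline/isolated backup is stale.

Minimal illustrative sketch: assumes a CSV named 'dc_backups.csv' with
columns 'name' and 'last_offline_backup_utc' (ISO 8601 timestamps)
exported from your backup tooling. File name, columns, and the 24-hour
threshold are assumptions for the example, not a real product API.
"""
import csv
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)  # tolerated staleness for an isolated backup


def stale_domain_controllers(path: str) -> list[str]:
    """Return names of DCs whose last isolated backup is missing or too old."""
    now = datetime.now(timezone.utc)
    stale = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            raw = (row.get("last_offline_backup_utc") or "").strip()
            try:
                last = datetime.fromisoformat(raw)
            except ValueError:
                stale.append(row["name"])  # no parseable backup record at all
                continue
            if last.tzinfo is None:
                last = last.replace(tzinfo=timezone.utc)
            if now - last > MAX_AGE:
                stale.append(row["name"])
    return stale


if __name__ == "__main__":
    for name in stale_domain_controllers("dc_backups.csv"):
        print(f"WARNING: {name} has no recent offline backup")
```

The specifics matter less than the habit: the "Ghana moment" should be an engineered guarantee, not luck.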

Act 2:

The Collapse.

There are moments

when time

fractures,

when seconds

stretch into hours,

and decisions

made in

panic define

what's left

standing.

At 11:30 a.m.,

Maersk’s

Incident

Response Team

still believed

the issues

were local.

Denmark's

HQ was glitching,

sure, but

operations across Europe, Asia, and the Americas

still appeared

online.

That illusion

shattered

within the hour.

Across the Atlantic

in Elizabeth,

New Jersey,

the chaos

was physical.

One of Maersk’s

busiest terminals,

the APM Terminals facility,

had gone dark.

On a good day, 3000

trucks pass

through the gate,

delivering cargo

that fuels

cities and

feeds communities, and allows commerce to happen across the entirety of the U.S. But that morning,

nothing moved.

Gate clerks

were silent,

barcode scanners

dead.

A line of 18-wheelers stretched

for miles

outside the port.

Police began

to knock on

cab windows,

instructing drivers to turn around. Inside the ‘reefers’, the refrigerated containers, goods were beginning to rot.

Pharmaceuticals, produce, components for just-in-time manufacturing. Shipping manifests

couldn't

be accessed,

booking systems

were down.

No one could load

or unload, or

track cargo.

No emails,

no phones, no

recovery window. From Los Angeles

to Algiers,

Rotterdam

to Mumbai,

the story was

the same.

Cranes

were inoperable.

Trucks turned away,

global logistics

paralyzed in place.

Maersk had become

a black hole, and every passing hour multiplied the damage.

But this wasn't

just Maersk’s

problem anymore.

This was a

systemic disruption

to the

circulatory system

of global trade.

Across 600 offices

in 130 countries,

one of the world's

largest shipping

companies was dead

in the

digital water.

By now, Maersk’s

security team

had confirmed

what their

instincts

already told them.

This wasn't

user error

or an

isolated failure.

This was an attack,

but no one yet

knew who or why.

The ransom

message seen

in other parts

of the world,

the demand

for Bitcoin,

never even reached

many of Maersk’s

infected machines.

NotPetya moved

too fast. It copied

credentials,

hopped domains

and executed

its payload

before any message

could appear.

At its core,

NotPetya

was a weaponized combination of tools: EternalBlue, an exploit stolen from the NSA, and Mimikatz, an open-source credential-harvesting tool.

EternalBlue exploited a Windows SMB vulnerability to spread, while Mimikatz pulled credentials from memory, letting the malware move laterally and escalate privileges with terrifying speed.

Once inside,

NotPetya didn't stop to negotiate. It encrypted

the master

file table

of infected systems

and rendered them

unrecoverable.

The data was gone.

The demand

for ransom

was a misdirect, a lie.

The most

terrifying part

wasn't

just the breach.

It was

the realization

that Maersk’s

entire recovery

playbook

had been shredded

along with

their servers.

They didn't

have access

to email

to communicate.

Their phones

in many instances

were down.

Incident

command couldn't

even issue orders.

In some offices

whiteboards

and post-it notes

became the

new global

comms strategy.

By 3 p.m.,

it was clear

Maersk had lost

nearly

every Windows

based machine

across the entirety

of its enterprise.

Nearly 49,000

systems,

including 4,000

servers.

The estimated

financial impact

would

eventually rise

to several hundred

million dollars.

But in that moment,

it wasn't

about money.

It was about

survival.

By that evening,

Maersk’s

executives

had activated

their emergency

protocols.

A recovery center

was set up in

Maidenhead,

England,

two floors

of an office

building quickly

converted into

a 24/7

crisis ops center.

Deloitte

was brought in

and handed

a blank check.

Hundreds of Maersk

employees

flew in from around

the world.

Hotel rooms

were hard

to come by

and so was sleep.

No legacy

equipment

could be trusted.

It was all suspect.

Teams hurriedly

hit up

every

electronics store

for miles,

buying every laptop

they could get

their hands on.

The mission: rebuild Maersk's global network

from scratch.

Then came

the darkest moment.

Backups of servers

had been located,

but no one

could find

a backup of Maersk’s

domain controllers, the digital Rosetta Stone for the company's

authentication

and access systems.

Why?

Because Maersk’s

domain controllers

synced with

each other,

meaning

they were

all destroyed

simultaneously.

One IT staffer

remembered

thinking,

“If we can't recover

our domain

controllers, we

can't recover

anything.”

Okay, Shlomi.

So we know

that Maersk wasn't

the only company or organization affected

by the attack,

but certainly

they were among

the largest

and highest

profile.

What do you think

it was about them

that made them

so vulnerable

to an attack

like this?

I think

it was technically

a very

sophisticated one.

It was using

what was, back then, a zero-day or one-day vulnerability.

And I think it was

a combination

of Maersk

being caught up

in the crossfire

between Russia

and Ukraine.

So it was not

something that

they had necessarily anticipated.

I think

that probably

in their risk

management meetings

and when looking

at their

possible threats,

they did not

think about

the Russian

government

going after them,

which probably made them think less carefully, or not go that deep in categorizing these threats and addressing them. They did not think that this was a threat scenario that was relevant for them.

And combine

that with the fact that Maersk is a very

globally

distributed

company,

lots of branches,

lots of points

of presence,

it is hard to cover

that amount

of different

locations

and assets

throughout

the world.

And that was

basically

the reason. I'm sure they did not think of their Ukraine branch, with this specific accounting software, as one of their more risky areas.

And the way that

they were

structured was

so that

they did not have

the right

separations between

different areas,

and different

assets,

and different

environments in their network, which made every little branch count.
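
Shlomi's segmentation point lends itself to a simple spot check: from a machine in one branch, verify that the ports a wormable payload like NotPetya spread over are not reachable in other branches. The Python sketch below is a hypothetical illustration; the branch addresses are made up, and a real check would walk an asset inventory and run from several network segments.

```python
"""Crude network-segmentation spot check.

Hypothetical sketch: from a machine in one branch, confirm that hosts in
other branches do NOT accept connections on the SMB ports wormable
malware abused in 2017. The host list below is an assumption for
illustration; a real check would be driven from an asset inventory.
"""
import socket

OTHER_BRANCH_HOSTS = ["10.20.0.10", "10.30.0.10"]  # assumed addresses
PORTS = [445, 139]  # SMB ports NotPetya-style worms spread over
TIMEOUT_SECONDS = 2.0


def is_reachable(host: str, port: int) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_SECONDS):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for host in OTHER_BRANCH_HOSTS:
        for port in PORTS:
            if is_reachable(host, port):
                print(f"SEGMENTATION GAP: {host}:{port} reachable from this branch")
            else:
                print(f"ok: {host}:{port} blocked")
```

A check like this proves nothing on its own, but run routinely from each branch it makes "every little branch counts" measurable instead of theoretical.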

So in the very

early minutes,

this presented

very much like

a ransomware

attack,

but eventually that proved not to be, unlike, say, CNA Financial, which got hit with a $40 million ransom note back in 2021 and eventually just paid off the attackers. Here, that wasn't really an option.

So how does

that affect

your initial

assessment

as a leader

and then your

subsequent strategy?

In several respects

it was

a ransomware

attack.

I mean, it did

encrypt files

and data.

It was just

very, very big.

It spread out

very rapidly

and the incentive

was not money.

So if the incentive had been money and Maersk had gotten a ransom note, they probably, you know, would have tried to pursue this, pay the ransom, and be done with it.

The fact that

it was not

a financial

driven attack

made it

something that

they cannot buy off

to get out from.

So they had to go

through the

entire path

of restoring

systems,

investigating,

doing

the entire thing.

Act 3:

The Rebuild.

Imagine this.

You're one of

the world's

largest shipping

companies.

Your systems

are gone.

Your backups

are gone.

Your IT

staff

can't log in to

a single server.

Your company, 76

ports, 800 massive

vessels, is afloat

in the dark.

No navigation,

no coordination,

no data.

Just the

sound of fans

spinning in empty

server rooms.

And then a signal,

a sign of hope.

Not from

headquarters,

not from

some office

in Europe,

but from a

lone office

outpost

thousands of

miles away

in West Africa.

Ghana,

to be specific.

It was luck,

a miracle

born of logistics,

timing

and geography.

One small

Maersk office in

Ghana had

lost power

the morning

of the attack.

Their building's circuit had tripped, cutting power to their local

domain controller.

When the power

returned, so

did something

Maersk had

thought lost forever: an uninfected backup of their Active Directory server.

It was a shard

of the Old World,

a pristine

copy of

the company's

digital identity,

user accounts,

credentials,

configurations,

everything needed

to begin again,

to rebuild.

The drive

was flown

to England,

where Maersk

had begun

setting up

clean rooms,

digital

quarantine zones

for restoring

operations

from the ground up.

They worked around

the clock,

they rebuilt

servers, configured

credentials

and began slowly

reconnecting pieces

of Maersk’s

vast

infrastructure.

From there,

they pushed

new configurations

to Singapore,

then to the US,

and finally back

to Denmark.

With each server

restored,

a little more visibility returned. Port operations crawled back online, booking systems rebooted.

Maersk wasn't

saved yet,

but it was back

on its feet.

What followed

was the

longest week

in the company's

history.

IT teams worked 20-hour days. Offices

that had gone

dark were

reconnected one

by one.

Cargo began

moving again,

and within

ten days,

95% of Maersk’s

operations were

back online.

There were no

press conferences,

no ticker

tape parades.

Maersk

called it

“business

continuity.”

But those inside

knew it was

a resurrection.

The total cost: in the neighborhood of $300 million in damages, 49,000 devices rebuilt, entire systems re-architected from scratch.

But the real price

was measured

in something deeper...

trust, resilience

and pain.

So I think it's

probably safe

to say

that back

in the mid-2010s, Maersk

wasn't

considering Russia

as a chief threat

in terms

of cybersecurity.

But at this

point, here

we are in 2025,

we're all a

little bit wiser.

How concerned

and prepared

should global

organizations

be regarding

ending up as

collateral damage

or indirect targets

in someone else's

cyber war?

Many industries

are definitely

at risk

of becoming

collateral damage

in a cyber war.

We've seen this in

many cases

here in Israel,

for sure.

Quite a few of the

bigger attacks

that we've seen

were not

financially

driven,

but were aimed

at shaking

the stability

of our

financial system,

of our critical

infrastructure

companies like

electricity

and water.

And I think this is

one of the

key points

that CISOs

sometimes miss.

You need to

be very honest

with yourself

as to

what can

really happen

and who is really

after you.

There's

a common

misconception

that companies have

and people have

that, you know,

I'm not

important enough

for someone

to actually

target me.

That's not true.

If you are working in the financial industry or in critical infrastructure, or if you're just big enough that you being down would make a dent in the economy or in the stability of the country, you're definitely on the list.

And even

if you're not,

like here,

they did not

target Maersk,

even though

they might have, I mean, it's a legitimate target, I think.

But they

did not target

Maersk.

It was just

a mistake.

And you know,

companies do

get in

the crossfire

even by mistake.

So do you think

any company

or organization

has the luxury

at this point

of thinking

of themselves

as neutral?

Or is everyone

a legitimate

target?

Is everyone

essentially on

the cyber war

battlefield

at this point?

I think

we're all on the

battlefield now.

Some of us more,

some of us less.

But it's definitely

a threat scenario

that you need

to consider

and take

into account.

Doesn't mean that

every company

has to act like

a government agency

or a big bank,

but it does

mean that you need to take it into account, you need to be aware of it, and you need to take the cost-effective steps

to be secure enough

against these

threat scenarios.

Act 4:

The Debrief.

There are breaches

that steal money.

There are breaches

that steal secrets,

and then

there are breaches

that send a message

one not meant

for the victim,

but for the world.

Maersk was never

the target.

It was collateral

damage

in a digital war it never volunteered to fight in.

As Maersk's systems

flickered back

to life,

investigators began

following

the infection

chain backwards,

line by line,

log by log. NotPetya, it turned out, had not entered through a phishing email or brute-force login.

It came through

software. Accounting

software.

MEDoc was a

widely used

Ukrainian

tax application

required by law

for companies

operating inside

Ukraine.

Maersk had

an office in Kyiv

and like

thousands

of others,

they had

MEDoc installed

on a single

machine.

On June 27th,

MEDoc’s servers

were hijacked.

A poisoned

update was

sent out,

signed, legitimate,

and deadly.

NotPetya didn't

exploit user error.

It exploited trust.

The malware

spread with

terrifying speed.

It used multiple

tools, the stolen NSA exploit EternalBlue and the credential harvester Mimikatz,

chaining them

together like a key

ring of doom.

If one door

was locked,

the next might

not be.

It didn't

just encrypt files,

it destroyed

them permanently.

The ransom

screen was a

smoke screen.

There was

no decryptor,

no customer

service, just data

turned to ash.

The code

was sloppy,

hastily written,

but effective,

and worst of

all, deliberate.

Ukraine was hit

hardest.

Government

ministries,

banks, power

grids,

transportation.

But then

the blast wave

rolled outward.

FedEx, Merck,

Mondelez,

even hospitals

in the

United States

reported system

outages.

Maersk

was one of the

most visible

victims, but

the attack

had infected systems in over 60 countries.

The question now

was who and why?

In February 2018,

the United States,

United

Kingdom, Canada,

and Australia

jointly attributed

the

NotPetya attack

to the

Russian military.

Specifically, the GRU's Unit 74455,

also known

as Sandworm.

It wasn't a

cyber crime,

it was

cyber warfare.

The purpose

of NotPetya: to punish Ukraine,

to send a signal

to any company

doing business

there.

The timing wasn't

random.

It struck

just before

Ukraine's

Constitution Day,

but like a bomb

detonated in the

wrong building,

the message hit far

more than the

intended target.

This was the moment

the world saw

what cyber war

could do.

Not targeted

espionage,

not ransomware

for profit,

but raw,

calculated

destruction.

So in

a sea

of really bad

luck, Maersk

finally

catches a break

and gets lucky

and finds

this domain

controller

in Ghana.

No CISO ever

wants to

count on luck.

So what advice

can you offer for

better

preparedness,

even for the most

unlikely scenarios?

Sometimes

you get lucky,

sometimes

you don't, so luck

always helps,

but it's

not something

you can count on.

And so

preparedness is

super important.

Not only in the classic prevention state of mind, how would I stop this from happening, but also in incident management and crisis management: to really think and plan what would happen if everything fails and my catastrophic worst-case scenario actually occurs.

And this is

something, again,

it's not fun

to think about.

It's not a nice

or fun

part of the job,

but you have to

do it.

You have to do it

because like here

with Maersk,

and what can happen

today

with any type

of infrastructure

or technology,

you may be

in a place

where your defenses

are down

for some reason.

You may

be in a place

where a catastrophe

can happen.

You need to know

in advance

and prepare

and train,

what would you do

in that situation?

And that's

something that

I've been putting a significant amount of time into: planning, training the organization, and designing for it.

So there's this old

saying that

you don't know

what you

don't know.

So how

can you build

readiness

for attacks

that are beyond

what you can

reasonably

anticipate?

I think there

is always

a possible

blind spot.

I think

acknowledging

that is important

and planning,

knowing

there might be

a blind

spot is also

super important.

So if you know that

something

can happen

that you have

not anticipated,

you can build

your readiness

for that

unexpected event.

We can build

readiness in terms

of having

all kinds

of redundant

controls.

You can build

all kinds

of capabilities

to rapidly

restore

your business,

to detect

anything that

looks suspicious.
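
One cheap way to act on "detect anything that looks suspicious" is a canary check: plant files that nothing legitimate should ever touch and alert when they change or disappear. The Python sketch below is a hypothetical illustration; the canary paths, the baseline file, and the idea of re-running it from a scheduler are assumptions, and a real deployment would send alerts somewhere more useful than stdout.

```python
"""Canary-file check: alert if files nothing should touch change or vanish.

Hypothetical illustration: plant decoy files across shares or hosts,
record their SHA-256 hashes once, then re-run this periodically from a
scheduler. The paths below are assumptions for the example.
"""
import hashlib
import json
from pathlib import Path

CANARIES = [Path("/srv/finance/do_not_open.xlsx"), Path("/srv/hr/payroll_old.db")]
BASELINE = Path("canary_baseline.json")


def sha256(path: Path) -> str:
    """Hash a canary file's current contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def record_baseline() -> None:
    """Store the known-good hash of every canary file."""
    BASELINE.write_text(json.dumps({str(p): sha256(p) for p in CANARIES}))


def check() -> list[str]:
    """Return alert messages for canaries that are missing or modified."""
    baseline = json.loads(BASELINE.read_text())
    alerts = []
    for raw_path, expected in baseline.items():
        p = Path(raw_path)
        if not p.exists():
            alerts.append(f"canary missing: {p}")
        elif sha256(p) != expected:
            alerts.append(f"canary modified: {p}")
    return alerts


if __name__ == "__main__":
    if not BASELINE.exists():
        record_baseline()
    else:
        for alert in check():
            print("ALERT:", alert)
```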

And it's

something that

needs to stay

always top of mind.

Because yes, these things

can happen.

And yes,

technologies

and techniques

from attackers

evolve over time.

If you always

are in a

state of mind

that you know

everything

and you've covered

everything, you

may be surprised.

Act 5:

Lessons

in the Aftermath.

It took just

seven seconds

for the worm

to enter,

seven minutes

to spread,

and seven hours

to bring

a global titan

to its knees.

But rebuilding

that took

grit, imagination,

and a team

that refused

to stay offline.

This wasn't

just a story

of disaster,

it was a story

of recovery,

of a global

supply chain

coming back to life

from a

single forgotten

server

in West Africa.

A story

of human

coordination

at an

impossible scale,

and of a new era

where wars aren't

just fought

on land,

but in code.

In the years since NotPetya, the cyber landscape has changed, but not enough. Supply chain attacks are more frequent.

Attribution is

harder, and

the costs

are higher

than ever.

For CISOs,

the lessons

of Maersk aren't

just about risk,

they're about

readiness,

about humility,

about how close

we all are

to digital

catastrophe, and

how quickly

everything

can vanish.

Okay, so here

we are in 2025,

more than

half a decade

down the road

from this attack.

Do you think

the same

type of attack

could have this

massive impact

today?

Have we

learned the lessons

necessary

from this

type of attack

and built

sufficient

strategies

to protect

our organizations?

I guess,

essentially,

have things changed

enough?

I think

some things

have changed.

I think today the risk of zero-days creating such a huge impact is probably lower, both because vendors, operating systems, and vulnerability management programs are now a commodity, everyone has something in that area and people are more aware, and also,

things like

if you look at,

Log4j or,

you know,

the recent nginx

vulnerability,

they get more

publicity, right?

So people know

about them.

It's hard

not to know

what's going on.

And I think

organizations

are more

aware of that

and are

more trained

in patching and addressing

such

vulnerabilities.

In that sense,

I think that specific scenario is less probable to happen.

But other things

can happen.

You know, the fact

that, you know,

I think today

most companies

rely on

SaaS products,

third parties,

most companies

rely on cloud

and the

new technologies

that are coming.

And these all

create their

own attack

surface, you know,

and if one of these

type of

technologies

has an exposure

or some sort of weakness, it can be used and it can eventually result in a very high impact.

You know, the fact

that technology

changes

and the specific

technique

has changed

doesn't mean

that the risk

has significantly

changed.

And I think that

every CISO

today has to have

the state of mind that you're building your controls and you're doing everything right, but something can still happen.

And you always need

to be in a state of mind

that if

something happens,

do I know

what to do?

Does the

organization

know what to do?

And you need

to build this

level

of readiness.

All right

Shlomi, last

question for you.

When the dust

settles

and management

is looking

to point fingers

and lay blame,

they'll inevitably

turn to your team.

So in that

position,

some might say

that an attack like

this was impossible

to predict.

What do you say

to them?

It is

pretty impossible

to predict,

but it doesn't mean that they should not have thought of that: not of this specific scenario, but of unpredictable possibilities.

And so I think

that eventually

they did

not do a good job

in addressing

these kinds

of risks.

Their environment,

I think, was built with a pretty

naïve perception.

And I do think

that they

should have done

a much better job

in either or both: covering

the majority

of the organization

with different

security controls

that I'm sure

they had.

But also,

like I said,

separating

or containing

these areas

in a way that,

if one

branch is down,

it would not affect

the entire company.

Thank you so

much, Shlomi.

We really

appreciate you

being on the show,

and we hope

you'll come

back again soon.

In the silence

after the screens

go dark,

in the heartbeat

between login

failures

and lockouts,

the reckoning

begins.

What happened

to Maersk

wasn't fiction.

It wasn't

ransomware.

It was

digital

devastation.

And from the ashes,

a question echoes

through

every security

operations center and every CISO's mind.

If it happened

to them,

could it happen

to us?

This wasn't

just a breach.

It was a war zone

cloaked in code,

a battlefield

that stretched

across

continents, systems, and seconds. For a CISO, it could have been

the end of a career

or the moment

to rise,

to lead

with clarity,

resolve

and courage.

Always alert,

always listening

for The CISO Signal.
