The $25 Million Arup Deepfake: AI's Most Convincing Con
Welcome to The CISO Signal, the true cyber crime podcast. I'm Jeremy Ladner.
On this episode, we step into a new kind of darkness, a world where the tools of creativity have been twisted into weapons of deception. We're no longer talking about masked figures breaking through doors, or hackers exploiting fragile lines of code. The enemy has evolved, and it wears a familiar face. It speaks with a voice you know, in a meeting you thought was real. This is the new frontier of fraud, where trust is not a currency but a vulnerability, and the most sophisticated heists are pulled off not by compromising a system, but by convincing a human.
In early 2024, a finance employee at a global engineering firm received an email, a seemingly standard request for a wire transfer. They were initially suspicious, as they were trained to be, but the criminals were prepared. They didn't just send a fake link or a malicious attachment. They engineered a meeting, a video conference where the faces and voices of senior executives appeared with chillingly perfect authenticity. The employee saw their CFO, heard his voice, and watched as other colleagues nodded along. All of it was a digital mirage created by AI in a matter of minutes. More than $25 million was gone, spirited away to foreign bank accounts, leaving behind a profound and unsettling question: how do you defend against a lie when it looks and sounds like the truth?
To help us investigate this unsettling new reality, we have a guest who has spent his career at the intersection of high-growth technology and security, understanding how to build resilient organizations from the ground up. As the chief information security officer at Netlify, he's a veteran in the startup space with a proven track record of securing companies through periods of rapid scale and major transformation. He's a leader who understands not just the technical threats, but also the human and cultural challenges that come with building trust in a digital world. This week's guest CISO co-host is Mark Dorsi.
Mark, welcome to The CISO Signal. So happy to have you here.
Yeah, I love being here. Thanks so much for the warm introduction. I really appreciate it.
Absolutely. Before we dive into the investigation, is there anything you want to add to your bio, some background that the audience should know about?
You know, overall, I'm a startup specialist, and I've been a part of startups throughout my entire career. I just really love trying to make the world a better place, one conversation at a time, so I'm always happy to spend time chatting with the up-and-coming founders of the world, those folks out there who are trying to do just that. They're always free to reach out to me, and I'm happy to chat with them about the next thing they're going to bring to us to try and help defend us against those adversaries out there who are adapting and overcoming all of the controls we've already put in place.
Excellent. All right, Mark, are you ready? Let's get started with the investigation. Absolutely.
We are in the midst of a ceaseless war. Not of bombs or bullets, but of breaches, firewalls, and silent incursions. The targets: our borders, our banks, our commerce, and the critical infrastructure that underpins a free civilization. The enemy is cloaked in code, fueled by greed, glory, and a desire for chaos. This is the story of the unseen protectors, the nameless generals, the CISOs, chief information security officers. They are the guardians at the gate, watchers of the wall, ever vigilant and always listening for The CISO Signal.
One of the really interesting things about this particular attack was that the person on the inside, the employee, the victim of the social engineering, essentially had their suspicions. They were not so trusting. They said, I don't know about this, and there was pushback. And then that led to the invitation to a video conference where the CFO and all the coworkers, not in actuality, but their AI doppelgangers, were there. How do you think that changed the trajectory, and what kind of planning would you imagine was involved in advance? That sort of chess match: if the employee says this, we'll have to do B, and if he says that, we'll do something else. Walk me through what you think, if you're going to profile these attackers and prepare others for this sort of attack.
Yeah. I would imagine it doesn't take much, actually, especially with ChatGPT and those sorts of things today, where you can just dive straight in and think through what might be some objections that someone might have to a scenario like this one. You'll quickly see that an individual who wants to help, look good in front of the organization, and deliver something at a very high quality and high level will tend to err toward trust, especially when it's senior management, or those sorts of people, on the phone. And that's where the ability to separate roles and responsibilities within an org is really important. Someone who's able to just wire $25 million on their own, maybe they had to go through a couple of internal hoops. But again, it all comes back to those simple questions: why are we doing this now? Who have we spoken with? What are the procedures you need to go through in order to get this done? And sure, it can add friction to a moment, but it's really important.
So as I profile that individual on the other end, it's: all right, I would have absolutely asked ChatGPT what the objections might be from an employee. And in putting together an incident response test, I've done this, by the way, right now, and those objections can then just be quickly met. For the most recent one that I did, I did leverage ChatGPT. So I perform these things for organizations as well. Right.
And so I was able to, without ever speaking to an individual and without any prior engagement, completely reset a password as well as a 2FA token, without ever having to speak with an individual. And the way that I was able to get that done using ChatGPT was through persistence. I just used the exact phrases it provided me, and I was then able to be successful: all right, tell me what I need to say next. And it just walked me through the entire operation, what I needed to do on the other side.
So a little bit of preparation goes a long way, but the ability to see that person and have that individual speak directly to you can be a real challenge as far as questioning authority goes, right? I have them on the phone, I can see them, they're letting me know what I need to do. And it's always about the same fundamentals. Is there a sense of urgency? Why is this urgent now? And why can't it wait until, you know, Monday, when we normally do these sorts of things, or the end of the month, when this might normally happen?
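To make that separation-of-duties point concrete, here is a minimal, hypothetical Python sketch of the kind of control Mark is describing: no single requester can release a large transfer on their own, and the largest transfers also require an out-of-band confirmation before funds move. The thresholds, field names, and the PaymentRequest workflow are illustrative assumptions, not a description of Arup's or any real system's controls.

```python
# Hypothetical sketch of dual control for payments: a transfer above a
# threshold is never released on one person's say-so, and very large
# transfers also require an out-of-band callback. All names, thresholds,
# and workflow details here are illustrative assumptions.

from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000      # anything above this needs a second approver
CALLBACK_THRESHOLD = 1_000_000   # anything above this also needs a callback check

@dataclass
class PaymentRequest:
    amount: float
    requested_by: str
    approvals: set = field(default_factory=set)
    callback_verified: bool = False  # True only after confirming on a known number

def can_release(req: PaymentRequest) -> bool:
    """Return True only if dual control and callback rules are satisfied."""
    independent_approvers = req.approvals - {req.requested_by}
    if req.amount > APPROVAL_THRESHOLD and not independent_approvers:
        return False  # the requester alone can never move large sums
    if req.amount > CALLBACK_THRESHOLD and not req.callback_verified:
        return False  # "why now?" must be confirmed out of band first
    return True

if __name__ == "__main__":
    req = PaymentRequest(amount=25_000_000, requested_by="finance_clerk")
    print(can_release(req))        # False: no second approver, no callback yet
    req.approvals.add("treasury_lead")
    req.callback_verified = True   # e.g., confirmed with the CFO on a directory number
    print(can_release(req))        # True: dual control plus out-of-band check satisfied
```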
So with that in mind, and with the brilliance of hindsight, what are some of the low-tech, high-impact protocols a company can put in place to verify these high-value transactions, especially when they involve remote teams working off site? I think in this instance the employee was in Hong Kong and the CFO was in the UK. What would you suggest?
Well, this is where I tell folks you can start small and go big. That, to me, is where you really need to be: the regular cadence of business. Right? We're in the business of doing business, let's get down to business, to quote Monty Brewster. And if you are going to do that, I would say you would not just go off and do a wire transfer for $25 million on some random Monday. Now, it could be that that particular org has a cadence of that. Yeah. In which case they probably need to go back and revisit just how important that sort of mid-period wire transfer activity really is to the business, versus knowing that you won't lose that type of money, and the individual. Right? They took it pretty hard on themselves after being at that business for all those years. They made a huge error in judgment, but in that moment in time, right, even they could fall subject to it. Absolutely. Right. I think it was after 27 years of being a part of the organization. I mean, you've done well all of these years, and then, of course, you're tricked by the latest, greatest technology, when in reality the organization really just needed to have some better controls in place so that you were never tricked in the first place.
Act I: The Uncanny Valley
You're traveling through another dimension. A dimension not of sight and sound, but of ones and zeros. Our destination is a world-class engineering firm, Arup, a name synonymous with structure and stability. They built the foundation for some of the world's most iconic monuments, from the soaring majesty of the Sydney Opera House to the colossal woven steel of the Bird's Nest Stadium. Their work is a testament to the immutable laws of physics: the strength of steel, the certainty of concrete. But what happens when that strength is built on something far more fragile than a structural blueprint? What happens when a solid foundation is made not of steel and concrete, but of silicon and lies?
The time is early 2024. The catalyst is a seemingly mundane email that arrived, as they so often do, as a digital whisper bearing the name of a trusted colleague: the company's CFO, based thousands of miles away from Hong Kong, in the United Kingdom. Now, our man was no fool. He was a professional, a sentinel at the gate, trained to spot the subtle tells of a digital deception: the grammatical missteps, the unnatural urgency, the mismatched sender details, all the telltale signs of a phishing attempt. And yet there was a doubt, that small, nagging voice in the back of his mind: it required confirmation. It was a reasonable doubt, a human doubt. And so a video conference was arranged, and it was here, in the cold light of a laptop screen, that his doubts were to be laid to rest.
He joined the call, and there they were: the familiar faces, the trusted expressions, the presence of other colleagues a silent, unspoken validation; the voices, the gestures, the very mannerisms all in perfect, chilling sync. The call was a carefully choreographed performance. They spoke of a sensitive transaction, a delicate merger, a need for absolute secrecy. The CFO spoke of the urgency of the moment and the need to move funds now to secure the deal. He even referred to a recent company-wide meeting and a private detail known only to a very few, a humanizing touch that sealed the illusion.
And in that moment, our man's suspicion, that flimsy veil of doubt, was torn away. His mind, trained to detect fraud, was now convinced of the opposite. His caution had been appeased not by a leap of faith, but by the irrefutable evidence of his own eyes and ears. But what he didn't know, what he could not possibly know, was that his senses had become accomplices in his own deception. The faces on his screen were not faces, but a meticulously constructed lie. The voices in his ears were mimicry, carefully constructed, but only ghosts born of a malevolent artificial intelligence.
This wasn't some clumsy scam. It was a profound and unsettling forgery of human identity itself, a mimicry so precise it bypassed our most fundamental defense mechanism: the ability to discern a person from a mere reflection. He was not tricked by a clumsy hacker. He was convinced by a ghost in the machine, a ghost that wore the familiar face of a trusted colleague. And now the stage was set. The illusion was complete, and the transaction of $25 million was approved. It was in that terrifying moment that the twist of technological fate occurred. The funds were siphoned away across 15 different payments, a methodical digital drain of corporate lifeblood, a phantom taking what it wanted, one perfect digital step at a time. And so the very tool of collaboration had been turned into a weapon, a lie so convincing it was indistinguishable from the truth.
So the attackers in this case likely used publicly available footage of the executives. What does this mean for corporate leaders and their public digital footprint, do you think?
It's very real. I mean, for someone like myself who's been on multiple podcasts and those sorts of things, and where you can mine video very quickly in today's world, it's a very real threat as far as that's concerned. Right. The question is whether or not an adversary would want to take advantage of that weakness, take your video, and do whatever they want with it. And for those folks out there who have more content, right? Every sound you could have ever made, every word you could have uttered, can already be assembled out of just a few minutes' worth of a podcast like this one. And so one of the things I love is all of those technologies from back in the day, the "verify me by my voice" kind. Right. Well, what does that even mean now? Folks will say, well, we can see the nuance in how it was delivered. I don't know, my voice right now is being digitally uploaded, right? It's not analog, it's not the entire thing. It's a very digital footprint, in which case, given the way it's captured and the error-correction mechanism that's there, would you actually be able to pull that apart? So I just look at it from this overall technology perspective: you have a set of crown jewels, someone wants to act in order to get those, and are you a tasty enough morsel for them to try and get after it? And folks really need to think through what their control pieces are in order to protect against those types of attackers.
So we've talked a lot about the threat of AI in terms of deepfakes. What role do you see AI playing on the defensive side? Can we use AI to fight back against deepfakes, do you think?
Yeah, exactly. So this is one of my main calling cards, or points, which is that I really believe we're in the Clone Wars, and this applies across the board, all over the place. Today the adversary has a really unlimited number of clones that can attack us. They can use all sorts of free things in order to take low-cost shots at us, and we desperately need our own clone armies. Today, we're just a handful of Jedi out there trying to defend the entire universe against the massive clone armies that can exist, and we really need those capabilities in-house so that we can deploy, with increasing confidence, the clones that will help defend our castles, our kingdoms, against those adversaries in time. And I'm a very strong proponent that we really need to be working as one large security team, because the adversaries are a loosely coupled set of folks who share all of the same technology, and we end up sort of battling all of those clone armies at once.
I just think about it this way. There are attackers today whose job it is to get a foothold. Once they have that foothold, they sell that foothold. The next person then advances in a lateral move and then sells the lateral move, and then from the lateral move, right, it just continues forward. Then the individual actually has the data, right? So they were able to exfil. Now you have all of those pieces together, and they were able to do it with their whole clone army, right, at this point. And we have no defense. Right. It's just a few Jedi who may or may not have configured the appropriate set of canaries, alerts, whatever that might be. And now we're defending that castle alone, and we can do it in an okay manner. But as the clones get better and better, as they get more and more sophisticated, it just becomes harder and harder.
If you think about the existing bug bounty platforms that are out there today, right? They are absolutely flooded today, not with human researchers, but with all of the sort of clone armies that are out there trying to harvest bug bounties. In which case it's very simple for them to turn against the actual crown jewels and decide that that technology, right, once open-sourced, mimicked, or otherwise, can then be used against us. So we need our own clone armies, and we need them in a hurry. What I hear from a lot of folks is: hey, I can test you in this way and I can test you in that way, and we can do this and we can do that. What I don't hear a lot of, or really any of at this point, is: here's a clone army that you can leverage in this way that will help defend you against these types of attacks in very real time.
So from a CISO's perspective, how do you balance a security-first culture with the need for a frictionless work environment where we are productivity focused?
Yeah, I love it. Well, I like to think of it in terms of guardrails and nuggets. I like to reward good behavior: for those folks who are doing the right things and attempting to do the right things, I would put together a fairly frictionless way for them to get things done. What does that look like in practice? I'll take a very simple example. We have many different ways that we do this, but here's a simple one. Should you complete your annual security awareness training on time, you keep your access. Should you not? My bot in the background just turns you off. That's it. There are no ifs, ands, or buts about it.
And then folks go, okay, cool, how come I lost my access? And my team can respond: well, I don't know, did you complete your security awareness training? And so we all know what to do at a particular time. Right. And for those folks that completed it on time, they have frictionless access; they continue to have that access. In a world where you provide those types of capabilities, folks will tend to do the things which allow them to work at speed and with less friction.
So we balance these things all the time, right? Because the biggest risk we have is that we add so much friction that we go out of business. Yeah. Right. So are we accepting too little risk, or are we accepting too much risk, in which case somebody gets our crown jewels? And that's the problem. So it's always a balance, right? We're trying to balance that amount of risk, and if we do it effectively, then the business can sort of operate at speed.
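As a concrete illustration of the training-completion "bot" Mark describes above, here is a minimal Python sketch under stated assumptions: a scheduled job reads completion records from some training platform and automatically suspends access for anyone who missed the deadline. The TRAINING_COMPLETIONS feed and the suspend_access hook are hypothetical placeholders, not a real LMS or identity-provider API.

```python
# Hypothetical "guardrails and nuggets" enforcement bot: complete the annual
# security awareness training on time and access stays on; miss the deadline
# and access is suspended automatically. Data source, schedule, and the
# suspend_access() call are illustrative assumptions only.

from datetime import date

# Pretend feed from a training platform: user -> completion date (or None).
TRAINING_COMPLETIONS = {
    "alice": date(2024, 3, 1),
    "bob": None,                 # never completed
    "carol": date(2024, 6, 30),  # completed, but late
}

DEADLINE = date(2024, 5, 31)

def suspend_access(user: str) -> None:
    # Placeholder for whatever the IdP/SSO suspension call would actually be.
    print(f"[bot] suspending access for {user} until training is complete")

def enforce_training_deadline(today: date = date(2024, 6, 1)) -> None:
    """Run daily after the deadline; no ifs, ands, or buts."""
    for user, completed in TRAINING_COMPLETIONS.items():
        on_time = completed is not None and completed <= DEADLINE
        if today > DEADLINE and not on_time:
            suspend_access(user)  # frictionless for the compliant, automatic for the rest

if __name__ == "__main__":
    enforce_training_deadline()   # suspends bob and carol in this toy example
```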
The truth, when it came, arrived like a cold, sharp rain. The investigation revealed no broken firewalls or weak passwords. There was no crime of code here. This was a crime of consciousness. It was a hunt for a shadow, a specter that left no digital footprints of its own, only the ghostly echo of others. The perpetrators did not breach the perimeter. They bypassed it entirely. They did not hack a system. They hacked the human. They used artificial intelligence to create a perfect facsimile of the Arup CFO and his colleagues.
This wasn't the clumsy, low-resolution deepfake of a bygone era. No, this was a masterpiece of deception, a meticulously crafted mosaic of stolen identity. The algorithms were sophisticated and chillingly patient, and they had scoured the digital world for their source material. Consider for a moment all of the public videos from corporate presentations, the news interviews, the social media profiles, and even internal corporate communications. Every subtle facial twitch, every unique vocal inflection, every way the head tilted in concentration. All of it was collected and analyzed. The criminals found an identity's digital trail of breadcrumbs and used it to build a flawless yet soulless digital puppet.
This wasn't mere mimicry, it was replication, an AI-driven simulacrum that was, for all intents and purposes, real. The attackers didn't need to forcibly take the $25 million. No, it was willingly delivered, with the obedient smile of an employee certain he was doing a good job. This chilling ingenuity represents a fundamental shift in the landscape of cyber crime. The target is no longer the server, it's the person behind the screen. It is a new kind of social engineering, enhanced with the most sophisticated tools known to man.
It speaks to a profound and unsettling truth about our digital age: that our very identity is now a commodity, a set of data points that can be stolen, manipulated, and worn like a mask. The criminals were not just thieves, they were digital puppeteers, and the strings were invisible. They turned trust into a weapon, using the very technology that was meant to bring us closer to create a chasm of deception. They understood the psychology of the human mind, its need for verification, its reliance on sight and sound, and they exploited it with clinical precision. But the tools of their trade were a grim inventory.
The deepfake technology was built not from scratch, but from readily available machine learning frameworks. The voice synthesis was likely off-the-shelf software tweaked for specific accents and cadences. This wasn't the work of a lone genius, but of a sophisticated criminal syndicate. They understood that the most powerful weapon is not a complex piece of malware, but a simple lie, perfectly executed.
They didn't just target the employee, they targeted the entire social and professional ecosystem of the victim. The presence of fake colleagues on the call created a sense of shared responsibility, of a confidential group working toward a common goal. It preyed on the human desire to belong, to not be the one to question a trusted circle. This was a premeditated, calculated assault on the very architecture of human perception. The true act of violence was not the theft of funds, but the theft of certainty itself. It was the moment a professional's lifetime of experience was turned against him by a meticulously constructed lie.
So let's play a game. Let's imagine that in the hours after this breach, Arup finds out they've got to deal with this. They've got to make some changes in-house, across continents, and they call up Mark Dorsi and say, hey, we need to make some changes. What would you say, if you were brought in, would be the top three priorities immediately following the discovery of this fraud? What would you do? What would you change?
Yeah, yeah. So I like to focus again on the crown jewels. I would take a look at who has access to those crown jewels, individually or otherwise, and understand in that moment whether or not that is the riskiest crown jewel they have, once breached. Right. You'll be picked on again for another attack; that's just the way the world works. So you're going to want to ensure that you shore up those controls quickly so that a repeat attack, of course, can't occur. And a company will say, oh, well, that would never happen. It happens all the time. The repeat attack can happen very quickly, or in parallel, especially if you're not prepared for it.
So the very first thing is: take a look at the crown jewels and who has access to them, identify any other crown jewels that could be a weakness, and then really talk about the processes they have in place and what needs to change for them to really understand, okay, this isn't the path we'd want to take in the future. And then understand whether or not you've put in a sufficient number of checks and balances so that the kill chain there is broken, right? Which is, you want to ensure that you put in enough friction so that you've balanced out the risk and understood, okay, what went wrong. All right. And then come back and understand: all right, what changes do we need to make?
I am not a fan of tipping over the entire apple cart unless you absolutely have to. What I think typically needs to happen is that some incremental improvements will probably go a long way, along with some overall training. It's also a moment where the security team tends to get a bit more attention from the executive staff, and so we have an opportunity to shore up other areas that have been previously identified and that need help.
So here's what I would say. I've been a part of organizations in the past where we have known what the challenges are, right, for a very long period of time. But until you have an issue in that area, there's no way of knowing just how real the risk is. You've tried to quantify it, you've let folks know, you've qualified all of the things, but you have other business priorities. Well, in this moment they have an opportunity, and the opportunity is: where else do we need to shore up?
Because I would imagine the teams already understood that they were a bit weak in that area, but they had accepted the risk so they could move at the speed of business. Right. They had a trustworthy individual who never made a mistake in 27 years. Twenty-seven years. Right. But on that day, right, it all added up to nearly a million dollars for every year that employee had been there. Right? But they took the risk, right? They risked their crown jewel in that particular way by not having, you know, maybe separate holders of the MFA or whatever it might be, the proper control that they would have, right? A triple-approval process, whatever that is, all the way back to them.
But I would definitely go in, analyze from a crown-jewel perspective, and work my way out. Understand with the security team what else they thought the weaknesses were, and then work with the different business units, because they're the ones who have direct access and need to move at speed. Yeah. And so, how do we add just the right amount of friction so that that particular attack, as well as the other ones they were probably aware of, can never happen again?
How would you advise employees in a situation like this to verify the authenticity of a request from a senior executive? What mechanisms can you suggest?
I love that. It goes back to number one, and this could be difficult culturally, but, you know, question authority, even when I'm the one coming from a position of authority. It's one of the big things that I champion with my teams and orgs: it's fine to question authority. You don't need to be rude about it, but at the same time, there are many ways to question authority. One is phone a friend: like, this seems really weird and out of the blue, does this sound weird to you? Right. And be that skeptic. Right.
And it's okay. And that's where having that piece of the business in your mind, and sort of being the owner-operator, even if you're not necessarily the owner-operator, you should really act in that way. Which is: as the owner-operator, what would you do if I was going to wire $25 million, which maybe I do every day, right? But why today did the CFO, or the CEO, get on the phone with me? Why? Why in this way? Why are they the ones challenging? Right. So it's okay to challenge back and say, actually, you know, thanks for this, but I'm going to give you a call on your cell phone.
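That "I'm going to call you back on your cell phone" habit can also be written down as a simple rule: never verify a sensitive request over the channel, or to the number, that the request itself supplied. The sketch below is a hypothetical Python illustration of that rule; the KNOWN_DIRECTORY lookup and the place_call helper are assumptions for this example, not a real system or API.

```python
# Hypothetical out-of-band callback check: confirm a sensitive request only
# via contact details the organization already held, never via details that
# arrived with the request itself. The directory and helpers are illustrative.

from typing import Optional

KNOWN_DIRECTORY = {
    # Numbers the org already holds and trusts (e.g., captured at onboarding).
    "cfo@example.com": "+44-20-7946-0000",
}

def place_call(number: str) -> bool:
    # Placeholder: a human dials this number and confirms the request verbally.
    print(f"[verify] calling known number {number} to confirm the request")
    return True  # in real life, the outcome depends on what the real person says

def verify_request(requester_email: str, number_in_request: Optional[str] = None) -> bool:
    """Confirm a sensitive request only via a number we already knew."""
    known_number = KNOWN_DIRECTORY.get(requester_email)
    if known_number is None:
        return False  # no trusted channel on file: escalate instead of acting
    if number_in_request and number_in_request != known_number:
        print("[verify] ignoring the callback number supplied in the request")
    return place_call(known_number)  # out-of-band, attacker-independent channel

if __name__ == "__main__":
    verify_request("cfo@example.com", number_in_request="+852-5555-0100")
```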
Act III: The Price of a Secret
The Arup heist was not just a news story. It was a profound signal. It told us that the old rules no longer apply. It told us that our financial systems, once built on the certainty of steel and human oversight, are now vulnerable to an unseen, undetectable enemy. The attack was a wake-up call, a testament to the fact that security is not just about firewalls and procedures; it's about a fundamental shift in human consciousness. It's about an understanding that the trust we place in a digital handshake can be broken by a phantom, a whisper, an AI ghost in the machine. It forces us to reconsider every interaction, every communication. For all CISOs and security professionals, the lesson is clear, and it is haunting. Protecting the perimeter is no longer enough. The enemy is already inside, using our own tools against us.
The new battleground is not the network interface, it is the human psyche. The new question is not what is real, but how can I prove that what I see and what I hear is real? This is the new reality we inhabit: a world where the most sophisticated tool of deception is not a mask, but the perfect imitation of a face you know and trust. We must now live in a state of perpetual, quiet vigilance, where our greatest asset is a healthy dose of skepticism. The financial loss is significant, yes, but the deeper consequence is the erosion of faith in the very medium of our global collaboration. We have built a world on screens and voices, and now those screens and voices can be turned against us. The answer, perhaps, lies not in more technology, but in a return to a more profound human vigilance, and in building systems not just of code, but of a new kind of skepticism.
It's about recognizing that the greatest security may not be a line of code, but the wisdom of a human eye, the courage to question even what appears to be a certainty. We have moved beyond the age of simple phishing emails and weak passwords, and we have entered the age of AI-powered deception, where the attack is tailored to bypass human cognitive defenses. The technical controls we've painstakingly put in place, our multi-factor authentication, our network firewalls, our intrusion detection systems, are all still vital, but they are no longer sufficient. The Arup case proves that even a perfectly secure network can be exploited by an attack that never touches its digital walls. The true threat lies not in a network vulnerability, but in a psychological one.
What does this mean for the future? It means that security awareness training needs to evolve from spotting typos to validating human identity. It means we must build new protocols of trust, requiring out-of-band verification for high-stakes transactions. It means that the most powerful tool in your security arsenal may soon be a simple phone call to a number that is already known. We are tasked with rebuilding the architecture of trust, brick by brick, in a world where a phantom can wear the face of our friend.
I've got one more question for you before I let you go. Clearly, you've got years and years of experience, and you've seen all kinds of crazy instances. Is there anything that has shocked you, that has surprised you, that you still look back at and that maybe gives you a little chill, that makes the hairs on the back of your neck stand up because it was just so brilliant, so clever, so unbelievable? Anything pop to mind?
Yeah, I mean, that original one that I spoke about. And the reason that one was so good is that it would be such an effective attack today, with the technology available and the ability to just have a bot keep that thread warm for you across so many things. That's what gives me the chills today: the ability for an attacker to build trust through a cell phone number. Hey, this is Bob. Oh, hey, Bob. Yeah, and then it sends a text: oh, hey, did you see this thing that we just did? Oh, my gosh, you know, the results that we got financially. All of those things can be a bot today, and they build trust. They could spend two months building trust with a text message that happens every five days, or every two days.
Some configurable amount. Right. And that ability to build trust is where folks fall victim, where at the end of all of it, right, you had an individual build up this trust. We had a trust moment like that early in my career, with a legacy network: an individual was able to infiltrate this legacy network, which was sort of a global network, and gain full access to it.
And the way they did it was they built trust with the operator of the network. They just started to pretend they were an actual contractor of that company, and they had just pretended long enough that eventually they started being used for real tasks in order to get work done on the network. And then they just said, oh yeah, well, you know, I need my password reset. I don't see your account in here. Well, I should have an account in there, I mean, I've been, blah blah blah, I'm working in this other area. And they were able to just infiltrate in that time.
Right. But they had built up trust with the operator of the network over months, and eventually, right, I discovered them sort of moving around in the network. We had to bump them out, and we had a full-on incident and those sorts of things in order to figure out how to get it taken care of.
In a similar way, right, you have these trust moments that can be built over time, but now you can have these sort of keep-alive, keep-warm moments that could go on a very long time from a cell phone number. Hey, could you update my cell phone number in Slack? I don't know how to do it. Then the cell phone number gets updated in Slack, and right now everybody's using that number in order to get ahold of you. Now it's completely out of hand.
How often do you think organizations are actually checking the personal information that's sitting in your HRIS? Right. It's up to the individual to keep it up to date. Right? But you can wholesale overtake an identity, probably, with just, you know, the Starbucks card attack that I mentioned earlier.
Right. And just, you know, making friends right there. Like, the thing that I see wrong today with the gift card attack is that they go for it right away, instead of just having a moment where it's like: hey, this is Matt, just wanted to welcome you to the team, here's my cell phone if you ever need it. You can do that to thousands and thousands and thousands of individuals, right? This welcoming you to the team.
Yeah. Fred with this company, I'm Bob with that one. Okay. You know Sally from here, right? Joni from there. Right. You can have all of these sorts of things, and you can build worm attacks. And that's the piece that gets me, because now you're able to build trust over time, and now you've really clouded what the environment looks like. So I've probably just given a bunch of people ideas out there. But that's the thing that keeps me awake at night: that first moment, and how you verify. Trust, but verify.
So the most important thing is that, great, I have a piece of information that I feel like I should be able to trust. Can I verify it right now? And that's really what we work on with IT teams: the ability to verify that an individual is who they say they are because of a body of evidence, versus just sort of one piece.
Excellent, Mark. Thank you so much for being on the show. Really appreciate it, and I hope you come back again soon. Yeah, 100%. I really enjoyed this. Thanks so much. This was fantastic. My pleasure.
And now, the final chapter. And so we live with this haunting truth. We now live in a world where a video call is not always a video call, where a voice that seems familiar may be anything but, a world where our most basic human senses can be used to deceive us. We've crossed into a new dimension, one where the line between what is real and unreal is blurred, where the silent specters of cyberspace can reach out and touch our lives. The very tools that we have created to connect us now stand poised to betray us.
We are living in an era where the enemy wears not the mask of an alien adversary, but our own face. We've built a digital tower of trust, believing it to be impenetrable, and in a single moment an AI ghost in the machine showed us that the foundation was not as solid as we believed. The signal has been sent, a whisper from the void, a warning for those who still feel confident that seeing is believing.
The question is: are we listening? Have we learned to trust the unease in our gut more than the perfect image on a screen? We must now walk in a state of perpetual caution, where our greatest asset is a healthy dose of skepticism and our most powerful tool is the courage to question all that appears to be real. And so we remain forever vigilant, and always listening, for The CISO Signal.
All episodes are based on publicly available reports, post-mortems, and expert analysis. While we've done our best to ensure accuracy, some cybersecurity incidents evolve over time and not all details have been confirmed. Our goal is to inform and entertain, not to assign blame. Where facts are unclear, we've used cautionary language, and we always welcome your corrections. Thanks for listening to The CISO Signal.
