Midnight Blizzard | How Russian Intelligence Breached Microsoft - w/ Alyssa Robinson, CISO @ HubSpot
Who are these attackers?
What is it they're going after?
What's their motivation,
and what might
they have touched along the way?
Welcome to The CISO Signal,
the true cybercrime podcast.
I'm Jeremy Ladner.
In the winter of 2024, the Cold War
in cyberspace
turned literal, not
with missiles or tanks,
but with something far quieter:
a slow-building pressure system
forming far to the east.
Silent. Patient.
Unseen.
A storm
the intelligence community
would later call
‘Midnight Blizzard’. Its target -
Microsoft, the company whose technologies
touch nearly every business,
every government,
and nearly
every individual connected
to the modern digital world
and whose internal defenses are studied,
relied upon
and implicitly trusted
across the global security ecosystem.
What happened next wasn't ransomware.
There was no countdown clock, no demand.
It was something colder.
Espionage.
When the breach eventually
surfaced, investigators realized
this wasn't an attack
designed to disrupt systems.
It was designed to observe them.
A disciplined adversary,
a methodical approach,
an operation built
not to break, but to watch.
But how did they get in?
What did they see?
And how close did they come
to the conversations,
controls, and decision-making
processes that shaped
global cyber defense?
Those are the pieces
we'll uncover on this episode,
step by step.
To help us navigate the story of nation-state
espionage and human fallibility,
we are joined by a guest
who understands
both sides of the firewall
better than just about anybody.
Our CISO co-host
this week is Alyssa
Robinson, Chief
Information Security Officer at HubSpot.
Alyssa's career spans
more than two decades, from Cisco to MIT
and Harvard,
where she helped
safeguard one of the world's
most advanced genomic research networks.
At HubSpot,
she leads a modern security program built
not just on technology,
but on transparency,
trust, and resilience.
Alyssa, welcome to The CISO Signal.
So good to have you on the show.
Really appreciate you coming on.
Thank you so much for having me, Jeremy.
I'm very excited.
Are you ready to get started
with the investigation?
Well, let's jump right in.
We are in the midst of a ceaseless war,
not of bombs or bullets, but of breaches,
firewalls and silent incursions.
The targets.
Our borders, our banks, our commerce,
and the critical infrastructure
that underpins a free civilization.
The enemy is cloaked in code,
fueled by greed,
glory, and a desire for chaos.
This is the story of the unseen
protectors, the nameless
generals, the CISOs,
chief information security officers.
They are the guardians at the gate.
Watchers on the wall.
Ever vigilant and always listening
for The CISO Signal.
Act I.
The Cold Front
In the years
leading up to 2024, cyber
conflict had become routine.
Constant, ever-present. Attacks
flared and faded.
Ransom crews came and went,
and the headlines blurred together.
But beneath that noise,
something quieter was taking shape.
A shift in intent.
Nation-state actors were no longer
just testing defenses.
They were studying them, learning
how their adversaries thought.
And as Russia's war in Ukraine dragged
on, as sanctions
tightened and alliances hardened,
digital infrastructure
became a proxy battlefield.
Western technology companies
found themselves recategorized
not as vendors but as strategic terrain.
If we remember,
this attack took place in early 2024;
there was a tense geopolitical climate.
Russia was on the attack in Ukraine,
and it was attacking,
you know,
not just on the battlefields,
but also on the cyber front.
And a lot of different tech
firms had gotten drawn into that.
Microsoft was one of them,
but it certainly wasn't the only one.
But it was one
that had a lot of valuable data
that had a ton of reach.
And so it feels almost inevitable
that this would be a target
that Russia would go after.
Inevitability is a dangerous word
in security. It implies fate.
But what Alyssa is describing
isn't fate; it's structure.
Microsoft didn't become a target
because of a single vulnerability
or a missed patch or a careless employee.
It became a target
because of what it represents
in the modern digital world.
Centralized identity, centralized access.
Centralized trust.
Thousands of organizations
don't just run on Microsoft.
They reason through it.
I mean, it's incredible
the number of different environments
that Microsoft products
are embedded into.
It makes Microsoft
such an attractive target, right?
If we think about this
from a supply chain attack perspective,
the reach is incredible.
And it's just super high-value
for any attacker
who can actually get in there.
And that's why
they need to have such a high level
of protection.
Security teams
increasingly incorporate Microsoft
telemetry
into how threats are modeled
and prioritized.
Incident responders
tune detections
based on Microsoft signals,
and governments
align defensive
posture around Microsoft
identity assumptions,
which means that insight into Microsoft's
internal decision
making isn't just
corporate intelligence,
it's defensive intelligence.
It answers questions
every adversary wants to ask.
What gets noticed?
What gets ignored?
And how long can we remain invisible
before anyone realizes
we were there at all?
If you were to zoom out, what early
environmental weaknesses
or trends made a breach like this
more likely?
I think there's a few things,
and some of them are specific
to Microsoft, right?
Like this is a company
that's 40 years old at this point,
which means that there's
going to be legacy parts
of their infrastructure.
It's going to be huge.
It's going to have such a large surface area.
And then there's things
that we're all seeing, right.
Small companies, large companies.
It's the sprawl of identity.
It's multiple service
accounts... it's non-human identities.
It's just the sheer volume
and pace of patching
that we need to keep up with.
It's integrations, right?
All of these things
make securing a network
and make securing identity
so much more difficult.
And when you add that together
with a huge footprint and lots of legacy
areas of the network,
lots of legacy services,
that just makes it
an incredible challenge to secure.
Legacy systems don't tend to fail loudly.
They fail quietly.
One exception at a time.
One compatibility requirement,
one business-justified decision
that makes sense in isolation
and becomes dangerous in aggregate.
Identity sprawl isn’t negligence,
it's entropy,
the natural byproduct of scale, speed
and exceptions.
Service accounts multiply.
Non-human identities outnumber people.
Integrations pile up
faster than governance can follow,
and over time, security posture stops
being a single wall and becomes a map.
A map
that shows where friction exists
and where it doesn't.
For a nation-state adversary,
that map
is more valuable than a zero day,
because the easiest way into a hardened
system is rarely the front door.
It's the place
no one thinks of as important anymore.
And when access control
becomes contextual,
when identity becomes fluid,
low-importance systems stop
being low importance altogether.
The full context really needs
to be considered there, right?
If an attacker is able to pivot
from a low-importance system
into something that's highly privileged,
it isn't actually a low-importance system.
By late 2023, the conditions were set:
global tension, centralized trust,
expanding identity services.
The perfect recipe
for an adversary that specialized
not in destruction
but in a creeping, quiet presence.
Midnight Blizzard
didn't need to break Microsoft.
They only needed to blend in
because the most effective
espionage campaigns
don't announce themselves.
They wait for the moment
when normal behavior becomes the cover.
And in the modern enterprise,
nothing looks more normal than identity.
Next, how the quietest door proved
to be the most valuable one.
Act II: Zero Visibility
Just about every major
breach has a moment.
People expect
it's a zero day
or a misconfigured firewall
or perhaps a novel
exploit with a clever name.
But that moment never came here.
Midnight Blizzard
didn't arrive with new tools.
It arrived with patience.
The technique, investigators
would later confirm, wasn't exotic.
It was password spraying,
a method as old as enterprise
identity itself.
This is why
basic attacks
still work
against advanced organizations,
because identity systems
aren't built to spot intent.
They're built to allow access.
And in a global workforce
with legacy applications,
service accounts,
exception lists, and compatibility
requirements, identity
hygiene isn't a setting,
it's a moving target.
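To make the technique concrete, here is a minimal sketch of what spray detection can look like. It assumes a simplified sign-in log; the event shape, field names, and thresholds are hypothetical rather than any particular vendor's schema, and real detections would draw on much richer telemetry.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical sign-in events: (timestamp, source_ip, username, success).
def detect_password_spray(events, window=timedelta(hours=1),
                          min_accounts=15, max_attempts_per_account=3):
    """Flag sources that fail logins across many accounts, each only a
    few times -- the low-and-slow shape of a spray, not a brute force."""
    failures = defaultdict(lambda: defaultdict(list))  # ip -> user -> [times]
    for ts, ip, user, success in events:
        if not success:
            failures[ip][user].append(ts)

    suspects = []
    for ip, per_user in failures.items():
        all_times = sorted(t for times in per_user.values() for t in times)
        latest = all_times[-1]
        # Accounts this source touched within the recent window.
        recent_users = {u for u, times in per_user.items()
                        if any(latest - t <= window for t in times)}
        # Spray shape: many accounts, few attempts against each.
        few_per_account = all(len(times) <= max_attempts_per_account
                              for times in per_user.values())
        if len(recent_users) >= min_accounts and few_per_account:
            suspects.append((ip, len(recent_users)))
    return suspects
```

The shape to look for is many distinct accounts, each touched only a handful of times, from a single source over a window: the inverse of a brute force against one user.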
Midnight Blizzard understood that,
and more importantly, they understood
who was worth targeting first.
I think in an ideal world
executives are setting the tone
for a strong organizational culture
of security, right?
But I think anyone who has ever
worked in a helpdesk
or in an operations center
can speak to some of the issues
here, right?
In some cases, executives
themselves are requesting
special treatment.
They want exceptions
because they have important work to do,
and they need to get that done fast.
But in other cases, it's really
just that
we all know who the VIPs are, right?
You know,
the people who are working there
know that they should be
bending over backwards
to help these folks.
And that means that
we've created exceptions
in our security posture
that maybe senior leadership
didn't even ask for.
Maybe we granted them,
you know, without being asked. I think beyond
those cultural challenges,
there are additional issues, right?
Executives are just targeted more.
Their names are out there.
They get outreach from way more people.
They're talking to more people
out there on the internet,
and they have access
to more sensitive data.
They may have weird patterns of behavior.
They may be traveling frequently.
They may be collaborating
with a lot of outside folks.
They may be working strange hours.
And all of those things
make it hard to identify,
you know, what are the right patterns?
What's anomalous,
when you're looking at executive behavior.
In espionage,
access isn't measured by privilege alone.
It's measured by context.
Executives don't just have credentials.
They have conversations, strategy
emails, board materials.
Early signals
about how an organization
thinks, reacts, and prioritizes risk.
And unlike
service accounts, unlike back-end systems,
executive behavior
is inherently irregular.
They travel at odd hours.
They have new devices,
delegated access, assistants
operating for them
across inboxes and calendars.
From a detection standpoint,
this creates a paradox:
the most sensitive identities inside
an organization are often
the hardest to baseline,
which means the line between normal
and compromise becomes dangerously thin.
Midnight Blizzard didn't
exploit this by force.
It exploited it by blending in.
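As an illustration of the baselining paradox, here is a minimal per-identity scoring sketch. The single feature (sign-in hour) and the scoring are hypothetical simplifications; real user and entity behavior analytics weigh many more signals.

```python
import math
from collections import Counter

def baseline_anomaly_score(history_hours, new_hour):
    """Score how surprising a sign-in hour is against one user's own history.
    Works well for predictable accounts; poorly for identities whose
    history is already irregular (travel, delegates, odd hours)."""
    counts = Counter(history_hours)
    total = sum(counts.values())
    if total == 0:
        return 0.0  # no history, no baseline to compare against
    # Probability of the observed hour, floored so unseen hours
    # are rare rather than impossible.
    p = max(counts.get(new_hour, 0) / total, 1.0 / (total + 24))
    return -math.log(p)  # higher means more surprising

# A 9-to-5 account signing in at 03:00 scores higher; an executive who
# already signs in at all hours from everywhere scores lower either way.
print(baseline_anomaly_score([9, 10, 11, 14, 15, 16] * 20, 3))  # higher
print(baseline_anomaly_score(list(range(24)) * 5, 3))           # lower
```

A predictable account at an unusual hour stands out; an executive whose history already spans airports, delegates, and odd hours barely registers, which is exactly why these identities are the hardest to baseline.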
And I do think executives
want to be secure, right?
They want to support
your security program,
but they want to be secure
without being inconvenienced.
They have important work to get done,
and they need to do it,
and they need to do it quickly.
And that can be extremely challenging.
At this stage,
the intrusion
still hadn't announced itself.
There were no alarms, no
data spikes, no outages,
just subtle changes,
the kind that only matter
if you’re looking for them.
Investigators would later
identify activity involving OAuth
permissions, access paths
that don't look like logins
at all, permissions added
where they didn't belong through
trust relationships that appeared
legitimate,
production-level access granted through
non-production applications.
The kind of configuration drift
that happens every day
in large environments except this time
it wasn't drift, it was reconnaissance.
Because once an attacker understands
how permissions propagate,
they no longer need to break systems.
They can inherit them.
So to really have understood
what was going on,
the alerting would really need
to go deeper.
Permissions to production
systems were being added
to a non-production OAuth app,
and this should be an unusual situation
and one worth alerting on,
particularly for permissions
as critical as Office 365 full access.
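Alyssa's alerting point translates into a simple rule. This sketch is illustrative only: the audit-event fields, the environment tag, and the scope list are assumptions rather than any specific product's log format, but the pattern it flags, high-privilege permissions landing on an app that isn't tagged as production, is the one worth alerting on.

```python
# Hypothetical audit events describing permission grants to OAuth apps.
# Field and scope names are illustrative, not a specific vendor's schema.
HIGH_PRIVILEGE_SCOPES = {
    "full_access_as_app",        # e.g., mailbox-wide access
    "Directory.ReadWrite.All",   # e.g., directory-wide write
}

def flag_risky_oauth_grants(grant_events):
    """Return grants of high-privilege scopes to apps not tagged production:
    the pattern of production-level access inherited by a test or
    legacy application that nobody is watching."""
    alerts = []
    for event in grant_events:
        scope = event.get("permission")
        env = event.get("app_environment", "unknown")
        if scope in HIGH_PRIVILEGE_SCOPES and env != "production":
            alerts.append({
                "app": event.get("app_name"),
                "permission": scope,
                "environment": env,
                "granted_by": event.get("actor"),
                "timestamp": event.get("timestamp"),
            })
    return alerts

# Each hit here should be rare enough to deserve a human look.
```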
This was a long-dwell intrusion.
What do you think
makes slow-burn operations
so hard to detect in real time?
I think these sorts of attacks
can be extremely difficult to detect
because they're using techniques
that mimic
potentially the behavior of real users.
They're using things
like living off the land services,
that can help them
get the information that they need,
but that
don't look that different
from normal behavior.
And so when you're attempting
to detect these low
and slow campaigns
launched by sophisticated threat actors,
that really is a whole new level.
Midnight Blizzard
didn't trip alarms
because it didn't behave like malware.
It behaved like an employee,
one with slightly
unusual access,
slightly unusual patterns,
and just enough legitimacy
to disappear into the noise.
By the time defenders began
asking the right question,
'Is this activity normal,
or merely familiar?',
the hardest part of the breach
is already behind the attacker.
They're inside. They're persistent.
And now the only thing left is time.
Next, the moment defenders realize
they're no longer hunting an intrusion
but reconstructing a history.
If you want
the deeper takeaways
behind episodes
like this,
well, we break it down
in every issue of our LinkedIn newsletter.
No hype, no
vendor spin, just the patterns
CISOs and security teams
actually care about.
Now back to the investigation.
Act III:
The Breakthrough
The tension is super high, right?
I think in the beginning, you know,
there's multiple phases
to the incident process.
And in the beginning,
you're really just trying
to develop a theory of
who are these attackers,
what it is
they're going after,
what's their motivation
and what might
they have touched along the way.
There is a moment
in every serious incident response
when the problem changes shape.
It's the moment defenders
stop asking, 'Is something wrong?'
and start asking,
'How long has this been happening?'
For Microsoft,
that moment arrived
quietly, buried inside logs, correlations
and anomalies
that didn't make sense in isolation.
What investigators would later
confirm was unsettling
not because it was noisy,
but because it wasn't.
The attackers weren't escalating
privileges aggressively.
They weren't deploying
malware across endpoints.
They weren't triggering the kinds
of alerts
incident response teams
are trained to chase.
They were reading,
mapping
how defenders responded to threats
and how long it took them to do it.
And by early January 2024,
one truth was becoming impossible
to ignore.
This wasn't a break-in.
It was an occupation: quiet, persistent,
and deliberately non-disruptive.
You know,
we've got our systems,
we've got all of our logs in place.
We've got all of our
various detection capabilities.
And you're really trying to find the data
that can either back up
or disprove that theory
of what's going on here.
And that can be incredibly stressful.
Right?
It's just not knowing what is going on,
and knowing that there's
a ton of pressure: you really want
to protect those customers,
there's going to be
potentially media coverage.
All of these things
are coming down on your head.
And so I think good incident
commanders are really good
at focusing people
on just the stage
that you're at
and trying not to look beyond that,
but trying to really figure out
what's going on here
and how are we going to stop these folks
from being here
in our network.
In ransomware incidents,
the priority is clear:
contain, restore, and resume operations.
But espionage doesn't offer that clarity
because the real damage isn't immediate.
It's cumulative.
Every hour an attacker
remains inside the environment
increases the chance
they've seen something
defenders didn't know was visible.
Internal emails,
perhaps, or security discussions
or early warning conversations
about other nation-state activity.
And unlike financial crime, intelligence
theft can't be rolled back.
You can rotate credentials,
you can rebuild systems,
but you can't make an adversary unsee
what they've already learned.
Which means the most
urgent question
becomes terrifyingly simple:
what do they know?
And what does that knowledge
allow them to do next?
So how do you think
incident response changes
when the attacker
is collecting intel and not
disrupting systems or demanding ransom?
I think in espionage attacks,
containment isn't just technical,
it's strategic. Right?
What are those most valuable assets
that they might be going after?
You're in the early stages of a breach.
You are trying to figure out
exactly where the attackers
have gotten to already,
how far into your network they have
penetrated.
If you are convinced
that they're going after intelligence,
whether it be intellectual property
or strategy or information
about your customers,
you're
really trying to prioritize protecting
that most important data and,
ensuring that attackers are kicked
out of those parts of the network.
This is where theory
collides with reality.
Security playbooks
assume accounts can be shut down,
that access can be revoked,
that users can wait.
But executive accounts
don't exist in a vacuum.
They're tied to board decisions,
regulatory disclosures,
public statements that can move markets.
And in this case,
the very people
whose inboxes may have been exposed
were also the people responsible
for steering the response.
If you lock them out for too long,
the organization stalls.
Leave them online
without absolute certainty,
and the attacker may still be listening.
It's a dilemma no dashboard can solve,
and one every CISO dreads facing.
When the breached accounts
belong to senior leadership,
how does that
complicate the response effort?
Does it complicate the response effort?
It absolutely complicates
the response effort. I think,
any SOC these days probably
has automation in place
that's just automatically shutting down
those accounts that you believe
are compromised.
Right?
You're able to do that very quickly
if you have good capabilities.
And that isn't going to fly for very long.
Right?
You're not going to be able
to keep executive accounts
shut down for a week.
You're not going to be able
to keep those folks off the network
and not doing their jobs while you're
doing that incident response.
And that makes things
so much more complicated.
If you can't be sure
that those accounts have been secured,
but you've got to keep them running,
that potentially gives you
some additional signal
that you're listening to.
But it can be just extremely challenging
at this stage of a breach.
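A minimal sketch of the containment trade-off described above, with hypothetical function and field names standing in for real SOC tooling: ordinary accounts are disabled automatically, while executive accounts force a human decision and stepped-up monitoring instead of an immediate lockout.

```python
from dataclasses import dataclass

@dataclass
class CompromisedAccount:
    username: str
    is_executive: bool
    confidence: float  # detection or analyst confidence of compromise

def contain(account: CompromisedAccount, *, disable_fn, escalate_fn, monitor_fn):
    """Containment policy sketch: auto-disable ordinary accounts fast,
    but force a human decision plus tighter monitoring for executive
    identities, since locking leadership out for a week rarely holds."""
    if not account.is_executive:
        disable_fn(account.username)      # immediate automated lockout
        return "disabled"
    monitor_fn(account.username)          # step up session and token review
    escalate_fn(account.username,
                reason=f"executive account, confidence={account.confidence:.2f}")
    return "escalated"

# Example wiring with placeholder actions:
contain(CompromisedAccount("exec@example.com", is_executive=True, confidence=0.8),
        disable_fn=lambda u: print(f"disable {u}"),
        escalate_fn=lambda u, reason: print(f"page incident commander: {u} ({reason})"),
        monitor_fn=lambda u: print(f"increase monitoring on {u}"))
```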
The technical work
continues, logs are pulled,
indicators are refined,
and government partners are briefed.
But beneath the process
is something harder to document:
responsibility,
the kind that doesn't disappear
when the root cause
turns out to be systemic,
the kind
that lands on one role,
one person's shoulders,
regardless of how large
the organization is.
And of course, that person is the CISO,
because when nation-state attackers
succeed, the question from boards
and from regulators
and the public is rarely nuanced.
It's typically simple:
how did this happen?
And just as often,
why didn't you see it sooner?
I can tell you that
I would not have wanted to be
the Microsoft CISO in this situation,
because this one is extremely challenging,
and so many of these incidents are, right?
I think there's the pressure
of trying to figure things out quickly,
trying to
make sure
that you have a complete
understanding of exactly what was done,
trying to protect your customers.
I think there is both the feeling
of pressure that comes from,
you know, you are on the hook
to solve this thing
and the feeling of pressure
that comes from,
you know, situations that potentially
you created, not by yourself...
You know, there's a whole company
with a whole culture going on,
and there's
always a force of tension
between keeping something very secure
and building usable products
that your employees
and your customers can use.
And that's a difficult tension
to manage as a CISO.
It can feel like a failure on your part
when you are facing one of these breaches
and facing
that feeling of failure at the same time
as that feeling of tension, of
this is something you've got to do
and you've got to get it done quickly
and you've got to figure out
with potentially limited
information. It's just
a lot to deal with.
By mid 2024, Microsoft
would publicly acknowledge the breach,
confirming what many inside
the investigation already feared.
This was not a criminal operation.
It was an intelligence campaign,
and it had been active longer
than anyone would have liked to admit.
The story, however, was far from over
because once espionage becomes public,
the breach doesn't end.
It just changes shape.
I think there's a clear art to
transparent communications
following an incident, right?
And that includes communications
to the board.
It includes communications to leadership.
It includes communications
to your customers. Right.
I think
clear, transparent communications
about what went wrong
and what you're going to change after
the fact are hugely important
in restoring customer trust and
likely also restoring
trust with your board.
Next,
when the cost of intrusion
is no longer measured in access
but in trust.
Act IV:
After the Thaw
When espionage becomes public,
the breach doesn't end.
It mutates.
Systems may be secured, access
paths are closed,
indicators are hunted down
and eradicated.
But the real damage continues to unfold
in boardrooms and customer conversations,
and in the assumptions organizations make
about what they thought they understood.
Because unlike ransomware,
espionage doesn't
leave a clean ledger of loss.
There is no invoice,
no simple accounting, only uncertainty:
what conversations were read,
what strategies were observed,
and what defensive decisions
were learned
and stored away for later use.
In the weeks
following Microsoft's
disclosure in January of 2024,
the company was very careful
in its language.
This was an intelligence campaign,
not a criminal one,
and that distinction mattered
because intelligence doesn't
just extract data, it extracts advantage.
And advantage, once lost, is difficult
to reclaim. They hear platitudes...
when they hear, you know,
'security is the
most important thing to us,'
and that wasn't reflected
by the information
that's come out about the breach,
it's not going to land, right?
What went wrong,
and what are you going to change to make
sure that doesn't happen again?
That is really the way
to rebuild that trust.
It sounds really simple,
but it's so hard to get right,
particularly in the moment.
And I think it's even harder
when the issue is
that some fundamental security
control is missing,
because the tendency
is to try to make yourself sound
better, to try to make the company
sound like there wasn't
some fundamental issue.
Transparency isn't a public relations
exercise. It's a security control.
Silence creates more risk
than disclosure ever will.
Because when trust erodes, customers
change behavior.
Partners reassess dependency,
regulators recalibrate scrutiny,
and attackers, well, they take note.
In the months that followed
Microsoft's disclosure,
security agencies would issue joint
advisories detailing the techniques
used, warning
that similar
intrusion attempts
were being observed
against other cloud providers.
The message was implicit but clear:
this was not a one-off.
It was a playbook,
and playbooks are written
with the expectation
that they'll be reused,
which meant the real question
for defenders wasn't,
how do we fix what happened?
It was,
what assumptions
made this possible in the first place?
I think there are risks
also from being transparent
in these cases, right.
There are risks that you aren't
going to be able to move fast
to fix whatever caused the issue.
There's risks that additional attacks
could capitalize on those flaws.
There's risks to admitting
that you had less than perfect security
that could open your company up
to legal or regulatory action.
But I do think that
when it's done well,
transparent communication
about what went wrong
and how you're going to fix it
goes a really long way
to resetting reputation,
particularly
if there isn't a long history of issues
that have gone unfixed.
You need to have that real commitment
to understand and fix root
causes, even when,
and maybe
especially when,
those root causes are cultural.
And I think customers understand
that having perfect security is hard.
Standing up to nation-state-level
attacks is impossible,
but they don't understand
making the same mistakes again and again.
What Midnight
Blizzard ultimately
exposed wasn't a failure
of tooling or talent, or even effort.
It exposed a shift
that many organizations
are still coming to terms with.
Identity is no longer a category.
It's not IAM, it's not a product.
It's a strategy.
Because every business decision now
creates an identity consequence.
Because when identity
becomes the perimeter,
every exception becomes an access
path, every
legacy dependency
becomes a liability, and in every scale
decision,
cloud adoption, integration velocity,
centralized trust becomes both strength
and fragility.
And I think I see this
because the root cause, that first root
cause that you arrive at,
is always something
that's easy to fix most of the time.
Right?
Microsoft almost certainly went back,
and they required MFA on the systems
that were involved in
this incident right away,
and maybe it even went
back to its inventory
and prioritized getting MFA in place
for all public-facing
systems, or all systems.
But that's really just the first level.
And I think I've found that,
like each successive level
gets you closer to the truth,
but harder to solve, right?
If the real root
cause was something like
having full visibility into network
traffic between segments
and shutting down
firewall rules or access control lists
that weren't needed,
or if the real root
cause is something cultural,
like executives
getting exceptions to security rules,
those problems are really
exponentially harder to solve, right?
And they're harder
to get support to solve
when you go to your board,
when you go to your executive
leadership team.
And so in the long tail
of cleaning up from these incidents,
there is likely to be security
debt left behind in
any incident
like this,
because the real problems,
the ones that really caused this,
are usually the hardest ones to solve.
And if they were easy to solve,
you would have solved them already.
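As a purely illustrative sketch of the prioritization Alyssa describes, assuming a hypothetical asset inventory with made-up fields, MFA gaps might be ordered so public-facing systems come first:

```python
# Hypothetical asset inventory; the fields are made up for illustration.
inventory = [
    {"system": "legacy-test-tenant", "public_facing": True,  "mfa_enforced": False},
    {"system": "corp-vpn",           "public_facing": True,  "mfa_enforced": True},
    {"system": "build-server",       "public_facing": False, "mfa_enforced": False},
]

def prioritize_mfa_rollout(assets):
    """Order systems still lacking MFA so public-facing ones come first,
    mirroring the 'involved systems, then public-facing, then everything'
    sequencing described above."""
    gaps = [a for a in assets if not a["mfa_enforced"]]
    return sorted(gaps, key=lambda a: (not a["public_facing"], a["system"]))

for asset in prioritize_mfa_rollout(inventory):
    scope = "public-facing" if asset["public_facing"] else "internal"
    print(f"Enable MFA on {asset['system']} ({scope})")
```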
I got one more question for you
before I let you go, Alyssa. Okay.
Have you gotten
any piece of amazing advice
in your career
as you were rising up through the ranks
that you would like to share
with some of those aspiring
CISOs and cybersecurity leaders
right now?
One piece of advice
that has stuck with me over time
is that
as you make your way up
the career ladder, knowing
what to ignore is just as important
as knowing what to pay attention to,
because you can't possibly
pay attention to everything.
And I think that applies
in your job, but it also applies
to things like detecting attacks. Right?
When you're
trying to tune those alerts,
when you're
trying to get your SOC
operating in a place
where it's not drowning,
knowing what to ignore
and what to pay attention
to is super important.
Excellent advice.
Thank you so much
and thank you for being on the show.
Thank you so much for having me.
This was fun.
And now, our closing.
Midnight Blizzard didn't
exploit a zero-day.
It exploited familiarity,
the belief that what had always worked
would continue to work,
that normal behavior was safe behavior,
and that scale was protection.
But in 2024, scale
became the attack surface.
Because the quietest intrusions
don't announce themselves.
They observe,
they learn,
they watch,
and they wait
for defenders to reveal
through patterns,
priorities and exceptions
exactly how they think.
The lesson of Midnight Blizzard
isn't that Microsoft failed.
It's that modern defense
is no longer about building higher walls.
It's about understanding
who's already inside
and why they haven't been noticed yet.
In a world where identity is everything,
the most
dangerous adversary
may not be the one
trying to break your systems,
but the one learning
how you protect them.
And so we must remain ever vigilant
and always listening for The CISO Signal.
If you enjoyed this episode, please like,
share and subscribe.
If you didn't,
thank you for watching this long.
We'll see you in our next episode.
All episodes are based on publicly
available reports, post-mortems,
and expert analysis.
While we've done our best
to ensure accuracy,
some cybersecurity incidents
evolve over time
and not all details have been confirmed.
Our goal is to inform and entertain,
not to assign blame
where facts are unclear.
We've used cautionary language
and we always welcome your corrections.
Thanks for listening to The CISO Signal
