Post-incident review, what to document, what to change

A practical format for the post-incident review that produces operational improvements instead of blame, scar-tissue over-correction or a forgotten document nobody reads again.

· Atticus Rowan

The incident is contained. Forensics are complete. Customers have been notified. Regulators are satisfied. The organization exhales. At this point, most firms either skip the post-incident review entirely (everyone is exhausted, nobody wants to relive it) or produce a document that names fault, gets filed away and changes nothing operationally.

Both outcomes waste the single largest improvement opportunity the firm has in its cybersecurity program. The cost of the incident was paid. The value of the learning is sitting on the table, free for the taking, and it disappears within weeks if nobody structures the extraction.

Here is a working format for the post-incident review that produces operational improvement rather than blame, scar-tissue over-correction or a shelf-ware document.

When to run the review

Timing matters more than most firms realize.

  • Too early (within days of containment). People are still processing. Emotional residue from the response biases memory. Blame is the default frame.
  • Too late (more than 60 days after containment). Details fade. Participants forget their specific decisions. Motivation to improve declines as the organization moves on.
  • Right window: 21 to 45 days post-containment. Acute stress has subsided. Facts are still fresh. Documentation exists from the active response. Schedules can be coordinated.
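The window above is simple date arithmetic, which makes it easy to drop into whatever scheduling tooling you use. A minimal Python sketch; the `review_window` helper is hypothetical, not part of any particular product:

```python
from datetime import date, timedelta

def review_window(containment_date: date) -> tuple[date, date]:
    """Return the (earliest, latest) dates for scheduling the
    post-incident review: 21 to 45 days after containment."""
    return (containment_date + timedelta(days=21),
            containment_date + timedelta(days=45))

# Example: containment on 2026-03-01 gives a window of
# 2026-03-22 through 2026-04-15.
earliest, latest = review_window(date(2026, 3, 1))
```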

A single 2-hour session usually suffices for mid-market incidents. Significant incidents (ransomware with recovery extending over weeks, breaches with regulatory notification) may warrant a 3- to 4-hour session or a two-meeting format (facts first, improvements second).

Who should be in the room

Scope the participants carefully.

Must attend:

  • The incident commander
  • The primary technical lead (MSP or internal IT)
  • The MDR provider’s lead analyst (for technical context)
  • Legal counsel (for what can be said and how)
  • Any executive who made material decisions during the incident

Should attend:

  • The incident response firm’s lead (if one was engaged)
  • The cyber insurance broker (for coverage-relevant observations)
  • Any business function lead whose operations were affected

Should not attend:

  • Large groups of observers who were not involved
  • Any party with unresolved blame tension
  • Anyone who cannot speak candidly under the confidentiality framework

Keep the room to 6 to 12 people. More than 12 makes candid discussion harder.

The confidentiality framework

State it explicitly at the start of the session.

  • The discussion is confidential among participants
  • The output document is shared only with executive leadership (or board committee) on a need-to-know basis
  • Legal counsel’s privilege assertion applies where relevant
  • The goal is improvement, not accountability
  • Blame-laden characterizations will be rephrased in the output

Without the framework, participants self-censor and the review produces sanitized findings that do not drive change.

The 6-section format

A working format that produces usable output. Roughly 20 minutes per section.

Section 1, timeline reconstruction

Build a shared understanding of what happened and when.

Questions:

  • What was the initial detection (time, source, signal)
  • How did the event progress hour by hour
  • What decisions were made at each inflection point
  • What information was known vs assumed at each decision

Output: a dated, timed sequence of events from initial indicator through return to operational state.

The reconstruction often reveals that the story the organization told itself afterward is different from what actually happened. Getting to shared factual ground is the foundation for everything else.
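If you want the timeline to survive the review in a form that supports the later sections, the entries can be captured as structured records rather than meeting notes. A minimal sketch in Python; the field names here are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TimelineEntry:
    """One event in the reconstructed incident timeline (Section 1)."""
    timestamp: datetime
    event: str                       # what happened
    decision: str = ""               # decision made at this point, if any
    known: list[str] = field(default_factory=list)    # facts known at the time
    assumed: list[str] = field(default_factory=list)  # assumptions in play

# Hypothetical example entry.
timeline = [
    TimelineEntry(
        timestamp=datetime(2026, 2, 10, 2, 14),
        event="MDR alert: anomalous outbound traffic from file server",
        decision="Isolate the host; wake the incident commander",
        known=["single host affected"],
        assumed=["no lateral movement yet"],
    ),
]
timeline.sort(key=lambda e: e.timestamp)
```

Separating `known` from `assumed` per entry is what makes the Section 1 question "what was known vs assumed at each decision" answerable later.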

Section 2, what worked

Before examining what went wrong, name what went right. This is not politeness — it identifies capabilities worth reinforcing.

Questions:

  • Which controls, procedures or decisions prevented a worse outcome
  • Which tools, vendors or partners performed as expected
  • What documented processes executed correctly under pressure
  • What deliberate pre-incident investments paid off

Teams often discover 5 to 10 specific things that worked well but had never been consciously credited before the review. Those become the “reinforce and protect” list.

Section 3, what went less well

The core of the review. Specific examples, specific root causes.

Questions:

  • What surprised you during the response (positive or negative)
  • Where did the plan fail to anticipate a situation
  • Where did communication break down, and what was the cost
  • Where did tooling not work as expected
  • Where did authority or decision rights create delay
  • Where did coordination across parties stall

Ground every finding in specific events from the timeline. Avoid abstractions like “communication was poor” — replace with “at 2:47 AM the MDR called the MSP on a number that had changed 6 weeks earlier, losing 45 minutes before contact was re-established.”

Specific findings produce specific remediations. Abstract findings produce shelfware.

Section 4, root causes vs surface symptoms

For each finding in section 3, ask the “why chain.”

Surface symptom: “The restore runbook had outdated admin credentials.”

Why chain:

  • Why: admin credentials rotated 3 months ago, runbook wasn’t updated
  • Why: no process links credential rotation to runbook updates
  • Why: runbook maintenance isn’t owned by a specific role
  • Why: when managed services onboarded, documentation ownership was never assigned to a single role

Root cause: ambiguous documentation ownership in the managed services engagement.

The root cause is what gets fixed. Fixing the surface symptom (“update the credentials in the runbook”) fixes one instance and leaves the next rotation cycle to produce the same failure.
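The why chain lends itself to a trivially simple representation: an ordered list where each entry answers "why" for the one before it, and the last entry is the root cause. A sketch using the example above:

```python
# The why chain from the runbook example, ordered from surface
# symptom down to root cause.
why_chain = [
    "Restore runbook had outdated admin credentials",           # surface symptom
    "Credentials rotated 3 months ago; runbook not updated",
    "No process links credential rotation to runbook updates",
    "Runbook maintenance is not owned by a specific role",
    "Documentation ownership never assigned at managed-services onboarding",
]

root_cause = why_chain[-1]   # the root cause is what gets fixed
```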

Most root causes at mid-market firms fall into predictable categories:

  • Ambiguous ownership of a process or artifact
  • Missing feedback loop that would keep something current
  • Implicit knowledge that was never documented
  • Vendor coordination gaps at the handoff between parties
  • Defaults that look reasonable individually but compound badly under stress

Section 5, specific actions with owners and dates

The output of the review is a list of specific actions, not principles.

Each action has:

  • Description: concrete change to make
  • Owner: named person accountable
  • Target date: specific, near-term (usually 30 to 90 days)
  • Verification: how completion will be confirmed

Bad example: “Improve communication during incidents.” Good example: “MSP will publish updated incident notification tree with confirmed current phone numbers for 8 executive contacts. Owner: MSP account manager. Due: 2026-05-15. Verified by: test call to each number.”
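The four required fields, and the most common ways they go wrong (no named owner, a vague or far-off date, no verification step), can be encoded as a small validation sketch. This is illustrative Python with hypothetical names, not a prescribed tool:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Action:
    """One review action item (Section 5): concrete, owned, dated, verifiable."""
    description: str
    owner: str           # a named person, not "the team"
    target_date: date
    verification: str    # how completion will be confirmed

    def check(self, review_date: date) -> list[str]:
        """Flag common action-list failure modes."""
        problems = []
        if self.owner.strip().lower() in ("", "tbd", "the team"):
            problems.append("no named owner")
        days_out = (self.target_date - review_date).days
        if not 30 <= days_out <= 90:
            problems.append(f"target date is {days_out} days out (want 30-90)")
        if not self.verification.strip():
            problems.append("no verification step")
        return problems

# The good example from the text, assuming a 2026-03-15 review date.
good = Action(
    description=("MSP publishes updated incident notification tree with "
                 "confirmed current phone numbers for 8 executive contacts"),
    owner="MSP account manager",
    target_date=date(2026, 5, 15),
    verification="Test call to each number",
)
```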

Most reviews produce 6 to 15 specific actions. Fewer than 6 suggests the review was shallow. More than 15 suggests scope creep that won’t actually complete.

Section 6, cultural and organizational observations

The section most reviews skip. Often the most important.

Questions:

  • What did we learn about how our organization makes decisions under pressure
  • What assumptions about our capability proved false
  • What does this incident imply about our cybersecurity investment level going forward
  • What does this imply about our vendor relationships
  • What does this imply about board-level reporting

These observations rarely produce a specific Section 5 action but shape the executive team’s and board’s cybersecurity posture in ways that ripple across subsequent decisions.

Common failure modes

Reviews that fail to produce improvement.

  • Blame-dominant framing. Participants defend positions rather than discussing facts. Output is sanitized. Nothing changes.
  • Too abstract. Findings like “we need better training” with no specific action. 90 days later, nothing has actually been done.
  • Action list with no owners. Every item is “TBD” or “the team.” Nobody specifically accountable. Nothing completes.
  • One-and-done execution. The review happens, the document is filed, no follow-up check-in. 6 months later, half the actions are incomplete and nobody noticed.
  • Over-correction (scar tissue). The incident produces a 40-item action list that attempts to prevent every variant of the event. The program becomes brittle and expensive. Some scar-tissue actions never reverse even when the threat profile changes. Better to address root causes narrowly than symptoms broadly.
  • Under-communication. The review findings stay within the security and IT team. Business leaders don’t learn what happened or what changed. Funding for improvement stalls.

The 30-day follow-up

Schedule a 30-minute check-in 30 days after the review. Walk the action list:

  • Which are complete? Verify evidence.
  • Which are in progress? Confirm revised target date.
  • Which have slipped? Understand why and escalate if needed.
  • Have new issues surfaced that weren’t captured in the original review?

Without the check-in, the action list is aspirational. With it, completion rates double.
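The 30-day walk-through is mechanical enough to script against whatever tracker holds the action list. A minimal sketch with hypothetical items and statuses:

```python
from collections import Counter

# Hypothetical action statuses recorded at the 30-day check-in.
actions = [
    {"item": "Publish updated notification tree",            "status": "complete"},
    {"item": "Assign runbook ownership to a named role",     "status": "in_progress"},
    {"item": "Link credential rotation to runbook updates",  "status": "slipped"},
]

counts = Counter(a["status"] for a in actions)
for a in actions:
    if a["status"] == "slipped":
        # Slipped items get discussed and escalated, not silently re-dated.
        print(f"ESCALATE: {a['item']}")
print(f"{counts['complete']} of {len(actions)} complete")
```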

The document itself

The review produces a written artifact. Keep it concise — 3 to 8 pages typically. Structure:

  • Executive summary (one page for the board or exec team)
  • Timeline (appendix)
  • What worked (half page)
  • Findings and root causes (1 to 3 pages)
  • Action list (1 to 2 pages, the most-read section)
  • Cultural observations (half page)
  • Sign-off and distribution

Label the document clearly with confidentiality and distribution constraints, and place it under the legal-privilege framework where applicable.

Where we fit

Atticus Rowan facilitates post-incident reviews for managed-services clients as a standard part of the incident response engagement. We also run reviews on a standalone basis for firms that had incidents handled elsewhere and want an independent reviewer to structure the learning extraction.

The facilitator role matters. Internal facilitation is subject to the same dynamics the review is trying to escape — blame, defensiveness, incomplete candor. An outside facilitator protects the confidentiality framework and keeps the discussion focused on root causes.

If your firm has had a cybersecurity incident recently and wants the post-incident review structured well, or you realize a past incident never got a proper review and the learning is still recoverable, schedule a discovery call. We can scope the facilitation work and the follow-up discipline.