Abishkar Bharat Singh

Incident Response

SOC Analysts

ServiceNow Tester

Asset Management

Citrix Administrator

Part 5: When Things Go Wrong – The Reality of Incident Response

It was 2:47 AM when my phone buzzed with an alert. A colleague had reported clicking a suspicious link hours earlier but waited until end-of-day to mention it. By the time we got the report, the attacker had six hours of undetected access.

Those six hours cost us three days of incident response, system isolation, forensic analysis, and damage assessment.

Had we known within minutes instead of hours, we could have contained it immediately. The difference between a minor incident and a major breach often comes down to one factor: how quickly someone reports what happened.

The Hardest Part of Security: Admitting Mistakes

The colleague who clicked the link later explained why he waited: “I felt stupid. I didn’t want to bother anyone if it turned out to be nothing. I thought maybe it would just… go away.”

This pattern repeats constantly. People delay reporting security incidents because:

  • Embarrassment: “I should have known better”
  • Denial: “Maybe nothing actually happened”
  • Fear: “Will I get in trouble?”
  • Uncertainty: “I’m not sure if this is actually an incident”
  • Minimization: “It’s probably not important enough to report”

Every hour of delay gives attackers more time to establish persistence, move laterally through systems, exfiltrate data, or deploy additional payloads.

The reality: I’ve never disciplined someone for reporting a security incident. I’ve had difficult conversations with people who waited.

What Actually Qualifies as an Incident

During security training, people always ask: “How do I know if something is really an incident?”

Formal definition: An information security incident is any event that results in unauthorized access to, disclosure of, or modification of information or systems.

Practical definition: If you’re wondering whether to report it, report it.

The question itself indicates something unusual occurred. Trust that instinct.

Clear incidents that require immediate reporting:

  • Clicked link or opened attachment in suspicious email
  • Provided credentials to unfamiliar website or caller
  • Lost a device or had one stolen
  • Noticed unauthorized access to accounts
  • Received ransom demand or system lock message
  • Observed unusual system behavior or performance
  • Accidentally sent confidential information to wrong recipient
  • Found sensitive data exposed where it shouldn’t be
  • Received suspicious contact from someone impersonating a colleague

The critical principle: Over-reporting is vastly preferable to under-reporting. We can quickly determine if something is benign. We cannot quickly recover from delayed reporting of actual incidents.

The Statistic That Changed My Perspective

According to Verizon’s 2024 Data Breach Investigations Report, 73% of breaches in our sector start with phishing and social engineering.

Not sophisticated technical exploits. Not zero-day vulnerabilities. Not advanced persistent threats.

Social engineering. Tricking people. The same attacks covered in Part 1 of this series.

The attacks succeed not because users are incompetent, but because attackers are sophisticated and humans are human. Everyone makes mistakes. The question is what happens afterward.

Organizations with strong security cultures treat mistakes as learning opportunities and focus energy on rapid response. Organizations with weak security cultures punish mistakes and create incentive to hide them.

Guess which organizations suffer fewer major breaches?

The Real Cost of Delayed Reporting

Let me walk through two parallel scenarios to illustrate the difference timing makes:

Scenario A: Immediate Reporting

  • 2:15 PM: Employee clicks phishing link, immediately recognizes error
  • 2:17 PM: Calls service desk, reports incident
  • 2:22 PM: Security team begins response, isolates affected device
  • 2:35 PM: Forces password reset, reviews account activity logs
  • 2:48 PM: Confirms no unauthorized access occurred
  • 3:00 PM: Issues cleaned device back to employee
  • Total damage: 45 minutes of disruption, zero data exposure

Scenario B: Delayed Reporting (The Actual Incident)

  • 2:15 PM: Employee clicks phishing link, feels embarrassed
  • 5:30 PM: Mentions it casually to colleague while leaving office
  • 9:00 PM: Colleague contacts security team
  • 9:15 PM: Security begins investigation, discovers attacker accessed email account at 3:42 PM
  • 9:45 PM: Finds attacker sent phishing emails from compromised account to 47 colleagues
  • 10:30 PM: Begins containment – isolate device, reset credentials, notify affected users
  • 11:15 PM: Discovers attacker accessed confidential client proposals
  • Day 2: Forensic analysis, client notifications, regulatory reporting obligations
  • Day 3: Continue containment, implement additional monitoring
  • Total damage: 3 days of intensive response, 47 potential secondary compromises, client data exposure, regulatory implications

Same initial mistake. Drastically different outcomes. The variable was reporting speed.

What Happens When You Report an Incident

Many people delay reporting because they don’t know what to expect. Let me demystify the process:

Step 1: Initial Report (You)

  • Call service desk or IT support immediately
  • Describe what happened without minimizing or exaggerating
  • Provide specific details: time, what you clicked, what you saw
  • Don’t try to fix it yourself—we need to preserve evidence

Step 2: Immediate Response (Service Desk)

  • Log incident details
  • Assess severity and urgency
  • Provide initial guidance (disconnect network, don’t power off device, etc.)
  • Escalate to security team if needed

Step 3: Containment (Security Team)

  • Isolate affected systems to prevent spread
  • Reset compromised credentials
  • Review activity logs for unauthorized access
  • Identify what data or systems may be affected

Step 4: Investigation (Security Team + SOC)

  • Forensic analysis to understand attack vector and scope
  • Identify all affected systems and accounts
  • Document timeline and attacker actions
  • Assess damage and data exposure

Step 5: Recovery (IT + Security)

  • Clean or reimage affected devices
  • Restore access to legitimate users
  • Implement additional monitoring
  • Verify attacker has been fully removed

Step 6: Post-Incident (Leadership + Security)

  • Document lessons learned
  • Update security controls if needed
  • Conduct additional training if patterns emerge
  • Regulatory or client notifications if required

Your involvement: Primarily Steps 1-2 and 5. Report it, cooperate with investigation, get your systems back. Most incidents resolve within hours.
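For teams that track these stages in a ticketing tool, the lifecycle above can be modeled as a simple ordered state machine. The sketch below is purely illustrative (the `Incident` class and stage names are hypothetical, mirroring Steps 1-6, not any particular platform's API):

```python
from dataclasses import dataclass, field
from enum import Enum


class Stage(Enum):
    """The six stages described above, in declaration order."""
    REPORT = "Initial Report (You)"
    TRIAGE = "Immediate Response (Service Desk)"
    CONTAINMENT = "Containment (Security Team)"
    INVESTIGATION = "Investigation (Security Team + SOC)"
    RECOVERY = "Recovery (IT + Security)"
    POST_INCIDENT = "Post-Incident (Leadership + Security)"


STAGES = list(Stage)  # Enum preserves declaration order


@dataclass
class Incident:
    summary: str
    stage: Stage = Stage.REPORT
    notes: list = field(default_factory=list)

    def advance(self, note: str = "") -> Stage:
        """Move to the next stage; refuse to advance a closed incident."""
        idx = STAGES.index(self.stage)
        if idx == len(STAGES) - 1:
            raise ValueError("Incident already in post-incident review")
        if note:
            self.notes.append(note)
        self.stage = STAGES[idx + 1]
        return self.stage
```

The point of the enum ordering is that an incident can only move forward through the same sequence every time, which is what makes response times measurable and comparable.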

The Reporting Methods That Actually Work

Primary method: Phone call

  • Fastest response time
  • Allows real-time questions and clarification
  • Creates immediate ticket in tracking system
  • Spoken conversation often captures details that written reports miss

Secondary method: Email to service desk

  • Works for less urgent issues
  • Provides written documentation
  • Useful if you need to include screenshots
  • May have slower response time

Emergency method: Direct contact to security team

  • For critical incidents outside business hours
  • When service desk is unavailable
  • If you suspect active ongoing attack
  • Use organizational emergency contact procedures

Built-in tools: “Report Phishing” button

  • Available in most email clients
  • Fastest way to report suspicious emails
  • Automatically forwards to security team with headers intact
  • One-click reporting removes friction
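Under the hood, report-phishing tools typically wrap the original message as an attachment rather than forwarding it inline, which is what keeps the full headers intact for analysts. A minimal sketch with Python's standard `email` library (the `SECURITY_INBOX` address and `report_phishing` helper are illustrative assumptions, not any real tool's API):

```python
from email import message_from_binary_file
from email.message import EmailMessage
from email.policy import default

SECURITY_INBOX = "phishing-reports@example.org"  # assumed address


def report_phishing(eml_path: str, reporter: str) -> EmailMessage:
    """Wrap a suspicious email as a message/rfc822 attachment so its
    original headers (Received chain, From, etc.) survive unmodified."""
    with open(eml_path, "rb") as f:
        suspicious = message_from_binary_file(f, policy=default)

    report = EmailMessage()
    report["From"] = reporter
    report["To"] = SECURITY_INBOX
    report["Subject"] = "Suspicious email report: " + (suspicious["Subject"] or "(no subject)")
    report.set_content("Reporting a suspicious email; original attached with full headers.")
    # Attaching a Message object produces a message/rfc822 part automatically
    report.add_attachment(suspicious)
    return report
```

Sending the resulting message would go through `smtplib` in the usual way; the key design choice is the `message/rfc822` attachment, since inline forwarding rewrites headers and destroys exactly the evidence investigators need.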

Classifying Information: What’s Actually at Risk

Not all information exposure carries equal risk. Understanding classification helps prioritize response:

Unclassified / Public Information

  • Already publicly available
  • Can be freely shared
  • Exposure creates no significant risk
  • Examples: Marketing materials, public website content, published reports

Internal / Confidential Information

  • Intended for internal use only
  • Would create competitive disadvantage if exposed
  • Requires authorization to access
  • Examples: Internal procedures, draft documents, budget details

Strictly Confidential / Restricted Information

  • Highest sensitivity
  • Significant harm if exposed
  • Legal or regulatory obligations for protection
  • Examples: Client data, personnel records, financial information, strategic plans

Personally Identifiable Information (PII)

  • Any data that identifies specific individuals
  • Special regulatory protections
  • Breach notification requirements may apply
  • Examples: Social security numbers, financial account details, healthcare information

When reporting incidents, knowing what type of information may have been exposed helps security teams prioritize response and determine notification obligations.
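As a rough illustration of how classification drives triage, one can rank the levels above and key the response on the most sensitive class of data exposed. This is a hedged sketch: the priority labels and notification flags are assumptions for illustration, since actual obligations depend on jurisdiction and contract:

```python
from enum import IntEnum


class Classification(IntEnum):
    """Ordered by sensitivity, matching the levels described above."""
    PUBLIC = 1
    INTERNAL = 2
    RESTRICTED = 3
    PII = 4  # ranked highest here because of regulatory notification duties


# Illustrative mapping: (response priority, external notification likely?)
RESPONSE = {
    Classification.PUBLIC: ("low", False),
    Classification.INTERNAL: ("medium", False),
    Classification.RESTRICTED: ("high", True),
    Classification.PII: ("critical", True),
}


def triage(exposed: set[Classification]) -> tuple[str, bool]:
    """Priority is driven by the most sensitive class of data exposed."""
    worst = max(exposed)
    return RESPONSE[worst]
```

The design point is the `max()`: an incident touching both public marketing material and a single file of personnel records is a PII incident, full stop.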

The Incidents Everyone Hesitates to Report

Some incidents feel too minor to report. Others feel too embarrassing. Here are the ones people commonly delay reporting—and why they shouldn’t:

“I clicked a link but nothing seemed to happen”
Modern malware often operates silently. No visible symptoms doesn’t mean no infection. Report it immediately.

“I sent confidential information to the wrong person, but I asked them to delete it”
You can’t verify deletion. The information is now outside your control. Must be reported.

“I lost my laptop, but it was probably stolen from my car, not targeted”
Doesn’t matter. The device and its data are compromised. Report immediately.

“I think someone might have seen my password when I typed it”
Change it immediately and report the exposure. Better safe than assuming they didn’t notice.

“I accidentally uploaded confidential data to ChatGPT but I closed the window”
The data was transmitted and may have been retained. Must be reported for data exposure assessment.

“My account logged in from a strange location, but maybe I forgot I accessed it while traveling”
If you’re uncertain whether you authorized an access, report it. We can quickly verify.

The Response You’ll Actually Get

Based on hundreds of incident reports I’ve handled, here’s the reality of how security teams respond:

For minor incidents / false positives (60% of reports):

  • “Thanks for reporting. This email was actually legitimate / This was expected behavior.”
  • Brief explanation of why it wasn’t a threat
  • Documentation in case of future similar reports
  • Appreciation for vigilance
  • No further action required

For potential incidents requiring investigation (30% of reports):

  • “We’re going to reset your password as precaution and review your account activity.”
  • Brief disruption (15-30 minutes typically)
  • Follow-up confirmation that no damage occurred
  • Sometimes additional monitoring for 24-48 hours
  • Back to normal quickly

For confirmed incidents requiring response (10% of reports):

  • “We’ve confirmed unauthorized access. Here’s what we’re doing and what you need to do.”
  • Password resets, device reimaging, access restrictions
  • May involve management notifications
  • Focused on containment and recovery, not blame
  • Post-incident discussion to prevent recurrence

In every case:

  • Respectful, professional interaction
  • Focus on resolving issue, not assigning fault
  • Appreciation for reporting
  • Clear communication about next steps
  • Documentation for organizational learning

Nobody gets fired for clicking phishing links and reporting them. People get fired for clicking phishing links and hiding them until they cause massive breaches.

My Personal Incident Response Experience

I’m not immune to mistakes. Two years ago, I nearly compromised my own credentials through a sophisticated targeted phishing attack.

The email appeared to come from our parent organization’s security team, warning of suspicious activity on my account. The landing page was pixel-perfect—correct branding, valid SSL certificate, professional layout.

I started entering my credentials. Got to the password field. Something felt wrong—couldn’t articulate what, just a feeling.

I stopped, closed the browser, and reported it to our security team.

Investigation revealed a targeted campaign against security professionals specifically. My credentials were never compromised because I reported before completing the attack. Three colleagues received similar emails; two completed credential entry before realizing the issue.

All three reported immediately. All three had credentials reset within minutes. No unauthorized access occurred.

The lesson: Even security professionals fall for sophisticated attacks. The difference between a minor incident and a major breach is how quickly you report it.

The Cultural Shift We Need

Organizations with strong security cultures share common characteristics:

Normalize reporting: Incidents are expected, normal occurrences. Reporting is routine procedure, not emergency escalation.

Eliminate blame: Focus on resolution and learning, not punishment. Mistakes happen; hiding them is the actual problem.

Reward reporting: Public recognition for people who report incidents quickly. Make it clear this is desired behavior.

Fast response: When people report incidents, they get immediate acknowledgment and rapid response. This reinforces that reporting is the right choice.

Transparency: Share incident statistics (anonymized) to show reporting frequency and outcomes. Demystify the process.

Leadership modeling: When leaders make mistakes and report them publicly, it creates permission for everyone else to do the same.

Organizations with weak security cultures do the opposite: treat incidents as failures, punish mistakes, create fear around reporting, respond slowly or dismissively, hide incident data, and expect perfection from employees.

Guess which organizations suffer worse breaches?

The Practical Reality Check

Let me be completely honest about incident reporting:

Will it disrupt your day? Maybe briefly. Most reports resolve in minutes. Serious incidents may require a few hours.

Will people know you made a mistake? Only those directly involved in response. We don’t broadcast who clicked what.

Will it affect your performance review? No. Unless you develop a pattern of deliberately ignoring security policies.

Will it take a lot of your time? Initial report: 5-10 minutes. Follow-up if needed: 30-60 minutes typically. Full investigation participation: rare, usually an hour or two maximum.

Is it really necessary for minor things? Yes. What seems minor to you might be the first indicator of a larger campaign. We need the data points.

What if I’m wrong and it’s not actually an incident? Then we quickly confirm that and move on. False positives are completely acceptable and actually helpful—they show people are vigilant.

What Separates Organizations That Survive Breaches From Those That Don’t

After analyzing post-breach reports across multiple organizations, clear patterns emerge:

Organizations with contained damage:

  • Incidents reported within minutes to hours
  • Established reporting procedures, widely known
  • Security team resourced to respond rapidly
  • Culture that encourages reporting
  • Technical controls that limit blast radius
  • Regular training on incident recognition and reporting

Organizations with catastrophic damage:

  • Incidents reported days or weeks later (or discovered externally)
  • Unclear reporting procedures, staff unsure who to contact
  • Security team understaffed, slow response times
  • Culture of blame that discourages reporting
  • Flat networks with minimal segmentation
  • Infrequent training, focus on compliance rather than capability

The difference isn’t technical sophistication—it’s organizational culture around incident response.

Your Incident Response Checklist

Save this. Screenshot it. Print it. When something goes wrong, follow these steps:

Immediate Actions (Within Minutes):

  1. Stop what you’re doing
  2. Don’t try to “fix” it—preserve evidence
  3. Call service desk / IT support immediately
  4. Don’t shut down device—disconnect network if instructed

During Report:

  1. Explain what happened clearly and completely
  2. Provide specific times and actions
  3. Don’t minimize or exaggerate
  4. Answer questions honestly
  5. Follow instructions exactly

After Report:

  1. Document what happened while memory is fresh
  2. Change passwords if instructed
  3. Monitor accounts for unusual activity
  4. Cooperate with any investigation
  5. Learn from the experience

Never:

  • Delay reporting hoping problem resolves itself
  • Try to hide mistakes
  • Ignore suspicious activity because you’re busy
  • Assume “nothing happened” because you don’t see symptoms
  • Feel guilty—everyone makes mistakes; reporting quickly is what matters

The Question I Get Asked Most

“What’s the worst thing that can happen if I don’t report an incident?”

Real examples from my career:

Case 1: Employee clicked phishing link, didn’t report. Attacker accessed email for three weeks, harvested credentials for 200 colleagues, deployed ransomware. Company paid $2.4M ransom and still lost 40% of its data. Multiple clients terminated contracts.

Case 2: Employee lost laptop, didn’t report (assumed it was misplaced, would turn up). Laptop contained unencrypted client data. Found it listed for sale online with data already extracted. Regulatory fines: $1.8M. Client lawsuits: ongoing.

Case 3: Employee accidentally sent confidential document to wrong recipient, didn’t report (asked recipient to delete it). Recipient was journalist. Document published. CEO resigned. Stock dropped 23%.

These aren’t hypotheticals—they’re real incidents where delayed or absent reporting enabled catastrophic outcomes.

Compare those to the hundreds of incidents reported immediately that resulted in zero data loss, zero financial impact, zero consequences for the reporter.

The math is clear.
