
Digital forensics after a cyberattack — how to secure evidence and reconstruct the incident

After a breach, what you do in the first hours determines everything. Learn how to conduct digital forensics, preserve chain of custody, and reconstruct the attack.

The SIEM fires. On a Monday morning, a SOC analyst sees an avalanche of alerts — disk encryption across 40 servers, lateral movement throughout the network, data exfiltration to an external IP. The organisation is under an active attack. First instincts in such situations can be ruinous: shut down the servers, apply the patch, run a restore from backup. These are decisions that — made too quickly and without a plan — destroy evidence, make it impossible to determine the root cause, and open the door to the next attack.

In moments like these, digital forensics is not a luxury — it is a necessity. It is systematic, methodical work that answers the questions: what exactly happened, when, how did the attackers enter the network, what were they able to do, and how long did they go undetected. Without this knowledge, every organisation — even one that technically restores its systems — remains vulnerable to a repeat. In this article I describe a practical approach to digital forensics after a security incident — from the first steps through to delivering a report for the regulator and the insurer.

What is digital forensics and why is it critical after a security incident?

Digital forensics is a discipline combining computer investigation with legal procedures — the systematic collection, preservation, analysis, and presentation of digital evidence in a manner that maintains its integrity and evidentiary value. After a cyberattack, forensics serves three key roles: technical investigation (what happened and how), legal support (evidence in case of proceedings), and the foundation for hardening (how to prevent the next incident).

Without forensic analysis, an organisation operates in the dark. It knows it was attacked, but does not know: since when, through what vector, what the attacker seized, what data was exfiltrated, and whether the adversary is still in the network. This is not an academic problem — it is a real barrier to returning to safe operations. Restoring a system from backup without forensics can mean restoring a system with an active backdoor.

The term DFIR (Digital Forensics and Incident Response) combines two closely related disciplines. Incident Response focuses on immediately containing damage and restoring operations. Digital Forensics focuses on deep analysis: collecting artifacts, reconstructing the timeline, identifying attacker techniques (TTPs). In practice, both disciplines intertwine — good IR requires forensics, and forensics informs IR decisions.

From a regulatory standpoint, forensics has become an obligation rather than a choice. The NIS2 Directive, the Cybersecurity Act (KSC), GDPR — all impose obligations to document incidents, investigate their causes, and report to the appropriate authorities. The absence of a documented forensic process is not only an operational gap, but a potential legal liability towards the regulator and — increasingly — towards the cyber insurer.

Key principle: Digital forensics must be started as early as possible after detecting an incident — but never at the cost of preserving the integrity of evidence. Haste without procedure destroys more than it saves.

Finally, forensics has a strategic dimension: establishing the full course of the attack — all compromised systems, credentials used, persistence techniques, and exfiltrated data — is the only solid foundation for a remediation effort that will truly close the gap. Organisations that skip forensics and limit themselves to technical cleanup frequently suffer repeat attacks through the same vectors within a few months.


How to secure digital evidence — chain of custody and data integrity?

The first and most important principle of forensics is chain of custody. This is the documented history of each evidentiary artifact: when it was collected, by whom, how it was stored, and who had access to it. Without chain of custody, even perfectly collected evidence loses its legal value. Any break in the continuity of the chain can be challenged by a defence attorney or questioned by a regulator.
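As a minimal illustration of what a chain-of-custody register captures, the sketch below models an evidence item with an append-only handling history. The field names, identifiers, and entries are hypothetical, chosen only to show the idea.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CustodyEvent:
    timestamp: str  # ISO 8601, UTC
    actor: str      # who handled the evidence
    action: str     # e.g. "acquired", "transferred", "analysed"
    details: str

@dataclass
class EvidenceItem:
    item_id: str
    description: str
    sha256: str  # hash of the artifact recorded at acquisition time
    history: list = field(default_factory=list)

    def log(self, actor: str, action: str, details: str = "") -> None:
        # Append-only: custody events are added, never edited or removed.
        self.history.append(CustodyEvent(
            timestamp=datetime.now(timezone.utc).isoformat(),
            actor=actor, action=action, details=details))

# Hypothetical example: registering a disk image and its handling history.
item = EvidenceItem("EV-2024-001", "Forensic image, server SRV-01, disk 0",
                    "0" * 64)  # placeholder; in practice the SHA-256 from acquisition
item.log("J. Nowak", "acquired", "FTK Imager, E01 format, hardware write blocker")
item.log("J. Nowak", "transferred", "Evidence locker, seal no. 0042")
```

A real register would additionally be stored outside the systems under investigation, so that a compromise cannot alter the custody record itself.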

In practice, the forensic chain of custody begins with the first contact with the system under investigation. Every action taken by the analyst must be documented to the minute: logging into the system, commands executed, data retrieved, tools used. Tools such as FTK Imager, Autopsy, and Velociraptor automatically generate activity logs, which significantly facilitates documentation. But every manual action as well — physically gaining access to a server, disconnecting a disk, photographing a screen — requires a manual entry in the evidence register.

Securing forensic data rests on two principles: collect first, analyse later, and never work on the original. A forensic disk image is a bit-for-bit copy of the storage medium, containing every sector, including deleted files, free space, and file system metadata. The standard is the E01 (Expert Witness) format or raw/dd, accompanied by MD5 and SHA-256 hashes that verify integrity. If the hash of the image does not match the hash of the original, the image is useless as evidence.
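The integrity check described above can be sketched in a few lines. Chunked reading keeps memory use constant even for multi-terabyte images; the small temporary file here is only a stand-in for a real E01/raw image.

```python
import hashlib
import os
import tempfile

def hash_image(path: str, chunk_size: int = 1 << 20) -> dict:
    """Compute MD5 and SHA-256 of an image file in a single pass."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            md5.update(chunk)
            sha256.update(chunk)
    return {"md5": md5.hexdigest(), "sha256": sha256.hexdigest()}

# Stand-in for image data: 4 KiB of zero bytes in a temporary file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 4096)
    path = f.name

acquired = hash_image(path)       # hashes recorded at acquisition time
working_copy = hash_image(path)   # in reality, the analyst's working copy
assert acquired == working_copy, "image integrity check failed"
os.unlink(path)
```

Both hashes are typically computed and recorded in the chain-of-custody documentation; re-verifying them before each analysis session proves the working copy has not drifted from the original.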

Read-only principle: NEVER mount the original disk in read-write mode. Use hardware write blockers or write-blocking software. Mounting a file system without a write blocker alters metadata and timestamps — irreversibly.

Another critical artifact is RAM. Data in RAM is volatile — it disappears when power is cut. In RAM you can find: running processes (including those deleted from disk), encryption keys, passwords in plaintext, network connections, artifacts of malware operating exclusively in memory (fileless malware). Tools for RAM acquisition include WinPMEM, DumpIt, and Magnet RAM Capture — they perform a memory dump without shutting down the system. The size of the dump corresponds to the installed RAM, so on a server with 256 GB of RAM, be prepared for a file of that size.

The chronology of steps when securing evidence should look as follows: first, documentation of the state (screenshots, photographs, notes), then RAM acquisition, then disk imaging, and finally the preservation of network and system logs. This order reflects data volatility: RAM disappears when power is cut, network logs can be overwritten by subsequent traffic, a disk image can always be made — but the sooner, the less data will have been overwritten by the running system.

What system artifacts does a forensic team analyse — logs, RAM, disks?

Forensic artifacts are traces of activity left by the system, users, and processes — both legitimate and malicious. A good forensic analyst knows where to look and what to interpret. In a Windows system the list of artifacts is exceptionally rich; Linux offers different sets of traces; the cloud introduces its own category of logs.

Windows Event Logs are the primary source of information about system activity. Key channels: Security (logons, privilege changes, object access — Event IDs 4624, 4625, 4648, 4688, 4698, 4720), System (service starts, system errors), Application (application activity), PowerShell/Operational, and Microsoft-Windows-Sysmon (if Sysmon is installed — a goldmine of artifacts: process creation, network connections, file creation). Event Log ID 4624 with logon type 3 or 10 often indicates lateral movement. ID 4688 with command line logging enabled lets you see every command executed on the system.
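To illustrate the lateral-movement filter described above, here is a sketch over pre-parsed event records, for example events exported to JSON by an EVTX parser. The field names and sample values are illustrative, not the raw EVTX schema.

```python
# Hypothetical pre-parsed Security log events.
events = [
    {"event_id": 4624, "logon_type": 10, "src_ip": "10.0.0.5",  "account": "admin"},
    {"event_id": 4624, "logon_type": 2,  "src_ip": "-",         "account": "jsmith"},
    {"event_id": 4624, "logon_type": 3,  "src_ip": "10.0.0.5",  "account": "svc_backup"},
    {"event_id": 4625, "logon_type": 3,  "src_ip": "10.0.0.99", "account": "admin"},
]

# Logon types 3 (network) and 10 (RemoteInteractive/RDP) on successful
# 4624 events are the classic lateral-movement signal described above.
suspects = [e for e in events
            if e["event_id"] == 4624 and e["logon_type"] in (3, 10)]

for e in suspects:
    print(f"{e['account']} <- {e['src_ip']} (logon type {e['logon_type']})")
```

In practice this filter is the starting point, not the verdict: type 3 logons also cover legitimate file-share and service traffic, so the hits are correlated with source hosts, accounts, and times of day.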

Prefetch and Shimcache/Amcache are Windows artifacts recording the history of program executions. Prefetch (the C:\Windows\Prefetch directory) contains .pf files for each executed program, with a list of files and libraries it accessed and the date of its last execution. Even if an attacker deleted their tools, Prefetch may retain evidence of their existence. Amcache.hve (Windows 8+) records metadata of executed applications, including the SHA-1 hash of the executable — invaluable in malware analysis.

The Registry is one of the richest sources of Windows artifacts. Key locations: Run/RunOnce (persistence), UserAssist (user activity history, ROT-13 encoded), ShimCache in the SYSTEM hive, MRU (Most Recently Used) lists of files and commands, and BAM/DAM (Background Activity Moderator, Windows 10+, which records process execution even after a restart). The Registry Explorer tool (Eric Zimmerman) greatly simplifies registry exploration.

Eric Zimmerman's tools (EZTools), a set of free Windows forensic utilities, are the de facto standard for analysing Windows artifacts: Timeline Explorer, RECmd, MFTECmd, PECmd (Prefetch), AppCompatCacheParser. In practice, Windows forensics without EZTools is like cooking without a knife.

The Master File Table (MFT) in the NTFS file system contains an entry for every file and directory on the volume — with timestamp metadata (Created, Modified, Accessed, Entry Modified — the so-called MACE timestamps) and information about the file’s fragments on disk. MFT analysis reveals deleted files (MFT entries persist after deletion until overwritten), timestamp anomalies (timestamp stomping — an attacker technique involving manipulation of file dates), and the complete file system structure at a given point in time. MFTECmd from the EZTools suite parses the MFT to CSV for import into Timeline Explorer.
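One classic (though not the only) heuristic for the timestamp stomping mentioned above compares the $STANDARD_INFORMATION (SI) and $FILE_NAME (FN) attributes: stomping tools usually alter only SI, so an SI creation time earlier than the FN creation time is an anomaly worth investigating. The records below are invented, shaped like the per-file output an MFT parser such as MFTECmd produces.

```python
from datetime import datetime

# Hypothetical MFT records with both SI and FN created timestamps.
records = [
    {"name": "report.docx", "si_created": datetime(2024, 3, 1, 9, 0),
     "fn_created": datetime(2024, 3, 1, 9, 0)},   # consistent, benign
    {"name": "mimi.exe",    "si_created": datetime(2019, 1, 1, 0, 0),
     "fn_created": datetime(2024, 3, 2, 2, 13)},  # SI backdated by an attacker
]

# SI created earlier than FN created: candidate for timestamp stomping.
stomped = [r["name"] for r in records if r["si_created"] < r["fn_created"]]
print(stomped)  # ['mimi.exe']
```

Hits still require manual review: legitimate operations (e.g. file copies, installers restoring original timestamps) can produce SI/FN divergence as well.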

Network logs and PCAP provide information about systems’ communication with their environment. Firewall and IDS/IPS logs record incoming and outgoing connections with IP addresses, ports, and transfer volumes. Full network traffic capture (PCAP) enables reconstruction of communications — including potential extraction of data transmitted by attackers, provided it was not encrypted. In practice, organisations rarely retain full PCAPs due to volume, but NetFlow metadata (source, destination, port, time, volume) is typically available and sufficient to identify anomalies.

Linux system logs concentrate in several locations: /var/log/auth.log or /var/log/secure (authentication and sudo), /var/log/syslog or /var/log/messages (system events), /var/log/nginx/ or /var/log/apache2/ (web servers), bash_history (command history — but attackers often set HISTSIZE=0 or delete history), cron (scheduled tasks — a popular persistence vector), /etc/passwd and /etc/shadow (user accounts). The ausearch/auditd tool provides detailed logs of system calls if auditd was configured.
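A quick triage of failed SSH logins in auth.log can be sketched with a regular expression. The log lines below are synthetic samples in the standard sshd format; a real analysis would also correlate the subsequent "Accepted" events from the same addresses.

```python
import re
from collections import Counter

log_lines = [
    "Mar  4 02:11:01 srv sshd[911]: Failed password for root from 203.0.113.7 port 52411 ssh2",
    "Mar  4 02:11:04 srv sshd[913]: Failed password for invalid user admin from 203.0.113.7 port 52414 ssh2",
    "Mar  4 08:30:12 srv sshd[1201]: Accepted publickey for deploy from 10.0.0.12 port 40022 ssh2",
]

# "invalid user" appears when the account does not exist; make it optional
# so both variants of the failure line are captured.
pattern = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

failures = Counter()
for line in log_lines:
    if m := pattern.search(line):
        user, ip = m.groups()
        failures[ip] += 1

print(failures.most_common())  # [('203.0.113.7', 2)]
```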

How to reconstruct the attack timeline step by step?

Reconstructing the attack timeline is the central objective of forensic analysis — the answer to the question “what exactly happened and when.” A good timeline combines artifacts from multiple sources into a chronological narrative that shows every step of the attacker: from the first entry into the network, through lateral movement, persistence, and exfiltration, to the moment of detection.

The first step is time normalisation. It looks like a minor technical detail, but in practice it is a frequent source of serious errors. Different systems record time in different time zones, with different precision, and with potential clock drift. Windows stores event timestamps in UTC, but Event Viewer displays them in the local time zone. A firewall may record local time without time zone information. The analyst must convert all timestamps to a single time zone (preferably UTC) before building the timeline. A discrepancy of one or two hours caused by incorrect time zone conversion can completely change the interpretation of the order of events.
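The normalisation step can be sketched with the standard library alone. Europe/Warsaw is used here as an example source zone for a firewall that logs naive local times; the timestamps themselves are invented.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def to_utc(local_str: str, tz_name: str) -> datetime:
    """Interpret a naive timestamp string in the given zone, convert to UTC."""
    naive = datetime.fromisoformat(local_str)
    return naive.replace(tzinfo=ZoneInfo(tz_name)).astimezone(timezone.utc)

# A firewall logging Warsaw local time vs a Windows server logging UTC:
fw = to_utc("2024-03-04 14:05:00", "Europe/Warsaw")  # CET = UTC+1 in early March
win = datetime(2024, 3, 4, 13, 5, tzinfo=timezone.utc)

assert fw == win  # the same instant once both are normalised to UTC
```

Note that zoneinfo handles DST transitions automatically; the same wall-clock time in July would map to a different UTC instant, which is exactly the class of error manual offset arithmetic invites.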

A super timeline (sometimes called a mega timeline) aggregates thousands of artifacts from multiple sources into a single chronological file. The log2timeline/plaso tool automates this process: it extracts artifacts from a disk image, logs, and a RAM dump, and generates a .plaso storage file that can be filtered and exported with psort or imported into tools such as Timeline Explorer or Timesketch. A typical super timeline for a Windows system contains tens or hundreds of thousands of entries; the analyst does not read them sequentially but filters by time window, artifact type, and keywords.
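On a toy scale, merging normalised entries from several sources into one chronological view looks like this. The entries are invented; the useful property is that ISO 8601 UTC timestamps sort correctly as plain strings, which is why normalisation must come first.

```python
# Hypothetical normalised (timestamp, source, message) entries; a real
# super timeline from plaso has the same shape at vastly larger scale.
evtx     = [("2024-03-04T02:13:40Z", "evtx",     "4624 type 3 logon, admin from 10.0.0.5")]
prefetch = [("2024-03-04T02:14:02Z", "prefetch", "PSEXESVC.EXE executed")]
fw       = [("2024-03-04T02:12:58Z", "firewall", "allow 10.0.0.5 -> 10.0.0.20:445")]

# ISO 8601 strings in the same zone sort chronologically as text.
timeline = sorted(evtx + prefetch + fw, key=lambda e: e[0])

for ts, source, msg in timeline:
    print(ts, source.ljust(8), msg)
```

Even in this three-line example the merged view tells a story the individual logs do not: a firewall-permitted SMB connection, followed by a network logon, followed by PsExec execution.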

In forensic practice I always build the timeline backwards from the moment of detection, not forwards. We know when the incident was detected. I look for the "first trace": the earliest artifact associated with the attacker. Then I fill the gap between the first trace and detection. This gives a complete picture of the dwell time, the period during which the attacker operated undetected.

Identifying Patient Zero — the first system that was compromised — is typically the most difficult element of timeline reconstruction. It requires correlation between systems: if lateral movement went from system A to B to C, I trace back along the path to the point of entry. Techniques such as analysis of Event IDs 4624/4648 (network logons), analysis of PsExec/WMI/RDP/SMB connections, and analysis of parent-child processes (process tree) are key here. Sysmon Event ID 3 (Network Connection) is invaluable for this purpose — it records every network connection with information about the initiating process.
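The backtracking idea can be sketched over a simplified edge list. Real data forms a graph with timestamps, multiple paths, and noise, so this linear walk captures only the core intuition; the host names and edges are invented.

```python
# Edges (source_host, target_host) reconstructed from 4624/4648 events
# and Sysmon network connections. Hypothetical example data.
logons = [("WS-07", "SRV-FILE"), ("SRV-FILE", "SRV-DB"), ("SRV-DB", "SRV-BACKUP")]

def trace_back(start: str, edges) -> str:
    """Follow lateral-movement edges backwards from a known-compromised
    host to the earliest host with no inbound edge: the Patient Zero
    candidate. Assumes a single acyclic inbound path per host."""
    inbound = {dst: src for src, dst in edges}
    host = start
    while host in inbound:
        host = inbound[host]
    return host

print(trace_back("SRV-BACKUP", logons))  # WS-07
```

In a real investigation each backwards step is validated against timestamps (the inbound logon must precede the outbound one) before the edge is accepted into the path.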

Analysis of MITRE ATT&CK techniques on the timeline provides a framework for interpretation. Instead of merely recording “command X was executed,” the analyst classifies each action according to ATT&CK techniques: Initial Access (T1190 — Exploit Public-Facing Application?), Execution (T1059 — Command and Scripting Interpreter?), Persistence (T1053 — Scheduled Task/Job?), Lateral Movement (T1021 — Remote Services?). This classification creates an “attack map” and allows identification of missing elements — stages for which no artifacts were found (which does not necessarily mean they do not exist — it may mean that the tooling coverage does not include that particular technique).

Timeline visualisation is critical for communicating findings. Timesketch (Google) is an open-source platform for timeline analysis and visualisation that allows multiple analysts to work simultaneously on the same data set and add comments and annotations. For smaller incidents, Timeline Explorer from the EZTools suite suffices: an Excel-like interface with advanced filtering. The final report should include the timeline in both graphical form (a time axis) and tabular form (artifact, time, system, interpretation), readable for both technicians and management.

What forensic tools do professional DFIR teams use?

A professional forensic toolkit combines commercial and open-source tools, selected according to the incident context. There is no single "best tool"; each has its own strengths and applications. In DFIR practice I use a tool set matched to the phase of analysis: acquisition, disk analysis, memory analysis, network analysis, and malware analysis.

Acquisition and imaging: FTK Imager (AccessData/Exterro) is the standard for creating disk images and RAM acquisition. Free, with a graphical interface and CLI. Supports E01, AFF4, and raw/dd formats. Magnet ACQUIRE is an alternative, particularly good for mobile devices and the cloud. For enterprise environments and remote acquisition, Velociraptor (open-source) allows artifacts to be collected from hundreds of systems simultaneously without requiring physical access — this is a revolution in the scale and speed of DFIR.

Disk image analysis: Autopsy (open-source, Brian Carrier) is a graphical front end for The Sleuth Kit, offering file system analysis, keyword search, browser history recovery, registry analysis, and much more. For advanced users, the full commercial FTK (Exterro) or X-Ways Forensics offer more advanced capabilities. KAPE (Kroll Artifact Parser and Extractor) is a tool for rapidly collecting and parsing specific Windows artifacts, invaluable when you need to quickly gather Event Logs, Prefetch, and Registry from a live system.

RAM analysis: Volatility Framework (open-source) is the de facto standard for analysing memory dumps. Volatility 3 supports Windows, Linux, and macOS, with hundreds of plugins: pslist/pstree (process list), netscan (network connections), filescan (files open in memory), malfind (detection of hidden or injected code regions), dlllist (libraries loaded by processes). Rekall (originally a Google project, since discontinued) offered similar capabilities with good support for memory profiles.

The Volatility plugin “malfind” scans process memory for regions with execution flags (Execute+Write) without a corresponding file on disk — a classic signal of process injection or shellcode. In practice, every process in the malfind output requires manual verification, but it is an excellent starting point for fileless malware analysis.

Log and timeline analysis: Plaso/log2timeline generates a super timeline from multiple sources. Timeline Explorer (EZTools) for visualisation and filtering of CSV. Timesketch for collaborative analysis and visualisation. Chainsaw (open-source, WithSecure) scans Windows Event Logs for known attack patterns based on Sigma rules and IoC detection — ideal for rapid triage of large numbers of logs. Hayabusa (open-source, Yamato Security) is a similar tool with an even larger Sigma rule base and more efficient parsing.

Malware analysis: Any.run, Cuckoo Sandbox, CAPE Sandbox — environments for safe dynamic analysis (running a suspicious file in an isolated environment and observing behaviour). FLOSS (FLARE Obfuscated String Solver, Mandiant) automatically extracts encoded and obfuscated strings from binary files. IDA Pro, Ghidra (open-source, NSA), and Binary Ninja are disassemblers/decompilers for deep static analysis of binary code. VirusTotal aggregates results from dozens of antivirus engines and offers a sandbox — ideal for a quick initial assessment of a suspicious file.

Other open-source tools worth knowing: Eric Zimmerman Tools (EZTools), a complete set of Windows artifact parsers (free and invaluable). CyberChef, the "Swiss Army knife" for decoding, conversion, and data analysis. YARA, a rule language for identifying patterns in files and memory (an industry standard). Sigma, a rule language for threat detection in logs (portable rules that work across multiple SIEMs). strings, binwalk, and hexdump, basic CLI tools for binary file analysis.
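As a toy illustration of what a YARA string rule does, the sketch below reproduces only the matching idea in Python. This is not YARA syntax: real YARA adds conditions, hex patterns, wildcards, modifiers, and process memory scanning.

```python
# A "rule" here is just a name mapped to byte patterns; the sample data
# and pattern names are invented for illustration.
iocs = {"suspicious_strings": [b"Invoke-Mimikatz", b"sekurlsa::logonpasswords"]}

def scan(data: bytes, rules: dict) -> list:
    """Return the names of all rules with at least one matching pattern."""
    return [name for name, patterns in rules.items()
            if any(p in data for p in patterns)]

sample = b"...powershell -enc ...Invoke-Mimikatz..."
print(scan(sample, iocs))  # ['suspicious_strings']
```

The value of real YARA over ad hoc scripts like this is portability: the same rule file runs against files, memory dumps, and (via integrations) EDR telemetry across the whole environment.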

How to conduct forensics in a cloud environment (AWS, Azure)?

Cloud forensics is a new reality that every DFIR team faces. The traditional approach of “plug in the disk and make an image” does not work in AWS or Azure — there are no physical disks to plug in. The cloud introduces its own acquisition models, its own artifacts, and its own constraints.

AWS Forensics: AWS artifacts fall into several categories. Management and audit logs: CloudTrail records every AWS API call — console logins, instance launches, IAM permission changes, S3 access. CloudTrail is absolutely critical for AWS forensics and should be enabled in all regions with a retention period of at least 90 days. VPC Flow Logs record network traffic between EC2 instances (they do not contain packet content, but metadata: source, destination, port, protocol, volume). GuardDuty is a managed threat detection service — it generates alerts about suspicious activity and is a valuable source of forensic signals.
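Filtering CloudTrail records for failed console logins can be sketched as below. The record set is trimmed and invented; real records are pulled from the S3 log bucket or the LookupEvents API and carry many more fields, but the nested JSON shape is the same.

```python
import json

# Trimmed, illustrative CloudTrail-style records.
raw = json.dumps({"Records": [
    {"eventName": "ConsoleLogin", "sourceIPAddress": "198.51.100.3",
     "responseElements": {"ConsoleLogin": "Failure"}},
    {"eventName": "RunInstances", "sourceIPAddress": "10.0.0.9",
     "responseElements": None},
    {"eventName": "ConsoleLogin", "sourceIPAddress": "198.51.100.3",
     "responseElements": {"ConsoleLogin": "Success"}},
]})

records = json.loads(raw)["Records"]
# responseElements may be null, hence the `or {}` guard.
failed_logins = [r["sourceIPAddress"] for r in records
                 if r["eventName"] == "ConsoleLogin"
                 and (r.get("responseElements") or {}).get("ConsoleLogin") == "Failure"]
print(failed_logins)  # ['198.51.100.3']
```

A failure followed by a success from the same external address, as in this sample, is precisely the brute-force-then-compromise pattern an analyst looks for first.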

Acquisition of EC2 instances is performed via an EBS (Elastic Block Store) snapshot: I create a snapshot of the instance disk, copy it to a dedicated forensic account (isolated from the production environment), mount it as a new volume on a forensic instance, and analyse it like a traditional disk image. AWS publishes guidance and ready-made CloudFormation templates for building such an isolated forensic investigation environment. Tools such as cloud-forensics-utils (Google) and Varc (Cado Security) automate the collection of cloud artifacts.

Azure Forensics: Azure Activity Log records operations on Azure resources (the equivalent of CloudTrail). Azure AD Audit Logs and Sign-in Logs are critical for analysing identity activity — including Entra ID (formerly Azure AD) sign-in risk events, which flag suspicious logons. Microsoft Defender for Cloud generates security alerts and recommendations. Azure Monitor Logs (Log Analytics workspace) aggregates logs from multiple Azure sources.

Key difference between cloud and on-premise: in the cloud you lose full access to the hypervisor and the physical hardware layer. You cannot collect raw RAM from an EC2 instance without stopping the instance (or using specialised tools like Volexity Surge Collect). But you gain: unlimited log retention, central aggregation from multiple systems, native disk snapshotting, and an API for automating artifact collection.

Forensics in containerised environments (Kubernetes, Docker) requires a separate approach. Containers are inherently ephemeral — after a restart, traces disappear. Key sources: Kubernetes Audit Logs (which record every Kubernetes API call: pod creation, configuration changes, access to secrets), container logs (stdout/stderr) stored in the logging system (ELK, CloudWatch, Fluentd), and container images in the registry (which can be statically analysed like files). Tools such as Falco (open-source runtime detection) and Aqua Trivy (image and configuration scanning) support forensic work in containerised environments.

The Shared Responsibility Model has critical significance for forensics. In AWS/Azure you manage security “in the cloud” — applications, data, IAM configuration. The provider manages security “of the cloud” — physical infrastructure and the hypervisor. This means that some forensic artifacts are only available to the provider and can only be obtained through an official process (e.g., AWS response to a court order or subpoena). It is worth understanding in advance what you can collect yourself through the API and what requires engaging the provider.

What mistakes do organisations most commonly make after an incident — and how to avoid them?

The pressure during an active incident is enormous. Management wants to know what happened. Users cannot work. The media are asking for comment. In this atmosphere of pressure, mistakes are made that often cost more than the attack itself — both technically (loss of evidence) and legally (failure to meet reporting obligations).

Mistake no. 1: Immediate shutdown of systems. The reaction of "turn everything off to stop the attack" is understandable but often catastrophic for forensics. Shutting down a system destroys the contents of RAM: running processes, open network connections, encryption keys, and in-memory malware disappear. The correct approach: first network isolation (disconnection from the network without cutting power), then RAM acquisition, and only then a possible shutdown. Exception: if a system is being actively used by the attacker for harmful actions, the priority is to stop the damage, but the decision must be made consciously, with awareness of the trade-off.

Mistake no. 2: Working on originals. Analysing a system via SSH or RDP, running commands on the original server, browsing files without a write blocker — these are actions that alter artifacts and invalidate evidence. Opening any file changes its Access Time. Running any process leaves a trace in Prefetch. The correct approach: always work on copies (forensic images), use write blockers, document every action taken on the original system if it is absolutely necessary.

Mistake no. 3: Failure to document in real time. Analysts focused on analysis often defer documentation “until later.” This is a mistake: details fade from memory, the sequence of actions becomes unclear, and the chain of custody has gaps. The correct approach: document every action in real time — with timestamps, commands, results, and interpretation. A dedicated case management tool (TheHive, MISP, Jira) greatly helps in maintaining order.

The mistake I see most often: organisations consider the incident “closed” at the moment systems are restored. Forensics begins when systems are running — and ends when we fully understand what happened. Restoration without analysis is treating symptoms without curing the disease.

Mistake no. 4: Skipping forensics on “auxiliary” systems. Attackers rarely compromise only one system. Pivoting through an auxiliary file server, an administrator’s workstation, or a backup system is a standard technique. Organisations often concentrate forensics on “main” systems, overlooking network devices (routers, firewalls — they have their own logs and artifacts), user workstations (the entry point for phishing), and monitoring and backup systems (attackers often compromise these to hinder recovery).

Mistake no. 5: Communicating through compromised channels. If an attacker has access to the company’s email and Teams/Slack, communicating through those channels informs them of forensic and response activities. The correct approach: establish a secure communication channel outside the company’s domain (phone, encrypted messenger on personal devices) for the duration of the incident. Limit the number of people informed of forensic progress to the absolute minimum.

Mistake no. 6: Restoring systems from backup too quickly. If the attacker was in the network for 3 months, and the last pre-incident backups are 2 months old — those backups may contain backdoors. Restoring from an infected backup will return the organisation to the state “post-compromise, pre-ransomware” — that is, with the attacker’s active access intact. The correct approach: forensic analysis of backups before restoration, or restoration to an isolated test environment and verification of a clean state.

How to document an incident for the regulator and insurer?

A forensic report has two main audiences: technical (CSIRT, SOC, IT specialists) and formal (management, regulator, insurer, lawyer). The same incident requires two different documents — or one document with clearly separated sections for each group.

Regulators (UODO, KNF, CERT Polska, sectoral CSIRT) primarily expect: the date and time of incident detection, the scope and nature of the incident (what data was affected, how many individuals), mitigation measures taken, planned remediation actions, and — crucially — an assessment of the impact on the rights and freedoms of natural persons (under GDPR). GDPR requires reporting a personal data breach to UODO within 72 hours of detection. Failure to report or delayed reporting without justification is grounds for imposing penalties. The forensic report provides the factual material for the notification form.
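The 72-hour window is simple arithmetic, but anchoring the detection time in UTC avoids time zone mistakes under pressure. The timestamps below are invented.

```python
from datetime import datetime, timedelta, timezone

detected = datetime(2024, 3, 4, 9, 30, tzinfo=timezone.utc)  # moment of detection
deadline = detected + timedelta(hours=72)                    # GDPR Art. 33 window

print(deadline.isoformat())  # 2024-03-07T09:30:00+00:00
```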

The cyber insurer requires evidence of due diligence and a detailed description of the incident for damage assessment. Key elements of documentation for the insurer: the incident timeline with specific dates and times, documentation of forensic and IR steps taken (chain of custody, tools used), a list of damaged or lost data and systems, an estimate of response costs (DFIR working hours, tool costs, costs of external experts), and downtime of business-critical systems. The insurer may refuse to pay or reduce compensation if it proves that the organisation did not follow basic cyber hygiene principles or did not cooperate transparently in documenting the damage.

The golden rule of documentation: everything that is not written down does not exist. This applies both to actions taken during the incident (what you did, when, why) and to the security posture before the incident (what controls were implemented, what training employees underwent). Documentation builds a negotiating position with the regulator and the insurer.

The structure of a formal forensic report should include: an executive summary (1–2 pages for management, without technical jargon), an incident chronology (timeline in tabular and graphical form), the scope of the compromise (which systems, data, and users were affected), technical analysis (details for the technical reader: artifacts, tools, methodology), root cause analysis (the underlying cause of the incident), business impact (damage estimate), recommendations (specific, prioritised remediation actions), and appendices (chain of custody, hash values, screenshots of artifacts).

The legal aspect of forensic documentation requires the involvement of a legal counsel at an early stage of the incident. Attorney-client privilege may protect some communications and analyses from disclosure in legal proceedings — but only if forensics is conducted on the instructions of a lawyer, rather than directly by the organisation. This is a legal nuance that can have enormous consequences in the event of subsequent litigation or regulatory proceedings.

What does the forensic process look like from alert to report?

The phases of the DFIR process, from the first alarm signal to the final report and hardening:

Phase 1 — Detection and Triage (0–2 h)
Actions: analysis of SIEM/EDR alerts, initial incident classification, priority and scope assessment, activation of the DFIR team.
Tools: SIEM (Splunk, QRadar), EDR (CrowdStrike, SentinelOne), SOAR, ticketing (TheHive).
Outputs: incident confirmed, initial scope established, IR Plan activated.

Phase 2 — Isolation and Containment (1–4 h)
Actions: network isolation of compromised systems, blocking user accounts, revoking compromised credentials, terminating malicious processes.
Tools: firewall/NAC rules, AD/LDAP management, EDR isolation, network segmentation.
Outputs: active attack stopped, further lateral movement prevented.

Phase 3 — Evidence Acquisition (2–8 h, depending on the number of systems)
Actions: RAM dump, disk imaging (forensic image), export of system and network logs, preservation of backups, chain of custody documentation.
Tools: FTK Imager, WinPMEM/DumpIt, Velociraptor, KAPE, Magnet RAM Capture.
Outputs: complete set of forensic artifacts with MD5/SHA-256 hashes.

Phase 4 — Forensic Analysis (8–72 h)
Actions: disk image analysis, RAM dump analysis, log parsing, malware analysis (static and dynamic), building the super timeline.
Tools: Autopsy, Volatility, Plaso/log2timeline, Chainsaw, Hayabusa, EZTools, Cuckoo/Any.run, YARA.
Outputs: attack timeline, IoC list, TTP identification (MITRE ATT&CK), root cause.

Phase 5 — Timeline Reconstruction (4–24 h)
Actions: timestamp normalisation, artifact correlation across multiple sources, Patient Zero identification, MITRE ATT&CK mapping, dwell time identification.
Tools: Timeline Explorer, Timesketch, MITRE ATT&CK Navigator, Maltego.
Outputs: chronological attack narrative, lateral movement map, identified compromise scope.

Phase 6 — Eradication (4–48 h)
Actions: removal of malware and backdoors, elimination of persistence mechanisms, patching exploited vulnerabilities, reset of compromised credentials, configuration hardening.
Tools: EDR, patch management, Qualys/Tenable, Active Directory, SCCM.
Outputs: clean state confirmed by forensic re-scan, attack vectors closed.

Phase 7 — Recovery and Monitoring (1–7 days)
Actions: system restoration from verified backups, re-deployment from hardened images, enhanced monitoring (increased EDR sensitivity, SIEM rules), threat hunting.
Tools: backup/DR systems, deployment tools, SIEM/EDR tuning, Velociraptor.
Outputs: systems in production with confirmed clean state, enhanced detection coverage.

Phase 8 — Reporting (2–5 days)
Actions: executive summary, technical forensic report, chain of custody documentation, notification to the regulator (UODO/KNF/CSIRT), communication with the insurer.
Tools: Word/PDF, TheHive, MISP (IoC sharing), UODO notification form.
Outputs: formal incident report, regulatory obligations fulfilled, insurer documentation.

Phase 9 — Lessons Learned (1–2 weeks)
Actions: post-mortem analysis, identification of detection and response gaps, update of the IR Plan and playbooks, team training, implementation of forensic recommendations.
Tools: SIEM rules updates, EDR tuning, tabletop exercises, awareness training.
Outputs: updated IR Plan, new detection rules, hardening roadmap.

The total DFIR process time for a typical ransomware incident at an enterprise organisation ranges from a few days to a few weeks — depending on the size of the environment, the degree of compromise, and the availability of DFIR resources. Organisations with a mature IR programme, ready-made tools, and a trained team reduce this time by 40–60% compared to organisations responding ad hoc.

How does nFlo conduct post-breach analysis and support clients after an incident?

When an organisation is experiencing an active incident, response time determines the scale of losses. nFlo builds its DFIR services around three principles: speed (first actions within 15 minutes of an alert), methodology (standardised forensic procedures based on NIST SP 800-61 and SANS PICERL), and completeness (from evidence acquisition through analysis to a formal report).

nFlo serves more than 200 clients and has completed more than 500 security projects — translating into extensive experience from real incidents, not just tabletop exercises. A response time of under 15 minutes means in practice: within 15 minutes of a report, the client has telephone contact with a DFIR analyst who guides them through the first steps of isolation and evidence preservation, before the team physically arrives on site or connects remotely.

The nFlo post-breach analysis service covers the full DFIR cycle:

Stage 1 — Emergency Response: Remote telephone/Teams assistance in the first minutes of an incident. Instructions for the client: how to isolate systems, how to block accounts, what NOT to do (do not shut down servers, do not delete files). Initial triage via remote access to the client’s SIEM and EDR, if monitoring infrastructure is available.

Stage 2 — Forensic Acquisition: nFlo analysts travel to the site or connect remotely via a secure tool (Velociraptor, Magnet AXIOM Cyber) for remote artifact acquisition. Creation of forensic images, RAM dump, log collection — all with chain of custody maintained. Every artifact is hashed and documented.
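The hashing and documentation of each artifact can be sketched in a few lines. This is a minimal illustration, not nFlo's actual tooling; the record fields and naming are assumptions chosen for clarity.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def register_artifact(path: str, case_id: str, collector: str) -> dict:
    """Hash an acquired artifact and produce a chain-of-custody record.

    Hypothetical helper: the field names and record format are
    illustrative, not a court-mandated or vendor-defined schema.
    """
    sha256 = hashlib.sha256()
    # Hash in 1 MiB chunks so large disk images and memory dumps
    # do not need to fit in RAM.
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            sha256.update(chunk)
    return {
        "case_id": case_id,
        "artifact": Path(path).name,
        "sha256": sha256.hexdigest(),
        "collected_by": collector,
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
    }

# Example: register a memory dump for a hypothetical case
# record = register_artifact("memdump.raw", "IR-2024-001", "analyst1")
```

Re-hashing the artifact at any later point and comparing against the recorded digest demonstrates that the evidence has not been altered since acquisition.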

Stage 3 — Analysis and Timeline: Work in a dedicated forensic environment (isolated from the client’s infrastructure). Analysis of disks, memory, logs, and network traffic. Mapping to MITRE ATT&CK. Identification of patient zero, dwell time, and compromise scope. Malware analysis (static and dynamic). Extraction of IoCs for threat intelligence.
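The core of timeline work is normalising timestamps from heterogeneous sources to a single reference (usually UTC) before correlating them. A minimal sketch, with invented events and source names; real pipelines use Plaso or Timesketch rather than hand-rolled code:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Illustrative events; real artifacts (Event Logs, syslog, cloud audit
# logs) often record timestamps in different local time zones.
events = [
    {"source": "windows_evtx", "ts": "2024-03-04T09:15:00",
     "tz": "Europe/Warsaw", "desc": "4624 logon from 10.0.0.5"},
    {"source": "firewall", "ts": "2024-03-04T08:10:00",
     "tz": "UTC", "desc": "outbound connection to 203.0.113.7"},
    {"source": "linux_syslog", "ts": "2024-03-04T10:02:00",
     "tz": "Europe/Warsaw", "desc": "new cron entry created"},
]

def to_utc(ts: str, tz_name: str) -> datetime:
    """Normalise a naive local timestamp to an aware UTC datetime."""
    local = datetime.fromisoformat(ts).replace(tzinfo=ZoneInfo(tz_name))
    return local.astimezone(timezone.utc)

# Sort all sources into one chronological super timeline.
timeline = sorted(
    ({**e, "utc": to_utc(e["ts"], e["tz"])} for e in events),
    key=lambda e: e["utc"],
)

for e in timeline:
    print(e["utc"].isoformat(), e["source"], e["desc"])
```

Once normalised, the Warsaw logon at 09:15 local turns out to follow the firewall event at 08:10 UTC by only five minutes, the kind of ordering that is invisible when each source is read in its own time zone.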

Stage 4 — Report and Regulatory Support: A complete forensic report: executive summary for management, detailed technical report, graphical timeline, remediation recommendations prioritised by risk. Support in preparing notifications to UODO/KNF/CSIRT. Communication with insurers and lawyers.

Stage 5 — Eradication and Hardening: nFlo does not end its work with the report. We support the client in eliminating the effects of the attack, hardening configurations, and implementing forensic recommendations. Optionally: re-deployment of infrastructure from scratch (clean build) for critical systems for which a clean state cannot be confirmed.

nFlo’s 98% client retention rate reflects an approach that combines technical effectiveness with an understanding of the client’s business context. Forensics is not just binary analysis — it is interpretation in context: what data was critical, what regulatory obligations the client has, what is the tolerated downtime, and what resources are available for recovery.

Organisations that have used nFlo’s DFIR services report a 90% reduction in the risk of a repeat incident within 12 months — thanks to comprehensive root cause analysis and the implementation of forensic recommendations.

Reactive forensics (after an incident) is only half the value. nFlo also offers forensic readiness: preparing an organisation to conduct forensics efficiently before an incident occurs. This includes:

  • configuring extended logging (Sysmon, auditd, VPC Flow Logs),
  • deploying a central SIEM with long-term log retention,
  • configuring EDR with forensic acquisition permissions,
  • creating and testing DFIR playbooks,
  • training internal analysts.

Investment in forensic readiness shortens incident response time and dramatically improves the quality of collected evidence, because the organisation knows where to look and what to collect before the chaos of an incident sets in.


FAQ — frequently asked questions about digital forensics

How long does forensic analysis take after a cyberattack?

The time for forensic analysis depends on the size of the environment and the scope of the compromise. For a small organisation (a few dozen systems, one infected server), initial analysis takes 24–48 hours, with a full report in 3–5 business days. For a large enterprise with hundreds of compromised systems, a full forensic analysis can take 2–4 weeks. Factors that extend the timeline: lack of centralised logging (logs must be collected manually from each system), anti-forensic activity by the attacker (deleted logs and traces), and heterogeneous environments (Windows + Linux + cloud + OT).

Can forensics be conducted remotely, or does it require physical presence?

Modern forensic tools enable remote acquisition of artifacts from hundreds of systems simultaneously, without physical presence on site. Velociraptor, Magnet AXIOM Cyber, and CrowdStrike Falcon Real Time Response allow Event Logs, Prefetch, the Registry, memory dumps, and other artifacts to be collected remotely over the internet (or VPN). Physical presence is recommended or required for: creating bit-for-bit images of physical server disks, analysing network devices with limited API access, OT/SCADA systems without remote access, and cases where the highest chain of custody standard is needed for legal proceedings.

Can forensics be conducted on a compromised system?

It can, but with awareness of the limitations. Advanced rootkits and bootkits can actively conceal files, processes, and network connections from tools running within the operating system (live forensics). Offline analysis (from disk images on a separate forensic system) is free of this limitation. In practice: live forensics is used for rapid acquisition of volatile data (RAM, network connections, open files). Deep analysis should always be performed offline on a copy.

How much does an external DFIR service cost?

The cost depends on the scope of the incident and the engagement model. Typical rates for external DFIR firms in Poland and Europe: 300–800 PLN/hour for an experienced DFIR analyst. A complete ransomware incident response for a mid-market organisation costs in the range of 50,000–200,000 PLN (depending on the number of compromised systems, analysis time, and the need for physical presence). Organisations with an active cyber insurance policy often have access to external DFIR firms as part of the policy — which radically changes the cost calculation. It is worth reviewing policy terms for IR/DFIR coverage before an incident occurs.
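The arithmetic behind such a quote is straightforward. A rough sketch using the rate range quoted above; the hours, on-site surcharge, and engagement shape are invented for illustration and vary widely in practice:

```python
# Rough DFIR engagement cost estimate. Only the 300-800 PLN/h rate range
# comes from the article; all other figures are illustrative assumptions.
hourly_rate_low_pln = 300      # experienced DFIR analyst, lower bound
hourly_rate_high_pln = 800     # upper bound
analyst_hours = 120            # e.g. two analysts for roughly 1.5 weeks
onsite_days = 2                # physical imaging of critical servers
onsite_day_rate_pln = 5000     # assumed flat on-site surcharge per day

low = hourly_rate_low_pln * analyst_hours + onsite_days * onsite_day_rate_pln
high = hourly_rate_high_pln * analyst_hours + onsite_days * onsite_day_rate_pln
print(f"Estimated range: {low:,} - {high:,} PLN")
```

Under these assumptions the estimate lands at 46,000 to 106,000 PLN, consistent with the mid-market range quoted above.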

What is “dwell time” and why is it important?

Dwell time is the period between the initial compromise and detection of the attacker. According to Mandiant research (M-Trends 2024), the global median dwell time was 10 days — but in the case of attacks not detected by the victim (detected by an external party) — significantly longer. The longer the dwell time, the more time the attacker had for lateral movement, privilege escalation, and data exfiltration. Forensics allows dwell time to be measured precisely — which is critical for assessing the scope of the compromise and regulatory obligations (e.g., GDPR requires an assessment of whether personal data was accessible to the attacker and for how long).
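Measuring dwell time once the timeline is reconstructed is simple date arithmetic; the same timestamps also drive the 72-hour GDPR notification clock. The two timestamps below are hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical timestamps established during forensic analysis: the
# earliest confirmed attacker activity (from the super timeline) and
# the moment the incident was detected.
initial_compromise = datetime(2024, 2, 20, 14, 30, tzinfo=timezone.utc)
detection = datetime(2024, 3, 4, 9, 5, tzinfo=timezone.utc)

dwell_time = detection - initial_compromise
print(f"Dwell time: {dwell_time.days} days")  # 12 full days in this example

# GDPR Art. 33: the supervisory authority must be notified within
# 72 hours of becoming aware of a personal data breach.
notification_deadline = detection + timedelta(hours=72)
print(f"Notification deadline (UTC): {notification_deadline.isoformat()}")
```

Note that the clock for Art. 33 runs from awareness of the breach, not from the compromise itself, which is one more reason precise detection timestamps matter.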


Sources

  1. NIST Special Publication 800-86: Guide to Integrating Forensic Techniques into Incident Response. National Institute of Standards and Technology, 2006. https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-86.pdf

  2. NIST Special Publication 800-61 Rev. 2: Computer Security Incident Handling Guide. National Institute of Standards and Technology, 2012. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf

  3. Mandiant M-Trends 2024: Special Report. Mandiant (Google Cloud), 2024. https://www.mandiant.com/m-trends

  4. Carrier, B.: File System Forensic Analysis. Addison-Wesley Professional, 2005. ISBN 0-321-26817-2.

  5. Ligh, M.H., Case, A., Levy, J., Walters, A.: The Art of Memory Forensics: Detecting Malware and Threats in Windows, Linux, and Mac Memory. Wiley, 2014. ISBN 978-1-118-82509-6.

  6. SANS Institute: DFIR Curriculum — FOR508: Advanced Incident Response, Threat Hunting, and Digital Forensics. https://www.sans.org/cyber-security-courses/advanced-incident-response-threat-hunting-training/

  7. Cado Security: Cloud Forensics and Incident Response Guide. https://www.cadosecurity.com/resources/

  8. Eric Zimmermann’s Digital Forensics Tools (EZTools). https://ericzimmerman.github.io/

  9. Zimmerman, E.: Introducing Plaso (log2timeline). 2013. https://plaso.readthedocs.io/

  10. MITRE ATT&CK Framework: Enterprise Matrix. https://attack.mitre.org/matrices/enterprise/

  11. Regulation (EU) 2016/679 of the European Parliament and of the Council (GDPR), Art. 33 and 34 — obligation to notify data breaches. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679

  12. Act of 5 July 2018 on the national cybersecurity system (KSC), as amended. https://isap.sejm.gov.pl/isap.nsf/DocDetails.xsp?id=WDU20180001560


Explore the key terms related to this article in our cybersecurity glossary:

  • Incident Response — Incident Response is a structured process for detecting, analysing, and responding to security incidents…
  • Cybersecurity — Cybersecurity is the set of techniques, processes, and practices for protecting IT systems…
  • SIEM — SIEM (Security Information and Event Management) is a platform that aggregates and correlates security events…
  • EDR — Endpoint Detection and Response (EDR) is an advanced solution for detecting and responding to threats…
  • SOC as a Service — SOC as a Service is the outsourcing of security monitoring, analysis, and incident response…
