This challenge is based on the MITRE publication 11 Strategies of a World-Class Cybersecurity Operations Center by Knerler, Parker, and Zimmerman. The task is to read through the document and answer questions covering SOC structure, incident response workflow, data management, and advanced operations.
The document clearly distinguishes between three types of organisational units that are often confused with one another.
Figure 3 in the Fundamentals section diagrams the Basic SOC Workflow. After collection, triage, in-depth analysis, and decision making, the workflow terminates in a RESPONSE OPTIONS box listing four possible actions.
The document references the OODA Loop (Observe, Orient, Decide, Act) as the military strategy adapted by SOCs to achieve high levels of situational awareness. Originally developed for fighter pilots, the OODA Loop is a self-reinforcing decision cycle that analysts apply continuously — from seconds to years — as they build familiarity with their constituency and threat landscape.

Figure 6 maps constituency size to recommended SOC model. For organisations with 1,000 to 10,000 employees, the document recommends a Distributed SOC — a formal SOC authority composed of a decentralised pool of resources housed across the constituency.

In a Large Centralised SOC, the SOC Operations Lead is responsible for generating SOC metrics, maintaining situational awareness, and conducting both internal and external training. In smaller SOCs these functions fall to the SOC Lead as an additional duty, but in a large SOC they warrant a dedicated section.
Table 4 (Capability Template) maps functions to SOC models. Under the Expanded SOC Operations category, two functions are listed as Optional (o) for Coordinating & National SOCs.
Section 3.7.3 discusses succeeding with Virtual SOCs during events like COVID-19. Two virtual console technologies are explicitly named as supporting remote access to SOC infrastructure.
The Follow the Sun model distributes SOC 24x7 coverage across two or three operations floors separated by many time zones. Each floor works local business hours (e.g. 9am–5pm), handing off to the next floor at shift end. This eliminates the need for analysts to work night shifts while maintaining continuous coverage.
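As an illustration of the handoff arithmetic, here is a minimal Python sketch of a three-floor rotation. The floor names, UTC offsets, and shift hours are hypothetical choices that happen to tile a 24-hour day, not values from the document:

```python
from datetime import datetime, timezone

# Hypothetical three-floor layout; offsets are hours relative to UTC.
FLOORS = [
    ("Floor A", 0),   # e.g. a European site
    ("Floor B", 8),   # e.g. an Asia-Pacific site
    ("Floor C", -8),  # e.g. a US West Coast site
]

SHIFT_START, SHIFT_END = 9, 17  # local business hours, 9am-5pm

def on_duty(utc_now: datetime) -> str:
    """Return the floor whose local clock currently falls inside business hours."""
    for name, offset in FLOORS:
        local_hour = (utc_now.hour + offset) % 24
        if SHIFT_START <= local_hour < SHIFT_END:
            return name
    return "handoff window"  # only reachable if the offsets leave a gap

print(on_duty(datetime(2024, 1, 1, 10, 0, tzinfo=timezone.utc)))  # prints Floor A
```

With 8-hour shifts and floors spaced 8 time zones apart, each floor's business day abuts the next floor's, so coverage is continuous and nobody works nights.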
Table 6 (Sample Incident Prioritisation Planning) assigns the following priorities to the three activities in question:
| Incident/Event | Priority |
|---|---|
| Phishing | Medium |
| Insider Threat | High |
| Pre-incident Port Scanning | Low |
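Table 6's mapping lends itself to a simple triage ordering. A minimal sketch, assuming only the three priorities above (the numeric rank values are an illustrative choice, not from the document):

```python
# Priorities taken from Table 6; unknown incident types default to Low here.
PRIORITY = {
    "Insider Threat": "High",
    "Phishing": "Medium",
    "Pre-incident Port Scanning": "Low",
}
RANK = {"High": 0, "Medium": 1, "Low": 2}  # lower rank = handled first

def triage_order(incidents):
    """Sort incoming incidents so higher-priority ones are worked first."""
    return sorted(incidents, key=lambda name: RANK[PRIORITY.get(name, "Low")])

queue = ["Phishing", "Pre-incident Port Scanning", "Insider Threat"]
print(triage_order(queue))  # Insider Threat first, port scanning last
```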
Section 5.7 covers incident response with mobile devices. For mobile forensics investigations, the document recommends Santoku — an open source toolkit specifically designed for mobile forensics, malware analysis, and security that enables investigators to image and analyse devices as well as decompile and disassemble malware.
Section 6.6 (CTI Tools) advises that before choosing a CTI tool, organisations should ensure support for two open threat intelligence standards: STIX (Structured Threat Information eXpression) and TAXII (Trusted Automated eXchange of Intelligence Information).
From Appendix E and supporting footnotes: PCAP (full packet capture) is explicitly noted as the data source whose volume “generally dwarfs all other data sources” — typically consuming TBs per day depending on network throughput.
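The "TBs per day" figure follows directly from link throughput. A back-of-the-envelope sketch (decimal terabytes; the sustained-utilisation parameter is an assumption for illustration):

```python
def pcap_tb_per_day(link_gbps: float, utilisation: float = 1.0) -> float:
    """Rough daily full-packet-capture volume in decimal TB."""
    bytes_per_sec = link_gbps * 1e9 / 8 * utilisation  # bits -> bytes
    return bytes_per_sec * 86_400 / 1e12               # seconds/day -> TB

# A fully utilised 1 Gbps link writes about 10.8 TB of PCAP per day.
print(round(pcap_tb_per_day(1.0), 1))  # prints 10.8
```

Even at modest utilisation, a 10 Gbps span quickly reaches tens of TB per day, which is why PCAP dwarfs log-based sources.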
Table 15 (Suggested Minimum Data Retention Time Frames) specifies that EDR, network sensor alerts, and SIEM-correlated alerts should be retained for 6 months to support SOC forensics and investigations.
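A retention policy like this reduces to a window check at query or purge time. A minimal sketch, assuming 6 months is approximated as 183 days (the document states the window in months, not days):

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 183  # ~6 months, per Table 15's suggested minimum for alerts

def within_retention(event_time: datetime, now: datetime) -> bool:
    """True if an EDR/sensor/SIEM alert is still inside the retention window."""
    return now - event_time <= timedelta(days=RETENTION_DAYS)

print(within_retention(datetime(2024, 1, 1), datetime(2024, 3, 1)))  # True
```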
The Pyramid of Pain (Figure 40) classifies how difficult each indicator type is for an adversary to change:
| Difficulty | Indicator |
|---|---|
| Trivial | Hash Values |
| Easy | IP Addresses |
| Challenging | Tools |
| Tough | TTPs |
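The pyramid's ordering can be encoded directly. A minimal sketch using the four tiers above plus the two intermediate tiers (Domain Names at Simple, Network/Host Artifacts at Annoying) from the published Pyramid of Pain:

```python
# Higher value = harder for the adversary to change = more durable detection.
PAIN = {
    "Hash Values": 0,             # Trivial
    "IP Addresses": 1,            # Easy
    "Domain Names": 2,            # Simple
    "Network/Host Artifacts": 3,  # Annoying
    "Tools": 4,                   # Challenging
    "TTPs": 5,                    # Tough
}

def most_durable(indicators):
    """Pick the indicator type an adversary finds hardest to change."""
    return max(indicators, key=PAIN.get)

print(most_durable(["IP Addresses", "Tools", "Hash Values"]))  # prints Tools
```

Sorting detections by this score is one way to prioritise engineering effort toward behaviours (TTPs) over brittle atomic indicators.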

Section 11.2.3 describes Adversary Emulation as the red teaming approach that specifically mimics the TTPs of a known real-world adversary. Unlike generalised red teaming, adversary emulation is based on a specific named threat actor, mirrors their documented TTPs, and targets the same assets that actor is known to pursue.