In the first week of July I was fortunate enough to attend the annual Cyber Conference at Chatham House. Two significant trends became apparent, which enable the exploration of a long-mooted metaphor of mine for cyber-security: at the end of every episode of Scooby-Doo the ‘monster’ is unmasked and a human is revealed to be behind the incident.
The trends in question are the importance of human factors in cyber-security and the institutional processes that enable companies to respond to cyber incidents and mitigate their effects.
The essential argument underpinning these two trends is that technical cyber-security is near its zenith. Sometimes engineers will stop an intrusion from becoming a breach and sometimes they will not. Equally, they will be better at preventing some forms of intrusion than others, and there is a growing realisation that zero risk equates to infinite expenditure. Therefore, an institution, whether civic or military, that wants to enhance its cyber-security has to look beyond merely technical means.
The notion of human factors, or what some call the human firewall, is not a ground-breaking concept in cyber-security. The significant change that has occurred in the corporate world is the understanding that the implementation of best practice comes from the C-Suite (CCO/CEO/CFO etc.) and filters throughout the organisation. In the past, IT departments broadly had minimal C-Suite, or board-level, oversight and purchased specifically tailored software to provide a solution for a given threat – a technical response. However, as information and, especially, data, the intellectual property of the modern world, have become increasingly important to business, issues of security have moved to board level and away from the IT department. In many ways, this is akin to the ‘whole force’ approach utilising ‘full spectrum targeting’ that the British military espouses, and it runs parallel with changes in working habits, such as the blurring of the work/social and office/home distinctions, along with the increasing demand for BYOD (Bring Your Own Device). Thus, the security of the business process is fundamentally altered, and procedural solutions are necessary to protect core business assets.
The enhancement of institutional processes relates to an increasing acceptance of the mantra that two types of company exist: those that have been hacked and those that do not know that they have been hacked. The procedures in place should achieve one of three objectives: to prevent an intrusion from becoming a breach; to reduce the time taken to identify a breach (cases of well over a year exist); and to mitigate the damage caused by a breach once discovered. An intrusion is a contained event whereby an unauthorised presence exists. A breach comprises the tangible consequences of an intrusion, whereby intellectual property (data) is, or can be, destroyed, altered, or stolen. The majority of intrusions do not become breaches.
As mentioned earlier, the military spend a lot of time talking about a ‘whole force’ approach and ‘full spectrum targeting’. Although this is the British terminology, the US has a similar approach, and it appears to confirm a recourse to Clausewitzian total-war principles, which have largely been shunned since the end of the Second World War. In the cyber environment it is perhaps unsurprising that the level of civilian involvement is high, given that around 95% of cyber infrastructure is owned by private companies. If that seems a stretch, bear in mind that full spectrum targeting has been defined as delivering all levels of national power to achieve an outcome.
The traditional separation of the battlespace from societal space is increasingly blurred. For the majority of human history, civilians (non-combatants) have withdrawn from the battlespace and left the combatants to engage each other. Once victory is decided, the non-combatants are free to return to the battlespace with a decreased, or negligible, risk of becoming collateral damage. Total-war doctrine distorts this distinction. Utilising all levels of national power towards a military outcome means that the civic-industrial process supports the military instrument. Therefore, all the apparatus of civil industry, including the morale of the workforce, becomes a legitimate target of war. Operation Millennium and the other 1,000-bomber raids of the Second World War strategic bombing campaign exemplify the point.
The cyber environment is an area where national advantage can be gained, in both relative and absolute terms. Consequently, the understanding of how best to gain effect from cyber operations has evolved from a narrow initial focus on perimeter-style network defence (think of a castle with a moat around it) to more multi-layered and active defence concepts. Defence is no longer merely sitting behind a wall and hoping it holds; it also entails the ability to charge out through the gates and drive the enemy away. Better still, destroy the enemy before he gets close to your castle or, better yet, remove the feasibility of the enemy entering your land and moving up to the castle walls at all. In military-strategic jargon this equates to deterrence by denial, and it increasingly makes the distinction between offensive and defensive action problematic.
Deterrence retains a crucial role in the cyber environment and is also facilitated by the ability to attribute action to a particular party; think name and shame. Attribution is not some form of Holy Grail waiting to be discovered. It is fair to say that prior to 2011 the process of attributing action in the cyber environment was not as well developed as it is today. By separating the strategic from the tactical, or operational, level, it can be seen that attribution holds well at the strategic level. Hostile action in the cyber environment does not take place in a vacuum. Without singling out one country’s actions, consider the incidents in Estonia (2007), Georgia/South Ossetia (2008), Crimea (2014), and the ongoing events in Eastern Ukraine. What level of proof is required to establish that hostile action has taken place? Identifying an electronic fingerprint and a cookie-crumb trail back to the perpetrator seems at odds with the transition of cyber-security principles away from the purely technical. Attribution is a firmly political decision.
It is increasingly understood that interaction, hostile or otherwise, in the cyber environment represents a reciprocal expression of human choice. As such, conceptualising issues within the cyber environment and developing solutions and strategies require an appreciation of Scooby-Doo. The technological fascination with, and, more generally, the poor public understanding of, the cyber environment ensures that Scooby-Doo would rarely earn his snack and that the true perpetrators of harm remain hidden behind a veil of ignorance. Consequently, and with one eye on the pending SDSR in the UK, unless human factors are central to a cyber-security strategy, that strategy will surely be insufficient and incomplete.
Gavin E L Hall is a Doctoral Researcher in the Department of Political Science and International Studies at the University of Birmingham.