Dr. Matthew Canham

Leading Expert in Human Cybersecurity

Research Papers


Repeat Clicking: A Lack of Awareness Is Not the Problem

Matthew Canham

Although phishing is the most common social engineering tactic employed by cyber criminals, not everyone is equally susceptible. An important finding emerging across several research studies on phishing is that a subset of employees is especially susceptible to social engineering tactics and is responsible for a disproportionate number of successful phishing attempts. Sometimes referred to as repeat clickers, these employees habitually fail simulated phishing tests and are suspected of being responsible for a significant number of real-world phishing-related data breaches. In contrast to repeat clickers, protective stewards are those employees who never fail simulated phishing exercises and habitually report phishing simulations to their security departments. This study explored some of the potential causes of these persistent behaviors (both good and bad) through six semi-structured interviews (three repeat clickers and three protective stewards). Surprisingly, both groups were able to identify the message cues that indicate potentially malicious emails. Repeat clickers reported a more internally oriented locus of control and higher confidence in their ability to identify phishing emails, but also described more rigid email-checking habits than did protective stewards. One unexpected finding was that repeat clickers failed to recall an identifier which they were explicitly told they would need to recall later, while the protective stewards recalled the identifier without error. Given the small sample and exploratory nature of this study, additional research should seek to confirm whether these findings generalize to larger populations.

Click Here to Read the Full Paper
The UnCODE system: A neurocentric systems approach for classifying the goals and methods of Cognitive Warfare

Torvald F. Ask, Ricardo Lugo, Stefan Sütterlin, Matthew Canham, Daniel Hermansen, Benjamin J. Knox

Cognitive Warfare takes advantage of novel developments in technology and science to influence how target populations think and act. Establishing adequate defense against Cognitive Warfare requires examination of modus operandi to understand this emerging action space, including the goals and methods that can be realized through science and technology. Recent literature suggests that both human and nonhuman cognition should be considered as targets of Cognitive Warfare. There are currently no frameworks allowing for a unified way of conceptualizing short-term and long-term Cognitive Warfare goals and attack methods that are domain- and species-agnostic. There is a need for a framework developed through a bottom-up approach, informed by neuroscientific principles, that captures the relevant aspects of cognition while remaining at a level of complexity that is actionable to decision-makers in war. In this paper, we attempt to cover this gap by proposing the Unplug, Corrupt, disOrganize, Diagnose, Enhance (UnCODE) system for classifying the goals and methods of Cognitive Warfare. The system is neurocentric in that it conceptualizes Cognitive Warfare goals from the perspective of how adversarial methods relate to neural information processing in an individual or society. The UnCODE system identifies five main classes of goals: 1) eliminating a target’s ability to produce outputs, 2) degrading a target’s capacity to process inputs and produce outputs, 3) biasing a target’s input-output activity, 4) monitoring and understanding the input-output relationships in targets, and 5) enhancing a target’s capacity and ability to process inputs and produce outputs. Methods can be divided into two categories based on access to the target’s neural system: direct access and indirect access. The UnCODE system is domain- and species-agnostic and allows for interdisciplinary commensurability when communicating attack paths across domains. In sum, the UnCODE system is a unifying framework that captures the fact that multiple methods can be used to reach the same Cognitive Warfare goals.
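To make the classification concrete, here is a minimal sketch of how the UnCODE taxonomy might be encoded for communicating attack paths across domains. The goal classes follow the abstract's ordering (Unplug through Enhance), but the Python names (`Goal`, `Access`, `AttackPath`) and the example methods are illustrative assumptions, not artifacts of the paper:

```python
from dataclasses import dataclass
from enum import Enum

class Goal(Enum):
    """The five UnCODE goal classes summarized in the abstract."""
    UNPLUG = "eliminate the target's ability to produce outputs"
    CORRUPT = "degrade the capacity to process inputs and produce outputs"
    DISORGANIZE = "bias the target's input-output activity"
    DIAGNOSE = "monitor and understand input-output relationships"
    ENHANCE = "enhance the capacity to process inputs and produce outputs"

class Access(Enum):
    """Method categories, based on access to the target's neural system."""
    DIRECT = "direct access"
    INDIRECT = "indirect access"

@dataclass
class AttackPath:
    """A method (description) mapped onto a goal and an access category."""
    method: str          # hypothetical example method, for illustration only
    goal: Goal
    access: Access

# Multiple methods can serve the same goal, which the system makes explicit:
paths = [
    AttackPath("disinformation campaign", Goal.DISORGANIZE, Access.INDIRECT),
    AttackPath("direct neural interference", Goal.DISORGANIZE, Access.DIRECT),
]
for p in paths:
    print(f"{p.method}: goal={p.goal.name}, access={p.access.name}")
```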

Click Here to Read the Full Paper
On the Relationship between Health Sectors’ Digitalization and Sustainable Health Goals: A Cyber Security Perspective

Stefan Sütterlin, Benjamin J. Knox, Kaie Maennel, Matthew Canham, Ricardo G. Lugo

Digitalization in the health sector is, as in all societal domains, motivated by a range of anticipated positive consequences, such as increased effectiveness of prevention, treatment, and follow-up, generally improved resource efficiency, and improved health care availability. This chapter discusses how the ambition of achieving sustainable health goals may be affected by measures of digitalization. It does so by examining digitalization from a cyber security perspective and considering how new potential threats to privacy may influence the public’s trust in their health care system, thereby affecting the envisaged goals of sustainable health care performance. It also discusses how digitalization in the healthcare sector unleashes enormous potential in terms of cost-effectiveness, decentralization, and the availability of specialist services and expertise; which risks and countermeasures these changes entail and how they are currently dealt with; and the role of cyber resilience in ensuring that rapid digitalization does not come at the cost of the essential trust mechanisms that underpin the health sector. Transformation to a digitalized healthcare system must be governed and framed by a range of measures in various societal domains. The World Health Organization’s (WHO) report on the status of eHealth in the European region (WHO 2016) states that its member states “acknowledge and understand the role of e-Health in contributing to the achievement of universal health coverage and have a clear recognition of the need for national policies, strategies, and governance to ensure the progress and long-term sustainability of investments.”

Click Here to Read the Full Paper
Planting a Poison SEAD: Using Social Engineering Active Defense to Counter Cybercriminals

Matthew Canham & Juliet Tuthill

By nearly every metric, the status quo of information security is not working. The interaction matrix of attacker-defender dynamics strongly favors the attacker, who only needs to be lucky once. We argue that employing social engineering active defense (SEAD) will be more effective in countering malicious actors than maintaining the traditional passive defensive strategy. The Offensive Countermeasures (OCM) approach to defense advocates for three categories of countermeasures: annoyance, attribution, and attack. Annoyance aims to waste the attacker’s time and resources, with the objective not only of deterrence but also of increasing the probability of detection and attribution. Attribution attempts to identify who is launching the attack; gathering as much threat intelligence as possible about who the attacker is provides the best possible defense against future attacks. Finally, attack involves running code on the attacker’s system for the purpose of deterrence and attribution. In this work, we advocate for utilizing similar approaches to deny, degrade, and de-anonymize malicious actors by using social engineering tools, tactics, and procedures against the attackers. Rather than fearing the threats posed by synthetic media, cyber defenders should embrace these capabilities by turning them against criminals. Future research should explore ways to implement synthetic media and automated SEAD methods to degrade the capabilities of online malicious actors.

Click Here to Read the Full Paper
Phish Derby: Shoring the human shield through gamified phishing attacks

Matthew Canham, Clay Posey, Michael Constantino

To better understand employees’ reporting behaviors in relation to phishing emails, we gamified the phishing security awareness training process by creating and conducting a month-long ‘Phish Derby’ competition at a large university in the U.S. Employees competed against one another for prizes and were instructed to report emails as potential phishing attacks. Prior to the beginning of the competition, we collected demographic data and data related to the concepts central to two theoretical foundations: the Big Five personality traits and goal orientation theory. We found several notable relationships between demographic variables and Derby performance, which was operationalized as a combination of the number of phishing attacks reported and employee report speed. Several key findings emerged: past performance on simulated phishing campaigns positively predicted Phish Derby performance; older participants performed better than their younger colleagues, but more education was associated with poorer performance; and individuals who used a mix of PCs and Macs at work performed worse than those using a single platform. We also found that two of the Big Five personality dimensions, extraversion and agreeableness, were both associated with poorer performance in phishing detection and reporting. Likewise, individuals who were driven to perform well in the Derby because they desired to learn from the experience (i.e., learning goal orientation) performed at a lower level than those driven by other goals. Interestingly, self-reported levels of computer skill and perceived ability to detect phishing failed to exhibit a significant relationship with Derby performance. We discuss these findings and describe how motivating good employee cyber behaviors is a necessary yet too often overlooked component in organizations whose cyber training cultures are rooted in employee click rates alone.
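For readers who want a concrete sense of the analysis style described above, here is a minimal, hypothetical sketch of regressing a composite Derby performance score on demographic and personality predictors. The data are randomly generated and the variable names, coding, and model are ours for illustration, not the study's actual dataset or analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical sample size

# Hypothetical predictors; the study's actual variables and coding differ.
df = pd.DataFrame({
    "age": rng.integers(22, 66, n),
    "education_yrs": rng.integers(12, 21, n),
    "extraversion": rng.uniform(1, 5, n),
    "agreeableness": rng.uniform(1, 5, n),
})

# Composite performance: more reports and faster reporting score higher.
reports = rng.poisson(8, n)
report_speed_min = rng.exponential(20, n)
df["performance"] = reports - 0.1 * report_speed_min

# Ordinary least squares regression of performance on the predictors.
X = sm.add_constant(df[["age", "education_yrs", "extraversion", "agreeableness"]])
print(sm.OLS(df["performance"], X).fit().summary())
```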

Click Here to Read the Full Paper
Ambiguous self-induced disinformation (ASID) attacks: Weaponizing a cognitive deficiency

Matthew Canham, Stefan Sütterlin, Torvald Fossåen Ask, Benjamin James Knox, Lauren Glenister, Ricardo Gregorio Lugo

Humans quickly and effortlessly impose context onto ambiguous stimuli, as demonstrated through psychological projective testing and ambiguous figures. This feature of human cognition may be weaponized as part of an information operation. Such Ambiguous Self-Induced Disinformation (ASID) attacks would employ the following elements: the introduction of a culturally consistent narrative, the presence of ambiguous stimuli, the motivation for hypervigilance, and a social network. ASID attacks represent a low-risk, low-investment tactic for adversaries with the potential for significant reward, making this an attractive option for information operations within the context of grey-zone conflicts.

Click Here to Read the Full Paper
Human Factors and Ergonomics in Design of A3: Automation, Autonomy, and Artificial Intelligence

Ben D. Sawyer, Dave B. Miller, Matthew Canham, and Waldemar Karwowski

Automation, autonomy, and artificial intelligence (AI) are technologies which serve as extensions of human ability, contributing self-produced, non-human effort (see Figure 1). These three terms encompass a set of computational tools that can learn from data and systems that act in a reasonable, even human-like, manner (Bolton, Machová, Kovacova, & Valaskova, 2018; Dash, McMurtrey, Rebman, & Kar, 2019; Shekhar, 2019). Computing of this nature has been pursued at least since the 1950s, when Simon predicted machines “capable … of doing any work a man can do” (Chase & Simon, 1973), and today such envisioned technology appears under the moniker Artificial General Intelligence (AGI). The desire for synthetic intelligent creations has been a staple of human imagination for much longer, in various forms (Hancock et al., 2011; Schaefer et al., 2015). While AGI remains, at present, just a dream, a number of promising, and promised, future technologies under development require machines to learn, understand, and adapt to novel situations with at least the flexibility humans exhibit, albeit in a more limited context. The major technology underlying AI, machine learning (ML), is useful for engineering such autonomy, as it can learn from external data input, either with or without direct human oversight. In developing these highly useful technologies, knowledge from human factors and ergonomics (HF/E) can be of great use, especially to designers charged with the difficult task of dovetailing humans and machines in complex systems built to navigate sometimes chaotic environments. Technology serves as a greater extension of human ability each year, and optimal performance still results from hybrid human–machine teams...

Click Here to Read the Full Paper
Deepfake Social Engineering: Creating a Framework for Synthetic Media Social Engineering

Matthew Canham

How do you know that you are actually talking to the person you think you are talking to? Deepfake and related synthetic media technologies may represent the greatest revolution in social engineering capabilities yet developed. In recent years, scammers have used synthetic audio in vishing attacks to impersonate executives and convince employees to wire funds to unauthorized accounts. In March 2021, the FBI warned the security community to expect a significant increase in synthetic media enabled scams over the following 18 months. The security community is at a highly dynamic moment in history, in which the world is transitioning away from being able to trust what we experience with our own eyes and ears. This presentation proposes the Synthetic Media Attack Framework to describe these attacks and offers some easy-to-implement, human-centric countermeasures. The framework uses five dimensions to describe synthetic media social engineering attacks: Medium (text, audio, video, or a combination), Interactivity (pre-recorded, high asynchrony, low asynchrony, or real-time), Control (human puppeteer, software, or a hybrid), Familiarity (unfamiliar, familiar, or close), and Intended Target (a human or an automated system, an individual target or a broader audience). While several technology-based methods for detecting synthetic media currently exist, this work focuses on human-centered countermeasures to synthetic media attacks because most technology-based solutions are not readily available to the average user and are difficult to apply in real time. Effective security policies can help users spot inconsistencies between the behaviors of a legitimate actor and a syn-puppet. Proof-of-life statements will effectively counter most virtual kidnappings leveraging synthetic media. Significant financial transfers should require either multi-factor authentication (MFA) or multi-person authorization. These ‘old-school’ solutions will find new life in the emerging world of synthetic media attacks.
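As a rough illustration, the sketch below encodes an attack as a point in the framework's five-dimensional space. The dimension values come from the abstract; the Python class and identifier names, and the example attack, are hypothetical choices of ours, not the paper's notation:

```python
from dataclasses import dataclass
from enum import Enum

class Medium(Enum):
    TEXT = "text"
    AUDIO = "audio"
    VIDEO = "video"
    COMBINATION = "combination"

class Interactivity(Enum):
    PRE_RECORDED = "pre-recorded"
    HIGH_ASYNCHRONY = "high asynchrony"
    LOW_ASYNCHRONY = "low asynchrony"
    REAL_TIME = "real-time"

class Control(Enum):
    HUMAN_PUPPETEER = "human puppeteer"
    SOFTWARE = "software"
    HYBRID = "hybrid"

class Familiarity(Enum):
    UNFAMILIAR = "unfamiliar"
    FAMILIAR = "familiar"
    CLOSE = "close"

class IntendedTarget(Enum):
    HUMAN_INDIVIDUAL = "individual human"
    AUTOMATION = "automated system"
    BROAD_AUDIENCE = "broader audience"

@dataclass
class SyntheticMediaAttack:
    """One attack, described along the framework's five dimensions."""
    medium: Medium
    interactivity: Interactivity
    control: Control
    familiarity: Familiarity
    target: IntendedTarget

# Example: a real-time voice-cloning vishing call impersonating an executive.
ceo_vishing = SyntheticMediaAttack(
    medium=Medium.AUDIO,
    interactivity=Interactivity.REAL_TIME,
    control=Control.HUMAN_PUPPETEER,
    familiarity=Familiarity.FAMILIAR,
    target=IntendedTarget.HUMAN_INDIVIDUAL,
)
print(ceo_vishing)
```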

Click Here to Read the Full Paper
Phishing for long tails: Examining organizational repeat clickers and protective stewards

Matthew Canham, Clay Posey, Delainey Strickland, Michael Constantino

Organizational cybersecurity efforts depend largely on the employees who reside within organizational walls. These individuals are central to the effectiveness of organizational actions to protect sensitive assets, and research has shown that they can be detrimental (e.g., sabotage and computer abuse) as well as beneficial (e.g., protection-motivated behaviors) to their organizations. One major context where employees affect their organizations is phishing via email systems, which is a common attack vector used by external actors to penetrate organizational networks, steal employee credentials, and create other forms of harm. In analyzing the behavior of more than 6,000 employees at a large university in the Southeast United States during 20 mock phishing campaigns over a 19-month period, this research effort makes several contributions. First, employees’ negative behaviors like clicking links and then entering data are evaluated alongside the positive behaviors of reporting suspected phishing attempts to the proper organizational representatives. The analysis provides evidence of both repeat clicker and repeat reporter phenomena and documents their frequency and Pareto distributions across the study time frame. Second, we find that employees can be categorized into one of four unique clusters with respect to their behavioral responses to phishing attacks: “Gaffes,” “Beacons,” “Spectators,” and “Gushers.” While each of the clusters exhibits some level of phishing failures and reports, significant variation exists among the employee classifications. Our findings help drive a new and more holistic stream of research into the full range of employee responses to phishing attacks, and we provide avenues for such future research.
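The following is a hypothetical sketch of the kind of behavioral clustering the abstract describes, using simulated per-employee failure and reporting rates. The feature definitions, the k-means choice, and the prototype-to-label guesses in the comments are ours for illustration; in the study the four clusters emerge from the real campaign data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical cluster prototypes: (failure rate, report rate) per campaign.
prototypes = np.array([
    [0.40, 0.05],   # fails often, rarely reports     -> "Gaffes"? (our guess)
    [0.05, 0.60],   # rarely fails, reports often     -> "Beacons"? (our guess)
    [0.05, 0.05],   # neither fails nor reports       -> "Spectators"? (our guess)
    [0.30, 0.50],   # both fails and reports often    -> "Gushers"? (our guess)
])

# 50 simulated employees per prototype, with individual-level noise.
employees = np.clip(
    np.vstack([rng.normal(p, 0.05, size=(50, 2)) for p in prototypes]), 0, 1
)

# Partition the simulated workforce into four behavioral clusters.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(employees)
for label, (fail, report) in enumerate(kmeans.cluster_centers_):
    print(f"cluster {label}: fail rate={fail:.2f}, report rate={report:.2f}")
```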

Click Here to Read the Full Paper
Understanding Online Information Operations: Development of an Influence Network for Scientific Inquiry Testing Environment (INSITE)

Courtney Crooks, Tom McNeil, Ben Sawyer, Matthew Canham, David Muchlinski

Influence operations that promote propaganda, disinformation, and the propagation of social hysteria represent an existential threat to the United States. Effective countermeasures must be developed that can respond in near real-time and anticipate future adversarial actions. One of the most significant hurdles to developing effective countermeasures is the lack of a complex and dynamic testing environment that provides adequate assessment of algorithms and automated detection tools. Through the integration of the social sciences with applied mathematics, dynamic, multi-factorial phenomena such as social influence and response behavior within complex social systems may be investigated with scientific rigor. The proposed capability will fulfill a critical need for developing a social-centric model to understand and assess complex influence factors and to design social engineering countermeasures that promote national security interests. To develop such a model, the research community also needs an accessible social media platform that is controlled by researchers, for researchers, and can be used to test new ideas in a realistic setting. A researcher-controlled platform will not only provide unprecedented access to data but will also allow researchers to test mitigation and intervention strategies that would be impossible to implement on existing social media platforms.

Click Here to Read the Full Paper
Confronting Information Security’s Elephant, The Unintentional Insider Threat

Matthew Canham, Clay Posey, and Patricia S. Bockelman

It is well recognized that individuals within organizations represent a significant threat to information security, as they are both common targets of external attackers and can be sources of malicious behavior themselves. Notwithstanding these facts, one additional aspect of human influence in the security domain is largely overlooked: the role of unintentional human error. This lack of emphasis is surprising given relatively recent reports that highlight error’s central role as the root cause of numerous security breaches. Unfortunately, efforts that recognize human error’s influence suffer from not employing a commonly accepted error framework and lexicon. We thus take this opportunity to review what the data show regarding error-based breaches across various types of organizations and to create a nomenclature and taxonomy, rooted in the rich history of safety research, that can be applied to the information security domain. Our work represents a significant step toward classifying, monitoring, and comparing the myriad aspects of human error in information security, in the hope that more effective security education, training, and awareness (SETA) programs can be devised. Further, we believe our efforts underscore the importance of revisiting the daily demands placed on organizational insiders in the workplace.

Click Here to Read the Full Paper
The Enduring Mystery of the Repeat Clickers

Matthew Canham et al.

Individuals within an organization who repeatedly fall victim to phishing emails, referred to as Repeat Clickers, present a significant security risk to the organizations within which they operate. The causal factors for Repeat Clicking are poorly understood. This paper argues that this behavior afflicts a persistent minority of users and is explained either as a main effect of individual traits (personality or otherwise) or as a moderated interaction between traits and other factors such as cultural influences, situational factors, or social engineering techniques. Because Repeat Clickers represent a disproportionate risk, identifying causal factors and developing mitigations for this behavior should provide a substantial return on investment for improving an organization’s security. Developing such mitigations will require a better understanding of the individual differences contributing to repeat clicking behavior. We present pilot data and suggest research questions to improve understanding of the factors contributing to repeated victimization by phishing emails.

Click Here to Read the Full Paper
Neurosecurity: Human Brain Electro-Optical Signals as MASINT

Matthew Canham, Ben D Sawyer

Applied neuroscience presently allows not only the scientific discovery-oriented probing of the inner workings of the mind, but increasingly the probing of individual minds toward gathering intelligence. Significant advances in neuroimaging, leveraging both active and passive electro-optical energy, can reveal specifics of information held in the mind even without cooperation (e.g., Lange et al., 2018; Sawyer et al., 2016a). The processes of the brain increasingly join many other energetic sources from which quantitative and qualitative data analysis may extract identifying features and other useful intelligence (Sawyer & Canham, 2019). Indeed, it is increasingly appropriate to discuss the human brain as a system which can be read from, written to, and the operations of which may therefore be collected for analysis or influenced (Sawyer & Canham, 2019). We argue here that we are witnessing the end of the era…

Click Here to Read the Full Paper
Developing Training Research to Improve Cyber Defense of Industrial Control Systems

Matthew Canham, Stephen M Fiore, Bruce D Caulkins

Cyber-attacks are a common aspect of modern life. While cyber-based attacks can expose private information or shut down online services, some of the most potentially dangerous attacks alter the sensor and control data used by Industrial Control Systems, with the intended purpose of causing severe damage to the technical processes that these systems control. The damage caused by the Stuxnet worm is one of the most infamous examples of this type of attack. Because only the most advanced adversaries are able to mount successful attacks against these systems, detecting these attacks is extremely challenging. Automated detection systems have not yet evolved to the point of being capable of consistently and successfully detecting these attacks, and for this reason, human operators will need to be involved in protecting Industrial Control Systems for the foreseeable future. We propose several potential training-based solutions to aid the defense of these systems.

Click Here to Read the Full Paper
A Computational Social Science Approach to Examine the Duality between Productivity and Cybersecurity Policy Compliance within Organizations

Clay Posey and Matthew Canham

Organizational employees often face conflicting responsibilities in their daily tasks. On one hand, employees must be productive members of their organization; on the other, they must perform their tasks while conforming to cybersecurity policies, which reduces their performance rates. Such compliance can also lead to increases in stress, which might already be relatively high given the workload placed on employees. In addition to this dichotomy, organizations vary significantly in the amount of emphasis placed on their productivity and cybersecurity goals. Employees use this and other information when deciding whether to follow cybersecurity policies for a given task. And while some of these determinations are based on rational cost-versus-benefit analyses, many are born out of habituation. Despite the importance of understanding individual-level decision making regarding performance (both productivity and compliance), little research has examined how such micro-level actions aggregate to macro-level phenomena within organizations. Given this opportunity, we explore how varying workload, productivity and compliance emphases (i.e., culture), and the degree to which compliance decreases productivity (i.e., friction) for a given task affect a simulated organization’s employees’ stress levels. Moreover, we investigate how these factors (including rationality versus habituation, and morality) combine to form emergent noncompliance patterns at the organizational level.
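A minimal sketch of such an agent-based simulation appears below. The parameters, utility rule, habituation probability, and stress dynamics are illustrative assumptions of ours, not the paper's actual model; the point is only to show how individual compliance decisions can aggregate into organization-level noncompliance patterns:

```python
import random

random.seed(1)

def simulate(n_agents=500, steps=200, workload=1.0,
             compliance_emphasis=0.5, friction=0.3):
    """Toy model: each step, every agent decides whether to comply with a
    security policy for one task. Compliance costs productivity (friction)
    and raises stress under workload; behavior is partly habitual."""
    stress = [0.0] * n_agents
    complied_last = [True] * n_agents
    noncompliance_rate = []
    for _ in range(steps):
        noncomply = 0
        for i in range(n_agents):
            # Rational term: comply when the organization's compliance
            # emphasis outweighs friction-weighted workload and stress.
            rational = compliance_emphasis - friction * workload - 0.2 * stress[i] > 0
            # Habituation: with probability 0.7, repeat the last behavior.
            comply = complied_last[i] if random.random() < 0.7 else rational
            complied_last[i] = comply
            # Complying under load adds stress; stress otherwise decays.
            stress[i] = (min(1.0, stress[i] + 0.05 * workload) if comply
                         else max(0.0, stress[i] - 0.02))
            noncomply += not comply
        noncompliance_rate.append(noncomply / n_agents)
    return noncompliance_rate

# Higher friction yields emergent noncompliance at the organizational level:
for f in (0.1, 0.5):
    print(f"friction={f}: final noncompliance rate={simulate(friction=f)[-1]:.2f}")
```

In this toy setup, low friction keeps compliance rational and habitual, while high friction tips individual decisions toward noncompliance that habituation then locks in across the simulated workforce.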

Click Here to Read the Full Paper