Dec 8, 2021
Cognitive Biases and Penetration Testing
Jeremy “Harbinger” Miller shares with us his thoughts on cognitive biases and how they relate to penetration testing.
9 min read
by Jeremy “Harbinger” Miller
This post first appeared on November 30, 2021 and is republished with permission from the author.
Disclaimer: The ideas below are my own and may not reflect those of OffSec.
Our minds are adapted to maximize human gene replication in an environment extremely different from the one anyone reading this blog post lives in today. Through many thousands of years of evolution, our brains developed heuristics to help us make decisions that would best serve that evolutionary purpose.
Heuristics are not always bad. For example, they let us make snap decisions in stressful situations without spending too much time weighing every conceivable option. However, heuristics are often maladapted to our modern circumstances. They can lead to cognitive biases that impair our reasoning and reliably produce incorrect results.
Hacking involves thinking, so as security professionals we have an interest in improving the way that our minds work. In this blog post, I will discuss two cognitive biases I have experienced in myself and observed in students: the sunk cost fallacy and confirmation bias.
A Heap of Salt
The purpose of this article is not to help readers self-correct these biases, because that often isn't realistic. Daniel Kahneman, one of the two fathers of the study of cognitive biases and heuristics, claims that he hasn't gotten measurably better at correcting his own biases despite decades of studying the subject. In fact, he wrote his popular book partly as a guide for spotting biases in others, because noticing them in oneself is hard.
Instead, I hope this post will help readers understand a few ways their minds can, in principle, get stuck during a pentest, even if it won't necessarily help them get unstuck in the moment. While we may not be very good at noticing our own biases, we can sometimes set up systems in advance that help offset their effects. In addition, if it's easier to notice bias in others, then I hope this post will help readers assist fellow students, friends, and community members.
Sunk Cost Fallacy
What is it? The Sunk Cost Fallacy represents the systematic tendency to continue investing resources into an outcome even in the face of evidence that suggests said outcome is unlikely or not worth the investment. The Sunk Cost Fallacy often applies to financial investments, but can also apply to investments of time, emotion, or energy.
Why does it happen? We allow our past decisions about resource allocation to emotionally hijack our present decisions, even when there is no reason to continue investing. As noted in the above article, we might feel a sense of guilt or loss if we “give up” on an investment rather than try to see it through. Sometimes this stubbornness pays off, but in pentesting it often results in frustration and an increased fear of failure.
How can it trap pentesters? The Sunk Cost Fallacy is so prevalent in information security and in pentesting that we even have our own informal term for it: the dreaded Rabbit Hole. As far as I know, Lewis Carroll's evocative phrase was first applied as an InfoSec analogy in The Matrix.
Pentesting students often use the term to describe the frustrating experience of attempting to attack a target that simply isn't vulnerable in the way the attacker believes it is. Rabbit holes can occur at many levels of abstraction: we could be attacking the wrong machine, targeting the wrong service, exploiting the wrong vulnerability, or using the wrong exploit. Due to the Sunk Cost Fallacy, it's often emotionally easier to continue down a rabbit hole than to simply move on to a different attack vector, even if that causes us more pain and suffering than the alternative.
We can consider our relationship with a given attack vector as a pendulum between two potential failure modes. In the first case, we abandon a truly vulnerable path too early: the thing we are attacking is actually vulnerable to our attack, but we move on out of fear that we're wasting our time. In the second case, we continue to invest effort into making our attack work against a vector that is not actually vulnerable. This latter failure mode is where the Sunk Cost Fallacy comes in, and it is (I claim) the harder one for pentesting students to avoid.
What can we do about it? The following method works for me on many levels; we can apply it to machines on a network, to services on a machine, or to directories on a web application.
Step 1: Determine how many paths we can investigate on our target. By “target” here, I mean anything we are attacking, be it a network, a machine, or an application. By “path”, I mean the ways in which we might organize ourselves around the target. For example, open ports on a machine could each represent a different path.
Step 2: Set a timer for a length of time we can work uninterrupted. The preferred duration varies by individual; I like to set a timer of around 75 minutes.
Step 3: During the length of the timer, choose one of the paths to work on, and ignore the others.
Step 4: When the timer goes off, finish up whatever task we're doing. Take a 5-minute break and get up from the computer. Walk around, grab a snack, or get a drink. It's important to let our minds reset here.
Step 5: Move on to another path. Keep in mind any information we’ve previously learned, but make sure that our attention is on the new path.
Step 6: Repeat until we have exhausted all paths. Then restart the cycle.
By following these steps, we can avoid both failure modes: We ensure that we’ll return to every path over time (assuming we found all potential paths to begin with), and we ensure that we won’t get stuck in a particular rabbit hole. Most importantly, we condition ourselves to be OK with moving on to new paths before we’ve exhausted the current one we’re working on. This conditioning can help us avoid feelings of frustration and failure.
Advanced mode: Once we’re familiar with this general method, we can start attaching weight to different paths based on their vulnerability likelihood. For example, we might place more weight on a web server than an SSH server, and therefore spend 90 minutes on web and 60 minutes on SSH. I don’t necessarily recommend trying this until one is very comfortable with the mental motion of giving up on the current path and moving on to the next one.
Confirmation Bias
What is it? Confirmation Bias represents the systematic tendency to accept evidence that supports our current beliefs rather than evidence that refutes them.
Why does it happen? According to The Decision Lab, “Confirmation bias is a cognitive shortcut we use when gathering and interpreting information”. Since generating new hypotheses that explain events is cognitively expensive, it's often easier to rely on hypotheses we already hold rather than spend time generating new ones. While this might be a good survival instinct, it hardly helps us understand and attack a computer or network efficiently.
How can it trap pentesters? Many pentesters would likely argue that the most important part of an engagement is the enumeration phase, i.e. the gathering and interpreting of information. If that’s the case, then it’s very important for us to make sure we’re gathering and interpreting information properly!
In my experience, students learning to pentest often put on their hacker hat too early. They gather some information about a machine and, based on what they find, conclude that it must be vulnerable to XYZ. Because they are already wearing their hacker hat, they double down on the alleged XYZ vulnerability despite later evidence that refutes the initial assumption.
There is a machine in the Penetration Testing with Kali Linux (PEN-200) labs called Beta. As noted in OffSec's PEN-200 Learning Paths article, Beta contains an unusual application running on an uncommon port. Because the service is uncommon, students often (correctly) believe that it is vulnerable, but (incorrectly) assume where on the machine the vulnerability lies. This leads directly to confirmation bias: once they attempt to exploit the service in the wrong way, the tendency is to keep modifying the exploit (which won't work) instead of accepting the evidence that the initial assumption was wrong.
PWK students can read OffSec’s Complete Guide to Beta, which fully demonstrates OffSec’s methodology used to attack this target.
What can we do about it? One way to help offset (though not eliminate) confirmation bias is by applying something like the scientific method. Try to consciously generate, test, and falsify hypotheses about the target.
Step 1: Gather information about the machine, and write down notes and evidence.
Step 2: Try to pin down multiple hypotheses about the machine. Bonus points if the hypotheses are mutually incompatible, so that falsifying one gives further evidence for the other(s).
Step 3: Come up with tests that would provide further evidence, ideally tests that could falsify one hypothesis while leaving the others standing. Try to predict the results of our tests in advance to practice noticing our beliefs about the machine.
Step 4: Execute our tests and determine what conclusions we can draw.
Step 5: Use the test results to gather more information, generate new hypotheses, or create more tests.
By following this procedure, we will methodically gather more information about the machine, hopefully without getting too attached to specific hypotheses. Since hypothesis generation is explicitly part of the procedure, we’ll be less likely to fall into the trap of preferring the ones we’ve already generated.
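One lightweight way to keep ourselves honest is to record hypotheses, predictions, and results explicitly. The sketch below shows one possible way to do that in Python; the hypotheses and tests are made-up placeholders, not findings about any particular lab machine.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str              # what we currently believe about the target
    test: str                   # a check that could falsify the belief
    predicted: str              # the result we expect *before* running the test
    observed: str = ""          # what actually happened
    verdict: str = "untested"   # "supported", "falsified", or "untested"

def record(h: Hypothesis, observed: str, supported: bool) -> None:
    """Fill in the observed result and mark the hypothesis accordingly."""
    h.observed = observed
    h.verdict = "supported" if supported else "falsified"

# Placeholder entries for an imaginary target.
log = [
    Hypothesis(
        statement="The login form is vulnerable to SQL injection",
        test="Submit a single quote in the username field",
        predicted="A database error or a noticeably different response",
    ),
    Hypothesis(
        statement="The service banner reports a fake version",
        test="Compare the banner against response headers and observed behavior",
        predicted="A mismatch between the banner and the behavior",
    ),
]
```

Writing the prediction down before running the test is the important part: if the observed result contradicts it, we have concrete evidence that our mental model of the machine needs updating, rather than a vague feeling we can rationalize away.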
Wrapping up
While it may be difficult to notice biases like the Sunk Cost Fallacy and Confirmation Bias in ourselves, we can set up systems to help mitigate them in advance. We can make ourselves more mentally resilient to these biases by using a time-based system to focus our attention, and by consciously monitoring our beliefs via a scientific methodology.
About the Author
Jeremy “Harbinger” Miller is an Information Security professional interested in how security skills are taught, learned, and applied by individuals and organizations. At Offensive Security, Jeremy serves as Product Manager of Content Development.