
Publication | Legaltech News

Nervous System: The Sleepy History of the Buffer Overflow Attack

David Kalat

October 7, 2020

David Kalat writes about information security, zero-day vulnerabilities, and a race between security professionals and hackers that has continued to this day.


Information security professionals use the term “zero-day vulnerability” to refer to a weakness or hole in a computer system’s software that is unknown to the security community but could be used to launch an attack. The phrasing refers to the fact that “zero days” have elapsed since the weakness became known. If a hacker exploits that vulnerability, it would be called a “zero-day exploit.” Among cyber-criminals and other hackers, zero-day vulnerabilities are valuable commodities, because they represent opportunities that victim organizations are likely unable to guard against, at least until the weakness becomes known and the zero-day aspect is removed.

Curiously, one of the most commonly used lines of attack on computer systems was essentially never a zero-day vulnerability. In fact, some sixteen years elapsed between the discovery of the vulnerability and the first known attack exploiting it, and even then the security community was disastrously slow to patch it. Astoundingly, more than forty-eight years after the problem was first diagnosed, the so-called “buffer overflow attack” remains a risk in many systems!

The problem stems from a basic fact about how computers manage information in memory. A “stack” is a contiguous chunk of live memory that holds units of information the central processor needs in order to perform operations. Data can be pushed onto or popped off the stack as the processor does its work. Within the stack, areas called “buffers” or “arrays” act essentially as fixed-size slots into which data can be written dynamically at run time, as opposed to data fixed in advance by the program.
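
To make the idea concrete, here is a minimal C sketch (an illustrative example, not drawn from the article) of such a slot: a sixteen-byte buffer on the stack into which input is written at run time.

    #include <stdio.h>

    int main(void) {
        /* A fixed-size "slot" on the stack: room for fifteen
           characters plus a terminating null byte. */
        char name[16];

        /* Input is written into the buffer at run time; the
           width limit in "%15s" keeps the read inside the slot. */
        if (scanf("%15s", name) == 1) {
            printf("Hello, %s\n", name);
        }
        return 0;
    }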

A secure operating system would have protocols to examine a data input first, verifying that it is the right size to fit into the available buffer before writing it into the stack. Unfortunately, many operating systems and programming languages were developed at a time when security was not a primary concern, and this basic self-protective step was missing from the most popular and widely deployed systems. These systems had no mechanism to block or truncate oversized data; if an input was too large for its buffer, the excess would overflow the boundary and overwrite another part of the stack.
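
The missing check can be illustrated with a short C sketch (the function names here are hypothetical): the first version copies input with no regard for the buffer’s capacity, while the second verifies the size first, the self-protective step described above.

    #include <string.h>

    /* Vulnerable: strcpy() keeps copying until it reaches a null
       byte, with no check against the sixteen-byte capacity of
       the buffer. Input of sixteen characters or more overflows
       the slot and overwrites adjacent stack memory. */
    void copy_unchecked(const char *input) {
        char buffer[16];
        strcpy(buffer, input);
        /* ... use buffer ... */
    }

    /* Guarded: verify that the input fits before writing it. */
    void copy_checked(const char *input) {
        char buffer[16];
        if (strlen(input) < sizeof buffer) {
            strcpy(buffer, input);
            /* ... use buffer ... */
        }
        /* else: reject or truncate the oversized input */
    }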

A clever attacker could exploit this flaw by crafting an oversized input so that executable code spills over into an area of memory where the system expects to find something else, such as the saved address telling the processor which instructions to run next, thereby tricking the host system into running alien instructions.
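
The spill-over itself can be demonstrated safely in C. In the hypothetical layout below (the struct and field names are invented for illustration), an eight-byte buffer sits directly next to a value the processor trusts, standing in for the saved return address on a real stack; copying sixteen bytes into the eight-byte slot silently replaces that trusted value with attacker-chosen bytes.

    #include <inttypes.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical layout: an eight-byte buffer placed directly
       before a value the processor trusts, much as a stack buffer
       sits near a saved return address. */
    struct frame {
        char buffer[8];
        uint64_t saved_control;
    };

    int main(void) {
        struct frame f = { .saved_control = 0 };

        /* Sixteen bytes aimed at an eight-byte slot: the second
           half spills past the buffer and lands in saved_control. */
        const unsigned char oversized[16] = {
            'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A',
            0xEF, 0xBE, 0xAD, 0xDE, 0, 0, 0, 0
        };
        memcpy(&f, oversized, sizeof oversized);

        /* On a little-endian machine this prints 0xdeadbeef:
           the trusted value is now attacker-controlled. */
        printf("saved_control is now 0x%" PRIx64 "\n", f.saved_control);
        return 0;
    }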

This threat was first outlined in print in a US Air Force planning study on computer security published in October 1972. Nonetheless, the flaws remained embedded in computer systems across the country, unaddressed by patches or updates.

This indifference was driven in part by the fact that the threat, however real, seemed remote. Most computers in use in 1972 were already largely insecure, protected by little more than flimsy passwords. An attacker had easier ways to break in than through an arcane buffer overflow.

The situation was different by 1988. The home computing revolution and the rise of the internet had vastly expanded the number of systems, the complexity of data stored on them, and the opportunities for attack. This was the year of the first major viral attack on the internet. A self-replicating program, called a “worm,” copied itself across thousands of computers throughout the country in a span of just hours, rendering the infected hosts useless. The attack exploited flaws in a particular type of UNIX operating system, including the buffer overflow vulnerability.

The worm was not a targeted attack, but rather a graduate student experiment that spiraled out of control. Nevertheless, it hobbled ten percent of the internet, taking major research institutions and military systems offline for days. In the wake of that incident, the computer science community was forced to rethink the importance of cybersecurity. Aside from patches applied to the infected UNIX systems, however, the fundamental risk posed by insecure buffers remained unaddressed for years.

For a buffer overflow attack to work, an attacker would need detailed knowledge of the inner workings of a given system in order to craft excess data that overflows into exactly the right place. To make the attack worthwhile, enough similarly configured systems would need to be networked together that an attack on one could cascade into others.

This state of affairs provided the illusion of safety for a while after the 1988 worm, but circumstances were changing. The hacker community actively traded highly granular information about system design; meanwhile, enterprise computing consolidated around a handful of popular platforms.

In November 1996, a then-anonymous hacker writing under the pseudonym “Aleph One” published an article called “Smashing The Stack For Fun and Profit.” (The author has since been identified as Elias Levy, co-founder of the company SecurityFocus.) The 1972 Air Force study had offered a warning of a potential threat, but this article was nothing short of a cookbook, with step-by-step recipes for executing buffer overflow attacks.

This landmark article marked the onset of an arms race between security professionals and hackers that has continued to this day, as new exploits are discovered and patched, discovered and patched, ad infinitum. Newer operating systems and programming languages have been developed with various protections against such attacks, including bounds checking, stack canaries, and non-executable memory, but numerous legacy systems and software that evolved in a less-security-minded era remain in place.

Forty-eight years on, and counting, a staggering number of systems remain vulnerable to, and victimized by, attacks on the internet’s most notorious 5,800-day exploit.

 

The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions, position, or policy of Berkeley Research Group, LLC or its other employees and affiliates.

