How Hackers Tried to Add Dangerous Lye into a City’s Water Supply

A cybersecurity expert explains how safety systems stopped the attack

Local and federal authorities are investigating an attempt on Friday to poison the water supply of the Florida city of Oldsmar.

On February 5, an unknown cyberattacker tried to poison the water supply of Oldsmar, Fla. City officials say the targeted water-treatment facility had a software remote-access system that let staff control the plant’s computers from a distance. The hacker entered the system and set it to massively increase sodium hydroxide levels in the water. This chemical (better known as lye) was originally set at 100 parts per million, an innocuous amount that helps control the water’s pH levels. The attacker tried to boost that to 11,100 ppm, high enough to damage skin and cause hair loss if the water contacts the body—or, if it is ingested, to cause potentially deadly gastrointestinal symptoms. Fortunately, a staff member noticed the attack as it was happening and restored the correct settings before anything changed.
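The margin here is stark: the attempted setpoint was 111 times the normal concentration. As a purely hypothetical illustration (this is not the Oldsmar plant's actual software, and the safe band below is an assumption), a control system can reject setpoint writes that fall outside a plausible operating range before they ever reach the dosing equipment:

```python
# Hypothetical illustration only -- not the Oldsmar plant's actual software.
# A simple bounds check that rejects setpoint changes outside a safe band.

NORMAL_PPM = 100.0       # typical sodium hydroxide level (from the article)
ATTACK_PPM = 11_100.0    # value the attacker tried to set
SAFE_MIN, SAFE_MAX = 50.0, 200.0  # assumed plausible operating band

def validate_setpoint(requested_ppm: float) -> float:
    """Return the requested setpoint if it is inside the safe band,
    otherwise raise so the change never reaches the dosing pump."""
    if not SAFE_MIN <= requested_ppm <= SAFE_MAX:
        raise ValueError(
            f"Rejected setpoint {requested_ppm} ppm "
            f"(allowed range: {SAFE_MIN}-{SAFE_MAX} ppm)"
        )
    return requested_ppm

print(validate_setpoint(NORMAL_PPM))  # the normal value passes
try:
    validate_setpoint(ATTACK_PPM)     # 111x the normal level is rejected
except ValueError as err:
    print(err)
```

A check like this is a last-resort guardrail, not a substitute for securing the remote-access path itself; a human operator noticing the change, as happened here, plays the same role.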

How much of a broader threat might attacks like this pose to public facilities, and what can be done to protect them? Scientific American asked Ben Buchanan, a professor specializing in cybersecurity and statecraft at Georgetown University’s School of Foreign Service.

[An edited transcript of the interview follows.]




What might make city infrastructure like a water treatment plant vulnerable to hackers?
Speaking generally, the challenge with a lot of these facilities is that they are older, or they just don’t have the security infrastructure we would want to guard against hackers. So if the systems are not as secure as we would like, but they are accessible over the internet, that is a recipe for trouble.

Who might have been responsible for the attack?
Oftentimes the thing about targeting an industrial control system is that, in order to have the effect you want as an attacker, you need to understand the system reasonably well. If you’re truly a foreign attacker, you want to do a lot of reconnaissance on the system. If you’re an insider, you already have that kind of knowledge. A lot of times the people who carry out cases like this—of which there are not that many—were disgruntled employees who already knew the system and how to manipulate it. [But in this case] it is too soon to say, “This is a disgruntled employee,” and it’s definitely too soon to say, “Oh, this is Iran or Russia and it’s a clear act of war.” The speculation doesn’t help right now.

Oldsmar is a small city with a population of 15,000. Does that make it less of a target, or is it actually more vulnerable compared to a plant in a larger, more populated area?
I don’t know that it generalizes one way or the other. If it’s an insider, then that explains why they’re targeting that facility—but we don’t know that. I think it probably stands to reason that foreign hackers who want to make a big splash would choose a bigger target. But on the other hand, we have a case from a couple of years ago in which Iranian hackers were indicted by the U.S. government for breaking into the computer networks of an old dam that no one was really using much anymore.

The hacker gained access through an existing software program that enables remote access to the plant’s computers. Should that type of program be prohibited for plants like this?
It’d be hard in the COVID moment to say, “Everything’s got to be managed on-site.” I don’t know how realistic that is. But I think balancing the security and usability of these systems is often hard. Getting the balance right often depends on more resources than a lot of these facilities have.

How should facilities like these be protecting themselves?
One thing that’s really important is to have redundancy in systems, especially around safety systems. There’s an important distinction here between security and safety. Cybersecurity is keeping the bad actors out of the computer networks, or limiting what they can do once they’re inside the computer networks. Technically, safety is making sure the industrial control systems’ components don’t do anything that puts people at risk, even if they’re given instructions to do that—by hackers or by somebody else. For example: Are there mechanisms in place to regularly test the processes and the outputs of an industrial control system, to test the particular qualities of the water? Are there people who are monitoring systems to make sure that things don’t move out of whack for unclear reasons?
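The safety-side monitoring Buchanan describes can be sketched in a minimal form: an independent check that compares measured water chemistry against expected bounds and raises alarms regardless of what the control system was told to do. The names and thresholds below are assumptions for illustration, not any real facility's code:

```python
# Sketch of an independent safety monitor -- assumed names and thresholds,
# not any real facility's code. It checks measured outputs, not setpoints,
# so it can flag harm even if a hacker has altered the control system.

EXPECTED_PH = (6.5, 8.5)          # assumed acceptable pH band
EXPECTED_LYE_PPM = (50.0, 200.0)  # assumed acceptable sodium hydroxide band

def check_sample(ph: float, lye_ppm: float) -> list[str]:
    """Return a list of alarm messages for any out-of-band reading."""
    alarms = []
    if not EXPECTED_PH[0] <= ph <= EXPECTED_PH[1]:
        alarms.append(f"pH out of range: {ph}")
    if not EXPECTED_LYE_PPM[0] <= lye_ppm <= EXPECTED_LYE_PPM[1]:
        alarms.append(f"sodium hydroxide out of range: {lye_ppm} ppm")
    return alarms

# A normal sample raises no alarms; a sample reflecting the attempted
# 11,100 ppm change trips the monitor.
print(check_sample(7.2, 100.0))
print(check_sample(12.0, 11_100.0))
```

Because this check sits outside the path the attacker controlled, it is a form of the redundancy the interview emphasizes: security tries to keep the intruder out, while a monitor like this limits the damage if security fails.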

In that respect, this is at least somewhat of a success story. Because although there was intent to attack here, and there was action to attack here, no one was harmed. I think you want to have those levels of redundancy any time you’re dealing with critical systems like those in industrial control. In my second book I wrote about (depending on how you count) four attacks on industrial control systems, all of which were far more significant than this. So, this doesn’t reach the level of attacks that were more successful.

What can we learn from this?
The case shows that there are bad actors out there; that the money and time we spend securing our industrial control systems, and making them safe, are often money and time well spent; and that we can actually manage to defend these systems in a way that, against at least some attacks, minimizes the amount of harm that’s done.

So, people shouldn’t be panicking about their water right now?
I think that there’s certainly no reason nationwide to panic. This is a reminder of the importance of the work of industrial control systems professionals, and the people who secure those systems. But there’s no reason to overhype what happened here and spin it as “the sky is falling” when, in fact, the sky is not falling.

Sophie Bushwick is tech editor at Scientific American. She runs the daily technology news coverage for the website, writes about everything from artificial intelligence to jumping robots for both digital and print publication, records YouTube and TikTok videos and hosts the podcast Tech, Quickly. Bushwick also makes frequent appearances on radio shows such as Science Friday and television networks, including CBS, MSNBC and National Geographic. She has more than a decade of experience as a science journalist based in New York City and previously worked at outlets such as Popular Science, Discover and Gizmodo. Follow Bushwick on X (formerly Twitter) @sophiebushwick

More by Sophie Bushwick