In November 2022, AWS held its annual re:Invent conference, and, as usual, LaunchDarkly was there (Alex Hardman provided a quick recap of our involvement here). One session in particular featured Andrew Krug, Head of Security Advocacy at Datadog.
With some help from the aforementioned Alex, Andrew led a session on observability, security, and feature management. The session covered how threats have evolved, new detection methods, and how feature flags can play a crucial role in both detecting and responding to attacks in highly innovative (and sometimes entertaining) ways.
We encourage you to watch the event in its entirety when you have the time, but for now, here are some of the main highlights from Andrew and Alex’s session. Did you know LaunchDarkly and Datadog have some excellent integrations referenced in the talk? Check them out.
New threats, new attacks
Andrew kicked things off by reminiscing about past security hot topics, such as security automation in 2016 and forensics automation in 2018, before landing on 2019, when the industry began to wonder how to, in his words, “stitch this all together as code.”
“Classical game theory really can't be applied to defending our systems,” said Andrew. “We really have to think of this as security—continuous game theory—and we need to think about how a system can reorganize around threats in real-time.”
His solution? An unconventional one: use feature flags to speed up your incident response, engage your adversary, and disincentivize them from furthering their attack.
Speaking the same language
After stepping in to give the audience a brief introduction to LaunchDarkly, Alex pivoted to what goes into creating a robust security culture in an organization: making sure your development teams and your security teams speak the same language.
“If you're in an environment today where your security teams are separate from your development teams, or they're using different tools, they don't share a common view of the world," Alex said. "Our development teams are already using Datadog to observe and understand the health of our systems. So, it's a natural extension that the security team would also use that same system and use the same semantics around monitors and alerts. At LaunchDarkly, we have this robust engineering culture that helps us ship and evolve quickly, but also we have our eyes on security.
"Our security team is made up of software engineers who have a deep understanding of what we're working on and where we're headed. So, when they identify new risks or new vulnerabilities, they're aware not only of the concern but also of what drove them there, and they work closely with our engineering teams to resolve them.”
Attack detection and response (with an app that doesn’t exist)
Alex and Andrew saw fit to have an actual product to use as a live demo, so the two of them collaborated before the conference to build an app called Travel Dog. This cute little app lets dogs showcase their favorite places to travel while simultaneously serving as an index of the most dog-friendly places on earth. Brilliant, right? Anyway, the app was built on a standard architecture running on EC2 Graviton (Arm) instances.
So, in this scenario, feature requests have started rolling in for the app. Long story short, someone made a mistake: a feature that was supposed to stay in development ended up in the production environment, and now it's leaking personally identifiable information (PII) about Travel Dog’s users.
Oh no. Now what? According to Andrew, the answer is to do some lightweight threat modeling.
“A lot of people don't think that threat modeling is practical at scale because threat models take a long time," Andrew said, "but building a lightweight threat modeling process into your process as part of a technical design review, like the Mozilla Rapid Risk Framework, will give you really, really good insight into what the potential risks are as part of a feature rollout. I can't stress this enough. This is a great framework that you can just pull into your TDD process.”
Alex then explained how LaunchDarkly’s feature flagging platform can step in and disable the problematic feature in this scenario:
“Let's pretend that we actually did an RRA (rapid risk assessment) at Travel Dog," Alex said. "The business wanted to keep the feature in the app even though it was flagged as a risk. So, the compromise is that the DevSecOps folks write a detection rule to detect that behavior. We can actually use that detection to ask a platform like LaunchDarkly—using a webhook—to go and disable that feature in production, should it ever make it there… So, then instead of getting data leaks, the attacker ceased, because that feature's actually flagged off."
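To make that concrete, here's a rough sketch of what the receiving end of such a webhook might look like: a small handler that, when the PII-leak detection rule fires, calls LaunchDarkly's REST API to turn the flag off in production. The project key, flag key, and alert fields below are made-up placeholders, and you'll want to check the REST API docs for the exact request shape before wiring anything up.

```python
# Rough sketch: a detection webhook handler that flips a flag off in
# production via LaunchDarkly's REST API. Project key, flag key, and the
# alert payload shape are hypothetical placeholders.
import os
import requests

LD_API_TOKEN = os.environ["LD_API_TOKEN"]       # LaunchDarkly API access token
PROJECT_KEY = "travel-dog"                      # hypothetical project key
FLAG_KEY = "show-profile-details"               # hypothetical flag behind the leaky feature


def handle_detection(alert: dict) -> None:
    """Called by the detection pipeline when the PII-leak rule fires."""
    if alert.get("rule") != "pii-leak-detected":  # hypothetical rule name
        return
    resp = requests.patch(
        f"https://app.launchdarkly.com/api/v2/flags/{PROJECT_KEY}/{FLAG_KEY}",
        headers={
            "Authorization": LD_API_TOKEN,
            # Semantic patch lets us say "turn this flag off in production"
            "Content-Type": "application/json; domain-model=launchdarkly.semanticpatch",
        },
        json={
            "environmentKey": "production",
            "instructions": [{"kind": "turnFlagOff"}],
        },
        timeout=10,
    )
    resp.raise_for_status()
```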
Andrew then outlined another security response scenario, which he prefaced by reminding viewers that the ideal threat response is a system that is flexible, adaptable, and transformable, rather than one that simply turns an entire feature off or shuts a system down for 100% of users. In other words, the system should be able to reorganize around a threat.
After mentioning some Datadog tools, Andrew described how LaunchDarkly comes into play and demonstrated how to track and correlate user sessions and then target specific behavior for specific users.
He then showed how Datadog's integration with AWS EventBridge can emit any event from any monitor in the platform, with its full JSON payload, to EventBridge. From there, the event can be routed to a custom Lambda function, which can then enable or disable features for a single specific user inside LaunchDarkly. Pretty great stuff.
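Here's a hedged sketch of that Lambda hop, assuming the Datadog monitor event arrives via EventBridge with a user key somewhere in its detail payload. The event fields, flag key, and variation ID below are placeholders for illustration; the real payload shape and the exact semantic patch instructions are documented by Datadog and LaunchDarkly respectively.

```python
# Sketch of the EventBridge -> Lambda -> LaunchDarkly hop. The event shape,
# flag key, and variation ID are placeholders; see the Datadog and
# LaunchDarkly docs for the real payload and API details.
import json
import os
import urllib.request

LD_API_TOKEN = os.environ["LD_API_TOKEN"]
PROJECT_KEY = "travel-dog"                               # hypothetical
FLAG_KEY = "serve-decoy-data"                            # hypothetical
LOCKDOWN_VARIATION_ID = os.environ["LD_VARIATION_ID"]    # variation to pin the user to


def handler(event, context):
    # EventBridge wraps the Datadog monitor payload in "detail"; the
    # "user_key" field here is an assumed custom attribute on the monitor.
    user_key = event.get("detail", {}).get("user_key")
    if not user_key:
        return {"status": "ignored"}

    body = json.dumps({
        "environmentKey": "production",
        "instructions": [{
            "kind": "addUserTargets",        # individually target just this user
            "variationId": LOCKDOWN_VARIATION_ID,
            "values": [user_key],
        }],
    }).encode()

    req = urllib.request.Request(
        f"https://app.launchdarkly.com/api/v2/flags/{PROJECT_KEY}/{FLAG_KEY}",
        data=body,
        method="PATCH",
        headers={
            "Authorization": LD_API_TOKEN,
            "Content-Type": "application/json; domain-model=launchdarkly.semanticpatch",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return {"status": resp.status}
```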
Adversary engagement (i.e., messing with the attacker)
And now for the most fun part: messing with the attacker. No, this isn't just for the heck of it; there are real benefits, including learning how the attacker operates and even getting them to reveal their attack methods. Perhaps most importantly, it wastes their time, which is a genuinely useful thing.
“[As a developer], you are busy," Andrew said. "You want to roll out features. You don't have a lot of time to think about this stuff. Somebody that really wants to take your business down has effectively infinite time to think about the way that they want to mess with you and attack your system. So, if they have time to study your app, time to craft their attacks, we just want to make that as economically expensive as possible and learn as much about them as possible. That is the basis of adversary engagement.”
Andrew then demonstrated how Datadog can identify an attacker and trigger a response via LaunchDarkly. Ultimately, this leads to a trigger that feeds the attacker fake Social Security numbers that they’ll have to spend time checking for validity. (And they’re not valid, sucker.)
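The application side of that idea could look something like the sketch below, assuming a recent LaunchDarkly server-side Python SDK: a hypothetical "serve-decoy-data" flag decides, per request context, whether to return real records or fabricated ones.

```python
# Sketch of the application side: if the hypothetical "serve-decoy-data"
# flag is on for this request's context, return fabricated records instead
# of real ones. Assumes a recent LaunchDarkly server-side Python SDK.
import random

import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))
client = ldclient.get()


def fake_records(n=25):
    """Plausible-looking but never-issued SSNs (000 area numbers are invalid)."""
    return [
        {"name": f"user-{i}", "ssn": f"000-{random.randint(10, 99)}-{random.randint(1000, 9999)}"}
        for i in range(n)
    ]


def real_records(user_key):
    """Stand-in for the app's normal data path."""
    return [{"name": user_key, "ssn": "redacted"}]


def get_records(user_key, ip):
    context = Context.builder(user_key).set("ip", ip).build()
    if client.variation("serve-decoy-data", context, False):
        return fake_records()
    return real_records(user_key)
```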
At this point, Alex chimed in to offer some clarity on the process and the ability to identify attackers by username:
“In the LaunchDarkly platform, you can target by anything you know. If you have an unidentified user, you could target by IP address or range or a variety of other criteria. So, if you can't pinpoint by identity alone, there are a number of different ways to do so…
"One of my favorite operational use cases with LaunchDarkly is to set log level through a feature flag. Since I can have such granular targeting, I could target an instance and capture more data or capture logs for a particular user.”
Alex later touched on how conditional statements can make code harder to understand.
“One thing you can do by using feature flags in your code is make it more configurable so you can change it at runtime, thus making it more resilient," he said. "You can reduce the conditional statements when you use multivariate flags to configure the application.”
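In practice, that might look something like the sketch below: a single multivariate flag (the key "records-response-mode" is made up here) selects a behavior by name instead of stacking up if/elif branches.

```python
# Sketch of the multivariate idea: one flag (hypothetical key
# "records-response-mode") picks a behavior by name, replacing a ladder of
# if/elif conditionals in the request handler.
import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))
client = ldclient.get()

# Stand-in behaviors keyed by flag variation value.
HANDLERS = {
    "normal": lambda user_key: f"real records for {user_key}",
    "decoy": lambda user_key: "fabricated records",
    "blocked": lambda user_key: "access denied",
}


def get_records(user_key):
    context = Context.builder(user_key).build()
    mode = client.variation("records-response-mode", context, "normal")
    return HANDLERS.get(mode, HANDLERS["normal"])(user_key)
```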
Andrew went on to describe the practice of tarpitting, which makes the attacker think they're affecting the app in a significant way when they really aren't doing much of anything except experiencing what is now a very slow app. This involves correlating the user session; the session cookie then becomes the ultimate decision-maker in terms of the user experience (i.e., very annoyingly slow, but only for the attacker).
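A minimal sketch of that tarpit, assuming the LaunchDarkly Python SDK and a hypothetical numeric "tarpit-delay-ms" flag keyed off the session cookie, might look like this:

```python
# Sketch of tarpitting with a flag: use the session cookie as the context
# key and read a hypothetical numeric "tarpit-delay-ms" flag that serves 0
# to everyone except targeted sessions, then sleep before responding.
import time

import ldclient
from ldclient import Context
from ldclient.config import Config

ldclient.set_config(Config("YOUR_SDK_KEY"))
client = ldclient.get()


def maybe_tarpit(session_cookie):
    """Call at the top of a request handler; a no-op for normal sessions."""
    context = Context.builder(session_cookie).kind("session").build()
    delay_ms = client.variation("tarpit-delay-ms", context, 0)
    if delay_ms:
        time.sleep(delay_ms / 1000.0)
```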
If you can’t hack them… join them?
One incredibly interesting approach Andrew mentioned at the end of his portion on adversary engagement was using flags to trigger an invitation for the attacker to quite literally join your team. Another option is a bug bounty program that rewards attackers for exploiting weaknesses. While these routes may not be ideal for most, they're certainly worth thinking about.
This, of course, was just a summary of some of the key points of Andrew and Alex’s presentation. We left out most of the technical details and code, which you’ll definitely want to see in action, so be sure to watch the full talk when you have the time; it's well worth it.
In the meantime, check out a walkthrough of our Datadog integration below.