On October 1, Tiffany Jachja, Developer Evangelist at Harness, spoke at LaunchDarkly's Test in Production Twitch Stream. Her talk focused on why Progressive Delivery, a new software development methodology that builds upon the core tenets of Continuous Delivery, is so critical for modern development teams.
Tiffany discussed Progressive Delivery in the context of Harness' recent study, "Continuous Delivery Insights 2020," which identifies the time, effort, cost, and velocity associated with current Continuous Delivery processes.
Watch Tiffany's full talk below. Attend our next Test in Production event.
FULL TRANSCRIPT:
Heidi Waterhouse:
Hey folks, welcome to Test in Production. We're happy to have you here today and we get to talk about a really cool product that we think works really well with LaunchDarkly. But the point of Test in Production is that we are the community organization that is really devoted to helping people understand how to do progressive delivery and why it matters. So as we move along, I hope that you enjoy this and that you get something out of it. And without further ado, let's get going. So here is Tiffany, from harness.io. Tiffany, would you like to introduce yourself?
Tiffany Jachja:
Sure. Hi everyone. I'm so excited to be here today with Heidi. My name is Tiffany Jachja and I'm a technical evangelist at Harness and we're a company that helps people with their software delivery.
Heidi Waterhouse:
Awesome. So Test in Production is the show where we talk about how people get their code from "it works on my machine" into production, so that their users can see it. So tell me a little bit about Harness and where it fits in the landscape.
Tiffany Jachja:
Sure. So, in the past, we've focused a lot on continuous delivery, which is to help people get their software into the hands of customers or into a production environment, right? So since then we've expanded through different products to being a software delivery platform. And really our goal is just to empower developers to move fast without breaking things. It really fits into that story that you were saying, Heidi, that you really want to get away from, well, it just works on my machine and whatever about trying to get it working in production and everywhere else, right? To being able to do this sustainably, safely and actually quickly.
Heidi Waterhouse:
That's so great. And I think that it's really interesting to look at the impediments that people have to doing this. What makes it hard? And a lot of times it's culture, it's really hard to get over the fear that you're going to break something in production. One of the things that helps is knowing you have a good way to roll things out, and roll them back, and roll them forward, and have control over what's happening. And I feel like Harness adds a lot to your understanding of what you're actually deploying. So when we set this up, we were looking at a survey that Harness published and I'm going to switch to the screen where you can see it. And I'd love it if you could answer some questions for me.
Tiffany Jachja:
Sure. Yeah, that'd be awesome.
Heidi Waterhouse:
So tell me a little bit about how you ran the survey.
Tiffany Jachja:
Sure. So this was a survey conducted throughout 2019 and early 2020 by our product teams. And really when we were talking to different prospects and customers and even people who hadn't used Harness before, for their continuous delivery and their software delivery process, we tried to ask them questions about the amount of effort that it was taking to do software delivery in their organization. So whether it was a CI process, like building code, testing code, verifying that it works, working with release strategies, we just kind of wanted to get a good idea about the ecosystem and where people are at today in terms of performance, and scale, and capabilities. So that's what this report, it's called the Continuous Delivery Insights 2020 Report, that we did, that's the goal of what we were trying to do and kind of establish through that report. So we actually have a couple of different findings related to continuous delivery and we can go through some of those key findings as well.
Heidi Waterhouse:
That sounds great. One of the things that I noticed when I was reading this report is that you were very clear about what was the mean and what was the median of the numbers. And I think that's a really important statistical thing that we often just sort of slop it together and say, "Average." But it's really important to say most people did this, but our extremes were very extreme. So tell me a little bit about why you felt it was important to report on those extremes.
Tiffany Jachja:
Yeah. I just think that when you're talking about software delivery and trying to be a platform or a service to other organizations or other teams, right? Especially engineers, you kind of have to consider the top 1% of those people aren't necessarily always going to be your customers, right? People who've already solved for software delivery, maybe they've built their own platforms, maybe they've customized specific pipelines, maybe they just don't have a pain point. And people joining that organization, they'll never have to experience software delivery pains like everybody else does, but that's not the norm, right? That's probably the top 1% or the top, this magical circumstance that happened where you have all the engineering talent, you have the resources, you had the time to invest in a particular solution and you'd solve for software delivery.
But today that's not the case. And showing the worst cases shows what happens when you don't necessarily have the engineering resources? You maybe don't have the time to invest in a platform or to solve for software delivery, what happens then? Right? And I think the report gives good insights into what happens at both spectrums. What happens when you're able to build fast and get to production quickly? What happens when you're not? What is the worst case? And it kind of just shows... It's a very big ecosystem and it's maturing, and so that's why you have people at different points in the spectrum.
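The mean-versus-median point Heidi raises is easy to see with a small, made-up sample of deployment lead times, where one extreme team drags the mean far away from what the typical team experiences. The numbers below are invented for illustration, not taken from the Harness report:

```python
from statistics import mean, median

# Lead times (hours) for nine hypothetical teams: most ship within a
# day, but one outlier takes a month.
lead_times = [2, 3, 4, 4, 5, 6, 8, 12, 720]

print(mean(lead_times))    # pulled way up by the single outlier
print(median(lead_times))  # what the typical team actually sees
```

With one 30-day outlier among nine teams, the mean lands near 85 hours while the median stays at 5, which is why reporting both, and the extremes, gives a truer picture than "average."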
Heidi Waterhouse:
Yeah. We can't all be Google scale.
Tiffany Jachja:
Exactly.
Heidi Waterhouse:
So, another thing that I noticed was I saw a lot of Jenkins and hand-rolled scripts, and people's tool stack. Do you think it's easier or harder to transition to continuous delivery from sort of that interim stage we had between, I'm going to burn a CD and I just pushed the button when I commit, and it actually gets deployed.
Tiffany Jachja:
Yeah. I think it depends and sometimes it goes back to engineering culture, right? Because for some people they're like, we're using Jenkins for CI, all of the plugins work, I mean it builds artifacts, let's take our CI capabilities and extend them to CD, right? Let's use Jenkins pipelines, like Groovy scripting or anything else to extend to software delivery. And then people realize, oh crap, this is a lot to maintain, this is a lot to try to figure out how it works with other solutions that help us with our continuous delivery process, right? So how does it integrate with our Prometheus or our ELK stack? Right? We have so many different tools that are related to verifying a deployment that unless it comes natively in Jenkins it's tough.
Tiffany Jachja:
And even when it does come in the form of a plugin, you still have to spend a lot of time trying to figure out, well, how do I code this and how do I integrate with this one solution? And what happens is, you code for these specific use cases, but then when you try to generalize it across a team, it just doesn't work very well. And especially if you decide, oh, we're not using an ELK stack for this application, we're going to use Splunk or we're going to use some other type of solution. And so-
Heidi Waterhouse:
Standardize script, except for this one's the marketing one and this one's the-
Tiffany Jachja:
... Exactly. And so it's tough. And so I think sometimes you get a little bit of pushback because people are like, well, this is the way that you do it and everything's fine while we're maintaining this. But really you realize, oh, maybe we're spending a lot of engineering effort into one deployment. Because if it takes 10 people and two days' worth of work to do a deployment, I mean, maybe that's not as scalable as you think that it would be. But yeah, in general, I think, if you have scripted pipelines already, again all pipelines are, is just a sort of abstraction or some type of template, or I guess abstraction of the process that already exists, right? Because people have a certain process for getting into production because it reduces risks, it speeds up time, right?
And so you have all of these reasons why you do something, like maybe you have a change approval board to ensure compliance or something else. You have all of these things built into your process and your pipeline is just like a mirror of that process that exists. And when you can't mirror it correctly or you struggle to mirror it, that might be a sign like, oh, maybe the solution doesn't work. But yeah, I think it just really depends on where you are and if it works for your use case or not. For some organizations visibility is really important. Some organizations they don't want any deployments if they can't see exactly what was built, or what was the source code, or what was it attached to? And so if you don't have that visibility, then you're not getting into production and that can be a deterrent.
Heidi Waterhouse:
... You will not go to space today. Can you flip back to the page that has the Jenkins script statistic?
Tiffany Jachja:
Yeah. So we even captured a lot of the demographics of people who we surveyed and what industries they were part of and sort of the high level findings there. But yeah, we had a majority of folks use Jenkins as their build tool. And I think that's fairly common, people who are trying to solve for continuous integration have pretty standard tools, they have a toolset that works for them. And in fact we found that people reported very minimal challenges in terms of creating a build pipeline, it was fairly standard for them. I think that's pretty common for what we see; as a practitioner, maybe the worst that happens is you can't quite get your tests right. And that's more of a testing and quality perspective, but in terms of actually building the pipeline itself and having a process, people understood that well and they had a good idea about what solutions work for them. And then-
Heidi Waterhouse:
So that's kind of encouraging. It says, "If you already have the start of a pipeline, it's going to be much easier to build it." It's not impossible if you don't, but most people have some kind of automation and if you could string it together better, maybe it's going to work out better for you in the long run.
Tiffany Jachja:
... Yeah, I think so. It's really encouraging to say that if you have some type of pipeline or even if you don't, if you just have some type of a process to get to some kind of packaged artifact that's ready to deploy, you're not that worse off or you're not doing too bad.
Heidi Waterhouse:
Great. And you mentioned when we were talking about this earlier, AIOps, and is that like just the buzzwordiest thing I've ever heard? Explain to me what you're thinking with that. I mean, admit it, that sounds extremely futuristic-
Tiffany Jachja:
Yeah, it's crazy. I was just thinking that this month, because I've been seeing more around AI and AIOps and you're just like, "What is that? Why does it sound like an alien?" And to be honest, I think it's going to start becoming more of a use case, especially when people move to platforms and less off of the scripted pipelines sort of approach. And essentially AIOps is just a form of automation that allows you to use intelligence, to essentially make operations decisions for you. So in terms of verification, in terms of alerts, that kind of thing. So specific operations decisions are hard to scale, right? Say for example, you do a deployment and someone has to monitor that deployment to make sure that it works, at least for a few hours after that deployment or maybe a day or two, right? To make sure that everything is running smoothly.
And if it doesn't, then that's when you may want a rollback or to call in some people to figure out what exactly went wrong and what do we have to do from now. But that doesn't necessarily scale all the time, especially if you have a lot of applications or you have a lot of deployments to do in a particular timeframe, it can be really challenging to actually monitor dashboards and have one or two people responsible for those logs, or understanding, what the hell is going on. And so AIOps helps build those intelligent decision-making steps for you. So you can still use your metrics or your logging toolsets and it feeds into this AI system or this platform. And it'll give you those decisions for you.
So say for example, I have a couple of data points about a deployment, say for example some logs or something, you get some error logs, you get some warning logs, you get some like, this works, it's okay. Maybe you made a change and you get a lot of warnings, right? Well, with AIOps you can actually use machine learning to detect anomalies. And so from those data points, maybe you can see, oh, we have a lot of warnings, maybe we should flag this or fail this deployment until we can figure out why that is. And so it's just really building some of the operation decisions into your pipeline. And so, as futuristic as we make it sound, it's kind of like, yeah, we should do that in our pipeline anyway. It's kind of our form of reducing risk and building governance.
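A minimal sketch of the kind of decision Tiffany describes, assuming nothing about any particular AIOps product: compare the current deployment's warning-log count with a historical baseline and flag anomalies. The function name and the 3-sigma threshold are invented for illustration:

```python
from statistics import mean, stdev

def should_fail_deployment(history, current, threshold=3.0):
    """Flag the deployment if the current warning count is an anomaly
    relative to the historical baseline (a simple z-score test).
    The 3-sigma default is illustrative, not a product recommendation."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > threshold

# Warning-log counts from the last ten healthy deployments...
baseline = [4, 6, 5, 7, 4, 5, 6, 5, 4, 6]
# ...versus the deployment we just rolled out.
print(should_fail_deployment(baseline, 48))  # True: flag it
print(should_fail_deployment(baseline, 7))   # False: within normal range
```

A real system would learn the baseline continuously from metrics and log streams, but the decision it automates has this same shape: fail or flag the deployment instead of having a person watch dashboards for two days.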
Heidi Waterhouse:
Yeah. It's sort of like, I got a fancy new car and if I'm backing up and something crosses the rearview mirror, it will slam on the brakes for me. It's not that I can't brake, it's that I didn't notice something behind me and it's helping me out. AIOps seems extremely futuristic, but the future is already here, the car is already telling me when I need to stop.
Tiffany Jachja:
Exactly. And I think operations and people who have those responsibilities, they're going to play a key role in setting and configuring, and making sure that the AIOps is trained properly, and really setting like, oh, what are the decisions or what are the actions that we need to take if we have this type of use case, or this type of scenario? Something like that.
Heidi Waterhouse:
Great. Should it be, or should it break? What level of response do we want from this? So I noticed that this survey had a lot of the same success metrics as the DORA Report. And I was wondering how the DORA Report influenced the survey.
Tiffany Jachja:
Yeah. I think it has a big influence in how we kind of talk about delivery, especially, if you're going to... And I think, especially if you're going to talk about DevOps and kind of value stream management and flow of value, right? I think that's kind of like the future of how we're going to talk about software delivery too, but essentially this idea that, how much change are we getting out there? How does it impact us? Right? How is the mean time to failure? How does that affect different things? And so it really did influence how we decided to talk about this data, right? And I think, it goes to show like since the DORA Report worked for so many different organizations, then you would think that the organizations that we would survey have these numbers, or they're at least working towards gathering these metrics, right? It's a good starting spot if you don't have anywhere to start.
Heidi Waterhouse:
Great. And let me say right now, if you have not read Accelerate by Nicole Forsgren and Jez Humble and Gene Kim, go buy it right now. Nicole reads the audiobook herself, which is great, because then nobody weirdly mispronounces the words. She knows what all those words are. It's just the thing that our industry is moving toward. And if you're interested in progressive delivery, the whole idea of Accelerate is your foundational text, because you have to understand why it's important to do rapid continuous integration and delivery.
Tiffany Jachja:
Yeah. And we actually captured all those mean values and those average and high, low values for the four metrics that Nicole shares in her DORA Report. So these are those numbers in the key findings page and definitely check out this report. It kind of shows like even, how much does it cost to do a deployment per application, per year?
Heidi Waterhouse:
Great. I was just like, how many thousand dollars in developer hours are you spending babysitting a deployment?
Tiffany Jachja:
Yeah. Exactly.
Heidi Waterhouse:
If that's your core business value, do you have a business that's just about deploying things? Or do you have a business creating value, where deploying is just how you get there?
Tiffany Jachja:
Exactly.
Heidi Waterhouse:
So I heard that you had a demo of Harness that you were ready to share. Could you do that now?
Tiffany Jachja:
Yes. I can figure out how to show that.
Heidi Waterhouse:
We're on a new streaming platform folks, and we hope that it works out great for you, let us know afterwards how you feel about it. But in the meantime, I think, we have a couple of little things to work out. One of the things we want you to know, while Tiffany is working on that, is that there's a lot of stuff that's going to be coming up about progressive delivery and progressive deployment. And I think one of the useful things to keep in mind is the difference between delivery and deployment or release and deployment. So deployment is just getting stuff on your servers, like getting it where it needs to go. Release and delivery are when you actually expose it to customers and that's something that I care a lot about differentiating, is...
Tiffany Jachja:
Oops! We might've just had Heidi drop. But yeah, I definitely agree with her there as, it's continuous delivery and delivery in general is very different from just getting something running into any type of environment. And it can be really tough, if people kind of correlate the two together and think it's the same thing, because then suddenly you can say, for example, "Well, oh, we're done because it runs in this server." But then you don't have any visibility into the application. You don't know how the deployment went. And really, one deployment ends when the next one starts, so it can be really challenging, but I'll let Heidi take over again, now that she's back. Heidi, you might be on mute. Still not hearing you.
Heidi Waterhouse:
When all else fails, update your... or change your...
Tiffany Jachja:
How about now?
Heidi Waterhouse:
Nope?
Tiffany Jachja:
Minor technical issues.
Heidi Waterhouse:
Minor technical issues.
Tiffany Jachja:
All right. I'll pick it up from here since we're having some audio difficulties. Oh, okay, it's just me who can't hear her, that's strange. Okay. But I'll pick it up from here and kind of just share what Harness looks like and sort of, what is it like to use a continuous delivery platform and software delivery platform? So this is the Harness demo environment, and you can actually see when you log in, some of the high level, just information about your deployments. So you can see the most active services, how many deployments you've done in the past 30 days. So, kind of getting into the kind of momentum and seeing like, oh, maybe we started off with only doing 10 deployments a month, now we're up into the thousands because of whatever it is, maybe you've standardized additional pipelines, maybe you've helped onboard more applications.
So it's kind of easy to pick out the outcomes and really focus on those and say, "This is what's driving value today for our business." So essentially in Harness you have an abstraction model for building out your continuous delivery pipelines. So it essentially works by building out applications. And so a Harness application is actually a host of services or actions that you can take in the pipeline. So say for example, I want to deploy a demo application, well I might have two or three different microservices. I may have an environment that it runs on, so if it runs on AWS, maybe I want to have some type of AMI service, like an Amazon machine image that I can deploy.
So you can actually build out those features into your pipeline. And so it works by actually stitching together workflows. And so you can actually build and add new workflows that involve different actions that you want to take. So say, for example, if I want to deploy a microservice, I need an environment first. So maybe I'll deploy an Amazon machine image and then roll out a specific application, if I wanted to deploy a demo application, a Docker image, then I can do that and I can specify that, and I can even configure that deployment through a YAML. So you can change configurations here and actually build out kind of how you want the service to be deployed. But then you can also do that through the UI. So say for example, I want to deploy this order service.
Well, I need to create a Jira ticket and I need to make sure all that information is also copied over to another ticketing system like ServiceNow. I want to do a SonarQube artifact scan just to make sure the code is good quality code and I don't have any vulnerabilities, something like that, and then I actually want to do the deployment. And so you can build these workflows, different types of workflows that anyone can use, and then you can actually build pipelines using those workflows. And so you can say, "In a standard CD pipeline, I want to deploy to a development environment and then a QA environment, and then actually get it approved to go into a production environment." Boom, now you have the stage to deploy it to production. And because this is a platform you can actually set up additional integrations and connectors and different cloud providers.
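The workflow-and-pipeline abstraction described here can be sketched in a few lines of Python. This is purely illustrative; the class names and step strings are invented and are not Harness's actual API or configuration format:

```python
from dataclasses import dataclass, field

@dataclass
class Workflow:
    """A named group of actions, like 'scan the code' or 'deploy to QA'."""
    name: str
    steps: list

    def run(self):
        # In a real system each step would call out to a tool
        # (Jira, SonarQube, a cluster); here we just record it.
        return [f"{self.name}: {step}" for step in self.steps]

@dataclass
class Pipeline:
    """A pipeline is just workflows stitched together in order."""
    stages: list = field(default_factory=list)

    def run(self):
        log = []
        for workflow in self.stages:
            log.extend(workflow.run())
        return log

deploy_order_service = Workflow("order-service", [
    "create Jira ticket",
    "SonarQube scan",
    "deploy Docker image",
])

pipeline = Pipeline([
    Workflow("dev", ["deploy to dev"]),
    Workflow("qa", ["deploy to QA"]),
    Workflow("approval", ["manual approval gate"]),
    deploy_order_service,
])

for line in pipeline.run():
    print(line)
```

The point of the abstraction is reuse: once "deploy the order service" exists as a workflow, any pipeline (dev, QA, production) can include it without re-scripting those steps.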
So it doesn't really matter what type of environment you have. You could be running on an OpenShift cluster, you could be running on a vanilla Kubernetes cluster, you could be running on GCP or AWS. It doesn't really matter. You can kind of pick and choose what environments you want to target with your application. And so when you add applications, you can actually see specific insights into those applications. So, if I wanted to build custom dashboards I could, but for example, if I want Accelerate metrics, it's very easy. We actually have custom templates or built-out templates that you can use, in case you do want to build these dashboards. So I have a couple of different services here, but I can see all right, this Istio Canary service is not doing too hot because of all of the deployments, 40% fail, something like that.
So you can actually capture a change failure rate, deployment frequency, how often did this particular service get deployed in a month, mean time to restore. So say for example, a pipeline fails, how long did it take to remediate that? And then again same thing, lead time to production. You can see that in most cases, our environments took 10 minutes or less to work. So it's just all something you can keep track of and it's pretty cool because you can get those dashboards for free and export them to any other service. So if you have Fauna dashboards or any other dashboarding service, you can export this data into that and grab it. But yeah, there's actually a productivity dashboard as well. You can see team productivity and deployments. Yeah. For some reason I can't quite hear Heidi, but everybody else can.
So that'll be interesting. But yeah, Harness platform works with any CI platform. So, you can trigger a lot of your CI processes through Harness. So if you have your Jenkins Pipeline, if you're using CircleCI, any other CI tool, it works. So we're pretty agnostic in terms of like what other tool sets to use. Because we just try to be kind of friendly with that and give people a good kind of standard way of building pipelines across the board. We can't hear Heidi, yeah.
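The four Accelerate/DORA metrics mentioned above (deployment frequency, change failure rate, lead time, and mean time to restore) fall out of raw deployment records with simple arithmetic. The records below are invented for illustration, not taken from the Harness report:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records for one service over a month:
# (finished_at, succeeded, commit-to-production lead time).
deployments = [
    (datetime(2020, 9, 1),  True,  timedelta(minutes=9)),
    (datetime(2020, 9, 8),  False, timedelta(minutes=45)),
    (datetime(2020, 9, 9),  True,  timedelta(minutes=8)),
    (datetime(2020, 9, 20), True,  timedelta(minutes=10)),
]

deployment_frequency = len(deployments)  # deploys per reporting window
change_failure_rate = (
    sum(1 for _, ok, _ in deployments if not ok) / len(deployments)
)
lead_time = sum((lt for _, _, lt in deployments), timedelta()) / len(deployments)

# Mean time to restore: from each failed deploy to the next success.
restores = []
for failed_at, ok, _ in deployments:
    if not ok:
        later = [t for t, ok2, _ in deployments if ok2 and t > failed_at]
        if later:
            restores.append(min(later) - failed_at)
mttr = sum(restores, timedelta()) / len(restores)

print(deployment_frequency, change_failure_rate, lead_time, mttr)
```

A dashboard like the one demoed is doing exactly this, just continuously and per service, which is why the metrics are a good starting point even if you have nowhere else to start.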
Heidi Waterhouse:
If I switch the microphone, does it work better? Your...
Tiffany Jachja:
Cool. Yeah. Let me know if you can hear me. I can't hear Heidi. Oh no. [inaudible 00:36:06]. Oh my God. I thought about that for a little bit. I was like, I wonder if I should do an ASMR challenge one of these times when I'm recording. Yeah. Heidi, I'm not sure but, oh, okay. Great. Heidi asked me in the chat. So I can take over some of the content or moderating. But yeah, the question was, are CI and CD mandatory together? I think so. Yeah. At least to me, one of the biggest reasons why someone would want to use CI is because they really want to keep up with feature work and feature development, right? And it's kind of one of the biggest practices for doing that because I think one of the challenges that you'll see, and I think this also speaks to why people would want to do feature flag development is because you're constantly working on features, you're constantly trying to meet the demands of customers.
And so to be able to ensure that your branches and your development and the different versions of your application are good to go. And you have something that actually works, you kind of have to use CI. And so continuous delivery doesn't really work if you don't have CI, because you just can't meet the demands. So say, for example, you want to do continuous delivery because you think it'll speed up your processes, at some point CI will bottleneck. And then one of the reasons why you may not have a great or frequent deployment frequency is because maybe you're just not getting there in terms of coding and getting there in terms of the CI process in general.
And so I think that's why a lot of people will say like, oh CI/CD are kind of the same thing, but in general, I think they're two separate things. And you just have to be mindful that without one or the other, you can't really deliver value to a customer. Yeah, so I think Heidi can't do a reboot because we might lose the stream in that case. We got a question: "The Harness dashboard looks so useful for tracking success of deployments over time, does it have anything for tracking further back in the value chain, where the time and money is spent in planning, QA, development?" Yeah. So we actually have a new product that released this year called Harness Continuous Efficiency. So that's when you can actually see how much money did you spend in particular environments and how do you optimize for that?
And I can show you that actually in the dashboard, if I can actually pull it up. Yeah. So you can actually explore how much money you're spending in the cloud. So if you're in AWS or Azure or GCP, like we're in GCP and AWS, I think we're definitely in GCP, I'm not sure if we're still in AWS, we might have moved some more things out of there, but you can actually see how much you're spending per environment. So when you go and install Harness, you do it for particular environments into a cluster. We have this delegate model that allows you to kind of control Harness through this platform, but then your delegates will actually do the work for you within your clusters.
So you can actually see how much you're spending per cluster. And it'll tell you, for example, maybe you need to resize a pod, to make better use of your nodes, because you can basically break it down based on how much you're utilizing in a machine, how much the idle costs, right? For example, if you have an empty cluster with no applications, all of that cost is waste, because you're essentially paying for an empty cluster, right? Idle cost is if you have too many nodes, like you have multiple machines, say, for example, you provision extra large nodes and you do four of them, but you only have three applications, it doesn't really make sense because you just have one machine that you're paying for that's completely idle. You're paying for it, it's allocated, but it's completely idle.
Maybe the applications that you run and you have one application per machine doesn't use all of that machine power, because it's an actual large node. Maybe your applications are very simple, they only use one gig of memory, and then one core or something like that. Then, maybe you can downscale a lot of that. And so continuous efficiency actually tells you when you need to do that. And so actually I did want to say also, you can use continuous efficiency for free now, at Harness.io, you can download a trial, so not download, but try a trial for a couple of days and kind of figure it out, see for yourself. But yeah, I think it's like, that's a great question because when you think about the value chain, you do want to think about efficiency and eliminating waste, right? Not just trying to speed things up, but how can we do this sustainably?
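The utilized/idle/unallocated split Tiffany describes can be sketched with simple proportions. This is a simplified illustration, not Harness Continuous Efficiency's actual billing model; the function name and the numbers are invented:

```python
def cost_breakdown(total_cost, capacity, requested, used):
    """Split a cluster bill three ways: utilized (actually consumed),
    idle (requested but not consumed), and unallocated (capacity that
    nothing even asked for). Capacity, requested, and used are in the
    same resource units, e.g. CPU cores."""
    utilized = total_cost * used / capacity
    idle = total_cost * (requested - used) / capacity
    unallocated = total_cost * (capacity - requested) / capacity
    return utilized, idle, unallocated

# Four extra-large nodes (16 cores each) but only three simple apps:
# 64 cores of capacity, 12 cores requested, 6 actually used.
utilized, idle, unallocated = cost_breakdown(
    total_cost=1000.0, capacity=64, requested=12, used=6)
print(utilized, idle, unallocated)
```

In this made-up example, over 80% of the bill is unallocated capacity, which is the signal to downscale the node pool rather than tune the applications.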
So it's really great. Oh yeah, so another question is, there's a user conference coming up, can you tell us about that? So yeah, we have a new conference coming up. It's our first time hosting an industry conference, but it's called Unscripted and it'll be later this month actually, since it's October now, on the 21st and the 22nd. And you can actually go to unscriptedconf.io, I'll share it in the chat and register. Oops, I'll have to log in. Sorry, let me pass this over.
There you go. So we actually have a website where you can register for the conference and just check out some of the sessions. We had a CFP, an open call for papers, for people to kind of share stories and insights about how they scale and simplify software delivery. So we actually built it out in two tracks, you can learn how to scale and you can learn how to simplify different aspects and different components of the software delivery process. It's also a free event, and we're really excited to be hosting it.
I can hear you Alex, if that counts. Swagged out, our mascot is a canary and we call him Captain Canary. There's other versions of the Captain Canary. There's the rich Captain Canary, which I don't have, but she was part of the continuous efficiency launch. But yeah, we got new swag. It's the Harness Canary, it's kind of cute, he's squishy, looks kind of sad, but you're like [inaudible 00:44:13] deployments. So if you do want to achieve canary deployments, Harness is the way to go. But yeah, we gave these out to speakers too. So if you want your chance at a Harness plushy, we give them away to people. The people shall know. Yeah. It's an Audio-Technica AT2020.
Yeah. Oh my gosh! It's changed so much. I think when I kind of first came into the Harness world, I always just figured CI/CD was just one thing. Like, okay, we can get this delivered and okay. Maybe we have some type of ecosystem built around it, right? Like verification, operationalizing. I didn't realize how much work it actually was to do continuous delivery, right? Because when you think about it and even in the report we share the seven kind of steps that are related to continuous delivery, there's a lot there. I mean, provisioning environments, roll back and release strategies, secrets management, thinking about costs. There's so much around the whole ecosystem for safe delivery, right? Safe and sustainable and quick delivery.
You don't really realize it when you're just building custom pipelines or scripting pipelines, because I think, before joining Harness, I was actually a consultant and that's what we did. We helped people do that, but we always helped people work with different tools, whatever tools that they had out there, to build, like just get something running on a cluster, right? Just get it running, just get it working, whatever it took.
And that can kind of limit sort of your vision or kind of understanding of continuous delivery. Because you're so invested in thinking about, oh, how do I script for this? How do I solve for this? That you don't really kind of look at like the whole ecosystem and no one provides that for you because it's so unopinionated and so unstructured that you don't have a good idea. And so at least for me, it's been like kind of crazy even thinking about it from a governance or compliance standpoint, like oh, I can actually like grade my pipelines or I can grade my deployments or I can achieve like compliance or FedRAMP through this.
It's not something you think about. That's probably the last thing I thought about, especially building my own pipelines, I was like, I can do that. So I think that's the funny part. And like the thing that gets you sometimes, until you work, until you talk to a company that kind of focuses in one particular area or focuses on solving one problem, right? You don't really get that otherwise. So I think that's one of the greatest things that's happened to me since joining Harness.
Alex:
I mean, that's amazing. I'm like, that's a big thing. Cool. So what would you tell people to start with as they start trying to understand their CI/CD transformation?
Tiffany Jachja:
Yeah, just for me, the biggest thing is figuring out where the pain points are, because it doesn't make sense if you've already solved for CI to say, "Oh, we need a whole new CI server, right? This is what we need to invest our time in." It's the wrong thing to invest your time in. And I think a lot of companies will say, "Oh, you need to use this solution or you need to use that solution." If there's no pain point there's no... And it works, and you could be investing your time somewhere else.
And so I think that's one of the biggest things is figure out what your entry point is, are you most concerned about speed? Are you most concerned about security? Are you most concerned about governance or compliance? What is it that you're trying to improve on first? Right? You got to start somewhere and I think that's one of the cool things about reading the DevOps Handbook and sort of some of the methodologies around value stream management is, you got to figure out where you want to start first and start small, right? So that's probably my biggest advice.
Alex:
Awesome. Cool. Thank you. And I know that we're kind of running out of time, but I just wanted to ask the audience one more time if anyone had any more questions for Tiffany.
All right. Give me one more moment. Awesome. All right. Well, if that's all the questions there are, I want to say thank you, Tiffany, for coming. It was awesome to have you on Test in Production and super excited to hopefully one day have one of those plushies of my own. So I will be a speaker at some point hopefully. I don't know, does this count? I feel like this would count.
Tiffany Jachja:
We'd love to have you there Alex.
Alex:
I feel like this counts. Yeah. Thank you so much. It was really awesome having you and can't wait for us to do something in the future.
Tiffany Jachja:
For sure, thanks so much for having me here. Thanks everyone.
Alex:
Yeah, of course. And just a reminder, we'll have the transcript and recording out in a bit and we'll see you all next time. See you.