
Flush with funds, rising DevSecOps vendor reveals roadmap

Lacework has a fresh $525 million to help develop its machine learning-driven DevSecOps product, which integrates into CI/CD pipelines. CEO Dan Hubbard reveals his expansion plans.

A year ago, DevSecOps vendor Lacework was one among many emerging cybersecurity companies, but since then, it's made a meteoric rise, capped with a massive Series D funding round to begin 2021.

The privately held company, founded in 2015, said it saw 300% revenue growth in 2020 as the COVID-19 pandemic accelerated enterprise digital transformations and cloud migrations. This month, the company closed a $525 million funding round led by Sutter Hill Ventures and Altimeter Capital.

Lacework's SaaS-based cloud security platform collects a broad swath of data from AWS, Azure and GCP cloud infrastructures, along with application configurations, into a petabyte-scale back end based on the Snowflake cloud data warehouse. Lacework's machine learning algorithms then identify changes in that data on an hourly basis, alerting IT operators to anomalous behaviors that indicate security risk and suggesting remediations.
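Lacework's hourly anomaly-detection loop is proprietary, but the general idea -- baseline each entity's activity by hour and flag sharp deviations -- can be sketched roughly as follows. Every name and threshold here is an illustrative assumption, not Lacework's implementation:

```python
from collections import defaultdict
from statistics import mean, stdev

# Illustrative sketch only: baseline hourly event counts per entity
# (e.g., a host or IAM user) and flag hours that deviate sharply.
def find_anomalies(hourly_counts, z_threshold=3.0):
    """hourly_counts: {entity: [count_hour_0, count_hour_1, ...]}"""
    anomalies = []
    for entity, counts in hourly_counts.items():
        if len(counts) < 24:          # need a day of history for a baseline
            continue
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(latest - mu) / sigma > z_threshold:
            anomalies.append((entity, latest, mu))
    return anomalies

# Example: a service account that suddenly makes ~50x its usual API calls
history = defaultdict(list, {"svc-backup": [12, 9, 11, 10] * 6 + [540]})
print(find_anomalies(history))
```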

Similar features can be found among container and Kubernetes-focused security tools that have also emerged over the last several years. But Lacework's product has a broader focus that spans multiple IT security disciplines, including identity and access management, cloud security posture management, threat detection and response, and regulatory compliance management, for container-based and non-container workloads alike. The platform also integrates into DevSecOps workflows with its API hooks into CI/CD pipelines, infrastructure as code and ChatOps tools.

Lacework CEO Dan Hubbard served as Chief Security Architect and Chief Product Officer at the company before being named chief executive in June 2019. Before Lacework, Hubbard was CTO at OpenDNS, now owned by Cisco, and before that, CTO at Websense, now owned by Raytheon under the name Forcepoint. SearchITOperations caught up with Hubbard this week to learn more about what made Lacework stand out to investors and where he plans to steer the company in 2021.

What accounts for the scale and speed of the Lacework platform, which seem to be its main differentiators?


Dan Hubbard: There are really three key differentiators. The first one is breadth -- we just do a lot of things across many different categories, all the way from compliance through to development, security, build time, runtime, containers and Kubernetes. That leads to a lot of ingestion, across many different data sources -- petabytes of data.

One way to think about the product is essentially as a massive ingestion engine, which can take all of your audit trails from GCP, Azure, AWS and Kubernetes, and all of your configurations. We pull all that information in to look for vulnerabilities, configuration problems, developer mistakes and unknown behaviors.
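As a concrete, much-simplified illustration of the audit-trail ingestion Hubbard describes, a sketch that pulls the last hour of AWS CloudTrail management events with boto3 might look like this; it is not Lacework's pipeline, and the printed fields are just for illustration:

```python
import boto3
from datetime import datetime, timedelta, timezone

# Sketch: pull the last hour of CloudTrail management events so they can be
# normalized and fed into downstream classification. Credentials and region
# come from the standard AWS environment; this is not Lacework's code.
cloudtrail = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(StartTime=start, EndTime=end):
    for event in page["Events"]:
        # Each record carries the API call, caller identity and timestamp.
        print(event["EventName"], event.get("Username"), event["EventTime"])
```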

The second differentiation is the depth of our data classification and the efficacy of that engine. On average, a customer sends us a little over a billion log entries per day. We turn it into, on average, 1.25 high-end critical events or alerts that they should triage.

The third differentiation is that we fit very well into a DevOps lifecycle, or triage process, or a security process. We can plug into your Jira ticket, we can plug into an API, we could plug directly into your monitoring system, like Datadog or New Relic Core, or we can plug directly into your security workflow.

Is the integration with DevSecOps and CI/CD pipelines mostly for that monitoring output? Or do you also monitor the pipeline itself and workloads as they go through it?

Hubbard: We can look in and poll container repos, look at your containers for vulnerabilities and configurations. And then we have an API and a command line interface, which allows you to integrate into things like Chef, Puppet, Ansible and Terraform and automate a lot of the CI/CD process as part of the push.
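What wiring a scan into a build step could look like is sketched below, using Python's subprocess to invoke a container scan and gate the push on critical findings. The CLI subcommands, flags and JSON shape shown are assumptions for illustration, not documented Lacework syntax:

```python
import json
import subprocess
import sys

# Illustrative only: invoke a vulnerability scan from a CI job and gate the
# push on the result. The image name, CLI arguments and report format below
# are hypothetical.
IMAGE = "registry.example.com/payments-api:1.4.2"   # hypothetical image

result = subprocess.run(
    ["lacework", "vulnerability", "container", "scan", IMAGE, "--json"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    sys.exit(f"scan failed to run: {result.stderr}")

report = json.loads(result.stdout)
critical = [v for v in report.get("vulnerabilities", [])
            if v.get("severity") == "Critical"]
if critical:
    print(f"Build blocked: {len(critical)} critical CVEs in {IMAGE}")
    sys.exit(1)
```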

If you're running a pipeline, we can help you if you want to stop a build, or send a response, like, 'Build X failed because of Y, send to this team.' Or create a ticket in Jira that goes to another group. And then in Terraform, we have an integration that would say, 'read template' to push a template, or [detect that] there's a problem with this template in some way.
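For the "create a ticket in Jira" path, a minimal sketch against Jira's standard REST API could look like the following; the project key, credentials and failure details are placeholders:

```python
import requests

# Sketch: file a Jira issue when a pipeline gate fails. The endpoint is the
# standard Jira Cloud REST API; URL, project key and credentials are placeholders.
JIRA_URL = "https://example.atlassian.net/rest/api/2/issue"
AUTH = ("ci-bot@example.com", "api-token")          # placeholder credentials

payload = {
    "fields": {
        "project": {"key": "SEC"},                  # hypothetical project key
        "issuetype": {"name": "Bug"},
        "summary": "Build X failed: critical vulnerabilities in payments-api",
        "description": "Scan gate failed; see pipeline logs for the CVE list.",
    }
}
resp = requests.post(JIRA_URL, json=payload, auth=AUTH, timeout=30)
resp.raise_for_status()
print("Created", resp.json()["key"])
```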

The tool can offer suggestions for remediation -- can it automate remediation if a user wants it?

Hubbard: Our customers never want their cloud provider to have that level of privilege within their system. That's just very dangerous for a variety of security reasons. However, we either give them guidance, or we give them code, like a Lambda function for AWS, that allows you to close an S3 bucket if you want, or that allows you to turn on multifactor authentication if it's turned off. We're working on the ability to do deeper things within Kubernetes, like [help create] pod security policies and network security policies.
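As an example of the kind of remediation code Hubbard mentions, here is a minimal sketch of an AWS Lambda handler that "closes" an S3 bucket by turning on its public-access block with boto3. The event field name is an assumed convention, and this is not the function Lacework supplies:

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Sketch of a remediation Lambda: block all public access on a bucket
    flagged by an alert. The event payload key is an assumed convention."""
    bucket = event["bucket_name"]                   # assumed alert payload key
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    return {"remediated": bucket}
```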

Our belief is, in the future, the platforms themselves will own the actual enforcement. We don't see Lacework being the platform that kills packets or quarantines hosts and things like that -- it's either going to be built into Kubernetes, or your AWS VPCs, or integrate directly with a CI/CD tool. And by the way, it's actually very, very rare that customers are mature enough to get into that kind of automation. The most popular thing right now is detect and respond, maybe create a ticket and track that ticket. Then the next level is what we call Driver Assist -- maybe they integrate our product into Slack, and it says something like, 'There's a problem here, click this button to remediate it.' And then the real mature ones are like, 'Okay, run a serverless function or a security policy that does XYZ.'
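The "Driver Assist" pattern Hubbard describes -- an alert in Slack with a remediate button -- could be sketched with Slack's Block Kit and an incoming webhook along these lines; the webhook URL, alert text and action ID are placeholders, and handling the button click would require a separate interactivity endpoint:

```python
import requests

# Sketch: post an alert with an interactive button to Slack via an incoming
# webhook. The webhook URL and action_id are placeholders.
WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

message = {
    "blocks": [
        {"type": "section",
         "text": {"type": "mrkdwn",
                  "text": ":warning: S3 bucket *logs-prod* is publicly readable."}},
        {"type": "actions",
         "elements": [{"type": "button",
                       "text": {"type": "plain_text", "text": "Remediate"},
                       "style": "danger",
                       "action_id": "remediate_bucket"}]},
    ]
}
requests.post(WEBHOOK, json=message, timeout=10).raise_for_status()
```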

Even that represents an expansion of users' trust in AI and machine learning, right?

Hubbard: Trust is built with positive results over time, and we've been fortunate that we haven't had any major issues, whereas some vendors have had what I call toxic false positives, blue screens of death, bad Linux kernel panics and things like that. But we operate at a higher level -- we're not a kernel filter. We run in userspace.

We have three ways that you can do detection -- the machine learning stuff, typically based off of your infrastructure, and knowing your infrastructure. That's really good for the 'unknown bad'. Then there's the 'known bad,' known bad indicators of compromise like bad domains and bad IP addresses, and bad hashes, [which] is global. And [third,] there's custom rules that the customer creates.
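Layering those three paths -- anomaly scoring for the unknown bad, known-bad indicators, and customer-written rules -- can be sketched roughly like this; the indicator sets, rule format and scoring threshold are all illustrative assumptions:

```python
# Sketch of layering the three detection paths Hubbard lists. The indicator
# sets, rule format and anomaly scorer are illustrative, not Lacework's.
KNOWN_BAD_IPS = {"203.0.113.9"}
KNOWN_BAD_DOMAINS = {"evil.example.net"}

CUSTOM_RULES = [
    # (name, predicate) pairs written by the customer
    ("root login from outside VPN",
     lambda e: e["user"] == "root" and not e["on_vpn"]),
]

def classify(event, anomaly_score):
    if (event.get("dest_ip") in KNOWN_BAD_IPS
            or event.get("domain") in KNOWN_BAD_DOMAINS):
        return "known bad"
    for name, predicate in CUSTOM_RULES:
        if predicate(event):
            return f"custom rule: {name}"
    if anomaly_score > 0.9:            # model output for the 'unknown bad'
        return "anomalous"
    return "benign"

print(classify({"user": "root", "on_vpn": False, "dest_ip": "10.0.0.4"}, 0.2))
```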

Most security people are comfortable with the middle one, [vulnerability detection], and what they're actually really uncomfortable with are rules. This is a big part of our automation story -- although they may think they want the flexibility of rules, and really like rules, for one thing, it's just time-consuming. Then the problem that usually occurs is that they either write the rules very, very narrowly, and miss all kinds of stuff, or they write them very, very broadly, and catch way too much.

Customers are getting more used to the machine learning, and the output of that. And one of the reasons why we visualize that and represent it [graphically] -- our graphs are what create the events and alerts, but they also create stories and pictures. Sometimes the pictures really speak volumes, versus just an alert that says, 'bad stuff happening.'

So, you've just gotten this huge chunk of funding, and you've said you plan to double the number of employees this year. What will that mean, in terms of your product?

Hubbard: We think about the market kind of in two categories: There is the net-new stuff, cloud workload protection, Kubernetes security, container security, compliance for the cloud. And ChatOps also, the ability to do triage through Slack or other mechanisms -- maybe routing of tickets. Now, you have the ability to respond and send information, but ChatOps can get pretty deep, pretty quickly. We have a whole new suite of APIs that we're releasing this quarter, which will allow us and our customers to program the system better.

There are things we get asked for that we don't want to do [from a deployment standpoint], like ship an appliance or do layered software, or single-tenant SaaS -- we're sticking to our strengths in multi-tenant SaaS. We're building a European data center and building out a European presence.

Then there is a set of current and existing technologies that are expanding into or coming towards our strengths, for example, security analytics, security triage, SIEM and vulnerability management, as people move their core assets to the public cloud. Customers just started asking us, 'Hey, can you help decrease my SIEM spend? How can I use you as my SIEM?' We didn't really design this that way -- the ability to ingest other data sources, I think, is going to become pretty important over the next year there.

Beth Pariseau, senior news writer at TechTarget, is an award-winning 15-year veteran of IT journalism. She can be reached at [email protected] or on Twitter @PariseauTT.
