bouncer

Unsupervised Learning · 110.5K views · 43 likes

Analysis Summary

30% Low Influence
mild · moderate · severe

“Be aware that the 'problem' of security-engineering friction is framed specifically to position the guest's product as the unique, 'only' solution in the market.”

Transparency Mostly Transparent
Primary technique

Performed authenticity

The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.

Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity

Human Detected
95%

Signals

The video is a genuine long-form interview featuring natural human speech patterns, including spontaneous fillers, conversational rapport, and unscripted technical explanations. There are no signs of synthetic narration or AI-generated script structures.

Conversational Fillers Transcript includes natural stutters and fillers like 'uh', 'you know', and 'right?' in mid-sentence positions.
Dynamic Interaction The back-and-forth between Daniel and Andrew shows spontaneous reactions, interruptions, and contextual follow-up questions.
Personal Anecdotes and Analogies The speaker uses a specific 'baking a cake' analogy and references specific customer case studies with approximate numbers ('like 1,200 tickets').
Speech Cadence The transcript reflects non-linear sentence structures and self-corrections typical of live human speech.

Worth Noting

Positive elements

  • This video provides a clear technical explanation of 'reachability analysis' and how call graphs can reduce vulnerability noise in software development.

Be Aware

Cautionary elements

  • The guest's claim of being the 'only' provider of this technology is a common marketing tactic that ignores a crowded field of similar AppSec tools.

Influence Dimensions

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed March 23, 2026 at 20:38 UTC · Model: google/gemini-3-flash-preview-20251217
Transcript

Only about 10% of code from agents is both correct and secure. Some of our customers actually run Endor Labs basically headless. If you were a developer, you would have no idea you you're using Endor Labs because it's fully wrapped into other systems that you're using. One customer in the financial services space that basically reduced their uh security ticket backlog by like 90% in the first day, right? They went from like I think it was like 1,200 tickets down to like a couple hundred pretty much overnight. >> All right, Andrew, welcome to Unsupervised Learning. >> Thanks, Daniel. Great to be here. >> Awesome. So uh I understand you guys are doing some really cool stuff over there. How how would you describe the problem that exists and what you're trying to do to solve it? >> Yeah. So uh Endor Labs um works in the application security space and so we kind of sit in between security teams and engineering teams and historically those are two groups that don't always get along. Developers uh need to move fast and security is often there to put some brakes on what's going on. And so that's kind of always been the challenge. And I think the other kind of gap that has existed for a long time is that security took a lot of shortcuts to try to keep up with engineering. And so they kind of made a lot of guesses about what was actually risky. And so you know we would actually end up sitting in some of these conversations between security and engineering having arguments about what is actually a risk in the code or not. And that's a huge waste of everyone's time. That was kind of the problem that Endor Labs was designed to solve. Um that started in the open-source space. So we were you know even just four or five years ago you know about 90% of application code came from open source.
So that stuff you know developers didn't write themselves but imported to make their jobs easier, because engineering, you know, has become basically, uh, I'd describe it as like Lego assembly, right, you take all the pieces and kind of build um whatever you want to build. Anyway, at that time all the products in the market basically looked at what was called the manifest file and made a guess as to what was in your application, and so if you've ever you know baked a cake you know you might uh cheat or substitute things in the recipe, um it's not a great way to know what's actually there. And so Endor Labs pioneered a new way of actually assessing what is in that application that got rid of those shortcuts and actually gave evidence to both security and engineering about what risk actually existed in their application. >> Okay. Interesting. So is this ultimately rolling up into some sort of vulnerability management system? Is it going directly to the developers? Like how are the developers actually consuming that? >> Yeah, both. For security, you know, they can obviously use our tool or in some cases they might roll it into their vulnerability management system. But for developers, it needs to show up where they're working, right? You don't want to go look at another dashboard. It needs to be in your IDE or in your pull request. That's where we typically show up for a developer. The other thing we pioneered besides this way of looking at code was actually the way we actually gave information to developers. And so again you have to kind of know the state of the industry, but at the time people would open what they called like auto pull requests, and it was basically just a guess. It would say there's this new version, um, you should upgrade. The best tools were like, well, 60% of the time it didn't break, you might be fine, go ahead and try it. Whereas the way we did the analysis of the code, we're actually able to give the developers evidence that actually your code won't break.
And so that also kind of changed that experience both for engineering and security. >> Oh, interesting. Yeah, because that trust relationship between the security team and the engineering team is super critical. I've run a number of these programs and like it doesn't take long to burn the bridges where they just won't listen to you anymore. And so it's this weird political thing where like the managers are talking to the other managers saying, "Hey, we really need your people to do this." But on the engineering side, engineering wins. Like they're going to win they're going to win that, right? Because from the CEO down, it's about features. It's about the business moving, right? So, um I think enough developers complain to their manager, hey, I can do all this dumb security stuff all day, but it's just wasting my time and it's wrong half the time and it's causing me to not ship features. That means that engineering manager complains over to security and they're like, "Hey, I'm not going to do this until you sort this out." >> Yeah. The way this often shows up is we'll, you know, start talking with an organization about, you know, what's kind of going on, what challenges are you having? And sometimes it's like, oh, our technology is fine, but we have an organizational problem. And then the job there is to be a little bit of a therapist and diagnose what's going on between security and engineering and why security is losing that credibility um with engineering teams. And so, you know, the big benefit that we give um engineering teams, you know, we started open source, but this now applies to, you know, first-party code and containers as well, but we give you that evidence to actually have that informed conversation with engineering. So, it's not security said, engineering said. There's actually like evidence in the code about how something can be reached and exploited um by an attacker. >> Okay. Yeah. So, talk me through that.
So, what are all the different surfaces you're able to look at? >> Yeah. So, um as I mentioned, we started in open source. So, open source dependencies, and we pioneered there what's called program analysis, which is a form of static analysis that actually builds what's called a call graph of your code. So, you can kind of imagine a, you know, a map with all sorts of strings going everywhere seeing how all of your different functions are calling, you know, down into vulnerable functions. And so we extended that now to first-party code and containers as well. So we go from you know the code that your developers wrote, you know we can trace you know user input data flow through all that down into your dependencies and then now also down into the container image. And so we are the only company in the market that provides that sort of what we call full stack reachability across code, dependency, and container image layers. And so that includes other things like you know secrets, AI models, all the different you know supply chain pieces that your developers might integrate into their applications. >> Okay. Interesting. Yeah, to whatever extent you can share, like how are you doing all that? That seems like some of the secret sauce, but also uh the audience is very technical so they like to hear details. >> Yeah, so we're actually pretty transparent about how it works. I think you can find a lot of this in our docs or in our white papers on our website. You know it's it's changed over time. You know, it used to be just static analysis. Now it's actually a mix of agentic AI and static analysis. But the you know the key thing when you build an agent is you have the LLM, everybody can use that. But then you have the tools you can give an LLM, and the data. And so the way we kind of describe it, you know, I can ask uh, you know, an AI model to do a math calculation for me. It's going to burn a ton of tokens and then it may or may not be correct at the end of it.
If you've ever tried that, there's tons of examples of LLMs failing at math problems. But if I give an LLM a calculator, it works really, really fast. And so you can think of Endor Labs as that calculator to help an LLM understand code. And so we have a couple um different kind of tools that LLMs use to parse code. So the first is code search. And uh we actually have a you know paper that's going to come out in a couple months, because this is a you know brand new technology that our research team developed, but it's basically using a mix of hashing and embedding to kind of map and find patterns in the code. So that's kind of like the basic: we can search and find things in your in your application. The next piece is code navigation, and that's the part I was referring to earlier about the call graph. So basically we look at, you know, we build a model of your application code. We see exactly um you know when data comes in how you call, you know, function to function through your code and then down into the dependencies, and then critically we look at both direct and transitive dependencies, so the dependencies of your dependencies, and kind of build that graph. And then, you know, again, when you build an application that gets compiled, put into a container, and so we use that information about the application layer to then look at how that application layer integrates with the you know container runtime and operating system layers as well. So you kind of get that full code to you know runtime reachability graph. And so that's how we're able to provide that like noise reduction. We're actually able to show you like with evidence how that code is calling into a vulnerable function, whether that's you know the code you wrote or code you imported into your application. >> Okay. Perfect. And and how would that look? Let's say um Log4j was a great example of this. >> So it's a thing that it's not directly in the application but the application is using it.
What would that look like? What would the experience be like? Like where is the developer seeing that? What does the recommendation look like and how does that go into their workflow? >> Yeah. So, yeah, Log4j was obviously like a, you know, huge event in the industry, involved a lot of teams. I'll start with the security experience and I'll show you how security can design that experience in a way that doesn't interrupt um developers. So, as a security team, you know, I' I'd, you know, log in, I'd look at my findings, I'd see a critical finding for Log4j, or in this case, maybe I need to go search through my environment and find all the instances. What we'll do though is you can immediately you can find where it is. So which applications etc. are using it. But then I can dive into a particular application and actually see, you know, is the vulnerable function in Log4j actually used in my code. You know, best case we might be able to give you some evidence that, you know, in fact you're using the Log4j dependency but your application code isn't using that vulnerable function. That means, all right, you might want to update this anyway, but it can be a lower priority versus this other application that is actually using that vulnerable function. Now when you're onboarding the Endor Labs platform, you can actually decide how this surfaces for developers using our policy engine. And so in this case, you might say, you know, it's a critical vulnerability. Uh, I might decide I'm going to block anybody from building any new applications with this. So for an example, if I was a developer and I was submitting my pull request, I could, you know, block any build of the application, give them a warning that they need to, you know, fix and upgrade this dependency in the code. That's kind of like the kind of the ideal workflow.
Or, you know, if it wasn't Log4j, it was a maybe more minor one, I could say I'll let you go ahead and ship this to production, it'll give you a warning and you can choose to accept that and continue if you wish. And that's kind of a key like difference, is that ability to like decide warn or block, so you can kind of decide and design that developer experience, um, not just set some guardrails and move on. >> Okay, and you mentioned the IDE, and uh you mentioned the sort of agentic stuff. How does that materialize? >> Yeah, so um I'm sure your audience has heard about the model context protocol over the past year, um, MCP. So that's a way of integrating um systems like Endor Labs directly into the agents that developers are using um when they're writing code. And so with the Endor Labs MCP, you get actually full access to scanning both for the code you write, any secrets that might be exposed in your repo, as well as what we've been talking about, open source, so CVEs as well as malware that can also show up in the supply chain. Um so that's kind of the easiest integration, and the way it actually works is it's completely invisible to you as the developer once it's installed and set up. You just kind of work as usual with the agent. And every time the agent adds a dependency or modifies your code, it'll actually do a quick scan to make sure that whatever changes it has made are secure. And so this is an easy way to resolve a bunch of security vulnerabilities before you even do that commit or open that pull request. And so it kind of keeps things moving forward and is mostly invisible to developers. You know, obviously it's there. So you can also choose to run it if you just want to do a final check before you, you know, submit your pull request or a commit. >> Okay. So it's kind of active during the development process. >> Yep. Absolutely. And it can see the repo so it can kind of go through and and do a scan there. And then um talk about the uh the container pieces, right?
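The warn-versus-block decision described above can be expressed as a tiny policy function. This is a hypothetical sketch, not Endor Labs' policy engine: the severity tiers and thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    severity: str    # "low" | "medium" | "high" | "critical"
    reachable: bool  # did reachability analysis find a call path to the flaw?

def gate(finding: Finding) -> str:
    """Return 'block', 'warn', or 'allow' for a pull-request check."""
    if finding.severity == "critical" and finding.reachable:
        return "block"   # e.g. a Log4j-style flaw the app actually calls
    if finding.reachable or finding.severity in ("high", "critical"):
        return "warn"    # developer may accept the warning and continue
    return "allow"       # unreachable and low-risk: fix on the next upgrade

print(gate(Finding("CVE-2021-44228", "critical", reachable=True)))  # block
```

The key design point from the conversation is that reachability, not just severity, drives the decision: an unreachable critical CVE warns instead of blocking the build.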
Because it could be sitting on like this very old container, and like you've done all this work around the code that you're building, but it's sitting on a very nasty sort of thing. What does it look like in terms of like recommending like a more secure version of the container or something like that? >> Yeah. So on the container side, we do a little bit different approach, geared more towards the security team as well as the the platform engineering team. Um, in most you know enterprises and companies, it's not the individual developers who are responsible for the images, it's the platform team that's maintaining those golden images. And so what you can do as that team is I can see all of the container images in my environment, all the ones that might be in my registry etc., and then I can also see all the derived images, right? So I have my golden image and then I can see, you know, the five application images that come off of it, just as an example. And so what I can then see is like, okay, there is this vulnerability, let's say in one of the libraries that's used in the container image I'm using, like let's say it's a Python image that I've kind of built. So I can see, you know, there's this vulnerability in that container image layer. And then I can actually look across those applications and see, okay, either, you know, maybe none of my applications are using it, in which case I can just prune that dependenc... I can prune that dependency out. I don't need it at all. Or I might see, okay, there's this one application that's impacted because it's actually using that code. And then I can make that decision. Okay, should I go ahead and upgrade it, or do I need to, you know, temporarily put another mitigation, because that upgrade might take a lot of work. We touched on this earlier, but we have this concept of upgrade impact analysis.
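The prune-versus-upgrade decision for a golden image, as described above, reduces to checking whether any derived application actually reaches the vulnerable library. A toy model, with every image, app, and library name invented for illustration:

```python
# Libraries shipped in the base ("golden") image.
GOLDEN_IMAGE_LIBS = {"openssl", "libxml2", "python3"}

# For each derived app image, the libraries its call graph actually reaches.
DERIVED_APPS = {
    "payments-app": {"openssl", "python3"},
    "reports-app":  {"python3"},
}

VULN_LIB = "libxml2"
assert VULN_LIB in GOLDEN_IMAGE_LIBS   # the flaw ships in the base layer

impacted = [app for app, used in DERIVED_APPS.items() if VULN_LIB in used]

# If no application reaches the vulnerable library, the platform team can
# prune it from the golden image instead of rushing a risky upgrade.
decision = "upgrade" if impacted else "prune"
print(decision)   # prune
```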
And so that's using that static analysis to actually see, you know, when we build that call graph of your application, are you using um that vulnerable piece of code? And then when the maintainers fix it, does that impact your application? Because a lot of times they might modify a function or a class or an API in the dependency. And so we actually give you that visibility so you can prioritize, you know, exactly what you need to go kind of refactor in your own application code. >> Yeah. Interesting. That that's really important because uh yeah you can make a tweak and suddenly the rest of the stack doesn't work or whatever. >> Yes. Yeah. And it's a huge waste of time, you know, for a developer to have to figure out what was the line change in the dependency that I was using and how did they modify it. It's a bunch of reading and research, and then I have to go try different variations to work around it. We give you that information so even if it is work, you see exactly what work you have to go do. You don't have to go do that research yourself. >> That makes sense. And what type of interaction is there for those other teams? Like what does the portal look like? What do communication flows look like? Because there's also kind of a friction there if the developer is having to then go and market to the uh platform team, right? So is it possible to get the platform team involved and kind of be visible in this process in some sort of way? >> Yeah. Uh, thinking specifically about the container images again? >> Yeah. >> Yeah. Yeah. Absolutely. So I think one of the cool things about Endor Labs is we actually don't have the uh concept of like a user account. Um, it is fully integrated with like your enterprise um single sign-on systems. So all you have to do is, you know, grant the platform engineering team access and they can immediately get all this data or pull it into any of the systems they're using, because we're also built fully API first.
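Upgrade impact analysis, as described above, boils down to intersecting "what changed between dependency versions" with "what the app's call graph actually uses." A toy sketch; the API signatures and the app's usage set are invented, not a real analysis:

```python
# Hypothetical public API of a dependency at two versions.
V1_API = {"parse(s)", "render(doc)", "sanitize(s)"}
V2_API = {"parse(s, strict)", "render(doc)", "sanitize(s)"}  # parse() changed

# Functions the app's call graph shows it actually calls in this dependency.
USED_BY_APP = {"render(doc)", "sanitize(s)"}

changed = V1_API - V2_API                  # signatures removed or modified
breaking_for_app = changed & USED_BY_APP   # only changes the app touches matter

print(breaking_for_app)   # set() -> evidence the upgrade won't break this app
```

An empty intersection is the "evidence that your code won't break" mentioned earlier: the maintainers did change the API, but not any part this application calls.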
So again, easy to integrate wherever this work is happening, give that data exactly where it's needed, and allow them to work with it. And so that's kind of the powerful thing, is you can build um workflows any way you want. Some of our customers actually run Endor Labs basically headless. If you were a developer, you would have no idea you you're using Endor Labs because it's fully wrapped into other systems that you're using. Um, and even the security team can roll it up into other things. And so it's this operational layer that you could choose to ignore. Other teams like the dashboard, and then choose to spend, you know, use that as their main kind of like interface for working with, you know, security teams kind of managing things and then making sure tickets get cut to, you know, Jira, whatever system you're choosing to use. But even in that case, like the big benefit, you know, we had one customer in the financial services space that basically reduced their uh security ticket backlog by like 90% in the first day, right? They went from like I think it was like 1,200 tickets down to like a couple hundred pretty much overnight. And you know, that's a large team with thousands of developers. So that means it's a huge time savings for them. >> And is that just because they realized the things didn't apply to them and they were able to just reply back and say, hey, you know, we're not even using this version, or... >> Well, in the case of the security team, it's like, you know, yeah, you're either not using that version or you're not even using that particular vulnerable function, and so you can say yes, there is a risk in this code but it doesn't apply to us, and so we don't, as a security team we don't need to flag this as a challenge to engineering. We'll just, like, next time they upgrade the application as part of their process it'll get fixed, it's not an emergency. Actually, we're one of the only companies that's accepted in FedRAMP, um, if you're familiar with FedRAMP.
It's the compliance, yeah, for selling to the federal government. You know, there's really strict SLAs around, you know, critical vulnerabilities you have to fix within 30 days, or you have like a, you know, 60, 90, 180, whatever, depending on the the standard. But we're actually auditor approved. Um, you can use our reachability analysis to actually classify something out of that 30-day SLA into some of those longer SLAs, because we can give you the evidence that that vulnerability is not reachable in your code. And so that's how it helps with FedRAMP, but also even if you're not FedRAMP, it gives you that confidence that, you know, you don't have to actually worry about some of this risk. >> Okay. So it sounds like a big theme here is this evidence chain indicating like this is how we know you're vulnerable. This is how we know you aren't and providing that to a security team or an engineering team or an auditor or whoever. >> Yep. That is the big thing, and that's like one of the benefits I think customers love, is that transparency, right? So all of that data is accessible to you in that platform and then you can take it anywhere that you need.
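The SLA reclassification described above can be expressed as a small lookup. The day counts here are illustrative only (actual FedRAMP remediation windows depend on the baseline and severity definitions); the point is that auditor-accepted unreachability evidence moves a finding into a longer window.

```python
# Illustrative remediation windows in days, keyed by severity.
BASE_SLA_DAYS = {"critical": 30, "high": 30, "medium": 90, "low": 180}

def remediation_sla(severity: str, reachable: bool) -> int:
    """With evidence that the vulnerable code is unreachable, a finding can
    be reclassified out of the short window into the longest one."""
    if not reachable:
        return BASE_SLA_DAYS["low"]
    return BASE_SLA_DAYS[severity]

print(remediation_sla("critical", reachable=False))   # 180
```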
So whether it's, you know, we've talked about developer workflows, into auditor reports, rollups, you know, up to your board or whoever you're reporting to. Besides reporting out the overall risk of the organization, you can actually, again, you know, have that evidence exactly where you need it. And so, you know, we've talked about dependencies, you know, you can see that chain from your code down through, you know, the five different dependencies. Or if you're, uh, you know, working on, you know, application code, source code, you see the data flow like function to function, um, how something is exposed. We haven't talked about it much, but we also understand additional context, right? So in the, you know, SAST or first-party code use case, you know, besides, you know, typical, you know, multi-file analysis, things like that, you could also provide context about how applications work with other systems. And so we're able to understand that, okay, I'm using this, you know, first-party code sanitization library, and so as long as that exists I can actually say, okay, this code is safe, and that helps reduce false positives in my first-party code as well. >> Oh interesting, and how about recommendations of like, hey, you should be using this first party sanitization system as opposed to the one that you're using? >> Yeah. So we um, I think there's there's two aspects of that. First is um, like let's say I have a, you know, exposed function I need to fix. We can recommend that upgrade. But then for many companies there's actually like a standard library that they want people to use. And so that's where you can actually customize it by adding a prompt into the system. Like, every application must be using this library to perform this function, in this case input sanitization, and the system will actually look for that. So that's kind of where it starts to bleed into like what engineering would call code code quality, right, like I'm doing the things that my organization needs to do. >> Yeah.
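The "every application must use the standard sanitization library" rule mentioned above can be approximated with a plain lint-style check. This is a hypothetical sketch, not Endor Labs' prompt-driven implementation: the `company_security` module name and the `request.` heuristic for "handles input" are invented.

```python
import re

# Hypothetical org rule: any file that handles request input must import the
# company's standard sanitizer.
RULE = re.compile(r"from company_security import sanitize")

def check_repo(files: dict[str, str]) -> list[str]:
    """Return files that touch request input without the standard sanitizer."""
    return [path for path, src in files.items()
            if "request." in src and not RULE.search(src)]

repo = {
    "ok.py":  "from company_security import sanitize\n"
              "name = sanitize(request.args['n'])",
    "bad.py": "name = request.args['n']",
}
print(check_repo(repo))   # ['bad.py']
```

A real system would work off the call graph rather than string matching, but the shape of the check is the same: a policy stated once, enforced mechanically across every application.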
And and of course the quality bleeds into security when the thing being recommended is um not just a design but actually a security issue, right? >> Yeah. Exactly. Kind of like the way we kind of see the market going is just like more and more um security and engineering, particularly in the AppSec space, are going to become a lot more aligned. And our kind of hypothesis here is that, you know, over time, while we'll still have compliance frameworks and things that, you know, companies are trying to maintain, security will become a lot less about checking the compliance box and more about integrating and kind of building that um, you know, secure-by-design system for developers and agents, so they don't have to really think about security. You just kind of build the thing you want to build and security is baked in. >> Yeah, I so I very much believe that. Caleb Sima has always said um security eventually gets eaten by engineering. My example of this is you go on site where someone's building a skyscraper and you ask, "Hey, um cool. Who's the uh department here responsible for the building not falling down?" >> Yep. >> There there just isn't one. Like it's built into every part of the schematic, right? It's just the materials that we use, how we set them up, the order in which we build, and it's like it's just part of the SOP. This is how we build software. Yeah, I think that's really cool. So what does um some of that look like as we start to have more and more of the code written by agents themselves? What what type of information, context here with the tool itself, can we provide to the agents so that when they're building... Honestly, this has happened way faster than uh than I anticipated. I thought this was going to happen. I didn't know it was going to happen like now, where the developers are like here's what I want, make this, and it just starts building.
So how do we provide your tooling and all that extra context you were talking about to the agents themselves so that they're building safely in the first place? >> Yeah, actually this is like I think the most exciting opportunity in security. I think, you know, last year, or actually about this time last year, everybody was freaking out about AI generated code and like is it secure or not secure. As we've seen how the development process is changing, there's actually a real opportunity to do some context engineering for security, right? So that's kind of alluding to like engineering that context. And so, you know, I was actually talking with one of our developers this week, and over the last few months, it actually starts at the prompt. Which is, you think in the past, you know, I would have my uh design specs document for example, I might have some security in there, at some point I might have had a security team review it. But now that prompt is basically that document where I actually kind of outline what is the feature I'm building, what are the specs that need to be built into it, and then the agent goes and builds it. So that's kind of the first step and there's a ton of research here. Even just saying like 'write secure code' improves the agent right away. But you can actually go much further than that, right? And you can guide it like you need to avoid this use case, or put these controls, or even pass these security tests. And so you kind of outline that, give it to the agent. That's kind of the next step. Then the agent starts building. And so this is where we were talking earlier about model context protocol and MCP. This is the next stage where the agent just needs access to security data. I'm sure most people are probably familiar with how AI, you know, generally works, but I think we often forget that it's trained on open-source code, which is an unlabeled set of code that has vulnerabilities baked into it. And so there's actually been a lot of research.
There was one from Carnegie Mellon University recently that ran a whole bunch of benchmark tests and found that only about 10% of code from agents is both correct and secure. And so this is the opportunity with context engineering, is actually helping these agents write that secure code by default when developers are, you know, splitting and running multiple agents in parallel, uh, you know, Claude Code or whatever uh system they're using. Um, so again that's where MCP comes in. So the way it works is, you know, besides the access to the backend system, it also ships with some kind of built-in skills or um rules, which are basically just prompts that guide the agent. You know, once it finishes a step, you know, maybe it's written your first party code, it should run a scan just to kind of see the code there. Or if it imports a dependency, um, to scan that code as well. And so this happens really, really fast kind of in the back end. You know, if you have an agent running for, I don't know, maybe it's working for two or three minutes, having, you know, a scan for 30 seconds of that isn't a big addition. That actually helps the agent correct itself, and hopefully what you get is pretty much secure by default, right? Again, at this stage the bias is to move quickly. You know, we tend to find, you know, anywhere from like 80 to 90% of vulnerabilities, whether it's in your first party code or the dependencies, get fixed at that stage. The challenge I think over the past year though for a lot of teams has been code review, right? That kind of next step where either I'm reviewing the code myself as the author that's now reviewing the agents, or I'm sharing it with my team. AI pull requests tend to be much much bigger, and then AI fails in ways that humans don't. Particularly, it makes silly mistakes, but these often show up as business logic or architecture flaws.
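The scan-after-each-step behavior described above can be sketched as a bounded self-correction loop. Everything here is a toy stand-in: the "scanner" just greps for a hard-coded secret, and the "agent" fix is a canned string replacement; a real setup would invoke the platform's scan tool over MCP and let the agent reason about the findings.

```python
def scan(code: str) -> list[str]:
    """Toy scanner: flag hard-coded secrets. Stands in for an MCP scan tool."""
    return [ln for ln in code.splitlines() if 'SECRET = "' in ln]

def agent_step(code: str, findings: list[str]) -> str:
    """Toy 'agent' remediation: read the secret from the environment instead.
    (A real agent would use the findings to decide what to rewrite.)"""
    return code.replace('SECRET = "hunter2"', 'SECRET = os.environ["SECRET"]')

code = 'import os\nSECRET = "hunter2"\nprint(SECRET)\n'
for _ in range(3):               # bounded self-correction loop
    findings = scan(code)
    if not findings:
        break                    # clean: safe to commit / open the PR
    code = agent_step(code, findings)
```

The loop is bounded on purpose: if the agent cannot converge to a clean scan in a few iterations, the finding should surface to a human instead of spinning.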
And this is kind of where code review starts to be really important, and actually where, it sounds controversial, but actually using AI to enhance it actually helps. Um, >> No, I totally agree. >> Yeah. So, we actually have a product called AI security code review. But what we're looking for there are those business logic and posture flaws, in addition to, you know, the standard, you know, SAST, secrets, IaC, all this stuff we've done in the past, to kind of see when um these design flaws have crept in. And so you can, you know, review whether this is a business logic, uh, could be like an authentication flaw, or it could be a prompt injection, whatever it is. But finding that, flagging it in the comment and then helping developers review it. And so we've been working with several large enterprise companies. Dropbox is one example I I can cite, um, that's actually now using this in their engineering organization to find these types of flaws in their code. So that's kind of the next stage. And then finally, you know, obviously we go to build, and that's where I think all of our traditional stuff still comes in, right? You're still going to CI. This is when we kind of do your typical, you know, deep scan with SAST, SCA, container, but at this point, most things are fixed. And so hopefully you're actually able to keep shipping pretty quickly, because there shouldn't be that much that's caught at the CI point anymore. >> Yeah, that makes sense. So what does that total flow look like? What what are the different product pieces and where do they go in the phases? >> Yeah. So again, MCP is an integration, so you're getting the whole platform there. So you're getting code scanning with SAST, open source dependency scanning with SCA, secret scanning for any exposed secrets, and then also malware, which has been another kind of big challenge over the past year, is the number of malware attacks targeting developer machines. So that's kind of that first stage.
Then at the pull request, that's where we do that code review. This is also where you can maybe start to enforce some gates around maybe SAST or SCA as well, if there are, um, what I would call, like, regular flaws in code or SCA. And then, you know, full platform, most people use us in, uh, CI just because that's kind of like the last big check for security teams. And then there's actually another part that we haven't talked as much about, but like post-deployment, you know, we might do a weekly deep scan of your entire repo, for example. The other, I think, exciting part that we didn't talk about is actually burning down the security backlog, right? So with tools like, uh, you know, GitHub Copilot or Claude, you can actually use that MCP server to kind of fix things after the fact, because, you know, CVEs in open source dependencies, they're already deployed in most cases. And so some of our forward-leaning security teams, I'll just use Cursor as an example, they're actually using Endor Labs, and the security team just fixes everything themselves. They don't want to stop engineering from shipping, and because the volume is so much lower, they're able to, like, find, prioritize, and then kind of fix in the backlog on their own. And so that's kind of, I think, also where it's going: as security teams get more technical and more comfortable using these tools, security could be the one that actually applies these fixes in some cases, rather than engineering. >> Yeah. And ideally, I think what you're talking about there is they're recommending changes that will just apply, like, updates to the golden image, right? >> Or upgrade the dependency at the application layer. Yep. >> Or a new library or an expansion [clears throat] improvement of the library that would make it better. Yeah, that makes sense. And that just propagates forward. And I like your point about looking at the whole repo, because it could be that, okay, we're starting this new project.
We've got this new Endor Labs tool and that's going to be fantastic. What about the other 940 projects already in there? So yeah, you got to look at those. Talk a little bit about, like, the way it actually gets integrated. So are we talking about hooks, like in Claude Code? Are we talking about, like, a system prompt? Like, how does the normal flow of the agent harness tap into and use it? Obviously it has access to the MCP, but a big trick is, like, how does it know when to call it and what to call it for? >> Yep. Exactly. So we actually use both. We have hooks as well as the MCP server. The other way we're actually packaging it more and more is with skills, right? I think that's something that Claude, or Anthropic, pioneered, but it's getting adopted by the other providers as well. Um, so basically it comes with, you know, um, here's a whole bunch of different commands that you could run, kind of when and why you would run them, packaged with some of those, like, basic rules that I, you know, mentioned a little bit earlier. But basically, like, okay, you've imported a dependency, that means you have to run the scan. And so it's kind of built into the agent framework much more directly. Um, it's not, you know, requiring a developer to remember it. And I think back to a year ago, like, when MCP was new, one of the challenges I often found is that I would have it there and then the agent would forget to use it, even if I had the rule in the repo. And so, like, the way hooks and skills work, it's much more integrated, and so, you know, it actually happens now consistently, rather than having to be a little bit scattershot, where I'd have to, you know, remind the agent that it's supposed to run its scans. >> Okay, cool. So an install would be something like: install the MCP, install the skills, and in Claude Code, for example, when you start up, it's looking at that skills index and it's got triggers in the description field. Use when, right?
So it's able to pull those up, and then I guess you can have all your system prompts and your MCP calls and stuff inside those skills. >> Yeah, exactly. And we were actually a launch partner, um, with Cursor when they announced their, um, hooks, and so we're extending it, but the very first one we built, um, for that ecosystem was actually around malware, because increasingly developer workstations are the target of malware. And so, you know, again, that really taps in, you know, as soon as I do that prompt and the agent starts to perform a task, in this case pulling in a dependency, it actually goes back to Endor Labs, double checks the dependency, confirms there's no malware in it, and then installs it. And it happens in, you know, seconds or less. And so, again, that's kind of built in. Like, once you have that installed, you don't even kind of know what's happening. It just happens before the dependency is even recommended or lands on your machine. Um, kind of happens in the background for you. >> Yeah, that's fantastic. So what do you think this is sort of heading towards if things keep going in this direction? Like, what does this look like? How do you get more integrated into things? What does that user experience start to look like when it's completely hands-off? Like, even, you know, CI and everything is more hands-off. The code review starts being more hands-off. Like, how do you get more touch points into the system? Maybe the integration with the organization, maybe making that more fluid. Like, what do you see as the opportunities to just make it even more smooth?
Yeah, I mean, I think the technology is evolving so fast it almost feels like I can't predict a year ahead right now, because, uh, you know, a year ago I don't think we were talking about skills, and now skills are, like, kind of the new thing this year, and I think even in six months it's going to look different. But the way we're building our platform now is using all of these technologies that we're talking about, building on two fronts. One, we were talking about this integration directly into these agent harnesses and things like that. Um, but also, again, we were kind of built fully API-first, and now we're extending that to the world of agents, so that you as a customer can architect your own agents in the future around this and build your own agents off of our tools and data. So again, still pretty early. A lot of this is evolving right now, but that's kind of the future we're building towards: we'll have our agents there, and whatever agents you're using, whether that's, you know, the agent in Cursor or in Claude Code or, you know, OpenAI Codex, they can interact directly with us. So, like, the future, if you kind of think about it, there's this old concept from the beginning of the 2000s called test-driven development, TDD. I'm sure you've heard about it in the past. Yeah. >> I love it. I love it. I use it everywhere. >> Yeah. And it feels like it fell out of favor a little bit, but, like, now I think it's the way you kind of run. It's back, right? I write my spec. I write the test. I let the agent go, actually probably a swarm of agents now, go run and build things. All those agents have access to that security tooling that my organization has kind of set in the back end. And I as the developer am mainly interacting with the prompt, maybe getting to the code as I need to. I kind of think it's like the way I write now today. Like, I do a lot of writing. Most of it still comes from me.
If I ask, you know, Claude or something to write something, I still have to edit it. But it's gotten to the point where I'm looking for, like, adding the human flavor back in, or my ideas. But I think that's kind of where it'll go: mostly in the prompt, occasionally getting into the code as I need to, and then, yeah, it's like a chat experience. It goes and builds it. It tests it. I verify it, check the code review, and it ships. And if I'm the security team, you know, the things in my backlog just keep shrinking. And that's, I think, a world that hasn't really existed. And I think it's actually exciting. Like, I think when you talk to most organizations, the risk backlog is just that: it's a backlog that keeps growing. They, you know, tackle the two or three things that are most urgent, and a lot of it doesn't get resolved. But now you can actually burn down that backlog, whether it's tech debt or security debt, and, yeah, keep shipping. In the future, it'll be really exciting. >> Yeah. Absolutely. Yeah. I love the idea of what you're talking about, where it's like, well, you can use these agents. We provide them if you don't have any yet. If you do have some, just teach them about this set of hooks and skills and tools available, and they just become a tool set that those agents can use to build. >> Yeah. The benefit there, again, is because of our background in static analysis. So that's kind of, I was talking earlier about that call graph piece, right? If I ask an agent to, like, scan and review my whole codebase, it's going to be really slow and expensive.
Um, static analysis gives it, you know, those tools to do the security check really, really quickly, cost-efficiently, um, at scale, right? So that's why we've been testing with some of our largest customers, like, you know, Elastic, Peloton, and others, that are these huge companies with large engineering teams, so they can kind of build and ship quickly while, you know, embracing all these different new AI ways of working. >> That's fantastic. Anything, uh, exciting coming out? You going out to RSA by any chance? >> We'll be at RSA. We'll be sharing a lot more, um, we announced our AI SAST product back in November, and we'll be sharing a lot more about it at RSA, as well as, um, some of the other stuff that we'll be doing around developer workstation security. So where all this agentic work is now happening is actually on that developer workstation, so some additional, um, security for that as well. >> Oh, really cool. Well, I will be out there. I'm local anyway. So >> Oh, excellent. Well, look forward to seeing you there then. >> Yeah, absolutely. Anything else you want to mention? Uh, could you mention where, uh, people can find out more? >> Yeah, absolutely. Check out endorlabs.com. So, obviously, as the security team, there's a bunch of stuff for you there, but if you're a developer, um, you can try out our MCP server. It's free. You can go grab it. We have lots of resources about context engineering for security or, uh, you know, secure prompt practices. So lots of resources. Um, yeah, feel free to check it out again at endorlabs.com. >> All right, Andrew, thanks so much. >> Thanks, Daniel. Appreciate it. Talk to you soon. Take care.

Video description

Check out Endor Labs here: https://ul.live/endor_labs_yt

In this interview, Andrew from Endor Labs explains how their platform uses reachability analysis to bridge the historical gap between security and engineering teams by proving which vulnerabilities actually matter.

What we talk about:

Bridging the Security-Engineering Gap: How providing hard evidence of risk through reachability analysis stops the guessing game and builds trust between developers and security teams.
Full-Stack Visibility: How Endor Labs maps vulnerabilities across the entire application stack, from first-party code down to open-source dependencies and container layers.
Frictionless Developer Workflows: The importance of integrating security directly into IDEs and pull requests to reduce massive ticket backlogs and keep engineering teams moving fast.
Securing AI Coding Agents: Tackling the fact that much of AI-generated code is insecure, and how "context engineering" using tools like the Model Context Protocol (MCP) and agent skills can enforce secure coding by default.
The Future of AppSec: Using AI for advanced security code reviews to catch business logic flaws, and moving toward a future where security is seamlessly built into the engineering process from the ground up.
00:00 - Introduction
01:54 - How vulnerability data is delivered directly into developer workflows
05:02 - The underlying technology combining AI and static analysis
07:02 - Real-world workflow examples using the Log4j vulnerability
09:53 - Securing legacy containers and managing golden images
17:42 - Applying context and guardrails to autonomous AI coding agents
26:00 - The future of automated security and the evolution of test-driven development
29:27 - Upcoming events and where to find more information about Endor Labs

Subscribe to the newsletter at: https://danielmiessler.com/subscribe
Join the UL community at: https://danielmiessler.com/upgrade
Follow on X: https://x.com/danielmiessler
Follow on LinkedIn: https://www.linkedin.com/in/danielmiessler/
