bouncer

Low Level · 270.3K views · 10.8K likes

Analysis Summary

30% Low Influence

“Be aware that the host uses the genuine failure of AI-automated hacking to reinforce the necessity of his specific paid courses and platforms.”

Transparency Mostly Transparent
Primary technique

Performed authenticity

The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.

Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity

Human Detected
98%

Signals

The transcript exhibits highly authentic, unscripted human behavior including colloquialisms, emotional reactions, and natural disfluencies that AI cannot currently replicate convincingly. The content is a personality-driven commentary on technical news, consistent with a human creator's perspective.

Natural Speech Patterns: Frequent use of filler words ('um', 'no but like', 'right'), self-corrections, and conversational tangents ('look at this guy look at him').
Contextual Humor and Slang: Use of internet-specific slang like 'slop reports' and 'bro literally wrote a use after free' in a reactive, emotive tone.
Technical Improvisation: The speaker live-analyzes code snippets, stumbling over technical terms ('nu...') and explaining logic in a non-linear, spontaneous way.

Worth Noting

Positive elements

  • This video provides a concrete, technical breakdown of why AI-generated bug reports often fail, using real-world code examples from the Curl project.

Be Aware

Cautionary elements

  • The host frames the systemic issue of bug bounty spam as a moral failing of 'AI hackers' to better market his own traditional hacking courses.

Influence Dimensions

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: March 23, 2026 at 20:38 UTC · Model: google/gemini-3-flash-preview-20251217
Transcript

Makes me sad, man. Curl to discontinue its HackerOne bug bounty due to too strong incentives to find and make up problems in bad faith that cause overload and abuse. Guys, this is not the first time that I've reported on our friend Daniel Stenberg here. If you don't know, Daniel Stenberg is the CEO, what he calls a code emitting organism, of the curl project, right? Both curl and libcurl, which are fantastic pieces of software that allow the entire internet, effectively, to work. Between it and wget, to make web requests programmatically is kind of the end state here, right? So if you're not aware, for a long time, right, Daniel and friends have fought against the era of AI. They have fought long and hard. He's over here getting awards. Look at this guy, look at him, look at him winning. Um, no but like, against the the slop reports that have exploded in 2024 and 2025 with the onset of not only AI but supposedly, you know, AI security researchers, right? And so for a long time, the never-ending slop submissions took a serious mental toll to manage and sometimes a long time to debunk, time and energy that is completely wasted while also hampering our will to live. The mind-numbing AI slop, humans doing worse than ever, and the apparent will to poke holes rather than to help has caused Daniel to literally remove his project curl from HackerOne. HackerOne being the platform that allows you to register your code with it and receive security reports from researchers, and if they find vulnerabilities you can pay them out, which is the whole point of bug bounty, by the way. It is a, it is a great concept, but the incentive structure is a little weird, right? It's kind of odd that we have to, you know, incentivize research with money. Kind of just the nature of the world, unfortunately. Uh, but that being said, now that we have AI to potentially automate things and no real downside for submitting garbage to HackerOne, we now have people like Daniel saying, hey, enough is enough. Yeah.
So on the, I believe, 26th of January here, he removed the security-related posting here, part of the HackerOne report, or the, uh, of the curl, you know, security area of, um, of GitHub, basically saying, hey man, we're done. A good example of what Daniel and team are dealing with, by the way, are bug reports like this, right: use-after-free in OpenSSL keylog callback via SSL_get_ex_data in libcurl. A few things here. First of all, this is a report that talks about a vulnerability in OpenSSL, which is a big deal if real. However, comma, this is a vulnerability report platform for libcurl. So, not sure what we're doing here. And so he talks about the fact there's a use-after-free if you use this function, which can crash or information leak. It can be triggered manually. Blah blah blah blah. Okay, fine. So, let me see your code. Where's your, where's your little fuzzer harness? He has a function called my_keylog_callback, which, don't worry about what it does, it's just a callback that takes an SSL context and a pointer to a line. Okay. And you say void pointer equals get the data from the SSL context. And then in the function they have main. They create a new SSL context. This is, we're going to be a TLS client. We initialize the context. We call it SSL. We do some stuff. We then free the SSL context, and then we use the SSL context, and that's the bug, question mark. Um, use-after-free because we freed it.
Bro literally wrote a use after free and decided, yeah, this is definitely a bug in libcurl. And again, it's, we, like, where is curl even involved here? So, like, I, I don't even know what to say about this. Or, or this one, this one's even better: buffer overflow in curl MQTT test server. MQTT being, message queue, it's, like, it's a protocol. The MQTT test server in the curl project contains a buffer overflow vulnerability due to improper validation of password length fields in MQTT. Or, or this one, this one's pretty good: buffer overflow risk in curl inet_ntop and inet_ntop4, which are used to convert, network to presentation. Basically, it's an IP address conversion function that converts from the binary format to a human-readable format. And obviously, like, this could be vulnerable to something, right? Because, like, it has to do with converting a number to a string. That being said, the string is, like, generally always fixed length. It's never more than 3 * 4 + 3, right? Like, digits. Like, it's very hard to get this wrong, but okay, let's give, let's give this guy a break and see what can happen. Okay, so he says, um, hm, use of a strcpy vulnerable to overflow. Oh, okay. Strcpy, potent... oh, wait, hold on. So literally there is a guard clause: strlen of temp is greater than size, GTFO, there's no space, right? This is literally, like, a very basic boilerplate guard clause before you go and do a dangerous thing. And so this is, like, early, before, I think, you know, Daniel had kind of, like, gotten the smell test for this stuff. First of all, you can see, impact, conclusion, like, this style screams AI writing. But let's go into this. Thank you for your report. One of his maintainers asks: if the strlen of temp is greater than size clause in curl_inet_ntop prevents this kind of overflow, describe what's going on here. Have you ever actually tried this PoC yourself? You mentioned risk here. Does this mean you can actually not point to a real security flaw?
And then you get the classic thank you for your feedback. You are absolutely right that if strlen of temp is greater than size, then this will not be an issue. You're correct that in the provided code the checks prevent overflow. That being said, checks may be bypassed in a different implementation. He's like, I call slaps. Nothing weird's going on here. But then you get the classic double down, right? The AI double down, where they know they're wrong, but they're going to keep going. Why buffer overflows can still occur: although the strlen of temp greater than size check is performed beforehand, strcpy does not check the size of the destination buffer. If the size of dest is smaller than the size of temp, strcpy will copy beyond the bounds of dest, resulting in a buffer overflow. Brother, did you not... what? What is happening? What is happening in this line? What is this line? That is what frustrates me so much about AI, by the way. It's not even that we're just, like, wasting everyone's time in the first place. It's just, like, its inability to see past its own nose on certain things is so infuriating. You can tell this happened in 2024, before, like, the AI bots got super advanced, because Daniel, rightfully so, is like, "AI slop. This is garbage." And the response is, "I understand you're upset, but let's keep the conversation respectful. If you need to discuss something, I'm happy to listen." Which literally is, like, the exact response you get from ChatGPT when you tell it it's an idiot. I have no experience doing that. But if that were something that I would do, maybe that's what I would experience, potentially. Maybe. All right. But, but enough on the, uh, the bug bounty submissions, right? I want to talk a little bit just more about, like, you know, what my thoughts are on the situation and maybe what we do going forward, right? So, I guess, to level set here: why do we have bug bounties, right? What, what is the point of this system?
It may seem bad to have a monetary incentive to submit bug reports, but what we're trying to do with bug bounty programs is incentivize people to find bugs and then not sell them to threat actors, right? Or even just incentivize people who are not in the exploit black market to just do security research, right? If I could tell you that you can go find a vulnerability in, like, a Netgear or Ubiquiti router, right, that's, like, remotely exploitable, and you get paid, like, 5 to 10 grand, maybe you go learn how to hack and maybe you go make the world a safer place. That's a cool thing. That is a cool concept. It makes the world better and you can get some cash. That's dope as hell, and I love that. And so bug bounty programs were created to effectively get rid of this weird market where people have these zero days, they're sitting on them, and the only reason they don't, like, report them is they want to get paid out for their time, right? So instead of selling them to a weird CN as a service, you know, organization, we now instead have organizations that pay them out, right? So HackerOne is one of them, and there are others. And so the problem now is, when we live in this, this era of AI, we create a bit of a path to automation, where, like, in theory, if you can get the AI to talk to you in the right way with the right prompt, there may be a world where we can automate away bug bounty programs, right? Where people can just, like, literally have a bot that they've written with the proper prompt, look at the source code for curl, the Linux kernel, SMB, etc., and they just can churn out some money. And I do want to highlight there actually is an AI bug bounty, like, agent that is, that is doing just that. This company, XBOW, holy dark mode, by the way, Batman, is a company that maintains an autonomous offensive security platform that delivers the depth and results of a premium pen testing engagement in a fraction of the time. And, you guessed it, using AI.
And this one's legit, because if you go to their HackerOne dashboard, they actually have submitted, and had accepted, bugs in Booking.com, Informatica, Airbnb, tom rooters, etc. There's, there's tons of vulnerabilities that they have found. I think the pattern that you're seeing here is a lot of these are, um, going to be bugs in, like, web frameworks, right? Maybe not, uh, source code in, like, the Linux kernel, for example, or curl. But the fact that it can find bugs, and a lot of them, you know, so many of them are legitimate, at least half, is a good thing. Now, I want to highlight here: all because XBOW can do it doesn't mean you can do it. Sean Heelan, a researcher who found a vulnerability using o3 in the SMB implementation in the Linux kernel, makes it very clear that the signal-to-noise ratio that he experienced, as a very, very talented researcher with a very, very well-thought-out, uh, prompt process, and, like, process of giving small pieces of the code to the AI... the signal-to-noise ratio, meaning the bugs that were real versus the bugs that were fake, and it's not just bugs, it's also a page- or two-long report that he has to triage and validate, is 1 to 50, right? Generally. So that means if he runs this process, he will get a hundred reports, of which potentially two are real bugs. And he, Sean, has to go through and look at every one of those reports, figure out where in the code it says it's real or there's a vulnerability, and triage that, right? And so that is what Daniel and his team have been dealing with. And that's why they eventually closed down their program on HackerOne, right? I want to just make this video about, like... I think that AI security research is fine if you do it in good faith, if you do it in the way that I think it should be done, where you're using it as an assistant to help you audit the code.
If you think something smells weird in the code and you've already kind of done the cursory glance yourself, you can go use the AI to have it triage or write a fuzzing harness, right? Have the AI read the code, have Cursor read the code, and produce a fuzzing harness that's compliant with the API or the protocol you're trying to fuzz. Or maybe you don't know the definition of certain things. Maybe you're doing research on, like, PCI passthrough, like, weird hypervisor stuff, and a lot of words don't make sense to you. So you can use the AI as, like, a buddy to help you out. But this, man, or, like, making up bugs in functions and then causing programs with code that is, that is very important to society, curl being one of them, to kick themselves out of bug bounty programs because it's just too much work for their small team, a thankless team, a team that gets no money, by the way, is just, you know, really unfortunate for the world. So, if you're out there, dude, if you're doing security research, good on you. Keep doing that. If you're doing AI security research, be careful, right? Don't, don't just submit everything and anything AI tells you. AI is dumb as hell most of the time. And all because he can do something, all because he can drink wine on a Monday night and make YouTube videos, doesn't mean you should. Thanks for watching.
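The "write a fuzzing harness" workflow the host endorses can be sketched concretely. Below is a minimal libFuzzer-style harness, the kind of boilerplate an AI assistant is genuinely good at producing; `parse_header` is a hypothetical target function standing in for whatever library code smells weird to you, not anything from curl:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical target: a tiny "KEY:VALUE" header parser. In real use
 * this would be the library function you suspect is fragile. Returns
 * the key length, or -1 when the input has no usable key. */
static int parse_header(const uint8_t *data, size_t size) {
    if (size == 0)
        return -1;
    const uint8_t *colon = memchr(data, ':', size);
    if (colon == NULL || colon == data)   /* no separator, or empty key */
        return -1;
    return (int)(colon - data);
}

/* libFuzzer entry point: the engine calls this repeatedly with mutated
 * inputs. Crashes and sanitizer reports are your findings.
 * Build with: clang -g -fsanitize=fuzzer,address harness.c */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_header(data, size);
    return 0;
}
```

Note the difference from the slop reports above: a harness like this actually executes the target, so any crash it finds is reproducible evidence rather than a speculated "risk".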

Video description

🏫 MY COURSES
Sign-up for my FREE 3-Day C Course: https://lowlevel.academy
🧙‍♂️ HACK YOUR CAREER
Wanna learn to hack? Join my new CTF platform: https://stacksmash.io
🔥 COME HANG OUT
Check out my other stuff: https://lowlevel.tv

© 2026 GrayBeam Technology Privacy v0.1.0 · ac93850 · 2026-04-03 22:43 UTC