bouncer
← Back

Zen van Riel · 1.2K views · 59 likes

Analysis Summary

45% Low Influence
mildmoderatesevere

“Be aware that the 'security crisis' is framed specifically to make the creator's 'Starter Kit' feel like a necessary solution to an urgent industry failure.”

Ask yourself: “What would I have to already believe for this argument to make sense?”

Transparency Mostly Transparent
Primary technique

Performed authenticity

The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.

Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity

Human Detected
90%

Signals

The content exhibits high-quality, human-led educational storytelling with natural speech rhythms and a clear personal identity. While the subject matter is AI, the presentation layer shows the hallmarks of a human creator providing expert commentary and personal career advice.

Natural Speech Patterns The transcript contains natural conversational markers, rhetorical questions ('which is a big problem, right?'), and slight informalities ('vibecoded', 'heading into the year') that feel authentic to a tech creator.
Personal Branding and Social Proof The creator links to a personal LinkedIn and a Skool community, and uses first-person perspective ('Let me show you', 'I break all that down') consistent with a personal brand rather than a faceless content farm.
Specific Domain Expertise The script references specific, niche industry events (Enrichly startup failure, Cursor usage) and technical frameworks (OWASP Top 10 for LLMs) in a way that suggests human curation of current events.

Worth Noting

Positive elements

  • This video provides a useful introduction to the OWASP Top 10 for LLM Applications and correctly identifies a growing industry need for AI-specific security auditing.

Be Aware

Cautionary elements

  • The use of extreme salary outliers ($600k) to frame the potential ROI of a 'starter kit' may set unrealistic expectations for entry-level learners.

Influence Dimensions

How are these scored?
About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed March 23, 2026 at 20:38 UTC Model google/gemini-3-flash-preview-20251217
Transcript

Vibe coding promises you that you can build apps 10 times faster and it works until it breaks. A Vivecoded dating app just had their entire database exposed. 72,000 private images leaked, including government IDs. And the worst part is this wasn't even a sophisticated attack. This was basic security that nobody knew to check for. Everyone is racing to become a vibe coder nowadays. But the role that actually secures what gets built, AI security engineer, has almost no competition and gets paid seriously well. Let me show you why this might be the smartest career move that nobody's talking about. Here's what's happening in the industry. There are 30 million plus software developers worldwide, and 84% of them now use AI coding tools. That's up from 44% just a couple of years ago. Over half of them use AI coding tools every single day. And 41% of all code being written right now is AI generated or at least AI assisted. We are shipping software faster than ever. But at the same time, studies show 40 to 70% of AI generated code contains massive security vulnerabilities. Stanford, as an example, found that developers using AI assistants actually produced more security bugs than developers coding manually. And they were even more confident that their code was secure instead, which is a big problem, right? So, we've got this massive acceleration in how fast ships with almost no acceleration in how fast the software gets secured properly. This means that software is more insecure than it's ever been. And the role that sits in that gap, well, almost nobody is filling it. Let me give you an example. The startup enrich lead. A founder built a SAS product with cursor, zero handwritten code. It was entirely fabcoded. 2 days later, attackers found holes. They got into his Stripe account, refunded every customer, and emailed his entire leaked user list. He had to shut down completely. And this is not an isolated case. Security researchers scanned 2,000 bibcoded websites and found API keys just sitting in the front-end code. One scan of over 1,600 apps built unlovable found that 10% were leaking user data, names, emails, even financial information. The V coders are shipping every day, but nobody is securing what they actually build. Now, if you're wondering what it actually takes for you to get into security for AI systems, what skills matter, what tools to learn, and how this connects to the broader AI engineering path, well, I break all that down in my free AI starter kit, which covers everything from fundamentals to production systems. The link is in the description down below. Before you head off and check it out though, I want to show you why this role is so valuable right now and why you should be considering this path heading into the year. Now, there's three reasons for this. One is that the talent gap is massive at the moment. For example, there are nearly 5 million unfilled cyber security jobs globally. But here's the thing about that. Traditional security is actually the easier part to staff. AI security is where it gets interesting. Over a third of security teams will say that AI is one of their biggest skills gaps and fewer than a third have anyone with real AI expertise at all. The World Economic Forum found only 14% of organizations feel confident that they have the people they need to secure their software properly with all the advancements in AI. And because of that, reason number two is that the compensation is serious. Security engineers focused on AI can average around $150,000 and up with top earners hitting 280 or more. At frontier labs like openi and anthropic security engineers pull 400 to 600,000 total composation and that's of course the top of the market. It shows how much growth there is for you if you take this serious. Reason number three is that this role is genuinely futurep proof. Here's why. As AI gets more powerful and more people use it to build software, you don't need less security, you need more. Every vicoded app that ships is another attack surface. The skills that you build here will become more valuable as AI adoption grows. You're not competing with AI. You are securing what AI creates. Now, you might be asking, how do I actually get into this field? What do I need to learn? You're going to be essentially combining three skill sets. First is going to be your security fundamentals. Understanding how to do threat modeling, penetration testing, and risk assessment. If you've done any app security work or you are already in the security branch, you're already halfway there. Second is a little bit of AI/Machine learning knowledge. You need to understand at the very least how these language models work to understand how they try to break systems or how they write code that's insecure. You don't have to understand the neural networks between all these models, but at the very least you need to understand how language models process prompts and how they work from a conceptual perspective and not just as a magical prompting machine. And third, you want to understand AI specific attack techniques and this is new knowledge that many existing security engineers don't know about proper prompt ejection, data poisoning, and model extraction. One great way to get started is by exploring the OASP top 10 for LM applications. This is the industry standard for the most common AI vulnerabilities. I'm going to be honest about why this role is so underrated. It's not glamorous. Security engineer doesn't have the same ring as AI researcher or indeed five core. We can just build anything in a day. You are not going to be building the LLM models. You're breaking them. You're not launching products. You're finding the holes before attackers do. But if that does sound interesting to you, then you're in the right video. So, let me bring all this together for you. If you want to go deeper on how to build these skills, my AI engineering road map breaks down the full path. So check it out in the description down below.

Video description

🎁 Get the free AI Engineer Starter Kit: https://zenvanriel.com/ai-roadmap?ref=RRJaLUJEG5Q ⚡ Become a high-paid AI Engineer: https://zenvanriel.nl/job AI security engineer is the role nobody's talking about but everyone will need. Vibe coding is everywhere, but what happens when it breaks? A vibe-coded dating app just leaked 72,000 private images including government IDs. And this wasn't a sophisticated attack. It was basic security that nobody knew to check for. While everyone races to become a vibe coder, the role that actually secures what gets built has almost no competition and pays seriously well. This video breaks down why AI security engineer might be the smartest career move of 2025. What You'll Learn: - Why vibe-coded apps are creating a massive security crisis (with real examples) - The shocking stats: 40-70% of AI-generated code contains security vulnerabilities - Three reasons AI security engineering is exploding right now - The exact skills you need: security fundamentals, AI/ML knowledge, and AI-specific attack techniques - Compensation breakdown: from $150K average to $600K at frontier labs - How to get started with the OWASP Top 10 for LLM applications Timestamps: 0:00 The vibe coding security crisis 0:41 AI Coding In The Industry 1:45 Real examples of vibe coding failures 2:51 Reason 1: The massive talent gap 3:31 Reason 2: Serious compensation 3:55 Reason 3: Future-proof career 4:20 The three skills you need Why did I create this video? Everyone's focused on building faster with AI, but almost nobody is learning how to secure what gets built. This gap is creating one of the best career opportunities in tech right now. Connect with me: https://www.linkedin.com/in/zen-van-riel https://www.skool.com/ai-engineer Sources: - Stanford study on AI-assisted code security: https://arxiv.org/abs/2211.03622 - ISC2 Cybersecurity Workforce Study 2024: https://www.isc2.org/Insights/2024/10/ISC2-2024-Cybersecurity-Workforce-Study - World Economic Forum Global Cybersecurity Outlook 2025: https://www.weforum.org/press/2025/01/global-cybersecurity-outlook-2025-navigating-through-rising-cyber-complexities/ - Stack Overflow 2025 Developer Survey: https://survey.stackoverflow.co/2025/ai - OWASP Top 10 for LLM Applications: https://genai.owasp.org/llm-top-10/ Sponsorships & Business Inquiries: business@aiengineer.community

© 2026 GrayBeam Technology Privacy v0.1.0 · ac93850 · 2026-04-03 22:43 UTC