bouncer

The Coding Sloth · 378.7K views · 21.9K likes

Analysis Summary

30% Low Influence
mild · moderate · severe

“Be aware that the 'safeguards' and 'level-based' testing are designed to showcase the sponsor's specific features as the unique solution to common AI 'slop' problems.”

Transparency Mostly Transparent
Primary technique

In-group/Out-group framing

Leveraging your tendency to automatically trust information from "our people" and distrust outsiders. Once groups are established, people apply different standards of evidence depending on who is speaking.

Social Identity Theory (Tajfel & Turner, 1979); Cialdini's Unity principle (2016)

Human Detected
95%

Signals

The transcript exhibits high levels of personality, specific personal experiences, and natural linguistic quirks that are characteristic of a human creator. The content is a first-person narrative about using AI, rather than a synthetic compilation or automated script.

  • Natural Speech Patterns: Self-deprecating humor ('unhealthy amount of time', 'not touching grass', 'unlike me') and colloquialisms ('good sloppy', 'smooth brain').
  • Personal Anecdotes and Experiments: The creator describes a specific three-level experiment they designed and ran themselves to test prompting strategies.
  • Opinionated and Subjective Voice: Strong personal stances on how programmers communicate and the 'obvious' nature of learning to code first.

Worth Noting

Positive elements

  • The video provides a highly practical three-part prompting framework (Task, Background, Do Not) that is genuinely effective for LLM-based software engineering regardless of the tool used.

Be Aware

Cautionary elements

  • The 'experiment' is a controlled demonstration where the sponsor's tool is the protagonist, making its specific UI behaviors seem like industry-standard 'safeguards' that other tools lack.

Influence Dimensions

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed March 23, 2026 at 20:38 UTC · Model: google/gemini-3-flash-preview-20251217
Transcript

I have spent an unhealthy amount of time programming with AI, and I want to share all the tips and tricks I've learned after spending hundreds of hours programming with AI, just so you don't have to waste your precious time, because I assume you actually have a life, unlike me, and so you can get the same benefits as I do, because a lot of people have been saying they've gotten good results with AI and bad results with AI. And after spending a lot of hours not going outside, not touching grass, I think I finally figured out how to get consistently good results with it. It's definitely better than before, I will say. I'm still learning how this tool works, but I at least want to share what I've learned so far with you. So, knowing this, let's start with the first tip. Oh, and this video is sponsored by JetBrains. Tip number one: learn how to program first. Yes, it's an obvious tip, painfully obvious, but I'm going to say it anyway because apparently it needs to be said. If you want AI to be useful at programming, you need to know how to program. This isn't rocket science. This isn't controversial. It's just facts. As of right now, AI is a multiplier. Albeit a small multiplier, but still a multiplier. It multiplies what you already know. You can't outsource your brain to AI if you don't have a brain in the first place. Tip number two: be as specific as humanly possible. I believe most people are not specific enough with AI, and I assume that's one of the big reasons why they get bad results. And now that I think about it, programmers aren't exactly known for their communication skills, so this actually makes sense. Which means, unfortunately for you, you'd better start watching some communication tutorials, because those skills matter now, because AI is only as good as the context you give it. And I decided to run a quick experiment to prove it to you. I told AI to build a Google Docs clone three times, and each time I gave AI more information to see what happens.
And I did this experiment with JetBrains' AI assistant, Junie, because Junie actually has some safeguards to prevent you from doing level one: the smooth brain. If you talk to AI like this, you deserve the bad results. The prompt I'm going to give it: build Google Docs. That's it. If AI could build a decent version of Google Docs with just this, I'd be impressed and terrified, since this would basically replace programmers. But I'll tell you this already: it can't. So, please don't ever prompt AI like this. Don't do it. Level two: the average person. More details, but they're not technical. This is usually how most people prompt the AI, but I'm honestly being generous because I don't think most people actually put in this much detail. They're kind of a mix between level one and level two. Level three: the ideal. Now, if you're serious about using AI, or you seriously want to improve its performance, this is how you should be prompting it, based on my experience and based on best practices from the actual AI companies. For this prompt, I'm giving it full technical details: the exact tech stack, how things should work, terminal commands, basically information that only a programmer would know. Brace yourself, we're about to go through a lot of tips. I've also included documentation, screenshots of how I want it to look, since it's easier to show a design rather than explain it, and I also gave it links that it can reference. Now, there are still a lot of problems with this prompt and we can do even better, but I'll talk about that later. People are always like, why should I use AI when I can just Google it in 5 seconds? And personally, this always confused me a bit, because why not do both? Find the resources and solution you need and give them to AI. AI assistants can actually visit the web, and a lot of companies are starting to recognize this, because they have a thing called an llms.txt page, which is documentation that's formatted for AI.
And then you can let it write the code using the information from the resources. Saves you time from typing, but I'm lazy, so that's a personal preference. Oh, and a quick life hack for you lazy people: you can write a simple prompt, but make sure it has all the technical information. And once you have that, tell AI to enhance the prompt with LLM best practices. AI is going to make your prompt more detailed, and it's going to perform better. The results of level one: nothing. Junie actually asked for more information, which impressed me a lot, because this is how a real developer should respond when you give them zero information. Most AI tools would just assume everything and then vomit out code. So, I am very happy to see that JetBrains added safeguards for this. While it didn't fulfill the task, it's better than giving me slop, which honestly makes Junie better than most AI tools. So, if you're one of the people that actually prompt like this, I definitely recommend using Junie, since you'll be forced to use better prompts and actually communicate with the AI, because no sloppy is better than bad sloppy. Am I right, fellas? Results of level two. Now, with this much information, Junie did actually start building. It created a plan and scaffolded the app, which is pretty nice. Now, the results. Yeah: errors, zero styling, and it didn't even run at first. It looks like HTML threw up on my screen. I did have to spend some disgusting human effort to fix it. But here's the interesting part: once I fixed it, it actually had most of the features I asked for. It's just... what's the word I'm looking for? In development. Yeah, we'll go with that. So, what went wrong with this prompt? Well, it's not that Junie was bad. It's that I didn't give it enough technical context to make good decisions. Junie had to guess all the technical implementations, the tech stack, how to handle websockets, and it didn't even attempt to style it, since we didn't tell it how.
And when AI has to guess architecture, you're more likely to get bad output, and you're more likely to get catfish code: code that looks good at first glance, but when you look deeper, it's disgusting. Now, the results of level three: it actually ran first try. No errors. All the features I asked for worked. The styling was better. It's not perfect, but overall, it's way better. And the code itself? Pretty good. It's code that I would basically write, since I gave it all the documentation and resources that I would personally use to write the code myself. So, what does this mean? Well, the difference between 'AI is useless' and 'AI is incredible' might not be the AI. It might be you, specifically how well you communicate what you want. But as you can tell, even with this much detail, Junie didn't get everything right. There are still some problems. And this isn't just because the information wasn't detailed enough, you can always add more details, but it's also because I was asking it to do too much at once. A lot of people say AI is great at small tasks, but it's not that good at big, complex tasks. So why not use that to your advantage? This is a tip from everybody: the smaller the task, the better the results. If we already know that AI is good at small tasks, why not break a big task into smaller tasks? And if you can't break it down, you don't understand the problem well enough yet. And this isn't an AI trick. This is just fundamental engineering. Even before AI, this is what you should have been doing: planning out a solution, breaking the problem down into smaller pieces, and then coding it. This is called problem solving and critical thinking, if you didn't know. The only difference now is that you can be the one to code it or have AI code it. And for me, most times I'm pretty lazy. I would rather have AI code it. I still have to figure out the solution. I just don't type it out. Once again, personal preference. I'm just lazy.
And disclaimer: if I do see AI making a mistake with the code, I'll manually fix it. Now, here's what's interesting. If you follow this workflow, you actually end up with a good, detailed prompt that you can give to AI already. And whenever I did this workflow, AI was pretty good. And I know what some of you are thinking. Oh, I thought AI was supposed to make our job easier. I don't want to have to think of the solution myself. I don't want to do thinking at all. AI should be able to do everything. And I see where you're coming from. It's a little grim, because that would then replace us. But it's just not there yet. I understand what it's supposed to do in the future, but we're not in the future, are we? We're in the present right now, and I'm trying to make it work in the present. I want to make the tool work in the current state that it is in, and I really wish you'd do the same. But in its current state, do not let AI do all the thinking for you. I'm personally okay with letting AI type for you, but I'm not okay with letting AI think for you, because the moment you let AI think for you, you're useless. And that's just the honest truth. If you're not applying any of the skills you've learned, what's the point of you being there? Now, in the meantime, while you think about your self-worth, let's move on to slop. Everybody that uses AI for programming will one day encounter slop. The good news is there are ways to reduce slop. We've already talked about it with detailed prompts, but there are other methods, and you won't believe this one. It's revolutionary. All you have to do is tell AI what you don't want. I know, insane. Never been done. I'm really innovative. Here's a pattern that you can use right now. Number one: the task. Remember to be as descriptive as possible. Number two: background information, files, documentation, and images. And then number three: the do-not section. I have gotten so much better results with this.
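To make that pattern concrete, here's a rough sketch of what a three-section prompt could look like. The feature, file paths, and constraints here are invented for illustration; they're not from the video, so swap in your own:

```markdown
## 1. Task
Add a live word-count indicator to the editor footer. It should update as
the user types and display "X words" in the bottom-right corner.

## 2. Background
- Editor component lives at src/components/Editor.tsx (example path)
- Follow the existing Tailwind classes used in src/components/Footer.tsx
- Screenshot of the desired placement is attached
- Relevant docs: link to your editor library's documentation

## 3. Do not
- Do not touch the websocket/sync code.
- Do not modify any files outside src/components/.
- Do not add new dependencies.
```

The exact headings don't matter; what matters is that the task, the context, and the boundaries each get their own explicit section.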
In the Google Docs clone, I want to add commenting functionality, just like how Google Docs does it. I click the comment button, it opens up this little panel, and when there's text on the page, it'll have this add-comment button, and when I press it, it'll highlight the line and I can add my comment. And I prompted Junie using this three-section pattern. Number one has the task, and as you can see, it is very detailed. Number two has the background information: useful documentation, any files, and multiple images that show the user flow and styling. And then number three, the do-not section, which has what not to touch, what not to change, and the only things it should be modifying. And with this three-section pattern, I got amazing results. It did exactly that. Pretty impressive for, like, a quick little 3-minute prompt. Way faster than if I typed it. And I'd personally prefer 10 minutes of problem solving and prompting over 30 minutes to an hour of problem solving and coding. And I'll say this again, because some of you just can't listen sometimes: personal preference. I'm lazy. Now, this pattern is great. It works really well. You can try it for yourself and let me know. Link is in the description. But we're not done yet. There's another strategy you can use to reduce the amount of mistakes AI makes. And this one's also revolutionary. You won't believe it. Tell AI to remember. I know. I know it's crazy. For programming tools, this will be in markdown files. For Junie, you would use a guidelines.md file or an AGENTS.md file. And this file is amazing. It's beautiful. I love using this file. This file should have all your project information. What the project is about, the tech stack, any important commands, the workflow, project-specific information: it should all be in this file. And you can either create this yourself, or you can even tell AI to generate it by itself. It'll analyze the codebase, get an overview of what the project is, and it'll put it in the file.
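If you've never seen one of these files, here's a sketch of the kind of sections a guidelines.md or AGENTS.md could contain. The project details below are made up for illustration, not taken from the video:

```markdown
# Project guidelines

## What this is
A collaborative docs app (Google Docs clone), built as a learning project.

## Tech stack
- Next.js + TypeScript, Tailwind for styling
- WebSockets for real-time sync
- PostgreSQL via Prisma

## Commands
- `npm run dev` — start the dev server
- `npm test` — run the test suite; all tests must pass before a task is done

## Workflow rules
- Break large tasks into small steps and confirm the plan first.
- Never change files outside the area named in the task.
```

Keep it short and factual; the point is that the AI reads this once per session instead of you repeating it in every prompt.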
But if you don't want to write it, or you don't want AI to write it, there are templates online. You can find one based on the tech stack you're using and paste it in there. And if you're really specific about certain technologies, you can create separate rules files for each type of technology. And if you're lazy like me, a lot of companies already have rule templates for their tools, and I 100% recommend copying and pasting those into the project if you're using them. Our next tip: MCPs will save your life. Now, if you don't know what MCP stands for, it stands for Model Context Protocol. There are a lot of videos out there. You can watch one, but overall, it's just tools that can extend what AI can actually do. They're also really easy to use. You just go to the MCP tab in the settings and you just copy and paste the information. As of right now, my favorite MCPs are: number one, Context7. This lets AI fetch documentation automatically, which means I don't have to copy and paste the same documentation for the 50th time. AI can just grab it when it needs it. The second MCP I really like is the Next.js developer tools MCP. This gives AI information about your Next.js app, so things like build errors, project state, page metadata, developer logs, overall just useful information that AI can use on your projects. The third MCP I like to use is the Chrome developer tools MCP. This lets AI have access to the Chrome dev tools, so it can look at things like layout shifts, performance, console errors, network requests, everything in the dev tools. It's great. Now, I mainly do web development, so these MCPs are perfect for me, but they might not work for you. Find the MCPs that work for your tech stack. There are lots of MCPs out there for databases, cloud services, third-party software. Whatever you're building, there's probably an MCP for it. The point here isn't to use my tools. You can if you want.
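For reference, the information you copy and paste into that MCP tab is usually a small JSON config. The exact file location and schema depend on your tool, and the package name below is the one commonly shown in Context7's own setup instructions, so double-check against the server's docs before copying:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

Most MCP servers follow this same shape: a name, a command to launch the server, and its arguments.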
I just want to bring awareness that these tools do exist, because once you find the right combination of MCP tools for your project, it makes a world of difference. I promise you. Next tip: always give AI a way to verify its work. AI should never just write code. It also needs a way to verify the code. Tests, running the app in the browser, CLI commands, CI/CD pipelines, literally anything that can prove the code works. You can create these tests or verification methods yourself. Or, if you're lazy just like me, you can tell AI to generate them for you and make sure they all pass. Now, for front-end and design-related tasks, I do recommend having an MCP tool. And for any type of verification method you use, always verify that it's working correctly, especially if you let AI generate it. After spending this much time with AI coding tools, I've noticed some patterns about who this actually helps and who it doesn't. The programmers who benefit the most from AI are the ones who already have good habits. Every tip I've shown you throughout this entire video, being specific, breaking down tasks, telling AI what not to do, telling it to remember, making it verify its work, those are all fundamentals of good engineering and leadership. AI doesn't replace those skills. It amplifies them. But if you have bad habits, you skip tests, you don't document anything, you're not thinking about edge cases, AI is going to amplify those bad habits, too. I could be wrong. I don't know. But here's what I want for you. I want you to be prepared. There are developers who are learning to work with AI effectively, and I'm trying to learn from these developers, and I hope you do too. It's better to be overprepared than underprepared, right? If you want to get started with AI-assisted development, try out Junie. I've been using it for a while now. I've shown you by building this Google Docs clone, and it could become one of those tools that are part of your daily workflow.
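Back to the verification tip for a second before the wrap-up. The simplest version of "give AI a way to verify its work" is pairing whatever it writes with checks it has to pass. A tiny Python sketch, where `word_count` is just a stand-in for any AI-generated function:

```python
# Sketch of "always give AI a way to verify its work": pair any
# AI-generated function with checks it must pass before you accept it.
# word_count here is a placeholder for whatever the AI wrote for you.

def word_count(text: str) -> int:
    """Count whitespace-separated words (the kind of small task AI handles well)."""
    return len(text.split())

def verify() -> bool:
    """Run the checks; on failure, paste the failing case back to the AI."""
    cases = [("", 0), ("one", 1), ("  spaced   out  words ", 3)]
    return all(word_count(text) == expected for text, expected in cases)

assert verify(), "verification failed -- send the failing case back to the AI"
print("all checks passed")
```

The same idea scales up to real test suites, browser checks, or CI pipelines; the point is that the AI's loop ends with something that can fail, not just with generated code.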
It has some safeguards for bad prompts. They have pre-made templates you can use to prompt the AI. It's integrated into the JetBrains ecosystem. Overall, Junie is great. If you want to try it out, the link is in the description. Give it a real project and see if the tips that I gave you today work like they did for me. I'm just trying to figure out how this tool works, and I hope you do the same. But other than that, I'll see you in the next video.

Video description

Try out Junie: https://jb.gg/JunieAI-coding
AI is alright if you have a brain. AI can give good sloppy. This video is sponsored by JetBrains.
// USEFUL AI RESOURCES //
MCPs: https://mcpmarket.com/server
AGENTS.MD: https://agents.md/#examples
Claude prompting best practices: https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/claude-4-best-practices
OpenAI prompting best practices: https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-the-openai-api
// NEWSLETTER //
Sloth Bytes: https://slothbytes.beehiiv.com/subscribe
// BUSINESS INQUIRIES //
For business: thecodingsloth@smoothmedia.co
For brand partnerships: https://tally.so/r/mZVvKa
// SOCIALS //
Twitter: https://twitter.com/TheCodingSloth1
TikTok: https://www.tiktok.com/@thecodingsloth
Discord: https://discord.gg/2ByMHqTNca
// TOOLS/THINGS I REALLY LIKE //
If you wanna build 10x developer level projects check out CodeCrafters (40% off): https://app.codecrafters.io/join?via=TheCodingSloth
If you want to build an awesome newsletter like Sloth Bytes I use beehiiv (20% off): https://www.beehiiv.com?via=the-coding-sloth
(some of these links are affiliates, so I'll earn some money which supports the channel!)

© 2026 GrayBeam Technology Privacy v0.1.0 · ac93850 · 2026-04-03 22:43 UTC