Travis Media · 2.8K views · 137 likes
Analysis Summary
Performed authenticity
The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.
Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity
Worth Noting
Positive elements
- The video offers a highly practical demonstration of using Model Context Protocol (MCP) to give LLMs access to real-time documentation, solving a genuine pain point in AI coding.
Be Aware
Cautionary elements
- The framing that '95% of bad AI code is the user's fault' shifts the burden of tool reliability entirely onto the developer, subtly priming them to buy more 'agentic' helper tools.
Influence Dimensions
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
The AI conversation continues to change. For some people, that's terrifying. For others, it's the biggest opportunity they'll ever have. So, I saw this the other day with the release of Claude Opus 4.5, and I think there's a significant takeaway here: "We give prospective performance engineering candidates a notoriously difficult take-home exam. We also test new models on this exam as an internal benchmark. Within a prescribed two-hour time limit, Claude Opus 4.5 scored higher than any human candidate ever." That's amazing. Now, all pride put away for a minute, let's go ahead and accept the fact that these AI models are now inherently smarter and faster than us in pure knowledge. Not wisdom, not life context, not personality, but raw, all-encompassing knowledge. And when we say, "No, they're not, they hallucinate" (which they do), "they're just predicting next characters" (which they are), "they have token limits," "they don't always follow best practices" (which they don't), we need to honestly admit that the reason for bad output, 95% of the time, is the person prompting. Leave all the hate you want below. But when a model this smart can draw information from all over the place and consolidate it into the best answer possible in seconds, in a narrow context we just can't beat it. And we're not going to beat it anymore. Give it a wider context, like a massive codebase, and yes, its smartness dumbs down a bit and it needs humans to give it direction. But these tools, say Claude Opus 4.5 or the just-released GPT-5.2, are, today, the worst we're going to see going forward. And if I haven't stepped on all the toes here, let me also say that the efficiency of prompting is way greater than us typing code from scratch. Hopefully, we've established that already. That's old news. In fact, I've laughed at the idea of a prompt engineer for the longest time, but here we are. Instead of prompt engineers being particular people, it really has become a requirement for us all.
Therefore, we can see, and expect to see, more coding sessions begin with prompts, and coding overall moving more to natural language (English, for example) while deferring the syntax of coding to the LLM. I honestly have not started a coding session in the longest time without some sort of LLM sitting in front of me to kick it all off: adding a new feature, fixing a bug, brainstorming best approaches, architecture, terminology. I can look back on the past three months of my life and not really think of a time I haven't utilized an LLM. And guess what? All of your fellow programmers are doing the same. They may not admit it, but they are. And your managers probably want you to. In fact, some companies are requiring it, or at least granting free access to all of their devs. What to do, then? Whether you're a junior or you're well seasoned, what to do about it? Well, at this stage, you're left with one option: get really good at getting good output from LLMs in such a way that makes you more efficient. We still need to know logic and programming principles and solving problems with code and best practices, but you'll defer the details to the LLM. And again, this means coding sessions will begin with prompts, and coding overall will move more to everyday language while deferring the syntax to the LLM. And if this is the case, which I think is pretty obvious, then you need to be thoughtful about making sure you are able to get the best outputs, which of course starts with the right inputs. So, in this video, I'm going to give you five tips that I've found extremely helpful in getting solid results from your LLM. Following these five tips, I'm able to use LLMs 80% of the time for all my coding and not sacrifice quality. And instead of just listing the five tips out for you, let's actually use them to build a desktop app with Tauri, which is a framework for building cross-platform desktop apps.
So, I have an M1 MacBook Pro still, and I'm always running up against space limitations. So, I want to build an app where I choose a folder that I know has a lot of big files, and I can see a list of each folder and individual file, with the size of each, from largest to smallest, so I can determine what big files I can delete to free up space. Kind of like a simpler CleanMyMac type of app. And I'll make sure to push this up to GitHub in case you want to grab it or suggest changes or additions to it. And yes, I need to just buy some extra storage or upgrade, but how else would I get to have this kind of fun? So, let's go ahead and get started. This video is brought to you by Warp. So, tip number one is to force the LLM to use current documentation. LLMs do not stay current. In fact, even new model releases are still months behind. Sure, they can search for current answers, but I can't tell you how many times I've asked an LLM to install Tailwind CSS for me, and it wavers between version 3 and version 4, which have a lot of upfront differences. Like, it will install version 4 and then try to set up version 3. And then you get errors, and after like five minutes of going around the world, it realizes, "Oh wait, I'm looking at version 3. I need to actually go and get info for version 4." And you're like, "See how stupid LLMs are?" To which I say, no, how stupid am I for not forcing it to use the latest documentation? So ideally, when you're working on an app, say a Next.js app or a Golang app, or installing and setting up a package, you want the LLM to use the official documentation, not only for up-to-date versions, but for up-to-date examples and references and rules. And to streamline this process of using the latest documentation, these days I exclusively use the Context7 MCP, which will reach out and use the latest documentation for the tools we're using.
Instead of arbitrarily gathering data from where the LLM thinks is best, which is often outdated, it queries the latest docs themselves and finds the solutions from the source. Let me show you how this works as we begin to build this app. So, first let's create a new Tauri app. And I'll do this manually because there are prompts, or options, that we need to choose in creating it. So, looking at the docs here, I simply run this command: npm create tauri-app@latest. So, let me open up Warp, which is the tool I use a lot of the time these days. It's what they call an agentic development environment, built with Rust, and their dev team actually builds Warp using Warp. So, if anyone has an idea of prompt-first development, they do. I have a separate video on the features of Warp, their number-one ranking on Terminal Bench, things like that, and I'll put a link to that video below. And it's free to try out; that link will be below as well. So, let me cd into a folder I want to use, and I'll just run this command. And I'll just call it file-manager-desktop. I'll choose TypeScript/npm, and let's pick React and TypeScript. And to run the app, first I'm going to do npm install to install my packages. And then Warp's already guessing my next move, which is npm run tauri dev to run the app. All right, and here's my basic Tauri app. Enter a name, Travis, hit greet, and it says, "Hello, Travis. You've been greeted from Rust." So, there we have our starting point. So next, let's set up Context7. Just go to Google, type in Context7, context7.com. You'll need to create a free account, so just go to sign in, continue with Google or GitHub, whatever you want. And then you'll need to create an API key. Just click on the dashboard, scroll down, and you can create your API key there. Next, if you go up here to install, it'll take you to their GitHub page; scroll down until you find Warp or whatever terminal you're using. So, I'm going to find the install instructions for Warp here.
And I'm just going to copy this command. And then inside of Warp, you just do /mcp. These are Warp slash commands. So, choose "add MCP" and just paste the whole thing here. And then you want to add your API key where it says "your API key." So, make sure you update that. And that's it, save it and you're good to go. And then, just to double-check, if you go to Settings, then AI, and go down to MCP servers here, just make sure that Context7 is enabled and that you have two tools available, which are resolve-library-id and get-library-docs. And now we go back to our project. Let's just say something like "add shadcn to my Tauri project with Tailwind CSS" and then say "use context7." This is the magic phrase. And when I run this, it's going to locate the correct docs to reference, and even which part of the docs it needs to reference. And see, it's going to use this MCP tool, the resolve-library-id tool. It's going to find shadcn/ui. So, it found the library, and then it's going to get the library docs for the information it needs. So, I'm going to run that. You can go back to the main page and just type in shadcn, and here's shadcn/ui. And we're basically just looking at this: we're searching the docs for what we need. Here are all of their docs, and it's just pulling the relevant information from them. And so, it's going to install and do all of this stuff according to the docs. So, if I actually go to the shadcn Vite page, like if I was installing it myself, I'm going to see npm create vite and then install Tailwind, and I'm going to have this tsconfig. So, remember this bit here: you're going to see this part of the tsconfig file, the Vite config, you're going to see this being used. And also, it's going to try to add a button, because all of this is on this page that it's referencing. So, it's setting this up according to the latest docs and even the latest examples. So, here's the Vite config, the tsconfig.
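For reference, the command you paste into Warp's add-MCP flow typically boils down to an MCP server entry along these lines (a sketch based on Context7's published install instructions; the exact JSON shape Warp expects may differ, and YOUR_API_KEY is a placeholder):

```json
{
  "Context7": {
    "command": "npx",
    "args": ["-y", "@upstash/context7-mcp", "--api-key", "YOUR_API_KEY"]
  }
}
```

Once saved and enabled, Warp can call the server's resolve-library-id and get-library-docs tools on your behalf.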
Now it's moving on to Tailwind. Now look at this: it adds a button, just like we talked about. All right. So now I have this Tauri desktop app with shadcn and Tailwind, ready to move forward with the app I want to build. And this leads me into tip two. Number two: always plan first. This applies to everything you do. So, we're in this place now where many devs are starting with a prompt. We've established this. A new project or an already established project, it doesn't matter; many of us are beginning tasks now with prompts. So, let's decide what we want to do with this first iteration, and let's create a plan for Warp to work through step by step. This keeps the LLM from going off on tangents we don't want it going off on. Now, Warp has a new feature for planning. I can use the /plan command and then give it my entire game plan that I want to achieve here. So, I'll do /plan, you'll see the slash command here, and describe my task. Here's my task in detail, or this portion of the app that I'm building, in detail. And see how I'm using "use context7" when I mention shadcn, or when I mention Tauri I'm going to say "use context7." And I even know that Tauri has a plugin called file system. See, right here: when I choose a folder, use Tauri's file system plugin, use context7. So, if I go to the Tauri website and I go to guides and look at plugins, there is a file system plugin right here. So, I'm telling it, use this plugin and then use Context7 to go and check out these docs to make sure you're doing it properly. So, I'm not just randomly asking it to figure all this stuff out; I'm helping it along the way. And again, resolve-library-id is the tool that's going to go find the Tauri library. Let's run it. And I'm running these manually because that's how I like to do it. You could set it to automatically approve everything; I don't like to do that. And what we're doing here is generating a plan of action, and we'll see that come up here in a second.
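A planning prompt in this style might look something like the following (illustrative wording, not the exact prompt from the video):

```text
/plan
Build the first iteration of a folder-size analyzer in this Tauri app:
- Let me choose a folder. When I choose a folder, use Tauri's file system
  plugin (use context7) to list every subfolder and file inside it.
- Show each item with its size, sorted largest to smallest.
- Clicking an item should reveal it in Finder.
- Use shadcn components for the UI (use context7).
```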
And it's looking up the topic of file system read-directory/list-files from the documentation. So now we have this game plan before we even begin building anything. I can edit this plan directly; like, if I need to add some information, I can edit that directly. I can ask it to change something about it, and when I do, it will version the new changes. So, see this version here: show version history. If I were to add some stuff to this, or ask it to change some part of the plan, it'll refresh the plan and I'll have a new version. And I can use this for a team to work with, if the plan is that big. And we can iterate on it until we've agreed. So, I have step one: install required plugins, configure permissions, create UI components, implement file system scanning logic. The whole plan is laid out: implement size calculation, implement reveal-in-Finder, and then replace the default app UI, some technical considerations, all of that. So, let's say this is ready to go and I agree with it all. I can choose to execute a part of this plan or the whole thing. Let's go ahead and execute the entire plan. And here are my steps. So, step one: install Tauri plugins. As it works to build this thing, it's going to walk me through my tasks, and I'm going to see here everything that's going on. And again, I can edit this, I can refine it, or I can accept it. I'm going to accept this. Once task one is finished, it's going to move on to the next task. And ultimately, it keeps me up to date on which steps of this plan it's working on, showing me the diffs, so I'm aware of what's being updated. And see, it made a mistake, and it actually went and updated the plan. And so now I have a version two with the updated plan. Pretty cool. And this now leads me to my third tip, which is: spend ample time planning and don't rush. Okay, so this planning feature is, to me, vital in LLM development and is one reason I prefer Warp these days.
99% of the problems people are having with AI development come from letting it run wild, having no clue what it's doing. It's a recipe for disaster. Warp and Claude and others have a YOLO mode, but unless I'm building something petty that I may erase later, I think it's very important to be aware of every change that the LLM makes. If you're creating some sort of feature, you are responsible, and the LLM works for you. The integrity of the code you're pushing reflects you, not the LLM. Remember that. Now, this plan that Warp creates when we're working with codebases that already exist is a game-changer. Why? Because of the helpful context it gives you. Let me show you what I mean. Now that it's running, it's actually telling me what to verify. So, it's giving me things to verify, and then it says, "I'll stop the server with this. Please let me know when you finish testing or if you encounter any issues." So, let's see if we have any issues. Let me select a folder. Let's just choose a Python app folder: three items found. Let's try something bigger, like the whole repos folder: 51 items found. But this doesn't look right. Something is off. These folders, I know, are way bigger than 992 bytes. So, let's take Travis Media Astro and just see how big that is. That's actually 666 megabytes. So yeah, something's off. So, it's waiting for me to give feedback on whether something is wrong. So, I'm going to put a plan together and try to get this fixed. So, here's my plan. And the reason I'm doing this is because I want to understand how things are currently set up, what's wrong, and what the plan is to fix it. I want to know the whole context here. That's why I'm doing this plan. Let's see what it pulls up. Now, instead of me just saying, "Hey, fix this problem. Figure it out and fix it," I can actually take this plan and see for myself how this app is coming together. So, the root cause is that line 47 uses this stat function.
It's right for files, but it's not right for directories, because of the recursiveness. The current implementation is this. So, I see here that this is wrong for directories. What's the solution? Well, since it checked the documentation, this plugin provides a dedicated size function that works for files and directories. What are the changes to be implemented? Update the import statement, replace stat with size, add the right permissions, and then there are some performance considerations, and now I'm up to speed. So, I'm not just telling it, "Fix it. I don't care what's wrong, just make it work." I'm actually understanding what was wrong and what it's proposing to fix. This is why I like planning. So, let's execute this plan and see if that fixes it. Replacing the stat function with the size function, good, that's step one, actually. Step two. Step three: allow size permissions. Looks good. And then the final step: test the updated size calculation. So, it's going to run my dev server and then read the output. So, let's try it again. Let's try the whole repos folder. And this looks a lot better. Folder size analyzer, that's actually this project. Calendar, Rust projects, an income app: these are just my repos of apps that I've been working on. So, here are all my repos, all of my sizes from largest to smallest. If I click on one, it should open in Finder. Yep, there it is. Click on calendar, there it is. Looks like my app is working great. It wasn't working, and after some follow-up planning and fixes, now it's working great. This is how you work in real-life scenarios with code going into production with a team of devs. You don't fire off a prompt and hope for the best. Everyone on social media may make you think you're missing out by not doing that, but you should take time to plan, to understand the proposed changes and the finished changes. And this is where I'm at these days. I begin probably 80% of all features with planning.
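To make the bug concrete: stat reports a directory entry's own metadata (typically well under 1 KB), not the size of its contents, which is why every folder showed 992 bytes. The display side of the fix, sorting by the recursive size and formatting it for humans, is plain TypeScript. Here's a minimal sketch with hypothetical helper names (sortBySize, formatBytes, Entry), independent of the actual Tauri plugin calls:

```typescript
// Shape of one scanned entry (hypothetical; mirrors the app's list view).
interface Entry {
  name: string;
  bytes: number; // recursive size for directories, plain size for files
}

// Sort entries largest-to-smallest so the biggest space hogs come first.
function sortBySize(entries: Entry[]): Entry[] {
  return [...entries].sort((a, b) => b.bytes - a.bytes);
}

// Human-readable size, e.g. "666 MB" instead of 698351616 bytes.
function formatBytes(bytes: number): string {
  const units = ["B", "KB", "MB", "GB", "TB"];
  let value = bytes;
  let i = 0;
  while (value >= 1024 && i < units.length - 1) {
    value /= 1024;
    i++;
  }
  return `${value.toFixed(value < 10 && i > 0 ? 1 : 0)} ${units[i]}`;
}

const sorted = sortBySize([
  { name: "temp-photos", bytes: 5 * 1024 ** 3 },
  { name: "notes.txt", bytes: 992 },
  { name: "travis-media-astro", bytes: 666 * 1024 ** 2 },
]);
console.log(sorted.map((e) => `${e.name}: ${formatBytes(e.bytes)}`).join("\n"));
```

In the app itself, the bytes value would come from the plugin's recursive size call rather than from stat.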
And in doing so, I can commit code that I can stand behind. Number four is rules. Now, you probably already know this, but every project that uses an LLM should have rules. So, Claude has CLAUDE.md, Google has GEMINI.md, and Warp has WARP.md. These files keep you from having to re-explain your setup on every prompt. Now, there are two things I want you to note with this file. Ask ten people and they'll give you the ten best configurations, but here's my advice. First, and the most basic: run /init to get a baseline for your app, how it runs, file structure, etc. You can do this from the outset or after your app has grown a bit, as in our example here. So, let me run this. And I need to initialize git to make this work. And it actually wants to scan my codebase and run an init for me. That's great, but I'm going to do it manually just so I can show you an init. Yes, index this codebase. "Also, would you like to create a WARP.md file?" Of course. And this will search over your codebase and create a WARP.md file with an overview of your app: technology stack, dev commands, how to build the app, architecture, configurations, etc. And then here's the diff it proposed. Let's go ahead and accept it, and it will create the WARP.md file. Again, you can do this with Claude as well with /init. Now, this is not a readme. It's not there to explain your project to others. It can, but that's not its real purpose. Instead, this file actually gets prepended to every single prompt. So, when you ask the agent to do something, it's reminded every time of your architecture, setup, and the commands to run your app or tests or linter. And in doing so, you're not allowing the LLM to wander or lose sight of the context of your application, and it produces much more accurate results. Second, at this point, you want to begin adding rules as you need them. Do not just arbitrarily start adding stuff. You may have some generic template that serves you well, but if not, be conservative in adding to this file.
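As a rough idea, a generated WARP.md for this project might look something like this (contents invented for illustration; /init derives the real one from your codebase, and the paths below assume a standard Tauri layout):

```markdown
# file-manager-desktop

Tauri 2 desktop app: React + TypeScript frontend, Rust backend.

## Commands
- `npm install`: install dependencies
- `npm run tauri dev`: run the app in dev mode
- `npm run tauri build`: build a distributable app

## Architecture
- `src/`: React UI (shadcn/ui + Tailwind CSS)
- `src-tauri/`: Rust side; file system access goes through the
  file system plugin, with permissions declared under `src-tauri/`
```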
When you see the LLM fumble several times on the same thing, add a rule. When it tries to commit changes without your approval, or do anything write-based with your database, add a rule. So, you may have a commit workflow something like this: always follow this checklist before committing and pushing. Number one, run the full test suite, and you can say it must show this many passed, this many skipped, these warnings, with no failures. Number two, format code. Number three, lint code: it must compile with exit code zero, no errors. And then stage and commit, and push only when instructed. And then some critical notes: tests must pass, lint must pass. So, you're giving the LLM specific rules for your commit workflow, and this is the place to put them. But don't go and add all of this at one time just because you saw it here and thought it useful. Let these come organically over time, and you'll eventually end up with a template of your own that makes sense to you for this particular project and will help guide the LLM to better outputs, which is the whole goal we're trying to achieve here. Now, the final tip here is to have a senior engineer review your code before every commit. And of course, the senior engineer will be the LLM. This is sort of like having a preview of the PR review before an actual person reviews your code. It gives you a chance to clean up before you send it out for others to see. And if you're just building a personal app for yourself, this is still a good practice every time before you commit. Sometimes the LLM churns, and it ends up creating unnecessary files, or code it forgets to remove, or in hindsight there's an easier way to rewrite or refactor something, or it allowed wide-open permissions from the outset but never toned them down. Often, when you create a PR and someone calls out something stupid you've done, you already knew about it. You just looked at the code so much that you overlooked it. Well, let the LLM be your second set of eyes first.
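Written out as a rules-file snippet, that commit workflow might look like this (wording illustrative; the "N passed / N skipped" counts are whatever your test runner reports):

```markdown
## Commit workflow
Always follow this checklist before committing and pushing:
1. Run the full test suite: must report N passed / N skipped and any
   warnings, with zero failures
2. Format code
3. Lint code: must complete with exit code 0, no errors
4. Stage and commit; push only when instructed

Critical: tests must pass, lint must pass. Never commit without approval.
```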
So, I actually run a prompt regularly before each commit. I have it saved here in my Warp Drive, so let's take a look at it, and feel free to steal this: "Review the code we just wrote like a senior engineer. Use git status and the staged files to see which files we've touched. Focus on performance, security, edge cases and failure modes, and best practices and cleanup. If it's good, say 'good to ship.' If not, list the top fixes and show the exact code changes: performance issues, overprivileged configurations, edge cases, lack of error handling, unused commands or variables." The LLM will notify you of each, and you should take time to see how each one can better your app, or clean it up, before you commit. So, let's run this and see the results. And yes, I know we should have committed like five times already, but again, this is all just for demonstration. So, here are the findings. Some critical issues: blocking the UI on large folders, so a performance issue. A security issue: a path-traversal vulnerability. Then there's security with overly permissive capabilities. Often these apps allow everything up front to make sure it works, but then you still need to scale it back later. And then there's a potential memory leak, so a performance issue. And then an edge case. These are all considered critical, required fixes. So, it wants to go ahead and start fixing these for me. So, before you commit these and a senior sees this, like, "Hey, this is way too permissive," and, "Hey, this is a potential memory leak," your LLM senior engineer can catch it first and run the fixes. So, let's accept this and run these fixes, and we'll be good to go. Improve security by restricting file system permissions. Then we have the unused greet command. So, when we create the Tauri app, by default you get this greet command. Let's get that taken out; that might be something you'd leave in there accidentally. Then it's going to verify the changes still compile.
And it says it's going to add one more improvement for better UX: a visual counter during scanning. That's pretty neat. I didn't ask for that, but I'll take it. All right, issues fixed. We got performance fixed. And look at this: before, we had this slow big-O complexity, and after, batch updates every 50 items. Before, there was no way to cancel; now we've added an AbortController to cancel. Security: we've tightened up security. Edge cases, code cleanup, and if you want to know exactly what it changed, here it is. Test before shipping: try scanning a folder with a thousand-plus files. Let's do that. So, let's give it a final run. I'm actually going to do an npm run tauri build. I'm going to build this into an actual application on my MacBook. And it should be much faster now, it shouldn't be overly permissive, it should be secure, and all those things we just asked our senior-engineer LLM to do. So, there, it's finished building. At this point, I can just drag this into Applications, and then I can just open file-manager-desktop like any other application. Here it is. And let's try a big folder, like the entire desktop of my computer, which is massive. Let's open it. And here we have it: we're counting up as we're scanning, 98, 179, 274. Here are the items. So, my desktop files is a 15-gig folder. Maybe I need to scan that one and figure out what to delete. But let's say, hey, temp photos, what is this? We've got 5 gigs. Well, I can double-click it, and that opens it up in Finder. So then I can go in there and figure out what to delete. So, it looks to be fast. We've run our performance and security improvements, and I'm ready to push this off for my senior engineer to review. And what I'm actually going to do is push this up to GitHub. I'll leave a link down below if you guys want to grab this application or if you want to add to it. Feel free to submit a PR; maybe we can build on it, whatever. But it'll be there.
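The two performance fixes the review produced, batching UI updates every 50 items and supporting cancellation, follow a common pattern. Here's a minimal TypeScript sketch of that pattern (hypothetical names, not the app's actual code):

```typescript
// Process items in batches of 50, flushing to the UI between batches,
// and stop early if the scan is cancelled via an AbortSignal.
async function scanInBatches(
  items: string[],
  onBatch: (batch: string[]) => void,
  signal: AbortSignal,
  batchSize = 50,
): Promise<number> {
  let processed = 0;
  let batch: string[] = [];
  for (const item of items) {
    if (signal.aborted) break; // cancel requested: stop scanning
    batch.push(item);
    processed++;
    if (batch.length === batchSize) {
      onBatch(batch); // e.g. update the visible counter / list
      batch = [];
      // Yield to the event loop so the UI stays responsive.
      await new Promise((resolve) => setTimeout(resolve, 0));
    }
  }
  if (batch.length > 0) onBatch(batch); // flush the remainder
  return processed;
}

// Usage: scan 120 fake paths; expect three batches (50 + 50 + 20).
const controller = new AbortController();
const fakePaths = Array.from({ length: 120 }, (_, i) => `file-${i}`);
scanInBatches(fakePaths, (b) => console.log(`batch of ${b.length}`), controller.signal)
  .then((n) => console.log(`${n} items scanned`));
```

Calling controller.abort() from a cancel button would make the loop stop at the next item, which is the behavior the "no way to cancel" fix added.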
And before we wrap this up, the team over at Warp is celebrating the launch of these new features. Be sure to check out the link below to try Warp out yourselves for free. So, those are five tips, or guidelines, that I follow that allow me to use LLMs in a professional setting in a way that produces great outputs, and code that I can be proud to commit. If you found this video helpful, give it a thumbs up. If you haven't subscribed to the channel, consider doing so. And I'll see you in the next video.
Video description
My friends at Warp are celebrating their launch, and you can now try Warp for free ➞ https://go.warp.dev/travisytagents

AI models are already smarter than us at raw knowledge, but most bad AI code comes from bad prompting or inputs. In this video, I break down 5 practical rules for prompt-first development that help you ship production-ready code with LLMs. In discussing these, we'll build a real Tauri desktop app using Warp, planning first, enforcing rules, and reviewing AI output like a senior engineer. This is how I use LLMs for ~80% of my coding without sacrificing quality. Thank you Warp for partnering with me on this video.

🕒 Timestamps
00:00 Intro
00:32 LLMs are smarter than us
01:34 Moving to prompt-first
03:16 5 Tips for prompting
04:17 1 Force current documentation
09:22 2 Proper planning
12:31 3 Stop rushing (no YOLO)
16:26 4 Rules
19:15 5 A review BEFORE the review
23:33 Outro

📢 Video mentions
- File Manager Desktop app - https://github.com/rodgtr1/file-manager-desktop

🎥 Watch These Next 🎥
https://youtu.be/F3j_1AEQkHk
https://youtu.be/EMWNZtCYg5s
https://youtu.be/uDcb12CqoR4

FOLLOW ME ON
Twitter - https://x.com/travisdotmedia
LinkedIn - https://linkedin.com/in/travisdotmedia

FAVORITE TOOLS AND APPS:
Udemy deals, updated regularly - https://travis.media/udemy
ZeroToMastery - https://geni.us/AbMxjrX
Camera - https://amzn.to/3LOUFZV
Lens - https://amzn.to/4fyadP0
Microphone - https://amzn.to/3sAwyrH

** My Coding Blueprints **
Learn to Code Web Developer Blueprint - https://geni.us/HoswN2
AWS/Python Blueprint - https://geni.us/yGlFaRe - FREE
Both FREE in the Travis Media Community - https://imposterdevs.com

FREE EBOOKS 📘 https://travis.media/ebooks

#promptengineering #llms #warp