bouncer

Mark Kashef · 20.5K views · 583 likes

Analysis Summary

40% Low Influence

“Be aware that the 'live' demonstrations are highly optimized to show success; the significant token costs and potential for 'hallucinated' logic in complex agent handoffs are mentioned but downplayed to maintain the appeal of the paid templates.”

Ask yourself: “Did I notice what this video wanted from me, and did I decide freely to say yes?”

Transparency Mostly Transparent
Primary technique

Performed authenticity

The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.

Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity

Human Detected
95%

Signals

The content is a high-quality, human-led technical tutorial featuring natural speech, personal workflow insights, and specific creative direction that lacks the formulaic or robotic markers of AI narration. The creator, Mark Kashef, is a known individual in the AI space providing original commentary and demonstrations.

Natural Speech Patterns Transcript contains natural filler phrases ('pretty much', 'TL;DR', 'kind of'), conversational contractions, and non-scripted verbal flow ('Now, I already prepared...', 'Let's dive in').
Personal Anecdotes and Context The speaker references his own YouTube scripts, specific workflows he used to build with Zapier/Make, and personal preferences for tool usage.
Demonstration-Based Content The video is a walkthrough of a specific technical setup (Claude Code) with live output and specific prompt breakdowns that align with a creator-led tutorial style.

Worth Noting

Positive elements

  • The video provides genuine architectural insights into how to structure multi-agent prompts using task dependencies and human-in-the-loop triggers.

Be Aware

Cautionary elements

  • The 'revelation' that these tools are easy for non-technical users masks the high token costs and technical setup required to run Claude Code effectively.

Influence Dimensions

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed March 23, 2026 at 20:38 UTC Model google/gemini-3-flash-preview-20251217
Transcript

When agent teams first dropped in Claude Code, pretty much everyone was using them solely for coding tasks. But of the seven cases that I'm about to show you, six of them have absolutely nothing to do with code. And the last one helps you build a personal assistant from scratch. Each example I'm about to show you is composed of a single prompt. You paste it in, a team of agents spawns up, and they take care of what would normally take you an entire session. By the end of this, you won't look at agent teams the same way. Let's dive in. Now, I already prepared all seven use cases I'm about to show you right here. To accompany them from a conceptual standpoint, for each one I'll walk through the high level of what the agent team is planning to do, then we'll go into the actual prompt, and I'll show you the associated output. Now, this first case is really straightforward: it is a content repurposing engine. This is something you might have built with tools like n8n, Make, or Zapier in the past, but in my opinion it is infinitely more dynamic, flexible, and malleable if you do it in Claude Code. So the idea is I will give it one of my YouTube scripts, and then it will spawn up an agent team where you have a LinkedIn writer, a thread writer, a newsletter writer, and a blog writer. The goal is to have one input and multiple outputs, which you will see is the main theme across all of these examples. When it comes to the prompt, you can say: create an agent team to repurpose a video transcript into content for four platforms. The most important magic words you always need to say are "create an agent team" or "spawn an agent team". If you just say "spawn agents", it could get confused between sub agents, which work very differently, and agent teams. The core difference, TL;DR: sub agents can work in parallel, but they don't speak to each other. With agent teams, they can have that agent-to-agent communication.
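Since every use case here boils down to one carefully structured prompt, it can help to see the pattern as a template. The sketch below assembles a repurposing prompt like the one described, in Python; the role names, duties, and paths are illustrative placeholders, not the video's exact wording:

```python
# Hypothetical sketch: templating a "create an agent team" prompt.
# Roles, duties, and paths are invented for illustration.
ROLES = {
    "LinkedIn writer": "turn the transcript into a punchy LinkedIn post",
    "thread writer": "turn the transcript into a multi-post thread",
    "newsletter writer": "turn the transcript into a newsletter section",
    "blog writer": "turn the transcript into a long-form blog post",
}

def build_repurposing_prompt(transcript_path: str, output_dir: str) -> str:
    lines = [
        # "Create an agent team" (not "spawn agents") is the phrasing the
        # video says reliably triggers agent teams rather than sub agents.
        "Create an agent team to repurpose a video transcript into "
        "content for four platforms.",
        f"The transcript is at {transcript_path}.",
    ]
    for role, duty in ROLES.items():
        lines.append(f"Spawn a {role} whose job is to {duty}.")
    lines += [
        "Before writing, each teammate should read the full transcript "
        "and identify the three most compelling insights.",
        "Have them share their chosen insights with each other so that "
        "no two platforms lead with the same angle.",
        f"Save all outputs to {output_dir}.",
    ]
    return "\n".join(lines)

prompt = build_repurposing_prompt("./transcript.md", "./outputs")
```

The point of templating it like this is that the magic phrase, the explicit roles, the shared-insights condition, and the output location each map to one line you can swap per project.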
So all you have to do is tell it exactly where the transcript is. This could also be in the cloud; you could connect it to some form of API or webhook. And here's the important part. When it comes to asking it to spawn the team of agents, you have to be very intentional. Do you want to leave it up to Claude Code to decide what those agents should be, or do you want some form of autonomy over it? In many cases it makes sense to dictate what these agents should be, so you can make sure it's executed in the exact way you expect. So in my case, I'm asking for a blog writer where I specify exactly what their role is. Same thing with a LinkedIn writer. Same thing with a newsletter writer. And I'm also telling it where to output things. The more intentional you are about telling it exactly where the inputs lie, what the criteria are, and where it should output, the more control and predictability you have over a pretty unpredictable process. One key thing you can do is provide conditions. So you can say: before writing, each teammate should read the full transcript and identify the three most compelling insights. In a way, you're now dictating that they can't move forward until they meet that specific criterion. And one thing you can do to encourage, and kind of force, the communication is to say: have them share their chosen insights with each other to ensure that no two platforms lead with the same angle. Each piece should feel fresh and not repetitive. Along with the normal deliverables, you can also push it to do something like synthesizing a summary comparing the angles each teammate chose and flagging any messaging inconsistencies. So you basically get a postmortem report on top of the deliverables you're asking for. In terms of this process, it was very straightforward because we dictated everything. It created exactly the agents we asked for, spawned them all, and then went back and forth.
We got that criterion met of three insights per agent. But one interesting thing that happened was it said: good, three of four teammates reported their insights, but there seems to be heavy overlap. All three picked the three-level loading system and the kitchen analogy for skills plus MCPs. I need to wait for the Twitter writer's picks before I assign unique lead angles. So when you have Claude Code and these agent teams, Claude Code takes this third-person perspective, looking at what's happening so it can better observe, survey, and intervene when needed. Claude Code can then look at all of the different angles, make sure each one has a unique take, and assign it right here. So it tells you the blog writer, LinkedIn writer, newsletter writer, and Twitter writer, this is the lead angle for each one of them, and this is why they're doing it. That's the rationale from the Claude Code agent itself. At the end, we get a summary of everything that was completed. And if we go to the bottom here, this goes through all the material that was shared between the agents and any inconsistencies that were flagged. Then we get the outputs exactly where we asked for them. The beauty of this is I can click on this URL right here, hold command, open it up, and we can take a look at what the blog post looks like. We can intervene, and if you want, you could re-spin up the whole agent team to edit it. But in that case, it might make more sense to spin up sub agents to make edits in parallel, since they don't need to speak to each other if you can identify independently what needs to change. So the second use case might be very helpful, and this is meant to research and create a pitch deck on a particular topic of our choice. In stage one you have the researcher, and the goal of the researcher is to come back with certain data points.
Then we have the plan approval on our end, and once approved, we have the slide writer, who comes up with what content should be on each and every slide based on the research. Beyond that, in stage three, we have the designer, whose role is to actually take all the research, take all the slides, and physically create the PowerPoint file using an HTML-to-PPTX library. So this is a really good example of a sequential handoff workflow, where you can't really have the agent teams work in parallel like you would with something like sub agents. You need each one to wait for its prerequisite before going to the next stage. The prompt for this one looks as follows: create an agent team to build a 12-slide pitch deck about how AI automation is transforming small business operations in 2026. Once again, I say spawn three teammates with task dependencies. We have the researcher, whose role is to find eight to ten data points, stats, and supporting evidence. Then we have the slide writer. In this case I went down to a very deep level of granularity, where I said exactly what should be on each slide. You can always choose to relinquish control or take control; it's just a matter of the prompt you put together. I say exactly what each slide's criteria should be: a max of eight words, three to four bullets, and some speaker notes at the bottom of the slide. Then I tell it exactly where to save to. And then we have the designer. By saying "using the slide writer's content", I'm implying the whole sequential handoff from here. Build the actual file using Python. Now, this is overkill; it would figure it out on its own. But again, the less thinking you have to make Claude Code do, the more accurate the results. And one last key nugget here: you can force the agent team to interrupt itself by asking for your input.
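The three-stage handoff described above is just a dependency chain: each teammate is blocked until its prerequisite produces output. A minimal Python sketch of that ordering, using the standard library's topological sorter (the agent names mirror the pitch-deck example; the execution model is a stub, not Claude Code's actual scheduler):

```python
# Illustrative sequential-handoff runner. Each "agent" only runs once
# its prerequisites have produced output, mirroring task dependencies.
from graphlib import TopologicalSorter

DEPS = {
    "researcher": set(),             # stage 1: gather data points
    "slide writer": {"researcher"},  # stage 2: waits on the research
    "designer": {"slide writer"},    # stage 3: waits on slide content
}

def run_pipeline(deps):
    order = list(TopologicalSorter(deps).static_order())
    outputs = {}
    for agent in order:
        handoff = {d: outputs[d] for d in deps[agent]}  # upstream results
        outputs[agent] = f"{agent} output (given {sorted(handoff)})"
    return order, outputs

order, outputs = run_pipeline(DEPS)
```

Because the chain is linear, the order is fully determined: researcher, then slide writer, then designer, exactly the "wait for the prerequisite" behavior the prompt dictates.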
So when you say "require plan approval for the designer before they start building", once the designer goes, it will usually invoke what's called the ask-user-input tool, and I'll show you what that looks like. I screenshotted this while the agent teams were running because we can't recall it as soon as it's done. In this case, once the designer came up with an idea, it asked me to review the designer's plan and approve as is, approve with notes, or reject with some rework. The great part is, when you say "involve me", you essentially create a human-in-the-loop process yourself. And the agents are really good at actually spinning that up and interrupting their flow. So in this case, it spins up three agents as we specified: the researcher, slide writer, and designer. The rule of thumb from Anthropic, by the way, is that three to five agents is the sweet spot. Anything beyond that can lead to diminishing returns, overengineering, overthinking, and most importantly, a huge consumption of tokens. Here you can see the research has finished and notified the slide writer. We see the sequential flow. We see exactly what it's come up with in terms of stats from its research. We then see the pipeline status. This is the example of the plan from the designer that I was asked to approve: the color palette, the typography, the slide dimensions, and everything we specified, with a little bit more. After approval, they went, and it was actually pretty quick. Typically some of these tasks, especially technical ones, can take up to 30, 40, 50 minutes and 300,000 tokens. This still took 150,000 tokens, but it was very efficient. And then, like I said, you can always hover over this URL, click it, and open up this pitch deck. It's not going to be absolutely beautiful, but it's respectable. If you go through it, everything is well organized and pretty straightforward. We have our speaker notes at the very bottom, like we requested.
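The "require plan approval" trick amounts to a human-in-the-loop gate in the control flow. Here is a rough sketch of that gate with a stubbed-in reviewer; the real mechanism is Claude Code's ask-for-input tool, so this is only a model of the approve / approve-with-notes / reject loop, not its implementation:

```python
# Illustrative human-in-the-loop gate modeled on the video's three
# options: approve as is, approve with notes, reject with rework.
# `reviewer` is a stand-in for the real ask-user-input interaction.
def approval_gate(plan: str, reviewer) -> str:
    while True:
        decision, notes = reviewer(plan)
        if decision == "approve":
            return plan
        if decision == "approve_with_notes":
            return plan + f"\n[reviewer notes: {notes}]"
        if decision == "reject":
            # in the real flow the designer would redraft here
            plan = plan + f"\n[rework requested: {notes}]"
            continue
        raise ValueError(f"unknown decision: {decision}")

# Simulated reviewer that rejects once, then approves the rework.
decisions = iter([("reject", "use brand colors"), ("approve", "")])
final_plan = approval_gate("slide design plan", lambda p: next(decisions))
```

The key property is the loop: nothing downstream (the designer building the file) runs until the gate returns, which is exactly what "require plan approval before they start building" buys you.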
And you can imagine, if you could dictate exactly a certain brand guide or brand style, you could make this very business-friendly. One extra super nugget for you: you can actually install this new extension called Claude for PowerPoint. The whole point of it is you can open it up, authorize with your existing Claude account, and make specific tidbit updates without having to waste your tokens in Claude Code or spin up a brand new agent team. So you can make specific or surgical changes here and take it from 80% to 100%. This next one will appeal to all the consultants out there. If you've ever written an RFP, a request for proposal, or a tender for a government contract, you'll know that the tender descriptions are really long, and the amount of work you have to do to actually complete and satisfy them is equally long, if not longer. So this will take the requirements of these proposals and then go create an agent team where you have an RFP analyst to look at all the different requirements you have to satisfy in your proposal. Then you have the capability researcher, which you could give examples of who works on your team, what experience they have, and what case studies you have, for it to draw from to help with the creation of this RFP. They will share their data, and then it will pull everything together. Then you'll have writer A and writer B. The goal of writer A will be to create an executive summary and cover technical management, assuming there's a technical component to the RFP, and writer B will handle the qualifications, past performance, and pricing. So they will work together again to cross-reference and build the whole proposal. In this case, instead of a sequential handoff, we essentially spawn these agents to work in parallel here, and then spawn those to work in parallel here. But the sequence is that this parallel task comes first, then the second parallel task.
So for this, the prompt is: create an agent team to respond to a request for proposal. Then we just provide it access to the URL right here, which corresponds to an actual RFP to create an AI scribe and dictation solution. You'll see here, if you're not as familiar: you have bidding details, eligibility conditions, contact information, and more information about the proposal in general. Then you give it more information about your particular organization. So you could say: we are a 15-person AI consulting firm specializing in building custom automation workflows for mid-market companies. We use tools like Claude Code, etc. Our average project size is between this and this, and we've completed 40-plus projects. Now, naturally, you might want to feed in more information in the form of markdown files so you can really dial in the proposal. Then you spawn the four agents like we specified: the RFP analyst, the company capability researcher, section writer A, and section writer B, each of them with their own details. Then you basically dictate the flow. Like we said before: after both section writers have finished, review all sections for consistent tone and terminology, no contradictions between sections, and every RFP requirement addressed. Then flag any requirements that we didn't address. In terms of the setup, it's pretty straightforward. As you go down, we see the first two agents are launched. They run in parallel right here. And you can see that the section A and section B writers are blocked until they finish. So once they go through that entire process, this took around 180,000 tokens. I'm telling you that so you can gauge your limit based on your plan. It comes back with the deliverables I actually asked for in markdown format, just because I didn't want to create a DOCX or PDF yet. I wanted to review it first, just to conserve those tokens.
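Structurally, the RFP flow is "parallel stage, then parallel stage": the two analysts run concurrently, and both section writers are blocked until every analyst finishes. A small sketch of that barrier pattern in Python (agent names follow the example above; the work itself is stubbed, and this is a model of the orchestration rather than Claude Code's internals):

```python
# Illustrative staged-parallel runner: every agent in a stage runs
# concurrently, and the next stage only starts once the previous
# stage's results exist (the "blocked until they finish" behavior).
from concurrent.futures import ThreadPoolExecutor

def run_stage(agents, shared):
    with ThreadPoolExecutor() as pool:
        results = pool.map(
            lambda a: (a, f"{a} done (saw {sorted(shared)})"), agents
        )
        return dict(results)

# Stage 1: analysts run in parallel with no upstream context.
stage1 = run_stage(["RFP analyst", "capability researcher"], shared={})
# Barrier: stage 2 starts only after stage 1 is fully complete,
# and each writer sees both analysts' outputs.
stage2 = run_stage(["section writer A", "section writer B"], shared=stage1)
```

The same shape generalizes: any number of parallel stages, with each stage's shared context being the accumulated outputs of the stages before it.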
And then you have task five right here, which is the final team-lead synthesis, and you can see the output if you click right here. You have each part; you have the capability matrix of everyone in the company, obviously hypothetical. Then you have the full proposal that you can review in pure markdown, and assuming it fits your requirements, you can say: okay, cool, can you go and create a PDF or a DOCX out of this? It will then be able to invoke the skill that comes out of the box from the Anthropic team that can create that file. You also have both proposal sections, so you can audit each and every deliverable from each and every agent. This next use case is a spicy one. We're going to use agent teams to do competitive analysis: compare Claude Code to four other competitors, Antigravity, Cursor, Codex, and Copilot. Then we have a synthesis lead whose sole job is to take the independent research on each one of these platforms and bring it all together. Our prompt here is a bit nuanced, and I'll show you how. In this case I say: create an agent team to build a competitive intelligence report. I tell it what the target product is, Claude Code, the AI-powered coding assistant CLI, and these are the competitors to analyze. Document the following for each platform, with the latest and greatest info as of 2026. I give it all the criteria, but I want you to notice one nuance between this prompt and the ones before it. In this case, I'm not specifying it to create an agent team with specific agents to my instruction. I'm allowing it the creative freedom to do that itself. At the end, all I say is: have each analyst share their top three findings with the group before the synthesis begins. Meaning, again, I'm encouraging communication between these agents. So now it spins it all up. We have the Cursor analyst, the Copilot analyst, the Codex analyst, and the Antigravity analyst. And then we have that team lead that does the synthesis.
And notice how it says "me". So Claude, from the third-person perspective, is taking on the persona. It tells you what each analyst will do, goes through the process, does the research, and then comes back with a synthesis file. Each one comes back with a deliverable. So each one has a markdown file of the full analysis, and it goes through the top strategic takeaways. It creates this competitive intel file right here that I can bring anywhere. Each one has analysis on how each IDE and platform works and what it brings to the table versus the other ones, and then there's an overall synthesis report where it compares and contrasts each one of them. I'm using products here, but this could be competitors as in other companies, other platforms, other frameworks, whatever you want. This next one has to be one of my favorite examples, which is the AI advisory board. The point of this use case is you take a very meaty problem, question, or opportunity, you pose it, and you create a very comprehensive prompt that splits up the task so different agents can take different perspectives on it and come up with a cohesive analysis. In this case, let's say I wanted to launch a $7,500 higher-ticket boot camp for more affluent CEOs and VPs, etc. "Should I launch it or wait" could be the overall question or premise. So then you could have a market researcher, a financial modeler, a devil's advocate, a competitive strategist, and an audience analyst all work together to act as the voice of the customer, the voice of the consumer, and most importantly, the voice of the market. If we go over, you'll see that this is a behemoth of a prompt. We start the same way, saying: create an agent team to analyze a complex business decision. I pose the question: should we launch a $7,500 live six-week AI leadership boot camp for execs and CEOs who want to manage AI teams and integrate AI into their operations?
And then I give it context about myself, my agency, my community, and the fact that I used to teach boot camps all the time. Then we say: spawn five agents to investigate different angles. I specify the market researcher, who goes through and analyzes the executive AI education market; the audience gap analyst, to investigate the gap between our current audience and the target audience; then the financial modeler; the competitive strategist, to see exactly who's selling something like that out there; and the devil's advocate, who takes all the analysis and steps in to say maybe you shouldn't do this at all, or maybe you should do it in a completely different way or at a different price point. Then this is the key deliverable, and this is where you can get creative. I say: once consensus or informed disagreement emerges (a really good nuance here), synthesize into a single executive brief with a go, no-go, or conditional recommendation, the top three reasons for the recommendation, the top three risks regardless of the decision, and suggested next steps. This is where prompt engineering meets agentic workflows in a way where both become really powerful. Then we tell it exactly where to save it. This one spins them all up. In this case, it makes sense that all of them can run in parallel because they're all taking on mutually exclusive tasks; it's not necessarily a sequential handoff. Then, as we go down, we get the analysis from each one. We can take a look at all the files they came up with, from the audience strategy to the competitive framework. Obviously, a lot of reading. You might want to ask it for a TL;DR of the TL;DR, but it gives you one out of the box. It tells you: conditional go. Start with a $2,000 course, then upgrade to $7,500 within four to six months. Then you can go through the risks, the key debates, everything we saw before, the revised numbers of what it would look like from a financial standpoint, all the permutations of it. And this runs for a while.
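One way to think about that "go / no-go / conditional" brief is as an aggregation over per-advisor verdicts, with the devil's advocate pulling in the opposite direction. The toy sketch below models that; the advisors' verdicts, weights, and thresholds are all invented for illustration, and the real synthesis is of course done in natural language by the team lead, not by arithmetic:

```python
# Toy model of the advisory-board synthesis: each advisor reports a
# verdict plus a confidence weight, and the lead emits go /
# conditional go / no-go. All numbers here are made up.
def synthesize(verdicts):
    total = sum(w for _, w in verdicts.values())
    signed = sum(w if v == "go" else -w for v, w in verdicts.values())
    score = signed / total
    if score > 0.5:
        return "go"
    if score > 0:
        return "conditional go"
    return "no-go"

verdicts = {
    "market researcher": ("go", 0.8),
    "financial modeler": ("go", 0.6),
    "audience gap analyst": ("go", 0.5),
    "competitive strategist": ("go", 0.7),
    "devil's advocate": ("no-go", 0.9),
}
recommendation = synthesize(verdicts)  # a confident dissenter drags
# four "go" votes down into conditional-go territory
```

That is roughly the dynamic the video shows: four mostly positive analyses plus a strong devil's advocate landing on "conditional go" rather than a clean yes.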
And this really helps if you have a very deep problem you want to work through and you don't have colleagues of your own to push back on you. This is a great way to get an initial lens from the fresh eyes of different agents. For our penultimate example, we'll take on a marketing use case. Let's assume you have a full campaign launch and you're looking to market your new Focus Pods Pro, I know, very original name. You'll need an email marketer to come up with the three-email sequence for the launch. You'll need the social media manager, the ad copywriter, and the landing page creator for the product itself. And then you need some way to have consistency, some fluency, between each and every part of this process. So again, you have that team lead or synthesis agent at the end that makes sure each individual output has some cohesion. If we go over, we also have a pretty legendary prompt here as well. But notice that most of these are very granular. You don't necessarily have to go to this level. Even though I go to levels like this, where I'm pretty much spoon-feeding it exactly what to do, you could just say "email marketer". You could just say "social media manager". In this case, I state the product and explain exactly what it is. I say that we're creating the agent team to build a complete marketing campaign for the product launch. Notice, now we're at this stage, this is always the goal: having your objective function, the goal you're optimizing for, is the most important part. Then we're just contextualizing it and giving it all the nuance. With the ad copywriter, this is where it's valuable to add some granularity. I'm saying I don't want you to just create three variations of ad copy in general. I want you to create variation A, where it's problem agitation, so the friction point. Variation B would be the social proof angle.
And then variation C, an us-versus-them comparison. A lot more social-psychology-grounded variations. And you can really have fun with this. You could go and say, you know what, I want variation D to also take on the persona of Edward Bernays, who, if you don't know, kind of invented all those trends, the whole notion of bacon and eggs being a breakfast couple. He was the mastermind behind all those marketing campaigns. If you add that as an extra lens, you now have the capacity to have one agent take on different lenses without diluting any one of them, since each is so focused on the task at hand. After that, this runs as expected. We get all the email sequences. We get the entire marketing campaign as a series of markdown files. And once again, you can spin these up. If we take a look at the email sequence, for example, you have email number one, the teaser, the subject line. It tells you how long before the launch to release it. I don't see too many em dashes. I see a pseudo em dash here made of two hyphens, but you could probably just tell it to avoid that. Looks decent, but still AI. So you can focus on the copywriting to make it better. I see some italics here, so it's not completely AI slop, but you could de-slopify it with the right instructions. And for the final use case, I'm going to show you how to create the 80/20 version of your own take on OpenClaw that fits exactly what you're looking for, in the easiest way possible. I dropped a whole video kind of alluding to this earlier in the week. Some people loved it, some people hated it. All good. I'm still going to show you how to one-shot it very closely with a pretty comprehensive prompt. So we're going to use both sub agents and agent teams working cohesively together in a way where they complement each other. Sub agents will take on more of the grunt work. We'll use Explore sub agents to go and take a look at the existing OpenClaw repo to see what it is we want from it.
And then the agent team will have the architect, the blueprinter, and everything else in terms of your core requirements or wish list: one in charge of the Telegram setup, one for the skill setup, one creating a version of memory that might fit your use case, and one to help you create the CLI experience. And like I promised, this is probably the beefiest prompt of all of them. This is a small essay, like The Hobbit, and it starts off saying: "Create an agent team to build a personal AI assistant, a better customized version of OpenClaw built specifically to run my company, Prompt Advisors." Now, I give it the URL of my company for a reason. I wanted to tailor what the build of the personal assistant should be so it drives business value based on what I do day-to-day. So the end goal: a working CLI (a command line interface, if you don't know what that is), which is this right here, where you have OpenClaw, in this case called MarkClaw. We complete the prompt by saying that we want to connect it to Telegram, understand our business context, pick the right tool for any task, and help me run my company day-to-day. And then I say: first, before spawning the agent team, use a sub agent to clone and analyze the repo. So now we're offloading that task, to preserve tokens, to just contextualize exactly what's there. This is where we specify everything for the sub agent. We tell it exactly to clone this repo, read through the codebase, summarize the overall architecture and how components connect, the design patterns used, what technologies and dependencies it relies on, the three best ideas we should steal for our version, and the three things we should do differently. With that, we have the agent team spinning up in step two. And we tell it to spawn five teammates to build our custom version.
We tell it we want one for the architect, one for the Telegram interface, the skill router and tools, memory and context integration, and the CLI. So, pretty much everything you would need to put this together. If you go to the very bottom here, we have the dependency chain it walks through. We have the architect taking care of everything we need to progress it. And it takes around 20 to 30 minutes to go from zero to the very end. It originally came up with this one, which I didn't really like: "prompt advisor assistant", a little bit blinding on the eyes when you open it up. So then I switched it up to MarkClaw, and that was not using the agent team. The agent team shut down at the point where it completed the original version of the command line interface. I just wanted to individually go back and forth with Claude Code to get it to the point where I could look at it and be like: awesome, this works, this is cool. And then, in literally one shot, after some more aesthetic updates, we can go add our Telegram token, onboard, and have this up and running in a matter of minutes. So hopefully this walkthrough shows you the power of agent teams applied to non-technical and pseudo-technical use cases, so you can start using them everywhere it makes sense to break down heavy problems or heavy tasks. And I know you're probably waiting for me to say it: yes, I'm going to make all the prompts I showed you available to you for free in the second link in the description below. But what I teach here on YouTube is just a sliver of what I go through in my exclusive community. And I'm going to do a whole masterclass on setting up and configuring my own version of OpenClaw, MarkClaw, whatever claw you want. So if you want to check that out, and every other resource we have, then check out the first link in the description below, and I'll see you inside.
And for the rest of you, if you enjoyed this video and it helped illuminate more use cases where you can practically use agent teams, I'd super appreciate it if you could like the video and leave a comment, good or bad, all good, just so the video can get some more recognition in the algo. And I'll see you all in the next

Video description

Join 800+ AI builders: https://www.skool.com/earlyaidopters/about
Grab all 7 agent team prompts: https://markkashef.gumroad.com/l/agent-team-prompts
Book a 1-on-1 call: https://calendly.com/d/crfp-qz3-m4z

Agent teams in Claude Code are mostly used for technical tasks. This video flips that. I walk through 7 real use cases -- 6 completely non-technical -- where a single prompt spins up a coordinated team of agents that handles what would normally take you an entire session. From repurposing content across 4 platforms to building a competitive intel report to responding to an RFP, each example includes the full prompt breakdown and live output.

TIMESTAMPS
0:00 - Hook
0:28 - Overview of all 7 use cases
0:44 - Use Case 1: Content Repurposing Engine
4:54 - Use Case 2: Research & Pitch Deck Builder
9:06 - Use Case 3: RFP / Proposal Response
12:58 - Use Case 4: Competitive Intelligence Report
15:03 - Use Case 5: AI Advisory Board
18:19 - Use Case 6: Marketing Campaign Launch
21:03 - Use Case 7: Personal AI Assistant (MarkClaw)
24:29 - Resources & Outro

#ClaudeCode #AgentTeams #AIAgents #Anthropic #ClaudeAI #AIAutomation #PromptEngineering #MultiAgent #AIWorkflow #AICoding #AITools #AIConsulting #AgenticAI #AIProductivity #ClaudeCodeTutorial

© 2026 GrayBeam Technology Privacy v0.1.0 · ac93850 · 2026-04-03 22:43 UTC