Jacob Dietle · 4.3K views · 173 likes
Analysis Summary
Ask yourself: “What would I have to already believe for this argument to make sense?”
Performed authenticity
The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.
Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity
Worth Noting
Positive elements
- The video provides a useful conceptual framework for understanding why AI agents fail when they lack specific, grounded context (the 'visibility vs. leverage' trade-off).
Be Aware
Cautionary elements
- The use of high-level philosophical jargon (epistemology) to describe what is essentially sophisticated data organization and prompt management.
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Related content covering similar topics:
- Claude Just Rolled Out 2 Big New Features · Matt Wolfe
- DHH’s new way of writing code · The Pragmatic Engineer
- This New Claude Code Feature is a Game Changer · Nate Herk | AI Automation
- STOP Using 10 Agents #ai #tech · EO
- AI Agents That Actually Do Work (2 Real Examples) · Travis Media
Transcript
Okay, today I'm going to talk about why I think the study of knowledge, epistemology, is the most powerful mental model for context engineering and interfacing with large language models. At its core, epistemology is the study of what you can know. In relation to using language models, agents, and everything we build on top of them, it means a ruthless pursuit of clarity: the ability to understand what you're asking for, what you want, why you want it, how you want it done, and what your typical parameters are for it being done a specific way, all the way down to describing your own mental model. So it starts with describing your mental model, then effectively communicating it outside of your head.

This is, let's call it, the oldest problem humanity has dealt with: how do you communicate with one another effectively? The really cool thing is that communication generalizes. When you ask, "Does that person have the context on what we're talking about?" that ability to almost empathize, not emotionally, but to understand what information someone could possibly have, applies to communicating with your language model.

I believe this to be so effective that it is better than being good at statistics, better than being a really good data scientist, better than understanding machine learning. Those things are obviously far more useful for actually building these large language models, but for interfacing with them and getting them to produce output that is actually valuable for solving a real end problem, it's applied epistemology. That's what I think context engineering is: applied epistemology. It goes back to that communication gap: what do I want to know, how do I get that information, how do I communicate it, how do I distill it, and how do I get it from my mind into an external system like a language model? There are two gaps there. Gap one is going from your raw understanding to clarity on the problem set, and gap two is communicating that clarity to the language model.

So why does this matter? It matters because as we get more powerful systems, we are making a trade-off. We're getting more leverage, so you can spawn more agents to do more things, write more code, write more emails, but we're losing visibility. That's the trade-off. On this X-Y axis, ideally you want to be up here: high leverage, high visibility. You have clairvoyant vision and know how to describe what you want perfectly, so you can take advantage of all the leverage you now have. More often than not, though, I think we're actually over here: using incredibly advanced systems without knowing how to get them to do what we want. Not because of some arbitrary problem where we need better models or better tooling. We don't know how to communicate clearly what we want, and that is the main bottleneck for interfacing with the vast majority of AI systems. It is clarity.
The way you solve for clarity is by ruthlessly interrogating your own knowledge: understanding what you want, how to communicate that externally, and how to actually execute on it. That's what we're going to be talking about today.

If you're not familiar with what I work on, I build context operating systems, primarily for go-to-market teams. A context operating system takes your scattered context, transcripts, working documents, CRM data, analytics, everything your business spins off as artifacts, the stuff that causes you to spend endless time context switching and repeating yourself to your AI agents over and over again. We fix those problems by centralizing everything into a living source of truth in the form of a markdown knowledge graph. I like the markdown knowledge graph because it is simple: there is nothing but the agent, the context, and the user. Remove as many moving parts as possible. Simple is effective.

In the course of building these systems, they grow. They get these emergent properties: I can put my transcripts in here, I can put the app I'm working on in here, and I can have the agent connect the dots across those two things. For example: I thought pain point one was the number one pain point my customers were having, but looking at all of our transcripts across the past three months, they're really talking about pain point three. That's possible because you now have the leverage to query across all of those transcripts and pull out real insight.

I think it is more important than ever to understand what you can know, because our systems now confidently lie to us. They generate text that makes a confident statement that isn't true. That's a hallucination. Interrogating your context and grounding it in a knowledge base, like the one I build for myself and my clients, is how you make these things valuable and far more reliable.

The core way I've found this to be actionable is around something called falsifiability. What is falsifiability? It means the ability to evaluate a claim as true or false: you can falsify it. A weakly falsifiable claim is something like guessing what I'm thinking right now, or anything with a bunch of moving parts; it's unbounded and you have no way to verify it. A highly falsifiable claim is something like 2 + 2 = 4. That's something you can verify for yourself by counting, or however you want to do it.

Why does this matter? At their core, language models are basically dream machines that are sometimes aligned with reality. That may not be the most architecturally correct analogy, but I think it's useful nonetheless, because it puts hallucinations at the center. They're not some problem to work around; they're core to how these things function. So if we can't eliminate hallucinations, we have to find a way to work with them and make them work for us, which is what we're doing with falsifiability. We are taking the probabilistic nature of these machines and reducing the deviation. We're making the system more reliable.
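To make the transcript-query idea concrete, here is a minimal sketch that grounds a claim like "pain point one is our customers' top issue" against a folder of markdown transcripts. The directory layout, pain-point labels, and keyword lists are hypothetical illustrations, not the author's actual system:

```python
# Minimal sketch: ground a claim against a markdown knowledge base.
# Assumptions (not from the video): transcripts are .md files under
# transcripts/, and pain points can be matched with literal phrases.
from collections import Counter
from pathlib import Path

PAIN_POINTS = {
    "pain point 1": ["onboarding", "setup time"],                 # hypothetical keywords
    "pain point 3": ["context switching", "repeating myself"],
}

def count_mentions(transcript_dir: str = "transcripts") -> Counter:
    """Count keyword hits per pain point across all transcripts."""
    counts: Counter = Counter()
    for path in Path(transcript_dir).glob("*.md"):
        text = path.read_text(encoding="utf-8").lower()
        for label, keywords in PAIN_POINTS.items():
            counts[label] += sum(text.count(kw) for kw in keywords)
    return counts

if __name__ == "__main__":
    for label, n in count_mentions().most_common():
        print(f"{label}: {n} mentions")
```

The point is that the claim becomes falsifiable: anyone can rerun the count against the same transcripts, and the ranking either supports the claim or it doesn't.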
I'd rather have a system that is 85% accurate all the time than a system that is 95% accurate on average but will sometimes swing down to 35% or 60% accurate. We are making the thing more reliable by making it produce claims we can verify.

So when do you need to solve for falsifiability? It depends on something I'm calling context sensitivity. What this means is: how much outside information do you need to give the model for it to correctly assess the situation and produce relevant outputs? Take our favorite example, 2 + 2 = 4. You don't need to give the model any outside information for that, because it's in the training data. A high context sensitivity problem is one where you're working with a brand-new Python package, a brand-new framework, whatever it is; that information is not going to be something the model has, so you have to provide that context to it. Think of it as a sliding scale. When you say, "I want you to assess my entire business based on our transcripts," obviously you have to give it the transcripts; that's the easy case. But once you start breaking problems down, you can start figuring out where the nuances are, almost the physical shape of the context, and how to give the model the right information at the right time to solve your specific problem.

Just to contextualize what we're looking at here: this is a skill I made for myself based on a write-up I did several years ago. I've been using this epistemological context skill (sometimes it's a hard word to say) all the time, in conjunction with a command-line tool I built that I'm calling Taste Matter. The angle is that it knows the context better than the user; it knows it better than myself. You can see how fitting these two together will be extraordinarily valuable. But to spell it out and make it very concrete: the core tension I'm finding in all of these systems goes back to our leverage-and-visibility trade-off. There's what I think my system is doing and how it's working. Say, hypothetically, we get 1 + 1 = 5,000, these emergent properties; well, how does it do that? I have assumptions about what context I'm giving it and why that makes it work the way it does, but they're assumptions. Having this tool increases my visibility at no additional leverage cost, and what you can do with it basically comes down to reducing the number of assumptions you hold about the way your system works.

In this example, I have what I call the context query skill. It uses the command line to let Claude go and talk to its own JSON files and file access activity, and basically paint a picture of how the system produced a given end result and what you've actually been doing with the system. It produces something like a heat map: these are the files and skills in the system (this one specifically is a snapshot of that), and this is where you're actually actively engaging with the system. And then over here:
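As one way to picture that sliding scale, here is a toy gate that buckets a prompt by how much outside context it probably needs before you hand it to a model. The keyword heuristic is purely illustrative, my own stand-in rather than the author's method:

```python
# Toy sketch of a "context sensitivity" gate. The signals below are
# illustrative guesses, not the author's actual heuristic.
from dataclasses import dataclass

OUTSIDE_KNOWLEDGE_HINTS = ("released last week", "our transcripts",
                           "our crm", "internal", "v0.")  # hypothetical signals

@dataclass
class Task:
    prompt: str

def context_sensitivity(task: Task) -> str:
    """Rough bucket: how much outside context must we supply?"""
    p = task.prompt.lower()
    if any(hint in p for hint in OUTSIDE_KNOWLEDGE_HINTS):
        return "high"    # model cannot know this; ground it with documents
    if "2 + 2" in p:
        return "low"     # answerable from training data alone
    return "medium"      # supply context defensively

for prompt in ["What is 2 + 2?",
               "Assess my business from our transcripts",
               "Use the framework released last week"]:
    print(f"{context_sensitivity(Task(prompt)):>6}: {prompt}")
```

High-sensitivity tasks are exactly the ones where falsifiability matters most, because the model will otherwise generate confident text with no grounding to check it against.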
So the context here: the taxonomy and the ontology are supposed to be governing files that are always used, more or less, and that tell the agents how to label files and work with files. And it turned out that I'm not using these things pretty much at all, whereas some files are meant to be accessed rarely, like a knowledge base entry. Based on the heat level, those governing files should have been on fire, but they're not. So now they're basically an anti-pattern. They're residual. They're making it harder for me to understand what I need to do to optimize my system, whether that's writing code faster or whatever else I'm optimizing for. In this case, context lookup and context restore are among the things I'm solving for with Taste Matter. But these files are actively hindering my visibility into my own system, and that's actively making it harder for me to make decisions and streamline the system to produce real, valuable results.

That's the very applied epistemology I'm doing myself today, and I'm doing similar things on all of my client systems. We're basically trying to solve for interpretability. Interpretability is a machine learning concept; this is interpretability of context. I'm finding that as these systems get bigger and bigger, you have to give yourself that visibility edge.

Two things to close. First, if you're interested in trying out Taste Matter as an alpha user, I'm starting to open up a few slots. I need help testing the thing and beating it up, so if you're interested, hit me up. Second, I'm open to releasing this skill, the epistemological context audit. What I do with it: if I'm starting a new project or a new context window in a session, I say, "Hey, call the epistemological skill and enumerate all the stuff we need to figure out." There's a ton of material in it about assumption enumeration, and I'm finding it makes Claude Code far more accurate and far more reliable. So if you're interested in that, leave a comment, and if enough people want it, I'm happy to release it. Thank you so much for your time. I really hope you have a wonderful rest of your day.
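Here is a rough sketch of the heat-map idea: tally which files the agent actually reads across sessions, then compare that against the files that are supposed to govern every run. The video does not specify Claude Code's actual log format, so the log location and event shape below are hypothetical stand-ins:

```python
# Sketch of a context heat map: compare actual file accesses against
# files that are *expected* to be hot. The logs/ directory and the
# JSONL event shape are hypothetical, not Claude Code's real format.
import json
from collections import Counter
from pathlib import Path

LOG_DIR = Path("logs")                       # hypothetical: one JSONL per session
GOVERNING = {"taxonomy.md", "ontology.md"}   # files expected on every run

def access_counts() -> Counter:
    counts: Counter = Counter()
    for log in LOG_DIR.glob("*.jsonl"):
        for line in log.read_text(encoding="utf-8").splitlines():
            event = json.loads(line)
            if event.get("type") == "file_read":         # assumed event shape
                counts[Path(event["path"]).name] += 1
    return counts

if __name__ == "__main__":
    counts = access_counts()
    for name, n in counts.most_common():
        bar = "#" * min(n, 50)
        flag = " <- governing file, should be hot" if name in GOVERNING else ""
        print(f"{name:24} {bar}{flag}")
    for name in GOVERNING - set(counts):
        print(f"{name:24} (never accessed) <- candidate for removal")
```

A cold governing file is exactly the kind of falsified assumption the video describes: the system was presumed to depend on it, the access data says otherwise, and removing it simplifies the system at no leverage cost.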
Video description
Why You're the Bottleneck in Your Claude Code System (Applied Epistemology)

All systems have trade-offs. Use a telescope? You can see craters on the moon, but you're blind to what is five feet in front of you. You are trading vision for leverage (in the telescope example, literal focusing power). In context operating systems and broader agentic systems, you are trading visibility for leverage.

The creator of Claude Code said he never writes code by hand anymore. The creator of Node.js said this: “This has been said a thousand times before, but allow me to add my own voice: the era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That’s not to say SWEs don’t have work to do, but writing syntax directly is not it.”

We are moving up a layer in abstraction, no longer focused on syntax, but the architectural decision making is still critical. Increased leverage: write systems in 1/10 of the time they used to take. At the cost of visibility: you have less understanding of what individual lines do and why.

In this video I share my framework for dealing with this fundamental problem of agentic systems. Heavily based in epistemology (the study of what you can know), I share how I make my own Claude Code and context OS more reliable by making them more verifiable, and the concepts I've created to do this, such as context sensitivity. At the very end I share how I built a CLI to increase my visibility into my system at no leverage cost, and how it helped me see how many of my assumptions about why my context OS worked were wrong or misguided, helping me make it even more powerful by removing redundant and actively interfering parts.

If you're interested in getting access to that CLI tool, or the epistemic context Claude Code skill I created, leave a comment. If there's enough interest I'll consider releasing them :)

Links:
LinkedIn: https://www.linkedin.com/in/jacob-dietle/
Website: www.taste.systems

#claudecode #contextengineering #aiengineering #gtm #gtmengineering