Transcript
Good afternoon. Is it afternoon yet? It's still morning. Hi, my name is Scarlett Dame. My pronouns are she and her. It's good to be here. Today I want to give a talk I'm calling Immutable Selves, about a functional approach to digital identity. If you're a follow-along-while-you-listen person, the QR code is the talk slides. Otherwise, let's do it. First of all, this talk was largely compiled by AI; I'll get into how I did that later. But just to reintroduce myself: it's fun to be back at the Conj. The Conj has seen me in various forms over a long time. The first Conj I was at was 2013, back when I still thought I was a man. I gave a talk in 2019 as Scarlet Spectacular; that talk was a sort of precursor to this. And now I'm Scarlett Dame giving this talk. The point here is that identity is not mutable state. I'd really like to forget some of the things that happened in the past, but the fact is that there are these touch points, and they're individual append-only moments in our lives, and yet we're still treating identity in most of our lives, and specifically identification in some of the services we work with, like a mutable thing. So this is a fake piece of identification, and my question here is: what is actually being represented? What is the identity, and more importantly, who is the authority issuing this identity, and what claims are being made? In this case, the state of imaginary California is saying that there is this person, John Doe, who was born in 1991. But the actual truth here is not that there is a person named John Doe; the truth is that the state of California claims that there is someone named John Doe with these particular attributes. So another, somewhat weirder example: is there an identity here?
And who is making the claim? I'd argue the authority is OpenAI, and it's claiming that there is this thing called ChatGPT, that it has a model version, and it presents itself to us as this. But that isn't the same thing as truth. It's a snapshot, a representation of that thing at a particular point in time. When I talk to my AI today, it's not the same thing as my AI three weeks ago, because its experiences have changed. So I want to distinguish between two types of identity before we go on too much. The first is essentially individuality: distinguishing character or personality. The second and third definitions are really the same, and they both describe sameness: the ability to compare something. And we can use those to get two synthetic versions. The first is a synthetic identity that has sameness of entity using constructed credentials. The second is synthetic individuality: the creation of a unique being, something that is essentially unique.
So, going back to this comparison about mutable state: I was joking about Backbone.js, which was my introduction to professional development in 2012, back before I read Michael Fogus's Functional JavaScript. Essentially we had this model I'm sure we're all familiar with: see picture, select picture, mutate picture. And then we had, for me, what was a revelation in Om: the single source of truth. It wasn't just that React was a declarative renderer that allowed us to render the DOM using a pure function; it was that code flowed from a single source of truth, so that when we made mutations to the upstream single source of truth, it triggered this pure function and we got our compiled surface. And in Clojure we don't have frameworks, as is often lamented; we have good tools, or simple tools, and good principles, which I think in the end give us a much more powerful process. So I want to use this combination of simple tools and good principles to present a design pattern that I'm going to call reified change. This is not a new and revolutionary design pattern; this is the way Clojure has been thinking for a long time. But I want to apply it to identity, give you two different domain examples of how that works, and then do a demo. So first, this pattern: what are we doing? One, we're making state explicit through an append-only log that compiles to form a single source of truth at a point in time. What does that mean? Everyone sees the same thing for the same state. We render as a pure function, which gives us a deterministic UI. It then turns interaction into transaction: we have functional event handlers that produce transactions, which circle back around and append to the log. Again, this is not new in Clojure. This is very common. We have well-accepted tools that we combine to create this pattern in various stacks; Datomic and Datalog are the obvious example.
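The loop the speaker describes can be sketched in a few lines of plain Clojure. This is a minimal illustration, not the speaker's code; the event shapes and function names are invented:

```clojure
;; Minimal sketch of "reified change": an append-only log of events
;; compiles (via a pure function) into a single source of truth, a pure
;; render turns that state into a surface, and each interaction appends
;; a new event to the log. Event shapes here are hypothetical.
(def log (atom []))                           ; the append-only log

(defn compile-state [events]                  ; pure: log -> source of truth
  (reduce (fn [state {:keys [attr value]}]
            (assoc state attr value))
          {}
          events))

(defn render [state]                          ; pure: state -> surface
  (str "Identification as of now: " (pr-str state)))

(defn transact! [event]                       ; interaction -> transaction
  (swap! log conj event))

(transact! {:attr :name :value "John Doe"})
(transact! {:attr :can-drive? :value true})
(render (compile-state @log))
```

Replaying the same log always compiles to the same state, which is what makes the rendered surface deterministic.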
But this is just as true for DataScript. It's just as true, actually, for React and GraphQL. It's true for re-frame. We can construct various paradigms that do essentially the same thing. So again, reified change: a single source of truth. We query. We then render with a pure function. We then have user interaction that produces events, which produce transactions, which append to the log, which gives us a new single source of truth. So what is the source of truth when we're talking about human identity? I want to argue it's you, but over time. It's not me as I stand here right now; it's all the things I have been. And I want to use that to argue that experience is an append-only log that compiles to identity, which is probably the nerdiest thing I've ever said. But [clears throat] we can use that to say we get a deterministic UI in this case, which is identification. We take a query onto the source of truth of our identity changing over time, and those claims are rendered as who I am and what I'm allowed to do. I can drive a car because I did the things that gave me the right to do so, after taking a driving test and registering my identity with the state of California, and so on and so forth. And then interaction is transaction: I do new things, I interact with new people, that appends to the log, and round and round we go. So, two systems, one human and one AI. The first example is Be Recognized, formerly Vouch.io, where I was the chief strategist for the last two and a half years before I left to start this new company. Their mission is to recognize the human behind the device by creating immutable identification. I want to give one use case that they've been working on. It still kind of blows my mind, but we found that there are bad actors deepfaking candidates in enterprise recruiting processes.
So basically, someone shows up to the interview, they have a great interview; they show up to the next interview, that's a great interview; then they show up for work. And the company realizes down the line that the person on the screen is actually not the person they thought, and has been deepfaked the entire time. There's been some malicious actor, often complicit with the candidate, who is now collecting paychecks on behalf of someone who did have the credentials, and is showing up and presenting themselves in order to impersonate them. So what is the solution here? The solution is to establish continuity of identity at different touch points through the process. When that person shows up to the first meeting, we want to confirm that they are the same person that was referred into the interview, the same person that appeared on government identification, the same person that appeared at the next interview, and the same person that shows up for work. And so the architecture they set up to do that, at the top, is very standard Clojure and Datomic; the query surface is Datalog. What we're doing, though, is taking the append-only log of interactions (this person showed up for this interview, this document issued this privilege, whatever else) and rendering it, instead of to the DOM as in the case of Om, to identification: to a device that can then be recognized by another device, which compares those privileges. And then as we interact over Zoom, in person, and so on, we create further events (passing the interview, getting a job, and so forth), and that appends back to the log, and we get our loop. So again, identification represents a compiled snapshot as of a point in time. Experiences are the append-only log that gives us a compiled single source of truth as of t.
We then render that to identification, i.e., who is this person and what are they allowed to do. So what do we get from this pattern, in the Clojure setup here, from immutability? Well, we get equality, and god, if only it was that easy in the world. What I mean is that when we compare immutable logs, it's a simple comparison. If I try to compare my license to your license, or this license I had at one time to another, there's no way to do that; we have to call the source, check the watermark, and so on. But if I want to compare two append-only logs that say this person was here, this person was here, this person was here, it's just a simple hash. It's just a pointer. So again, the design pattern; I feel like I've stated this enough. What do we get for free? We not only get equality, we also get provenance: who created this change, who assigned this principle. We get versioning: I can not only query as of t, I can fork, I can do all the things we do with immutable data structures. I get generative testing: if my system is just data, rendered via a deterministic process to a surface, i.e., identification, then instead of my identification being some printed card, I get identification that's the result of a data structure. So I generate as many data structures as I want, and I get generative testing. I get decentralization and infinite read scale, because I can have as many clients reading from that same source of truth as I want. What don't I get? What are we giving up here? Distributed writes. We always have to bottleneck through a single transactor, or else we can't guarantee that the system is eventually consistent. All right. So, second example: how does this apply to AI? I think this gets a little more abstract, but the company that I've just started has a mission to create AI memory that tells your story as written.
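"Equality for free" and "as of t" can be made concrete with a small sketch. The log shapes here are illustrative, not from the speaker's system:

```clojure
;; Sketch: comparing two identities reduces to comparing two immutable
;; append-only logs. That is plain value equality (backed by structural
;; hashing), rather than calling an authority to verify a card.
(def log-a [{:event :id-issued :tx 1} {:event :interview-passed :tx 2}])
(def log-b [{:event :id-issued :tx 1} {:event :interview-passed :tx 2}])

(= log-a log-b)                        ; value equality over whole histories
(= (hash log-a) (hash log-b))          ; equal values always hash equal

;; Versioning "as of t" falls out of the same structure: keep only the
;; transactions up to a point in time.
(defn as-of [log t]
  (filterv #(<= (:tx %) t) log))
```

With immutable values, two independently produced logs that record the same history simply compare equal; there is no watermark to check.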
It's solving a fundamental problem: my AI does not do the same thing as your AI. Back when I was working at Vouch, I was doing a lot of strategy work, creating a deck a week for different clients, and we were creating different versions of this project in this domain. I developed this process of seeding this entire narrative source of truth at the top of a chat, and people would come to me and say: hey, can you have your AI write this document? Because when I ask, I get nothing. And the truth there is that even though we all see the same chat input, what's behind it is very, very different, and that's currently hidden by the interface. So what is the source of truth for AI identity? It's really unclear. It's really unclear. So the goal is to create reified AI memory. And the approach is setting up the same design pattern, but with an entirely different stack. I think that's my fundamental point in this talk: you don't have to use the traditional Clojure/Datomic/Datalog stack to produce this pattern that lets us start with an append-only log, get a snapshot, compile to a surface, and then produce events that transact to the log. In this case, I'm using RDF and git. I'm querying that with SPARQL, and the rendered surface is AI memory. We then interact with that through an AI interface like chat, and the events, further messages, allow us to extract data about each message and append that back to the log. Same semantics, different substrate. A datom is still a quad: we have entity, attribute, value, and a tx id, for assertions and also for retractions. But we can do the same thing in RDF. RDF is fundamentally just triples, and we can put them in named graphs, so we have transactions that both assert and retract, and we can give them provenance.
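The "same semantics, different substrate" point can be shown literally. All identifiers below are invented for illustration:

```clojure
;; A Datomic-style datom carries entity, attribute, value, and the
;; transaction that asserted it (plus an added?/retracted? flag); an RDF
;; triple placed in a named graph is likewise a quad. Identifiers here
;; are made up for illustration.
(def datom
  [:person/scarlett :person/name "Scarlett" 1001 true])

(def rdf-quad                      ; [subject predicate object graph]
  ["urn:person:scarlett" "foaf:name" "Scarlett" "urn:tx:1001"])

;; In both substrates a retraction is just another reified fact, not an
;; in-place mutation:
(def retraction
  [:person/scarlett :person/name "Scarlett" 1002 false])
```

Because the change itself is data in both representations, provenance (which transaction or named graph said this) travels with every fact.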
This talk in many ways is also a response to Luke VanderHart's talk last year. I don't know if I see him here, but I'm a big fan of his work. So we now get an append-only transaction history, but in git form with RDF, where we have a sequence of RDF transactions that we can compile as of t to form a snapshot. The interaction pattern here is that someone shares something with the AI; that gets extracted and saved to the append-only log; we compile as of t; and we can then get a deterministic response. So instead of AI memory being something that is hidden, we can now have AI memory that is a deterministic function, where we compile as of a point in time, and that seeds the AI and allows it to speak from a deterministic place that we can reason about, version, and so on. And this talk is proof of this. This talk was created through this process: I did all of my research; I extracted it via RDF; I got an initial set of commits in the append-only log; I had the AI generate a set of slides; and then I started iterating. I would narrate long voice memos, perform an additional extraction using the transcript, and it would iterate and give me a new set of slides, and I could steer the content by doing so. So how does that end up looking? It gives us change over time. I think in the past we've treated a document, whether it's a slide deck or a product document or whatever else, a bit like we treat identification: this is the thing, this is the thing now, the thing as it's supposed to be, instead of that being the result of transactions over time, transactions that change what we believe, what policies are in effect, the perspective underlying it. And so the goal of this system is to reify that, and to allow us to treat AI memory, which can then compile any asset, as something that is versionable.
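Compiling a transaction history "as of t" can be sketched as a fold over commits. The commit shapes below are stand-ins for the RDF-in-git transactions described in the talk, not its actual format:

```clojure
(require '[clojure.set :as set])

;; Each commit asserts and/or retracts facts; a snapshot "as of t" folds
;; every commit up to t into the set of currently-true facts. Commit
;; shapes here are hypothetical stand-ins for RDF transactions in git.
(def commits
  [{:t 1 :assert  #{[:talk :title "Immutable Selves"]}}
   {:t 2 :assert  #{[:talk :has-demo? true]}}
   {:t 3 :retract #{[:talk :has-demo? true]}}])

(defn snapshot [commits t]
  (reduce (fn [facts {:keys [assert retract]}]
            (-> facts
                (set/union      (or assert #{}))
                (set/difference (or retract #{}))))
          #{}
          (filter #(<= (:t %) t) commits)))

(snapshot commits 2)   ; both facts asserted so far
(snapshot commits 3)   ; the demo fact has been retracted
```

Because the snapshot is a pure function of the log and t, any two readers compiling the same history to the same t see the same state, which is what makes the AI's seeded memory deterministic.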
I think I already said that. So, a quick demo here. I think live demos are playing with fire. So: cool talk, can you summarize the back half for me? I got a little sleepy. It says, sure, first let me load the snapshot, and then the AI is able to perform our query for us, where it gives us exactly what we asked for. You can then ask more specific queries: can you show me how the stuff about aswritten.ai changed over time? Again, it's going to compile the snapshot, load the underlying ontology and the set of transactions, and then I can trace exactly how the talk evolved over time, starting with the initial transaction and going into refinement. It even grouped this nicely into phases for us. It breaks this down into a cute table about how the scope and data model and so on have changed over these phases. I can ask about the way entities relate to each other in the graph, and also about how convinced the AI is; I've added some metrics around conviction. You can ask which transaction was the most impactful in changing its conviction about the value of the design pattern. [clears throat] And we get a most impactful transaction. It's going to say (oh, I can't read that fast) that the escalation path moves. This is also from Luke VanderHart; I love this framework of thinking about ideas in terms of a notion, then a stake, something that's planted in the ground, then a boulder that's really hard to move except with the help of others, and then a foundation. And then I can also ask what stories I can write. So I can treat different documents as render targets. We have the Conj talk, we have a generic presentation, and then we also have the readme. So I asked it to write the readme, and it's going to write a summary of the repo and the talk that created the repo, and the repo that created the talk.
So it gives me a summary of the state, a summary of the stories in it, the directory structure (and this is all from a prompt stored in that story template), a summary of some of the transactions, and then some stuff about how conviction has escalated, and some key insights and how they've changed. And last, I say: I'm giving a demo right now; what is one thing you'd like folks to leave with? So it's again loading the ontology and snapshot. All right. I'm going to read this directly, which is usually verboten in talks, but this is a directly rendered summary of the entire talk's knowledge graph. It says: what do we want to leave this talk with? One thing: identity, whether human or AI, should be modeled as an append-only log that compiles to state, not as mutable objects. This single shift unlocks everything else. Provenance: you can audit every change. Equality: referential identity comes for free. Decentralization: reads scale infinitely, because the source of truth is immutable. The pattern works across domains: Be Recognized (human identity via Datomic) and aswritten.ai (AI memory via RDF and git) are two independent implementations of the same canonical flow. Again, our single source of truth is queried and then rendered to a surface; we interact with that surface; events lead to transactions that append to the log and are then recompiled to the single source of truth. And, really cogently here, it says: this is not a new idea. This is how Clojure thinks about state. The insight is that identity is just another state machine, and state machines are easier to reason about, test, and trust when they're immutable. That's the core. Everything else, the talk, the case studies, the meta-demonstration, is proof that this pattern actually works when you build it.
So, in lieu of some other summary of myself, I will leave it there. If you'd like to play with the demo, this QR code is a link to a live version. It is not a production system, so if it breaks, be gentle. But please have fun and experiment and let me know what you think. Thank you so much. [applause] Any questions? So I constructed the ontology from a number of years of work as a strategist running tech companies. I had a document hierarchy of how I was working, starting with sort of narrative anchors. I think about narrative as a steering function for ideas. A narrative is something like "immutability is important"; it's a pithy statement. Or the append-only log could be something like a narrative. And I constructed an architecture for creating documents that thread narrative from product strategy through to architecture documents, through to strategy documents about partnerships, and so on. I used a canonical example of a collection of these documents, had AI extract an RDF ontology from that, and then did a couple of other passes where I added things like style, conviction, and a number of other things over time. [clears throat] >> Yeah, I thought that was really philosophically and technically interesting. One, I guess, more practical or social question that came up for me: if your identity is being conceptualized as a series of states, what does that mean for a large institution that maybe has bad actors, or is systematically biased, and wants to use these past states to discriminate or otherwise harm people? >> Yeah, that's a great question. There's not an easy answer. I think that the question for me lies in where the black box is.
I think there's the source of truth of what has actually happened to me, and then there's the sort of impenetrable boundary of what people see. And I hope this isn't a sidestep of responsibility in answering, but I think we have a responsibility as technologists to draw the line of where the black box is in our systems: at some point there is the truth of the immutable append-only log, and past that there's the compiled surface that we actually allow people to see, interact with, and potentially respond to. And whether an append-only log represents the exact truth of reality is another question, right? You can have whatever you want in that log; whether that corresponds one-to-one with reality is up to the designer of the system. And certainly, yeah, an open question. >> Yeah, as I was formulating my question, I was thinking this would be a lot more comfortable, and would feel more empowering, if it was on hardware I control. >> Yeah. >> So, thank you. [applause]
Video description
Identity has been centralized, mutable, and vulnerable to fraud. Deepfakes, synthetic identities, and impersonation expose the limits of password‑centric, mutable record systems. This experience report shows how Clojure’s principles of immutability, explicit state, functional composition, data‑first design, and knowledge graphs can ground a practical architecture for trust and the synthesis of identities that act on our behalf rather than for our would-be attackers. Using past work with Vouch.io, I explore a model for human identities in organizations as append‑only event logs, authentication as pure functions, and delegation as auditable chains of responsibility. I then extend the model towards new work, as the founder of Sic, on AI memory, using persistent logs and knowledge graphs to give agents deterministic individuality, narrative-driven provenance, and shareable perspective. We move from a simple mental model to concrete system patterns you can adopt today: immutable facts at the edge, verifiable receipts for every interaction, and graph‑based resolution across devices, agents, and organizations. Biography Scarlet Dame (she/her) is an independent narrative strategist and systems designer working on identity, agent memory, and functional architectures. She is founder of Sic, an AI memory company that uses narrative-driven knowledge graphs to create AI individuals that tell their organization’s story. She is formerly Chief Strategist at Vouch.io (now strategic advisor) and is developing and teaching an upcoming course on synthetic identity at NYU’s Interactive Telecommunications Program. Her work applies Clojure principles to real systems: immutability, explicit state, functional composition, and data‑first design. She has led identity and delegation initiatives with enterprise partners and built operational playbooks for provable interactions. As a trans woman, her lived experience informs a clear, practical framing of identity as contextual and evolving. 
Recorded Nov 14, 2025 at Clojure/Conj 2025 in Charlotte, NC.