Analysis Summary
Performed authenticity
The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.
Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity
Worth Noting
Positive elements
- The video provides a clear explanation of why latency matters for real-time AI agents and how distributed computing addresses the 'round-trip' problem of centralized data centers.
Be Aware
Cautionary elements
- The use of 'predictions' as a rhetorical device to present a specific vendor's product roadmap as an industry-wide inevitability.
Influence Dimensions
About this analysis
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
Hi, this is your host, and we're back with our yearly predictions video series. Today we have with us once again Danielle Cook, Senior Manager at Akamai. Danielle, it's great to have you on the show.
>> Thank you for having me again.
>> It's my pleasure. And of course we are going to ask you to share your predictions, but before that, for 2026, just remind our viewers what Akamai is all about in the context of 2026.
>> Absolutely. So, Akamai powers and protects businesses online, and we're helping organizations deliver really fast, intelligent AI and digital experiences at a global scale. In 2026, AI is fundamentally resetting user expectations. People are having different experiences: they're getting responses instantly and they want real-time information. That raises the bar, and so Akamai in 2026 is here to help organizations meet and exceed that bar.
>> Excellent. Thank you. Now it's time for you to grab your crystal ball and share your predictions with us.
>> So my first prediction is that experience quality is going to be directly tied to infrastructure decisions. What I mean by that is speed and responsiveness are not just performance metrics anymore. They're defining what good feels like from a product point of view. Customers want to have a great experience, and so in 2026 we're going to have to be making infrastructure decisions that make that experience feel great. My second prediction is that where AI inference runs is going to become a primary design choice. We're moving from a world where compute lives in a few regions or one centralized data center to one where execution is distributed and much closer to the users and the data, and we're doing that by default. That's going to be really important in 2026 because, again, customer experiences and expectations are only increasing. And my third prediction, and I think this one is really interesting, is that we're seeing Kubernetes achieve product-market fit through AI. We've seen this supported by the recent CNCF annual survey: Kubernetes is where AI is running, and AI inference is becoming its defining workload. So we're settling into this role for Kubernetes as the standard, portable runtime for serving models anywhere. We need that distribution.
>> Thanks for sharing these predictions. Can you also talk about the biggest challenges or bottlenecks organizations will face? As you rightly mentioned, and I talked to Hilary Carter from the foundation as well, most workloads are running on Kubernetes, so challenges are going to be there. What are some of those challenges or bottlenecks you see organizations facing this year?
>> Absolutely. So I have three big challenges that organizations are going to hit. The first one is that centralized architectures are going to struggle to support real-time AI interactions, and they're going to struggle to do that at scale. We're going to see what we saw with the worldwide web: distance and round trips are going to show up immediately in the customer experience. Organizations are going to have to overcome that by looking at distribution. The second challenge is operational. AI collapses the entire day-2 stack into one problem: we have security, we have observability, we have different pipelines, different GPUs. All of these things come together, and teams are going to have to solve that in one place. They can't just address each issue singly. And then finally, a big challenge for organizations is around traffic patterns. Agentic systems are creating continuous, machine-driven interaction loops, and the traditional model of request-response architectures, where a person waits for a reply, is not going to work in this new agentic world.
>> Thank you for sharing this. Now, what are some of the biggest opportunities you see arising this year?
>> Absolutely. Again, there are three big opportunities that I see. The first is unlocking distributed cloud: this is going to become the default application architecture. When you're running AI workloads and running inference closer to the users, you're going to improve responsiveness and resiliency at the same time, and you're going to make the experience great for your customer. Your customer could be an external customer or an internal person in your organization, but they need that quick, real-time experience, and the distributed cloud is going to give that to them. The next opportunity relates to personalization: all of this is enabling real-time personalization. We see it in retail and in travel. Decisions are happening in the moment, not after the fact, and the distributed cloud allows that real-time personalization. Whether you are developing an AI application to sell or building it in-house, you're going to need to consider that. And then at the platform layer, there's this huge opportunity in making Kubernetes effectively invisible. It becomes an opinionated platform that removes the operational burden, so your teams can just be deploying AI anywhere. For those organizations, there are huge opportunities in what has happened in the cloud-native community, what IDPs are available, and the kind of golden path you'll need to support this.
>> What actionable advice do you have for enterprise leaders? What should they start doing now to be ready?
>> I think the main shift is around planning for what you're building next. When you're starting to design systems, you need to assume distribution at the start. Don't think about it after the fact. Think about: do I need a distributed cloud? How close to my users do I need to be? What sort of experience do they want? Do that at the very beginning of whatever new application or AI inference workload you're building. You also need to identify which AI workloads are latency sensitive, because not all of them will be. Identify that early and move the AI workloads that need real-time experiences close to the user. And then, of course, as a CNCF ambassador I'm going to say standardize on Kubernetes. Make sure that's the platform you're using, and look to it to help you remove a lot of complexity. It's complex in itself, but because of all the different ways the whole cloud-native ecosystem is evolving, there are huge opportunities there. So look at it, use it, and look at the different agents that are helping you use Kubernetes more effectively.
>> And before we wrap this up, what are going to be your focus and priorities this year?
>> We want to make AI inference workloads great. We want our customers' experiences to be amazing. And to do that, we have a Kubernetes platform, a managed Kubernetes engine, LKE. We're using that, we're backing it with our GPUs, and we have our distributed edge. We're combining all of that and making the developer experience great, so you can be running any AI models or workloads wherever your users are without any of the drag you might otherwise face.
>> Danielle, thank you so much for taking time out today to share these predictions with us. As usual, I look forward to chatting with you again. Thank you.
>> Thank you.
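The transcript's advice to identify which AI workloads are latency sensitive and move them close to users can be made concrete with a simple measurement. The sketch below is illustrative only and is not from the video or any Akamai tool: the endpoint names, URLs, and latency budgets are hypothetical placeholders. It averages round-trip times to two candidate endpoints and checks them against a per-workload budget, which is a rough way to see whether a centralized region can serve a real-time agent or whether an edge location is needed.

```python
# Illustrative sketch only (not Akamai tooling): sample round-trip latency to
# candidate inference endpoints and check each against a per-workload latency
# budget. The endpoint URLs and budgets are hypothetical placeholders; replace
# them with real health-check URLs before running.
import time
import urllib.request
from urllib.error import URLError

ENDPOINTS = {
    "centralized-us-east": "https://inference-central.example.com/healthz",
    "edge-frankfurt": "https://inference-edge-fra.example.com/healthz",
}

# Rough latency budgets in milliseconds -- assumed values for illustration.
LATENCY_BUDGETS_MS = {
    "voice-agent": 200,           # real-time, latency sensitive
    "batch-summarization": 5000,  # offline, latency tolerant
}

def round_trip_ms(url: str, samples: int = 5) -> float:
    """Average round-trip time of a small GET request, in ms (inf if unreachable)."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                resp.read()
        except (URLError, OSError):
            return float("inf")
        total += (time.perf_counter() - start) * 1000
    return total / samples

if __name__ == "__main__":
    measured = {name: round_trip_ms(url) for name, url in ENDPOINTS.items()}
    for workload, budget in LATENCY_BUDGETS_MS.items():
        viable = [name for name, ms in measured.items() if ms <= budget]
        print(f"{workload}: budget {budget} ms -> viable endpoints: {viable or 'none'}")
```

A real-time voice or agent workload needs every leg of the interaction loop, not just the network hop, to fit inside its budget, which is why the transcript treats distribution as a design-time decision rather than an optimization applied after the fact.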
Video description
AI is resetting user expectations — and the infrastructure decisions enterprises make in 2026 will determine who keeps up. Danielle Cook, Senior Manager at Akamai, shares her bold predictions for the year ahead: why distributed cloud is becoming the default architecture, how Kubernetes is finally finding its defining workload in AI inference, and why centralized systems will buckle under the weight of real-time AI interactions. Danielle also breaks down the biggest challenges organizations will face — from centralized architecture bottlenecks to the operational complexity of agentic systems — and the opportunities leaders can't afford to miss. Read the full story at www.tfir.io #AI #Kubernetes #AkamaiCloud #DistributedCloud #AIInference #CloudNative #DevOps #PlatformEngineering #AIInfrastructure #2026Predictions