Analysis Summary
Worth Noting
Positive elements
- This video provides a practical breakdown of the governance and cost challenges associated with moving AI agents from experimental 'playgrounds' into regulated enterprise environments.
Be Aware
Cautionary elements
- The 'market research' cited is derived from the guest's own sales qualifying questions, which naturally aligns the 'findings' with the guest's commercial solutions.
About this analysis
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
I run a show called The Agentic Enterprise, so I am from the camp that believes agentic AI is no longer just an experiment in the lab. It is moving into production. I talk to so many folks, so many organizations, who are deploying it in production. We ourselves use it heavily; we have whole MCP servers and we are doing incredible work with it. Now, SOUTHWORKS did research surveying more than 625 cloud architects and decision makers, and the findings are pretty clear. Nearly everyone sees value in AI reasoning, decision optimization, and autonomous agents. But here is the challenge: most organizations don't have the internal capacity to scale these initiatives on their own. So how are enterprises actually executing on agentic AI today, and what's holding them back from doing it at scale? That's exactly what we are going to find out in this episode of The Agentic Enterprise with Johnny Halife, CTO at SOUTHWORKS. Johnny, it's great to have you on the show. >> Thank you very much for having me. >> It's my pleasure. This is the first time I'm talking to you, or to anyone from SOUTHWORKS. We'll talk about the survey report, but before that, let's quickly talk about SOUTHWORKS: a bit about the company, the space you folks operate in, and the problem you're solving for modern enterprises. >> SOUTHWORKS has been around 21 years. We started working before the cloud, before everything else. Throughout that journey we went from helping hyperscalers (that's how the company started, working with Microsoft and the AWSs of this world) to helping their customers modernize their enterprise technologies: move from on-prem to cloud, from cloud to cloud, go to the cloud-native stack across clouds, and get to that point of resiliency on hardware they don't own anymore. From there, over the last three to five years, we've been working in the AI space.
I think everybody started paying attention when ChatGPT exploded, but before that we were already playing with the models, trying to figure out what to do, and over the last couple of years we have been helping companies surf this wave of AI and everything it entails. >> Thank you so much for sharing that journey. You folks recently came out with research on agentic AI, the stage from experimentation to active execution. First of all, talk a bit about the idea behind this research. Is it the first time you've done it, or do you do it every year? Let's start with the basics of the research. >> We help a lot of companies of all shapes and sizes, in different geos and different industries. But we always had this question: are the companies that come to us with a specific problem outliers, or is this something happening across the market? So we decided to partner with theCUBE to run this research across 600-plus architects and figure out the state of the cloud, the state of AI, what they are looking to do, and how they plan on delivering on that roadmap. That's how this research came to be. And, and this is the interesting part, all the questions that shape the actual report are based on the onboarding questionnaire we run with our customers. Every time somebody comes along and says, "Hey, we want to modernize our stack," or "We want to move from cloud A to cloud B," or "We want to go cross-cloud," we share a questionnaire with them. So when I was talking with one of the analysts and researchers, we thought: how about we run this with the wider market, with those who don't know us and might not have come to us for help, and understand the overall state of the market?
So that is how it came to be, and of course agentic adoption, sourcing, and scaling those endeavors was front and center, because it's everywhere. >> Excellent, thank you. Now let's talk about some of the findings. In general, we understand that almost every company is excited about AI. They want proofs of concept, but most of them don't materialize into production. Yet your report shows a lot of experimentation moving to active execution. So talk a bit about that contrast. Does it depend on the industry? Some industries are very mature. If you look at your camera (you cannot see it here, but it has machine learning; it's an almost 12-year-old camera, yet it tracks my face), AI has been around for a very long time. OpenAI just rekindled the interest in it. So there are two different things we hear: that proofs of concept are failing, but your report says experimentation is moving to active execution. And you also said the questions weren't written for the survey itself; they're the questions customers and partners actually get asked. So, based on your experience, what is driving this shift, and what does execution actually look like inside today's enterprise? Because execution could mean a lot of things: it could be "hey, we are experimenting with it" versus something end customers are actually using right now. >> It's a great question, and I think we live in a really messy space right now, because from what we saw, you can look at it from three different angles. The first one is a great push for automation that is not necessarily tied to AI.
That's something we saw. As you said, OpenAI and ChatGPT rekindled that love for automation and machine learning, but not all of it is LLM-based or generative or anything like that. Everybody's trying to automate everything, asking how they can do it themselves, and when you look behind the scenes, it's a bunch of APIs and orchestration. So that requires a lot of, I would say, coaching of our customers: "Hey, this might not be the right case for agentic AI; it's more of an automation thing, but it's good that you're embarking on that journey." The other thing, and that's where most people say they are adopting AI, is the personal-productivity kind of usage: GitHub Copilot if you're a developer, or M365 Copilot or Claude Code or those sorts of things that aid the personal use of your computer and save time on tasks. And then there is the real implementation of agentic, autonomous AI, in which you have data that is being inferenced upon, or reasoned upon if you're not that technical, trying to unlock new insights and figure out new ways of doing things, running under human supervision but with more autonomy. Within those three buckets it's all different, and I think some standardization needs to happen. For the cases that don't need AI at all, like those integrations for automation: if you go down that path and say, "Sure, we'll use OpenAI agents in the cloud and create an MCP server and all those things," it might end up too expensive to run, and people will kill it and won't take it into production, because it's not a million-dollar problem. The personal-productivity side is great, but it comes with a lot of governance challenges, right?
Like, who's paying for that ChatGPT or that Copilot? Is it you? Is it corporate? Are you running on a free trial? Are you training publicly available commercial models with PII, with information that should never leave your enterprise? So that's one of the places where we help customers move forward and try to standardize the productivity side of it. And then there is the implementation of the real agentic workflows that go with a mandate and start asking for permissions, where you treat agents as you treat users: you generate identities, and you try to figure out this new world of having autonomous pieces of software running that need to access your data. How clean is that data? How secure? How do you put guardrails on that? I think that's the number one roadblock, whether it's the personal productivity or the full agentic, autonomous take (OpenAI or Claude; it might have different names by the time we release this). At the end of the day it's: okay, how secure are we? Which information is leaving our premises, if any, and how? Who's going to control that? Who's going to own that information? Are we putting the proper roadblocks in place when we give agents autonomy to go do things? What are the containment nets? Who is overseeing that behavior? And that is the most exciting thing about it, the "it's almost sentient" side, versus the scary side: it can go send emails and do stuff.
So it's about figuring out the proper governance, calculating your risk appetite ("this is how far I think we should go at this point"), and having that framework. When I say framework, people think of the .NET Framework or a framework in the traditional code sense; this is more a set of practices and processes for governance and maintenance that you can use to keep control and keep tabs on your agents. That's the next frontier for all this to start scaling, because the individual user is super excited. We all go on X and read all the amazing things these agents and agentic frameworks are doing, but everybody's scared: what if it starts getting into the super confidential stuff, or the PII, or the things that will get us in trouble? From what we've learned in our report and the survey, that's the number one roadblock: from having an enterprise gateway for accessing the models, to running small self-hosted models, to dealing with PII, who owns the data, what goes where. Those are the questions that remain unasked, because these endeavors tend to be individual. People get really excited, they go online to Anthropic, Perplexity, ChatGPT, Grok, and they throw a document in. What happens with that? Where is that document going? Who owns it once it's past your desktop and uploaded onto the internet? Will we have any potential privacy problems? Those are the roadblocks. That's the number one thing when it comes to streamlining those pilots into actual production environments.
The other thing is cost, whether it's specific user licensing costs or usage. We have run a lot of change-management projects for developers to figure out whether it makes sense in the current state of their organization: where they are using these copilots, how they are using them, and whether there is anything we can do to help them use them well, because there's a price tag on it. And the last thing is: is your use case really the best use case for an LLM or a copilot? We've seen many cases in which, because there is no enterprise access layer, no mandate to organize the data and put these frameworks in place, and no proper services, people end up sharing reports and then burning a lot of tokens on computer vision to turn something back into database format that was already structured data before it became a report. So all that individual intent, from the information workers and developers on the front line working with these models, needs to bubble up to management, and we need the proper frameworks for governance, risk, and so on in place for this to turn into a real thing. Otherwise it's the Wild West. >> Your findings also suggest that most enterprises don't have the internal capacity to build agentic AI solutions in-house. How do you see the balance between platform vendors, consulting services, and internal teams evolving over the next few years? Because this is becoming an ecosystem; no one company will dominate the whole market. >> It's a great point, because it's the same with every technology, and we've been around long enough to have seen this before with the cloud, right?
When you have a lot of sysadmins dealing with your current infrastructure and you ask about the cloud, they'll say, "We have no time, we have no skill set, we need to learn this," but the transformation needs to happen way faster than that. We are at the same point now, and I think it's a combination of everything, because the platform vendors have a unique advantage: they can prepare these agents, these APIs, these MCP servers, anything ancillary to this ecosystem, from the inside out. That's a unique advantage because you can shape it to your own taste, and that's fantastic. But then there is the reality, and the reality happens when you ask the right questions: do we really need this process? How will this process evolve if there is a human in the loop instead of a human actor taking care of things? You need to rethink a lot, and that's where the consulting and IT services companies come into play: to ask the right questions and rethink the process, because it's really hard to stop the machine while you're running, and this requires taking two or three steps back and looking at the whole thing to make sure you're not trying to optimize something that doesn't even need to exist. So it's a combination of both. And we've been there early in the game with the agent-to-agent protocols and those sorts of things; that's the other frontier. You can build your own agent that does everything, but it will be too complex to maintain, like any other piece of software.
So why don't you bring the best from the platform vendors and combine it with your own unique advantage, which is your data, your access, your systems, and your specific domain knowledge of your industry? Make them talk, make them collaborate, put the proper guardrails in place, and tap into the best of breed for everything. That's a challenge most companies running their day-to-day operations are unable to take on without figuring out how not to stop everything, so it's a unique opportunity. Most of the respondents to our survey say they will rely on IT and consulting firms to implement these workflows, but most of them are also planning to source these agentic capabilities from the platform vendors. So you see there is white space there, because if they are going after both, somebody needs to stitch them together, and I think that's the opportunity we are after. >> Can you also talk about whether organizations are looking at long-term deployments, planning for the future, or at near-term, practical AI applications where they can start seeing ROI tomorrow? "We'll invest and see what things look like five years from now" versus now. And are there any use cases delivering the fastest time to value? If you can give some examples, that would be great too. Two questions mixed together. >> Definitely. This is a great moment, because when ChatGPT came along, everybody said AI was going to eat the world, and now we hear over and over again, in every report and tweet, that we are living in an AI bubble. So the use cases swing between "we want to build a future ten years from now" and "we need proof today."
So there are specific use cases, whether in automation, troubleshooting, or assisting with decision making and finding insights, that are mid-term to short-term, where you can prove that value. It's an agile approach organizations are taking: prove it can do the most basic stuff, then layer complexity, one aspiration on top of the other. In the IT space we've seen amazing results with troubleshooting. Think of it as running live software accessed by millions of users. You have a production incident, you have access to the logs, you have access to the source code. So why don't you connect an agent that can help you troubleshoot and maybe suggest a fix? Something that would have taken six hours now takes 15 minutes. That's impact, because it reduces the number of people you need on call, it changes your escalation process, and it puts the human in the loop. This is not deferring everything and later blaming the AI. Instead of having a human go through all the data trying to figure out what just happened, you get a root cause analysis, because the agent can read the logs and look at the source code and say, "Here is an issue, and this might be the fix." That's where the money is right now. That's a use case in which we are bringing efficiency to the human, putting the human in the loop, and delivering real business value, because we went from three to five hours of downtime to 30 minutes: we get the fix and roll it out, with the human reviewing the change and saying "yeah, that's it," or maybe tweaking something. So that's one place where AI is really shining: making sense out of data, going through a lot of data in a short amount of time, and suggesting the next best step.
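The core of the troubleshooting workflow described above, compressing noisy logs into a short summary of error signatures and implicated source locations that a human (or a model prompt) can act on, can be sketched without any LLM at all. This is a minimal, hypothetical illustration, not SOUTHWORKS' implementation; the regexes assume Python-style tracebacks, and all names here are made up for the example.

```python
import re
from collections import Counter

# Heuristics for Python-style logs (an assumption for this sketch):
# exception names like "ConnectionError" and traceback frame lines.
ERROR_RE = re.compile(r"\b([A-Z]\w*(?:Error|Exception))\b")
FRAME_RE = re.compile(r'File "([^"]+)", line (\d+)')

def triage(log_lines):
    """Compress a raw log into (error counts, implicated source locations).

    The point is the shape of the workflow: reduce hours of log reading
    to a compact, reviewable summary, then let a human (or an LLM with
    access to the source) reason about the likely fix.
    """
    errors = Counter()
    frames = Counter()
    for line in log_lines:
        for exc in ERROR_RE.findall(line):
            errors[exc] += 1
        for path, lineno in FRAME_RE.findall(line):
            frames[f"{path}:{lineno}"] += 1
    # Most frequent first, so the top entry is the place to start looking.
    return errors.most_common(), frames.most_common()
```

In a real agent, this summary plus the relevant source snippet would form the prompt for the model that proposes a fix; the human then reviews the suggested change before it ships, which is the human-in-the-loop step the interview emphasizes.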
I think that summarizes where enterprises get most excited and are willing to invest the most: the use cases where you can say, "I put this out, and here is my ROI." And this is not a bubble. We might be living in a bubble-ish moment, with crazy valuations and a lot of commentary around it, but this is an actual change. It's going to change the way we work, the way we interact with computers, and the efficiency of our business processes. The way we build that trust and confidence is by finding these quick wins, stacking one on top of the other, and rethinking things. What we found is that enterprises need a "show, don't tell" moment; it's in our DNA to say, "This sounds fantastic; show me the value, show me the money." Those use cases where AI shines are what give them the courage to start rethinking business processes, workflows, automation, and real autonomous agentic scenarios. But you need to start somewhere. >> A lot of organizations are using different tools and different models. Can you talk about how you see standardization, interoperability, and scalability, so that organizations are not, as we have seen in the past, reinventing the wheel? The main foundation, just like the Linux kernel or Kubernetes, is there; they build on top of that and add value on top of that. What are you seeing there? >> I see that the agentic AI foundations couldn't come at a better time, because enterprise readiness is super fragmented. The examples you gave are right on the spot, because Kubernetes set up the operating system for production-ready microservices, and Linux is the core; even the Microsoft Azure workloads, and so on, run on it.
I think it won't take 20 years as it did before, but something needs to change, and we are living through that time, because today everything is super fragmented. Most enterprise deployments are limited to individual departments or individual workers using publicly available tools where the cost of replacement is really high. They are not thought of as a system; they are thought of as a tool, but a tool without a replacement plan. We don't know the interface, or "this is the job it's doing, and if we eventually need to swap it out, this is how we do it." We are not thinking in those terms. It's spread across organizations without standardization, and that fragmentation creates challenges for scaling, consistency, usage, monitoring, and eventually better negotiation with these vendors, because when you want to consolidate, you need to be prepared. That's where the frameworks and the practices come into play. There's also the integration side, right? With the cloud-native foundation technologies, we talk about Kubernetes as a gold standard, and we figure out monitoring and all that based on that assumption. Then, whether you run SUSE or Canonical or Red Hat, and whether your container is using Java or .NET or whatever, we have some level of standardization from the outside, and we can treat those as building blocks we can replace. I think that's the next frontier that will drive standardization. It won't diminish the value of the platform vendors, because they have their unique secret sauce: the models and the integrations and everything they are working on. But as an enterprise, you need that Kubernetes moment. You need that Linux kernel moment. You need that "these are the things we won't be arguing about."
You just change the underlying implementation to better fit your business, to better fit your purpose, or because a specific platform vendor excels at a capability or skill. That's not the reality today. That's definitely not where we are, but it's where we should be going, and I think that's exciting too, because with A2A and everything Google has done on that front, we are starting to see the light. For a long time, all these AI and machine learning spaces were clustered and controlled within the platform vendors; everybody was secretive and not talking about what they were sharing. Now, as a community, as an industry, we are talking about how we need to play better with each other. We need to understand the landscape, and if enterprises are really going to adopt all this, we need to get to that point. We need to get to the state in which we all trust Kubernetes: it's fantastic, there are many flavors of it, there are many vendors that provide it, but the basics and the fundamentals are there. And I hope we reach that in the short term, and it doesn't take us another 10 years to establish it. >> As these systems get deeper into organizations' infrastructure and become more and more autonomous, governance is also going to become a very important topic. And this is global: different countries have their own privacy regimes.
Europe is all about GDPR, then there's Asia, and of course North America; every country is different. How mature is AI adoption governance? >> So, the tools are there. I don't think the current implementation state, with this fragmentation and these per-business-unit or per-individual efforts, is driving them. There are a lot of things to control, and there are a lot of design patterns we can learn from the past: the least-privilege model, having the service principal ask for permissions when it needs them, the data residency requirements. All of those still apply. But as happened with the cloud, and as happened before with data processing pipelines and training machine learning models, you need a company-wide policy and commitment to standardize this, because a single individual user or business unit doesn't get to the point of asking, "Where is this data processed? Are we taking data across the border?" We all know people are doing that, but in the current state it's really hard to standardize at the level we need. I think it's a challenge. There are a lot of tools; AI has been born with safety in mind, because we are all scared of what it can do. So there are plenty of tools and mechanisms in place, but there are also a lot of patterns where we should not reinvent the wheel and should instead figure out how to translate them into the AI world: least-privilege access, need-to-know as if the agent were an employee, even data residency as it applies between Europe, the US, and Asia. But that requires commitment at an enterprise level and some sort of standardization, because otherwise it's really hard to manage. The tools are there, the mechanisms are there, and the patterns have been around for a long time.
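The "treat the agent like an employee" pattern mentioned above, a dedicated identity with an explicit, minimal scope list that every action checks against, can be sketched in a few lines. This is a generic illustration of least-privilege access, not any specific vendor's API; the class and scope names are invented for the example.

```python
class ScopeError(PermissionError):
    """Raised when an agent attempts an action outside its granted scopes."""

class AgentIdentity:
    """An agent gets its own identity, like a service principal,
    with the minimum set of permissions granted up front."""

    def __init__(self, name, scopes):
        self.name = name
        self.scopes = frozenset(scopes)  # least privilege: start minimal

    def check(self, scope):
        # Fail closed: anything not explicitly granted is denied.
        if scope not in self.scopes:
            raise ScopeError(f"{self.name} lacks scope {scope!r}")

# Each tool the agent can call gates on an explicit scope, so autonomy
# is bounded by what the enterprise consciously granted.
def read_customer_record(agent, record_id):
    agent.check("crm:read")
    return {"id": record_id, "status": "active"}

def send_email(agent, to, body):
    agent.check("mail:send")  # an ungranted action raises before any side effect
    return f"sent to {to}"
```

Widening the agent's reach then becomes an auditable governance decision (adding a scope to its identity) rather than an invisible side effect of a prompt, which is the guardrails-not-blockers balance the interview describes.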
It just takes the enterprise committing to it, starting to deploy it, and putting in place, as you said, guardrails that are not blockers, because we don't want to deter innovation, we want to foster it. Those guardrails make sure we do this in the least dangerous, most optimistic way we can. >> As we discussed, almost every company wants to embrace this. They are doing proofs of concept, which mostly fail to reach production. You folks have been around for a long time. You have seen the industry's failures; you have seen the mistakes and how those mistakes were corrected. You were not born three years ago in the era of AI; you have seen a much bigger world. So based on those lessons, whether it was Docker, Kubernetes, or the many other technologies that came and went, where we have seen folks get excited, invest heavily, then realize the ROI is not there and start course-correcting (they hire and then they fire, they embrace and then they ditch), we have learned we don't have to keep reinventing the wheel. So if I ask you, based on your experience and this survey: for proofs of concept to be more successful and reach production, what is the first step organizations should take, and what is the step they should not take, if they really want to succeed with agentic AI, or AI in general? >> That's a great question, and it's something I know by heart. When you look at these problems and you look at the technology as the solution, try not to fit your problem into the solution. Figure out what your problem is, what your current solution is, and what needs to carry over into the next generation.
We've seen this before, with people saying, "I own 150,000 blades in my own data center, so I will go ahead and get 150,000 VMs," and you'll probably be out of business in three months, because the TCO and the ROI of one technology are not the same as the other's. With AI it's the same thing. There are a lot of things in these business workflows that are inherent to us as human beings: the way we communicate, the way we reason on top of the information we have, and the different mediums and formats we use to share it. If you want to succeed, you need to design for the AI, whether through MCP, APIs, vectors, and those things. It's not about replacing a human by saying, "This is what I do; just do the same thing," because that will fail: the ROI won't be there, and it will be too expensive to run. You cannot translate linearly; this is exponential technology, as the cloud was. Before the cloud, we used to buy capacity just in case. With AI, people are trying to automate or carbon-copy workflows as they are today, and those will eventually become too expensive to maintain, you won't see the ROI, and, as you described, the hiring and firing will go on for a couple of years until we settle. So my advice is always: when you have an agentic problem in front of you, ask yourself, is this the best solution, or is this just what we know? You need to rethink processes and workflows for this new era, and understand the compute power of the AI models and their limitations, because there is no such thing as a do-it-all. You need to put the human in the loop, in the right place at the right moment. But do not replace the whole workflow as a carbon copy, "I had a guy doing this, now I have an agent doing that," because eventually it becomes too expensive to maintain and you won't see the ROI.
That's why most of these proofs of concept fail: they try to automate what they know, and what they know is going to change, because this is exponential technology that will revolutionize the way we work. It's a fantastic time to be alive. >> Johnny, thank you for joining and sharing these insights on how agentic AI is moving from the lab into real enterprise deployments, what it takes to scale these systems effectively, what mistakes organizations make, what the right steps are, and for presenting a positive picture of how enterprises are actually leveraging AI, generative AI, and agentic AI to further automate their systems. Thank you so much for your time today, and I look forward to our next conversation. >> Thank you very much for having me. I had a great time, and I look forward to doing this again soon. >> And for those watching: if you are facing similar challenges scaling or adopting agentic AI, or if you doubt whether this is a bubble about to burst and your investment will go to waste, don't listen only to the critics; listen to conversations like this one. Also check out SOUTHWORKS and their latest reports, and don't forget to subscribe to this channel, like this video, and share it with your team so we can bring more discussions like this to you. Thanks for watching.
Video description
Nearly all cloud decision-makers see value in agentic AI capabilities—from AI reasoning to fully autonomous agents. However, most organizations lack the internal capacity to scale these initiatives independently. Johnny Halife, CTO at SOUTHWORKS, breaks down findings from a survey of 625 cloud architects and explains what's driving the shift from experimentation to active execution. He discusses governance challenges, cost traps, and why enterprises need a "Kubernetes moment" for AI standardization. If you're deploying agentic AI—or planning to—this conversation reveals the real obstacles and how to overcome them. Read the full story at www.tfir.io #AgenticAI #EnterpriseAI #CloudNative #AIPlatforms #DevOps #AIGovernance #Automation #AutonomousAgents #CloudArchitecture #AIAdoption