bouncer

TFiR · 190 views · 5 likes

Analysis Summary

40% Low Influence

“Be aware that the 'inevitability' of general-purpose agents over specialized ones is framed to create a market necessity for the guest's specific middleware product.”

Transparency: Mostly Transparent
Primary technique

Performed authenticity

The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.

Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity

Human Detected
98%

Signals

The content is a long-form interview featuring multiple speakers with distinct, natural vocal patterns, including physical interruptions and spontaneous conversational rapport. The presence of filler words and specific, unscripted anecdotes confirms this is a genuine human discussion.

  • Natural Speech Disfluencies: Transcript includes '[clears throat]', 'um', 'uh', and self-corrections like 'I guess third foray into the...'
  • Conversational Dynamics: Speakers reference shared history ('Good to see you after so many years') and hand off the floor to one another naturally.
  • Contextual Nuance: Detailed explanation of specific business operations (Helium Mobile's 600,000 signups and 120,000 access points) delivered with non-linear phrasing.

Worth Noting

Positive elements

  • This video provides a practical case study of Helium Mobile's transition from rigid dashboards to natural language data querying.

Be Aware

Cautionary elements

  • The use of historical analogies (like the dot-com era) to frame a specific technical architecture as an evolutionary certainty rather than a business choice.

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed March 23, 2026 at 20:38 UTC · Model: google/gemini-3-flash-preview-20251217
Transcript

Today, companies are throwing billions of dollars at specialized AI agents. One for sales, one for support, one for data analysis and whatnot, and they are creating a fragmented mess that doesn't actually solve their problems. It actually makes them worse. In the meantime, they're missing the bigger opportunity: using general-purpose agents that can think across domains and can adapt to real business challenges. So why are companies getting AI adoption so wrong, and what should they be doing instead? That's exactly what we are going to find out in this episode of The Agentic Enterprise with Boris Renski, CEO at Apelogic, Mario Di Dio, GM of Networking at Helium Mobile, and Joey Padden, VP of Network Architecture at Helium Mobile. Boris, Mario, Joey, it's great to have you on the show.

>> Well, thank you for having us. It's great to be here. It's good to be here. Good to see you after so many years.

>> Yeah, it's true. It's been a while since we have spoken. Talk about what you are up to these days. Tell us a bit about Apelogic.

>> Yeah, so Apelogic is going to be my I guess third foray into the uh entrepreneurial world. Um, and what we're building is a system for bridging general-purpose um [clears throat] AI agents like Claude or like ChatGPT um to um large data warehouses and large data pools, such that anybody armed with a Claude can just go in and ask questions inside Claude about any type of company data and get instant answers, um, get the ability to chart those answers, plot those answers, things like that. And um [clears throat] the uh concept was actually born out of some of the problems that we were tackling um at Helium initially. So maybe I'll hand over to um Mario, and he can talk about Helium and the problem.

>> The easiest way to talk about Helium Mobile is that we are um a mobile plan operator in the US. We have more than 600,000 signups into our mobile service, so you can buy a phone plan from us, as well as we manage 120,000 access points uh in the US. Those access points are used by our own subscribers um of the Helium Mobile phone plan as well as by carriers like AT&T and others. So all this to say, without getting into details about Helium Mobile, our business is a very data-intensive business. As you might imagine, you have behavior on the user side that we want to track: where do they connect, how much data they use, and how they use it. As well as on the access point side, on the network side: what is the quality of this network, is the access point up or down, how much data has the access point transferred today. So, a lot of data, and traditionally we've been managing that through dashboards. Lots of dashboards, in an internal tool like Superset, which is essentially like a Tableau, if you want to think about it that way. But the reality is that what happens in all these cases is that every single business function has different needs on how to slice and dice the same data. Technical people like Joey wanted to see it a certain way, and he'll probably talk about his own specialty there, but there are BD people and sales people and marketing. So the end result was that it was very hard to fit all these needs into the Superset dashboards. So, living in the AI world, we're like, there must be a better way, and that's where the basic requirements for Apelogic started, and that's how this Boris venture started.
>> Just to add more color to it, um, it's been a little bit of a journey um working with Helium Mobile to figure out the correct um sort of uh structure for the product. Initially we figured that there's a better way than just, you know, Superset dashboards, and you have to be able to talk to data. So we went on a path that I think is very common among many, and we built kind of our own centralized data agent. Um, we built a relatively heavy harness around that agent, um, with all the details of what the databases look like, what the, you know, different business terms are, how it should follow a particular flow to query the database to give an answer, etc. And after playing with it for a little while, we realized that it doesn't actually work very well. It works okay, but all too often it doesn't give correct answers, it takes a long time, and the whole thing's pretty brittle. So, um, in parallel, we started experimenting with a different approach, where a lot of people uh within Helium Mobile already were using Claude for all kinds of things. And we said, "Okay, well, how about you use a super stripped-down version of our agent, where we'll give you basically an MCP connector into your Claude instance, and within that MCP connector there is already info about the database schema and um sort of like a dictionary of common business terms, which is commonly referred to as the semantic layer, and let Claude do the work." And what we've realized is that that approach is actually a lot more effective. Um, and it's gotten more and more effective over the last several months as Claude models have become more and more powerful. So with that in mind, um, we've decided to kind of pivot away from, you know, this buy-an-agent-that-will-interrogate-your-data type of business model to just focus on a connector from general-purpose agents to enterprise data.

>> You are making a very strong case for general-purpose agents over specialized ones. Why is that approach better? Why should companies use something like Claude directly instead of buying specialized agents?

>> I mean, I think that, um, there's a bit of a broader philosophical argument happening right now about how AI is going to get adopted in the enterprise, and based on, you know, my subjective experience um with uh customers like Helium Mobile, um, it feels like, you know, there is this pattern of everybody building specialized agents. And we were, you know, the first ones that were guilty of it, um, and potentially not doing enough around enabling people within the organization to actually use general-purpose agents. And you can see why that's happening, also because, you know, it's a very clean, understandable business model. Uh, there's a lot of companies that have raised money to basically build specialized agents, where they wrap um an LLM with some, you know, purpose-built functionality, and then they monetize the LLM wrapper, you know, effectively on a cost-plus basis. Um, but that resulted in, I think, you know, too many of these companies, and this is kind of similar to what we saw in the dot-com era, where, you know, every single thing needed to have its own website. Um, and we're seeing a lot of this right now with specialized agents. Um, I think over time there's going to be massive consolidation. There are only going to be a few specialized agents, and the predominant form factor of working with AI is going to be people actually using general-purpose agents like Claude that are connected to um, you know, enterprise data and enterprise tools, and that evolve into sort of like a personal assistant for each enterprise employee, because already now these types of agents can probably do like 95% of the stuff that the specialized agents can do. So, you know, the short of it is that I see pretty heavy overinvestment into specialized agents and not enough accent on general-purpose ones, and on making um folks within um a company effective using general-purpose agents.

>> Let's get technical for a moment. What's the fundamental difference between a specialized agent and a general-purpose agent when the base-level LLM is the same? What makes them different?

>> I think one thing that comes into play here is how the frontier labs approach training the, um, frontier LLMs that a lot of the specialized agents end up using. The agent ultimately is basically an LLM that is trained to, you know, certain patterns and a certain dataset, plus some memory and some harness around it. And I think that, um, general-purpose agents that are built by the frontier labs that also own the LLMs oftentimes perform better than the specialized ones, because you have more of a connected interplay between how the LLM was trained and how the harness was built. And you can see a little bit of that already peeking out as we start talking about, like, MCP versus CLI. Um, there's been a big debate around "MCP is dead", "no, it's not dead", etc. But where it's coming from is that, um, I think Anthropic spent a lot of cycles actually training the latest set of its uh um frontier LLMs to work with, prefer, and be biased towards CLI tools versus MCP. So if you give it a generic task and say, you know, "fetch data", all too often it will try and build something using, you know, CLI commands versus trying to search through the, you know, MCP data connectors, and we're already seeing this right now. So I think that this is going to continue accelerating, because um the frontier labs understand that the uh LLMs are, you know, commoditizing very quickly, and they need to use their unique LLM advantage to get themselves into the different use cases in the enterprise. So you will see an increasingly tighter interplay between the um LLM and how it's trained, and the harness and the use cases that the very same frontier lab is going after. And as such, I think that most people who are just building their own harnesses around frontier models will end up in a disadvantaged position. Not to mention just the general market dynamic of, like, you know, you don't really need a specialized agent for everything. Just like you don't need to have, you know, a pets.com specialized website for selling pet food. You know, you can go to Amazon and buy everything. The same thing is at play here. You can have one agent that an enterprise adopts, that it vets against its security posture. Everybody gets a Claude, and then Claude gets connected everywhere and becomes super capable at helping people across all different tasks within a company.

>> I'm kind of curious: if you are endorsing that companies can just go and use Claude directly with their databases, why do they need Apelogic at all?

>> So, um, what we basically give um Claude, you can think of it as like a purpose-built memory for interrogating data. Um, historically, in the data science world this is referred to as the semantic layer. But the thing is that, you know, we're tackling a particular problem, and that problem is: you have a general-purpose agent, assume it's infinitely smart, like, you know, the smartest data scientist that you have. Um, irrespective of how smart it is, or how well its harness has been built to interplay with the underlying LLM, it needs to have context about the data that it's querying. It needs to understand um what the database schema looks like. It needs to understand things like, in the case of Helium Mobile, what Helium Mobile's definition of a monthly active user is. Um, and those things are very specific to a company. They're not generic. [snorts] And moreover, in uh managing those things, you need to have some sort of interplay between a number of actors. So for instance, you know, Mario and Joey can be querying a database um using Claude, but neither they themselves um nor their Claude knows what the database schema looks like, because they're not responsible for building or maintaining it. Um, so you need to have this context layer in between, and you need to have a system whereby people querying it with different questions um are ultimately um continuously making this uh context better. So what Apelogic does is, let's say, you know, um, Mario is querying the database about the, you know, Wi-Fi quality of service. Um, initially, um, when you go to Claude and ask, like, "show me all the hotspots that have bad performance", Claude doesn't know what bad performance is, and it might make something up. Then Mario or Joey can go in and correct Claude and say, well, bad performance means Wi-Fi hotspots with a consistently poor signal uh within this threshold. Then what's going to happen is that Apelogic is going to remember it and propose it as a term to be reused. And then um you can have another party in play, who is sort of like the administrator of the semantic layer, who will see that, hey, there is a new insight, um, now we all know what quality of service means, should we save it, so that the next time a person asks about it, um, you know, Claude actually uses that same definition. So this kind of interconnect between the data source and, you know, general-purpose intelligence is something that just has to be there, irrespective of how smart the agent is or how good the LLM is. Not to mention it makes the queries a lot more token-efficient. So that's kind of what we're betting on, what we're building.

>> Now I would like to talk to you, Mario and Joey. Can you share what things looked like before Apelogic versus how they are now? What actually changed for you?

>> Well, and I also want to hear from Joey, also from a technical standpoint. But what I can tell you is that before something like Apelogic, the easiest way for us was to figure out a very long string of SQL with which we were going to query the database, and inevitably, I'll tell you pretty openly, each single team had a different or similar SQL query. So we would end up with different numbers many times, and then these queries were taking a long time to come back. So the problem is that everybody was making up the context that Boris was talking about. Everybody was making up that context in a way that would make sense, but if you're not the one who builds the database, it's very easy to create something that is erroneous. And so there was this constant back and forth about what is the right query, what is the right query, and then we literally took, like, a string of text, put it into an internal knowledge base, and said, this is the query. So once you start working with things like Apelogic, that changes, because then you can put it in the context, and everybody knows that when you're asking about bad performance, it is exactly this specific context. But maybe, I don't know, maybe Joey can add some more, as he has been even more heavily involved than me in SQL queries before Apelogic.

>> Before Apelogic, um, we had probably three people who were in charge of the SQL interaction, and the whole rest of the company, running all of the different user profiles, had to give a request to the SQL experts to get access to data. Um, with the advent of the Apelogic product, we can have people on the support team, we can have executives asking questions of the data. And me, as an engineering lead, I'm not worried about Claude not getting the question or the context correct and yet confidently giving an answer that goes to a customer or goes uh outside. It's really... you helped us um produce consistent [clears throat] outputs from our data.

>> And Joey touched on something that I think is actually key. Uh, usually, let's say you have a sales function, or customer success, or you have a content manager who writes content all the time. Whenever there was some need for getting a number, they were always, quote unquote, bothering the engineering side: can you give me this number, can you give me this number? And we were happy to do it, because you want to give it out with confidence, right, instead of them going and searching the, like, sea of dashboards that we had. So now that work is completely parallelized and streamlined, and we put out a couple of content pieces on the Helium Mobile network. We covered Mardi Gras in New Orleans, or we covered the Super Bowl effect, and all that data just came straight from that work, without, you know, the necessity to bother engineering to get that number for you, type of thing.

>> Can you [snorts] also talk about the impact on complexity and resource allocation? What about fragmentation and future-proofing, especially in a market where you don't even know which companies will survive?

>> I think that, um, I don't know if cost specifically is the answer I can give right now, because we're still in the middle of all of that. But what I can tell you is that what I just described in efficiency of cycle, that is definitely there. So it's not necessarily, like, a straight-up cost saving; it's not like we are, you know, letting go of people because of that. But the ability for people to, you know, switch context, the ability to focus, and the ability for, like, a sales representative to hit the customers in their pipeline faster, returning with their data, like, twice or three times as fast as they used to. That efficiency will turn into, like, more revenue, or will turn, as we already know, into more customer engagement and satisfaction, which eventually will turn into revenue. So I don't know about cost saving as a thing necessarily, but efficiency of cycle, absolutely.
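The architecture described above, a thin connector that hands a general-purpose agent the database schema plus a glossary of company-specific terms and a query tool, can be pictured concretely. Below is a minimal sketch written against the open-source MCP Python SDK's FastMCP helper; the table source, glossary entries, and thresholds are invented for illustration and are not Apelogic's actual implementation, which is not public.

```python
# Minimal sketch of an MCP "data gateway": exposes a database schema, a
# semantic layer (business-term glossary), and a read-only query tool to a
# general-purpose agent such as Claude. Names and values are hypothetical.
import json
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("data-gateway")

# The "semantic layer": company-specific definitions the agent cannot
# guess on its own (e.g., what *this* company means by "bad performance").
SEMANTIC_LAYER = {
    "monthly_active_user": "Subscriber with at least one network session "
                           "in the last 30 days.",
    "bad_performance": "Hotspot whose mean signal strength stayed below "
                       "-85 dBm for 7 consecutive days.",
}

@mcp.resource("schema://warehouse")
def get_schema() -> str:
    """Describe the warehouse tables so the agent can write correct SQL."""
    conn = sqlite3.connect("warehouse.db")
    rows = conn.execute(
        "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    conn.close()
    return "\n".join(ddl for _, ddl in rows if ddl)

@mcp.resource("glossary://terms")
def get_glossary() -> str:
    """Expose the semantic layer as a resource the agent reads up front."""
    return json.dumps(SEMANTIC_LAYER, indent=2)

@mcp.tool()
def run_query(sql: str) -> str:
    """Run a read-only SQL query and return up to 200 rows as JSON."""
    if not sql.lstrip().lower().startswith("select"):
        return "Rejected: only SELECT statements are allowed."
    conn = sqlite3.connect("warehouse.db")
    try:
        return json.dumps(conn.execute(sql).fetchmany(200), default=str)
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()  # serve the connector over stdio so the agent can attach
```

The division of labor matches what the interview describes: the agent supplies the intelligence, while the connector supplies the schema and the company's vocabulary, and keeps queries read-only.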
>> Well, one other item on [clears throat] cost: all of our data lives in AWS, either in, like, an S3 bucket or a database. What we saw previous to Apelogic is that the marketing team would request its own database that has data X and is structured as Y. Then the engineering team got its own databases. And then the finance team had its own databases, specifically created to answer the questions that each of those users asks regularly. And the architecture that we're moving to at the moment puts all of that data in one place and then uses the Apelogic interface to expose it to each of these different users. And getting rid of that replication of data is going to save us money every month on our AWS bill.

>> But based on your experience, what are some of the objections that you have heard from customers about using general-purpose agents? Are they against the idea, or are they in favor of it?

>> Well, I think that, surprisingly... I mean, again, we're still new, so I don't know if I can speak to, like, macro trends based on the personal experience that I've had, but actually, general-purpose agent adoption seems to be a little bit lighter on friction. Because the biggest pushback that we have seen, working with Helium Mobile and other um early-access users that we have, is the concern about um, you know, putting the data out there to some LLM. And when you're dealing with specialized agents, like if we were to continue on the path where Apelogic is the specialized agent that you install centrally on a server and we run the LLM, where we're basically the wrapper around the LLM, um [clears throat] you have to explain exactly, you know, what LLM you're using, where the data is running, like, what you are handing off, what the security procedure is, etc. But, um, you know, most companies already have Claude in some capacity, and, you know, they have already figured out how to align Claude with their security posture. So, um, once we've switched to just, you know, "Hey, we're not a specialized agent, we are just a Claude, you know, augmentation layer", that actually eliminated quite a bit of friction from the customer conversations.

>> How does a company actually get started with Apelogic? They already have their investments. They have existing setups, MCP servers. What does day one look like?

>> Um, yeah. So, it's been a progression, again. Um, I'll share, and, you know, uh, Mario, Joey, feel free to chime in. So, as I've mentioned, because, you know, what we're building largely has been [sighs] born out of the Helium Mobile use case, the adoption trajectory there has been, you know, atypical. We had the specialized agent first, then, you know, we basically built this Claude augmentation, the MCP data gateway, as we call it now, um, that we've basically handed out to people, um, and that's where it is. But for everybody else, and we have um a number of other places where we're basically doing kind of early-access trials with customers, the path is as follows. We um have, you know, a downloadable binary um that's available on the Apelogic site, and you can start in kind of an individual-use mode. You literally download it. It installs, like, a small desktop-based um MCP connector. Um, if you already have a connection to the database, um, it will reuse your existing credentials and your scoped kind of security permissions. Um [clears throat] Claude will then be able to go through those and, you know, talk to data. As it talks to data, it will start automatically accumulating the semantic layer locally, and this, you know, you can start for free. Anybody who's talking to data right now and wants to try doing that using Claude can do it. Just go to the website and download the binary. And then, if and when you like the outcomes, um, at some point it usually becomes, like, a team exercise. So let's say, you know, Joey started, then Mario wants to also do it, and then somebody else wants to join in, and then the database administrator actually wants to start seeing, you know, the observability around it and starts actually curating the semantic layer. At that point, it's possible to switch to, like, a teams mode, where basically the semantic layer is no longer stored on the individual laptop of a person um who's querying the data, but gets pushed to um a secure git repository that, you know, whoever is responsible for the semantic layer in a company uh will specify, and it will get continuously updated um as people interact with data through Claude.

>> Joey, Mario, can you uh share, from your perspective inside Helium, what working with Apelogic looks like for you?

>> Yeah, we were introduced to Apelogic because, as we said earlier, it's a data-intensive business, and our data has been growing massively over the last couple of years. We know that there are useful insights in it, but getting them out was becoming more and more work. And we started working with Boris and his team, and as he laid it out, I mean, we started out as individual users, trading semantic hints in chat to update each other's individual setups. And at a certain point, it made sense for us to centralize all of that, to give each of the other users the learnings generated by everyone.

>> There's a lot of talk about AI disrupting the SaaS world. Do you think business intelligence tools are going to go away?

>> Well, I think that right now we're at, like, a peak-hype SaaS-apocalypse stage, you know; like, a good time to buy SaaS stocks. I am, you know, kind of, you know, posting on Twitter and LinkedIn about this, and uh I'm consistently getting comments from people like, "Oh, CLI is the only interface. That's the best interface. That's the only thing that's going to be there." I mean, I don't agree with that. I think that even within Helium Mobile it was pretty obvious pretty quickly that, you know, a chatbot interface is pretty limiting. I do think that there's going to be quite a bit of dramatic change to, you know, what SaaS is and how the interfaces are built. I think the future is going to be kind of a combination of a chatbot interface and rich UX interfaces that are dynamically composed by, you know, things like MCP UI, which has been gaining traction already. Um, like, even interacting with data, um, you don't necessarily just want, like, an ASCII table that's printed out by, you know, Claude into the chat interface, right? You want to see a graph, or you want to see a pretty table that you can interact with, etc. But those tables will need to be, like, generated on demand. So, um, I don't think that SaaS is going away. I think that, you know, the interface will have to change. It's going to become dynamically composable. Um, there's definitely opportunity for new companies to kind of start pushing towards that. And there's also an opportunity for the incumbents to realize that and then start evolving in that direction.
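The onboarding Boris lays out, an individual mode that accumulates a local semantic layer and a teams mode where approved definitions live in a shared git repository, implies a propose/approve/sync lifecycle for terms. Here is a hypothetical sketch of that lifecycle; the file layout, function names, and git workflow are invented for illustration (assuming a cloned repository at semantic-layer/), not Apelogic's code.

```python
# Hypothetical propose -> approve -> sync lifecycle for a shared semantic
# layer, mirroring the flow described in the interview. Layout is invented.
import json
import subprocess
from pathlib import Path

REPO = Path("semantic-layer")          # clone of the team's git repo
PROPOSED = REPO / "proposed.json"      # terms awaiting admin review
APPROVED = REPO / "approved.json"      # terms every agent session loads

def _load(path: Path) -> dict:
    return json.loads(path.read_text()) if path.exists() else {}

def propose_term(name: str, definition: str) -> None:
    """Called when a user corrects the agent; queue the term for review."""
    terms = _load(PROPOSED)
    terms[name] = definition
    PROPOSED.write_text(json.dumps(terms, indent=2))

def approve_term(name: str) -> None:
    """Admin promotes a proposed term into the shared, approved glossary."""
    proposed, approved = _load(PROPOSED), _load(APPROVED)
    approved[name] = proposed.pop(name)
    PROPOSED.write_text(json.dumps(proposed, indent=2))
    APPROVED.write_text(json.dumps(approved, indent=2))
    # Push so every teammate's connector picks up the new definition.
    subprocess.run(["git", "-C", str(REPO), "add", "-A"], check=True)
    subprocess.run(["git", "-C", str(REPO), "commit", "-m",
                    f"Approve term: {name}"], check=True)
    subprocess.run(["git", "-C", str(REPO), "push"], check=True)

def sync_glossary() -> dict:
    """Each session starts by pulling the latest approved definitions."""
    subprocess.run(["git", "-C", str(REPO), "pull", "--ff-only"], check=True)
    return _load(APPROVED)

if __name__ == "__main__":
    # Example: a user corrects the agent's guess; an admin approves it.
    propose_term("bad_performance",
                 "Wi-Fi hotspot with a consistently poor signal "
                 "below the agreed threshold.")
    approve_term("bad_performance")
```

In this shape the administrator's approval is just a reviewed commit, which is one plausible reading of why a "secure git repository" is a natural home for the semantic layer.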
>> Yeah, that's the way I see it too, because right now everybody is kind of realizing this; it's the dream of every CEO, in terms of, like, oh, I can get rid of all the tools and just do this. And I think that when the dust settles, the end state is always going to land in the middle. We'll probably see some startups, sure, maybe smaller companies, that might not go all the way with a full CRM system, and they will use systems like this because they are faster, they're agile, they're cheaper. But when it comes to bigger enterprises, I don't think a CLI-only type of uh interface will be it; I agree with Boris. And I think that in the longer term we'll probably see the big guys start, you know, doing M&A and doing consolidation around the industry, to go faster and build this within their own tooling. So I think that's usually how it goes for any type of big uh disruption in the markets at large; we've consistently seen that. I think, [snorts] like, EVs... imagine what electric vehicles were and what they are today for car manufacturers. Imagine what Ford's stance was when EVs started and what Ford's stance on EVs is today, and you realize that's usually how the dust settles.

>> What is the long-term vision of Apelogic? Where does this go in two to three years, in such a volatile market?

>> I think that, um, as software in general becomes easier and easier to build, um, there are very few moats that still remain. There are very few things that um an enterprise, you know, will own and will pay for. Um, one of those is, you know, the data, obviously, um, which is, you know, stored in enterprise databases, and the other is just, you know, what I refer to as the enterprise context. Um, so, you know, a very simple example is Helium's definition of a monthly active user; that is a Helium thing that will evolve with Helium, that Helium will always want to own and control. So my vision of the future is that, you know, the enterprise will have its quantitative data in databases. It'll have its organizational context, or organizational memory, keeping, you know, what is a monthly active user, and what is quality of service, and what is the workflow to do this and that. Um, and the software becomes sort of like, you know, this liquid in the vessel that's built of data and context, and um we want to be, you know, the context part of the vessel. So right now we're focusing on probably the hardest place, where the context is most important, which is around querying the databases. But in general, longer term, I can see us evolving into more of, like, an enterprise memory layer for operating the company in the, you know, agentic world.

>> For the C-level executives and data engineers who are watching the show, what's the one thing they should be doing differently right now when it comes to AI investment?

>> My main advice is to stop buying specialized agents and start focusing on using general-purpose ones, and on making the general-purpose agents more effective within your organization. So this is for all the C-level executives and for, you know, the data scientists. Instead of buying, you know, the 105th, you know, specialized agentic data science tool, just use Claude, um, or, you know, whatever you choose um as, you know, your general-purpose agent of choice, and focus on connecting it to the tools and data within the organization.
>> Boris, Mario, Joey, thank you so much for joining me and sharing these insights on why general-purpose agents are a smarter bet for enterprises than specialized agents. Thanks for those great insights, and I look forward to our next conversation.

>> Thank you.

>> And for those watching, if you are rethinking your AI strategy, check out Apelogic AI, and don't forget to subscribe to TFiR, like this video, and share it with your team. Thanks for watching.

Video description

Companies are throwing billions at specialized AI agents—one for sales, one for support, one for data analysis—creating a fragmented mess that makes problems worse, not better. They're missing the bigger opportunity: general-purpose agents like Claude that can think across domains and adapt to real business challenges. In this episode of The Agentic Enterprise, Boris Renski, CEO of Apelogic, explains why enterprises are getting AI adoption so wrong. Joined by Mario Di Dio, GM of Networking at Helium Mobile, and Joey Padden, VP of Network Architecture at Helium Mobile, they share real-world experience transitioning from specialized agents to a general-purpose approach—and the dramatic efficiency gains that followed. Boris argues that the over-investment in specialized agents mirrors the Dot-com era mistake of building specialized websites for everything. Just as Amazon consolidated e-commerce, general-purpose agents will consolidate enterprise AI use cases. The key? Connecting those agents to enterprise data and context through tools like Apelogic's semantic layer. Read the full story at www.tfir.io #AI #ArtificialIntelligence #AgenticAI #EnterpriseAI #CloudComputing #DataScience #GenerativeAI #MachineLearning #DigitalTransformation #TechInnovation
