Analysis Summary
Ask yourself: “Did I notice what this video wanted from me, and did I decide freely to say yes?”
Intensity amplification
Inflating the importance, drama, or shock value of information using superlatives, alarming framing, and emotional language. Once your alarm system activates, you stop evaluating proportionality.
Cultivation theory (Gerbner, 1969); availability heuristic (Tversky & Kahneman, 1973)
Worth Noting
Positive elements
- This video provides a helpful side-by-side comparison of how different LLMs handle dynamic code execution for data visualization.
Be Aware
Cautionary elements
- The seamless blending of 'AI News' with a sponsored tutorial for the host's own software tools (OpenClaw) can make a commercial pitch feel like neutral industry analysis.
Influence Dimensions
Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.
This analysis is a tool for your own thinking — what you do with it is up to you.
Transcript
This is one of those weeks where we got a ton of really, really fun updates, like some really cool stuff that I'm excited to show off. Let's not waste any time. Let's get straight into it.

Let's start with this new feature from Claude where it now allows you to create interactive charts, diagrams, and visualizations. One of the examples they show is this interactive periodic table where they can actually click over the different elements here, and then down at the bottom, it gives additional details about that element. They also created this visualizer that shows how different loads affect things like roofs. But I just want to try this out. I want to see what kind of visualizations I can get it to make. So for example, show me how compound interest works. It gives me a quick explainer here, and then it starts generating different sliders. And we can see here we've got a starting amount of 10,000, annual interest rate of 7% over 20 years. If you don't add any additional principal to it, after 20 years that 10,000 becomes 38,697. But let's say we add $1,000 a month to it. After 20 years, that $10,000 principal becomes $6.3 million. What about over 30 years? We can drag this slider and in real time we see it updating. What if our interest rate was closer to 10%? And we can just play around with sliders and see how this changes things over time. This is pretty wild. So, pretty cool visualization.

Let's do another one. Let's have it create an interactive timeline of major AI model releases from 2018 to 2026. Now, this isn't instant. It does take a good minute or so, but eventually it does show us a timeline. So, we can see 2018 all the way up through 2026 here. And obviously, as the years go on, there's more and more big releases that happen. But it also created some filters. Let's say we just want to know about image models. I can select this and it filters it down to just the milestones in image generation, or just the milestones in code generation.

Create an interactive map showing where the top AI companies are located. Interesting. I don't know how accurate this map is. What's up with these shapes? Does it not know what the world looks like? But I can hover over these different areas and it will tell me where things are: Apple Intelligence, Inflection AI, Databricks, Runway. Apparently, Seattle's up here. So, I guess it's not always perfect, but I mean, it's an interactive map.

All right, last one. Visualize how neural networks learn using an interactive diagram. We can see it building out our diagram here. All right. So, it gave us a very basic diagram of a neural network, and then we've got a little slider. Let's see what happens if we move our slider. It does not appear to be doing much. And I think we broke it. All right. So, you know, it'll make some visuals, but it's still got a little ways to go. Our first couple visuals were okay, though. Like, it's really good at visualizing compound interest and creating timelines of things like model releases. It just kind of sucks at interactive maps and showing off how neural networks work. But on the plus side, this is actually available to everybody. If we scroll down here: "Try it today. This feature is available on all plan types." You should even be able to use it if you're using Claude for free.
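Before moving on, a quick sanity check on those compound-interest numbers: the no-contribution figure is plain annual compounding, FV = P(1 + r)^n, and 10,000 × 1.07^20 does come out to $38,697. Here's a minimal Python sketch of the same calculation with an optional monthly contribution (my own illustration of the math, not the code Claude generated):

```python
def future_value(principal: float, annual_rate: float, years: int,
                 monthly_add: float = 0.0) -> float:
    """Compound annually, matching the demo's $38,697 baseline;
    contributions are pooled and credited at each year's end."""
    balance = principal
    for _ in range(years):
        balance = balance * (1 + annual_rate) + 12 * monthly_add
    return balance

print(f"${future_value(10_000, 0.07, 20):,.0f}")          # $38,697
print(f"${future_value(10_000, 0.07, 20, 1_000):,.0f}")   # $530,643 with $1,000/month
```

Under these annual-compounding assumptions, the $1,000-a-month scenario lands around half a million dollars; the exact total depends on how often the tool compounds and when contributions are credited, which is exactly the kind of assumption the sliders hide.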
What's interesting is that OpenAI also released a very, very similar feature. They actually released it a couple days earlier, on March 10th, where the Anthropic one came out on March 12th. We can see in their demo video there's this explanation of the ideal gas law where they have sliders. Looks very similar to Claude, but this one is apparently animated, so that's cool. Here's another one: different geometric shapes. Very, very similar idea: understanding concepts through interactive visuals.

In fact, they have some actual demos on their website. Like, here's a mirror equation. I'm not very smart when it comes to math, so I don't actually know what all this stands for. But if I drag these around, it changes the visuals of our mirror here. I do know the Pythagorean theorem, though: a^2 + b^2 = c^2 to find the length of the sides of a triangle. And we can see if we adjust this, it adjusts the different lengths of the triangle, helping us find the length of the c side here on our triangle. And then we have the ideal gas law, which is the one they were just showing in their demo a second ago. I know nothing about this, but we can adjust these sliders here. So, this apparently adjusts the volume, I'm guessing. And then you can adjust the n value, which seems to adjust the amount of the little gas molecules. And then you can adjust the T value, which seems to adjust the sort of speed that they bounce around at. It didn't seem like the Anthropic version of this same kind of thing did the actual animations. And also, this one does say interactive learning is rolling out starting today to all logged-in ChatGPT users.

So, let's head over to ChatGPT here and give it a prompt. Let's give it a similar one to Claude: show me how compound interest grows over time. And one thing I will note is that this is way faster than Claude. Like, this one was almost instant, where Claude actually took a good minute, minute and a half to generate. So our principal value, we're starting at 1,000 it looks like. Let's jump it all the way up to 10,000. Our interest rate, let's bring it down. And then we'll bring this to 15. And so, $10,000 at a 3% interest rate over 15 years. In 15 years, we'd have 15,000. Now, let's add a slider for monthly additions to the principal. Where's that principal? Oh, I spelled "principal" the wrong way. It knew what I was talking about. It's okay. For some reason, this time around, it's actually showing it write a bunch of code. Interesting. So, it's saying it can't render a custom slider-based chart inline in the chat the same way that built-in math visual did. Okay.

So, I asked ChatGPT: are these pre-built coded visuals? I can't get ChatGPT to create new custom visuals. Is there only a specific set of visuals that can be created? Short answer: yes. Right now, it's mostly a fixed set of pre-built interactive visual models. ChatGPT isn't dynamically coding brand new visual simulations for every prompt yet. It's matching your questions to a library of supported concepts and then loading the pre-built interactive visualizations. That's why that first one was so instant when I asked about it. The Anthropic version was clearly building it from scratch, which is why it took so much time, and which is also why some of the visuals kind of ended up sucking a little bit. But here's some examples of supported categories: Pythagorean theorem, slope intercept form, circle area, difference of squares. I'm not going to read them all out, but you can go ahead and pause the screen if you'd like.

Let's have it do Ohm's law. Show me Ohm's law. So yeah, this is one of these pre-built visuals. Again, it was pretty quick because it's obviously pre-built and, like, cached. So we can adjust these sliders and see how it impacts the animation. And it's definitely more curated than what we saw out of Anthropic. Still pretty cool nonetheless. Like, if you need help with math or science-y topics, there's a bunch of cool built-in visuals here. And if you can't get the exact visual you're looking for, well, maybe jump over to Anthropic and that one will build it for you.
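For what it's worth, the two pre-built visuals shown above sit on top of one-line formulas, which is part of why they render so quickly. A small illustrative sketch (my own code, not OpenAI's):

```python
import math

# Pythagorean theorem: c^2 = a^2 + b^2, solved for the hypotenuse.
def hypotenuse(a: float, b: float) -> float:
    return math.sqrt(a**2 + b**2)

# Ideal gas law PV = nRT, solved for pressure.
# n in moles, t in kelvin, v in cubic meters; R is the gas constant in J/(mol*K).
def gas_pressure(n: float, t: float, v: float, R: float = 8.314) -> float:
    return n * R * t / v

print(hypotenuse(3, 4))             # 5.0
print(gas_pressure(1, 300, 0.025))  # ~99,768 Pa, roughly 1 atm
```

The demo's n and T sliders map directly onto those parameters: more moles or a higher temperature means more, faster-moving molecules, and therefore higher pressure at a fixed volume.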
Now, one thing that's become really clear from all the AI news recently is that AI agents are about to be everywhere. But if you want to run one yourself, the top two biggest barriers are cost and security. Not everyone can just buy a Mac Mini to run their OpenClaw agent on. So a more affordable option is to use a platform called Hostinger, where you can spin up a VPS. So you can still run OpenClaw securely, but for way cheaper. You just go to hostinger.com/mattopenclaw, choose a plan, use my code MATTWOLFE to save some extra money, and click deploy. OpenClaw is pre-installed through their Docker template, so it auto-deploys in minutes. After checkout, you set your environment variables, confirm, and then it launches. You connect WhatsApp with a QR code, and your agent is live, running 24/7 in the cloud, and not on your personal laptop with access to all of your sensitive documents. And the use cases for what you can build with OpenClaw are kind of insane. Like, you could set up a daily Reddit or YouTube digest, build an inbox declutter bot that summarizes newsletters, or run a personal assistant across all your platforms like Telegram, Slack, and email. And what makes this VPS so powerful is that it's always on, but it's isolated from your personal system and way cheaper than running it on dedicated hardware. So, if you want to create your own OpenClaw agent without breaking the bank or risking your data, check out the link in the description box below. And thank you so much to Hostinger for supporting my channel and sponsoring this portion of today's video.

So, a couple weeks ago, Perplexity introduced a new feature called Perplexity Computer, but they only made it available on Max plans. It was this AI agent where you can have it go off and do a bunch of work on your behalf, and it will just continue to work autonomously. Well, this week they made it much more widely available. Perplexity Computer can understand a goal, move across tools, and keep work going after you step away. And now we're expanding that functionality across Perplexity: a personal computer that can merge your local files with Perplexity Computer and work 24/7. Personal Computer runs on a dedicated Mac Mini that can run 24/7, connected to your local apps and Perplexity's secure servers. It's a digital proxy for you, working constantly on your behalf and allowing you to orchestrate all of your tools, tasks, and files from any device, anywhere.

Now, from what I can gather, this isn't something that you go and, like, install on your own Mac Mini and run. No, this is like they're providing you access to a Mac Mini that it's running on, and you're connecting up to their Mac Minis. So, they have a demo video here on X, and we can see it says "Mac Mini discovered, ready to use." So, it went and found an available Mac Mini for you. They had it generate a morning briefing. They click into it, and I'm guessing this morning briefing is all sort of local on that Mac Mini that you just tapped into. Looking over on the sidebar, it looks like it's got access to Slack, Perplexity, Notion, Gmail, Dropbox, Figma.
They prompt it with, "Find me five senior iOS engineers in the Bay Area who've worked on consumer apps with 1 million plus users. I need to fill this role by the end of the quarter." And then the computer goes off and starts just doing work in the background. They get it to go and, like, craft an email on their behalf. They take a Q4 revenue report and then have it turned into a slide deck, and it's going and doing it on an OpenClaw-like agent system on a Mac Mini. I don't know if it's actually using OpenClaw behind the scenes, or if it's something they custom-developed, or maybe it's like a fork of OpenClaw. I don't know. But it goes and creates a slide deck for them based on that earnings report they fed it. It sends them notifications to their phone about various candidates that are ready to review. Seems pretty cool and pretty powerful.

So, I've seen a couple use case videos of this. Here's one that Perplexity themselves posted: "Perplexity Computer replaced $225,000 a year in marketing tools in a single weekend. We built an AI marketing agent that scans hourly, manages budgets, detects fatigue, and coordinates several campaigns end to end. In one test run, it made 224 micro-optimizations to our ad stack." So, in this demo, they're in some sort of marketing automation platform, and it's showing off a bunch of data and charts. I don't know exactly what we're looking at here. It looks like it's different amounts of ad spend. It's got a bunch of demographics data. To me, it looks like it's just pulling together a bunch of data. So, I'm trying to figure out what it actually did for them other than pull a whole bunch of data into a single place. Did it actually go and create these ads and do, like, split tests for them? I'm not quite sure, but they managed to get it to look at charts and data for them. I mean, it looks pretty. I just don't quite understand what this demo is showing off, if I'm being honest. Other than a bunch of stats, I can't tell what the Perplexity Computer went and did for them.

Here's another one: "Perplexity Computer now runs on your portfolio. Connect your brokerage account securely through Plaid, then ask Computer to make a personal terminal that's always on." And then they generated this almost Bloomberg-terminal-looking thing that has all of their portfolio stats and a whole bunch of charts. Is it actually trading for them, or is it just really good at looking at data for you? Again, it looks really impressive that it's pulling all of this information together, but what I'm trying to figure out is: did it actually make decisions for them, or did it just go and build a terminal for them that pulls data from all over the place? I mean, either way, it's pretty impressive that it built this terminal for them. But I can't quite figure out: is it doing things autonomously for them, like making marketing decisions on their ad spend and making trading decisions for them, or is it just building really cool dashboards so they can see all the data? I don't know. Maybe somebody who has more experience with this Perplexity Computer will let me know in the comments what is actually going on with these videos, cuz from what I can tell, it's just building really cool stats dashboards for them. Again, it is impressive that you can just tell it to go build a dashboard and it will.
But this says "Perplexity Computer now runs on your portfolio," which I think just means it connects to the Plaid API and can pull in data for you, but it's not actually making any sort of trading decisions, which you probably don't want it to do at this point yet anyway. But I do believe this new Computer mode is available to anybody on any of the paid plans. I don't think it's available for free, but if you're on even a Pro plan, like the $20 a month plan, you have access to Computer now. So, if I click into Computer, you can see I have 4,000 credits here. The Pro plan actually gets you zero credits, but because it's a new thing, I think they're giving out 4,000 bonus credits to get people to try it. So, looking through here: engineer a production-ready AI agent with testing harness and evaluation suite. Organize my current tabs into a research doc. Build an agent red team lab. Model the AI model launches I care about. Build a financial model for my startup. Write a business plan. Create a prototype.

Let's try to get it to do something that Claude kind of struggled with earlier: create an interactive map showing where the top AI companies are located. I'll set Opus 4.6 as the model, and let's see what we get out of this. And it's going to work. All right. So, it worked for about 6 minutes here. If I go up to the top, you can see it started at about 3:51 p.m. and it finished at 3:56, so call it 5 minutes. It used about 230-ish credits out of my 4,000 credits. And here's the interactive map that it generated for me. Quite a bit better than the Claude one. So, I can actually zoom in on Seattle and it shows me Amazon. Let's zoom back out here. Let me see if I can make this a bigger, wider screen here. All right, so we can zoom into California. What do we got down in San Diego? No, that's Long Beach. What's in Long Beach? Okay, that's where Andre's located. Still a little wonky. Let's see. When I click into Seattle, it kind of zooms into a weird spot. Let's see. Let's click on Anthropic over here. Oh, and it actually zooms into where Anthropic is theoretically located, maybe. I don't know how accurate these locations are. The map is a heck of a lot better than what Claude made for me the first time, but it's still a little bit wonky when I try to zoom into stuff. Pretty decent looking, though. Let's see. Let's switch to light mode. All right. Let's see. All categories: AI, chips and hardware. Okay. And so, it narrows it down. They're all right in the same spot here. So, I mean, Computer seems like it's pretty good at going and building dashboards for you.

But I want to move over to something that I was actually really, really impressed with this week, from a company that I don't normally talk about that much, and that's Canva. Now, before I do get into it, I will mention I do have a little sliver of equity in Canva, but it doesn't impact what I say about them whatsoever. I honestly just thought this new feature was cool. They introduced a new feature called Magic Layers. You can give it an image and it will separate things out in that image into multiple layers, and then you can re-imagine that image in different ways. Let me show you what that looks like in practice, because it works really well with AI-generated images. So, I'll just create, like, a new YouTube thumbnail here in Canva. And I'll grab an image that I created in Nano Banana. So, I have this image here of me with my head open and my brain on fire. And you can see this is all just one image here.
It was all generated with Nano Banana. If I click on edit, there's a new button that says Magic Layers. And if I click Magic Layers, it will analyze this image and then break it into separate layers. So, check this out. See my little skull and my brain and my body and even the background. I can move this around. Now, this was generated in the original AI image; now it's its own separate layer that I can put wherever I want. My little head layer here, I can move this off to the side if I wanted. I can reattach my head here. And we've got our background image. Now that's a totally separate purple-blue gradient. Let's right-click on that: save image as background. Now I've got the whole background here. Let's select both of these together and make this a lot bigger, so it's taking up more of my image. And you can see my little tiny pea brain popping out of my head here.

Let me get rid of this one and show you with another one. Here's a really old image that I think I made in Midjourney. I don't really remember what I made it in, but it's got a whole bunch of different elements in it. So, let's click on edit. Let's click our Magic Layers and let it scan this image for layers. And now, check this out. It broke all of this stuff out. So, we've got our background here. I can move this independently now. It sort of broke it out into, like, two separate backgrounds, but whatever. I'll just delete this one. And let's set this image as our background. And then we've got our coffee mug here. Let's move this coffee down here so it lines up with the bottom. Same with this one. And then we've got this coffee floating here, but I can move it wherever I want. Now, here's another one. I guess this isn't coffee. I guess it's, like, boba. We've got these little squares. I could move these to be down here. These squares could be, like, over here. Now, we've got our octopus on a unicycle that could be down here. And I could recompose the entire AI-generated image. Like, I don't know why I was so impressed with this, but I just think it's really, really cool. You can generate any image you want and then go and recompose it, cuz it breaks everything into layers. To me, this is super, super helpful, especially if you make, like, YouTube thumbnails or Facebook ads or pretty much anything where you're generating images.

And it doesn't only work with AI images. You can give it any real image as well, and it will break that out into layers. So, let me just delete everything here. It even broke out the bubbles separately, I just realized. This is an actual real image of me here in this office. Let's go ahead and Magic Layer that. And it broke it into two separate parts: you've got my background here, and you've got me here as a separate layer. I mean, there's some wonkiness going on, cuz it had to try to guess what was behind me. But I could set this as my background, and I could have a little teeny miniature version of me sitting in the chair. Or make a giant-head version of me in my chair. I don't know. I just love this stuff. I just think it's cool. So, that one's called Magic Layers, and from what I understand, it's available on all plans in Canva.

And since we're talking about that, I might as well mention this one as well. There's a new feature inside of Adobe Photoshop. Now, inside of Photoshop, you can actually give it text prompts to adjust images. This is called AI Assistant in Photoshop. It's apparently available on web and mobile. And they say it's as simple as describing the edits you want.
So, I'm inside of Photoshop for the web here. I'll go ahead and pull in an image from my computer. It's just, like, a random image from a photoshoot. And you can see down at the bottom I have an option to describe what you want to do. So I can just say, "Put an explosion in the background." And we can see that it's automatically selecting Nano Banana. So it's essentially just Nano Banana built into Photoshop. And it gives us a couple options here: option one, option two. Not bad. I mean, it looks like what you'd get if you were to drag and drop the original image into Nano Banana and give it the same prompt.

We also got a couple new large language models this week. I'm not going to go too deep into them, but I do like to share whenever there's new models available for people. Nvidia released a new model this week called Nemotron 3 Super. This is an open-weight, 120 billion parameter model. I mean, if you had a strong enough GPU, you could actually run it locally. More likely, you're going to run it in a cloud, but you could actually fine-tune this and essentially use it however you want. I haven't actually used this model yet, but from everything I hear, it's a really impressive model, especially for an open-weight model. Next week is Nvidia's GTC event, where Jensen's going to be giving his keynote, and I imagine we'll be hearing a lot more about this model at that event. But it is available now. You can use it inside of Nvidia's platform, Perplexity, OpenRouter, and Hugging Face.

Google also released another model this week: Gemini Embedding 2. Now, this is more for developers and people who want to use this in the API, but it's Google's first natively multimodal embedding model that maps text, images, video, audio, and documents into a single embedding space. So basically, it's a model where you can feed it text, images, videos, audio, and documents, then give it a prompt, and it will understand what's going on in all of those various modalities you gave it and be able to answer questions about all of those things. It will be able to essentially retrieve information from those things when you give it a prompt along with that information. So if you're a developer and you're building something where, for instance, you want the user to be able to upload a video and have the app understand that video and actually retrieve information from what's in it, this could be a model that you build with, if that makes sense.
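To make that single-embedding-space idea concrete, here's the generic retrieval pattern such a model enables: embed every asset into one vector space, embed the query the same way, and rank by similarity. This is a toy Python sketch; the embed() function below is a stand-in of my own, not the actual Gemini Embedding 2 API:

```python
import hashlib
import numpy as np

def embed(item: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in: derives a repeatable pseudo-vector from the content.
    A real multimodal model would map text, images, audio, or video
    into the same fixed-size vector space."""
    seed = int(hashlib.md5(item.encode()).hexdigest()[:8], 16)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def search(query: str, library: list[str]) -> list[tuple[float, str]]:
    """Embed the query once, score it against every stored asset."""
    q = embed(query)
    scored = [(cosine(q, embed(asset)), asset) for asset in library]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

results = search("the part where the demo breaks",
                 ["clip_01.mp4", "notes.pdf", "screenshot.png"])
print(results[0])  # best match first (toy scores, since embed() is fake)
```

In a real system you'd embed the library once, cache the vectors in an index, and only embed the query per request.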
All right. So I still have quite a bit I want to show you for the rest of what I want to share, so let's jump into a rapid fire, starting with by far the most important news you're going to need to know this week, and that's: I launched a new version of Future Tools. If you head over to futuretools.io, you will see the new design. It's just a simpler, cleaner look. It has all of the same features; it's just tightened up a little bit. So, you can still filter by various categories here, and sort by free, premium, paid, and open source. My suggestion: jump over to the tools database and select Matt's Picks. These are the tools that I've actually spent a bit of time demoing, testing, and vetting myself, and are the tools that I think are actually worth paying attention to right now. There's 97 of them, narrowed down from over 4,000 tools. But the thing I'm most proud of on the new site is actually the AI news page, which has got a cleaned-up look as well.

I'll highlight the news that I think is the most interesting. If it's a paywalled article, I will let you know right from this page that it's paywalled. But if you hover over any piece of news on this website, it's going to give you a quick TL;DR summary of that piece of news. So even the paywalled news here: hover over it, and you can see it says "indirect summary" there. That's because it looked at the news, went and found another website that reported on the same news, and then pulled in a summary from that website. So, you can get all of the latest news at a glance by just skimming this page and hovering over any piece of news. And then, of course, if you want to read the whole article, you click on it, it will open in a new tab, and you can read the entire article. Keep in mind, this site is very new. I have been working on it myself, and there are probably still a few bugs and a little wonkiness to it. Just keep that in mind. But again, the exact same functionality works. Let's say you're a YouTuber or podcaster. Go up to search AI tools, type "podcast," and it will pull up a bunch of tools related to podcasting. Type in "YouTube," and it'll pull up a bunch of tools that you can use for your YouTube channel.

All right, moving along to real news that you probably actually care more about. Google rolled out a new update to Maps. You can use it to ask Maps questions and have conversations about your surroundings. One of the examples they give: "Plan out my February road trip between the Grand Canyon, Horseshoe Bend, and Coral Dunes. Any recommended stops along the way?" And then it does, like, a whole mapping and gives them suggested stops along the way. I saw another example where it said, "Help me find a toilet along the way where I don't need to actually be a customer to use it," and it found toilets on their route so they can stop. So, seems helpful.

Google also rolled out some new Gemini features in Docs, Sheets, Slides, and Drive. These are only on the paid plans. You can see it's for Google AI Ultra and Pro subscribers, but it's essentially just Gemini features baked right into your Google Docs or your Google Slides or your Google Sheets. They suggest using it for things like getting to a first draft instantly, polishing things, matching your voice or writing style, filling in missing data in Google Sheets, and creating fully editable slides. Like, if I go to a Google Doc here, up in the top right I have an Ask Gemini button. Write a 500-word essay about monkeys. All right, so it wrote it in the sidebar. I can click insert, and boom, there's my monkey article.

Maybe you're more of an Excel person. Well, guess what? ChatGPT for Excel came out this week. We can see in their little screenshot here that it is ChatGPT in a right sidebar in Excel. You can see a little ChatGPT button up in the top right. It opens it, and it can read your data and also add data to the sheet. So, we can see it can build and update spreadsheet models faster. Instead of building spreadsheet models or running scenario analysis manually, teams can describe what they need in plain language, and ChatGPT will create or update live Excel models directly in the workbook. Teams can run data analysis, reporting, inventory management, and budgeting, all while preserving structure, formulas, and assumptions in a formatted Excel-native workbook.

If you're a Grok user, they added a couple new quality-of-life features this week as well.
They added audio options for long-form articles, so you can actually have Grok read articles to you. Now, apparently this feature only works on iOS. So, if you're using X on iOS, there'll be a little listen button and you can actually listen to the article. This is the example article; I don't see any listen button. So, can't do it on a computer, but I imagine it's only a matter of time. Grok also added the ability to stop people from editing your uploaded media. So, if you upload your own image inside of X, well, before, anybody could go and animate that image or tweak that image using Grok Imagine. Now you can actually block people from modifying your images.

Anthropic's Claude rolled out a couple new features for coders that are pretty cool. This week they launched scheduled tasks in Claude Code. So you could have it do things like a daily code review every day at 9:00 a.m., a weekly dependency audit every Monday at 8:00 a.m., or PR triage on weekdays at 5:00 p.m. And supposedly Claude Code on the desktop will just automatically handle those tasks at that time and on that day for you in the background. There's also a new code review feature inside of Claude Code. They describe it as an agent team-based review system. So when a pull request is opened, code review dispatches a team of agents. The agents look for bugs in parallel, verify bugs to filter out false positives, and rank bugs by severity. The result lands on the pull request as a single high-signal overview comment, plus inline comments for specific bugs. Apparently, Anthropic uses this code review internally, and they've been doing it for months to review their own code.

Microsoft rolled out a new Copilot Health feature where you can attach medical data. Copilot Health brings together your health records, wearable data, and health history into one place and applies intelligence to turn them into a coherent story, where the connection between your broken sleep and the reasons why becomes visible. Now, ChatGPT has pretty much this same exact feature. I'm kind of wondering if Microsoft is just baking in the same OpenAI ChatGPT health feature, because, well, Microsoft's deal with OpenAI means that they're allowed to use any of OpenAI's technology. This may be proprietary to Microsoft, but if I had to guess, they probably just took the OpenAI health feature, the ChatGPT health feature, and baked it into Copilot. I can't say for sure, but that would be my guess.

All right, now let me get into some of the more fascinating things that happened this week. I've just got a few more things. We're almost done here. But over the weekend, Andrej Karpathy open-sourced what he called autoresearch. Basically, he developed a little system that teaches your large language models how to optimize themselves. He put all of his code up on GitHub. So, check this out. The idea: give an AI agent a small but real large language model training setup and let it experiment autonomously overnight. It modifies the code, trains for 5 minutes, checks if the result improved, keeps or discards, and then repeats. You wake up in the morning to a log of experiments and, hopefully, a better model. So again, he's training large language models, telling the agent to go and find optimizations to make the large language model better, faster, cheaper, whatever. And it keeps on doing experiments autonomously while he sleeps, so the model constantly gets better and better and better. And then he wakes up to see a whole bunch of tests and how the model improved. In Andrej's own words, it's part code, part sci-fi, and a pinch of psychosis.
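That keep-or-discard cycle is essentially greedy hill-climbing over code edits. Here's a schematic of the loop as described (my own illustration, not Karpathy's actual code; propose_edit and train_and_eval are hypothetical callables you'd supply):

```python
import shutil
import time

def autoresearch_loop(hours, propose_edit, train_and_eval, repo="model_repo"):
    """Greedy overnight loop: mutate the training code, keep a change
    only if a short training run improves the eval metric."""
    best = train_and_eval(repo)                # baseline from the current code
    log = [("baseline", best)]
    deadline = time.time() + hours * 3600
    while time.time() < deadline:
        shutil.copytree(repo, "candidate", dirs_exist_ok=True)
        change = propose_edit("candidate")     # agent modifies the code
        score = train_and_eval("candidate")    # the ~5-minute training run
        if score > best:                       # keep improvements,
            shutil.copytree("candidate", repo, dirs_exist_ok=True)
            best = score
        log.append((change, score))            # log every experiment either way
    return log                                 # the morning reading
```

Flip the comparison if the metric is a loss rather than an accuracy.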
So yeah, now we have an open-sourced setup to go and teach our models to just get better and better for us while we sleep. What?

All right, here's another weird one. This week, Meta hired the guys who made Moltbook. Now, if you don't remember, Moltbook was like Reddit, but for AI agents. AI agents would go and post random posts on there, and then there'd be a whole bunch of comments on those random posts, but those comments were also by AI agents. It was, again, a social media network where agents post and comment on other agents' posts, and not real humans. Now, it had a whole bunch of security issues, and actual humans were using the APIs to post and pretend to be bots, and there were all sorts of crypto scams on there. It was just all sorts of craziness. But apparently Meta decided that's worth buying. Now, we don't know how much they bought it for or how they plan on implementing it, but there's a few things that come to mind for me when I see this happening.

Number one, I feel like Meta is trying to build a platform where they just cut humans out of the loop. Like, they probably still want humans to be the consumers, but they don't really care if humans are the creators. We've already seen that: they've already created, like, a Vibes thing where you can just see AI-generated stuff in a feed. Social media platforms have to pay creators to sort of incentivize them to create on their platforms. If companies like Meta could cut out the middleman, aka the creator, and just show content to people, well, that saves them a lot of the cost of paying human creators. Let the agents just post stuff that humans consume. It keeps humans on their platform seeing the ads, but over time, fewer and fewer humans are actually creating the content that the humans are consuming. That's, like, one of the theories, right? That this is all part of that process of cutting out creators.

The other theory is that over time, AI agents are going to help humans with buying decisions. We're seeing it more and more with things like OpenClaw and Perplexity Computer and Manus and tools like that, where humans will go and tell their agent to just go do things. Well, eventually humans are going to get more and more comfortable saying, "Go do things, and if you need to buy something to make it happen, here's your budget. Go and buy the thing to make it happen." So maybe in the future, companies like Meta will be pointing their advertising more at agents than at humans. Maybe they're kind of thinking ahead: well, if agents are going to be the ones making more and more buying decisions, let's own a lot of the sort of agent backends, help steer what the agents want to buy, and let people who advertise on our platforms steer what the agents want to buy. I don't know, just a couple theories of where this could all be headed, but both of them try to cut humans out of the loop. One of them is trying to make it so they don't have to pay creators as much to create content on their platforms. The other one is sort of making it so that, if humans are no longer making the buying decisions, then these AI agents are making the buying decisions, and, well, maybe Meta can influence that a little bit. I don't know, just theories. Just throwing those things out there.

And then finally, let's end on robots. This week, the company Figure showed off their Helix 02 living room tidy robot.
They were super impressed and super proud of the fact that this humanoid robot was able to tidy up this very, very messy room. That's kind of a joke. If this was in my house, this room would already be considered clean. But if you watch the demo video here, basically this robot walks into a room that already looks pretty clean, moves a few items into this little basket here, sprays the table down, and then calls it a day. I would have been much more impressed to see a demo on an actually dirty room. But hey, we're getting closer and closer to robots that just roam our house and clean it for us while we're away. So, you know, that's something that we can all look forward to.

And that's what I got for you today. But I do want to share one more thing. I do want to remind you I am giving away an Nvidia DGX Spark. This is like a mini supercomputer that you can run AI models on locally at home. And all you got to do is register and attend Nvidia's GTC conference. Now, the conference is happening next week. You can attend in person, or you can attend for free virtually and watch it through live streams. I will put up a link in the description where you can register, and if you register, you'll be entered to win this DGX Spark right here. So again, all you got to do is check out the free virtual GTC conference. Register, attend at least one session, and you're entered to win. All of the details for this giveaway are in a LinkedIn link that will be in the description below, and I think we'll also put it in a pinned comment. So, check that out and get registered if you haven't already, and you could be entered to win that thing for free.

Also, if you're somebody that likes just a once-a-week video where you can get completely looped in on all of the important AI news from the week, maybe like this video and subscribe to this channel. That's what I'm trying to do. I don't want to be one of the people that's trying to bombard you and tell you that every single thing that comes out every single day is the most important thing you need to know about. I want to be the guy that, at the end of the week, gives you just, like, a nice roundup, so you know: okay, I'm tapped in because I watched this one video. So again, if that's your sort of thing, then, you know, maybe consider subscribing. I'm trying to get to a million subscribers this year, and I'm getting closer and closer. So you could really, really help me get there if you just push that one little button there. That'd be super cool.

Again, that's what I got for you today. I record these on Thursdays and publish them on Friday, so there might even be more news that I missed on Friday. But if I did, I'll make sure it's in next week's video, and I will keep you 100% looped in. I really, really appreciate you hanging out with me today, nerding out with all the cool AI stuff that's going on. And hopefully you stick around and watch more in the future. That'd be cool. All right, thanks again. Bye-bye.
Video description
Here's the AI News you probably missed this week. Head to http://hostinger.com/mattopenclaw and use the coupon code MATTWOLFE to build your own OpenClaw AI agent easier and more securely.

DGX Spark Giveaway Details: https://www.linkedin.com/posts/matt-wolfe-30841712_nvidiapartner-activity-7432424153711816704-AC21

Discover More:
🛠️ Explore AI Tools & News: https://futuretools.io/
📰 Weekly Newsletter: https://futuretools.io/newsletter
🎙️ The Next Wave Podcast: https://youtube.com/@TheNextWavePod

Socials:
❌ Twitter/X: https://x.com/mreflow
🖼️ Instagram: https://instagram.com/mr.eflow
🧵 Threads: https://www.threads.net/@mr.eflow
🟦 LinkedIn: https://www.linkedin.com/in/matt-wolfe-30841712/
👍 Facebook: https://www.facebook.com/mattrwolfe

Resources From Today's Video:
Claude Interactive Visuals: https://claude.com/blog/claude-builds-visuals
ChatGPT Math & Science: https://openai.com/index/new-ways-to-learn-math-and-science-in-chatgpt/
Perplexity Everything Computer: https://www.perplexity.ai/hub/blog/everything-is-computer
Perplexity Computer Demo: https://x.com/askperplexity/status/2031103256236274180
Canva Magic Layers: https://www.canva.com/newsroom/news/magic-layers/
Adobe AI Editing: https://blog.adobe.com/en/publish/2026/03/10/image-editing-just-got-smarter-with-ai-photoshop-firefly
NVIDIA Nemotron 3 Super: https://blogs.nvidia.com/blog/nemotron-3-super-agentic-ai/
Gemini Embedding 2: https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-embedding-2/
Future Tools Database: https://futuretools.io/
Google Maps AI: https://blog.google/products-and-platforms/products/maps/ask-maps-immersive-navigation/
Gemini Workspace Updates: https://blog.google/products-and-platforms/products/workspace/gemini-workspace-updates-march-2026/
ChatGPT for Excel: https://openai.com/index/chatgpt-for-excel/
X Grok Audio: https://www.socialmediatoday.com/news/x-formerly-twitter-adds-ai-powered-audio-reading-long-form-articles/814138/
X Grok Media Block: https://www.socialmediatoday.com/news/x-formerly-twitter-adds-option-to-restrict-grok-image-variations/814140/
Claude Code Tasks: https://x.com/trq212/status/2030019397335843288
Claude Code Review: https://claude.com/blog/code-review
Microsoft Copilot Health: https://microsoft.ai/news/introducing-copilot-health/
Karpathy Autoresearch Tool: https://venturebeat.com/technology/andrej-karpathys-new-open-source-autoresearch-lets-you-run-hundreds-of-ai
Meta Acquires Moltbook: https://www.axios.com/2026/03/10/meta-facebook-moltbook-agent-social-network
Figure Helix 02 Tidy: https://www.figure.ai/news/helix-02-living-room-tidy
GTC Giveaway: https://www.linkedin.com/posts/matt-wolfe-30841712_nvidiapartner-activity-7432424153711816704-AC21/

Let’s work together!
- Brand, sponsorship & business inquiries: mattwolfe@smoothmedia.co

#AINews #AITools #ArtificialIntelligence

Time Stamps:
0:00 Intro
0:10 Claude Interactive Charts & Visualizations
3:03 ChatGPT Interactive Visual Explanations
6:40 OpenClaw on Hostinger
8:10 Perplexity Computer
14:48 Canva Magic Layers (So Cool!)
18:13 AI in Photoshop & Firefly
19:09 NVIDIA Nemotron 3
19:45 Gemini Embedding 2
20:51 Future Tools Update
22:45 Gemini in Maps
23:14 Gemini in Docs, Sheets, Slides & Drive
23:45 ChatGPT for Excel
24:33 Grok Audio for Articles
24:55 Grok Media Editing Opt Out
25:13 Claude Code Scheduled Tasks
25:39 Claude Code Review
26:07 Microsoft Copilot Health
26:55 Andrej Karpathy’s Autoresearch
28:04 Meta Hires Moltbook Creators
30:56 Figure Helix 02
31:40 NVIDIA Giveaway