bouncer

Douglas Schmidt · 292 views · 10 likes

Analysis Summary

Influence: 40% (Low)
Scale: mild / moderate / severe

“Be aware that the 'inevitability' of AI is framed through selective historical analogies (like farming) which may oversimplify the distinct cognitive differences between manual labor automation and intellectual synthesis.”

Transparency: Mostly Transparent
Primary technique

Performed authenticity

The deliberate construction of "realness" — confessional tone, casual filming, strategic vulnerability — designed to lower your guard. When someone appears unpolished and honest, you evaluate their claims less critically. The spontaneity is rehearsed.

Goffman's dramaturgy (1959); Audrezet et al. (2020) on performed authenticity

Human Detected: 95%

Signals

The video is a recorded academic lecture by a verifiable individual (Dean Douglas Schmidt) featuring natural speech, personal anecdotes, and a specific professional context that AI cannot currently replicate in a long-form presentation format.

  • Personal Identity and Context: The speaker identifies himself as Doug Schmidt, Dean at William & Mary, and references specific personal history (being in college in the 1980s).
  • Speech Patterns: The transcript contains natural self-corrections, conversational transitions ('I spend a lot of time talking with...', 'I kept coming back to...'), and first-person anecdotes.
  • Content Depth and Structure: The presentation is a long-form academic talk (over 1 hour) with a complex, non-formulaic structure that integrates historical analogies with specific institutional context.

Worth Noting

Positive elements

  • This video provides a high-level academic perspective on how AI tools like 'Deep Research' can be used to synthesize complex historical data and assist in creative writing tasks.

Be Aware

Cautionary elements

  • The use of the 'Prometheus' myth creates an aura of divine inevitability that discourages critical questioning of the specific corporate interests behind the AI tools being promoted.

About this analysis

Knowing about these techniques makes them visible, not powerless. The ones that work best on you are the ones that match beliefs you already hold.

This analysis is a tool for your own thinking — what you do with it is up to you.

Analyzed: March 23, 2026 at 20:38 UTC · Model: google/gemini-3-flash-preview-20251217 · Prompt Pack: bouncer_influence_analyzer 2026-03-08a · App Version: 0.1.0
Transcript

I'm Doug Schmidt, dean of the School of Computing, Data Sciences, and Physics at William & Mary. Today I want to talk about unbinding Prometheus: how generative AI is liberating computational thinking from the confines of coding. In Greek mythology, Prometheus was the defiant Titan who stole fire from the gods and gifted it to humanity, an act that ignited civilization but also provoked divine wrath. Zeus punished him with eternal torment for daring to share such power. That story still resonates today. Each time humanity unlocks a new source of power, whether fire, electricity, or now intelligence, we inherit both its creative spark and its moral burden. Generative AI is our modern Promethean flame: dazzling, dangerous, and full of possibility.

Here are several key themes I will convey throughout this multi-part presentation. We'll begin by recognizing generative AI as a transformative force reshaping every aspect of modern life, and then explore how it's rapidly transforming technology, society, education, and work by enabling new forms of problem solving, creativity, and automation. I'll also show how the traditional barriers to computational thinking are falling, making it accessible to everyone, and how this democratization is empowering people without formal programming backgrounds to solve complex problems and model systems effectively, provided they apply sound prompt patterns and good practices. This transformation echoes the myth of Prometheus, who shared the fire of the gods with humankind. Today, the fire is AI-driven reasoning and synthesis, democratizing access to analytical power once reserved for specialists. Moreover, we'll explore how prompt engineering is redefining computational thinking: first, as people across fields use natural language prompts to direct intelligent systems, and then as a larger transformation unfolds from traditional coding to prompt-guided reasoning as the new literacy of the AI era. As computational thinkers begin to vastly outnumber programmers, we must first understand the societal shift this represents and then reimagine how we design, teach, and govern AI technologies to ensure they serve humanity wisely and responsibly. And channeling a bit of Uncle Ben's wisdom from Spider-Man, we'll underscore how with great computational power comes great responsibility to use AI ethically, sustainably, and with respect for the human and creative values that define us, especially when confronting thorny issues like energy and water usage, academic integrity, and intellectual property.

Let me begin by sharing what inspired this presentation. As dean of the School of Computing, Data Sciences, and Physics at William & Mary, I spend a lot of time talking with faculty, students, administrators, and alumni about how rapid advances in technology are transforming our school and our world. In those conversations, I kept coming back to a powerful historical analogy. A little over a century ago, tens of millions of Americans worked on farms to feed the nation and much of the world. Today, that number has fallen dramatically. Not because we stopped eating, but because technology evolved. Hand tools gave way to horse-drawn plows, then gasoline-powered tractors, and now drones and autonomous combines. Each leap in technology amplified human capability, reducing the number of people required to farm the land while increasing productivity substantially.
That same story is now unfolding in the realm of computational work, as generative AI begins to do for knowledge labor what mechanization once did for agriculture. That visualization of how few Americans now work on farms reminded me of another one, this time from a Microsoft Research report published about 20 years ago. It charted the number of students majoring in computer science from the early 1970s through 2004, forming what I like to call the two-humped camel curve. The first hump appeared in the early 1980s, when I was in college. Back then, students rushed to major in computer science, convinced that without it they'd never get a job. Then reality hit. Most jobs didn't actually require a computer science degree, and enrollments collapsed. To make matters worse, we were still using punch cards, which made programming painfully slow, especially if you dropped your deck on the floor. The second hump rose in the mid-to-late 1990s during the dot-com boom, when students believed a computer science degree was their ticket to millionaire status. But once the bubble burst and outsourcing took hold in the early 2000s, computer science enrollments plunged again. It's a vivid reminder that interest in computing has always followed waves of hype and disillusionment, each tied to how society perceives the power, promise, and pain of technology.

Since that Microsoft Research report stopped at 2004, I've always wondered what happened in the years that followed. So I decided to ask ChatGPT to visualize the number of students who wanted to major in computer science over the past 55 years, and this is what it produced. If you click the link at the bottom of the slide, you can actually see the ChatGPT session that generated this graph. Now, while the visualization isn't perfect, it mirrors the trends in the original report and shows how computer science enrollments have surged over the past 15 years. A big question facing computer science deans, chairs, faculty, and students is whether these growth trends will continue. But beyond the data, this exercise highlights something even more interesting. It's an example of no-code computational thinking. I wanted to solve a computational problem, visualizing five decades of CS enrollment trends, without writing a single line of code. All it took was a well-crafted prompt, and ChatGPT quickly did the rest. I also asked ChatGPT to visualize data science enrollments over the past decade, since the field itself is relatively new, and as you can see, it's following a strong upward trajectory. Once again, this is an example of no-code computational thinking.

To generate these visualizations, I used Deep Research, which I'll now briefly describe and demonstrate in action. Deep Research is an AI-powered agent available from OpenAI. It performs multi-step reasoning, browsing, analyzing, and synthesizing information from online and/or local sources to produce detailed, citation-rich reports on complex topics such as computer science enrollment trends and far beyond. In effect, it functions as a virtual team of postdocs on demand, capable of diving deeply into any subject and producing clear, well-supported insights in minutes. And it's not alone. Other advanced language models now offer comparable deep research capabilities, ushering in a new era where scholars, creators, and innovators can scale their curiosity and their productivity to unprecedented levels.
An example of applying Deep Research to amplify my creativity occurred in the spring of 2025, when I used it to help craft my commencement address for the computer science and data science departments at William & Mary. I wanted to model my talk after General Douglas MacArthur's legendary "Duty, Honor, Country" speech at West Point, one of the most stirring orations in American history. Ironically, despite years of public speaking, I'd never been able to capture the cadence and emotional power of General MacArthur's speech, even when I served as Director of Operational Test and Evaluation at the Pentagon and had a dedicated speechwriter. Then one afternoon, driving between Williamsburg and Suffolk, Virginia, I decided to give it a try using ChatGPT's voice mode on my phone. During the 50-minute drive each way, I dictated ideas, refined themes, and explored phrasing aloud. By the time I returned home, I had a complete first draft, transcribed automatically in real time. I later turned to Deep Research to polish and strengthen that draft into the final version you can hear if you click the link at the bottom of this slide. A talk I quite literally could not have written in any timely manner before the advent of generative AI.

Now that we've walked through several examples of no-code applications of generative AI, let's step back and talk about what computational thinking actually is and why it matters. About 20 years ago, Dr. Jeannette Wing, then a computer science professor at Carnegie Mellon University, wrote a landmark article in Communications of the ACM defining the term computational thinking and positioning it as a universally applicable attitude and skill set that everyone, not just computer scientists, would be eager to learn and use. She later went on to serve as assistant director for Computer and Information Science and Engineering at the National Science Foundation, vice president of research at Microsoft, and is now executive vice president for research at Columbia University. Dr. Wing described computational thinking as the mental framework for formulating problems so their solutions can be represented as computational steps or algorithms. She viewed it as a way of analysis and reasoning that transcends coding itself: an intellectual toolkit that students, scholars, and professionals in every field could embrace to solve complex problems more effectively.

Five core elements of computational thinking are shown on this slide. Logic is about reasoning through problems step by step, evaluating what's true, what's false, and what follows from the evidence. It's how we teach machines and ourselves to make sound and consistent decisions. Decomposition means breaking a big, messy problem into smaller, more manageable pieces. Once each piece is solved, they can be reassembled into a complete solution. Pattern recognition means spotting similarities and regularities across problems, enabling us to reuse ideas and accelerate our problem solving. Automation is the engine of execution, where our reasoning becomes a process that machines can perform, turning logic into action and freeing us to focus on creativity and innovation. And finally, abstraction is the art of focusing on what truly matters by filtering out the noise, highlighting the essentials, and modeling problems in ways both humans and computers can understand. Together, these five elements form the foundation of computational thinking, the mindset that powers both coding and creativity in the age of AI.
It's important to recognize that computational thinking isn't limited to coding. Instead, it's about thinking systematically, turning complex problems into structured, repeatable processes that humans and machines can perform, either together or separately. Over the past two decades, however, the practice of computational thinking has largely involved writing code, usually in third-generation languages like C, C++, Java, and Python. But as every computer science student, faculty member, and professional knows, fluency in these languages often means wrestling with accidental complexity rather than focusing on meeting actual domain requirements. Programmers must contend with syntax and semantics, memory leaks, concurrency bugs, buffer overflows, off-by-one errors, and headaches associated with error handling and distribution. Speaking as someone who spent decades deep in that code, both as a teacher and a programmer, I know from experience that many of these struggles are self-inflicted. It doesn't have to be this hard to think computationally. We've just mistaken these accidental complexities for necessity.

If you trace the history of automobiles, you'll see the same curve we're riding now in computing: the shift from the how to the what. 120 years ago, if you wanted to drive, you first had to build the car yourself, every bolt, belt, and spark plug painstakingly assembled by hand. 70 years ago, to drive a car, you needed to know how to maintain it by changing the oil, tuning the carburetor, and hoping it started on cold mornings. Today, driving has gone mainstream. You don't need to build or fix the car. You just need to operate it and fuel it or plug it in. And soon, you won't even have to drive. You'll simply tell the car what destination you want to go to, and it will decide how best to get you there. I saw that future firsthand on a recent trip from Franklin, Tennessee to the Nashville airport. My Uber driver set his Tesla on autopilot and then never touched the wheel, accelerator, or brakes the entire 30-minute ride. It was equal parts thrilling and unnerving, like watching the future flex its muscles. But it worked. I arrived alive and deeply aware that autonomy is no longer science fiction. It's a beta feature.

Just as the steering wheel is giving way to the algorithm, computing is also now shifting from the how to the what, from mechanical construction to expressive intent. 70 years ago, if you wanted to use a computer, you had to build it from scratch: solder the circuits, punch the cards, and pray your program would compile before sunrise. When I learned to code 40 years ago, we worked in third-generation languages like Pascal and Ada, painstakingly tracing through printouts and manually debugging logic by sheer force of will. Believe it or not, some universities still teach introductory programming that way, but the world has moved on. Modern software development is less about handcrafting every line and more about composing: assembling systems, connecting frameworks, and orchestrating application programming interfaces. Ironically, many academic programs still underplay this reality. Now, we stand at another threshold. Soon, you won't need to write much traditional code at all. You'll describe what you want in natural language, and your development environment, powered by large language models, will generate, adapt, and even test the solution alongside you. The level of abstraction keeps rising, from wiring machines to writing code to simply declaring intent.
Programming is evolving from syntax to semantics, from telling computers how to think to telling them what to achieve. At the heart of these advances in the commoditization of computational thinking are prompt engineering and context engineering. Prompt engineering is a new literacy that empowers anyone to harness the power of large language models without needing to write code. Just as programming once defined the how of computation, prompt engineering now defines the what, translating human intent into intelligent action. Prompt engineering comes in two distinct flavors: prompt engineering in the small and prompt engineering in the large. Prompt engineering in the small is about structured, iterative interactions by individuals who use natural language and prompt patterns to reason through complex problems step by step. In contrast, prompt engineering in the large takes that same principle and scales it, orchestrating systems of reasoning that span the entire software life cycle from ideation to sustainment. Together, these practices democratize computational thinking, enabling anyone from scientists and engineers to artists and writers to reason and create computationally without ever needing to write code.

Context engineering is closely related to prompt engineering. It's the process of tuning the instructions and the relevant context that AI models need to perform their tasks effectively. It's also the design discipline that builds the information hub around an AI system: the structured environment that gives AI models orientation, purpose, and grounding. Rather than crafting a single clever prompt, it orchestrates five interconnected spheres, each feeding AI models intelligence from different perspectives. As we just discussed, prompt engineering is the intent translator that converts human goals and nuances into structured, interpretable instructions that shape how AI models reason and respond. Retrieval-augmented generation, or RAG, is the knowledge conduit. It delivers verified, contextually relevant data from outside sources, ensuring AI models reason with evidence rather than assumptions. Structured outputs are the precision channel. They establish how AI models organize and express their results, such as tables, JSON, or logical formats, so their reasoning becomes traceable and useful. State and history are the temporal compass. They preserve the flow of a session, remembering what's been said and decided, so AI models maintain coherence and momentum within an ongoing interaction. Finally, memory is the long arc. It extends awareness beyond the current session, retaining durable facts, preferences, and prior insights, so AI models evolve with the humans they serve. Together, these five spheres orbit the central hub of context engineering, transforming a static exchange into a dynamic system of understanding. Teaching students this mindset helps them learn not just how to prompt AI, but how to architect the environment in which reasoning itself unfolds. In the age of generative intelligence, context isn't peripheral. It's the gravitational center that holds meaning together.
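To make those five spheres concrete, here is a minimal sketch of how a single model request might be assembled from all five sources. This is an illustration added for this writeup, not anything shown in the talk; every function and variable name is hypothetical:

```python
# Illustrative sketch of the five context-engineering "spheres" described
# above. All names here are hypothetical; this is not a real library API.

def build_request(intent, retrieved_docs, history, memory):
    """Assemble one model request from the five spheres."""
    evidence = "\n".join(f"- {doc}" for doc in retrieved_docs)   # 2. RAG
    schema = '{"answer": "...", "sources": ["..."]}'             # 3. Structured output
    session = "\n".join(history[-5:])                            # 4. State and history
    profile = "; ".join(f"{k}={v}" for k, v in memory.items())   # 5. Long-term memory
    return (
        f"Task: {intent}\n"                                      # 1. Prompt: the intent
        f"Known user preferences: {profile}\n"
        f"Conversation so far:\n{session}\n"
        f"Relevant evidence:\n{evidence}\n"
        f"Respond as JSON matching: {schema}"
    )

request = build_request(
    intent="Summarize CS enrollment trends since 1970",
    retrieved_docs=["1980s enrollment spike report...", "Post-2008 surge data..."],
    history=["User: focus on the two enrollment humps"],
    memory={"audience": "university deans", "tone": "accessible"},
)
# `request` would then be sent to whichever large language model API you use.
```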
However, since prompt engineering and context engineering are so closely linked, we'll just use the more common term of prompt engineering to refer to both disciplines. There are many benefits from applying prompt engineering, paired with context engineering of course, that are redefining what it means to think computationally. It breaks down the walls that once separated coders from creators. For example, prompt engineering lowers the barriers to entry for applying computational thinking in both technical and creative fields. Whether you're a poet sculpting words, a physicist modeling the universe, or a programmer designing the next breakthrough, you can now reason computationally without touching a single line of code. It's the great equalizer, a new literacy that transforms imagination into execution through natural language. For the price of about three pumpkin spice lattes a month, roughly $20, you can now access advanced large language models like ChatGPT, Gemini, or Claude and perform prompt engineering at a professional level. Computational thinking therefore no longer requires fluency in traditional programming languages. We've moved from imperative programming languages like C, riddled with accidental complexity, to higher-level languages like C++, Java, and Python, which made things easier but still demanded mastery of dense syntax and semantics. Now we're entering a new frontier: performing computational reasoning in natural language. You can describe what you want in plain English, or any language, and the AI handles the how. This shift from the how to the what is democratizing computational thinking. It opens the door not just for computer scientists but for social scientists, natural scientists, doctors, lawyers, and national security operators. Anyone who thinks systematically and wants to turn ideas into action.

When I was a professor at Vanderbilt, our colleagues from departments like chemistry, biology, and divinity would often come to us in computer science and say, "We'd love to use computation in our research and teaching. Can you show us how?" And our natural response was, "Absolutely. Let's start by teaching you Java or Python or maybe JavaScript." That's when their enthusiasm would fade. They'd smile politely and say, "You know, maybe computation isn't all that essential to our field after all." But here's the twist. Those same colleagues are now often the first to embrace generative AI. Why? Because they can finally think computationally without learning to code. The accidental complexities of traditional programming no longer stand between them and their ideas. Now they can go straight from concept to creation, not by mastering syntax, but by mastering prompt engineering.

Let's start with an example of what no-code computational thinking looks like in real life, showing how generative AI can take on a task that used to feel tedious and error-prone. A few years ago, back when ChatGPT was still brand new, I co-chaired a computer science faculty retreat at Vanderbilt. We'd asked all 35 professors to send us one sentence about what topic they wanted to discuss. My co-chair proposed we read them all, group them by hand, and then manually assign everyone to one of five discussion groups. That sounded like a long afternoon of spreadsheets and coffee. So instead, I said, "Let's have ChatGPT do it." In seconds, it clustered the topics perfectly. Then, after each professor ranked their top three choices in a Google Form, I fed the results back into ChatGPT, and it quickly balanced every group by interest and size. What once took hours of sorting, formula tweaking, and programming became a 10-minute conversation with generative AI. It was computational thinking without code, problem solving through natural language instead of syntax. That's the quiet revolution that's happening right now. Here's the record of the conversation that I had with ChatGPT.

I started by giving it a data set, a list of all 35 faculty members and their top three topic preferences. ChatGPT thought for a moment and then announced that it would use a greedy algorithm to assign faculty to their preferred topics, which seemed reasonable. However, greedy algorithms tend to get a little greedy. Some topic areas filled up too fast, and a few unlucky faculty didn't get assigned to any of their preferred topics. Most notably, Jonathan Sprinkle ended up in a topic area he didn't even want, simply because his request came in last. Now, in the old days, that would have meant rewriting the Python code, debugging it, rerunning it, and testing again. But this time, I just told ChatGPT, "Your greedy algorithm gave Jonathan Sprinkle a topic he doesn't care about. Please come up with a better allocation." ChatGPT tried again and still didn't get it quite right. So I nudged it once more. This time, it paused, thought a bit longer, and replied, "I'm going to use randomization." And voila! The new approach perfectly allocated everyone, including Jonathan, to one of their actual preferences. It took a few iterations, but what would have been hours of manual coding and debugging turned into a few minutes of no-code computational thinking, simply guiding the AI through reasoning steps with natural language.
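For readers curious about the shape of the problem ChatGPT was solving here, the following is a rough reconstruction for illustration only, not the actual session; the names, topics, and capacity are made up. It contrasts the greedy pass that stranded one person with the randomized retry that fixed it:

```python
import random

# Hypothetical reconstruction of the retreat-allocation task. This is not
# the actual ChatGPT session; names, topics, and capacity are made up.

CAPACITY = 7  # 35 faculty across 5 topic groups

def greedy_assign(prefs):
    """First-come, first-served: whoever is processed last may lose out."""
    counts, assignment = {}, {}
    for person, choices in prefs.items():
        for topic in choices:
            if counts.get(topic, 0) < CAPACITY:
                assignment[person] = topic
                counts[topic] = counts.get(topic, 0) + 1
                break
        else:  # none of this person's preferred topics had space left
            assignment[person] = None
    return assignment

def randomized_assign(prefs, tries=1000):
    """Rerun the greedy pass in random orders until everyone gets a preference."""
    people = list(prefs)
    for _ in range(tries):
        random.shuffle(people)
        result = greedy_assign({p: prefs[p] for p in people})
        if all(topic is not None for topic in result.values()):
            return result
    return None  # no complete allocation found within the retry budget

prefs = {
    "Sprinkle": ["AI", "Systems", "Theory"],
    "Lee": ["AI", "Graphics", "Systems"],
    # ...plus 33 more faculty entries
}
print(randomized_assign(prefs))
```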
Let's now step back and examine the three layers of computational thinking that generative AI is commoditizing. We'll start with the first layer, problem abstraction, where AI helps convert messy, vague ideas into something structured. You might start with a fuzzy concept, a cloud of question marks, and through a quick back-and-forth with a generative AI model, you can sharpen your idea into a clear, well-defined prompt. It's like having a thought partner who translates your intuition into a plan of action. The second layer is automation design. Once the problem is clear, AI scaffolds the logic for you. You don't have to write code line by line. You just describe what you want to happen in natural language, such as "if this happens, then do that." AI connects the blocks, sketches the algorithm, and shows you the workflow that previously took hours of manual programming. Finally, the third layer is tool implementation. Here AI takes the automation design and makes it real. It writes the code, runs the analysis, builds the simulation, and even visualizes the results. We're seeing a dramatic shift. You can now move from a vague idea to working results almost instantly. Moreover, people are increasingly skipping straight from problem to outcome, bypassing traditional coding entirely. That's the magic of generative AI, and the disruption: computational thinking itself is being democratized and commoditized. Anyone with a browser and curiosity can now think and create computationally.

So what does all this mean for the future of computational thinking and computer science education? For starters, it means the future increasingly belongs to those who can ask the right questions and then use AI to model their ideas computationally. We're entering an era where problem solving feels less like coding and more like conversation. You describe what you want, maybe "generate an app that optimizes my workflow," and AI takes it from there, abstracting the problem, designing the logic, and implementing the solution. This new paradigm of prompt engineering is often called vibe coding: creating through intuition, not syntax.
It's about shaping experiences and systems with natural language and imagination rather than wrestling with lines of code. In the vibe coding paradigm, creativity becomes the new literacy. If you can imagine it, describe it, and refine it through dialogue with AI, you can build it. That's where computational thinking is headed: from programming machines to partnering with them. There's a terrific article by Tim O'Reilly, titled "AI and Programming: The Beginning of a New Era," where he argues that we're not witnessing the end of programming, but its remarkable expansion. His central idea is simple but profound. Thanks to vibe coding, the cost of trying new ideas has dropped by orders of magnitude. What used to take days or weeks can now be created and tested in minutes. Students, researchers, and even people with no coding background at all can now describe an idea in natural language and get a working prototype almost instantly. However, this doesn't mean we're handing over our most critical systems to AI. No one's suggesting we build a heart monitor or a nuclear reactor with vibe coding. Yet the rise of generative AI has dramatically expanded the addressable surface area of programming, transforming it from a narrow domain for specialists into a vast, interconnected landscape where educators, writers, designers, and scientists can all express ideas computationally. What was once a small circle of programmers has rippled outward into a global creative ecosystem, bridging traditional coding islands like C++, Java, and Python with new continents of natural language interfaces, prompt engineering, design synthesis, and no-code tools. Ultimately, this expansion is good news for computer scientists and software engineers. Vibe coding doesn't replace software professionals. It broadens the on-ramp and lets more people think computationally, experiment boldly, and build the future faster than ever before. As the wave of rapid prototyping grows, the best of these AI-generated ideas must be refined, hardened, and scaled by experts who understand systems, architectures, dependability, security, and performance.

Maybe it's time to rethink how we teach computer science: begin with the magic of AI, then pull back the curtain to reveal the rigor behind it. I've been thinking a lot about how vibe coding could redefine the entry point to our field. Imagine a CS 100 course, a prelude to the traditional CS 101, built entirely around AI-driven creation. Students wouldn't start with syntax or loops. They'd start by using AI to build apps that actually work: websites, chatbots, and data dashboards, with real, tangible results in minutes. The goal isn't to replace fundamentals. It's to ignite curiosity. Let them feel the thrill of creation, then deepen their learning by showing them the architecture beneath the illusion. Once students are hooked with creativity, we can open the curtain and show them what it really takes to make production systems that are reliable, robust, and secure. Too many students meet CS 101, face the Eight Queens problem or the Towers of Hanoi, and wonder why they're solving puzzles instead of shaping the future. They've already seen AI generate apps in seconds. We should meet them where the wonder lives. To inspire the next generation of computational thinkers, we need to start with AI, the new language of imagination. In today's AI-augmented world, knowing how to prompt generative AI platforms and tools has become as important as writing code via traditional programming methods.
To perform effective computational thinking, it's now essential to be proficient with prompt patterns, which encapsulate best practices for phrasing prompts so that large language models can reason, extract, and generate with precision. These patterns provide structured ways for humans and AI to collaborate via reusable conversational strategies that bring clarity and control to an otherwise open-ended medium. Just as design patterns have long helped developers craft robust and sustainable software-reliant systems by codifying proven solutions to recurring programming problems, prompt patterns now offer a shared vocabulary for guiding the behavior of generative AI tools. Those of you who teach software engineering and programming may recognize the intellectual lineage from the influential Gang of Four book, which codified key object-oriented design patterns, to today's evolving body of work on prompt phrasing. Years ago, I co-authored several popular books on pattern-oriented software architecture that broadened the focus beyond object-oriented design patterns to also cover architectural patterns related to concurrent and network programming. It's fascinating to see how similar principles of reuse, context, and communication are reemerging in modern AI-augmented systems and processes. However, they're now shaping how we interact and collaborate with intelligent systems rather than how we program them. Although prompt patterns differ from traditional software patterns like Bridge, Adapter, or Reactor, they are no less important and no less relevant.

One of the most widely used examples is the persona pattern, where a user assigns a role-based identity to an AI model to shape how it responds. For instance, you might tell a large language model to act as a senior software engineer, a cybersecurity specialist, or a reliability architect, and then give it a set of tasks. Framing the conversation this way often dramatically improves an AI model's accuracy, tone, and contextual relevance.
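As an illustration, a persona-pattern prompt might look like the following. The wording here is hypothetical, not an example taken from the cited catalog:

```python
# Illustrative persona-pattern prompt. The wording is hypothetical, not an
# example taken from the cited catalog.

persona_prompt = """
Act as a senior cybersecurity specialist reviewing code for a bank.
From now on, evaluate everything I send for injection flaws, hard-coded
secrets, and unsafe deserialization. Rate each finding's severity as
low, medium, or high, with a one-line remediation.

First task: review the following function...
"""
# Sending this framing before the actual task typically improves the
# model's accuracy, tone, and contextual relevance for that role.
```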
An analogy that comes to mind is Dumbledore's Pensieve from the Harry Potter book series. He could pull memories and ideas from his mind and store them for later reflection. In a similar way, information can be encoded into a vector database and retrieved by a large language model through the persona pattern, essentially externalizing knowledge so it can be recalled with precision. Our 2023 paper, "A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT," documents the persona pattern along with many other common prompt patterns. This paper has been cited over 2,500 times, which indicates how quickly this field is advancing. For those who want to explore further, my Vanderbilt colleague Jules White has many excellent, freely available massive open online courses on prompt engineering, prompt patterns, trustworthy generative AI, and applying generative AI to engineer software-reliant systems.

Let's now explore how computational thinking itself is evolving and which kinds of artifacts and tasks are becoming commoditized along the way. Generative AI is accelerating this shift, extending the reach of computational thinking to everyone and democratizing access to reasoning, analysis, and synthesis. Of course, this isn't the first time computation has been democratized. It's part of a long continuum of technological empowerment. In the 1980s, databases like dBase and Oracle put data storage and processing power into the hands of everyday professionals, while spreadsheets like Lotus 1-2-3 and Microsoft Excel brought tabular computation and analytical logic to millions. In the 1990s, search engines such as AltaVista, Yahoo, and Google revolutionized information discovery. Today, large language models like ChatGPT, Gemini, and Claude are doing for abstract reasoning and synthesis what those earlier tools did for data and search, making them universally accessible.

Many newcomers use generative AI as a kind of conversational Google search, asking questions out of curiosity or nostalgia. I've often wondered who played the lead guitar solo in the Rolling Stones' performance of "Sympathy for the Devil" on the Get Yer Ya-Ya's Out! album. ChatGPT explained that Mick Taylor and Keith Richards traded solos and even pointed me to a video of their performance. Another powerful way to use generative AI is as a planner and life organizer. Tools like OpenAI's Operator app can now plan an entire vacation to Joshua Tree National Park, from recommending hiking routes to booking flights and hotels. These systems don't just retrieve information. They take action by interacting with other online services, turning a simple prompt into a coordinated plan.

But generative AI's true potential lies beyond productivity. It's becoming a kind of digital operating system for daily life. Through the Model Context Protocol, or MCP, large language models can securely discover, authorize, and connect to data sources and tools such as files, calendars, emails, databases, and web applications. With a single typed or spoken request, you could fetch a local PDF, summarize it, verify details online, and draft a follow-up email, all orchestrated through MCP with explicit permissions. It feels a bit like talking to the Star Trek computer, minus the warp core hum. Using generative AI as an operating system for your digital life means letting it serve as a unifying interface between your intent and your digital ecosystem. Instead of jumping between disconnected apps, tabs, and accounts, you describe what you want in plain language, and the AI coordinates the execution across tools and platforms. Your browser, calendar, inbox, and documents begin to function less like separate silos and more like extensions of a single intelligent workspace.
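The sketch below shows the general shape of that permissioned request loop. To be clear, this is not the real MCP SDK or its API; every name in it is hypothetical, and real MCP clients and servers negotiate tool discovery and authorization over a defined protocol rather than a Python dictionary:

```python
# Schematic of the permissioned tool loop that MCP enables. This is NOT the
# real MCP SDK or protocol; every name below is hypothetical.

ALLOWED_TOOLS = {"read_file", "web_search", "draft_email"}  # explicit permissions

def read_file(path):    return f"(contents of {path})"
def web_search(query):  return f"(top results for: {query})"
def draft_email(body):  return f"(email draft: {body[:40]}...)"

TOOLS = {"read_file": read_file, "web_search": web_search, "draft_email": draft_email}

def run_plan(plan):
    """Execute a model-proposed plan, refusing any unauthorized tool."""
    results = []
    for tool_name, argument in plan:
        if tool_name not in ALLOWED_TOOLS:
            raise PermissionError(f"{tool_name} is not authorized")
        results.append(TOOLS[tool_name](argument))
    return results

# One spoken or typed request ("summarize the local PDF, verify details
# online, draft a follow-up email") might decompose into a plan like this:
plan = [
    ("read_file", "quarterly_report.pdf"),
    ("web_search", "verify Q3 revenue figures"),
    ("draft_email", "Follow-up: summary of the quarterly report"),
]
print(run_plan(plan))
```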
Some people are even using generative AI as a personal life coach, setting goals, building habits, and seeking guidance when they hit a wall. They prompt it for weekly check-ins, motivational nudges, or scripts for difficult conversations, often pairing it with journaling or reflection. Used thoughtfully, AI can provide structure and accountability, but it should always complement, not replace, professional expertise in mental health, law, or medicine. As Sam Altman, OpenAI's CEO, recently observed, there's even a generational divide in how people use these tools. Older users often treat ChatGPT as a smarter Google. People in their 20s and 30s turn to it as a life adviser or trip planner, and college students increasingly treat it as an operating system for their digital lives.

Computational thinking is now supported by agentic AI platforms. For example, Google has released an agentic framework called Opal, where users apply natural language and a visual editor to describe their desired workflow. Opal then automatically builds a multi-step app by chaining together prompts, AI models, and tools. I've recently applied Opal to optimize my daily workflows. For instance, I record many videos and often want to use their content for other purposes, such as generating blog posts that distill key points in my videos. Likewise, it's useful to have short 20-second trailer clips to encourage people to watch my longer multi-minute videos. I also generate podcasts where multiple AI-generated people discuss what I've covered in my videos, along the lines of Google's NotebookLM app. Traditionally, I've done each of these tasks manually, one at a time. By using Opal, however, I created a video content creator mini app that's a powerful content repurposing engine, transforming a single video into a complete multimedia package.

We'll now walk through the steps I used to generate this mini app with Opal. I started by giving Opal a single simple prompt that said, "Generate a mini app that prompts a user for a video URL, and then create a 1,000-word blog post containing two images generated automatically from the video, a 20-second trailer video, a podcast summarizing key points in the video, and finally generate an interactive menu that enables the user to select from these three types of multimedia." After analyzing this prompt, Opal then generated the corresponding mini app in just a few seconds, which is a good example of no-code computational thinking. The entire process kicks off when a user provides the video content creator mini app with a link to a YouTube video. The mini app's first step is to analyze the video and extract its key points. This analysis becomes the foundation for everything that follows. After understanding the core message of the video, the mini app begins working on multiple tasks simultaneously. In particular, it writes a complete blog post. It generates two distinct descriptive prompts to create two relevant images for the post. It writes a prompt to create a short video trailer. And it creates a summary and prompt to generate an audio podcast. The mini app then automatically compiles these individual pieces of generated content (the blog post, the two images, the video trailer, and the podcast) into a single interactive menu. This menu provides a convenient hub where users can read the blog post, watch the video trailer, and listen to the podcast. Opal connects all these powerful AI models, though it's currently limited to Google's AI models. I've shared the video content creator mini app, so anyone can explore it firsthand by clicking on the link at the bottom of this slide.

The results highlight both the promise and pitfalls of agentic AI frameworks like Opal. Its blog generation, complete with coherent writing and well-matched images, performs impressively, producing results that could pass for human-created content, though I'd still review and edit its output carefully before publishing it online. The podcast feature is equally strong, delivering clear, natural audio with professional pacing and tone. [Podcast excerpt: "Generative AI is profoundly reshaping higher education, with many incoming students already AI proficient."] The video trailer, however, remains the weak link. Its visuals feel uncanny, and it didn't match what I'd asked it to generate. Still, Opal's overall workflow is striking. From a single no-code prompt, it seamlessly produced a multimedia package that previously required hours of manual effort on my part.
This synthesis of automation, creativity, and orchestration captures the emerging power of agentic AI, transforming fragmented manual processes into adaptive systems that learn, iterate, and collaborate.

We'll now discuss how modern integrated development environments, or IDEs, are incorporating generative AI models for those of us who still perform computational thinking through programming. AI coding tools now generate polished, multi-language code in seconds, writing from scratch or completing snippets with precision and redefining how developers create software. For example, modern IDEs like Windsurf and Cursor are evolving into co-creative workspaces that blend programming with prompting, where developers seamlessly weave natural language instructions into code, guiding AI to produce software as part of an interactive, conversational workflow. Instead of toggling between static editors and external chatbots, these new IDEs embed large language models directly into the coding process, letting developers express intent in natural language, generate classes, methods, or tests instantly, and refactor and optimize code across multiple languages with a single prompt. You can describe what you want in plain English, and the AI scaffolds the logic, syntax, and even the test suite. What's emerging is a new paradigm where programming itself becomes conversational. You might ask an AI-augmented IDE to "optimize this Python script for GPU execution" or "port this data pipeline to Rust," and within seconds it does the heavy lifting. These systems don't just autocomplete, they collaborate, keeping context across files, repositories, and entire projects. And since they can seamlessly translate logic across languages, it's now straightforward to hop from Python to Go or from C++ to TypeScript, enabling AI to handle the plumbing while software engineers and developers focus on the architecture and user experience.

Beyond traditional IDEs, we're witnessing the rise of multi-agent development frameworks: visual and programmatic environments that orchestrate multiple AI models much like a digital Unix pipeline. AI models such as ChatGPT, Claude, Copilot, and Gemini can be chained together to create multi-agent systems via orchestration layers or multi-agent development frameworks such as LangChain, AutoGen, or CrewAI. One agent might analyze data, one might generate simulations or write code, and yet another might build interactive tools, all within a coordinated workflow. These systems don't yet eliminate the need for human oversight, but they are helping to bridge the gap between ideas and implementation, enabling developers to move from intent to execution at conversational speed. In short, we're witnessing the fusion of programming and prompting, a moment when code becomes both a language and a dialogue. Generative AI isn't replacing programming. It's redefining it, transforming software creation into a process of reasoning, exploration, and design. The craft of coding is evolving into a creative partnership, one that feels less like engineering and more like composing ideas through conversation.
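Here is a toy version of that "digital Unix pipeline" idea in plain Python. The agents are stubs standing in for LLM-backed agents; none of this uses the actual LangChain, AutoGen, or CrewAI APIs:

```python
# Toy "digital Unix pipeline" of cooperating agents, per the description
# above. The agents are plain-Python stubs standing in for LLM-backed
# agents; this does not use the actual LangChain, AutoGen, or CrewAI APIs.

def analyst(data):
    """Agent 1: turn raw data into findings."""
    return f"findings derived from {data}"

def coder(findings):
    """Agent 2: turn findings into a (pretend) simulation script."""
    return f"simulation script implementing: {findings}"

def tool_builder(script):
    """Agent 3: wrap the script in an interactive tool."""
    return f"dashboard wrapping [{script}]"

def pipeline(payload, stages):
    """Chain agents Unix-pipe style: each stage's output feeds the next."""
    for stage in stages:
        payload = stage(payload)
    return payload

print(pipeline("enrollment.csv", [analyst, coder, tool_builder]))
```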
The commoditization of computational thinking isn't all progress and promise. It also brings large-scale upheaval that's reshaping how we work, learn, and create. For example, knowledge work, once the domain of specialists, is now increasingly computational and thus accessible to nearly anyone with an internet connection. As with automation's impact on manufacturing and blue-collar jobs over many decades, AI is reshaping creative and cognitive white-collar jobs, replacing some tasks while redefining others. The real risk isn't that AI will instantly take your job. It's that someone who uses it more effectively will. In this new landscape, the advantage belongs not to the most knowledgeable, but to the most adaptive: the humans who learn to collaborate fluently with their digital counterparts. Employers today aren't just looking for candidates with strong foundational knowledge. They want analytical thinkers who can reason fluently with AI tools and platforms. It's no longer enough to simply understand the principles. They expect employees to apply those principles through generative AI to enhance insight, automate work, accelerate workflows, and create value. The good news is that anyone with curiosity and commitment can master these AI tools, unlocking a new level of creativity, capability, and competitive edge. In fact, studies indicate that novices benefit from AI even more than experts.

So why does this shift, marked by the rise of AI-fluent collaboration, the advent of prompt engineering, and the commoditization of computational thinking, matter to us as computer scientists, teachers, and students? Well, the truth is that the ground beneath our profession is shifting fast. Fortune recently reported that nearly half of Gen Z and millennials believe college was a waste of money and that AI has already made their degrees obsolete. That's a gut punch when we're asking families and students to invest four years and hundreds of thousands of dollars in a computing education. Meanwhile, the first rung of the career ladder, the entry-level job, is breaking. A senior leader at LinkedIn put it bluntly: even before generative AI, entry-level roles were drying up, but now automation and AI co-pilots are accelerating that trend. For example, the chart on this slide shows that entry-level tech hiring dropped sharply in 2024, down nearly 25% at big tech firms and 11% at startups, despite an overall rebound in tech employment. Meanwhile, companies are prioritizing mid-to-senior-level hires, defined as 2 to 10+ years of experience, signaling a shift toward AI-driven productivity that favors experienced professionals over new graduates. What's happening in computing today echoes what's already transformed medicine, journalism, and finance over the past decade. The easy on-ramps are disappearing.

So where does that leave us? How can we remain competitive in a world where AI can code? Studies show that AI can already code better than many humans, which is having a deep impact on how we teach, learn, and practice computer science. The evolution is visible: from early human programmers at mainframes, to teams collaborating with intelligent systems, to a new era where humans guide AI in reasoning and design. It's a computer science revolution with far-reaching implications for every field that depends on logic, creativity, and problem solving. Here's one way to visualize the disruptions that are occurring. Imagine the talent pipeline for software. At one end are the dabblers and software makers: people experimenting with vibe coding, building prototypes, and using AI tools to create quickly but informally. In the middle lies the largest group: junior developers, recent grads, and early-career coders forming the bulk of today's computing workforce.
And on the far end are the senior developers, seasoned engineers who know how to architect, scale, and debug complex systems. Due to generative AI, the middle of the software talent curve is hollowing out. The traditional pipeline that once flowed from junior to senior developer is breaking down as AI systems now perform much of the routine coding once assigned to entry-level engineers. What remains are the two growing edges: creators and dabblers who use generative AI to craft software without conventional programming expertise, and senior developers who can design, orchestrate, and oversee increasingly automated and intelligent systems. This trend underscores the challenge and the opportunity for us at William & Mary and other universities conferring degrees in computer science. In particular, how do we help our students accelerate their journey from junior to senior developers faster? How do we empower non-experts, majors in economics, biology, government, or philosophy who minor in data science or AI, to build production-quality software-reliant systems, not just dabble? And most importantly, how do we devise a world where humans and AI collaborate seamlessly, where creativity and competence, not just credentials, define capability and competitiveness?

The real question is this: how do we upskill our workforce and ourselves fast enough to stay competitive in the face of seismic technological shifts? The recent government shutdown was a wake-up call for the Washington, DC region. But the truth is, the pressure started well before that. Across agencies and contractors, hiring freezes and attrition have left teams with fewer people and the same or greater workloads. When that happens, there's only one move left: turn to automation and AI, not as a shiny new toy, but as a survival tool. The question is no longer if people will adopt AI. It's whether they can master it fast enough to stay productive, relevant, and resilient when the margin for error keeps shrinking and there are fewer hands on deck. While it's tempting to charge uncritically into the AI revolution, this moment demands reflection as much as acceleration. The tide of technological change is rising fast, and our human systems, ethics, policy, and education, are scrambling to build defenses sturdy enough to withstand it. We're not facing a clean, frictionless transition. We're shoring up the levees of civilization in real time, trying to ensure that progress doesn't wash away the very values that make it meaningful.

We'll wrap up this part of my presentation by discussing ways of preparing students for career success in a rapidly changing work environment that's being disrupted by generative AI. At William & Mary's School of Computing, Data Sciences, and Physics, we're devising strategies to intentionally future-proof our students. Our goal is to ensure graduates don't just enter the workforce as junior developers or apprentices, but as adaptable, creative, AI-literate professionals ready for whatever comes next. We're tackling that challenge through five key strategy pillars. First, we're championing interdisciplinary learning and degrees. A student might double major in economics and computer science, pair data science and business, or major in psychology with an AI minor. The goal is to produce graduates who don't see the world through a single disciplinary lens, but who can synthesize data and connect ideas across fields. Second, we're expanding bachelor's-to-master's accelerated pathways.
Many of our students come to William & Mary with many AP credits. We want to help them finish a 4-year degree with a master's already in hand, so they graduate not as entry-level talent, but as emerging leaders. Third, we're expanding internships and co-ops that connect students directly to the intersecting worlds of government, industry, and research. Whether they're working with federal agencies, defense partners, or innovation labs, these experiences place students at the heart of real-world collaboration, where policy meets practice and ideas become impact. Graduates who understand how teams operate across these sectors don't just enter the workforce. They strengthen the bridge between discovery, application, and public good. Fourth, we're embedding undergraduate research at the heart of the William & Mary experience. We're already known for exceptional undergraduate teaching, but in fast-moving fields like computing, data science, and AI, you can't teach effectively if you don't operate at the cutting edge. Our talented teachers are also active researchers, because research keeps our teaching alive, relevant, and future-focused. And finally, we're helping students learn how to communicate digitally. Increasingly, job interviews happen over Zoom-like platforms such as Big Interview, where candidates speak to an AI interviewer. That requires confidence, presence, and adaptability, skills most students haven't had to develop unless they aspire to a career as a YouTuber. So we're using AI itself to help them practice, receive feedback, and master the art of presenting themselves professionally on camera. All five pillars serve a single mission: to help our students move faster up the learning curve, thrive in a hybrid human-AI workforce, and stand out not just for what they know, but for how they think, adapt, and communicate.

Generative AI isn't all rainbows and unicorns. Applying it comes with real risks and challenges that demand thoughtful use. One of the most common issues with AI is the broader spectrum of hallucinations, which includes factual inaccuracies, nonsensical output, source conflation, and even overindulgence in generating excessive or sensitive content. Early large language models struggled with these problems, and they still arise when users misunderstand AI model limits or fail to apply best practices for model versioning, prompt engineering, and context. In particular, most hallucinations today stem from using outdated models, vague prompts, or failing to provide sufficient context. Accuracy improves when four gears mesh: using the right large language model version, applying good prompt engineering practices and patterns, providing adequate context, and thinking critically about the results AI gives back to you. Relying on free or obsolete models drags results down, whereas combining up-to-date models with effective prompts and close scrutiny pushes the needle to high accuracy and better outcomes.

I've become known as the resident AI enthusiast at William & Mary, which means my faculty colleagues enjoy sending me examples of where AI failed. They'll say something like, "I tried using AI to generate some complicated formulas and it didn't work." My first question is always, "Which AI model and what prompts did you use?" And the answer almost every time is something like, "Oh, the free version of ChatGPT. I just asked it a vague question." That's when I sigh, smile, and say, "Send me your prompts and your attachments."
I feed the same request into the Plus version of ChatGPT using a bit of structure, maybe a prompt pattern like persona or context, and out comes a flawless result. Then I send it back to them with a gentle nudge: please spend the $20 a month and upgrade to a paid AI model. As with so many aspects of computing, it's garbage in, garbage out. And the free or obsolete models and vague prompting just don't cut it anymore. The real magic happens when you don't just give AI a prompt, but engineer the context around it: the text you've already written, other relevant documents, the chain of reasoning, and the structure of the task. Then the system doesn't simply hallucinate or plagiarize. It collaborates with you. It builds on your foundation, understands your scenario, uses your prior work, and delivers something aligned with your intent. Combining prompt engineering with context engineering turns AI models into partners in reasoning rather than black-box responders. These are not hard skills to master. However, metacognitive skills, knowing when to trust and when to question what AI models tell you, are far more vital than technical fluency alone, because they ensure that AI output never outweighs human judgment and oversight. It's the same mental habit we should have begun honing back in the 1990s, when Abraham Lincoln supposedly warned us, "Don't believe everything you read on the internet." While this is clearly a joke, it's also a timeless reminder of why discernment still matters. Today, computational thinking is easier than ever to access. But without critical thinking, analytical reasoning, and intellectual rigor, we risk a slow digital brain rot. So use AI boldly, but be prepared to think even harder.

And that's where the real question lies. You've probably seen the MIT study showing that using AI activates less of your brain than doing the same task unaided. Intriguing, right? But let's pause on that. Doing long division by hand fires a lot of neurons, too. But does that make it better than using a calculator? Of course not. The calculator frees your brain to solve bigger problems. Maybe the point isn't how much of your brain lights up, but what it's lighting up for. We should be studying when reduced cognitive effort means laziness and when it means evolution. Because mastering labor-saving tools has always been humanity's secret move: offloading the mechanical so we can focus on the meaningful.

Let's now turn our attention to various ethical and sustainability issues that are associated with generative AI. We can't talk about AI without discussing thorny issues such as authorship, copyright, misuse, cheating, bias, environmental concerns, and so on. Educators feel the cheating concerns most acutely, so they must be addressed head-on. However, the answer shouldn't be reverting back to blue books or giving exams in Faraday cages that block all Wi-Fi and cellular signals. Instead, we must redesign assessments for the world our students actually inhabit, in which they are allowed to use AI. We must also require accountability to separate genuine understanding from mindless copying and pasting of AI-generated essays or programs. Students can be required to defend their submitted work via viva voce-style oral quizzes tied to each submission. Teachers can also intentionally underspecify their assignments, so students must prompt AI tools, iterate on the outputs, and show their work.
Likewise, teachers can assess student reasoning processes, not just their results, by layering in artifacts such as prompt logs, version histories, and rationale memos. Introductory computing courses could operate within controlled, supportive digital greenhouses, which are restricted environments designed to nurture foundational programming and problem-solving skills. These protective settings help novice students grow their competence and confidence before venturing into the open internet and the powerful yet often distracting tools of generative AI. Beyond that, however, AI should be treated as a tool to master, question, and audit, emphasizing collaboration, reflection, and problem solving. Academic integrity isn't about rejecting new technologies and tools. It's about how we use them to turn AI-powered problem solving into a curiosity engine driven by transparency and proof of learning. Another issue that needs to be addressed head-on is energy and water consumption, which is especially problematic when training generative AI models. The good news is that people are finally waking up and working to resolve these problems. Data centers, the beating hearts of our digital world, are getting a long-overdue makeover. Some are being built with wood instead of concrete to cut carbon emissions. Others are experimenting with air cooling instead of water cooling. And deep in the labs, engineers are reimagining both the brains and the bodies of AI, building energy-efficient neuromorphic chips and designing smarter software that delivers more performance with a fraction of the power. While AI's appetite for energy and water isn't solved yet, the focus on sustainability is starting to pay off. Clean computing isn't just the right thing to do. It's becoming a business model. The next trillion-dollar idea out of Silicon Valley might not be a new app or algorithm. It might be the company that figures out how to make AI greener. But we also need to keep things in perspective. It's easy to point at AI and say, "Look how much power it uses," while ignoring everything else we do that quietly drains the grid. Every Zoom meeting, every Netflix binge, every Fortnite gaming session, and every DoorDash delivery consumes energy and water, too. Likewise, air travel has a huge carbon footprint. The truth is, we're all part of this equation. If we're serious about sustainability, we must start close to home. Fix the leaking pipes. Eat fewer resource-intensive foods. Cut down on hamburgers, since it takes hundreds of gallons of water to raise that cow. Turn off the lights when you leave the room. All those things add up faster than we realize. I'm not dismissing the environmental impact of AI. It's real. But perspective matters. Before we obsess over the speck of carbon in AI's eye, let's deal with the plank of waste in our own. Because if we really want to save the planet, it starts not with blame, but with balance and the courage to see the whole picture. As we close this presentation, let's zoom out and reflect on how generative AI and the growing commoditization of computational thinking are reshaping both higher education and lifelong learning. Interest in computer science has surged and receded in cycles over the past five decades, mirroring society's shifting assessments of the value of computing proficiency. We're now entering a new era where generative AI extends the reach of computing to anyone with curiosity and questions. The new computational literacy is not about coding itself.
Instead, it's the ability to translate human intent into forms that machines can reason about effectively, ethically, and creatively. In this transformation, computational thinking is evolving from a specialized discipline into a universal language of modern problem solving. So, should everyone still learn to code? The answer is probably yes, but not for the same reasons and not in the same way. In the old model, frustrated coders wrestled with obscure syntax and cryptic errors. This model is giving way to a new kind of fluency, where the programmer evolves into a prompt engineer, someone who communicates ideas clearly to intelligent systems by collaborating through language, logic, and creativity. The essence of programming and problem solving remains, but the medium is shifting from curly brackets and semicolons to dialogue and design thinking powered by generative AI. This slide illustrates what I call the pyramid of AI proficiency, a modern echo of Maslow's hierarchy reimagined for the age of intelligent computing. It shows that people across disciplines will engage with AI at different depths depending on their goals and responsibilities. At the foundation lies AI literacy, where everyone's journey should begin. Every student, no matter their field, deserves the chance to understand what AI is and how it works, its capabilities, its blind spots, and its ethical boundaries. This isn't about coding. It's about cultivating awareness, skepticism, and sound judgment. AI-literate learners can tell when these tools help them think better, when they're leaning too heavily on them, and when human reflection must step back in. In essence, AI literacy is not only technical, it's civic, ethical, and deeply human. The middle tier represents AI fluency, the stage where learners and professionals move beyond awareness to mastery within their own domains. Here the goal isn't to write code but to communicate effectively with intelligent systems through prompts, data, and agentic workflows. This is the realm of AI-augmented thinkers, prompt engineers, computational creatives, and researchers who use generative tools to accelerate discovery and creation. A computational chemist, bioengineer, or digital humanist may never touch Python or JavaScript. Yet they must know how to guide large language models, test ideas rapidly, and generate insights responsibly. AI fluency is about speaking the language of intelligence itself, using it to extend human reach, imagination, and capability. At the summit lies AI mastery, the domain of those who create the systems that power our intelligent world. These are the computer scientists, engineers, and developers who design, train, and rigorously test the algorithms and platforms that make generative AI possible. AI masters don't just build models. They shape the foundations of modern technology, from spacecraft control systems and biomedical devices to the digital infrastructure that keeps society running. AI mastery demands deep expertise in coding, systems architecture, and validation. It's not the path for everyone, but it's essential for everyone's future, because without these builders, the rest of the pyramid wouldn't hold together. Just as people can understand where their food comes from without being farmers, they can understand computational thinking without being coders. But we do need a vibrant ecosystem across all three levels.
People who are literate enough to use AI responsibly, fluent enough to apply it to innovate in their workflows and domains, and masterful enough to build and validate the next generation of trustworthy intelligent systems. The future of education, and of work, will hinge on how well we help students, and ourselves, find our place on that pyramid and climb as high as curiosity and purpose will take us. Learning must now move beyond passive use of AI toward active collaboration. Students should learn to craft prompts that shape how AI reasons, interpret its responses critically, and refine those outputs into stronger insights. At the heart of this emerging skill set is computational thinking, not as a niche technical field, but as a universal literacy that empowers learners to reason, problem solve, and co-create intelligently with machines. If we want students to collaborate with AI safely, we must actually teach them how to do so confidently. Ironically, we invest hundreds of hours teaching teenagers how to drive cars safely, but barely minutes showing them how to use AI safely. When my son learned to drive several years ago, he logged 40 hours in the classroom and 100 behind the wheel under close supervision by certified instructors and adults. However, his high school's AI instruction was maybe 5 minutes at best, often from teachers who'd never used these tools, but who warned, "Don't use AI. It's cheating and will rot your brain." Meanwhile, many students already live in an AI-infused world, scrolling, chatting, and consuming content on devices shaped by AI algorithms. But as they experiment with generative tools, they're beginning to move from passive use to active creation and problem solving. The next step is helping them understand AI as a genuine partner, one they can collaborate with thoughtfully and ethically, applying human judgment, creativity, and purpose in an age of human-AI collaboration. So let's now discuss what we can do to prepare for our AI-augmented future. We must start by recognizing that programming itself is crossing a bridge, from painstakingly coding line by line to clearly describing what we want in natural language. Traditional coding can trap us in syntax errors and mental gymnastics, while natural language programming opens a new frontier of collaboration. Now we can say, "Generate a function to parse this CSV file," or "Build a RESTful interface for my to-do list app," and AI handles the mechanics. The syntax fades away, revealing the true core of programming: clear logic, structured thought, and human creativity guiding machines instead of the other way around. We also need to bring AI-augmented development environments into our classrooms. Tools like GitHub Copilot, ChatGPT Codex, Claude Code, and Windsurf are becoming as indispensable to developers today as compilers once were. Tomorrow's programmers won't just write code. They'll co-create it with intelligent partners that autocomplete logic, generate tests, explain functions, and debug on the fly. Even in courses that don't explicitly teach AI, students should learn to work fluently with these tools, because in the modern workplace, hiring managers won't ask if students can use them. They'll just assume it. However, there's a real risk on the other side of this transformation: leaning so much on AI that our own skills start to fade and our minds burn out from overreliance.
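To make that natural-language-programming point concrete: a request like "Generate a function to parse this CSV file" might plausibly yield something along these lines. This is a minimal sketch, assuming a comma-separated file with a header row; the function name and usage are illustrative, not from the talk:

```python
import csv
from pathlib import Path

def parse_csv(path: str) -> list[dict[str, str]]:
    """Parse a CSV file with a header row into a list of row dictionaries."""
    with Path(path).open(newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

# Illustrative usage: each row becomes a dict keyed by the header names.
# rows = parse_csv("todos.csv")
# print(rows[0])
```

The point isn't this particular function. It's that the human specifies the what, and the AI supplies the how.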
Developers who let AI do all their thinking risk becoming passengers instead of drivers, capable of vibe coding yet unable to debug, design, or truly understand what's happening under the hood. Some developers are even going so far as to stop using AI code editors because they feel the magic of AI is eroding their mastery when used without self-discipline. While I personally don't subscribe to this viewpoint, the goal of AI should always be to amplify your talents and skills, not outsource them. We must also recognize and embrace the coming reality that computational thinkers will soon greatly outnumber traditional programmers. The image on this slide captures that transition: the old temple of coding on one side, the new hall of prompt engineering on the other, connected by a bridge of natural language. The next generation won't write software line by line. They'll guide AI through reasoning chains, test-refine loops, and iterative improvement. It's a thrilling and sobering shift. More people than ever will wield computation to solve problems once tackled through bias or intuition, but only if they understand the logic beneath their prompts. Otherwise, that bridge could collapse under its own convenience. To make this transformation sustainable, we must bring the same rigor to prompt engineering that we apply to software engineering. Testing, maintenance, version control, and project management now belong inside the AI workflow. Too often, prompting is treated like casual conversation. But meaningful progress demands structure and discipline. Prompts should be managed like code: tested, refined, versioned, and validated for reliability over time. That's what I call prompt engineering in the large. Significant R&D is still needed to mature this type of prompt engineering, which is also simply known as promptware. While prompt engineering in the small equips individuals with reusable patterns and best practices, scaling to resilient mission-critical AI systems demands more than a toolkit. It calls for a full engineering workbench with drawers labeled testing, version control, configuration management, prompt templates, and prompt linting. The aim of promptware is to help teams harness AI tools across long, evolving project life cycles, through shifts in people, platforms, and priorities, without accumulating mountains of technical debt. Building responsibly with AI requires disciplined workflows for testing, versioning, and auditing, not just quick vibe coding. Generative AI can accelerate progress, but without engineering rigor, it can just as easily accelerate chaos. So, if coding was the defining literacy of the past several decades, then prompt-guided computational thinking is fast becoming the electricity of the future: omnipresent, indispensable, and yes, occasionally shocking. Prometheus would grin knowingly at hearing all this. He'd recognize the spark, but he'd also remind us that every fire needs tending. As we harness this new creative power, we must keep our eyes fixed on trust, safety, and ethics, the moral circuitry that keeps innovation humane. Before we close, if you'd like to explore these ideas further, I've shared many more talks on my YouTube channel, Douglas Schmidt, including how we're future-proofing students at William and Mary, how AI is reshaping higher education, and how software engineering itself is evolving in this new era. Check out the videos in my Generative Augmented Intelligence playlist when you have a chance.
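As one concrete illustration of the "prompts managed like code" idea above, here is a minimal sketch of a versioned prompt template paired with a regression-style test. Every name, version string, and check here is an illustrative assumption rather than an established promptware convention:

```python
# Minimal sketch of "prompts as code": a versioned template plus a
# regression-style test. All names and conventions are illustrative.

PROMPT_VERSION = "summarize-v2"  # bumped whenever the template changes

SUMMARIZE_TEMPLATE = (
    "Summarize the following text in at most {max_words} words. "
    "Preserve all numeric facts exactly.\n\n"
    "Text:\n{text}"
)

def render_summarize_prompt(text: str, max_words: int = 50) -> str:
    """Render the versioned summarization prompt for a given input."""
    return SUMMARIZE_TEMPLATE.format(text=text, max_words=max_words)

def test_template_protects_numeric_facts():
    # Guards against an edit that silently drops the instruction
    # protecting numeric facts; run under pytest or any test runner.
    prompt = render_summarize_prompt("Revenue grew 12% in Q3.")
    assert "numeric facts" in prompt
    assert "12%" in prompt
```

Version control, prompt linting, and richer evaluations would layer on top of this same basic discipline.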
Thanks for inviting me to visit and for listening to my talk. I'm happy to take any questions you may have. [applause]

Video description

In this talk, I explore how generative AI is unbinding computational thinking from traditional coding. For decades, programming was the only way to translate ideas into computational form. Now, prompt engineering and context engineering let anyone think computationally using natural language. I show how this shift democratizes reasoning, turning problem solving into a dialogue between humans and machines. I also connect this shift to education, where students can now model, simulate, and iterate without first learning programming language syntax. Generative AI becomes a cognitive amplifier, extending human abstraction and creativity. My core message is that computational thinking has escaped the code editor. It's now a universal literacy, accessible to every learner and creator who knows how to prompt with purpose.

00:00:00 Introduction
00:00:55 Overview of What I'll Convey in this Talk
00:02:55 Motivation for this Presentation
00:06:56 Deep Research: Overview and Example Application
00:09:11 Overview of Computational Thinking
00:12:59 Analogy Showing the Evolution from the "How" to the "What" in Automotive Technology
00:14:12 The Evolution from the "How" to the "What" in Computational Thinking
00:15:49 Advancing Computational Thinking Via Prompt and Context Engineering
00:19:19 Benefits of Computational Thinking with Prompt Engineering
00:22:13 A Case Study Example of Commoditizing Computational Thinking with ChatGPT
00:25:06 Three Layers of Computational Thinking that are Being Commoditized
00:26:49 The Future of Computational Thinking & CS Education
00:31:19 The Importance of Prompt Patterns for Computational Thinking
00:34:45 How Computational Thinking is Evolving & What is Being Commoditized
00:38:57 Computational Thinking is Now Supported by Agentic AI Platforms
00:40:03 Creating & Using the Video Content Creator Mini-App with Opal
00:43:29 Modern IDEs are Being Integrated with Generative AI Models
00:46:46 Consequences of the Commoditization of Computational Thinking
00:49:57 Remaining Competitive When AI Can Code
00:53:50 Preparing Students for Career Success in a Rapidly Changing World
00:56:51 Navigating Challenges & Risks When Applying Generative AI
01:01:26 Navigating Ethical & Sustainability Issues with Generative AI
01:05:43 The Impact of Generative AI on Higher Education & Lifelong Learning
01:12:36 What We Can Do to Prepare for Our AI-augmented Future

A PDF version of the slides used in this video is available at https://www.cs.wm.edu/~dcschmidt/presos/unbinding-prometheus.pdf. A PDF version of a paper related to this video is available at https://www.cs.wm.edu/~dcschmidt/presos/Commoditization-of-Computational-Thinking-Paper.pdf.
