From Reactive Graphics to the Agent Era: Reflections on the Evolution of Design and Code

Looking back at my path through design and technology, I’m often struck by how much the landscape keeps shifting—and how much there is to learn at every turn. In the early 1990s, my curiosity led me to experiment with what I called reactive graphics: programmed visuals that responded in real time to user actions. I wanted to see if computers could convey a sense of life, with interactions that felt fluid and engaging. Projects like “The Reactive Square,” which moved in response to sound, or “Flying Letters,” where the mouse controlled letterforms like marionettes, were early attempts to show that computers could be expressive—not just utilitarian.
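As a thumbnail of the idea (not the original implementation, which predates today's tools), a reactive graphic is essentially a continuous mapping from an input signal to a visual property, smoothed over time so the motion feels alive rather than jittery. The function and parameter names below are hypothetical, chosen for illustration:

```python
def reactive_size(amplitude: float, previous: float,
                  base: float = 50.0, gain: float = 100.0,
                  smoothing: float = 0.8) -> float:
    """Map a 0..1 input amplitude (e.g., sound level) to a square's size,
    easing toward the target so the square appears to breathe."""
    # Clamp the input, then compute the size the square is "reaching for".
    target = base + gain * max(0.0, min(1.0, amplitude))
    # Exponential smoothing: keep most of the previous size, drift toward target.
    return smoothing * previous + (1.0 - smoothing) * target

if __name__ == "__main__":
    size = 50.0
    for amp in [0.0, 0.5, 1.0, 1.0, 0.2]:  # a toy amplitude stream
        size = reactive_size(amp, size)
        print(round(size, 1))
```

In a real sketch this update would run every animation frame, with `amplitude` fed from a microphone or mouse position; the smoothing constant is what gives the response its felt quality.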

At that time, design itself was in flux. Digital tools like Illustrator and Photoshop expanded what was possible, but also brought new constraints, sometimes limiting creativity to the boundaries set by the software. Many traditional designers were skeptical, seeing these tools as a threat to craft and discipline. I understood their reservations, but I was drawn to the potential of computation—not as a replacement for craft, but as another way to explore and make.

This tension led me to create Design by Numbers, a programming environment and book intended to make computation approachable for people who didn’t see themselves as technologists. My hope was to help others experience programming as a hands-on, visual act—not just abstract code, but something tangible you could see and manipulate. I’ve always felt that creative work benefits from understanding what’s beneath the surface, not just what’s on it.

Fast forward to 2025, and the field is changing again—faster than I ever expected. In my recent Design in Tech Report, I tried to capture a moment where artificial intelligence isn’t just a support tool, but a real partner in creative work. We’re now in what I call the “Agent Era,” where AI agents can complete complex tasks on their own, adapting and improving as they go. The cost and speed of experimentation have dropped dramatically, making it possible to test and iterate on ideas at a pace that would have been unthinkable before.

One of the biggest changes is the move from traditional user interfaces toward what’s now called “agent experience,” or AX. In many cases, users can now reach their goals directly, with AI agents working in the background to make things happen. This changes the designer’s role: instead of crafting every detail manually, we’re setting the intent, flow, and boundaries for AI-driven systems.

I’ve also noticed a trend toward more conversational, less formal coding approaches—often referred to as “vibe coding.” Developers and designers are treating code more like a dialogue, with generative AI tools making software creation more accessible and expressive. The impact and value of this approach are still being debated. It’s not always smooth or predictable, and the field is still learning where and how it’s most effective.

Even with these shifts, I don’t believe AI is replacing designers. If anything, it’s forcing us to focus on what only humans can provide: judgment, empathy, ethics, and the ability to ask the right questions. AI lets us scale and experiment in ways that weren’t possible before, but meaning, care, and resonance still come from human insight and intent.

If there’s a lesson in this journey—from reactive graphics, to Design by Numbers, to today’s agent-driven design—it’s that the tools and platforms will keep evolving, but our job as designers stays much the same. We’re here to guide, question, and shape the work, always trying to bring humanity and purpose to everything we make. —JM