Both Sides of the Blade: How AI and People Shape Craft

A large language model (LLM) is more than just a tool for churning out text—it’s a computational engine that learns from oceans of language data, turning words into vectors in a high-dimensional “latent space.”** In this space, language becomes numbers, and meaning can be measured, combined, and transformed. LLMs now power everything from chatbots to generative art to code, opening up new forms of creative and collaborative work.
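
To make “meaning becomes numbers” concrete, here is a minimal sketch of measuring semantic similarity in latent space. It assumes the open-source sentence-transformers library and its all-MiniLM-L6-v2 model, both illustrative choices rather than anything this essay prescribes:

    # A toy sketch of language-as-vectors; library and model are illustrative choices.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedding model

    sentences = [
        "The chef seasoned the soup.",
        "A cook added salt to the broth.",
        "Stock prices fell sharply today.",
    ]
    vectors = model.encode(sentences)  # one high-dimensional vector per sentence

    def cosine(a, b):
        # Cosine similarity: closer to 1.0 means closer in meaning.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(vectors[0], vectors[1]))  # high: two phrasings of one idea
    print(cosine(vectors[0], vectors[2]))  # low: unrelated topics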

But what makes these models so effective is not just the raw power of computation. It’s how they navigate the interplay between cognition and context—a dynamic captured by Herbert Simon’s famous “scissors” metaphor. As Simon described it, intelligence is not a single blade: like scissors, it cuts only when two blades meet. One blade is the structure of the environment (context); the other is the computational capability of the agent (cognition). Modern AI is built where these two blades meet. LLMs embody this: they have the cognitive blade (their neural network, their capacity to predict the next word), but they rely just as much on vast context (user prompts, embeddings, API access) to produce genuinely useful, adaptive responses.
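
As a sketch of the two blades in code, the snippet below pairs a completion model (the cognitive blade) with a context-bearing prompt (the environment blade). It assumes the official openai Python SDK and an API key in the environment; the model name is an illustrative assumption, not something the essay specifies:

    # The model supplies cognition (next-token prediction); the prompt supplies context.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    context = "Retrieved note: Simon's scissors pairs an agent's cognition with its task environment."
    question = "Why does an LLM need both a trained model and a well-set context?"

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat-capable model works
        messages=[
            {"role": "system", "content": "Answer using this context:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content)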

This interplay was foreseen decades ago. In the late 1960s, Nicholas Negroponte, who would go on to found the MIT Media Lab, predicted a future in which computers would not just respond to our commands but learn to anticipate our individual conversational quirks and adapt to us in real time. He envisioned machines building predictive models of each user, forging digital experiences in rhythm with each person’s unique style. In today’s LLMs—capable of context-aware, personalized interaction—we see his vision realized. AI systems can now shape their output in conversation with us, learning from our feedback and preferences.

A key part of this new AI paradigm is the embedding model. Where completion models handle cognition—figuring out “what comes next”—embedding models provide the context. They convert words, documents, or even images into vectors in latent space, enabling AI to judge semantic similarity, search through knowledge, or cluster ideas. These embeddings let LLMs retrieve relevant context from vast datasets, mix and match information, and make connections that once required human intuition.
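
A rough sketch of that retrieval step, under the same illustrative assumptions as the earlier example: embed a small corpus once, embed the query, and hand the nearest passage to a completion model as its context.

    # Minimal retrieval sketch: nearest-neighbor search over embeddings.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    documents = [
        "Simon's scissors: cognition and environment cut together.",
        "Embeddings map words and documents into vectors in latent space.",
        "Negroponte imagined machines that adapt to each person's style.",
    ]
    doc_vectors = model.encode(documents, normalize_embeddings=True)

    query = "How does an AI system find relevant context?"
    query_vector = model.encode([query], normalize_embeddings=True)[0]

    # With unit-length vectors, a dot product is cosine similarity.
    scores = doc_vectors @ query_vector
    best = int(np.argmax(scores))
    print(documents[best])  # the passage handed to the completion model as context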

Put together, these advances form a new computational paradigm for creativity and design. Designers no longer just direct software; they collaborate with systems that “understand” their goals and style. This raises the bar for human creativity, pushing us not only to master new tools but to engage in deeper dialogue with our machines.

Yet with this power comes responsibility: bias, fairness, and privacy are ever-present concerns. The two blades Simon described—agent and environment—remind us that how we set the environment, and how we design the prompts and datasets, deeply shape the outcomes. Responsible design isn’t an add-on; it’s foundational.

As AI systems continue to evolve, the most successful designers and technologists will be those who thrive in this new hybrid space—wielding both blades of the scissors, combining deep computational literacy with sensitivity to context, ethics, and the soul of the work itself. —JM


** “Latent space” here is shorthand for the high-dimensional vector spaces where LLMs represent language as numbers. Technically, specialists might call this “embedding space”—but the idea is the same: meaning gets mapped into math inside the model.