What a Toddler Can Teach Us About AI: Designing with Probabilistic Systems
Introduction
How does a toddler learn to speak? Not through textbooks, grammar lessons, or logic puzzles. A child picks up language through exposure, mimicry, trial and error, and countless interactions filled with ambiguity. From these noisy inputs, they form surprisingly robust internal models of language. This process—intuitive, incremental, and context-sensitive—mirrors how many contemporary AI systems, particularly those based on machine learning, develop their capabilities.
Yet our mental models of AI often default to something more rigid and deterministic: a computer as a rule follower, executing explicit instructions in a controlled environment. This perspective, while not without historical basis, fails to account for the probabilistic, pattern-driven nature of how many AI systems actually operate today. As a result, we risk misjudging their capabilities, misapplying their outputs, or designing around incorrect assumptions.
The shift from deterministic to probabilistic thinking in computational systems has tangible implications for how designers model, prototype, and evaluate intelligent systems in architecture and beyond. Recognizing that models behave according to learned distributions, not fixed rules, changes the nature of authorship, iteration, and control.
Implicit Learning: Lessons from Language Acquisition
Children are not taught the rules of language in a formal sense. Instead, they are immersed in environments rich with spoken language, visual cues, social feedback, and embedded context. From these conditions, they gradually internalize structure, syntax, and nuance without ever being handed a rulebook. This is known as implicit learning: the acquisition of complex knowledge structures without conscious awareness.
Many modern AI systems learn in a comparable way. A language model, for example, doesn't "know" grammar in the way a linguist does. It forms internal statistical associations between words and contexts, gradually developing the ability to produce language that seems fluent and coherent. It does this by constructing high-dimensional vector representations of tokens and optimizing millions (or billions) of parameters through processes like stochastic gradient descent. The model's objective is to approximate the conditional probability distribution of the next token given a preceding context window.
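The idea of statistical association can be made concrete with a toy sketch. The snippet below is not how a transformer works; it is a deliberately minimal bigram model that estimates the conditional probability of the next word from raw counts in a tiny, made-up corpus, purely to illustrate "pattern recognition from exposure":

```python
from collections import Counter, defaultdict

# Toy corpus: the "exposure" the model learns from.
corpus = "the cat sat on the mat the cat saw the dog".split()

# Count next-word occurrences for each preceding word (a bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(prev):
    """Approximate P(next | prev) from observed frequencies."""
    c = counts[prev]
    total = sum(c.values())
    return {word: n / total for word, n in c.items()}

# "the" was followed by "cat" twice, "mat" once, and "dog" once,
# so the model assigns "cat" the highest conditional probability.
print(next_token_distribution("the"))
```

No grammar rule was ever stated; the distribution simply reflects the regularities of the input, which is the essence of the learning objective described above.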
This is not symbolic reasoning. It is pattern recognition at scale. And while the analogy with human learning is imperfect, it underscores a foundational shift: intelligence in these systems is not rule-based but emergent.
Deterministic vs. Probabilistic Thinking
Traditional software systems are deterministic. Given the same inputs, they produce the same outputs. They are designed through explicit logic: if this, then that. Such systems rely on formal rules, symbolic manipulation, and state machines—mechanisms well-understood in automata theory and classical computation.
AI systems built on statistical learning, by contrast, are probabilistic. They model uncertainty and operate through likelihood estimation. When prompted, a generative model does not retrieve a stored answer. It synthesizes a response by sampling from a learned probability distribution over possible continuations. The behavior of these systems is governed by parameters learned during training in high-dimensional spaces, not by explicit logic trees.
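The contrast can be sketched in a few lines. In this illustrative example, the tokens, scores, and "temperature" parameter are invented for demonstration; the point is only the difference between always returning the single best-scored option (deterministic) and drawing from a distribution (probabilistic):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Lower temperature makes the distribution more peaked."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, logits, temperature=1.0, rng=random):
    """Probabilistic: draw a token in proportion to its likelihood."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

tokens = ["roof", "wall", "window"]   # hypothetical vocabulary
logits = [2.0, 1.0, 0.5]              # hypothetical learned scores

# A deterministic lookup would always pick the argmax:
print(max(zip(logits, tokens))[1])    # always "roof"

# Sampling varies from run to run; higher temperature, more variety:
print([sample(tokens, logits) for _ in range(5)])
```

Running the sampling line twice will generally produce different sequences, which is exactly the run-to-run variation the next paragraph describes.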
This introduces a new kind of opacity, often described as the "black box" problem. Outputs may vary across runs. Errors may be subtle, emergent, and context-dependent. It also introduces a different kind of power: the ability to generalize, interpolate between examples, and produce novel combinations. Designers must account for both the unpredictability and the potential.
Why This Matters: Misunderstandings About AI
When we treat AI as deterministic, we risk two major pitfalls. First, we may expect consistency or correctness where there is none. A user might assume that an AI-generated answer is factual because it sounds confident, overlooking its probabilistic roots. Second, we may fail to design interfaces, prompts, or workflows that accommodate uncertainty.
For designers, this means engaging with AI not as a precise tool but as a stochastic system. It means understanding key concepts such as overfitting (when a model memorizes training data rather than generalizing), hallucination (when the model produces confident but incorrect information), and training distribution bias (which governs where the model is most reliable).
It also means grappling with edge cases and out-of-distribution behavior—situations where the input deviates from anything seen during training, and the model's reliability can break down.
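Overfitting and out-of-distribution failure can both be seen in a deliberately simple curve-fitting sketch. The data below is fabricated for illustration: a high-degree polynomial memorizes six noisy samples of a linear trend perfectly, yet collapses on an input beyond its training range, while a simple linear fit generalizes:

```python
import numpy as np

# Six noisy samples of an underlying linear trend, y = 2x.
x_train = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
noise = np.array([0.05, -0.04, 0.06, -0.05, 0.03, -0.02])
y_train = 2 * x_train + noise

# A degree-5 polynomial passes through every training point
# (memorization); a degree-1 fit captures the trend (generalization).
memorizer = np.polyfit(x_train, y_train, deg=5)
generalizer = np.polyfit(x_train, y_train, deg=1)

# Out-of-distribution input: beyond anything seen during training.
x_test, y_true = 1.5, 2 * 1.5
print("memorizer fits training exactly:",
      np.allclose(np.polyval(memorizer, x_train), y_train))
print("memorizer error off-distribution: ",
      abs(np.polyval(memorizer, x_test) - y_true))
print("generalizer error off-distribution:",
      abs(np.polyval(generalizer, x_test) - y_true))
```

The memorizer's training error is essentially zero while its out-of-range error is enormous; reliability is concentrated where the training data lives.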
AI as a Design Material
One productive shift is to treat AI not just as a tool, but as a material. Like clay or code, AI has properties that can be felt, explored, and worked with. It resists certain shapes, affords others, and behaves differently depending on context.
Understanding AI as a design material emphasizes fluency over control. It invites experimentation, play, and iterative refinement. Designers learn to "read" the material—to sense when a model is interpolating (working within familiar territory), when it's extrapolating (venturing beyond its training data), and when it's outside the manifold of its training distribution entirely. This sensibility is as critical as any technical skill.
Engaging with AI Mindfully
This approach requires both humility and curiosity. Humility, to recognize that these systems are not infallible or fully knowable—even to their creators. Curiosity, to explore what they can do—and what they reveal about our own assumptions, biases, and patterns.
Engaging with AI mindfully also means thinking critically about its role: not just what it can generate, but what it leaves out. Not just how it reflects the world, but how it reshapes it. These are design questions as much as technical ones. And they benefit from systems thinking: how does this model interact with other systems, stakeholders, and signals? What are its feedback loops? Where might error propagation occur—small misjudgments cascading into larger failures?
Conclusion
The toddler analogy is not a perfect map, but it is a useful compass. It reminds us that intelligence can be emergent, not engineered; that learning can be implicit, not instructed; and that working with AI means embracing a world of probabilities, not certainties.
As designers and thinkers, we are not just building with AI. We are building ways of thinking about intelligence, creativity, and collaboration. The more we engage with these systems critically, systematically, and imaginatively, the more meaningful our work with them becomes.
Reflections on My Architectural Journey and Why I Chose Design Computation
As someone trained in architecture with a specialization in design computation, I’ve often found myself caught between admiration and skepticism toward my field. Architecture, in its broadest sense, spans the hierarchy of human knowledge—rooted in physics and chemistry, informed by biology and psychology, and shaped by social and cultural forces. Yet it ultimately resides within the humanities. And this layered position has become a source of both richness and unease in my ongoing reflection on the discipline.
The Longing for Objectivity
In my quieter moments, I’ve wondered whether I should have studied something more "objective"—like physics, chemistry, or biology. These sciences, grounded in testable hypotheses and falsifiable claims, offer a kind of intellectual certainty that architecture seldom affords. You can calculate the speed of light, measure the yield strength of steel, trace the pathways of a neural circuit. In architecture, however, truth is murkier. A design is successful—or not—based on shifting criteria: aesthetics, functionality, cultural relevance, emotion.
In this light, my discomfort begins to make sense. While architecture relies on scientific and technical knowledge, its outputs are filtered through layers of abstraction, symbolism, and human subjectivity. It’s an enterprise shaped as much by politics, taste, and economics as by gravity or thermodynamics.
Architecture as Emergent Knowledge
I’ve come to think of architecture as a kind of emergent knowledge—an upper layer built on the scaffolding of more fundamental sciences. If physics is the bedrock, chemistry the bonding, and biology the self-organizing form, then architecture is where these systems are shaped into human experience. It is a cultural act as much as a structural one. And that duality, while difficult, is precisely what makes it potent.
But for someone drawn to systems, logic, and models, this ambiguity can feel unsatisfying. That’s why I gravitated toward design computation.
Why I Chose Design Computation
Unlike the term “computational design,” which sometimes risks being reduced to an aesthetic movement or formal style, design computation refers to a deeper engagement with systems thinking, algorithmic logic, and the generative potential of code. For me, it offered a way to reengage with architecture through the analytical clarity I found so compelling in the sciences.
Through computation, I could simulate structural behaviors, model environmental conditions, and explore emergent geometries with rigor. I could write algorithms that didn’t just represent a design, but generated it. It transformed my role from form-giver to rule-maker—someone who sets up the conditions for complexity to emerge.
In this space, architecture becomes testable again. Not in the strict empirical sense of physics, but through iteration, performance, feedback, and optimization. I could ask: What if this system adapts? What if this pattern learns? And then, I could watch it unfold.
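A feedback loop of that kind can be caricatured in a few lines. Everything here is hypothetical: the `simulate` function stands in for a real daylight or structural analysis, and the window-to-wall ratio is an invented parameter; the sketch only shows the iterate-measure-adjust cycle:

```python
def simulate(window_ratio):
    """Hypothetical performance model: the score rises with glazing
    area but is penalized for heat gain beyond a ratio of ~0.45."""
    return window_ratio * 2.0 - max(0.0, window_ratio - 0.45) ** 2 * 10

def optimize(target=0.8, ratio=0.1, step=0.01, max_iters=200):
    """Hill-climb: nudge the design parameter until the simulated
    performance is close enough to the target."""
    for _ in range(max_iters):
        score = simulate(ratio)
        if abs(score - target) < 0.01:
            break
        ratio += step if score < target else -step
    return ratio, simulate(ratio)

ratio, score = optimize()
print(f"window ratio {ratio:.2f} -> score {score:.2f}")
```

The designer authors the rules and the target, not the final number; the result emerges from the loop, which is the shift in authorship described above.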
The Beauty of the In-Between
What I’ve come to appreciate is that my dissatisfaction was never a weakness of architecture itself, but a misalignment of expectations. Architecture isn’t a hard science, and it doesn’t need to be. But when combined with computational thinking, it becomes a field uniquely equipped to grapple with the complexities of a dynamic, data-rich world.
In the end, I no longer see design computation as a departure from architecture, but as a return to its original promise: to shape space through reason and imagination, to navigate the known and the unknowable.
So if you, like me, have ever felt uneasy about the ambiguity that surrounds architecture, know that it’s okay to look downward—toward physics, chemistry, and biology—not to step away from architecture, but to engage with it more deeply. And through computation, perhaps we can build new bridges between matter and experience, form and feedback, the measurable and the emergent.
Originally written as a personal reflection, now shared to invite discussion on the evolving role of computation in design.