Dad was about my age when he said that reading the manual was better than hypothesis-driven button pressing. For teenage me, that took too long. Sure, I may have crashed a computer or two, but following my gut got me there. Of course, my gut isn’t that smart. In the decades preceding, devices had converged on a common pattern language of buttons. Once learned, the standard grammar of action would reliably deliver me to my destination.
Image of a nebula taken by the Hubble Telescope.
In programming I was similarly aided by the shared patterns across MATLAB, Python, R, Java, Julia, and even HTML. In the end, however, Dad was right. Reading documentation is the way. Besides showing correct usage, manuals create a new understanding of my problems. I am able to play with tech thanks to the people who took the effort and the care to create good documentation. This is not limited to code and AI. During the startup years, great handbooks clarified accounting, fundraising, and regulations, areas foreign to me.
I love good documentation and I write documentation. Writing good documentation is hard. It is an exercise in deep empathy with my user. Reaching into the future to give them all they need is part of creating good technology. Often the future user is me and I like it when past me is nice to now me. If an expert Socratic interlocutor is like weight training, documentation is a kindly spirit ancestor parting the mist.
Maybe it’s something about being this age, but now I try to impart good documentation practices to my teams. I also do not discourage pressing buttons to see what happens. Inefficient, but discovery is a fun way to spike interest.
Meanwhile, I’m reading a more basic kind of documentation: writing English. Having resolved to write more, I’m discovering that words are buttons. Poking them gets me where I want to go, but not always. Despite my writerly ambitions, the basics are lacking. This became apparent recently when I picked up the book Artful Sentences by Virginia Tufte*. It’s two hundred and seventy pages of wonderful sentences dissected to show their mechanics. I was lost by page 5. The book is, temporarily, in my anti-library.
So, I’m going back to the basics: Strunk and White, and William Zinsser. I’m hoping that Writing to Learn (finished) and On Writing Well (in progress) provide enough context about the reasons to write to make the most of S&W for the how. Then, somewhere down the road, I can savor Tufte.
* Those dastardly Tuftes are always making me learn some kind of grammar.
On the recent season of the show Clarkson’s Farm, J.C. goes to great lengths to buy the right pub. As any sensible buyer would, the team does a thorough teardown followed by a big build-up before the place is open for business. They survey how the place is built, located, and accessed. In their refresh they ensure that each part of the pub is built with purpose. Even the tractor on the ceiling. The art is in answering the question: How was this place put together?
A data scientist should be equally fussy. Until we trace how every number was collected, corrected, and cleaned—who measured it, what tool warped it, what assumptions skewed it—we can’t trust the next step in our business to flourish.
Old sound (1925) painting in high resolution by Paul Klee. Original from the Kunstmuseum Basel Museum. Digitally enhanced by rawpixel.
Two load-bearing pillars
While there are many flavors of data science, I’m concerned here with the analysis done in scientific spheres and startups. In this world, the structure is held up by two pillars:
How we measure — the trip from reality to raw numbers. Feature extraction.
How we compare — the rules that let those numbers answer a question. Statistics and causality.
Both of these relate to having a deep understanding of the data generation process, each from a different angle. A crack in either pillar and whatever sits on top crumbles. Plots, significance tests, AI predictions: they mean nothing.
How we measure
A misaligned microscope is the digital equivalent of crooked lumber. No amount of massage can birth a photon that never hit the sensor. In fluorescence imaging, the point-spread function tells you how a pinpoint of light smears across neighboring pixels; noise reminds you that light arrives, and is recorded, with at least some randomness. Misjudge either and the cell you call “twice as bright” may be a mirage.
In this data generation process, the nuances of the instrument control what you see. Understanding them lets us judge which kinds of post-processing are right and which may destroy or invent data. For simpler analyses, the post-processing can stop at cleaner raw data. For developing AI models, the process extends to labeling and analyzing data distributions. Andrew Ng’s data-centric AI approach insists that tightening labels, fixing sensor drift, and writing clear provenance notes often beat fancier models.
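To make the measurement pillar concrete, here is a toy sketch in Python (assuming NumPy and SciPy; the Gaussian point-spread width and photon counts are invented) of how a “true” pair of spots gets blurred and noised before any analysis sees it:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# "Reality": two point sources, one truly twice as bright as the other.
truth = np.zeros((64, 64))
truth[20, 20] = 200  # photons
truth[40, 40] = 100

# The instrument: a Gaussian point-spread function smears each point
# across neighboring pixels (sigma here is a made-up optics parameter).
blurred = gaussian_filter(truth, sigma=2.0)

# Photon shot noise: each pixel records a Poisson draw around the blurred mean.
measured = rng.poisson(blurred)

# Naive brightness comparison on the measured image.
spot_a = measured[15:26, 15:26].sum()
spot_b = measured[35:46, 35:46].sum()
print(f"measured ratio: {spot_a / spot_b:.2f}  (true ratio: 2.00)")
```

Re-run it with different seeds, or with the wrong sigma, and the “twice as bright” claim starts to wobble. Deciding what post-processing that wobble can and cannot tolerate is the judgement call.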
How we compare
Now suppose Clarkson were to test a new fertilizer, fresh goat pellets, only on sunny plots. Any bumper harvest that follows says more about sunshine than about the pellets. Sound comparisons begin long before the data arrive. A deep understanding of the science behind the experiment is critical before conducting any statistics. The wrong randomization, the wrong controls, and a lurking confounder eat away at the foundation of any statistics.
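A quick simulation (hypothetical numbers, NumPy only) shows how giving the pellets only to sunny plots manufactures an “effect” even when the fertilizer does nothing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Sunshine is the real driver of yield; the fertilizer has zero true effect.
sunny = rng.random(n) < 0.5
yield_ = 10 + 5 * sunny + rng.normal(0, 1, n)

# Confounded design: pellets go only to sunny plots.
fertilized = sunny
print("confounded 'effect':",
      yield_[fertilized].mean() - yield_[~fertilized].mean())  # roughly 5, all sunshine

# Randomized design: pellets assigned by coin flip, independent of sunshine.
fertilized = rng.random(n) < 0.5
print("randomized effect:",
      yield_[fertilized].mean() - yield_[~fertilized].mean())  # roughly 0
```

The yield numbers are the same in both runs; only the assignment mechanism differs, and that mechanism never shows up in the data file.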
This information is not in the data. Only understanding how the experiment was designed and which events preclude others enables us to build a model of the world of the experiment. Taking this lightly carries large risks for startups with limited budgets and smaller experiments. A false positive leads to wasted resources, while a false negative carries an opportunity cost.
The stakes climb quickly. Early in the COVID-19 pandemic, some regions bragged of lower death rates. Age, testing access, and hospital load varied wildly, yet headlines crowned local policies as miracle cures. When later studies re-leveled the footing, the miracles vanished.
Why the pillars get skipped
Speed, habit, and misplaced trust. Leo Breiman warned in 2001 that many analysts chase algorithmic accuracy and skip the question of how the data were generated, what he called the “two cultures.” Today’s tooling tempts us even more: auto-charts, one-click models, pretrained everything. They save time—until they cost us the answer.
The other issue is the lack of a culture that communicates and shares a common language. Only in academic training is it possible to train a single person to understand the science, the instrumentation, and the statistics well enough that their research may be taken seriously. Even then we prefer peer review. There is no such scope in startups. Tasks and expertise must be split. It falls to the data scientist to ensure clarity and to collect information horizontally. It is the job of leadership to enable this, or to accept dumb risks.
Opening day
Clarkson’s pub opening was a monumental task with a thousand details tracked and tackled by an army of experts. Follow the journey from phenomenon to file, guard the twin pillars of measure and compare, and reinforce them with careful curation and an open culture. Do that, and your analysis leaves room for the most important thing: inquiry.
I had the strangest conversation with my son today. There used to be a time when computers never made a mistake. It was always the user who was in error. The computer did exactly what you asked it to do; if something went wrong, it was you, the user, who didn’t know what you wanted. After decades of that being etched in, today I found myself telling him that computers make mistakes, that you have to check whether the computer has done the right thing, and that this is actually okay. A computer that hallucinates also provides a surface for exploration and for seeking answers to questions.
Boys Wading (1873) by Winslow Homer. Original public domain image from National Gallery of Art
In her book Open Socrates, Agnes Callard draws our attention to the differences between problems and questions. I’ll get to those in a bit, but the fundamental realization I had was that until recently all we could use computers (CPUs, spreadsheets, the internet) for was solving problems. This started all the way back with Alan Turing when he designed the Turing test. He turned the question of what it is to think into the problem of how to detect thought. As Callard mentions, LLMs smash the Turing test, but we still can’t quite accept the result as proof of thinking. What is thinking then? What are problems? What are questions? How do we answer questions?
Problems are barriers that stand in your way when you are trying to do something. You want to train a deep learning model to write poetry; getting training data is a problem. You want something soothing for lunch; getting the recipe for congee is the problem. The critical point here is that as soon as you have the solution, the data, the recipe, the problem disappears. This is the role of technology.
When we work with computers to solve problems, we are essentially handing off the task to the computer without caring whether it wants to, or even can want to, write poetry or have a nice lunch. So we ask the LLM to write code; we command Google to give us a congee recipe. Problems don’t need a shared purpose, only methods to solve them to our satisfaction. Being perpetually dissatisfied with existing answers is the stance of science.
Science and technology are thus tools to move towards dealing with questions. Unlike problems, which dissolve when you solve them, questions give you a new understanding of the world. The thing with asking questions is that there is no established way, at least in your current state, to answer them. Thus asking a question is the first step of starting a quest. In science, the quest is a better understanding of something, and you use technology along the way to dissolve the problems that stand in your way.
AI lets us explore questions with, rather than merely through, computers. Granted, the most common use of AI is still to solve problems, but LLMs, with their ability to hold a back-and-forth chat in natural language, do provide the affordance to ask questions. Especially the kind that seem to come pre-answered, because we are operating from a posture where not having an answer would dissolve the posture altogether.
The Socratic Co-pilot
As a scientist, the question “what is it to be a good scientist?” comes pre-answered for me. Until I am asked this question I have not really thought about it, yet I rush to provide answers: scientists conduct experiments carefully, they know how to use statistics, they publish papers, and so on. However, this still does not answer what it is to be a good scientist. Playing this out with an AI, I assert “rigorous statistics,” the AI counters with an anecdote about John Snow’s cholera map, and I’m forced to pivot. None of these answers the root question by itself, but the exchange generates problems that can be answered or agreed on. This is knowledge.
Knowledge draws boundaries, or as I have explored earlier, creates places around the space that you wish to explore. In the space of “being a good scientist,” we can agree that the use of the scientific method is an important factor. Depending on who you are, this could be the end of the quest.
Even if no methodology exists for a given problem, simply approaching it with an inquisitive posture creates a method, however crude. In his essay What Is It Like to Be a Bat?, Thomas Nagel tackles an impossible-to-solve problem, but a great question, through a thought experiment. If I were to undertake this, I might try clicking in a dark room, or hanging upside down. Okay, maybe not the last bit, but only maybe. Even this crude approach has now put me in the zone to answer the problem. Importantly, my flapping about has created surface area where others can criticize, as Nagel was criticized. Perhaps future brain-computer-interface chips will actually enable us to be a bat. Lacking such technology, however, this is better than nothing as long as you are interested in inquiring about the bat-ness.
This kind of inquiry, this pursuit of answering questions, is thinking. Specifically, as Callard puts it, thinking is “a social quest for better answers to the sorts of questions that show up for us already answered.” Breaking that down: it’s social because it’s done with a partner who disagrees with you, because they have their own views about the question. It’s a quest because both parties are seeking knowledge. The last bit, about questions being already answered, is worth exploring a bit.
Why bother answering questions you already have answers to? This is trivial to refute when you know nothing about a subject. For example, let’s say you knew nothing about gravity, and your answer to why you are stuck to the earth is that we are beings of the soil and to the soil we must go; the soil always calls us. If that is your worldview, then you already have the answer. The only way to arrive at a better answer, gravity, is to have someone question you on the matter, refuting specific points based on their own point of view. This may come in the form of a conversation, a textbook, a speech, and so on. I suspect this social role may soon be played by AI.
Obviously hallucinations themselves aren’t great, but the ability to hallucinate is. In the coming years I expect AI will gain significant knowledge access, not just in the form of training but in the form of reference databases containing data broadly accepted as knowledge. In the process we will probably have to undergo significant social pains to agree on what constitutes Established Knowledge. Such a system will enable LLMs to play the role of Socrates and help the user avoid falsehoods by questioning the beliefs the user holds.
Until now computers couldn’t play this role because there wasn’t enough “humanness” involved. In the bat example, a bat cannot serve as Socrates or as the interlocutor to a human partner because there isn’t a shared worldview. LLMs, trained on human-generated knowledge, would have enough in common with us to provide a normative mirror. The AI comes with the added benefit of having both infinite patience and no internal urge to be right. This would allow the quest to provide an answer that is satisfactory to the user searching at every level of understanding. LLMs can be useful even before they gain the ability to access established knowledge: simply by providing a surface on which to hang questions, they can make the user adept at the art of inquiry.
So the next time you have a chat with your pet AI, understand that it starts as a session of pure space. Each word we put in ties the AI down to specific vantage points to help us explore. Go ahead—pick a question you think you’ve already answered and let the machine argue with you.
At the edge of the known, maps fail and instincts take over. We don’t just explore new worlds—we build places to survive them. Because in deep space, meaning isn’t found. It’s made.
I. Interruption of Infinity
The Delta Quadrant is a distant region of the galaxy in the Star Trek universe—vast, largely uncharted, and filled with anomalies, dangers, and promise. It is where the map ends and the unknown begins. No stations, no alliances, no history—just possibility.
And yet, possibility alone is not navigable. No one truly explores a void. We only explore what we can orient ourselves within. That is why every journey into the Delta Quadrant begins not with motion, but with homebuilding—the act of constructing something steady enough to make movement meaningful.
This is not a story about frontiers. It is a story about interruptions.
To build a home is to interrupt space. To be born is to interrupt infinity.
Consciousness does not arise gently. It asserts. It carves. It says: Here I am. The conditions of your birth—your geography, your culture, your body—are not mere facts. They are prenotions: early constraints that allow orientation. They interrupt the blur of everything into something—a horizon, a doorway, a room.
Francis Bacon wrote that memory without direction is indistinguishable from wandering. We do not remember freely; we remember through structures. We do not live in space; we live through place. Philosopher Kei Kreutler expands this insight: artificial memory—our rituals, stories, and technologies—is not a container for infinity. It is a deliberate break in its surface, a scaffolding that lets us navigate the unknown.
Like stars against the black, places puncture the undifferentiated vastness of space. They do not merely protect us from chaos; they make chaos legible. Before GPS, before modern maps, people made stars into stories and stories into guides. Giordano Bruno, working in the Hermetic tradition, saw constellations as talismans—anchoring points in a metaphysical sky. In India, astronomy and astrology were entwined, and the nakshatras—lunar mansions—offered symbolic footholds in the night’s uncertainties. These were not just beliefs. They were early technologies of place-making.
Without a place, you are not lost—you are not yet anywhere.
And so, to explore the Delta Quadrant—to explore anything—we must first give it a place to begin. Not just a structure, but a home. Not just shelter, but meaning.
II. From Vastness to Meaning
To understand why we need homes in the Delta Quadrant, we must first understand what it means to be in any space at all. Not merely to pass through it, but to experience it, name it, shape it—to transform the ungraspable into something known, and eventually, something lived.
This section traces that transformation. It begins with space—untouched, undefined—and follows its conversion into place, where identity, memory, and meaning can take root. Along the way, we consider the roles of perception, language, and tools—not just as instruments of survival, but as the very mechanisms by which reality becomes navigable.
We begin where we always do: in the unmarked vastness.
What is Space?
Space surrounds us, yet refuses to meet our gaze. It is not a substance but a condition—timeless, uncaring, and full of potential. It offers no direction, holds no memory. Nothing in it insists on being noticed. Space simply waits.
Henri Lefebvre helps us make our first move toward legibility. He proposes that all space emerges through a triad: the representations of space—the conceptual abstractions of cartographers, economists, and urban planners; the spatial practices of everyday life—our habits of movement and arrangement; and representational spaces—the dreamlike, lived realities saturated with memory, symbol, and emotion. Yet in modernity, it is the first of these—abstract space—that dominates. Space is planned, capitalized, monetized. It becomes grid and zone, not story or sanctuary.
Still, even this mapped and monetized space is not truly empty. Doreen Massey reminds us that space is not inert. It is relational, always in flux, co-constituted by those who traverse it. Space may not hold memories, but it does hold tensions. A room shifts depending on who enters it. A street corner lives differently for each passerby. What appears static from orbit is endlessly alive on foot.
We might then say: space is not blank—it is waiting. It is the stage before the script, the forest before the trail, the soundscape before the melody. It is possibility without orientation.
And yet, we cannot live on possibility. To dwell requires more than openness. Something must be placed. Something must be remembered.
What is Place?
Place begins when space is interrupted—when the unformed becomes familiar, when pattern gathers, when time slows down enough to matter. Where space is potential, place is presence.
Yi-Fu Tuan called place “an ordered world of meaning.” This ordering is not merely logical—it is affective, mnemonic, embodied. Place is not only where something happens; it is where something sticks. The repeated use of a corner, the ritual return to a path, the naming of a room—all of these actions layer memory upon memory until a once-anonymous space becomes deeply, even invisibly, ours.
Edward Casey expands this view by proposing that place is not a passive container of identity, but a generator of it. Who we are emerges from where we are. The self is not constructed in a vacuum, but shaped by kitchens and classrooms, alleyways and attics. A place is a crucible for becoming.
And places are not necessarily large or fixed. Often they are forged in fragments—through a method of thought called parataxis, the act of placing things side by side without hierarchy or explanation. Plates, tables, menus—listed without commentary—already conjure a restaurant. North is the river, east is the village: already we are somewhere. This act of spatial poetry, what might be called topopoetics, allows us to construct coherence from adjacency. A place need not be explained to be felt.
Moreover, places are not isolated islands. They are defined as much by what they touch as by what they contain. A healthcare startup, for instance, is not merely a business plan or a piece of code—it is a bounded intersection of regulation, culture, user need, and infrastructural possibility. Its identity as a place emerges through tension, not through self-sufficiency.
To make a place, then, is to draw a boundary—not always of stone, but always of meaning. And once there is a boundary, there is the possibility of crossing it.
Exploration and Navigation
If place is what interrupts space, exploration is the means by which that interruption unfolds. We explore to understand, to locate, to claim. But we also explore to survive. In an unmarked world, movement without orientation is not freedom—it is drift.
The act of exploration is always mediated by tools—technologies, heuristics, protocols, even rituals. A tool transforms a space into something workable, sometimes by revealing it, sometimes by resisting it. The ax makes the forest navigable. The microscope transforms skin into data. A recipe, too, is a tool: it arranges the chaos of the kitchen into a legible field of options.
Skill determines the fidelity of this transformation. A novice with a saw sees wood; a carpenter sees potential. A goldsmith with pliers explores more in an inch of metal than a layman can in a bar of gold. Tools extend reach, but skill gives them resonance.
Rules of thumb emerge here as quietly powerful. They encode accumulated wisdom without demanding full explanation. A rule of thumb is a kind of portable place—a local memory that survives relocation. It allows someone to move meaningfully through new terrain without starting from nothing.
But perhaps the oldest, and most powerful, tool of place-making is language. To name something is to summon it into experience. A name makes the unspeakable speakable, the abstract navigable. Storytelling is not merely entertainment—it is cartography. Myth and memory alike help us place ourselves. Rituals, in this light, become recurring acts of alignment: a way to rhythmically convert time and action into a felt geography.
In early computer games like Zork, entire worlds were constructed out of pure language. “To the west is a locked door.” “To the north, a forest.” With no images at all, a mental geography emerged. Place formed from syntax. And in open-world games, which promise limitless exploration, boundaries remain—defined not by terrain, but by tools and capabilities. One may see a mountain, but until one has a grappling hook, the mountain is not truly in reach.
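For the programmers, a Zork-like world really can be nothing more than a dictionary of sentences and exits. The rooms below are invented, a minimal sketch rather than anything from the actual game:

```python
# A tiny text-world: the "place" exists only as language plus a few links.
world = {
    "clearing": {"desc": "To the north, a forest. To the west, a locked door.",
                 "exits": {"north": "forest"}},
    "forest":   {"desc": "Dark trunks in every direction. A path leads south.",
                 "exits": {"south": "clearing"}},
}

def explore(start: str, moves: list[str]) -> None:
    here = start
    print(f"[{here}] {world[here]['desc']}")
    for move in moves:
        here = world[here]["exits"].get(move, here)  # a wall is just a missing key
        print(f"[{here}] {world[here]['desc']}")

explore("clearing", ["north", "south", "west"])  # "west" stays out of reach
```

The locked door to the west exists only as a sentence, yet it already bounds where you can go.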
This is the double truth of exploration: it reveals, but also restricts. Every tool has affordances and blind spots. Every method of navigation makes some routes legible and others obscure.
And so, just as place makes meaning possible, it also makes power visible. When we explore, we choose where to go—but also where not to go. When we name, we choose what to name—and what to leave unnamed. With each act of orientation, something is excluded.
This is where the ethical tensions begin.
III. Violence, Power, Custodianship
The Violence of Exploration
To make a place is never a neutral act. It is always a form of imposition, a declaration that one configuration of the world will take precedence over another. Every boundary drawn reorders the field of possibility. In this sense, exploration—often romanticized as the pursuit of discovery—is inseparable from the logic of exclusion. The forest cleared for settlement, the land renamed by the cartographer, the dataset parsed by an algorithm: each gesture selects a future and discards alternatives. Place-making is not only constructive—it is also extractive.
Achille Mbembe’s concept of necropolitics offers a stark rendering of this dynamic. For Mbembe, the most fundamental expression of power is the authority to determine who may live and who must die—not just biologically, but spatially. A person denied a stable place—be it in legal terms, economic structures, or cultural recognition—is exposed to systemic vulnerability. They are rendered invisible, disposable, or subject to unending surveillance. In this framework, place becomes not a refuge but a rationed privilege, administered according to hierarchies of race, class, and citizenship. To be placeless is to be exposed to risk without recourse.
David Harvey arrives at a similar critique from a different angle. For Harvey, the production of space under capitalism is inherently uneven. Capital concentrates selectively, building infrastructure, institutions, and visibility in certain regions while leaving others disinvested, fragmented, or erased. Some places are made to flourish because they are profitable; others are sacrificed because they are not. Entire neighborhoods, cities, and ecosystems are subjected to cycles of speculative construction and abandonment. In this schema, place is commodified—not lived. It becomes a product shaped less by the needs of its inhabitants than by the imperatives of financial flows.
Who Gets to Make Place?
Even at smaller scales, the ethics of place-making hinge on who holds the authority to define what a place is and who belongs within it. The naming of a school, the zoning of a district, the design of a product interface—each involves not only inclusion, but exclusion; not only clarity, but control. The map that makes one community legible can make another invisible. Orientation, in this sense, is never free of consequence. It is always tethered to power.
If this is the cost of exploration, then the question we must ask is not simply whether to build places—but how, and for whom.
Those who create the tools through which places are made—architects, technologists, platform designers—wield a power that is both formative and silent. In shaping the conditions under which others navigate the world, they act as unseen cartographers. A navigation app determines which streets appear safe. A job platform defines whose labor is visible. A software protocol decides who is legible to the system. In each case, someone has already made a decision about what kind of world is possible.
This asymmetry between creator and user has led some to argue that ethical design requires more than usability—it requires an ethos of custodianship. The act of place-making must be informed not only by technical possibility, but by moral imagination. A well-designed place is not simply functional—it is inhabited, sustained, and responsive to the people who live within it.
Michel Foucault offers a vocabulary for this through his concept of heterotopias: places that operate under a different logic, outside the dominant spatial order. These may be institutional—cemeteries, prisons, libraries—or insurgent—subcultures, autonomous zones, speculative games. Heterotopias do not merely resist the prevailing map; they reveal that other maps are possible. They function as mirrors and distortions of the dominant world, reminding us that the spatial order is neither natural nor inevitable.
Yet even heterotopias cannot be engineered wholesale. They must be lived into being. This is the insight offered by Christopher Alexander and, more recently, Ron Wakkary in their explorations of unselfconscious design. Good places, they argue, are rarely planned top-down. Instead, they emerge from a slow dance between structure and improvisation. A fridge becomes a family bulletin board. A courtyard becomes a marketplace. A piece of software becomes an unanticipated ritual. In these cases, fit emerges not from specification but from accumulated use. Design, at its best, enables this evolution rather than constraining it.
To make a place, then, is not to finalize it. It is to initiate a relationship. The designer, the founder, the engineer—each acts as a temporary steward rather than a sovereign. The real test of their creation is not how complete it feels on launch day, but how it adapts to the people who enter it and make it their own. This is the quiet responsibility of custodianship: to create with humility, to listen after building, and to recognize that places do not succeed by force of vision alone. They succeed by making others feel, at last, that they belong.
IV. Fractal Place-Making
We often think of place-making as a singular act—a line drawn, a structure raised, a tool released. But in truth, places are rarely built in one gesture. They are shaped recursively, iteratively, across layers and scales. A place is not simply made once—it is continuously remade, revised, and reinhabited. If power animates the creation of place, then care animates its persistence.
The previous section examined how place-making implicates violence and authority. This one turns inward, offering tools to see place-making not as an external imposition, but as a continuous, generative practice—one we each participate in, often unconsciously. Places are not only geopolitical or architectural. They emerge in routines, in interfaces, in sentences, in rituals. They are as present in the layout of a city as in the arrangement of a desktop or the structure of a daily habit.
Place-making, in this light, becomes fractal.
Spaces All the Way Down
Every place, no matter how concrete or intentional, overlays a prior space. A home rests on a plot of land that once held other meanings. A software tool is coded atop prior protocols, abstractions, languages. A startup’s culture is built not from scratch, but from accumulated social assumptions, inherited metaphors, and the ghosts of previous institutions. No place begins in a vacuum. It begins by coalescing around an earlier ambiguity.
To say “it’s spaces all the way down” is not a paradox but a recognition: that all our structuring of the world rests on foundations that were once unstructured. And those, in turn, rest on others. Beneath every home is a history. Beneath every habit is a choice. Beneath every heuristic is an unspoken story of why something worked once, and perhaps still does.
This recursive layering reveals something crucial. Place is not just what we inhabit—it is what we build upon, often without seeing the full depth of what came before. When we set up a calendar system, when we define an onboarding process, when we reorganize a room or refactor code, we are engaging in acts of recursive place-making. These are not trivial gestures. They encode our assumptions about time, labor, clarity, worth. And in doing so, they scaffold the next set of moves. What feels natural is often just deeply buried infrastructure.
Traditions, Tools, and Temporal Sediments
Much of what makes a place stable over time is not its physicality but its rhythm. What repeats is remembered. What is remembered becomes legible. Over time, the sediment of repetition builds tradition—not as nostalgia, but as a living scaffolding.
Rules of thumb are examples of such traditions, compacted into portable epistemologies. They are not universal truths, but local condensations of experience: “Measure twice, cut once.” “If it’s not a hell yes, it’s a no.” “Always leave a version that works.” These are not mere slogans. They are the crystallization of hundreds of micro-failures, carried forward in language so that others may avoid or adapt. A rule of thumb is a place you can carry in your mind—a place where you briefly borrow the perspective of others, where their past becomes your foresight.
Ethnographic engineering—the practice of living among those you design for—extends this logic. It is not enough to ask what users want; one must become a user. To understand a kitchen, you must cook. To redesign a hospital intake form, you must sit beside a nurse at the end of a long shift. Inhabitance precedes insight. It is not empathy as abstraction, but as situated knowledge. This is why the mantra “get out of the building” matters. It invites designers to enter someone else’s place—and to temporarily surrender their own.
Even the way we recover from failure carries spatial weight. In systems design, crash-only thinking proposes that recovery should not be exceptional but routine. A system should not pretend to avoid breakdown—it should assume it, and handle it gracefully. This principle translates beyond code. Our identities, too, are shaped by rupture and repair. We are the residue of what survives collapse. To rebuild after a crash is to reassert a place for oneself in the world—to refuse exile, to restart with a new contour of legibility. The self is a recursive place, constantly reformed by continuity and failure.
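In code, crash-only thinking can be as small as refusing to have a separate “clean shutdown” path. Here is a minimal sketch (the journal file name and the work items are hypothetical), in which every start is a recovery:

```python
import json
import os

STATE_FILE = "progress.json"  # hypothetical journal of completed work

def load_state() -> dict:
    """Recovery is the normal startup path, not a special case."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"done": []}

def save_state(state: dict) -> None:
    # Write to a temp file, then atomically rename, so a crash mid-write
    # never corrupts the journal.
    tmp = STATE_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, STATE_FILE)

def run(items: list[str]) -> None:
    state = load_state()           # every start assumes a prior crash
    for item in items:
        if item in state["done"]:
            continue               # restarting simply skips finished work
        print("processing", item)  # the actual work would go here
        state["done"].append(item)
        save_state(state)

run(["a", "b", "c"])
```

Kill the process at any point and start it again: it resumes from the journal, and nothing about that restart is exceptional.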
Imagined Places, Real Consequences
Not all places are made of walls or workflows. Some are conjured in thought but anchor entire worlds in practice. These are imagined places—places held in common through language, ritual, and belief—and their effects are no less material for being constructed.
Benedict Anderson’s theory of imagined communities describes the nation as precisely such a place: a social structure that exists because enough people believe in its coherence. A country is not simply a set of borders—it is a shared imagination of belonging, reinforced by rituals as small as singing an anthem or using the same postal code. These rituals do not merely express the nation—they enact it. The community persists not because everyone knows each other, but because they believe in the same structure of place.
Gaston Bachelard, writing of intimate places, adds another layer. His Poetics of Space reveals how rooms, nests, and thresholds function not just architecturally, but symbolically. A staircase is not just a connector between floors—it is a memory channel. A drawer is not just storage—it is a metaphor for secrecy. Through repeated use and emotional investment, even the smallest corners of a home can become vast interior landscapes.
Designers who ignore this symbolic dimension risk creating tools that are frictionless but placeless. A well-designed app may guide a user efficiently, but if it lacks metaphor, texture, or resonance, it will not endure. By contrast, even ephemeral tools—when shaped with care—can become anchoring places. A text editor that respects rhythm. A ritualized way of closing the day. A naming convention that makes each project feel storied rather than serialized. These are small acts, but they echo. They accumulate. They become sediment.
Recursive place-making, then, is not about grandeur. It is about fidelity. It is about recognizing that every small act of shaping the world—every pattern set, every name given, every recovery ritualized—is part of a larger unfolding. Place is not a one-time gift. It is a continuous offering.
V. Homes at the Edge of the Known
Places don’t just emerge from space—they transform it. A well-made place doesn’t only make sense of what is; it makes new things possible. It reframes what we pay attention to, how we act, and who we become. Place is not the end of exploration—it is the start of imagination.
Each time we build a place, we alter the shape of the surrounding space. A room becomes a lab, a garage becomes a company, a notebook becomes a worldview. These shifts ripple outward. Identity follows structure. Tools reorganize desire. Suddenly what felt unreachable becomes thinkable. New directions appear.
This is why the Delta Quadrant matters. In Star Trek, it is the quadrant at the far edge of the map: unvisited, unaligned, untamed. But we all have our own Delta Quadrants—those domains where orientation fails. The new job. The new field. The social unknown. We don’t need to conquer these spaces. We need to inhabit them.
Building a home in the Delta Quadrant means giving shape to uncertainty. Not through control, but through commitment. Homes are not fortresses—they are launchpads. They anchor us without confining us. They give us somewhere to return to, so we can go further.
To build such homes is to design for possibility. It is to accept that the unknown will always outpace our frameworks, and to meet it not with fear, but with grounded generosity. Homes enable freedom not by removing constraints, but by embedding care in structure. They show us that discovery and dignity are not opposites—they are partners.
And yes, building these homes will be messy. There will be diplomacy with space jellyfish. There will be moral conundrums involving time loops and malfunctioning replicators. Someone will definitely rewire the main console so the espresso machine can detect tachyon emissions.
There is a shared soul shard between Dwarf Fortress, Emacs, and AI that lured me to them and has kept me engaged for over a decade. For a long time, I struggled to articulate the connection, managing only to describe Dwarf Fortress as the Emacs of games. But this analogy, while compelling, doesn’t fully capture the deeper resonance these systems share. They are not merely complicated; they are complex—tools for creativity that reward immersion and exploration.
To understand the allure, let’s revisit the distinction between complicated and complex. Complicated systems, say a spinning-disk microscope, consist of interlocking parts (each with its own internal complications) that interact in predictable ways. They require technical expertise to master, but their behavior remains largely deterministic, and I tire of them soon.
Complex systems (see the Cynefin framework) exhibit emergent behavior. Their value/fun lies in the generative possibilities they unlock rather than in the sum of their parts.
Dwarf Fortress, Emacs, and AI live on the froth of this complexity. None of these systems exist as ends in themselves. You don’t play Dwarf Fortress to achieve a high score (there isn’t one, you eventually lose). You don’t use Emacs simply to edit text, and you don’t build AI to arrange perceptrons in aesthetically pleasing patterns. These are platforms, altars for creation. Dev environments.
In Emergence We Trust
Like language with the rules of poetry, these environments are generative places enabling exploration of emergent spaces. Emergence manifests both in the software and in you. There is always a point where you find yourself thinking, I didn’t expect I could do that. In Dwarf Fortress you first fight against tantrum spirals and then, through mastery, against FPS death. Similarly, Emacs enables workflows that evolve over time, as users build custom functions and plugins to fit their unique needs. In AI, emergence arrives rather late, but it’s there. Putting together datasets, training them, optimizing, starting over: these are complicated but not complex per se. The complexity (and emergence) is in the capabilities of the trained network. Things infinitely tedious or difficult are a few matrix multiplications away.
This desire for emergence is spelunking. It rewards curiosity and experimentation but demands patience and resilience. Mastery begins with small victories: making beer in Dwarf Fortress, accessing help in Emacs, or implementing a 3-layer neural network. Each success expands your imagination. The desire to do more, to push the boundaries of what’s possible, becomes an endless rabbit hole—one that is as exhilarating as it is daunting.
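For the AI version of that first small victory, here is what a 3-layer neural network can look like in plain NumPy, a toy sketch trained on XOR rather than anything grand:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy problem: XOR, the classic first victory.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three layers of weights: 2 -> 8 -> 8 -> 1
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 8)), np.zeros(8)
W3, b3 = rng.normal(0, 1, (8, 1)), np.zeros(1)

lr = 1.0
for step in range(10_000):
    # Forward pass
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)

    # Backward pass (squared-error loss, sigmoid derivatives)
    d_out = (out - y) * out * (1 - out)
    d_h2 = (d_out @ W3.T) * h2 * (1 - h2)
    d_h1 = (d_h2 @ W2.T) * h1 * (1 - h1)

    W3 -= lr * h2.T @ d_out; b3 -= lr * d_out.sum(axis=0)
    W2 -= lr * h1.T @ d_h2;  b2 -= lr * d_h2.sum(axis=0)
    W1 -= lr * X.T @ d_h1;   b1 -= lr * d_h1.sum(axis=0)

print(out.round(2).ravel())  # should creep toward [0, 1, 1, 0]
```

Nothing here is clever, but watching four numbers crawl toward the right answer is usually the moment the rabbit hole opens.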
Complexity as a Gateway to Creativity
The high complexity of these systems—their vast degrees of freedom—opens the door to infinite creativity. This very openness, however, can be intimidating. Confronted with the sprawling interface of Emacs, the arcane scripts of Dwarf Fortress, or the mathematical abstractions of AI, it’s tempting to retreat to the familiar. Yet this initial opacity is precisely what makes these systems so rewarding. Engaging with something that might blow up in your face—whether it’s drunk cats, a Lisp error, or an exploding gradient—pushes you to the brink of giving up.
But just then you have an idea: what if you tried this…
AI-generated image of The old Doge Enrico Dandolo sacking Constantinople
I’m taking part in the Contraptions Book Club, where we are reading City of Fortune, which is about Venice. I was struck by the character of Doge Dandolo. Dude was 80+ when he saw a trade opportunity in the Fourth Crusade. In the book, the author, Roger Crowley, describes a brief moment when Dandolo makes a heroic rush on the banks of Constantinople’s Golden Horn during the Sack of Constantinople.
I found both the Doge and the imagery interesting, so I went looking for art depicting the scene; there’s supposed to be lots. Unfortunately I couldn’t find any, and nothing in the public domain. So I asked AI to generate something.
There are other paintings like the one below, but not the one I was looking for.
Few ideas capture the collective human imagination more powerfully than the notion of a “universal library”—a singular repository of all recorded knowledge. From the grandeur of the Library of Alexandria to modern digital initiatives, this concept has persisted as both a philosophical ideal and a practical challenge. Miroslav Kruk’s 1999 paper, “The Internet and the Revival of the Myth of the Universal Library,” revitalizes this conversation by highlighting the historical roots of the universal library myth and cautioning against uncritical technological utopianism. Today, as Wikipedia and Large Language Models (LLMs) like ChatGPT emerge as potential heirs to this legacy, Kruk’s insights—and broader reflections on language, noise, and the very nature of truth—resonate more than ever.
The myth of the universal library
Humanity has longed for a comprehensive archive that gathers all available knowledge under one metaphorical roof. The Library of Alexandria, purportedly holding every important work of its era, remains our most enduring symbol of this ambition. Later projects—such as Conrad Gessner’s Bibliotheca Universalis (an early effort to compile all known books) and the Enlightenment’s encyclopedic endeavors—renewed the quest for total knowledge. Francis Bacon famously proposed an exhaustive reorganization of the sciences in his Instauratio Magna, once again reflecting the aspiration to pin down the full breadth of human understanding.
Kruk’s Historical Lens
This aspiration is neither new nor purely technological. Kruk traces the “myth” of the universal library from antiquity through the Renaissance, revealing how each generation has grappled with fundamental dilemmas of scale, completeness, and translation. According to Kruk,
inclusivity can lead to oceans of meaninglessness
The library on the “rock of certainty”… or an ocean of doubt?
Alongside the aspiration toward universality has come an ever-present tension around truth, language, and the fragility of human understanding. Scholars dreamed of building the library on a “rock of certainty,” systematically collecting and classifying knowledge to vanquish doubt itself. Instead, many found themselves mired in “despair” and questioning whether the notion of objective reality was even attainable. As Kruk’s paper points out,
The aim was to build the library on the rock of certainty: We finished with doubting everything … indeed, the existence of objective reality itself.
Libraries used to be zero-sum
Historically,
for some libraries to become universal, other libraries have to become ‘less universal.’
Access to rare books or manuscripts was zero-sum; a collection in one part of the world meant fewer resources or duplicates available elsewhere. Digitization theoretically solves this by duplicating resources infinitely, but questions remain about archiving, licensing, and global inequalities in technological infrastructure.
Interestingly, Google was founded in 1998, just as Kruk’s 1999 paper was nearing publication. In many ways, Google’s search engine became a “library of the web,” indexing and ranking content to make it discoverable on a scale previously unimaginable. Yet it is also a reminder of how quickly technology can outpace our theoretical frameworks: perhaps Kruk couldn’t have known about Google without Google. Something something the future is already here…
Wikipedia: an oasis island
Wikipedia stands as a leading illustration of a “universal library” reimagined for the digital age. Its open, collaborative platform allows virtually anyone to contribute or edit articles. Where ancient and early modern efforts concentrated on physical manuscripts or printed compilations, Wikipedia harnesses collective intelligence in real time. As a result, it is perpetually expanding, updating, and revising its content.
Yet Kruk’s caution holds: while openness fosters a broad and inclusive knowledge base, it also carries the risk of “oceans of meaninglessness” if editorial controls and quality standards slip. Wikipedia does attempt to mitigate these dangers through guidelines, citation requirements, and editorial consensus. However, systemic biases, gaps in coverage, and editorial conflicts remain persistent challenges—aligning with Kruk’s observation that inclusivity and expertise are sometimes at odds.
LLMs – AI slops towards the perfect library
Where Wikipedia aspires to accumulate and organize encyclopedic articles, LLMs like ChatGPT offer a more dynamic, personalized form of “knowledge” generation. These models process massive datasets—including vast portions of the public web—to generate responses that synthesize information from multiple sources in seconds. In a way this almost solves one of the sister aims of the perfect library, the perfect language, where embeddings serve as a stand-in for perfect words.
The perfect language, on the other hand, would mirror reality perfectly. There would be one exact word for an object or phenomenon. No contradictions, redundancy or ambivalence.
The dream of a perfect language has largely been abandoned. As Umberto Eco suggested, however, the work on artificial intelligence may represent “its revival under a different name.”
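A minimal sketch of what that stand-in looks like in practice, assuming the sentence-transformers library and one of its publicly available models (the sentences themselves are made up):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# A small, commonly used embedding model; the choice is an assumption, not a recommendation.
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "a universal library containing all recorded knowledge",
    "an archive of everything humanity has ever written",
    "a recipe for congee",
]
vectors = model.encode(sentences)  # one dense vector per sentence

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))  # high: near-synonymous "words"
print(cosine(vectors[0], vectors[2]))  # low: unrelated concepts
```

The vectors are not perfect words, of course; they are lossy, statistical stand-ins.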
The very nature of LLMs highlights another of Kruk’s cautions: technological utopianism can obscure real epistemological and ethical concerns. LLMs do not “understand” the facts they present; they infer patterns from text. As a result, they may produce plausible-sounding but factually incorrect or biased information. The quantity-versus-quality dilemma thus persists.
Noise is good actually?
Although the internet overflows with false information and uninformed opinions, this noise can be generative—spurring conversation, debate, and the unexpected discovery of new ideas. In effect, we might envision small islands of well-curated information in a sea of noise. Far from dismissing the chaos out of hand, there is merit in seeing how creative breakthroughs can emerge from it: the gold of chemistry from leaden alchemy.
Concerns persist: the existence of misinformation, bias, and AI slop invites us to exercise editorial diligence to sift through the noise productively. It also echoes Kruk’s notion of the universal library as something that “by definition, would contain materials blatantly untrue, false or distorted,” thus forcing us to navigate “small islands of meaning surrounded by vast oceans of meaninglessness.”
Designing better knowledge systems
Looking forward, the goal is not simply to build bigger data repositories or more sophisticated AI models, but to integrate the best of human expertise, ethical oversight, and continuous quality checks. Possible directions include:
1. Strengthening Editorial and Algorithmic Oversight:
Wikipedia can refine its editorial mechanisms, while AI developers can embed robust validation processes to catch misinformation and bias in LLM outputs.
2. Contextual Curation:
Knowledge graphs are likely great bridges between curated knowledge and generated text (a rough sketch follows this list).
3. Collaborative Ecosystems:
Combining human editorial teams with AI-driven tools may offer a synergy that neither purely crowdsourced nor purely algorithmic models can achieve alone. Perhaps this process could be more efficient by adding a knowledge base driven simulation (see last week’s links) of the editors’ intents and purposes.
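As a rough sketch of contextual curation (the triples, names, and extraction step here are hypothetical, not from any particular system), generated claims can be checked against a small curated knowledge graph before they are surfaced:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str

# Curated "island of meaning": a tiny hand-checked knowledge graph.
KG = {
    Triple("Bibliotheca Universalis", "compiled_by", "Conrad Gessner"),
    Triple("Library of Alexandria", "located_in", "Alexandria"),
}

def supported(claim: Triple) -> bool:
    """Keep a claim only if it matches a curated triple."""
    return claim in KG

# Claims extracted from model output (the extraction step is assumed, not shown).
claims = [
    Triple("Bibliotheca Universalis", "compiled_by", "Conrad Gessner"),
    Triple("Library of Alexandria", "located_in", "Rome"),  # a hallucination
]

for claim in claims:
    status = "grounded" if supported(claim) else "flagged for review"
    print(f"{claim.subject} --{claim.relation}--> {claim.obj}: {status}")
```

The curated set is tiny, an island; everything outside it stays ocean until an editor says otherwise.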
A return to the “raw,” as opposed to the social-media-cooked version of the internet, might be the trick after all. Armed with new tools we can (and should) create meaning. In the process, Leibniz might get his universal digital object identifier after all.
Compression progress as a fundamental force of knowledge
Ultimately, Kruk’s reminder that the universal library is a myth—an ideal rather than a finished product—should guide our approach. Its pursuit is not a one-time project with a definitive endpoint; it is an ongoing dialogue across centuries, technologies, and cultures. As we grapple with the informational abundance of the digital era, we can draw on lessons from Alexandria, the Renaissance, and the nascent Internet of the 1990s to inform how we build, critique, and refine today’s knowledge systems.
Refine so that tomorrow, maybe literally, we can run reclamation projects in the noisy sea.
Image: Boekhandelaar in het Midden-Oosten (1950 – 2000) by anonymous. Original public domain image from The Rijksmuseum
Last year I was experimenting heavily with Knowledge Graphs, because it has been clear that LLMs by themselves fall short for lack of knowledge. This paper by Chirag Shah and Ryen White (you can click the heading above) from Dec 2024 expands on those shortcomings by exploring not just knowledge but also value generation, personalization, and trust.
They open the paper by casting a very wide definition of an “agent”: everything from thermostats to LLM tools. While this seems facetious at first, their next point is interesting. Agents by definition “remove agency from a user in order to do things on the user’s behalf and save them time and effort.” I think this is an interesting way to inject an LLM-flavored principal-agent problem into the agentic AI conversation.
Their broad suggestion is to expand the ecosystem of agents by including “Sims.” Sims are simulations of the user which address:
privacy and security
automated interactions and
representing the interests of the user by holding intimate knowledge about the user
Infinite knowledge is available through the internet today. It is available trivially, and some, ahem, blogs make a performance of consuming it. Machiavelli used to
put on the garments of court and palace. Fitted out appropriately, I step inside the venerable courts of the ancients, where, solicitously received by them, I nourish myself on that food that alone is mine and for which I was born, where I am unashamed to converse with them and to question them about the motives for their actions, and they, in their humanity, answer me. And for four hours at a time I feel no boredom, I forget all my troubles, I do not dread poverty, and I am not terrified by death. I absorb myself into them completely.
Some folks have a private office, but an office is not a study. A study, or studiolo
in Italian, a precursor to the modern-day study — came to offer readers access to a different kind of chamber, a personal hideaway in which to converse with the dead. Cocooned within four walls, the studiolo was an aperture through which one could cultivate the self. After all, to know the world, one must begin with knowing the self, as ancient philosophy instructs. In order to know the self, one ought to study other selves too, preferably their ideas as recorded in texts. And since interior spaces shape the inward soul, the studiolo became a sanctuary and a microcosm. The study thus mediates the world, the word, and the self.
In the 1500s Michel de Montaigne writes:
We should have wife, children, goods, and above all health, if we can; but we must not bind ourselves to them so strongly that our happiness [tout de heur] depends on them. We must reserve a back room [une arriereboutique] all our own, entirely free, in which to establish our real liberty and our principal retreat and solitude.
A little later, Virginia Woolf points out what seems to be an eternal inequality by struggling to find “a room of one’s own”.
The enclosure of the study, for those of us lucky enough to have one, offers a paradoxical sort of freedom. Conceptually, the studiolo is a pharmakon, a cure or poison for the soul. In its highest aspirations, the studiolo, as developed by humanists from Petrarch to Machiavelli to Montaigne, is a sanctuary for self-cultivation. Bookishness was elevated into a saintly virtue.
The world today would perhaps be better off if more of us had our own studiolos.
Another one of Frank Harrell’s posts. Given my day job and R&D background, this one is quite close to home. As a team leader on the industry side, one hopes to build a culture with the team that aligns scientific rigor with company goals. Any method, statistical or cultural (in this case both), that solves for this tension will get you the most bang for your buck.
Building a startup, doing research, or even just launching a moonshot project depends on emergent actions and decisions. Behind this madness there is a kind of Science of “Muddling Through,” and Bayesian methods are perhaps best equipped for it:
make all the assumptions you want, but allow for departures from those assumptions. If the model contains a parameter for everything we know we don’t know (e.g., a parameter for the ratio of variances in a two-sample t-test), the resulting posterior distribution for the parameter of interest will be flatter, credible intervals wider, and confidence intervals wider. This makes them more likely to lead to the correct interpretation, and makes the result more likely to be reproducible.
In an environment of limited resources (time, money, investor patience), being able to quantify your data and obtaining bankable evidence is critical.
only the Bayesian approach allows insertion of skepticism at precisely the right point in the logic flow, one can think of a full Bayesian solution (prior + model) as a way to “get the model right”, taking the design and context into account, to obtain reliable scientific evidence.
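A toy sketch of that idea, not Harrell’s own code, assuming PyMC and made-up measurements: the model carries an explicit parameter for the (log) ratio of the two group standard deviations instead of assuming the ratio is 1.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
a = rng.normal(10.0, 1.0, size=20)  # hypothetical control measurements
b = rng.normal(10.8, 2.0, size=20)  # hypothetical treatment measurements

with pm.Model():
    mu_a = pm.Normal("mu_a", mu=10, sigma=5)
    delta = pm.Normal("delta", mu=0, sigma=5)          # parameter of interest
    sigma_a = pm.HalfNormal("sigma_a", sigma=5)
    log_ratio = pm.Normal("log_ratio", mu=0, sigma=1)  # what we know we don't know
    sigma_b = pm.Deterministic("sigma_b", sigma_a * pm.math.exp(log_ratio))

    pm.Normal("obs_a", mu=mu_a, sigma=sigma_a, observed=a)
    pm.Normal("obs_b", mu=mu_a + delta, sigma=sigma_b, observed=b)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=7)

print(idata.posterior["delta"].mean().item())
```

Because the variance ratio is estimated rather than assumed, the posterior for delta comes out flatter and its credible interval wider, which is exactly the point of the quote above.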
The post provides a much more in-depth view, including an 8-fold path to enhancing the scientific process.
Chris Arnade gives us a view into why he goes on insanely long (in both distance and time) trips on foot and what he discovers there. I found this passage hilarious, in that money seems to converge on a version of living that is identical apart from the paint, while the rich likely search for that unique way of living.
Every large global city has a few upscale neighborhoods that are effectively all the same. It is where the very rich have their apartments, the five and four star hotels are, the most famous museums, and a shopping district with the same stores you would find in Manhattan’s Upper East Side, or London’s Mayfair.
The only difference is the branding. So you get the Upper East Side with Turkish affectations, or a Peruvian themed Mayfair. The residents of these neighborhoods are also pretty comfortable in any global city. As long as it is the right neighborhood.
Having traveled a fair bit, I know the designation of tourist brings with it multiple horrors. Chris uses walking as a kind of mini-residency and sidesteps much of that:
Walking also changes how the city sees you, and consequently, how you see the city. As a pedestrian you are fully immersed in what is around you, literally one of the crowd. It allows for an anonymity, that if used right, breaks down barriers and expectations. It forces you to deal with and interact with things and people as a resident does. You’re another person going about your day, rather than a tourist looking to buy or be sold whatever stuff and image a place wants to sell you.
This particular experience reminded me of Shantaram, a book about a foreigner being adsorbed onto, and absorbed by, the local.
While interesting, the research uses LLMs to play both patient and doctor. I like that there is research happening in this area; at least we are moving in the right direction with the testing of AI, i.e., not just structured tests. I would perhaps not judge the “failure” too harshly, as the source of the data is also an LLM and would suffer from deficiencies that an actual patient experiencing symptoms would not.
This paper introduces the Conversational Reasoning Assessment Framework for Testing in Medicine (CRAFT-MD) approach for evaluating clinical LLMs. Unlike traditional methods that rely on structured medical examinations, CRAFT-MD focuses on natural dialogues, using simulated artificial intelligence agents to interact with LLMs in a controlled environment.
While the paper reports a negative result, we do come away with a good set of recommendations for future evaluations.
Traversing human history, even just the last two decades, we see a rapid increase in the accessibility of knowledge. The purpose of language, and of course of all communication, is to transfer a concept from one system to another. For humans, this ability to transfer concepts has been driven by advancements in technology, communication, and social structures and norms.
This evolution has made knowledge increasingly composable, where individual pieces of information can be combined and recombined to create new understanding and innovation. Ten years ago I would have said that being able to read a research paper and having the knowledge to repeat that experiment in my lab was strong evidence of this composability (reproducibility issues notwithstanding).
Now, composability itself is getting an upgrade.
In the next essay I’ll be exploring the implications of the arrival of composable knowledge. This post is a light stroll to remind ourselves of how we got here.
In ancient times, knowledge was primarily transmitted orally. Stories, traditions, and teachings were passed down through generations by word of mouth. This method, while rich in cultural context, was limited in scope and permanence. The invention of writing systems around 3400 BCE in Mesopotamia marked a significant leap. Written records allowed for the preservation and dissemination of knowledge across time and space, enabling more complex compositions of ideas (Renn, 2018).
Shelves, Sheaves, and Smart Friends
The establishment of libraries, such as the Library of Alexandria in the 3rd century BCE, and scholarly communities in ancient Greece and Rome, further advanced the composability of knowledge. These institutions gathered diverse texts and fostered intellectual exchanges, allowing scholars to build upon existing works and integrate multiple sources of information into cohesive theories and philosophies (Elliott & Jacobson, 2002).
Scribes, Senpai, and Scholarship
During the Middle Ages, knowledge preservation and composition were largely the domain of monastic scribes who meticulously copied and studied manuscripts. The development of universities in the 12th century, such as those in Bologna and Paris, created centers for higher learning where scholars could debate and synthesize knowledge from various disciplines. This was probably when humans shifted perspective and started to view themselves as apart from nature (Grumbach & van der Leeuw, 2021).
Systems, Scripts and the Scientific Method
The invention of the printing press by Johannes Gutenberg in the 15th century revolutionized knowledge dissemination. Printed books became widely available, drastically reducing the cost and time required to share information. This democratization of knowledge fueled the Renaissance, a period marked by the synthesis of classical and contemporary ideas, and the Enlightenment, which emphasized empirical research and the scientific method as means to build, refine, and share knowledge systematically (Ganguly, 2013).
Silicon, Servers, and Sharing
The 20th and 21st centuries have seen an exponential increase in the composability of knowledge due to digital technologies. The internet, open access journals, and digital libraries have made vast amounts of information accessible to anyone with an internet connection. Tools like online databases, search engines, and collaborative platforms enable individuals and organizations to gather, analyze, and integrate knowledge from a multitude of sources rapidly and efficiently. There have even been studies which allow, weirdly, future knowledge prediction (Liu et al., 2019).
Conclusion
From oral traditions to digital repositories, the composability of knowledge has continually evolved, breaking down barriers to information and enabling more sophisticated and collaborative forms of understanding. Today, the ease with which we can access, combine, and build upon knowledge drives innovation and fosters a more informed and connected global society.