Tag: AI

  • AI: Explainable Enough

    They look really juicy, she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep and asserting a position. Good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn’t new to the problem.

    Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.

    Coming from a microscopy and bio background with a strong inclination towards image analysis, I had picked up deep learning as a way to be lazy in the lab. Why bother figuring out features of interest when you can have a computer do it for you? That was my angle. The issue was that in 2015 no biologist would accept any kind of deep learning analysis, and definitely not if you couldn’t explain the details.

    What the domain expert user doesn’t want:
    – How a convolutional neural network works. Confidence scores, loss, and AUC are all meaningless to a biologist, and to a doctor as well.

    What the domain expert desires: 
    – Help at the lowest level of detail that they care about. 
    – AI identifies features A, B, and C, and explains that when you see A, B, & C together it is likely to be disease X.

    Most users don’t care how deep learning really works. So, if you start giving them details like the IoU score of the object detection bounding box, or whether you used YOLO or R-CNN, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline with the predicted label and stop there. It’s also bad to go to the other extreme. If the AI just states the diagnosis for the whole image then the AI might be right, but the user does not get to participate in the process. Not to mention that regulatory risk goes way up.
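
    To make that concrete, here is a minimal sketch of what “stop there” might look like in code: an outline and a plain-language label on the image, and nothing about the model. The image and the box coordinates are hypothetical placeholders.

        # Minimal sketch: show only what the domain expert cares about --
        # an outline and a plain-language label, not IoU scores or architectures.
        # `image` and the box coordinates are hypothetical placeholders.
        import matplotlib.pyplot as plt
        import matplotlib.patches as patches
        import numpy as np

        image = np.random.rand(256, 256)                 # stand-in for a microscopy image
        x, y, w, h, label = 80, 60, 90, 70, "feature A"  # hypothetical detection

        fig, ax = plt.subplots()
        ax.imshow(image, cmap="gray")
        ax.add_patch(patches.Rectangle((x, y), w, h, fill=False, linewidth=2, edgecolor="yellow"))
        ax.text(x, y - 5, label, color="yellow")         # label only; no model talk
        ax.axis("off")
        plt.show()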

    This applies beyond images; consider LLMs. No one with any expertise likes a black box. Today, why do LLMs generate code instead of directly doing the thing that the programmer is asking them to do? It’s because the programmer wants to ensure that the code “works” and they have the expertise to figure out if and when it goes wrong. It’s the same reason that vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So in a Betty Crocker cake mix kind of way, let the user add the egg.

    Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image or to spell out every technical detail. Generating high-quality data at that just-right level is very difficult and expensive. However, do it right and the effort pays off. The outcome is an AI-human causal prediction machine, where the causes, i.e. the mid-level features, inform the user and build confidence towards the final outcome. The deep learning part is still a black box but the user doesn’t mind because you aid their thinking.
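
    As a rough illustration of that middle layer, here is a sketch of a readout that presents the mid-level evidence first and only then a suggestion, leaving the final call to the user. The feature names, confidences, and rule are made up.

        # Sketch of an "explainable enough" two-stage readout, assuming a detector
        # has already produced mid-level features. Names and numbers are hypothetical.
        detected = {"feature A": 0.91, "feature B": 0.84, "feature C": 0.77}

        def suggest_diagnosis(features, threshold=0.75):
            """Show the user the evidence, then a suggestion -- never just a verdict."""
            present = [name for name, conf in features.items() if conf >= threshold]
            for name in present:
                print(f"Found {name}")
            if {"feature A", "feature B", "feature C"} <= set(present):
                print("Together these are consistent with disease X -- please review.")
            else:
                print("Pattern incomplete -- please review the highlighted regions.")

        suggest_diagnosis(detected)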

    I’m excited by some new developments like REX, which sort of retrofit causality onto usual deep learning models. With improvements in performance, user preferences for detail may change, but I suspect that the need for AI to be explainable enough will remain. Perhaps we will even have custom labels like ‘juicy’.

  • My Road to Bayesian Stats

    By 2015, I had heard of Bayesian stats but didn’t bother to go deeper into it. After all, significance stars and p-values worked fine. I started to explore Bayesian statistics when considering small sample sizes in biological experiments. How much can you say when you are comparing means of 6 or even 60 observations? This is the nature of work at the edge of knowledge. Not knowing what to expect is normal. Multiple possible routes to an observed result are normal. Not knowing how to pick the route to the observed result is also normal. Yet, our statistics fails to capture this reality and the associated uncertainties. There must be a way, I thought.

    Free Curve to the Point: Accompanying Sound of Geometric Curves (1925) print in high resolution by Wassily Kandinsky. Original from The MET Museum. Digitally enhanced by rawpixel.

    I started by searching for ways to overcome small sample sizes. There are minimum sample sizes recommended for t-tests. Thirty is an often-quoted number, with qualifiers. Bayesian stats does not have a minimum sample size. This had me intrigued. Surely, this can’t be a thing. But it is. Bayesian stats creates a mathematical model using your observations and then samples from that model to make comparisons. If you have any exposure to AI, you can think of this a bit like training an AI model. Of course, the more data you have, the better the model can be. But even with a little data we can make progress.

    How do you say: there is something happening and it’s interesting, but we are only x% sure? Frequentist stats has no way through. All I knew was to apply the t-test, and if there are “***” in the plot, I’m golden. That isn’t accurate though. Low p-values indicate the strength of evidence against the null hypothesis. Let’s take a minute to unpack that. The null hypothesis is that nothing is happening. If you have a control set and do a treatment on the other set, the null hypothesis says that there is no difference. So, a low p-value says that the observed data would be unlikely if the null hypothesis were true. But that does not imply that the alternative hypothesis is true. What’s worse is that there is no way for us to say that the control and experiment have no difference. We can’t accept the null hypothesis using p-values either.
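
    To make that concrete, here is a toy comparison with made-up numbers; note what the p-value can and cannot say.

        # Toy illustration with made-up numbers: a small control vs. treatment comparison.
        from scipy import stats

        control   = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]
        treatment = [4.6, 4.9, 4.3, 5.1, 4.7, 4.8]

        res = stats.ttest_ind(control, treatment)
        print(f"p = {res.pvalue:.3f}")
        # A small p says the data would be surprising if there were no difference.
        # It does not give the probability that the treatment works, and a large p
        # would not let you claim the two groups are the same.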

    Guess what? Bayesian stats can do all those things. It can measure differences, accept and reject both null and alternative hypotheses, even communicate how uncertain we are (more on this later). All without forcing rigid distributional assumptions onto our data.
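
    A minimal sketch of such a comparison done the Bayesian way, assuming PyMC and with purely illustrative priors, shows where the extra information comes from: a posterior for the difference, not just a verdict on the null.

        # Minimal Bayesian sketch of a small two-group comparison (PyMC).
        # Data and priors are illustrative, not a recommendation.
        import pymc as pm

        control   = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]
        treatment = [4.6, 4.9, 4.3, 5.1, 4.7, 4.8]

        with pm.Model():
            mu_c  = pm.Normal("mu_c", mu=4, sigma=2)      # prior for control mean
            mu_t  = pm.Normal("mu_t", mu=4, sigma=2)      # prior for treatment mean
            sigma = pm.HalfNormal("sigma", sigma=1)
            pm.Normal("obs_c", mu=mu_c, sigma=sigma, observed=control)
            pm.Normal("obs_t", mu=mu_t, sigma=sigma, observed=treatment)
            diff  = pm.Deterministic("diff", mu_t - mu_c)  # the quantity we care about
            trace = pm.sample(2000, tune=1000)

        # The posterior for `diff` says how large the difference is likely to be
        # and how sure we are, e.g. the probability that it is positive:
        print((trace.posterior["diff"] > 0).mean().item())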

    It’s often overlooked, but frequentist analysis also requires the data to have certain properties like normality and equal variance. Biological processes have complex behavior and, unless observed, assuming normality and equal variance is perilous. The danger only goes up with small sample sizes. Again, Bayes does not force these distributional assumptions onto your data. Whatever shape the distribution is, so-called outliers and all, it all goes into the model. Small sample sets do produce weaker fits, but this is kept transparent.

    Transparency is one of the key strengths of Bayesian stats. It requires you to work a little bit harder on two fronts though. First you have to think about your data generating process (DGP). This means thinking about how the data points you observe came to be. As we said, the process is often unknown. We have at best some guesses of how this could happen. Thankfully, we have a nice way to represent this. DAGs, directed acyclic graphs, are a fancy name for a simple diagram showing what affects what. Most of the time we are trying to discover the DAG, i.e. the pathway to a biological outcome. Even if you don’t do Bayesian stats, using DAGs to lay out your thoughts is a great exercise. In Bayesian stats the DAGs can be used to test whether your model fits the data we observe. If the DAG captures the data generating process the fit is good; if it doesn’t, the fit is poor.
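
    For example, a hypothesized DGP can be jotted down as a DAG in a few lines; the variables here are hypothetical stand-ins for a biological pathway.

        # Sketch: writing down a hypothesized data generating process as a DAG.
        # The variable names are hypothetical placeholders.
        import networkx as nx

        dag = nx.DiGraph()
        dag.add_edges_from([
            ("treatment", "protein level"),
            ("protein level", "cell growth"),
            ("batch", "protein level"),     # a suspected confounder
            ("batch", "cell growth"),
        ])

        assert nx.is_directed_acyclic_graph(dag)
        print(list(dag.predecessors("cell growth")))  # what we think affects the outcome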

    The other hard bit is doing the analysis and communicating the results. Bayesian stats forces you to be verbose about the assumptions in your model. This part is almost magicked away in t-tests. Frequentist stats also makes assumptions about the model that your data is assumed to follow. It all happens so quickly that there isn’t even a second to think about it. You put in your data, click t-test and woosh! You see stars. In Bayesian stats, stating the assumptions you make in your model (using DAGs and hypotheses about DGPs) communicates to the world what you think this phenomenon is and why you think it occurs.

    Discovering causality is the whole reason for doing science. Knowing the causality allows us to intervene in the form of treatments and drugs. But if my tools don’t allow me to be transparent and, worse, if they block people from correcting me, why bother?

    Richard McElreath says it best:

    There is no method for making causal models other than science. There is no method to science other than honest anarchy.

  • The secret flag of content

    I don’t have any fun when I use LLMs to write. It may have perceived utility: popping out a LinkedIn article or two every day. But I bet no one is actually reading. It’s a strip mall for a thumb stroll.

    LLMs suck at writing. The summaries that LLMs produce with “Deep Research” are so poor in quality that I start to skim them. Yes, I skim the thing that is already a summary.

    Let’s do a small experiment using Harry G. Frankfurt’s essay On Bullshit.

    Here are HGF’s opening lines: 

    “One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. Each of us contributes his share.”

    Now here is ChatGPT o3, with the advantage of having HGF in the training data and extra contextualizing from me (“theory of bullshit”):

    Prompt: write me an introductory paragraph on the theory of bullshit 

     Result: “Bullshit isn’t just lying gone sloppy; it’s a distinct communicative strategy built on indifference to truth. Where the liar knows the facts and works to conceal or invert them, the bullshitter treats truth-value as irrelevant raw material—selecting, mixing, or inventing claims to achieve an effect (status, alignment, vibe) rather than to inform. The result is epistemic pollution: signals that…”

    Look! It’s the secret flag of “content”:  “Blah isn’t just X; Blah is Y”.

    AI generated flag. As dead as the text on it is wrong.

    I cannot assimilate this, and it definitely can’t help me synthesize anything. Human writing is always better, even when it is poorly written. It has a certain aliveness, and that aliveness is contagious enthusiasm. Vibe coding I can understand: you can see bad output in the final result. Vibe coding still manages to change something in my mind: knowing what I don’t want.

    I don’t mind people using AI at all; I use it alllll the time. Writing with LLMs is just no fun. All this prompting and almost nothing changes in my mind. When an AI rearranges your thoughts, it does not rearrange your brain.

  • Divine Documentation

    Dad was about my age when he said that reading the manual was better than hypothesis-driven button pressing. For teenage me, that took too long. Sure, I may have crashed a computer or two, but following my gut got me there. Of course my gut isn’t that smart. In the preceding decades, devices had converged on a common pattern language of buttons. Once learned, the standard grammar of action would reliably deliver me to my destination.

    Image of a nebula taken by the Hubble Telescope.

    In programming I was similarly aided by the shared patterns across MATLAB, Python, R, Java, Julia, and even HTML. In the end, however, Dad was right. Reading documentation is the way. Besides showing correct usage, manuals create a new understanding of my problems. I am able to play with tech thanks to the people who took the effort and the care to create good documentation. This is not limited to code and AI. During the startup years, great handbooks clarified accounting, fundraising, and regulations, areas foreign to me.

    I love good documentation and I write documentation. Writing good documentation is hard. It is an exercise in deep empathy with my user. Reaching into the future to give them all they need is part of creating good technology. Often the future user is me and I like it when past me is nice to now me. If an expert Socratic interlocutor is like weight training, documentation is a kindly spirit ancestor parting the mist. 

    Maybe it’s something about being this age, but now I try to impart good documentation practices to my teams. I also do not discourage pressing buttons to see what happens. Inefficient, but discovery is a fun way to spark interest.

    Meanwhile, I’m reading a more basic kind of documentation: how to write English. Having resolved to write more, I’m discovering that words are buttons. Poking them gets me to where I want, but not always. Despite writerly ambitions, the basics are lacking. This became apparent recently when I picked up the book Artful Sentences by Virginia Tufte*. It’s two hundred and seventy pages of wonderful sentences dissected to show their mechanics. I was lost by page 5. The book is, temporarily, in my anti-library.

    So, I’m going back to the basics: Strunk and White, and William Zinsser. I’m hoping that Writing to Learn (finished) and On Writing Well (in progress) provide sufficient context about the reasons to write, so I can make the most of S&W for the how, and then, somewhere down the road, savor Tufte.

    * Those dastardly Tuftes are always making me learn some kind of grammar.

  • Beyond the Dataset

    On the recent season of the show Clarkson’s Farm, J.C. goes to great lengths to buy the right pub. As with any sensible buyer, the team does a thorough tear-down followed by a big build-up before the place is open for business. They survey how the place is built, located, and accessed. In their refresh they ensure that each part of the pub is built with purpose. Even the tractor on the ceiling. The art is in answering the question: How was this place put together?

    A data scientist should be equally fussy. Until we trace how every number was collected, corrected and cleaned—who measured it, what tool warped it, what assumptions skewed it—we can’t trust the next step in our business to flourish.

    Old sound (1925) painting in high resolution by Paul Klee. Original from the Kunstmuseum Basel Museum. Digitally enhanced by rawpixel.

    Two load-bearing pillars

    While there are many flavors of data science, I’m concerned with the analysis that is done in scientific spheres and startups. In this world, the structure is held up by two pillars:

    1. How we measure — the trip from reality to raw numbers. Feature extraction.
    2. How we compare — the rules that let those numbers answer a question. Statistics and causality.

    Both of these relate to having a deep understanding of the data generation process, each from a different angle. A crack in either pillar and whatever sits on top crumbles: plots, significance, and AI predictions mean nothing.

    How we measure

    A misaligned microscope is the digital equivalent of crooked lumber. No amount of massage can birth a photon that never hit the sensor. In fluorescence imaging, the point-spread function tells you how a pin-point of light smears across neighboring pixels; noise reminds you that light itself arrives, and is recorded, with at least some randomness. Misjudge either and the cell you call “twice as bright” may be a mirage.
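
    A toy sketch of this measurement chain, with a Gaussian stand-in for the PSF and purely illustrative numbers, shows how quickly the recorded intensity drifts from the truth.

        # Sketch of the measurement side of the DGP: a point of light blurred by the
        # PSF and recorded with shot noise. Numbers are illustrative.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        truth = np.zeros((64, 64))
        truth[32, 32] = 1000.0                      # a single pin-point emitter

        blurred  = gaussian_filter(truth, sigma=2)  # PSF approximated as a Gaussian
        measured = np.random.poisson(blurred)       # photon shot noise

        print(truth.max(), blurred.max(), measured.max())
        # The recorded peak is dimmer and noisier than the emitter itself --
        # judge brightness without modeling this and "twice as bright" can be a mirage.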

    In this data generation process the instrument nuances control what you see. Understanding this enables us to make judgements about what kind of post-processing is right and which kind may destroy or invent data. For simpler analysis the post-processing can stop at cleaner raw data. For developing AI models, this process extends to labeling and analyzing data distributions. Andrew Ng’s data-centric AI approach insists that tightening labels, fixing sensor drift, and writing clear provenance notes often beat fancier models.

    How we compare

    Now suppose Clarkson were to test a new fertilizer, fresh goat pellets, only on sunny plots. Any bumper harvest that follows says more about sunshine than about the pellets. Sound comparisons begin long before data arrive. A deep understanding of the science behind the experiment is critical before conducting any statistics. The wrong randomization, missing controls, and lurking confounders eat away at the foundation of statistics.
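
    A toy simulation makes the danger visible: below, the pellets have no effect at all, yet giving them only to sunny plots manufactures one, while randomization does not. All numbers are made up.

        # Toy simulation of the goat-pellet trial. Yield depends only on sunshine,
        # yet assigning pellets only to sunny plots makes them look effective.
        import numpy as np

        rng = np.random.default_rng(0)
        sunny  = rng.random(200) < 0.5
        yield_ = 2.0 + 1.5 * sunny + rng.normal(0, 0.3, 200)  # pellets have no effect

        pellets_confounded = sunny                       # pellets only on sunny plots
        pellets_randomized = rng.random(200) < 0.5       # pellets assigned at random

        for name, pellets in [("confounded", pellets_confounded),
                              ("randomized", pellets_randomized)]:
            effect = yield_[pellets].mean() - yield_[~pellets].mean()
            print(f"{name}: apparent pellet effect = {effect:.2f}")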

    This information is not in the data. Only understanding how the experiment was designed and which events preclude others enables us to build a model of the world of the experiment. Taking this lightly carries large risks for startups with limited budgets and smaller experiments. A false positive result leads to wasted resources, while a false negative presents opportunity costs.

    The stakes climb quickly. Early in the COVID-19 pandemic, some regions bragged of lower death rates. Age, testing access, and hospital load varied wildly, yet headlines crowned local policies as miracle cures. When later studies re-leveled the footing, the miracles vanished. 

    Why the pillars get skipped

    Speed, habit, and misplaced trust. Leo Breiman warned in 2001 that many analysts chase algorithmic accuracy and skip the question of how the data were generated. What he called the “two cultures.” Today’s tooling tempts us even more: auto-charts, one-click models, pretrained everything. They save time—until they cost us the answer.

    The other issue is the lack of a culture that communicates and shares a common language. Only in academic training is it possible to train a single person to understand the science, the instrumentation, and the statistics sufficiently that their research may be taken seriously. Even then we prefer peer review. There is no such scope in startups. Tasks and expertise must be split. It falls to the data scientist to ensure clarity and to collect information horizontally. It is the job of the leadership to enable this or accept dumb risks.

    Opening day

    Clarkson’s pub opening was a monumental task with a thousand details tracked and tackled by an army of experts. Follow the journey from phenomenon to file, guard the twin pillars of measure and compare, and reinforce them with careful curation and an open culture. Do that, and your analysis leaves room for the most important thing: inquiry.

  • Chatbots, Bats & Broken Oracles

    I had the strangest conversation with my son today. There used to be a time when computers never made a mistake. It was always the user that was in error. The computer did exactly what you asked it to do. If something went wrong it was you, the user, who didn’t know what you wanted. After decades of that being etched in, today I found myself telling him that computers make mistakes, that you have to check whether the computer has done the right thing, and that this is actually OK. A computer that hallucinates also provides a surface for exploration and for seeking answers to questions.

    Boys Wading (1873) by Winslow Homer. Original public domain image from National Gallery of Art

    In her book Open Socrates, Agnes Callard draws our attention to the differences between problems and questions. I’ll get to those in a bit, but the fundamental realization I had was that until recently all we could use computers (CPUs, spreadsheets, the internet) for was solving problems. This started all the way back with Alan Turing when he designed the Turing test. He turned the question of what it is to think into the problem of how to detect thought. As Callard mentions, LLMs smash the Turing test but we still can’t quite accept the result as proof of thinking. What is thinking then? What are problems? What are questions? How do we answer questions?

    Problems are barriers that stand in your way when you are trying to do something. You want to train a deep learning algorithm to write poetry; how to get training data is a problem. You want something soothing for lunch; getting the recipe for congee is the problem. The critical point here is that as soon as you have the solution, the data, the recipe, the problem disappears. This is the role of technology.

    When we work with computers to solve problems we are essentially handing off the task to the computer, without caring whether the computer wants to, or even can want to, write poetry or have a nice lunch. So we ask the LLM to write code; we command Google to give us a congee recipe. Problems don’t need a shared purpose, only methods to solve them to our satisfaction. Being perpetually dissatisfied with existing answers is the stance of science.

    Science and technology are thus tools to move towards dealing with questions. Unlike problems, which dissolve when you solve them, questions give you a new understanding of the world. The thing with asking questions is that there is no established way, at least in your current state, to answer them. Thus asking a question is the first step of starting a quest. In science the quest is a better understanding of something, and you use technology along the way to dissolve problems that stand in your way.

    AI lets us explore questions with, rather than merely through, computers. Granted, the most common use of AI is still to solve problems, but LLMs and their ability to carry on a back-and-forth chat in natural language do provide the affordance to ask questions. Especially the kind that seem to come pre-answered, because we are operating from a posture where not having an answer would dissolve the posture altogether.

    The Socratic Co-pilot

    As a scientist, the question “what is it to be a good scientist?” comes pre-answered for me. Until I am asked this question I have not really thought about it but rush to provide answers. Scientists conduct experiments carefully, they know how to use statistics, they publish papers, and so on. However, this still does not answer what it is to be a good scientist. Playing this out with an AI, I assert “rigorous statistics,” the AI counters with an anecdote about John Snow’s cholera map, and I’m forced to pivot. None of these by itself answers the root question, but the exchange generates problems which can be answered or agreed on. This is knowledge.

    Knowledge draws boundaries, or as I have explored earlier, creates places around the space that you wish to explore. In the space of “being a good scientist”, we can agree that the use of the scientific method is an important factor. Depending on who you are, this could be the end of the quest.

    Even if no methodology exists for a given problem, simply approaching it with an inquisitive posture creates a method, however crude. In What Is It Like to Be a Bat?, Thomas Nagel tackles an impossible-to-solve problem but a great question, through a thought experiment. If I were to undertake this, I might try to click in a dark room, hang upside down. Okay, maybe not the last bit, but only maybe. Even this crude approach has now put me in the zone to answer the problem. Importantly, my flapping about has created surface area where others can criticize, as Nagel was criticized. Perhaps future brain-computer-interface chips will actually enable us to be a bat. However, lacking such technology, this is better than nothing as long as you are interested in inquiring about bat-ness.

    This kind of inquiry, this pursuit of answering questions, is thinking. Specifically, as Callard puts it, thinking is “a social quest for better answers to the sorts of questions that show up for us already answered”. Breaking that down further: it’s social because it’s done with a partner who disagrees with you because they have their own views about the question. It’s a quest because both parties are seeking knowledge. The last bit, about questions being already answered, is worth exploring a bit.

    Why bother answering questions you already have answers to? This is trivial to refute when you know nothing about a subject. For example, let’s say you knew nothing about gravity and your answer to why you are stuck to the earth is that we are beings of the soil and to the soil we must go; the soil always calls us. If that is the worldview, then you already have the answer. The only way to arrive at a better answer, gravity, is to have someone question you on the matter, refuting specific points based on their own points of view. This may come in the form of a conversation, a textbook, a speech, etc. I suspect this social role may soon be played by AI.

    Obviously hallucinations themselves aren’t great, but the ability to hallucinate is. In the coming years I expect AI will gain significant amounts of knowledge access, not just in the form of training but in the form of reference databases containing data broadly accepted as knowledge. In the process we will probably have to undergo significant social pains to agree on what constitutes Established Knowledge. Such a system will enable LLMs to play the role of Socrates and help the user avoid falsehoods by questioning the beliefs the user holds.

    Until now computers couldn’t play this role because there wasn’t enough “humanness” involved. In the bat example, a bat cannot serve as Socrates or as the interlocutor to a human partner because there isn’t a shared world view. LLMs, trained on human-generated knowledge, would have enough in common to provide a normative mirror. The AI comes with the added benefit of having both infinite patience and no internal urge to be right. This would allow the quest to provide an answer that is satisfactory to the user searching at every level of understanding. LLMs can be useful even before they gain the ability to access established knowledge. Simply by providing a surface on which to hang questions, they can help the user become adept at the art of inquiry.

    So the next time you have a chat with your pet AI understand that it starts as a session of pure space. Each word we put in ties down the AI to specific vantage points to help us explore. Go ahead—pick a question you think you’ve already answered and let the machine argue with you.

  • We Need Homes in the Delta Quadrant

    Place is security, space is freedom.
    Yi-Fu Tuan

    Starfleet Log, Delta Quadrant—Classified Briefing

    At the edge of the known, maps fail and instincts take over. We don’t just explore new worlds—we build places to survive them. Because in deep space, meaning isn’t found. It’s made.

    I. Interruption of Infinity

    The Delta Quadrant is a distant region of the galaxy in the Star Trek universe—vast, largely uncharted, and filled with anomalies, dangers, and promise. It is where the map ends and the unknown begins. No stations, no alliances, no history—just possibility.

    And yet, possibility alone is not navigable. No one truly explores a void. We only explore what we can orient ourselves within. That is why every journey into the Delta Quadrant begins not with motion, but with homebuilding—the act of constructing something steady enough to make movement meaningful.

    This is not a story about frontiers. It is a story about interruptions.

    To build a home is to interrupt space.
    To be born is to interrupt infinity.

    Consciousness does not arise gently. It asserts. It carves. It says: Here I am. The conditions of your birth—your geography, your culture, your body—are not mere facts. They are prenotions: early constraints that allow orientation. They interrupt the blur of everything into something—a horizon, a doorway, a room.

    Francis Bacon wrote that memory without direction is indistinguishable from wandering. We do not remember freely; we remember through structures. We do not live in space; we live through place. Philosopher Kei Kreutler expands this insight: artificial memory—our rituals, stories, and technologies—is not a container for infinity. It is a deliberate break in its surface, a scaffolding that lets us navigate the unknown.

    Like stars against the black, places puncture the undifferentiated vastness of space. They do not merely protect us from chaos; they make chaos legible. Before GPS, before modern maps, people made stars into stories and stories into guides. Giordano Bruno, working in the Hermetic tradition, saw constellations as talismans—anchoring points in a metaphysical sky. In India, astronomy and astrology were entwined, and the nakshatras—lunar mansions—offered symbolic footholds in the night’s uncertainties. These were not just beliefs. They were early technologies of place-making.

    Without a place, you are not lost—you are not yet anywhere.

    And so, to explore the Delta Quadrant—to explore anything—we must first give it a place to begin.
    Not just a structure, but a home.
    Not just shelter, but meaning.

    II. From Vastness to Meaning

    To understand why we need homes in the Delta Quadrant, we must first understand what it means to be in any space at all. Not merely to pass through it, but to experience it, name it, shape it—to transform the ungraspable into something known, and eventually, something lived.

    This section traces that transformation. It begins with space—untouched, undefined—and follows its conversion into place, where identity, memory, and meaning can take root. Along the way, we consider the roles of perception, language, and tools—not just as instruments of survival, but as the very mechanisms by which reality becomes navigable.

    We begin where we always do: in the unmarked vastness.

    What is Space?

    Space surrounds us, yet refuses to meet our gaze. It is not a substance but a condition—timeless, uncaring, and full of potential. It offers no direction, holds no memory. Nothing in it insists on being noticed. Space simply waits.

    Henri Lefebvre helps us make our first move toward legibility. He proposes that all space emerges through a triad: the representations of space—the conceptual abstractions of cartographers, economists, and urban planners; the spatial practices of everyday life—our habits of movement and arrangement; and representational spaces—the dreamlike, lived realities saturated with memory, symbol, and emotion. Yet in modernity, it is the first of these—abstract space—that dominates. Space is planned, capitalized, monetized. It becomes grid and zone, not story or sanctuary.

    Still, even this mapped and monetized space is not truly empty. Doreen Massey reminds us that space is not inert. It is relational, always in flux, co-constituted by those who traverse it. Space may not hold memories, but it does hold tensions. A room shifts depending on who enters it. A street corner lives differently for each passerby. What appears static from orbit is endlessly alive on foot.

    We might then say: space is not blank—it is waiting. It is the stage before the script, the forest before the trail, the soundscape before the melody. It is possibility without orientation.

    And yet, we cannot live on possibility. To dwell requires more than openness. Something must be placed. Something must be remembered.

    What is Place?

    Place begins when space is interrupted—when the unformed becomes familiar, when pattern gathers, when time slows down enough to matter. Where space is potential, place is presence.

    Yi-Fu Tuan called place “an ordered world of meaning.” This ordering is not merely logical—it is affective, mnemonic, embodied. Place is not only where something happens; it is where something sticks. The repeated use of a corner, the ritual return to a path, the naming of a room—all of these actions layer memory upon memory until a once-anonymous space becomes deeply, even invisibly, ours.

    Edward Casey expands this view by proposing that place is not a passive container of identity, but a generator of it. Who we are emerges from where we are. The self is not constructed in a vacuum, but shaped by kitchens and classrooms, alleyways and attics. A place is a crucible for becoming.

    And places are not necessarily large or fixed. Often they are forged in fragments—through a method of thought called parataxis, the act of placing things side by side without hierarchy or explanation. Plates, tables, menus—listed without commentary—already conjure a restaurant. North is the river, east is the village: already we are somewhere. This act of spatial poetry, what might be called topopoetics, allows us to construct coherence from adjacency. A place need not be explained to be felt.

    Moreover, places are not isolated islands. They are defined as much by what they touch as by what they contain. A healthcare startup, for instance, is not merely a business plan or a piece of code—it is a bounded intersection of regulation, culture, user need, and infrastructural possibility. Its identity as a place emerges through tension, not through self-sufficiency.

    To make a place, then, is to draw a boundary—not always of stone, but always of meaning. And once there is a boundary, there is the possibility of crossing it.

    Exploration and Navigation

    If place is what interrupts space, exploration is the means by which that interruption unfolds. We explore to understand, to locate, to claim. But we also explore to survive. In an unmarked world, movement without orientation is not freedom—it is drift.

    The act of exploration is always mediated by tools—technologies, heuristics, protocols, even rituals. A tool transforms a space into something workable, sometimes by revealing it, sometimes by resisting it. The ax makes the forest navigable. The microscope transforms skin into data. A recipe, too, is a tool: it arranges the chaos of the kitchen into a legible field of options.

    Skill determines the fidelity of this transformation. A novice with a saw sees wood; a carpenter sees potential. A goldsmith with pliers explores more in an inch of metal than a layman can in a bar of gold. Tools extend reach, but skill gives them resonance.

    Rules of thumb emerge here as quietly powerful. They encode accumulated wisdom without demanding full explanation. A rule of thumb is a kind of portable place—a local memory that survives relocation. It allows someone to move meaningfully through new terrain without starting from nothing.

    But perhaps the oldest, and most powerful, tool of place-making is language. To name something is to summon it into experience. A name makes the unspeakable speakable, the abstract navigable. Storytelling is not merely entertainment—it is cartography. Myth and memory alike help us place ourselves. Rituals, in this light, become recurring acts of alignment: a way to rhythmically convert time and action into a felt geography.

    In early computer games like Zork, entire worlds were constructed out of pure language. “To the west is a locked door.” “To the north, a forest.” With no images at all, a mental geography emerged. Place formed from syntax. And in open-world games, which promise limitless exploration, boundaries remain—defined not by terrain, but by tools and capabilities. One may see a mountain, but until one has a grappling hook, the mountain is not truly in reach.

    This is the double truth of exploration: it reveals, but also restricts. Every tool has affordances and blind spots. Every method of navigation makes some routes legible and others obscure.

    And so, just as place makes meaning possible, it also makes power visible. When we explore, we choose where to go—but also where not to go. When we name, we choose what to name—and what to leave unnamed. With each act of orientation, something is excluded.

    This is where the ethical tensions begin.

    III. Violence, Power, Custodianship

    The Violence of Exploration

    To make a place is never a neutral act. It is always a form of imposition, a declaration that one configuration of the world will take precedence over another. Every boundary drawn reorders the field of possibility. In this sense, exploration—often romanticized as the pursuit of discovery—is inseparable from the logic of exclusion. The forest cleared for settlement, the land renamed by the cartographer, the dataset parsed by an algorithm: each gesture selects a future and discards alternatives. Place-making is not only constructive—it is also extractive.

    Achille Mbembe’s concept of necropolitics offers a stark rendering of this dynamic. For Mbembe, the most fundamental expression of power is the authority to determine who may live and who must die—not just biologically, but spatially. A person denied a stable place—be it in legal terms, economic structures, or cultural recognition—is exposed to systemic vulnerability. They are rendered invisible, disposable, or subject to unending surveillance. In this framework, place becomes not a refuge but a rationed privilege, administered according to hierarchies of race, class, and citizenship. To be placeless is to be exposed to risk without recourse.

    David Harvey arrives at a similar critique from a different angle. For Harvey, the production of space under capitalism is inherently uneven. Capital concentrates selectively, building infrastructure, institutions, and visibility in certain regions while leaving others disinvested, fragmented, or erased. Some places are made to flourish because they are profitable; others are sacrificed because they are not. Entire neighborhoods, cities, and ecosystems are subjected to cycles of speculative construction and abandonment. In this schema, place is commodified—not lived. It becomes a product shaped less by the needs of its inhabitants than by the imperatives of financial flows.

    Who Gets to Make Place?

    Even at smaller scales, the ethics of place-making hinge on who holds the authority to define what a place is and who belongs within it. The naming of a school, the zoning of a district, the design of a product interface—each involves not only inclusion, but exclusion; not only clarity, but control. The map that makes one community legible can make another invisible. Orientation, in this sense, is never free of consequence. It is always tethered to power.

    If this is the cost of exploration, then the question we must ask is not simply whether to build places—but how, and for whom.

    Those who create the tools through which places are made—architects, technologists, platform designers—wield a power that is both formative and silent. In shaping the conditions under which others navigate the world, they act as unseen cartographers. A navigation app determines which streets appear safe. A job platform defines whose labor is visible. A software protocol decides who is legible to the system. In each case, someone has already made a decision about what kind of world is possible.

    This asymmetry between creator and user has led some to argue that ethical design requires more than usability—it requires an ethos of custodianship. The act of place-making must be informed not only by technical possibility, but by moral imagination. A well-designed place is not simply functional—it is inhabited, sustained, and responsive to the people who live within it.

    Michel Foucault offers a vocabulary for this through his concept of heterotopias: places that operate under a different logic, outside the dominant spatial order. These may be institutional—cemeteries, prisons, libraries—or insurgent—subcultures, autonomous zones, speculative games. Heterotopias do not merely resist the prevailing map; they reveal that other maps are possible. They function as mirrors and distortions of the dominant world, reminding us that the spatial order is neither natural nor inevitable.

    Yet even heterotopias cannot be engineered wholesale. They must be lived into being. This is the insight offered by Christopher Alexander and, more recently, Ron Wakkary in their explorations of unselfconscious design. Good places, they argue, are rarely planned top-down. Instead, they emerge from a slow dance between structure and improvisation. A fridge becomes a family bulletin board. A courtyard becomes a marketplace. A piece of software becomes an unanticipated ritual. In these cases, fit emerges not from specification but from accumulated use. Design, at its best, enables this evolution rather than constraining it.

    To make a place, then, is not to finalize it. It is to initiate a relationship. The designer, the founder, the engineer—each acts as a temporary steward rather than a sovereign. The real test of their creation is not how complete it feels on launch day, but how it adapts to the people who enter it and make it their own. This is the quiet responsibility of custodianship: to create with humility, to listen after building, and to recognize that places do not succeed by force of vision alone. They succeed by making others feel, at last, that they belong.

    IV. Fractal Place-Making

    We often think of place-making as a singular act—a line drawn, a structure raised, a tool released. But in truth, places are rarely built in one gesture. They are shaped recursively, iteratively, across layers and scales. A place is not simply made once—it is continuously remade, revised, and reinhabited. If power animates the creation of place, then care animates its persistence.

    The previous section examined how place-making implicates violence and authority. This one turns inward, offering tools to see place-making not as an external imposition, but as a continuous, generative practice—one we each participate in, often unconsciously. Places are not only geopolitical or architectural. They emerge in routines, in interfaces, in sentences, in rituals. They are as present in the layout of a city as in the arrangement of a desktop or the structure of a daily habit.

    Place-making, in this light, becomes fractal.

    Spaces All the Way Down

    Every place, no matter how concrete or intentional, overlays a prior space. A home rests on a plot of land that once held other meanings. A software tool is coded atop prior protocols, abstractions, languages. A startup’s culture is built not from scratch, but from accumulated social assumptions, inherited metaphors, and the ghosts of previous institutions. No place begins in a vacuum. It begins by coalescing around an earlier ambiguity.

    To say “it’s spaces all the way down” is not a paradox but a recognition: that all our structuring of the world rests on foundations that were once unstructured. And those, in turn, rest on others. Beneath every home is a history. Beneath every habit is a choice. Beneath every heuristic is an unspoken story of why something worked once, and perhaps still does.

    This recursive layering reveals something crucial. Place is not just what we inhabit—it is what we build upon, often without seeing the full depth of what came before. When we set up a calendar system, when we define an onboarding process, when we reorganize a room or refactor code, we are engaging in acts of recursive place-making. These are not trivial gestures. They encode our assumptions about time, labor, clarity, worth. And in doing so, they scaffold the next set of moves. What feels natural is often just deeply buried infrastructure.

    Traditions, Tools, and Temporal Sediments

    Much of what makes a place stable over time is not its physicality but its rhythm. What repeats is remembered. What is remembered becomes legible. Over time, the sediment of repetition builds tradition—not as nostalgia, but as a living scaffolding.

    Rules of thumb are examples of such traditions, compacted into portable epistemologies. They are not universal truths, but local condensations of experience: “Measure twice, cut once.” “If it’s not a hell yes, it’s a no.” “Always leave a version that works.” These are not mere slogans. They are the crystallization of hundreds of micro-failures, carried forward in language so that others may avoid or adapt. A rule of thumb is a place you can carry in your mind—a place where you briefly borrow the perspective of others, where their past becomes your foresight.

    Ethnographic engineering—the practice of living among those you design for—extends this logic. It is not enough to ask what users want; one must become a user. To understand a kitchen, you must cook. To redesign a hospital intake form, you must sit beside a nurse at the end of a long shift. Inhabitance precedes insight. It is not empathy as abstraction, but as situated knowledge. This is why the mantra “get out of the building” matters. It invites designers to enter someone else’s place—and to temporarily surrender their own.

    Even the way we recover from failure carries spatial weight. In systems design, crash-only thinking proposes that recovery should not be exceptional but routine. A system should not pretend to avoid breakdown—it should assume it, and handle it gracefully. This principle translates beyond code. Our identities, too, are shaped by rupture and repair. We are the residue of what survives collapse. To rebuild after a crash is to reassert a place for oneself in the world—to refuse exile, to restart with a new contour of legibility. The self is a recursive place, constantly reformed by continuity and failure.

    Imagined Places, Real Consequences

    Not all places are made of walls or workflows. Some are conjured in thought but anchor entire worlds in practice. These are imagined places—places held in common through language, ritual, and belief—and their effects are no less material for being constructed.

    Benedict Anderson’s theory of imagined communities describes the nation as precisely such a place: a social structure that exists because enough people believe in its coherence. A country is not simply a set of borders—it is a shared imagination of belonging, reinforced by rituals as small as singing an anthem or using the same postal code. These rituals do not merely express the nation—they enact it. The community persists not because everyone knows each other, but because they believe in the same structure of place.

    Gaston Bachelard, writing of intimate places, adds another layer. His Poetics of Space reveals how rooms, nests, and thresholds function not just architecturally, but symbolically. A staircase is not just a connector between floors—it is a memory channel. A drawer is not just storage—it is a metaphor for secrecy. Through repeated use and emotional investment, even the smallest corners of a home can become vast interior landscapes.

    Designers who ignore this symbolic dimension risk creating tools that are frictionless but placeless. A well-designed app may guide a user efficiently, but if it lacks metaphor, texture, or resonance, it will not endure. By contrast, even ephemeral tools—when shaped with care—can become anchoring places. A text editor that respects rhythm. A ritualized way of closing the day. A naming convention that makes each project feel storied rather than serialized. These are small acts, but they echo. They accumulate. They become sediment.

    Recursive place-making, then, is not about grandeur. It is about fidelity. It is about recognizing that every small act of shaping the world—every pattern set, every name given, every recovery ritualized—is part of a larger unfolding. Place is not a one-time gift. It is a continuous offering.

    V. Homes at the Edge of the Known

    Places don’t just emerge from space—they transform it. A well-made place doesn’t only make sense of what is; it makes new things possible. It reframes what we pay attention to, how we act, and who we become. Place is not the end of exploration—it is the start of imagination.

    Each time we build a place, we alter the shape of the surrounding space. A room becomes a lab, a garage becomes a company, a notebook becomes a worldview. These shifts ripple outward. Identity follows structure. Tools reorganize desire. Suddenly what felt unreachable becomes thinkable. New directions appear.

    This is why the Delta Quadrant matters. In Star Trek, it is the quadrant at the far edge of the map: unvisited, unaligned, untamed. But we all have our own Delta Quadrants—those domains where orientation fails. The new job. The new field. The social unknown. We don’t need to conquer these spaces. We need to inhabit them.

    Building a home in the Delta Quadrant means giving shape to uncertainty. Not through control, but through commitment. Homes are not fortresses—they are launchpads. They anchor us without confining us. They give us somewhere to return to, so we can go further.

    To build such homes is to design for possibility. It is to accept that the unknown will always outpace our frameworks, and to meet it not with fear, but with grounded generosity. Homes enable freedom not by removing constraints, but by embedding care in structure. They show us that discovery and dignity are not opposites—they are partners.

    And yes, building these homes will be messy. There will be diplomacy with space jellyfish. There will be moral conundrums involving time loops and malfunctioning replicators. Someone will definitely rewire the main console so the espresso machine can detect tachyon emissions.

    But we’ve seen worse. That’s the job.

  • Dwarf Fortress, Emacs, & AI: The allure of generative complexity

    There is a shared soul shard between Dwarf Fortress, Emacs, and AI that lured me to them and has kept me engaged for over a decade. For a long time, I struggled to articulate the connection, managing only to describe Dwarf Fortress as the Emacs of games. But this analogy, while compelling, doesn’t fully capture the deeper resonance these systems share. They are not merely complicated; they are complex—tools for creativity that reward immersion and exploration.

    Zunzar Machi at Torna – Wikipedia

    Complicated, Complex, Dev.

    To understand the allure, let’s revisit the distinction between complicated and complex. Complicated systems, say a spinning-disk microscope, consist of interlocking parts (each with internal complications) that interact in predictable ways. They require technical expertise to master, but their behavior remains largely deterministic and I tire of them soon.

    Complex systems, see the Cynefin framework, exhibit emergent behavior. Their value/fun lies in the generative possibilities they unlock rather than the sum of their parts.

    Dwarf Fortress, Emacs, and AI live on the froth of this complexity. None of these systems exist as ends in themselves. You don’t play Dwarf Fortress to achieve a high score (there isn’t one, you eventually lose). You don’t use Emacs simply to edit text, and you don’t build AI to arrange perceptrons in aesthetically pleasing patterns. These are platforms, altars for creation. Dev environments.

    In Emergence We Trust

    Like language with the rules of poetry, these environments are generative places enabling exploration of emergent spaces. Emergence manifests not only in the software but also in you. There is always a point where you find yourself thinking, I didn’t expect I could do that. In Dwarf Fortress you first fight against tantrum spirals and then, through mastery, against FPS death. Similarly, Emacs enables workflows that evolve over time, as users build custom functions and plugins to fit their unique needs. In AI, emergence arrives rather late but it’s there. Putting together datasets, training them, optimizing, starting over, are complicated but not complex per se. The complexity (and emergence) is in the capabilities of the trained network. Things infinitely tedious or difficult are a few matrix multiplications away.

    This desire for emergence is spelunking. It rewards curiosity and experimentation but demands patience and resilience. Mastery begins with small victories: making beer in Dwarf Fortress, accessing help in Emacs, or implementing a 3-layer neural network. Each success expands your imagination. The desire to do more, to push the boundaries of what’s possible, becomes an endless rabbit hole—one that is as exhilarating as it is daunting.

    Complexity as a Gateway to Creativity

    The high complexity of these systems—their vast degrees of freedom—opens the door to infinite creativity. This very openness, however, can be intimidating. Confronted with the sprawling interface of Emacs, the arcane scripts of Dwarf Fortress, or the mathematical abstractions of AI, it’s tempting to retreat to the familiar. Yet this initial opacity is precisely what makes these systems so rewarding. Engaging with something that might blow up in your face—whether it’s drunk cats, a lisp error, or an exploding gradient—makes you want to give up.

    But just then you have an idea: what if you tried this…

    Awaken, H. ludens.

  • AI-generated image of The old Doge Enrico Dandolo sacking Constantinople

    I’m taking part in the Contraptions Book Club, where we are reading City of Fortune, which is about Venice. I was struck by the character of Doge Dandolo. Dude was 80+ when he saw a trade opportunity in the Fourth Crusade. In the book, the author, Roger Crowley, describes a brief moment when Dandolo makes a heroic rush on the banks of Constantinople’s Golden Horn during the Sack of Constantinople.

    I found both the Doge and the imagery interesting, so I went looking for art depicting the scene; there’s supposed to be lots. Unfortunately I couldn’t find any, and nothing in the public domain. So I asked AI to generate something.

    There are other paintings like the one below, but not the one I was looking for.

    The siege of Constantinople in 1204, by Palma il Giovane

  • The Universal Library in the River of Noise


    Few ideas capture the collective human imagination more powerfully than the notion of a “universal library”—a singular repository of all recorded knowledge. From the grandeur of the Library of Alexandria to modern digital initiatives, this concept has persisted as both a philosophical ideal and a practical challenge. Miroslav Kruk’s 1999 paper, “The Internet and the Revival of the Myth of the Universal Library,” revitalizes this conversation by highlighting the historical roots of the universal library myth and cautioning against uncritical technological utopianism. Today, as Wikipedia and Large Language Models (LLMs) like ChatGPT emerge as potential heirs to this legacy, Kruk’s insights—and broader reflections on language, noise, and the very nature of truth—resonate more than ever.


    The myth of the universal library

    Humanity has longed for a comprehensive archive that gathers all available knowledge under one metaphorical roof. The Library of Alexandria, purportedly holding every important work of its era, remains our most enduring symbol of this ambition. Later projects—such as Conrad Gessner’s Bibliotheca Universalis (an early effort to compile all known books) and the Enlightenment’s encyclopedic endeavors—renewed the quest for total knowledge. Francis Bacon famously proposed an exhaustive reorganization of the sciences in his Instauratio Magna, once again reflecting the aspiration to pin down the full breadth of human understanding.

    Kruk’s Historical Lens  

    This aspiration is neither new nor purely technological. Kruk traces the “myth” of the universal library from antiquity through the Renaissance, revealing how each generation has grappled with fundamental dilemmas of scale, completeness, and translation. According to Kruk,

    inclusivity can lead to oceans of meaninglessness

    The library on the “rock of certainty”… or an ocean of doubt?

    Alongside the aspiration toward universality has come an ever-present tension around truth, language, and the fragility of human understanding. Scholars dreamed of building the library on a “rock of certainty,” systematically collecting and classifying knowledge to vanquish doubt itself. Instead, many found themselves mired in “despair” and questioning whether the notion of objective reality was even attainable. As Kruk’s paper points out,

    The aim was to build the library on the rock of certainty: We finished with doubting everything … indeed, the existence of objective reality itself.

    Libraries used to be zero-sum

    Historically,

    for some libraries to become universal, other libraries have to become ‘less universal.’

    Access to rare books or manuscripts was zero-sum; a collection in one part of the world meant fewer resources or duplicates available elsewhere. Digitization theoretically solves this by duplicating resources infinitely, but questions remain about archiving, licensing, and global inequalities in technological infrastructure.


    Interestingly, Google was founded in 1998, just as Kruk’s 1999 paper was nearing publication. In many ways, Google’s search engine became a “library of the web,” indexing and ranking content to make it discoverable on a scale previously unimaginable. Yet it is also a reminder of how quickly technology can outpace our theoretical frameworks: perhaps Kruk couldn’t have known about Google without Google. Something something the future is already here…

    Wikipedia: an oasis island

    Wikipedia stands as a leading illustration of a “universal library” reimagined for the digital age. Its open, collaborative platform allows virtually anyone to contribute or edit articles. Where ancient and early modern efforts concentrated on physical manuscripts or printed compilations, Wikipedia harnesses collective intelligence in real time. As a result, it is perpetually expanding, updating, and revising its content.

    Yet Kruk’s caution holds: while openness fosters a broad and inclusive knowledge base, it also carries the risk of “oceans of meaninglessness” if editorial controls and quality standards slip. Wikipedia does attempt to mitigate these dangers through guidelines, citation requirements, and editorial consensus. However, systemic biases, gaps in coverage, and editorial conflicts remain persistent challenges—aligning with Kruk’s observation that inclusivity and expertise are sometimes at odds.

    LLMs – AI slops towards the perfect library

    Where Wikipedia aspires to accumulate and organize encyclopedic articles, LLMs like ChatGPT offer a more dynamic, personalized form of “knowledge” generation. These models process massive datasets—including vast portions of the public web—to generate responses that synthesize information from multiple sources in seconds. In a way this almost solves one of the sister aims of the perfect library, the perfect language, where embeddings serve as a stand-in for perfect words.

    The perfect language, on the other hand, would mirror reality perfectly. There would be one exact word for an object or phenomenon. No contradictions, redundancy or ambivalence.


    The dream of a perfect language has largely been abandoned. As Umberto Eco suggested, however, the work on artificial intelligence may represent “its revival under a different name.” 
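
    As a rough illustration of that idea, embeddings place related meanings near each other in a vector space. The tiny vectors below are made up for illustration; real models use hundreds or thousands of dimensions.

        # Sketch of embeddings as approximate "perfect words": nearby vectors
        # stand for nearby meanings. These 3-d vectors are invented for illustration.
        import numpy as np

        vectors = {
            "library": np.array([0.9, 0.1, 0.3]),
            "archive": np.array([0.8, 0.2, 0.4]),
            "banana":  np.array([0.1, 0.9, 0.2]),
        }

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        print(cosine(vectors["library"], vectors["archive"]))  # high: related meanings
        print(cosine(vectors["library"], vectors["banana"]))   # low: unrelated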

    The very nature of LLMs highlights another of Kruk’s cautions: technological utopianism can obscure real epistemological and ethical concerns. LLMs do not “understand” the facts they present; they infer patterns from text. As a result, they may produce plausible-sounding but factually incorrect or biased information. The quantity-versus-quality dilemma thus persists.

    Noise is good actually?

    Although the internet overflows with false information and uninformed opinions, this noise can be generative—spurring conversation, debate, and the unexpected discovery of new ideas. In effect, we might envision small islands of well-curated information in a sea of noise. Far from dismissing the chaos out of hand, there is merit in seeing how creative breakthroughs can emerge from it. The gold of chemistry from leaden alchemy.

    Concerns persist: the existence of misinformation, bias, and AI slop invites us to exercise editorial diligence to sift through the noise productively. It also echoes Kruk’s notion of the universal library as something that “by definition, would contain materials blatantly untrue, false or distorted,” thus forcing us to navigate “small islands of meaning surrounded by vast oceans of meaninglessness.”

    Designing better knowledge systems

    Looking forward, the goal is not simply to build bigger data repositories or more sophisticated AI models, but to integrate the best of human expertise, ethical oversight, and continuous quality checks. Possible directions include:

    1. Strengthening Editorial and Algorithmic Oversight:

    • Wikipedia can refine its editorial mechanisms, while AI developers can embed robust validation processes to catch misinformation and bias in LLM outputs.

    2. Contextual Curation:  

    • Knowledge graphs are likely great bridges between curated knowledge and generated text.

    3. Collaborative Ecosystems:  

    • Combining human editorial teams with AI-driven tools may offer a synergy that neither purely crowdsourced nor purely algorithmic models can achieve alone. Perhaps this process could be made more efficient by adding a knowledge-base-driven simulation (see last week’s links) of the editors’ intents and purposes.

    A return to the “raw” as opposed to the social-media-cooked version of the internet might be the trick after all. Armed with new tools we can (and should) create meaning. In the process Leibniz might get his universal digital object identifier after all.

    Compression progress as a fundamental force of knowledge

    Ultimately, Kruk’s reminder that the universal library is a myth—an ideal rather than a finished product—should guide our approach. Its pursuit is not a one-time project with a definitive endpoint; it is an ongoing dialogue across centuries, technologies, and cultures. As we grapple with the informational abundance of the digital era, we can draw on lessons from Alexandria, the Renaissance, and the nascent Internet of the 1990s to inform how we build, critique, and refine today’s knowledge systems.

    Refine so that tomorrow, maybe literally, we can run reclamation projects in the noisy sea.


    Image: Boekhandelaar in het Midden-Oosten (1950 – 2000) by anonymous. Original public domain image from The Rijksmuseum