Category: Blog Posts

  • AI: Explainable Enough

    They look really juicy, she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep and asserting a position. Good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn’t new to the problem.

    Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.

    Coming from a microscopy and bio background with a strong inclination towards image analysis, I had picked up deep learning as a way to be lazy in the lab. Why bother figuring out features of interest when you can have a computer do it for you, was my angle. The issue was that in 2015 no biologist would accept any kind of deep learning analysis, and definitely not if you couldn’t explain the details.

    What the domain expert user doesn’t want:
    – How a convolutional neural network works. Confidence scores, loss, and AUC are all meaningless to a biologist, and also to a doctor.

    What the domain expert desires: 
    – Help at the lowest level of detail that they care about. 
    – AI that identifies features A, B, and C, and that when you see A, B, & C together it is likely to be disease X.

    Most users don’t care how deep learning really works. So, if you start giving them details like the IoU score of the object detection bounding box, or whether it was YOLO or R-CNN that you used, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline, with the predicted label, and stop there. It’s also bad to go to the other extreme. If the AI just states the diagnosis for the whole image then the AI might be right, but the user does not get to participate in the process. Not to mention regulatory risk goes way up.
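
    For what it’s worth, here is a minimal sketch of what “just enough” can look like in code: draw the region and a plain-language label, and keep the IoU scores and architecture names out of the UI. This is not the pipeline from the interview; the detection dict, the feature name, and the image file are made-up placeholders.

      # A minimal sketch (not the author's actual pipeline) of presenting a detection
      # at the "explainable enough" level: show the region and a plain-language label,
      # and leave the model internals (IoU, architecture, raw confidence) out of the UI.
      import matplotlib.pyplot as plt
      import matplotlib.patches as patches
      import matplotlib.image as mpimg

      detection = {"box": (120, 80, 60, 40),       # x, y, width, height in pixels (hypothetical)
                   "feature": "enlarged nucleus"}   # the mid-level feature the user cares about

      img = mpimg.imread("slide_region.png")        # hypothetical image file
      fig, ax = plt.subplots()
      ax.imshow(img)
      x, y, w, h = detection["box"]
      ax.add_patch(patches.Rectangle((x, y), w, h, fill=False, linewidth=2, edgecolor="red"))
      ax.text(x, y - 5, detection["feature"], color="red")  # label only, no scores
      ax.axis("off")
      plt.show()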

    This applies beyond images; consider LLMs. No one with any expertise likes a black box. Today, why do LLMs generate code instead of directly doing the thing that the programmer is asking them to do? It’s because the programmer wants to ensure that the code “works” and they have the expertise to figure out if and when it goes wrong. It’s the same reason that vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So in a Betty Crocker cake mix kind of way, let the user add the egg.

    Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image or to dump every detail. Generating high-quality data at that just-right level is very difficult and expensive. However, do it right and the effort pays off. The outcome is an AI-human causal prediction machine, where the causes, i.e. the mid-level features, inform the user and build confidence towards the final outcome. The deep learning part is still a black box, but the user doesn’t mind because you aid their thinking.

    I’m excited by some new developments like REX, which sort of retrofit causality onto the usual deep learning models. With improvements in performance, user preferences for detail may change, but I suspect that the need for AI to be explainable enough will remain. Perhaps we will even have custom labels like ‘juicy’.

  • My Road to Bayesian Stats

    By 2015, I had heard of Bayesian stats but didn’t bother to go deeper into it. After all, significance stars and p-values worked fine. I started to explore Bayesian statistics when considering small sample sizes in biological experiments. How much can you say when you are comparing means of 6 or even 60 observations? This is the nature of work at the edge of knowledge. Not knowing what to expect is normal. Multiple possible routes to an observed result are normal. Not knowing how to pick the route to the observed result is also normal. Yet, our statistics fails to capture this reality and the associated uncertainties. There must be a way, I thought.

    Free Curve to the Point: Accompanying Sound of Geometric Curves (1925) print in high resolution by Wassily Kandinsky. Original from The MET Museum. Digitally enhanced by rawpixel.

    I started by searching for ways to overcome small sample sizes. There are minimum sample sizes recommended for t-tests. Thirty is an often-quoted number, with qualifiers. Bayesian stats does not have a minimum sample size. This had me intrigued. Surely, this can’t be a thing. But it is. Bayesian stats creates a mathematical model using your observations and then samples from that model to make comparisons. If you have any exposure to AI, you can think of this a bit like training an AI model. Of course, the more data you have, the better the model can be. But even with a little data we can make progress.
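
    To make the idea concrete, here is a minimal sketch, using PyMC, of what “build a model from your observations, then sample from it” can look like for two small groups of six observations each. The numbers and priors are made up for illustration; they are not a recipe.

      # A toy two-group comparison with n = 6 per group, sketched in PyMC.
      # The data and priors are illustrative placeholders, not recommendations.
      import numpy as np
      import pymc as pm
      import arviz as az

      control   = np.array([1.2, 0.9, 1.1, 1.4, 1.0, 1.3])
      treatment = np.array([1.6, 1.8, 1.5, 1.9, 1.4, 2.0])

      with pm.Model() as model:
          mu_c = pm.Normal("mu_control", mu=1.0, sigma=1.0)
          mu_t = pm.Normal("mu_treatment", mu=1.0, sigma=1.0)
          sigma = pm.HalfNormal("sigma", sigma=1.0)
          pm.Normal("obs_control", mu=mu_c, sigma=sigma, observed=control)
          pm.Normal("obs_treatment", mu=mu_t, sigma=sigma, observed=treatment)
          pm.Deterministic("difference", mu_t - mu_c)
          idata = pm.sample(2000, tune=1000, chains=4)

      # The posterior for "difference" tells you how large the effect plausibly is,
      # and how uncertain you should be about it, even with only 6 observations per group.
      print(az.summary(idata, var_names=["difference"]))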

    How do you say, there is something happening and it’s interesting, but we are only x% sure? Frequentist stats have no way through. All I knew was to apply the t-test, and if there are “***” in the plot, I’m golden. That isn’t accurate though. Low p-values indicate the strength of evidence against the null hypothesis. Let’s take a minute to unpack that. The null hypothesis is that nothing is happening. If you have a control set and do a treatment on the other set, the null hypothesis says that there is no difference. So, a low p-value says that data this extreme would be unlikely if the null hypothesis were true. But that does not imply that the alternative hypothesis is true. What’s worse is that there is no way for us to say that the control and experiment have no difference. We can’t accept the null hypothesis using p-values either.

    Guess what? Bayes stats can do all those things. It can measure differences, accept and reject both null and alternative hypotheses, and even communicate how uncertain we are (more on this later). All without making the usual assumptions about our data.

    It’s often overlooked, but frequentist analysis also requires the data to have certain properties like normality and equal variance. Biological processes have complex behavior and, unless observed, assuming normality and equal variance is perilous. The danger only goes up with small sample sizes. Again, Bayes does not require these assumptions about your data. Whatever shape the distribution is, so-called outliers and all, it all goes into the model. Small sample sets do produce weaker fits, but this is kept transparent.

    Transparency is one of the key strengths of Bayesian stats. It requires you to work a little bit harder on two fronts though. First, you have to think about your data generating process (DGP). This means thinking about how the data points you observe came to be. As we said, the process is often unknown. We have at best some guesses of how this could happen. Thankfully, we have a nice way to represent this. DAGs, directed acyclic graphs, are a fancy name for a simple diagram showing what affects what. Most of the time we are trying to discover the DAG, i.e. the pathway to a biological outcome. Even if you don’t do Bayesian stats, using DAGs to lay out your thoughts is a great habit. In Bayesian stats the DAGs can be used to test whether your model fits the data you observe. If the DAG captures the data generating process the fit is good, and if it doesn’t, it isn’t.
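
    As a toy illustration, here is one way to write a guessed DGP down as a DAG in code. The variable names are hypothetical; the point is only that the arrows make your causal assumptions explicit and easy to check.

      # A guessed data generating process laid out as a DAG with networkx.
      # Variable names are hypothetical placeholders for a biological experiment.
      import networkx as nx

      dgp = nx.DiGraph()
      dgp.add_edges_from([
          ("treatment", "protein_level"),    # treatment changes a protein
          ("protein_level", "cell_growth"),  # the protein drives the outcome we measure
          ("batch", "protein_level"),        # a nuisance variable we suspect matters
          ("batch", "cell_growth"),
      ])

      # Sanity checks: the graph really is acyclic, and the implied ordering is visible.
      assert nx.is_directed_acyclic_graph(dgp)
      print(list(nx.topological_sort(dgp)))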

    The other hard bit is doing the analysis and communicating the results. Bayesian stats forces you to be verbose about the assumptions in your model. This part is almost magicked away in t-tests. Frequentist stats also assumes a model that your data is supposed to follow. It all happens so quickly that there isn’t even a second to think about it. You put in your data, click t-test, and whoosh! You see stars. In Bayesian stats, stating the assumptions you make in your model (using DAGs and hypotheses about DGPs) communicates to the world what you think causes this phenomenon and why.

    Discovering causality is the whole reason for doing science. Knowing the causality allows us to intervene in the forms of treatments and drugs. But if my tools don’t allow me to be transparent and worse if they block people from correcting me, why bother?

    Richard McElreath says it best:

    There is no method for making causal models other than science. There is no method to science other than honest anarchy.

  • The secret flag of content

    I don’t have any fun when I use LLMs to write. It may have perceived utility: popping out a LinkedIn article or two every day. But I bet no one is actually reading. It’s a strip mall for a thumb stroll.

    LLMs suck at writing. The summaries that LLMs give with “Deep Research” are so poor in quality that I start to skim them. Yes, I skim the thing that is already a summary.

    Let’s do a small experiment using Harry G. Frankfurt’s essay On Bullshit.

    Here are HGF’s opening lines: 

    “One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. Each of us contributes his share.”

    Now here is ChatGPT o3, with the advantage of having HGF in the training data and extra contextualizing from me (“theory of bullshit”):

    Prompt: write me an introductory paragraph on the theory of bullshit 

     Result: “Bullshit isn’t just lying gone sloppy; it’s a distinct communicative strategy built on indifference to truth. Where the liar knows the facts and works to conceal or invert them, the bullshitter treats truth-value as irrelevant raw material—selecting, mixing, or inventing claims to achieve an effect (status, alignment, vibe) rather than to inform. The result is epistemic pollution: signals that…”

    Look! It’s the secret flag of “content”:  “Blah isn’t just X; Blah is Y”.

    AI-generated flag. As dead as the text on it is wrong.

    I cannot assimilate this, and it definitely can’t help me synthesize anything. There is always better human writing, even if it is poorly written. It has a certain aliveness, and that aliveness is contagious enthusiasm. Vibe coding I can understand: you can see bad output in the final result. Vibe coding still manages to change something in my mind: knowing what I don’t want.

    I don’t mind people using AI at all, I use it alllll the time. Writing with LLMs is just not fun. All this prompting and almost nothing changes in my mind. When an AI rearranges your thoughts it does not rearrange your brain.

  • What do platforms really do? 

    In 1986, David S. Landes wrote the essay, ‘What Do Bosses Really Do?’. He argues that the historical role of the ‘boss’ was an essential function for organizing production and connecting producers to markets. Digital platforms have become the new bosses. Platforms have the same functions of market creation, labor specialization, and management, but they have replaced the physical factory floor with algorithmic management. While their methods are novel, platforms are the direct descendants of the merchant-entrepreneurs and factory owners Landes described, solving the same historical problems of production in remarkably similar ways.

    Design for a Teacup (1880-1910) painting in high resolution by Noritake Factory. Original from The Smithsonian Institution. Digitally enhanced by rawpixel.

    So, why am I posting this on my own blog and not on a “platform”? I don’t view writing as a financial transaction. It is a hobby. By putting the financialization lens front and center, platforms are killing the mental space for hobbies. When you monetize tweets, you create incentive to craft tweets that create engagement in particular ways. Usually not healthy ways. 

    If we think of old media or traditional manufacturing, we can compare them to guilds. Guilds kept up prices and controlled production. With the simplification of tasks, factories could hire workers who weren’t as highly skilled, but didn’t need to be. Nowadays, why should any newspaper or TV channel’s output be limited by the amount of airtime or page space they have?

    Platforms take the unskilled and train them. We are in the age of the specialization of ideas, akin to “the advantage of disaggregating a productive process”. Platforms leverage this by having many producers explore the same space from millions of different angles. This allows the platform to “purchase exactly that precise quantity of [skill] which is necessary for each process”, paying a viral star a lot and a niche creator a little, perfectly matching reward to market impact. Which is to say, platforms make money off whatever sticks.

    In Landes’s essay, management became specialized; today, management is becoming algorithmized. Platforms abstract away the issues that factory owners had, such as embezzlement of resources, slacking off, etc. Platforms don’t care how much or how little you produce, or even if you produce. If you do, the cash is yours (after a cut, of course).

    This may lead to a visceral reaction against platforms. This week, when Substack raised a substantial amount, they called the writers “the heroes of culture”. This should ring at least a tiny alarm in your head. The platform’s rhetoric of the creator-as-hero is a shrewd economic arrangement. In the putting-out system, the merchant-manufacturer “was able to shift capital expenditures (plant and equipment) to the worker”. Platforms do the same with creative risk. The writer, artist, or creator invests all the time and labor, the “capital” of creation, upfront. If they fail, they bear the entire loss. The platform, like the putter-outer, only participates in the upside, taking its cut from the successful ‘heroes’ while remaining insulated from the failures of the many.

    So what do platforms really do? They have resurrected the essential role of the boss for the digital age. They are the merchant-manufacturers who build the roads to market, and they are the factory owners who discipline production—not with overseers, but with incentive algorithms. By casting the creator as the hero, they obscure their own power and shift the immense risks of creative work onto the individual. While appearing to be mere background IT admins, they are, in fact, the central organizers of production, demonstrating that even in the 21st century, the fundamental challenges of coordinating labor and capital persist, and solving them remains, as it was in the 18th century, a very lucrative role.


    What Do Bosses Really Do?, David S. Landes, The Journal of Economic History, Vol. 46, No. 3 (Sep. 1986), pp. 585–623. https://www.jstor.org/stable/2121476

  • Hack, Hacky, Hacker

    A few days ago I wrote about the beauty of great documentation; this is the evil twin post.

    The spectrum of meaning across the words hack, hacky, and hacker forms a horseshoe when thinking about postures toward life. On either end are the most difficult options. Being either a hack or a hacker requires dedication, and both approaches narrow your world. Being hacky in the world, taking imperfect shortcuts, is immensely satisfying. It is play disguised as problem solving.

    Fox by Arnold Peter Weisz Kubincan. Original public domain image from Web umenia

    A successful hack takes tremendous effort and dedication just to pretend to be great at something. Humans are great at spotting and discarding hacks. It takes a true master to fool a large enough population and build financial columns under the smoke. Being a hack is constant desperation; there is no play. It is no way to live.

    On the other end of the same horseshoe as the hack is hacking. Here, you are actually achieving something difficult enough to require mastery. “Playfully doing something difficult, whether useful or not, that is hacking,” says Richard Stallman. Now, I’m all for the playful, the difficult, and the useful, but not the “or not”. At minimum, hacking should be in service of a prank. Doing things just because is like felling a tree in a forest when no one is around. At least a jump scare is a sine qua non (the dictionary is working :P).

    Most systems, especially computers, are designed by people for people like you and me, who are neither very bright nor very invested in the thing. We want to not have the problem. You can always walk away, but that is neither fun, nor useful, and certainly not hard. My favored way is to take the Nakatomi Tunnel through problems. Be hacky. Try enough approaches, push buttons that may do the thing you want, until the alignment is just so and you slip through. Effectiveness here = solving many real-world problems quickly while preserving playful momentum.

    I want to draw a distinction here from the oversubscribed idea of jugaad. Jugaad was once framed as creative improvisation. It is not. I do not care for jugaad. To make something substandard and expect people to accept it is no way to be in the world. Build good stuff; take the hacky route through the small issues.

    A hacky mindset is a foxy mindset, and not just in the Hendrix way. The Hedgehog and the Fox is a great essay by Isaiah Berlin where he talks about the two kinds of people in the world. Hedgehogs are great at one big thing. Foxes are mediocre at many things. Foxes thrive on lateral moves and opportunistic shortcuts, you know, hackiness. The hacky, foxy approach to life is more my style.

    Breadth, speed, and joy beat fakery and fixation every time.

  • A Good Dictionary

    Yesterday I wrote about good documentation opening doors to options you didn’t realize you had. In the book On Writing Well, Zinsser mentions how one of his key tools is the dictionary. That got me curious about the limitations of the dictionaries available to us. This is not just about the dictionary on the bookshelf but the ones that we have in-context access to. The ones on our computers and phones.

    In my searches I came across this post by James Somers, who references another great writer, John McPhee, and his article Draft No. 4. McPhee shows us how the dictionary is to be used. The crux is that modern dictionaries have taken all the fun out and left all the crud in. The old way is the proper way to play with words.

    J.S. ends with instructions on how to install the (apparently perfect) 1913 version of Webster’s dictionary. Unfortunately, his instructions are a little out of date. Which is to be expected, since he’s talking to people 10 years in his future. Luckily for us, Corey Ward, speaking to us from just 5 years ago, had updated instructions for macOS that mostly still work.

    I’m updating Corey’s instructions below:

    1. Get the latest release for Webster’s 1913 from the GitHub Releases page for WebsterParser. Download the file websters-1913.dictionary.zip and unzip it. You will see a folder-like file with the extension .dictionary.
    2. Open the Dictionary app on your computer, and select File > Open Dictionaries Folder from the menu, or navigate manually to ~/Library/Dictionaries.
    3. Move the resulting websters-1913.dictionary file into the dictionaries folder that you opened (a small script that automates the unzipping and moving is sketched after this list).
    4. Restart the Dictionary app if it is open (important), then open Dictionary > Settings (⌘,). At the bottom of the list of dictionaries you should see Webster's Unabridged Dictionary (1913). Check the box, and optionally drag it up in the list to the order you’d like.
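
    If you prefer to script the unzip-and-move part, here is a small sketch. It assumes the zip from step 1 landed in ~/Downloads and that the archive contains the websters-1913.dictionary bundle; adjust the paths to match your setup.

      # Unzip websters-1913.dictionary.zip and move the bundle into ~/Library/Dictionaries.
      # Paths are assumptions; change them if your download landed somewhere else.
      import zipfile
      import shutil
      from pathlib import Path

      downloads = Path.home() / "Downloads"
      zip_path = downloads / "websters-1913.dictionary.zip"
      dict_dir = Path.home() / "Library" / "Dictionaries"
      dict_dir.mkdir(parents=True, exist_ok=True)

      with zipfile.ZipFile(zip_path) as zf:
          zf.extractall(downloads)

      shutil.move(str(downloads / "websters-1913.dictionary"),
                  str(dict_dir / "websters-1913.dictionary"))
      # Then restart Dictionary.app and enable the new entry in Settings (step 4).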

    The dictionary is also available online if you don’t want to install.

    The best option is probably the OED. It’s expensive, but you may get access through your library.

    Wordnik is also cool.


    Through J.S. I also discovered this interesting site: Language Log. They get really deep into language. I mean, how much can you write about spinach? Apparently, a lot.


    I’d love to get back to a world where the internet was used in its raw form. If you are reading my posts, please do comment, share your site/blog and your posts. Social media is also good. More from Somers.

  • Divine Documentation

    Dad was about my age when he said that reading the manual was better than hypothesis-driven button pressing. For teenage me, that took too long. Sure, I may have crashed a computer or two, but following my gut got me there. Of course, my gut isn’t that smart. In the decades preceding, devices had converged on a common pattern language of buttons. Once learned, the standard grammar of action would reliably deliver me to my destination.

    Image of a nebula taken by the Hubble Telescope.

    In programming I was similarly aided by the shared patterns across MATLAB, Python, R, Java, Julia, and even HTML. In the end, however, Dad was right. Reading documentation is the way. Besides showing correct usage, manuals create a new understanding of my problems. I am able to play with tech thanks to the people who took the effort and the care to create good documentation. This is not limited to code and AI. During the startup years, great handbooks clarified accounting, fundraising, and regulations, areas foreign to me.

    I love good documentation and I write documentation. Writing good documentation is hard. It is an exercise in deep empathy with my user. Reaching into the future to give them all they need is part of creating good technology. Often the future user is me and I like it when past me is nice to now me. If an expert Socratic interlocutor is like weight training, documentation is a kindly spirit ancestor parting the mist. 

    Maybe it’s something about being this age but now I try to impart good documentation practices to my teams. I also do not discourage pressing buttons to see what happens. Inefficient, but discovery is a fun way to spike interest.

    Meanwhile, I’m reading a more basic kind of documentation. Writing English. Having resolved to write more, I’m discovering that words are buttons. Poking them gets me to where I want, but not always. Despite writerly ambitions, the basics are lacking. This became apparent recently when I picked up the book Artful Sentences by Virginia Tufte*. It’s two hundred and seventy pages of wonderful sentences dissected to show their mechanics. I was lost by page 5. The book is, temporarily, in my anti-library. 

    So, I’m going back to the basics: Strunk and White, and William Zinsser. I’m hoping that Writing to Learn (finished) and On Writing Well (in progress) provide sufficient context about the reasons to write, so I can make the most of S&W for the how, and then, somewhere down the road, savor Tufte.

    * Those dastardly Tuftes are always making me learn some kind of grammar.

  • The Plato Plateau

    This post started off as a joke. I was attempting to snowclone the Peter Principle for philosophy. It led to a longer thread of thoughts. But first, the snowclone:

    The Plato Plateau: People philosophize to the level of their anxiety.

    Smoking farmer with branches by Kono Bairei (1844-1895). Digitally enhanced from our own original 1913 edition of Barei Gakan.
    1. Anxiety is the realization that you have absolute choice over life – Kierkegaard. Anxiety, in this context, is not nervousness. It is a positive thing when harnessed. We harness it every day.
    2. Anxiety is generative. Anxiety creates identity by locating stable places to launch exploration.
    3. Action, exploration, and anxiety are a motor. Anxiety → exploration → action → refreshed identity. Inaction leads to identity death.
    4. Realizing you are radically free to choose can also lead to a forest of perceived signals. These can be an overwhelming inbox or simply overloaded ambition.
    5. When anxiety overwhelms it becomes difficult to tell signal from noise.
    6. Tools like GTD crash anxiety. When overwhelmed, GTD works well. When there is too little anxiety, identity becomes ephemeral.
    7. GTD isn’t a means to nirvana: GTD integrates 10k, 30k foot views to reintroduce future anxiety.
    8. When your identity is smeared across too many anxieties you declare anxiety bankruptcy and crash your identity in some safe spot. Journals, sabbaticals, quitting.
    9. Like the parable of the rock soup, vaporized anxiety needs a place to condense onto. Ideally something disposable but sufficient to let your identity create an “ordered world of meaning”.
    10. Life examination occurs with identity crashes. Philosophy provides just enough of a toehold in the abstract to spur action in the actual. 
    11. Philosophy is a way to spur action absent anxiety/identity. We pick the philosophy depending on the degree of identity loss.
    12. Philosophy can be broadly sorted as:
      1. Survival – laws and tactics oriented
      2. Social Cohesion – harmony, virtue ethics, etiquette
      3. Systems-level order – algorithms and protocols oriented
      4. Self-Knowledge and Meaning – reflecting on existence and consciousness
      5. Meta-systems – theorizes about theories
    13. Most scientists and builders work best at level 3, systems-level order. Going lower (1–2) for environmental crises and higher (4–5) for internal crises.
    14. Complexity of selected philosophy is not superiority. A rung’s usefulness matches your identity state and environment, not some civilizational high score.
    15. Philosophy as Periodic Maintenance: Crashing and philosophy sampling are maintenance actions on the place called identity.
  • Problems are Places, Questions are Spaces

    Last year, while regrouping myself and rebuilding my old curious ways, I had a thought. The common words “spaces” and “places” pass through our minds, fingers, and lips but they deserve a second thought. Unsurprisingly, I wasn’t the first one to consider this and the wealth of reading material helped me write We Need Homes in the Delta Quadrant. Spaces and places have been an enjoyable lens to look through.

    Recently, through Agnes Callard’s Open Socrates, I was introduced to the Socratic concepts of questions and problems. Initially I thought of it as a newish way to look at things, but I’m converging toward the idea that problems are places and questions are spaces. A quick exploration below as to why.

    Vintage pattern illustration. Digitally enhanced from our own 19th Century Grammar of Ornament book by Owen Jones.

    Problems impede your quest and solving them makes them disappear. There are established ways of solving problems—recipes, algorithms, or rituals that nudge the obstacle aside so the original activity may continue unabated. Essentially, problems are tractable.

    Places are tractable too, as “ordered worlds of meaning.” Place-making, like problem-solving, begins by drawing a boundary and then treating that encapsulation as a building block, whatever its inner workings. The moment you can stand somewhere and say “here” you have marked out a place; the moment you can name a difficulty and say “do this” you have packaged a problem.

    The Socratic question, by contrast, is a quest. It is a hunt whose solution is unknown. Questions do not disappear when solved; instead, they are additive and leave you with something, i.e. the solution. A real question insists on orientation before action: you must find north in the wilderness before plotting any march. And yet, along the path to an answer, you inevitably solve problems. Those problems are the markers that help you orient and keep you moving. A previous “solution” to a question can be used as a new place from which to further explore and prod at the question. In that sense, a question is like the horizon you constantly seek.

    Spaces feel exactly like that horizon. Spaces are pure potential, explored through the places that demarcate them. Identity, orientation, and even memory of a space are created by and stored in the places that surround it. To explore a space you must create stable places around it.

    While the new way of thinking about Questions and Problems is great, I still prefer the lens of Spaces and Places. Q&P seem too narrow a set of lenses, limited to the human mind. S&P expand that stage and allow us to think of more in that context. What I like even more is that spaces can also be places, assuming we allow a boundary to be drawn around the fuzzy nature of a space. As a scientist, this feels a bit more satisfying because it allows you to explore and experiment even when the knowledge isn’t properly tied down by facts.

  • The Best Game Ever Made

    What does audacity look like?

    I did not imagine the problems I was having were due to a lack of temples and worshipping the right gods. Being from India, this should have been obvious. I had figured out long ago that a steady supply of beer, dedication to craft, good means, and romance were critical to happiness. Spirituality had not been considered. Hewing a temple out of granite improved focus.

    These days Dwarf Fortress gets lumped into the colony management category of games. It is a pioneer in the genre, but it is also so much more. It is for good reason that it is one of the few games thought worthy of collecting by the Museum of Modern Art (MoMA). Even there, DF changed the way MoMA preserves art. DF is one of the most complex games ever created, starting from simple experiments, coded by a single person over 20 years, available for free, through all the normal human hardships. To play Dwarf Fortress is to experience audacity.

    DF is a complete simulation. From the growth rate of trees and grass, to simulating the individual body parts of creatures, which is what allows cats to get drunk. The point of Dwarf Fortress is not to win. There is no way to win. As they say, losing is FUN! The game starts normally enough. You set jobs for seven dwarves to help them create a home in an unkind wilderness. Sometimes unexpected things happen, like a giant farting bird attacking or the elves being cross with you because you used wood to make beds, which is fine in a new game. Eventually though, something happens that tells you that there are more layers to this. Like my spirituality problems.

    There is a lot of well-known lore surrounding DF, from the famous stories of Boatmurdered and Oilfurnace, to webcomics, to the drunk cats bug. DF in its small ways also reminds us of life’s important truths, like: cats adopt the person, not the other way around.

    Tantrum spirals, goblin sieges, chairs of different qualities and the happiness they impart: Dwarf Fortress is deep. Like any effort by a single person, it started simple. Zach and Tarn Adams are brothers who created many, many games as kids. DF was not even created for any kind of commercial aim. They simply wanted to simulate as many things as they could so that the game had the ability to tell great stories. Bit by bit, Tarn Adams coded DF without any external help, while finishing his PhD, eventually getting enough in donations that he could dedicate his time to just building the game.

    Recently the brothers worked with a publisher to bring their game to Steam. It made them “overnight” millionaires. That night was 20 years long. Along the way they built up a dedicated fan following; some contributed art and music to the game, others hacked into the software to provide quality-of-life utilities. Many of them are now part of the team working on DF full time. A few years ago Tarn estimated that the game was about 44% complete. I have a suspicion that the number hasn’t changed much because, despite the regular updates, the brothers keep adding new ideas to build on.

    You are unlikely to ever play Dwarf Fortress, but that doesn’t mean it’s not worth knowing about this bittersweet human story. No Clip has made a four-part documentary; you should watch it.

    In a time when games were simple, computing power was limited, funding was nonexistent, and life threw its usual challenges, DF was created. To play Dwarf Fortress is to experience audacity.