The wetting and general tribology of cats has not progressed enough to give a definitive answer to the capillary dependence of the feline relaxation time. Fig. 2b gives an example of a lotus effect of Felis catus, suggesting that the substrate is superfelidaphobic. […] cats are proving to be a rich model system for rheological research, both in the linear and nonlinear regimes.
It seems monstrosity requires an organic element. When there isn’t one, “monster” works more often as an adjective than a noun, qualifying an incomplete potential for monstrosity.
[…]
Monumentality embodies distant, impersonal forces at work that pose a terrifying threat because they don’t care about us. Monsters, on the other hand, care enough to be deliberately threatening to us.
There is a shared soul shard between Dwarf Fortress, Emacs, and AI that lured me to them and has kept me engaged for over a decade. For a long time, I struggled to articulate the connection, managing only to describe Dwarf Fortress as the Emacs of games. But this analogy, while compelling, doesn’t fully capture the deeper resonance these systems share. They are not merely complicated; they are complex—tools for creativity that reward immersion and exploration.
To understand the allure, let’s revisit the distinction between complicated and complex. Complicated systems, say a spinning-disk microscope, consist of interlocking parts (each with internal complications) that interact in predictable ways. They require technical expertise to master, but their behavior remains largely deterministic, and I tire of them quickly.
Complex systems (see the Cynefin framework) exhibit emergent behavior. Their value, and their fun, lies in the generative possibilities they unlock rather than in the sum of their parts.
Dwarf Fortress, Emacs, and AI live on the froth of this complexity. None of these systems exist as ends in themselves. You don’t play Dwarf Fortress to achieve a high score (there isn’t one, you eventually lose). You don’t use Emacs simply to edit text, and you don’t build AI to arrange perceptrons in aesthetically pleasing patterns. These are platforms, altars for creation. Dev environments.
In Emergence We Trust
Like language with the rules of poetry, these environments are generative places enabling exploration of emergent spaces. Emergence manifests not only in the software but also in you. There is always a point where you find yourself thinking, I didn’t expect I could do that. In Dwarf Fortress, you first fight against tantrum spirals and then, through mastery, against FPS death. Similarly, Emacs enables workflows that evolve over time, as users build custom functions and plugins to fit their unique needs. In AI, emergence arrives rather late, but it’s there. Putting together datasets, training models on them, optimizing, starting over: all of this is complicated but not complex per se. The complexity (and emergence) is in the capabilities of the trained network. Things infinitely tedious or difficult are a few matrix multiplications away.
This pursuit of emergence is spelunking. It rewards curiosity and experimentation but demands patience and resilience. Mastery begins with small victories: making beer in Dwarf Fortress, accessing help in Emacs, or implementing a 3-layer neural network. Each success expands your imagination. The desire to do more, to push the boundaries of what’s possible, becomes an endless rabbit hole—one that is as exhilarating as it is daunting.
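Since that third small victory is the most concrete, here is roughly what it looks like: a minimal 3-layer network (input, hidden, output) learning XOR in plain numpy. This is just a sketch of mine, not anything canonical; the hidden size, learning rate, and epoch count are arbitrary choices.

```python
import numpy as np

# A 3-layer network (input -> hidden -> output) learning XOR.
# Hyperparameters are arbitrary; some seeds may need more epochs.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                # hidden activations
    out = sigmoid(h @ W2 + b2)              # network output
    g_out = (out - y) * out * (1 - out)     # gradient at the output
    g_h = g_out @ W2.T * h * (1 - h)        # backpropagated to hidden
    W2 -= 0.5 * h.T @ g_out; b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * X.T @ g_h;   b1 -= 0.5 * g_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

Twenty-odd lines, and a pile of matrix multiplications has learned something you never spelled out. That is the hook.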
Complexity as a Gateway to Creativity
The high complexity of these systems—their vast degrees of freedom—opens the door to infinite creativity. This very openness, however, can be intimidating. Confronted with the sprawling interface of Emacs, the arcane scripts of Dwarf Fortress, or the mathematical abstractions of AI, it’s tempting to retreat to the familiar. Yet this initial opacity is precisely what makes these systems so rewarding. Engaging with something that might blow up in your face—whether it’s drunk cats, a lisp error, or an exploding gradient—brings you to the brink of giving up.
But just then, you have an idea: what if you tried this…
Every few years, decentralization and RSS feeds come back into the light. Usually this happens when an otherwise functional social media site dies in a real or practical way; Google Reader being my, and my generation’s, touchstone. During these times of turmoil, a beautiful soul puts together a guide for how to use RSS. This is a good guide for this iteration:
AI-generated image of the old Doge Enrico Dandolo sacking Constantinople
I’m taking part in the Contraptions Book Club, where we are reading City of Fortune, which is about Venice. I was struck by the character of Doge Dandolo. Dude was 80+ when he saw a trade opportunity in the Fourth Crusade. In the book, the author, Roger Crowley, describes a brief moment when Dandolo makes a heroic rush onto the banks of Constantinople’s Golden Horn during the Sack of Constantinople.
I found both the Doge and the imagery interesting, so I went looking for art depicting the scene; there’s supposed to be lots. Unfortunately, I couldn’t find any, and nothing in the public domain. So I asked AI to generate something.
There are other paintings like the one below, but not the one I was looking for.
Came across a heartwarming fan-made comic of a collaborative story told on Tumblr. A farmer builds a temple to see who shows up, and it’s a self-doubting god of transient beauty. This immediately brought back the Small Gods from Discworld, but also the poem Worm by Gail McConnell.
The comic and the story are beautiful. Perhaps there is some sense to shrines…
Trying something different for a few days. Instead of spamming all the social media accounts with daily links, I will post links only on the blog every day. Maybe even multiple times a day.
Still thinking about doing a weekly digest or something. Let’s see.
Why do you groan, O Watermill
For I’ve troubles, I groan
I fell in love with the Lord
For It do I groan

They found me on a mountain
My arms and wings they plucked
Saw me fit for a watermill
For I’ve troubles, I groan

From the mountain they cut my wood
My disparate order they ruined
But an unwearied poet I am
For I’ve troubles, I groan

I am The Troubled Watermill
My water flows, roaring and rumbling
Thus has God commanded
For I’ve troubles, I groan

I am but a mountain’s tree
Neither am I bitter, nor sweet
I am but a pleader to the Lord
For I’ve troubles, I groan

Yunus, whoever comes here will find no joy, will not reach his desire
Nobody stays in this fleeting abode
For I’ve troubles, I groan
This was a cool little find. I’ve always played with software in one form or another, but besides building PCs, actual hardware hacking felt out of reach. Maybe I can start with some simple things like radio hacking.
The beauty of understanding
My love for science seems to always involve some sort of Rube Goldberg machine: you set things up just so, and discoveries magically flow out. Sure, designing pretty experiments is difficult, and there is a lot of literal and metaphorical heartbreak along the way, but finally discovering the way is all frisson.
At the deepest level, what motivates scientists to pursue and persist in their work is the aesthetic experience of understanding itself. Centring the beauty of understanding presents an image of science more recognisable to scientists themselves and with greater appeal for future scientists.
Do you hate it when scientists unbraid a moonbeam? Well, there are three types of beauty scientists experience, apparently:
Sensory beauty – what is visually or aurally striking
Useful beauty – treating aesthetic properties such as simplicity, symmetry, aptness, or elegance as heuristics or guides to truth.
Beauty of understanding – grasping the hidden order, inner logic or causal mechanisms of natural phenomena.
Perhaps Edward Tufte knew a thing or two when he named his book Beautiful Evidence.
AI slop is all around us, and extracting useful information will only become harder as we feed more noise into the already noisy world of knowledge. We are in an era of unprecedented data abundance, yet this deluge of information often lacks the structure necessary to derive meaningful insights. Knowledge graphs (KGs), with their ability to represent entities and their relationships as interconnected nodes and edges, have emerged as a powerful tool for managing and leveraging complex data. However, the efficacy of a KG depends critically on the underlying structure provided by domain ontologies. These ontologies, which are formal, machine-readable conceptualizations of a specific field of knowledge, are not merely useful but essential for the creation of robust and insightful KGs. Let’s explore the role that domain ontologies play in scaffolding KG construction, drawing on fields such as AI, healthcare, and cultural heritage to illuminate their importance.
Wassily Kandinsky – Composition VII (1913). According to Kandinsky, this is the most complex piece he ever painted.
At its core, an ontology is a formal representation of knowledge within a specific domain, providing a structured vocabulary and defining the semantic relationships between concepts. In the context of KGs, ontologies serve as the blueprint that defines the types of nodes (entities) and edges (relationships) that can exist within the graph. Without this foundational structure, a KG would be a mere collection of isolated data points with limited utility. The ontology ensures that the KG’s data is not only interconnected but also semantically interoperable. For example, in the biomedical domain, an ontology like the Chemical Entities of Biological Interest (ChEBI) provides a standardized way of representing molecules and their relationships, which is essential for building biomedical KGs. Similarly, in the cultural domain, an ontology provides a controlled vocabulary to define the entities, such as artworks, artists, and historical events, and their relationships, thus creating a consistent representation of cultural heritage information.
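To make the “blueprint” role concrete, here is a minimal sketch with rdflib. The art# namespace, class names, and instances are made up for illustration; a real project would reuse an established ontology such as CIDOC CRM for cultural heritage.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

ART = Namespace("http://example.org/art#")  # made-up namespace
g = Graph()
g.bind("art", ART)

# The ontology layer: which node and edge types may exist.
g.add((ART.Artwork, RDF.type, RDFS.Class))
g.add((ART.Artist, RDF.type, RDFS.Class))
g.add((ART.createdBy, RDF.type, RDF.Property))
g.add((ART.createdBy, RDFS.domain, ART.Artwork))
g.add((ART.createdBy, RDFS.range, ART.Artist))

# The data layer: instances that conform to that blueprint.
g.add((ART.CompositionVII, RDF.type, ART.Artwork))
g.add((ART.CompositionVII, RDFS.label, Literal("Composition VII")))
g.add((ART.CompositionVII, ART.createdBy, ART.Kandinsky))
g.add((ART.Kandinsky, RDF.type, ART.Artist))

print(g.serialize(format="turtle"))
```

Without the first block, the second is just a bag of strings; with it, the triples have declared types, and an RDFS reasoner can even infer an instance’s class from the domain and range of the properties it uses.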
One of the primary reasons domain ontologies are crucial for KGs is their role in ensuring data consistency and interoperability. Ontologies provide unique identifiers and clear definitions for each concept, which helps in aligning data from different sources and avoiding ambiguities. Consider, for example, a healthcare KG that integrates data from various clinical trials, patient records, and research publications. Without a shared ontology, terms like “cancer” or “hypertension” may be interpreted differently across these data sets. The use of ontologies standardizes the representation of these concepts, thus allowing for effective integration and analysis. This not only enhances the accuracy of the KG but also makes the information more accessible and reusable. Furthermore, using ontologies that follow the FAIR (Findable, Accessible, Interoperable, Reusable) principles facilitates data integration, unification, and information sharing, essential for building robust KGs.
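A toy illustration of that alignment problem follows. The synonym table and the DISEASE:nnnn codes are hypothetical stand-ins (a real pipeline would resolve terms against an ontology service such as SNOMED CT or the Disease Ontology), but the mechanics are the same.

```python
# Map free-text terms to canonical ontology identifiers.
# These codes are invented for the example, not real ontology IDs.
SYNONYMS = {
    "cancer": "DISEASE:0001",
    "malignant neoplasm": "DISEASE:0001",
    "hypertension": "DISEASE:0002",
    "high blood pressure": "DISEASE:0002",
}

def normalize(record: dict) -> dict:
    """Replace a free-text diagnosis with its canonical ontology ID."""
    term = record["diagnosis"].lower()
    return {**record, "diagnosis": SYNONYMS.get(term, term)}

trial = {"patient": "A-17", "diagnosis": "Malignant neoplasm"}
ehr = {"patient": "A-17", "diagnosis": "Cancer"}

# Both records now carry the same identifier, so the two sources
# can be merged on diagnosis instead of guessed at.
assert normalize(trial)["diagnosis"] == normalize(ehr)["diagnosis"]
```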
Moreover, ontologies facilitate the application of advanced AI methods to unlock new knowledge. They support both deductive reasoning to infer new knowledge and provide structured background knowledge for machine learning. In the context of drug discovery, for instance, a KG built on a biomedical ontology can help identify potential drug targets by connecting genes, proteins, and diseases through clearly defined relationships. This structured approach to data also enables the development of explainable AI models, which are critical in fields like medicine where the decision-making process must be transparent and interpretable. Ontology-grounded KGs can then be used to generate hypotheses that can be validated through manual review, in vitro experiments, or clinical studies, highlighting the utility of ontologies in translating complex data into actionable knowledge.
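The deductive step is simple enough to show in a few lines. The triples below are simplified and partly invented; in practice they would come from an ontology-grounded KG, and the inferred edges would be hypotheses for the manual or experimental validation mentioned above.

```python
# A toy deductive rule: if a drug targets a protein, and that protein
# is associated with a disease, propose the drug as a candidate for
# the disease. (drugX / GENE42 / diseaseY are placeholders.)
triples = {
    ("aspirin", "targets", "PTGS2"),
    ("PTGS2", "associated_with", "inflammation"),
    ("drugX", "targets", "GENE42"),
    ("GENE42", "associated_with", "diseaseY"),
}

inferred = {
    (drug, "candidate_for", disease)
    for (drug, p1, protein) in triples if p1 == "targets"
    for (subj, p2, disease) in triples
    if p2 == "associated_with" and subj == protein
}

print(inferred)
# {('aspirin', 'candidate_for', 'inflammation'),
#  ('drugX', 'candidate_for', 'diseaseY')}
```

The ontology’s contribution is that “targets” and “associated_with” have fixed, shared meanings, so a rule like this can be applied mechanically across the whole graph.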
Despite their many advantages, domain ontologies are not without their challenges. One major hurdle is the lack of direct integration between data and ontologies, meaning that most ontologies are abstract knowledge models not designed to contain or integrate data. This necessitates the use of (semi-)automated approaches to integrate data with the ontological knowledge model, which can be complex and resource-intensive. Additionally, the existence of multiple ontologies within a domain can lead to semantic inconsistencies that impede the construction of holistic KGs. Integrating different ontologies with overlapping information may result in semantic irreconcilability, making it difficult to reuse the ontologies for the purpose of KG construction. Careful planning is therefore required when choosing or building an ontology.
As we move forward, the development of integrated, holistic solutions will be crucial to unlocking the full potential of domain ontologies in KG construction. This means creating methods for integrating multiple ontologies, ensuring data quality and credibility, and focusing on semantic expansion techniques to leverage existing resources. Furthermore, there needs to be a greater emphasis on creating ontologies with the explicit purpose of instantiating them, and storing data directly in graph databases. The integration of expert knowledge into KG learning systems, by using ontological rules, is crucial to ensure that KGs not only capture data, but also the logical patterns, inferences, and analytic approaches of a specific domain.
Domain ontologies will prove to be the key to building robust and useful KGs. They provide the necessary structure, consistency, and interpretability that enables AI systems to extract valuable insights from complex data. By understanding and addressing the challenges associated with ontology design and implementation, we can harness the power of KGs to solve complex problems across diverse domains, from healthcare and science to culture and beyond. The future of knowledge management lies not just in the accumulation of data but in the development of intelligent, ontologically-grounded systems that can bridge the gap between information and meaningful understanding.
References
Al-Moslmi, T., El Alaoui, I., Tsokos, C.P., & Janjua, N. (2021). Knowledge graph construction approaches: A survey of recent research works. arXiv preprint. https://arxiv.org/abs/2011.00235
Gilbert, S., et al. (2024). Augmented non-hallucinating large language models using ontologies and knowledge graphs in biomedicine. npj Digital Medicine. https://www.nature.com/articles/s41746-024-01081-0
Guzmán, A.L., et al. (2022). Applications of Ontologies and Knowledge Graphs in Cancer Research: A Systematic Review. Cancers, 14(8), 1906. https://www.mdpi.com/2072-6694/14/8/1906