The advent of the “Cloister Web,” a conceptual space where individuals leverage Large Language Models (LLMs) to cultivate novel ideas and commit them to a persistent public memory, heralds a profound shift in our intellectual and political landscapes. Politically, the mere genesis of an idea is insufficient; its effective distribution is paramount. LLMs, in this context, are not just tools for thought but potent engines for bespoke delivery, tailoring messages to resonate deeply with individual recipients.
Historically, transformative communication technologies have reshaped political discourse. The printing press, for instance, democratized access to information, allowing entirely disparate, even contradictory, ideas to proliferate through the same medium without direct interference. In pre-independence India, this manifested in a complex tapestry of narratives. Various factions within both Hindu and Muslim communities advocated for cooperation with the British Raj, while others fiercely championed resistance. The press became the conduit for these divergent viewpoints, though often, subtle but significant ideological divisions, amplified by the very medium meant to connect, hindered broader unification against a common adversary. Print allowed these nuanced positions to be articulated and debated widely, yet the translation of these ideas into unified action remained a challenge.
Just as intellectuals of previous eras harnessed the power of print, today’s thinkers will inevitably turn to the Cloister Web for discourse and the dissemination of their ideas. However, this new paradigm may blur the traditional lines between the originator of an idea and its popularizer. Historically, distinct roles have often emerged: the intellectual who conceptualizes and articulates new frameworks, and the revolutionary leader who galvanizes public action around these concepts.
In India, figures like Rabindranath Tagore and Sri Aurobindo were profound intellectuals, shaping notions of Indian identity and spirituality, while leaders like Mahatma Gandhi (who uniquely embodied both roles) and Subhas Chandra Bose translated broader ideals into mass movements. Russian history offers examples like Alexander Herzen, whose writings laid intellectual groundwork, and Vladimir Lenin, who masterfully channeled such ideas into revolutionary action. In China, thinkers such as Liang Qichao envisioned a modern Chinese state, with figures like Sun Yat-sen and later Mao Zedong spearheading the revolutionary movements to realize differing versions of that vision. Similarly, in American history, the intellectual contributions of Thomas Paine and Thomas Jefferson provided the philosophical underpinnings for the revolution, which was then vociferously championed and driven by figures like Samuel Adams and Patrick Henry. The intellectual often risked censorship or academic isolation; the revolutionary leader, their liberty or life.
LLM technology offers a novel dynamic, potentially enabling intellectuals to bypass traditional gatekeepers and speak more directly to individuals. This is achieved by crafting content tailored to specific tastes, preferences, and pre-existing knowledge frameworks. This bespoke communication will be facilitated not only by generating written material that LLMs can readily process and adapt but also by creating LLM-consumable idea-graphs and knowledge structures. These structures will allow for a more nuanced and interconnected understanding of complex concepts. Beyond its current utility as a medium for targeted advertising or customer service, the LLM chat interface is poised to become the new political “maidan”—the public square or sports field historically used for political rallies and discourse—through which ideas reach, engage, and ultimately shape individuals.
The intellectual, therefore, may sow the seed of an idea within the Cloister Web, but it is the LLM itself that provides the uniquely fertile soil. Through its vast latent space—the complex, high-dimensional internal representations it develops from training data—an LLM can foster unexpected connections, interpretations, and extrapolations of these initial concepts, allowing them to root and flourish in diverse individual minds in ways previously unimaginable.
The Cloister Web, powered by LLMs, promises to revolutionize not just how ideas are born and recorded, but more critically, how they are distributed, interpreted, and integrated into the political consciousness. This shift presents both immense opportunities for direct engagement and nuanced understanding, alongside potential challenges in navigating a landscape where ideas can be infinitely remixed and individually targeted, forever altering the contours of our collective political maidan.
At the edge of the known, maps fail and instincts take over. We don’t just explore new worlds—we build places to survive them. Because in deep space, meaning isn’t found. It’s made.
I. Interruption of Infinity
The Delta Quadrant is a distant region of the galaxy in the Star Trek universe—vast, largely uncharted, and filled with anomalies, dangers, and promise. It is where the map ends and the unknown begins. No stations, no alliances, no history—just possibility.
And yet, possibility alone is not navigable. No one truly explores a void. We only explore what we can orient ourselves within. That is why every journey into the Delta Quadrant begins not with motion, but with homebuilding—the act of constructing something steady enough to make movement meaningful.
This is not a story about frontiers. It is a story about interruptions.
To build a home is to interrupt space. To be born is to interrupt infinity.
Consciousness does not arise gently. It asserts. It carves. It says: Here I am. The conditions of your birth—your geography, your culture, your body—are not mere facts. They are prenotions: early constraints that allow orientation. They interrupt the blur of everything into something—a horizon, a doorway, a room.
Francis Bacon wrote that memory without direction is indistinguishable from wandering. We do not remember freely; we remember through structures. We do not live in space; we live through place. Philosopher Kei Kreutler expands this insight: artificial memory—our rituals, stories, and technologies—is not a container for infinity. It is a deliberate break in its surface, a scaffolding that lets us navigate the unknown.
Like stars against the black, places puncture the undifferentiated vastness of space. They do not merely protect us from chaos; they make chaos legible. Before GPS, before modern maps, people made stars into stories and stories into guides. Giordano Bruno, working in the Hermetic tradition, saw constellations as talismans—anchoring points in a metaphysical sky. In India, astronomy and astrology were entwined, and the nakshatras—lunar mansions—offered symbolic footholds in the night’s uncertainties. These were not just beliefs. They were early technologies of place-making.
Without a place, you are not lost—you are not yet anywhere.
And so, to explore the Delta Quadrant—to explore anything—we must first give it a place to begin. Not just a structure, but a home. Not just shelter, but meaning.
II. From Vastness to Meaning
To understand why we need homes in the Delta Quadrant, we must first understand what it means to be in any space at all. Not merely to pass through it, but to experience it, name it, shape it—to transform the ungraspable into something known, and eventually, something lived.
This section traces that transformation. It begins with space—untouched, undefined—and follows its conversion into place, where identity, memory, and meaning can take root. Along the way, we consider the roles of perception, language, and tools—not just as instruments of survival, but as the very mechanisms by which reality becomes navigable.
We begin where we always do: in the unmarked vastness.
What is Space?
Space surrounds us, yet refuses to meet our gaze. It is not a substance but a condition—timeless, uncaring, and full of potential. It offers no direction, holds no memory. Nothing in it insists on being noticed. Space simply waits.
Henri Lefebvre helps us make our first move toward legibility. He proposes that all space emerges through a triad: the representations of space—the conceptual abstractions of cartographers, economists, and urban planners; the spatial practices of everyday life—our habits of movement and arrangement; and representational spaces—the dreamlike, lived realities saturated with memory, symbol, and emotion. Yet in modernity, it is the first of these—abstract space—that dominates. Space is planned, capitalized, monetized. It becomes grid and zone, not story or sanctuary.
Still, even this mapped and monetized space is not truly empty. Doreen Massey reminds us that space is not inert. It is relational, always in flux, co-constituted by those who traverse it. Space may not hold memories, but it does hold tensions. A room shifts depending on who enters it. A street corner lives differently for each passerby. What appears static from orbit is endlessly alive on foot.
We might then say: space is not blank—it is waiting. It is the stage before the script, the forest before the trail, the soundscape before the melody. It is possibility without orientation.
And yet, we cannot live on possibility. To dwell requires more than openness. Something must be placed. Something must be remembered.
What is Place?
Place begins when space is interrupted—when the unformed becomes familiar, when pattern gathers, when time slows down enough to matter. Where space is potential, place is presence.
Yi-Fu Tuan called place “an ordered world of meaning.” This ordering is not merely logical—it is affective, mnemonic, embodied. Place is not only where something happens; it is where something sticks. The repeated use of a corner, the ritual return to a path, the naming of a room—all of these actions layer memory upon memory until a once-anonymous space becomes deeply, even invisibly, ours.
Edward Casey expands this view by proposing that place is not a passive container of identity, but a generator of it. Who we are emerges from where we are. The self is not constructed in a vacuum, but shaped by kitchens and classrooms, alleyways and attics. A place is a crucible for becoming.
And places are not necessarily large or fixed. Often they are forged in fragments—through a method of thought called parataxis, the act of placing things side by side without hierarchy or explanation. Plates, tables, menus—listed without commentary—already conjure a restaurant. North is the river, east is the village: already we are somewhere. This act of spatial poetry, what might be called topopoetics, allows us to construct coherence from adjacency. A place need not be explained to be felt.
Moreover, places are not isolated islands. They are defined as much by what they touch as by what they contain. A healthcare startup, for instance, is not merely a business plan or a piece of code—it is a bounded intersection of regulation, culture, user need, and infrastructural possibility. Its identity as a place emerges through tension, not through self-sufficiency.
To make a place, then, is to draw a boundary—not always of stone, but always of meaning. And once there is a boundary, there is the possibility of crossing it.
Exploration and Navigation
If place is what interrupts space, exploration is the means by which that interruption unfolds. We explore to understand, to locate, to claim. But we also explore to survive. In an unmarked world, movement without orientation is not freedom—it is drift.
The act of exploration is always mediated by tools—technologies, heuristics, protocols, even rituals. A tool transforms a space into something workable, sometimes by revealing it, sometimes by resisting it. The ax makes the forest navigable. The microscope transforms skin into data. A recipe, too, is a tool: it arranges the chaos of the kitchen into a legible field of options.
Skill determines the fidelity of this transformation. A novice with a saw sees wood; a carpenter sees potential. A goldsmith with pliers explores more in an inch of metal than a layman can in a bar of gold. Tools extend reach, but skill gives them resonance.
Rules of thumb emerge here as quietly powerful. They encode accumulated wisdom without demanding full explanation. A rule of thumb is a kind of portable place—a local memory that survives relocation. It allows someone to move meaningfully through new terrain without starting from nothing.
But perhaps the oldest, and most powerful, tool of place-making is language. To name something is to summon it into experience. A name makes the unspeakable speakable, the abstract navigable. Storytelling is not merely entertainment—it is cartography. Myth and memory alike help us place ourselves. Rituals, in this light, become recurring acts of alignment: a way to rhythmically convert time and action into a felt geography.
In early computer games like Zork, entire worlds were constructed out of pure language. “To the west is a locked door.” “To the north, a forest.” With no images at all, a mental geography emerged. Place formed from syntax. And in open-world games, which promise limitless exploration, boundaries remain—defined not by terrain, but by tools and capabilities. One may see a mountain, but until one has a grappling hook, the mountain is not truly in reach.
This is the double truth of exploration: it reveals, but also restricts. Every tool has affordances and blind spots. Every method of navigation makes some routes legible and others obscure.
And so, just as place makes meaning possible, it also makes power visible. When we explore, we choose where to go—but also where not to go. When we name, we choose what to name—and what to leave unnamed. With each act of orientation, something is excluded.
This is where the ethical tensions begin.
III. Violence, Power, Custodianship
The Violence of Exploration
To make a place is never a neutral act. It is always a form of imposition, a declaration that one configuration of the world will take precedence over another. Every boundary drawn reorders the field of possibility. In this sense, exploration—often romanticized as the pursuit of discovery—is inseparable from the logic of exclusion. The forest cleared for settlement, the land renamed by the cartographer, the dataset parsed by an algorithm: each gesture selects a future and discards alternatives. Place-making is not only constructive—it is also extractive.
Achille Mbembe’s concept of necropolitics offers a stark rendering of this dynamic. For Mbembe, the most fundamental expression of power is the authority to determine who may live and who must die—not just biologically, but spatially. A person denied a stable place—be it in legal terms, economic structures, or cultural recognition—is exposed to systemic vulnerability. They are rendered invisible, disposable, or subject to unending surveillance. In this framework, place becomes not a refuge but a rationed privilege, administered according to hierarchies of race, class, and citizenship. To be placeless is to be exposed to risk without recourse.
David Harvey arrives at a similar critique from a different angle. For Harvey, the production of space under capitalism is inherently uneven. Capital concentrates selectively, building infrastructure, institutions, and visibility in certain regions while leaving others disinvested, fragmented, or erased. Some places are made to flourish because they are profitable; others are sacrificed because they are not. Entire neighborhoods, cities, and ecosystems are subjected to cycles of speculative construction and abandonment. In this schema, place is commodified—not lived. It becomes a product shaped less by the needs of its inhabitants than by the imperatives of financial flows.
Who Gets to Make Place?
Even at smaller scales, the ethics of place-making hinge on who holds the authority to define what a place is and who belongs within it. The naming of a school, the zoning of a district, the design of a product interface—each involves not only inclusion, but exclusion; not only clarity, but control. The map that makes one community legible can make another invisible. Orientation, in this sense, is never free of consequence. It is always tethered to power.
If this is the cost of exploration, then the question we must ask is not simply whether to build places—but how, and for whom.
Those who create the tools through which places are made—architects, technologists, platform designers—wield a power that is both formative and silent. In shaping the conditions under which others navigate the world, they act as unseen cartographers. A navigation app determines which streets appear safe. A job platform defines whose labor is visible. A software protocol decides who is legible to the system. In each case, someone has already made a decision about what kind of world is possible.
This asymmetry between creator and user has led some to argue that ethical design requires more than usability—it requires an ethos of custodianship. The act of place-making must be informed not only by technical possibility, but by moral imagination. A well-designed place is not simply functional—it is inhabited, sustained, and responsive to the people who live within it.
Michel Foucault offers a vocabulary for this through his concept of heterotopias: places that operate under a different logic, outside the dominant spatial order. These may be institutional—cemeteries, prisons, libraries—or insurgent—subcultures, autonomous zones, speculative games. Heterotopias do not merely resist the prevailing map; they reveal that other maps are possible. They function as mirrors and distortions of the dominant world, reminding us that the spatial order is neither natural nor inevitable.
Yet even heterotopias cannot be engineered wholesale. They must be lived into being. This is the insight offered by Christopher Alexander and, more recently, Ron Wakkary in their explorations of unselfconscious design. Good places, they argue, are rarely planned top-down. Instead, they emerge from a slow dance between structure and improvisation. A fridge becomes a family bulletin board. A courtyard becomes a marketplace. A piece of software becomes an unanticipated ritual. In these cases, fit emerges not from specification but from accumulated use. Design, at its best, enables this evolution rather than constraining it.
To make a place, then, is not to finalize it. It is to initiate a relationship. The designer, the founder, the engineer—each acts as a temporary steward rather than a sovereign. The real test of their creation is not how complete it feels on launch day, but how it adapts to the people who enter it and make it their own. This is the quiet responsibility of custodianship: to create with humility, to listen after building, and to recognize that places do not succeed by force of vision alone. They succeed by making others feel, at last, that they belong.
IV. Fractal Place-Making
We often think of place-making as a singular act—a line drawn, a structure raised, a tool released. But in truth, places are rarely built in one gesture. They are shaped recursively, iteratively, across layers and scales. A place is not simply made once—it is continuously remade, revised, and reinhabited. If power animates the creation of place, then care animates its persistence.
The previous section examined how place-making implicates violence and authority. This one turns inward, offering tools to see place-making not as an external imposition, but as a continuous, generative practice—one we each participate in, often unconsciously. Places are not only geopolitical or architectural. They emerge in routines, in interfaces, in sentences, in rituals. They are as present in the layout of a city as in the arrangement of a desktop or the structure of a daily habit.
Place-making, in this light, becomes fractal.
Spaces All the Way Down
Every place, no matter how concrete or intentional, overlays a prior space. A home rests on a plot of land that once held other meanings. A software tool is coded atop prior protocols, abstractions, languages. A startup’s culture is built not from scratch, but from accumulated social assumptions, inherited metaphors, and the ghosts of previous institutions. No place begins in a vacuum. It begins by coalescing around an earlier ambiguity.
To say “it’s spaces all the way down” is not a paradox but a recognition: that all our structuring of the world rests on foundations that were once unstructured. And those, in turn, rest on others. Beneath every home is a history. Beneath every habit is a choice. Beneath every heuristic is an unspoken story of why something worked once, and perhaps still does.
This recursive layering reveals something crucial. Place is not just what we inhabit—it is what we build upon, often without seeing the full depth of what came before. When we set up a calendar system, when we define an onboarding process, when we reorganize a room or refactor code, we are engaging in acts of recursive place-making. These are not trivial gestures. They encode our assumptions about time, labor, clarity, worth. And in doing so, they scaffold the next set of moves. What feels natural is often just deeply buried infrastructure.
Traditions, Tools, and Temporal Sediments
Much of what makes a place stable over time is not its physicality but its rhythm. What repeats is remembered. What is remembered becomes legible. Over time, the sediment of repetition builds tradition—not as nostalgia, but as a living scaffolding.
Rules of thumb are examples of such traditions, compacted into portable epistemologies. They are not universal truths, but local condensations of experience: “Measure twice, cut once.” “If it’s not a hell yes, it’s a no.” “Always leave a version that works.” These are not mere slogans. They are the crystallization of hundreds of micro-failures, carried forward in language so that others may avoid or adapt. A rule of thumb is a place you can carry in your mind—a place where you briefly borrow the perspective of others, where their past becomes your foresight.
Ethnographic engineering—the practice of living among those you design for—extends this logic. It is not enough to ask what users want; one must become a user. To understand a kitchen, you must cook. To redesign a hospital intake form, you must sit beside a nurse at the end of a long shift. Inhabitance precedes insight. It is not empathy as abstraction, but as situated knowledge. This is why the mantra “get out of the building” matters. It invites designers to enter someone else’s place—and to temporarily surrender their own.
Even the way we recover from failure carries spatial weight. In systems design, crash-only thinking proposes that recovery should not be exceptional but routine. A system should not pretend to avoid breakdown—it should assume it, and handle it gracefully. This principle translates beyond code. Our identities, too, are shaped by rupture and repair. We are the residue of what survives collapse. To rebuild after a crash is to reassert a place for oneself in the world—to refuse exile, to restart with a new contour of legibility. The self is a recursive place, constantly reformed by continuity and failure.
Imagined Places, Real Consequences
Not all places are made of walls or workflows. Some are conjured in thought but anchor entire worlds in practice. These are imagined places—places held in common through language, ritual, and belief—and their effects are no less material for being constructed.
Benedict Anderson’s theory of imagined communities describes the nation as precisely such a place: a social structure that exists because enough people believe in its coherence. A country is not simply a set of borders—it is a shared imagination of belonging, reinforced by rituals as small as singing an anthem or using the same postal code. These rituals do not merely express the nation—they enact it. The community persists not because everyone knows each other, but because they believe in the same structure of place.
Gaston Bachelard, writing of intimate places, adds another layer. His Poetics of Space reveals how rooms, nests, and thresholds function not just architecturally, but symbolically. A staircase is not just a connector between floors—it is a memory channel. A drawer is not just storage—it is a metaphor for secrecy. Through repeated use and emotional investment, even the smallest corners of a home can become vast interior landscapes.
Designers who ignore this symbolic dimension risk creating tools that are frictionless but placeless. A well-designed app may guide a user efficiently, but if it lacks metaphor, texture, or resonance, it will not endure. By contrast, even ephemeral tools—when shaped with care—can become anchoring places. A text editor that respects rhythm. A ritualized way of closing the day. A naming convention that makes each project feel storied rather than serialized. These are small acts, but they echo. They accumulate. They become sediment.
Recursive place-making, then, is not about grandeur. It is about fidelity. It is about recognizing that every small act of shaping the world—every pattern set, every name given, every recovery ritualized—is part of a larger unfolding. Place is not a one-time gift. It is a continuous offering.
V. Homes at the Edge of the Known
Places don’t just emerge from space—they transform it. A well-made place doesn’t only make sense of what is; it makes new things possible. It reframes what we pay attention to, how we act, and who we become. Place is not the end of exploration—it is the start of imagination.
Each time we build a place, we alter the shape of the surrounding space. A room becomes a lab, a garage becomes a company, a notebook becomes a worldview. These shifts ripple outward. Identity follows structure. Tools reorganize desire. Suddenly what felt unreachable becomes thinkable. New directions appear.
This is why the Delta Quadrant matters. In Star Trek, it is the quadrant at the far edge of the map: unvisited, unaligned, untamed. But we all have our own Delta Quadrants—those domains where orientation fails. The new job. The new field. The social unknown. We don’t need to conquer these spaces. We need to inhabit them.
Building a home in the Delta Quadrant means giving shape to uncertainty. Not through control, but through commitment. Homes are not fortresses—they are launchpads. They anchor us without confining us. They give us somewhere to return to, so we can go further.
To build such homes is to design for possibility. It is to accept that the unknown will always outpace our frameworks, and to meet it not with fear, but with grounded generosity. Homes enable freedom not by removing constraints, but by embedding care in structure. They show us that discovery and dignity are not opposites—they are partners.
And yes, building these homes will be messy. There will be diplomacy with space jellyfish. There will be moral conundrums involving time loops and malfunctioning replicators. Someone will definitely rewire the main console so the espresso machine can detect tachyon emissions.
There is a shared soul shard between Dwarf Fortress, Emacs, and AI that lured me to them and has kept me engaged for over a decade. For a long time, I struggled to articulate the connection, managing only to describe Dwarf Fortress as the Emacs of games. But this analogy, while compelling, doesn’t fully capture the deeper resonance these systems share. They are not merely complicated; they are complex—tools for creativity that reward immersion and exploration.
To understand the allure, let’s revisit the distinction between complicated and complex. Complicated systems, say a spinning-disk microscope, consist of interlocking parts (each with its own internal complications) that interact in predictable ways. They require technical expertise to master, but their behavior remains largely deterministic, and I tire of them quickly.
Complex systems (see the Cynefin framework) exhibit emergent behavior. Their value/fun lies in the generative possibilities they unlock rather than the sum of their parts.
Dwarf Fortress, Emacs, and AI live on the froth of this complexity. None of these systems exist as ends in themselves. You don’t play Dwarf Fortress to achieve a high score (there isn’t one, you eventually lose). You don’t use Emacs simply to edit text, and you don’t build AI to arrange perceptrons in aesthetically pleasing patterns. These are platforms, altars for creation. Dev environments.
In Emergence We Trust
Like language with the rules of poetry, these environments are generative places enabling exploration of emergent spaces. Emergence manifests not only in the software but also in you. There is always a point where you find yourself thinking, I didn’t expect I could do that. In Dwarf Fortress you first fight against tantrum spirals and then, through mastery, against FPS death. Similarly, Emacs enables workflows that evolve over time, as users build custom functions and plugins to fit their unique needs. In AI, emergence arrives rather late, but it’s there. Putting together datasets, training on them, optimizing, starting over: all of that is complicated, but not complex per se. The complexity (and emergence) is in the capabilities of the trained network. Things infinitely tedious or difficult are a few matrix multiplications away.
Chasing this emergence is a kind of spelunking. It rewards curiosity and experimentation but demands patience and resilience. Mastery begins with small victories: making beer in Dwarf Fortress, accessing help in Emacs, or implementing a 3-layer neural network. Each success expands your imagination. The desire to do more, to push the boundaries of what’s possible, becomes an endless rabbit hole—one that is as exhilarating as it is daunting.
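For the AI case, that first small victory really is small. Here is a minimal sketch in plain numpy (the XOR task, layer sizes, learning rate, and epoch count are illustrative choices of mine, not anything prescribed above) showing a three-layer network coaxing behavior out of a few matrix multiplications:

```python
import numpy as np

# A tiny 3-layer network (input -> hidden -> output) trained on XOR.
# Sizes, learning rate, and epochs are arbitrary illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward pass: two matrix multiplications and two nonlinearities
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: gradients of the squared error, written out by hand
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```

Nothing about this toy predicts what larger networks will do, which is the point: the emergence lives in what the trained weights suddenly make easy.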
Complexity as a Gateway to Creativity
The high complexity of these systems—their vast degrees of freedom—opens the door to infinite creativity. This very openness, however, can be intimidating. Confronted with the sprawling interface of Emacs, the arcane scripts of Dwarf Fortress, or the mathematical abstractions of AI, it’s tempting to retreat to the familiar. Yet this initial opacity is precisely what makes these systems so rewarding. Engaging with something that might blow up in your face—whether it’s drunk cats, a lisp error, or an exploding gradient—makes you want to give up.
But just then you have an idea: what if you tried this…
AI slop is all around us, and extracting useful information will only get harder as we feed more noise into an already noisy world of knowledge. We are in an era of unprecedented data abundance, yet this deluge of information often lacks the structure necessary to derive meaningful insights. Knowledge graphs (KGs), with their ability to represent entities and their relationships as interconnected nodes and edges, have emerged as a powerful tool for managing and leveraging complex data. However, the efficacy of a KG is critically dependent on the underlying structure provided by domain ontologies. These ontologies, formal and machine-readable conceptualizations of a specific field of knowledge, are not merely useful but essential for the creation of robust and insightful KGs. Let’s explore the role that domain ontologies play in scaffolding KG construction, drawing on fields such as AI, healthcare, and cultural heritage to illuminate their importance.
Vassily Kandinsky, Composition VII (1913). According to Kandinsky, this is the most complex piece he ever painted.
At its core, an ontology is a formal representation of knowledge within a specific domain, providing a structured vocabulary and defining the semantic relationships between concepts. In the context of KGs, ontologies serve as the blueprint that defines the types of nodes (entities) and edges (relationships) that can exist within the graph. Without this foundational structure, a KG would be a mere collection of isolated data points with limited utility. The ontology ensures that the KG’s data is not only interconnected but also semantically interoperable. For example, in the biomedical domain, an ontology like the Chemical Entities of Biological Interest (ChEBI) provides a standardized way of representing molecules and their relationships, which is essential for building biomedical KGs. Similarly, in the cultural domain, an ontology provides a controlled vocabulary to define the entities, such as artworks, artists, and historical events, and their relationships, thus creating a consistent representation of cultural heritage information.
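To make the “blueprint” role concrete, here is a minimal sketch that treats an ontology as a schema of allowed classes and relations and rejects any triple that violates it. The vocabulary is invented for illustration and is not drawn from ChEBI or any real ontology:

```python
# Sketch: an ontology as a schema that constrains which triples a KG may contain.
# The classes and relations here are made up for illustration.
ONTOLOGY = {
    "classes": {"Molecule", "Protein", "Disease", "Artwork", "Artist"},
    "relations": {
        "inhibits":   ("Molecule", "Protein"),
        "treats":     ("Molecule", "Disease"),
        "created_by": ("Artwork", "Artist"),
    },
}

nodes = {}      # entity name -> class
triples = []    # (subject, relation, object)

def add_entity(name, cls):
    assert cls in ONTOLOGY["classes"], f"unknown class: {cls}"
    nodes[name] = cls

def add_triple(subj, rel, obj):
    dom, rng = ONTOLOGY["relations"][rel]            # KeyError if the relation is undefined
    assert nodes[subj] == dom and nodes[obj] == rng, "domain/range violation"
    triples.append((subj, rel, obj))

add_entity("aspirin", "Molecule")
add_entity("COX-1", "Protein")
add_triple("aspirin", "inhibits", "COX-1")           # accepted
# add_triple("COX-1", "treats", "aspirin")           # would fail: wrong domain/range
```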
One of the primary reasons domain ontologies are crucial for KGs is their role in ensuring data consistency and interoperability. Ontologies provide unique identifiers and clear definitions for each concept, which helps in aligning data from different sources and avoiding ambiguities. Consider, for example, a healthcare KG that integrates data from various clinical trials, patient records, and research publications. Without a shared ontology, terms like “cancer” or “hypertension” may be interpreted differently across these data sets. The use of ontologies standardizes the representation of these concepts, thus allowing for effective integration and analysis. This not only enhances the accuracy of the KG but also makes the information more accessible and reusable. Furthermore, using ontologies that follow the FAIR (Findable, Accessible, Interoperable, Reusable) principles facilitates data integration, unification, and information sharing, essential for building robust KGs.
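The interoperability point fits in a few lines: once each source’s terms are mapped to shared ontology identifiers, records that spell “cancer” differently land on the same node. The synonym table and identifier scheme below are invented for illustration:

```python
# Sketch: aligning terms from different sources to one ontology concept ID.
# The IDs and synonyms are illustrative, not taken from a real vocabulary.
SYNONYMS = {
    "cancer": "ONT:0001", "malignant neoplasm": "ONT:0001", "tumour": "ONT:0001",
    "hypertension": "ONT:0002", "high blood pressure": "ONT:0002",
}

def normalize(term: str) -> str:
    return SYNONYMS.get(term.strip().lower(), "ONT:UNKNOWN")

trial_record   = {"condition": "Malignant neoplasm"}
patient_record = {"diagnosis": "cancer"}

# Both records now point at the same node in the KG.
assert normalize(trial_record["condition"]) == normalize(patient_record["diagnosis"])
```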
Moreover, ontologies facilitate the application of advanced AI methods to unlock new knowledge. They support both deductive reasoning to infer new knowledge and provide structured background knowledge for machine learning. In the context of drug discovery, for instance, a KG built on a biomedical ontology can help identify potential drug targets by connecting genes, proteins, and diseases through clearly defined relationships. This structured approach to data also enables the development of explainable AI models, which are critical in fields like medicine where the decision-making process must be transparent and interpretable. The ontology-grounded KGs can then be used to generate hypotheses that can be validated through manual review, in vitro experiments, or clinical studies, highlighting the utility of ontologies in translating complex data into actionable knowledge.
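A toy version of that drug-discovery reasoning, with invented facts: a single forward-chaining rule proposes drug–disease candidates, and every suggestion carries the edges that support it, which is what keeps the inference explainable:

```python
# Sketch: a deductive rule over a KG -- if a drug inhibits a protein and that protein
# is implicated in a disease, propose the drug as a candidate for the disease.
# All facts below are invented for illustration.
triples = [
    ("drugX", "inhibits", "proteinA"),
    ("proteinA", "implicated_in", "diseaseY"),
    ("drugZ", "inhibits", "proteinB"),
]

def candidate_treatments(triples):
    inhibits = {(s, o) for s, r, o in triples if r == "inhibits"}
    implicated = {(s, o) for s, r, o in triples if r == "implicated_in"}
    for drug, protein in inhibits:
        for prot, disease in implicated:
            if protein == prot:
                # each inference keeps its supporting edges, so it stays auditable
                evidence = [(drug, "inhibits", protein), (prot, "implicated_in", disease)]
                yield drug, disease, evidence

for drug, disease, evidence in candidate_treatments(triples):
    print(f"{drug} is a candidate for {disease}; evidence: {evidence}")
```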
Despite their many advantages, domain ontologies are not without their challenges. One major hurdle is the lack of direct integration between data and ontologies, meaning that most ontologies are abstract knowledge models not designed to contain or integrate data. This necessitates the use of (semi-)automated approaches to integrate data with the ontological knowledge model, which can be complex and resource-intensive. Additionally, the existence of multiple ontologies within a domain can lead to semantic inconsistencies that impede the construction of holistic KGs. Integrating different ontologies with overlapping information may result in semantic irreconcilability, making it difficult to reuse the ontologies for the purpose of KG construction. Careful planning is therefore required when choosing or building an ontology.
As we move forward, the development of integrated, holistic solutions will be crucial to unlocking the full potential of domain ontologies in KG construction. This means creating methods for integrating multiple ontologies, ensuring data quality and credibility, and focusing on semantic expansion techniques to leverage existing resources. Furthermore, there needs to be a greater emphasis on creating ontologies with the explicit purpose of instantiating them, and storing data directly in graph databases. The integration of expert knowledge into KG learning systems, by using ontological rules, is crucial to ensure that KGs not only capture data, but also the logical patterns, inferences, and analytic approaches of a specific domain.
Domain ontologies will prove to be the key to building robust and useful KGs. They provide the necessary structure, consistency, and interpretability that enables AI systems to extract valuable insights from complex data. By understanding and addressing the challenges associated with ontology design and implementation, we can harness the power of KGs to solve complex problems across diverse domains, from healthcare and science to culture and beyond. The future of knowledge management lies not just in the accumulation of data but in the development of intelligent, ontologically-grounded systems that can bridge the gap between information and meaningful understanding.
References
Al-Moslmi, T., El Alaoui, I., Tsokos, C.P., & Janjua, N. (2021). Knowledge graph construction approaches: A survey of recent research works. arXiv preprint. https://arxiv.org/abs/2011.00235
Gilbert, S., & others. (2024). Augmented non-hallucinating large language models using ontologies and knowledge graphs in biomedicine. npj Digital Medicine. https://www.nature.com/articles/s41746-024-01081-0
Guzmán, A.L., et al. (2022). Applications of Ontologies and Knowledge Graphs in Cancer Research: A Systematic Review. Cancers, 14(8), 1906. https://www.mdpi.com/2072-6694/14/8/1906
Few ideas capture the collective human imagination more powerfully than the notion of a “universal library”—a singular repository of all recorded knowledge. From the grandeur of the Library of Alexandria to modern digital initiatives, this concept has persisted as both a philosophical ideal and a practical challenge. Miroslav Kruk’s 1999 paper, “The Internet and the Revival of the Myth of the Universal Library,” revitalizes this conversation by highlighting the historical roots of the universal library myth and cautioning against uncritical technological utopianism. Today, as Wikipedia and Large Language Models (LLMs) like ChatGPT emerge as potential heirs to this legacy, Kruk’s insights—and broader reflections on language, noise, and the very nature of truth—resonate more than ever.
The myth of the universal library
Humanity has longed for a comprehensive archive that gathers all available knowledge under one metaphorical roof. The Library of Alexandria, purportedly holding every important work of its era, remains our most enduring symbol of this ambition. Later projects—such as Conrad Gessner’s Bibliotheca Universalis (an early effort to compile all known books) and the Enlightenment’s encyclopedic endeavors—renewed the quest for total knowledge. Francis Bacon famously proposed an exhaustive reorganization of the sciences in his Instauratio Magna, once again reflecting the aspiration to pin down the full breadth of human understanding.
Kruk’s Historical Lens
This aspiration is neither new nor purely technological. Kruk traces the “myth” of the universal library from antiquity through the Renaissance, revealing how each generation has grappled with fundamental dilemmas of scale, completeness, and translation. According to Kruk,
inclusivity can lead to oceans of meaninglessness
The library on the “rock of certainty”… or an ocean of doubt?
Alongside the aspiration toward universality has come an ever-present tension around truth, language, and the fragility of human understanding. Scholars dreamed of building the library on a “rock of certainty,” systematically collecting and classifying knowledge to vanquish doubt itself. Instead, many found themselves mired in “despair” and questioning whether the notion of objective reality was even attainable. As Kruk’s paper points out,
The aim was to build the library on the rock of certainty: We finished with doubting everything … indeed, the existence of objective reality itself.
Libraries used to be zero-sum
Historically,
for some libraries to become universal, other libraries have to become ‘less universal.’
Access to rare books or manuscripts was zero-sum; a collection in one part of the world meant fewer resources or duplicates available elsewhere. Digitization theoretically solves this by duplicating resources infinitely, but questions remain about archiving, licensing, and global inequalities in technological infrastructure.
Interestingly, Google was founded just as Kruk’s 1999 paper was nearing publication. In many ways, Google’s search engine became a “library of the web,” indexing and ranking content to make it discoverable on a scale previously unimaginable. Yet it is also a reminder of how quickly technology can outpace our theoretical frameworks: perhaps Kruk couldn’t have known about Google without Google. Something something future is already here…
Wikipedia: an oasis island
Wikipedia stands as a leading illustration of a “universal library” reimagined for the digital age. Its open, collaborative platform allows virtually anyone to contribute or edit articles. Where ancient and early modern efforts concentrated on physical manuscripts or printed compilations, Wikipedia harnesses collective intelligence in real time. As a result, it is perpetually expanding, updating, and revising its content.
Yet Kruk’s caution holds: while openness fosters a broad and inclusive knowledge base, it also carries the risk of “oceans of meaninglessness” if editorial controls and quality standards slip. Wikipedia does attempt to mitigate these dangers through guidelines, citation requirements, and editorial consensus. However, systemic biases, gaps in coverage, and editorial conflicts remain persistent challenges—aligning with Kruk’s observation that inclusivity and expertise are sometimes at odds.
LLMs – AI slops towards the perfect library
Where Wikipedia aspires to accumulate and organize encyclopedic articles, LLMs like ChatGPT offer a more dynamic, personalized form of “knowledge” generation. These models process massive datasets—including vast portions of the public web—to generate responses that synthesize information from multiple sources in seconds. In a way, this almost addresses one of the sister aims of the perfect library, the perfect language, with embeddings serving as a stand-in for perfect words.
The perfect language, on the other hand, would mirror reality perfectly. There would be one exact word for an object or phenomenon. No contradictions, redundancy or ambivalence.
The dream of a perfect language has largely been abandoned. As Umberto Eco suggested, however, the work on artificial intelligence may represent “its revival under a different name.”
The very nature of LLMs highlights another of Kruk’s cautions: technological utopianism can obscure real epistemological and ethical concerns. LLMs do not “understand” the facts they present; they infer patterns from text. As a result, they may produce plausible-sounding but factually incorrect or biased information. The quantity-versus-quality dilemma thus persists.
Noise is good actually?
Although the internet overflows with false information and uninformed opinions, this noise can be generative—spurring conversation, debate, and the unexpected discovery of new ideas. In effect, we might envision small islands of well-curated information in a sea of noise. Far from dismissing the chaos out of hand, there is merit in seeing how creative breakthroughs can emerge from it: the gold of chemistry from leaden alchemy.
Concerns persist: misinformation, bias, and AI slop all invite us to exercise editorial diligence if we are to sift through the noise productively. This also echoes Kruk’s notion of the universal library as something that “by definition, would contain materials blatantly untrue, false or distorted,” forcing us to navigate “small islands of meaning surrounded by vast oceans of meaninglessness.”
Designing better knowledge systems
Looking forward, the goal is not simply to build bigger data repositories or more sophisticated AI models, but to integrate the best of human expertise, ethical oversight, and continuous quality checks. Possible directions include:
1. Strengthening Editorial and Algorithmic Oversight:
Wikipedia can refine its editorial mechanisms, while AI developers can embed robust validation processes to catch misinformation and bias in LLM outputs.
2. Contextual Curation:
Knowledge graphs are likely great bridges between curated knowledge and generated text
3. Collaborative Ecosystems:
Combining human editorial teams with AI-driven tools may offer a synergy that neither purely crowdsourced nor purely algorithmic models can achieve alone. Perhaps this process could be more efficient by adding a knowledge base driven simulation (see last week’s links) of the editors’ intents and purposes.
A return to the “raw” internet, as opposed to the cooked version served by social media, might be the trick. Armed with new tools, we can (and should) create meaning. In the process, Leibniz might get his universal digital object identifier after all.
Compression progress as a fundamental force of knowledge
Ultimately, Kruk’s reminder that the universal library is a myth—an ideal rather than a finished product—should guide our approach. Its pursuit is not a one-time project with a definitive endpoint; it is an ongoing dialogue across centuries, technologies, and cultures. As we grapple with the informational abundance of the digital era, we can draw on lessons from Alexandria, the Renaissance, and the nascent Internet of the 1990s to inform how we build, critique, and refine today’s knowledge systems.
Refine so that tomorrow, maybe literally, we can run reclamation projects in the noisy sea.
Image: Boekhandelaar in het Midden-Oosten (1950 – 2000) by anonymous. Original public domain image from The Rijksmuseum
This beautiful talk about Bayesian Thinking by Frank Harrell should be essential material for scientists trained in frequentist methods. The talk covers the shortcomings of frequentist approaches but, more importantly, also shows paths out of those quagmires.
Frank discusses his journey to Bayesian stats in this blog post from 2017 which is also in the next section.
The most useful takeaway for me from this post is that even experienced statisticians had to steer toward Bayes while fighting against both norms and their own education. The post has many, many good references to whet your appetite if you are Bayes-curious. I particularly liked the following take:
slightly oversimplified equations to contrast frequentist and Bayesian inference.
Frequentist = subjectivity1 + subjectivity2 + objectivity + data + endless arguments about everything
Bayesian = subjectivity1 + subjectivity3 + objectivity + data + endless arguments about one thing (the prior)
where
subjectivity1 = choice of the data model
subjectivity2 = sample space and how repetitions of the experiment are envisioned, choice of the stopping rule, 1-tailed vs. 2-tailed tests, multiplicity adjustments, …
subjectivity3 = prior distribution
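To see the contrast on actual numbers, here is a minimal sketch of the same binomial data summarized both ways; the 14-out-of-20 counts and the flat Beta(1, 1) prior are illustrative choices of mine, not anything from Harrell’s talk:

```python
# Sketch: frequentist vs Bayesian summaries of the same data (illustrative numbers).
import numpy as np
from scipy import stats

successes, n = 14, 20          # e.g. 14 responders out of 20 patients (made up)

# Frequentist: point estimate plus a normal-approximation 95% confidence interval.
p_hat = successes / n
se = np.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: Beta(1, 1) prior (the one openly arguable ingredient) -> Beta posterior.
posterior = stats.beta(1 + successes, 1 + n - successes)
cred = posterior.ppf([0.025, 0.975])        # 95% credible interval
prob_gt_half = 1 - posterior.cdf(0.5)       # P(true rate > 0.5 | data)

print(f"frequentist: p_hat={p_hat:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
print(f"bayesian:    95% credible=({cred[0]:.2f}, {cred[1]:.2f}), P(p>0.5)={prob_gt_half:.2f}")
```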
Traversing human history, even just the last two decades, we see a rapid increase in the accessibility of knowledge. The purpose of language, and of course of all communication, is to transfer a concept from one system to another. For humans, this ability to transfer concepts has been driven by advancements in technology, communication, and social structures and norms.
This evolution has made knowledge increasingly composable: individual pieces of information can be combined and recombined to create new understanding and innovation. Ten years ago I would have said being able to read a research paper and having the knowledge to repeat that experiment in my lab was strong evidence of this composability (reproducibility issues notwithstanding).
Now, composability itself is getting an upgrade.
In the next essay I’ll be exploring the implications of the arrival of composable knowledge. This post is a light stroll to remind ourselves of how we got here.
In ancient times, knowledge was primarily transmitted orally. Stories, traditions, and teachings were passed down through generations by word of mouth. This method, while rich in cultural context, was limited in scope and permanence. The invention of writing systems around 3400 BCE in Mesopotamia marked a significant leap. Written records allowed for the preservation and dissemination of knowledge across time and space, enabling more complex compositions of ideas (Renn, 2018).
Shelves, Sheaves, and Smart Friends
The establishment of libraries, such as the Library of Alexandria in the 3rd century BCE, and scholarly communities in ancient Greece and Rome, further advanced the composability of knowledge. These institutions gathered diverse texts and fostered intellectual exchanges, allowing scholars to build upon existing works and integrate multiple sources of information into cohesive theories and philosophies (Elliott & Jacobson, 2002).
Scribes, Senpai, and Scholarship
During the Middle Ages, knowledge preservation and composition were largely the domain of monastic scribes who meticulously copied and studied manuscripts. The development of universities in the 12th century, such as those in Bologna and Paris, created centers for higher learning where scholars could debate and synthesize knowledge from various disciplines. This was probably when humans shifted perspective and started to view themselves as apart from nature (Grumbach & van der Leeuw, 2021).
Systems, Scripts, and the Scientific Method
The invention of the printing press by Johannes Gutenberg in the 15th century revolutionized knowledge dissemination. Printed books became widely available, drastically reducing the cost and time required to share information. This democratization of knowledge fueled the Renaissance, a period marked by the synthesis of classical and contemporary ideas, and the Enlightenment, which emphasized empirical research and the scientific method as means to build, refine, share knowledge systematically (Ganguly, 2013).
Silicon, Servers, and Sharing
The 20th and 21st centuries have seen an exponential increase in the composability of knowledge due to digital technologies. The internet, open access journals, and digital libraries have made vast amounts of information accessible to anyone with an internet connection. Tools like online databases, search engines, and collaborative platforms enable individuals and organizations to gather, analyze, and integrate knowledge from a multitude of sources rapidly and efficiently. There have even been studies that, weirdly, attempt to predict future knowledge (Liu et al., 2019).
Conclusion
From oral traditions to digital repositories, the composability of knowledge has continually evolved, breaking down barriers to information and enabling more sophisticated and collaborative forms of understanding. Today, the ease with which we can access, combine, and build upon knowledge drives innovation and fosters a more informed and connected global society.
In Newton’s era it was rare to say something like “if I have seen further, it is by standing on the shoulders of giants” and actually mean it. Now it’s trivial. With education, training, and experience, professionals always stand “on shoulders of giants” (OSOG). Experts readily solve complex problems, but the truly difficult ones aren’t solved through training. Instead, a combination of muddling through and the dancer style of curiosity is deployed; more on this later. We have industries like semiconductors, solar, and gene sequencing with such high learning rates that the whole field seems to ascend to new OSOG levels daily.
These fast-moving industries follow Wright’s Law. Most industries don’t, due to friction in discovering and distributing efficiencies. In healthcare, regulatory barriers, high upfront research costs, and resistance to change keep learning rates low. Of course, individuals have expert-level proficiencies, many with private hacks to make life easier. Unfortunately, the broader field does not benefit from individual gains, and progress is made only when knowledge trickles down to the level of education, training, and regulation.
This makes me rather unhappy, and I wonder if even highly recalcitrant fields like healthcare could be nudged into the Wright’s law regime.
No surprise that I view AI as central, but it’s a specific cocktail of intelligence that has my attention. Even before silicon, scaling computation advanced intelligence. However, we will soon run into the limits of scaling compute, and the next stage of intelligence will need to be mixed (or massed, as proposed by Venkatesh Rao). Expertise + AI Agents + Knowledge Graphs will be the composite material that enables us not just to see further, but to bus entire domains across what I think of as the Giant’s Causeway of Intelligence.
Let’s explore the properties of this composite material a little deeper, starting with expertise and its effects.
An individual’s motivation and drive are touted as the reason behind high levels of expertise and achievement. At best, motivation is an emergent phenomenon, a layer that people add to understand their own behavior and subjective experience (ref,ref). Meanwhile, curiosity is a fundamental force. Building knowledge networks, compressing them, and then applying them in flexible ways is a core drive. Every day, all of us (not just the “motivated”) cluster similar concepts under an identity and then use that identity in highly composable ways (ref).
There are a few architectural styles of curiosity. Here, “architecture” means the network structure of concepts and connections uncovered during exploration. STEM fields favor a “hunter” style of curiosity: tight clusters, goal-directed. While great for answers, the hunter style has difficulty making novel connections. Echoing Feyerabend’s “anything goes” philosophy, novel connections require what is formally termed high forward flow, an exploration mode with significant distance between previous thoughts and new thoughts (ref). Experts at the edge of their field don’t make random wild connections; they control risk by picking between options likely to succeed, what has been termed “muddling through.”
Stepping back, if even experts are muddling at the edges, then the only difference between low and high expertise is the knowledge network. The book Accelerated Expertise, summarized here, explores methods of rapidly extracting and transmitting expertise in the context of the US military. Through the process of Cognitive Task Analysis, expertise can be extracted and used in simulations to induce the same knowledge networks in the minds of trainees. The takeaway: expertise can be accelerated by giving people with base training access to new networks of knowledge.
Another way to build a great knowledge network is through process repetition, you know… experience. These experience/learning curves predict success in industries that follow Wright’s Law. Wright’s Law is the observation that every time output doubles, the cost of production falls by a certain percentage; this rate of cost reduction is termed the learning rate. As a reference point, solar energy drops in price by 20% every time installed solar capacity doubles (the arithmetic is sketched in code after the list below). While most industries benefit from things like economies of scale, they can’t compete with these steady efficiency gains. Wright’s Law isn’t flipped on by some single lever but emerges through culture, from the factory floor all the way up to strategy:
Labor efficiency – where workers are more confident, learn shortcuts and design improvements.
Methods improvement, specialization, and standardization – through repeated use the tools and protocols of work improve.
Technology-driven learning – better ways of accessing information and automated production increase rates of production.
Better use of equipment – machinery is used at full capacity as experience grows
Network effects – a shared culture of work allows people to work across companies with little training
Shared experience effects – two or more products following a similar design philosophy means little retraining is needed.
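Before looking at what these factors share, the Wright’s Law arithmetic itself is short enough to sketch; the 20% figure is the solar learning rate cited above, and the starting cost is a made-up number:

```python
# Sketch of Wright's Law: cost falls by a fixed fraction with every doubling of
# cumulative output. The 20% learning rate is the solar figure cited above; the
# starting cost of 100 is an arbitrary illustration.
import math

learning_rate = 0.20                       # cost falls 20% per doubling
b = math.log2(1 - learning_rate)           # Wright's Law exponent (about -0.32)

def unit_cost(cumulative_output, first_unit_cost=100.0):
    return first_unit_cost * cumulative_output ** b

for doublings in range(5):
    x = 2 ** doublings
    print(f"after {doublings} doublings (output = {x:3d}x): cost = {unit_cost(x):6.1f}")
# prints 100.0, 80.0, 64.0, 51.2, 41.0 -- each doubling keeps 80% of the previous cost
```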
Each of these factors is essentially a creation, compression, and application of knowledge networks. In fields like healthcare, efficiency gains are difficult because skill and knowledge diffuse slowly.
Maybe, there could be an app for that…
Knowledge graphs (KGs) are databases, but instead of tables they store a network graph, capturing relationships between entities where both the entities and the relationships carry metadata. Much like the mental knowledge networks built during curious exploration, knowledge graphs don’t just capture information like Keanu → Matrix but more like Keanu -star of→ Matrix. And all three, Keanu, star of, and Matrix, have associated properties. In a way, KGs are crystallized expertise and have congruent advantages. They don’t hallucinate and are easy to audit, fix, and update. Data in KGs can be linked to real-world evidence, enabling them to serve as a source of truth and even causality, a critical feature for medicine (ref).
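A minimal sketch of the Keanu example, using networkx as a convenient stand-in (the property names are illustrative): both the nodes and the typed edge carry metadata, which is what makes the graph auditable:

```python
# Sketch: a tiny property graph where entities AND relationships carry metadata.
# networkx is just a convenient stand-in; property names are illustrative.
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_node("Keanu Reeves", type="Person", born=1964)
kg.add_node("The Matrix", type="Film", year=1999)
kg.add_edge("Keanu Reeves", "The Matrix",
            relation="star_of", role="Neo", source="IMDb")   # edge with its own metadata

# Auditing is just reading the stored facts back out, provenance included.
for subj, obj, data in kg.edges(data=True):
    print(f'{subj} -{data["relation"]}-> {obj}  (source: {data["source"]})')
```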
Medicine pulls from a wide array of domains to manage disease. It’s impossible for all of that information to sit in one mind, but knowledge graphs can visualise relationships across domains and help uncover novel solutions. Recently, projects like PrimeKG have combined several knowledge graphs to integrate multimodal clinical knowledge. KGs have already shown great promise in fields like drug discovery, and leading hospitals such as the Mayo Clinic see them as the path to the future. The one drawback is poor interactivity.
LLMs, meanwhile, are easy to interact with and wonderfully expressive. But due to their generative structure, LLMs offer no explainability and lack credibility. Powerful as they are, these shortcomings make them risky in applications like disease diagnosis, where the right research paper or textbook trumps generativity. Further, the way AI is built today can’t fix these problems. Methods like fine-tuning and retraining exist, but they require massive compute that is difficult to access, and quality isn’t guaranteed. The current recipe for building AI, throwing mountains of data into hot cauldrons of compute and stirring with the network of choice (mandatory xkcd), ignores very accessible stores of expertise like KGs.
LLMs (and really LxMs) are the perfect complement to KGs. LLMs can access and operate KGs in agentic ways, making network relationships easy to explore through natural language. As a major benefit, retrieving an accurate answer from a KG is 50x cheaper than generating one. KGs make AI explainable “by structuring information, extracting features and relations, and performing reasoning” (ref). Being easy to update and audit, KGs can readily disseminate know-how. Combined with a formal process like expertise extraction, KGs could serve as a powerful store of knowledge for institutions and even whole domains. We would no longer have to wait a generation to apply breakthroughs.
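A rough sketch of what “an LLM operating a KG” can look like, reusing the toy kg graph from the networkx sketch above. The LLM’s only job is to map free text onto the graph’s vocabulary; a naive string-matching stub stands in for that model call here so the example runs on its own:

```python
def pick_subject_and_relation(kg, question: str):
    """Stand-in for an LLM call: naively match graph terms in the question.

    In practice an LLM would map free text onto the graph's vocabulary;
    here we just look for literal mentions so the sketch is self-contained.
    """
    subject = next((n for n in kg.nodes if n.lower() in question.lower()), None)
    relations = {d["relation"] for _, _, d in kg.edges(data=True)}
    relation = next((r for r in relations if r in question.lower()), None)
    return subject, relation

def lookup(kg, subject, relation):
    """Return (object, evidence) pairs for a subject/relation from the KG."""
    return [(obj, d.get("source", "unknown"))
            for _, obj, d in kg.out_edges(subject, data=True)
            if d.get("relation") == relation]

def answer(kg, question: str) -> str:
    subject, relation = pick_subject_and_relation(kg, question)
    facts = lookup(kg, subject, relation) if subject and relation else []
    if not facts:
        return "Not in the knowledge graph."  # refuse rather than hallucinate
    obj, evidence = facts[0]
    return f"{subject} {relation} {obj} (evidence: {evidence})"

print(answer(kg, "Which film is Keanu Reeves the star of?"))
# Keanu Reeves star of The Matrix (evidence: film credits)
```

The division of labour is the point: the language model handles language, while the graph supplies the facts and the audit trail.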
Experts + LxMs + KGs are the composite material for accelerating innovation and lowering the cost of building the next generation of intelligence. We have seen how experts keep working toward a more complete knowledge network, one with high compression and flexibility that allows better composability. The combination of knowledge graphs and LLMs provides the medium to stimulate dancer-like exploration of options. This framework will allow high-proficiency but not-yet-expert practitioners to cross the barrier of experience with ease. Instead of climbing up a giant, one simply walks The Giant’s Causeway. With a combination of modern tools and updated practices for expertise extraction, we can accelerate proficiency even in domains resistant to Wright’s Law, unlocking rapid progress.
****
Appendix
Diving a little deeper into my area of expertise, healthcare, here are a few ways agents and KGs can help:
| Application | Role of Intelligence | Outcomes |
| --- | --- | --- |
| Efficiency in Data Management | KGs organize data in a way that reflects how entities are interconnected, which can significantly enhance data accessibility and usability. | Faster and more accurate diagnoses, streamlined patient care processes, and more personalized treatment plans. |
| Predictive Analytics | AI can analyze vast amounts of healthcare data to predict disease outbreaks, patient admissions, and other important metrics. | Allows healthcare facilities to optimize resource allocation and reduce wastage, potentially lowering the cost per unit of care provided. |
| Automation of Routine Tasks | AI agents can automate administrative tasks such as scheduling, billing, and compliance tracking using institution-specific KGs. | With widespread use, the cumulative cost savings could be in a similar range as Wright’s Law. |
| Improvement in Treatment Protocols | Refine treatment protocols using the knowledge graph of patient cases. | More effective treatments identified faster, reducing the cost and duration of care. |
| Scalability of Telehealth Services | Agentic platforms rooted in strong knowledge graphs can handle more patients simultaneously, offering services like initial consultations, follow-up appointments, and routine check-ups with minimal human intervention. | Drives down the cost of service delivery at high patient volumes. |
| Enhanced Research and Development | Already in play, AI and KGs accelerate medical research by better utilizing existing data for new insights. | Decreases the time and cost of developing new treatments. |
| Customized Patient Care | AI can analyze multimodal KGs of individual patients, integrating history, tests, and symptoms for highly customized care plans. | When aggregated across the patient population, healthcare systems can benefit from economies of scale and new efficiencies. |
In exploring the application of AI agents in healthcare, we see that standard Retrieval-Augmented Generation (RAG) and fine-tuning methods often fall short in the interconnected realms of healthcare and research. These methods struggle to leverage the structured knowledge that is already available, such as knowledge graphs. Data standards like Fast Healthcare Interoperability Resources (FHIR), used alongside well-built knowledge graphs, can significantly enhance AI agents, providing more effective and context-aware solutions.
The Shortcomings of Standard RAG in Healthcare
Traditional RAG models, designed to pull information from external databases or texts, often disappoint in healthcare—a domain marked by complex, interlinked data. These models typically fail to utilize the nuanced relationships and detailed data essential for accurate medical insights (GitHub) (ar5iv).
Leveraging FHIR and Knowledge Graphs
FHIR offers a robust framework for electronic health records (EHR), enhancing data accessibility and interoperability. Integrated with knowledge graphs, FHIR transforms healthcare data into a format ideal for AI applications, enriching the AI’s ability to predict complex medical conditions through a dynamic use of real-time and historical data (ar5iv) (Mayo Clinic Platform).
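To make the FHIR-to-graph idea concrete, here is a minimal sketch that flattens a simplified, made-up FHIR Condition resource into triples that could be loaded into a knowledge graph; real FHIR resources carry far more fields, and the mapping choices are illustrative assumptions:

```python
# A pared-down FHIR Condition resource (illustrative, not a real record).
condition = {
    "resourceType": "Condition",
    "id": "cond-001",
    "subject": {"reference": "Patient/pat-123"},
    "code": {"coding": [{"system": "http://snomed.info/sct",
                         "code": "44054006",
                         "display": "Type 2 diabetes mellitus"}]},
    "onsetDateTime": "2021-03-14",
}

def condition_to_triples(resource: dict):
    """Map a FHIR Condition into (subject, relation, object) triples."""
    cid = f"Condition/{resource['id']}"
    coding = resource["code"]["coding"][0]
    return [
        (resource["subject"]["reference"], "has condition", cid),
        (cid, "coded as", f"{coding['system']}|{coding['code']}"),
        (cid, "display", coding["display"]),
        (cid, "onset", resource["onsetDateTime"]),
    ]

for triple in condition_to_triples(condition):
    print(triple)
# ('Patient/pat-123', 'has condition', 'Condition/cond-001') ...
```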
Enhancing AI with Advanced RAG Techniques
Advanced RAG techniques utilize detailed knowledge graphs covering diseases, treatments, and patient histories. These graphs underpin AI models, enabling more accurate and relevant information retrieval and generation. This capability allows healthcare providers to offer personalized care based on a comprehensive understanding of patient health (Ethical AI Authority) (Microsoft Cloud).
Implementing AI Agents in Healthcare
AI agents enhanced with RAG and knowledge graphs can revolutionize diagnosis accuracy, patient outcome predictions, and treatment optimizations. These agents offer actionable insights derived from a deep understanding of individual and aggregated medical data (SpringerOpen).
A Novel Approach: RAG + FHIR Knowledge Graphs
This approach integrates RAG with FHIR-based knowledge graphs to significantly enhance AI capabilities in healthcare. It maps FHIR resources to a knowledge graph, augmenting the RAG model’s access to structured medical data and enriching AI responses with verified medical knowledge and patient-specific information. View the complete notebook in my AI Studio.
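As a sketch of how that retrieval step can work (not the notebook’s actual code), the triples produced in the FHIR example above become the retrieved context: we collect the facts connected to a patient and prepend them to the prompt, so the model answers from graph-grounded context rather than from memory. The `generate` call is a placeholder for whatever LLM interface is in use.

```python
def retrieve_patient_context(triples, patient_ref: str, hops: int = 2):
    """Collect triples reachable from a patient within a few hops."""
    frontier, context = {patient_ref}, []
    for _ in range(hops):
        new_frontier = set()
        for s, r, o in triples:
            if s in frontier:
                context.append((s, r, o))
                new_frontier.add(o)
        frontier = new_frontier
    return context

def build_prompt(question: str, context) -> str:
    facts = "\n".join(f"- {s} {r} {o}" for s, r, o in context)
    return (
        "Answer using only the facts below; say so if they are insufficient.\n"
        f"Facts:\n{facts}\n\nQuestion: {question}\n"
    )

triples = condition_to_triples(condition)  # from the FHIR sketch above
prompt = build_prompt("What chronic conditions does this patient have?",
                      retrieve_patient_context(triples, "Patient/pat-123"))
# answer = generate(prompt)  # placeholder for the LLM call of your choice
print(prompt)
```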
Challenges and Future Directions
While promising, integrating FHIR, knowledge graphs, and advanced RAG with AI agents in healthcare faces challenges such as data privacy, computational demands, and knowledge graph maintenance. These issues must be addressed to ensure an ethical implementation that serves all stakeholders (MDPI).
Conclusion
Integrating FHIR, knowledge graphs, and advanced RAG techniques into AI agents represents a significant advancement in healthcare AI applications. These technologies enable a precision and understanding previously unattainable, promising to dramatically improve care delivery and management as they evolve.
If you’re in the field or exploring how to apply AI, do get in touch!