Welcome to Eukairos, a collection of musings at the confluence of artificial intelligence (AI), data management and healthcare. The term eukairos is derived from the Greek ευκαιρός, loosely meaning ‘timeliness’ or ‘opportunity’. The short explanation for the site’s name is that English-language domain names are pretty much saturated in the .sg domain. The more involved explanation is that I see timing and opportunity as serendipitous, stochastic factors that are too often overlooked. As the book of Ecclesiastes succinctly observed, “time and chance happen to them all”. Does that include AI, data and healthcare?
Take AI, for instance. In the last few years, there has been a veritable Cambrian explosion of generative AI (GenAI) tools. I was just migrating from laboriously stringing Docker-based OpenWebUI to Ollama when I came across LibreChat. Then Ollama came up with its own frontend, rendering both LibreChat and OpenWebUI redundant. I went from coding AI agents manually with LangGraph or LlamaIndex to increasingly abstracted agents in Cursor, Gemini CLI and, most recently, Antigravity. And there is AnythingLLM, an all-in-one app that you can install and use like you would Outlook or Word. Those are just a few open-source examples.
The furious development of GenAI has definitely made life easier for AI newbies. A year ago, you would have had to set up ByteDance’s DeerFlow (arguably the first one-prompt AI agent creator) laboriously in a specific Python environment, then be prepared to make frequent trips to its GitHub issues page as bugs surfaced. Today, not only has DeerFlow become more polished and robust, but similar apps abound, Antigravity among them. How arduously or easily your personal AI journey pans out can depend a lot on when you step into the field.
This brings to mind the concept of the “adjacent possible”, which has its roots in evolutionary biology. The intuition behind this elegant concept should be self-evident, but apparently isn’t: new innovations spring forth from a base of existing technologies. Think of ride-hailing apps, which could only have been invented after both GPS and smartphones became widely available. With so many fundamentally different approaches to applying GenAI, the range of possible new innovations springing from existing approaches can become vertiginous for players trying to sift the winners from the evolutionary dead ends among the smorgasbord of tools and approaches. Who knew that Python-native AI agents would turn out to be more efficient than agents churning out JSON outputs, even though both were adjacent possibles of large language models (LLMs)?
This evolutionary stage is also playing out in AI development in healthcare, where the action is more at the “applied AI” level than at the “tooling” level. I see a plethora of healthcare GenAI apps being flogged by both established tech companies and startups. (In Singapore healthcare, there is a peculiar preference for outsourcing tech development rather than building in-house; why this is so might be the subject of a future blog post.) Healthcare is a risk-averse and tightly regulated industry, and we can be grateful that these constraints lead to a slower pace of evolution and adoption, one that perhaps offers more time to think about the potential of each new adjacent possible we encounter.
In healthcare data management, on the other hand, what I’m seeing today is a gradual coalescing around established data standards such as SNOMED, data frameworks such as OHDSI, and messaging standards like FHIR. There are fewer options to pick and choose from, in other words, which is a blessing of sorts. The main challenge in healthcare data lies not in furious evolution but in its intrinsic complexity, something I’ll probably explore more deeply in a future post. Having seen LLMs make short work of complexity, though, we can expect exciting adjacent possibles in exploiting healthcare data. For instance, consider whether graph databases are now better suited to healthcare data than today’s overwhelmingly relational landscape, given their natural fit with the multi-hop retrieval that AI often needs in order to navigate such data. While I don’t see Cypher overtaking SQL anytime soon, that is one adjacent possible I am watching with interest.
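To make the multi-hop point concrete, here is a minimal Python sketch of graph traversal over a toy clinical graph. Every node name and relationship below is invented purely for illustration; the point is only that a question like “which labs should this patient have monitored?” is a simple three-hop walk in a graph, whereas a relational schema would typically answer it with a chain of joins across three tables.

```python
# Toy adjacency list standing in for a clinical knowledge graph.
# All node names and relationships are hypothetical, for illustration only.
edges = {
    "patient:123": ["condition:T2DM"],    # diagnosed_with
    "condition:T2DM": ["med:metformin"],  # treated_by
    "med:metformin": ["lab:eGFR"],        # requires_monitoring
}

def multi_hop(adjacency, start, hops):
    """Follow outgoing edges `hops` times; each hop is one JOIN in SQL terms."""
    frontier = {start}
    for _ in range(hops):
        frontier = {nbr for node in frontier
                    for nbr in adjacency.get(node, [])}
    return frontier

# "Which labs does patient 123 need monitored?" is a three-hop question.
print(multi_hop(edges, "patient:123", 3))  # -> {'lab:eGFR'}
```

In Cypher this whole traversal would be a single variable-length path pattern; the sketch simply shows why graph-shaped storage maps so naturally onto the hop-by-hop retrieval an AI agent performs.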
In healthcare delivery, there is a growing shift towards preventive and mitigative health as societies age and healthcare inflation hits budgets at both macro and individual levels. It probably isn’t an overstatement to call this shift seismic; societies have grown too accustomed to thinking of healthcare as an artisanal cottage industry of repair jobs customized to individual clients by specialized practitioners. Seismic shifts, however, have the unfortunate property of causing disorientation and dislocation for those of us healthcare providers trying to adapt existing frameworks and processes to these new health goals.
The hope is that AI can help mitigate the pain, both for those tasked with engineering and implementing this shift and for the recipients of healthcare delivery. AI, however, needs high-quality data feedstock to be practically useful, and today healthcare data is not easily accessible or digestible by AI. This applies even to unstructured text data – the main data fodder for LLMs – which generally needs less cleaning and preparation than structured tabular data. So, amid all the efforts to deploy AI in healthcare, I do expect to see interesting adjacent possibles from the intersection of AI, data and healthcare in the coming years. Indeed, that is why I think this is an opportune time to launch this blog at the start of 2026.
This post was written without the use of artificial intelligence.
