The “Resonant Computing Manifesto” is not very good
Sunday December 7, 2025 — Poughkeepsie, NY
A friend informed me recently of The Resonant Computing Manifesto — the latest from the “malleable systems”/“digital gardening”/“knowledge systems” folks, who seem to need a fancy new term every few years to describe the utopia that they’re going to build any day now. In this most recent iteration, “AI” will lead us to this utopia, but only if we are careful to follow the principles they are laying out.
In this post, I’m going to go through the entire manifesto and point out the things that I find suspect. It can feel a bit rude to do a close reading of a text that is not very rigorously thought through, but I hope the authors are able to appreciate that this sort of reading is a way of taking their ideas seriously, and that they are willing to take their own ideas seriously enough to consider these objections. I will quote the entire article here, so as not to take anything out of context.
There’s a feeling you get in the presence of beautiful buildings and bustling courtyards. A sense that these spaces are inviting you to slow down, deepen your attention, and be a bit more human.
What if our software could do the same?
This sounds nice!
We shape our environments, and thereafter they shape us.
Did our environments not shape us before we shaped them? It seems obviously true that we shape our environments and that our environments shape us; I do not see how that relates to the rest of this project, or why it is phrased in this way.
Great technology does more than solve problems. It weaves itself into the world we inhabit. At its best, it can expand our capacity, our connectedness, our sense of what’s possible. Technology can bring out the best in us.
I think I agree with this, in a broad sense. However, I don’t actually know what the authors mean by the word “technology” here. I personally am inclined towards an expansive definition, as Ursula K. Le Guin was. When I think of great technology, I think of textiles.
Our current technological landscape, however, does the opposite. Feeds engineered to hijack attention and keep us scrolling, leaving a trail of anxiety and atomization in their wake. Digital platforms that increasingly mediate our access to transportation, work, food, dating, commerce, entertainment—while routinely draining the depth and warmth from everything they touch. For all its grandiose promises, modern tech often leaves us feeling alienated, ever more distant from who we want to be.
Immediately, I think this misidentifies the causes of a correctly-diagnosed problem. Atomization and alienation are not the result of feeds and other computer technology; they are the result of a bureaucratization that began before the modern computer, driven by economic forces, and was simply accelerated by the advent of digital information technology. David Graeber’s The Utopia of Rules goes over much of the history of this bureaucratization. If we hope to reverse these forces, we must understand that they were not generated by modern technology, but merely accelerated by it.
The people who build these products aren’t bad or evil. Most of us got into tech with an earnest desire to leave the world better than we found it. But the incentives and cultural norms of the tech industry have coalesced around the logic of hyper-scale. It’s become monolithic, magnetic, all-encompassing—an environment that shapes all who step foot there. While the business results are undeniable, so too are the downstream effects on humanity.
This is the first “we” that we come to in this essay, and it’s quite an interesting one. It points to a nerdy kid, sucked into working in the “hyper-scale” tech industry, not realizing the impact that work has had on the world, thinking they’re making things better but unable to see the consequences of their actions, like a confused child. There is certainly something worth discussing about the way the tech industry (particularly tech giants like Google and Facebook) isolates its workers from the rest of the world, attempting to cocoon them in a fantasyland where the suffering they create is out of sight.
At the same time, though, we have to reckon with the fact that for 20+ years, people have known that software engineering is a high-paying job, and one with opportunities for getting rich by starting a startup (however long the odds may actually be). I’m not convinced that “most of us” in the tech industry got here out of a desire to improve the world — money and class seem like much more present motivators for a majority of the tech industry. While the “logic of hyper-scale” may have swindled a handful of gullible bright-eyed new grads, I think it’s far more common that people go into those environments because they understand that that’s where the money is.
I don’t understand the point of this misdirection, and I remain confused about who this “we” is supposed to represent.
With the emergence of artificial intelligence, we stand at a crossroads. This technology holds genuine promise. It could just as easily pour gasoline on existing problems. If we continue to sleepwalk down the path of hyper-scale and centralization, future generations are sure to inherit a world far more dystopian than our own.
- I do not feel that “artificial intelligence” has emerged. LLMs now exist, and they have made the world quite different from the one that existed before them, but they do not show any real indicators of being intelligent.
- I do not feel that LLM technology “holds genuine promise,” and I would like to see that belief justified. Moving towards a world in which LLMs play a significant role seems to me like diving into a local maximum that is noticeably worse than the local maxima human civilization has occupied in the recent past. I am sure that LLMs can be jammed into some aspects of human activity with some degree of success, but since when is that “promise”?
But there is another path opening before us.
As we will find out, this is not the “don’t shove LLMs into everything” path — the authors seem to consider that one already closed, for reasons that are unclear to me.
Christopher Alexander spent his career exploring why some built environments deaden us, while others leave us feeling more human, more at home in the world. His work centered around the “quality without a name,” this intuitive knowing that a place or an architectural element is in tune with life. By learning to recognize this quality, he argued, and constructing a building in dialogue with it, we could reliably create environments that enliven us.
We call this quality resonance. It’s the experience of encountering something that speaks to our deeper values. It’s a spark of recognition, a sense that we’re being invited to lean in, to participate. Unlike the digital junk food of the day, the more we engage with what resonates, the more we’re left feeling nourished, grateful, alive. As individuals, following the breadcrumbs of resonance helps us build meaningful lives. As communities, companies, and societies, cultivating shared resonance helps us break away from perverse incentives, and play positive-sum infinite games together.
Sounds nice!
For decades, technology has required standardized solutions to complex human problems. In order to scale software, you had to build for the average user, sanding away the edge cases. In many ways, this is why our digital world has come to resemble the sterile, deadening architecture that Alexander spent his career pushing back against.
Once again, I would direct our attention for a moment away from “technology” and towards bureaucracy, which is the actual underlying thing that requires standardized solutions to complex human problems.
I would also love an example of “standardization” and why it’s bad. Is having standard-sized metric machine screws an example of the standardization that is sterilizing our environments? Perhaps, but that seems to me to be a worthwhile tradeoff. I have generally not felt that “standardization” is the problem with technology in my life.
It’s been a long time since I’ve read A Pattern Language or Notes on the Synthesis of Form, so I may be misremembering, but I do not recall Christopher Alexander being against standardization. In fact, the “Language” in A Pattern Language is explicitly about giving people a standardized toolkit of concepts so that they can discuss and build the spaces that work for them. It is possible to view hypertext in much the same way, as a standardized toolkit of language that allows communities to build spaces that work for them.
This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human—at scale.
I am once again asking for a single example of what the fuck you are talking about. People who feel like LLMs produce output that is “responding fluidly to the context and particularity of each human” are living in an entirely different world than I am.
One-size-fits-all is no longer a technological or economic necessity. Where once our digital environments inevitably shaped us against our will, we can now build technology that adaptively shapes itself in service of our individual and collective aspirations.
If “AI” is going to enable us to build technology that is no longer shaped against our will, why is AI getting added to every fucking thing I interact with on the web despite me not wanting it to be there at all? Isn’t that literally the kind of removal of choice in technology that you are supposedly concerned about?
We can build resonant environments that bring out the best in every human who inhabits them.
There are beat journalists out there these days writing an article every few weeks about the most recent time an LLM told someone to kill themselves. Are you listening to yourself here? How does that fit into your fairyland where LLMs are “bringing out the best in every human”?
And so, we find ourselves at this crossroads. Regardless of which path we choose, the future of computing will be hyper-personalized. The question is whether that personalization will be in service of keeping us passively glued to screens—wading around in the shallows, stripped of agency—or whether it will enable us to direct more attention to what matters.
What do you mean by this, and why is it inevitable? I would not describe someone interacting with an LLM as any more computationally “hyper-personalized” than someone interacting with a social media feed, so I don’t see increasing personalization as an inevitability in the “corporate use of LLMs in consumer media products” scenario.
In order to build the resonant technological future we want for ourselves, we will have to resist the seductive logic of hyper-scale, and challenge the business and cultural assumptions that hold it in place. We will have to make deliberate decisions that stand in the face of accepted best practices—rethinking the system architectures, design patterns, and business models that have undergirded the tech industry for decades.
Once again — who is the “we” here? Those who find “hyper-scale” seductive mostly find the boatloads of cash seductive.
Why are you focusing on “business and cultural assumptions” when economic factors like interest and tax rates are a much larger driver of the structure of firms in a market?
We suggest these five principles as a starting place:
Private: In the era of AI, whoever controls the context holds the power. While data often involves multiple stakeholders, people must serve as primary stewards of their own context, determining how it’s used.
This is word soup. What, precisely, do you mean by “context”, what does it mean to “control” context, and how will that control be used to wield power? “In the era of AI, whoever controls the context holds the power” is a great line for your TED talk, but if you’re interested in actually understanding anything, you gotta actually think about things.
Dedicated: Software should work exclusively for you, ensuring contextual integrity where data use aligns with your expectations. You must be able to trust there are no hidden agendas or conflicting interests.
This is a good principle! Unfortunately, as soon as you’re using LLMs, it becomes essentially impossible to achieve, because training a base model costs tens of millions of dollars, and the data that’s chosen to go into that base model has a huge influence on its behaviour. Everyone who trains LLMs is making politically motivated decisions about what training data to use and how to structure “safeguards” into the model. Does the fact that LLMs have “safeguards” / “guardrails” mean that they are not “working exclusively for you”? I would say so. So it seems that you’d be advocating for the removal of those safeguards, if you actually believe in these principles. It seems more likely you just haven’t thought this through much at all, though.
Plural: No single entity should control the digital spaces we inhabit. Healthy ecosystems require distributed power, interoperability, and meaningful choice for participants.
Once again — something that I agree with, and that the structure of LLM training is fundamentally at odds with. Do you have any ideas for what to do about that tension? Because I do: stop jamming LLMs into everything, and suddenly you’ll have a huge head start on implementing this principle!
Adaptable: Software should be open-ended, able to meet the specific, context-dependent needs of each person who uses it.
This is nice in theory, but very difficult in practice. I suspect this idea is why the authors are excited about “AI” — what if everyone had access to situated software, without needing all those pesky humans to program the computers?
I am much less optimistic than the authors about the ability of LLMs to usher us into this glorious future: to be useful, software needs to be not only flexible but also robust and understandable. Error handling and documentation are areas where LLMs are extremely bad. I would much rather take a piece of software that is robust and well documented and bend it to work in a slightly unusual use case than take a piece of software that is brittle and undocumented, which no human has ever used before, but which is theoretically perfectly suited to what I’m trying to do.
Prosocial: Technology should enable connection and coordination, helping us become better neighbors, collaborators, and stewards of shared spaces, both online and off.
Sure, that’s nice. Got any actual concrete ideas for what properties of a system lead to this, or just sort of handwavey aspirations?
We, the signatories of this manifesto, are committed to building, funding, and championing products and companies that embed these principles at their core. For us, this isn’t a theoretical treatise. We’re already building tooling and infrastructure that will enable resonant products and ecosystems.
But we cannot do it alone. None of us holds all the answers, and this movement cannot succeed in isolation. That’s why, alongside this manifesto, we’re sharing an evolving list of principles and theses. These are specific assertions about the implementation details and tradeoffs required to make resonant computing a reality. Some of these stem from our experiences, while others will be crowdsourced from practitioners across the industry. This conversation is only just beginning.
If this vision resonates, we invite you to join us. Not just as a signatory, but as a contributor. Add your expertise, your critiques, your own theses. By harnessing the collective intelligence of people who earnestly care, we can chart a path towards technology that enables individual growth and collective flourishing.
Ah, here we are — surely the “theses” will be well thought out, clear, and provide direction!
Alas, they are yet more TED talk aphorisms with no substance.
Hope it at least helps you get the funding you’re looking for, though!
A final little note, tangentially related. I have essentially stopped writing public blog posts over the past several years. I still regularly receive emails from people appreciating my old writing, asking why I’ve stopped, or whether I plan to write again.
There are a lot of answers to why I stopped writing, but a significant part of it is that witnessing the popularity of LLMs among my “peers” in the tech world has given me the strong impression that people largely no longer care about quality or beauty. I try to write things that are beautiful and high quality, and if the culture in which I exist no longer values those things, I do not see that culture as worth engaging with. Why bother spending time writing things that are good, when people would eat up LLM slop instead if I didn’t bother writing anything, and when I’d just be writing for the LLM scrapers that will regurgitate a mangled copy of whatever I publish? People who are satisfied with LLM outputs are not the people I am writing for.
I’ve been writing, but I haven’t been publishing, because my writing is for a culture that no longer exists, and I want no part in continuing to pour energy into its limping corpse. I know that some of the people who have told me in the past that they appreciated my writing are the people working on developing and implementing LLMs. I know you’ll probably see me as simply a bitter reactionary, but I do hope you can at least glimpse my side of things.