Human Purpose, Collective Intelligence, Agentic AI, Leadership Development

Enlightenment Spirits at the Pub: Reflections on Agentic AI in London

Ale is a time-honoured catalyst for dialectical sparring and the promulgation of novel conceptions. The public house, after all, was often the unofficial annex to more formal venues of invention—whether before or after a grand unveiling at places like the RSA House. Ideas have always needed a warm room and good company to ferment, and the London pub has rarely disappointed on either count; it certainly didn’t disappoint this time. In the pubs of seventeenth‑ and eighteenth‑century England, shoes dusty from farm or counting‑house alike rested on the same wooden floorboards while their owners sparred—sometimes politely, sometimes not—over science, scripture, and the price of corn. Those smoky rooms pre‑dated the formal coffeehouses of the Enlightenment yet did much the same work: they were common‑sense parliaments, places where reason in its shirtsleeves debated power in its waistcoat. As Adam Smith observed in The Theory of Moral Sentiments, “Society and conversation are the most powerful remedies for restoring the mind to its tranquillity”; convivial gatherings are both generative and therapeutic.

That tradition stretches—like an unbroken foam atop a well‑pulled pint—into our own century, now grappling with what some have called the Fourth, Fifth, or Nth Industrial Revolution (what number are we on?)—a cascade of transformative technologies where AI agents now occupy as pivotal a place as steam engines once did. Witness last week’s gathering in the upstairs chamber of The Duke of Sussex in London. A crackle of anticipation, a concoction of Wi‑Fi and wonder, filled the air; laptops and pint glasses stood in amicable proximity. LangChain picked up the bar tab, as if to remind us that conviviality need not be sacrificed on the altar of agent frameworks. And there, amid oak panelling and the low murmur of “Top up?” from the barkeep, I shared our (Towards Peoples’) reflections and progress—not with a Newtonian prism but with LangGraph, an open-source framework for autonomous agents, offering our thread in a wider communal weave of insight and innovation.

I found myself speaking of software that perceives, reasons, and acts—agents that will shoulder our cognitive chores so we, as a collective, can turn our attention—as Amartya Sen put it—toward richer freedoms and wider sympathies, in both thought and deed. The room nodded, not merely from the ale. One could hear Tagore’s gentle admonition breathing through the rafters: “Let us not pray to be sheltered from dangers, but to be fearless in facing them.” What better place than a pub, cradle of fearless talk, to meet this new species of thinking tool? After all, it was in another English public house—the Eagle in Cambridge—where Crick famously announced the double helix. While no such singular revelation occurred at The Duke of Sussex, the spirit of convivial curiosity was palpable. If anything, our gathering—an informal continuation of the San Francisco Interrupt conference—echoed a broader, ongoing revolution in cognition and computation. We came not to unveil the next double helix, but to explore, together, the remarkable systems that others have recently set into motion and to build out of them our own innovations.

A Warm Gathering: Global Minds in a Local Setting

If the Enlightenment-era public house functioned as a kind of common-sense parliament, then our upstairs alcove at The Duke of Sussex embodied its contemporary reincarnation—a crucible where analog conversation met digital content, where heritage furniture cradled polyphonic futures. Yes, there was a flat-screen and an HDMI cable. But what infused the space was something more elemental: the presence of real humans, animated by shared curiosity and intellectual generosity, grappling with the future of agentic AI. The strongest signal of our century wasn’t the tech itself—it was the extraordinary heterogeneity of those in the room. This was not merely a London meetup; it was a node in a planetary mesh.

Attendees converged from across Britain and continental Europe, some coming in from Cambridge and others arriving via the Eurostar.

Among the evening’s most engaging moments was the dialogic interplay between Antanas (Ant) Daujotis and Marlene Mhangami. Their presentations were complementary: his, a humorous yet incisive discussion of institutional needs—especially within sectors like banking—when adopting innovations like agentic system design; hers, a grounded and personal account of agentic development in production environments, notably within Azure’s Python ecosystem.

Mhangami’s account of working within the Azure Python ecosystem, contributing directly to LangChain, and building agentic systems for enterprise contexts demonstrated the operational readiness and material traction of LangGraph—particularly as Microsoft now treats it as a stable component in their developer tooling. Far from speculative, her insights reflected the day-to-day realities and institutional requirements that shape the deployment of agents at scale.

Shrey Shah contributed remotely via a recorded presentation—which, though hampered by poor audio and an unsteady screen recording, introduced me, for the first time, to Cursor and its usefulness for building agentic systems.

Shashi Jagtap, who followed with a lively in-person talk titled Agentic AI: Beyond the Buzz, demoed the new LangGraph Studio V2 and offered a practitioner’s perspective on its utility in enterprise environments. Taken together, their contributions illuminated the practical experimentation already animating the London and global tech scenes. Their presentations emphasized the infrastructural demands and organizational dynamics involved in real-world enterprise deployments—where error tolerance is minimal and systems must be legible not just to developers, but to legal, compliance, and managerial teams. They articulated, with clarity and candor, how the abstractions of agentic design meet the rough edges of operational constraints.

It was energizing to spot my collaborators—Terry Smith and Jon (Jonathan David Coleman)—among the crowd, and gratifying to acknowledge their ongoing contributions to our shared open-source project: a memory agent built on LangMem and Swarm. We’ve been meeting weekly in a café near Crouch End, and the resonance of that small, persistent collaboration felt amplified in the room. As I gave a shoutout to their sustained work, I caught a twinkle in their eyes; I hope that next time they will present the work themselves.

The intermingling of seasoned experts, early-career practitioners, and hands-on open-source contributors generated an atmosphere saturated with Hirschmanian possibilism—a sense that within this heterodox constellation of developers, theorists, and practitioners, the adjacent possible was actively being redrawn. By the evening’s close, those who had arrived as unfamiliar faces were exchanging LinkedIn contacts, continuing threads of inquiry sparked by the talks, and proposing practical ways to collaborate. As we clustered around tables, steeped in ale and anecdote, what distinguished this gathering was the substance exchanged across disciplines and domains. In a gesture that exemplified the evening’s spirit, Shashi Jagtap generously offered to help troubleshoot some of my implementation challenges—a moment that crystallized the event’s ethos: collaborative, technically grounded, and intellectually generous. This wasn’t ambient networking; it was a shared commitment to tackling real-world complexities at the confluence of systems design and institutional practice.

Synergy in an Agentic Ecosystem: Greater Than the Sum

What I aimed to bring across in my own presentation was a conviction grounded in systems thinking: neither humans nor agents reach their full potential in isolation. From Aristotle to Stuart Kauffman, we’ve learned that new capabilities emerge not from components alone, but from their interactions. My presentation, demo included, built on that premise as an illustration of this principle in action—how coordination among parts produces something irreducible to any one element, and how we can design systems that deliberately engineer these synergies.

Take the open‑source Ryoma project. On its own, a Ryoma agent can ingest data, vector‑index it, and spit out tidy visualisations. Impressive in itself. When the Ryoma agent is embedded in a coordinated multi-agent system—alongside a dedicated memory agent powered by LangMem and a responsive charting module—its functionality multiplies. The embeddings it generates become semantic memory artifacts, digested by a memory agent that in turn enriches prompts passed to a visualisation layer. What begins as mere indexing unfolds into an orchestrated interpretive effort, in which each component amplifies the others. The whole arrangement begins to feel less like a rigid pipeline and more like a human‑machine consultancy—one where agents execute interdependent, structured, and repeatable tasks with disciplined precision, while humans are freed for more creative work and for thinking about what ought to be: ethics, and why we’re doing what we’re doing. In such workflows, agents function as tireless assistants in a highly coordinated relay. Their adaptive interplay is not driven by artistic freedom—the expressive spontaneity best left to humans—but rather by the structured regularity of data warehousing, where clarity, consistency, and maintainability take precedence. As Adam Smith observed in The Theory of Moral Sentiments, it is sentiment, sympathy, and moral imagination that ground human judgment. Agents lack these faculties; they do not weigh what ought to be done, nor imagine better futures. Instead, they excel at structured tasks such as those of the analytics engineer, where meaning must be located within metadata, patterns discerned from repetition, and decisions documented. In this domain, their reliability becomes their virtue, not their inventiveness. The result is not creative spontaneity but reliable synergy—an automation of the analytical drudgery humans find tedious yet vital, made possible through careful orchestration.
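
To make the relay concrete, here is a minimal sketch in plain Python of the enrichment pattern described above. The names (SemanticMemory, indexing_agent, visualization_agent) are hypothetical stand-ins of my own, not APIs from Ryoma or LangMem, and the keyword-based recall stands in for what would really be vector similarity search.

```python
# Illustrative only: how one agent's output becomes another agent's context.
from dataclasses import dataclass, field

@dataclass
class SemanticMemory:
    """Shared store: indexing output becomes retrievable context for later agents."""
    artifacts: list[str] = field(default_factory=list)

    def remember(self, note: str) -> None:
        self.artifacts.append(note)

    def recall(self, query: str) -> list[str]:
        # Naive keyword recall; a real system would use vector similarity.
        return [a for a in self.artifacts if query.lower() in a.lower()]

def indexing_agent(dataset_description: str, memory: SemanticMemory) -> None:
    # Stand-in for a Ryoma-style agent that ingests and indexes data.
    memory.remember(f"Indexed dataset: {dataset_description}")

def visualization_agent(request: str, memory: SemanticMemory) -> str:
    # The prompt passed to the charting step is enriched with recalled context.
    context = "; ".join(memory.recall("dataset"))
    return f"Chart request: {request}\nContext from memory: {context}"

memory = SemanticMemory()
indexing_agent("monthly sales CSV, 12 columns, 3 years of history", memory)
print(visualization_agent("plot revenue by quarter", memory))
```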

My own contribution to this evening, then, foregrounded LangGraph as the conductor of that ensemble. LangGraph’s appeal lies not in any single algorithm, but in the rigor with which it formalises shared memory, task delegation, and failure recovery. I sketched a scenario: a research‑assistant agent drafts a literature summary; a data‑cleaning agent sanitises incoming CSVs and drops them into BigQuery; a visualisation agent turns cleaned data into Superset dashboards—all mediated through a common LangGraph memory. The result is a depth and breadth of analysis no monolithic model could match.
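
As a sketch of how that scenario might be wired, the following uses LangGraph's StateGraph with a typed shared state and an in-memory checkpointer serving as the common memory. The graph wiring follows LangGraph's public API; the node bodies, the table name, and the superset.example URL are placeholders of my own rather than working BigQuery or Superset integrations.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

class PipelineState(TypedDict, total=False):
    topic: str
    literature_summary: str
    cleaned_table: str
    dashboard_url: str

def research_agent(state: PipelineState) -> PipelineState:
    # Would call an LLM to draft a literature summary for the topic.
    return {"literature_summary": f"Summary of prior work on {state['topic']}"}

def cleaning_agent(state: PipelineState) -> PipelineState:
    # Would sanitise incoming CSVs and load them into a warehouse table.
    return {"cleaned_table": "analytics.cleaned_events"}

def visualization_agent(state: PipelineState) -> PipelineState:
    # Would build a dashboard from the cleaned table, guided by the summary.
    return {"dashboard_url": f"https://superset.example/d/{state['cleaned_table']}"}

graph = StateGraph(PipelineState)
graph.add_node("research", research_agent)
graph.add_node("clean", cleaning_agent)
graph.add_node("visualize", visualization_agent)
graph.add_edge(START, "research")
graph.add_edge("research", "clean")
graph.add_edge("clean", "visualize")
graph.add_edge("visualize", END)

# The checkpointer gives the ensemble a shared, persistent memory across runs.
app = graph.compile(checkpointer=MemorySaver())
result = app.invoke({"topic": "agentic AI"},
                    config={"configurable": {"thread_id": "demo"}})
print(result["dashboard_url"])
```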

What continues to strike me is how this technical modularity mirrors effective human organisations. A lone genius coder is potent; a small, well‑composed team is resilient, creative, and perhaps antifragile. In this sense, agentic AI does not replace human teamwork; it extends it, supplying extra cognitive limbs where we tire or overlook. When designed thoughtfully, 1 + 1 doesn’t merely sum—it compounds. The collaboration of modular agents and human oversight can produce super-linear intelligence with respect to the intelligence of their inputs: (1 + 1)² or (1 + 1)³. That exponential margin—an expansion of what Stuart Kauffman called the adjacent possible—arises from synergy not only among agents but among us, the humans designing the agents, and it was the evening’s quiet star.

The Elephant in the Room Was the Woman Not in the Room

The room at the Duke of Sussex carried an unmistakable imbalance: far more men than women. That observation echoes the global data—women occupy barely 30% of AI‑skilled roles worldwide, a plateau the UN calls a “structural bottleneck” for the field (unwomen.org). Feminist philosopher Sandra Harding warns that knowledge produced from a single social vantage risks “systematic blindness” toward other realities (jstor.org). Donna Haraway’s dictum—“technology is not neutral; we’re inside of what we make”—serves as a companion caution: bias is baked in long before deployment (libquotes.com). Indigenous theorist Kim TallBear takes the critique further, noting that digital infrastructures often travel on “colonial rails”—laid by extractive assumptions and power asymmetries—unless design authority is genuinely and structurally shared (kimtallbear.substack.com). As Kate Crawford’s Atlas of AI and Brett Scott’s Cloudmoney argue, and as explored more recently in the widely discussed Empire of AI by Karen Hao, which investigates the geopolitical and ethical ramifications of AI development driven by corporate and state actors in a global race for dominance, we must be mindful of how AI infrastructures often recapitulate older logics of control—systems shaped less by democratic intention than by market imperatives and imperial legacies. Empire of AI highlights how a small cluster of powerful actors—mainly in Silicon Valley and allied national-security circles, though sometimes echoed in London pubs—designs the systems that interpret the world. This concentration of influence, the book argues, produces geopolitical imbalances and narrows the range of perspectives these systems can represent. These aren’t neutral technologies; they are political architectures. When we build agentic systems atop such architectures, we risk extending—not mitigating—the inequalities encoded therein. Even the ledgers at McKinsey show that gender‑diverse technical teams outperform homogeneous ones on both innovation and financial metrics (mckinsey.com). For agentic systems, whose worth lies in interpreting data and acting on those interpretations, epistemic breadth is not a courtesy—it is an engineering constraint. Broaden the builders and you broaden what the agents can reliably see, flag, or amplify; you expand, in Stuart Kauffman’s phrase, the adjacent possible our networks can explore.

Mind the Gap—then Bridge It!

I couldn’t let the gender imbalance that evening slip quietly into the fog of memory and thus partake in its normalization. Instead of chalking it up as yet another data point in a long‑known trend, I kept turning it over in my mind—it haunts me like a cheesy earworm of a melody that misses its cadence—lingering not by design but by omission, unresolved and yet insistent. I am uncomfortably aware that I too contribute to the wider cultural patterns that make such stark imbalances a recurring refrain. What, I ask myself—and you, dear reader—could I do differently next time to interrupt the habits of recruitment, visibility, or design that make absence appear ordinary? What actions could help to resolve the friction that keeps the imbalance in place? How can I, as one of the future organizers of such events, help here? What would it take to shift this pattern in the informal rituals of a meetup like ours? If you have some answers to these questions, please let us know in the comments!

There’s no panacea, but neither is the situation immutable. Research on conference dynamics shows that when women occupy visible speaking slots, subsequent female participation rises measurably—an effect documented from classroom interventions to large STEM symposia (swe.org, linkedin.com). That gives organisers a simple lever: extend deliberate invitations, aim for a balanced slate, and let demonstration do the persuading.

Mentorship offers another empirically grounded lever. Structured programmes boost women’s retention and promotion rates by 15–38 per cent (sawitnetwork.com, womentech.net). We hardly need corporate infrastructure to tap that benefit; a small “office‑hours” table after each talk—staffed by veterans willing to triage questions—could seed the habit locally.

Logistics, too, are tacitly decisive. Studies of academic and tech conferences show that onsite childcare or even modest stipends materially increase women’s attendance (pnas.org, meetingstoday.com). Early‑evening scheduling and clear accessibility notes are low‑cost design choices that signal forethought rather than afterthought.

Hybrid access can widen the aperture further: caregiving and time‑zone constraints fall unevenly, and asynchronous recordings paired with a follow‑up Q&A offer a bridge (insidehighered.com). Finally, even the call‑for‑presentations framing matters. Women are more inclined to submit when prompts emphasise collaborative problem‑solving over lone‑genius theatrics—echoing findings from grant‑submission studies that show lower female PI rates despite equal success probabilities (pmc.ncbi.nlm.nih.gov, pmc.ncbi.nlm.nih.gov). Recasting our next meetup as a “design clinic”—five‑slide idea sparks, not polished decks—could lower the activation energy for new contributors.

None of this is peripheral to engineering. As Adam Smith observed, fellow‑feeling enlarges not only our moral world but our perceptual one; we literally see more when we listen beyond ourselves. Agentic systems inherit their epistemic horizons from their makers, at least in part. Broaden the makers, and the agents will see—and serve—more of the world. That is not a detour from good engineering; it is its pre‑condition.

Principled Foundations: Cognitive Science and the AI Canon

LangGraph took the stage not as a trendy overlay, but as a rigorously constructed framework grounded in first-principles engineering, synthesizing insights from cognitive science and the classical AI canon. The foundations were laid in Russell and Norvig’s Artificial Intelligence: A Modern Approach (Russell & Norvig, 2020), wherein an intelligent agent is understood through the triadic schema of perception, reasoning, and action. LangGraph operationalizes this model with fidelity: its architecture parses memory into transient and persistent stores, employs dynamic focus controllers to regulate attention, and cycles through iterative deliberation routines—architectural decisions that mirror computational models from cognitive psychology and the discipline of connectionism.
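
As a toy illustration of that triadic schema (pedagogy, not LangGraph's internals), the loop below keeps one transient store that is flushed at the end of each episode and one persistent store that survives across episodes:

```python
class SimpleAgent:
    def __init__(self):
        self.working_memory: list[str] = []    # transient: cleared each episode
        self.long_term_memory: list[str] = []  # persistent: survives across episodes

    def perceive(self, observation: str) -> None:
        self.working_memory.append(observation)

    def reason(self) -> str:
        # Deliberate over both stores; here, pick the latest observation and
        # note whether something similar has been acted on before.
        latest = self.working_memory[-1]
        seen_before = any(latest in m for m in self.long_term_memory)
        return f"act_on:{latest}" + (" (familiar)" if seen_before else " (novel)")

    def act(self, decision: str) -> str:
        self.long_term_memory.append(decision)
        self.working_memory.clear()  # end of episode: transient store is flushed
        return f"executed {decision}"

agent = SimpleAgent()
agent.perceive("new CSV arrived in the landing bucket")
print(agent.act(agent.reason()))
```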

Connectionism, broadly construed, refers to a class of cognitive models inspired by neural computation. These systems represent knowledge not through discrete symbols but via distributed activation patterns across networks of interconnected nodes, learning by iteratively adjusting synaptic weights (Rumelhart et al., 1986). This paradigm undergirds the modern machine learning landscape, and particularly the architecture of transformers—the deep learning models that serve as the backbone of today’s large language models (LLMs), including GPT‑4 and DeepSeek’s R1 (Vaswani et al., 2017). Yet LangGraph’s ambitions transcend raw language modeling: it fuses connectionist machinery with modular symbolic reasoning and graph-based orchestration, achieving a hybrid computational substrate. In this sense, LangGraph enacts the vision articulated in Artificial Intelligence: A Modern Approach—not merely training models to predict tokens, but coordinating autonomous agents capable of structured interaction within dynamic environments.

This hybridism is not incidental. LangGraph offers a lightweight yet coherent theory-of-mind substrate, aiming to outgrow the fragile charms of prompt engineering while avoiding the rigidity of classical GOFAI pipelines. The metaphor that emerged in discussion was architectural: LangGraph feels less like a patchwork prototype and more like principled construction—carefully laid, modular, extensible.

This engineering ethos aligns with the teachings of Andrew Ng, whose pedagogy emphasizes abstraction, modularity, and rigorous validation (Ng, 2021). Ng’s often-cited insistence that AI success is more about reliable execution than algorithmic novelty seems well embodied in LangGraph’s DNA. Each component in the system—whether a memory adapter, a reactive tool, or a swarm orchestrator—has clearly bounded responsibilities, coherent failure semantics, and composable interfaces.

A salient architectural decision underscores this ethos. Our team originally explored a CrewAI structure, with fixed agent roles à la starship bridge crew—captain, navigator, specialist. While evocative, it proves brittle under scale and variation. On the recommendation of William Fu-Hinthorn, my design pivoted to a Swarm model, wherein agents are modular peers that coordinate via shared memory and local autonomy. This model confers robustness: new agents can enter, old ones can depart, and the system reorganizes rather than fractures—though not without consequence. Because each agent is highly specialized, its presence shapes the swarm’s capabilities. The removal of one agent, or the introduction of another, alters the system’s skill set and the workflows it can sustain. There is no central hierarchy; agents coordinate laterally. And while the system does not collapse in the absence of any one component, its functional profile necessarily shifts in response. Cognitively, this resonates with theories of the mind as a swarm of semi-autonomous cognitive faculties—visual, linguistic, motoric—cohering through attentional synchrony and episodic memory. In particular, the conceptualization draws clear parallels with Marvin Minsky’s “Society of Mind” framework, which posits that what we call intelligence emerges from the cooperative behavior of many relatively simple processes or “agents,” each specialized for distinct cognitive functions (Minsky, 1986). LangGraph instantiates this swarm metaphor in software, building agentic ensembles that, like Minsky’s cognitive agents, self-organize and integrate their contributions to form coherent, adaptive behavior.
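
To make the organizational point executable, here is a purely illustrative sketch of the swarm idea: peers register capabilities against a shared memory, coordination is lateral, and the ensemble degrades gracefully rather than fracturing when a peer departs. None of these names come from CrewAI or langgraph-swarm; they are assumptions chosen only to show the shape of the design.

```python
from typing import Callable

class Swarm:
    def __init__(self):
        self.peers: dict[str, Callable[[dict], dict]] = {}
        self.shared_memory: dict = {}

    def join(self, capability: str, agent: Callable[[dict], dict]) -> None:
        self.peers[capability] = agent

    def leave(self, capability: str) -> None:
        self.peers.pop(capability, None)  # the swarm reorganises; it does not fracture

    def handle(self, capability: str, task: dict) -> dict:
        agent = self.peers.get(capability)
        if agent is None:
            return {"status": "unhandled", "reason": f"no peer offers '{capability}'"}
        update = agent({**task, **self.shared_memory})
        self.shared_memory.update(update)  # lateral coordination via shared state
        return update

swarm = Swarm()
swarm.join("summarise", lambda t: {"summary": f"summary of {t['doc']}"})
swarm.join("chart", lambda t: {"chart": f"chart built from {t.get('summary', t)}"})
print(swarm.handle("summarise", {"doc": "q3_report.csv"}))
print(swarm.handle("chart", {}))
swarm.leave("chart")
print(swarm.handle("chart", {}))  # functional profile shifts, but the system still runs
```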

The central question of my presentation was meant to bring the architecture vividly to life: could a research-assistant agent, a data-cleaning agent, and a visualization agent all collaborate via a shared semantic memory? The answer is a confident yes—and not only that, but such coordination is hypothesized to bring about super-linear returns. The emergent intelligence is not magic; it is what should happen when abstraction and architecture are theoretically coherent and operationally sound.

In an industry captivated by the mantra “move fast and break things,” LangGraph’s design philosophy feels refreshingly contrarian. It builds not for the demo, but for the decade. Cathedrals, after all, are not assembled in sprints—and yet, they endure (unlike a speedily built burger shack).

Agents to Empower, Not Enslave: The Vision

If LangGraph’s architecture is its skeleton, ethics is the marrow that gives it life and direction. This is the premise which undergirds my work and from which I spoke: that agentic systems should extend human capabilities, not undercut them. Biologist E. O. Wilson’s famous diagnosis remains apt—“We have Paleolithic emotions, medieval institutions, and godlike technology” (Wilson, 2012). Our brains, evolved for Pleistocene foraging, now contend with digital torrents we were never shaped to manage. LangGraph offers a response: AI agents not as decision-makers, but as cognitive exoskeletons—tools that hold memory, offload analysis, and help us coordinate at a scale that no single brain can.

This orientation draws consciously on Amartya Sen’s capabilities approach. Poverty, Sen reminds us, is not merely lack of income but a shortfall in real freedoms—the capacity “to lead the kind of lives we have reason to value” (Sen, 1999). By that metric, agentic AI succeeds only if it enlarges the space of possible doings and beings for humans: better analyses, more lucid decisions, gentler cognitive load. Put differently, our covert slogan might be “AI for capability expansion.”

A complementary thread reaches back to Adam Smith’s Theory of Moral Sentiments. Smith argued that society is stitched together not by pure self‑interest but by sympathy, a reciprocal moral imagination (Smith, 1759). Many readers encounter Smith primarily through the oft‑quoted butcher‑brewer‑baker passage in The Wealth of Nations, interpreting it as a hymn to naked self‑interest. That snippet, however, is descriptive rather than prescriptive and it only pertains to highly competitive sectors in the economy: Smith is noting how division of labour aligns incentives, not celebrating selfishness as a moral compass. The Theory of Moral Sentiments makes the corrective explicit—empathy precedes efficiency.

Designing AI agents therefore entails more than optimization; it demands alignment with human values—fairness, transparency, beneficence—qualities we demand of agents precisely because they are not human. Ironically, while we often expect an AI agent to be able to explain its choices and log its reasoning process, we cannot make such demands of other humans. A person’s motivations are murky, multi-layered, and often opaque—even to themselves. In this sense, our insistence on transparency from agents reveals more about accountability and blame attribution than it does about any inherent virtue of disclosure. If an AI misleads us, it is often unclear who, if anyone, bears responsibility. This ambiguity unsettles us, for better or worse, and in response we seek exhaustive transparency—more than we could possibly expect from one another. It is a paradox of our age that we demand greater legibility from machines than from our peers.

Albert O. Hirschman’s framework of exit and voice offers a compass. Hirschman observed that healthy systems let participants either leave (exit) or influence the rules (voice) when quality declines (Hirschman, 1970). LangGraph aspires to enable these levers: while not all code is open-source, its architecture supports modular adaptation, and the system is deliberately designed to be non-extractive—users are not locked in. Voice is built into the philosophy: feedback is not just welcomed, but structurally anticipated. A platform that empowers users—by allowing opt-out, adaptation, and input—edges us closer to what Hirschman might call a “virtuous equilibrium.”

Indigenous perspectives further broaden the ethical canvas. Activist John Trudell urged that technology serve “the love of the people and the Earth,” rejecting tools that alienate communities or despoil environments (Trudell, 1988). Kim TallBear warns that tech often rides “colonial rails” unless governance is shared (TallBear, 2021). Such objectives steer the project toward applications that heal rather than exploit—for example, agents that surface climate risks or democratize technical literacy.

Philosopher Luciano Floridi adds an ecological point: the digital infosphere is an environment to be stewarded, not a landfill to be filled with opaque automation (Floridi, 2014). The question for every agent and the dance they do collectively becomes: does it enable or constrain human flourishing? LangGraph’s design bias—transparent memory, inspectable reasoning, modular failure recovery—leans decisively toward enablement.

Finally, I always like to go back to Jürgen Habermas’s discourse ethics. In an ideal speech situation, “no force prevails but that of the better argument” (Habermas, 1984). Applied to the collaborative systems in which humans and agents jointly operate, this principle implies the design of workflows and interfaces that enable participants—human or artificial—to justify actions, surface reasoning, and engage in collective deliberation. In such ecosystems, affected parties should be able to question, refine, or veto proposals, aiming for alignment on shared actions. This aspiration echoes the vision of a ‘Habermas machine’—a deliberative scaffold where diverse perspectives, human and computational, converge on decisions grounded in mutual understanding and real concerns.

Closing Thoughts

I left the Duke of Sussex buoyed by discovery (Cursor), fraternity (collaborators), and something harder to name: the sense that principled architecture and moral imagination need not compete. If agentic AI is to fulfill its promise, it will grow in pub rooms and GitHub issues—where candor tempers hype, and ethics anchors design.

I was pleased—though not entirely expecting—to see my call for positive ethics find good traction: not ethics as audit trail or compliance layer, but as generative architecture for extending human freedom and capability. It was gratifying to see that point land; it’s rarely made in technical gatherings, so it’s hard to anticipate how it will be received.

The evening, then, was, in the best sense, grounded. People met not to pitch but to share, not to posture but to explore. If agentic AI is to realize its collective promise, it will do so through such engagements: real conversations, real collaborations, and a continued insistence that technical elegance and ethical ambition are not competing goals, but mutually reinforcing constraints on what is worth building.

References

Crawford, K. (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press.

Floridi, L. (2014) The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford: Oxford University Press.

Harding, S. (1993) ‘Rethinking Standpoint Epistemology: What is “Strong Objectivity”?’, in Alcoff, L. and Potter, E. (eds.) Feminist Epistemologies. New York: Routledge, pp. 49–82.

Haraway, D. J. (1991) ‘A Cyborg Manifesto’, in Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge, pp. 149–181.

Hao, K. (2025) Empire of AI: Inside the Reckless Race for Total Domination. London: Allen Lane.

Habermas, J. (1984) The Theory of Communicative Action, Vol. 1: Reason and the Rationalization of Society. Boston: Beacon Press.

Hirschman, A. O. (1970) Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, MA: Harvard University Press.

Kauffman, S. A. (1995) At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. Oxford: Oxford University Press.

McKinsey & Company (2020) Diversity Wins: How Inclusion Matters. New York: McKinsey Global Institute.

Minsky, M. (1986) The Society of Mind. New York: Simon & Schuster.

Ng, A. (2021) Machine Learning Yearning. Palo Alto: deeplearning.ai (ebook).

Rumelhart, D. E., McClelland, J. L. and PDP Research Group (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. Cambridge, MA: MIT Press.

Russell, S. J. and Norvig, P. (2020) Artificial Intelligence: A Modern Approach. 4th edn. Boston: Pearson.

Scott, B. (2022) Cloudmoney: Cash, Cards, Crypto and the War for Our Wallets. London: Bodley Head.

Sen, A. (1999) Development as Freedom. Oxford: Oxford University Press.

Smith, A. (1759) The Theory of Moral Sentiments. London: A. Millar.

TallBear, K. (2021) ‘Close Encounters of the Colonial Kind’, The Critical Polyamorist (Substack), 5 July. Available at: https://kimtallbear.substack.com/p/close-encounters-of-the-colonial-kind (Accessed 10 June 2025).

Trudell, J. (1988) ‘Of People and Earth’, keynote address, Seventh International Treaty Council Conference, Rapid City, SD, 12 June.

UN Women (2023) Gender Equality and Artificial Intelligence: Structural Bottlenecks in the AI Workforce. New York: United Nations Entity for Gender Equality and the Empowerment of Women.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł. and Polosukhin, I. (2017) ‘Attention Is All You Need’, Advances in Neural Information Processing Systems, 30, pp. 5998–6008.

Wilson, E. O. (2012) The Social Conquest of Earth. New York: Liveright.
