
Human Purpose, Collective Intelligence,
Leadership Development

Year: 2025

  • The UK Government’s AI Playbook: Progress, Power, and Purpose

    The UK Government’s AI Playbook for 2025 (UK Government, 2025) aspires to make Britain a global leader in artificial intelligence. Although it commendably emphasizes innovation, expanded compute capacity, and AI integration in public services, the document raises questions about whether it fully aligns with broader societal needs. Viewed through the lenses of ethics, equity, and governance, the playbook, in my view, both excels and stumbles in addressing the ethical, social, and political implications of AI.


    Compute Capacity: Efficiency vs. Sustainability

    The playbook envisions a twentyfold increase in compute capacity by 2030, in part through AI Growth Zones (UK Government, 2025). This emphasis on scaling up infrastructure reflects the rising computational demands of advanced AI models. Yet it risks overshadowing the benefits of algorithmic ingenuity. DeepSeek’s R1 model, for example, achieves near-reasoning parity with top-tier models at a fraction of the computational and carbon cost (DeepSeek, 2024), as I have argued previously. This suggests that brute force is not the sole path to progress.

    Luciano Floridi’s concept of environmental stewardship points to the importance of developing technology responsibly (Floridi, 2014). Although the playbook mentions renewable energy, it lacks firm commitments to carbon neutrality, and it fails to recognize rival uses for that energy; even renewable energy is not free. Without enforceable sustainability targets, the rapid expansion of data centers may undermine ecological well-being. This concern resonates with Amartya Sen’s focus on removing obstacles to human flourishing (Sen, 1999): if AI is meant to serve society over the long term, it should do so without depleting environmental resources. Indeed, AI can and should help to enhance biodiversity and to decarbonize our economies.


    Innovation for Public Good: Missions Over Markets

    While the playbook frames innovation as a cornerstone of national strategy, it falls short of setting specific missions that address urgent societal challenges. Mariana Mazzucato argues that invention for its own sake often enriches existing power structures instead of tackling critical issues like climate adaptation, public health, and digital inclusion (Mazzucato, 2018). Without clearly defined missions, even groundbreaking discoveries can deepen inequities rather than reduce them.

    The proposed £14 billion in private-sector data centers underscores a reliance on corporate partnerships, echoing Shoshana Zuboff’s caution about surveillance capitalism (Zuboff, 2019). These collaborations might prioritize profit unless they include clear standards of accountability and shared ownership. Building in public stakes, as Mazzucato recommends, could align AI development more closely with social goals. Likewise, participatory governance frameworks—anchored in Floridi’s ethics-by-design—would ensure that data usage reflects collective values, not just corporate interests (Floridi, 2014).


    Public Services and Democratic Participation: Empowerment or Alienation?

    Plans to integrate AI into public services—such as NHS diagnostics and citizen consultations—are among the playbook’s most promising proposals. Yet they merit caution. For instance, while AI-powered healthcare diagnostics could expand access, digital exclusion persists without sufficient broadband coverage or user training. Following Sen (1999), true progress lies in increasing the range of freedoms that people can exercise, and this often requires more than technological fixes alone.

    Floridi’s concept of the infosphere reminds us that AI restructures how people interact and make decisions (Floridi, 2014). Tools such as the i.AI Consultation Analysis Tool risk reducing nuanced human input to algorithmically processed data, potentially alienating users from democratic processes. A participatory design approach would help prevent such alienation by incorporating public input from the outset and preserving context within each consultation (our work at Towards People goes in that direction).


    Equity and Inclusion: Bridging Gaps or Reinforcing Barriers?

    Although the playbook mentions upskilling programs like Skills England, it fails to address the systemic forces that marginalize certain groups in an AI-driven economy. Technical training alone might not suffice. Pairing skill-building with community-based AI literacy initiatives could foster trust while mitigating bias in AI systems. Meanwhile, the document’s brief nod to fairness in AI regulation overlooks deeper biases—rooted in datasets and algorithms—that perpetuate discrimination. Zuboff (2019) warns that opaque processes can exclude minority voices, particularly when synthetic data omits their concerns. Regular audits and bias-mitigation frameworks would bolster equity and align with the pursuit of justice; yes, we should still care about that.


    Strengths Worth Celebrating

    Despite these gaps, the playbook contains laudable goals. Its commitment to sovereign AI capabilities demonstrates an effort to reduce dependence on external technology providers, promoting resilience (UK Government, 2025). Similarly, the proposal to incorporate AI in public services—if thoughtfully managed—could enhance service delivery and public well-being. With the right checks and balances, these initiatives can genuinely benefit society.


    Conclusion: Toward a Holistic Vision

    If the UK aspires to lead in AI, the playbook must move beyond infrastructure and economic growth to incorporate ethics, democratic engagement, and social equity. Emphasizing ethics-by-design, participatory governance, and inclusive empowerment would position AI to expand freedoms rather than reinforce existing barriers. Sen’s work remains a fitting guide: “Development consists of the removal of various types of unfreedoms that leave people with little choice and little opportunity of exercising their reasoned agency” (Sen, 1999). By centering AI policies on removing these unfreedoms, the UK can ensure that technological advancement aligns with the broader project of human flourishing.


    References

    DeepSeek, 2024. “DeepSeek R1 Model Achieves Near Reasoning Parity with Leading Models.” Available at: https://www.deepseek.com/r1-model [Accessed 11 February 2025].

    Floridi, L., 2014. The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford University Press.

    Mazzucato, M., 2018. The Value of Everything: Making and Taking in the Global Economy. Penguin Books.

    Sen, A., 1999. Development as Freedom. Oxford University Press.

    UK Government, 2025. AI Playbook for the UK Government. Available at: https://assets.publishing.service.gov.uk/media/67a4cdea8259d52732f6adeb/AI_Playbook_for_the_UK_Government__PDF_.pdf [Accessed 11 February 2025].

    Zuboff, S., 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books.

  • From Carbon Footprints to Sensitive Data—How Diversity in Large Language Models Elevates Ethics and Performance through Collective Intelligence

    Humanity has long grappled with the question of how best to combine many minds into one coherent whole, whether through bustling marketplaces or grand assemblies of knowledge. Today we find ourselves at a watershed where that same pursuit of unity is taking shape in ensembles of artificial minds, large language models in particular. In the spirit of Aristotle’s maxim that “the whole is greater than the sum of its parts,” we write a new chapter: ensembles of multiple specialized models, each carrying its own fragment of insight, yet collectively amounting to more than any monolithic solution could achieve. In that sense, we step closer to Teilhard de Chardin’s vision of a “noosphere,” a shared field of human thought, only now augmented by a chorus of machine intelligences (Teilhard de Chardin, 1959).


    1. Collective Intelligence: Lessons from Humans, Applications for AI

    Thomas Malone and Michael Bernstein remind us that collective intelligence emerges when groups “act collectively in ways that seem intelligent” (Malone & Bernstein, 2024). Far from being a mere quirk of social behavior, this phenomenon draws on time-honored principles:

    1. Diversity of Expertise: Mirroring John Stuart Mill’s argument that freedom of thought fuels intellectual progress (Mill, 1859), specialized models can enrich AI ecosystems. Qwen2.5-Max excels in multilingual text, while DeepSeek-R1 brings cost-efficient reasoning—together forming a robust “team,” much like how varied skill sets in human groups enhance overall performance.
    2. Division of Labor: Just as Adam Smith championed the division of labor to optimize productivity, AI architectures delegate tasks to the model best suited for them. Tools like LangGraph orchestrate these models in real time, ensuring that the right expertise is summoned at the right moment.

    Picture a climate research scenario: Qwen2.5-Max translates multilingual emission reports, DeepSeek-R1 simulates future carbon footprints, and a visual model (e.g., Stable Diffusion) generates compelling graphics. By combining these capabilities, we circumvent the bloat (and carbon emissions) of giant, one-size-fits-all models—realizing more efficient, collaborative intelligence.
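    The division of labor in this scenario can be sketched with plain stand-in functions in place of the real models; the routing keys and handler names below are illustrative, and a production pipeline might hand this dispatch to an orchestrator such as LangGraph.

```python
# Sketch of task routing across specialized models. Each handler is a stub
# standing in for a real model; only the dispatch pattern is the point.

def translate_report(text: str) -> str:
    # Placeholder for a multilingual model such as Qwen2.5-Max.
    return f"[translated] {text}"

def simulate_footprint(text: str) -> str:
    # Placeholder for a reasoning model such as DeepSeek-R1.
    return f"[simulated] {text}"

def render_chart(text: str) -> str:
    # Placeholder for an image model such as Stable Diffusion.
    return f"[chart] {text}"

# Division of labor: each task type is delegated to the model best suited to it.
ROUTES = {
    "translate": translate_report,
    "simulate": simulate_footprint,
    "visualize": render_chart,
}

def run_pipeline(tasks):
    return [ROUTES[kind](payload) for kind, payload in tasks]

results = run_pipeline([
    ("translate", "Rapport d'émissions 2024"),
    ("simulate", "carbon trajectory to 2030"),
    ("visualize", "emissions by sector"),
])
```

The design choice is the mapping itself: because each handler is small and replaceable, no single oversized model has to carry every task.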


    2. Cost & Carbon Efficiency: Beyond the Scaling Obsession

    Hans Jonas (1979) urged us to approach technology with caution, lest we mortgage our planet’s future. Today’s AI industry, enthralled by the race for ever-larger models, invites precisely the ecological perils Jonas warned against: ballooning compute costs, growing data-center footprints, and proprietary “Stargate” projects fueled by staggering resources.

    A collective antidote emerges in the form of smaller, specialized models. By activating only context-relevant parameters (as DeepSeek-R1 does via Mixture of Experts), we not only reduce computational overhead but also diminish the associated carbon impact. Qwen2.5-Max’s open-source ethos, meanwhile, fosters broader collaboration and lowers barriers to entry, allowing diverse research communities, from startups to universities, to shape AI’s future without surrendering to entrenched power structures.
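    The gating idea behind Mixture of Experts can be shown with a toy example: only the top-k scoring “experts” run for a given input, so most parameters stay idle. The experts and scores below are fabricated for illustration; real MoE routers are learned, not hand-written.

```python
# Toy Mixture-of-Experts routing: select the k highest-scoring experts and
# run only those, leaving the rest of the network's parameters inactive.

def route(scores, k=2):
    # Rank experts by gate score and keep the top k.
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

def moe_forward(x, experts, scores, k=2):
    active = route(scores, k)
    # Only the selected experts compute; the others cost nothing this step.
    return sum(experts[name](x) for name in active), active

experts = {
    "math": lambda x: x * 2,
    "code": lambda x: x + 10,
    "chat": lambda x: x - 1,
}
scores = {"math": 0.7, "code": 0.2, "chat": 0.1}

y, active = moe_forward(5, experts, scores, k=2)
# With these scores, the "chat" expert never runs: that skipped computation
# is the source of the cost and carbon savings described above.
```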


    3. Sensitive Data: Privacy Through Self-Hosted Diversity

    Michel Foucault (1975) cautioned that centralized systems often drift into oppressive surveillance. In AI, this concern materializes when organizations hand over sensitive data to opaque external APIs. A more ethical path lies in self-hosted, specialized models. Here, the pillars of privacy and autonomy stand firm:

    • Local Deployment: Running Llama 3 or BioBERT on in-house servers safeguards patient records, financial transactions, or other confidential data.
    • Hybrid Workflows: When faced with non-sensitive tasks, cost-efficient external APIs can be tapped; for sensitive tasks, a local model steps in.

    Such an arrangement aligns with Emmanuel Levinas’s moral philosophy, prioritizing the dignity and privacy of individuals (Levinas, 1969). A healthcare provider, for instance, might integrate a self-hosted clinical model for patient data anonymization and rely on cloud-based computation for less critical analyses. The result is a balanced interplay of trust, efficiency, and ethical responsibility.
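    A hedged sketch of such a hybrid workflow follows, with stub functions standing in for a self-hosted model and an external API. The keyword check is a deliberately crude placeholder for a real sensitivity classifier and policy layer; all names are illustrative.

```python
# Hybrid routing sketch: sensitive requests stay on a self-hosted model,
# everything else may use a cost-efficient external API.

SENSITIVE_MARKERS = {"patient", "diagnosis", "account", "salary"}

def is_sensitive(text: str) -> bool:
    # A real system would use a trained classifier and policy rules,
    # not keyword matching.
    return any(marker in text.lower() for marker in SENSITIVE_MARKERS)

def local_model(text):
    # Stand-in for, e.g., a self-hosted Llama 3 or BioBERT deployment.
    return ("local", text)

def cloud_api(text):
    # Stand-in for an external API call.
    return ("cloud", text)

def answer(text):
    # Confidential data never leaves the in-house server.
    return local_model(text) if is_sensitive(text) else cloud_api(text)

backend, _ = answer("Summarize this patient record")
```

The healthcare example above maps directly onto this split: anonymization of patient data runs through the local branch, while less critical analyses take the cloud branch.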


    4. Geopolitical & Cultural Resilience

    Reliance on models from a single country or corporation risks embedding cultural biases that replicate the hegemony Kant (1795) so vehemently questioned. By contrast, open-source initiatives like France’s Mistral or the UAE’s Falcon allow local developers to tailor AI systems to linguistic nuances and social norms. This approach echoes Amartya Sen’s (1999) belief that technologies must expand real freedoms, not merely transplant foreign paradigms into local contexts. Fine-tuning through LoRA (Low-Rank Adaptation) further tailors these models, ensuring that no single vantage point dictates the conversation.
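    The LoRA idea mentioned above can be shown numerically: instead of updating a full weight matrix W, one trains a low-rank product B·A and adds it, so the adapted weights are W' = W + BA. The tiny matrices below are purely illustrative; real adapters sit inside transformer layers with far larger dimensions.

```python
# Minimal numeric sketch of Low-Rank Adaptation (LoRA): the frozen base
# matrix W is adapted by adding a product of two thin matrices, B (3x1)
# and A (1x3), so only 6 numbers are trained instead of 9.

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add(X, Y):
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(X, Y)]

W = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # frozen 3x3 base weights
B = [[1], [0], [1]]                     # 3x1, rank r = 1 (trainable)
A = [[2, 0, 0]]                         # 1x3 (trainable)

delta = matmul(B, A)                    # full 3x3 update from 6 parameters
W_adapted = add(W, delta)
# Trainable parameters: 3*1 + 1*3 = 6, versus 9 for full fine-tuning;
# the gap grows quadratically with layer size, which is why LoRA makes
# local, culture-specific fine-tuning affordable.
```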


    5. The Human-AI Symbiosis

    Even as AI models excel in bounded tasks, human judgment remains a lighthouse guiding broader moral and strategic horizons. Hannah Arendt’s (1958) celebration of action informed by reflective thought resonates here: we depend on human insight to interpret results, set objectives, and mitigate biases. Rather than supplanting human creativity, AI can complement it—together forging a potent hybrid of reason and ingenuity.

    Malone’s collective intelligence framework (Malone & Bernstein, 2024) can inform a vision of a dance between AI agents and human collaborators, where each movement enhances the other. From brainstorming sessions to policy decisions, such symbiosis transcends the sum of its parts, moving us closer to a robust, pluralistic future for technology.


    Conclusion: Toward a Collective Future

    At this turning point, we have a choice: pursue more monolithic, carbon-hungry models, or embrace a tapestry of diverse, specialized systems that lighten our ecological load while enriching our ethical stance. This approach fosters sustainability, privacy, and global inclusivity—foundations for an AI ecosystem that truly serves humanity. In Martin Buber’s (1923) terms, we seek an “I–Thou” relationship with our machines, one grounded in reciprocity and respect rather than domination.

    Call to Action
    Explore how open-source communities (Hugging Face, Qwen2.5-Max, etc.) and orchestration tools like LangGraph can weave specialized models into your existing workflows. The question isn’t merely whether AI can do more—it’s how AI, in diverse and orchestrated forms, can uphold our ethical commitments while illuminating new frontiers of collaborative intelligence.


    References

    Arendt, H. (1958) The Human Condition. Chicago: University of Chicago Press.
    Buber, M. (1923) I and Thou. Edinburgh: T&T Clark.
    Foucault, M. (1975) Discipline and Punish: The Birth of the Prison. New York: Vintage Books.
    Jonas, H. (1979) The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago: University of Chicago Press.
    Kant, I. (1795) Perpetual Peace: A Philosophical Sketch. Reprinted in Kant: Political Writings, ed. H.S. Reiss. Cambridge: Cambridge University Press, 1970.
    Levinas, E. (1969) Totality and Infinity: An Essay on Exteriority. Pittsburgh: Duquesne University Press.
    Malone, T.W. & Bernstein, M.S. (2024) Collective Intelligence Handbook. MIT Press. Available at: [Handbook Draft].
    Mill, J.S. (1859) On Liberty. London: John W. Parker and Son.
    Sen, A. (1999) Development as Freedom. Oxford: Oxford University Press.
    Teilhard de Chardin, P. (1959) The Phenomenon of Man. New York: Harper & Row.

  • Toward a Habermas Machine: Philosophical Grounding and Technical Architecture

    Philosophers from Socrates to Bertrand Russell have underscored that genuine agreement arises not from superficial accord but from reasoned dialogue that harmonizes diverse viewpoints. Jürgen Habermas’s theory of communicative action refines this principle into a vision of discourse aimed at consensus through rational argument. Recently, a paper in Science by Michael Henry Tessler et al. (2024) (“AI can help humans find common ground in democratic deliberation”) echoes this idea by describing a “Habermas Machine”—an AI mediator capable of synthesizing individual opinions and critiques to foster mutual understanding. While their study focuses on social and political issues, the underlying concepts extend readily to organizational contexts and knowledge management.

    In our own effort to realize a Habermas-inspired mediator, we employ an architecture that leverages BigQuery as a data warehouse built on a Data Vault schema, managed and orchestrated with dbt (Data Build Tool). The system ingests communications from platforms such as Slack and Gmail, breaking each message into paragraph-level segments for individual vector embeddings. These embeddings are then stored in BigQuery, forming a semantic layer that augments traditional relational queries with more nuanced linguistic searches. In the below diagram, you can see how messages flow from raw capture to an enriched, queryable knowledge graph.

    [Diagram: message flow from raw capture, through paragraph-level embedding, to an enriched, queryable knowledge graph]
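    The ingestion step described above can be sketched in a few lines: each message is split into paragraph-level segments and each segment gets its own vector, yielding rows ready for a warehouse table. The hash-based "embedding" below is a stand-in for a real embedding model, and the column names are illustrative rather than our actual Data Vault schema.

```python
# Sketch of paragraph-level segmentation and embedding for warehouse storage.

import hashlib

def fake_embedding(text: str, dims: int = 4):
    # Deterministic stand-in for a real embedding model: derive a few
    # floats from a hash of the text.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dims]]

def segment_message(message_id: str, body: str):
    # Split on blank lines into paragraph-level segments, one row each.
    paragraphs = [p.strip() for p in body.split("\n\n") if p.strip()]
    return [
        {
            "message_id": message_id,
            "segment_index": i,
            "text": p,
            "embedding": fake_embedding(p),
        }
        for i, p in enumerate(paragraphs)
    ]

rows = segment_message("slack-123", "First point.\n\nSecond point.")
```

Storing one row per paragraph, rather than per message, is what lets later semantic queries land on the specific passage where a viewpoint is expressed.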

    This structural framework, however, only solves part of the puzzle. We then introduce LangGraph agents, enhanced by tooling such as LangSmith, to marry textual and structural data. These agents can retrieve messages based not only on metadata (author, timestamp) but also on thematic or conceptual overlap, enabling them to detect undercurrents of agreement or contradiction in vast message sets. In a second diagram, below, you can see how agent-mediated queries integrate semantic vectors, user roles, and conversation timelines to pinpoint salient insights or latent conflicts that humans might overlook.

    [Diagram: agent-mediated queries integrating semantic vectors, user roles, and conversation timelines to surface salient insights and latent conflicts]
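    A toy version of this combined retrieval: filter on metadata first (author, timestamp), then rank the survivors by cosine similarity to a query vector. The records and two-dimensional vectors are fabricated for illustration; in the real system this runs as a query against the warehouse's semantic layer.

```python
# Combined metadata + semantic retrieval over a tiny in-memory corpus.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

MESSAGES = [
    {"author": "ana", "ts": 1, "text": "budget concerns", "vec": [1.0, 0.0]},
    {"author": "ben", "ts": 2, "text": "budget agreement", "vec": [0.9, 0.1]},
    {"author": "ana", "ts": 3, "text": "offsite planning", "vec": [0.0, 1.0]},
]

def retrieve(query_vec, author=None, top_k=2):
    # Metadata filter first, semantic ranking second.
    pool = [m for m in MESSAGES if author is None or m["author"] == author]
    pool.sort(key=lambda m: cosine(query_vec, m["vec"]), reverse=True)
    return pool[:top_k]

hits = retrieve([1.0, 0.0], author="ana", top_k=1)
```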

    The philosophical impetus behind this design lies in extending what Habermas posits for face-to-face discourse—an “ideal speech situation”—to distributed, digitally mediated communication. Like the “Habermas Machine” described by Tessler et al., our system provides prompts and syntheses that help participants recognize areas of accord and legitimize points of dissent, rather than imposing a solution from on high. A final diagram, below, depicts a feedback loop, where humans validate or refute AI-suggested statements, gradually converging on well-supported, collectively endorsed conclusions.

    [Diagram: feedback loop in which humans validate or refute AI-suggested statements, converging on collectively endorsed conclusions]

    Ultimately, these tools do not replace human judgment; they aspire to enhance it. By combining robust data engineering on BigQuery with sophisticated natural-language reasoning via LangGraph agents, we strive to ground the ideal of rational consensus in a practical, scalable system. Inspired by recent research and Habermasian philosophy, we envision AI as a diplomatic catalyst—one that quietly structures and clarifies discourse, guiding us toward common ground without diluting the richness of individual perspectives.

  • TP’s NICER Habermas Machine

    We are developing the NICER Habermas machine, and we think it is rather cool.


    I guess I am somewhat of a negotiation professional and proud to be an Alumnus of the Harvard Strategic Negotiations programme created by James K. (“Jim”) Sebenius, the Gordon Donaldson Professor at Harvard Business School. Following that experience, as a commercial exec in the media world I became reasonably fluent in the process of creating multi-party agreements.

    I transitioned from being a commercial exec to lecturing in Sales and Negotiation at Kingston Business School, Kingston University in London. This is where I met our (genius) Director of Intelligent Systems, data and behavioural scientist, economist, former eBay exec and academic Johannes Castner.

    By way of background, we can probably agree that collectively, humans, bless us, struggle with speedy decision-making. Yes, there is too much data to get up to speed with, conflicting priorities, and the ever-present risk of a person hijacking the agenda to explore their current rabbit hole.

    We have created the basis of a new software system and API that brings together human purpose, the inclusive genius of Collective Intelligence (CI), and a set of bespoke AI agents. In creating the code, we are drawing heavily on our experience of deal-making, which teaches us that people are much more likely to agree at the level of values and interests. Unlike many people, NICER quickly finds out what others care about.

    NICER stands for Nimble, Impartial, Consensus-Engendering Resource: a software product and API that uniquely blends AI agent-driven insights with human expertise to create faster, more harmonious decision-making. Imagine an AI-powered assistant that sifts through mountains of data, tests hypotheses about human behaviour (using oTree experiments), and presents known facts and suggested ideas in real time. The result? Fewer deadlocks, more clarity, and less time spent debating things that don’t matter. It taps into LLMs that instantly read organisational paperwork, strategic papers, regulatory frameworks, news and social media commentary, plus freely and legitimately available peer-reviewed academic papers, in order to feed back in real time.

    Building on solid behavioural science research, we expect the consensus-building system to work with comms platforms like Slack, helping teams focus on critical, contemporaneous issues and contribute meaningfully. Analysing Collective Intelligence forums, we find that many organisations leveraging AI-driven CI enjoy a boost in KPI performance, innovation speed, and employee engagement.

    We are doing this to contribute to an innovation ecosystem, currently driven by Innovate UK and UKRI, that is improving productivity, engendering sustainable growth, and ultimately regenerating communities across the UK. Next time you find yourself stuck in a slow decision-making process, remember: the future belongs to those who think smarter together.