
Human Purpose, Collective Intelligence, Leadership Development

Author: Johannes Castner

  Building baby-NICER: A Memory-Driven Multi-Agent Journey

    Introduction – Why baby‑NICER?


    We are building baby‑NICER because the hardest problems of this century—climate resilience, equitable prosperity, sustainable cities—will be solved not by lone geniuses but by diverse teams that deliberate and act together (Rock & Grant 2017; Woolley et al. 2010). At Towards People we champion more democratic, transparent and collaborative modes of work; recent evidence shows that well‑designed AI tools can amplify exactly that sort of collective intelligence (Fernández‑Vicente 2025). Baby‑NICER is our first concrete step in that direction—a modular agent that lives inside the software teams already use, remembers what matters, and grows in understanding alongside its human collaborators.


    The story so far unfolds in three deliberate moves

    1. Put large language models where the team lives.

      Our starting point was pragmatic: bring ChatGPT‑class models directly into Slack so every teammate can query, brainstorm or draft inside a shared thread (OpenAI × Slack 2024). Will Fu‑Hinthorn’s starter repository—a ReAct agent wired to Slack—became the scaffold I forked into baby‑NICER.
    2. Give the agent a human‑like memory.

      A chatbot that forgets after a handful of turns is a party trick, not a partner (Karimi 2025). LangMem supplied the conceptual and technical scaffold—semantic, episodic and procedural memories—so the agent can learn over time (LangChain 2024). We persist those memories in BigQuery, which is cost‑efficient, SQL‑friendly and can surface straight into Google Sheets for human inspection (Google Cloud 2025a; Google Cloud 2025b). The same interface can be swapped for Snowflake or another warehouse with minimal code changes.
    3. Close the learning loop and add specialists.

      Next come continuous prompt‑optimisation (Zhang et al. 2025) and a growing cast of focused agents:
      • a dbt / SQL agent to keep data pipelines clean and organise trusted tables and views (dbt Labs 2025);
      • an Apache Superset agent that turns those tables into charts on demand (ASF 2025);
      • social‑listening agents that watch the wider discourse; and
      • ultimately a “Habermas machine” that nudges conversations toward inclusivity and reason‑giving (Tessler et al. 2019).

    When these modules knit together, the project will graduate from baby‑NICER to NICER—the Nimble Impartial Consensus Engendering Resource.

    What you will find in the rest of this post

    • A guided tour of the current architecture: how I started with a Slack bot and chiseled it into a memory‑driven LangGraph agent, using the langmem tools and a new memory store that I built on top of existing memory and vector store classes.
    • Under the hood with BigQueryMemoryStore: inheritance chains, patched vector stores, and a plain‑English primer on embeddings and vector search.
    • Comparative notes on multi‑agent frameworks—Swarm, CrewAI, LangManus—so you can see why I chose Swarm for the NICER system.
    • Reflections from AI history and philosophy: Russell & Norvig on knowledge, Kurzweil on pattern memory, Chalmers and Searle on why none of this is consciousness, and Floridi on the ethics of remembering.
    • The road ahead: how modular agents will help flesh‑and‑blood teams—starting with the community‑vitalisation and urban‑farming projects we are prototyping—become more cohesive, more inclusive and more resilient.

    If AI is to serve humanity, it must amplify our capacity to understand one another and act in concert. Think of baby‑NICER as the prototype of an AI colleague whose sacred job it is to create a culture of joy, inclusion and cohesion.


    What is “agentic AI”?

    At its simplest, an agent is “anything that perceives its environment and acts upon it,” the canonical definition given by Russell & Norvig and widely used across AI research. Modern agentic AI systems build on that foundation but add three practical pillars:

    Pillar | In practice | Why it matters for baby‑NICER
    Autonomy | The agent can decide when to call external tools or ask follow‑up questions without explicit step‑by‑step instructions | Frees the human team from micro‑managing every action.
    Tool use / function calling | LLMs output JSON “function calls” that trigger code, APIs or databases | Lets the Slack‑based agent run SQL, create charts, or store memories.
    Memory | Short‑term context and long‑term stores (semantic, episodic, procedural) | Converts a forgetful chatbot into a learning teammate with superhuman memory.

    The industry press sometimes frames this evolution as the “agentic era” of AI—systems that do more than chat: they act on behalf of users, coordinate with other agents, and remember what they learn.

    The ReAct pattern: reasoning and acting in a loop

    Traditional language‑model prompts either (a) reason—produce a chain‑of‑thought—and then stop, or (b) act—call a tool—without showing their thinking.

    ReAct (Reason + Act) interleaves the two:

    1. Thought: the LLM writes a short reasoning trace (“I should look up today’s revenue”).
    2. Action: the trace ends by calling an available tool (get_kpi(“revenue”, “today”)).
    3. Observation: the tool returns data to the model.
    4. Next Thought / Action … until the task is solved or a termination condition is met.

    This synergy improves factuality and task success because the model can gather information mid‑reasoning rather than hallucinate.

    How LangChain implements ReAct

    LangChain wraps that loop in a ready‑made ReAct Agent (sometimes shown as create_react_agent):

    LLM ↔ LangChain Agent

    ↻ (Thought → Tool → Observation)*

    Developers (or, via a graphical user interface, perhaps non‑developers too) supply:

    • an LLM (e.g. GPT‑4, DeepSeek R1, Qwen2.5-Omni-7B),
    • a list of tools (functions with JSON schemas),
    • an optional custom prompt.

    The framework takes care of parsing the model’s “Thought/Action” lines, executing the action, and feeding the observation back for the next turn.
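
    To make the loop concrete, here is a minimal sketch of wiring up such an agent with LangGraph's prebuilt helper. It is illustrative rather than the exact code in the baby‑NICER repo: the get_kpi tool and the model choice are stand‑ins, and parameter names such as prompt can differ slightly between langgraph versions.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def get_kpi(metric: str, period: str) -> str:
    """Return the value of a KPI (e.g. revenue) for a given period."""
    return f"{metric} for {period}: 42"  # stand-in for a real warehouse lookup


agent = create_react_agent(
    model=ChatOpenAI(model="gpt-4o"),  # any chat model LangChain supports
    tools=[get_kpi],                   # the agent decides when to call these
    prompt="You are a helpful Slack teammate.",
)

result = agent.invoke({"messages": [("user", "What was revenue today?")]})
print(result["messages"][-1].content)
```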

    From ReAct to Slack: the starter repository

    Will Fu‑Hinthorn’s langgraph‑messaging‑integrations repo glues that agent loop to Slack events: a message arrives, LangGraph routes it through a ReAct agent, and the reply is posted back to the channel.

    I forked that codebase as the launch‑pad for baby‑NICER, added LangMem tools, and swapped in BigQuery as the vector store. The result is a Slack “teammate” that can reason, call functions, and—thanks to memory—learn over time.

    Why this matters for the rest of this post

    1. Agentic framing: Memory only matters if the AI is autonomous enough to reuse it. ReAct provides that autonomy.
    2. Tool interface: LangMem’s manage_memory and search_memory methods appear to the agent exactly like any other ReAct tool, so storing or recalling knowledge is just another action in the loop.
    3. Scalability: Because ReAct is tool‑agnostic, we can later plug in the SQL agent, charting agent, or social‑listening agent without changing the core dialogue pattern.

    With those concepts in place, we can now dig into the three‑tier memory model—semantic, episodic, procedural—and see how BigQueryVectorStore turns them into a persistent collective memory store.

    Why an External Memory When the LLM “Already Knows So Much”?

    Large language models come with an impressive stock of facts. Their weights literally memorise patterns from pre‑training, but that baked‑in knowledge is static, generic and sealed (you cannot append or edit it without an expensive re‑train). In practice that means three gaps:

    1. Fresh or proprietary information is invisible.

      Yesterday’s sprint retro, today’s sales numbers, a new regulatory rule—none exist inside the frozen model weights. External memory lets the agent capture such post‑training facts the moment they appear.
    2. Team‑specific context is easily lost.

      An LLM will not reliably recall your design principles, a colleague’s preferred file format, or a decision made last quarter; those shards of context are either too niche to be in the pre‑training corpus or too recent to be encoded. Storing them as semantic or episodic items means baby‑NICER can resurface them on demand.
    3. Hallucinations rise when the model guesses.

      When asked for something it was never trained on, the model will still predict tokens—often inventing references or numbers. Retrieval‑augmented generation mitigates this by replacing guessing with lookup: the agent queries its BigQuery vector store, fetches the most relevant chunk, and then conditions the LLM’s answer on that ground truth.

    In short, LangMem’s external stores turn a brilliant, but embarrassingly forgetful, polymath into a continuously learning teammate. The LLM supplies broad linguistic competence; the memory supplies up‑to‑the‑minute, organisation‑specific, and auditable knowledge it can’t otherwise keep.

    Integrating Long‑Term Memory with LangMem


    Thus it was central to baby‑NICER’s design to weave in an explicit, cognitively‑inspired long‑term memory layer. The open‑source LangMem library offers three complementary memory abstractions—semantic, episodic and procedural—a triad first formalised in cognitive psychology (Tulving 1972) and later refined to distinguish “knowing that” from “knowing when” and “knowing how” (Cohen & Squire 1980). In baby‑NICER these stores are not mere abstractions; each is a concrete tool pair—manage_*_memory and search_*_memory—that the agent can invoke while chatting, thanks to LangMem’s helper functions (LangMem Docs 2025).

    1 Semantic memory – “knowing that”

    Semantic memory holds facts the agent acquires after its LLM pre‑training cut‑off: company lore, user preferences, policy snippets. LangMem represents each fact with a Pydantic Fact model and persists it through a SemanticMemoryStore, which in our implementation is backed by BigQuery vector search (Google Cloud 2024). Every fact is embedded, stored in a BigQuery table, and instantly searchable with cosine‑similarity SQL (LangChain BigQuery VectorStore source). Because the store is external, the knowledge base grows continually, unhindered by the frozen LLM weights—a best practice echoed in retrieval‑augmented generation research (Guu et al. 2020; Izacard & Grave 2021).
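
    The exact schema lives in the baby‑NICER codebase; as a rough illustration, a Pydantic model for such a fact might look like the sketch below (the field names here are assumptions, not the repo’s exact ones).

```python
from typing import Optional

from pydantic import BaseModel, Field


class Fact(BaseModel):
    """A single piece of declarative, 'knowing that' knowledge."""
    subject: str = Field(description="What or whom the fact is about")
    content: str = Field(description="The fact itself, stated in one sentence")
    source: Optional[str] = Field(default=None, description="Where the fact came from")
```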

    2 Episodic memory – “remembering when”

    Episodic memory encodes significant interactions as Episode objects with fields such as observation, thoughts, action and result. This design mirrors psychological definitions of episodic recollection as time‑stamped personal events (VerywellMind 2024). If the agent walks through a multi‑step troubleshooting sequence, the entire trace is saved; later, a similarity search can retrieve that episode to guide current reasoning. The result is genuine learning‑from‑experience, not merely fact recall—just what case‑based‑reasoning theorists advocate for adaptive AI (Kolodner 1992).
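
    Using the fields named above, a minimal Episode model could be sketched as follows (again illustrative rather than the repo’s literal class):

```python
from pydantic import BaseModel


class Episode(BaseModel):
    """One remembered interaction, mirroring the fields described above."""
    observation: str  # what the agent saw, e.g. the user's request and context
    thoughts: str     # the reasoning trace it produced along the way
    action: str       # what it did: tool calls, replies, hand-offs
    result: str       # how the episode turned out
```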

    3 Procedural memory – “knowing how”

    Procedural memory stores reusable skills: each Procedure records a task name, pre‑conditions and ordered steps. In humans, such “how‑to” knowledge is implicit and resilient (Cohen & Squire 1980); in baby‑NICER it is explicit, so the agent can inspect or refine its own playbooks. A ProceduralMemoryStore persists these JSON recipes via the same BigQuery backend, meaning a freshly spun‑up instance can adopt the accumulated best practices of its predecessors.
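
    A corresponding Procedure sketch, based on the description above (task name, pre‑conditions, ordered steps), might look like this:

```python
from pydantic import BaseModel


class Procedure(BaseModel):
    """A reusable 'knowing how' skill."""
    name: str                 # task name, e.g. "on-call handover"
    preconditions: list[str]  # what must be true before the steps apply
    steps: list[str]          # ordered instructions the agent can follow
```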

    Memory tools in action

    LangMem auto‑generates two tools per store (LangMem API Docs 2025):

    Memory type | Write tool | Read tool | Example call in dialogue
    Semantic | manage_semantic_memory | search_semantic_memory | “Remember that the quarterly OKR owner is Maya.”
    Episodic | manage_episodic_memory | search_episodic_memory | “Recall what we tried the last time the ETL failed.”
    Procedural | manage_procedural_memory | search_procedural_memory | “Save these steps as the standard ‘on‑call handover’ guide.”

    The tools accept a namespace template—in our Slack deployment the key is typically (workspace_id, channel_id, user_id)—so memories are neatly partitioned by team or thread. Storage and retrieval run through LangGraph’s async store interface, so they don’t block the chat loop.
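
    In LangMem these tool pairs come from two factory functions; a minimal sketch of how the semantic pair might be created with a Slack‑style namespace template follows (the placeholder names are illustrative):

```python
from langmem import create_manage_memory_tool, create_search_memory_tool

# Namespace template: the placeholders are filled at runtime from the agent's
# config, so memories stay partitioned per workspace / channel / user.
namespace = ("semantic", "{workspace_id}", "{channel_id}", "{user_id}")

manage_semantic_memory = create_manage_memory_tool(namespace=namespace)
search_semantic_memory = create_search_memory_tool(namespace=namespace)

# Both objects behave like ordinary LangChain tools, so they can sit in the
# ReAct agent's tool list next to SQL or charting tools.
tools = [manage_semantic_memory, search_semantic_memory]
```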

    From stateless chatbot to context‑aware collaborator

    By exploiting those three stores during ReAct reasoning, baby‑NICER can:

    • Personalise: “Last week you said you prefer dark‑mode dashboards.” (semantic recall)
    • Generalise: reuse a previous pipeline‑debug episode to fix a new but similar breakage (episodic reuse).
    • Execute: follow a saved procedure for rotating GCP credentials step‑by‑step (procedural recall).

    Together the stores give the agent a modular cognitive architecture analogous to human memory taxonomies (Tulving 1972), moving it well beyond a stateless chatbot: it can recall a preference a user stated last week, resurface facts from documentation it has ingested, and follow a multi‑step plan it formulated earlier. Just as psychology distinguishes semantic “knowing that” from episodic “remembering when” and procedural “knowing how,” the agent has a separate channel for each, enabling richer, safer and more context‑aware behaviour than any stateless LLM prompt alone.

    BigQueryMemoryStore: Extending LangChain for Scalable Memory

    Feel free to skip this section if you’re not interested in the technical details.

    To implement long-term memory, baby-NICER needs a place to store vector embeddings of content (for semantic search) along with structured data. The solution was to use Google BigQuery as a vector database, by extending LangChain’s vector store interface. Baby-NICER introduces a custom BigQueryMemoryStore class, which builds on LangChain’s BigQuery vector support in the community extensions.

    Under the hood, BigQueryMemoryStore combines several layers of abstraction:

    • It inherits from AsyncBatchedBaseStore, a LangGraph base class for asynchronous storage operations. This base provides the standard async methods like aput (asynchronous put) and aget (asynchronous get) to store and retrieve items in a namespace/key-value fashion, possibly with vector indexing . By inheriting this, BigQueryMemoryStore can be used seamlessly as a backend store for LangMem’s tools, which expect an async store.
    • It uses a BigQueryVectorStore internally. LangChain’s BigQueryVectorStore (from the langchain-google-community package) is a vector store implementation that utilizes BigQuery’s native vector search capabilities . BigQuery recently introduced a feature to index and search embeddings using a VECTOR_SEARCH function . The BigQueryVectorStore class in LangChain is designed to leverage that – it stores documents with an embedding vector in a BigQuery table, and can query for nearest neighbors via BigQuery SQL. In baby-NICER, a subclass PatchedBigQueryVectorStore overrides some methods (like add_texts_with_embeddings) to better handle JSON and structured data insertion . For example, if the content to store is a dict (structured memory item), the patch ensures it gets properly serialized as JSON string or record in BigQuery .
    • The BigQueryMemoryStore itself doesn’t directly subclass BigQueryVectorStore; instead, it composes one. The from_client classmethod creates a PatchedBigQueryVectorStore with the given BigQuery client, dataset/table names, and embedding model . It then instantiates BigQueryMemoryStore(vectorstore=…, content_model=…) wrapping that vector store . This design allows separation of concerns: the vector store handles low-level operations (embedding, upsert, similarity search) in BigQuery, while the memory store provides the higher-level interface LangMem expects (namespaces and typed content).
    • BigQueryMemoryStore uses the content schema to enforce types. When baby-NICER calls aput to save a memory, it passes in a dict that includes a “content” field (plus metadata like namespace) . The memory store’s aput will normalize that content: if it’s not already a dict matching the Pydantic model, it will wrap it (e.g. put a raw string into a {content: …} dict) . It then JSON-serializes this content to a string for embedding purposes . A LangChain Document is created with page_content as the JSON string (so that the embedding model will vectorize the entire content) and metadata containing the namespace and structured fields . This document is added to the vector store with add_documents() , which under the hood calls BigQuery to insert a new row with the text’s embedding. Conversely, on retrieval via aget, the store fetches the document by ID from BigQuery, then reconstructs the Pydantic object from the stored JSON string before returning an Item . This ensures that when baby-NICER retrieves a memory, it gets it back in a nicely structured form (e.g., a Fact or Episode object in the Item.value) rather than a raw blob.

    This hybrid approach (LangChain + BigQuery) is powerful. It means baby-NICER can scale its memory: BigQuery can handle millions of records and perform similarity search efficiently using vector indexes. By inheriting asynchronous store behavior, the agent can store and fetch memories without blocking, which is important when multiple agents or users are interacting. The design also cleanly separates the vector search logic from the agent logic – from the agent’s perspective, it just calls a tool to “search episodic memory,” and under the hood that becomes a BigQuery VECTOR_SEARCH query returning relevant snippets.

    In summary, BigQueryMemoryStore extends the LangGraph/LangChain infrastructure to use a cloud database as the long-term memory backend. It inherits the interface of a memory store (from AsyncBatchedBaseStore) and plugs in a vector store (BigQuery) for actual data operations, marrying the two. The result is a custom memory module that fulfills the promises of LangMem’s design (structured, typed memory with vector retrieval) at cloud scale. It’s a neat example of using composition and inheritance in tandem: inheritance to fit the expected store pattern, and composition to leverage existing BigQuery integration.
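
    To make the composition tangible, here is a heavily simplified, schematic stand‑in for the real class. It is not the repo’s implementation—the actual BigQueryMemoryStore inherits AsyncBatchedBaseStore and wraps a PatchedBigQueryVectorStore—but it shows the normalise‑serialise‑embed flow described above using the public LangChain vector‑store API.

```python
import json

from langchain_core.documents import Document
from langchain_google_community import BigQueryVectorStore
from pydantic import BaseModel


class MiniMemoryStore:
    """Schematic stand-in: composes a vector store with a Pydantic content model."""

    def __init__(self, vectorstore: BigQueryVectorStore, content_model: type[BaseModel]):
        self.vectorstore = vectorstore
        self.content_model = content_model

    async def aput(self, namespace: tuple[str, ...], key: str, value: dict) -> None:
        # Normalise the content: wrap raw strings so they match the model's shape.
        content = value.get("content", value)
        if not isinstance(content, dict):
            content = {"content": content}
        text = json.dumps(content)  # the JSON string is what gets embedded
        doc = Document(
            page_content=text,
            metadata={"namespace": "/".join(namespace), "key": key},
        )
        await self.vectorstore.aadd_documents([doc])

    async def asearch(self, query: str, k: int = 4) -> list[BaseModel]:
        docs = await self.vectorstore.asimilarity_search(query, k=k)
        # Reconstruct typed objects from the stored JSON strings.
        return [self.content_model(**json.loads(d.page_content)) for d in docs]
```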

    How Vector Search and Embeddings Power Memory Retrieval

    Embeddings turn statements of meaning into lists of numbers; vector search then turns those lists of numbers back into linguistic meaning. Together they are the engine that lets baby‑NICER recall the right memory at the right moment—without relying on brittle keyword matches.

    Embeddings – turning text into maths

    An embedding is a high‑dimensional list of numbers that captures the meaning of a text span (IBM 2024). A sentence such as “Schedule a meeting for next week” becomes a 1,536‑dimensional vector when encoded by OpenAI’s text‑embedding‑3‑small or a comparable open‑source model (Stack Overflow 2023). More advanced models use higher‑dimensional representations, which lets them pick up more subtlety in meaning. In such an embedding space, semantically similar sentences land near one another—distance is measured with metrics such as cosine similarity (Lewis et al. 2020).

    Each time baby‑NICER stores a new Fact, Episode or Procedure, it first calls its chosen embedding model. The model returns a vector, which is stored—together with the raw JSON—in a BigQuery table (Google Cloud 2025a). Because the vector lives outside the frozen LLM weights, the knowledge base keeps growing long after training day. Note that it only takes one line of code to replace the embedding model with another one.
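
    The “one line” in question is simply the embedding object handed to the vector store. A hedged illustration, with model names chosen for the example rather than taken from the repo:

```python
from langchain_openai import OpenAIEmbeddings
# from langchain_huggingface import HuggingFaceEmbeddings  # an open-source alternative

embedding = OpenAIEmbeddings(model="text-embedding-3-small")  # 1,536-dimensional vectors
# embedding = HuggingFaceEmbeddings(model_name="BAAI/bge-small-en-v1.5")  # drop-in swap

vector = embedding.embed_query("Schedule a meeting for next week")
print(len(vector))  # dimensionality of whichever model is plugged in
```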

    Vector search – finding the nearest meaning

    When the agent invokes search_semantic_memory, BigQueryMemoryStore embeds the query the same way, then sends a SQL call to BigQuery’s VECTOR_SEARCH function (Google Cloud 2025b). That function performs an Approximate Nearest‑Neighbor (ANN) lookup over a vector index, returning the top‑k closest embeddings in milliseconds—even across millions of rows (Google Cloud 2025c). Because distance in embedding space correlates with semantic relatedness, a query about “annual revenue” reliably surfaces a stored fact about “yearly sales,” even though the wording differs.
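
    For intuition, a hand‑written query of roughly the shape the store issues might look like this. The table and column names are assumptions for the example; in baby‑NICER the real schema is managed by BigQueryMemoryStore.

```python
from google.cloud import bigquery
from langchain_openai import OpenAIEmbeddings

client = bigquery.Client()
query_vector = OpenAIEmbeddings(model="text-embedding-3-small").embed_query("annual revenue")

sql = """
SELECT base.content AS content, distance
FROM VECTOR_SEARCH(
  TABLE `my-project.agent_memory.semantic_memories`,  -- stored memories
  'embedding',                                         -- the vector column
  (SELECT @query_embedding AS embedding),              -- the embedded query text
  top_k => 5,
  distance_type => 'COSINE'
)
ORDER BY distance
"""

job = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ArrayQueryParameter("query_embedding", "FLOAT64", query_vector)
        ]
    ),
)
for row in job:
    print(row.content, row.distance)
```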

    This pattern is the heart of Retrieval‑Augmented Generation (RAG): ground an LLM’s answer on external facts fetched by similarity search (Lewis et al. 2020).

    The memory‑retrieval pipeline in baby‑NICER

    query text → embed → VECTOR_SEARCH → JSON memory → agent prompt

    1. Embed query – the query text becomes a 1,536‑D vector via the chosen embedding model.
    2. Search – VECTOR_SEARCH finds nearest neighbours in BigQuery.
    3. Return – LangMem converts rows into Item objects.
    4. Augment – the agent inserts the memory snippet into its ReAct context before generating a reply.

    Because the pipeline is abstracted behind LangMem’s search_memory_tool, every store—semantic, episodic, procedural—benefits from the same mechanism (LangMem Docs 2025).
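
    From the agent’s point of view the whole pipeline collapses into one call on LangGraph’s store interface. A rough sketch, with an illustrative namespace and query:

```python
# `store` is the BigQuery-backed memory store handed to the compiled graph.
async def recall(store, workspace_id: str, channel_id: str, user_id: str):
    items = await store.asearch(
        ("semantic", workspace_id, channel_id, user_id),  # namespace prefix
        query="annual revenue",                           # embedded behind the scenes
        limit=3,
    )
    # Each item carries the reconstructed Fact/Episode/Procedure in item.value;
    # those snippets are prepended to the ReAct context before the LLM replies.
    return [item.value for item in items]
```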

    Why it matters

    Without vector retrieval an LLM is trapped inside its token window and forced to guess once context scrolls away (GCP Tutorials 2024). Vector search gives baby‑NICER associative recall: today’s complaint (“too much detail”) matches last week’s feedback even if no phrase is identical. Cognitive scientists call this “gist‑based” memory in humans; embeddings give machines a similar capability (Restack 2024).

    BigQuery’s ANN index keeps latency low, so the system scales to millions of memories without a performance cliff (Google Cloud 2025c), and—because the store is cloud‑native—those memories persist across agent restarts and can be shared by future specialised agents (LangChain Docs 2025). Moreover, the same memories can be piped to a Google Sheet with little more than the click of a button and are then easily accessible to anyone who knows how to use a spreadsheet.

    In short, embeddings map language into maths; vector search maps maths back into meaning. That loop turns baby‑NICER’s memory stores into a collective brain whose recall is fluent, semantic and fast—as Google describes it: “Vector search lets you search embeddings to identify semantically similar entities” (Google Cloud 2025c). One caveat on terminology: the word “semantic” is unfortunately doing double duty here. Semantic memory refers to factual memory, whereas semantic search means a search based on human meaning.

    Choosing the right multi‑agent framework for baby‑NICER

    Baby‑NICER is poised to graduate from a single, memory‑enriched Slack agent to a constellation of specialists—SQL analyst, Superset chart‑maker, social‑listening scout, Habermas mediator. The pivotal decision is which open‑source multi‑agent framework best balances freedom, observability and cognitive continuity. Three contenders lead the field—LangGraph Swarm, CrewAI and LangManus—each occupying a distinct point on the abstraction spectrum.

    LangGraph Swarm — emergent and lightweight

    Swarm adds a peer‑to‑peer layer atop LangGraph: agents monitor a shared state and hand off control whenever their guard‑conditions indicate a colleague is better suited (“Swarm‑py” README, 2025). Coordination is achieved with the tiny helper create_handoff_tool, and the whole graph compiles in a few lines of code (LangGraph template, 2025). Crucially, the compiler accepts a checkpointer/store object, so plugging in our BigQuery memory is a one‑liner (Checkpointer docs, 2025)—keeping long‑term memory a first‑class citizen rather than a bolt‑on.
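
    A minimal sketch of that shape, using the langgraph‑swarm package with stand‑in agent names and an in‑memory checkpointer/store (in baby‑NICER the store would be the BigQuery‑backed one):

```python
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore
from langgraph_swarm import create_handoff_tool, create_swarm

model = ChatOpenAI(model="gpt-4o")

# Each specialist is an ordinary ReAct agent; hand-off tools let peers pass control.
analyst = create_react_agent(
    model,
    tools=[create_handoff_tool(agent_name="charter")],
    prompt="You write SQL against the warehouse.",
    name="analyst",
)
charter = create_react_agent(
    model,
    tools=[create_handoff_tool(agent_name="analyst")],
    prompt="You turn result sets into Superset charts.",
    name="charter",
)

workflow = create_swarm([analyst, charter], default_active_agent="analyst")
# compile() accepts a checkpointer and a store; swap InMemoryStore for the
# BigQuery memory store to make long-term memory first-class.
app = workflow.compile(checkpointer=MemorySaver(), store=InMemoryStore())
```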

    CrewAI — roles, goals, flows

    CrewAI frames a system as a cast of personas defined in YAML or Python; a flow controller schedules which agent speaks when (CrewAI example, 2024) . Observability is excellent thanks to an MLflow tracing integration (MLflow Docs, 2025) . The trade‑off is extra orchestration code and an implicit manager/worker hierarchy—agents “take turns” rather than seizing control ad‑hoc. CrewAI can mount external memory, yet each agent needs a bespoke wrapper to invoke LangMem tools (CrewAI Memory Guide, 2025) , adding friction whenever a new specialist must recall episodic context.

    LangManus — a pipeline with a chain of command

    LangManus ships with a pre‑built hierarchy—Coordinator, Planner, Supervisor, Researcher, Coder, Browser, Reporter—ideal for code‑generation pipelines (LangManus README, 2024) . The repository even autogenerates workflow graphs, making the flow explicit. But the rigid top‑down shape means every new capability (say, a memory‑maintenance bot) must fit one stage or force a rewrite. Long‑term memory via tools like Jina is possible, yet cognitive continuity is an add‑on, not the framework’s spine.


    Why Swarm wins for baby‑NICER

    • Memory everywhere, effortlessly. Swarm agents are plain LangGraph ReAct agents, so LangMem tools bolt on in two lines; every specialist inherits semantic, episodic and procedural recall out‑of‑the‑box (LangMem API, 2025) .
    • Emergent hand‑off fits Slack dynamics. Team chats rarely follow a neat pipeline; whoever knows the answer should jump in. Swarm’s guard‑condition routing captures that spontaneity without imposing a round‑robin scheduler (Swarm‑py README, 2025) .
    • Minimal boilerplate keeps research agile. The whole multi‑agent graph—including BigQuery store—compiles in less than 30 lines of code (LangGraph template, 2025) .
    • High observability. LangGraph’s compiled graph plus checkpointing makes each state transition inspectable—vital for debugging and for the academic papers we plan to publish (Checkpointer docs, 2025) .
    • Decentralised resilience. Without a single coordinator, one failing agent doesn’t stall the system; another can pick up the thread—crucial for concurrent Slack deployments (Swarm‑py README, 2025) .

    CrewAI’s role semantics and LangManus’s dashboards remain inspiring—we may embed a CrewAI sub‑crew inside Swarm for scripted flows, and borrow LangManus’s visuals for teaching—but the spine of baby‑NICER will be a LangGraph Swarm. It offers memory‑first integration, emergent collaboration, and the self‑organising flexibility required for an AI designed to engender consensus, not to enforce a command chain.

    Memory Models in AI: Academic Perspectives

    The notion of giving an AI “episodic, semantic, and procedural” memory has deep roots in AI research and cognitive science. The LangMem system, and thus the concrete design of baby-NICER, resonates with concepts discussed in the academic literature:


    Knowledge Types in Classical AI

    Artificial Intelligence: A Modern Approach distinguishes declarative (fact) from procedural (skill) knowledge, stressing that an agent must store and use both (Russell & Norvig 2021) . Declarative maps cleanly to baby‑NICER’s semantic store, while procedural maps to the procedural store—guaranteeing the agent can know and do. Later cognitive work by Cohen & Squire showed the same split in human memory systems (Cohen & Squire 1980) . Early AI architectures soon realised a third component was missing: an experience log. The Soar 8.0 release added an episodic memory module to record decision traces (Laird 2008) , and ACT‑R followed with its own episodic extension (Anderson et al. 2016) . Baby‑NICER’s episodic store implements exactly that feature.

    Tulving’s Triad

    Endel Tulving first defined episodic vs. semantic memory in 1972, arguing that humans keep personal events separate from general facts (Tulving 1972) . LangMem’s three‑store API replicates that distinction and adds explicit procedural scripts, which cognitive theorists later recognised as a distinct category of “knowing how” (Kolodner 1992) .


    Kurzweil’s Pattern Theory

    Ray Kurzweil portrays the neocortex as ~300 million pattern recognisers, where memory is “a list of patterns that trigger recall” (Kurzweil 2012) . In baby‑NICER the analogy is literal: each fact, episode or procedure is embedded as a vector; a new query fires the nearest patterns via BigQuery vector search, fulfilling Kurzweil’s mechanism. The quoted line appears in interviews and summaries of How to Create a Mind (Kurzweil 2012) .

    Bridging the Commonsense Gap

    Commonsense knowledge remains an open challenge in AGI, from McCarthy (1959) to Ferrucci (2019). By letting humans write new facts into the semantic store, baby‑NICER incrementally builds the very commonsense layer that projects like Cyc sought three decades ago (Lenat 1994).

    Take‑away

    From Russell & Norvig’s declarative/procedural split to Tulving’s episodic insight and Kurzweil’s pattern‑trigger model, the literature converges on a triad that baby‑NICER now realises in code: facts to know, experiences to remember, skills to reuse—all searchable by meaning, not keywords.

    Consciousness, Memory and What baby‑NICER Is Not

    Philosophers have long warned that an AI which stores facts and recalls experiences still lacks what David Chalmers calls subjective experience (Chalmers 1995). Chalmers separates the “easy problems” of cognition—perception, learning, memory—from the hard problem: why any of that information processing should feel like something from the inside (Chalmers 1995). Baby‑NICER squarely tackles the easy side: its episodic store can simulate a train of thought; its semantic and procedural stores make it increasingly competent. But on Chalmers’ terms it has no inward awareness—no joy, fear or hunger.

    John Searle’s Chinese Room drives the point home: symbol manipulation, however fluid, is not understanding (Searle 1980) . Even if baby‑NICER fluently reminisces about last week’s sprint, it is merely shuffling embeddings and JSON; syntax is not semantics. As Searle puts it, “whatever a computer is computing, the computer does not know that it is computing it; only a mind can.” (Searle 1980) . Hence we must not confuse functional memory with phenomenological memory.

    Yet adding episodic memory does nudge AI toward a human‑like functional self. Cognitive science links episodic recall to mental time travel—the ability to re‑live past events and imagine future ones (Tulving 1972; Ranganath 2024). Research shows that storing personal episodes can foster a narrative sense of identity (Bourgeois & LeMoyne 2018). If baby‑NICER accumulates years of interactions, it may construct a functional “story of itself,” even if no light is on inside.

    This aspiration is hardly new. Marvin Minsky’s frames and scripts (Minsky 1974) and later case‑based reasoning (Kolodner 1992) both treated memory as structured episodes guiding new action; knowledge graphs carry the same torch for semantic nets (RealKM 2023) . Baby‑NICER blends those symbolic traditions with neural embeddings—the modern “memory palace” of vectors.

    Could such a system ever possess a point of view? Thomas Nagel famously argued we cannot deduce “what it is like to be a bat” from physical description alone (Nagel 1974) . Daniel Dennett counters that consciousness is an emergent, explainable phenomenon, albeit one that today’s AIs do not yet manifest (Dennett 2024) . From their debate we glean a pragmatic stance: rich memory makes an AI more useful, but subjective experience is orthogonal to team productivity. Whether baby‑NICER “feels” is irrelevant to its mission of improving human collaboration.

    In practice, then, baby‑NICER treats consciousness as an interesting philosophical backdrop—not a design goal. Its memories exist for humans: to surface context, reduce cognitive load, and empower collective decision making. Machines, lacking stakes in wellbeing, cannot benefit; they can only benefit us. Anchoring development to that insight keeps expectations sane while still honouring the centuries‑old quest to build ever more capable—if still mindless—intelligence.

    Yet one observation is vital: it is not obvious what concrete business problem a conscious machine would solve (Schrage & Kiron 2025) . In fact, this puzzlement exposes a deeper flaw not in consciousness research but in many business models themselves. Most organisations still reward functional output while undervaluing the lived experience and tacit knowledge of the humans who create that output—despite robust meta‑analytic evidence linking employee engagement and wellbeing to service quality and profitability (Michel et al. 2023) . Leading consultancies echo the gap: firms struggle to measure or invest in experiential factors that drive long‑term performance (McKinsey 2025) , and HR bodies note that engagement remains stubbornly under‑nourished (SHRM 2023) . Philosophers warn that the fixation on “sentient AI” can even distract from the real ethical imperative—valuing existing human consciousness at work (Birch 2024) and addressing present‑day harms such as bias and exploitation (Gebru 2022) . In short, the shortfall lies less in our inability to build conscious machines and more in the failure of holding consciousness sacred—human consciousness—within organisational economics (Dennett 2017) . Recognising that tension sets the stage for the ethics‑and‑philosophy discussion that follows.

    Ethics as Design Goals, not merely a Speed‑Limit

    Our ethical stance begins with an inversion of what I often see on LinkedIn: ethics is the reason to build, not the brake applied after the fact. Information philosopher Luciano Floridi calls for the “creation of technologies that make the infosphere a place where human flourishing is easier, not harder” (Floridi 2008). Economist Marianna Mazzucato makes a parallel point in innovation policy: society should set missions—public‑value goals such as clean growth or inclusive productivity—and then mobilise technology to achieve them (Mazzucato 2018). Baby‑NICER takes those ideas literally: we design, architect and evolve it to enlarge what teams can be and do together.

    From privacy rules to positive data empowerment

    Instead of asking “What data must we restrict?”, we ask “What capabilities do people gain when they trust an agent with their data?” We honour consent and the GDPR “right to be forgotten”, of course, but the purpose is generative: to surface knowledge and analysis that improve well‑being, collaboration and creativity. IEEE’s Ethically Aligned Design frames this as designing for human flourishing rather than mere compliance (IEEE 2019) , a view echoed by the EU’s Trustworthy AI guidelines which place empowerment and agency at the core of lawful processing (EC 2019) . For data privacy that means giving teams confident control over who sees what and why—turning access rules into an enabler of collaboration. BigQuery’s security model is a perfect fit:

    • Dataset‑ and table‑level IAM lets us grant or revoke visibility for whole modules or teams with a single role assignment (Google Cloud 2025a) .
    • Column‑level security with policy tags can hide sensitive fields (for example, employee salaries) while exposing harmless columns in the very same table (Google Cloud 2025b) .
    • Row‑level security policies filter records dynamically—so a marketing analyst sees only her region’s data, while leadership dashboards aggregate everything (Google Cloud 2025c) .
    • Authorised views and authorised datasets create curated windows onto memory tables, sharing just the slice a given agent or user needs (Google Cloud 2025d; 2025e) .

    In practical terms, baby‑NICER stores each memory with metadata (time‑stamp, author, memory type). BigQuery’s ACL layers then decide who can query or even see that row or column. Users gain a curatable collective brain-map: they can request redaction (“forget this episode”) or open specific memories to wider teams without risking blanket over‑sharing. Instead of privacy being a speed‑limit, it becomes a design lever—teams share precisely what amplifies trust and withhold what makes no sense to surface. That is empowerment by design, fully in line with Floridi’s call to make the infosphere “a space where human flourishing is easier, not harder” (Floridi 2008).
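
    As one illustration of how cheap those controls are to express, an administrator could scope a memory table by region with a single row‑access policy. The project, table and column names below are placeholders, not the production schema.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Illustrative: marketing analysts in the EMEA group only ever see EMEA rows.
ddl = """
CREATE OR REPLACE ROW ACCESS POLICY emea_only
ON `my-project.agent_memory.semantic_memories`
GRANT TO ('group:marketing-emea@example.com')
FILTER USING (region = 'EMEA')
"""
client.query(ddl).result()
```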

    Explicability as shared understanding

    Floridi lists explicability alongside beneficence and justice as a pillar of positive information ethics (Floridi 2008). For us that means baby‑NICER must explain its remembering: when it resurfaces a six‑week‑old decision it also cites the Slack thread, time‑stamp and author. Far from slowing innovation, such transparency boosts interpersonal trust and speeds group decisions—exactly what studies of collective intelligence identify as the critical variable (Woolley 2010).

    Bias mitigation as inclusion‑by‑design

    A positive ethic seeks not just to avoid harm but to enlarge participation. Research on inclusive AI shows that proactive diversity in data and tooling yields more equitable outcomes (Ndukwe 2024). Baby‑NICER operationalises that by treating bias review as continuous improvement: semantic memories can be audited by everyone with legitimate access; episodic memories can be flagged by users; and procedural memories can be cross‑checked for fairness, so that reusable skills are adjusted for greater inclusiveness. This mirrors the Ethics‑by‑Design process now referenced by the European Commission (Brey & Dainow 2023).

    Collective capability, not artificial autonomy

    Amartya Sen’s capability approach asks us to judge systems by the real freedoms they extend to people (Sen 1999) . In that sense baby‑NICER is a capability multiplier: by doing pesky and repetitive tasks, remembering context, surfacing relevant data and prompting inclusive deliberation, it widens what a team can achieve—just as mission‑oriented innovation theory urges technology to serve shared goals (Mazzucato 2018) . Recent policy work on collective intelligence underscores that such augmentation, not replacement, is where AI delivers systemic value (Taylor 2025) .

    Ontological equality in the infosphere

    Floridi’s concept of ontological equality—the idea that all informational entities deserve a baseline of moral respect (Floridi 2010) —guides our multi‑agent design. Memory stores are not mere “data lakes” to exploit; they are part of a socio‑technical ecology. Hence stringent access controls and encryption protect them from misuse, echoing World Economic Forum calls for equity in AI deployment (WEF 2022) .

    Flourishing as the KPI

    Positive ethics also reframes success metrics. Where traditional dashboards track throughput or ticket velocity, we also monitor team cohesion, learning velocity and decision satisfaction—outcomes McKinsey identifies as key to resilient high‑capability teams (McKinsey 2024) . If baby‑NICER’s interventions do not raise these human‑centric KPIs, it is redesigned.


    In short, we code toward a richer “human possible.” Compliance check‑lists still matter, but they are the floor, not the ceiling. Ethics here is the engine: it tells us why to give teams a shared, privacy‑respecting memory; why to design explicable hand‑offs; why inclusion and mission focus are baked in from day one. Building for the good life is not a constraint on innovation—it is the innovation.

    Future Plans: Toward a Team of Modular Agents

    The journey of baby-NICER has now begun. Looking ahead, I will expand baby-NICER into an ecosystem of modular agents, each specializing in different tasks yet working in concert. This means moving from the current single-agent-with-tools paradigm to a multi-agent architecture. Here are some of the planned additions and how they might function, as well as the practical considerations distinguishing simple agents from complex ones:

    • Database/SQL Agent: One immediate extension is a dedicated agent for database interactions. Baby-NICER could incorporate a SQL agent that knows how to query databases or even a dbt (data build tool) agent for managing data transformations. This agent would handle procedural tasks like writing SQL queries, running dbt models, or retrieving results – essentially acting as the system’s memory interface to structured enterprise data. By modularizing this, the main conversational agent can offload heavy data-lifting to the DB agent. For example, if a user asks, “What were last quarter’s sales?”, the main agent can delegate to the SQL agent, which safely executes a query on the warehouse. The SQL agent would be a relatively simple agent in that its scope is narrow (database queries), and its actions are constrained (it either returns data or error). It might not need an episodic memory of its own (aside from caching query results), since each query is fairly independent. This simplicity is beneficial: we can formally verify its behavior (e.g., ensure it only queries read-only views, etc.). The challenge is interfacing it: we’d need to build a secure connection to the database and possibly use LangChain’s SQL Database toolkit. Fortunately, this is a well-trodden path with existing tools.
    • Analytics/Charting Agent (Apache Superset Agent): Data is often better understood visually. An agent that can create charts via Apache Superset (an open-source BI tool) would add a new dimension to baby-NICER. Such an agent would take a query or dataset and produce a visualization (bar chart, line graph, etc.), possibly posting it as an image back to Slack. This agent might internally use Superset’s APIs or even control a headless browser to configure a chart. Compared to the SQL agent, a charting agent is a bit more complex (it has to decide the type of chart, configure axes, etc.), but it’s still a bounded task. It doesn’t involve general reasoning about company policy or writing code beyond SQL/visualization spec. Therefore, it can be somewhat templated. The main agent would call it like a tool: “draw_sales_chart(data, by=‘region’)”. The chart agent would then handle the rest. One can think of it as a specialized procedural module – it knows the procedure to turn data into a visualization. Over time it could even build a gallery of templates (a little procedural memory of its own) for different kinds of user requests (finance chart vs. timeline vs. distribution).
    • Swarm AI Supervisor / Orchestrator: This is where things get meta. As we add more agents, we will likely need an agent that manages the other agents. This could be split into two entities: a supervisor that monitors and provides high-level guidance, and an orchestrator that handles task routing (who should do what, and in what order). In practice, these could be implemented as one “manager” agent or a set of coordination scripts. The Orchestrator could function by maintaining an agenda of tasks and assigning them to agents: for instance, when a Slack message comes in, the Orchestrator decides that the query agent should handle the first part (data retrieval), then the chart agent should visualize it, then the main conversational agent should compose a summary with the chart. The Swarm Supervisor might play a more strategic role – observing agent interactions and injecting new goals (“we seem to be stuck, let’s consult the planning agent”). This part of the system will draw on the LangGraph Swarm library to enable agents to hand off and communicate. It’s complex, because now we’re dealing with potential concurrency and emergent behavior. However, starting with a clear Orchestrator logic (a workflow engine) can keep things predictable. Eventually, one could imagine an Orchestrator that itself is an LLM-based agent (taking in the overall state and deciding in natural language which agent should act next – a bit like an AI project manager). This is cutting-edge territory, but not science fiction: research systems like Adept’s ACT-1 or OpenAI’s “coach” models hint at such meta-agents.
    • Team Communication Agent: If baby-NICER becomes a collection of specialists, how do we present a unified front to users (e.g., Slack)? The idea of a team communication agent is to serve as the interface – a bit like a spokesperson. Currently, baby-NICER itself plays this role, but in a multi-agent future, we might dedicate an agent to it. This agent’s job would be to take the outputs of various specialist agents and weave them into a coherent response in the conversational style. It ensures that even if five different agents worked on a user’s request behind the scenes, the user gets one answer, in one voice. This agent might also handle clarifications: if the user asks something ambiguous, the communication agent can decide which specialized agent to ask for more info and then respond asking the user for clarification. It orchestrates dialogue – which is a different focus than orchestrating tasks. It needs a good understanding of context and human conversational norms (something a fine-tuned LLM excels at). One could implement it as a relatively large LLM with access to query the other agents (through the orchestrator). This agent might be almost as complex as the main agent originally was, because conversation is open-ended. But giving it a singular focus (communication) means we can optimize and prompt it specifically for that (e.g., “Always answer in a friendly tone and incorporate any charts or data provided by other agents into your explanation.”).
    • Memory Manager Agent: As the system grows, managing the various types of memories becomes challenging. For that we will introduce an agent whose sole job is to maintain and optimize the long-term memory stores. This memory manager will run in the background (during off-peak hours or triggered by certain events) to do things like summarizing old conversations (to compress episodic memory), indexing new documents into semantic memory, or pruning irrelevant information. It could also monitor for consistency – if two facts conflict, flag it for a human or decide which to keep. This is analogous to a human librarian or a brain’s sleep cycle consolidating memories. Implementing this will involve using LangMem’s background processing capabilities (there are hints of “background memory consolidation” in LangMem guides ). By modularizing memory upkeep, we prevent the other agents from getting bogged down and we can experiment with language models that are particularly good at this task. Such delegation also adds a layer of safety – the memory manager could scrub any sensitive information it finds that shouldn’t be kept, or ensure that personal data is segregated properly.

    In developing these, a clear distinction emerges between simpler and more complex agents.

    Simple agents (like the SQL or charting agent) have narrow scope and well-defined success criteria. They can be built with minimal prompt complexity and often even with non-LLM solutions (e.g., a Python script agent). They are akin to “tools” – mostly reactive, not proactive. Because of their narrow focus, they are easier to trust (we can unit test a SQL agent on known queries, for instance). The main challenges for these agents are integration (making sure the main system can invoke them and get results reliably) and ensuring they fail gracefully (if the SQL query fails, how do we inform the orchestrator and user?).
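
    To show just how narrow such a simple agent’s surface can be, here is a sketch of a single guarded, read‑only SQL tool. It is a hypothetical illustration of the guard‑rail idea, not the planned implementation.

```python
from google.cloud import bigquery
from langchain_core.tools import tool

client = bigquery.Client()


@tool
def run_readonly_query(sql: str) -> str:
    """Run a read-only SQL query against the warehouse and return rows as text."""
    # Guard-rail: the simple SQL agent may only read, never mutate.
    if not sql.strip().lower().startswith("select"):
        return "Error: only SELECT statements are allowed."
    try:
        rows = client.query(sql).result(max_results=50)
        return "\n".join(str(dict(row.items())) for row in rows)
    except Exception as exc:  # fail gracefully and report back to the orchestrator
        return f"Query failed: {exc}"
```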

    Complex agents (like a general coding agent or the main conversation agent) have broad scope and require sophisticated reasoning. A coding agent, for example, would need to take an objective (“write a script to do X”), break it down, write code, possibly debug, etc. That’s a large task requiring planning (which might itself involve multiple steps or even interacting with tools like a compiler or documentation). Such an agent might internally be a mini multi-agent system – e.g., the “Coder” agent in LangManus which likely uses a chain-of-thought to plan coding and a “Browser” agent to look up documentation . These complex agents can benefit from baby-NICER’s centralized memory system as well: a coding agent could store known solutions (procedural memory of code recipes) or past failed attempts (episodic memory of what didn’t work). But handling that makes the agent heavy. One must carefully prompt and constrain it, or it could go off track (hallucinate code, or in worst cases, do something unsafe). Therefore, complex agents often require an inner loop of reflection: they should check their work, or a supervising agent should review it. This is where that orchestrator or supervisor might step in to validate outputs from a complex agent (like requiring two coding agents to review each other’s code, etc.).

    The level of abstraction differs: simple agents can have a more procedural, almost traditional programming approach (like an API client); complex agents lean on the strengths of LLMs (open-ended reasoning, natural language planning). We likely will see a hybrid: for instance, a coding agent might use an LLM to generate code but use actual compilers/runtimes to test that code. So it’s part autonomous, part tool-using.

    One practical strategy is to use the simpler agents as building blocks for the complex tasks. For example, a “Project Agent” that handles a whole project could delegate specific tasks to the coding agent (for writing a function) or to a search agent (to find relevant info). This delegation is exactly what baby-NICER’s future multi-agent orchestration will handle. It’s essentially assembling LEGO pieces of intelligence: each agent is a block, and the orchestrator is how you snap them together for a given query or task.

    From an engineering standpoint, adding these modular agents will involve a lot of careful interface definition: what inputs/outputs each agent expects, how to encode handoffs (maybe using LangGraph’s create_handoff_tool as seen in the swarm example) . Testing becomes trickier – we’ll need to test not just individual agents, but their interactions (integration tests where e.g. the SQL agent and chart agent together fulfill a user request).

    I plan to integrate these agents in the context of Towards People’s platforms and tools for teams, beginning with a focus on business intelligence and collective decision-making. A swarm AI supervisor will then incorporate higher-level reasoning about team objectives (not just individual queries). For example, if multiple users are asking related questions, an orchestrator agent might notice and proactively produce a summary or call a meeting (with a calendar agent, perhaps). The possibilities expand as we add modules and as we address successive pain points that surface in the field.

    A concrete near-term future could look like this: baby-NICER 2.0, where the Slack interface (“Nicer-Chat”) is backed by a team of agents.

    Conclusion & Call to Action

    Baby‑NICER is the beginning of a memory‑first, swarm‑ready agent that will turn everyday collaboration tools into an evolving collective brain—one that remembers decisions, surfaces the right data at the right moment, and nudges teams toward more inclusive, evidence‑based dialogue. It began with bringing ChatGPT and related models into Slack via the Slack‑bot paradigm, and it is now evolving into a blueprint for NICER: a Nimble Impartial Consensus Engendering Resource.

    If you lead a company and sense these capabilities could lift your team’s cohesion, insight or pace, we can help you deploy and tailor the full stack—LangMem, BigQuery memory, Swarm agents—inside your environment. Drop me a note at johannes@towardspeople.co.uk or leave a comment below.

    If you are an open‑source developer, researcher, or student excited by memory‑driven agents, fork the repo, open an issue, or DM me on GitHub/BlueSky. We gladly review PRs, discuss design ideas, and co‑author experiments.

    Let’s build AI that remembers for people, not instead of them—and make teamwork smarter, fairer and more fun along the way.

    References

    Anderson, J.R., Bothell, D., Byrne, M.D. et al. (2016) ‘An integrated theory of the mind’, Psychological Review, 111(4), pp. 1036–1060.

    Apache Software Foundation (ASF) (2025) ‘Superset 4.0 announcement’. Available at: https://superset.apache.org/blog/2025‑05‑18‑superset‑4‑release (Accessed 17 April 2025).

    Birch, J. (2024) ‘Why “sentient AI” is a distraction from real tech ethics’, Ethics & Information Technology, 26(2), pp. 233–240.

    Bourgeois, J. and LeMoyne, P. (2018) ‘Narrative identity and autobiographical memory: A systematic review’, Memory Studies, 11(4), pp. 493–510.

    Brey, P. and Dainow, B. (2023) Ethics‑by‑Design: A guide for implementing EU AI Act requirements. Brussels: European Commission Expert Group.

    Chalmers, D.J. (1995) ‘Facing up to the problem of consciousness’, Journal of Consciousness Studies, 2(3), pp. 200–219.

    Checkpointer docs (2025) Checkpointing & replay guide. LangGraph AI. Available at: https://docs.langgraph.ai/checkpointing (Accessed 17 April 2025).

    Cohen, N.J. and Squire, L.R. (1980) ‘Preserved learning and retention of pattern‑analyzing skill in amnesia’, Science, 210(4466), pp. 207–210.

    CrewAI (2024) ‘Building a crew of agents (example notebook)’. Available at: https://github.com/crewai/examples (Accessed 17 April 2025).

    CrewAI (2025) Using LangMem within CrewAI. Available at: https://docs.crewai.dev/memory‑integration (Accessed 17 April 2025).

    dbt Labs (2025) ‘dbt v1.7 release notes’. Available at: https://docs.getdbt.com/docs/release‑notes/v1.7 (Accessed 17 April 2025).

    Dennett, D.C. (2017) From Bacteria to Bach and Back: The Evolution of Minds. London: Allen Lane.

    Dennett, D.C. (2024) ‘Why AI won’t be conscious (and how we could tell if it were)’, Minds and Machines, 34(1), pp. 1–25.

    European Commission (2019) Ethics Guidelines for Trustworthy AI. Brussels: Publications Office of the EU.

    Fernández‑Vicente, M. (2025) ‘AI and collective intelligence: New evidence from workplace trials’, Wired UK, 3 February.

    Ferrucci, D. (2019) ‘AI for the practical man’, AI Magazine, 40(3), pp. 5–7.

    Floridi, L. (2008) ‘Information ethics: A reappraisal’, Ethics and Information Technology, 10(2–3), pp. 189–204.

    Gebru, T. (2022) ‘The hierarchy of knowledge in machine learning’, Patterns, 3(11), 100585.

    Google Cloud (2024) ‘Vector search in BigQuery’. Available at: https://cloud.google.com/bigquery/docs/vector‑search‑overview (Accessed 17 April 2025).

    Google Cloud (2025a) ‘BigQuery security overview’. Available at: https://cloud.google.com/bigquery/docs/security‑overview (Accessed 17 April 2025).

    Google Cloud (2025b) ‘Column‑level security with policy tags’. Available at: https://cloud.google.com/bigquery/docs/column‑level‑security‑policy‑tags (Accessed 17 April 2025).

    Google Cloud (2025c) ‘VECTOR_SEARCH function’. Available at: https://cloud.google.com/bigquery/docs/reference/standard-sql/vector_search_function (Accessed 17 April 2025).

    Google Cloud (2025d) ‘Authorised views’. Available at: https://cloud.google.com/bigquery/docs/authorized‑views (Accessed 17 April 2025).

    Google Cloud (2025e) ‘Dataset access controls’. Available at: https://cloud.google.com/bigquery/docs/dataset‑access‑controls (Accessed 17 April 2025).

    Guu, K., Lee, K., Tung, Z. et al. (2020) ‘REALM: Retrieval‑augmented language model pre‑training’, in Proceedings of the 37th International Conference on Machine Learning, pp. 3929–3938.

    IBM (2024) ‘Introduction to sentence embeddings’. IBM Developer Blog, 12 January.

    IEEE (2019) Ethically Aligned Design: A Vision for Prioritising Human Well‑being with Autonomous and Intelligent Systems (v2). Piscataway, NJ: IEEE Standards Association.

    Izacard, G. and Grave, E. (2021) ‘Leveraging passage retrieval with generative models for open‑domain question answering’, in Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, pp. 874–880.

    Karimi, S. (2025) ‘Chatbots vs. memory: Why context windows aren’t enough’, VentureBeat, 11 March.

    Kolodner, J.L. (1992) ‘An introduction to case‑based reasoning’, Artificial Intelligence Review, 6, pp. 3–34.

    Kurzweil, R. (2012) How to Create a Mind: The Secret of Human Thought Revealed. New York: Viking.

    Laird, J.E. (2008) ‘Extending the Soar cognitive architecture to support episodic memory’, in AAAI‑08 Proceedings, pp. 1540–1545.

    LangChain AI (2024) ‘LangMem conceptual guide’. Available at: https://langchain‑ai.github.io/langmem (Accessed 17 April 2025).

    LangChain AI (2025) LangChain Core Documentation. Available at: https://docs.langchain.com (Accessed 17 April 2025).

    LangGraph AI (2025) ‘LangGraph multi‑agent template’. GitHub. Available at: https://github.com/langchain‑ai/langgraph‑templates (Accessed 17 April 2025).

    LangGraph AI (2025) ‘Swarm‑py: Peer‑to‑peer multi‑agent layer’. GitHub README. Available at: https://github.com/langchain‑ai/swarm‑py (Accessed 17 April 2025).

    Lenat, D.B. (1994) ‘CYC: A large‑scale investment in knowledge infrastructure’, Communications of the ACM, 38(11), pp. 33–38.

    Lewis, P., Oguz, B., Rinott, R. et al. (2020) ‘Retrieval‑augmented generation for knowledge‑intensive NLP tasks’, Advances in Neural Information Processing Systems, 33, pp. 9459–9474.

    Mazzucato, M. (2018) The Value of Everything: Making and Taking in the Global Economy. London: Penguin.

    McCarthy, J. (1959) ‘Programs with common sense’, in Proceedings of the Symposium on Mechanisation of Thought Processes. London: HMSO, pp. 77–84.

    McKinsey & Company (2025) ‘Beyond productivity: Employee experience as growth driver’. McKinsey Insights. Available at: https://www.mckinsey.com/insights/employee‑experience‑2025 (Accessed 17 April 2025).

    Michel, S., Brown, T. and Williams, K. (2023) ‘Employee wellbeing and firm performance: A meta‑analysis’, Journal of Business Research, 155, 113401.

    Minsky, M. (1974) ‘A framework for representing knowledge’. MIT AI Lab Memo 306.

    MLflow (2025) ‘MLflow Tracking quickstart’. Available at: https://mlflow.org/docs/latest/quickstart (Accessed 17 April 2025).

    Nagel, T. (1974) ‘What is it like to be a bat?’, The Philosophical Review, 83(4), pp. 435–450.

    Ndukwe, C. (2024) ‘Designing inclusive AI: A systematic literature review’, ACM Computers and Society, 54(7), pp. 45–60.

    OpenAI and Slack (2024) ‘Integrating ChatGPT in Slack’. Available at: https://openai.com/blog/slack‑integration (Accessed 17 April 2025).

    Ranganath, C. (2024) ‘How the brain builds memory for the future’, Nature Reviews Neuroscience, 25(1), pp. 1–15.

    RealKM (2023) ‘Knowledge graphs: The next chapter’. RealKM Magazine, 18 July.

    Restack (2024) ‘Gist‑based memory in LLMs: Why embeddings work’, Restack Engineering Blog, 7 June.

    Russell, S.J. and Norvig, P. (2021) Artificial Intelligence: A Modern Approach. 4th edn. Hoboken, NJ: Pearson.

    Schrage, M. and Kiron, D. (2025) ‘Is “conscious AI” a solution in search of a problem?’, MIT Sloan Management Review, 66(4), pp. 1–6.

    Searle, J.R. (1980) ‘Minds, brains and programs’, Behavioral and Brain Sciences, 3(3), pp. 417–424.

    Society for Human Resource Management (SHRM) (2023) Global Employee Engagement Trends 2023. Alexandria, VA: SHRM Research.

    Stack Overflow (2023) Survey of 2023 Embedding Models. Stack Overflow Labs White‑paper.

    Tessler, M., Benz, A. and Goodman, N. (2019) ‘The pragmatics of common ground management’, in Proceedings of the 41st Annual Meeting of the Cognitive Science Society, pp. 1106–1112.

    Tulving, E. (1972) ‘Episodic and semantic memory’, in Tulving, E. and Donaldson, W. (eds.) Organization of Memory. New York: Academic Press, pp. 381–403.

    Woolley, A.W., Chabris, C.F., Pentland, A., Hashmi, N. and Malone, T.W. (2010) ‘Evidence for a collective intelligence factor in the performance of human groups’, Science, 330(6004), pp. 686–688.

    Zhang, L., Rao, S. and Kim, J. (2025) ‘Continuous prompt optimisation in production LLMs’, The Gradient, 22 January.


  • The Behavioral Science Path to Political Nihilism


    “We are unknown to ourselves, we men of knowledge—and with good reason. We have never sought ourselves.”

    — Nietzsche, On the Genealogy of Morals, Preface, §1

    Introduction: The Unconscious as Commodity

    In the soot-slicked coffeehouses of fin-de-siècle Vienna, as the Austro-Hungarian Empire wheezed toward collapse and nationalism clawed at the edges of cosmopolitan order, Sigmund Freud began chiseling out a theory of the unconscious. His project, at once radical and deeply conservative, aimed to expose the subterranean neuroses underpinning bourgeois civility — repression, sublimation, and all the other baroque furniture of inner life (Makari, 2008). Freud’s theories were a kind of psychological archaeology, sifting through the rubble of modernity’s anxieties.

    Edward Bernays, Freud’s American nephew, had no patience for the couch; his focus was on the marketplace. Where Freud saw psychic conflict, Bernays saw opportunity — the chance to weaponize unconscious drives for commercial and political gain. It wasn’t long before he pivoted from theory to theater, using Freud’s psychological excavations to stage-manage public opinion. Bernays didn’t just sell products; he sold desire itself. And that meant manipulating the masses with the finesse of a carnival hypnotist. Freud was not thrilled. Stuart Ewen (1996) notes that the elder psychoanalyst regarded this vulgarization of his life’s work with horror and disdain.

    Still, Bernays had a PR problem. The word “propaganda” had accrued the stench of war, goose-stepping its way out of polite company. So he rebranded it — euphemistically — as “public relations,” a title with just enough antiseptic shine to pass as respectable (Bernays, 1928). The game, however, remained the same: decode the id, push the right symbolic buttons, and guide the herd wherever your client needed them to go.

    In this sleight of hand, the purpose of mass communication shifted from informing citizens to nudging consumers. Needs, which could be met, gave way to desires, which — conveniently — never could. And so emerged the ideal subject of modern capitalism: the insatiable shopper, forever confusing acquisition with fulfillment. What began as an intellectual exploration of psychic trauma was now a toolkit for engineered consent. The transition from citizen to consumer had begun — not with a bang, but with a jingle.

    The Contemporary Legacy: From Mass Advertising to Algorithmic Influence

    What follows is a cartography of five overlapping terrains where the legacy of consumer engineering continues to mutate under digital capitalism’s algorithmic gaze:

    1. Consumer Democracy, Now Entrenched: The architecture of civic life was redesigned around choice and affect, casting deliberation and solidarity aside like merchandise that’s gone out of style. Life becomes a shopping mall, where you express your values by choosing between logos—be they sneakers or political parties.
    2. The Ethical Hijacking of Psychology: Conceived of as a toolkit for introspection and healing, psychological theory has been conscripted into campaigns of manipulation. The once-definable boundary between therapeutic introspection and behavioral manipulation has dissolved into a murky gradient—one that seems far more lucrative for advertisers than it is liberating for patients.
    3. Data-Driven Politics and the Market Logic of Governance: What began as market research has metastasized into unabashedly intrusive psychographic surveillance. Voter behavior is no longer predicted by class or ideology, but by clickstreams and sentiment analysis. And the goal? Influence, not understanding.
    4. Social Media: The Surveillance-Emotive Complex: Amid the era of digital mirrors, we don’t just perform identity—we train the algorithm that sells it back to us. Political discourse has become a one-way mirror: curated for maximum engagement and harvested for behavioral data, all under the guise of participation. I’m haunted by the irony that, had I been around in the 60s, I’d have cheered the countercultural movements that were swiftly co-opted into laying the groundwork for this hyper-individuated mess.
    5. Digital Democracy: Liberation or UX-Managed Delusion? Direct democracy, platform-optimized, promises participation and transparency. I too love this vision, as the dreamers present it! But in the absence of institutions grounded in community—like unions—it risks becoming yet another interface for atomized expression rather than collective agency. César Hidalgo’s well-articulated “bold idea to replace politicians” (TED Talk) may, if we don’t carefully consider its pitfalls, lead to an even further exacerbation of consumer democracy rather than curing us of its malaise.

    In threading historical fractures with contemporary feedback loops, this blog post slices through the past and pokes at the twitching nerves of the present—scalpel in hand, and eyebrow raised. Bernays’s spectral fingerprints smear every push notification and sponsored post. And democracy—emaciated, atomized, and wrung free of solidarity—risks becoming nothing more than a lifestyle subscription in a neoliberal app store.

    Key Ideas in the Making of Consumer Democracy

    The transformation of citizens into consumers — individuals encouraged to fulfill personal desires through market choices — was not some organic evolution of modern life. It was manufactured. Or, more precisely, it was engineered. Behind this metamorphosis lies a century-long campaign of psychological reorientation: one that replaced needs with desires, debate with branding, and civic deliberation with emotional calibration.

    It began, as many things do, with a cigarette. In one of the most iconic PR coups of the 20th century, Edward Bernays — Sigmund Freud’s American nephew and the self-styled architect of public relations — staged a stunt in which women, previously stigmatized for smoking, lit up Lucky Strikes in a 1929 Easter Parade, their cigarettes branded as “Torches of Freedom.” Feminist liberation, it seemed, could be puffed into existence with the right product and enough press photographers. The campaign worked. Women smoked, Lucky Strikes sold, and Bernays cemented his career. More importantly, a precedent was set: consumer goods could be tethered to deep-seated psychological yearnings, particularly those tied to identity and autonomy.

    Bernays didn’t pluck this strategy from thin air. He consulted with psychologists to decode the libidinal subtext of everyday behavior, translating Freud’s theories into a playbook for marketing. As he explained with disturbing candor, the public needed to be managed by an “intelligent minority” who understood the irrational impulses governing the masses. The term “propaganda,” sullied by its wartime associations, was euphemistically rebranded as “public relations.” But the project remained the same: to shape public opinion not through deliberation, but through subconscious nudges.

    As Stuart Ewen (1996) documents, Bernays’ approach helped pivot American consumer culture from a needs-based ethic to one fixated on desire. Goods were no longer things you bought because you required them; they became vessels of personal expression and aspirational fantasy. You didn’t just buy a car — you bought prestige, masculinity, freedom. The locus shifted from fulfilling tangible necessities to cultivating endless wants. And because desire is, by nature, insatiable, this created the perfect consumer: one who is always shopping, always slightly unsatisfied.

    This new creed turned shopping into a quest for identity and happiness — effectively a privatized version of the American Dream. Bernays unabashedly argued that such manipulation was “a logical result of our democratic society” — an “intelligent manipulation of the masses” by a skilled elite to maintain social order. “We are governed, our minds molded, our tastes formed, our ideas suggested, largely by men we have never heard of,” he wrote, referring to the publicists and advertisers pulling the strings. He dubbed this the “engineering of consent,” a technique by which irrational public impulses were to be channeled toward benign ends — like consumer goods or tepid political gestures.

    This shift in the cultural economy foreshadowed what would later metastasize into consumer democracy: a political order where citizens are reframed as market actors and governance becomes another arena for curated choice. If you can choose between 20 brands of cereal, why not 20 versions of political “identity”? The same logic applied, with fewer scruples.

    “If you want to be free … Order yourself an Anarchy Burger … Hold the government, please” — The Vandals. The Vandals’ sarcasm captures the absurdity of a public discourse where rebellion itself is commodified. Freedom? Pick a flavor.

    This wasn’t just about advertising, though. It was an epistemic shift. Bernaysian logic permeated politics, education, and social norms. The citizen, once imagined as a rational participant in democratic life, was reimagined as a bundle of sentiments to be managed. Mass media and marketing would become the new agora. And in this new public sphere, the loudest weren’t those with arguments, but those with the best visuals, slogans, and ad budgets.

    We can trace the consequences everywhere: in the algorithmically-optimized feeds that titillate more than they inform, in the political campaigns that segment voters into microtargeted emotional categories, and in the way public policy is often shaped more by polling than principle. Consumer democracy isn’t just a metaphor. It’s a regime.

    And yet, this trajectory was never inevitable. It was built — in boardrooms, at PR firms, through radio jingles and televised speeches, in moments like Bernays’ parade stunt. The question now is not just how we got here, but what it would take to reimagine citizenship beyond the logic of desire. Can democracy be decoupled from the market’s grip? Or are we stuck asking for tasty lies with our Anarchy Burgers?

    Edward Bernays and the Coup: When PR Became Regime Change

    If the transformation of citizens into consumers marked the cultural front of Edward Bernays’s legacy, the 1954 CIA-backed coup in Guatemala was its political apotheosis — the moment when spin doctoring graduated from product placement to geopolitical subterfuge. Having already helped tether American freedom to commodity choice, Bernays now offered his services to the United Fruit Company, an agribusiness leviathan worried that Guatemalan democracy might be bad for business.

    The problem? Guatemala had elected Jacobo Árbenz, a reformist president who dared to propose redistributing uncultivated land — including a significant portion held idle by United Fruit — back to the people. To Bernays, this wasn’t just a public relations challenge; it was a PR opportunity. His solution: conflate land reform with Soviet-style communism, and dress it all up in the star-spangled language of freedom.

    Armed with a propaganda arsenal that included fake news bureaus, astroturfed media campaigns, and lavishly curated junkets for U.S. journalists, Bernays whipped up an ideological panic. Through his Middle America Information Bureau — a front designed to look like a neutral research institute — he pushed stories framing Árbenz as a Red menace on America’s doorstep. “Every American has a personal stake in our relations with Middle America,” one of their pamphlets warned, helpfully omitting who owned the stakes.

    And the message landed. Opinion pages bristled with anti-Árbenz screeds, Congressional ears perked up, and the Eisenhower administration, stacked with former United Fruit lawyers, saw its cue. The CIA launched Operation PBSuccess, replete with disinformation, psychological warfare, and a phony radio station broadcasting faux rebel victories. Árbenz, politically outmaneuvered and rhetorically buried under Bernays’s avalanche of bullshit, stepped down. A dictatorship took his place. Shareholders exhaled.

    Bernays later claimed to feel betrayed — not by the collapse of Guatemalan democracy, but by United Fruit’s failure to keep him on retainer. He considered himself a casualty. The Father of Public Relations, ever the narcissist, forgot to account for the collateral damage of his craft.

    As Aleksandr Solzhenitsyn once put it, “In our country, the lie has become not just a moral category but a pillar of the State.” Swap “country” for “client,” and you’ve distilled Bernays’s doctrine into a single, bitter aphorism.

    What’s remarkable is not that it worked, but that it became the blueprint — for every modern campaign of manufactured consent, every pundit echo chamber, every corporate-sponsored identity movement. Bernays had shown that it was possible to topple a government using the same techniques that sold cigarettes to debutantes. And in doing so, he didn’t just engineer consent — he commodified it.

    Where propaganda once served empires and war machines, Bernays made it a service industry. The rest, as they say, is branding.

    Wilhelm Reich and the Self-Help Revolution: From Liberation to Lifestyle

    If Edward Bernays weaponized Freud’s psychoanalytic insights to pacify the masses and lubricate the gears of consumer capitalism, Wilhelm Reich tried to turn that same Freudian dynamite into a tool for emancipation. But, as with many utopian blueprints drawn in the margins of modernity, things got weird. Reich wasn’t interested in manipulating unconscious drives to move merchandise or destabilize governments. Instead, he aimed to unleash them, especially those pent-up libidinal energies he believed were being throttled by societal repression. For Reich, it wasn’t the desires themselves that made people sick and submissive—it was their chronic, institutionalized suppression. That repression, he argued, rendered bodies tense, minds neurotic, and entire societies disturbingly ripe for authoritarianism.

    A renegade psychoanalyst ultimately exiled from Freud’s inner circle for being too much even for Freud, Reich was convinced that sexual repression lay at the molten core of fascism. His 1933 polemic The Mass Psychology of Fascism made a then-radical claim: that obedient citizens weren’t born—they were engineered through the nuclear family, compulsory morality, and a culture hellbent on policing pleasure. Freedom, in Reich’s schema, wasn’t a matter of abstract rights—it was somatic. To liberate the mind, you first had to liberate the body. And the revolution, he insisted, wouldn’t be televised; it would be had in bed.

    For decades, Reich’s theories drifted through the periphery of intellectual respectability, dismissed in Cold War America as the ravings of an oversexed European eccentric with a penchant for orgone boxes and conspiracy theories. But by the late 1960s, as American democracy began to show its authoritarian teeth—most infamously in the bloody repression of protest at the 1968 Democratic National Convention in Chicago and the 1970 Kent State massacre—Reich’s writings began to resonate. The New Left, disillusioned by institutional gridlock and police batons, started to pivot inward. If the system wouldn’t change, maybe the self could.

    And I admit, I would have been right there with them—barefoot at Esalen, eyes closed in breathwork, voice hoarse from screaming into the abyss. Not seeing, not yet, that this beautiful refusal to conform was already being eyed as a branding opportunity. That the great uncorking of the soul might come prepackaged in lavender-scented bath bombs and trauma-informed business seminars.

    Thus began what would eventually calcify into the Human Potential Movement—a sprawling, sometimes farcical experiment in self-actualization that blended earnest therapeutic inquiry with a sideshow of New Age antics. It was less a structured insurrection than a fever dream stitched together from breathwork seminars, primal scream marathons, and ecstatic dance circles, punctuated by moments of genuine psychological reckoning and no small number of questionable gurus. Here was a movement that promised liberation through the self, but often delivered little more than spiritual consumerism in a bathrobe.

    This cultural detour birthed a heady fusion of Reichian ideas, New Age mysticism, and pop psychology, embodied in places like the Esalen Institute in Big Sur, California. It was here that encounter therapy, primal scream sessions, breathwork, and naked hot tubs became forms of resistance—or at least attempts to short-circuit the emotional shutdown that late-stage capitalism seemed to induce. Reich’s premise—that personal liberation could catalyze societal transformation—was still there, though now diluted into bite-sized aphorisms and branded workshops. If enough seekers could unblock their emotional constipation, perhaps a new world might emerge. Group therapy as revolutionary praxis.

    But capitalism, ever the shape-shifter, was watching—and learning. What initially seemed like an existential threat to consumer society became, in short order, a branding opportunity. Corporations that had bristled at the counterculture’s anti-materialism quickly realized that expressive individualism was just another market segment waiting to be monetized. One ad executive reportedly nailed the new paradigm: “If they want to be unique, we’ll sell them that too.”

    And sell they did. By the 1970s, “being yourself” had become a monetizable identity package. Werner Erhard’s est seminars offered catharsis via psychological waterboarding—for a fee. Self-help books flew off the shelves. Humanistic psychology, once focused on existential liberation, was repurposed for corporate management retreats. Esalen began hosting not only barefoot mystics, but also Fortune 500 executives in search of spiritual edge. Maslow’s once radical hierarchy of needs was turned into a scaffolding for Theory Y management models, in which “self-actualized” employees were expected to find fulfillment in vision statements instead of higher wages.

    Reich’s rallying cry for psychic liberation was thus commodified, laminated, and sold back to us with artisanal packaging and a subscription model. The revolution he envisioned became the soft-focus aesthetic of boutique wellness. Marcuse’s warning in One-Dimensional Man had come true: “The people recognize themselves in their commodities.” Now, they also recognized their inner selves in yoga pants, mindfulness apps, and startup culture.

    This is the tragicomic punchline of the human potential revolution: it tried to cure alienation through introspection and self-expression, but instead greased the wheels for consumerism’s most insidious evolution. It didn’t dismantle Bernays’s ideological architecture—it added feng shui, playlisted affirmations, and CBD-infused water. What began as a critique of conformity became a manual for lifestyle branding.

    In the end, the soul—like the body, like desire—turned out to be prime marketing terrain. And the more we searched for authenticity, the more it became available… at a markup.

    Charles Taylor and the Ethics of Authenticity: When the Self Became Sacred

    As the Human Potential Movement spun self-actualization into a kale smoothie of spiritual aphorisms and branded transcendence, the Canadian philosopher Charles Taylor stepped in to offer something far less theatrical and commodifiable: a sober philosophical account of what this obsession with the “authentic self” might mean for modern life. In The Ethics of Authenticity, Taylor doesn’t throw the self out with the bathwater. He acknowledges the powerful moral force behind the modern ideal of being true to oneself. Yet he issues a philosophically incisive caution that echoes the pitfalls of the self-actualization movement we just surveyed—namely, that the very structures built to liberate the individual have a way of turning us inward, isolating rather than connecting, and mistaking curated expression for genuine transformation: when authenticity is cut loose from any shared horizon of meaning, it curdles into narcissism.

    Taylor traces this crisis of meaning back to what he calls the “malaise of modernity”—a condition in which individual freedom, stripped of communal anchors, becomes a form of performative solipsism. The self, having jettisoned any reference to tradition, community, or transcendent value, finds itself floundering amid a deluge of options, none of which seem to carry any inherent weight. Amid this ontological drift, the pursuit of authenticity risks becoming a self-licking ice cream cone: pleasurable, fleeting, and ultimately hollow.

    For Taylor, the celebration of inner truth must be tethered to something beyond the self—a web of social and ethical commitments that give shape and substance to personal identity. Otherwise, what passes for authenticity risks becoming little more than aestheticized individualism, or worse, a moral justification for self-indulgence. Cue the influencer who claims their luxury wellness retreat is a form of radical healing. In Taylor’s terms, this is not authenticity; it’s atomization in designer yoga pants.

    What makes Taylor particularly relevant to this story is his insistence that modern identity is dialogical. The self, he argues, is never constructed in isolation. It emerges through our relationships, our languages, our histories. You don’t find your true self by looking inward like a mystic rummaging through a sock drawer. You find it by navigating a world of meanings shared with others—a world that neoliberal consumerism seems hellbent on eroding. He reminds us that meaning is not manufactured from scratch each morning, like a brand’s social media strategy—it is inherited, negotiated, and lived.

    Taylor is particularly scathing about the cultural forces that reduce authenticity to a stylistic preference. The problem isn’t self-expression per se, but the way consumer capitalism has rigged the game so that self-expression is almost always mediated through commodified forms. The marketplace promises individuality, but delivers mass-produced identities in artisanal packaging. Even rebellion gets monetized. The thirst for meaning becomes a sales funnel.

    And here’s the twist of the knife: the more we invest in market-curated forms of expression, the less room we have for genuine dialogue. Expression replaces conversation. Branding replaces belief. The self becomes a performative loop, optimized for engagement but starved of resonance. This is not freedom; it’s just another algorithmic enclosure, decked out in the language of empowerment.

    Taylor’s diagnosis cuts deep. He doesn’t dismiss the modern hunger for authenticity as mere fluff. But he asks a hard question: what happens when that hunger is fed entirely by consumer culture? When every attempt to “be yourself” ends with a targeted ad? If Reich feared the authoritarian personality and Bernays engineered the pliable consumer, then Taylor gives us the final form: the atomized self, polished to a high gloss, free to choose anything—except solidarity.

    In this light, the ethics of authenticity becomes a paradox. We chase freedom through self-expression, but end up reproducing the same structures we hoped to escape. We seek depth, and find curated surfaces. We want to matter, and end up quantified. Taylor offers no easy fix—he’s a philosopher, not a guru—but he does gesture toward a way out: reclaiming authenticity not as a brand, but as a relational practice. That means rediscovering dialogue, recovering shared values, and refusing to reduce meaning to market metrics; saving the self by escaping the selfie.

    If there’s a way out, Taylor suggests, it lies not in rejecting authenticity, but in rescuing it—by reconnecting the self to the social, the expressive to the ethical, and freedom to something beyond an app with a good UX. Only then can we begin to imagine a version of the self that isn’t just algorithmically curated, but existentially rooted—and maybe even politically potent.

    Habermas and Marcuse: When Reason Went Quiet and Desire Took the Mic

    If Charles Taylor mourns the hollowing out of authenticity in a culture of commodified selfhood, Jürgen Habermas and Herbert Marcuse come in swinging with a more system-wide critique: that the entire apparatus of rationality—the Enlightenment’s noble dream of rational discourse, emancipatory dialogue, and democratic deliberation—has been quietly, and elegantly, hijacked. Not by despots or saboteurs, but by the mundane mechanisms of technocracy, consumer capitalism, and algorithmic governance. The public sphere, in their telling, didn’t just wither on the vine from civic apathy—it was methodically evacuated, sterilized, and replaced with a sanitized agora where the only debates permitted are those that boost shareholder value or feed data-hungry platforms.

    Habermas, a cautious optimist in the Frankfurt School lineup, famously developed a model of “communicative rationality”—a space in which genuine democratic legitimacy could be forged through open, inclusive, and reasoned discourse among equals. A kind of Enlightenment 2.0. But even he was forced to acknowledge what he termed the “colonization of the lifeworld”: the slow but relentless incursion of systemic imperatives—market logic, bureaucratic efficiency, institutional self-preservation—into the intimate, meaning-making spaces of human life. Where once families, communities, and cultures served as scaffolding for public reasoning and shared values, we now have feedback loops curated by adtech, and every interaction is another datapoint in a corporate dashboard.

    And Marcuse—less moderate, more mordant. In One-Dimensional Man, he launched an intellectual cruise missile at what he called the “advanced industrial society.” His target? The velvet glove of modern repression: a society that manufactures consent not through coercion, but through a glut of choice, dopamine, and simulated freedom. For Marcuse, the real danger was not Orwellian censorship but Huxleyan saturation—a world in which people are pacified not by fear, but by streaming content, dopamine loops, and the illusion of autonomy.

    What makes Marcuse’s critique so biting is his understanding that rebellion itself can be defanged, aestheticized, and sold back to you as merch. The system doesn’t just tolerate critique; it thrives on it, so long as that critique comes in the form of ironic TikToks, bumper stickers, or artisanal rage candles. Dissent becomes another lifestyle category, and the would-be revolutionaries are gently herded into branded subcultures with just enough edge to feel rebellious but not enough teeth to pose a threat. It’s capitalism’s greatest magic trick: making you feel subversive while you’re still shopping.

    Together, Habermas and Marcuse sketch a bleak but precise portrait of democracy’s twilight under late capitalism. Where Habermas dreams of discursive redemption—of reason reclaiming its democratic throne—Marcuse sees the throne turned into a product placement opportunity. The tension between them is instructive: one still believes in the power of dialogue to rescue the political; the other suspects that dialogue has already been monetized and replatformed as content.

    Both thinkers, however, speak directly to the legacy of Edward Bernays. Bernays replaced deliberation with persuasion, policy with PR. And in the wake of his innovations, democracy has become something you experience—like a vibe or an aesthetic—not something you enact. A mood board, not a method. And here’s where Habermas and Marcuse converge: in their shared horror at the way language, discourse, and desire have been hollowed out, repackaged, and resold as UX features.

    It bears remembering that the backdrop to their critiques was the still-smoldering specter of fascism—not merely as a political catastrophe, but as a cultural pathology. They understood that authoritarianism doesn’t just goose-step in with flags and uniforms—it creeps in through branding guidelines and quarterly earnings reports. When politics becomes a battleground of logos, when civic engagement is reduced to optimized frictionless experiences, we don’t get democracy. We get its simulacrum.

    In this sense, their message is less warning than diagnosis: the erosion of public reason isn’t some unfortunate bug in the democratic operating system—it’s a core feature of a society where persuasion has outpaced reflection, and branding has usurped belief. Unless we recover spaces for meaningful dialogue—spaces not governed by metrics, monetization, or market segmentation—we risk mistaking managed consensus for public will, and algorithmic alignment for democratic agreement. Or as Marcuse might have snarled between drags on his pipe: the revolution will not be televised, but it might be available as a subscription service.

    When Therapy Met the Surveillance State

    Built for introspection and healing, psychology found itself shackled to dark ambitions in the mid-20th century—ambitions that had more to do with control than catharsis. The Cold War conscripted science into a paranoiac arms race, where minds were not sanctuaries but sites for intervention. In the hands of intelligence agencies and corporatized technocrats, therapeutic knowledge became instrumentalized—repurposed for manipulation and weaponized for geopolitical theater.

    Consider Project MKUltra, that Frankensteinian foray into behavioral alchemy, bankrolled by the CIA under the dubious pretext of fending off Soviet “brainwashing.” In its fever-dream logic, universities, hospitals, and mental wards were transformed into covert laboratories, where the patient was no longer a subject but a substrate. At the epicenter of this lunacy stood Dr. Donald Ewen Cameron, conductor of the infamous “Montreal Experiments.” Cameron wasn’t content with inkblots and insight. He preferred to reduce patients to pulp via weeks-long drug-induced comas, high-voltage electroshocks, sensory deprivation, and audio loops of his own voice—a regimen he termed, with chilling euphemism, “psychic driving.”

    His goal? Not healing, but obliteration: erase the psyche to remake it anew, like a state-sanctioned phoenix rising from pharmacological ashes. What he achieved instead was a grotesque parody of rebirth—patients emerged cognitively mangled, emotionally hollowed, and permanently detached from the contours of their former lives. These weren’t therapies; they were institutionalized crucibles of torment, cloaked in the lab coat of legitimacy. The full grotesquerie didn’t leak into public view until the 1975 Church Committee hearings, by which time the moral rot had already metastasized through the bureaucratic bloodstream.

    MKUltra failed to summon the philosopher’s stone of mind control, but it did confirm a dark truth: when psychological frameworks are yoked to unchecked power, they become artisanal tools of violation. Citizens became test subjects, and the sovereign state moonlighted as a rogue therapist with an electroshock fetish. LSD trials on unknowing civilians, coercive hypnosis, behavioral modification protocols—“One morning, as Gregor Samsa was waking up from anxious dreams, he discovered that in his bed he had been changed into a monstrous bug…” — Franz Kafka, The Metamorphosis. As CIA psychologist John Gittinger later admitted, they’d been “chasing a phantom”—a revelation made with the air of someone returning a defective blender, not recounting a state-sponsored epistemological horror show.

    In Hollywood, psychoanalysis made its awkward cameo as both therapist and stage parent: Ralph Greenson, Anna Freud’s most devout disciple and LA’s go-to shrink for the emotionally flammable elite. Greenson wasn’t just offering Marilyn Monroe therapy—he pitched her a kind of reality show for the damaged psyche. He moved her into a house styled like his own, inserted her into his family dinners, and cast himself as her bespoke father figure in a domestic cosplay of mental health. His theory, ripped from the Freudian orthodoxy, was that if you could simulate “normality” long enough, maybe the ego would stop screaming. But Monroe didn’t need a stage set—she needed help. And when she died by suicide in 1962, the idea that conformity cures existential despair died with her.

    The psychoanalytic establishment, intoxicated with its own cultural cachet, suddenly looked less like liberators of the soul and more like the behavioral janitors of postwar America. Arthur Miller, Monroe’s ex-husband and resident conscience of mid-century angst, was unamused. In a 1963 interview, he blasted the therapeutic zeitgeist for trying to medicate suffering out of existence—as if anguish were a glitch in the system rather than a source of philosophical insight. Happiness, he argued, had been redefined as a kind of lobotomized placidity. Therapy had gone from Socratic inquiry to emotional sandpapering. It wasn’t about helping people live with their pain, but about turning them into docile consumers of the American Dream—fully adjusted, fully sedated, and fully pacified. Psychoanalysis, once pitched as a pathway to freedom, had become a velvet straitjacket for the modern soul.

    Where Freud meticulously charted malaise as a byproduct of inner discord—conflicted drives, repressed desires, the baroque soap opera of childhood—Marcuse took one contemptuous look at society and said: maybe the sickness isn’t in us, but in the world we’re told to adapt to. “It’s not the people who are broken; it’s the system that is broken.” —John Trudell. To Marcuse and Trudell, adjustment wasn’t maturity; it was capitulation in drag. The Freudian ego, once celebrated as the civil engineer of the psyche, became in Marcuse’s hands a glorified hall monitor, dutifully enforcing norms designed to keep the consumer-citizen docile, productive, and numb.

    Dr. Neil Smelser—equal parts political theorist and psychoanalyst—offered the boiled-down version: in Marcuse’s view, adaptation had ossified into surrender. Normalcy? A scar in the shape of submission. The well-adjusted were not paragons of psychic health but polished accomplices to a pathological system.

    In 1967, Martin Luther King Jr. took that logic and gave it a preacher’s cadence. “There are some things in our society and some things in our world to which I am proud to be maladjusted,” he thundered. Racism, economic exploitation, spiritual lobotomy—he wasn’t having it. His was a gospel of productive nonconformity, a public theology that framed maladjustment as a form of moral clarity. He wasn’t asking for serenity. He was demanding rupture.

    In the halls of psychoanalysis, the practice that once fancied itself as a scalpel for liberation had become, in the hands of its custodians, a mechanism of haute-bourgeois domestication. Anna Freud, ensconced in the Freud family manse in Maresfield Gardens, kept the orthodoxies alive like relics in a reliquary, even as the world outside grew louder, more volatile.

    But no amount of fidelity to theory could keep the shadows at bay. Dorothy Burlingham, Anna’s lifelong confidante and collaborator, watched her own family unravel. Her son, Bob, drank himself to death. Her daughter, Mabbie, returned to the Freud household for yet another round of analysis—and never left. In 1973, she overdosed on sleeping pills in Sigmund Freud’s own house. It was a tragic coda and a damning commentary on a method that promised catharsis but delivered recursive paralysis. A suicide in the sanctum sanctorum of the talking cure: not accident but posthumous critique.

    In the end, perhaps maladjustment wasn’t pathology—it was prophecy. The couch, built as a crucible of insight, became instead a velvet reformatory. And the psychoanalytic establishment, busy decoding dreams, failed to notice that it had become one.

    Meanwhile, Madison Avenue had begun sniffing around psychology’s dark alleys. Vance Packard’s 1957 exposé The Hidden Persuaders laid bare how Freudian theory was being remixed for commercial ends—less Oedipus, more impulse buy. Advertisers weren’t interested in your conscious needs; they were spelunking your psyche for triggers, aspirations, neuroses. They’d studied your inner child—and sold it candy bars, insurance, and the illusion of sexual prowess in a sedan.

    Ironically, as public trust in institutions eroded under the weight of these revelations, a backlash emerged—one I’ve detailed above, in the rise of the human potential movement. But there, the ouroboros of co-option spun on. Techniques designed to awaken selfhood were eventually reabsorbed by the same market logics they hoped to escape. Behaviorism and psychographics became the new lingua franca of politics and commerce. What MKUltra failed to brute-force into being, Silicon Valley and Madison Avenue would quietly finesse through UX and sentiment analytics.

    The journey from the couch to the control room was neither linear nor innocent. It was a Möbius strip of therapeutic jargon, bureaucratic paranoia, and market opportunism. And it raises the question: when the language of healing becomes the syntax of coercion, who’s really holding the clipboard—and who’s on the couch?

    How the Me Generation (self-actualizers) Elected Reagan

    The 1970s had been all about chakras, inner children, and the long march through the self. But by the time the 1980s rolled around with shoulder pads and supply-side economics, that soul-searching vocabulary had been hijacked by ad execs and political consultants. A group of behavioral researchers tossed introspection aside like a pair of bell-bottoms gone out of style and whipped up VALS—the Values and Lifestyles system. VALS was a glorified mood ring for the market’s collective psyche. Instead of probing psyches for liberation, they dissected them for profitable patterns, repackaging the human condition into psychographic buckets with all the nuance of a horoscope in a shareholder report. Demographics? Not enough. They wanted your longings, your paranoias, your kale cravings and your secret Ayn Rand fantasies. This wasn’t marketing. This was psychic colonization.

    Amina Marie Spengler, one of the VALS program leads, put it plainly: it wasn’t just about behavior—it was about decoding the inner scaffolding of desire. What kind of house you lived in, what kind of car you drove, what sort of aesthetic you confused with ethics. Once your values were mapped, your consumer profile—and apparently your political preferences—could be reverse-engineered with startling accuracy. The golden goose? The self-actualizers—those post-’60s seekers of personal growth, yoga mats, and existential clarity. What no one saw coming, not even their therapists, was that these same inner-directed types would become the unexpected foot soldiers of Reaganomics.

    In 1980, Ronald Reagan, former actor and current mouthpiece for laissez-faire romanticism, launched a campaign that sounded less like conservative dogma and more like a Tony Robbins seminar. His team, speechwriters like Jeffrey Bell included, abandoned the usual red-meat rhetoric for something with a New Age aftertaste: Let the People Rule. Cut the bureaucratic fat. Regain control over your destiny. Cue the incense.

    Traditional pollsters scratched their heads. Reagan was polling well, but not along the familiar class, race, or generational lines. The VALs team, meanwhile, knew exactly what was happening: Reagan was seducing the self-actualizers. These weren’t your polyester-wrapped country-club Republicans. They were yoga-practicing, Whole Earth Catalog–reading, self-improvement junkies. And yet, when asked who they were voting for, they said: Reagan.

    Christine MacNulty, part of the SRI team, recalled the collective gasp. Inner-directed voters weren’t supposed to go right—they were supposed to be sensitive, progressive, tofu-eating liberals. But if the language of personal freedom and anti-establishment cool was coming from the Gipper, well, that was close enough. Who needs labor unions when you’ve got self-fulfillment and a cowboy savior on horseback?

    Thatcher in the UK followed the same script. Both leaders, advised by data wonks armed with psychological diagnostics, bypassed traditional class lines entirely. It was the first true marriage of psychographics and politics, and it worked. The self-actualizers swung the elections, blind to the irony that the system they once sought to escape had learned to speak their dialect of authenticity.

    By 1981, Reagan was in office and the economy was tanking. Industrial America was coughing up rust, unemployment was through the roof, and the state retreated like a therapist who just decided you should really learn to cope on your own. But lo! The self-actualizers, now rebranded as the driving force of the “new economy,” were buying Apple IIs, enrolling in management seminars, and helping to build the privatized paradise they thought would set them free.

    What began as a quest for personal enlightenment had become, with algorithmic precision, the engine of personal entitlement. If Bernays taught us how to sell cigarettes to feminists, VALS showed how to sell neoliberalism to the children of the counterculture.

    Margaret Thatcher – Conservative Party Conference 1975:

    Some socialists seem to believe that people should be numbers in a state computer. We believe they should be individuals. We’re all unequal. No one thank heavens is quite like anyone else, however much the socialists may pretend otherwise, and we believe that everyone has the right to be unequal. But to us every human being is equally important. A man’s right to work as he will, to spend what he earns, to own property, to have the state as servant and not as master—they are the essence of a free economy. On that freedom all our other freedoms depend.

    Thatcher’s creed was not just neoliberalism—it was psychographic populism with a stiff upper lip. Her Britain would not be rebuilt through solidarity or industry, but through the sacred calculus of personal preference. The market would become the therapist, the ballot box, and the moral compass. In this brave new order, the advertising and marketing industries flourished. Their mission? To perform psychoanalytic exegesis on consumer desire. Find out what people really wanted—then weaponize it.

    Focus groups were the new séance circles. Instead of channeling ghosts, researchers conjured up lifestyle archetypes through the tools of psychotherapy. They probed anxieties, aspirations, and affective triggers with all the subtlety of a clinical intervention. And what they found wasn’t just a market segmentation—it was a cultural reorientation.

    Among first-time Tory voters in 1979, the trend was unmistakable: the old scripts of class allegiance were fraying. These weren’t just working-class converts—they were proto-self-actualizers who had traded collective identity for curated individuality. They didn’t want to be defined by a class—they wanted to express themselves. And what better medium than their purchasing habits?

    The marketeers, drunk on their own analytics, amplified this phenomenon. Consumer goods became semiotic badges in a culture war dressed up as a shopping spree. The political became personal, and the personal was monetized. Thatcher’s real genius wasn’t in governance, but in turning selfhood into SKU numbers. The neoliberal subject wasn’t born in Parliament—it was midwifed in the focus group.

    While the right rebranded selfishness as liberation, the left clung to an older gospel: human progress required solidarity, not solipsism. The better society, they insisted, couldn’t be built by indulging every neurotic whim of the individual psyche, but by persuading people that they were part of something bigger than themselves — a collective, with shared stakes and common cause, call it civilization? It wasn’t about actualizing your inner child, but organizing with your neighbors.

    Franklin Delano Roosevelt, in the wake of capitalism’s 1929 faceplant, didn’t tell Americans to “follow their bliss.” He told them to get to work — together. “The only thing we have to fear is fear itself,” he intoned in 1933, a line that would’ve been slaughtered by today’s speechwriters for lacking A/B test appeal. Yet it landed, because it called on courage, not consumer preference. FDR’s New Deal didn’t pander to vibes — it built unions, fostered cooperatives, and taxed the ever-loving hell out of the idle rich. It was a moral economy with a spine, not a curated experience.

    For half a century, this ethos was the backbone of the Democratic Party. But by the 1980s, it was sounding quaint — like a dusty LP in Reagan’s sleek, Dolby-optimized America. As the Gipper preached laissez-faire like it was divine law, Roosevelt’s heirs were left making speeches that sounded increasingly like eulogies.

    Take Mario Cuomo, who at the 1984 Democratic Convention summoned his best righteous thunder: “There is despair, Mr. President, in the faces you don’t see…” It was eloquent, yes — but it was also desperate. Compassion was being out-marketed by cool indifference.

    Later, Cuomo would put it more bluntly: Reagan’s real innovation wasn’t trickle-down economics. It was trickle-down callousness — the moral sleight-of-hand that turned cruelty into virtue. Don’t want to help the poor? Don’t worry — they chose poverty. Can’t stomach a welfare state? Just tell yourself you’re promoting “personal responsibility.” Reagan didn’t yell it; he smiled it. With all the dulcet tones of a grandfatherly sociopath, he rebranded selfishness as civic duty.

    Hope, Briefly Resurrected—and Quickly Market-Corrected

    Even amidst the rise of polling sorcery and psychographic puppeteering, there were still some believers—earnest, maybe deluded—who thought they could harness the machine without becoming it. Robert Reich, Clinton’s Secretary of Labor and resident moral compass, later reflected on this moment with a kind of tragic wistfulness. Campaigns had long been pre-packaged, he admitted, but this was “packaging at a new level”—candidates reverse-engineered from focus groups, as if democracy had become a build-your-own Barbie workshop, complete with catchphrases tailored to your deepest consumer cravings.

    But Reich and the Clinton brain trust—James Carville, George Stephanopoulos, and the rest of the Southern-fried war room—didn’t see themselves as sellouts. To them, the tax cuts and middle-class pandering were just the cost of entry, a bait-and-switch to win back Reagan Democrats. Once in power, they’d execute the real play: tax the rich, trim defense, and reinvest in things that actually mattered—health care, education, a social safety net stitched back together after it had been slashed to ribbons.

    The plan sounded clever—an elegant triangulation between the old moral compass and the new consumer circuitry. But reality, as it often does, smacked back. By January ’93, just weeks after Clinton’s inauguration, his administration was summoned to a fiscal come-to-Jesus moment. The deficit was ballooning. The bond markets, that new invisible hand of American governance, would not tolerate a borrowing spree. There would be no money for universal health care unless they gutted spending, not just on tanks, but on the poor.

    Faced with a choice between his New Deal ideals and the actuarial reality of Wall Street’s mood swings, Clinton blinked. The tax cuts were axed. The dream of rallying inner-directed suburbanites behind his Rooseveltian ideals began to unravel.

    Reich recalled that brief flicker of ambition—to “lift the public” and speak to something beyond atomized desire—as if recounting a ghost story. Universal health care? Childcare? Ending homelessness? All noble, all necessary—and all poll-tested into oblivion. The electorate, it turned out, didn’t want a moral awakening. They wanted a better UX for their lives, not empathy. And so, the grand Democratic vision collapsed into the logic of incrementalism, as politics gave way, once again, to the cold calculus of market satisfaction.

    The Rise of Neuro-Politics

    The middle-class swing voters Clinton had wooed with triangulated charm and vague centrist vibes were not, as it turned out, in it for any higher ideals. They came for the tax cuts and left when he brought out the New Deal reruns. Feeling jilted by Clinton’s rhetorical pivot back to collective ideals, they staged their revenge in the 1994 midterms—flipping both houses of Congress red and handing Newt Gingrich the reins with a mandate to torch the welfare state and put tax cuts back on the menu. It was a shellacking. And with a hostile legislature, Clinton’s big-ticket reforms were dead.

    Dick Morris was swiftly and quietly hired, without even informing Clinton’s cabinet. If American politics were an airplane going down in flames, Morris billed himself as the oxygen mask, life preserver, and emergency evacuation slide all in one. His diagnosis? Clinton had committed the cardinal sin of treating voters like citizens instead of customers.

    To save his presidency, Morris insisted, Clinton had to jettison whatever ideological ballast was still onboard. The era of Big Government was over; the new swing voter was a hyper-individualized, neurotic consumer whose loyalty could be earned not by vision but by targeted satisfaction. Politics, in Morris’s schema, wasn’t about persuasion or leadership—it was customer service. Voters weren’t a public; they were a market segment. And winning them over required the same approach a shampoo brand might use to woo millennial men with flaky scalps and commitment issues.

    So Morris turned to Penn & Schoen, a market research firm with a taste for psychographic profiling and corporate tinkering. The result was what they branded a “neuro-personality poll”—a sprawling exercise in psychological voyeurism, where political questions were almost beside the point. Instead, swing voters were X-rayed for quirks, affective triggers, and lifestyle habits. Did they make lists? Were they planners or impulsive? Life of the party or weekend introvert? What would their dream date involve?

    It sounded absurd—and it was. But it also worked. Mark Penn, Clinton’s data whisperer, found that swing voters clustered into identifiable psychological tribes. The trick wasn’t to appeal to their political identities—those were thin, provisional. What mattered was their affective style, their self-image, their aspirational mirror. If you could tap into that, you didn’t have to lead them anywhere. You just had to sell them back to themselves, wrapped in presidential branding.

    This was politics as behavioral microtargeting—an emotional Ouija board where public opinion was summoned, not shaped. It was also the logical conclusion of consumer democracy. If earlier campaigns had repackaged political ideas to suit mass taste, Morris’s approach bypassed ideas altogether. In their place: sentiment parsing, lifestyle mimicry, and a governing philosophy cribbed from the basement labs of Madison Avenue.

    The body politic had become a focus group. And the presidency? Just another influencer account optimized for engagement.

    Algorithmic Puppet Shows

    If Dick Morris and Mark Penn were the beta test, Silicon Valley was the full launch. What began with psychographic polling and Clintonian triangulation morphed into something far more insidious: algorithmic governance masquerading as digital democracy. We’ve entered the era of political UX design—where elections aren’t won in town halls but in server farms, A/B testing dopamine hits on a million micro-audiences.

    Social media torched the map, uploaded a new one, and fed it back to us via a “For You” page. Platforms like Facebook, YouTube, and what’s now ominously rebranded as X have recoded political discourse into engagement metrics. Newsfeeds aren’t neutral—they’re curated petri dishes of confirmation bias, optimized to keep us scrolling, clicking, and, most importantly, not thinking too hard. Welcome to the epistemic funhouse, where everyone gets their own bespoke reality, and truth is whatever earned the highest click-through rate.

    In this brave new feedback loop, politics becomes less about shared ideals and more about behavioral manipulation—predictive analytics masquerading as civic dialogue. The recommendation algorithms don’t care whether you’re becoming more informed or just more outraged—they just want you to stay. And if that means radicalizing you into a flat-earth cryptocurrency cult that thinks the moon landing was woke propaganda, so much the better. The machine has its metrics.

    The puppet-masters: political actors who figured out that these platforms weren’t just communication tools—they were weapons. According to a 2019 Oxford study, coordinated social media manipulation campaigns had been identified in over 70 countries—up from 28 just two years earlier. And it’s not just Russia or Myanmar. In democracies too, parties embraced computational propaganda with the zeal of a start-up pitch deck: bots, fake accounts, deep-faked enthusiasm, all driven by what one report calls the “triple threat” of algorithms, automation, and big data. If Bernays once bragged about engineering consent, these guys industrialized it.

    Of course, there’s a psychological infrastructure behind this digital shell game. Social media platforms are built on the bones of behavioral science—exploiting heuristics like social proof, outrage bias, and variable rewards. Likes, shares, rage-clicks: these are the psychic slot machines we pull every time we log on. And political campaigns, ever the opportunists, learned to ride the wave. Remember when Facebook ran experiments to boost voter turnout by telling you your friends had voted? That was the friendly face of digital nudging. The less friendly face? Cambridge Analytica.

    Cambridge Analytica: Frankenstein’s monster of psychographic marketing. They claimed to have harvested the psychological profiles of millions of Facebook users without consent and used that data to craft hyper-personalized political ads tuned to traits like neuroticism, conscientiousness, and probably astrological sign if they thought it would convert. Trump’s 2016 campaign reportedly tested over 175,000 ad variations on Facebook alone. That’s not persuasion; that’s behavioral carpet bombing.

    Was it effective? Depends on who you ask. But studies suggest it worked well enough to “significantly increase” turnout and support in key demographic slices. Which means we’ve now reached the next level of consumer democracy: a system where voters are not just marketed to, but modeled, nudged, and emotionally manipulated by invisible hands puppeteering algorithmic marionettes.

    The ethical implications? Informed consent—the supposed bedrock of democracy—becomes a quaint fiction when voters are profiled like credit risks and fed precisely the misinformation they’re most vulnerable to. Civic life becomes an elaborate psy-op, with citizens no longer deliberating but reacting, no longer choosing but being subtly herded. And the worst part? Most of them have no idea it’s happening.

    By now, political campaigns treat social media like their main stage, borrowing directly from the gospel of Madison Avenue. From Obama to Trump, presidential hopefuls have embraced behavioral segmentation and microtargeted messaging with the fervor of a growth-hacking startup. Obama’s 2012 reelection machine ran thousands of A/B tests on Facebook and email. Like lab technicians, they fine-tuned messages for maximum click appeal. His team used data not to understand voters, but to anticipate and engineer their reactions—less civics, more sentiment mining.

    Trump’s 2016 campaign, the brutish innovator, simply mainlined the model: $44 million channeled into Facebook’s ad system to blast out bespoke propaganda cocktails, tailored by zip code, hobby, and paranoia level. Digital director Brad Parscale bragged that Facebook embedded its own staff in the campaign to grease the algorithmic wheels—customer service with regime-change potential.

    It “boosted engagement.” But at whose cost? When campaigns whisper tailored promises into a million ears and say something different to each, public discourse atomizes into a cacophony of contradictions. Surveillance capitalism in campaign mode doesn’t just sell you toothpaste—it sells you the new democracy, partitioned and personalized, like a Spotify playlist of civic illusions. As Adam Curtis observed of the ’90s, this is no longer politics—it’s a feedback loop of primitive impulses and curated self-interest, where the voter is no longer a citizen, not even a consumer, but a pliable node in a psychographic schema.

    The Hopes for Digital Democracy

    Just when it seemed that algorithmic psychographics and dopamine-engineered misinformation had buried democracy under a landslide of A/B-tested pandering, a countercurrent began to rise: the techno-optimists with their open-source hearts and hackathon brains. vTaiwan, for example, is a civic experiment launched in 2015 that drags participatory democracy into the digital age—without turning it into yet another app selling your rage clicks.

    vTaiwan is an earnest attempt to resuscitate deliberative politics by fusing civic ritual with open-source process design. Spearheaded by Taiwan’s Digital Minister Audrey Tang and a band of activist coders, the project seeks to channel the “wisdom of crowds” without succumbing to the chaos of the comment section. Using Pol.is—a tool that clusters participants by shared views but bars direct replies (a clever ban on flame wars)—the platform visualizes emerging consensus rather than stoking performative conflict.

    In practice, this looks like crowdsourced legislation with friction points. vTaiwan didn’t just “listen to the people”—it forced them to synthesize. One early win? Mediating between Uber evangelists and legacy taxi unions, producing regulations on rider safety and insurance that both sides accepted. A miracle in an age when even brunch plans can lead to factionalism.

    But techno-democracy, like any good beta test, came with caveats. For one, vTaiwan wasn’t binding. It could suggest, but not compel. Think of it as an advisory oracle—helpful, yes, but lacking teeth. Then there’s the digital literacy gap: while vTaiwan’s sleek interface may enchant the coder class, it can leave behind the elderly, rural populations, or anyone not fluent in civic-tech dialect. The risk? A participatory elite that replicates the exclusion it aims to dismantle.

    Taiwan tried to mitigate this with a simpler tool—Join.gov.tw, an e-petition platform with a lower barrier to entry. But even that had its quirks. Low signature thresholds turned it into a kind of populist jukebox, where niche interests could demand government responses without serious deliberation. Sometimes, democracy-by-click can look suspiciously like democracy-by-clout.

    Good intentions in participatory design have a history of careening into well-meaning disaster. We can learn from the idealists at Esalen, who thought encounter groups could solve racial tension with some hand-holding and primal screams—only to discover that, when misapplied, therapeutic openness can unravel into psychological wreckage.

    Digital platforms face worse vulnerabilities. Without safeguards, they’re ripe for hijack: trolls, bots, and astroturfers can bend the arc of public input toward chaos. Worse, governments can stage-manage the entire show, turning crowdsourced consultation into legitimized theater. Want a rubber-stamped mandate? Just let a few sock puppets vote for it.

    So yes, digital democracy has potential—when it’s designed with care, girded with legitimacy, and immunized against co-option. But code, however elegant, is no substitute for political resolve—or for the stubborn work of grassroots organizing and communal scaffolding. The fantasy that we can engineer democracy through UX design alone collapses into a familiar techno-utopian mirage. Cultivating citizens requires civic platforms that foster deliberation, but such platforms presuppose a populace already disposed toward the common good. It’s a Möbius loop of democratic development. Good code won’t make good citizens, any more than Photoshop makes good art. And if done poorly, digital democracy risks becoming a cathartic simulacrum—easily hijacked by special interests or wielded as a legitimizing fig leaf by elites. The ethical imperative is to ensure that these platforms empower a genuinely broad public, not just give “the crowd” a button to press while the real decisions get made elsewhere. Done right, digital democracy could begin to approximate the kind of rational, inclusive public sphere that Habermas once dreamt of—rather than just another echo chamber with better font choices.

    Conclusion: The Self, the Market, and the Struggle for the Political Imagination

    Over the past hundred years, the vocabulary of the psyche migrated from the couch to the campaign trail, from the analyst’s office to the advertiser’s storyboard. Freud’s analytic toolkit—built to unearth the repressed detritus of bourgeois neurosis—was repurposed by his nephew Bernays to engineer consent, sell Lucky Strikes, and topple governments on behalf of fruit companies. What began as an inquiry into the human soul ended up as a user manual for manipulating it.

    And here we are. Living in the full saturation of that legacy: where personal “brands” eclipse political ideologies, where Instagram therapy coexists with algorithmic demagoguery, and where the average citizen is treated less like a co-author of the commons and more like a neurotic consumer with a dopamine deficit. This blog post has followed the breadcrumb trail from Bernays to behavioral polling, from the human potential movement to microtargeted psychographics, from Reich’s screams to Zuckerberg’s News Feed. If democracy was once imagined as a forum for deliberation, it is now a feedback loop optimized for engagement.

    The throughline is brutally simple: the self—once a site of liberation—became a site of capture. Not by force, but by design. The elevation of the individual from economic unit to sacrosanct consumer-citizen has created a paradox: the more we center personal fulfillment as the telos of public life, the easier it becomes for systems of power to shape, simulate, and sell that fulfillment back to us. Like good consumers, we confuse desire with freedom and mistake curated options for genuine choice.

    Restoring the democratic imagination will require more than media literacy workshops and algorithmic transparency (though we’ll take those too). What’s needed is a reinvention of civic life that doesn’t pander to “authenticity” as a vibe, but asks harder questions: How are our preferences formed? Who benefits from our confusion? Can we design systems that not only reflect but also elevate our better selves?

    Experiments like vTaiwan suggest a different path is possible—one where technology is wielded not to stoke the id, but to scaffold the demos. Where participation doesn’t mean being herded into niche echo chambers, but stepping into a common space of negotiation. But this vision hinges on a hard precondition: that we remember democracy is not a service but a practice. It doesn’t work when it’s outsourced or completely automated. And it certainly doesn’t flourish when it’s reduced to a personality quiz or a shopping cart.

    As Charles Taylor reminds us, the point isn’t to reject authenticity, but to rescue it—from the flattening logic of the market, from the therapeutic haze, and from the weaponized sentimentality of consumer politics. An ethics of authenticity must bind selfhood to solidarity, not just self-expression. Otherwise, we remain pliable data points in someone else’s dashboard.

    References

    1. Bernays, E.L. (1928) Propaganda. New York: Horace Liveright.

    2. British Broadcasting Corporation (BBC) (2002) The Century of the Self [Television series]. Directed by Adam Curtis. United Kingdom: BBC Two.

    3. Curtis, A. (2002) The Century of the Self [Television series]. United Kingdom: BBC Two.

    4. Makari, G. (2008) Revolution in Mind: The Creation of Psychoanalysis. New York: HarperCollins.

    5. Marcuse, H. (1964) One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society. Boston: Beacon Press.

    6. Habermas, J. (1962) The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Cambridge, MA: MIT Press.

    7. Mostegel, I. (2019) ‘Edward Bernays: The Original Influencer’, History Today, 69(8). Available at: https://www.historytoday.com/archive/feature/edward-bernays-original-influencer (Accessed: 24 March 2025).

    8. Colabella, A. (2022) ‘Social media algorithms & their effects on American politics’. Fung Institute, UC Berkeley. Available at: https://funginstitute.berkeley.edu/news/social-media-algorithms-effects-american-politics/ (Accessed: 24 March 2025).

    9. Human Givens Institute (2004) ‘Interview with Adam Curtis on The Century of the Self’. Available at: https://www.hgi.org.uk/resources/delve-our-extensive-library/interviews/adam-curtis-century-self (Accessed: 24 March 2025).

    10. Oxford Internet Institute (2019) ‘Use of social media to manipulate public opinion now a global problem’. Available at: https://www.oii.ox.ac.uk/news-events/news/use-of-social-media-to-manipulate-public-opinion-now-a-global-problem/ (Accessed: 24 March 2025).

    11. University of Warwick (2018) ‘Politics in the Facebook Era: Evidence from the 2016 US Presidential Elections’. Phys.org. Available at: https://phys.org/news/2018-04-politics-facebook-era-evidence-presidential.html (Accessed: 24 March 2025).

    12. Bertelsmann Stiftung (2022) ‘Trailblazers of digital participation: Taiwan’s Join platform and vTaiwan’. Available at: https://www.bertelsmann-stiftung.de/en/our-projects/learning-from-taiwan/news/trailblazers-of-digital-participation-taiwans-join-platform-and-vtaiwan (Accessed: 24 March 2025).

    13. Horton, C. (2018) ‘The simple but ingenious system Taiwan uses to crowdsource its laws’, MIT Technology Review. Available at: https://www.technologyreview.com/2018/07/10/239291/the-simple-but-ingenious-system-taiwan-uses-to-crowdsource-its-laws/ (Accessed: 24 March 2025).

    14. Isaddictedtothemusic (2011) ‘Charles Taylor and Authenticity’. Isaddictedtothemusic [Blog]. Available at: https://isaddictedtothemusic.wordpress.com/2011/05/21/charles-taylor-and-authenticity/ (Accessed: 24 March 2025).

    15. Politico (2010) ‘The dirtiest word in politics’. Available at: https://www.politico.com/story/2010/06/the-dirtiest-word-in-politics-038667 (Accessed: 24 March 2025).

    16. Wikipedia (n.d.) ‘Montreal experiments’. Available at: https://en.wikipedia.org/wiki/Montreal_experiments (Accessed: 24 March 2025).

    17. Nordics.info (2021) ‘Iceland’s Constitutional Revision and the “New Constitution”’. Available at: https://nordics.info/show/artikel/icelands-constitutional-revision-and-the-new-constitution (Accessed: 24 March 2025).

  • Is It Really Human Nature—or Are We Programmed to Conform?

    Introduction

    Is our attraction to echo chambers simply “human nature,” or can technology channel more expansive instincts? Pundits often treat homophily—the pull toward like-minded peers—as an unavoidable fact of life, claiming it dooms us to online spaces that reinforce our biases. Yet history, as well as modern research, reveals a broader repertoire in the human psyche. Our species can slip into insular self-affirmation, but we also respond—under the right norms and social designs—to the excitement of genuine debate. The question isn’t whether we’re forever stuck with echo chambers, but whether we will allow them to dominate our public squares.

    Last week, in Breaking the Echo Chamber: A Blueprint for Authentic Online Deliberation, I argued that big platforms harness our yearning for agreement and feed it back to us in a cycle of algorithmic reinforcement, all while calling it “free speech.” “None are more hopelessly enslaved than those who falsely believe they are free.” — Johann Wolfgang von Goethe. That cycle enriches the owners of such platforms, but smothers the critical friction that fosters democratic growth. Homophily isn’t an iron law: it’s an easy inclination that can harden into groupthink if we reward outrage and tribal loyalties. Yet the same “human nature” has shown itself capable of building societies (like the Iroquois Confederacy or the egalitarian San) that deliberately broaden horizons rather than narrow them. “Love, friendship and respect do not unite people as much as a common hatred for something.” — Anton Chekhov. Today, we find ourselves as architects drafting the blueprint of our digital society: will we replicate the familiar structures that entrench techno-feudal lords, or innovate designs that illuminate our innate curiosity, embrace nuance, and foster collaborative bridges across diverse perspectives?

    In this post, I’ll present my vision of how we can nurture that “love of difference,” or heterophily, even within a milieu saturated with insular echo chambers. Drawing from cognitive science, social psychology, and anthropological evidence, we’ll see that humans are not stuck with one deterministic script–we should always be suspicious when the specter of human nature is summoned. And I’ll highlight a few present-day experiments, including our ManyFold platform, that strive to harness these potentials—elevating reasoned debate above the churn of viral outrage. If you’re tired of hearing “it’s just human nature” used to shrug off divisiveness, read on. There is ample proof that our nature holds more promise, if only we dare to cultivate it.

    The Psychological Basis of Homophily

    From a psychological standpoint, homophily has deep roots. Humans evolved in tribes where trust and survival often depended on sticking with “our own.” This legacy is evident in cognitive biases that lead us to favor information and people that confirm our pre-existing views. Confirmation bias causes us to seek and remember evidence that supports what we already believe while dismissing contrary information (Nickerson, 1998). In group settings, these tendencies can be amplified. The classic Asch conformity experiments demonstrated how people will even deny the evidence of their senses to align with a unanimous group opinion (Asch, 1955). In Solomon Asch’s studies, participants asked to judge line lengths went along with an obviously wrong consensus in 37% of trials, showing the powerful pull to conform (Asch, 1955). Our social brains dread being the odd one out – a fear that can keep us circling in comfortable consensus.

    Group identity dynamics further reinforce homophily. Henri Tajfel’s “minimal group” experiments in 1970 showed that simply dividing strangers into arbitrary groups (e.g. by a coin flip) was enough for them to exhibit in-group favoritism, preferring members of their group even at cost to others (Tajfel, 1970). In other words, we easily slip into “us vs. them” mindsets, favoring those who share our label or worldview. This helps explain why echo chambers – environments where we only encounter agreeing voices – feel so natural. “Ideas don’t matter, it’s who you know.” — Dead Kennedys, “Chickenshit Conformist” (1986). Being surrounded by similar others affirms our identity and shields us from the cognitive dissonance of conflicting information. It’s comfortable, but also limiting. “It’s often safer to be in chains than to be free.” — Franz Kafka. Psychologist Irving Janis famously showed how cohesive groups can fall into groupthink, ignoring warnings and alternative ideas to preserve unanimity, often with disastrous results (Janis, 1982). We’ve all seen how online communities or friend circles can develop a kind of tunnel vision, reinforcing their own biases in a feedback loop. “In individuals, insanity is rare; but in groups, parties, nations and epochs, it is the rule.” — Friedrich Nietzsche.

    Extreme cases underscore how conformity to group roles and norms can override individual judgment. The Stanford Prison Experiment is a chilling example: in 1971, psychologist Philip Zimbardo randomly assigned perfectly average young men to be “guards” or “prisoners” in a mock prison – and within days the guards became cruel and the prisoners submissive, internalizing their group roles to an astonishing degree (Haney et al., 1973). Though ethical issues cloud the study’s legacy, it remains a potent illustration of how randomly chosen people can conform to toxic group dynamics. In everyday life, the dynamics are usually less dramatic but follow a similar pattern: we instinctively mimic our in-group’s attitudes and behaviors. Hearing the same views echoed back at us provides a sense of validation and certainty. Over time, this can lead to polarization, as like-minded groups drift toward more extreme positions unmoderated by outside input (Moscovici & Zavalloni, 1969). Cass Sunstein has warned that the “Daily Me” of personalized media leads to informational enclaves that exacerbate partisan divides (Sunstein, 2001). Eli Pariser’s concept of the “filter bubble” (2011) expands on this dynamic by showing how social media algorithms, optimized for user engagement, reinforce homophily. By consistently feeding people content that validates their pre-existing views, these algorithms generate information silos in which contrary perspectives are seldom encountered, thereby magnifying bias and polarization (Pariser, 2011). “Don’t question authority see… Be a little zombie that agrees with you.” — Fishbone, “Behavior Control Technician” (1991). Renée DiResta’s work (2024) takes this further, revealing how bad actors manipulate these same systems to disseminate misinformation. According to DiResta, the very mechanisms that foster group cohesion can also be exploited to widen ideological rifts and fabricate a false sense of consensus (DiResta, 2024). In short, a variety of psychological studies suggest that without intervention, our default wiring encourages us to seek the familiar and filter out discordant views.

    However, homophily is only one potential manifestation of our nature. Humans may gravitate toward the like-minded, but we are not prisoners of that impulse. Just as importantly, psychology offers insight into our capacity for openness, change, and bridging differences – given the right circumstances.

    The Potential for Heterophily

    Counterbalancing our tribal instincts is an ability – even a need – to connect across differences. Psychological research shows that people can overcome biases and embrace diverse perspectives, especially when certain conditions foster trust and empathy. One powerful mechanism is perspective-taking – actively imagining another person’s viewpoint. In a series of experiments, Galinsky and Moskowitz (2000) found that when participants were instructed to take the perspective of someone from an out-group (for instance, to imagine a day in the life of an elderly person), the participants subsequently expressed fewer stereotypes and more positive attitudes toward that group (Galinsky & Moskowitz, 2000). Remarkably, simply imagining the world through someone else’s eyes can measurably reduce prejudice. Related studies have shown that asking people to consider why an opposing view might be true, or to explain the rationale of their opponents, can reduce biased reasoning. In one experiment, college students with strong opinions on a social issue became significantly more moderate in their stance after being asked to “consider the opposite” – to think about how an intelligent person could come to the opposite conclusion (Lord et al., 1984). This simple prompt made them more critical of their own assumptions and more appreciative of the merits in alternative arguments. Such findings illustrate that our minds are not static echo chambers; with the right cognitive cues, we can broaden our outlook.

    Beyond thought exercises, real-life interaction is a powerful antidote to homophily. Intergroup contact theory, first advanced by Gordon Allport in the 1950s, proposes that under appropriate conditions (equal status between groups, common goals, etc.), direct contact with members of other groups reduces prejudice (Allport, 1954). This theory has been tested extensively. A meta-analysis of over 500 studies involving 250,000 participants confirmed that, indeed, contact typically improves intergroup attitudes and reduces bias (Pettigrew & Tropp, 2006). Crucially, the benefits were not limited to any one divide – contact helped bridge differences of race, ethnicity, nationality, and more (Pettigrew & Tropp, 2006). When people from diverse backgrounds work together on a shared problem or simply get to know each other as individuals, they often discover common ground and humanize those they once viewed with suspicion. This doesn’t mean contact automatically produces harmony (context matters a great deal), but it shows that exposure to difference can expand empathy rather than just triggering conflict. In fact, psychologist Thomas Pettigrew noted that one of the key mediators in successful intergroup contact is perspective-taking – again, that ability to see the world through the other’s eyes leads to warmer feelings and reduced anxiety (Pettigrew & Tropp, 2008).

    Another trait that underpins heterophily is intellectual humility – essentially, recognizing that one’s own knowledge is limited and being open to learning from others. Recent research suggests intellectual humility is linked to greater openness and willingness to engage with dissenting views. For example, Leary et al. (2017) found that people who score high on intellectual humility tend to be more curious about alternative viewpoints and less threatened by disagreement. They are comfortable saying “I might be wrong” and thus more likely to actually listen to someone who contradicts them (Leary et al., 2017). Encouraging intellectual humility – in classrooms, workplaces, and online – can create an environment where heterophily thrives, because individuals don’t feel that encountering a different viewpoint is an attack on their ego. Instead, it becomes an opportunity to learn. Notably, humility is a form of strength—a quiet assurance in our capacity to grow. “The fool doth think he is wise, but the wise man knows himself to be a fool.” — William Shakespeare, As You Like It. Psychologists have even developed training exercises to cultivate intellectual humility, such as prompting individuals to reflect on times they were proven wrong or to consider narratives of wise people who have changed their minds (Krumrei-Mancuso & Rouse, 2016). Early evidence indicates that these interventions help people become more receptive to evidence that challenges their beliefs.

    Finally, let me highlight that heterophily can be intrinsically rewarding. Engaging with diverse perspectives isn’t just virtuous – it is fascinating and enriching. Studies on “active open-mindedness” show that often people enjoy probing ideas that unsettle them, as long as the exchange feels respectful and illuminating (Baron, 2019). Our brains are wired for curiosity; given psychological safety, even those accustomed to insular environments can find value in a stimulating clash of viewpoints. In sum, while homophily might be our comfort zone, we clearly possess the cognitive and emotional tools for heterophily. Perspective-taking, positive contact, and intellectual humility demonstrate people’s capacity to venture beyond the familiar. This capacity has also been realized in social structures throughout history, which we turn to next.

    Prehistoric and Historical Examples

    History provides compelling examples of societies that leaned into heterophily and structured themselves to avoid the pitfalls of echo chambers. Long before modern experiments in deliberative democracy, certain cultures developed decision-making processes that valued inclusive dialogue and consensus. These cases suggest that the tension between homophily and heterophily is not new – and that our ancestors often understood the importance of broad participation and minority perspectives.

    One striking example comes from one of the oldest continuous cultures on Earth: the San people of Southern Africa. The San (often called “Bushmen”) are hunter-gatherers whose traditional lifestyle was fiercely egalitarian. Anthropologists note that San bands made decisions through group consensus rather than by fiat of a single leader (Shostak, 1983). In fact, while some individuals (often elders) might informally guide discussions, they had no coercive authority – every person’s opinion could be heard in the prolonged talks that preceded any major decision. This consensus-based approach meant that even minority opinions had to be grappled with until the whole group reached mutual agreement (Shostak, 1983). Such a system explicitly counteracted homophily by ensuring that nobody could simply impose their will and surround themselves with yes-men; instead, the group had to consider all viewpoints to maintain harmony. Crucially, the San also enforced norms of humility to sustain this egalitarian harmony. Anthropologist Richard Lee famously observed the practice of “insulting the meat,” in which a successful hunter’s kill is humorously belittled by others to keep the hunter’s ego in check. This tradition ensures that no individual grows too proud or domineering – the most skilled members are reminded that everyone depends on everyone else. Such cultural checks on ego fostered an atmosphere where all could speak and be heard, reinforcing the San’s inclusive deliberation. The San ethos was (and in some communities remains) deeply dialogical: if a dispute arose, the band might talk all night around the campfire, with interruptions for humor and storytelling, until a resolution acceptable to all emerged. Women were treated as relative equals in these discussions, contributing actively to debates and decisions (Shostak, 1983). This ancient model of governance by consensus highlights that seeking broad agreement – rather than majority rule or authoritarian decree – can be a natural form of human organization. It acts as a check on our tendency to let the loudest or most similar ideas dominate. The San show that a small community, at least, can embrace heterophily by design, building social cohesion through inclusive deliberation rather than exclusion.

    Moving forward in time, consider the Iroquois Confederacy in North America. This alliance of six nations (the Haudenosaunee) formed a sophisticated system of governance well before European contact. At the heart of the Iroquois Confederacy was the Great Council of 50 chiefs (sachems) representing the member nations. What’s remarkable is that the Great Council operated on the principle of unanimous consensus – decisions had to be approved by all the sachems, meaning any chief’s dissent could send the Council back to discussion until concerns were resolved (Justo, 2024). In practice, this meant minority viewpoints were not just tolerated but amplified: a single voice could halt a decision, forcing the majority to engage with that perspective. Far from causing paralysis, this process was seen as essential to achieving legitimacy and unity. Each nation (Mohawk, Oneida, Onondaga, Cayuga, Seneca, and later Tuscarora) had a say, and the structure included a careful balance – for instance, the Mohawk and Seneca (elder brothers) would propose, the Oneida and Cayuga (younger brothers) would deliberate, and the Onondaga (fire keepers) could veto to ensure consensus, after which the process would iterate (Native Tribe Info, 2024). By all accounts, debates could be long and vigorous, but the Iroquois valued that “talk until agreement” approach. The Great Law of Peace, their oral constitution, framed consensus as a way to ensure equity and fairness – no nation or faction could simply dominate the others (Lyons, 1992). This consensus model effectively encouraged heterophily: leaders had to listen earnestly to differing opinions, because they could not simply overrule them. The result was a remarkably stable union that lasted for centuries and influenced democratic thought in the West. The Iroquois Confederacy illustrates how a political structure can institutionalize open dialogue and minority rights, counteracting the human impulse to splinter into echo chambers. By requiring unanimity, they made diversity of thought the engine of decision-making, not an obstacle to it (Justo, 2024).

    Notably, the Iroquois had mechanisms to manage dissent beyond the council chamber. For example, the Confederacy empowered respected women elders, or Clan Mothers, to hold leaders accountable: Clan Mothers could even dismiss a chief if he was not doing his job or failed to uphold the people’s will. This provided a built-in check and balance, ensuring that no sachem could ignore his community’s concerns for long. Additionally, important meetings opened with rituals like the Thanksgiving Address – words of gratitude recited to bring all participants to “one mind” – which fostered a humble, cooperative spirit before formal deliberations began. Such ceremonies helped quell personal grievances and unify the group’s purpose. Together, these cultural practices meant that internal disputes were typically resolved through reasoned dialogue and reconciliation rather than coercion or schism. In fact, the Great Law of Peace famously succeeded in ending generations of intertribal warfare among the five original nations, replacing conflict with a framework for perpetual negotiation. In sum, Iroquois governance combined strict consensus rules with peacemaking customs, ensuring that disagreements strengthened the union instead of splintering it.

    danah boyd (2017) draws a modern parallel to these historical lessons, pointing out how contemporary social media fosters the opposite dynamic. Today’s online platforms often let people self-segregate into digital enclaves that simply mirror their own values. Unlike the Iroquois — whose consensus-driven framework obliged all parties to engage with minority voices — today’s online communities make it easy to avoid opposing viewpoints entirely, thus reinforcing ideological silos (boyd, 2017).

    A Quaker meeting in the 19th century, as depicted by artist Thomas Rowlandson (1809). Quakers practiced consensus decision-making, allowing even lone dissenters to slow down a decision – an early example of fostering inclusive dialogue.

    Another historical case comes from a religious community: the Quakers (Society of Friends) who emerged in 17th-century England. Quakers developed a distinctive method of collective decision-making known as the “sense of the meeting,” which eschews voting in favor of finding unity. In a Quaker meeting for business, participants sit in silent reflection and share perspectives one by one. The goal is to reach a decision that everyone can accept, or at least “stand aside” for – effectively a consensus minus any coercion (McPhail, 2024). What’s again striking is how this process elevates minority standpoints. If even a few Friends express reservations, the group will pause and reconsider rather than simply outvote them. Historically, this allowed prophetic minority notions to shift the entire Quaker community. A notable example is the Quakers’ early stance against slavery. In the 1700s, a handful of Quaker abolitionists repeatedly raised concerns about slaveholding at yearly meetings. Rather than being dismissed, these controversial views were painstakingly weighed over decades. The Quaker consensus model eventually produced unity on abolition – nearly a century before national abolition in Britain and the U.S. – precisely because the structure forced the community to contend with those few dissenters and their moral arguments (McPhail, 2024). One Quaker described the ideal as “listening each other into deeper truth,” a divergence from majority tyranny. Debate could be respectful yet frank, and disagreements were met with patience and prayerful consideration (McPhail, 2024). “The majority is never right. Never, I tell you! That’s one of those lies in society that no free and intelligent man can help rebelling against. Who are the people that make up the majority — the intelligent ones or the fools?” — Henrik Ibsen, An Enemy of the People (1882). By all accounts, Quaker meetings had (and still have) an egalitarian spirit: anyone, regardless of social status or gender, can speak if moved to, and their words are weighed on their merit. This culture of inclusive deliberation made Quaker communities remarkably receptive to new ideas – from social reforms to innovations in education – despite being tight-knit religious groups. In essence, the Quakers found a way to counteract homophily through spiritual practice, treating each dissenting demand as potentially containing a piece of the truth that the community needs. Their legacy in social justice and peace work testifies to the power of that approach. None of these systems was impeccable or free of conflict, of course. But they all recognized, implicitly or explicitly, that diversity of perspective is an asset to be harnessed rather than a threat to be quashed.

    These historical examples – the San, the Iroquois, the Quakers – each in their own way nurtured heterophily through specific norms and structures. Egalitarian hunter-gatherers avoided hierarchy and forced consensus through open dialogue; the Iroquois built a federation that required unanimous agreement, giving every nation’s perspective weight; the Quakers developed a culture of deep listening and unity that empowered minority viewpoints. These systems predate our modern terminology, yet they were grappling with the same fundamental dynamic of human nature. If our ancestors could value cognitive diversity around a campfire or council fire, it suggests that our proclivity for echo chambers can indeed be tempered by wisdom and design. In modern times, we have begun to apply similar lessons in new contexts.

    Modern Case Studies

    In recent decades, a number of deliberate experiments have tried to combat homophily and promote open-minded dialogue in contemporary society. From randomly selected citizen panels to innovative online platforms, these case studies demonstrate that when you change the structure of discussion, you can change the outcome. People who might tune each other out in everyday life often prove capable of collaborative, nuanced thinking under the right conditions. Here we’ll look at two arenas in particular: deliberative democracy initiatives and online platforms designed for heterophily.

    One approach has been the rise of citizens’ assemblies and other deliberative democracy forums. These are processes where citizens, typically selected to reflect a cross-section of society, are brought together to learn about an issue, discuss it extensively, and propose recommendations. Crucially, these assemblies are structured with trained facilitators and ground rules to ensure respectful, balanced dialogue – a stark contrast to the shouting matches on cable news or social media. The results have been remarkable. For example, Ireland convened a Citizens’ Assembly in 2016–2017 to examine the once-taboo issue of abortion laws. The assembly of 99 citizens heard from legal and medical experts, as well as personal testimonies, and engaged in small-group discussions over several weekends. In the end, this diverse group (young, old, urban, rural, religious and non-religious) reached a set of nuanced recommendations that helped pave the way for Ireland’s historic referendum legalizing abortion in 2018. Many participants underwent profound shifts in their thinking – in fact, exit surveys showed a large majority felt the process made them more open to other viewpoints and more informed about the complexity of the issue (Farrell et al., 2019). This is a common pattern in deliberative mini-publics. Researchers James Fishkin and Robert Luskin, who have organized Deliberative Polls around the world, find that after citizens deliberate on an issue with access to balanced facts and arguments, they tend to change their opinions in sensible ways – often moderating extreme positions or revising misconceptions (Fishkin, 2018). Crucially, participants also report greater understanding and empathy for opposing views, even if they don’t fully embrace them. Deliberation “civilizes” discourse: people learn to argue the issue, not attack the person, and they often discover that their differences are not as vast as assumed. In one quantitative study, Gastil et al. (2002) found that people who served on juries (another form of deliberation) became more likely to vote and engage in civic life afterward – as if the experience of thoughtful group discussion awakened a sense of democratic possibility (Gastil et al., 2002). Deliberative forums from British Columbia to Mongolia have tackled topics from electoral reform to climate policy, frequently finding consensus solutions that traditional partisan politics had gridlocked. While deliberation is not a panacea, these experiments offer proof of concept that citizens, when given structure and goodwill, can deliberate across differences and enjoy it. It seems that the very act of sitting together as equals, hearing each other out, flips a psychological switch – turning down the tribal defensiveness and turning up our latent heterophilous impulses. As one participant in a citizens’ jury put it, “I realized we were all just people trying to do the right thing, even if we disagreed on how” (quoted in OECD, 2020). The growth of such assemblies (the OECD documents nearly 300 examples in the past decade alone) is a testament to the hunger for more constructive dialogue in an era of polarization (OECD, 2020).

    If face-to-face deliberation demonstrates our capacity for open-minded engagement, can we translate that to the online sphere, where homophily currently runs rampant? A number of online platforms are attempting exactly that – designing social networks and discussion tools that incentivize heterophily instead of clickbait and tribalism. Zeynep Tufekci (2017) warns that engagement-driven algorithms often funnel users toward increasingly extreme content, aggravating polarization in the process. She advocates for platforms that deliberately expose people to a breadth of perspectives, rather than maximizing total minutes spent among like-minded peers (Tufekci, 2017). In a similar spirit, Tristan Harris (2020) contends that social media should prioritize user well-being and healthier public discourse (Harris, 2020). One notable case where this has been tried successfully is Taiwan’s vTaiwan platform, a government-sponsored digital process for crowdsourcing legislation. At the core of vTaiwan is a discussion tool called Polis. Unlike typical forums, Polis doesn’t allow direct replies or flame wars. Instead, users submit statements on an issue and vote up or down on others’ statements. Behind the scenes, a machine-learning algorithm identifies clusters of opinion – mapping where the crowd agrees or diverges – and highlights statements that earn broad support across different groups. In a divisive debate over rideshare regulation (the “Uber vs. taxi” conflict), vTaiwan drew over 4000 participants, including taxi drivers, Uber drivers, passengers, and regulators. Despite their opposing starting positions, the Polis platform displayed in real time that there were several points everyone agreed on (e.g. passenger safety is paramount, drivers should be insured) (Bartlett, 2016). Those consensus points became the basis for policy recommendations. Astonishingly, all sides came to accept a compromise legal framework because they saw it reflected the collective will, not just one faction’s interest. Audrey Tang, Taiwan’s Digital Minister, described the process as “finding rough consensus” – people had to “convince not just their own side, but also the other sides” for a statement to rise to prominence (Bartlett, 2016). The design of the platform gamified heterophily: users were rewarded (by influence of their ideas) for proposing statements that could win over adversaries. More divisive assertions simply didn’t gain traction because they would get voted down by others. Over a month of deliberation, four initially distinct opinion groups gradually converged into two groups, and then into one common ground on key points. Participants reported being surprised at how much consensus was possible and appreciated seeing a visualization of where everyone stood – it humanized the “other side” (Huang, 2017).
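    To make the mechanics a little more concrete, here is a minimal sketch of the kind of pipeline described above: collect agree/disagree votes on short statements, cluster participants into opinion groups, and surface the statements that win support in every group rather than just an overall majority. This is an illustration of the idea only, not the actual Polis or vTaiwan code; the clustering method, the 60% support threshold, and the toy data are my own assumptions.

    ```python
    # Illustrative sketch of a Polis-style "cluster, then look for cross-group
    # agreement" pass. The real system uses its own dimensionality reduction and
    # clustering; the thresholds and data below are hypothetical.
    import numpy as np
    from sklearn.cluster import KMeans

    def consensus_statements(votes: np.ndarray, n_groups: int = 2, support: float = 0.6):
        """votes: participants x statements matrix of +1 (agree), -1 (disagree), 0 (pass)."""
        # Group participants by the overall shape of their voting behaviour.
        groups = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(votes)

        consensus = []
        for s in range(votes.shape[1]):
            # Keep a statement only if every opinion group supports it,
            # not merely the overall majority.
            per_group = [(votes[groups == g, s] > 0).mean() for g in range(n_groups)]
            if min(per_group) >= support:
                consensus.append(s)
        return groups, consensus

    # Toy data: 6 participants, 4 statements; two camps disagree on statements 1 and 2
    # but share ground on statements 0 and 3.
    votes = np.array([
        [ 1,  1, -1,  1],
        [ 1,  1, -1,  1],
        [ 1,  1, -1,  1],
        [ 1, -1,  1,  1],
        [ 1, -1,  1,  1],
        [ 1, -1,  1,  0],
    ])
    print(consensus_statements(votes)[1])  # expected: [0, 3]
    ```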

    The key takeaway is that the medium and rules of online engagement matter: if you build a system that amplifies moderate, bridge-building ideas rather than the loudest partisan takes, people will use it accordingly. Other platforms experimenting in this space include Kialo, a website for structured pro/con debates that enforces civility and clarity, and Change My View on Reddit, a community where users are actually rewarded for having their mind changed by a good argument. These platforms, while smaller than mainstream social media, indicate a real appetite for richer discourse online. They show that given a chance, many internet users will happily step outside their echo chamber to debate respectfully and reconsider their positions. The challenge and opportunity ahead is scaling up such models, so that heterophily online isn’t confined to a few enclaves but becomes the norm across our digital public sphere. Taiwan’s success with vTaiwan and Polis has inspired other governments and communities to try similar large-scale online deliberations. Yet for mainstream social media giants, solving these issues has been an uphill battle.

    Facebook and Twitter, in particular, have made high-profile attempts to tweak their algorithms and interface features to mitigate echo chambers and polarization – with limited success. Facebook’s 2018 News Feed overhaul, intended to promote “meaningful interactions” among friends and family, infamously backfired by boosting outrage and sensationalism in practice. Internal company documents later revealed that this algorithm change rewarded incendiary content, making the platform angrier, even as it aimed to encourage healthy engagement. Twitter has introduced prompts (like nudges to read an article before retweeting it) and a community fact-checking system (Community Notes), but toxic debates and partisan silos persist on the platform. Even rigorous experiments by independent researchers – for example, temporarily altering what kind of political content people see on Facebook – resulted in only modest changes to users’ browsing behavior and almost no change in their political attitudes. These efforts underline a key lesson: it’s not simple to retrofit an engagement-driven platform to foster understanding. Tackling echo chambers requires more than minor tweaks to the recommendation engine; it demands rethinking the platform’s fundamental design and incentives. This raises an urgent question: If social media as we know it is structurally resistant to heterophily, what would a platform look like if designed from the ground up to foster cognitive diversity?

    Connecting to ManyFold: Engineering Cognitive Diversity

    In light of these lessons, my colleague Neville Newey and I set out to build a platform from scratch that would counteract homophily and foster nuanced deliberation. This brings us to ManyFold, a new platform we co-designed explicitly to address the structural causes of echo chambers. ManyFold’s approach takes inspiration from all the lessons discussed – the psychology of diversity (think perspective-taking, intellectual humility, and positive intergroup contact), the wisdom of consensus-driven systems, and the success of deliberative designs – and weaves them into an algorithm that maximizes cognitive diversity in discussions. We infuse modern technology with the same spirit of open-minded, humble dialogue that characterized communities like the San or the Haudenosaunee, translating that ethos into a digital environment. The guiding philosophy is simple: if echo chambers are largely a product of how conversations are structured (or not structured) online, then re-engineering those structures unlocks our latent heterophily. Rather than connecting you with “people you may know,” ManyFold connects you with people you may want to know precisely because they see the world differently.

    How does it work? ManyFold’s core algorithm distributes your post to users outside your usual tribe, so the responses you get are varied and your post doesn’t echo around a like-minded clique. Extreme or highly partisan posts can’t create a “feedback loop” of sympathizers: the design “deprives extreme positions of a homogeneous echo chamber” by steering those posts toward readers with starkly different stances, who will challenge the content rather than reinforce it. ManyFold bakes in a kind of automatic devil’s advocacy.
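    The production routing is more involved than I can reproduce in a blog post, but a minimal sketch of the underlying idea looks something like this: summarize each user’s recent stances as a vector, then hand a new post mostly to readers whose vectors sit far from the author’s, with a smaller share of nearby readers mixed in. The function names, the cosine-similarity measure, and the 70/30 split are illustrative assumptions, not ManyFold’s actual code.

    ```python
    # Illustrative sketch of viewpoint-diverse routing; not ManyFold's actual code.
    # Each user is summarised by a stance vector (for example, an average embedding
    # of their recent posts); a new post goes mostly to readers far from its author.
    import numpy as np

    def route_post(author_vec: np.ndarray,
                   reader_vecs: dict[str, np.ndarray],
                   k: int = 20,
                   far_share: float = 0.7) -> list[str]:
        """Pick k recipients, weighted toward readers unlike the author."""
        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

        # Rank readers from most distant (most likely to push back) to most similar.
        ranked = sorted(reader_vecs, key=lambda u: cosine(author_vec, reader_vecs[u]))

        n_far = min(int(k * far_share), len(ranked))   # majority: likely dissenters
        n_near = min(k - n_far, len(ranked) - n_far)   # minority: broadly sympathetic readers
        return ranked[:n_far] + (ranked[-n_near:] if n_near > 0 else [])

    # Hypothetical usage with random stance vectors.
    rng = np.random.default_rng(0)
    readers = {f"user{i}": rng.normal(size=8) for i in range(200)}
    recipients = route_post(rng.normal(size=8), readers, k=10)
    ```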

    By default, ManyFold forces the kind of intergroup contact that decades of research show can reduce prejudice. Every time you post or comment, you can expect it will be seen and likely responded to by people with different viewpoints. This makes each interaction an exercise in perspective-taking – you’re prompted to consider why someone from another background might disagree, imagining the issue through their eyes. Rather than hearing an echo of agreement, you’re exposed to counterpoints and alternate experiences. This process might be challenging, but it ultimately encourages intellectual humility. Confronted with well-reasoned dissent and diverse personal stories, users become more comfortable admitting “I might be wrong” and more curious about what they can learn from others’ perspectives. In short, ManyFold’s environment nudges people to approach dialogue as a two-way learning opportunity instead of a one-sided broadcast.

    The platform’s feed algorithms optimize for what Goodin and Spiekermann (2018) call epistemic diversity – exposing people to information that advances collective understanding instead of just driving engagement metrics. This approach draws on research by Lu Hong and Scott Page (2004), who famously demonstrated that groups of diverse problem-solvers can outperform groups of high-ability but similar thinkers at finding solutions (Hong & Page, 2004). Diversity, in that context, isn’t a feel-good slogan but a practical strategy for better outcomes. ManyFold applies these findings to discourse: by ensuring a spectrum of viewpoints, the hope is that discussions become more exploratory and less confirmatory, yielding new discernment that wouldn’t emerge in an echo chamber. Indeed, heterogeneous conversation can be a “crucible for better thinking, not an incitement to factional strife” (ManyFold, 2025).

    The platform then elevates minority viewpoints in ways traditional social media do not. Instead of burying unpopular opinions via downvotes or outrage, ManyFold keeps them in the mix so they can be examined and responded to by others. This design echoes philosopher Jürgen Habermas’s ideal of a discourse free from domination, where no position is arbitrarily excluded (Habermas, 1996). In practical terms, it means no single person or moderator on ManyFold can silence a perspective just because it’s unpopular. Every idea can circulate and meet its critiques in the open. Over time, this helps inoculate the community against misinformation and extremism in a different way than blunt censorship: bad ideas are debunked through counter-argument and context provided by diverse others, rather than simply hidden (which often only feeds martyrdom narratives). ManyFold treats a controversial post as an opportunity for constructive debate. For example, if someone shares a conspiracy theory, the platform ensures that responses from people with relevant expertise or opposing evidence are prominently shown, effectively attaching a rational “immune response” to the original post – similar to how Wikipedia handles dubious claims with “citation needed” notes and disputing viewpoints. This way, users encountering extreme content also encounter the broader societal chorus of perspectives around it, which provides a reality check. It’s a digital twist on John Stuart Mill’s dictum that understanding the counter-argument is essential to knowing the truth of your own argument.

    ManyFold’s commitment to heterophily extends to how it forms discussion groups and threads. Unlike typical forums where people self-sort by interest or ideology, ManyFold intentionally seeds discussions with a mix of participants. A user who identifies as conservative on an issue might be algorithmically paired in a debate with a few progressives, some libertarians, anarchists, and moderates, rather than dropped into a room full of fellow conservatives. Think of it like a well-curated dinner party seating chart, designed to spark lively but balanced conversation. This design is informed by centuries-old practices like those of the Iroquois and Quakers – ensuring no one faction can dominate a conversation – and by modern network science: studies show that carefully introducing “bridge” individuals between polarized clusters can facilitate understanding and reduce toxic dynamics. ManyFold algorithmically mimics the role of a wise meeting facilitator who says, “I’d like us to hear from a different perspective now.” By doing so, it hopes to cultivate not just polite agreement, but genuine deliberation. As one of the platform’s design mottos puts it: “Don’t isolate the disagreement – illuminate it.” When opposing viewpoints meet, the aim is not to declare a winner but to refine everyone’s thinking, much as philosopher Charles Taylor’s ethic of authenticity suggests individuals refine their beliefs by wrestling with others’ values (Taylor, 1991). For instance, a climate change skeptic on ManyFold might be shown first-hand accounts from someone in a flood-prone Bangladeshi village or data from a climatologist – not to shame the skeptic, but to provide perspectives that challenge them to think more broadly. This kind of cross-pollination of experiences embodies both perspective-taking and the humble acknowledgment that none of us has a monopoly on truth.
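    For the “seating chart” itself, one simple way to realize the idea is round-robin sampling across opinion clusters, so every cluster is represented before any cluster gets a second seat. Again, this is a sketch under my own assumptions (the cluster labels, seat count, and helper names are hypothetical), not a description of ManyFold’s deployed matchmaking.

    ```python
    # Illustrative round-robin "seating chart": draw participants from each opinion
    # cluster in turn so no single faction dominates a newly seeded thread.
    import random
    from collections import defaultdict

    def seed_discussion(cluster_of: dict[str, int], seats: int = 12, seed: int = 0) -> list[str]:
        rng = random.Random(seed)
        by_cluster: dict[int, list[str]] = defaultdict(list)
        for user, cluster in cluster_of.items():
            by_cluster[cluster].append(user)
        for members in by_cluster.values():
            rng.shuffle(members)

        seated: list[str] = []
        clusters = list(by_cluster)
        i = 0
        # Take one participant from each cluster in turn until the table is full
        # or every cluster has been exhausted.
        while len(seated) < seats and any(by_cluster.values()):
            members = by_cluster[clusters[i % len(clusters)]]
            if members:
                seated.append(members.pop())
            i += 1
        return seated

    # Hypothetical usage: users labelled with the opinion cluster they belong to.
    labels = {f"user{i}": i % 4 for i in range(40)}
    print(seed_discussion(labels))  # 12 participants, 3 from each of the 4 clusters
    ```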

    Unlike Facebook or Twitter, which largely leave it to users to seek out opposing views (or rely on blunt content moderation when things go wrong), ManyFold bakes diversity and deliberation into its core mechanics from the start. For example, where typical feeds let people silo themselves, ManyFold automatically brings a range of viewpoints into every discussion thread. And instead of simply banning or algorithmically downplaying extreme content, ManyFold pairs controversial posts with credible counterpoints and context, ensuring that false or harmful claims are confronted head-on rather than just hidden. By depriving extreme positions of an isolated audience and subjecting them to challenge, the platform prevents the feedback loops that fuel polarization. The upshot is that ManyFold doesn’t measure success by how long you scroll or how many ads you click, but by the quality of understanding that emerges from each conversation. This ethos aligns with calls by tech ethicists like Tristan Harris to build technologies that prioritize user well-being and healthy discourse over sheer engagement. Our goal is that a divisive meme that might go viral elsewhere could, on ManyFold, spark a genuine dialogue that leaves everyone a little wiser.

    By design, the platform prizes curiosity and constructiveness, nudging users to ask questions and understand an argument before rebutting it. If homophily is the inertia pulling us into filter bubbles, ManyFold is the counter-force—a gentle push outward that expands our horizons with each interaction. In doing so, it channels a line from Friedrich Nietzsche that serves as a warning and inspiration: ‘The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently’ (Nietzsche, 1887). The platform is built on the premise that our minds are sharpened, not threatened, by encountering those who think differently.

    We invite you to become an early adopter by joining us on ManyFold today. By participating now, you’ll help shape this budding community and ensure that meaningful, cross-perspective discussion thrives from the beginning.

    Conclusion

    Human nature contains multitudes. We are, at turns, tribal and cosmopolitan, defensive and curious. As we’ve seen, the pull of homophily is real – rooted in psychology and easily exacerbated by modern algorithms – but it is not the whole story. We also possess a countervailing push toward growth, empathy, and connection across difference. The existence of both impulses means that the social environments we create truly matter. Will our communities and technologies feed only our inclination for echo chambers, or will they cultivate our capacity for open-minded engagement?

    The evidence is encouraging: when given supportive conditions, people can and do step out of their comfort zones. The same person who closes ranks in a partisan Facebook group might, in a citizen assembly or on a platform like ManyFold, become an active listener and nuanced thinker. Rather than labeling humanity as hopelessly narrow or naively open, we should recognize this dual potential. It falls on all of us – technologists, leaders, educators, citizens – to design structures that bring out the better angels of our nature. This can happen at every scale. In our personal lives, it means engaging with that colleague or neighbor who holds a different view, not to argue but to understand. In our institutions, it means creating forums where diverse stakeholders deliberate side by side, whether in a company, a school board, or a national debate. And in our online spaces, it means pushing for innovation and responsibility from platforms: the algorithms that shape what billions see each day should be aligned with democratic ideals, not just advertising metrics.

    ManyFold’s approach is one inspiring example, showing that rethinking the rules of engagement can transform discourse. It won’t be the last word – the movement for a more heterophilous public sphere is just beginning, and will require experimentation and iteration. But the key message is one of empowerment: we are not slaves to polarization. We can choose tools and norms that expand our minds. Every time we resist the lazy lure of the echo chamber and instead invite a new perspective into our field of view, we exercise the “heterophily muscle” and make it stronger. “Keep the company of those who seek the truth—run from those who have found it.” — Václav Havel. Over time, those muscles could rebuild a culture of constructive debate out of the fragmented landscape we see now.

    Perhaps the most heartening lesson is that engaging with diverse perspectives is not just good for society – it enriches us as individuals. As the San elders knew around their fires, as the Haudenosaunee sachems demonstrated in council, and as Quaker Friends practiced in their meetings, listening deeply can reveal unexpected wisdom and forge bonds of understanding. It might be challenging at times, even uncomfortable, but it draws out the full range of human insight in a way that homogeneity never can. In a world as complex and interconnected as ours, we need that full range of insight more than ever. So let’s build systems, online and offline, that challenge us to be curious and kind in equal measure. The echoes of agreement may be reassuring, but the spark of a fresh viewpoint is how we light the path to progress. “Without deviation from the norm, progress is not possible.” — Frank Zappa. Human nature has room for both, and the future will be shaped by which one we choose to cultivate. So come join us on ManyFold and help build this culture of constructive debate from the ground up. Be the change you want to see in the world by opening your mind to the widest spectrum of perspectives!

    References

    Allport, G. W. (1954). The Nature of Prejudice. Addison-Wesley.

    Asch, S. E. (1955). Opinions and social pressure. Scientific American, 193(5), 31–35.

    Baron, J. (2019). Actively Open-Minded Thinking: Theory, Methods, Research, and Applications. Routledge.

    Bartlett, R. D. (2016). How Taiwan solved the Uber problem. P2P Foundation Blog, 21 September 2016.

    boyd, d. (2017) ‘Why America is self-segregating’, Apophenia. Available at: https://www.zephoria.org/thoughts/archives/2017/01/10/why-america-is-self-segregating.html (Accessed: 6 March 2025).

    Centola, D., Becker, J., Brackbill, D., & Baronchelli, A. (2018). Experimental evidence for tipping points in social convention. Science, 360(6393), 1116–1119.

    DiResta, R. (2024) ‘The Invisible Rulers Turning Lies Into Reality’, Commonwealth Club World Affairs. Available at: https://www.youtube.com/watch?v=Ad2gjdN_k5Y (Accessed: 6 March 2025).

    Farrell, D. M., Suiter, J., & Harris, C. (2019). “Systematizing” constitutional deliberation: the 2016–18 citizens’ assembly in Ireland. Irish Political Studies, 34(1), 113–123.

    Fishkin, J. S. (2018). Democracy When the People Are Thinking: Revitalizing Our Politics Through Public Deliberation. Oxford University Press.

    Galinsky, A. D., & Moskowitz, G. B. (2000). Perspective-taking: decreasing stereotype expression, stereotype accessibility, and in-group favoritism. Journal of Personality and Social Psychology, 78(4), 708–724.

    Gastil, J., Deess, E. P., & Weiser, P. (2002). Civic awakening in the jury room: A test of the connection between jury deliberation and political participation. Journal of Politics, 64(2), 585–595.

    Goodin, R. E., & Spiekermann, K. (2018). An Epistemic Theory of Democracy. Oxford University Press.

    Habermas, J. (1996). Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. MIT Press.

    Haney, C., Banks, W. C., & Zimbardo, P. G. (1973). Interpersonal dynamics in a simulated prison. International Journal of Criminology and Penology, 1, 69–97.

    Harris, T. (2020) ‘How a handful of tech companies control billions of minds every day’, TED. Available at: https://www.ted.com/talks/tristan_harris_how_a_handful_of_tech_companies_control_billions_of_minds_every_day (Accessed: 6 March 2025).

    Hong, L., & Page, S. E. (2004). Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences, 101(46), 16385–16389.

    Huang, J. (2017). Polis: Scaling Deliberation by Mapping High-Dimensional Opinion Spaces. MS Thesis, MIT.

    Janis, I. L. (1982). Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Houghton Mifflin.

    Justo, J. (2024). Unveiling the Iroquois Confederacy: A United Force in Native American Governance. NativeTribe Info. (Posted May 24, 2024).

    Krumrei-Mancuso, E. J., & Rouse, S. V. (2016). The development and validation of the Comprehensive Intellectual Humility Scale. Journal of Personality Assessment, 98(2), 209–221.

    Leary, M. R., et al. (2017). Cognitive and interpersonal features of intellectual humility. Personality and Social Psychology Bulletin, 43(6), 793–813.

    Lord, C. G., Lepper, M. R., & Preston, E. (1984). Considering the opposite: a corrective strategy for social judgment. Journal of Personality and Social Psychology, 47(6), 1231–1243.

    ManyFold (2025). Breaking the Echo Chamber: A Blueprint for Authentic Online Deliberation. [blog].

    McPhail, M. (2024). The Quaker Decision Making Model. Friends General Conference News, 25 November 2024.

    Moscovici, S., & Zavalloni, M. (1969). The group as a polarizer of attitudes. Journal of Personality and Social Psychology, 12(2), 125–135.

    Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.

    Nietzsche, F. (1887). On the Genealogy of Morals.

    OECD (2020). Innovative Citizen Participation and New Democratic Institutions: Catching the Deliberative Wave. OECD Publishing.

    Pariser, E. (2011) ‘Beware online “filter bubbles”’, TED. Available at: https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles (Accessed: 6 March 2025).

    Pettigrew, T. F., & Tropp, L. R. (2006). A meta-analytic test of intergroup contact theory. Journal of Personality and Social Psychology, 90(5), 751–783.

    Shostak, M. (1983). Nisa: The Life and Words of a !Kung Woman. Harvard University Press.

    Sunstein, C. R. (2001). Republic.com. Princeton University Press.

    Tajfel, H. (1970). Experiments in intergroup discrimination. Scientific American, 223(5), 96–102.

    Taylor, C. (1991). The Ethics of Authenticity. Harvard University Press.

    Tufekci, Z. (2017) ‘We’re building a dystopia just to make people click on ads’, TED. Available at: https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads (Accessed: 6 March 2025).

    Zimbardo, P. G., Haney, C., Banks, W. C., & Jaffe, D. (1973). The mind is a formidable jailer: A Pirandellian prison. New York Times Magazine, April 8, 1973, 36–60.

  • Breaking the Echo Chamber: A Blueprint for Authentic Online Deliberation

    Join us on ManyFold now!

    Introduction: The Digital Speech Crisis

    A few weeks ago, I found myself catching up with an old college friend—let’s call him Ezra. He used to be the kind of person who devoured books like The Metaphysical Club, and his recommendations routinely influenced me. His nuanced, questing intellect once made every conversation feel alive with possibility. This time, though, I barely recognized him. He was rattling off dire warnings about Canada’s Bill C-63 and the EU’s Digital Services Act, insisting these regulations were part of a grand conspiracy to muzzle dissent—especially for people like him, a Jew who feared what he called “silencing tactics.” Then he flipped the script and lambasted “shadowy forces” bent on “canceling” him for his views.

    Observing Ezra—a friend once fascinated by complexity—announce so urgently that “free speech” stands on the brink illustrates how readily we gravitate toward a battle cry against censorship. The Greek economist and politician Yanis Varoufakis advances the notion of technofeudalism. His concept points to a subtler, more encompassing shift: private companies now construct vast arenas for public discourse through data collection and algorithmic design, shaping speech and belief in ways that reinforce their own authority (Varoufakis, 2023). Ezra instinctively recognizes this menace, yet he misdiagnoses it: it is less about policymakers legislating speech and more about newly emerged barons silently dictating the terms of discourse.

    Lawmakers have responded to the threat that this manipulation poses by crafting legislation such as C-63, the EU’s Digital Services Act, and the UK’s Online Safety Bill. Those bills focus on lists of prohibited behaviors and moderation protocols. Such laws address destructive content but fail to describe a shared vision of digital life. They specify what must be reported, flagged, or removed, when they should instead define constructive goals for civic engagement and personal autonomy; after all, legislators were elected for their visions. Silicon Valley entrepreneurs, for their part, champion “innovation” for innovation’s sake while touting free speech; in practice, they channel user data to intensify engagement, refine algorithms, and reinforce their platforms’ influence. They thus fill the void left by the absence of a democratically shaped vision with a vision of their own, one that has no democratic representation. “A trend monger is a person who dreams up a trend… and spreads it throughout the land, using all the frightening little skills that science has made available!” –Frank Zappa.

    Elon Musk, for example, oversees a platform where more than a hundred million people interact within rules he and his teams devise. Mark Zuckerberg refines Meta’s systems to sustain user involvement and expand a massive empire of everyday engagements. These structures function as formidable strongholds, echoing the technofeudal balance of power Varoufakis describes. Although “free speech” often appears intact as a principle, hidden mechanisms and corporate incentives decide which ideas gain traction, how they spread, and to whom they matter.

    Manyfold, a social network I co-founded with Neville Newey, treats discourse as a form of collective problem-solving rather than a mere engagement-driven spectacle. Rather than merely multiplying viewpoints, Manyfold aims to make speech serve collective reasoning rather than flashy performance. Hafer and Landa (2007, 2013, 2018) show that genuine deliberation isn’t just an aggregate of opinions—it emerges from institutional frameworks that deter polarization and induce real introspection. If those structures fail, people drift away from public debate. Feddersen and Pesendorfer (1999) find that voters abstain when they think their efforts won’t shift the outcome, mirroring how social-media users retreat when their voices go unheard amid viral noise.

    Landa (2015, 2019) underscores that speech is inherently strategic: individuals tailor messages to sway an audience within system-imposed constraints. Conventional platforms reward shock value and conformity. Manyfold, by contrast, flips these incentives—replacing knee-jerk outrage with problem-solving dialogues fueled by cognitive diversity. Speech becomes less about self-promotion and more about refining a shared understanding of complex issues.

    Goodin and Spiekermann (2018) argue that a healthy democracy prizes epistemic progress—that is, advancing collective understanding—more than simple audience metrics. Manyfold embodies this ethos by prioritizing ideational variety over raw engagement. Landa and Meirowitz (2009) elucidate how well-designed environments elevate the quality of public reasoning: by intentionally confronting users with unfamiliar or underrepresented standpoints, Manyfold fuels the kind of friction that refines thought instead of fracturing it. The platform thus departs from popularity-driven paradigms, allowing fresh or seldom-heard perspectives to surface alongside established ones. In doing so, it champions deeper inquiry and a richer exchange of ideas, steering us away from a race to the loudest shout and toward a more thoughtful digital sphere.

    Instead of optimizing for clicks or locking users into echo chambers, its algorithms maximize cognitive diversity. Hong & Page (2004) show that when groups incorporate a range of cognitive heuristics, they arrive at better solutions than even a group of individually brilliant but homogeneous thinkers. Manyfold applies this understanding to online speech, ensuring that conversations remain exploratory rather than self-reinforcing. Minority viewpoints are surfaced, ensuring no single entity decides who deserves an audience. This design embraces Jürgen Habermas’s concept of discourse free from domination (Habermas, 1996), presenting a space that encourages empathy, critical thought, and shared inquiry. Rather than reinforcing the routines of a tech industry propelled by data extraction, Manyfold aspires to deepen human capacity for understanding and dialogue.

    Varoufakis’s critique of technofeudalism highlights the urgency of reclaiming our digital commons from corporate overlords. Preserving speech in principle means little if individuals rarely see ideas that don’t align with a platform’s opaque priorities. An affirmative vision of technology places nuanced conversation and collective progress at the core of design choices. Manyfold advances this vision of collaboration and exploration rather than funneling human interaction into corridors of control. In that sense, it is an experiment on how digital spaces can foster genuine agency, offering an antidote to the feudal trends reshaping our online lives.

    Regulatory Shortfalls: From Frank Zappa to Sen’s Flute

    In 1985, Frank Zappa testified before the U.S. Senate to protest the Parents Music Resource Center’s push for warning labels on albums deemed “explicit.” Though that debate might seem worlds away from modern digital regulations like Bill C-63, the EU’s Digital Services Act, and the UK’s Online Safety Bill, Zappa’s stance resonates: labels and blanket bans can flatten cultural nuance and sidestep the crucial question of how creative or controversial content might foster dialogue and moral discernment. These new regulations aim to curb harm, yet they rarely outline ways for users to engage with conflict in ways that spark reflection and growth. As Cass Sunstein (2017) cautions, overly broad or inflexible measures can stifle open discourse by driving heated discussions underground. Rather than encouraging respectful debate, heavy-handed rules may suppress valuable viewpoints and sow mistrust among users who perceive moderation as opaque or punitive.

    Charles Taylor’s “ethic of authenticity” (Taylor, 1991) offers a way to understand why mere prohibition leaves a gap. People refine their views by confronting perspectives that challenge them, whether they find these views enlightening or appalling. Imagine someone stumbling on a troubling post at midnight. Instead of encountering prompts that encourage her to dissect the viewpoint or a variety of responses that weigh its moral assumptions, she simply sees it flagged and removed. The window to discover why others hold this stance is slammed shut, turning what could have been a learning moment into a dead end. This echoes Zappa’s warning that reducing complex phenomena to “offensive content” deprives individuals of the friction that deepens understanding.

    Amartya Sen offers a memorable illustration that features three children and one flute. One child insists she should own the flute because she can actually play it, and giving it to anyone else would stifle that musical potential—a utilitarian perspective that maximizes the flute’s use for the greatest overall enjoyment. Another child claims ownership because he made the flute himself; to deny him possession would be an affront to his labor—echoing a libertarian mindset that emphasizes individual property rights. The third child points out that she has no other toys, while the others have plenty—an egalitarian appeal rooted in fairness and need.

    Sen’s parable of the flute (Sen, 2009) illustrates how disagreements often stem from irreconcilable yet valid moral frameworks—some value the labor that produced the flute, some prioritize the needs of the have-nots, and some emphasize the broad benefits to all if the child who can best play it takes possession. Online speech can mirror these clashing values just as starkly, whether in disputes about free expression versus harm reduction, or in controversies that pit egalitarian ideals against strongly held beliefs about individual autonomy. Traditional moderation strategies seek to quell such turmoil by removing provocative content, but this reflex overlooks how certain designs can prevent harmful groupthink from forming in the first place. Democratic discourse hinges on the public’s ability to interpret and evaluate information rather than merely receiving or losing access to it, as Arthur Lupia and Matthew McCubbins (1998) emphasize. Blanket removals can therefore undermine deeper deliberation, obscuring why certain ideas gain traction and how best to counter them.

    When regulators or platform administrators rely on mass takedowns and automated filters, they address truly egregious speech—like hate propaganda or incitements to violence—by erasing it from view. Yet in doing so, they may also hide borderline cases without offering any path for reasoned dialogue and they inadvertently drum up support for conspiracy theories and extremists who cry foul about their freedom of speech being curtailed. “Who are the brain police?” – Frank Zappa. Daniel Kahneman (2011) observes that cognitive biases often incline us toward simple, emotionally charged explanations—precisely the kind conspiracy theorists exploit. In a landscape overflowing with content, an “us versus them” narrative resonates more than a nuanced account of complex moderation dynamics. As Zappa argued in his day, labeling everything “dangerous” blinds us to distinctions between content that calls for condemnation and content that may provoke vital, if uncomfortable, debate. Equally problematic, automated moderation remains opaque, leaving users adrift in a sea of unexplained removals. This disorients people and fosters the “technofeudal” dynamic that Yanis Varoufakis describes, in which a handful of corporate overlords dictate whose words appear and whose vanish from public view (Varoufakis, 2023). Platforms like Facebook and YouTube exemplify this dynamic through their opaque algorithms.

    Reuben Binns (2018) pinpoints a deep rift in so-called “fairness” models: Should platforms enforce demographic parity at the group level or aim for case-by-case judgments? Group fairness often triggers what researchers call allocative harms, whereby entire categories of users are treated according to blanket criteria, overriding personal context. Meanwhile, purely individual approaches risk masking structural inequities beneath a veneer of neutrality. Berk et al. (2018) reveal that nominally protective interventions can backfire, entrenching existing imbalances and excluding certain subgroups in the process.

    Corbett-Davies and Goel (2018) extend these critiques, warning that neat mathematical formulas tend to dodge the thorny trade-offs inherent in real-world scenarios. In content moderation, rigid classification lines rarely distinguish toxic incitement from essential critique or activism. The outcome is a heavy-handed purging of contentious posts in lieu of robust engagement—especially for communities that are already on precarious footing.

    Facebook’s News Feed spotlights emotionally charged posts, provoking knee-jerk reactions instead of thoughtful debate. YouTube’s recommendation engine similarly funnels viewers toward increasingly sensational or one-sided content, making it less likely they’ll encounter alternative perspectives. Underneath these engagement-driven designs lies a deeper issue: the assumption that algorithms can neutrally process and optimize public discourse. Yet, as Boyd & Crawford (2012) warn, big data never just ‘speaks for itself’—it reflects hidden biases in what is collected, how it is interpreted, and whose ideas are amplified. Social media platforms claim to show users what they “want,” but in reality, they selectively reinforce patterns that maximize profit, not deliberation. What looks like an open digital public sphere is, in fact, a carefully shaped flow of content that privileges engagement over nuance. “The empty vessel makes the loudest sound.” –William Shakespeare. In both cases, and even worse in the case of Twitter, the platforms optimize for engagement at the expense of nuanced discussion, skewing users’ experiences toward reaffirmation rather than exploration. The problem isn’t just one of bias—it’s an epistemic failure. Hong & Page (2004) demonstrate that when problem-solving groups lack diverse heuristics, they get stuck in feedback loops, reinforcing the same limited set of solutions. Social media’s homogeneous feeds replicate this dysfunction at scale: the system doesn’t just reaffirm biases; it actively weakens society’s ability to reason through complexity. What should function as an open digital commons instead behaves like a closed ideological marketplace, where the most reactive ideas dominate and alternative perspectives struggle to surface.

    Diakopoulos and Koliska (2017) underscore how opacity in algorithmic decision-making sows distrust, especially when users have no means to contest or even grasp the reasons behind content removals. Meanwhile, Danks and London (2017) argue that bias is not an accidental quirk—it’s baked into the data pipelines and objectives these systems inherit. Tweaking a flawed model does nothing to uproot the deeper scaffolding of inequality. Mittelstadt et al. (2018) label this phenomenon “black-box fairness,” where platforms project an aura of impartiality while stealthily erasing entire points of view, all under the guise of neutral enforcement. Algorithmic opacity is no accident; it’s built into the foundations of digital infrastructure. Burrell (2016) distinguishes three major drivers: corporate secrecy, technical complexity, and user misconceptions. Edwards & Veale (2017) go further, noting that so-called “rights to explanation” often amount to theatrical gestures, revealing little about how moderation decisions are truly made. Users receive sparse summaries that mask deeper biases, leaving them powerless to challenge suspect takedowns. “You have the right to free speech / As long as you’re not dumb enough to actually try it.” –Dead Kennedys.

    Milano, Taddeo, and Floridi (2020) illustrate how recommender systems do more than tailor content; they actively define what enters the public conversation, steering clicks toward certain narratives while quietly sidelining others. This echoes Varoufakis (2023) on technofeudal control: algorithms shape speech with no democratic oversight. Allen (2011) reminds us that privacy isn’t about hoarding personal data—it’s a bedrock for genuine autonomy and civic freedom. Yet as the UK’s Data Science Ethical Framework (2016) shows, “best practices” stay toothless if they lack enforceable governance. The upshot: platforms retain control while individuals navigate curated experiences that corral, rather than liberate, their thinking.

    The Algorithmic Trap: Engagement, Moderation, and Speech Distortion

    If engagement-driven feeds corrupt how people arrive at conclusions, automated moderation controls what they can discuss at all. Relying on algorithmic filtering, platforms increasingly treat speech as a classification problem rather than a social process. Boyd & Crawford (2012) caution that big data’s greatest illusion is its neutrality—its ability to “see everything” while remaining blind to context. Content moderation follows the same logic: broad rules applied without regard for intent, meaning, or deliberative value.

    Floridi (2018) argues that purely compliance-driven moderation—focused on removing “bad” content—fails to address the deeper ethical question of how online spaces should support civic engagement. Automated systems are built for efficiency, not conversation. They eliminate content that could otherwise serve as a basis for debate, treating moral complexity as a bug rather than a feature. Danks and London (2017) maintain that genuine fairness demands more than cosmetic fixes. They propose adaptive, context-aware frameworks, where algorithms are molded by input from the very communities they affect. Rather than chase broad statistical targets, these systems weigh cultural nuances and evolving social norms. Gajane and Pechenizkiy (2018) push a similar notion of “situated fairness,” measuring algorithms by their lived effects, not solely by numeric benchmarks. Cummings (2012) identifies automation bias as a pivotal hazard in algorithmic tools, where people over-trust software outputs, even when intuition or direct evidence suggests otherwise. In content moderation, that leads to an overreliance on machine-driven flags, ignoring the nuance and context behind many posts. Dahl (2018) notes that “black-box” models further blunt accountability, closing off avenues for users to examine or contest the rationale behind takedowns.

    Katell et al. (2020) advocate “situated interventions,” weaving AI into human judgment rather than treating it as an all-knowing arbiter. Manyfold embodies a similar principle by letting users encounter a breadth of diverse arguments rather than being funneled by hidden recommendation systems. Instead of passively ingesting whatever the algorithm decides is “best,” participants engage in a process shaped by varied viewpoints, mitigating the blind spots that purely automated systems can create. In content moderation, a platform might appear balanced in theory while systematically marginalizing particular groups in practice. A truly equitable design, Katell et al. suggest, must weigh social repercussions in tandem with statistical neatness. Even then, many platforms default to minimal legal compliance while neglecting meaningful public deliberation—what Floridi (2018) terms “soft ethics.” By focusing on liability avoidance instead of robust democratic exchange, they foster speech environments that are technically compliant but remain socially dysfunctional.

    Finally, mass takedowns often sweep away borderline but potentially valuable content, chilling open discussion and leaving marginalized communities especially wary. Research shows that blanket removals disproportionately affect LGBTQ+ advocates and political dissidents, who fear being misunderstood or unjustly targeted thanks to biases rooted in both algorithmic systems and social attitudes (Floridi, 2018). “The problem with the world is that the intelligent people are full of doubts, while the stupid ones are full of confidence,” wrote Charles Bukowski, capturing the cruel irony at play.

    Consider Kyrgyzstan, where heightened visibility has spelled grave danger for investigative journalists and LGBTQ+ groups. In 2019, reporters from Radio Azattyk, Kloop, and OCCRP exposed extensive corruption in the customs system—only to face a surge of coordinated online harassment. Meanwhile, local activists returning from international Pride events became victims of doxxing campaigns, receiving death threats once their identities were revealed in domestic media. Despite formal complaints, state officials took no action, embedding a culture of impunity and self-censorship (Landa, 2019). Rather than fostering engagement, algorithmic amplification meant to boost voices merely thrust vulnerable populations into the crosshairs of hostility.

    On top of that, algorithmic profiling compounds these risks by failing to safeguard group privacy, leaving at-risk users open to surveillance or distortion (Milano et al., 2020). Paradoxically, well-intentioned moderation efforts that aim to curb harm can end up smothering critical perspectives—sacrificing open discourse in the process.

    Most digital platforms exacerbate bias, sustain ideological silos, and reward controversy for its own sake, leaving few genuine alternatives for those seeking more than outrage clicks. Manyfold attempts to invert this model by structuring discourse around collective problem-solving rather than friction for profit. Where conventional algorithms shepherd users into echo chambers, Manyfold transforms disagreement into a crucible for better thinking, not an incitement to factional strife.

    Manyfold: Building a More Democratic Digital Commons

    Yet the Manyfold approach demonstrates that speech need not be restricted to preserve safety. Instead of banning precarious ideas, the platform recognizes that the real peril arises when such ideas echo among those already inclined toward them. By steering those posts away from cognitively similar audiences, Manyfold’s design deprives extreme positions of a homogeneous echo chamber. This algorithmic approach ensures that participants who encounter troubling content do so precisely because they hold starkly different stances, collectively challenging the underlying assumptions rather than reinforcing them. In this sense, the “warning label” emerges organically from a chorus of diverse perspectives, not from regulatory edicts that silence speech before anyone can dissect it.

    To understand why this matters, consider Walter Benjamin’s metaphor of translation in The Task of the Translator (Benjamin, 1923). For Benjamin, translation is not merely about transferring words between languages but uncovering latent meanings hidden beneath surface-level communication. Traditional moderation strategies fail at this task, removing provocative posts without context and thereby depriving users of opportunities for mutual understanding and moral growth. Contrast this with Manyfold’s approach, where diverse responses serve as organic “translations” of controversial ideas, helping users interpret their meaning within broader societal debates. By fostering an environment where conflicting viewpoints are presented alongside one another, Manyfold transforms potentially harmful speech into a catalyst for deeper reflection.

    Charles Taylor’s ethic of authenticity (Taylor, 1991) holds that people refine their beliefs by wrestling with opposing perspectives. A skeptic confronted with data on climate change, for instance, might see firsthand accounts from communities grappling with rising sea levels. That experience can provoke deeper questions, moving the skeptic beyond knee-jerk dismissal and guiding her to weigh the moral and practical dimensions of environmental policy.

    This is why we built Manyfold, which foregrounds minority viewpoints rather than letting any single authority determine which voices merit attention. By confronting users with a spectrum of ideas—rather than trapping them in algorithmic bubbles—Manyfold cultivates genuine deliberation. “The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently.”–Friedrich Nietzsche. Such an environment echoes Jürgen Habermas’s Herrschaftsfreier Diskurs (Habermas, 1996), in which no hidden power dynamics dictate who speaks or how ideas circulate, granting participants equal footing to engage in shared inquiry.

    Returning to Amartya Sen’s parable of the flute (Sen, 2009), we observe moral frameworks that vary from maximizing utility to emphasizing fairness or property rights. Digital conflicts mirror these clashes, whether in debates over free expression, harm reduction, or the tension between egalitarian principles and fierce autonomy. Censorship that imposes one moral system alienates those who prefer another. Neither Elon Musk nor a government official can settle these disputes by decree. Manyfold, however, invites conflicting worldviews to coexist and even challenge each other. Instead of quietly sidelining “problematic” perspectives, the platform allows users to explore—or dismantle—controversial ideas in an open forum. As Arthur Lupia and Matthew McCubbins (1998) argue, democracy thrives when citizens can interpret and judge information, not merely gain or lose access to it. Blanket removals obscure why certain ideas flourish and weaken our ability to refute them thoughtfully.

    Luciano Floridi (2018) distinguishes between “hard ethics” grounded in mandatory compliance and “soft ethics” that seeks socially preferable outcomes through design choices. Manyfold leans on soft ethics by weaving empathy, critical thought, and reciprocal inquiry into its algorithms. Participants regularly encounter diverse viewpoints, expanding their horizons and prompting reflection on the assumptions they bring into discussions. This design transcends blunt regulation by embedding a more nuanced ethical philosophy into the platform’s very structure.

    Mariana Mazzucato’s call for mission-oriented innovation (Mazzucato, 2018) challenges policymakers to shape digital spaces around bold societal goals—reducing polarization, for example, or strengthening democracy. Instead of simply outlawing undesirable content, legislators might incentivize platforms to experiment with deliberative tools, demand transparency in how algorithms function, and commission regular audits of platforms’ contributions to civic participation. Such steps shift the conversation from merely policing speech to envisioning the kind of discourse that enriches public life and broadens our collective capabilities.

    Focusing on how platforms enable genuine engagement moves us past blanket prohibitions. In doing so, it treats speech as a catalyst for transformation—even when that transformation feels unsettling. In keeping with Frank Zappa’s insistence on nuance, Taylor’s call for authenticity, and Sen’s acknowledgment of moral pluralism, Manyfold shows how carefully designed algorithms can create a synergy between community well-being and the principle of free expression. By offering an antidote to corporate dominion and the “technofeudal” dynamic described by Varoufakis (2023), Manyfold orchestrates a space where varied viewpoints challenge one another beyond easy certainties. In turn, it strengthens the communal fabric on which democracy relies.

    If digital platforms steer the trajectory of public life, the question isn’t whether we regulate or reform them—but whether we dare to reinvent them from the ground up.

    References

    Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M. and Robinson, D.G., 2020. Roles for computing in social change. Available at: https://arxiv.org/pdf/1912.04883.pdf [Accessed 24 Aug 2020].

    Allen, A., 2011. Unpopular Privacy: What Must We Hide? Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195141375.001.0001

    Berk, R.A., Heidari, H., Jabbari, S., Kearns, M. and Roth, A., 2018. Fairness in criminal justice risk assessments: the state of the art. Sociological Methods & Research, 47(3), pp.437-464. https://doi.org/10.1177/0049124118782533

    Benjamin, W., 1923. The Task of the Translator. In: Illuminations.

    Binns, R., 2018. Fairness in machine learning: lessons from political philosophy. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency, pp.149–159. https://doi.org/10.1145/3178876.3186091

    Blyth, C.R., 1972. On Simpson’s paradox and the sure-thing principle. Journal of the American Statistical Association, 67(338), pp.364–366. https://doi.org/10.1080/01621459.1972.10482387

    boyd, d. and Crawford, K., 2012. Critical questions for big data: provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), pp.662–679. https://doi.org/10.1080/1369118X.2012.678878

    Bukowski, C., 1983. Tales of Ordinary Madness. City Lights Publishers.

    Burrell, J., 2016. How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data & Society, 3(1), p.2053951715622512. https://doi.org/10.1177/2053951715622512

    Cabinet Office, Government Digital Service, 2016. Data Science Ethical Framework. Available at: https://www.gov.uk/government/publications/data-science-ethical-framework

    Corbett-Davies, S. and Goel, S., 2018. The measure and mismeasure of fairness: a critical review of fair machine learning. arXiv preprint arXiv:1808.00023.

    Dahl, E., 2018. Algorithmic accountability: on the investigation of black boxes. Digital Culture & Society, 4(2), pp.1–23. https://doi.org/10.14361/dcs-2018-0201

    Danks, D. and London, A.J., 2017. Algorithmic bias in autonomous systems. Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), pp.4691–4697. https://doi.org/10.24963/ijcai.2017/654

    Dead Kennedys, 1980. Police Truck. On Fresh Fruit for Rotting Vegetables [Album]. Cherry Red Records.

    Diakopoulos, N. and Koliska, M., 2017. Algorithmic transparency in the news media. Digital Journalism, 5(7), pp.809–828. https://doi.org/10.1080/21670811.2016.1208053

    Edwards, L. and Veale, M., 2017. Slave to the algorithm? Why a ‘right to an explanation’ is probably not the remedy you are looking for. Duke Law & Technology Review, 16, pp.18–84.

    Feddersen, T.J. and Pesendorfer, W., 1999. Abstention in elections with asymmetric information and diverse preferences. American Political Science Review, 93(2), pp.381–398. https://doi.org/10.2307/2585770

    Floridi, L., 2016. Mature information societies—a matter of expectations. Philosophy & Technology, 29(1), pp.1–4. https://doi.org/10.1007/s13347-015-0211-7

    Floridi, L., 2018. Soft ethics and the governance of the digital. Philosophy & Technology, 31(1), pp.1–8. https://doi.org/10.1007/s13347-018-0303-9

    Hong, L. and Page, S.E., 2004. Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences, 101(46), pp.16385–16389. https://doi.org/10.1073/pnas.0403723101

    Kahneman, D., 2011. Thinking, Fast and Slow. Farrar, Straus and Giroux.

    Katell, M., Young, M., Herman, B., Guetler, V., Tam, A., Ekstrom, J., et al., 2020. Toward situated interventions for algorithmic equity. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp.45–55. https://doi.org/10.1145/3351095.3372874

    Landa, D., 2019. Information, knowledge, and deliberation. PS: Political Science & Politics, 52(4), pp.642–645. https://doi.org/10.1017/S1049096519000810

    Lupia, A. and McCubbins, M.D., 1998. The Democratic Dilemma: Can Citizens Learn What They Need to Know? Cambridge University Press.

    Mazzucato, M., 2018. The Value of Everything: Making and Taking in the Global Economy. Penguin Books.

    Milano, S., Taddeo, M. and Floridi, L., 2020. Recommender systems and their ethical challenges. AI & Society, 35(4), pp.957–967. https://doi.org/10.1007/s00146-020-00952-6

    Nietzsche, F., 1887. On the Genealogy of Morals. Available at: https://www.gutenberg.org/ebooks/52319 [Accessed 20 Feb 2025].

    Sen, A., 2009. The Idea of Justice. Harvard University Press.

    Shakespeare, W., 1599. Henry V, Act 4, Scene 4. In: The Complete Works of William Shakespeare. Available at: https://www.gutenberg.org/ebooks/100 [Accessed 20 Feb 2025].

    Taylor, C., 1991. The Ethics of Authenticity. Harvard University Press.

    Varoufakis, Y., 2023. Technofeudalism. Penguin Books. Available at: https://www.penguin.co.uk/books/451795/technofeudalism-by-varoufakis-yanis/9781529926095

    Zappa, F., 1985. Senate Hearing Testimony on Record Labeling. United States Senate Committee on Commerce, Science, and Transportation.

    Zappa, F., 1978. The Adventures of Greggery Peccary. On Studio Tan [Album]. Warner Bros. Records.

    Zappa, F., 1966. Who Are the Brain Police? On Freak Out! [Album]. Verve Records.

  • The UK Government’s AI Playbook: Progress, Power, and Purpose

    The UK Government’s AI Playbook for 2025 (UK Government, 2025) aspires to make Britain a global leader in artificial intelligence. Although it commendably emphasizes innovation, expanded compute capacity, and AI integration in public services, the document raises questions about whether it fully aligns with broader societal needs. Viewed through the lenses of ethics, equity, and governance, the playbook, in my view, both excels and stumbles in addressing the ethical, social, and political implications of AI.


    Compute Capacity: Efficiency vs. Sustainability

    The playbook envisions a twentyfold increase in compute capacity by 2030, in part through AI Growth Zones (UK Government, 2025). This emphasis on scaling up infrastructure parallels the computational demands of advanced AI models, which have hitherto risen steadily. Yet it risks overshadowing the benefits of algorithmic ingenuity—a possibility illustrated by DeepSeek’s R1 model, which achieves near parity in reasoning with top-tier models at a fraction of the computational and carbon cost (DeepSeek, 2024), as I have already pointed out here. This finding suggests that brute force is not the sole path to progress.

    Luciano Floridi’s concept of environmental stewardship points to the importance of developing technology responsibly (Floridi, 2014). Although the playbook mentions renewable energy, it lacks firm commitments to carbon neutrality, and it fails to recognize rival uses for such energy; even renewable energy isn’t free. Without enforceable sustainability targets, the rapid expansion of data centers may undermine ecological well-being. This concern resonates with Amartya Sen’s focus on removing obstacles to human flourishing (Sen, 1999): if AI is meant to serve society over the long term, it should do so without depleting environmental resources. In fact, AI can and should help to enhance biodiversity and to decarbonize our economies!


    Innovation for Public Good: Missions Over Markets

    While the playbook frames innovation as a cornerstone of national strategy, it falls short of setting specific missions that address urgent societal challenges. Mariana Mazzucato argues that invention for its own sake often enriches existing power structures instead of tackling critical issues like climate adaptation, public health, and digital inclusion (Mazzucato, 2018). Without clearly defined missions, even groundbreaking discoveries can deepen inequities rather than reduce them.

    The proposed £14 billion in private-sector data centers underscores a reliance on corporate partnerships, echoing Shoshana Zuboff’s caution about surveillance capitalism (Zuboff, 2019). These collaborations might prioritize profit unless they include clear standards of accountability and shared ownership. Building in public stakes, as Mazzucato recommends, could align AI development more closely with social goals. Likewise, participatory governance frameworks—anchored in Floridi’s ethics-by-design—would ensure that data usage reflects collective values, not just corporate interests (Floridi, 2014).


    Public Services and Democratic Participation: Empowerment or Alienation?

    Plans to integrate AI into public services—such as NHS diagnostics and citizen consultations—are among the playbook’s most promising proposals. Yet they merit caution. For instance, while AI-powered healthcare diagnostics could expand access, digital exclusion persists without sufficient broadband coverage or user training. Following Sen (1999), true progress lies in increasing the range of freedoms that people can exercise, and this often requires more than technological fixes alone.

    Floridi’s concept of the infosphere reminds us that AI restructures how people interact and make decisions (Floridi, 2014). Tools such as the i.AI Consultation Analysis Tool risk reducing nuanced human input to algorithmically processed data, potentially alienating users from democratic processes. A participatory design approach would help prevent such alienation by incorporating public input from the outset and preserving context within each consultation (our work at Towards People goes in that direction).


    Equity and Inclusion: Bridging Gaps or Reinforcing Barriers?

    Although the playbook mentions upskilling programs like Skills England, it fails to address the systemic forces that marginalize certain groups in an AI-driven economy. Technical training alone might not suffice. Pairing skill-building with community-based AI literacy initiatives could foster trust while mitigating bias in AI systems. Meanwhile, the document’s brief nod to fairness in AI regulation overlooks deeper biases—rooted in datasets and algorithms—that perpetuate discrimination. Zuboff (2019) warns that opaque processes can exclude minority voices, particularly when synthetic data omits their concerns. Regular audits and bias-mitigation frameworks would bolster equity and align with the pursuit of justice; yes, we should still care about that.


    Strengths Worth Celebrating

    Despite these gaps, the playbook contains laudable goals. Its commitment to sovereign AI capabilities demonstrates an effort to reduce dependence on external technology providers, promoting resilience (UK Government, 2025). Similarly, the proposal to incorporate AI in public services—if thoughtfully managed—could enhance service delivery and public well-being. With the right checks and balances, these initiatives can genuinely benefit society.


    Conclusion: Toward a Holistic Vision

    If the UK aspires to lead in AI, the playbook must move beyond infrastructure and economic growth to incorporate ethics, democratic engagement, and social equity. Emphasizing ethics-by-design, participatory governance, and inclusive empowerment would position AI to expand freedoms rather than reinforce existing barriers. Sen’s work remains a fitting guide: “Development consists of the removal of various types of unfreedoms that leave people with little choice and little opportunity of exercising their reasoned agency” (Sen, 1999). By centering AI policies on removing these unfreedoms, the UK can ensure that technological advancement aligns with the broader project of human flourishing.


    References

    DeepSeek, 2024. “DeepSeek R1 Model Achieves Near Reasoning Parity with Leading Models.” Available at: https://www.deepseek.com/r1-model [Accessed 11 February 2025].

    Floridi, L., 2014. The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford University Press.

    Mazzucato, M., 2018. The Value of Everything: Making and Taking in the Global Economy. Penguin Books.

    Sen, A., 1999. Development as Freedom. Oxford University Press.

    UK Government, 2025. AI Playbook for the UK Government. Available at: https://assets.publishing.service.gov.uk/media/67a4cdea8259d52732f6adeb/AI_Playbook_for_the_UK_Government__PDF_.pdf [Accessed 11 February 2025].

    Zuboff, S., 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books.

  • From Carbon Footprints to Sensitive Data—How Diversity in Large Language Models Elevates Ethics and Performance through Collective Intelligence

    Humanity has long grappled with the question of how best to combine many minds into one coherent whole—whether through bustling marketplaces or grand assemblies of knowledge. Today, we find ourselves at a watershed where that same pursuit of unity is taking shape in ensembles of artificial minds (LLMs in particular). In the spirit of Aristotle’s maxim that “the whole is greater than the sum of its parts,” we write a new chapter: Ensembles of artificial minds, composed of multiple specialized models, each carrying its own fragment of insight, yet collectively amounting to more than any monolithic solution could achieve. In that sense, we step closer to Teilhard de Chardin’s vision of a “noosphere,” a shared field of human thought, only now augmented by a chorus of machine intelligences (Teilhard de Chardin, 1959).


    1. Collective Intelligence: Lessons from Humans, Applications for AI

    Thomas Malone and Michael Bernstein remind us that collective intelligence emerges when groups “act collectively in ways that seem intelligent” (Malone & Bernstein, 2024). Far from being a mere quirk of social behavior, this phenomenon draws on time-honored principles:

    1. Diversity of Expertise: Mirroring John Stuart Mill’s argument that freedom of thought fuels intellectual progress (Mill, 1859), specialized models can enrich AI ecosystems. Qwen2.5-Max excels in multilingual text, while DeepSeek-R1 brings cost-efficient reasoning—together forming a robust “team,” much like how varied skill sets in human groups enhance overall performance.
    2. Division of Labor: Just as Adam Smith championed the division of labor to optimize productivity, AI architectures delegate tasks to the model best suited for them. Tools like LangGraph orchestrate these models in real time, ensuring that the right expertise is summoned at the right moment.

    Picture a climate research scenario: Qwen2.5-Max translates multilingual emission reports, DeepSeek-R1 simulates future carbon footprints, and a visual model (e.g., Stable Diffusion) generates compelling graphics. By combining these capabilities, we circumvent the bloat (and carbon emissions) of giant, one-size-fits-all models—realizing more efficient, collaborative intelligence.
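
    To make this division of labor concrete, here is a minimal sketch of how such an orchestration could be wired up with LangGraph. The node functions are hypothetical stand-ins for calls to the specialized models mentioned above, not a description of an existing pipeline.

    ```python
    # Minimal LangGraph sketch of the climate-research scenario described above.
    # The node functions are placeholders for calls to specialized models.
    from typing import TypedDict
    from langgraph.graph import StateGraph, END


    class ClimateState(TypedDict, total=False):
        reports: list[str]       # raw multilingual emission reports
        translations: list[str]  # English translations (e.g. from a multilingual model)
        forecast: str            # reasoning output (e.g. from a cost-efficient reasoner)
        chart: bytes             # rendered graphic (e.g. from an image model)


    def translate_reports(state: ClimateState) -> ClimateState:
        # Placeholder: call a multilingual model to translate each report.
        return {"translations": [f"[EN] {r}" for r in state["reports"]]}


    def simulate_footprint(state: ClimateState) -> ClimateState:
        # Placeholder: call a reasoning model on the translated reports.
        return {"forecast": f"Projection based on {len(state['translations'])} reports"}


    def render_chart(state: ClimateState) -> ClimateState:
        # Placeholder: call an image model to visualize the forecast.
        return {"chart": state["forecast"].encode()}


    graph = StateGraph(ClimateState)
    graph.add_node("translate", translate_reports)
    graph.add_node("simulate", simulate_footprint)
    graph.add_node("visualize", render_chart)
    graph.set_entry_point("translate")
    graph.add_edge("translate", "simulate")
    graph.add_edge("simulate", "visualize")
    graph.add_edge("visualize", END)

    app = graph.compile()
    result = app.invoke({"reports": ["Bericht über Emissionen 2024", "Rapport sur les émissions"]})
    ```

    Each node could just as easily dispatch to a different provider or a self-hosted model; the graph itself is indifferent to where the expertise lives, which is the point of the division-of-labor argument.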


    2. Cost & Carbon Efficiency: Beyond the Scaling Obsession

    Hans Jonas (1979) urged us to approach technology with caution, lest we mortgage our planet’s future. Today’s AI industry, enthralled by the race for ever-larger models, invites precisely the ecological perils Jonas warned against—ballooning compute costs, growing data-center footprints, and proprietary “Stargate” projects fueled by staggering resources.

    A collective antidote emerges in the form of smaller, specialized models. By activating only context-relevant parameters (as DeepSeek-R1 does via Mixture of Experts), we not only reduce computational overhead but also diminish the associated carbon impact. Qwen2.5-Max’s open-source ethos, meanwhile, fosters broader collaboration and lowers barriers to entry, allowing diverse research communities—from startups to universities—to shape AI’s future without surrendering to entrenched power structures.


    3. Sensitive Data: Privacy Through Self-Hosted Diversity

    Michel Foucault (1975) cautioned that centralized systems often drift into oppressive surveillance. In AI, this concern materializes when organizations hand over sensitive data to opaque external APIs. A more ethical path lies in self-hosted, specialized models. Here, the pillars of privacy and autonomy stand firm:

    • Local Deployment: Running Llama 3 or BioBERT on in-house servers safeguards patient records, financial transactions, or other confidential data.
    • Hybrid Workflows: When faced with non-sensitive tasks, cost-efficient external APIs can be tapped; for sensitive tasks, a local model steps in.

    Such an arrangement aligns with Emmanuel Levinas’s moral philosophy, prioritizing the dignity and privacy of individuals (Levinas, 1969). A healthcare provider, for instance, might integrate a self-hosted clinical model for patient data anonymization and rely on cloud-based computation for less critical analyses. The result is a balanced interplay of trust, efficiency, and ethical responsibility.
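
    As an illustration of such a hybrid workflow, the sketch below routes sensitive prompts to a self-hosted model and everything else to an external API. The endpoints, the keyword heuristic, and the payload format are assumptions made for the sake of the example; a real deployment would classify sensitivity far more carefully.

    ```python
    # Illustrative routing for the hybrid workflow described above: sensitive
    # requests stay on a self-hosted model, everything else may use an external API.
    # Endpoints, model names, and the is_sensitive() heuristic are assumptions.
    import re
    import requests

    LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"       # e.g. a self-hosted Llama 3 server
    EXTERNAL_ENDPOINT = "https://api.example.com/v1/chat/completions"  # hypothetical external API

    SENSITIVE_PATTERNS = [r"\bpatient\b", r"\bdiagnosis\b", r"\biban\b", r"\bssn\b"]


    def is_sensitive(text: str) -> bool:
        """Crude keyword heuristic; a real system would use a classifier or data labels."""
        return any(re.search(p, text, re.IGNORECASE) for p in SENSITIVE_PATTERNS)


    def route_completion(prompt: str, api_key: str | None = None) -> str:
        """Send sensitive prompts to the local model, the rest to the external API."""
        if is_sensitive(prompt):
            url, headers = LOCAL_ENDPOINT, {}
        else:
            url, headers = EXTERNAL_ENDPOINT, {"Authorization": f"Bearer {api_key}"}
        response = requests.post(
            url,
            headers=headers,
            json={"model": "default", "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]
    ```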


    4. Geopolitical & Cultural Resilience

    Reliance on models from a single country or corporation risks embedding cultural biases that replicate the hegemony Kant (1795) so vehemently questioned. By contrast, open-source initiatives like France’s Mistral or the UAE’s Falcon allow local developers to tailor AI systems to linguistic nuances and social norms. This approach echoes Amartya Sen’s (1999) belief that technologies must expand real freedoms, not merely transplant foreign paradigms into local contexts. Fine-tuning through LoRA (Low-Rank Adaptation) further tailors these models, ensuring that no single vantage point dictates the conversation.
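
    For readers curious what such adaptation looks like in practice, here is a minimal LoRA fine-tuning setup using Hugging Face’s PEFT library. The base model and hyperparameters are illustrative assumptions rather than recommendations; any open-weights causal language model could stand in.

    ```python
    # Minimal LoRA setup with Hugging Face PEFT, sketching the adaptation step above.
    # The base model name and hyperparameters are illustrative assumptions.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base_model = "mistralai/Mistral-7B-v0.1"  # assumption: any open-weights causal LM

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model)

    lora_config = LoraConfig(
        r=8,                                   # rank of the low-rank update matrices
        lora_alpha=16,                         # scaling factor for the update
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],   # attention projections to adapt
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the small adapter matrices are trainable
    # From here, the adapter can be trained on a local, culturally specific corpus
    # with a standard Trainer loop and shipped as a lightweight add-on to the base model.
    ```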


    5. The Human-AI Symbiosis

    Even as AI models excel in bounded tasks, human judgment remains a lighthouse guiding broader moral and strategic horizons. Hannah Arendt’s (1958) celebration of action informed by reflective thought resonates here: we depend on human insight to interpret results, set objectives, and mitigate biases. Rather than supplanting human creativity, AI can complement it—together forging a potent hybrid of reason and ingenuity.

    Malone’s collective intelligence framework (Malone & Bernstein, 2024) can inform a vision of a dance between AI agents and human collaborators, where each movement enhances the other. From brainstorming sessions to policy decisions, such symbiosis transcends the sum of its parts, moving us closer to a robust, pluralistic future for technology.


    Conclusion: Toward a Collective Future

    At this turning point, we have a choice: pursue more monolithic, carbon-hungry models, or embrace a tapestry of diverse, specialized systems that lighten our ecological load while enriching our ethical stance. This approach fosters sustainability, privacy, and global inclusivity—foundations for an AI ecosystem that truly serves humanity. In Martin Buber’s (1923) terms, we seek an “I–Thou” relationship with our machines, one grounded in reciprocity and respect rather than domination.

    Call to Action
    Explore how open-source communities (Hugging Face, Qwen2.5-Max, etc.) and orchestration tools like LangGraph can weave specialized models into your existing workflows. The question isn’t merely whether AI can do more—it’s how AI, in diverse and orchestrated forms, can uphold our ethical commitments while illuminating new frontiers of collaborative intelligence.


    References

    Arendt, H. (1958) The Human Condition. Chicago: University of Chicago Press.
    Buber, M. (1923) I and Thou. Edinburgh: T&T Clark.
    Foucault, M. (1975) Discipline and Punish: The Birth of the Prison. New York: Vintage Books.
    Jonas, H. (1979) The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago: University of Chicago Press.
    Kant, I. (1795) Perpetual Peace: A Philosophical Sketch. Reprinted in Kant: Political Writings, ed. H.S. Reiss. Cambridge: Cambridge University Press, 1970.
    Levinas, E. (1969) Totality and Infinity: An Essay on Exteriority. Pittsburgh: Duquesne University Press.
    Malone, T.W. & Bernstein, M.S. (2024) Collective Intelligence Handbook. MIT Press. Available at: [Handbook Draft].
    Mill, J.S. (1859) On Liberty. London: John W. Parker and Son.
    Sen, A. (1999) Development as Freedom. Oxford: Oxford University Press.
    Teilhard de Chardin, P. (1959) The Phenomenon of Man. New York: Harper & Row.

    Additional references cited within the text or footnotes:
    OECD (n.d.) Artificial Intelligence in Science. Available at: https://www.oecd.org/.
    LinkedIn (n.d.) Collective Intelligence, AI, and Innovation. Available at: https://www.linkedin.com/.
    AI Model Collapse: Why Diversity and Inclusion in AI Matter? (n.d.).
    Autodesk (n.d.) Diversity in AI Is a Problem—Why Fixing It Will Help Everyone. Available at: https://www.autodesk.com/.
    Atlan (n.d.) Collective Intelligence: Concepts and Reasons to Choose It. Available at: https://atlan.com/blog/.
    Why Diversity in AI Makes Better AI for All: The Case for Inclusivity (n.d.).
    GOV.UK (n.d.) International Scientific Report on the Safety of Advanced AI. Available at: https://www.gov.uk/.

  • Toward a Habermas Machine: Philosophical Grounding and Technical Architecture

    Philosophers from Socrates to Bertrand Russell have underscored that genuine agreement arises not from superficial accord but from reasoned dialogue that harmonizes diverse viewpoints. Jürgen Habermas’s theory of communicative action refines this principle into a vision of discourse aimed at consensus through rational argument. Recently, a paper in Science by Michael Henry Tessler et al. (2024) (“AI can help humans find common ground in democratic deliberation”) echoes this idea by describing a “Habermas Machine”—an AI mediator capable of synthesizing individual opinions and critiques to foster mutual understanding. While their study focuses on social and political issues, the underlying concepts extend readily to organizational contexts and knowledge management.

    In our own effort to realize a Habermas-inspired mediator, we employ an architecture that leverages BigQuery as a data warehouse built on a Data Vault schema, managed and orchestrated with dbt (Data Build Tool). The system ingests communications from platforms such as Slack and Gmail, breaking each message into paragraph-level segments for individual vector embeddings. These embeddings are then stored in BigQuery, forming a semantic layer that augments traditional relational queries with more nuanced linguistic searches. In the below diagram, you can see how messages flow from raw capture to an enriched, queryable knowledge graph.

    [Diagram: message flow from raw Slack/Gmail capture to an enriched, queryable knowledge graph in BigQuery]
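
    To show what that ingestion path might look like in code, here is a simplified sketch: it splits a message into paragraphs, embeds each one, and appends rows to a BigQuery table. The table id, column names and the choice of OpenAI embeddings are illustrative assumptions, not the production Data Vault schema.

    ```python
    # Simplified ingestion sketch: paragraph-level embeddings appended to BigQuery.
    from google.cloud import bigquery
    from langchain_openai import OpenAIEmbeddings

    client = bigquery.Client()
    embedder = OpenAIEmbeddings()
    TABLE = "my-project.comms_vault.message_paragraphs"  # hypothetical table id

    def ingest_message(message_id: str, author: str, sent_at: str, text: str) -> None:
        # Split into paragraph-level segments, embed each, and stream the rows in.
        paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
        vectors = embedder.embed_documents(paragraphs)  # one embedding per paragraph
        rows = [
            {
                "message_id": message_id,
                "paragraph_index": i,
                "author": author,
                "sent_at": sent_at,
                "content": para,
                "embedding": vec,  # stored as an ARRAY<FLOAT64> column
            }
            for i, (para, vec) in enumerate(zip(paragraphs, vectors))
        ]
        errors = client.insert_rows_json(TABLE, rows)
        if errors:
            raise RuntimeError(f"BigQuery insert failed: {errors}")
    ```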

    This structural framework, however, only solves part of the puzzle. We then introduce LangGraph agents, enhanced by tooling such as LangSmith, to marry textual and structural data. These agents can retrieve messages based not only on metadata (author, timestamp) but also on thematic or conceptual overlap, enabling them to detect undercurrents of agreement or contradiction in vast message sets. In a second diagram, below, you can see how agent-mediated queries integrate semantic vectors, user roles, and conversation timelines to pinpoint salient insights or latent conflicts that humans might overlook.

    [Diagram: agent-mediated queries combining semantic vectors, user roles, and conversation timelines]
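
    One way such an agent-facing tool might be wired is sketched below, assuming the paragraph table from the previous snippet and using BigQuery's ML.DISTANCE for cosine distance; the identifiers are again illustrative rather than our production code.

    ```python
    # Sketch of an agent tool that blends metadata filters with semantic similarity.
    from google.cloud import bigquery
    from langchain_core.tools import tool
    from langchain_openai import OpenAIEmbeddings

    client = bigquery.Client()
    embedder = OpenAIEmbeddings()

    @tool
    def find_related_messages(query: str, author: str | None = None, limit: int = 5) -> list[dict]:
        """Return message paragraphs thematically closest to `query`, optionally filtered by author."""
        query_vec = embedder.embed_query(query)
        sql = f"""
            SELECT message_id, author, sent_at, content,
                   ML.DISTANCE(embedding, @query_vec, 'COSINE') AS distance
            FROM `my-project.comms_vault.message_paragraphs`
            WHERE (@author IS NULL OR author = @author)
            ORDER BY distance
            LIMIT {int(limit)}
        """
        job = client.query(sql, job_config=bigquery.QueryJobConfig(query_parameters=[
            bigquery.ArrayQueryParameter("query_vec", "FLOAT64", query_vec),
            bigquery.ScalarQueryParameter("author", "STRING", author),
        ]))
        return [dict(row) for row in job.result()]
    ```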

    The philosophical impetus behind this design lies in extending what Habermas posits for face-to-face discourse—an “ideal speech situation”—to distributed, digitally mediated communication. Like the “Habermas Machine” described by Tessler et al., our system provides prompts and syntheses that help participants recognize areas of accord and legitimize points of dissent, rather than imposing a solution from on high. A final diagram, below, depicts a feedback loop, where humans validate or refute AI-suggested statements, gradually converging on well-supported, collectively endorsed conclusions.

    [Diagram: human-in-the-loop feedback cycle in which participants validate or refute AI-suggested statements]
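
    The loop itself can be stated very simply; the sketch below is deliberately abstract, with `draft_synthesis` and `collect_feedback` standing in for an LLM call and a Slack poll or form respectively.

    ```python
    # Abstract sketch of the deliberation loop the diagram depicts.
    from typing import Callable

    def deliberate(opinions: list[str],
                   draft_synthesis: Callable[[list[str], list[str]], str],
                   collect_feedback: Callable[[str], list[str]],
                   max_rounds: int = 3) -> str:
        critiques: list[str] = []
        statement = ""
        for _ in range(max_rounds):
            statement = draft_synthesis(opinions, critiques)  # AI proposes a common-ground statement
            critiques = collect_feedback(statement)           # humans endorse or object, with reasons
            if not critiques:                                  # no outstanding objections: provisional consensus
                break
        return statement
    ```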

    Ultimately, these tools do not replace human judgment; they aspire to enhance it. By combining robust data engineering on BigQuery with sophisticated natural-language reasoning via LangGraph agents, we strive to ground the ideal of rational consensus in a practical, scalable system. Inspired by recent research and Habermasian philosophy, we envision AI as a diplomatic catalyst—one that quietly structures and clarifies discourse, guiding us toward common ground without diluting the richness of individual perspectives.