Breaking the Echo Chamber: A Blueprint for Authentic Online Deliberation


Introduction: The Digital Speech Crisis

A few weeks ago, I found myself catching up with an old college friend—let’s call him Ezra. He used to be the kind of person who devoured books like The Metaphysical Club, and his recommendations routinely influenced me. His nuanced, questing intellect once made every conversation feel alive with possibility. This time, though, I barely recognized him. He was rattling off dire warnings about Canada’s Bill C-63 and the EU’s Digital Services Act, insisting these regulations were part of a grand conspiracy to muzzle dissent—especially for people like him, a Jew who feared what he called “silencing tactics.” Then he flipped the script and lambasted “shadowy forces” bent on “canceling” him for his views.

Observing Ezra—a friend once fascinated by complexity—announce so urgently that “free speech” stands on the brink illustrates how readily we gravitate toward a battle cry against censorship. The Greek economist and politician Yanis Varoufakis advances the notion of technofeudalism, which points to a subtler, more encompassing shift: private companies now construct vast arenas for public discourse through data collection and algorithmic design, shaping speech and belief in ways that reinforce their own authority (Varoufakis, 2023). Ezra instinctively recognizes this menace, yet he misdiagnoses it: the threat is less about policymakers legislating speech and more about newly emerged barons silently dictating the terms of discourse.

Lawmakers have responded to the threat this manipulation poses by crafting legislation such as Bill C-63, the EU’s Digital Services Act, and the UK’s Online Safety Bill. Those bills focus on lists of prohibited behaviors and moderation protocols. Such laws address destructive content but fail to articulate a shared vision of digital life. They specify what must be reported, flagged, or removed, when they should instead define constructive goals for civic engagement and personal autonomy; after all, legislators are elected for their visions. Silicon Valley entrepreneurs, for their part, champion “innovation” for innovation’s sake while touting free speech; in practice they channel user data to intensify engagement, refine algorithms, and reinforce their platforms’ influence. They thus fill the void left by the absence of a democratically shaped vision with a vision of their own, one that carries no democratic mandate. “A trend monger is a person who dreams up a trend… and spreads it throughout the land, using all the frightening little skills that science has made available!” –Frank Zappa.

Elon Musk, for example, oversees a platform where more than a hundred million people interact within rules he and his teams devise. Mark Zuckerberg refines Meta’s systems to sustain user involvement and expand a massive empire of everyday engagements. These structures function as formidable strongholds, echoing the technofeudal balance of power Varoufakis describes. Although “free speech” often appears intact as a principle, hidden mechanisms and corporate incentives decide which ideas gain traction, how they spread, and to whom they matter.

Manyfold, a social network I co-founded with Neville Newey, treats discourse as a form of collective problem-solving rather than an engagement-driven spectacle. Instead of merely multiplying viewpoints, Manyfold aims to make speech serve collective reasoning rather than flashy performance. Hafer and Landa (2007, 2013, 2018) show that genuine deliberation isn’t just an aggregate of opinions—it emerges from institutional frameworks that deter polarization and induce real introspection. If those structures fail, people drift away from public debate. Feddersen and Pesendorfer (1999) find that voters abstain when they think their efforts won’t shift the outcome, mirroring how social-media users retreat when their voices go unheard amid viral noise.

Landa (2015, 2019) underscores that speech is inherently strategic: individuals tailor messages to sway an audience within system-imposed constraints. Conventional platforms reward shock value and conformity. Manyfold, by contrast, flips these incentives—replacing knee-jerk outrage with problem-solving dialogue fueled by cognitive diversity. Speech becomes less about self-promotion and more about refining a shared understanding of complex issues. Goodin and Spiekermann (2018) argue that a healthy democracy prizes epistemic progress—that is, advancing collective understanding—more than simple audience metrics. Manyfold embodies this ethos by prioritizing ideational variety over raw engagement.

Landa and Meirowitz (2009) show how well-designed environments elevate the quality of public reasoning. By intentionally confronting users with unfamiliar or underrepresented standpoints, Manyfold creates the kind of friction that refines thought instead of fracturing it. The platform thus departs from popularity-driven paradigms, allowing fresh or seldom-heard perspectives to surface alongside established ones. In doing so, it champions deeper inquiry and a richer exchange of ideas, steering us away from a race to the loudest shout and toward a more thoughtful digital sphere.

Instead of optimizing for clicks or locking users into echo chambers, Manyfold’s algorithms maximize cognitive diversity. Hong & Page (2004) show that when groups incorporate a range of cognitive heuristics, they arrive at better solutions than even a group of individually brilliant but homogeneous thinkers. Manyfold applies this insight to online speech, ensuring that conversations remain exploratory rather than self-reinforcing. Minority viewpoints are surfaced, so no single entity decides who deserves an audience. This design embraces Jürgen Habermas’s concept of discourse free from domination (Habermas, 1996), presenting a space that encourages empathy, critical thought, and shared inquiry. Rather than reinforcing the routines of a tech industry propelled by data extraction, Manyfold aspires to deepen the human capacity for understanding and dialogue.
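
To make the contrast with engagement-ranked feeds concrete, the sketch below shows one way a diversity-maximizing ranker could work. It is a minimal illustration, not Manyfold’s actual implementation: it assumes each post carries a precomputed “viewpoint vector” (for example, from a text-embedding model) and greedily selects whichever post is, on average, farthest from everything the reader has already seen.

```python
import numpy as np

def viewpoint_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two viewpoint vectors (1.0 = maximally different)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 1.0 if denom == 0.0 else 1.0 - float(np.dot(a, b) / denom)

def diversity_first_feed(candidates, already_seen, k=10):
    """Greedily build a feed that maximizes viewpoint spread.

    candidates:   list of (post_id, viewpoint_vector) pairs
    already_seen: viewpoint vectors of posts the reader has recently seen
    Instead of ranking by predicted clicks, each step picks the post whose
    average distance from everything already shown is largest.
    """
    pool = dict(candidates)
    shown = list(already_seen)
    feed = []
    while pool and len(feed) < k:
        def novelty(vec):
            if not shown:
                return 1.0  # nothing seen yet: every viewpoint is equally novel
            return sum(viewpoint_distance(vec, s) for s in shown) / len(shown)
        pick = max(pool, key=lambda pid: novelty(pool[pid]))
        feed.append(pick)
        shown.append(pool.pop(pick))
    return feed
```

Given three near-identical posts and one dissenting one, the dissenting post surfaces early even if its predicted engagement is lower, which is the opposite of what a click-optimized ranking would do.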

Varoufakis’s critique of technofeudalism highlights the urgency of reclaiming our digital commons from corporate overlords. Preserving speech in principle means little if individuals rarely see ideas that don’t align with a platform’s opaque priorities. An affirmative vision of technology places nuanced conversation and collective progress at the core of design choices. Manyfold advances this vision of collaboration and exploration rather than funneling human interaction into corridors of control. In that sense, it is an experiment in how digital spaces can foster genuine agency, offering an antidote to the feudal trends reshaping our online lives.

Regulatory Shortfalls: From Frank Zappa to Sen’s Flute

In 1985, Frank Zappa testified before the U.S. Senate to protest the Parents Music Resource Center’s push for warning labels on albums deemed “explicit.” Though that debate might seem worlds away from modern digital regulations like Bill C-63, the EU’s Digital Services Act, and the UK’s Online Safety Bill, Zappa’s stance resonates: labels and blanket bans can flatten cultural nuance and sidestep the crucial question of how creative or controversial content might foster dialogue and moral discernment. These new regulations aim to curb harm, yet they rarely outline ways for users to engage with conflict in ways that spark reflection and growth. As Cass Sunstein (2017) cautions, overly broad or inflexible measures can stifle open discourse by driving heated discussions underground. Rather than encouraging respectful debate, heavy-handed rules may suppress valuable viewpoints and sow mistrust among users who perceive moderation as opaque or punitive.

Charles Taylor’s “ethic of authenticity” (Taylor, 1991) offers a way to understand why mere prohibition leaves a gap. People refine their views by confronting perspectives that challenge them, whether they find these views enlightening or appalling. Imagine someone stumbling on a troubling post at midnight. Instead of encountering prompts that encourage her to dissect the viewpoint or a variety of responses that weigh its moral assumptions, she simply sees it flagged and removed. The window to discover why others hold this stance is slammed shut, turning what could have been a learning moment into a dead end. This echoes Zappa’s warning that reducing complex phenomena to “offensive content” deprives individuals of the friction that deepens understanding.

Amartya Sen offers a memorable illustration that features three children and one flute. One child insists she should own the flute because she can actually play it, and giving it to anyone else would stifle that musical potential—a utilitarian perspective that maximizes the flute’s use for the greater enjoyment. Another child claims ownership because he made the flute himself; to deny him possession would be an affront to his labor—echoing a libertarian mindset that emphasizes individual property rights. The third child points out that she has no other toys, while the others have plenty—an egalitarian appeal rooted in fairness and need.

Sen’s parable of the flute (Sen, 2009) illustrates how disagreements often stem from irreconcilable yet valid moral frameworks—some value the labor that produced the flute, some prioritize the needs of the have-nots, and some emphasize the broad benefits to all if the child who can best play it takes possession. Online speech can mirror these clashing values just as starkly, whether in disputes about free expression versus harm reduction, or in controversies that pit egalitarian ideals against strongly held beliefs about individual autonomy. Traditional moderation strategies seek to quell such turmoil by removing provocative content, but this reflex overlooks how certain designs can prevent harmful groupthink from forming in the first place.  Democratic discourse hinges on the public’s ability to interpret and evaluate information rather than merely receiving or losing access to it, as Arthur Lupia and Matthew McCubbins (1998) emphasize. Blanket removals can therefore undermine deeper deliberation, obscuring why certain ideas gain traction and how best to counter them.

When regulators or platform administrators rely on mass takedowns and automated filters, they address truly egregious speech—like hate propaganda or incitement to violence—by erasing it from view. Yet in doing so, they may also hide borderline cases without offering any path for reasoned dialogue, and they inadvertently drum up support for conspiracy theorists and extremists who cry foul about their freedom of speech being curtailed. “Who are the brain police?” –Frank Zappa. Daniel Kahneman (2011) observes that cognitive biases often incline us toward simple, emotionally charged explanations—precisely the kind conspiracy theorists exploit. In a landscape overflowing with content, an “us versus them” narrative resonates more than a nuanced account of complex moderation dynamics. As Zappa argued in his day, labeling everything “dangerous” blinds us to distinctions between content that calls for condemnation and content that may provoke vital, if uncomfortable, debate. Equally problematic, automated moderation remains opaque, leaving users adrift in a sea of unexplained removals. This disorients people and fosters the “technofeudal” dynamic that Yanis Varoufakis describes, in which a handful of corporate overlords dictate whose words appear and whose vanish from public view (Varoufakis, 2023). Platforms like Facebook and YouTube exemplify this dynamic through their opaque algorithms.

Reuben Binns (2018) pinpoints a deep rift in so-called “fairness” models: Should platforms enforce demographic parity at the group level or aim for case-by-case judgments? Group fairness often triggers what researchers call allocative harms, whereby entire categories of users are treated according to blanket criteria, overriding personal context. Meanwhile, purely individual approaches risk masking structural inequities beneath a veneer of neutrality. Berk et al. (2018) reveal that nominally protective interventions can backfire, entrenching existing imbalances and excluding certain subgroups in the process.
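
Binns’s rift can be made concrete with a toy calculation. The sketch below (group labels and data are invented) computes removal rates per author group, the quantity a demographic-parity test compares. Equal rates can still hide badly judged individual cases, and unequal rates can reflect either bias or genuinely different content, which is exactly the tension between group-level and case-by-case fairness.

```python
from collections import defaultdict

def removal_rates_by_group(decisions):
    """Group-fairness view: compare takedown rates across author groups.

    decisions: iterable of (group_label, was_removed) pairs.
    Demographic parity asks these rates to be roughly equal across groups;
    it says nothing about whether any individual decision was right in context.
    """
    totals = defaultdict(int)
    removed = defaultdict(int)
    for group, was_removed in decisions:
        totals[group] += 1
        removed[group] += int(was_removed)
    return {group: removed[group] / totals[group] for group in totals}

# Invented data: two groups, ten moderation decisions each.
decisions = [("group_a", True)] * 4 + [("group_a", False)] * 6 \
          + [("group_b", True)] * 1 + [("group_b", False)] * 9
print(removal_rates_by_group(decisions))  # {'group_a': 0.4, 'group_b': 0.1}
```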

Corbett-Davies and Goel (2018) extend these critiques, warning that neat mathematical formulas tend to dodge the thorny trade-offs inherent in real-world scenarios. In content moderation, rigid classification lines rarely distinguish toxic incitement from essential critique or activism. The outcome is a heavy-handed purging of contentious posts in lieu of robust engagement—especially for communities that are already on precarious footing.

Facebook’s News Feed spotlights emotionally charged posts, provoking knee-jerk reactions instead of thoughtful debate. YouTube’s recommendation engine similarly funnels viewers toward increasingly sensational or one-sided content, making it less likely they’ll encounter alternative perspectives. Underneath these engagement-driven designs lies a deeper issue: the assumption that algorithms can neutrally process and optimize public discourse. Yet, as Boyd & Crawford (2012) warn, big data never just ‘speaks for itself’—it reflects hidden biases in what is collected, how it is interpreted, and whose ideas are amplified. Social media platforms claim to show users what they “want,” but in reality they selectively reinforce patterns that maximize profit, not deliberation. What looks like an open digital public sphere is, in fact, a carefully shaped flow of content that privileges engagement over nuance. “The empty vessel makes the loudest sound.” –William Shakespeare.

On both platforms, and even more starkly on Twitter, the algorithms optimize for engagement at the expense of nuanced discussion, skewing users’ experiences toward reaffirmation rather than exploration. The problem isn’t just one of bias—it’s an epistemic failure. Hong & Page (2004) demonstrate that when problem-solving groups lack diverse heuristics, they get stuck in feedback loops, reinforcing the same limited set of solutions. Social media’s homogeneous feeds replicate this dysfunction at scale: the system doesn’t just reaffirm biases; it actively weakens society’s ability to reason through complexity. What should function as an open digital commons instead behaves like a closed ideological marketplace, where the most reactive ideas dominate and alternative perspectives struggle to surface.

Diakopoulos and Koliska (2017) underscore how opacity in algorithmic decision-making sows distrust, especially when users have no means to contest or even grasp the reasons behind content removals. Meanwhile, Danks and London (2017) argue that bias is not an accidental quirk—it’s baked into the data pipelines and objectives these systems inherit. Tweaking a flawed model does nothing to uproot the deeper scaffolding of inequality. Mittelstadt et al. (2018) label this phenomenon “black-box fairness,” where platforms project an aura of impartiality while stealthily erasing entire points of view, all under the guise of neutral enforcement. Algorithmic opacity is no accident; it’s built into the foundations of digital infrastructure. Burrell (2016) distinguishes three major drivers: corporate secrecy, technical complexity, and user misconceptions. Edwards & Veale (2017) go further, noting that so-called “rights to explanation” often amount to theatrical gestures, revealing little about how moderation decisions are truly made. Users receive sparse summaries that mask deeper biases, leaving them powerless to challenge suspect takedowns. “You have the right to free speech / As long as you’re not dumb enough to actually try it.” –The Clash.

Milano, Taddeo, and Floridi (2020) illustrate how recommender systems do more than tailor content; they actively define what enters the public conversation, steering clicks toward certain narratives while quietly sidelining others. This echoes Varoufakis (2023) on technofeudal control: algorithms shape speech with no democratic oversight. Allen (2011) reminds us that privacy isn’t about hoarding personal data—it’s a bedrock for genuine autonomy and civic freedom. Yet as the UK’s Data Science Ethical Framework (2016) shows, “best practices” stay toothless if they lack enforceable governance. The upshot: platforms retain control while individuals navigate curated experiences that corral, rather than liberate, their thinking.

The Algorithmic Trap: Engagement, Moderation, and Speech Distortion

If engagement-driven feeds corrupt how people arrive at conclusions, automated moderation controls what they can discuss at all. Relying on algorithmic filtering, platforms increasingly treat speech as a classification problem rather than a social process. Boyd & Crawford (2012) caution that big data’s greatest illusion is its neutrality—its ability to “see everything” while remaining blind to context. Content moderation follows the same logic: broad rules applied without regard for intent, meaning, or deliberative value.

Floridi (2018) argues that purely compliance-driven moderation—focused on removing “bad” content—fails to address the deeper ethical question of how online spaces should support civic engagement. Automated systems are built for efficiency, not conversation. They eliminate content that could otherwise serve as a basis for debate, treating moral complexity as a bug rather than a feature. Danks and London (2017) maintain that genuine fairness demands more than cosmetic fixes. They propose adaptive, context-aware frameworks, where algorithms are molded by input from the very communities they affect. Rather than chase broad statistical targets, these systems weigh cultural nuances and evolving social norms. Gajane and Pechenizkiy (2018) push a similar notion of “situated fairness,” measuring algorithms by their lived effects, not solely by numeric benchmarks. Cummings (2012) identifies automation bias as a pivotal hazard in algorithmic tools, where people over-trust software outputs, even when intuition or direct evidence suggests otherwise. In content moderation, that leads to an overreliance on machine-driven flags, ignoring the nuance and context behind many posts. Dahl (2018) notes that “black-box” models further blunt accountability, closing off avenues for users to examine or contest the rationale behind takedowns.

Katell et al. (2020) advocate “situated interventions,” weaving AI into human judgment rather than treating it as an all-knowing arbiter. Manyfold embodies a similar principle by letting users encounter a breadth of diverse arguments rather than being funneled by hidden recommendation systems. Instead of passively ingesting whatever the algorithm decides is “best,” participants engage in a process shaped by varied viewpoints, mitigating the blind spots that purely automated systems can create. In content moderation, a platform might appear balanced in theory while systematically marginalizing particular groups in practice; a truly equitable design, Katell et al. suggest, must weigh social repercussions in tandem with statistical neatness. Even then, many platforms default to minimal legal compliance while neglecting the “soft ethics” of meaningful public deliberation (Floridi, 2018). By focusing on liability avoidance instead of robust democratic exchange, they foster speech environments that are technically compliant but socially dysfunctional.
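
One way to read “situated interventions” in engineering terms is a triage rule in which automation acts alone only on near-certain cases, while ambiguous ones go to human reviewers who can weigh context. The function and thresholds below are invented for illustration; they are not drawn from Manyfold or any named platform.

```python
def route_flagged_post(toxicity_score: float,
                       auto_threshold: float = 0.98,
                       review_threshold: float = 0.60) -> str:
    """Toy triage rule for a human-in-the-loop moderation pipeline.

    Only near-certain scores are auto-actioned; borderline posts are queued
    for human reviewers who can weigh intent, context, and deliberative value;
    everything else stays up. Thresholds here are illustrative, not empirical.
    """
    if toxicity_score >= auto_threshold:
        return "auto_remove"     # e.g. verbatim matches to clearly illegal content
    if toxicity_score >= review_threshold:
        return "human_review"    # borderline speech gets context-aware judgment
    return "keep"

# A post scored 0.7 by a classifier is debated by a person, not deleted by default.
print(route_flagged_post(0.7))  # "human_review"
```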

Finally, mass takedowns often sweep away borderline but potentially valuable content, chilling open discussion and leaving marginalized communities especially wary. Research shows that blanket removals disproportionately affect LGBTQ+ advocates and political dissidents, who fear being misunderstood or unjustly targeted because of biases rooted in both algorithmic systems and social attitudes (Floridi, 2018). “The problem with the world is that the intelligent people are full of doubts, while the stupid ones are full of confidence,” wrote Charles Bukowski, capturing the cruel irony at play.

Consider Kyrgyzstan, where heightened visibility has spelled grave danger for investigative journalists and LGBTQ+ groups. In 2019, reporters from Radio Azattyk, Kloop, and OCCRP exposed extensive corruption in the customs system—only to face a surge of coordinated online harassment. Meanwhile, local activists returning from international Pride events became victims of doxxing campaigns, receiving death threats once their identities were revealed in domestic media. Despite formal complaints, state officials took no action, embedding a culture of impunity and self-censorship (Landa, 2019). Rather than fostering engagement, algorithmic amplification meant to boost voices merely thrust vulnerable populations into the crosshairs of hostility.

On top of that, algorithmic profiling compounds these risks by failing to safeguard group privacy, leaving at-risk users open to surveillance or distortion (Milano et al., 2020). Paradoxically, well-intentioned moderation efforts that aim to curb harm can end up smothering critical perspectives—sacrificing open discourse in the process.

Most digital platforms exacerbate bias, sustain ideological silos, and reward controversy for its own sake, leaving few genuine alternatives for those seeking more than outrage clicks. Manyfold attempts to invert this model by structuring discourse around collective problem-solving rather than friction for profit. Where conventional algorithms shepherd users into echo chambers, Manyfold transforms disagreement into a crucible for better thinking, not an incitement to factional strife.

Manyfold: Building a More Democratic Digital Commons

Yet the Manyfold approach demonstrates that speech need not be restricted to preserve safety. Instead of banning precarious ideas, the platform recognizes that the real peril arises when such ideas echo among those already inclined toward them. By steering those posts away from cognitively similar audiences, Manyfold’s design deprives extreme positions of a homogeneous echo chamber. This algorithmic routing ensures that participants who encounter troubling content do so precisely because they hold starkly different stances, collectively challenging the underlying assumptions rather than reinforcing them. In this sense, the “warning label” emerges organically from a chorus of diverse perspectives, not from regulatory edicts that silence speech before anyone can dissect it.
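
Read as an algorithmic rule, the idea is simply to invert the similarity criterion that engagement-driven recommenders rely on. The sketch below is a hypothetical illustration of that inversion, assuming users, like posts, carry viewpoint vectors; it is not a description of Manyfold’s production system.

```python
import numpy as np

def pick_challenging_audience(post_vector, users, n=5):
    """Route a post toward the users whose viewpoints differ from it most.

    post_vector: viewpoint vector of the post
    users:       list of (user_id, viewpoint_vector) pairs
    An engagement-optimized recommender would sort ascending by distance
    (most similar readers first); sorting descending sends the post to the
    readers most likely to challenge it rather than amplify it.
    """
    def distance(vec):
        denom = np.linalg.norm(post_vector) * np.linalg.norm(vec)
        return 1.0 if denom == 0.0 else 1.0 - float(np.dot(post_vector, vec) / denom)

    ranked = sorted(users, key=lambda item: distance(item[1]), reverse=True)
    return [user_id for user_id, _ in ranked[:n]]
```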

To understand why this matters, consider Walter Benjamin’s metaphor of translation in The Task of the Translator (Benjamin, 1923). For Benjamin, translation is not merely about transferring words between languages but uncovering latent meanings hidden beneath surface-level communication. Traditional moderation strategies fail at this task, removing provocative posts without context and thereby depriving users of opportunities for mutual understanding and moral growth. Contrast this with Manyfold’s approach, where diverse responses serve as organic “translations” of controversial ideas, helping users interpret their meaning within broader societal debates. By fostering an environment where conflicting viewpoints are presented alongside one another, Manyfold transforms potentially harmful speech into a catalyst for deeper reflection.

Charles Taylor’s ethic of authenticity (Taylor, 1991) holds that people refine their beliefs by wrestling with opposing perspectives. A skeptic confronted with data on climate change, for instance, might see firsthand accounts from communities grappling with rising sea levels. That experience can provoke deeper questions, moving the skeptic beyond knee-jerk dismissal and guiding her to weigh the moral and practical dimensions of environmental policy.

This is why we built Manyfold, which foregrounds minority viewpoints rather than letting any single authority determine which voices merit attention. By confronting users with a spectrum of ideas—rather than trapping them in algorithmic bubbles—Manyfold cultivates genuine deliberation. “The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently.”–Friedrich Nietzsche. Such an environment echoes Jürgen Habermas’s Herrschaftsfreier Diskurs (Habermas, 1996), in which no hidden power dynamics dictate who speaks or how ideas circulate, granting participants equal footing to engage in shared inquiry.

Returning to Amartya Sen’s parable of the flute (Sen, 2009), we observe moral frameworks that vary from maximizing utility to emphasizing fairness or property rights. Digital conflicts mirror these clashes, whether in debates over free expression, harm reduction, or the tension between egalitarian principles and fierce autonomy. Censorship that imposes one moral system alienates those who prefer another. Neither Elon Musk nor a government official can settle these disputes by decree. Manyfold, however, invites conflicting worldviews to coexist and even challenge each other. Instead of quietly sidelining “problematic” perspectives, the platform allows users to explore—or dismantle—controversial ideas in an open forum. As Arthur Lupia and Matthew McCubbins (1998) argue, democracy thrives when citizens can interpret and judge information, not merely gain or lose access to it. Blanket removals obscure why certain ideas flourish and weaken our ability to refute them thoughtfully.

Luciano Floridi (2018) distinguishes between “hard ethics” grounded in mandatory compliance and “soft ethics” that seeks socially preferable outcomes through design choices. Manyfold leans on soft ethics by weaving empathy, critical thought, and reciprocal inquiry into its algorithms. Participants regularly encounter diverse viewpoints, expanding their horizons and prompting reflection on the assumptions they bring into discussions. This design transcends blunt regulation by embedding a more nuanced ethical philosophy into the platform’s very structure.

Mariana Mazzucato’s call for mission-oriented innovation (Mazzucato, 2018) challenges policymakers to shape digital spaces around bold societal goals—reducing polarization, for example, or strengthening democracy. Instead of simply outlawing undesirable content, legislators might incentivize platforms to experiment with deliberative tools, demand transparency in how algorithms function, and commission regular audits of platforms’ contributions to civic participation. Such steps shift the conversation from merely policing speech to envisioning the kind of discourse that enriches public life and broadens our collective capabilities.

Focusing on how platforms enable genuine engagement moves us past blanket prohibitions. In doing so, it treats speech as a catalyst for transformation—even when that transformation feels unsettling. In keeping with Frank Zappa’s insistence on nuance, Taylor’s call for authenticity, and Sen’s acknowledgment of moral pluralism, Manyfold shows how carefully designed algorithms can create a synergy between community well-being and the principle of free expression. By offering an antidote to corporate dominion and the “technofeudal” dynamic described by Varoufakis (2023), Manyfold orchestrates a space where varied viewpoints challenge one another beyond easy certainties. In turn, it strengthens the communal fabric on which democracy relies. 

If digital platforms steer the trajectory of public life, the question isn’t whether we regulate or reform them—but whether we dare to reinvent them from the ground up.

References

Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M. and Robinson, D.G., 2020. Roles for computing in social change. Available at: https://arxiv.org/pdf/1912.04883.pdf [Accessed 24 Aug 2020].

Allen, A., 2011. Unpopular Privacy: What Must We Hide? Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195141375.001.0001

Berk, R.A., Heidari, H., Jabbari, S., Kearns, M. and Roth, A., 2018. Fairness in criminal justice risk assessments: the state of the art. Sociological Methods & Research, 47(3), pp.437-464. https://doi.org/10.1177/0049124118782533

Benjamin, W., 1923. The Task of the Translator. In: Illuminations.

Binns, R., 2018. Fairness in machine learning: lessons from political philosophy. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency, pp.149–159. https://doi.org/10.1145/3178876.3186091

Blyth, C.R., 1972. On Simpson’s paradox and the sure-thing principle. Journal of the American Statistical Association, 67(338), pp.364–366. https://doi.org/10.1080/01621459.1972.10482387

boyd, d. and Crawford, K., 2012. Critical questions for big data: provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), pp.662–679. https://doi.org/10.1080/1369118X.2012.678878

Bukowski, C., 1983. Tales of Ordinary Madness. City Lights Publishers.

Burrell, J., 2016. How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data & Society, 3(1), p.2053951715622512. https://doi.org/10.1177/2053951715622512

Cabinet Office, Government Digital Service, 2016. Data Science Ethical Framework. Available at: https://www.gov.uk/government/publications/data-science-ethical-framework

Corbett-Davies, S. and Goel, S., 2018. The measure and mismeasure of fairness: a critical review of fair machine learning. arXiv preprint arXiv:1808.00023.

Dahl, E., 2018. Algorithmic accountability: on the investigation of black boxes. Digital Culture & Society, 4(2), pp.1–23. https://doi.org/10.14361/dcs-2018-0201

Danks, D. and London, A.J., 2017. Algorithmic bias in autonomous systems. Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), pp.4691–4697. https://doi.org/10.24963/ijcai.2017/654

The Clash, 1982. Know Your Rights. On Combat Rock [Album]. CBS Records.

Diakopoulos, N. and Koliska, M., 2017. Algorithmic transparency in the news media. Digital Journalism, 5(7), pp.809–828. https://doi.org/10.1080/21670811.2016.1208053

Edwards, L. and Veale, M., 2017. Slave to the algorithm? Why a ‘right to an explanation’ is probably not the remedy you are looking for. Duke Law & Technology Review, 16, pp.18–84.

Feddersen, T.J. and Pesendorfer, W., 1999. Abstention in elections with asymmetric information and diverse preferences. American Political Science Review, 93(2), pp.381–398. https://doi.org/10.2307/2585770

Floridi, L., 2016. Mature information societies—a matter of expectations. Philosophy & Technology, 29(1), pp.1–4. https://doi.org/10.1007/s13347-015-0211-7

Floridi, L., 2018. Soft ethics and the governance of the digital. Philosophy & Technology, 31(1), pp.1–8. https://doi.org/10.1007/s13347-018-0303-9

Hong, L. and Page, S.E., 2004. Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences, 101(46), pp.16385–16389. https://doi.org/10.1073/pnas.0403723101

Kahneman, D., 2011. Thinking, Fast and Slow. Farrar, Straus and Giroux.

Katell, M., Young, M., Herman, B., Guetler, V., Tam, A., Ekstrom, J., et al., 2020. Toward situated interventions for algorithmic equity. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp.45–55. https://doi.org/10.1145/3351095.3372874

Landa, D., 2019. Information, knowledge, and deliberation. PS: Political Science & Politics, 52(4), pp.642–645. https://doi.org/10.1017/S1049096519000810

Lupia, A. and McCubbins, M.D., 1998. The Democratic Dilemma: Can Citizens Learn What They Need to Know? Cambridge University Press.

Mazzucato, M., 2018. The Value of Everything: Making and Taking in the Global Economy. Penguin Books.

Milano, S., Taddeo, M. and Floridi, L., 2020. Recommender systems and their ethical challenges. AI & Society, 35(4), pp.957–967. https://doi.org/10.1007/s00146-020-00952-6

Nietzsche, F., 1887. On the Genealogy of Morals. Available at: https://www.gutenberg.org/ebooks/52319 [Accessed 20 Feb 2025].

Sen, A., 2009. The Idea of Justice. Harvard University Press.

Shakespeare, W., 1599. Henry V, Act 4, Scene 4. In: The Complete Works of William Shakespeare. Available at: https://www.gutenberg.org/ebooks/100 [Accessed 20 Feb 2025].

Taylor, C., 1991. The Ethics of Authenticity. Harvard University Press.

Varoufakis, Y., 2023. Technofeudalism. Penguin Books. Available at: https://www.penguin.co.uk/books/451795/technofeudalism-by-varoufakis-yanis/9781529926095

Zappa, F., 1985. Senate Hearing Testimony on Record Labeling. United States Senate Committee on Commerce, Science, and Transportation.

Zappa, F., 1978. The Adventures of Greggery Peccary. On Studio Tan [Album]. Warner Bros. Records.

Zappa, F., 1966. Who Are the Brain Police? On Freak Out! [Album]. Verve Records.

Johannes Castner

My quest to make the world a measurably better place started in Hollywood and ran through the halls of Columbia University, the Federal Reserve Bank of Boston, eBay, and Kingston Business School. Founder of CollectiWise, a venture focused on collective intelligence. Specialises in Capability Sensitive Design, an ethical framework for technology.
