The Badger

University of Sussex Students' Newspaper

A pragmatic opinion on “The impact of AI on society”, organised by the University of Sussex’s Philosophy Department

By Hai Tran

Nov 4, 2025

On Friday evening, 10th September, the University of Sussex’s Philosophy Department held its first public panel discussion at Project Niles House, where four philosophers discussed the impact of AI on society. As AI seeps into the nooks and crannies of everyday life, these conversations feel timely. In this article, I will summarise the discussion and offer a pragmatic response as a non-philosophy student.

The event began with each philosopher presenting their arguments, followed by questions from the audience. Three themes emerged throughout.

AI’s rise is inevitable and can be beneficial

All speakers agreed that AI has developed rapidly and will continue to do so, potentially matching or surpassing human intelligence. Dan Williams described how companies and governments are investing millions of dollars into AI breakthroughs. He argued, “Given that there is nothing supernatural about human intelligence, there is no reason why AI could not one day reach or even surpass it.” Robyn Waller illustrated AI’s transformative potential with brain-computer interfaces like Neuralink, which can help paralysed people regain movement and speech.

Warnings about AI

Williams highlighted the potential exploitation of AI by the wealthy, corporations, and militaries for self-serving interests. He also cautioned that a growing dependence on AI could further isolate people from one another. This is incredibly worrisome given the global loneliness epidemic.

Clark additionally warned of two dangers associated with the increasing use of AI-generated content: model collapse and diversity collapse. Model collapse occurs when AI trains on its own output, resulting in progressively worse results. Diversity collapse occurs when floods of AI-generated content crowd out exposure to new ideas. He noted that current AI cannot test new hypotheses independently, but doomsday may come if these systems are programmed to act and experiment on their own.

The nature of AI

The speakers agreed that AI can never replace humans. Beatrice Fazi referred to AI systems as “stochastic parrots”, arguing that they can mimic but not comprehend as humans do. Waller agreed that AI lacks embodied experience. Williams argued that, because AI did not undergo natural selection like humans, it does not have humanlike survival motivation unless this is programmed in. Yet even then, it remains just a “parrot”.

My pragmatic opinion

The panel’s strength is that the speakers neither glorified nor condemned AI outright. They recognised both its benefits and its risks to society, thus providing balanced perspectives. In addition, they posed thoughtful questions about what it means to be human in a world filled with AI.

Nevertheless, some discussions would have benefited from more depth. Williams briefly mentioned AI’s role in amplifying existing biases but did not elaborate on how this happens or what mitigation might look like. His warnings about AI exploitation and AI replacing human interaction lacked empirical evidence, raising questions about their validity.

Furthermore, the utility of Waller’s musings on human agency is questionable. From a pragmatic standpoint, people who regain communication or movement via AI technologies may care less about who performs these actions than about the fact that they can now occur at all.

The speakers also did not explore how AI actually works. AI is designed to simulate human cognition, yet we only partly grasp how human cognition operates. The first question to ask, then, is whether we can ever fully understand human cognition. Discussing AI without addressing this seems premature.

Further, the philosophers overlooked how AI affects different groups differently. The younger generation gains access to AI early in life. Working people fear being replaced by it. Older adults and those without internet access may not know what AI is. It is therefore critical to embed these heterogeneities into conversations about AI, especially when the audience includes members of the public from all walks of life.

Finally, despite its significance, no one discussed the danger of misinformation spread by AI. Social psychologist Michael Hogg observes that in moments of uncertainty, people affiliate more with extremist ideologies. These can be fed by AI algorithms, especially as using AI to search for information becomes commonplace.

As a public event, the tone could also have been more accessible. Some arguments were steeped in academic jargon, circular reasoning, and obscure philosophical references, which made them challenging for non-specialists like me to follow. Given that these events aim to enlighten the public on topics that matter to them, priority should be given to clear and accessible language.

Overall, the event offered a glimpse into the ongoing philosophical discussions about the impact of AI. It raised valuable questions about the nature of AI and its benefits and risks to society. However, it would have resonated further had it delved deeper into pragmatic areas relevant to everyday life, been grounded more firmly in evidence, and used plainer language.

Another article you may enjoy: https://thebadgeronline.com/2026/01/lewes-stem-fair-bringing-science-to-the-community/
