We were delighted to host Sean Williams, CEO and founder of AutogenAI, at the School of Cybernetics on the 4th of March.
This event brought together our community and interested guests to hear from, and put questions to, the founder of one of the fastest-growing AI companies in the world.
Sean shared some incredible insights on the day, which we have captured for you below: a highlights video, an introduction to the content of our event, and a full recording of the session.
Highlights from AI in action:
Associate Professor Matthew Holt welcomed students, staff and industry friends and introduced Sean Williams, emphasising that AI is a societal shift that needs interdisciplinary engagement to assess its impact on human interaction and decision-making.
Sean’s talk started with the idea that technologies that change the world must fulfil two criteria:
- they must enhance a fundamentally human capability
- they must do this in a way that is rapid and scalable
From the printing press to the steam engine to the internet, transformative technologies have followed this pattern. AI, particularly large language models (LLMs), is now doing the same for reading and writing.

A rather surprising thing to hear from the CEO of an AI company is that some of the hype around AI is not justified. As someone deeply involved in building and assessing such systems, Sean offered a balanced perspective on the capabilities and limitations of AI today.
Right now, LLMs and computers are reading and writing, or as Sean says, ‘doing language’. These systems are doing so far faster than humans can, which, going by Sean’s model above, gives AI the potential to change the world.
Why the phrase ‘doing language’, though? Because this reading and writing is all forecasting: these AI systems are, at heart, large prediction models.
An LLM picks which syllable or word comes next, and its reference point is just about the whole internet.
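To make that idea concrete, here is a minimal sketch of next-token prediction using the open-source Hugging Face transformers library and the small gpt2 model. Neither the library, the model, nor the prompt were referenced by Sean; they are simply one way to illustrate the point. The model scores every possible next piece of text and the loop repeatedly appends its top guess.

```python
# A minimal sketch of next-token prediction with a small open model (gpt2).
# Assumes the transformers and torch packages are installed; for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Technologies that change the world must"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Greedily pick the single most likely next token, ten times over.
for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits      # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()          # the model's best guess at what comes next
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Real systems usually sample from the predicted probabilities rather than always taking the top guess, but the loop is the whole idea: predict the next piece of text, append it, and repeat.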

Sean shared how his company helps businesses craft high-quality proposals more efficiently. He likened AutogenAI to “Excel for competitive prose,” a tool that enhances, rather than replaces, human judgment.
Most earlier prediction models, such as weather forecasting, make predictions that can be measured and tested against what actually happens.
But how do you measure how well an LLM has written a sentence?
- Do you measure how many times that sentence has been written before?
- Do you measure based on the novelty of the sentence?
Sean compared the current AI landscape to the internet in 1995, predicting that AI will soon transform every task that involves reading and writing.
The discussion also explored the ethical dimensions of AI-generated content. Audience members raised concerns about AI’s role in decision-making and the risk of AI-generated misinformation, and the conversation turned to the intersection of AI and creativity.
Dr. Ben Swift, senior lecturer at the School, emphasised that AI, while capable of generating text and art, relies on human work and should be viewed as a collaborative tool rather than a replacement for creativity.
The session wrapped up by reinforcing the need for AI education and critical-thinking skills to navigate an AI-driven future, with Sean encouraging continued research and discussion to ensure AI technologies serve society in ethical and meaningful ways.
Q&A Highlights with answers from Sean
Q: Can AI write poetry if it doesn’t experience emotions?
A: “I always thought computers would never write poetry before they cleaned our toilets. But LLMs have proven me wrong. While their poetry is derivative, it is often indistinguishable from human-generated poetry.”
Q: How do you see the future of AI research and business models? Will big corporations dominate?
A: “We are at a plateau with large language models. The fundamental research breakthroughs have already happened, and now the focus is on application rather than making AI significantly smarter. AI will be like electricity—it’s in everything, but the value lies in how we use it.”
Q: If all firms use AI to craft persuasive proposals, how do they differentiate themselves?
A: “AI helps you articulate your unique story faster, but differentiation still comes from your experience and ideas. It’s like Excel—it doesn’t do the thinking for you; it makes it easier to present your financial model.”
Q: With AI writing persuasive content, won’t we end up in an arms race where AI is persuading AI?
A: “Eventually, yes. AI will assist both in crafting and evaluating content. But fundamentally, human decision-making will remain essential. AI should free up our time for the truly complex and human-centered tasks.”
Q: How do we address AI-driven misinformation and ensure trust?
A: “We need clear labeling of AI-generated content and robust fact-checking systems. Just as society adapted to advertising and the internet, we will learn to navigate AI-generated information critically.”
Sean’s talk (just below) answers some of these questions and more, along with the fireside chat that followed.
You can watch the full video on our YouTube channel or right here: