
AI: Demystifying the Technology and Diving into its Moral Dilemmas and Future Trajectory


I am super grateful to our friend Alex Keiller, who has written this piece for us. As I've written in the "Technology" section of our "Main Themes", I've been looking for literature about the different impacts of AI on society and our lives. There is so much coming out right now, and Alex helps a lot by offering a shortcut through that search with this piece. Alex has worked in IT, tech start-ups and media, and on top of that runs a business that is directly impacted (like most, you would say) by AI. He wants to point out that he partly took help from ChatGPT... Thanks again Alex!

You will only find the Introduction in this post; the link to the full article, for our members, is below...

I. Introduction

In his thought-provoking 2024 address to King's College London's Digital Futures Institute, AI: A Means to an End or a Means to Our End, Stephen Fry warned of a coming cultural inflection point. He urged us to consider not only what artificial intelligence can do, but what it might do to us - how it could reshape creativity, identity, and even meaning itself. "It's one thing to predict how technology changes," he stated, "but quite another to predict how it changes us."

Fry’s words echo the tension many professionals feel today: awe at AI’s capabilities, unease at its implications. We’ve seen AI compose symphonies, diagnose rare diseases, and impersonate public figures with unsettling precision. But these feats raise uncomfortable questions. Who is accountable for the consequences of autonomous systems? Can synthetic minds respect human values? And what happens to our own role in the creative and cognitive domains?

This article explores these questions, focusing on the ethical concerns and speculative possibilities surrounding AI. In doing so, it aims to provide a critical lens for professionals who engage with technology not just as users, but as decision-makers, creators, and citizens. We begin by demystifying the technology itself before diving into its moral dilemmas and future trajectories.

Read more here!



4 Comments


alexander.keiller
Jun 07, 2025

Agreed - but with the current geopolitical climate, it's hard to imagine China, the US and Europe agreeing on any global framework to regulate AI's development any time soon, if ever. And they are not the only nations with AI ambitions. I look forward to your view on Nexus - I'm keen to read it as well!


mfellbom
Jun 04, 2025

Hi Alex, thanks again for your piece!

I have a question for you or other readers... It concerns the point about the undoubtedly necessary global regulation you mention in your conclusion. The first book I read on the subject of AI was "Life 3.0," written by researcher Max Tegmark in 2017. Regulation was already a central concern for the scientific community working on the topic at the time! It's safe to say that not much has changed in the last eight years despite the acceleration of AI development... And yet, we just learned from Republican Congresswoman Marjorie Taylor Greene, whom I don't particularly care for, that she regrets signing Trump's presidential bill last week on US domestic policy, which…

mfellbom
Jun 06, 2025

Thanks Alex, interesting that the researchers themselves try to come up with concrete solutions to avoid a total loss of control. Europe seems to be on the front line as well, but I guess the only way forward would be a globally accepted framework for the development...?

I'm finishing "Nexus", written by Harari (author of "Sapiens"), and he seems to have ideas as well. I will follow up with his thoughts.
