
After several years of practical exposure to AI-like systems and large language models, this article marks a deliberate pause: a moment to step back, reflect, and ask the difficult questions before diving deeper into formal research. It is not a conclusion, but a starting point.
At its core, the article asks what AI really means at three levels: the individual professional, the organisations deploying it, and society at large, posing questions for future debate in these pages.
The takeaway from this initial exercise is that this is a great opportunity to do things better. As someone who has always leveraged cross-pollination of ideas across industries, the empowerment to go faster and further whilst making fewer mistakes is so exciting that I can see this being a new golden age. How this changes society should be for the good, but there are many challenges and pitfalls along the way.
You are invited and welcome to read along. I will share practical examples of how you can use these tools, for free; or if you simply need me to come and help you, that can work too, as a formal engagement.
TLDR Section
For those who do not want to go any further, look away now! If you are interested, read on, or ask your pet AI to summarise for you.
Personal
From a personal, practical perspective, AI is already proving useful, particularly in meeting transcription and summarisation, reducing information loss during intense schedules of back-to-back meetings. However, AI tends to over-produce actions, assigning equal importance to everything discussed and unable to weed out those which are implicitly resolved through professional context. Without human curation, AI-generated action lists quickly become unmanageable and misleading.
If everyone can generate more words with less effort, does communication improve, or does understanding degrade? What happens when humans increasingly interact through AI-generated language, or when AI systems respond to each other with minimal human intervention?
I have been working with AI-like systems and LLMs for a few years now, dabbling as the world moves on. After a very hectic project which took too much time and effort to deliver, I am finally able to step back, make some sense of it all and create a framework for myself.
This article is a starting point before that research: what will I find, what are my hopes for the new technology, and what are my fears? I will look personally at the tools I can put to use in my project management life. I will look for the businesses I will undoubtedly be helping to deliver AI or AI-enabled products into. And finally for society, because it will undoubtedly bring big changes to the world.
Are Project and Programme Managers to be made redundant in the future? Can you throw a methodology or rule book, PRINCE2 or Agile, at a project and expect to remove the human?
Personally, what have I found it good at? Meeting summaries, certainly, primarily because after 30 years we finally have a working, almost real-time technology which can make accurate transcripts and then summarise them. They do really well, except when it comes to 'actionable tasks': I find them too fussy, with many actions I would not have added to a checkpoint or action list myself. They attach equal importance to every action, which is never the case.
I think it is because the AI assumes that none of the people in the room know what they are doing, so it becomes really prescriptive and almost impossible to follow up. In reality, people in technical meetings know why they are there and have the background in the subject to take part in the debate, shaping the answers and then executing to create a deliverable.
So far, I have seen the AIs be too task-driven. Can we make them deliverable-driven and give space for technical teams to create? I shall find out.
I love Asana; it forms the basis of many of my PM outputs. But linking it to spoken 'actions' without curation or skill applied leaves an unactionable mess, with no reflection that an action discussed was later deemed unnecessary, and no leader to decide the difference.
It does increase the accuracy of my note-taking, plan updating and action writing, however. Especially in very busy periods with back-to-back meetings all day, things can get missed, and the accuracy will help in that regard.
Concerns to research and resolve at PM and Programme Level
Good at recording what happened and producing summaries. Bad at producing a plan from that without expert context. Bad at linking the context together: plan, RAID logs, strategic aims.
How do we avoid all placing ourselves, as individuals, behind a filter of words which are unverifiable? I can produce more words than I normally would, with less effort, by using AI. I can summarise words from someone else to avoid TLDR by using AI. So can everyone else. Has any of that enhanced conversation necessarily enhanced any understanding, or improved the outcomes for any of the stakeholders?
How do we avoid removing the human expertise from this and effectively having a conversation between two AIs with minimal intervention, if they are auto-replying to each other?
Societal Questions
Bias. All these models are based on something, but they do not always give you the context or the source material, and they do not balance what was left out in order to provide an answer. Within that, the LLM has been curated and is biased in many ways. It is interesting to me how effective prompting this out will be.
Language bias. Are the leading models all English-biased? When someone speaks naturally in another language, is it interpreted, results provided, and then re-interpreted? The models are excellent at translation, but there is always loss. Is there also loss in the LLM itself, and how can this be combated? A lot of thought needs to go into prompting and interpreting results, and not everyone will do that; so you are going to get some information from real professionals, and if they asked a quick, lazy prompt you are not going to get a good answer from them. How do we design strategies to minimise the risk? Should the prompt be part of the answer when AI is used?
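One practical answer to that last question is to keep the prompt with the answer, so a reader can judge how an answer was obtained and not just what it says. A minimal sketch of such an audit record follows; the field names and structure are my own assumptions, not any vendor's API.

```python
# A minimal sketch of bundling the prompt with the answer for auditability.
# The record structure here is hypothetical, not any AI provider's format.
import json
from datetime import datetime, timezone

def audited_answer(prompt: str, response: str, model: str) -> str:
    """Return the AI response wrapped with the prompt, model name and a
    UTC timestamp, so anyone reading the answer can also see the question
    that produced it (lazy prompt or careful one)."""
    record = {
        "prompt": prompt,
        "model": model,
        "asked_at": datetime.now(timezone.utc).isoformat(),
        "response": response,
    }
    return json.dumps(record, indent=2)
```

Attaching the prompt does not remove the model's bias, but it makes the bias inspectable: a quick, lazy prompt is visible to the reader rather than hidden behind a confident-sounding answer.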
Hollywood bias. Is there a cultural bias in the models which is pervasive, serving American values and interests? It is well established that soft-power delivery through cinema and television creates an American bias across the world wherever it is consumed. Given the models are largely American constructs, it follows that they will bias the same way, even without manipulation.
WMD? In 2006, when working in UK government, I was presented with 128-bit encryption as either 'US version: a classified weapon of mass destruction' or 'UK version: not quite good enough encryption to rely on for RESTRICTED data', both long since amended. On that scale, AI as presented is certainly a capable WMD through many vectors. Economic war, the ability to crash a stock market in minutes if not seconds: I am fairly sure this will happen in my lifetime. Superpowered attack vectors for cyber-attacks, with superpowered defence systems required to counteract them. This again leads to AIs fighting each other with little or no human involved in the conversation, humans only being required to provide the resources to build, feed and water them. I can certainly see a future where corporate or state strategy designed and led by AI will prioritise the continued upscaling of these services to AI itself over other, human, considerations at home.
Who owns the entirety of human knowledge? This is probably the most fundamental question, and it will shape human society until we finally design ourselves into post-scarcity. The current model is being baked into the capitalist structure, and even ventures that started out as not-for-profit are changing to profit-driven models. This turns the entirety of human knowledge into a commodity to be bought and sold to the highest bidder. Can we imagine the internet if it had had the same restrictions placed on it at inception? It is what it is (for good and bad) largely because the platform was free to use without subscription or licence; it has taken nearly 30 years for corporations to erode that behind paywalls. What could the human race do with this if it were free to access, like Wikipedia, for example?
Legality. One group of people already under attack in the workplace is the legal profession, currently legal copywriters, but the problem will become huge and will keep the lawyers and the courts occupied for many years. Backed up by attack and defence AIs on both sides, there is a lot of money to be made here. It fundamentally comes down to a lack of societal governance.
Look at the current disclaimers from the major systems. They boil down to 'AI could be wrong', with all responsibility placed on the user, individual or company. So if you are using AI as a proxy for a real expert outside your field and you do not, for example, finish the work with real legal counsel, then certainly in English law you are liable for any damage caused as if you were that expert. This will have a profound effect on societies across the world.
In the medium term, until legislation and governance catch up, actually being an expert in interpreting and acting on AI research will be extremely important for business.
If that all sounds very dystopian, then I think it could be, and it will shape our societies at that fundamental level. How it turns out depends very much on how the population of the world shapes that narrative and what it does with it. There will be new levels of bad; there will be new levels of good innovation; and, as always, there will be exploitative people making lots of dirty money in an unregulated industry before society catches up. All aspects of society's current online services are going to be superpowered and will need additional regulation.
LLMs have learned what they know from the internet, which is, to be kind, an imperfect place.
Individually, these are great tools to enhance your own output and make it more valuable to a wider range of people. They can also be used to gain an understanding of things in which you are NOT an expert; however, rely solely on that and you will carry the liability. You still need a fully rounded, real team for the foreseeable future.
Corporate Strategy
Corporately, my work is predominantly in CRM, ERP and Service Management, and there are many direct applications in these spaces. Anywhere there was a chat function will be replaced by agentic AI, and contact centres will continue to move functions across channels to find the best balance of cost to customer satisfaction. I can already see that the more expensive contact centres are closing (four closed in Uruguay in 2026 already).
AI can build code and configuration fast within platforms, so fewer developers will be required, and the technology side of change will become quicker and easier; that is to be welcomed.
However, Business Analysts are going to be required (using AI toolsets) to generate the right use cases. People, process and change management are going to be as challenging as ever. And then we get to the real problem: the data. Having just come off a project which was a very expensive data-curation disaster, I can say that the use of AI tools in CRM, ERP and Service Management in a business setting will only ever be as good as the data. It was ever the case, but with AI it directly affects the customer output, and so standards must be raised. Data strategy and data management will be key to an organisation and ultimately to its profitability.

As noted above, these articles are the opening thoughts of a very interested individual in this new technology. I will now be engaging in substantive learning, testing and refining of the technology as I incorporate it for the good of the organisations I work for.
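As a sketch of what "only as good as the data" could mean in practice, here is a minimal, hypothetical data-quality gate: before records feed an AI-driven CRM or service process, check their completeness and refuse to proceed below a bar. The field names and the 95% threshold are illustrative assumptions, not a standard.

```python
# A minimal sketch of a data-quality gate for records feeding an AI process.
# Field names and the default threshold are illustrative only.

def completeness(records: list[dict], required: list[str]) -> float:
    """Fraction of records with every required field present and non-empty."""
    if not records:
        return 0.0
    ok = sum(1 for r in records if all(r.get(field) for field in required))
    return ok / len(records)

def gate(records: list[dict], required: list[str], threshold: float = 0.95) -> bool:
    """Only let the data through to the AI layer if completeness meets the bar."""
    return completeness(records, required) >= threshold
```

The design point is simply that the check happens before the AI sees the data, making data quality a visible, measurable gate rather than a silent cause of bad customer output.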
No AI was used in writing this article; it may be the last time I can say that.