I freely admit that when it comes to knowledge, AI leaves my own collected information sadly lacking.
However, I dispute that AI has the experience necessary to deploy that knowledge in the right way. How could it know, for example, that Bill and Joan hold such different views on a subject that achieving consensus demands a very specific approach?
‘We all know that,’ you may say, but in my experience I am not certain that we all do: I am increasingly seeing AI being used not as a partner in decision-making, but as a decision-maker.
In my opinion, that is a slippery path to poor decisions. The trouble is, AI seems so confident in its assertions. Be that as it may, AI, unlike humans, cannot be held accountable for them.
There is no doubt that AI is rapidly transforming the way individuals and organisations make decisions, solve problems, and plan for the future.
From virtual assistants that organise meetings to advanced analytics systems that forecast market trends, AI has proven itself to be a powerful and reliable partner in many domains. However, while AI can be a trusted partner, it should not be, in my opinion, treated as a trusted advisor.
A trusted partner provides support. It enhances an individual’s or organisation’s capabilities. It increases efficiency and provides insights that help those supported perform better. By contrast, a trusted advisor carries authority. An advisor not only provides information, but also assumes a degree of responsibility for judgement, ethics, and long-term consequences. The difference lies not in intelligence alone, but in accountability, moral reasoning, and contextual understanding.
AI excels as a partner because it is capable of processing and deploying vast amounts of data at speeds no human can aspire to. In healthcare, for example, AI systems can analyse medical images to detect diseases earlier than traditional methods. In finance, algorithms monitor transactions in real time to detect fraud. Companies such as Google and Microsoft integrate AI into productivity tools that help users draft documents, analyse spreadsheets, and automate workflows. In these roles, AI augments human ability. It deals with repetitive tasks, identifies patterns, and offers data-driven suggestions.
Therein lies the key: what AI offers is suggestions, not opinions, and certainly not advice with accountability. When used appropriately, it acts as a highly capable assistant: one that works tirelessly and objectively.
However, the qualities found in a trusted advisor go far beyond computational skill. Advisors are expected to understand values, emotions, social consequences, and ethical nuance. They consider not only what can be done, but what should be done. AI does not possess moral awareness or lived experience. It does not carry responsibility for outcomes. When an AI system provides a recommendation, no matter how tempting that recommendation might be, it does so based on patterns in training data and programmed objectives. It does not do so based on wisdom, conscience, or accountability.
Consider for a moment legal or medical decision-making. An AI tool may suggest a diagnosis or recommend a legal strategy based on precedent. However, if the outcome harms a patient or client, the AI does not stand accountable in a court of law. Responsibility falls to the human professional who relied on the tool. The advisor role inherently requires moral agency: in other words, the capacity to be responsible for decisions and their consequences. AI lacks this capacity, yet it is tempting to trust its recommendations blindly, because the world knows that it can access vast amounts of data, and because in people’s busy lives it gives the appearance of making decision-making easy.
Context is also critical. Humans operate within cultural, emotional, and situational contexts that often defy pure data analysis. A trusted advisor understands subtle interpersonal dynamics, unspoken concerns, and long-term relationship implications. AI, by contrast, interprets inputs based on statistical associations. Even the most advanced AI systems, such as OpenAI’s language models, generate responses by predicting likely patterns in text. While these responses can appear thoughtful, they do not arise from genuine comprehension or intent.
The subject of bias also illustrates why AI cannot serve as a fully trusted advisor. AI systems learn from historical data, which may contain social, economic, or cultural biases.
If those biases are not carefully addressed, AI can reproduce and even amplify them. A human advisor will ideally reflect on fairness and adjust judgements accordingly. While humans are certainly imperfect and biased themselves, they possess the capacity for ethical reflection and reform. AI does not independently self-correct based on moral reasoning. It can only be adjusted by human oversight.
Trust also involves transparency and explainability. Many advanced AI systems function as ‘black boxes’, where even developers cannot fully track how specific outputs are generated. In high-stakes decisions such as loan approvals, recruitment, or criminal sentencing, stakeholders need clear explanations. A trusted advisor must be able to justify their reasoning. If AI cannot clearly explain its logic in comprehensible terms, trust should remain limited to partnership rather than authority.
This in no way diminishes AI’s importance. Quite the reverse: recognizing AI as a trusted partner enables society to leverage its strengths responsibly. AI can provide analysis, simulate scenarios, and reveal options that humans might overlook. It can help leaders make more informed decisions. However, the final judgement must lie where accountability lies, particularly where ethical, legal, or deeply personal consequences are involved.
The future is likely to follow a path of collaborative intelligence with humans and AI working together responsibly. In this model, AI assumes responsibility for data-intensive tasks while humans provide ethical guidance, empathy and accountability. The partnership is powerful precisely because roles are clear. AI informs, humans decide. AI suggests, humans judge. AI supports, humans lead.
AI can, then, be a trusted partner. It is efficient, consistent, and capable of remarkable analytical feats. However, despite its apparent confidence in some cases, it is not a trusted advisor because it lacks moral agency, accountability, contextual wisdom, and lived experience.
Treating AI as a partner preserves human responsibility while benefiting from technological advancement. Confusing partnership with trusted advice risks delegating judgement to systems that, no matter how sophisticated, cannot truly understand the human consequences of their recommendations.