How is Fidelity using AI today? What are the shortcomings that prevent it from being deployed more aggressively? 

John Bradley: Fidelity is applying AI across a broad range of functions, from improving operational efficiency to enhancing client experiences. We’re deliberate in how we deploy these capabilities. There is significant enthusiasm around AI, but not every use case translates into durable client or business value. Our focus is on practical applications that are scalable, measurable, and aligned with our long-term strategy. 

There are two primary constraints on more aggressive deployment. First is risk management. As a financial services firm, we operate in a highly regulated environment where data privacy, model governance, explainability, and security are critical. AI systems must meet the same standards of reliability and oversight as any other core technology. 

Second is the maturity of the technology itself. While AI capabilities are advancing quickly, enterprise-grade implementation requires robust controls, strong data foundations, and clear accountability. Ensuring these elements are in place takes time. 

With the AI landscape changing daily and companies constantly leapfrogging one another, firms (and their employees) must navigate a growing number of channels and tools, which creates confusion and wastes time.

In three years, what parts of investing will be fully automated using AI systems for the average person, and what will still need a human? 

JB: Predicting three years out is tricky. The pace of change in AI over just the past year has been significant, and it would have been difficult to predict some of the recent breakthroughs.

That said, in a three-year horizon, I could see much more of the routine investing experience being automated for the average Canadian investor. Digital onboarding, identity verification, account servicing, reporting, portfolio rebalancing, and model-based product recommendations are all areas where AI could streamline processes for investors and improve personalization at scale. 

Where I don’t see full automation is at key decision points that carry regulatory and fiduciary weight. In Canada, suitability, KYC/KYP obligations, and supervisory accountability remain central to how investing happens. When decisions materially affect a person’s financial outcomes, particularly in more complex situations, human oversight and professional judgment from financial advisors will continue to play an essential role.  

A likely outcome is a more AI-enabled investing experience, with automation handling more of the operational and analytical workload, while humans remain responsible for oversight, context, and accountability. 

How trustworthy are investment recommendations or portfolio changes suggested by AI? 

JB: It depends on context. AI tends to be most reliable in structured, data-rich scenarios where there’s a long performance history, clear objectives, and strong governance behind the models. When AI recommendation engines are built and overseen by regulated institutions with robust validation and risk controls, their outputs can be a meaningful input into portfolio decisions and a complement to advice from a financial advisor.

Caution is warranted in newer, niche, or rapidly evolving areas where data is limited or market dynamics are shifting. AI systems identify patterns in historical data and provide recommendations based on those patterns. When the situation is nascent or novel, those patterns may not hold. The same skepticism should apply to unvetted or lightly governed tools. 

Do you think AI will eventually be better able to do investment analysis and stock picking than even the best human managers? If so, how far away are we? If not, what's the edge humans will always have? 

JB: I think it depends on the scenario. AI and human investors have different strengths. 

AI excels at scale, speed, and extracting signals from large, structured datasets. It could outperform in areas of investing that are data-heavy, pattern-driven, and time-sensitive. In quantitative strategies, where success depends on processing large volumes of information, identifying statistical relationships, and reacting quickly, AI systems already compete effectively. Research and industry results show that algorithms can detect subtle signals across massive datasets in ways humans can’t replicate at scale. 

Humans have the edge in ambiguity, structural change, and judgment under uncertainty. Markets are adaptive systems influenced by regulation, geopolitics, incentives, and behaviour, not just historical data. Research consistently shows that models trained on past data can struggle during regime shifts or novel events. Humans are better positioned to reinterpret assumptions, weigh qualitative factors such as management credibility or policy direction, and exercise fiduciary judgment. 

A likely outcome is not AI replacing the best human managers, but managers who effectively use AI outperforming those who do not.  

Do you think we will get to a place where we have AI agents managing investments for people? Or is there always going to have to be a human in the loop on every decision? 

JB: In the sector, we’re already seeing robo-advisors use algorithms to construct portfolios, rebalance, and optimize based on client inputs with minimal human involvement. As AI advances, it’s reasonable to expect more sophisticated AI-enabled solutions that can manage routine investment decisions at scale with increasing personalization. It’s possible we could reach a point where a human is not involved in every portfolio adjustment or allocation decision. 
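The kind of routine, rules-based adjustment described above can be illustrated with a minimal sketch of threshold-based rebalancing, the sort of logic a robo-advisor automates. The asset names, target weights, and drift threshold here are hypothetical illustrations, not any firm's actual model:

```python
def rebalance(holdings, targets, drift_threshold=0.05):
    """Return dollar trades for assets whose weight has drifted
    more than drift_threshold from its target allocation."""
    total = sum(holdings.values())
    trades = {}
    for asset, target_weight in targets.items():
        current_weight = holdings.get(asset, 0.0) / total
        if abs(current_weight - target_weight) > drift_threshold:
            # Positive = buy, negative = sell, to restore the target mix.
            trades[asset] = round(target_weight * total - holdings.get(asset, 0.0), 2)
    return trades

# A portfolio that has drifted to 70/30 against a 60/40 target:
portfolio = {"equities": 70_000.0, "bonds": 30_000.0}
target_mix = {"equities": 0.60, "bonds": 0.40}
print(rebalance(portfolio, target_mix))
# → {'equities': -10000.0, 'bonds': 10000.0}
```

Real systems layer on tax-loss considerations, trading costs, and suitability constraints, which is part of why human oversight remains in the loop for consequential decisions.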

That said, in more complex situations (think major life events, significant portfolio changes, or periods of market stress) the perspective of an experienced professional will absolutely remain valuable. Financial advisors will be critical in steering their clients through these bumpy periods.  

As these tools evolve, investors should understand both their strengths and their limitations. Making an informed decision about when to rely on AI versus human advice will be essential. 

One of the effects of more powerful AI would be, I'd think, more cyberattacks, scams, and other financial risks of that nature for individuals. What are some of the emerging fraud risks that you think people should be more aware of?  

JB: Probably the biggest fraud risk to be aware of is the dramatic improvement in both the realism and scale of impersonation enabled by AI tools. 

As AI tools become easier to use, removing the need for technical expertise, they lower the barrier to entry for creating new products and services. That same dynamic applies on the other side as well: it’s now easier than ever for bad actors to generate convincing fakes, clone voices, and create highly personalized scams.

Generative AI now allows attackers to create highly convincing emails, voice recordings, and even video messages that mimic trusted individuals or institutions. What used to be obvious phishing attempts are becoming far more personalized and credible. A scammer can replicate the tone of a bank message, imitate a family member’s voice, or generate realistic documentation to support a fraudulent request. 

AI also enables fraud at a much larger scale. Automated systems can generate thousands of tailored messages, scrape public information to personalize outreach, and continuously adapt tactics based on what works. The result is fewer obvious red flags and a higher likelihood that a message appears legitimate. 

For individuals, one of the most effective safeguards is verifying information outside the original channel. As AI makes scams more convincing, it’s no longer enough to trust what’s in front of you, especially when a request involves money, personal information, or account access. 

A practical approach is to pause and validate the request through a trusted source. That could mean contacting your financial advisor, using a known phone number, or logging into a secure portal rather than responding directly to the message or call. Financial advisors are a strong example, as they’re trained to spot irregularities and operate within established controls, which makes them a reliable point of verification. 

What is the single most financially harmful behavior you see technology amplifying?

JB: The most financially harmful behaviour I see technology amplifying is overconfidence driven by digitally reinforced confirmation bias.

AI tools are frictionless, fast, and highly responsive. That combination can be helpful, but it can also create an illusion of certainty. Even when the underlying assumptions are incomplete, the output often appears comprehensive. That perceived coherence can reinforce conviction rather than challenge it. Using AI on top of bad underlying data can lead to very damaging outcomes, very quickly.

Without appropriate safeguards, AI-generated summaries, social sentiment feeds, and algorithmic recommendations can unintentionally create an echo chamber. 
