Some experts argue that the quality of data collection through online services that use artificial intelligence, and its application to individual situations, should be a key concern for regulators. Peshkova/iStockPhoto/Getty Images

The emerging use of artificial intelligence (AI) to support or even replace human financial advisors is attracting the attention of regulators – mainly in Britain but also in Canada. While they’re broadly supportive of AI as a cost-efficient tool to broaden the reach of financial advice, they’re also monitoring the potential risks and challenges, trying to ensure that this advice remains both suitable and transparent for clients.

The current crisis is certainly putting the usefulness of the new technology to the test. Tony Vail, chief advice officer at Wealth Wizards, a well-known provider of AI-assisted financial advice in Britain, says: “We’re finding increasing demand for our technology solutions [as a result of the crisis]. For example, our digital financial advisor, MyEva, had an unprecedented response to an [online] nudge offering help and guidance with finances related to the impacts of COVID-19.”

Given the increased attention on AI-assisted advice, Britain’s Financial Conduct Authority (FCA) is taking a proactive approach on the matter. Last autumn, the FCA and the Bank of England conducted a survey of more than 100 financial services firms on their experience using AI, resulting in a report published in October 2019. That was to be followed by a forum this spring to solicit more industry feedback, which has been postponed as a result of COVID-19.

Key areas of concern, FCA documents state, include the practical challenges and barriers to deploying AI, as well as its potential risks. The regulator is also soliciting industry suggestions for regulatory principles.

In Canada, the use of AI in financial services is beginning to emerge, but at a slower pace. For example, Bank of Montreal introduced BMO Insights in late 2019, which it says “leverages artificial intelligence to deliver personalized, automated, and actionable insights for everyday banking customers.” On the regulatory side, the Ontario Securities Commission says in an e-mail that the topic of AI and financial advice is “an area of interest” and that OSC LaunchPad has worked with “novel” businesses in this area.

Mr. Vail welcomes the regulatory attention this topic is receiving in Britain – especially given the importance of the decisions that clients could be making using the new technology. For example, the firm’s MyEva platform is designed to offer customized, AI-powered, online advice to the employees of large firms such as Unilever PLC.

The platform uses machine learning to offer advice tailored to individual users on matters such as company pensions, saving, borrowing and making the transition to retirement, at no cost to employees. Those who wish to follow up with a human advisor may pay a fee; on average, about 20 per cent of users choose to do so.

Mr. Vail says the FCA has encouraged his firm’s AI-driven innovation. Among other advantages, it has the potential to fill the so-called advice gap that has emerged in Britain after a decade of tighter regulation resulted in a sharp decline in the number of advisors. But he acknowledges that, as its use increases, more regulatory review is expected. “Totally reasonably, [regulators have] to feel uncomfortable about something that is perceived to be, and is actually, a black box,” he says.

Cary List, president and chief executive officer of FP Canada, says there are many potential advantages of using AI to support financial planning. However, he notes that “every individual is different,” and that the quality of advice must be maintained, regardless of how it’s delivered.

Furthermore, the fact that AI is still a very new technology gives reason to be cautious, he says. “Until there’s a sufficient repository of data in these systems, we are going to see a lot of room for error – and that causes a lot of challenges.”

Indeed, the quality of data collection and its application to individual situations should be a key concern for regulators, says Giulia Lupato, a lawyer and senior policy advisor with the Personal Investment Management & Financial Advice Association (PIMFA) in London, which represents most of Britain’s retail investment management firms.

One of the most concerning issues, she says, is data bias – problems that arise when huge amounts of data are drawn from many different sources and, over time, are applied without sufficient customization.

“It’s a big problem. Over time, you end up with outcomes that are not good [for all clients],” Ms. Lupato says.

Such data has to be “cleaned,” which means it has to be reviewed by a person and the biased data removed, she says. “You have to go through it, spot it, and eliminate it. It’s not straightforward.”

Although AI-driven advice is slowly becoming more common, it’s likely to have some growing pains. Regulators and the investment industry alike will be keeping a close eye on its development.
