
AI In The Canadian Financial Services Industry

In recent years, players within Canada’s financial services
industry, from banks to Fintech startups, have shown early and
innovative adoption of artificial intelligence
(“AI”) and machine learning
(“ML”) within their organizations and
services. With the ability to review and analyze vast amounts of
data, AI algorithms and ML help financial services organizations
improve operations, safeguard against financial crime, sharpen
their competitive edge and better personalize their services.

As the industry continues to implement more AI and build upon
its existing applications, it should ensure that such systems are
used responsibly and designed to account for any unintended
consequences. Below we provide a brief overview of current
considerations, as well as anticipated future shifts, in respect of
the use of AI in Canada’s financial services industry.

The Regulatory Landscape and Some Recent Developments

At a high level, Canadian banks and many bank-specific
activities are matters of federal jurisdiction. As a result, they
are subject to the Personal Information Protection and
Electronic Documents Act (“PIPEDA”)1 and its
“substantially similar” provincial equivalents when it
comes to their use of personal information (including in the
context of developing or deploying AI). Future posts in this series
will engage in a broader discussion of AI and privacy concerns.
Financial institutions’ use of AI is also subject to consumer
protection, competition and human rights legislation.

Multiple regulators, including the Office of the Superintendent
of Financial Institutions (“OSFI”), the
Financial Consumer Agency of Canada
(“FCAC”), and the Financial Transactions
and Reports Analysis Centre of Canada
(“FINTRAC”), play important roles in
regulating banks and financial services institutions. Many
banking-adjacent activities are regulated provincially (including,
for example, by securities regulators) and, as a result, financial
services institutions may come under provincial regulation when
engaging in provincially regulated fields, such as insurance and
securities.

As a result, the current regulatory landscape governing the use
of AI in the financial services industry is a broad patchwork of
laws and regulations. Some examples of the regulatory initiatives
and constraints of Canadian regulators currently impacting the use
of AI in the financial sector are described below.

On September 24, 2015, the Canadian Securities Administrators
(“CSA”), the umbrella organization of
Canada’s provincial securities regulators, published CSA Staff
Notice 31-342: Guidance for Portfolio Managers Regarding Online
Advice (“CSA Notice 31-342”). See our previous
blog outlining CSA Notice 31-342 here. Among other things, CSA Notice 31-342
provides guidance for online advisers and suggests that Canadian
securities regulators view online advisers as online platforms
through which a human portfolio manager can provide investment
services, rather than as stand-alone wealth management services.

OSFI’s Guideline E-23: Enterprise-Wide Model Risk Management
for Deposit-Taking Institutions
(“Guideline E-23”)
places the onus on federally regulated financial institutions to
develop their own sets of risk management policies and procedures
(including, arguably, in relation to uses of AI) and indicates that
such models should be reviewed regularly to evaluate their
performance. OSFI has signaled a forthcoming revised model risk
guideline (referenced further below).

In 2019, in collaboration with Accenture, the Investment
Industry Regulatory Organization of Canada
(“IIROC”) published its report on the
state of wealth management in Canada, “Enabling the Evolution
of Advice in Canada,” prepared through consultation with the
CSA and other industry stakeholders. The report canvassed many of
the new business models being implemented by financial services
firms, made recommendations for regulatory shifts and identified
some of the factors that remain unknown as firms continue to
embrace an AI-driven approach to wealth management.

In September 2020, OSFI released Developing financial sector
resilience in a digital world: Selected themes in technology and
related risks. See our previous blog outlining this discussion
paper here. Among other things, the paper noted that
the use of AI and ML presents new opportunities and risks that
should be approached with soundness, explainability and
accountability. Further, the paper signaled OSFI’s interest in
collaborating with stakeholders to develop guidance that balances
the “safety and soundness” of the Canadian financial
sector against the sector’s need to innovate.

In its 2020-2021 Annual Report, OSFI stated that AI
and ML are “expected to increase in importance both in terms
of advancing [model risk managing] frameworks and in enhancing or
creating new products and services.” OSFI is focused on
developing additional principles “to address emerging
risks” resulting from the use of AI and ML and anticipates
publishing an industry letter on advanced analytics in 2022, as well as revised model risk guidelines
in 2022-2023.

In July 2021, the Ontario Securities Commission,
British Columbia Securities Commission, Autorité des
Marchés Financiers, and Alberta Securities Commission,
following the CSA’s 2017 launch of its Regulatory Sandbox (which,
at its inception, cited “business models using artificial
intelligence for trades or recommendations” as an example of
eligible sandbox candidates), jointly announced the selection of
Bedrock AI Inc. to support the Cross-Border Testing initiative, a
project involving 23 regulators across five continents. This
marked an important step by the securities regulators towards
broader adoption of AI in their oversight processes.

Current Trends and Uses of AI

Banking in Canada, although now largely digital in operation,
continues to involve many human-based processes. The following are
some examples of how AI is being used in the industry to mitigate
the potential for human error, increase security and efficiency,
and adapt to the needs of the modern customer.

  • General and Predictive Analysis

Financial services institutions are developing AI models that
are capable of analyzing large amounts of data to identify market
trends, prioritize risks and monitor them accordingly. These AI
models are used to detect specific patterns and correlations in the
data collected, which can in turn be used to identify new sales
opportunities or assist with revenue forecasting, stock price
predictions and risk management.
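
To make the mechanics concrete, below is a minimal, hypothetical
sketch of the kind of trend-based forecasting such models perform.
It is illustrative only: the revenue figures are invented and the
simple linear trend fit stands in for the far more sophisticated
models institutions actually deploy.

```python
# Illustrative only: a toy revenue-forecasting step using a linear
# trend fit (numpy). Real institutional models are far more complex.
import numpy as np

# Hypothetical monthly revenue figures (CAD millions) -- invented data.
revenue = np.array([10.2, 10.8, 11.1, 11.9, 12.4, 12.8, 13.5, 14.1])
months = np.arange(len(revenue))

# Fit a straight line to capture the underlying trend.
slope, intercept = np.polyfit(months, revenue, deg=1)

# Extrapolate the fitted trend three months forward.
future = np.arange(len(revenue), len(revenue) + 3)
forecast = slope * future + intercept

print(f"Estimated monthly growth: {slope:.2f}M CAD")
print("Three-month forecast:", np.round(forecast, 2))
```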

  • Fraud Detection

Financial services institutions have traditionally relied on
“know your customer” (“KYC”)
requirements and rule-based anti-money laundering
(“AML”) monitoring systems to protect
against fraud. With the increase in fraud-related crimes and
consistently changing fraud patterns, financial services
organizations and regulators are applying AI to existing
fraud-detection systems, to identify data anomalies, patterns and
suspicious relationships between individuals and entities that
previously went undetected. By looking at customer behaviours and
patterns instead of specific rules, proactive AI-based systems
represent a significant transition away from more traditional,
reactive approaches to fraud detection.
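
As a rough illustration of the behaviour-based approach, the sketch
below uses an off-the-shelf anomaly detector (scikit-learn’s
IsolationForest) on invented transaction features; it is an
assumption-laden toy, not any institution’s actual monitoring
system.

```python
# Toy behaviour-based transaction monitoring: flag transactions that
# deviate from a customer's historical pattern. All data is invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical features per transaction: [amount_cad, hour_of_day].
history = np.column_stack([
    rng.normal(80, 25, size=500),   # typical purchase amounts
    rng.normal(14, 3, size=500),    # mostly daytime activity
])
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Score new activity: a routine purchase vs. a large 3 a.m. transfer.
new_txns = np.array([[75.0, 13.0], [9500.0, 3.0]])
print(model.predict(new_txns))  # 1 = looks normal, -1 = flag for review
```

Unlike a fixed rule (e.g., "flag all transfers over $10,000"), the
detector learns what is normal for the observed behaviour and flags
departures from it.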

  • Chatbots

Chatbots are one of the most commonly used applications of AI
across industries and have been embraced by many financial services
organizations. Most frequently serving as a “virtual
assistant”, chatbots are available 24/7 and can handle many
standard banking tasks and inquiries that previously necessitated
person-to-person interaction. To the extent that chatbots collect
personal information or provide financial advice, their activities
are likely to be subject to regulatory scrutiny.

  • Loan and Credit Decisions

Many financial services institutions continue to rely on credit
scores, credit history, customer references and banking
transactions to determine whether or not an individual or entity is
creditworthy. However, these credit reporting systems often miss
real-world transaction history and other information that impacts
creditworthiness. As a result, financial services institutions have
implemented AI-based systems to help make more informed, safer and
profitable loan and credit decisions. In addition to working from
available data, AI-based loan decision systems and ML algorithms
can look at behaviours, patterns and other data to predict the
probability of default, which helps to improve the accuracy of
credit decisions.
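
The sketch below illustrates the probability-of-default idea with a
logistic regression on invented applicant features; the feature
names, data and model choice are assumptions for demonstration, not
a description of any lender’s system.

```python
# Toy probability-of-default model: logistic regression on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000

# Hypothetical features: [credit_utilization (0-1), months_since_delinquency].
X = np.column_stack([rng.uniform(0, 1, n), rng.uniform(0, 60, n)])

# Synthetic labels: high utilization and recent delinquency raise default risk.
p_true = 1 / (1 + np.exp(-(4 * X[:, 0] - 0.08 * X[:, 1] - 1)))
y = (rng.random(n) < p_true).astype(int)

model = LogisticRegression().fit(X, y)

# Estimated default probability for a new applicant:
# 90% utilization, delinquent two months ago.
applicant = np.array([[0.9, 2.0]])
print(f"Estimated default probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```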

However, AI-based loan and credit applications can suffer from
bias-related issues similar to those made by their human
counterparts, a challenge discussed further below.

  • Robo-Advisers

In simple terms, a robo-adviser attempts to understand a
customer’s financial circumstances by analyzing data shared by
the customer, as well as their financial history. Based on this
data and the customer’s goals, a robo-adviser can provide
appropriate investment recommendations (including with regard to
specific account options, asset holdings and balancing options).
Capable of quickly analyzing current and historical market trends,
AI and ML are now being applied across the investing and wealth
management industry.
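
In very simplified form, the recommendation step might resemble the
sketch below, which maps a hypothetical KYC-derived risk score and
time horizon to an asset allocation. The tiers, thresholds and
field names are invented for illustration.

```python
# Toy mapping from a client profile to a model portfolio allocation.
from dataclasses import dataclass

@dataclass
class ClientProfile:
    risk_score: int     # hypothetical 1-10 score from a KYC questionnaire
    horizon_years: int  # investment time horizon

def recommend_allocation(client: ClientProfile) -> dict:
    """Return a hypothetical equity/fixed-income split."""
    # Dampen equity exposure for short horizons, whatever the risk appetite.
    effective = min(client.risk_score, client.horizon_years)
    equity = min(0.9, 0.1 * effective + 0.2)
    return {"equities": round(equity, 2), "fixed_income": round(1 - equity, 2)}

# Example: a moderate-risk client with a 20-year horizon.
print(recommend_allocation(ClientProfile(risk_score=6, horizon_years=20)))
# {'equities': 0.8, 'fixed_income': 0.2}
```

As discussed below, under current Canadian guidance any such output
would be reviewed by a human advising representative before reaching
the client.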

Canadian regulators have not yet paved the way for
fully-automated robo-advisers, and as a result, they do not yet
exist in Canada in the same form as in the United States and other
countries. As a result of regulatory guidance like CSA Notice
31-342, online advisers are still required to: (a) fulfill the same
registration and conduct requirements as regular portfolio
managers, including know-your-client
(“KYC”) and suitability obligations, and
(b) ensure that their clients have the opportunity to interact with
a human advising representative (“AR”)
during the on-boarding process, either “by telephone, video
link, email or internet chat”.2

Any robo-adviser currently operating in Canada uses a
“hybrid” model in which an online platform is used for
efficiency, but decision-making is ultimately left to an AR. An
AR’s review of robo-adviser-generated advice serves, among other
things, to ensure that: (a) the investor profile generated by the
algorithm corresponds to the client’s KYC information, and (b)
the model portfolio recommended by the algorithm is suitable for
the client. This ultimately places the responsibility of fulfilling
the KYC and suitability obligations on the AR, rather than the
online adviser. In order to ensure continued compliance with KYC
and suitability obligations, online advisers’ systems should
prompt a client to update their personal information online at
least annually or when a material change in their financial
circumstances has occurred so that the software can re-determine
the suitability of that client’s portfolio. As with the initial
advice generated by the algorithm, an AR has to review any new
advice or changes to the initial advice before it is presented to
the client. As online advisers expand their client base, they must
continually hire ARs to provide adequate services to their clients
and comply with all the regulatory requirements, including the
review of all financial advice.3
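
The refresh-and-review rules described above lend themselves to
straightforward automation. The following is a minimal sketch,
assuming invented field names and a 365-day refresh interval, of
how an online adviser’s system might decide when to prompt a client
and gate advice on AR approval.

```python
# Sketch of the compliance workflow described above: prompt a KYC
# refresh at least annually or on a material change, and hold
# algorithm-generated advice until an AR has reviewed it.
from datetime import date, timedelta
from typing import Optional

KYC_REFRESH_INTERVAL = timedelta(days=365)  # assumed annual cadence

def kyc_refresh_due(last_updated: date, material_change: bool,
                    today: Optional[date] = None) -> bool:
    """True if the client should be prompted to re-confirm KYC information."""
    today = today or date.today()
    return material_change or (today - last_updated) >= KYC_REFRESH_INTERVAL

def release_advice(advice: str, ar_approved: bool) -> str:
    # Under the hybrid model, advice is presented only after AR review.
    if not ar_approved:
        return "HELD: pending review by an advising representative"
    return advice

print(kyc_refresh_due(date(2021, 3, 1), material_change=False,
                      today=date(2022, 6, 1)))        # True: over a year old
print(release_advice("Rebalance to 60/40", ar_approved=False))
```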

As a result of this hybrid approach, securities regulators have
only registered online advisers with relatively simple business
models and portfolios, which are easy for investors with average
financial literacy to understand. As robo-advisers become more
sophisticated and more fully automated, their ability to predict
investor behaviour and market conditions will improve. Canada’s
investment regulators continue to monitor and respond to these
shifts. Careful attention should continue to be paid to their
approach moving forward.

  • RegTech

Regulatory technology (“RegTech”) has
been cited by the Financial Stability Board
(“FSB”) as an important area of
innovation, involving the application of financial technology for
regulatory and compliance requirements and reporting by regulated
institutions. See our previous blog summarizing the opportunities
and challenges described by the FSB’s 2020 report on the use of
RegTech and supervisory technology
(“SupTech”) by FSB members, including
OSFI, here.

RegTech is being used by financial regulators and institutions
to manage and respond to changes in the financial regulatory
environment and to reduce the costs around compliance (including in
relation to ensuring minimum regulatory standards are met). As
technology-driven regulatory changes continue to occur across
jurisdictions, RegTech compliance frameworks can help financial
organizations ensure that they are meeting shifting requirements.
See our previous blog, describing advancements in the Canadian and
Australian RegTech ecosystems, here.

In February 2020, the Ontario Securities Commission (the
“OSC”) established the Capital Markets
Modernization Task Force (the “Task Force”) to
implement initiatives to modernize Ontario’s capital markets
regulation. In January 2021, the OSC released the Task Force’s
report, which, among other things, considered the potential use of
RegTech in the OSC’s regulation of Ontario’s capital markets. The
Task Force recommended that the Innovation Officer “should
consider how RegTech solutions, such as automated compliance tools,
can benefit market participants and the OSC.” The Task
Force’s recommendations focused on RegTech that would reduce the
regulatory burden, such as assisting with onboarding clients,
fulfilling KYC obligations and conducting suitability assessments.
The OSC further committed to its goal of incorporating RegTech in
OSC Notice 11-794 – 2022-2023 Statement of Priorities, wherein
the OSC set out an action item to develop an OSC strategy to
consider RegTech solutions.

FINTRAC has also started to enable the use of RegTech,
particularly with respect to KYC requirements. Digital identities,
along with verification technologies, enable faster and more
accurate customer validation and verification for streamlined KYC
processes. Recent amendments have been made to the regulations
under the Proceeds of Crime (Money Laundering) and Terrorist
Financing Act4 to make online identification easier.

Risks and Challenges

Embracing AI comes with certain risks and challenges. Financial
services institutions should ensure that their implementation of AI
systems aligns not only with the developing regulatory regime, but
also with their existing ethics and bias practices. The following
are key concerns that have guided, and should continue to guide,
the development of financial AI tools and applications.

  • Data and Bias

AI models are necessarily subject to the biases and assumptions
of the humans who developed them. As the performance and fairness
of any AI model turns on the accuracy and diversity of its subject
data, steps should be taken to ensure that data remains precise and
representative of the targeted population. The presence of any bias
can be magnified when a model is deployed, sometimes with troubling
results.

As identified in Guideline E-23, once an AI model is used by a
financial services institution, it must be continuously updated to
accommodate new facts and to ensure that its decisions are made
fairly.
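
One simple, illustrative form such ongoing review can take is a
periodic fairness check on the model’s outputs. The sketch below
compares approval rates across demographic groups on invented data;
the 20% tolerance is a placeholder, as real thresholds are policy
and legal decisions.

```python
# Toy ongoing-monitoring check: compare approval rates across groups.
import numpy as np

def approval_rate_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in approval rate between any two groups."""
    rates = [approved[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical decisions (1 = approved) and group labels -- invented data.
approved = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 1])
group = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"])

gap = approval_rate_gap(approved, group)
print(f"Approval-rate gap: {gap:.2f}")
if gap > 0.2:  # placeholder tolerance
    print("Gap exceeds tolerance -- escalate for model review")
```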

  • Fairness and Transparency

Financial services institutions and organizations operate under
regulations that may require them to issue explanations for their
credit-issuing decisions to potential customers. Notably, in a
2019 submission to the Department of Finance, the Office of the
Privacy Commissioner of Canada cited the use of big data analytics
and artificial intelligence in the financial technology realm as an
area “requiring more attention,” particularly with regard
to transparency, accountability and individuals’ ability to
obtain access to their information.

Whether a financial services institution is required to provide an
explanation for a decision, and the degree of detail required to be
included with that explanation, is context-specific. As a result,
financial services institutions should ensure their AI tools
provide appropriate levels of transparency in their decision-making
processes.

In addition to complying with regulations, financial services
institutions must be mindful of customer trust when using AI tools.
For example, if a financial services institution deploys a chatbot
that makes mistakes or continually misunderstands the
customers’ questions, customers will lose trust in the
technology and the financial services institution will no longer
receive the benefits associated with using the technology.

Conclusion

Financial services firms that invest in AI systems stand to gain
advantages in the market, improve customer satisfaction and enhance
their financial performance at the expense of those that fail to
innovate with AI. However, careful attention should be paid to
ensure that AI-powered applications and tools are developed with
the ever-evolving legal and regulatory AI landscape in mind.

Stay tuned for further McCarthy Tétrault publications on
this subject.

To learn more about how our Cyber/Data Group can help you
navigate the AI, data and privacy landscape, click
here, and for more information
about our firm’s Fintech expertise, please see our
Fintech group page.

Footnotes

1 The Personal Information Protection and Electronic Documents
Act, SC 2000, c 5. An example of “substantially similar”
provincial privacy legislation is Québec’s Act Respecting the
Protection of Personal Information in the Private Sector, CQLR c
P-39.1, s 20, which was recently amended by Bill 64 and now
provides individuals with rights relating to automated decision
making. See our blog series on Bill 64 here.

2 CSA Staff Notice 31-342.

3 CSA Staff Notice 31-342.

4 Proceeds of Crime (Money Laundering) and Terrorist Financing
Act, SC 2000, c 17.

To view the original article, click here.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
