‘Significant gaps’ in regulation of AI platforms in financial services, says UNSW

Technology
02 August 2023

The growing use of AI-related applications in financial services raises potential concerns around privacy breaches, discrimination and consumer manipulation that should be addressed, the university says.

A number of potential risks from artificial intelligence are not well captured by existing law, particularly in the area of financial services, the University of New South Wales (UNSW) has told the government in response to its discussion paper on responsible AI.

A joint submission by the UNSW Allens Hub for Technology, Law and Innovation and the UNSW Business School Regulatory Laboratory has identified “significant gaps” in the regulatory regimes for financial services in relation to the growing use of AI-related applications.

A recent research project undertaken by UNSW Allens Hub senior research fellow Dr Kayleen Manwaring investigated the harms to consumers arising from the growing use of AI-related applications in financial services and how Australia’s current laws apply to these harms.


“The research project found that these harms range across a number of subject areas, such as discrimination, privacy breaches, digital consumer manipulation and financial exclusion,” the submission said.

“Additionally, this research project has identified significant gaps in relevant regulatory regimes relating to these harms, such as the Insurance Contracts Act 1984 (Cth), the General Insurance Code of Practice (2020), the Corporations Act 2001 (Cth), the Australian Securities and Investments Commission Act 2001 (Cth), the Privacy Act 1988 (Cth), and state and Commonwealth anti-discrimination legislation,” it said.

The research project has also suggested some principles for reform.

The research undertaken by UNSW Allens Hub members has discovered exploitative and manipulative conduct by digital platforms and others providing digital services.

“There is growing concern by scholars, practitioners, think tanks and industry commentators that the increase in electronic marketing and transactions, and the vast amount of data exposed to public scrutiny by ecommerce, social media, Internet-connected devices and environments and other online activities, may grant marketers a significantly increased capacity to predict consumer behaviour, and use data and behavioural research to exploit the biases, emotions and vulnerabilities of consumers,” the submission stated.

The submission warns that the ability of commercial entities to manipulate or exploit consumers is greatly enhanced by the use of AI-related technologies, such as machine learning.

“AI technologies such as machine learning are at the forefront of the significant amount of data analysis and inferencing required to predict the behaviour of consumers in any number of situations, and to be able to target them in real-time in specific ways, in particular emotional states, in such locations and at the times when manipulation is most likely to be successful,” the submission stated.

“The commercial benefits firms may gain from such techniques include inducing disadvantageous purchases of products or services, extracting more personal information from consumers than is needed for the transaction, and engaging in unjustifiable price discrimination.”

Examples of this can already be seen in the financial services context, according to UNSW Allens Hub.

“Digital consumer manipulation in this industry often takes the form of ‘margin optimisation’, a ‘process where firms adapt the margins they aim to earn on individual consumers’,” the university hub said.

“Even with most commercial entities’ practice of concealing their data-driven business practices where they can, some external evidence exists that EU, UK and US insurance firms, when setting prices, look at consumers’ willingness to pay based on their personal characteristics gained from the insights that external data provides.”

The submission said that machine learning models and algorithms can be used to create inferences of price sensitivity and propensity for switching, based for example on the analysis of consumers’ behaviour on a website or app controlled by the financial firm, the time an individual spends reading terms and conditions, or websites visited before applying to the financial services provider.
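To make that mechanism concrete, the following is a purely illustrative sketch, not drawn from the submission: it shows how a hypothetical firm might train a simple classifier on behavioural signals of the kind described above (time spent reading terms and conditions, comparison sites visited) and then widen the quoted margin for a customer judged unlikely to switch. All feature names, figures and margins are invented for illustration.

```python
# Illustrative sketch only: inferring price sensitivity from behavioural
# signals, then adjusting the quoted margin. All data here is synthetic
# and all feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical behavioural features per customer:
#   seconds spent reading terms and conditions,
#   comparison websites visited before applying,
#   quote pages viewed on the firm's own site.
X = rng.normal(loc=[120, 3, 5], scale=[60, 2, 3], size=(1000, 3))

# Synthetic label: 1 = likely to shop around (price sensitive).
# This labelling rule is fabricated purely for the demo.
y = ((X[:, 0] > 120) & (X[:, 1] > 3)).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new customer who skimmed the T&Cs and visited no comparison sites.
new_customer = np.array([[15, 0, 8]])
p_sensitive = model.predict_proba(new_customer)[0, 1]

# A firm engaging in 'margin optimisation' could then quote a wider margin
# to customers the model judges unlikely to switch (illustrative figures).
base_premium = 500.0
margin = 0.05 if p_sensitive > 0.5 else 0.15
print(f"quoted premium: {base_premium * (1 + margin):.2f}")
```

In practice such models would be trained on far richer proprietary data; the point of the sketch is simply that the behavioural inference and the individualised pricing decision sit only a few lines of code apart.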

The use of dark patterns

A common example of digital consumer manipulation is ‘dark patterns’, according to the research, where the design of user interfaces such as e-commerce websites takes advantage of certain behavioural biases.

“Behavioural biases are well-known psychological biases that can be exploited to make it difficult for consumers to select their actual preferences, or to manipulate consumers into taking certain actions that benefit the interface owner rather than the consumer,” the submission said.

“They are commonly used to manipulate consumers into paying for goods and services they do not need or want, or disclosing personal information that is unnecessary for the transaction and is used by the receiver for their own commercial purposes, or on-sold to third parties.”

The submission noted that Amazon has recently been targeted by the US Federal Trade Commission (FTC) for its use of these dark patterns.

The FTC argued that these digital consumer manipulation techniques constituted unfair or misleading conduct in breach of section 5 of the Federal Trade Commission Act.

The ACCC, in its Digital Platform Services Inquiry, and consumer advocates have also recently identified the use of dark patterns as a serious issue for Australian consumers, the submission said.

“As the use of machine learning techniques in data analytics increases, and transparency decreases, the likelihood of disadvantages for consumers and other data subjects is likely to increase,” it said.

“The new activities now made possible by hyper-personalised profiling, algorithmic microtargeting of marketing campaigns, and the growth of new data collectors and marketing media via connected devices and environments may lead to an opaqueness unprecedented in the consumer space: in other words, a mass inability to know our own minds.”

The need for regulatory action

The submission said that while AI does generate calls for regulatory action, that action does not need to be technology-specific.

The university said there is no need to define ‘artificial intelligence’ in order to address the issues associated with a diverse range of technological practices.

“Instead, most problems identified are better addressed through a program to reform and update privacy and discrimination legislation, consumer law, administrative law, and so forth, so that they operate to achieve their goals when applied to current practices associated with the broad frame of artificial intelligence,” it said.

The submission also warned that Australia’s existing privacy and data protection laws are inadequate and too outdated to protect the privacy of Australians and guard against serious harms caused by privacy infringements.

“Throughout the Privacy Act Review conducted by the Attorney General’s Department from 2020 to 2022, numerous submissions emphasised the need for urgent reform of these laws. The privacy risks introduced by certain AI systems and the widespread adoption of AI applications increase the urgency of proposed reforms,” it said.

About the author


Miranda Brownlee is the news editor of Accounting Times, an online publication delivering analysis and insight to Australian accounting professionals. She was previously the deputy editor of SMSF Adviser and has broad business and financial services reporting experience, having written for titles including Investor Daily, ifa and Accountants Daily. You can email Miranda on: [email protected]
