Exploring the future role of artificial intelligence in professional bodies

by Ali Cain | 05 Oct 2017
Artificial intelligence (AI) and machine learning (ML) are two emerging technology trends transforming many industries. 

In the future, these tools will also have many different applications for professional bodies such as FINSIA. But they are unlikely to completely replace human intelligence.

Robert Hillard, Deloitte’s managing partner for consulting, says the term artificial intelligence can be misleading.

“What we're talking about is a form of flexible programming; it’s where programming meets data analytics,” says Hillard.

Artificial intelligence can perform roles that humans currently play, but in a different way. For instance, many interactions between a professional body and its members are reasonably procedural, such as collecting members’ correct personal details or sending out information about professional development courses and events.

“You can teach a machine to do these tasks and improve services to members. But it’s important to remember machines can only respond to questions in the way the iPhone’s Siri voice recognition app, itself a form of artificial intelligence, does. Nevertheless, there are lots of times members need to ask simple questions of their professional bodies,” Hillard adds.

For instance, members could ask Siri-style technology how many CPD points they have accrued so far in a year. Or they could ask when the annual conference is, or when a workshop they are interested in attending will be held. But humans will still have to respond to more subtle questions, and to questions that haven’t been asked before.

“Done well, artificial intelligence frees up limited resources. For instance, it can respond to people who contact a member body to find out when their membership is due, or if they have professional indemnity cover for a particular situation,” Hillard explains.

“But humans will still have to be involved when members contact their professional body and say, ‘I’m worried I’ve given a client the wrong advice’, because the response won’t be easy,” he adds.

Hillard expects chat bots will be professional bodies’ first forays into artificial intelligence. “Getting members comfortable using them will be one of the biggest challenges.”
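
To make the idea concrete, the sketch below shows the kind of simple intent matching such a member-services chat bot might start with. It is written in Python; the member record, trigger keywords and canned answers are hypothetical examples rather than any existing FINSIA system, and anything the bot cannot match is handed to a person, as Hillard describes.

# A minimal sketch of keyword-based intent matching for a member-services
# chat bot. The member record and intents below are illustrative only.

MEMBER = {
    "name": "Alex",
    "cpd_points": 32,
    "membership_due": "31 March 2018",
    "annual_conference": "15 November 2017",
}

# Each intent is a set of trigger keywords plus a canned response template.
INTENTS = [
    ({"cpd", "points"}, "You have accrued {cpd_points} CPD points so far this year."),
    ({"membership", "due"}, "Your membership renewal is due on {membership_due}."),
    ({"annual", "conference"}, "The annual conference is on {annual_conference}."),
]

def respond(question: str) -> str:
    """Answer a routine question from a known intent, otherwise hand the
    enquiry to a human."""
    words = set(question.lower().rstrip("?.!").split())
    for keywords, template in INTENTS:
        if keywords <= words:                  # all trigger keywords present
            return template.format(**MEMBER)
    return "I'll pass this on to our member services team."

print(respond("How many CPD points do I have?"))
print(respond("I'm worried I've given a client the wrong advice."))

In practice the matching would sit behind a natural-language service rather than raw keywords, but the division of labour is the one Hillard describes: canned answers for routine questions, a human for everything else.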

When it comes to adherence to professional standards, he says AI won’t be used to spot breaches; rather, it can identify actions that clearly comply with regulations, leaving humans to identify the breaches.
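
A small Python sketch of that triage, with the compliance rule and member records invented for illustration: the software only asserts clear compliance, and anything it cannot confirm is referred to a person rather than labelled a breach.

# Illustrative compliance triage: the software confirms only the clear-cut
# cases; everything else is queued for a human reviewer. Data is invented.

REQUIRED_CPD_POINTS = 40               # hypothetical annual requirement

members = [
    {"name": "Member A", "cpd_points": 55, "declaration_signed": True},
    {"name": "Member B", "cpd_points": 38, "declaration_signed": True},
    {"name": "Member C", "cpd_points": 60, "declaration_signed": False},
]

def triage(member):
    """Label a record compliant only when every check clearly passes;
    otherwise refer it to a human rather than declare a breach."""
    clearly_compliant = (
        member["cpd_points"] >= REQUIRED_CPD_POINTS
        and member["declaration_signed"]
    )
    return "compliant" if clearly_compliant else "refer to human reviewer"

for member in members:
    print(member["name"], "->", triage(member))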

“There is also a role for artificial intelligence in professional education, which has evolved in recent years. We've gone from a mixture of printed material and classroom training to electronic training and digital resources curated by students. Now artificial intelligence can tailor a syllabus to student needs,” says Hillard.

For instance, AI can help members define their professional development goals, understand their diaries and identify periods where they have time to absorb a large amount of information. Conversely, it can identify points where a member only has time to digest bite-sized chunks of information, and keep adjusting the syllabus to their commitments.

“You end up with a much more efficient professional development program. But the challenge for professional bodies is keeping on top of all these changes and adjusting their offering accordingly,” he says.
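
A rough Python sketch of the scheduling logic Hillard describes follows; the modules, diary slots and durations are invented purely for illustration.

# Illustrative sketch: match professional development modules to the free
# time a member actually has. All names and durations are invented.

from dataclasses import dataclass

@dataclass
class Module:
    title: str
    minutes: int          # time needed to complete the module

@dataclass
class FreeSlot:
    day: str
    minutes: int          # free time in the member's diary

MODULES = [
    Module("Ethics refresher (bite-sized)", 15),
    Module("Regulatory update briefing", 30),
    Module("Advanced credit analysis workshop", 120),
]

DIARY = [
    FreeSlot("Monday", 20),
    FreeSlot("Wednesday", 45),
    FreeSlot("Saturday", 180),
]

def build_plan(modules, diary):
    """Assign each module to the first free slot long enough to hold it,
    so short gaps get bite-sized content and long gaps get deeper work."""
    remaining = sorted(modules, key=lambda m: m.minutes)
    plan = []
    for slot in sorted(diary, key=lambda s: s.minutes):
        for module in list(remaining):
            if module.minutes <= slot.minutes:
                plan.append((slot.day, module.title))
                remaining.remove(module)
                break
    return plan, remaining

plan, unscheduled = build_plan(MODULES, DIARY)
for day, title in plan:
    print(f"{day}: {title}")

A real system would also weigh the member's stated goals and re-plan whenever the diary changes; the point is simply that short gaps attract bite-sized content while longer gaps attract deeper work.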

Professor Nick Wailes is associate dean (digital and innovation) at UNSW Business School. He agrees professional organisations and standards-based bodies should be exploring the role technology can play in their operations. 

For instance, professional bodies can help their members explore new ways in which artificial intelligence could be used to help generate advice. 

“A good example where this has been used already is searching for legal precedents,” Wailes explains. For instance, Case Analysis Research Assistant (CARA) is an automated research assistant for lawyers.

Wailes says machine learning will also be able to teach software what good advice looks like. But he says using machine learning to identify ethics breaches may be more problematic.

“Ethics breaches are rare and often it is not a clear-cut issue that is amenable to simple classification. Financial institutions have sophisticated systems that involve artificial intelligence to identify suspect trades. But in these cases a relatively simple set of parameters can be identified,” Wailes adds.

This is usually not the case for ethics breaches, which are often more complex than rogue trading.
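
The contrast Wailes draws can be made concrete with a toy Python example: a handful of simple parameters is enough to flag a suspect trade for human review, whereas no comparable rule set captures an ethics breach. The thresholds and trade record below are invented for illustration.

# Toy illustration of the "relatively simple set of parameters" Wailes
# mentions for flagging suspect trades. Thresholds and data are invented.

SUSPECT_RULES = {
    "max_notional": 5_000_000,         # unusually large trade size
    "max_price_deviation": 0.05,       # more than 5% away from the market price
}

def flag_trade(trade):
    """Return the rules a trade breaches; an empty list means not flagged."""
    reasons = []
    if trade["notional"] > SUSPECT_RULES["max_notional"]:
        reasons.append("notional above limit")
    deviation = abs(trade["price"] - trade["market_price"]) / trade["market_price"]
    if deviation > SUSPECT_RULES["max_price_deviation"]:
        reasons.append("price far from market")
    if trade["booked_after_hours"]:
        reasons.append("booked outside market hours")
    return reasons

trade = {
    "notional": 8_000_000,
    "price": 103.0,
    "market_price": 100.0,
    "booked_after_hours": False,
}
print(flag_trade(trade))               # ['notional above limit'] -- sent to a human reviewer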

“The more interesting issue is whether professional standards bodies have the resources to build AI systems. They require significant upfront investment and a lot of data,” he says. 

Wailes says it is likely to be some time before machines’ ability to make judgements exceeds human judgement and discernment, which will continue to be important in many fields, including complex situations such as assessing adherence to professional standards.

He says in the immediate term, machines are likely to enhance rather than replace human judgement.


