The King & Wood Mallesons (KWM) Digital Future Summit 2024, spanning 11 insightful sessions, spotlighted the intersection of technology and regulation, emphasising trust, safety, and security in our rapidly evolving digital landscape.
This year’s discussions underscored the necessity for a synchronised approach between technology and regulation to foster a digital world that is not only innovative but also secure and trustworthy.
Central to the summit's dialogue was the imperative for technology and regulation to move in tandem. As industries and governments grapple with the advancements in digital platforms, the call for an integrated approach to address issues like competition, consumer protection, privacy, and the governance of emerging technologies became increasingly clear.
Embracing technology with regulatory vigilance
In her address, ACCC Chair Gina Cass-Gottlieb highlighted the Commission's commitment to maintaining consumer trust amidst the surge of digital platforms. The ACCC's approach focuses on a review of the Australian Consumer Law to ensure it encompasses new technologies such as generative AI, which could potentially lead to misleading and discriminatory practices.
She emphasised that while innovation from digital platforms is encouraged, the ACCC remains vigilant against dominant players stifling competition.
The ACCC is also collaborating with both domestic and international regulators, including those overseeing the EU's Digital Services Act and Digital Markets Act, to stay ahead of global trends.
Former Information Commissioner Angelene Falk and Privacy Commissioner Carly Kind addressed the "perfect storm" of privacy risks emerging from data breaches and the rush to harness AI.
She noted a pressing need for a comprehensive Privacy Act that addresses data breaches and vulnerabilities within supply chains, and observed that the pace of technological change had been dramatic.
"I think we just need to remember that they've caused incredible public benefit, but at the same time they have increased privacy risks,” she said.
Technological advancements such as generative AI and automated decision-making will also continue to pose privacy risks and require careful consideration of accountability, transparency, and compliance with privacy laws.
The summit discussion also found that balancing privacy with other societal interests, such as online safety and integrity, can be challenging. Necessity and proportionality should guide decision-making in these cases, ensuring that privacy is not unnecessarily compromised.
AI: Governance at the forefront
Meanwhile, AI governance continues to be a key issue, particularly around how regulatory frameworks will support the changes brought about by generative AI.
ASIC Chair Joe Longo made a strong case for responsible AI innovation, urging businesses to understand and be accountable for the technologies they employ.
While innovation in AI is encouraged, businesses must be aware of and address potential risks to consumers, he noted.
Directors are also expected to understand the technologies used within their businesses, including AI, and to stay informed about regulatory environments. Director liability should be fact-sensitive, allowing for reasonable reliance on others in fulfilling their duties.
ASIC recommended that Australia align its AI regulations with those of major trading partners while leveraging existing tech-neutral legislation.
Good AI governance is also crucial not only to managing the range of risks that generative AI presents, but to ensuring that companies can harness its potential benefits in a way that fosters and maintains trust.
In a dynamic discussion with KWM’s senior associate Kendra Fouracre, CHOICE's Consumer Data Advocate Kate Bower, SEEK's Head of Responsible AI Fernando Mourão, and Jeannie Marie Paterson, Professor at Melbourne Law School and Director of the Centre for AI and Digital Ethics, hashed out the key elements of AI governance.
“Companies developing, implementing and using AI shouldn’t wait to 'see what happens' on legal reform before implementing AI Governance,” the panel highlighted.
“Good AI governance isn’t just for IT or risk teams – implementation needs input from across the company including legal, technology, risk, products, HR and other key stakeholders, from the design of the AI Policy to the make-up of the AI Governance Committee and beyond.”
As employees are already using generative AI, the best way to ensure it is used safely and responsibly is through an AI Governance framework that empowers employees, the panel found.
Effective AI governance is also unique to each company: it must complement existing risk management frameworks and processes and reflect how the organisation is using AI. It must also be flexible and able to adapt as generative AI continues to evolve rapidly.
Stay ahead of regulatory challenges
The Regulators 2024 is not just about gaining insights — it is also an unparalleled networking opportunity. The event will bring together the industry’s senior representatives, creating a platform for meaningful connections and collaborative opportunities. This interaction is crucial for fostering partnerships and sharing ideas that drive industry advancement.
Register now to secure your spot (or book a table) at The Regulators 2024 and be part of the conversation shaping the industry’s future.