
Australian AI firms warn 'the horse has already bolted'

Jennifer Dudley-Nicholson, AAP
The government could struggle to restrict the use and development of AI models in Australia. (AP PHOTO)

Forcing Australian firms to create ethical AI tools that pay creators for their content and disclose the data they use could put them at a global disadvantage and leave them unable to compete with US tech giants.

The warnings were issued by several Australian AI companies at the Senate’s Adopting Artificial Intelligence inquiry on Wednesday, despite their support for privacy law reforms and restrictions on high-risk AI uses.

Their statements came a day after representatives from the publishing, news, TV and film industries told the inquiry that Australian content had been used by US firms without consent or compensation to create AI models.

Trellis Data chief executive Michael Gately said the government would face a significant challenge when creating restrictions on the use and development of AI models in Australia.

While guidelines were needed, strict rules requiring developers to pay content creators for their work or be transparent about the data they use could slow down Australian firms and be ignored by overseas developers, he said.

“My preference would have always been personally to ensure that people are paid for their work under the Copyright Act.

“But I think that would be difficult to implement and would probably impact Australian companies unfairly against global competition,” Mr Gately said.

“The global large language models have already breached Australian artists on a massive scale.

“To a large degree, the horse has already bolted.”

Haast chief technology officer Liam King said onerous rules around data transparency could also “put Australian organisations at a disadvantage on the world stage”.

Nuvento chief executive David Hohnke agreed, telling the inquiry AI rules introduced in Australia should work alongside regulations in Europe and the US to avoid slowing local innovation.

“You don’t want to be stifled in the fact that Australians have to reference every single piece (of data) and the rest of the world doesn’t,” he said.

“If we do this in isolation, we could harm ourselves and people will go, ‘so what, I’ll just use ChatGPT and throw my documents up there and breach our company requirements’.”

But Atlassian global public policy head David Masters said that while strict AI regulations could slow innovation in Australia, there was still scope to set standards for its use and introduce legal reforms.

Mr Masters said the tech firm would like to see high-risk AI uses defined in Australian standards and changes to privacy laws to match European rules.

“We’re very much on the record that we would like to see privacy reforms here in Australia that raise the bar,” he said.

“What you don’t want to stifle is that innovation in fairly low-risk scenarios because that’s the opportunity for Australia ... but making sure that you’ve got appropriate guardrails and the proper level of security in those areas which are high-risk.”

Atlassian regulatory affairs and ethics director Anna Jaffe also told the inquiry any standards around labelling AI should go beyond watermarking content to explain how the technology had been used.

“You can mandate that a disclosure be made, that AI has been used in the course of a design but if you don’t explain what that means ... transparency of itself isn’t that meaningful,” she said.

The Senate inquiry is expected to issue its findings on the impact of AI in September.
