Trust in artificial intelligence (AI) is in short supply, according to a University of Queensland study that found 72 per cent of people don’t trust the technology, with Australians among the most distrustful of the nations surveyed.
Trust experts from ºÚÁϳԹÏÍø Business School, led by Professor Nicole Gillespie, conducted the study in partnership with KPMG, surveying more than 6,000 people in Australia, the US, Canada, Germany and the UK to gauge attitudes towards AI.
Professor Gillespie said trust in AI was low across the five countries, with Australians particularly concerned about its effect on employment.
“Australians are especially mistrusting of AI when it comes to its impact on jobs, with 61 per cent believing AI will eliminate more jobs than it creates, versus 47 per cent overall,” Professor Gillespie said.
The research identified critical areas needed to build trust and acceptance of AI, including strengthening current regulations and laws, increasing understanding of AI, and embedding the principles of trustworthy AI in practice.
The survey also revealed that people believe most organisations use AI for financial reasons – to cut labour costs rather than to benefit society.
It found that while people are comfortable with AI for task automation, only one in five believe it will create more jobs than it eliminates.
One positive finding was that people have greater confidence in universities and research institutions than in other organisations to develop, use and govern AI in the public’s best interests.
Professor Gillespie said the research showed that distrust came from low awareness and understanding of when and how AI technology was used across all five countries.
“For example, our study found while 76 per cent of people report using social media, 59 per cent were unaware that social media uses AI,” she said.
Professor Gillespie said despite the gap in understanding, 95 per cent of those surveyed across all countries expected organisations to uphold ethical principles of AI.
“For people to embrace AI more openly, organisations must build trust with ethical AI practices, including increased data privacy, human oversight, transparency, fairness and accountability,” she said.
“Putting in place mechanisms that reassure the community that AI is being developed and used responsibly, such as AI ethical review boards, and openly discussing how AI technologies impact the community, is vital in building trust.”
Professor Nicole Gillespie is the KPMG Chair of Organisational Trust and is currently integrating the study’s findings on building trustworthy AI into a new ºÚÁϳԹÏÍø program. The full research report is available.
Media: Professor Nicole Gillespie, n.gillespie@business.uq.edu.au, +61 (0) 435 002 715; Emma Pryor, Business School Communications Manager, e.pryor@business.uq.edu.au, +61 (0) 421 772 888.