My team has just published findings from a new research study, conducted in mid-2025, investigating how a group of Australians are using generative AI, what they think of it and where they think it is heading.
The study’s findings have been published as an open-access preprint, which can be downloaded here.
As part of our research funded by the Australian Research Council Centre of Excellence for Automated Decision-Making and Society, postdoctoral research fellow Bronwyn Bailey-Charteris and I took a qualitative approach, conducting five focus groups with a total of 32 Australian adults from around the country, of different ages, educational levels and ethnicities. This allowed us to gain in-depth insights into their understandings, practices and imaginaries relating to generative AI applications.
Our findings show how Australians have incorporated generative AI applications into their everyday lives. Nearly all participants, regardless of their age, gender, ethnicity or geographical location, had experimented with generative AI. Notably, these Australians saw these applications as little more than practical, mundane software that was now pervasive and therefore unavoidable. Generative AI was described as helping to achieve greater efficiency, time savings and productivity in accomplishing routine tasks at home and work. As one of our participants described it:
If I’m writing an email or sending an SMS to somebody, I can literally just press a button. It just gathers all the information from the screen on their file. I just skim over it and go, yeah, that sounds good. And then press send.
People gave examples of using it to provide advice and information on travel planning, home renovations, garden planting ideas and health topics, or to write social media posts.
I look at it like it’s an advisor that you can turn to. You can ask it absolutely anything and it will give you the information.
Our findings are in line with a recent report by the US National Bureau of Economic Research, produced in collaboration with OpenAI and drawing on its data on ChatGPT use across the world. These researchers estimated that by July 2025, close to 10% of the world’s adult population had used ChatGPT. It was used principally for practical purposes: getting tasks done, seeking information and help with writing (particularly work documents), rather than for self-expression, creative or playful activities.
While many participants found generative AI applications useful, they also expressed a number of concerns and anxieties about these technologies. The drawback participants raised most often was the frequency of errors in the content returned from their queries. Participants emphasised the need to use generative AI with caution and to constantly double-check its output, given the risk of incorrect or missing information.
It’s not always 100% accurate. You have to read carefully the content that is generated by AI. And especially if you know the subject that is generated, you feel sometimes there’s something wrong here.
Even more concerning than continual errors in generative AI content was the idea that the software could be used by others to deliberately mislead people or spread malicious disinformation.
Well, I mean, there’s so much misinformation out there. You can use AI to create images that are just completely false. So there should be some sort of checks and balances, since, you know, you can’t really trust a lot of the news.
However, there was little suggestion from participants that AI itself would become a powerful, super-intelligent agent capable of controlling or replacing humans. Instead, and perhaps because their everyday experiences had amply demonstrated the failings and lack of ‘intelligence’ of contemporary AI, participants’ concerns were more grounded: some realised that over-use of these applications could make them lazy or limit their capacity to learn by doing:
If I can get ChatGPT to write me an email or to write this paragraph in an assignment that I have to do, or write me a recipe – like, I can see that it would be very, very easy to start using it for absolutely everything. And I’m scared of losing who I am as a person.
Participants also expressed feelings of powerlessness over what they could do to avoid using generative AI in the face of the determination by Big Tech – and in some cases, employers and educational institutions – to promote its use. More profound negative impacts were mostly framed as abstract or potential problems in a future world where generative AI development by Big Tech was allowed to progress without government regulation.