IBAC's podcast

Artificial intelligence in the public sector

IBAC

Dr Bronson Harry, a Senior Data and Strategic Intelligence Analyst, and Meg Gillespie from the Communications Unit, both at IBAC, discuss the risks and benefits of Artificial Intelligence (AI). 

Find out more about machine-learnt biases, fraud and cybercrime detection, and large-scale information management. 

Understand how schools, councils and police can benefit from Artificial Intelligence, contributing to new efficiencies in the public sector. 

Hear how Artificial Intelligence can support and enhance integrity, with good governance and guidelines. 

Meg:                 Hello and welcome to the latest IBAC podcast. 

IBAC is Victoria’s anti-corruption agency, and our role is to expose and prevent public sector corruption and police misconduct.

My name is Meg Gillespie, from IBAC’s communications team – today we’ll be exploring a hot topic, but from a different perspective.

Artificial intelligence, or AI – what exactly is it? How can it help the public sector and police? Can the use of AI lead to corruption or misconduct, and how can that be prevented while still using AI to deliver the best value to the Victorian public? 

To discuss, I’m here with IBAC’s senior data and strategic intelligence analyst, Doctor Bronson Harry. 

Bronson thank you for joining us today.

BH:                  Thanks for having me.

Meg:    It’s great to have you here. Now, you’re part of IBAC’s strategic intelligence team – firstly, could you tell us a little bit about what you and the team do?

BH:      Yeah, so I’m the data specialist for the team, so I analyse data and look at other data sources that would be useful to our team’s function, and as a team our job is to be the finger in the wind for IBAC. So we’re looking at trends across Victoria Police and the public sector to see what the emerging and persistent integrity risks are for the state. We analyse data, conduct research and consult with our stakeholders to understand corruption and other integrity risks, and then communicate those risks to the rest of the organisation, but also to our stakeholders so they can better understand their risks as well.

Meg:    Definitely, and you’ve recently done a bit of a deep dive into AI. There’s a lot of debate about artificial intelligence, and a lot of information out there, and it can be quite complicated. Can you explain to us exactly what artificial intelligence is, and is there more than one kind?

BH:      Yeah, so AI is a pretty broad term, and there’s no one clear definition, or at least technical definition, of what it is. It’s any kind of computational system that can perform human-level tasks: things like perception, decision making, planning. But one key part of AI technology, which has really dominated AI since about 2012, is machine learning. Traditional AI systems were built by getting a computer scientist to sit down with an expert in a topic or a task and try to decompose that task into a series of logical rules that could be implemented in a computer program. Since about 2012 that way of designing AI systems has been completely upended and overturned by machine learning approaches. Basically, under a machine learning approach you take really huge data sets and train a kind of generic architecture to perform the task, identifying statistical patterns in that data that the machine can exploit to make decisions, create recommendations and things like that.

            These machine learning systems come with a series of risks that are inherent to the technology and to how they’re sometimes used in organisations, and they’re often called classifiers, or discriminative AI, because that’s basically what they do. You give them data and they try to classify or identify different entities. The classic example is spam filtering. We get emails every day, a lot of them are legitimate and some of them are spam, and in the early 2000s Google’s Gmail was really advanced at giving people really good spam filtering, because they trained really big machine learning systems to identify the type of text that appears in a spam email. And we all participated in training those machine learning systems by flagging which emails we thought were spam, which Google could then feed into their machine learning algorithms. These systems have been rolled out in a lot of different areas – you know, Netflix is really famous for advancing recommendation technology, again based on the way people view and consume material.

            The other type of AI that has become really of interest in the last five years is generative AI. These are systems built on top of the traditional machine learning systems used to support decision making, but instead of just producing an output like spam or not spam, they can produce images or text, and these are what run the popular chatbots that OpenAI, Google and Anthropic are producing at the moment, which has created a lot of renewed interest in the use of AI in lots of different areas. So those are the two main areas – we’ve got discriminative AI and there’s also generative AI. 
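To make the spam-filtering example concrete, here is a minimal sketch of a discriminative classifier in Python. It uses the scikit-learn library, and the example emails and labels are invented for illustration; this is not the system Google uses, just the same basic idea of learning word patterns from labelled examples.

    # Minimal sketch of a discriminative AI (classifier): spam filtering.
    # The training emails and "spam"/"ham" labels are illustrative only.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    emails = [
        "Meeting moved to 3pm, agenda attached",
        "You have won a prize, click here to claim now",
        "Quarterly report draft for your review",
        "Urgent: verify your account or it will be suspended",
    ]
    labels = ["ham", "spam", "ham", "spam"]  # the flags users provide

    # Learn which word patterns are associated with spam versus legitimate mail
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(emails, labels)

    print(model.predict(["Click here to claim your prize"]))  # expected: ['spam']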

 

Meg:    So it sounds like AI has been a part of our lives a lot longer than many of us realised, in the email systems and Netflix and everything you were mentioning there, but it’s definitely become much more of a topic in the last few years, particularly across government and police. It does have a bit of a reputation – we know it’s been used in scams and fraud, and to manipulate images or plagiarise essays, those sorts of things – but it can also be used to solve crime or analyse data, like you’ve said. What kind of AI do we most commonly see used in the public service and police, and how can it benefit the work?

BH:      The promise of AI is that it can automate routine tasks. For example, machine learning algorithms are really good at pattern matching, and that’s been used by police in many jurisdictions to perform automated face recognition on hours and hours of surveillance data. So instead of tasking a police officer to sit down and trawl through hours of footage, an AI can scrub through the images and try to match a suspect’s face image to any of the people captured in the surveillance footage.

            Generative AI is also being used broadly to help the public service do their jobs better. A recent example is the New South Wales education department, which has rolled out a chatbot that acts as a teaching assistant. A teacher could type into the AI, ‘I’m teaching a grade two class on phonics – give me a lesson plan, create a rubric, make an assessment’, and it will go away, consult the department’s professional learning materials and the like, pull together all of the information the department has on hand and synthesise a lesson plan from that. So it has the potential to make a lot of routine tasks easier and faster, and to increase the capacity of the public service to do their work. Many departments are also playing with the idea of using these generative AI chatbots to improve the public’s understanding of policies and regulations. There’s a third-party platform called mylot that many councils are using at the moment, where councils can upload their policy documents and regulations to the service, and then the public can upload their development plans and get feedback from the chatbot about which regulations might be relevant to their planning proposal. These are examples of how AI can be used in automation.

            At the other end of the spectrum, the idea is that AI can also extend our capabilities in areas that involve big data, so the AI can trawl through large, complex data sets and find patterns that would otherwise be missed by people. For example, AI can be used in resource allocation – trying to predict where and when areas of the public service are going to require more resources. There’s an example of this in predictive policing: in the Netherlands, the Dutch police have looked at crime data and identified where and when bicycle thefts are going to peak throughout the week, so that police can appropriately resource and patrol those areas. 
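The Dutch systems use trained predictive models, but even a simple tabulation conveys the underlying idea of mining incident data for where and when to put resources. The sketch below, in Python with pandas, uses randomly generated incident records purely for illustration, not real crime data.

    # Illustrative sketch: finding when thefts peak so patrols can be resourced.
    # The incident records are randomly generated, not real data.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    incidents = pd.DataFrame({
        "day": rng.choice(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"], 1000),
        "hour": rng.integers(0, 24, 1000),
    })

    # Count incidents in each (day, hour) cell and list the busiest periods
    peaks = (incidents.groupby(["day", "hour"]).size()
             .sort_values(ascending=False)
             .head(5))
    print(peaks)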

 

Meg:    That’s such an interesting use of AI. You’ve done a lot of research into both what AI can offer the public service and police, and the risks that come with using it. What do you think are the main risks you’ve seen that the public sector and police should be aware of?

BH:      So we’ve divided the risks into three broad areas: the first is maladministration, the second is cybercrime and fraud, and the third is information management concerns.

            The first area, maladministration, is probably the largest – there’s a lot written about it. Unfortunately, with the Robodebt scandal, Australia has the reputation of providing a premier test case, one that exposed a lot of the issues with algorithmic decision making in the public service, and the response to Robodebt has had some important consequences for how people think about how data and machine intelligence are used in government. The main technical concern with machine learning systems is bias. AI is not an all-knowing oracle; it is essentially just trained on data, which is often historical data, so if that data contains discrimination or historical practices of bias then the machine learning system will simply propagate and perpetuate that bias in its recommendations. The classic example here is the AI that Amazon rolled out to scan job applications: when they rolled it out they found that the AI was extremely biased against female applicants who were applying for technical roles. And the problem with machine learning systems here is that they can learn to infer these characteristics even when they’ve been removed from the data sets. Even when Amazon tried to remove overt cues about an applicant’s gender, the system would still find cues, such as the school the applicant may have come from or the subjects they took at university, and use those as a proxy for gender, so in the end that system was just canned. 
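As a rough illustration of that proxy effect, the sketch below trains a simple model on synthetic ‘historical hiring’ data from which gender has been dropped, but where a correlated proxy feature remains. All the data, features and numbers are invented; this is not Amazon’s system, just the mechanism described above.

    # Illustrative sketch: a model re-learning bias through a proxy feature.
    # All data below is synthetic and invented purely for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    gender = rng.integers(0, 2, n)                   # 1 = female; excluded from training
    proxy = (gender == 1) & (rng.random(n) < 0.8)    # e.g. a club or school listed on the CV
    skill = rng.normal(size=n)

    # Historical hiring decisions that favoured male applicants (gender == 0)
    hired = (skill + 1.0 * (gender == 0) + rng.normal(scale=0.5, size=n)) > 0.5

    X = np.column_stack([skill, proxy.astype(float)])  # note: gender itself is not a feature
    model = LogisticRegression().fit(X, hired)

    # The weight on the proxy comes out strongly negative: the bias has been re-learned
    print("weight on proxy feature:", round(model.coef_[0][1], 2))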

            The second major technical issue with machine learning systems is the lack of transparency. They’re often very complicated systems – not always, but they can be – and they don’t use explicit logic. So it can be very hard for a public servant to understand how a system has reached a recommendation, which presents a problem: if they accept the AI system’s recommendation, they don’t necessarily have a clear basis for accepting it. That can create issues for fairness and those types of concerns as well. There will always be cases where an AI misclassifies data, and the lack of transparency also makes it difficult for the user to understand when the AI is correct in its recommendations or has made an error. One example of this type of issue is the COMPAS system used in the US justice system. COMPAS produces recommendations about sentencing, bail and parole, it’s used by judges to make those decisions, and there have been studies showing that it produces harsher penalties for black defendants compared to white defendants. The problem with that system is that it’s a proprietary third-party system, and the provider is not required to make its inner workings transparent, so when it’s been used by judges it’s not clear how it has arrived at a particular decision around bail or a sentence. Beyond the technical concerns with AI that can lead to maladministration is how AI is used within an organisation. It’s well known that when users engage with automated systems they can have excessive deference towards them, because they’re seen as an all-knowing oracle that couldn’t possibly make a mistake. When that attitude is distributed across the whole organisation, it can be used to stymie criticism of an AI system and to ignore legal advice about the risks associated with it, and it can also create a climate where the benefits of the AI – reducing costs within the organisation, for example – are pursued while ignoring concerns that the AI system may not work as expected.

            The second major area, cybercrime and fraud, is actually enabled by generative AI. Generative AI has lowered a lot of the technical barriers to committing cybercrime and fraud. It’s relatively trivial now to get one of the popular platforms to override its guardrails and produce realistic, very convincing phishing emails, or to write the code for malicious software, so generative AI will intensify those persistent risks we have surrounding cybersecurity. The other area where generative AI is assisting with fraud is the generation of images and content. It is also relatively trivial to take a photograph of a receipt or invoice, feed it into one of the popular platforms and get it to insert fraudulent items into that document, inflating the bill or inflating what’s being procured, or the like. The Department of Government Services in Victoria has done some research showing that it’s again relatively easy to fake an image of water damage, or fire damage, to a property, which could be used to fraudulently obtain money from a disaster relief grant, for example. So I think it’s important for people who are in charge of running programs to really look at how generative AI could be used to exploit the programs they’re custodians of, and to think about mitigation methods. Again, in that Department of Government Services report, they recommend relatively simple mitigation techniques, such as getting people to take multiple photographs of the damage, which could potentially reveal obvious cases where generative AI has been used to fake an image. 

            The third major risk area is to do with information management. Obviously, as these services become more and more popular, people are looking to use generative AI in their work more and more. Sometimes that’s sanctioned – sometimes a department will have a generative AI platform that staff can use – and sometimes it isn’t. The risk of using generative AI such as ChatGPT or Claude or one of those platforms is that it will lead to the release of government information to a third party that might be in another jurisdiction. So there’s an obvious concern about leaking potentially identifying or sensitive information to a third party. But with generative AI there is also that concern about accuracy. Generative AI fairly frequently hallucinates information – I think some studies have shown that the popular chatbots hallucinate information about a quarter of the time. They’ll fabricate information, but the main concern is that it’s very believable and hard to detect. These chatbots produce hallucinations with a lot of confidence, and they produce fake information that is superficially very convincing. A growing concern in the US is its use in the legal system: lawyers using ChatGPT to prepare legal arguments, and the AI fabricating case precedents that just don’t exist but superficially look like they’ve got all the features and properties you would imagine should be in a legal decision. So there is a real risk that with widespread use of generative AI the information integrity of government could be compromised, if users aren’t going over the generated information with a critical eye.

            The other issue concerns the rapid uptake of AI by government. In many jurisdictions AI has been put forward as a solution to budgetary concerns – the idea that AI can be used to address government debt by reducing the cost of government – and in some jurisdictions that has led to collaborations with tech giants, which have then resulted in allegations of compromised cybersecurity and potential conflicts of interest, because the tech providers are going to have access to a lot of data that could be sensitive: data about other businesses, market data, personally identifiable data. That creates a risk that a third party may be able to access sensitive government data, and given that so much of the AI industry is about data – your algorithms are only as good as the data you’re putting in – government data has become a lot more valuable to third parties as a result. There was a case a few years ago involving a company called Clearview, which advertised itself as having the biggest face database in the world for law enforcement to use. They were directly emailing investigators and intelligence analysts across many of the law enforcement organisations in Australia and encouraging them to upload images of suspects directly onto their platform as part of a trial period, prior to those organisations entering into any kind of legal agreement with Clearview about terms of use and the like. So there are organisations out there that see the value of data and are looking for ways to access government data, law enforcement data, health data, education data – these are all valuable sources of data for those organisations.

Meg:    That makes it all the more important that we implement AI systems with thorough governance, crossing our Ts and dotting our Is. So for any public service groups out there, or police or other agencies, who are looking to implement an AI system, what would your advice be in terms of first steps?

 

BH:      Fortunately, there has been a lot of work in this area in the past five years, and there is a lot of material that the federal government has been putting out. The key document to look at first is the National framework for the assurance of artificial intelligence in government, which is available on the digital.gov.au website. In Victoria, the Department of Government Services has the administrative guideline for the safe and responsible use of generative artificial intelligence in the Victorian public sector. And the Office of the Information Commissioner has a lot of guidance material about the safe use of generative AI in the public sector, as well as a good number of documents that explain what AI is and can help with some of that technical detail. Obviously, check whether your own agency has an AI policy. And my other little bit of advice would be that, regardless of whether the system being implemented seems to meet the criteria for AI or not, there should be some consideration or consultation about the risks of that computational system and how they should be addressed. 

 

Meg:     Thank you so much, Bronson, it’s been great to chat with you. AI is such an interesting topic and, as you’ve said, it’s so important to consider strong governance frameworks to ensure the foundation is there for AI tools to be used appropriately and to the best of their capability. 

            Thank you to everyone for listening.

If you’re interested in learning more about AI, corruption risks, the work of the strategic intelligence team and all of IBAC – you can subscribe to IBAC Insights, our online newsletter, by going to our website.

In this newsletter you’ll find the latest news, a heads up on new reports, expert commentary, early research findings and information on conferences and events.

Or you can find more information on investigations, special reports, data, research, corruption prevention resources and more by visiting ibac.vic.gov.au.