Artificial Intelligence and Human Government

Artificial Intelligence in Government. Photo of MyGov screen

The rising role of data in government decision making promises efficiencies for governments but entails risks to individuals and society. The responsible policy response is not to stop making data-driven decisions. Rather, with an appropriate approach, the benefits of using data can be gained while safeguarding the rights and safety of individuals and promoting the social ideals we desire.

Drawing on the discussions with government agencies, academics and industry that I had during my Churchill Fellowship travels to the United Kingdom in 2019,1 the overarching perspective presented here is that data-driven decisions are the product of numerous human-made decisions. These intermediate decisions can be improved by appropriately training and incentivising the humans who make them, their colleagues and the rest of the public service infrastructure. Doing so will lead to better data and, ultimately, to better decisions.

It is important to begin with a definition and some historical context. In this paper, "data" refers to facts about individuals, regardless of whether they are readily identifiable, de-identified or aggregated. Using data such as this to make decisions is not a new process in government. In recent years, however, both the amount of data available and its granular detail have increased, as have the complexity of the computational systems that extract meaning from these data and the embedding of these systems in decision-making processes. It is because of this changed relationship between people, the data they generate and the government that uses these data that new systems in government are needed.

Government employees training in digital systems. Photo by Arlington Research / Unsplash.

The use cases for data appear at every level of government's interface with the public: from choosing general policy directions, to apportioning funding to different services or regions, to making decisions about an individual person's treatment.

A prime driver of this interest in data is one of the key motivations for government action: efficiency.

Using data to make decisions promises savings—both by increasing the accuracy of the decisions and by reducing the amount of human involvement in the decision-making process. And as governments have a responsibility to use public resources without waste, this is a worthwhile path for governments to pursue.

Data, however, does not make decisions on its own. Human decisions contextualise the data. These decisions come both before and after data is collected. And these human decisions have profound impacts on the data and on the decisions that come from them. Before the data are collected, humans make decisions about the questions to be investigated, the mode through which the data will be collected, the amount of data that is required and the computational steps taken to make the collection approximate the population. After the data are collected, the human decisions continue through the design and interpretation of algorithms. Much is made of the mystique surrounding this step. Its impenetrability to those unversed in contemporary computer science techniques and jargon is sometimes seen as a testament to its utility. Added to this, users in government now have at their disposal a wide range of algorithmic tools to choose from that vary in the utility they provide and the awe they inspire.

So it is possible to use data as part of the decision-making process in government. And, in order to improve efficiency, governments are doing so in fields as diverse as planning approval2 and child protection.3 But while this raises the possibility of increased efficiency, it brings with it some significant risks. These risks are myriad but include the risk of introducing systematic biases against groups of people (either because of inadequacies in data collection or the development of algorithms), reproducing or exacerbating existing inequalities in society, risking the privacy of members of the public, de-skilling a workforce that has a nuanced understanding of its field, and optimising a system for one outcome at the expense of other useful features.

Data-driven systems in government that have already faltered due to these risks include a system designed to guide judges in parole decisions,4 and a system intended to help reduce homelessness.5 A failure to protect against these risks is also a risk to governments themselves, which may soon follow large companies in being sued on the grounds that their use of data contravenes existing privacy and equal opportunity laws.6

Hence, policy makers in government face a question: How can we use data to make decisions while managing these risks?

It was with this question in mind that I embarked on my Churchill Fellowship in 2019. Around the world, governments have sought the benefits of using data to help make decisions, but there is an emerging acknowledgement of the risks. This can be seen in the large number of consultation papers and guidelines that government agencies have developed.7

Artificial intelligence fingerprint

Lessons from other jurisdictions and options for transferability

The most prominent of these international regulations is the General Data Protection Regulation (GDPR) in the European Union,8 which binds member states to certain standards of data use. The central focus of the GDPR is the relationship between individuals and those who collect and process data about them. The main mechanism the regulation favours is the empowerment of individuals to choose when data about them is collected, how it is used and who has access to it. The GDPR provides mechanisms for individuals to be given a copy of data about them and to request that these data be deleted. Importantly, the regulation highlights the organisational structures that are required to process personal data responsibly by mandating that organisations appoint a Data Protection Officer.

Outside the European Union, the California Consumer Privacy Act has a similar focus on empowering individuals in their relationship to their data. Other jurisdictions have taken different or complementary approaches. In the United Kingdom, the work of several parliamentary committees has resulted in the creation of the Centre for Data Ethics and Innovation,9 which will advise the government and its regulators on the steps that should be taken to use data successfully and responsibly. The policy landscape for data in the United Kingdom is also shaped by the actions of long-established think tanks and research organisations with expertise in this area, including the Open Data Institute, the Ada Lovelace Institute, the Alan Turing Institute, the Nuffield Foundation and Doteveryone.

Photo by Getty Images / maxkabakov & iStock.

Principal options for Australian policymakers

In Australia, both the benefits and the risks of using data to help make decisions are well acknowledged at the federal level. Notably, public consultations have been conducted and policy papers written by the Australian Human Rights Commission,10 the Office of the Australian Information Commissioner11 and the Department of Industry, Science, Energy and Resources.12 Each of these policy papers has suggested ways forward for governments to use data to help make decisions while mitigating the risks. These proposals include legislative reform, new government agencies, public declarations regarding the aims of government data use, increased education of government policy-makers and mechanisms for members of the public to interrogate the data and algorithms used to make decisions about them.

The principal options recommended here confront a fundamental truth about using data to make decisions: while decisions can be driven by data, numerous human-made decisions both precede and follow the collection of those data. It is these decisions, and the humans who make them, that are the focus of the following policy recommendations.

Policy recommendations

In the short term, focusing these policy recommendations on the human role in data-driven decisions will lead to data-driven decisions that are more in keeping with existing equal opportunity and human rights legislation. In the longer term, it is hoped that this will open the possibility of legislative reform.

Training in data ethics for government technical and procurement staff

Governments routinely mandate continuous workplace training for their employees. For reasons of convenience and cost, this training is often administered online. Examples of these courses include anti-harassment training13 and manual handling training.14 These courses are not intended to make those who take them experts in these fields. Rather, they change the behaviour of staff by setting out a clear baseline of acceptable standards and embedding those standards in the mundane routines of work rather than leaving them as aspirational goals.

Governments should take a similar approach to data ethics training for quantitative professionals and their managers, as well as for the procurement professionals who purchase databases and analytics services from industry. This training could be based on existing resources such as the Open Data Institute's Data Ethics Canvas15 and the Government of the United Kingdom's AI procurement guidelines.16

Data training for non-technical staff at the executive level

The complement to the above proposal is the establishment of data literacy training for non-technical government staff at the executive level. The goal of this training is not to make all executive-level staff into data scientists. Rather, it is to create a climate of informed confidence in which these executives know they can engage in constructive dialogue with their technical colleagues about technical decisions. It will show these executives that their training in law, accountancy and domain-specific skills gives them a basis for helping to improve the decisions made by quantitative experts. Short executive courses in this topic are already offered by Australian universities.17

Funding for new positions with technical expertise in statutory authorities that investigate and report on the function of government

Governments have a long tradition of holding themselves to account by creating statutory bodies that are free to report to parliament without ministerial approval. Bodies such as the Equal Opportunity and Human Rights Commissioners and the Ombudsmen fulfil this fundamental role in society on an ongoing basis, as do Royal Commissions when they are called.

As more government decisions are made using data, these bodies should be fortified with staff who can properly investigate the compliance of these decisions with existing legislation and suggest structural changes where appropriate. Statisticians have already been involved in recent Royal Commissions in Australia.18 This should become a regular feature of the funding for Royal Commissions at the state and federal level. And the funding of statistics positions within permanent statutory bodies in Australia would provide a clear pathway for the long-term promotion of the responsible and ethical use of data by governments.

Woman scanning fingerprint at airport

Stakeholder consultation

In order to implement policy that will place human decision making at the centre of conversations about the use of data in government, stakeholder consultation should begin within government itself.

A key group to engage is the quantitative specialists, statisticians and computer scientists in government who are involved in decisions about data and algorithms. To engage these specialists effectively, it is also necessary to engage their managers.

The final group within government who should be engaged for consultation are the statutory bodies who have a legislated position of oversight within government. These include the Equal Opportunity and Human Rights Commissioners and the Ombudsmen. The powers of these offices lie in their history of holding the government to account and proposing structural reforms to remove the biased, unfair and unequal treatment of people. To this end, I have already consulted with the Equal Opportunity Commission of South Australia.

Photo by Getty Images / andresr & iStock


I am grateful for the opportunity provided to me by The Winston Churchill Memorial Trust and The University of Queensland to produce this paper. In particular, I appreciate the editorial contributions of Dr Kirsty Guster and Dr Jennifer Yarnold. I am also grateful for the comments and suggestions of Commissioner Ed Santow and Ellen Broad. All errors and omissions are mine alone.

Owen Churches, CF 2018 (SA)

Owen Churches is a senior statistician in the South Australian Government, where he works for the South Australian Health and Medical Research Institute. His work spans both the mathematical and political aspects of using government data to help make decisions. Read more about Owen Churches and his Churchill Fellowship.

References and endnotes

1. Churches, O. Churchill Fellowship to create fairness and accountability in the use of government decision making algorithms. Churchill Fellowship Report: The Winston Churchill Trust, 2018.
2. Government of South Australia, "PlanSA", Accessed 25 September 2020,
3. Government of South Australia, "Current Projects", Accessed 25 September 2020,
4. Angwin, J, Larson, J, Mattu, S, and Kirchner, L, "Machine Bias", ProPublica, 23 May 2016,
5. Eubanks, V. "High Tech Homelessness," American Scientist 106, 4 (2018), 230.
6. Germano, S, "German Court Rules Against Facebook on Data Protection", Wall Street Journal, 24 January 2020.
7. Jobin, A, Ienca M, and Vayena, E. "The global landscape of AI ethics guidelines", Nature Machine Intelligence, 1 (2019), 389–399. DOI: 10.1038/s42256-019-0088-2
8. European Union, "General Data Protection Regulation", Accessed 25 September 2020.
9. Centre for Data Ethics and Innovation, "Latest from the Centre for Data Ethics and Innovation", Department for Digital, Culture, Media & Sport. Accessed 25 September 2020,
10. Davis, N, Farthing, S, Santow, E, and Webber Corr, L. Artificial Intelligence: governance and leadership. White paper 2019. Sydney: Australian Human Rights Commission and World Economic Forum, 2019.
11. Office of the Australian Information Commissioner, Artificial Intelligence: Governance and Leadership white paper – Submission to the Australian Human Rights Commission. 19 June 2019.
12. Department of Industry, Science, Energy and Resources, "Artificial intelligence", Accessed 25 September 2020,
13. Diversity Australia, "Workplace Bullying and Sexual Harassment Assessment Online", Accessed 25 September 2020,
14. Safe Work Australia, "Lifting, pushing and pulling (manual handling)", Accessed 25 September 2020,
15. Open Data Institute, "Introduction to Data Ethics and the Data Ethics Canvas", Accessed 25 September 2020,
16. Office for Artificial Intelligence, "Guidelines for AI procurement", 8 June 2020,
17. University of Adelaide. "Course Outlines: Data Literacy." Accessed 25 September 2020,
18. Attorney General's Department. Final Report: Royal Commission into Institutional Responses to Child Sexual Abuse, Commonwealth of Australia, 2017.