
Accountable AI

Public funding of AI is often just as opaque as its algorithms. Companies influence what tech gets built and deployed in the innovation economy and the policies that support them.

Published on Sep 13, 2021

AI permeates our lives and operates in pervasive ways. Yet public accountability for these systems and technologies remains a major challenge. Public funding of AI is often just as opaque as its algorithms. Public funding takes many forms, from grants to contributions to government contracts. Government contracts play an especially important role in public procurement, where long-term relationships are created and nurtured between government and the private sector. Companies influence what tech gets built and deployed in the innovation economy, and the policies that support them.

Public procurement can be seen as a powerful mechanism with the potential to uphold or challenge the status quo, yet the term rarely draws public attention or interest. A government contract, if available and accessible to the public, can instantly raise a myriad of questions: What is being built? Who is building it? For what agency or department? For how much money? And over how much time? What are the intellectual property (IP) requirements? Does the government own the IP? What trade agreements is the contract valid under?

Who is the AI innovation economy for?

One way to address accountability in the innovation economy is to have a say in shaping the public procurement of AI and in who is involved: the actors and the institutions. After all, examining power can be seen as a prerequisite to challenging power.1

In Canada, a big part of the open government mandate is the available and accessible proactive disclosure of contracts and other sources of funding (e.g., contributions, grants), as well as other public information, like pre-qualified lists of companies that the government can potentially hire to work on certain themes. One example is the list of pre-qualified AI suppliers that government agencies and departments can hire. These companies can also choose to commit to the Algorithmic Impact Assessment (AIA),2 a tool built by the federal government’s Treasury Board of Canada Secretariat as part of its responsible use of AI in government initiative,3 used to determine how acceptable AI solutions are from an ethical and humane perspective. The AIA is made up of 60 questions about the business process, data, and design decisions behind an AI system, meant to assess whether those who build and deploy such systems do so ethically. The AIA is still in beta form. So far, only half of the companies on the Government of Canada’s pre-qualified AI suppliers list have committed to it, including Palantir Technologies Inc., a company linked to human rights abuses.4 Despite Palantir’s controversial building and deployment of its software, the company’s self-governance framework is sufficient to pass the requirements set by the AIA. This is a dangerous loophole, as what gets procured in one country can profoundly affect other countries.
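
To make the mechanism concrete, here is a minimal sketch of how a questionnaire-based assessment like the AIA can map scored answers to an impact rating. The question names, point values, and thresholds below are invented for illustration; the actual questions and scoring logic are defined by the tool itself.

```python
# Illustrative sketch only: question IDs, points, and thresholds are
# invented; the real AIA defines its own questions and scoring.
from dataclasses import dataclass

@dataclass
class Answer:
    question_id: str
    risk_points: int  # points this answer contributes to the raw score

def impact_level(answers: list[Answer], max_points: int) -> int:
    """Map a raw impact score to a hypothetical four-band impact level."""
    ratio = sum(a.risk_points for a in answers) / max_points
    if ratio < 0.25:
        return 1  # little to no impact
    if ratio < 0.50:
        return 2  # moderate impact
    if ratio < 0.75:
        return 3  # high impact
    return 4      # very high impact

answers = [Answer("data_source", 3), Answer("automation_scope", 4)]
print(impact_level(answers, max_points=20))  # 7/20 of max risk -> level 2
```

A scoring sketch like this also makes the loophole visible: if self-reported answers are never independently verified, a low score reflects the supplier’s own account of its practices, not its actual conduct.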

The AIA needs two major upgrades. First, the AIA needs higher standards for the internal ethics frameworks of AI companies, as well as a greater commitment to independently investigating companies’ roles globally, to prevent harmful companies from committing to it. Second, the AIA needs to become legally binding. For greater transparency, if a company listed as a pre-qualified AI supplier has committed to the AIA, the government contract should mention the AIA commitment as a category.

Other countries have created AI transparency initiatives in government. In September 2020, Amsterdam and Helsinki launched the first algorithmic registers, which show which AI systems and algorithms the cities use.5 6 7 To improve the transparency of AI systems, more governments can follow suit and create and publish their own algorithmic registries. Beyond publicly available data, people should have a say in deciding what types of AI systems can be used by governments: systems that empower rather than harm populations.
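
As a rough illustration of what one register entry can capture, here is a sketch of a possible record. The field names and sample values are assumptions loosely modeled on the kinds of information Amsterdam and Helsinki publish, not either city’s actual schema.

```python
# Hypothetical shape of a single algorithmic-register entry; all field
# names and sample values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    system_name: str            # what the system is called publicly
    department: str             # the city unit operating it
    purpose: str                # the decision or task it supports
    data_used: list[str]        # categories of input data
    human_oversight: str        # who reviews or can override outputs
    risks_and_mitigations: str  # known risks and how they are handled
    contact: str                # where the public can ask questions

entry = RegisterEntry(
    system_name="Automated parking control",
    department="City parking enforcement",
    purpose="Flag parked cars without a valid permit for human review",
    data_used=["scan-car camera images", "licence plate numbers"],
    human_oversight="An inspector reviews every flagged case",
    risks_and_mitigations="Misread plates; mitigated by manual review",
    contact="register@example.city",  # placeholder address
)
```

Publishing entries in a consistent, machine-readable shape like this is part of what makes registries useful: it lets residents and researchers compare systems across cities and query them, rather than reading each register by hand.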

Confronting the “captures”

Corporate capture and state capture are two long-standing challenges that will take time and concerted effort to address. Both operate at different levels in different countries; nonetheless, they are intertwined in the global arena. In the meantime, regulations are insignificant when they are not upheld, while private-sector influence is difficult to question when governments are forced to hire external consultants. We see this unfold in the innovation space, where companies hold a lot of power to imagine and create our future with technology. Government contracts in tech and AI reveal that governments fund companies to do AI work at a rate disproportionate to what they fund the public sector. Addressing this power imbalance means rethinking the innovation economy to include public sector funding that goes to public organizations that support the public good.

We need accountability in the way AI gets procured nationally and internationally, and AI accountability must include an anti-corruption lens. For example, global standards can promote transparency by highlighting how AI is funded. Governments can also work together beyond co-creating high-level declarations and principles. For example, the multistakeholder Global Partnership on Artificial Intelligence (GPAI), which aims to bridge the gap between theory and practice on “AI-related priorities,” can prioritize accountability in AI beyond the Responsible AI working group’s measurement of AI systems’ responsibility and trustworthiness.8 The majority of the civil society expertise in GPAI’s working groups comes from representatives of academia and industry. GPAI would benefit from the expertise of civil society organizations that work on internet governance, digital rights, social justice, and research advocacy, as well as fiscal and financial transparency organizations. We can no longer operate in bubbles. The way AI systems get built and deployed often has no physical boundaries. Accountability in AI is a global responsibility.

Works Cited

Amnesty International USA. ‘USA: Failing to Do Right: The Urgent Need for Palantir to Respect Human Rights’, 28 September 2020. https://www.amnesty.org/en/documents/document/?indexNumber=amr51%2f3124%2f2020&language=en.

City of Amsterdam. ‘What Is the Algorithm Register?’ Algorithm Register Beta, n.d. https://algoritmeregister.amsterdam.nl/en/ai-register/.

City of Helsinki. ‘AI Register’. Accessed 20 May 2021. https://ai.hel.fi/en/ai-register/.

D’Ignazio, Catherine, and Lauren F. Klein. Data Feminism. MIT Press, 2020.

‘Global Partnership on Artificial Intelligence - GPAI’. Accessed 20 May 2021. https://gpai.ai/.

Government of Canada. ‘Algorithmic Impact Assessment’, 2019. https://canada-ca.github.io/aia-eia-js/.

Government of Canada, and Treasury Board of Canada Secretariat. ‘Responsible Use of Artificial Intelligence (AI) - Canada.Ca’. Accessed 20 May 2021. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai.html#toc1.

Johnson, Khari. ‘Amsterdam and Helsinki Launch Algorithm Registries to Bring Transparency to Public Deployments of AI | VentureBeat’, 28 September 2020. https://venturebeat.com/2020/09/28/amsterdam-and-helsinki-launch-algorithm-registries-to-bring-transparency-to-public-deployments-of-ai/.
