Artificial Intelligence in the Public Sector – Primer Series #2

Published on 28 February 2019

Welcoming your thoughts on our next AI primer

Last year, the OPSI team launched the “Blockchains Unchained” report, the first working paper in a series intended to give the public sector an overview of the essential knowledge about a specific emerging technology and to help stakeholders understand its challenges and opportunities. To inform public servants and policymakers, OPSI not only draws on academic discussions but also strives to enrich theory with real-world cases (read more on our case study platform) and insights collected through practice (read more about our ongoing projects).

This year, our ambition is to cover Artificial Intelligence (AI) and its impact on the public sector.

AI has been at the centre of attention for governments around the world, with many countries developing national strategies. On the global stage, AI was also a topic at several international events in the past few months: the G20, Davos, the World Government Summit, and the G7 Multistakeholder Conference on Artificial Intelligence, where Canada and France announced the creation of a joint International Panel on Artificial Intelligence (IPAI).

AI is also an issue gaining traction within the OECD: horizontal teams have integrated AI into the Going Digital initiative and are drafting OECD guidelines on AI, while our colleagues in the Digital Government team and the OECD Working Party of Senior Digital Government Officials (E-Leaders) are drafting a working paper on uses of emerging technologies (including AI) in government. Building on these important efforts, we see the need to further explain and explore AI, including sector-specific considerations, limitations, opportunities, and case studies. To build on the movement towards greater awareness of AI in the public sector, and to encourage governments to build capabilities for anticipatory innovation, we are drafting a primer organised around the following questions:

  1. What is AI?
    • What do we mean when we say AI?
    • Why talk about it now?
    • What role for the public sector in AI?
  2. What are the different technological approaches (e.g. Machine Learning, Natural Language Processing, Planning-Scheduling-Optimisation, Machine Vision and so on)?
  3. What are the emerging practices and associated cases?
  4. How can the public sector act now and prepare for later?

Like last year’s report, the AI primer is not intended to be an exhaustive, all-encompassing document; it is about providing clear and timely knowledge for a public sector audience.

To make sure that we address the most salient points, we invite policymakers, regulators, interested civil servants, and those in industry and civil society to comment on the initial structure. We would also value pointers to significant AI cases or projects that OPSI could cover and that would shed light on public sector practice. Any other input is very much welcome.

Does our proposed structure cover everything? Is there more that you would like to know? Let us know by adding a comment, emailing us at [email protected], or reaching out on Twitter @OPSIgov by March 22nd, 2019.

  1. Artificial Intelligence (AI) 101 should cover algorithms, including the pitfalls and concerns of automated, big-data decision-making processes – particularly those that impact humans.

    Important issues are the representativeness of the data on which processes operate, systematic bias and data missingness – all of which impact the outcome and results of algorithms underlying AI.

    At the Australian Human Rights and Technology Conference (July 2018, https://tech.humanrights.gov.au/conference), there was little discussion of well-known statistical methodologies, for example, the simultaneous optimisation of two process parameters.

    Speakers reported on algorithms for automating computing processes that operated on incomplete data, without considering the relationship between data representativeness and the real world. Cruel outcomes were accepted as business as usual.

    Considering the completeness of the data at hand, and how it differs from representative information about the subject area, can make a large difference to AI outcomes.

    See Tweets:
    https://twitter.com/DrRebeccaO/status/1022658960313135108
    https://twitter.com/DrRebeccaO/status/1021570612043571200

    • Dear Dr. Oyomopito, thank you for your comment! There is a minimum of technical knowledge about AI algorithms that stakeholders in the public sector definitely need to acquire, both to anticipate risks and protect citizens and to leverage the technology to benefit society. How to effectively disseminate that technical knowledge is a question we would like to answer based on all your feedback.

      The importance of data and how it influences AI systems’ outputs is another critical point we will make sure to address when discussing the different technological approaches in section 2.
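The missingness effect described in this exchange can be illustrated with a small, purely hypothetical simulation (standard-library Python only; the population figures and capture rates are invented for illustration). When records are missing not at random, even a simple summary statistic computed from the incomplete data drifts away from the true value, and any model trained on that data inherits the bias:

```python
import random

random.seed(0)

# Hypothetical population: an outcome such as income, roughly normal.
population = [random.gauss(50_000, 15_000) for _ in range(100_000)]
true_mean = sum(population) / len(population)

# Data missing not at random: records above 50,000 are captured only
# 40% of the time, so the dataset under-represents that group.
observed = [x for x in population
            if x < 50_000 or random.random() < 0.4]
observed_mean = sum(observed) / len(observed)

# The naive estimate from the incomplete data is systematically biased low.
bias = true_mean - observed_mean
print(f"true mean: {true_mean:,.0f}  "
      f"observed mean: {observed_mean:,.0f}  bias: {bias:,.0f}")
```

No amount of additional modelling on the observed records alone recovers the true mean; the gap comes from how the data were collected, not from the algorithm applied afterwards.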

  2. From our experience with data analytics and machine learning, getting clear and actionable insights from past data is not easy in many domains. With Scilab, the open-source alternative to Matlab developed since 1994 at the French research institute INRIA, we have found that bringing intelligence into products, processes and services requires scientists and engineers to have a clear vision of the expected outcomes. Many technologies are available, and the real challenge now is to define what good they can bring to our society. An exciting adventure for today’s makers and the generation to come!

  3. As we explore the use of AI in government programs and services, it is essential to ensure it is governed by clear values, ethics, and laws. The opportunity for innovation in this policy space is vast, from policy labs to directives to digital standards to procurement ethics.

    Within the Canadian context in which I work, AI in the Government of Canada is governed by policy: we understand and measure the impact of using AI by developing and sharing tools and approaches; we are transparent about how and when we are using AI, starting with a clear user need and public benefit; and we provide meaningful explanations about AI decision-making, while also offering opportunities to review results and challenge these decisions.

    In addition, our principles include being as open as we can by sharing source code, training data, and other relevant information, all while protecting personal information, system integration, and national security and defence. Lastly, we provide sufficient training so that government employees developing and using AI solutions have the responsible design, function, and implementation skills needed to make AI-based public services better.

    Within the Canadian context, the four projects the federal government is championing are our Strategic Plan for Information Management and Information Technology, which outlines the approach to our digital government; federal digital standards for creating better digital services for Canadians; a Directive on Automated Decision-Making; and our innovative ethical approach to procuring AI suppliers. Learn more about each here: https://www.canada.ca/en/government/system/digital-government/responsible-use-ai.html. The Canadian government has also funded the Pan-Canadian AI Strategy, which is being led by a non-profit, CIFAR: https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy. In partnership with CIFAR, we also have an AI Policy Lab run by the Brookfield Institute; this project aims to help emerging policy leaders across Canada respond to the opportunities and challenges that accompany the rapid development of artificial intelligence: https://brookfieldinstitute.ca/project/ai-futures-policy-labs-a-series-of-workshops-for-emerging-policymakers/.
