
Artificial Intelligence Is Here Series: Talking about Bias, Fairness and Transparency (DDN2-V23)

Description

This event recording explores how government entities can use artificial intelligence to make decisions that are fair, transparent and accountable, while fostering the creation of technologies that help regulate its use.

Duration: 01:16:48
Published: July 12, 2022
Type: Video

Event: Artificial Intelligence Is Here Series: Talking about Bias, Fairness and Transparency



Transcript

Transcript: Artificial Intelligence Is Here Series: Talking about Bias, Fairness and Transparency

[The CSPS logo appears on screen.]

[Alex Keys appears in a video chat panel.]

Alex Keys, Canada School of Public Service: Hello, everyone. Welcome to the Canada School of Public Service. My name is Alex Keys and I'm a director in the transferable skills team here at the school. I'll be introducing today's event. I'm very pleased to be here with you today and want to welcome all of you who chose to connect. Before proceeding further, I'd like to acknowledge that since I'm broadcasting from Ottawa, I'm on the traditional unceded territory of the Anishinaabe people. While participating in this virtual event, let us recognize that we all work in different places and that we therefore each work in a different traditional Indigenous territory. I'd invite you to please take a moment of pause to reflect on this and acknowledge it.

Thank you. Today's event is the fifth instalment of our Artificial Intelligence Is Here series, which the school offers in partnership with the Schwartz Reisman Institute for Technology and Society, a research and solutions hub based at the University of Toronto. It's dedicated to ensuring that technologies like AI are safe, responsible, and harnessed for good. After having covered themes such as when and how AI can be used, citizen consent, and AI's impact on the economy, today we'll turn our attention to the topic of AI and bias, fairness, and transparency. The format of today's event will be as follows. First, we'll watch a 30-minute lecture delivered by Professor Gillian Hadfield, the Schwartz Reisman Institute's director and chair in Technology and Society, as well as professor of law and strategic management at the University of Toronto. After the lecture, we'll move into a conversation between our two guest speakers.

The first is Ron Bodkin, the vice president of AI engineering and chief information officer of the Vector Institute, an independent not-for-profit organization dedicated to research in the field of AI. He's also the engineering lead at the Schwartz Reisman Institute. The second is Ishtiaque Ahmed, an assistant professor at the University of Toronto's Department of Computer Science and faculty fellow at the Schwartz Reisman Institute. Ron and Ishtiaque will engage in a conversation about the topics and themes addressed in the lecture, followed by questions from the audience.

Before we begin the lecture presentation, here are a few housekeeping items to help us have the best possible experience. To optimize your viewing, we recommend you disconnect from your VPN or use a personal device to watch the session, if possible. If you're experiencing technical issues, we recommend that you relaunch the webcast link that was sent by email. Simultaneous translation is available for participants joining via the webcast. You can choose the official language of your choice using the video interface, or you can follow the instructions provided in the reminder email, which includes a conference number that will allow you to listen to the event in the language of your choice. Audience members are invited to submit questions throughout the event using the Collaborate video interface on which you're viewing the event. To do so, please go to the top right-hand corner of your screen, click the raise hand button, and enter your question. The inbox will be monitored throughout the event. Now, without further delay, let's begin the lecture.

[A title screen fades in, reading "artificial intelligence is here series."]

[Words on screen read "What's all this talk about bias, fairness, and transparency?"]

[The words fade to Gillian Hadfield, Professor of Law and Economics, University of Toronto, Director and Chair of the Schwartz Reisman Institute. She stands in front of a blue background, with representative images and graphics appearing at her left.]

Gillian Hadfield: Since around 2016, governments have started paying more attention to the opportunities and challenges of artificial intelligence.

[A graphic reads "What are the challenges of artificial intelligence?"]

 You'll remember that 2016 was the year of the US presidential election, in which we learned that deep learning systems had been deployed using seemingly innocuous psychology quizzes on Facebook to power highly targeted political advertising.

[A headline reads "Data of 50M Facebook users obtained by Trump-linked firm." It fades to another headline reading "How Google's AlphaGo beat a Go world champion."]

 And 2016 was when Google's DeepMind announced that its machine learning system AlphaGo had beaten the world champion in Go. This is also when we started to hear about ways in which AI could go wrong, and the concepts of algorithmic bias, fair machine learning and AI transparency entered the public debate. In this video, I'll talk about these concepts and get you up to speed on what everyone is talking about.

[00:04:31 An image shows silhouettes of heads with question marks inside and one with a lightbulb lighting up. Two arrow signs with lightbulbs fade in over the image.]

Before we start though, I want to put up two signposts in this landscape of public debate. First, although the problems of bias, fairness, and transparency are really important ones that deserve close attention in policy responses, they are not the only policy questions we need to address. An analogy to how we regulate medical devices might be helpful.

[A new graphic shows a smart watch flanked by a green man with a check mark above his head and a red woman with an x above hers.]

Suppose we have a new medical device, and we discover it works much better for men than women. Knowing about that bias and protecting against it is of course a critical goal for governments that use, pay for, or license medical devices. But we don't only care about whether these devices treat men and women or members of different racialized groups equally. We also care about whether they work well, period. Are they reliable? Safe? Do the benefits outweigh the costs?

[The man and woman fade away. Green symbols showing one person asking another person a question, an outstretched hand with a heart floating above it, a heart with a heartbeat signal inside it and a set of scales surround the smart watch. It fades to an image of people poring over a flow chart. Text reads "We want AI to be unbiased and fair, but we also want it to be safe and reliable."]

The same is true of artificial intelligence. We definitely want it to be unbiased and fair, but we also want it to work well, to be safe and reliable. Sometimes the heavy focus on treating different groups, especially vulnerable groups, equally can lead people to think that this is the only thing we need to worry about, the only thing we need to certify, or regulate, or require in procurement rules.

[Text fades in reading "Fairness is just part of the story."]

But fairness is just part of the story. We're going to focus on this part of the landscape in this video, but there's a lot of other terrain out there that needs to be explored as well.

[A new image fades in of a metal arrow resting on paper with lines bursting out from all directions.]

Here's a second signpost for the discussion. The public debate about fairness, bias, and transparency in AI often goes under the banner of AI ethics.

[Text fades in reading "What does AI ethics mean?"]

Now ethics is an important framework, but to me, ethics is not the core of what we're challenged with in AI. The problem with AI that discriminates against racialized groups is not that it's unethical, which it is, it's that it violates the rules we've put in place everywhere else in our societies.

[Text fades in reading "New risks are not ethical failures or societal failures, but mostly system failures.]
           
The new risks we're facing are not primarily due to ethical failures of individual engineers or societal failures to value the right things. They are mostly system failures, the absence of the regulatory tools and technologies we need to make sure we get the things we value from this complex, fast-moving technology. So, when I hear that people want to talk about AI ethics, I translate that to, oh, what you really want to talk about is AI regulation. And I suggest you do the same, especially if you are in the policy-making business.

[Text fades in reading "AI ethics = AI regulation." A new graphic shows a set of scales and reads "What do we mean by fairness?"]

So, let's talk about fairness. Don't we all love to talk about fairness? If you're a parent, or you remember being a kid, you probably know that one of the first things kids learn to say is, "That's not fair." Here's a diagram that the eight-year-old daughter of a colleague scribbled for him while we were on a Zoom call.

[A handwritten note in crude writing reads "It's SO unfair! Will gets a ice cream thingy! And I get nothing! Nothing." A zero is drawn and labelled at the bottom of the note.]

Unfair that her brother got ice cream on the way home from school, and she didn't. Bigger cookies, extra screen time, fewer chores. From early on, we're alert to the ways in which goodies and work are being doled out.

[A new graphic shows pebbles balanced on a plank. On one end, only one pebble sits on the plank. On the other, four pebbles are stacked. The pebble holding the plank up is much closer to the stacked pebbles, making both sides sit balanced.]

And from my perspective as a social scientist who studies how humans manage to become the most cooperative species on the planet, engaging in complex systems of dividing up work and sharing the surplus, fairness is the essential glue of society.

[Text fades in reading "Fairness is the essential glue of society." A photo shows John Rawls and an expert from his writing, "Justice as Fairness" (1985).]

One of my intellectual heroes, the philosopher John Rawls, said that fairness was the essence of justice, and indeed the essence of reciprocity.

[New text reads "Nobody wants to participate in societies without rules and norms that ensure fairness."]

Nobody wants to participate in societies without rules and norms that make sure that the exchanges that underpin complex societies are fair. People get paid a fair wage. They get fair access to education and healthcare. They are treated fairly by government officials.

[A new graphic with question marks in speech bubbles fades in. It reads "What do we mean by fair?"]

But what do we mean by fair? Ah, there's the rub. What is a fair wage? Whatever the market will bear? $15 an hour? Whatever it takes to enjoy the same level of economic wellbeing as the average person? You might recognize here questions that our politics engages with every day. And in this sense, ensuring that AI is fair is no different from ensuring that everything else about how our societies and economies operate is fair.

[Headlines fade by detailing projects on AI fairness, reducing bias in AI and discriminatory AI.]

When AI fairness shows up in the headlines these days, it's referring to a particular dimension of fairness. By fairness, current discussions mean AI that does not discriminate against identifiable groups, especially groups that are protected against discrimination by law. You've probably seen some of these headlines. Early ones include this one from 2010 talking about how automated facial recognition software on cameras was biased, doing a poorer job of recognizing blinking on Asian faces than on Caucasian ones. And this one from 2015 capturing a tweet in which a user complained that Google's automated photo tagging had labelled his Black friend as a gorilla.

One of the first statistical studies to bring this issue to the forefront was a report released in 2016 by ProPublica arguing that software used in a number of US criminal court systems to compute risk assessments for bail, probation, and sentencing decisions was biased against Black people. In 2018, MIT researchers documented that facial recognition systems being sold by IBM and Microsoft had significant racial and gender bias, accurately identifying the gender of white men with an error rate of just 1%, but getting it wrong as much as 35% of the time when shown faces of darker-skinned women. And in 2018, Amazon made the news for building and then abandoning an AI system to sort through job applications, which discriminated against female applicants, or indeed anyone who had words associated with women on their resume. These examples quickly gained the status of lore in the AI community. It's hard to sit through any introductory lecture about AI fairness like this one without hearing about them.

[A new graphic shows the abstract of a paper called "Fairness Through Awareness."]

Away from the headlines, machine learning researchers have been studying the problem of AI bias since at least 2011.

[An image shows a young Black boy in a three-piece suit walking through a maze drawn on pavement.]

Discrimination on the basis of protected characteristics, such as sex or race, is of course something that societies have been working to eliminate for decades. Equal treatment is a constitutional guarantee in most democracies, and the subject of longstanding legislation such as equal pay acts and fair housing and lending laws.

[Text reads "Equal treatment is a constitutional guarantee in most democracies. AI doesn't change the goal- but it does change the playing field."]

The emergence of AI doesn't change the goal, but it does change the playing field. To make sure that humans don't discriminate, we do things like change our norms. A well-socialized member of an HR department today wouldn't dream of proposing a job ad like these.

[Newspaper clippings show jobs ads sorted by "male" and "female" jobs.]

And the HR department probably has training and protocols and templates that help ensure that doesn't happen. We use strategies like eliminating gender or race variables from application forms or healthcare data.

[Text reads: "Eliminating gender or race variables, training awareness of unconscious bias, legal ramifications for discrimination."]

We create training programs to build awareness of the risks of unconscious bias. And we use the legal system to hold people liable for intentional discrimination and organizations liable for operating in ways that have disparate impact on protected groups.

[A new graphic shows binary code.]

But there are several ways in which AI can undermine our efforts to eliminate discrimination.

[Text reads "Historical data is routinely biased and can freeze biases in place."]

First, the way we currently build almost all of our machine learning models is that we train them on historical data generated by human decision makers. And that data is routinely biased, both because it's historical, so it might include decisions made before decision makers got smarter about how to avoid bias, or at a time when societies or organizations were less diverse than they are today, and because, for all our efforts, humans still make biased decisions.

This is what happened to Amazon when they built their AI-based system to sort through job applications.

[Icons fade in representing twelve men and six women.]

Historically they hired more men than women, particularly in technical positions. One reason for that could be that fewer women applied for technical positions. Another could be that the people doing the hiring preferred male candidates, even if they weren't aware of that bias.

[Another icon shows one person asking a question to another person. An arrow points from it to an icon of a head with gears in it.]

A machine learning model trained to predict the decisions humans make is only going to achieve accuracy if it replicates the biases in its training data. It's doing what we asked it to do, but not what we want it to do. And because the training data is historical, it can freeze in place biases from the past even after human decision makers have managed to become less biased.

[The image fades to a woman. Lines connected by dots float over her face in a vague face shape. Text reads "What's in the data set?"]

Another way we can end up with biased training data is that we just choose the wrong data set to train on. This was the problem with the biased facial recognition systems that researchers identified in 2018. They had great accuracy on white male faces, but lousy accuracy on darker-skinned female faces, because the training data had many more white men in it than women or darker-skinned people of either gender. So, the machine had lots of practice with white men and just didn't get enough opportunities to improve on others. It was a poorly chosen data set for training and a poorly chosen data set for testing. It was perfectly designed to produce biased AI.

[A new image shows a magnifying glass surrounded by question marks. Text reads "What was in the training data?"]
           
So, a key question to ask is: what was in the training data? And then you have to take steps to ensure the data is representative and unbiased, not just at the start of a particular AI development process, but in our educational and organizational systems. If you're worried about what the new jobs will be when AI starts automating a lot of work, well, this is one: doing the work of curating and certifying the fairness of training data.
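
[Editor's note: To make that question concrete, here is a minimal Python sketch of a first-pass audit of a training set's composition. The records and attribute names are invented for illustration only; a real audit would run over the full data set and be repeated whenever the data is refreshed.]

from collections import Counter

# Hypothetical training records; a real data set would have thousands or millions.
training_records = [
    {"id": 1, "gender": "male", "skin_tone": "lighter"},
    {"id": 2, "gender": "male", "skin_tone": "lighter"},
    {"id": 3, "gender": "male", "skin_tone": "lighter"},
    {"id": 4, "gender": "female", "skin_tone": "darker"},
]

def representation(records, attribute):
    # Share of records for each value of a sensitive attribute.
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

print(representation(training_records, "gender"))     # {'male': 0.75, 'female': 0.25}
print(representation(training_records, "skin_tone"))  # {'lighter': 0.75, 'darker': 0.25}

# A skew like this means the model gets far more "practice" on one group,
# which is the imbalance behind the 2018 facial recognition findings described above.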

[A new graphic shows a long line of empty chairs at a board room table. Text reads "Machine learning will not only replicate biases in training data, it can amplify those biases."]

Machine learning will not only replicate biases in training data, it can amplify those biases.

[The graphic of icons representing Amazon employees returns.]

Suppose the Amazon HR department was just slightly biased towards men in past hiring, and that historically there also happened to be more men applying for jobs at tech companies like Amazon. The way we build a machine learning system is that we ask it to make good predictions. In a sense, we reward it for avoiding errors. One of the ways the machine can reduce errors is by biasing towards features of the data that are more frequent.

[An arrow points from the male icons to an icon of a head with gears in it. Beside the head, an equal sign points to three more men.]

It's a bit like hedging your bets by guessing that the bus that is usually late is probably late today, too, regardless of what else you know about today's traffic. So, a little bit of bias by humans and some naturally occurring imbalance in the data set can teach the machine to double down on bias.
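
[Editor's note: A tiny Python sketch can make this "double down" effect visible. The hiring rates below are invented, not Amazon's actual numbers; the point is only that a predictor minimizing its errors on group membership alone turns a mild statistical tilt into an absolute rule. Real systems use many more features, but the pull toward the more frequent pattern works the same way.]

historical_hire_rate = {"male": 0.60, "female": 0.40}  # hypothetical training data

def error_minimizing_prediction(group):
    # The best constant guess per group is the more frequent historical outcome.
    return "hire" if historical_hire_rate[group] > 0.5 else "reject"

for group in historical_hire_rate:
    print(group, "->", error_minimizing_prediction(group))
# male -> hire, female -> reject: a 60/40 tendency in the data has been
# amplified into a 100/0 rule.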

[A new graphic shows speech bubbles with a question mark and exclamation point. Text reads "To a machine, errors have the same value."]

Adding to this effect is the fact that from the machine's point of view, guessing wrong in this way is not called out as especially costly, which is what our norms and laws do for humans. Making a mistake about preferring men to women is no worse to the machine than making a mistake about the value of a few extra points on a qualifying exam. Humans will see a difference there where the machine won't. The problem is especially obvious if we think about that example of Google tagging a Black person with the label gorilla. That's way worse to us than tagging a bicycle as an airplane. But to the machine, those errors have the same value.
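
[Editor's note: Expressed in code, standard training scores every mistake the same, whereas a cost matrix is one way humans can say that some mistakes are far worse than others. The weights below are hypothetical, purely to illustrate the idea.]

def uniform_cost(true_label, predicted_label):
    # What standard training effectively assumes: every error counts as 1.
    return 0 if true_label == predicted_label else 1

# Hypothetical cost matrix: harmful, offensive errors weighted far more heavily.
weighted_costs = {
    ("person", "gorilla"): 1000.0,   # an offensive, harmful mislabel
    ("bicycle", "airplane"): 1.0,    # a low-stakes mix-up
}

def weighted_cost(true_label, predicted_label):
    if true_label == predicted_label:
        return 0.0
    return weighted_costs.get((true_label, predicted_label), 1.0)

print(uniform_cost("person", "gorilla"), uniform_cost("bicycle", "airplane"))    # 1 1
print(weighted_cost("person", "gorilla"), weighted_cost("bicycle", "airplane"))  # 1000.0 1.0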

[A new image fades in of a megaphone. Text reads "Bias can be amplified through language from other sources."]

Biases also get amplified in machine learning because our AI systems are often assembled using lots of components pulled from other places, and a critical way in which this bias sneaks in is through language.

[Icons fade in showing the head with gears, a simple abacus, a pie chart, a bar graph and the math function symbols.]

Machine learning is done using math and statistics, crunching numbers. So, if you want to build an AI system to read resumes or benefits claims, for example, you first have to convert words into numbers.

[New icons fade in. An arrow points from a book to a calculator. Below it, text reads "word embedding."]

And the way machine learning engineers do this is with what's called a word embedding: effectively, a dictionary that translates words into numbers. Any particular dictionary is built using, you guessed it, machine learning, trained on a body of text.

[The gear head appears over the arrow.]

This type of machine learning is asked to get good at predicting what word is most likely to follow in a sequence. So, it will learn to talk the way the text it was trained on talks.

[A new image shows a pile of individually cut out paper words.]

The problem that computer scientists have discovered here is that it takes a lot of text to train a powerful and useful dictionary. So, the dictionaries that get used are often trained on an easily available huge body of text, like the internet. And guess what? The way people talk on the internet has a lot of bias in it.

[A graph titled "gender bias in profession words" shows a field of dots correlated to gender and words relating to professions.]

One of the most widely used dictionaries was trained on Google News, and it learned to fill in the blanks like this:

[Text reads "Man = computer scientist. Woman = homemaker."]

...man is to computer scientist as woman is to homemaker. Now imagine what your resume-reviewing AI will do with that. It's easy for this kind of bias to be imported into a new AI system because it's a lot cheaper to use off-the-shelf components than to build everything from scratch.
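
[Editor's note: Here is a toy Python sketch of how a word embedding "dictionary" can reproduce that kind of association. The three-dimensional vectors are invented for illustration; real embeddings trained on corpora like Google News have hundreds of dimensions, but analogy queries work the same way.]

import numpy as np

# Toy "dictionary": each word maps to an invented 3-dimensional vector.
embedding = {
    "man":                np.array([ 1.0, 0.2, 0.1]),
    "woman":              np.array([-1.0, 0.2, 0.1]),
    "computer_scientist": np.array([ 0.9, 0.8, 0.3]),
    "homemaker":          np.array([-0.9, 0.8, 0.3]),
    "engineer":           np.array([ 0.8, 0.7, 0.4]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    # Answer "a is to b as c is to ?" by vector arithmetic: b - a + c.
    target = embedding[b] - embedding[a] + embedding[c]
    candidates = [w for w in embedding if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(embedding[w], target))

# Because these vectors encode a gendered association, the query reproduces it:
print(analogy("man", "computer_scientist", "woman"))  # -> "homemaker"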

[A new image shows curved pieces of paper in orange and yellow tones, replicating a fire. Text reads "Machines can see patterns that humans don't."]     

Then there's the impact of the fact that machines see patterns that humans don't. That's what makes them potentially so powerful, like seeing tumours that human doctors miss.

[A series of brain scans is shown.]

But that also means they can see things that we don't want to see or don't want to take into account in our decisions.

[A headline reads "New AI can guess whether you're gay or straight from a photograph."]

One study brought this to light by showing that a machine could be trained to guess much more accurately than humans could, 90% accuracy versus 55% accuracy, which of two pictures was taken from a dating website for people seeking same-sex partners. Even when we exclude all of the variables we want the machine to ignore from a data set, taking out gender, race, ethnicity, and sexual orientation, the machine may still sort people according to these categories and end up discriminating in settings where humans wouldn't be able to.

[Text reads "How can we build AI systems that avoid bias?"]

At the end of this long list of ways in which AI systems risk being biased, it can perhaps start to feel like the game is not worth the candle, but that's not the takeaway. The point is that there are risks and we need to find solutions to those risks. Ideally, we'd like to build AI systems that are better at avoiding bias than we are. And I think we can aspire to that, but we will need to do that by thinking hard about the ways in which bias creeps in.

[Text reads "Think hard about the ways biases emerge. Develop a critical approach to the use of data. Incorporate human oversight. Ensure teams building AI are diversified. Train ML engineers on risks. Ensure tech engages with broader politics."]

We probably should not be building automated decision-making systems as we now often do by training machines on a data set that consists only of past human decisions or just grabbing whatever cheaply available data set we can find. We definitely should be finding ways to incorporate human oversight and input into how AI-based systems reach decisions. We should diversify our teams building AI to increase the likelihood that someone notices our data are not representative and train our machine learning engineers about the risks and importance of choosing representative training data.

We should also ensure that the technologies and techniques we use to reduce bias are engaged with our broader politics and that many people other than computer scientists are engaged in building and scrutinizing these technologies and techniques. Here's a key example.

[A new graphic shows balls stacked on a plank held up by a small white ball in the center. On one side, there is a single, large, blue ball. On the other side, three small white balls sit stacked. The plank is evenly balanced. Text reads "Fair machine learning has explored many statistical versions of fairness."]

The field known as fair machine learning in computer science has explored many statistical versions of fairness, such as equalizing odds across different groups, requiring group parity, or seeking to ensure that all predictions made by a model are independent of sensitive attributes such as race or gender.

[Text fades in reading "...and it is not possible to achieve all definitions of fairness at the same time.]

And computer scientists have shown that it is not possible as a matter of statistics to achieve all of these versions of fairness at the same time. So, there are choices, essentially political and moral community choices, to be made about what to prioritize in the design of our AI systems. So, if someone comes to you and says, "I can certify that this machine learning model is fair," be sure to ask them, "What's your definition of fairness?"
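
[Editor's note: To see why the definition matters, here is a small Python sketch, with made-up predictions, that evaluates the same model decisions against two common statistical definitions: demographic parity, which compares selection rates across groups, and an equalized-odds-style check, which compares true positive rates. This model passes one and fails the other.]

import numpy as np

group   = np.array(["A"] * 6 + ["B"] * 6)                   # sensitive attribute
actual  = np.array([1, 1, 1, 0, 0, 0,  1, 1, 0, 0, 0, 0])   # true outcomes (hypothetical)
predict = np.array([1, 1, 1, 0, 0, 0,  1, 0, 1, 1, 0, 0])   # model decisions (hypothetical)

def selection_rate(g):
    # Demographic parity compares this rate across groups.
    return predict[group == g].mean()

def true_positive_rate(g):
    # Equalized odds compares error rates (here, the true positive rate) across groups.
    mask = (group == g) & (actual == 1)
    return predict[mask].mean()

for g in ("A", "B"):
    print(g, "selection rate:", selection_rate(g), "TPR:", true_positive_rate(g))
# Both groups are selected at the same rate (demographic parity holds), but group B's
# true positive rate is lower (the equalized-odds check fails).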

[An image shows a clear box with a clear lid. Text reads "What do we mean by transparency?"]

As I mentioned at the start, there are a lot of dimensions to fairness in society that go beyond discrimination on the basis of sensitive variables. One of the things I spend a lot of time thinking about as a law professor is fair process. And as I've emphasized, the term fairness in the machine learning conversation right now doesn't mean fair process. But there is one feature of fair process that has gained traction in public debate, and that's the idea of transparency.

[As she lists forms of transparency, they're listed on screen.]

One form of transparency is letting people know that they're interacting with an AI system and not a human. Another form is making the operation of the system understandable. There are challenges in implementing both types of transparency.

[An image shows a complex motherboard. Text reads "Machine learning systems are complex."]

As our discussion of fairness surely highlights, machine learning systems are complex. They are the product of huge data sets and complicated math and statistics. Many of these models are huge, so it's not just those of us who are not engineers that can't understand how they do what they do. The engineers that build them often cannot either.

[00:23:50 An image shows a brain made of lights and wires rising vertically from a motherboard.]

And they're only going to keep getting bigger, more complex, harder to understand. Even the engineers who build these systems don't really know, for example, how it is that training a mathematical model just to predict the next word in a sequence produces a system that is able to do things like take a complex legal document and put it into words that a kid can understand.

[Text reads "Black box problem."]

People often refer to this as the "black box" problem in AI. It generates the call for transparency, which is a call that won't be easy to answer, precisely because, at least with current methods, complexity and scale are what buy us effective AI.

[A formula reads "y = mx + b." Text above and below reads "This model has two parameters. GPT-3 has 175 billion parameters."]

This mathematical model has two parameters. GPT-3 has 175 billion parameters. It trained on a data set consisting of 500 billion words. This simple model with two parameters is easy to explain and interpret. Suppose this represented a decision to admit someone who applies for an immigrant visa. This is like a simple point system.

[As data points are named, icons representing them appear.]

In this fictional system, b could represent the number of family members already in the country. X could represent the probability the potential immigrant ends up contributing more in tax revenues than they require in public expenditure, like unemployment benefits, for example. And m represents the weight our policy puts on that factor. Let's suppose that number is 10. Now suppose we set a threshold for our decision, admit or don't admit, at 7. Then our model is pretty easy to interpret.

[New icons show a single male icon, a green stoplight icon and a bar graph icon labelled "70%."]

If an applicant is single with no family, they are admitted if they have at least a 70% chance of contributing more in tax revenues than they require in public expenditures.

[A second version of the graphic group appears, this time with two small people icons added. The bar graph is labelled "50%."]

But if they have two family members already in the country, they will be admitted even if they have only a 50% chance of being a net contributor rather than a net recipient. This is a policy we can see and debate, and immigrants can understand what it takes to get admitted and why they were denied. They can challenge a decision if they think it is wrong: argue that they should have been scored with a higher chance of being a net contributor, or that certain people should be counted as family members.
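
[Editor's note: The lecture's two-parameter point system can be written out directly. This short Python sketch simply restates the numbers given above (weight m = 10, threshold 7, b = family members already in the country) and reproduces the 70% and 50% cut-offs.]

M, THRESHOLD = 10, 7

def admit(prob_net_contributor, family_members):
    # score = m * x + b, compared against the admission threshold
    score = M * prob_net_contributor + family_members
    return score >= THRESHOLD

print(admit(0.70, 0))  # True: a single applicant needs at least a 70% chance
print(admit(0.69, 0))  # False
print(admit(0.50, 2))  # True: two family members lower the bar to a 50% chance
print(admit(0.49, 2))  # False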

And if this simple model were the output of a machine learning system, if it was not coded directly by programmers but instead represented what the machine discovered by looking at a training data set consisting of the historical immigration decisions in our jurisdiction, the AI developers could look to see if the model makes sense, if it accords with the legislative policy on immigration. Maybe the law says number of family members should not matter. And if so, this simple model shows that this is not what has been happening and not what we want to put into our system for the future. The developers could correct the data or modify the model to reflect legislative intent.

[A new image shows a magnifying glass. Text reads "An explainable model can help developers, users and affected parties understand a system."]

This is what is meant by an explainable or interpretable model. An explainable model can help AI developers understand what their system is doing and fix it if needed. Explainability can help the users of the model, such as the government that uses the model to make immigration decisions, by giving them a way to ensure that the model is fit for purpose and doing what they want or intend it to do. And explainability can help the people who are subject to decisions made using the model. It can help them understand what they would need to change to get a different result and a basis for challenging the decisions of the model, making a claim that the decision is not in accordance with law, or constitutional rights, or other principles.

[Text reads "Massive AI models are hard – even impossible- to explain in this way."]

The problem is that massive AI models with hundreds, thousands, maybe billions of parameters are hard, even impossible, to explain in this way. But we are starting to see legislation requiring that AI be explainable.

[Graphics show the General Data Protection Regulation and Bill C-11.]

You can find this in the European Union's GDPR, for example, and in legislation that was proposed in Canada in 2021.

[An image shows a pink field with cubes coloured various shades of pink. Text reads "Some explainable models will need to use only simple techniques."]

In some cases, the requirement that AI be explainable will only be met by a decision not to use current approaches in machine learning, to use only much simpler techniques that produce simpler, understandable models, like an immigration model or a bail risk scoring model that uses only a handful of variables. Some computer scientists argue that we should indeed use only explainable models in high-stakes cases and argue that the gains from more complex models are small and not worth it. But it's likely there will be cases, maybe a lot, where we can benefit from more complex models.

[00:29:05 Text fades in reading "However, other cases will benefit from more complex models."]

AI that learns to read scans better than radiologists to detect tumours might not be easily explained, but that's where I think we need to delve more closely into the idea of explainability. In my simple example of an explainable immigration model, explainability meant understanding how the math of the model worked and being able to give a causal account of why, mathematically, a decision was reached. But this kind of explainability is not always what we want.

[An image shows a series of brain scans with varying green and white blobs highlighting different brain areas.]

Think about that AI system detecting tumours, for example. It's not clear everyone needs to understand the math of how it is producing its results or be able to provide a causal account of why this dark spot was labelled tumour and this one was not.

[An image shows a woman in scrubs smiling at brain scans. Text reads "While AI developers may benefit from technical information, users probably do not need those details."]

The AI developers building the system might benefit from understanding the mathematical or technical details of how the system works so they can catch errors and build better systems, but the doctor or healthcare system that uses the AI system to diagnose patients and make treatment decisions probably doesn't need, and maybe can't use, the mathematical and technical details.

[Text reads "Instead, users want to know if the model is tested and reliable."]

They want to know the model has been properly tested and behaves reliably. The patient who is being diagnosed also doesn't want and can't use the math.

[Icons show a person in an arm sling with a thought bubble over their head, showing question marks. Beside the person sits a calculator, bar chart, pie graph and math symbols. It fades to a person thinking of a checkmark, and the other icons fade to icons of a doctor, a hospital, a heart with a heartbeat in it and scales.]

They want to know that the model follows the rules and norms governing how hospitals, and doctors, and healthcare systems make diagnoses and treatment decisions, that the doctor or healthcare system only chose reliable methods, that they took reasonable care to oversee the AI, that they ensured that the AI did not take into account irrelevant considerations.

[An image shows a doctor smiling and shaking hands with a person.]

They want to know that the diagnosis and the treatment they received, based in part on an AI system, was justified. I think justification would be a better term for legislation like the GDPR.

[Text reads "Justifiable AI: People should be entitled to obtain justification for the decision reached and the capacity to challenge it."]

People affected by automated decision-making systems should be entitled to obtain justification for the decisions reached and the capacity to challenge the justification. Sometimes those justifications will involve an understanding or explanation of how the mathematical model worked, but sometimes they will not. Doctors in hospitals today use lots of technology and treatments, medical devices, surgical techniques, pharmaceuticals without deep knowledge of the mechanisms that explain why they work. But they follow rules and norms like only prescribing pharmaceuticals that have been approved by a health regulator or only in accordance with best practices in the medical profession. We can do the same with AI.

[A new image shows a posable drawing mannequin with a question mark in a speech bubble above it. Text reads "Is explainable AI actually the same as trustworthy AI?"]

There's another reason to delve closely into the call for explainable AI. Lots of people suggest that explainable AI is trustworthy AI. This sounds intuitive. People will trust a system they understand more than one they don't. A system that is explainable is one that people can trust because they can see when it is going wrong. But neither of these claims turns out to be straightforward.

[A new image shows white painted arrows on pavement. Text reads "Explanations can sometimes undermine trust."]

For someone who doesn't understand the math or technical explanation, more of that kind of explanation may undermine rather than build trust. Trust may be better served by giving people confidence that AI is being built and tested with care.

That's why you trust the food served in restaurants, not because they told you the details of how they monitor the safety of food preparation at every step in the supply chain, but because you trust food safety regulations to work properly. And surprisingly, we're also learning that sometimes more explanations cause people to over-trust a system.

[Text adds onto the above statement. "...or even cause people to over-trust a system."]

In one study, people were less likely to catch obviously wrong AI-generated recommendations when they got an explanation of how the system worked than when they didn't. They scrutinized the unexplained model more closely and more appropriately.

[A new image shows a single lit lightbulb among a field of light bulbs lying on a white surface in a dim space.]

We have lots of research still to do into the relationship between explanation and trust. We're still in the early days of even understanding how to properly manage the relationship between humans and machines. Even the simplest form of transparency, letting people know they are chatting with a machine and not a human, is not so simple as people sometimes still come to treat the machine as a person. Is that a problem? We're still figuring that out. But this much is sure: transparency matters.

[The screen fades back to the "Artificial Intelligence is Here Series" screen for a moment before fading into a video chat.]

Ron Bodkin, Schwartz Reisman Institute for Technology and Society: Well, hi everyone. I hope you enjoyed Dr. Gillian Hadfield's comments. Certainly, a lot to think about there, and now we're excited to move to a fireside chat. So, let me quickly introduce myself. I'm Ron Bodkin. I'm the VP of AI engineering and chief information officer at the Vector Institute and also engineering lead at the Schwartz Reisman Institute for Technology and Society. And definitely topics of fairness in AI and more broadly, responsible AI are incredibly important to me. And I'm honoured to have Dr. Ishtiaque Ahmed, who is a colleague, and I will pass it to you, Ishtiaque, to introduce yourself.

Ishtiaque Ahmed: Thanks, Ron. And hi everyone. I'm Ishtiaque Ahmed. I'm an assistant professor of computer science and information science at the University of Toronto. I'm also a faculty fellow at the Schwartz Reisman Institute. My work broadly focuses on fairness and marginalization issues in AI, and I'm excited to be here.

Ron Bodkin: Well, great. And just as an organizational matter, we'll be talking for about the next 20 minutes. And then following that, we'll be happy to take some questions from the audience. So, please feel free to submit questions throughout the event using the collaborate video interface, which you're using to view the event. In the top right corner of your screen there's a raised hand button that you can use to enter a question. And we'll track that. And after this initial discussion, we'll definitely make sure to answer some of your questions. So, with that in mind, what I'd be really interested in, Ishtiaque, is what are your overall reactions having heard Gillian's presentation?

Ishtiaque Ahmed, University of Toronto: Right. Yeah. I guess Gillian highlighted a couple of very important points in her talk. So, AI fairness, as you know, is a big topic here, the concerns are really increasing, and it's hard to cover everything in a single talk. But one thing that Gillian highlighted in her talk that I think is very important for us to discuss is how the AI fairness question is actually an AI regulation question. These AI systems are being deployed in different sectors of our lives, including government sectors like the public service, and by the companies deploying them, and the question is how we can come up with a strategy, at the technical or policy level, to make sure that the harm is minimized. Because as Gillian also mentioned in her talk, it's hard to achieve 100% fairness in any sociotechnical system, but the best we can do is to put regulatory constraints on these systems so that we do not go too far off.

Ron Bodkin: Yeah, it's a good point. And I guess one could ask, "Well, what's new here?" in the sense that we've been concerned with regulating and dealing with all kinds of harms in many different sectors and industries. And particularly, unfair bias and discrimination are not new problems either. So, in your mind, how should we think differently about addressing these problems with AI than we have with other approaches in the past?

Ishtiaque Ahmed: Right. That's a great question. So, the term "regulation" is actually loaded, and it also comes with a number of concerns. What are we basically regulating? Because in the past, in history, we have seen that some not-so-successful impositions of regulation resulted in silencing people, suppressing their voices. And the main challenge here with AI regulation is: whose voices are we supporting and whose voices are we silencing? And the way it is connected to AI fairness or AI ethics is that now, as AI scholars or AI practitioners, we need to understand what kind of values, what kind of ethics, we want these AI systems to practice. And if our choice of those values doesn't match the values that different communities hold, then this regulation question becomes really problematic.

I can give you an example of what's happening in many cases on social media, which I have been studying for a long time now: hate speech and misinformation is a big problem on social media. And social media companies are trying to come up with artificial intelligence algorithms to automatically catch the information they believe is wrong, harmful or not correct. But while doing so, they are flagging posts which are often debated. They're debated because some communities believe that they are correct, and some communities believe that they are not correct.

So, we have been working with some faith-based communities in the Global South. They believe in things which are often not "scientifically" correct. Now, if we put our bet on science, if we think that being aligned with science is the most ethical thing here, and by science I mean modern Western scientific practices, then oftentimes our AI algorithms end up flagging the voices of these faith-based communities as wrong information and fake news, all these kinds of things. That is problematic, because if we do that, we will not have a digital social environment which is inclusive. So, the regulation question, how you impose regulations, becomes difficult at that point. And this is something, I guess, that is new, an emerging problem regarding digital governance and AI fairness.

Ron Bodkin: Yeah. It's an interesting point. I guess, framing that, a lot of times when people talk about the harms from online, they sort of gravitate to what could be viewed as, hopefully, universally agreed upon, so somehow hate speech. Well, of course you have things like illegal hate speech or incitement to violence. And yet it's far from clear that most of the harm actually comes from these most extreme examples. And so, one of the things that I think about is how little data we really have and how mixed the scholarship has been from outsiders trying to understand the effects of, for example, social media systems. And I suppose you could look at other AI-driven systems: if you wanted to study the effects of high-frequency trading, it would be very difficult in a domain like that, again, where you have advanced AI that's proprietary trade secrets, to really know what's going on.

So, we don't understand what's going on. And yet at the same time, I think you're highlighting an important point: there isn't a consensus about what the right answer is, about what's truthful. I guess I would say not only can you look at faith-based communities in the Global South, but you can look at the truckers' protest happening right now in Canada and say there are some pretty deeply held differences in views about topics that maybe had hitherto been thought of as having a broad consensus around them, for example, public health, and obligations to one's fellow citizens, and the efficacy of medicines, et cetera. So, I don't know that we can proceed based on consensus, because there won't be a consensus. How do you think we should proceed if there are so many different values in a pluralistic democratic society?

Ishtiaque Ahmed: That's really a question for political philosophers, to be honest, because political philosophers have been debating how to make a place truly democratic. And now, as computer scientists, we're trying to learn from them and trying to design systems in a way where different kinds of ideologies, different kinds of voices, can coexist together. And it's not very easy, for two reasons that we actually need to understand if we want to understand these problems clearly. One is that AI, as much as it is forward-looking, future-looking, is also very much past-facing, because most AI algorithms work on historical data. So, the intelligence is built on what people have done in the past. And there is this history of marginalization, not only toward people in the Global South who were victims of colonization, but also in this Western part of the world. There are people from Black communities, people from Indigenous communities, people from LGBTQ communities, people in immigrant communities. They have historically been silenced. They were not able to speak openly.

So, now when you look at the data, you are basically looking at a biased data set. History is already biased. And anything that you are trying to do better in the future, if your principal mode of work is to work on this past, biased data, then that becomes problematic. And this is why I think the other point that Gillian was mentioning in her talk was important. That is justification. So, she was saying that it's not enough to only give people an explanation of how an AI system works in terms of how the math works; you also need to justify why the system is working like this. Because if we build a system that works on past data where women were treated worse than men, and now I show you the math for why "my system is correct" because it's using the past data, and it's producing a biased result, that's not the kind of explanation that people are going to accept. We need to justify this. And this is where we need to come up with a better AI system, I would say, which definitely [inaudible 00:45:12], but as a society we want to achieve.

Ron Bodkin: Yeah. I mean, I definitely agree with Gillian's view that we should be justifying the systems. I think one thing is it's important to compare with the alternatives. I think often people try to hold up new inventions of any kind, and AI is foremost in this category, against a higher standard than the alternatives. So, as we try to improve and do better than our failings in terms of historical discrimination, we're trying to improve from the context we're in. So, there is the upside of making it better as well. But I definitely agree that justification is something you ought to be looking at. And I think a lot of it, too, has to do with being able to understand and quantify the real impact of something.

Back to the point I was making about how we don't have good data about what's really going on, and that scholarship is mixed: I think that's one of the first and most critical things, that our society needs to have better insight into the impact of AI systems. And so, I like proposals that have come forward in different jurisdictions around giving more direct access for independent researchers to be able to better study the effect of AI systems, so that you can understand and quantify the extent of the concern.

In any society, there could be a large number of potential problems, but how do you weigh and understand the urgency? How do you know, for example, what's the risk of silencing historically discriminated against groups? Or what's the risk of teenage suicide and depression from AI algorithms? You need to quantify these things to have a sense of their importance and urgency. And then you could do a better job of having justifications to say, are we in fact doing a good job of addressing them? But it's very hard to believe people justifying something based on... They control the data, and they define the metrics and then magically the metrics keep getting better.

Ishtiaque Ahmed: Yeah, absolutely. And one thing that I would like to add to what you said is that we need to understand the concerns, and we need to understand them from the sufferer's point of view. And this is why studying different communities differently is very important. One study that opened my eyes on that was when I was collaborating with a group of women researchers who were studying victims of sexual harassment. At that point, the #MeToo movement was viral. A lot of women started talking about it. It was a stigma, people were not talking about it, but this hashtag movement broke that stigma and women started to talk.

But when we were actually meeting women from different marginalized communities who were still living with their harassers, they still were not opening their mouths. And this kind of AI system, which was repeatedly bringing those #MeToo messages up on their news feed or their harasser's news feed, was not actually translating into something good for them in the long term, for their survival, because the harassers were now putting further pressure on them to stay silent. It gets nastier.

This is why I think getting the voices of different communities, so that they can talk about how they're suffering from these systems and where the system is actually biased, is a hard process. It requires addressing a lot of questions around empowerment, a lot of questions around access, a lot of questions around the hope of getting justice. Because if you get a person to talk about their vulnerable situation and you fail to give them justice, you fail to save them, it means that you are now putting them in a far more vulnerable situation, because now their opposition knows that they are opening their mouth, and that's not very safe for them.

This is why, for any data-driven system where we want quantifiable data and concrete evidence, we need to stand by the side of vulnerable, marginalized communities. We need to give them safe space so that they can speak up. And then we can get concrete, understandable, quantifiable data. So, yes, I was going to agree with you that we need this quantifiable model. I was just trying to point out that this process requires a lot of work at the ground level.

Ron Bodkin: Yeah, no doubt. I mean, I guess if you looked at the whole spectrum from innovative research to scaled deployment, where do you see the biggest obstacles? And what things or approaches are you excited about to make it better in addressing some of these concerns?

Ishtiaque Ahmed: Good question. So, if you really want to know my answer, it might sound a bit philosophical, but this is what, honestly, I felt while doing this research: it is the question of hope. A lot of times you will see privileged people on your Twitter or on your Facebook complaining about the issues they are facing, because there is a kind of feeling of hope that other people will listen to them, that they will support them. Or if this is a case where there needs to be a societal change or a change in the government, they can at least pass the voice up. And we have seen that happening even in our smaller academic community. When we found something wrong, we raised our voice, and it got fixed. So, this hope actually makes us talk about it and create data that we can use for making policies, making policy changes.

But what happens in these marginalized communities is that oftentimes they do not talk, because they do not have that hope. And I have heard this so many times while working with the participants in my studies. They say, "Well, what's the point of talking about this? Who is going to listen to us?" So, a part of this problem is technical, but a lot of it goes beyond technology, I guess, to the governments, to societal changes. Otherwise, we're not going to get the data.

In our lab, we tried to build this kind of safe space for marginalized communities, where they can report with an anonymous identity. They can report all their concerns, kind of hazed around other issues, so that no one can uniquely identify them. But then we also saw how their opposition was taking over the systems and, kind of, sabotaging them and their anonymous social media use. So, this problem has a larger spectrum. And as you know, this is where I agree with Gillian once again, that computer scientists need to work alongside social scientists, philosophers, legal scholars, all kinds of scholars, to address this problem.

Ron Bodkin: Yeah. Yeah. It makes sense. I mean, I think trying to address this problem of harassment of any group online, hate speech, and silencing, and victimization, it's an incredibly hard problem, in the sense that I don't think social media and online companies want to see this. But it can be very challenging at scale to detect and prevent. And of course, there's the flip side: if you were too aggressive about stopping what might be perceived as, or what an algorithm, anyway, would identify as negative, that could easily silence a different side in the debate. So, getting it right is, at minimum, an extremely expensive proposition. And maybe that's part of the problem: do you need to just shift the business model and say it's the price you must pay? But from a technical standpoint, it's very hard to get this balance right at the scale of these massive platforms.

Ishtiaque Ahmed: Absolutely. And Ron, since you mentioned this business model, I'm not sure whether social media companies honestly do want to stop this. Because if there is a debated issue, if there are people talking about hate speech, they get more engagement and they make more profit. So, I really question their honest intention to stop hate speech or these kinds of debated topics and polarization. This is where they make all their money. But this is where I think government regulations are important, like third-party regulations.

Ron Bodkin: Yeah. I mean, certainly I think the most egregious examples are probably a net harm to them, but there's a smooth spectrum. Enragement equals engagement. A lot of people think that it's just advertising, by the way, which is the root cause of this. But it turns out that lots of business models rely on engagement. Keeping you engaged matters if you're going to subscribe: if you're not using the thing, it's less likely you'll subscribe, you'll churn, and so on. So, as much as I wish that we could replace advertising and solve this problem, I don't think we can.

Let's do this. Let's turn to some great questions from the audience. So, first question we have is, are there certification bodies, accepted maturity models, or other mechanisms we can rely on to help organizations assess and resolve issues around data bias? So, Ishtiaque, I'll let you start on that one.

Ishtiaque Ahmed: Yeah, I can start, but I believe you know more about this than me, because you are more on the practitioner side. AI auditing is a growing area where organizations are setting up boards that can audit AI systems. An audit board looks into biases, privacy issues, even how the AI system in the long run is going to impact people's lives, including reducing their employment and creating polarisation. So, that kind of auditing system is important. I do not know whether there is a certification for it that companies can pursue, but that's something, Ron, you can probably comment on.

Ron Bodkin: Yeah. Well, there's definitely standards-body work in this area. I happen to know ISO has published a standard on bias in AI systems and AI-aided decision making, which I won't read out; just recently, last November, they published a first edition, so those interested could look at that. Of course, there's been a flurry of standardization efforts, so I'm not saying this is the only standard. Life would be simpler if we had one standard. But there's so much interest in rising to the challenge of building standards and addressing this, not only in a generic sense but in many different industry contexts, that we have something of an embarrassment of riches of efforts to come up with standards. And there's a lot of private-sector innovation looking at things like how to do, as you say, audits. You see private companies of different kinds, from ones with very technology-driven solutions to try to scale auditing, to more custom, more consultative, interview-based approaches.

One thing I'd note is that in certification there are nascent efforts. I happen to have been involved with both the World Economic Forum and the Responsible AI Institute and some of their efforts around AI certification and trying to come up with some standards. Again, those quickly run into the fact that it's not enough to have a generic standard for everything; you want to get into the specifics of particular use cases, different industries, and different situations. But the other thing I'd say is that moving from an inputs-based approach, where we define a process and make sure we faithfully follow it, to more of an outcomes-based approach, where we more objectively quantify and understand the impact, is incredibly important.
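As a rough illustration of what an outcomes-based check might look like, the sketch below computes two commonly cited fairness metrics on hypothetical decisions for two groups. The groups, labels, and numbers are invented for illustration; they are not drawn from any specific standard or audit framework mentioned in the discussion.

```python
# Illustrative only: a minimal outcomes-based fairness check on made-up decisions.
# Records are (group, model_approved, actually_qualified).
records = [
    ("A", True,  True), ("A", True,  False), ("A", False, True),  ("A", True,  True),
    ("B", False, True), ("B", True,  True),  ("B", False, False), ("B", False, True),
]

def selection_rate(group):
    rows = [r for r in records if r[0] == group]
    return sum(1 for _, approved, _ in rows if approved) / len(rows)

def true_positive_rate(group):
    qualified = [r for r in records if r[0] == group and r[2]]
    return sum(1 for _, approved, _ in qualified if approved) / len(qualified)

# Demographic parity difference: gap in overall approval rates between groups.
print("demographic parity gap:", selection_rate("A") - selection_rate("B"))
# Equal opportunity difference: gap in approval rates among the qualified.
print("equal opportunity gap:", true_positive_rate("A") - true_positive_rate("B"))
```

In general the two gaps cannot both be driven to zero at once on real data, which is one reason, as Ron notes next, that optimizing a single dimension of fairness can create harms on another.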

I would say that once you've got reasonably good research that we can hang our hat on in terms of the extent of an issue or a concern, then it's much more reasonable to come up with some kind of balancing or trade-off of what is justifiable. And again, I totally agree with the point we talked about before: you're not going to get 100% fairness. In fact, if you optimize for any one dimension of fairness, you will probably be creating some harms elsewhere. So, the reality is that it's complex and there are trade-offs. I understand the EU also has, and I'm not an expert in this, a much deeper standard around what is considered fair, including what is allowed in terms of some of the algorithmic trade-offs of what you can and can't do in terms of fairness.

The good news is that there's lots of work on it. The other thing I would argue, though, is that fairness often fits into a broader framework of ethics and responsible AI. Some people define fairness so broadly that almost any ethical violation is unfair, in which case it encompasses the whole field. But if one chooses a narrower definition of fairness, then there's a whole host of other norms that we need to be able to think about to justify good behaviour. So, it's about establishing principles, but then also the proper governance to enforce those principles in our organization: how do we actually have a process not only to assess and decide what we're going to do, but to continue to monitor? Organizations often have a one-time stage-gate approach, which may be very appropriate for extremely mature technologies.

When you build an airplane, at this point you hopefully really understand how to build a safe airplane; you do your upfront work, and once you've certified it, there shouldn't be a lot of surprises. Well, AI systems are not like that. They're changing frequently, so we have to keep learning. When you're building new systems, you're often in the worst position to know what the real risks are. You might identify some of them, but you need to be carefully monitoring what's actually going on, and addressing the real lived experience of these systems, to course-correct when things are not going well.
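One possible way to operationalize the continuous monitoring Ron describes is to compare a rolling window of live decisions against a reference rate captured at assessment time and flag drift for human review. The sketch below does only that; the metric, window size, and thresholds are assumptions for illustration, not part of any framework cited in the event.

```python
# Illustrative only: a minimal post-deployment monitor. It compares the approval
# rate in a rolling window of live decisions against a reference rate measured
# during the initial assessment, and flags drift for human review.
from collections import deque
import random

REFERENCE_APPROVAL_RATE = 0.62   # hypothetical rate measured at initial assessment
ALERT_MARGIN = 0.10              # allowed drift before a review is triggered
WINDOW = 500                     # number of recent decisions to compare

def check_drift(recent_decisions) -> bool:
    """Print an alert and return True if the live approval rate has drifted too far."""
    live_rate = sum(recent_decisions) / len(recent_decisions)
    if abs(live_rate - REFERENCE_APPROVAL_RATE) > ALERT_MARGIN:
        print(f"ALERT: live approval rate {live_rate:.2f} vs reference "
              f"{REFERENCE_APPROVAL_RATE:.2f} -- review the system")
        return True
    return False

# Simulated live traffic whose approval rate has shifted since deployment.
random.seed(0)
window = deque((random.random() < 0.45 for _ in range(WINDOW)), maxlen=WINDOW)
check_drift(window)
```

The point is not the specific statistic but the loop: measure outcomes continuously, compare against what was assessed up front, and route surprises to people who can course-correct.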

Ishtiaque Ahmed: Absolutely.

Ron Bodkin: Should we take another question then?

Ishtiaque Ahmed: Yeah.

Ron Bodkin: So, the second question is: do we have any thoughts about reward systems and conditioning principles being used in other countries, where if you break a social rule your bandwidth is throttled or your picture is posted in the area you're in? And how can we ensure AI regulation is used ethically and not as a tool of social obedience?

Ishtiaque Ahmed: Right. Yeah. This is what I was trying to hint at when I said that regulation is a very loaded term, and it can have different meanings in different countries and different contexts. One part is that we want to make our digital or technological space as fair as possible; we don't want biased or hateful content there. But on the other hand, do we want to silence people by saying that what they were saying is not helpful for, let's say, the government? This is actually a big issue in many countries in the world, even if we don't see it as much in North America per se. I'm not saying that it doesn't happen here; I'm saying that we probably hear less about it here. But in many other countries, we know for sure that the government is imposing this kind of regulation or surveillance on citizens, where if you say something that is not aligned with their ideologies, you are going to face consequences. That may come from the government; it might also come from the community you live in. The community does not approve of what you posted on social media, and you face consequences.

So, this is where you need regulation, but you need regulation that is fair. And this is where I agree with Ron on the question of fairness: how much the AI system should be engaged with this question of fairness is actually a really difficult question. One way to think about it, and it is an easy way, is to focus on the tools and materials the AI system is engaged with, like the data we are collecting. Is this fair data? Is this representative data? We think about the machines we are using and the people who are using them. Are they representing the whole population? Is there enough diversity here? Is the process environmentally friendly, so that we are not abusing the environment? That is about getting the process right.
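A very simple way to start on the "is this representative data?" question is to compare how groups appear in a dataset against their shares in the relevant population. The sketch below uses invented group names and figures purely for illustration; real reference shares would come from census or administrative data.

```python
# Illustrative only: a minimal representativeness check with made-up figures.
population_share = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}  # assumed reference shares
dataset_counts   = {"group_a": 820,  "group_b": 150,  "group_c": 30}    # made-up training data

total = sum(dataset_counts.values())
for group, share in population_share.items():
    dataset_share = dataset_counts[group] / total
    gap = dataset_share - share
    flag = "UNDER-REPRESENTED" if gap < -0.05 else "ok"
    print(f"{group}: dataset {dataset_share:.2%} vs population {share:.2%} ({flag})")
```

Checks like this only cover one narrow slice of fairness, which is why the feedback loop described next matters just as much as the upfront audit.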

And then, when we are delivering the product to the community: who are the people that are going to use it? And if there is a complaint, if perhaps we haven't thought about a problem and now people are facing it and complain about it to us, are we listening to them and taking action? That way you do not have to "plan" fairness in advance and claim that whatever we are building is 100% fair and ethical; what you can do, at least, is be open to listening and change your system based on the feedback. And that's important here.

The other thing I want to highlight is that this problem is not entirely about AI systems. We run into it in different spheres of our political life, even in Canada. In Canada, we boast of having a very multicultural environment where we invite inclusion and diversity; we want to work with different ideologies and different political ideas. Now, how are we solving this problem in non-AI systems in our public space? There are already ways political scientists and policymakers handle these kinds of tensions. The challenge now is that the people who are experts in that field are not often connected to the technological domain. All of a sudden in the last few years, technologies, especially these AI-powered digital technologies, have in a sense taken over the citizens. These tech companies are sometimes closer to the people than the governments, and they are not always connected to the political scholars or policymakers who, over hundreds of years, built the strategies to mitigate these issues. The challenge is to make these connections once again.

Ron Bodkin: All great points. I would certainly say that surveillance capitalism controlled by oligopolies is a bad system; surveillance dictatorship is a worse system; and perhaps a deep collaboration between oligopolists and dictators is the worst of all. So, I resonate with the point of this question. For nations that value individual self-representation and liberal democratic values, I think the onus is on us to come up with good systems that work well. I don't think we should be too worried about the legitimate trade-offs we have to balance between interests in formulating policy. Of course, dictatorships will use those as excuses to cover for their own behaviour, but whatever we do, they will come up with excuses. So, I don't think we should shy away from thoughtful regulation and policy based on how dictatorships operate.

One thing I do wish for, and think is important for us to consider, is how we create more incentives and choice. I really don't think there are good options right now. I don't like the idea of government increasingly dictating what speech should be and what's allowed and not, because I don't think that goes well. But neither do I like the idea that a small number of private companies increasingly control more and more of the speech in our society; that's also incredibly dangerous. So, I really think that finding ways of preserving more choice and allowing more voice is important, as is more ability for things like data trusts and more ways that collectives can come together and have more control over what is done by them and on their behalf.

There are ideas people talk about, like algorithmic choice: ways of creating a structure where people have more control of their data and more options around what is done with it. I think if we created more incentive to serve communities, and pushed hard for policy that treated a plethora of ways of communicating as desirable, so that people could choose the places and spaces that worked for them, we'd be in a better place. A lot of the success of democracies to date has come from it being relatively easy to start a publication and have a new voice and a new editorial policy. We don't worry so much if one voice is out of step with or different from ours, because we can find a place that resonates. So, I think we have to find policy that takes us in that direction, and not try to take a small number of choices and regulate them to serve our goals.

Ishtiaque Ahmed: Absolutely.

Ron Bodkin: So, obviously this is a topic we find... I think it's an important topic, and I appreciate the question. Another question, also a good one: how important is it to consider who's involved in creating, studying, and implementing AI regulations?

Ishtiaque Ahmed: Yeah, I can take this question. I think this is a very important question. Even in Gillian's talk, she spoke about the philosopher John Rawls and how much she admires his philosophy of justice, and I'm also a fan of John Rawls. One thing Rawls said is that you have to think from the perspective of the most vulnerable person in society: what could possibly go wrong if you take a policy decision or build a technology? That is how you think about justice. You don't think about the privileged people, because, like us, they already have some kind of support system in place they can draw on. But for the most vulnerable people, how do you do that?

Now, Rawls was theoretically correct, but how do you actually take the perspective of the most vulnerable person in society? That's not easy. It's theoretically right, but practically very difficult. This is where the question of representation is very important. When you are building an AI system, you need people from different groups, especially marginalized groups, who can look at the system. They can look at the data and say, "Well, this data is not representing us," or "The system that you are going to build may do more harm than good to my community, or to the people I represent." This is where representation is very important. Even in Rawls' philosophy, when he was talking about the justice system, he said that even in a courtroom you need people from different backgrounds to raise concerns from different marginalized points of view. And this is why I think representation is very important in AI regulations.

[Alex rejoins the video chat.]

Ron Bodkin: All right. Well, thank you. Certainly, I'd say this is a topic we could spend a lot more time discussing, but in the interest of time maybe it's good for us to give a final couple of thoughts as we close out the discussion. Maybe I'll share a couple and pass it to you, Ishtiaque, to close us out. Certainly, I think it is wonderful to see the engagement and thinking about these important topics. I also think it's important for us to be looking at how to contribute in a positive way. How do we think about creating AI systems that are positively beneficial and improve on what's gone before? How do we in the public sector think about how AI can help us fulfil the opportunity to better serve the public, empower people, and make things better?

So, there's a huge opportunity in that space. At the same time, how do we do so in a thoughtful way, and how do we come up with systems to really assess and identify where there are challenges, risks, and problems, and move quickly to address them? Sometimes the answer can be that something's too risky and we shouldn't proceed, and I think it's actually a good outcome if you've thought about something and then decided to take a pass. And I think we do need to bring some of the best thinking in computer science, the humanities, and the social sciences together to inform the public conversation. So, those are a few thoughts, and Ishtiaque, I'll turn it to you to share your final thoughts.

Ishtiaque Ahmed: Yeah, absolutely. I would stress the point that we need more collaboration between different disciplines, because oftentimes people in the humanities ask how they can learn about AI and how they can use AI to make whatever they're doing better, and in response I say, "Well, we need to learn from you. That's more important for us." So, there should be more conversation between scholars of different disciplines, people from different groups and different places, to bring their issues together. This is where human-centric AI, or justice-centric AI, is what we need to put forward. It's not only AI that we are excited about; it's ethical AI that should be put forward as a national issue, I believe.

Ron Bodkin: Excellent. Well, let me now pass it to Alex to finish our event with some closing remarks.

Alex Keys: Thank you. Wow. Thank you both so much for delivering such an insightful and thought-provoking conversation. We're grateful to be able to connect our public servant learners to the perspective and insight you both bring to the topic of AI bias, fairness, and transparency. Thank you again for sharing your time with us today. We also want to thank our series partner, the Schwartz Reisman Institute for Technology and Society for their support in delivering today's event, including Professor Gillian Hadfield for the opening lecture. We're also grateful for all the learners who registered for this event. Thank you very much for tuning in.

If you're interested in learning more about Ron and Ishtiaque's work, please make sure to check out the link that will be included in the note you'll receive following today's event. And as always, we're very grateful for your input on the evaluation that will be included in that note as well. The next event in the AI is Here series will take place on March 15th on the topic of global efforts to regulate AI. Registration details will be available on the Canada School of Public Service website very soon. And we look forward to seeing you all again next time and invite you to consult our website to discover the latest learning offerings coming your way. Thank you all very much again. Good afternoon.

[The video chat fades to CSPS logo.]

[The Government of Canada logo appears and fades to black.]
