Well-designed AI can help overcome bias, said AI expert Professor Nick Jennings, Imperial's Vice Provost (Research and Enterprise).
Professor Nick Jennings, who has previously served as the UK government's Chief Scientific Advisor for National Security, shared insights last Tuesday about the threat posed by biased algorithmic decision-making, the ways the AI community is addressing this, and the potential for algorithms to actively support positive outcomes for diversity and inclusion.
Diversity and inclusion in cyber security
Subtle biases create an environment where racism can exist … [such as] an automatic assumption that the CEO is anyone other than the Black person. Oz Alashe
Professor Jennings was speaking in an online panel discussion on diversity and inclusion in the cyber security industry. The event marked the publication of results from the first annual diversity and inclusion survey, carried out by the National Cyber Security Centre and KPMG with support from Professor Jennings. Dione le Tissier, Director in KPMG's Aerospace and Defence practice, revealed key findings, including that 18% of Black respondents, 15% of women and 21% of transgender people experienced career barriers due to their identities.
Panellists such as Ms le Tissier and Oz Alashe MBE, CEO and Founder of cyber security company CybSafe, also shared personal experiences of discrimination. Mr Alashe said: "It's [often] subtle biases that create an environment where racism can exist … whether that's an automatic assumption that the CEO is anyone other than the Black person or clear audible and visible surprise when people hear my accent having previously only seen my name in writing …. These are all manifestations of this bias that I've been talking about, and it's really important that we recognise this."
The National Cyber Security Centre will use the new annual survey to track the effectiveness of measures to improve diversity and inclusion in the industry. Imperial College London is recognised as an Academic Centre of Excellence in Cyber Security Research by the National Cyber Security Centre and the Engineering and Physical Sciences Research Council.
Avoiding biased AI
Here we have a great opportunity to see some of the upside of AI and its ability to make decisions in a different way to humans and from a different standpoint. Professor Nick Jennings
At the event, Professor Jennings addressed the role that algorithmic decision-making, as used in cyber security and other industries, can play in amplifying human biases. He said: "Machine learning is very much driven by data … If you feed in biased data, you're going to get biased data out of it."
"High profile examples include recruitment software that looks at previous CVs and looks for what successful candidates have and what unsuccessful ones have. That can lead to biased decisions by algorithms [that have] learnt from biased decisions that have been made in the past. It's important that we look at data. [Otherwise] you end up with face recognition software that's good for white men and less for BAME women, or software used in US courts that's biased against African American people in terms of giving them bail."
Professor Jennings said that biases in the data used for algorithmic decision-making can be avoided through technological solutions and by improving diversity among the people who create and use AI software.
"There are a range of tools and methods you can use to interrogate your data and identify where there are incomplete or unrepresentative examples, and there are examples of de-biasing tools out there," he said. "[We also need to consider] the people that programme AI systems … we need to make sure that they are representative of the community. And when we explore and test software we need to do so in a wide range or circumstances with a wide range of users."
Using AI to avoid bias
While AI can amplify bias, it can also be a useful tool for overcoming it, Professor Jennings said.
"Here we have a great opportunity to see some of the upside of AI and its ability to make decisions in a different way to humans and from a different standpoint. My view of many AI systems is that they're going to come into being in partnership with humans."
"[This] gives you the ability for both the human and the machine to tackle similar problems and check with one another where they disagree. From a diversity and bias perspective … if we can construct our way of working between humans and machines so that there is dialogue between the two then you can get them to cross-check one another."
"If you think about the way medicine works, you often have a number of doctors looking at a particularly problematic case … AI lets you replicate that diversity and multiplicity of decision makers and perspectives."
Progress at Imperial
Imperial is carrying out a range of research to allow industry to interrogate the decision-making procedures used by AI systems, guarantee ethical AI decision-making and promote responsible business leadership.
It is also working to improve the diversity of its own community and has recently unveiled a raft of measures to advance diversity and inclusion at the university.