Spotlight Series: AI


Is there a Venture Capital firm anywhere not talking about AI? At the risk of sounding like a broken record – there is a good reason AI is the talk of the VC town. It will change our lives profoundly, in ways we cannot yet envisage. Undoubtedly, valuable businesses will be built upon the intelligent and innovative use of AI. At the same time, businesses that fail to adapt will themselves fail – just as there have been winners and losers in every previous industrial revolution.

There is also no shortage of column inches on the risks associated with AI. For an earnest, if somewhat absurd, conversation about those dangers, google ‘the paperclip apocalypse’ – a thought experiment in which an AI tasked with maximising paperclip production eventually destroys humanity by diverting an ever-increasing share of the world’s resources to the task.

We don’t want to make bold predictions about the future of AI, through either an optimistic or a fatalistic lens. We do, however, want to discuss some of the near-term, impact-related risks associated with AI, and shine a light on some of the difficult problems AI is well placed to solve. Back in July we co-hosted a conference alongside Cooley, Google DeepMind and the Xoogler Community, a group of former Googlers (‘Xooglers’), to discuss the risks and opportunities AI brings for the creation of inclusive, fair and diverse workplaces and societies. We then hosted a panel bringing together academic, entrepreneurial, policy and investor voices to consider some of the broader ethical concerns that arise as AI cements itself in our lives.

To summarise the conversation in just a few sentences – it is clear that AI algorithms exhibit bias. The algorithms are built on datasets that reflect society, and they exhibit bias because society exhibits bias. For AI founders, investors and everyone else in the ecosystem, the appropriate response to these facts is not denial, nor embarrassment. The appropriate response is to recognise that the bias exists – and to create strategies to address it. Those strategies might include keeping a human in the loop, creating guard rails for the AI, ensuring the teams building the algorithms are themselves diverse, or even actively editing the way the model links data points – such as removing the learned association between ‘nurse’ and gender, to prevent the model from assuming that nurses are women.
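To make that last strategy concrete, here is a minimal Python sketch of projection-based embedding debiasing, in the spirit of the ‘hard debiasing’ approach described by Bolukbasi et al. (2016). The word vectors below are random placeholders rather than a real model’s embeddings, and a production system would need considerably more care.

    import numpy as np

    def gender_direction(embeddings):
        # Estimate a "gender" axis from a few definitional word pairs.
        pairs = [("he", "she"), ("man", "woman"), ("king", "queen")]
        diffs = [embeddings[a] - embeddings[b] for a, b in pairs]
        direction = np.mean(diffs, axis=0)
        return direction / np.linalg.norm(direction)

    def debias(vector, direction):
        # Remove the vector's component along the gender axis, so a word
        # like "nurse" no longer carries a learned gender signal.
        return vector - np.dot(vector, direction) * direction

    # Toy usage with random stand-in vectors (a real model would supply these).
    rng = np.random.default_rng(0)
    words = ["he", "she", "man", "woman", "king", "queen", "nurse"]
    embeddings = {w: rng.normal(size=50) for w in words}

    d = gender_direction(embeddings)
    embeddings["nurse"] = debias(embeddings["nurse"], d)
    assert abs(np.dot(embeddings["nurse"], d)) < 1e-10  # gender component removed

The idea is purely geometric: estimate a ‘gender’ direction from definitional pairs such as he/she, then subtract each occupation word’s component along that direction, so that ‘nurse’ no longer leans towards either gender in the model’s internal space.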

More broadly, there was a reminder to make sure we focus our attention in the right places. To take an extreme case – in a world in which the Casey report has concluded that the Met police is institutionally racist, homophobic and misogynist – correcting the culture of the Met should be the priority. There is a temptation to focus on the facial recognition software used to search for suspects, which is dramatically more likely to return a false positive for a Black face – but a police service free of racism would be considerably better at managing the consequences of those errors.

We are right to be concerned about bias in AI and the outcomes it produces – but the correct response is not to fear AI and limit its vast capacity for good in the fields of climate, health and education. The correct response is to remain acutely and permanently conscious of the risk of bias, and to act against it consistently as it inevitably occurs.

Written by Alex Shapiro
