RESPONSIBLE AND ETHICAL AI

Artificial Intelligence has the power to transform businesses. Its impact on society and the way we live our lives has already been profound. But AI can also cause harm, either through malicious acts or insufficient care in how it is designed and used.

Unfortunately, there are already many examples of AI being unfair or biased against particular groups of people and sections of society, and in most of these cases there was no obvious malicious intent. We have seen insurance comparison websites quote different prices based on the applicant’s first name, augmented reality games favour locations in white-majority neighbourhoods, and recidivism algorithms wrongly label black defendants as likely to reoffend at almost twice the rate of white defendants.
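
To make the recidivism example concrete: disparities like this only become visible when error rates are broken down by group. The Python sketch below is purely illustrative (the records, group names and figures are invented, and it is not taken from any real system or from Greenhouse AI’s methodology); it simply compares how often people who did not reoffend were nevertheless flagged as high risk in each group.

    # Minimal illustrative check: compare false positive rates across two groups.
    # All data below is invented for illustration only.
    from collections import defaultdict

    # Each record: (group, flagged_high_risk, actually_reoffended)
    records = [
        ("group_a", True, False), ("group_a", True, True),
        ("group_a", False, False), ("group_a", True, False),
        ("group_b", False, False), ("group_b", True, True),
        ("group_b", False, False), ("group_b", True, False),
    ]

    false_positives = defaultdict(int)  # flagged high risk but did not reoffend
    negatives = defaultdict(int)        # did not reoffend at all

    for group, flagged_high_risk, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if flagged_high_risk:
                false_positives[group] += 1

    for group in sorted(negatives):
        rate = false_positives[group] / negatives[group]
        print(f"{group}: false positive rate = {rate:.2f}")

In this toy data one group is wrongly flagged at roughly twice the rate of the other, which is exactly the shape of disparity described above.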

Many people now recognise that no AI should be built without first considering the associated ethics. But exactly how do we do that? There are hundreds of ‘AI ethics frameworks’ that purport to guide organisations and developers toward ethical AI. Most are vague and ill-defined, talking about ‘well-being’, ‘human agency’ and ‘transparency’ in terms that are difficult or even impossible to implement in practice. In most instances they become a tick-box exercise or marketing fodder, leaving AI models exposed to the very risks the frameworks were supposed to guard against.

What is needed, therefore, is an approach that is both robust and measurable, that looks at the strategic implications as well as the operational and technical aspects, and that provides clear guidance and actions to mitigate the risks of AI. But this is not an easy task, which is why Greenhouse AI has created an AI Ethics methodology that focuses on practical and meaningful outcomes.

OUR AI ETHICS METHODOLOGY

The approach works at three levels: an organisation-wide strategic perspective that focuses on developing an AI Ethics Code and an implementable Strategy; an operational perspective that assesses the data and AI ethics of individual AI projects; and a technical perspective that focuses on the algorithms themselves and how they are deployed.

Each perspective brings together the best available tools to deliver tangible, actionable outcomes. For example, at the heart of the operational perspective is the Open Data Institute’s Data Ethics Canvas, a proven tool for evaluating data-dependent projects (Greenhouse AI is a certified facilitator for the Canvas).

Assess your organisation’s Data Ethics Maturity using our 10-minute, 6-question survey. Just click here to be taken to the questionnaire. You will get an instant score, and we will send you more details about how you can improve your data ethics maturity.

OUR OWN ETHICAL VALUES

At Greenhouse AI we believe that AI should be used to enhance and augment human capabilities, not replace them. We believe that citizens should have control over their data and understand how AI affects them. We believe in democratised AI.

We never build AI that causes anyone harm. We don’t work for organisations whose business is making arms or tobacco, or polluting the planet.

We provide services pro bono where there is a strong social impact and need, such as our training work with St Mungo’s Recovery College. The Natural History Museum in London is a client, and we also provide it with pro bono strategic AI services.

We work closely with organisations such as MindweaverAI to recruit talented people who may otherwise struggle to find work.

We aim to be a carbon-neutral company. Our offices use renewable energy sources. We will always take public transport where that option exists.

AI is a hugely powerful tool, and we will not shirk our responsibilities.

Like to know more?