Powerful AI Needs Diverse Design


Artificial intelligence (AI) technologies, tools and models are redefining the way we live, work and play. Many professions, encompassing customer service, visual art, coding, content writing and even medicine, are already experiencing AI intervention in one way or another. But while a general overview can reveal AI’s incredible potential to uplift societies all across the globe, a closer look shows that it also has the power to divide them.

AI bias is a leading cause of controversy. The term describes AI’s tendency to reproduce only what it knows, has seen or has been taught – biased data and machine learning produce biased AI. When a model is fed information that doesn’t represent a full cross-section of society, its ability to serve diverse populations and solve complex problems diminishes, and the chance of harming marginalised communities grows.

However, inclusive design methodologies can play a crucial role in challenging AI bias and preventing real-life discrimination from being carried into the machine-learning sphere. Inclusive design highlights the importance of engaging our world’s diverse communities at every point of the design process to ensure AI can effectively serve the greatest number of people.

The design world has a phenomenal opportunity to lead AI in the name of progress as its technology evolves. Read on to discover how.

SPOT CHECK – Good Design Award Gold Winner – is a personal skin cancer detection system. Combining the power of advanced camera hardware and artificial intelligence, it can identify and track suspicious spots – catching melanoma before it’s too late. Image: Thomas Mackisack

The current state of AI

The AI industry currently draws from a narrow pool, with most decision-makers representing a small portion of our greater society. For example, a look at the top technology companies reveals that relatively few people represent the many gender identities, ethnicities, cultures and sexual orientations typical of our global population:

  • Only 2.5% of Google’s entire workforce and 4% of Facebook’s and Microsoft’s are Black
  • Just 22% of AI professionals are non-male identifying
  • In two thirds of the countries leading AI innovation, the AI skills penetration rate for women, non-binary people and transfolk is lower than it is for men
  • Almost 82% of the LGBTQIA+ community within the AI field consider a lack of role models in the industry to be a major career obstacle

Looking at the data, a disproportionate amount of power in the AI field currently sits in the hands of a largely white and male-identifying demographic. While it may not be a conscious decision, this naturally leads to AI priorities, data-collection practices and uses that misrepresent those who engage with the end product.

Timnit Gebru, co-founder of Black in AI, told the MIT Technology Review that a damaging bias has already taken hold in the emerging field.

“There is a bias to what kinds of problems we think are important, what kinds of research we think are important and where we think AI should go,” she said. “If we don’t have diversity in our set of researchers, we are not going to address problems that are faced by the majority of people in the world”.

When you look at recent AI failures worldwide – Amazon’s facial recognition technology failing to perceive darker skin tones, Microsoft’s Twitter-powered chatbot Tay evolving into a misanthropic beast, and Berlin Transport’s gender recognition technology neglecting the existence of non-binary and gender-diverse people – Gebru’s statement rings true.

PainChek – 2018 Good Design Award Winner – uses artificial intelligence to assess the micro-facial pain expressions of non-verbal individuals and score pain levels in real time. Image: PainChek

Challenging AI bias with design

With holistic design methodologies that are human-centric and considerate of all our world’s communities, the development, training and optimisation of AI models can readily challenge bias. Diverse collaboration and engagement are at the centre of it all, allowing AI to be exposed to stimuli that speak to the greatest number of individuals.

Here are some ways AI designers can prevent the perpetuation of societal inequalities and hierarchies into the AI sphere: 

  • Prioritise inclusive design

To best capture a wide range of perspectives and be readily able to identify potential biases and inequalities during the design process, the design team should include or, at least, prioritise the engagement of all possible stakeholders. This involves collaborating with people who can expand the understanding of the societal impact of the technology. Marginalised communities and underrepresented populations offer extremely valuable insights, as do other developers, ethicists and social scientists. 

  • Emphasise human-centred design

Human-centred design, which prioritises the needs of all end-users, naturally incorporates inclusive design processes. The approach focuses on the human aspect, with AI designers seeking to understand the potential impact of AI technologies on individuals and communities. This helps align AI with ethical values that promote fairness, diversity and equality, and supports the development of more user-friendly, personalised and transparent systems.

  • Adhere to, or develop, ethical frameworks

AI is an emerging industry, meaning there are few legally binding guidelines for designers and developers to adhere to. That’s why it’s crucial for innovators to adopt or develop ethical frameworks that address biases throughout the design journey. AI algorithms aligned with fair, transparent, accountable, diverse and secure visions are often the most successful and effective.

  • Collect diverse data 

AI designers must ensure that the data exposed to AI models truly reflects the world’s diverse populations. This requires data that incorporates the many gender identities, ethnicities, cultures and sexual orientations present in our greater society, circumventing biases that emerge from skewed or incomplete datasets and avoiding the reinforcement of existing societal inequalities.
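In practice, checking whether a dataset reflects diverse populations can start with a simple representation audit. The sketch below is a minimal, hypothetical illustration (the group labels, field name and 10% threshold are illustrative assumptions, not a reference to any specific tool or dataset):

```python
from collections import Counter

def audit_representation(records, group_key, min_share=0.10):
    """Report each demographic group's share of a dataset and flag
    groups whose share falls below a chosen threshold, so designers
    know where to source additional data."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    underrepresented = [g for g, s in shares.items() if s < min_share]
    return shares, underrepresented

# Hypothetical toy dataset of labelled training examples
data = (
    [{"gender": "woman"}] * 2
    + [{"gender": "man"}] * 17
    + [{"gender": "non-binary"}] * 1
)
shares, flagged = audit_representation(data, "gender", min_share=0.10)
# "non-binary" holds only 5% of the examples, so it gets flagged
```

A real audit would cover many attributes at once (and their intersections), but even this level of bookkeeping makes skew visible before a model is trained on it.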

  • Actively mitigate bias with constant evaluation

Pre-processing steps such as data augmentation, data balancing and algorithm debiasing are instrumental in mitigating AI bias from the beginning. Continuous evaluation is equally important and can challenge any unintended consequences that may emerge post-deployment. This underlines an ethics-powered and iterative design process that’s innovating for good.  
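As a loose sketch of two of the steps named above – data balancing before training, and a parity check during ongoing evaluation – the following is a minimal illustration under simplifying assumptions (the function names and the use of naive oversampling and demographic parity are choices made here for clarity, not a prescribed method):

```python
import random

def oversample_to_balance(records, group_key, seed=0):
    """Naive data balancing: duplicate examples from smaller groups
    until every group matches the size of the largest group."""
    rng = random.Random(seed)
    groups = {}
    for record in records:
        groups.setdefault(record[group_key], []).append(record)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

def demographic_parity_gap(outcomes_by_group):
    """Continuous-evaluation check: the largest difference in
    positive-outcome rate between any two groups (0.0 = parity)."""
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values())

# Balance a skewed toy dataset (3 examples of group "a", 1 of "b")
data = [{"group": "a"}] * 3 + [{"group": "b"}]
balanced = oversample_to_balance(data, "group")

# Monitor deployed outcomes: 1 = positive decision, 0 = negative
gap = demographic_parity_gap({"a": [1, 1, 0, 0], "b": [1, 0, 0, 0]})
```

Oversampling and parity gaps are only the simplest instances of the pre-processing and evaluation steps mentioned; the point is that both are measurable, repeatable checks that can run throughout an iterative design process.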


Be the change

Sociologist Ruha Benjamin, author of Race After Technology: Abolitionist Tools for the New Jim Code, says that the social hierarchies in our everyday lives have corresponding virtual ones, and AI, in its current rudimentary stage, is revealing them. However, she stresses that the diverse human experiences behind AI have the power to flip the script.

“We put so much investment in being saved by these objects we create – by these technologies,” says Benjamin. “But our real resource is ourselves, our communities, our relationships, our stories, our narratives”.

With AI technology still in its emergent stage, the design world and its methodologies have a phenomenal opportunity to ensure AI innovations, frameworks and outcomes are applicable to our world’s diverse communities. It begins at the decision-making level, thrives with collaboration and is optimised over time. The result is incredible technologies, tools and models that not only redefine, but uplift, the way we live, work and play.


Explore our incredible design world

As the 2023 Jurors come together to evaluate, crown and celebrate the brightest designs of this year’s hallmark Award season, why not turn back the clock and discover some innovations of the past? Search by category or have a blind deep dive – find inspiration either way.

DIVE INTO THE GOOD DESIGN INDEX HERE
