Enhancing AI: Why New Technology Must Include Diversity
Imagine if someone who was wrongfully convicted of a crime were asked to design the algorithms police use to identify suspects. Imagine if a young person, newly immigrated to the US, were asked to design the algorithm used for admissions at top US universities. Imagine if populations historically marginalized from the use of your products were asked to design those products. The chances that these algorithms would produce the same outputs they do today are slim to none. That, in many ways, is the opportunity AI and machine learning offer. But rather than building systems that embrace diversity of perspective and opinion, if we aren't vigilant we can end up with systems that entrench existing biases at best and actively create brand-new biases at worst.
The Impact of Bias in AI and Machine Learning
Our society has managed to progress steadily despite the myriad issues around diversity embedded in it, and some might argue that slow progress, coupled with the many benefits offered by AI, is good enough. I obviously disagree. While there are more than enough moral and ethical reasons for diversity, the most salient fact is that, at the end of the day, DEI doesn't just mean Diversity, Equity, and Inclusion; I believe it should also stand for Diversity Equals Income. Every time a company uses an algorithm that alienates a user, diminishes an outlier to fit a model, tamps down diversity in a hiring decision, or operates in a diversity-blind rather than a pro-diversity manner, dollars are left on the table, dollars that few businesses can afford to spare.
The first time the potentially negative interaction of technology and race dawned on me was back in the late '90s and early 2000s, when I – and many of my black friends – found ourselves unable to be properly identified by the facial recognition software used by Facebook. As we soon learned, who was in the room doing the programming mattered. The programmers, the majority of whom did not look like us, trained the machines on faces that looked like theirs and not ours, leaving those of us with darker complexions as mysteries unable to be identified by computerized eyes. One would think that as the years progressed, things would have gotten better, but a 2019 study conducted by the National Institute of Standards and Technology (NIST) found that some facial recognition algorithms had false positive rates up to 100 times higher for African Americans than for Caucasians.
Sadly, this bias isn’t just found in visual data. A 2019 study by the National Bureau of Economic Research found that algorithms used by credit scoring companies tended to underestimate the creditworthiness of African American and Hispanic borrowers. These algorithms routinely gave these borrowers lower credit scores and higher interest rates.
The Importance of Diversity in Business Success
What does this have to do with diversity? AI has also ushered us into a new age of HR. All across the world, companies are using AI to screen resumes for potential hires. The issue is that AI-powered hiring systems have been found to discriminate against women and minorities. A study by the University of Cambridge found that an AI-powered recruitment tool developed by Amazon consistently downgraded resumes that contained words such as "women," "female," and "gender," and as a result, candidates with female-sounding names were less likely to be selected for interviews.
There are two problems here, both interconnected, difficult to solve, and badly in need of attention. First, in all of these situations, the training set was flawed. If a system is trained on biased information, it will generate and propagate biased output. In the case of the recruitment tool, it had been trained on resumes submitted to the company over a 10-year period, most of which came from male applicants (who had been chosen, in part, because of the systemic bias of the humans in HR). Second, those in charge of these systems didn't value or consider diversity enough to actually encode it into the system.
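To make the first problem concrete, here is a minimal, purely illustrative sketch in Python. It is not the Amazon system and uses entirely synthetic data; the variable names, the 10%/90% split, and the model choice are all assumptions. The point is simply that a model trained on skewed historical decisions learns to penalize whatever features correlate with the disfavored group.

```python
# Illustrative only: synthetic data standing in for years of skewed hiring
# decisions. All names, numbers, and the model choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical history: ~10% of applicants were women, and past human
# decisions penalized them independently of skill.
is_female = rng.random(n) < 0.10
skill = rng.normal(0.0, 1.0, n)
hired = (skill - 0.8 * is_female + rng.normal(0.0, 1.0, n)) > 0.5

# The features include a proxy that correlates with gender
# (think "captain of the women's chess club" on a resume).
gender_proxy = is_female.astype(float) + rng.normal(0.0, 0.1, n)
X = np.column_stack([skill, gender_proxy])

model = LogisticRegression().fit(X, hired)
print("learned weight on the gender proxy:", model.coef_[0][1])
# The weight comes out clearly negative: the model has absorbed the old bias.
```

Nothing in this toy model is malicious; it simply reproduces, faithfully and at scale, the pattern it was shown.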
The Role of Diversity in AI Development
Like a child learning what is right and wrong, or how to behave, an AI needs to be taught. To teach it how to handle the myriad situations it may encounter, organizations must expose it to past examples of right and wrong (or success and failure). Those past examples can be riddled with bias against women, immigrants, people with physical disabilities or neurodivergence, and particular racial and ethnic groups. Because the complexity of an AI's computations makes it virtually a black box, the best way to check whether a system is biased is currently to test both its input and its output: test what is fed into the system for sampling bias against any specific characteristic, geography, or demographic marker, and test what comes out of the algorithm for unwanted correlations. The issue is that this extra step, while relatively simple, is time-consuming, and time is money. That said, a fair question to ask is: is this enough?
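Before answering that question, it helps to make the testing step concrete. Below is a minimal sketch in Python of what auditing the input and the output might look like, assuming a hypothetical table with a protected-attribute column ("group") and the system's binary decisions ("selected"). The column names are assumptions, and the 0.8 threshold is simply the common four-fifths benchmark, not something this article prescribes.

```python
# A sketch of a simple bias audit over a hypothetical decisions table.
import pandas as pd

def audit(df: pd.DataFrame, group_col: str = "group", outcome_col: str = "selected"):
    # 1) Input check: is any group badly under-represented in the data
    #    that was fed into the system?
    representation = df[group_col].value_counts(normalize=True)
    print("Share of each group in the data:\n", representation, "\n")

    # 2) Output check: the selection rate per group, and each group's rate as a
    #    fraction of the best-treated group's rate (the disparate-impact ratio;
    #    the common four-fifths rule flags values below 0.8).
    selection_rates = df.groupby(group_col)[outcome_col].mean()
    impact_ratio = selection_rates / selection_rates.max()
    print("Selection rate per group:\n", selection_rates, "\n")
    print("Disparate-impact ratio:\n", impact_ratio)

# Toy example: group B is both under-represented and selected at half A's rate.
audit(pd.DataFrame({
    "group":    ["A"] * 80 + ["B"] * 20,
    "selected": [1] * 40 + [0] * 40 + [1] * 5 + [0] * 15,
}))
```

Neither check requires opening the black box; both only look at what goes in and what comes out, which is exactly why they are cheap to run and expensive to skip.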
In our social discourse, it's generally understood that simply being colorblind (for example) is insufficient in light of the systemic structures at play in our society. To achieve some sort of equity, "color bravery," in other words a more proactive stance on addressing racial disparities, is necessary. So if, in other circles, simply being colorblind is insufficient, why, in this circle, would merely being unbiased be sufficient? As I've said time and again, Diversity, Equity, and Inclusion is important, and not just because it is morally or humanistically right, but because in business (as I also said earlier) Diversity Equals Income.
To give a few examples:
- A 2009 study by the Center for American Progress found that companies with more racial and gender diversity were more likely to have higher profits.
- A 2016 study by the Peterson Institute for International Economics found that companies with more female executives were more profitable.
- A 2018 study by the Boston Consulting Group found that companies with more racially and gender diverse boards of directors were more profitable.
Studies have shown that diverse teams bring a wider range of perspectives and experiences to the table, leading to more creative problem-solving and better decision-making. They can be more effective at understanding and serving a diverse customer base, and they can be more attractive to top talent, which in turn drives productivity and innovation. AI gives businesses the opportunity not only to ensure that their hiring practices aren't biased, but also to build staffs with the diversity needed to produce the best goods and services for their ever more diverse consumers.
The organizations that rely solely on AI to screen resumes, sift through school applications, or make decisions about credit are making a grave mistake. They are mistaking the hammer for the carpenter and the car for the driver. This is very similar to a problem I occasionally run into while leading product ideation workshops. Clients get so invested in the exact rules and procedures of an exercise I've devised to unlock their creativity that they'll literally get upset when I throw out the rules and start capturing the ideas that begin pouring out. They would often rather hold their tongue and risk losing an idea than sacrifice the well-laid-out rules of the exercise, at least until I remind them that the exercise is just a tool, and what really matters is the idea.
AI is merely a tool. Yes, it is a powerful tool, but it is still only a tool, one of many that we as businesspeople, members of society, and human beings have at our disposal. We need to remember that the goal remains creating a more diverse, equitable, and inclusive business environment, so that we can create better products, services, and experiences for our consumers. If we don't, we are leaving money on the table, leaving our consumers unsatisfied, leaving our companies without the best talent, and leaving ourselves exposed to the first competitor smart enough to capitalize on our blind spot.
The smartest companies I've worked with are the ones that define their goals first and find the tools to achieve those goals second, not the other way around. Marshall McLuhan is often credited with the observation that "we shape our tools, and thereafter our tools shape us." This is one situation where we cannot, and must not, allow our tools to shape us if we hope to move toward a more diverse future, let alone a more profitable one for our businesses.
Cerrone Lundy is a Director at Vivaldi. He works with organizations to better understand their client needs and create products, services, experiences and more.