People working in science get to decide what is important, and how it gets researched. These decisions determine what society views as truth in science. When there is a lack of diversity in the people deciding what counts as truth, we are only getting a partial scientific perspective. Having only a partial scientific perspective means we miss out on ideas and research that could enrich our understanding of how the world works.
STEM is currently failing
If I were to describe the STEM academic community as diverse, I would be saying that the community includes a relatively equal spread of individuals across different social identities. Your social identity consists of your gender, race/ethnicity, disability status, sexual orientation, cultural practices, and many other attributes (Gibbs, 2014).
Unfortunately, I cannot describe the STEM academic community as diverse.
There are 20,000 professors in the UK. Of these, only 0.6% are Black, and a mere 0.2% are Black women. This is unacceptable when 3.4% of the UK population is Black. Only 26% of STEM graduates are female; for Computer Science and Engineering & Technology qualifications specifically, that figure drops to 16%. And while 20.7% of the general population have a disability, only 12.5% of postgraduate STEM students in 2018/19 had a known disability.
The structures we currently work within limit minority students' access to opportunities, so minority people are distinctly under-represented in senior STEM positions.
Dr Jason Arday (2021) argues that the current Western conception of excellence in STEM has been formed by looking at the professors we have historically seen in Western academia, almost all of whom have been White men. Hence, the conception of academic excellence is seen through the lens of whiteness. By disrupting this narrow conception of excellence, all deserving academics can be given the space to achieve.
Why having a diverse representation of people in science is important
Those at the top of the STEM fields are those who hold the power to shape scientific knowledge. The research that gets published, and hence makes the most impact on society, is that which aligns with the ideologies of the powerful (Winner, 1980).
For example, the bulk of research in biomedical science is funded by pharmaceutical companies (Lexchin, 2012). Negative research outcomes can hurt these companies' medicine sales, and it has been shown that, to protect sales, companies encourage the publication of positive trials and the non-publication of negative ones. Hence, those in power in the pharmaceutical industry can shape the published research record in biomedical science.
Each of our views of the world is what Donna Haraway denotes a partial perspective. Each partial perspective gives a different understanding, and so all of them are valuable to the aim of establishing a richer, more universally useful account of the world (Haraway, 1988). By restricting these perspectives to those of a single demographic, we miss out on valuable input.
For example, Liedloff et al. (2013) integrated Indigenous ecological knowledge to improve water management systems. Understanding of the relationships between species and aquatic habitats was enriched by a group of Gooniyandi Aboriginal speakers' knowledge of the area. The study combined this Indigenous ecological knowledge with existing hydrogeological understanding to show that potential changes in river flow rates will affect the ability to catch high-value fish. Indigenous people's contributions to water management decisions can improve our understanding of the area.
From my perspective as a Computer Science student, I can see the effects of our failure to champion diverse voices in science.
A key aspect of digital ethics is responsible innovation. This means programmers should consider the impact of their work on the world. It is important for computer science professionals to identify potential impacts on all communities.
BCS found that in 2017, just ‘17% of IT specialists were female, 8% were disabled, … and 17% were from ethnic minorities’. Often, a project has zero representatives from a given community, and the chances of unintended consequences for that community then skyrocket.
This problem is evident in the algorithm used by the Home Office to assess UK visa applicants (McDonald, 2020). The algorithm was trained on data from historical visa application assessments: it learned whom to deem worthy of a visa by looking at previous decisions made by the Home Office. Because those previous applications were assessed within a system of racism and bias, the algorithm perpetuates that racism and bias.
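The mechanism is easy to demonstrate. The sketch below is a hypothetical illustration with entirely synthetic data (it is not the Home Office's actual system): a naive model "trained" on historically biased decisions simply learns each group's past approval rate and so reproduces the bias it was trained on.

```python
# Hypothetical illustration only: synthetic decisions, invented groups "A"
# and "B". A naive model that learns from biased historical decisions
# carries that bias forward into its own predictions.
from collections import defaultdict

# Synthetic historical decisions, biased against group "B".
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def train(records):
    """'Learn' an approval rate per group from past decisions."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def predict(model, group):
    """Approve when the group's historical approval rate is at least 50%."""
    return model[group] >= 0.5

model = train(history)
print(predict(model, "A"))  # True: historical favour toward A is reproduced
print(predict(model, "B"))  # False: historical bias against B is reproduced
```

A real system would use far richer features, but the failure mode is the same: however sophisticated the model, training labels drawn from biased decisions make past bias part of the objective the model optimises.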
There is a crucial need to disrupt patterns of intolerance and inaccessibility for minority groups in science. We have seen that the current set of top academics in the UK is not diverse enough. To increase diversity in STEM would be to allow all excellent academics to be treated as such, while improving our understanding of the world by valuing and incorporating different viewpoints.