Excellent list of resources (particularly if you’re working on Athena Swan or similar). From HASTAC.

In 1961, Phyllis Richman applied to graduate school at Harvard and received this letter of rejection.
The often unconscious and unintentional biases against women, including in academe, have been well documented in the autobiographical writings of authors such as Audre Lorde, Adrienne Rich, Patricia Williams, and bell hooks. But is the experience they document merely “subjective”? Several recent social science research studies, using strictly controlled methodologies, suggest that these first-person accounts of discrimination are representative, not simply anecdotal.

The studies aggregated and summarized below offer important policy implications for the traditional ways that we count and quantify the processes leading to hiring, promotion, and tenure. You cannot simply count “outputs” in making an evaluation of someone’s worth and reputation if there is a “biased filter” at the first stage of evaluation, prejudicing judgment at the outset.

These studies should be required reading for any administrators and faculty committees charged with decision-making. They should be required reading for award committees and Human Resources departments, for policy makers and accreditation agencies. As much as possible, these studies supplement anecdotal accounts of gender discrimination with empirical evidence of a gender bias that is unconscious and pervasive.

Call for Additional Citations

We include here a round-up of several of these recent studies. There is also an open, public Google Doc to which we invite others to add relevant studies along with responsible, careful, fact-checked annotations.

Talking Points for Further Discussion and Reflection

Before listing the studies, we would like to offer a number of talking points about what they, in aggregate, suggest.

  • In these studies, actors believe, often quite earnestly, that they are making choices or judgments based entirely on “quality” or “excellence” or “expertise.” However, several of the studies reveal that changing only the gender identification of the person being judged radically and consistently alters the way others evaluate the quality of the work. Work by “men”—as students, as colleagues, as authors, as experts—is consistently judged to be superior to that by “women,” even when the only difference is the author’s gender-specific name.
  • Women are as likely as men to make biased judgments that favor men.
  • Culture and representations play an important role in perpetuating gender bias within and beyond academia. Cultural factors are not just of concern to humanists but have a direct impact on perception—they are variables that social scientists and natural scientists must attend to. A study that appeared in the Proceedings of the National Academy of Sciences acknowledged that gender biases stem “from repeated exposure to pervasive cultural stereotypes that portray women as less competent but simultaneously emphasize their warmth and likeability compared with men.”
  • The implications of these studies are relevant to both broad and discipline-specific trends. This is important given how different disciplines often have seemingly unique protocols for hiring, tenure, and promotion. Gender bias seems pervasive, even when the forms, methods, and metrics vary by discipline.
  • Almost all of the articles call for conscious, structured, institutional efforts to counteract unconscious and unintentional gender biases. (Left to our own devices, we humans seem incapable of judging without prejudice; corrections need to be built into our systems.)
  • Humanists—whether in women and gender studies, science studies, or education studies—need to attend to these quantitative studies, help students learn to interpret and deploy them, and perhaps even suggest further areas for empirical study. They provide convincing evidence to support many theoretical arguments (and vice versa).
  • Studies of the hard data of gender bias—in an era of hard data—should be required reading for all administrators and all faculty who are called upon to make decisions about hiring, tenure, and promotion based on purely quantitative measures such as “productivity” or “citation counts.” An adage of data scientists is “garbage in, garbage out”: if the sample or the data is corrupt or biased when it is first entered, then any conclusions based on mining or crunching that data must be regarded with keen skepticism. You cannot simply count the end product (such as the number of articles accepted, reviewed, awarded prizes, or cited) without understanding the implicit bias that pervades the original selection process and all the subsequent choices on the way to such rewards.

Source, list of studies, and lots more information: Gender Bias in Academe: An Annotated Bibliography of Important Recent Studies (HASTAC)