A company called Genderify has what it says is “an AI-powered tool for identifying the gender of your customers”. This is an example of something that isn’t worth doing (asking is easy and reliable, and people will be upset when you get it wrong) and is also very difficult.

After seeing some examples on Twitter, I decided to try it on some senior members of the Stats department (whose gender identity I’m reasonably confident of).

“Thomas Lumley” is 63.90% likely to be male and 36.10% likely to be female, and you have to like the four-digit precision. But “Dr Thomas Lumley” is 89.40% likely to be male, and “prof thomas lumley” gets up to 94.60%!

“Ilze Ziedins” is 85.20% likely to be male, which will surprise her. “Dr Ilze Ziedins” gets to 96.00%.

“James Curran” is 99.60% likely to be male; adding “Dr” or “Prof” gets him up to 99.90%.

“Rachel Fewster” is 72.00% likely to be male; adding her professorial title puts that up to 95.40%.

“Renate Meyer” is 62.30% likely to be male; her doctorate moves that up to 88.20%, and her promotion to professor makes it 94.00%.

Note that none of these are classically gender-neutral or gender-ambiguous names: no Hadley or Hilary or Cameron. The overall level of accuracy is pretty terrible to start with, but the response to adding qualifications is bizarre. If that wasn’t in the basic pre-release testing, then what was?

Even better (worse): it’s not just that adding “Dr” or “Prof” makes it think you more likely mean a man; adding “Dame” also does.
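
How could a model end up behaving like this? Nothing is known about what Genderify actually does, but one plausible guess is a naive bag-of-words classifier trained on text where titled names disproportionately belong to men, so that “dr” and “prof” become strong ‘male’ features in their own right. Here’s a minimal Python sketch of that failure mode; the toy training set, the `p_male` function, and all the numbers are made up for illustration:

```python
from collections import Counter

# Made-up toy training data. In many real corpora (news text, author
# lists), names carrying "dr"/"prof" skew male, so those title tokens
# become strong "male" features for a bag-of-words model.
train = [
    ("dr john smith", "male"),
    ("prof alan jones", "male"),
    ("dr james brown", "male"),
    ("mary smith", "female"),
    ("dr susan lee", "female"),
    ("anna jones", "female"),
]

counts = {"male": Counter(), "female": Counter()}
totals = Counter()
for name, label in train:
    for tok in name.split():
        counts[label][tok] += 1
        totals[label] += 1

def p_male(name, alpha=1.0):
    """Naive-Bayes P(male | tokens) with Laplace smoothing and a 50:50 prior."""
    vocab = set(counts["male"]) | set(counts["female"])
    score = {}
    for label in ("male", "female"):
        p = 0.5  # uniform prior
        for tok in name.lower().split():
            p *= (counts[label][tok] + alpha) / (totals[label] + alpha * len(vocab))
        score[label] = p
    return score["male"] / (score["male"] + score["female"])

print(p_male("renate meyer"))     # ~0.45: both tokens unseen, so close to the prior
print(p_male("dr renate meyer"))  # ~0.53: the "dr" token alone shifts it male-ward
```

In the toy model the title token carries ‘evidence’ about gender only because of who historically held titles, so adding “Dr” pushes any name male-ward regardless of the rest of it, which is exactly the pattern above. Whether that’s what the real tool is doing is anyone’s guess.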

Update: on Twitter, (Dr, Prof) Casey Fiesler raised the possibility that Genderify are just trolling, which I must say is looking quite plausible.