
Work Header

Language:
English
Series:
Part 89 of Fandom Stats, Part 2 of Gender Stats, Part 15 of Shipping Stats
Stats:
Published:
2018-04-01
Completed:
2018-05-29
Words:
6,140
Chapters:
7/7
Comments:
117
Kudos:
397
Bookmarks:
82
Hits:
5,495

[Fandom stats] Gender representation in movies vs. movie fanworks

Summary:

There are a lot of discussions in fandom about gender representation in fanworks, and whether fandom is shortchanging female characters. Over the last several years, I’ve seen various debates and wondered about the underlying numbers. So I went and found some… and then got slightly obsessed analyzing them.

These analyses are not going to be able to address some kinds of questions, many of which can’t be answered with numbers. But I hope to answer some questions, and add more data and new nuances to discussions.

Notes:

I have ~80 slides that I'll be sharing here, in hopefully digestible chunks. But if you want to see the whole presentation at once, or read the text as actual text instead of in image format, you can do so now (subject to change, especially speaker notes).

A quick note on feedback, because I've very occasionally gotten yelled at in the past when sharing my analyses: I am not a professional, just a fellow fan and hobbyist. I have probably made some mistakes, and I appreciate constructive criticism, but please be nice in pointing out my errors.

Chapter 1: Data sets and initial comparisons

Notes:

As well as viewing this in Google Slides (see intro notes), you can find this chapter's slides (without speaker notes) on imgur.

(See the end of the chapter for more notes.)

Chapter Text

The graphs above are reprinted from The Geena Davis Institute (PDF), except that I normalized the “All movies” graphs to be on the same scale as the other graphs by excluding screen time with no faces.

Additional data analysis that I found, not represented on the slide above: “One disquieting finding from my research is that this year’s lead actors average 85 minutes on screen, but lead actresses average only 57 minutes. (When you add in supporting categories, all competing actors averaged 59 minutes, while all competing actresses averaged 42 minutes.) Last year’s results were even more imbalanced: nominated male stars averaged 100 minutes on screen to the lead actresses’ 49 minutes.” -- NY Times

Sources:
Bechdeltest.com
The Guardian -- note that the data/analysis they’re referencing has since gone offline, so I can’t compare the methodology of the two sites.

Advantages of this data set: unlike the GDI data set, it is not limited to the top 100 grossing movies per year; it covers a much bigger range of years; and unlike Bechdel test data, it includes detailed dialogue breakdowns.

Note: the authors recorded the actor gender, as listed on IMDB -- I’m going to use the simplifying assumption that character gender matches actor gender -- this may add some noise.

The fandom data used in this chapter was collected about a year ago, in ~March 2017.

This method is messy for series especially. Sometimes it grabs the wrong specific movie, or a weird umbrella tag… and fans are also pretty inconsistent about how they tag fanworks for movies that are part of series. But I decided to include all data from series anyway, especially because so much of fandom is series-based.

I labeled each character by the gender listed in the movie dataset. This has limitations; I'll discuss more later.

This diagram is approximately to scale, which I’m a bit proud of, as it involved calculating the radius and drawing the circles myself, since I couldn’t find a good tool to make Venn diagrams to scale. If anyone knows of such a tool, I’d love to know about it. :)

As an example of the difference you get from the two different possible methods of calculating gender ratios -- imagine we only have three movies in our data set:

 

Mean Girls - 7 women, 3 men (70% women)
Frozen - 2 women, 6 men (25% women)
Dead Poets Society - 3 women, 14 men (17.6% women)

If we look at the gender ratio across all the movies at once, there are 12 women, 23 men (34.3% women). We know nothing about how much difference there is between the ratios for individual movies. And Dead Poets Society has more influence on the answer than the other two movies do, because it has a larger ensemble cast.
If we look at the average gender ratio per movie, we get (70% + 25% + 17.6%)/3 = 37.5% women. Because we have multiple data points, we can calculate how much variation there is across the movies. And now Dead Poets Society carries exactly the same weight as the other movies.

There isn’t one right method between these two, but I used the second.
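
To make the arithmetic concrete, here is a minimal Python sketch of the two methods, using the three toy movies above (illustrative only, not the actual analysis code):

```python
# Toy example: (women, men) counts for the three movies above.
counts = {
    "Mean Girls": (7, 3),
    "Frozen": (2, 6),
    "Dead Poets Society": (3, 14),
}

# Method 1: pool all characters across movies, then take one ratio.
total_women = sum(w for w, m in counts.values())
total_men = sum(m for w, m in counts.values())
print(f"Pooled: {total_women / (total_women + total_men):.1%}")  # ~34.3% women

# Method 2: take each movie's ratio, then average the ratios,
# so every movie carries equal weight regardless of cast size.
per_movie = [w / (w + m) for w, m in counts.values()]
print(f"Per-movie average: {sum(per_movie) / len(per_movie):.1%}")  # ~37.5% women
```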

Error bars here and throughout represent standard error.
Two-tailed Wilcoxon rank-sum test (unpaired samples): W = 364950, p-value = 0.09538

I can hear people freaking out! :) And I’m going to get to possible explanations in a minute. But first the mathy details, for those who care.

One-tailed Wilcoxon rank-sum tests:
All vs. Popular: W = 225680, p-value = 7.371e-10;
AO3 presence vs. Popular: W = 35990, p-value = 0.0007539
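
If you want to run this kind of comparison yourself, here is a minimal sketch using scipy (the Wilcoxon rank-sum test is the same test as the Mann-Whitney U test). The arrays are placeholders rather than the real per-movie percentages, so the numbers will not match the results above:

```python
# Illustrative sketch only: placeholder arrays stand in for the real
# per-movie percentages of characters who are female in each group.
import numpy as np
from scipy.stats import mannwhitneyu  # Wilcoxon rank-sum / Mann-Whitney U

rng = np.random.default_rng(0)
all_movies = rng.uniform(0, 100, size=200)      # placeholder data
popular_on_ao3 = rng.uniform(0, 100, size=150)  # placeholder data

# Two-sided version (as in the comparison above).
print(mannwhitneyu(all_movies, popular_on_ao3, alternative="two-sided"))

# One-sided version, testing whether the first group tends to be larger.
print(mannwhitneyu(all_movies, popular_on_ao3, alternative="greater"))
```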

Method note: As before, I calculated the proportion of characters who are female for each movie and took the average of those percentages. For “AO3 movie fandoms,” that means I calculated the percent of characters within each movie who are female, out of all the characters in that movie that appear on AO3 -- and then took the average across all movies. For “Popular on AO3,” that means I found the percent of popular characters in each movie who are female, and again took the average across all movies.

Chapter 2 and beyond will explore all these hypotheses in detail.

Notes:

Raw data is available here; feel free to use.

I will share more complete acknowledgments at the end of this, but huge shoutouts to my primary betas and stats consultants: fffinnagain, Lisa E., Morgan A. and Amy P. And special thanks to dendritic-trees for inspiring these analyses with great questions and sending me the movie data, as well as for the feedback along the way :)

Chapter 2: Do fanworks pay less attention to women than movies do?

Summary:

In which I test the following hypotheses:

H1A: fandom pays less attention to female characters overall (proportional to male characters) than movies do.
H1B: individual female characters tend to get less attention in movies on average than male characters do, and so are less likely to be popular in fandom.

Notes:

You can also see these images on imgur, or browse them in Google Slides (especially if you want text instead of images).

(See the end of the chapter for more notes.)

Chapter Text

It is admittedly a bit odd to compare movie dialogue and number of fandom appearances, though I didn’t have better options. Among the limitations of this method:

  • One is measuring a single work: a movie script, with an objective number of words spoken by each character. The other is measuring a corpus of fanworks, which may vary greatly in length, and may be written and/or tagged very differently from one another. I’ve attempted to come up with a heuristic to approximate attention in each case, but it truly is a heuristic.
  • More than one character can be tagged in a fanwork, whereas only one character speaks each word of dialogue in the script. So a character is far more likely to be tagged in 100% of fanworks than to speak 100% of the dialogue.
  • Many characters aren’t tagged in stories they appear in; especially for more minor characters, the fandom attention metric may significantly underestimate attention.
  • It might make more sense to analyze something within the text of the fanworks, like how many times each character’s name was mentioned, or how many words they spoke in dialogue within the fanfic. (I didn’t do that because I couldn’t easily do full-text analysis, and only had easy access to number of times tags were used.)

I performed a Pearson correlation to get the above numbers. I repeated the analysis with just men (R^2 = 0.295) and with just women (R^2 = 0.331). There is no significant difference in predictiveness for men vs. women -- i.e., the tightness of fit of the points to the line (Fisher r-to-z transformation, one-tailed: p = 0.2483). There is also no significant difference in the slope of the line for men vs. women (p = 0.9591, based on a significance test run here).
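
For the curious, here is a rough sketch of how a Fisher r-to-z comparison of two correlations can be computed in Python. The arrays are made-up placeholders, not the real movie-vs-fandom attention values, and this is not my actual analysis code:

```python
# Illustrative sketch: compare the strength of two Pearson correlations
# with a Fisher r-to-z test. The arrays are placeholders, not real data.
import numpy as np
from scipy.stats import pearsonr, norm

def fisher_rz_one_tailed(r1, n1, r2, n2):
    """One-tailed p-value for the hypothesis that r1 > r2."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return norm.sf((z1 - z2) / se)

rng = np.random.default_rng(1)
x_men, y_men = rng.random(300), rng.random(300)      # placeholder attention values
x_women, y_women = rng.random(150), rng.random(150)  # placeholder attention values

r_men, _ = pearsonr(x_men, y_men)
r_women, _ = pearsonr(x_women, y_women)
print(r_men**2, r_women**2)  # R^2 for each group
print(fisher_rz_one_tailed(r_men, len(x_men), r_women, len(x_women)))
```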

Note that the outliers at the top of the graph demonstrate one of the limitations I mentioned before of my measures of “attention” -- no characters speak 100% of the dialogue in a movie, but some characters get tagged in 100% of fanworks for a given movie (usually in small fandoms). This means that many characters get “more attention” in fanworks vs. movies, but that’s at least partly because the two metrics aren’t perfectly comparable.

Also note that because we’re ruling out all fandoms that are small, and the script missed a bunch of characters with only a few appearances, this approximation of fandom attention is biased to be higher than the true number of times each character is tagged.

A two-way ANOVA test (on just AO3 presence & Popular on AO3, since we don’t have reliable fandom data for All Characters) yields the following:
Significant effect of popularity (Popular and AO3 presence) on % attention -- p = 0.0364
No significant effect of media type (Movies vs. Fanworks) on % attention -- p = 0.6034
No significant interaction between media type and popularity -- p = 0.7642
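
For reference, a two-way ANOVA like this can be set up in Python with statsmodels roughly as follows. This is a sketch, not my actual analysis; the data frame and column names are invented:

```python
# Sketch of a two-way ANOVA on % attention, with factors for popularity
# tier and media type. Data and column names here are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "attention": rng.uniform(0, 100, size=120),
    "popularity": rng.choice(["AO3 presence", "Popular on AO3"], size=120),
    "media": rng.choice(["Movies", "Fanworks"], size=120),
})

model = smf.ols("attention ~ C(popularity) * C(media)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```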

This analysis included 97K fanworks on AO3 and 58K fanworks on FFN.

Note: There were also a lot of original characters (OCs) tagged in these fandoms on both FFN and AO3; those are not included here. But I’ll address OCs later.

Here are some examples of individual characters (not randomly chosen; hopefully mostly well-known to fandom) and the percent of their movie’s dialogue that they speak:

Aron Ralston (M), 127 Hours: 84%
Terry Dolittle (F), Jumpin’ Jack Flash: 70%
Ash (M), Army of Darkness: 70%
Olive (F), Easy A: 66%
Phil (M), Groundhog Day: 58%
Annie Wilkes (F), Misery: 55%
Jesus (M), The Last Temptation of Christ: 55%
Macbeth (M), Macbeth (2006): 51%
Celine (F), Before Sunrise: 50%
--------------- Only 0.5% of F characters and 1.1% of M characters make it past this threshold ---------------

Cobb (M), Inception: 44%
Elle Woods (F), Legally Blonde: 43%
Jack Twist (M), Brokeback Mountain: 42%
Vivian Ward (F), Pretty Woman: 41%
Indiana Jones (M), Raiders of the Lost Ark: 40%
Ferris Bueller (M), Ferris Bueller’s Day Off: 40%
--------------- Only 1.7% of F characters and 3.4% of M characters make it past this threshold ---------------

Bella Swan (F), The Twilight Saga: New Moon: 36%
Sally Albright (F), When Harry Met Sally: 36%
Harry Burns (M), When Harry Met Sally: 35%
Tyler Durden (M), Fight Club: 31%
James T. Kirk (M), Star Trek IV: The Voyage Home: 30%
Aurora Greenway (F), Terms of Endearment: 30%
--------------- Only 4.7% of F characters and 7.2% of M characters make it past this threshold ---------------

Magneto (M), X-Men (2000): 28%
Amélie Poulain (F), Amélie: 27%
Peter Parker (M), Spider-Man 2: 26%
Ripley (F), Alien: 22%
Mark Watney (M), The Martian: 20%
Cher (F), Clueless: 20%
Buzz Lightyear (M), Toy Story: 20%
--------------- Only 11% of F characters and 13% of M characters make it past this threshold ---------------

Mrs. Robinson (F), The Graduate: 18%
Cameron Frye (M), Ferris Bueller's Day Off: 18%
Princess Fiona (F), Shrek: 16%
Emily (F), The Devil Wears Prada: 16%
Batman (M), Batman & Robin: 14%
Frodo (M), LOTR: The Two Towers: 10%
Blade (M), Blade: Trinity: 10%
Storm (F), X-Men 2: 10%
--------------- About 26% of female and male characters make it past this threshold ---------------

And you don’t need to pass that arbitrary 10% threshold to be a major character. For instance:

Frodo (M), LOTR: The Fellowship of the Ring: 9%
Gretchen Wieners (F), Mean Girls: 8%
Darth Vader (M), Star Wars: Episode VI - Return of the Jedi: 7%
River Tam (F), Serenity: 6%
Leeloo (F), The Fifth Element: 3%
Terminator (M), The Terminator: 2%

Notes:

I'll finish up with H1C and H1D, and talk about what we've learned for Hypothesis 1 overall, in the next chapter. I was going to include them here, but I'm double checking my stats -- might have found some errors! :O The next two hypotheses were among the most complicated for me, stats-wise (but shouldn't be hard to understand -- I just need to check that I was doing my significance tests right). Besides, this chapter is getting long already!

Thanks for the awesome response so far. :)

Chapter 3: Changes to individual characters

Summary:

In which I investigate the following hypotheses:
H1C: fandom inflates the role of male characters more often than female characters.
H1D: fandom reduces the role of major female characters more often than major male characters.

Notes:

This chapter has some more complicated stats than most (and ugh, I've redone & revisualized the analyses a whole bunch of times because of that -- I'm SO EXCITED to be done with this chapter and moving on ;P ). Sorry if it's overwhelming to some readers; I've done my best to give lots of examples of characters/movies to help ground the data, even if not every graph is intuitive. And I've summarized everything we've learned so far at the end.

Chapter Text

Percent change metric is ((fandom % attention) - (movie % attention))/(movie % attention).
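
Spelled out as a small helper function (hypothetical, just to pin down the arithmetic; inputs are percentages):

```python
def percent_change(movie_attention: float, fandom_attention: float) -> float:
    """Relative change in % attention from movie to fandom.

    Inputs are percentages (e.g. 40.0 means 40%). A character who speaks
    40% of the dialogue but is tagged in 50% of fanworks gets +25%;
    a character absent from AO3 (0% fandom attention) gets -100%.
    """
    return (fandom_attention - movie_attention) / movie_attention * 100.0

print(percent_change(40.0, 50.0))  # 25.0
print(percent_change(40.0, 0.0))   # -100.0
```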

More characters show an increase than decrease from movies to fandom, probably for a couple reasons -- one, the general discrepancy in these two attention metrics, as discussed previously; and two, because I selected characters that have a lot of fanworks.

  • Kurtosis (the heaviness of the distribution’s tails) = 15.95 F vs. 29.21 M
  • Skewness (the asymmetry of the distribution) = 3.26 F, 4.09 M

Because the data points are not normally distributed, I can’t use a t-test and ANOVA to compare mean and variance. Instead, I performed non-parametric versions of those tests:

  • There is no significant difference between the location of these distributions (one-tailed Wilcoxon rank sum test: W = 30886, p-value = 0.4215).
  • There is no significant difference in spread (distance from the median): (Brown-Forsythe test: F=2.0864, p-value = 0.1492).

Given that neither of these tests shows a difference, there’s no evidence so far that men are more likely to have inflated roles.
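
As a reference for how such checks can be run (an illustrative sketch with placeholder arrays, not my actual data or code): scipy exposes the shape statistics directly, and its levene function with center="median" is the Brown-Forsythe test.

```python
# Illustrative sketch of the shape statistics and non-parametric tests
# used above; the arrays are placeholders for per-character % change values.
import numpy as np
from scipy.stats import skew, kurtosis, mannwhitneyu, levene

rng = np.random.default_rng(3)
pct_change_f = rng.exponential(50, size=120) - 40  # placeholder, long right tail
pct_change_m = rng.exponential(60, size=180) - 40  # placeholder, long right tail

print(skew(pct_change_f), skew(pct_change_m))          # asymmetry
print(kurtosis(pct_change_f), kurtosis(pct_change_m))  # (excess) tail weight

# One-tailed Wilcoxon rank-sum: do men tend to show larger changes?
print(mannwhitneyu(pct_change_m, pct_change_f, alternative="greater"))

# Brown-Forsythe test for equal spread: Levene's test centered on the median.
print(levene(pct_change_f, pct_change_m, center="median"))
```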

When we look at individual characters, the biggest increases in attention (i.e., the most inflated roles) are indeed for male characters -- but interestingly, so are the biggest decreases. This may indicate that our significance analysis was underpowered due to a small data set -- with more data, it’s possible that we could see a significant difference between men and women (we might see the Brown-Forsythe test showing greater spread for men). But that’s very speculative; it could also be that the distributions would remain very similar overall with more data.

Some of these make a bunch of sense to me -- e.g., River Tam and Leeloo are pretty central to the plots of their movies, without saying too much. (Which might not be the only reason they’re more popular in fandom! I think they both have qualities that might inspire lots of fanworks. But part of it is also a reflection of methodology, and using dialogue as a measure instead of something like screentime, billing, or importance as rated by viewers.) Others have a presence that's probably inflated partly by being in a popular ship (more on shipping later).

Methodology notes:

  • A lot of the series listed only had one of the scripts in the movies dataset, whereas sometimes the best matching fandom I found was for the whole series; so the change shown here may not be representative of total percent dialogue across series.
  • In parentheses, I show (% dialogue → % fanworks). I’m inconsistent with my rounding/significant digits -- sorry! My goal is to give readers a notion of the magnitude of attention from both movies and fanworks, rather than to be precise. However, there were a few places where I found it massively confusing not to add some extra digits (because otherwise the ranking looked very wrong), so I did so.

There is no significant difference in the cumulative density functions (one-tailed Kolmogorov-Smirnov test: D = 0.061451, p-value = 0.4371).
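
A two-sample Kolmogorov-Smirnov test compares the full empirical CDFs rather than just their location or spread. A minimal sketch (placeholder arrays again; scipy's `alternative` argument also supports one-sided variants):

```python
# Illustrative sketch of a two-sample KS test on % change distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
pct_change_f = rng.normal(0, 40, size=120)  # placeholder values
pct_change_m = rng.normal(0, 45, size=180)  # placeholder values

print(ks_2samp(pct_change_f, pct_change_m, alternative="two-sided"))
```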

Percent change metric is ((fandom % attention) - (movie % attention))/(movie % attention). Note: fandom % attention is recorded as 0% even when there are 0 works in the fandom (strictly speaking, a character’s share of 0 works is 0/0, so recording it as 0% is a choice; it yields a -100% change for those characters).

Note: This analysis includes lots of characters who have fewer than 20 fanworks on AO3 -- which was necessary for this analysis, but which includes characters who appear in 1 out of 1 fanworks devoted to a given movie. That adds a lot of wacky “increases” in attention, since such a character would get 100% attention from fandom using these metrics. So take the number of outliers in the positive direction with a grain of salt. Additionally, a bunch of the characters that allegedly have 0 fanworks actually are on AO3, but my script didn’t find them (I’ll return later to the errors made by my script). So you should take the number of -100% cases with some grains of salt as well.

Kurtosis (the heaviness of the distribution’s tails) = 13.91 F vs. 19.06 M
Skewness (the asymmetry of the distribution) = 3.54 F, 4.00 M

  • There is a highly significant difference between the “location” of these distributions (one-tailed Wilcoxon rank sum test: W = 3910800, p-value = 0.004313). The difference is in the opposite direction of what H1D predicts; male characters tend to have the more reduced roles.
  • The difference in “variance” (or, more accurately, distance from the median) is not significant: (Brown-Forsythe test: F=0.092066, p-value = 0.7616).

Formatting is inconsistent here because in the negative case, all characters have a -100% change, so that’s not very interesting; I instead bolded the amount of dialogue they had in the movie, as that’s the most useful to compare.

As in the analysis of characters popular in fandom, men have greater increases and greater decreases than women. In this case, the difference between genders in the positive direction is less pronounced than in Q5, and the difference in the negative direction is more pronounced.

We can also see some interesting genre differences between which characters/movies fandom tends to focus on or ignore -- I’ll be coming back to genre shortly.

*Here I’ve only included characters who appear in at least 20 AO3 fanworks in the positive direction, because otherwise the numbers are too untrustworthy.

For the negative case, most characters are tied at -100% (though I found a ton of characters that my scripts erroneously missed who were on AO3 -- see more later about script errors). So here I selected some of the ones who are missing from AO3 with the most dialogue. All of them are in movies with 0 fanworks on AO3. I removed a few movies that were really old or really obscure.

There is a significant difference in the cumulative density functions with a one-tailed Kolmogorov-Smirnov test: D = 0.034737, p-value = 0.04752.

This aligns with the significant result from the Wilcoxon test two slides ago.

Okay, so summarizing what we learned in Chapters 2-3:

Next up: what kinds of movies does fandom prefer? And how does that relate to gender?

Chapter 4: How do genre and box office gross relate to representation?

Summary:

In which I investigate the following hypotheses:

H2A: AO3 fandom focuses more on genres of movie that have worse gender representation than average.
H2B: AO3 fandom focuses more on high-grossing movies, and those have worse gender representation than average.

Notes:

Oops, that was an unexpected delay. :)

Just like Ch 3, I double checked a couple assumptions in the process of trying to post this chapter... and realized the story was more complicated than I'd assumed. I spent a while digging through more data to try to understand better, and now I'm sharing a bunch of that work. But I didn't do error bars and significance tests on all these analyses, because I am super ready to get what I already have out there and be done with this project that accidentally ate my spare time for so long. :) Most of these data sets are probably too small to do more than give an initial tentative idea of what might be going on; more data is needed to confirm my findings.

This chapter is pretty much completely un-beta'd in its latest form. Constructive feedback welcome, especially if stuff is unclear!

You can see the high res images here or the original slides here.

Chapter Text

Genre classifications were made based on IMDB & Wikipedia & googling for “[movie] film genre”, plus some subjective judgment. Note that these genres are not mutually exclusive -- most movies have multiple genres.

There are only 42 SF/F movies in the 200 random movies that were sampled, which we compare against 178 total SF/F movies present on AO3 (out of 2000 overall), and 96 popular on AO3.

There are 50 action movies in the 200 random movie sample, and 141 movies present on AO3 (out of 2000 movies) of which 88 are popular on AO3.

The percentages shown are all approximate percentages of AO3 movie fandom, calculated from the 348 films that had at least one character with a presence on AO3.

Why do we see such big differences here for some genres (like Romance)? It’s probably in large part that there just isn’t very much data here. Number of movies included per genre (only 5 most popular fandom genres shown):

            Random 200 movies    Present on AO3    Popular on AO3
SF/F        42                   178               96
Action      50                   141               88
Thriller    53                   105               48
Drama       100                  119               46
Comedy      73                   94                45

So, e.g., there are only about half as many comedy movies present or popular on AO3 as SF/F movies, and substantially fewer romance movies.

Another possible problem here relates to my methodology. SF/F and action movies have much larger casts, on average. If some characters don’t get popular on AO3, or my script fails to find some of them, the overall % attention to women often doesn’t vary all that drastically. But there are a lot more instances of thrillers, dramas, and comedies where only one or two characters get popular on AO3, or my script only finds one or two of them. Which means that in those genres, I more often end up estimating that 100% -- or 0% -- of the fandom attention for a movie goes to women, based solely on a single character. (Which is kind of weird, and a downside to doing a per-movie average instead of an across-all-movies average. There is no perfect methodology, but this method is probably suboptimal in this particular case. However, since I’m trying to compare to an analysis I did earlier, I have to use the same methods.)

Anyway, that leads to some wacky statistical swings in this small data set, which seems to be contributing to the noisiness here. Really, we need a lot more data before we can do reliable genre breakdowns.

Source

Chapter 5: How does shipping relate to gender representation?

Summary:

In which I investigate how gender representation differs across shipping categories (e.g., M/M, F/F, F/M), and I also look at whether shipping ratios are predicted by canon.

Chapter Text

This sample of fanworks was chosen by selecting a list of random numbers between 1 and the most recently assigned AO3 fanwork number, and selecting a new random number any time a work was missing. It was not limited to movie fanworks, but I did omit works with fandom “Original Work.” My guess is that movies follow a similar pattern to overall shipping patterns in fandom, but I have not confirmed this because it’s actually harder to get just the movie data with the way I was doing these analyses.
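
In rough Python, the sampling loop looks something like the sketch below. This is not my actual script: `work_exists` stands in for however you check that a work ID resolves to a real, accessible work (e.g. fetching https://archiveofourown.org/works/<id> and checking the response), and the ID ceiling is a placeholder.

```python
# Rough sketch of the random-sampling procedure described above.
import random

MAX_WORK_ID = 14_000_000  # placeholder for the most recently assigned work number
SAMPLE_SIZE = 125

def work_exists(work_id: int) -> bool:
    """Stand-in for a real lookup against the archive (e.g. checking for a
    404 or 'deleted' page). Here it just simulates some IDs being missing."""
    return random.random() > 0.15  # placeholder behavior

def sample_work_ids(n: int, max_id: int) -> list[int]:
    sampled = []
    while len(sampled) < n:
        candidate = random.randint(1, max_id)
        # Draw a new random number whenever the work is missing/deleted.
        if work_exists(candidate):
            sampled.append(candidate)
    return sampled

print(sample_work_ids(SAMPLE_SIZE, MAX_WORK_ID)[:5])
```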

I omitted tags for original characters and reader-insert tags (“Reader,” “You”), even when they specified a gender, because there aren’t any OCs included in the movie dataset, which is what I was comparing to. But there are plenty of OCs on AO3, and we’ll get into their gender breakdown later.

There were two works with one or more genderswapped characters (originally M). I went with the characters’ canon gender (as reported by IMDB/Wikia/other online resources), because in the movie dataset I didn’t record genderswaps, and this analysis is part of trying to explain the movie dataset. If I had recorded those characters as female instead, the F/F and F/M categories would have both paid more attention to women than is shown here.

Some works had more than one category (e.g., there were works tagged both M/M and Multi). There were also 2 Other works and 4 works with no category listed; I included those in All Works.

Number of data points per category:

Category     Num works    Num character appearances
F/F          12           40
F/M          31           133
Gen          21           64
Multi        9            44
M/M          64           172
All works    125          485

As before, these relationship categories are not mutually exclusive.

Elaboration on why not to blame dudeslash or its creators:

  1. Other factors may fully or mostly explain lack of female representation in fanworks. We’ve seen strong evidence for some other factors and will see more.
  2. AO3 is not all of fandom! Just because a lot of M/M creators choose AO3 as a place to post their works doesn’t mean that these shipping ratios hold everywhere (as we will see).
  3. There are lots of reasons that AO3 may have this shipping ratio, some related to canon, some historical, and some other ones (more on that later).
  4. And importantly, it’s a personal value judgment as to whether fan creators should strive to write more of particular relationship categories than others (and there are lots of arguments out there from multiple sides of that debate). The data doesn’t make a claim either way, and certainly doesn’t assign blame.

On the last point, to be clear about my own biases: I personally am not in favor of the overly simplistic blame games that sometimes get played in fandom (shipping M/M is misogynistic; not liking an M/M or F/F ship is homophobic; not writing F/F is self-hatred for female fans; creators of original female characters are giving fandom a bad name by making Mary Sues; etc.). Besides their over-simplifications, I disagree with the approach. I tend to view creating fiction as a difficult pursuit that people can easily be scared away from doing at all. And I think there are a large number of dimensions in which, in aggregate, I’d like to see better representation in fiction… but I don’t think it’s wrong to have some stories that are mostly about thin, well-off, young, white men. So while I’m very much in favor of having thoughtful, nuanced conversations about representation in fiction, I personally dislike judging fans for what they've created or blaming them for large cultural patterns in (lack of) representation.

Note: the years listed here on the X axis are extremely approximate; this was actually sampled in May 2018, so “2017” actually represents June 2017-May 2018, and so on going back in one-year increments.

Please don't take that 10% number very seriously -- I am pretty sure there's been an increase, but that number is a wild guess given how tentative the per-category numbers are.

Prokopetz post (Plus more thoughts in an ensuing thread)

Model P1: 30% of characters overall are women, therefore the percent of F/F we’d expect is 30% * 30% = 9%. 70% of characters are men, therefore the percent of M/M predicted is 70% * 70% = 49%. F/M is what remains: 100% - 49% - 9% = 42%. (recall that we’re only trying to explain these three categories for the moment, so the three categories sum to 100% for this analysis.)

Model P2: The average percent of characters per movie who are women is 31%, so the percent of F/F we’d expect is (31%)² = 9.6%. M/M is (69%)² = 47.6%. F/M is what remains: 42.8%.

Model P3: I used the numbers from the intro about how many movies pass the Bechdel test (recall that these are not necessarily representative, though top movies in recent years seem to have had similar numbers). Only 58% of movies contain 2+ women who speak to each other about something other than a man. I assumed that the 42% of movies with the lowest ratio of female vs. male characters were (approximately) the same ones that didn’t pass the Bechdel test, and therefore not eligible for F/F shipping. So (for purposes of calculating femslash only) I gave those bottom 42% of movies a modified 0% F char score. Then I took the modified average of the {% characters who are female} across all movies. The modified average % characters who are female for P3 was 24%, giving a predicted amount of femslash of 24% * 24% = 5.8%. (I then normalized the predicted amounts of shipping in all categories so it again added up to 100%.)
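
The P1 and P2 arithmetic is easy to reproduce (P3 additionally needs the per-movie data and the Bechdel assumption above). A quick sketch, with a made-up helper name:

```python
def canon_prediction(pct_women: float) -> dict[str, float]:
    """Predicted F/F, M/M, and F/M shares (in %) if pairings were drawn at
    random from a pool in which `pct_women` of characters are women.
    The three shares sum to 100% by construction."""
    ff = pct_women ** 2
    mm = (1 - pct_women) ** 2
    fm = 1 - ff - mm
    return {"F/F": round(ff * 100, 1),
            "M/M": round(mm * 100, 1),
            "F/M": round(fm * 100, 1)}

print(canon_prediction(0.30))  # Model P1: {'F/F': 9.0, 'M/M': 49.0, 'F/M': 42.0}
print(canon_prediction(0.31))  # Model P2: {'F/F': 9.6, 'M/M': 47.6, 'F/M': 42.8}
```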

I also did the same for two partially-passed-the-Bechdel models (not pictured here) -- one where all movies that contained 2+ women were eligible for F/F, and one where all movies with 2+ women who talked to each other were eligible. But there wasn’t a huge difference between these and other models shown, so I only included the Bechdel test model, since it was the most extreme.

Important note: some of the pairings on AO3 are secondary/background pairings -- canonical F/M pairings in particular may be getting tagged more often in the background without actually being a main focus of the story. I also redid the AO3 analysis for just the fanworks that belong to only one category -- only F/F, only F/M, or only M/M -- to try to remove a lot of those background pairings. In that case, the F/F percent was about the same (8.5%), but the difference between F/M and M/M was magnified -- F/M was then 27.1% and M/M 64.4%.

Note also that none of the canon predictions took into account the genre preferences of fandom. E.g., if we made canon predictions from just the film genres that are most popular on AO3, sci-fi/fantasy and action (a vast oversimplification, but it demonstrates my point), we’d predict 59% M/M, which is close to the actual AO3 proportion. So the genre balance in the predictive model makes a huge difference.

More on AO3 history:
How Archive of Our Own Revolutionized Fandom
Fanlore article

These numbers are not limited to just movie fanworks on either AO3 or FFN, for the same reasons as described in Q10.

The numbers from AO3 are the ratios for every fanwork on AO3. (AO3 allows you to search by category and find out how many fanworks are in each category; FFN doesn’t, which is why I had to hand label a small set of FFN fanworks). But then I normalized those ratios so that M/M + F/F + F/M add up to 100% -- that’s why all these AO3 numbers are higher than in the previous analysis; this analysis is omitting Gen, Other, and Multi. The reason I chose to do this is that the gender ratios of canon don’t make any particular predictions about how many works we should see in the Gen, Other, or Multi categories.

Yes, this is a sidenote to a sidenote. :)

Chapter 6: Other forms of gender representation, and methodological issues

Summary:

In which I figure out that some of my popularity metrics are misleading, and I investigate a whole bunch of types of gender representation that weren't covered in my previous analyses.

Notes:

This is the penultimate chapter -- and the last one full of data! After this, it'll just be a "what did we learn?" overview. :)

High res images on imgur
Slides

Chapter Text

So, this is a large amount of data that went missing. That’s not super surprising, since I was trying to be very certain about matches, and AO3 often tags characters or movies differently than the movie database does. If I had spent more time cleaning up data and looking for variants of names, and/or doing more early testing of my scripts on random samples of characters, I probably could have done substantially better at finding AO3 matches.

Major limitations of my method for gathering data from AO3 (mostly causing AO3 data to be missed):

  1. My script didn’t handle it well when the screenplay’s name for a character differed from the AO3 tag (e.g., first name vs. last name, or a character’s title included in one place but not the other)
  2. I discovered after I’d already done many analyses that a lot of the longer names in the original dataset were truncated. E.g., “Hermione Grange.” Because my script was doing exact matching for character names, many of these characters got left out.
  3. For characters with really common names in canon (e.g., “Jennifer”), I often failed to find the right character match on AO3, even if they were there -- because the right fandom was not in the top 10 for that name.
  4. Special characters (e.g., é) in character names caused my script to fail. I keep thinking I’ve fixed this bug, but I haven’t. Sorry, Amélie.
  5. If the movie name was too different from the movie fandom name, it didn’t pass my similarity thresholds (e.g., “Harry Potter and the Chamber of Secrets” vs. “Harry Potter - J. K. Rowling” I believe just barely made the cut).
  6. This method of gathering data was messy for series of movies especially. Sometimes it grabbed the wrong movie in the series, or an inappropriate meta tag. Reboots also caused some mixups. And fans are pretty inconsistent about how they tag fanworks for movies that are part of series. I decided to include the data from movie series anyway, especially because so much of fandom is series-based, and I tried to set the “Best Matches” threshold high enough that it would ignore a lot of these mixups. (A rough sketch of the kind of name normalization and fuzzy matching that could mitigate several of these issues follows this list.)
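
As promised, here is that sketch -- not the script I actually used; the normalization steps, similarity measure, and threshold are all illustrative:

```python
# Illustrative sketch only (not the script I actually used): normalize
# names before comparing, and use a fuzzy similarity score with a
# made-up threshold instead of exact string matching.
import unicodedata
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase, strip accents (so 'Amélie' can match 'Amelie'),
    and collapse whitespace."""
    decomposed = unicodedata.normalize("NFKD", name)
    ascii_only = decomposed.encode("ascii", "ignore").decode("ascii")
    return " ".join(ascii_only.lower().split())

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

THRESHOLD = 0.85  # placeholder cutoff; would need tuning on a labeled sample

# A truncated dataset name vs. the full AO3 tag, and an accented name:
print(similarity("Hermione Grange", "Hermione Granger"))  # ~0.97
print(similarity("Amélie Poulain", "Amelie Poulain"))     # 1.0
```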

Note that I didn’t do a similar examination for every slice of dialogue, and results may vary. So this is very tentative. (I chose this amount of dialogue because there were a bunch of characters in it who showed up and/or got popular on AO3, so it made for more reliable stats than some other slices.)

Methods note: The initial analysis averaged per movie, and this one combines across all movies. (I did it differently here because the data set is small enough that the other way would be pretty noisy, for reasons previously described in the genre analyses.)

Yeah, so this is kind of huge. It doesn’t mean my analyses were wrong, but it means that a lot of the ways I’ve been investigating are potentially misleading -- especially if we forget to factor in that women speak less than men on average. And this is likely to affect a lot of the conversations we have in fandom about the number of characters who get popular, for any set threshold of popularity. There are indeed far fewer female characters who get popular on AO3, but that may be almost exactly proportional to the size of their roles in canon.

In this section, I’m going to survey a bunch of different kinds of tags that have not been addressed so far, but are in some way connected to gender. I’m not saying all of the following are equivalent to appearances in fanworks by female characters from canon, which we’ve been looking at so far. I’m just interested to get a richer look at the landscape of gender in fanworks.

This is not limited to just movie fandoms. The same will be true for the following analyses.

Year is approximate because I sampled in April 2018, looking at the past 5 years from that date, so they’re only very roughly aligned with calendar years.

My best estimate is that around 75% of works about trans characters or themes don’t use the Trans Character tag. BTW, these caveats about missing many instances are true of all my tag-based analyses, but this is the only case where I made an attempt to quantify how much was missing (because such works tend to use identifiable terms like “trans” and “nonbinary”, so they’re easier to track down without the tags than, say, all the works that contain original characters).

I would not take the recent dip in the Nonbinary Character tag as indicating any decrease in actual representation; it’s a relatively small decrease, and it may also be the case that people are still settling on which of a set of related tags to use for these stories.

“Sex Swap”: not a term I’ve seen elsewhere, but that’s part of the AO3 tag.

These most often seem to involve men being transformed into women, from my experience, though I haven’t empirically verified that. (There are only 13 works tagged “Male Natasha Romanov,” though, and she’s one of the most popular female characters on AO3.)

A far smaller number of fanworks use other tags like “Alien Gender/Sexuality.”

Check out the “Non-Traditional Alpha/Beta/Omega Dynamics” tag for some examples of ways people are playing with gender and sex roles within this AU.

Chapter 7: TL;DR and final thoughts

Summary:

You can skip everything else if you really want to. :)

Chapter Text

Slides cited: 7, 46 (% characters who are women); 6 (Bechdel)

Slides cited: 5 (% speaking time by gender in recent top-grossing films); 28 (speaking gap); 29 (very large roles).

Note that this is only in recent top-grossing films. In the film set I used for most of my analyses, women speak only 28% of dialogue; see slide 23. (The analysis methods were also somewhat different.)

Slides cited: 16 (percent F characters); 24 (percent attention) And yes I totally create my fanfic and fanart with quill and ink, don’t you? ;) (clipart is hard sometimes)

Slides cited: 16 (popular characters); 79 (popularity adjusting for the dialogue difference); 32-41 (individual character popularity).

Slides cited: 44-57. Note that the story regarding genre is not totally straightforward -- with the limited data we have, individual genres don’t always behave as expected. (E.g., SF/F fanworks actually may have better than average gender representation despite the canon lack -- which is possible if the female SF/F characters tend to get tagged a lot. Action genre appears to have lower than average representation for both canon and fanworks, though.) But far more data is needed to do accurate analyses of individual genres. See cited slides for more details.

Slides cited: 68-72. The slides cited contain some other notes about why AO3 may have these shipping ratios, having to do with archive history as well as explicitness of pairings.

M/M makes up 50% of AO3 overall -- it’s shown as 61% here because that’s out of just the F/F + F/M + M/M works (ignoring other categories for reasons discussed in the slides cited).

The canon prediction used here is model P1; see cited slides for alternatives (with fairly similar predictions).

Slides cited: 71 (explicitness); 66 (changes over time); 64 (gender ratios within shipping categories)

Slides cited: 81-87.

This chart shows the rate that each tag was being produced within the past year.

Some of the unanswered questions:

[1] Genre clearly influences box office, but how much? (Someone else probably has this sort of data… I didn’t gather enough genre data to do thorough analyses.) It would be nice to know how much my investigations into genre and gross were getting at overlapping factors.

[2] Top grossing films have less gender representation than films on average -- but is that causal? (Do movie fans actively prefer to see movies with fewer women, or is this just a side effect of genres like action and sci-fi movies being popular?)

[3] Shipping seems related to canon gender ratios (as well as to archive history). But what other factors are related? For instance, do different genres tend to have different shipping patterns -- more so than is explained by that genre’s gender ratios? (E.g., it could turn out that dramas produce way more femslash than predicted by the amount of canon gender representation.)

[4] Do factors like genre influence representation in fanworks, above and beyond genre’s impact on movie representation? For instance, maybe action film fanworks are more likely than most to disproportionately focus on the few female characters that appear onscreen in those films.

[5] Is the amount of shipping purely a result of canon gender ratios (and other factors like archive history), or is it also a causal factor contributing to gender ratios in fandom? E.g., does the popularity of M/M on AO3 cause a feedback loop leading to less gender representation on AO3? (The rise of F/M and F/F over time makes this look less likely, but this could be happening in some pockets of fandom.) And/or are F/F fan creators -- whose fanworks have far better representation than average -- working extra hard to consciously improve representation beyond what canon would predict? There are lots of possible relationships here.

Further explanation about cause vs. correlation:

Things that are correlated could be causally connected (A causes B, or B causes A), or there could be an underlying variable that causes both things to occur (C causes both A and B) or that mediates between them (A causes C, which causes B). Occasionally we also find spurious correlations that don’t mean anything. (Examples: http://www.tylervigen.com/spurious-correlations).

In some cases of correlation, we know that the causal direction can’t go in a particular direction -- e.g., a film’s box office gross can’t cause its genre, because the genre has already been determined before it’s released at the theater. But we still don’t know whether there are other hidden variables involved.

I think a lot of arguments in fandom actually come down to different fans’ intuitions about these underlying variables -- based on their own experiences, and their own guesses about the motives of other fans. These are, however, the sorts of thing I can’t quantify with the studies I’ve done.

Gender representation in fanworks, as referred to here, includes focus on canonical characters -- but also novel gender representation such as OFCs.

The above hidden variables may also be influencing each other. That just got too complicated to graph. :)