Research paper from my Environmental Field & Research I class. The paper's focus was on demonstrating our understanding of statistics, and its topic is gender disparity in the movie industry as shown by 2016's 20 top-grossing films.

Introduction

This paper attempts to shine a light on the gender disparity in the movie industry, especially in films geared toward children. The 20 top-grossing films of 2016 were put to 13 tests in order to determine whether their storylines, casts, and production crews presented women in a positive manner and with equal representation (Domestic Box Office For 2016, n.d.). Each of these tests was inspired by the Bechdel Test, which consists of two criteria: are there at least two named female characters, and do these characters have at least one conversation that is not about a man (Fernandez, n.d.)? The Bechdel Test was created in the 1980s to expose sexism in the film industry. Of the 20 films tested, 13 passed it and 7 did not. The number of tests each of the 20 films passed will be compared, the average score for each film rating (PG, PG-13, and R) will be compared, and the best performers and worst offenders will be identified.

Data

The data used in this paper was gathered by a team of editors and writers at FiveThirtyEight, an online news source (Hickey, Koeze, Dottle, & Wezerek, 2017). Thirteen tests (the original Bechdel Test as well as 12 "successors" created by women in the television and movie industries) were applied to the 50 top-grossing films of 2016. The data was published on GitHub in qualitative form: each film was graded as passing or failing each of the tests (FiveThirtyEight, 2018). By counting how many tests each film passed, this paper expresses the data quantitatively, as shown in Table 1. The original population of the data is all films that debuted in 2016, with the final data set containing only the 20 top-grossing films.
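The conversion from qualitative to quantitative data described above amounts to counting the passes in each film's row of the pass/fail grid. A minimal sketch of that step, using made-up film names and made-up test results rather than the actual FiveThirtyEight data:

```python
# Each film maps to 13 pass/fail results, one per test.
# Film names and results here are illustrative placeholders,
# not the actual values from the FiveThirtyEight data set.
films = {
    "Film A": [True, False, True, False, False, True, False,
               False, False, False, False, False, False],
    "Film B": [True, True, True, False, True, False, True,
               True, False, False, False, False, False],
}

# Summing booleans counts the True values: the film's score out of 13.
scores = {film: sum(results) for film, results in films.items()}
print(scores)  # {'Film A': 3, 'Film B': 6}
```

Applying this to all 20 films yields the quantitative scores shown in Table 1.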
This sample size captures the most influential movies of 2016 while remaining a manageable amount of data to work with.

Conclusions and Inferences

Figure 1 shows the data set is not greatly affected by outliers: the median and mode are both 3, and the mean is 3.3, incredibly close to that value. These scores are out of 13, the total number of tests that can be passed. Many of these films likely present a man's perspective of the world, and possibly set unrealistic or unethical standards for and models of women. Figure 2 shows PG films have the widest variety of scores (from 1-7), PG-13 films range from 1-4, and the sole R-rated movie tested scored a 1. Films geared toward children probably actively adopt more diverse casts and more female-friendly storylines to appeal to all children and their parents. Figure 3 once again shows PG films have the most variety in test scores, and also shows that 44.4% of the PG films tested scored a 5 or above. This suggests that equitable hiring practices and storylines featuring female role models are not a staple across the board, even in children's films, as more than half of them received under a 3, roughly the average score for the entire group of films tested. Table 2 shows the frequency and relative frequency of each score at the same time, making it easy to draw quick conclusions from the numerical data. It highlights the fact that the three highest scores present in the sample (5, 6, and 7) were also the three classes with the lowest frequency. This shows that while some films are able to reach these higher scores, many do not try to do so, often favoring more action-packed, male-dominated storytelling. Figure 4 shows that a majority of the films scored a 4 or below, passing at most 4 of the 13 tests in play (30.8%).
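The central-tendency measures discussed above can be computed directly with Python's standard library. The score list below is illustrative: it is constructed to be consistent with the summary statistics reported in this paper (mean 3.3, median 3, mode 3, four films at 5 or above), but the actual per-film values come from Table 1.

```python
import statistics

# Illustrative scores (out of 13) for 20 films, consistent with the
# reported summary statistics; not the actual Table 1 values.
scores = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 6, 7]

print(statistics.mean(scores))    # 3.3
print(statistics.median(scores))  # 3.0 (average of the two middle values)
print(statistics.mode(scores))    # 3 (most frequent score)
```

Because the mean (3.3) sits so close to the median and mode (both 3), no single film's score is pulling the average far from the center of the distribution.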
The industry has a lot of improvement to make, and doing so would likely attract viewers around the world who avoid many blockbusters due to their lack of diversity or anti-women storylines. Figure 5 shows that only 20% of the films tested scored a 5 or above. These films were Sing (#10 of the year, 5/13 tests passed), Kung Fu Panda 3 (#20, 5/13), Finding Dory (#2, 6/13), and Hidden Figures (#14, 7/13). Films that passed more tests tended to be outperformed by lower-scoring films, possibly because they did not appeal to the traditional cinema audience that the more action-packed films on the list did.

Commentary on Different Types of Visual Data

Figure 1 shows the procedure for calculating the mode and median, while also showing the data set ordered by score rather than by age rating as it has appeared so far. Its biggest downside is that it is not very visually appealing and may confuse readers unfamiliar with how these statistical measurements are taken; for these reasons it is the weakest visual display shown. Figure 2 is very straightforward and clearly shows how each film performed, and color-coding each age rating makes the rough spread in each category easy to see. Its largest drawback is that the three groups cannot easily be compared and contrasted mathematically. Figure 3 would likely work much better with a substantially larger sample size, because there would be more variety (especially among R-rated movies). The pie charts are helpful for comparing the makeup of movies within each age rating, which cannot be demonstrated as easily with the other figures. Table 2 is very helpful in understanding how Figures 4 and 5 were made, and it outlines the frequencies of each class numerically. Its largest pitfall is that the data consists only of whole numbers, making the class boundaries and class midpoints a bit out of place and possibly even confusing, because each class consists of a single value.
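A frequency and relative-frequency table like Table 2 can be built in a few lines with a counter. The scores below are the same illustrative stand-ins used elsewhere in this write-up, not the actual Table 1 values:

```python
from collections import Counter

# Illustrative scores (out of 13) for 20 films; not the actual Table 1 values.
scores = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 6, 7]

# Frequency of each score, i.e. how many films earned it.
freq = Counter(scores)

# Relative frequency = frequency / total number of films.
for score in sorted(freq):
    rel = freq[score] / len(scores)
    print(f"score {score}: frequency {freq[score]}, relative frequency {rel:.1%}")
```

Because each class is a single whole-number score, this one-score-per-class structure is exactly why the class boundaries and midpoints in Table 2 feel redundant.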
Figure 4 displays the data similarly to Figure 2 but in a format that makes mathematical conclusions easier to draw. Having the bars in multiple colors helps support some conclusions, but labeling each color segment with its value would be a highly valuable addition. It is the most straightforward and versatile display of the data. Figure 5 shows the relative frequency of each score. Presenting the data this way makes it easier to form inferences that could apply to a larger sample of films, such as "About 30% of the films released in 2016 can be expected to pass 3 of the 13 tests." Dividing each bar by age rating (as in Figure 4) would be helpful in making more detailed assessments.

References

Domestic Box Office For 2016. (n.d.). Retrieved from https://www.boxofficemojo.com/year/2016/?grossesOption=totalGrosses

Fernandez, D. (n.d.). About. Retrieved from http://bechdeltestfest.com/about/

FiveThirtyEight. (2018, October 17). The Next Bechdel Test. Retrieved April 1, 2020, from https://github.com/fivethirtyeight/data/tree/master/next-bechdel

Hickey, W., Koeze, E., Dottle, R., & Wezerek, G. (2017, December 21). Creating The Next Bechdel Test. Retrieved April 1, 2020, from https://projects.fivethirtyeight.com/next-bechdel/

Ratings, Reviews, and Where to Watch the Best Movies & TV Shows. (n.d.). Retrieved from https://www.imdb.com/

May 2020, 11th Grade