Tuesday, March 10, 2020
A Rollercoaster on YouTube
YouTube has seen many changes and grown enormously since it was founded in 2005. Within the last few years, demonetization has been a rollercoaster ride for YouTubers and viewers alike.
Note: I am not a YouTuber, just someone who uses YouTube often to watch creators and listen to music. I have noticed which videos get ads, which don't, and how that has changed over the last few years.
YouTubers have always made a portion of their money from ads. I still remember, years ago, seeing videos with multiple little yellow vertical lines in the progress bar indicating where all the ads were. YouTube has since taken those yellow lines away and now simply shows a countdown before an ad (or set of ads) starts. Back in the days of the yellow ad indicators, YouTube was the place to upload whatever you wanted without concern for copyright infringement. You could take a song you liked, play various pictures on the screen, and that was acceptable. In recent years, YouTube has cracked down on copyright infringement, but more than that has changed.
In 2017, ad companies started taking a closer look at the videos their ads were being played on and noticed some were running alongside violent or hateful content. Several companies then refused to advertise on YouTube at all. This pushed YouTube into what became known as the "Adpocalypse," where videos were getting demonetized left and right, cutting into many YouTubers' profits. While some of the demonetization hit videos and channels that genuinely had hateful or violent content, plenty of videos were demonetized wrongfully. YouTube demonetized videos through an algorithm, and sometimes it flagged videos that contained no cursing, violence, or sexual references at all. The algorithm's wrongful demonetization kept YouTubers from earning the money they needed to pay their bills. A YouTuber who believed their video was wrongly demonetized could file an appeal, and the demonetization might be lifted, but it was not always. YouTube insisted that the algorithm was still learning and that, given time, it would learn to tell acceptable content from bad content.
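To see why an automated filter misfires like that, here is a purely hypothetical toy sketch in Python. This is my own illustration of naive keyword matching, not YouTube's actual system, and the keyword list and function name are made up:

# Toy sketch of a naive keyword filter -- a hypothetical illustration,
# not YouTube's actual system.
FLAGGED_KEYWORDS = {"violence", "gun", "war", "drugs"}

def is_demonetized(title: str) -> bool:
    """Flag a video if any word in its title is on the blacklist."""
    words = title.lower().split()
    return any(word.strip(".,!?") in FLAGGED_KEYWORDS for word in words)

# An educational video is wrongly flagged (a false positive)...
print(is_demonetized("School assembly on gun violence prevention"))  # True
# ...while a genuinely graphic video slips through (a false negative).
print(is_demonetized("Brutal fight compilation, very graphic"))      # False

A real classifier is far more sophisticated than this, but the failure mode is the same: when surface features stand in for meaning, harmless videos get caught and harmful ones slip through.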
Also in 2017, people discovered highly inappropriate videos that were beating the algorithm, and these videos targeted kids. They used familiar cartoon and children's characters to tell stories involving violence or sexual content, and they were not demonetized because, to the algorithm, children's characters did not look dangerous. On top of that, there were videos featuring children whose comment sections were unmoderated and full of inappropriate comments. Once YouTube found out, and as more companies pulled their ads, it worked to take down those videos and channels and to disable the comment sections on videos featuring minors.
In 2019, YouTube was found to be harvesting information from children without their parents' consent. Channels made for kids were being used to collect children's data so YouTube would know which ads to attach to the videos. For this, YouTube was fined $170 million by the Federal Trade Commission (FTC) for violating the Children's Online Privacy Protection Act (COPPA). In response, YouTube decided that videos and channels whose target audience was kids would effectively be demonetized, losing personalized ads. Channels whose only goal was educating children, not harvesting their data, were punished for aiming content at kids and for carrying ads that YouTube's own algorithm, not the channels themselves, had chosen. Once again, it meant they would earn less than they normally would with ads. It is entirely understandable that using children's data to target ads at them is bad, but ads are how YouTube channels make money.

YouTube also shifted responsibility for content onto the YouTubers themselves. Videos with subject matter appealing to children that were not declared as made for kids could draw an FTC fine of $42,000 per video. That subject matter included cartoons, gaming videos, playful music, bright thumbnails, catchy phrases, and more. This left some YouTubers fearing for their channels, because such broad terms could apply to almost anything. Some felt they had to walk a fine line: keeping adult material out of their content while also avoiding anything that appealed to kids. Many feared getting demonetized without knowing why, even when they believed they were staying on that line.
In late 2019, YouTube realized that more adult content, like cursing and some sexual references, could still be monetized. YouTubers making content for audiences over 13 or 18 could be paired with more adult ads, such as trailers for R-rated movies or underwear ads. I thought the 2020 Valentine's Day AdoreMe ad was a great example of a more adult ad, and honestly one of the few ads I would willingly watch all the way through, because it had a good message. This change has allowed YouTubers with more adult audiences to feel comfortable cursing in their videos.
YouTube has gone on a rollercoaster with its policies on what is and is not allowed. Much of it was done in hopes of making YouTube safer for children, which was the right thing to do, but the algorithm has blind spots and sometimes demonetizes videos that do not violate policy. It even demonetizes content that references touchy topics like the coronavirus. All these changes over the years have affected what people feel they can and cannot say in their videos, for fear of how it will hit their income.

Some YouTubers say YouTube only acts when companies decide to pull their ads from the platform; it sometimes seemed as though advertisers realized something was going wrong before YouTube did. Of course, there is only so much a platform like YouTube can do, since an enormous number of videos are posted daily and it relies on an algorithm that does not have the judgment of a person. Still, acting only when the money starts to dry up makes it seem as though YouTube chooses profits over creators and viewers.

Ideally, no one would ever post inappropriate content that included or targeted children, no one would leave inappropriate comments on videos with children in them, and parents would make sure their kids did not watch videos meant for older audiences. Ideally, YouTube would not have to step in as much as it does, but with so many bad things happening, that seems to be the only way to solve them.
Sources:
https://www.youtube.com/watch?v=ypNhbm141Gg
https://www.youtube.com/watch?v=FgPjX5hDuPo&t=6s
https://www.youtube.com/watch?v=9mnTCYsbKfw
https://www.youtube.com/watch?v=X_K-shDq-kM
https://www.youtube.com/watch?v=jK2t9V3XvmU&t=21s
https://www.youtube.com/watch?v=SHuMZ2hbjz8&t=915s
https://www.youtube.com/watch?v=oUzBrxQ_47s
https://www.youtube.com/watch?v=WCm_p7R9Gvo&t=4s
https://www.youtube.com/watch?v=j4v7hkS-ktk&t=38s
https://www.youtube.com/watch?v=ExPUNAtbNNs
https://www.theverge.com/2020/3/4/21164553/youtube-coronavirus-demonetization-sensitive-subjects-advertising-guidelines-revenue
https://www.vox.com/the-goods/2019/12/20/21025139/youtube-kids-coppa-law-ftc-2020
https://youtube.fandom.com/wiki/YouTube_Adpocalypse
Hi Sarah! I am a frequent YouTube watcher as well, so this topic is very interesting. There is definitely a duality here: the good and the bad that come with a media source as widespread as YouTube. While I think it is wonderful that YouTube supports individual creatives financially, I agree that it often does not pull that financial support from harmful creators until it is itself harmed by them. As with censorship on websites like Facebook and Twitter, it is difficult to come up with a realistic solution with a scope wide enough to catch all harmful content. It seems like the way they narrow down what to censor is by watching what harms or helps them financially, instead of what helps or harms the most people.
Great post, Sarah! I also watch YouTube frequently and had noticed a lot of people upset by everything that was happening ad-wise on the platform. It's a really interesting and complicated topic, and it's hurt a lot of people who make their careers on YouTube (some rightfully so). I know that a lot of LGBTQIA+ content is censored and deemed not "suitable" for ad revenue, and age restrictions have been put on videos that merely have the word "gay" in the title, which I think is taking things way too far. But then you also have people putting up genuinely harmful content, like a few years ago when Logan Paul posted that horrible video where he vlogs in the Aokigahara forest in Japan (also known as the 'Suicide Forest') and shows a dead body (note: I don't watch his content, I just saw all the news about it), and YouTube kept that video monetized and viewable until enough people complained.

I think, overall, YouTube as a platform still has a lot to do in order to perfect its algorithm for monetization.
One aspect of this discussion I found especially interesting (and, unfortunately, relevant to all of us right now) is the increasing prominence of algorithm-generated responses to human input. While I can imagine the potential impact and efficiency that algorithms could bring to software-based services and their consumers, I have only had unpleasant interactions with machine learning built for professional or academic purposes. For instance, when I used VMock (software designed to review and grade resumes), I was unable to enter the name of an organization because its title included a personal pronoun. Although the algorithm was acting on a generally correct rule, that a resume should not include personal pronouns, it proved incapable of recognizing when a personal pronoun was necessary.

While I can see the potential of algorithmic analysis of human input, my example above illustrates how it can and does fail.