Wednesday 3 April 2019

YouTube execs reportedly ignored employee warnings about toxic videos


  • A Bloomberg article alleges that YouTube employees warned executives about promoting toxic videos.
  • One employee purportedly offered a solution before leaving in 2016, but the proposal was rejected.
  • YouTube announced earlier this year that it would adopt a solution that seems similar to the rejected proposal.

One of the biggest problems with YouTube in recent years has been the prevalence of toxic videos covering conspiracy theories and other misinformation. Worse, the video-sharing website has actively recommended these high-engagement videos to its users, despite their questionable, false, or incendiary content. Why? To get more views.

Now, Bloomberg reports that current and former Google and YouTube employees raised concerns with the company about these videos and offered solutions, only to be told not to “rock the boat.” The outlet interviewed over 20 former and current staffers, painting a picture of a company that apparently refused to act for fear of reducing engagement numbers.


One reported solution came from former Googler Yonatan Zunger, who left in 2016. He suggested simply flagging “troubling” videos so they weren’t recommended to users. The outlet claimed that the proposal reached the head of YouTube policy, where it was promptly rejected.
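To make the idea concrete, here is a minimal sketch of how such a flag could sit in a recommendation pipeline. This is purely illustrative Python; the `Video` class, the `recommend` function, and the scores are hypothetical assumptions, not YouTube’s actual systems or code:

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    predicted_watch_time: float      # engagement score from a hypothetical ranking model
    flagged_troubling: bool = False  # set by a hypothetical review process

def recommend(candidates, limit=10):
    """Rank candidates by engagement, but never surface flagged videos."""
    eligible = [v for v in candidates if not v.flagged_troubling]
    eligible.sort(key=lambda v: v.predicted_watch_time, reverse=True)
    return eligible[:limit]

pool = [
    Video("cat-tutorial", 3.2),
    Video("conspiracy-clip", 9.7, flagged_troubling=True),  # highest engagement, still excluded
    Video("news-report", 5.1),
]

for video in recommend(pool):
    print(video.video_id)  # prints news-report, then cat-tutorial
```

The point of the design is that a flagged video stays out of recommendations no matter how high its engagement score climbs, which is exactly the incentive conflict the proposal reportedly ran into.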

Another workaround was proposed after a video claiming that Parkland school shooting victims were “crisis actors” went viral. The proposal, put forward by policy staff, called for the recommendations shown alongside the video to be limited to vetted news sources, but a source told Bloomberg that this solution was rejected as well.

Engagement at all costs?

These proposals also came against the backdrop of YouTube’s internal goal of hitting one billion hours of watch time per day. The recommendation system, built on a neural network, was reportedly overhauled in order to meet this goal.

According to Bloomberg, computer scientist Francis Irving, who has been critical of YouTube’s AI, said that he had informed YouTube representatives of the problems with this system, calling it an “addiction engine.” He said the representatives either responded with doubt or indicated that they had no plans to change the system.


YouTube announced earlier this year that it’ll no longer recommend videos with “borderline content” or those that “misinform users in a harmful way.” That solution sounds similar to the one Zunger proposed before he left the company. But if these fixes were indeed pitched years earlier, why were they rejected at the time? Was it a case of advertisers voicing their displeasure at what Google was recommending? It wouldn’t be the first time advertisers intervened after inaction by the platform.


The website has also since implemented fact-check boxes below certain videos, linking users to established sources. But it’s unclear whether these measures are enough to shake YouTube’s reputation as both a host and a promoter of misinformation.

Bloomberg’s article also detailed a proposal by YouTube CEO Susan Wojcicki and senior staff to change the way YouTubers earned money. The proposal called for creators to be paid based on engagement, with incoming money pooled and then shared among uploaders, even those who didn’t run ads on their channels. Google CEO Sundar Pichai reportedly rejected this proposal, feeling that it could exacerbate the site’s filter bubble problem.
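As a rough illustration of how such a pooled payout might be divided, here is a sketch assuming a simple proportional split by engagement. The figures, channel names, and the split formula are assumptions for illustration; Bloomberg did not detail the actual mechanics:

```python
# Hypothetical pooled-payout sketch: all ad revenue goes into one pool,
# and each uploader's share is proportional to their engagement.
ad_revenue_pool = 1_000_000.0  # total incoming money, pooled

engagement = {                       # e.g. hours watched per channel
    "channel_with_ads": 600_000,
    "channel_without_ads": 400_000,  # still paid, despite running no ads
}

total_engagement = sum(engagement.values())
payouts = {channel: ad_revenue_pool * hours / total_engagement
           for channel, hours in engagement.items()}

print(payouts)  # {'channel_with_ads': 600000.0, 'channel_without_ads': 400000.0}
```

Under a scheme like this, chasing watch time directly translates into a bigger slice of the pool, which is presumably the filter-bubble concern Pichai is said to have raised.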




from Android Authority https://ift.tt/2Ibderh
via IFTTT
