The new policies, which closely mimic guidelines established by Google’s YouTube, come as advertisers demand more accountability from the internet giants related to where and how their messages are delivered.
Facebook and Google were criticized during and after the presidential election for allowing misinformation to spread on their platforms. This year, YouTube had to address advertisers’ concerns after messages from major brands like AT&T were discovered on videos that promoted terrorism and hate speech. The Wall Street Journal found at least 50 acts of violence on Facebook Live broadcasts.
(On the other side of the advertising equation, Facebook disclosed last week that it had identified more than $100,000 worth of ads on divisive issues that ran from June 2015 to May 2017 and had been bought by fake accounts based in Russia.)
The companies are moving quickly to address such issues, particularly as they seek to attract a greater portion of the money earmarked for television advertising to the video content on their sites.
Facebook has enabled hundreds of publishers and individuals to run ads during live video broadcasts in the past year, and the company recently introduced a slate of new shows on a part of its site called “Watch.” If the new guidelines encourage people to post more G-rated video content, they are likely to bolster Facebook’s pitch to advertisers.
“Facebook is this huge, huge, huge platform, and they haven’t really been monetizing original content in the same way as YouTube has,” said John Montgomery, executive vice president for brand safety at GroupM, a media investment group for the advertising giant WPP. “What I think is different for Facebook is that this is a much earlier stage for them that they’re going into this, and the scale is different in that there will be much, much less content uploaded than those stupefying numbers you hear about on YouTube.” (YouTube has said 400 hours of video are added to the site every minute.)
That should be an advantage in policing content, Mr. Montgomery said, especially with the limits that Facebook is placing on who can make money from certain features. For example, the company required pages and profiles that wanted to run ads on live videos this year to have more than 2,000 followers, and allowed them to show ads only after a broadcast had run for four minutes and drawn at least 300 concurrent viewers.
Facebook also said it would begin showing advertisers, before their campaigns start, a preview of where their messages may appear, giving them a chance to block undesirable destinations. The company will also report on where the ads actually run.
When brands use Facebook to target specific people with ads, they are able to select from a cornucopia of traits, including age, gender and how many lines of credit a person has. Many ads then show up in the main Facebook and Instagram feeds that people flick through, but they can also appear in articles and videos within Facebook and on outside apps and mobile websites that are part of Facebook’s “audience network.”
Brands have not been able to see beforehand what kind of content that might include, and some have had to contend with objections from consumers after their ads appeared on sites like Breitbart News. Facebook said there were tens of thousands of apps and sites in its audience network and that more than 10,000 publishers displayed articles within its platform through a tool called Instant Articles.
As YouTube has moved to limit ads from running alongside unsavory content, many creators on the platform have complained that their videos have been unfairly penalized by automated systems. Facebook will probably have to grapple with similar complaints as it expands the number of people who can make money from video ads on the site.
“We are not censoring their content; as long as it abides by our community standards, the content can run on the platform,” Ms. Everson said. “If a publisher wants to monetize that content, they have to adhere to the monetization eligibility standards.”
Facebook previously let advertisers opt out of a more limited list of topics, including sites and apps related to dating, gambling and “debated social issues” like religion and politics, Ms. Everson said. She added that the new rules would allow publishers to “understand where we’re placing ads” and make it easier for advertisers to avoid offensive content.
The company, which will also have an appeals process for content deemed ineligible for ads, reiterated its commitment to adding 3,000 people to a team of 4,500 that reviews and removes content violating its community guidelines, a plan it announced in May. (It did not provide an update on how many people it has hired.)
In its announcement on Wednesday, Facebook also addressed industry concerns about how it measures ads, an issue that attracted attention again last week after an analyst noted that Facebook’s online ad tools claimed the ads could reach 25 million more young Americans than the Census Bureau says actually exist. Facebook, which said this year that it would seek accreditation from the nonprofit Media Ratings Council to validate how it measured ads, said it hoped to achieve that in the next 18 months for key metrics for its display and video ads.