A #meantweet or something worse? Twitter often lets AI be the judge

A year ago, Twitter was still relying on users to report abusive posts and employees to remove them.
More and more, however, that's a responsibility delegated to artificial intelligence, a move CEO Jack Dorsey views as necessary as San Francisco-based Twitter and its rivals grapple with growing global scrutiny of harmful content ranging from sexual exploitation of children to misuse by terrorists.
"Of all the tweets we take down every week for abusive content, 38% of them are now proactively detected by our machine-learning models," Dorsey said on a quarterly earnings call Tuesday. "This is a huge step, as there was 0% just a year ago," he added, and it reflects Twitter's focus on taking the initiative to prevent harmful content from ever reaching its audience of more than 130 million users a day.
Policymakers worldwide have increasingly demanded such efforts after complaints that Russian agents exploited social media firms to sow chaos in the United Kingdom's referendum on leaving the European Union as well as the 2016 U.S. presidential election. New Zealand's government promised to scrutinize use of the platforms by terrorists after an attack on two mosques that killed 50 people in March was livestreamed, and Sri Lanka imposed a social media blackout after bombings on Easter Sunday that killed more than 300 people.
"We've been employing a lot more machine learning and deep learning to everything on our system, but have been focusing a lot of our work on making sure that we recognize that everything that happens online has offline ramifications," Dorsey said.
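The article doesn't describe Twitter's proprietary models, but the kind of proactive detection Dorsey describes can be illustrated with a toy text classifier. Everything below is invented for illustration: the four-tweet training set, the word-count features, and the zero log-odds threshold are assumptions, not Twitter's actual method.

```python
# Illustrative sketch only: a minimal naive-Bayes-style scorer that flags
# text as abusive before any user reports it. Training data, features, and
# threshold are all hypothetical.
from collections import Counter
import math

# Tiny hypothetical labeled corpus: 0 = benign, 1 = abusive.
TRAIN = [
    ("you are wonderful", 0),
    ("have a great day", 0),
    ("i will hurt you", 1),
    ("you are worthless trash", 1),
]

def train(pairs):
    """Count word frequencies per class."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in pairs:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def score_abusive(text, counts, totals):
    """Log-odds that text is abusive (class 1), with add-one smoothing."""
    vocab = set(counts[0]) | set(counts[1])
    log_odds = 0.0
    for word in text.split():
        p1 = (counts[1][word] + 1) / (totals[1] + len(vocab))
        p0 = (counts[0][word] + 1) / (totals[0] + len(vocab))
        log_odds += math.log(p1 / p0)
    return log_odds

counts, totals = train(TRAIN)
# "Proactive" flagging: score the tweet at posting time, before any report.
flagged = score_abusive("i will hurt you badly", counts, totals) > 0
print(flagged)  # True for this toy example
```

In production such a score would feed a review queue rather than an automatic takedown, which matches the article's framing: the models surface candidates so human agents can act faster, not replace them entirely.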
Footage from the New Zealand attack, originally livestreamed on Facebook, was later uploaded on Twitter, Google's YouTube video-sharing site, and other platforms. Rep. Bennie Thompson, D-Miss., noted this in a letter to industry executives that requested a briefing before the Homeland Security Committee he chairs.
Reports that Facebook was alerted to the video by New Zealand police, rather than its protective algorithms, and that YouTube was unable to contain a flood of reposts for about 24 hours, show systemic flaws in the industry's safeguards, Thompson said.
Facebook, which has invested heavily in the past year in systems to block harmful content and prevent its spread, said it took the mosque attacker's video down within minutes of being contacted by New Zealand police. The live broadcast was viewed fewer than 200 times, said Chris Sonderby, the Menlo Park, Calif.-based company's deputy general counsel. Including replays watched afterward, the video was seen about 4,000 times before its removal.
Twitter, which said it suspended an account associated with the attack, is working continuously to make its platform a safe space, Dorsey said.
"We're taking a bunch of the burden away from the victims of abuse and harassment on our service, and we're making the agents that we have working on this process much more effective and much more efficient," he added.
As recently as last fall, in a hearing before the House Energy and Commerce Committee in Washington, Dorsey fielded complaints from Democrats that social media firms had allowed their products to be misused by foreign governments, conspiracy theorists such as Alex Jones of Infowars, and even President Trump, who shares his unfiltered opinions with nearly 60 million followers.
"Twitter policies have been inconsistent and confusing," Rep. Frank Pallone, the New Jersey Democrat who became committee chairman this year after his party regained control of the House in November's elections, said at the time. "The company's enforcement seems to chase the latest headline as opposed to addressing systematic problems."
Republicans, meanwhile, questioned whether Twitter and other firms in liberal-leaning Silicon Valley are using content moderation as a pretext for deleting posts that promote conservative views.
"We hope you can help us better understand how Twitter decides when to suspend a user or ban them from the service and what you do to ensure that such decisions are made without undue bias," said then-Chairman Greg Walden, an Oregon Republican.