Before the Uvalde massacre, a tech platform failed to answer kids’ alarms. The “technological solutionism” idiots among us.


The key point?  That, again and again, children say they report people for breaking the rules; and again and again, those people quickly reappear online. So what’s the point of the reporting tools?

Just another example of the “technological solutionism” idiots among us who posit that any social or political problem just needs a technical fix. 

 

BY:

Eric De Grasse
Chief Technology Officer
PROJECT COUNSEL MEDIA

 

2 June 2022 – A week ago today, an 18-year-old man walked into an elementary school in Uvalde, Texas, and committed the latest in our nation’s never-ending series of senseless murders. And in the aftermath of that horror — 19 children dead, two teachers dead, 18 more injured — attention once again turned to what role platforms might have played in enabling the violence.

This question can feel both urgently necessary and also somehow beside the point. Necessary because people (often teenagers) are constantly being arrested after making threats on social media, and the Uvalde case shows once again why those threats must be taken more seriously.

And yet it’s also clear that America’s gun violence problem will not be solved at the level of platform policy or enforcement. It can be solved only by making it harder for people to acquire and use guns, particularly the assault weapons that figure in every single story like this one.

But around here at Project Counsel Media we always focus on platforms: the pipes and wires and networks that make the internet and web work. And with that in mind, let’s take a look at what we’ve learned about the shooter’s online behavior in the week since the shooting. It speaks to issues around child safety and platforms that we’ve discussed many times before – and points to some clear steps that platforms (and, if necessary, regulators) should take next.

Aside from a handful of private messages, the Uvalde shooter appears not to have used Facebook much. That and Instagram were once the default platforms for making threats like these, but new platforms are growing in popularity with young people. The Uvalde shooter liked one called Yubo, created by a French company called Twelve App. It’s a “live chilling” app similar to Houseparty, the app that Meerkat became after helping to launch the live-streaming craze in the United States in 2015. It’s also apparently quite popular, with more than 18 million downloads in the United States alone, according to the market research firm Sensor Tower. Like Houseparty, Yubo lets users broadcast themselves live to a small group of friends.

The twist is that Yubo focuses on making new friends – finding people with similar interests and letting them chat. Particularly young people. “Yubo is a social live-streaming platform that celebrates the true essence of being young,” the company says. Perhaps for that reason, it seems to have also attracted more than its share of older men and their unwanted sexual advances.

In the days after the massacre, reporters discovered that Yubo appears to have been the shooter’s primary social app. He used it, among other things, to threaten rape – and school shootings. Here are Daniel A. Medina, Isabelle Chapman, Jeff Winter and Casey Tolan at CNN:

Three users said they witnessed Ramos threaten to commit sexual violence or carry out school shootings on Yubo, an app that is used by tens of millions of young people around the world. The users all said they reported Ramos’ account to Yubo over the threats. But it appeared, they said, that Ramos was able to maintain a presence on the platform. CNN reviewed one Yubo direct message in which Ramos allegedly sent a user the $2,000 receipt for his online gun purchase from a Georgia-based firearm manufacturer.

At the Washington Post, Silvia Foster-Frau, Cat Zakrzewski, Naomi Nix and Drew Harwell found a similar pattern of behavior:

A 16-year-old boy in Austin who said he saw Ramos frequently in Yubo panels told the Post that Ramos frequently made aggressive, sexual comments to young women on the app and sent him a death threat during one panel in January. “I witnessed him harass girls and threaten them with sexual assault, like rape and kidnapping,” said the teen. “It was not like a single occurrence. It was frequent.” He and his friends reported Ramos’s account to Yubo for bullying and other infractions dozens of times. He never heard back, he said, and the account remained active.

Yubo told the network that it is cooperating with the investigation, but declined to offer any details on why the shooter was able to remain on the platform despite having been reported for making threats over and over again.

It can seem shocking that a person who repeatedly makes violent threats, and is reported to the platform for doing so, faces no consequences. And yet for years now, children have been telling us that this is a regular occurrence for them.

Last year we wrote about a report based on a survey of minors by Thorn, a nonprofit organization that builds technology to defend children from sexual abuse. Here are two findings from that survey that are relevant to the Uvalde case:

• Children are more than twice as likely to use platform blocking and reporting tools than they are to tell parents and other caregivers about what happened: 83 percent of 9- to 17-year-olds who reported having an online sexual interaction reacted with reporting, blocking, or muting the offender, while only 37 percent said they told a parent, trusted adult, or peer.

• The majority of children who block or report other users say those same users quickly find them again online: More than half of children who blocked someone said they were contacted again by the same person, either through a new account or a different platform. This was true both for people children knew in real life (54 percent) and people they had only met online (51 percent).

In short: most kids use platform reporting tools instead of telling parents or other caregivers about threats online, but in most cases those reporting tools aren’t effective.

Julie Cordua, Thorn’s CEO, likened platform reporting tools to fire alarms that have had their wires cut. In the Uvalde case, we see what happens when those alarms aren’t connected to effective enforcement mechanisms.

If there’s any room for optimism here, it’s in the fact that criminals really do seem to be moving away from better-defended platforms to ones that are less established – and, in some cases, have fewer policy and enforcement tools. Surely part of that is simply evidence of changing tastes – Discord and Twitch are much more popular with the average teenager today than Facebook or perhaps even Instagram is.

But part of it is also that Meta, YouTube, and Twitter in particular have invested heavily in content moderation, making it harder for bad actors to make threats with impunity and evade bans. That speaks to the value of content moderation, to both companies and the world at large.

Peruse Yubo’s website and history and you will see a company that appears to be committed to good stewardship. The app has clearly posted community guidelines, albeit ones that have not been updated since 2020. It has a policy on ban evasion. And it uses facial-recognition technology in an effort to prevent users younger than 13 from signing up. The company also says that it uses machine learning to scan live streams in an effort to find bad behavior, and scans text messages to look for private information that users might be about to share unwittingly, such as phone numbers.

These are good, useful, and expensive tools that many other platforms do not offer. At the same time, these are voluntary measures in a world where regulators still have not established minimum standards for content policy, moderation, enforcement, or reporting what they find – aka “transparency.” We know that Yubo had a policy against basically everything the Uvalde shooter did. We know that kids saw what he was doing online, grew concerned, and used the app’s reporting tools to try to prevent it from happening in the future. And, as is usually the case in these situations, we know nothing about what happened next.

Were the reports reviewed? By humans or machines? What did they find? Platforms that allow users to create accounts should be required to let people report those accounts for bad behavior. For instance, did you know you still can’t report an account on iMessage, one of the world’s biggest communications services?

Platforms should also be required to let us know what they do with those reports, both individually (to the person who reported the account) and in the aggregate (so we can understand bad behavior on platforms overall). Doing so will sadly do nothing to stop the epidemic of gun violence in this country. But it will make good on the promise that apps like Yubo make to their users when they let them report bad behavior – that they will take action when they receive those reports, and work to prevent further harm.

Nobody forced Yubo to build the systems that Thorn’s Cordua rightly called “fire alarms.” But it did. The least that Yubo and other platforms can do now is offer us some evidence that those alarms are actually plugged in.

Meanwhile, the right-wing misinformation machine revved up after the Uvalde massacre, exploiting the shooting to promote false conspiracy theories. As the article notes, and as we have noted in previous posts, right-wing conspiracy theories moved faster than ever from the fringe to the mainstream, thanks to a misinformation infrastructure that simply grows stronger over time.

Alas, perhaps all is for naught. America is a sick, sick society. Americans will never heal the rifts that divide them and put forth any policies to ensure America really is a better nation. C’est dommage 😥
