
Programming note: The Interface will be off Thursday as I finish up a project that I will share with you next week.
Since the reckoning over social networks began in 2016, a popular genre of content has emerged that I like to call Hey, These Search Results Are Bad. This genre of story comprises three parts:
- The reporter searches for something using a social network's search engine.
- The search results are bad.
- The reporter writes a story about how the search results are bad.
Representative stories from the Hey, These Search Results Are Bad vault would include Here's how YouTube is spreading conspiracy theories about the Las Vegas Shooting, by Charlie Warzel; As a conspiracy theory video spread after Texas shooting, YouTube works to tweak its algorithm, by Hamza Shaban; and YouTube promoted a video that falsely attacked a Parkland student. How did this happen?, by Abby Ohlheiser.
You may have noticed that YouTube figures prominently in these stories. As Kevin Roose noted yesterday, YouTube and conspiracy are joined at the hip. And while Google is paying more attention to YouTube search results than ever before, there's still plenty of bad to be found there.
Clearly, certain subjects — particularly mass shootings and, as we talked about yesterday, vaccines — lead to more stories about bad search results than others. And so I was delighted to see that Pinterest had taken note of this phenomenon — and taken a surprisingly bold step to protect against it. Here are Robert McMillan and Daniela Hernandez in the Wall Street Journal:
Pinterest has stopped returning results for searches related to vaccinations, a drastic step the social-media company said is aimed at curbing the spread of misinformation but one that demonstrates the power of tech companies to censor discussion of hot-button issues.
Most shared images on Pinterest relating to vaccination cautioned against it, contradicting established medical guidelines and research showing that vaccines are safe, Pinterest said. The image-searching platform tried to remove the anti-vaccination content, a Pinterest spokeswoman said, but has been unable to remove it completely.
In other words, Pinterest realized that it had what researchers danah boyd and Michael Golebiewski have called a "data void." The researcher Renee DiResta summarizes it here:
A situation where searching for answers about a keyword returns content produced by a niche group with a particular agenda. It isn't just Google results—keyword voids are happening on social too. The most shared articles about vitamin K on Facebook are anti-vax, and the CrowdTangle analytics platform shows those articles are reaching an audience of millions. YouTube results are no better; several of the top 10 results feature notable immunology expert Alex Jones.
In 2017, BuzzFeed reported that Pinterest was awash in bad health information. To its great credit, Pinterest realized the potential for harm, and rather than wring its hands over the rights of fringe anti-vaccination groups to take over its viral machinery, Pinterest simply shut them down. As the story notes, users can still pin fringe images to their own boards, but they can no longer use Pinterest for free viral distribution. This is an approach that some call "freedom of speech versus freedom of reach." You can say what you want, but Pinterest has no obligation to share it with the wider world.
And while I'm gushing, may I just recommend this quote from the Journal's story from Pinterest's public policy and social impact manager, Ifeoma Ozoma:
"It's better not to serve those results than to lead people down what is like a recommendation rabbit hole."
If you want to know what taking care of your community looks like — if you want to know what social responsibility for a tech platform looks like — it looks a lot like what Ozoma is saying right there.
Now, I'm sure some Googlers are reading this story and saying to themselves: that's fine for Pinterest, but this is YouTube we're talking about. YouTubers go to war at the mere suggestion that the site might reduce their view counts; the idea that the company would hide entire categories from search results could trigger some sort of apocalypse.
But what if a kind of apocalypse ... were happening at YouTube already? My colleague Julia Alexander has been chronicling a very bad week at the video site, in which one man's search for the phrase "bikini haul" took him down a rabbit hole leading to a host of child exploitation videos:
The videos aren't pornographic in nature, but the comment sections are full of people time stamping specific scenes that sexualize the child or children in the video. Comments about how beautiful young girls are also litter the comment section.
The reaction was swift. Epic Games (maker of Fortnite), Nestlé, and Disney are among the companies that have pulled their advertising from the platform. YouTube creators are bracing themselves for what, by Alexander's count, would be the fifth "adpocalypse" — a time in which revenue dries up, possibly for many months, as advertisers flee to safer ground.
There's a world in which YouTube proactively seeks out bad search results and data voids, blocking access to them while it works to root out exploitative content. Such a drastic move would surely inspire howls of outrage — and legitimate concerns about the huge power that the company has to set the terms of public debate.
And yet I can't help but be inspired by the move Pinterest took when confronted with the same question. The only folks who lose in this decision are ones who, if they had their way, would trigger a global health crisis. Here's to Ozoma and her team for standing up to them.
Democracy
Can Washington keep watch over Silicon Valley? The FTC's Facebook probe is a high-stakes test.
Tony Romm examines the belief — sometimes espoused in this very newsletter! — that the Federal Trade Commission has become ineffectual and unequal to the task of regulating tech giants:
Nearly a year after announcing an investigation into the incident, the FTC is negotiating with Facebook over a fine that could be billions of dollars, according to multiple people familiar with the probe who spoke on the condition of anonymity last week because they were not authorized to discuss the issues. Experts say the government has to seize on the opportunity to send a message — to Facebook and its peers — that it hears consumers' frustrations and is willing to challenge the tech industry's data-collection practices.
"The Facebook inquiry is a basic test of the credibility of the FTC to be an effective privacy enforcement agency," said William Kovacic, a former Republican commissioner who teaches at George Washington University. "Anything other than a significant penalty will be seen as a form of policy failure and will really impede the agency's ability to function in the future."
Lawmakers want to question Facebook about the privacy of groups
Colin Lecher reports that after a complaint to the FTC regarding the safety of health information in private groups, lawmakers plan to investigate:
Now, a letter from lawmakers on the House Committee on Energy and Commerce is questioning whether Facebook users were "potentially misled" about what data they would reveal by joining a closed group. The letter, addressed to Mark Zuckerberg, questions whether the company "may have failed to properly notify group members that their personal health information may have been accessed by health insurance companies and online bullies, among others." The letter requests a staff briefing about the issues raised in the complaint.
Shoshana Zuboff, whose new book offers a strong critique of Facebook and Google, tells Kara Swisher that data factories like theirs produce a dangerous "asymmetry of knowledge."
There are just a couple problems: One, when customers are fully informed about how their data is being used, they don't like it. So, companies like Google and Facebook have decided to "take without asking," Zuboff said. And whoever has all that data has a tremendous amount of power — so much so that the same people who unwittingly provided more data than they realized to tech companies can then be manipulated toward commercial and political outcomes.
"Right now, surveillance capitalists sit on a huge asymmetry of knowledge," she said. "They have an asymmetry of knowledge, a concentration of knowledge unlike anything ever seen in human history … We have an institutional disfiguring of these huge asymmetries of knowledge and power which are antithetical to democracy.
Twitter Revises Data on Russian Trolls and Their 2017 Activity
Twitter now says that what it previously identified as Russian trolls were more likely Venezuelan trolls, Ben Elgin reports:
On Feb. 8, Twitter removed 228 accounts from the Russian IRA dataset because the social-media company now believes these accounts were operated by a different trolling network located in Venezuela. "We initially misidentified 228 accounts as connected to Russia," Yoel Roth, Twitter's head of site integrity, wrote in an online post. "As our investigations into their activity continued, we uncovered additional information allowing us to more confidently associate them with Venezuela."
Although Twitter's data don't reveal the names of accounts, researchers at Clemson University analyzed the social-media company's changes and said they involve accounts that mostly came online in mid-2017. The researchers, who have constructed and published their own database of the Russian troll farm's output, said those accounts were central in what had appeared to be a surprising surge in post-election activity that was mis-attributed to the Russian troll farm.
WhatsApp is at risk in India. So are free speech and encryption.
Kurt Wagner goes long on the proposal that could end encryption in India — and perhaps around the world:
"I think honestly the biggest [technology] story around the world is India trying to bring these intermediary guidelines," said Jayshree Bajoria, a researcher with the nonprofit organization Human Rights Watch, in an interview with Recode. "We are talking about China-style surveillance here."
This proposed law, known colloquially as Intermediary Guidelines, isn't specific to WhatsApp. If passed, it would apply to all internet companies that host, publish, or store user information, including social networks, messaging platforms, and even internet service providers.
Elsewhere
The Galaxy S10 will have an Instagram mode built into its camera
Here's an interesting growth strategy from Instagram: a return to the days of preloaded software. From Chaim Gartenberg:
Samsung is partnering with Instagram to add a new "Instagram mode" directly to the native camera app on the newly announced Galaxy S10. "We've worked together to rethink the experience of Instagram on the S10," said Instagram's head of product Adam Mosseri onstage at the Galaxy Unpacked event.
Smartphones, teens, and depression: Should we panic? Not yet.
Brian Resnick examines the link between smartphones, young people, and mental health. He finds that there's not much that we can say definitively:
The studies we have so far on the relationship between digital technology use and mental health — for both teens and adults — are more than inconclusive. "The literature is a wreck," said Anthony Wagner, chair of the department of psychology at Stanford University. "Is there anything that tells us there's a causal link? That our media use behavior is actually altering our cognition and underlying neurological function or neurobiological processes? The answer is we have no idea. There's no data."
Several researchers I spoke to — even those who believe the links between digital technology use and mental health problems are overhyped — all think this is an important question worth studying, and gathering conclusive evidence on.
For his first personal challenge event of the year, Facebook CEO Mark Zuckerberg sat down with Harvard Prof. Jonathan Zittrain for a friendly discussion about the internet and society. You can read a transcript here; I have no issue with these events, but I stand by my opinion from last summer that we tend to overrate the importance of what the CEO of a tech platform says about it.
When Kids Realize Their Whole Life Is Already Online
Taylor Lorenz explores the phenomenon of children becoming aware that their parents have been posting photos and stories about them to the public internet since birth. Which is apparently called "sharenting." A word I do not feel great about!
Cara and other tweens say they hope to lay down ground rules for their parents. Cara wants her mom to tell her the next time she posts about her, and the 11-year-old would like veto power over any photo before it goes up. "My friends will always text or tell me, like, 'OMG that pic your mom posted of you is so cute,' and I'll get really self-conscious," she said. Hayden, a 10-year-old, said he realized several years ago that his parents used a dedicated hashtag including his name on photos of him. He now monitors the hashtag to make sure they don't post anything embarrassing.
Once kids have that first moment of realization that their lives are public, there's no going back. Several teens and tweens told me this was the impetus for wanting to get their own social-media profiles, in an effort to take control of their image. But plenty of other kids become overwhelmed and retreat. Ellen said that anytime someone has a phone out around her now, she's nervous that her photo could be taken and posted somewhere. "Everyone's always watching, and nothing is ever forgotten. It's never gone," she said.
Here's an interesting nugget in an otherwise anodyne item about Snap being an innovative company:
In December 2017, the company launched Lens Studio, a tool to publish and share augmented reality experiences created in-house and by the Snapchat community. By the end of 2018, over 300,000 Lenses had been created through Lens Studio, and those Lenses were viewed by Snapchatters more than 35 billion times. Over 70 million people use AR on Snapchat every day for an average of 3 minutes per person, making the platform the largest and most engaged global audience for these new kinds of experiences.
Twitter gets Chrissy Teigen to spill her Twitter secrets
Twitter released the best ad for its service to date last week — a zippy video Q&A with power user Chrissy Teigen. You can find it here. It's notable for how (1) it makes Twitter seem like it's mostly just a lot of fun, which it often is; and (2) it's the first Twitter ad to be made by people who seem like they actually use Twitter. A huge leap forward.
The curse of the Twitter reply guy
Chloe Bryan profiles a "mostly harmless but decidedly annoying phenomenon. A lot of people, mostly women, have noticed that one or two men always, no matter what, reply to their tweets." She goes on:
These men are colloquially known as "reply guys." While no reply guy is the same — each reply guy is annoying in his own way — there are a few common qualities to watch out for. In general, reply guys tend to have few followers. Their responses are overly familiar, as if they know the person they're targeting, though they usually don't. They also tend to reply to only women; the most prolific reply guys fill the role for dozens of women trying to tweet in peace.
It's usually pretty easy to ID a reply guy. The sheer volume of responses is a reliable indicator. But there's still some literature on the subject. In a 2018 piece for McSweeney's, for instance, Emlyn Crenshaw wrote an extremely funny Reply Guy Constitution, which focuses above all else on men's commitment to "weigh in on women's thoughts at every possible opportunity."
TikTok Has Created A Whole New Kind Of Cool Girl Called Egirls
Lauren Strapagiel covers the TikTok-centric phenomenon of the "egirl" — "a new kind of cool girl who was born and lives on the platform. She's funny, she's cute, she's totally '90s, and she knows exactly how to play with expectations."
Egirls have become a very visible demographic on TikTok — and, it appears, only on TikTok — consisting mainly of teenagers. The traits of an egirl are as ironic as they are oddly specific.
The makeup is the most iconic part of the look — thick black eyeliner with wings and cute little shapes drawn with the same eyeliner under the eyes. Usually the shapes are hearts, but sometimes they're dots or x's, and they're drawn with the sure hand of someone who grew up idolizing beauty bloggers. Across the cheeks and nose is a bright sweep of blush, with a touch of highlighter just on the button end, usually sitting above a septum piercing. Lips have either a clear gloss or a dark matte lipstick.
Launches
Twitter is launching a public test of its redesigned replies
Hey, I wrote this:
In October, Twitter said it is redesigning conversations on the platform in an effort to encourage friendlier and more useful discussions. Now the company is ready to test the redesign with a wider group of users, and will take applications from anyone who wants to try it out. Users are invited to apply at this link.
Improving Location Settings on Android
Facebook users on Android now have more granular location settings available to them.
Takes
Twitter Should Have Groups and Here Is How They Should Work
Rex Sorgatz has some strong ideas about how Twitter groups should work. His novel idea: to reclaim the hashtag:
Everyone has opened Twitter and been vexed by the flood of arbitrary tweets about The Bachelor finale or the NBA Finals. Yet everyone also has their version of The Bachelor finale or the NBA Finals — topics you yearn to discuss, but fear breach some unspoken etiquette about blasting tangential musings to everyone. (Some of you should fear this more!) Twitter Groups solves this problem: Scribble your witticisms into #TheBachelor and #NBAFinals, and you instantly cease annoying the 90% of your followers who have no interest in Colton or LeBron.
When you tweet from within a Group, your message is placed directly into the context of that Group. Those tweets are still public, in the sense that anyone can still find them, but they are suppressed from the main timeline, unless the viewer has also joined that Group. This serves the dual purpose of removing mass noise and encouraging niche conversation. Interactions become lighter, more intimate, more contextual.
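To make the mechanics of that proposal concrete, here's a minimal sketch of the filtering rule as I read it: Group-tagged tweets remain public but are dropped from the main timeline of anyone who hasn't joined the Group. The Tweet and Viewer shapes below are hypothetical illustrations, not anything Twitter has built or announced.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Tweet:
    author: str
    text: str
    group: Optional[str] = None  # e.g. "#NBAFinals"; None means an ordinary tweet

@dataclass
class Viewer:
    handle: str
    joined_groups: Set[str] = field(default_factory=set)

def main_timeline(tweets: List[Tweet], viewer: Viewer) -> List[Tweet]:
    """Roughly Sorgatz's rule: Group tweets stay public, but they only
    surface in the main timeline for viewers who have joined that Group."""
    return [
        t for t in tweets
        if t.group is None or t.group in viewer.joined_groups
    ]

# A viewer who joined #NBAFinals sees the basketball tweet, while the
# #TheBachelor tweet stays out of their main timeline (but remains public).
tweets = [
    Tweet("rex", "Colton, no!!", group="#TheBachelor"),
    Tweet("rex", "LeBron is unreal tonight", group="#NBAFinals"),
    Tweet("rex", "New blog post is up"),
]
viewer = Viewer("casey", joined_groups={"#NBAFinals"})
for t in main_timeline(tweets, viewer):
    print(t.text)
```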
And finally ...
The most memorable moment of Zuckerberg's interview with Zittrain — for me, anyway — was this rather funny exchange. (CNBC really just gives away the whole thing in the headline, don't they? I say bring back the curiosity gap!)
While talking about his desire to build more end-to-end encryption in Facebook's services, Zuckerberg said, "I basically think that if you want to talk in metaphors, messaging is like people's living room, and we definitely don't want a society where there's a camera in everyone's living room."
Harvard Law professor Jonathan Zittrain, who hosted the discussion, pointed out that Facebook's Portal is quite literally a camera in people's living rooms.
It was a funny moment, but could have been worse for Facebook. At least Portal's microphone is not a secret — which is more than we can say for this Nest device.
Talk to me
Send me tips, comments, questions, and non-polluted Pinterest pins: casey@theverge.com.