The Week in Tech: Facebook’s First Step Toward Treating Our Data Better

The social network’s latest privacy feature at least lets us imagine how we could better control our data in the future.

Credit: Marcio Jose Sanchez/Associated Press

Each week, we review the news, offering analysis of the most important developments in the tech industry.

Hi, I’m Jamie Condliffe. Greetings from London. Here’s a look at this week’s tech news:

Big Tech is listening. Listening in, certainly. But listening to its critics? Well, maybe a little.

One trend of the past few months that I’ve not addressed here in detail is the drip feed of stories about what big tech companies are doing with user audio — particularly words uttered to A.I. assistants like Alexa and Siri, ones sent via Messenger or directed at your Xbox. It turns out that Amazon, Apple, Facebook, Google and Microsoft have all been sending sound clips to third-party companies to have them transcribed.

This will surprise no one in Silicon Valley: Picking out information from those audio files helps improve the artificial intelligence systems that now underpin many of Big Tech’s products.

Still, it surprised users. And that’s really my main reflection on this news: It’s a reminder of Big Tech’s reluctance to embrace transparency. The companies rarely explain how data is being used and shared. “The main fear, I think, is that in opening a dialogue with users, they will become more paranoid,” said Max Van Kleek, an associate professor at Oxford University who works on human-centered computing.

But there is now, more than ever, pressure for this to change. And this week, we got a preview of how that could happen.

On Tuesday, Facebook introduced a new tool, called Off-Facebook Activity, that lets users see the mountain of data that sites and apps have shared with the social network: browsing histories gathered via “Login with Facebook” partners or the company’s tracking pixels. Users can also disconnect that data from their account, so Facebook can’t use it for targeted advertising.

What Facebook is offering here “is a good first step” to building systems that are more honest with users about the company’s data collection and sharing, said Mr. Van Kleek. And, if you’re feeling optimistic, you can imagine from the tool a future that Mr. Van Kleek would like to see, where we all have more control over our data: Maybe you’ll choose to give information to some algorithms when you’re willing to trade privacy for better service, but keep it away from others you don’t trust — a kind of licensing out of your data.
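
To make that vision concrete, here is a minimal sketch in Python of what “licensing out” your data could look like: the user attaches a grant naming the purposes they trust, and a service sees the data only when its purpose is covered. The class, purpose strings and data below are hypothetical illustrations, not any company’s actual design.

    from dataclasses import dataclass, field

    @dataclass
    class DataGrant:
        # Hypothetical license a user attaches to their own data,
        # listing the uses they are willing to trade privacy for.
        allowed_purposes: set = field(default_factory=set)

        def allows(self, purpose: str) -> bool:
            return purpose in self.allowed_purposes

    def run_with_consent(user_data, grant, purpose):
        # A service sees the data only if its purpose is licensed.
        if not grant.allows(purpose):
            return None  # withheld: the user doesn't trust this use
        return f"processed {len(user_data)} records for {purpose}"

    # Trade privacy for better service in one place, withhold elsewhere.
    grant = DataGrant(allowed_purposes={"recommendations"})
    print(run_with_consent(["a", "b", "c"], grant, "recommendations"))
    print(run_with_consent(["a", "b", "c"], grant, "ad_targeting"))  # None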

But that’s not what’s happening here. Disconnect your browsing data from your Facebook account, and the company can still make use of it; it just can’t be used to aggressively target ads. This is not the jump to the “delete history” feature that Mark Zuckerberg promised the tool would be. It’s not close to stopping Facebook’s collecting the data in the first place. But it’s a first step.
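
To see the distinction in miniature, here is a conceptual Python sketch, emphatically not Facebook’s actual data model: disconnecting severs the link between tracked events and your identity for ad targeting, while a true “delete history” would remove the events themselves.

    # Hypothetical events that sites and apps shared with the network.
    browsing_events = [
        {"site": "shoes.example", "user_id": 42},
        {"site": "news.example", "user_id": 42},
    ]

    def disconnect(events, user_id):
        # Off-Facebook Activity style: the events survive, but the
        # identity link that drives targeted ads is severed.
        return [dict(e, user_id=None) if e["user_id"] == user_id else e
                for e in events]

    def delete_history(events, user_id):
        # What a true "delete history" would mean: the events are gone.
        return [e for e in events if e["user_id"] != user_id]

    print(disconnect(browsing_events, 42))      # data kept, unlinked
    print(delete_history(browsing_events, 42))  # data removed: []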

Facebook and Twitter this week identified and removed Chinese accounts that they said were sowing divisive messages about the Hong Kong protests. Facebook deleted a handful of pages, Groups and accounts; Twitter culled 936 accounts and banned state-controlled media from promoting tweets. (China disputes the social networks’ claim that this was a state-backed disinformation campaign.) YouTube joined them on Thursday, deleting 210 channels that had spread disinformation.

Researchers who study disinformation told me that the swift action by the platforms demonstrated some progress. “In the past it was denial,” said Sinan Aral, a professor at the M.I.T. Sloan School of Management, about the past responses of tech platforms to misinformation. “Then it was slow reaction. Now things are moving in the right direction.” But, he added: “I wouldn’t say it’s where we want it to be. Ultimately it needs to be proactive.”

That’s not easy to achieve for many reasons. A look at the Chinese content that Facebook and Twitter responded to shows that not all disinformation is created equal. Russia’s tactics, used to interfere with the 2016 and 2018 elections in the United States, were offensive, focused on so-called wedge issues to “widen the middle ground” and make it harder for people “to come together to negotiate,” said Samantha Bradshaw, a researcher at the Oxford Internet Institute. China’s have been defensive, “using the voice of authoritarian regimes, for suppressing freedom of speech” and “undermining and discrediting critical dissidents.”

I asked Professor Aral which kind of misinformation was more effective. “Let me be very clear,” he said. “We have very little understanding about its effectiveness.”

There’s no consensus on how to monitor it, or measure its impact. In large part, that’s because social media platforms have been reluctant to share details about how their algorithms work, or how content is moderated. “Some of these really basic stats, researchers still don’t have access to,” Ms. Bradshaw said.

Only by better understanding how misinformation works will we be able to figure out how to overcome it. And unless we want tech platforms to unilaterally solve the problem, they will need to give up some information to make that happen.

If the conclusions of those two stories seem in conflict, that’s because they are. Social networks are under pressure to better protect user data. They’re also being asked to open up so we can understand how they’re tackling issues like misinformation and hate speech.

Professor Aral called this the “Transparency Paradox,” a term he coined in 2018. “The only way to solve it,” he said, “is to thread the needle, by becoming more transparent and more secure at the same time.”

That’s not impossible: There are ways to share data with trusted partners while preserving a level of user privacy. One example, differential privacy, is already used by Apple, Google and even the United States Census Bureau to limit the amount of personal information that is shared with an organization while still allowing it to make useful inferences from the data.
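
For the technically curious, here is a minimal sketch of the Laplace mechanism, one standard way differential privacy is implemented: noise calibrated to a query’s sensitivity is added to the true answer, so the aggregate stays useful while any single person’s contribution is statistically hidden. The data set and epsilon value below are illustrative assumptions, not drawn from any real deployment.

    import numpy as np

    def private_count(values, predicate, epsilon=0.5):
        # A counting query has sensitivity 1: one person joining or
        # leaving the data set changes the true count by at most 1.
        # Laplace noise with scale = sensitivity / epsilon satisfies
        # epsilon-differential privacy for this query.
        true_count = sum(1 for v in values if predicate(v))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Roughly how many users enabled a setting, without revealing
    # whether any particular user did.
    opted_in = [True, False, True, True, False, True]
    print(private_count(opted_in, lambda enabled: enabled))

Smaller epsilon values add more noise and stronger privacy; real deployments tune that trade-off query by query.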

The problem is that reform of data privacy and increased transparency could threaten the core business model of Big Tech — basically, algorithms that churn data to earn cash — by drying up its data stash and revealing its secret algorithmic sauce. So for now, companies seem to be making only the smallest of steps toward change. Until they make bigger moves, the critics are unlikely to let up.

  • Huawei got another reprieve. The United States government said American companies would be allowed to continue doing some business with the Chinese tech giant for another 90 days.

  • Facebook’s liberal bias audit pleased nobody. It reported concerns about potential bias but didn’t say whether bias exists. Republicans called it a whitewash; Democrats said it lent legitimacy to unfounded claims.

  • Antitrust scrutiny of Big Tech intensified. States are joining the Justice Department, the Federal Trade Commission and Congress in the investigations. (Bonus: Here are four main ideas for reining in the industry.)

  • The European Union may clamp down on facial recognition. It is said to be considering how to “impose strict limits” on the use of the technology, The Financial Times reports.

  • Ransomware hit 22 Texas towns. One expert told The New York Times that it was the largest such coordinated attack yet, and could presage more widespread strikes.

  • President Trump claimed that Google manipulated the 2016 election. But the report he cited has been widely discredited.

  • Big Tech companies don’t like France’s new digital tax and will testify against it to the U.S. Trade Representative next week. (Amazon plans to pass the costs on to French sellers.)

  • For sale on Facebook: guns. Its Marketplace is being gamed by sellers who offer overpriced gun boxes that actually contain rifles, The Wall Street Journal reports.



