Article 43
Broadband Privacy
Saturday, April 29, 2023
And You Were Worried About Cookies
One of the ways [reCAPTCHA] v3 checks validity is through examining whether you already have a Google cookie installed on your browser. Cookies are stored data about your interactions with a site, generally so elements can load again faster. Sign into a Google account, and reCAPTCHAs like you already…
but due to v3’s wider scope, a more comprehensive online profile must be in place too… The service gathers software and hardware information about site visitors, like IP address, browser plug-ins, and the device you’re using.
- Google’s ReCAPTCHAs Also Capture Your Private Information, MakeUseOf, 2018
reCAPTCHA Enterprise helps state governments reduce false claims by preventing adversaries from automatically reusing credentials on unemployment claims portals.
- How reCAPTCHA Enterprise protects unemployment and COVID-19 vaccination portals, Google Cloud Blog, 2021
Can’t get past captcha to request unemployment check
- Reddit Thread, 2021
I’m one of those that tries to practice good web surfing habits. A noble, but USELESS exercise.
The FLORIDA UNEMPLOYMENT WEBSITE used to only use Google’s reCAPTCHA - making it easy for them to learn who’s filing for unemployment - but then during Covid, they got a validation service from ID ME.
Dealing with that website was painful. I couldn’t sign up because it insisted on texting a code to my mobile phone number on file at the unemployment office, but the only number they have for me is a land-line that doesn’t have text. The system kept telling me to answer the text I never got, and never will get.
There was no number for tech support, and only a web form to fill out that kept kicking me out with a 500 ERROR - leaving me wondering if it actually worked. I wrote to Governor DeSantis and the local news for help. It took six weeks to finally sign up. Now that place has copies of my driver’s license, mortgage bills, social security number, my face, etc. Wait until these guys get hacked.
It brought back memories of the government’s 2015 OPM LEAK that included me and 20 million other Americans whose records were stolen. For that the government gave us a lousy year of free credit monitoring. Gee thanks. They could have given us new social security and driver’s license numbers. It’s not like after a year the files will disappear. Although I bet they can set them up with self-destructs like OFFICE 365’S DATA RETENTION.
If those files were MP3S or KINDLE EBOOKS they’d be locked down with DRM.
Since they’re not, my belief is the OPM files are out there forever. Here in America protecting things like music and ebooks is more important than ANY PERSONALLY IDENTIFIABLE DATA OF OURS.
Computer analytics are OUT OF CONTROL, infosec is a LAUGH, and internet privacy even more laughable.
We need laws with teeth protecting privacy. And better oversight of these companies with government contracts.
What do we do about GOOGLE who may know more about us - government and public - than the AKASHIC RECORDS?
Where is the public discussion?
---
Google Promises reCAPTCHA Isn’t Exploiting Users. Should You Trust It?
An innovative security feature to separate humans from bots online comes with some major concerns
By Owen Williams
OneZero
July 19, 2019
A surprising amount of work online goes into proving you’re not a robot. It’s the basis of those CAPTCHA questions often seen after logging in.
They come in many forms, from blurry letters that must be identified and typed into a box to branded slogans like “Comfort Plus” ON THE DELTA WEBSITE - as if the sorry state of modern air travel wasn’t already dystopian enough. The most common, however, is Google’s reCAPTCHA, which launched ITS THIRD VERSION at the end of 2018. It’s designed to drastically reduce the number of challenges you must complete to log into a website, assigning an invisible score to users depending on how “human” their behavior is. CAPTCHA, after all, is designed to weed out bot accounts that flood systems for nefarious ends.
But Google’s innovation has a downside: The new version monitors your every move across a website to determine whether you are, in fact, a person.
A necessary advancement?
Before we get into the how of this new technology, it’s useful to understand where it’s coming from. The new reCAPTCHA disrupts a relatively ancient web technology that has been harnessed for plenty of things beyond security.
CAPTCHA - which stands for Completely Automated Public Turing test to tell Computers and Humans Apart - first appeared in the late ‘90s, and it was DESIGNED BY A TEAM at the early search engine AltaVista. Before CAPTCHA, it was easy for people to program bots that would automatically sign up for services and post spam comments by the thousands. AltaVista’s technology was based on a printer manual’s advice for avoiding bad optical character recognition (OCR), and the iconic blurry text in a CAPTCHA was specifically designed to be difficult for a computer to read but legible for humans, thereby foiling bots.
By the early 2000s, these tests were everywhere. Then came reCAPTCHA, developed by researchers at Carnegie Mellon and purchased by Google in 2009. It used the same idea but in an innovative way: The text typed by human users would identify specific words that programs were having trouble recognizing. Essentially, programs would scan text and flag words they couldn’t recognize. Those words would then be placed next to known examples in reCAPTCHA tests - humans would verify the known words and identify the new ones.
By 2011, GOOGLE HAD DIGITIZED the entire archive of the New York Times through reCAPTCHA alone. People would type in text from newspaper scans one blurry CAPTCHA at a time, ultimately allowing Google to make the Times back catalog searchable forever. While creating a velvet rope to keep bots off sites, Google had managed to conscript human users into doing the company’s grunt work.
With that achievement under its belt, reCAPTCHA switched to showing pictures from Google’s Street View software in 2014, as it does today. After pressing the “I’m not a robot” box, you might be prompted to recognize which of nine images contain bicycles or streetlights. Behind the scenes, Google reduced the frequency at which people were asked to complete these tests by PERFORMING BEHAVIORAL ANALYSIS - reCAPTCHA can now run in the background and track how people use websites.
If a Google cookie is present on your machine, or if the way you use your mouse and keyboard on the page doesn’t seem suspiciously bot-like, visitors will skip the Street View test entirely. But some privacy-conscious users have complained that clearing their cookies or browsing in incognito mode DRASTICALLY INCREASES the number of reCAPTCHA tests they’re asked to complete.
Users have also pointed out that browsers competing with Google Chrome, LIKE FIREFOX, require users to complete more challenges, which naturally raises a question: Is Google using reCAPTCHA to cement its own dominance?
Google’s perspective
To use its latest version of reCAPTCHA, Google asks that DEVELOPERS INCLUDE ITS TRACKING TAGS on as many pages of their websites as possible in order to paint a better picture of the user. This doesn’t exist in a vacuum: Google also offers Google Analytics, for example, which helps developers and marketers understand how visitors use their website. It’s a fantastic tool, included on more than 100,000 OF THE TOP ONE MILLION visited websites according to “Built With,” but it’s also part of a strategy to monitor users’ habits across the internet.
The new version of reCAPTCHA fills in the missing pieces of that picture, allowing Google to further reach into those sites that might not use its Analytics tool. When pressed on this, GOOGLE TOLD FAST COMPANY that it won’t capture user data from reCAPTCHA for advertising and that the data it does collect is used for improving the service.
But that data remains sealed within a black box, even to the developers who implement the technology. The DOCUMENTATION for reCAPTCHA doesn’t mention user data, how users might be tracked, or where the information ends up - it simply discusses the practical parts of the implementation.
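(For a sense of what those practical parts look like, here is a minimal sketch of the server-side half of a v3 integration, based on Google’s published siteverify endpoint; the secret key and score threshold below are placeholders, and the page itself must separately load Google’s api.js tag and call grecaptcha.execute() to obtain the token. Notice that the response hands back only a score - nothing explains which signals produced it.)

```python
# Minimal sketch of the server-side reCAPTCHA v3 check, based on Google's
# published "siteverify" endpoint. SECRET_KEY and SCORE_THRESHOLD are
# placeholders for illustration.
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "your-secret-key"   # placeholder: issued in the reCAPTCHA admin console
SCORE_THRESHOLD = 0.5            # placeholder: each site picks its own cut-off

def check_token(token: str, remote_ip: str | None = None) -> bool:
    """Send the browser-issued token to Google and decide based on the returned score."""
    payload = {"secret": SECRET_KEY, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    result = requests.post(VERIFY_URL, data=payload, timeout=10).json()
    # v3 returns a 0.0-1.0 "humanness" score; the behavioral signals behind it
    # are not disclosed to the developer.
    return result.get("success", False) and result.get("score", 0.0) >= SCORE_THRESHOLD
```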
I asked Google for more information and what its commitment is to the long-term independence of reCAPTCHA relative to its advertising business - just because the two aren’t bound together now doesn’t mean they couldn’t be in the future, after all.
A Google representative says “reCAPTCHA may only be used to fight spam and abuse” and that “the reCAPTCHA API works by collecting hardware and software information, such as device and application data, and sending these data to Google for analysis. The information collected in connection with your use of the service will be used for improving reCAPTCHA and for general security purposes. It will not be used for personalized advertising by Google.”
That’s great, and hopefully Google maintains this commitment. The problem is that there’s no reason to believe it will. The introduction of a powerful tracking technology like this is a move that should come with public scrutiny, because we’ve seen in the past how easily things can go sour. Facebook, for example, promised in 2014 that WhatsApp would remain independent, separate from its backend infrastructure, but WENT BACK ON THAT DECISION after just two years. When Google acquired Nest, it promised to keep it independent but RECANTED FIVE YEARS LATER, requiring owners to migrate to a Google account or lose functionality.
Unfortunately, as users, there’s little we can do. There’s no way to opt out of reCAPTCHA on a site you need to use, forcing you to either accept being tracked or stop using a given service altogether. If you don’t like those full-body scanners at airports, you can at least still opt out and get a manual pat-down. But if a site has reCAPTCHA, there’s no opting out at all.
If Google intends to build tools like this with the public good in mind rather than its bottom line, then the company must find better ways to reassure the world that it won’t change the rules when it’s convenient. If it were willing to open-source the project (as it has with many, many others), move it outside the company, or, at the very least, establish third-party oversight, perhaps we could start building that trust.
---
The IRS wants your selfie. ID.me CEO says don’t worry about it.
By Irina Ivanova
CBS News
January 28, 2022
“ID me”, the verification service that most U.S. states turned to during the pandemic to confirm the identity of people applying for unemployment aid, attracted public scrutiny this month when it was revealed that the Internal Revenue Service would start requiring anyone wanting to check their tax information online to register for an account with the private company.
The IRS move has sparked outrage among civil liberties advocates and ordinary taxpayers over concerns that the system - which requires users to upload their ID and submit a selfie or video chat with an agent - could expose troves of personal information to hackers. Some lawmakers also expressed reservations, with Senator Ron Wyden of Oregon SAYING he is “very disturbed” by the IRS’ plan. The agency is paying $86 million for the contract.
Blake Hall, “ID me’s” founder and CEO, sees it differently. In an interview with CBS MoneyWatch, he described the company’s verification technology as both more inclusive than other identification options - many of which won’t verify anyone who lacks a credit report, for instance - and more secure.
“What we’re doing is simply the digital equivalent of what every American does to open up a bank account,” Hall said.
In Hall’s view, the IRS is under assault from burgeoning criminal gangs. ID.me has already stopped would-be fraud in “tens of thousands” of cases, he said.
Over a Zoom interview, Hall shared images of several would-be fraudsters who he said tried to fool “ID dot me” by wearing a mask to take a selfie. “If that check didn’t exist, those people would have become victims of identity theft,” Hall said.
Hall said that just 10% of the people who sign up with “ID me” can’t complete the company’s selfie process and need to move on to verify their identity with a video agent.
No alternative route
However, with 70 million Americans already signed up with the system, even 10% can add up to a lot. State officials have documented complaints of people being unable to prove their identity and being wrongly cut off from benefits. A REPORT from Community Legal Services of Philadelphia called the process “extremely difficult and tedious to complete.”
Several people reached out to CBS MoneyWatch to describe being caught in limbo after they were unable to verify themselves on “ID me”.
Arizona resident Michelle Ludlow said she tried to get a new driver’s license last summer at the HEIGHT OF THE PANDEMIC. Because government offices were closed for in-person business, Ludlow tried verifying herself online with “ID me” - trying for half an hour, over several days, with and without glasses. But the system wouldn’t recognize her face as the one on her license, she said.
“There was no alternative route to go if “ID me” couldn’t make a match with a selfie,” she said in an email.
Ludlow works in a tax-preparation office and is concerned the selfie step will make it impossible for some of her clients to access their IRS records.
“Some of our clients have trouble sending us documents via email, so I can only imagine their frustration at the new system - especially if it doesn’t work,” she said.
Mandatory arbitration
Critics of “ID me” also question the wisdom of having a private company that isn’t subject to open-records laws be the gatekeeper for Americans’ access to vital government services. They point out that “ID me” is required to keep users’ data for seven years - even when a person asks for its deletion - to comply with government requirements.
Users who sign up for “ID me” also have to agree to a mandatory arbitration provision, giving up their right to sue the company in court or join a class-action lawsuit if, for instance, their identity is stolen.
To this, Hall said that Americans could access government services without going online. For instance, taxpayers can request their IRS records and wage transcripts by calling the agency - assuming they’re among the 1 in 4 callers who can get through.
“There are alternative ways to interact with virtually every federal agency that we support,” he said.
“We’ve never been in favor of being the only way to get in,” he continued. “One day,” he suggested, “online identity verification will be much like credit cards, with several options users can choose from.”
“It should be more like Visa and banking,” he said of the emerging industry and its technology. “As long as you can meet the standards, folks should be able to pick who they want their login provider to be.”
Wednesday, April 26, 2023
Kudos Mullvad VPN
Remember LAVABIT?
I thought they were cool sticking up for their customers when the surveillance state CAME KNOCKING:
I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit. After significant soul searching, I have decided to suspend operations. I wish that I could legally share with you the events that led to my decision. I cannot. I feel you deserve to know what’s going on - the first amendment is supposed to guarantee me the freedom to speak out in situations like this. Unfortunately, Congress has passed laws that say otherwise. As things currently stand, I cannot share my experiences over the last six weeks, even though I have twice made the appropriate requests.
Here’s another cool company - MULLVAD. They run a VPN service, and make a FREE OPEN-SOURCE privacy browser for the public.
The BROWSER is one reason I think they’re cool.
The Mullvad Browser is a privacy-focused web browser developed in a collaboration between Mullvad VPN and the Tor Project. It’s designed to minimize tracking and fingerprinting. You could say it’s a Tor Browser to use without the Tor Network. Instead, you can use it with a trustworthy VPN. The idea is to provide one more alternative beside the Tor Network - to browse the internet with more privacy. To get as many people as possible to fight the big data gathering of today. To free the internet from mass surveillance.
Here’s the other:
Mullvad VPN Hit With Search Warrant in Attempted Police Raid
However, Swedish law enforcement left with nothing after learning Mullvad VPN has a strict no-logging policy when it comes to customer information.
By Michael Kan
PC Magazine
April 20, 2023
The risk of law enforcement raiding a VPN provider to try and obtain customer data nearly became real this week for MULLVAD VPN.
The company today REPORTED that Swedish police had issued a search warrant two days earlier to investigate Mullvad VPN’s office in Gothenburg, Sweden. “They intended to seize computers with customer data,” Mullvad said.
However, Swedish police left empty-handed. It looks like Mullvad’s own lawyers stepped in and pointed out that the company maintains a strict no-logging policy on customer data. This means the VPN service will abstain from collecting a subscriber’s IP ADDRESS, web traffic, and connection timestamps, in an effort to protect user privacy. (It’s also why Mullvad VPN is among our most HIGHLY RANKED VPN services.)
“We argued they had no reason to expect to find what they were looking for and any seizures would therefore be illegal under Swedish law,” Mullvad said. “After demonstrating that this is indeed how our service works and them consulting the prosecutor they left without taking anything and without any customer information.”
“Even if police had seized the company’s server, it would not have given them access to any customer information” due to the no-logging policy, Mullvad VPN said.
Monday, April 10, 2023
The Web Crawler Problem
I’m OK with search engines and web crawlers grabbing the content here as long as they identify themselves and behave by waiting 20 seconds before grabbing another page - but I ask GOOGLE SERVICES, tools that hide themselves and/or HIT YOU SO HARD the server almost crashes, and places that charge to share what they find, to keep away.
Everything here is free, and the stuff I write carries a CREATIVE COMMONS NON-COMMERCIAL SHARE ALIKE license. Anything not originally mine - e.g. copied off the internet - has links back to the original source (or the Internet Archive for dead URLs) and credits the author. Check with them for their reprint permissions.
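For crawler authors who want to play nice, here’s a minimal sketch of the behavior I’m asking for - identify yourself, honor robots.txt, and wait 20 seconds between fetches. The bot name and contact URL are made-up placeholders.

```python
# Minimal sketch of a polite crawler: a clear User-Agent, robots.txt honored,
# and a 20-second pause between page fetches. Bot name and URL are placeholders.
import time
import urllib.robotparser

import requests

USER_AGENT = "ExampleBot/1.0 (+https://example.org/bot-info)"  # placeholder identity
CRAWL_DELAY = 20  # seconds between requests, as asked for above

def polite_fetch(urls: list[str], robots_url: str) -> dict[str, str]:
    """Fetch pages one at a time, skipping anything robots.txt disallows."""
    rules = urllib.robotparser.RobotFileParser(robots_url)
    rules.read()
    pages: dict[str, str] = {}
    for url in urls:
        if not rules.can_fetch(USER_AGENT, url):
            continue  # the site said no
        response = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
        pages[url] = response.text
        time.sleep(CRAWL_DELAY)  # be gentle on the server
    return pages
```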
In the early days of the internet - before we realized that everything we do can, and probably will, be tracked, analyzed, stored and shared - internet companies would send us a CD that included a BRANDED version of a web browser with a unique USER-AGENT STRING, making it easy to track one’s web surfing way back then.
Unless it’s making up user-agent strings on every pass with a randomizer or something like that, MY FAVORITE IPHONE RSS READER may be doing something similar by adding a unique (fuzzed for privacy) string in the same spot:
x.x.x.x - - /index.php article43.com “GET /index.php?/weblog/rss_1.0 HTTP/1.1” 200 7015 80 “-” “ABCDEFGH-1234-5678-9876-ABCDEFGHIJK” “- -”
It seems a little sneaky. Why not just IDENTIFY ITSELF, or leave the browser user-agent as-is like FEEDBRO does?
A profile of your interests can be built by examining all the weblogs for a unique identifier. Who knows how many companies share their logs with each other. PROFILING is big business these days.
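To make that concrete, here’s a rough sketch of the kind of log mining I mean, keyed to the log format in the sample line above - it just groups requested pages under any user-agent that looks like a unique identifier. An illustration only, not a real profiling pipeline.

```python
# Rough sketch: flag user-agent strings that look like unique IDs (UUID-shaped)
# in an Apache-style access log and list the pages each one requested.
import re
from collections import defaultdict

UUID_LIKE = re.compile(r"^[A-Za-z0-9]{4,}(-[A-Za-z0-9]{4,}){3,}$")
LOG_LINE = re.compile(r'"(?P<request>GET [^"]+)" \d+ \d+ \d+ "[^"]*" "(?P<agent>[^"]*)"')

def pages_by_unique_agent(log_path: str) -> dict[str, list[str]]:
    """Group requested URLs under user-agent values that look like stable unique tokens."""
    profiles: dict[str, list[str]] = defaultdict(list)
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.search(line)
            if not match:
                continue
            agent = match.group("agent")
            if UUID_LIKE.match(agent):  # a unique-looking token, not a normal browser UA
                profiles[agent].append(match.group("request"))
    return dict(profiles)
```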
One’s personal LIST OF RSS SITES can be very telling for advertisers and bad guys. Imagine if your list is mostly cooking sites, exercise sites, extremist political sites, hook-up sites even though you’re married, odd fetish sites, etc. - and somebody got a hold of it?
I read this article in the app and got this:
66.248.241.120 - - /index.php article43.com “GET /index.php?/weblog/the_web_crawler_problem/ HTTP/1.1” 200 74651 80 “http://www.google.co.uk/url?sa=t&source=web&cd=1” “Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0” “- -”
Instead of coming here and downloading the article like I thought it was set up to do - it looks like a middleman did.
Kudos to the app’s PRIVACY POLICY for the explanation:
We maintain a Web Service which works with the iOS and macOS apps to extract text content from web pages. Communication with this web service takes place exclusively via HTTPS. The only information transmitted to the web service are the URLs to the web pages from which text needs to be extracted. This information can in no way be used to identify any personal user or device information. The URLs and the extracted text content is cached by the web service for a short duration to prevent repetitive work in case other customers may want to fetch text content of the same web pages/URLs.
And the vendor for answering my email:
You can disable full-text extraction in the app by going to Settings > Caching Options > Automatic Full-Text Caching, and disable it for all subscriptions.
Also, you would need to go to Settings > Article Options > Open in Full-text View, and disable this for every subscription.
That doesn’t help. All it does is disable automatic article downloads, and doesn’t do anything about the odd user-agent getting the RSS summaries.
Not too cool in my book.
For web crawling here, I invite BING, and a few others in.
Its web bots clearly identify themselves, wait 20 seconds per query, and there’s even an IP LOOKUP TOOL for webmasters.
207.46.13.223 - - /index.php article43.com “GET /index.php?/weblog/comments/2867/ HTTP/1.1” 200 12829 80 “-” “Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/103.0.5060.134 Safari/537.36” “- -”
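If you want to double-check that a “bingbot” hit really came from Microsoft, you can approximate what the IP LOOKUP TOOL does with the reverse-then-forward DNS check Bing documents for its crawler - a quick sketch, using the address from the log line above:

```python
# Sketch of verifying a bingbot IP: reverse-resolve it, check the hostname ends
# in search.msn.com, then confirm that hostname resolves back to the same IP.
import socket

def looks_like_bingbot(ip: str) -> bool:
    try:
        host, _, _ = socket.gethostbyaddr(ip)           # PTR (reverse) lookup
        if not host.endswith(".search.msn.com"):
            return False
        forward_ips = socket.gethostbyname_ex(host)[2]  # forward confirmation
        return ip in forward_ips
    except OSError:                                     # lookup failed
        return False

print(looks_like_bingbot("207.46.13.223"))  # the address from the log line above
```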
Yeah, I know it’s Microsoft.
But I figure they ALREADY KNOW more than we can possibly imagine.
Friday, March 24, 2023
Blaming TikTok For Our Lack Of Privacy Protection Laws
AT&T’s new privacy policy for its Internet and video services is way out of line - an insult to genuine security efforts and a brassy attempt to make its profits your problem… AT&T has also extended its claims on your information by claiming that it can monitor your video usage.... The new privacy policy basically lets AT&T do anything it wants with your information. (Remember, according to the company, it’s its information.) The specific claim is that AT&T can do whatever it wants with your/its data “to protect [the company’s] legitimate business interests.”
- Phone Records For Sale, 2006
Many of the big American players have set up shop in Shenzhen, but they look singularly unimpressive next to their Chinese competitors. The research complex for China’s telecom giant Huawei, for instance, is so large that it has its own highway exit… Sometimes called “market Stalinism,” it is a potent hybrid of the most powerful political tools of authoritarian communism - central planning, merciless repression, constant surveillance harnessed to advance the goals of global capitalism… In 2006, the Chinese government mandated that all Internet cafes (as well as restaurants and other entertainment venues) install video cameras with direct feeds to their local police stations… But the cameras that Zhang manufactures are only part of the massive experiment in population control that is under way here. “The big picture,” Zhang tells me in his office at the factory, “is integration.” That means linking cameras with other forms of surveillance: THE INTERNET, phones, facial-recognition software and GPS monitoring…
- Rise of the Golden Shield, 2008
EMBARQ may use information such as the websites you visit or online searches that you conduct to deliver or facilitate the delivery of targeted advertisements. The delivery of these advertisements will be based on anonymous surfing behavior and will not include users’ names, email addresses, telephone numbers, or any other Personally Identifiable Information. You may choose to opt out of this preference advertising service. By opting out, you will continue to receive advertisements as normal; but these advertisements will be less relevant and less useful to you.
- Embarq, WOW Bury Snooping In Terms Of Service, DSL Reports, 2008
Charter Communications, one of the largest ISPs in the country, confirmed Tuesday that it’s partnering with a company called NebuAd, which pays ISPs to let it install a monitoring box on their networks to sniff customer traffic.
- NebuAd’s Make Believe Opt-Out, 2008
OnStar’s latest T&C has some very unsettling updates to it, which include the ability to sell your personal GPS location information, speed, safety belt usage, and other information to third parties, including law enforcement… OnStar has granted themselves the right to collect this information for any purpose, at any time, provided that following collection of such location and speed information identifiable to your Vehicle
- OnStar Snooping, 2011
Selling Bell Labs and Lucent in 2006 to foreigners was a forgotten moment in US history. The company’s subsidiary - LUCENT GOVERNMENT SOLUTIONS - is a sobering reminder of what’s going on, along with AT&T outsourcing its IT infrastructure, or US carriers selling every TAT cable to foreigners. These expressions that capitalism trumps all - even national security - are a lot scarier to me than fear mongering about HUAWEI and CHINESE TELECOM MANUFACTURERS, that our friends at 60 Minutes did a piece on last night.... Like the guy says at the end of the Forbes article - “It’s much easier to bash China.”
- Bad Moon Rising Part 57, Huawei, 2012
Back in Hitler’s day, the rich were targeted; their wealth stolen and their lives snuffed out, but THIS TIME it’s the poor who are being sent to the Gulag… The NSA in America, similar to the Nazis’ attempt to discriminate based on data collection… The SCAPEGOATS being marked for death and imprisoned this time are the poor.
- PRISM & Purity: NSA follows Nazi tradition, 2013
Thanks to a new VERMONT LAW requiring companies that buy and sell third-party personal data to register with the Secretary of State, we’ve been able to assemble a list of 121 data brokers operating in the U.S. It’s a rare, rough glimpse into a bustling economy that operates largely in the shadows, and often with few rules.
- Data Brokers, 2019
[N]ow we see that the surveillance capitalists behind those services regard us as the free commodity. We thought that we search Google, but now we understand that Google searches us. We assumed that we use social media to connect, but we learned that connection is how social media uses us. We barely questioned why our new TV or mattress had a privacy policy, but we’ve begun to understand that “privacy policies” are actually surveillance policies.
- The Age of Surveillance Capitalism, 2020
The class action complaint is based on a report by researcher FELIX KRAUSE - who claimed Facebook and Instagram apps for iPhones inject JavaScript code onto websites people visit. A similar complaint was filed in the same court a week ago… The class action lawsuits say the Facebook app bends Apple’s privacy rules by opening the links in an in-app browser rather than the user’s default browser… The Facebook app is a major offender when it comes to privacy so it’s worth at least deleting that - and Instagram - from your iPhone.
- Facebook Keeps Giving Users More Reasons To Delete Their Accounts, 2022
---
America’s online privacy problems are much bigger than TikTok
Concerns about Chinese data access highlight Congress’s own failure to protect Americans’ personal information
By Will Oremus
Washington Post
March 24, 2023
For a brief moment in a FIVE-HOUR HOUSE HEARING on Thursday, TikTok’s CEO Shou Zi Chew let his frustration show. Asked if TikTok was prepared to split off from its Chinese parent company if ordered to do so by the U.S. government, to safeguard Americans’ online data, Chew went on offense.
"I don’t think ownership is the issue here. With a lot of respect: American social companies don’t have a great record with privacy and data security. I mean, look at Facebook and Cambridge Analytica,” Chew said, referring to the 2018 scandal in which Facebook users’ data was found to have been secretly harvested years earlier by a British political consulting firm.
He’s not wrong. At a hearing in which TikTok was often portrayed as a singular, untenable threat to Americans’ online privacy, it would have been easy to forget that the country’s online privacy problems run far deeper than any single app. And the people most responsible for failing to safeguard Americans’ data, arguably, are American lawmakers.
The bipartisan uproar over TikTok’s Chinese ownership stems from the concern that China’s laws could allow its authoritarian government to demand or clandestinely gain access to sensitive user data, or tweak its algorithms to distort the information its young users see. The concerns are genuine. And yet the United States has failed to bequeath Americans most of the rights it now accuses TikTok of threatening.
While the European Union has far-reaching privacy laws, Congress has not agreed on national privacy legislation, leaving Americans’ online data rights up to a patchwork of state and federal laws. In the meantime, reams of data on Americans’ shopping habits, browsing history and real-time location, collected by websites and mobile apps, are bought and sold on the open market in a multi-hundred-billion-dollar industry. If the Chinese Communist Party wanted that data, it could get huge volumes of it without ever tapping TikTok. (In fact, TikTok says it has STOPPED TRACKING U.S. users’ precise location, putting it ahead of many American apps on at least one important privacy front.)
That point was not entirely lost on the members of the House Energy and Commerce Committee, which convened Thursday’s hearing. Last year, their committee became the first to advance a comprehensive data privacy bill, hashing out a hard-won compromise. But it stalled amid qualms from House and Senate leaders.
Likewise, worries about TikTok’s addictive ALGORITHMS, its effects on teens’ mental health, and its hosting of propaganda and extreme content are common to its American rivals, including Google’s YouTube and Meta’s Instagram. Congress has not meaningfully addressed those, either.
And if Chinese ownership is the issue, TikTok has plenty of company there, as well: A glance at Apple’s iOS App Store rankings earlier this week showed that four of the top five apps were Chinese-owned: TikTok, its ByteDance sibling CapCut, and the ONLINE SHOPPING APPS SHEIN AND TEMU.
The enthusiasm for cracking down on TikTok in particular is understandable. It’s huge, it’s fast-growing, and railing against it allows lawmakers to position themselves simultaneously as champions of American children and tough on China. Banning it would seem to offer a quick fix to the problems lawmakers spent five hours on Thursday lamenting.
And yet, without an overhaul of online privacy laws, a ban ignores that those problems exist on all the other apps that haven’t been banned.
“In most ways, they’re like most of the Big Tech companies,” Rep. Jan Schakowsky (D-Ill.) said of TikTok after the hearing. “They can use Americans’ data any way they want.” She and several other committee members said they’d prefer to address TikTok as part of a broader privacy bill, rather than a one-off ban.
But the compromises required to pass big legislation can be politically costly, while railing against TikTok costs nothing. If Chew can take any consolation from Thursday’s hearing, it’s that congressional browbeating of tech companies is far more common than congressional action against them.
For an example, he has only to look at the one he raised in that moment of frustration: For all the hearings, all the grilling of Mark Zuckerberg over Cambridge Analytica, Russian election interference and more, Facebook is still here - and now Congress has moved on to a new SCAPEGOAT.
Sunday, February 26, 2023
The Mother of All Privacy Battles Part 24 - Shutting Us Up
[T]he 85-page briefing, titled THE GOOD CENSOR, admits that Google and other tech platforms now control the majority of “online conversations” and have undertaken a shift towards “censorship” in response to unwelcome political events around the world.
- Democracy Hollowed Out Part 35 - Censorship
We look at two cases before the Supreme Court that could reshape the future of the internet. Both cases focus on Section 230 of the Communications Decency Act of 1996, which backers say has helped foster free speech online by allowing companies to host content without direct legal liability for what users post. Critics say it has allowed tech platforms to avoid accountability for spreading harmful material. On Tuesday, the justices heard arguments in Gonzalez v. Google, brought by the family of Nohemi Gonzalez, who was killed in the 2015 Paris terror attack. Her family sued Google claiming the company had illegally promoted videos by the Islamic State, which carried out the Paris attack. On Wednesday, justices heard arguments in the case of Twitter v. Taamneh, brought by the family of Nawras Alassaf, who was killed along with 38 others in a 2017 terrorist attack on a nightclub in Turkey. We speak with Aaron Mackey, senior staff attorney with the Electronic Frontier Foundation, who says Section 230 “powers the underlying architecture” of the internet.
- Video - Free Speech on Trial: Supreme Court Hears Cases That Could Reshape Future of the Internet, Democracy Now, February 27, 2023
---
The Supreme Court case that could fundamentally change the internet
By Jessica Melugin
Washington Examiner
February 24, 2023
The Supreme Court recently heard oral arguments in a case that could fundamentally alter social media.
GONZALEZ V. GOOGLE, heard by the justices on Feb. 21, asks the highest court in America to determine whether the longtime internet liability shield known as SECTION 230 covers content that the platforms’ algorithms recommend to users - and how they present those recommendations.
The case stems from the killing of then-23-year-old Nohemi Gonzalez, who was studying in Paris when she became the only American victim of a terrorist attack that claimed 129 other lives in the city. The Islamic State later took responsibility for the acts.
Back in the United States, Gonzalez’s family sued multiple tech platforms, accusing them of radicalizing users into terrorists by promoting pro-ISIS third-party content. Google’s YouTube video-sharing platform is the only defendant that remains, and that’s the case the Supreme Court heard oral arguments on. The case was paired with a similar suit, TWITTER V. TAAMNEH, which was heard the next day.
Both cases address the possible limits of the protection social media platforms have from liability under SECTION 230 OF THE COMMUNICATIONS DECENCY ACT. The law, now commonly shortened to “Section 230,” clarified liability for online sites hosting third-party content at a time when there was uncertainty about their legal responsibility. Legal precedent dealt with traditional publishers, including newspapers, and distributors such as bookstores.
But the online hosts were different in that they were not filtering user content before it was posted, like a traditional publisher did, but only after it was posted, if at all. At the time, CompuServe’s chat board did not moderate posts at all and, because of the precedent in liability law, was therefore not legally responsible for the content it hosted.
A rival service, Prodigy, wanted to take down potentially offensive posts from its users in order to make a family-friendly online environment, but it worried that doing so would trigger legal liability. That’s because, in the past, bookstores were not held liable if they didn’t know of illegal content in the materials they were selling but were on the hook if they did know about it and carried it for sale anyway. Moderating content seemed to be an admission that they knew what they were hosting.
In practice, Section 230 means that host sites cannot be sued for content posted by their users and that taking down any of that content will not trigger liability for the platform. Legal responsibility stays with the creator of the content, not the online host.
Those same issues are at play, but they now apply to small sites and social media platforms with billions of users. More than 500 hours of third-party content is posted to YouTube every minute - that’s 720,000 hours per day.
Google and other major social media platforms argue that the volume of hosting can only be maintained because of the legal protections Section 230 affords to them. Without it, the danger of legal expenses for a flood of litigation would cause sites to take down much more content (just to be safe) or to allow everything (so they could claim the old bookstores’ hands-off protections). That would make for an internet void of anything the least bit controversial or one polluted with violence, spam, and pornography, largely unusable for most people.
The plaintiff’s argument in the case has changed from its initial petition to the court. Legal counsel for the Gonzalez family at first presented the question of whether Section 230’s safe harbor includes third-party content when it is algorithmically recommended by the host site, arguing that it should not. But at this week’s oral argument, the Gonzalez family’s lawyer Eric Schnapper concentrated more on whether the thumbnail links in YouTube’s “up next” suggestion for the next video constitute content created by the host, instead of a third party, making them ineligible for Section 230’s protections.
During oral arguments, multiple justices volunteered that they were “confused” about what argument Schnapper was trying to make during their exchange. Chief Justice John Roberts clarified that the YouTube algorithm was the same across the platform and that nothing special was employed in the recommending of the ISIS content. Several justices expressed concerns about the resulting economic upset if the court were to rule in favor of curtailing Section 230’s liability shield and exposing online platforms to increased litigation.
The oral arguments went on for almost three hours on Feb. 21, an unusually long amount of time for the justices to spend on a case. More than 70 briefs were filed in the Supreme Court, where interested parties, including other social media platforms, think tanks, and advocacy groups, weighed in on the case.
The court’s ruling could have profound and widespread implications for social media platforms and their users. But depending on how the court decides the aspects of the related Twitter case and its intersection with the Anti-Terrorism Act’s provisions for liability, the Supreme Court may be able to sidestep weighing in on the parameters of Section 230 altogether. America’s highest court is expected to rule on both cases in June.