Article 43

 

Saturday, May 31, 2008

Is Corporate Social Responsibility an Oxymoron? Part 2

Corporate Citizen - An Oxymoron

Economic Populist
May 2008

There is a fundamental problem these days with economic policy - US corporations have run amok. No longer do they act in the national interest or even give a pretense of being good national corporate citizens. It’s all about profits, maximizing profits. In fact, we are told that is a corporation’s only responsibility. But is that really the case, or have we been spun a lie so long and so often that we believe it to be true without question? How does the United States align its corporations with the interests of the nation?

In a recent House Science Subcommittee hearing, AMERICAN DECLINE OR RENEWAL? GLOBALIZING JOBS AND TECHNOLOGY, some of these fundamental questions of corporate governance were addressed.

it has gradually become clear to me that much of a nation’s economic strategy is embedded in the institutions through which that particular nation is governed, and that the existence of institutions implies a certain strategy

- Dr. Bruce R. Scott, TESTIMONY before the INVESTIGATION AND OVERSIGHT House Science Subcommittee.

From the opening statement of Dr. Margaret Blair, a law professor and economist:

I want to speak to you today on a question about the fiduciary obligations that corporate directors have, by law, in this country. In particular, I want to address a claim often made in the financial press, and by members of what a Delaware Court judge has recently called the corporate governance industry. This is the claim that corporate directors have a legal duty to maximize share value.

What I hope you will take from my testimony today is that this claim is, at best, a misleading overstatement. At worst, this claim is simply false, but is often asserted as a weapon to try to persuade corporate managers and directors that they should take actions that benefit particular shareholders of a given corporation, regardless of whether those actions may impose high costs on creditors, employees, the communities where corporations have their operations, or other stakeholders, or sometimes even on the long run ability of the corporation itself to compete effectively for market share, or to develop the next technology

Dr. Margaret M. Blair even wrote a book about it, OWNERSHIP AND CONTROL: RETHINKING CORPORATE GOVERNANCE FOR THE TWENTY-FIRST CENTURY.

So, is everyone getting that? To sustain the United States as a democracy, rather than a haven for multinational corporations akin to Oklahoma’s No Man’s Land, where outlaws faced no jurisdiction and no consequences, we must make corporate entities accountable to the citizenry of the United States. We must realize that not only can we do precisely that, we must do precisely that. We must make these US-based corporations responsible and responsive to the United States’ national interest.

Ralph Gomory said:

We need to realize that the interests of the American global corporation, whose interest is profit, and the interests of most Americans, who want a higher standard of living, have been diverging.

In other words, globalization is putting America at greater and greater risk by the very corporations it spawned.

So maybe this is a little hard to digest. Why are they bothering with such esoteric concepts? Well, after some pondering, it dawned on me that to enact dramatic, strategic trade and economic policy change, one must directly confront this false assumption that corporations must be like glorified Ferengi.

To convince lawmakers to pass the legislation and enact the policy we desperately need, and to assure legislators that such new legislation and policy would not be overturned in the courts, we must address these fundamental definitions of corporate history and governance.

Bruce Scott (read his entire testimony, watch the video) noted:

Today’s global economy is much like the US in the late 19th century

In today’s economy, nations and states charter firms to compete in a global common, but no chartering authority exists that wields the political power to impose rules on these global markets. While there are rules for trade, the chartering of financial firms in particular invites a RACE TO THE BOTTOM to escape taxes as well as regulations. At the same time, some countries are imposing conditions on FOREIGN FIRMS as a condition for doing business in their countries. This issue is particularly important in the case of a few very large countries, notably CHINA.

These countries, with priorities that favor rapid growth, are using national power to partner with US firms on the condition that the latter move some of their activities to China. These countries are behaving much the way New Jersey did in an earlier era, taking advantage of an inadequately regulated common.

Dr. Scott is pointing out that today we are living in a corporate bandit outlaw haven. We’re in No Man’s Land with the multinational globalization corporate cartel running amok, thumbing their noses at what is actually good for America.

And the kicker, Dr. Scott recommends:

you consider reopening the question of a federal charter or license for US firms as a way to specify certain requirements for behavior

SOURCE

IS CORPORATE SOCIAL RESPONSIBILITY AN OXYMORON - PART 1

Posted by Elvis on 05/31/08 •
Section Dying America

Full Disclosure And Why Vendors Hate It

Zidarski Dot Com
May 2008

I did a talk recently at O’Reilly’s Ignite Boston party about the exciting iPhone forensics community emerging in law enforcement circles. With all of the excitement came shame, however; not for me, but for everyone in the audience who had bought an iPhone and put something otherwise embarrassing or private on it. Very few people, it seemed, were fully aware of just how much personal data the iPhone retains, in spite of the fact that Apple has known about it for quite some time. In spite of the impressive quantities of beer that get drunk at Tommy Doyle’s, I was surprised to find that many people were sober enough to turn their epiphany about privacy into a discussion about full disclosure. This has been a hot topic in the iPhone development community lately, and I have spent much time pleading with the different camps to return to embracing the practice of full disclosure.

The iPhone is shrouded in secrecy on both sides - Apple (of course) uses their secrets to instill hype (and gloss over many otherwise obvious privacy flaws), while the iPhone development community uses their secrets to ensure they can exploit future versions of the firmware to find these flaws (along with all the other fun stuff we do). The secrets on both sides appear to have not only hurt the product, but run the risk of devolving an otherwise amazing device into the next surveillance fear. With the military and federal agencies testing the iPhone for possible use, some of the long-held secrets surrounding the iPhone even run the risk of affecting national security.

Secrecy and Hype

Secrecy is nothing new, especially with Apple. One of Apple’s greatest marketing strengths is this ability to add hype around their products by piquing the curiosity of the common geek. When it comes to such an amazing device as the iPhone, Apple seems to be very tolerant when it comes to grassroots hacking - tolerant enough to allow iPhone hackers to come and give talks about it in their store. It almost seems counter-intuitive: the more padlocks Apple places on the iPhone, the more hackers show up to pick them, and the more phones are sold.

Obviously it isn’t just hackers buying iPhones, or the community would be much bigger. Part of what Apple is selling is the hacker image - an image that they ingeniously didn’t even have to invent. By simply locking up the device and attracting the right audiences, every tech store cashier within a thousand mile radius can buy an iPhone and feel like they are in the same class of uber-hacker as the ones who originally wrote the tools they’re using. With more secrets come more hype, and ultimately more people who buy the product to feel like they’re doing something “unsanctioned” or “cool” with it. Apple wants you to think that buying an iPhone is bucking the system - and all they had to do was lock it down. It is estimated that over a third of all iPhones sold have been jailbroken and unlocked, supporting at the very least the claim that a lot of people are unlocking their iPhones just because Apple said they can’t. Apple has proven that secrets really can sell products.

Secrecy and Privacy

The problem with too many secrets is that they frequently rub against the notion of privacy. One would think that secrets and privacy track together, but more often than not, secrets only mean that you don’t know your enemy, or what weapons they have to use against you. Secrets can be a hindrance to privacy because they leave the consumer exposed; not knowing if their home is secure, or if it’s going to be broken into. If you knew that the lock on your front door was broken, you’d probably be less inclined to leave a diamond ring lying on the foyer table. More dangerous is the idea that you have no right to know about your broken front door lock until after the locksmith fixes it. Everyone agrees that security flaws should be fixed; the looming issue is whether full disclosure is appropriate, or whether the “vendor first” approach is more responsible.

The thing with secrets is that someone always has one, and when it comes to protecting your data, a well-informed public is often better equipped to protect itself than an ignorant one. In the digital world, the locks belong to the vendor, but the data is typically within either the customer’s or the consumer’s control; and if not the data, then certainly lawyers from hell are within reach. Longstanding arguments have been made that the vendor should be the first to be notified, and that the owner of the data should remain in ignorance until the front door lock has been fixed. Ironically, this is an argument I only ever hear coming from vendors (or those indoctrinated by vendors). Some vendors take this philosophy so seriously that they attempt to legally bar their own customers from releasing information about vulnerabilities to the public.

The inherent flaw in the “vendor first” argument is this: if you know about a particular vulnerability, chances are the bad guy already does too, and probably knew about it before you did. The bad guy is far more dangerous when the public doesn’t know what he knows, leaving the vendor’s customers and consumers both oblivious that there is any risk, or that an appropriate response to safeguard data is necessary. It is the customer and the consumer who have the most to lose from a breach, and bear the most liability should one occur. It seems that these two groups would be the best suited to also choose how the risk should be mitigated in the short term, and ultimately what procedures for auditing data should be taken after the fact.

If indeed the bad guy knows about the vulnerability, they are certainly already exploiting it, leaving one to wonder what the advantage is to keeping it secret from the public. It would seem as though it would be a rather large disadvantage if no-one is given the knowledge to do anything about it. It’s quite simple logic:

* Full Disclosure Scenario: Vendor screws up grocery chain software. Grocery chain and consumers are notified by the newspaper. The grocery chain’s customers switch to cash, with a minor loss in business. The grocery chain’s losses are exponentially smaller than if it had been sued by credit card companies over a breach.

* Vendor First Scenario: Vendor screws up grocery chain software. Vendor is notified, takes 2 months to patch security vulnerability. Three grocery chains experience data breaches, with a fourth breach while the first three figure out what happened. All four grocery chains sued by credit card companies. Consumers and grocery chains suffer. Vendor has disclaimer, pays nothing.

Just who is the beneficiary of the “vendor first” concept, exactly? Full disclosure ultimately protects the consumer, whereas “vendor first” only protects the vendor. Full disclosure safeguards the consumer by getting people away from the dam until the leak is plugged. Take this more real-world scenario for example:

* Full Disclosure Scenario: I announced last week that refurbished iPhones may contain previous customer data, and provided some blurred screenshots to show evidence of it. Both Apple and AT&T are suddenly listing refurbished iPhones as unavailable. Apple revises their refurbishing practices, and until the dam is permanently plugged, the flood of refurbished iPhones with customer data has been turned off.

* Vendor First Scenario: Had I reported the problem to Apple directly, they may have decided to quietly fix their internal practices while still selling refurbished units. Additional units are sold with customer data on them, and no-one is any the wiser (except for the people stealing the data). In the time it takes Apple to revise their refurbishing practices, X additional phones containing customer data are leaked. The consumer loses, and might not even know it.

Plausible Deniability

The advantage that vendors gain in keeping secrets from customers is simply having plausible deniability. When a vulnerability is actually fixed, a vendor may deny the privacy flaw ever existed, or at least severely downplay any risk. This can be (and has been) used to sweep aside any concern, with the side effect of also downplaying any inclination to audit for a security breach. After all, it’s bad for a vendor to have to admit to a security flaw, but entirely disastrous for their image should anyone discover an actual breach occurred. As far as the vendor is concerned, ‘tis best not to check.

I ran into this shortly after I discovered a flaw in Verizon’s online billing services some years ago, which allowed me to view other customers’ billing information through Verizon’s web portal. I’ll not likely forget the famous last words of the Verizon security technician, “Thanks for not telling anybody about this.” It was the next day that I talked to the Washington Post, with Verizon denying and/or downplaying each claim. I doubt the leak ever would have come to light otherwise, and it most definitely would never have been audited. My screenshots were the only proof that there ever was a problem, and at that point it comes down to mere credibility.

Plausible deniability is one of a vendor’s greatest advantages when the “vendor first” approach is used instead of full disclosure. By fixing things privately, there is no way (in some cases at least) to verify that the vulnerability ever existed, or by the time the vendor releases information about the vulnerability, it may be well too late to check for a privacy breach. When this happens, it is the word of the person reporting the vulnerability against a team of corporate engineers who will all insist it isn’t as bad as it sounds.

The full disclosure approach solves the problem of corporate accountability by ensuring that the informed public (specifically, security professionals) can verify and assess the situation. Full disclosure gives the public a window of opportunity to not only verify the vulnerability, but to see just how deep the rabbit hole goes; something the vendor is almost guaranteed to either intentionally ignore or cover up. The bad guy is already going to test and exploit these vulnerabilities long before the public even discovers them - the good guys ought to have a crack at verifying it too.

Public Outcry

Just how large that window of opportunity is depends on the vendor, and presents another reason why “vendor first” doesn’t work. Vendors can be slow about fixing things - and many have a track record of lethargy. Some software vendors lag months behind. In spite of what you may think, the goal of the vendor is not to produce a quality product; it is to sell product. And in selling product, selling support agreements comes with the turf. Carefully timing security updates so that they span certain contractual intervals is one way to ensure that a product’s maintenance fees are going to get bought into. The average MTTR for some of the most widely used operating systems and other popular software is on the order of 3-6 months! So if you’re following along with the thought pattern laid out here, that means 3-6 months of unknown bad guys possibly exploiting these vulnerabilities and stealing personal information that may have otherwise been stopped at the customer or consumer level.

There is, however, one way to ensure a vendor fixes a flaw quickly, and that is public outcry. I find some otherwise slow vendors respond quite snappily when five million consumers are banging down their door and threatening to sue them in class action court. Public outcry has become the Q/A filter for many vendors whose response times have become ridiculously poor in recent years. It lets the vendor know what bugs are going to hurt their bottom line - and those are the ones that are quite likely to receive the most attention. It is certainly advantageous for the vendor to push the “vendor first” approach when it means removing the pressure to repair critical flaws. It is public pressure that has the power to change governments - certainly, it can be an effective tool at fixing security flaws.

Over-Fixing

Of course, over-fixing things is the fear many development teams have with vendors, and is an issue I’ve experienced first hand with Apple, Verizon, and a few other vendors. Before you report a security vulnerability privately to a vendor, pretend the vendor is going to read it its Miranda rights, because essentially your vulnerability can (and likely will) be used against you. Not to incriminate you, per se, but rather to handicap your ability to follow up.

As an example, the open source community has built up a significant arsenal. We’ve built a solid base of iPhone developers as well as a community distribution mechanism for software. Apple came along a little later (due to public outcry) and decided to build their own solid developer base and their own distribution mechanism, embarrassingly trying to copy the open source community. Apple has effectively positioned themselves as a competitor of the open development community for the iPhone. As is the case with other similar vendors, privately releasing a vulnerability to them is a technological death wish; the technique you used to find the vulnerability in the first place will likely be “fixed” so that you won’t have access to find such a vulnerability again. Make no mistake - this is not to better secure the product; this is to quiet the noise you’ve generated and ensure that they don’t have to hear from you again.

Once again, full disclosure presents a window. This window of opportunity allows others to collaborate with you by picking up where your work left off. Over-fixes are likely going to happen, but by the time they do, the public will have given the product a thorough proctological exam and likely uncovered many additional exploits you may have missed.

Litmus Test

Not to suggest that all vendors are evil, lazy, or financially motivated, but in a capitalist society, it is the consumer’s responsibility to hold a corporation accountable. This is not possible if the corporation is controlling the flow of information.

If you’re interviewing vendors, ask them where you can find a manifest of security flaws accompanied by dates reported, dates patches were released, and a report of all associated breaches. If this information is available publicly, you’ve stumbled across a rare breed of responsible vendor.

The bottom line is this: a company that is afraid to tell the customer about a security risk until after it’s fixed is both dangerous and irresponsible. The best litmus test when selecting a vendor is to find vendors who embrace full disclosure in such a way that vulnerabilities are reported quickly to their downstream customers, and if privacy-related, the consumer. Full disclosure is the key to privacy. If your goal is to have security flaws fixed, rather than covered up, full disclosure is the only way to guarantee that your research will be thoroughly tested and patched; what’s more, it is the only way to ensure that the vendor is held accountable in an age of privacy breaches and litigation.

SOURCE

Posted by Elvis on 05/31/08 •
Section Privacy And Rights

Friday, May 30, 2008

NebuAd’s Make Believe Opt-Out


“There are also lingering questions about whether NebuAds systems are as non-invasive as described. A patent application filed by the company in March 2007 describes a monitoring system that actually manipulates data packets and replaces advertisements on third-party websites with their own ads.”

This is a product by another NebuAd company called Fair Eagle. It does, as described in the patent, add its own advertising to the page along with any or all of the advertising that is already there.

I haven’t proven it, but I believe that NebuAd—even without Fair Eagle—performs packet forgery to impersonate the endpoints, allowing HTTP responses and HTML-embedded scripts to add invisible cookies and images to assist in NebuAd’s data-collection job.

Either way—the idea of modifying IP packets in order to supplant entire TCP streams is a tremendous departure from Internet Standards. For an ISP to do something like this on simply an “opt out” basis seems unconscionable!

This is a two-way forgery. How do HTTP server operators “opt out”?
-Rob Tokowski

Can Charter Broadband Customers Really Opt-Out of Spying? Maybe Not

By Ryan Singel
Wired
May 16, 2008

Three days after it emerged that broadband provider Charter Communications plans to begin eavesdropping on its subscribers’ web surfing to build profiles for advertisers, serious questions about the technology remain unanswered—including whether it’s really possible to opt out of the data collection.

Charter Communications, one of the largest ISPs in the country, confirmed Tuesday that it’s partnering with a company called NebuAd, which pays ISPs to let it install a monitoring box on their networks to sniff customer traffic.

The plan is already drawing unwanted attention from Congressmen Edward Markey (D-Mass.) and Joe Barton (R-Texas), two key lawmakers in the area of telecoms.

“Any service to which a subscriber does not affirmatively subscribe and that can result in the collection of information about the web-related habits and interests of a subscriber [...] raises substantial questions,” the pair wrote in a letter to Charter’s CEO Friday.

NebuAd’s appliance categorizes users and their interests, and then uses the data to customize ads on the internet. Charter says the device will not actively inject NebuAd’s advertising into web sessions, but rather NebuAd will provide the profile information to third-party advertisers already paying to place their ads on major websites.

For instance, if a person visits Yahoo Sports, then NebuAd tells the advertising network on Yahoo that the visitor’s web history suggests he lives in Pittsburgh and likes hockey. The site could then serve up an ad for a Pittsburgh Penguins jersey.
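The handoff from profile to ad can be sketched in a few lines. The category tallies and ad inventory below are invented for illustration, not Revenue Science’s or NebuAd’s actual data:

```python
from collections import Counter

# Toy behavioral-targeting step: the monitoring box has tallied page
# visits by category for an anonymous user ID; the ad network serves
# inventory matching the dominant interest. All data here is made up.

profile = Counter({"hockey": 14, "local-news-pittsburgh": 9, "finance": 2})

AD_INVENTORY = {
    "hockey": "Pittsburgh Penguins jersey",
    "finance": "Online brokerage account",
    "travel": "Discount airfare",
}

def pick_ad(profile, inventory, default="House ad"):
    # Walk interests from strongest to weakest; serve the first match.
    for category, _count in profile.most_common():
        if category in inventory:
            return inventory[category]
    return default

print(pick_ad(profile, AD_INVENTORY))  # Pittsburgh Penguins jersey
```

The point of the sketch is how little the network needs: a numeric ID and a category histogram are enough to pick the jersey ad, with no name attached anywhere.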

At issue, though, is NebuAd’s system for allowing customers to opt out of the data collection. A review of NebuAd’s and Charter’s statements on the opt-out system, and of NebuAd’s patent on its technology, raises serious questions about whether Charter customers can really opt out of the spying at all.

Customers seeking to opt-out are directed to a Charter web page that prompts them for their name and address. Then the user is redirected to a NebuAd page that delivers an opt-out cookie.

But because of the way web cookies work, that cookie can only be read by NebuAd.com, or a website that includes content served from that domain. There’s no technical way for NebuAd’s sniffer to access the cookie and know not to log and analyze an opted-out user’s web usage.
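The reason the sniffer can’t see it comes down to how browsers scope cookies, which a toy model makes concrete. The domain names and the tiny cookie-jar logic here are illustrative, not NebuAd’s code:

```python
# Toy model of browser cookie scoping: a cookie set by nebuad.com is only
# transmitted with requests whose host matches its domain, so a wiretap
# on traffic to any other site never sees the opt-out marker.

class CookieJar:
    def __init__(self):
        self._store = {}  # domain -> {name: value}

    def set_cookie(self, domain, name, value):
        self._store.setdefault(domain, {})[name] = value

    def cookies_for(self, host):
        # Browsers send a cookie only to the domain that set it
        # (or its subdomains); every other host gets nothing.
        sent = {}
        for domain, cookies in self._store.items():
            if host == domain or host.endswith("." + domain):
                sent.update(cookies)
        return sent

jar = CookieJar()
jar.set_cookie("nebuad.com", "OPT_OUT", "true")

# A request to NebuAd itself carries the marker ...
print(jar.cookies_for("www.nebuad.com"))    # {'OPT_OUT': 'true'}
# ... but a request to a third-party site does not, so a DPI box
# watching that traffic has no way to know the user opted out.
print(jar.cookies_for("sports.yahoo.com"))  # {}
```

That second, empty result is the whole problem: the sniffer sits on traffic to everyone *except* nebuad.com, which is exactly where the opt-out cookie never appears.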

This, despite a claim on the company’s website that users can opt-out of both NebuAd’s targeted ads, and its “information collection.”

Charter’s own opt-out page is careful not to claim that opted-out users won’t be monitored, saying only that if a user “would like to opt-out of this process” an opt-out cookie means they “will no longer receive ads that are tailored to your web preferences, usage patterns and commercial interests.”

Indeed, it is possible that the cookie system works to prevent opted-out users from receiving the third-party ads, and it could stop NebuAd from sharing a user’s profile with third-party ad networks—assuming those networks include a NebuAd image file, or some other embedded code, in the ads they serve on the web. But NebuAd’s claim that you can opt out of the surveillance itself remains unexplained.

NebuAd’s press officer promised to put Threat Level in touch with its CEO Thursday, but later set up a meeting for Friday and offered late Friday morning to answer technical questions by e-mail. A spokeswoman for Charter Communications said that vice president Ted Schermp, who spoke with Threat Level on Tuesday, was out of the office Thursday.

There are also lingering questions about whether NebuAds systems are as non-invasive as described. A patent application filed by the company in March 2007 describes a monitoring system that actually manipulates data packets and replaces advertisements on third-party websites with their own ads.

That more-intrusive technique would doubtless anger commercial websites and ad networks, and their lawyers. But, ironically, it would also allow the company to live up to its opt-out claims, because NebuAd could inject a call to its own website into third-party sites, and thus read the opt-out cookie.

But by injecting ads into packets on the wire, the system described in the patent application would also create a single point of vulnerability for every website on the internet—at least for users whose traffic moves through those boxes. A malicious hacker would only need to compromise NebuAd’s server to insert links to malware into every page loaded by customers on that ISP.
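What that single point of failure looks like mechanically: an in-path box splicing a reference to its own server into pages it didn’t author. A minimal sketch, with a made-up appliance hostname standing in for whatever such a system actually uses:

```python
# Sketch of what an in-path ad-injection appliance does to HTML responses:
# splice a reference to its own server into every page. If that one server
# is ever compromised, the injected reference becomes a malware delivery
# channel for every page loaded through the ISP.

INJECTED = '<script src="http://ads.example-appliance.net/serve.js"></script>'

def inject(html: str) -> str:
    """Insert the appliance's script tag just before </body>, if present."""
    marker = "</body>"
    if marker in html:
        return html.replace(marker, INJECTED + marker, 1)
    return html  # not a full HTML page: pass through untouched

page = "<html><body><p>An ordinary page.</p></body></html>"
modified = inject(page)
assert INJECTED in modified  # every page now trusts the appliance's server
```

Every site the user visits now implicitly trusts `ads.example-appliance.net`, which is precisely the concentration of risk Kaminsky’s comment below is describing.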

“This is a classic instance where the harm—a user getting quietly compromised— is not felt by the agent receiving the benefit, meaning the ISP profiting off of injected advertisements,” security researcher Dan Kaminsky said via e-mail. “The last three attempts at injection—PaxFire, BareFruit, and Network Solutions—have been shown to push trivial vulnerabilities into the entire web.”

NebuAd emphasizes that there is no communication between its appliance and the ISP, so NebuAd doesn’t know the person’s billing information or name, and the ISP doesn’t have access to the subscriber profiles.

The legality of eavesdropping on Americans’ internet usage also isn’t clear. The practice could violate anti-wiretapping law, according to recent analyses of the legality of academic internet research, because the law says an ISP is only allowed to monitor its customers for security reasons.

SOURCE

Posted by Elvis on 05/30/08 •
Section Privacy And Rights • Section Broadband Privacy

Thursday, May 29, 2008

The Mother of All Privacy Battles Part 4

Rise of the Anti-NebuAd Vigilantes

---

If you’re pissed off that ISPs are using software like Phorm and NebuAd to track your browsing habits, you could try out ANTIPHORM LITE, an app that generates a never-ending string of spyware radar-chaff, running a second browser that continuously, plausibly browses the web, screwing up your profile and confounding the snoops. They’ve posted the full source for audit as well.

AntiPhorm Lite is a stand-alone application that can be left to run on its own or in parallel with your own surfing sessions, and can be run as either a hidden background process, a desktop console application, or in conjunction with your favorite browser, whichever you prefer.

To an ISP, it is indistinguishable from a real person surfing the internet. It makes intelligent decisions about where to surf and combines multiple search engines with millions of contextual search subjects and general interest links of your choosing.

The engine features cross-references in subject threads, back-pedals along re-searching threads, revisits history during a session, recursive branching, subject thread and page lingering, gone-away modes and exhaustive search abandonment. It uses natural time delays based on search interest, page size and link return quality, and it is throttled to prevent heavy traffic recognition or misuse.
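The loop such a tool runs can be sketched minimally. The topic list and delay model below are guesses at the general idea, not AntiPhorm’s actual engine:

```python
import random

# Minimal decoy-traffic loop in the spirit of the tool described above:
# pick a plausible topic, form a query, pause a human-looking interval,
# repeat. The topic list and delay model are invented; a real tool draws
# from large curated subject lists and actually fetches the result pages.

TOPICS = ["vintage cameras", "sourdough starter", "tide tables",
          "jazz discographies", "model railroads"]

def next_decoy_query(rng):
    topic = rng.choice(TOPICS)
    refinement = rng.choice(["history", "reviews", "forum", "guide"])
    return f"{topic} {refinement}"

def human_delay(rng, page_kb=40):
    # Linger longer on bigger "pages", plus jitter, so the traffic
    # never shows a machine-regular cadence.
    return 2.0 + page_kb / 20.0 + rng.uniform(0.0, 8.0)

rng = random.Random(42)  # seeded so the sketch is reproducible
for _ in range(3):
    query = next_decoy_query(rng)
    delay = human_delay(rng)
    # A real implementation would fetch search results here and follow
    # a few links; this sketch just reports its plan.
    print(f"would search {query!r} then wait {delay:.1f}s")
```

The jittered delay is the part that matters: perfectly periodic requests are trivial for a profiler to discard, so the chaff has to mimic human pacing.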

---

ANTI-NEBUAD relies on the client, so short of them blocking the domain, they can’t really do much to stop this one.

---

BAD PHORM - When good ISPs go bad.

---

DEEP PACKET ADS - Since Charter is using Deep Packet Inspection to monitor HTTP traffic and serve ads, here is a research program (meant to stimulate discussion on this issue) that creates “noise” HTTP traffic to make collected data meaningless.

---

SQUIGGLESR is a Firefox add-on which generates personalized queries to deceive search engines and protect users’ privacy. Keywords extracted from RSS feeds and search engine statistics are used to create coherent, news-related queries. Clicks are randomly simulated on non-sponsored results.

---

TRACKMENOT protects users against search data profiling by issuing randomized queries to popular search-engines.
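Both of these tools hinge on the same point: decoy queries only fool a profiler if their terms look topical rather than like random noise. A sketch of mining headline keywords for that purpose, with invented headlines and a deliberately tiny stopword list:

```python
import re
from collections import Counter

# Sketch of the keyword-mining idea behind these tools (not their real
# code): pull salient terms from recent headlines so that generated decoy
# queries look coherent and news-related.

HEADLINES = [  # stand-ins for titles pulled from RSS feeds
    "City council debates new transit funding plan",
    "Transit ridership climbs as fuel prices rise",
    "Fuel prices push airlines to cut summer routes",
]

STOPWORDS = {"the", "and", "new", "that", "with"}

def salient_terms(headlines, n=3):
    # Count lowercase words, skipping short and stopword tokens; terms
    # that recur across independent headlines are the topical ones.
    words = Counter()
    for title in headlines:
        for w in re.findall(r"[a-z]+", title.lower()):
            if w not in STOPWORDS and len(w) > 3:
                words[w] += 1
    return [w for w, _ in words.most_common(n)]

# Terms recurring across feeds make plausible, news-flavored decoy queries.
print(salient_terms(HEADLINES))
```

A generator seeded with these terms produces queries a profiler cannot easily separate from genuine interest in the day’s news, which is exactly what poisons the profile.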

Posted by Elvis on 05/29/08 •
Section Privacy And Rights • Section Broadband Privacy

Friday, May 23, 2008

The Mother of All Privacy Battles Part 3

FTC Wants to Know What Big Brother Knows About You
Behavioral Targeting on Web Is Debated

By Peter Whoriskey
Washington Post
May 22, 2008

How do you find a bride these days?

One of the nation’s leading online tracking companies knows.

Monitoring consumers at roughly 3,000 Web sites, Revenue Science identified brides by picking out bridal behavior it had seen: anyone who’d gone online to read about weddings in the news, entered “bridesmaid dresses” into a search engine or surfed fashion pages for wedding styles.

The company found 40,000 such people, whom it knows by random number, not name, and sent them a tailored online ad.

“A successful campaign,” according to company president Jeff Hirsch.

The growing practice of “behavioral targeting,” or sending ads to online users based on their Internet habits, is now under scrutiny by the Federal Trade Commission, whose review could shape not only Web advertising rules but the character of the Web itself.

For while public interest groups argue that compiling profiles of largely unsuspecting Internet users ought to be illegal, online advertisers and publishers respond that their ad targeting tactics protect privacy and may be essential to support the free content on the Web.

Behavioral targeting allows many Web sites to raise ad prices, because advertisers will pay more when they can isolate a particular audience.

Limiting behavioral targeting could “jeopardize the consumer’s ability to get free content on the Internet,” said Paul Boyle of the Newspaper Association of America, a trade group that represents the business interests of most U.S. dailies, including The Washington Post.

The FTC is considering guidelines, for now voluntary, that would make it harder to target behavior. The principles were issued in December after town hall meetings, and the public comment period ended last month.

As the commission’s deliberations begin, some federal and state lawmakers are weighing measures that would be mandatory. New York lawmakers, for example, are considering a law similar to the FTC guidelines.

Now that many Americans spend as much time interacting with the Web as they do watching TV, there is a wealth of information available for targeters: what articles a person reads in online newspapers, what queries he or she types into search engines and what items the person shops for.

Revenue Science and its peers say that because the user profiles they keep are organized by randomly assigned numbers, no personally identifying information is ever stored.

But privacy groups argue that while the items collected by targeters may be “anonymous” when viewed individually, taken together they could enable someone to match the file on “User 927” to a person. For example, if someone repeatedly does an “ego search” on his or her own name, that file might have the name in it repeatedly.
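The privacy groups' point can be made concrete with a small sketch. Assuming an invented behavior file for "User 927" (the queries below are illustrative, not real data), a simple frequency count shows how a repeated ego search would surface the user's own name from a supposedly anonymous log:

```python
from collections import Counter

# Hypothetical behavior file for "User 927". Each query is "anonymous"
# on its own, but the repeated ego search dominates the file.
user_927_queries = [
    "jane q. public", "weather forecast", "jane q. public",
    "bridesmaid dresses", "jane q. public",
]

# The most frequent query in the file is the user's own name.
top_query, hits = Counter(user_927_queries).most_common(1)[0]
print(top_query, hits)  # jane q. public 3
```

Nothing in the file is labeled a name, yet the aggregate pattern identifies one, which is the crux of the "not anonymous over time" argument.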

“It is not anonymous if the companies are tracking the same user over time,” said Ari Schwartz of the Center for Democracy and Technology, an advocacy group that has filed comments with the FTC.

Especially troubling, he said, is that the targeters can monitor what people are reading, whether it’s news or dinner recipes.

Underlying the FTC debate between public interest groups on one side and Web media and advertising companies such as Google, eBay, newspapers and magazines on the other are larger questions about the Web economy.

With surfers accustomed to accessing online entertainment for free, media companies have been pushed toward online advertising, rather than subscriptions or fees, to make money. But by many measures, online advertising revenue has proven disappointing.

While television advertising amounts to $64 billion annually, online advertising amounts to $11 billion, according to TNS Media Intelligence. Even at hugely popular sites, such as YouTube and social networks such as Facebook and MySpace, which each count tens of millions of visitors a month, owners have struggled to make money from ads. Television networks and newspapers, too, have seen that online advertising generates but a fraction of what they receive in print or broadcast, even on a per-person basis. Behavioral targeting promises to bolster sagging online ad revenue with a more profitable approach.

Most online ad targeting is relatively unsophisticated. Advertisers might know the geographic area of a user based on his or her Internet address. Or an advertiser might target a user based on the context of the Web page being read. An online magazine for audiophiles is a natural place for stereo ads, for example.

But if there is no obvious product to pitch on a Web page, the value of the ad space may be very low, and the page is likely to attract only low-paying ads, such as the flashing banners inviting users to look up a lost classmate.

What behavioral targeting does is allow advertisers to target ads based not on what’s on the page but who is looking at it.

Revenue Science, like other ad targeting services, tracks users by placing a “cookie,” or small file, on a computer when it connects to one of the 3,000 Web sites that the company works with. The cookie essentially identifies that browser as a visitor to sites working with Revenue Science and gives each one a randomly assigned number. No names or other personally identifiable information, such as age or address, are recorded in the cookie.

When a user visits such a site, Revenue Science can record what pages were viewed, what search queries were entered and other information. It can even count, if a newspaper or other publisher allows, how many times a person sees a story regarding any given search term, whether it is “al Qaeda” or “denture adhesive.”
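The per-visitor behavior file this paragraph describes can be sketched as a counter keyed by the cookie's random number; the event labels below are invented examples of the page views and queries a tracker might log:

```python
from collections import Counter, defaultdict

# One counter of events per anonymous tracking ID.
profiles: defaultdict[str, Counter] = defaultdict(Counter)

def record_event(tracking_id: str, event: str) -> None:
    """Log one page view or search query against an anonymous profile."""
    profiles[tracking_id][event] += 1

record_event("927", "search:denture adhesive")
record_event("927", "read:al Qaeda story")
record_event("927", "search:denture adhesive")
print(profiles["927"]["search:denture adhesive"])  # 2
```

Counting repeat exposures to a term, as in the last line, is exactly the capability the article attributes to the tracker when a publisher permits it.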

The practice becomes more powerful as users move from site to site, betraying more information about their tastes. While some Web sites refuse to share their behavior files with other sites, the ad networks offer financial incentives to Web sites that do. (The Washington Post, which uses Revenue Science, does not allow Web behavior from The Post’s site to be accessed by others.)

Detecting from previous Web visits and searches that a reader may be interested in new sport-utility vehicles, for example, a Web site can make as much as 10 times the amount of money showing an ad to that user, compared with an undifferentiated ad.

But while the tactic may lead to profits, it also creates unease. A March poll by Harris Interactive showed that six in 10 people are not comfortable when Web sites use information about a person’s online activity to tailor advertisements or content.

As the long-brewing debate shifts to the Federal Trade Commission and possibly Congress, newspapers are likely to play a leading role. The Newspaper Association of America has filed a brief with the FTC arguing that some of the voluntary rules proposed by the agency’s staff might violate the First Amendment.

More than 600 newspapers have formed an ad consortium with Yahoo. Another company, formed by the New York Times and three other chains, similarly offers advertisers behavioral targeting.

“The problem for newspapers is that a story headlined ‘Two Dead in Baghdad’ isn’t very product-friendly,” said Kent Ertugrul, chief executive of Phorm, a behavioral targeting company working with British newspapers. “But if you know who is looking at the page, that’s where the opportunity is.”

It is just such added revenue, newspaper lobbyists argue, that the troubled newspaper industry may need to survive the online transition.

In its first draft of voluntary guidelines, the FTC staff called for clear warnings of tracking and for allowing users to permanently opt out of a Web site’s tracking mechanism.

“Every Web site,” according to the FTC’s draft rules, should allow consumers to “choose whether or not to have their information collected for such purpose.”

But the newspaper association argues that allowing the user to opt out isn’t necessary: If a user doesn’t want to be tracked by a site—assuming the user is aware of being tracked—he or she can simply avoid that site. Besides, Boyle noted, users are free to periodically delete the cookies on their computers.

“I really don’t know that there is a personal privacy issue here,” Boyle said. “The government really needs to let things play out.”

SOURCE

Posted by Elvis on 05/23/08 •
Section Privacy And Rights • Section Broadband Privacy