Article 43

 

Privacy And Rights

Wednesday, September 18, 2019

Your MRI

image: HIPAA

Maybe this explains why:

The treatment consent form included text that gives the hospital the ok to share any medical and personal information with any third party they wish, without restriction.
- Florida Hospital insists I let them do whatever they want with my medical records

Millions of Americans' Medical Images and Data Are Available on the Internet. Anyone Can Take a Peek.
Hundreds of computer servers worldwide that store patient X-rays and MRIs are so insecure that anyone with a web browser or a few lines of computer code can view patient records. One expert warned about it for years.

By Jack Gillum, Jeff Kao and Jeff Larson
ProPublica
September 17, 2019

Medical images and health data belonging to millions of Americans, including X-rays, MRIs and CT scans, are sitting unprotected on the internet and available to anyone with basic computer expertise.

The records cover more than 5 million patients in the U.S. and millions more around the world. In some cases, a snoop could use free software programs - or just a typical web browser - to view the images and private data, an investigation by ProPublica and the German broadcaster Bayerischer Rundfunk found.

We identified 187 servers - computers that are used to store and retrieve medical data - in the U.S. that were unprotected by passwords or basic security precautions. The computer systems, from Florida to California, are used in doctors’ offices, medical-imaging centers and mobile X-ray services.

“It’s not even hacking. It’s walking into an open door,” said Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security. Some medical providers started locking down their systems after we told them what we had found.

Our review found that the extent of the exposure varies, depending on the health provider and what software they use. For instance, the server of U.S. company MobilexUSA displayed the names of more than a million patients, all obtainable by typing in a simple data query. Their dates of birth, doctors and procedures were also included.

Alerted by ProPublica, MobilexUSA tightened its security last week. The company takes mobile X-rays and provides imaging services to nursing homes, rehabilitation hospitals, hospice agencies and prisons. “We promptly mitigated the potential vulnerabilities identified by ProPublica and immediately began an ongoing, thorough investigation,” MobilexUSA’s parent company said in a statement.

Another imaging system, tied to a physician in Los Angeles, allowed anyone on the internet to see his patients’ echocardiograms. (The doctor did not respond to inquiries from ProPublica.)

All told, medical data from more than 16 million scans worldwide was available online, including names, birthdates and, in some cases, Social Security numbers.

Experts say it’s hard to pinpoint who’s to blame for the failure to protect the privacy of medical images. Under U.S. law, health care providers and their business associates are legally accountable for securing the privacy of patient data. Several experts said such exposure of patient data could violate the Health Insurance Portability and Accountability Act, or HIPAA, the 1996 law that requires health care providers to keep Americans’ health data confidential and secure.

Although ProPublica found no evidence that patient data was copied from these systems and published elsewhere, the consequences of unauthorized access to such information could be devastating. Medical records are one of the most important areas for privacy because they’re so sensitive. “Medical knowledge can be used against you in malicious ways: to shame people, to blackmail people,” said Cooper Quintin, a security researcher and senior staff technologist with the Electronic Frontier Foundation, a digital-rights group.

“This is so utterly irresponsible,” he said.

The issue should not be a surprise to medical providers. For years, one expert has tried to warn about the casual handling of personal health data. Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, said medical imaging software has traditionally been written with the assumption that patients’ data would be secured by the customer’s computer security systems.

But as those networks at hospitals and medical centers became more complex and connected to the internet, the responsibility for security shifted to network administrators who assumed safeguards were in place. “Suddenly, medical security has become a do-it-yourself project,” Pianykh wrote in a 2016 research paper he published in a medical journal.

ProPublica’s investigation built upon findings from Greenbone Networks, a security firm based in Germany that identified problems in at least 52 countries on every inhabited continent. Greenbone’s Dirk Schrader first shared his research with Bayerischer Rundfunk after discovering some patients’ health records were at risk. The German journalists then approached ProPublica to explore the extent of the exposure in the U.S.

Schrader found five servers in Germany and 187 in the U.S. that made patients’ records available without a password. ProPublica and Bayerischer Rundfunk also scanned Internet Protocol addresses and identified, when possible, which medical provider they belonged to.

ProPublica independently determined how many patients could be affected in America, and found some servers ran outdated operating systems with known security vulnerabilities. Schrader said that data from more than 13.7 million medical tests in the U.S. were available online, including more than 400,000 in which X-rays and other images could be downloaded.

The privacy problem traces back to the medical profession’s shift from analog to digital technology. Long gone are the days when film X-rays were displayed on fluorescent light boards. Today, imaging studies can be instantly uploaded to servers and viewed over the internet by doctors in their offices.

In the early days of this technology, as with much of the internet, little thought was given to security. The passage of HIPAA required patient information to be protected from unauthorized access. Three years later, the medical imaging industry published its first security standards.

Our reporting indicated that large hospital chains and academic medical centers did put security protections in place. Most of the cases of unprotected data we found involved independent radiologists, medical imaging centers or archiving services.

One German patient, Katharina Gaspari, got an MRI three years ago and said she normally trusts her doctors. But after Bayerischer Rundfunk showed Gaspari her images available online, she said: “Now, I am not sure if I still can.” The German system that stored her records was locked down last week.

We found that some systems used to archive medical images also lacked security precautions. Denver-based Offsite Image left open the names and other details of more than 340,000 human and veterinary records, including those of a large cat named “Marshmellow,” ProPublica found. An Offsite Image executive told ProPublica the company charges clients $50 for access to the site and then $1 per study. “Your data is safe and secure with us,” Offsite Image’s website says.

The company referred ProPublica to its tech consultant, who at first defended Offsite Image’s security practices and insisted that a password was needed to access patient records. The consultant, Matthew Nelms, then called a ProPublica reporter a day later and acknowledged Offsite Image’s servers had been accessible but were now fixed.

“We were just never even aware that there was A POSSIBILITY that could even happen,” Nelms said.

In 1985, an industry group that included radiologists and makers of imaging equipment created a standard for medical imaging software. The standard, which is now called DICOM, spelled out how medical imaging devices talk to each other and share information.
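As a rough illustration of how standardized the format is: every DICOM Part 10 file begins with a 128-byte preamble followed by the four-byte marker "DICM", so even a few lines of code can recognize a medical image file on an exposed server. A minimal sketch (the file contents below are fabricated, not a real scan):

```python
DICOM_PREAMBLE_LEN = 128  # DICOM Part 10: 128-byte preamble, then "DICM"

def looks_like_dicom(data: bytes) -> bool:
    """Return True if the byte stream carries the DICOM Part 10 magic marker."""
    return (len(data) >= DICOM_PREAMBLE_LEN + 4 and
            data[DICOM_PREAMBLE_LEN:DICOM_PREAMBLE_LEN + 4] == b"DICM")

# A tiny stand-in for a scan file: zeroed preamble, magic, then arbitrary bytes.
fake_scan = bytes(DICOM_PREAMBLE_LEN) + b"DICM" + b"\x02\x00\x00\x00UL\x04\x00"
print(looks_like_dicom(fake_scan))      # recognized as DICOM
print(looks_like_dicom(b"not a scan"))  # arbitrary bytes are rejected
```

That uniformity is what makes both legitimate interoperability and bulk scraping of unsecured servers so easy.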

We shared our findings with officials from the Medical Imaging & Technology Alliance, the group that oversees the standard. They acknowledged that there were hundreds of servers with an open connection on the internet, but suggested the blame lay with the people who were running them.

“Even though it is a comparatively small number,” the organization said in a statement, “it may be possible that some of those systems may contain patient records. Those likely represent bad configuration choices on the part of those operating those systems.”

Meeting minutes from 2017 show that a working group on security learned of Pianykh’s findings and suggested meeting with him to discuss them further. That “action item” was listed for several months, but Pianykh said he was never contacted. The medical imaging alliance told ProPublica last week that the group did not meet with Pianykh because the concerns he raised were sufficiently addressed in his article. They said the committee concluded its security standards were not flawed.

Pianykh said that misses the point. It’s not a lack of standards; it’s that medical device makers don’t follow them. “Medical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice,” Pianykh wrote in 2016.

ProPublica’s latest findings follow several other major breaches. In 2015, U.S. health insurer Anthem Inc. revealed that private data belonging to more than 78 million people was exposed in a hack. In the last two years, U.S. officials have reported that more than 40 million people have had their medical data compromised, according to an analysis of records from the U.S. Department of Health and Human Services.

Joy Pritts, a former HHS privacy official, said the government isn’t tough enough in policing patient privacy breaches. She cited an April announcement from HHS that lowered the maximum annual fine, from $1.5 million to $250,000, for what’s known as “corrected willful neglect” - the result of conscious failures or reckless indifference that a company tries to fix. She said that large firms would not only consider those fines as just the cost of doing business, but that they could also negotiate with the government to get them reduced. A ProPublica examination in 2015 found few consequences for repeat HIPAA offenders.

A spokeswoman for HHS Office for Civil Rights, which enforces HIPAA violations, said it wouldn’t comment on open or potential investigations.

“What we typically see in the health care industry is that there is Band-Aid upon Band-Aid applied to legacy computer systems,” said Singh, the cybersecurity expert. She said it’s a “shared responsibility” among manufacturers, standards makers and hospitals to ensure computer servers are secured.

“It’s 2019,” she said. “There’s no reason for this.”

How Do I Know if My Medical Imaging Data is Secure?

If you are a patient:

If you have had a medical imaging scan (e.g., X-ray, CT scan, MRI, ultrasound, etc.), ask the health care provider that did the scan - or your doctor - whether access to your images requires a login and password. Ask your doctor if their office or the medical imaging provider to which they refer patients conducts a regular security assessment as required by HIPAA.

If you are a medical imaging provider or doctor’s office:

Researchers have found that picture archiving and communication systems (PACS) servers implementing the DICOM standard may be at risk if they are connected directly to the internet without a VPN or firewall, or if access to them does not require a secure password. You or your IT staff should make sure that your PACS server cannot be accessed via the internet without a VPN connection and password. If you know the IP address of your PACS server but are not sure whether it is (or has been) accessible via the internet, please reach out to us at “medicalimaging at propublica.org.”
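A first-pass reachability check along those lines needs nothing but the standard library: try to open a TCP connection to the DICOM service ports from a machine outside your network. This is only a sketch of the idea, not a security audit - the host and ports below are placeholders (104 is the historical DICOM port, 11112 the IANA-registered one):

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a plain TCP connection; True means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with your PACS server's public IP, tested from OUTSIDE your network.
for port in (104, 11112):
    print(port, is_port_reachable("127.0.0.1", port))
```

If either port answers from the open internet without a VPN in between, that is exactly the exposure the researchers describe, and it warrants an immediate conversation with your IT staff.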

SOURCE

---

FDA informs patients, providers and manufacturers about potential cybersecurity vulnerabilities for connected medical devices and health care networks that use certain communication software

FDA
October 1, 2019

Today, the U.S. Food and Drug Administration is informing patients, health care professionals, IT staff in health care facilities and manufacturers of a set of cybersecurity vulnerabilities, referred to as “URGENT/11,” that - if exploited by a remote attacker - may introduce risks for medical devices and hospital networks. URGENT/11 affects several operating systems that may then impact certain medical devices connected to a communications network, such as wi-fi and public or home Internet, as well as other connected equipment such as routers, connected phones and other critical infrastructure equipment. These cybersecurity vulnerabilities may allow a remote user to take control of a medical device and change its function, cause denial of service, or cause information leaks or logical flaws, which may prevent a device from functioning properly or at all.

To date, the FDA has not received any adverse event reports associated with these vulnerabilities. The public was first informed of these vulnerabilities in a July 2019 advisory sent by the Department of Homeland Security. Today, the FDA is providing additional information regarding the source of these vulnerabilities and recommendations for reducing or avoiding risks the vulnerabilities may pose to certain medical devices.

“While advanced devices can offer safer, more convenient and timely health care delivery, a medical device connected to a communications network could have cybersecurity vulnerabilities that could be exploited, resulting in patient harm,” said Amy Abernethy, M.D., Ph.D., FDA’s principal deputy commissioner. “The FDA urges manufacturers everywhere to remain vigilant about their medical products - to monitor and assess cybersecurity vulnerability risks, and to be proactive about disclosing vulnerabilities and mitigations to address them. This is a cornerstone of the FDA’s efforts to work with manufacturers, health care delivery organizations, security researchers, other government agencies and patients to develop and implement solutions to address cybersecurity issues that affect medical devices in order to keep patients safe.”

The URGENT/11 vulnerabilities exist in a third-party software, called IPnet, that computers use to communicate with each other over a network. This software is part of several operating systems and may be incorporated into other software applications, equipment and systems. The software may be used in a wide range of medical and industrial devices. Though the IPnet software may no longer be supported by the original software vendor, some manufacturers have a license that allows them to continue to use it without support. Therefore, the software may be incorporated into a variety of medical and industrial devices that are still in use today.

Security researchers, manufacturers and the FDA are aware that the following operating systems are affected, but the vulnerability may not be included in all versions of these operating systems:

VxWorks (by Wind River)
Operating System Embedded (OSE) (by ENEA)
INTEGRITY (by GreenHills)
ThreadX (by Microsoft)
ITRON (by TRON)
ZebOS (by IP Infusion)

The agency is asking manufacturers to work with health care providers to determine which medical devices, either in their health care facility or used by their patients, could be affected by URGENT/11 and develop risk mitigation plans. Patients should talk to their health care providers to determine if their medical device could be affected and to seek help right away if they notice the functionality of their device has changed.

The FDA takes reports of vulnerabilities in medical devices very seriously, and today’s safety communication includes recommendations to manufacturers for continued monitoring, reporting and remediation of medical device cybersecurity vulnerabilities. The FDA is recommending that manufacturers conduct a risk assessment, as described in the FDA’s cybersecurity postmarket guidance, to evaluate the impact of these vulnerabilities on medical devices they manufacture and develop risk mitigation plans. Medical device manufacturers should work with operating system vendors to identify available patches and other recommended mitigation methods, work with health care providers to determine any medical devices that could potentially be affected, and discuss ways to reduce associated risks.

Some medical device manufacturers are already actively assessing which devices may be affected by URGENT/11 and are identifying risk and remediation actions. In addition, several manufacturers have already proactively notified customers of affected products, which include medical devices such as an imaging system, an infusion pump and an anesthesia machine. The FDA expects that additional medical devices with one or more of the cybersecurity vulnerabilities will be identified.

“While we are not aware of patients who may have been harmed by this particular cybersecurity vulnerability, the risk of patient harm if such a vulnerability were left unaddressed could be significant,” said Suzanne Schwartz, M.D., MBA, deputy director of the Office of Strategic Partnerships and Technology Innovation in the FDA’s Center for Devices and Radiological Health. “The safety communication issued today contains recommendations for what actions patients, health care providers and manufacturers should take to reduce the risk this vulnerability could pose. It’s important for manufacturers to be aware that the nature of these vulnerabilities allows the attack to occur undetected and without user interaction. Because an attack may be interpreted by the device as a normal network communication, it may remain invisible to security measures.”

The FDA will continue its work with manufacturers and health care delivery organizations - as well as security researchers and other government agencies - to help develop and implement solutions to address cybersecurity issues throughout a device’s total product lifecycle.

The FDA will continue to assess new information concerning the URGENT/11 vulnerabilities and will keep the public informed if significant new information becomes available.

The FDA, an agency within the U.S. Department of Health and Human Services, protects the public health by assuring the safety, effectiveness, and security of human and veterinary drugs, vaccines and other biological products for human use, and medical devices. The agency also is responsible for the safety and security of our nation’s food supply, cosmetics, dietary supplements, products that give off electronic radiation, and for regulating tobacco products.

SOURCE

Posted by Elvis on 09/18/19 •
Section Privacy And Rights • Section Dying America

Sunday, August 25, 2019

Defcon Hospital Horror Stories

By Emil Hozan
Serendipity
August 23, 2019

Disclaimer: don’t read this if you don’t want your sense of security involving medical information shattered. This post is based on a Skytalk presented at Def Con 27. The presenter opted to redact their name for privacy concerns. What made this talk quite startling was the fact that the presenter supports over 25 hospitals around the US and has insight into just how poor information systems security is within these hospital environments. Due to the nature of these talks, recordings are prohibited, and I didn’t want to get kicked out, so I avoided taking notes as well, just in case. This semi-ties into a past post I wrote pertaining to poor MEDICAL DEVICE SECURITY and another follow-up post about what the INDUSTRY IS DOING about it.

That said, if you want to learn more about an insider’s perspective on the horror stories within the medical industry, read on.

A Barrage of Issues

Hearing all that was said was quite terrifying. From password concerns to the sheer number of internal vulnerabilities detected, I was simply astonished at the words coming from the speaker. Even more astonishing, however, was UPPER MANAGEMENT’S LACK OF INTEREST in corrective action. Stick with me while I go through the points discussed and what solutions were proposed but not implemented.

For starters, a huge concern was the operating systems in use within the hospitals the speaker supported. He stated that DOS was still being used, and that he was the only employee on his team who even knew what DOS was. Not to mention the continued use of Windows XP, NT, and 95 - now if that doesn’t date a few things, I am not sure what will. These are machines handling personal health information, where critical vulnerabilities are publicized and no patches or fixes are available for these unsupported systems. What’s even crazier was a “new robot” that was in charge of provisioning medicine - it, too, ran on DOS!

If you’re curious about release dates, check out the Wikipedia page discussing Windows versions and their release dates. On that same note - and one of the most alarming points made - was that, on average, his internally run vulnerability scans result in over 300 critical vulnerabilities! You read that right: yes, this is on average.

Next up was poor password practices. From weak passwords just barely satisfying password policies, to doctors openly sharing passwords with staff members, it’s almost as if anyone could access a patient’s health information masked as a doctor. The speaker stated that it wasn’t uncommon for nurses to know the passwords of at least three doctors they worked with. There were network devices that didn’t even have a password! We all know what can happen with compromised passwords, or even a lack of a password - yikes!

To make matters worse, I forget the password solution used in his supported hospitals, but it was something along the lines of SSGP or similar. What I know is that it was four characters and started with SS. The point is, this speaker was part of a hacker group, and this group discovered a vulnerability in that solution but opted not to disclose it. The speaker’s dire warning was: “All medical staff should change their passwords, immediately!” Think about that for a moment: a password solution with an undisclosed vulnerability. I’ll tie these password points in later - keep reading.

Another alarming act was his attempt at personally lockpicking doors protecting secure areas. He mentioned one such incident where two or three people approached him stating, “What you’re doing is pretty shady.” The speaker replied, “I know, you’re right. What I am doing is shady.” He said that after three hours, no one reported him, nor did security confront him. The speaker was able to break his way into network closets, where equipment was essentially wide open, and was able to set up rogue access points, as well as scan the network. Mind you, he was doing this in an attempt to check what security measures were in place.

One observation the speaker made was the sheer amount of bacteria and mold growing on this network equipment. He showed pictures he took of Ethernet cables and switches caked with molasses and other icky stuff - ewww.

Wow is really all I can say. It was astonishing and, to be honest, it was tough to admit and see truth in his alleged statements. However, what made me believe his story more than anything was his interest in his and his father’s medical conditions. One day he got curious due to the number of hospital visits the two make. When he started poking, he went full throttle to see just how poor security measures are.

Enough Scary Talk, Proposed Solutions

In reading the above section, you should know by now what some proposed solutions would be. Examples include not sharing your password, enabling passwords for that matter, and using currently supported operating systems, as well as ensuring physical security is a thing. If you weren’t thinking of those, now you know.

Past that, and what actually seemed to be a fair solution to avoid a lot of the above: mobile medical units.

The speaker started off by saying mammograms are mobile, and that there should be an effort in mobilizing other critical devices. Get everything mobilized and start treating patients in-house, where they’re most comfortable. That really stuck out to me. There’s always been a notion of making patients most comfortable, and the truth is, oftentimes, being at home is what’s most comfortable.

I am sure there are more logistics behind that statement, which leads to a desire for expanded conversations on how to go about mobilizing medical staff. It seems semi-feasible, but I also know that there are a lot of varying illnesses, which makes it seem infeasible at the same time. I’m no medical expert, so I can’t speak too much on this.

Tying in the Loose Ends

Above, I left the password talk on a cliffhanger. Allow me to expand in this section.

The speaker stated the number of phishing attempts was simply overwhelming, and that many fall prey. Two examples he gave were recent: one in which finance department personnel fell victim to a fraudulent invoice totaling $500,000 (that’s a lot of money), and the other a critical ransomware attack (with a demand that started at $900,000, which the staff was able to negotiate down to $500,000). The latter was facilitated by compromised passwords.

I’m not sure about you but I’ve received many fraudulent invoice requests of varying amounts. It’s easier for me to disregard because I know I am not in the position to handle such matters. The same cannot be said for the one who fell victim though. That said, and with such a large sum of money, employees shouldn’t blindly pay anything without checking the records. There should be a way to validate such invoices and I find it hard to believe there isn’t some sort of paper trail regarding who the hospital does business with and what’s owed to whom. If this isn’t the case, paying an excessive amount of money for an untraceable invoice is an expensive fault that needs correction.

As for the latest ransomware attack - this started Monday, August 5th, the week of Black Hat / DefCon. He got into town that night, went to sleep, and was awoken early Tuesday morning with reports of a ransomware attack. Immediately he told the caller to ensure all passwords were changed and what to expect. The backups were too old - tsk, tsk - so they were left with no choice but to negotiate and pay. The staff did this, yet they failed to change their passwords! After forking out $500k, they were hit with the same attack again that Thursday of the same week, because they didn’t change their passwords! Imagine that. And to make it worse, the staff agreed to change their passwords this time, but opted to wait until the following week to do so.

Did they? I am not sure but waiting is such a silly thing to do.

This all leads back to user training. All personnel should be trained on how to look out for phishing emails and other unsolicited emails claiming a lack of payment. The same applies to password use. Reusing passwords is a no-no, and with all that was said above, multi-factor authentication would definitely be worth the cost. With these two examples, that’s a fair sum of money paid, and you’d figure that change would be expected.
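On the multi-factor point: the one-time codes used by most authenticator apps are not exotic. RFC 6238 time-based one-time passwords (TOTP) fit in a few lines of standard-library Python; here is a sketch using the RFC's published test secret:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (the common HMAC-SHA1 variant)."""
    now = time.time() if for_time is None else for_time
    counter = int(now) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: T=59s with this ASCII secret -> "94287082".
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

In production you would use a vetted library and per-user secrets delivered out of band, of course; the point is only that a second factor is cheap to deploy relative to a $500k ransom.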

Conclusion

I would be lying if I said I’d feel comfortable going to a doctor and feeling my personal health information is safe. Obviously, when you’re in a critical condition it may not mean as much at that time - your life is on the line, after all - but it’s still a scary thought to know the gravity of just how poor hospital security allegedly is. Further, given the cost of HIPAA violations, the speaker stated that hospitals are more prone to not reporting breaches and thus not getting fined. Again, these are all allegations, and all I am doing is summarizing what was reported.

Tying in the whole medical device concerns with this development, change is in order. With personal information being publicized on the dark web and accessible by other threat actors, there’s no telling what they may do with that information. There was a lot more said in this talk, and what I wrote was merely a glimpse. It’s difficult to ensure your personal information is safe when you’re not the one responsible for keeping it safe. The truth is, it’s the responsibility of the doctors, along with the medical staff and the IT teams of said hospitals.

SOURCE

Posted by Elvis on 08/25/19 •
Section Privacy And Rights

Wednesday, June 12, 2019

AC Phone Home

snooping on your pc

I got a new HONEYWELL THERMOSTAT for the air conditioner that has internet connectivity for remote access, and pulls a weather report.

Like everything IoT - it INSISTS ON A MIDDLEMAN (pretty much anyone, after looking at their EULA) possibly peeking at the things connected to my network, and who knows WHAT ELSE:

The Internet has been around for about 20 years now, and its security is far from perfect. Hacker groups still ruthlessly take advantage of these flaws, despite billions spent on tech security. The IoT, on the other hand, is primitive. And so is its security.

Once everything we do, say, think, and eat is tracked, the big data that’s available about each of us is immensely valuable. When companies know our lives inside and out, they can use that data to make us buy even more stuff. Once they control your data, they control you.

Why can’t I just VPN into the house and connect to it that way?

Because then they can’t SNOOP.

Their EULA SAYS:

We may use your Contact Information to market Honeywell and third-party products and services to you via various methods

We also use third parties to help with certain aspects of our operations, which may require disclosure of your Consumer Information to them.

Honeywell uses industry standard web ANALYTICS to track web visits, Google Analytics and Adobe Analytics.

GOOGLE and Adobe may also TRANSFER this INFORMATION to third parties where required to do so by law, or where such third parties process the information on Google’s or Adobe’s behalf.

You acknowledge and agree that Honeywell and its affiliates, service providers, suppliers, and dealers are permitted at any time and without prior notice to remotely push software

collection and use of certain information as described in this Privacy Statement, including the transfer of this information to the United States and/or other countries for storage

Wonderful.

I connected it to the LAN without asking it to get the weather - or signing up for anything at HONEYWELL’S SITE.

As fast as I could turn my head to peek at the firewall, it was chatting on the internet, and crapped out with some SSL error:

‘SSL_PROTO_REJECT: 48: 192.168.0.226:61492 -> 199.62.84.151:443’
‘SSL_PROTO_REJECT: 48: 192.168.0.226:65035 -> 199.62.84.152:443’
‘SSL_PROTO_REJECT: 48: 192.168.0.226:55666 -> 199.62.84.153:443’
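Lines like these are easy to reduce to the set of endpoints a device is phoning home to. A small parsing sketch - the log format is assumed from the three lines above, so adjust the regex for your own firewall:

```python
import re

# Sample firewall log lines, copied from the entries above.
LOG_LINES = [
    "'SSL_PROTO_REJECT: 48: 192.168.0.226:61492 -> 199.62.84.151:443'",
    "'SSL_PROTO_REJECT: 48: 192.168.0.226:65035 -> 199.62.84.152:443'",
    "'SSL_PROTO_REJECT: 48: 192.168.0.226:55666 -> 199.62.84.153:443'",
]

PATTERN = re.compile(
    r"SSL_PROTO_REJECT: \d+: "
    r"(?P<src>[\d.]+):(?P<sport>\d+) -> (?P<dst>[\d.]+):(?P<dport>\d+)"
)

# Collect each unique destination (address, port) the device contacted.
destinations = set()
for line in LOG_LINES:
    m = PATTERN.search(line)
    if m:
        destinations.add((m.group("dst"), int(m.group("dport"))))

for dst, port in sorted(destinations):
    print(f"device talked to {dst}:{port}")
```

Feeding a day's worth of logs through this gives you the full list of remote hosts to investigate (or block).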

Maybe the website has a problem:

# curl -sslv2 199.62.84.151:443
* About to connect() to 199.62.84.151 port 443 (#0)
* Trying 199.62.84.151… connected
* Connected to 199.62.84.151 (199.62.84.151) port 443 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 199.62.84.151:443
> Accept: */*
>
* Closing connection #0
* Failure when receiving data from the peer

# curl -sslv3 199.62.84.151:443
* About to connect() to 199.62.84.151 port 443 (#0)
* Trying 199.62.84.151… connected
* Connected to 199.62.84.151 (199.62.84.151) port 443 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 199.62.84.151:443
> Accept: */*
>
* Closing connection #0
* Failure when receiving data from the peer

# curl -tlsv1 199.62.84.151:443
curl: (56) Failure when receiving data from the peer

# curl -tlsv1.0 199.62.84.151:443
curl: (56) Failure when receiving data from the peer

# curl -tlsv1.1 199.62.84.151:443
curl: (56) Failure when receiving data from the peer

# curl -tlsv1.2 199.62.84.151:443
curl: (56) Failure when receiving data from the peer

# curl 199.62.84.151:80
curl: (56) Failure when receiving data from the peer

Then I pulled the plug.  Even if Honeywell’s website is broken - I still fear this thermostat will find a way to download software, and maybe START SPYING ON MY HOME NETWORK:

The US intelligence chief has acknowledged for the first time that agencies might use a new generation of smart household devices to increase their surveillance capabilities.

Maybe someday I’ll firewall off HONEYWELL’S NETBLOCKS, connect it again, and see where it goes.
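Firewalling off netblocks comes down to a membership test: does a destination address fall inside a blocked range? A minimal sketch using Python's ipaddress module - note the 199.62.84.0/24 range is an assumption derived from the three addresses in the firewall log above, not a confirmed Honeywell allocation:

```python
import ipaddress

# Hypothetical blocklist - the /24 is inferred from the addresses seen
# in the firewall log; a real list would come from WHOIS/BGP data.
BLOCKED_NETS = [ipaddress.ip_network("199.62.84.0/24")]

def is_blocked(addr: str) -> bool:
    """True if addr falls inside any blocked netblock."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_NETS)

print(is_blocked("199.62.84.151"))  # the thermostat's destination
print(is_blocked("8.8.8.8"))        # unrelated traffic
```

The same membership test is what a firewall rule like a drop on 199.62.84.0/24 would apply to every outbound packet.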

For now - I’m too AFRAID:

When the cybersecurity industry warns about the nightmare of hackers causing blackouts, the scenario they describe typically entails an elite team of hackers breaking into the inner sanctum of a power utility to start flipping switches. But one group of researchers has imagined how an entire power grid could be taken down by hacking a less centralized and protected class of targets: home air conditioners and water heaters.

---

Think that’s bad?  Check this out:

Don’t Toss That Bulb, It Knows Your Password

By Tom Nardi
Hackaday
January 28, 2019

Whether it was here on Hackaday or elsewhere on the Internet, you’ve surely heard more than a few cautionary tales about the “Internet of Things” by now. As it turns out, giving every gadget you own access to your personal information and Internet connection can lead to unintended consequences. Who knew, right? But if you need yet another example of why trusting your home appliances with your secrets is potentially a bad idea, [Limited Results] is here to make sure you spend the next few hours doubting your recent tech purchases.

In a series of POSTS on the [Limited Results] blog, low-cost smart bulbs are cracked open and investigated to see what kind of knowledge they’ve managed to collect about their owners. Not only was it discovered that bulbs manufactured by Xiaomi, LIFX, and Tuya stored the WiFi SSID and encryption key in plain-text, but that recovering said information from the bulbs was actually quite simple. So next time one of those cheapo smart bulbs starts flickering, you might want to take a hammer to it before tossing it in the trash can; you never know where it, and the knowledge it has of your network, might end up.

Regardless of the manufacturer of the bulb, the process to get one of these devices on your network is more or less the same. An application on your smartphone connects to the bulb and provides it with the network SSID and encryption key. The bulb then disconnects from the phone and reconnects to your home network with the new information. It’s a process that at this point we’re all probably familiar with, and there’s nothing inherently wrong with it.

The trouble comes when the bulb needs to store the connection information it was provided. Rather than obfuscating it in some way, the SSID and encryption key are simply stored in plain-text on the bulb’s WiFi module. Recovering that information is just a process of finding the correct traces on the bulb’s PCB (often there are test points which make this very easy), and dumping the chip’s contents to the computer for analysis.

It’s not uncommon for smart bulbs like these to use the ESP8266 or ESP32, and [Limited Results] found that to be the case here. With the wealth of information and software available for these very popular WiFi modules, dumping the firmware binary was no problem. Once the binary was in hand, a little snooping around with a hex editor was all it took to identify the network login information. The firmware dumps also contained information such as the unique hardware IDs used by the “cloud” platforms the bulbs connect to, and in at least one case, the root certificate and RSA private key were found.
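The hex-editor step amounts to scanning the dump for runs of printable ASCII, much like the Unix `strings` tool. A sketch of that scan - the sample blob and credentials below are fabricated for illustration, not from the actual teardown:

```python
import re

def printable_strings(blob: bytes, min_len: int = 4):
    """Yield printable-ASCII runs of at least min_len bytes as strings,
    roughly what the Unix `strings` tool does."""
    for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob):
        yield m.group().decode("ascii")

# Fabricated firmware dump: binary padding around a plain-text SSID and
# key, mimicking what the bulb teardown found on the flash chip.
dump = b"\x00\xff\x10ssid=HomeWiFi\x00\x04\x7fpsk=hunter2secret\x00\x01"

for s in printable_strings(dump):
    print(s)
# ssid=HomeWiFi
# psk=hunter2secret
```

If the credentials sit in flash unencrypted, this is all the "analysis" an attacker with a discarded bulb needs.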

On the plus side, being able to buy cheap smart devices that are running easily hackable modules like the ESP makes it easier for us to create custom firmware for them. Hopefully the community can come up with slightly less suspect software, but really just keeping the things from connecting to anything outside the local network would be a step in the right direction.

(Some days later)

[Limited Results] had hinted to us that he had previously disclosed some vulnerabilities to the bulb’s maker, but that until they fixed them, he didn’t want to make them public. They’re fixed now, and it appears that the bulbs were sending everything over the network unencrypted - your data, OTA firmware upgrades, everything.  They’re using TLS now, so good job [Limited Results]! If you’re running an old version of their lightbulbs, you might have a look.

On WiFi credentials, we were told: “In the case where sensitive information in the flash memory wasn’t encrypted, the new version will include encrypted storage processing, and the customer will be able to select this version of the security chips, which can effectively avoid future security problems.” Argue about what that actually means in the comments.

SOURCE

Posted by Elvis on 06/12/19 •
Section Privacy And Rights • Section Broadband Privacy

Tuesday, June 04, 2019

Still Looking For Reasons To Keep Away From Windows? Part 22

badwindows.jpg

Russia’s Would-Be Windows Replacement Gets a Security Upgrade

By Patrick Tucker
Defense One
May 28, 2019

For sensitive communications, the Russian government aims to replace the ubiquitous Microsoft operating system with a bespoke flavor of Linux, a sign of the country’s growing IT independence.

For the first time, Russia has granted its highest security rating to a domestically developed operating system deeming ASTRA LINUX suitable for communications of “special importance” across the military and the rest of the government. The designation clears the way for Russian intelligence and military workers who had been using Microsoft products on office computers to use Astra Linux instead.

“There is hope that the domestic OS [operating system] will be able to replace the Microsoft product. Of course, this is good news for the Russian market,” said German Klimenko, former IT advisor to Russian President Vladimir Putin and chairman of the board of Russia’s Digital Economy Development Fund, a venture capital fund run by the government. Klimenko spoke to the Russian newspaper Izvestia on Friday.

Although Russian officials used Windows for secure communications, they heavily modified the software and subjected Windows-equipped PCs to lengthy and rigorous security checks before putting the computers in use. The testing and analysis was to satisfy concerns that vulnerabilities in MICROSOFT OPERATING SYSTEMS could be patched to prevent hacking from countries like the United States. Such evaluations could take three years, according to the newspaper.

A variant of the popular Linux open-source operating system, Astra Linux has been developed over the past decade by Scientific/Manufacturing Enterprise Rusbitech. In January 2018, the Russian Ministry of Defense said it intended to switch to Astra Linux as soon as it met the necessary security standards. Before that, the software had been on some automated control systems, such as the kind sometimes found on air defense systems and some airborne computer systems.

It’s another example of Russia’s self-imposed IT exile, along with the efforts to disconnect the country from the global Internet by 2021 and to create its own domain name service.

“The Russian government doesn’t trust systems developed by foreign companies to handle sensitive data, due to fears of espionage through those systems,” said Justin Sherman, Cybersecurity Policy Fellow at New America. Using domestically produced technologies to manage sensitive data is just another component of the Kremlin’s broader interest in exercising more autonomy over the digital machines and communications within its borders.

Sam Bendett, research analyst with the Center for Naval Analyses’ International Affairs Group, said, “One of the main sticking points for the Russian government was the fact that imported operating systems had vulnerabilities and back doors that Moscow thought could be exploited by international intelligence agencies. This is essentially Russia ensuring its cybersecurity against potential intrusions.”

It’s unsurprising that Moscow distrusts Microsoft software, given that Russian-developed malware, like the NotPetya virus used against energy targets in Ukraine, exploits vulnerabilities in Windows.

Sherman says that while the Russian government may find Astra Linux a suitable substitute for Windows, it’s not a serious competitor anyplace else. There’s no particular reason for others to use this bespoke variant of Linux. Also, suspicion of Russian software has been rising internationally. The country’s most successful and recognized software company, Kaspersky, can no longer sell its wares to the U.S. government. Last May, the cybersecurity firm opened a “transparency lab” in Switzerland in an attempt to assuage jittery European customers.

“If this operating system were to be marketed outside of Russia, the prospects likely aren’t great,” Sherman said. Astra Linux doesn’t exactly have a worldwide foothold compared to the systems it’s replacing within Russia, and this is only compounded by the fact that just as the Russian government has security concerns about software made in other countries, other countries may very well have security concerns about using software made in Russia and endorsed by the Russian government.

But, says Bendett, a potential client list for Russian software does exist outside of Russia, just as there is for Russian anti-aircraft systems. “There is a growing list of nations that will probably want to have its main government and military systems run on an OS from a nation more friendly to their interests - like Syria... or other countries where Russia is seeking to make inroads. So the possibility for export definitely exists.”

SOURCE

Posted by Elvis on 06/04/19 •
Section Privacy And Rights • Section Microsoft And Windows

Thursday, May 30, 2019

Iphone Phone Home

image: iphone phone home

It’s the middle of the night. Do you know who your iPhone is talking to?

Apple says, “What happens on your iPhone stays on your iPhone.”
Our privacy experiment showed 5,400 hidden app trackers guzzled our data - in a single week.

By Geoffrey A. Fowler
Washington Post
May 28, 2019

It’s 3 a.m. Do you know what your iPhone is doing?

Mine has been alarmingly busy. Even though the screen is off and I’m snoring, apps are beaming out lots of information about me to companies I’ve never heard of. Your iPhone probably is doing the same - and Apple could be doing more to stop it.

On a recent Monday night, a dozen marketing companies, research firms and other personal data guzzlers got reports from my iPhone. At 11:43 p.m., a company called Amplitude learned my phone number, email and exact location. At 3:58 a.m., another called Appboy got a digital fingerprint of my phone. At 6:25 a.m., a tracker called Demdex received a way to identify my phone and sent back a list of other trackers to pair up with.

And all night long, there was some startling behavior by a household name: Yelp. It was receiving a message that included my IP address - once every five minutes.

Our data has a secret life in many of the devices we use every day, from talking Alexa speakers to smart TVs. But we’ve got a giant blind spot when it comes to the data companies probing our phones.

You might assume you can count on Apple to sweat all the privacy details. After all, it touted in a recent ad, “What happens on your iPhone stays on your iPhone.” My investigation suggests otherwise.

iPhone apps I discovered tracking me by passing information to third parties - just while I was asleep - include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

And your iPhone doesn’t only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic. According to privacy firm Disconnect, which helped test my iPhone, those unwanted trackers would have spewed out 1.5 gigabytes of data over the span of a month. That’s half of an entire basic wireless service plan from AT&T.

“This is your data. Why should it even leave your phone? Why should it be collected by someone when you don’t know what they’re going to do with it?” says Patrick Jackson, a former National Security Agency researcher who is chief technology officer for Disconnect. He hooked my iPhone into special software so we could examine the traffic. “I know the value of data, and I don’t want mine in any hands where it doesn’t need to be,” he told me.

In a world of data brokers, Jackson is the data breaker. He developed an app called Privacy Pro that identifies and blocks many trackers. If you’re a little bit techie, I recommend trying the free iOS version to glimpse the secret life of your iPhone.

Yes, trackers are a problem on phones running Google’s Android, too. Google won’t even let Disconnect’s tracker-protection software into its Play Store. (Google’s rules prohibit apps that might interfere with another app displaying ads.)

Part of Jackson’s objection to trackers is that many feed the personal data economy, used to target us for marketing and political messaging. Facebook’s fiascos have made us all more aware of how our data can be passed along, stolen and misused - but Cambridge Analytica was just the beginning.

Jackson’s biggest concern is transparency: If we don’t know where our data is going, how can we ever hope to keep it private?

The app gap

App trackers are like the cookies on websites that slow load times, waste battery life and cause creepy ads to follow you around the Internet. Except in apps, there’s little notice trackers are lurking, and you can’t choose a different browser to block them.

Why do trackers activate in the middle of the night? Some app makers have them call home at times when the phone is plugged in, or when they think they won’t interfere with other functions. These late-night encounters happen on the iPhone if you have allowed “background app refresh,” which is Apple’s default.

With Yelp, the company says the behavior I uncovered wasn’t a tracker but rather an “unintended issue that’s been acting like a tracker.” Yelp thinks my discovery affects 1 percent of its iOS users, particularly those who’ve made reservations through Apple Maps. At best, it is shoddy software that sent Yelp data it didn’t need. At worst, Yelp was amassing a data trove that could be used to map people’s travels, even when they weren’t using its app.

A more typical example is DoorDash, the food-delivery service. Launch that app, and you’re sending data to nine third-party trackers - though you’d have no way to know it.

App makers often use trackers because they’re shortcuts to research or revenue. They run the gamut from innocuous to insidious. Some are like consultants that app makers pay to analyze what people tap on and look at. Other trackers pay the app makers, squeezing value out of our data to target ads.

In the case of DoorDash, one tracker called Sift Science gets a fingerprint of your phone (device name, model, ad identifier and memory size) and even accelerometer motion data to help identify fraud. Three more trackers help DoorDash monitor app performance - including one called Segment that routes onward data including your delivery address, name, email and cell carrier.

DoorDash’s other five trackers, including Facebook and Google Ad Services, help it understand the effectiveness of its marketing. Their presence means Facebook and Google know every time you open DoorDash.

The delivery company tells me it doesn’t allow trackers to sell or share our data, which is great. But its privacy policy throws its hands up in the air: “DoorDash is not responsible for the privacy practices of these entities,” it says.

All but one of DoorDash’s nine trackers made Jackson’s naughty list for Disconnect, which also powers the Firefox browser’s private browsing mode. To him, any third party that collects and retains our data is suspect unless it also has pro-consumer privacy policies like limiting data retention time and anonymizing data.

Microsoft, Nike and the Weather Channel told me they were using the trackers I uncovered to improve performance. Mint, owned by Intuit, said it uses an Adobe marketing tracker to help figure out how to advertise to Mint users. The Post said its trackers were used to make sure ads work. Spotify pointed me to its privacy policy.

Privacy policies don’t necessarily provide protection. Citizen, the app for location-based crime reports, published that it wouldn’t share “your name or other personally identifying information.” Yet when I ran my test, I found it repeatedly sent my phone number, email and exact GPS coordinates to the tracker Amplitude.

After I contacted Citizen, it updated its app and removed the Amplitude tracker. (Amplitude, for its part, says data it collects for clients is kept private and not sold.)

“We will do a better job of making sure our privacy policy is clear about the specific types of data we share with providers like these,” Citizen spokesman J. Peter Donald said. “We do not sell user data. We never have and never will.”

The problem is, the more places personal data flies, the harder it becomes to hold companies accountable for bad behavior - including inevitable breaches.

As Jackson kept reminding me: This is your data.

The letdown

What disappoints me is that the data free-for-all I discovered is happening on an iPhone. Isn’t Apple supposed to be better at privacy?

“At Apple we do a great deal to help users keep their data private,” the company says in a statement. “Apple hardware and software are designed to provide advanced security and privacy at every level of the system.”

In some areas, Apple is ahead. Most of Apple’s own apps and services take care to either encrypt data or, even better, to not collect it in the first place. Apple offers a privacy setting called “Limit Ad Tracking” (sadly off by default) which makes it a little bit harder for companies to track you across apps, by way of a unique identifier for every iPhone.

And with iOS 12, Apple took shots at the data economy by improving the “intelligent tracking prevention” in its Safari web browser.

Yet these days, we spend more time in apps. Apple is strict about requiring apps to get permission to access certain parts of the iPhone, including your camera, microphone, location, health information, photos and contacts. (You can check and change those permissions under privacy settings.) But Apple turns more of a blind eye to what apps do with data we provide them or they generate about us - witness the sorts of tracking I found by looking under the covers for a few days.

“For the data and services that apps create on their own, our App Store Guidelines require developers to have clearly posted privacy policies and to ask users for permission to collect data before doing so. When we learn that apps have not followed our Guidelines in these areas, we either make apps change their practice or keep those apps from being on the store,” Apple says.

Yet very few apps I found using third-party trackers disclosed the names of those companies or how they protect my data. And what good is burying this information in privacy policies, anyway? What we need is accountability.

Getting more deeply involved in app data practices is complicated for Apple. Today’s technology frequently is built on third-party services, so Apple couldn’t simply ban all connections to outside servers. And some companies are so big they don’t even need the help of outsiders to track us.

The result shouldn’t be to increase Apple’s power. “I would like to make sure they’re not stifling innovation,” says Andrés Arrieta, the director of consumer privacy engineering at the Electronic Frontier Foundation.  “If Apple becomes the Internet’s privacy police, it could shut down rivals.”

Jackson suggests Apple could also add controls into iOS like the ones built into Privacy Pro to give everyone more visibility.

Or perhaps Apple could require apps to label when they’re using third-party trackers. If I opened the DoorDash app and saw nine tracker notices, it might make me think twice about using it.

SOURCE

Posted by Elvis on 05/30/19 •
Section Privacy And Rights




In memory of the laid off workers of AT&T

Today's Diversion

A State divided into a small number of rich and a large number of poor will always develop a government manipulated by the rich to protect the amenities represented by their property. - Harold Laski
