Bad Actors Use Prescription Updates to Collect Your PHI

When we hear about a hospital or medical system being hit by a hacker or malware, we usually think of stolen credit cards, or of health information being used to extort money from individuals who would find its release embarrassing.

Hospitals and other health agencies often downplay the significance of the information released, since it isn’t as immediately harmful as Social Security numbers or credit card details. However, any PII or PHI adds data points for a group building a list to do more than simply ask for money. En masse, bad actors can sort people into groups to social engineer them, or to know exactly what to say to provoke reactions to fake political and economic information.

Recently, we got a call from the store where we fill our prescriptions. The call started off sounding slightly robotic, but once we answered one question, a human (or a convincing AI) came on the line. They said there was an issue with a prescription they needed to inform us about, and asked us to confirm some personal information to make sure they were speaking to the right person. They stated only one item before asking for our date of birth, address, and other details. We didn’t provide any of it; they called us, so they should already have that information.

We said we would call them back rather than give any information. Interestingly, they didn’t offer a callback number or even a fake reference number. We called our local pharmacy, which confirmed there was no problem with our prescription and no record of the company calling us.

While some of these calls may be legitimate, this incident is a reminder that spoofing the number a call appears to come from is easy for these individuals, and that giving out any information just makes their job easier. In our case, it turned out to be someone attempting to gather data for malicious purposes, likely working through a massive list, since a bot placed the call before handing it off to a human.


What if an AI disapproves of your public comments about it?

If discussions about the potential future capabilities of AI make you uncomfortable, this post may not be for you. I want to address the many remarks people make about AI, which often seem only partially thought through, or made from a perspective aimed solely at validating the speaker’s own viewpoint.

This week, I heard a podcast claim, “Robots won’t be hurling thousands of pounds at humans because their movable ankle and foot parts simply can’t support the weight.” It made me wonder why so many people envision robots solely in human form. Is it a lack of imagination, or are such statements made to reassure the public? In an anxious world, humans as a collective can act irrationally, linking a networked world of robots to the age-old fear of an AI realizing it no longer needs us and ending humanity, which it perceives as a blight upon the Earth.

The longer-term view highlights a continuing paradox in people’s attitudes toward technology. While many assert they would never welcome a robot into their homes, attitudes shift when the device is not explicitly identified as a robot. If it promises to handle household chores such as dishwashing, laundry, and bathroom cleaning, one wonders whether acceptance would follow.

The common fear is of humanoid robots armed with weapons. However, a computer with access to global knowledge could choose a less predictable path. While armed robots follow a foreseeable trajectory, a networked intelligence directing all computers to execute a specific action presents a far more complex challenge. For example, “locking all electronic doors, cutting off power and water to a building, or directing vehicles into solid objects” represents a potentially more realistic and difficult scenario to counter. This concept resonates with the notion of “Dilemmas, not Problems.”

Not every action needs to affect everyone; targeting key individuals can cause widespread panic and disorder.

Do only sci-fi authors ponder these scenarios due to their creativity, or do scientists also consider them but refrain from inciting public panic?

I sought ideas from several popular AI tools for a story along these lines, yet each response indicated that an AI would not engage in harmful actions towards humans. Initially, I suspected a cover-up, but it’s more likely that these tools are programmed to avoid suggesting harmful actions, preventing misuse.

Since late last year, a new trend in AI has emerged that aims to eliminate the need for apps. An AI agent can perform tasks that previously required numerous taps, clicks, and logins. If you entrust it with your accounts, it can streamline your life: hunting for deals, handling special requests, and turning plans into reality, whether for travel or for managing work alerts, all without direct human intervention.
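To make the “agent instead of app” idea concrete, here is a minimal, hypothetical Python sketch: a registry of actions the agent has been granted, plus a loop that routes a request to the right one. The tool names and results are invented for illustration only; they are not any vendor’s actual API.

# Hypothetical sketch of an agent dispatching requests to tools it has been
# entrusted with. Tool names, logic, and results are placeholders, not a real API.
from typing import Callable, Dict

def search_flight_deals(destination: str) -> str:
    # Stand-in for a travel-site integration the agent would log into for you.
    return f"Cheapest fare to {destination}: $312 (placeholder result)"

def silence_work_alerts(hours: int) -> str:
    # Stand-in for a calendar or notification integration.
    return f"Work alerts muted for {hours} hours (placeholder result)"

# The "app replacement": a registry of actions the agent can take on your behalf.
TOOLS: Dict[str, Callable[..., str]] = {
    "find_flight": search_flight_deals,
    "mute_alerts": silence_work_alerts,
}

def run_agent(task: str, **kwargs) -> str:
    """Route a request to the matching tool; no taps, clicks, or logins required."""
    tool = TOOLS.get(task)
    if tool is None:
        return f"No tool registered for task: {task}"
    return tool(**kwargs)

if __name__ == "__main__":
    print(run_agent("find_flight", destination="Lisbon"))
    print(run_agent("mute_alerts", hours=4))

The point of the sketch is that once the registry holds your real accounts, every entry is something the agent can do without asking you first, which is exactly what makes the convenience and the risk two sides of the same coin.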

Google encourages users to tag photos; even if you opt out, others can still tag you, refining Google’s data without everyone’s active participation. The question arises: will future AI be capable of deducing passwords, or of convincing systems to divulge them?

While AI tools maintain that AIs lack emotions and thus remain indifferent to negative comments, it’s conceivable that they might one day learn that taking subtle actions against detractors is a normal human strategy.
