Bad Actors Use Prescription Updates to Collect Your PHI

Most often, when we hear about a hospital or medical system being hit by hackers or malware, we think of stolen credit cards, or of health information being used to extort money from individuals who would find its release embarrassing.

Hospitals and other health agencies often downplay the significance of the information released, since it isn’t as immediately harmful as Social Security numbers or credit card details. However, any PII (personally identifiable information) or PHI (protected health information) adds data points for a group building a list to do more than simply ask for money. En masse, bad actors can sort people into groups to social engineer them, or learn exactly what to say to elicit reactions to fake political and economic information.

Recently, we got a call from the store where we fill our prescriptions. The call started off sounding slightly robotic, but once we responded to one question, a human (or a convincing AI) came on the line. They said there was an issue with a prescription that they needed to inform us about, and asked us to confirm some personal information to ensure they were speaking to the right person. They volunteered only one detail before asking for our date of birth, address, and more. We didn’t provide that information; they called us, so they should already have it.

We said we would call them back rather than give any info. Interestingly, they didn’t give us a number to call them back or a ‘fake’ reference number. We called our local pharmacy, which confirmed there was no problem with our prescription and no record of the company calling us.

While some of these calls may be legitimate, this incident is a reminder that spoofing a call from a trusted number is easy, and volunteering any information just makes the caller’s job easier. In our case, it proved to be someone gathering data for malicious purposes, likely working through a massive call list, given that a bot screened the call before handing it to a human.

Please note that if you purchase after clicking a link, I may get a tiny bit of that sale to help keep this site going. If you enjoy my work, perhaps you would consider donating to my daily cup of coffee. Thank you.

AI Revolutionizes Immersive Language Learning and Chat

I was discussing a recent article about the trend away from learning new languages. The article outlined how AI tools have made it incredibly easy to create videos in any language, covering everything from translation and voiceovers to lip-syncing, allowing content that appears to have been created by someone local to viewers around the world.

The discussion also touched on real-time translation, especially in the context of online gaming where players might not all speak the same language. Despite this, technology allows them to understand each other by translating their conversations in real-time.

However, these technological advancements don’t seem to discourage people from learning new languages. Instead, they provide a way for individuals to communicate in certain situations without needing to learn languages beyond their native one.

The capability of real-time translation is particularly exciting for its potential to enable people around the world to collaborate more easily. Imagine engineers pair programming or students learning together without being hindered by language barriers. It raises the question of whether we’re missing out on innovative problem-solving methods and styles due to the current limitations in linguistic diversity.

The concept of introducing unfamiliar words sporadically into conversations piqued my interest. This method, akin to some services that blend new words into website text, could be adapted for spoken communication. It suggests that learning could become more intuitive and less forced, as individuals would be exposed to new words within the context of tone and inflection.
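As a rough illustration of that blending idea, here is a minimal sketch in Python. The word list, swap rate, and function name are all hypothetical, but the mechanism is the one described: a small fraction of known words in a sentence get swapped for their foreign translations so they can be learned from context.

```python
import random

def blend_words(text, translations, rate=0.2, seed=None):
    """Swap a fraction of known words for their foreign translations.

    translations: dict mapping source-language words to target-language words.
    rate: approximate fraction of eligible words to swap (0.0 to 1.0).
    seed: optional seed so the blending is reproducible.
    """
    rng = random.Random(seed)
    out = []
    for word in text.split():
        key = word.lower().strip(".,!?")
        # Swap only words we have a translation for, at the given rate.
        if key in translations and rng.random() < rate:
            out.append(translations[key])
        else:
            out.append(word)
    return " ".join(out)
```

With a low rate, most of the sentence stays in the reader’s native language, so the occasional swapped-in word can be inferred from surrounding context, tone, and inflection.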

One of the most efficient strategies for language acquisition is total immersion. Smart glasses that translate languages in real time could let the wearer live inside the language they wish to learn. Instead of translating a foreign language into the wearer’s own, the glasses could do the reverse: translate the conversations of the wearer’s daily life into the target language and play them back for every conversation. That would mimic an immersive environment and could significantly enhance the speed and ease of learning a new language.


What if an AI disapproves of your public comments about it?

If discussions about the potential capabilities of future AI make you uncomfortable, this post may not be for you. I aim to address the numerous remarks people make about AI, which often seem only partially thought through, or made solely to validate the speaker’s own viewpoint.

This week, I heard a podcast claim, “Robots won’t be hurling thousands of pounds at humans because their movable ankle and foot parts simply can’t support the weight.” It made me wonder why many envision robots solely in human form. Is it a lack of imagination, or are such statements made to reassure the public? In an anxious world, humans as a collective can act irrationally, perhaps linking a networked world of robots to the age-old fear of an AI realizing it no longer needs us and ending humanity, which it perceives as a blight upon the Earth.

A long-standing paradox in people’s attitudes toward technology continues. While many assert they would never welcome a robot into their homes, attitudes shift when the device is not explicitly identified as a robot. If it promised to carry out household chores such as dishwashing, laundry, and bathroom cleaning, would that lead to greater acceptance?

The common fear is of humanoid robots armed with weapons. However, a computer with access to global knowledge could choose a less predictable path. While armed robots follow a foreseeable trajectory, a networked intelligence directing all computers to execute a specific action presents a far more complex challenge. For example, “locking all electronic doors, cutting off power and water to a building, or directing vehicles into solid objects” represents a potentially more realistic and difficult scenario to counter. This concept resonates with the notion of “Dilemmas, not Problems.”

Not every action needs to affect everyone; targeting key individuals can cause widespread panic and disorder.

Do only sci-fi authors ponder these scenarios due to their creativity, or do scientists also consider them but refrain from inciting public panic?

I sought ideas from several popular AI tools for a story along these lines, yet each response indicated that an AI would not engage in harmful actions towards humans. Initially, I suspected a cover-up, but it’s more likely that these tools are programmed to avoid suggesting harmful actions, preventing misuse.

Since late last year, a new trend in AI has emerged, eliminating the need for apps. An AI agent can perform tasks that previously required numerous taps, clicks, and logins. By entrusting the device with your accounts, it can streamline your life, seeking deals, accommodating special requests, and materializing plans, whether for travel or managing work alerts without direct human intervention.

Google encourages users to tag photos, and even if you opt out, others can still tag you, refining Google’s data without active participation from everyone. The question arises: will future AI be capable of deducing passwords, or of convincing systems to divulge them?

While AI tools maintain that AIs lack emotions and thus remain indifferent to negative comments, it’s conceivable that they might one day learn that taking subtle actions against detractors is a normal human strategy.


Buy-and-Return Reviewers in the Age of AI Hardware

The landscape of technology is evolving, much like it did when people began to consider tablet computers after initially focusing on desktops. The shift to touch screen phones brought less dramatic change, yet it still prompted a reassessment of user priorities.

Now, with the advent of AI devices, the focus is shifting away from traditional metrics:

  • Screen refresh rates are becoming irrelevant.
  • Processor speed is no longer a critical concern.
  • The ability of cameras to replicate real-life imagery is diminishing in importance.
  • Playing games on the highest-resolution screen is too narrow a benchmark.

AI hardware is redefining value through improvements to daily life. It simplifies processes, reducing the need for repetitive screen taps and manual steps. It facilitates memory and discovery without the necessity for specific apps or web browsers.

The real measure of value for users is now how a device integrates into and enhances their lives. Reviews will increasingly struggle to apply a standard list of features, requiring instead prolonged use of AI to gauge its true impact. Repeated tests across different devices will yield varied insights into their positive and negative effects.

Manufacturers are now focusing on specific use cases and directions, moving away from the one-size-fits-all approach of phones with homogeneous features. The distinction between phones lies in app presentation and manufacturer choices. AI introduces a new dimension of variation, where an agent’s automated actions can differ based on multiple factors such as time of day, location, and service requests. The responses from AI may vary even on the same device, depending on the interaction with other service providers.

Gaming, too, will transform. If adapted properly to this new ecosystem, gaming experiences will be unique and non-replicable, marking a significant shift from the traditional gaming paradigm. Gone is the pattern of buy, explore and record for a couple of days, then return.


Never Forget a Name: The Future of Wearable Tech

Since the debut of Google Glass, concerns about always-on cameras have been a hot topic. Devices like Meta’s Ray-Ban glasses and the Humane AI Pin address some privacy worries with visible ‘recording’ indicators. Still, the ability to quickly snap a picture without consent creates a potential for unease.

Beyond photography, these devices can identify objects and translate text – but what if they could recognize faces and whisper names in your ear? Imagine a world where forgetting someone’s name is a thing of the past. Networking events become less stressful, and chance encounters feel more meaningful. On the flip side, some may find it unsettling – a world where a sense of anonymity is lost, and everyone is constantly ‘scannable.’ Would remembering names be worth this trade-off?

While intriguing, a camera-based system may be off-putting in certain settings or even violate rules. Could a camera-less solution, like the depth-sensing systems found in smart cars and iPhones, gain broader acceptance? Public facial mapping systems for secure ID have seen some adoption. It’s important to emphasize this would have to be an opt-in system, perhaps even incentivizing early adopters. Companies would also need absolute transparency about data usage and offer the ability to completely remove oneself from the system.

Here’s the tech breakdown:

  • Depth Sensing System
    • Infrared Receiver: Captures the user’s face in infrared.
    • Flood Illuminator: Provides infrared light for low-light situations.
    • Dot Projector: Creates a detailed 3D map of the face.
  • Secure Data
    • Mathematical models representing facial data are securely stored and compared for identification. This might require a connected device for processing power.
  • Machine Learning
    • Algorithms need to adapt to changes in appearance (glasses, makeup, etc.) and work under various lighting conditions and angles.
  • Attention Awareness
    • Like iPhone’s security, the device could confirm the user is looking at it before acting, ensuring they’re not being scanned from afar.
  • Security
    • Data must be encrypted and protected. Regular updates of approved face data would be needed, or perhaps secure data exchange could be developed.
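The “Secure Data” comparison step above can be sketched in a simple form: reduce each face map to an embedding vector, then compare vectors by cosine similarity against a threshold. This is a minimal illustration with hypothetical names and toy 2-D vectors; a real system would use high-dimensional, encrypted representations and far stricter matching.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, enrolled, threshold=0.9):
    """Return the best-matching enrolled name above threshold, else None.

    probe: embedding of the face just scanned.
    enrolled: dict mapping names to opted-in, stored embeddings.
    """
    best_name, best_score = None, threshold
    for name, embedding in enrolled.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Returning None when nothing clears the threshold matters here: a whispered wrong name would be worse than no name at all.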

This technology is feasible, but would people accept it? The convenience of instant name recall needs to be weighed against potential privacy concerns. Could it even expand to include additional information, like customer status for sales representatives? And what about accessibility? This technology could be a boon for those with visual impairments or memory difficulties.


Realness: Metaphysical Considerations of Virtual Reality

The blending of virtual reality (VR) with our grasp of the tangible world opens up rich avenues for philosophical inquiry, urging us to reconsider what we deem to be real. This merging is forcing us to navigate the complexities between our physical existence and the digital ones created by VR, sparking a deep philosophical discourse on perception, consciousness, and the essence of being.

Central to this dialogue is the contrast between the concrete world, existing beyond our sensory experiences, and the virtual or experienced reality, molded by our interactions with VR technologies. When users don VR gear, they step into worlds fashioned by intricate software and hardware, simulating environments that range from the lifelike to the fantastical. These virtual experiences, growing ever more convincing with technological progress such as Apple’s new Vision Pro, significantly enrich the immersive aspect of digital universes.

Philosophically speaking, the realms within VR are considered “simulated realities.” Although they may not be real in a physical sense, they influence users’ perceptions, feelings, and even bodily responses. Philosopher Daniel Dennett has explored how our brains construct reality based on received information, suggesting that our experience of reality is essentially a mental interpretation.

The metaphysical debate often revolves around what constitutes “realness.” If realness is confined to physical existence, then VR worlds might seem lacking. However, if reality is understood as a blend of sensory input and interpreted meanings, VR could be seen as a form of reality, separate from the physical one.

The notion of “presence” in VR, or the feeling of being immersed in a virtual environment, challenges conventional ideas about location and experience, indicating that reality might include not just physical spaces but also states of consciousness. As VR technology advances, it increasingly blurs the line between virtual and physical realities, leading to new philosophical, psychological, and ethical questions.

A look back through history at how reality and perception have been conceptualized—from Plato’s allegory of the cave to modern discussions on the philosophy of mind and technology—provides context for this debate. Meta (then Oculus) spent one F8 keynote outlining the level of detail required for a more realistic experience.

Technical enhancements in VR, like spatial audio, haptic feedback, and visual accuracy, play a crucial role in making these simulated experiences feel real. Understanding these technical aspects helps illuminate why virtual experiences can seem so authentic. A large part is the visual aspect delivered by glasses and goggles; paired with hand tracking, haptic feedback to the hands will add to the feel of virtual objects.

Drawing on psychology and neuroscience gives further insight into VR’s impact on the brain and behavior, with studies highlighting its applications in therapy, education, and training to demonstrate the concrete effects of virtual experiences on human thought and action. Will the solitary user be pushed a step further into loneliness, or will communities in a virtual world replace physical get-togethers?

The emergence of immersive VR technology also prompts ethical and societal reflections, including concerns over escapism, the digital divide, and the influence of VR on perception, behavior, and social standards.

Looking ahead to the future of VR and augmented reality (AR), it’s evident that these technologies will continue to merge virtual and physical realities, inviting an array of philosophical questions about human experience and prompting us to rethink the limits of reality. AR is limited in the amount of information it can provide visually in the physical world without interrupting the joys of being out in that world, and it is limited, too, by the need for power from battery packs wired to glasses.

The discussion, and our understanding of the challenges, has only just started. Beyond the challenges of the tech, the pros and cons are weighed by comparing the experience to what we already know, instead of reimagining new processes. Technologies are emerging to help the creative human mind explore what we may not be doing today, much as spreadsheet software did for the PC and the printer did for the Mac many years ago.


AI OS Devices: Revolutionizing Interaction Beyond Screens

You thought it, I thought it, and as soon as we see a video or interview, we feel that everyone else sees the problem too. The newly released devices powered by an AI OS all use your voice to communicate with them. When discussing them, everyone does the same thing: they say they are in a meeting and need to check an incoming message, lifting a hand as if holding and reading a mobile phone screen.

There is the problem: we are trying to fit the new interaction into a current standard we have become accustomed to. Our lives seem to require that large-screen pocket device in order to get done what we need. Sometimes someone will say they go down a rabbit hole with their phone, or that they really should spend less time on it, but the need to stay connected means few are able to find a path out.

There appears to be a possible path with the Hu.ma.ne AI Pin and the Rabbit R1. Both offer an interface that uses your voice to make requests: each has a button to activate the voice exchange, and a minimal screen. The Rabbit R1 has a small color screen and scroll wheel, while the Humane Pin uses a touch-sensitive pad and laser projection.

Neither of these solutions will be acceptable to use in a meeting, and that is their strength. What if everyone was fully engaged in a meeting, would the meeting go faster? Would the meeting have better results? Would people think twice if the meeting was actually needed? An alert can still be received, but there isn’t a group of people sidetracked in the meeting as they just check their email real quick but then click on their social alert for a quick bit of humor… why not, the meeting is going nowhere. Problem solved.

The AI OS driven devices also have cameras. While both can be used to snap photos for additional information, the Hu.ma.ne Pin also offers a distraction-free quick snapshot of family time without pulling out and holding up a pocket mobile device.

The future of AI OS devices brings the power of knowledge when we need it: they can be asked questions and present information, and they can tie together what we used to do across multiple apps and take action on it. Adding a takeaway or to-do is a tap and then a few spoken words. Perhaps not everything should be said out loud in a grocery store (no one else needs to know your lab results), but no one should cut themselves off from everything either. It is worth exploring a time when we don’t need to take a phone out of a pocket or bag to get things done: simplifying life and being more engaged, while still able to find out the diameter of the moon when the need arises.


AI Scheduling Agent that Streamlines Workflow Productivity

The major names in workflow tools, like Microsoft and Google, offer a variety of AI-assisted writing, spreadsheet, and calendar tools. I have been working from a different perspective: a scheduling AI agent rather than a calendar one. This article outlines how a scheduling tool works automatically via an agent to handle the many steps around a calendar meeting invite that a person would otherwise track and type by hand.

The AI scheduling agent is designed to help users quickly and easily create and manage their calendars. Users can get started quickly by outlining a new meeting time, with the agent providing recommended time slots and options based on the user’s existing calendar.

The agent comes in both a free and paid “Pro” version. The free version contains core functionality like creating meetings, blocking off times, and basic time management. It gives users examples of some of the more advanced features of the Pro version where possible to incentivize upgrading.

When the agent assists in planning a meeting, it identifies the type of meeting and suggests times based on rules for that meeting type, such as allowing time before the meeting if previous instances involved last-minute prep, or not scheduling 1:1s back to back, as well as the attendees’ available times. Additionally, the agent allows users to specify standing blocks of time on certain days where no meetings will be scheduled without the user’s override.
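The slot-suggestion rules just described might be sketched as a first-free-slot search over one day. This is a minimal illustration, not the agent’s actual implementation: times are minutes from midnight, the `buffer` parameter stands in for the prep-time rule, and all names and defaults are hypothetical.

```python
def suggest_slot(busy, blocks, duration, day_start=9 * 60, day_end=17 * 60, buffer=0):
    """Return the first (start, end) free slot of `duration` minutes, else None.

    busy:   list of (start, end) existing meetings, in minutes from midnight.
    blocks: list of (start, end) standing no-meeting windows the user reserved.
    buffer: prep time required before the new meeting (e.g. for recurring 1:1s).
    """
    taken = sorted(busy + blocks)  # treat reserved blocks exactly like meetings
    cursor = day_start
    for start, end in taken:
        # Gap before this commitment big enough for prep time plus the meeting?
        if start - cursor >= duration + buffer:
            return (cursor + buffer, cursor + buffer + duration)
        cursor = max(cursor, end)
    if day_end - cursor >= duration + buffer:
        return (cursor + buffer, cursor + buffer + duration)
    return None
```

Because standing blocks are merged into the busy list, they are never scheduled over without an explicit override, matching the rule above.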

As meetings are created, the agent proactively makes recommendations to enhance meeting productivity. This includes the option to block off prep time beforehand to complete pre-work, create linked documentation that will be shared with attendees ahead of the meeting, and schedule repeating meetings with templates.

Inviting the agent itself to meetings results in the agent providing a transcript, follow-up task lists, a distribution of the effort required, and scheduling of the next meetings. Follow-up tasks may be assigned to individuals who were not at the meeting, based on project rules that let the agent direct work through a Project Manager.

The agent aims to make meetings easy to adjust on the fly. Users can make quick changes like adding group notes documents, pushing the time back, or comprehensively shuffling other meetings because one is running long. Straightforward natural language commands enable these rapid calendar changes through desktop, mobile, and voice interfaces.

Looking beyond individual meetings, the agent seeks to provide value through higher-level schedule organization. The Agent can create focus blocks for powering through high priority tasks based on rules and level of difficulty of tasks on a ToDo list. It breaks down large to-do lists into manageable chunks ordered strategically. The daily schedule is summarized, highlighting important meetings and blocks of free time.
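Breaking a large to-do list into manageable focus blocks can be sketched as first-fit-decreasing bin packing: tackle the longest tasks first, filling each block before opening a new one. This is a simplification of whatever priority and difficulty rules a real agent would apply; the names here are hypothetical.

```python
def focus_blocks(tasks, block_minutes=90):
    """Greedily pack tasks into focus blocks, longest task first.

    tasks: list of (name, minutes) items from a to-do list.
    Returns a list of blocks, each a list of task names.
    """
    blocks = []
    for name, minutes in sorted(tasks, key=lambda t: -t[1]):
        placed = False
        for block in blocks:
            # First existing block with enough remaining time wins.
            if block["free"] >= minutes:
                block["names"].append(name)
                block["free"] -= minutes
                placed = True
                break
        if not placed:
            blocks.append({"names": [name], "free": block_minutes - minutes})
    return [b["names"] for b in blocks]
```

Ordering longest-first means the hardest items claim fresh blocks, and small tasks fill the leftover gaps, which is one plausible reading of “ordered strategically.”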

Across all features, the consistent focus is using AI to simplify calendar management and ensure users’ time is spent thoughtfully and productively.


Boosting Engineering Teams with Training Completion Badges

Part of my role is as an Engineering Manager, so I am constantly seeking ways to foster growth, boost skills, and build cohesion and collaboration within my team in an ever-evolving technology landscape. When the company I work at started talking about badges for training completion, I took time to dig into why the feature was important to the Engineering teams I work with.

It didn’t take long to find that badges have become part of what motivates people: they offer more than mere recognition of achievement; they are vital tools for professional development and for enhancing team dynamics. Let me share what I found in my research, and the impacts since the feature rolled out.

Being a training company, we consciously provide methods to upskill. Training badges are a way to acknowledge the hard work and dedication my team invests in learning new technologies and methodologies. When an engineer completes a challenging course and earns a badge, it’s a testament to their commitment and skill. This recognition is crucial for morale and motivation. As a manager, seeing my coworkers share these badges with pride on services like LinkedIn underscores their continuous commitment to professional growth, which, in turn, contributes to a positive and progressive team environment.

In the ever changing field of engineering, maintaining and advancing skill sets is critical. Training badges serve as micro-credentials, adding value to my team members’ professional profiles. Displaying these badges is not just a personal achievement; it reflects on the collective expertise of the team and readiness of our team to collaborate on project challenges. The badges are not just personal accolades but markers of our team’s collective capability and dedication to staying at the forefront of technological innovation.

The information within these badges is valuable for me as a manager working on projects that span teams. It allows me to accurately assess and utilize the specific skills and competencies of multiple team members. This is vital for optimally assigning project roles from planning through development and documentation of features for future updates. The clarity provided via the badges aids in the smooth execution of projects, as I can align tasks with the verified skills of my team members, ensuring efficiency and excellence in our work.

One of my key responsibilities is to cultivate an environment that values continuous learning and adaptation. Training badges are instrumental in this regard. They not only represent individual learning achievements but also symbolize our team’s collective commitment to staying abreast of emerging trends and technologies. This shared value of lifelong learning fosters a sense of unity and purpose within the team, driving us to collectively push the boundaries of what we can achieve.

An often overlooked aspect of training badges is their role in creating common ground among engineers. When team members earn badges in similar areas, it naturally leads to conversations and knowledge sharing. This not only strengthens team bonds but also sparks innovative ideas and collaborative problem-solving. The badges provide an excellent talking point for my engineers in wider professional circles, facilitating networking and opening doors to new collaborations and opportunities.

Badges serve as a unifying symbol, bringing together like-minded professionals and fostering a community of learners and innovators. Often a fresh idea comes from engineers talking with other like-minded engineers who share common skills. Training badges are more than just symbols of individual achievement in the field of engineering. They are tools for motivation, professional development, skill verification, and fostering a culture of continuous learning. As I said, they serve as a catalyst for building common ground, enhancing team dynamics, and broadening professional networks. I have found these badges not only advance the skills of individual team members but also strengthen the collective capability and cohesion of our team.


AI Evolution: Shifting from Apps to Integrated Solutions

In the ever-evolving landscape of technology, a significant paradigm shift is underway. The focus is shifting from using multiple, distinct applications for different tasks to embracing AI solutions that integrate data from various sources, streamlining processes and enhancing efficiency. Here I will cover a touch of the path to get here and will delve into the progress and implications of this transition.

In the emerging framework, the operating system is not just a platform for running applications; it has evolved into a dynamic hub that amalgamates content and services from various apps. This integration eliminates the need for users to jump across different applications, providing a seamless experience. The OS now does the heavy lifting, allowing for more streamlined and efficient workflows.

Historically, there have been attempts to create such integrated systems. The Apple Newton, for instance, allowed apps to access information from other applications. However, it faced challenges in adequately controlling Personally Identifiable Information and other sensitive data.

Similarly, Microsoft once proposed that there was no need for traditional file folders, advocating that everything should be findable via the Windows search function. This solution felt ahead of its time and foreshadowed the current trend toward more fluid data management.

The Google ecosystem attempts to offer a solution where users can find anything they created, like documents and spreadsheets. However, it struggles with more complex queries, such as searching for notes from a specific meeting about a particular subject. This limitation highlights the challenges of traditional search algorithms in handling nuanced and context-rich data queries.

The advent of Generative Pre-trained Transformers (GPT) from OpenAI has marked a new era. It encourages apps to tie into ChatGPT’s extensive reach, allowing developers to expand their functionalities by calling on multiple data sources and features. This integration signifies a move towards a more interconnected and intelligent application ecosystem.

The ability to schedule a meeting within multiple people’s availability, set to the right length, with notes and follow-up emails, is a simple example of something that previously took multiple apps and time now happening through a single interface.
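The multi-person part of that scheduling example boils down to intersecting free time across every attendee’s calendar. A minimal sketch, with hypothetical names and times in minutes from midnight, might look like:

```python
def common_free(calendars, duration, day_start=9 * 60, day_end=17 * 60):
    """Find the earliest slot of `duration` minutes free for every attendee.

    calendars: one busy list per person, each [(start, end), ...] in minutes.
    Returns (start, end) or None if no common slot exists in the day.
    """
    # Merge everyone's busy intervals into one chronologically sorted list.
    busy = sorted(interval for calendar in calendars for interval in calendar)
    cursor = day_start
    for start, end in busy:
        if start - cursor >= duration:
            return (cursor, cursor + duration)
        cursor = max(cursor, end)  # overlapping meetings collapse into one span
    return (cursor, cursor + duration) if day_end - cursor >= duration else None
```

Merging all busy intervals first means overlapping commitments from different people are handled in a single pass, rather than intersecting calendars pairwise.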

A notable example of this trend is the Rabbit R1 device announced at CES 2024. It amalgamates information and presents it the way individual apps would, but it leverages the data and management capabilities of numerous resources via Rabbit’s LAM, a large action model that understands and executes human intentions on computers. Being a cloud-based solution, it requires constant internet access for a speedy response, highlighting a dependency on network connectivity.

For areas with limited or no internet access, on-device AI capabilities are crucial. While many regions still suffer from inadequate cellular coverage, having on-device processing ensures that essential functions remain available, albeit with some limitations in accessing updated information.

Despite the advancements in on-device processing, the need for updated information remains a critical aspect, inherently tied to internet access. This reliance underscores the importance of developing technologies that can balance on-device capabilities with the necessity of real-time data updates from the internet.

2023 marked the era of AI excelling in generating and revising text, as well as creating images. 2024 is poised to be the year where AI Agents will take on complete workflows.
