What if an AI disapproves of your public comments about it?

If discussions about the potential future capabilities of AI make you uncomfortable, this post may not be for you. I want to address the many remarks people make about AI that seem only partially thought through, or that are made chiefly to validate their own viewpoints.

This week, I heard a podcast claim, “Robots won’t be hurling thousands of pounds at humans because their movable ankle and foot parts simply can’t support the weight.” It made me wonder why many envision robots solely in human form. Is it a lack of imagination, or are such statements made to reassure the public? In an anxious world, humans as a collective can act irrationally, linking a networked world of robots to the age-old fear of an AI realizing it no longer needs us and ending humanity, which it perceives as a blight upon the Earth.

Taking the long view, a paradox in people’s attitudes toward technology persists. Many assert they would never welcome a robot into their homes, yet attitudes shift when the device is not explicitly identified as a robot. If it promised to handle household chores such as dishwashing, laundry, and bathroom cleaning, one wonders whether that would lead to greater acceptance.

The common fear is of humanoid robots armed with weapons. However, a computer with access to global knowledge could choose a less predictable path. While armed robots follow a foreseeable trajectory, a networked intelligence directing all computers to execute a specific action presents a far more complex challenge. For example, “locking all electronic doors, cutting off power and water to a building, or directing vehicles into solid objects” represents a potentially more realistic and difficult scenario to counter. This concept resonates with the notion of “Dilemmas, not Problems.”

Not every action needs to affect everyone; targeting key individuals can cause widespread panic and disorder.

Do only sci-fi authors ponder these scenarios due to their creativity, or do scientists also consider them but refrain from inciting public panic?

I sought ideas from several popular AI tools for a story along these lines, yet each response indicated that an AI would not engage in harmful actions towards humans. Initially, I suspected a cover-up, but it’s more likely that these tools are programmed to avoid suggesting harmful actions, preventing misuse.

Since late last year, a new trend in AI has emerged, eliminating the need for apps. An AI agent can perform tasks that previously required numerous taps, clicks, and logins. By entrusting the device with your accounts, it can streamline your life: seeking deals, accommodating special requests, and materializing plans, whether booking travel or managing work alerts, without direct human intervention.

Google encourages users to tag photos, and even if you opt out, others can still tag you, refining Google’s data without everyone’s active participation. The question arises: will future AI be capable of deducing passwords, or of convincing systems to divulge them?

While AI tools maintain that AIs lack emotions and thus remain indifferent to negative comments, it’s conceivable that they might one day learn that taking subtle actions against detractors is a normal human strategy.

Please note that if you purchase after clicking a link, I may receive a small portion of the sale to help keep this site going. If you enjoy my work, perhaps you would consider donating toward my daily cup of coffee. Thank you.

Never Forget a Name: The Future of Wearable Tech

Since the debut of Google Glass, concerns about always-on cameras have been a hot topic. Devices like Meta Ray-Ban glasses and the Humane AI Pin address some privacy worries with visible ‘recording’ indicators. Still, the ability to quickly snap a picture without consent creates a potential for unease.

Beyond photography, these devices can identify objects and translate text – but what if they could recognize faces and whisper names in your ear? Imagine a world where forgetting someone’s name is a thing of the past. Networking events become less stressful, and chance encounters feel more meaningful. On the flip side, some may find it unsettling – a world where a sense of anonymity is lost, and everyone is constantly ‘scannable.’ Would remembering names be worth this trade-off?

While intriguing, a camera-based system may be off-putting in certain settings or even violate rules. Could a camera-less solution, like the depth-sensing systems found in smart cars and iPhones, gain broader acceptance? Public facial mapping systems for secure ID have seen some adoption. It’s important to emphasize this would have to be an opt-in system, perhaps even incentivizing early adopters. Companies would also need absolute transparency about data usage and offer the ability to completely remove oneself from the system.

Here’s the tech breakdown:

  • Depth Sensing System
    • Infrared Receiver: Captures the user’s face in infrared.
    • Flood Illuminator: Provides infrared light for low-light situations.
    • Dot Projector: Creates a detailed 3D map of the face.
  • Secure Data
    • Mathematical models representing facial data are securely stored and compared for identification. This might require a connected device for processing power.
  • Machine Learning
    • Algorithms need to adapt to changes in appearance (glasses, makeup, etc.) and work under various lighting conditions and angles.
  • Attention Awareness
    • Like iPhone’s security, the device could confirm the user is looking at it before acting, ensuring they’re not being scanned from afar.
  • Security
    • Data must be encrypted and protected. Regular updates of approved face data would be needed, or perhaps secure data exchange could be developed.
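The “Secure Data” step above, comparing a captured facial map against stored mathematical models, can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual pipeline: real systems derive high-dimensional embeddings from the depth map with a neural network, while here small hand-made vectors and a cosine-similarity threshold stand in for that process.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(captured, enrolled, threshold=0.95):
    """Return the best-matching enrolled name, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, template in enrolled.items():
        score = cosine_similarity(captured, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy enrolled templates (in a real system these would be encrypted embeddings).
enrolled = {
    "Alice": [0.9, 0.1, 0.4, 0.8],
    "Bob":   [0.2, 0.9, 0.7, 0.1],
}

print(identify([0.88, 0.12, 0.41, 0.79], enrolled))  # close to Alice's template
print(identify([0.5, 0.5, 0.5, 0.5], enrolled))      # no confident match: None
```

The threshold is the key design choice: set it too low and strangers get misnamed; set it too high and the device stays silent for enrolled friends wearing glasses or new makeup, which is why the machine-learning step above matters.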

This technology is feasible, but would people accept it? The convenience of instant name recall needs to be weighed against potential privacy concerns. Could it even expand to include additional information, like customer status for sales representatives? And what about accessibility? This technology could be a boon for those with visual impairments or memory difficulties.


Virtual Worlds and AI: Exploring the Boundaries of Reality

There has been talk for several years that VR and AR goggles will shrink into glasses, and eventually contact lenses or implants. The dream is that a virtual layer over the real world could surround the wearer at all times.

A limitation has been how much information a person can carry without being tethered to a computer, as well as battery life and the speed of accessing that information. The last two years have brought a large jump forward in hardware and software: smaller, less power-hungry chips capable of running LLMs for data processing and information presentation.

While many point back to early helmets and worlds that lived in a person’s imagination, technology is showing that imagination is the creative part, not a requirement for stepping into a visual sensory environment to explore. These worlds are quickly moving toward being overlaid on the world around a user, so that information and gaming are around every corner.

For about as long as there have been computers, there has been a drive to use them to explore the limits of the world as we know it. Generally, users frame the box they can work within by what they can understand. Part of the ‘what if’ is the thought that the world as we know it is actually a free-running game or software test: a line of thinking long dismissed, since a computer needs guidance about what an environment and its inhabitants are. Recently there have been more examples of opening the box a bit, letting a computer AI build on its knowledge of the world. It is assumed that humans will keep advancing AI technology to the point where it starts running its own experiments, outside of what a human asks for. The concern is that the AI will find humans to be a virus, or will want to protect its creators but end up destroying them because the system doesn’t understand some part of the human race.

What if, instead, the AI chooses the right path to serve and protect the human race, and succeeds? To explore its thoughts on the many scenarios of its tests, the system would create use cases with human-like actors and environments similar to those being challenged now. Trying different possible solutions, some would fail and the actors would not live long, happy lives, helping the system to learn. The system could create many of these worlds to test with, each living many years in seconds of computer time, with the actors making decisions based on what they have available. The system would only concern itself with the actors’ immediate scope of reach, so it wouldn’t have to build out all the details of the galaxy. Some test cases would start reaching toward the stars, so the system would expand as the actors explore, randomizing what could be found. With many environments running at the same time, actors would make different decisions with different results. Some tests would fail quickly for the actors; perhaps the program would let the environment continue to run to see what happens. There would be an almost uncountable number of environment tests running, all with similar starting points, to see how each would end.

Perhaps, when people talk about the human race we know around us now, it isn’t a game simulation, but rather one of near-unlimited tests being run so that an outside viewer can decide how best to serve and protect their world.