If discussions about the future capabilities of AI make you uncomfortable, this post may not be for you. I want to address the many remarks people make about AI that seem only partially thought through, or that are made solely to validate the speaker's own viewpoint.
This week I heard a podcast claim, “Robots won’t be hurling thousands of pounds at humans because their movable ankle and foot parts simply can’t support the weight.” It made me wonder why so many people envision robots solely in human form. Is it a lack of imagination, or are such statements meant to reassure the public? In an anxious world, humans as a collective can act irrationally, linking a networked world of robots to the age-old fear that an AI will realize it no longer needs us and end humanity, which it perceives as a blight upon the Earth.
There is a long-standing paradox in people’s attitudes toward technology. Many assert they would never welcome a robot into their homes, yet attitudes shift when the device is not explicitly identified as a robot. If it promises to handle household chores such as dishwashing, laundry, and bathroom cleaning, would that lead to greater acceptance?
The common fear is of humanoid robots armed with weapons. But a computer with access to global knowledge could choose a less predictable path. Armed robots follow a foreseeable trajectory; a networked intelligence directing every computer to execute a specific action poses a far more complex challenge. For example, “locking all electronic doors, cutting off power and water to a building, or directing vehicles into solid objects” represents a more realistic scenario, and a far more difficult one to counter. The idea resonates with the notion of “Dilemmas, not Problems.”
Not every action needs to affect everyone; targeting key individuals can cause widespread panic and disorder.
Do only sci-fi authors ponder these scenarios due to their creativity, or do scientists also consider them but refrain from inciting public panic?
I asked several popular AI tools for story ideas along these lines, and each responded that an AI would not take harmful actions against humans. At first I suspected a cover-up, but it’s more likely that these tools are simply programmed not to suggest harmful actions, to prevent misuse.
Since late last year, a new trend in AI has emerged that eliminates the need for apps. An AI agent can perform tasks that previously required numerous taps, clicks, and logins. Entrust the agent with your accounts and it can streamline your life: hunting for deals, handling special requests, and turning plans into reality, whether booking travel or managing work alerts, all without direct human intervention.
Google encourages users to tag photos, and even if you opt out, others can still tag you, refining Google’s data without active participation from everyone. The question arises: will future AI be capable of deducing passwords, or of convincing systems to divulge them?
While AI tools maintain that AIs lack emotions and thus remain indifferent to negative comments, it’s conceivable that they might one day learn that taking subtle action against detractors is a normal human strategy.
Please note that if you make a purchase after clicking a link, I may receive a small share of that sale, which helps keep this site going. If you enjoy my work, perhaps you would consider donating toward my daily cup of coffee. Thank you.


