Buy-and-Return Reviewers in the Age of AI Hardware

The landscape of technology is evolving, much like it did when people began to consider tablet computers after initially focusing on desktops. The shift to touch screen phones brought less dramatic change, yet it still prompted a reassessment of user priorities.

Now, with the advent of AI devices, the focus is shifting away from traditional metrics:

  • Screen refresh rates are becoming irrelevant.
  • Processor speed is no longer a critical concern.
  • The ability of cameras to replicate real-life imagery is diminishing in importance.
  • The significance of playing games on high-resolution screens is fading.

AI hardware is redefining value through improvements to daily life. It simplifies processes, reducing the need for repetitive screen taps and manual steps. It facilitates memory and discovery without the necessity for specific apps or web browsers.

The real measure of value for users is now how a device integrates into and enhances their lives. Reviews will increasingly struggle to apply a standard list of features, requiring instead prolonged use of AI to gauge its true impact. Repeated tests across different devices will yield varied insights into their positive and negative effects.

Manufacturers are now focusing on specific use cases and directions, moving away from the one-size-fits-all approach of phones with homogeneous features. The distinction between phones lies in app presentation and manufacturer choices. AI introduces a new dimension of variation, where an agent’s automated actions can differ based on multiple factors such as time of day, location, and service requests. The responses from AI may vary even on the same device, depending on the interaction with other service providers.

Gaming, too, will transform. If adapted properly to this new ecosystem, gaming experiences will be unique and non-replicable, marking a significant shift from the traditional paradigm. Gone is the reviewer's cycle of buy, explore and record for a couple of days, then return.

Please note that if you purchase after clicking a link, I may get a tiny bit of that sale to help keep this site going. If you enjoy my work, perhaps you would consider donating to my daily cup of coffee. Thank you.

Never Forget a Name: The Future of Wearable Tech

Since the debut of Google Glass, concerns about always-on cameras have been a hot topic. Devices like the Meta Ray-Ban glasses and the Humane Ai Pin address some privacy worries with visible ‘recording’ indicators. Still, the ability to quickly snap a picture without consent creates a potential for unease.

Beyond photography, these devices can identify objects and translate text – but what if they could recognize faces and whisper names in your ear? Imagine a world where forgetting someone’s name is a thing of the past. Networking events become less stressful, and chance encounters feel more meaningful. On the flip side, some may find it unsettling – a world where a sense of anonymity is lost, and everyone is constantly ‘scannable.’ Would remembering names be worth this trade-off?

While intriguing, a camera-based system may be off-putting in certain settings or even violate rules. Could a camera-less solution, like the depth-sensing systems found in smart cars and iPhones, gain broader acceptance? Public facial mapping systems for secure ID have seen some adoption. It’s important to emphasize this would have to be an opt-in system, perhaps even incentivizing early adopters. Companies would also need absolute transparency about data usage and offer the ability to completely remove oneself from the system.

Here’s the tech breakdown:

  • Depth Sensing System
    • Infrared Receiver: Captures the user’s face in infrared.
    • Flood Illuminator: Provides infrared light for low-light situations.
    • Dot Projector: Creates a detailed 3D map of the face.
  • Secure Data
    • Mathematical models representing facial data are securely stored and compared for identification. This might require a connected device for processing power.
  • Machine Learning
    • Algorithms need to adapt to changes in appearance (glasses, makeup, etc.) and work under various lighting conditions and angles.
  • Attention Awareness
    • Like iPhone’s security, the device could confirm the user is looking at it before acting, ensuring they’re not being scanned from afar.
  • Security
    • Data must be encrypted and protected. Regular updates of approved face data would be needed, or perhaps secure data exchange could be developed.
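The "Secure Data" step above, storing mathematical models of faces and comparing them for identification, can be sketched in a few lines. This is a toy illustration under loose assumptions, not a production biometric system: real devices derive embeddings from a trained neural network and keep them in secure hardware, while here the vectors are hand-made and the function names (`enroll`, `identify`) are my own.

```python
import math

# Toy face-matching sketch: real systems derive these vectors from a
# trained neural network; here they are hand-made for illustration.
enrolled = {}  # name -> embedding (list of floats)

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def enroll(name, embedding):
    # Opt-in: only store people who explicitly registered themselves.
    enrolled[name] = embedding

def identify(embedding, threshold=0.9):
    # Return the best-matching enrolled name, or None below the threshold,
    # so strangers simply come back as "unknown".
    best_name, best_score = None, threshold
    for name, stored in enrolled.items():
        score = cosine_similarity(embedding, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

enroll("Alice", [0.9, 0.1, 0.3])
enroll("Bob", [0.2, 0.8, 0.5])
print(identify([0.88, 0.12, 0.31]))  # a vector close to Alice's
```

The threshold is what makes the opt-in promise enforceable in code: anyone not enrolled can never be named, no matter how good the camera is.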

This technology is feasible, but would people accept it? The convenience of instant name recall needs to be weighed against potential privacy concerns. Could it even expand to include additional information, like customer status for sales representatives? And what about accessibility? This technology could be a boon for those with visual impairments or memory difficulties.


AI OS Devices: Revolutionizing Interaction Beyond Screens

You thought it, I thought it, and as soon as we see a video or interview, it is clear everyone else sees the problem too. The newly released AI OS powered devices all use your voice to communicate. In every discussion, everyone does the same thing: they say they are in a meeting and need to check an incoming message, lifting a hand as if holding and reading the screen of a mobile phone.

There is the problem: we are trying to fit the new interaction into a standard we have become accustomed to. Our lives have come to require that large-screen pocket-device interaction to get done what we need. Sometimes someone will say they go down a rabbit hole with their phone, or that they really should spend less time on it, but the need to stay connected means few are able to find a path out.

There appears to be a possible path with the Humane Ai Pin and the Rabbit R1. Both offer an interface that uses your voice to make requests, a button to press when the voice exchange happens, and a minimal screen: the Rabbit R1 has a small color screen and scroll wheel, while the Humane Ai Pin uses a touch-sensitive pad and laser projection.

Neither of these solutions will be acceptable to use in a meeting, and that is their strength. If everyone were fully engaged in a meeting, would the meeting go faster? Would it have better results? Would people think twice about whether the meeting was actually needed? An alert can still be received, but there isn't a group of people sidetracked as they just check their email real quick, then click on a social alert for a quick bit of humor… why not, the meeting is going nowhere. Problem solved.

The AI OS driven devices also have a camera. While both can snap photos for additional information, the Humane Ai Pin also offers a distraction-free quick snapshot of family time without pulling out and holding up a pocket mobile device.

The future of AI OS devices brings the power of knowledge when we need it: they can be asked questions and present information, and they can tie together what we used to do across multiple apps into a single action. Adding a takeaway or to-do is a tap and then a few spoken words. Perhaps not everything should be said out loud in a grocery store (everyone doesn't need to know your lab results), but no one should cut themselves off from everything either. People should explore a time when they don't need to take the phone out of a pocket or bag to get things done: simplifying life and being more engaged, while still able to find out the diameter of the moon when the need arises.


AI Evolution: Shifting from Apps to Integrated Solutions

In the ever-evolving landscape of technology, a significant paradigm shift is underway: the focus is moving from using multiple, distinct applications for different tasks to embracing AI solutions that integrate data from various sources, streamlining processes and enhancing efficiency. Here I will touch on the path that got us here and delve into the progress and implications of this transition.

In the emerging framework, the operating system is not just a platform for running applications; it has evolved into a dynamic hub that amalgamates content and services from various apps. This integration eliminates the need for users to jump across different applications, providing a seamless experience. The OS now does the heavy lifting, allowing for more streamlined and efficient workflows.

Historically, there have been attempts to create such integrated systems. The Apple Newton, for instance, allowed apps to access information from other applications. However, it faced challenges in adequately controlling Personally Identifiable Information and other sensitive data.

Similarly, Microsoft once proposed that there was no need for traditional file folders, advocating that everything should be findable via the Windows search function. The idea felt ahead of its time and foreshadowed the current trend toward more fluid data management.

The Google ecosystem attempts to offer a solution where users can find anything they created, like documents and spreadsheets. However, it struggles with more complex queries, such as searching for notes from a specific meeting about a particular subject. This limitation highlights the challenge traditional search algorithms face with nuanced, context-rich queries.

The advent of Generative Pre-trained Transformers (GPT) from OpenAI has marked a new era. It encourages apps to tie into ChatGPT's extensive reach, allowing developers to expand their functionality by calling on multiple data sources and features. This integration signals a move toward a more interconnected and intelligent application ecosystem.

Scheduling a meeting within multiple people's available time, set to the right length, with notes and follow-up emails attached, is a simple example of something that previously took multiple apps and time becoming a single interface.
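The scheduling example above reduces to a small interval problem: intersect everyone's free windows and pick the earliest one long enough. A minimal sketch of just that core step, with hours represented as plain numbers and the calendar data invented for illustration (a real agent would pull these windows from calendar APIs):

```python
def first_common_slot(free_windows, length):
    """Find the earliest slot of `length` hours free for everyone.

    free_windows: one list of (start, end) free intervals per person.
    Returns (start, end) for the booked slot, or None if nothing fits.
    """
    # Start from the first person's free windows and intersect the rest.
    common = free_windows[0]
    for person in free_windows[1:]:
        merged = []
        for s1, e1 in common:
            for s2, e2 in person:
                start, end = max(s1, s2), min(e1, e2)
                if start < end:  # keep only non-empty overlaps
                    merged.append((start, end))
        common = merged
    # Pick the earliest overlap long enough for the meeting.
    for start, end in sorted(common):
        if end - start >= length:
            return (start, start + length)
    return None

# Hours of the day; invented sample calendars.
alice = [(9, 11), (13, 17)]
bob = [(10, 12), (14, 18)]
print(first_common_slot([alice, bob], 1))  # -> (10, 11)
```

Notes and follow-up emails would then hang off the returned slot; the point is that the user states the intent once instead of juggling a calendar app, a notes app, and a mail client.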

A notable example of this trend is the Rabbit R1 device announced at CES 2024. It amalgamates information and presents it the way individual apps would, but it leverages the data and management capabilities of numerous resources via Rabbit's LAM (large action model), which understands and executes human intentions on computers. Being a cloud-based solution, it requires constant internet access for a speedy response, highlighting a dependency on network connectivity.

For areas with limited or no internet access, on-device AI capabilities are crucial. While many regions still suffer from inadequate cellular coverage, having on-device processing ensures that essential functions remain available, albeit with some limitations in accessing updated information.

Despite the advancements in on-device processing, the need for updated information remains a critical aspect, inherently tied to internet access. This reliance underscores the importance of developing technologies that can balance on-device capabilities with the necessity of real-time data updates from the internet.

2023 marked the era of AI excelling in generating and revising text, as well as creating images. 2024 is poised to be the year where AI Agents will take on complete workflows.


The Humane Ai Pin Personal Assistant isn’t a phone

The project captured AI tech followers' attention from its introduction at a TED Talk. The talk was mostly a product demo rather than an outline of challenges people have in real life and a solution to make life better. Perhaps that should have been a sign for someone to step up and suggest an alternate direction for future product discussions.

A lot of attention has been given to how the announcement on the 9th was handled, with less on how much the device would impact people's lives. Without belaboring that point: having the Ai Pin itself recite the order and talking points of the product introduction would have helped sell the device.

A key claim the company is leaning on is that the device/service replaces the mobile phone people carry now. This puts people into a comparison mode of thinking "yeah, but can it do this thing I do with my phone?" Since people are used to looking at a screen, they can't envision another way of getting what they need beyond tapping on a touch-screen device.

The phrase "Personal Digital Assistant" never meant adding features to a phone; it meant a small pocket device that carried the information a user needed. It was initially a keyboard device, then a pen-entry interface, and now a finger/gesture device with onboard information that can also reach out to the internet for additional services. The PDA was not a better version of the phone people talked on, played Snake on, and kept a list of contacts in. The PDA made it possible to look up a wealth of information, plan the day with a calendar, and jot down a note. Later, apps and internet-connected features were added; soon after, people found their lives were better with a digital assistant, and they wanted more.

The Humane Ai Pin is a new way of thinking about getting information, a device to improve a person's life, but it isn't a cell phone. Almost no one's life allows them to talk only on speakerphone any time they need to make a call and communicate with others. Using only this device, a user would be cut off from ever getting a doctor's call and update; the need for personal connections and updates is often the very reason a phone is carried.

A few thoughts on how the Humane Ai Pin could have been shown making a proactive, positive impact:

  • Saying back a phone number or address someone just gave you, so it is entered into the device's system via speech alone, with no business card to transcribe later or screen keyboard to tap on.
  • Asking what song is playing, then asking the pin later to play that song heard in the store around noon yesterday.
  • Any time a message comes in, offering to read it out loud or show it as laser text on the user's hand. This seems like a day-one feature, but perhaps it's a fast-follow update.
  • Having the pin play a child's song for the child in a parent's arms to fall asleep or sing along to. That raises an interesting thought: I don't remember environmental volume changes being covered. The speaker should know the user's time and location so it doesn't blast a reminder at the wrong moment (whisper mode, please).
  • It wasn't covered: does the device know where it is, so it can give turn-by-turn directions to a meeting?
  • Creating a quick text and sending a reminder are the usual use cases shown by other solution providers, making for a relatable demo.
  • Will it work as a voice control for home automation? I thought I had seen a similar mention but can't find it now. Voice-controlling lights is a nice demo, especially if the device is location aware so the request is simply "turn on the lights."
  • Demoing more creative use of reminders and timers, like when cooking in a kitchen.
  • Asking the device for information about a person or location while in the car.
  • Reading the summary of an article or meeting notes shared with the user.
  • For fun, asking it to divide up a dinner tab among a group, where the bill total is mentioned and people's names are said too. In a small group, no one would skip putting in their part if a device called out how much they owe by name.
  • I'm not providing a full list here, but a discussion of all the information that could be entered and retrieved without a computer and keyboard would make the usability more relatable too.
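The dinner-tab idea in the list above is ultimately a small division problem the assistant would solve behind the scenes before calling out each name. A rough sketch of the arithmetic, where the 18% tip rate and cent rounding are my own illustrative assumptions, not anything Humane has described:

```python
def split_tab(total, names, tip_rate=0.18):
    """Split a bill (plus tip) evenly among the named diners.

    The 18% tip rate and rounding to cents are illustrative assumptions.
    Returns a name -> amount-owed mapping the assistant could read aloud.
    """
    grand_total = total * (1 + tip_rate)
    share = round(grand_total / len(names), 2)
    return {name: share for name in names}

# Spoken input would supply the total and the names; hard-coded here.
print(split_tab(100.00, ["Ana", "Ben", "Cara", "Dev"]))
```

An even split keeps the sketch short; handling "Ana had the lobster" is where a voice assistant would actually earn its keep.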

I look forward to seeing how a bold rethink of information entry and retrieval will be creatively used, but rushing to claim a person will go phoneless because of the Humane Ai Pin will just have people finding all the ways it can't do things as a reason not to buy.