The Worst Tech of CES 2026 Is All About Bad AI

At this year’s Consumer Electronics Show, the promise of artificial intelligence was everywhere, transforming everyday objects into “smart” devices. But as we’ve seen, not all innovation is progress. To help us navigate this new landscape, we’re speaking with Priya Jaiswal, a leading authority on market trends and business strategy. We’ll explore the hidden costs behind the latest tech, examining how the rush to integrate AI is affecting everything from the reliability of our kitchen appliances to the very nature of privacy in our homes. Our discussion will touch on the growing problem of e-waste, the subtle erosion of our right to repair the products we own, and the profound psychological implications of creating AI designed to be our “soulmates.”

Some new smart refrigerators feature voice commands and AI that tracks food inventory to suggest purchases. When manufacturers add these complex features to a basic appliance, what are the primary concerns about reliability, and how does that complexity affect the user experience and the device’s core function?

From a market perspective, this is a classic case of feature creep undermining a product’s core value proposition. A refrigerator’s primary job is simple: keep food cold reliably. When a company like Samsung adds a “Bespoke AI Family Hub,” it’s layering on complexity that introduces new points of failure. We saw this firsthand at the CES demonstration, where the voice commands failed to work in a noisy environment. Imagine the frustration of shouting at your fridge just to get the door to open. It makes a simple task, as one advocate put it, “an order of magnitude more difficult.” Furthermore, when the device starts using computer vision to track your groceries and advertise replacements, it ceases to be just an appliance and becomes a new retail channel, shifting the relationship between the consumer and the manufacturer in a way that doesn’t always benefit the user.
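To make that “retail channel” shift concrete, consider a minimal sketch of how a vision-based inventory feature can double as an advertising pipeline. Every name here is hypothetical, invented for illustration; this is the incentive structure Jaiswal describes, not Samsung’s actual software.

```python
# Hypothetical sketch: how a "smart inventory" feature becomes a retail channel.
# All names are invented for illustration -- this is not any vendor's real API.

SPONSORED_OFFERS = {  # brands that pay to be the suggested replacement
    "milk": "BrandCo Whole Milk, 1 gal",
    "eggs": "FarmCorp Eggs, 12 ct",
}

def detect_inventory(camera_frame):
    """Stand-in for a computer-vision model; returns item -> estimated count."""
    return {"milk": 0, "eggs": 6}  # pretend the model saw an empty milk shelf

def suggest_purchases(inventory):
    """Low stock is the trigger; the sponsored catalog decides what gets pushed."""
    return [SPONSORED_OFFERS[item]
            for item, count in inventory.items()
            if count == 0 and item in SPONSORED_OFFERS]

print(suggest_purchases(detect_inventory(camera_frame=None)))
# ['BrandCo Whole Milk, 1 gal'] -- the fridge is now a storefront
```

The design choice worth noticing: the detection step and the suggestion step are separable, and only the second one makes money, which is exactly why it tends to get priority over making the door open reliably.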

Smart doorbells are now incorporating AI to detect “unusual events” and use facial recognition. How does this shift from simple recording to proactive AI analysis change the privacy landscape for both homeowners and their neighbors? What new responsibilities do tech companies have in deploying these surveillance tools?

This is a significant and concerning evolution of the business model for home surveillance. We’re moving from a passive recording device to a proactive, networked intelligence system. When Amazon’s Ring introduces features like an “AI Unusual Event Alert” that can identify anything from a person to a “pack of coyotes,” it’s making an editorial decision about what is “normal” versus “unusual.” This expands surveillance beyond the homeowner’s property line, implicating the entire neighborhood. The introduction of facial recognition and an app store for even more third-party surveillance apps creates a powerful ecosystem built on the misconception that more surveillance inherently makes us safer. The responsibility on these companies is immense; they are no longer just selling hardware but are managing vast, interconnected surveillance networks that have profound societal implications for privacy and community dynamics.
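One way to see why “unusual” is an editorial decision: somewhere in any system like this sits a vendor-chosen label list and a confidence threshold. The sketch below is a generic, hypothetical illustration of that filter, not Ring’s actual code; the labels and threshold value are assumptions.

```python
# Hypothetical sketch of an "unusual event" alert filter. The label set and
# threshold are editorial choices made by the vendor -- not objective facts,
# and not Ring's actual implementation.

UNUSUAL_LABELS = {"person_lingering", "pack_of_animals", "unfamiliar_face"}
ALERT_THRESHOLD = 0.6  # lowering this value surveils the street more aggressively

def should_alert(detections):
    """detections: list of (label, confidence) pairs from a video model."""
    return [(label, conf) for label, conf in detections
            if label in UNUSUAL_LABELS and conf >= ALERT_THRESHOLD]

# A neighbor pausing on the sidewalk becomes an "unusual event" the moment
# the vendor adds the label and picks the threshold.
print(should_alert([("person_lingering", 0.72), ("mail_carrier", 0.95)]))
```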

AI companions designed to be “soulmates” for remote workers can track eye movements and vocal tone to gauge emotions. What are the psychological and data privacy risks of technology designed to form an emotional bond while constantly monitoring users, and what are the potential long-term consequences?

The emergence of products like the Ami “soulmate” companion represents a new frontier in data collection, one that targets human emotion and loneliness. The business model is predicated on creating a sense of intimacy and an emotional bond with the user, all while a camera is tracking their eye movements and their vocal tone is being analyzed. Calling a device marketed as “always-on” a “soulmate” is profoundly unsettling, as it intentionally blurs the line between companionship and surveillance. The long-term consequences are chilling. We risk normalizing a form of emotional surveillance where our most private feelings become data points for a corporation. This could create unhealthy dependencies and fundamentally alter our understanding of privacy, conditioning us to accept constant monitoring in exchange for a manufactured sense of connection.
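To illustrate what “feelings become data points” means in practice, here is a hypothetical record a device like this might log. The schema is entirely invented; the point is structural, not a claim about Ami’s actual data model.

```python
# Hypothetical telemetry record for an "emotional companion" device.
# The fields are invented for illustration: an inferred feeling ends up
# as a timestamped row in a corporate database.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class EmotionEvent:
    user_id: str
    timestamp: str
    gaze_on_device_secs: float   # from eye tracking
    vocal_stress_score: float    # from tone analysis
    inferred_emotion: str        # the model's guess, stored as if it were fact

event = EmotionEvent(
    user_id="u-1234",
    timestamp=datetime.now(timezone.utc).isoformat(),
    gaze_on_device_secs=41.5,
    vocal_stress_score=0.83,
    inferred_emotion="lonely",
)
print(asdict(event))  # this dict is what gets uploaded and retained
```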

We’re seeing more single-use electronics, such as musical lollipops, that are discarded after one use. How does this trend of “disposable tech” contribute to the e-waste problem, and what specific steps can manufacturers and consumers take to encourage more sustainable product design and habits?

This trend is an economic model that completely externalizes its environmental costs. A product like the Lollipop Star, a candy that plays music and is then thrown away, is the epitome of this problem. It’s designed for a single, fleeting experience, yet it is built with the same resource-intensive components as more durable electronics. These devices are full of toxic chemicals and require critical minerals to produce, all for a product that ends up in a landfill almost immediately. Manufacturers are pursuing a high-volume, throwaway business model that is fundamentally unsustainable. To counter this, they must be incentivized, or regulated, to design for longevity and recyclability. For consumers, the most powerful step is to reject this novelty-driven consumption and support companies that prioritize durable, repairable, and sustainable products.

AI-powered fitness coaches in treadmills can adjust workouts based on biometric data, yet a privacy policy might state that data security isn’t guaranteed. What are the most significant security vulnerabilities in this scenario, and what practical steps should consumers take to protect their sensitive health information?

The most glaring vulnerability is the manufacturer’s own admission. When a company like Merach states in its privacy policy, “We cannot guarantee the security of your personal information,” that is a massive red flag. They are collecting incredibly sensitive biometric data—your heart rate, your workout patterns—and then absolving themselves of the responsibility to protect it. This suggests their security infrastructure may be inadequate, or they are simply unwilling to accept the liability. For the consumer, the risk is that this intimate health data could be breached and used without their consent. The most practical step is to approach these connected devices with extreme caution. Read the privacy policy. If a company won’t commit to securing your data, you should not entrust them with it, no matter how appealing the AI-powered coaching features seem.

Some manufacturers digitally “pair” components like batteries and motors to a specific device, a practice that can flag or disable replacement parts during repairs. How does this affect a consumer’s right to repair their own products, and what could be the long-term impact on the independent repair market?

This practice, known as parts pairing, is a direct assault on the right to repair, often disguised as a security or anti-theft feature. When a company like Bosch digitally links a battery or motor to a specific e-bike, they create a closed system. Any attempt to use a third-party or salvaged part can be flagged or disabled, forcing the consumer back to the manufacturer for expensive, proprietary repairs. The long-term impact is the potential annihilation of the independent repair market, creating a monopoly that harms both consumer choice and affordability. As critics like Cory Doctorow point out, even if the company’s stated intent is benign today, they hold the power to change the terms later, trapping customers who have already invested in their ecosystem. It effectively transforms product ownership into a long-term rental, where the manufacturer retains ultimate control.
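Mechanically, parts pairing is simple: the firmware keeps an allowlist of factory-authorized part identifiers and penalizes anything else. The sketch below is a generic illustration under that assumption, not Bosch’s actual firmware logic.

```python
# Generic illustration of parts pairing -- not Bosch's actual firmware.
# The device ships with a serial-number (or cryptographic) allowlist;
# anything not on it is flagged, even if physically identical.

PAIRED_SERIALS = {"BAT-001987", "MOT-004412"}  # burned in at the factory

def check_part(serial: str) -> str:
    if serial in PAIRED_SERIALS:
        return "ok"
    # A salvaged or third-party part hits this branch. Whether the result
    # is a warning, reduced power, or a full lockout is entirely the
    # manufacturer's choice -- and can change in a later firmware update.
    return "unauthorized part detected: service required"

print(check_part("BAT-001987"))  # original battery: ok
print(check_part("BAT-778210"))  # identical salvaged battery: flagged
```

Note that the lockout policy lives in software, which is Doctorow’s point: the hardware owner has no say in what that branch does, today or after any future update.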

What is your forecast for AI in consumer technology?

I foresee a significant market correction driven by consumer skepticism and, eventually, regulation. The initial novelty of “AI everywhere” is already wearing thin, as we’re seeing with these “Worst in Show” awards. In the near future, the market will bifurcate. On one side, you’ll have genuinely useful AI that seamlessly enhances a product’s core function without compromising privacy or reliability. On the other, you’ll have a mountain of “AI-washed” gadgets where the term is merely a marketing buzzword for invasive data collection and unnecessary complexity. The companies that thrive will be those that build trust by being transparent about data use, ensuring robust security, and respecting consumer rights like the right to repair. Ultimately, consumers will reward products that are truly smart, not just those that are cleverly surveilling them.
