
Meta Ray-Ban Glasses Have the Brain of a Deeply Uninterested Dad


I have experience with a wide gamut of AI chatbots, and they all have one thing in common: they are confident liars. The Ray-Ban Meta smart glasses feature built-in AI image recognition, and I thought I’d ask them some questions about my most common, decidedly low-stakes hobbies. Instead, the glasses resisted engaging with any of the books, figures, texts, or toys I showed them. The only time I’ve experienced a similar level of disconnect was in casual, awkward conversations with my father.

At last week’s Meta Connect conference, CEO Mark Zuckerberg launched a wave of updates for the Ray-Ban smart glasses. On Wednesday, Zuck announced that the glasses are getting reminders (those with the latest update should be able to ask them, “Where did I park my car?”), QR code scanning, and other features like the ability to easily reply to friends on WhatsApp or Messenger. More updates are coming down the road that should add real-time translation and live video scanning. That last feature should allow the AI to comment on what it sees live.

But I wouldn’t trust the AI to accurately describe my room decor, let alone my grocery shopping. Meta provided me with a new pair of Headliner frames with transition lenses, and I can confidently say they look much better than my current pair of old, yellowed shades. The pictures they take are fairly high quality compared to my iPhone’s, and I don’t have too many complaints about the built-in sound either. They’re no match for high-quality headphones or earbuds, but they easily beat most portable speakers for streaming my playlists on Apple Music. I would consider them a solid choice for personal sound when sitting on the beach.

Messaging and music integration are great, but I wanted to see how this wearable AI handles the kind of tasks where other devices have failed spectacularly. I took the Meta Ray-Bans to my place and asked them questions about my stacks of RPG rulebooks, my wall prints, my comic book character statues, and my collection of Warhammer 40K miniatures. It was like talking to a brick wall, or to my father: someone with no interest in fantasy or science fiction who only pretends to engage. Unlike my dad, who at least sometimes tries, Meta’s glasses fib so readily that they can’t hide how little they care.

Doesn’t Meta’s AI Have Nerdy Information in its Training Data?

I pointed the glasses at my metal print of a scene from the 2019 RPG Disco Elysium. Their best guess was “Borderlands.” For some reason, the AI thought that the loyal detective Kim Kitsuragi was Claptrap. Harry Du Bois, AKA “Tequila Sunset,” was “one of the vault hunters.” I asked it to identify what my game setup consisted of. It looked at the PlayStation 5 on my shelf and told me, with certainty, that it was a PlayStation 4.

I experimented with memorabilia both more and less esoteric. The glasses checked out my statue of The Will from Brian K. Vaughan and Fiona Staples’ Saga comics, and Meta told me that the figure was Dr. Strange. My Marv statue from the Sin City comics was, according to Meta, The Hulk. Like my parents, the glasses seem to think that anything they don’t recognize is probably a character from the Marvel movies. The glasses looked at the prints hanging on my wall, two artworks of Samus Aran in and out of her battle gear from the Metroid series, and Meta told me she looked like Iron Man.

Even when the glasses got things right, the AI struggled to be precise or accurate. The AI confidently read the titles of the indie RPG rulebooks Deathmatch Island and Lacuna. However, in the most dad way possible, it suggested that these roleplaying games have something to do with Warhammer miniature wargaming. Yes, Dad, I play Warhammer 40K. No, Dad, these books have nothing to do with it.

But hey, the device knew who Luigi was. Nintendo’s reach obviously extends beyond the confines of my little nerd bubble. Still, you’d think the AI could tell a Pokémon apart from a Korok from The Legend of Zelda.

Meta’s Ray-Bans Fall Short on Privacy, Even If the Reminders Feel Helpful

© Photo: Kyle Barr / Gizmodo

Meta’s glasses are light on details but heavy on guesswork. Yes, it’s funny to watch the glasses constantly fail at nerd trivia, but they aren’t useful for some basic tasks either. They will look at a bottle of pomegranate molasses in my cupboard and tell me it’s soy sauce. Remember when Google’s first attempt at an AI demo lied about the Webb Telescope? The Meta AI model used in the Ray-Bans will lie to your face, right in front of your eyes.

The answers you get are often short and unhelpful. The AI can provide a basic summary of the fiction written by an author like Dan Abnett (at least it knows who he is). You can ask it more about his body of work, but when I asked how many books he has written for Games Workshop’s Black Library, it told me, “More than 25, but the exact number is unknown.” That number is highly quantifiable. You can follow the AI’s link to Wikipedia, and you’ll find the number is closer to 50 if you count them all yourself.

The glasses don’t yet seem to use Meta’s new Llama 3.2 multimodal models. Meta’s AI says it still runs on Llama 3.1 70B, though an LLM isn’t necessarily a reliable source on questions like that. The glasses also cannot access location data, which limits them: the wearable AI couldn’t tell me where the nearest boba tea place was near Union Square. There are two within a three-block radius.

I had no luck accessing QR code scanning or the new reminders feature despite being on the latest update. Reminders may turn out to be the glasses’ most useful trick, but know that if you take a picture of your license plate and ask the glasses to analyze it, Meta can see that, too. The Zuckerberg-led tech giant told TechCrunch this week that any photo you ask the AI to analyze can be used to train its models.

The AI model is purposely limited by other privacy measures, though those protect the people around you more than they protect you. Meta’s AI will not describe any face or person it sees. You can still take pictures of anyone you want with a subtle press of the capture button, but the AI will refuse to identify anyone or comment on their appearance.

Despite Meta’s efforts, the Ray-Bans still have serious privacy implications. A group of university students rigged the Ray-Ban glasses to perform facial recognition. The modified glasses could then pull even more information from the internet, including names, phone numbers, emails, and other sensitive data. The team posted a video on Twitter last week showing how well their setup worked.

This is not what the Ray-Ban Metas were designed for. A Meta spokesperson told 404 Media that the students’ facial recognition software would work with any camera, not just the one on the Ray-Ban glasses. But at the same time, Meta went out of its way to make its smart glasses’ cameras as inconspicuous as possible. Meta is pitching the Ray-Bans to the crowd that wants to drop its photos straight onto Instagram. The AI, as it stands right now, doesn’t offer much to that audience beyond a funny clip for Reels.

These brand-name designer glasses are not really made for the audience that will go to New York Comic Con and ask their glasses which character someone is cosplaying. In their current state, I wouldn’t use the AI functions for anything more than party tricks.




