Apple’s New AI: 3 Ways Siri Could Beat ChatGPT


Apple seemed slow to get on the generative AI train, but its new research on contextual understanding could make Siri better than ChatGPT.

The tech giant was conspicuously silent during ChatGPT’s meteoric rise and the subsequent barrage of generative AI tools and features from companies like Google, Microsoft, and Meta. But Apple researchers have a new model that could give Siri the generative AI upgrade Apple fans have been hoping for.

“Human speech typically contains ambiguous references such as ‘they’ or ‘it’, the meaning of which is obvious (to other humans) given the context,” the researchers wrote. The paper proposes a model called ReALM (Reference Resolution As Language Modeling) that tackles a weakness of large language models (LLMs): they can’t always work out what a reference points to when the relevant context is on-screen, conversational, or in the background (for example, apps or functions running on the device). The goal is a “true hands-free experience in voice assistants.”
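To make the paper’s core idea concrete, here is a minimal sketch of reference resolution framed as a language-modeling task: candidate entities are serialized as text, and the model is asked which one an ambiguous utterance refers to. The entity encoding, prompt wording, and `resolution_prompt` helper below are illustrative assumptions, not the actual ReALM format.

```python
# Sketch of "reference resolution as language modeling": serialize the
# candidate entities as text and ask an LLM which one an ambiguous
# utterance points to. The format here is an assumption for
# illustration, not Apple's actual ReALM pipeline.

entities = [
    {"id": 1, "kind": "on-screen",  "text": "Message from Dana: 'Running late'"},
    {"id": 2, "kind": "background", "text": "Alarm ringing (7:00 AM)"},
]

def resolution_prompt(utterance: str) -> str:
    """Flatten the candidate entities into a numbered list and frame
    resolution as a multiple-choice question for the model."""
    listing = "\n".join(f"{e['id']}. ({e['kind']}) {e['text']}" for e in entities)
    return (
        f"Candidate entities:\n{listing}\n\n"
        f"User said: \"{utterance}\"\n"
        "Answer with the id of the entity being referred to."
    )

print(resolution_prompt("turn it off"))
# A model fine-tuned for this task should answer "2": here "it"
# resolves to the ringing alarm, not the message on screen.
```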

While ChatGPT is quite good at certain kinds of contextual understanding, the researchers report that ReALM outperforms GPT-3.5 and GPT-4 (the models powering the free and paid versions of ChatGPT) in all of their reference-resolution tests. Here’s what that could mean for Siri.

1. On-screen context clues

Apple researchers trained ReALM on “on-screen” data parsed from web pages, including contact information, allowing the model to understand the kind of text that appears in screenshots (for example, addresses and bank account details). While GPT-4 can also understand images, it was not trained on screenshots, which the paper says makes ReALM better at understanding the on-screen information Apple users would ask Siri about.
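As a rough illustration of what parsed on-screen data might look like, the sketch below flattens UI text elements into a plain-text layout an LLM can read. The bounding-box fields and the line-grouping heuristic are assumptions for illustration; the article only says ReALM was trained on parsed screen data rather than raw images.

```python
# Sketch of turning parsed UI elements into plain text for an LLM.
# The coordinate fields and grouping rule are illustrative assumptions.

elements = [
    {"text": "Pay to: Acme Corp",        "top": 10, "left": 5},
    {"text": "Account: 1234 5678",       "top": 30, "left": 5},
    {"text": "123 Main St, Springfield", "top": 30, "left": 60},
]

def screen_to_text(elems):
    """Group elements that share a vertical position into one line,
    then order lines top-to-bottom and words left-to-right."""
    rows = {}
    for e in sorted(elems, key=lambda e: (e["top"], e["left"])):
        rows.setdefault(e["top"], []).append(e["text"])
    return "\n".join("  ".join(parts) for parts in rows.values())

print(screen_to_text(elements))
# Pay to: Acme Corp
# Account: 1234 5678  123 Main St, Springfield
```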

2. Conversational and background understanding

Conversational references mean something that is relevant to the conversation but may not be explicitly mentioned in the prompt. By training ReALM on data such as business listings, the model can understand a prompt like “call below” as referring to the bottom entry in a list of nearby pharmacies displayed on screen, without the user needing to give more specific instructions.

ReALM is also capable of understanding “background entities,” meaning things running in the background of a device “that may not necessarily be a direct part of what the user sees on their screen or their interaction with the virtual agent,” such as music playing or an alarm ringing.
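Putting the two reference types from this section together, here is a hedged sketch of a prompt that mixes conversational turns, an on-screen pharmacy list, and a background entity. Every name and the prompt format itself are illustrative assumptions, not the paper’s training format.

```python
# Sketch combining conversational, on-screen, and background context
# in a single resolution prompt. All entities are made up for
# illustration.

conversation = [
    "User: find pharmacies near me",
    "Assistant: I found 3 pharmacies nearby.",
]
on_screen = [
    "1. Walgreens - (555) 010-1100",
    "2. CVS Pharmacy - (555) 010-2200",
    "3. Rite Aid - (555) 010-3300",
]
background = ["4. Music playing: 'Daydreaming'"]

prompt = "\n".join([
    *conversation,
    "On screen:",
    *on_screen,
    "In background:",
    *background,
    'User: "call below"',
    "Which numbered entity does the user mean? Reply with its number.",
])
print(prompt)
# A model fine-tuned for reference resolution should answer "3":
# "below" points at the bottom of the on-screen list, not the
# background music entity.
```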

3. Completely on device

Last but not least, ReALM is designed to run on the device, which would be a big deal, since LLMs require a lot of computing power and are therefore mostly cloud-based. Instead, ReALM is a smaller LLM, “but tuned specifically and explicitly for the reference resolution task.” Apple has historically touted its commitment to privacy as a selling point for its devices, so a generative AI version of Siri that runs entirely on the device would be very on-brand, and a major milestone for AI-capable devices.

Apple has unsurprisingly remained quiet about its AI plans, but CEO Tim Cook has said a big AI announcement is expected later this year, so all eyes are on Apple’s Worldwide Developers Conference (WWDC) on June 10.


