Unlocking a Smarter Home Experience with Apple’s Innovative AI-Powered Assistant, Supercharging Siri - Insights
Getty Images/Yuuji
Despite not launching any AI models since the generative AI craze began, Apple is working on some AI projects. Just last week, Apple researchers shared a paper unveiling a new language model the company is working on, and insider sources reported that Apple has two AI-powered robots in the works. Now, the release of yet another research paper shows Apple is just getting started.
On Monday, Apple researchers published a research paper that presents Ferret-UI, a new multimodal large language model (MLLM) capable of understanding mobile user interface (UI) screens.
Also: Generating music using AI in Copilot just got even better
MLLMs differ from standard LLMs in that they go beyond text, showing a deep understanding of multimodal elements such as images and audio. In this case, Ferret-UI is trained to recognize the different elements of a user’s home screen, such as app icons and small text.
Identifying app screen elements has been challenging for MLLMs in the past because those elements, such as icons and text labels, are tiny relative to the full screen. To overcome that issue, according to the paper, the researchers added "any resolution" on top of Ferret, which allows the model to magnify the details on the screen.
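As a rough illustration of the "any resolution" idea, a screen can be split into sub-images along its longer axis so that small UI elements get more pixels when each sub-image is encoded separately alongside the full screenshot. This is a minimal sketch of that intuition; the function name, the simple two-way split, and the box format are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of an "any resolution" style split: keep the full
# screenshot plus sub-images cut along the screen's longer axis, so
# small UI elements retain more pixels per encoded image.
# The two-way split is an illustrative assumption, not the paper's
# exact method.

def split_screen(width: int, height: int):
    """Return the full-image box plus two sub-image boxes,
    each as (left, top, right, bottom)."""
    full = (0, 0, width, height)
    if height >= width:
        # Portrait screen: split horizontally into top and bottom halves.
        subs = [(0, 0, width, height // 2), (0, height // 2, width, height)]
    else:
        # Landscape screen: split vertically into left and right halves.
        subs = [(0, 0, width // 2, height), (width // 2, 0, width, height)]
    return [full] + subs

# Example: an iPhone-like portrait screen.
boxes = split_screen(1170, 2532)
```

Each resulting region would then be encoded separately by the vision encoder, so an icon occupying a few dozen pixels in the full screenshot occupies roughly twice as many in its sub-image.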
Building on that, Apple’s MLLM also has “referring, grounding, and reasoning capabilities,” which allow Ferret-UI to comprehend UI screens fully and perform tasks when instructed based on the contents of the screen, according to the paper, as seen in the photo below.
K. You et al.
To measure how the model performs against other MLLMs, Apple researchers compared Ferret-UI to GPT-4V, OpenAI's MLLM, on public benchmarks as well as elementary and advanced tasks.
Also: The best AI image generators to try right now
Ferret-UI outperformed GPT-4V across nearly all tasks in the elementary category, including icon recognition, OCR, widget classification, find icon, and find widget tasks on iPhone and Android. The only exception was the “find text” task on the iPhone, where GPT-4V slightly outperformed the Ferret models, as seen in the chart below.
K. You et al.
When it comes to grounding conversations in the findings of the UI, GPT-4V has a slight advantage, outperforming Ferret-UI 93.4% to 91.7%. However, the researchers note that Ferret-UI's performance is still "noteworthy" since it generates raw coordinates instead of choosing from the set of pre-defined boxes GPT-4V selects from. You can find an example below.
K. You et al.
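To see why generating raw coordinates is the harder setting, note that a freely generated box is typically scored by its overlap (intersection-over-union, IoU) with the ground-truth element, rather than by an exact match against a fixed menu of candidate boxes. The sketch below is illustrative only; the specific numbers and the common 0.5 IoU threshold are assumptions, not figures from the paper.

```python
# Hedged illustration of raw-coordinate grounding: a generated box is
# scored by intersection-over-union (IoU) with the ground truth, rather
# than by picking one of several pre-defined candidate boxes.
# The example boxes and the 0.5 threshold are illustrative assumptions.

def iou(a, b):
    """Intersection-over-union of two (left, top, right, bottom) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

truth = (100, 200, 300, 260)       # ground-truth bounds of a UI element
predicted = (105, 195, 295, 265)   # model's freely generated coordinates
correct = iou(truth, predicted) >= 0.5   # a common grounding criterion
```

A model choosing from pre-defined boxes only has to pick the right option; a model emitting raw coordinates has to land near the exact pixel bounds, which makes Ferret-UI's 91.7% score notable.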
The paper does not address how Apple plans to use the technology, or whether it will at all. Instead, the researchers state more broadly that Ferret-UI's advanced capabilities have the potential to positively impact UI-related applications.
“The advent of these enhanced capabilities promises substantial advancements for a multitude of downstream UI applications, thereby amplifying the potential benefits afforded by Ferret-UI in this domain,” the researchers wrote.
Also: Google updates Gemini and Gemma on Vertex AI, and gives Imagen a text-to-live-image generator
It's easy to see how Ferret-UI could improve Siri. Because the model has a thorough understanding of a user's app screen and knows how to perform certain tasks based on its contents, Ferret-UI could be used to supercharge Siri into carrying out tasks for you.
There’s certainly interest in an assistant that does more than just respond to queries. New AI gadgets such as the Rabbit R1 get plenty of attention for being able to carry out an entire task for you, such as booking a flight or ordering a meal, without you having to instruct them step by step.
Artificial Intelligence

- How I used ChatGPT to scan 170k lines of code in seconds and save me hours of detective work
- 6 ways to write better ChatGPT prompts - and get the results you want faster
- 6 digital twin building blocks businesses need - and how AI fits in
- Google’s Gems are a gentle introduction to AI prompt engineering
- Title: Unlocking the Power of AR/VR: View Apple's Innovative 3D Content with Meta Quest 3 - Detailed Tutorial | CNET
- Author: James
- Created at : 2024-10-15 16:11:26
- Updated at : 2024-10-19 16:09:30
- Link: https://technical-tips.techidaily.com/unlocking-the-power-of-arvr-view-apples-innovative-3d-content-with-meta-quest-3-detailed-tutorial-cnet/
- License: This work is licensed under CC BY-NC-SA 4.0.