Apple Quietly Unveils Open-Source Multimodal LLM, Ferret




Two months ago, Apple, in collaboration with Columbia University, quietly unveiled Ferret, a new multimodal large language model adept at referring and grounding.

Check out the GitHub repository here

Ferret can refer to image regions in any free-form shape and automatically establish grounding for text deemed groundable by the model. 

I somehow missed this. @Apple joined the open source AI community in October. Ferret’s introduction is a testament to Apple’s commitment to impactful AI research, solidifying its place as a leader in the multimodal AI space. Way to go @Apple – ps: I'm looking forward to the day… https://t.co/Pi1kQrsVvx

— Bart de Witte (@OpenMedFuture) December 23, 2023

The researchers curated the GRIT dataset for model training. The dataset includes 1.1 million samples containing rich hierarchical spatial knowledge, along with 95K hard negative samples to promote model robustness.
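To make the idea of a referring-and-grounding training sample with hard negatives more concrete, here is a minimal sketch. The field names, coordinate format, and conversation structure below are illustrative assumptions, not GRIT's actual schema; see the GitHub repository for the real format.

```python
import json

# Hypothetical referring-and-grounding sample: the user refers to an
# image region by coordinates, and the assistant grounds its answer by
# attaching coordinates to the groundable phrase.
sample = {
    "image": "example.jpg",
    "conversation": [
        {"role": "user",
         "content": "What is the animal inside the region [40, 60, 210, 300]?"},
        {"role": "assistant",
         "content": "A ferret [40, 60, 210, 300] resting on a blanket."},
    ],
    # A hard negative pairs the region with a phrase that does NOT
    # match the image content, to promote robustness.
    "hard_negative": {"phrase": "a cat", "box": [40, 60, 210, 300]},
}

print(json.dumps(sample, indent=2))
```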

They also said that the resulting model achieved superior performance in classical referring and grounding tasks and greatly outperformed existing MLLMs in region-based and localisation-demanded multimodal chatting. 

“Our evaluations also reveal a significantly improved capability of describing image details and a remarkable alleviation in object hallucination,” the researchers said. They also cautioned that Ferret, like most MLLMs, may produce harmful and counterfactual responses.

Citing LISA, the researchers said they plan to enhance Ferret to output segmentation masks and bounding boxes. 

The Significance of Ferret’s Stealthy Debut

Apple’s strategic move to release Ferret without a formal announcement speaks volumes about the company’s dedication to staying at the forefront of multimodal AI. The unexpected embrace of open-source development departs from Apple’s traditional closed-door approach, setting the stage for potential collaboration and community-driven advancements.

Ferret’s functionality is elegantly simple yet powerful. Beyond identifying elements within an image, the model draws connections between them to formulate responses to user queries. This opens up possibilities in image search, accessibility, and other applications where nuanced contextual understanding is crucial.

Ferret’s versatility is further enhanced by a spatial-aware visual sampler capable of handling various sparsity patterns associated with different shapes. Ferret accommodates diverse regional inputs, including points, bounding boxes, and free-form shapes.
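To illustrate what accepting points, bounding boxes, and free-form shapes as input might look like in practice, here is a minimal sketch of serialising each region type into a textual prompt. The placeholder tokens (`<point>`, `<box>`, `<poly>`) and coordinate conventions are assumptions for illustration, not Ferret's actual prompt format.

```python
def region_to_prompt(region):
    """Serialise a point, bounding box, or free-form polygon into a
    textual placeholder a region-aware multimodal LLM could consume."""
    if isinstance(region[0], (tuple, list)):
        # Free-form polygon: a list of (x, y) vertices.
        coords = ";".join(f"{x},{y}" for x, y in region)
        return f"<poly {coords}>"
    if len(region) == 2:
        # A single (x, y) point.
        return f"<point {region[0]},{region[1]}>"
    # Otherwise an (x1, y1, x2, y2) bounding box.
    return f"<box {','.join(map(str, region))}>"

point = (120, 80)
box = (30, 40, 200, 180)
polygon = [(10, 10), (50, 12), (48, 60), (8, 55)]

question = f"What is the object at {region_to_prompt(box)}?"
print(question)  # What is the object at <box 30,40,200,180>?
```

The point of a spatial-aware visual sampler is that, however sparse or irregular the referenced region is, the model extracts features from exactly that area rather than from a fixed grid.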

The post Apple Quietly Unveils Open-Source Multimodal LLM, Ferret appeared first on Analytics India Magazine.

