Google Gemma AI Models Can Now Run on Phones

our […] collection of open models for multimodal [health] text and image understanding,” Martins said. “MedGemma works great across a range of image and text applications, so that developers […] can adapt the models for their own health apps.”

Also on the horizon is SignGemma, an open model to translate sign language into spoken-language text. Google says that SignGemma will enable developers to create new apps and integrations for deaf and hard-of-hearing users.


“SignGemma is a new family of models trained to translate sign language to spoken-language text, but it’s best at American Sign Language and English,” Martins said. “It’s the most capable sign language understanding model ever, and we can’t wait for you — developers and deaf and hard-of-hearing communities — to take this foundation and build with it.”

Worth noting is that Gemma has been criticized for its custom, non-standard licensing terms, which some developers say have made using the models commercially a risky proposition. That hasn’t dissuaded developers from downloading Gemma models tens of millions of times collectively, however.

Updated 2:40 p.m. Pacific: Added several quotes from Gemma Product Manager Gus Martins.
