
Google’s new Gemini AI model scores lower on safety

on sensitive topics and safety policy violations, which is reflected across our evaluations,” reads the report.

Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch’s testing of the model via AI platform OpenRouter found that it’ll uncomplainingly write essays in support of replacing human judges with AI, weakening due process protections in the U.S., and implementing widespread warrantless government surveillance programs.

Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google gave in its technical report demonstrate the need for more transparency in model testing.

“There’s a trade-off between instruction-following and policy following, because some users may ask for content that would violate policies,” Woodside told TechCrunch. “In this case, Google’s latest Flash model complies with instructions more while also violating policies more. Google doesn’t provide much detail on the specific cases where policies were violated, although they say they are not severe. Without knowing more, it’s hard for independent analysts to know whether there’s a problem.”

Google has come under fire for its model safety reporting practices before.

It took the company weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report was eventually published, it initially omitted key safety testing details.

On Monday, Google released a more detailed report with additional safety information.
