After DeepSeek Shock, Alibaba Unveils Rival AI Model That Uses Less Computing Power

Alibaba has unveiled a new version of its AI model, called Qwen2.5-Max, claiming benchmark scores that surpass both DeepSeek's recently released R1 model and industry standards like GPT-4o and Claude-3.5-Sonnet. The model achieves these results using a mixture-of-experts architecture that requires significantly less computational power than traditional approaches. The release comes amid growing concerns about China's AI capabilities, following DeepSeek's R1 model launch last week that sent Nvidia's stock tumbling 17%. Qwen2.5-Max scored 89.4% on the Arena-Hard benchmark and demonstrated strong performance in code generation and mathematical reasoning tasks. Unlike U.S. companies that rely heavily on massive GPU clusters -- OpenAI reportedly uses over 32,000 high-end GPUs for its latest models -- Alibaba's approach focuses on architectural efficiency. The company claims this allows comparable AI performance while reducing infrastructure costs by 40-60% compared to traditional deployments.
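The article does not describe Qwen2.5-Max's internals beyond naming the mixture-of-experts approach. As a rough illustration of why MoE reduces per-token compute, here is a toy Python sketch: each token is routed to only the top-k of the available experts, so most expert weights stay idle on any given token. The class name, dimensions, and routing details below are illustrative assumptions, not Alibaba's design.

```python
# Toy sketch (not Alibaba's implementation): a mixture-of-experts layer that
# activates only top_k of n_experts per token, which is the source of the
# claimed compute savings relative to a dense layer of the same total size.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class ToyMoELayer:
    def __init__(self, d_model=8, n_experts=4, top_k=2):
        self.top_k = top_k
        # One tiny feed-forward "expert" per slot (random weights here).
        self.experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
        # Router scores each token against every expert.
        self.router = rng.normal(size=(d_model, n_experts))

    def __call__(self, tokens):  # tokens: (n_tokens, d_model)
        scores = softmax(tokens @ self.router)  # (n_tokens, n_experts)
        out = np.zeros_like(tokens)
        for i, (tok, s) in enumerate(zip(tokens, scores)):
            top = np.argsort(s)[-self.top_k:]   # indices of the chosen experts
            gates = s[top] / s[top].sum()       # renormalize their gate weights
            # Only top_k expert matrices are multiplied for this token.
            out[i] = sum(g * (tok @ self.experts[e]) for g, e in zip(gates, top))
        return out

layer = ToyMoELayer()
print(layer(rng.normal(size=(3, 8))).shape)  # (3, 8): 2 of 4 experts active per token
```

In this toy setup, half the expert parameters are untouched for each token; production MoE models scale the same idea to far more experts, which is how total parameter count can grow without a proportional rise in per-token compute.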

Jan 29, 2025 - 23:25
Read more of this story at Slashdot.