On February 25, 2025, Alibaba Cloud officially open-sourced the visual generation base model Wanxiang 2.1 (Wan), a move that has sparked a technological wave in the AI video generation field.
As the first online AIGC creation platform globally to fully integrate Wanxiang 2.1, RunningHub offers creators a "zero-threshold, high-performance, all-ecosystem" experience, letting them harness cutting-edge AI technology without barriers.
Wanxiang 2.1: A Domestic Large Model Surpassing Sora
As the culmination of Alibaba Cloud's visual generation technology, the 14B version of the Wanxiang model excels in instruction following, complex motion generation, physical modeling, and text-to-video generation. Even the lightweight 1.3B version outperforms larger open-source models in benchmark tests and approaches the performance of some closed-source models.
In the authoritative benchmark VBench, Wanxiang 2.1 achieved an impressive score of 86.22%, significantly surpassing domestic and international models such as Sora, Luma, and Pika to claim the top spot. It shines particularly in complex motion generation (e.g., character rotation and jumping), physical law simulation (e.g., collisions and rebounds), and Chinese text-to-video generation.
RunningHub: The Perfect Partner for Wanxiang 2.1
In response to the challenges of deploying the 14B-parameter model locally, RunningHub, a pioneer in the AI creation field, was the first to integrate the Wanxiang 2.1 model, giving users one-click access without any complex local deployment.
To meet the high computational demands of the 14B model, RunningHub leverages a massive cloud GPU cluster, providing a smooth online generation experience that completely alleviates local hardware pressure. Whether professional creators or beginners, users can efficiently create professional-level videos through the web interface.
As one of the most popular online AIGC creation platforms for global creators, RunningHub integrates over 6,000 nodes and a vast array of models, including Stable Diffusion, LoRA, and more, covering creative fields such as gaming, animation, e-commerce, and design. Combined with Wanxiang 2.1's powerful capabilities, users can easily create diverse content.
Currently, many creators have already used RunningHub’s Wanxiang 2.1 to develop dozens of professional ComfyUI workflows and AI applications.
The open-sourcing of Wanxiang 2.1 marks an acceleration in the commercialization of AI video technology, and RunningHub's deep integration makes this technology easily accessible. When Wanxiang 2.1 meets RunningHub's cloud ecosystem, technological barriers no longer constrain creativity. Whether an e-commerce team wants to quickly generate product demo videos with AI, or an independent creator wants to tell stories through dynamic comics, all it takes is opening a browser to access Hollywood-level generation capabilities and rapidly turn ideas into finished work.
What are you waiting for?
Join RunningHub now and become one of the first creators to master Wanxiang 2.1! Here, every bit of inspiration is empowered by AI, and every second of video is defining the future.
Click to Join Now → RunningHub Official Website
RunningHub is the world's first AIGC application co-creation platform for images, audio, and video built on the open-source ecosystem. Through a modular node system and integrated cloud computing power, it turns complex processes such as design, video production, and digital content generation into "building block" style operations. The platform serves users in 144 countries, processing over a million creative requests daily and fundamentally reshaping the traditional content production model.
RunningHub is not only a creation tool but also a creator ecosystem and community. It lets developers upload nodes and workflows to earn revenue, forming a sustainable economic loop of "creativity – development – reuse – monetization."