VIGGLE FAQ
What is VIGGLE?
VIGGLE is an AI video generation system engineered for realism and control. It is powered by JST-1, a pioneering foundation model that builds Newtonian mechanics, collision dynamics, and material properties directly into its generative architecture.
How to use VIGGLE?
After securing beta access, users import a character or select from curated templates, then describe or demonstrate the desired motion (e.g., "jump off a ledge and roll upon landing"). VIGGLE interprets this intent through physics-constrained diffusion, ensuring every frame obeys realistic force, inertia, and spatial continuity.
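To picture this flow as code, here is a minimal sketch in Python. VIGGLE has not published an API of this kind, so everything here is a hypothetical assumption for illustration: the endpoint URL, the `animate_character` helper, the request fields, and the token are all invented placeholders, not a documented VIGGLE interface.

```python
# Hypothetical sketch of the workflow described above.
# VIGGLE does not publish this API; the endpoint, fields, and token
# below are illustrative assumptions only.
import requests

API_URL = "https://api.example.com/v1/animate"  # placeholder, not a real VIGGLE endpoint
API_TOKEN = "YOUR_BETA_TOKEN"                   # hypothetical beta credential

def animate_character(character_path: str, motion_prompt: str) -> bytes:
    """Upload a character image and a motion prompt; return rendered video bytes."""
    with open(character_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"character": f},
            data={"motion": motion_prompt},
            timeout=300,
        )
    response.raise_for_status()
    return response.content

if __name__ == "__main__":
    video = animate_character("hero.png", "jump off a ledge and roll upon landing")
    with open("hero_animation.mp4", "wb") as out:
        out.write(video)
```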
What is JST-1?
JST-1 is the foundation model behind VIGGLE, trained on multimodal physics simulations, real-world motion capture, and synthetic 3D-video datasets. Unlike conventional video models, JST-1 *reasons* about mass, friction, torque, and deformation, making it the first of its kind to generate video where movement is not just plausible but physically verifiable.
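To make "physically verifiable" concrete, here is a small self-contained check. This is our own toy illustration, not part of JST-1's internals: given a character's tracked vertical positions over a free-fall segment of a clip, fit constant-acceleration kinematics and confirm the recovered acceleration matches gravity.

```python
# Toy illustration of verifying motion against Newtonian kinematics.
# Fit tracked per-frame heights during a fall to y(t) = a*t^2 + b*t + c
# and compare the implied acceleration (2*a) to -g.
import numpy as np

def fitted_acceleration(y: np.ndarray, fps: float) -> float:
    """Fit a quadratic to per-frame heights; return the implied acceleration (m/s^2)."""
    t = np.arange(len(y)) / fps
    a, b, c = np.polyfit(t, y, deg=2)
    return 2.0 * a  # second derivative of the quadratic

def is_free_fall_consistent(y: np.ndarray, fps: float, g: float = 9.81,
                            tol: float = 0.5) -> bool:
    """True if the clip's implied acceleration is within tol of -g."""
    return abs(fitted_acceleration(y, fps) + g) < tol

if __name__ == "__main__":
    fps = 30.0
    t = np.arange(15) / fps
    y = 2.0 + 1.0 * t - 0.5 * 9.81 * t**2   # synthetic "tracked" jump arc
    print(is_free_fall_consistent(y, fps))   # True: the motion obeys gravity
```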
How can I join the beta version of VIGGLE?
Reserve your spot in the VIGGLE beta via our official waitlist. Early access is granted progressively to creators, researchers, and studios aligned with our mission of physics-integrated generative media.
Can I import my own characters into VIGGLE?
Absolutely. VIGGLE supports diverse input formats, including segmented images, rigged avatars, and even rough line drawings. It automatically infers skeletal structure, center of mass, and joint constraints, enabling accurate, physics-aware animation with no technical art pipeline needed.
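As a rough illustration of one such inference, a character's center of mass can be approximated as the centroid of its binary silhouette under a uniform-density assumption. The sketch below is ours, not VIGGLE's actual pipeline.

```python
# Toy illustration of inferring a character's center of mass from a
# segmented image; our sketch, not VIGGLE's actual pipeline.
import numpy as np

def center_of_mass(mask: np.ndarray) -> tuple[float, float]:
    """Centroid (row, col) of a binary silhouette, assuming uniform density."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        raise ValueError("empty mask: no character pixels found")
    return rows.mean(), cols.mean()

if __name__ == "__main__":
    # 5x5 silhouette: a small blob standing in for a segmented character
    mask = np.zeros((5, 5), dtype=bool)
    mask[1:4, 2] = True   # torso
    mask[3, 1:4] = True   # base
    print(center_of_mass(mask))  # approximately (2.4, 2.0)
```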