We have just released embedl/Cosmos-Reason2-2B-W4A16-Edge2, a new mixed-precision variant of Cosmos Reason 2 that recovers virtually the full baseline reasoning accuracy while preserving edge deployment efficiency.
After our earlier Cosmos Reason 2 releases (W4A16 and NVFP4A16), this is the variant teams have been asking for: edge-efficient deployment without giving up reasoning quality.
The Assumption We Tested
A recent writeup summarized the prevailing view: full-precision Cosmos models still require AGX Thor or DGX-class hardware, and quantization improves accessibility but introduces accuracy trade-offs. It remains an open question whether the accuracy trade-off is fundamental or a function of how quantization is applied.
Embedl's W4A16-Edge2 answers that question: the trade-off is not fundamental. The model matches the accuracy of the original higher-precision model while remaining as efficient as the fully quantized one.
Benchmark Results
On the official Physical AI Bench Reason Task (https://huggingface.co/spaces/shi-labs/physical-ai-bench-leaderboard):
| Model | Overall |
|---|---|
| nvidia/Cosmos-Reason2-2B (unquantized) | 50.60 |
| embedl/Cosmos-Reason2-2B-W4A16 | 48.68 |
| embedl/Cosmos-Reason2-2B-W4A16-Edge2 | 50.58 |

W4A16-Edge2 effectively closes the accuracy gap with the unquantized baseline (50.58 vs. 50.60). In contrast, our earlier W4A16 variant showed a roughly two-point drop, which can matter in accuracy-sensitive deployments. With W4A16-Edge2, we achieve baseline-level accuracy while still meeting strict edge memory and latency requirements.
How W4A16-Edge2 Works
Embedl's W4A16-Edge2 is a targeted mixed-precision recipe built on top of W4A16. The approach applies aggressive 4-bit weight quantization where the model is robust to it, while retaining FP16 precision in the sensitive layers that most affect reasoning fidelity.
Finding the right precision allocation - which layers are sensitive and which aren't - is where Embedl's proprietary optimization algorithms come in. When executed correctly, you can recover benchmark-level quality without sacrificing the memory and latency efficiencies that enable edge deployment.
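Embedl's actual allocation algorithm is proprietary, but the general idea can be illustrated with a naive sensitivity proxy: quantize each layer's weights to 4-bit, measure the resulting output error on sample inputs, and keep the most sensitive layers in FP16. The sketch below is purely illustrative; the layer names, scoring heuristic, and budget are our assumptions, not the production recipe.

```python
# Illustrative sketch of sensitivity-guided precision allocation.
# NOT Embedl's proprietary algorithm -- a naive proxy for the general idea.
import numpy as np

def quantize_4bit(w):
    """Symmetric round-to-nearest 4-bit quantization of a weight matrix."""
    scale = np.abs(w).max() / 7.0
    if scale == 0:
        return w
    return np.clip(np.round(w / scale), -8, 7) * scale

def sensitivity(w, x):
    """Output MSE introduced by quantizing this layer's weights, on inputs x."""
    return float(np.mean((x @ w - x @ quantize_4bit(w)) ** 2))

def allocate_precision(layers, x, fp16_budget):
    """Keep the fp16_budget most sensitive layers in FP16; quantize the rest."""
    scores = {name: sensitivity(w, x) for name, w in layers.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return {name: ("fp16" if name in ranked[:fp16_budget] else "w4")
            for name in layers}

# Toy model: even-numbered layers have much larger weights, so 4-bit
# quantization hurts them more and they should be kept in FP16.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 64))
layers = {f"layer{i}": rng.standard_normal((64, 64)) * (0.1 if i % 2 else 2.0)
          for i in range(6)}
plan = allocate_precision(layers, x, fp16_budget=2)
```

In practice, production recipes score sensitivity against end-task quality rather than raw output MSE, and search jointly over layers rather than greedily, but the FP16-budget framing is the same.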
On-Device Performance
We benchmarked W4A16-Edge2 against the full-precision baseline on Jetson hardware across text, image, and video workloads (batch size 1, 1280×720 representative paths). Numbers below are end-to-end latency, baseline → W4A16-Edge2.
NVIDIA Jetson AGX Orin
- Text e2e: 5.62s → 2.56s
- Video e2e: 6.47s → 3.42s
- Image e2e: 8.65s → 3.56s
NVIDIA Jetson Orin Nano Super
- Baseline full-precision configuration is out of memory in our setup
- W4A16-Edge2 runs text, image, and video workloads on-device
On Jetson Orin Nano Super, the base Cosmos Reason 2 model runs out of memory, while W4A16-Edge2 runs comfortably across all modalities. On AGX Orin, W4A16-Edge2 roughly halves end-to-end latency across the board, turning Cosmos Reason 2 into something you can actually deploy at the edge with responsive, real-time behavior.
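As a quick check on the "roughly halves" claim, the AGX Orin numbers quoted above work out to the following speedup factors (a trivial calculation over the latencies in this post):

```python
# End-to-end latencies on Jetson AGX Orin, seconds: (baseline, W4A16-Edge2)
latencies = {"text": (5.62, 2.56), "video": (6.47, 3.42), "image": (8.65, 3.56)}

# Speedup factor per modality, rounded to two decimals
speedups = {k: round(base / edge2, 2) for k, (base, edge2) in latencies.items()}
print(speedups)  # {'text': 2.2, 'video': 1.89, 'image': 2.43}
```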
You can find our detailed benchmarks in the model card: https://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16-Edge2
Why This Matters Now
Physical AI is moving fast from prototypes to deployed systems. The adoption signal is clear across robotics and autonomy ecosystems. At that stage, the deployment question becomes practical: can it run on the target hardware, and can it maintain the required reasoning quality?
W4A16-Edge2 answers both with hard numbers.
Try It Out Today
You can try the new W4A16-Edge2 model here:
https://huggingface.co/embedl/Cosmos-Reason2-2B-W4A16-Edge2
If you're not on Blackwell, W4A16-Edge2 is now the most accurate and edge-efficient option in the Cosmos Reason 2 lineup. For Blackwell deployments, our NVFP4A16 variant remains available, and W4A16 remains a strong option for maximal memory compression.
If you're deploying Cosmos Reason 2 at the edge and want support with model selection and optimization for your hardware profile, contact us at sales@embedl.com.