Get models compatible, quantized and running on edge hardware
Eliminates late-stage deployment failures by making PyTorch models hardware-compatible and optimized from the start.
The deployment workflow
Validates and compiles PyTorch models for embedded target backends
pip install "embedl-deploy[tensorrt,tidl]"

embedl init \
  --backend tensorrt \
  --device orin.localhost.dev

# Wizard handles:
# - Docker container orchestration
# - SSH key configuration for target device
# - Remote inference
Automated Environment Setup
Configures vendor-specific compilers, Docker environments, and SSH access. Simplifies setup for inference with device-side runtimes.
Hardware-Aware Model Definition
Custom hardware-aware PyTorch operations mirror the behavior of the compiled model, with fusions and compiler constraints enforced directly in PyTorch.
from embedl.tensorrt import nn
from embedl.tensorrt import transform

# Define a model that will be hardware-compatible
# and optimal from the start
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.convbnrelu = nn.ConvBatchNormRelu(...)

# Make a vanilla PyTorch model hardware compatible
pretrained_model = SomeModel()
compatible_model = transform(pretrained_model)
embedl compile ./model.onnx \
  --output-dir ./artifacts \
  --quantization int8 \
  --calib-data ./calibration_set

embedl benchmark ./artifacts/model.engine \
  --tests ones,random,uniform \
  --runs 5 \
  --log-dir ./results

embedl serve ./artifacts/model.engine
Compilation, Quantization, and Benchmarking
Compiles models to hardware-specific binaries with explicit quantization. Benchmarks performance directly on-device to verify latency and accuracy.
Hardware backends for edge deployment
Eliminate vendor-specific compilation and quantization errors
TensorRT (NVIDIA): Automates TensorRT integration by identifying and replacing unsupported operators prior to compilation. Prevents runtime failures and unexpected graph fusions on Jetson and Orin platforms.
TIDL (Texas Instruments): Manages TIDL-specific quantization and memory alignment. Prevents silent failures and accuracy drops caused by automatic operator substitution in the TI compiler toolchain.
Get started with Embedl Deploy
Ready to eliminate your model deployment problems?
Frequently Asked Questions
What is Embedl Deploy?
Embedl Deploy is a Python package that makes AI models deployment-ready for edge hardware.
It makes PyTorch models compatible, quantized, and deployable for selected embedded hardware platforms. It bridges the gap between flexible AI model development in Python and constraints during deployment. Instead of discovering incompatibilities late in the deployment process, you detect and resolve them early in PyTorch.
Who is Embedl Deploy for?
Embedl Deploy is built for:
- Deep learning engineers designing models
- Edge and embedded developers deploying models
- Engineering managers who need predictable deployment timelines
If your team has struggled with quantization issues, unsupported operators, or opaque compiler behavior, Embedl Deploy is designed for you.
Why do model deployments fail, and how does Embedl Deploy help?
Most deployment failures happen because:
- The model uses unsupported operations
- The compiler silently modifies the graph
- Quantization introduces unexpected accuracy drops
- The runtime behaves differently than PyTorch
Embedl Deploy makes these transformations explicit and traceable before compilation. You can see what changed, compare outputs, and understand why.
It reduces late-stage surprises.
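For instance, "compare outputs" can be as simple as running the original and transformed models on the same input. A minimal sketch, assuming the transform API from the TensorRT example above; the resnet18 stand-in model, input shape, and comparison idiom are illustrative, not prescribed:

import torch
from torchvision.models import resnet18
from embedl.tensorrt import transform

# Any vanilla PyTorch model works here; resnet18 is just a stand-in.
pretrained_model = resnet18().eval()
compatible_model = transform(pretrained_model)

# Run both on the same input to see where behavior deviates
# before any vendor compiler is involved.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    reference = pretrained_model(x)
    transformed = compatible_model(x)

print(f"max deviation after transform: {(reference - transformed).abs().max().item():.2e}")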
How is Embedl Deploy different from vendor toolchains?
Vendor toolchains compile models. Embedl Deploy prepares models so they compile correctly and behave predictably.
Most vendor tools:
- Apply implicit graph rewrites
- Fuse operations without visibility
- Perform opaque quantization adjustments
- Fail late when unsupported ops are detected
Embedl Deploy moves compatibility and quantization earlier into the PyTorch stage. You get transparency and control before the vendor compiler runs.
We don’t replace vendor compilers — we make them reliable to use.
Is Embedl Deploy a new quantization algorithm?
No.
Embedl Deploy uses existing quantization mechanisms where appropriate. It makes quantization explicit in the model graph and aligns PyTorch behavior with compiled behavior.
It is not a new quantization algorithm.
It is a compatibility and traceability layer.
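To make "explicit in the model graph" concrete, here is a minimal sketch using PyTorch's own quantization stubs (torch.ao.quantization). Embedl Deploy's actual API may differ; the point is only that quantization boundaries appear as visible graph nodes instead of hidden compiler rewrites:

import torch

class ExplicitlyQuantized(torch.nn.Module):
    """Wraps a float model so quantize/dequantize points are explicit nodes."""

    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()      # explicit quantized entry
        self.model = model
        self.dequant = torch.ao.quantization.DeQuantStub()  # explicit float exit

    def forward(self, x):
        return self.dequant(self.model(self.quant(x)))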
Which hardware platforms are supported?
Embedl Deploy focuses on a limited set of selected hardware backends.
We intentionally support only a small number of platforms per release to ensure robustness and deep compatibility, rather than broad but fragile coverage.
Check the documentation for the currently supported targets.
Do I need to rewrite my existing models?
In many cases, no.
Embedl Deploy provides:
- TorchSafe operator subsets
- Model transformation utilities
- Explicit quantization integration
You can either build your model using safe operators from the start, or transform an existing PyTorch model into a compatible version.
The goal is minimal friction, not rewriting architectures from scratch.
What happens if my model uses unsupported operations?
Embedl Deploy will detect and surface them early.
Depending on the backend, it may:
- Suggest supported alternatives
- Replace operations with compatible equivalents
- Provide a clear explanation of why compilation would fail
You won’t discover the issue at the final deployment step.
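The checks themselves are built into Embedl Deploy; as a rough sketch of the underlying idea, a traced graph can be scanned against a backend's supported-operator set. SUPPORTED_MODULES below is a made-up placeholder, not an Embedl API:

import torch
from torch.fx import symbolic_trace

# Placeholder allow-list; a real backend's supported set is much larger.
SUPPORTED_MODULES = {torch.nn.Conv2d, torch.nn.BatchNorm2d, torch.nn.ReLU, torch.nn.Linear}

def find_unsupported(model: torch.nn.Module) -> list[str]:
    traced = symbolic_trace(model)
    modules = dict(traced.named_modules())
    return [
        str(node.target)
        for node in traced.graph.nodes
        if node.op == "call_module" and type(modules[str(node.target)]) not in SUPPORTED_MODULES
    ]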
Can I see what changed in my model?
Yes. Using Embedl Studio or Embedl Hub, for example, you can visualize graph differences across compilation stages.
Transparency is a core design principle. You can:
- Compare model graphs
- Compare outputs before and after quantization
- Benchmark compiled artifacts against the original PyTorch model
You remain in control of what changed.
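For a quick, code-only look at graph differences (Embedl Studio and Embedl Hub provide the richer visual diffs mentioned above; tracing with torch.fx here is an illustrative assumption, and pretrained_model / compatible_model are the models from the earlier transform sketch):

from torch.fx import symbolic_trace

# Print both graphs as tables and eyeball the differences.
symbolic_trace(pretrained_model).graph.print_tabular()
symbolic_trace(compatible_model).graph.print_tabular()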
Does Embedl Deploy guarantee no accuracy loss?
No system can guarantee that.
Quantization and compilation always introduce trade-offs. What Embedl Deploy does guarantee is:
- You can measure those trade-offs early
- You can trace where deviations occur
- You can align PyTorch and compiled behavior within defined tolerances
It replaces guesswork with measurable behavior.
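A sketch of what "within defined tolerances" means in practice; the tolerance values are illustrative, not Embedl defaults:

import torch

def within_tolerance(reference: torch.Tensor, candidate: torch.Tensor,
                     rtol: float = 1e-2, atol: float = 1e-3) -> bool:
    # True when |candidate - reference| <= atol + rtol * |reference| element-wise.
    return torch.allclose(candidate, reference, rtol=rtol, atol=atol)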
Do I need to be an expert in vendor toolchains?
No.
Embedl Deploy abstracts most of the setup complexity — Docker environments, compiler versions, runtime artifacts — into a reproducible workflow.
Advanced users can still access lower-level controls, but basic usage does not require vendor-specific expertise.
Is Embedl Deploy free to use?
Embedl Deploy provides publicly accessible tools and documentation.
Commercial use may require a license depending on the backend and deployment context. Contact us for licensing details.
Can I use it for production deployments?
Yes.
Embedl Deploy produces real compiled artifacts that run on-device. It is not a simulation layer.
For enterprise usage, support and licensing options are available.
What does Embedl Deploy not do?
Embedl Deploy does not:
- Perform neural architecture search
- Do large-scale retraining or pruning
- Provide custom kernel development
- Replace vendor compilers
It focuses strictly on compatibility, quantization alignment, and deployment reliability.
How long does it take to get started?
In a typical supported setup, you can:
- Install the package
- Initialize the backend
- Compile and benchmark a model
in under an hour on a clean machine with a supported device.
That is the bar we hold ourselves to.
Can't we just build this in-house?
You can build internal scripts.
But most teams eventually accumulate:
- Fragile shell pipelines
- Version mismatches
- Hidden quantization assumptions
- Knowledge trapped in one engineer’s head
Embedl Deploy aims to provide a general, tested, reproducible solution rather than a patchwork system.
The difference is not whether deployment works once.
The difference is whether it works reliably across projects.