Standalone Edge AI model visualizer and profiling app
A holistic Edge AI debugging tool to maximize developer productivity
Designed to simplify Edge model debugging
Features for visual profiling and structural graph validation
Graph & problem analysis
Trace structural changes through each compilation step. Visualize how vendor toolchains modify the graph to detect silent failures and unsupported operator fusions.
Performance analysis
Profile on-device latency, quantization parity, and operation breakdown. Map execution data directly to model layers to isolate hardware-level bottlenecks.
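Embedl Studio collects and visualizes this per-layer data for you. Purely as an illustration of the kind of data involved, the sketch below gathers comparable per-operator timings with ONNX Runtime's built-in profiler; this uses plain `onnxruntime`, not Embedl's API, and `model.onnx` is a placeholder path.

```python
import json
import numpy as np
import onnxruntime as ort

# Enable ONNX Runtime's built-in profiler to capture per-operator timings.
opts = ort.SessionOptions()
opts.enable_profiling = True

sess = ort.InferenceSession("model.onnx", sess_options=opts)
inp = sess.get_inputs()[0]
# Replace any dynamic dimensions with 1 to build a dummy input.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
sess.run(None, {inp.name: np.random.rand(*shape).astype(np.float32)})

# end_profiling() writes a Chrome-trace JSON file and returns its path.
profile_path = sess.end_profiling()
with open(profile_path) as f:
    events = json.load(f)

# Aggregate kernel time (microseconds) per operator type.
totals = {}
for ev in events:
    if ev.get("cat") == "Node":
        op = ev.get("args", {}).get("op_name", "unknown")
        totals[op] = totals.get(op, 0) + ev.get("dur", 0)

for op, us in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{op:<20} {us / 1000:.3f} ms")
```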
Compare model versions
Compare how different quantization scales and compiler flags affect the final execution graph. Analyze iterations side-by-side to verify accuracy and hardware constraints.
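To give a concrete sense of what a structural comparison involves, here is a minimal sketch that diffs operator counts between two exported versions of a model using the plain `onnx` package rather than Embedl's API; the file names are hypothetical.

```python
from collections import Counter
import onnx

# Hypothetical paths to two exported versions of the same model,
# e.g. a float baseline and a quantized build.
baseline = onnx.load("model_fp32.onnx")
candidate = onnx.load("model_int8.onnx")

def op_histogram(model: onnx.ModelProto) -> Counter:
    """Count how many nodes of each operator type the graph contains."""
    return Counter(node.op_type for node in model.graph.node)

before, after = op_histogram(baseline), op_histogram(candidate)

# Report operators whose counts changed between the two versions,
# e.g. fused convolutions or inserted Quantize/Dequantize pairs.
for op in sorted(set(before) | set(after)):
    if before[op] != after[op]:
        print(f"{op:<20} {before[op]:>4} -> {after[op]:>4}")
```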
How does the app work in practice?
Understand your model's inner workings
Understand and debug your edge models with Embedl Studio
Frequently Asked Questions
Embedl Studio is a tool for exploring and comparing multiple computation graphs side by side (for example, PyTorch vs. ONNX vs. TensorRT), with synchronized navigation, cross-view highlighting, and per-layer latency and other profiling information.
The Embedl IDE is distributed as a Python package that you can `pip install`.
The Embedl Compiler and Embedl SDK output the required metadata in a dedicated `<model>.embedl` file, which can be inspected in the Embedl IDE.
All frameworks and hardware supported by the Embedl Compiler or Embedl SDK can be used with the Embedl IDE.