Last week, our Deep Learning Researcher Andreas Ask and our CPO Ola Tiverman attended the embedded world Exhibition & Conference in Nuremberg, Germany. They met with customers, partners, prospects, and journalists (see the EE Times article). In the VEDLIoT booth they presented Embedl's software development tools for more efficient deep learning applications in embedded systems. Besides world-class hardware-aware neural network optimization and compression, they also showcased a brand-new innovation in which a neural network was optimally distributed between two separate hardware targets over 5G.
Our Mission
We help customers build extraordinary
Deep Learning-based products.

Accelerated
Time-to-Market

Improved Product Margins

Better Product Specifications

Value-Added Partnership
product
Technology
Our award-winning Deep Learning Optimization Engine optimizes your Deep Learning model for deployment (inference) to meet your requirements for:
- Execution Time (Latency)
- Throughput
- Runtime Memory Usage
- Power Consumption
Embedl enables you to deploy Deep Learning on less expensive hardware, use less energy, and shorten the product development cycle.
Embedl interfaces with the commonly used Deep Learning development frameworks, e.g. TensorFlow and PyTorch. Embedl also has world-leading support for hardware targets including CPUs, GPUs, FPGAs, and ASICs from vendors such as Nvidia, ARM, Intel, and Xilinx.
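Embedl's engine itself is proprietary, but one kind of compression it builds on can be illustrated with a minimal, framework-agnostic sketch (the function names here are hypothetical, for illustration only, and do not reflect Embedl's actual API): uniform int8 quantization stores each weight in one byte instead of four, cutting the model's runtime memory footprint roughly 4x.

```python
def quantize_int8(weights):
    """Symmetric uniform quantization of float weights to int8 (illustrative)."""
    # One scale per tensor, chosen so the largest weight maps to +/-127.
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    quantized = [round(w / scale) for w in weights]  # integers in [-127, 127]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights at inference time."""
    return [q * scale for q in quantized]

weights = [0.12, -0.5, 0.33, 1.27, -1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight differs from the original by at most scale / 2.
```

In practice, tools apply this per layer (or per channel) and combine it with other techniques such as pruning and architecture search; this sketch only shows why quantized models need far less runtime memory.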
We are happy to answer any questions and/or demonstrate Embedl on your Deep Learning model(s)!

benefits

FASTER
EXECUTION
By using state-of-the-art methods for optimizing Deep Neural Networks, we can achieve a significant decrease in execution time and help you reach your real-time requirements.

SMALLER
FOOTPRINT IN DEVICE
The Embedl Optimization Engine automatically reduces the number of weights, and thus the size of the model, making it suitable for deployment to resource-constrained environments such as embedded systems.
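A minimal sketch of the weight-reduction idea (plain Python, illustrative only, not Embedl's actual algorithm): magnitude pruning zeroes out the smallest weights in a layer so the model can be stored and executed sparsely.

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (illustrative)."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # Threshold at the k-th smallest absolute value; everything at or
    # below it is set to zero.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

layer = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned = prune_by_magnitude(layer, sparsity=0.5)
# Half the weights are now zero and can be skipped or stored sparsely,
# shrinking the model's footprint on the device.
```

Production tools typically re-train after pruning to recover accuracy; this sketch only shows where the size reduction comes from.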

SHORTER
TIME-TO-MARKET
The tools are fully automatic, which reduces the need for time-consuming experimentation and thus shortens time-to-market. It also frees up your data scientists to focus on their core problems.

LESS
ENERGY USAGE
Energy is a scarce resource in embedded systems and our optimizer can achieve an order of magnitude reduction in energy consumption for the Deep Learning model execution.

IMPROVED
PRODUCT MARGINS
By optimizing the Deep Learning model, cheaper hardware can be sourced that still meets your system requirements, leading to improved product margins.

DECREASED
PROJECT RISK
Optimizing and deploying our customers’ Deep Learning models to embedded systems is what we do. By outsourcing this to us, your team can focus on its core problems.
Zenseact
(Volvo Cars)
Veoneer
Chalmers University
Bielefeld University
Technion – Israel Institute of Technology
Siemens
Christmann Informationstechnik
Barcelona Supercomputing Center
Osnabrück University
Research Institutes of Sweden
Antmicro
Maxeler Technologies
Gothenburg University
Neuchatel University
Technische Universität Dresden

press
We are always flattered when our technology finds its way into the media. To share findings from our experiments, we regularly post on our blog. Sign up for our newsletter below so you don't miss out!
Latest News
Latest Blog
“What hardware should we use?”
In our guide, Overcome 4 main challenges when deploying deep learning in embedded systems, we list the most common challenges you face when leading a deep learning (DL) project. Here is one of the challenges you can find in our guide: Problem...
Stay up to date with our newsletter
