FAQ

Product

What is a Python Software Development Kit (SDK)?
A Python software development kit (SDK) provides developers with a toolbox of resources, tools, and algorithms for building software applications. By offering a consistent, reliable set of building blocks, an SDK lets developers create software more efficiently and effectively.
What versions of Python does the Embedl Python SDK support?
Embedl recommends using a stable version of Python, i.e. one with maintained or bugfix status. The current status of each Python version is listed at https://devguide.python.org/versions/. (Sometimes the latest Python release is not yet compatible with TensorFlow or PyTorch, in which case there may be a delay before Embedl supports it. See the release documentation for the current status of each release.) In addition, Embedl normally supports Python versions that still receive security fixes and drops support for versions that have reached their end of life.
What is included in the Embedl Model Optimization SDK?
The Embedl SDK is a set of tools and resources designed to help developers optimize their deep learning models for deployment on resource-constrained devices or in real-time applications. It includes pre-built algorithms for pruning, quantizing, and compressing models, which can significantly reduce model size and speed up inference. The SDK is built on a modular architecture, allowing developers to interchange and tweak components for specific applications and to transfer their domain knowledge into the SDK for better results. It also provides visualization tools for tracking the changes made to a model during optimization. Overall, the Embedl SDK is a powerful tool that helps developers improve both the efficiency and the accuracy of their deep learning models.
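To make the pruning and quantization steps above concrete, here is a minimal sketch using plain PyTorch built-ins. These are not Embedl SDK calls; the model, layer sizes, and pruning ratio are illustrative stand-ins for what the SDK automates.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy model standing in for a real deep learning model.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pruning: zero out the 50% smallest-magnitude weights of each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# Quantization: convert Linear weights to int8 for faster CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```

The pruned weights are exact zeros (compressible and, on sparsity-aware hardware, skippable), and dynamic int8 quantization shrinks the stored weights to roughly a quarter of their float32 size.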
What operating system should I use with the Embedl software?
Embedl recommends Ubuntu Long Term Support (LTS) versions that still have maintenance and security support (see https://endoflife.date/ubuntu), used together with a supported version of Python (see above). Other Linux distributions can sometimes be supported on request (contact sales).
What hardware is supported by Embedl Model Optimization SDK?
Our goal is to support every available hardware target; we typically add new targets on request from our customers. Currently supported hardware includes Xilinx FPGAs, NVIDIA GPUs, Texas Instruments DSPs, Arm CPUs, NXP NPUs, and Intel CPUs, GPUs, and FPGAs.
We have a proprietary inference engine built in-house. Can we use Embedl?
Yes, we support any inference engine.
Can I deploy deep learning models optimized by Embedl Model Optimization SDK just as I would deploy my current models?
Yes. A model optimized by the Embedl Model Optimization SDK has no additional runtime dependencies; the only difference between the pre-optimized model and the optimized model is the contents of the model itself. You can therefore deploy optimized models exactly as you deploy your current models, and you are not dependent on Embedl to run them.
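As a sketch of what "no additional runtime dependencies" means in practice: if your deployment path already serializes and loads models with TorchScript, the same load call works unchanged on an optimized model. The tiny model below is a stand-in for a model optimized offline.

```python
import io
import torch
import torch.nn as nn

# Stand-in for a model produced by an offline optimization step.
model = nn.Linear(4, 2)
scripted = torch.jit.script(model)

# Serialize exactly as the pre-optimized model would be serialized.
buffer = io.BytesIO()
torch.jit.save(scripted, buffer)
buffer.seek(0)

# Deployment side: plain torch.jit.load, no extra runtime required.
deployed = torch.jit.load(buffer)
print(deployed(torch.randn(1, 4)).shape)  # torch.Size([1, 2])
```

The same holds for other formats (e.g. ONNX): only the model contents change, not the loading or inference code.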
What hardware platforms are supported by Embedl?
Embedl supports a wide variety of hardware platforms for deploying deep learning models, including CPU, GPU, FPGA, and MCU platforms. For the complete hardware support specification, see (insert list of supported hardware types with some examples of specific chips).
Our company already has a team of DL experts; do we need Embedl?
Yes. We typically work with companies that already have in-house DL competence, ranging from teams with a couple of data scientists or engineers focused on DL to companies with hundreds of DL engineers and researchers.

Research

What industries can benefit from the use of Neural Architecture Search?
Industries such as computer vision, natural language processing, and robotics can benefit from the use of Neural Architecture Search.
What is the future of Neural Architecture Search?
The future of Neural Architecture Search looks promising, with continued research and development likely to lead to further advancements in the field.
What is the difference between Neural Architecture Search and Neural Network Architecture Design?
Neural Architecture Search is a technique for automatically designing neural network architectures, while neural network architecture design typically involves manual design by human experts.
What are some of the challenges of Neural Architecture Search?
The computational cost of the search process and the limited interpretability of the resulting architectures are two major challenges of Neural Architecture Search.
How does NAS improve the performance of machine learning models?
NAS can improve the performance of machine learning models by designing neural network architectures that are tailored specifically to the task at hand, resulting in improved accuracy and efficiency.
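The idea of NAS as automated search can be illustrated with a toy random search over a small architecture space. The search space, the scoring proxy, and all names below are hypothetical; a real NAS system would score candidates by training them or by estimating task accuracy and hardware cost.

```python
import random

# A toy architecture search space (illustrative, not a real NAS space).
SEARCH_SPACE = {
    "depth": [2, 3, 4],
    "width": [32, 64, 128],
    "kernel": [3, 5],
}

def score(arch):
    # Hypothetical proxy: reward model capacity, penalize compute cost
    # (a stand-in for latency on the target hardware).
    capacity = arch["depth"] * arch["width"]
    cost = arch["depth"] * arch["width"] * arch["kernel"] ** 2
    return capacity - 0.01 * cost

def random_search(trials=100, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        s = score(arch)
        if s > best_score:
            best, best_score = arch, s
    return best

print(random_search())
```

Practical NAS methods replace the random sampling with smarter strategies (evolutionary search, reinforcement learning, or differentiable relaxations), but the loop structure — sample, score, keep the best — is the same.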

Benefits

FASTER EXECUTION

By using state-of-the-art methods for optimizing Deep Neural Networks, we can achieve a significant decrease in execution time and help you meet your real-time requirements.

SMALLER FOOTPRINT IN DEVICE

The Embedl Optimization Engine automatically reduces the number of weights, and thus the size of the model, making it suitable for deployment in resource-constrained environments such as embedded systems.

SHORTER TIME-TO-MARKET

The tools are fully automatic, which reduces the need for time-consuming experimentation and thus shortens time-to-market. It also frees up your data scientists to focus on their core problems.

LESS ENERGY USAGE

Energy is a scarce resource in embedded systems, and our optimizer can achieve an order-of-magnitude reduction in the energy consumed by Deep Learning model execution.

IMPROVED PRODUCT MARGINS

By optimizing the Deep Learning model, cheaper hardware can be sourced that still meets your system requirements, leading to improved product margins.

DECREASED PROJECT RISK

Optimizing and deploying our customers’ Deep Learning models to embedded systems is what we do. By outsourcing this to us, your team can then focus on your core problems.

Want to find out more? Our award-winning Deep Learning Optimization Engine optimizes your Deep Learning model for deployment. BOOK A DEMO