The DLtrain platform took a few years of innovation and coding to develop. DLtrain is designed to make a vendor's silicon ready for AI workloads in edge computing, and it makes it easy to handle the issues involved in porting trained AI models to edge computers.

Silicon vendor teams can take advantage of the above infrastructure to move their GPU silicon into the IoT edge market. A DLtrain licence with source code (non-exclusive) can be provided to customer teams so that they can port DLtrain to their CPU+GPU silicon. Porting PyTorch and TensorFlow models onto embedded devices is a challenging problem; DLtrain solves that problem and gives silicon vendors a path to an OEM-ready solution. For example, TI has its own inference engine (TIDL), Qualcomm has its own (SNPE), and ST Micro and many other silicon vendors offer similar tools. DLtrain provides C and C++ code along with a licence so that the customer team can move to market quickly.
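To illustrate the kind of work involved in porting a trained model to an embedded target, the sketch below shows a single fully-connected layer with ReLU executed as plain C++ against exported weights. The function name dense_relu and the toy weight values are purely illustrative assumptions, not part of the DLtrain API; in a real deployment the weights would be exported from the trained PyTorch or TensorFlow model.

#include <cstddef>
#include <cstdio>

// y = ReLU(W * x + b), with W stored row-major as out_dim x in_dim.
// This is the shape of kernel a vendor ultimately needs running on the
// embedded CPU (or offloaded to the GPU) for each layer of the model.
static void dense_relu(const float* W, const float* b,
                       const float* x, float* y,
                       std::size_t in_dim, std::size_t out_dim) {
    for (std::size_t o = 0; o < out_dim; ++o) {
        float acc = b[o];
        for (std::size_t i = 0; i < in_dim; ++i) {
            acc += W[o * in_dim + i] * x[i];
        }
        y[o] = acc > 0.0f ? acc : 0.0f;  // ReLU activation
    }
}

int main() {
    // Toy parameters standing in for weights exported from a trained model.
    const float W[2 * 3] = {0.5f, -1.0f, 0.25f,
                            1.5f,  0.0f, 0.75f};
    const float b[2] = {0.1f, -0.2f};
    const float x[3] = {1.0f, 2.0f, 3.0f};
    float y[2] = {0.0f, 0.0f};

    dense_relu(W, b, x, y, 3, 2);
    std::printf("y = [%f, %f]\n", y[0], y[1]);
    return 0;
}

A full port repeats this pattern for every layer type the model uses and maps the inner loops onto the vendor's CPU+GPU silicon; DLtrain's C/C++ source licence is intended to give customer teams that starting point.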