Model Deployment
Deploying a trained model to the cloud requires supporting clients written in a variety of languages. A REST API is one common approach: client applications, whatever language they are written in, send input data over HTTP to an inference engine, which runs the model on that input and returns the prediction.
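The pattern above can be sketched with Python's standard library alone. This is a minimal illustration, not a production design: the `/predict` route, the `features` payload field, and the `predict` function standing in for a real inference engine are all assumptions made for the example.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Stand-in for a real inference engine; a deployed service would
# load a trained model here and run it on the request features.
def predict(features):
    return {"score": sum(features) / len(features)}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":  # hypothetical route name
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output quiet

def main():
    # Bind to an ephemeral port and serve in a background thread.
    server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Any HTTP-capable client, in any language, could make this call.
    url = f"http://127.0.0.1:{server.server_port}/predict"
    req = Request(
        url,
        data=json.dumps({"features": [1.0, 2.0, 3.0]}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        result = json.loads(resp.read())
    server.shutdown()
    return result

if __name__ == "__main__":
    print(main())  # {'score': 2.0}
```

Because the contract is just JSON over HTTP, a Java, Kotlin, or JavaScript client can call the same endpoint without any shared libraries.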
Deploying a trained model to the edge requires considerably more care and effort. One example IoT edge configuration uses an Android phone as the IoT device. This works well for small models, but deploying a large model on an Android phone can be challenging, and even after reducing the model's size to fit on the device, it may not perform well.
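One practical consequence of these size constraints is checking a serialized model against the target device's budget before attempting deployment. The sketch below is a hypothetical helper, `fits_on_device`, with an illustrative 50 MiB budget; real limits depend on the phone's storage, RAM, and the runtime used.

```python
import os
import tempfile

# Illustrative budget; actual limits vary per device and runtime.
BUDGET_BYTES = 50 * 1024 * 1024  # 50 MiB

def fits_on_device(model_path: str, budget_bytes: int = BUDGET_BYTES) -> bool:
    """Return True if the serialized model file fits within the budget."""
    return os.path.getsize(model_path) <= budget_bytes

if __name__ == "__main__":
    # Stand-in for a serialized model file: 10 MiB of zeros.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"\0" * (10 * 1024 * 1024))
    print(fits_on_device(f.name))  # True
    os.unlink(f.name)
```

A check like this only catches the storage constraint; even a model that fits on disk may still exceed the phone's memory or latency budget at inference time.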