Deploying a trained model to the cloud requires supporting clients written in a variety of languages. A REST API is one such method: any client can communicate with the inference engine over HTTP, regardless of the language the client application is written in, and receive predictions for a given input.
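As a sketch of what such a client request looks like, the snippet below builds the URL and JSON body for TensorFlow Serving's REST predict endpoint (`/v1/models/{model}:predict`, default REST port 8501 in the official Docker image). The host and model name here are hypothetical placeholders, not values from this project.

```python
import json

# Hypothetical deployment details -- adjust to your setup.
HOST = "localhost"
PORT = 8501            # default REST port exposed by the TensorFlow Serving Docker image
MODEL_NAME = "my_model"

def build_predict_request(instances):
    """Build the URL and JSON body for TensorFlow Serving's REST predict API.

    `instances` is a list of input examples, e.g. [[1.0, 2.0, 3.0]].
    """
    url = f"http://{HOST}:{PORT}/v1/models/{MODEL_NAME}:predict"
    body = json.dumps({"instances": instances})
    return url, body

url, body = build_predict_request([[1.0, 2.0, 3.0]])
print(url)   # http://localhost:8501/v1/models/my_model:predict
print(body)  # {"instances": [[1.0, 2.0, 3.0]]}

# The request can then be sent with any HTTP client, in any language, e.g.:
#   import requests
#   resp = requests.post(url, data=body)
#   predictions = resp.json()["predictions"]
```

Because the payload is plain JSON over HTTP, clients in Java, Go, JavaScript, or any other language can issue the same request with their native HTTP libraries.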

tfServingDockerRestV2