09:18:00 +07:00 Nvidia launches a set of microservices for optimized inferencing - Specialised containers that bundle a trained model together with the inference engine used to actually run it. Should make it super easy to get up and running for teams that can't build out their own serving infrastructure. # techcrunch.com