markjgsmith

Linkblog

09:18:00 +07:00 Nvidia launches a set of microservices for optimized inferencing - Specialised containers that bundle in the inference engine used to actually run trained models. Should make it much easier to get up and running for teams that can't build out their own infrastructure. # techcrunch.com

For enquiries about my consulting, development, training and writing services, as well as sponsorship opportunities, contact me directly via email. More details about me here.