Learn how to build, package, deploy, and serve machine learning (ML) models in the cloud. During this session, you will first deploy a simple “pickled” model, followed by a more involved Hugging Face Transformers-based model.
This is the first in a series of sessions covering various aspects of ML model deployment, serving, and monitoring.
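As a taste of the first part of the session, here is a minimal sketch of what a “pickled” model is: a trained model serialized to disk with Python's pickle module. The specifics are illustrative assumptions only — scikit-learn and its iris dataset stand in for whatever model the session actually uses, and the file name `model.pkl` is arbitrary.

```python
# Minimal sketch: train a toy classifier and "pickle" (serialize) it to disk.
# scikit-learn, the iris dataset, and the file name "model.pkl" are
# illustrative assumptions, not the session's actual model.
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Save the trained model so an inference server can load it later.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```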
| | |
|---|---|
| Delivery Mode | Online |
| Duration | 1.5 hours |
| Prerequisites | Containerization 101, Serverless Computing |
What Will Be Covered
The following will be covered during this session:
- Train a model, then save it.
- Develop a simple model inference server using FastAPI (a minimal sketch follows this list).
- Package the solution using Docker.
- Deploy the model inference server to the cloud.
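As a rough preview of the second bullet, the sketch below shows a minimal FastAPI inference server that loads the pickled model from the earlier snippet and exposes a single prediction endpoint. The route name, request schema, and file names are illustrative assumptions; packaging with Docker and the cloud deployment itself are walked through during the session.

```python
# Minimal sketch of a FastAPI inference server for the pickled model above.
# Run locally with: uvicorn app:app --reload
# (the file name "app.py", the /predict route, and the request schema
# are illustrative assumptions)
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the model once at startup rather than on every request.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


class Features(BaseModel):
    values: list[float]  # e.g. the four iris measurements from the toy example


@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]
    return {"prediction": int(prediction)}
```

The more involved Hugging Face Transformers-based model mentioned above can be served with the same pattern, loading a transformers pipeline at startup instead of unpickling a file.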
Upcoming Sessions
| Date | Time | Where | Registration |
|---|---|---|---|
| Sunday, 28 Jan 2024 | 2 - 3:30 PM 🌏 | Online | Register |
Please note:
- All times above are expressed in SGT. Click on the 🌏 icon next to sessions of interest to get your location’s corresponding date and time.
- For reference, 2 PM SGT is 11:30 AM (Bangalore), 11 AM (Lahore), and 1 PM (Jakarta).
Other Dates & Times
👋 I am interested but don’t see a date or time that works for me.
You can indicate your preferred date & time when you register.
What’s Next?
What’s next after attending this session?
Check out the complete list of upcoming sessions.
Alternatively, if you are a hands-on creator, check out the upcoming Hands-On Learning Sessions.