This is a quantized version of DistilGPT-2 optimized for browser deployment. The int8 weights shrink the file to roughly 120 MB, down from the 317 MB of the original model.
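The size reduction comes from storing weights as 8-bit integers instead of 32-bit floats. A minimal sketch of symmetric per-tensor int8 quantization (illustrative only; the exact export pipeline for this checkpoint is not described here, and the 120 MB figure reflects that some tensors and metadata typically stay in higher precision):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    # Symmetric per-tensor quantization: map float32 weights to
    # [-127, 127] with a single scale factor.
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float32 weights at inference time.
    return q.astype(np.float32) * scale

# A hypothetical 768x768 weight matrix (DistilGPT-2's hidden size is 768).
w = np.random.randn(768, 768).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is 4x smaller per weight than float32 (1 byte vs 4 bytes).
print(w.nbytes // q.nbytes)  # 4
```

The round-trip error of this scheme is bounded by half the scale factor per weight, which is why int8 models usually stay close to the original model's quality.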
Model tree for GhostScientist/distilgpt2-int8-browser-completion
Base model: distilbert/distilgpt2