This model was made with the Micro Distillery app by webXOS, available at:
webxos.netlify.app/MICROD
- **Model size**: 42M parameters
- **Model type**: micro-distill-grpo-vae
Model Description
This is a distilled language model trained using Group Relative Policy Optimization (GRPO) with VAE filtering.
MICROD v1.0 (micro-distill-grpo-vae) is a small template model designed to be built upon for custom, ground-up builds. It is distilled into a
small set of files that users can use as a template for their own agents, and it is designed for educational use and micro-scaling.
Use MICROD v1.0 (micro-distill-grpo-vae) in your own custom projects and train it from the ground up.
The model's architecture details further underscore its educational niche: a hidden size of 512, 8 layers, 8 attention heads, a vocabulary of 50,257 tokens,
and a maximum sequence length of 1024. Licensed under Apache 2.0, it is openly available for modification, and its small footprint allows quantization,
making it runnable on modest hardware such as CPUs, or even in the browser via TensorFlow.js integration.
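To illustrate the quantization point above, here is a minimal sketch of post-training dynamic quantization, assuming the distilled weights have been loaded into a PyTorch module. The stand-in model is hypothetical (the app itself targets browser inference via TensorFlow.js); it shows the general technique only, not the MICROD export format.

```python
# Minimal sketch: dynamic int8 quantization for CPU inference.
# The stand-in module below is hypothetical; it only mirrors the
# hidden size (512) from the model card.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.GELU(),
    nn.Linear(2048, 512),
)
model.eval()

# Convert the Linear layers to int8 weights with dynamic activation quantization.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)  # Linear layers are replaced by DynamicQuantizedLinear
```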
Model Details
- **Model type**: micro-distill-grpo-vae
- **Model size**: 42M parameters
- **Language**: English
- **License**: Apache 2.0
Training Methodology
- **GRPO (Group Relative Policy Optimization)**: 8 groups (see the sketch below)
- **VAE Filtering**: 32D latent space
- **KV-Cache Reuse**: 512 cache size
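As a rough illustration of the GRPO step listed above, the sketch below computes group-relative advantages and a KL-penalized policy loss for a group of 8 sampled completions. The rewards, sequence length, and KL coefficient are illustrative placeholders, not the app's actual values.

```python
# Minimal sketch of a GRPO-style update signal: advantages are computed
# relative to the group of completions sampled for the same prompt.
# Group size 8 matches the model card; everything else is illustrative.
import numpy as np

GROUP_SIZE = 8
rewards = np.random.rand(GROUP_SIZE)                  # one reward per completion
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Per-token log-probs under the current policy and a frozen reference model
# (shape [GROUP_SIZE, seq_len]); random here purely for illustration.
logp_policy = np.log(np.random.rand(GROUP_SIZE, 16))
logp_ref = np.log(np.random.rand(GROUP_SIZE, 16))

beta = 0.04                                           # illustrative KL penalty coefficient
kl = logp_policy - logp_ref                           # simple per-token KL estimate

# Advantage-weighted log-probs minus the KL penalty, averaged over the group.
loss = -(advantages[:, None] * logp_policy - beta * kl).mean()
print(f"toy GRPO loss: {loss:.4f}")
```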
Architecture Details
- **Hidden size**: 512
- **Number of layers**: 8
- **Attention heads**: 8
- **Vocabulary size**: 50,257
- **Maximum sequence length**: 1024
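The numbers above can be expressed as a GPT-style configuration. The sketch below uses Hugging Face transformers' GPT2Config as a stand-in; the actual MICROD export format may differ, and the resulting parameter count depends on details such as embedding tying, so it will not necessarily land exactly on the quoted 42M.

```python
# Minimal sketch of a GPT-style config matching the architecture details above.
# GPT2Config/GPT2LMHeadModel are stand-ins, not the MICROD export format.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=50257,
    n_positions=1024,   # maximum sequence length
    n_embd=512,         # hidden size
    n_layer=8,
    n_head=8,
)
model = GPT2LMHeadModel(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters (approximate)")
```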
Usage
- **Model Distillation Training**: Simulate GRPO optimization with VAE filtering for small LLMs (42M-345M params).
- **Policy Experimentation**: Test group sizes, KL penalties, and cache reuse for RLHF-like training.
- **VAE Filtering**: Apply latent-space compression to improve distillation quality (see the sketch after this list).
- **Sandbox Testing**: Execute safe Python code with feedback masking.
- **Export & Deployment**: Generate deployable models for inference in various frameworks.
- **Offline Usage**: The PWA supports offline training simulation and exports.
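The VAE-filtering item above can be sketched as follows: compress hidden states into a 32-dimensional latent space and keep only the samples the VAE reconstructs well, so noisy examples are dropped before distillation. The input dimension, batch size, and threshold below are illustrative assumptions; only the 32D latent matches the card.

```python
# Minimal sketch of VAE filtering: keep only samples with low reconstruction
# error in a 32-D latent space. Dimensions other than latent_dim=32 are illustrative.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, input_dim=512, latent_dim=32):
        super().__init__()
        self.encoder = nn.Linear(input_dim, latent_dim * 2)  # outputs mu and log-variance
        self.decoder = nn.Linear(latent_dim, input_dim)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.decoder(z), mu, logvar

vae = TinyVAE()
hidden_states = torch.randn(64, 512)                 # e.g., teacher hidden states
recon, mu, logvar = vae(hidden_states)
errors = ((recon - hidden_states) ** 2).mean(dim=-1)

# Keep samples whose reconstruction error is below the batch median;
# these would feed the distillation step.
keep = errors < errors.median()
print(f"kept {keep.sum().item()} of {len(hidden_states)} samples")
```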
MICROD_v1 may not rival larger models in breadth, but its focus on accessibility and browser-based AI development opens doors for
innovators working in the Small Language Model space.
- Prototype without internet
- Offline simulations in a black box
- Simple story generators
- Custom agentic development
- Train on custom data
- Experiment with max_length
- AI agents for custom games
Educational Fine-Tuning
- Prepare datasets
- Fine-tune via the GRPO Trainer
- Evaluate Python code in the sandbox (see the sketch after this list)
- Create task-specific variants like code tutors
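The sandbox-evaluation step above can be sketched with the standard library alone: execute generated Python with restricted builtins and capture stdout as feedback. This does not reproduce the app's own sandbox or its feedback masking; the whitelist and output cap below are illustrative assumptions.

```python
# Minimal sketch of sandboxed evaluation of generated Python code:
# restricted builtins, captured stdout, errors returned as feedback.
import contextlib
import io

SAFE_BUILTINS = {"print": print, "range": range, "len": len, "sum": sum}

def run_in_sandbox(code: str, max_chars: int = 10_000) -> str:
    """Execute code with restricted builtins and return captured stdout."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {"__builtins__": SAFE_BUILTINS}, {})
    except Exception as exc:                      # surface errors as feedback
        return f"ERROR: {exc}"
    return buffer.getvalue()[:max_chars]

# Example: evaluate a model-generated snippet.
print(run_in_sandbox("print(sum(range(10)))"))   # -> 45
```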
Overview
In terms of applications, small distilled models like MICROD_v1 align with broader trends in SLMs, which prioritize efficiency, accessibility,
and specialization over the scale of large language models (LLMs). For example, they can be fine-tuned for targeted tasks such as customer support
chatbots, where quick responses on edge devices are crucial, or educational tools for teaching natural language processing concepts. In healthcare,
distilled models might power privacy-focused symptom checkers on mobile apps, avoiding data transmission to cloud servers. Automation and control
systems benefit from their low latency, as surveyed in research on tiny language models (TLMs), which use techniques like knowledge distillation and
quantization to enable on-device inference for robotics or IoT devices.