ChatGCLM-270M

A high-performance language model architecture.

Overview

ChatGCLM is a generative language model that departs from the standard Transformer architecture by using a hybrid of local and global convolutions. By computing the global convolutions with Fast Fourier Transforms (FFT), ChatGCLM achieves a receptive field that spans the entire sequence at a fraction of the computational cost of standard attention mechanisms.
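As a rough illustration, a global convolution of this kind can be computed in O(L log L) time via the convolution theorem. The sketch below shows the general technique in PyTorch; it is not necessarily the repository's exact layer:

```python
import torch

def global_conv_fft(x, k):
    # x: (batch, seq_len, dim) activations; k: (seq_len, dim) learned kernel.
    L = x.shape[1]
    n = 2 * L  # zero-pad so circular FFT convolution matches linear convolution
    x_f = torch.fft.rfft(x, n=n, dim=1)
    k_f = torch.fft.rfft(k, n=n, dim=0)
    y = torch.fft.irfft(x_f * k_f, n=n, dim=1)  # pointwise product in frequency space
    return y[:, :L]  # O(L log L), versus O(L^2) for full self-attention
```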

The architecture is designed for efficiency, speed, and high-quality generation, featuring a custom vocabulary reduction system that optimizes the embedding space for specific datasets.
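As a sketch of what dataset-specific vocabulary reduction can look like (the repository's actual mechanism may differ, and the file name below is illustrative): keep only the token ids that occur in the training data and remap them to a dense range, shrinking the embedding table.

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")
text = open("data/train.txt", encoding="utf-8").read()  # illustrative file name
ids = enc.encode(text)

used = sorted(set(ids))                              # token ids that actually occur
to_reduced = {tid: i for i, tid in enumerate(used)}  # full id -> compact id
to_full = {i: tid for tid, i in to_reduced.items()}  # compact id -> full id

reduced_ids = [to_reduced[t] for t in ids]           # train embeddings on compact ids
print(f"full vocab: {enc.n_vocab}, reduced: {len(used)}")
```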

This repository provides the implementation for training and sampling from the ChatGCLM-270M model, which consists of 270 million parameters.

The model has the full vocabulary of GPT-2, so it can be fine-tuned on any dataset that GPT-2 can be fine-tuned on.
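Concretely, the GPT-2 byte-pair-encoding vocabulary contains 50,257 tokens and can encode arbitrary text. A quick check with the tiktoken library (shown for illustration; the repository may tokenize differently):

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")
print(enc.n_vocab)                     # 50257 tokens in the GPT-2 vocabulary
print(enc.encode("Hello, ChatGCLM!"))  # the corresponding GPT-2 token ids
```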

πŸ“¦ Installation

Download this repository and extract it.
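The scripts also need their Python dependencies installed. For a model like this, PyTorch (and a tokenizer library such as tiktoken) is the usual requirement, but this list is an assumption; check the repository for an exact requirements file:

```
pip install torch tiktoken
```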


Usage

1. Training the Model

Place your .txt data files in the data/ directory and run:

python train_chatgclm.py

This script builds the vocabulary and trains the foundation model. For example, the expected layout before training (file names are illustrative) is shown below.
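```
data/
├── corpus_part1.txt
└── corpus_part2.txt
```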

2. Sampling from the Model

Run sample.py to generate text with the model:

python sample.py
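Under the hood, sampling from an autoregressive model typically follows a loop like the one below. This is a generic sketch with temperature and top-k filtering, not necessarily sample.py's exact implementation; `model` and the starting `ids` tensor are assumed to come from the trained checkpoint and tokenizer:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate(model, ids, max_new_tokens=100, temperature=0.8, top_k=50):
    # ids: (batch, seq_len) prompt token ids;
    # model is assumed to return (batch, seq_len, vocab) logits.
    for _ in range(max_new_tokens):
        logits = model(ids)[:, -1, :] / temperature      # next-token logits
        if top_k is not None:
            v, _ = torch.topk(logits, top_k)
            logits[logits < v[:, [-1]]] = -float("inf")  # drop all but the top-k
        probs = F.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=1)           # append and continue
    return ids
```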

Fine-tuning

You can fine-tune the model by resuming training from a checkpoint, switching to a different dataset, and adjusting hyperparameters such as the learning rate and batch size.
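A minimal sketch of what resuming from a checkpoint looks like in PyTorch; the path and dictionary keys below are assumptions, not the repository's actual checkpoint format, and `model` and `optimizer` are the objects constructed by the training script:

```python
import torch

# Path and keys are assumptions; adapt them to the checkpoint the
# training script actually writes.
ckpt = torch.load("checkpoints/chatgclm_270m.pt", map_location="cpu")
model.load_state_dict(ckpt["model"])          # restore model weights
optimizer.load_state_dict(ckpt["optimizer"])  # restore optimizer state
for group in optimizer.param_groups:
    group["lr"] = 3e-5                        # e.g. a lower fine-tuning LR
```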

Built with ❤️ by AG
