# Medical-ChatBot-DPO LoRA Model

## Model Overview

A Direct Preference Optimization (DPO) LoRA adapter trained on top of the SFT model and aligned with human preference data.

- **Base model**: LLaMA-3.1-8B after the SFT stage (sft-full-multi)
- **Training stage**: Direct Preference Optimization (DPO)
- **Training status**: 🚧 Training in progress
## 1. Dataset

**Dataset**: [bootscoder/Medical-ChatBot-DPO](https://huggingface.co/datasets/bootscoder/Medical-ChatBot-DPO)

See the dataset card linked above for details.
## 2. Training Pipeline

### Tech Stack

- **DeepSpeed**: ZeRO Stage 1 distributed training
- **PEFT**: parameter-efficient fine-tuning with LoRA
- **BitsAndBytes**: 4-bit NF4 quantization + double quantization
- **Flash Attention 2**: faster attention computation
- **TRL**: preference optimization with DPOTrainer

### Training Steps

1. **Load the SFT model**: start from the full model produced by the SFT stage
2. **Process preference data**: build chosen/rejected response pairs
3. **DPO training**: align on preferences with the sigmoid loss (see the sketch after this list)
4. **Distributed training**: DeepSpeed across 8 GPUs for 2 epochs
5. **Save the model**: store the LoRA adapter weights
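The steps above correspond to a standard TRL `DPOTrainer` setup. The sketch below is illustrative only: it assumes recent `trl`/`peft`/`transformers` releases where `DPOConfig` carries the DPO-specific arguments, and the model path, dataset split, and output directory are placeholders rather than the exact training script behind this repository.

```python
# Illustrative QLoRA + DPO setup; paths and exact argument names may differ
# from the original training script and across library versions.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import DPOConfig, DPOTrainer

model_path = "/path/to/sft-full-multi"  # placeholder path to the SFT full model

# 4-bit NF4 quantization with BF16 compute (QLoRA) and Flash Attention 2
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Preference pairs with "prompt", "chosen", "rejected" columns
train_dataset = load_dataset("bootscoder/Medical-ChatBot-DPO", split="train")

peft_config = LoraConfig(
    r=64,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

args = DPOConfig(
    output_dir="dpo-lora",
    beta=0.1,
    loss_type="sigmoid",
    max_prompt_length=128,
    max_completion_length=128,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    num_train_epochs=2,
    learning_rate=5e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    bf16=True,
    gradient_checkpointing=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with a PEFT adapter, the frozen base model acts as the implicit reference
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
trainer.save_model("dpo-lora")  # saves the LoRA adapter weights
```

Launched with DeepSpeed across the 8 GPUs listed in the hardware section below, this setup yields the effective batch size of 4 × 8 × 8 = 256.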
## 3. Configuration

### Hardware

```
GPU: 8 × NVIDIA A5000 (24GB VRAM)
GPUs used: 0,1,2,4,5,7,8,9
Distributed: DeepSpeed ZeRO Stage 1
```
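A ZeRO Stage 1 configuration consistent with the settings above can be passed to the trainer as a plain dict (or an equivalent JSON file). This is a minimal sketch, not the exact DeepSpeed config used for this run:

```python
# Minimal DeepSpeed ZeRO Stage 1 config matching the hyperparameters below;
# pass it via DPOConfig(deepspeed=ds_config) or save it as a JSON file.
ds_config = {
    "zero_optimization": {"stage": 1},
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
}
```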
### Training Hyperparameters

```yaml
seq_length: 512                  # sequence length
max_prompt_length: 128           # maximum prompt length
max_completion_length: 128       # maximum completion length
batch_size: 4                    # per-GPU batch size
gradient_accumulation_steps: 8   # gradient accumulation
effective_batch_size: 256        # 4 per GPU × 8 accumulation × 8 GPUs
num_train_epochs: 2              # number of epochs
learning_rate: 5e-6              # learning rate (lower than SFT)
lr_scheduler_type: cosine        # cosine schedule
warmup_ratio: 0.05               # warmup ratio
bf16: true                       # BF16 mixed precision
gradient_checkpointing: true     # gradient checkpointing
beta: 0.1                        # DPO temperature parameter
loss_type: sigmoid               # sigmoid loss
```
### QLoRA Configuration

**Quantization** (as a `BitsAndBytesConfig`):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit quantization
    bnb_4bit_quant_type="nf4",              # NF4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # BF16 compute
    bnb_4bit_use_double_quant=True,         # double quantization
)
```

**LoRA** (as a `LoraConfig`):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                                 # LoRA rank
    lora_alpha=8,                         # scaling factor (alpha/r = 0.125)
    target_modules=["q_proj", "k_proj"],  # Q and K projection layers
    bias="none",                          # biases are not trained
)
# Trainable parameters: ~54MB of adapter weights
```

**DPO characteristics**:
- **Beta (0.1)**: controls how tightly the policy is tied to the reference; a smaller beta lets the model deviate further and learn preferences more aggressively
- **Sigmoid loss**: a stable loss function well suited to preference learning
- **No separate reference model**: an implicit reference model is used instead, saving GPU memory
- **Memory savings**: ~90% compared with full-parameter training
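For reference, the sigmoid loss mentioned above is the standard DPO objective (Rafailov et al., 2023), where $x$ is the prompt, $y_w$/$y_l$ are the chosen/rejected responses, $\sigma$ is the logistic function, $\pi_{\text{ref}}$ is the (implicit) reference model, and $\beta$ is the temperature set to 0.1 here:

$$
\mathcal{L}_{\text{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$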
## 4. Peak GPU Memory Usage

**Per-GPU peak**: ____________ GB
**8-GPU total**: ____________ GB
## 5. Expected Model Behavior

### Improvements over the SFT Model

**Improvements**:
- Generations that better match human preferences
- Higher answer quality and safety
- Less unnecessary verbosity and fewer inappropriate responses
- Better at declining to answer questions it is uncertain about
- A friendlier, more professional output style

**Compared with base LLaMA-3.1-8B**:
- Medical-domain knowledge (CPT) + instruction following (SFT) + preference alignment (DPO)
- A complete alternative to RLHF (DPO in place of PPO)
- Safer, more controllable, and closer to user expectations

**Limitations**:
- Depends on preference-data quality and may inherit biases from that data
- Limited improvement in scenarios the preference data does not cover
- Human review is still recommended in real-world use
## Usage

### Loading the Model

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the SFT full model
base_model = AutoModelForCausalLM.from_pretrained(
    "/path/to/sft-full-multi",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Load the DPO LoRA adapter
model = PeftModel.from_pretrained(base_model, "/path/to/dpo-lora")
tokenizer = AutoTokenizer.from_pretrained("/path/to/dpo-lora")

# Merge the adapter into the base model (optional)
model = model.merge_and_unload()
```
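If GPU memory is tight, the base model can instead be loaded in 4-bit, mirroring the QLoRA settings used during training. This is an optional sketch that reuses the `bnb_config` defined in the QLoRA section; skip the `merge_and_unload()` step when the base model is quantized.

```python
# Optional: load the base model in 4-bit for low-memory inference
# (reuses the BitsAndBytesConfig shown in the QLoRA section above).
base_model = AutoModelForCausalLM.from_pretrained(
    "/path/to/sft-full-multi",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "/path/to/dpo-lora")
# Do not merge the adapter into a 4-bit base; run inference with the adapter attached.
```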
### Chat Example

```python
# Build the prompt in the training format
SYSTEM_PROMPT = "You are a Medical Chatbot, you should friendly answer the question."

def format_prompt(question):
    return f"###System: {SYSTEM_PROMPT}\n###Question: {question}\n###Answer: "

# Generate an answer
question = "What should I do if I have a cold?"
prompt = format_prompt(question)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
    repetition_penalty=1.1
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
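Note that `outputs[0]` includes the prompt tokens, so the decoded text echoes the prompt. To print only the generated answer, decode just the new tokens (an optional tweak, not part of the original example):

```python
# Decode only the newly generated tokens, dropping the echoed prompt
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(answer)
```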
### Inference Tuning Suggestions

```python
# Use more conservative sampling parameters for higher output quality
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.6,          # less randomness
    top_p=0.85,               # more focused sampling
    do_sample=True,
    repetition_penalty=1.15,  # discourage repetition
    no_repeat_ngram_size=3    # avoid repeated n-grams
)
```
## Model Files

```
dpo-lora/
├── adapter_config.json          # LoRA configuration
├── adapter_model.safetensors    # LoRA weights (~54MB)
├── special_tokens_map.json      # special-token mapping
├── tokenizer.json               # tokenizer
└── tokenizer_config.json        # tokenizer configuration
```
## Training Progress

**Current status**: 🚧 Training in progress

Once training is complete, the model will cover the full CPT → SFT → DPO pipeline, forming a complete medical dialogue model.
## License

Released under the [Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
## Notes

⚠️ **Medical disclaimer**:
- This model is for research and educational use only
- It is not a substitute for professional medical advice
- Consult a qualified medical professional for any medical decision
- Model outputs may contain errors or incomplete information