Vrjb committed · Commit fd7dc9c · verified · 1 Parent(s): 0cb0a4d

Model save

Files changed (4):
  1. README.md +16 -11
  2. generation_config.json +0 -1
  3. model.safetensors +1 -1
  4. training_args.bin +2 -2
README.md CHANGED
@@ -4,6 +4,8 @@ license: bsd-3-clause
  base_model: Salesforce/blip-image-captioning-base
  tags:
  - generated_from_trainer
+ metrics:
+ - bleu
  model-index:
  - name: BLIP_Captioning
    results: []
@@ -16,7 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0001
+ - Loss: 1.5714
+ - Bleu: 1.0
 
  ## Model description
 
@@ -39,25 +42,27 @@ The following hyperparameters were used during training:
  - train_batch_size: 16
  - eval_batch_size: 1
  - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 32
  - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
+ - lr_scheduler_type: cosine
  - lr_scheduler_warmup_steps: 1000
- - num_epochs: 5
+ - num_epochs: 3
  - mixed_precision_training: Native AMP
+ - label_smoothing_factor: 0.1
 
  ### Training results
 
- | Training Loss | Epoch  | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 0.0023        | 0.9634 | 1500 | 0.0006          |
- | 0.0012        | 1.9268 | 3000 | 0.0014          |
- | 0.0007        | 2.8902 | 4500 | 0.0005          |
- | 0.0006        | 3.8536 | 6000 | 0.0001          |
- | 0.0002        | 4.8170 | 7500 | 0.0001          |
+ | Training Loss | Epoch | Step | Validation Loss | Bleu |
+ |:-------------:|:-----:|:----:|:---------------:|:----:|
+ | 1.3592        | 1.0   | 779  | 1.6711          | 1.0  |
+ | 1.3583        | 2.0   | 1558 | 1.5660          | 1.0  |
+ | 1.3582        | 3.0   | 2337 | 1.5714          | 1.0  |
 
 
  ### Framework versions
 
  - Transformers 4.55.4
- - Pytorch 2.5.1+cu121
+ - Pytorch 2.7.1+cu118
+ - Datasets 4.1.1
  - Tokenizers 0.21.4
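The hyperparameters listed in the updated card map directly onto `TrainingArguments` from `transformers`. Below is a minimal sketch of that mapping, assuming the standard Trainer API; the output directory is a placeholder, and the learning rate is left at its default because the hunk above does not show it.

```python
from transformers import TrainingArguments

# Only values shown in the card come from the commit; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="BLIP_Captioning",     # placeholder, not stated in the diff
    per_device_train_batch_size=16,   # train_batch_size: 16
    per_device_eval_batch_size=1,     # eval_batch_size: 1
    seed=42,
    gradient_accumulation_steps=2,    # 16 * 2 = total_train_batch_size of 32
    optim="adamw_torch",              # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=3,
    fp16=True,                        # Native AMP mixed precision
    label_smoothing_factor=0.1,
)
```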
generation_config.json CHANGED
@@ -1,5 +1,4 @@
  {
-   "_from_model_config": true,
    "bos_token_id": 30522,
    "eos_token_id": 2,
    "pad_token_id": 0,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f3dffd7a09ced1ecb17d80104b13c77c72113adcd64a09bfb0d4de26a830d703
+ oid sha256:49434326890cee392c50acd200aece356e5151a96e6b0d70be91c2566ab31d40
  size 989717056
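The weights blob keeps the same byte size but gets a new object id, i.e. the parameters were retrained without changing the architecture. A minimal sketch of loading the updated checkpoint and generating a caption, assuming the repo id `Vrjb/BLIP_Captioning` (inferred, may differ) and a local test image:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Repo id and image path are assumptions for illustration.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Vrjb/BLIP_Captioning")

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```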
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6d99c7965f1c664e8d8e2ea3839b612b59430ca9373068bcf20a3a487484e554
- size 5560
+ oid sha256:758a6760b4a0d7e227f3efb63e2032af6cb151203ef84894c1c0489f258df79d
+ size 5969
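`training_args.bin` is a pickled `TrainingArguments` object, which is why its size grows slightly as fields were added. A minimal sketch of checking that the stored arguments match the card, assuming a local download of the file and a PyTorch version recent enough to require `weights_only=False` for pickled objects:

```python
import torch

# Pickled objects need weights_only=False on PyTorch >= 2.6, and unpickling
# requires transformers to be installed; only load files from trusted sources.
args = torch.load("training_args.bin", weights_only=False)
print(args.lr_scheduler_type)        # expected: cosine
print(args.num_train_epochs)         # expected: 3
print(args.label_smoothing_factor)   # expected: 0.1
```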