Datasets · Modalities: Text · Formats: parquet · Libraries: Datasets, pandas

SaraPieri committed · Commit 85739ee · 0 parents

Initial commit
.gitattributes ADDED
@@ -0,0 +1,60 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,314 @@
+ ---
+ dataset_info:
+   features:
+   - name: split
+     dtype: string
+   - name: image_id
+     dtype: string
+   - name: file_name
+     dtype: string
+   - name: image_info
+     struct:
+     - name: data_source
+       dtype: string
+     - name: file_name
+       dtype: string
+     - name: height
+       dtype: int64
+     - name: id
+       dtype: string
+     - name: width
+       dtype: int64
+   - name: caption_info
+     struct:
+     - name: caption
+       dtype: string
+     - name: caption_ann
+       dtype: string
+     - name: id
+       dtype: int64
+     - name: image_id
+       dtype: string
+     - name: label_matched
+       list:
+       - name: mask_ids
+         sequence: int64
+       - name: txt_desc
+         dtype: string
+     - name: labels
+       sequence: string
+   - name: mask_annotations
+     list:
+     - name: area
+       dtype: int64
+     - name: bbox
+       sequence: float64
+     - name: category_id
+       dtype: int64
+     - name: id
+       dtype: int64
+     - name: image_id
+       dtype: string
+     - name: iscrowd
+       dtype: int64
+     - name: segmentation
+       struct:
+       - name: counts
+         dtype: string
+       - name: size
+         sequence: int64
+     - name: thing_or_stuff
+       dtype: string
+   - name: categories
+     list:
+     - name: id
+       dtype: int64
+     - name: name
+       dtype: string
+   splits:
+   - name: train
+     num_bytes: 29443350
+     num_examples: 2070
+   - name: val
+     num_bytes: 4782919
+     num_examples: 420
+   - name: test
+     num_bytes: 10976834
+     num_examples: 980
+   download_size: 25273455
+   dataset_size: 45203103
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+   - split: val
+     path: data/val-*
+   - split: test
+     path: data/test-*
+ ---
+ # PanoCaps (PANORAMA): Panoptic grounded captioning via mask-guided refinement
+
+ [![Paper](https://img.shields.io/badge/ArXiv-Paper-brown)]()
+ [![Code](https://img.shields.io/badge/GitHub-Link-orange)](https://github.com/sarapieri/panorama_grounding)
+ [![Data](https://img.shields.io/badge/HuggingFace-Data-blue)](https://huggingface.co/datasets/HuggingSara/PanoCaps)
+ [![Website](https://img.shields.io/badge/Web-Page-purple)](https://www.di.ens.fr/willow/research/panorama/)
+ <!-- [![Model](https://img.shields.io/badge/HuggingFace-Model-green)]() -->
+
+ <p align="center">
+   <img src="https://www.di.ens.fr/willow/research/panorama/resources/panorama_teaser.jpg"
+        width="100%"
+        alt="Panorama teaser image" />
+ </p>
+
+ PanoCaps is a unified dataset for **panoptic grounded captioning**: a model must generate a full-scene caption and ground every mentioned entity (things and stuff) with pixel-level masks.
+
+ Every caption:
+ - Is **human-written**
+ - Covers the **entire visible scene**
+ - Contains **rich open-vocabulary descriptions** beyond category labels
+ - Includes **inline grounding tags** referring to segmentation masks
+ - Supports **one-to-many** and **many-to-one** text ↔ mask mappings
+
+ This makes PanoCaps suitable for training and evaluating **vision–language models** that require both detailed scene understanding and fine-grained spatial grounding.
+
+ The repository includes:
+
+ 1. **Raw annotations** in JSON format (`annotations/`) → best for **training & evaluation**
+ 2. **A processed Hugging Face dataset** → best for **visualization & inspection**
+
+ This dataset is intended **exclusively for research and non-commercial use**.
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ This benchmark supports **panoptic grounded captioning**, a task that requires models to generate long-form, descriptive captions for the entire scene and link all mentioned entities (things and stuff) to pixel-level masks. Masks follow standard **COCO-style panoptic annotations**.
+
+ The dataset comprises **3,470 images** with a total of **34K panoptic regions**, averaging **~9 grounded entities per image**. The human-written captions are designed for maximum quality and detail:
+ * **Comprehensive:** They cover the entire visible scene.
+ * **Open-Vocabulary:** Entity descriptions extend beyond simple category labels.
+ * **Fully Grounded:** In-text markers and explicit mapping structures (`label_matched`) link text spans to masks, ensuring **>99% of regions are grounded**.
+
+
+ ### Images
+
+ **Images are *not* included** in this repository.
+
+ To use the dataset, download the original images from the source datasets:
+
+ | Dataset | Data Download Link | Associated Publication |
+ |---------|--------------------|------------------------|
+ | ADE20K  | [ADE20K Download](https://groups.csail.mit.edu/vision/datasets/ADE20K/) | [ADE20K Paper](https://arxiv.org/abs/1608.05442) |
+ | COCONut | [COCONut GitHub](https://github.com/bytedance/coconut_cvpr2024) | [COCONut Paper](https://arxiv.org/abs/2404.08639) |
+ | VIPSeg  | [VIPSeg GitHub](https://github.com/VIPSeg-Dataset/VIPSeg-Dataset/) | [VIPSeg Paper](https://openaccess.thecvf.com/content/CVPR2022/html/Miao_Large-Scale_Video_Panoptic_Segmentation_in_the_Wild_A_Benchmark_CVPR_2022_paper.html) |
+
+ The JSON annotations reference these images by their original `file_name` and `id`.
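Since the images live in the source datasets, a loader has to map each record to a local copy. A minimal sketch, assuming hypothetical local directories (the `IMAGE_ROOTS` paths below are placeholders, not part of the dataset):

```python
import os

# Placeholder roots: point these at wherever you extracted each source dataset.
IMAGE_ROOTS = {
    "ADE20K": "images/ade20k",
    "COCONut": "images/coconut",
    "VIPSeg": "images/vipseg",
}

def resolve_image_path(image_info):
    """Return a local path for an `images[*]` entry from the JSON files."""
    root = IMAGE_ROOTS[image_info["data_source"]]
    return os.path.join(root, image_info["file_name"])

print(resolve_image_path(
    {"file_name": "00000006.jpg", "data_source": "ADE20K", "id": "00000006"}
))  # e.g. images/ade20k/00000006.jpg
```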
+
+ ### Repository Structure
+
+ <details>
+ <summary>Show Repository Structure</summary>
+ <pre>
+ PanoCaps/
+ ├── 📁 annotations/
+ │   ├── 📄 test_caption.json
+ │   ├── 📄 test_mask.json
+ │   ├── 📄 train_caption.json
+ │   ├── 📄 train_mask.json
+ │   ├── 📄 val_caption.json
+ │   └── 📄 val_mask.json
+ ├── 📁 data/  (parquet/HF version)
+ └── 📄 README.md
+ </pre>
+ </details>
+
+ ### Recommended Usage
+
+ This dataset is provided in two complementary formats:
+
+ ### **1. Hugging Face Dataset Format (recommended for inspection & visualization)**
+ The `train`, `val`, and `test` splits uploaded to the Hugging Face Hub combine **captioning** and **panoptic mask** information into a **single unified entry per image**. This format is ideal for browsing samples interactively in the Dataset Viewer or for quick experimentation.
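One way to consume a unified entry is sketched below. The stand-in record mirrors the schema in the card header; the actual `load_dataset` call (repo id taken from the Data badge above) is shown only in comments because it requires network access.

```python
# With the `datasets` library installed you would obtain `example` via:
#   from datasets import load_dataset
#   ds = load_dataset("HuggingSara/PanoCaps", split="val")
#   example = ds[0]
# Here we use a minimal hand-written stand-in with the same fields.
example = {
    "split": "val",
    "image_id": "00000006",
    "file_name": "00000006.jpg",
    "image_info": {"data_source": "ADE20K", "height": 973, "width": 512},
    "caption_info": {
        "caption": "The image shows a small, brightly lit bathroom...",
        "label_matched": [{"mask_ids": [0], "txt_desc": "white tiled wall"}],
    },
    "mask_annotations": [{"id": 0, "area": 214858, "thing_or_stuff": "stuff"}],
}

def summarize(ex):
    """Return (image_id, number of grounded text spans, number of masks)."""
    return (
        ex["image_id"],
        len(ex["caption_info"]["label_matched"]),
        len(ex["mask_annotations"]),
    )

print(summarize(example))  # ('00000006', 1, 1)
```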
+
+ ### **2. Original COCO-Style JSON Format (recommended for training & evaluation)**
+ Raw annotations are provided under `annotations/` as pairs of caption and mask files (e.g., `train_caption.json` / `train_mask.json`).
+
+ These follow the original COCO-style structure and are best suited for:
+ - Model training
+ - Model evaluation
+ - Direct integration into COCO-based pipelines
+
+ Caption and mask files can be matched using the shared `image_id` / `id` fields in `images[*]` and `annotations[*]`.
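That matching step can be sketched as follows; the inline records are trimmed, hypothetical stand-ins for the real JSON contents (in practice you would `json.load` the two files for a split).

```python
from collections import defaultdict

# Trimmed stand-ins for train_caption.json / train_mask.json contents.
caption_data = {"annotations": [
    {"id": 0, "image_id": "00000006",
     "caption": "The image shows a small, brightly lit bathroom..."},
]}
mask_data = {"annotations": [
    {"id": 0, "image_id": "00000006", "area": 214858},
    {"id": 5, "image_id": "00000006", "area": 31002},  # placeholder area
]}

# Group mask annotations by image_id, then attach them to each caption entry.
masks_by_image = defaultdict(list)
for ann in mask_data["annotations"]:
    masks_by_image[ann["image_id"]].append(ann)

merged = [
    {**cap, "masks": masks_by_image[cap["image_id"]]}
    for cap in caption_data["annotations"]
]

print(len(merged[0]["masks"]))  # 2 masks linked to image 00000006
```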
184
+
185
+ ### Detailed COCO Format
186
+
187
+ <details>
188
+ <summary>Show Caption File Example (Structure + Single Entry)</summary>
189
+
190
+ ```javascript
191
+ {
192
+ "annotations": [
193
+ {
194
+ "caption": "The image shows a small, brightly lit bathroom dominated by a white tiled wall...",
195
+ // Clean natural-language caption
196
+ "caption_ann": "The image shows a small, brightly lit bathroom dominated by a <0:white tiled wall>...",
197
+ // Caption with grounded <mask_id:text> references
198
+ "label_matched": [
199
+ { "mask_ids": [0], "txt_desc": "white tiled wall" },
200
+ { "mask_ids": [5], "txt_desc": "white bathtub with chrome faucets" }
201
+ // ...
202
+ ],
203
+ // Mapping text spans → one or more mask IDs
204
+ // Masks may appear multiple times with different descriptions
205
+ "id": 0,
206
+ // Caption annotation ID
207
+ "image_id": "00000006",
208
+ // Matches the images[*].id field
209
+ "labels": ["wall", "floor", "ceiling", "window", "curtain", "tub", "sink"]
210
+ // All unique semantic labels from the original annotations
211
+ }
212
+ ],
213
+ "images": [
214
+ {
215
+ "file_name": "00000006.jpg",
216
+ // Image filename
217
+ "height": 973,
218
+ "width": 512,
219
+ // Image resolution
220
+ "id": "00000006",
221
+ // Image identifier (matches annotation.image_id)
222
+ "data_source": "ADE20K"
223
+ // Image source
224
+ }
225
+ ]
226
+ }
227
+ ```
228
+ </details>
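The inline `<mask_id:text>` markers in `caption_ann` can be recovered with a small regex. This sketch assumes one integer id per marker, as in the example above; the `label_matched` field remains the authoritative text-to-mask mapping (including one-to-many cases).

```python
import re

# Grounding markers in `caption_ann` look like <mask_id:text span>.
MARKER = re.compile(r"<(\d+):([^>]+)>")

caption_ann = ("The image shows a small, brightly lit bathroom dominated by a "
               "<0:white tiled wall> next to a <5:white bathtub with chrome faucets>.")

def extract_groundings(text):
    """Return [(mask_id, span), ...] and the caption with markers stripped."""
    spans = [(int(m.group(1)), m.group(2)) for m in MARKER.finditer(text)]
    clean = MARKER.sub(lambda m: m.group(2), text)
    return spans, clean

spans, clean = extract_groundings(caption_ann)
print(spans)  # [(0, 'white tiled wall'), (5, 'white bathtub with chrome faucets')]
```

Stripping the markers should reproduce the plain `caption` field up to whitespace, which is a convenient consistency check when iterating the dataset.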
229
+
230
+ <details>
231
+ <summary>Show Mask File Example (Structure + Single Entry)</summary>
232
+
233
+ ```javascript
234
+ {
235
+ "annotations": [
236
+ {
237
+ "id": 0,
238
+ // Unique ID of this panoptic region
239
+ "image_id": "00000006",
240
+ // Links this region to the image and caption (matches images[*].id and caption image_id)
241
+ "category_id": 100,
242
+ // Semantic category ID (from the original annotations)
243
+ "segmentation": {
244
+ "size": [973, 512],
245
+ // Height and width of the full image (needed to decode the RLE mask)
246
+ "counts": "d1`1Zk0P2C=C<D=C=C6J=..."
247
+ // RLE-encoded mask in COCO panoptic format
248
+ },
249
+ "area": 214858,
250
+ // Number of pixels covered by this segment
251
+ "bbox": [0.0, 0.0, 511.0, 760.0],
252
+ // COCO-format bounding box [x, y, width, height]
253
+ "iscrowd": 0,
254
+ // 0 for normal segment, 1 if this region is a crowd
255
+ "thing_or_stuff": "stuff"
256
+ // Whether this region is an object-like "thing" or background-like "stuff"
257
+ }
258
+ ],
259
+ "images": [
260
+ {
261
+ "file_name": "00000006.jpg",
262
+ // Image file name (in the original dataset)
263
+ "height": 973,
264
+ "width": 512,
265
+ // Image resolution
266
+ "id": "00000006"
267
+ // Image identifier (matches annotations[*].image_id and caption image_id)
268
+ "data_source": "ADE20K"
269
+ // Image source
270
+ }
271
+ ],
272
+ "categories": [
273
+ {
274
+ "id": 1,
275
+ // Category ID (referenced by annotations[*].category_id)
276
+ "name": "object"
277
+ // Human-readable category name
278
+ }
279
+ ]
280
+ }
281
+ ```
282
+ </details>
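The `counts` field above uses COCO's compressed RLE; in practice you would decode it with `pycocotools` (`pycocotools.mask.decode`). Purely as a self-contained illustration of the underlying variable-length scheme (mirroring pycocotools' `rleToString`/`rleFrString`), here is a sketch; treat it as a reference for the format, not a drop-in replacement for the library.

```python
def rle_to_string(counts):
    """Pack run lengths into COCO's compressed-RLE string (sketch)."""
    s = []
    for i, x in enumerate(counts):
        if i > 2:                  # later runs are delta-coded vs. counts[i-2]
            x -= counts[i - 2]
        more = True
        while more:
            c = x & 0x1F           # emit 5 bits at a time
            x >>= 5                # Python's >> is arithmetic, as needed here
            more = (x != -1) if (c & 0x10) else (x != 0)
            if more:
                c |= 0x20          # continuation flag
            s.append(chr(c + 48))
    return "".join(s)

def rle_from_string(s):
    """Inverse of rle_to_string: recover the run lengths."""
    counts, p = [], 0
    while p < len(s):
        x, k, more = 0, 0, True
        while more:
            c = ord(s[p]) - 48
            x |= (c & 0x1F) << (5 * k)
            more = bool(c & 0x20)
            p += 1
            k += 1
            if not more and (c & 0x10):
                x |= -1 << (5 * k)  # sign-extend negative deltas
        if len(counts) > 2:
            x += counts[-2]         # undo the delta coding
        counts.append(x)
    return counts

# Run lengths alternate background/foreground in column-major pixel order,
# so the segment area is the sum of the odd-indexed runs.
runs = [3, 5, 10, 2, 7, 4]
assert rle_from_string(rle_to_string(runs)) == runs
print(sum(runs[1::2]))  # area covered by these runs: 11
```

A full decoder would expand the runs and reshape them into a mask of shape `size`; for real use, prefer the library implementation.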
283
+
284
+ ---
285
+
286
+ ## Curation and Annotation Details
287
+
288
+ PanoCaps was built to overcome the limitations of prior grounded captioning datasets (e.g., auto-generated captions, limited vocabulary, and incomplete grounding). Our goal was to create a resource where captions describe every meaningful region using open-vocabulary language, with explicit grounding for each referenced entity.
289
+ The creation process involved four stages:
290
+
291
+ 1. **Image Selection:** A diverse subset of images was curated from ADE20K, COCONut, and VIPSeg to ensure visual quality and suitability for dense grounding.
292
+ 2. **Captioning:** Professional annotators wrote long-form, fine-grained scene descriptions, highlighting attributes, relationships, and all visible entities.
293
+ 3. **Grounding:** Annotators tagged textual references with `<mask_id:description>` markers and produced **label_matched** structures that map text spans to one or more segmentation masks.
294
+ 4. **Validation:** A second QC stage verified the correctness of grounding IDs, completeness of region coverage, and annotation consistency.
295
+ **Data Producers:** The base panoptic masks were sourced from the original datasets (ADE20K, COCONut, VIPSeg). However, all **captions and grounding annotations** were created specifically for PanoCaps by paid professional annotators following internal guidelines.
296
+
297
+ ---
298
+
299
+ ## License (Research Only)
300
+ Because this repository merges, normalizes, and redistributes content from already existing datasets, the combined dataset is provided **strictly for research and non-commercial use**.
301
+ Commercial use is **not permitted**. Users must comply with the licenses of each original source dataset.
302
+
303
+ ---
304
+
305
+ ## Citation
306
+ If you find our work useful for your research, please consider citing our [paper]():
307
+ ```
308
+ @article{YOUR_CITATION_HERE,
309
+ title={Your Title},
310
+ author={Your Name},
311
+ year={2024}
312
+ }
313
+ ```
314
+
annotations/test_caption.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf851b9654b225dad6ffbaa5674f467fe2b05d96d455d571c3f40a22e8d1eb67
+ size 2267339
annotations/test_mask.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d02ddf770641f0f45e812532624c9348b0f4b2d3e7ea43200ba0ebc6522813d0
+ size 11879034
annotations/train_caption.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50666f262c27fc7a2d78dd262cc322aa8b5363c621eaca2d4630812c5222c638
+ size 5537689
annotations/train_mask.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b7880707a112ee0bfd11130608de3ff332b224a870b37cab599311eda61dd74
+ size 31847769
annotations/val_caption.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e2aa96ebc0172cfec8d6c4a4561ed27d496fc4d5a4b8c6e7005e8d5de0eefc9
+ size 991772
annotations/val_mask.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3faebf4cded1e35d438bf5e06702d1cd19b03a857e29de835d0b6afde11a36ec
+ size 5168463
data/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94b64e59d485dd438c76cdfc9ab20e44583320c2e48dadd89d8bdbf0215a80a6
+ size 6163271
data/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f898a851b0ef8af1dbfe455d72079cec7cbae306235ff2461d152f9de1ba5713
+ size 16466439
data/val-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33caec8851b71bb3f13fd5c94f209e7febfe5bbaaf88a1e82496bd2ffa15254c
+ size 2643745