MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation
Paper: [arXiv:2305.15296](https://arxiv.org/abs/2305.15296)
[Dataset viewer preview: columns `image` (width 1.28k–7.95k px) and `label` (class label, 3 classes: `blue_train`, `red_train`, `yellow_train`)]
In our paper MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation, we propose the MCC-250 benchmark to evaluate compositional image generation capabilities for multimodal inputs. MCC-250 is built on a subset of CC-500, which contains 500 text-only prompts of the pattern "a red apple and a yellow banana", each textually describing two objects with their respective attributes.
With MCC-250, we provide a set of reference images for each object and attribute combination, enabling multimodal applications.
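The reference images and their class labels can be loaded with the 🤗 `datasets` library. The snippet below is a minimal sketch; the repository ID `Aleph-Alpha/MCC-250` is an assumption for illustration and may need to be adjusted to the actual Hub path.

```python
from datasets import load_dataset

# Repository ID is an assumption; substitute the actual Hub path of this dataset.
ds = load_dataset("Aleph-Alpha/MCC-250", split="train")

example = ds[0]
print(example["image"].size)       # (width, height) of the PIL reference image
print(ds.features["label"].names)  # class names as shown in the viewer preview
```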
All images were sourced from the following four stock imagery providers: