# Model Card for DeBERTa-v3-base-tasksource-nli

This is [DeBERTa-v3-base](https://hf.co/microsoft/deberta-v3-base) fine-tuned with multi-task learning on 600 tasks.
This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used for:
- Zero-shot entailment-based classification pipeline (similar to bart-mnli), see [ZS].
- Natural language inference, and many other tasks with tasksource-adapters, see [TA] (a plain NLI sketch follows this list).
- Further fine-tuning on a new task (classification, token classification, or multiple-choice).
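For the NLI use case, here is a minimal sketch of direct inference with this checkpoint. It assumes the standard sequence-classification head; label names are read from the checkpoint config rather than hard-coded, so no particular label order is assumed.

```python
# Minimal NLI sketch (assumption: the checkpoint ships an entailment-style
# sequence-classification head; label names come from the model config).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Azma-AI/deberta-base-multi-label-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A man is playing guitar on stage."
hypothesis = "Someone is performing music."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top logit back to its label name via the config.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```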
# [ZS] Zero-shot classification pipeline
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="Azma-AI/deberta-base-multi-label-classifier")

text = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(text, candidate_labels)
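# The pipeline returns a dict with 'sequence', 'labels' (sorted by descending score), and 'scores'.
# If the candidate labels are not mutually exclusive, you can pass multi_label=True:
# classifier(text, candidate_labels, multi_label=True)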