# Model Card for TG-LLM
TG-LLM consists of supervised fine-tuned LoRA adapters for temporal reasoning with large language models (LLMs). It covers two tasks:
- **Story-to-Temporal-Graph Translation** (`story_TG_trans`): converts a narrative into its corresponding temporal graph.
- **Temporal-Graph Reasoning** (`TGR`): reasons over a given temporal graph to answer questions.
## Model Details
### TGQA_story_TG_trans

- **Base model:** `meta-llama/Llama-2-13b-chat-hf`
- **LoRA configuration:**
  - `r`: 8
  - `lora_alpha`: 8
  - `target_modules`: `["q_proj", "k_proj", "o_proj", "v_proj"]`
  - `bias`: `"none"`

### TGQA_TGR

- **Base model:** `meta-llama/Llama-2-13b-chat-hf`
- **LoRA configuration:**
  - `r`: 8
  - `lora_alpha`: 8
  - `target_modules`: `["q_proj", "k_proj", "o_proj", "v_proj"]`
  - `bias`: `"none"`
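Both adapters share the same LoRA setup. As a quick reference, the hyperparameters above correspond to the following `peft` `LoraConfig` (a minimal sketch; `task_type` is an assumption not stated in this card, and the full training arguments are in the GitHub repository):

```python
from peft import LoraConfig

# LoRA hyperparameters as listed above; all other arguments are peft defaults.
lora_config = LoraConfig(
    r=8,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj", "o_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",  # assumed: these adapters fine-tune a causal LM
)
```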
For more details, please visit the TG-LLM GitHub repository.
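A minimal inference sketch with `transformers` and `peft`, assuming the adapters are stored as subfolders of this repo named after the tasks above (verify the repository layout before running):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach one of the two adapters; "TGQA_story_TG_trans" / "TGQA_TGR" are the
# names used above -- the subfolder layout is an assumption, check the repo.
model = PeftModel.from_pretrained(base, "sxiong/TG-LLM", subfolder="TGQA_story_TG_trans")

prompt = "..."  # a story to translate into a temporal graph
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```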
## Citation
```bibtex
@inproceedings{xiong-etal-2024-large,
    title = "Large Language Models Can Learn Temporal Reasoning",
    author = "Xiong, Siheng and
      Payani, Ali and
      Kompella, Ramana and
      Fekri, Faramarz",
    editor = "Ku, Lun-Wei and
      Martins, Andre and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.563",
    doi = "10.18653/v1/2024.acl-long.563",
    pages = "10452--10470"
}
```