AirQA: A Comprehensive QA Dataset for AI Research with Instance-Level Evaluation

This repository contains the test set, metadata, processed_data, and papers for the AirQA dataset introduced in our paper AirQA: A Comprehensive QA Dataset for AI Research with Instance-Level Evaluation, accepted to ICLR 2026. Detailed instructions for using the dataset will soon be publicly available in our official repository.

AirQA is a human-annotated, multi-modal, multi-task Artificial Intelligence Research Question Answering dataset comprising 1,246 examples and 13,956 papers, designed to evaluate an agent's research capabilities in realistic scenarios. It is the first dataset to cover multiple question types, and the first to bring function-based evaluation into the QA domain, enabling convenient and systematic assessment of research capabilities.

🔍 Quick Start

Load the AirQA dataset in one line using Hugging Face datasets:

from datasets import load_dataset

dataset = load_dataset("OpenDFM/AirQA")
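
To get a feel for the data format, the following sketch prints the fields of the first example. It is a minimal sketch: the split name and the field names (uuid, question, answer_format, anchor_pdf, reference_pdf, conference) are assumptions based on the released test files, so check the official repository for the authoritative schema.

# Continuing from the snippet above: peek at one example.
split = list(dataset.keys())[0]   # typically "test"
example = dataset[split][0]
for key in ("uuid", "question", "answer_format",
            "anchor_pdf", "reference_pdf", "conference"):
    print(key, "->", example.get(key))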

However, we recommend referring to our official repository for complete usage instructions, including the data format and evaluation scripts.

📂 Folder Structure

AirQA
├── data/
│   ├── test.parquet       # test set (simple, for minimal usage)
│   ├── test_data.jsonl    # test set (complete, including function-based evaluation)
│   └── uuid2title.json    # mapping from paper UUID to title
├── metadata/
│   ├── 000ab6db-4b65-5dc0-8393-fbc2c05843c8.json
│   └── ...                # more metadata dicts
├── papers/
│   ├── acl2016/
│   │   └── 16c3a7ad-d638-5ebf-a72a-bd58f06c16d7.pdf
│   ├── acl2019/
│   │   └── c7563d97-695f-5c77-8021-334bf2ff9ddb.pdf
│   ├── acl2023/
│   │   ├── 001ab93b-7665-5d56-a28e-eac95d2a9d7e.pdf
│   │   └── ...            # more .pdf published in ACL 2023
│   └── ...                # other sub-folders of paper collections
├── processed_data/
│   ├── 000ab6db-4b65-5dc0-8393-fbc2c05843c8.json  # cached data for PDF parsing
│   └── ...                # more cached data for PDFs
└── README.md

Due to Hugging Face's limit on the number of files in a single folder, we packaged metadata and processed_data into archives.
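
If you work with a local copy of this repository instead, the sketch below shows one way to resolve a paper UUID to its title and PDF path using data/uuid2title.json and the papers/ layout above, and to read the complete test set from data/test_data.jsonl. The root path and the JSON layouts are assumptions based on the folder structure; the official repository provides the supported tooling.

import json
from pathlib import Path

root = Path("AirQA")  # assumed path to a local copy of this repository

# uuid2title.json maps paper UUIDs to titles (see the folder structure above)
with open(root / "data" / "uuid2title.json", encoding="utf-8") as f:
    uuid2title = json.load(f)

def find_pdf(uuid):
    """Locate a paper PDF by UUID; papers are grouped into venue sub-folders."""
    matches = list((root / "papers").glob(f"*/{uuid}.pdf"))
    return matches[0] if matches else None

# test_data.jsonl stores the complete test set, one JSON object per line
with open(root / "data" / "test_data.jsonl", encoding="utf-8") as f:
    examples = [json.loads(line) for line in f]

uuid = "16c3a7ad-d638-5ebf-a72a-bd58f06c16d7"  # example UUID from papers/acl2016/
print(len(examples), "test examples")
print(uuid2title.get(uuid), "->", find_pdf(uuid))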

📊 Dataset Statistics

Our dataset encompasses papers from 34 volumes, spanning 7 conferences over 16 years. The detailed distribution is summarized below.

πŸ‘‡πŸ» Click to view the paper distribution of dataset
Folder          Conference   Year   Collected
iclr2024        ICLR         2024   3301
iclr2023        ICLR         2023   31
iclr2020        ICLR         2020   1
neurips2024     NeurIPS      2024   6857
neurips2023     NeurIPS      2023   73
nips2006        NeurIPS      2006   1
acl2024         ACL          2024   161
acl2023         ACL          2023   3083
acl2019         ACL          2019   1
acl2016         ACL          2016   1
emnlp2024       EMNLP        2024   55
emnlp2023       EMNLP        2023   52
emnlp2021       EMNLP        2021   2
emnlp2013       EMNLP        2013   1
icassp2024      ICASSP       2024   18
icassp2023      ICASSP       2023   12
eacl2024        EACL         2024   1
ijcnlp2023      IJCNLP       2023   1
arxiv2025       arXiv        2025   12
arxiv2024       arXiv        2024   53
arxiv2023       arXiv        2023   61
arxiv2022       arXiv        2022   61
arxiv2021       arXiv        2021   43
arxiv2020       arXiv        2020   25
arxiv2019       arXiv        2019   20
arxiv2018       arXiv        2018   11
arxiv2017       arXiv        2017   6
arxiv2016       arXiv        2016   4
arxiv2015       arXiv        2015   1
arxiv2014       arXiv        2014   1
arxiv2013       arXiv        2013   1
arxiv2012       arXiv        2012   1
arxiv2011       arXiv        2011   1
uncategorized   -            -      3
Total           -            -      13956

✍🏻 Citation

If you find this dataset useful, please cite our work:

@misc{huang2025airqacomprehensiveqadataset,
      title={AirQA: A Comprehensive QA Dataset for AI Research with Instance-Level Evaluation}, 
      author={Tiancheng Huang and Ruisheng Cao and Yuxin Zhang and Zhangyi Kang and Zijian Wang and Chenrun Wang and Yijie Luo and Hang Zheng and Lirong Qian and Lu Chen and Kai Yu},
      year={2025},
      eprint={2509.16952},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.16952}, 
}