Commonsense
UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations Paper • 2311.08469 • Published Nov 14, 2023 • 11
Prompting
Instruction-Following Evaluation for Large Language Models Paper • 2311.07911 • Published Nov 14, 2023 • 22
Prompt Engineering a Prompt Engineer Paper • 2311.05661 • Published Nov 9, 2023 • 23
Contrastive Chain-of-Thought Prompting Paper • 2311.09277 • Published Nov 15, 2023 • 35
Personalization
ChatAnything: Facetime Chat with LLM-Enhanced Personas Paper • 2311.06772 • Published Nov 12, 2023 • 35
Alignment
MART: Improving LLM Safety with Multi-round Automatic Red-Teaming Paper • 2311.07689 • Published Nov 13, 2023 • 9
Trusted Source Alignment in Large Language Models Paper • 2311.06697 • Published Nov 12, 2023 • 12
Unveiling Safety Vulnerabilities of Large Language Models Paper • 2311.04124 • Published Nov 7, 2023 • 9
LLM
Llamas Know What GPTs Don't Show: Surrogate Models for Confidence Estimation Paper • 2311.08877 • Published Nov 15, 2023 • 7