Pretrain, Prompt, Predict

A New Paradigm for NLP

This page provides a survey and organization of research works in a new paradigm of natural language processing, which we dub prompt-based learning. [Last updated: 2021-08-15]

Outline [PDF] (updated 2021-07-29)

Two Sea Changes in NLP [PDF] (updated 2021-07-29)

A Formal Description of Prompting [PDF] (updated 2021-07-29)
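The survey formalizes prompting in three steps: fill the input into a template with an answer slot, let a language model score candidate answers for the slot, then map the best answer to an output label. A minimal sketch of that pipeline, where the template, answer map, and scoring function are all illustrative stand-ins (a real system would query a masked language model):

```python
TEMPLATE = "{x} Overall, it was a [Z] movie."
ANSWER_MAP = {"great": "positive", "terrible": "negative"}  # answer z -> label y

def fill_template(x: str, z: str = "[Z]") -> str:
    """f_fill: instantiate the template with input x and answer slot z."""
    return TEMPLATE.format(x=x).replace("[Z]", z)

def toy_lm_score(prompt: str) -> float:
    """Stand-in for the LM's score of a filled prompt; NOT a real model."""
    cues = {"loved": 1, "wonderful": 1, "boring": -1, "awful": -1}
    polarity = sum(w for cue, w in cues.items() if cue in prompt)
    # Reward "great" when polarity is positive, "terrible" when negative.
    return polarity if "great" in prompt else -polarity

def predict(x: str) -> str:
    """Answer search over the candidate set Z, then map the best z to a label y."""
    z_star = max(ANSWER_MAP, key=lambda z: toy_lm_score(fill_template(x, z)))
    return ANSWER_MAP[z_star]
```

Swapping `toy_lm_score` for a real masked-LM probability recovers the cloze-style prompting setup used throughout the readings below.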
Pretrained Language Models [PDF] (updated 2021-07-29)
Suggested readings:
1. Deep Contextualized Word Representations
2. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
3. Improving Language Understanding by Generative Pre-Training
4. Language Models are Unsupervised Multitask Learners
5. Language Models are Few-Shot Learners
6. ERNIE: Enhanced Representation through Knowledge Integration
7. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
8. Unified Language Model Pre-training for Natural Language Understanding and Generation
9. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
10. Cross-lingual Language Model Pretraining
11. XLNet: Generalized Autoregressive Pretraining for Language Understanding
12. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
Prompt Engineering [PDF] (updated 2021-07-29)
Suggested readings:
Discrete Prompt:
1. Universal Adversarial Triggers for Attacking and Analyzing NLP
2. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
3. How Can We Know What Language Models Know?
4. BARTScore: Evaluating Generated Text as Text Generation
5. BERTese: Learning to Speak to BERT
6. Making Pre-trained Language Models Better Few-shot Learners
Continuous Prompt:
1. WARP: Word-level Adversarial ReProgramming
2. Prefix-Tuning: Optimizing Continuous Prompts for Generation
3. The Power of Scale for Parameter-Efficient Prompt Tuning
4. Factual Probing Is [MASK]: Learning vs. Learning to Recall
5. Multimodal Few-Shot Learning with Frozen Language Models
Hybrid Prompt:
1. GPT Understands, Too
2. PTR: Prompt Tuning with Rules for Text Classification
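The continuous-prompt line of work above (WARP, Prefix-Tuning, prompt tuning) replaces discrete template words with trainable vectors injected directly in embedding space, typically with the language model itself frozen. A toy, shape-level sketch under assumed dimensions (the embedding lookup is a stand-in, not a real LM):

```python
EMB_DIM = 4     # assumed toy hidden size; real LMs use hundreds of dimensions
PROMPT_LEN = 3  # number of trainable soft-prompt vectors

# In prompt-only tuning, these vectors are the ONLY trainable parameters.
soft_prompt = [[0.01 * (i + j) for j in range(EMB_DIM)] for i in range(PROMPT_LEN)]

def embed(tokens):
    """Stand-in for the frozen LM's embedding lookup."""
    return [[float(len(tok) + d) for d in range(EMB_DIM)] for tok in tokens]

def build_input(tokens):
    """The frozen LM consumes the concatenation [soft_prompt ; embed(tokens)]."""
    return soft_prompt + embed(tokens)

seq = build_input(["the", "movie", "was", "fun"])
# sequence length = PROMPT_LEN + number of input tokens
```

Because gradients flow only into `soft_prompt`, the prompt can be optimized for a task without touching any LM weight, which is the parameter-efficiency argument made in the prompt-tuning readings.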
Answer Engineering [PDF] (updated 2021-07-29)
Suggested readings:
Discrete Answer:
1. Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
2. Automatically Identifying Words That Can Serve as Labels for Few-Shot Text Classification
3. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
4. Making Pre-trained Language Models Better Few-shot Learners
5. AdaPrompt: Adaptive Prompt-based Finetuning for Relation Extraction
Continuous Answer:
1. WARP: Word-level Adversarial ReProgramming
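Discrete answer engineering, as in the readings above, searches a candidate vocabulary for the label words the model scores highest on examples of each class. A miniature sketch with a hypothetical heuristic scorer in place of a real masked LM (candidates, dev set, and scorer are all illustrative):

```python
CANDIDATES = ["good", "great", "bad", "terrible", "okay"]
DEV_SET = [("a joy to watch", "positive"), ("painfully dull", "negative")]

def toy_lm_score(text: str, word: str) -> float:
    """Stand-in for P_LM(word | cloze prompt built from text); NOT a real model."""
    upbeat = "joy" in text or "delight" in text  # toy sentiment heuristic
    if upbeat:
        return 1.0 if word in ("good", "great") else 0.0
    return 1.0 if word in ("bad", "terrible") else 0.0

def search_answer_words(dev_set, candidates):
    """Per class, keep the candidate word with the highest mean score on that class."""
    best = {}
    for label in sorted({y for _, y in dev_set}):
        xs = [x for x, y in dev_set if y == label]
        best[label] = max(
            candidates,
            key=lambda w: sum(toy_lm_score(x, w) for x in xs) / len(xs),
        )
    return best
```

With a real LM in the scorer, this is the same search problem the PET-style papers solve: picking verbalizer words that make the cloze task discriminative.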
Multi-prompt Learning [PDF] (updated 2021-07-29)
Suggested readings:
Prompt Ensemble:
1. How Can We Know What Language Models Know?
2. Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
3. Few-Shot Text Generation with Pattern-Exploiting Training
4. Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
Prompt Augmentation:
1. Language Models are Few-Shot Learners
2. What Makes Good In-Context Examples for GPT-3?
3. Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
Prompt Composition:
1. PTR: Prompt Tuning with Rules for Text Classification
Prompt Decomposition:
1. Template-Based Named Entity Recognition Using BART
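Prompt ensembling, the first strategy above, queries several templates and aggregates their answer scores, which tends to be more robust than committing to any single hand-written template. A toy sketch (templates and scorer are illustrative stand-ins, not a real LM):

```python
TEMPLATES = [
    "{x} It was [Z].",
    "{x} All in all, a [Z] film.",
    "Review: {x} Sentiment: [Z].",
]
ANSWERS = {"great": "positive", "terrible": "negative"}

def toy_lm_score(prompt: str) -> float:
    """Stand-in for the LM's score of a filled prompt."""
    polarity = prompt.count("superb") - prompt.count("dreadful")
    return polarity if "great" in prompt else -polarity

def ensemble_predict(x: str) -> str:
    """Average each answer's score across all templates, then map to a label."""
    avg = {
        z: sum(toy_lm_score(t.format(x=x).replace("[Z]", z)) for t in TEMPLATES)
        / len(TEMPLATES)
        for z in ANSWERS
    }
    return ANSWERS[max(avg, key=avg.get)]
```

Averaging is only the simplest aggregation; the readings above also weight templates or train the mixture, as in the soft-prompt-mixture paper.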
Prompt Learning Strategies [PDF] (updated 2021-07-29)
Suggested readings:
In-context Learning:
1. Language Models are Unsupervised Multitask Learners
2. Language Models are Few-Shot Learners
3. Language Models as Knowledge Bases?
Prompt-only Tuning:
1. WARP: Word-level Adversarial ReProgramming
2. Prefix-Tuning: Optimizing Continuous Prompts for Generation
3. The Power of Scale for Parameter-Efficient Prompt Tuning
4. Multimodal Few-Shot Learning with Frozen Language Models
Model Fine-tuning:
1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
2. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Prompt-fixed Fine-tuning:
1. Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
2. Few-Shot Text Generation with Pattern-Exploiting Training
3. Making Pre-trained Language Models Better Few-shot Learners
4. Meta-tuning Language Models to Answer Prompts Better
Prompt Fine-tuning:
1. PADA: A Prompt-based Autoregressive Approach for Adaptation to Unseen Domains
2. GPT Understands, Too
3. PTR: Prompt Tuning with Rules for Text Classification
4. Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
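The five strategies above differ mainly in which parameter groups get updated: the LM weights, the prompt, both, or neither. A compact summary as data (the labels follow this outline; the two-field encoding is our own illustrative shorthand):

```python
STRATEGIES = {
    # strategy name:           (LM weights tuned?, prompt: "none"/"fixed"/"tuned")
    "in-context learning":      (False, "fixed"),
    "prompt-only tuning":       (False, "tuned"),
    "model fine-tuning":        (True,  "none"),
    "prompt-fixed fine-tuning": (True,  "fixed"),
    "prompt fine-tuning":       (True,  "tuned"),
}

# Strategies that keep the pretrained LM frozen:
frozen_lm = [name for name, (lm_tuned, _) in STRATEGIES.items() if not lm_tuned]
```

The frozen-LM row is what makes in-context learning and prompt-only tuning attractive in few-shot settings: one copy of the model can serve many tasks.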
Applications [PDF] (updated 2021-07-29)
Suggested readings:
Factual Probing:
1. Language Models as Knowledge Bases?
2. How Can We Know What Language Models Know?
3. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
4. Factual Probing Is [MASK]: Learning vs. Learning to Recall
Text Classification:
1. Zero-shot Text Classification With Generative Language Models
2. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
3. Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
4. Making Pre-trained Language Models Better Few-shot Learners
5. WARP: Word-level Adversarial ReProgramming
Information Extraction:
1. AdaPrompt: Adaptive Prompt-based Finetuning for Relation Extraction
2. PTR: Prompt Tuning with Rules for Text Classification
3. Template-Based Named Entity Recognition Using BART
Question Answering:
1. UnifiedQA: Crossing Format Boundaries with a Single QA System
Text Generation:
1. Language Models are Unsupervised Multitask Learners
2. Language Models are Few-Shot Learners
3. Prefix-Tuning: Optimizing Continuous Prompts for Generation
4. Few-Shot Text Generation with Pattern-Exploiting Training
Others:
1. A Simple Method for Commonsense Reasoning
2. BARTScore: Evaluating Generated Text as Text Generation
3. Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
4. Generating Datasets with Pretrained Language Models
5. Multimodal Few-Shot Learning with Frozen Language Models
Relevant Topics [PDF] (updated 2021-07-29)

Challenges [PDF] (updated 2021-07-29)

Meta Analysis [PDF] (updated 2021-07-29)