- Corrected CBOW Performs as well as Skip-gram
Ozan İrsoy, Adrian Benton and Karl Stratos
- Does Commonsense help in detecting Sarcasm?
Somnath Basu Roy Chowdhury and Snigdha Chaturvedi
- BERT Cannot Align Characters
Antonis Maronikolakis, Philipp Dufter and Hinrich Schütze
- Two Heads are Better than One? Verification of Ensemble Effect in Neural Machine Translation
Chanjun Park, Sungjin Park, Seolhwa Lee, Taesun Whang and Heuiseok Lim
- Finetuning Pretrained Transformers into Variational Autoencoders
Seongmin Park and Jihwa Lee
- Are BERTs Sensitive to Native Interference in L2 Production?
Zixin Tang, Prasenjit Mitra and David Reitter
- Zero-Shot Cross-Lingual Transfer is a Hard Baseline to Beat in German Fine-Grained Entity Typing
Sabine Weber and Mark Steedman
- Comparing Euclidean and Hyperbolic Embeddings on the WordNet Nouns Hypernymy Graph
Sameer Bansal and Adrian Benton
- When does Further Pre-training MLM Help? An Empirical Study on Task-Oriented Dialog Pre-training
Qi Zhu, Yuxian Gu, Lingxiao Luo, Bing Li, Cheng Li, Wei Peng, Minlie Huang and Xiaoyan Zhu
- Recurrent Attention for the Transformer
Jan Rosendahl, Christian Herold, Frithjof Petrick and Hermann Ney
- On the Difficulty of Segmenting Words with Attention
Ramon Sanabria, Hao Tang and Sharon Goldwater
- The Highs and Lows of Simple Lexical Domain Adaptation Approaches for Neural Machine Translation
Nikolay Bogoychev and Pinzhen Chen
- Backtranslation in Neural Morphological Inflection
Ling Liu and Mans Hulden
- Learning Data Augmentation Schedules for Natural Language Processing
Daphné Chopard, Matthias S. Treder and Irena Spasić
- An Investigation into the Contribution of Locally Aggregated Descriptors to Figurative Language Identification
Sina Mahdipour Saravani, Ritwik Banerjee and Indrakshi Ray
- Blindness to Modality Helps Entailment Graph Mining
Liane Guillou, Sander Bijl de Vroe, Mark Johnson and Mark Steedman
- Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI
Yangqiaoyu Zhou and Chenhao Tan
- Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics
Prajjwal Bhargava, Aleksandr Drozd and Anna Rogers
- Challenging the Semi-Supervised VAE Framework for Text Classification
Ghazi Felhi, Joseph Le Roux and Djamé Seddah
- Active Learning for Argument Strength Estimation
Nataliia Kees, Michael Fromm, Evgeniy Faerman and Thomas Seidl
In addition to the above, the following EMNLP Findings papers will be presented at Insights:
- Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models’ Transferability
Wei-Tsung Kao and Hung-yi Lee
- Do We Know What We Don’t Know? Studying Unanswerable Questions beyond SQuAD 2.0
Elior Sulem, Jamaal Hay and Dan Roth
- Does Vision-and-Language Pretraining Improve Lexical Grounding?
Tian Yun, Chen Sun and Ellie Pavlick
- Investigating Numeracy of a Text-to-Text Transfer Model
Kuntal Kumar Pal and Chitta Baral
- Temporal Adaptation of BERT and Performance on Downstream Document Classification: Insights from Social Media
Paul Röttger and Janet B. Pierrehumbert