
Workshop on Insights from Negative Results in NLP


June 20, 2024
(co-located with NAACL)


Program

09:00 Opening Remarks

09:15 Technical session 1

09:45 Technical session 2

10:30 Coffee Break

11:00 Technical session 3

11:30 Invited Talk 1: Sasha Luccioni

Title: Reproducibility in ML and the Environment: What’s the Connection?

Abstract: In the last five years, since the advent of BERT, Large Language Models (LLMs) have become ubiquitous in the AI research landscape and are increasingly deployed in user-facing products in contexts ranging from health to education. However, many characteristics of LLMs - their inherent lack of reproducibility, the steep compute cost of their training, and the lack of openness in terms of access - are having wide-ranging repercussions. In this talk, I will discuss recent progress in AI and how it is changing the field in terms of scientific rigor and open science. I will also propose concrete steps that can be taken to ensure that AI research and practice remain reproducible, sustainable, and ethical.

Bio: Dr. Sasha Luccioni is a leading scientist at the nexus of artificial intelligence, ethics, and sustainability, with a PhD in AI and a decade of research and industry expertise. She is the Climate Lead at Hugging Face, a global startup in responsible open-source AI, where she spearheads research, consulting and capacity-building to elevate the sustainability of AI systems. A founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML), Sasha is passionate about catalyzing impactful change, organizing events and serving as a mentor to under-represented minorities within the AI community.

12:00 Lunch

14:10 Best paper award announcement

14:15 Technical session 4

15:00 Invited Talk 2: Marius Mosbach

Title: From Insights to Actions: The Role of Analysis Work in NLP

Abstract: Interpretability and analysis (IA) researchers are often motivated by the idea that a better understanding of our existing models and methods is imperative to improving their efficiency, robustness, and trustworthiness, and will ultimately lead to more successful and safe deployment of NLP systems. However, a commonly voiced criticism is that IA research fails to deliver on this promise and often lacks actionable insights. In my talk, I will present results from our recent work, in which we seek to quantify the impact of IA research on the broader field of NLP. We find that while NLP researchers build on findings from IA work and perceive it as important for progress in the field, several important features are still missing from IA work today. I will present an example from my own work to show how IA work can lead to actionable insights, and I will end with a call to action, including recommendations for a more impactful future of IA research.

Bio: Dr. Marius Mosbach is a postdoctoral researcher at McGill University and Mila - Quebec AI Institute, working with Siva Reddy. Prior to this, he completed his PhD at Saarland University, Germany, where he focused on analyzing the fine-tuning of pre-trained language models. He is broadly interested in building NLP systems that are well understood, robust, and easy to adapt. Beyond research, he enjoys CrossFit and explaining to people where Saarland is.

15:30 Coffee Break

16:00 Poster Session

17:00 Closing Remarks