Are LLMs safe?
Curious about the safety of LLMs? 🤔 Join us for an insightful new episode featuring Suchin Gururangan, Young Investigator at Allen Institut...

Welcome to the NLP highlights podcast, where we invite researchers to talk about their work in various areas in natural language processing. The hosts are the members of the AllenNLP team at...
This podcast episode features Dr. Mohamed Elhoseiny, a true luminary in the realm of computer vision with over a decade of groundbreaking re...

Our first guest with this new format is Kyle Lo, the most senior lead scientist in the Semantic Scholar team at Allen Institute for AI (AI2)...

In this special episode of NLP Highlights, we discussed building and open sourcing language models. What is the usual recipe for building la...

In this special episode, we chatted with Chris Callison-Burch about his testimony in the recent U.S. Congress Hearing on the Interoperabilit...

How can we generate coherent long stories from language models? Ensuring that the generated story has long range consistency and that it con...

Compositional generalization refers to the capability of models to generalize to out-of-distribution instances by composing information obta...

We invited Urvashi Khandelwal, a research scientist at Google Brain to talk about nearest neighbor language and machine translation models....

In this episode, we talk with Kayo Yin, an incoming PhD at Berkeley, and Malihe Alikhani, an assistant professor at the University of Pittsb...

This episode is the third in our current series on PhD applications. We talk about what the PhD application process looks like after applica...

This episode is the second in our current series on PhD applications. How do PhD programs in Europe differ from PhD programs in the US, and...

This episode is the first in our current series on PhD applications. How should people prepare their applications to PhD programs in NLP? In...

In this episode, we discussed the Alexa Prize Socialbot Grand Challenge and this year's winning submission, Alquist 4.0, with Petr Marek, a...

What can NLP researchers learn from Human Computer Interaction (HCI) research? We chatted with Nanna Inie and Leon Derczynski to find out. W...

In this episode, we talk with Lisa Beinborn, an assistant professor at Vrije Universiteit Amsterdam, about how to use human cognitive signal...

In this episode, we talk to Shunyu Yao about recent insights into how transformers can represent hierarchical structure in language. Bounded...

We discussed adversarial dataset construction and dynamic benchmarking in this episode with Douwe Kiela, a research scientist at Facebook AI...

We invited members of Masakhane, Tosin Adewumi and Perez Ogayo, to talk about their EMNLP Findings paper that discusses why typical research...

We invited Lisa Li to talk about her recent work, Prefix-Tuning: Optimizing Continuous Prompts for Generation. Prefix tuning is a lightweigh...

How can we build Visual Question Answering systems for real users? For this episode, we chatted with Danna Gurari, about her work in buildin...

We invited Jayant Krishnamurthy and Hao Fang, researchers at Microsoft Semantic Machines to discuss their platform for building task-oriente...

In this episode, Robin Jia talks about how to build robust NLP systems. We discuss the different senses in which a system can be robust, rea...

We invited Nils Holzenberger, a PhD student at JHU to talk about a dataset involving statutory reasoning in tax law Holzenberger et al. rele...

We invited Alona Fyshe to talk about the link between NLP and the human brain. We began by talking about what we currently know about the co...

We invited Asli Celikyilmaz for this episode to talk about evaluation of text generation systems. We discussed the challenges in evaluating...

In this episode, Diyi Yang gives us an overview of using NLP models for social applications, including understanding social relationships, p...

In this episode, we talked about Coreference Resolution with Marta Recasens, a Research Scientist at Google. We discussed the complexity inv...

We interviewed Sameer Singh for this episode, and discussed an overview of recent work in interpreting NLP model predictions, particularly i...

We invited Yonatan Bisk to talk about grounded language understanding. We started off by discussing an overview of the topic, its research g...

In this special episode, Carissa Schoenick, a program manager and communications director at AI2 interviewed Matt Gardner about AllenNLP. We...

We invited Marco Tulio Ribeiro, a Senior Researcher at Microsoft, to talk about evaluating NLP models using behavioral testing, a framework...

We invited Fernando Pereira, a VP and Distinguished Engineer at Google, where he leads NLU and ML research, to talk about managing NLP resea...

We invited Steven Cao to talk about his paper on multilingual alignment of contextual word embeddings. We started by discussing how multilin...

We invited Jon Clark from Google to talk about TyDi QA, a new question answering dataset, for this episode. The dataset contains information...

In this episode, Tom Kwiatkowski and Michael Collins talk about Natural Questions, a benchmark for question answering research. We discuss h...

How do we know, in a concrete quantitative sense, what a deep learning model knows about language? In this episode, Ellie Pavlick talks abou...

In this episode we invite Verena Rieser and Ondřej Dušek on to talk to us about the complexities of generating natural language when you h...

In this episode, we invite Hao Tan and Mohit Bansal to talk about multi-modal training of transformers, focusing in particular on their EMNL...

In this episode, we talked to Emily Bender about the ethical considerations in developing NLP models and putting them in production. Emily c...

In this episode we invite Sudha Rao to talk about question generation. We talk about different settings where you might want to generate que...

In this episode we talked with Victor Sanh and Thomas Wolf from HuggingFace about model distillation, and DistilBERT as one example of disti...

We talked to Brendan O’Connor for this episode about processing language in social media. Brendan started off by telling us about his projec...

What exciting NLP research problems are involved in processing biomedical and clinical data? In this episode, we spoke with Dina Demner-Fush...

In this episode, Jonathan Frankle describes the lottery ticket hypothesis, a popular explanation of how over-parameterization helps in train...

For our 100th episode, we invite AI2 CEO Oren Etzioni to talk to us about NLP startups. Oren has founded several successful startups, is him...

For this episode, we chatted with Neil Thomas and Roshan Rao about modeling protein sequences and evaluating transfer learning methods for a...

What function do the different attention heads serve in multi-headed attention models? In this episode, Lena describes how to use attributio...

In this episode, we talk to Taylor Berg-Kirkpatrick about optical character recognition (OCR) on historical documents. Taylor starts off by...

In this episode, we chat with Luke Zettlemoyer about Question Answering as a format for crowdsourcing annotations of various semantic phenom...

In this episode, we invite Yejin Choi to talk about common sense knowledge and reasoning, a growing area in NLP. We start by discussing a wo...