
Is It Really Long Context if All You Need Is Retrieval? Towards Genuinely Difficult Long Context NLP

  • Omer Goldman
  • Alon Jacovi
  • Aviv Slobodkin
  • Aviya Maimon
  • Ido Dagan
  • Reut Tsarfaty

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Peer-reviewed

Abstract

Improvements in language models' capabilities have pushed their applications towards longer contexts, making long-context evaluation and development an active research area. However, many disparate use cases are grouped together under the umbrella term of “long-context”, defined simply by the total length of the model's input, including, for example, Needle-in-a-Haystack tasks, book summarization, and information aggregation. Given their varied difficulty, in this position paper we argue that conflating different tasks by their context length is unproductive. As a community, we require a more precise vocabulary to understand what makes long-context tasks similar or different. We propose to unpack the taxonomy of long-context tasks based on the properties that make them more difficult with longer contexts. We propose two orthogonal axes of difficulty: (I) Dispersion: How hard is it to find the necessary information in the context? (II) Scope: How much necessary information is there to find? We survey the literature on long context, provide justification for this taxonomy as an informative descriptor, and situate the literature with respect to it. We conclude that the most difficult and interesting settings, whose necessary information is very long and highly dispersed within the input, are severely under-explored. By using a descriptive vocabulary and discussing the relevant properties of difficulty in long context, we can conduct more informed research in this area. We call for a careful design of tasks and benchmarks with distinctly long context, taking into account the characteristics that make it qualitatively different from shorter context.

Original language: English
Title of host publication: EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Publisher: Association for Computational Linguistics (ACL)
Pages: 16576-16586
Number of pages: 11
ISBN (electronic): 9798891761643
Digital Object Identifiers (DOIs)
Publication status: Published - 2024
Externally published: Yes
Event: 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024 - Hybrid, Miami, United States
Duration: 12 Nov 2024 – 16 Nov 2024

Publication series

Name: EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference

Conference

Conference: 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Country/Territory: United States
City: Hybrid, Miami
Period: 12/11/24 – 16/11/24

Bibliographical note

Publisher Copyright:
© 2024 Association for Computational Linguistics.
