GURT 2023 Program Schedule
This page gives the schedule of the GURT 2023 Program.
All times listed here are in Eastern Time (New York/Washington, D.C.). Note: Daylight Saving Time in Washington, D.C. will begin Sunday, March 12 at 2:00 am, “springing us ahead” one hour!
Logistics:
- Plenary sessions (talks, keynotes, panel) will take place in the ICC Auditorium. Remote presenters will receive a Zoom link.
- Time allocations: long papers—15+7 (15 min. plus 7 min. for questions); short papers—10+5.
- The welcome reception, coffee breaks, and the poster session will take place in the ICC Galleria.
- Food: The welcome reception will include wine and light bites (including meat, seafood, and vegetarian options). Morning coffee breaks will include pastries. For meals, there are dining options on and near campus.
- Poster boards will be 30 in. by 40 in. and can be oriented horizontally or vertically.
Thursday, March 9, 2023
10:00 Excursion to Planet Word museum (meet at the museum; make sure to reserve your tickets days in advance!)
15:00 Registration desk opens (ICC Galleria)
16:00 Keynote: Guy Perrier Why is graph rewriting interesting for computational linguistics?
Graph rewriting is a computational paradigm that is not widely used in computational linguistics (CL). And yet all linguistic resources, such as treebanks, semantically annotated corpora, and lexicons, can be considered as graphs, and many treatments of these resources boil down to graph matching or graph rewriting. Unfortunately, there is no standard model for graph rewriting. Bruno Guillaume, Guillaume Bonfante, and I have designed a model adapted to CL. We implemented it in the GREW tool (https://grew.fr/). After a description of the model and the tool, I will present various applications to CL. First, I will consider the component of GREW dedicated to graph matching: GREW-MATCH (http://match.grew.fr/). It can be used independently of GREW for the linguistic exploration of annotated corpora, for annotation correction, and for grammar extraction from treebanks. GREW, with its rewriting component, can be used to convert an annotated corpus from one format to another, and to produce a representation at one linguistic level (e.g. semantics) from a representation at another level (e.g. syntax).
17:00 Welcome Reception 🍷🧀🍤
20:00 End of Day
Friday, March 10, 2023
8:00 Coffee & Pastries ☕🥐
8:30 – 12:00 Registration
9:00 Paper Presentations
- 9:00 Necva Bölücü, Burcu Can Which Sentence Representation is More Informative: An Analysis on Text Classification [Depling] (remote presentation)
- 9:22 Yixuan Li Character-level Dependency Annotation of Chinese [Depling] (remote presentation)
- 9:44 Antoine Venant, François Lareau Predicates and entities in Abstract Meaning Representation [Depling]
- 10:06 Dag Trygve Truslew Haug, Jamie Yates Findlay Formal Semantics for Dependency Grammar [Depling]
10:30 Coffee Break ☕🥐
11:00 Paper Presentations
- 11:00 Yamei Wang, Geraldine Walther Measure Words are measurably different from sortal classifiers [Depling]
- 11:22 Maja Buljan What quantifying word order freedom tells us about dependency corpora [Depling]
- 11:44 Eva Fučíková, Jan Hajič, Zdeňka Urešová Corpus-Based Multilingual Event-type Ontology: annotation tools and principles [TLT]
- 12:06 Cristina Fernández-Alcaina, Eva Fučíková, Jan Hajič, Zdeňka Urešová Spanish Verbal Synonyms in the SynSemClass Ontology [TLT]
12:30 Lunch
13:30 – 16:30 Registration
14:00 Keynote: Joan Bresnan Cooccurrence probabilities predict English pronoun encliticization
English object pronouns that encliticize to their host verbs, as in get’em, stop’er, are common in conversational speech but rarely represented in orthographic texts. They have conflicting analyses in previous linguistic work, and have escaped corpus study. From two corpus studies of American speech, I will show that pronoun encliticization is predicted by the probability of cooccurrence of lexical verbs with their object pronouns. The higher the conditional probability of cooccurrence of individual lexical verbs with an object pronoun, the greater their likelihood of encliticization in ongoing conversations. This empirical finding is new, but it is just what would be expected in the hybrid formal and usage-based theory of Bresnan (2021), which combines a dynamic, exemplar-based lexicon with a lexical syntactic theory (LFG) of the co-lexicalization of adjacent words. This work implies that not only the grammar of English object enclitics but also their usage probabilities are part of English speakers’ implicit linguistic knowledge in active use during language production.
15:00 Coffee Break ☕🍪
15:30 Paper Presentations
- 15:30 Ee Suan Lim, Wei Qi Leong, Thanh Ngan Nguyen, Dea Adhista, Wei Ming Kng, William Chandra Tjhi, Ayu Purwarianti ICON: Building a Large-Scale Benchmark Constituency Treebank for the Indonesian Language [TLT]
- 15:52 Julia Bonn, Skatje Myers, Jens E. L. Van Gysel, Lukas Denk, Meagan Vigus, Jin Zhou, J. Andrew Cowell, William Croft, Jan Hajič, James H. Martin, Alexis Palmer, Martha Palmer, James Pustejovsky, Zdeňka Urešová, Rosa Vallejos, Nianwen Xue Mapping AMR to UMR: Resources for Adapting Existing Corpora for Cross-Lingual Compatibility [TLT]
- 16:14 Federica Gamba, Daniel Zeman Universalising Latin Universal Dependencies: a harmonisation of Latin treebanks in UD [UDW]
- 16:36 Stella Markantonatou, Nikolaos Constantinides, Vivian Stamou, Vasileios Arampatzakis, Panagiotis G. Krimpas, George Pavlidis Methodological issues regarding the semi-automatic UD treebank creation of under-resourced languages: the case of Pomak [UDW]
17:00 End of Day
Saturday, March 11, 2023
8:00 Coffee & Pastries ☕🥐
8:30 – 12:00 Registration
9:00 Paper Presentations (remote presenters)
- 9:00 Daisuke Bekki, Hitomi Yanaka Is Japanese CCGBank empirically correct? A case study of passive and causative constructions [TLT]
- 9:15 Kim Gerdes, Sylvain Kahane, Ziqian Peng Word order flexibility: a typometric study [Depling]
- 9:37 Erica Biagetti, Oliver Hellwig, Sven Sellmer Hedging in diachrony: the case of Vedic iva [TLT]
10:00 Coffee Break ☕🥐
10:30 Keynote: Joakim Nivre Ten Years of Universal Dependencies
Universal Dependencies (UD) is a project developing cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective. Since UD was launched almost ten years ago, it has grown into a large community effort involving over 500 researchers around the world, together producing treebanks for 138 languages and enabling new research directions in both NLP and linguistics. In this talk, I will review the history and development of UD, explore the UD research community through a bibliographic survey, and discuss challenges that we need to face when bringing UD into the future.
11:30 Lunch
13:00 Poster Session (ICC Galleria) ☕🍪
- Gabor Simon Constructions, collocations, and patterns: alternative ways of construction identification in a usage-based, corpus-driven theoretical framework [CxGs+NLP]
- Haibo Sun, Yifan Zhu, Jin Zhao, Nianwen Xue UMR annotation of Chinese Verb compounds and related constructions [CxGs+NLP]
- Allison Olivia Fan, Weiwei Sun Constructivist Tokenization for English [CxGs+NLP]
- Simon Mille, Josep Ricci, Alexander Shvets, Anya Belz A pipeline for extracting abstract dependency templates for data-to-text Natural Language Generation [Depling]
- Zoey Liu, Stefanie Wulff The development of dependency length minimization in early child language: A case study of the dative alternation [Depling]
- Christopher Sapp, Daniel Dakota, Elliott Evans Parsing Early New High German: Benefits and limitations of cross-dialectal training [TLT]
- John Bauer, Chloé Kiddon, Eric Yeh, Alexander Shan, Christopher D Manning Semgrex and Ssurgeon, Searching and Manipulating Dependency Graphs [TLT]
- Amir Zeldes, Nathan Schneider Are UD Treebanks Getting More Consistent? A Report Card for English UD [UDW]
- Chihiro Taguchi, David Chiang Introducing Morphology in Universal Dependencies Japanese [UDW]
- Leonie Weissweiler, Valentin Hofmann, Abdullatif Köksal, Hinrich Schütze The Better Your Syntax, the Better Your Semantics? Probing Pretrained Language Models for the English Comparative Correlative [CxGs+NLP] (EMNLP 2022 publication)
- Agata Savary, Sara Stymne, Verginica Barbu Mititelu, Nathan Schneider, Carlos Ramisch, Joakim Nivre PARSEME Meets Universal Dependencies: Getting on the Same Page in Representing Multiword Expressions [UDW] (NEJLT 2023 publication)
13:30 – 16:00 Registration
14:30 Coffee Break ☕🍪
15:00 Paper Presentations
- 15:00 Diego Alves, Božo Bekavac, Daniel Zeman, Marko Tadić Analysis of Corpus-based Word-Order Typological Methods [UDW]
- 15:22 Jamie Yates Findlay, Saeedeh Salimifar, Ahmet Yıldırım, Dag Trygve Truslew Haug Rule-based semantic interpretation for Universal Dependencies [UDW]
- 15:44 Jonathan Dunn Exploring the Constructicon: Linguistic Analysis of a Computational CxG [CxGs+NLP]
- 16:06 Priyanka Dey, Roxana Girju Investigating Stylistic Profiles for the Task of Empathy Classification in Medical Narrative Essays [CxGs+NLP]
16:30 End of Day
Sunday, March 12, 2023
8:00 Coffee & Pastries ☕🥐
8:30 – 12:00 Registration
9:00 Paper Presentations (remote presenters)
- 9:00 Chamila Liyanage, Kengatharaiyer Sarveswaran, Thilini Nadungodage, Randil Pushpananda Sinhala Dependency Treebank (STB) [UDW]
- 9:22 Alexey Koshevoy, Ilya Makarchuk, Anastasia Panova Building a Universal Dependencies Treebank for a Polysynthetic Language: the Case of Abaza [UDW]
- 9:37 Arthur Lorenzi, Vânia Gomes de Almeida, Ely Edison Matos, Tiago Timponi Torrent Modeling Construction Grammar’s Way into NLP: Insights from negative results in automatically identifying schematic clausal constructions in Brazilian Portuguese [CxGs+NLP]
- 9:59 Jussi Karlgren High-dimensional vector spaces can accommodate constructional features quite conveniently [CxGs+NLP]
10:30 Coffee Break ☕🥐
10:55 Keynote: Jonathan Dunn Emerging Structure in Computational Construction Grammar
This talk focuses on the emergence of grammatical structure in computational CxG given increasing amounts of exposure (i.e., training data). We divide this process of emergence into three phenomena: (i) the increasing complexity of category formation in producing basic slot-constraints, (ii) the increasing level of abstractness as constructions migrate from item-specific to more generalized constraints, and (iii) the clipping together of first-order constructions into larger second-order constructions. These three types of scaffolded structure produce grammars of increasing complexity given exposure to more training data.
11:55 Lunch
13:30 Panel
13:30 – 16:30 Registration
15:00 Coffee Break ☕🍪
15:30 Paper Presentations
- 15:30 Katrien Beuls, Paul Van Eecke Fluid Construction Grammar: State of the Art and Future Outlook [CxGs+NLP]
- 15:52 Kristopher Kyle, Hakyung Sung An argument structure construction treebank [CxGs+NLP]
- 16:14 Ludovica Pannitto, Aurelie Herbelot CALaMo: a Constructionist Assessment of Language Models [CxGs+NLP]
- 16:34 Leonie Weissweiler, Taiqi He, Naoki Otani, David R. Mortensen, Lori Levin, Hinrich Schütze Construction Grammar Provides Unique Insight into Neural Language Models [CxGs+NLP]
17:00 End of Day