
Summary of the guest lecture at the Man Science seminar (15.02.):
Prof. Kees van Deemter, University of Aberdeen"Computational Models of Referring” In this talk I will introduce Natural Language Generation (NLG) and argue that, in addition to making practically useful systems, NLG algorithms can be interesting models of human language use, because they are able to embody insights that elude other approaches to the study of language and communication. Focussing specifically on models of human referring (i.e., the production of referring expressions), I will introduce some classic algorithms, discussing what these algorithms are able to do well and what it is that they still struggle to do. Next, I will use evidence from experiments with human speakers and hearers to argue that the most difficult problems in this area arise from situations in which reference is something other than the "simple" identification of a referent through shared knowledge; I will give examples of these problematic situations, and of generation algorithms that address them. This talk includes themes from my book “Computational Models of Referring: A Study in Cognitive Science”, MIT Press, June 2016. The book is now freely available from here.
Summary of the lecture (16.02.) in the President of Latvia's lecture series “Pasaules līderu lasījumi”:
Prof. Kees van Deemter, University of Aberdeen: “Lying: the View from Natural Language Generation”

Recent political upheavals have caused global debate about the spread of “fake news” on social media and elsewhere: reports that look like news, but which are intentionally untruthful. Fake news is often distributed for political or commercial gain; a classic example is how opponents spread news-like reports claiming that former US President Obama was not born in the USA (and, by implication, not a legitimate US president). The present talk, which will be very informal, will examine the idea of “deviating from the truth”. I will do this by taking an engineering approach: First I will sketch the working of a typical data-to-text Natural Language Generation (NLG) system. Next, I will show how each stage of the NLG pipeline has to make debatable decisions which can affect the truth of the resulting text, making deviations from the truth very difficult to avoid. From these observations, I will argue that the notion of fake news is extremely difficult to pin down and detect, and I will suggest that effective solutions to the problem of fake news should therefore focus not on the computational side of the problem but on educating the human recipient of the news. This is joint work with Ehud Reiter, also at the University of Aberdeen.
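To make the pipeline argument concrete, the sketch below is a tiny, hypothetical data-to-text example showing how ordinary stage decisions (selecting facts, rounding, lexical choice, hedged wording) already move the output away from the literal data. The stage names follow the standard NLG pipeline; the weather data and rules are invented for illustration and are not from the talk.

```python
# A hypothetical sketch of a tiny data-to-text pipeline, illustrating how
# each stage makes debatable decisions that shift the text away from the raw data.
# Stage names follow the standard NLG pipeline; the data and rules are invented.

raw_data = {"city": "Riga", "temp_c": 2.4, "wind_kmh": 38.0}

def content_determination(data):
    # Decision: which facts to mention at all (omission already shapes the picture).
    return {"temp_c": data["temp_c"], "wind_kmh": data["wind_kmh"]}

def microplanning(facts):
    # Decision: rounding and lexical choice ("strong winds" vs a number) blur precision.
    temp = round(facts["temp_c"])                     # 2.4 -> 2
    wind = "strong winds" if facts["wind_kmh"] > 30 else "light winds"
    return {"temp": temp, "wind": wind}

def realisation(plan):
    # Decision: hedged surface wording ("around") deviates a little further.
    return f"Expect around {plan['temp']} degrees with {plan['wind']}."

print(realisation(microplanning(content_determination(raw_data))))
# -> "Expect around 2 degrees with strong winds."
# The sentence is perfectly reasonable, yet nothing in it matches the raw data exactly.
```

The point of such a sketch is not that the output is a lie, but that every stage involves choices with no single “true” answer, which is why pinning down and automatically detecting “deviation from the truth” is so hard.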