SEMINAR

In this talk, after briefly introducing formal semantics in modern type theories (MTT-semantics), I shall argue that it is both model-theoretic and proof-theoretic. This is due to the unique features of MTTs: they contain rich type structures that provide powerful representational means (e.g., to represent collections as types) and, at the same time, are specified proof-theoretically as rule-based systems whose sentences (judgements) can be understood inferentially.

Considered in this way, MTTs arguably have promising advantages when employed as foundational languages for formal semantics, both theoretically and practically.

Lecturer:
Zhaohui Luo is a Professor of Computer Science at the Department of Computer Science, Royal Holloway, University of London.

Date: 2016-04-27 15:15 - 17:00

Location: T219, Olof Wijksgatan 6

SEMINAR

There is a fair amount of evidence indicating that language acquisition in general crucially relies on probabilistic learning. It is not clear how a reasonable account of semantic learning could be constructed on the basis of the categorical type systems that either classical or revised semantic theories assume. We present probabilistic TTR (Cooper et al. 2014), which makes explicit the assumption, common to most probability theories used in AI, that probability is distributed over situation types, rather than over sets of worlds.

Improving on and going beyond Cooper et al. (2014), we formulate elementary Bayesian classifiers (which can be modelled as two-layer Bayesian networks) in probabilistic TTR and use these to illustrate how our type theory serves as an interface between perceptual judgement, semantic interpretation, and semantic learning. We also show how this account can be extended to cover general Bayesian nets.
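
To give a flavour of the idea, here is a minimal Python sketch of an elementary Bayesian classifier of the two-layer kind described above; the situation types, evidence and probabilities are invented for illustration and do not come from Cooper et al. (2014):

    # Illustrative two-layer Bayesian classifier over situation types:
    # a class node (the situation type) with independent evidence nodes,
    # i.e. a naive Bayes model. All types and probabilities are invented.

    priors = {"Apple": 0.5, "Pear": 0.5}                 # p(T)

    # p(evidence | T) for two independent perceptual judgements.
    p_colour = {"Apple": {"green": 0.4, "red": 0.6},
                "Pear":  {"green": 0.8, "red": 0.2}}
    p_shape  = {"Apple": {"round": 0.9, "oblong": 0.1},
                "Pear":  {"round": 0.3, "oblong": 0.7}}

    def classify(colour, shape):
        """Posterior p(T | colour, shape) by Bayes' rule."""
        joint = {t: priors[t] * p_colour[t][colour] * p_shape[t][shape]
                 for t in priors}
        z = sum(joint.values())
        return {t: p / z for t, p in joint.items()}

    print(classify("green", "oblong"))   # most of the mass goes to Pear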

Date: 2016-04-07 15:15 - 17:00

Location: Seminar room, Dicksonsgatan 4

SEMINAR

In this seminar I will talk about my PhD studies: general aims, review of completed and ongoing work, as well as ideas on how to continue.

My work revolves around building semantic representations of linguistic units. Word embeddings have become ubiquitous in upstream Natural Language Processing tasks in recent years. In this project, we ask in which ways restricting representations to word forms limits the capacity of these models, and whether we can avoid this by focusing on representing more complex, less superficial linguistic units. I apply Machine Learning techniques for this purpose, and have worked on several models that learn word sense representations from corpora and lexicons. As a possible line of future research, I propose making use of exciting developments in learning algorithms to try to obtain representations of complex semantic units like multiword expressions.
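
As a rough illustration of what learning sense representations from corpora and lexicons can involve (not the actual models developed in the project), here is a toy Python sketch that builds sense vectors as centroids of gloss-word vectors and uses them to disambiguate an occurrence:

    import numpy as np

    # Toy corpus-derived word vectors (in practice these would be learned
    # embeddings; the vectors and glosses here are invented).
    vec = {
        "money": np.array([0.9, 0.1]), "river": np.array([0.1, 0.9]),
        "deposit": np.array([0.8, 0.2]), "shore": np.array([0.2, 0.8]),
    }

    # Lexicon glosses for two senses of "bank".
    glosses = {
        "bank_1": ["money", "deposit"],   # financial institution
        "bank_2": ["river", "shore"],     # sloping land by a river
    }

    # One simple sense representation: the centroid of the gloss words'
    # corpus vectors.
    sense_vec = {s: np.mean([vec[w] for w in ws], axis=0)
                 for s, ws in glosses.items()}

    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Disambiguate an occurrence of "bank" by its context vector.
    context = np.mean([vec["money"], vec["deposit"]], axis=0)
    best = max(sense_vec, key=lambda s: cos(sense_vec[s], context))
    print(best)  # -> bank_1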

Date: 2016-05-19 10:30 - 12:00

Location: L308, Lennart Torstenssonsgatan 8

SEMINAR

Language-processing software is becoming increasingly present in our society. Making such tools available to as many people as possible is not just a question of access to technology but also a question of language, as the tools need to be adapted, or localized, to each linguistic community. It is thus important to make the tools necessary for the engineering of language-processing systems as accessible as possible, for instance through automation: not so much to help traditional software creators as, more importantly, to enable communities to bring their language use into the digital world on their own terms.

Smart paradigms are created in the hope that they can decrease the amount of work for the lexicographer who wishes to create or update a morphological lexicon. In the first paper, we evaluate smart paradigms implemented in GF. How good are they at guessing the correct inflection tables? How much information is required? How good are they at compressing the lexicon?
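
As an illustration of what a smart paradigm does, here is a Python stand-in that guesses an English verb's inflection table from its base form; real smart paradigms are written in GF itself, and these heuristics are deliberately simplified:

    # A Python stand-in for a GF-style "smart paradigm": from one base
    # form, guess the full inflection table of an English verb. The
    # heuristics are simplified; irregular verbs would need more forms
    # as input, which is exactly the trade-off the first paper measures.

    def smart_verb(base):
        if base.endswith(("s", "sh", "ch", "x", "z")):
            third = base + "es"
        elif base.endswith("y") and base[-2] not in "aeiou":
            third = base[:-1] + "ies"
        else:
            third = base + "s"
        if third.endswith("ies"):
            past = base[:-1] + "ied"
        elif base.endswith("e"):
            past = base + "d"
        else:
            past = base + "ed"
        ing = base[:-1] + "ing" if base.endswith("e") else base + "ing"
        return {"inf": base, "3sg": third, "past": past, "prog": ing}

    print(smart_verb("watch"))  # {'inf': 'watch', '3sg': 'watches', ...}
    print(smart_verb("cry"))    # {'inf': 'cry', '3sg': 'cries', ...}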

In the second paper, we take some distance from smart paradigms: although they are used in this work, they are not the main focus of the study. Instead, we compare two rule-based machine translation systems based on different translation models and try to determine the potential of a possible hybridization.

In the third paper we come back to smart paradigms. While they can reduce the work of the lexicographer, someone still needs to create the smart paradigms in the first place. In this paper we explore the possibility of automatically creating smart paradigms from existing traditional paradigms using machine-learning techniques.
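
One simple way to picture the induction step is to factor example tables into a shared stem plus affix patterns; the Python sketch below (longest-common-prefix factoring) is only illustrative and may differ from the method used in the paper:

    # Induce a paradigm from an example inflection table by splitting
    # each form into the shared stem and a suffix pattern, then reapply
    # the pattern to a new stem. Simplified for illustration.
    import os

    def induce(table):
        stem = os.path.commonprefix(table)
        return [form[len(stem):] for form in table]  # suffix pattern

    def apply_paradigm(pattern, stem):
        return [stem + suf for suf in pattern]

    pat = induce(["talar", "talade", "talat"])   # forms of Swedish 'tala'
    print(pat)                                    # ['r', 'de', 't']
    print(apply_paradigm(pat, "visa"))            # ['visar', 'visade', 'visat']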

Finally, the last paper presents a collection of tools meant to support grammar engineering work in the Grammatical Framework community: a tokenizer; a library for embedding grammars in Java applications; a build server; a document translator; and a kernel for Jupyter notebooks.

Opponent: Assistant Professor Måns Huldén, Department of Linguistics, University of Colorado, U.S.A.

Date: 2016-06-02 10:00 - 12:00

Location: room EA, Hörsalsvägen 11, Chalmers

SEMINAR

Dementia is a gradual decline of cognitive abilities, often resulting from neurodegeneration. In some cases, such as primary progressive aphasia (PPA), language abilities are specifically impaired. In other cases, such as Alzheimer’s disease (AD), language disabilities may occur together with other cognitive impairments. In each of these instances, a narrative language sample can provide a wealth of data regarding an individual’s linguistic capabilities. Traditionally, analysis of speech samples was conducted by hand, but this is painstaking and time-consuming work. In this talk, I will show that many lexical and syntactic features can be automatically extracted from speech transcripts and used in machine learning classifiers to distinguish between PPA participants and controls, between participants with different subtypes of PPA, and between AD participants and controls. I will also discuss some of the challenges we face in terms of small data sets, the use of automatic speech recognition in these populations, and potential confounding factors.
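
As a schematic illustration of the pipeline described (not the actual feature set or data used in this work), here is a Python sketch that extracts two simple lexical features from transcripts and trains a classifier on them:

    # Extract simple lexical features from transcripts and train a
    # classifier. Features, transcripts and labels below are invented;
    # the real work uses far richer feature sets and clinical data.
    from sklearn.linear_model import LogisticRegression

    def features(transcript):
        words = transcript.lower().split()
        ttr = len(set(words)) / len(words)                  # type-token ratio
        mean_len = sum(len(w) for w in words) / len(words)  # mean word length
        return [ttr, mean_len]

    transcripts = [
        "the boy is taking a cookie from the jar",       # control (toy)
        "the thing the he is um the thing the cookie",   # impaired (toy)
        "the girl asks her brother for a cookie",
        "and um the the boy um he um falls",
    ]
    labels = [0, 1, 0, 1]  # 0 = control, 1 = impaired (invented)

    X = [features(t) for t in transcripts]
    clf = LogisticRegression().fit(X, labels)
    print(clf.predict([features("the um the boy um the the jar")]))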

Bio: Katie Fraser is a PhD candidate in the University of Toronto Computational Linguistics group. Her dissertation work focuses on automatically detecting signs of dementia through computational analysis of narrative speech. She has published papers in a number of computer science conferences, as well as the journals Cortex and the Journal of Alzheimer's Disease. Her work has been supported by NSERC and Google, and she was an invited participant in the 2015 MIT Rising Stars in Electrical Engineering and Computer Science workshop. She is also a co-founder of Winterlight Labs, a Toronto-based start-up focused on building tools to monitor cognitive impairment through speech. Katie holds a Master of Computer Science from Dalhousie University and a B.Sc. in Physics from St. Francis Xavier University.

Read more about Katie Fraser here: http://www.cs.toronto.edu/~kfraser/.

Date: 2016-03-17 10:30 - 12:00

Location: L308, Lennart Torstenssonsgatan 8

SEMINAR

In this talk, I will discuss the use of Modern Type Theoretical Semantics (MTTs), i.e. type theories within the tradition of Martin-Löf (1974, 1981), for reasoning about natural language semantics. I will first give a brief introduction to the features that make MTTs an attractive formal language into which to interpret NL semantics. In particular, I will discuss a number of issues that have been successfully dealt with using MTTs, such as adjectival/adverbial modification, copredication and intensionality, among other things. I will then argue that the proof-theoretic nature of MTTs, i.e. the fact that they are proof-theoretically specified, in combination with their expressiveness, makes them well suited to reasoning tasks. This proof-theoretic aspect has been the main reason that a number of proof assistants implement variants of MTTs.

One such proof assistant, Coq, will be used to show the applicability of MTTs in dealing with Natural Language Inference (NLI). I will first show how NL semantics can be implemented in Coq, and then present how Coq can be used to reason with these semantics, drawing examples from the FraCaS test suite to show the predictions the implemented semantics make as regards inference. I will then discuss issues like coverage and proof automation, as well as a number of ideas for future work, such as extracting type ontologies from GWAP lexical networks and creating a parser/translator between English (or any other language) and the syntax of Coq. I will end the talk by discussing the potential use of Coq for implementing other semantic frameworks, like Montague semantics and Davidsonian semantics, and eventually how Coq can be used with TTR (or even ProbTTR).
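
To give a flavour of the kind of encoding involved, here is a small sketch in Lean, a proof assistant in the same Martin-Löf tradition as Coq; the particular encoding of nouns as types, adjectival modification as a Sigma type, and quantification as a Pi type is illustrative rather than the one used in the talk:

    -- Common nouns as types, verbs as predicates: a toy MTT-style encoding.
    axiom Human : Type
    axiom Man   : Type
    axiom toHuman : Man → Human      -- subtyping Man ≤ Human via a coercion

    axiom walk : Human → Prop        -- intransitive verb as a predicate
    axiom tall : Human → Prop        -- adjective as a predicate

    -- Adjectival modification via a Sigma type: a tall man is a man
    -- together with a proof that he is tall.
    def TallMan : Type := { m : Man // tall (toHuman m) }

    -- "Every tall man walks" as a Pi type over TallMan:
    def everyTallManWalks : Prop := ∀ tm : TallMan, walk (toHuman tm.val)

    -- A FraCaS-style inference: "every man walks ⊢ every tall man walks".
    example (h : ∀ m : Man, walk (toHuman m)) : everyTallManWalks :=
      fun tm => h tm.val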

Date: 2016-03-09 15:15 - 16:30

Location: Seminar room, Dicksonsgatan 4

SEMINAR

In recent years, deep learning models have made a significant impact across a range of fields, and machine translation is one such area of research. The development of the encoder-decoder architecture, and its extension with an attention mechanism, has led to deep learning models achieving state-of-the-art MT results for a number of language pairs. However, an open question in deep learning for MT is which attention mechanism is best. This talk will begin by reviewing the current state of the art in deep learning for MT. The second half of the talk will present a novel attention-based encoder-decoder architecture for MT, the result of collaborative research between John Kelleher, Giancarlo Salton, and Robert J. Ross.
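
As a structural illustration of the encoder-decoder idea (not the architecture presented in the talk), here is a toy numpy sketch in which the encoder folds the source into one vector and the decoder greedily emits symbols from it:

    import numpy as np

    # Toy numpy skeleton of an encoder-decoder: the encoder compresses
    # the source into a single vector; the decoder emits one symbol per
    # step conditioned on it. Weights are random; no training, purely
    # structural.
    rng = np.random.default_rng(0)
    V, H = 5, 8                       # toy vocabulary and hidden sizes
    E  = rng.normal(size=(V, H))      # source embeddings
    We = rng.normal(size=(H, H))      # encoder recurrence
    Wd = rng.normal(size=(H, H))      # decoder recurrence
    Wo = rng.normal(size=(H, V))      # output projection

    def encode(src):
        h = np.zeros(H)
        for tok in src:               # simple RNN: h = tanh(W h + emb)
            h = np.tanh(We @ h + E[tok])
        return h                      # the fixed-size "sentence vector"

    def decode(h, steps):
        out = []
        for _ in range(steps):
            h = np.tanh(Wd @ h)
            out.append(int(np.argmax(h @ Wo)))  # greedy next-token pick
        return out

    print(decode(encode([0, 3, 1]), steps=4))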

Date: 2016-03-11 13:15 - 15:00

Location: T307, Olof Wijksgatan 6

SEMINAR

In artificial neural networks, attention models allow the system to focus on certain parts of the input. This has been shown to improve model accuracy in a number of applications. In image caption generation, attention models help to guide the model towards the parts of the image currently of interest. In neural machine translation, the attention mechanism gives the model an alignment of the words between the source sequence and the target sequence.
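
To make the mechanism concrete, here is a minimal numpy sketch of dot-product soft attention; the dimensions are toy-sized, and real models learn these vectors and typically scale or parametrise the scores:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attend(query, keys, values):
        """Weight each value by how well its key matches the query."""
        scores = keys @ query             # one score per input position
        weights = softmax(scores)         # normalised attention weights
        return weights @ values, weights  # context vector + alignment

    # Toy encoder states for a 3-word source sentence (4-dim states).
    rng = np.random.default_rng(0)
    keys = values = rng.normal(size=(3, 4))   # encoder hidden states
    query = rng.normal(size=4)                # current decoder state

    context, alignment = attend(query, keys, values)
    print(alignment)  # soft alignment over the 3 source positions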

In this talk, we'll go through the basic ideas and workings of attention models, both for recurrent networks and for convolutional networks. To conclude, we will look at some recent papers that apply attention mechanisms to solve different tasks in natural language processing and computer vision.

Date: 2016-02-18 10:30 - 12:00

Location: EDIT-room 3364, Chalmers Johanneberg

SEMINAR

Constraint Grammar (CG) is a formalism used to disambiguate morphologically analysed text. A grammar written in CG contains rules that /select/ or /remove/ an analysis, based on the words that surround the target word. An example of such a rule is [REMOVE verb IF -1 determiner]: given an ambiguous text such as "the wish", the rule would disambiguate /wish/ into a noun.
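
A Python stand-in for applying such a REMOVE rule might look as follows; real CG engines are far more general, and the sketch only illustrates the idea (including the CG convention that a word's last remaining reading is never removed):

    # Each word carries the set of analyses left after morphological lookup.
    sentence = [("the", {"determiner"}), ("wish", {"noun", "verb"})]

    def apply_remove(sent, target, position, condition):
        out = []
        for i, (word, readings) in enumerate(sent):
            j = i + position  # index of the context word, e.g. -1 = previous
            if (target in readings and len(readings) > 1
                    and 0 <= j < len(sent) and condition in sent[j][1]):
                readings = readings - {target}
            out.append((word, readings))
        return out

    print(apply_remove(sentence, "verb", -1, "determiner"))
    # -> [('the', {'determiner'}), ('wish', {'noun'})]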

A typical grammar contains hundreds to thousands of such rules. Since the rules are very shallow and depend on the context of words, it is easy to accidentally write rules that conflict with each other, or that can never apply to any input.

In this talk, I describe a method for analysing CG by encoding the rules in SAT (joint work with Koen Claessen). Our tools can detect internal conflicts or redundancies in a grammar, as well as generate examples to demonstrate the effect of some rule or rule set. This can help users to diagnose and improve their grammars. No corpus is required, only a morphological lexicon.
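
The SAT idea in miniature: variables say which readings of a word survive, clauses encode the constraints, and a solver looks for an assignment under which a given rule can still fire. The Python sketch below uses brute force instead of a real SAT solver, and the encoding is much simplified compared to the actual tool:

    from itertools import product

    def satisfiable(variables, clauses):
        """Brute-force CNF check; a clause is a list of (var, wanted_value)."""
        for bits in product([False, True], repeat=len(variables)):
            asg = dict(zip(variables, bits))
            if all(any(asg[v] == val for v, val in c) for c in clauses):
                return asg
        return None

    variables = ["wish_noun", "wish_verb"]

    # A word must always keep at least one reading:
    at_least_one = [("wish_noun", True), ("wish_verb", True)]

    # Suppose earlier rules already removed the noun reading after a
    # determiner; can REMOVE verb IF -1 determiner still apply here?
    # Firing requires the verb reading to be present and removable,
    # i.e. some other reading must also survive:
    rule_fires = [[("wish_verb", True)], [("wish_noun", True)]]
    noun_removed = [[("wish_noun", False)]]

    clauses = [at_least_one] + rule_fires + noun_removed
    print(satisfiable(variables, clauses))  # None: the rule can never fire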

Date: 2016-02-25 10:30 - 12:00

Location: L308, Lennart Torstenssonsgatan 8

SEMINAR

Inferences made by machine learning methods increasingly form the basis of actions in the real world. Learning how to act requires an understanding of cause-effect relationships, and while this is often overlooked in machine learning, modern applications like personalised medicine cannot function without causal inference. A common problem arising in such settings is that of counterfactual inference: “What would have happened if X instead of Y?” We put this question in the context of machine learning methods, such as contextual bandits and representation learning, and discuss relevant theory and applications.
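
One standard counterfactual estimator is inverse propensity scoring (IPS), which estimates how a new policy would have performed from logs collected under an old one; the Python sketch below uses simulated data and illustrates the technique in general rather than the methods discussed in the talk:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10000

    contexts = rng.integers(0, 2, size=n)        # patient feature X
    logged_actions = rng.integers(0, 2, size=n)  # old policy: uniform
    propensity = 0.5                             # p(action | context) in logs
    # True (unknown) reward: the action matching the context works better.
    rewards = (logged_actions == contexts).astype(float)

    def new_policy(x):          # candidate policy: always match the context
        return x

    # IPS: reweight logged rewards by how likely the new policy was to
    # take the logged action, divided by the logging propensity.
    match = (new_policy(contexts) == logged_actions).astype(float)
    ips_estimate = np.mean(match * rewards / propensity)
    print(ips_estimate)         # ~1.0, the new policy's true expected reward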

Date: 2016-02-11 10:30 - 12:00

Location: EDIT-room 3364, Chalmers Johanneberg
