Computational semantics deals with the extraction of semantic representations from natural language text. These representations can be divided into logical representations, which support logical reasoning, and representations based on similarity or probability, which support applications such as clustering and information retrieval.
For logical analysis, we give an overview of Montague Semantics, following the textbook by Blackburn and Bos [2005].
For representations based on similarity or probability, we discuss Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and graph-based semantic similarity measures.
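As a small illustration of the similarity-based approach, the following Python sketch shows the core idea of LSA on a toy term-document matrix: factor the matrix with a truncated SVD and compare documents by cosine similarity in the reduced space. The toy corpus, the rank k, and the raw count weighting are illustrative assumptions, not part of the course material.

import numpy as np

docs = [
    "human machine interface for computer applications",
    "a survey of user opinion of computer system response time",
    "relation of user perceived response time to error measurement",
    "the generation of random binary unordered trees",
    "the intersection graph of paths in trees",
]

# Term-document count matrix (rows = terms, columns = documents).
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        A[index[w], j] += 1

# Truncated SVD: keep only the k largest singular values.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in the k-dimensional latent space

def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# Compare a pair of thematically related documents with an unrelated pair;
# in this toy corpus the human-computer documents tend to come out more
# similar to each other than to the graph-theory documents.
print(cosine(doc_vecs[0], doc_vecs[1]), cosine(doc_vecs[0], doc_vecs[3]))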
Introduction to semantics of natural languages
Logical representation and reasoning
Representation and reasoning based on similarity or probability
- Latent Semantic Analysis (LSA)
- Probabilistic Latent Semantic Analysis (PLSA) and the Expectation-Maximization (EM) algorithm
- Graph-based semantic similarity (see the sketch after this topic list)
Discourse Representation Theory (DRT)
Syntax learning
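To make the graph-based similarity topic above more concrete, here is a minimal Python sketch of a path-length-based measure on a toy is-a taxonomy. The taxonomy, the scoring formula, and all names are illustrative assumptions rather than the specific measure treated in the course.

from collections import deque

# Undirected is-a edges of a toy taxonomy (illustrative only).
edges = [
    ("entity", "animal"), ("entity", "artifact"),
    ("animal", "dog"), ("animal", "cat"),
    ("artifact", "car"), ("artifact", "bicycle"),
]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def shortest_path_length(start, goal):
    """Breadth-first search for the number of edges between two concepts."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for nxt in graph[node]:
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return float("inf")   # concepts not connected

def path_similarity(a, b):
    # Similarity decreases with the length of the shortest path.
    return 1.0 / (1.0 + shortest_path_length(a, b))

print(path_similarity("dog", "cat"))      # 1/3: siblings under "animal"
print(path_similarity("dog", "bicycle"))  # 1/5: related only via "entity"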
Didactics
The exercise consists of a theoretical part and a practical part with programming examples.
Further Information
ECTS-Breakdown:
25 h Lecture + Exercise
25 h Preparation of exercises
23 h Preparation for exam
2 h Written exam
------
75 h = 3 ECTS
Patrick Blackburn and Johan Bos: Representation and Inference for Natural Language. A First Course in Computational Semantics. CSLI Publications, 2005.
Christopher D. Manning and Hinrich Schütze: Foundations of Statistical Natural Language Processing. MIT Press, 1999.