Academic Talk: A Distributional Theory of Natural Language Content

Invited Speaker: Mark Steedman (Fellow of the British Academy and of the Royal Society of Edinburgh; AAAI and ACL Fellow; recipient of the ACL Lifetime Achievement Award)

Time: April 24, 2019, 11:00 am–12:00 noon

Venue: Room 321, Science and Engineering Building


Abstract:

Modern wide-coverage parsers work as well as they do because of a clever combination of supervised machine learning from small human-annotated corpora of syntactic structures supporting semantic interpretation, with further unsupervised machine learning trained over vast corpora of unlabeled text, usually using neurally-inspired algorithms, sometimes called Deep Learning. The latter component builds a hidden, essentially Markovian, sequence representation that acts as a powerful disambiguator, particularly of unseen events, such as words that were not exemplified in the small supervised corpus, and their likely compatibility with syntactic operations. Such parsers can process unseen text, such as that on the web, and assemble semantic representations at speeds several orders of magnitude faster than human reading. Such machine reading offers the possibility of fulfilling some of the oldest ambitions of computational natural language processing and artificial intelligence, such as question answering and building large symbolic knowledge graphs, or semantic networks.
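
As a loose illustration of how such distributional representations disambiguate unseen words, the Python toy sketch below assigns an unseen word the syntactic category of its distributionally nearest known neighbour. All the vectors, words, and CCG category labels here are invented for the example; this is not the speaker's actual system.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings "learned" from unlabeled text (3-d vectors for illustration).
embeddings = {
    "wrote":    np.array([0.90, 0.10, 0.00]),
    "composed": np.array([0.80, 0.20, 0.10]),
    "novel":    np.array([0.10, 0.90, 0.20]),
    "penned":   np.array([0.85, 0.15, 0.05]),  # unseen in the supervised corpus
}

# CCG categories observed in the small supervised treebank.
categories = {"wrote": "(S\\NP)/NP", "composed": "(S\\NP)/NP", "novel": "N"}

def guess_category(word):
    """Back off to the category of the distributionally nearest known word."""
    nearest = max(categories, key=lambda w: cosine(embeddings[word], embeddings[w]))
    return categories[nearest]

print(guess_category("penned"))  # (S\NP)/NP, a transitive-verb category
```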


This promise remains largely unfulfilled. The main obstacle to further progress lies in the theories of natural language semantics that such parsers use to assemble meaning representations, which are mostly inherited from linguistics and philosophical logic. Such semantic theories tend to tie meaning representation too closely to specific linguistic forms: for example, they fail to capture the fact that if we are asked "Which author wrote 'Macbeth'?", then the phrase "Shakespeare's 'Macbeth'" in a text may imply the answer. Leaving such commonplace entailments to later open-ended inference via theorem proving is not practical, and attempts since the early 1970s to hand-build a less form-dependent computable semantics have consistently failed.
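
To make the obstacle concrete, here is a minimal Python sketch of why form-dependent semantics misses the 'Macbeth' entailment: the question and the text yield different relation symbols for the same fact. The relation symbols and the entailment rule are assumptions for illustration, not the talk's formalism.

```python
# Facts extracted from text under a form-dependent semantics.
text_facts = {("poss", "shakespeare", "macbeth")}  # from "Shakespeare's 'Macbeth'"
query = ("wrote", None, "macbeth")                 # from "Which author wrote 'Macbeth'?"

def answer(query, facts):
    """Return subjects of facts matching the query's relation and object."""
    rel, _, obj = query
    return {subj for (r, subj, o) in facts if r == rel and o == obj}

print(answer(query, text_facts))  # set() -- no answer, since 'poss' != 'wrote'

# Only after adding a hand-written entailment rule does the match succeed,
# and hand-building such rules at scale is exactly what has failed since the 1970s.
entailments = {("poss", "wrote")}  # poss(X, Y) |= wrote(X, Y), for authors and works
expanded = text_facts | {(r2, s, o) for (r1, s, o) in text_facts
                         for (p, r2) in entailments if p == r1}
print(answer(query, expanded))     # {'shakespeare'}
```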


This talk argues for a different approach, treating the dimensions of semantic relation as hidden variables, to be discovered by machine learning from data. It will contrast two techniques for doing so. The first, like the parsing models mentioned earlier, is based on word embeddings, in which the meaning of a word is represented as a dimensionally reduced vector derived from the string contexts in which it has been seen in large amounts of unlabeled text, and semantic composition is represented by linear-algebraic operations. The second uses machine reading with wide-coverage parsers over large amounts of text to find type-consistent patterns of directional implication between traditional semantic relations across the same n-tuples of arguments: for example, that for many pairs of entities X of type author and Y of type literary work, if we read about X writing Y, we also read elsewhere about X's Y. The original form-specific semantics in the parser is then replaced by a form-independent semantics, in which paraphrases are collapsed into a single relation label and directional entailments are represented by conjunction. The latter method has the advantage of being immediately compatible with the logical operators of traditional logical semantics, such as negation and quantification.
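
The following toy sketch illustrates the second technique: counting, over shared argument pairs, how often one relation's instances also appear with another, as evidence for a directional implication such as write(X, Y) |= poss(X, Y). The triples are invented for the example; this is not the speaker's data or code.

```python
from collections import defaultdict

# (relation, X, Y) triples as read off parsed text by a wide-coverage parser.
triples = [
    ("write", "shakespeare", "macbeth"),
    ("poss",  "shakespeare", "macbeth"),
    ("write", "austen", "emma"),
    ("poss",  "austen", "emma"),
    ("poss",  "scotland", "macbeth"),   # possession without authorship
]

pairs = defaultdict(set)                 # relation -> set of (X, Y) argument pairs
for rel, x, y in triples:
    pairs[rel].add((x, y))

def implication_score(r1, r2):
    """Fraction of r1's argument pairs also seen with r2: evidence for r1 |= r2."""
    return len(pairs[r1] & pairs[r2]) / len(pairs[r1])

print(implication_score("write", "poss"))  # 1.0   -> write(X,Y) implies poss(X,Y)
print(implication_score("poss", "write"))  # ~0.67 -> the converse is weaker
```

The asymmetry of the two scores is what makes the extracted implications directional rather than mere paraphrases.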


About the Speaker:

Mark Steedman is Professor of Cognitive Science in the School of Informatics at the University of Edinburgh, to which he moved in 1998 from the University of Pennsylvania, where he taught for many years as Professor in the Department of Computer and Information Science. He is a Fellow of the British Academy, the Royal Society of Edinburgh, the American Association for Artificial Intelligence (AAAI), the Association for Computational Linguistics (ACL), and the Cognitive Science Society (CSS), and a Member of the European Academy. In 2018, he was the recipient of the ACL Lifetime Achievement Award.

His research covers a wide range of problems in computational linguistics, natural language processing, artificial intelligence, and cognitive science, including syntactic and semantic theory and the parsing and interpretation of natural language text and discourse (including spoken intonation) by humans and by machine. Much of his current research uses Combinatory Categorial Grammar (CCG) as a formalism to address problems in wide-coverage parsing for robust semantic interpretation and natural language inference, as well as the problem of inducing and generalizing semantic parsers, both from data and in child language acquisition. Some of his research concerns the analysis of music using related grammars and statistical parsing models.