Deep Learning Approaches to Text Production. Shashi Narayan

Title: Deep Learning Approaches to Text Production
Author: Shashi Narayan
Series: Synthesis Lectures on Human Language Technologies
ISBN: 9781681738215




       8.2 Overview of Covered Neural Generators

       8.3 Two Key Issues with Neural NLG

       8.4 Challenges

       8.5 Recent Trends in Neural NLG

       Bibliography

       Authors’ Biographies

       List of Figures

       1.1 Input contents and communicative goals for text production

       1.2 Shallow dependency tree from generation challenge surface realisation task

       1.3 Example input from the SemEval AMR-to-Text Generation Task

       1.4 E2E dialogue move and text

       1.5 Data-to-Text example input and output pair

       2.1 A Robocup input and output pair example

       2.2 Data-to-text: A pipeline architecture

       2.3 Simplifying a sentence

       2.4 A Sentence/Compression pair

       2.5 Abstractive vs. extractive summarisation

       2.6 A document/summary pair from the CNN/DailyMail data set

       2.7 Key modules in pre-neural approaches to text production

       3.1 Deep learning for text generation

       3.2 Feed-forward neural network or multi-layer perceptron

       3.3 Convolutional neural network for sentence encoding

       3.4 RNNs applied to a sentence

       3.5 Long-range dependencies

       3.6 Sketches of LSTM and GRU cells

       3.7 Two-dimensional representation of word embeddings

       3.8 RNN-based encoder-decoder architecture

       3.9 The German word “Zeit” with its two translations

       3.10 Bidirectional RNNs applied to a sentence

       3.11 RNN decoding steps (Continues.)

       3.12 (Continued.) RNN decoding steps

       3.13 RNN decoding: conditional generation at each step

       4.1 Example input/output with missing, added, or repeated information

       4.2 Focusing on the relevant source word

       4.3 Sketch of the attention mechanism

       4.4 Example delexicalisations from the E2E and WebNLG data sets

       4.5 Interactions between slot values and sentence planning

       4.6 Example of generated text containing repetitions

       4.7 The impact of coverage on repetition

       4.8 Evolution of the DA vector as generation progresses

       5.1 Bidirectional RNN modelling a document as a sequence of tokens

       5.2 Linearising AMR for text production

       5.3 Linearising RDF to prepare input-output pairs for text production

       5.4 Linearising dialogue moves for response generation

       5.5 Linearising Wikipedia descriptions for text generation

       5.6 Hierarchical document representation for abstractive document summarisation

       5.7 Multi-agent document representation for abstractive document summarisation

       5.8 Communication among multiple agents, each encoding a paragraph

       5.9 Extractive summarisation with a hierarchical encoder-decoder model

       5.10 Graph-state LSTMs for text production from AMR graphs

       5.11 Graph-triple encoder for text production from RDF triple sets

       5.12 Graph convolutional networks for encoding a sentence

       6.1 An example of abstractive sentence summarisation

       6.2 Selective encoding for abstractive sentence summarisation

       6.3 Heat map learned with the selective gate mechanism

       6.4 Two-step process for content selection and summary generation

       6.5 Graph-based attention to select salient sentences for abstractive summarisation

       6.6 Generating