Publications

WriterForcing: Generating more interesting story endings

Published in Storytelling workshop, ACL, 2019

This project generates diverse and interesting story endings by forcing the model to attend to keywords present in the story. It builds on the standard attention mechanism of sequence-to-sequence models, adding an Inverse Token Frequency (ITF) loss and a keyword "forcing" loss to produce more interesting endings for a given story context.

Code available here

View results here
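As a rough sketch of the ITF weighting described above (the exact weighting formula and the `itf_weights`/`itf_nll` helpers here are assumptions for illustration, not the paper's implementation):

```python
import math
from collections import Counter

def itf_weights(corpus_tokens, eps=1.0):
    """Inverse-token-frequency weights: rare tokens get larger weights,
    so the model is rewarded for generating less generic words.
    (Assumed form; the paper's exact scaling may differ.)"""
    counts = Counter(corpus_tokens)
    return {tok: 1.0 / math.log(eps + c) for tok, c in counts.items()}

def itf_nll(token_log_probs, targets, weights):
    """Negative log-likelihood where each target token's loss is
    scaled by its ITF weight, normalized by the total weight."""
    num = sum(weights[t] * -lp for lp, t in zip(token_log_probs, targets))
    den = sum(weights[t] for t in targets)
    return num / den
```

Frequent tokens like "the" receive small weights, so a model minimizing this loss gains more by getting rare, content-bearing tokens right.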

Dr.Quad at MEDIQA 2019: Towards Textual Inference and Question Entailment using contextualized representations

Published in MEDIQA workshop, ACL, 2019

Presents an in-depth study of using textual entailment in the medical domain to incorporate domain knowledge into state-of-the-art systems. We use state-of-the-art BERT models to perform both question entailment and natural language inference on sentence pairs, then combine the outputs of both models to filter relevant answers for a given question.
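A minimal sketch of the answer-filtering step (the `qe_score`/`nli_score` callables stand in for the two BERT models, and the equal-weight combination and threshold are assumptions, not the paper's exact re-ranking):

```python
def filter_answers(question, candidates, qe_score, nli_score, threshold=0.5):
    """Rank candidate answers by a combined question-entailment and
    inference score, then keep only those above a threshold.
    Both scorers return a value in [0, 1] for a (question, answer) pair."""
    scored = [
        (ans, 0.5 * qe_score(question, ans) + 0.5 * nli_score(question, ans))
        for ans in candidates
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [ans for ans, score in scored if score >= threshold]
```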

Quantifying the Effect of In-Domain Distributed Word Representations: A Study of Privacy Policies

Published in PAL, AAAI Spring Symposium, 2018

A detailed study on the impact of in-domain word embeddings on understanding and interpreting privacy policies. We visualized the word embeddings to identify clusters of words that are specific to privacy policies.
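As an illustrative sketch of how such embedding clusters can be inspected (pure-Python cosine similarity; the helper names are assumptions, and the study itself relied on visualization rather than this exact code):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_neighbors(word, embeddings, k=5):
    """Closest words to `word` under cosine similarity; a tight
    neighborhood of policy-specific terms suggests an in-domain cluster."""
    target = embeddings[word]
    others = ((w, cosine(target, v)) for w, v in embeddings.items() if w != word)
    return sorted(others, key=lambda pair: pair[1], reverse=True)[:k]
```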

Neu0

Published in ICLR Workshop, 2017

Used state-of-the-art deep-learning models at the time to research and develop neural computational models capable of executing code. Conceptualized "Program Embeddings", a vectorized representation of assembly language program statements (e.g. ARM, MIPS). Augmented the Neural Turing Machine (NTM) with novel ways to access a large main memory, a fuzzy register bank, and an instruction bank. Ensembled neural networks, with execution governed by the NTM controller and a program counter, to learn to execute ARM code from examples.

Code available here

View results here
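A toy sketch of the "Program Embeddings" idea, turning an assembly statement into a fixed-size vector (the hashing scheme here is purely illustrative; the actual embeddings were learned end-to-end, and `embed_statement` is an assumed helper name):

```python
import hashlib

def embed_statement(stmt, dim=8):
    """Map an assembly statement (e.g. 'ADD r0, r1, r2') to a fixed-size
    vector by hashing each token into a bucket of the vector.
    Purely illustrative: the paper learned embeddings, not hashed them."""
    vec = [0.0] * dim
    for tok in stmt.replace(",", " ").split():
        h = int(hashlib.md5(tok.lower().encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec
```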