Royal Sequiera, Master's candidate
David R. Cheriton School of Computer Science
With the advent of deep learning methods, researchers are abandoning decades-old work in Natural Language Processing (NLP). The research community has been increasingly moving away from otherwise dominant feature engineering approaches and gravitating toward more complicated neural architectures. Highly competitive tools such as part-of-speech (POS) taggers that exhibit human-like accuracy are traded for complex networks, with the hope that the neural network will learn the needed features on its own. In fact, there have been efforts to do NLP "from scratch" with neural networks that altogether eschew tools based on feature engineering (Collobert et al., 2011).
In our research, we modify the input that is fed to neural networks by annotating it with linguistic information: POS tags, named entity recognition (NER) output, linguistic relations, etc. With just the addition of these linguistic features to a simple Siamese convolutional neural network, we are able to achieve state-of-the-art results. We argue that this strikes a better balance between feature engineering and network engineering.
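The annotation idea above can be sketched as follows: each token's embedding is concatenated with a vector encoding a linguistic feature (here, a one-hot POS tag), so the network receives the enriched representation. This is a minimal illustrative sketch, not the authors' actual pipeline; the toy tagger, tag inventory, and two-dimensional "embeddings" are stand-ins for a real POS tagger and pretrained word vectors.

```python
# Sketch: augmenting token embeddings with linguistic annotations.
# POS tags are shown; NER labels or relation features would be
# concatenated the same way. All names here are illustrative.

# A toy POS lookup standing in for a real tagger.
TOY_POS = {"the": "DT", "cat": "NN", "sat": "VBD", "on": "IN", "mat": "NN"}
POS_TAGS = ["DT", "NN", "VBD", "IN"]  # illustrative tag inventory

def one_hot(tag):
    """One-hot vector over the (toy) tag inventory."""
    return [1.0 if tag == t else 0.0 for t in POS_TAGS]

def annotate(tokens, word_vectors):
    """Concatenate each token's embedding with its one-hot POS feature,
    so the network sees [word_vec ; pos_vec] at every position."""
    return [word_vectors[tok] + one_hot(TOY_POS.get(tok, "")) for tok in tokens]

# Tiny 2-d "embeddings" for demonstration only.
vecs = {"the": [0.1, 0.2], "cat": [0.5, 0.1], "sat": [0.3, 0.9]}
inputs = annotate(["the", "cat", "sat"], vecs)
# Each row now has 2 embedding dims + 4 POS dims = 6 features per token.
```

Both branches of the Siamese network would then consume these enriched inputs in place of plain word embeddings, leaving the network architecture itself unchanged.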