Annette-s-Responses


July 29, 2020

Using NLP to build a sarcasm classifier

  1. Question 1: Pick two or three news sources and select a few news titles from their feeds (about 5 is likely enough). For example, you could select CNN, Fox News, MSNBC, NPR, PBS, Al Jazeera, RT (Russia Today), Deutsche Welle, Facebook, BBC, France24, CCTV, NHK World, or another source you wish to analyze. Run your sarcasm model to predict whether the titles are interpreted as sarcastic or not. Analyze the results and comment on the different news sources you have selected.
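Before headlines can be scored, they have to go through the same preprocessing the model was trained with: tokenize into integer IDs, then pad to a fixed length. The sketch below mimics that pipeline in plain Python; the vocabulary, `max_length`, and corpus are illustrative stand-ins, not the trained sarcasm model from the exercise.

```python
from collections import Counter

def build_word_index(corpus, oov_token="<OOV>"):
    """Mimic a Keras-style Tokenizer: most-frequent-first word index,
    with index 1 reserved for out-of-vocabulary words."""
    counts = Counter(w for text in corpus for w in text.lower().split())
    index = {oov_token: 1}
    for i, (word, _) in enumerate(counts.most_common(), start=2):
        index[word] = i
    return index

def texts_to_padded(texts, word_index, max_length):
    """Convert headlines to fixed-length integer sequences (post-padded
    with zeros, truncated if too long)."""
    seqs = []
    for text in texts:
        ids = [word_index.get(w, 1) for w in text.lower().split()]
        seqs.append((ids + [0] * max_length)[:max_length])
    return seqs

# Toy training corpus and a new headline to score (illustrative only).
corpus = ["scientists discover new planet", "local man discover nothing new"]
word_index = build_word_index(corpus)
padded = texts_to_padded(["aliens discover new planet"], word_index, max_length=6)
# "aliens" is unseen, so it maps to the OOV index (1); the tail is zero-padded.
```

The padded integer sequences are what would be passed to `model.predict()`; a sigmoid output above 0.5 is then read as "sarcastic."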

Text generation with an RNN

  1. Question 2: Use the generate_text() command at the end of the exercise to produce synthetic output from your RNN model. Run it a second time and review the output. How has your RNN model been able to “learn” and “remember” the Shakespeare text in order to reproduce a similar output?
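What generate_text() does internally is an autoregressive loop: the model predicts a distribution over the next character, one character is sampled from it (scaled by a temperature), and that character is fed back in as the next input. The sketch below shows that loop shape; the "model" is a hypothetical bigram-style stand-in, not the trained RNN, and the temperature trick mirrors the tutorial's scaling of predictions before sampling.

```python
import random

VOCAB = list("helo ")

def stub_next_char_probs(last_char):
    """Toy stand-in for the RNN: each character strongly prefers one
    successor, with a little probability spread over the rest."""
    prefs = {"h": "e", "e": "l", "l": "o", "o": " ", " ": "h"}
    probs = [0.02] * len(VOCAB)
    probs[VOCAB.index(prefs[last_char])] = 1.0 - 0.02 * (len(VOCAB) - 1)
    return probs

def sample(probs, temperature, rng):
    """Sample an index; lower temperature sharpens the distribution."""
    weights = [p ** (1.0 / temperature) for p in probs]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(probs) - 1

def generate(seed_char, length, temperature=0.5, seed=0):
    """The autoregressive loop: predict, sample, feed the choice back in."""
    rng = random.Random(seed)
    out = [seed_char]
    for _ in range(length):
        probs = stub_next_char_probs(out[-1])
        out.append(VOCAB[sample(probs, temperature, rng)])
    return "".join(out)
```

Running generate() twice with different seeds gives different but similarly "Shakespeare-flavored" text, which is exactly why two runs of generate_text() produce related yet non-identical output.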

Neural machine translation with attention

  1. Question 3: Use the translate() command at the end of the exercise to translate three sentences from Spanish to English. How did your translations turn out?
    • With the example phrases in the TensorFlow exercise, the model was able to translate the sentences quite well. However, when I added my own sentences (which were a little more advanced than the ones in the exercise), the model was only able to accurately translate the easiest of the three. The other sentences were not even close to the English translation. For example, sentence 1 was supposed to be translated from:

[image: Spanish input for sentence 1]

to “Did he/she have to go to the doctor or hospital?”

Sentence 2 was the only sentence translated correctly:

[image: Spanish input and model translation for sentence 2]

Sentence 3 was supposed to be translated from:

[image: Spanish input for sentence 3]

to “To whom do I leave the $20?”
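The translation quality above comes down to the attention step: at each decoder step, every encoder state is scored against the current decoder state, the scores are softmaxed into weights, and a context vector is built as the weighted sum. Below is a minimal, Bahdanau-style additive-attention sketch in plain Python with scalar weights; the vectors and weight values are illustrative, not learned parameters from the exercise.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def additive_score(enc_state, dec_state, w1, w2, v):
    # Bahdanau-style score: v . tanh(w1*enc + w2*dec), with scalar
    # weights standing in for the learned weight matrices.
    return sum(vi * math.tanh(w1 * e + w2 * d)
               for vi, e, d in zip(v, enc_state, dec_state))

def attention(encoder_states, dec_state, w1=0.5, w2=0.5, v=(1.0, 1.0)):
    """Score every encoder state, softmax into weights, build context."""
    scores = [additive_score(h, dec_state, w1, w2, v) for h in encoder_states]
    weights = softmax(scores)
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(len(dec_state))]
    return weights, context

# One (illustrative) encoder state per source token.
enc = [(0.1, 0.9), (0.8, 0.2), (0.4, 0.4)]
weights, context = attention(enc, dec_state=(0.7, 0.1))
# weights sum to 1; the context vector mixes encoder states by relevance.
```

When a test sentence is longer or uses constructions absent from the training data, these weights spread out or focus on the wrong source tokens, which is one plausible reason the harder sentences above translated poorly.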