Code 7 Landmark NLP Papers in PyTorch (Full NMT Course)

This course is a comprehensive journey through the evolution of sequence models and neural machine translation (NMT). It blends historical breakthroughs, architectural innovations, mathematical insights, and hands-on PyTorch replications of landmark papers that shaped modern NLP and AI.

The course features:
– A detailed narrative tracing the history and breakthroughs of RNNs, LSTMs, GRUs, Seq2Seq, Attention, GNMT, and Multilingual NMT.
– Replications of 7 landmark NMT papers in PyTorch, so learners can code along and rebuild history step by step (a minimal Seq2Seq sketch follows this list).
– Explanations of the math behind RNNs, LSTMs, GRUs, and Transformers.
– Conceptual clarity with architectural comparisons, visual explanations, and interactive demos like the Transformer Playground.
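
To give a sense of what the PyTorch labs build toward, here is a minimal encoder–decoder (Seq2Seq) sketch in the spirit of the Cho et al. (2014) and Sutskever et al. (2014) replications. The layer sizes, vocabulary sizes, and start-token id below are illustrative placeholders, not values taken from the course repository.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Embeds source tokens and compresses the sentence into a fixed-size context vector.
    def __init__(self, src_vocab, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(src_vocab, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                          # src: (batch, src_len)
        outputs, hidden = self.rnn(self.embedding(src))
        return hidden                                # (1, batch, hid_dim) context

class Decoder(nn.Module):
    # Predicts target tokens one step at a time, conditioned on the encoder context.
    def __init__(self, tgt_vocab, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(tgt_vocab, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, tgt_token, hidden):            # tgt_token: (batch, 1)
        output, hidden = self.rnn(self.embedding(tgt_token), hidden)
        return self.out(output.squeeze(1)), hidden   # logits over the target vocab

# Tiny smoke test with random token ids (placeholder vocab sizes).
enc, dec = Encoder(src_vocab=1000), Decoder(tgt_vocab=1200)
context = enc(torch.randint(0, 1000, (2, 7)))        # batch of 2 source sentences
logits, _ = dec(torch.zeros(2, 1, dtype=torch.long), context)
print(logits.shape)                                  # torch.Size([2, 1200])

The labs in the course grow this basic pattern into the attention, large-vocabulary, GNMT, and multilingual models listed in the chapters below.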

๐ŸŒ Atlas Page: https://programming-ocean.com/knowledge-hub/neural-machine-translation-atlas.php

💻 Code Source on GitHub: https://github.com/MOHAMMEDFAHD/Pytorch-Collections/tree/main/Neural-Machine-Translation

โค๏ธ Support for this channel comes from our friends at Scrimba โ€“ the coding platform that’s reinvented interactive learning: https://scrimba.com/freecodecamp

โญ๏ธ Chapters โญ๏ธ
– 0:01:06 Welcome
– 0:04:27 Intro to Atlas
– 0:09:25 Evolution of RNN
– 0:15:08 Evolution of Machine Translation
– 0:26:56 Machine Translation Techniques
– 0:34:28 Long Short-Term Memory (Overview)
– 0:52:36 Learning Phrase Representation using RNN (Encoder–Decoder for SMT)
– 1:00:46 Learning Phrase Representation (PyTorch Lab – Replicating Cho et al., 2014)
– 1:23:45 Seq2Seq Learning with Neural Networks
– 1:45:06 Seq2Seq (PyTorch Lab – Replicating Sutskever et al., 2014)
– 2:01:45 NMT by Jointly Learning to Align (Bahdanau et al., 2015)
– 2:32:36 NMT by Jointly Learning to Align & Translate (PyTorch Lab – Replicating Bahdanau et al., 2015)
– 2:42:45 On Using Very Large Target Vocabulary
– 3:03:45 Large Vocabulary NMT (PyTorch Lab – Replicating Jean et al., 2015)
– 3:24:56 Effective Approaches to Attention (Luong et al., 2015)
– 3:44:06 Attention Approaches (PyTorch Lab – Replicating Luong et al., 2015)
– 4:03:17 Long Short-Term Memory Network (Deep Explanation)
– 4:28:13 Attention Is All You Need (Vaswani et al., 2017)
– 4:47:46 Google Neural Machine Translation System (GNMT – Wu et al., 2016)
– 5:12:38 GNMT (PyTorch Lab – Replicating Wu et al., 2016)
– 5:29:46 Google's Multilingual NMT (Johnson et al., 2017)
– 6:00:46 Multilingual NMT (PyTorch Lab – Replicating Johnson et al., 2017)
– 6:15:49 Transformer vs GPT vs BERT Architectures
– 6:36:38 Transformer Playground (Tool Demo)
– 6:38:31 Seq2Seq Idea from Google Translate Tool
– 6:49:31 RNN, LSTM, GRU Architectures (Comparisons)
– 7:01:08 LSTM & GRU Equations

🎉 Thanks to our Champion and Sponsor supporters:
👾 Drake Milly
👾 Ulises Moralez
👾 Goddard Tan
👾 David MG
👾 Matthew Springman
👾 Claudio
👾 Oscar R.
👾 jedi-or-sith
👾 Nattira Maneerat
👾 Justin Hual

Learn to code for free and get a developer job: https://www.freecodecamp.org

Read hundreds of articles on programming: https://freecodecamp.org/news

#programming #freecodecamp #learn #learncode #learncoding
