This course is a comprehensive journey through the evolution of sequence models and neural machine translation (NMT). It blends historical breakthroughs, architectural innovations, mathematical insights, and hands-on PyTorch replications of landmark papers that shaped modern NLP and AI.
The course features:
– A detailed narrative tracing the history and breakthroughs of RNNs, LSTMs, GRUs, Seq2Seq, Attention, GNMT, and Multilingual NMT.
– Replications of 7 landmark NMT papers in PyTorch, so learners can code along and rebuild history step by step.
– Explanations of the math behind RNNs, LSTMs, GRUs, and Transformers.
– Conceptual clarity with architectural comparisons, visual explanations, and interactive demos like the Transformer Playground.
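
To give a feel for the style of the PyTorch labs, here is a minimal GRU encoder–decoder sketch. It is illustrative only: the class names, dimensions, and toy tensors below are placeholder assumptions and are not taken from the course repository.

    # Minimal GRU encoder-decoder sketch (illustrative; names and sizes are
    # placeholders, not the course repository's code).
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

        def forward(self, src):                     # src: (batch, src_len)
            _, hidden = self.rnn(self.embed(src))   # hidden: (1, batch, hid_dim)
            return hidden

    class Decoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, tgt, hidden):             # tgt: (batch, tgt_len)
            output, hidden = self.rnn(self.embed(tgt), hidden)
            return self.out(output), hidden         # logits: (batch, tgt_len, vocab)

    # Toy forward pass: encode a source batch, then decode a target batch.
    enc, dec = Encoder(vocab_size=1000), Decoder(vocab_size=1000)
    src = torch.randint(0, 1000, (2, 7))            # 2 sentences, 7 tokens each
    tgt = torch.randint(0, 1000, (2, 5))
    logits, _ = dec(tgt, enc(src))
    print(logits.shape)                             # torch.Size([2, 5, 1000])

The paper replications in the course build on this basic encoder–decoder shape and progressively add attention, larger vocabularies, and multilingual training.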
🔗 Atlas Page: https://programming-ocean.com/knowledge-hub/neural-machine-translation-atlas.php
💻 Source code on GitHub: https://github.com/MOHAMMEDFAHD/Pytorch-Collections/tree/main/Neural-Machine-Translation
❤️ Support for this channel comes from our friends at Scrimba – the coding platform that's reinvented interactive learning: https://scrimba.com/freecodecamp
⭐️ Chapters ⭐️
⌨️ 0:01:06 Welcome
⌨️ 0:04:27 Intro to Atlas
⌨️ 0:09:25 Evolution of RNN
⌨️ 0:15:08 Evolution of Machine Translation
⌨️ 0:26:56 Machine Translation Techniques
⌨️ 0:34:28 Long Short-Term Memory (Overview)
⌨️ 0:52:36 Learning Phrase Representations using RNN (Encoder–Decoder for SMT)
⌨️ 1:00:46 Learning Phrase Representations (PyTorch Lab – Replicating Cho et al., 2014)
⌨️ 1:23:45 Seq2Seq Learning with Neural Networks
⌨️ 1:45:06 Seq2Seq (PyTorch Lab – Replicating Sutskever et al., 2014)
⌨️ 2:01:45 NMT by Jointly Learning to Align & Translate (Bahdanau et al., 2015)
⌨️ 2:32:36 NMT by Jointly Learning to Align & Translate (PyTorch Lab – Replicating Bahdanau et al., 2015)
⌨️ 2:42:45 On Using Very Large Target Vocabulary (Jean et al., 2015)
⌨️ 3:03:45 Large Vocabulary NMT (PyTorch Lab – Replicating Jean et al., 2015)
⌨️ 3:24:56 Effective Approaches to Attention (Luong et al., 2015)
⌨️ 3:44:06 Attention Approaches (PyTorch Lab – Replicating Luong et al., 2015)
⌨️ 4:03:17 Long Short-Term Memory Network (Deep Explanation)
⌨️ 4:28:13 Attention Is All You Need (Vaswani et al., 2017)
⌨️ 4:47:46 Google Neural Machine Translation System (GNMT – Wu et al., 2016)
⌨️ 5:12:38 GNMT (PyTorch Lab – Replicating Wu et al., 2016)
⌨️ 5:29:46 Google's Multilingual NMT (Johnson et al., 2017)
⌨️ 6:00:46 Multilingual NMT (PyTorch Lab – Replicating Johnson et al., 2017)
⌨️ 6:15:49 Transformer vs GPT vs BERT Architectures
⌨️ 6:36:38 Transformer Playground (Tool Demo)
⌨️ 6:38:31 Seq2Seq Idea from the Google Translate Tool
⌨️ 6:49:31 RNN, LSTM, GRU Architectures (Comparisons)
⌨️ 7:01:08 LSTM & GRU Equations
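
As a preview of that final chapter, here is one LSTM time step written out directly from the standard gate equations, in the same spirit as the PyTorch labs. This is a hedged sketch: the function, variable names, and toy sizes are placeholders, not the video's code.

    # One LSTM time step from the gate equations (teaching sketch; names are illustrative).
    import torch

    def lstm_step(x, h, c, W_ih, W_hh, b):
        # Gates: input (i), forget (f), cell candidate (g), output (o)
        gates = x @ W_ih.T + h @ W_hh.T + b        # (batch, 4*hidden)
        i, f, g, o = gates.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        c_next = f * c + i * g                     # new cell state
        h_next = o * torch.tanh(c_next)            # new hidden state
        return h_next, c_next

    batch, input_dim, hidden_dim = 2, 8, 16
    x = torch.randn(batch, input_dim)
    h = torch.zeros(batch, hidden_dim)
    c = torch.zeros(batch, hidden_dim)
    W_ih = torch.randn(4 * hidden_dim, input_dim)
    W_hh = torch.randn(4 * hidden_dim, hidden_dim)
    b = torch.zeros(4 * hidden_dim)
    h_next, c_next = lstm_step(x, h, c, W_ih, W_hh, b)
    print(h_next.shape, c_next.shape)              # torch.Size([2, 16]) torch.Size([2, 16])

A GRU cell follows the same pattern with only update and reset gates and no separate cell state.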
🎉 Thanks to our Champion and Sponsor supporters:
👾 Drake Milly
👾 Ulises Moralez
👾 Goddard Tan
👾 David MG
👾 Matthew Springman
👾 Claudio
👾 Oscar R.
👾 jedi-or-sith
👾 Nattira Maneerat
👾 Justin Hual
—
Learn to code for free and get a developer job: https://www.freecodecamp.org
Read hundreds of articles on programming: https://freecodecamp.org/news
#programming #freecodecamp #learn #learncode #learncoding