Complete NLP Machine Learning In One Shot
Start Contributing in Open Source Projects
The-Grand-Complete-Data-Science-Materials
github.com/krishnaik06/The-Gr...
Timestamps
00:00:00 Roadmap To Learn NLP
00:16:00 Practical Usecases Of NLP
00:21:23 Tokenization And Basic Terminologies
00:31:31 Tokenization Practicals
00:45:11 Text Preprocessing Stemming Using NLTK
01:02:02 Text Preprocessing Lemmatization Using NLTK
01:14:40 Stopwords, Parts Of Speech, Named Entity Recognition
01:50:50 Different Types Of Encoding
02:45:18 Word Embedding, Word2vec
03:29:37 Skipgram Indepth Intuition
03:39:06 Average Word2vec With Implementation
------------------------------------------------------------------------------------------------------
Complete Langchain Playlist:
• Amazing Langchain Seri...
--------------------------------------------------------------------------------------------------------
Support me by joining the membership so that I can upload more videos like this
/ @krishnaik06
------------------------------------------------------------------------------------------------------------------------------
►Data Science Projects:
• Now you Can Crack Any ...
►Learn In One Tutorials
Statistics in 6 hours: • Complete Statistics Fo...
Machine Learning In 6 Hours: • Complete Machine Learn...
Deep Learning 5 hours : • Deep Learning Indepth ...
►Learn In a Week Playlist
Statistics: • Live Day 1- Introducti...
Machine Learning : • Announcing 7 Days Live...
Deep Learning: • 5 Days Live Deep Learn...
NLP : • Announcing NLP Live co...
►Detailed Playlist:
Python Detailed Playlist: • Complete Road Map To B...
Python Playlist in Hindi: • Tutorial 1- Python Ove...
Stats For Data Science In Hindi : • Starter Roadmap For Le...
Machine Learning In English : • Complete Road Map To B...
Machine Learning In Hindi : • Introduction To Machin...
Complete Deep Learning: • Why Deep Learning Is B...
I watched this video entirely; it's very useful for me. I paid 55K for a data science course, and I am learning from here. You are much better than anyone there. Thank you Krish 😍
Excellent video. Just started with it, and I'm already clear about the basics of NLP.
Thank you for uploading such nice and comprehensive lectures (not just videos) and explaining everything so nicely. Your commitment is quite commendable. Please make such one-shot videos in the future too.
Thank you for being a guiding light in the vast sea of information, providing clarity and understanding to those who seek knowledge. Your commitment to the betterment of individuals and society as a whole is truly uplifting.
Aww, you did an incredible job, Krish! I was fully engaged for the entire 4 hours and didn't get bored once.
Who needs institutes when Krish sir is ready to give everyone so many free resources?
Thanks for all the amazing information you keep sharing on YouTube. Please keep up the excellent work; your data science knowledge is a great help to the community of aspiring data scientists.
Helped to revise the concepts. Thank you Krish sir
Thank you so much, Krish Sir, for this amazing video.
Thanks for sharing and educating us. Keep it up.
Thank you so much for this tutorial.
Thanks for all you do Krish 🙏
Thank you for this awesome one-shot video on NLP.
Thank you for this amazing content
Thank you sir. It was very useful to have it all in a single video. All concepts were very clearly explained. God bless you sir. I am 45 and trying to learn AI/ML 😊
Excellent roadmap. Really looking ahead.
best course, it really helped me!
NLTK is a comprehensive and educational toolkit suitable for a wide range of NLP tasks, while SpaCy is a focused and efficient library designed for production use, particularly for tasks like entity recognition and part-of-speech tagging.
Great video Krish ❤ much needed
Thank you Krish, a friend introduced me to your videos. Very wonderful and educative. Akpe na wo, Mawu ne yra wo (Ewe: thank you, may God bless you) 💯
i enjoyed this video, thanks Krish
This video was really needed
Outstanding series
Awesome! Thanks Krish sir!
Thank you sir ❤
Upon reading the documentation and the paper on CBOW, I have two questions.

1. You explained that when we choose a window size, for example 3, we take 3 consecutive words from the corpus, take the middle word as the target word, and use the words before and after it (1 each in this case) as context for the target word. However, the documentation says that the window size determines the number of words taken before and after the target word. So, for example, if we take window_size = 3, we take 3 words before and 3 words after the target word as context.

2. We can choose the hidden layer to be any size. It does not have to match the window size, since the input layer takes the average or sum of the input vectors, and hence its size is always [1 x V], where V is the vocabulary size. The input-to-hidden matrix is of size [V x N], where N is the hidden-layer size; the hidden-to-output matrix is of size [N x V]; and finally the output layer is [V x 1].

Can you please clarify my doubts here?
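The shape bookkeeping in question 2 can be sketched in a few lines of NumPy. This is only an illustrative sketch: the sizes V, N, the window, and the context indices are made-up values, not taken from the video.

```python
import numpy as np

V = 10      # vocabulary size (length of a one-hot vector)
N = 4       # hidden-layer size; independent of the window size
window = 2  # words taken on EACH side of the target (the documented convention)

W_in = np.random.rand(V, N)    # input -> hidden weights, shape (V, N)
W_out = np.random.rand(N, V)   # hidden -> output weights, shape (N, V)

context_ids = [1, 3, 5, 7]     # 2 * window context word indices (illustrative)
h = W_in[context_ids].mean(axis=0)   # averaged context embedding, shape (N,)
scores = h @ W_out                   # unnormalized scores over the vocab, shape (V,)
probs = np.exp(scores) / np.exp(scores).sum()  # softmax over the target word
```

Because the context vectors are averaged before the hidden layer, N really can be chosen freely, and the output is always a length-V distribution over the vocabulary.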
I also have confusion at this step; I can't figure out the output dimension.
Can you please create an end-to-end project with real-time data, i.e., using Kafka for streaming, Django for the backend, and at least Kubeflow for tracking? 😊 I'd appreciate it.
We really want this kind of project.
On iNeuron there is a course on end-to-end data science projects. You can check it out.
Awesome session
Goddamn, that is an incredible speech.
NLTK is widely used in research, while spaCy focuses on production usage.
At 2:30:20 you finish the BOW video and transition into the TF-IDF video, and you say that you mentioned N-grams in the previous video. I didn't find the N-grams video. Am I missing something?
Yes! The N-gram topic was skipped.
most waited...♥
Hey Krish, I think you can read my mind 😅. Thank you for the video.
Amazing videos. Curious to know which app/tool you use for creating notes
hello krish , how to handle curse dimensionality in huge corpus?
However, the process of tokenization usually goes from a larger unit to a smaller one, not the other way around. So it's more common to tokenize a paragraph into sentences, or a sentence into words, rather than tokenizing sentences into a paragraph. At 37:21 your comment says ## sentence to paragraph tokenization, but you ended up using sent_tokenize(), which accepts a paragraph and breaks it down into smaller sentences.
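As a rough illustration of the paragraph-to-sentence direction the comment describes, here is a naive regex splitter. NLTK's sent_tokenize uses a trained Punkt model instead, which additionally handles abbreviations and other edge cases this sketch ignores.

```python
import re

def naive_sent_tokenize(paragraph):
    """Split a paragraph into sentences on ., ! or ? followed by whitespace.
    A toy stand-in for nltk.sent_tokenize, for illustration only."""
    return [s for s in re.split(r'(?<=[.!?])\s+', paragraph.strip()) if s]

naive_sent_tokenize("NLP is fun. Tokenization splits text! Does it work?")
# -> ['NLP is fun.', 'Tokenization splits text!', 'Does it work?']
```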
you are the best
Great insight, what is the name of that digital board that you're using to capture your illustrated drawings?
Hello Krish, are you starting any new batch for data science? Please let me know, and if possible, please share any link regarding it! Thank you.
Hello Krish, a small problem: at time 2:30:21 the N-gram explanation video is skipped. Please add the video corresponding to N-grams. Thank you.
needed one on GenAI
That Arabic was "Kayfa Haalak" ("How are you?"), in case someone was interested in the pronunciation.
How is part-of-speech tagging going to work for ungrammatical sentences? Some words' part of speech may depend on the context and semantics of the sentence as well, right?
In NLP, do we use standardization or normalization for text data?
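For raw text, "normalization" usually means cleaning the strings themselves (lowercasing, stripping punctuation, collapsing whitespace) rather than the numeric standardization/min-max scaling used on tabular features. A minimal sketch of such a cleaning step; the exact rules here are an assumption, since real pipelines vary:

```python
import re

def normalize_text(text):
    """A common (assumed) text-normalization pass: lowercase,
    replace punctuation with spaces, collapse whitespace."""
    text = text.lower()
    text = re.sub(r'[^a-z0-9\s]', ' ', text)
    return re.sub(r'\s+', ' ', text).strip()

normalize_text("Hello, World!!  NLP rocks.")
# -> 'hello world nlp rocks'
```

Numeric scaling only becomes relevant after the text has been vectorized (e.g. TF-IDF already applies its own weighting).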
Great video Krish! Somehow N-grams was skipped; can you please add it?
Thanks for this video. N-grams is missing from it. Could you please upload that separately and share the link?
Where can I find the next part, i.e., the practical implementation of Word2Vec with model training from scratch using gensim or GloVe? Also the practical implementations of TF-IDF and BOW. Please share those videos as well.
Nice
Really nice explanation, brother. Can you please share your notes?
Okay, for lemmatization, how would we find whether the POS is a noun, verb, adjective, or anything else for a big corpus? Because we can't explicitly check all the types, right?
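In practice this is automated: you POS-tag the corpus first (e.g. with nltk.pos_tag) and map each Penn Treebank tag to the WordNet POS letter that WordNetLemmatizer expects. A minimal sketch of that mapping; the NLTK calls are left as comments because they assume the relevant NLTK data packages have been downloaded:

```python
def penn_to_wordnet(tag):
    """Map a Penn Treebank POS tag (e.g. 'VBD', 'NNS') to the
    WordNet POS letter expected by WordNetLemmatizer's `pos` argument."""
    if tag.startswith('J'):
        return 'a'   # adjective
    if tag.startswith('V'):
        return 'v'   # verb
    if tag.startswith('R'):
        return 'r'   # adverb
    return 'n'       # default: noun

# With NLTK (requires punkt, averaged_perceptron_tagger, and wordnet data):
# from nltk import pos_tag, word_tokenize
# from nltk.stem import WordNetLemmatizer
# lemmatizer = WordNetLemmatizer()
# tagged = pos_tag(word_tokenize("the striped bats were hanging"))
# lemmas = [lemmatizer.lemmatize(w, penn_to_wordnet(t)) for w, t in tagged]
```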
Hey, nice, but this is an old video, sir. I am one of your very old subscribers, so I remember you uploaded this video last year. At the 16:26 mark, please check the time at the bottom right of your screen when you open Gmail: 17/10/2022. I really respect you, Sir Krish Naik, because I have learned a lot from you; you are one of my best online professors. So please don't mind my comment; if you do, I'm sorry, and please accept my apologies. My main request is: if you upload a new video on NLP, it would be very helpful for us.
Sir will you also cover deep learning models in nlp?
please do include transformers and bert related part too
What software tools and gadgets do you use to present, writing with a pen?
💯💯
Petition to upload the deep learning part of NLP ASAP; I have a college exam next month.
First to comment! The best Data Science, AI, and Machine Learning teacher of all time. Straight outta Kenya ❤
Hello sir, can you help me with my final-year project? I'm trying to build a website where users upload resumes in PDF format, and an admin classifies all resumes into predefined categories and ranks them against job descriptions using cosine similarity. They said we can't use external libraries. Can you help me out?
The N-grams topic is missing. Could you kindly check the video again?
Sir, can you tell me how I can use the Azure OpenAI API for LLMs in Python?
Can anybody please tell me how to enable extension support, like code completion, in JupyterLab? I have searched Stack Overflow, but all my efforts have been in vain.
Can you please provide notes for this video
what are tools used in preparing the video?
n-grams tutorial video is missing
Awesome
Sir, the N-grams topic is skipped. Please discuss it again or fix this video.
Please share the link to this notebook.
Please make same one shot video on Deep Learning ❤
It's already there.
🙏👍
i too got here one of the first
Where are the missing parts for this?
Where is Transformers and BERT?
Watched video
Hi Krish , Can you please continue the LLM playlist
Yes please
Can you please share the playlist link?
N grams is skipped! 😢
So this is a combination of your old videos, right?
No, these are newly recorded videos.
@krishnaik06 Thank you! I was about to skip this, assuming these were the previous videos.
Can you help? I am getting an error while installing the gensim package:
Building wheel for gensim (pyproject.toml): started
Building wheel for gensim (pyproject.toml): finished with status 'error'