Python Sentiment Analysis Project with NLTK and 🤗 Transformers. Classify Amazon Reviews!!
In this video you will go through a Natural Language Processing Python project, creating a sentiment analysis classifier with NLTK's VADER and Hugging Face RoBERTa Transformers. The project is to classify the sentiment of Amazon customer reviews. 🤗 provides some great open source models for NLP: huggingface.co/models. We will look at the difference between model outputs from the two packages and compare the results. Sentiment analysis is an important tool for data scientists to use in language modeling.
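If you want a quick taste before opening the notebook, here is a minimal sketch of the two approaches compared in the video, assuming the standard nltk and transformers APIs (the cardiffnlp checkpoint is a Twitter-trained RoBERTa sentiment model; swap in whichever checkpoint you prefer):

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from transformers import pipeline

nltk.download('vader_lexicon')  # lexicon used by VADER

text = "This oatmeal is great, my whole family loves it!"

# 1) NLTK VADER: rule/lexicon based, returns neg/neu/pos/compound scores
sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores(text))

# 2) Hugging Face RoBERTa fine-tuned for sentiment (assumed checkpoint)
sentiment = pipeline("sentiment-analysis",
                     model="cardiffnlp/twitter-roberta-base-sentiment")
print(sentiment(text))
```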
Link to Kaggle Notebook: www.kaggle.com/robikscube/sen...
Timeline:
00:00 Intro
01:10 Setup + NLTK
10:44 VADER Model
23:42 RoBERTa Model
35:51 Compare Results
Follow me on twitch for live coding streams: / medallionstallion_
My other videos:
Speed Up Your Pandas Code: • Make Your Pandas Code ...
Intro to Pandas video: • A Gentle Introduction ...
Exploratory Data Analysis Video: • Exploratory Data Analy...
Working with Audio data in Python: • Audio Data Processing ...
Efficient Pandas Dataframes: • Speed Up Your Pandas D...
* YouTube: youtube.com/@robmulla?sub_con...
* Discord: / discord
* Twitch: / medallionstallion_
* Twitter: / rob_mulla
* Kaggle: www.kaggle.com/robikscube
#nlp #python #machinelearning #huggingface
great content, this deserves a million views... {'roberta_neg': 0, 'roberta_neu': 0, 'roberta_pos': 100}😀
Haha. Best comment! Pinned.
Pos should be 1, since the maximum value is 1. lol
@@robmulla plz give ur what's app no
Good one!!😅
Thank you so much for this step by step process it has opened up all sorts of new analysis opportunities for our customer insights. Really well explained and easy to follow
I don't often leave comments on YouTube, but finally someone explains everything from scratch... I am a JS developer, and it's really cool that you explain every piece of code. That really helped; I was able to understand everything.
Hey! I really appreciate this comment. Thanks so much.
I like the pace at which you teach this content; it is relaxed and very enjoyable to watch.
Just completed it. I really enjoyed working on it. Your way of teaching is just awesome!
I'll admit I watched this at two times speed, but those were the best-spent 21 minutes of the day! Very helpful and well explained!
I find the topic really interesting; the way you explain it is articulate and takes a fundamentals-first approach.
Really interesting video. I've been following a lot of your tutorials lately and I must say that I really like the way you explain things, it's so easy to understand and follow along. Thank you!
Thanks so much for the feedback Juan. It's always hard to tell when I'm recording these if they are any good, so it's great to hear that it is helpful to you.
Good, very good video! You cannot imagine how valuable this kind of video is for someone like me who is trying to transition to data science...
Amazing content man! Your channel and videos deserve a lot more attention. Hope you have an amazing week!!
Thanks so much. I really appreciate the feedback. Please consider sharing the video with anyone else you think might learn from it.
I am so happy to have discovered your channel. Many thanks friend.
Great content. I am doing a project in my uni where I need to do sentiment analysis on book reviews. This helped me a lot. Thanks.
Your channel is a gem, thanks so much for the free course.
Glad you enjoyed it. Thanks for watching!
Great work 🎉🎉🎉🎉 Thank you for this amazing video. Your explanation, flow, content: everything is up to the mark 🚩
Great video. Your explanations were very clear and concise and easy to follow.
Rob, you are the Best! Thank you for all the quality content you are uploading! Greetings from Greece!
Thanks so much Pavlos for watching. Sending a 💙 to Greece.
Huge thank you to you!!! I recently participated in an ML hackathon and they had sentiment analysis as one of their problem statements. I had watched your video prior to the competition and used Hugging Face whereas everyone else used standard VADER. I ended up getting the highest accuracy and placed first, all in my second year of engineering. Genuinely can't thank you enough for the information! Team random_state42
So I found you here
This is so awesome! Thanks for sharing. I posted a screenshot of your comment on twitter, hope that's ok!
Btw, huge fan of your statistics notes Mr. Patawar, didn't expect to find you here.
@@bhaumik3118 I also study statistics from Mr. Patawar
nice man
Great video, I am starting to understand NLP much more. Thank you so much!
Thanks for such a wonderful tutorial. I used the shared data on my own in Google Colab and it worked well; I just had to download a few more NLTK resources for tokenization. Wonderful content and I truly enjoyed it.
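In case it helps anyone else on Colab, downloads along these lines usually cover the extra NLTK resources; the exact list depends on which steps of the notebook you run, so treat it as a guess:

```python
import nltk

nltk.download('punkt')                       # word/sentence tokenizer
nltk.download('averaged_perceptron_tagger')  # part-of-speech tagging
nltk.download('maxent_ne_chunker')           # named-entity chunking
nltk.download('words')                       # word list the chunker needs
nltk.download('vader_lexicon')               # VADER sentiment lexicon
```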
This video is incredibly helpful! Thanks!
Extremely useful, super easy to understand! Thank you so much for a great and valuable video!!
Really appreciate the feedback. Comments like this make me want to keep making more videos!
A great video! Many thanks for your valuable content.❤
This may be the best tutorial on any language/library/app I have ever watched. All in one part, very concise and well explained. Thank you.
Glad it was helpful! This comment makes me really happy and excited to make more tutorials!
More of an appetite whetter. To make any use of it, I have to learn Python first 😀 But then, that's valuable by itself.
Thanks for posting the awesome tutorial. Would love to learn more from you.
Thanks for watching and learning!
Extremely helpful! Thanks a bunch!
I've watched a bunch of ML videos and you are THE TOP! 👍👍👍
I’m so glad I found this channel!!
Me too!
You are my newly found Python mentor. Good content Rob
Happy to be! There are a lot of good channels out there.
Your videos are like gems to me; I learned a lot, and your use of modules and packages is the cherry on the cake. Currently I'm working as a Jr. Data Scientist at KPMG, but man oh man, you taught me many things. Thank you 😊 🙏
Great to hear you enjoyed the video. Data science is a never ending learning journey for all of us!
Bro, I just need to talk to you. I wanted to ask a few questions regarding the profile you are working on. I have secured a job with Deloitte but want to switch to KPMG (Gurgaon).
Great content, perhaps the best material I have found on sentiment analysis on YouTube!!!
Thanks for the compliment Ayush! That means a lot to me.
Thank you so much. This tutorial helped me in my project. Thanks a lot.
Just found your channel through Twitter. Great work, I am doing research in sentiment analysis and related to a lot of the video. Cool stuff! I will have to use the pairplot; I typically use a confusion matrix.
Awesome Josiel. Glad you find it helpful. Check out some of my other videos if you have time and share the video with friends!
Hey brother, you just provided the best NLP sentiment project. Your channel deserves a million+ subscribers, and I am now one new subscriber helping you get there.
Thank you so much 😀
I really liked this video a lot, it answered lot of my questions, thanks a lot.
I cannot thank you enough; you saved my 6th semester
Awesome! I am shocked that everything is so efficient and amazing. THANKS!
Glad it was helpful! Share the video with friends.
Excellent video. I started coding with ChatGPT, and this adds a new layer of info. Thank you mate :) Subbed
Thank you so much for this video tutorial! I wanted to ask if you created the Amazon review dataset from scratch or was it already pre-made from somewhere else?
thank you for this content! Great quality! Now subscribed!
Thanks so much for watching!
This video was genius and very helpful thank you
Great resource! Thanks Rob.
Glad you liked it! Thanks for watching.
Thank you very much for this video. I'm new to the field of data analysis and related disciplines, so this sentiment analysis project is pretty insightful for me.
Glad you found it helpful
Thank you! Great content and easy to understand!
Appreciate that!
This was a good tutorial. I'm trying to get my feet wet in data analytics and found myself overwhelmed while trying to read the NLTK documentation, so thanks for the structured guidance. I'm working on analyzing sentiment across a dataset I've gathered myself, so I wasn't following along in kaggle and hit a hiccup as AutoModelForSequenceClassification requires pytorch and I initialized a python 3.10 environment. Oopsy poopsy. All the same, you made my headache significantly less daunting. Thank you. :)
Thanks so much. I'm glad it helped you get started with NLTK; it can be a lot easier once you see it in action. Setting up an environment that works with all the packages can also be frustrating sometimes, so I can relate!
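For anyone hitting the same PyTorch hiccup locally, a minimal setup sketch (package names are the standard PyPI ones; the checkpoint is assumed, so swap in your own):

```python
# one-time install in your environment:
#   pip install torch transformers scipy pandas
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "cardiffnlp/twitter-roberta-base-sentiment"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
```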
Really great, helped me a lot in my project!
Glad it helped. Thanks for watching.
I found this video immensely helpful, Rob. Thanks!
So glad you found it helpful!!
crystal clear explanation thanks my friend
Glad you liked it!
Just did all of that as a thesis by myself without knowing you made a video about it, lol. Luckily I used a different BERT model from Hugging Face at least. Nice video btw!
Thanks!
Thanks for great model ideas.
Glad you like them!
What a video! I love this. Please keep making this kind of content. Greetings
Thank you! Will do, Adem!
wow. speechless. both you and ml.
Thanks for the video, we have a school project to do anything coding related and while my classmates are using scratch I wanted to do something flashier, and some kind of language analysis seemed the way to go. I'll use this video as inspiration.
I love it! Good luck on your project !
insane
@@techingenius2540 in the membrane?
what an absolute legend
Rob you are the best. Hands Down mate.
Very useful!
Thank you so much
great stuff!!
Great content, thank you
Great content, thanks
THANK YOU!
New viewer and sub!! great work!!!
Top-notch 🔥 !!
Thanks Akshat!
Great tutorial! For anyone hitting the tensor-size error on reviews longer than 514 tokens, pass truncation and max_length to the tokenizer:

```python
def polarity_scores_roberta(example):
    # tokenizer, model and softmax (scipy.special) are defined earlier in the notebook;
    # truncate long reviews so they fit the model's 512-token limit
    encoded_text = tokenizer(example, return_tensors='pt',
                             truncation=True, max_length=512)
    output = model(**encoded_text)
    scores = output[0][0].detach().numpy()
    scores = softmax(scores)
    scores_dict = {
        'roberta_neg': scores[0],
        'roberta_neu': scores[1],
        'roberta_pos': scores[2],
    }
    return scores_dict
```
Thank you. Great content
Glad you enjoyed it! Make sure you sub and share!
Important lesson, thanks
Great Content, thanks man
Thanks!
I rarely comment on YT videos but this is amazing! +1 subscriber!
That really means a lot to me. Thanks for leaving a comment.
Hi, thank you for the amazing video. Your presentation was informative and insightful. Looking forward to your future content! Btw, I want to ask how I can save my expected result; it seems like I had a good training run and I don't want to keep going. What should I do in this situation? Thank you
This is a great video, thanks a lot.
Glad you like it. Thanks for watching
your content is goldmine
Thank you sir! Share the goldmine with others!
Great content. Please do more content on models that solve attrition prediction for organizations. It's a very complex subject because it's hard to find ready-made models on such topics. It would be a great help if you could make an attrition prediction model with more than 45-50 variables.
Thanks for the video. Very well explained. Is there any token limit for the transformer-based RoBERTa model?
Amazing!
Thanks for this video; it was descriptive, well structured, and well explained. I have two questions and I would appreciate your opinion and guidance on them. 1. At the end of the day, star reviews and sentiment are giving the same results, so how can we justify going through all this process when we already have a very good indication of user sentiment from the star reviews? 2. How can we get the strengths and weaknesses of the product from the reviews using the sentiment analysis?
Please make more videos like this; that was great. I am a data engineer and want to move to data science, so please make guidance videos as well. Love from India
I will! Hope this video was helpful for you in your journey into data science.
Great content.
@robmulla, great presentation, but I have looked through the videos on your channel and it appears you have not done one on fine-tuning a BERT model with a custom dataset. I am particularly wanting to learn how you would fine-tune a BERT model for multiclass text classification, maybe on Google Colab. I think many of us subscribers would love it. Thanks.
Thank you
Excellent explanation and material. Thank you for your efforts in making learning enjoyable. A brief query about reviews that are negative (5 stars) and positive (1 star), where the algorithm is unable to forecast the relevancy score. How would you advise handling these kinds of situations?
Loved what you did, but it would be nice to show how you got the Amazon data as well. Plus, do you have any videos on sentiment analysis for company stocks?
thanks man
you are awesome.. thanks a lot..
Thanks for watching. Share with a friend!
How you don't have 100k subs defeats me.
Hah. Thanks Juan. Maybe someday 😊
Great video. Also, is there a way to include the number of retweets or followers in the sentiment analysis process?
Very interesting!!
Thanks!
Awesome video. Would be great to see you follow the sentiment analysis with a topic analysis. I’ve seen a few different options out there (LDA, Top2Vec and BERTopic), but would love to see your take on it.
Great suggestion! I'll keep that in mind for future videos.
@@robmulla Looking forward to that!! :)
Hey sir! Thanks for the tutorial!! For an end-to-end project, can we save these models (e.g. RoBERTa) with pickle to deploy them on the web, or is there another method for this kind of model?
Great content! We need more tutorials on Transformers, please.
Glad you liked it. Anything specific about transformers you would like to see? Huggingface has so many of them for various NLP tasks.
@@robmulla Please explain the Transformer and BERT architectures. Also, a tutorial with a use case from current industry.
Great tutorial, quick question sir... does the Hugging Face model understand emojis 😃🤬, and can they be translated into the sentiment score results?
you are awesome bro
No, YOU are awesome. Thanks for watching.
Nice work
Thanks for watching!
Love from India ♥️
Thanks! ❤️
Super
Thank you
You’re welcome 😊
Clearly explained, and the comparison of VADER versus Transformers is quite interesting. I see that the Transformers BERT model is much better at understanding nuances in sentences. Do you know what kind of algorithm TextBlob uses? I just bumped into this channel when searching for sentiment analysis, like the content very much, and subscribed.
Thanks for subscribing! I'm glad you learned something new. I've never used TextBlob, but it says it's a "lexicon-based approach" so I'm guessing it's similar to VADER.
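If it helps to see them side by side, a rough sketch comparing the two lexicon-style scorers (assumes `pip install textblob` and the VADER lexicon download; not code from the video):

```python
from textblob import TextBlob
from nltk.sentiment import SentimentIntensityAnalyzer

text = "The delivery was slow, but the product itself is fantastic."

# TextBlob: polarity in [-1, 1] plus a subjectivity score
print(TextBlob(text).sentiment)

# VADER: neg/neu/pos proportions plus a compound score in [-1, 1]
print(SentimentIntensityAnalyzer().polarity_scores(text))
```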
One of the best tutorials on Vader and the Huggingface Transformers I have seen. One question I had: How is the confidence score calculated on the Pipeline model and is there a way to evaluate the model's performance on these calculations?
Thanks so much for the feedback. Glad you found it helpful. Evaluating the model performance is a bit tricky without ground truth labels. The output of the Pipeline model is essentially the probability the model predicts for each class, given the dataset it was trained on. Check out the model description on the Hugging Face site, along with the noted limitations: huggingface.co/distilbert-base-uncased-finetuned-sst-2-english Specifically this part is interesting:
```
Based on a few experimentations, we observed that this model could produce biased predictions that target underrepresented populations. For instance, for sentences like This film was filmed in COUNTRY, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this colab, Aurélien Géron made an interesting map plotting these probabilities for each country.
```
@@robmulla FWIW - I reached out to the creator of this and what I was told is that the score is calculated using the activation function after the final layer of the neural net. It is used to determine polarity (and is not a confidence score). The model returns an array with a score for each polarity, and the largest is the prediction. The values will always be positive, regardless of the actual sentiment class tagged to the text. This is unlike VADER's model, which provides a composite polarity score that can be a positive or negative float based on the inferred sentiment (positive, negative, neutral).
@@timdentry9754 thanks for clarifying. Cool that you got a response from the creator!
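For anyone curious, a rough sketch of that relationship: the pipeline's score corresponds to the softmax over the final-layer logits for the winning label (the model name below is the SST-2 checkpoint linked above; this is an illustration, not the exact notebook code):

```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

MODEL = "distilbert-base-uncased-finetuned-sst-2-english"
text = "This film was filmed in France"

# 1) via the pipeline: top label plus its softmax probability
print(pipeline("sentiment-analysis", model=MODEL)(text))

# 2) manually: raw logits -> softmax gives the same probabilities
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
with torch.no_grad():
    logits = model(**tokenizer(text, return_tensors="pt")).logits
print(torch.softmax(logits, dim=-1))  # [P(NEGATIVE), P(POSITIVE)]
```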
Great!! I hope you will create more videos like this!! Thanks
Thank you, I will. I appreciate you watching.
Hey Rob, great content as always. I am currently working on a sentiment prediction project, labelling customer review data with its corresponding sentiments to later train a supervised model on it. How would we, for example, deal with observations that have a positive sentiment score but a very low customer rating? Can this be considered noise that we simply remove, or do we just assume weird human behaviour and leave it in? Kind regards
Could also be a result of people using ratings and comments as two different mediums of feedback. I often use ratings to indicate the quality and comments to write about anything I found peculiar or negative, not necessarily reflecting the overall quality of the product, just highlighting the bad features.
Very good, thanks. Do you have any tutorial on readability tests in Python with many texts in an Excel file?
Very well explained video and clear guidance! I have a question about preprocessing the text before putting it into the tqdm SIA loop: do we put the raw data directly into it, or do we tokenize, remove stop words, and so on first, and then go for the sentiment analysis? Looking forward to your reply!
Hey Huan! Glad you found the video helpful. I'm not sure exactly which loop you are referring to, but typically the text needs to be tokenized; depending on the model, it may handle that within the predict function. Hope that helps.
@@robmulla Hi Medallion, got it and that makes sense, thanks for the clarification!
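For reference, a sketch of the kind of loop being discussed; the raw review text goes straight into VADER, which does its own tokenization (the 'Reviews.csv' filename and the 'Id'/'Text' column names are assumed from the Amazon reviews dataset):

```python
import pandas as pd
from tqdm import tqdm
from nltk.sentiment import SentimentIntensityAnalyzer

df = pd.read_csv('Reviews.csv')  # assumed file with 'Id' and 'Text' columns

sia = SentimentIntensityAnalyzer()
results = {}
for _, row in tqdm(df.iterrows(), total=len(df)):
    # no manual stop-word removal or tokenization needed for VADER
    results[row['Id']] = sia.polarity_scores(row['Text'])
```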