Control Tone & Writing Style Of Your LLM Output
May 17, 2023
12,302 views
Twitter: twitter.com/GregKamradt
Newsletter: mail.gregkamradt.com/signup
Code: github.com/gkamradt/langchain-tutorials/blob/main/data_generation/Instructing%20LLMs%20To%20Match%20Tone.ipynb
0:00 - Intro
0:49 - Main Tips
1:57 - Code Overview
2:50 - Method 1: Describe tone
4:21 - Method 2: Describe tone + examples
6:27 - Method 3: AI-Assisted
9:28 - Method 4: Technique Fusion
13:19 - Extra Credit: Loop through more people
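The four methods in the chapters above can be sketched as prompt-building strategies. A minimal illustration in plain Python — all prompt wording here is an assumption for demonstration, not the notebook's exact text:

```python
# Illustrative sketch of the four tone-control methods from the video.
# Prompt wording is assumed, not copied from the notebook.

def method1_describe(topic: str, tone: str) -> str:
    """Method 1: describe the desired tone in plain words."""
    return f"Write a tweet about {topic}. Use a {tone} tone."

def method2_examples(topic: str, tone: str, examples: list[str]) -> str:
    """Method 2: describe the tone AND show real samples of it."""
    shots = "\n".join(f"- {e}" for e in examples)
    return (
        f"Write a tweet about {topic}. Use a {tone} tone.\n"
        f"Match the style of these examples:\n{shots}"
    )

def method3_ai_assisted(author_samples: list[str]) -> str:
    """Method 3: first ask the model itself to describe the author's tone;
    its answer is then reused as the tone description in a later prompt."""
    shots = "\n".join(f"- {s}" for s in author_samples)
    return (
        "Describe the tone, voice, and style of the author of these "
        f"tweets as a bulleted list of qualities:\n{shots}"
    )

def method4_fusion(topic: str, ai_tone_description: str,
                   examples: list[str]) -> str:
    """Method 4: fuse the AI-generated tone description with examples."""
    shots = "\n".join(f"- {e}" for e in examples)
    return (
        f"Write a tweet about {topic}.\n"
        f"Author tone description:\n{ai_tone_description}\n"
        f"Example tweets:\n{shots}"
    )

prompt = method2_examples(
    "LLMs", "punchy, lowercase",
    ["shipping is a feature", "just build"],
)
```

Each function only constructs the string you would send to the model; Method 3's output is itself a prompt whose completion feeds Method 4.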
You're a legend mate. I learn so much in a few minutes of your videos. Thanks for sharing your valuable knowledge and helping shape the world.
Love it - thank you very much!
Cool, that reminds me of textual inversion in stable diffusion where they feed (more or less) random strings to the prompt and check how close the result is of the desired output.
Awesome job! Keep it up, you are definitely pushing out a ton of valuable and actionable material :)
Shaan, levelsio and stephsmite really sound like their human versions, aside from the hashtags and emojis. Fascinating. Great work. Loving the breakdown of the overview and then the code steps.
This is for sure one of the best videos I've seen this year! Amazing.
Awesome job! Cool! Great work
dude this content is insane...thanks for putting it out there!
Nice! Awesome, thanks for the support
insane output
This is exactly what I needed, amazing insight!
What’s the project you’re working on?
@DataIndependent Some personality addition to a conversational LLM. I want to have a variety of personalities to prepare people for certain social interactions.
Your tutorials are awesome!
Nice thank you
This content is god level. I found your channels thanks to the langchain cookbook course. So glad of that discovery.
Nice!! Glad to hear it and thank you
You are amazing!
Thank you for sharing and explaining.
Thank you
Fascinating way of explaining and teaching, thanks for the amazing content. By any chance, do you know how to add a prompt or a personality to my RetrievalQA chain?
🔥🔥🔥🔥
Lol. I trained the bot to write like me a couple of months ago and used some of this approach among others.
Would fine-tuning yield better results, or is that not guaranteed? Especially if you have large amounts of writing examples.
Hi, awesome video. Could you make the next videos a bit louder? Even at full volume on YouTube and on my computer it still sounds low, thanks!
Thanks for the feedback, will do.
Why do you use the percent sign (%) for headers? Is this a best practice? Could you make a video about that or suggest a blog post?
I picked it up from Greg Brockman here: kzhead.info/sun/otmtk6usmaCDqIk/bejne.html. TBH I haven't seen a performance increase, but I need to test it out more.
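For readers curious what the percent-sign headers look like in practice, here is a minimal sketch; the section names and wording are illustrative assumptions, not a documented convention:

```python
# Sketch of the "% HEADER" prompt layout discussed above.
# Section names are illustrative assumptions, not a documented standard.

def build_prompt(instructions: str, examples: list[str], task: str) -> str:
    """Assemble a prompt whose sections are marked with % headers."""
    shots = "\n".join(examples)
    return (
        "% INSTRUCTIONS\n"
        f"{instructions}\n\n"
        "% EXAMPLES\n"
        f"{shots}\n\n"
        "% YOUR TASK\n"
        f"{task}"
    )

p = build_prompt(
    "Match the author's tone.",
    ["tweet one", "tweet two"],
    "Write a tweet about AI.",
)
```

The % markers are just visual separators in the prompt string; any consistent delimiter would serve the same purpose.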
Great stuff! It solved 90% of a project I am working on. Next step is to create a virtual persona with the COMBINED styles of a given group of authors. For example: define the tone and style of an extreme right-wing political author by combining many moderate to extreme right-wing authors. Then tweet away!!
Nice! I don't love the controversy spreading on Twitter, but I love the exercise in AI tone management! Have fun exploring.
The "Control Tone and Writing Style for LLMs" playbook is not in the GitHub repo.
What about fine-tuning on the tweets dataset? Would it work?
That would work, but you would need a massive amount of tweets. At the beginning he said this is another option.
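As a rough illustration of the fine-tuning alternative discussed here, the first step is turning raw tweets into a JSONL training file. This sketch assumes OpenAI's chat-format fine-tuning records (a `messages` list per line); the system and user wording are placeholder assumptions:

```python
import json

def tweets_to_jsonl(tweets: list[str], path: str) -> None:
    """Write tweets as chat-format fine-tuning records, one JSON object
    per line. Field names follow OpenAI's chat fine-tuning format; the
    prompt wording is a placeholder assumption."""
    with open(path, "w") as f:
        for tweet in tweets:
            record = {
                "messages": [
                    {"role": "system",
                     "content": "Write tweets in the author's voice."},
                    {"role": "user", "content": "Write a tweet."},
                    {"role": "assistant", "content": tweet},
                ]
            }
            f.write(json.dumps(record) + "\n")
```

Each tweet becomes one training example where the model learns to produce that tweet as the assistant turn, which is how tone gets baked in without any per-request prompt engineering.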
#interesting #generativeAI #LLM
What a wonderful explanation! When I try to use the Tweepy API, I think the free version is not enough anymore =(