AI YouTube Comments - Computerphile
3 Oct 2017
284,935 views
Generating YouTube comments with a neural network trained on YouTube comments. What could possibly go wrong? Dr Mike Pound replied to our comment...
EXTRA BITS: • More AI Ge...
Neural Networks & Deep Learning: • Neural Networks
Andrej Karpathy's blog post mentioned by Mike: bit.ly/C_Blog_RNN
Code that Mike used to create this: bit.ly/C_RNN_Code
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com
12:11 I'm in the video! :O This is amazing! The robots love me!
Also... this might be a sign that I need more to do with my life besides commenting on YouTube videos...
+IceMetalPunk hey, thanks for the comments! >Sean
Hi IceMetalPunk. ☺️
Well, ask him for the code. That way, you'll never have to comment again!
The code is in the description.
Developing an AI based on YouTube's comment section might not be the brightest idea in the world. The "I" in AI stands for intelligence.
Maybe he was developing Artificial Idiocy
I always preferred Artificial Incompetence.
Artificial ignorance.
Artificial imitation
Space doubt wins....
"I was able to want to be able to be happy."
me too... me toooo...
Same
That's an important first step.
It has reached consciousness!
watcherFox they became self aware! 😱
Anyway, excitement about my five minutes of robot fame aside, I did want to comment something related to the actual subject of this video :) My favorite method of generating plausible text is a simple multilayer Markov chain. Way back in my freshman year at uni, one of my assignments was to create a three-layer Markov chain trained on excerpts from the Wizard of Oz to generate a new page from the book. It was interesting. But then, we were allowed to train our Markov chain's dictionary on *any corpus* we wanted, so I chose the US Constitution. Needless to say, seeing brand-new laws come into existence at the (metaphorical) hands of a computer was extremely amusing.
I love you, baby.
Nice try computer. Your phony back story isn't fooling anyone. Well done on the improvements though. Your comments are getting better.
IceMetalPunk I am a knight and what I say to all this is Nee! Just nee, nee, nee!
+IceMetalPunk A little late, but if you have any excerpts, please post them?
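For the curious, here is a minimal sketch of the order-n Markov chain idea IceMetalPunk describes above. This is not the original assignment code; the function names and the corpus.txt file are illustrative, and any plain-text corpus (the US Constitution included) would work:

```python
import random
from collections import defaultdict

def train(words, order=3):
    """Map each n-word context to the words observed to follow it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=3, length=100):
    """Walk the chain, sampling a follower of the last n words each step."""
    context = random.choice(list(chain.keys()))
    output = list(context)
    for _ in range(length):
        followers = chain.get(tuple(output[-order:]))
        if not followers:  # dead end: jump to a fresh random context
            followers = random.choice(list(chain.values()))
        output.append(random.choice(followers))
    return " ".join(output)

words = open("corpus.txt", encoding="utf-8").read().split()
print(generate(train(words, order=3)))
```

Duplicate entries in each follower list stand in for probabilities, so frequent continuations get sampled more often.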
After this video I will never be able to trust Computerphile comments ever again.
IceMetalPunk comments on my videos too :-D Deffo real person!
xisumavoid xisuma here wow
Cool to see you here
Hey Xisuma. Love your vids. Good channel you watch here.
Didn't expect to see you here :D
xisumavoid Oh, hey xisuma
First YouTube Comment
Mike is the best speaker IMO
Computerphile reply
1:25 Don't right?
Do you support online learning from websites like "freeCodeCamp", "SoloLearn", and "Codecademy"? Gosh! I am so noob.
Great video!
I find your own difference.
This relayed having two first.
We all find your own difference on this blessed day.
Hmm, profound words; they art spilling from thine mouth.
I just wanted to let you know that I was just wondering if you were able to get the kids to school.
Does it write "first"-comments?
Ebumbaya ' first
Probably.
Yes, but excessively so and at inappropriate times.
@@Treddian So... like normal YouTube comments then.
In one of the screens you can see a comment which says "One!", which is kinda the same
Train it on computerphile transcripts, then act out the output!
Yes! I second this entirely!!
"This video was written by an AI."
even better, train it on classic literature, read its essays at a TED Talk
Using comments as input on university computers? Would be a shame if someone would '); DROP TABLE Students;--
Hey! Leave Bobby out of this! :D
I wonder what lil Bobby has gotten himself into this time
"I was able to want to be able to be happy" The network is trying to tell us something.
The output is nonsense, but it looks quite similar to badly translated Chinglish found in the manuals for cheap eBay electronics :) I am impressed with the way this isn't just randomly sticking words together; it's actually making the words themselves, letter by letter, without really knowing what a "word" even is.
I just discovered Numberphile and Computerphile recently and appreciate everything you guys are doing here. The content is definitely top notch. I'm lovin' it.
@IceMetalPunk Please drop a hello
Hello! :D
You're internet famous now!
At least within the Computerphile-watching community, and for the next week or two. But I'll take that! :D
IceMetalPunk yey!
That's amazing! How many comments have you posted, lol!
It would be interesting to create a neural network that would predict comment likes/dislikes based on the content of the comment.
@@dejfcold That's an oversimplification.
I predict your comment will receive 76 likes before it is forgotten.
@@abstractapproach634 I don't think it's gonna make it :(
@@samuelthecamel Prob 42 likes
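A toy sketch of the like-prediction idea suggested above, assuming a bag-of-words model in scikit-learn; the comments and like counts below are invented purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

# Made-up training data; a real attempt needs thousands of scraped comments.
comments = ["Great video!", "first", "Mike is the best speaker IMO",
            "I find your own difference."]
likes = [120, 2, 340, 15]

vec = CountVectorizer()
X = vec.fit_transform(comments)  # comment text -> word-count vectors
model = Ridge().fit(X, likes)    # regress like counts on word counts

print(model.predict(vec.transform(["Great explanation, Mike!"])))
```

Predicting dislikes would just be a second regression target; whether word counts alone carry enough signal is an open question.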
Hey Computerphile, I love your comment at 13:43, where you asked the question "How do you watch if you basically have one hardware?". Great video, quite fun to read through those generated comments to find some that almost make sense.
Hi Max. If you're interested in AI, you can check out my introductory book "How to Create Machine Superintelligence", available for FREE in the Amazon Kindle store till 6th October. In this book, I go over the following: intelligence as a form of information processing; the basics of classical computing; the basics of quantum computing; some basics of machine learning and artificial neural networks; and some thoughts on building general AI and dealing with the control problem.
Amazing stuff! And ironically, the fact that it outputs typos every now and then makes the comments much more realistic.
I know, right? I found that to be simultaneously cool, and a bit thought-provoking. 💜
I dunno why, but I really love this guy. He's very good at explaining things and he's funny. Thanks Mike!
Agreed!
I think it's his passion for explaining and kinda learning at the same time.
"I find your own difference." It sounds so deep and profound! Now all we need to do in translate it into Latin, "Differentiae tuae invenio." That's one solid tattoo right there! Or it should be Chinese, if someone would like the chime in on that one?! :D
Extremely cool demonstration! I find neural networks so fascinating.
This was done by a YouTuber called CaryKH; he applies neural networks to various tasks.
Yeah, I love the stuff he does - like the language recognition :)
I was about to mention that. I wonder what difference there is between their methods, if any.
Charles Thatisall There might be; if there is, I'm voting for an epic NN battle.
He is so good at explaining this stuff
Please keep on making videos. It really, really, really helps.
11:47-12:01 makes me stop and think about that for a bit. Very interesting conclusion, and it's true: it was trained on data with typos, and it "learned" to make typos! That is both amazing and a little bit deep. Love this! 💜
Cool :) I also did this a while ago when I learned about Andrej Karpathy's blog, to generate song lyrics using a certain music style as input.
"I find your own difference" is amazingly deep!
Mike's the best at Computerphile videos
One more comment for your neural network :) I admire your work and your ambition.
Nice to see you using Lua for this! Just recently began learning the language
IceMetalPunk is a gaming YouTuber who makes Minecraft let's plays. This dude exists!
I only know them as a YouTube viewer whose viewing habits often overlap with mine.
Well, I used to... I haven't had much time lately :( Having two jobs can really hinder a social (media) life XD
Thanks, great vid. Also, it would be great to hear from you about spiking neural networks. There isn't much free, quality information about spiking nets.
"I have no ideas so I use a neural network"
why are you so mean?
Because there is better AI. I am not mean, just angry for not being heard.
Since RNNs are ideal for predicting the next thing in a long sequence, it's really fun to train them on audio (predicting the next sample of the waveform based on the audio leading up to now). It's much slower than dealing with text, though. In case anyone's curious, I've made a couple of videos showing the results of that (using torch-rnn, the same software Mike uses here).
"I find your own difference." -Hessil200, 2017
I guess the network would output a lot of "first" comments and a lot of "first"-related replies.
"I said yesterday I walked to the park 2 days ago."
"I was able to want to be able to be happy". A perfectly normal KZhead comment.
Great video. Phone and so, crunching (with the jump).
I like that when you read out the comments, it's real enough that my brain thinks it's real English but that I wasn't paying enough attention to understand it.
I realize this is more powerful in a general sense, but to just generate text, (what I know of as) a Markov chain seems... easier. They're fun and extremely easy to get started with if nothing else, and I feel like playing with the generator order gives a sense of how (non-)random language is.
A recurrent neural network looks like a "recursive" Markov chain to me.
Yeah, definitely. This isn't the best use of a recurrent neural network, but it does sort of help for demonstration purposes. But I love me some Markov chains! I always wanted to use Markov chains to generate songs based on a given artist's corpus of song lyrics, but the one time I tried to make one, I did it very crudely and naively and ended up accidentally DOS'ing a lyrics site... so I stopped XD I should get back to that one day, but do it more intelligently this time... if I ever have any free time for that anymore.
@@superdau An RNN is kind of like a Markov chain, but with thousands of state values instead of one, and each state is updated through weighted connections and math instead of a probability lookup. It's also much harder for a character-by-character Markov chain to match the quality of an RNN.
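To make the contrast concrete, here is a minimal character-level RNN sketch in PyTorch. Mike used torch-rnn (Lua/Torch); this is not that code, just an illustrative re-creation using a GRU, with comments.txt standing in for the training corpus:

```python
import torch
import torch.nn as nn

# Tiny character vocabulary built from an assumed corpus file.
text = open("comments.txt", encoding="utf-8").read()
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
data = torch.tensor([idx[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        y, h = self.rnn(self.embed(x), h)
        return self.out(y), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

seq = 50
for step in range(200):  # short demo training loop
    i = torch.randint(0, len(data) - seq - 1, (1,)).item()
    x = data[i:i + seq].unsqueeze(0)          # input characters
    y = data[i + 1:i + seq + 1].unsqueeze(0)  # targets: the next characters
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: feed each predicted character back in, as in the video.
h, out, c = None, [], data[:1].unsqueeze(0)
for _ in range(200):
    logits, h = model(c, h)
    probs = torch.softmax(logits[0, -1], dim=0)
    c = torch.multinomial(probs, 1).view(1, 1)
    out.append(chars[c.item()])
print("".join(out))
```

Unlike a fixed-order chain, the hidden state h is a learned summary of everything seen so far, not just the last n characters.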
Mike is one super enthusiastic computer scientist! Go Mike!
Surprised it didn't say "WE LOVE YOU MIKE" all the time.
Watching this 5 years later now that we have ChatGPT and such...
And he said a chatbot built from this was "theoretically" possible... and here we are.
Has there been much work on giving such systems initial information to build from? For example, one might give the system a dictionary of English words, acronyms, etc. (possibly letting the system expand it to some extent), with a list of the potential (or probable) parts of speech (nouns, verbs, adverbs, adjectives, etc.) and perhaps even verb tenses. Besides going letter-to-letter and word-to-word, it could start building models of overall sentence structure. This would significantly increase the complexity of the system, and prescribing rules rather than letting the system learn them might limit it in certain ways and would require more work up front.
That ASCII art is awesome
10:23 Neural network literally saying: "I don't think before doing" Mike Pound, you created a NN that can lie.
first!'); DROP TABLE comments;--
Hello, little Bobby Tables :D
IceMetalPunk That should teach you to sanitize your inputs.
Clever, except that the machine learning server is probably quite remote from the administrative servers.
Yeah, because a doctor of computer science would use SQL, HAHAHAHA... Not.
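For anyone who missed the Bobby Tables reference: the joke only works against code that splices user text straight into SQL. A sketch with Python's sqlite3 module (the table and values here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (author TEXT, body TEXT)")

comment = "first!'); DROP TABLE comments;--"

# Unsafe (don't do this): string-formatting lets the payload break out.
# conn.execute(f"INSERT INTO comments VALUES ('bobby', '{comment}')")

# Safe: placeholders make the driver treat the comment as data, not SQL.
conn.execute("INSERT INTO comments VALUES (?, ?)", ("bobby", comment))
print(conn.execute("SELECT body FROM comments").fetchall())
```

With the parameterized version, the injection string is simply stored verbatim and the table survives.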
I love you, Mike Pound
Gentlemen, our work here is done. Computerphile will now thrive on its own.
Bzzz So from now on, all the comments will be written by either ladies or bots?
It's matching his idea professional, and probably about it. After creators and governments is made about them.
Great video!
Hahaha, it generated my name in the random username part! (Yeah I've commented on multiple Computerphile videos in the past) I did not expect to see that.
We welcome our new AI overlord! 👑
Isn't trying to generate an AI trained on YouTube comments a bit counterproductive?
Depending on the channel, it might indeed become an Artificial Stupidity. However, as this AI is trained on Computerphile comments it's probably above average.
Daan Wilmer tru nuff, it was a joke tho.
No, because even if the bot replicates natural stupidity, you've proved that you have an algorithm that can replicate human behaviour accurately.
I am currently working on something very similar, reading books and writing small sub-stories. I am happy I am not the only one getting the number-loop problem.
IceMetalPunk: You've been targeted for termination.
D: NO, I welcome my robot overlords! I'm honored they picked me and I will work with them however they see fit! Don't kill me!
Immediately came to see IceMetalPunk's comment.
I posted a few comments just now... actually, I'm sure my habit of commenting multiple times throughout the course of watching a video helped the AI notice me xD
IceMetalPunk is a chat bot helping to train a chatbot to become a chatbot.
:) Looks like it is. If you're interested in AI, you can check out my introductory book "How to Create Machine Superintelligence", available for FREE in the Amazon Kindle store till 6th October. In this book, I go over the following: intelligence as a form of information processing; the basics of classical computing; the basics of quantum computing; some basics of machine learning and artificial neural networks; and some thoughts on building general AI and dealing with the control problem.
I won't read it, because if you're rationalising what intelligence is, you're misrepresenting what intelligence is.
12:12 So did IceMetalPunk ever appear? Did the happy reunion ever happen? I so want to know!
I did appear! :D
The craziest thing is that most actual comments look like that to me.
This video was spectacular. - A real person
I think there was a horribly expensive experiment where stocks were bought or sold based on a neural network trying to gauge the mood of the language used about each stock.
How many times does 'Hitler' pop up in those generated comments?
Too often #Godwin’s law
This is the greatest solution of life changing information and the best of /.
I imagine it would be possible to layer this with some sort of pre-processing? For example, I imagine it would be possible to parse the sentences into word objects pretty easily first, and then run the same kind of network against words rather than letters?
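A sketch of that pre-processing idea: tokenize comments into words and map them to integer IDs, so the same kind of network can predict word-by-word instead of letter-by-letter. The tokenizer and vocabulary cutoff here are illustrative choices, not anything from the video:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into words, keeping basic punctuation as tokens."""
    return re.findall(r"[a-z']+|[.,!?;]", text.lower())

corpus = open("comments.txt", encoding="utf-8").read()  # assumed corpus file
tokens = tokenize(corpus)

# Keep the most frequent words; everything else becomes <unk>.
vocab = ["<unk>"] + [w for w, _ in Counter(tokens).most_common(9999)]
word_to_id = {w: i for i, w in enumerate(vocab)}

ids = [word_to_id.get(w, 0) for w in tokens]  # network-ready ID sequence
print(tokens[:10], ids[:10])
```

The trade-off: a word-level model never misspells, but it also can't invent plausible typos or new words the way the letter-level one in the video does.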
Wow, it's better than mine. The chatbot my team made could only successfully achieve contextual awareness. We used LSTMs (a type of node for RNNs).
This is the same thing that powers the subreddit simulator! That sub actually gives nice, plausible posts and stories! (Sometimes.)
4k? HELL YEAH
This guy needs his own channel.
+IceMetalPunk .. we are waiting!
Wait no longer! :D
Carykh
Auto-completion has been given too much power. It must be stopped...
I was, but now it has been even more excellent hardware prediction of information that we are to be. Great video. Beep Boop.
Automatic writing taken to a whole new level! André Breton would be proud!
Is this a reupload? I feel like I remember this, down to the IceMetalPunk guy, but it could just be déjà vu
You probably just have seen my comments before... I comment way too much... hence why the AI noticed me XD
Nice one. Looks like me using keyboard suggestions to generate test text when developing an app ^^
There are several different ways that predictive text systems can be built, but it seems the most common way is actually a trie-based Markov chain system, not a neural network. The result is basically the same in the case of predicting text, though :)
I would have trained the thing on the comments of just one video, and I would have had only CAPS. Other than that, good job!
It would be interesting to see what it comes up with if you train it to predict comments based on the video title (and perhaps description).
10:24 "I was able to want to be able to be happy."
I was No pretty puzzling the code gaming so they would have found out the livelutoo tred s+so the larger stop information. I don't think before doing "is when it can predict it as computing and A/P?" R it wasn't a high-fact a bit stamp idea
It would have been interesting to have a three-way chat between this LSTM network, a Markov model trained on the same data, and a GAN. Has anyone made a text-generating GAN yet? It should be trivial, but the focus is mostly on image generation these days.
In 2023 this should be a four-way chat with progressively more complex chatbots: a preprogrammed-response chatbot, a Markov bot, an LSTM, and ChatGPT.
Crazy to see this, and how transformers are now taking it to a whole new level!
Great Video!
I wish my professor had taught us like this, using such examples, during our soft computing class in college!
And I wish you had actually made an effort to learn by yourself instead of crying about how you were not properly taught. Sad.
@@MrSkinkarde I did; my bachelor's coursework project was based on GANs. However, I then realised ML/AI was going to be very difficult for me, so I shifted to networking/distributed systems and now work as a distributed systems engineer.
I don't see how this is different from building a probability tree on sequential words. I'm thinking of a hash map filled up by scanning the words in order, like this: { "the": ["house", "river"], "house": ["is", "has", "is"] }. Pick a starting word at random, then pick each next word at random from the corresponding array in the hash map. More probable words appear more times in the array and hence have a higher probability of being chosen. I don't see the point in training a neural network for something that is just plain statistics.
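That hash-map scheme, as runnable code (a first-order Markov chain; the toy sentence is invented). The practical difference from the RNN is that this conditions only on the previous word, while an RNN's hidden state can carry longer-range context, such as agreement or an open quote, across a whole sentence:

```python
import random
from collections import defaultdict

words = "the house is big the river is long the house has a door".split()
table = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    table[prev].append(nxt)  # repeats = higher probability, as described

w = random.choice(words)
out = [w]
for _ in range(10):
    w = random.choice(table[w]) if table[w] else random.choice(words)
    out.append(w)
print(" ".join(out))
```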
I recently texted my partner "Give yourself 5 minutes to clear the car, it may be frosty" and my phone suggested "relationship" as the next word. Hmmm?
Where's IceMetalPunk? Someone let them know.
Well, aside from the many comments my videos are now getting to let me know about this, I'm also subscribed to Computerphile. Hence why I leave enough comments for the AI to notice me, senpai :D So I saw it! :D
How far we have come
I bought a dinosaur from a vending machine, but I was unable to transport it home on my unicycle.
I think going below word level is like SHA-1: you can't get it back. In this case, what's lost is the semantics of the language itself. You could have a word-level neural net run in parallel. That'd be neat.
I want to see it in the cinema...
Please provide full references for scientific papers when they're used on the show. People like me might want to read them for ourselves.
Lorem ipsum 2.0. Seriously though, this is an interesting way of producing random text which at the same time is not entirely gibberish. I could use this as-is, right now!
So, the challenge here is to give the NN a memory, right? Is what you're doing approximately the same as giving a state the previous states as input? I guess it's not totally equivalent, since you said the hidden layers' weights are shared. If it is different, why not do as I said? Too many inputs? Moreover, is the "memory" length fixed, like 30 characters for example? I'm actually interested in this since I'm trying to figure out how to design a NN for training on a racing game, and sometimes I need to know that I've already been through some path so I don't go there again, so I need memory. Thanks for your answer!!!
An LSTM is different from other neural nets because it incorporates timesteps. Every timestep, data is fed into the model's input. The one thousand neurons in Computerphile's LSTM decide the output based on the inputs and the previous output. This allows the LSTM to remember, and unlike a Markov model, the memory capacity of an LSTM is theoretically unbounded. Unfortunately, in practice the memory is not unbounded, because backpropagation through time (the optimization algorithm) suffers from vanishing gradients, which limits its memory to a few hundred characters.
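A minimal sketch of that timestep behaviour using PyTorch's nn.LSTM (again, not the torch-rnn Lua code from the video); the (hidden, cell) state pair returned at each step is exactly the "memory" being described:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
state = None  # the (hidden, cell) pair starts empty

for t in range(5):                # five timesteps, one input each
    x = torch.randn(1, 1, 8)      # (batch, time=1, features)
    out, state = lstm(x, state)   # output depends on x AND the carried state
    print(t, out.shape, state[0].norm().item())
```

Feeding state back in each step is what gives the network memory beyond the current input; there is no fixed 30-character window, just whatever the gradients managed to teach it to retain.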
Now I really want to make a neural network that automatically replies to all my emails.
I'm glad I have contributed to the advancement of automatic Internet trolling.
He's working with the phone customisation.
Now teach a neural network to detect sarcasm.
What would happen if you used this as the generator in a generative adversarial network, with the discriminator classifying real vs. generated comments? Is it too random to get to realistic comments?
What about using iconic authors from the past to get reasonable conversations or responses for the present time, or maybe commentary aimed at the future?
It would be cool if you trained this network on the whole (or part) of the Wikipedia archive.
GPT-2: