6. Multicore Programming
May 22, 2024
135,719 views
MIT 6.172 Performance Engineering of Software Systems, Fall 2018
Instructor: Julian Shun
View the complete course: ocw.mit.edu/6-172F18
KZhead Playlist: • MIT 6.172 Performance ...
This lecture covers modern multi-core processors, the need to utilize parallel programming for high performance, and how Cilk abstracts processor cores, handles synchronization and communication protocols, and performs provably efficient load balancing.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu
*My takeaways:*
1. Why we need multicore processors 1:13
   - End of Moore's Law
   - End of frequency scaling
2. Abstract multicore architecture 7:40
3. Shared-memory hardware
   - Cache coherence 10:05
   - MSI protocol 13:04
4. Concurrency platforms 19:21
   - Pthreads (and WinAPI threads) 24:45
   - Threading Building Blocks 39:48
   - OpenMP 45:27
   - Cilk 50:55
Man you've been putting in much effort. Cheers.
I use C++ Builder, a brother product to Delphi; they use TThreads and I understand it fully. I've been using threads for a decade, and had a stint with parallel coding.
You're telling me I can take a class like this at home while scratching my belly? I love the 21st century.
This is a great class!
Thank you so much!
Excellent thank you for the content
Great class.
In the old days of Moore's Law, each doubling of transistor density allowed 2X the transistors at a given die size on a new process, and the expected performance gain over the previous architecture on the same process was about 1.4X. That, coupled with the 1.4X transistor switching frequency, was the foundation of the original Moore's Law scaling. To a degree, this was roughly the case for 386 -> 486 -> Pentium -> Pentium Pro.

From that point on, it was difficult to 1) continue scaling performance at the single-core level, and 2) coordinate the size of a team necessary to design more complex processors/cores. After Pentium and Pentium Pro, Intel started adding SIMD instructions, which improved performance for code that could use the vector registers and instructions. The two Pentium 4s, Willamette and Prescott, probably represented the final steps in the heavy push toward single-core complexity. Afterwards, most effort/die area went to increasing core count, though the superscalar width did increase (from 4 to 6 to 8 and 10?).

In going to two cores, the expected throughput gain for a threaded workload could be 2X, i.e., almost linear with transistor count, versus square root of 2 for 2X transistors, but obviously no gain for a single-threaded workload. The reason we focused on increasing core complexity for 1.4X was that, in theory, this automatically benefits most software, whereas increasing core count (and vector instructions) only benefits code written to match the capability of recent-generation processor architectures. It's been twenty-five years since multiprocessor systems became generally accessible, yet many people still cannot write good multithreaded code.
this was the most articulate programming lecture i've ever seen.
It was elementary, no wonder you found it articulative
@@Titanrock797 Watson
Nice lecture.
"Favourite textbook". Yes. Most definitely.
This is a great presentation, it would be more fruitful if a timing or quantitative comparison also provided.
Thanks!
The examples are not consistent: sometimes one of the fib() is evaluated in the main thread and the other in a spawned thread, while, sometimes, two threads are spawned to compute each fib().
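For reference, both styles the comment describes are legal under Cilk's semantics. A sketch in Cilk syntax (this needs an OpenCilk/Cilk Plus compiler, so treat it as illustrative rather than drop-in code):

```c
#include <stdint.h>

/* Pattern A: spawn one branch, compute the other in the calling strand. */
int64_t fib_a(int64_t n) {
    if (n < 2) return n;
    int64_t x = cilk_spawn fib_a(n - 1);  /* may run in parallel */
    int64_t y = fib_a(n - 2);             /* runs in the caller  */
    cilk_sync;                            /* wait for the spawned child */
    return x + y;
}

/* Pattern B: spawn both branches. */
int64_t fib_b(int64_t n) {
    if (n < 2) return n;
    int64_t x = cilk_spawn fib_b(n - 1);
    int64_t y = cilk_spawn fib_b(n - 2);
    cilk_sync;
    return x + y;
}
```

Pattern A is the usual idiom: spawning the last call right before a `cilk_sync` (Pattern B) adds spawn overhead without exposing any extra parallelism, since the caller would just wait at the sync anyway.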
Good introduction, and no laziness in the lecturing. We are all stupid unless somebody like you enlightens us with this knowledge. Thank you for the effort. I will make a donation once I have some money.
I wouldn't donate. They have 18 billion dollars for only 4,000 undergrads, and 11,000 total students (including undergrads). They can definitely maintain MIT OpenCourseWare with the 18 billion dollars they already have. Give money to a college that is going bankrupt from the COVID-19-induced economy.
@@dominic8147 lol
The pthread example could have been written to execute on more than just 2 threads by starting the pthread in the fib function instead of the main function.
That would not be a good idea though, it'd turn a small recursive function into one creating an exponential number of threads.
Amazing
@1:5:26 : Don't reducers need to be commutative as well?
How can I get these slides, or any source material where I can read this? @MIT OpenCourseWare
The course materials are available on MIT OpenCourseWare at: ocw.mit.edu/6-172F18. Best wishes on your studies!
@@mitocw thank you very much 😍
@16:15 Don't hide anything! Explain the protocol. I am assuming there might be some opcode, represented by a signed integer or maybe some constant number (i.e., a flag), defining the invalidity of the particular processor's cache state.
@-58:09 It was implemented in hardware. What does that mean? I am assuming that if we want to know the constant, signed integer, or flag used in the processor's cache, we need to understand the architecture of the processor, for example the ALUs and the logic gates: how many AND, OR, NOT, NAND, NOR, XOR, and XNOR. If we know that, then we can program the CPU based on it, meaning we can write code with for/if statements. For instance, if one processor wants to modify the value of a variable stored in its private cache, we can write (or clear) the flag so the line takes on the "modified" state. Imagine these in binary: "modified" state = 00, "shared" state = 01, "invalid" state = 10. If those are the opcodes, the processor will know what to do next.
@-50:40 "Library functions mask the protocols involved in interthread coordination" This is the reason I am here. When people said to me Object oriented programming is easier to learn because you can imagine an animal class. And then you can do whatever you want in that class from a dog class, donkey class etc. You can do abstraction, implementation, and bla2 bla2. And then define your method once and your can use it again by creating object of a method in a class. Or maybe instance of that class i.e object of that class. Bla2 bla2. And then you try to code using the library functions, things are going too complicated at the moment. With so many functions and class instances that can perform its functionality within their own framework. For example my button.setOnClickListener(new View.Onclick) { OnClick (View v)} So, I can't write it in another way, Otherwise I'm fucked up So I need to memorise these syntax. So I believe maybe this is just a masked syntax there are numbers underneath them. They are constants, you cannot change that. If you can change that constant for example, you can unlock anything I guess.
how had only 2 people seen the fibonacci program? The last lecture was literally based around fib.
I love the sarcasm... Especially when he's talking about the reasons for the plateau in 2004/2005.
Is that what you got out of this video?
@@Titanrock797 wow, you're material for r/iamverysmart, lmao
@@yendar9262 lol seriously
11:06 ~ 11:28 Are you sure about this part? Shouldn't the green rectangles indicate private cache memory?
protocols?
This has a decent introduction to multicore history. A few comments. The pthread code could be easily cleaned up and generalized. Other pthread-based libraries or preprocessors are merely syntactic sugar. Any platform that requires a special language processor or operating environment will face adoption difficulties. Multithreaded programming is difficult to master, but starting to learn with a sugar-coated language is not a good way to master the skill, especially in a formal computer science curriculum. The lack of coverage of synchronization basics like mutexes, semaphores, lockless techniques, etc. at an early stage is kind of misguided. Perhaps these are covered later in the course, but they are core concepts, not afterthoughts. It might be better to first teach how to deal with pthreads and related foundational libraries, then move on to higher-level abstractions and syntactic aids.
Good points, absolutely agree! And I wanted to add more: 1. no coverage of C++ std multithreading, 2. no mention of NUMA at all... 3. too much attention to Cilk; at some point I got the impression that MIT was advertising it!
I don't get the pthreads example of fib(n). Wouldn't we want to use as many threads as we have processor cores, and not just 2 threads? E.g., if we have a 16-core machine, couldn't the program be written so that for 45 >= n >= 30 we start a new thread, and do that in fib(), not in main()? We could keep track of how many threads we have already started with a mutex, and limit that to some const MAX_THREADS... and calculate the rest sequentially. No? Yes?
It would be much easier to introduce a very simple thread pool, with as many threads as logical cores (or whatever), and use it. And if you have a clean, well-defined interface to this pool, the program may look as simple as OpenMP or Cilk, but will not require any additional compilers or the "tribal knowledge" associated with them!
Julian Shun I would like to meet you and ask you some questions :)
I was waiting for assembly level of detail in this.
One word: COROUTINES
Isn't it more exactly called multithreaded programming? The cores are just the physical processors, while the threads are the logical processors, which can basically be used as a "core" as described in the video.
Kind of, but a lot of architectures don't have multithreading by default (e.g., a bone-stock ARM core), and there are a bunch of extra steps to multithread. Also, multithreading isn't necessarily true concurrent processing and can be marginally slower than using single native threads, but unless you're getting into real-time systems and HPC, there probably wouldn't ever be a need for the distinction between native and logical hardware.
2,335 views without any DISLIKES : Hypos !!!
I disliked the video just for this comment.
Mit ?
This is my biggest criticism of this class, which is otherwise perfect - Why Cilk?! Sure it is the professor's baby, but it is obsolete. Gcc doesn't even support it anymore.
Yeah OpenMP would have been a better choice.
I mean, you don't actually learn multicore programming using these platforms or libraries since it abstracts it all away.
Programming should be learned by yourself; only writing code can help you gain experience.
"Eeeh..." - This guy is actually quite a good lecturer imo. All the more annoying that he says it all the time. It's not even hesitation, it's just a bad habit. MIT should do some basic coaching/feedback on stuff like this. I also have never heard swallowing etc picked up by the mic like this. Ought to be an easy fix like put the mic further down or more to one side or a different device.
paralyzation != parallelization pronounce all the syllables :)
It's entirely possible he's a non-native speaker/not a great public speaker.
@@yellowblanka6058 Because he's Asian? Lol, his accent's American and his name is Julian...
Given the bad algorithm used paralyse was accurate!
Would you please stop saying ...Urh...word..words.Urh.Urh.more.words.Urh...