4. How Kafka Works | Apache Kafka Fundamentals
cnfl.io/apache-kafka-101-lear... | In this video we’ll teach you how Kafka works through a code overview of a basic producer and consumer; high availability through replication; data retention policies; producer design and guarantees; delivery guarantees; partition strategies; consumer group rebalances; compacted topics; troubleshooting strategies; and a security overview.
After you’ve watched the video, you can take a quick quiz to check what you’ve learned and get immediate feedback here: forms.gle/exmv2J6Y2nXFTN8K8
As always you can visit us here: cnfl.io/kafka-training-certif...
LEARN MORE
► Apache Kafka 101 course: cnfl.io/apache-kafka-fundamen...
► Learn about Apache Kafka on Confluent Developer: cnfl.io/confluent-developer-t...
ABOUT CONFLUENT
Confluent, founded by the creators of Apache Kafka®, enables organizations to harness the business value of live data. The Confluent Platform manages the barrage of stream data and makes it available throughout an organization. It provides industries from retail, logistics, and manufacturing to financial services and online social networking with a scalable, unified, real-time data pipeline that enables applications ranging from large-volume data integration to big data analysis with Hadoop to real-time stream processing. To learn more, please visit confluent.io
#apachekafka #kafka #confluent
You are an AMAZING teacher and a presenter.
This was honestly the best explanation of any technology I've seen on the Internet! Thanks Confluent.
I don't do this often, but I have to do it here: my compliments on the author's great narration/delivery skills.
Thanks for a great explanation. This video has definitely answered some of the questions that I had.
Great presentation and clear information. Thank you
wow, this was a great presentation.
This video answered so many questions I had about Kafka. Awesome!
Fantastic talk, thanks for sharing this with us 👍🏻
Beautiful! If nothing else, I'll build a complex piece of architecture, only so "The slide looks easy". :) Jokes aside, this is really helpful. Thanks a ton for putting this together.
Thanks, Tim, great vid!
Thanks for the very interesting presentation! Just enough to start with Kafka! Are the slides shared publicly? Any link please?
That was awesome! Thank you so much!!
Could you please list the documentation sites from confluent web pages in the description as well?
This is unbelievably good !! 😍
This dude rules, honestly
Great tutorials, thank you sir
wow Simply Brilliant !
Great fundamentals series. I am a network guy moving up to the application layers working with banking eservices and Kafka will be one of the carriage horses in my new team, so I needed this overview. I am glad I found this clear and to the point explanation.
Hi there! 11:13 Why are Topic A and Topic B shown inside the producer? I guess they are on the broker side. The video is awesome! I can easily understand the material!
Amazing.
Thanks for the presentation, I'm starting to learn Kafka, but I have a question about the compacted log: how, based on timestamps, do the brokers know exactly what the last value for a key is? If there are multiple brokers across multiple datacenters, how are their clocks synchronized so that a more recent event isn't overwritten by an old one?
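For what it's worth, compaction doesn't depend on wall-clock timestamps at all: within a single partition, records are totally ordered by offset, and compaction keeps the record with the highest offset for each key. Since a partition lives on one leader at a time, clock sync between brokers never comes into it. A tiny illustrative sketch of that semantic (not Kafka's actual code):

```python
# Illustrative model of per-partition log compaction: for each key, keep the
# record with the highest offset. Offsets give a total order per partition,
# so clocks and timestamps are never needed to decide "latest".

def compact(partition_log):
    """partition_log: list of (offset, key, value) tuples in offset order."""
    latest = {}
    for offset, key, value in partition_log:
        latest[key] = (offset, value)      # a later offset always wins
    keep = {off for off, _ in latest.values()}
    # Return the surviving records, still in offset order.
    return [rec for rec in partition_log if rec[0] in keep]

log = [(0, "user1", "a"), (1, "user2", "b"), (2, "user1", "c")]
print(compact(log))  # [(1, 'user2', 'b'), (2, 'user1', 'c')]
```

Note the surviving records keep their original offsets; compaction never renumbers the log.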
great talk
Does it make sense to consume a compacted log? It's not event-driven anymore; it's more like snapshots.
Can we get the links shown at 26:25? We cannot click or copy them.
The consumer application is POLLING. What is this polling, actually? A connection to the Kafka server, i.e. a client-server connection? Does the polling connection thread ever expire? Is it a kind of PULL for events/messages? Please advise. Thank you.
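Yes, it's pull-based: the broker never pushes records to consumers. The consumer keeps a long-lived connection and repeatedly asks for records starting at its current offset; if nothing is there, it gets an empty batch back. A toy simulation of the pull model (not the real client, just the idea):

```python
# Toy pull model: the broker only answers fetch requests; the consumer
# tracks its own position (offset) and advances it after each poll.

class TinyBroker:
    def __init__(self):
        self.log = []                       # append-only partition log

    def produce(self, value):
        self.log.append(value)

    def fetch(self, offset, max_records=100):
        """Return up to max_records starting at offset; empty if none."""
        return self.log[offset:offset + max_records]

class TinyConsumer:
    def __init__(self, broker):
        self.broker = broker
        self.offset = 0                     # position tracked by the consumer

    def poll(self):
        records = self.broker.fetch(self.offset)
        self.offset += len(records)         # advance past what we consumed
        return records

broker = TinyBroker()
consumer = TinyConsumer(broker)
print(consumer.poll())     # [] - nothing yet; poll just returns empty
broker.produce("hello")
broker.produce("world")
print(consumer.poll())     # ['hello', 'world']
```

In the real client, `poll()` also drives group membership heartbeats, which is why you call it in a loop even when there's nothing to read.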
So if we always write to and read from the LEADER partition, why did you say in a previous video that partitioning is how scaling works in Kafka? It's not about scaling, but about durability. In other words, partitioning does not increase the number of messages that can be processed per second. Right?
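Not quite: replication gives durability, but partitioning really is the scaling mechanism, because each partition has its own leader and the leaders are spread across different brokers. Writes keyed to different partitions therefore hit different machines in parallel. A rough sketch of key-to-partition routing (Kafka's default partitioner hashes keys with murmur2; plain CRC32 here just illustrates the idea, and the broker layout is an assumed example):

```python
# Illustrative partitioner: different keys map to different partitions, and
# each partition's leader can live on a different broker, so load spreads.
import zlib

NUM_PARTITIONS = 3
LEADER_FOR_PARTITION = {0: "broker-1", 1: "broker-2", 2: "broker-3"}  # assumed layout

def partition_for(key: str) -> int:
    # Kafka's default uses murmur2; CRC32 stands in for it here.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

for key in ["order-1", "order-2", "order-3", "order-4"]:
    p = partition_for(key)
    print(key, "-> partition", p, "led by", LEADER_FOR_PARTITION[p])
```

So "always talk to the leader" is per partition; with many partitions there are many leaders, and throughput scales with them.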
Super
Since segments are continually removed from the head of the partition queue, how does Kafka maintain the correct index offset after a segment expires? Here is what I mean: as the segment starting at offset 0 expires, do the other segments keep their offsets, or does segment 1 become the new head segment starting at offset 0?
Using zookeeper
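To my understanding, offsets are never renumbered: retention deletes whole old segments, the surviving records keep their original offsets, and only the "log start offset" (the earliest offset still readable) moves forward. A small sketch of that bookkeeping, with made-up segment contents:

```python
# Sketch: retention drops whole segments from the head of the log, but
# surviving records keep their original offsets; only the log start offset
# advances. Offsets are permanent positions, not array indexes.

segments = [
    {"base_offset": 0, "records": [0, 1, 2]},
    {"base_offset": 3, "records": [3, 4, 5]},
    {"base_offset": 6, "records": [6, 7]},
]

def expire_oldest(segments):
    segments.pop(0)                        # delete the whole oldest segment
    return segments[0]["base_offset"]      # new log start offset

log_start = expire_oldest(segments)
print("log start offset:", log_start)                                      # 3
print("remaining offsets:", [r for s in segments for r in s["records"]])   # [3, 4, 5, 6, 7]
```

A consumer asking for an offset older than the log start gets an out-of-range error and resets according to its `auto.offset.reset` policy.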
Yeah, but I want exactly-once delivery from a Kafka consumer to an external API :v
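True exactly-once delivery to an arbitrary external API isn't achievable in general, but the usual workaround is effectively-once: attach a stable id to each record and make the sink idempotent, so redeliveries are detected and skipped. A minimal sketch of that dedup pattern (names are illustrative; in production the seen-id set would live in a durable store):

```python
# Idempotent-sink sketch: at-least-once delivery plus deduplication by a
# stable record id gives effectively-once side effects on the external API.

processed_ids = set()   # would be a durable store (DB table, etc.) in practice

def call_api_once(record_id, payload, api_call):
    if record_id in processed_ids:
        return "skipped (duplicate)"
    api_call(payload)                # the external side effect
    processed_ids.add(record_id)
    return "sent"

sent = []
api = sent.append                    # stand-in for the real API call

print(call_api_once("msg-1", "hello", api))   # sent
print(call_api_once("msg-1", "hello", api))   # skipped on redelivery
print(sent)                                   # ['hello'] - API hit exactly once
```

The ordering matters: if you add the id before the call and crash in between, you drop the record; add it after (as here) and you may retry, which the dedup check absorbs.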
Set the speed to 0.75
8:42, when a broker dies and a new replica is elected as the leader, how can we guarantee that no data is lost? Messages sent to the dead broker but not yet replicated to its followers are lost, aren't they?
It depends on your ack setting: with NONE or LEADER, yes, data loss is possible, but with ALL the followers already have a copy before the producer is acked.
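A toy model of that trade-off, with replication reduced to list appends: with `acks` set to the leader only, the producer is acknowledged before followers copy the record, so a leader crash can lose it; with `acks=all`, the ack only comes once the in-sync followers have it.

```python
# Toy model of producer acks vs. data loss on leader failure. Brokers are
# just lists; "replication" is copying a record into the follower lists.

def produce(record, acks, leader, followers):
    leader.append(record)
    if acks == "all":
        for f in followers:
            f.append(record)       # replicate BEFORE acknowledging
    return "acked"                 # acks="leader": acked before replication

leader, followers = [], [[], []]
produce("payment-42", acks="leader", leader=leader, followers=followers)
# Leader dies before replication; a follower is elected leader without it.
new_leader = followers[0]
print("payment-42" in new_leader)    # False -> acked record is lost

leader2, followers2 = [], [[], []]
produce("payment-43", acks="all", leader=leader2, followers=followers2)
new_leader2 = followers2[0]
print("payment-43" in new_leader2)   # True -> record survives failover
```

In real Kafka, `acks=all` is paired with `min.insync.replicas` to control how many copies must exist before the ack.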
Hello. Can we have the slides please?
I see a lot of dad-jokes on this one. Learnt a few for my child.
11:00 Bro, if everything was fine, I wouldn't be here. Great explanation though!
Congratulations folks: if you’re watching this video then you’re ready to transition to the Senior Engineer level
But actually new grads who can’t find a job ;(
“Is the disc on fire?!” :)
Talking about GDPR... if a customer asks me to wipe their data, how would I delete events originated by this user, given that Kafka's logs are immutable? 🤔
Just spitballing: encrypt each user's messages with a per-user encryption key. Retain that key for the lifetime of the user's account and use it to decrypt their messages when consuming them. Then throw the key away if and when they ask to be forgotten or delete their account. The messages can stay in the log, but they are no longer readable by anyone because the key is gone. Don't listen to me, I've literally never solved this problem before, but I imagine others have worked around log immutability without the cost of, well, mutating a log. I bet there are open-source libraries or blog posts out there with better solutions baked in already.
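This pattern is usually called crypto-shredding, and it is a known approach. A toy sketch of the key lifecycle below: the XOR "cipher" is only a stand-in for real encryption (use AES via a proper library in practice), and all names are made up; the point is that deleting the per-user key renders that user's log records unreadable without ever mutating the log.

```python
# Toy crypto-shredding sketch. XOR stands in for real encryption; the idea
# shown is the key lifecycle: per-user keys live OUTSIDE the immutable log,
# and "erasure" = deleting the key, not rewriting the log.
import secrets

user_keys = {}   # per-user keys, stored outside the log (e.g. a key service)

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_for(user, plaintext: bytes) -> bytes:
    key = user_keys.setdefault(user, secrets.token_bytes(16))
    return xor(plaintext, key)

def decrypt_for(user, ciphertext: bytes):
    key = user_keys.get(user)
    return xor(ciphertext, key) if key else None   # key gone -> unreadable

log = [("alice", encrypt_for("alice", b"alice's order"))]   # immutable log
print(decrypt_for("alice", log[0][1]))   # b"alice's order"

del user_keys["alice"]                   # GDPR erasure: shred the key
print(decrypt_for("alice", log[0][1]))   # None - log untouched, data gone
```

The other common answer is a compacted topic keyed by user id, where producing a null value (a tombstone) eventually removes the user's record during compaction.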
Compacted Topics are more subjective
Could we have a link to the slides?
how come it has only 258 likes?
This course would be so much better if the speaker would not change tone, volume, and speed of his voice so frequently.