Vision Transformer Basics

May 22, 2024
16,712 views

An introduction to the use of transformers in computer vision.
Timestamps:
00:00 - Vision Transformer Basics
01:06 - Why Care about Neural Network Architectures?
02:40 - Attention is all you need
03:56 - What is a Transformer?
05:16 - ViT: Vision Transformer (Encoder-Only)
06:50 - Transformer Encoder
08:04 - Single-Head Attention
11:45 - Multi-Head Attention
13:36 - Multi-Layer Perceptron
14:45 - Residual Connections
16:31 - LayerNorm
18:14 - Position Embeddings
20:25 - Cross/Causal Attention
22:14 - Scaling Up
23:03 - Scaling Up Further
23:34 - What factors are enabling effective further scaling?
24:29 - The importance of scale
26:04 - Transformer scaling laws for natural language
27:00 - Transformer scaling laws for natural language (cont.)
27:54 - Scaling Vision Transformer
29:44 - Vision Transformer and Learned Locality
Topics: #computervision #ai #introduction
Notes:
This lecture was given as part of the 2022/2023 4F12 course at the University of Cambridge.
It is an update to a previous lecture, which can be found here: • Neural network archite...
Links:
Slides (pdf): samuelalbanie.com/files/diges...
References for papers mentioned in the video can be found at
samuelalbanie.com/digests/2023...
For related content:
- Twitter: / samuelalbanie
- personal webpage: samuelalbanie.com/
- KZhead: / @samuelalbanie1

Comments
  • This is one of the best explanations of not just ViT, but transformers in general that I have watched. Excellent video

    @rldp, 4 months ago
  • The best video to easily understand ViT

    @aminkarimi1068, 2 days ago
  • Unbelievable quality. Happy to be here before this channel blows up.

    @whale27, 5 months ago
    • Thanks!

      @SamuelAlbanie1, 5 months ago
  • Goodness, what a remarkable video. This is by far the best explanation video I have watched about vision transformers.

    @capsbr2100, 2 months ago
  • This is one of the cleanest explanations of ViTs I have come across. Amazing work Samuel! Inspiring.

    @thetechnocrack, 3 months ago
  • For a beginner like me, I would say this is the introduction video we were waiting for :')

    @jesusalpaca7170, 2 months ago
  • Excellent video! Honored to be here before it goes viral 🙏🏾

    @continuallearning8366, 5 months ago
  • Thank you for making this wonderful video. So clear! Please continue your awesome video work!

    @user-iy6gq8yd3p, 5 months ago
  • man, what a video. thank you!

    @gnorts_mr_alien, 28 days ago
  • I was studying up on Transformers and ViTs half a year ago, and recently checked back to find this (to my surprise). Great clear explanations, can tell CAML is in great hands!

    @thecheekychinaman6713, 2 months ago
  • I've held guest lectures on the inner workings of transformers myself, but I still learned a bunch from this! Everything after 22:15 was very exciting to watch, very well presented and easy to understand! Very well done, I subscribed for more :)

    @PotatoKaboom, 5 months ago
  • Your weekly AI news was really useful. Please bring it back!

    @abhimanyuyadav2685, 5 months ago
  • great explanation

    @minute_machine_learning5362, 15 days ago
  • Thank you so much!

    @zainbaloch5541, 1 month ago
  • 🎯 Key Takeaways for quick navigation:
    00:00 🧠 The Evolution of AI and Computer Vision - General methods that leverage computation prove most effective in AI; the field evolved from handcrafted features to Convolutional Neural Networks (CNNs) and then to Transformers, trading inductive biases for data-driven approaches.
    01:09 🤖 Neural Network Architectures - Why network architecture matters for building intelligent machines; the distinction between network architecture and network parameters under resource limitations.
    02:32 💡 Introduction to Transformers - Transformers' dominance in AI, first in Natural Language Processing (NLP) and then in Computer Vision, and why the transition took time.
    03:57 🌐 Understanding Transformers: Encoder and Decoder - The Transformer architecture's encoder and decoder components, and its variants: encoder-only, decoder-only, and encoder-decoder.
    05:33 🔍 Applying Transformers to Computer Vision - Vision Transformers (ViT) slice images into patches and feed the resulting sequence of embeddings, plus position embeddings, to a Transformer encoder.
    07:08 🔗 Multi-Head Attention in Transformers - The multi-head attention mechanism; queries, keys, and values let different embeddings communicate.
    09:12 🧩 Transformer Encoder Blocks and Scaling - The structure of Transformer encoder blocks (multi-head attention plus MLP); residual connections and layer normalization are important for optimization.
    11:05 🚀 Scaling and Hardware Influence in AI - The impact of scaling and hardware advances on Transformer performance; the exponential growth in compute used to train large models.
    13:50 🛠 MLP and Optimization in Transformers - The multi-layer perceptron processes each embedding independently; non-linearities such as ReLU and GELU.
    15:00 ⚙️ Residual Connections and Layer Normalization - These components ease gradient flow and stabilize learning when training deep networks.
    17:05 🌐 Positional Embeddings in Transformers - Positional embeddings preserve spatial information in sequences; several implementations exist.
    19:27 🔄 Cross Attention and Causal Attention in Transformers
    (Made with HARPA AI)

    @user-fv5oj4qk1l, 5 months ago
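The takeaways above walk through ViT's core mechanics: slice the image into patches, embed each patch linearly, then let the embeddings communicate via query/key/value attention. A minimal NumPy sketch of those two steps; all sizes (patch size 8, width 64) and weight initializations are illustrative assumptions, not the lecture's code:

```python
import numpy as np

rng = np.random.default_rng(0)
P, D = 8, 64          # hypothetical patch size and embedding width

def patchify(img, p=P):
    """Split an (H, W, C) image into non-overlapping p x p patches, flattened."""
    H, W, C = img.shape
    patches = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)            # (num_patches, p*p*C)

def single_head_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of embeddings X: (N, D)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (N, N) pairwise similarities
    A = np.exp(scores - scores.max(-1, keepdims=True))
    A /= A.sum(-1, keepdims=True)                    # softmax: each row sums to 1
    return A @ V                                     # each output mixes all values

img = rng.normal(size=(32, 32, 3))                   # toy 32x32 RGB "image"
X = patchify(img) @ (rng.normal(size=(P * P * 3, D)) * 0.02)  # linear patch embedding
Wq, Wk, Wv = (rng.normal(size=(D, D)) * 0.02 for _ in range(3))
out = single_head_attention(X, Wq, Wk, Wv)
print(X.shape, out.shape)                            # (16, 64) (16, 64)
```

A real ViT additionally prepends a learned class token, adds position embeddings, uses multiple heads, and stacks many such blocks with MLPs, residual connections, and LayerNorm.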
  • That is a masterpiece of a video! Many thanks for your work!

    @rmmajor, 1 month ago
  • Thank you so very much for sharing your insights and intuition behind soooo many concepts.

    @amoghjain, 5 months ago
    • Glad it was helpful!

      @SamuelAlbanie1, 5 months ago
  • Thanks for such an informative and educational video

    @mattsong6875, 5 months ago
  • Wow, this video helped me a lot in understanding attention and ViT. Packed with all the logic needed to design a solution using the latest techniques as of today.

    @vil9386, 3 months ago
  • Very well presented!

    @sbdzdz, 5 months ago
  • Very good video - both its content and presentation!

    @soylentpink7845, 6 months ago
  • Great work!

    @EigenA, 2 months ago
  • Great video

    @tomrichter9021, 3 months ago
  • More Compute Is All You Need

    @miraclemaxicl, 2 months ago
  • Excellent and clearly communicated, thanks. A question about 20:05, when discussing positional embeddings: the legend of the waves says dim 4, ... dim 7. Does "dim" here refer to the patch embedding length D? As in, we'll get as many sine waves as there are dims in D?

    @flamboyanta4993, 5 months ago
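On the question above: "dim" indexes components of the D-dimensional position embedding, so "dim 4 ... dim 7" are four of the D waves, each with its own frequency. A hedged NumPy sketch of the fixed sinusoidal scheme from "Attention is All You Need" (ViT more commonly uses learned position embeddings, as the lecture notes); sequence length and D are chosen arbitrarily here:

```python
import numpy as np

def sinusoidal_position_embeddings(n_positions, D):
    """One D-dimensional embedding per position; dimension pair (2i, 2i+1)
    is a (sin, cos) wave with frequency 1 / 10000**(2i / D)."""
    pos = np.arange(n_positions)[:, None]        # (N, 1) position indices
    i = np.arange(0, D, 2)[None, :]              # (1, D/2) even dimension indices
    angles = pos / (10000 ** (i / D))            # (N, D/2): one frequency per pair
    emb = np.zeros((n_positions, D))
    emb[:, 0::2] = np.sin(angles)                # even dims: sine waves
    emb[:, 1::2] = np.cos(angles)                # odd dims: cosine waves
    return emb

E = sinusoidal_position_embeddings(16, 64)       # e.g. 16 patches, D = 64
# "dim 4" vs "dim 7" in a plot legend would be columns E[:, 4] and E[:, 7]:
# the same positions sampled by waves of different frequencies.
print(E.shape)                                   # (16, 64)
```

So yes: there are exactly D such curves, one per embedding dimension, and a legend showing "dim 4 ... dim 7" is plotting four of those D columns.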
  • A+++ quality from other planets.

    @geomanisgod, 2 months ago
  • Another question: at 30:00, discussing how early attention layers tend to focus on local features and deeper ones on more global features of the input, I didn't understand the significance of the x-axis (sorted attention head). Is this just a count of how many attention heads there are in the respective block? Does it suggest that in the large-data regime, even early attention blocks with 14+ heads will tend to observe features globally? Is this correct? Thank you in advance!

    @flamboyanta4993, 5 months ago
  • Any ViTs that are open source?

    @iez, 2 months ago
  • So for someone approaching this now, working on resource-constrained devices, both for training and inference, it makes more sense to just stick to CNNs?

    @capsbr2100, 2 months ago