Stream Your Components From NextJS Server Actions

May 21, 2024
14,699 views

The Vercel AI library has an amazing component streaming feature that nobody is talking about! Be the first on your team to learn how to take advantage of this awesome new functionality.
Finished code: github.com/jherr/streaming-sa...
Original Example: sdk.vercel.ai/docs/api-refere...
👉 Upcoming NextJS course: pronextjs.dev
👉 Don't forget to subscribe to this channel for more updates: bit.ly/2E7drfJ
👉 Discord server signup: / discord
👉 VS Code theme and font? Night Wolf [black] and Operator Mono
👉 Terminal Theme and font? oh-my-posh with powerlevel10k_rainbow and SpaceMono NF
00:00 Introduction
02:12 How It Works
04:26 Using Different Prompts
06:10 Parsing The Response
07:04 Connecting To Spotify
09:53 State Sharing With Context
12:00 This Isn't "Micro-FEs"
12:47 Outroduction

Comments
  • I think the real point of this is to be able to have the LLM generate input parameter data for functions (i.e., a weather function that fetches weather data and then derives a whole bunch of information from it, like a short summary, or an answer to a question like "should I wear a jacket today", and returns it as JSON) and then stream that data to the screen in the form of a UI component instead of text. Edit: to be clear Jack, instead of that parsing function you should use OpenAI's functions API, which will return the data you require as a guaranteed JSON structure (e.g., the params that your component expects). With one small further disclaimer: although OpenAI says the system is much better about guaranteeing the form of the JSON response via the functions API, you should still obviously validate it. I recommend Zod.

    @avi7278 · 5 months ago
    • I think that's new, right? The functions API?

      @jherr · 5 months ago
    • @jherr The functions aren't new, and the "functions" API, as it used to be called, is actually deprecated; it's now referred to as "tools", but it's essentially the same thing. What is new: 1. The guaranteed response format from the recent OpenAI dev days. Before, it was a big problem getting valid JSON back in the correct structure. Imagine 10% of your calls failing due to malformed responses; in my experience it's closer to 1% or less now (see: response_format: json). 2. Parallel function execution. You used to only be able to process one function at a time, so if your app needed to check the weather and details about nearby attraction parks, they'd have to be done in separate calls. The streaming component stuff seems to tie it all together nicely: ask a question, spit out a UI with the exact component params you need, and a single call to the LLM.

      @avi7278 · 5 months ago
    • @avi7278 Ah, I thought they recently came out with an alternative JSON output. Thanks for the tips.

      @jherr · 5 months ago
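The validation step this thread keeps coming back to can be sketched without any libraries. This is a hypothetical example: `WeatherCardProps` and `parseWeatherCardProps` are illustrative names, and in a real app Zod would replace the hand-written type guard, as the commenter suggests.

```typescript
// Hypothetical props for a weather UI component whose values an LLM
// tool call is expected to fill in as JSON.
interface WeatherCardProps {
  summary: string;
  temperatureC: number;
  wearJacket: boolean;
}

// Even with response_format/tools, validate the model's JSON before
// rendering anything from it. (Zod would replace this guard in practice.)
function parseWeatherCardProps(raw: string): WeatherCardProps | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // malformed JSON from the model
  }
  if (typeof data !== "object" || data === null) return null;
  const d = data as Record<string, unknown>;
  if (
    typeof d.summary !== "string" ||
    typeof d.temperatureC !== "number" ||
    typeof d.wearJacket !== "boolean"
  ) {
    return null; // valid JSON, but not the shape the component expects
  }
  return {
    summary: d.summary,
    temperatureC: d.temperatureC,
    wearJacket: d.wearJacket,
  };
}
```

Returning `null` instead of throwing lets the caller fall back to a plain-text rendering when the model's output doesn't match the component's contract.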
  • Learned a bunch here, and React Context clicked even more now. Thanks! :)

    @edgarasben · 5 months ago
  • Hey, Jack! I really like this method of invoking whole components from the server to the client. But don't you think that when we send a bunch of data it will be big in size? I'm working with a bunch of data and I like this way of handling it, but I think it will cost a lot of MB and user time waiting for that data. Can I use it without streaming?

    @nick-ui · 5 months ago
  • Very interesting!! When we are past the token limitation, I can envision a future where the AI can combine information from not only the user input, but also your codebase, and return interesting HTML

    @leonardolombardi3968 · 5 months ago
  • Does that also make it possible to optionally fetch data on the server side (for example, on tab activation)?

    @f4pq4bt · 5 months ago
  • Hi Jack, I am confused about a few points when dealing with RSCs and server actions. To clarify my problem, I want to give an example. On the client side, when we setState or use other setters, a new virtual DOM is created and reconciliation happens via the diff algorithm. However, I can't see how RSCs deal with that. I know RSCs don't re-render, but how can we optimistically update those things? My guess: when we use a server action and revalidate a path, the function components get called, then a new virtual DOM is created and sent to the browser via RPC, and React in the browser does the reconciliation and diffing. Do those things happen in the browser or on the server? I am confused and not sure my question is clear.

    @berkaycirak · a month ago
  • Nice video. Got me wanting to learn how their experimental streaming React response from the AI library works under the hood. The AI stuff is cool, but I feel understanding how you'd code this with server actions directly is the real valuable Next.js/React knowledge. Might go have a dig around myself. Edit: found the library; looks like it's the native web streaming API on the client side, neat! Really complex though.

    @griffadev · 5 months ago
    • It is a wildly complex library. Not quite sure why that is. It feels like someone got a solution going and then said, let's over-optimize this code to make it really hard to follow.

      @jherr · 5 months ago
    • @jherr Yeah, I was hoping to see how I'd apply the pattern myself and gave up pretty swiftly.

      @griffadev · 5 months ago
    • @griffadev I think I might keep at it. Slice off a day here and just see if I can tear out all the AI stuff and get it down to just the streaming core. I really think that it being wound around the axle of AI is just ... crappy.

      @jherr · 5 months ago
    • @jherr It doesn't seem to be that hard. They're doing a ton (I MEAN A TON) of work under the hood, but if you just want to make a server action create a readable/writable stream, then turn it into promises to be resolved by the client, it's actually pretty easy. I'm sure what I have can be improved, but you can write a client component with its own state just like the demo above. You can even send a server component wrapped inside a `suspense` tag and it works just as you'd expect if it were rendered from a page. I can throw it up into a repo if you want to check it out. It's pretty sloppy and I would love some help with it.

      @brandonetter · 5 months ago
    • @brandonetter I did something like that today: github.com/jherr/generic-streaming. This does the streaming from a server action but removes the AI connection.

      @jherr · 5 months ago
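The pattern this thread describes can be sketched in a few lines, independent of any AI library. This is a minimal, illustrative sketch, not the actual code from jherr/generic-streaming: the "server" side produces a ReadableStream of chunks, and the "client" side reads until done, rendering each piece as it arrives. `streamChunks` and `collectChunks` are made-up names for this example.

```typescript
// "Server" side: produce chunks over time instead of one big response.
// In a real Next.js app this stream would be returned from a server action.
function streamChunks(chunks: string[]): ReadableStream<string> {
  let i = 0;
  return new ReadableStream<string>({
    pull(controller) {
      if (i < chunks.length) {
        controller.enqueue(chunks[i++]); // push the next piece to the consumer
      } else {
        controller.close(); // done; the reader's next read() resolves with done=true
      }
    },
  });
}

// "Client" side: read until done, handling (here: collecting) each chunk
// as it lands, rather than waiting for the whole payload.
async function collectChunks(stream: ReadableStream<string>): Promise<string[]> {
  const reader = stream.getReader();
  const received: string[] = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done || value === undefined) break;
    received.push(value);
  }
  return received;
}
```

In the real Next.js setup the stream crosses the server/client boundary via the server action's serialized response; here both ends run in one process just to show the flow.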
  • What should I do to save the "saved albums" when switching to a different page?

    @colepeterson9543 · 5 months ago
    • It would go away; it's just in memory. If you wanted to save that state during a page nav you'd have to move that provider up into the layout. That would still be in memory, but it would persist between page navs, though not on a page refresh. If you wanted to keep it permanently you'd have to send the add event back to an API or server action, store that in a database or KV store, and retrieve it on page load, probably indexed by user session. So you'd need auth for that.

      @jherr · 5 months ago
  • Great 🎉

    @zksumon847 · 5 months ago
  • What font and theme are you using?

    @aer1th621 · 5 months ago
  • Hey Jack, can you tell us your VS Code theme name?

    @zatakzataks · 5 months ago
    • Check Description 🎉

      @zksumon847 · 5 months ago
  • I'm not sure if I'm dumb (maybe 😅) or I somehow cannot get it, but what are the benefits of using this "streaming" Next.js/React approach? I still don't understand why this is such a "groundbreaking" feature. How does this differ from regular server-side rendering?

    @Will4_U · 5 months ago
    • If you have a bunch of requests to make, or the requests are slow/inconsistent, then streaming is a fantastic way of ensuring that the customer is still seeing something. As opposed to the Pages Router, or other SSRs, that require all the requests to be completed before rendering the page.

      @jherr · 5 months ago
    • @jherr That makes sense though

      @najlepszyinformatyk1661 · 5 months ago
    • @najlepszyinformatyk1661 The other thing is that you are returning fully interactive components initialized with their state and able to connect to context. Maybe something like PHP/WebComponents could ballpark this without the streaming.

      @jherr · 5 months ago
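The streaming-vs-blocking distinction Jack describes can be shown with a toy timing sketch. This is illustrative only, not Next.js internals: blocking SSR (Pages Router style) emits nothing until every request finishes, while streaming flushes each section as soon as its own request resolves. The names `fakeRequest`, `renderBlocking`, and `renderStreaming` are made up for this example.

```typescript
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Stand-in for a data fetch that takes a while.
async function fakeRequest(name: string, ms: number): Promise<string> {
  await delay(ms);
  return `<section>${name}</section>`;
}

// Blocking SSR: the first byte only goes out after the slowest
// request completes (~150ms here), even though "fast" was ready at ~10ms.
async function renderBlocking(): Promise<string> {
  const parts = await Promise.all([
    fakeRequest("fast", 10),
    fakeRequest("slow", 150),
  ]);
  return parts.join("");
}

// Streaming: each section is yielded as soon as it's ready, so the
// customer sees "fast" almost immediately while "slow" fills in later.
async function* renderStreaming(): AsyncGenerator<string> {
  yield await fakeRequest("fast", 10);
  yield await fakeRequest("slow", 150);
}
```

Both produce the same final markup; the difference is entirely in when the customer sees the first piece of it.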
  • What I'm missing is returning streams from server actions

    @neociber24 · 5 months ago
  • Streaming react? When HTMX makes a cameo? lol

    @AtRiskMedia · 5 months ago
  • When front end devs reinvent RPC

    @RyanGuthrie0 · 5 months ago
  • It is awesome; at the same time it is complex as f.

    @ThugLifeModafocah · 5 months ago
  • Feels like way too much magic is going on here. I'm thinking Remix is simpler at this point.

    @nulI_dev · 4 months ago
    • Remix is bringing in RSCs. We'll start to see more of these concepts in there as well.

      @jherr · 4 months ago
    • @jherr What are your thoughts on the way Remix is doing data fetching vs Next? The loader functions seem much simpler, and Remix seems to adhere to web standards more than Next. Curious, as I only have production experience with the Next pages router; I need to check out Remix for my next side project.

      @nulI_dev · 4 months ago
    • @nulI_dev Loader always seemed, to me, to be the rough equivalent of getServerSideProps (yes, I know, it also handles static, etc.). RSCs are different though; they get data self-contained to the component. So you can bring in a data-bound component by just using it in the tree and it will load its own data, as opposed to loader or GSSP, which have you doing the work of loading the data and routing it to the component. I know Remix touts their architecture as more "web standard", but I don't see how that's the case. Loader and GSSP are the same flow, and AFAIK web standards have exactly nothing to say about how/when a server loads its data. Honestly, I don't find either the "closer to web standards" or the "loosely coupled to React" arguments from Remix to be anything more than talking points. When I've pressed them on the customer value of the "loosely coupled to React" idea they didn't have good answers for how we, as users of the framework, would benefit from that. The responses struck me as "framework first", meaning that the framework is more important than the app that sits on top of it, which is a design philosophy I am diametrically opposed to. Frameworks help us build apps. They aren't the "thing"; they are the thing that helps us get to the "thing".

      @jherr · 4 months ago
  • 0:36 experimental_StreamingReactResponse That is an abomination to naming conventions. Whoever chose that name should have their developer license revoked immediately.

    @Dylan_thebrand_slayer_Mulveiny · 5 months ago
    • To be honest, I think it's a good use of naming in programming. Putting a disclaimer in documentation or showing a warning log in dev tools or terminal usually gets ignored. I think this is a good explicit way to mark something as experimental and it can be useful during code review so that these things don't just slip in unnoticed.

      @rand0mtv660 · 5 months ago
    • It's written that way for a reason.

      @neociber24 · 5 months ago
  • Bad, bad title, man…

    @feldinho · 5 months ago
  • Woooow. Amazing php trick

    @ruel1983 · 5 months ago