00:00:00- So the final public version of Vite+,
00:00:03it will probably feel like something fun.
00:00:06- This is Evan You.
00:00:07- Evan You.
00:00:09- Evan You.
00:00:10- Evan You!
00:00:10- I made Vue, I made Vite.
00:00:14Now I run a company called VoidZero.
00:00:16- How is Vite different from Vite+?
00:00:19- Your dev experience will be exactly like
00:00:21what Vite is like today.
00:00:22But if you want to go further,
00:00:24it's there for you all the way.
00:00:26- How is the team and yourself using AI?
00:00:28- We started doing crazy experiments
00:00:30like porting the Angular compiler to Rust.
00:00:32- What are your thoughts on React Server components?
00:00:34- I've been a skeptic from day one.
00:00:36- Usually when I introduce a podcast,
00:00:39I ask the guests to intro themselves.
00:00:40But I think if someone's watching it
00:00:42and they don't know who you are, I'd be super surprised.
00:00:44I think you're just so well known.
00:00:46But everyone should know,
00:00:48or most people should know who you are.
00:00:49- They've at least heard of Vite or Vue, definitely.
00:00:53- Yeah, so I made Vue, I made Vite.
00:00:57Now I run a company called VoidZero
00:00:59where we work on even more open source projects.
00:01:03There's Rolldown, Vitest, OXC.
00:01:07And yeah, so Vue and Vite are probably more popular,
00:01:11but some of the stuff we're working on at VoidZero
00:01:14are also pretty cool.
00:01:15'Cause Rolldown is a Rust-based bundler.
00:01:18OXC is this full Rust toolchain that includes
00:01:22all the way from the parser to the resolver, transformer,
00:01:25minifier, et cetera.
00:01:28And on top of OXC, we have Oxlint and Oxfmt,
00:01:32which are an ESLint-compatible linter
00:01:35and a Prettier-compatible formatter.
00:01:37And there's more stuff we're still working on, but yeah.
00:01:41So we want to mostly talk about the open source for now.
00:01:45- Sure.
00:01:45So because you're working on so many things,
00:01:47how do you split your time?
00:01:50- Well, I don't personally write code
00:01:52for all of these projects.
00:01:53Actually, I write much less code nowadays
00:01:56ever since I started the company.
00:01:58So at the company, there are many engineers
00:02:01who are way better than me at Rust
00:02:03and they're now all AI-pilled.
00:02:05So it's like half them and half Claude Code or Codex
00:02:10running a bunch of Rust code.
00:02:12And I have to make a lot of
00:02:17the DX decisions,
00:02:22like decide what we want to focus on next.
00:02:25And obviously there's also the product side,
00:02:28like how do we turn this into a product
00:02:31that will make money,
00:02:32which is something we're still working on.
00:02:34Yeah, so it's all the things you need to do
00:02:39to run a company nowadays.
00:02:41- Where are the ideas
00:02:43for the new open source projects coming from?
00:02:45Are they largely from sort of internal needs
00:02:48that you realize could help
00:02:49solve other people's problems as well?
00:02:53- So it actually all starts from Vite, right?
00:02:56So when I created Vite, I was just hacking on it.
00:03:01It started as a prototype
00:03:03and then I was like, we need a bundler.
00:03:07We started with this completely unbundled
00:03:10native ESM dev server, right?
00:03:13And that idea worked great for simple code,
00:03:16but then we started pulling in some big dependencies
00:03:18and realized, okay, this is not gonna scale well
00:03:21if you load all the dependencies unbundled.
00:03:24For example, if you load lodash-es,
00:03:26which is like 700 files.
00:03:28So we're like, okay,
00:03:30we need something to bundle the dependencies, right?
00:03:34Back then there was Rollup and esbuild and Webpack.
00:03:41So Webpack does not output ESM, so we cannot use that.
00:03:47So I looked at Rollup and Rollup is quite slow.
00:03:50It's like very slow compared to esbuild, right?
00:03:53It's faster than Webpack, but it's slow compared to esbuild.
00:03:56So we used esbuild to pre-bundle the dependencies,
00:03:59which is blazing fast.
00:04:00And then we serve all the source code as unbundled ESM
00:04:02and that worked great.
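For reference, the pre-bundling step described here can be sketched with esbuild's JS API; the entry path and output directory below are illustrative, not Vite's actual internals:

```javascript
// Sketch of dependency pre-bundling with esbuild (paths are illustrative).
// A many-file dependency like lodash-es is collapsed into a single ESM file,
// while the app's own source keeps being served unbundled by the dev server.
import { build } from 'esbuild';

await build({
  entryPoints: ['node_modules/lodash-es/lodash.js'], // the package's entry file
  bundle: true,                      // inline its ~700 internal modules
  format: 'esm',                     // emit native ESM for the dev server
  outdir: 'node_modules/.vite/deps', // illustrative cache location
});
```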
00:04:04And then when it comes to production,
00:04:06we originally were like, okay,
00:04:08let's just use esbuild to bundle the whole thing
00:04:10for production.
00:04:11And then we realized esbuild has very limited control
00:04:14over how you split chunks.
00:04:17Which is a very common need if you build larger applications
00:04:19because you want to be able to control,
00:04:21say I want to put these library dependencies
00:04:24in a vendor chunk, so it's better cached.
00:04:26I don't want this chunk to ever change, right?
00:04:28So it has the consistent hash.
00:04:32So even if I change my source code,
00:04:33that chunk hash should stay the same.
00:04:35So users always get that chunk cached
00:04:38when they visit the website.
00:04:39So there are a lot of these kinds of optimizations
00:04:41that esbuild just does not allow at all.
00:04:44Like it has one default chunk splitting behavior
00:04:47and that's it.
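The kind of chunk control being described maps to Rollup's manualChunks option, which Vite passes through; a minimal sketch, with example package names:

```javascript
// vite.config.js — sketch of manual chunk splitting via Rollup's options.
// 'react' and 'react-dom' are just example dependencies.
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        // Rarely-changing library code goes into a 'vendor' chunk so its
        // hash stays stable across app-code changes and browsers keep it cached.
        manualChunks: {
          vendor: ['react', 'react-dom'],
        },
      },
    },
  },
});
```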
00:04:49Its plugin system is also less flexible.
00:04:53It's like if one plugin decides
00:04:56that it's going to process this file, that's it.
00:04:59No other plugins can touch it anymore.
00:05:02Whereas we had been using Rollup a lot,
00:05:05so we were familiar with Rollup's plugin system.
00:05:08So what we ended up doing is, okay,
00:05:10like say we're gonna use Rollup for the production bundling,
00:05:13but esbuild for the dev pre-bundling.
00:05:15That was kind of like each thing just did
00:05:20what it's best at in this combo.
00:05:23And in fact, even today Vite 7 is still based
00:05:26on this combination.
00:05:27And that works decently well for a lot of people, right?
00:05:31But obviously there are problems because esbuild
00:05:35is written in Go, Rollup is written in JavaScript,
00:05:40which means the production build
00:05:41is actually still quite slow compared to say
00:05:43fully Rust based bundlers like RSPack.
00:05:47And for the dev server, because esbuild
00:05:52and Rollup have different plugin systems, right?
00:05:55We cannot apply the same set of plugins
00:05:57to the dependencies during dev,
00:05:59but they are applied to the dependencies
00:06:01during production build.
00:06:03And then there are subtle interop behaviors.
00:06:07Like when you have a mixed ESM and CJS graph,
00:06:10esbuild and Rollup handle it slightly differently.
00:06:13There are tree-shaking behavior differences.
00:06:15So while they both do a good job, right?
00:06:18Like we also patch around all the behavior inconsistencies
00:06:22and all that.
00:06:22We made things work, but deep down we know, okay,
00:06:25it's just two different things that we somehow
00:06:29cobbled together, right?
00:06:31So in order to A, make the production build faster
00:06:35and B make the dev and production builds more consistent.
00:06:40The best thing to do is to have one bundler
00:06:42that does both, right?
00:06:44But the problem is esbuild is fast,
00:06:47but it's not super extensible.
00:06:50The code base is all Go.
00:06:54So Evan Wallace, who's the author of esbuild,
00:06:57obviously he's a mad scientist, he's a genius,
00:06:59and he made esbuild extremely fast,
00:07:02but it's not particularly suitable for other people
00:07:05to sort of extend it or fork it
00:07:07or like sort of maintain a layer on top of it.
00:07:10It's not easy to do that, right?
00:07:12And also it's really hard to convince Evan Wallace
00:07:15to do things he doesn't want to do
00:07:17because he doesn't need money and he doesn't care.
00:07:21So we're like, okay, what about Rollup?
00:07:27Can we make Rollup faster, right?
00:07:28So there are some experiments,
00:07:30but fundamentally Rollup is written in JavaScript
00:07:33and JavaScript means it's single-threaded.
00:07:36So we tried things like worker pools, plugins in workers.
00:07:41The maintainer of Rollup tried putting a Rust parser,
00:07:47the SWC parser, into Rollup.
00:07:50That did not improve the performance noticeably
00:07:54because when you have a mixed Rust and JS system,
00:07:57there's always the data passing cost.
00:07:59Like you're passing big chunks of strings back and forth.
00:08:02If you ever need to clone the memory, it gets even slower.
00:08:05So it turns out the raw performance gain from Rust,
00:08:09when you have just the Rust parser,
00:08:12but everything else is in JavaScript,
00:08:13the performance gain is offset by the data passing cost.
00:08:16So it ended up almost having the same performance, right?
00:08:19So we're like, okay, drastically making Rollup faster
00:08:23is not really technically possible.
00:08:26So the only option is to rewrite this thing,
00:08:30rewrite a bundler that is completely designed
00:08:33for Vite essentially, and it needs to be blazing fast, right?
00:08:37So that started the whole quest of thinking, okay,
00:08:40what should we do?
00:08:42So we decided to essentially,
00:08:44originally the idea was to fork Rollup in Rust.
00:08:48Not fork, port, right?
00:08:49We want to port Rollup to Rust.
00:08:51That's why the project is called Rolldown.
00:08:53So it's Rollup, Rolldown.
00:08:54And started out as a direct port,
00:08:58but we still realized code written in JavaScript
00:09:02isn't super easy to port directly over to Rust
00:09:06because JavaScript is very dynamic.
00:09:08It's a dynamic language, right?
00:09:11Even if you use TypeScript,
00:09:13you can still just like mutate things as much as you want.
00:09:16And Rust is very strict about memory.
00:09:19It's strict about life cycles, ownerships, et cetera.
00:09:23So you just have to structure things very differently
00:09:25compared to JavaScript.
00:09:27So it will never be straightforward
00:09:29to just port existing JavaScript code
00:09:31to a language like Rust.
00:09:33It's pretty much a rewrite.
00:09:35And we ended up actually,
00:09:37we also wanted to have the best of both worlds, right?
00:09:42Rollup itself is a pretty lean core.
00:09:45So if you want to turn Rollup
00:09:47into a production ready preset,
00:09:49it's actually pretty involved
00:09:51because you need something like the node-resolve plugin,
00:09:54like resolving node modules is not a built-in feature.
00:09:56It needs to be added via a plugin.
00:09:58You need to add CommonJS plugin to support CommonJS modules
00:10:03because Rollup core is ESM only.
00:10:06And then you have to add just a bunch of plugins
00:10:10like define, inject, replace.
00:10:14A lot of these features are built into esbuild,
00:10:17but they require plugins in Rollup.
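A sketch of that production preset in practice; these are real Rollup plugin packages, shown with minimal example options:

```javascript
// rollup.config.js — the extra plugins a production-ready Rollup setup
// typically needs; esbuild ships this behavior built in.
import nodeResolve from '@rollup/plugin-node-resolve'; // resolve bare imports from node_modules
import commonjs from '@rollup/plugin-commonjs';        // convert CommonJS dependencies to ESM
import replace from '@rollup/plugin-replace';          // compile-time constant replacement

export default {
  input: 'src/main.js',
  output: { dir: 'dist', format: 'esm' },
  plugins: [
    nodeResolve(),
    commonjs(),
    replace({
      preventAssignment: true,
      'process.env.NODE_ENV': JSON.stringify('production'),
    }),
  ],
};
```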
00:10:20And what's worse is most of these plugins in JavaScript land
00:10:25are implemented as an extra full AST parse-transform-codegen pass.
00:10:30So every plugin actually does the full,
00:10:33like take the code from the previous plugin,
00:10:36parse it again, transform it,
00:10:38generate new code, generate new source map.
00:10:41And then you have to merge all the source maps together.
00:10:43That's why JavaScript build systems get slower and slower,
00:10:46because every plugin is just repeating this over and over.
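The shape of such a plugin, sketched with a hypothetical name; a plain string replacement stands in for the full parse-transform-print cycle a real plugin would run:

```javascript
// Sketch of a Rollup-style transform plugin (name and behavior are made up).
// Each plugin in a chain receives the previous plugin's output as a string,
// so plugins that need an AST re-parse it, transform it, re-print the code,
// and emit a fresh source map — the repeated work described above.
function replaceDevFlagPlugin() {
  return {
    name: 'replace-dev-flag', // hypothetical plugin name
    transform(code, id) {
      // A real plugin would parse `code`, walk the AST, and regenerate it;
      // a string replacement stands in for that cycle here.
      const out = code.replace(/__DEV__/g, 'false');
      // Returning map: null tells the bundler no source map was produced.
      return { code: out, map: null };
    },
  };
}

const plugin = replaceDevFlagPlugin();
console.log(plugin.transform('if (__DEV__) { debug(); }', 'main.js').code);
// → if (false) { debug(); }
```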
00:10:49So we're like, okay, we need to have these built in as well.
00:10:53So we ended up having the scope of esbuild,
00:10:56but the API shape of Rollup, and that is Rolldown, right?
00:11:01But in order to build Rolldown, we're like,
00:11:03we need a parser, we need all the transforms, right?
00:11:07We need a minifier, we need a resolver.
00:11:10So how do we get that?
00:11:12And that is where OXC comes in.
00:11:14OXC is the low-level language toolchain
00:11:17that gives you all of that.
00:11:20So the author of OXC was working at ByteDance back then
00:11:25and I had an eye on the project for a long time.
00:11:28So Boshen, he's the author of OXC
00:11:30and he's now our VP of engineering at VoidZero.
00:11:33He didn't join the company immediately when I founded it.
00:11:36I was trying to get him to join, but he was like,
00:11:38I don't know, like,
00:11:39but we started building Rolldown on top of OXC anyway.
00:11:44We're like, well, this is good stuff.
00:11:45Like I believe this is,
00:11:47because I looked at all the available options, right?
00:11:51I want something that's composable.
00:11:54I want something that has each part of the tool chain
00:11:57that is individually usable as crates.
00:12:00I want it to be also extremely fast, right?
00:12:03So we look at OXC versus SWC.
00:12:06OXC's parser is like three times faster
00:12:09than SWC's parser, when they're both written in Rust,
00:12:12because there are a lot of these design decisions
00:12:15and low level technical details
00:12:18that just led to this performance difference.
00:12:20The main thing is Boshen had been obsessing
00:12:24about parser performance and linting performance
00:12:27for the most part before he joined the company.
00:12:30And like, for example,
00:12:32OXC uses something called an arena allocator,
00:12:34which puts all the memory allocations for the AST
00:12:39in a contiguous chunk of memory.
00:12:41It just allocates a big chunk of memory
00:12:43and puts the AST in it directly.
00:12:45So dropping the memory is much faster.
00:12:50It also unlocks some interesting things we did
00:12:53that enables fast JS plugins in OXLint,
00:12:57because the contiguous memory allows us
00:12:59to pass the whole chunk of memory to JavaScript
00:13:01without cloning it, and then deserialize it on the JS side.
00:13:05So there are a lot of benefits,
00:13:06but back then I was looking at the project,
00:13:10I was really impressed,
00:13:10and we decided to build Rolldown on top of it
00:13:13and eventually convinced Boshen to join.
00:13:16And so now the scope of the company essentially becomes
00:13:21like we have this vertical Rust stack
00:13:24that starts all the way from a parser.
00:13:26It covers all the way up to bundling, to Vite,
00:13:29and then we have the linter, formatter, test runner, right?
00:13:33So we have a whole tool chain.
00:13:34And what we're doing next,
00:13:37actually we've been working on it for a while,
00:13:40is to put all these things together into a coherent package
00:13:43so that you don't have to install five separate things
00:13:47just to get the base app working, right?
00:13:50You also don't need to have
00:13:51like six, seven different configuration files.
00:13:55We just like put them all in one config file,
00:13:57and they're guaranteed to work together
00:13:59because they're all based on the same parser,
00:14:02the same transform pipeline, the same resolver.
00:14:05So there will be no surprises.
00:14:07For example, if you use Webpack and Jest,
00:14:10you have to configure their resolution logic separately
00:14:14because they just don't use the same thing.
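The duplication being described looks roughly like this in practice; the "@" alias is a made-up example:

```javascript
// Two copies of the same resolution rule, one per tool (illustrative snippets).

// webpack.config.js — alias '@' to the src directory for the bundler
const path = require('path');
module.exports = {
  resolve: {
    alias: { '@': path.resolve(__dirname, 'src') },
  },
};

// jest.config.js — the same alias, restated in Jest's own syntax,
// and easy to forget to update when the Webpack config changes
module.exports = {
  moduleNameMapper: { '^@/(.*)$': '<rootDir>/src/$1' },
};
```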
00:14:16So yeah, so the vision really is like,
00:14:19okay, let's just like build a vertical stack
00:14:22that just works consistently across the board.
00:14:25Make the dev story, the dev experience as straightforward
00:14:29and fast as possible, right?
00:14:30Performance is like a big thing.
00:14:32I kind of took it as a given,
00:14:34but you've probably seen tweets about how Rolldown
00:14:39is like 20 times faster than Rollup.
00:14:43Oxlint is like 50 to 100 times faster than ESLint.
00:14:47Oxfmt is like 30 to 40 times faster than Prettier.
00:14:51So our goal is to really like make it compatible
00:14:57so you can migrate over without doing major refactors,
00:15:00but you just get these huge performance boosts
00:15:04and now your test loop, your lint check
00:15:08and everything will just be much faster, smoother.
00:15:12And yeah, so that would allow people
00:15:15to build more apps faster.
00:15:17- You know, I love how quickly that sort of escalates
00:15:20from, oh, we need this sort of build tool for Vue, to,
00:15:22oh, well, I want to improve that piece now,
00:15:24and I want to improve that bit now.
00:15:25And yeah, as you said, you do really own
00:15:27sort of the full vertical stack.
00:15:29That's very impressive, and it is very quick.
00:15:32I was telling the guys before we started
00:15:33that at one of my old jobs,
00:15:35we started on a legacy project
00:15:37and it used Webpack and it took like 50 minutes to build.
00:15:40I have no idea what was going on,
00:15:42but like the first thing I said to them was like,
00:15:43we need to switch this to Vite immediately.
00:15:46'Cause like changing CSS,
00:15:47you'd have to wait like two minutes
00:15:49for a rebuild and everything.
00:15:49And I was like, this is not good.
00:15:52We need to use hot module replacement.
00:15:54Like when I save the file, it should make the change.
00:15:57So yeah, Vite really definitely helped with that.
00:15:59And I think the progress and sort of the rate
00:16:02Vite has taken off is super impressive.
00:16:05It's on like 200 million NPM downloads
00:16:07I saw monthly or something crazy.
00:16:09It's-
00:16:10- Yeah, we crossed 50 million weekly just a while ago.
00:16:13- Yeah, that's mind-blowing.
00:16:15- I was thinking, of the 50 million,
00:16:19there's probably a bit of inflation
00:16:21from these vibe-coded apps.
00:16:23That's all just scaffolding, throwaway apps.
00:16:26Still, it shows a lot of people,
00:16:29or probably a lot of AI agents, are using it.
00:16:33- I was gonna say the engineering team
00:16:34at Betaslack are huge fans of Vue.
00:16:36So it's Rails on the backend with Vue on the front end.
00:16:40And they've got some questions which I'll ask
00:16:42throughout the podcast based on where it goes.
00:16:46But you mentioned something about bundling
00:16:48and one of their questions was,
00:16:49because they use import maps in Rails,
00:16:52where do you see the future of bundling?
00:16:54'Cause you don't have to bundle much
00:16:56if you're using import maps.
00:16:57So yeah, where do you see it going?
00:17:00- So I actually have this dedicated page
00:17:02on Rolldown's documentation,
00:17:04where the title of the thing is called,
00:17:07Why Do We Still Need Bundlers?
00:17:10- Have you been asked this a lot by any chance?
00:17:13- Yeah, I mean, DHH is very vocal
00:17:16about no bundle, no build.
00:17:18So I kind of have to pay attention to that.
00:17:20And so import maps work to a certain extent,
00:17:24but unbundled in general is a concept
00:17:29that works only to a certain scale.
00:17:35Like if your app's below a thousand modules,
00:17:39your whole module graph probably loads
00:17:41within a couple hundred milliseconds
00:17:43and that's totally acceptable.
00:17:45And if you know you're working within that constraint,
00:17:48that is great actually.
00:17:50It's lazy by default,
00:17:53which means if you have a big app
00:17:56and each of the page is kind of siloed,
00:17:58you have this sub-module graph,
00:18:00it works decently well.
00:18:01That's why Vite works decently well in development.
00:18:05But it's not a silver bullet
00:18:07because what we noticed with Vite itself
00:18:09and the reason we are working on something
00:18:12called full bundle mode in Rolldown
00:18:15is unbundled mode has its limitations,
00:18:18which is like the bottleneck is really the number of modules.
00:18:21So there are many, many apps where
00:18:25they are loading thousands and thousands of modules
00:18:29during development, right?
00:18:32You could be loading like 3000 modules
00:18:33and that would choke your browser.
00:18:36The bottleneck is at the network level
00:18:38because with native ESM,
00:18:40you're sending an HTTP request for every module you fetch.
00:18:44And if you have a deep import graph,
00:18:46it actually has to fetch the first module back
00:18:49and realize, okay, I need these additional modules
00:18:52and I fetch those.
00:18:53And then I fetch those,
00:18:54like you have to traverse the whole graph eagerly
00:18:57before you can actually evaluate the first importing module.
00:19:00So if you're on a bad network,
00:19:04you have chances of like multiple network round trips
00:19:06before you can actually render the first thing.
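The round-trip waterfall can be sketched as follows; the module graph is made up, and each "wave" of fetches is only discovered after the previous wave's files have arrived and been parsed:

```javascript
// Sketch: why a deep native-ESM import graph costs sequential round trips.
// The browser only learns a module's imports after fetching it, so fetches
// happen in waves; the graph and file names are hypothetical.
const graph = {
  'main.js': ['app.js'],
  'app.js': ['utils.js', 'api.js'],
  'utils.js': [],
  'api.js': ['http.js'],
  'http.js': [],
};

function roundTrips(graph, entry) {
  let wave = [entry];          // modules fetched in the current wave
  const seen = new Set(wave);
  let trips = 0;
  while (wave.length > 0) {
    trips += 1;                // one network round trip per wave
    const next = [];
    for (const mod of wave) {
      for (const dep of graph[mod]) {
        if (!seen.has(dep)) {
          seen.add(dep);
          next.push(dep);      // only discovered after fetching `mod`
        }
      }
    }
    wave = next;
  }
  return trips;
}

console.log(roundTrips(graph, 'main.js')); // → 4 sequential round trips
```

A bundler flattens this to a single fetch per chunk, which is why the difference grows with graph depth.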
00:19:09And then if you have thousands of modules,
00:19:13the situation just gets amplified by that network.
00:19:17Even in local development, in a Vite dev server,
00:19:20if you have like more than 3,000 modules,
00:19:23it can take like one or two seconds to load it locally.
00:19:27So imagine what that would do in production
00:19:29over the network, right?
00:19:31You really don't want that because if you bundle it,
00:19:35it'll probably take like 100 milliseconds, right?
00:19:38So it's like a free optimization on the table
00:19:40that you should always take
00:19:41when you cross a certain threshold.
00:19:45I think the main argument of avoiding bundling
00:19:47and build tools altogether is just people got tired
00:19:52of configuring the tools, right?
00:19:55They probably ran into bugs,
00:19:56they ran into configuration issues they couldn't figure out.
00:20:01And because Webpack made it so complicated,
00:20:04everyone just kind of, you know,
00:20:06when you think about, oh, configuring the bundler,
00:20:08it's not a job for me, I don't wanna do it, right?
00:20:11So I think like people just have this resentment
00:20:14about like, when they hear the build step,
00:20:16they're just like, it's bad, I want to avoid it, right?
00:20:19So in a way, what we want to do with these
00:20:22and the set of tools we're doing is like,
00:20:24we want to make these concepts so straightforward
00:20:28and it's never going to be straightforward
00:20:32for big complex apps, right?
00:20:34But we want to make it simple enough for a fresh app
00:20:37so that you don't need to really think about it too much
00:20:41if your app is, you know, not super complicated, right?
00:20:45So you should be able to just say, okay, spin up this app,
00:20:48it's using Vite and I know things will be great.
00:20:50So in fact, there is a community gem
00:20:55called Vite Ruby, Vite Rails or something
00:20:59that just makes Vite work pretty well in Rails.
00:21:05I think the no build setup has its benefits, right?
00:21:12It makes you feel comfortable because you know,
00:21:14like you can avoid a lot of dependencies
00:21:17and like uncertainties that may make things break.
00:21:20I think there's also some people's like, you know,
00:21:23their loss of trust in the build system is like,
00:21:26there's always going to be something that goes wrong.
00:21:29The build will break when I upgrade the dependency, you know,
00:21:33they can't indeed avoid all of that, which is tempting.
00:21:36But I think at the end of the day,
00:21:37if the tech is good enough and stable enough, right?
00:21:41You always want the best possible UX for your end users
00:21:45and doing full unbundled means you have to stay within
00:21:48a very limited constraint of size of your application.
00:21:52You have to worry about optimization too,
00:21:54because you have to think about like,
00:21:57am I not accidentally importing too much
00:22:01on a certain page I'm visiting, right?
00:22:03How do I cache my modules smartly?
00:22:06I believe even with unbundled Rails,
00:22:08you still need to do something kind of like a pre-process step
00:22:11to stamp the modules so they're cached properly.
00:22:15So inevitably you still need to pay attention
00:22:18to optimization in order to make things work.
00:22:21I would say it definitely works for a, you know,
00:22:24decent number of use cases,
00:22:29but it's not going to cover all use cases.
00:22:31And some people just build really large apps, right?
00:22:35That has a lot of features.
00:22:37So you can't just force them to go unbundled
00:22:39and lock them into this
00:22:42unoptimizable performance situation.
00:22:45- So for those who aren't too familiar with it,
00:22:49how is Vite different from Vite+?
00:22:54And then what do people get out of that?
00:22:57- So Vite+, we are going through a little bit of a
00:23:02mini pivot on what Vite+ really should be at the moment.
00:23:06The idea here is if you're just getting into
00:23:11JavaScript development completely fresh,
00:23:14like you're new to JavaScript development,
00:23:17you have a fresh machine that never installed anything.
00:23:21Like how do you go from like zero to a working app
00:23:25with hot module replacement, all the best practices,
00:23:28linting, formatting, testing, all figured out for you, right?
00:23:33Right now that's, that's a lot to learn.
00:23:36Like the first thing you need to learn is like,
00:23:38what is Node.js?
00:23:39How do I install it?
00:23:40What is a Node version manager?
00:23:42Which package manager should I use?
00:23:44Which build tool should I use?
00:23:45Which linter should I use?
00:23:47You have to answer all of these questions.
00:23:49We want to remove all of those questions.
00:23:50Like we give you this opinionated starting point
00:23:52and it's like, you know,
00:23:54you don't even need to install Node.js, right?
00:23:57So we are, we're experimenting with this new way
00:23:59of working with Vite+, which is like curl,
00:24:03like https://vplus.dev/install piped into bash.
00:24:08And then vp new, and you have a new project,
00:24:15and then you vp dev and you have a,
00:24:17you have a full suite of things set up for you.
00:24:21And you have the linter, you have the formatter,
00:24:25test runner, bundler, everything. You can also use it
00:24:28to scaffold a monorepo.
00:24:31It has library bundling.
00:24:32We do plan to add built-ins for things like lint-staged,
00:24:39managing your changelog,
00:24:41if you're doing a big monorepo of libraries.
00:24:44and then there is also something called VP run
00:24:49that is a runner, similar to pnpm run,
00:24:52but it's a bit more sophisticated,
00:24:57kind of like Nx, where it can figure out
00:24:59the right order to run your tasks
00:25:03and also cache them smartly.
00:25:04This is opt in though.
00:25:07So it's like this whole set of things that, you know,
00:25:11if you don't need these additional things,
00:25:13you can still treat it just like base Vite, right?
00:25:17Your dev experience will be exactly like
00:25:18what Vite is like today.
00:25:20But if you want to go further,
00:25:24go all the way, scale into an enterprise-level
00:25:27production-ready monorepo, it's there for you all the way.
00:25:31And also, because it's built on
00:25:33all of these proven technologies
00:25:35that are being used by people in those situations already.
00:25:39So that is what we hope to bring, right?
00:25:44So we are converting a lot of existing users
00:25:47to our open source offerings,
00:25:48like where people are migrating from Webpack to Vite,
00:25:52they're migrating from ESLint to Oxlint.
00:25:54What we hope Vite+ serves is
00:25:57this sort of, what do I do
00:26:00if I'm just getting into JavaScript?
00:26:02What's the fastest and simplest way to get started?
00:26:05I want to answer that question
00:26:07and also to make it work really, really well with AI.
00:26:11- Is the goal of the company then,
00:26:14I think a lot of people get scared when they hear
00:26:15there's a company behind open source projects
00:26:17because you might start sort of paywalling certain features
00:26:20but is it more the goal that, as you always have,
00:26:23you could always do what Vite+ can do yourself?
00:26:25It's just a lot of configuration
00:26:26and Vite+ is sort of just a convenience
00:26:29and packages it all as one, as you said.
00:26:31So you would never paywall a feature.
00:26:34- Yeah, so we sort of teased the idea
00:26:37behind Vite+'s licensing, right?
00:26:39We said, okay, like if your company
00:26:41is above a certain threshold, you need to pay for it.
00:26:44That thinking has been evolving
00:26:46'cause we've been talking to a lot of interested companies
00:26:50and just trying to see what would be the good balance
00:26:53of getting it into the hands of more people
00:26:56and create value versus allowing us to capture value
00:27:00and be sustainable, right?
00:27:02I think we're probably going to move that threshold way up.
00:27:07So it's only a very small category of companies
00:27:11that would have to pay for it.
00:27:14So the majority of users should be able to just enjoy it
00:27:17for free and also because we have,
00:27:20we are working on some ideas that are more service-like
00:27:25rather than just pay-gated features, right?
00:27:27So a service that pairs with Vite+
00:27:31that sort of improves the code quality
00:27:35and monitors your code quality
00:27:37and helps you, gives you ideas or tips,
00:27:39like helps you improve certain things.
00:27:41Like, because there is a lot of domain knowledge
00:27:44that we can now make scalable through AI agents.
00:27:48So that's the direction we're kind of exploring.
00:27:51- Okay, I was going to wonder as well,
00:27:53with sort of Vite+ making everything convenient,
00:27:56do you think that AI can do that with existing solutions
00:28:00or do you find, have you had sort of experience
00:28:02with just asking AI to piece together the formatter,
00:28:05the linter, the build and everything?
00:28:07Do you think it's going to rely on old tech
00:28:09'cause of its training data and create a bit of a mess?
00:28:13- So we see a lot of AI scaffolded apps
00:28:17still using like Vite 6, for example, right?
00:28:20'Cause like one big thing is when we release a new version,
00:28:26when we ship new features, it takes time for the models
00:28:29to train on those data, right?
00:28:31Models are always going to lag behind the latest news
00:28:34and tech, so that's part of the things we want to do,
00:28:37is like, for example, if we release a new version of Vite+,
00:28:41it's going to give you, so first of all,
00:28:44it will come with its own AGENTS.md and skills.
00:28:47So when you upgrade Vite+, it just upgrades,
00:28:50it'll patch the part relevant to it in your AGENTS.md
00:28:54and link to the skills that are being updated
00:28:58in your npm package.
00:29:00And then there's also,
00:29:05we could give you a prompt that just tells you like,
00:29:08if you want to upgrade from this version to this version,
00:29:10like this prompt should help your agent do it more smoothly.
00:29:13So a lot of that will have to come
00:29:17from the tooling authors, right?
00:29:19Because, one thing we've noticed
00:29:22is we've got Oxlint, Oxfmt and Vitest,
00:29:26they are used in OpenClaw, right?
00:29:29And OpenClaw is a crazy code base.
00:29:31It's like 54,000 lines of JavaScript
00:29:34and moving at crazy pace.
00:29:36And the author is just merging stuff without reading it.
00:29:40And it's like, there are a lot of,
00:29:43a lot of things
00:29:45that just don't make sense in there.
00:29:46So we're looking at some PRs that upgrade Oxlint,
00:29:51or like adopting Oxlint,
00:29:54and it's just hallucinated options that aren't there.
00:29:57And we're like, wait, like we don't have this option,
00:29:59we have to.
00:30:00And then when it's doing type checking,
00:30:04instead of just fixing the type errors,
00:30:06it's like, okay, I'm turning this rule off
00:30:07so the type check will pass.
00:30:09So it's like AI will take shortcuts
00:30:12if you don't give it guardrails, right?
00:30:15And the more important thing is Peter,
00:30:18who's the author of OpenClaw,
00:30:20he is not a TypeScript developer.
00:30:22He just chose TypeScript to do it with.
00:30:25So he's not a tooling expert.
00:30:26He's not experienced in this field.
00:30:29AI helped him do it.
00:30:30But as the authors of the tools that are being used by AI,
00:30:35we notice where it falls short.
00:30:38And it's like, okay, if you keep doing this
00:30:41without anyone actually pointing it out,
00:30:44your code is gonna fall apart in three months.
00:30:46So this is kind of the value
00:30:50we think we can provide in the AI era:
00:30:54how do you make sure you are shipping fast
00:30:58without breaking things?
00:30:59How can you keep shipping features fast with AI?
00:31:03Because the velocity of code shipping
00:31:06is just increasing massively because of agents, right?
00:31:11People can ship features much faster than they could before.
00:31:14But are these features all properly reviewed?
00:31:19When you merge 20 PRs a day,
00:31:22is the code base still
00:31:25properly maintained as it should be?
00:31:26The code health is very volatile.
00:31:30So from time to time you kind of have to
00:31:33do what we do with human development, right?
00:31:36You ship features for a while,
00:31:37then you have to just stop and think,
00:31:38okay, we need to clean things up.
00:31:40We need to pay off the tech debt that accumulated.
00:31:42So with AI agents, we're shipping much faster now.
00:31:45We're also accumulating tech debt much faster, right?
00:31:49So you need to leverage AI to pay down that debt as well.
00:31:53So yeah, I think this is a part
00:31:56that people are overlooking
00:31:57and that needs a solution right now.
00:32:00- Yeah, I had a look around the OpenClaw code base,
00:32:03as you said, and it is a bit chaotic.
00:32:05It definitely is a great example of what happens
00:32:07when you just leave AI unleashed
00:32:09and let it do whatever it wants
00:32:11without any sort of oversight.
00:32:13Yeah, it's been a fun few weeks on the internet
00:32:16with that trending and seeing everything it's been doing.
00:32:19But I was also gonna ask, on the role of AI:
00:32:22do you change the way you build a formatter
00:32:26and a linter so that AI agents can use them better?
00:32:29Does that shape the future,
00:32:31or do you think the way that you've built formatters
00:32:34and linters to be fast has just helped in the age of AI?
00:32:38Obviously, them being fast means AI agents can use them.
00:32:40- That's definitely good thinking,
00:32:45'cause we are starting to think about that problem.
00:32:48The original scope of these linters
00:32:50and formatters is actually massive,
00:32:53'cause we're trying to be compatible
00:32:54with something like ESLint and Prettier,
00:32:56which have been in production for a decade,
00:33:00where people have all these custom rules
00:33:03and legacy use cases,
00:33:06and we're trying to be 100% compatible with them.
00:33:09It's a massive amount of work
00:33:13that we finally achieved, 'cause we just recently hit
00:33:17100% ESLint plugin compatibility.
00:33:21We passed all of ESLint's plugin tests,
00:33:23and we also reached 100% conformance
00:33:25with Prettier in our formatter, right?
00:33:28So these two milestones mean, okay,
00:33:31we can now confidently recommend people
00:33:34to move over to our tools. And then, what's next, right?
00:33:38That's the good question to ask:
00:33:40how should linting and formatting
00:33:44adapt when agents are using them, right?
00:33:49It is definitely a question we are actively working on.
00:33:53Yeah.
00:33:54- Basically, pending an answer on that one, I would say.
00:33:57It's still evolving, yeah.
00:33:59AI is certainly changing a lot of things
00:34:01in the coding world, so it's interesting to see.
00:34:04- Sticking on the topic of Vite+,
00:34:06you showed it off at ViteConf 2025,
00:34:10and you showed a feature that was vite install.
00:34:14So my question is, is that feature still a thing?
00:34:17And how much overlap is Vite+ going to have
00:34:19with something like Bun?
00:34:21- That's a good question, right?
00:34:23So things did change a bit since ViteConf.
00:34:30So the final public version of Vite+ will probably
00:34:33feel like something like Bun in a sense, right?
00:34:38So the onboarding experience, like I said:
00:34:41you have a fresh machine,
00:34:43you're like, I want to start building a web app
00:34:45as fast as I can.
00:34:46You just curl the install script,
00:34:51and you get a global binary called vp.
00:34:56So when you are inside a project, right,
00:35:02if you have a .node-version file
00:35:04and a packageManager field in package.json,
00:35:06which are the common ways we specify
00:35:11the JS environment you're working in, right?
00:35:15So when you say vp run build,
00:35:22or even when you just say vp build or vp lint or whatever,
00:35:26anything that involves running JavaScript,
00:35:28it will just automatically pick the right version of Node
00:35:31and the right package manager and do that thing for you.
00:35:36So actually,
00:35:40even if you're not using Vite+ in that project,
00:35:44as long as you're using
00:35:45these conventional environment sources,
00:35:48you can use Vite+ as a replacement for nvm.
00:35:52You can use it as a replacement for Corepack.
00:35:55You just stop thinking about versions.
00:35:59The idea is, when you run your workflow,
00:36:02you also stop doing npm run.
00:36:05You use vp run.
00:36:06So when you do a vp run,
00:36:10it'll use the right Node version,
00:36:14it'll use the right package manager version,
00:36:16and do the right things.
00:36:19So what does install mean?
00:36:24Well, we don't have a package manager of our own, first of all.
00:36:27So it's more like a Corepack equivalent.
00:36:30I don't know if you've used the package called ni,
00:36:34from Anthony Fu.
00:36:36ni essentially means, when you run ni,
00:36:41it'll automatically infer the right package manager to use,
00:36:45whether you're doing run or install or uninstall,
00:36:48whatever, right?
00:36:49So vite install is essentially that,
00:36:51plus the package manager versioning part.
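The ni-style inference Evan describes can be sketched like this. It is a simplified illustration of the idea, not ni's source: infer which package manager a project uses from the lockfiles present, so one command can fan out to npm, yarn, pnpm, or bun. (The real ni also consults the packageManager field, among other signals.)

```typescript
// Minimal sketch of ni-style package manager detection from lockfiles.
// Simplified for illustration; not the actual ni implementation.
type PM = "npm" | "yarn" | "pnpm" | "bun";

// Checked in priority order; the first matching lockfile wins.
const LOCKFILES: Array<[string, PM]> = [
  ["bun.lockb", "bun"],
  ["pnpm-lock.yaml", "pnpm"],
  ["yarn.lock", "yarn"],
  ["package-lock.json", "npm"],
];

function detectPackageManager(filesInProject: string[]): PM {
  for (const [lockfile, pm] of LOCKFILES) {
    if (filesInProject.includes(lockfile)) return pm;
  }
  return "npm"; // sensible default when no lockfile exists yet
}

console.log(detectPackageManager(["pnpm-lock.yaml", "package.json"])); // "pnpm"
```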
00:36:56That's what Corepack does, right?
00:36:58So even if you don't have anything else installed,
00:37:01you go into a project, and the
00:37:03package.json file has packageManager: pnpm
00:37:07at a certain version.
00:37:08You run vite install, and it'll automatically check
00:37:12whether that version of pnpm is installed.
00:37:14If it isn't, it'll just install it
00:37:16and run the install process with pnpm install.
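The Corepack-style check described here boils down to parsing the packageManager field, which package.json specifies as "name@version" (optionally with an integrity-hash suffix), and comparing it against what's on disk. The following is an illustrative sketch; needsInstall and its Map-based registry are invented for the example, not a real API.

```typescript
// Sketch of a Corepack-style pinned-package-manager check.
// parsePackageManager handles the "name@version" (optionally
// "+hash"-suffixed) format of package.json's packageManager field.
interface Pinned {
  name: string;
  version: string;
}

function parsePackageManager(field: string): Pinned {
  const at = field.lastIndexOf("@");
  if (at <= 0) throw new Error(`invalid packageManager field: ${field}`);
  // Strip an optional integrity suffix like "+sha512.abc..."
  const version = field.slice(at + 1).split("+")[0];
  return { name: field.slice(0, at), version };
}

// Hypothetical helper: decide whether the pinned version must be
// installed before running e.g. `pnpm install`.
function needsInstall(pinned: Pinned, installed: Map<string, string>): boolean {
  return installed.get(pinned.name) !== pinned.version;
}

const pin = parsePackageManager("pnpm@9.12.0");
console.log(pin.name, pin.version); // pnpm 9.12.0
console.log(needsInstall(pin, new Map([["pnpm", "8.6.0"]]))); // true
```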
00:37:20So the idea is we want to be
00:37:25solving more than just linting and formatting.
00:37:31It's all the common things you'd need
00:37:34in your JS workflow, right?
00:37:36We want to eliminate these common problems,
00:37:40so a beginner doesn't even need to think about them.
00:37:43So the first time you scaffold a project,
00:37:45we will use the latest LTS of Node, and we recommend pnpm.
00:37:50And it will also write that information into your project.
00:37:53So the next time you come into this project,
00:37:55it's always using the right combination of things.
00:37:59- Why do you recommend pnpm, out of curiosity?
00:38:02- It just has the right balance of feature set
00:38:06and correctness and disk efficiency and speed,
00:38:10and good workspace support, things like catalogs.
00:38:15When we
00:38:19compared all the workspace features,
00:38:21we just found pnpm still provides the best balance.
00:38:25And we know Bun is ridiculously fast,
00:38:29but pnpm has been fast enough for a lot of us.
00:38:33Also, we don't really rule out the possibility
00:38:35of supporting Bun in our runtime
00:38:37and package manager version management, right?
00:38:40So you can say, I want to use Bun,
00:38:42and we'll just run things with Bun.
00:38:44- So with Vite 8, I think you said it was supposed to release
00:38:49after Lunar New Year, right?
00:38:52- Yes.
00:38:54- So what are a few things in the beta
00:38:58that you're focusing on before you actually drop it?
00:39:00- It's really all stability.
00:39:04So, ecosystem CI:
00:39:06we have a really massive ecosystem CI system
00:39:10where we run Vite 8 in downstream projects that depend on it.
00:39:15So one recent thing we did is
00:39:17SvelteKit: all its tests now pass on Vite 8.
00:39:21So that's big for us,
00:39:24because stability is really the most important thing.
00:39:27If you think about it,
00:39:28we are swapping out two bundlers
00:39:30with a new one we built from scratch.
00:39:33So we are swapping the engines of a flying plane
00:39:36and hoping it will just operate smoothly afterwards.
00:39:40We just can never be too careful about it.
00:39:43- I was going to ask earlier, actually, about the choice of Rust.
00:39:46Was that just that people on your team
00:39:48had knowledge of Rust?
00:39:49Because I see a lot of people in the TypeScript world
00:39:51prefer Go, because I think it's a closer sort of port over,
00:39:55and that's why TypeScript is even going to Go
00:39:57for their compiler.
00:39:58- Yeah, so I think the choice of the TypeScript team
00:40:02porting to Go is, like I said,
00:40:04because Go is a much easier language to port TypeScript to, right?
00:40:09The mental model is much, much more similar.
00:40:13I think one of the big things that was kind of a blocker for us
00:40:17is that Go has kind of subpar WebAssembly support.
00:40:21It produces huge WebAssembly binaries,
00:40:26and its WebAssembly performance
00:40:29is not that great compared to Rust.
00:40:32And then with Rust, the thing is,
00:40:35yes, a lot has to do with available talent,
00:40:39people who are already passionate
00:40:41and invested in the ecosystem.
00:40:44For example, when we looked around
00:40:46for foundations to build on,
00:40:48in Go there isn't a parser or tool chain
00:40:54that is as well implemented and as composable.
00:40:59OXC is basically built to be built on top of, right?
00:41:04It's these low-level utilities.
00:41:08We don't see that kind of equivalent in the Go world.
00:41:11Yes, esbuild has its own parser and everything,
00:41:14but it's a big monolithic system.
00:41:18You can't just take its parser out and somehow use it.
00:41:23And also, all the features in esbuild,
00:41:25like define, inject, transforms, minification:
00:41:30in order to get the best performance,
00:41:33they're implemented in three AST passes,
00:41:36which means in the same AST pass
00:41:37you may have mixed concerns,
00:41:39where we're doing some transformation here,
00:41:41we're doing some inject feature here,
00:41:42we're doing some minification feature here.
00:41:45That makes it not ideal for an extensible system
00:41:51where, for example,
00:41:54we want to be able to ship more transforms
00:41:57and allow people to toggle the transforms on and off.
00:42:01We want to allow people to write their own transforms.
00:42:03We want to have a linter system
00:42:07that is cleanly layered,
00:42:09so we can also have more people work on it at the same time.
00:42:13So it just has a lot to do with what we have available.
00:42:16Rust is also really performant.
00:42:22It is indeed a bit tricky to write good transforms in Rust.
00:42:26We had to spend quite a bit of time
00:42:28figuring out a good architecture
00:42:31for the visitor and transformer pipeline,
00:42:34because of the memory ownership issues.
00:42:37When you traverse really deep down in a tree
00:42:39and you need to change the parent tree,
00:42:42it gets really tricky, right?
00:42:44But we figured something out.
00:42:45It's much easier in Go, but when we think about it,
00:42:49we do want our things to be able to compile
00:42:51to WebAssembly and run in the browser.
00:42:53So Rolldown is able to run in the browser
00:42:57and run decently fast.
00:42:59I mean, esbuild is able to run in the browser too,
00:43:01but Rust's WebAssembly is just better.
00:43:05- The other question I was gonna ask,
00:43:07based on the team building in Rust and stuff:
00:43:09how are the team and yourself using AI?
00:43:12You mentioned earlier that a lot of people
00:43:14on the team are using AI.
00:43:16Do you find that AI is good at the work you're doing?
00:43:19'Cause I feel like for web dev and building a website,
00:43:21there are so many examples of that on GitHub,
00:43:23so AI is trained quite well.
00:43:25But I feel like what you're doing is at a lower level,
00:43:28or a higher level of technicality, at least.
00:43:30So is AI helpful there, or are you still doing
00:43:32a lot of manual coding?
00:43:34- It is definitely helpful.
00:43:38I think the thing is, the field is changing so fast.
00:43:41Just last year at this time, I was a skeptic.
00:43:45I'm like, well, I tried it, it just doesn't work for me,
00:43:49because the work I'm doing is too low level, right?
00:43:52And then Boshen, who is the lead of OXC,
00:43:59he's probably the most deeply AI-pilled person
00:44:03in the company right now.
00:44:04So he started going crazy with the experiments.
00:44:07I think last month there was a week where
00:44:11he shipped like 60 PRs with AI,
00:44:13just running agents in parallel.
00:44:16And then we started doing crazy experiments,
00:44:19like porting the Angular compiler to Rust,
00:44:24where we just threw it at the AI to see if it's gonna work.
00:44:27And somehow it's working.
00:44:29So maybe we'll have something along that line in the future.
00:44:33But yeah, so I think we are constantly being updated,
00:44:39and our perception of the reach of AI's capability
00:44:43is just being refreshed every few months,
00:44:46with new models coming out,
00:44:48with better harnesses being implemented,
00:44:51and with new practices, like: use plan mode,
00:44:55write your AGENTS.md,
00:45:00use these tips and tricks.
00:45:03When you apply these little things, you realize, okay,
00:45:06it is indeed getting better and better.
00:45:08Still, adoption and usage vary by person, right?
00:45:13We encourage everyone at the company
00:45:18to use it to the extent they see fit.
00:45:22We give them a monthly credit,
00:45:24so they can use Claude Max if they want.
00:45:27So some are really happy about it
00:45:33and actually pretty vocal about it,
00:45:36and obviously they are landing really good PRs, right?
00:45:40I think it really depends on how well you can use it.
00:45:45Part of it is just the raw model capability.
00:45:49Part of it is the harness you're using.
00:45:52But I think the harness layer
00:45:54is kind of like JavaScript frameworks back in the day.
00:45:57Everyone is just doing their own version of it,
00:46:00and they're doing more or less the same thing.
00:46:03Maybe this one has a few different tricks up its sleeve,
00:46:08but a few months later, everyone is doing it, right?
00:46:11So it's just gonna be this
00:46:13very competitive field, and models are the same.
00:46:17Every few months,
00:46:18you see Sonnet 5 is about to drop,
00:46:21I think DeepSeek is about to drop a new model.
00:46:23It's just gonna get better and better.
00:46:26And I think it's pretty clear that AI is extremely capable
00:46:32with the right steering,
00:46:34but the steering part is still extremely important.
00:46:37You can't expect someone who has zero knowledge of Rust
00:46:41to be able to work in the OXC code base, even with AI.
00:46:45You probably wouldn't even know how to prompt it, right?
00:46:50But someone who is already proficient in OXC,
00:46:54as a Rust engineer himself, with AI
00:46:58becomes much more productive
00:47:00and can ship more features faster, okay?
00:47:03So I think that is my general take on it.
00:47:08I would say I'm probably the one where
00:47:13the amount of code I produce with AI is very, very little,
00:47:16very minimal compared to other engineers in the company.
00:47:20I use it more for research, and just as a sounding board.
00:47:25- You know, it's certainly a weird world
00:47:27that coding is going into,
00:47:29and I'm finding it hard to stay on top of learning
00:47:32how many sub-agents you're supposed to use,
00:47:34parallel agents, what markdown file you're supposed to have
00:47:37in your repo now.
00:47:38Yeah, it's changing all the time.
00:47:40So it's curious to see where we land on that
00:47:43in the future.
00:47:43- If we were to go back to Vite just for a bit:
00:47:47in Vite 7, you released React Server Components support.
00:47:52And in my opinion, React Server Components
00:47:54haven't been the slam dunk that the team thought they would be.
00:47:57I mean, there are some meta-frameworks that haven't adopted it,
00:48:00like TanStack,
00:48:01and I think Remix have even gone in their own completely different direction.
00:48:04So what are your thoughts on React Server Components,
00:48:07and why do you think it hasn't landed as well as it should have?
00:48:10- Yeah, I have always been really conservative about it,
00:48:14or, I've been a skeptic from day one.
00:48:17That's why we never considered implementing
00:48:20something similar in Vue.
00:48:23I think the fundamental question here is, what exact problem
00:48:28is it trying to solve?
00:48:30And I think it was the way it was being pushed, right?
00:48:35In trying to hype people up,
00:48:38it was sort of advertised as the silver bullet
00:48:40that's gonna be the best thing ever,
00:48:42that's gonna make all your websites faster.
00:48:44Turns out, when it lands, people realize, okay,
00:48:48maybe I shouldn't use it in all cases.
00:48:51It only applies to a certain type of case
00:48:54where it will benefit you.
00:48:56In other cases, it's just a bunch of trade-offs.
00:48:59Because for the parts that live on the server,
00:49:04all the interactions now actually have to go
00:49:06through a network round trip.
00:49:08And that's actually
00:49:10pretty bad for an offline-first experience, in my opinion.
00:49:14And you can't really fully escape the hydration costs,
00:49:20in my opinion.
00:49:21You are offsetting a lot of client-side hydration cost
00:49:26by offloading it to the server, right?
00:49:29Now you're bearing the cost of
00:49:31doing more work on the server for every request.
00:49:33So people have these conspiracy theories
00:49:38that Vercel is pushing it to sell more compute.
00:49:44I don't really think that is what it is, right?
00:49:47But it is also true that using RSC
00:49:51means you have more server load.
00:49:52You're running more things on the server,
00:49:54you're using more compute minutes.
00:49:56And ultimately, for the other benefits, like, say,
00:49:59putting a part of your app on the server
00:50:01to save on bundle size,
00:50:02there are a lot of different ways to solve that problem
00:50:06which don't necessarily involve
00:50:07you having to run a Node.js server, right?
00:50:10And a lot of this is my personal opinion, right?
00:50:14In front end, we tend to say, okay,
00:50:17architecture really matters.
00:50:19Do you want to use an SPA?
00:50:21Do you need server-side rendering, right?
00:50:24RSC is even more specific.
00:50:27"Do you need RSC?" is a very important question
00:50:31and a very hard question to answer.
00:50:33And when you say yes,
00:50:35you also need to be aware of the cost you're paying,
00:50:37because I think one of the reasons
00:50:39it didn't get adopted really well is, first of all,
00:50:43it's extremely complicated.
00:50:46The thing itself is hard to explain.
00:50:48How it works is hard to explain.
00:50:50We had to go really deep,
00:50:52because it actually requires build-tool-level orchestration
00:50:56to make the whole system work, right?
00:50:58So very few people actually understand
00:51:02how raw RSC works.
00:51:04Most people know about it through the implementation
00:51:07in Next.js, because raw RSC is something an average user
00:51:11wouldn't be able to set up by themselves, right?
00:51:14You'd have to really understand how it all fits together
00:51:17to be able to put something up from scratch
00:51:20with plain React and Vite or Webpack, right?
00:51:23It's not for day-to-day development, right?
00:51:27So you want to use a framework.
00:51:29That's what they're built for.
00:51:30But in order to use RSC in a framework,
00:51:33the framework has to make design decisions
00:51:36about how to present RSC
00:51:39in a way that gives you decent DX.
00:51:42And I think Next.js didn't nail it, I would say, right?
00:51:47Like the "use server", "use client" confusion,
00:51:51the mixed module graph, where when you make something "use server",
00:51:55some things just stop working.
00:51:58You're limited to only using certain things,
00:52:01and then you have to import a dependency,
00:52:03and the dependency doesn't work under "use server",
00:52:06so now you have to use "use client" again, right?
00:52:08This kind of back and forth,
00:52:10all of these little paper cuts
00:52:12in DX, makes people think, okay,
00:52:15in order to get the claimed benefits,
00:52:20now I also have to put up with this DX annoyance
00:52:24all the time, forever into the future.
00:52:27Is it really worth it, right?
00:52:28So I think it's fair that people are asking,
00:52:35do I really want to use it?
00:52:37And it's the same even for framework authors, right?
00:52:40Vercel had this really close relationship with the React team,
00:52:42so they could collaborate and iterate on it fast.
00:52:45But for third parties, and I wouldn't even say third parties,
00:52:49because technically Vercel is a third party, right?
00:52:52But for other frameworks like Remix and TanStack,
00:52:57it's not that straightforward for them
00:53:02to work on this problem, because a lot of these API iterations
00:53:06from the React team are prioritizing Next.js.
00:53:08I'm not really criticizing them for it,
00:53:13because, okay, Vercel is their design partner.
00:53:15They want to partner with Vercel
00:53:17to get this feature polished and shipped,
00:53:19which makes sense, right?
00:53:21But because of that,
00:53:25Next.js essentially was the only real way
00:53:29for people to use RSC,
00:53:31and that experience hasn't been super great.
00:53:33So I think that's why it just didn't pan out as it could have.
00:53:38And also, on the general premise:
00:53:41even in an ideal world where RSC has perfect DX,
00:53:46I still don't think it's going to be a silver bullet
00:53:49in all cases, right?
00:53:50So you need to be fully educated
00:53:52on where it makes sense and where it doesn't.
00:53:54There are just too many trade-offs.
00:53:57- I assume there's been no push
00:53:59to implement something similar in Vue.
00:54:01Because, obviously, linking it back to Vercel:
00:54:03they bought Nuxt Labs,
00:54:05the company behind the meta-framework on top of Vue
00:54:08that pieces it all together.
00:54:09How's that relationship been between Nuxt and Vue
00:54:13now that Vercel own them?
00:54:14- Honestly, it didn't change much.
00:54:18I think Vercel has been pretty hands-off since the acquisition,
00:54:21so the Nuxt team is just happy to be able
00:54:24to keep doing what they do.
00:54:25There are probably some efforts to, say,
00:54:30make Nuxt work better on Vercel,
00:54:32make it a first-class citizen.
00:54:34But I think the thing is, Vercel is aware
00:54:38of the image it has in parts of the community,
00:54:43and they would be really careful not to damage it further.
00:54:47So after acquiring Nuxt, right,
00:54:50the last thing they'd want to do
00:54:52is to force Nuxt to do things people don't like.
00:54:54- Unfortunately, Evan had to leave early
00:54:56to take an important call,
00:54:58but we really appreciate his time
00:55:00and all his insightful opinions on all the questions we asked.
00:55:04If you have any future guests you'd love on the podcast,
00:55:06please let us know in the comments.
00:55:08And if you have any feedback in general,
00:55:10also let us know too.
00:55:11We'd love to hear it.
00:55:12Find us anywhere you listen to podcasts,
00:55:15like Spotify or Apple Podcasts.
00:55:17And until next time, it's a bye from me.
00:55:20- Bye from me.
00:55:21- Bye from me.
00:55:21- It's a pleasure, thank you all.
00:55:23- Thank you very much for joining us.