00:00:00It's incredible how powerful generative AI has become.
00:00:02New tools are constantly emerging,
00:00:04and among them, Google has really stepped up its game since Gemini 3 came out.
00:00:07Because of this, you've probably seen people on X one-shotting these amazing landing pages
00:00:12and saying that the model is a game changer.
00:00:14But they're lying.
00:00:14The truth is, they need to use multiple tools to build these sites.
00:00:18And Google has been going crazy with their experimental tools,
00:00:20which are powered by Gemini 3 and Nano Banana.
00:00:23But Google doesn't offer a way to use all of them together.
00:00:26So you're going to need a complete workflow that combines all these tools.
00:00:29We've got a lot to talk about today,
00:00:31as I wasn't really expecting them to work so well together.
00:00:34The animations that you see on those sites are just a series of frames.
00:00:37But if you use AI tools to generate these frames, you don't get consistent results.
00:00:41Google solves this with an experimental tool called Whisk,
00:00:44which is designed specifically for asset generation.
00:00:47It is powered by Nano Banana for image generation.
00:00:49I often use Whisk to generate sequences of images for hero sections.
00:00:53I provide prompts in simple words,
00:00:55update the details step by step for each sequence,
00:00:58and then use the resulting images in the hero sections.
00:01:00Using this whole process, I created this landing page
00:01:03and was able to implement this cool animated effect with the camera.
00:01:07To start, we're going to generate just the first and last frames of a sequence,
00:01:10and then create an animation using those two keyframes for the hero section.
00:01:14But if you just start prompting it without any structure,
00:01:17the keyframes you generate won't maintain structural continuity.
00:01:20For this purpose, you have to clearly define the subject,
00:01:23the visual intent, and the level of detail you want in the image.
00:01:26Whisk uses a subject, scene, and style framework to guide image generation,
00:01:30allowing you to combine them into a single new visual.
00:01:33That's why I included all the details on what kind of camera I wanted,
00:01:36how I wanted the reflections on the lens to look,
00:01:39the resolution, and the depth of the image,
00:01:41and it created the visual exactly as the prompt outlined.
00:01:44The generated image will not always meet your expectations.
00:01:47In that case, you just have to describe the change in simple words,
00:01:50and it incorporates the changes into a new visual.
00:01:53What I like most about Whisk is that you don't have to write long,
00:01:56detailed prompts to get great results.
00:01:58This is because it uses Gemini 3 as a middle layer,
00:02:01which writes the detailed prompt on top of your simple words,
00:02:04leading to solid visuals.
00:02:05This raises the question of why I chose Whisk.
00:02:08Nano Banana requires extensive text prompts,
00:02:10and Google's Mixboard is designed for mood boards,
00:02:13so neither is optimized for fast, controlled image remixing.
00:02:16Whisk's core strength lies in combining reference images,
00:02:19allowing you to generate images with better control.
00:02:21Once I had my first frame,
00:02:23I wanted my last frame to be a side angle of the camera,
00:02:26with the lens taken apart to show the components.
00:02:28So I prompted it to generate a technical cutaway,
00:02:31specifying how it should layer the internal lenses,
00:02:33and how the background should appear.
00:02:35It didn't get it right on the first try.
00:02:36It broke down the internal circuitry too,
00:02:38which I didn't want it to show.
00:02:40As I said earlier, you just mention the change you need to make.
00:02:42So I instructed it to only show the lens layering,
00:02:45after which it successfully generated the image without showing the internal circuitry.
00:02:49Now, Whisk also supports animations using the Veo model.
00:02:52But Whisk's animations focus on animating one image,
00:02:55rather than connecting multiple keyframes together.
00:02:58That's why I used another tool called Google Flow.
00:03:00Flow uses Gemini for planning the story,
00:03:03Google's image models to design consistent characters,
00:03:05and Veo to create video content.
00:03:07I had already created my starting and ending frames for the camera animation,
00:03:10and now I wanted to create a transition between them.
00:03:13So I used the frames-to-video option and provided my frames.
00:03:16In order to ensure a smooth transition,
00:03:18you need to mention in the prompt how the starting frame leads to the ending,
00:03:21because it provides the model with a logical path.
00:03:24Your prompt should include how you want the animation to flow,
00:03:26how the starting frame should transition into the ending frame, and the animation style,
00:03:30as these details ensure the motion is intentional.
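Following these guidelines, a transition prompt for the camera sequence might look something like this (the wording is illustrative, not the exact prompt from the project):

```text
Start on the front view of the camera. Rotate it slowly clockwise into the
side profile while the lens assembly separates into its layered elements,
ending exactly on the provided last frame. Smooth, studio-lit product
animation; keep the background static throughout.
```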
00:03:33Sometimes these models tend to make mistakes with complex tasks,
00:03:36so it didn't get my animation right the first time.
00:03:38The generated video got both the spin direction
00:03:41and the ending frame completely wrong, making the transition awkward.
00:03:44The fix was simply re-prompting with the necessary changes,
00:03:49such as asking it to reverse the camera's spin direction for a smoother transition,
00:03:54after which it produced the version I wanted, which I downloaded for use in my project.
00:03:58Now, video generation is not unlimited on the free tier,
00:04:01because video generation models are costly.
00:04:04Flow provides only 180 monthly credits, depending on the region.
00:04:08Since each video generation with Veo 3.1 uses 20 credits, you get up to 9 videos per month.
00:04:14The videos generated by Flow are in MP4 format, which can't be used directly in hero sections
00:04:20because MP4s are harder to map to scroll animations,
00:04:22so I converted them to WebP using a free online converter.
00:04:26I uploaded the MP4 video, set the conversion settings to produce the best-quality animated WebP,
00:04:31and downloaded the resulting WebP for use in my project.
00:04:35Choosing WebP is important because it makes scroll interactions easier to map:
00:04:40on the web, the format is treated as an image, so it doesn't require a media player
00:04:44wrapper like video formats do. WebP files are also more compact and perform better,
00:04:49making them ideal for short-form animated content.
00:04:52I added the converted WebP file to the public directory of my newly initialized Next.js project
00:04:57because this is where all the assets reside in the project.
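With the WebP in place, the scroll mapping itself boils down to turning the scroll offset into a 0-1 progress value and deriving style values from it. Below is a minimal TypeScript sketch of that idea; the function names and the 0.2 zoom factor are my own illustration, not code from the project.

```typescript
// Clamp a value into [min, max].
function clamp(value: number, min: number, max: number): number {
  return Math.min(max, Math.max(min, value));
}

// Convert the raw scroll offset into a 0-1 progress value over the hero's height.
function scrollProgress(scrollY: number, heroHeight: number): number {
  return clamp(scrollY / heroHeight, 0, 1);
}

// Derive CSS-ready values from progress: fade the hero out and zoom in slightly.
function heroStyle(progress: number): { opacity: number; scale: number } {
  return {
    opacity: 1 - progress,     // fully visible at the top, gone at the bottom
    scale: 1 + 0.2 * progress, // subtle zoom as the user scrolls
  };
}
```

In the hero component, a scroll listener would feed `window.scrollY` into `scrollProgress` and apply the result to the `<img>` that renders the WebP via its inline style.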
00:05:00Now, once we had our animation ready, I wanted to add it to the hero section of my landing page.
00:05:05I generally prompt Claude in XML format because their models are optimized for understanding XML,
00:05:10letting them parse structured intent more reliably and reason over each section independently.
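As a rough illustration, a prompt in that XML style might be shaped like this (the tag names follow the structure described here; the details are hypothetical, not the exact prompt used):

```xml
<context>
  Camera product landing page built with Next.js. The hero animation is an
  animated WebP in the public directory. Goal: play the animation as the
  user scrolls through the hero section.
</context>
<requirements>
  Pin the hero while the animation plays; keep the layout responsive.
</requirements>
<animation_behavior>
  The animation should appear to advance as the user scrolls from the top
  of the page toward the next section.
</animation_behavior>
<constraints>No video player wrapper; render the WebP as an image.</constraints>
<output>A single hero component plus any helper hooks.</output>
```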
00:05:15The prompt I gave Claude for adding the animation included, in the context tag,
00:05:20what I wanted to build, where the animation assets are located,
00:05:24how the scroll-through animation should work, and our goal. I included all the requirements in the requirement tags,
00:05:28described how the animation should behave in animation behavior tags and specified
00:05:33the implementation details, constraints and required output directly in the prompt within
00:05:37their respective tags. When I gave Claude this prompt, it automatically implemented the animation
00:05:42exactly as I wanted. Even though our hero section looked good, the rest of the components looked like
00:05:47the generic websites AI tends to generate. This is because we were expecting high-quality
00:05:52results from vanilla CSS instead of relying on existing, polished component libraries.
00:05:57There are multiple UI libraries, each with its own specialized style and design themes,
00:06:02but you have to choose the library that suits your project style best. For my camera landing page,
00:06:06I was going for an Apple-style UI, and the closest library to that idea is HeroUI. It's built on top
00:06:12of Tailwind CSS and relies on Framer Motion to bring its components to life across the website.
00:06:17The library supports most commonly used frameworks like Next.js, Vite and Laravel. So using it with my
00:06:23existing Next.js setup was easy. There are two methods of installation. Either you install it
00:06:28project wide with all components available via the install command or you install individual components
00:06:33as needed, which is what I did with Claude. I prompted Claude to replace the existing components
00:06:37with HeroUI components, and the website was significantly improved, giving the site a much more
00:06:42professional look and feel. Users decide whether to stay on a landing page within a few seconds by
00:06:47looking at how engaging the UI is. Motion helps guide the visitor's attention to the features we
00:06:52want them to notice, ensuring higher user retention. Implementing animations from scratch using vanilla
00:06:58JavaScript can be challenging, so I rely on existing libraries to simplify the process. For this project,
00:07:03I used Motion.dev, a free and open source library that offers a wide range of ready-to-use
00:07:08animations. Normally, animations would require manually syncing DOM updates with animation
00:07:13timings. However, Motion.dev abstracts this logic by handling DOM updates internally. It automatically
00:07:18updates components as the user scrolls, so animations play smoothly without the need to
00:07:23manually track scroll positions. This library uses motion components instead of standard ones. These
00:07:28components have start and end states defined in the props, and the library handles the transitional
00:07:34logic between them automatically. For our landing page, I prompted Claude to implement parallax and
00:07:39scroll animations using the library. As a result, the user experience improved by guiding attention
00:07:44toward the key sections of the page. Describing how the sections of a page should look is a tedious
00:07:49process. It is better to get inspiration from existing galleries where people post their
00:07:53creations. I used 21st.dev, a platform that offers inspiration for a variety of UI components built
00:07:59by multiple designers. The components are built on top of popular UI libraries like Aceternity UI,
00:08:05Prism UI, Coconut UI, Magic UI, and many others. While looking for ideas, I came across this call
00:08:11to action section that would look great on my landing page. The part I like best about 21st.dev
00:08:17is that for any component I want to use, I can just copy a prompt specifically tailored for
00:08:22AI coding agents. I don't need to guide Claude myself. The prompt is extensively structured,
00:08:26including project requirements such as ShadCN and TypeScript support. It provides code with
00:08:31instructions for the coding agent to paste directly into the components directory. It includes all
00:08:36necessary dependency code and the paths where it should be added, and it lists the required NPM
00:08:41packages to install. It also includes an implementation guide for your AI agent, detailing
00:08:46all the steps needed to integrate the component directly into your application and how the agent
00:08:50should tailor it to the specific project's needs. I gave this prompt to Claude and it integrated the
00:08:55exact same call to action section for which I had copied the prompt. It also added motion from the
00:09:00motion library we had installed, even though I did not explicitly mention adding motion anywhere. I
00:09:05also got the footer from 21st.dev, even though the demo footer included icons for GitHub and Twitter.
00:09:11When I gave Claude the copied prompt, it omitted the GitHub icon since it wasn't relevant to our
00:09:16project. It customized the footer to include only the icons related to the camera product site,
00:09:21creating a footer that perfectly fit our landing page. That brings us to the end of this video.
00:09:25If you'd like to support the channel and help us keep making videos like this, you can do so by
00:09:30using the super thanks button below. As always, thank you for watching and I'll see you in the
00:09:34next one.