00:00:00With the way that AI coding is going, so many things are becoming automated.
00:00:03What's wrong with another thing going out of our hands?
00:00:06LLMs got tools, and just like that, so much of what humans did was automated.
00:00:10With Puppeteer MCP, we saw automated UI testing.
00:00:13Now Inngest just gave us a monitoring layer that lets your coding agents
00:00:17become live debuggers of the code they generate.
00:00:20They're doing this by releasing their MCP for the Inngest dev server,
00:00:23which is basically a local version of their cloud platform.
00:00:26The platform lets you test all the functions you've built inside your agent,
00:00:30and provides a visual interface for everything along with the different events that run.
00:00:35With this, you can directly ask your AI agents like Claude Code or Cursor
00:00:39to do all the automated testing.
00:00:41If Vercel had something like this,
00:00:43their deployment and debugging would only require a single prompt.
00:00:46For those who don't know, Inngest is an open-source workflow orchestration platform
00:00:51that lets you build reliable AI workflows and takes care of many of the problems that come with them.
00:00:55I've been using it to build agentic workflows in our company,
00:00:58and the developer experience is really good.
00:01:00With the MCP server, it's gotten even better.
00:01:03These workflows are built with async functions,
00:01:06and there are some problems with testing and debugging them.
00:01:09Most of them are triggered by external events.
00:01:11They run asynchronously with multiple steps.
00:01:13For those of you who don't know what asynchronous means,
00:01:16these are functions that can pause and wait for something to finish,
00:01:19and then continue without blocking everything else.
00:01:22These functions are part of larger workflows, which makes debugging even harder.
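To make the pause-and-continue idea concrete, here is a minimal plain-Python sketch of a multi-step research workflow. This uses only the standard library's asyncio, not the actual Inngest SDK, and the function and topic names are made up for illustration:

```python
import asyncio

# Plain-Python sketch of an async multi-step workflow (not the Inngest SDK):
# each "step" can await something slow, and other work keeps running meanwhile.

async def fetch_sources(topic: str) -> list:
    await asyncio.sleep(0.01)          # stands in for a slow network call
    return [f"{topic}-source-{i}" for i in range(3)]

async def summarize(sources: list) -> str:
    await asyncio.sleep(0.01)          # stands in for an LLM call
    return f"summary of {len(sources)} sources"

async def research_workflow(topic: str) -> str:
    # The workflow pauses at each await, but the event loop is free
    # to run other workflows during those pauses.
    sources = await fetch_sources(topic)
    return await summarize(sources)

async def main() -> list:
    # Two workflows run concurrently without blocking each other.
    return await asyncio.gather(
        research_workflow("context-engineering"),
        research_workflow("mcp"),
    )

results = asyncio.run(main())
print(results)
```

Because both workflows pause at the same awaits, the total runtime is close to one workflow's runtime rather than the sum of both, which is exactly why these functions are hard to trace by reading logs alone.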
00:01:26This usually leads you to manually trigger these events,
00:01:29or to continuously switch between your code editor and your browser.
00:01:34You might even have to dig through the logs
00:01:36to understand what actually happened with that single function,
00:01:39or why it might have failed or anything else.
00:01:41Or you might even need to recreate complex events,
00:01:44or trigger them yourself to actually test the function.
00:01:47But now with the MCP integration, your AI agent can handle all of this automatically.
00:01:52They also published this Context Engineering in Practice article,
00:01:55where they explained how they actually built an AI research agent.
00:01:58I'll be using this agent to show how the MCP works.
00:02:01They implemented context engineering inside the agent itself,
00:02:04rather than just using it to build the agent,
00:02:06in both its context retrieval phase and its context enrichment phase.
00:02:10They also explain the difference between context pushing and context pulling really well.
00:02:14It's a really interesting article as well, and I might be making a video on this.
00:02:18So if you're interested in that, do comment below.
00:02:20The agent is completely open source.
00:02:22I copied the link, cloned it, installed the dependencies, and initialized Claude Code.
00:02:27I had it analyze the codebase and create the CLAUDE.md.
00:02:30The article also explains why you should use different models for their different strengths,
00:02:35and they've implemented separate LLMs for the different roles in the research agent.
00:02:39They're using Vercel's AI Gateway, which gives you access to 100+ models.
00:02:44I wanted to use a single model.
00:02:46Using the CLAUDE.md, Claude Code updated the codebase and switched it to OpenAI's API.
00:02:51After editing, it just told me which files it had changed.
00:02:54After that, I copied the configuration for Claude Code, created a .mcp.json file,
00:02:59pasted it in, started the Next.js app,
00:03:01and then started the Inngest dev server, which you've already seen.
00:03:04After that, I restarted Claude Code and checked that the MCP was connected.
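For reference, a Claude Code .mcp.json entry looks roughly like the sketch below. The server name, transport type, and URL here are illustrative placeholders, not the verified Inngest values (the dev server's default port is 8288, but copy the exact entry from Inngest's own MCP setup docs):

```json
{
  "mcpServers": {
    "inngest": {
      "type": "http",
      "url": "http://localhost:8288/mcp"
    }
  }
}
```

Once this file sits in the project root, restarting Claude Code picks it up, and you can confirm the connection from inside the session.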
00:03:08Inside the MCP, you have event management,
00:03:11where it can basically trigger functions with test events and get run IDs,
00:03:15along with other tools that let it list and invoke functions as well.
00:03:19You have monitoring tools that let it check run status, plus documentation access,
00:03:23so if something does go wrong with the Inngest functions,
00:03:26I no longer have to dig around manually to find out what's wrong with my agent.
00:03:30These tools can automatically tell Claude what went wrong, and it can fix it for me.
00:03:34It used the send-event tool to trigger the main research function with the question,
00:03:39what is context engineering?
00:03:40After that, it polled the run status,
00:03:42which basically means it asked over and over again whether the run was complete or not.
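Polling a run is a simple loop: ask for the status, and if it is not terminal yet, wait a moment and ask again. Here is a generic sketch of that pattern; the check_run helper and the status strings are hypothetical stand-ins, not the real MCP tool interface:

```python
import time

# Generic polling sketch (hypothetical check_run helper, not the real MCP tool):
# ask repeatedly until the run reports a terminal state, with a timeout.

def poll_run(check_run, run_id, interval=0.01, timeout=1.0):
    """Call check_run(run_id) until it returns a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check_run(run_id)
        if status in ("Completed", "Failed", "Cancelled"):
            return status
        time.sleep(interval)           # wait before asking again
    raise TimeoutError(f"run {run_id} did not finish in {timeout}s")

# Stub that "completes" on the third check, to show the loop in action.
calls = {"n": 0}
def fake_check(run_id):
    calls["n"] += 1
    return "Completed" if calls["n"] >= 3 else "Running"

result = poll_run(fake_check, "run-123")
print(result)
```

The timeout matters: without a deadline, a stuck run would leave the agent polling forever instead of reporting a failure it could then investigate.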
00:03:47Then it tested again and saw that everything was using the correct model name
00:03:51and the workflow was still executing smoothly.
00:03:53In their own words, this represents a fundamental shift
00:03:56in how they're building and debugging serverless functions.
00:04:00Instead of functions being black boxes that the AI model just reads from the outside,
00:04:04AI can now observe the actual execution and provide real-time insight,
00:04:08and hopefully we'll see this happening with other tools as well,
00:04:11where we're giving AI more autonomy.
00:04:13And I'm pretty excited for it.
00:04:14That brings us to the end of this video.
00:04:16If you'd like to support the channel and help us keep making videos like this,
00:04:20you can do so by using the super thanks button below.
00:04:23As always, thank you for watching and I'll see you in the next one.