Transcript

00:00:00 With the way that AI coding is going, so many things are becoming automated.
00:00:03 What's wrong with another thing going out of our hands?
00:00:06 LLMs got tools, and just like that, so much of what humans did was automated.
00:00:10 With Puppeteer MCP, we saw automated UI testing.
00:00:13 Now Inngest has given us a monitoring layer that lets your coding agents
00:00:17 become live debuggers of the code they generate.
00:00:20 They're doing this by releasing an MCP server for the Inngest dev server,
00:00:23 which is basically a local version of their cloud platform.
00:00:26 The platform lets you test all the functions you've built inside your agent,
00:00:30 and provides a visual interface for everything, along with the different events that run.
00:00:35 With this, you can directly ask AI agents like Claude Code or Cursor
00:00:39 to do all the automated testing.
00:00:41 If Vercel had something like this,
00:00:43 their deployment and debugging would only require a single prompt.
00:00:46 For those who don't know, Inngest is an open-source workflow orchestration platform
00:00:51 that lets you build reliable AI workflows and takes care of many of the problems that come with them.
00:00:55 I've been using it to build agentic workflows at our company,
00:00:58 and the developer experience is really good.
00:01:00 With the MCP server, it's gotten even better.
00:01:03 These workflows are built with async functions,
00:01:06 and testing and debugging them comes with some problems.
00:01:09 Most of them are triggered by external events.
00:01:11 They run asynchronously with multiple steps.
00:01:13 For those of you who don't know what asynchronous means,
00:01:16 these are functions that can pause and wait for something to finish,
00:01:19 and then continue without blocking everything else.
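To make that concrete, here is a generic TypeScript sketch of such a function (an illustration only, not Inngest's actual SDK): execution pauses at `await` while a simulated external step finishes, and the runtime stays free to run other work in the meantime.

```typescript
// Illustrative async function: execution pauses at `await` until the
// simulated external step (think: API call, webhook, queued event) resolves,
// without blocking anything else running on the event loop.
async function researchStep(topic: string): Promise<string> {
  const raw = await new Promise<string>((resolve) =>
    setTimeout(() => resolve(`results for "${topic}"`), 10)
  );
  return `summary of ${raw}`;
}
```

While `researchStep` is awaiting, other functions keep running, which is exactly why multi-step workflows built from functions like this are hard to trace by hand.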
00:01:22 These functions are part of larger workflows, which makes debugging even harder.
00:01:26 This usually means manually triggering these events,
00:01:29 or continuously switching between your code editor and your browser.
00:01:34 You might even have to dig through the logs
00:01:36 to understand what actually happened with a single function,
00:01:39 or why it failed.
00:01:41 Or you might need to recreate complex events,
00:01:44 or trigger them yourself to actually test the function.
00:01:47 But now with the MCP integration, your AI agent can handle all of this automatically.
00:01:52 Inngest also published a "context engineering in practice" article,
00:01:55 where they explained how they built an AI research agent.
00:01:58 I'll be using this agent to show how the MCP works.
00:02:01 They applied context engineering inside the agent itself,
00:02:04 rather than just using it to build the agent,
00:02:06 both in its context-retrieval phase and its context-enrichment phase.
00:02:10 They also explain the difference between context pushing and context pulling really well.
00:02:14 It's a really interesting article, and I might make a video on it.
00:02:18 So if you're interested in that, do comment below.
00:02:20 The agent is completely open source.
00:02:22 I copied the link, cloned the repo, installed the dependencies, and initialized Claude Code.
00:02:27 I had it analyze the codebase and create the CLAUDE.md.
00:02:30 The article also explains why you should use different models for their different strengths,
00:02:35 and they've implemented the research agent with separate LLMs for its different roles.
00:02:39 They're using Vercel's AI Gateway, which gives you access to 100+ models.
00:02:44 I wanted to use a single model.
00:02:46 Using the CLAUDE.md, Claude Code updated the codebase and switched it to use OpenAI's API.
00:02:51 After editing, it told me which files it had changed.
00:02:54 After that, I copied the MCP configuration for Claude Code, created a .mcp.json file,
00:02:59 pasted it in, started the Next.js app,
00:03:01 and then started the Inngest dev server, which you've already seen.
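For reference, a project-level `.mcp.json` for Claude Code has the following general shape. The server name and command below are placeholders based on the generic Claude Code MCP config format, not on Inngest's docs, so use the actual entry Inngest provides:

```json
{
  "mcpServers": {
    "inngest": {
      "command": "npx",
      "args": ["-y", "<inngest-mcp-server-package>"]
    }
  }
}
```

The dev server itself runs separately, typically via `npx inngest-cli@latest dev`, alongside the Next.js app.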
00:03:04 After that, I restarted Claude Code and checked that the MCP was connected.
00:03:08 The MCP gives you event management,
00:03:11 which can trigger functions with test events and return run IDs,
00:03:15 along with other tools that let it list and invoke functions.
00:03:19 You also get monitoring tools for run status, plus documentation access,
00:03:23 so if something does go wrong with the Inngest functions,
00:03:26 I no longer have to dig around manually to find out what's wrong with my agent.
00:03:30 These tools can automatically tell Claude what went wrong, and it can fix it for me.
00:03:34 It used the send-event tool to trigger the main research function with the question
00:03:39 "What is context engineering?"
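A send-event call like that boils down to a named JSON payload; the event name and field below are illustrative placeholders, not the agent's actual schema:

```json
{
  "name": "research/run.requested",
  "data": { "question": "What is context engineering?" }
}
```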
00:03:40 After that, it polled the run status,
00:03:42 which means it asked over and over again whether the run was complete.
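That polling behavior can be sketched as a small loop; `getRunStatus` here is a hypothetical stand-in for the MCP monitoring tool, not its real interface:

```typescript
type RunStatus = "Running" | "Completed" | "Failed";

// Ask repeatedly until the run reports a terminal status, waiting a short
// interval between checks and giving up after maxAttempts.
async function pollRun(
  getRunStatus: () => Promise<RunStatus>,
  intervalMs = 50,
  maxAttempts = 20
): Promise<RunStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await getRunStatus();
    if (status !== "Running") return status; // Completed or Failed
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("run did not finish within the polling budget");
}
```

The agent does the same thing through the MCP: keep asking until the run reaches a terminal state, then inspect the result.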
00:03:47 Then it tested again and saw that all the agents were using the correct model name
00:03:51 and the workflow was still executing nicely.
00:03:53 In their own words, this represents a fundamental shift
00:03:56 in how they're building and debugging serverless functions.
00:04:00 Instead of functions being black boxes that the AI model just reads from the outside,
00:04:04 AI can now observe the actual execution and get real-time insight,
00:04:08 and hopefully we'll see this happening with other tools as well,
00:04:11 where we're giving AI more autonomy.
00:04:13 And I'm pretty excited for it.
00:04:14 That brings us to the end of this video.
00:04:16 If you'd like to support the channel and help us keep making videos like this,
00:04:20 you can do so by using the Super Thanks button below.
00:04:23 As always, thank you for watching, and I'll see you in the next one.

Key Takeaway

Inngest's new MCP server for its dev server lets AI coding agents act as autonomous live debuggers, streamlining the testing and debugging of complex asynchronous workflows.

Highlights

Inngest has released an MCP server for its dev server, enabling AI coding agents to act as live debuggers for the code they generate.

The dev server provides a visual interface for testing functions, and the MCP lets AI agents like Claude Code or Cursor perform automated testing and simplify debugging.

Inngest is an open-source workflow orchestration platform for building reliable AI workflows, and the MCP server significantly enhances the developer experience.

The MCP addresses the inherent difficulties of testing and debugging asynchronous, event-driven workflows by automating manual tasks such as event triggering and log analysis.

AI agents can now automatically identify and fix issues in Inngest functions, reducing the need for developers to investigate problems manually.

This integration represents a fundamental shift in debugging serverless functions, granting AI models real-time insight into execution and greater autonomy.

Timeline

Introduction to AI Coding Automation & Inngest's New Layer

Discusses the increasing automation in AI coding, the role of LLMs with tools, and introduces Inngest's new monitoring layer that turns AI coding agents into live debuggers.

Inngest Dev Server & Automated Testing

Explains Inngest's release of an MCP server for their local dev server, which provides a visual interface for testing functions and allows AI agents like Claude Code to perform automated testing.

Understanding Inngest and Asynchronous Workflows

Defines Inngest as an open-source workflow orchestration platform and highlights the challenges of testing and debugging asynchronous, multi-step workflows triggered by external events.

Traditional Debugging vs. MCP Solution

Contrasts the difficulties of manual debugging (triggering events, log digging, recreating events) with how Inngest's MCP integration allows AI agents to handle these tasks automatically.

Context Engineering & Agent Setup

Mentions Inngest's article on context engineering in AI research agents, and details the process of setting up the agent, including cloning, installing dependencies, and configuring LLMs.

Configuring and Utilizing the MCP Server

Describes the steps to configure the MCP server, including creating the .mcp.json file, starting the dev server, and outlines the MCP's capabilities for event management and monitoring.

MCP in Action & Future Implications for AI Debugging

Demonstrates how an AI agent uses MCP tools to query functions, monitor run status, and verifies workflow execution, concluding that this represents a fundamental shift towards AI autonomy in debugging serverless functions.
