00:00:00Cowork gives non-developers the powerful capabilities of Claude Code, letting them create
00:00:04real automations, document workflows, pipelines, and research tasks that were previously
00:00:10confined to the terminal.
00:00:11But most of them are still getting low-quality output and complain that using Cowork has
00:00:15been eating up their usage limits.
00:00:17This is not the tool's fault; it happens because no effort is being
00:00:21put into the pre-setup.
00:00:22There is no single right way to do the pre-setup; it is always a series of steps you take to
00:00:26tailor the workflow to what you need.
00:00:28Now I know we have been talking about such best practices in most of our previous videos
00:00:32but we found some new ones that were actually good and had a high impact on our workflows.
00:00:37The first thing you need to do before anything else is create a manifest.md for every
00:00:42folder you work with.
00:00:43This file lives in the root of the folder and describes how the folder is actually
00:00:47laid out.
00:00:48For Claude Code users, this file serves the same purpose as the CLAUDE.md file.
00:00:52If you, like us, work in a folder that contains a lot of nested, structured
00:00:56information, a manifest makes working in it far easier.
00:01:00Claude tends to get lost and pull noise from irrelevant files.
00:01:03That's because without a manifest file, Claude navigates the folder guessing
00:01:07where the right file actually is.
00:01:09It bloats the context unnecessarily, which leads to the wrong file being used as the source
00:01:14and low-quality output.
00:01:15This file states which documents are the source of truth, which subfolders map to which
00:01:20domains, and what to skip entirely.
00:01:21The manifest.md contains 3 tiered levels of files to let Claude know which files to
00:01:27prioritize and which to deprioritize.
00:01:28Tier 1 lists the files your model should always load; these act as the source of
00:01:33truth and can never be skipped.
00:01:36Tier 2 is files you want loaded on demand.
00:01:39These are the kinds of files you don't need right away but might need later.
00:01:43And lastly, the third tier is the archive data: past versions that you
00:01:48don't need but keep for record's sake.
00:01:50That's why we flagged it as "Ignore it unless asked".
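As a rough sketch, a manifest.md with these three tiers might look like this (the folder and file names here are hypothetical, not from our actual setup):

```markdown
# Manifest — Marketing folder

## Tier 1 — Always load (source of truth)
- brand-guidelines.md — current brand rules, never skip
- q3-strategy.md — the active strategy document

## Tier 2 — Load on demand
- campaigns/ — one file per campaign, open only the one being discussed
- analytics-notes.md — open when metrics are asked about

## Tier 3 — Archive (ignore unless asked)
- archive/ — past versions kept for record's sake
```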
00:01:53Now with this setup in place, whenever we asked Cowork a question, it loaded the manifest.md
00:01:59file first, located from it the file containing the needed data, and then responded
00:02:03to our query much faster and more reliably than it did without it.
00:02:07Now aside from the manifest.md, you need to create 3 more context files that define
00:02:12your identity.
00:02:13These files are about-me, brand-voice, and working-style, each explaining how you prefer
00:02:19responses so that Claude knows how to behave.
00:02:20This eliminates generic AI output because Claude actually knows what your working style is.
00:02:25So we placed these files in a claude-context folder inside the Documents folder and made
00:02:29them accessible from everywhere by pointing Claude to them in the instructions.
00:02:33This ensures that Claude responds according to what we need and does not behave in a way
00:02:38we don't like.
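For reference, such a setup could look roughly like this (the folder and file names are a convention we chose for illustration, not anything Cowork requires):

```
Documents/
└── claude-context/
    ├── about-me.md        # who you are, your role, your audience
    ├── brand-voice.md     # tone, vocabulary, phrases to avoid
    └── working-style.md   # how you like responses structured
```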
00:02:39These files are not meant to be created once and then used forever.
00:02:42They need to be refined frequently, and if you see that Claude didn't follow the instructions
00:02:46you gave in your files, iterate with it to find out whether it's a prompt problem or a context
00:02:51problem.
00:02:52In either case, you can add lines to these files to fix things.
00:02:54Aside from these files, you also need to create memory files so that if you're working continuously
00:02:59in a particular folder, Claude retains memory between sessions from them.
00:03:03This works the same way it does in coding: the files act as external memory for all the decisions
00:03:08made and the tasks still to be done.
00:03:10The next thing is something that people often ignore, the global instructions.
00:03:13Many people just leave this blank, but they're actually powerful because these instructions
00:03:17are loaded before anything else, even before your prompt is loaded.
00:03:21They act as a starting point for all of your prompts.
00:03:23For Claude Code, this looks like the instructions in the CLAUDE.md file in the .claude folder
00:03:28of your home directory.
00:03:30In my global instructions, I specifically stated that the manifest.md is the first thing Claude
00:03:35should look at and how to navigate around it.
00:03:37But there are also other practices that make working with Claude manageable.
00:03:41For example, I have Claude ask clarifying questions before doing anything.
00:03:45This way it doesn't blindly do whatever it thinks is right and can course-correct with
00:03:48relevant questions.
00:03:50Another thing to include in your global settings is asking Claude to show a brief plan before
00:03:54taking action.
00:03:55When it lays out a plan first, you can actually see if the direction is right or not.
00:03:59You can add other rules as you like.
00:04:00For example, I added instructions to avoid filler words and not to pad the output, which
00:04:05Claude tends to do normally.
00:04:06I also explicitly stated that if the confidence is low, Claude should ask instead of giving
00:04:11wrong answers confidently.
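Put together, a global instructions file covering the rules above might look something like this (the exact wording is our own, adapt it to your needs):

```markdown
# Global instructions

- Before anything else, read manifest.md in the current folder and
  follow its tiers when deciding which files to open.
- Ask clarifying questions before starting any non-trivial task.
- Show a brief plan and wait for approval before taking action.
- No filler words; do not pad the output.
- If your confidence is low, say so and ask instead of guessing.
```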
00:04:12All of these contribute to a much better experience with Cowork.
00:04:16Even with vague prompts, this setup makes it answer accurately.
00:04:19And as I already mentioned, we use Claude context files to guide the voice and personality, so
00:04:24I also included this in the global instructions so it can reference them whenever needed.
00:04:28You've probably heard this in our videos repeatedly, but you also need to
00:04:32ensure that the context given to your agent is minimal, either by stating it explicitly
00:04:36in the prompt or by controlling it with files like the manifest.md.
00:04:41The less the context window is bloated with noise, the better the agent performs.
00:04:44Now the prompts, setup instructions and templates are available in AI Labs Pro.
00:04:48For those who don't know, it's our recently launched community where you get ready-to-use
00:04:52templates for this video and all previous ones that you can plug directly into your projects.
00:04:57If you've found value in what we do and want to support the channel, this is the best way
00:05:01to do it, the link's in the description.
00:05:03Now another thing you need to do is define the end state of what you want to achieve instead
00:05:07of defining the process.
00:05:09As we always say, if we show the model what the correct output looks like, it tends to
00:05:13perform better and iterate toward that goal.
00:05:16That correct output can be anything, test cases, the final output in the prompt or similar references.
00:05:21Now this principle applies to all agents, be it Cowork, Claude Code, or any other agent.
00:05:26Now when we wanted to perform a reorganization task in our folder, we specifically stated
00:05:31which version of each file should go into which folder and what each folder should contain
00:05:36after the reorganization, instead of vaguely telling it to reorganize the files.
00:05:40We also detailed how it should treat nested folders and explicitly mentioned what it should
00:05:45not touch.
00:05:46This prompt allowed Claude to iterate toward that goal in an orderly way, making the task
00:05:50much easier because now it knew what the correct output looked like.
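A prompt that states the end state for a reorganization like this could look something like the following (the folder and file names are made up for illustration):

```markdown
Reorganize the Reports folder. When you are done:
- final/ contains only the latest version of each report (v3 where it
  exists, otherwise v2)
- drafts/ contains every earlier version, named report-name-vN.md
- Nested folders under clients/ keep their structure; move only the
  files inside them
- Do not touch anything in archive/ or legal/
```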
00:05:54We need to explicitly tell Claude what it should do if it's uncertain about any task.
00:05:58Most commonly we give Claude clear instructions in our prompts on what it needs to do and
00:06:03the best path to follow, but we don't mention how it should handle edge cases.
00:06:06In those situations, Claude tends to guess, and most of the time, it's wrong because
00:06:10it doesn't know our preferred approach.
00:06:12So you need to specifically state what it should do in those situations.
00:06:16We did so by adding to our global instructions that if Claude is uncertain about anything,
00:06:21it should say so in words, and if its confidence is low, it should flag that.
00:06:25In our document folders, under Working Style, we also specified that if it's unsure about
00:06:30something, it should say so explicitly rather than guess or present it as fact.
00:06:34Now with this in place, Claude flags uncertainty up front instead of confidently guessing wrong.
00:06:39But before we move forward, let's have a word with the sponsor, Scrimba.
00:06:42Most of us learn to code by watching a video, getting stuck, and constantly switching between
00:06:46a browser and an editor until our brains melt.
00:06:49Scrimba fixes this.
00:06:50They created Scrim technology, where the video player is actually a live code editor.
00:06:54At any moment, you can hit pause, click directly into the instructor's code, and start editing
00:06:58right there to see what happens.
00:07:00It's like pair programming with experts, and that's what makes the learning actually
00:07:04stick.
00:07:05Scrimba offers the specialized training needed to master AI engineering and full stack development
00:07:09for a high quality portfolio.
00:07:11If you're a student or prepping for interviews, these deep dives help you prepare for technical
00:07:15screens covering data structures and Git.
00:07:18It's the most efficient way to bridge the gap from vibe coding to professional engineering.
00:07:22Stop watching passive tutorials and start gaining real world experience through interactive
00:07:26building today.
00:07:27Get started today with their free courses and when you're ready, use our link in the pinned
00:07:31comment below to save an extra 20% on their pro plans.
00:07:35Instead of using a different session for each task, you need to batch related work into a
00:07:39single session.
00:07:40Now how do you identify which tasks can be grouped into a single session and which cannot?
00:07:45The first clue is that some tasks actually share context among them because the output
00:07:49of one task is fed as input to the next and so on.
00:07:51For example, generating the monthly budget summary report often involves multiple interconnected
00:07:56tasks.
00:07:57In such cases, we need to group similar tasks together so they run faster, cheaper, and with
00:08:01higher quality.
00:08:02This also helps prevent hitting session limits frequently since you're completing more tasks
00:08:06in fewer sessions.
00:08:08When we gave Claude prompts, we explicitly started with a goal, then mentioned the first
00:08:12step it needed to do, then the next, and so on until the goal was achieved.
00:08:16This approach allowed us to complete more tasks much faster.
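In practice, a batched prompt for something like the monthly budget summary follows this goal-then-steps shape (the file names and steps here are illustrative):

```markdown
Goal: produce the monthly budget summary report for October.

Step 1: pull the expense totals from expenses-oct.md.
Step 2: compare them against budget-2024.md and note any overruns.
Step 3: draft the summary using the template in report-template.md.
Step 4: save it as budget-summary-oct.md in reports/.
```

Each step feeds the next, which is exactly the shared-context condition that makes batching worthwhile.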
00:08:19However, if the tasks are not interconnected, batching them together will not only waste
00:08:23tokens but may also result in incorrect output.
00:08:26Now batching of tasks doesn't have to be done only sequentially.
00:08:29If there are tasks that can be done in parallel, you can handle them with parallel
00:08:34agents.
00:08:35Claude can automatically identify the need for parallelism and execute it on its own.
00:08:39But it doesn't hurt to explicitly mention this in your prompt.
00:08:42We also use subagents heavily to make our tasks faster and more convenient.
00:08:46With subagents, a large number of tasks can be completed quickly and their dedicated context
00:08:50windows prevent the main context from being bloated with unnecessary information.
00:08:55However, one thing to watch out for is that subagents consume a lot of tokens, so you need
00:08:59to use them only when it's absolutely necessary.
00:09:02Also, if you are enjoying our content, consider pressing the hype button because it helps us
00:09:06create more content like this and reach out to more people.
00:09:10Cowork has an edge because it lets us schedule tasks that we used to trigger manually by
00:09:14giving prompts repeatedly.
00:09:15Now we can schedule a range of tasks that we perform every day.
00:09:18These scheduled tasks only run while your computer is awake and the Claude desktop app is
00:09:23open, so that's an important consideration.
00:09:24Since we already had an always-on system, with Claude open to research new ideas,
00:09:29track new tool releases, and report to us in our Discord channel, we asked Cowork to schedule
00:09:34another automation.
00:09:35We used the schedule skill and asked Claude to analyze the meeting notes where we discuss
00:09:39new ideas and tool releases, and to write a report based on those notes for the same day
00:09:44in this folder.
00:09:45We also gave it a proper file-naming format and asked it to identify the actionable items.
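Our scheduling prompt was along these lines (paraphrased, with illustrative folder and file names):

```markdown
Use the schedule skill. Every day, analyze the meeting notes in
meeting-notes/ where we discuss new ideas and tool releases, and
write a report for that day in reports/, named
report-YYYY-MM-DD.md. Call out any actionable items separately.
```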
00:09:49In response to this prompt, Claude asked us questions about the frequency and then scheduled
00:09:54the task for us.
00:09:55And now we receive frequent reports from our meeting notes, ideas, and the tools that we
00:09:59can use in our videos, all derived from our discussions.
00:10:02This process can be improved even further by using connectors to link Gmail or Google Drive,
00:10:07allowing us to write emails from the inbox or save files directly to Drive.
00:10:11We can also do this by setting up cron jobs in Claude code and letting it interact with
00:10:15MCP tools and CLIs to do the same job.
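A minimal sketch of the cron-job variant, assuming the `claude` CLI is on your PATH and supports a non-interactive prompt via `-p` (check your install); the paths and prompt are illustrative:

```shell
# Crontab entry: run Claude Code headlessly every day at 9am to write the report.
# Note: % must be escaped as \% inside a crontab line.
0 9 * * * cd ~/Documents/meeting-notes && claude -p "Summarize today's notes into reports/report-$(date +\%F).md and list actionable items" >> ~/claude-cron.log 2>&1
```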
00:10:18To make our workflow much more efficient, we need to use plugins to compound capabilities.
00:10:22Each plugin essentially contains a bundle of skills or commands, along with sub-agent integrations,
00:10:27all targeted toward a specific area and specialized in working within that domain because they
00:10:32include tailored instructions.
00:10:34Claude already has many plugins built for common use cases, but we can also create our own.
00:10:38These plugins are open source and available on GitHub.
00:10:41Now the plugin suite even contains a plugin to create plugins.
00:10:44When we wanted to create a plugin of our own, we simply asked the chat interface to do so
00:10:49and it ran the skill for building the new plugin.
00:10:51Claude asked us a set of questions in a session and then presented a plan.
00:10:55Once we approved the plan, it started building everything.
00:10:58This makes the process even easier because now we don't have to rely on plugins built
00:11:02by others, we can create our own, specifically tailored for our unique use cases.
00:11:07Another thing worth mentioning is using skills.
00:11:09We've already talked in detail about how to build a good skill and walked you through
00:11:13the process of creating new skills, including how to handle the problems we encountered
00:11:17while building our own.
00:11:18You can check out those guides on our channel, they'll help you when building skills of
00:11:22your own.
00:11:23Claude also comes with many built-in skills tailored for commonly used tasks, but we can
00:11:27create custom skills specifically designed for our own unique use cases.
00:11:31Finally, we have to treat Cowork as an employee, not a toy.
00:11:35Cowork is still a research preview with limited guardrails, which means it can modify things
00:11:39that shouldn't be modified if not properly restricted.
00:11:42We need to give it clear boundaries to make the most of it.
00:11:45Sensitive data should be kept in separate folders, exposing only what is actually needed, ensuring
00:11:49that Cowork does not touch private information.
00:11:52We also need to tightly scope its tasks to ensure good performance.
00:11:56For example, adding instructions like "don't delete anything" ensures that it won't
00:12:00delete files and will ask before removing anything if necessary, just as we did when
00:12:04we were prompting it.
00:12:05There is also a risk of prompt injection.
00:12:07If a document or website contains harmful instructions, Cowork might execute them and cause issues.
00:12:12Additionally, Cowork uses more resources than a normal chat, so if you use it excessively,
00:12:17your usage limits will be reached quickly.
00:12:19You need to harness it carefully to make the most out of it.
00:12:22That brings us to the end of this video.
00:12:23If you'd like to support the channel and help us keep making videos like this, you can do
00:12:27so by using the super thanks button below.
00:12:30As always, thank you for watching and I'll see you in the next one.