Practical Tips for Working with AI Models
Posted on 2026-04-06 by Mark Watson
These are my random thoughts as of April 2026 about what AI models are good and bad at, and how we can optimize their use. None of this is groundbreaking - it’s just patterns I’ve noticed while working with these tools day to day.
Note: Some of the tips below (caching, /context, CLAUDE.md) are specific to Claude models and Claude Code. As of this writing, the latest Claude model is Opus 4.6.
The Basics Still Matter
Model choice and effort settings make a difference. This sounds obvious but it’s worth stating. Not every task needs the most expensive model, and tuning the effort/thinking budget for your use case can save real money without hurting quality.
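As a concrete sketch of this idea, here's a hypothetical helper that maps task complexity to the model and thinking budget you'd pass to a Messages API call. The model aliases, budgets, and the tiers themselves are placeholders I made up for illustration, not recommendations:

```python
# Hypothetical helper: pick a model and thinking budget by task complexity.
# Model names and token budgets below are placeholders, not recommendations.

def pick_model_config(complexity: str) -> dict:
    """Return example kwargs for an API call based on how hard the task is."""
    configs = {
        # Cheap, fast model with no extended thinking for simple tasks.
        "simple": {"model": "claude-haiku-latest", "max_tokens": 1024},
        # Mid-tier model with a modest thinking budget.
        "moderate": {
            "model": "claude-sonnet-latest",
            "max_tokens": 4096,
            "thinking": {"type": "enabled", "budget_tokens": 2048},
        },
        # Top model with a large thinking budget for genuinely hard problems.
        "hard": {
            "model": "claude-opus-latest",
            "max_tokens": 8192,
            "thinking": {"type": "enabled", "budget_tokens": 16000},
        },
    }
    return configs[complexity]

simple_config = pick_model_config("simple")
hard_config = pick_model_config("hard")
```

The point is just to make the choice deliberate - a lookup like this forces you to decide per task instead of defaulting everything to the most expensive setting.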
The model knows a lot. Even a short piece of industry jargon or lingo can push the model in a specific direction. You don’t always need to spell everything out - sometimes a well-chosen term does the work of a paragraph.
Managing Context Is Everything
One of the biggest levers you have is managing what’s in the context window and how much of it you’re using.
If you have a lot of tools, skills, or system prompts, that eats into your context window. Keep things small where you can. In Claude Code, you can debug this with /context to see what’s taking up space.
Large projects are harder for AI. The model has to spend tokens just learning the structure of your codebase. It helps to persist structural information (like a CLAUDE.md file) so the model doesn’t have to rediscover it every session via scanning and tool calls. But don’t go overboard - a very large CLAUDE.md itself uses a lot of the context window and doesn’t always help.
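For reference, here's a sketch of what a small CLAUDE.md might contain. The project layout, commands, and conventions below are invented for illustration - the idea is to capture only the structural facts the model would otherwise rediscover every session:

```markdown
# Project overview
Monorepo with two packages: `api/` (backend) and `web/` (frontend).

# Commands
- Run tests: `make test`
- Lint: `make lint`

# Conventions
- New endpoints go in `api/routes/`; add a test per endpoint.
- Shared types live in `api/schemas/`.
```

Keeping it to a screenful like this preserves the benefit without eating the context window.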
Start new sessions for new topics. As context grows, you hit two problems: the model becomes less focused because there’s too much in the window, and it gets more expensive because you’re sending more tokens each turn. Keep sessions focused.
Respond within 5 minutes. If your session sits idle for more than about 5 minutes, the cached prompt prefix expires and you’ll pay full input price for those tokens on the next turn. Quick back-and-forth is cheaper.
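Caching only helps if the stable part of your prompt is actually marked cacheable. A minimal sketch, assuming the Anthropic-style `cache_control` mechanism on a system block (the model name here is a placeholder):

```python
# Sketch: build a Messages API request body that marks a large, stable
# system prompt as cacheable, so repeated turns reuse the cached prefix.

def build_request(system_text: str, user_text: str) -> dict:
    """Return a request body with a cacheable system prefix."""
    return {
        "model": "claude-opus-latest",  # placeholder model alias
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_text,
                # Everything up to and including this block gets cached;
                # the per-turn user message stays outside the cached prefix.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_text}],
    }

req = build_request(
    "Long, stable project instructions go here...",
    "Fix the bug in the retry logic.",
)
```

The design point: put the big, unchanging material (instructions, tool definitions, reference docs) in the cached prefix and keep the per-turn message small.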
Prompt Structure Matters
The model pays more attention to the beginning and end of prompts. Structure your prompts accordingly:
- Topic - what you’re working on
- Context/data - the supporting information
- Instructions/asks - what you want done
This isn’t just theoretical - you’ll often get noticeably better results by putting your actual request at the end rather than burying it in the middle.
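The ordering above is easy to enforce mechanically. A small sketch of a prompt builder that always puts the request last (the example strings are invented):

```python
def build_prompt(topic: str, context: str, ask: str) -> str:
    """Assemble a prompt with the request at the end, where it gets
    the most attention, rather than buried in the middle."""
    return (
        f"Topic: {topic}\n\n"
        f"Context:\n{context}\n\n"
        f"Request: {ask}"
    )

prompt = build_prompt(
    "Payment service refactor",
    "The service currently retries failed charges three times with no backoff.",
    "Suggest how to make the retry logic idempotent.",
)
```

Even if you never use a helper like this, the habit of writing topic, then data, then ask pays off.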
The Model Is Not a Thinker
The model is bad at “thinking.” Even though it’s conversational, it’s not a good entity to bounce ideas off of. It will generally just agree with you. You need to tell it exactly what you want.
The most thinking I’d let it do is pattern matching: “what are similar approaches to this problem?” Humans always make the final call.
Your job is to think, not to sign off on AI output. This means critically evaluating the result, not just clicking “accept.” It’s really easy to fall into the pattern of accepting every change, and then you end up limited by the model’s capability because you’re not adding anything. Fully delegating the thinking might work one day, but it doesn’t work today.
This is true as of early 2026 - models are getting better at reasoning, so this may change over time.
The Model Is Often Wrong
The model presents wrong information in subtle and convincing ways. You can’t fully trust it. This has a few practical implications:
Use AI where you have well-defined goals and objective measurement. If you can’t tell whether the output is correct, you’re in dangerous territory.
Automated testing is not optional. Because the model is wrong some percentage of the time, errors will propagate every time you touch the code. Without tests, you’ll end up with a pile of code that keeps breaking in different ways as your project grows. You’ll also spend all your time manually verifying everything, which defeats the purpose of using AI in the first place.
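To make this concrete, here's a toy example: a small function an AI assistant might have written, plus the test that catches a regression the next time the code is touched. Both the function and its behavior are invented for illustration:

```python
# Hypothetical example: AI-written helper plus the test that guards it.

def normalize_email(raw: str) -> str:
    """Lowercase an email address and strip surrounding whitespace."""
    return raw.strip().lower()

def test_normalize_email():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    # A subtly wrong rewrite (e.g. dropping the strip) fails here
    # instead of shipping.
    assert normalize_email("bob@example.com") == "bob@example.com"

test_normalize_email()
```

The test costs a few lines once; manually re-verifying the behavior after every AI-assisted change costs you forever.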
Things Change
The models change over time. Something that works today may not work tomorrow. Don’t get too attached to any one approach or prompting strategy. Stay flexible and keep experimenting.
Generalization is improving but still limited. Models can often generalize, but they have real limits today. This will likely get better over time, but for now, be specific about what you want.
Popular frameworks have an advantage. Models are trained on existing code and written text, so they perform better with common frameworks that appear more often in the training data. Using React will likely give you better results than an obscure internal JS library. When you have a choice, leaning toward widely-used tools can make AI assistance noticeably more effective.