Xavier Martin - Cybersecurity Portfolio

Documenting my journey in cybersecurity and data science

16 January 2026

Why You Don’t Need to Be a Prompt Engineer to Get Value from AI

by Xavier Martin

I wholeheartedly believe that prompt engineering has real value. However, that value is largely concentrated in a relatively small portion of users. In this post, I want to share what I’ve learned over the last six months of using AI daily and building with it intensively.

During this time, I learned how application architecture works behind the scenes. I learned how iteration and specialization affect outcomes. Most importantly, I learned that context is what truly matters when working with AI.


Context, Iteration, and Specialization

Context, iteration, and specialization are the most important concepts to understand when discussing how to effectively use AI.

Because of how AI models work under the hood, there is always a degree of randomness in their output. When you are trying to develop an application or complete a complex task, that randomness can sometimes lead to failures or inconsistent results.

What I found is that iterating on ideas with AI models before implementing them helps surface the parts of a plan that are not fully thought out. AI excels at starting wide and then narrowing down. Early iterations act as a form of pressure testing.
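To make that pressure-testing step concrete, here is a minimal sketch of asking a model to critique a plan before any implementation happens. The client setup, model name, and plan text below are placeholders, not part of any particular project:

```python
from openai import OpenAI

# Placeholder client: point this at whatever API-compatible endpoint you use,
# local or cloud.
client = OpenAI()

plan = """
Add a research agent feature:
1. Accept a user question.
2. Search local documents for relevant passages.
3. Summarize the findings with citations.
"""

# Iteration step: pressure-test the plan before writing any code.
review = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are reviewing a feature plan. List missing steps, "
                       "risky assumptions, and open questions before anything is built.",
        },
        {"role": "user", "content": plan},
    ],
    temperature=0.2,  # a lower temperature keeps the review focused and repeatable
)

print(review.choices[0].message.content)
```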

You can reduce iteration overhead in two ways:

Both approaches work. The key is ensuring the model has enough information to operate within your intended constraints.


Context in Practice: Building JRVS

When I build tools for my local AI inferencing app, JRVS, there are multiple ways to give the model the context it needs to produce consistent features that fit the existing architecture.

One of the simplest and most effective methods I’ve found is this:

That’s it.

With minimal effort on my part, the AI now has enough context to build a new feature that:

Using this method, I shipped a research agent feature in one day. When I originally built JRVS, it took three to four months of development. Once the base application exists, adding a new feature takes roughly a quarter of the time it would take to build from scratch.

This is not because of better prompt engineering. It’s because the context already exists.
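As an illustration of what “the context already exists” can look like in practice, here is a rough sketch of packaging a project’s layout and a few key source files into a single block the model can read before it writes anything. The helper, file names, and prompt wording are illustrative and not taken from JRVS:

```python
from pathlib import Path

def gather_context(repo_root: str, key_files: list[str]) -> str:
    """Bundle a project's file tree and a few key source files into one context block.

    Illustrative helper only; swap in whichever files actually define your
    app's architecture (entry point, routing, data models, and so on).
    """
    root = Path(repo_root)

    # A shallow file listing so the model sees how the project is organized.
    tree = "\n".join(str(p.relative_to(root)) for p in sorted(root.rglob("*.py")))

    # The files the new feature has to fit alongside.
    sources = []
    for name in key_files:
        path = root / name
        if path.exists():
            sources.append(f"### {name}\n{path.read_text()}")

    return (
        "Project layout:\n" + tree + "\n\n"
        + "Key files:\n" + "\n\n".join(sources) + "\n\n"
        + "Task: add the new feature so it follows the same patterns as the code above."
    )

# Hypothetical usage:
# context = gather_context("jrvs/", ["app.py", "agents/base.py"])
```

Once a block like this is prepended to a request, the model is working inside the existing architecture rather than guessing at it.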


You Don’t Need Prompt Engineering, You Need Awareness

Through building JRVS with different AI models, I learned that most models operate in similar ways. The biggest factor affecting output quality is how I interact with them, not how cleverly I engineer prompts.

I don’t need to be a prompt engineer. I do need to be mindful of:

This is where specialization comes in.

By specialization, I mean asking questions like:

These are questions anyone can answer with a small amount of research and experimentation.

For example, if you’re using AI for both writing and coding, the same model is unlikely to be equally good at both unless it’s one of the large cloud-based models with billions of parameters. Even then, cloud providers still encourage specialization through agents and sub-agents, because the concept is universal.
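A toy sketch of what specialization can look like in code: route each kind of task to the model that handles it best. The model names below are placeholders; substitute whatever you actually run locally or in the cloud.

```python
# Map task types to the model best suited for them. Placeholder names only.
TASK_MODELS = {
    "code": "a-code-focused-model",
    "writing": "a-general-writing-model",
}

def pick_model(task_type: str) -> str:
    """Return the model specialized for this task, falling back to the writing model."""
    return TASK_MODELS.get(task_type, TASK_MODELS["writing"])

print(pick_model("code"))     # -> a-code-focused-model
print(pick_model("summary"))  # unknown task type falls back to the writing model
```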


Final Thoughts

If you want better results from AI, start thinking in terms of:

Prompt engineering has its place, but you can go very far without it. Simply being mindful of how you use models, what information you provide, and which model you choose will dramatically improve your results.

The more you help the AI, the more it can help you.
