Understanding AI-assisted coding workflows

Not all AI-assisted coding workflows are created equal, and the variance across foundation models is substantial. With different tool integrations and prompting strategies available for each model, there's a lot of confusion about how best to use them, and whether the benefits are real at all or just hype. Having used AI-assisted coding tools for a while now, I've found that they fall into three main categories of workflow. The last category, agentic workflows, is both powerful and productive, and I feel strongly that it will shape the future of software development.
The most basic form of AI-assisted coding is line completion, which was the main feature of the original GitHub Copilot when I used it to write a Tetris clone in November 2021. While it could suggest more than a single line, the more you asked of it, the more likely it was to fail. The extension isn't necessarily a must-have, but I do notice when it's missing on a fresh machine and get annoyed at having to type out full lines with my own fingers. As a productivity booster for churning out lines of code I would have otherwise written on my own, it shines, but prompting it for more than the immediate line tends to confuse it. Think of it as classic autocomplete or IntelliSense on steroids: it uses the current file's context to suggest more intelligent completions.
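To make "uses the current file's context" concrete, here is a deliberately toy sketch: a completer that suggests whichever word most often follows the current one elsewhere in the same file. Real tools run an LLM over the file instead of counting word pairs, so treat this purely as an illustration of context-driven suggestion, not how Copilot works.

```python
from collections import Counter, defaultdict

def build_model(file_text):
    """Count which word follows which in the current file's text."""
    follows = defaultdict(Counter)
    words = file_text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def suggest(follows, current_word):
    """Suggest the most frequent next word seen after `current_word`."""
    options = follows.get(current_word)
    return options.most_common(1)[0][0] if options else None

# The "current file" being edited (hypothetical contents):
source = "import os\nimport os\nimport json\n"
model = build_model(source)
# Typing "import" now suggests "os", since that pairing dominates the file.
```

Even this crude version captures the key idea: the suggestion changes as the file changes, which is what separates context-aware completion from a static snippet library.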
The next category is chat. While there's a GitHub Copilot Chat extension, you can achieve a similar result directly in a foundation model provider's chat interface, such as OpenAI's ChatGPT or Anthropic's Claude. In a chat session, you describe what you want and receive a suggested block of code, which you can either copy and paste or insert via an integrated IDE extension. The code might not work at first, due to type errors, variable naming issues, or incorrect dependency references, and you can ask the model to adjust it, in plain language or by sharing error messages, until you're satisfied.
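That describe-run-share-the-error loop can be expressed as a few lines of code. In this sketch, `ask_model` is a stand-in for a real chat API call (it's scripted to return a buggy suggestion first, then a fixed one once it "sees" the error), so the only real part is the loop structure itself: try the suggestion, and if it fails, feed the error message back.

```python
def ask_model(prompt, history):
    """Stand-in for a chat-model call; returns a code suggestion.

    A real implementation would send `prompt` plus `history` to a
    provider's API. Here it is scripted: the first reply has a typo
    (`nam`), and it produces the fix once an error is reported back.
    """
    if any("NameError" in msg for msg in history):
        return "def greet(name):\n    return f'Hello, {name}!'"
    return "def greet(name):\n    return f'Hello, {nam}!'"

def refine(prompt, max_rounds=3):
    """Chat loop: ask, test the code, report errors, repeat."""
    history = []
    for _ in range(max_rounds):
        code = ask_model(prompt, history)
        try:
            namespace = {}
            exec(code, namespace)        # "paste in" the suggestion
            namespace["greet"]("world")  # smoke-test it
            return code                  # satisfied: it works
        except Exception as e:
            # Share the error message back, as you would in chat.
            history.append(f"{type(e).__name__}: {e}")
    raise RuntimeError("model never produced working code")

working = refine("Write a greet(name) function")
```

The important detail is everything the human does here by hand: running the code, copying the error, deciding when to stop. Automating exactly those steps is what the next category is about.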
Finally, the last category is what's referred to as an agentic workflow, which expands on chat in two distinct ways: tool use, which gives the LLM greater access to your code and ways to modify it, and automation, which removes the manual steps that chat still requires. In an agentic workflow, you might ask for a change and the LLM could:
- query your existing code for imports and read other files it deems relevant, like type declarations
- apply code changes directly either automatically (via permissions) or with approval steps
- after code changes are applied, automatically detect and resolve type issues, linting errors and test failures
- search the internet for more information
- use whatever other tools you've integrated
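The steps above reduce to a simple loop: the model chooses a tool, the harness executes it and appends the result, and the cycle repeats until the model decides it's done. The sketch below uses a scripted stub in place of a real LLM and an in-memory dict in place of a real project, so the tool names (`read_file`, `write_file`, `run_checks`) are illustrative, not any particular product's API.

```python
TOOLS = {}

def tool(fn):
    """Register a function as a tool the model may call."""
    TOOLS[fn.__name__] = fn
    return fn

FILES = {"app.py": "print(greeting)"}  # toy project: one broken file

@tool
def read_file(path):
    return FILES.get(path, "<missing>")

@tool
def write_file(path, content):
    FILES[path] = content
    return "ok"

@tool
def run_checks():
    # Stand-in for type-checking, linting, and tests.
    return "pass" if "greeting =" in FILES["app.py"] else "NameError: greeting"

def scripted_model(observations):
    """Stub policy: read the file, fix it, verify, then stop.

    A real agent would be an LLM using a tool-calling API, choosing
    actions based on the observations so far.
    """
    step = len(observations)
    if step == 0:
        return ("read_file", {"path": "app.py"})
    if step == 1:
        return ("write_file", {"path": "app.py",
                               "content": "greeting = 'hi'\nprint(greeting)"})
    if step == 2:
        return ("run_checks", {})
    return None  # model signals it is finished

def agent_loop(model):
    """Execute whatever tool the model picks, feed the result back."""
    observations = []
    while (action := model(observations)) is not None:
        name, args = action
        result = TOOLS[name](**args)
        observations.append((name, result))
    return observations

trace = agent_loop(scripted_model)
```

Note that nothing in `agent_loop` is specific to these three tools: web search, permission prompts before `write_file`, or any other integration slots into the same loop, which is why tool use and automation compose so naturally.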
Each of these workflows has its own learning curve, and there are a number of approaches and tools for achieving each. It takes practice and effort, but the payoff is productivity gains larger than you might expect. If you have yet to investigate the latest improvements in AI-assisted coding, I recommend you do so, as these tools will shape the future of software development.