AI Coding Agents: What's Actually Working (And What Isn't)
Everyone has an AI coding assistant now. GitHub Copilot, Cursor, Continue, Claude Code, and dozens of others. The tools have gotten remarkably capable.
But capability and usefulness aren’t the same thing. After talking with dozens of development teams and experimenting extensively myself, I’ve watched patterns emerge about where AI coding tools genuinely help versus where they’re more trouble than they’re worth.
Where AI Coding Shines
Certain tasks have become genuinely easier with AI assistance:
Boilerplate and scaffolding. Setting up projects, creating file structures, writing configuration files. The tedious stuff that follows patterns. AI handles this well and saves real time.
Code translation. Converting between languages or frameworks. Taking a Python script and making it TypeScript. Adapting code from one library to another. AI is quite good at these mechanical translations.
Test generation. Given existing code, generating reasonable test cases. Not comprehensive coverage, but a good starting point that’s faster than writing from scratch (see the example after this list).
Documentation. Generating docstrings, README files, code comments. AI produces reasonable drafts that humans can refine.
Familiar patterns. Implementing something that’s been done millions of times before - CRUD operations, standard algorithms, common integrations. The patterns are well-represented in training data.
Learning and exploration. Understanding unfamiliar codebases, exploring APIs, learning new technologies. AI is a useful conversation partner for figuring things out.
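To make the test-generation point concrete, here’s the flavor of first draft an assistant typically produces. This is a hedged sketch - the function and tests are hypothetical, not output from any particular tool - and it assumes pytest is installed:

```python
# A small function you might already have in your codebase.
def normalize_email(raw: str) -> str:
    """Lowercase an email address and strip surrounding whitespace."""
    email = raw.strip().lower()
    if "@" not in email:
        raise ValueError(f"not an email address: {raw!r}")
    return email


# The kind of first-draft tests an assistant tends to produce:
# sensible happy-path and error cases, but not exhaustive
# (no unicode handling, no multiple-@ edge cases, and so on).
import pytest

def test_normalize_email_strips_and_lowercases():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

def test_normalize_email_rejects_non_email():
    with pytest.raises(ValueError):
        normalize_email("not-an-email")
```

The draft covers the obvious paths. A human still has to decide which edge cases actually matter.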
Where AI Coding Struggles
Other tasks remain problematic:
Novel architecture. Designing systems that don’t match common patterns. AI tends to produce generic architectures rather than solutions optimized for specific constraints.
Complex debugging. When bugs involve subtle interactions between components, AI suggestions often miss the root cause. It’s better at suggesting surface-level fixes than understanding deep issues.
Performance optimization. AI can suggest obvious optimizations but struggles with the profiling and analysis needed to identify real bottlenecks.
Security-sensitive code. AI can introduce vulnerabilities by following common patterns that happen to be insecure. It doesn’t think about threat models (see the example after this list).
Legacy systems. Codebases with unusual conventions, outdated dependencies, or non-standard patterns confuse AI tools. The context doesn’t match training data.
Cross-cutting changes. Modifications that require understanding relationships across many files are hard for tools that see only a limited slice of the codebase at a time.
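To illustrate the security point: string-built SQL is everywhere in training data, so assistants reproduce it readily, even though it’s injectable. A minimal sketch using Python’s built-in sqlite3 (the table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

def find_user_unsafe(name: str):
    # The pattern assistants often reproduce: SQL built by string
    # interpolation. It works in the happy path, but an input like
    # "x' OR '1'='1" changes the query's meaning (SQL injection).
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver handles escaping, so the
    # input can never be interpreted as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Both functions return the same results on friendly input, which is exactly why the unsafe version survives casual review.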
The Productivity Reality
The productivity claims for AI coding tools are all over the place. I’ve seen everything from “10x more productive” to “net negative after fixing AI mistakes.”
My observation: the productivity gain depends heavily on what kind of coding you’re doing.
High productivity gain: simple web apps, scripts, standard integrations, prototypes. Work that follows established patterns.
Moderate productivity gain: business applications, API development, data processing. Work with some novelty but clear structure.
Minimal or negative gain: systems programming, performance-critical code, security-sensitive applications, novel algorithms. Work that requires deep reasoning or unusual patterns.
The developers I talk to who are most enthusiastic are typically working on applications that follow common patterns. Those who are skeptical are often working on more specialized systems.
Team and Process Effects
Beyond individual productivity, AI coding tools are changing team dynamics:
Code review changes. Reviewing AI-generated code requires different attention than reviewing human-written code. AI makes different kinds of mistakes. Code review practices need to adapt.
Junior developer considerations. Some worry that juniors relying on AI miss learning fundamentals. Others argue AI accelerates learning. The truth is probably context-dependent.
Consistency questions. AI can generate code that works but doesn’t match team conventions. Linters and style guides become more important (see the sketch after this list).
Knowledge distribution. When AI writes code, does the team understand it? There’s a risk of code that works but nobody fully comprehends.
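One practical response to the consistency problem is to make convention checks non-optional in CI rather than relying on reviewers to catch style drift. A minimal sketch, assuming ruff and black are the team’s chosen tools - swap in whatever your project already uses:

```python
# check_style.py - fail CI if code (AI-generated or not) drifts
# from team conventions. Assumes ruff and black are installed.
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],     # lint rules
    ["black", "--check", "."],  # formatting check, without rewriting files
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```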
Choosing and Configuring Tools
The AI coding tool landscape is crowded. Some considerations for selection:
IDE integration matters. Tools that work within your existing workflow cause less friction than those requiring context switches.
Context window is important. Tools that can understand larger portions of your codebase provide better suggestions. The gap between small-context and large-context tools is significant.
Customization capability. Ability to add project-specific context, custom prompts, or fine-tune on your codebase can improve relevance.
Privacy considerations. Where does your code go? For sensitive projects, local or private options may be necessary.
Team features. Shared configurations, usage analytics, and collaboration features matter for team deployment.
Implementation Recommendations
For teams adopting AI coding tools:
Set expectations appropriately. AI helps with some tasks, not all tasks. Understand where it adds value for your specific work.
Invest in evaluation. Have developers actually measure whether suggestions are helping or hindering; don’t assume (a rough measurement sketch follows this list).
Establish code review practices. AI-generated code should face the same (or more rigorous) review as human-written code.
Consider training. Effective use of AI tools is a skill that improves with practice. Invest in helping developers use tools well.
Monitor for issues. Track bugs, security issues, and maintainability problems. Some may be AI-related.
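On the evaluation point: even crude numbers beat anecdotes. A rough sketch of the kind of measurement a team could run, assuming a hypothetical event log - the column names here are made up for illustration, and real analytics exports vary by tool:

```python
# Hypothetical log: one row per AI suggestion, recording whether it
# was accepted and whether the touched code was reverted or patched
# within two weeks. Adapt the columns to what your tooling exports.
import csv

def summarize(path: str) -> None:
    total = accepted = reworked = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if row["accepted"] == "true":
                accepted += 1
                if row["reworked_within_14d"] == "true":
                    reworked += 1
    if total == 0 or accepted == 0:
        print("not enough data")
        return
    print(f"suggestions: {total}")
    print(f"acceptance:  {accepted / total:.0%}")
    print(f"rework rate: {reworked / accepted:.0%} of accepted")

summarize("suggestion_events.csv")  # hypothetical export file
```

Acceptance rate alone is misleading - accepted-then-reworked code is the signal that suggestions are costing more than they save.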
Organizations implementing AI coding tools at scale often benefit from working with AI consultants in Melbourne who have experience with enterprise developer tooling and can help with rollout strategies.
Where This Goes
AI coding tools will continue improving. Context windows will grow. Models will get smarter. Integration will get smoother.
But I don’t think we’re heading toward AI that replaces developers. We’re heading toward AI that changes what developer time is spent on - less typing, more thinking. Less boilerplate, more architecture. Less implementation, more design.
The developers who’ll thrive are those who can effectively direct AI assistance while maintaining deep understanding of systems they’re building. Using AI as a tool without becoming dependent on it.
For now, teams like Team400 are helping organizations figure out how to integrate these tools effectively - balancing productivity gains against code quality and maintainability, and training developers to use AI assistance without losing fundamental skills.
The technology is good and getting better. The challenge is figuring out how to use it well. That’s a human problem, not a technology problem.