Blog

Research notes, platform updates, and perspectives on the future of quantitative trading.

5 April 2026

Lessons Learnt from Working with Anthropic's Claude Code in Software Development

Schematic of the end-to-end automated quantitative trading system built with Claude Code.

With Neucore.ai, I have spent the last few weeks intensively developing an end-to-end, fully automated quantitative trading system with Anthropic's Claude Code — an AI agent for coding. Having now completed a full development cycle first-hand, here are some thoughts I would like to share.

Read full article

1. There is an existential crisis for the software industry

The level of sophistication, prowess, and speed exhibited in software development by AI agents is simply revolutionary. By working with Claude, I have been able to compress a development task that would have required a team of 3–4 people over 2+ years into just one person in 5 weeks. This compression in time and human resources does not compromise quality or rigor — rather, it reinforces the DevOps principles of rapid iteration and continuous improvement, delivering both speed and precision.

This also means a small team with complementary skills can now aspire to take on scaled projects that were once the exclusive domain of mega corporations. The implications are profound and far-reaching for how organisations are structured and resourced. Think back to the predominant practice before the advent of AI agents: I believe we have passed the point of no return.

2. What is working well

The ability of an AI agent to interpret natural-language prompts and translate them into itemised development tasks rivals that of human experts. But it does not stop there. The agent can rapidly scan dependencies and gaps, then recommend how to close them effectively from both internal and external sources. For writing code, it generates alternative approaches and development paths for consideration. For testing, it writes code to evaluate performance and verify consistency: that is, the AI agent writes code to test the code it has written. Debugging can be interactive and personalised, with the agent guiding the user through step-by-step testing directly in terminals or GUIs. Taken together, this makes the AI agent highly productive and persistent throughout the software development lifecycle.
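The self-testing loop described above can be illustrated with a minimal sketch. The function and its accompanying test are hypothetical examples of the kind of output an agent produces, not code from the actual trading system:

```python
# Hypothetical example: a function an agent might generate, followed by
# the test code the agent writes to verify its own implementation.

def simple_moving_average(prices, window):
    """Return the simple moving average over a sliding window."""
    if window <= 0 or window > len(prices):
        raise ValueError("window must be between 1 and len(prices)")
    return [
        sum(prices[i - window:i]) / window
        for i in range(window, len(prices) + 1)
    ]

def test_simple_moving_average():
    # Agent-written check against hand-computed expected values.
    assert simple_moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
    assert simple_moving_average([5, 5, 5], 3) == [5.0]

test_simple_moving_average()
```

The point is the pattern, not the function: generated code arrives paired with generated verification, which is what makes the fast iteration cycles possible.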

3. What is not working too well

The AI agent knows what is known to the public. This is one of its greatest strengths, but also the origin of its limitations. When I am already at the boundary of my knowledge envelope and trying to push it further, I often find the agent lacks an edge: the ability to cut through the current envelope and think genuinely outside of it. This became most evident when refactoring the existing codebase and seeking suggestions to improve the fidelity of the software. The agent would repeatedly fall back on methods already defined in the code, or on approaches widely known in the public domain — even after sustained prompt engineering — until I found a way forward independently.

The first-pass success rate is also on the low end. For a task of reasonable complexity, the initial version of the code rarely arrives bug-free, though this can be partially mitigated by the agent's self-diagnosis and fast iteration cycles. More problematic is when the agent gets stuck in a loop, abandoning and retrying a method it has already attempted without conviction. This is the moment where human intervention becomes essential: to reframe the problem, break the cycle, and point the agent toward a viable path forward.

4. What I think is a good practice

At this stage, the strength of AI agents lies primarily in their breadth of general knowledge, logical reasoning, and speed in building out codebases. They remain prone to intermittent errors and hallucinations. Intensive work on vector database design and embedding pipelines can also be costly and time-consuming. This means AI agents excel as builders and enabling assistants, but should not be the ones making business-critical decisions. For a critical system, the architecture should rely on conventional databases for data persistence and processing, and deterministic code for logic execution, leaving AI agents for the foundational build and semantic interpretation.
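The division of labour argued for here can be sketched in a few lines. All names and thresholds below are hypothetical, chosen only to illustrate the shape: business-critical logic stays in plain, deterministic, testable code, while semantic interpretation sits at the boundary where an AI agent could plausibly operate:

```python
# Sketch of the "deterministic core, AI at the boundary" pattern.
# All function names and limits are illustrative assumptions.

def check_order_risk(notional: float, max_notional: float = 1_000_000.0) -> bool:
    """Business-critical risk check: deterministic, auditable code.
    No AI agent in this execution path."""
    return 0 < notional <= max_notional

def interpret_instruction(text: str) -> dict:
    """Semantic-interpretation layer: in a real system, this is where an
    AI agent could translate natural language into structured parameters.
    Here, a trivial deterministic stand-in."""
    side = "buy" if "buy" in text.lower() else "sell"
    notional = 500_000.0  # placeholder; a real agent would extract this
    return {"side": side, "notional": notional}

order = interpret_instruction("buy 500k of SPY")
# The decision itself remains deterministic and unit-testable.
assert check_order_risk(order["notional"]) is True
```

The design choice is that the risk check can be exhaustively tested and audited in isolation, while the interpretive layer, where errors and hallucinations are tolerable, can be swapped out or supervised without touching the engine.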

To sum up: AI agents are the builders and interfaces, not the engine itself. The organisations that navigate this transition well will be those that deploy AI agents where they excel, while preserving human judgment and deterministic systems where reliability and accountability are non-negotiable.

Coming Soon

Why End-to-End Matters

Most algorithmic trading platforms optimise one part of the workflow. We're building for the whole thing — and here's why that changes everything.

Full article coming soon.

Coming Soon

Regime Detection in Practice

How stochastic models can dynamically adjust portfolio risk exposure — from theory to our live implementation.

Full article coming soon.

Coming Soon

From Flow to Conviction

Decoding institutional options flow to identify pre-positioning patterns — and building a systematic screener around persistence signals.

Full article coming soon.