In our recent project, we explored an approach to automated coding that integrates two distinct language models (LLMs): a large, powerful LLM for technical planning and code review, and a smaller, local LLM focused on writing the actual code. This dual-model architecture significantly boosts productivity, code quality, and maintainability in complex web projects involving HTML, CSS, and JavaScript.
See the proof of concept here: https://github.com/neurotechie/ai-developer-concept
Detailed Approach to Our Dual LLM System
1. Technical Planning with a Large LLM
We leverage an advanced LLM such as GPT-4o-mini to manage initial project planning. The workflow begins when the large LLM receives a high-level project description from the user. The LLM then generates structured JSON (tasks.json), breaking the work down into the elements below (a simplified example follows the list):
- File actions (create, update, delete).
- Explicit file paths and directory structures.
- Detailed content instructions for the small LLM.
- Clearly defined interdependencies, such as HTML files explicitly linking CSS and JavaScript files.
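The exact schema is defined in the proof-of-concept repository; the entry below is only a simplified, hypothetical illustration of what a tasks.json item might contain (the field names are assumptions, not the actual format):

```json
{
  "tasks": [
    {
      "action": "create",
      "path": "generated_files/index.html",
      "instructions": "HTML5 page with a header, a main section, and links to styles.css and app.js",
      "dependsOn": ["generated_files/styles.css", "generated_files/app.js"]
    },
    {
      "action": "create",
      "path": "generated_files/styles.css",
      "instructions": "Simple layout: centered header and a responsive main column",
      "dependsOn": []
    }
  ]
}
```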
2. Code Generation with a Local Small LLM
After receiving the structured tasks, the small, local Ollama-hosted LLM (Qwen) generates code following the instructions from the large LLM. For HTML in particular, it applies chunk-based modifications, selectively updating specific sections (such as <body>) to avoid unintended overwrites or duplicated content. Its outputs are always returned as structured JSON so they can be parsed reliably.
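To make the chunk-based idea concrete, here is a minimal sketch of replacing only one section of an existing HTML file. The function and the response shape in the comment are hypothetical, not the actual chunkManager.js API:

```javascript
// Replace only the inner content of a single element, leaving the rest of
// the document untouched (illustrative sketch, not the real chunkManager.js).
const existingHtml = "<html><body><p>old content</p></body></html>";

function replaceChunk(html, tagName, newInner) {
  const pattern = new RegExp(`(<${tagName}[^>]*>)[\\s\\S]*?(</${tagName}>)`, "i");
  if (!pattern.test(html)) return html; // section not found: leave the file as-is
  return html.replace(pattern, `$1\n${newInner}\n$2`);
}

// Example: suppose the small LLM returned JSON like
// { "file": "index.html", "chunk": "body", "content": "<h1>Hello</h1>" }
console.log(replaceChunk(existingHtml, "body", "<h1>Hello</h1>"));
```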
3. Automatic File and Folder Management
The system employs a dedicated file manager to execute file operations exactly as instructed in tasks.json. It integrates with the file system, automatically creating, updating, or deleting files and directories, and its error-handling checks ensure no invalid or empty content is written, maintaining file integrity.
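As a rough sketch (assuming Node's built-in fs module; the real fileManager.js may be organised differently), such a task executor could look like this:

```javascript
import fs from "node:fs/promises";
import path from "node:path";

// Minimal sketch of a file-operation executor driven by a tasks.json entry.
export async function applyTask(task) {
  const { action, path: filePath, content } = task;

  if (action === "delete") {
    await fs.rm(filePath, { recursive: true, force: true });
    return;
  }
  // Guard: never write empty or invalid output coming back from the model.
  if (typeof content !== "string" || content.trim() === "") {
    throw new Error(`Refusing to write empty content to ${filePath}`);
  }
  await fs.mkdir(path.dirname(filePath), { recursive: true }); // ensure parent folders exist
  await fs.writeFile(filePath, content, "utf8");               // covers both create and update
}
```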
4. Automated Code Review and Correction
Once the initial coding is complete, the large LLM automatically conducts detailed reviews of the modified files. Issues such as JavaScript reference errors, broken HTML links, or CSS styling problems are identified and documented in code_review.json. The large LLM then provides step-by-step fix instructions, which the small LLM implements, keeping the codebase robust.
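The review output might resemble the following, though the field names here are illustrative rather than the exact code_review.json format:

```json
{
  "reviews": [
    {
      "file": "generated_files/index.html",
      "status": "needs_fix",
      "issues": [
        {
          "type": "broken-reference",
          "detail": "The <script> tag references app.js, but the generated file is main.js",
          "fix": "Update the script src attribute to main.js"
        }
      ]
    }
  ]
}
```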
5. Smart File Tracking and Context Management
Our system continuously maintains file tracking through file_tracker.json, capturing file paths, summaries, and their interconnections. This context-rich record allows the large LLM to plan intelligently and review precisely, maintaining coherence and consistency across the project.
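A file_tracker.json entry can be as simple as a path, a short summary, and the files it links to (again, an illustrative shape rather than the exact format):

```json
{
  "generated_files/index.html": {
    "summary": "Landing page with header, hero section, and contact form",
    "linkedFiles": ["generated_files/styles.css", "generated_files/app.js"]
  }
}
```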
Diagrams and Visualizations
Flow diagram, sequence diagram, and state diagram (images omitted).
Pros of Our Dual LLM Approach
- Enhanced Productivity: Automation significantly accelerates the project lifecycle.
- Improved Code Quality: Structured planning and rigorous automated reviews drastically reduce errors.
- Efficiency: Rapid execution by local small LLMs minimizes latency.
- Contextual Awareness: Ongoing file tracking ensures accurate, context-aware interactions.
- Reduced Overhead: Automation dramatically reduces manual coding and debugging effort.
Technology Stack and File Structure
project-root/
├── index.js            # Main automation script
├── openaiClient.js     # Large LLM client (GPT-4o-mini)
├── ollamaClient.js     # Small LLM client (Qwen via Ollama)
├── fileManager.js      # File management operations
├── chunkManager.js     # Chunk-based updates management
├── codeReviewer.js     # Automatic code reviews
├── generated_files/
├── tasks.json          # Task instructions
├── code_review.json    # Review documentation
└── file_tracker.json   # Comprehensive file tracking
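Tying these pieces together, the main loop in index.js roughly follows a plan → generate → apply → review → fix cycle. The imports and function signatures below are placeholders for the real modules, not their actual exports:

```javascript
// Rough outline of the automation loop (placeholder imports and signatures).
import { planTasks } from "./openaiClient.js";      // large LLM: technical planning
import { generateCode } from "./ollamaClient.js";   // small local LLM: code generation
import { applyTask } from "./fileManager.js";       // file system operations
import { reviewFiles } from "./codeReviewer.js";    // large LLM: automated review

async function run(projectDescription) {
  const { tasks } = await planTasks(projectDescription);      // produces tasks.json
  for (const task of tasks) {
    task.content = await generateCode(task);                  // small LLM writes the code
    await applyTask(task);                                    // create / update / delete on disk
  }
  const review = await reviewFiles(tasks.map((t) => t.path)); // produces code_review.json
  for (const issue of review.issues ?? []) {
    const fix = await generateCode(issue);                    // small LLM implements the fix
    await applyTask({ action: "update", path: issue.file, content: fix });
  }
}

run("A simple landing page with a contact form").catch(console.error);
```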
Ideal Use-Cases
This system is particularly suitable for:
- Rapid prototyping of web applications (HTML/CSS/JavaScript).
- Automated management of large, multi-file codebases.
- Quick setup and validation of frontend projects.
- Consistent and explicit code quality control.
Conclusion
Our dual LLM system, combining sophisticated technical planning with efficient local execution, represents a significant advancement in automated software development, substantially improving productivity, reliability, and maintainability.