PCPL Dev Core Team • February 2026

Software Engineering
in the AI World

We went from Cursor → Claude Code. What's the next leap?


The Wake-Up Call

Last week, Spotify dropped this on their earnings call:

"Our best developers have not written a single line of code since December."
— Gustav Söderström, Co-CEO, Spotify (Feb 10, 2026)

They shipped 50+ features in 2025. Their engineers fix bugs and ship features from their phone on the morning commute.

It's Not Just Spotify

The world's best engineering teams have already shifted

85%
of Box engineers
use Cursor daily
30,000
NVIDIA devs on Cursor
3x more code committed
1M+
lines of AI code merged
at Dropbox per month
30%+
velocity increase
at Salesforce

💡 These aren't startups experimenting. These are enterprises with thousands of developers seeing measurable, material improvements.

How We Got Here — Including Us

The industry evolved. So did PCPL.

2023
IDE-based development with autocomplete — VS Code, IntelliJ, basic code suggestions
2024-25 — PCPL adopts Cursor
AI writes code you describe — Composer, multi-file edits, inline chat. Our team lived here.
Late 2025 — PCPL moves to Claude Code
AI works locally on your machine — Terminal-native, full repo context, no cloud IDE dependency
2026 — THE NEXT LEAP
AI agents you talk to on Slack/Telegram — they code, test, push branches, you review. No IDE needed.

Inside Spotify's "Honk" System

Engineer on morning commute → feature shipped before office

💬

1. Describe

Engineer types in Slack from phone: "Fix the shuffle bug on iOS"

🤖

2. AI Codes

Claude Code reads repo, writes fix, runs tests automatically

📱

3. Review

New build pushed to engineer's phone via Slack for testing

🚀

4. Ship

Engineer approves → merged to production. Done.

🎯 Key insight: The best developers became reviewers and architects, not typists. They spend time on what to build and why, not how to type it.

Three Levels of AI-Assisted Development

We're at Level 2. The opportunity is Level 3.

⌨️

Level 1: AI in Your IDE

Copilot, Cursor autocomplete. AI suggests, you accept/reject. You're still typing.

PCPL: Been here ✓
💻

Level 2: AI on Your Terminal

Claude Code / Codex locally. Powerful — but one agent, one task, one branch. Your machine is blocked while the agent works. You wait.

PCPL: We're here now
💬

Level 3: Multi-Agent on Slack/Telegram

Multiple agents, multiple branches, in parallel. Fire off 3 tasks before standup. Review all 3 MRs by lunch. Your machine stays free.

THE NEXT LEAP →

💡 Level 3 is what Spotify built with "Honk." Two big shifts: chat replaces IDE as the interface, and you can run multiple agents in parallel — one fixing a bug, another building a feature, a third writing tests. That's the real multiplier.
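The parallelism is the whole point of Level 3. A minimal Python sketch of the pattern, where `run_agent` is a stand-in for a real agent call (not an existing API): three tasks go out at once, and your machine only collects the results.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    """Stand-in for dispatching one task to a coding agent.

    A real setup would call the agent's API and return an MR link;
    here it just echoes the task so the parallelism pattern is visible.
    """
    return f"MR ready: {task}"

# Fire off three tasks before standup; none of them blocks your machine.
tasks = ["fix shuffle bug", "add pagination", "write payment tests"]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_agent, tasks))

for mr in results:
    print(mr)
```

One agent on your terminal is serial; three agents on a server are concurrent. That difference, not the chat interface alone, is the multiplier.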

The PCPL Proposal

TheFoundry

Chat-first, IDE when needed.
A Mac Mini with all repos cloned, test suites ready, and specialized AI agents
that your team talks to on Slack — owned by the dev team, built for the dev team.

No IDE needed • Code from your phone • Tests run before you even see the branch

TheFoundry Architecture

One Mac Mini. All repos. All test suites. Specialized agents.

Mac Mini (TheFoundry)
├── ~/code/cuelinks/ — all 13 repos cloned, bundled, test DB ready
├── ~/code/desidime/ — DesiDime repos, local test env
├── ~/code/zingoy/ — Zingoy repos, local test env
├── OpenClaw → orchestrator + Slack/Telegram interface
│   ├── RailsBot (Rails/Sinatra) • DroidBot (Android/Kotlin)
│   ├── FlutterForge (Flutter/Dart) • UIForge (React/CSS)
│   ├── QABot (test gen + runner) • ReviewBot (auto review)
│   └── DocsBot (API docs) • DataBot (SQL/migrations)
└── Slack + Telegram → the developer's interface
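To make the orchestrator concrete, here is a toy sketch of how OpenClaw could route a chat message to a specialized agent. The agent names come from the tree above; the keyword heuristic is an assumption — a real orchestrator would more likely classify the request with an LLM.

```python
# Illustrative routing table: message keyword -> specialized agent.
# Agent names follow the architecture tree; keyword matching is a
# placeholder for real intent classification.
ROUTES = {
    "rails": "RailsBot", "sinatra": "RailsBot",
    "android": "DroidBot", "kotlin": "DroidBot",
    "flutter": "FlutterForge", "dart": "FlutterForge",
    "react": "UIForge", "css": "UIForge",
    "test": "QABot", "review": "ReviewBot",
    "docs": "DocsBot", "sql": "DataBot", "migration": "DataBot",
}

def route(message: str) -> str:
    """Pick the first agent whose keyword appears in the message."""
    text = message.lower()
    for keyword, agent in ROUTES.items():
        if keyword in text:
            return agent
    return "RailsBot"  # sensible default for a Rails-heavy codebase

print(route("Fix the pagination bug in the cuelinks Rails API"))  # RailsBot
```

The point of the sketch: one chat endpoint, many agents behind it, and the dev never has to know which agent picked up the task.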

🔑 Key: Local Test Execution

Every product's repo is cloned with dependencies installed and test DB configured. Agents run rspec, gradle test, flutter test locally before pushing. The branch you get back has already passed tests.

💬 Key: Chat Is the Interface

Devs don't SSH in or open an IDE. They message on Slack or Telegram: "Fix the pagination bug in cuelinks API" — and get back a tested, reviewed branch ready for merge.

The Most Interesting Agent

TechLeadBot: From PRD to technical clarity

What if an agent could read a PRD, understand our current architecture, and ask the right questions to Product before a single line of code is written?

Product team shares PRD in Slack

1. TechLeadBot reads the PRD
2. Cross-references with .agents/ARCHITECTURE.md across repos
3. Identifies gaps, conflicts, ambiguities
4. Posts back:
"The PRD says 'add IFSC validation' but:
→ Migrations are in cuelinks_admin, not cuelinks — which repo owns this?
→ We already validate IFSC via Razorpay API — should this replace or complement?
→ India-only or international publishers too?"

5. Once answers come in → generates task breakdown for TheFoundry agents

🎯 This is where developers become more critical, not less. The agent surfaces the right questions. The developer's architectural thinking and domain knowledge are what make the answers valuable. PRD → refined requirements → code is a better pipeline than PRD → guesswork → code → rework.
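A toy version of TechLeadBot's cross-referencing step, under heavy assumptions: the ownership map below is a hand-written stand-in for what would be distilled from each repo's .agents/ARCHITECTURE.md, and a real bot would extract PRD topics with an LLM rather than taking them as a list.

```python
# Hypothetical ownership map, as it might be distilled from the
# .agents/ARCHITECTURE.md files. Keys and repo names are illustrative.
OWNERSHIP = {
    "migrations": "cuelinks_admin",
    "ifsc validation": "cuelinks (via Razorpay API)",
    "publisher panel": "cuelinks",
}

def surface_questions(prd_topics):
    """Turn PRD topics into clarifying questions for Product.

    Known topics get an "extend or replace?" confirmation; unknown
    topics get flagged as ownership gaps.
    """
    questions = []
    for topic in prd_topics:
        owner = OWNERSHIP.get(topic.lower())
        if owner:
            questions.append(f"'{topic}' currently lives in {owner} -- extend or replace?")
        else:
            questions.append(f"'{topic}' has no documented owner -- which repo should own it?")
    return questions

for q in surface_questions(["IFSC validation", "fraud scoring"]):
    print(q)
```

The mechanical lookup is trivial; the value is that the .agents/ docs put tribal knowledge where an agent can query it before code gets written.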

A Day in the Life

Code from Slack. Unit tests run on Mac Mini. Branch lands in your lap.

9:00 AM — Dev messages on Slack (or Telegram from phone):
💬 "RailsBot: Add pagination to campaign API. Use kaminari. 25 per page."

9:01 AM — RailsBot on the Mac Mini:
📂 Reads repo conventions from .agents/PATTERNS.md
🌿 Creates branch, writes code + unit tests
🧪 Runs rspec locally → 47 tests, 0 failures
🔍 ReviewBot auto-reviews the diff + lint check
🚀 Pushes to GitLab → creates MR

9:04 AM — Dev gets Slack notification:
✅ "MR ready — unit tests passing, review done. Your call 👍"

Dev hasn't opened their IDE. ☕ still warm. Tests already green.

The Hidden Time Sink

Git overhead: small tasks, big drain

⏱️ Without TheFoundry

Every task = a git workflow tax:

  • git checkout -b → create branch
  • git add / commit → stage & commit
  • git pull --rebase → resolve conflicts
  • git push → push to remote
  • Open GitLab → create MR → fill template

~5-10 min per task × dozens of tasks/week

🚀 With TheFoundry

You say what you want. Agent handles the rest:

  • 💬 "Fix the nil error in transactions endpoint"
  • → Agent creates branch
  • → Writes fix + test
  • → Rebases on latest main
  • → Pushes + creates MR

You just review and merge. Zero git commands.

🎓 Huge for junior devs. No more fighting with rebase conflicts, detached HEADs, or accidentally pushing to main. The learning curve for git disappears — they focus on understanding code, not git commands.
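Because the git tax is a fixed sequence, an agent can plan it mechanically. A sketch of that planner — the `ai/` branch prefix, slug rule, and `glab mr create` step are illustrative conventions we would choose, not an existing tool's defaults:

```python
import re

def plan_git_steps(task: str, base: str = "main"):
    """Build the git command sequence an agent runs for one task.

    The dev never types any of these; they only review the MR.
    """
    # Turn the task description into a branch-safe slug.
    slug = re.sub(r"[^a-z0-9]+", "-", task.lower()).strip("-")[:40]
    return [
        f"git checkout -b ai/{slug}",
        "git add -A",
        f'git commit -m "{task}"',
        f"git pull --rebase origin {base}",
        f"git push -u origin ai/{slug}",
        "glab mr create --fill",  # or the GitLab API directly
    ]

for step in plan_git_steps("Fix nil error"):
    print(step)
```

Rebase conflicts are the one step that can't be fully scripted; when one occurs, the agent should stop and report back on Slack rather than guess.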

How We Organize It

Each repo gets an .agents/ folder — the AI's playbook

📁 Per Repository

each-repo/
├── .agents/
│   ├── AGENTS.md — conventions, do's/don'ts
│   ├── ARCHITECTURE.md — system design
│   ├── PATTERNS.md — "how we do X here"
│   └── TESTING.md — test conventions
└── ... existing files

📁 Product-Wise

~/code/
├── cuelinks/ — 13 repos + .agents/
├── desidime/ — DesiDime repos
├── zingoy/ — Zingoy repos
└── shared/
    ├── CODING_STANDARDS.md
    ├── SECURITY_RULES.md
    └── DEPLOY_PROCESS.md

🙋 This is just an example structure. Every developer on this call will help define and refine what goes into these files. You know your repos best — you'll write the playbook that teaches the AI how your codebase works. This is collaborative, not top-down.

The Real-World Challenge

Our services don't live in isolation — how does the agent handle that?

🔗

Cross-Repo Dependencies

Example: You change the publisher panel (cuelinks) but migrations live in cuelinks_admin.

Solution: Dependency Maps

Each repo's .agents/ARCHITECTURE.md documents:

  • → Where migrations live
  • → Shared models / gems / services
  • → Which repos need to deploy together
  • → API contracts between services
🧠

How the Agent Handles It

Agent reads the dependency map and works across repos when needed:

Dev: "Add a verified field to IndiaPayment"

Agent:
1. Reads cuelinks/.agents/ → "migrations are in cuelinks_admin"
2. Creates migration in cuelinks_admin
3. Updates model + views in cuelinks
4. Pushes branches to both repos
5. Notes in MR: "Deploy admin first"

💡 This is exactly how a senior dev thinks — "where does this actually live?" The .agents/ docs encode that tribal knowledge so the AI doesn't have to guess.
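The dependency map can be as simple as a lookup the agent consults before planning. A sketch, assuming a dict-shaped encoding of .agents/ARCHITECTURE.md (the layout is an assumption; repo names are from the slide):

```python
# Sketch of a dependency map, as .agents/ARCHITECTURE.md might encode it.
DEPENDENCY_MAP = {
    "cuelinks": {
        "migrations_live_in": "cuelinks_admin",
        "deploy_order": ["cuelinks_admin", "cuelinks"],
    },
}

def plan_schema_change(repo: str, change: str):
    """Expand one model change into cross-repo steps using the map."""
    info = DEPENDENCY_MAP.get(repo, {})
    migration_repo = info.get("migrations_live_in", repo)
    steps = [
        f"create migration in {migration_repo}: {change}",
        f"update model + views in {repo}",
    ]
    if migration_repo != repo:
        steps.append(f"push branches to both repos; deploy {migration_repo} first")
    return steps

for step in plan_schema_change("cuelinks", "add verified to IndiaPayment"):
    print(step)
```

When the map has no entry, the agent falls back to single-repo behavior — which is also the signal that the .agents/ doc for that repo needs filling in.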

Testing: What's Realistic?

The agent writes and runs unit tests.
Integration/E2E stays with the team.

Agent Does: Unit Tests

  • → Writes RSpec / JUnit / Flutter tests
  • → Runs them locally on the Mac Mini
  • → Validates models, services, helpers
  • → Checks for regressions in existing tests
  • → Reports: "47 tests, 0 failures" ✅
Runs on Mac Mini
🔍

Agent Does: Static Analysis

  • → Rubocop / lint checks
  • → Security scan (hardcoded keys, SQL injection)
  • → Code review (N+1, error handling)
  • → Pattern compliance (.agents/ rules)
  • → Catches issues before human review
No infra needed
👨‍💻

Team Does: Integration + E2E

  • → Full stack integration tests
  • → Browser/UI testing
  • → Cross-service API testing
  • → Performance / load testing
  • → Final staging validation
Needs real infra

🎯 Think of it this way: the branch you get back has already passed unit tests + code review + lint. Your job shifts from "does this code work?" to "does this feature do what we want?"
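For flavor, here is what the simplest static checks look like. The two patterns below (hardcoded secrets, string-interpolated SQL) are deliberately naive illustrations; real scanners like Rubocop and Brakeman go much deeper, and the agent would run those too.

```python
import re

# Two illustrative pre-review checks. Patterns are toy versions of what
# dedicated tools (Rubocop, Brakeman) detect properly.
CHECKS = [
    (re.compile(r"""(api_key|secret|password)\s*=\s*['"]\w+"""),
     "hardcoded secret"),
    (re.compile(r"execute\(.*#\{"),
     "string-interpolated SQL (injection risk)"),
]

def scan_diff(diff: str):
    """Return a finding for each line that trips a check."""
    findings = []
    for lineno, line in enumerate(diff.splitlines(), 1):
        for pattern, label in CHECKS:
            if pattern.search(line.lower()):
                findings.append(f"line {lineno}: {label}")
    return findings

sample = 'api_key = "abc123"\nexecute("SELECT * FROM users WHERE id = #{id}")'
print(scan_diff(sample))
```

Cheap checks like these run in milliseconds on the Mac Mini, so every branch gets them before a human ever opens the diff.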

Where We've Been, Where We Could Go

Our journey + the options ahead

A

Cursor (IDE-based AI) — we've done this

Great autocomplete, inline chat, Composer. But still individual, still IDE-bound. We outgrew this.

B

Claude Code (Terminal AI) — we're here now

Powerful, local, full repo context. But still requires dev's terminal, dev's machine, dev's attention. One dev = one agent at a time.

C

Claude Code / Codex + TheFoundry (Hybrid)

Keep using Claude Code or OpenAI Codex locally for quick tasks. Use TheFoundry for bigger features, reviews, and test automation. Best of both — individual speed + shared infra.

The New Developer

What fades vs. what rises

📉 Fading

  • Typing speed
  • Memorizing syntax
  • Manual debugging
  • Writing boilerplate
  • Stack Overflow copy-paste

📈 Rising

  • Prompt engineering
  • System design & architecture
  • Code review & verification
  • Describing intent clearly
  • Understanding why code works

⚡ A developer using AI effectively can do the work of 3-5 developers working manually. Same team, 3-5x the output. That's not a threat — it's a superpower.

The Opportunity

You become the architect, not the typist

🚫 What AI Won't Do

  • Understand your business logic
  • Make architectural decisions
  • Debug complex production issues with context
  • Know what to build (product sense)
  • Handle ambiguity and tradeoffs
  • Think about security holistically

✅ What AI Will Do

  • Write boilerplate in seconds
  • Generate tests from specs
  • Refactor across 50 files at once
  • Catch bugs before review
  • Write documentation
  • Handle repetitive migrations

Developers who embrace AI won't be replaced.
Developers who refuse AI will be outpaced.

How We Get There

A phased rollout — no big bang

Phase 1: Foundation PARTIALLY COMPLETE

Week 1-2
  • ✅ Set up Mac Mini + OpenClaw
  • Create .agents/ in top 3 repos
  • ✅ Deploy CodeBot + ReviewBot
  • Connect Slack + GitLab
  • Pick 2-3 volunteer devs for pilot

Phase 2: Expand

Week 3-4
  • Add DroidBot, FlutterForge, UIForge
  • Auto-review on all new MRs
  • Onboard remaining dev team
  • Measure PRs/week, time-to-merge

Phase 3: Accelerate

Month 2-3
  • Full agent coverage, all repos
  • QABot for auto test generation
  • GitLab webhook → auto-review pipeline
  • Track velocity metrics before/after

The Elephant in the Room 🐘

Let's talk about what you might be thinking

😰 "Is this replacing us?"

No. It's replacing the boring parts of your job. The boilerplate, the repetitive CRUD, the "I've written this pagination code 50 times" work. You get to focus on the parts that actually need a human brain.

🤔 "Will AI-generated code be good enough?"

That's why you review every MR. The agent proposes, you approve. Nothing ships without human eyes. Think of it as having a very fast junior dev who never gets tired — but still needs your review.

😤 "I'll lose my coding skills"

Did calculators make mathematicians worse? You'll read more code than ever (reviewing), think about architecture more, and tackle problems you never had time for. Your skills evolve, not atrophy.

🏗️ "Our codebase is too complex for AI"

That's exactly why we need .agents/ docs — to encode the complexity. And honestly, we already handle cross-repo work today with Maya + CodeBot. It works.

💬 If you have concerns — voice them today. This is a discussion, not a mandate. The goal is to make your life better, not scarier.

The Real Opportunity

Stop building forms. Start building the future.

When agents handle the routine work, your team unlocks time for the engineering that actually moves the needle:

🔍

Semantic Search

Vector search across deals, products, content. Understand intent, not just keywords. Personalisation at scale.

DesiDime + Zingoy
🔌

MCP Servers

Build AI-native APIs that Claude, GPT, and other models can use directly. Make our platforms AI-accessible.

All Products
🤝

Partner Integrations

Deeper affiliate network APIs, real-time deal feeds, automated campaign optimisation. More partners, less manual work.

Cuelinks
🧠

Local LLMs

Run models on our own infrastructure for sensitive data processing, content generation, spam detection — without API costs.

AI-Ready Infra
📊

AI-Powered Analytics

Natural language queries over our data. "Show me top performing campaigns this week" → instant answers, no SQL needed.

DataWizard

Smart Automation

Intelligent deal detection, automated content generation, dynamic pricing, predictive fraud detection.

All Products

This Isn't Just for Senior Devs

Every level benefits differently

👨‍💻 Senior Developers

Finally have time for the work you've wanted to do but never could:

  • → Design systems architecture, not write CRUD
  • → Build semantic search, not pagination
  • → Mentor the team, not fight boilerplate
  • → Experiment with LLMs, MCP, vector DBs
  • → Your experience becomes the multiplier — you direct, agents execute

🌱 Junior Developers

Level up years faster than the previous generation:

  • → Learn architecture by reviewing AI-generated code
  • → Work on advanced features from day one
  • → AI explains patterns as it codes — built-in mentor
  • → Focus on understanding why, not memorising how
  • → Ship meaningful features in your first month, not your first year

⚡ The goal: stop building the same APIs and forms repeatedly. Let agents handle the routine. Every developer — senior or junior — gets to do more interesting, more impactful, more career-growing work.

Let's Talk

Questions for the team

  1. You've used Cursor and Claude Code — what worked, what didn't? What are you still doing manually that AI should handle?
  2. Would you actually use a Slack/Telegram bot to code? Or does it feel weird not being in your IDE?
  3. What's the first thing you'd ask TheFoundry to do? (Bug fix / Feature / Tests / Migration / Refactor?)
  4. What guardrails do we need? Should agents auto-push or always wait for human approval?
  5. The Spotify question: Could you see yourself reviewing code all day instead of writing it? How does that feel?
  6. Hybrid or all-in? Keep Claude Code / Codex locally + add TheFoundry, or go full Slack-native?

Project Foundry Is Yours

This is your project to own and shape

💬

Chat-First

Talk to agents on Slack. Describe what you want. They build it.

🏭

You Decide

What agents do you need? What tasks should they handle? Shape the system.

What's the first thing you'd ask TheFoundry to do?

The companies shipping fastest
aren't the ones with the
most developers.

They're the ones whose developers
have the best tools.

25 developers with the right AI setup → output of 75+

The cost of being slow is much higher than the cost of these tools.

Prepared by Maya 🎯 • Feb 2026