BUILDING PLUTONAL - THE ENGINEERING DECISIONS NOBODY TALKS ABOUT

"Everyone Asks: How Do You Deliver $6 Million Intelligence For $99 A Month?"

The answer isn't what you think.

It's not simplification. It's not watered-down analysis. It's not cutting corners.

It's obsessive engineering discipline applied to every single decision over the past eighteen months.

When I told people I was building institutional-grade market intelligence for retail investors, the most common response wasn't "that's a bad idea." It was "that's impossible."

And I get it. The maths doesn't seem to work.

Hedge funds spend four to six million dollars per year on market intelligence. Bloomberg Terminal costs $24,000 annually just for the basics. Institutional data feeds run $50,000 to $500,000 per year depending on what you need.

How do you deliver the same thing for the cost of a gym membership?

You don't. Unless you're willing to rebuild everything from scratch.

This is the story of the technical decisions that made Plutonal possible. Not the sexy AI stuff everyone wants to talk about. The boring, unglamorous infrastructure choices that nobody notices until they're not there.

THE PROBLEMS EVERYONE IGNORES

Most fintech startups take the easy path. They wrap ChatGPT in a nice interface, connect it to a free stock API, charge $29 per month, and call themselves "AI-powered financial intelligence."

It works. Until it doesn't.

The interface looks pretty. The AI responds quickly. Users feel like they're getting insights.

But here's what they're actually getting: surface-level analysis based on the same publicly available data everyone else sees, processed the exact same way everyone else processes it, delivered with confidence that isn't backed by depth.

It's not intelligence. It's aggregation with better marketing.

I didn't want to build that.

When my uncle was spending six hours researching one stock, he wasn't struggling because the information didn't exist. He was struggling because the information existed in fifty different places, in fifty different formats, with fifty different levels of reliability, and no way to synthesise it all without spending hundreds of thousands on tools he couldn't afford.

The problem wasn't access. The problem was architecture.

If I wanted to build something that actually worked, I needed to solve problems most people don't even know exist.

PROBLEM 1: THE DATABASE NIGHTMARE

Here's something nobody tells you about financial data: it doesn't fit neatly into rows and columns.

Stock prices are time-series. Company relationships are graphs. News sentiment is unstructured text. Options flow is event-driven. Regulatory filings are documents. Technical indicators are computed values that depend on historical sequences.

Most platforms pick one database and force everything into it. Then they wonder why queries are slow and insights are shallow.

We use five different databases.

Not because I like complexity. Because financial intelligence requires different data structures for different problems.

Neo4j for knowledge graphs. Because when you're tracking how companies relate to each other, who owns what, which executives sit on which boards, and how supply chains connect, relationships matter more than rows.

PostgreSQL with the TimescaleDB extension for time-series market data. Because stock prices aren't isolated points; they're sequences that need to be queried across time with millisecond precision.

Pinecone for vector embeddings. Because when you're doing semantic analysis of news, filings, and earnings calls, you need to find similar content based on meaning, not just keywords.

Redis for caching. Because repeatedly hitting expensive APIs for data that hasn't changed is how you go bankrupt.

AWS S3 for bulk storage. Because historical data is massive and needs to be accessible but not actively queried.

Five databases. Five different query languages. Five different scaling characteristics.

Most engineers would call this over-engineering. They'd be wrong.

The complexity isn't in having five databases. The complexity is in making them work together seamlessly so users never know they exist.

But that complexity is exactly what lets us deliver institutional-depth analysis. Because we're storing and querying data the same way institutions do, not the way most startups do.
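In practice, polyglot persistence like this usually hides behind a single routing layer so the rest of the application never touches five query languages. Here's a minimal, purely illustrative sketch in Python; the stub classes stand in for real Neo4j, TimescaleDB, and Redis drivers, and none of the names reflect Plutonal's actual code:

```python
class StubGraph:
    """In-memory stand-in for a graph database of company relationships."""
    def __init__(self, edges):
        self.edges = edges
    def neighbours(self, ticker):
        return sorted(self.edges.get(ticker, set()))

class StubTimeseries:
    """In-memory stand-in for a time-series store; counts reads."""
    def __init__(self, rows):
        self.rows = rows
        self.reads = 0
    def range(self, ticker, start, end):
        self.reads += 1
        return [r for r in self.rows.get(ticker, []) if start <= r[0] <= end]

class StubCache:
    """In-memory stand-in for Redis."""
    def __init__(self):
        self._d = {}
    def get(self, key):
        return self._d.get(key)
    def set(self, key, value):
        self._d[key] = value

class DataRouter:
    """One interface; each query type goes to the store built for its shape."""
    def __init__(self, graph, timeseries, cache):
        self.graph = graph
        self.timeseries = timeseries
        self.cache = cache

    def related_companies(self, ticker):
        # Relationship queries hit the graph store.
        return self.graph.neighbours(ticker)

    def price_history(self, ticker, start, end):
        # Immutable history is cached so the expensive store is hit once.
        key = f"prices:{ticker}:{start}:{end}"
        hit = self.cache.get(key)
        if hit is not None:
            return hit
        rows = self.timeseries.range(ticker, start, end)
        self.cache.set(key, rows)
        return rows
```

The point of the facade is that callers ask questions ("what's related to AAPL?", "prices from t1 to t2?") and never know which of the five stores answered.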

PROBLEM 2: THE COST SPIRAL

Let me tell you about the day I nearly gave up.

I'd built the first working prototype. It worked. Really worked. The analysis was good. The insights were solid. Users in the small beta group were excited.

Then I ran the cost projections.

At the rate I was hitting APIs and running computations, each user query was costing me between $2 and $5.

Think about that.

If someone paid $99 per month and used the platform thirty times, serving them would cost me $60 to $150. At the high end I'd lose money on every single user, and even at the low end the margin left almost nothing. At scale, that's not a business. That's a charity funded by venture capital until the money runs out.

I nearly quit that night.

The next morning, I started rebuilding from scratch.

The problem wasn't the intelligence. The problem was that I was treating every query like it was unique. Fetching fresh data every time. Running full analyses from scratch. Hitting expensive APIs constantly.

But here's the thing about financial markets: most data doesn't change that quickly.

An institution's 13F filing from last quarter? That's not changing. An earnings report from last week? Static. Historical price data? Immutable. Even real-time price data doesn't change every second; for most purposes it updates every few seconds.

The solution wasn't doing less analysis. It was being smarter about what actually needed to be computed fresh versus what could be retrieved from work we'd already done.
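One common way to encode "what actually needs to be fresh" is a cache whose time-to-live matches each data class's rate of change. This is a generic sketch of that pattern, not Plutonal's actual mechanism; the data classes and TTL values are invented for illustration:

```python
import time

# Generic illustration: give each data class a time-to-live matching how
# fast it really changes, so expensive upstream API calls happen only when
# the cached copy could plausibly be stale.
TTL_SECONDS = {
    "filing": None,       # historical filings are immutable: cache forever
    "daily_bar": 86_400,  # yesterday's OHLC won't change for a day
    "quote": 5,           # live quotes go stale within seconds
}

class TieredCache:
    def __init__(self, clock=time.monotonic):
        self._store = {}
        self._clock = clock  # injectable clock makes the policy testable

    def get_or_fetch(self, kind, key, fetch):
        ttl = TTL_SECONDS[kind]
        entry = self._store.get((kind, key))
        if entry is not None:
            value, stored_at = entry
            if ttl is None or self._clock() - stored_at < ttl:
                return value  # fresh enough: skip the API call entirely
        value = fetch()       # cache miss or stale: pay for the call
        self._store[(kind, key)] = (value, self._clock())
        return value
```

With a policy like this, a user re-running the same analysis pays the upstream cost once for immutable data and only refreshes the genuinely volatile slices.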

I can't tell you the specifics of how we solved this. That's proprietary. But I can tell you the result: we reduced costs by 60 to 80% without reducing quality at all.

That's the difference between a feature that works in a demo and a business that works at scale.

PROBLEM 3: THE INFRASTRUCTURE SCALING TRAP

Most startups make one of two mistakes with infrastructure.

Mistake one: they over-provision from day one. Kubernetes clusters, multi-region deployments, microservices architecture, the whole enterprise stack. Costs $10,000 per month before they have a single paying customer. They run out of money before they find product-market fit.

Mistake two: they under-provision and take on technical debt. Monolithic architecture that can't scale. Single database that becomes a bottleneck. No separation between compute and storage. Works fine for 100 users. Crashes spectacularly at 1,000.

We needed a third path.

Start lean enough to survive on startup funding. But architect for scale from day one so we're not rebuilding everything when growth comes.

Here's what that actually looks like in practice:

Beta launch (100 to 200 users): Single AWS EC2 instance, single database, minimal infrastructure. Costs $150 to $200 per month. Just enough to prove the product works.

Early growth (500 to 1,000 users): Upgrade compute, add specialised databases, improve caching. Costs $400 to $500 per month. Still sustainable on seed funding.

Scaling (1,000 to 5,000 users): Load balancer, horizontal scaling, database replicas, separate background workers. Costs $1,500 to $2,000 per month. Now we're profitable and scaling infrastructure matches revenue.

Growth (5,000 to 20,000 users): Kubernetes for auto-scaling, multi-availability-zone deployment, dedicated time-series cluster. Costs $5,000 to $8,000 per month. Full enterprise reliability.

Scale (20,000 plus users): Multi-region deployment, microservices, event-driven architecture with proper message queues. Costs $15,000 to $30,000 per month. Now we're operating at institutional scale.

The key isn't where you are now. The key is having a clear path from here to there without having to rebuild everything at each stage.
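One way to keep that journey rebuild-free is to treat deployment shape as configuration rather than code. A hypothetical sketch of the idea; the environment variable names here are invented for the example, not Plutonal's actual settings:

```python
import os

# Hypothetical illustration: derive the deployment shape from environment
# variables, so the same codebase runs on a single EC2 box at beta and a
# replicated cluster later. Moving between tiers is a config change.
def deployment_config(env=os.environ):
    workers = int(env.get("PLT_WORKERS", "1"))
    read_replicas = int(env.get("PLT_DB_READ_REPLICAS", "0"))
    return {
        "workers": workers,
        "db_read_replicas": read_replicas,
        # Route reads to replicas only once they exist; application code
        # never hardcodes which tier it is running on.
        "reads_go_to_replica": read_replicas > 0,
    }
```

At beta the defaults apply and everything runs in one place; at the scaling tier, setting two variables turns on replica reads without touching application logic.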

Most startups don't fail because they couldn't scale. They fail because they couldn't afford to reach the scale where the unit economics make sense.

We designed the whole journey before writing the first line of code.

PROBLEM 4: THE DUAL-MARKET CHALLENGE EVERYONE AVOIDS

When I tell people Plutonal covers both US and Indian markets, most just nod. They don't realise how insanely complicated that is.

It's not just twice the data. It's two completely different market structures, two different regulatory frameworks, two different sets of data providers, two different time zones whose trading hours barely overlap, two different currencies, two different settlement systems.

And the real kicker: the two markets affect each other in ways that aren't immediately obvious.

Indian IT stocks often move based on US tech earnings. US companies with large India operations get affected by Indian policy changes. Currency fluctuations between USD and INR affect companies with cross-border revenue. Supply chain relationships mean events in one market ripple to the other.

To deliver real intelligence, you can't treat them separately.

Most platforms do US only. A few do India only. Almost nobody does both, and the ones that try just run two separate systems that don't talk to each other.

We built unified data models from day one. One knowledge graph that spans both markets. One analytical framework that understands cross-market correlations. One interface where users seamlessly track positions across both.
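A unified model like that typically pushes every market-specific difference (exchange, currency, trading hours) into data rather than code, so one analytical pipeline serves both sides. An illustrative sketch, with all field names assumed for the example rather than taken from Plutonal's schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum

# Illustrative market-agnostic model: everything that differs between the
# US and India is data, not code, so one pipeline handles both markets.
class Market(Enum):
    US = "US"
    IN = "IN"

@dataclass(frozen=True)
class Security:
    symbol: str
    market: Market
    exchange: str   # e.g. "NASDAQ" or "NSE"
    currency: str   # e.g. "USD" or "INR"

@dataclass(frozen=True)
class Bar:
    security: Security
    ts_utc: datetime  # all timestamps stored in UTC, whatever the exchange
    close: float

def to_utc(local_dt, utc_offset_hours):
    """Normalise an exchange-local timestamp to UTC for unified storage."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return local_dt.replace(tzinfo=tz).astimezone(timezone.utc)
```

With every timestamp normalised to UTC, a cross-market correlation query becomes an ordinary join rather than a timezone puzzle: NSE's 09:15 IST open, for instance, lands at 03:45 UTC next to the US pre-market.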

Why do competitors avoid this? Because it's expensive and complicated and there's no easy shortcut.

Why did we do it anyway? Because that's where the real opportunity is. US markets are saturated with tools. Indian markets are underserved. Retail investors operating in both markets are completely abandoned.

Being the only platform that serves both properly isn't a nice-to-have. It's our entire moat.

PROBLEM 5: REAL-TIME PROCESSING WITHOUT GOING BANKRUPT

Here's the question that kept me up at night for months:

How do you monitor thousands of stocks in real-time when every API call costs money?

If you monitor everything constantly, you go bankrupt. If you only check when users ask, you miss the signals that matter. If you batch everything overnight, you're not really real-time.

The answer is event-driven architecture, but that term doesn't do justice to how complex it actually is.

Think about it this way:

Most stocks, most of the time, aren't doing anything interesting. They're just moving with the market, following normal patterns, being boring.

But when something interesting does happen (unusual volume, a sharp price move, breaking news, a regulatory filing, unusual options activity), you need to catch it immediately and analyse it deeply.

The trick is knowing what's interesting without checking everything constantly.

We use message queues, background workers, and intelligent triggers that activate deeper analysis only when needed. User watchlists get intensive monitoring. Market-wide scanning happens intelligently, not constantly. Expensive analysis gets triggered by events, not run on schedules.

I can't share the specifics because that's core IP. But I can tell you the philosophy: don't treat all data equally. Ninety per cent of your intelligence should come from ten per cent of your compute. The skill is identifying which ten per cent matters.
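As a generic example of that philosophy (deliberately not the proprietary triggers), the simplest possible "is this interesting?" gate is a rolling-window outlier check: cheap to run on everything, and only outliers earn the expensive downstream analysis.

```python
from collections import deque
from statistics import mean, pstdev

# Generic illustration of an event-driven trigger: keep a cheap rolling
# window per symbol and enqueue expensive analysis only when today's
# volume is a statistical outlier against that symbol's own baseline.
class VolumeTrigger:
    def __init__(self, window=20, z_threshold=3.0):
        self.window = window
        self.z_threshold = z_threshold
        self.history = {}

    def observe(self, symbol, volume):
        """Return True when volume is unusual enough for deep analysis."""
        hist = self.history.setdefault(symbol, deque(maxlen=self.window))
        fire = False
        if len(hist) >= 5:  # need a baseline before judging anything
            mu, sigma = mean(hist), pstdev(hist)
            if sigma > 0 and (volume - mu) / sigma >= self.z_threshold:
                fire = True  # in production: push a job onto a message queue
        hist.append(volume)
        return fire
```

Feed it ordinary days and it stays silent; feed it a spike and it fires, which is exactly the "ten per cent of compute" shape: the cheap check runs everywhere, the expensive work runs almost nowhere.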

THE TECHNICAL DEBT WE AVOIDED

Want to know what makes me most proud? It's not what we built. It's what we didn't build.

We didn't wrap GPT in a chat interface and call it AI-powered intelligence. Everyone's doing that. It's easy. It's also shallow. Large language models are incredible at language, mediocre at maths, and terrible at real-time data integration. Using them as your entire system is like using a hammer for brain surgery because it worked great on nails.

We didn't build a monolithic architecture. One big application that does everything is easy to start but impossible to scale. When one part breaks, everything breaks. When one part needs more resources, you have to scale everything. We built modular from day one even though it took longer.

We didn't take shortcuts on data quality. Garbage in, garbage out isn't just a saying. If your underlying data is wrong, your AI can be brilliant and your insights will still be rubbish. We validate everything, track data lineage, and maintain audit trails like a regulated financial institution even though we're not one.

We didn't limit to US markets only. Would've been easier. Would've been faster to launch. Would've also meant competing in the most saturated market with nothing unique to offer. Sometimes the hard path is the only path that works.

We didn't prioritise features over infrastructure. Every startup wants to ship features users can see. Infrastructure is invisible until it breaks. We chose to be slower to launch but ready to scale.

These decisions cost us time. They cost us complexity. They cost us the ability to raise money with a pretty demo in three months.

But they bought us the ability to actually deliver on the promise.

THE PART THAT SCARES ME

You know what keeps me up at night now?

It's not whether the technology works. It works.

It's not whether we can scale. We can.

It's not whether the analysis is good enough. It is.

What scares me is that I've spent eighteen months building something incredibly complex to solve a problem most people don't even know they have.

Retail investors have been trading blind for so long they think that's just how it is. They don't know about dark pools because nobody's told them. They don't track options flow because they don't know it exists. They don't see institutional positioning because they can't afford the tools.

You can't sell people a solution to a problem they don't know they have.

That's why blog posts like this matter. That's why I'm being so open about the technical challenges. Because if people understand how complex the problem is, they'll understand why the solution matters.

WHAT'S NEXT

We're close. Really close.

The infrastructure works. The data pipelines are running. The analysis is solid. The beta is coming in February.

But "close" in engineering means there are still 10,000 small decisions to make. Database query optimisations. API rate limit handling. Error recovery patterns. UI polish. Documentation. Testing. Security audits.

None of it is sexy. All of it matters.

When we launch, you won't see the five databases. You won't see the caching layers. You won't see the event-driven architecture. You won't see the dual-market data synchronisation.

You'll just see insights that actually help you make better decisions.

That's the point.

The hard part of engineering isn't building something complex. It's building something complex that feels simple.

I don't know if this will work. The odds are still against us. Most startups fail regardless of how good their technology is.

But I know this: if it does work, it won't be because we had better AI or more funding or slicker marketing.

It'll be because we did the boring, hard, unglamorous engineering work that everyone else skipped.

Zeus tried to keep Plutus blind by making the tools too expensive for regular people.

We're making them affordable through better engineering.

One database decision at a time. One cost optimisation at a time. One infrastructure choice at a time.

It's not heroic. It's just work.

But it's the work that matters.