Build AI That
Changes Industries.
Jamshedpur, India. On-premise AI. Real impact.
We're a small, focused team building on-premise AI systems for Healthcare, Education, BFSI, Manufacturing, and Government. We deploy open-source models on GPU clusters. We build RAG pipelines, MCP integrations, voice agents, and agentic systems. If you want to work on production AI — not demos — this is the place.
What Makes This Different
Most AI companies are API wrappers. They call OpenAI, add a UI, and charge a subscription. That's not what we do.
We deploy complete AI infrastructure on client premises — from the GPU server rack to the application layer to ongoing maintenance. Our clients include hospitals managing patient data, banks processing transactions, and government departments handling citizen records. Their data can't leave the building. So neither does our AI.
Working here means you'll touch every layer of the stack: hardware provisioning, model deployment, inference optimization, RAG pipelines, agentic systems, voice agents, and production operations. Not in a sandbox — on real infrastructure serving real users.
“We don't hire people to write training notebooks. We hire people to ship production AI systems that run inside hospitals and banks.”
Capability Over Credentials
We don't care where you studied. Show us what you've shipped.
Ship Production Code
We measure output by what runs in production, not what passes in a notebook. If it can’t handle real load, real data, and real edge cases, it doesn’t ship.
Own the Full Stack
There’s no “that’s not my job” here. You’ll debug CUDA out-of-memory errors, optimize Docker containers, fine-tune models, and fix CSS in the same week.
Data Sovereignty First
Every system we build keeps client data on client premises. This isn’t a talking point — it’s an architectural constraint that shapes every decision we make.
What You'll Work With
Real infrastructure. Real models. Real GPUs sitting in server racks we maintain.
GPU Cluster: 4 nodes — one for LLM inference (Qwen3-Omni-30B + TTS), one for training (ASR, VL, embeddings, MedGemma), a secondary LLM node (Qwen3.5-35B-A3B), and the MCMS application platform. You'll have direct access.
Small Team. Real Ownership.
No layers of management. No sprint ceremonies for the sake of process. Build, ship, maintain.
Direct Client Interaction
You’ll talk to clients directly — not through PMs or BAs. Understand the problem first-hand, then build the system to solve it.
Deploy to Production Weekly
We ship to production environments every week. Not staging, not dev — real client infrastructure. You’ll see your code running in hospitals and banks.
Progressive Autonomy
The same model we use for our AI agents: start with oversight, then earn independence through consistent delivery. Senior contributors own entire product lines.
Jamshedpur-Based
We’re headquartered in Jamshedpur, Jharkhand. Lower cost of living, focused work environment, and direct access to the GPU cluster. Remote considered for exceptional candidates.
This Is For You If...
You'll Thrive Here If
Probably Not a Fit If
When Positions Open
We hire selectively based on project needs. These are the roles we typically hire for — not all are open at any given time.
AI / ML Engineer
Model deployment, fine-tuning, inference optimization, RAG pipelines, agentic systems. Production-grade AI on GPU clusters.
Full-Stack Developer
FastAPI backends, React frontends, PostgreSQL, Docker. Building the application layer for AI-native products across the ATC suite.
Voice / NLP Engineer
ASR (Whisper), TTS (Qwen3-Omni), voice agent pipelines, SIP/VoIP integration, real-time audio processing on GPU.
DevOps / Infrastructure Engineer
GPU cluster management, Docker orchestration, monitoring, networking, and on-premise deployment automation for client sites.
“Pipeline” means we accept expressions of interest and reach out when a matching project opens; “Open” means we're actively hiring for that role. Check back or follow us on LinkedIn for updates.
Send Your Work
No Resumes. Show Us What You've Built.
We don't read CVs — we read code, system architectures, and deployment logs. Send us links to your best work: GitHub repos, deployed projects, blog posts about systems you've built, or a short write-up of a hard technical problem you solved.
Send Your Work
Email links to your projects, repos, or a write-up of your best technical work.
Technical Review
We review your work asynchronously. If it’s a fit, we’ll schedule a technical conversation.
Paid Trial Project
A short, paid project on a real problem from our stack. Mutual evaluation — you assess us too.
Build With Us
On-premise AI. Real clients. Real infrastructure. If you're ready to work on production systems that matter — we want to hear from you.
Your Data. Your Premises. Your AI.