I Built an AI System Design Interviewer That Runs in My Mac Terminal
Recently, I’ve been noticing something strange.
Developers around me are barely writing code by hand anymore.
AI tools help with architecture, implementation, testing, documentation, and even debugging. In many workflows, developers are shifting from coding to directing.
If coding itself is changing, it raises an interesting question:
Will coding interviews eventually lose relevance?
I don’t have a definitive answer. But one trend is very clear: companies are putting increasing emphasis on system design interviews.
And those are notoriously hard to practice.
The Real Problem With System Design Interviews
Algorithm problems are solo-friendly. You can grind them on LeetCode anytime.
System design interviews are different.
They require:
- Someone to guide the conversation
- Follow-up questions that adapt to your answers
- Verbal explanation of your thought process
- Structured feedback after the session
Building a Virtual System Design Interviewer (Mac CLI)
That led me to build a virtual system design interview tool that runs entirely inside the Mac terminal.

It’s built using:
- Swift for the native Mac experience
- Strands Agents to orchestrate the AI interviewer workflow
- MLX Whisper for speech recognition
- Qwen3 TTS to generate interviewer voice responses
- Claude Sonnet to generate structured interview feedback
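Roughly, those pieces map onto a few small abstractions in the session loop. Here's a minimal sketch with illustrative names (Transcriber, Synthesizer, Interviewer are my shorthand here, not necessarily the repo's actual types):

```swift
import Foundation

// Illustrative abstractions only; the actual types in the repo may differ.
// MLX Whisper would sit behind Transcriber, Qwen3 TTS behind Synthesizer,
// and the Strands Agents workflow behind Interviewer.
protocol Transcriber {
    func transcribe(audio: URL) async throws -> String
}

protocol Synthesizer {
    func speak(_ text: String) async throws
}

protocol Interviewer {
    func nextQuestion(after answer: String?) async throws -> String
}
```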
The goal wasn’t to build another chat-based Q&A bot.
I wanted it to feel like a real interview.
How the Interview Works
Once the session starts, the AI interviewer leads a ~30-minute system design interview based on a selected topic.
The experience is fully voice-driven.
Flow (sketched in code after the list):
- The AI interviewer asks a question (spoken via TTS)
- You answer using your microphone
- The conversation continues dynamically
- Every question and answer is transcribed automatically
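Under those assumptions, the loop itself stays small. This is a minimal sketch built on the hypothetical protocols from the earlier snippet; error handling and the /end command are omitted for brevity:

```swift
import Foundation

// Minimal voice loop: speak a question, record the answer,
// transcribe it, and feed it back to the interviewer agent.
// Runs until the (roughly 30-minute) deadline passes.
func runInterview(interviewer: Interviewer,
                  tts: Synthesizer,
                  stt: Transcriber,
                  recordAnswer: () async throws -> URL,
                  deadline: Date) async throws -> [(question: String, answer: String)] {
    var transcript: [(question: String, answer: String)] = []
    var lastAnswer: String?

    while Date() < deadline {
        let question = try await interviewer.nextQuestion(after: lastAnswer)
        try await tts.speak(question)                         // spoken via TTS
        let audioFile = try await recordAnswer()              // microphone capture
        let answer = try await stt.transcribe(audio: audioFile)
        transcript.append((question, answer))                 // auto-transcribed log
        lastAnswer = answer
    }
    return transcript
}
```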
After the interview ends, the system generates a detailed evaluation report.
Making the Interview Feel Real
I integrated Qwen3 TTS to simulate a speaking interviewer. It's not perfect, but it dramatically improves immersion and forces you to explain ideas verbally, which is critical in real interviews.
The interview can end in two ways:
- You manually stop it with the /end command
- The 30-minute timer expires (but you still need to send /end to close the session)
After that, the AI generates structured feedback (a possible schema is sketched after this list) including:
- Strength analysis
- Weakness detection
- Communication clarity evaluation
- Architectural decision quality
- Tradeoff reasoning depth
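To keep that report machine-readable, one option is to ask Claude for JSON and decode it into a small Codable type. The schema below is an assumption for illustration, not the tool's actual format:

```swift
import Foundation

// Hypothetical report schema mirroring the feedback categories above.
struct InterviewFeedback: Codable {
    let strengths: [String]
    let weaknesses: [String]
    let communicationClarity: String      // short written assessment
    let architecturalDecisions: String    // quality of key design calls
    let tradeoffReasoning: String         // depth of tradeoff discussion

    // Decode from the model's JSON response.
    static func parse(_ json: Data) throws -> InterviewFeedback {
        try JSONDecoder().decode(InterviewFeedback.self, from: json)
    }
}
```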
Example of What Gets Captured
Because every interaction is transcribed, you can review:
- Where you hesitated
- How your design evolved
- Whether your explanations were structured
- How well you justified tradeoffs
This turned out to be one of the most valuable features for me personally.
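A timestamped entry type is enough to support that kind of review; long gaps between entries are where the hesitation shows. Again, this is a sketch, not necessarily the repo's actual data model:

```swift
import Foundation

// Hypothetical transcript entry; Date stamps let you measure the
// pauses between a question and your answer when reviewing.
struct TranscriptEntry: Codable {
    enum Speaker: String, Codable {
        case interviewer, candidate
    }

    let timestamp: Date
    let speaker: Speaker
    let text: String
}
```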
Current Limitations
This project is still experimental.
Right now:
- ❌ Session audio is not recorded
- 🔜 Local LLM support is planned
- 🔜 Streaming LLM integration is planned
The current version prioritizes simplicity and a low-friction setup.
Demo / Source Code
If you’re curious or want to try it yourself:
👉 https://github.com/elbanic/sdi.coach
For more on system design interview preparation, check out my AI System Design Tutor that helps you learn system design concepts right within your IDE.
Final Thoughts
It’s an uncertain time to be a developer.
The industry is evolving fast. Hiring expectations are shifting. The tools we rely on are changing every year.
But building tools to adapt to those changes feels like the most reliable way forward.
If this helps someone practice system design or sparks new ideas, that alone makes the project worth it.
If you have feedback, feature ideas, or want to contribute, I’d genuinely love to hear from you.