QA / QE Automation Engineer
Description
Hi!
We are looking for a QA / QE Automation Engineer to own quality engineering and build an automation-first QA culture.
This is not a manual QA role.
- 70–80% of your work: automation, AI QA agents, and infrastructure
- 20–30% of your work: exploratory testing, AI evaluation, and release validation

You will be responsible for ensuring the entire team (and AI coding agents) can ship with confidence.
What You’ll Do

🤖 AI / Agentic QA Automation (70–80%)
- Configure and operate AI QA agents (Testsigma, QualGent, Verifiq, etc.)
- Build autonomous regression test suites (UI, API, user flows)
- Implement self-healing test systems
- Generate tests from PRDs, Figma designs, and technical specs
- Write custom automation scripts in Node.js and/or Python
- Cover complex cases: multi-tenant systems, Kafka events, gRPC contracts
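To make the "self-healing test systems" responsibility concrete, here is a minimal Python sketch of one common approach: try a ranked list of candidate selectors and promote whichever one matched. All names here are hypothetical; the agentic QA platforms named above do considerably more (DOM-similarity scoring, AI-driven repair).

```python
# Hypothetical sketch of a self-healing locator strategy: try candidate
# selectors in order of preference, and "heal" by moving the selector
# that worked to the front so later lookups try it first.

class SelfHealingLocator:
    def __init__(self, candidates):
        # Ordered list of selector strings, most preferred first.
        self.candidates = list(candidates)

    def find(self, query_fn):
        """query_fn(selector) -> element or None (e.g. a Playwright lookup)."""
        for i, selector in enumerate(self.candidates):
            element = query_fn(selector)
            if element is not None:
                # Heal: remember the working selector for next time.
                self.candidates.insert(0, self.candidates.pop(i))
                return element
        raise LookupError("no candidate selector matched")
```

In a UI test, `query_fn` would wrap the framework's element lookup; here a plain dict of selector-to-element stands in for the page, which keeps the healing logic itself easy to unit-test.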
⚙️ CI/CD & Test Infrastructure
- Integrate automated tests into GitHub Actions pipelines
- Build and maintain CI/CD quality gates
- Manage Docker-based test environments
- Optimize test execution time via parallelization
- Build QA dashboards (coverage, flakiness, deployment confidence metrics)
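As a sketch of the "quality gates" responsibility: a small script that fails the pipeline when metrics fall outside agreed thresholds. The metric names and thresholds below are invented for illustration; in a real setup the metrics would come from the test run and the script would be a GitHub Actions step that fails on a non-zero exit code.

```python
# Hypothetical CI quality gate: fail the build when coverage drops below
# a minimum or flakiness rises above a maximum. Metric names and
# thresholds are illustrative, not a prescribed standard.
import json
import sys


def quality_gate(metrics, min_coverage=0.80, max_flaky_rate=0.02):
    """Return a list of human-readable violations (empty list = gate passes)."""
    violations = []
    coverage = metrics.get("coverage", 0.0)
    flaky_rate = metrics.get("flaky_rate", 0.0)
    if coverage < min_coverage:
        violations.append(
            f"coverage {coverage:.0%} below minimum {min_coverage:.0%}"
        )
    if flaky_rate > max_flaky_rate:
        violations.append(
            f"flaky rate {flaky_rate:.1%} above maximum {max_flaky_rate:.1%}"
        )
    return violations


if __name__ == "__main__":
    # e.g. `python quality_gate.py < metrics.json` as a CI step
    found = quality_gate(json.load(sys.stdin))
    for violation in found:
        print("GATE FAILED:", violation)
    sys.exit(1 if found else 0)
```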
🤖 AI / ML Quality & Evaluation
- Build evaluation frameworks for LLM outputs, RAG systems, and voice agents
- Write regression tests for models, prompts, and knowledge base changes
- Validate AI performance (latency, cost, quality)
- Test ASR/TTS systems and conversational flows
- Design MLOps validation scripts (pre/post deployment comparisons)
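A minimal sketch of what a prompt-regression check can look like: score each new LLM output against a golden answer with a crude token-overlap (Jaccard) metric and flag cases that drift below a threshold. This is deliberately simplistic; production evaluation frameworks use stronger metrics (semantic similarity, LLM-as-judge), and every name and threshold here is an assumption for illustration.

```python
# Hypothetical prompt-regression sketch: compare new outputs against
# golden answers using token-overlap (Jaccard) similarity and report
# the test cases that regressed below a threshold.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two strings, in [0, 1]."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    if not tokens_a and not tokens_b:
        return 1.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


def regression_report(golden, outputs, threshold=0.5):
    """Return the ids of cases whose new output drifted from the golden answer."""
    return [
        case_id
        for case_id, expected in golden.items()
        if jaccard(outputs.get(case_id, ""), expected) < threshold
    ]
```

Such a report would typically run in CI after any prompt or knowledge-base change, turning "did the model's answers drift?" into a pass/fail signal.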
🔍 Manual & Exploratory Testing (20–30%)
- Test complex end-to-end flows (chat + WhatsApp + voice handoff)
- Validate real-time systems (WebSocket, live agent transfers)
- Test multi-tenant and RBAC scenarios
- Evaluate AI behavior (hallucinations, tone, helpfulness)
- Participate in release sign-offs for critical deployments
Requirements
Must Have
- 3–5 years in QA / QE with a strong automation focus
- Experience with AI QA tools (Testsigma, QualGent, Verifiq, etc.)
- Strong Node.js and/or Python skills for automation & evaluation scripts
- Experience with CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI, etc.)
- Experience testing REST APIs and microservices
- Experience with AI system testing (LLMs, RAG, prompt regression, hallucination detection)
- Strong debugging skills (logs, tracing, root cause analysis)
- Git-based development workflow (PRs, code reviews)
Should Have
- Experience with Databricks or ML pipeline testing
- Voice AI testing (ASR, TTS, conversational flows)
- WebSocket / real-time system testing
- Kafka or event-driven architectures (RabbitMQ, etc.)
- Multi-tenant SaaS testing (RBAC, isolation)
- NoSQL database testing (MongoDB, data integrity)
- Observability tools (Datadog, New Relic, etc.)
Nice to Have
- Experience with LangChain / LangSmith
- Playwright for E2E testing
- Docker for test environments
- Building internal QA frameworks or tools
- Experience evaluating or adopting QA automation agents
Tech Stack
- QA Tools: Testsigma, QualGent, Verifiq, and similar agentic QA platforms
- Automation: Node.js, Python
- CI/CD: GitHub Actions, Docker, Azure Pipelines
- Backend: Node.js, Python, Java
- Frontend: React, Next.js
- Data: MongoDB, Redis, Kafka, Pinecone
- AI/ML: OpenAI, Groq, LangChain, ElevenLabs, Databricks
What We Look For
- A strong automation-first mindset
- Engineering-level quality in test code
- Pragmatic thinking (knowing what to automate vs. what not to)
- Curiosity about AI systems and non-deterministic testing
What We Offer
- Paid vacation days
- Paid sick leave
- Additional paid days off for family events
- Full ownership of the QA engineering function
- Opportunity to build QA systems from scratch
- Career growth
- Work with cutting-edge AI/ML and voice technologies
- Remote-first, async-friendly culture
- Competitive compensation