About the Role
Lead the implementation of a holistic test strategy covering data and model evaluation as well as fundamental ML pipeline testing and monitoring
Embed best practices into the AI/ML development cycle so that it is highly stable, surfaces issues early, and requires little maintenance.
Support ML Engineers in model evaluation (e.g. error analysis), data quality validation, data annotation, and model monitoring to ensure high-quality AI-driven products.
Work closely with ML Engineers, MLOps Engineers, and other QA Engineers to establish a shared understanding and consistent implementation of best practices.
Contribute to the growth and continuous improvement of quality and testing processes within the organization.
About You
Basic Qualifications:
Strong experience in quality assurance and test automation, with a focus on AI/ML
Understanding of QA methodologies, tools, and processes
Strong experience in at least one of: statistical analysis, machine learning algorithms, natural language processing, or LLMs
Experience in AI/ML system design and the ML model lifecycle
Other Qualifications:
Hands-on experience with test automation frameworks and tools such as Pytest, Giskard, and Garak
Proficiency in SQL and Python
Experience in red teaming of LLMs and ML models
Experience in AI/ML model performance validation, data quality and validation testing, model monitoring, and data pipeline testing