Pre-trained models have transformed AI system development. From GPT to BERT to YOLO, reusing models trained by others has become common practice. But this raises a key question for testers: How do pre-trained models affect software testing?
A pre-trained model is a model trained on large datasets for a general task, which can then be fine-tuned for a specific one. According to the ISTQB Certified Tester AI Testing (CT-AI) syllabus, these models bring opportunities and risks that directly impact the testing process.
From a QA perspective, pre-trained models offer faster delivery and reduced training effort, freeing testers to concentrate on integration, explainability, fairness, and performance.
The ISTQB syllabus also outlines several concerns, such as bias inherited from the original training data and limited transparency about how the model was built. To address them, testers must adopt techniques like fairness validation, outlier analysis, and ethical testing.
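A fairness validation check can be surprisingly small. The sketch below (an illustration, not an ISTQB-prescribed procedure; all names and the toy data are invented) measures demographic parity: whether the positive-prediction rate differs across demographic groups.

```python
# Minimal fairness check (demographic parity) over plain Python lists.
# 1 = positive prediction (e.g. approved), 0 = negative.

def approval_rate(predictions, groups, group):
    """Share of positive predictions for one demographic group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(predictions, groups):
    """Largest difference in approval rates between any two groups."""
    rates = [approval_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy data: group A is approved 3 times out of 4, group B once out of 4.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A QA team would typically define a tolerance for this gap up front and fail the test run when the model exceeds it.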
Modern testers should combine domain knowledge with these AI-specific techniques, as the following example illustrates.
A fintech company adopts a credit scoring model pre-trained on European data. After deployment, it consistently rejects applications from recent immigrants. QA redesigns the test cases to include diverse profiles and mitigates the issue.
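The redesigned test suite in this scenario might look like the sketch below. The model, the applicant profiles, and the thresholds are all hypothetical stand-ins: the stub scorer deliberately penalises short residency, and the check flags any group whose approval rate falls below a tolerance.

```python
# Hypothetical redesigned test: run the scorer over diverse applicant
# profiles and flag groups whose approval rate is below min_rate.

def biased_model(applicant):
    """Stand-in for the pre-trained scorer: penalises short residency."""
    return applicant["years_in_country"] >= 3 and applicant["income"] >= 30000

def check_group_fairness(model, applicants, min_rate=0.5):
    """Return the groups whose approval rate falls below min_rate."""
    by_group = {}
    for a in applicants:
        by_group.setdefault(a["group"], []).append(model(a))
    return sorted(g for g, r in by_group.items() if sum(r) / len(r) < min_rate)

profiles = [
    {"group": "resident",  "years_in_country": 10, "income": 40000},
    {"group": "resident",  "years_in_country": 8,  "income": 35000},
    {"group": "immigrant", "years_in_country": 1,  "income": 40000},
    {"group": "immigrant", "years_in_country": 2,  "income": 45000},
]

flagged = check_group_fairness(biased_model, profiles)
print(flagged)  # ['immigrant']
```

Running this as part of the regression suite would have surfaced the rejection pattern before deployment rather than after.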
Pre-trained models are powerful but complex to test. They require critical thinking, domain knowledge, and modern QA strategies. According to ISTQB, understanding their risks and strengths is essential for certified AI testers.