Automated Visual Testing: Using Screenshots for Quality Assurance
Implement automated visual testing workflows using screenshot comparison and regression detection techniques.
Automated visual testing revolutionizes quality assurance by detecting visual regressions and UI inconsistencies. Implementing comprehensive visual testing workflows improves software quality and reduces manual testing overhead.
Automated Visual Testing Overview
Effective automated visual testing rests on a few building blocks: deterministic screenshot capture, pixel-level and perceptual image comparison, versioned baseline management, and CI/CD integration that gates deployments on detected regressions.
Key Components
Technical Foundation
- Core technology principles (screenshot capture, pixel and perceptual comparison)
- Industry standards and protocols (image formats, CI/CD pipelines, reporting)
- Performance requirements (caching, batching, parallel test execution)
- Security considerations (access control for baselines and test environments)
Business Value
- Operational efficiency improvements
- Cost optimization opportunities
- Risk mitigation strategies
- Competitive advantages
Practical Implementation Examples
Screenshot Comparison Engine
// Screenshot comparison and visual regression detection system (demo implementation; production notes inline)
interface ScreenshotComparison {
baselinePath: string
currentPath: string
diffPath?: string
similarity: number // 0-100 percentage
pixelDiff: number // number of different pixels
regions: DiffRegion[]
metadata: {
timestamp: number
baselineHash: string
currentHash: string
comparisonMethod: string
threshold: number
}
}
interface DiffRegion {
x: number
y: number
width: number
height: number
severity: 'low' | 'medium' | 'high' | 'critical'
description: string
suggestedFix?: string
}
interface VisualTestResult {
testName: string
status: 'passed' | 'failed' | 'error'
similarity: number
threshold: number
comparison: ScreenshotComparison
error?: string
duration: number
timestamp: number
}
class ScreenshotComparator {
private imageMagickPath: string = 'imagemagick' // In production: actual path
private comparisonCache: Map<string, ScreenshotComparison> = new Map()
private baselineStorage: string = './baselines'
constructor() {
this.initializeStorage()
}
async compareScreenshots(
baselinePath: string,
currentPath: string,
options: {
threshold?: number
ignoreRegions?: Array<{ x: number; y: number; width: number; height: number }>
comparisonMethod?: 'pixel' | 'perceptual' | 'hybrid'
} = {}
): Promise<ScreenshotComparison> {
const cacheKey = this.generateCacheKey(baselinePath, currentPath, options)
// Check cache first
if (this.comparisonCache.has(cacheKey)) {
const cached = this.comparisonCache.get(cacheKey)!
if (Date.now() - cached.metadata.timestamp < 5 * 60 * 1000) { // 5 minutes cache
return cached
}
}
try {
// Validate input files exist
await this.validateInputFiles(baselinePath, currentPath)
// Preprocess images (resize, normalize)
const processedBaseline = await this.preprocessImage(baselinePath)
const processedCurrent = await this.preprocessImage(currentPath)
// Apply ignore regions
const maskedCurrent = await this.applyIgnoreRegions(processedCurrent, options.ignoreRegions || [])
// Perform comparison
const comparison = await this.performImageComparison(
processedBaseline,
maskedCurrent,
options
)
// Generate diff visualization
if (comparison.similarity < (options.threshold || 95)) {
comparison.diffPath = await this.generateDiffVisualization(
processedBaseline,
maskedCurrent,
comparison
)
}
// Cache result
this.comparisonCache.set(cacheKey, comparison)
return comparison
} catch (error) {
console.error('Screenshot comparison failed:', error)
throw new Error(`Screenshot comparison failed: ${error}`)
}
}
private async validateInputFiles(baselinePath: string, currentPath: string): Promise<void> {
// In production, check file existence and accessibility
if (!baselinePath || !currentPath) {
throw new Error('Invalid file paths provided')
}
// Validate file formats
const validExtensions = ['.png', '.jpg', '.jpeg', '.webp']
const baselineExt = baselinePath.toLowerCase().substring(baselinePath.lastIndexOf('.'))
const currentExt = currentPath.toLowerCase().substring(currentPath.lastIndexOf('.'))
if (!validExtensions.includes(baselineExt) || !validExtensions.includes(currentExt)) {
throw new Error('Unsupported image format')
}
}
private async preprocessImage(imagePath: string): Promise<string> {
// In production, use ImageMagick or similar for preprocessing
// For demo, return original path
return imagePath
}
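  // A minimal sketch of real preprocessing using the `sharp` library (an assumed
  // dependency, not part of the demo above): normalize both images to a fixed
  // size and format so later pixel comparisons operate on comparable buffers.
  private async preprocessImageWithSharp(imagePath: string, outputPath: string): Promise<string> {
    const sharp = (await import('sharp')).default
    await sharp(imagePath)
      .resize(1280, 720, { fit: 'contain', background: { r: 255, g: 255, b: 255 } })
      .png()
      .toFile(outputPath)
    return outputPath
  }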
private async applyIgnoreRegions(
imagePath: string,
regions: Array<{ x: number; y: number; width: number; height: number }>
): Promise<string> {
if (regions.length === 0) return imagePath
// In production, mask regions using image processing
// For demo, return original path
return imagePath
}
private async performImageComparison(
baselinePath: string,
currentPath: string,
options: any
): Promise<ScreenshotComparison> {
const startTime = Date.now()
try {
// Calculate image hashes
const baselineHash = await this.calculateImageHash(baselinePath)
const currentHash = await this.calculateImageHash(currentPath)
// Perform pixel-by-pixel comparison
const pixelComparison = await this.comparePixels(baselinePath, currentPath)
// Use perceptual comparison for better results
const perceptualComparison = await this.comparePerceptually(baselinePath, currentPath)
// Combine results
const similarity = Math.max(pixelComparison.similarity, perceptualComparison.similarity)
const pixelDiff = pixelComparison.pixelDiff
// Identify regions of difference
const regions = await this.identifyDiffRegions(pixelComparison, perceptualComparison)
return {
baselinePath,
currentPath,
similarity,
pixelDiff,
regions,
metadata: {
timestamp: Date.now(),
baselineHash,
currentHash,
comparisonMethod: options.comparisonMethod || 'hybrid',
threshold: options.threshold || 95
}
}
} catch (error) {
throw new Error(`Comparison execution failed: ${error}`)
}
}
private async calculateImageHash(imagePath: string): Promise<string> {
// In production, use perceptual hashing (pHash, dHash, etc.)
// For demo, simulate hash calculation
const hash = Buffer.from(imagePath).toString('base64').slice(0, 16)
return hash
}
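  // A rough perceptual-hash sketch (average hash) using `sharp` (assumed
  // dependency): downscale to 8x8 grayscale and set one bit per pixel based on
  // whether it is above the mean intensity, so similar images yield similar hashes.
  private async calculateAverageHash(imagePath: string): Promise<string> {
    const sharp = (await import('sharp')).default
    const pixels = await sharp(imagePath)
      .resize(8, 8, { fit: 'fill' })
      .grayscale()
      .raw()
      .toBuffer()
    const mean = pixels.reduce((sum, p) => sum + p, 0) / pixels.length
    let bits = ''
    for (const p of pixels) bits += p >= mean ? '1' : '0'
    // Pack the 64 bits into a 16-character hex string
    return BigInt('0b' + bits).toString(16).padStart(16, '0')
  }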
private async comparePixels(baselinePath: string, currentPath: string): Promise<{
similarity: number
pixelDiff: number
}> {
// In production, use image processing library for pixel comparison
// For demo, simulate pixel comparison
const similarity = Math.random() * 20 + 80 // 80-100% similarity
const pixelDiff = Math.floor(Math.random() * 1000) // Random pixel differences
return { similarity, pixelDiff }
}
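  // A hedged sketch of real pixel diffing with `pixelmatch` and `pngjs` (both
  // assumed dependencies). Assumes baseline and current screenshots share the
  // same dimensions; in practice, resize or crop first (see preprocessing above).
  private async comparePixelsWithPixelmatch(baselinePath: string, currentPath: string): Promise<{
    similarity: number
    pixelDiff: number
  }> {
    const fs = await import('fs')
    const { PNG } = await import('pngjs')
    const pixelmatch = (await import('pixelmatch')).default
    const baseline = PNG.sync.read(fs.readFileSync(baselinePath))
    const current = PNG.sync.read(fs.readFileSync(currentPath))
    const { width, height } = baseline
    const diff = new PNG({ width, height })
    // pixelmatch returns the count of mismatched pixels and fills `diff` with a visualization
    const pixelDiff = pixelmatch(baseline.data, current.data, diff.data, width, height, { threshold: 0.1 })
    const similarity = 100 * (1 - pixelDiff / (width * height))
    return { similarity, pixelDiff }
  }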
private async comparePerceptually(baselinePath: string, currentPath: string): Promise<{
similarity: number
regions: DiffRegion[]
}> {
// In production, use SSIM (Structural Similarity Index) or similar
// For demo, simulate perceptual comparison
const similarity = Math.random() * 15 + 85 // 85-100% similarity
const regions: DiffRegion[] = []
// Simulate detected regions
if (similarity < 95) {
regions.push({
x: Math.floor(Math.random() * 100),
y: Math.floor(Math.random() * 100),
width: Math.floor(Math.random() * 50) + 10,
height: Math.floor(Math.random() * 50) + 10,
severity: similarity < 90 ? 'high' : 'medium',
description: 'Visual regression detected in header area'
})
}
return { similarity, regions }
}
private async identifyDiffRegions(pixelComparison: any, perceptualComparison: any): Promise<DiffRegion[]> {
// Combine pixel and perceptual comparison results
const allRegions = [
...pixelComparison.regions || [],
...perceptualComparison.regions || []
]
// Merge overlapping regions
return this.mergeRegions(allRegions)
}
private mergeRegions(regions: DiffRegion[]): DiffRegion[] {
if (regions.length <= 1) return regions
// Simple region merging for demo
// In production, implement proper region clustering
return regions.slice(0, 3) // Return top 3 regions
}
private async generateDiffVisualization(
baselinePath: string,
currentPath: string,
comparison: ScreenshotComparison
): Promise<string> {
// In production, generate visual diff using ImageMagick or similar
// For demo, return mock path
return `./diffs/diff_${Date.now()}.png`
}
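  // Note: when using the pixelmatch sketch above, the `diff` PNG it fills can be
  // written out directly as the visualization, e.g.
  //   fs.writeFileSync(diffPath, PNG.sync.write(diff))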
private generateCacheKey(baselinePath: string, currentPath: string, options: any): string {
const keyData = `${baselinePath}-${currentPath}-${JSON.stringify(options)}`
return Buffer.from(keyData).toString('base64').slice(0, 32)
}
private initializeStorage(): void {
// Initialize baseline storage directory
// In production, set up proper storage (local, S3, etc.)
console.log('Screenshot comparison storage initialized')
}
// Batch comparison for multiple screenshots
async compareScreenshotBatch(
comparisons: Array<{ baseline: string; current: string; name: string; threshold?: number }>
): Promise<Map<string, ScreenshotComparison>> {
const results = new Map<string, ScreenshotComparison>()
// Process in parallel with rate limiting
const batchSize = 10
for (let i = 0; i < comparisons.length; i += batchSize) {
const batch = comparisons.slice(i, i + batchSize)
const batchPromises = batch.map(async (comp) => {
try {
const result = await this.compareScreenshots(
comp.baseline,
comp.current,
{ threshold: comp.threshold }
)
return { name: comp.name, result }
} catch (error) {
console.error(`Batch comparison failed for ${comp.name}:`, error)
return { name: comp.name, result: null }
}
})
const batchResults = await Promise.all(batchPromises)
batchResults.forEach(({ name, result }) => {
if (result) {
results.set(name, result)
}
})
// Small delay between batches
if (i + batchSize < comparisons.length) {
await new Promise(resolve => setTimeout(resolve, 100))
}
}
return results
}
}
// Visual test runner and CI/CD integration
class VisualTestRunner {
private comparator: ScreenshotComparator
private testResults: Map<string, VisualTestResult[]> = new Map()
private baselineManager: BaselineManager
constructor() {
this.comparator = new ScreenshotComparator()
this.baselineManager = new BaselineManager()
}
async runVisualTest(
testConfig: {
name: string
url: string
viewport: { width: number; height: number }
baselinePath: string
threshold?: number
waitFor?: string // CSS selector to wait for
ignoreRegions?: Array<{ x: number; y: number; width: number; height: number }>
}
): Promise<VisualTestResult> {
const startTime = Date.now()
try {
// Generate current screenshot
const currentScreenshot = await this.generateScreenshot(testConfig)
// Compare with baseline
const comparison = await this.comparator.compareScreenshots(
testConfig.baselinePath,
currentScreenshot,
{
threshold: testConfig.threshold || 95,
ignoreRegions: testConfig.ignoreRegions
}
)
// Determine test result
const status = comparison.similarity >= (testConfig.threshold || 95) ? 'passed' : 'failed'
const duration = Date.now() - startTime
const result: VisualTestResult = {
testName: testConfig.name,
status,
similarity: comparison.similarity,
threshold: testConfig.threshold || 95,
comparison,
duration,
timestamp: Date.now()
}
// Store result
this.storeTestResult(testConfig.name, result)
return result
} catch (error) {
const duration = Date.now() - startTime
const result: VisualTestResult = {
testName: testConfig.name,
status: 'error',
similarity: 0,
threshold: testConfig.threshold || 95,
comparison: {} as ScreenshotComparison,
error: error instanceof Error ? error.message : String(error),
duration,
timestamp: Date.now()
}
this.storeTestResult(testConfig.name, result)
return result
}
}
private async generateScreenshot(config: any): Promise<string> {
// In production, use Puppeteer, Playwright, or similar for screenshot generation
// For demo, simulate screenshot generation
const screenshotPath = `./screenshots/current_${config.name}_${Date.now()}.png`
// Simulate screenshot generation delay
await new Promise(resolve => setTimeout(resolve, 1000))
return screenshotPath
}
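  // A minimal Playwright-based capture sketch (Playwright assumed as a
  // dependency; config fields mirror the test config used above). Deterministic
  // rendering (fixed viewport, waiting for a selector) keeps baselines stable.
  private async generateScreenshotWithPlaywright(config: {
    name: string
    url: string
    viewport: { width: number; height: number }
    waitFor?: string
  }): Promise<string> {
    const { chromium } = await import('playwright')
    const browser = await chromium.launch()
    try {
      const page = await browser.newPage({ viewport: config.viewport })
      await page.goto(config.url, { waitUntil: 'networkidle' })
      if (config.waitFor) {
        await page.waitForSelector(config.waitFor)
      }
      const screenshotPath = `./screenshots/current_${config.name}_${Date.now()}.png`
      await page.screenshot({ path: screenshotPath, fullPage: true })
      return screenshotPath
    } finally {
      await browser.close()
    }
  }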
private storeTestResult(testName: string, result: VisualTestResult): void {
if (!this.testResults.has(testName)) {
this.testResults.set(testName, [])
}
this.testResults.get(testName)!.push(result)
// Keep only last 10 results per test
const results = this.testResults.get(testName)!
if (results.length > 10) {
results.shift()
}
}
async runTestSuite(testSuite: {
name: string
tests: Array<{
name: string
url: string
viewport: { width: number; height: number }
baselinePath: string
threshold?: number
}>
parallel?: boolean
}): Promise<{
suiteName: string
totalTests: number
passedTests: number
failedTests: number
errorTests: number
results: VisualTestResult[]
duration: number
}> {
const startTime = Date.now()
const results: VisualTestResult[] = []
try {
if (testSuite.parallel) {
// Run tests in parallel
const testPromises = testSuite.tests.map(test =>
this.runVisualTest(test)
)
const testResults = await Promise.allSettled(testPromises)
testResults.forEach((result, index) => {
if (result.status === 'fulfilled') {
results.push(result.value)
} else {
// Create error result for failed test
results.push({
testName: testSuite.tests[index].name,
status: 'error',
similarity: 0,
threshold: testSuite.tests[index].threshold || 95,
comparison: {} as ScreenshotComparison,
error: result.reason?.message || 'Test execution failed',
duration: 0,
timestamp: Date.now()
})
}
})
} else {
// Run tests sequentially
for (const test of testSuite.tests) {
const result = await this.runVisualTest(test)
results.push(result)
}
}
const duration = Date.now() - startTime
return {
suiteName: testSuite.name,
totalTests: testSuite.tests.length,
passedTests: results.filter(r => r.status === 'passed').length,
failedTests: results.filter(r => r.status === 'failed').length,
errorTests: results.filter(r => r.status === 'error').length,
results,
duration
}
} catch (error) {
console.error('Test suite execution failed:', error)
throw error
}
}
getTestHistory(testName?: string): VisualTestResult[] {
if (testName) {
return this.testResults.get(testName) || []
}
// Return all test results
return Array.from(this.testResults.values()).flat()
}
generateTestReport(suiteName?: string): {
summary: {
totalTests: number
passed: number
failed: number
errors: number
successRate: number
}
flakyTests: string[]
performanceTrends: Array<{ test: string; avgDuration: number; trend: 'improving' | 'degrading' | 'stable' }>
} {
const allResults = suiteName
? this.testResults.get(suiteName) || []
: Array.from(this.testResults.values()).flat()
const summary = {
totalTests: allResults.length,
passed: allResults.filter(r => r.status === 'passed').length,
failed: allResults.filter(r => r.status === 'failed').length,
errors: allResults.filter(r => r.status === 'error').length,
successRate: 0
}
summary.successRate = summary.totalTests > 0
? (summary.passed / summary.totalTests) * 100
: 0
// Identify flaky tests (tests that pass/fail inconsistently)
const flakyTests = this.identifyFlakyTests(allResults)
// Performance trends
const performanceTrends = this.analyzePerformanceTrends(allResults)
return { summary, flakyTests, performanceTrends }
}
private identifyFlakyTests(results: VisualTestResult[]): string[] {
const testGroups = new Map<string, VisualTestResult[]>()
// Group results by test name
results.forEach(result => {
if (!testGroups.has(result.testName)) {
testGroups.set(result.testName, [])
}
testGroups.get(result.testName)!.push(result)
})
const flakyTests: string[] = []
for (const [testName, testResults] of testGroups.entries()) {
if (testResults.length >= 3) {
const statuses = testResults.map(r => r.status)
const uniqueStatuses = [...new Set(statuses)]
// Test is flaky if it has both passed and failed results
if (uniqueStatuses.length > 1) {
flakyTests.push(testName)
}
}
}
return flakyTests
}
private analyzePerformanceTrends(results: VisualTestResult[]): Array<{
test: string
avgDuration: number
trend: 'improving' | 'degrading' | 'stable'
}> {
const testGroups = new Map<string, VisualTestResult[]>()
results.forEach(result => {
if (!testGroups.has(result.testName)) {
testGroups.set(result.testName, [])
}
testGroups.get(result.testName)!.push(result)
})
const trends: Array<{ test: string; avgDuration: number; trend: 'improving' | 'degrading' | 'stable' }> = []
for (const [testName, testResults] of testGroups.entries()) {
if (testResults.length >= 5) {
const durations = testResults.map(r => r.duration)
const avgDuration = durations.reduce((a, b) => a + b, 0) / durations.length
// Calculate trend (simplified)
const recent = durations.slice(-3)
const older = durations.slice(0, 3)
const recentAvg = recent.reduce((a, b) => a + b, 0) / recent.length
const olderAvg = older.reduce((a, b) => a + b, 0) / older.length
let trend: 'improving' | 'degrading' | 'stable' = 'stable'
if (recentAvg < olderAvg * 0.9) trend = 'improving'
else if (recentAvg > olderAvg * 1.1) trend = 'degrading'
trends.push({ test: testName, avgDuration, trend })
}
}
return trends
}
}
class BaselineManager {
private baselines: Map<string, {
path: string
hash: string
created: number
lastUsed: number
version: string
}> = new Map()
async createBaseline(
name: string,
screenshotPath: string,
version?: string
): Promise<void> {
const hash = await this.calculateImageHash(screenshotPath)
this.baselines.set(name, {
path: screenshotPath,
hash,
created: Date.now(),
lastUsed: Date.now(),
version: version || '1.0.0'
})
console.log(`Baseline created: ${name} (v${version || '1.0.0'})`)
}
async updateBaseline(name: string, newScreenshotPath: string): Promise<void> {
const existing = this.baselines.get(name)
if (!existing) {
throw new Error(`Baseline not found: ${name}`)
}
const newHash = await this.calculateImageHash(newScreenshotPath)
// Only update if hash is different
if (newHash !== existing.hash) {
this.baselines.set(name, {
...existing,
path: newScreenshotPath,
hash: newHash,
lastUsed: Date.now()
})
console.log(`Baseline updated: ${name}`)
}
}
getBaseline(name: string): any | null {
return this.baselines.get(name) || null
}
listBaselines(): Array<{ name: string; version: string; lastUsed: number }> {
return Array.from(this.baselines.entries()).map(([name, baseline]) => ({
name,
version: baseline.version,
lastUsed: baseline.lastUsed
}))
}
private async calculateImageHash(imagePath: string): Promise<string> {
// In production, use perceptual hashing
return Buffer.from(imagePath).toString('base64').slice(0, 16)
}
}
// Initialize systems
const screenshotComparator = new ScreenshotComparator()
const visualTestRunner = new VisualTestRunner()
const baselineManager = new BaselineManager()
// API endpoints for visual testing
app.post('/api/visual-test/run', async (req, res) => {
try {
const testConfig = req.body
if (!testConfig.name || !testConfig.url || !testConfig.baselinePath) {
return res.status(400).json({ error: 'Test configuration incomplete' })
}
const result = await visualTestRunner.runVisualTest(testConfig)
res.json({
testResult: result,
timestamp: new Date().toISOString()
})
} catch (error) {
console.error('Visual test execution error:', error)
res.status(500).json({ error: 'Test execution failed' })
}
})
app.post('/api/visual-test/suite', async (req, res) => {
try {
const suiteConfig = req.body
if (!suiteConfig.name || !suiteConfig.tests) {
return res.status(400).json({ error: 'Suite configuration incomplete' })
}
const suiteResult = await visualTestRunner.runTestSuite(suiteConfig)
res.json({
suiteResult,
timestamp: new Date().toISOString()
})
} catch (error) {
console.error('Test suite execution error:', error)
res.status(500).json({ error: 'Suite execution failed' })
}
})
app.get('/api/visual-test/history/:testName?', (req, res) => {
const { testName } = req.params
const history = visualTestRunner.getTestHistory(testName)
res.json({
history,
count: history.length,
timestamp: new Date().toISOString()
})
})
app.get('/api/visual-test/report/:suiteName?', (req, res) => {
const { suiteName } = req.params
const report = visualTestRunner.generateTestReport(suiteName)
res.json({
report,
timestamp: new Date().toISOString()
})
})
app.post('/api/baselines', async (req, res) => {
try {
const { name, screenshotPath, version } = req.body
if (!name || !screenshotPath) {
return res.status(400).json({ error: 'Baseline name and screenshot path required' })
}
await baselineManager.createBaseline(name, screenshotPath, version)
res.json({
message: 'Baseline created successfully',
baseline: { name, version: version || '1.0.0' },
timestamp: new Date().toISOString()
})
} catch (error) {
console.error('Baseline creation error:', error)
res.status(500).json({ error: 'Baseline creation failed' })
}
})
app.get('/api/baselines', (req, res) => {
const baselines = baselineManager.listBaselines()
res.json({
baselines,
timestamp: new Date().toISOString()
})
})
console.log('Screenshot comparison engine initialized')
Visual Regression Detection and CI/CD Integration
// Advanced visual regression detection with CI/CD pipeline integration
interface VisualRegressionConfig {
projectId: string
branch: string
baselineBranch: string
thresholds: {
similarity: number
pixelDiff: number
maxRegions: number
}
ignorePatterns: string[]
dynamicContent: {
selectors: string[]
replacementStrategy: 'remove' | 'replace' | 'ignore'
}
}
interface CIDeployment {
projectId: string
buildId: string
branch: string
commit: string
environment: string
screenshots: Array<{
name: string
path: string
viewport: string
}>
}
interface RegressionReport {
projectId: string
buildId: string
totalTests: number
passedTests: number
failedTests: number
regressions: Array<{
testName: string
severity: 'low' | 'medium' | 'high' | 'critical'
regions: DiffRegion[]
baselineScreenshot: string
currentScreenshot: string
diffScreenshot?: string
}>
summary: {
overallSimilarity: number
maxPixelDiff: number
totalRegions: number
}
}
class VisualRegressionDetector {
private config: Map<string, VisualRegressionConfig> = new Map()
private regressionHistory: Map<string, RegressionReport[]> = new Map()
private ciIntegrations: Map<string, any> = new Map()
constructor() {
this.initializeCIIntegrations()
}
async detectRegressions(
deployment: CIDeployment,
baselineScreenshots: Map<string, string>
): Promise<RegressionReport> {
const startTime = Date.now()
const regressions: RegressionReport['regressions'] = []
let totalSimilarity = 0
let maxPixelDiff = 0
let totalRegions = 0
try {
// Run visual tests for all screenshots
for (const screenshot of deployment.screenshots) {
const baselinePath = baselineScreenshots.get(screenshot.name)
if (!baselinePath) {
console.warn(`No baseline found for ${screenshot.name}`)
continue
}
// Run comparison
const comparison = await screenshotComparator.compareScreenshots(
baselinePath,
screenshot.path,
{ threshold: 95 }
)
// Check for regression
if (comparison.similarity < 90) { // Regression threshold
const severity = this.calculateRegressionSeverity(comparison)
regressions.push({
testName: screenshot.name,
severity,
regions: comparison.regions,
baselineScreenshot: baselinePath,
currentScreenshot: screenshot.path,
diffScreenshot: comparison.diffPath
})
}
// Update summary metrics
totalSimilarity += comparison.similarity
maxPixelDiff = Math.max(maxPixelDiff, comparison.pixelDiff)
totalRegions += comparison.regions.length
}
const totalTests = deployment.screenshots.length
const passedTests = totalTests - regressions.length
const failedTests = regressions.length
const report: RegressionReport = {
projectId: deployment.projectId,
buildId: deployment.buildId,
totalTests,
passedTests,
failedTests,
regressions,
summary: {
overallSimilarity: totalTests > 0 ? totalSimilarity / totalTests : 100,
maxPixelDiff,
totalRegions
}
}
// Store report
this.storeRegressionReport(deployment.projectId, report)
return report
} catch (error) {
console.error('Regression detection failed:', error)
throw error
}
}
private calculateRegressionSeverity(comparison: ScreenshotComparison): 'low' | 'medium' | 'high' | 'critical' {
const similarity = comparison.similarity
const pixelDiff = comparison.pixelDiff
const regionCount = comparison.regions.length
if (similarity < 70 || pixelDiff > 10000 || regionCount > 5) {
return 'critical'
}
if (similarity < 80 || pixelDiff > 5000 || regionCount > 3) {
return 'high'
}
if (similarity < 90 || pixelDiff > 1000 || regionCount > 1) {
return 'medium'
}
return 'low'
}
private storeRegressionReport(projectId: string, report: RegressionReport): void {
if (!this.regressionHistory.has(projectId)) {
this.regressionHistory.set(projectId, [])
}
const reports = this.regressionHistory.get(projectId)!
reports.push(report)
// Keep only last 50 reports
if (reports.length > 50) {
reports.shift()
}
}
// CI/CD Integration
async integrateWithCI(
ciProvider: 'github' | 'gitlab' | 'jenkins' | 'azure-devops',
config: any
): Promise<void> {
switch (ciProvider) {
case 'github':
await this.setupGitHubIntegration(config)
break
case 'gitlab':
await this.setupGitLabIntegration(config)
break
case 'jenkins':
await this.setupJenkinsIntegration(config)
break
case 'azure-devops':
await this.setupAzureDevOpsIntegration(config)
break
}
this.ciIntegrations.set(ciProvider, config)
console.log(`CI/CD integration configured for ${ciProvider}`)
}
private async setupGitHubIntegration(config: any): Promise<void> {
// GitHub Actions integration
// In production, set up webhooks, API tokens, etc.
console.log('GitHub Actions integration configured')
}
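  // Hedged sketch: posting the visual regression verdict back to GitHub as a
  // commit status via Octokit (assumed dependency; token/owner/repo are
  // hypothetical fields supplied through the integration config).
  private async reportGitHubCommitStatus(
    config: { token: string; owner: string; repo: string },
    commitSha: string,
    passed: boolean,
    description: string
  ): Promise<void> {
    const { Octokit } = await import('@octokit/rest')
    const octokit = new Octokit({ auth: config.token })
    await octokit.rest.repos.createCommitStatus({
      owner: config.owner,
      repo: config.repo,
      sha: commitSha,
      state: passed ? 'success' : 'failure',
      context: 'visual-regression',
      description
    })
  }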
private async setupGitLabIntegration(config: any): Promise<void> {
// GitLab CI integration
console.log('GitLab CI integration configured')
}
private async setupJenkinsIntegration(config: any): Promise<void> {
// Jenkins pipeline integration
console.log('Jenkins integration configured')
}
private async setupAzureDevOpsIntegration(config: any): Promise<void> {
// Azure DevOps pipeline integration
console.log('Azure DevOps integration configured')
}
async runCIPipeline(deployment: CIDeployment): Promise<{
success: boolean
report: RegressionReport
shouldBlock: boolean
blockingRegressions: number
}> {
try {
// Get baseline screenshots for comparison
const baselineScreenshots = await this.getBaselineScreenshots(deployment)
// Run regression detection
const report = await this.detectRegressions(deployment, baselineScreenshots)
// Determine if deployment should be blocked
const criticalRegressions = report.regressions.filter(r => r.severity === 'critical').length
const highRegressions = report.regressions.filter(r => r.severity === 'high').length
const shouldBlock = criticalRegressions > 0 || highRegressions > 2
// Report to CI system
await this.reportToCI(deployment, report, shouldBlock)
return {
success: !shouldBlock,
report,
shouldBlock,
blockingRegressions: criticalRegressions + highRegressions
}
} catch (error) {
console.error('CI pipeline execution failed:', error)
throw error
}
}
private async getBaselineScreenshots(deployment: CIDeployment): Promise<Map<string, string>> {
const baselines = new Map<string, string>()
// In production, fetch from baseline storage based on project/branch
// For demo, simulate baseline retrieval
for (const screenshot of deployment.screenshots) {
baselines.set(screenshot.name, `./baselines/${screenshot.name}.png`)
}
return baselines
}
private async reportToCI(deployment: CIDeployment, report: RegressionReport, shouldBlock: boolean): Promise<void> {
// In production, send results to CI system (GitHub PR comment, etc.)
if (shouldBlock) {
console.log(`🚫 Blocking deployment ${deployment.buildId} due to visual regressions`)
} else {
console.log(`✅ Deployment ${deployment.buildId} passed visual regression checks`)
}
// Generate detailed report
const reportSummary = this.generateCIReport(report)
// In production, post comment to PR, update build status, etc.
console.log('CI Report:', reportSummary)
}
private generateCIReport(report: RegressionReport): string {
const successRate = (report.passedTests / report.totalTests) * 100
return `
## Visual Regression Test Results
**Overall Status:** ${successRate >= 95 ? '✅ Passed' : '❌ Failed'}
**Success Rate:** ${successRate.toFixed(1)}%
### Summary
- Total Tests: ${report.totalTests}
- Passed: ${report.passedTests}
- Failed: ${report.failedTests}
- Overall Similarity: ${report.summary.overallSimilarity.toFixed(1)}%
### Regressions Found (${report.regressions.length})
${report.regressions.map(r => `- ${r.testName} (${r.severity})`).join('\n')}
### Recommendations
${report.regressions.length === 0
? '✅ No visual regressions detected'
: '❌ Visual regressions require attention'
}
`
}
getRegressionHistory(projectId: string, builds?: number): RegressionReport[] {
const reports = this.regressionHistory.get(projectId) || []
return builds ? reports.slice(-builds) : reports
}
analyzeRegressionTrends(projectId: string): {
trend: 'improving' | 'degrading' | 'stable'
averageSuccessRate: number
commonRegressionAreas: string[]
} {
const reports = this.regressionHistory.get(projectId) || []
if (reports.length < 3) {
return {
trend: 'stable',
averageSuccessRate: 100,
commonRegressionAreas: []
}
}
// Calculate success rate trend
const recentReports = reports.slice(-5)
const successRates = recentReports.map(r => (r.passedTests / r.totalTests) * 100)
const averageSuccessRate = successRates.reduce((a, b) => a + b, 0) / successRates.length
// Simple trend analysis
const olderAvg = successRates.slice(0, 2).reduce((a, b) => a + b, 0) / 2
const recentAvg = successRates.slice(-2).reduce((a, b) => a + b, 0) / 2
let trend: 'improving' | 'degrading' | 'stable' = 'stable'
if (recentAvg > olderAvg + 5) trend = 'improving'
else if (recentAvg < olderAvg - 5) trend = 'degrading'
// Analyze common regression areas
const allRegressions = reports.flatMap(r => r.regressions)
const areaCounts = new Map<string, number>()
allRegressions.forEach(regression => {
regression.regions.forEach(region => {
areaCounts.set(region.description, (areaCounts.get(region.description) || 0) + 1)
})
})
const commonRegressionAreas = Array.from(areaCounts.entries())
.sort(([, a], [, b]) => b - a)
.slice(0, 5)
.map(([area]) => area)
return {
trend,
averageSuccessRate,
commonRegressionAreas
}
}
private initializeCIIntegrations(): void {
// Initialize CI/CD platform integrations
console.log('CI/CD integrations initialized')
}
}
// Dynamic content handling for visual testing
class DynamicContentHandler {
private contentRules: Map<string, {
selector: string
strategy: 'remove' | 'replace' | 'ignore'
replacement?: string
}> = new Map()
constructor() {
this.initializeDefaultRules()
}
async preprocessScreenshotForComparison(
screenshotPath: string,
rules?: Array<{ selector: string; strategy: string; replacement?: string }>
): Promise<string> {
const rulesToApply = rules || this.getApplicableRules(screenshotPath)
if (rulesToApply.length === 0) {
return screenshotPath
}
// In production, use image processing to mask/replace dynamic content
// For demo, return original path
return screenshotPath
}
private getApplicableRules(screenshotPath: string): Array<any> {
// In production, determine applicable rules based on screenshot metadata
return Array.from(this.contentRules.values())
}
private initializeDefaultRules(): void {
const defaultRules = [
{
selector: '.timestamp, .date, [data-dynamic="timestamp"]',
strategy: 'remove' as const,
description: 'Remove timestamp elements'
},
{
selector: '.user-count, .visitor-count, [data-dynamic="count"]',
strategy: 'replace' as const,
replacement: '123',
description: 'Replace dynamic counts with static value'
},
{
selector: '.weather, .stock-price, [data-dynamic="live"]',
strategy: 'ignore' as const,
description: 'Ignore live data elements'
}
]
defaultRules.forEach(rule => {
this.contentRules.set(rule.selector, rule)
})
}
addContentRule(
selector: string,
strategy: 'remove' | 'replace' | 'ignore',
replacement?: string
): void {
this.contentRules.set(selector, { selector, strategy, replacement })
}
}
// Initialize systems
const visualRegressionDetector = new VisualRegressionDetector()
const dynamicContentHandler = new DynamicContentHandler()
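// Example usage (hypothetical selector): mask a rotating promo banner with a
// fixed placeholder before screenshots are compared
dynamicContentHandler.addContentRule('.promo-banner', 'replace', 'PLACEHOLDER')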
// CI/CD pipeline integration endpoints
app.post('/api/visual-regression/detect', async (req, res) => {
try {
const deployment: CIDeployment = req.body
if (!deployment.projectId || !deployment.buildId || !deployment.screenshots) {
return res.status(400).json({ error: 'Deployment configuration incomplete' })
}
// Get baseline screenshots
const baselineScreenshots = new Map<string, string>()
// In production, fetch from baseline storage
const report = await visualRegressionDetector.detectRegressions(deployment, baselineScreenshots)
res.json({
report,
timestamp: new Date().toISOString()
})
} catch (error) {
console.error('Visual regression detection error:', error)
res.status(500).json({ error: 'Regression detection failed' })
}
})
app.post('/api/ci-integration/setup', async (req, res) => {
try {
const { provider, config } = req.body
if (!provider || !config) {
return res.status(400).json({ error: 'Provider and config required' })
}
await visualRegressionDetector.integrateWithCI(provider, config)
res.json({
message: 'CI integration configured successfully',
provider,
timestamp: new Date().toISOString()
})
} catch (error) {
console.error('CI integration setup error:', error)
res.status(500).json({ error: 'CI integration setup failed' })
}
})
app.post('/api/ci-pipeline/run', async (req, res) => {
try {
const deployment: CIDeployment = req.body
if (!deployment.projectId || !deployment.buildId) {
return res.status(400).json({ error: 'Deployment information incomplete' })
}
const result = await visualRegressionDetector.runCIPipeline(deployment)
res.json({
pipelineResult: result,
timestamp: new Date().toISOString()
})
} catch (error) {
console.error('CI pipeline execution error:', error)
res.status(500).json({ error: 'CI pipeline execution failed' })
}
})
app.get('/api/regression-history/:projectId', (req, res) => {
const { projectId } = req.params
const { builds } = req.query
const history = visualRegressionDetector.getRegressionHistory(projectId, builds ? Number(builds) : undefined)
res.json({
history,
count: history.length,
timestamp: new Date().toISOString()
})
})
app.get('/api/regression-trends/:projectId', (req, res) => {
const { projectId } = req.params
const trends = visualRegressionDetector.analyzeRegressionTrends(projectId)
res.json({
trends,
projectId,
timestamp: new Date().toISOString()
})
})
// Dynamic content handling endpoints
app.post('/api/dynamic-content/preprocess', async (req, res) => {
try {
const { screenshotPath, rules } = req.body
if (!screenshotPath) {
return res.status(400).json({ error: 'Screenshot path required' })
}
const processedPath = await dynamicContentHandler.preprocessScreenshotForComparison(screenshotPath, rules)
res.json({
originalPath: screenshotPath,
processedPath,
timestamp: new Date().toISOString()
})
} catch (error) {
console.error('Dynamic content preprocessing error:', error)
res.status(500).json({ error: 'Preprocessing failed' })
}
})
app.post('/api/dynamic-content/rules', (req, res) => {
try {
const { selector, strategy, replacement } = req.body
if (!selector || !strategy) {
return res.status(400).json({ error: 'Selector and strategy required' })
}
dynamicContentHandler.addContentRule(selector, strategy, replacement)
res.json({
message: 'Dynamic content rule added successfully',
rule: { selector, strategy, replacement },
timestamp: new Date().toISOString()
})
} catch (error) {
console.error('Dynamic content rule creation error:', error)
res.status(500).json({ error: 'Rule creation failed' })
}
})
console.log('Visual regression detection and CI/CD integration system initialized')
Best Practices
Essential practices for visual testing success; a small configuration sketch follows the list.
Best Practices:
- Maintain baseline versioning
- Handle dynamic content appropriately
- Set appropriate similarity thresholds
- Integrate with CI/CD pipeline
- Review and approve visual changes
- Track flaky tests and investigate their root causes
- Document expected visual changes
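As a concrete illustration of threshold and baseline choices, a minimal suite configuration for the runner above might look like this (URLs, paths, and thresholds are placeholder assumptions):
const checkoutSuite = {
  name: 'checkout-flow',
  parallel: true,
  tests: [
    {
      name: 'checkout-desktop',
      url: 'https://staging.example.com/checkout',
      viewport: { width: 1280, height: 720 },
      baselinePath: './baselines/checkout-desktop.png',
      threshold: 97 // stricter threshold for a revenue-critical page
    },
    {
      name: 'checkout-mobile',
      url: 'https://staging.example.com/checkout',
      viewport: { width: 390, height: 844 },
      baselinePath: './baselines/checkout-mobile.png',
      threshold: 95
    }
  ]
}
// visualTestRunner.runTestSuite(checkoutSuite) then reports pass/fail counts per run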
Conclusion
Automated visual testing detects UI regressions through screenshot comparison. Success requires robust comparison algorithms, baseline management, dynamic content handling, CI/CD integration, and appropriate thresholds.
Implement visual regression testing with our screenshot comparison APIs, designed to catch UI bugs before production.