When an analysis completes, ScreenScore AI returns a structured result containing an overall score, five sub-scores across key visual dimensions, and a feedback array with actionable suggestions. Understanding what these values mean helps you decide where to invest design effort for the greatest impact.
For a deeper explanation of the scoring methodology, see Scoring concepts.
Example analysis response
```json
{
  "id": "ana_01k3xyz",
  "project_id": "proj_01j9abc",
  "status": "completed",
  "created_at": "2026-04-16T10:22:00Z",
  "completed_at": "2026-04-16T10:22:09Z",
  "score": 74,
  "breakdown": {
    "visual_clarity": 81,
    "visual_hierarchy": 70,
    "color_contrast": 88,
    "cta_effectiveness": 62,
    "brand_consistency": 69
  },
  "feedback": [
    "The primary call-to-action button lacks sufficient visual weight. Increase its size or use a higher-contrast color to make it stand out.",
    "Headline and body text are competing for attention. Establish a clearer size hierarchy between the two.",
    "Brand logo placement is inconsistent with your other analyzed screens. Consider anchoring it to the top-left."
  ]
}
```
Overall score
The score field is a weighted composite of the five sub-scores, ranging from 0 to 100. Use it to compare screens at a glance or track improvement over time.
| Range | Rating |
|---|---|
| 0–40 | Needs improvement |
| 41–70 | Good |
| 71–85 | Great |
| 86–100 | Excellent |
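In code, the rating bands above reduce to a small lookup. The Python sketch below applies the inclusive boundaries from the table; the `rating` function name is ours for illustration, not part of the ScreenScore AI API.

```python
def rating(score: int) -> str:
    """Map an overall score (0-100) to its rating band.

    Boundaries follow the table above and are inclusive.
    """
    if score <= 40:
        return "Needs improvement"
    if score <= 70:
        return "Good"
    if score <= 85:
        return "Great"
    return "Excellent"


# The example response's overall score of 74 lands in the "Great" band.
print(rating(74))  # Great
```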
Sub-score dimensions
Each sub-score reflects a distinct aspect of visual performance:
- `visual_clarity` — How easily a viewer can parse the screen’s content without confusion or visual noise.
- `visual_hierarchy` — How well the layout guides the viewer’s eye from the most to least important elements.
- `color_contrast` — Whether foreground and background color combinations meet readability and accessibility standards.
- `cta_effectiveness` — How prominently and persuasively the primary call-to-action is presented.
- `brand_consistency` — How closely the screen aligns with the visual patterns of other screens in your project.
When optimizing, start with your lowest sub-score. Improving a weak dimension typically produces a larger gain in the overall score than refining a dimension that already scores well.
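As a sketch of that prioritization, the snippet below takes the breakdown object from the example response and picks out the weakest dimension to work on first:

```python
# The breakdown object from the example analysis response above.
breakdown = {
    "visual_clarity": 81,
    "visual_hierarchy": 70,
    "color_contrast": 88,
    "cta_effectiveness": 62,
    "brand_consistency": 69,
}

# The dimension with the lowest sub-score is usually the best place to start.
weakest = min(breakdown, key=breakdown.get)
print(weakest, breakdown[weakest])  # cta_effectiveness 62
```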
Feedback array
The feedback array contains plain-language suggestions generated for the specific screen. Each item describes a concrete issue and suggests a direction for improvement. Feedback is always returned with completed analyses; no separate request is needed.
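A minimal sketch of consuming the feedback, assuming a JSON response shaped like the example above. The trimmed payload and the guard on the status field are illustrative; the names of non-completed status values are not documented here.

```python
import json

# A trimmed, hypothetical version of the example analysis response.
raw = '{"status": "completed", "score": 74, "feedback": ["Increase the CTA contrast."]}'

analysis = json.loads(raw)

# Feedback ships with every completed analysis, so only the status
# needs checking before reading the array.
if analysis["status"] == "completed":
    for i, item in enumerate(analysis["feedback"], start=1):
        print(f"{i}. {item}")
```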
Dashboard overlays
If you access results through the ScreenScore AI dashboard, each analysis includes visual overlays that highlight the regions contributing to low sub-scores. These overlays make it faster to locate problem areas without cross-referencing the JSON response manually.