An analysis is the fundamental unit of work in ScreenScore AI. When you submit a screen — whether by providing an image URL or uploading a file — the API creates an analysis object that tracks the job from submission through completion. Once the analysis is complete, the object holds the full score breakdown, dimension scores, and actionable feedback for that screen.

The analysis object

Each analysis contains your input, the current processing status, and the scored results once the job finishes.
```json
{
  "id": "ana_01j9xyz",
  "project_id": "proj_01j9abc",
  "status": "completed",
  "image_url": "https://example.com/screenshot.png",
  "score": 78,
  "breakdown": {
    "visual_clarity": 82,
    "visual_hierarchy": 75,
    "color_contrast": 88,
    "cta_effectiveness": 71,
    "brand_consistency": 74
  },
  "feedback": [
    "Increase button contrast",
    "Reduce text density in the hero section"
  ],
  "created_at": "2024-01-15T10:30:00Z",
  "completed_at": "2024-01-15T10:30:07Z"
}
```

Key fields

| Field | Type | Description |
| --- | --- | --- |
| `id` | string | Unique identifier for the analysis. |
| `project_id` | string | The project this analysis belongs to. |
| `status` | string | Current lifecycle state: `pending`, `processing`, `completed`, or `failed`. |
| `image_url` | string | The URL of the image submitted for analysis. |
| `score` | number | Composite score from 0–100. `null` until status is `completed`. |
| `breakdown` | object | Per-dimension scores. `null` until status is `completed`. |
| `feedback` | array | List of improvement suggestions generated by the model. |
| `created_at` | string | ISO 8601 timestamp when the analysis was created. |
| `completed_at` | string | ISO 8601 timestamp when scoring finished. `null` if not yet complete. |
Analyses are immutable. Once an analysis reaches completed status, its results never change. To re-score a screen after making changes, submit a new analysis.
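As a minimal sketch of working with these fields, the check below parses the example object and tests whether results are ready; the `is_scored` helper is illustrative, not part of any official SDK, and relies only on the documented rule that `score` is `null` until the analysis is `completed`:

```python
import json

# The completed analysis object from the example above (abridged).
raw = '''
{
  "id": "ana_01j9xyz",
  "status": "completed",
  "score": 78,
  "breakdown": {
    "visual_clarity": 82,
    "visual_hierarchy": 75,
    "color_contrast": 88,
    "cta_effectiveness": 71,
    "brand_consistency": 74
  },
  "completed_at": "2024-01-15T10:30:07Z"
}
'''

def is_scored(analysis: dict) -> bool:
    # score and breakdown stay null (None in Python) until completed.
    return analysis["status"] == "completed" and analysis["score"] is not None

analysis = json.loads(raw)
print(is_scored(analysis))  # True
```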

Analysis lifecycle

Every analysis moves through the following states in order. Poll the GET /v1/analyses/{id} endpoint or use webhooks to track progress.
1. `pending`: The analysis has been created and is queued for processing. The `score` and `breakdown` fields are `null` at this stage.
2. `processing`: The scoring model is actively evaluating the image. Processing typically completes within a few seconds.
3. `completed`: Scoring is finished. The `score`, `breakdown`, `feedback`, and `completed_at` fields are populated and will not change.
If an analysis reaches failed status, the score and breakdown fields will remain null. Check the error field on the object for a machine-readable reason code. Common causes include an inaccessible image URL or an unsupported file format.
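The polling loop described above can be sketched as follows. The HTTP client is left up to you: `fetch` stands in for any callable that GETs `/v1/analyses/{id}` and returns the decoded analysis object, and the timeout and interval values are illustrative defaults, not documented limits:

```python
import time

def wait_for_analysis(fetch, analysis_id, timeout=30.0, interval=1.0):
    """Poll until the analysis reaches a terminal state.

    `fetch` is any callable taking an analysis ID and returning the
    decoded analysis object (e.g. a thin wrapper over your HTTP client).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        analysis = fetch(analysis_id)
        # completed and failed are the two terminal states.
        if analysis["status"] in ("completed", "failed"):
            return analysis
        time.sleep(interval)
    raise TimeoutError(f"analysis {analysis_id} did not finish in {timeout}s")
```

For production use, webhooks avoid polling entirely; this loop is mainly useful in scripts and tests.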

Submitting an analysis

To create an analysis, send a POST request to /v1/analyses with your project ID and the image you want to score.
```bash
curl -X POST https://api.screenscoreai.com/v1/analyses \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "proj_01j9abc",
    "image_url": "https://example.com/screenshot.png"
  }'
```
For the complete request schema, supported image formats, and file upload instructions, see the Create analysis API reference.
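The same request can be built in Python with only the standard library. This is a sketch of the request shown above, not an official client; substitute your real API key before sending:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # replace with your real key

# Request body matching the curl example above.
body = json.dumps({
    "project_id": "proj_01j9abc",
    "image_url": "https://example.com/screenshot.png",
}).encode()

req = urllib.request.Request(
    "https://api.screenscoreai.com/v1/analyses",
    data=body,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Sending the request returns the new analysis object in pending status:
# with urllib.request.urlopen(req) as resp:
#     analysis = json.load(resp)
```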

Understanding the results

The score field holds the composite 0–100 value. The breakdown object contains a score for each of the five dimensions the model evaluates. The feedback array contains human-readable improvement suggestions derived from the lowest-scoring dimensions. To learn what each dimension measures and how to interpret score ranges, see Understanding ScreenScore AI scores.
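Since feedback is derived from the lowest-scoring dimensions, it can be useful to rank the breakdown yourself, for example to prioritize fixes. A small sketch using the example breakdown from above:

```python
# Per-dimension scores from the example analysis object.
breakdown = {
    "visual_clarity": 82,
    "visual_hierarchy": 75,
    "color_contrast": 88,
    "cta_effectiveness": 71,
    "brand_consistency": 74,
}

# Dimensions sorted worst-first; feedback targets the front of this list.
weakest = sorted(breakdown, key=breakdown.get)
print(weakest[:2])  # ['cta_effectiveness', 'brand_consistency']
```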