The analysis object
Each analysis contains your input, the current processing status, and the scored results once the job finishes.
Key fields
| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier for the analysis. |
| project_id | string | The project this analysis belongs to. |
| status | string | Current lifecycle state: pending, processing, completed, or failed. |
| image_url | string | The URL of the image submitted for analysis. |
| score | number | Composite score from 0–100. null until status is completed. |
| breakdown | object | Per-dimension scores. null until status is completed. |
| feedback | array | List of improvement suggestions generated by the model. |
| created_at | string | ISO 8601 timestamp when the analysis was created. |
| completed_at | string | ISO 8601 timestamp when scoring finished. null if not yet complete. |
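As a sketch, a completed analysis might look like the following. Every value is illustrative, and the breakdown keys are placeholders, not the real dimension names:

```python
# Illustrative shape of a completed analysis. All values are examples,
# and the breakdown keys are placeholders for the real dimension names.
analysis = {
    "id": "an_example",
    "project_id": "proj_example",
    "status": "completed",
    "image_url": "https://example.com/screen.png",
    "score": 82,
    "breakdown": {"dimension_1": 90, "dimension_2": 74},  # placeholder keys
    "feedback": ["Example improvement suggestion."],
    "created_at": "2024-01-01T00:00:00Z",
    "completed_at": "2024-01-01T00:00:05Z",
}

# score and breakdown are null (None in Python) until status is "completed".
assert analysis["score"] is not None
```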
Analyses are immutable. Once an analysis reaches completed status, its results never change. To re-score a screen after making changes, submit a new analysis.
Analysis lifecycle
Every analysis moves through the following states in order. Poll the GET /v1/analyses/{id} endpoint or use webhooks to track progress.
pending
The analysis has been created and is queued for processing. The score and breakdown fields are null at this stage.
processing
The scoring model is actively evaluating the image. Processing typically completes within a few seconds.
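The lifecycle above can be tracked with a simple polling loop. A minimal sketch, with the HTTP call abstracted into a callable so the retry logic stands on its own (the endpoint path comes from this page; the helper names are our own):

```python
import time

def wait_for_completion(fetch_analysis, analysis_id, interval=1.0, timeout=30.0):
    """Poll GET /v1/analyses/{id} until the analysis reaches a final state.

    fetch_analysis(analysis_id) -> dict is any callable that performs the
    HTTP GET and returns the parsed analysis object.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        analysis = fetch_analysis(analysis_id)
        # completed and failed are the two terminal states.
        if analysis["status"] in ("completed", "failed"):
            return analysis
        time.sleep(interval)  # still pending or processing; try again
    raise TimeoutError(f"analysis {analysis_id} did not finish in {timeout}s")
```

Since processing typically completes within a few seconds, a short interval and timeout are usually sufficient; webhooks avoid polling entirely.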
Submitting an analysis
To create an analysis, send a POST request to /v1/analyses with your project ID and the image you want to score.
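A minimal sketch of that request using Python's standard library. The endpoint path and body fields come from this page; the base URL, bearer-token auth scheme, and helper names are assumptions to check against your account settings:

```python
import json
import urllib.request

API_BASE = "https://api.screenscore.ai"  # hypothetical base URL

def build_request(api_key: str, project_id: str, image_url: str) -> urllib.request.Request:
    """Assemble POST /v1/analyses with the fields documented above."""
    body = json.dumps({"project_id": project_id, "image_url": image_url}).encode()
    return urllib.request.Request(
        f"{API_BASE}/v1/analyses",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

def create_analysis(api_key: str, project_id: str, image_url: str) -> dict:
    with urllib.request.urlopen(build_request(api_key, project_id, image_url)) as resp:
        return json.load(resp)  # the new analysis object, status "pending"
```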
Understanding the results
The score field holds the composite 0–100 value. The breakdown object contains a score for each of the five dimensions the model evaluates. The feedback array contains human-readable improvement suggestions derived from the lowest-scoring dimensions.
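A sketch of reading those results from a completed analysis. The field names match the table above; the dimension keys in the example are placeholders:

```python
def summarize(analysis: dict) -> str:
    """Report the composite score and the weakest dimension, if scored."""
    if analysis["status"] != "completed":
        return f"not scored yet (status: {analysis['status']})"
    # Feedback is derived from the lowest-scoring dimensions,
    # so the minimum of breakdown points at what to fix first.
    weakest = min(analysis["breakdown"], key=analysis["breakdown"].get)
    return f"score {analysis['score']}/100; weakest dimension: {weakest}"
```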
To learn what each dimension measures and how to interpret score ranges, see Understanding ScreenScore AI scores.