Flintmere

Methodology

How we compute the score and maintain the standard.

Seven pillars, weighted to sum to 100. Source-cited regulatory references. Half-yearly publication of the food catalog standard, with a public change log between publications. Last updated 2026-05-02.

Composite weight: 100
Public-source pillars: 55%
Install-gated: 45%

The seven pillars

Each pillar measures a different way a catalog can break.

The composite score is a weighted average of the seven pillars below.

Public scans read four pillars from public sources — identifiers, titles, consistency, and crawlability — and report a partial score against 55% of the weight. No Shopify install is required.

The remaining 45% — attributes, category mapping, and checkout eligibility — is reachable only after the Shopify app is installed, since those signals come from authenticated Admin-API reads.

The seven pillars and their weights.
Pillar                  Weight   Install-gated
Identifiers             20%      No
Attributes              20%      Yes
Titles                  15%      No
Mapping                 15%      Yes
Consistency             15%      No
Checkout eligibility    10%      Yes
Crawlability            5%       No
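Mechanically, the composite is a weighted sum of per-pillar fractional scores, and a public scan sums only the four public pillars against their 55-point share. A minimal sketch under our own conventions (the pillar keys and the 0–1 score scale are illustrative, not Flintmere's actual API):

```python
WEIGHTS = {
    "identifiers": 20, "attributes": 20, "titles": 15, "mapping": 15,
    "consistency": 15, "checkout_eligibility": 10, "crawlability": 5,
}
# The four pillars readable from public sources; they carry 55 of the 100 points.
PUBLIC = {"identifiers", "titles", "consistency", "crawlability"}

def composite(scores: dict[str, float]) -> float:
    """Full score out of 100. Each pillar score is a fraction in [0, 1]."""
    return sum(WEIGHTS[p] * scores[p] for p in WEIGHTS)

def public_partial(scores: dict[str, float]) -> float:
    """Public-scan score out of 55: only the pillars that need no app install."""
    return sum(WEIGHTS[p] * scores[p] for p in PUBLIC)
```

A perfect catalog scores 100 on the composite and 55 on the public partial; the gap between the two is exactly the install-gated share.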

Evidence — same product, two records

What the seven pillars actually look at, on a single product record.

Passes: how a strong record looks

  • GTIN-13: 5012345678900
  • Allergens: contains: nuts (structured)
  • Brand: Glenarbor
  • Category: Food, Beverages > Honey
  • Title: Yorkshire wildflower honey 350g

Pillar scores: Identifiers 18/20 · Attributes 19/20 · Titles 14/15

Fails: how a record fails the checks

  • GTIN-13: absent (identifiers/checksum fails)
  • Allergens: <p>contains nuts</p> (buried in HTML, not parseable)
  • Brand: no brand string
  • Category: Food (too shallow, GMC rejects)
  • Title: BUY NOW! Best honey deal!! (fluff demoted by quality classifier)

Pillar scores: Identifiers 4/20 · Attributes 6/20 · Titles 5/15

Synthetic example: the same physical jar of honey, described two ways in catalog data. The failing record scores 15/55 against the strong record's 51/55, losing ~36 points across Identifiers, Attributes, and Titles before the remaining pillars are even scored.

20% of score

Identifiers.

What it measures

Whether your variants carry valid GTINs (with checksum verification), brand names, MPNs, and unique SKUs. Sub-checks: barcode presence on every variant (45% of pillar), GTIN checksum validity (30%), brand presence (10%), SKU presence and uniqueness (15%).

Why it matters

Google Shopping, Amazon Fresh, Ocado, and emerging AI shopping channels all verify against GS1’s database. A missing or invalid identifier is the most common reason a product is suppressed from a feed.

Sources

GS1 General Specifications (gs1.org), GTIN-13/GTIN-14 checksum algorithm (Mod-10).

Not measured

Whether the GTIN you have was purchased from your local GS1 office (we cannot verify provenance from public data).
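The Mod-10 checksum cited above is mechanical: from the left, odd positions weigh 1 and even positions weigh 3, and the final digit must close the sum to a multiple of ten. A minimal GTIN-13 sketch (GTIN-14 uses the same algorithm over one more digit):

```python
def gtin13_checksum_ok(gtin: str) -> bool:
    """Mod-10 check for a 13-digit GTIN. The 13th digit is the check digit."""
    if len(gtin) != 13 or not gtin.isdigit():
        return False
    digits = [int(c) for c in gtin]
    # 0-indexed: even indices are odd positions (weight 1), odd indices weight 3.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - total % 10) % 10 == digits[12]
```

The evidence example's 5012345678900 passes this check; change the final digit and it fails, which is the "GTIN checksum validity" sub-check worth 30% of the pillar.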

20% of score — install-gated

Attributes.

What it measures

Whether your products carry the structured attributes a food catalog needs: allergens (FSA Big-14), nutrition declarations (EU 1169/2011), provenance claims (PDO/PGI/TSG), certifications (organic, Fairtrade, RSPCA, etc.), and ingredient lists at the product-data level rather than only in description text.

Why it matters

AI shopping channels and merchant-side filters work on structured fields, not on free-text descriptions. A product with allergens buried in HTML is invisible to a query like "show me dairy-free granola."

Sources

FSA Big-14 allergen list (food.gov.uk), EU Regulation 1169/2011 (Food Information to Consumers), DEFRA UK GI register, certification-body schemas.

Not measured

Marketing claims that are not regulatory ("artisan," "premium," "small-batch"), and image-based claims that are not also present in structured data.

15% of score

Titles.

What it measures

Whether your product titles are the right length (<= 150 chars), free of marketing fluff, structured (brand + product type), and whether your descriptions hit the >= 200-character minimum with structural markup and use-case language.

Why it matters

A title with "Buy Now! Best Value!" gets demoted by Google Shopping’s quality classifier. A description without paragraph structure is harder for AI agents to extract. The shape of the prose decides whether a product is shown.

Sources

Google Merchant Center title requirements, Schema.org Product type, observed quality-classifier behaviour from public Google Shopping documentation.

Not measured

SEO ranking, conversion rate from titles, image quality, A/B-tested copy variants.
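The structural checks reduce to a few mechanical rules. A sketch, with the caveat that the fluff vocabulary below is our illustration; the real quality classifier is Google's and opaque:

```python
import re

# Illustrative fluff patterns; not Google's actual classifier vocabulary.
FLUFF = re.compile(r"buy now|\bbest\b|\bdeal\b|!{2,}", re.I)

def title_issues(title: str, description: str) -> list[str]:
    """Flag structural problems in a title/description pair."""
    issues = []
    if len(title) > 150:
        issues.append("title over 150 chars")
    if FLUFF.search(title):
        issues.append("marketing fluff in title")
    if len(description) < 200:
        issues.append("description under 200 chars")
    return issues
```

The evidence records behave as expected: "Yorkshire wildflower honey 350g" raises no title issue, while "BUY NOW! Best honey deal!!" trips the fluff rule.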

15% of score — install-gated

Mapping.

What it measures

Whether your products are mapped to the right google_product_category, whether the mapping is consistent across variants, and whether your category choice matches what the product actually is.

Why it matters

A misnamed category sends your product to the wrong feed shelf. "Beverages > Coffee" vs "Pantry > Coffee" is the difference between appearing in the search a coffee buyer actually runs.

Sources

Google Merchant Center taxonomy (published category list), GMC mapping requirements.

Not measured

Marketplace-specific categorisations beyond Google (Amazon Fresh and Ocado have their own taxonomies; we read GMC as the lingua franca).
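Depth and cross-variant consistency are both checkable mechanically. A sketch against a GMC-style `>`-delimited path (the two-level minimum is our illustrative floor, not a published GMC threshold):

```python
def category_issues(variant_categories: list[str]) -> list[str]:
    """Flag shallow or inconsistent google_product_category mappings."""
    issues = []
    if len(set(variant_categories)) > 1:
        issues.append("variants mapped to different categories")
    for cat in set(variant_categories):
        depth = len([seg for seg in cat.split(">") if seg.strip()])
        if depth < 2:  # "Food" alone is too shallow; GMC rejects it
            issues.append(f"too shallow: {cat!r}")
    return issues
```

"Food, Beverages > Honey" clears the depth floor; the failing record's bare "Food" does not.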

15% of score

Consistency.

What it measures

Whether image URLs resolve, whether images carry alt-text, whether stock status is consistent across the variants of a product, and whether your published-status fields match what is actually live on each channel.

Why it matters

A 404 on a product image is a hard demotion across every channel. A variant marked in-stock while the parent is out-of-stock is a returns risk. Cross-channel parity is the trust commitment.

Sources

HTTP 200/4xx/5xx response checks, Schema.org ImageObject validation, Shopify Admin API published-status fields.

Not measured

Image aesthetic quality. Whether the alt-text is meaningful (we measure presence, not semantic accuracy).

10% of score — install-gated

Checkout eligibility.

What it measures

Whether the product can complete a checkout flow on the channel it claims to ship to. Tax registration, shipping origin, age-restriction handling, alcohol licensing, allergen-disclosure regulatory compliance.

Why it matters

A product that ranks beautifully but cannot complete checkout is worse than not appearing — it spends ad budget without conversion. The checkout-eligibility pillar is the gate from "discoverable" to "purchasable."

Sources

HMRC tax-registration rules, UK alcohol licensing (Licensing Act 2003), age-restriction product schemas, shipping-zone declarations.

Not measured

Per-merchant payment-processor approval (separate ML risk-score signal). Customer-side bank declines (out of our scope).
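Reduced to a gate, the pillar asks whether every claimed channel can actually complete a sale. A toy rule set (the field and rule names are ours; the real checks read authenticated Admin-API and registration data):

```python
def checkout_eligible(product: dict) -> tuple[bool, list[str]]:
    """Return (eligible, blockers) for a product's claimed checkout flow."""
    blockers = []
    if not product.get("tax_registered"):
        blockers.append("no tax registration for shipping destination")
    if product.get("contains_alcohol") and not product.get("alcohol_licence"):
        blockers.append("alcohol without Licensing Act 2003 licence")
    if product.get("age_restricted") and not product.get("age_gate"):
        blockers.append("age-restricted product without age-verification handling")
    return (not blockers, blockers)
```

Any single blocker moves a product from "discoverable" back out of "purchasable", which is why this pillar gates the last 10% of the score.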

5% of score

Crawlability.

What it measures

Whether your robots.txt allows the AI agents that increasingly arbitrate product discovery (GPTBot, ClaudeBot, PerplexityBot, etc.), whether your sitemap is present and references product URLs, and whether you publish an llms.txt artefact for AI-agent guidance.

Why it matters

Discovery is shifting from search engines to AI agents. The agents that aren’t allowed to crawl your store can’t recommend your products. The crawlability pillar measures whether you’re ready for that.

Sources

IETF robots.txt specification, Schema.org sitemap conventions, llms.txt proposed convention (llmstxt.org).

Not measured

AI-agent ranking signal (each agent uses its own opaque scoring). Whether the agent likes your prose (we measure access, not preference).
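The robots.txt half of this check is mechanical and reproducible with the Python standard library. The sample policy below is hypothetical, showing a store that blocks GPTBot while allowing everything else:

```python
from urllib import robotparser

SAMPLE_ROBOTS = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def agent_can_crawl(robots_txt: str, agent: str, url: str) -> bool:
    """Parse a robots.txt body and ask whether an agent may fetch a URL."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)
```

Under that policy, GPTBot is locked out of every product URL while ClaudeBot is admitted; the pillar would score the store down for the blocked agent. Access is what is measured here, not whether any agent actually recommends the product.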

The food catalog standard

What the standard actually is.

Catalog data, made of physical things

Version: v1
Publication target: Q3 2026
Status: subdomain provisioned

The food catalog standard is a JSON Schema plus a human-readable spec. It defines what a complete food product record looks like for UK Shopify merchants — fields, types, allowed values, regulatory citations, and version history.

One standard, five channels: Google Merchant Center, Amazon Fresh, Ocado, Deliveroo, and the emerging AI shopping agents. It publishes at standards.flintmere.com/food/v1.

Why a standard, not a checklist: a checklist is a list of things to do; a standard is a versioned, dated, citation-backed contract. A standard can be referenced by trade journalists, regulatory consultants, and other vertical PIM tools. A checklist cannot. The standard is the moat we are building, not the scanner.

Until Q3 2026, this page describes the artefact; the artefact itself does not yet exist in published form. We will not pretend otherwise.

Data sources

Every claim links to its primary source.

We read from primary regulators, not from second-hand summaries. If we cite a rule, the citation links to the regulator’s own published page.

Maintenance cadence

How “we maintain it” works.

Half-yearly publication cadence. Versions: v0.x Mar 2026, v1 Target Q3 2026, v1.1 Mar 2027, v2 Sep 2027, v2.1 Mar 2028. v1 is pending publication (target Q3 2026); v0.x is the operator-internal pre-release.
  • Cadence: half-yearly publication of a new standard version (v1, v1.1, v2). This is the load-bearing commitment.
  • Between publications: a public change log tracks every regulatory update we observe between major publications. Each entry records the source URL, observed date, scope of change, and the action we took.
  • Monitoring: an automated cron job monitors regulator RSS feeds, the EU Official Journal, DEFRA updates, and certification-body announcements. Surfacing is AI-assisted (Vertex AI / Gemini 2.5 Pro on EU residency); decisions are human.
  • Review: every change is reviewed by the council's Regulatory Affairs seat before merging into the public change log. The seat holds a binding veto on standards-publication accuracy.
  • Versioning: major versions (v1, v2) are breaking; minor versions (v1.1, v1.2) are additive. Every published version stays accessible at its versioned URL. Breaking changes never silently rewrite history.

Limitations

What the score does not measure.

Being explicit about what we do not measure is what separates an honest score from a black-box marketing number. The Flintmere score does not measure:

  • Marketing copy quality, tone-of-voice, or brand strength.
  • SEO ranking outside identifier and title structural concerns.
  • Image aesthetic quality (we measure presence, alt-text, and structured-data linkage; we do not judge composition or lighting).
  • Conversion-rate optimisation. A high score does not guarantee sales; it reduces the rate at which catalogs lose impressions to data quality.
  • Customer-service quality, returns handling, fulfilment speed.
  • The truthfulness of regulatory claims (we measure presence and structure of declarations; we do not lab-test the product).

Things we cannot yet measure but are tracked as next-quarter targets:

  • Real-time GMC suppression status (requires merchant authentication; on roadmap).
  • Amazon Fresh listing status (no public API; partnership track).
  • AI-shopping-channel inclusion (channels not standardised across providers).

Conflicts of interest

What we sell and what we don’t.

  • Flintmere is not affiliated with GS1. We do not sell GTINs. We route merchants to their local GS1 office (GS1 UK for UK-based merchants).
  • Flintmere does not take affiliate commissions from certification bodies, regulatory consultants, GS1 offices, or platform integrations. We monetise via subscriptions, audits, and embedded apps only.
  • Flintmere is a trading name of Eazy Access Ltd, Companies House 13205428, registered office 71–75 Shelton Street, Covent Garden, London, WC2H 9JQ. Accountable director: Abdur-Rahman Morris. Eazy Access Ltd is not VAT-registered, so prices shown are the full price.
  • The standard is published openly. Anyone may cite it without permission; anyone may build a competing scoring tool against the same regulatory sources. We do not own the regulations; we curate the structure.

How to challenge a score

If we got it wrong, we rewrite.

Send a message via our contact form with your store, the disputed pillar, and your reasoning. We respond within five business days with the underlying data we read and the rule we applied.

If we agree we got it wrong, the score updates and the underlying rule updates. The change appears in the public change log so other merchants benefit from your challenge. If we disagree, the rule and the reasoning stay published; you can decide whether the disagreement is material.

Now apply it to your store.

You read how we measure. The concierge audit walks the seven pillars across your catalog product by product and lands a per-product fix plan in three working days — from £197. Or run a free scan first and see the four public-source pillars in 60 seconds.

Have a regulatory or methodology question? Talk to us — we route methodology queries to regulatory affairs.