Google Ads Audit Methodology: What Actually Gets Checked

Most “audit tools” are really just a scorecard wrapped around three or four quality-score checks. A serious audit reads the entire account architecture, flags the patterns that waste money, and gives you a prioritized fix list. This is the methodology we built Perfoads around — and the manual version if you want to run it yourself.

Vilo · Founder, Perfoads · 15 min read

I built the Perfoads audit engine after auditing hundreds of accounts by hand and getting tired of templates that missed the stuff that actually bleeds money.

Why most Google Ads audits are shallow

The typical free audit tool runs five or six checks: wasted spend on broad match, low Quality Score keywords, missing negative keywords, low ad strength, generic landing pages. Useful, but surface-level. The actual failure modes live a layer deeper — in the account structure, the conversion-tracking graph, the bid-strategy / audience-signal mismatch, the ad-extension coverage patterns that heuristic flags never catch.

A good audit does three things a superficial audit does not:

  1. Reads the entire account as a system, not a checklist of components.
  2. Branches by business type — ecommerce, lead gen, local service all have different failure modes, benchmarks, and structural expectations.
  3. Produces a prioritized fix list with dollar impact, not a color-coded scorecard.

The 2-pass architecture

Perfoads runs every audit through two separate AI passes. The separation exists because single-pass LLM audits consistently sacrifice either breadth (skip categories to stay in context) or depth (run shallow across everything). Two passes let each one specialize.

Pass 1 — structural analysis

Claude Opus reads the account data and produces:

  • A score (0–100) for every category, mapped to a letter grade.
  • A findings list with severity levels: critical, warning, info.
  • For each finding: a plain-English summary, the technical detail, and a self-service-difficulty flag.
  • An estimated monthly waste or opportunity value, in the account's own currency.

Pass 1 covers every category but intentionally does not produce an executive brief or ad-copy-deep-dive. Those belong to Pass 2.
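The Pass 1 output described above can be sketched as a small data model. The field names and the grade cutoffs below are illustrative assumptions, not the actual Perfoads schema:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    summary: str           # plain-English summary
    detail: str            # technical detail
    severity: str          # "critical" | "warning" | "info"
    self_service: bool     # can the account owner fix this alone?
    monthly_impact: float  # estimated waste/opportunity, account currency

@dataclass
class CategoryResult:
    score: int             # 0-100
    grade: str             # letter grade derived from the score
    findings: list[Finding] = field(default_factory=list)

def to_grade(score: int) -> str:
    """One plausible 0-100 -> letter mapping (assumed cutoffs, not Perfoads')."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

Keeping the finding as a typed record rather than free text is what makes the later prioritization step (sort by `monthly_impact`, filter by `severity`) mechanical.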

Pass 2 — deep dive + executive brief

Pass 2 re-reads the account with a 32K-token response budget and a narrower brief: ad-copy quality at headline level, extension coverage and gaps, keyword theming, and the top-line executive brief (one-sentence summary, grade explanation, top-three actions, total monthly waste, total monthly opportunity, strengths). A Pass 2 failure is non-fatal — the audit still returns with Pass 1 data only. This separation is why the ad-copy analysis feels substantive rather than generic.

The five core analysis categories

Every account gets audited against the same five pillars, regardless of business type:

1. Account structure

Campaign and ad-group organization. We check theme coherence, whether brand and non-brand are segmented (the single highest-impact structural issue — see the brand vs non-brand segmentation guide), whether ad groups are keyword-tight or over-broad, whether Shopping is priority-layered, whether PMax has proper exclusions and is not cannibalizing Search.

2. Campaign performance

Efficiency and ROI analysis — CPC trends, conversion-rate distribution across campaigns, outlier identification, day-of-week and hour-of-day patterns. Budget pacing checks: which campaigns are capped, which are under-utilizing budget, whether budget follows performance or history.
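The budget-pacing check ("does budget follow performance or history?") reduces to two comparisons per campaign. A minimal sketch, assuming a hypothetical report row with `roas`, `search_lost_is_budget` (fraction of impression share lost to budget), and `budget_utilization` fields:

```python
def flag_budget_misallocation(campaigns, is_lost_threshold=0.10):
    """Flag campaigns where budget follows history, not performance.

    `campaigns`: list of dicts with assumed keys: name, roas,
    search_lost_is_budget (0-1), budget_utilization (0-1).
    """
    avg_roas = sum(c["roas"] for c in campaigns) / len(campaigns)
    capped_winners, idle_losers = [], []
    for c in campaigns:
        # Strong performer starved by its budget cap
        if c["roas"] > avg_roas and c["search_lost_is_budget"] > is_lost_threshold:
            capped_winners.append(c["name"])
        # Weak performer not even spending what it has
        elif c["roas"] < avg_roas and c["budget_utilization"] < 0.60:
            idle_losers.append(c["name"])
    return capped_winners, idle_losers
```

A campaign in the first list should usually receive budget from a campaign in the second before any bid changes happen.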

3. Ad quality

RSA headline and description analysis: pinning logic, combinational coverage, CTA placement, message-match to keyword theme. Ad-strength distribution. Ad-extension coverage across sitelinks, callouts, structured snippets, and call extensions. Missing extensions are one of the most undervalued findings — adding them costs nothing and consistently moves CTR.

4. Keyword strategy

Match-type distribution (how much of spend is on broad, phrase, exact), quality score spread across campaigns, negative-keyword coverage, search-term report analysis for waste patterns, coverage gaps vs. the account's keyword universe.

5. Budget and bidding

Bid-strategy alignment with account maturity (Target ROAS is a mistake on a two-month-old account with eight conversions), Smart Bidding guard checks, budget utilization per campaign, device / location / audience bid-modifier sanity, and (for accounts ready for it) marginal-ROAS analysis on the next dollar spent.
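The maturity guard in the parenthetical above can be expressed as a simple rule. The thresholds here are illustrative rules of thumb, not Google's official minimums, and should be tuned per vertical:

```python
def troas_readiness(conversions_30d: int, account_age_days: int,
                    min_conversions: int = 30, min_age_days: int = 90) -> str:
    """Rough guard for whether Target ROAS is defensible (assumed thresholds)."""
    if conversions_30d < min_conversions or account_age_days < min_age_days:
        return "not ready: stay on Maximize Conversions / Conversion Value"
    if conversions_30d < 2 * min_conversions:
        return "borderline: set a conservative target near the trailing ROAS"
    return "ready: set tROAS within ~10-20% of the trailing 30-day ROAS"
```

The two-month-old account with eight conversions from the example above lands squarely in the first branch.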

Business-type branches

The categories above are universal. The benchmarks, the expected structure, and the failure modes are not. Perfoads routes every audit into one of five business-type branches:

  • Ecommerce — Shopping + PMax-heavy, feed health central, ROAS as the primary scoreboard.
  • Lead gen (national) — form-fill or lead-volume goals, long post-click funnel.
  • Local lead gen — service-area businesses, call tracking, Local Services / map-pack interplay.
  • Local retail — physical store footfall, promotion-driven, radius-targeted.
  • Generic — B2B, SaaS, and hybrid cases that do not fit the other branches cleanly.

The business-type assignment is explicit (from questionnaire) or inferred from account signals — Shopping campaign presence, LSA leads table, the conversion-action set. Each branch carries its own prompt variant with branch-specific analysis categories layered on top of the five core ones. A local lead-gen audit checks for Local Services Ads verification, call-extension coverage during business hours, and geographic bid-modifier sanity. A national B2B audit does not.

The ecommerce 6-layer health check

Ecommerce accounts get an additional scoring layer on top of the five core categories. We look at the account across six layers that compound — each builds on the previous one:

  1. Feed health. Product data quality, disapproval rates, revenue-at-risk (revenue tied up in products with disapproval or policy flags), image/title/description completeness.
  2. Product performance. Winners (scale), sleepers (fix messaging), losers (cut), zombies (zero impression products). Count per quadrant and revenue share.
  3. Campaign structure. Shopping vs. PMax split, priority layering, catalog segmentation by margin tier, whether Search is competing with itself.
  4. Bidding and targeting. Strategy selection appropriate for data maturity, audience-signal richness, RLSA layers, new-customer acquisition bid.
  5. Competitive position. Where the catalog sits on price relative to competitors per product. Overpriced count (losing auctions), underpriced count (leaving margin on the table).
  6. Conversion funnel. Landing-page Core Web Vitals for top products, cart / checkout friction, mobile parity. Audits that skip this layer get blamed for poor Shopping ROAS when the real issue is a 6-second mobile LCP.

Each layer scores 0–100. The scores are independent — an account can have a pristine feed (layer 1: 95) and terrible bidding (layer 4: 40). The layered view forces you to address the right layer instead of retuning bids on an account with a broken feed.
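Layer 2's quadrant assignment can be sketched as a classifier over the product report. The ROAS and CTR thresholds below are illustrative, not Perfoads' actual cutoffs, and the input keys are assumed:

```python
def classify_products(products, roas_target=3.0):
    """Sort products into the four layer-2 quadrants.

    `products`: list of dicts with assumed keys: id, impressions,
    clicks, cost, revenue.
    """
    quadrants = {"winners": [], "sleepers": [], "losers": [], "zombies": []}
    for p in products:
        if p["impressions"] == 0:
            quadrants["zombies"].append(p["id"])   # no visibility at all
            continue
        roas = p["revenue"] / p["cost"] if p["cost"] else 0.0
        ctr = p["clicks"] / p["impressions"]
        if roas >= roas_target:
            quadrants["winners"].append(p["id"])   # scale these
        elif ctr >= 0.01 and roas > 0:
            quadrants["sleepers"].append(p["id"])  # traffic but weak close: fix messaging
        else:
            quadrants["losers"].append(p["id"])    # spend with nothing back: cut
    return quadrants
```

The count and revenue share per quadrant then becomes a one-line aggregation over this output.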

Build your own manual audit checklist

If you want to run this by hand, here is the minimum defensible checklist. It covers about 70% of what the Perfoads audit surfaces. The remaining 30% are the cross-cutting patterns that an LLM catches because it can hold the whole account in memory at once.

Account structure (30 minutes)

  • List every active campaign. For each: does the naming convention follow a consistent rule?
  • Brand and non-brand in separate campaigns? Brand negatives applied to non-brand?
  • Ad groups keyword-tight (single theme) or mixed-theme?
  • Shopping priority-layered (high for non-brand, low for brand)?
  • PMax: brand exclusions on? Category exclusions set?
  • Any orphan campaigns with budget but no clicks for 30+ days?

Conversion tracking (20 minutes)

  • Primary conversion action defined? Single source of truth, not duplicated.
  • Attribution model selected deliberately (not last-click by default)?
  • Enhanced conversions enabled?
  • Cross-device tracking working?
  • Any goals with zero conversions tracked that are still listed as primary?
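The first and last checks in the list above are mechanical once you have the conversion-action list. A minimal sketch, assuming a hypothetical row shape with `name`, `category`, `is_primary`, and `conversions_90d`:

```python
def audit_conversion_actions(actions):
    """Spot duplicated primaries and dead primaries in the tracking graph.

    `actions`: list of dicts with assumed keys: name, category,
    is_primary, conversions_90d.
    """
    issues = []
    primaries = [a for a in actions if a["is_primary"]]
    # Two primaries in the same category -> double-counting risk
    seen = set()
    for a in primaries:
        if a["category"] in seen:
            issues.append(f"duplicate primary in category '{a['category']}': {a['name']}")
        seen.add(a["category"])
    # Primary goals that never fire still steer Smart Bidding's optimization target
    for a in primaries:
        if a["conversions_90d"] == 0:
            issues.append(f"primary action with zero conversions: {a['name']}")
    return issues
```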

Campaign performance (40 minutes)

  • Pull the 90-day campaign report. Which campaigns are bottom-quartile on ROAS or CPA?
  • For each bottom-quartile: is it budget-capped (pull the impression-share-lost-to-budget column)?
  • Day-of-week and hour-of-day patterns — any obvious waste windows?
  • Device split — mobile conversion rate materially lower than desktop?
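The bottom-quartile pull from the first two checks above can be sketched in a few lines, assuming a hypothetical 90-day report row with `name`, `roas`, and `search_lost_is_budget`:

```python
def bottom_quartile(campaigns, metric="roas"):
    """Return bottom-quartile campaigns by a metric, with a budget-cap flag.

    `campaigns`: list of dicts with assumed keys: name, roas,
    search_lost_is_budget (fraction of impression share lost to budget).
    """
    ranked = sorted(campaigns, key=lambda c: c[metric])
    cutoff = max(1, len(ranked) // 4)
    # For each laggard, note whether a budget cap is part of the story
    return [
        (c["name"], c[metric], c["search_lost_is_budget"] > 0.10)
        for c in ranked[:cutoff]
    ]
```

A laggard that is also budget-capped is a pacing question, not a performance question — fix the budget allocation before judging the campaign.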

Ad quality (30 minutes)

  • RSA ad-strength distribution. Percentage at Good / Excellent.
  • Pinning — are headlines pinned in positions that force message coherence?
  • Extension coverage: sitelinks, callouts, structured snippets, calls (where applicable), images.
  • At least 3 ads per ad group?
  • Any ads disapproved? Any policy issues pending?

Keywords (30 minutes)

  • Match-type spend split. Broad >50% is usually a red flag unless Smart Bidding is mature.
  • QS distribution per campaign. Concentrations of QS 1–4 point to landing-page or relevance issues.
  • Search-terms report last 30 days: any irrelevant terms eating spend?
  • Negative-keyword coverage: brand negatives in non-brand, product-name negatives in generic.
  • Any keyword with $100+ spend and zero conversions in 30 days?
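The first and last checks in this list are the fastest to automate. A sketch, assuming a hypothetical keyword-report row with `text`, `match_type`, `cost_30d`, and `conversions_30d`:

```python
def keyword_red_flags(keywords):
    """Match-type spend split plus the $100-and-zero-conversions check.

    `keywords`: list of dicts with assumed keys: text, match_type,
    cost_30d, conversions_30d. Thresholds mirror the checklist above.
    """
    total = sum(k["cost_30d"] for k in keywords)
    broad_spend = sum(k["cost_30d"] for k in keywords if k["match_type"] == "BROAD")
    broad_share = broad_spend / total if total else 0.0
    zero_converters = [
        k["text"] for k in keywords
        if k["cost_30d"] >= 100 and k["conversions_30d"] == 0
    ]
    return {
        "broad_share": round(broad_share, 2),
        "broad_flag": broad_share > 0.50,  # red flag unless Smart Bidding is mature
        "zero_converters": zero_converters,
    }
```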

Budget and bidding (20 minutes)

  • Each campaign's bid strategy makes sense for its conversion volume?
  • Target ROAS / Target CPA targets set near reality, not aspirational?
  • Device, location, audience bid modifiers — any set to −100% (silent traffic kill)?
  • Shared budgets actually shared, or hidden spend concentration?
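The silent-traffic-kill check is a one-pass scan once the modifiers are exported. The row shape here is an assumption (modifier value expressed as a fraction, so −1.0 means −100%):

```python
def find_silent_kills(modifiers):
    """Find bid modifiers set to -100%, which silently zero out a segment.

    `modifiers`: list of dicts with assumed keys: campaign,
    dimension ("device" / "location" / "audience"), value (-1.0 = -100%).
    """
    return [
        (m["campaign"], m["dimension"])
        for m in modifiers
        if m["value"] <= -1.0
    ]
```

These are worth a second look even when intentional — a −100% mobile modifier set two years ago outlives the reason it was set.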

Total: about three hours for a well-organized account. Five for a messy one. The deliverable is a list of findings with estimated monthly impact — not a color-coded scorecard. Scorecards tell you there is a problem. Numbers tell you which problem to fix first.

When AI audit beats manual — and when it does not

Where AI wins

  • Cross-category pattern detection. Seeing that low QS in Campaign A correlates with the landing page also used by Campaign D, which also has a disapproval. Humans spot these after hours of digging; an LLM spots them in seconds.
  • Scale. 47-factor coverage, consistently, without fatigue.
  • Dollar quantification. Estimating “this issue costs you ~$1,800/mo” from search-term data + CPC averages is tedious to do by hand and fast to do with a prompt.
  • Business-type benchmarking. Knowing that an ecommerce account with 28% of conversions from returning customers should have RLSA layers, and a lead-gen account should not.
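The dollar-quantification step above can be sketched from the search-terms report alone. The row keys are assumptions, and `is_relevant` stands in for a human or LLM relevance call on each term:

```python
def estimate_monthly_waste(search_terms, days=30):
    """Estimate monthly waste from irrelevant, non-converting search terms.

    `search_terms`: list of dicts with assumed keys: term, clicks,
    avg_cpc, conversions, is_relevant.
    """
    window_waste = sum(
        t["clicks"] * t["avg_cpc"]
        for t in search_terms
        if not t["is_relevant"] and t["conversions"] == 0
    )
    # Scale the observed window to a 30-day month
    return round(window_waste * (30 / days), 2)
```

The precision is rough by design: the point is a defensible order of magnitude ("this pattern costs ~$1,800/mo"), not an accounting figure.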

Where manual still wins

  • Creative judgment. Whether the ad copy is good — tone, brand fit, differentiation — requires a human who knows the brand.
  • Business context. “That campaign looks broken because we are running it on purpose to test a new market” is invisible to an audit.
  • Attribution debates. When stakeholders disagree about whether brand is incremental, the audit can show numbers but not resolve the political question.

The realistic stack is both. Run the AI audit for coverage and quantification. Then a human reviews the top-three actions with brand and business context before executing.

What the output looks like

A Perfoads audit returns a structured report with:

  • Overall grade (A–F) with a one-sentence summary you can paste into a client email.
  • Per-category grades with the top 1–3 findings per category.
  • For each finding: plain-English summary, technical detail, self-service-difficulty flag, estimated monthly waste or opportunity in the account's currency.
  • The top-three-actions prioritized list with expected savings / opportunity and effort estimate.
  • Strengths — what is working — so the fix plan does not break things that are fine.
  • Executive brief — three paragraphs ready for the Monday stakeholder email.

The same methodology underpins the shopping ads audit checklist for ecommerce accounts, where the 6-layer health check replaces the generic ad-quality pillar.

See it run on your own account

The methodology in this guide is exactly what Perfoads' audit runs. Connect a Google Ads account in two clicks, wait about 15 minutes for the 2-pass analysis, download the report.

Start a Perfoads audit