Need help understanding sudden drop in App Store reviews

I recently noticed a sudden drop in my App Store reviews and ratings, even though I haven’t pushed any major updates or changed key features. This is hurting my app’s visibility and downloads. Has anyone dealt with missing or filtered App Store reviews, and what steps did you take to figure out what was going on or fix it?

I have had this happen twice on iOS, both times for different reasons. Here is what I would check step by step.

  1. Check ratings region by region
    • Go to App Store Connect → Analytics → Metrics → Ratings
    • Break it down by country and by version
    • Look for a date where the ratings count or average rating shifts sharply

    If it drops only in some countries, it is often due to a local bug, payment issue, or store featuring change.
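    If you export the daily rating counts (App Store Connect lets you download most metrics as CSV), a quick script can flag where the cliff starts. Rough sketch, stdlib only; the window and threshold numbers are made up, tune them to your volume:

    ```python
    from statistics import mean

    def find_rating_cliffs(daily_counts, window=7, drop_ratio=0.5):
        """Flag days where new-rating volume falls below `drop_ratio`
        times the trailing `window`-day average.

        daily_counts: list of (date_str, count) tuples, oldest first.
        Returns the dates where volume sits below the threshold.
        """
        cliffs = []
        for i in range(window, len(daily_counts)):
            baseline = mean(c for _, c in daily_counts[i - window:i])
            date, count = daily_counts[i]
            if baseline > 0 and count < baseline * drop_ratio:
                cliffs.append(date)
        return cliffs

    # Toy series: steady ~40 ratings/day, then volume collapses on 2024-03-10
    series = [(f"2024-03-{d:02d}", 40) for d in range(1, 10)]
    series += [(f"2024-03-{d:02d}", 12) for d in range(10, 15)]
    print(find_rating_cliffs(series))  # first flagged date: 2024-03-10
    ```

    Run it once per country export; if only one or two storefronts show a cliff date, you are likely looking at a local issue rather than filtering.
    
    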

  2. Confirm Apple did not reset anything
    • New major versions do not reset ratings by default, but if you ticked “Reset ratings and reviews” for a version in App Store Connect, older ratings disappear
    • Also check whether you changed the primary app category or age rating; some older reviews can get filtered

  3. Look for review filtering patterns
    Apple auto-filters a lot of reviews if it detects:
    • Duplicate text across many users
    • Same IP or device patterns in “review farms”
    • Incentivized reviews in your UX (“Rate us and get coins”)

    When Apple flags this, you often see a sudden drop in total review count, not a slow decline. If you ever ran a campaign asking people to review in exchange for rewards, that becomes a risk.
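    You can eyeball the duplicate-text signal yourself before Apple does. Rough sketch; the normalization rules here are my own guess, Apple's actual heuristics are not public:

    ```python
    from collections import Counter
    import re

    def duplicate_review_clusters(reviews, min_cluster=3):
        """Group reviews by normalized text and return clusters that
        repeat at least `min_cluster` times. Near-identical text across
        many users is the kind of pattern that gets review sets flagged.
        """
        def normalize(text):
            # Lowercase and strip punctuation so trivial variations collapse
            return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

        counts = Counter(normalize(r) for r in reviews)
        return {text: n for text, n in counts.items() if n >= min_cluster}

    reviews = [
        "Best app ever, 5 stars!!!",
        "best app ever 5 stars",
        "Best App Ever, 5 Stars",
        "Genuinely useful for tracking workouts",
    ]
    print(duplicate_review_clusters(reviews))  # the first three collapse into one cluster
    ```

    If you ran any incentivized campaign, pull the review text from that period and check how much of it clusters like this.
    
    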

  4. Check app performance and crashes around that date
    • In App Store Connect → Analytics → Metrics → Crashes (there is no ANR metric on iOS, that is an Android concept; hangs show up in the Xcode Organizer)
    • If crashes spike, ratings often fall or new reviews go negative
    • Look at session length, uninstalls and subscription cancellations too

    Sometimes a backend change or third party SDK update breaks something without an App Store update.
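    If you export both daily series, a quick correlation check shows whether the dates actually line up. Sketch with invented numbers; `statistics.correlation` needs Python 3.10+:

    ```python
    from statistics import correlation

    # Hypothetical daily exports: a crash spike on days 4-6 coincides
    # with the dip in the average of newly submitted ratings.
    crash_rate = [0.4, 0.5, 0.4, 2.1, 2.3, 2.0, 0.5]   # percent of sessions crashing
    avg_rating = [4.6, 4.6, 4.5, 3.9, 3.7, 3.8, 4.4]   # daily average of new ratings

    r = correlation(crash_rate, avg_rating)
    print(f"crash rate vs rating correlation: {r:.2f}")  # strongly negative here
    ```

    A strongly negative value does not prove causation, but it tells you which dashboard to dig into first.
    
    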

  5. Look at reply timing
    • If you stopped responding to negative reviews, users sometimes leave a lower rating instead of updating their review after a fix
    • Replying quickly and offering fixes tends to improve updated ratings over a few weeks

  6. Confirm there was no search or category change
    • Check App Analytics → Acquisition
    • If browse or search traffic dropped, you get fewer new users, so fewer rating prompts trigger. Ratings growth slows, which affects average over time.

  7. Compare your charts with an ASO tool
    If you use tools like Appfigures, AppFollow, AppTweak, etc., check:
    • Ratings volume per day
    • Review sentiment over time
    • Keyword rank changes around the same date

    When keyword ranks drop, you see fewer “happy casuals” and more “power users” who complain.

  8. Check your rating prompt logic
    • Confirm SKStoreReviewController is not being called too often or in annoying spots
    • If you removed some “rate this app” entry point accidentally, you get fewer positive reviews

    I once moved a “Rate us” button behind another menu. New positive reviews dropped 60 percent within a week.
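    For reference, the kind of gating I keep in front of the prompt, sketched as plain decision logic. On iOS the actual call is SKStoreReviewController.requestReview, and Apple already caps it at 3 prompts per user per 365 days; all the threshold numbers here are illustrative:

    ```python
    from datetime import date, timedelta

    def should_request_review(last_prompt, sessions, just_succeeded,
                              min_days_between=120, min_sessions=5, today=None):
        """Decide whether to show the system rating prompt.

        Only fire after a success moment, for users with some engagement,
        and never re-prompt within `min_days_between` days. The OS may
        still decline to show the dialog; this is just app-side gating.
        """
        today = today or date.today()
        if not just_succeeded or sessions < min_sessions:
            return False
        if last_prompt and today - last_prompt < timedelta(days=min_days_between):
            return False
        return True

    # Engaged user, success moment, never prompted before -> ask
    print(should_request_review(None, sessions=8, just_succeeded=True))  # True
    # Prompted a month ago -> hold off
    print(should_request_review(date(2024, 1, 1), sessions=8,
                                just_succeeded=True, today=date(2024, 2, 1)))  # False
    ```

    The point of the success-moment gate is exactly the 60 percent story above: where the prompt fires changes who answers it.
    
    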

  9. Look for external triggers
    • A bad Reddit or TikTok thread can send in a wave of 1-star reviews
    • Google the app name + “scam”, “bug”, “broken” around the date of the drop
    • Check support inbox volume for matching complaints

  10. If you suspect Apple filtering or a bug
    • Take screenshots of rating counts over time
    • Contact Apple Developer Support with date, app ID, and metrics screenshots
    Sometimes they confirm review removal due to policy, sometimes they say nothing useful, but at least you know you checked.

Quick recovery pointers
• Fix any crash / login / payment issues first, then ship a small update
• Mention the fix in “What’s New” in plain language
• Add a gentle in-app rating prompt after a success event (e.g. after a completed task or level)
• Respond to recent negative reviews with concrete info and version numbers
• Track the daily rating count for 2 to 4 weeks, not only the overall average

If you share your app category, region focus, and the date when you saw the drop, people here can compare with their data and see if it looks like an Apple-side change or something specific to your app.

Had something similar happen on a casual game last year, and it freaked me out too.

I agree with most of what @caminantenocturno wrote, but I’d look at a few angles they didn’t lean on much:

  1. Silent UX regressions
    You said “no major updates,” but check for:

    • Small config changes server side
    • Feature flags you toggled
    • A/B tests that changed onboarding, paywalls, ads frequency, login friction
      I once changed ad frequency through remote config and ratings tanked within 48h, with no app update involved.
  2. Prompt timing + cohort shifts
    Sometimes the reviews themselves are not “missing,” you just shifted who you’re asking.

    • If you recently changed when SKStoreReviewController fires, you might now be prompting more frustrated users (e.g. after a failed action instead of a success event).
    • If a UA campaign ended, you might have lost a big chunk of “happy casuals” that previously left 5 stars, so your average drifts down even if nothing is “wrong.”
  3. Seasonality and user intent
    Depending on your category:

    • Education, fitness, finance, travel apps see sharp sentiment swings around New Year, school start, tax season, holidays.
    • When user intent changes, expectations change, and they rate harder even if the app is identical.
      Look at year‑over‑year ratings per date range, not just last month vs previous month.
  4. Hidden policy landmines
    I’d actually push a bit more on the Apple side than @caminantenocturno suggests:

    • If you recently modified ToS, subscriptions, trial wording, or added an aggressive paywall, Apple might have started auto‑filtering some older reviews that referenced old flows.
    • If you localized your store page or changed keywords, some reviews get de‑prioritized or are harder to see from your primary storefront.
  5. Competitor activity
    Cynical take, but worth considering:

    • When a competitor runs aggressive UA or gets featured, your store traffic quality can drop sharply. Fewer high‑intent users → worse engagement → more complaints.
    • In a few “fun” cases, a competitor community piled on with coordinated 1‑star attacks after a pricing change. A quick search on Reddit / Discord / Twitter around your app name can reveal that kind of brigade.
  6. Ratings vs perceived ratings
    Check if:

    • The average rating shown on the product page differs from what you see in Analytics. Sometimes Apple updates regional storefronts at different times, so it can look like a drop in one place but not another.
    • Your in‑app prompt shows an older cached rating. That makes devs think it dropped globally when only one region tanked.
  7. Correlate with funnel metrics
    I’d do a simple table by date (or week) with:

    • New installs
    • Activations / signups
    • First key action completed
    • Crash rate and major error codes
    • % of users who reach the point where you ask for a rating
      Put the rating count and average rating next to that. Often you see, for example, crash rate ok, but completion of “first success action” down 30 percent, which explains fewer positive prompts and more frustrated users.
  8. Recovery play that actually moves the needle
    What helped us:

    • Ship a tiny update, even if it is mostly “stability improvements,” but pair it with very clear “What’s New”:
      “Fixed login failures affecting some users on [date range]. If you were stuck before, it should now work.”
    • For 2 to 3 weeks, show a rating prompt only:
      • After a clear success moment
      • For users with at least X sessions or Y days retained
    • Reply to recent 1 and 2 star reviews specifically referencing the new version number and what changed.
      That combo recovered our average by ~0.4 stars in about a month.
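    The table in point 7 is easy to throw together from weekly exports. Toy sketch with invented numbers, just to show the shape; the column names are my own, map them to whatever your analytics exports call these events:

    ```python
    # Hypothetical weekly exports joined by week; the point is to put
    # funnel numbers and rating numbers side by side on one screen.
    weeks = [
        {"week": "W10", "installs": 1000, "first_success": 620, "prompted": 410, "avg_rating": 4.5},
        {"week": "W11", "installs": 980,  "first_success": 600, "prompted": 400, "avg_rating": 4.5},
        {"week": "W12", "installs": 1010, "first_success": 430, "prompted": 260, "avg_rating": 4.0},
    ]

    print(f"{'week':<5}{'installs':>9}{'success%':>10}{'prompt%':>9}{'rating':>8}")
    for w in weeks:
        success = w["first_success"] / w["installs"] * 100   # completed first key action
        prompted = w["prompted"] / w["installs"] * 100       # reached the rating ask
        print(f"{w['week']:<5}{w['installs']:>9}{success:>9.1f}%{prompted:>8.1f}%{w['avg_rating']:>8.1f}")
    ```

    In this toy data installs are flat but first-success completion drops from ~62% to ~43% in W12, which is exactly the "fewer positive prompts, more frustrated users" pattern described above.
    
    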

If you can share roughly:

  • Category
  • Main countries
  • Approx date the drop started

people can probably tell you if it lines up with a known App Store / policy / seasonality quirk or if it smells more like an app specific issue.

One angle not covered much by @himmelsjager and @caminantenocturno is how App Store surfacing changes can make ratings look like they dropped, even when the underlying numbers are similar.

A few things to probe:

  1. Product page A/B tests from Apple
    Apple sometimes experiments with screenshots, order of info, or how reviews are highlighted. If a new variant shows fewer or older reviews, it can feel like ratings vanished. Correlate the date you noticed the drop with any jump or dip in product page views and conversion in App Analytics. If views change but installs and rating volume stay almost flat, you may be seeing a presentation change, not a real loss.

  2. “Most helpful” vs “Most recent” bias
    The front page of reviews may flip toward a few harsh 1 or 2 star entries, which tanks perceived rating quality. Manually scroll more than a few pages of reviews in at least 3 of your main storefronts. If you spot a handful of recent negative ones that keep surfacing, prioritize replying to those specifically and reference your latest version. Apple sometimes reorders visibility after dev replies.

  3. Store listing mismatch
    If your screenshots, subtitle, or description overpromise relative to the current feature set, you get harsher ratings from fresh users. Even if the app has not changed, expectations might have. For example, new keywords added to the title can attract a different type of user who finds the app “not what I wanted” and slaps 1 star. Revisit your messaging to match what you actually deliver right now.

  4. Pricing and perceived value without changing the code
    Subscription trials, introductory offers, or promo banners updated server side can shift sentiment hard. Even a slightly worse deal than before can turn neutral users into vocal critics. Compare store revenue events and intro offer usage before and after the rating drop. If revenue per user went up while ratings went down, you might have crossed a perceived fairness line.

  5. Platform expectation drift
    iOS norms move quickly. Features that were “nice” last year feel outdated now. If competitors in your niche ship slick new flows, your unchanged UX starts to feel clunky and people rate more harshly even without a regression. Check top chart apps in your category and see where your onboarding, loading speed, or visual design is clearly behind.

  6. What I somewhat disagree with
    I would not lean too heavily on seasonality for most small and mid sized apps. Unless you are in strongly cyclical categories like tax or travel, a sudden cliff in ratings is usually structural, not calendar driven. Same for blaming only “external brigades.” It happens, but more often the root is in your funnels, not on Reddit.

  7. Recovery actions that are easy to overlook
    • Tweak expectations in the first run experience. For example, add a short explainer about what the app does not do, to defuse “missing feature” complaints.
    • Use a lightweight survey or in app feedback button before hitting SKStoreReviewController. Divert angry users into sending you support messages while happy ones get nudged to the store.
    • In your next release notes, explicitly invite previously frustrated users to try again, mentioning concrete fixes, not generic “bug fixes and improvements”.
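    The survey-before-prompt idea in rough form (the 1-to-5 score and route names are my invention; keep it an honest feedback step rather than a disguised rating filter, since manipulating ratings is against the spirit of the review guidelines):

    ```python
    def route_feedback(satisfaction_score, threshold=4):
        """Route an in-app 'How is it going?' answer (1-5) before any
        store prompt: happy users get nudged toward the system rating
        prompt, unhappy ones go to a support form so you hear the
        complaint privately and can actually fix it.
        """
        if satisfaction_score >= threshold:
            return "store_review_prompt"
        return "support_form"

    print(route_feedback(5))  # store_review_prompt
    print(route_feedback(2))  # support_form
    ```

    The support-form branch is the valuable one: it turns a would-be 1-star review into a ticket with reproduction details.
    
    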

Lastly, both @himmelsjager and @caminantenocturno already covered the mechanical diagnostics really well, so I would spend an evening walking through your new user journey as if you never saw the app before. Record the session, note each friction point, then map those to the date the ratings dropped. That qualitative pass often surfaces the one small change or expectation mismatch that analytics and Apple dashboards do not clearly highlight.