Need help understanding sudden drop in my app reviews

I’m seeing a sudden drop in my app reviews and ratings across both iOS and Android, even though I haven’t pushed any major updates or changed key features. I need help figuring out what could be causing this, how to check if it’s a bug, a UX problem, or maybe fake reviews, and what steps I should take to recover my app’s rating and encourage more honest positive reviews.

First thing I’d do is confirm the drop is real and not an analytics glitch.

  1. Check the raw data
  • Go to App Store Connect → Analytics → Ratings and Reviews.
    Export by day for the last 90 days.
  • Do the same in Google Play Console → User feedback → Ratings.
  • Look for:
    • A spike in 1-star ratings on specific days.
    • Drop in rating count after a specific date.
      If you see a step-change on a specific day, something changed around that date.
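The "step-change on a specific day" check is easy to script once you have the daily export. A rough sketch (the dates and counts below are made up; feed it your own export):

```python
# Flag the day where the daily 1-star share jumps the most, comparing the
# mean share before vs. after each candidate date (a crude step-change check).
# `daily` maps date -> (one_star_count, total_ratings); sample data is invented.
def find_step_change(daily):
    dates = sorted(daily)
    shares = [daily[d][0] / max(daily[d][1], 1) for d in dates]
    best_date, best_jump = None, 0.0
    for i in range(3, len(dates) - 3):          # need a few days on each side
        before = sum(shares[:i]) / i
        after = sum(shares[i:]) / (len(shares) - i)
        jump = after - before
        if jump > best_jump:
            best_date, best_jump = dates[i], jump
    return best_date, best_jump

daily = {
    "2024-05-01": (2, 40), "2024-05-02": (3, 42), "2024-05-03": (2, 38),
    "2024-05-04": (1, 41), "2024-05-05": (12, 44), "2024-05-06": (14, 43),
    "2024-05-07": (13, 40), "2024-05-08": (15, 45),
}
date, jump = find_step_change(daily)
print(date, round(jump, 2))   # → 2024-05-05 0.26
```

Whatever date comes out is the date to line your timeline up against.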
  2. Correlate with releases and config changes
    Even if you did not push a “major” update, small stuff still matters.
    Make a timeline:
  • App releases by version and date.
  • Backend releases / feature flags / config changes.
  • New SDKs you integrated, ads, analytics, A/B tests.
  • Price changes, subscription trials, promo campaigns.

Check if the drop aligns with:

  • A new version going live on one platform first.
  • Backend rollout that affected both platforms at once.
  3. Check crashes and ANRs
    Play Console:
  • Quality → Android vitals.
    Look at:
  • Crash rate.
  • ANR rate.
    Filter by version and device. If one version has a big jump, that explains a rating drop.

Xcode / App Store:

  • Check Crashes in App Store Connect.
  • Also check your crash tool (Firebase Crashlytics, Sentry, etc).

If crash rate jumped after a certain build or server change, fix that first.
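Spotting the "one version has a big jump" pattern is a one-liner once you have crash and session counts per version. A sketch with invented numbers:

```python
# Flag app versions whose crash rate is well above the best version's
# baseline. Numbers are made up; in practice pull crashes/sessions per
# version from Android vitals or your crash tool's export.
def crashy_versions(stats, factor=3.0):
    rates = {v: crashes / sessions for v, (crashes, sessions) in stats.items()}
    baseline = min(rates.values())
    return {v: round(r, 4) for v, r in rates.items() if r > factor * baseline}

stats = {
    "4.1.0": (120, 100_000),   # ~0.12% — normal
    "4.2.0": (150, 110_000),   # ~0.14% — normal
    "4.3.0": (2_400, 90_000),  # ~2.7% — regression
}
print(crashy_versions(stats))   # → {'4.3.0': 0.0267}
```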

  4. Read the recent reviews in bulk
    You need patterns, not single opinions.
  • Export or scrape the last 100–300 reviews from both stores.
  • Put them into a sheet.
  • Add columns like “Main complaint”, “Feature”, “Device”, “Country”.
    Tag them quickly:
  • Login issues.
  • Performance / lag.
  • Ads.
  • Paywall / pricing.
  • Bugs on specific devices.

If 30 percent of reviews mention one thing, you have your root cause.
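You can semi-automate the tagging pass before eyeballing anything. The keyword lists below are just a hypothetical starting point; tune them to your app's vocabulary:

```python
from collections import Counter

# Crude keyword tagging of exported review texts, returning the share of
# reviews per complaint category (percent of all reviews).
TAGS = {
    "login": ["login", "log in", "sign in", "password"],
    "performance": ["slow", "lag", "freez", "loading"],
    "ads": ["ads", "advert"],
    "paywall": ["paywall", "subscription", "price"],
    "crash": ["crash"],
}

def tag_reviews(reviews):
    counts = Counter()
    for text in reviews:
        low = text.lower()
        for tag, words in TAGS.items():
            if any(w in low for w in words):
                counts[tag] += 1
    total = len(reviews)
    return {t: round(100 * c / total) for t, c in counts.most_common()}

reviews = [
    "Can't log in since yesterday",
    "Login broken, password reset fails",
    "Too many ads now",
    "App is slow and laggy",
    "Crashes on startup",
    "Sign in loop, useless",
]
print(tag_reviews(reviews))   # → {'login': 50, 'ads': 17, 'performance': 17, 'crash': 17}
```

If one tag dominates the way "login" does here, that is your root cause.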

  5. Check external changes you do not control
    Some common external triggers:
  • OS update. iOS or Android update breaks a workflow or permission flow.
  • API deprecation from Apple or Google.
  • Third-party SDK issue. For example, an ad SDK causing slow startup.
  • Device-specific bug after a new device launch.

Look at reviews filtered by device and OS version:

  • Play Console → Reviews → filter by Device / App version.
  • App Store Connect → see which devices / OS get more negatives.
  6. Look for rating prompt changes
    If you changed:
  • Timing of in-app rating prompts.
  • Logic for who sees prompts.
  • Wording or placement.

You might now be asking more frustrated users instead of happy ones.
Check:

  • When you call SKStoreReviewController.requestReview on iOS.
  • When you show Play in-app review flow on Android.

Try:

  • Showing prompts after “success” events.
  • Avoiding prompts right after a long, painful flow.
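A sketch of that gating logic (the event names and thresholds here are hypothetical, not from any SDK):

```python
import time

# Only ask users who recently saw value and hit no errors. On iOS this
# gate would sit in front of the SKStoreReviewController call; on Android,
# in front of launching the Play in-app review flow.
def should_request_review(user):
    recent_error = time.time() - user.get("last_error_at", 0) < 24 * 3600
    return (
        user.get("successful_sessions", 0) >= 3   # saw value several times
        and user.get("completed_task", False)     # just finished a success flow
        and not recent_error                      # nothing broke recently
        and not user.get("prompted_recently", False)
    )

happy = {"successful_sessions": 5, "completed_task": True, "last_error_at": 0}
frustrated = {"successful_sessions": 5, "completed_task": True,
              "last_error_at": time.time()}
print(should_request_review(happy), should_request_review(frustrated))   # → True False
```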
  7. Check acquisition channels
    Sometimes new traffic brings different users with different expectations.
    In both consoles, look at:
  • Where users come from: search, ads, referral, featured.
  • Any new ad campaigns or new keywords.

Example:

  • If you started bidding on a keyword like “free X”, but your app has a hard paywall, you will get bad ratings from misaligned expectations.

Align your store listing and ads with what the app actually does.

  8. Check store listing changes
    Look at:
  • Screenshots, description, short description, title.
  • Any localization edits.

If you changed wording and started attracting the wrong audience, ratings fall.
Also check if:

  • You added more aggressive “pro” wording.
  • You removed key info about limits or pricing.
  9. Look for competitor or spam patterns
    Less common, but worth checking:
  • Sudden spike of similar 1-star reviews in a short window.
  • Repetitive wording or nonsense reviews.

If you suspect spam, you can:

  • Report them in Play Console and App Store Connect.
  • Respond politely in public so real users see your side.
  10. Operational stuff users feel
    Even with no app update:
  • Backend latency increase.
  • Rate limits on APIs.
  • More aggressive ads or lower-quality networks.
  • Payment failures or region issues.

Monitor:

  • API error rates.
  • Response times.
  • Payment success rates.

If you see a change that matches the review drop date, fix that path.
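One way to line an ops metric up against the review drop, assuming you can export it per day (dates and rates below are invented):

```python
# Find the first day a daily operational metric (here: payment success
# rate) falls below a threshold, then compare with the review-drop date.
def first_bad_day(daily_rate, threshold=0.95):
    for day in sorted(daily_rate):        # ISO dates sort chronologically
        if daily_rate[day] < threshold:
            return day
    return None

payment_success = {
    "2024-05-01": 0.985, "2024-05-02": 0.982, "2024-05-03": 0.979,
    "2024-05-04": 0.870, "2024-05-05": 0.862,   # hypothetical PSP issue
}
review_drop_started = "2024-05-04"              # from your ratings export
bad_day = first_bad_day(payment_success)
print(bad_day, bad_day == review_drop_started)  # → 2024-05-04 True
```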

  11. Quick triage plan
  • Week 1:
    • Identify main theme from recent reviews.
    • Correlate with crash / ANR data.
    • Roll back or hotfix any obvious regressions.
  • Week 2:
    • Ship a small update with fixes for top 1–2 issues.
    • Respond to recent 1–2 star reviews mentioning those issues and tell them it is fixed.
    • Add a more targeted in-app review prompt for happy users.
  12. Metrics to watch after fixes
  • New version’s average rating per day.
  • Percentage of 4–5 star reviews vs all reviews.
  • Crash and ANR rates by version.
  • Support tickets by category.

If you share:

  • Last 2–3 release dates.
  • Any changes to prompts, ads, pricing, or SDKs.
  • A few typical negative review texts.

People here can help pinpoint the most likely cause faster.

Two angles @kakeru hasn’t really covered that I’d check right away:

  1. Silent UX changes that aren’t “releases”
    You said no big updates, but ratings can tank from stuff that feels “non‑product” internally:
  • Experiment in your paywall or subscription screen (copy, free trial wording, placement)
  • Change in ad frequency / network waterfall
  • Change in login / SSO provider behavior
  • New onboarding step, extra consent dialog, or more aggressive upsell

These often get shipped via remote config, A/B tools, or CMS, so they don’t show up as app releases. Go through:

  • Remote config history (Firebase Remote Config, LaunchDarkly, etc.)
  • CMS / feature flag change logs
  • Paywall / experiment dashboards
    Look specifically for things that:
  • Increased friction before users hit the “value” moment
  • Showed more ads earlier in the session
  • Moved a key feature behind a paywall or account wall

You’ll usually see a pattern in reviews like “used to like this app but…” / “too many ads now” / “paywall” even if code didn’t change.
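If you can export config snapshots from before and after the drop (most remote-config tools keep a history), a simple diff narrows the candidates fast. The keys below are invented:

```python
# Diff two remote-config snapshots (e.g. JSON exports from before and
# after the review drop) to list changed, added, and removed keys.
def diff_config(before, after):
    changes = []
    for key in sorted(set(before) | set(after)):
        if key not in before:
            changes.append(f"added   {key} = {after[key]}")
        elif key not in after:
            changes.append(f"removed {key} (was {before[key]})")
        elif before[key] != after[key]:
            changes.append(f"changed {key}: {before[key]} -> {after[key]}")
    return changes

before = {"ads_per_session": 2, "free_exports": 5, "onboarding_steps": 3}
after = {"ads_per_session": 6, "free_exports": 2, "onboarding_steps": 3,
         "interstitial_on_launch": True}
for line in diff_config(before, after):
    print(line)
```

Anything on that list that adds friction or ads before the "value" moment is a revert candidate.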

  2. Trust & policy issues that hit both stores at once
    When ratings drop on both iOS and Android at the same time without an obvious crash spike, I’d be slightly suspicious about:
  • Account bans or stricter moderation
  • Region / country restrictions suddenly applied
  • Payment declines for renewals (cards blocked, PSP issue, new SCA flow)
  • Privacy or tracking prompts that feel sketchy

Things to cross check:

  • Customer support tickets: spike in “why did you ban me / charge me / lock me out”
  • Payment provider dashboards: jump in failed renewals or 3DS failures around the same date
  • Any new compliance work: age gates, KYC, GDPR/ATT related flows

If users feel “cheated” or “locked out” you’ll see tons of 1‑stars fast with emotionally written reviews that might not show up in crash charts at all.

A few more less-obvious checks:

  • Ranking / visibility shift:
    If your app fell in category rankings or lost a feature placement, you can suddenly get more “cold” users instead of loyal / referred ones. Cold users are harsher. Compare:

    • Store search terms before vs after
    • % of installs from brand keyword vs generic keywords
      If you suddenly rank on a generic term that poorly matches your app, expect lower ratings.
  • Onboarding vs power users:
    Look at rating distribution by tenure if you can track it. If mostly new users are upset, it’s onboarding / expectation mis‑match. If long‑time users suddenly turn, something core changed: data loss, layout, monetization, or performance.
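If you can attach tenure to each rating, the split is a few lines; the 30-day cutoff and the sample data are arbitrary:

```python
# Compare average rating of new vs long-time users, where tenure is the
# user's age in days at the time they left the review.
def split_by_tenure(ratings, cutoff_days=30):
    new = [r for r, t in ratings if t < cutoff_days]
    old = [r for r, t in ratings if t >= cutoff_days]
    avg = lambda xs: round(sum(xs) / len(xs), 2) if xs else None
    return {"new_users": avg(new), "long_time_users": avg(old)}

# (rating, tenure_days) pairs — made-up sample
ratings = [(1, 3), (2, 5), (1, 10), (5, 200), (4, 400), (5, 90), (2, 7)]
print(split_by_tenure(ratings))   # → {'new_users': 1.5, 'long_time_users': 4.67}
```

A split like this one points at onboarding / expectation mismatch; the reverse (long-time users turning) points at something core changing.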

  • Regional problems:
    This is underrated. Filter reviews and analytics by country/locale:

    • Did some CDN change or region server change affect only certain countries?
    • Did you change pricing tiers in specific currencies?
      If the drop is mostly, say, Brazil and India, it’s probably pricing, payment, or network issues, not a universal UX flaw.
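A quick way to surface regional outliers once exported reviews carry a country code (sample data invented):

```python
from collections import defaultdict

# Flag countries whose average rating sits well below the global average.
def regional_outliers(reviews, min_gap=1.0):
    by_country = defaultdict(list)
    for country, rating in reviews:
        by_country[country].append(rating)
    global_avg = sum(r for _, r in reviews) / len(reviews)
    return {
        c: round(sum(rs) / len(rs), 2)
        for c, rs in by_country.items()
        if global_avg - sum(rs) / len(rs) >= min_gap
    }

reviews = [("US", 5), ("US", 4), ("DE", 5), ("DE", 4),
           ("BR", 1), ("BR", 2), ("BR", 1), ("IN", 2), ("IN", 1)]
print(regional_outliers(reviews))   # → {'BR': 1.33, 'IN': 1.5}
```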

Where I mildly disagree with @kakeru is on treating this only as “find the bug and patch it.” Sometimes the rating drop is a lagging indicator of a strategic change that actually matches your long‑term goals. For example:

  • You tightened fraud detection or spam filtering
  • You stopped supporting very old devices / OS versions
  • You made the free tier more limited to protect revenue

In those cases:

  • You might accept a short‑term rating hit
  • But you should explicitly adjust:
    • Store listing: set clearer expectations so people don’t feel bait‑and‑switched
    • In‑app messaging: explain why some things changed and what users still get for free
    • Support macros: have canned but honest responses for the most frequent complaints

Concrete next steps I’d use that don’t just duplicate the earlier checklist:

  1. Pull last 30 days of 1–3 star reviews only.
    For each: tag with “Monetization / Ads / Login / Perf / Crash / Trust / Region”.
    Do at least 100 reviews. You want a pie chart, not vibes.

  2. Pull your support inbox / chat logs for the same period.
    Tag the same way.
    If “cannot login” or “charged unexpectedly” is top 1 or 2, focus there before anything else.

  3. Check any non-code change logs: remote config, CMS, experiments, pricing tables, feature flags, moderation rules.
    Create a list of “revert candidates” and, if safe, roll one or two back temporarily to see if new reviews improve.

  4. If you find a clear pain point, ship:

    • A minimal fix
    • Release notes that explicitly call it out
    • Public replies to a handful of recent 1‑stars saying what changed and when it’s fixed

If you’re willing to share anonymized snippets of your most common recent negative reviews (like 5–10 examples) and roughly when the drop started, it’s usually possible to guess the top 1–2 root causes pretty fast.

I’ll skip repeating the excellent forensic checklists from @andarilhonoturno and @kakeru; you already have a solid “what changed and when” playbook. I’d zoom out a bit and look at the shape of the drop and how you react publicly.

1. Classify the type of drop

Look at the graph of ratings over time and ask:

  • Cliff: sudden fall in average rating in 2–3 days
    → Usually a specific incident: outage, policy change, paywall, login break.
  • Slide: steady drift downward over weeks
    → Often a perception or audience problem: new traffic source, expectations misaligned.
  • Mixed: cliff, then plateau low
    → You had an incident, and the recovery strategy was weak.

Why this matters: a cliff needs an incident response mindset, not just bug hunting. For a slide, it is more about positioning and funnel.
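A crude way to classify the shape automatically from a daily average-rating series; the thresholds are rough heuristics, tune them to your rating volume:

```python
# Classify a daily average-rating series as a "cliff" (big one-day drop)
# or a "slide" (steady drift down); anything else counts as stable.
def classify_drop(daily_avgs, cliff_drop=0.5):
    deltas = [b - a for a, b in zip(daily_avgs, daily_avgs[1:])]
    if min(deltas) <= -cliff_drop:
        return "cliff"
    if daily_avgs[-1] < daily_avgs[0] - 0.3:
        return "slide"
    return "stable"

print(classify_drop([4.6, 4.6, 4.5, 3.8, 3.7, 3.7]))   # → cliff
print(classify_drop([4.6, 4.5, 4.4, 4.3, 4.2, 4.1]))   # → slide
```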

2. Treat it as a comms problem as much as a product problem

I slightly disagree with both in one area: they focus mostly on internal diagnostics. That’s necessary, but users only see two things:

  • The app behavior
  • Your public stance in replies and release notes

For a visible drop across both iOS and Android:

  • Reply to a representative set of 1–2 star reviews with specific, dated info.
    • “Issue X affecting Y devices is fixed in version Z, released on [date].”
    • “If you hit [error text], use [temporary workaround].”

Do not just write “sorry for the inconvenience.” Use replies as micro-changelogs. Potential users read those.

3. Use short, surgical updates instead of waiting for a big fix

If you found even a partial cause, ship a very small update:

  • One or two concrete fixes
  • Release notes that explicitly mention the pain users wrote about
  • Then watch: does the daily rating trend for that version rise compared to previous?

This is more informative than staring at the long-term average that moves slowly.

4. Ratings recovery strategy

Once the core issue is at least partially under control:

  • Temporarily narrow rating prompts to your best-fit segment:
    • Satisfied, engaged users after a clear success event
    • Post-task completion rather than on first launch or after friction
  • Avoid pushing prompts at users who:
    • Just hit an error
    • Are stuck in a paywall
    • Are in high-latency regions where you know performance is bad

You are not “gaming” ratings; you are just not asking unhappy users while the problem is fresh.

5. Measure the “emotion” of reviews, not only topics

Even if you do the tagging steps that were suggested, also grade each recent bad review on intensity:

  • Mild: “annoying bug,” “too many ads now”
  • Strong: “scam,” “stole my money,” “fraud,” “never using again”

If strong emotional language suddenly increases, treat it like a brand incident. Add:

  • In‑app banner or interstitial explaining current issue and planned fix date
  • Short FAQ inside the app for the top 1–2 complaints

This often calms down future reviews even before the full technical solution is shipped.
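The intensity grading is easy to script as a first pass before reading anything manually; the phrase lists are illustrative, extend them with your store's actual vocabulary:

```python
# Grade bad reviews by emotional intensity; strong language signals a
# trust/brand incident rather than an ordinary bug complaint.
STRONG = ["scam", "fraud", "stole", "never using again"]
MILD = ["annoying", "too many ads", "slow", "bug"]

def intensity(text):
    low = text.lower()
    if any(p in low for p in STRONG):
        return "strong"
    if any(p in low for p in MILD):
        return "mild"
    return "neutral"

reviews = ["This is a scam, they stole my money",
           "Annoying bug on the settings screen",
           "Too many ads now",
           "Okay app"]
print([intensity(r) for r in reviews])   # → ['strong', 'mild', 'mild', 'neutral']
```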

6. Sanity check your store title and subtitle

If the title and subtitle are vague or generic, the users you attract arrive with essentially random expectations, and random expectations drag ratings down.

A clear, honest name:

  • Misleads nobody, so there is no expectations gap caused by bad wording.
  • Attracts the users your app actually serves.

A vague or generic one:

  • Hurts discoverability.
  • Leaves users unable to understand the value from the name, which usually hurts ratings because expectations are random.

At minimum, a clear, honest title and subtitle usually raise the baseline rating by attracting the right users and repelling the wrong ones.

7. Sanity check: is this acceptable pain?

One thing both @andarilhonoturno and @kakeru hinted at but did not lean on: sometimes the drop is the price of a strategic decision:

  • Removed free features
  • Tightened moderation / bans
  • Dropped old OS versions

If your business metrics (revenue, churn, fraud) improved while ratings fell:

  • Decide what “floor rating” you are willing to live with (for example 4.2 instead of 4.7).
  • Make sure store copy and in‑app messaging are brutally clear so people do not feel tricked.
  • Keep public responses honest: “We recently changed X for Y reason; we know not everyone will like it.”

That transparency can stop the slide even if you do not revert the change.

If you can share 5–10 recent negative reviews and whether the drop was a cliff or a slide, it is usually possible to point to 1–2 likely root causes and the most leverage‑heavy fix.