I’m trying to keep up with very fresh AI news and only want updates from the last 3 days, but most sites mix in older articles or don’t let me filter by such a short timeframe. How can I reliably track AI news specifically from the last 72 hours, preferably with tools or sites that auto-refresh or can be customized for this kind of recency-focused feed?
Short answer: you need your own filters, not a single magic site.
Here is a setup that works well for 0–3 day AI news only:
1. Use Google News with tight filters
- Go to news.google.com
- Search:
  "artificial intelligence" OR "AI" OR "machine learning" OR "LLM" OR "ChatGPT" OR "OpenAI" OR "Anthropic" OR "Stability AI"
- In Tools, set "Recent" to "Past 24 hours" or "Past 3 days"
- Bookmark that URL
- Check once or twice a day, do not rely on email digests, they lag.
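If you want that same search as a machine-readable feed, Google News also serves results as RSS, and the `when:Nd` operator bakes the recency window into the query itself. A small sketch; the operator and URL shape reflect my understanding of Google's current behavior, so verify before relying on it:

```python
from urllib.parse import quote_plus

def google_news_rss_url(query: str, days: int = 3) -> str:
    """Build a Google News RSS search URL limited to the last `days` days.

    Relies on the `when:Nd` search operator; if Google changes it,
    fall back to filtering by <pubDate> on the client side.
    """
    q = quote_plus(f"{query} when:{days}d")
    return f"https://news.google.com/rss/search?q={q}&hl=en-US&gl=US&ceid=US:en"

url = google_news_rss_url('"artificial intelligence" OR OpenAI OR Anthropic')
```

Drop the resulting URL into any RSS reader and it behaves like a pre-filtered saved search.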
2. Use Twitter / X keyword searches
- Use advanced search:
- Words: "AI", "LLM", "GPT-4", "OpenAI", "Anthropic", "Mistral AI", "Google DeepMind"
- Date range: from 3 days ago to today
- Save searches
- Follow a few high signal accounts, for example
- @karpathy
- @ylecun
- @soumithchintala
- @OfficialLoganK
- @sama, @gdb, @OpenAI, @GoogleDeepMind, @AnthropicAI
- Mute obvious hype accounts to cut noise.
3. Use RSS with date-based feeds
If you like RSS, this helps a lot. Suggested feeds:
- AI-specific sections of:
- The Verge: https://www.theverge.com/artificial-intelligence/rss/index.xml
- MIT Technology Review's AI section
- VentureBeat AI: https://venturebeat.com/category/ai/feed/
- Ars Technica's Biz & IT section
- Put feeds into Feedly, Inoreader, or FreshRSS
- Sort by “Newest” and only scroll until you hit items older than 3 days
- Mark everything older than 3 days as read, so your feed stays clean.
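If you'd rather enforce the 3-day rule in code than by scrolling, the cutoff is a few lines of standard-library Python. A sketch that filters any RSS string by `pubDate`, assuming well-formed RSS 2.0 with RFC 2822 dates:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def fresh_items(rss_xml: str, max_age_hours: int = 72, now=None):
    """Return (title, link) pairs for RSS items newer than the cutoff."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=max_age_hours)
    out = []
    for item in ET.fromstring(rss_xml).iter("item"):
        pub = item.findtext("pubDate")
        if pub and parsedate_to_datetime(pub) >= cutoff:
            out.append((item.findtext("title"), item.findtext("link")))
    return out
```

Feed it the raw XML of any of the feeds above and everything older than 72 hours simply disappears.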
4. Use Hacker News, but only with filters
- Go to https://hn.algolia.com
- Query: AI OR LLM OR OpenAI OR Anthropic OR DeepMind
- Date range: past 3 days
- Sort by “Date”
- This finds launches and dev posts that normal news sites miss.
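The Algolia HN interface also has a public JSON API (`search_by_date` with a `numericFilters` timestamp), so the 3-day query can be scripted instead of clicked each time. A sketch of the URL construction:

```python
import time
from urllib.parse import quote_plus

def hn_search_url(query: str, days: int = 3, now=None) -> str:
    """Build an Algolia HN API URL for stories from the last `days` days.

    The search_by_date endpoint returns results newest-first, and
    numericFilters restricts them to the time window.
    """
    since = int((now if now is not None else time.time()) - days * 86400)
    return ("https://hn.algolia.com/api/v1/search_by_date"
            f"?query={quote_plus(query)}&tags=story"
            f"&numericFilters=created_at_i>{since}")
```

Fetching that URL returns JSON you can skim or pipe into the same freshness filters as your RSS.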
5. Use a custom Google Alert with a strict query
- Go to google.com/alerts
- Query example:
  ("large language model" OR LLM OR GPT-4 OR GPT-5 OR OpenAI OR Anthropic OR "Mistral AI" OR "Google DeepMind") (announced OR launch OR released)
- Set frequency to "At most once a day"
- It still brings older references sometimes, but you filter by date in email.
6. Use a personal Notion page or doc as a log
This helps you see what is actually new.
- Create a table: date, source, topic, link, notes
- When you add an item, if you already logged that topic earlier, you skip it
- After a week or two, you train yourself to spot recycled news.
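The "skip it if you already logged that topic" step is easy to automate if your log is a plain CSV rather than Notion. A minimal sketch; the file name and column layout are just my choices:

```python
import csv
import os
from datetime import date

def log_item(topic, source, link, notes="", path="ai_news_log.csv"):
    """Append a row to a CSV log unless the topic is already there.

    Returns True if the row was added, False if it was a duplicate.
    """
    seen = set()
    exists = os.path.exists(path)
    if exists:
        with open(path, newline="") as f:
            seen = {row["topic"].lower() for row in csv.DictReader(f)}
    if topic.lower() in seen:
        return False
    with open(path, "a", newline="") as f:
        w = csv.DictWriter(f, fieldnames=["date", "source", "topic", "link", "notes"])
        if not exists:
            w.writeheader()
        w.writerow({"date": date.today().isoformat(), "source": source,
                    "topic": topic, "link": link, "notes": notes})
    return True
```

The case-insensitive topic check is what trains you to notice recycled news: the function refuses the second write.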
7. Specific "fast" sources worth checking every 1–2 days
These tend to post new AI stuff quickly.

8. Simple daily routine (takes ~15–20 minutes)
Morning:
- Open your Google News AI search (3-day filter)
- Check top 10–20 links
- Log only new topics.
Evening:
- Quick scroll of Twitter / X saved searches
- RSS reader, sort by “newest”, stop when you hit items older than 3 days.
This combo gives you:
- Google News for mainstream coverage
- Twitter / X and HN for “early” stuff
- RSS for consistent feeds
- A personal log to avoid re-reading old news
Once you set it up, the hard part is done. The rest is discipline and quick scanning.
You can get close to “only last 3 days” AI news, but you’ll never get it 100 percent clean. The trick is to flip your mindset from “find the perfect site” to “build a tiny pipeline that you control.”
I like a lot of what @viaggiatoresolare suggested, but I’d actually simplify and shift focus a bit:
1. Stop trying to monitor “AI” as a category
That term is too broad and too spammed. You’ll drown in nonsense funding announcements and fluff. Instead, define 3 to 6 “must know” buckets, for example:
- Foundation models and LLMs
- Open source models / repos
- Policy and regulation (EU AI Act, US orders, etc.)
- Big‑vendor product changes (OpenAI, Google, Anthropic, Mistral, Meta)
Now everything you track is tagged to one of those buckets. If it does not fit, ignore it. This is the main filter, more important than the 3‑day window.
2. Use structured sources, not just media sites
The fastest “almost real-time” stuff usually shows up before the articles do:
- arXiv: use arxiv-sanity or the arxiv.org API with date-bounded queries like `cat:cs.LG AND submittedDate:[NOW-3DAYS TO *]` (pseudo-syntax; the real API wants explicit timestamps)
- GitHub: search “llm”, “transformers”, “diffusion”, sort by “Recently updated”, and manually skim repos that picked up a lot of stars in the last 3 days
- Company changelogs and status pages:
- OpenAI release notes
- Anthropic changelog
- Google Cloud / Vertex AI changelogs
Those are inherently date‑driven and not padded with week‑old opinion pieces.
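For the arXiv route specifically, the official API wants explicit GMT timestamps rather than a NOW keyword, so a "last 3 days" query has to be computed. A sketch of building that URL; the endpoint and `submittedDate` range syntax follow the public arXiv API, but double-check against their docs:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import quote

def arxiv_recent_url(category="cs.LG", days=3, now=None, max_results=50):
    """Build an arXiv API query URL for papers submitted in the
    last `days` days in one category, newest first."""
    now = now or datetime.now(timezone.utc)
    start = (now - timedelta(days=days)).strftime("%Y%m%d%H%M")
    end = now.strftime("%Y%m%d%H%M")
    q = f"cat:{category} AND submittedDate:[{start} TO {end}]"
    return ("http://export.arxiv.org/api/query"
            f"?search_query={quote(q)}&sortBy=submittedDate"
            f"&sortOrder=descending&max_results={max_results}")
```

The response is an Atom feed, so it drops straight into the same RSS tooling as everything else in this thread.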
3. Use negative filters, not only positive ones
Where I disagree a bit with @viaggiatoresolare: just piling on more keywords is not enough. You also want to block recurrent junk, for example:
- Filter out titles with “beginner’s guide”, “what is AI”, “explained”, “2023”, “2022”, etc.
- In tools like Feedly / Inoreader, create rules: if the title matches `tutorial|course|bootcamp|jobs|salary|hype`, then auto-mark as read.
This cuts a lot of “old but repackaged” content that keeps sneaking into “latest” filters.
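The same rule-based blocking works in a few lines if your reader doesn't support rules. A sketch with illustrative junk patterns; tune the regex to your own feeds:

```python
import re

# Example junk patterns; these are illustrative, not exhaustive.
JUNK = re.compile(
    r"tutorial|course|bootcamp|jobs|salary|hype|"
    r"beginner'?s guide|what is ai|explained|202[0-3]",
    re.IGNORECASE,
)

def keep(title: str) -> bool:
    """True if a headline survives the negative filter."""
    return not JUNK.search(title)

headlines = [
    "OpenAI releases new model",
    "What is AI? A beginner's guide",
    "Top 10 AI courses for 2023",
]
fresh = [h for h in headlines if keep(h)]
```

Blocking stale year numbers like `202[0-3]` is a blunt but effective way to catch "old but repackaged" posts; widen or narrow it as the calendar moves on.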
4. Build a single “last 3 days” view across sources
Instead of checking six different places separately and eyeballing dates, use one hub:
- Use Inoreader, Feedbin, or self-hosted FreshRSS.
- Drop in: AI sections, official blogs, changelogs, a few subreddits via RSS, plus maybe a couple of curated newsletters that have RSS mirrors.
- Then:
- Sort globally by “Most recent”
- Scroll until you hit the first item older than 3 days
- Stop. Hard rule.
You never scroll beyond that timestamp, which enforces your 3‑day window even if the site’s own filtering sucks.
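The "stop at the first item older than 3 days" rule translates directly to code once entries are parsed into (timestamp, title) pairs. A sketch of the merged view with the hard cutoff; the tuple format is my assumption about what your upstream parser emits:

```python
from datetime import datetime, timedelta, timezone

def last_three_days(feeds, now=None, max_age_hours=72):
    """Merge entries from several feeds, newest first, and hard-stop
    at the first entry older than the cutoff.

    `feeds` is an iterable of lists of (published_utc, title) tuples,
    one list per source, already parsed upstream.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=max_age_hours)
    merged = sorted((e for f in feeds for e in f), reverse=True)
    out = []
    for ts, title in merged:
        if ts < cutoff:
            break  # the hard rule: never scroll past the cutoff
        out.append(title)
    return out
```

Because the merged list is globally sorted, the `break` is the programmatic version of "stop, hard rule": nothing past the timestamp is ever even looked at.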
5. Use “compare mode” one day per week
Once a week, take 10 minutes to compare:
- What showed up in your last-3-days feed
- What’s on the front page of major tech sites and maybe Hacker News
If you already saw 70 to 80 percent of the AI stories, your pipeline is working. The remaining 20 to 30 percent is either noise or slow‑moving analysis you can safely batch for weekends.
6. Have an “overflow” bin for older but important stuff
Pure 3-day focus can actually be counterproductive. Some of the most useful AI writing lands as slower essays, not breaking news. To avoid FOMO:
- Create a second RSS folder or tag: “Longform / no rush”
- Once a week, move “interesting but not urgent” links there
You stay strict on recency day to day, but still catch deeper context over time.
7. Minimum viable routine
Morning (10 minutes):
- Open your RSS hub, sorted by newest
- Read only items from the last 24 hours
- Tag anything worth remembering with your buckets (models, open source, policy, products)
Evening (5 to 10 minutes):
- Quick skim of arXiv + maybe GitHub “trending ML”
- Check 2 or 3 official model labs / product blogs
That’s it. If it feels like work, you’re doing too much.
In practice, you’ll still see the odd 4- or 5-day-old piece slip in, but with negative filters, a single “newest” view, and a hard scroll cutoff at “3 days back,” you’ll be far closer to what you want than by hunting for some magical “AI news last 72 hours” site.
Skip the perfect‑filter fantasy. You will not get a clean “last 72 hours of AI” feed, but you can get close by attacking a different part of the problem than @viaggiatoresolare did.
They focused on building a pipeline from existing feeds. I would shift a bit more toward event‑driven tracking and “reverse search,” so you avoid a lot of junk before it even hits your queue.
1. Track events, not sites
Instead of monitoring outlets, monitor the moments when people talk:
- Model / product launch events
- Policy announcements
- Major conference deadlines & proceedings (NeurIPS, ICML, ICLR, CVPR)
- Security incidents affecting AI tools
Set calendar reminders for the known dates, and for everything else rely on:
- A couple of high‑signal X (Twitter) lists focused only on AI researchers and key product engineers.
- A few “breaking change” channels in Discord / Slack communities (ML Ops, open source AI model communities).
When something big happens, it appears in those streams almost instantly. Then you search backwards for “coverage in last 3 days” instead of passively waiting for feed filters to behave.
2. Use reverse news search with narrow windows
Instead of browsing “AI” sections, use search interfaces that let you hard‑limit time:
- General news search engines with “Past 24 hours” or “Past 3 days” filters.
- Vertical search tools for tech news that support custom time ranges.
Workflow:
- Take a topic in your core buckets (foundation models, policy, etc.).
- Search with "topic keyword" + "model name or vendor" and set the time to “last 24 hours” or “last 3 days.”
- Save those as custom searches in your browser or news tool and re-run them daily.
This is slightly more manual than @viaggiatoresolare’s “single unified feed,” but it also gives you way tighter control of recency. You are asking for “what was written in this slice of time” instead of relying on how sites order content.
3. Add thresholds so you only see big changes
If you only care about non‑trivial stuff, you can layer signal thresholds:
- Social: only posts with at least X reshares / comments inside 24 hours.
- GitHub: repos that gained at least Y stars in the last 3 days and contain LLM‑related tags.
- Papers: citations will lag, so use secondary signals like appearance in curated lists, conference “spotlights,” or being discussed in newsletters.
This helps with the main problem both you and @viaggiatoresolare face: “fresh but useless.” A lot of last‑3‑days AI news is just noise. Thresholds filter that without you having to read it.
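For the GitHub threshold, note that the search API cannot express "stars gained in the last 3 days"; the closest scriptable proxy is "created recently and already above N stars". A sketch of that query using the public `search/repositories` endpoint:

```python
from datetime import date, timedelta
from urllib.parse import quote_plus

def github_new_repos_url(topic="llm", days=3, min_stars=50, today=None):
    """GitHub search-API URL for repos about `topic` created in the
    last `days` days with at least `min_stars` stars.

    This approximates "stars gained recently" with "new and already
    popular", since the API has no star-delta filter.
    """
    today = today or date.today()
    since = (today - timedelta(days=days)).isoformat()
    q = f"{topic} created:>{since} stars:>{min_stars}"
    return ("https://api.github.com/search/repositories"
            f"?q={quote_plus(q)}&sort=stars&order=desc")
```

For star deltas on established repos you would need to snapshot counts yourself (e.g., in the CSV log idea from earlier in the thread) and diff day to day.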
4. Be comfortable ignoring slow contexts
Where I disagree slightly with the “overflow bin” idea: if your goal is ultra‑fresh tracking, you may want to intentionally abandon most longform think‑pieces and weekend op‑eds.
Those are great for understanding trends, but they will blow up your limited attention. A stricter rule can be:
- Only save longform if it directly explains something from your last 3 days (e.g., a deep dive on a new model you just saw announced).
- Discard evergreen “AI will change X” content, even if everyone is linking it.
You will end up with less philosophical context, but a much cleaner operational view.
5. Quick routine that respects the 3‑day cap
You can keep this to 15 minutes:
- Morning
- Run 3–5 pre‑saved news searches with “last 24 hours.”
- Check one high‑signal X list filtered by “Top” or “Popular” in last day.
- Evening
- Scan trending repos / tools that hit your threshold in the last 3 days.
- Do a brief reverse search on any item that looks big (“model name” + last 3 days).
If a piece is older than 3 days, you just do not open it, even if it seems interesting. The rule is the filter.
About integrating a product like ``
You mentioned ``, so a few thoughts on how something like that fits this system:
Pros for using `` in a 3‑day AI news workflow
- Can serve as a single UI to run saved searches and show them in one timeline.
- Might let you combine multiple sources (news, social, code) and sort globally by time.
- If it supports keyword filters and rules, you can implement negative filters and thresholds directly.
Cons for using ``
- If it does not have strict time filters, you are still stuck manually enforcing the 3‑day cutoff.
- Some tools over‑optimize for “engagement” rather than recency, so you end up seeing popular week‑old AI posts.
- Extra setup: you still need to define your buckets, keywords, and follow lists. The tool does not solve that part for you.
Used with discipline, `` can improve readability, but it is only an interface over the strategy. The strategy is what actually gives you “last 3 days” reliability.
To tie it back: @viaggiatoresolare is right that building a small pipeline you own beats searching for a magic site. My twist is to be more aggressive: drive everything via time‑bounded search, event‑driven discovery, and thresholds, and be unapologetic about throwing away almost all “slow” content. That is how you keep AI news truly constrained to the last 72 hours without losing your week to scrolling.