How we evaluate research claims
A simple framework for auditing any “star rating” stock platform — including 5starsstocks-style products — without pretending we can predict markets.
Searches for 5starsstocks often start with a single anxiety: “Am I about to trust the wrong thing?” The methodology below is designed to calm that anxiety by replacing vibes with checks.
1) Translate marketing into testable statements
When a site says “high accuracy,” “top picks,” or “strong buy,” we rewrite the claim as a statement a third party could verify: what is the metric, what is the time range, what is the benchmark, and what are the costs and assumptions (fees, slippage, taxes, survivorship bias)?
2) Demand a time-stamped archive
A rating system is only as accountable as its history. We look for a complete archive of recommendations with timestamps that can’t be edited retroactively. If there’s no archive, performance claims may be interesting — but they’re not independently auditable.
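“Can’t be edited retroactively” has a concrete technical meaning: each record can be hashed together with the hash of the previous record, so changing any past entry breaks every hash after it. This is a minimal sketch of that idea, assuming the archive is a simple list of dated recommendations; real platforms would need a third party or public log to hold the hashes.

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash a record together with the previous hash (a simple hash chain)."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def verify_archive(records, stored_hashes, genesis="0" * 64) -> bool:
    """Recompute the chain and compare against the published hashes."""
    prev = genesis
    for record, stored in zip(records, stored_hashes):
        prev = chain_hash(prev, record)
        if prev != stored:
            return False
    return True

archive = [
    {"date": "2023-01-05", "ticker": "ABC", "rating": 5},
    {"date": "2023-02-10", "ticker": "XYZ", "rating": 4},
]
hashes, prev = [], "0" * 64
for rec in archive:
    prev = chain_hash(prev, rec)
    hashes.append(prev)

print(verify_archive(archive, hashes))   # True
archive[0]["rating"] = 3                 # a retroactive edit...
print(verify_archive(archive, hashes))   # ...breaks the chain: False
```

The point is not that a platform must use this exact scheme, but that tamper-evidence is cheap to implement, so its absence is a choice.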
3) Check methodology disclosure, not just “insights”
“Proprietary” isn’t a methodology. We don’t require firms to publish source code, but a credible system should disclose inputs (what data is used), the general weighting logic, and the conditions that trigger a downgrade. If ratings never go down, the system is not a system — it’s marketing.
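The “ratings never go down” test is itself checkable if an archive exists: count what fraction of rating changes are downgrades. A healthy system should show a nonzero rate. A sketch, assuming the history is a list of (ticker, date, rating) tuples we construct ourselves:

```python
from collections import defaultdict

def downgrade_rate(history) -> float:
    """Fraction of rating changes that are downgrades.
    history: iterable of (ticker, date_string, rating) tuples."""
    by_ticker = defaultdict(list)
    for ticker, date, rating in sorted(history, key=lambda r: (r[0], r[1])):
        by_ticker[ticker].append(rating)
    changes = downs = 0
    for ratings in by_ticker.values():
        for prev, cur in zip(ratings, ratings[1:]):
            if cur != prev:
                changes += 1
                if cur < prev:
                    downs += 1
    return downs / changes if changes else 0.0

history = [
    ("ABC", "2023-01", 5), ("ABC", "2023-06", 4),   # one downgrade
    ("XYZ", "2023-01", 3), ("XYZ", "2023-06", 5),   # one upgrade
]
print(downgrade_rate(history))  # 0.5
```

A downgrade rate of exactly zero over a long history is the quantitative form of “this is marketing, not a system.”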
4) Watch for conflicts and incentives
Affiliates, sponsored content, or opaque “partners” change incentives. The question is not “is monetization bad?” It’s “is monetization disclosed clearly, consistently, and up-front?”
5) Risk framing: is the downside given equal respect?
In volatile niches, the downside is not a footnote. We check whether risk language is specific (liquidity, dilution, regulation, cyclicality) and whether the platform avoids one-way narratives.
6) Benchmark reality: could a reader replicate the baseline?
If a platform compares itself to “the market,” we ask: which index, which dates, with what rebalancing rules? A benchmark should be simple enough that a reader could hold it without needing special access.
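“Simple enough that a reader could hold it” translates to: the benchmark return should be computable from two dates and two prices, with no rebalancing rules at all. A minimal sketch, using invented prices rather than real quotes:

```python
def buy_and_hold_return(prices: dict, start: str, end: str) -> float:
    """Total return of buying at `start` and holding to `end`, no rebalancing.
    prices: date string -> closing price (illustrative values, not real data)."""
    return prices[end] / prices[start] - 1.0

# Hypothetical index-fund prices for the comparison window.
index_fund = {"2023-01-03": 380.0, "2023-12-29": 475.0}
print(buy_and_hold_return(index_fund, "2023-01-03", "2023-12-29"))  # 0.25
```

If a platform’s stated benchmark needs more machinery than this (custom weights, monthly rebalancing, instruments retail readers can’t buy), that complexity deserves its own disclosure.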
Using this framework
If you want a fast implementation, use the scorecard and keep a copy of your answers. Re-scoring the same platform every 3 months is a useful test: does it become more transparent over time, or does it keep moving the goalposts?
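The six checks above can be kept as a literal scorecard. A sketch, assuming a 0/1/2 scale per check (0 = absent, 1 = partial, 2 = clearly disclosed); the check names are our shorthand for the sections above:

```python
CHECKS = [
    "testable_claims",        # 1) marketing rewritten into verifiable statements
    "timestamped_archive",    # 2) tamper-evident recommendation history
    "methodology_disclosure", # 3) inputs, weighting logic, downgrade triggers
    "conflict_disclosure",    # 4) affiliates and sponsorship disclosed up-front
    "risk_framing",           # 5) specific, two-sided risk language
    "replicable_benchmark",   # 6) a baseline a reader could actually hold
]

def score(answers: dict) -> int:
    """Sum of 0/1/2 answers across the six checks (max 12)."""
    return sum(answers[c] for c in CHECKS)

q1 = {c: 1 for c in CHECKS}                       # first quarterly review
q2 = dict(q1, timestamped_archive=2)              # re-score three months later
print(score(q1), score(q2))  # 6 7
```

A rising score across quarterly re-scores is the “more transparent over time” signal; a score that only moves when the checks themselves are renegotiated is the goalpost-moving signal.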