Gresham's Law Online: Fixing the Scales

Online platforms promise to connect us with the best ideas, the most relevant news, and the smartest voices. Yet too often we find ourselves wading through clickbait, spam, and outrage. The problem is not simply that bad content exists - it is that the rules of the game reward it. When attention is the only currency and quick reactions the only measure, low-quality work crowds out thoughtful contributions. Fixing the scales requires rethinking what we measure, who can audit the measurements, and how incentives are designed.

The "bad drives out good" problem
Gresham's Law is an old monetary principle: bad money drives out good when the rules make it profitable to spend debased coins and hoard the ones with full precious-metal content. The mechanism is simple - if both coins are accepted at the same face value, rational actors keep the valuable coin and pass the cheap one along, and over time only the bad money remains in everyday use.
A strikingly similar dynamic plays out online. When platforms measure success by clicks, shares, and watch-time alone, they tilt the scales toward content that triggers fast reactions. Sensational headlines, outrage bait, and low-effort memes are cheaper to produce and easier to amplify than careful reporting or nuanced analysis. Thoughtful work gets buried because it does not generate the same immediate spike in engagement. The EU's Digital Services Act addresses these systemic risks head-on, requiring very large platforms to identify and mitigate harms that arise from opaque ranking and recommender systems.
Better measurement and transparency can reset the incentives. If platforms track not just volume but also source diversity, satisfaction, and verified provenance, quality signals begin to matter. When the rules reward lasting value instead of fleeting attention, good work can compete again. When verification, audit trails, and honest signaling are valued, contributors find it pays to invest in accuracy and depth rather than gaming shallow metrics.
How incentives shape what we see
Algorithms are optimizers. They chase whatever signal you give them. If the only inputs are clicks and shares, the system will surface content that maximizes those numbers - regardless of truthfulness or usefulness. Fast, provocative posts win because they deliver the metric the algorithm craves. Creators and bots quickly learn the playbook: engineer outrage, tease curiosity, promise easy answers. Careful journalism and expert knowledge cannot keep pace when success is measured in milliseconds and emotional spikes.
The solution is not to eliminate algorithmic curation but to enrich the signals it relies on. Completion rates reveal whether people actually read or watch to the end. Satisfaction surveys capture whether the content delivered value. Source diversity checks prevent echo chambers from distorting the feed. Verified provenance shows who created the work and whether it has been altered. Each of these signals adds friction to manipulation - gaming a single metric is easy, gaming a basket of independent, auditable indicators is much harder.
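To make the contrast concrete, here is a minimal sketch of a basket-of-signals ranker. The signal names, weights, and example values are invented for illustration, not any platform's actual formula:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """One item's quality signals, each normalized to 0.0-1.0."""
    clicks: float            # raw engagement, the classic single metric
    completion: float        # share of readers or viewers who finished
    satisfaction: float      # "was this helpful?" survey score
    source_diversity: float  # how varied the item's sourcing is
    provenance: float        # 1.0 if authorship and edit history are verified

# Hypothetical weights: engagement alone no longer dominates the score.
WEIGHTS = {
    "clicks": 0.15,
    "completion": 0.25,
    "satisfaction": 0.25,
    "source_diversity": 0.15,
    "provenance": 0.20,
}

def rank_score(s: Signals) -> float:
    """Weighted sum of independent signals. Gaming one input (say, buying
    clicks) moves the score far less than under clicks-only ranking."""
    return sum(w * getattr(s, name) for name, w in WEIGHTS.items())

# Clickbait: strong clicks, weak everything else.
clickbait = Signals(clicks=0.9, completion=0.2, satisfaction=0.1,
                    source_diversity=0.1, provenance=0.0)
# Thoughtful analysis: modest clicks, strong downstream signals.
analysis = Signals(clicks=0.3, completion=0.8, satisfaction=0.9,
                   source_diversity=0.7, provenance=1.0)

print(f"clickbait: {rank_score(clickbait):.2f}")  # scores well below...
print(f"analysis:  {rank_score(analysis):.2f}")   # ...the thoughtful piece
```

Under a clicks-only ranking the first item wins easily; under the basket it loses badly, because the signals it cannot fake carry most of the weight.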
Trust mechanisms like verification layers, provenance trails, and transparent audit logs make low-quality shortcuts expensive. When platforms publicly disclose why an item ranks highly, coordinated inauthentic behavior becomes easier to spot. When contributors earn credit for original work that others reuse, the incentive shifts from volume to genuine impact. These design choices build an economy where truth and quality are competitive advantages, not liabilities.
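One way to make an audit log tamper-evident is to hash-chain its entries, so that editing any past record invalidates every later hash. This is a minimal sketch with invented field names; a production system would add signatures and external anchoring:

```python
import hashlib
import json

def append_entry(log: list[dict], entry: dict) -> None:
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    payload = json.dumps(entry, sort_keys=True)
    entry = dict(entry, prev=prev_hash,
                 hash=hashlib.sha256((prev_hash + payload).encode()).hexdigest())
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any edit to a past entry breaks the chain."""
    prev_hash = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k not in ("prev", "hash")}
        payload = json.dumps(body, sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"item": "post-123", "rank_reason": "high satisfaction"})
append_entry(log, {"item": "post-456", "rank_reason": "downranked: bot network"})
print(verify(log))            # True
log[0]["rank_reason"] = "X"   # tamper with history
print(verify(log))            # False
```

Because each hash covers its predecessor, quietly rewriting history is detectable by anyone who can replay the chain.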
What the EU's DSA changes
The Digital Services Act marks a regulatory shift from voluntary self-regulation to binding obligations for the largest platforms. Very Large Online Platforms and Very Large Online Search Engines - those reaching at least 45 million monthly active users in the EU - must conduct annual risk assessments covering systemic threats like disinformation, manipulation, and the viral spread of illegal content. All regulated entities had to comply by 17 February 2024. Designated VLOPs and VLOSEs had four months from their official designation to meet the heightened requirements.
Transparency duties form the backbone of the DSA framework. Platforms must label advertising clearly, provide meaningful explanations for content moderation decisions, and grant vetted researchers access to data for public-interest studies. These obligations are designed to curb the opacity that allows low-quality or harmful content to flourish unchecked. Importantly, the DSA requires that all transparency measures respect data protection principles under the General Data Protection Regulation, ensuring that opening the black box does not compromise user privacy.
By mandating regular audits, public reporting, and independent scrutiny, the DSA creates accountability loops that were previously absent. Platforms can no longer rely solely on internal metrics and assurances. External researchers, regulators, and civil society can now verify claims, measure real-world harms, and push for evidence-based improvements. This shift from closed systems to auditable architectures is essential for reversing the dynamic where bad content drives out good.
Article 40: opening data for research
Article 40 of the DSA introduces a structured pathway for vetted researchers to access platform data. This provision targets systemic risks - the patterns of harm that emerge not from a single post but from the architecture of recommendation and ranking at scale. Researchers affiliated with accredited institutions and approved through a formal vetting process can request datasets to study questions like whether algorithmic amplification favors sensationalism over accuracy, or whether coordinated networks exploit engagement metrics to spread disinformation.
Privacy safeguards are built into every stage. Platforms must anonymize or aggregate data where necessary to comply with GDPR. The vetting process filters out frivolous or poorly designed requests, ensuring that access serves genuine public interest. For example, a platform designated as a VLOP must conduct its first risk assessment within four months of designation and implement mitigation measures shortly thereafter. Independent researchers can then test whether those measures actually reduce harmful virality or improve the visibility of reliable sources.
Public-interest research helps close the feedback loop. When academic teams publish findings on how ranking changes affect information quality, regulators gain evidence to refine rules and platforms face pressure to adopt best practices. The European Commission's guidance on DSA data access clarifies that this scrutiny is not optional - platforms that obstruct legitimate research face enforcement action. Over time, transparent data access can reveal whether new metrics and design interventions succeed in flipping the incentive structure.
Fair measurement that flips the incentives
Measuring outcomes instead of outputs is the first step. Raw click counts tell you how many people arrived; completion rates, satisfaction scores, and longitudinal engagement tell you whether they found value. Expert consensus and peer review signal whether a piece of work meets professional standards. These richer signals reward depth and accuracy, making it harder for manipulative tactics to dominate.
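As a toy illustration of outputs versus outcomes, the sketch below derives completion rate and mean satisfaction from an invented event log. The click count looks healthy; the outcome metrics tell a different story:

```python
# Hypothetical event log: one dict per user interaction with an article.
events = [
    {"user": "a", "clicked": True, "read_pct": 0.95, "rating": 5},
    {"user": "b", "clicked": True, "read_pct": 0.10, "rating": None},
    {"user": "c", "clicked": True, "read_pct": 0.88, "rating": 4},
    {"user": "d", "clicked": True, "read_pct": 0.05, "rating": 1},
]

# Output metric: how many people arrived.
clicks = sum(e["clicked"] for e in events)

# Outcome metrics: did they find value once they arrived?
completion_rate = sum(e["read_pct"] >= 0.8 for e in events) / len(events)
ratings = [e["rating"] for e in events if e["rating"] is not None]
mean_satisfaction = sum(ratings) / len(ratings)

print(f"clicks: {clicks}")                         # 4 - looks great
print(f"completion: {completion_rate:.0%}")        # 50% - half bounced
print(f"satisfaction: {mean_satisfaction:.1f}/5")  # 3.3/5 - middling value
```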
Tracking provenance and giving credit reinforces the shift. Persistent identifiers for creators, timestamps for edits, and usage logs that show when others cite or build on your work create a transparent record. When contributors earn residuals based on verified reuse, the incentive moves from chasing viral moments to producing foundational insights. Audit trails and transparent scoring let people see why an item ranks highly - raising the cost of gaming and building trust in the system.
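A usage log plus a simple payout rule is enough to sketch the idea of residuals. The events, creators, and the proportional split below are assumptions for illustration, not a reference payout model:

```python
from collections import Counter

# Hypothetical usage log: each verified reuse names the original creator.
usage_log = [
    {"work": "track-01", "creator": "alice", "event": "remix", "by": "bob"},
    {"work": "track-01", "creator": "alice", "event": "playlist", "by": "carol"},
    {"work": "essay-07", "creator": "dana", "event": "citation", "by": "erin"},
]

def residuals(pool: float, log: list[dict]) -> dict[str, float]:
    """Split a payment pool across creators in proportion to verified reuse."""
    reuse_counts = Counter(entry["creator"] for entry in log)
    total = sum(reuse_counts.values())
    return {creator: pool * n / total for creator, n in reuse_counts.items()}

print(residuals(90.0, usage_log))  # {'alice': 60.0, 'dana': 30.0}
```

Because every payout traces back to logged, attributable events, the same record that pays creators also serves as the audit trail.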
The Marpole whitepaper calls this idea "The economy of truth" and outlines how architectures that log contributions, enable peer verification, and distribute value based on actual impact create environments where good work circulates instead of being hoarded or drowned out.
When platforms adopt these multi-dimensional metrics, the playing field levels. Clickbait loses its edge because engagement alone no longer guarantees reach. Thoughtful analysis gains traction because satisfaction, completion, and expert endorsement now count. The scales tip back toward quality, and the bad-drives-out-good spiral begins to reverse.
What better "scales" look like in practice
Content ranking offers the clearest example. Instead of sorting by shares and clicks, platforms can weight source diversity - favoring feeds that pull from multiple independent outlets. On-page reading time and scroll depth reveal genuine engagement. User feedback that asks "Was this helpful?" or "Would you recommend this?" separates satisfaction from mere curiosity. Down-ranking signals linked to coordinated inauthentic behavior or bot networks prevents manipulation from skewing the results.
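Source diversity is measurable. One common choice is normalized Shannon entropy over the distribution of outlets in a feed, where 1.0 means a perfectly even mix; the example feeds here are invented:

```python
import math
from collections import Counter

def diversity(outlets: list[str]) -> float:
    """Normalized Shannon entropy of the outlet distribution (0.0-1.0)."""
    counts = Counter(outlets)
    if len(counts) <= 1:
        return 0.0
    total = len(outlets)
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return entropy / math.log2(len(counts))  # 1.0 = perfectly even mix

echo_chamber = ["outlet-a"] * 9 + ["outlet-b"]
balanced = ["outlet-a", "outlet-b", "outlet-c", "outlet-d"] * 3

print(f"{diversity(echo_chamber):.2f}")  # 0.47
print(f"{diversity(balanced):.2f}")      # 1.00
```

A feed dominated by a single outlet scores low, so a ranker can favor items that raise the diversity of what a user actually sees.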

For creators, fair scales mean payment tied to verified reuse and audience satisfaction rather than raw view counts. Musicians earn residuals when their tracks appear in playlists or remixes. Journalists receive micropayments when readers finish articles and rate them positively. Visual artists collect royalties when their work is reposted with attribution. These models reward contributions that others find valuable enough to build on, not just scroll past.
In workplaces, evaluation systems can shift from counting output to assessing outcomes and peer-reviewed impact. A developer is judged by the reliability and reusability of their code, not the number of commits. A researcher's influence is measured by citations and reproductions of their methods, not publication volume alone. This prevents low-effort output from crowding out meaningful work and encourages collaboration over competition for vanity metrics.
Markets and value exchange also benefit from transparent rules and tamper-resistant records. When transactions are logged and visible, high-quality goods and services circulate freely because buyers can verify provenance and reputation. Trustworthy sellers earn repeat business; fraudsters are identified and excluded. The result is an ecosystem where quality compounds instead of being hidden or displaced by cheaper substitutes.
How to act today
Platforms can start by adding quality-weighted metrics to their dashboards. Track not just reach but completion, satisfaction, and source diversity. Publish annual risk assessments as the DSA requires, and open privacy-safe data channels for vetted researchers. Transparency builds trust and surfaces problems early, before they scale into systemic harms.
Teams managing content, products, or services should map where low-quality outputs currently win. Introduce checks like provenance tracking, peer review, and satisfaction surveys to rebalance incentives. Make it easy to attribute original work and reward contributors when others build on their ideas. Small design changes - like showing edit histories or highlighting expert endorsements - can shift behavior without heavy-handed rules.
Creators can adopt practices that signal quality. Cite your sources, label edits clearly, and invite feedback that reflects usefulness rather than just popularity. Seek platforms and communities that value these signals. Over time, audiences learn to recognize and reward transparency, and the market advantage shifts toward those who invest in accuracy and depth.
Readers and users hold power too. Follow sources with track records of accuracy. Use platform controls to flag spam and coordinated manipulation. Support tools and services that prioritize verification and fair measurement. Collective action - choosing quality over convenience, transparency over virality - helps reinforce the incentives that make good content competitive again.
Frequently Asked Questions
What does Gresham's Law mean in simple terms?
Gresham's Law says that when two forms of money circulate at the same official value, people will spend the one that is worth less and keep the one that is worth more. Over time, only the inferior currency remains in active use because everyone hoards the good money. The principle shows how rules and incentives shape what circulates in any system.
Do algorithms make bad content outcompete good content?
Algorithms optimize for the signals they receive. When platforms measure only clicks, shares, and watch-time, fast and provocative content wins because it generates those metrics quickly. Quality work that requires time and thought cannot compete on those terms. Algorithms do not have preferences - they reflect the incentives built into the measurement system. Richer signals like completion rates, satisfaction, and verified provenance let algorithms surface quality instead of just speed.
What does the EU's Digital Services Act require from big platforms?
The Digital Services Act requires Very Large Online Platforms and Very Large Online Search Engines to conduct annual risk assessments for systemic harms like disinformation and manipulation. They must publish transparency reports, label advertising clearly, explain content moderation decisions, and provide vetted researchers with access to data for public-interest studies. All regulated entities had to comply by 17 February 2024, with designated platforms given four months from designation to meet full obligations.
How does researcher data access help improve online information?
Independent researchers can test whether platforms' claims about safety and quality match reality. By analyzing publicly accessible data under strict privacy protections, academics can identify patterns of harm, measure the effectiveness of mitigation measures, and publish findings that inform both regulation and platform design. This external scrutiny creates accountability and generates evidence for interventions that reduce low-quality content and amplify reliable sources.

