You've just signed up for Nostr. After generating your keys, you're faced with an empty screen. You click "Discover" or "Explore." Within seconds, a list of names, some perhaps already familiar from traditional social media, is suggested to you. Welcome to the first, and most pervasive, algorithmic layer of Nostr: the connection suggestion mechanism. This seemingly trivial moment is the entry point into a complex system that aims to replace central authority with decentralized trust, but often ends up building hierarchies through logic more opaque than what it criticizes.
This report is the result of an in-depth analysis of the technical documentation (NIPs), the observable behavior of popular clients, statements from core developers, and community dynamics. It is not a technical guide, but a critical mapping of the forces shaping the daily experience of every user, especially the average user who lacks the tools to decipher the code. Our goal is to deconstruct, piece by piece, the machinery that transforms the utopia of the "flat graph" into a reality of asymmetries and access barriers.
Part I: The Alchemy of Reputation - From Social Trust to Algorithmic Score
The Theoretical Foundation: Web of Trust and Its Practical Failure
The Web of Trust (WoT) is the conceptual pillar of sociality in Nostr. In cryptography, it describes a system where users mutually certify identities. In Nostr, this "trust" is translated into the simple "follow." The premise is democratic: everyone is a node of equal value in the network.
Reality is different. A network with hundreds of thousands of nodes and millions of "follows" is illegible to a human. To make this data mass useful, clients must filter, sort, and evaluate it. This is where the qualitative concept of "trust" is necessarily, and dangerously, translated into algorithmic quantities. This gives rise to proxy metrics for reputation: no longer "I trust Paloma," but "Paloma's npub has an influence score of X, calculated by function Y."
The Algorithms at Play: PageRank and Its Hybrid Creatures
The most cited (and often misunderstood) algorithm in this context is PageRank. Its basic principle is elegant: a webpage's importance is determined not only by how many links point to it, but by the importance of the pages linking to it. A link from an authoritative source is worth more than a hundred links from unknown sites.
In Nostr, the "link" is the follow. Some clients and analysis tools implement variations of PageRank to assign an "influence score" to each npub. GrapeVine and similar algorithms are essentially adaptations of this concept. The typical process works like this (a minimal sketch follows the list):
- The client downloads a large subset of the social graph from various relays.
- It applies a mathematical formula that weights each "follow" based on the follower's own score.
- It assigns a numerical value to each account.
- It uses this value to: generate suggestion lists ("Influential people to follow"), order comments under a post, or filter content in a "Global" view.
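To make the abstraction concrete, here is a minimal sketch of that process in TypeScript. It is not the code of any real client, nor of GrapeVine; the function names are invented, and a production implementation would also have to handle dangling nodes, convergence checks, and graphs far too large to hold in memory.

```typescript
// A follow graph: each npub maps to the list of npubs it follows.
// For simplicity, assume every account appears as a key (possibly with an empty list).
type FollowGraph = Map<string, string[]>;

// Naive PageRank-style "influence score": every account spreads its own score
// evenly over the accounts it follows; repeat for a fixed number of rounds.
function influenceScores(
  graph: FollowGraph,
  iterations = 20,
  damping = 0.85
): Map<string, number> {
  const npubs = [...graph.keys()];
  const n = npubs.length;
  let scores = new Map<string, number>();
  for (const p of npubs) scores.set(p, 1 / n);

  for (let i = 0; i < iterations; i++) {
    const next = new Map<string, number>();
    for (const p of npubs) next.set(p, (1 - damping) / n);
    for (const [follower, follows] of graph) {
      const share = (scores.get(follower) ?? 0) / Math.max(follows.length, 1);
      for (const followed of follows) {
        next.set(followed, (next.get(followed) ?? 0) + damping * share);
      }
    }
    scores = next;
  }
  return scores;
}

// A client could then build its "suggested accounts" list purely from this number:
// const suggestions = [...influenceScores(graph).entries()]
//   .sort((a, b) => b[1] - a[1])
//   .slice(0, 20);
```

Note how nothing in this computation knows anything about quality, honesty, or relevance; it only knows who already points at whom.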
The problem is not the mathematics, but its sociotechnical consequences. This algorithm:
- Cements Power Positions: Accounts that gained visibility early (often for reasons external to Nostr, like a strong Twitter presence) become, for the algorithm, "high-authority nodes." Every new follower they attract disproportionately increases their weight, which in turn makes them even more visible. A vicious circle of centralization is created in a system designed to avoid it.
- Punishes the Periphery: An artist, a local activist, or an unconventional thinker with 200 "real" but "low-scoring" followers will have less algorithmic influence than a mainstream influencer with 200 "high-scoring" followers. Their voice is systematically downranked: not censored, but made statistically invisible in feeds sorted by relevance.
- Masks Decision-Making: The average user does not see the "7.8" score associated with a profile. They only see that the account is "at the top of the suggestion list." The algorithm becomes an incomprehensible oracle that presents its decisions as facts, naturalizing a constructed hierarchy.
A Thought Experiment: The "Expert's Coffee" Syndrome
Imagine two cafés.
- Café A (Traditional Web): Has a sign at the entrance: "We decide what you like here. Our curators have chosen these 10 coffees for you."
- Café B (Nostr): Has no signs. There are 1000 different coffee machines, but all but 20 are hidden behind a curtain. A friendly waiter points to the 20 visible machines saying: "These are the most appreciated by the community!" You, as a new customer, don't know that the selection is made by an algorithm that decided to highlight machines already used by the café's oldest baristas.
In which café are you freer? Theoretically, in B. But in which café do you better understand the rules of the game? The lack of transparency about how those 20 machines were chosen makes the experience in Café B potentially more frustrating and manipulative. The Nostr user is the customer in Café B.
Part II: The Second Layer of Opacity: Relays as Algorithmic Gatekeepers
Beyond the Dumb Pipe: Relay Logic
The client/relay separation is Nostr's architectural genius. But the idea of the relay as a "dumb pipe" passing data is an idealization. In practice, relays are servers with bandwidth, storage, and computation costs. To survive, many implement policies that are, in effect, filtering algorithms.
These policies include (a sketch of such a policy follows the list):
- Rate Limiting: Limiting events per second from the same npub. A very active user might see their messages dropped by a congested relay, stifling their participation.
- Npub Whitelist/Blacklist: Some relays, especially private or thematic ones, only admit authorized accounts. Others actively block accounts considered spam.
- Content Filters: Some relays filter events based on keywords or specific NIP event types (e.g., NIP-36 for "sensitive" content).
- Source-Based Prioritization: It is plausible that a relay under load prioritizes forwarding events from clients or npubs it recognizes as "reliable" or "important" to serve the majority of users first.
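To illustrate what such policies can look like in practice, here is a minimal sketch of a relay-side admission check. The event shape is a subset of the fields defined by NIP-01; everything else, including the thresholds, the lists, and the function name acceptEvent, is hypothetical and describes no specific relay implementation.

```typescript
// Minimal Nostr event shape (subset of the fields defined by NIP-01).
interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  content: string;
  created_at: number;
}

// Hypothetical per-relay policy knobs.
const RATE_LIMIT_PER_MINUTE = 30;
const blockedPubkeys = new Set<string>();         // blacklist
let allowedPubkeys: Set<string> | null = null;    // whitelist; null = open relay
const blockedWords = ["casino", "airdrop"];       // naive keyword filter

// Events seen per pubkey in the current window (a real relay would reset this every minute).
const recentCounts = new Map<string, number>();

// Decide whether to store and rebroadcast an incoming event.
function acceptEvent(ev: NostrEvent): boolean {
  if (blockedPubkeys.has(ev.pubkey)) return false;
  if (allowedPubkeys && !allowedPubkeys.has(ev.pubkey)) return false;

  const count = (recentCounts.get(ev.pubkey) ?? 0) + 1;
  recentCounts.set(ev.pubkey, count);
  if (count > RATE_LIMIT_PER_MINUTE) return false;               // rate limiting

  const text = ev.content.toLowerCase();
  if (blockedWords.some((w) => text.includes(w))) return false;  // content filter

  return true;
}
```

From the publishing user's side, a rejected event simply never appears downstream; few clients surface why.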
The Definitive Asymmetry: You Don't Know What You Don't See
While the client applies sorting algorithms to received data, the relay applies selection algorithms to data to be sent. This is a qualitative leap in opacity.
Scenario: You follow 100 people. They publish 1000 events per day. Your preferred relay, due to bandwidth limits or internal policies, filters out 200 "upstream." Your client receives 800 and, with its internal PageRank algorithm, selects 50 to show you in a "Top" view. You, the user, believe you have a window into the network. In reality, you are looking through two consecutive, undocumented filters. You have lost 95% of the original signal, and you have no way of knowing what was discarded, by whom, or why. This is not decentralization; it is a decentralization of opacity. The filtering authority, instead of being in one company, is distributed across dozens of entities (relay operators) who answer to no one.
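The arithmetic behind that 95% is worth spelling out, because stacked filters multiply rather than add. A back-of-the-envelope sketch with the hypothetical numbers from the scenario:

```typescript
// Hypothetical figures from the scenario above.
const published = 1000;        // events your follows publish per day
const afterRelay = 800;        // events your relay actually forwards
const shownByClient = 50;      // events the client's "Top" view displays

const relayPassRate = afterRelay / published;            // 0.80
const clientPassRate = shownByClient / afterRelay;       // 0.0625
const visibleFraction = relayPassRate * clientPassRate;  // 0.05, i.e. 5% visible, 95% silently dropped
```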
The Relay Sustainability Problem and Its Perverse Effects
Most public relays are run by volunteers or communities. Costs grow with popularity. When a relay shuts down (a common event), entire portions of the social graph and conversation history become temporarily or permanently inaccessible to those who relied on it. This:
- Further Distorts Algorithms: Reputation algorithms running on clients depend on data received from relays. If an important relay hosting many "niche" accounts disappears, those accounts see their "algorithmic footprint" drastically shrink, becoming even more invisible.
- Creates De Facto Centralization: Users migrate to the few large, reliable, well-funded relays. These relays become critical chokepoints. Their policies (algorithmic or not) influence an ever-growing number of users, creating an infrastructural oligarchy.
Part III: On-the-Ground Consequences - The Ecosystem Born from These Choices
1. The Dictatorship of Relevance and the Death of Serendipity
Feeds ordered by reputation algorithms promote social homophily (association between similar individuals). You mainly see content from people already similar to those you follow, amplified by a system that rewards cohesion. Serendipity – the joy of the fortuitous discovery of something new and different – is strangled. Polemical but constructive debate, experimental art, minority political positions are crushed not by a censor, but by a mathematical equation that interprets their poor connectivity to "authority nodes" as poor relevance.
2. Systemic Vulnerability to Sybil Attacks (and Why They Are Insidious)
A Sybil attack involves creating a network of fake accounts to deceive a reputation system. On a system like Facebook, it would require forging verified identities. On Nostr, creating 1000 npubs is trivial and free. If a malicious actor creates 1000 fake accounts that follow each other and, strategically, follow a target account, they can artificially inflate its "influence score" in the eyes of a naive PageRank algorithm. That account could then appear in suggestion lists, credited with fake reputation. For the common user, this is a fundamental insecurity: they can no longer even trust the social status presented by the platform, because that status is relatively easy to manipulate.
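A toy simulation makes the fragility concrete. The sketch below applies the same kind of naive follow-weighted scoring discussed in Part I to a fabricated graph; the account names, the ring size, and the scoring function are all hypothetical.

```typescript
type FollowGraph = Map<string, string[]>;

// Build a hypothetical Sybil ring: `size` throwaway npubs that follow each other
// in a cycle and all follow the same target account.
function buildSybilRing(target: string, size = 1000): FollowGraph {
  const graph: FollowGraph = new Map();
  const fakes = Array.from({ length: size }, (_, i) => `fake-npub-${i}`);
  fakes.forEach((fake, i) => {
    graph.set(fake, [fakes[(i + 1) % size], target]); // follow the next fake plus the target
  });
  graph.set(target, []); // the target follows nobody in this toy graph
  return graph;
}

// Naive score: one round of "each follower passes its weight to whoever it follows".
function naiveScore(graph: FollowGraph, npub: string): number {
  let score = 0;
  for (const [, follows] of graph) {
    if (follows.includes(npub)) score += 1 / follows.length;
  }
  return score;
}

const graph = buildSybilRing("target-npub");
console.log(naiveScore(graph, "target-npub"));
// 500: half the combined weight of 1000 accounts that cost nothing to create
```

Real scoring systems add mitigations (seeding from the viewer's own follows, decay, minimum account age), but the underlying asymmetry remains: identities are free, and weight is derived from identities.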
3. The Knowledge Chasm Between Expert and Common User
This touches the heart of the critique. The expert user, who understands these mechanisms, can:
- Choose clients that allow fine control over algorithms (or disable them).
- Subscribe to dozens of relays, including private ones, to maximize data coverage.
- Read source code to verify implementations.
- Use external tools to analyze the social graph.
The common user, the declared target of this article, will do none of this. They will accept default settings. They will use the simplest client to install. They will trust suggestions. The gap between those who can navigate the system and those who are at its mercy becomes a chasm. Technology, born to emancipate, risks creating a new class of technical "priests" and a mass of passive "laity." It is the exact opposite of the promised empowerment.
Part IV: A Manifesto for the Common User - Survival and Resistance Strategies
Faced with this scenario, the response is not to abandon Nostr, but to use it with critical awareness. Here is a toolkit of conceptual and practical strategies.
1. Adopt an Investigator Mindset (Not a Consumer One)
- Suspect "Hot" Lists: Every "Trending," "Influential," or "Suggested" list is the output of a model. Mentally ask: "What is it measuring? What is it excluding?"
- Look for the "Disable Algorithm" Button: Some clients offer a pure "Chronological" view or the option to disable suggestions. Use it as your baseline. It's the equivalent of drinking tap water before tasting the cocktails.
- Ask Public Questions: In public channels (like those on popular clients), ask: "How does the suggestion algorithm work in this client?" Collective pressure for transparency is the only leverage the common user has.
2. Diversify Your Infrastructure
- Multiple Clients: Use two different clients for a week. Compare their "global views" or suggestions. The differences will illuminate each one's algorithmic choices.
- Hybrid Relay Strategy: Don't depend on a single relay. Configure at least 3-4 in your client: one large public one (for maximum coverage), one small or thematic one (for niches), and, if possible, one private or community relay you trust (see the sketch after this list). This reduces any single operator's filtering power.
- Manual Exploration as Discipline: Dedicating 10 minutes a week to manual exploration is revolutionary. Go to the profile of someone you appreciate and browse who they follow, not who is similar to them. It's the way to break bubbles and discover parallel communities.
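For the relay strategy in particular, Nostr already defines a standard way to publish such a list: the kind 10002 "relay list metadata" event from NIP-65. Below is an unsigned template; the URLs are placeholders, and a real client would compute the id and add the signature before publishing.

```typescript
// Unsigned template of a NIP-65 relay list (kind 10002). The relay URLs are
// placeholders; a real client computes the id and signs the event before publishing.
const relayListEvent = {
  kind: 10002,
  created_at: Math.floor(Date.now() / 1000),
  content: "",
  tags: [
    ["r", "wss://big-public-relay.example"],             // broad coverage, read and write
    ["r", "wss://small-thematic-relay.example", "read"], // niche content you want to see
    ["r", "wss://community-relay.example", "write"],     // a community you trust with your posts
  ],
};
```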
3. Support (and Demand) Transparency Projects
The long-term solution cannot be for every user to become an expert. There must be a bottom-up and top-down movement for:
- Algorithmic Labeling: Clients should have a clear label: "This feed is sorted by [Algorithm Name]." Even better: "Why are you seeing this? [Brief explanation in plain language]."
- Relay Documentation Standards: Relays should clearly and accessibly publish their admission, filtering, and rate limiting policies. A "Relay Manifesto" should be the norm.
- Clients with Exposed "Knobs": Future clients for common users should have simple sliders or options: "Priority to new content" vs. "Priority to popular content," "Discovery breadth: [Narrow —> Wide]." Delegate some fundamental choices to the user, explained clearly.
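As a hedged sketch of what one such knob could look like under the hood: a single slider that blends recency against popularity. The names and the weighting formula are invented for illustration; no existing client is claimed to work exactly this way.

```typescript
interface FeedItem {
  createdAt: number;   // unix seconds
  reactions: number;   // likes/zaps/reposts, however the client counts them
}

// freshnessBias: 0 = "priority to popular content", 1 = "priority to new content".
function feedScore(item: FeedItem, freshnessBias: number, now = Date.now() / 1000): number {
  const ageHours = (now - item.createdAt) / 3600;
  const recency = 1 / (1 + ageHours);             // decays as the post ages
  const popularity = Math.log1p(item.reactions);  // diminishing returns on reaction counts
  return freshnessBias * recency + (1 - freshnessBias) * popularity;
}

// The client sorts the feed with whatever value the user set on the slider:
// feed.sort((a, b) => feedScore(b, sliderValue) - feedScore(a, sliderValue));
```

The point is not this particular formula, but that the choice it encodes is exposed to the user instead of buried in the ranking code.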
Conclusion: Decentralization is a Verb, Not a Noun
Nostr is not decentralized because it uses cryptography. It is decentralized to the extent that the power to control, understand, and modify the system is distributed among its participants. Today, that power is concentrated in the hands of those who write the algorithms, run the relays, and understand the code.
The invitation of this report is not to distrust, but to constructive vigilance. Using Nostr critically means refusing to be a passive subject of opaque processes. It means asking questions, experimenting with configurations, talking about these issues. The real battle for a decentralized web is not only fought against Silicon Valley monopolies, but against the opacity and apathy that can be reborn in any system, even the most noble in its intentions. Your awareness is the first, and most powerful, algorithm of resistance.
#OpaqueAlgorithms #HiddenHierarchy #CriticalInfrastructure #DigitalSovereignty #SystemicCritique #NostrCritics #Algorithm #AskNostr #Decentralization #CensorshipResistance #Nostr #Moderation #Fediverse #Bitcoin #wotathon #FreeSpeech #OpenProtocol #NostrGrowth #NostrAdoption #WoT (Web of Trust) #NostrFeedback #NIP (Nostr Implementation Possibilities) #NostrCritique
Self-Critical Methodological Note
Premises and Fundamental Limits:
- The analysis deliberately focuses on potential negative effects and opacities to fulfill its function as "critical reportage" and "warning." This may produce a darker picture than the one perceived by the average daily user, who might find the platform simply useful and interesting.
- The critique is structural and does not evaluate the merit of individual projects or developers, many of whom are aware of these tensions and work to mitigate them.
- Nostr evolves rapidly. Some criticisms may be overcome tomorrow by new NIPs or innovative client implementations. This text captures a persistent design concern, not a definitive judgment on an immutable state of affairs.
Weak Points and Debatable Choices:
- Although long, the article still avoids proper names of clients and relays so as not to fossilize the criticism on specific implementations that may change. This sacrifices some journalistic concreteness in favor of a critique of principle.
- The use of the "Café" thought experiment is a rhetorically effective but perhaps excessive simplification.
- The "Manifesto" section proposes actions that still require a level of commitment higher than that of a traditional social media user. The tension between "easy to use" and "transparent and controllable" remains the unresolved dilemma the article signals but does not solve.
Usage Warnings:
- This text is a conceptual tool, not an instruction manual. Its purpose is to provide a critical lens, not to replace direct experience.
- It invites an active reading of technological reality. Its validity must be constantly tested by the user against their own experience on the platform.
- The most important conclusion is not in the individual recommendations, but in the invitation to consider algorithmic transparency as a fundamental user right in any digital system, decentralized or not.