Trusted Assertions in Relatr

If you’ve spent time building or using Nostr clients, you’ve felt the tension.

On one hand, Nostr is open by design: anyone can join, anyone can publish, anyone can run infrastructure. That openness is the point.

On the other hand, open networks don’t come with a built-in answer to practical questions clients must solve every day:

Who should I see first?

Which profiles look established?

How do I discover accounts beyond my immediate graph?

How do I search without trusting a single gatekeeper?

In other words: Nostr is a free ocean. But even free oceans need maps to navigate them.

Relatr exists to produce those maps—ranks, search, and other computation that helps clients navigate a permissionless network—without turning the mapmaker into an authority. With the new Trusted Assertions (TA) feature, relatr adds a second, complementary way to deliver those signals to Nostr: not only as real-time request/response, but also as published events anyone can fetch from relays.

This post introduces TA in relatr, why we implemented it, and why we think the combination of TA + request/response is the right shape for decentralization.

TA, explained like you’re building a client

Trusted Assertions (TA) is an emerging standard in Nostr described in NIP-85. In practice, it defines a way for a service to publish an assertion about a pubkey as a Nostr event—so clients can retrieve it later from relays—and it lets users declare which providers they trust.

You can think of it as the difference between these two workflows:

  • Request/response: “Hey server, what is Alice’s rank right now?”
  • TA publication: “The server already published its latest statement about Alice; I’ll fetch it from relays.”

That second model matters. When an assertion is published to relays, it becomes easier to cache, easier to distribute, and less dependent on a client being online at the exact moment a computation happens.
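
To make the second workflow concrete, here is a minimal consumer-side sketch in TypeScript with nostr-tools. The relay URL and provider pubkey are placeholders, and it assumes assertions are published as kind 30382 events (relatr's kind, covered later in this post) whose d tag names the target pubkey, per NIP-85—treat it as an illustration of the flow, not relatr's exact API:

```typescript
import { SimplePool } from "nostr-tools";

// Placeholders for illustration only.
const RELAYS = ["wss://relay.example.com"];
const PROVIDER_PUBKEY = "<pubkey of the relatr instance you trust>";

// Fetch the provider's latest published assertion about a pubkey from relays,
// instead of asking a server to compute it on demand.
async function fetchAssertion(targetPubkey: string) {
  const pool = new SimplePool();
  const assertion = await pool.get(RELAYS, {
    kinds: [30382], // NIP-85 assertion events (see below)
    authors: [PROVIDER_PUBKEY],
    "#d": [targetPubkey], // the d tag identifies who the assertion is about
  });
  pool.close(RELAYS);
  return assertion; // null if the provider has not published one yet
}
```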

TA is already being adopted: Brainstorm is another provider publishing assertions, and the Amethyst client has implemented support for consuming them. Relatr is one of the first solutions to adopt TA—and, importantly, the first to integrate it alongside the original request/response pattern.

Why relatr implements TA, and keeps request/response

Relatr now offers two APIs to consume computations from relatr servers:

  • ContextVM-style request/response (req/res): clients ask for a computation and receive a result immediately.
  • TA-style producer/consumer: servers publish assertions to relays; clients query relays to discover what’s available.

These are not competing interfaces. They are complementary tools for different parts of the problem.

Request/response: best for real-time, query-dependent work

Request/response is ideal when the answer depends on your specific question right now.

Search is the cleanest example. Search queries are dynamic, high-entropy, and user-specific. A producer/consumer model would require “publishing” an essentially infinite space of possible query results ahead of time, which makes no sense. Search needs req/res.

Req/res also fills gaps when TA events are missing. Relays can be unreliable; networks partition; users come and go. When you need a fresh answer, a direct computation path is the escape hatch.

The req/res model is also how we introduced this feature: users can enable any relatr instance that supports it to serve TA in a purely Nostr-native way—no REST APIs or similar required.

TA: best for distribution and caching

TA fits naturally into a producer/consumer model:

  1. a server computes an assertion (for example, a rank)

  2. the server publishes it to relays

  3. clients query relays and fetch the latest available assertions

That model has two strong properties:

  • It’s relay-native. Clients are already built around querying relays.
  • Client-side caching becomes straightforward. The assertion is a retrievable artifact; you can store it, reuse it, and sync it.

Relatr’s TA implementation publishes NIP-85 assertions as Kind 30382 events. The detailed mechanics—persistence, staleness rules, when relatr publishes, and what operators can tune—are covered in the technical spec: TA.md.
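
For intuition, the producer side of that flow might look roughly like the sketch below (TypeScript with nostr-tools). The "rank" tag name, the key handling, and the relay URL are illustrative assumptions—TA.md defines what relatr actually publishes and when:

```typescript
import { SimplePool, finalizeEvent, generateSecretKey } from "nostr-tools";

// Placeholders for illustration; a real instance uses its own configured key and relays.
const RELAYS = ["wss://relay.example.com"];
const serverSecretKey = generateSecretKey();

// Publish a computed rank for `targetPubkey` as a NIP-85 assertion (kind 30382).
// The "rank" tag name is illustrative; TA.md defines the exact tags relatr uses.
async function publishAssertion(targetPubkey: string, rank: number) {
  const event = finalizeEvent(
    {
      kind: 30382,
      created_at: Math.floor(Date.now() / 1000),
      tags: [
        ["d", targetPubkey], // the pubkey this assertion is about
        ["rank", String(rank)], // the computed value
      ],
      content: "",
    },
    serverSecretKey
  );

  const pool = new SimplePool();
  await Promise.any(pool.publish(RELAYS, event)); // resolves once one relay accepts it
  pool.close(RELAYS);
}
```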

“Ranks look like social credit” — the objection we take seriously

Numbers can be intimidating. A rank can feel like a verdict.

That concern is valid when ranking is centralized, global, and imposed—when one service becomes “the score” and everyone is pressured to comply. That is the dark pattern: a single algorithm hardens into a social hierarchy.

Relatr is built to avoid that failure mode.

Relatr is fully open-source, self-hostable, switchable, and designed to be customizable. Users and communities can define their own validations and models to compute ranks. If you don’t like a ranking algorithm, you don’t have to petition for change—you can run a different one.

This is not a minor implementation detail. It’s the difference between:

  • “Someone else decided how I should be judged,” and
  • “I chose (or helped build) the model my community uses—or I opted out entirely.”

TA strengthens this sovereignty story rather than weakening it. TA lets a server publish assertions, but the spec gives users explicit control: users publish Kind 10040 events to authorize which TA providers they trust. This means that you choose who speaks for you—and communities can choose which relatr instance they trust, or run their own.
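
As a rough illustration, opting in amounts to publishing a single event. The sketch below (TypeScript with nostr-tools) assumes the tag layout from the NIP-85 draft—assertion type, provider pubkey, relay hint—with placeholder values; in practice the /ta page described below does this for you, so check the spec before hand-rolling it:

```typescript
import { SimplePool, finalizeEvent, generateSecretKey } from "nostr-tools";

// Placeholders for illustration; in practice your client signs with your own key.
const RELAYS = ["wss://relay.example.com"];
const mySecretKey = generateSecretKey();
const RELATR_PUBKEY = "<pubkey of the relatr instance you trust>";

// Publish a kind 10040 event declaring which provider you trust for which assertion.
// The tag layout follows the NIP-85 draft; verify the exact names against the spec.
async function publishTrustList() {
  const event = finalizeEvent(
    {
      kind: 10040,
      created_at: Math.floor(Date.now() / 1000),
      tags: [["30382:rank", RELATR_PUBKEY, RELAYS[0]]],
      content: "",
    },
    mySecretKey
  );

  const pool = new SimplePool();
  await Promise.any(pool.publish(RELAYS, event));
  pool.close(RELAYS);
}
```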

What’s available now (and where to find it)

This feature is already deployed on the public relatr instance we operate.

The relatr.xyz website has been upgraded with a new page, /ta, that provides a simple interface to:

  • manage your TA list

  • enable relatr servers that support TA (including the public instance)

This release is tagged as version 0.1.11.

The goal is to make TA usable without turning it into a power-user ritual. If you’re not interested in protocol details, the experience should still feel intuitive: opt in, publish assertions, and let your client fetch and cache them from relays.

Notes for relatr operators

If you operate a relatr instance today and want to enable TA, you can do it with configuration. The full operator model—how relatr treats TA as a cached computation, what “active” means, and how refresh works—is documented in TA.md.

At a minimum, TA requires:

  • Update to the latest version of Relatr
  • Set TA_ENABLED=true in your env

Two interfaces, one direction: computation without surrender

Nostr clients need computation to navigate an open network. The mistake is assuming computation must be centralized.

Relatr’s approach is to make computation portable: open-source, self-hostable, configurable, and integrated with Nostr’s own distribution layer.

ContextVM request/response gives you real-time answers when you need them—especially for dynamic queries like search—and lets you interact with the provider without leaving Nostr.

Trusted Assertions gives you relay-native distribution and caching for published statements.

Together, they form a practical toolbox: users and communities can get tailored ranks, powerful discovery, and search, while keeping the power to define—and change—the models that generate those signals.
