
Whitepaper V5 (Full Text)

PDU Protocol — A Peer-to-Peer Social Network


Version 5

Email: liupeng@pdu.pub

Any information distribution system must achieve two fundamental goals: the free publication of information and the effective acquisition of information. Modern centralized social platforms sacrifice the former to achieve the latter, filtering content through censorship and account bans while requiring real-world identity verification for account creation.

This paper proposes a fully peer-to-peer social network system that does not rely on any third-party services and does not attempt to eliminate spam or malicious accounts at the system level. Instead, identity is defined as a totally ordered sequence of messages signed by the same private key—an identity is the set of ordered events itself.

Each information consumer constructs their own set of visible publisher identities based on interaction relationships among publishers, applying personalized rules to filter information effectively. The system has no global consensus and no “god view.” Information quality emerges as a statistical result of independent decisions made by all participants.


Information dissemination and interaction on today’s internet primarily depend on centralized platforms such as Facebook, Twitter/X, and WeChat. These platforms make it easy for users to publish content and build connections, using algorithms to detect and filter spam to improve user experience.

However, the drawbacks of centralized systems have become increasingly evident. Third-party operators may misuse user data or compromise privacy. Their large user bases enable lock-in effects and monopolistic behavior. Moreover, centralized services are vulnerable to regulation and censorship, as they are clearly identifiable control points.

Despite these issues, most users remain on existing platforms. While switching platforms does not necessarily result in data loss, it does mean losing accumulated social relationships and influence, which creates strong lock-in.

Decentralized social platforms have emerged to address these issues. For example, Mastodon adopts a federated architecture composed of multiple interoperable servers. While this removes a single central authority, user registration and content moderation still depend on server administrators. In practice, this model resembles a collection of smaller centralized systems and does not fundamentally solve the problem.

Blockchain-based social platforms (e.g., Steemit, Minds) introduce token-based mechanisms, requiring tokens to create or activate accounts and incentivizing user behavior. While this increases the cost of account creation, it does not effectively prevent fake accounts and introduces inequality, limiting inclusiveness.

Some platforms use invitation systems in their early stages to control user quality. While effective against malicious registrations, this approach restricts broader participation and can lead to homogeneous community culture dominated by early users.

The common issue across these approaches is that they all attempt to eliminate spam or malicious accounts at the system level. However, there is an inherent tension between the two core goals of information systems: free publication and effective filtering. Centralized platforms prioritize filtering through censorship. In a fully decentralized system, there is no universal standard for what constitutes “spam.”

Therefore, a truly decentralized system cannot and should not eliminate unwanted content at the system level. Instead, it should allow all information to exist while enabling each participant to filter efficiently according to their own criteria.

This system follows that philosophy. Its goal is not to prevent malicious behavior, but to contain its impact within limited scope. High-quality information spreads further because it is accepted by more participants, while low-quality information naturally contracts due to being filtered out.

These design decisions are rooted in a unified philosophical foundation: identity is defined as a sequence of ordered events; each participant constructs their own “view,” and no global perspective exists. Information transparency is a structural requirement, not a feature preference.


In this system, the traditional concept of a “user” is divided into two independent roles:

Publishers are visible entities in the system. All actions—posting, replying, reposting, liking, blocking—form an immutable, ordered chain of messages through cryptographic signatures. This chain defines the publisher’s identity.

The publisher’s goal is to maximize influence: not raw reach alone, but a subjectively judged balance of reach, duration, and audience impact.

Consumers are invisible entities. Their goal is to efficiently acquire information using personalized filtering rules based on trust relationships derived from publisher interactions.

These rules are stored locally, leave no trace in the system, and define the consumer’s visible identity set—the set of publishers whose content they can see.

A user may have multiple publisher identities and independent or shared filtering rules. There is no binding between identity and filtering logic.


Messages are the fundamental data structure and the only unit transmitted in the network.

Each message consists of:

  • Content (media + interaction type)
  • Reference list (signatures of related messages)
  • Signature

The message is hashed and signed using the publisher’s private key to ensure integrity and identity.

Importantly, information quality cannot be judged at the level of individual messages. It must be evaluated over the publisher’s entire ordered message history.


A publisher identity is defined as a totally ordered chain of messages signed by the same private key.

Maintaining this order is the publisher’s responsibility. Each message must reference the previous message signed by the same key. The first message uses a null reference.

A consistent total order allows all participants to resolve conflicts. However, if the chain forks, consensus becomes impossible. In such cases, the only reasonable action is to ignore the identity from the fork point onward.

Unlike blockchain systems, forks here are considered the publisher’s fault. Any violation of ordering rules results in penalties, typically ignoring or blocking the identity.
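A consumer-side check of the ordering rule can be sketched as follows; the field names and the "ignore from the fork point onward" policy follow the text, while the function itself is a hypothetical helper:

```python
def valid_prefix(chain: list[dict]) -> int:
    """Return the number of leading messages that satisfy the ordering rule.

    Each message must reference the signature of the previous message by
    the same key; the first message must carry a null reference. On the
    first violation (a fork or a broken link), the identity is ignored
    from that point onward.
    """
    prev_sig = None
    for i, msg in enumerate(chain):
        if msg.get("prev") != prev_sig:
            return i  # violation: trust only the messages before it
        prev_sig = msg["sig"]
    return len(chain)

chain = [
    {"sig": "a", "prev": None},
    {"sig": "b", "prev": "a"},
    {"sig": "c", "prev": "x"},  # violates ordering: wrong predecessor
]
assert valid_prefix(chain) == 2
```

Note that this check is purely local: each consumer can run it independently on whatever portion of the chain it has received, which is why no global consensus is needed to penalize a forking publisher.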


To prevent abuse (e.g., generating new identities at zero cost), the system uses a trust propagation mechanism.

  1. Consumers define initial trusted identities.
  2. From these, they add identities that receive positive interactions (e.g., likes, replies, reposts) initiated by already-trusted identities.
  3. This process can extend multiple layers but is usually limited to one.

Blocking acts as a path-specific exclusion mechanism.

This process is gradual and naturally limits the size of the visible set.
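The propagation steps above can be sketched as a simple breadth-first expansion. The data shapes are assumptions, and blocking is modeled here as a plain exclusion set rather than the path-specific mechanism the text describes:

```python
def visible_set(seeds: set, endorsements: dict, blocks: set, layers: int = 1) -> set:
    """Compute a consumer's visible identity set by trust propagation.

    seeds:        initially trusted publisher identities
    endorsements: {identity: identities it has positively interacted with}
    blocks:       identities this consumer excludes (simplified; the text
                  describes a path-specific exclusion)
    layers:       propagation depth (usually 1, as in the text)
    """
    visible = set(seeds)
    frontier = set(seeds)
    for _ in range(layers):
        nxt = set()
        for identity in frontier:
            for endorsed in endorsements.get(identity, ()):
                if endorsed not in visible and endorsed not in blocks:
                    nxt.add(endorsed)
        visible |= nxt
        frontier = nxt  # only newly added identities endorse further out
    return visible - blocks

# Consumer trusts A; A has positively interacted with B and C; C is blocked.
result = visible_set({"A"}, {"A": ["B", "C"]}, blocks={"C"})
print(sorted(result))  # ['A', 'B']
```

Because each layer only expands from newly added identities, the visible set grows gradually and stays bounded, which matches the claim that the mechanism naturally limits its own size.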

New identities must:

  • Be discovered (via third-party services)
  • Earn trust through content over time

Time becomes an unforgeable cost.


  • Publishers aim to maximize influence.
  • Consumers aim to efficiently filter information.
  • System goal: ensure natural information flow.

High-quality content spreads further due to acceptance. Low-quality content shrinks due to filtering. No global consensus or system-level judgment is required.


This system eliminates privacy issues at their root:

  • No real-world identity is required
  • No personal data is collected or stored
  • All messages are public by design

Private communication must occur outside the system using encryption.


Third-party services are optional and supportive, including:

  • Search
  • Messaging
  • Analytics
  • Advertising
  • Voting systems

None of these services has control over the system.


The system does not depend on cryptocurrency but can support blockchain as an extension.

Publisher identities can replace staking mechanisms, enabling more efficient consensus models such as PoS, DPoS, PoA, or Avalanche.


This paper presents a fully peer-to-peer social network system that balances free publication and effective information acquisition without relying on centralized control or system-level filtering.

Identity is defined as a chain of ordered events. Consumers independently construct their view of the network through filtering rules. All identities—human, organizational, or AI—are treated equally.

Information quality emerges organically through decentralized filtering, not centralized enforcement.