p2panda-auth is a convergent, offline-first CRDT (Conflict-free Replicated Data Type) which helps with managing members, admins or moderators in a group and giving them permission to sync, read or change application data. If you want to learn about the concrete implementation and design choices of p2panda-auth, we have an in-depth blog post for you here.
This post is about our general learnings from building an Access Control CRDT, followed by our exploration of patterns for how application data can be “authorised”. This includes application integrations and combinations with key agreement protocols. These patterns can look very different based on an individual application’s requirements (and there are surely many other approaches we didn’t cover). There is no straightforward answer or single solution - so here is an attempted “summary”!
Do you need eventual consistency?
If we can trust a single server to authorise edits from different users, we can simply rely on the “linear history” of access control and data changes. The changes are “serialised” in the order the server received them.

Centralised server serialising all permission and app-data changes
We can apply a similar rule in peer-to-peer systems as well: as soon as a peer learns that a permission was revoked (the user is not allowed to change the data), it rejects any future data from that user. Every peer can manage that “authorisation state” themselves and act accordingly from their own perspective.

Authorisation in peer-to-peer without eventual consistency
This example shows how both peers “synced” the removal of Owl’s permission at some point, but accepted different edits from Owl before doing so. Peer A will end up with a different app state than Peer B. In systems like this we can’t guarantee “eventual consistency” of the application data - at least not without further work.
Imagine a frequently changing key/value database: if peers constantly overwrite the values with new data (in a “last-write-wins” fashion), it may not matter that they are briefly “out of sync” until they converge to the same state again.
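As a sketch of that idea, here is a minimal last-write-wins register (the names and the `(timestamp, author)` tie-break are illustrative assumptions, not p2panda code): no matter in which order two peers apply the same set of writes, they converge to the same value.

```rust
use std::collections::HashMap;

/// A minimal last-write-wins register: the entry with the highest
/// (logical timestamp, author id) pair wins deterministically.
#[derive(Clone, Debug, PartialEq)]
struct Entry {
    value: String,
    timestamp: u64,
    author: u64, // tie-breaker when timestamps are equal
}

#[derive(Default)]
struct LwwStore {
    entries: HashMap<String, Entry>,
}

impl LwwStore {
    /// Apply an incoming write; keep whichever entry "wins".
    fn apply(&mut self, key: &str, incoming: Entry) {
        match self.entries.get(key) {
            Some(current)
                if (current.timestamp, current.author)
                    >= (incoming.timestamp, incoming.author) => {}
            _ => {
                self.entries.insert(key.to_string(), incoming);
            }
        }
    }

    fn get(&self, key: &str) -> Option<&str> {
        self.entries.get(key).map(|e| e.value.as_str())
    }
}
```

Because the winner is picked by comparing the entries themselves rather than by arrival order, peers that receive the same writes in different orders still end up with the same state.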
In p2panda we believe that this is a very valid option for building many peer-to-peer applications. The trick will be (once again) to figure out patterns for identifying which guarantees are required by which applications, and when.
For a computer game that might not be so nice though: maybe Owl ended up making an extra move and gaining more points on Peer A, while Peer B has a different game state.
The answer is really: it depends!
How would we introduce eventual consistency here? We would need to detect concurrent and conflicting changes - for example, a user changing data while concurrently being removed - and be able to retroactively handle the “unauthorised” edits.
An access control system doesn’t need to be complicated when applications don’t require eventual consistency guarantees from it or if they can model them outside of that scope. Simple Capability-based Tokens with an expiry date can be enough to account for such applications.
In p2panda we want an access control solution which can account for all sorts of application needs, including the guarantee to “converge” to the same application state based on the given access control, with all the bells and whistles we need around concurrency and conflict resolution. The answer is to build a “convergent”, offline-first Access Control CRDT and we will show later how we can integrate it into application data, with different patterns around eventual consistency, moderation and encryption.
First let’s build that CRDT.
Building a convergent, offline-first Access Control CRDT
There are different strategies to “authorise” someone to do something in an application. In our setting we would like users to create groups. These groups provide a context to manage higher-level key agreement CRDTs for group encryption and are “composed” on top of the Access Control CRDT. A group creator is able to add and remove other users and give them “access rights”, for example write or admin access.
This is similar to an Access Control List (or ACL) which is a bit like a guest list of a party: If you’re on the list you are allowed to enter, if not - you can’t. Every peer manages such a list and checks it to ensure that received user actions are permitted.
This is different from building a convergent capability-based CRDT, like in Keyhive, where users form “delegation” chains to permit access. However, the CRDT parts are fairly similar.
Dependencies & out of order handling
Independent of the application data, we know one thing for sure: we want access control to always be consistent! Every peer should be able to learn about the same, latest status of the access control system at some point.
This doesn’t seem so hard to achieve at first sight: every peer eventually receives the “user got removed” operation and converges to that state, knowing they need to reject that user’s data from this point on.
Things can still get a little tricky though: what if we receive operations out of order? What if we receive a user’s removal before they were even added? We need a way to describe “dependencies” so we can tell whether we’re missing operations.
For this we give every operation a unique identifier, for example a hash. Now operations can “declare dependencies” by mentioning a list of operation IDs which they consider important to be “processed before”. Peers who receive an operation can now understand if they’re still missing other operations before they can go on.
This structure of dependencies forms a Directed Acyclic Graph (DAG). Our favourite graph for many peer-to-peer problems! <3

The “edges” of the graph describe the “dependencies” of every “node”.
Peers can now easily reason about what they are missing and process things only if they know that they have seen every dependency of that operation.
If we look closer we can see that the “processed” operations are effectively a linearised, topologically ordered sequence after their dependencies have been checked.
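A minimal sketch of this buffering logic, with short string identifiers standing in for real operation hashes (illustrative names only): operations are held back until every declared dependency has been processed, and released in a valid topological order.

```rust
use std::collections::HashSet;

/// An operation identified by a (simplified) hash, declaring the
/// operation ids it depends on.
struct Operation {
    id: &'static str,
    dependencies: Vec<&'static str>,
}

/// Buffers out-of-order operations and releases them only once all
/// of their declared dependencies have been processed.
#[derive(Default)]
struct Orderer {
    processed: HashSet<&'static str>,
    pending: Vec<Operation>,
}

impl Orderer {
    /// Ingest one operation; returns every operation that became
    /// "ready" as a result, in a valid topological order.
    fn ingest(&mut self, op: Operation) -> Vec<&'static str> {
        self.pending.push(op);
        let mut released = Vec::new();
        loop {
            // Find any pending operation whose dependencies have all
            // been processed already.
            let pos = self.pending.iter().position(|op| {
                op.dependencies.iter().all(|d| self.processed.contains(d))
            });
            match pos {
                Some(i) => {
                    let op = self.pending.remove(i);
                    self.processed.insert(op.id);
                    released.push(op.id);
                }
                None => break,
            }
        }
        released
    }
}
```

Releasing one operation can unblock others that were waiting on it, which is why the loop keeps scanning until nothing more is ready.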
Now we can at least be sure that we haven’t “missed” anything. If we choose our hashing algorithm wisely we can make sure that it is impossible to “guess” an operation id because they are simply too long and “too random”. Like this, peers can only refer to an operation if they really learned about it before. This gives us the guarantee of Integrity.
Ordering
Another property we have from DAGs is that we can start to reason about the “order” of operations.
In our decentralised systems we can’t really reason about the “exact” order of concurrent operations as we don’t have a single source of truth which orders events for us (like a centralised server or a consensus-based, single-ledger blockchain). This is why DAGs are perfect for describing this “partial ordering”. A partial order is a set of operations in which we can’t always compare two entries and know which one was created first. When we can’t compare them directly, we only know they occurred at the same “logical” time.

We don’t know exactly in which order C and D took place here, we only know that these have been “concurrent” operations as both of them didn’t know about each other while they got created.
Authorisation
How can we know that a member was really allowed to add someone to the group?
We need to make sure that new operations can only be applied to the graph if previous operations authorised them. Like this we can form a sort of “trust chain” or proof and trace back the “logical” operations until the beginning and check on every step if they got authorised to do that action.

Owl is allowed to add Pig to the group because we can see that Owl was made an Admin by Sheep before and that Sheep is an admin because they created the group before
If we sign every operation on top with a cryptographically-secure signature we can make sure that the author of this change can prove their identity. This gives us the guarantee of Provenance.
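The chain check can be sketched like this (a toy model with only three actions and a single “admin” level; the real system is richer): we walk the linearised operations from the group’s creation and verify at each step that the author was authorised to act.

```rust
use std::collections::HashSet;

#[derive(Clone, Copy)]
enum Action {
    Create { creator: &'static str },
    AddAdmin { member: &'static str },
    AddMember { member: &'static str },
}

struct Op {
    author: &'static str,
    action: Action,
}

/// Walk a linearised chain of group operations from the beginning
/// and check, at every step, that the author was authorised.
fn verify_chain(ops: &[Op]) -> bool {
    let mut admins: HashSet<&'static str> = HashSet::new();
    let mut members: HashSet<&'static str> = HashSet::new();
    for (i, op) in ops.iter().enumerate() {
        match op.action {
            // Only the very first operation may create the group.
            Action::Create { creator } if i == 0 => {
                admins.insert(creator);
                members.insert(creator);
            }
            // Only admins may add admins or members.
            Action::AddAdmin { member } if admins.contains(op.author) => {
                admins.insert(member);
                members.insert(member);
            }
            Action::AddMember { member } if admins.contains(op.author) => {
                members.insert(member);
            }
            _ => return false, // unauthorised or malformed step
        }
    }
    true
}
```

Replaying the Sheep/Owl/Pig example from above: Sheep creates the group, makes Owl an admin, and only then is Owl’s addition of Pig accepted.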
Concurrency
By looking at a DAG we can reason about the partial order and concurrent operations easily. How do we identify concurrent operations programmatically?
In a DAG we can use a traversal technique to identify all concurrent operations from the perspective of a single “target operation” by moving from that target to all other reachable nodes in the graph, in depth-first order. We mark all visited nodes as “successors” of that target.
Next we reverse all edges and do the same traversal again. All visited nodes can now be marked as “predecessors” of the target. Like this we get the set of all operations which are neither predecessors nor successors of the target operation, which means they have been concurrent to it.
We need to repeat this process for every node in the graph. Each time we detect a set of “concurrent nodes” we recurse down each of them, repeating the same process again but merging the outcome as one “concurrent bubble”. Bubbles which only contain one node are ignored.
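A sketch of the reachability part of this technique, simplified to finding the operations concurrent to a single target (the “bubble” merging and recursion are omitted). In this toy graph, edges point from an operation to its dependencies, so forward reachability yields the operations that happened before the target and the reversed graph yields those that happened after; everything else was concurrent.

```rust
use std::collections::{HashMap, HashSet};

/// Edges point from an operation to the operations it depends on.
type Graph = HashMap<&'static str, Vec<&'static str>>;

/// All nodes reachable from `start` by following edges, depth-first.
fn reachable(graph: &Graph, start: &'static str) -> HashSet<&'static str> {
    let mut visited = HashSet::new();
    let mut stack = vec![start];
    while let Some(node) = stack.pop() {
        for &next in graph.get(node).into_iter().flatten() {
            if visited.insert(next) {
                stack.push(next);
            }
        }
    }
    visited
}

/// Build the graph with all edges reversed.
fn reversed(graph: &Graph) -> Graph {
    let mut rev = Graph::new();
    for (&from, tos) in graph {
        for &to in tos {
            rev.entry(to).or_default().push(from);
        }
    }
    rev
}

/// Operations which neither happened before nor after `target`
/// were concurrent to it.
fn concurrent_to(graph: &Graph, target: &'static str) -> HashSet<&'static str> {
    let happened_before = reachable(graph, target);
    let happened_after = reachable(&reversed(graph), target);
    graph
        .keys()
        .copied()
        .filter(|&n| {
            n != target && !happened_before.contains(n) && !happened_after.contains(n)
        })
        .collect()
}
```

On a simple diamond (b and c both depend on a, d depends on both), b and c come out as concurrent to each other while d is concurrent to nothing.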

Traversing the graph with this technique gives us a list of all concurrent bubbles in it. At this stage it is not necessary to account for “nested” bubbles. It is sufficient to consider them as one single entity.

This is arguably a very involved way to compute concurrent bubbles in a graph and using Lamport Timestamps can be more efficient. However, they do not give us the integrity guarantees of the hashes, so implementations will probably end up having both systems next to each other.
Conflicts
Where things get the most tricky is on concurrent group changes which might be in conflict with each other. For example: What happens if someone wants to add a member to the group while they are concurrently being removed?
Since we can now detect concurrent operations or “bubbles” we can apply rules for merging conflicting changes.
There are very different approaches on how to handle these conflicts, all coming with different advantages and disadvantages:
- Seniority ranking: “older” members win over concurrent removals. Prone to Sybil attacks, so further mitigation is required
- Remove both: a solution for byzantine scenarios where a group was “infiltrated”, but it might end up without admins, so the group needs to recover by starting anew
- Decide “by higher hash”: the removal operation with the “higher hash” value wins deterministically but “randomly”. Operations can be adversarially chosen to “win”
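The last strategy is simple enough to sketch in a few lines (the function name is hypothetical). It shows why the rule is deterministic yet arbitrary: every peer picks the same winner, but which operation that is depends only on how the hash bytes compare.

```rust
/// Deterministic tie-break between two conflicting operations,
/// identified by their hashes: the lexicographically higher hash
/// wins. Every peer applies the same rule and so converges, but
/// the "winner" is essentially random - and an attacker can grind
/// operations until their own hash wins.
fn resolve_by_higher_hash<'a>(hash_a: &'a [u8], hash_b: &'a [u8]) -> &'a [u8] {
    if hash_a >= hash_b { hash_a } else { hash_b }
}
```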
Which strategy to pick should be decided by a threat model and by how well it can be communicated to the users. Which conflict resolution strategies work well for authorisation CRDTs and their users is still to be explored.
Consensus & Finality
Whenever a peer publishes an operation in the DAG we can use their “dependency” pointers as proof that they have “acknowledged” the state up until that point in the graph.
Since the number of members in the group is known at this point, we have a fixed bound on how many acknowledgements we need to reach “finality” or consensus on the current state.
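A sketch of counting acknowledgements towards finality (hypothetical API; real acknowledgements would be derived from the dependency pointers in published operations):

```rust
use std::collections::HashSet;

/// Tracks which members have (directly or transitively) pointed at
/// a given operation. Once every known member has acknowledged it,
/// the state up to that operation can be considered final.
struct FinalityTracker {
    members: HashSet<&'static str>,
    acks: HashSet<&'static str>,
}

impl FinalityTracker {
    fn new(members: &[&'static str]) -> Self {
        Self {
            members: members.iter().copied().collect(),
            acks: HashSet::new(),
        }
    }

    /// Record that `member` published an operation depending on the
    /// tracked operation (an implicit acknowledgement).
    fn acknowledge(&mut self, member: &'static str) {
        if self.members.contains(member) {
            self.acks.insert(member);
        }
    }

    /// Finality is reached once all current members acknowledged.
    fn is_final(&self) -> bool {
        self.acks == self.members
    }
}
```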
This is useful for a whole range of improvements to our Auth CRDT:
- We can explore pruning or compaction techniques to remove or “compress” parts of the history of the DAG
- We can reason about what a peer has “missed” when they acknowledged something. With that knowledge we might know something they didn’t, so we can “forward” that missing information to them. This gets especially interesting when accounting for concurrent changes in key agreement protocols, like p2panda-encryption
- We can “lock in” the group state from this moment on and agree on finality: every change which is applied into that “past” will be considered byzantine behaviour and illegal. This “forking of the past” is also called equivocation, and consensus protocols like this one can help us detect and mitigate it

Integration with p2panda
To build CRDTs based on DAGs we already have all the tools we need inside of our p2panda-core crate. Our Operation core data type gives us:
- Hashing functions to derive an identifier from the operation and guarantee Integrity
- Digital Signature Scheme for each operation, to guarantee Provenance
- Extensions to declare dependencies and partial order
- Append-only log structure per author to detect forks within a log; this can be optionally used to build byzantine fault tolerance on that layer
Additionally we provide ready-made implementations in p2panda-stream to efficiently order incoming operations based on their declared dependencies and keep them around (offloaded into a database) until they are “ready”.
With operations and the orderer we can now work with partially-ordered, dependency-checked (handling out-of-order arrival), authenticated and linearised streams of operations.
This is a basic building block for powerful, authorised CRDTs or other new data types. With p2panda-auth we’re introducing our first eventually-consistent / convergent authorisation CRDT which can easily be combined with p2panda operations and our orderer.
Integration with applications
Different applications will have different requirements around integrating an Authorisation CRDT.
In any case we need some sort of “Events API” for the Access Control layer to inform applications about any group changes. In p2panda we also give the application information about when in logical time this event took place. This is the ID of the operation in the graph. For removal events we also mention the set of operations which potentially have been created by the removed member concurrent to their removal.
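A hypothetical shape for such an events API might look like this (the names are illustrative, not the actual p2panda-auth API):

```rust
/// Stands in for the hash of the group operation behind an event.
type OperationId = [u8; 32];

/// Events the access control layer emits towards the application.
enum GroupEvent {
    MemberAdded {
        member: String,
        /// Position in logical time: the id of the group operation.
        at: OperationId,
    },
    MemberRemoved {
        member: String,
        at: OperationId,
        /// Operations the removed member may have created
        /// concurrently to their removal.
        concurrent_operations: Vec<OperationId>,
    },
}
```

On a removal event the application can look up the listed concurrent operations and decide, per data type, how to handle them.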

With all of this information, group events and application messages, an application has everything to deal with authorisation changes. Developers will need to reason about the eventual consistency guarantees in their applications, depending on the kind of data, how content is “moderated” and how it is represented to the users.
The aim is to come up with “patterns” describing different use-cases and concrete examples in the future which hopefully will make these decisions easier.
We’ve already talked heaps about eventual consistency in the beginning of the blog post, so this might feel familiar:
“Weak” / No eventual consistency: As soon as the application is informed about a member’s removal it simply starts to exclude future messages authored by the member from that point on.
This approach is very simple and doesn’t guarantee eventual consistency. Peers might pick different points from where they’ve started to exclude messages from the removed member.
Depending on the application data that might not be a problem though, either because consistency is dealt with on a higher level or because the application doesn’t care.
Consistency could be reached again, for example, by using a key/value store with last-write-wins logic which converges to the same state as soon as another member overwrites the entry with a new value.
There’s no “automatic moderation” taking place - members would need to edit the data or remove messages manually.
Guaranteed eventual consistency: As soon as the application gets informed about a member’s removal it automatically removes the data from the point where the group changed, including concurrent messages.
For simple data types, like chat messages or “social media” posts, it is easier to remove every item associated with a concurrent operation, while for text CRDTs we might need to “re-play” the operation graph from the removal point onwards and filter out all concurrent operations during materialisation.
With this “re-basing” or “re-playing” approach we can guarantee eventual consistency for complex data.
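A toy sketch of the “re-play” idea (with a character-append log standing in for a real text CRDT; all names are illustrative): materialisation walks the linearised log again and simply skips every operation that was invalidated by a concurrent removal.

```rust
use std::collections::HashSet;

struct AppOperation {
    id: &'static str,
    /// Append one character to the shared text (a toy "text CRDT").
    insert: char,
}

/// Re-materialise application state from a topologically ordered
/// log, skipping every operation that was invalidated by a
/// concurrent member removal.
fn materialise(log: &[AppOperation], invalidated: &HashSet<&'static str>) -> String {
    log.iter()
        .filter(|op| !invalidated.contains(op.id))
        .map(|op| op.insert)
        .collect()
}
```

Because every peer replays the same log with the same invalidation set, they all arrive at the same re-materialised state.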
For moderation we can be sure that no messages will be displayed concurrent to the removal; anything which took place before needs to be manually removed or edited.

The first example shows a more “complex” text CRDT with an “invalid” change which needs to be retroactively removed. To remove Panda’s concurrent change we need to go back in time and “re-play” the changes to the text, without the concurrently removed operations. The state will be “re-materialised”.

The second example is simpler, as we don’t need to re-do anything. We can simply remove the concurrently created post based on the operation IDs we learned about from the removal event.
Integration with key agreement protocols
When integrating an Authorisation CRDT with key agreement protocols for group encryption (for example p2panda-encryption) we have to be aware of some concurrency edge-cases which might lead to (accidentally) leaked secret keys.
This is a little excursion into Key Agreement protocols in decentralised, offline-first systems but it also shows how the composition of different convergent data-types can lead to interesting problems we need to think about.
We define our key agreement protocol in a way where, on every “Add” operation, the newly introduced member learns the secret keys for that group so they can decrypt other members’ messages or encrypt new messages towards the group.
Consider a scenario where Panda creates a group containing Panda and Bear. Bear then adds Owl to the group while, concurrently, Panda removes Bear. We can resolve this “conflict” easily with our Authorisation CRDT but unfortunately the secret keys might have been leaked to Owl without Panda being aware of it!

In a system with forward secrecy, such as p2panda’s “Message Encryption” Scheme, we would not run into problems as Owl will not be able to decrypt any previously created messages. However, if we use encryption systems with only Post-Compromise security, like p2panda’s “Data Encryption” Scheme, Owl will now be able to decrypt all previously created data, even if they’ve only been very “briefly” in the group from Bear’s perspective!
To avoid this scenario in PCS-only encryption systems we need to ask for consensus from the group before we hand over secret keys to new members. We can achieve this with a form of acknowledgement protocol (similar to what we’ve described previously) and only allow sharing secrets after every removal has been acknowledged by everyone or at least a majority of the group.
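A sketch of that gate (hypothetical helper; a real protocol would track acknowledgements per removal operation as described above): secrets are only shared once every pending removal has been acknowledged by a majority of the group.

```rust
use std::collections::HashSet;

/// Before handing secret keys to a newly added member in a PCS-only
/// scheme, require that every pending removal has been acknowledged
/// by at least a majority of the group.
fn may_share_secrets(
    group_size: usize,
    pending_removal_acks: &[HashSet<&'static str>],
) -> bool {
    let majority = group_size / 2 + 1;
    pending_removal_acks
        .iter()
        .all(|acks| acks.len() >= majority)
}
```

In the Panda/Bear/Owl scenario above, Bear would be unable to hand the group secrets to Owl because Bear’s own pending removal has not been acknowledged.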
Patterns everywhere!
This was it for now for our little excursion into convergent data-types! We hope that it inspired you to see a range of “tricks” one can do with them to make them work in peer-to-peer environments.
It is still too early, but it makes us hopeful for a future where we will slowly converge to re-usable “patterns” which can be applied to all sorts of problems, apart from access control solutions. Across the p2panda stack we can slowly see those “repetitions”. It surely helped to have developed all data types independently from each other, outside of a monolithic all-in-one solution as it forced us to be very clear about the common interfaces they share.
Maybe one day it will be very easy to “compose” them with other solutions outside of the p2panda universe because these patterns, terminology and requirements around convergent data types become well-understood and known? Imagine combining the access control data type with someone else’s work to do efficient pruning, or another one’s code to detect and mitigate byzantine behaviour? This is quite an exciting future for sharing code, progress and research across peer-to-peer projects!
Having just released the first version of our p2panda-auth crate, it seems like the right time to write about access control in decentralised systems. In the process, we’ll share an overview of the system we’ve designed - as well as a discussion of some of the technical challenges involved in implementing peer-to-peer group management and access control. We’re grateful to NLnet for supporting us in this work.
System Requirements
Before diving into the details of the system we’ve implemented, let’s first ask: why might we want an access control system? Broadly speaking, access control gives us a way to define who can interact with some specific data and how they can interact with it. The “who” can be thought of as an actor; this might be a cryptographic keypair mapping to a single person, or a keypair which represents one of several devices controlled by a single person. An actor may even be a group which is itself composed of several other actors.
Reading & Writing
The two most basic forms of interacting with data in an access control system are reading and writing. In the process of composing this blog post, for example, I’d like to give my teammates the ability to read and modify the text while only allowing external advisors to read it. As another example, I may wish to have a music folder on my laptop which is shared between all of my other devices. Access control allows us to clearly define abilities and gives us a means of realising these scenarios.

Sloth has written a list of their favourite activities: sleeping, cuddling and climbing. Owl has read-access to the document; they can read the list that Sloth has authored but not make any changes. Cat, on the other hand, has write-access and decides to replace “climbing” with “meowing”.
Replication
In decentralised or peer-to-peer systems we also need to keep in mind that data travels between devices in a network. Since a direct connection to a particular peer is not always possible, we may wish for some intermediate peers to be able to assist with passing data through the network. Since those peers may be untrusted, or may simply not be the intended recipient of some data, it’s useful to have a means of granting the right to replicate without the right to read.
This is where encryption comes into the picture; it allows us to prevent unauthorised reading of data. The ability to read is thus mapped to the ability to decrypt. In systems with connection-based replication protocols we can rely on a separate access control level to define who is allowed to receive data (even though that data is still encrypted). When connecting to a peer, before we begin sending any requested data, we first check whether that peer has pull-access. If not, we refrain from fulfilling the request.

Sloth has write-access to a list of activities: sleeping, cuddling and climbing. Beetle only has pull-access to the list, meaning that they can receive and pass-on that data but cannot decrypt it for reading. Beetle replicates the data from Sloth and forwards it to Shark. Shark has read-access; they’re able to decrypt the data and read the list that Sloth authored.
Intuitive & Customisable
In the context of p2panda, we want access control to be intuitive for application developers to integrate into their software. In addition, we wish to allow for custom access conditions which can further constrain an actor’s access level over some data, and we aim to be conservative in terms of meeting our security requirements. These aspects of our work will become clearer throughout the rest of this post.
Implementation Approaches
There are several design approaches to meeting the requirements we outlined above. Here we briefly describe two such systems for maintaining and enforcing access to resources.
Capability-Based Access Control
Capability-based access control systems rely on secure authorisation tokens and use delegation chains to verify which actors have access to any particular set of data. For example, I may issue a token granting a relative read access to my photo-sharing folder. That token is then handed over as proof of access when my relative tries to read the folder. Such systems allow delegation of received access; an actor can pass on any received capability to other actors. Meadowcap from the Willow team is a good example of a pure capability-based access control system.
Delegations will most often include an expiry date, the expectation being that if access should be maintained a new token will be issued before the previous one has expired. Some systems include the ability to retroactively revoke a previously-delegated access. In such cases, all dependent delegations will also be revoked.
Distributed Access Control Lists
An alternative approach to capability-based systems is the Distributed Access Control List (ACL). ACLs are commonly used to restrict filesystem access and access to resources on centralised servers. In such systems, a list is kept which maps an actor to an access level. In decentralised contexts, we also require the ability to collaboratively maintain and modify the list. To do so, some actors are given special rights which allow them to edit the ACL. Given the possibility of conflicts resulting from concurrent edits, Distributed ACLs are likely to rely on a Conflict-Free Replicated Data Type (CRDT) to encode changes to the list.
Design & Implementation
We’ve ended up with a generic decentralised group management system with fine-grained, per-member permissions. Once a group has been created, members can be added, removed, promoted and demoted. A group member can either be an individual (usually represented by a single public key) or another group. Assigned access levels can be restricted with application specific conditions.
Access Levels
Each member has an associated access level which can be used to determine their permissions. The access levels we’ve defined are Pull, Read, Write and Manage. The precise access granted by each level is left open to interpretation but in the upcoming integration of our p2panda auth and encryption systems they will be as follows: Pull gives the ability to replicate encrypted data, Read gives the ability to decrypt data, Write gives the ability to mutate data and Manage gives the ability to mutate the group state.
Each access level is cumulative, meaning that it includes the rights granted by lower levels (i.e. Read also includes Pull rights). Each access level can be assigned an associated set of conditions; this allows fine-grained partitioning of each access level. For example, Read conditions could be assigned with a path to restrict access to specific areas of a dataset. Finally, only members with Manage access are allowed to modify the group state by adding, removing, promoting or demoting other members.
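The cumulative behaviour falls out naturally if the levels are modelled as an ordered enum (a sketch; not the exact p2panda-auth types). Deriving `Ord` on a Rust enum compares variants in declaration order, so a simple `>=` check expresses "this level includes that right".

```rust
/// Access levels in ascending order; deriving `Ord` gives us the
/// cumulative behaviour: a level includes all rights below it.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum AccessLevel {
    Pull,
    Read,
    Write,
    Manage,
}

impl AccessLevel {
    /// `self` grants `required` if it is at least as high.
    fn grants(self, required: AccessLevel) -> bool {
        self >= required
    }
}
```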
Group Control Operations
The aforementioned group actions are published as group control operations; each operation is cryptographically-signed, contains a group identifier and action and refers to previous operations and dependencies. Together, these operations form a causal Directed Acyclic Graph (DAG) which is used to track modifications to the group state over time. The previous field allows us to establish a causal “happened before” relationship between group actions, while the dependencies field offers a way to point to operations outside of the group which may need to be processed before the action is applied (such as application-specific dependencies or dependencies on other groups).
Concurrency Resolver
Membership state for a group is maintained locally using a Causal-Length CRDT based on grow-only sets, which allows for efficiently merging states across graph branches. However, its simplicity does not allow us to fully handle conflicting group states emerging from some concurrent scenarios. In such cases, all operations in the DAG are walked in a depth-first search so that any “bubbles” of concurrent operations may be identified. Resolution rules are then applied to the operations in these bubbles in order to populate a filter of operations to be invalidated. Once the offending operations have been invalidated, any dependent operations are then invalidated in turn.
We have defined the Resolver as a Rust trait to allow for multiple implementations with contrasting rulesets for identifying which concurrent operations are to be considered invalid. This approach arises from the understanding that applications have different requirements around trust and security; some may operate in high-stakes contexts where the most cautious implementation is always preferred, while others may operate in low-stakes contexts without the need for strict conflict resolution. The initial offering of our p2panda-auth crate offers a single resolver implementation which we refer to as a “strong removal” resolver. The ruleset is as follows:
1) Removal or demotion of a manager causes any concurrent actions by that member to be invalidated
2) Mutual removals, where two managers remove or demote one another concurrently, are not invalidated; both removals are applied to the group state but any other concurrent actions by those members are invalidated
3) Re-adds are allowed; if Alice removes Charlie then re-adds them, they are still a member of the group but all of their concurrent actions are invalidated
4) Invalidation of transitive operations; invalidation of an operation due to the application of the aforementioned rules results in all dependent operations being invalidated
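As a sketch of how such contrasting rulesets can live behind one trait (illustrative types only, not the actual p2panda-auth trait): a resolver receives a concurrent “bubble” and answers which operation ids are invalid. This toy version covers only the direct part of the “strong removal” ruleset; the transitive rule would then propagate invalidation to dependents.

```rust
use std::collections::HashSet;

/// A concurrent "bubble" around one or more removals/demotions.
struct ConcurrentBubble {
    /// Members removed or demoted within the bubble.
    removed_members: HashSet<&'static str>,
    /// (operation id, author) pairs concurrent to those removals.
    concurrent_ops: Vec<(&'static str, &'static str)>,
}

/// Pluggable conflict resolution: different implementations encode
/// different trust and security requirements.
trait Resolver {
    fn invalidated(&self, bubble: &ConcurrentBubble) -> HashSet<&'static str>;
}

/// "Strong removal": any operation authored concurrently by a
/// removed or demoted member is invalidated.
struct StrongRemoval;

impl Resolver for StrongRemoval {
    fn invalidated(&self, bubble: &ConcurrentBubble) -> HashSet<&'static str> {
        bubble
            .concurrent_ops
            .iter()
            .filter(|(_, author)| bubble.removed_members.contains(author))
            .map(|(id, _)| *id)
            .collect()
    }
}
```

An application with weaker requirements could swap in a resolver that invalidates nothing, without touching the rest of the group logic.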
We fully realise, as mentioned before, that this ruleset is not optimal or desirable for all cases. For example, an alternative implementation of the Resolver might utilise the seniority of a member to act as a tie-breaker in the case of a mutual removal. In that scenario, the member who was added to the group first would remain in the group and the more recently added member would be removed.
Another scenario: what happens when the only manager of a group is removed or demoted? Is the group state forever frozen? Or are all group members automatically promoted to managers? The flexibility of our approach allows for both options to be catered for. We look forward to further discussions around these different requirement scenarios and we would be available to assist anyone who wishes to implement their own custom resolver for p2panda-auth.
Debugging Graphs
When trying to reason about various group membership and access control scenarios, it can become extremely challenging to hold complex operation graphs in one’s head. We spent many hours discussing such scenarios during the design of our system and often resorted to sketching diagrams to try and gain an understanding of what was happening. To make things easier, we’ve implemented a means of printing an auth group graph (using graphviz) to allow for visualising the group control message DAG. This is especially helpful when debugging group state and understanding the impact of concurrency.

Here we have a visualisation of a DAG of group membership operations. The graph is read from bottom to top. A group is created with two initial members: individuals A and B. Both of these individuals are assigned manage-access. We then have two concurrent branches of operations. In the right-hand branch: individual A removes B. In the left-hand branch: individual B adds C to the group and assigns them manage-access (this operation is shown with a red background); C then adds individual D to the group with read-access (this operation is shown with an orange background). Since individual B is removed in the right-hand branch, any concurrent actions of theirs are invalidated (shown in red) and any dependent (aka. transitive) actions are also invalidated (shown in orange). A “Group Members” table with a green background shows individual A as the only member; this is a representation of the resolved state of the DAG.
Nested Groups
Each member in a p2panda auth group can either be an individual or a group. Individuals are understood to be “stateless” due to the fact that they represent a single immutable identity. Groups, on the other hand, are understood to be “stateful” as they contain a mutable set of members. Defining group members in this way allows us to create nested group relationships, where a single group may contain several sub-groups as members.

Here we have another visualisation of a DAG of group membership operations, this time illustrating a nested group scenario. A group ‘T’ is created with a single initial member: individual A with manage-access. Separately, a group ‘D’ is created with two initial members: individuals L (manage-access) and M (write-access). Individual A adds individual B to group ‘T’ with manage-access. Individual B then adds individual C to group ‘T’ with read-access. The last operation in the graph points to the creation of group ‘D’ as a dependency and adds that group to group ‘T’ with manage-access. A “Group Members” table with a green background shows five members: A: manage, B: manage, C: read, L: manage and M: write.
Challenges
Centralised vs. Decentralised
Group management and access control is relatively straightforward in a centralised context where a server is the single source of truth and all group updates are received in total order. The server “knows” exactly which actions occurred before the others and is able to validate each one before updating the group state or granting a member access to a specific resource. This means that there can never be conflicting group states (except in the case of a bug or an exploited vulnerability). We don’t have such luxuries when building peer-to-peer systems.
In our world, a peer in the network may receive group updates in any order. To make matters worse, there may be long delays between when a group action is taken and when it’s learned about by other peers in a network (this could even take years!). So, unlike centralised systems, we have to rely on the partial order of actions (we know that one action happened after another but we don’t know exactly when each one happened) to detect concurrent modifications of the group state and then apply specific rules to ensure that all members will eventually converge on the same state.
We also need to take into account malicious actors who may try to manipulate the group state for their own gains; for example, to harass other members or retain access to their data. It is this paired need to ensure eventual consistency in the face of concurrency and to negate byzantine actors that drives many of our design decisions.
Complex Edge Cases
As we’ve already mentioned, concurrency brings about some complex and challenging scenarios for group management and eventual consistency. Here we outline a few such scenarios and describe how our current implementation handles them.
Mutual Removal Involving Byzantine Actor
Penguin is a group manager and promotes Parrot to manager access level. Right afterward, Penguin changes her mind about Parrot and immediately demotes him. Parrot quite enjoys his promotion to manager status and chooses to ignore Penguin’s demotion action. As a result, all of Parrot’s future actions are technically considered concurrent with Penguin’s demotion action. Parrot then goes on to demote all other group members, making him the sole authority figure in the group.

Here we have a Concurrency Diagram depicting a case of “mutual removal” involving a byzantine actor, as described in the paragraph above. Operations on replicas are shown in boxes. Synchronisation between replicas is shown with a “Merge” arrow. An operation happens before a later operation if there exists a path between the first and the second, potentially going through merge arrows. If no path exists, then the operations are concurrent.
In this scenario, a third group member (such as Duck) who has received Penguin’s demotion of Parrot and Parrot’s demotion of Penguin will determine that a concurrent, mutual removal has occurred. As such, they will remove both Penguin and Parrot from the group and roll back or ignore any subsequent actions by those two members. This ensures that Parrot’s nefarious plan is ultimately undone.
It’s worth noting that in a group with only two managers, a mutual removal effectively freezes the group membership state; no manager remains to add, remove, promote or demote other members. An alternative resolver implementation might choose to promote all remaining members to manager level in that case. Alternatively, one could rely on a seniority principle - where a remaining member with the longest history of group membership would be declared a manager. We have chosen what we believe to be a more conservative approach, where the remaining group members would need to create an entirely new group to re-establish manager roles.
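As a rough sketch of the detection step, the following self-contained Rust snippet checks whether two removal operations are concurrent (neither causally precedes the other) and target each other’s authors. The Op structure, integer ids and function names are illustrative simplifications, not the p2panda-auth implementation:

```rust
use std::collections::HashSet;

// Minimal operation: an id, an author, the member it removes (empty for
// non-removal operations) and its causal dependencies.
struct Op {
    id: usize,
    author: &'static str,
    removes: &'static str,
    deps: Vec<usize>,
}

// Walk backwards from `b` through its dependencies: if we reach `a`,
// then `a` happened before `b`.
fn happened_before(ops: &[Op], a: usize, b: usize) -> bool {
    let mut stack = vec![b];
    let mut seen = HashSet::new();
    while let Some(x) = stack.pop() {
        if x == a {
            return true;
        }
        if seen.insert(x) {
            let op = ops.iter().find(|o| o.id == x).unwrap();
            stack.extend(op.deps.iter().copied());
        }
    }
    false
}

// Resolver rule sketch: a "mutual removal" occurs when two removals are
// concurrent and each targets the other's author. The resolver then
// removes both authors from the group.
fn mutual_removal(ops: &[Op], a: usize, b: usize) -> bool {
    let oa = ops.iter().find(|o| o.id == a).unwrap();
    let ob = ops.iter().find(|o| o.id == b).unwrap();
    let concurrent = !happened_before(ops, a, b) && !happened_before(ops, b, a);
    concurrent && oa.removes == ob.author && ob.removes == oa.author
}
```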
Concurrent Removal
Duck creates a group and promotes Penguin to manager access level. Penguin receives the promotion control message after syncing with Duck and then decides to promote Parrot to manager. Parrot goes on to promote some of his friends to manager access level. At this point, without yet knowing about Penguin’s promotion of Parrot and Parrot’s subsequent actions, Duck chooses to demote Penguin so that she is no longer a manager.

Here we have another Concurrency Diagram, this one depicting a case of “concurrent removal”, as described in the paragraph above.
In this scenario, since Penguin’s promotion of Parrot and Duck’s demotion of Penguin happened concurrently, Penguin is no longer a manager (since the demotion takes precedence) and any downstream actions taken by Penguin are ignored. This means that Parrot and his friends are no longer managers of the group and any actions they took as managers are invalidated.
Broadcast-Only Contexts
Another open question which emerged during our work is how to achieve access control in broadcast-based systems; this includes systems which rely exclusively on gossip-based replication strategies. In such cases, we can’t control who will receive the data - just like a community radio station can’t control who listens to their broadcast. As long as we have strong encryption in place, we can at least control who is able to make sense of the received data. We consider this an open problem and look forward to discussing possible solutions with other researchers.
Informal Correctness Argument
Ultimately, our access-control design is based on replicating a grow-only set of authenticated, immutable operations: every participant replicates and maintains the full history of operations associated with every group, and a grow-only set is already known to be a state-based CRDT. Only correct operations are replicated; we verify that the causal history of an operation, i.e. the set of all operations that happened before it, indeed proves that the author had the permission to perform that operation at the time. In the presence of concurrent operations that are replicated later, and given a Resolver strategy such as the one presented before, some correct operations may later be invalidated. Since an invalidated operation will never be considered valid again, no matter what additional information is obtained by replicating new operations, the set of operations used to compute permissions behaves as a state-based CRDT and is therefore convergent: given the same set of replicated operations, two participants will compute the same permissions. Our design is therefore eventually consistent (see Byzantine Eventual Consistency).
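The convergence claim ultimately rests on set union being commutative, associative and idempotent. A minimal illustration, with integer ids standing in for authenticated operations:

```rust
use std::collections::BTreeSet;

// State is the grow-only set of replicated operations; merge is set
// union. Union is commutative, associative and idempotent, so all
// replicas converge on the same set, and permissions are computed as a
// deterministic function of that set.
fn merge(a: &BTreeSet<u64>, b: &BTreeSet<u64>) -> BTreeSet<u64> {
    a.union(b).cloned().collect()
}
```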
Related Work
How does our approach compare with other decentralised access control systems?
localfirst/auth
The TypeScript library localfirst/auth uses groups (“teams”) to define access-control and encryption boundaries. In their own words:
This library provides a Team class, which wraps the signature chain and encapsulates the team’s members, devices, and roles. With this object, you can invite new members and manage their permissions.
Group membership is managed using the operation-based CRDT library CRDX. Roles can be dynamically added to a group; a member’s access-level is inferred from the roles they are assigned. Members assigned the special “admin” role can perform actions which change the group membership (i.e. add/remove members, create/assign/remove roles). New members are invited to the group using Seitan Tokens.
Our approach is similar in how group state is managed using an operation-based CRDT. We were quite influenced by the way in which localfirst/auth allows custom approaches to conflict resolution. We differ in our use of nested-groups and access-levels with associated conditions (rather than roles) to describe a group member’s capabilities. Another difference is that we do not use invitation tokens.
Keyhive
The Ink & Switch project Keyhive also uses a groups abstraction for their integration of access-control and encryption systems into the automerge CRDT library. They describe their approach as using “convergent capabilities”, which are intended to be similar to object-capabilities while being partition-tolerant and therefore suitable for use with local-first/offline-first applications and CRDTs in general. Group membership is derived from delegation chains; any member can delegate the capability they hold to another actor. A previously-delegated capability can be revoked by the original delegator or any member with special “manage” authority.
Our approach uses similar access-levels with attached conditions and also makes use of nested-groups. We differ in the fact that we only allow “manager” members to add new members to a group (rather than any member being able to delegate their own capability). That said, it’s still possible for users to follow a similar delegation approach using nested groups with the Principle of least privilege (POLA).
Both localfirst/auth and Keyhive use something similar to a Cryptree for data encryption. This is different from our approach which can be read about in detail here.
What’s next?
So far most of the work has gone into how access levels are defined and associated with individuals or groups of actors. The next steps are integrating this system with p2panda-encryption and to take on the task of how we associate groups with a set of application data. This final piece of the puzzle is tentatively named p2panda-spaces.
Why are crashes dangerous for applications and especially peer-to-peer ones?
A classic example: Parrot wants to send one apple to Horse. Parrot starts the transaction and removes an apple from their store. Horse receives the apple and adds it to their store. If one of the processes crashes on either Parrot’s or Horse’s side, we might end up with a situation where Parrot’s state has one less apple and Horse’s has none.

An example of a failed transaction where Parrot will lose an apple and Horse will never receive it.
We never want to end up in a situation where a failure like that leads to the app hanging in an invalid state. Trying to recover a “broken” peer is especially hard when doing things without any central coordination.
This blog post is about the strategies and design ideas we’re exploring in p2panda to make p2p applications resilient to critical failures, for both system- and application layers.
Processing system- and application data
Processes usually change their internal state when receiving new data or input. Peers observe messages on a network and process them based on a set of rules. Message processing results in a new state which then gets moved into a store or database.

Regular processing pipeline.
What is different from more traditional client-server models is that in peer-to-peer systems every single peer needs to process the incoming data and store the materialised or indexed state by itself, instead of relying on a server doing the “hard work” and the “lightweight” client cheaply querying the materialised result.
In p2panda it is possible to model application data and the processing of it however you like. We have observed three emergent “patterns” for doing so:
- Peers send messages to each other which simply need to be stored in a database, for example chat messages or “social media” posts. Not much processing needs to be done here.
- Peers send State- or Operation-based CRDTs (Conflict-free replicated data types) to each other. These “updates” form a new CRDT state which allows all peers to eventually converge to the same state. Applications like Reflection follow this pattern.
- Peers send events to each other which are processed based on a form of “event sourcing” logic. This can also be understood as event or stream processing (depending on where you come from). By re-processing all events we can re-construct the state again. Applications like Toolkitty follow this pattern.
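As a minimal illustration of the second pattern, here is a toy state-based last-write-wins register in Rust. The type is purely illustrative; real applications like Reflection use far richer CRDTs:

```rust
// A minimal state-based last-write-wins register. Merging two replica
// states keeps the value with the higher (timestamp, author) pair, so
// peers converge to the same value regardless of delivery order.
#[derive(Clone, Debug, PartialEq)]
struct LwwRegister {
    value: String,
    timestamp: u64,
    author: String,
}

impl LwwRegister {
    fn merge(&self, other: &LwwRegister) -> LwwRegister {
        // Tie-break on author id to keep the merge deterministic
        // (and therefore commutative).
        if (other.timestamp, &other.author) > (self.timestamp, &self.author) {
            other.clone()
        } else {
            self.clone()
        }
    }
}
```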
In most application contexts there is underlying system data which needs to be processed before we process our application data.
In p2panda we offer different building blocks as part of the system layer which solve a collection of common challenges in p2p systems, for example: Authentication, Integrity Guarantees, Ordering, Deletion, Garbage Collection, Offline-First, Access Control, Roles Management and Group Encryption.
Information required to coordinate these system-related data-types or protocols is usually transmitted as part of the Header of every p2panda Operation being sent. This Header contains information to describe append-only logs, DAGs giving us partial ordering, signatures, pruning mechanisms, key agreement, capabilities etc. Next to the Header the Operation also contains a Body; this is where the previously-mentioned application data goes. The exact shape of the data is defined by the application but it must be encoded as bytes for inclusion in the Body.

Operations processed first on system- then on application layer. The system layer uses data stored in the Header of the Operation for processing, the application layer is mostly interested in what’s in the Body.
Whenever a p2panda Operation arrives at a peer we need to separate the system- and application concerns and handle them one after another. First we look into the Header, process it and adjust our system state accordingly. Has this author been invited to an encrypted group context? Is this author allowed to write to this document? Did this Operation arrive out-of-order and do we need to wait before we can proceed with it? Can we decrypt the Body?
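A heavily simplified sketch of that check sequence might look as follows. All types, field names and rules here are illustrative placeholders, not p2panda’s actual header validation (which also covers signatures, decryption and group membership):

```rust
// Hypothetical, simplified view of the system-layer checks applied to
// an operation's header before its body reaches the application layer.
struct Header {
    author: &'static str,
    seq_num: u64,
}

struct Operation {
    header: Header,
    body: Vec<u8>,
}

enum SystemOutcome {
    Forward(Vec<u8>),       // all checks passed, hand body to the app layer
    Wait,                   // arrived out-of-order, buffer until deps arrive
    Reject(&'static str),   // permanently invalid
}

fn process_system_layer(
    op: &Operation,
    next_expected_seq: u64,
    allowed_authors: &[&str],
) -> SystemOutcome {
    // 1. Access control: is this author allowed to write here?
    if !allowed_authors.contains(&op.header.author) {
        return SystemOutcome::Reject("author has no write permission");
    }
    // 2. Ordering: did this operation arrive out-of-order?
    if op.header.seq_num > next_expected_seq {
        return SystemOutcome::Wait;
    }
    // 3. Everything checks out: forward the body.
    SystemOutcome::Forward(op.body.clone())
}
```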
After all of the system processing and checks have taken place, we can finally “forward” the Operation to the application layer which is controlled by the developers who want to build anything on top. Here we can only guess what will happen, but let’s assume that some sort of processing will also be required and whatever comes out of it will land in some application state, potentially persisted in a database like SQLite.
Our goal is to allow developers to combine and “stack up” different processing flows for their individual application needs. Does your application need permission management? Add p2panda-auth to the processing pipeline!
This is very similar to middleware APIs in classical HTTP frameworks like Express or Tower where different “features” can be stacked up on top of each other for every incoming “request”.

Stacking system layers like “middlewares” in p2panda.
While we’re building completely p2panda-independent data-types and protocols, for example around access control in p2panda-auth or group encryption in p2panda-encryption, we offer a common ground with p2panda-stream where these solutions can easily be combined and integrated into every application’s pipeline.
The name and APIs are still in flux but we believe that this gives the most framework-independence and flexibility while allowing application developers to focus primarily on application-level concerns.
We’ve experimented with Rust Stream to express a “flexible” middleware API in our p2panda-stream crate, but unfortunately Rust’s strict type system and the exploding complexity and boilerplate around nested async calls and generics make using it unenjoyable and prone to bugs. Currently we’re exploring a more pragmatic approach with slightly less flexibility and significantly better readability and ease of use.
Why all of this introduction into the p2panda processing pipeline when talking about crash resilience? We see now that there are multiple state changes required before we even arrive at the application layer. In addition, we only really want to commit to the new state if the application finally says: “Yes, this operation was valid and everything is ok”. To achieve this guarantee we need to be able to express the processing of an operation across both layers as one single atomic transaction.
Atomic Transactions
There has already been a lot of thought put into making applications crash-resilient on state changes, for example when writing to a database. This is not a problem exclusive to peer-to-peer systems; all applications need to take care of it, the only question is whether developers actually do.
Databases like SQLite or PostgreSQL try to handle exactly these cases by fulfilling certain properties which can be summarised as ACID. Martin Kleppmann once again gives an excellent introduction into these properties (and how they can be misunderstood).
The “A” in ACID stands for Atomicity and is exactly the property we are interested in for our systems. By grouping state changes into one atomic transaction we can guarantee that either all get executed at once, or none of them. If anything fails, the transaction is aborted and all changes are rolled back, as if they never happened.
Even before our data hits the actual database (like SQLite) we already need to worry about properties like atomicity; this is why building a p2p protocol can sometimes feel like re-inventing databases.
Here is an example of how to express an atomic transaction in SQL using the sqlx Rust crate, following our initial apple-sending example with Parrot and Horse:
// Initiate a single, atomic transaction using sqlx.
let mut tx = connection.begin().await?;

// Remove one apple from Parrot.
sqlx::query("UPDATE apples SET amount = amount - 1 WHERE user = 'parrot'")
    .execute(&mut *tx)
    .await?;

// Give Horse one apple.
sqlx::query("UPDATE apples SET amount = amount + 1 WHERE user = 'horse'")
    .execute(&mut *tx)
    .await?;

// Finally "commit" all of these changes. They will only be
// persisted if nothing crashed up to this point.
tx.commit().await?;
In p2panda every incoming Operation begins a new atomic transaction which will be used for every state change or write to a database during processing. Finally we want to forward the same transaction object to the application layer so developers can continue writing to the database as part of the same transaction.

Atomic transactions in the processing pipeline.
This solves two important problems: First, we don’t want to end up with invalid state when a process crashes. Imagine that a chat message arrives: the system layer decrypts the data using the “Message Encryption” scheme of p2panda-encryption with double ratcheting, moves the ratchet forward and writes the new encryption state to the database; then, due to a bug in the application layer, everything crashes and we never save the plaintext. We potentially end up with a situation where the message can never be read. With this in mind, we don’t want to persist any changes to the database until the final application layer has “committed” the transaction, making sure that everything until then succeeded.
The second issue solved is validation: Applications should always have the last say as to whether or not they want to reject an operation based on their own application validation logic and social protocols. Even when nothing fails, an application can choose not to commit to the new state. This might occur when the application protocol has been violated due to invalid encoding or offensive content has been published. In such cases, users may not want to persist any operations in their database, nor to commit to any state changes triggered by them.
APIs
We don’t want to burden application developers too much; at the same time we can see that caring about atomic transactions is crucial for rolling out any robust p2p application.
As part of the p2panda-store crate we’re currently preparing traits (Rust interfaces) to implement state handling with atomic transactions against any possible database implementation. You can see the related PR here.
With this trait design we stay flexible with the final choice concerning what database your application would like to use (in-memory, SQLite, redb, etc.) while keeping the atomicity guarantees.
The API design clearly separates “read” queries from “writes”, as the latter is the only one actually changing state and thus needing to be placed inside an atomic transaction.
Inspired by sqlx we follow a very similar API approach to express writing state to a database inside of transactions and committing them:
// Initialise a concrete store implementation, for example for
// SQLite. It implements the WritableStore trait, providing
// its native transaction interface.
let mut store = SqliteStore::new();
// Establish state, do things with it.
//
// User and Event both implement the WriteToStore trait for the
// concrete store type SqliteStore.
let octopus = User::new("Octopus");
// To retrieve state from the database we can read from it via
// a trait interface which was implemented against SqliteStore.
//
// What this trait looks like exactly is defined by whoever
// defined what a "User" is.
let horse = store.find_user("Horse").await?;
let mut event = Event::new("Ants Research Meetup");
event.register_attendance(&octopus);
event.register_attendance(&horse);
// Persist new state in database as part of a transaction.
let mut tx = store.begin().await?;
octopus.write(&mut tx).await?;
horse.write(&mut tx).await?;
event.write(&mut tx).await?;
tx.commit().await?;
There are different possible approaches to design state handling around transactions and our traits. We’re exploring multiple options right now, for example pure functions.
Pure functions are functions which do not have any side-effects; they will never write to a database when being called and instead return a new state object. The combination of transactions and Rust’s strict borrow checker allows us to express state handling quite neatly (and we did it a lot inside our p2panda-auth and p2panda-encryption crates), for example:
// Retrieve current group state from database.
let state = store.group_by_id(&group_id).await?;
// Create a new group state in "pure function style".
let new_state = Group::add_member(state, &member_id).await?;
// Persist new group state as part of atomic transaction.
new_state.write(&mut tx).await?;
Replaying operations with a stream controller
It is important to note that after a process has crashed and restarted, we want to “re-play” any operation which never completed; otherwise our application will not have a chance to recover from the crash and the operation will be lost.
As part of p2panda-stream (the stackable middleware pipeline) we’re planning on integrating a stream controller which allows re-playing “unprocessed” operations by default and manually re-playing all or a range of operations from a certain point (defined by logical or wall-clock time) or “topic” (grouping operations defined by the application) when required.
The stream controller can be neatly combined with atomic transactions. Every operation needs to be “acknowledged” by the application layer at the end of every processing. This signals to the controller that the operation can now be marked as “processed”. Now we can finally commit the atomic transaction with all state changes caused by that operation and we don’t need to re-play it whenever the process starts again.
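A sketch of this ack-and-commit flow in Rust; the StreamController and Transaction types here are illustrative stand-ins, not the actual p2panda-stream API:

```rust
use std::collections::HashSet;

// Illustrative stand-in for a database transaction.
struct Transaction {
    committed: bool,
}

impl Transaction {
    fn commit(&mut self) {
        self.committed = true;
    }
}

// Illustrative stand-in for the stream controller described above.
struct StreamController {
    processed: HashSet<u64>,
}

impl StreamController {
    // Acknowledging an operation commits its transaction and marks it
    // as processed in one step: either both happen, or (after a crash)
    // the operation is re-played on the next start.
    fn ack(&mut self, op_id: u64, tx: &mut Transaction) {
        tx.commit();
        self.processed.insert(op_id);
    }

    fn needs_replay(&self, op_id: u64) -> bool {
        !self.processed.contains(&op_id)
    }
}
```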
Processors usually need to have idempotency guarantees; this can be difficult to reason about when the codebases and data types get complex. Processing an operation twice might lead to invalid state (for example, Horse ending up with two apples when only one was sent). By combining transactions with a stream controller we can guarantee that the state produced by processing an operation is only ever committed once.
A stream controller design is already implemented as part of the Toolkitty peer-to-peer application and we now want to move the ideas into our official libraries.

Stream controller with atomic transactions in the processing pipeline.
From our previous experience of releasing peer-to-peer applications like Meli we can also utilise a stream controller for rolling out app updates with breaking changes or emergency bug fixes. If a schema or data type has changed we might need to wipe some database state, re-play all available operations and re-instantiate the database with the breaking change or bug fix in place. This is a useful feature to have around in case an application ever needs it.
The p2panda node
p2panda is a project which didn’t start by specifying a “perfect system” before building it. We begin by exploring patterns and ideas while vertically integrating them into user applications through collaboration with other teams. At times this makes it challenging to explain what p2panda is and how to get started, as we’re deep in exchange with application developers and reasoning about the constantly changing APIs.
However, in all of this we can see useful patterns emerging, such as clear separation of system- and application concerns, access control and roles, group encryption, multi-device support, message ordering, transactions, stream processing with “stackable” middleware APIs and stream re-plays - and we want to move all of this into one unified p2panda-node with an RPC API and a robust p2p networking and sync layer underneath.
All of this takes place on higher “integration layers”, while still keeping all “low-level” code (for example, group encryption) clearly separated, so whoever wants to just use one particular feature built by p2panda will not need to follow the same design.
For everyone who wants to have a complete peer-to-peer backend with robustness and security guarantees, we’re gradually moving towards the release of p2panda-node. We will then be able to replace all system-level “custom” integration code with a more unified solution for some of our applications. Stay tuned!
Bootstrapping and Discovery
Network Events API
As a user of our p2panda-net crate, it’s understandable that you might wish to have some degree of introspection into the peer-related events which are occurring. In this release we’ve introduced a means of subscribing to a stream of network events by calling .events() on Network. Events currently include those related to gossip (aka. “live mode”), sync (aka. data replication) and discovery. In this way, it’s possible to learn when a sync session has started and finished, when a new peer has been discovered and when a connection has been established with a new neighbour. We intend to add additional data to these events in the future, such as the number of bytes synced and the duration of each session.
Bootstrap Mode
As our collaborators begin working more deeply with our modules, we’ve been discovering blindspots in our implementations and working on improvements. One such improvement is the introduction of a “bootstrap mode” for network peers. The bootstrap node is one which is started without knowledge of any peers; it serves as the entrypoint into the network for others. This ability is activated by calling .bootstrap() on the NetworkBuilder during network configuration. The chat example in our p2panda-net repository provides a configurable CLI tool to play with various scenarios.
Discovery Over the Internet
With the bootstrap mode in place, we now have discovery of peers and topics over the internet. Using the power of iroh under the hood, all that’s needed to join a network is the URL of a relay server (which must be the same amongst peers) and the public key of a bootstrap node or other online peer; the discovery process then occurs automatically.
Filesystem Persistence
SQLite Store
You might be surprised to learn that up until this point we have only had in-memory persistence for operations and logs, the core data types of p2panda. This reflects our style when it comes to adding new features; we aim to coordinate our efforts to align with the needs of the apps being built by us and our collaborators. The need for filesystem persistence has recently arisen and so this release introduces a SQLite store. It can be accessed through p2panda-store with the sqlite feature flag enabled.
Until Next Time
Next week we’re on our way to the Bidston Observatory Artistic Research Centre (BOARC) near Birkenhead, UK for a team working session! The core team (adz, sam and glyph) will join the Toolkitty team for a concentrated few days of app development. Toolkitty is an autonomous coordination toolkit for collectives, organisers and venues to share resources and organise events in a collaborative calendar. We’re getting closer to completing the prototype and can’t wait to share more in the months ahead!
We’re looking forward to hearing from you as you try out p2panda 0.3.0! Please consult the CHANGELOG for a full list of changes.
Remember to subscribe to our RSS feed for new upcoming posts on our website or follow us on the Fediverse via @p2panda@autonomous.zone to hear more about upcoming events or news!
p2panda-encryption towards Spring 2025!
This library will offer group encryption compatible with any data type, encoding format or transport, made for p2p applications which do not rely on constant internet connectivity. Similar to our other crates, we aim to make our implementation independent of the rest of p2panda while providing optional “glue code” to integrate it into the larger p2panda ecosystem. With this design we’re adding another building block for secure and private p2p applications to our p2panda collection.
p2panda-encryption manages group membership with a Conflict-Free Replicated Data-Type (CRDT) and two different group key-agreement and encryption schemes. The first scheme we simply call “Data Encryption”, allowing peers to encrypt any data with a secret, symmetric key for a group. This will be useful for building applications where users who enter a group late will still have access to previously created content, for example private knowledge or wiki applications or a booking tool for rehearsal rooms. A removed member will not learn about any data created after their removal, since the key gets rotated on member removal or on a manual key update. This should accommodate many use-cases in p2p applications which rely on basic group encryption with post-compromise security (PCS) and forward secrecy (FS) during key agreement.
The second scheme is “Message Encryption”, offering a forward-secure (FS) messaging ratchet, similar to Signal’s Double Ratchet algorithm. Since secret keys are always generated for each message, a user can not easily learn about previously created messages when getting hold of such a key. We believe that the latter scheme will be used in more specialised applications, for example p2p group chats, as strong forward-secrecy comes with its own UX requirements, but we are excited to offer a solution for both worlds, depending on the application’s needs.
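To illustrate the forward-secrecy property of a messaging ratchet, here is a toy hash-ratchet in Rust. It is emphatically not the Double Ratchet or the p2panda-encryption implementation (there is no Diffie-Hellman step and no cryptographic KDF); it only shows the core idea that advancing a chain key destroys the ability to re-derive earlier message keys:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash ratchet: each message key is derived from the current chain
// key, and advancing the chain irreversibly replaces it. An attacker
// who compromises the current chain key cannot recover earlier message
// keys (forward secrecy, in miniature).
struct Ratchet {
    chain_key: u64,
}

impl Ratchet {
    fn new(seed: u64) -> Self {
        Ratchet { chain_key: seed }
    }

    // Derive a value from the current chain key and a domain label.
    fn derive(&self, label: &str) -> u64 {
        let mut h = DefaultHasher::new();
        (self.chain_key, label).hash(&mut h);
        h.finish()
    }

    // Return the key for the next message and advance the chain;
    // the old chain key is overwritten and gone.
    fn next_message_key(&mut self) -> u64 {
        let message_key = self.derive("msg");
        self.chain_key = self.derive("chain");
        message_key
    }
}
```

Both peers start from the same shared seed and step their ratchets in lockstep, deriving the same per-message keys without ever sending them over the wire.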
Together with our work towards access control we’re at the end of a longer research and implementation phase and we’re excited to publish a solution to secure data encryption and messaging in p2panda soon.
This blog post is the first announcement of p2panda-encryption and we want to share our insights, learnings and design from this research and implementation phase.
Encryption in p2p applications
If we entrust our private data to specific peers we are always directly connected with, we can rely on transport encryption only. This works as soon as we can cryptographically verify that the other peer is really the authentic one we trust exchanging data with. We can, for example, keep a “friends” list of public keys around and only allow establishing a connection if their signatures match, or we rely on a symmetrical secret (Pre Shared Key or PSK) every peer needs to know to establish a connection. Self-signed TLS Certificates in QUIC or one of the many Noise Protocol patterns allow us to establish that trust towards each other on transport level. Peer-to-peer protocols like Cable elegantly specify such a system.
As soon as we don’t want our network to theoretically have access to every message we author (even if it is our friends), or if we want to relay our private messages through “untrusted” peers or server admins, we need some sort of data encryption scheme.
In p2panda we are exploring solutions where we can work with “shared” nodes in the future: These are “always-online” servers which can be shared among friends or used only personally for different devices. They sync up with “leaf” or “edge” nodes (which can be offline and change location all the time) and keep the data as fully encrypted blobs, ideally with almost no metadata attached.

Nodes can run on mobile devices, laptops and desktop computers. They sync data with other nodes whenever they are online. “Shared Nodes” are instead always online and provide data when all other nodes are unavailable. The crucial point here is that this system would even work when no “Shared Nodes” are available, data might just flow more slowly.
Shared nodes help us to sync up on the latest data, even when all nodes are currently offline. The data itself can be ephemeral and automatically wiped from the server’s database after a while. The important thing is: We want the network to continue functioning as normal even when no shared nodes are involved; they are an enhancement to the system, but not a requirement.
Delta Chat already successfully experiments with this concept with their so-called Chatmail servers, Next Graph provides a similar 2-Tier solution with Broker nodes and Ink & Switch is thinking along the lines of peers with pull permissions. These are servers which are allowed to “pull” or sync your data, but they will not be able to read its contents, because it’s encrypted!
If we can dream wildly we can even see a “Shared Nodes System Setting” one day, built straight into our operating system! We define which nodes we want to maintain a connection with and entrust our (encrypted) data to. They can be shared with friends, family, collectives, etc. - but that’s for the long run.
Another factor is that we don’t want anyone to learn about our actions or data, even if we consider them our “trusted” circle. Just because Billie, Claire and Davie are best friends, Davie doesn’t need to always be able to see what Billie and Claire talked about. There are many ways to model such a system, but data encryption can be one of them.
Shared Nodes, Delivery Services, Caching Layers, or however you want to call them, are already implemented and successfully used and deployed in production. Still, we are in need of a solution which works in a more decentralised setting, that is, even when no intermediate peers or “always-online nodes” are available, we want group encryption to always work, even when your node is running on a smartphone.
Exciting developments
There is exciting research and experimentation in this direction, most recently Beehive explores a TreeKEM-based data encryption scheme for local-first protocols. Organising the shared secrets in form of a tree provides similar efficiency to maintaining larger groups with Messaging Layer Security Protocol (MLS). Similar to our “Data Encryption” scheme, Beehive is forward-secure for the key-agreement part and not forward-secure for the final data encryption. It aims at an integration with Automerge and a new sync protocol for DAG-based data-types called Beelay.
Auth of Local First Web has an elegant solution to accommodate for both authenticated group membership management and group encryption by building a DAG with cryptographically secure pointers which solves a large part of maintaining potentially forked group membership states and giving a secure way to verify that a member has actually been allowed to perform a specific group operation, for example adding or removing a member. Check out their excellent blog post for an overview.
The p2panda team have looked deeply into MLS and the OpenMLS Rust implementation in the past. While different academic (for example CoCoA, DeCAF or FREEK) and practical attempts, like Matrix Protocol, are being made right now to make these algorithms work in a more p2p setting, it is still too far off to simply adopt this technology, even though we would prefer to rely on known IETF standards and widely-used implementations like OpenMLS.
That being said, forks are also a thing in centralised applications (caused by bugs, race conditions, connection errors etc.) and even protocols like MLS need to look into fork-resilience and concurrency issues at one point. On this front we are definitely excited to follow the developments, with research around the FREEK protocol being probably the most promising.
We have been particularly inspired by the Key Agreement for Decentralized Secure Group Messaging with Strong Security Guarantees (DCGKA) paper by Matthew Weidner, Martin Kleppmann, Daniel Hugenroth and Alastair R. Beresford (published in 2021) which is the first paper we are aware of which introduces a PCS and FS encryption scheme with a local-first mindset. On top there’s already an almost complete Java implementation of the paper, which helped with realising our Rust version.
The paper formed the initial starting point of our work. In particular, we followed the Double-Ratchet “Message Encryption” scheme with some improvements around managing group membership. We also carried over some of the ideas in the paper to accommodate for the simpler “Data Encryption” approach.
While not as performant as a TreeKEM solution for large groups (“thousands of members”) as in Beehive or MLS, we value the simplicity of the DCGKA protocol. The paper shows that DCGKA is performant for small- to mid-size groups of ca. 128 members, similar to Signal. We are also excited to share a strong forward-secure “Message Encryption” scheme in our crate, next to a minimal post-compromise secure “Data Encryption” variant with forward-secure key-agreement. On top we’re aiming at a solution which will be fully generic and useful for other projects without tying them too much to an existing framework, sync protocol, encoding or data type.
Peer-To-Peer challenges
When we can’t rely on any centralised point to deliver key material, ask an authority about group membership or a member’s permission, building a secure encryption scheme gets challenging. The main challenges we’ve identified are:
Ordering
Messages in peer-to-peer networks can arrive out-of-order or very late, especially when the network is highly fragmented and peers are offline for a long time. Extra care is required to make systems robust for these “pending” states where data is missing or incomplete.
Group membership
We always want to know from our perspective with whom we are interacting. Every peer needs to be able to verify if a member is inside a group or not and that they have permission to change the group’s state, for example if they sent us a message that they want to remove a member from the group.
This gets especially tricky if we can’t ask a “central authority” for the “right” group state or don’t have any explicit consensus algorithms in place to determine a common ground among distributed processes. In a p2p world like this, we need to be able to deal with and verify group state locally for each peer from their perspective and tolerate potentially forked groups.
This can happen when, for example, two members concurrently add another member to the group while they don’t know about each other’s operations. This leads to two different, valid versions of the group, even if it’s just temporary and they get merged eventually.
Metadata
Not all data is usually encrypted. When we talk about encryption, some information might still be in plaintext to allow efficient authentication, storage, routing or processing of the data. This is what we usually call “metadata”. The problem with this form of data is that, even if the actual application data is protected, the metadata can still be used to derive sensitive information about the sender, for example: “who is the receiver”, “when was it sent”, “how large is the file”, etc.
The main trade-off we can see is between an efficient syncing strategy and encrypting all metadata. As soon as we encrypt everything it will get harder and harder to effectively exchange data with another peer. On top we put a lot of trust in that peer, as they can also send us garbage: we happily receive a lot of encrypted data, only to find out after decrypting it that it contains invalid information.
Some protocols like Willow, for example, require a timestamp to be kept in plaintext in their metadata, while everything else is encrypted. This allows the set reconciliation sync strategy to still remain efficient as data items can be sorted by timestamp. On top it is possible to use “relative” timestamps (like 0, 1 and 2) not revealing too much about the actual, “absolute” time signature of these events.
For p2panda itself we are still undecided whether we will follow the Willow path (using set reconciliation for syncing over fully encrypted data) or go more towards the direction Beelay is taking with their Sedimentree sync strategy over DAGs. In any case, some information, either the timestamp or the hashed pointers of the graph, needs to remain in readable plaintext.
Like this it will be possible to protect all other information, including public keys, signatures etc. if necessary. This doesn’t come for free and puts more pressure on other parts of the system (validation, buffering, ordering, etc.).
Because of the generic nature of p2panda-encryption we will not dictate how metadata is treated in your data-type, as this is ultimately a routing, sync and application concern. However, we propose an off-the-shelf solution which finds a compromise when integrating with the other p2panda crates, header extensions and our multi-writer append-only log data types.
Post-compromise security (PCS)
When a member is removed from a group, it still takes a while until all members are aware of that removal. Not knowing about the removal, some members will continue to encrypt messages towards the removed member, as from their perspective they are still in the group.
This is also a problem in centralised group encryption schemes, but there we have a centralised server and an “always connected” setup where clients usually learn very quickly about a member’s removal.
In a decentralised setting this information spreads slower, thus leaving the group in a transitional “post-compromise” state for longer. Only once the last member has synced up and received the removal event can we consider the group to be fully “healed”.
Members who learned about the removal are already secure and can safely continue to communicate; there is no need for the whole group to completely heal before others can continue.
One solution is to have always-online nodes around which help spread this sort of information faster. Additionally we can make the key rotation more efficient, which means that the group itself needs fewer messaging round-trips before being healed. In p2panda-encryption we are using the 2SM Key-Agreement Protocol, as proposed in the DCGKA paper, which optimises the group healing process to O(n) steps instead of O(n^2). In TreeKEM-based systems we can rotate the keys in O(log(n)) steps so the group can heal even faster.
Forward secrecy (FS)
Allowing peers to use a key only once to encrypt data, and dropping it instantly after a message has been received and decrypted successfully, protects them against attacks in which a malicious actor who gets hold of such a key could learn about previous messages. This is called forward secrecy (FS) and is implemented in its strongest form in protocols such as Messaging Layer Security (MLS) or Signal.
Forward secrecy is a notoriously hard to grasp concept. It gives us additional security guarantees but also increases the complexity of both the protocol and implementation. It requires extra care to accommodate for such a system when integrating it into an application.
The largest challenge we see is the reliance on a Public Key Infrastructure (PKI) when deploying FS encryption in decentralised systems as peers rely on “pre-keys” to establish the initial, commonly shared secrets when beginning to message each other (key agreement handshake). These pre-keys need to be known by each peer beforehand.
Some applications might have very strong forward secrecy requirements and only allow “one-time” pre-keys per group during key agreement handshake. This means that we can only establish a FS communication channel with a peer if we reliably made sure to only use the pre-key exactly once, which is hard to guarantee in a decentralised setting. If we don’t care about very strong FS we can ease up on that requirement a little bit and tolerate re-use with longer-living pre-keys which get rotated frequently (every week for example).
A solution for very strong forward secrecy, where we can make sure the pre-key is only used once, is a “bilateral session state establishment” process where peers can only establish a group chat with each other after both parties have been online. They don’t need to be online at the same time, just to be online at least once and receive the messages of the other party. This puts a slight restriction on the “offline-first” nature for peer-to-peer applications.
Another solution is to rely on always-online and trusted key servers which maintain the pre-keys for the network, but this puts an unnecessary centralisation point into the system and seems even worse. Publishing pre-keys via DNS might be an interesting solution to look into.
Puncturable Pseudo-Random Functions (PPRF) can help with preventing replay attacks where the security of a user gets weakened by making them “unknowingly” re-use the same one-time pre-key. This secures the owner of the pre-keys, but doesn’t solve eventual consistency of the senders, as they will not learn which message the receiver accepted first and which messages they rejected.
In p2panda-encryption we provide two different encryption schemes with variable strength of forward secrecy for different scenarios: “Message Encryption” gives strong forward secrecy like the Signal protocol and “Data Encryption” gives configurable forward secrecy during key agreement but no forward secrecy for application data itself - as members who join a group late are allowed to decrypt previously created data.
We believe there’s a place in p2p for applications with strong forward secrecy requirements and well-done UX, for example local-first messengers or applications using state-based CRDTs where no message history is required. We can also imagine applications running both encryption schemes at the same time! Imagine a tool to organise your collective’s shared items, where every member needs access to past records (thus encrypted with “Data Encryption”), while private messages between members are encrypted with the stronger “Message Encryption”.
Lastly there should always be the option for an application to decide manually whenever it’s time to remove a decryption secret from the history, for example when data is simply not useful anymore. We keep this door open, even in the “Data Encryption” scheme.
p2panda Group Encryption
For a decentralised group encryption scheme we need the following parts:
- Causal Ordering of group operations (adding, removing, promoting, demoting members etc.).
- Group Management to reason about who the members of the group are from our perspective.
- Key Agreement to securely deliver secret keys to each member of the group.
- Encryption to finally secure application data with the group’s secret key material.
These parts are required in both p2panda’s “Data Encryption” and “Message Encryption” schemes; for most of them we can even re-use the same underlying mechanisms, cryptographic algorithms and data types.
Causal ordering
In p2panda we rely on our data type, the multi-writer directed acyclic graph (DAG) or “hash graph”, to establish a causal order of incoming operations, similar to a vector clock. Every operation points at the previous operation(s) it knows about by mentioning their cryptographic hash, which forms the graph. Now we can reason about whether an operation was sent before, after or at the same time as any other operation in the group.

We’re reading this graph from the bottom, starting at 1. Each circle represents an “operation” and the arrows point at the operation which was observed “before” it. This can be used to express causal orderings: we can see if something happened before, after or at the same time. The operations 2 and 3, for example, occurred at the same “logical” time.
The cool thing with this approach is that we:
- Gain a reliable way to reason about the causal ordering of events (partial ordering), even when they arrive at our node at random times.
- Already have this data type so we can simply re-use it for group membership management.
- Have a cryptographically secure way (through the unique hash function) to verify that we really observed one operation and that the authenticity of each operation is given, as they are signed by the author.
- Bonus point: Gain almost all the concurrency guarantees we need from this data type because a DAG can be understood as a CRDT in itself!
In our concrete p2panda-encryption Rust implementation we will not prescribe a specific ordering technique, so as to not tie developers too closely to our data types. Many p2p applications already have their own solutions for handling concurrency and, if not, we have these developers covered with our p2panda-core crate, providing the DAG data type, and causally ordered, “dependency checked” and buffered operation streams in p2panda-stream.
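To make the idea concrete, here is a minimal, self-contained sketch of causal ordering over such a hash graph. It is not the p2panda-core API: operation ids are plain strings standing in for cryptographic hashes, and the `Operation` type is hypothetical. The point is only to show how “happened before” and “concurrent” fall out of the `previous` pointers:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical sketch: operations identified by a hash-like id,
// each pointing at the ids of previously observed operations.
#[derive(Clone)]
struct Operation {
    id: &'static str,
    previous: Vec<&'static str>,
}

// Returns true if `a` happened causally before `b`, i.e. `b`
// (transitively) points back at `a` through its `previous` links.
fn happened_before(graph: &HashMap<&str, Operation>, a: &str, b: &str) -> bool {
    let mut stack = vec![b];
    let mut seen = HashSet::new();
    while let Some(id) = stack.pop() {
        if !seen.insert(id) {
            continue;
        }
        if let Some(op) = graph.get(id) {
            for prev in &op.previous {
                if *prev == a {
                    return true;
                }
                stack.push(*prev);
            }
        }
    }
    false
}

// Two operations are concurrent when neither happened before the other.
fn concurrent(graph: &HashMap<&str, Operation>, a: &str, b: &str) -> bool {
    !happened_before(graph, a, b) && !happened_before(graph, b, a)
}

fn main() {
    // Same shape as the diagram: 2 and 3 both point at 1, 4 merges them.
    let ops = [
        Operation { id: "1", previous: vec![] },
        Operation { id: "2", previous: vec!["1"] },
        Operation { id: "3", previous: vec!["1"] },
        Operation { id: "4", previous: vec!["2", "3"] },
    ];
    let graph: HashMap<_, _> = ops.iter().map(|op| (op.id, op.clone())).collect();
    assert!(happened_before(&graph, "1", "4"));
    assert!(concurrent(&graph, "2", "3"));
    println!("causal order checks passed");
}
```

In the real system each id is the operation’s cryptographic hash and each operation carries a signature, which is what makes these pointers tamper-evident.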
Group Management CRDT
The hash graph, mentioned above, can be used to verify if someone was allowed to perform a certain operation: we can use the cryptographic hashes and signatures of each signed operation to securely verify if a group operation was valid, by tracing back the “chain path” to the root of the graph and validating the operation at each step. “Was this member really assigned an admin role before and are they allowed to remove this other member now?”.

Here we flip the DAG around and start from the top! During each “operation” somebody changes the state of the group by adding or removing a member. With the causal history of our graph we can see how we can easily verify if a member was allowed to perform a certain operation. When two operations happened at the same logical time (Panda removes Pig and Sloth adds Cow), we need a method which can “merge” both “branches” and represent a final state which will be reached eventually by all peers as soon as they received both diverging operations.
This approach solves almost all questions around managing group membership and checking permissions, except that we need to be aware of some “special” concurrent cases. For example, what happens if two members remove each other at the same time, or if a member was added by someone who was concurrently removed?
In p2panda-encryption we provide a default CRDT implementation which takes care of all of these situations by following a “strong removal” approach. This means that anyone who is removed can’t add any other member, even when they were removed at the same logical time as when they added someone. If two members remove each other at the same time, they will both leave the group. This approach accounts for situations where it is more secure to remove everyone (for example, an attacker removing the admin while the admin tries to remove the attacker) and relies on social processes to re-establish a trusted group setting, by re-adding members or starting a new group in the worst case.

The last two scenarios show the “Strong Removal” CRDT logic. It might not always be desirable to remove all admins from a group if they try to remove each other. Different conflict resolution strategies can be chosen depending on the use-cases.
Since in Rust “everything is a trait” it is also possible to implement a different logic and apply other rules for different groups in these “conflicting” scenarios.
It is important to mention that we don’t provide a baked-in concept of “admins” or “moderators” as such but merely tools to express these relationships, so they can be designed by application developers based on their needs.
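As a rough illustration of what “everything is a trait” could look like here, this sketch makes the conflict rule for mutual concurrent removals pluggable. All names (`ConcurrentRemoveRule`, `StrongRemoval`, `apply_mutual_removal`) are hypothetical and not the actual p2panda-encryption traits:

```rust
use std::collections::HashSet;

// Hypothetical sketch of a pluggable conflict-resolution rule for
// concurrent removals; the real p2panda-encryption traits differ.
trait ConcurrentRemoveRule {
    /// Given two members who removed each other at the same logical
    /// time, return the members who should end up removed.
    fn resolve(&self, a: &str, b: &str) -> Vec<String>;
}

// "Strong removal": if two members remove each other concurrently,
// both leave the group.
struct StrongRemoval;

impl ConcurrentRemoveRule for StrongRemoval {
    fn resolve(&self, a: &str, b: &str) -> Vec<String> {
        vec![a.to_string(), b.to_string()]
    }
}

// Apply whichever rule a group was configured with.
fn apply_mutual_removal(
    members: &mut HashSet<String>,
    rule: &dyn ConcurrentRemoveRule,
    a: &str,
    b: &str,
) {
    for removed in rule.resolve(a, b) {
        members.remove(&removed);
    }
}

fn main() {
    let mut members: HashSet<String> =
        ["panda", "sloth", "pig"].iter().map(|s| s.to_string()).collect();
    // Panda and Pig remove each other at the same logical time:
    // under strong removal, both end up outside the group.
    apply_mutual_removal(&mut members, &StrongRemoval, "panda", "pig");
    assert!(!members.contains("panda"));
    assert!(!members.contains("pig"));
    assert!(members.contains("sloth"));
    println!("remaining members: {:?}", members);
}
```

A group with different requirements could implement the same trait with, say, a “keep the admin” rule instead, without touching the rest of the CRDT.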
Key Agreement: Two-Party Secure Messaging (2SM)
Encrypting data for a whole group requires coordination, especially when it comes to sharing the secrets required to encrypt and decrypt the data. To make a group or another member aware of these secrets we need a “key agreement” protocol.
Key agreement in p2panda can be described in two phases, a first “key handshake” phase, which only occurs once, where we establish the shared secret for the first time. This happens when we are inviting a new member to a group and want them to learn about the secret key material. The second phase consists of subsequent “key rotations”. This can happen, for example, after we’ve removed a member from the group and we need to initiate a new key agreement to make sure the members who are still around learn about the new state.

The 2SM key agreement protocol used by p2panda combines an X3DH key handshake with subsequent key rotations realised with a simpler public-key encryption scheme (HPKE).
In p2panda-encryption we randomly generate a secret key which we can later use as the symmetrical secret to encrypt data for all members of the group, or to derive the message ratchet “root chain key” for one member.
To make a group aware of a new secret key we could encrypt the secret pairwise with public-key encryption (PKE) towards each member of the group, resulting in an O(n^2) overhead as every member needs to share their secret with every other. The DCGKA paper proposes an alternative approach which they call “Two-Party Secure Messaging Protocol” (2SM) where a member prepares the next encryption keys not only for themselves but also for the other party, resulting in a more optimal O(n) cost when rotating keys. This allows us to “heal” the group in fewer steps after a member is removed.
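A quick back-of-the-envelope comparison makes the difference tangible. This is a deliberately simplified counting model (ignoring handshakes and protocol overhead), not a claim about exact message counts in either protocol:

```rust
// Back-of-the-envelope message counts for rotating a group secret
// after a removal (simplified model, protocol overhead ignored).

// Naive pairwise PKE: every remaining member shares a fresh secret
// with every other member, roughly n * (n - 1) messages in total.
fn pairwise_pke_messages(n: u64) -> u64 {
    n * (n - 1)
}

// 2SM-style update: one member rotates the secret and delivers it
// over its two-party channels to the other n - 1 members.
fn two_sm_messages(n: u64) -> u64 {
    n - 1
}

fn main() {
    for n in [2u64, 16, 128] {
        println!(
            "group of {}: pairwise = {}, 2SM = {}",
            n,
            pairwise_pke_messages(n),
            two_sm_messages(n)
        );
    }
    // At the paper's "small- to mid-size" scale of 128 members the
    // gap is already two orders of magnitude.
    assert_eq!(pairwise_pke_messages(128), 16_256);
    assert_eq!(two_sm_messages(128), 127);
}
```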
The 2SM protocol is well described in the DCGKA paper (Appendix D) and we’ve made a little diagram to show you the process:

Panda uses 2SM to securely deliver an “Update Key” to another member, the Jellyfish. Panda uses the Public Key of Jellyfish to encrypt this information. They learned about that Public Key from previous 2SM iterations. Next to the “Update Key”, Panda will also send them new key material for the next, future 2SM iteration. This is the special trick of this protocol!
For encrypting the key towards each member a Hybrid Public-Key Encryption Scheme (HPKE) is used with DHKEM_X25519, HKDF_SHA256 and AEAD_AES_256_GCM as parameters. Since we only use the secrets for key agreement once per iteration we can make 2SM forward secure.
In case we’re trying to establish a secret key for the first time though, we need to look closer into the “Key Handshake” Phase. This is where the X3DH protocol of Signal comes into play, which was designed to establish a secure future communication channel between two parties, even when one of them is offline.

The 2SM protocol works the same during the handshake phase, the only thing which is different is how we encrypt the secret towards the other member, by using X3DH instead of HPKE.
For our “Message Encryption” scheme we make one-time pre-keys mandatory during X3DH to ensure strong forward secrecy while in our “Data Encryption” scheme we remove that requirement and only use pre-keys which can be rotated after a while and published based on the application’s security model.
p2panda Data Encryption
In p2panda-encryption we implement a simple and secure symmetric-key encryption scheme for application data, with the forward-secure 2SM key-agreement protocol and post-compromise security on manual key rotation or member removal.
Every member in the group uses the same secrets to encrypt the data. During key agreement we include a list of all previously used secrets and send it to the new members so they will be able to decrypt all previously created data even when they entered the group later. It would be possible to use a more efficient way of storing previously used keys in the future, for example in form of a Cryptree.
For encryption we use the latest key and for decryption we keep a list of all currently known (current and past) keys. Keys are identified with an “epoch” hash. We’re using a unique hash to identify the key, derived with a Sha256 Hashed Message Authentication Code (HMAC)-based key derivation function (HKDF) from the secret, as there might be cases where group members concurrently generate new keys and a simple number counter would not be sufficient to distinctively identify them.

As long as the secret symmetric key hasn’t been rotated, we continue re-using it for each data encryption (each time with a different nonce).
For symmetrical encryption of the application data we use XChaCha20-Poly1305, an authenticated encryption with associated data (AEAD) symmetric-key encryption scheme with randomly generated 24 byte nonces. Nonce-reuse is catastrophic in AEAD but preventing it is hard in a decentralised setting, so it is usually recommended to keep state around to understand which nonce to use next. By using XChaCha20-Poly1305 we can drop the requirement to keep that state around, since the nonces are large enough to be generated randomly and securely.
Every encrypted data item includes the ciphertext and additional metadata, among it the key hash (epoch) and the nonce that were used to encrypt the data.
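The key-management side of this can be sketched compactly: keep the latest key for encryption and all known keys for decryption, indexed by their epoch id. In this toy version the epoch is derived with std’s `DefaultHasher` purely as a stand-in for the real HKDF-SHA256 derivation, and the `KeyStore` type is hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Toy stand-in for the HKDF-SHA256 derivation of the "epoch" id;
// the real implementation derives it from the secret itself.
fn epoch_id(secret: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    secret.hash(&mut hasher);
    hasher.finish()
}

// Keeps the latest key for encryption and all known keys (current
// and past) for decryption, indexed by their epoch id.
struct KeyStore {
    latest: Option<u64>,
    keys: HashMap<u64, Vec<u8>>,
}

impl KeyStore {
    fn new() -> Self {
        KeyStore { latest: None, keys: HashMap::new() }
    }

    // Rotating in a new secret makes it the encryption key, while
    // older secrets remain available to decrypt past data.
    fn rotate(&mut self, secret: Vec<u8>) -> u64 {
        let id = epoch_id(&secret);
        self.keys.insert(id, secret);
        self.latest = Some(id);
        id
    }

    fn encryption_key(&self) -> Option<&Vec<u8>> {
        self.latest.and_then(|id| self.keys.get(&id))
    }

    fn decryption_key(&self, epoch: u64) -> Option<&Vec<u8>> {
        self.keys.get(&epoch)
    }
}

fn main() {
    let mut store = KeyStore::new();
    let old_epoch = store.rotate(vec![1, 2, 3]);
    let new_epoch = store.rotate(vec![4, 5, 6]);
    // New data is encrypted with the latest key …
    assert_eq!(store.encryption_key(), Some(&vec![4, 5, 6]));
    // … while data tagged with an older epoch stays decryptable.
    assert!(store.decryption_key(old_epoch).is_some());
    assert_ne!(old_epoch, new_epoch);
}
```

Identifying keys by a hash of the secret (rather than a counter) is what makes concurrently generated keys distinguishable, as described above.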
During the initial key-agreement handshake via X3DH we don’t use one-time pre-keys and only rely on signed pre-keys. The forward secrecy is defined by the lifetime of the pre-keys which again is set by the application developers. This is only the case for the handshake, afterwards we’re back at strong forward secrecy provided by the 2SM protocol.
p2panda Message Encryption (with Double Ratchet)
For encryption with strong forward secrecy we’re implementing Signal’s Double Ratchet algorithm as specified by the DCGKA paper.
After the randomly generated 32-byte “update key” has been delivered pair-wise via 2SM to each member of the group, we can establish the respective message ratchets towards each member by deriving the initial “root chain key” via an HKDF with the member’s public identity key.

The sender keeps an “outgoing” message ratchet to each member. The receiver will keep an “incoming” message ratchet with the same state. Like this they will be able to derive the same chain key for that message to finally decrypt it. This means that each member in the group needs to keep an incoming and outgoing message ratchet for each other member in the group!
For the message ratchet we use a pseudorandom function (PRF) with a Sha256 HKDF scheme. The encryption of message data itself is done with AES-256-GCM AEAD, with the chain key and a nonce derived from the used chain key itself (using the same derivation function).
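The core of the symmetric ratchet is small enough to sketch: each step derives a one-time message key from the current chain key, then advances the chain key and forgets the old one. Here std’s `DefaultHasher` is a toy stand-in for the SHA-256 HKDF PRF, and the `Ratchet` type is hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for the HKDF-based PRF; the real scheme uses a
// SHA-256 HKDF and 32-byte keys, not a 64-bit hash.
fn prf(key: u64, label: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    label.hash(&mut hasher);
    hasher.finish()
}

// One ratchet step: derive a one-time message key from the current
// chain key, then advance the chain key and drop the old one.
struct Ratchet {
    chain_key: u64,
}

impl Ratchet {
    fn next_message_key(&mut self) -> u64 {
        let message_key = prf(self.chain_key, "message");
        self.chain_key = prf(self.chain_key, "chain");
        message_key
    }
}

fn main() {
    // Sender and receiver start from the same root chain key …
    let mut sender = Ratchet { chain_key: 42 };
    let mut receiver = Ratchet { chain_key: 42 };
    // … so stepping both ratchets yields the same message keys.
    let k1 = sender.next_message_key();
    let k2 = sender.next_message_key();
    assert_eq!(k1, receiver.next_message_key());
    assert_eq!(k2, receiver.next_message_key());
    // Each message key differs from the last; forward secrecy comes
    // from deleting keys once a message has been decrypted.
    assert_ne!(k1, k2);
}
```

Because the PRF is one-way, compromising the current chain key reveals nothing about earlier message keys, which is exactly the forward-secrecy property described above.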
Additional care is required to make sure that all peers update their ratchets in the same order. The important mechanism here is an explicit “acknowledgement” for each rotated update key. Acknowledgements are also “group operations”, similar to member additions or removals, etc., where the sender confirms that they have observed and applied a key update. By learning that another peer has confirmed the newly rotated secret we can switch our message ratchet towards them with that new key. Another great “side effect” of the acknowledgements is that we can learn how far the group has progressed in its post-compromise security.
Another important point to mention are the concurrent cases where a member might learn too late that they have been added to a group, even though some messages have already been sent to them. These senders need to actively send them the missing ratchet update keys, a process called “forwarding” in DCGKA. Another case is when two members each independently add another member: Alfie adds Charlie at the same logical time as Betty adds Davie. In this situation Charlie will end up not knowing about Betty’s new ratchet state; as soon as Betty realises this, they will have to send the update again to Charlie.
These cases are handled as part of the DCGKA protocol specification from the paper (see section “6.2.5 Handling Concurrency”).
Integration with p2panda stack
Since our implementation of p2panda-encryption is in Rust, we express many parts of the group encryption in the form of “traits” or generic interfaces, allowing developers to adjust certain parts. For example, customising the group management CRDT for their own needs or using a different key agreement protocol while keeping the same encryption algorithm. We will also offer a way for developers to deliver their own storage solutions for secret key material and group state.
p2panda-encryption is agnostic to p2panda itself: no concrete data types, sync or ordering strategy, or transport is assumed by the implementation.
For use with the rest of p2panda, though, we will provide Rust Stream implementations in p2panda-stream which handle both the required causal ordering of group operations and the encryption and decryption of application data or messages.
Like this, developers can easily “stack” different stream iterators on top of each other and no longer need to worry about encryption from that point on, as the data they receive from these iterators will be ordered, buffered, dependency-checked and decrypted automatically.
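To give a feel for this “stacking” pattern, here is a hypothetical sketch using plain `Iterator` adapters instead of the real p2panda-stream types, with XOR standing in for XChaCha20-Poly1305 decryption. None of these names exist in the actual crates:

```rust
// Hypothetical sketch of "stacking" stream adapters: a decryption
// layer placed on top of an (already ordered) stream of operations,
// modelled with plain iterators instead of real p2panda-stream types.
struct Decrypt<I> {
    inner: I,
    // Toy key; real code looks up the group secret per operation.
    key: u8,
}

impl<I: Iterator<Item = Vec<u8>>> Iterator for Decrypt<I> {
    type Item = Vec<u8>;

    fn next(&mut self) -> Option<Vec<u8>> {
        // XOR as a stand-in for XChaCha20-Poly1305 decryption.
        self.inner
            .next()
            .map(|bytes| bytes.iter().map(|b| b ^ self.key).collect())
    }
}

fn main() {
    // "ok" XOR-ed with the toy key 7 plays the role of a ciphertext.
    let ciphertexts = vec![vec![0x6f ^ 7, 0x6b ^ 7]];
    // Stack the decryption adapter on top of the ordered stream; the
    // consumer only ever sees plaintext items.
    let mut decrypted = Decrypt { inner: ciphertexts.into_iter(), key: 7 };
    assert_eq!(decrypted.next(), Some(b"ok".to_vec()));
    assert_eq!(decrypted.next(), None);
}
```

In the real stack the layers below this one would additionally buffer and dependency-check operations before they ever reach the decryption adapter.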
We will also provide a p2panda message “header extension” and sync protocol solution where almost no metadata needs to be revealed. This will be developed this Summer as we’re about to look into efficient DAG- and set-reconciliation sync strategies.
The group membership CRDTs are also very useful outside of the encryption system. We can use membership as a way to represent roles. For example, we can now ask: “Is Billie part of the moderator group?”. Being part of an encryption group can also mean having “read” permission. Nodes will not sync data with another peer if they don’t have this permission - and even if they did receive the data, they would not be able to decrypt it.
See you soon and thank you!
As already mentioned, an implementation is currently underway and to-be-released in Spring 2025! You can subscribe to our RSS feed for new upcoming posts on our website or follow us on the Fediverse to get informed when p2panda-encryption is ready for use.
We want to especially thank the Cryptographic Engineer and OpenMLS maintainer Jan Winkelmann from Cryspen for answering our questions around the cryptographic parts of our implementation and the Researcher Erick Lavoie for giving advice and ideas for efficient group membership CRDT designs. We took inspiration from work done by Abhilash Mendhe during his Master studies at University of Basel, under the guidance of Erick. We also want to thank the authors of the “Key Agreement for Decentralized Secure Group Messaging with Strong Security Guarantees” paper for giving a strong foundation for future p2p group encryption solutions and lastly NLNet for supporting this adventure and our security audit by Radically Open Security.
More “offline-first”, please!
A major design requirement of p2panda is the ability to operate in an “offline-first” manner. The services we offer should be tolerant of volatile network environments, with the capacity to self-heal when interfaces and connections go down or become available. This stands in contrast to many contemporary networked applications and services which both expect and demand continuous connectivity.
Our collaboration with several GNOME developers on a local-first GTK text editor (working title: Aardvark) quickly brought to light shortcomings in our “offline-first” resilience; data sync and live-mode would fail to recover after more than 30 seconds of lost connectivity and our mDNS discovery service would throw an error if no interface was available on startup. We’ve made it a priority to rectify these issues and believe that our 0.2.0 release makes significant improvements.
Bidirectional Sync
Firstly, we’ve refactored our log sync protocol to be bidirectional, meaning that both peers exchange their log heights and data in a single session. Our previous implementation required each peer to initiate a separate session for complete synchronisation. The change from a pull-based to push-based approach means that one peer can reset their sync state (e.g. after a network disconnection and reconnection) and initiate a session, resulting in both peers “catching up” on past data.
Live-mode State Reset
p2panda currently relies on gossip overlays for network-wide topic discovery and “live mode” (i.e. broadcast of published data to all peers interested in a given topic). We discovered that these overlays break down after extended loss of connectivity. Our new release rectifies this issue by resetting live-mode state and re-entering gossip overlays when connectivity is regained.
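The recovery logic amounts to a small state machine. The sketch below is a hypothetical simplification (the type and method names are illustrative, not p2panda's): stale overlay membership is dropped when connectivity is lost, and the overlay is re-joined with fresh state once the network returns, rather than assuming the old gossip state heals itself.

```rust
// Illustrative state machine for live-mode recovery.
#[derive(Debug, PartialEq)]
enum LiveMode {
    Joined,  // actively participating in the gossip overlay
    Pending, // connectivity lost; waiting to re-enter the overlay
}

struct GossipState {
    mode: LiveMode,
}

impl GossipState {
    fn on_connectivity_lost(&mut self) {
        // Reset live-mode state instead of keeping stale overlay membership.
        self.mode = LiveMode::Pending;
    }

    fn on_connectivity_regained(&mut self) {
        if self.mode == LiveMode::Pending {
            // Re-enter the gossip overlay from scratch.
            self.mode = LiveMode::Joined;
        }
    }
}

fn main() {
    let mut state = GossipState { mode: LiveMode::Joined };
    state.on_connectivity_lost();
    assert_eq!(state.mode, LiveMode::Pending);
    state.on_connectivity_regained();
    assert_eq!(state.mode, LiveMode::Joined);
    println!("ok");
}
```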
mDNS Retries
What happens if you register an mDNS discovery service and then start your p2panda-powered network node when your device networking is disabled? In p2panda 0.1.0 this would result in a panic, as no socket was available to be fully configured and bound. Not good! We’ve refactored the service to (re)bind the socket as needed, ensuring that mDNS discovery can (re)start when interface changes are detected.
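The fix boils down to retrying the socket bind instead of treating a failed bind as fatal. Here is a minimal sketch of that pattern, assuming a simple bounded retry with exponential backoff; the function name and parameters are our own for illustration:

```rust
use std::net::UdpSocket;
use std::thread::sleep;
use std::time::Duration;

// Hypothetical sketch: instead of panicking when no interface is available,
// keep retrying the socket bind with a capped exponential backoff.
fn bind_with_retry(addr: &str, max_attempts: u32) -> Option<UdpSocket> {
    let mut delay = Duration::from_millis(250);
    for _ in 0..max_attempts {
        match UdpSocket::bind(addr) {
            Ok(socket) => return Some(socket),
            Err(_) => {
                // Bind failed (e.g. networking disabled): wait and retry.
                sleep(delay);
                delay = (delay * 2).min(Duration::from_secs(5));
            }
        }
    }
    None
}

fn main() {
    // Binding the unspecified address on an ephemeral port succeeds even
    // without external interfaces, so this demo returns a socket.
    let socket = bind_with_retry("0.0.0.0:0", 3);
    assert!(socket.is_some());
    println!("ok");
}
```

In practice the rebinding is also triggered when interface changes are detected, so the service can come back without restarting the node.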
Until Next Time
At the end of the month we’ll be heading to Switzerland for P2P Basel, a weekend workshop which “bring[s] together researchers and software builders to share insights and collaborate towards the sound and sustainable development of efficient eventually-consistent (offline-first) peer-to-peer systems”. We can’t wait to spend time with new and old friends alike!
Other than that, we’re working hard on the autonomous coordination toolkit and designing our group encryption and access control systems.
We’re looking forward to hearing from you as you try out p2panda 0.2.0! Please consult the CHANGELOG for a full list of changes.
Remember to subscribe to our RSS feed for new upcoming posts on our website or follow us on the Fediverse via @p2panda@autonomous.zone to hear more about upcoming events or news!
p2panda has been undergoing some changes; it’s lighter, more hackable and much more modular. While this rewrite felt scary at times, we are very proud and excited about the outcome we’re releasing today. This post is intended to shed light on our decision and convey why we believe the new p2panda will be useful for the wider “local-first” community!
The “new” p2panda
We’ve released our new crates today and with this we’re taking a new approach with p2panda:
Modular and interoperable
p2panda wants to lower the barrier for developers to build modern, privacy-respecting and secure local-first applications for mobile, desktop or web. We’ve learned that such a toolkit shouldn’t come at the cost of a highly abstract “monolithic” API or framework. With that in mind, our new version aims to be as modular as possible—allowing projects the freedom to pick what they need and integrate it with minimal friction. Our networking layer offers a great example of a module which should be useful for many different projects. We believe this approach contributes the most to a wider, interoperable p2p ecosystem which outlives “framework lock-in”.
For projects seeking a more fully-integrated system, it’s possible to stack our modules into one efficient, stream-based pipeline providing p2p networking, sync, discovery, gossip, blobs, authentication, ordering, deletion, multi-writer, access control, encryption and so on.
Data-type agnostic
Some of our new building blocks, such as the sync, discovery and networking layers, do not require any custom p2panda data types. This makes it possible to bring your own data types and develop your protocol on top. We’re planning the same for our group encryption implementation which is set to be released next year.
Support any CRDT or application data
Previous versions of p2panda came with their own approaches to CRDTs and schema validation. While we still believe this is great for future high-level modules, we wanted to offer you the option of combining all p2panda modules with Automerge, Yjs or any other CRDT of your choice. Of course not every application needs a CRDT and at the end of the day the payload is just “raw bytes”.
Re-use as much existing technology and well-established standards as possible
We’re using existing libraries like iroh and well-established standards such as BLAKE3, Ed25519, STUN, ICE, CBOR, TLS, QUIC, UCAN, PlumTree, HyParView, Double Ratchet and more - as long as they give us the radical offline-first guarantee we need.
Radically distributed and compatible with any post-internet communication infrastructure
We want collaboration, encryption and access-control to work even when we can’t assume any sort of stable connectivity over an extended period. p2panda is “broadcast-only” at its heart, making any data not only offline-first but also compatible with post-internet communication infrastructure, such as shortwave, packet radio, Bluetooth Low Energy, LoRa or simply a USB stick.
Append-only logs are great!
We’ve chosen to remove Bamboo as a core data type from the p2panda protocol; we realised that we weren’t making full use of the features it provides. Still, the new p2panda data type is again an append-only log, just much lighter and more flexible.
Logs are very efficient data types for exchanging data over challenged communication infrastructure and give developers something they will always need when building a distributed application: ordering (in other words, a knowledge of “what happened before / after or at the same time as x”). Our new data type also includes exciting features and optional extensions, such as writing to multiple logs at the same time, pruning (automatic removal of unused data), prefix-based deletion (delete multiple logs at the same time with a single tombstone) and fork-tolerance. Some of these features are rooted in exciting new research and we’re happy to be sharing more about them in a future blog post.
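To make the ordering property concrete, here is a toy append-only log. This is an illustration of the general idea, not p2panda's data type: each entry carries a sequence number and a backlink (the hash of the previous entry), so any replica can verify and totally order one author's entries. We use Rust's built-in hasher purely for brevity; a real log would use a cryptographic hash like BLAKE3.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative entry: seq_num gives position, backlink chains entries
// together so the order is tamper-evident.
#[derive(Debug, Clone, Hash)]
struct Entry {
    seq_num: u64,
    backlink: Option<u64>, // hash of the previous entry, None for the first
    payload: Vec<u8>,
}

fn hash_entry(entry: &Entry) -> u64 {
    // Stand-in for a cryptographic hash.
    let mut hasher = DefaultHasher::new();
    entry.hash(&mut hasher);
    hasher.finish()
}

fn append(log: &mut Vec<Entry>, payload: Vec<u8>) {
    let backlink = log.last().map(hash_entry);
    let seq_num = log.len() as u64 + 1;
    log.push(Entry { seq_num, backlink, payload });
}

fn main() {
    let mut log = Vec::new();
    append(&mut log, b"hello".to_vec());
    append(&mut log, b"world".to_vec());

    // The second entry provably comes after the first.
    assert_eq!(log[0].backlink, None);
    assert_eq!(log[1].seq_num, 2);
    assert_eq!(log[1].backlink, Some(hash_entry(&log[0])));
    println!("ok");
}
```

Extensions like multi-log writing, pruning and fork-tolerance build on top of this basic chained structure.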
As already mentioned, not all modules require you to use p2panda data types, but if you want the most features, they come with all sorts of extensions you can pull in for your application’s needs.
So much news!
Collaborations with GNOME and HIRO
While working on the new version of p2panda we also embarked on collaborations with two very different teams. We’ve been exploring code, UX and UI patterns for GTK-based applications with a group of developers from GNOME and together will release the first GTK-based, collaborative, local-first text editor. Our second collaboration has been a project developed together with HIRO, a company based in the Netherlands. Together we designed and implemented a solution named “rhio” to sync large files and messages between micro data centers in a fully distributed manner.
Autonomous Coordination App
This autumn we started working on an autonomous coordination toolkit with a new team of six people, supported by the Innovate Creative Catalyst Programme in the UK. The goal of the project is to develop a mobile and desktop app for collectives, organisers and places to share resources and organise events in a shared calendar. This tool has been a long-standing goal of p2panda and it feels incredibly special to have the release of the first prototype scheduled for Spring of 2025!

Ongoing development of the resource sharing and event organising app with p2panda.
What’s next?
p2panda is a very multifaceted project: We maintain our crates, apply for grants, design protocols and do research in radically distributed data-types. We organise community events and write peer-to-peer applications with our friends and collaborators. There’s a lot coming up.
Improve!
This is our first release of the new p2panda version and we will surely learn more about its APIs and user requirements in the upcoming months. Our goal is to reach a stable API, but for now expect breaking changes as we adjust.
Group Encryption and Capabilities
Next year we’ll also be working on an NLNet NGI Zero Entrust grant to integrate UCAN-based access control and secure group encryption with Post-Compromise-Security and optional Forward-Secrecy, based on research into decentralised secure group messaging algorithms. Our plan is to implement these as Rust modules which you can pull into your application, independent of p2panda, your choice of data types or networking stack. The DCGKA algorithm we’ll be implementing is essentially Signal’s Double Ratchet Algorithm with PCS and FS, made fit for offline-first use.
Together with researchers we’ll be publishing our work on fork-tolerant and prunable append-only logs, hopefully in the form of another blog post or even a paper.
App releases!
Next year we will release the GTK-based text editor built in collaboration with GNOME, as well as the first version of the autonomous coordination app (name still pending), and hopefully organise a festival with it sometime :-)
Get involved
Please subscribe to our RSS feed for upcoming posts on our website or follow us on the Fediverse via @p2panda@autonomous.zone to hear more about upcoming events or news!
Our crates are ready to be played with and we are more than curious to hear about your ideas or feedback.
We are very excited to be hearing from you!

Tired but happy p2panda team: adz, sam and glyph in London.