Distributed Event Stores

Rethinking Data with Distributed Event Stores

The Conduit Network represents a shift towards decentralization by embracing distributed event stores—a dynamic data storage and retrieval system that gives participants unprecedented control over their data. 

Unlike traditional centralized data models, the Conduit Network empowers each user to dictate where their data is stored, how it’s accessed, and who can interact with it. This system establishes a distributed event store infrastructure, creating a resilient and efficient approach to data sovereignty, secure storage, and scalable access.

10XTS builds powerful, next-generation real world asset tokenization solutions using the Conduit Network, a suite of hardware, operating systems, and protocols with military-grade security, allowing participants to fully own the value generated in the new decentralized internet.

Distributed Event Stores: Shaping a Decentralized Data Landscape

Distributed event stores are at the core of the Conduit Network, operating as a sophisticated, decentralized storage system that lets data owners determine which nodes house their data. This model offers an alternative to traditional, centralized storage systems, which often sacrifice control, flexibility, and privacy for scale. In a distributed event store, the data owner controls where each data object or event is stored, enabling precise data management while ensuring that every transaction, change, and update is accurately recorded and traceable over time.

The Power of Data Sovereignty

One of the most innovative aspects of Conduit’s architecture is its commitment to Data Sovereignty. Data sovereignty refers to the participant’s ability to retain control over the storage, access, and backup of their data. In Conduit, each participant can choose specific nodes to store their data and create backups on additional nodes to ensure resiliency. Should a node or network experience an outage, these backups can restore access seamlessly, maintaining data integrity across the system.

This system also accommodates the needs of participants who change their physical locations. By allowing them to transfer their master node’s location, the Conduit Network provides unparalleled flexibility in data management and ownership. This portability enables participants to carry their data sovereignty across borders and infrastructures, allowing them to securely manage and access their data regardless of their location.
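To make this concrete, here is a minimal sketch of what a participant's data-sovereignty settings might look like: a chosen master node plus backup nodes, with failover to a backup when the master is unreachable. The `StoragePolicy` class and node names are illustrative assumptions, not Conduit's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class StoragePolicy:
    """A participant's data-sovereignty settings (hypothetical schema)."""
    master_node: str                                    # node holding the primary copy
    backup_nodes: list = field(default_factory=list)    # replicas chosen for resiliency

    def resolve(self, online: set) -> str:
        """Return the first reachable node, preferring the master."""
        for node in [self.master_node, *self.backup_nodes]:
            if node in online:
                return node
        raise RuntimeError("no reachable node holds this participant's data")

policy = StoragePolicy(master_node="node-eu-1", backup_nodes=["node-us-2", "node-ap-3"])
# Master is down; a backup transparently restores access.
backup = policy.resolve(online={"node-us-2", "node-ap-3"})   # "node-us-2"
```

Because the policy is data owned by the participant, relocating the master node is a policy change rather than a migration of the access model itself.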

Locating Data in a Distributed Network: Conduit’s Global Data Registry

In a distributed network where data is stored across many nodes, efficiently locating and retrieving data is crucial. Conduit solves this challenge with a Global Data Registry, akin to a Domain Name System (DNS), which records the location of each data object’s “master” copy across the network.

How the Global Data Registry Works

Each data object within the Conduit Network—such as a configuration, ledger, document, or profile—is assigned to a specific “master” node. Conduit typically organizes these objects by “Party,” which includes participants such as individual members, organizations, or legal entities. This setup provides logical data grouping, so each participant’s data is consolidated and efficiently managed on a specific node.

The Global Data Registry facilitates access to this data through several mechanisms:

  • Master Object Registry: This registry keeps a comprehensive record of the node that stores the primary (or “master”) copy of each data object. This allows back-end services to locate data efficiently, reducing latency and unnecessary data traffic across the network.
  • Network-Wide Caching: Copies of the registry are cached throughout the network. This cache allows local nodes to quickly access registry information and locate data without making requests to every node, speeding up data retrieval and enhancing scalability.

Through these methods, Conduit Network ensures that back-end services and client applications can access data as if it were locally stored, while the registry handles all the complexity of finding and retrieving the correct data across the network.
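The lookup flow described above can be sketched as a registry that consults a node-local cache before falling back to the authoritative master-object records. Class and key names here are illustrative assumptions, not Conduit's actual API.

```python
class GlobalDataRegistry:
    """Minimal sketch of a DNS-like registry mapping data objects to master nodes."""

    def __init__(self, master_records: dict):
        self._records = dict(master_records)   # object_id -> node holding the master copy
        self._cache = {}                       # node-local cache of prior lookups

    def locate(self, object_id: str) -> str:
        # Serve from the local cache when possible, avoiding network-wide queries.
        if object_id in self._cache:
            return self._cache[object_id]
        node = self._records[object_id]        # authoritative master-object registry
        self._cache[object_id] = node
        return node

registry = GlobalDataRegistry({
    "party:acme/ledger": "node-7",    # objects grouped by "Party" on one node
    "party:acme/profile": "node-7",
})
node = registry.locate("party:acme/ledger")   # "node-7"; cached for the next lookup
```

Grouping objects by Party means a single cache entry typically serves many of a participant's objects, which is why the cache cuts cross-network traffic so effectively.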

Streamlining Distributed Data Management

Conduit Network’s architecture consists of distributed microservices and multiple layers of intelligent routing to streamline data interactions across nodes. This flexible architecture enables Conduit to support various database types—relational and NoSQL—ensuring compatibility and future-proofing. Several key components make this system robust and scalable, including specialized database drivers, node-aware APIs, and conditional writes.

Database Drivers: Automating Data Routing and Processing

Conduit’s database drivers manage data interactions across nodes without requiring back-end services to have direct knowledge of each node’s data. The database driver sits between services and the underlying databases, handling reads and writes while accommodating different data types:

  • Slowly Changing Data: Data such as configurations or profile details, which change infrequently.
  • Fact Ledger Data: Data representing dynamic transactions, especially within Conduit’s economy-ledger context.

The database driver layer understands the nature of these data types, allowing it to forward each request to the correct node efficiently. Moreover, Conduit's approach keeps data-type handling out of the business logic entirely, simplifying the application layer and making distributed management more resilient and scalable.

By integrating database drivers that support both relational and NoSQL databases, Conduit eliminates reliance on a single database vendor or technology, allowing flexibility and supporting the evolution of future database technologies.
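A rough sketch of this routing idea follows: the driver looks up the owning node, then picks the write style from the data type, so back-end services stay node-agnostic. The class names, the in-memory backend, and the two type constants are illustrative assumptions; a real driver would sit in front of relational or NoSQL engines.

```python
SLOWLY_CHANGING = "slowly_changing"   # configurations, profile details
FACT_LEDGER = "fact_ledger"           # dynamic economy-ledger transactions

class InMemoryBackend:
    """Stand-in for a relational or NoSQL store behind the driver."""
    def __init__(self):
        self.docs, self.events = {}, {}
    def upsert(self, key, value):
        self.docs[key] = value
    def append_event(self, key, event):
        self.events.setdefault(key, []).append(event)

class DatabaseDriver:
    """Hypothetical driver: routes each request to the node holding the data."""
    def __init__(self, locator, backends):
        self.locator = locator        # callable: object_id -> node name
        self.backends = backends      # node name -> storage backend
    def write(self, object_id, data_type, payload):
        backend = self.backends[self.locator(object_id)]
        if data_type == FACT_LEDGER:
            backend.append_event(object_id, payload)   # ledger data is append-only
        else:
            backend.upsert(object_id, payload)         # configs/profiles are upserted

node7 = InMemoryBackend()
driver = DatabaseDriver(lambda oid: "node-7", {"node-7": node7})
driver.write("party:acme/profile", SLOWLY_CHANGING, {"name": "ACME"})
driver.write("party:acme/ledger", FACT_LEDGER, {"amount": 10})
```

Swapping `InMemoryBackend` for a relational or NoSQL adapter changes nothing above the driver, which is the vendor-independence point made above.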

Node-Aware APIs: Enhancing Transactions and Reports

The Conduit architecture also includes Node-Aware APIs for more efficient data handling across distributed nodes. While database drivers manage most data interactions, certain types of API calls—especially transactions and reports—benefit from node awareness, optimizing data flow and system performance.

  • Transactions: Conduit uses a two-phase commit process to manage transactions in a distributed environment, ensuring data consistency even when multiple nodes are involved. Instead of relying solely on the database driver to route each read/write operation, the system groups activities by node to streamline processing. This reduces latency, enabling efficient handling of complex transactions between participants.
  • Reports and Aggregations: Reporting and data aggregation often involve querying large datasets. With node-aware APIs, Conduit can forward report requests to the nodes holding the relevant data, optimizing performance by avoiding record-by-record retrieval. This design allows complex analytics and aggregation operations to occur closer to the data source, improving efficiency and scalability.
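The transaction flow above, grouping operations by owning node and then running a two-phase commit across those groups, can be sketched as follows. Function names and the operation format are illustrative assumptions, not Conduit's actual API.

```python
from collections import defaultdict

def group_by_node(operations, locate):
    """Group a transaction's operations by owning node, so each node receives
    one batch per phase instead of per-record calls."""
    batches = defaultdict(list)
    for op in operations:
        batches[locate(op["object_id"])].append(op)
    return dict(batches)

def two_phase_commit(batches, prepare, commit, rollback):
    """Phase 1: every node must vote yes; phase 2: commit (or roll back all)."""
    prepared = []
    for node, ops in batches.items():
        if prepare(node, ops):
            prepared.append(node)
        else:
            for done in prepared:     # any "no" vote aborts the whole transaction
                rollback(done)
            return False
    for node in prepared:
        commit(node)
    return True

ops = [{"object_id": "a", "op": "debit"}, {"object_id": "b", "op": "credit"}]
batches = group_by_node(ops, locate=lambda oid: {"a": "node-1", "b": "node-2"}[oid])
ok = two_phase_commit(batches, prepare=lambda n, o: True,
                      commit=lambda n: None, rollback=lambda n: None)
```

The batching is what delivers the latency win: a transfer between two participants costs one round trip per node rather than one per record.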

Ensuring Data Consistency with Conditional Writes

A distributed system like Conduit must handle concurrent data interactions without risking data integrity. Conduit uses Conditional Writes as a concurrency management method, ensuring that changes are applied only when data meets specific conditions, reducing conflicts and ensuring consistency across nodes.

  • Sequential Numbering of Events: Every data object’s events are assigned a sequence number, which allows for reliable tracking and prevents update conflicts. Before any update, the system checks the sequence number and proceeds only if the number matches expectations.
  • Error Handling for Conflicts: If a conditional write fails (due to sequence mismatch), the Event Store automatically invalidates cached data and replays the request. This mechanism prevents deadlocks and data conflicts, reducing the need for complex error handling by developers.
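The sequence-number check and replay-on-conflict behavior described above amount to optimistic concurrency control, which can be sketched like this. The class and exception names are illustrative assumptions, not Conduit's actual Event Store interface.

```python
class ConflictError(Exception):
    """Raised when a write's expected sequence number is stale."""
    def __init__(self, actual_seq):
        super().__init__(f"sequence mismatch; stream is at {actual_seq}")
        self.actual_seq = actual_seq

class EventStore:
    """Sketch of conditional writes keyed on per-object sequence numbers."""
    def __init__(self):
        self._streams = {}   # object_id -> ordered list of events

    def append(self, object_id, expected_seq, event):
        stream = self._streams.setdefault(object_id, [])
        if len(stream) != expected_seq:       # condition checked before the write
            raise ConflictError(len(stream))
        stream.append(event)
        return len(stream)                    # new sequence number

store = EventStore()
store.append("doc-1", 0, {"set": "title"})
try:
    store.append("doc-1", 0, {"set": "body"})             # stale sequence number
except ConflictError as e:
    store.append("doc-1", e.actual_seq, {"set": "body"})  # refresh and replay
```

Because the store itself refreshes and replays on conflict, application code sees at most a retried call rather than a lock or a deadlock.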

For even more robust conflict resolution, Conduit may extend this model to include data hashing, which could identify unauthorized updates on non-master copies and further ensure data accuracy across distributed nodes.

Distributed Identity Services: Managing Authentication and Access Across Nodes

In addition to data management, Conduit enables secure and distributed Identity Services, allowing each node to authenticate users locally. This decentralized identity system enhances security and flexibility, ensuring users can authenticate with their preferred node without sacrificing data integrity.

Localized Identity Services

Each node can independently host an Identity Service, responsible for managing local user configurations, such as login credentials, access rights, and associated parties (individuals or entities). This localized setup means that the identity service can authenticate users directly on the nodes where they manage their data, reducing latency and enhancing security.

Session Management and Token Authentication

Upon logging in, users are issued an access token containing the DNS name of the issuing node and a session ID. The token is used for subsequent authentication across the network, allowing users to access their data securely regardless of its storage location. When a back-end service requires authentication, it communicates with the Identity Service on the issuing node, which verifies the token’s validity.

This approach aligns with OAuth2 standards, ensuring that only the Identity Service issuing the token can verify it, providing an additional layer of security. Each session is logged in the Event Store, enabling data forensics and analytics.
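The verification flow described above can be sketched as follows: the token names its issuing node, and any back-end service routes verification back to that node's Identity Service. The token format, class names, and session-ID scheme here are illustrative assumptions, not Conduit's actual implementation.

```python
class IdentityService:
    """Per-node identity service; only the issuer can verify its own sessions."""
    def __init__(self, node_dns):
        self.node_dns = node_dns
        self._sessions = set()

    def login(self, user):
        session_id = f"sess-{user}"      # illustrative; real IDs would be random
        self._sessions.add(session_id)   # each session would also be event-logged
        return f"{self.node_dns}:{session_id}"

    def verify(self, session_id):
        return session_id in self._sessions

def authenticate(token, services):
    """A back-end service routes verification to the node named in the token."""
    node_dns, session_id = token.split(":", 1)
    return services[node_dns].verify(session_id)

ids = IdentityService("node-eu-1.conduit")
token = ids.login("alice")
valid = authenticate(token, {"node-eu-1.conduit": ids})   # True
```

Keeping verification on the issuing node means a stolen token cannot be validated by a compromised peer, which is the security property the OAuth2-style design is after.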

Enabling a New Era of Decentralized Data Sovereignty

The Conduit Network’s distributed event store model represents a breakthrough in decentralized data management, enabling participants to retain control over their data while ensuring resiliency and flexibility. By combining database drivers, node-aware APIs, conditional writes, and distributed identity services, Conduit empowers users to manage their data on their terms—without sacrificing efficiency or security.

Through this innovative architecture, Conduit is setting a new standard for data sovereignty, resilience, and scale in decentralized data networks. This platform offers a comprehensive framework that enables seamless, secure, and scalable data management, setting the stage for a future where data ownership and control are in the hands of the individuals and organizations that own it.