How to integrate with the DA API
Customizations require expertise
Customizing your chain is a core benefit of building with Arbitrum chains. We strongly recommend that teams interested in customizations work alongside a partner with ArbOS and Nitro software expertise, such as a Rollup-as-a-Service team.
Working alongside an experienced Arbitrum chain operator can help your team navigate the complex tradeoff space of rollup customizations, including performance, security, and cost considerations. Offchain Labs is positioned to train and enable Rollup-as-a-Service teams in their work with clients, scaling support to the Arbitrum chain ecosystem as a whole. As such, Offchain Labs does not necessarily have the capacity to review code changes made by individual Arbitrum chains.
We encourage you to leverage your in-house expertise, collaborate with expert partners, and allocate appropriate resources for both an initial implementation (including an audit) and ongoing maintenance and security management of your customization.
This guide will help your team implement its own Data Availability (DA) provider that integrates with Arbitrum Nitro using the DA API.
The DA API is experimental until Nitro officially announces its availability in a release.
1. What is the DA API?
The DA API is an extensibility feature in Arbitrum Nitro that allows external data availability providers to integrate with Nitro without requiring any fork of Nitro core or contracts. This extensibility will enable your team to build and deploy custom DA solutions tailored to your specific needs while maintaining full compatibility with the Nitro stack.
Why use the DA API?
- No Forking Required: Integrate your DA system without modifying Nitro or contracts
- Pluggable Architecture: Define your own certificate formats and validation logic
- Fraud-proof Compatible: Full support for the BoLD challenge protocol and fraud proofs
- Multi-Provider Support: Multiple DA systems can coexist on the same chain
Architecture overview
The DA API has four main components:
- Reader RPC Methods: Recover batch data from certificates and collect preimages for fraud proofs
- Writer RPC Method: Stores batch data and generates certificates
- Validator RPC Methods: Generate proofs for fraud-proof validation
- Onchain Validator Contract (Solidity): Validates proofs onchain during fraud-proof challenges
These components work together to enable:
- Normal execution: Batch Poster stores data -> generates certificate -> posts to L1
- Validation: Node reads certificate -> recovers data -> executes
- Fraud proofs: Prover generates proof -> enhances with DA data -> validates onchain
Who is this guide for?
This guide is for development teams who want to:
- Integrate an existing DA system with Nitro's DA API
- Understand how the DA API works under the hood
Prerequisites: Familiarity with JSON-RPC, Solidity, Ethereum L1/L2 architecture, and Arbitrum Nitro basics. Experience with any backend programming language.
Reference implementation
The Nitro repository includes ReferenceDA, a complete working example of a DA provider:
- Go implementation: `daprovider/referenceda/`
- Solidity contract: `contracts-local/src/osp/ReferenceDAProofValidator.sol`
- Example server: `cmd/daprovider/`
Use ReferenceDA as a reference when building your own provider. It demonstrates all the RPC methods and patterns described in this guide.
2. Quickstart guide
To create a DA provider, you need to implement two components:
1. JSON-RPC server (any language)
Your DA provider exposes JSON-RPC methods that Nitro nodes call to store and retrieve data. You can implement this in any language.
Reader Methods (data retrieval):
- `daprovider_getSupportedHeaderBytes` - Returns header bytes identifying your provider
- `daprovider_recoverPayload` - Recovers batch data from a certificate (normal execution)
- `daprovider_collectPreimages` - Collects preimages for validation and fraud-proof replay (validation)
Writer Methods (for batch posting):
- `daprovider_store` - Stores batch data and returns a certificate
- `daprovider_startChunkedStore`, `daprovider_sendChunk`, `daprovider_commitChunkedStore` - Optional streaming protocol for large batches (see Appendix A)
Validator Methods (fraud-proof generation):
- `daprovider_generateReadPreimageProof` - Generates a proof for reading preimage data
- `daprovider_generateCertificateValidityProof` - Generates a proof of certificate validity
2. Onchain validator contract (Solidity)
- Implements the `ICustomDAProofValidator` interface
- `validateReadPreimage()` - Validates preimage read proofs onchain
- `validateCertificate()` - Validates certificate authenticity onchain
Development workflow
- Design your certificate format (Section 6)
- Implement Reader RPC methods to recover data from certificates (Section 3)
- Implement Writer RPC method to generate certificates (Section 4)
- Implement Validator RPC methods to generate proofs (Section 5)
- Implement the onchain validator contract for proof validation (Section 7)
- Create your JSON-RPC server exposing these methods (Section 9)
- Test your integration end-to-end (Section 10)
- Deploy your server and configure Nitro nodes (Section 9)
Quick reference: ReferenceDA example
ReferenceDA is a complete working example you can study:
ReferenceDA Implementation:
- Certificate format: 99 bytes (header + SHA256 + ECDSA signature) - see `daprovider/referenceda/certificate.go`
- JSON-RPC server: Complete working server at `cmd/daprovider/daprovider.go`
  - Implements all required RPC methods
  - Written in Go, but you can use any language
  - Shows configuration, server setup, and lifecycle management
- Onchain contract: `contracts-local/src/osp/ReferenceDAProofValidator.sol`
  - Implements `ICustomDAProofValidator`
  - Uses ECDSA signature verification with a trusted signer mapping
3. Implementing reader RPC methods
Your JSON-RPC server must implement three Reader methods that Nitro nodes call to retrieve batch data. You can implement them in any language.
Method 1: daprovider_getSupportedHeaderBytes
Returns the header byte strings that identify your DA provider in sequencer messages.
Parameters: None
Returns:
{
"headerBytes": ["0x01ff"] // Array of hex-encoded byte strings
}
Example:
- `ReferenceDA` returns `["0x01ff"]` (DA API header `0x01` + provider type `0xFF`)
- Your provider might use `["0x01aa", "0x01bbcc"]` or any other hex strings, as long as they start with `0x01`
Purpose: Nitro uses this to register your provider in its internal routing table. When it sees a sequencer message starting with these bytes, it routes the request to your server. Your provider can return multiple strings to support different certificate formats, DA systems, etc., at once.
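A minimal sketch of what this handler can look like in Go (the `Server` type and JSON field mapping are illustrative assumptions, not part of the API):

```go
package headerdemo

// HeaderBytesResult mirrors the JSON shape shown above (illustrative).
type HeaderBytesResult struct {
	HeaderBytes []string `json:"headerBytes"`
}

type Server struct{}

// GetSupportedHeaderBytes serves daprovider_getSupportedHeaderBytes.
func (s *Server) GetSupportedHeaderBytes() (*HeaderBytesResult, error) {
	// 0x01 = DA API header flag, 0xff = this provider's (hypothetical) type byte.
	return &HeaderBytesResult{HeaderBytes: []string{"0x01ff"}}, nil
}
```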
Method 2: daprovider_recoverPayload
Recovers the full batch payload data from a certificate. Called during normal node execution.
Parameters:
{
"batchNum": "0x1a2b", // Batch number (hex-encoded uint64)
"batchBlockHash": "0x1234...", // Block hash when batch was posted
"sequencerMsg": "0xabcd..." // Full sequencer message including certificate
}
Returns:
{
"Payload": "0x5678..." // Hex-encoded batch data
}
Implementation Requirements:
- Extract the certificate from `sequencerMsg`:
  - `sequencerMsg` format: `[SequencerHeader(40 bytes), DACertificateFlag(0x01), Certificate(...)]`
  - Skip the first 40 bytes, then extract your certificate
- Validate the certificate:
  - Check the certificate format/structure
  - Verify the signature or proof
  - Confirm the certificate is authentic according to your DA system's rules
- Retrieve the batch data using information in the certificate
- Verify data integrity:
  - Check that the data matches the commitment in the certificate
  - Example: `sha256(data) == certificate.dataHash`
- Return the payload or an error (a sketch of this flow follows)
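A condensed Go sketch of this flow, assuming a SHA256 data commitment like ReferenceDA's; `parsedCert`, `parseCertificate`, and the `Storage` interface are illustrative stand-ins for your own types:

```go
package readerdemo

import (
	"crypto/sha256"
	"errors"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
)

type parsedCert struct{ DataHash [32]byte }

// parseCertificate assumes a hypothetical [0x01, type, 32-byte hash] layout.
func parseCertificate(b []byte) (*parsedCert, error) {
	if len(b) < 34 || b[0] != 0x01 {
		return nil, errors.New("bad certificate header")
	}
	var c parsedCert
	copy(c.DataHash[:], b[2:34])
	return &c, nil
}

type Storage interface {
	Get(dataHash [32]byte) ([]byte, error)
}

type Server struct{ storage Storage }

func (s *Server) RecoverPayload(batchNum hexutil.Uint64, batchBlockHash common.Hash, sequencerMsg hexutil.Bytes) (hexutil.Bytes, error) {
	if len(sequencerMsg) <= 41 {
		return nil, errors.New("certificate validation failed: sequencer message too short")
	}
	cert := sequencerMsg[40:] // skip the 40-byte sequencer header
	parsed, err := parseCertificate(cert)
	if err != nil {
		// Invalid certificate: Nitro matches on this marker and treats the batch as empty.
		return nil, fmt.Errorf("certificate validation failed: %w", err)
	}
	payload, err := s.storage.Get(parsed.DataHash)
	if err != nil {
		// Infrastructure failure: any other error message stops syncing.
		return nil, fmt.Errorf("storage unavailable: %w", err)
	}
	if sha256.Sum256(payload) != parsed.DataHash {
		return nil, errors.New("certificate validation failed: data hash mismatch")
	}
	return payload, nil
}
```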
Error Handling:
Your implementation must distinguish between three scenarios:
1. Invalid Certificate -> `CertificateValidationError`
   - When: The certificate is invalid (bad format, bad signature, untrusted signer, etc.)
   - Return: JSON-RPC error with a message containing `"certificate validation failed"`
   - Nitro behavior: Treats the batch as empty (zero transactions); syncing continues
   - Example: `"certificate validation failed: untrusted signer"`
2. Other Errors -> Syncing Stops
   - When: Storage failure, network error, RPC timeout, database down
   - Return: JSON-RPC error with any other message
   - Nitro behavior: Stops syncing immediately
   - Example: `"storage unavailable: database connection lost"`
3. Empty Batch -> Valid Response
   - When: The certificate is valid but the batch data is actually empty
   - Return: Successful response with `"Payload": null` or `"Payload": "0x"`
   - Nitro behavior: Processes an empty batch (zero transactions); syncing continues
   - Important: Only return a nil/empty payload if the batch is truly empty, never as an error signal
The DA provider is a trusted component. Nitro cannot validate what you return, so you must:
- Return proper errors when there are errors (don't return nil payload)
- Use CertificateValidationError for invalid certificates (allows processing to continue)
- Use other errors for infrastructure failures (stops syncing until resolved)
Detection Mechanism: Nitro detects CertificateValidationError by checking if the error message contains the string "certificate validation failed". The error can include additional context after this string.
Example Response (Success with Data):
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"Payload": "0x000123456789abcdef..."
}
}
Example Response (Success with Empty Batch):
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"Payload": "0x"
}
}
Example Response (Invalid Certificate - Syncing Continues):
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32000,
"message": "certificate validation failed: untrusted signer"
}
}
Example Response (Storage Error - Syncing Stops):
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32000,
"message": "storage unavailable: database connection timeout"
}
}
Method 3: daprovider_collectPreimages
Collects preimage mappings needed for fraud-proof replay (called during validation).
Parameters: Same as daprovider_recoverPayload
{
"batchNum": "0x1a2b",
"batchBlockHash": "0x1234...",
"sequencerMsg": "0xabcd..."
}
Returns:
{
"Preimages": {
"0xabcd1234...": {
"Data": "0x5678...",
"Type": 3
}
}
}
Implementation Requirements:
- Do the same work as `recoverPayload`:
  - Extract the certificate
  - Validate the certificate
  - Retrieve the batch data
- Build the preimage mapping:
  - Compute `certHash = keccak256(certificate)`
  - Map `certHash -> batch data`
  - Set the preimage type to `3` (`DACertificatePreimageType`)
Critical: The preimage key must be keccak256(certificate), not your internal hash. The fraud-proof replay binary expects this specific key.
Example Response:
{
"Preimages": {
"0xabcd1234...certHash": {
"Data": "0x5678...batchData",
"Type": 3
}
}
}
Error Handling:
`collectPreimages` has identical error-handling behavior to `recoverPayload`:
- Invalid certificate -> return an error containing `"certificate validation failed"` (the validator continues, treating the batch as empty)
- Other errors -> return any other error (the validator stops)
- Empty batch -> return an empty preimages map (valid; the validator continues)
See the Error Handling section in recoverPayload above for complete details.
Why Two Separate Methods?
- `recoverPayload`: Fast path for normal execution (just needs the data)
- `collectPreimages`: Validation path (needs the data plus the keccak256 mapping for fraud proofs)
This separation avoids unnecessary work in each context.
Parameter encoding
All uint64 parameters must be hex-encoded strings with a 0x prefix:
- Correct: `"0x1a2b"`, `"0x0"`, `"0xff"`
- Incorrect: `42`, `"42"`, `"0x"`
All byte arrays use hex encoding with a 0x prefix:
- Correct: `"0xabcdef"`, `"0x01ff"`
- Incorrect: `"abcdef"`, `[0xab, 0xcd]`
Reference: ReferenceDA implementation
The ReferenceDA server (cmd/daprovider/) shows a complete working implementation in Go. Key logic:
Certificate extraction:
certBytes := sequencerMsg[40:] // Skip 40-byte sequencer header
Certificate validation (calls L1 contract):
validator, _ := NewReferenceDAProofValidator(validatorAddr, l1Client)
err := cert.ValidateWithContract(validator, &bind.CallOpts{})
Preimage recording:
certHash := crypto.Keccak256Hash(certBytes)
preimages[certHash.Hex()] = PreimageResult{
Data: hexutil.Encode(payload),
Type: 3, // DACertificatePreimageType
}
While ReferenceDA is written in Go, you can implement these methods in any language. You need to:
- Parse hex-encoded JSON-RPC requests
- Query an Ethereum L1 node
- Query your DA system
- Store and retrieve data
- Compute keccak256 hashes
4. Implementing the writer RPC method
Your JSON-RPC server implements a Writer method that the Batch Poster calls to store batch data and get a certificate.
Method: daprovider_store
Stores batch data and returns a certificate that will be posted to L1.
Parameters:
{
"message": "0x1234...", // Batch data to store (hex-encoded)
"timeout": "0x67a30580" // Expiration time as Unix timestamp (hex-encoded uint64)
}
The timeout parameter specifies when the stored data should expire:
- Unix timestamp (seconds since January 1, 1970 UTC)
- Minimum retention: the DA provider must retain the data at least until this time
- Calculated by the batch poster as `current_time + retention_period` (a validation sketch follows)
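For example, a server could sanity-check the requested retention before storing (a sketch; enforcing this is a provider policy choice, not a protocol requirement):

```go
package writerdemo

import (
	"errors"
	"time"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

// checkTimeout rejects store requests whose retention window already passed.
func checkTimeout(timeout hexutil.Uint64) error {
	if uint64(timeout) <= uint64(time.Now().Unix()) {
		return errors.New("store rejected: requested retention already expired")
	}
	return nil
}
```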
Returns:
{
"serialized-da-cert": "0x01..." // Certificate (hex-encoded)
}
Implementation Requirements:
- Store the batch data in your DA system:
  - Must be retrievable later using the certificate
- Generate a certificate:
  - Must start with byte `0x01` (DA API header), optionally followed by your provider type byte(s); refer to the description in the section on `daprovider_getSupportedHeaderBytes`
  - Must contain enough information to retrieve the data later
  - Should include a data commitment (hash, Merkle root, etc.)
  - Should include proof of authenticity (signature, BLS signature, etc.)
- Return the certificate as hex-encoded bytes
Example Request:
{
"jsonrpc": "2.0",
"id": 1,
"method": "daprovider_store",
"params": [
{
"message": "0x00012345...",
"timeout": "0x67a30580"
}
]
}
Example Response (Success):
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"serialized-da-cert": "0x011234567890abcdef..."
}
}
Certificate format
Your certificate should:
- Start with `0x01` (the DA API header byte, defined as `DACertificateMessageHeaderFlag`)
- Optionally include provider type bytes: your provider type identifier; see the description in the section on `daprovider_getSupportedHeaderBytes`
- Contain a data commitment: hash, Merkle root, KZG commitment, etc.
- Contain a validity proof: signature, BLS signature, or other authentication
- Be compact: certificates are posted to L1 as calldata
ReferenceDA Example (99 bytes total):
[0] : `0x01` (DA API header)
[1] : `0xFF` (`ReferenceDA` provider type)
[2-33] : SHA256(batch data) - 32 bytes
[34-98] : ECDSA signature (v, r, s) - 65 bytes
See Section 6 for detailed guidance on certificate design.
Fallback mechanism
The Batch Poster supports multiple DA writers in a sequential fallback chain. If your server wants to trigger fallback to the next writer (e.g., temporary unavailability, overload), return an error containing the string:
"DA provider requests fallback to next writer"
Example Fallback Response:
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32000,
"message": "DA provider requests fallback to next writer: storage temporarily unavailable"
}
}
This error message is the only way to trigger automatic fallback. Any other error stops batch posting entirely without trying other writers. This design prevents fixable infrastructure issues from silently triggering unexpected, potentially expensive fallbacks. A handler sketch follows.
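A sketch of a store handler that requests fallback when its backing store is degraded (`Healthy` and `storeAndSign` are illustrative helpers, not part of the API):

```go
package writerdemo

import (
	"errors"
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

type Server struct {
	storage interface{ Healthy() bool } // illustrative health probe
}

// storeAndSign stands in for real storage plus certificate generation.
func (s *Server) storeAndSign(message []byte, timeout uint64) ([]byte, error) {
	return nil, errors.New("not implemented in this sketch")
}

func (s *Server) Store(message hexutil.Bytes, timeout hexutil.Uint64) (hexutil.Bytes, error) {
	if !s.storage.Healthy() {
		// Nitro matches on this exact marker string and tries the next writer.
		return nil, errors.New("DA provider requests fallback to next writer: storage temporarily unavailable")
	}
	cert, err := s.storeAndSign(message, uint64(timeout))
	if err != nil {
		// Any other error halts batch posting without trying other writers.
		return nil, fmt.Errorf("storage failed: %w", err)
	}
	return cert, nil
}
```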
Error handling
Return errors for:
- Storage failures (disk full, network down)
- Invalid batch data
- Timeout exceeded
- System overload (use fallback error)
Example Error Response:
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32000,
"message": "storage failed: disk quota exceeded"
}
}
Reference: ReferenceDA implementation
The ReferenceDA server shows a complete implementation. Key logic:
Store batch data (in-memory for demo, use real storage in production):
storage.Store(message)
Generate certificate with SHA256 hash + ECDSA signature:
dataHash := sha256.Sum256(message)
sig, _ := signer(dataHash[:])
certificate := []byte{
0x01, // DA API header
0xFF, // ReferenceDA type
}
certificate = append(certificate, dataHash[:]...) // 32 bytes
certificate = append(certificate, sig...) // 65 bytes (v, r, s)
Return certificate:
return StoreResult{
SerializedDACert: hexutil.Encode(certificate),
}
While ReferenceDA uses Go, you can implement daprovider_store in any language. You need to be able to:
- Accept JSON-RPC requests
- Store data persistently
- Generate cryptographic signatures/commitments
- Return hex-encoded responses
Streaming protocol methods (optional)
For large batches exceeding HTTP body limits (default: 5MB), you'll need to implement these three additional RPC methods:
- `daprovider_startChunkedStore`: Initiates a chunked storage session
- `daprovider_sendChunk`: Sends individual chunks (can be sent in parallel)
- `daprovider_commitChunkedStore`: Finalizes the stream and returns the certificate
See Appendix A: Streaming Protocol for complete specifications, parameter details, and implementation guidance.
5. Implementing validator RPC methods
Your JSON-RPC server implements two Validator methods that generate cryptographic proofs for fraud proof validation. These are critical for security - the fraud-proof system must be able to prove both valid and invalid certificates onchain.
Method 1: daprovider_generateReadPreimageProof
Generates an opening proof for reading a specific 32-byte range of the committed batch data during fraud proof validation. The offset parameter specifies the 32-byte-aligned starting position (must be a multiple of 32).
Parameters:
{
"certHash": "0xabcd...", // keccak256 hash of the certificate
"offset": "0x0", // 32-byte-aligned offset (must be multiple of 32, hex-encoded uint64)
"certificate": "0x01..." // Full certificate bytes
}
The offset must be 32-byte aligned (0, 32, 64, 96, ...). The proof covers exactly 32 bytes starting at this offset.
Returns:
{
"proof": "0x..." // Your DA-system-specific proof data (hex-encoded)
}
Implementation Approaches:
Your implementation can use one of two approaches:
1. Simple approach (like ReferenceDA): Include the full preimage
- Proof contains the entire batch data
- Easy to implement, returns the stored data
- Inefficient for large batches: 5MB batch = 5MB proof
2. Advanced approach: Use cryptographic opening proofs
- Proof contains only a commitment opening for the requested byte range
- Efficient for large batches: 5MB batch = typically <1KB proof
- Requires a cryptographic commitment scheme (see examples below)
Implementation Steps:
- Parse the certificate to extract information about the stored data
- Retrieve the batch data from your DA storage
- Build a proof using one of these approaches:
  - Simple (full preimage):
    - Include the entire batch payload in the proof
    - Format: `[version, preimageSize, preimageData]`
  - Advanced (opening proof):
    - Generate a cryptographic opening for the 32-byte range at `offset`
    - Include only the commitment opening, not the full data
    - See the cryptographic scheme examples below
- Return the proof as hex-encoded bytes
Cryptographic Commitment Scheme Examples:
If you're implementing the advanced approach with opening proofs, here are common schemes:
KZG Polynomial Commitments (used in EIP-4844, Celestia, EigenDA):
- Commit to the batch data as a polynomial
- Generate a point evaluation proof for the 32-byte chunk at `offset`
- Proof size: ~48 bytes (constant, regardless of batch size)
- Verification: an onchain pairing check proves the polynomial evaluates correctly at that position
- Example: EIP-4844 uses the `POINT_EVALUATION_PRECOMPILE` for this
Proof format: `[commitment (48 bytes), evaluation (32 bytes), proof (48 bytes)]`
Merkle Tree Commitments:
- Organize the batch into 32-byte chunks as tree leaves
- Generate an inclusion proof for the leaf at position `offset / 32` (a sketch follows the comparison table below)
- Proof size: ~log₂(n) × 32 bytes (e.g., 512 bytes for 64K leaves)
- Verification: hash the authentication path to prove chunk inclusion
- Common in Bitcoin and Ethereum state trees
Proof format: `[leaf_data, sibling_hash₁, sibling_hash₂, ..., sibling_hashₙ]`
Vector Commitments:
- Commit to batch as a vector of elements
- Generate a position opening for the specific index/range
- Proof size: Constant (scheme-dependent, often ~32-96 bytes)
- Verification: Algebraic check proves correct opening
- Used in some modern DA systems
Proof format: [element_value, opening_proof, auxiliary_data]
Comparison:
| Approach | Proof Size (5MB batch) | Pros | Cons |
|---|---|---|---|
| Full Preimage | 5MB | Simple to implement | Huge proofs, expensive L1 verification |
| KZG Commitments | ~128 bytes | Constant size, efficient | Requires trusted setup, pairing-friendly curves |
| Merkle Trees | ~512 bytes | No trusted setup, simple | Logarithmic size, multiple hashes to verify |
| Vector Commitments | ~64 bytes | Constant size, flexible | More complex cryptography |
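As an illustration of the Merkle approach, here is a minimal Go sketch that builds an inclusion proof for the leaf at `offset / 32`, assuming the batch has been padded to a power-of-two number of 32-byte leaves and hashed with keccak256 (all names are illustrative):

```go
package merkledemo

import "golang.org/x/crypto/sha3"

func keccak(parts ...[]byte) []byte {
	h := sha3.NewLegacyKeccak256()
	for _, p := range parts {
		h.Write(p)
	}
	return h.Sum(nil)
}

// BuildProof returns the sibling hashes from the leaf at `index` up to the
// root. leaves must already be padded to a power-of-two count.
func BuildProof(leaves [][]byte, index int) [][]byte {
	var proof [][]byte
	level := leaves
	for len(level) > 1 {
		proof = append(proof, level[index^1]) // sibling at this level
		next := make([][]byte, 0, len(level)/2)
		for i := 0; i < len(level); i += 2 {
			next = append(next, keccak(level[i], level[i+1]))
		}
		level = next
		index /= 2
	}
	return proof
}
```

The onchain validator would then recompute the root from the 32-byte chunk and the sibling path and compare it against the commitment in the certificate.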
What Happens Next:
- The proof enhancer prepends `[certSize(8), certificate]` to your proof
- The complete proof is sent to your onchain `validateReadPreimage()` function
- Your contract extracts up to 32 bytes starting at `offset` and returns them
Example Request:
{
"jsonrpc": "2.0",
"id": 1,
"method": "daprovider_generateReadPreimageProof",
"params": [
{
"certHash": "0xabcd1234...",
"offset": "0x0",
"certificate": "0x01..."
}
]
}
Example Response:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"proof": "0x0100000000000001005678..."
}
}
ReferenceDA Proof Format (Simple Approach):
ReferenceDA uses the simple approach, including the full preimage in the proof:
[0] : version (`0x01`)
[1-8] : preimageSize (8 bytes, big-endian uint64)
[9...] : preimageData (full batch payload - entire 5MB for a 5MB batch!)
This method is easy to implement but inefficient for large batches. Production DA systems should consider using cryptographic commitments (e.g., KZG, Merkle, etc.) to generate compact opening proofs instead.
Method 2: daprovider_generateCertificateValidityProof
Generates a proof of whether a certificate is valid or invalid according to your DA system's rules.
Parameters:
{
"certificate": "0x01..." // Certificate to validate
}
Returns:
{
"proof": "0x..." // Validity proof (hex-encoded)
}
Invalid Certificates Return Proof, not Error
Invalid certificates (bad format, bad signature, untrusted signer) must return a successful response with claimedValid=0 in the proof. Do not return an error.
Only return errors for:
- Network failures (can't reach L1, database down)
- RPC timeouts
- Other transient issues
Why? The fraud-proof system needs to prove "this certificate is invalid" onchain. If you return an error, this proof becomes impossible.
Implementation Requirements:
- Validate the certificate:
  - Check format/structure
  - Verify the signature or cryptographic proof
  - Check against trusted signers (if applicable)
- Determine validity:
  - Valid -> `claimedValid = 1`
  - Invalid (any reason) -> `claimedValid = 0`
- Build a proof containing:
  - The `claimedValid` byte (0 or 1)
  - Any additional data your onchain validator needs
  - For `ReferenceDA`: `[claimedValid(1 byte), version(1 byte)]`
- Return the proof (not an error, even for invalid certificates!)
Example Request:
{
"jsonrpc": "2.0",
"id": 1,
"method": "daprovider_generateCertificateValidityProof",
"params": [
{
"certificate": "0x01ff1234..."
}
]
}
Example Response (Valid Certificate):
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"proof": "0x0101" // claimedValid=1, version=1
}
}
Example Response (Invalid Certificate - still success!):
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"proof": "0x0001" // claimedValid=0, version=1
}
}
Example Response (Network Error - only now return Error):
{
"jsonrpc": "2.0",
"id": 1,
"error": {
"code": -32000,
"message": "failed to query L1 contract: connection timeout"
}
}
ReferenceDA Proof Format:
[0] : `claimedValid` (`0x00` = invalid, `0x01` = valid)
[1] : version (`0x01`)
Proof flow summary
What the proof enhancer does:
- Receives your custom proof from the RPC method
- Prepends standardized header:
[certSize(8 bytes), certificate, ...] - Sends complete proof to your onchain validator contract
For `generateReadPreimageProof`:
Complete proof = [machineProof..., certSize(8), certificate, yourCustomProof]
(for ReferenceDA, yourCustomProof = [version, size, preimageData])
For `generateCertificateValidityProof`:
Complete proof = [machineProof..., certSize(8), certificate, claimedValid(1), yourCustomProof]
(for ReferenceDA, the trailing bytes are [0 or 1, version, ...])
Reference: ReferenceDA implementation
The ReferenceDA server demonstrates the complete pattern:
GenerateCertificateValidityProof - key logic:
// Parse certificate
cert, err := Deserialize(certificate)
if err != nil {
// Invalid format -> claimedValid=0, not an error!
return proof{claimedValid: 0, version: 1}
}
// Verify signature
signer, err := cert.RecoverSigner()
if err != nil {
// Invalid signature -> claimedValid=0
return proof{claimedValid: 0, version: 1}
}
// Query L1 contract for trusted signers
isTrusted, err := l1Contract.TrustedSigners(signer)
if err != nil {
// Network error -> return actual error
return error(err)
}
// Build proof
return proof{claimedValid: boolToByte(isTrusted), version: 1}
GenerateReadPreimageProof - key logic:
// Parse certificate to get data hash
cert, err := Deserialize(certificate)
// Retrieve full batch data from storage
preimageData, err := storage.GetByHash(cert.DataHash)
// Build proof with full data
return proof{
version: 1,
preimageSize: len(preimageData),
preimageData: preimageData,
}
While ReferenceDA uses Go, you can implement these methods in any language.
6. Designing your certificate format
The certificate is the core data structure in your DA system. It's posted to L1 and used to retrieve batch data.
Header requirements
All DA API certificates must start with:
- Byte 0: `0x01` (defined as `daprovider.DACertificateMessageHeaderFlag`)
- Bytes 1-n: Optional provider bytes identifying your certificate, which Nitro uses to route requests to the right provider server
What to include
Your certificate should contain:
- Data Commitment: A hash or commitment to the batch payload
  - `ReferenceDA` uses the SHA256 hash of the payload
  - You could use keccak256, SHA3, a Merkle root, a KZG commitment, etc.
- Validity Proof: Information that proves the certificate is authentic
  - `ReferenceDA` uses an ECDSA signature over the data hash
  - You could use BLS signatures, Merkle proofs, aggregated signatures, etc.
- Any Other Metadata: Whatever your onchain validator needs
  - Timestamps, version numbers, provider IDs, etc.
Size considerations
Certificates are posted to L1 as calldata, so smaller is better for gas costs.
ReferenceDA: 99 bytes (header(2) + hash(32) + signature(65))
Format flexibility
The DA API treats certificates as opaque blobs. The core Nitro system only cares about:
- The initial `0x01` `DACertificateMessageHeaderFlag`, plus optional provider bytes for routing to the correct external DA system
- `keccak256(certificate)` as the preimage key
- The certificate being posted to L1 intact
Everything else is up to you. Your Reader, Writer, Validator, and onchain contract define the format and validation rules.
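As a sketch of what a custom format might look like (not ReferenceDA's layout; the `0xAA` provider byte and 34-byte size are hypothetical):

```go
package certdemo

import "errors"

// Hypothetical 34-byte certificate: [0x01, 0xAA, 32-byte data commitment].
type Certificate struct {
	DataHash [32]byte
}

func (c *Certificate) Serialize() []byte {
	out := []byte{0x01, 0xAA} // DA API header + hypothetical provider type
	return append(out, c.DataHash[:]...)
}

func Deserialize(b []byte) (*Certificate, error) {
	if len(b) != 34 || b[0] != 0x01 || b[1] != 0xAA {
		return nil, errors.New("bad certificate format")
	}
	var c Certificate
	copy(c.DataHash[:], b[2:34])
	return &c, nil
}
```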
ReferenceDA certificate format
File: daprovider/referenceda/certificate.go
Byte Layout (99 bytes total):
[0] : `0x01` (DA API header, DACertificateMessageHeaderFlag)
[1] : `0xFF` (`ReferenceDA` provider type)
[2-33] : SHA256 hash of payload (32 bytes)
[34] : ECDSA signature `V` value (1 byte)
[35-66] : ECDSA signature `R` value (32 bytes)
[67-98] : ECDSA signature `S` value (32 bytes)
Design Rationale:
ReferenceDA is a simple example to demonstrate basic features needed by a certificate:
- SHA256 provides data commitment
- ECDSA signature proves authenticity (verifiable onchain with `ecrecover`)
- A trusted signer mapping determines validity (configured on the validator contract)
7. Implementing the onchain validator contract
The onchain validator contract validates proofs during fraud-proof challenges. This contract is the security-critical component that ensures only valid data is accepted.
Interface definition
File: contracts/src/osp/ICustomDAProofValidator.sol
interface ICustomDAProofValidator {
/**
* @notice Validates a proof for reading a preimage at a specific offset
* @param certHash Keccak256 hash of the certificate
* @param offset Offset in the preimage to read
* @param proof Complete proof data (format: [certSize(8), certificate, customData...])
* @return preimageChunk Up to 32 bytes of preimage data at the specified offset
*/
function validateReadPreimage(
bytes32 certHash,
uint256 offset,
bytes calldata proof
) external view returns (bytes memory preimageChunk);
/**
* @notice Validates whether a certificate is authentic
* @param proof Complete proof data (format: [certSize(8), certificate, claimedValid(1), validityProof...])
* @return isValid True if certificate is valid, false otherwise
*
* IMPORTANT: Must NOT revert for invalid certificates; return false instead.
*/
function validateCertificate(
bytes calldata proof
) external view returns (bool isValid);
}
validateReadPreimage implementation
This function must:
- Extract the certificate from the proof (the first 8 bytes are the certificate size, followed by the certificate)
- Verify the certificate hash matches `certHash` (security critical!)
- Extract your custom proof data (everything after the certificate)
- Validate the custom proof according to your DA system's rules
- Extract and return up to 32 bytes of preimage data starting at `offset`
Security Requirements:
- MUST verify `keccak256(certificate)` matches the provided `certHash`
- MUST validate that the preimage data matches the commitment in the certificate
- Can revert on invalid proofs (this is a read operation, not a validity determination)
validateCertificate implementation
This function must:
- Extract the certificate from the proof
- Extract the claimedValid byte (at `proof[8 + certSize]`)
- Validate the certificate according to your DA system's rules
- Return true if valid, false if invalid
This function must not revert for invalid certificates. It should:
- Return `true` for valid certificates
- Return `false` for invalid certificates (bad format, bad signature, untrusted signer, etc.)
- Only revert for truly unexpected conditions (e.g., internal contract errors)
Why? The fraud-proof system needs to be able to prove "this certificate is invalid" onchain. If the function reverts, this proof becomes impossible.
Proof format
The OneStepProverHostIo contract passes proofs in this format:
For validateReadPreimage:
[certSize(8 bytes), certificate, yourCustomProofData]
For validateCertificate:
[certSize(8 bytes), certificate, claimedValid(1 byte), yourCustomProofData]
Extract components like:
// Extract certificate size
uint64 certSize = uint64(bytes8(proof[0:8]));
// Extract certificate
bytes calldata certificate = proof[8:8 + certSize];
// Extract custom proof (for validateReadPreimage)
bytes calldata customProof = proof[8 + certSize:];
// Extract claimedValid and custom proof (for validateCertificate)
uint8 claimedValid = uint8(proof[8 + certSize]);
bytes calldata customProof = proof[8 + certSize + 1:];
Security checks performed by OSP
The OneStepProverHostIo contract performs critical security checks before calling your validator:
- Certificate Hash Verification: Verifies that `keccak256(certificate) == certHash` (prevents certificate substitution)
- Claim Verification (for validateCertificate): Verifies the prover's `claimedValid` matches the validator's return value
You don't need to implement these checks—the OSP enforces them. You only need to validate the format of your certificate and the proofs.
Example: ReferenceDA validator contract
File: contracts-local/src/osp/ReferenceDAProofValidator.sol
Constructor and Storage:
mapping(address => bool) public trustedSigners;
constructor(address[] memory _trustedSigners) {
for (uint256 i = 0; i < _trustedSigners.length; i++) {
trustedSigners[_trustedSigners[i]] = true;
}
}
validateCertificate:
function validateCertificate(
bytes calldata proof
) external view returns (bool) {
// 1. Extract certificate
uint64 certSize = uint64(bytes8(proof[0:8]));
bytes calldata certificate = proof[8:8 + certSize];
// 2. Validate certificate structure
if (certificate.length != 99) return false;
if (certificate[0] != 0x01) return false; // DA API header
if (certificate[1] != 0xFF) return false; // ReferenceDA type
// 3. Extract certificate components
bytes32 dataHash = bytes32(certificate[2:34]);
uint8 v = uint8(certificate[34]);
bytes32 r = bytes32(certificate[35:67]);
bytes32 s = bytes32(certificate[67:99]);
// 4. Recover signer using ecrecover
address signer = ecrecover(dataHash, v, r, s);
if (signer == address(0)) return false;
// 5. Check if the signer is trusted
return trustedSigners[signer];
}
validateReadPreimage:
function validateReadPreimage(
bytes32 certHash,
uint256 offset,
bytes calldata proof
) external view returns (bytes memory) {
// 1. Extract certificate (99 bytes for ReferenceDA)
bytes calldata certificate = proof[8:8 + 99];
// 2. Extract custom proof: [version(1), preimageSize(8), preimageData]
uint8 version = uint8(proof[8 + 99]);
uint64 preimageSize = uint64(bytes8(proof[8 + 99 + 1:8 + 99 + 9]));
bytes calldata preimageData = proof[8 + 99 + 9:8 + 99 + 9 + preimageSize];
// 3. Extract the data hash from the certificate
bytes32 dataHash = bytes32(certificate[2:34]);
// 4. Verify that the preimage matches the certificate's hash
if (sha256(preimageData) != dataHash) {
revert("Preimage hash mismatch");
}
// 5. Returns up to 32 bytes at offset
if (offset >= preimageSize) {
return new bytes(0);
}
uint256 remainingBytes = preimageSize - offset;
uint256 chunkSize = remainingBytes > 32 ? 32 : remainingBytes;
bytes memory chunk = new bytes(chunkSize);
for (uint256 i = 0; i < chunkSize; i++) {
chunk[i] = preimageData[offset + i];
}
return chunk;
}
Key Points:
- `validateCertificate` returns `false` for invalid certificates and never reverts
- Uses `ecrecover` for signature verification
- Validates that the SHA256 hash matches
- Extracts 32-byte chunks for preimage reads
8. Understanding proof enhancement
Proof enhancement is the bridge between the WASM execution environment (which has no network access) and onchain fraud-proof validation (which requires DA-specific data).
The challenge
During fraud-proof challenges:
- The prover (WASM binary) runs in a fully deterministic environment without network access
- When it encounters DA API operations, it can't call your DA provider to get proofs
- But the onchain validator needs these proofs to verify the fraud proof
How proof enhancement works
Step 1: WASM signals enhancement needed
When the replay binary encounters a DA API operation (reading preimage or validating certificate), it:
- Sets the `ProofEnhancementFlag` (`0x80`) in the machine status byte (the first byte of the proof)
- Appends marker data to the end of the proof:
  - `0xDA` for preimage read operations
  - `0xDB` for certificate validation operations
- Returns the incomplete proof
Step 2: Proof enhancer detects and routes
The validator's proof enhancement manager (file: validator/proofenhancement/proof_enhancer.go):
- Detects the enhancement flag in the proof
- Reads the marker byte to determine operation type
- Routes to the appropriate enhancer (`ReadPreimage` or `ValidateCertificate`)
Step 3: Certificate retrieved from L1
The enhancer retrieves the certificate from L1, not from the DA provider:
- Finds which batch contains the message: `inboxTracker.FindInboxBatchContainingMessage(messageNum)`
- Gets the sequencer message bytes: `inboxReader.GetSequencerMessageBytes(ctx, batchNum)`
- Extracts the certificate: `certificate = sequencerMessage[40:]` (skip the 40-byte header)
- Validates that the certificate hash matches what the proof expects
Step 4: Validator RPC called
The enhancer calls your Validator interface:
- For preimage reads: `validator.GenerateReadPreimageProof(certHash, offset, certificate)`
- For certificate validation: `validator.GenerateCertificateValidityProof(certificate)`
Step 5: Complete proof built
The enhancer builds the complete proof:
[...originalMachineProof, certSize(8), certificate, customProof]
The marker data has been removed—its only purpose was for enhancement routing.
Step 6: Proof submitted to OSP
The BOLD State Provider submits the enhanced proof to the OneStepProverHostIo contract, which validates it onchain.
Why certificates come from L1
Certificates always come from L1 sequencer inbox messages, never from external DA providers. This process ensures:
- Proofs are always verifiable without network dependencies
- No trust in the DA provider's availability during challenges
- Complete determinism and reproducibility
The sequencer message format is:
[SequencerHeader(40 bytes), DACertificateFlag(0x01), Rest of certificate(...)]
Certificates get included in L1 calldata and are always available.
What your validator must provide
Your Validator interface implementation must return proofs that:
- Match the format your onchain validator contract expects
- Contain all data needed for onchain verification
- Don't require any additional network calls or external data
For ReferenceDA:
- `ReadPreimage` proof: `[version(1), preimageSize(8), preimageData]`
- Validity proof: `[claimedValid(1), version(1)]`
9. Configuration & deployment
This section covers configuring Nitro nodes to connect to your DA provider and deploying your DA provider server.
Nitro node configuration
Nitro nodes connect to DA providers via JSON-RPC. Configure your provider with these flags:
Single Provider:
--node.da.external-provider.enable
--node.da.external-provider.with-writer
--node.da.external-provider.rpc.url=http://your-da-provider:8547
Multiple Providers:
--node.da.external-providers='[
{"rpc":{"url":"http://provider1:8547"},"with-writer":true},
{"rpc":{"url":"http://provider2:8547"},"with-writer":true}
]'
RPC Connection Options:
--node.da.external-provider.rpc.url # RPC endpoint URL
--node.da.external-provider.rpc.timeout # Per-response timeout (0 = disabled)
--node.da.external-provider.rpc.jwtsecret # Path to JWT secret file for auth
--node.da.external-provider.rpc.connection-wait # How long to wait for initial connection
--node.da.external-provider.rpc.retries=3 # Number of retry attempts
--node.da.external-provider.rpc.retry-delay # Delay between retries
Batch Poster Configuration:
--node.batch-poster.max-altda-batch-size=1000000 # Max batch size (1MB default)
--node.batch-poster.disable-dap-fallback-store-data-onchain # Disable L1 fallback
JWT authentication
Secure communication between Nitro nodes and DA providers using JWT:
Generate JWT Secret:
openssl rand -hex 32 > jwt.hex
Configure Nitro Node:
--node.da.external-provider.rpc.jwtsecret=/path/to/jwt.hex
Configure DA Provider Server: Your server implementation should validate the same JWT tokens on incoming requests, as sketched below.
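A minimal server-side sketch in Go, assuming HS256 tokens (as used elsewhere in the Nitro stack) and the `github.com/golang-jwt/jwt/v4` library; the one-minute freshness window is an illustrative policy:

```go
package main

import (
	"encoding/hex"
	"net/http"
	"os"
	"strings"
	"time"

	"github.com/golang-jwt/jwt/v4"
)

func jwtMiddleware(secretHexFile string, next http.Handler) (http.Handler, error) {
	raw, err := os.ReadFile(secretHexFile)
	if err != nil {
		return nil, err
	}
	secret, err := hex.DecodeString(strings.TrimPrefix(strings.TrimSpace(string(raw)), "0x"))
	if err != nil {
		return nil, err
	}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		claims := jwt.MapClaims{}
		if _, err := jwt.ParseWithClaims(token, claims, func(*jwt.Token) (interface{}, error) {
			return secret, nil
		}, jwt.WithValidMethods([]string{"HS256"})); err != nil {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		// Illustrative freshness check on the issued-at claim.
		if iat, ok := claims["iat"].(float64); !ok || time.Since(time.Unix(int64(iat), 0)) > time.Minute {
			http.Error(w, "stale token", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	}), nil
}
```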
Creating a DA provider server
Your DA provider server exposes JSON-RPC methods that Nitro nodes call. You can implement this in any language as long as it speaks JSON-RPC over HTTP.
Required RPC Methods:
Reader Methods:
- `daprovider_getSupportedHeaderBytes` - Returns header byte strings
- `daprovider_recoverPayload` - Recovers the batch payload
- `daprovider_collectPreimages` - Collects preimages for validation
Writer Methods (optional, for batch posting):
- `daprovider_store` - Stores a batch and returns a certificate
Validator Methods:
- `daprovider_generateReadPreimageProof` - Generates a preimage read proof
- `daprovider_generateCertificateValidityProof` - Generates a validity proof
Example: cmd/daprovider Server (Go)
The Nitro repository includes a complete reference implementation at cmd/daprovider/daprovider.go written in Go:
func main() {
// 1. Parse configuration
config, err := parseDAProvider(os.Args[1:])
// 2. Create a DA provider factory based on the mode
providerFactory, err := factory.NewDAProviderFactory(
config.Mode, // "anytrust" or "referenceda"
&config.Anytrust, // AnyTrust config
&config.ReferenceDA, // ReferenceDA config
dataSigner, // Optional data signer
l1Client, // L1 client
l1Reader, // L1 reader
seqInboxAddr, // Sequencer inbox address
enableWriter, // Enable writer interface
)
// 3. Create a reader/writer/validator
reader, _, err := providerFactory.CreateReader(ctx)
writer, _, err := providerFactory.CreateWriter(ctx)
validator, _, err := providerFactory.CreateValidator(ctx)
// 4. Start JSON-RPC server
headerBytes := providerFactory.GetSupportedHeaderBytes()
providerServer, err := dapserver.NewServerWithDAPProvider(
ctx,
&config.ProviderServer,
reader,
writer,
validator,
headerBytes,
data_streaming.PayloadCommitmentVerifier(),
)
// 5. Run until interrupted
<-sigint
providerServer.Shutdown(ctx)
}
Running the ReferenceDA Example:
./bin/daprovider \
--mode=referenceda \
--referenceda.enable \
--referenceda.signing-key.private-key=<your-key> \
--referenceda.validator-address=<validator-contract-addr> \
--parent-chain.node-url=<L1-RPC-URL> \
--provider-server.addr=0.0.0.0 \
--provider-server.port=8547 \
--provider-server.enable-da-writer
Multi-provider registry
Nitro supports multiple DA providers simultaneously using header byte matching:
How It Works:
- Each provider returns its supported header bytes via `getSupportedHeaderBytes()`
- The registry maps header prefixes to Reader/Validator pairs
- When processing a message, Nitro checks the header bytes and routes to the correct provider
- First-match semantics (prevents shadowing)
Example:
- AnyTrust: `0x80`
- ReferenceDA: `0x01, 0xFF`
- Your provider: `0x01, 0xAA`
All three can coexist on the same chain.
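A sketch of the routing idea (simplified; the real registry lives inside Nitro, and all names here are illustrative):

```go
package daregistry

import (
	"bytes"
	"errors"
)

// Hypothetical provider handle; in Nitro this would be a Reader/Validator pair.
type ProviderClient interface{}

type entry struct {
	prefix []byte
	client ProviderClient
}

// Registry routes sequencer messages to providers by header-byte prefix.
type Registry struct {
	entries []entry // checked in registration order; first match wins
}

func (r *Registry) Register(prefix []byte, c ProviderClient) {
	r.entries = append(r.entries, entry{prefix: prefix, client: c})
}

// Lookup inspects the payload that follows the 40-byte sequencer header.
func (r *Registry) Lookup(payload []byte) (ProviderClient, error) {
	for _, e := range r.entries {
		if bytes.HasPrefix(payload, e.prefix) {
			return e.client, nil
		}
	}
	return nil, errors.New("no DA provider registered for these header bytes")
}
```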
10. Testing your integration
Thorough testing is critical for DA provider integrations, as bugs can lead to data loss or fraud-proof failures.
Unit testing
Test each component in isolation:
Reader Tests:
- Certificate extraction from sequencer messages
- Certificate deserialization (valid and invalid formats)
- Certificate validation (valid/invalid signatures, trusted/untrusted signers)
- Data retrieval from storage
- Hash verification (data matches certificate commitment)
- Preimage recording (correct mapping from keccak256(cert) to payload)
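As an example, a reader test for the hash-verification path might look like this (a sketch; `newTestServer`, `storeTestBatch`, and `Corrupt` are hypothetical test helpers around the handler sketched in Section 3):

```go
package readerdemo

import (
	"strings"
	"testing"

	"github.com/ethereum/go-ethereum/common"
)

// The reader must surface tampered data as a certificate validation failure
// rather than returning the corrupted payload.
func TestRecoverPayloadRejectsTamperedData(t *testing.T) {
	srv := newTestServer(t)
	msg, cert := srv.storeTestBatch(t, []byte("batch data"))
	srv.storage.Corrupt(cert) // flip bytes behind the certificate's commitment
	_, err := srv.RecoverPayload(0, common.Hash{}, msg)
	if err == nil || !strings.Contains(err.Error(), "certificate validation failed") {
		t.Fatalf("expected certificate validation failure, got %v", err)
	}
}
```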
Writer Tests:
- Certificate generation
- Data storage
- Certificate serialization
- Signature creation
- Error handling and fallback mechanism
Validator Tests:
- `GenerateReadPreimageProof` with various offsets
- `GenerateCertificateValidityProof` for valid certificates (returns claimedValid=1)
- `GenerateCertificateValidityProof` for invalid certificates (returns claimedValid=0, NOT an error)
- Proof format correctness
Onchain Contract Tests:
- `validateCertificate` with valid certificates (returns `true`)
- `validateCertificate` with invalid certificates (returns `false`, doesn't revert!)
- `validateReadPreimage` with correct proofs
- Chunk extraction at various offsets
Integration testing with Nitro
Test your DA provider connected to a Nitro node:
Setup:
- Deploy your validator contract to L1
- Deploy Nitro contracts (SequencerInbox, OneStepProverHostIo with your validator address)
- Start your DA provider server
- Configure the Nitro node to use your provider
Test Scenarios:
- Post batches via Batch Poster (Writer interface)
- Recover batches via Reader interface
- Validate batches in the validation node
- Generate fraud proofs with proof enhancement
- Submit fraud proofs to L1 (OSP validation)
System tests
The Nitro repository includes system tests for the DA API with BoLD challenges:
BoLD Challenge Protocol Tests (file: system_tests/bold_challenge_protocol_test.go):
- `TestChallengeProtocolBOLDCustomDA_EvilDataGoodCert` - Corrupted data, valid certificate
- `TestChallengeProtocolBOLDCustomDA_EvilDataEvilCert` - Corrupted data, invalid certificate
- `TestChallengeProtocolBOLDCustomDA_UntrustedSignerCert` - Certificate signed by an untrusted signer
- `TestChallengeProtocolBOLDCustomDA_ValidCertClaimedInvalid` - Valid certificate incorrectly claimed invalid
Block Validator Tests:
- `TestBlockValidatorReferenceDAWithProver`: Proof enhancement with the prover
- `TestBlockValidatorReferenceDAWithJIT`: Proof enhancement with JIT
Study these tests to understand expected behavior and edge cases.
Testing invalid certificate handling
Critical Test: Verify your system handles invalid certificates correctly:
Reader Behavior:
- Invalid certificate -> error returned
- Nitro treats the batch as empty (zero transactions)
- Chain continues processing
Validator Behavior:
- Invalid certificate -> returns `claimedValid=0`, not an error
- Proof enhancement completes successfully
- Onchain validation returns `false`
Onchain Contract Behavior:
- Invalid certificate -> `validateCertificate` returns `false`
- Must NOT revert (critical requirement!)
- The fraud proof succeeds, proving the certificate is invalid
Test Case:
// Generate invalid certificate (bad signature, untrusted signer, etc.)
invalidCert := generateInvalidCertificate()
// Validator should return claimedValid=0, not error
result, err := validator.GenerateCertificateValidityProof(invalidCert)
assert.NoError(t, err) // No error!
assert.Equal(t, 0, result.ClaimedValid) // Claims invalid
// onchain validation should return false
isValid, err := contract.ValidateCertificate(proof)
assert.NoError(t, err) // No revert!
assert.False(t, isValid) // Returns false
11. Security considerations
DA API integrations have unique security requirements. Follow these guidelines to build a secure system.
Certificate hash as only preimage key
The only way to retrieve batch data during fraud proofs is via keccak256(certificate). This hash serves as the preimage key.
Implications:
- Deterministic mapping: Same certificate always maps to the same data
- No certificate substitution: Can't swap a valid cert for another valid cert
- Hash collision resistance: Must use keccak256 (256-bit security)
Your reader must:
- Record preimages using `keccak256(certificate)` as the key
- Use `arbutil.DACertificatePreimageType` as the preimage type
- Store the full batch payload as the preimage value
Determinism requirements
All components must be fully deterministic:
Reader:
- Same certificate -> always returns same data
- Validation rules must be deterministic (no timestamps, no randomness)
Writer:
- Can be non-deterministic (different nodes may generate different certificates for the same data)
- But the certificate must deterministically identify the data
Validator:
- Same inputs -> always returns the same proofs
- No network randomness, no timestamps in proofs
- Proofs must be verifiable onchain with only the proof data
Onchain Contract:
- Pure deterministic validation
- No external calls (except to immutable addresses)
- No block timestamps, no randomness
Invalid certificate handling
Reader Behavior (during normal execution):
- Invalid certificate -> return error
- Nitro treats the batch as empty (zero transactions)
- Chain continues without halting
Validator Behavior (during fraud proofs):
- Invalid certificate -> return `claimedValid=0`, NOT an error
- The fraud-proof system needs to prove "this certificate is invalid"
- Errors should only be for transient failures (RPC issues)
Onchain Contract Behavior:
- Invalid certificate -> `validateCertificate` returns `false`
- Must not revert (that would make proving invalidity impossible)
- Only revert for truly unexpected conditions
No trusted external oracles
Fraud proofs must be verifiable onchain without trusting external oracles:
What is allowed:
- Querying L1 state (trusted signers, contract storage)
- Using L1 precompiles (ecrecover, sha256, etc.)
- Reading immutable contract addresses
Why? Fraud proofs must be verifiable onchain without depending on external services that could be unavailable or malicious.
Certificate hash verification
The OneStepProverHostIo contract verifies that the keccak256(certificate) matches the hash in the machine proof. This verification prevents:
Certificate Substitution Attack:
- Attacker posts certificate A to L1
- During fraud proof, tries to use certificate B (different data)
- OSP rejects: keccak256(B) ≠ keccak256(A)
You don't need to implement this check—the OSP enforces it. But understand it's critical to security.
Claim verification
For validateCertificate, the OSP verifies the prover's claimedValid byte matches the validator contract's return value. This verification prevents:
False Validity Claims:
- Prover claims an invalid certificate is valid -> OSP rejects
- Prover claims a valid certificate is invalid -> OSP rejects
This preventive measure ensures both honest and malicious provers are held accountable.
Immutable validator address
The OneStepProverHostIo is deployed with an immutable customDAValidator address. Once deployed:
- The validator address is unchangeable
- Prevents governance attacks or validator swapping
- Ensures consistent validation rules
Implication: Choose your validator contract carefully at deployment. If you need to update logic, you'll need to deploy a new OSP and update BoLD contracts.
Data availability guarantees
The DA API does not enforce data availability—that's your responsibility:
Your DA system must ensure:
- Data is actually available when certificates are issued
- Data remains available for at least the requested period; retaining data indefinitely is recommended so that new nodes can sync from genesis
Nitro only ensures:
- Invalid certificates are challengeable
- Valid certificates are verifiable (provable)
- No invalid data is executed
Data availability itself is your DA system's responsibility.
Appendix A: Streaming protocol
Context and protocol overview
This section outlines the necessity, role, and activation of the Data Availability (DA) streaming subprotocol within Arbitrum Nitro.
Rationale and function
Experience with AnyTrust deployments demonstrated that the Batch Poster's "one-shot" transmission of large data batches to DA Committee Members can be susceptible to network instability, leading to submission failures or critical latency.
To address this, we introduced the Data Streaming Protocol:
- It operates as a sub-layer between the Nitro node's Batch Poster and the DA server.
- It segments large batches into a sequence of smaller, short messages, which get streamed sequentially. This strategy significantly improves the resilience and reliability of data submission—despite increasing the total message count.
Opt-in activation
This protocol is opt-in. Integrators can activate it to ensure robust data submission when handling large batches in environments with variable network quality.
To enable data streaming on your Nitro node, use the following command-line flag:
--node.da-provider.use-data-streaming
Server-side protocol implementation (integrators)
When configuring the Nitro Node with the streaming flag (--node.da-provider.use-data-streaming), it utilizes an internal sender implementation that relies on a server-side JSON-RPC API exposed by the DA provider. Integrators must implement the following three endpoints to enable the streaming protocol.
| Method | Purpose |
|---|---|
| `daprovider_startChunkedStore` | Initiates the stream and allocates a batch identifier. |
| `daprovider_sendChunk` | Transmits a single data segment (chunk). |
| `daprovider_commitChunkedStore` | Concludes the stream and requests the final DA Certificate. |
Integrators can customize these names using the following CLI flags on the Nitro node:
- `--node.da-provider.data-stream.rpc-methods.start-stream`
- `--node.da-provider.data-stream.rpc-methods.stream-chunk`
- `--node.da-provider.data-stream.rpc-methods.finalize-stream`
All uint64 integer parameters (e.g., timestamp, nChunks, BatchId) must be encoded as JSON strings prefixed with 0x (hexadecimal encoding).
Start stream: (daprovider_startChunkedStore)
Arguments:
- timestamp (uint64)
- nChunks (uint64)
- chunkSize (uint64)
- totalSize (uint64)
- timeout (uint64)
- signature (bytes)
Return:
A JSON object with a `BatchId` field (uint64). This field is a unique identifier generated by the server for this specific stream instance. This `BatchId` must be used in all subsequent `StreamChunk` and `FinalizeStream` calls within this stream.
Stream chunk (daprovider_sendChunk)
Arguments:
- batchId (uint64) - The identifier generated by the start-stream.
- chunkId (uint64) - The zero-indexed position of the data segment within the full batch.
- chunk (bytes)
- signature (bytes)
Return:
A successful operation returns an HTTP 200 status code.
Finalize stream (daprovider_commitChunkedStore)
Arguments:
- `batchId` (uint64) - The identifier generated by the start-stream call
- `signature` (bytes)
Return:
A JSON object with a `SerializedDACert` field (byte array). This certificate attests that the batch has been successfully stored and made available by the DA layer.
Security and configuration notes
Signature handling
The current default client implementation relies on transport-level security (e.g., TLS) and authentication (e.g., JWT). The signature parameter is used for basic data integrity checking, not cryptographic authentication.
- Client Behavior: The client populates `signature` with the Keccak256 hash of all other method arguments.
- Server Implementation: Server integrators may recompute and verify this hash to check data integrity, but this verification is optional and not required for core protocol functionality.
Operational constraint: Chunk size limit
Integrators can limit the maximum allowed size for individual chunk transmissions to manage server load.
- Limit Control: Use the following flag on the Nitro node: `--node.da-provider.data-stream.max-store-chunk-body-size`
- Action: The client ensures that all `StreamChunk` requests, including overhead, do not exceed the size specified by this flag.
Go implementation helper
For integrators implementing server-side logic in Go, the recommended approach is to reuse the existing DataStreamReceiver component in the Nitro repository. This object handles all internal protocol-state management and logic for the receiving side.
Recommended module and type
- Module: `github.com/offchainlabs/nitro/daprovider/data_streaming`
- Core Type: `DataStreamReceiver`
Example implementation snippets
Implementing the required JSON-RPC endpoints can be reduced to simple wrappers around the DataStreamReceiver methods, as demonstrated below.
We use the `hexutil` types to correctly handle the required 0x-prefixed integer encoding described in Section 3, Implementing reader RPC methods.
func (s *Server) StartChunkedStore(ctx context.Context, timestamp, nChunks, chunkSize, totalSize, timeout hexutil.Uint64, sig hexutil.Bytes) (*data_streaming.StartStreamingResult, error) {
return s.dataReceiver.StartReceiving(ctx, uint64(timestamp), uint64(nChunks), uint64(chunkSize), uint64(totalSize), uint64(timeout), sig)
}
func (s *Server) SendChunk(ctx context.Context, messageId, chunkId hexutil.Uint64, chunk hexutil.Bytes, sig hexutil.Bytes) error {
return s.dataReceiver.ReceiveChunk(ctx, data_streaming.MessageId(messageId), uint64(chunkId), chunk, sig)
}
func (s *Server) CommitChunkedStore(ctx context.Context, messageId hexutil.Uint64, sig hexutil.Bytes) (*server_api.StoreResult, error) {
message, timeout, _, err := s.dataReceiver.FinalizeReceiving(ctx, data_streaming.MessageId(messageId), sig)
if err != nil {
return nil, err
}
// do the actual full data store and generate DA certificate
return s.Store(ctx, message, hexutil.Uint64(timeout))
}