
DA throttling #48

Open
dmarzzz opened this issue Jan 17, 2025 · 4 comments · May be fixed by #69

dmarzzz (Member) commented Jan 17, 2025

https://discord.com/channels/1244729134312198194/1294422364473528331/1327389163045257249

Hey guys, wondering if any thought has been put into how to handle DA throttling within the block builder after the 1.9.5 batcher release. The op-batcher in this release sends a miner_setMaxDASize call to the sequencing execution client to impose a throttle on tx size / max block size until the pending queue bytes in the batcher fall back below a configurable threshold. With an external block builder leveraging rollup-boost, the block builder will not be aware of the DA throttle imposed on the payloads and will continue to build payloads at the standard gas limit accepted by the network. This opens the chain up to a spam attack.
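
For context, the throttle update itself is a single JSON-RPC call. Below is a minimal sketch of the request body, assuming the op-geth shape of two size limits (max tx size and max block size) returning a bool; the exact parameter encoding is an assumption here:

```rust
use serde_json::{json, Value};

/// Sketch of the JSON-RPC body the batcher would send to impose DA limits.
/// Assumes two hex-encoded size arguments (max tx size, max block size);
/// parameter order and encoding are assumptions, not confirmed from op-geth source.
fn set_max_da_size_request(max_tx_size: u64, max_block_size: u64) -> Value {
    json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "miner_setMaxDASize",
        "params": [
            format!("{max_tx_size:#x}"),
            format!("{max_block_size:#x}"),
        ],
    })
}
```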

I think there are two options here; wondering if anyone has a recommended solution:
  • Upstream changes to the batcher so that it multiplexes the miner_setMaxDASize call to the sequencer and an optional list of builder clients.
  • Place a proxy service in front of the batcher to multiplex the miner_setMaxDASize call to the sequencer and the builder clients, e.g. proxyd or a service similar to rollup-boost.

It seems like it might be advantageous to have this built directly into the batcher for chains running the rollup-boost stack, so as to minimize complexity for the chain operator. It would be a fairly minor addition to the batcher.
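
To illustrate the first option, here is a rough fan-out sketch (written in Rust purely for illustration, op-batcher itself is Go; the endpoint list is a hypothetical config field, not an existing batcher flag):

```rust
use serde_json::Value;

/// Hypothetical fan-out of a throttle update: the sequencer EL plus any
/// configured builder endpoints all receive the same miner_setMaxDASize call,
/// so every payload producer observes the same DA limit.
async fn broadcast_da_limit(
    client: &reqwest::Client,
    endpoints: &[String], // sequencer EL first, then optional builder ELs
    request: &Value,      // e.g. the request body sketched above
) -> Vec<Result<reqwest::StatusCode, reqwest::Error>> {
    let mut results = Vec::with_capacity(endpoints.len());
    for url in endpoints {
        // Send the same JSON-RPC body to each endpoint and record the outcome.
        results.push(
            client
                .post(url.as_str())
                .json(request)
                .send()
                .await
                .map(|resp| resp.status()),
        );
    }
    results
}
```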

linear bot added the rollup-boost label Jan 20, 2025
ferranbt (Collaborator) commented

Rollup-boost includes a proxy service that relays certain RPC calls to both backends (e.g. eth_sendRawTransaction).

We could also whitelist miner_setMaxDASize and other RPC methods required by op-batcher, and have op-batcher connect to Rollup-boost directly instead of the sequencer EL node.

Note: we have to check exactly which RPC endpoints op-batcher calls on the EL node. Can we proxy all the methods? Do some methods require return parameters? Do some calls require that only the EL node receives them?
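
A sketch of what that whitelist could look like as a per-method routing decision inside the proxy; these are not rollup-boost's actual types or handlers, just an illustration:

```rust
/// Illustrative per-method routing policy for the RPC proxy
/// (not rollup-boost's actual implementation).
#[derive(Debug, PartialEq)]
enum Route {
    /// Forwarded only to the sequencer's default EL node.
    DefaultOnly,
    /// Mirrored to both the default EL and the builder; the EL's response is returned.
    Both,
}

fn route_for(method: &str) -> Route {
    match method {
        // Already relayed to both backends today, per the comment above.
        "eth_sendRawTransaction" => Route::Both,
        // Candidate addition so the builder also learns about DA throttling.
        "miner_setMaxDASize" => Route::Both,
        // Everything else keeps going only to the default EL node.
        _ => Route::DefaultOnly,
    }
}
```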

dmarzzz assigned dmarzzz and unassigned dmarzzz Jan 22, 2025
0xKitsune commented Jan 22, 2025

Note: we have to check exactly which RPC endpoints op-batcher calls on the EL node. Can we proxy all the methods? Do some methods require return parameters? Do some calls require that only the EL node receives them?

The batcher calls the EL within its main loop to fetch blocks by number in order to get blocks from the unsafe head. Since this call returns a block, should it be forwarded to the sequencer's default execution client?

Additionally, the batcher calls the miner_setMaxDASize endpoint on the EL during the throttling loop, which returns a bool or an error in the case of failure.

It would seem useful to proxy all miner_ requests through rollup-boost, enabling all builders to be aware of changes in effective gas limit, extra data, etc.
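
One related design question is what the batcher should see back when a mirrored call partially fails. A sketch of one option, assuming the default EL's answer is treated as authoritative and a builder failure is only logged:

```rust
/// Sketch of reconciling mirrored responses for miner_setMaxDASize (which
/// returns a bool): the sequencer EL's result is what the batcher receives,
/// while a builder failure is only logged so a builder outage never blocks
/// throttling of the canonical EL. A design-choice sketch, not rollup-boost's
/// actual behaviour.
fn reconcile_set_max_da_size(
    el_result: Result<bool, String>,
    builder_result: Result<bool, String>,
) -> Result<bool, String> {
    if let Err(err) = &builder_result {
        eprintln!("builder did not apply miner_setMaxDASize: {err}");
    }
    el_result
}
```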

Is there anyone currently working on this? If not, I am happy to pick this up.

ferranbt (Collaborator) commented

I assigned the issue to you @0xKitsune

For reference, when rollup-boost receives a JSON-RPC call it either:

  • Sends the call to the sequencer EL node.
  • Implements a callback to decide how to relay the call.

So, by default, rollup-boost would already send block_by_number requests to the EL node.

0xKitsune commented Jan 24, 2025

Sounds good, I'll get started on this later today and let you know if I run into any issues.
