Create payload->blob type system #1120
It's unclear how coeffPolynomial and blobLength are related. Does len(coeffPolynomial) == blobLength? Guessing we allow trailing zeros to be truncated. We should explain that.
also can't we just enforce that blobLength = nextPowerOf2(len(coeffPolynomial)) and then we don't need blobLength here?
There is a corner case here: if we set blobLength = nextPowerOf2(len(coeffPolynomial)), then the user fetching and reconstructing this blob would determine that the blob length is 1 symbol, when it's actually 2.
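A minimal sketch of that corner case, assuming trailing zero coefficients get dropped when the blob is stored (the nextPowerOf2 helper and the coefficient representation here are just for illustration, not the PR's code):

```go
package main

import "fmt"

// nextPowerOf2 returns the smallest power of two >= n (treating n <= 1 as 1).
func nextPowerOf2(n int) int {
	p := 1
	for p < n {
		p *= 2
	}
	return p
}

func main() {
	// A blob of 2 symbols whose second coefficient happens to be zero.
	coeffPolynomial := []uint64{7, 0}
	trueBlobLength := 2

	// If trailing zero coefficients are truncated, only 1 symbol survives.
	truncated := coeffPolynomial[:1]

	// Deriving blobLength from the truncated polynomial yields 1, not 2.
	fmt.Println(trueBlobLength, nextPowerOf2(len(truncated))) // prints: 2 1
}
```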
I will add a description of this corner case as a comment, if you confirm my logic here
e22238b1
Ah. Wouldn't it be easier to just force the relay to keep those 0 bytes around? I'm always in favor of simplifying logic, especially when the "corner cases" are things that will tbh never happen. Let me finish reviewing the PR to get the full picture of this implementation though.
I don't think there is any good way that we can force relays to keep these bytes around: it's a consequence of how our commitment scheme works. Let's assume we were to implement a policy that "forces" relays NOT to truncate trailing 0 bytes. Here is a story which shows the problem with this policy:
Aside from this resource-consuming attack vector, a policy which attempts to "force" relays not to truncate zeros also complicates retrieval logic. As currently implemented, if the bytes sent by a relay are successfully verified against the commitments, there is never a need to try fetching from a different relay: either the received bytes are decodable, or they are not. If we add a way for a relay to respond "maliciously" without failing commitment verification, then a failed payload decode might require another fetch attempt, contingent on the mode of failure.
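Roughly, the retrieval logic described here looks like the following sketch. The Relay interface, verifyCommitment, and decodePayload are hypothetical stand-ins for illustration, not the actual eigenDA API:

```go
package retrieval

import "errors"

// Hypothetical stand-in types, for illustration only.
type (
	BlobKey    [32]byte
	Commitment []byte
	Payload    []byte
)

type Relay interface {
	GetBlob(key BlobKey) ([]byte, error)
}

// Stubs standing in for KZG commitment verification and payload decoding.
func verifyCommitment(blob []byte, c Commitment) bool { return len(blob) > 0 }
func decodePayload(blob []byte) (Payload, error)      { return Payload(blob), nil }

// fetchPayload: once a relay's bytes verify against the commitment, there is
// no reason to fall back to another relay, because decoding those bytes
// either succeeds or fails deterministically.
func fetchPayload(relays []Relay, key BlobKey, commitment Commitment) (Payload, error) {
	for _, relay := range relays {
		blobBytes, err := relay.GetBlob(key)
		if err != nil || !verifyCommitment(blobBytes, commitment) {
			continue // transport failure or unverifiable bytes: try the next relay
		}
		return decodePayload(blobBytes)
	}
	return nil, errors.New("no relay returned bytes matching the commitment")
}
```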
I have a preference for Bytes() without the get prefix.
Or toBytes(), to be consistent with ToPayload?
@samlaf How do you feel about using the serialize/deserialize nomenclature? #1120 (comment)
Oh yes that’s even better!
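For concreteness, the shape being proposed is something like the sketch below; the Payload type here is just a stand-in, not the PR's actual struct:

```go
package codec

// Payload is a stand-in for the PR's payload type.
type Payload struct {
	bytes []byte
}

// Serialize replaces names like GetBytes()/ToBytes().
func (p *Payload) Serialize() []byte {
	return p.bytes
}

// DeserializePayload is the symmetric counterpart to Serialize.
func DeserializePayload(data []byte) *Payload {
	return &Payload{bytes: data}
}
```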
f1a4e85f
Just realized now that golang has standard interfaces for serialization: MarshalJSON() and MarshalBinary(). Might be worth adding a comment as to why we define our own function that isn't MarshalBinary (or perhaps we should just implement MarshalBinary() instead, so that people could encode our structs to gob if they need to for whatever reason).
Also, the fact that we need that extra length param to unmarshal is another consequence of this weird "edge case that'll never happen". If we changed that requirement, this code would also be much simpler and more standard Go. Again, we can discuss (perhaps with your brother)... I might be missing something.
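For reference, adopting the standard interfaces would mean something like the sketch below (the Blob fields are hypothetical). Note that UnmarshalBinary's fixed signature leaves no room for the extra length parameter mentioned above, so the length would have to be derived from, or carried inside, the data itself:

```go
package codec

import "fmt"

// Hypothetical Blob shape, for illustration only.
type Blob struct {
	coeffPolynomial []byte
	blobLength      int
}

// MarshalBinary satisfies encoding.BinaryMarshaler.
func (b *Blob) MarshalBinary() ([]byte, error) {
	return b.coeffPolynomial, nil
}

// UnmarshalBinary satisfies encoding.BinaryUnmarshaler. Its signature has no
// slot for a separate blobLength argument, so here the length is (naively)
// taken from the data itself.
func (b *Blob) UnmarshalBinary(data []byte) error {
	if len(data) == 0 {
		return fmt.Errorf("empty blob")
	}
	b.coeffPolynomial = data
	b.blobLength = len(data)
	return nil
}
```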
I don't think it's worth implementing an interface just because it exists, without a concrete use case. I suppose it would be fine to add a comment saying why we don't implement the interface, but it doesn't seem necessary: throughout our whole codebase, there are no implementations of BinaryMarshaler. I think the implicit explanation for why it's not implemented is that we don't need it. @cody-littley any strong opinions here?
I don't feel strongly one way or another about the golang interfaces, but I'm generally in favor of avoiding code complexity that doesn't have a justification on the short-to-medium term roadmap.
Perhaps also relevant: the string MarshalBinary doesn't appear anywhere in the eigenDA repo right now, so this isn't a pattern we are currently using.
Can you add a comment as to why we add the first 0x00 byte (so that it gets parsed as a bn254 FE)? Took me a second to remember this.
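For readers hitting the same question, this is the idea (a simplified sketch, not the repo's actual implementation): a 32-byte bn254 field element must encode an integer smaller than the field modulus, so each 31-byte chunk of raw data is prefixed with a 0x00 byte to guarantee that.

```go
package codec

// padForBN254 prefixes each 31-byte chunk of data with a 0x00 byte so that
// every resulting 32-byte word, read as a big-endian integer, is below the
// bn254 field modulus and therefore parses as a valid field element.
func padForBN254(data []byte) []byte {
	const chunk = 31
	out := make([]byte, 0, ((len(data)+chunk-1)/chunk)*(chunk+1))
	for start := 0; start < len(data); start += chunk {
		end := start + chunk
		if end > len(data) {
			end = len(data)
		}
		out = append(out, 0x00)               // leading zero byte
		out = append(out, data[start:end]...) // up to 31 raw data bytes
	}
	return out
}
```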
78010448
this looks a bit weird now. Why is payload version 0 mapping to a defaultBlobCodec? Should we use BlobCodecVersion0 instead? Or do we want separate versioning for the payloadEncoding and BlobCodec? Or are these things the same?
I renamed the constant to what I felt made sense for the new code, but the name doesn't work so well in the old blob_codec code. If I don't rename the constant, then the name doesn't work so well in the new code.
Yes, they are essentially the same thing.
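If they really are the same thing, one way to express that is a single version value that both names alias. This is purely a hypothetical sketch of the idea, not what the PR does:

```go
package codec

// PayloadEncodingVersion identifies how a payload is encoded into a blob.
type PayloadEncodingVersion uint8

const (
	// PayloadEncodingVersion0 is the encoding used by the existing code.
	PayloadEncodingVersion0 PayloadEncodingVersion = 0

	// BlobCodecVersion0 aliases the same value for the legacy blob_codec code,
	// making explicit that "payload encoding version" and "blob codec version"
	// are one concept with two names.
	BlobCodecVersion0 = PayloadEncodingVersion0
)
```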
I see! I think I'm missing the Venn diagram of which code is used for v1, which is used for v2, and which is used for both. Is it not possible to completely separate the v1 and v2 stuff?
Doesn't this function EncodedBlob need to be changed? It returns []byte; shouldn't it return a Blob in your type system? Or whatever else this is (probably just an encodedPayload?)
default_blob_codec will not be used in v2; all encoding is contained in the new structs.
I see... shouldn't we separate the v1 and v2 codecs then? Why are they all in the same package? Or am I missing something?