feat/Process object notification from chain #3085
base: master
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##           master    #3085      +/-   ##
==========================================
+ Coverage   22.93%   23.06%   +0.12%
==========================================
  Files         750      755       +5
  Lines       58683    59495     +812
==========================================
+ Hits        13459    13721     +262
- Misses      44282    44784     +502
- Partials      942      990      +48

☔ View full report in Codecov by Sentry.
Force-pushed 0207dcb to 50a8e8a (compare)
Force-pushed 1510922 to ca811c2 (compare)
pkg/services/meta/notifications.go (Outdated)

		continue
	}

	go func() {
Unordered. Potentially dangerous resource-wise.
> Unordered

can this be bad? eventually, we plan to receive every notification in a block at once. right now the notification order and the new block handling are not ordered either, and will not be.

> Potentially dangerous resource-wise.

we have been discussing notification handling from time to time for so long. we always have to read notifications, as the neo-go client requires (IMO, that is the client's bad restriction and a mistake), and we cannot make any other calls while we are blocked on notification handling. considering everything i know and all the discussions i have had, i can only say that using all the resources we have (routines) is the only way to prevent a deadlock; otherwise, there is always some number of notifications arriving via the channel that exceeds any buffer size, and that leads to the lock.
- We care about ordering here. That's exactly the feature we need from this chain: ordered events.
- The client is limited by a single pipe transmitting all data; it's like a HOL blocking issue, if you like. But we can build a reasonable enough pipeline with buffers that will work in most cases and just reset the connection if it overflows. Then this boils down to synchronizing with the chain from a cold start, which is an issue we have to solve anyway.
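For illustration only, the buffered pipeline sketched above could look roughly like this in Go; the type names, the buffer size and the reset hook are assumptions, not the PR's actual code:

```go
// Sketch of an ordered, bounded notification pipeline with reset-on-overflow.
package main

import "fmt"

type notification struct {
	block uint32
}

// runPipeline keeps a single reader draining `in` in order and hands the
// notifications to a bounded buffer. If the buffer is full it calls reset()
// instead of blocking, so the reader never deadlocks on a slow consumer.
func runPipeline(in <-chan notification, reset func()) <-chan notification {
	out := make(chan notification, 1024) // assumed buffer size
	go func() {
		defer close(out)
		for n := range in {
			select {
			case out <- n: // order preserved: one reader, one writer
			default:
				// Overflow: drop the connection and resync with the chain
				// later rather than blocking the notification reader.
				reset()
				return
			}
		}
	}()
	return out
}

func main() {
	in := make(chan notification, 8)
	out := runPipeline(in, func() { fmt.Println("overflow: resetting connection") })
	in <- notification{block: 1}
	close(in)
	for n := range out {
		fmt.Println("handled block", n.block)
	}
}
```

The single producer/consumer pair keeps ordering; overflow triggers a resync instead of blocking, which is the trade-off discussed above.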
pkg/services/meta/notifications.go (Outdated)

	if !ok {
		return res, fmt.Errorf("unexpected object ID stack item: %T", arr[1].Value())
	}
	meta, ok := arr[2].Value().([]stackitem.MapElement)
Why not handle it as a proper map (yeah, it only works via the Index() API now, but still)? You don't care about element order. I think depending on order is too fragile here.
> Why not handle it as a proper map (yeah, it only works via the Index() API now, but still)?

if you are talking about calling Index and then picking that item from the slice, i do not see any difference. i would like to have a Get/Pick as the opposite of the existing Add and know nothing about the slice, but there is no such method; i do not know why.

> I think depending on order is too fragile here.

i do not agree. while reading this code, yes, it may seem strange. but we are reading the code of a node that has received the notification, which means every node signed the correct order and number of meta fields. so either the order is correct and the node can work with it, or nothing happens at all.

anyway, i really would like to have a Get on the map type, and i agree to use it orderless. but i don't.
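For reference, the Index-then-pick approach mentioned above could look roughly like this; the "cid" key and the helper name are made up for illustration, not taken from the PR:

```go
// Sketch: pick a map element by key via Index() instead of relying on
// element order.
package main

import (
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
)

func getMapValue(m *stackitem.Map, key string) (stackitem.Item, error) {
	i := m.Index(stackitem.Make(key))
	if i < 0 {
		return nil, fmt.Errorf("missing %q key", key)
	}
	// Value() of a Map is a []stackitem.MapElement slice, so the returned
	// index can be used to pick the element directly.
	return m.Value().([]stackitem.MapElement)[i].Value, nil
}

func main() {
	m := stackitem.NewMap()
	m.Add(stackitem.Make("cid"), stackitem.Make([]byte{1, 2, 3}))

	v, err := getMapValue(m, "cid")
	if err != nil {
		panic(err)
	}
	b, _ := v.TryBytes()
	fmt.Println(b)
}
```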
We have FromStackItem in the RPC bindings; probably it'll help us here.
which exact struct from the bindings do you suggest using?
Force-pushed ca811c2 to 1e3b86d (compare)
There are still some TODOs that require additional attention, but in general it works. Fixes may be done later, or maybe now. Some more tests will be added; currently they are broken because of the new updates.
s/MTP/MPT/
pkg/services/meta/notifications.go (Outdated)

}

// TODO: receive full object header and store its non-MTP parts in
// persistent object storage
In fact you need to save the original KVs (the MPTfied ones) as well, since MPT is slow for reads (even though it can return the same data).
you mean every KV that i put into the MPT should also be placed into the raw storage?
Yes.
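A minimal sketch of such a dual write, with hypothetical interfaces standing in for the service's real storage types:

```go
// Hypothetical sketch: every KV that goes into the MPT is duplicated into a
// flat key-value store so that reads do not have to traverse the trie.
package main

import "fmt"

type trie interface {
	Put(key, value []byte) error
}

type flatStore interface {
	Put(key, value []byte) error
}

func putKV(t trie, raw flatStore, key, value []byte) error {
	if err := t.Put(key, value); err != nil {
		return fmt.Errorf("MPT put: %w", err)
	}
	// Same pair goes into the raw storage: the MPT keeps the verifiable
	// state, the flat store serves fast reads.
	if err := raw.Put(key, value); err != nil {
		return fmt.Errorf("raw put: %w", err)
	}
	return nil
}

// memStore is a toy in-memory implementation of both interfaces above.
type memStore map[string][]byte

func (s memStore) Put(k, v []byte) error { s[string(k)] = v; return nil }

func main() {
	mpt, raw := memStore{}, memStore{}
	if err := putKV(mpt, raw, []byte("key"), []byte("value")); err != nil {
		panic(err)
	}
	fmt.Println(len(mpt), len(raw)) // both stores hold the pair
}
```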
pkg/services/meta/notifications.go (Outdated)

}

func (m *Meta) handleObjectNotification(e objEvent) error {
	// TODO check network magic?
Can be done.
will do. i cannot say whether it is possible to misuse networks, but if we have this field from our chain, i guess it should be checked.
It's mostly for replay protection, but since we have it, it can be checked.
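A hedged sketch of that check; the objEvent field and the Meta.magicNumber field below are assumptions, not the PR's actual definitions:

```go
// Hypothetical sketch of a network-magic check in the object notification
// handler; the stand-in types mirror the PR's names only loosely.
package main

import "fmt"

type objEvent struct{ network uint32 }

type Meta struct{ magicNumber uint32 }

func (m *Meta) checkNetworkMagic(e objEvent) error {
	if e.network != m.magicNumber {
		// Mostly replay protection: refuse events produced for another network.
		return fmt.Errorf("unexpected network magic: %d, expected %d", e.network, m.magicNumber)
	}
	return nil
}

func main() {
	m := &Meta{magicNumber: 12345} // hypothetical magic value
	fmt.Println(m.checkNetworkMagic(objEvent{network: 42}))
}
```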
cmd/neofs-node/config.go (Outdated)

@@ -698,6 +715,13 @@ func initBasics(c *cfg, key *keys.PrivateKey, stateStorage *state.PersistentStor
	fatalOnErr(err)
}

c.shared.wsCli, err = rpcclient.NewWS(c.ctx, addresses[0], rpcclient.WSOptions{
No tries for other addresses?
whoops, rebase problems. this field is not used and it is essentially no-op code; multiple endpoints are handled inside the meta service now.
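For completeness, trying the configured addresses one by one with neo-go's rpcclient could look roughly like this (a sketch, not the actual meta-service code):

```go
// Hypothetical sketch: dial WebSocket endpoints in order until a client
// initializes successfully.
package main

import (
	"context"
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/rpcclient"
)

func dialAny(ctx context.Context, addresses []string) (*rpcclient.WSClient, error) {
	var lastErr error
	for _, addr := range addresses {
		c, err := rpcclient.NewWS(ctx, addr, rpcclient.WSOptions{})
		if err == nil {
			if err = c.Init(); err == nil {
				return c, nil
			}
			c.Close()
		}
		lastErr = fmt.Errorf("%s: %w", addr, err)
	}
	return nil, fmt.Errorf("no endpoint available: %w", lastErr)
}

func main() {
	_, err := dialAny(context.Background(), []string{"ws://localhost:30333/ws"})
	fmt.Println(err)
}
```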
	return epoch.Int64(), nil
}

func (m *Meta) handleEpochNotification(e int64) error {
Expired objects are not deleted.
hm, we do not have the expiration epoch in the meta-from-chain, so we cannot find such cases without HEADing the object. or we have to add it to the contract.
Can be postponed.
Force-pushed 1e3b86d to 90e9810 (compare)
great work, i have viewed only a few random parts.
_, err := st.mpt.Store.PersistSync()
if err != nil {
	return fmt.Errorf("persisting %q storage: %w", st.path, err)
maybe it is worth continuing to persist the other storages?
it is a big question for me, in fact. i feel like either everything should be in sync and ok, or we should stop.
@roman-khimov
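If the decision is to keep persisting the remaining storages, one option is to collect the failures instead of returning on the first one; the types below are stand-ins, not the PR's code:

```go
// Hypothetical sketch: persist every per-container storage and join the
// errors, rather than returning on the first failure.
package main

import (
	"errors"
	"fmt"
)

type containerStorage struct {
	path string
	// persist flushes the in-memory MPT layer to disk; a stand-in for
	// st.mpt.Store.PersistSync() from the PR.
	persist func() error
}

func persistAll(storages []containerStorage) error {
	var errs []error
	for _, st := range storages {
		if err := st.persist(); err != nil {
			errs = append(errs, fmt.Errorf("persisting %q storage: %w", st.path, err))
		}
	}
	return errors.Join(errs...)
}

func main() {
	err := persistAll([]containerStorage{
		{path: "a", persist: func() error { return nil }},
		{path: "b", persist: func() error { return errors.New("disk full") }},
	})
	fmt.Println(err)
}
```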
	newKVs[string(append([]byte{0, previousPartIndex}, commsuffix...))] = e.prevObject
}
if len(e.deletedObjects) > 0 {
	newKVs[string(append([]byte{0, deletedIndex}, commsuffix...))] = e.deletedObjects
The object is deleted, which means you need to delete it from the object index and be done with it.
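A hedged sketch of that alternative: on a deletion event, drop the object's own index keys instead of recording a deleted-objects entry. The prefixes and key layout below are illustrative only, not the PR's real schema:

```go
// Hypothetical sketch: a deletion event removes the object's index keys
// rather than writing a deleted-objects record.
package main

import "fmt"

const (
	objectIndex       byte = 0 // illustrative prefixes, not the PR's values
	previousPartIndex byte = 1
)

// deleteObjectKeys collects the keys that should be removed for one deleted
// object, so the caller can drop them from the KV store in a single batch.
func deleteObjectKeys(commsuffix []byte) [][]byte {
	return [][]byte{
		append([]byte{0, objectIndex}, commsuffix...),
		append([]byte{0, previousPartIndex}, commsuffix...),
	}
}

func main() {
	for _, k := range deleteObjectKeys([]byte("oid")) {
		fmt.Printf("delete %x\n", k)
	}
}
```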
Force-pushed 1f2dae0 to 9d16e91 (compare)
Force-pushed fb07736 to b1073bb (compare)
Information from FS chain is placed inside MPT structures reused from neo-go. Only the latest state is stored. Every container has its own MPT storage based on a separate bolt KV database.
Refs #3070.
Signed-off-by: Pavel Karpy <[email protected]>

Store caught information in MPT structures.
Closes #3070.
Signed-off-by: Pavel Karpy <[email protected]>
Force-pushed b1073bb to dcbd80d (compare)
No description provided.