feat/Process object notification from chain #3085

Open
carpawell wants to merge 2 commits into master from feat/process-meta-on-chain

Conversation

carpawell (Member)

No description provided.

codecov bot commented Jan 20, 2025

Codecov Report

Attention: Patch coverage is 29.34783% with 520 lines in your changes missing coverage. Please review.

Project coverage is 23.06%. Comparing base (eb90734) to head (dcbd80d).
Report is 11 commits behind head on master.

Files with missing lines              Patch %   Lines
pkg/services/meta/notifications.go    30.67%    253 Missing and 34 partials ⚠️
pkg/services/meta/meta.go              7.81%    118 Missing ⚠️
cmd/neofs-node/meta.go                 0.00%     87 Missing ⚠️
pkg/services/meta/container.go        77.00%     17 Missing and 6 partials ⚠️
cmd/neofs-node/config.go               0.00%      4 Missing ⚠️
cmd/neofs-node/main.go                 0.00%      1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #3085      +/-   ##
==========================================
+ Coverage   22.93%   23.06%   +0.12%     
==========================================
  Files         750      755       +5     
  Lines       58683    59495     +812     
==========================================
+ Hits        13459    13721     +262     
- Misses      44282    44784     +502     
- Partials      942      990      +48     


@carpawell force-pushed the feat/process-meta-on-chain branch from 0207dcb to 50a8e8a on January 20, 2025 19:25
@carpawell mentioned this pull request on January 20, 2025
@carpawell force-pushed the feat/process-meta-on-chain branch 4 times, most recently from 1510922 to ca811c2 on January 31, 2025 03:16
pkg/services/meta/container.go (resolved, outdated)
pkg/services/meta/meta.go (resolved)
pkg/services/meta/meta.go (resolved)
continue
}

go func() {
Member:

Unordered. Potentially dangerous resource-wise.

carpawell (Member Author):

> Unordered

Can this be bad? Eventually, we plan to receive every notification in a block at once. Right now, new block handling is not ordered either, and it will not be.

> Potentially dangerous resource-wise.

We have been discussing notification handling from time to time for a long while. The neo-go client requires us to always read notifications (IMO that is a bad restriction on the client's side, and a mistake), and we cannot make any other calls while we are blocked on notification handling. Considering everything I know and all the discussions I have had, I can only say that using all the resources we have (goroutines) is the only way to prevent a deadlock; otherwise, there is always some number of notifications that can arrive via the channel, exceed any buffer size, and lead to a lock.

Member:

  1. We care about ordering here. That's exactly the feature we need from this chain: ordered events.
  2. The client is limited by a single pipe transmitting all data; it's like a HOL blocking issue, if you like. But we can build a reasonably good pipeline with buffers that will work in most cases and will just reset the connection if it overflows. Then this boils down to synchronizing with the chain from a cold start, which is an issue we have to solve anyway.
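
A minimal sketch of such a bounded pipeline, assuming a decoded notification type and a caller that resets the connection and resyncs whenever errOverflow is returned; none of these names come from this PR:

```go
package meta

import (
	"errors"
	"fmt"
)

type notification struct{ /* decoded chain event */ }

// errOverflow tells the caller to reset the WS connection and resync from chain state.
var errOverflow = errors.New("notification buffer overflown")

// runPipeline reads notifications without ever blocking the producer and
// handles them strictly in arrival order; if the handler falls behind the
// bounded buffer, the pipeline gives up instead of deadlocking.
func runPipeline(incoming <-chan notification, handle func(notification) error) error {
	buf := make(chan notification, 1024)
	overflow := make(chan struct{})

	go func() {
		defer close(buf)
		for n := range incoming {
			select {
			case buf <- n:
			default:
				close(overflow) // consumer is too slow: signal a reset
				return
			}
		}
	}()

	for n := range buf {
		if err := handle(n); err != nil {
			return fmt.Errorf("handling notification: %w", err)
		}
	}

	select {
	case <-overflow:
		return errOverflow
	default:
		return nil // subscription closed normally
	}
}
```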

pkg/services/meta/notifications.go (resolved, outdated)
if !ok {
return res, fmt.Errorf("unexpected object ID stack item: %T", arr[1].Value())
}
meta, ok := arr[2].Value().([]stackitem.MapElement)
Member:

Why not handle it as a proper map (yes, it only works via the Index() API now, but still)? You don't care about element order. I think depending on the order is too fragile here.

carpawell (Member Author):

> Why not handle it as a proper map (yes, it only works via the Index() API now, but still)?

If you mean calling Index() and then picking that item from the slice, I do not see any difference. I would like to have a Get/Pick method (as the opposite of the existing Add) and know nothing about the slice, but there is no such method, and I do not know why.

> I think depending on the order is too fragile here.

I do not agree. While reading this code, yes, it may seem fragile, but we are reading the code of a node that has already received the notification, which means every node has signed the correct order and number of meta fields. So either the order is correct and the node can work with it, or nothing happens at all.

Anyway, I really would like to have a Get method on the map type and agree to use it order-independently, but there is none.

Member:

We have FromStackItem in the RPC bindings; it will probably help us here.

carpawell (Member Author):

Which exact struct from the bindings do you suggest using?
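
For illustration, an order-independent lookup over the meta map might look roughly like this; metaField is a hypothetical helper, and TryBytes is the generic stackitem.Item accessor:

```go
package meta

import (
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
)

// metaField picks a meta field by key instead of by position (sketch, not PR code).
func metaField(elems []stackitem.MapElement, name string) ([]byte, error) {
	for _, el := range elems {
		key, err := el.Key.TryBytes()
		if err != nil {
			return nil, fmt.Errorf("invalid meta map key: %w", err)
		}
		if string(key) == name {
			return el.Value.TryBytes()
		}
	}
	return nil, fmt.Errorf("missing %q field in meta map", name)
}
```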

pkg/services/meta/notifications.go (resolved, outdated)
@carpawell force-pushed the feat/process-meta-on-chain branch from ca811c2 to 1e3b86d on February 5, 2025 03:57
carpawell (Member Author) commented Feb 5, 2025

There are still some TODOs that require additional attention, but in general it works. The fixes may be done later or right away. More tests will be added; currently they are broken because of the new updates.

@carpawell marked this pull request as ready for review on February 5, 2025 04:00
roman-khimov (Member) left a comment

s/MTP/MPT/

}

// TODO: receive full object header and store its non-MTP parts in
// persistent object storage
Member:

In fact, you need to save the original (MPT-fied) KVs as well; MPT is slow for reads (even though it can return the same data).

carpawell (Member Author):

You mean every KV that I put into the MPT should also be placed in the raw storage?

Member:

Yes.
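
A minimal sketch of that duplication, assuming a rawStore interface standing in for the PR's plain KV database:

```go
package meta

import (
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/core/mpt"
)

// rawStore is an assumed plain KV interface; bolt-backed in this PR's case.
type rawStore interface {
	Put(key, value []byte) error
}

// putMeta writes every KV into the MPT (for state proofs) and into the raw
// storage (for fast reads of the same data).
func putMeta(trie *mpt.Trie, raw rawStore, kvs map[string][]byte) error {
	for k, v := range kvs {
		if err := trie.Put([]byte(k), v); err != nil {
			return fmt.Errorf("MPT put: %w", err)
		}
		if err := raw.Put([]byte(k), v); err != nil {
			return fmt.Errorf("raw put: %w", err)
		}
	}
	return nil
}
```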

}

func (m *Meta) handleObjectNotification(e objEvent) error {
// TODO check network magic?
Member:

Can be done.

carpawell (Member Author):

Will do. I cannot say whether it is possible to misuse networks, but if we have this field from our chain, I guess it should be checked.

Member:

It's mostly for replay protection, but since we have it, it can be checked.
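
As a rough illustration of such a check (the struct and field names below are assumptions for the sketch, not the PR's actual types):

```go
package meta

import "fmt"

// objEvent and Meta only loosely mirror the PR's names; the magic fields are
// assumed for illustration.
type objEvent struct {
	network uint32
	// ... other decoded notification fields
}

type Meta struct {
	magicNumber uint32
	// ... other service state
}

// checkNetwork rejects notifications replayed from a different network.
func (m *Meta) checkNetwork(e objEvent) error {
	if e.network != m.magicNumber {
		return fmt.Errorf("unexpected network magic %d, expected %d", e.network, m.magicNumber)
	}
	return nil
}
```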

@@ -698,6 +715,13 @@ func initBasics(c *cfg, key *keys.PrivateKey, stateStorage *state.PersistentStor
fatalOnErr(err)
}

c.shared.wsCli, err = rpcclient.NewWS(c.ctx, addresses[0], rpcclient.WSOptions{
Member:

No attempts with the other addresses?

carpawell (Member Author):

Whoops, rebase problems. This field is not used, so this is effectively no-op code; multiple endpoints are handled inside the meta service now.

return epoch.Int64(), nil
}

func (m *Meta) handleEpochNotification(e int64) error {
Member:

Expired objects are not deleted.

carpawell (Member Author):

Hm, we do not have the expiration epoch in the meta from the chain, so we cannot detect such cases without HEADing the object. Or we would have to add it to the contract.

Member:

Can be postponed.

@carpawell force-pushed the feat/process-meta-on-chain branch from 1e3b86d to 90e9810 on February 10, 2025 02:02
cthulhu-rider (Contributor) left a comment

Great work. I have viewed only a few random parts.

pkg/services/meta/container.go (resolved, outdated)
pkg/services/meta/container.go (resolved, outdated)
pkg/services/meta/container.go (resolved)
pkg/services/meta/container.go (resolved, outdated)
pkg/services/meta/container.go (resolved, outdated)
cmd/neofs-node/meta.go (resolved)
pkg/services/meta/meta.go (resolved)
pkg/services/meta/meta.go (resolved, outdated)
pkg/services/meta/meta.go (resolved, outdated)
pkg/services/meta/meta.go (resolved)

_, err := st.mpt.Store.PersistSync()
if err != nil {
return fmt.Errorf("persisting %q storage: %w", st.path, err)
Contributor:

Maybe it is worth continuing to persist the other storages?

carpawell (Member Author):

It is a big question for me, in fact. I feel like either everything should be in sync and fine, or we should stop.
@roman-khimov
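
One possible take on that question, as a sketch: persist every per-container storage and report all failures together. containerStorage here only loosely mirrors the PR's type; only the fields used below are included.

```go
package meta

import (
	"errors"
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/core/mpt"
)

// containerStorage is a stand-in for the PR's per-container storage.
type containerStorage struct {
	path string
	mpt  *mpt.Trie
}

// persistAll keeps persisting the remaining storages even if one of them
// fails, and reports every failure at once.
func persistAll(storages []*containerStorage) error {
	var errs []error
	for _, st := range storages {
		if _, err := st.mpt.Store.PersistSync(); err != nil {
			errs = append(errs, fmt.Errorf("persisting %q storage: %w", st.path, err))
		}
	}
	return errors.Join(errs...)
}
```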

newKVs[string(append([]byte{0, previousPartIndex}, commsuffix...))] = e.prevObject
}
if len(e.deletedObjects) > 0 {
newKVs[string(append([]byte{0, deletedIndex}, commsuffix...))] = e.deletedObjects
Member:

The object is deleted. Which means you need to delete it from the object index and be done with it.
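
A sketch of that suggestion, dropping the index entries instead of writing a deleted-objects key; the key prefix below is an assumption mirroring the snippet above, not the PR's actual layout.

```go
package meta

import (
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/core/mpt"
)

// oidIndex is an assumed prefix for per-object index keys.
const oidIndex = 0

// dropDeleted removes deleted objects from the object index instead of
// recording them under a separate deleted-objects key.
func dropDeleted(trie *mpt.Trie, deletedOIDs [][]byte) error {
	for _, oid := range deletedOIDs {
		key := append([]byte{0, oidIndex}, oid...)
		if err := trie.Delete(key); err != nil {
			return fmt.Errorf("deleting object index entry: %w", err)
		}
	}
	return nil
}
```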

@carpawell force-pushed the feat/process-meta-on-chain branch 6 times, most recently from 1f2dae0 to 9d16e91 on February 13, 2025 22:42
@carpawell force-pushed the feat/process-meta-on-chain branch 2 times, most recently from fb07736 to b1073bb on February 14, 2025 09:10
Information from FS chain is placed inside MPT structures reused from neo-go.
Only the latest state is stored. Every container has its own MPT storage based
on a separate bolt KV database. Refs #3070.

Signed-off-by: Pavel Karpy <[email protected]>
Store caught information in MPT structures. Closes #3070.

Signed-off-by: Pavel Karpy <[email protected]>
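
As a rough illustration of the layout described in the first commit message, each container could get its own trie over a memory-cached store; here a memory store stands in for the per-container bolt database, and mpt.ModeLatest keeps only the latest state:

```go
package meta

import (
	"github.com/nspcc-dev/neo-go/pkg/core/mpt"
	"github.com/nspcc-dev/neo-go/pkg/core/storage"
)

// newContainerTrie is a sketch: in the PR the backing store is a separate
// bolt database per container, while a memory store stands in for it here.
func newContainerTrie() *mpt.Trie {
	backing := storage.NewMemCachedStore(storage.NewMemoryStore())
	// nil root starts an empty trie; ModeLatest stores only the latest state.
	return mpt.NewTrie(nil, mpt.ModeLatest, backing)
}
```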
@carpawell force-pushed the feat/process-meta-on-chain branch from b1073bb to dcbd80d on February 14, 2025 11:34