fix: custom fields cache concurrent writes #564
base: main
Conversation
Just FYI: this is totally untested. I'm just posting a PR here to better show where the problem I encountered in #563 is.
@Multiply looks good! Could you also add a changie file? That way we can quickly push this to deployment
@@ -257,6 +259,7 @@ func getTypeResourceFromResourceData(ctx context.Context, client *platform.ByPro
// field. The type_id is cached to minimize API calls when multiple resource
// use the same type
func GetTypeResource(ctx context.Context, client *platform.ByProjectKeyRequestBuilder, typeId string) (*platform.Type, error) {
	cacheTypesLock.Lock()
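For context, a minimal sketch of the pattern this change appears to introduce, assuming cacheTypes is a package-level map guarded by cacheTypesLock and that the commercetools client call chain looks roughly like below (the cache shape, the SDK import path, and the call chain are assumptions, not the actual repository code):

import (
	"context"
	"sync"

	"github.com/labd/commercetools-go-sdk/platform"
)

var (
	cacheTypesLock sync.Mutex
	cacheTypes     = map[string]*platform.Type{}
)

func GetTypeResource(ctx context.Context, client *platform.ByProjectKeyRequestBuilder, typeId string) (*platform.Type, error) {
	// Check the cache under the lock.
	cacheTypesLock.Lock()
	if t, ok := cacheTypes[typeId]; ok {
		cacheTypesLock.Unlock()
		return t, nil
	}
	cacheTypesLock.Unlock()

	// Fetch from commercetools without holding the lock (call chain assumed).
	t, err := client.Types().WithId(typeId).Get().Execute(ctx)
	if err != nil {
		return nil, err
	}

	// Store the result under the lock again.
	cacheTypesLock.Lock()
	cacheTypes[typeId] = t
	cacheTypesLock.Unlock()
	return t, nil
}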
Maybe we should rather lock the whole method at the start? So:
cacheTypesLock.Lock()
defer cacheTypesLock.Unlock()
Makes it a bit easier to read, and then we don't have multiple locks and unlocks in the code
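A sketch of what that suggestion would look like, reusing the same assumed declarations as the sketch above (cache shape and client call are assumptions):

func GetTypeResource(ctx context.Context, client *platform.ByProjectKeyRequestBuilder, typeId string) (*platform.Type, error) {
	// Hold the lock for the entire call so there is only one Lock/Unlock pair.
	cacheTypesLock.Lock()
	defer cacheTypesLock.Unlock()

	if t, ok := cacheTypes[typeId]; ok {
		return t, nil
	}
	t, err := client.Types().WithId(typeId).Get().Execute(ctx)
	if err != nil {
		return nil, err
	}
	cacheTypes[typeId] = t
	return t, nil
}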
If CT is slow at doing lookups, you block other calls to GetTypeResource, slowing down the provider.
What I might see happening here is that we get some calls in succession for the same type that has not yet been fetched, and they would all see the cache slice as empty. They would then all unlock, all call commercetools afterwards, and replace the entry at the type key in the map in succession. This is generally what I would expect to happen during a terraform plan/apply, as that is when we fetch all the information. So I am unsure whether the above is a significant improvement over just locking it the first time.
Maybe a better alternative would be something like https://pkg.go.dev/sync#Map, which should take care of most of these issues.
Having said that, your solution obviously does fix the bug :)
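For reference, a sync.Map-based variant might look roughly like this (illustrative only; the client call chain is an assumption). Note that two goroutines that both miss the cache will still each call commercetools, which is the concern raised in the next reply:

var cacheTypes sync.Map // assumed: typeId -> *platform.Type

func GetTypeResource(ctx context.Context, client *platform.ByProjectKeyRequestBuilder, typeId string) (*platform.Type, error) {
	if v, ok := cacheTypes.Load(typeId); ok {
		return v.(*platform.Type), nil
	}
	// Concurrent callers that all miss the cache will each run this fetch;
	// sync.Map only keeps the map itself consistent.
	t, err := client.Types().WithId(typeId).Get().Execute(ctx)
	if err != nil {
		return nil, err
	}
	actual, _ := cacheTypes.LoadOrStore(typeId, t)
	return actual.(*platform.Type), nil
}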
sync.Map doesn't solve that issue on its own either; it only blocks on the read and write operations, not on the fetch.
Today, if two requests come in succession you get a panic; it doesn't block at all in its current form.
If you want other behaviour, you would want to block on the type id rather than globally, either way.
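A rough sketch of blocking per type id rather than globally (all names here are illustrative, not from the repository; the client call chain is assumed):

var (
	typeLocksMu sync.Mutex
	typeLocks   = map[string]*sync.Mutex{}
)

// lockForType returns a mutex dedicated to one type id, creating it on first use.
func lockForType(typeId string) *sync.Mutex {
	typeLocksMu.Lock()
	defer typeLocksMu.Unlock()
	m, ok := typeLocks[typeId]
	if !ok {
		m = &sync.Mutex{}
		typeLocks[typeId] = m
	}
	return m
}

func GetTypeResource(ctx context.Context, client *platform.ByProjectKeyRequestBuilder, typeId string) (*platform.Type, error) {
	// A slow lookup for one type id no longer blocks lookups for other type ids.
	mu := lockForType(typeId)
	mu.Lock()
	defer mu.Unlock()

	cacheTypesLock.Lock()
	t, ok := cacheTypes[typeId]
	cacheTypesLock.Unlock()
	if ok {
		return t, nil
	}
	t, err := client.Types().WithId(typeId).Get().Execute(ctx)
	if err != nil {
		return nil, err
	}
	cacheTypesLock.Lock()
	cacheTypes[typeId] = t
	cacheTypesLock.Unlock()
	return t, nil
}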
You could change the underlying implementation to use x/sync's singleflight.Group instead, or a similar package, to do what you mention.
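A hedged sketch of that idea using golang.org/x/sync/singleflight, so concurrent callers asking for the same typeId share one commercetools request (the cache layout and client call chain are assumptions, only the singleflight usage is the point):

import (
	"context"
	"sync"

	"github.com/labd/commercetools-go-sdk/platform"
	"golang.org/x/sync/singleflight"
)

var (
	typeGroup      singleflight.Group
	cacheTypesLock sync.Mutex
	cacheTypes     = map[string]*platform.Type{}
)

func GetTypeResource(ctx context.Context, client *platform.ByProjectKeyRequestBuilder, typeId string) (*platform.Type, error) {
	cacheTypesLock.Lock()
	if t, ok := cacheTypes[typeId]; ok {
		cacheTypesLock.Unlock()
		return t, nil
	}
	cacheTypesLock.Unlock()

	// All concurrent callers for the same typeId share this one fetch.
	v, err, _ := typeGroup.Do(typeId, func() (interface{}, error) {
		t, err := client.Types().WithId(typeId).Get().Execute(ctx)
		if err != nil {
			return nil, err
		}
		cacheTypesLock.Lock()
		cacheTypes[typeId] = t
		cacheTypesLock.Unlock()
		return t, nil
	})
	if err != nil {
		return nil, err
	}
	return v.(*platform.Type), nil
}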
Force-pushed from dad4135 to d8faf93
Something like this?
Fixes #563
Ensure cacheTypes uses a lock for concurrent access.