LRU cache utility #1090
base: master
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #1090 +/- ##
==========================================
- Coverage 94.29% 93.96% -0.33%
==========================================
Files 159 162 +3
Lines 17325 17595 +270
==========================================
+ Hits 16337 16534 +197
- Misses 988 1061 +73
Flags with carried forward coverage won't be shown.
Force-pushed from 738e0d7 to 2938377
I sadly can't complain about anything. Let's ask Copilot then. :)
using LRUList = std::list<LRUEntry>;
using LRUIterator = typename LRUList::iterator;
using Map = std::unordered_map<Key, LRUIterator>;
Would it be possible/better to store values in the map entries? Then there'd be just one link: LRUEntry => MapNode.
Now it's double-direction: LRUEntry::key => MapNode::key and MapNode::value -> LRUEntry.
Ah I see a comment below that there's no performance difference. But it seems more convoluted to me.
Yes, this is possible. Then the LRU list has only the pointer/reference/iterator to the map, and the map is Key => (LRUIterator, Value).
I'm kind of on your side here, but because the performance difference was negligible I decided to keep the "classic" layout for now. There is also a small nuance: you cannot have std::list<const Key&>, so you need to wrap the reference or use a pointer.
This is to be revisited in case we are going to use an intrusive list and/or map iterators.
Map map_;

/// Marks an element as the most recently used by moving it to the back of the LRU list.
void move_to_back(LRUIterator it) noexcept { lru_list_.splice(lru_list_.end(), lru_list_, it); }
Nit: the name kinda leaks internal representation (order of elements in the list). Maybe better something like touch(), use(), access()?
Actually, I didn't notice it's a private method, so it's fine.
I think I thought about it, e.g. naming it bump_usage(). In the end I decided not to because we also use lru_list_ directly in other places. E.g. should we also replace lru_list_.emplace with something like new_use()?
The main motivation was to wrap splice(), which usually takes me some time to figure out.
I probably should mention O(1) get/put complexity.
Adds generic single-threaded implementation of the least recently used (LRU) cache. This is to be used to cache the code and code analysis.
Force-pushed from 2d08d63 to 1200c41