
Support and_try_compute_if_nobody_else #460

Open
wants to merge 2 commits into main

Conversation


@xuehaonan27 xuehaonan27 commented Oct 9, 2024

Issue: #433
It will only evaluate one closure for a given entry; the other closures are canceled, and their calls return `CompResult::Unchanged`.

Add `ValueInitializer::post_init_for_try_compute_with_if_nobody_else`.
Add `Cache::try_compute_if_nobody_else_with_hash_and_fun`.
Add `RefKeyEntrySelector::and_try_compute_if_nobody_else`.
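
A sketch of the intended call pattern, adapted from the test code in the conversation below (the exact trait bounds and signature are whatever this PR defines):

use moka::{future::Cache, ops::compute::{CompResult, Op}};

async fn sketch(cache: &Cache<String, u64>) -> Result<CompResult<String, u64>, ()> {
    cache
        .entry_by_ref("key1")
        // Like `and_try_compute_with`, except that when another task is
        // already computing for this entry, the closure is skipped and the
        // call resolves to `Ok(CompResult::Unchanged(..))`.
        .and_try_compute_if_nobody_else(|maybe_entry| {
            let op = match maybe_entry {
                Some(entry) => Ok(Op::Put(entry.into_value().saturating_add(1))),
                None => Ok(Op::Put(1)),
            };
            // Return a Future that resolves to `op` immediately.
            std::future::ready(op)
        })
        .await
}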

codecov bot commented Oct 9, 2024

Codecov Report

Attention: Patch coverage is 0% with 131 lines in your changes missing coverage. Please review.

Project coverage is 94.21%. Comparing base (bd5e447) to head (29b00c2).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #460      +/-   ##
==========================================
- Coverage   94.79%   94.21%   -0.58%     
==========================================
  Files          44       43       -1     
  Lines       20306    20435     +129     
==========================================
+ Hits        19249    19253       +4     
- Misses       1057     1182     +125     

@tatsuya6502 tatsuya6502 self-requested a review October 15, 2024 02:15
@xuehaonan27 (Author)

Test code:

use std::sync::Arc;

use moka::{
    future::Cache,
    ops::compute::{CompResult, Op},
};

const N: usize = 100000;
const CONCURRENCY: usize = 100; // number of concurrent tasks

#[tokio::main]
async fn main() {
    /*
        Note: the final result should be less than `N`.
        That's okay because we are testing [`and_try_compute_if_nobody_else`], which should
        cancel some computations, rather than testing whether the addition itself is
        thread-safe or not.
    */

    // Run this to test computation serialized by the cache.
    // You will only see lines like
    // ReplacedWith(Entry { key: "key1", value: <xxx>, is_fresh: true, is_old_value_replaced: true })
    // in stdout, which is as expected.
    computation_serialized().await;

    // Run this to test the new feature.
    // You will see lines like
    // Unchanged(Entry { key: "key1", value: <xxx>, is_fresh: false, is_old_value_replaced: false })
    // in stdout, which is as expected because there are multiple waiters manipulating the same
    // entry simultaneously.
    // However, I am not sure whether a [`CompResult`] like `Unchanged` is appropriate for
    // this situation, where the computation is canceled because another waiter is
    // occupying the entry.
    // Should we add a new variant to [`CompResult`] to represent this situation?
    // (See the `CompResult` sketch after this code.)
    computation_maybe_cancelled().await;
}

async fn computation_serialized() {
    // Increment a cached `u64` counter.
    async fn increment_counter(cache: &Cache<String, u64>, key: &str) -> CompResult<String, u64> {
        cache
            .entry_by_ref(key)
            .and_compute_with(|maybe_entry| {
                let op = if let Some(entry) = maybe_entry {
                    let counter = entry.into_value();
                    Op::Put(counter.saturating_add(1)) // Update
                } else {
                    Op::Put(1) // Insert
                };
                // Return a Future that is resolved to `op` immediately.
                std::future::ready(op)
            })
            .await
    }

    let cache: Arc<Cache<String, u64>> = Arc::new(Cache::new(100));
    let key = "key1".to_string();

    let mut handles = Vec::new();

    for _ in 0..CONCURRENCY {
        let cache_clone = Arc::clone(&cache);
        let key_clone = key.clone();
        let handle = tokio::spawn(async move {
            for _ in 0..(N / CONCURRENCY) {
                let res = increment_counter(&cache_clone, &key_clone).await;
                println!("{:?}", res);
            }
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.await.unwrap();
    }

    let result = cache.get(&key).await;
    println!("{result:?}");
}

async fn computation_maybe_cancelled() {
    // Increment a cached `u64` counter.
    async fn increment_counter_if_nobody_else(
        cache: &Cache<String, u64>,
        key: &str,
    ) -> Result<CompResult<String, u64>, ()> {
        cache
            .entry_by_ref(key)
            .and_try_compute_if_nobody_else(|maybe_entry| {
                let op = if let Some(entry) = maybe_entry {
                    let counter = entry.into_value();
                    Ok(Op::Put(counter.saturating_add(1))) // Update
                } else {
                    Ok(Op::Put(1)) // Insert
                };
                // Return a Future that is resolved to `op` immediately.
                std::future::ready(op)
            })
            .await
    }

    let cache: Arc<Cache<String, u64>> = Arc::new(Cache::new(100));
    let key = "key1".to_string();

    let mut handles = Vec::new();

    for _ in 0..CONCURRENCY {
        let cache_clone = Arc::clone(&cache);
        let key_clone = key.clone();
        let handle = tokio::spawn(async move {
            for _ in 0..(N / CONCURRENCY) {
                let res = increment_counter_if_nobody_else(&cache_clone, &key_clone)
                    .await
                    .unwrap(); // Safe: the closure above never returns `Err`.
                println!("{:?}", res);
            }
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.await.unwrap();
    }

    let result = cache.get(&key).await;
    println!("{result:?}");
}
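
For context on the question in the comments above: these are the `CompResult` variants that exist today in `moka::ops::compute` (as of moka 0.12; summarized here for reference, with paraphrased doc comments). A dedicated cancellation variant, e.g. a hypothetical `Canceled(Entry<K, V>)`, would be a new addition to this enum:

use std::sync::Arc;
use moka::Entry;

pub enum CompResult<K, V> {
    // The entry did not exist and has been inserted.
    Inserted(Entry<K, V>),
    // The entry already existed and its value has been replaced.
    ReplacedWith(Entry<K, V>),
    // The entry already existed and has been removed.
    RemovedAsIs(Entry<K, V>),
    // The entry did not exist and still does not.
    StillNone(Arc<K>),
    // The entry already existed and was not modified. This PR also returns
    // this variant when a computation is canceled because another waiter
    // holds the entry.
    Unchanged(Entry<K, V>),
}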
