
Merge probabilistic scores from external source #3562

Merged

TheBlueMatt merged 5 commits into lightningdevkit:main from the merge-scores branch on Feb 10, 2025

Conversation

joostjager
Contributor

@joostjager joostjager commented Jan 27, 2025

Fixes #2709

Usage in LDK node: lightningdevkit/ldk-node#449

Only "fix historical liquidity bucket decay" should be backported


codecov bot commented Jan 27, 2025

Codecov Report

Attention: Patch coverage is 90.03559% with 28 lines in your changes missing coverage. Please review.

Project coverage is 88.53%. Comparing base (c5fd164) to head (2c1bdfd).
Report is 15 commits behind head on main.

Files with missing lines | Patch % | Lines
lightning/src/routing/scoring.rs | 90.03% | 25 Missing and 3 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3562      +/-   ##
==========================================
+ Coverage   88.52%   88.53%   +0.01%     
==========================================
  Files         149      149              
  Lines      115030   115279     +249     
  Branches   115030   115279     +249     
==========================================
+ Hits       101833   102067     +234     
- Misses      10706    10720      +14     
- Partials     2491     2492       +1     


@joostjager joostjager force-pushed the merge-scores branch 8 times, most recently from c24cc83 to 85f3fee on January 30, 2025 11:30

fn time_passed(&mut self, duration_since_epoch: Duration, decay_params: ProbabilisticScoringDecayParameters) {
	self.0.retain(|_scid, liquidity| {
		liquidity.min_liquidity_offset_msat =
Contributor Author

Maybe move this into the ChannelLiquidity (singular) struct

Contributor

If we do so, would we gain much by introducing ChannelLiquidities at all? Maybe we just use HashMap<u64, ChannelLiquidity> in the API?

Contributor Author

I think being able to offer state-level functionality does make things a bit cleaner. Maybe I should also add a merge method on this level.

The original reason for this struct though is to be able to use ser/deser logic without a scorer.
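
For illustration, a minimal standalone sketch of that idea (the type names mirror the PR, the per-channel fields are elided, and the merge policy shown is purely hypothetical):

use std::collections::HashMap;

// Per-channel liquidity state; the real struct carries live and historical
// bounds, which are elided here.
struct ChannelLiquidity {}

// Newtype over the hash map so that state-level operations (decay, merge,
// ser/deser) can be implemented once and reused without a full scorer.
struct ChannelLiquidities(HashMap<u64, ChannelLiquidity>);

impl ChannelLiquidities {
	fn get(&self, scid: &u64) -> Option<&ChannelLiquidity> {
		self.0.get(scid)
	}

	// Hypothetical merge policy: keep existing local entries and only adopt
	// channels we have no local data for.
	fn merge(&mut self, external: ChannelLiquidities) {
		for (scid, liquidity) in external.0 {
			self.0.entry(scid).or_insert(liquidity);
		}
	}
}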

channel_liquidities: HashMap<u64, ChannelLiquidity>,
channel_liquidities: ChannelLiquidities,
}
/// ChannelLiquidities contains live and historical liquidity bounds for each channel.
Contributor Author

Objections to moving this into its own file?

Collaborator

Yea, splitting scoring.rs into two modules would be nice. We generally don't put individual structs in their own module just for the sake of it, but when files get too big, splitting them down the middle (if there's a clean way to do it) is always nice... there are a few files that are in desperate need of it.

Contributor Author

For my workflow, sticking to one struct per file would work well. I find myself navigating quite a bit in these large files, using editor features (find, find symbol, find references) to make it easier, but it's not perfect. I'd rather use the folder/file hierarchy and pin files as tabs, for example. But it is personal, of course.

Some type of split would be welcome either way. For this PR I could start with a liquidity_information (open to naming suggestions) module and place the new ChannelLiquidities in it. Then in a separate PR move more liquidity code (ChannelLiquidity, HistoricalLiquidityTracker, HistoricalBucketRangeTracker and tests) in there. Thoughts?

Contributor

Thoughts?

I like the idea of breaking up our humongous modules and splitting more types out, be it in this PR or a follow up.
IMO, we could consider a folder structure such as:

src/routing/scoring/mod.rs (moved from src/routing/scoring.rs for backwards compat of the path)
src/routing/scoring/liquidity_tracking.rs (or just liquidity.rs ?)

Contributor

@tnull tnull Feb 3, 2025

FWIW, another easy step towards cleaning up/smaller files would be to move the entire bucketed_history sub-module out of scoring.rs and into a dedicated bucketed_history.rs file. Although, if we do this, we could consider moving the *Liquidity* types there, too.

Contributor Author

Skipping the moves for now as this PR is getting close to ready.


impl<G: Deref<Target = NetworkGraph<L>>, L: Deref> Writeable for ProbabilisticScorer<G, L> where L::Target: Logger {
	#[inline]
	fn write<W: Writer>(&self, w: &mut W) -> Result<(), io::Error> {
		write_tlv_fields!(w, {
Contributor

I think we want to maintain reading/writing the TLV fields here, rather than moving them into ChannelLiquidities, given that we're more likely to add additional fields requiring persisting on the more general ProbabilisticScorer.

Contributor Author

@joostjager joostjager Jan 30, 2025

Is it likely that there will be more persistent state unrelated to channels? If so, wouldn't that state also be placed in ChannelLiquidities, extending it with additional fields beyond the hash map? At that point, the struct might need a more general name, but I imagine those new fields would still be part of what you'd want to export/import.

For now, the purpose of creating the struct is to allow deserialization of state from disk or network without having to construct a full probabilistic scorer, which includes non-persistent and irrelevant fields.


@joostjager joostjager requested a review from tnull January 30, 2025 12:55
@joostjager joostjager force-pushed the merge-scores branch 2 times, most recently from fdad047 to ced0adc Compare January 30, 2025 16:31
@joostjager joostjager marked this pull request as ready for review January 31, 2025 08:53
Collaborator

@TheBlueMatt TheBlueMatt left a comment

Sorry for the delay here.

liquidity.liquidity_history.decay_buckets(elapsed_time.as_secs_f64() / half_life);
liquidity.offset_history_last_updated = duration_since_epoch;
liquidity.offset_history_last_updated += decay_params.historical_no_updates_half_life;
Collaborator

Not sure I get why we're calling decay_buckets in a loop. It already does buckets *= (1/2)^half_lives so we shouldn't need to call it repeatedly.

Well... actually, looking at it, it is wrong: it's doing buckets *= 1024 / 2048^half_lives instead of buckets *= (1024 / 2048)^half_lives, but we should fix the math instead of calling it in a loop :)

Contributor Author

I am not sure either 😅 At some point I concluded that this was a discrete operation, probably set on the wrong foot by that if elapsed_time > decay_params.historical_no_updates_half_life statement.

I do wonder though why the buckets aren't always decayed like the live bounds, and instead have this one-half-life waiting time. In the end, a half-life is just a way to express a rate, and it seems a bit strange to also use it in the way it is used in the if expression.

Good catch on the formula. Your suggestion is correct, but 1024/2048 is just 0.5 and doesn't work with integer math. Added a commit to fix it, and a unit test.
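
For reference, a minimal sketch of what the corrected integer math can look like, i.e. buckets *= (1/2)^half_lives via a fixed-point factor scaled by 1024; this illustrates the fix being discussed and is not necessarily the exact code in the commit:

fn decay_buckets(buckets: &mut [u16], half_lives: f64) {
	// 0.5^half_lives expressed as a fixed-point fraction over 1024.
	let factor = (0.5f64.powf(half_lives) * 1024.0) as u64;
	for bucket in buckets.iter_mut() {
		// Apply the factor with integer multiply/divide, rounding down.
		*bucket = ((*bucket as u64) * factor / 1024) as u16;
	}
}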

Collaborator

> I do wonder though why the buckets aren't always decayed like the live bounds, and instead have this one-half-life waiting time. In the end, a half-life is just a way to express a rate, and it seems a bit strange to also use it in the way it is used in the if expression.

This goes back a bit to the conclusion we reached that half-lives for scoring data are Always Wrong (because they're both too fast and too slow). Thus, we really want the historical buckets to only decay as we get new data in them (which they always do). However, it seems like it'd be obviously bad if we're scoring one channel with data that's a month old while treating it the same as another channel with data that's a minute old, so we added a decay, but made it pretty aggressively slow and only run it if we haven't gotten data in a while (since we'd prefer to only decay based on new data).
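
A sketch of the pattern described above, with assumed names rather than the PR's exact code: the history only starts decaying once a full half-life has passed without new data, so channels that keep receiving payment results are never decayed by the timer alone.

use std::time::Duration;

fn maybe_decay_history(
	offset_history_last_updated: &mut Duration, now: Duration,
	no_updates_half_life: Duration, decay_buckets: impl FnOnce(f64),
) {
	let elapsed = now.saturating_sub(*offset_history_last_updated);
	// Only decay once the data has gone a full half-life without updates;
	// new data already refreshes the buckets as it is added.
	if elapsed > no_updates_half_life {
		decay_buckets(elapsed.as_secs_f64() / no_updates_half_life.as_secs_f64());
		*offset_history_last_updated = now;
	}
}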

Contributor Author

Not so easy to feel confident that that is indeed the right thing to do, but that's the gut-part of all of this I suppose.

I've added this explanation as a comment to the code.

@joostjager joostjager force-pushed the merge-scores branch 3 times, most recently from b459831 to 055177c on February 3, 2025 09:29
@joostjager joostjager added the "weekly goal" label (Someone wants to land this this week) Feb 4, 2025
Collaborator

@TheBlueMatt TheBlueMatt left a comment

Basically LGTM, a few nits and one actual comment.

@joostjager
Contributor Author

Added set method.

@joostjager joostjager requested a review from tnull February 6, 2025 14:50
}
/// Container for live and historical liquidity bounds for each channel.
#[derive(Clone)]
pub struct ChannelLiquidities(HashMap<u64, ChannelLiquidity>);
Collaborator

I realize we don't actually expose this anywhere, so it's not possible with the current API for an LSP to expose one of these.

Contributor Author

How this works in ldk-node - or is intended to work - is that the serialized version of this data is fetched from the database and exposed.

This struct comes into play when merging the scores again. Bytes are retrieved via a URL and then deserialized into this ChannelLiquidities struct.
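
To make that flow concrete, a hedged sketch of the client side, assuming ChannelLiquidities implements LDK's Readable trait (as the serialization discussion implies) and using the set method added in this PR (shown further down); the HTTP fetch itself and error handling are left out:

use lightning::routing::gossip::NetworkGraph;
use lightning::routing::scoring::{ChannelLiquidities, ProbabilisticScorer};
use lightning::util::logger::Logger;
use lightning::util::ser::Readable;
use std::ops::Deref;

fn apply_external_scores<G: Deref<Target = NetworkGraph<L>>, L: Deref>(
	scorer: &mut ProbabilisticScorer<G, L>, mut external_bytes: &[u8],
) where
	L::Target: Logger,
{
	// Deserialize the fetched bytes into the shared container type...
	let external_scores = ChannelLiquidities::read(&mut external_bytes)
		.expect("external scores should deserialize");
	// ...and replace the local liquidity view with them. A merge method that
	// combines local and external data is also discussed in this PR.
	scorer.set(external_scores);
}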

Collaborator

Right, but we have to also expose the ability to fetch this struct from a ProbabilisticScorer on the LSP end so that it can write the data to a server to be fetched from the URL :)

Contributor Author

Why is that? I am assuming the LSP does something like https://github.com/lightningdevkit/ldk-node/pull/458/files

Contributor Author

Added get_scores on ProbabilisticScorer.

Contributor Author

Renamed the getter to scores() to follow convention

Wrap the liquidities hash map into a struct so that decay and
serialization functionality can be attached. This allows external data
to be deserialized into this struct and decayed to make it comparable
and mergeable.
Add a new scorer that is able to combine the local score with scores
coming in from an external source. This allows light nodes with a
limited view of the network to improve payment success rates.
The formula for applying half lives was incorrect. Test coverage added.
@@ -1148,6 +1148,11 @@ impl<G: Deref<Target = NetworkGraph<L>>, L: Deref> ProbabilisticScorer<G, L> whe
	pub fn set(&mut self, external_scores: ChannelLiquidities) {
		_ = mem::replace(&mut self.channel_liquidities, external_scores);
	}

	// Returns the current scores.
	pub fn get_scores(&self) -> ChannelLiquidities {
Contributor

nit: Please follow the Rust API Guidelines where possible, i.e., in the general case the get_ prefix is avoided and the method name should often reflect the field name. Suggestion: channel_liquidities

Contributor Author

@joostjager joostjager Feb 10, 2025

We'd then mix scores (in parameter names) and channel_liquidities. Probably better to align them - but which one is the better name?

Contributor

I'd be voting for channel_liquidities but doesn't matter too much, so feel free to leave it as scores if you prefer, IMO.

Contributor Author

Leaving it as scores, since we now refer to this data as scores elsewhere too. It is a good point though; it isn't fully consistent. Naming is hard.

@joostjager joostjager force-pushed the merge-scores branch 2 times, most recently from c42f96c to 0514071 on February 10, 2025 13:25
@joostjager joostjager requested a review from tnull February 10, 2025 13:40
tnull previously approved these changes Feb 10, 2025
Contributor

@tnull tnull left a comment

LGTM, one nit.

This commit expands on the previously introduced merge method by
offering a way to simply replace the local scores with the liquidity
information obtained from an external source.
tnull previously approved these changes Feb 10, 2025
Allows access to the scorer state. An example use case is an LSP
exposing the global network view in its scorer over HTTP to light
clients.
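
A matching sketch of the LSP side, assuming ChannelLiquidities implements LDK's Writeable trait: the reference returned by scores() (see the final diff below) is serialized to bytes that an HTTP handler, not shown here, can serve to light clients.

use lightning::routing::gossip::NetworkGraph;
use lightning::routing::scoring::ProbabilisticScorer;
use lightning::util::logger::Logger;
use lightning::util::ser::Writeable;
use std::ops::Deref;

fn export_scores_for_clients<G: Deref<Target = NetworkGraph<L>>, L: Deref>(
	scorer: &ProbabilisticScorer<G, L>,
) -> Vec<u8>
where
	L::Target: Logger,
{
	// Writeable::encode serializes the ChannelLiquidities into a fresh Vec<u8>.
	scorer.scores().encode()
}
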
Collaborator

@TheBlueMatt TheBlueMatt left a comment

Landing as the diff since @tnull's ACK is just

$ git diff-tree -U2 6f40f73ac766ae35f559a375bba2d59b2b139753 e9921ddb016dba1c6ce0b371e4ced7faf4956b62
diff --git a/lightning/src/routing/scoring.rs b/lightning/src/routing/scoring.rs
index ae6c754a9..bbdd75284 100644
--- a/lightning/src/routing/scoring.rs
+++ b/lightning/src/routing/scoring.rs
@@ -1151,6 +1151,6 @@ impl<G: Deref<Target = NetworkGraph<L>>, L: Deref> ProbabilisticScorer<G, L> whe

 	/// Returns the current scores.
-	pub fn scores(&self) -> ChannelLiquidities {
-		self.channel_liquidities.clone()
+	pub fn scores(&self) -> &ChannelLiquidities {
+		&self.channel_liquidities
 	}
 }

@TheBlueMatt TheBlueMatt merged commit f866e2c into lightningdevkit:main Feb 10, 2025
24 of 26 checks passed
@TheBlueMatt
Collaborator

Backported the decay fix in #3613

Labels
weekly goal (Someone wants to land this this week)
Development

Successfully merging this pull request may close these issues.

Combine scorers
3 participants