Defer ChainMonitor updates and persistence to flush() #4351 (Open)

joostjager wants to merge 4 commits into lightningdevkit:main from joostjager:chain-mon-internal-deferred-writes
Conversation


@joostjager joostjager commented Jan 27, 2026

Summary

Modify ChainMonitor internally to queue watch_channel and update_channel operations, returning InProgress until flush() is called. This enables persistence of monitor updates after ChannelManager persistence, ensuring correct ordering where the ChannelManager state is never ahead of the monitor state on restart. The new behavior is opt-in via a deferred switch.

Key changes:

  • ChainMonitor gains a deferred switch to enable the new queuing behavior
  • When enabled, monitor operations are queued internally and return InProgress
  • Calling flush() applies pending operations and persists monitors
  • Background processor updated to capture pending count before ChannelManager persistence, then flush after persistence completes

Performance Impact

Multi-channel, multi-node load testing (using ldk-server chaos branch) shows no measurable throughput difference between deferred and direct persistence modes.

This is likely because forwarding and payment processing are already effectively single-threaded: the background processor batches all forwards for the entire node in a single pass, so the deferral overhead doesn't add any meaningful bottleneck to an already serialized path.

For high-latency storage (e.g., remote databases), there is also currently no significant impact because channel manager persistence already blocks event handling in the background processor loop (test). If the loop were parallelized to process events concurrently with persistence, deferred writing would become comparatively slower since it moves the channel manager round trip into the critical path. However, deferred writing would also benefit from loop parallelization, and could be further optimized by batching the monitor and manager writes into a single round trip.

Alternative Designs Considered

Several approaches were explored to solve the monitor/manager persistence ordering problem:

1. Queue at KVStore level (#4310)

Introduces a QueuedKVStoreSync wrapper that queues all writes in memory, committing them in a single batch at chokepoints where data leaves the system (get_and_clear_pending_msg_events, get_and_clear_pending_events). This approach aims for true atomic multi-key writes but requires KVStore backends that support transactions (e.g., SQLite); filesystem backends cannot achieve full atomicity.

Trade-offs: Most general solution but requires changes to persistence boundaries and cannot fully close the desync gap with filesystem storage.

2. Queue at Persister level (#4317)

Updates MonitorUpdatingPersister to queue persist operations in memory, with actual writes happening on flush(). Adds flush() to the Persist trait and ChainMonitor.

Trade-offs: Only fixes the issue for MonitorUpdatingPersister; custom Persist implementations remain vulnerable to the race condition.

3. Queue at ChainMonitor wrapper level (#4345)

Introduces DeferredChainMonitor, a wrapper around ChainMonitor that implements the queue in a separate wrapper layer. All ChainMonitor traits (Listen, Confirm, EventsProvider, etc.) are passed through, allowing drop-in replacement.

Trade-offs: Requires re-implementing all trait pass-throughs on the wrapper. Keeps the core ChainMonitor unchanged but adds an external layer of indirection.


ldk-reviews-bot commented Jan 27, 2026

👋 Thanks for assigning @valentinewallace as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

@joostjager (Contributor Author)

Closing this PR as #4345 seems to be the easiest way to go

@joostjager joostjager closed this Jan 27, 2026
@joostjager joostjager reopened this Feb 9, 2026
@joostjager joostjager force-pushed the chain-mon-internal-deferred-writes branch from 1f5cef4 to 30d05ca Compare February 9, 2026 14:45
@joostjager (Contributor Author)

The single commit was split into three: extracting internal methods, adding a deferred toggle, and implementing the deferral and flushing logic. flush() now delegates to the extracted internal methods rather than reimplementing persist/insert logic inline. Deferred mode is opt-in via a deferred bool rather than always-on. Test infrastructure was expanded with deferred-mode helpers and dedicated unit tests.

@joostjager joostjager force-pushed the chain-mon-internal-deferred-writes branch 9 times, most recently from 2815bf9 to 3eb5644 Compare February 11, 2026 09:37

codecov bot commented Feb 11, 2026

Codecov Report

❌ Patch coverage is 93.54067% with 27 lines in your changes missing coverage. Please review.
✅ Project coverage is 85.91%. Comparing base (62c7575) to head (47070bd).

Files with missing lines Patch % Lines
lightning/src/chain/chainmonitor.rs 92.33% 20 Missing and 4 partials ⚠️
lightning/src/util/test_utils.rs 92.85% 3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4351      +/-   ##
==========================================
+ Coverage   85.87%   85.91%   +0.03%     
==========================================
  Files         157      157              
  Lines      103769   104076     +307     
  Branches   103769   104076     +307     
==========================================
+ Hits        89115    89418     +303     
- Misses      12158    12160       +2     
- Partials     2496     2498       +2     
Flag Coverage Δ
tests 85.91% <93.54%> (+0.03%) ⬆️


@joostjager joostjager force-pushed the chain-mon-internal-deferred-writes branch 10 times, most recently from f964466 to b140bf9 Compare February 12, 2026 08:22
@joostjager joostjager marked this pull request as ready for review February 12, 2026 10:56
@joostjager
Copy link
Contributor Author

This PR is now ready for review. LDK-node counterpart: lightningdevkit/ldk-node#782

Ok(ChannelMonitorUpdateStatus::Completed)
}

fn watch_channel_internal(
Collaborator:

lol was it really faster to ask claude to do this than to just move the code? :p

Contributor Author:

I don't like cursor keys anymore 😂

pub fn new(
chain_source: Option<C>, broadcaster: T, logger: L, feeest: F, persister: P,
_entropy_source: ES, _our_peerstorage_encryption_key: PeerStorageKey,
_entropy_source: ES, _our_peerstorage_encryption_key: PeerStorageKey, deferred: bool,
Collaborator:

This needs comprehensive documentation (copied on the new_async_beta constructor as well).

Contributor Author:

Added

deferred: bool,
/// Queued monitor operations awaiting flush. Unused when `deferred` is `false`.
///
/// # Locking order with `monitors`
Collaborator:

IMO lock order constraints should be written in code and checked, not in english :)

Contributor Author:

You mean remove as we did with the giant lock tree doc? Or is there some more explicit way to write it in code other than just locking? Have to say that the lock debugger indeed helped with getting it right.

Contributor Author:

Removed

}
}

/// Returns the number of pending monitor operations queued for later execution.
Collaborator:

Needs description of how to use it.

Contributor Author:

Added

self.pending_ops.lock().unwrap().len()
}

/// Flushes up to `count` pending monitor operations.
Collaborator:

Needs description of how to use it, including reference to pending_operation_count.

Contributor Author:

Added

/// remaining operations are left in the queue for the next call.
pub fn flush(&self, count: usize, logger: &L) -> Result<(), ()> {
if count > 0 {
log_trace!(logger, "Flushing up to {} monitor operations", count);
Collaborator:

More than trace imo, maybe info even?

Contributor Author:

Made it info, probably good to see that things are eventually persisted.

},
Err(()) => {
drop(queue);
// The monitor is consumed and cannot be retried.
Collaborator:

should be unreachable, right?

Contributor Author (Feb 13, 2026):

yes. want to panic?

in the original code, this did return an error to the caller of watch_channel

Contributor Author (Feb 13, 2026):

This is indeed already checked before queueing. Made it unreachable. Could also remove the flush return value.

},
ChannelMonitorUpdateStatus::InProgress => {},
ChannelMonitorUpdateStatus::UnrecoverableError => {
panic!("UnrecoverableError during monitor operation");
Collaborator:

This should be unreachable, right?

Contributor Author:

This one is indeed truly unreachable. Made it unreachable.

@ldk-reviews-bot

👋 The first review has been submitted!

Do you think this PR is ready for a second reviewer? If so, click here to assign a second reviewer.

@joostjager joostjager force-pushed the chain-mon-internal-deferred-writes branch from b140bf9 to 04051fb Compare February 13, 2026 19:47
@ldk-reviews-bot

🔔 1st Reminder

Hey @TheBlueMatt! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.


joostjager commented Feb 16, 2026

@TheBlueMatt I addressed your comments, but afterwards I realized that there may be another way to achieve the same goal: #4422

It's 25% faster.

Let me know what you think

Never mind, we can hold off completion signalling, but that doesn't prevent monitors from being on disk before chan manager is.

@joostjager (Contributor Author)

I think there was still a small race remaining. Pushed fix.


@TheBlueMatt (Collaborator) left a comment:

feel free to squash


let mut futures = Joiner::new();

// We capture pending_operation_count inside the persistence branch to
Collaborator:

Not entirely sure this needs to be explained lol.

Contributor Author:

I think so, because I hadn't thought of that tiny gap. I thought it's all wrapped in the persistence guard, but the needs_persist atomic is separate.

Contributor Author:

I did rephrase the comment a bit.

self.pending_ops.lock().unwrap().len()
}

/// Flushes up to `count` pending monitor operations that were queued while the
Collaborator:

nit: it's ordered

Suggested change
/// Flushes up to `count` pending monitor operations that were queued while the
/// Flushes the first `count` pending monitor operations that were queued while the

@joostjager joostjager force-pushed the chain-mon-internal-deferred-writes branch from 7ce7cf9 to 824e976 Compare February 19, 2026 14:52
@ldk-reviews-bot

✅ Added second reviewer: @valentinewallace

@TheBlueMatt TheBlueMatt removed their request for review February 19, 2026 16:08
@joostjager joostjager force-pushed the chain-mon-internal-deferred-writes branch from 824e976 to 47070bd Compare February 20, 2026 08:58
@joostjager (Contributor Author)

Rebased without changes


// Flush monitor operations that were pending before we persisted. New updates
// that arrived after are left for the next iteration.
chain_monitor.get_cm().flush(pending_monitor_writes, &logger);
Contributor:

I believe we currently won't flush if any of the futures error, e.g. persisting the liquidity manager, which may not be the desired behavior.

Contributor Author:

The flush not happening on error here isn't a channel safety concern, since no messages have gone out yet based on these deferred writes, and the channel manager being ahead of monitors doesn't lead to force closes.

I'd prefer to avoid adding complexity for inspecting individual futures after a failure just to flush when the channel manager persistence future succeeded.

let mut queue = self.pending_ops.lock().unwrap();
let op = match queue.pop_front() {
Some(op) => op,
None => return,
Contributor:

debug_assert!(false) here?

Contributor Author:

added

let update_id = monitor.get_latest_update_id();
// Atomically check for duplicates in both the pending queue and the
// flushed monitor set. Lock order: `pending_ops` before `monitors`
// (see `pending_ops` field doc).
Contributor:

I think this is outdated

Contributor Author:

Indeed, left over from previous documentation of the lock order. Removed.

/// A pending operation queued for later execution when `ChainMonitor` is in deferred mode.
enum PendingMonitorOp<ChannelSigner: EcdsaChannelSigner> {
/// A new monitor to insert and persist.
NewMonitor { channel_id: ChannelId, monitor: ChannelMonitor<ChannelSigner>, update_id: u64 },
Contributor:

Can't we omit the update_id field and do this?

diff --git a/lightning/src/chain/chainmonitor.rs b/lightning/src/chain/chainmonitor.rs
index 6da88fe01..fc0e340be 100644
--- a/lightning/src/chain/chainmonitor.rs
+++ b/lightning/src/chain/chainmonitor.rs
@@ -70,7 +70,7 @@ use core::sync::atomic::{AtomicUsize, Ordering};
 /// A pending operation queued for later execution when `ChainMonitor` is in deferred mode.
 enum PendingMonitorOp<ChannelSigner: EcdsaChannelSigner> {
 	/// A new monitor to insert and persist.
-	NewMonitor { channel_id: ChannelId, monitor: ChannelMonitor<ChannelSigner>, update_id: u64 },
+	NewMonitor { channel_id: ChannelId, monitor: ChannelMonitor<ChannelSigner> },
 	/// An update to apply and persist.
 	Update { channel_id: ChannelId, update: ChannelMonitorUpdate },
 }
@@ -1282,8 +1282,9 @@ where
 			};
 
 			let (channel_id, update_id, status) = match op {
-				PendingMonitorOp::NewMonitor { channel_id, monitor, update_id } => {
+				PendingMonitorOp::NewMonitor { channel_id, monitor } => {
 					let logger = WithChannelMonitor::from(logger, &monitor, None);
+					let update_id = monitor.get_latest_update_id();
 					log_trace!(logger, "Flushing new monitor");
 					// Hold `pending_ops` across the internal call so that
 					// `watch_channel` (which checks `monitors` + `pending_ops`
@@ -1546,7 +1547,6 @@ where
 			return self.watch_channel_internal(channel_id, monitor);
 		}
 
-		let update_id = monitor.get_latest_update_id();
 		// Atomically check for duplicates in both the pending queue and the
 		// flushed monitor set. Lock order: `pending_ops` before `monitors`
 		// (see `pending_ops` field doc).
@@ -1562,7 +1562,7 @@ where
 		if already_pending {
 			return Err(());
 		}
-		pending_ops.push_back(PendingMonitorOp::NewMonitor { channel_id, monitor, update_id });
+		pending_ops.push_back(PendingMonitorOp::NewMonitor { channel_id, monitor });
 		Ok(ChannelMonitorUpdateStatus::InProgress)
 	}
 

Contributor Author:

Good one indeed, done

if already_pending {
return Err(());
}
pending_ops.push_back(PendingMonitorOp::NewMonitor { channel_id, monitor, update_id });
Contributor:

Sometimes when we push to queues we have a helper method that checks if the queue's length is getting too long and take some action, like warn or something. Does that seem useful in this case?

Contributor Author:

I don't think it is very useful. The queue is self-limiting: when deferred mode returns InProgress, MONITOR_UPDATE_IN_PROGRESS is set on the channel, which blocks further updates.

},
}
},
PendingMonitorOp::Update { channel_id, update } => {
Contributor:

Just wanted to note that within the monitor we also have a pending_monitor_updates queue, so we have two layers of pending update queues now... Probably fine but seems a bit duplicative at first glance.

Contributor Author:

These two queues serve different purposes: pending_ops holds operations that haven't been executed at all yet (deferred until after the ChannelManager is persisted), while pending_monitor_updates in the MonitorHolder tracks updates that have been applied to the in-memory monitor but whose async persistence hasn't completed yet. They operate at different stages of the pipeline: "not yet started" vs "started but not yet confirmed persisted."

match status {
ChannelMonitorUpdateStatus::Completed => {
let logger = WithContext::from(logger, None, Some(channel_id), None);
if let Err(e) = self.channel_monitor_updated(channel_id, update_id) {
Contributor:

Relatedly, the Persist docs state that channel_monitor_updated should be called once a full channelmonitor has been persisted, which I think is inaccurate (and later docs seem to contradict that).

Contributor Author:

Updated the doc to clarify this. Each pending update ID must be individually marked complete via channel_monitor_updated, since the implementation uses retain to remove only the exact matching ID.

match status {
ChannelMonitorUpdateStatus::Completed => {
let logger = WithContext::from(logger, None, Some(channel_id), None);
if let Err(e) = self.channel_monitor_updated(channel_id, update_id) {
Contributor:

I'm confused, the Persist docs say it's only necessary to call channel_monitor_updated when ::InProgress was previously returned for an update. Have I told you about my theory we should remove sync persistence support? Lol

Contributor Author:

The deferred watch_channel/update_channel always returns InProgress to the ChannelManager, so from its perspective there's a pending in-progress update. When flush() runs and the underlying persister returns Completed, we need to call channel_monitor_updated to signal that the previously-returned InProgress is now done, clearing MONITOR_UPDATE_IN_PROGRESS and releasing held messages.

Contributor Author:

Removing sync persistence support: I'd be all in favor of that, but I have a suspicion it won't land this PR any faster ;)

}

/// Creates a minimal `ChannelMonitor<TestChannelSigner>` for the given `channel_id`.
fn dummy_monitor(
Contributor:

This boilerplate already exists in channelmonitor.rs L6970

Contributor Author:

Added commit to extract helper

};

match status {
ChannelMonitorUpdateStatus::Completed => {
Contributor:

I seem to recall we started disallowing flipping between returning Completed and InProgress at some point. I'm not sure if that's still correct, but it may be worth looking into.

Contributor Author:

The flipping concern doesn't apply here. The InProgress is returned by the deferred layer in ChainMonitor, not by the Persist implementation itself. The Persist impl can consistently return Completed; the deferred layer just wraps it with an initial InProgress to delay processing until after the ChannelManager is persisted. When flush() runs and the persister returns Completed, we call channel_monitor_updated to resolve the InProgress that the deferred layer returned earlier.

@joostjager joostjager force-pushed the chain-mon-internal-deferred-writes branch from 47070bd to 1ecd046 Compare February 25, 2026 08:22
joostjager and others added 4 commits February 25, 2026 09:29
Extract the ChannelMonitor construction boilerplate from
channelmonitor test functions into a reusable dummy_monitor
helper in test_utils.rs, generic over the signer type.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pure refactor: move the bodies of Watch::watch_channel and
Watch::update_channel into methods on ChainMonitor, and have
the Watch trait methods delegate to them. This prepares for adding
deferred mode where the Watch methods will conditionally queue
operations instead of executing them immediately.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add a `deferred` parameter to `ChainMonitor::new` and
`ChainMonitor::new_async_beta`. When set to true, the Watch trait
methods (watch_channel and update_channel) will unimplemented!() for
now. All existing callers pass false to preserve current behavior.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the unimplemented!() stubs with a full deferred write
implementation. When ChainMonitor has deferred=true, Watch trait
operations queue PendingMonitorOp entries instead of executing
immediately. A new flush() method drains the queue and forwards
operations to the internal watch/update methods, calling
channel_monitor_updated on Completed status.

The BackgroundProcessor is updated to capture pending_operation_count
before persisting the ChannelManager, then flush that many writes
afterward - ensuring monitor writes happen in the correct order
relative to manager persistence.

Key changes:
- Add PendingMonitorOp enum and pending_ops queue to ChainMonitor
- Implement flush() and pending_operation_count() public methods
- Integrate flush calls in BackgroundProcessor (both sync and async)
- Add TestChainMonitor::new_deferred, flush helpers, and auto-flush
  in release_pending_monitor_events for test compatibility
- Add create_node_cfgs_deferred for deferred-mode test networks
- Add unit tests for queue/flush mechanics and full payment flow

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@joostjager joostjager force-pushed the chain-mon-internal-deferred-writes branch from 1ecd046 to c6b30c9 Compare February 25, 2026 08:29

Labels

weekly goal Someone wants to land this this week
