
Repair task failed after an hour with zero token nodes in multi dc configuration #4078

Open
aleksbykov opened this issue Oct 23, 2024 · 3 comments

@aleksbykov

Packages

Scylla version: 6.2.0-20241013.b8a9fd4e49e8 with build-id a61f658b0408ba10663812f7a3b4d6aea7714fac

Kernel Version: 6.8.0-1016-aws
Scylla Manager Agent 3.3.3-0.20240912.924034e0d

Issue description

The cluster is configured with zero-token nodes in a multi-DC setup. DCs: "eu-west-1" with 3 data nodes; "eu-west-2" with 3 data nodes and 1 zero-token node; "eu-north-1" with 1 zero-token node.

The nemesis 'disrupt_mgmt_corrupt_then_repair' failed. This nemesis stops Scylla, removes several sstables, starts Scylla, and then triggers a repair from Scylla Manager. The nemesis chose node4 (a data node) as the target node: it removed sstables while Scylla was stopped, and after Scylla was started it triggered a repair from Scylla Manager.
The repair task failed after an hour:

sdcm.mgmt.common.ScyllaManagerError: Task: repair/362a4112-02b8-47f3-ae49-49c47600de51 final status is: ERROR.
Task progress string: Run:		a1edb893-8c11-11ef-bb82-0a7de1e926c3
Status:		ERROR
Cause:		see more errors in logs: master 10.4.2.208 keyspace keyspace1 table standard1 command 6: status FAILED
Start time:	16 Oct 24 22:54:43 UTC
End time:	17 Oct 24 00:06:52 UTC
Duration:	1h12m9s
Progress:	0%/99%
Intensity:	1
Parallel:	0
Datacenters:	
  - eu-northscylla_node_north
  - eu-west-2scylla_node_west
  - eu-westscylla_node_west

╭───────────────────────────────┬────────────────────────────────┬──────────┬──────────╮
│ Keyspace                      │                          Table │ Progress │ Duration │
├───────────────────────────────┼────────────────────────────────┼──────────┼──────────┤
│ keyspace1                     │                      standard1 │ 0%/100%  │ 1h11m50s │
├───────────────────────────────┼────────────────────────────────┼──────────┼──────────┤
│ system_distributed_everywhere │ cdc_generation_descriptions_v2 │ 100%     │ 0s       │
├───────────────────────────────┼────────────────────────────────┼──────────┼──────────┤
│ system_distributed            │      cdc_generation_timestamps │ 100%     │ 0s       │
│ system_distributed            │    cdc_streams_descriptions_v2 │ 100%     │ 0s       │
│ system_distributed            │                 service_levels │ 100%     │ 0s       │
│ system_distributed            │              view_build_status │ 100%     │ 0s       │
╰───────────────────────────────┴────────────────────────────────┴──────────┴──────────╯

The following error was found in the Scylla Manager log in "monitor-set-2bc4de73.tar.gz":

Oct 17 00:06:41 multi-dc-rackaware-with-znode-dc-fe-monitor-node-2bc4de73-1 scylla-manager[7935]: {"L":"ERROR","T":"2024-10-17T00:06:41.197Z","N":"repair.keyspace1.standard1","M":"Repair failed","error":"master 10.4.2.208 keyspace keyspace1 table standard1 command 6: status FAILED","_trace_id":"MQddNqAdRnuC207sElnpJg","errorStack":"github.com/scylladb/scylla-manager/v3/pkg/service/repair.(*worker).runRepair.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/worker.go:58\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*worker).runRepair\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/worker.go:100\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*worker).HandleJob\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/worker.go:30\ngithub.com/scylladb/scylla-manager/v3/pkg/util/workerpool.(*Pool[...]).spawn.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/[email protected]/workerpool/pool.go:99\nruntime.goexit\n\truntime/asm_amd64.s:1695\n","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/[email protected]/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/[email 
protected]/logger.go:84\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*tableGenerator).processResult\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/generator.go:334\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*tableGenerator).Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/generator.go:219\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*generator).Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/generator.go:148\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.(*Service).Repair\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/service.go:304\ngithub.com/scylladb/scylla-manager/v3/pkg/service/repair.Runner.Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/repair/runner.go:26\ngithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler.PolicyRunner.Run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler/policy.go:32\ngithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler.(*Service).run\n\tgithub.com/scylladb/scylla-manager/v3/pkg/service/scheduler/service.go:448\ngithub.com/scylladb/scylla-manager/v3/pkg/scheduler.(*Scheduler[...]).asyncRun.func1\n\tgithub.com/scylladb/scylla-manager/v3/pkg/scheduler/scheduler.go:401"}

This could be related to the zero-token nodes in the configuration.

Impact

The repair process triggered from Scylla Manager failed.

Installation details

Cluster size: 6 nodes (i4i.4xlarge)

Scylla Nodes used in this run:

  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-1 (52.17.239.72 | 10.4.1.1) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2 (52.30.16.60 | 10.4.2.208) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-3 (34.244.15.201 | 10.4.2.21) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-4 (35.179.142.180 | 10.3.0.73) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-5 (35.177.188.187 | 10.3.1.136) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-6 (35.177.134.180 | 10.3.1.62) (shards: 14)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-7 (35.177.11.239 | 10.3.1.229) (shards: 4)
  • multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-8 (13.61.14.77 | 10.0.0.60) (shards: 4)

OS / Image: ami-01f5cd2cb7c8dbd6f ami-0a32db7034cf41d95 ami-0b2b4e9fba26c7618 (aws: undefined_region)

Test: longevity-multi-dc-rack-aware-zero-token-dc
Test id: 2bc4de73-4328-4444-b601-6bd88060fa4d
Test name: scylla-staging/abykov/longevity-multi-dc-rack-aware-zero-token-dc
Test method: longevity_test.LongevityTest.test_custom_time
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor 2bc4de73-4328-4444-b601-6bd88060fa4d
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs 2bc4de73-4328-4444-b601-6bd88060fa4d

Logs:

Jenkins job URL
Argus

@aleksbykov
Author

@kbr-scylla @patjed41, this bug is not directly related to Scylla (at least I didn't find any issue in the Scylla logs), but the Scylla Manager repair task failed, and it looks like it could be related to zero-token nodes.

@kbr-scylla

It could be that support for zero-token nodes needs to be explicitly implemented in Scylla Manager. Maybe it assumes that every node has tokens.
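A hypothetical illustration of that assumption (this is not Scylla Manager code; the function and the node data below are made up): if repair participants are derived from token ownership, a zero-token node simply never appears in any replica set, while code that assumes every node owns tokens may behave unexpectedly when it meets one.

```python
# Hypothetical sketch -- not Scylla Manager code. A zero-token node
# owns no token ranges, so it never belongs to a repair replica set.
def repair_participants(node_tokens):
    """Keep only nodes that actually own tokens (i.e. token ranges)."""
    return sorted(ip for ip, tokens in node_tokens.items() if tokens)

nodes = {
    "10.4.2.208": [-4611686018427387905, 1152921504606846975],  # data node
    "10.3.1.229": [],  # zero-token node: owns no ranges
}
print(repair_participants(nodes))  # ['10.4.2.208']
```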

@Michal-Leszczynski Michal-Leszczynski self-assigned this Oct 24, 2024
@Michal-Leszczynski
Collaborator

From SM logs we can see that it tries to repair 16 token ranges owned by a 6-node replica set:

Oct 16 22:54:50 multi-dc-rackaware-with-znode-dc-fe-monitor-node-2bc4de73-1 scylla-manager[7935]: {"L":"INFO","T":"2024-10-16T22:54:50.498Z","N":"repair.worker 2","M":"Repairing","keyspace":"keyspace1","table":"standard1","master":"10.4.2.208","hosts":["10.3.0.73","10.3.1.136","10.3.1.62","10.4.1.1","10.4.2.208","10.4.2.21"],"ranges":16,"intensity":1,"job_id":6,"_trace_id":"MQddNqAdRnuC207sElnpJg"}
...
2024-10-16T22:54:50.766+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: starting user-requested repair for keyspace keyspace1, repair id 6, options {"ranges_parallelism": "1", "columnFamilies": "standard1", "ranges": "-9223372036854775808:-8070450532247928833,-8070450532247928833:-6917529027641081857,-6917529027641081857:-5764607523034234881,-5764607523034234881:-4611686018427387905,-4611686018427387905:-3458764513820540929,-3458764513820540929:-2305843009213693953,-2305843009213693953:-1152921504606846977,-1152921504606846977:-1,-1:1152921504606846975,1152921504606846975:2305843009213693951,2305843009213693951:3458764513820540927,3458764513820540927:4611686018427387903,4611686018427387903:5764607523034234879,5764607523034234879:6917529027641081855,6917529027641081855:8070450532247928831,8070450532247928831:9223372036854775807", "hosts": "10.3.0.73,10.3.1.136,10.3.1.62,10.4.1.1,10.4.2.208,10.4.2.21"}

The replica set (["10.3.0.73","10.3.1.136","10.3.1.62","10.4.1.1","10.4.2.208","10.4.2.21"]) does not contain any zero-token nodes.
SM gets the job ID (6) from Scylla, but then it times out 3 times (with a 30 min timeout each time) when getting the repair status.

From my understanding, SM is behaving correctly here; the problem is that Scylla hangs on the repair status API call.
SM uses POST "/storage_service/repair_async/{keyspace}" to schedule a repair, and GET "/storage_service/repair_status" to synchronously wait for its status.
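The schedule-then-poll flow can be sketched with a small client. This is a sketch under assumptions: the API base address, port 10000, and the polling interval are mine, the endpoint paths are the two named above, and the status strings are assumed to be Scylla's usual "RUNNING"/"SUCCESSFUL"/"FAILED".

```python
# Sketch of the schedule-then-poll repair flow described above.
# Assumptions: Scylla REST API on port 10000 and the usual
# "RUNNING"/"SUCCESSFUL"/"FAILED" status strings.
import json
import time
import urllib.parse
import urllib.request

def repair_async_url(base, keyspace, options):
    """POST target that schedules a repair and returns a job id."""
    query = urllib.parse.urlencode(options)
    return f"{base}/storage_service/repair_async/{keyspace}?{query}"

def repair_status_url(base, job_id):
    """GET target that blocks until the repair status is known."""
    return f"{base}/storage_service/repair_status?" + urllib.parse.urlencode({"id": job_id})

def run_repair(base, keyspace, options, poll_seconds=10):
    """Schedule a repair, then poll until it leaves the RUNNING state."""
    req = urllib.request.Request(repair_async_url(base, keyspace, options), method="POST")
    with urllib.request.urlopen(req) as resp:
        job_id = int(json.load(resp))
    while True:
        with urllib.request.urlopen(repair_status_url(base, job_id)) as resp:
            status = json.load(resp)
        if status != "RUNNING":
            return job_id, status
        time.sleep(poll_seconds)
```

If Scylla never answers the status call (as in this issue), the GET simply hangs until the client-side timeout fires, which matches the 30-minute 502s seen in the agent log.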

There are some Scylla error logs on the repair master (node2):

2024-10-16T23:30:47.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2  !WARNING | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: 1 out of 1 ranges failed, keyspace=keyspace1, tables=["standard1"], repair_reason=repair, nodes_down_during_repair={}, aborted_by_user=false, failed_because=seastar::rpc::remote_verb_error (Compaction for keyspace1/standard1 was stopped due to: user-triggered operation)
More Scylla logs from repair master (node2)
2024-10-16T23:06:39.181+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard 12:strm] compaction - [Split keyspace1.standard1 4cac6b10-8c13-11ef-9926-494caaf719a3] Splitting [/var/lib/scylla/data/keyspace1/standard1-115fa1608c1011efa02683c41940e401/me-3gkf_1rul_1wsuo2buctnb7itaqr-big-Data.db:level=0:origin=repair]
2024-10-16T23:07:04.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  2:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: stats: repair_reason=repair, keyspace=keyspace1, tables=["standard1"], ranges_nr=1, round_nr=226, round_nr_fast_path_already_synced=1, round_nr_fast_path_same_combined_hashes=0, round_nr_slow_path=225, rpc_call_nr=4507, tx_hashes_nr=151612, rx_hashes_nr=6337143, duration=512.1361 seconds, tx_row_nr=955827, rx_row_nr=151612, tx_row_bytes=5486446980, rx_row_bytes=1753544392, row_from_disk_bytes={10.3.0.73: 6301934520, 10.3.1.136: 7516926060, 10.3.1.62: 7516926060, 10.4.1.1: 7516776820, 10.4.2.208: 7500934420, 10.4.2.21: 7516926060}, row_from_disk_nr={10.3.0.73: 1097898, 10.3.1.136: 1309569, 10.3.1.62: 1309569, 10.4.1.1: 1309543, 10.4.2.208: 1306783, 10.4.2.21: 1309569}, row_from_disk_bytes_per_sec={10.3.0.73: 11.735148, 10.3.1.136: 13.997644, 10.3.1.62: 13.997644, 10.4.1.1: 13.997367, 10.4.2.208: 13.967866, 10.4.2.21: 13.997644} MiB/s, row_from_disk_rows_per_sec={10.3.0.73: 2143.7622, 10.3.1.136: 2557.0723, 10.3.1.62: 2557.0723, 10.4.1.1: 2557.0215, 10.4.2.208: 2551.6323, 10.4.2.21: 2557.0723} Rows/s, tx_row_nr_peer={10.3.0.73: 360497, 10.3.1.136: 148826, 10.3.1.62: 148826, 10.4.1.1: 148852, 10.4.2.21: 148826}, rx_row_nr_peer={10.3.0.73: 120037, 10.3.1.136: 1232, 10.3.1.62: 669, 10.4.1.1: 28028, 10.4.2.21: 1646}
2024-10-16T23:07:04.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  2:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: completed successfully, keyspace=keyspace1
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}, seastar::future<void>::then_wrapped_nrvo<void, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}>(seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
2024-10-16T23:07:24.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard 10:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: stats: repair_reason=repair, keyspace=keyspace1, tables=["standard1"], ranges_nr=1, round_nr=225, round_nr_fast_path_already_synced=1, round_nr_fast_path_same_combined_hashes=0, round_nr_slow_path=224, rpc_call_nr=4505, tx_hashes_nr=206217, rx_hashes_nr=6528130, duration=593.46423 seconds, tx_row_nr=1035375, rx_row_nr=206217, tx_row_bytes=5943052500, rx_row_bytes=2385105822, row_from_disk_bytes={10.3.0.73: 7463469440, 10.3.1.136: 7516466860, 10.3.1.62: 7516466860, 10.4.1.1: 7464847040, 10.4.2.208: 7497932400, 10.4.2.21: 7503787200}, row_from_disk_nr={10.3.0.73: 1300256, 10.3.1.136: 1309489, 10.3.1.62: 1309489, 10.4.1.1: 1300496, 10.4.2.208: 1306260, 10.4.2.21: 1307280}, row_from_disk_bytes_per_sec={10.3.0.73: 11.99351, 10.3.1.136: 12.078674, 10.3.1.62: 12.078674, 10.4.1.1: 11.995724, 10.4.2.208: 12.04889, 10.4.2.21: 12.058299} MiB/s, row_from_disk_rows_per_sec={10.3.0.73: 2190.9592, 10.3.1.136: 2206.517, 10.3.1.62: 2206.517, 10.4.1.1: 2191.3638, 10.4.2.208: 2201.0762, 10.4.2.21: 2202.795} Rows/s, tx_row_nr_peer={10.3.0.73: 212221, 10.3.1.136: 202988, 10.3.1.62: 202988, 10.4.1.1: 211981, 10.4.2.21: 205197}, rx_row_nr_peer={10.3.0.73: 44968, 10.3.1.136: 2350, 10.3.1.62: 2124, 10.4.1.1: 66348, 10.4.2.21: 90427}
2024-10-16T23:07:24.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard 10:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: completed successfully, keyspace=keyspace1
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}, seastar::future<void>::then_wrapped_nrvo<void, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}>(seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
2024-10-16T23:07:25.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  8:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: stats: repair_reason=repair, keyspace=keyspace1, tables=["standard1"], ranges_nr=1, round_nr=226, round_nr_fast_path_already_synced=1, round_nr_fast_path_same_combined_hashes=0, round_nr_slow_path=225, rpc_call_nr=4304, tx_hashes_nr=160312, rx_hashes_nr=5871842, duration=606.12695 seconds, tx_row_nr=1458940, rx_row_nr=160312, tx_row_bytes=8374315600, rx_row_bytes=1854168592, row_from_disk_bytes={10.3.0.73: 3729536300, 10.3.1.136: 7513889600, 10.3.1.62: 7493403540, 10.4.1.1: 7456099280, 10.4.2.208: 7494264540, 10.4.2.21: 7505032780}, row_from_disk_nr={10.3.0.73: 649745, 10.3.1.136: 1309040, 10.3.1.62: 1305471, 10.4.1.1: 1298972, 10.4.2.208: 1305621, 10.4.2.21: 1307497}, row_from_disk_bytes_per_sec={10.3.0.73: 5.8680162, 10.3.1.136: 11.822282, 10.3.1.62: 11.79005, 10.4.1.1: 11.731355, 10.4.2.208: 11.791404, 10.4.2.21: 11.808346} MiB/s, row_from_disk_rows_per_sec={10.3.0.73: 1071.9619, 10.3.1.136: 2159.6797, 10.3.1.62: 2153.7913, 10.4.1.1: 2143.069, 10.4.2.208: 2154.0388, 10.4.2.21: 2157.1338} Rows/s, tx_row_nr_peer={10.3.0.73: 816188, 10.3.1.136: 156893, 10.3.1.62: 160462, 10.4.1.1: 166961, 10.4.2.21: 158436}, rx_row_nr_peer={10.3.0.73: 94292, 10.3.1.136: 14, 10.3.1.62: 1031, 10.4.1.1: 42269, 10.4.2.21: 22706}
2024-10-16T23:07:25.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  8:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: completed successfully, keyspace=keyspace1
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}, seastar::future<void>::then_wrapped_nrvo<void, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}>(seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
2024-10-16T23:07:39.455+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard 13:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: stats: repair_reason=repair, keyspace=keyspace1, tables=["standard1"], ranges_nr=1, round_nr=225, round_nr_fast_path_already_synced=1, round_nr_fast_path_same_combined_hashes=0, round_nr_slow_path=224, rpc_call_nr=4400, tx_hashes_nr=163415, rx_hashes_nr=6188615, duration=570.0171 seconds, tx_row_nr=1163765, rx_row_nr=163415, tx_row_bytes=6680011100, rx_row_bytes=1890057890, row_from_disk_bytes={10.3.0.73: 5474748860, 10.3.1.136: 7512787520, 10.3.1.62: 7512787520, 10.4.1.1: 7512649760, 10.4.2.208: 7501244380, 10.4.2.21: 7503247640}, row_from_disk_nr={10.3.0.73: 953789, 10.3.1.136: 1308848, 10.3.1.62: 1308848, 10.4.1.1: 1308824, 10.4.2.208: 1306837, 10.4.2.21: 1307186}, row_from_disk_bytes_per_sec={10.3.0.73: 9.159598, 10.3.1.136: 12.569365, 10.3.1.62: 12.569365, 10.4.1.1: 12.569134, 10.4.2.208: 12.550052, 10.4.2.21: 12.553404} MiB/s, row_from_disk_rows_per_sec={10.3.0.73: 1673.2639, 10.3.1.136: 2296.1558, 10.3.1.62: 2296.1558, 10.4.1.1: 2296.1135, 10.4.2.208: 2292.6277, 10.4.2.21: 2293.24} Rows/s, tx_row_nr_peer={10.3.0.73: 516463, 10.3.1.136: 161404, 10.3.1.62: 161404, 10.4.1.1: 161428, 10.4.2.21: 163066}, rx_row_nr_peer={10.3.0.73: 82604, 10.3.1.136: 165, 10.3.1.62: 3694, 10.4.1.1: 16932, 10.4.2.21: 60020}
2024-10-16T23:07:39.455+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard 13:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: completed successfully, keyspace=keyspace1
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}, seastar::future<void>::then_wrapped_nrvo<void, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}>(seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
2024-10-16T23:08:39.766+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard 12:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: stats: repair_reason=repair, keyspace=keyspace1, tables=["standard1"], ranges_nr=1, round_nr=226, round_nr_fast_path_already_synced=1, round_nr_fast_path_same_combined_hashes=0, round_nr_slow_path=225, rpc_call_nr=4385, tx_hashes_nr=112703, rx_hashes_nr=6263965, duration=571.65686 seconds, tx_row_nr=836060, rx_row_nr=112703, tx_row_bytes=4798984400, rx_row_bytes=1303522898, row_from_disk_bytes={10.3.0.73: 5928208860, 10.3.1.136: 7518579180, 10.3.1.62: 7506708860, 10.4.1.1: 7481206040, 10.4.2.208: 7502621980, 10.4.2.21: 7513998660}, row_from_disk_nr={10.3.0.73: 1032789, 10.3.1.136: 1309857, 10.3.1.62: 1307789, 10.4.1.1: 1303346, 10.4.2.208: 1307077, 10.4.2.21: 1309059}, row_from_disk_bytes_per_sec={10.3.0.73: 9.889815, 10.3.1.136: 12.542972, 10.3.1.62: 12.5231695, 10.4.1.1: 12.480624, 10.4.2.208: 12.516352, 10.4.2.21: 12.535331} MiB/s, row_from_disk_rows_per_sec={10.3.0.73: 1806.6589, 10.3.1.136: 2291.3342, 10.3.1.62: 2287.7168, 10.4.1.1: 2279.9446, 10.4.2.208: 2286.4712, 10.4.2.21: 2289.9385} Rows/s, tx_row_nr_peer={10.3.0.73: 386991, 10.3.1.136: 109923, 10.3.1.62: 111991, 10.4.1.1: 116434, 10.4.2.21: 110721}, rx_row_nr_peer={10.3.0.73: 88678, 10.3.1.136: 884, 10.3.1.62: 1346, 10.4.1.1: 15205, 10.4.2.21: 6590}
2024-10-16T23:08:39.766+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard 12:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: completed successfully, keyspace=keyspace1
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}, seastar::future<void>::then_wrapped_nrvo<void, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}>(seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
2024-10-16T23:13:04.766+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] sstable - Rebuilding bloom filter /var/lib/scylla/data/keyspace1/standard1-115fa1608c1011efa02683c41940e401/me-3gkf_1rqv_43w2o2ixsplnfgkper-big-Filter.db: resizing bitset from 261648 bytes to 332040 bytes. sstable origin: repair
2024-10-16T23:13:52.516+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] sstable - Rebuilding bloom filter /var/lib/scylla/data/keyspace1/standard1-115fa1608c1011efa02683c41940e401/me-3gkf_1rrb_330ww2k1wo5d48hh9f-big-Filter.db: resizing bitset from 247368 bytes to 201336 bytes. sstable origin: repair
2024-10-16T23:15:12.883+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] compaction - [Split keyspace1.standard1 7ef27c30-8c14-11ef-a825-494aaaf719a3] Splitting [/var/lib/scylla/data/keyspace1/standard1-115fa1608c1011efa02683c41940e401/me-3gkf_1rrb_330ww2k1wo5d48hh9f-big-Data.db:level=0:origin=repair]
2024-10-16T23:15:30.766+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] compaction - [Split keyspace1.standard1 8957ec50-8c14-11ef-a61c-4946aaf719a3] Splitting [/var/lib/scylla/data/keyspace1/standard1-115fa1608c1011efa02683c41940e401/me-3gkf_1rqv_43w2o2ixsplnfgkper-big-Data.db:level=0:origin=repair]
2024-10-16T23:15:37.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2  !WARNING | scylla[5553]:  [shard  1:strm] repair - repair_writer: keyspace=keyspace1, table=standard1, multishard_writer failed: sstables::compaction_stopped_exception (Compaction for keyspace1/standard1 was stopped due to: user-triggered operation)
2024-10-16T23:15:37.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2  !WARNING | scylla[5553]:  [shard  1:strm] repair - repair_writer: keyspace=keyspace1, table=standard1, wait_for_writer_done failed: sstables::compaction_stopped_exception (Compaction for keyspace1/standard1 was stopped due to: user-triggered operation)
2024-10-16T23:24:50.516+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla-manager-agent[5613]: {"L":"ERROR","T":"2024-10-16T23:24:50.499Z","N":"http","M":"GET /storage_service/repair_status?id=6","from":"10.4.2.67:57388","status":502,"bytes":0,"duration":"1800000ms","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/[email protected]/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/[email protected]/logger.go:84\nmain.(*logEntry).Write\n\tgithub.com/scylladb/scylla-manager/v3/pkg/cmd/agent/log.go:53\nmain.newRouter.RequestLogger.RequestLogger.func5.1.1\n\tgithub.com/go-chi/chi/[email protected]/middleware/logger.go:52\nmain.newRouter.RequestLogger.RequestLogger.func5.1\n\tgithub.com/go-chi/chi/[email protected]/middleware/logger.go:56\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2171\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/[email protected]/mux.go:90\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3142\nnet/http.(*conn).serve\n\tnet/http/server.go:2044"}
2024-10-16T23:30:47.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: stats: repair_reason=repair, keyspace=keyspace1, tables=["standard1"], ranges_nr=1, round_nr=0, round_nr_fast_path_already_synced=0, round_nr_fast_path_same_combined_hashes=0, round_nr_slow_path=0, rpc_call_nr=0, tx_hashes_nr=0, rx_hashes_nr=0, duration=2033.4792 seconds, tx_row_nr=0, rx_row_nr=0, tx_row_bytes=0, rx_row_bytes=0, row_from_disk_bytes={}, row_from_disk_nr={}, row_from_disk_bytes_per_sec={} MiB/s, row_from_disk_rows_per_sec={} Rows/s, tx_row_nr_peer={}, rx_row_nr_peer={}
2024-10-16T23:30:47.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2  !WARNING | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: 1 out of 1 ranges failed, keyspace=keyspace1, tables=["standard1"], repair_reason=repair, nodes_down_during_repair={}, aborted_by_user=false, failed_because=seastar::rpc::remote_verb_error (Compaction for keyspace1/standard1 was stopped due to: user-triggered operation)
2024-10-16T23:30:47.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2  !WARNING | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Repair tablet for table=keyspace1.standard1 range=(minimum token,-8070450532247928833] status=failed: std::runtime_error (repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: 1 out of 1 ranges failed, keyspace=keyspace1, tables=["standard1"], repair_reason=repair, nodes_down_during_repair={}, aborted_by_user=false, failed_because=seastar::rpc::remote_verb_error (Compaction for keyspace1/standard1 was stopped due to: user-triggered operation))
2024-10-16T23:30:47.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb] Repair 15 out of 16 tablets: table=keyspace1.standard1 range=(6917529027641081855,8070450532247928831] replicas=[ebe5bf99-7a90-4080-ac56-89d120e4fda9:0, 4e2f0651-ce0d-48fe-abc9-042ccac10114:0, 7e40c144-c328-4d8e-b2b5-20d30c40428c:0, 9f9dd207-1fa7-4217-b715-ab7b26b8b3aa:0, 95a46333-001f-4f53-82e6-3a0b470697b5:0, ac6032a3-eb12-4136-8377-1b32e8f2fa54:0]
2024-10-16T23:30:47.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.4.1.1, participants=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62], started
2024-10-16T23:30:47.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.4.2.21, participants=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62], started
2024-10-16T23:30:47.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.3.1.136, participants=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62], started
2024-10-16T23:30:47.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.3.0.73, participants=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62], started
2024-10-16T23:30:47.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.4.2.208, participants=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62], started
2024-10-16T23:30:47.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.3.1.229, participants=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62], started
2024-10-16T23:30:47.017+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.3.1.62, participants=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62], started
2024-10-16T23:30:47.017+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.0.0.60, participants=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62], started
2024-10-16T23:30:47.017+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Started to process repair_flush_hints_batchlog_request from node=10.4.2.208, target_nodes=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62], hints_timeout=300s, batchlog_timeout=300s
2024-10-16T23:30:47.017+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Started to flush hints for repair_flush_hints_batchlog_request from node=10.4.2.208, target_nodes=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62]
2024-10-16T23:30:47.017+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Started to flush batchlog for repair_flush_hints_batchlog_request from node=10.4.2.208, target_nodes=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62]
2024-10-16T23:30:47.017+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Finished to flush hints for repair_flush_hints_batchlog_request from node=10.4.2.208, target_hosts=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62]
2024-10-16T23:30:49.516+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Finished to flush batchlog for repair_flush_hints_batchlog_request from node=10.4.2.208, target_nodes=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62]
2024-10-16T23:30:49.516+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Finished to process repair_flush_hints_batchlog_request from node=10.4.2.208, target_nodes=[10.4.2.208, 10.4.1.1, 10.4.2.21, 10.3.1.136, 10.3.0.73, 10.3.1.62]
2024-10-16T23:31:01.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2  !WARNING | scylla[5553]:  [shard  1:strm] repair - Failed auto-stopping Row Level Repair (Master): sstables::compaction_stopped_exception (Compaction for keyspace1/standard1 was stopped due to: user-triggered operation). Ignored.
2024-10-16T23:31:01.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: stats: repair_reason=repair, keyspace=keyspace1, tables=["standard1"], ranges_nr=1, round_nr=0, round_nr_fast_path_already_synced=0, round_nr_fast_path_same_combined_hashes=0, round_nr_slow_path=0, rpc_call_nr=0, tx_hashes_nr=0, rx_hashes_nr=0, duration=2032.6772 seconds, tx_row_nr=0, rx_row_nr=0, tx_row_bytes=0, rx_row_bytes=0, row_from_disk_bytes={}, row_from_disk_nr={}, row_from_disk_bytes_per_sec={} MiB/s, row_from_disk_rows_per_sec={} Rows/s, tx_row_nr_peer={}, rx_row_nr_peer={}
2024-10-16T23:31:01.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2  !WARNING | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: 1 out of 1 ranges failed, keyspace=keyspace1, tables=["standard1"], repair_reason=repair, nodes_down_during_repair={}, aborted_by_user=false, failed_because=sstables::compaction_stopped_exception (Compaction for keyspace1/standard1 was stopped due to: user-triggered operation)
2024-10-16T23:31:01.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2  !WARNING | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Repair tablet for table=keyspace1.standard1 range=(-8070450532247928833,-6917529027641081857] status=failed: std::runtime_error (repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: 1 out of 1 ranges failed, keyspace=keyspace1, tables=["standard1"], repair_reason=repair, nodes_down_during_repair={}, aborted_by_user=false, failed_because=sstables::compaction_stopped_exception (Compaction for keyspace1/standard1 was stopped due to: user-triggered operation))
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}, seastar::future<void>::then_wrapped_nrvo<void, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}>(seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
2024-10-16T23:31:01.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb] Repair 16 out of 16 tablets: table=keyspace1.standard1 range=(8070450532247928831,9223372036854775807] replicas=[ebe5bf99-7a90-4080-ac56-89d120e4fda9:1, 7e40c144-c328-4d8e-b2b5-20d30c40428c:1, 4e2f0651-ce0d-48fe-abc9-042ccac10114:1, ac6032a3-eb12-4136-8377-1b32e8f2fa54:1, 9f9dd207-1fa7-4217-b715-ab7b26b8b3aa:1, 95a46333-001f-4f53-82e6-3a0b470697b5:1]
2024-10-16T23:31:01.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.4.1.1, participants=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73], started
2024-10-16T23:31:01.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.4.2.21, participants=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73], started
2024-10-16T23:31:01.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.3.1.136, participants=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73], started
2024-10-16T23:31:01.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.3.0.73, participants=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73], started
2024-10-16T23:31:01.267+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.4.2.208, participants=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73], started
2024-10-16T23:31:01.267+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.3.1.229, participants=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73], started
2024-10-16T23:31:01.267+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.3.1.62, participants=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73], started
2024-10-16T23:31:01.267+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Sending repair_flush_hints_batchlog to node=10.0.0.60, participants=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73], started
2024-10-16T23:31:01.267+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Started to process repair_flush_hints_batchlog_request from node=10.4.2.208, target_nodes=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73], hints_timeout=300s, batchlog_timeout=300s
2024-10-16T23:31:01.267+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Started to flush hints for repair_flush_hints_batchlog_request from node=10.4.2.208, target_nodes=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73]
2024-10-16T23:31:01.267+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Started to flush batchlog for repair_flush_hints_batchlog_request from node=10.4.2.208, target_nodes=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73]
2024-10-16T23:31:01.267+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Finished to flush hints for repair_flush_hints_batchlog_request from node=10.4.2.208, target_hosts=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73]
2024-10-16T23:31:04.766+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Finished to flush batchlog for repair_flush_hints_batchlog_request from node=10.4.2.208, target_nodes=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73]
2024-10-16T23:31:04.766+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Finished to process repair_flush_hints_batchlog_request from node=10.4.2.208, target_nodes=[10.4.2.208, 10.4.2.21, 10.4.1.1, 10.3.1.62, 10.3.1.136, 10.3.0.73]
2024-10-16T23:32:27.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Started to repair 1 out of 1 tables in keyspace=keyspace1, table=standard1, table_id=115fa160-8c10-11ef-a026-83c41940e401, repair_reason=repair
2024-10-16T23:33:21.516+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: Started to repair 1 out of 1 tables in keyspace=keyspace1, table=standard1, table_id=115fa160-8c10-11ef-a026-83c41940e401, repair_reason=repair
2024-10-16T23:37:28.766+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard 13:comp] compaction - [Compact system_distributed.cdc_generation_timestamps 9afd1b30-8c17-11ef-9502-494daaf719a3] Compacting [/var/lib/scylla/data/system_distributed/cdc_generation_timestamps-fdf455c4cfec3e009719d7a45436c89d/me-3gkf_1ql8_27axs29kr0mguf1bxf-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/system_distributed/cdc_generation_timestamps-fdf455c4cfec3e009719d7a45436c89d/me-3gkf_1tme_2x0v429kr0mguf1bxf-big-Data.db:level=0:origin=memtable]
2024-10-16T23:55:52.016+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla-manager-agent[5613]: {"L":"ERROR","T":"2024-10-16T23:55:51.687Z","N":"http","M":"GET /storage_service/repair_status?id=6","from":"10.4.2.67:58670","status":502,"bytes":0,"duration":"1859999ms","S":"github.com/scylladb/go-log.Logger.log\n\tgithub.com/scylladb/[email protected]/logger.go:101\ngithub.com/scylladb/go-log.Logger.Error\n\tgithub.com/scylladb/[email protected]/logger.go:84\nmain.(*logEntry).Write\n\tgithub.com/scylladb/scylla-manager/v3/pkg/cmd/agent/log.go:53\nmain.newRouter.RequestLogger.RequestLogger.func5.1.1\n\tgithub.com/go-chi/chi/[email protected]/middleware/logger.go:52\nmain.newRouter.RequestLogger.RequestLogger.func5.1\n\tgithub.com/go-chi/chi/[email protected]/middleware/logger.go:56\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2171\ngithub.com/go-chi/chi/v5.(*Mux).ServeHTTP\n\tgithub.com/go-chi/chi/[email protected]/mux.go:90\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3142\nnet/http.(*conn).serve\n\tnet/http/server.go:2044"}
2024-10-16T23:59:15.516+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] sstable - Rebuilding bloom filter /var/lib/scylla/data/keyspace1/standard1-115fa1608c1011efa02683c41940e401/me-3gkf_1tft_04pr42k1wo5d48hh9f-big-Filter.db: resizing bitset from 760832 bytes to 1086320 bytes. sstable origin: repair
2024-10-16T23:59:17.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] compaction - [Split keyspace1.standard1 a6f09e00-8c1a-11ef-a825-494aaaf719a3] Splitting [/var/lib/scylla/data/keyspace1/standard1-115fa1608c1011efa02683c41940e401/me-3gkf_1tft_04pr42k1wo5d48hh9f-big-Data.db:level=0:origin=repair]
2024-10-17T00:01:14.516+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] sstable - Rebuilding bloom filter /var/lib/scylla/data/keyspace1/standard1-115fa1608c1011efa02683c41940e401/me-3gkf_1tea_1pii82ixsplnfgkper-big-Filter.db: resizing bitset from 785248 bytes to 1079616 bytes. sstable origin: repair
2024-10-17T00:01:15.271+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] compaction - [Split keyspace1.standard1 ed475c90-8c1a-11ef-a61c-4946aaf719a3] Splitting [/var/lib/scylla/data/keyspace1/standard1-115fa1608c1011efa02683c41940e401/me-3gkf_1tea_1pii82ixsplnfgkper-big-Data.db:level=0:origin=repair]
2024-10-17T00:04:05.477+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: stats: repair_reason=repair, keyspace=keyspace1, tables=["standard1"], ranges_nr=1, round_nr=226, round_nr_fast_path_already_synced=1, round_nr_fast_path_same_combined_hashes=0, round_nr_slow_path=225, rpc_call_nr=4508, tx_hashes_nr=1010681, rx_hashes_nr=6562079, duration=1843.9375 seconds, tx_row_nr=5053476, rx_row_nr=1010681, tx_row_bytes=29006952240, rx_row_bytes=11689536446, row_from_disk_bytes={10.3.0.73: 7531396600, 10.3.1.136: 7532125580, 10.3.1.62: 7532125580, 10.4.1.1: 7532125580, 10.4.2.208: 7532056700, 10.4.2.21: 7532102620}, row_from_disk_nr={10.3.0.73: 1312090, 10.3.1.136: 1312217, 10.3.1.62: 1312217, 10.4.1.1: 1312217, 10.4.2.208: 1312205, 10.4.2.21: 1312213}, row_from_disk_bytes_per_sec={10.3.0.73: 3.895197, 10.3.1.136: 3.8955739, 10.3.1.62: 3.8955739, 10.4.1.1: 3.8955739, 10.4.2.208: 3.895538, 10.4.2.21: 3.895562} MiB/s, row_from_disk_rows_per_sec={10.3.0.73: 711.56964, 10.3.1.136: 711.63855, 10.3.1.62: 711.63855, 10.4.1.1: 711.63855, 10.4.2.208: 711.632, 10.4.2.21: 711.63635} Rows/s, tx_row_nr_peer={10.3.0.73: 1010796, 10.3.1.136: 1010669, 10.3.1.62: 1010669, 10.4.1.1: 1010669, 10.4.2.21: 1010673}, rx_row_nr_peer={10.3.0.73: 268016, 10.3.1.136: 18095, 10.3.1.62: 162360, 10.4.1.1: 523240, 10.4.2.21: 38970}
2024-10-17T00:04:05.477+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  1:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: completed successfully, keyspace=keyspace1
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}, seastar::future<void>::then_wrapped_nrvo<void, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}>(seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::smp_message_queue::async_work_item<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>::run_and_dispose()::{lambda(auto:1)#1}&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
2024-10-17T00:06:41.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: stats: repair_reason=repair, keyspace=keyspace1, tables=["standard1"], ranges_nr=1, round_nr=226, round_nr_fast_path_already_synced=1, round_nr_fast_path_same_combined_hashes=0, round_nr_slow_path=225, rpc_call_nr=4516, tx_hashes_nr=1014777, rx_hashes_nr=6559391, duration=2054.4846 seconds, tx_row_nr=5082384, rx_row_nr=1014777, tx_row_bytes=29172884160, rx_row_bytes=11736910782, row_from_disk_bytes={10.3.0.73: 7489385540, 10.3.1.136: 7538783980, 10.3.1.62: 7538709360, 10.4.1.1: 7538783980, 10.4.2.208: 7538646220, 10.4.2.21: 7538783980}, row_from_disk_nr={10.3.0.73: 1304771, 10.3.1.136: 1313377, 10.3.1.62: 1313364, 10.4.1.1: 1313377, 10.4.2.208: 1313353, 10.4.2.21: 1313377}, row_from_disk_bytes_per_sec={10.3.0.73: 3.476509, 10.3.1.136: 3.4994395, 10.3.1.62: 3.499405, 10.4.1.1: 3.4994395, 10.4.2.208: 3.4993756, 10.4.2.21: 3.4994395} MiB/s, row_from_disk_rows_per_sec={10.3.0.73: 635.08435, 10.3.1.136: 639.2732, 10.3.1.62: 639.2669, 10.4.1.1: 639.2732, 10.4.2.208: 639.26154, 10.4.2.21: 639.2732} Rows/s, tx_row_nr_peer={10.3.0.73: 1023359, 10.3.1.136: 1014753, 10.3.1.62: 1014766, 10.4.1.1: 1014753, 10.4.2.21: 1014753}, rx_row_nr_peer={10.3.0.73: 347347, 10.3.1.136: 6240, 10.3.1.62: 112811, 10.4.1.1: 511823, 10.4.2.21: 36556}
2024-10-17T00:06:41.266+00:00 multi-dc-rackaware-with-znode-dc-fe-db-node-2bc4de73-2     !INFO | scylla[5553]:  [shard  0:strm] repair - repair[11c2353c-f36a-4b5f-85bf-554924f36fdb]: completed successfully, keyspace=keyspace1
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::future<void>::finally_body<seastar::smp::submit_to<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>(unsigned int, seastar::smp_submit_to_options, seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}&&)::{lambda()#1}, false>, seastar::future<void>::then_wrapped_nrvo<seastar::future<void>, seastar::future<void>::finally_body<seastar::smp::submit_to<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>(unsigned int, seastar::smp_submit_to_options, seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}&&)::{lambda()#1}, false> >(seastar::future<void>::finally_body<seastar::smp::submit_to<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>(unsigned int, seastar::smp_submit_to_options, seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}&&)::{lambda()#1}, false>&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::future<void>::finally_body<seastar::smp::submit_to<seastar::sharded<repair_service>::invoke_on_all(seastar::smp_submit_to_options, 
std::function<seastar::future<void> (repair_service&)>)::{lambda(unsigned int)#1}::operator()(unsigned int) const::{lambda()#1}>(unsigned int, seastar::smp_submit_to_options, auto:1&&)::{lambda()#1}, false>&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::async<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}>(seastar::thread_attributes, repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}&&)::{lambda()#2}, seastar::future<void>::then_impl_nrvo<seastar::async<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}>(seastar::thread_attributes, repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}&&)::{lambda()#2}, seastar::future<void> >(repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::async<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}>(seastar::thread_attributes, auto:1&&, (auto:2&&)...)::{lambda()#2}&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::future<void>::finally_body<seastar::async<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}>(seastar::thread_attributes, repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}&&)::{lambda()#3}, false>, seastar::future<void>::then_wrapped_nrvo<seastar::future<void>, seastar::future<void>::finally_body<seastar::async<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}>(seastar::thread_attributes, repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}&&)::{lambda()#3}, false> >(seastar::future<void>::finally_body<seastar::async<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}>(seastar::thread_attributes, repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}&&)::{lambda()#3}, false>&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::future<void>::finally_body<seastar::async<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#1}>(seastar::thread_attributes, auto:1&&, (auto:2&&)...)::{lambda()#3}, false>&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
   seastar::continuation<seastar::internal::promise_base_with_type<void>, repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#2}, seastar::future<void>::then_impl_nrvo<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#2}, seastar::future<void> >(repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#2}&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0::operator()()::{lambda()#2}&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
   N7seastar12continuationINS_8internal22promise_base_with_typeIvEEZNS_6futureIvE16handle_exceptionIZZN6repair19task_manager_module3runE14repair_uniq_idSt8functionIFvvEEEN3$_0clEvEUlNSt15__exception_ptr13exception_ptrEE_Qoooooosr3stdE16is_invocable_r_vINS4_IT_EETL0__SF_Eaaeqsr3stdE12tuple_size_vINSt11conditionalIXsr3stdE9is_same_vINS1_18future_stored_typeIJSH_EE4typeENS1_9monostateEEESt5tupleIJEESQ_IJSO_EEE4typeEELi0Esr3stdE16is_invocable_r_vIvSK_SF_Eaaeqsr3stdE12tuple_size_vISU_ELi1Esr3stdE16is_invocable_r_vISH_SK_SF_Eaagtsr3stdE12tuple_size_vISU_ELi1Esr3stdE16is_invocable_r_vISU_SK_SF_EEES5_OSH_EUlSV_E_ZNS5_17then_wrapped_nrvoIS5_SW_EENS_8futurizeISH_E4typeEOT0_EUlOS3_RSW_ONS_12future_stateISP_EEE_vEE
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::future<void>::finally_body<seastar::internal::invoke_func_with_gate<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0>(seastar::gate::holder&&, repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0&&)::{lambda()#1}, false>, seastar::future<void>::then_wrapped_nrvo<seastar::future<void>, seastar::future<void>::finally_body<seastar::internal::invoke_func_with_gate<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0>(seastar::gate::holder&&, repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0&&)::{lambda()#1}, false> >(seastar::future<void>::finally_body<seastar::internal::invoke_func_with_gate<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0>(seastar::gate::holder&&, repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0&&)::{lambda()#1}, false>&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::future<void>::finally_body<seastar::internal::invoke_func_with_gate<repair::task_manager_module::run(repair_uniq_id, std::function<void ()>)::$_0>(seastar::gate::holder&&, auto:1&&)::{lambda()#1}, false>&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
   seastar::continuation<seastar::internal::promise_base_with_type<void>, repair::tablet_repair_task_impl::run()::$_1, seastar::future<void>::then_impl_nrvo<repair::tablet_repair_task_impl::run()::$_1, seastar::future<void> >(repair::tablet_repair_task_impl::run()::$_1&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, repair::tablet_repair_task_impl::run()::$_1&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
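For triage, the key signal in the log excerpt above is the `failed_because=` reason in the "ranges failed" warnings (here `seastar::rpc::remote_verb_error` / `sstables::compaction_stopped_exception`, both ending in "Compaction for keyspace1/standard1 was stopped due to: user-triggered operation"). A throwaway parser like the sketch below (not part of SCT or any Scylla tooling; the function name and regex are ad-hoc) can pull the repair id, failed-range counts, and reason out of such lines when scanning large logs:

```python
import re

# Matches Scylla repair warnings of the form:
#   repair[<uuid>]: N out of M ranges failed, ..., failed_because=<reason>
FAILED_RE = re.compile(
    r"repair\[(?P<id>[0-9a-f-]+)\]: (?P<failed>\d+) out of (?P<total>\d+) ranges failed"
    r".*failed_because=(?P<reason>.+)$"
)

def parse_repair_failure(line: str):
    """Return a dict describing a 'ranges failed' warning, or None if the
    line is not such a warning."""
    m = FAILED_RE.search(line)
    if not m:
        return None
    return {
        "repair_id": m.group("id"),
        "failed_ranges": int(m.group("failed")),
        "total_ranges": int(m.group("total")),
        "reason": m.group("reason"),
    }
```

Running it over the node-2 log above yields one entry per failed tablet range, all with the same compaction-stopped reason, which is what the manager task ultimately reports as `status FAILED`.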
  

@aleksbykov @kbr-scylla
