[omdb] Add disks, historical VMMs to omdb db instance show (#6935)
Inspired in part by what would have been nice to have while @augustuswm
and I were debugging failed instances on the colo rack a few days ago,
this branch adds additional data to the `omdb db instance show` command.

Now, we fetch and display a list of all disks attached to the instance:

```console
root@oxz_switch1:~# /var/tmp/omdb-eliza-test-7 db instance show e97e9fb9-62c3-4745-9d91-b0b6fa2baeba
note: database URL not specified.  Will search DNS.
note: (override with --db-url or OMDB_DB_URL)
note: using DNS server for subnet fd00:1122:3344::/48
note: (if this is not right, use --dns-server to specify an alternate DNS server)
note: using database URL postgresql://root@[fd00:1122:3344:109::3]:32221,[fd00:1122:3344:105::3]:32221,[fd00:1122:3344:10b::3]:32221,[fd00:1122:3344:107::3]:32221,[fd00:1122:3344:108::3]:32221/omicron?sslmode=disable
WARN: found schema version 110.0.0, expected 111.0.0
It's possible the database is running a version that's different from what this
tool understands.  This may result in errors or incorrect output.

== INSTANCE ====================================================================
                        ID: e97e9fb9-62c3-4745-9d91-b0b6fa2baeba
                project ID: 5e49b6de-cb2d-438d-83af-95c415bbb901
                      name: mongodb-xfs-bs4096-secondary2
               description: mongodb cluster
                created at: 2024-03-30 05:40:49.049297 UTC
          last modified at: 2024-03-30 05:40:49.049297 UTC

== CONFIGURATION ===============================================================
                     vCPUs: 4
                    memory: 8 GiB
                  hostname: secondary2
                 boot disk: None
              auto-restart:
                  InstanceAutoRestart {
                      policy: None,
                      cooldown: None,
                  }

== RUNTIME STATE ===============================================================
               nexus state: Vmm
(i)     external API state: Starting
           last updated at: 2024-10-22T15:54:39.323895Z (generation 249)
       needs reincarnation: false
             karmic status: bound to saṃsāra
      last reincarnated at: Some(2024-10-22T15:54:45.258508Z)
             active VMM ID: Some(fa525216-9f81-4a8f-8fca-f7858994dce2)
             target VMM ID: None
              migration ID: None
              updater lock: UNLOCKED at generation: 49

         active VMM record:
             Vmm {
                 id: fa525216-9f81-4a8f-8fca-f7858994dce2,
                 time_created: 2024-10-24T20:51:43.102557Z,
                 time_deleted: None,
                 instance_id: e97e9fb9-62c3-4745-9d91-b0b6fa2baeba,
                 sled_id: 0c7011f7-a4bf-4daf-90cc-1c2410103300,
                 propolis_ip: V6(
                     Ipv6Network {
                         addr: fd00:1122:3344:104::1:41c,
                         prefix: 128,
                     },
                 ),
                 propolis_port: SqlU16(
                     12400,
                 ),
                 runtime: VmmRuntimeState {
                     time_state_updated: 2024-10-24T20:51:54.106560Z,
                     gen: Generation(
                         Generation(
                             3,
                         ),
                     ),
                     state: Starting,
                 },
             }

== ATTACHED DISKS===============================================================

SLOT NAME                          ID                                   SIZE   STATE
1    dbdata-xfs-bs4096-secondary2  258172d2-e89a-46bb-9509-ef2b0df8d8e5 80 GiB attached
0    mongodb-xfs-bs4096-secondary2 5e98b4b7-b539-4849-9953-55341a1c3772 30 GiB attached
```
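
For reference, the attached-disks table is rendered with the `tabled` crate,
using the same borderless style and one-space right padding as the rest of
`omdb`'s tables. The following is a minimal, self-contained sketch of that
approach; `DiskRowSketch` and its `String` fields are simplified stand-ins for
illustration, not the actual `DiskRow`/`DiskIdentity` types in the diff below.

```rust
// Minimal sketch of rendering a disk table with `tabled` (assuming a recent
// tabled release with the derive macro and `settings` module). Field types
// are simplified to plain `String`s to keep the example self-contained.
use tabled::{
    settings::{Padding, Style},
    Table, Tabled,
};

#[derive(Tabled)]
#[tabled(rename_all = "SCREAMING_SNAKE_CASE")]
struct DiskRowSketch {
    slot: u8,
    name: String,
    id: String,
    size: String,
    state: String,
}

fn main() {
    let rows = vec![DiskRowSketch {
        slot: 0,
        name: "mongodb-xfs-bs4096-secondary2".to_string(),
        id: "5e98b4b7-b539-4849-9953-55341a1c3772".to_string(),
        size: "30 GiB".to_string(),
        state: "attached".to_string(),
    }];

    // Same table settings as the output above: no borders, one space of
    // padding to the right of each cell.
    let table = Table::new(rows)
        .with(Style::empty())
        .with(Padding::new(0, 1, 0, 0))
        .to_string();
    println!("{table}");
}
```

The real implementation factors the shared columns into a `DiskIdentity`
struct and inlines it into both the `disk list` and `instance show` row types
with `#[tabled(inline)]`, as the diff below shows.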

Additionally, the command now accepts an optional `--history` flag, which
prints a list of all VMMs that have ever been associated with the instance.
This is useful for debugging instances that have migrated and/or been
restarted: it helps work out what happened to the VMMs that previously
incarnated an instance, and to find their IDs and sleds in order to track
down zone bundles and the like. The list of past migrations is now also
gated by the `--history` flag. (A brief sketch of how such a flag can be
declared follows the example output below.)

```console
root@oxz_switch1:~# /var/tmp/omdb-eliza-test-7 db instance show e97e9fb9-62c3-4745-9d91-b0b6fa2baeba --history
note: database URL not specified.  Will search DNS.
note: (override with --db-url or OMDB_DB_URL)
note: using DNS server for subnet fd00:1122:3344::/48
note: (if this is not right, use --dns-server to specify an alternate DNS server)
note: using database URL postgresql://root@[fd00:1122:3344:109::3]:32221,[fd00:1122:3344:105::3]:32221,[fd00:1122:3344:10b::3]:32221,[fd00:1122:3344:107::3]:32221,[fd00:1122:3344:108::3]:32221/omicron?sslmode=disable
WARN: found schema version 110.0.0, expected 111.0.0
It's possible the database is running a version that's different from what this
tool understands.  This may result in errors or incorrect output.

== INSTANCE ====================================================================
                        ID: e97e9fb9-62c3-4745-9d91-b0b6fa2baeba
                project ID: 5e49b6de-cb2d-438d-83af-95c415bbb901
                      name: mongodb-xfs-bs4096-secondary2
               description: mongodb cluster
                created at: 2024-03-30 05:40:49.049297 UTC
          last modified at: 2024-03-30 05:40:49.049297 UTC

# ... snipped out for conciseness ...

== ATTACHED DISKS===============================================================

SLOT NAME                          ID                                   SIZE   STATE
1    dbdata-xfs-bs4096-secondary2  258172d2-e89a-46bb-9509-ef2b0df8d8e5 80 GiB attached
0    mongodb-xfs-bs4096-secondary2 5e98b4b7-b539-4849-9953-55341a1c3772 30 GiB attached

== VMM HISTORY==================================================================

ID                                   STATE        GEN SLED_ID                              TIME_CREATED             TIME_DELETED
fa525216-9f81-4a8f-8fca-f7858994dce2 starting     3   0c7011f7-a4bf-4daf-90cc-1c2410103300 2024-10-24T20:51:43.102Z -
40b7a075-9b7f-4fb9-93b4-87107334243c saga_unwound 2   7b862eb6-7f50-4c2f-b9a6-0d12ac913d3c 2024-10-22T15:54:45.174Z 2024-10-22T15:56:37.058Z
09814c36-302b-43ea-8f43-0e2d977fef39 failed       5   0c7011f7-a4bf-4daf-90cc-1c2410103300 2024-10-11T00:16:33.410Z 2024-10-22T15:54:43.103Z
36887bb1-24c2-47f7-9258-1dada23e33c6 failed       6   7b862eb6-7f50-4c2f-b9a6-0d12ac913d3c 2024-10-09T04:32:44.359Z 2024-10-11T00:16:12.527Z
c3e29a0b-0c8a-4977-8cdc-4b9b6e355a7e failed       6   b886b58a-1e3f-4be1-b9f2-0c2e66c6bc88 2024-10-08T04:42:01.560Z 2024-10-09T04:32:35.413Z
5a62b22b-069f-4744-9303-b5cc1f0b1db8 failed       5   2707b587-9c7f-4fb0-a7af-37c3b7a9a0fa 2024-10-07T05:42:59.516Z 2024-10-08T04:41:51.168Z
f05702ad-cbaa-4b9c-9263-ce0c7a156104 failed       5   5f6720b8-8a31-45f8-8c94-8e699218f28b 2024-10-02T18:18:39.008Z 2024-10-07T05:41:31.394Z
9b63d1b9-dfe8-4c78-a302-c7cdc9fc0970 failed       5   2707b587-9c7f-4fb0-a7af-37c3b7a9a0fa 2024-09-26T21:53:30.820Z 2024-09-29T22:13:57.503Z
a37ee475-d158-4a4e-b396-33ceea78deb8 failed       6   5f6720b8-8a31-45f8-8c94-8e699218f28b 2024-09-19T00:32:09.839Z 2024-09-26T20:50:59.138Z
9febc78d-1aa8-4f40-9990-33c6d109e414 destroyed    8   b886b58a-1e3f-4be1-b9f2-0c2e66c6bc88 2024-09-12T18:07:06.294Z 2024-09-18T20:43:25.976Z
e1aecd78-69b6-40f1-9d9b-8d2801d5b511 destroyed    8   2707b587-9c7f-4fb0-a7af-37c3b7a9a0fa 2024-08-31T04:28:47.870Z 2024-09-12T15:05:17.665Z
89726097-ef24-4b2d-812e-b968072cac24 destroyed    8   dd83e75a-1edf-4aa1-89a0-cd8b2091a7cd 2024-08-31T04:08:52.567Z 2024-08-31T04:23:34.536Z
2d4a3e34-54a0-4ced-9ff6-0630797c5cb5 destroyed    8   0c7011f7-a4bf-4daf-90cc-1c2410103300 2024-08-31T03:26:07.477Z 2024-08-31T04:04:48.122Z
82cf4fa0-56a3-4e18-a461-e4ce581b8623 destroyed    8   71def415-55ad-46b4-ba88-3ca55d7fb287 2024-08-28T14:41:03.838Z 2024-08-31T01:47:23.120Z
11301fac-d145-4bad-8a33-3b02f57e0e24 destroyed    8   b886b58a-1e3f-4be1-b9f2-0c2e66c6bc88 2024-08-27T02:19:37.580Z 2024-08-27T18:37:20.671Z
1f7cb49f-d039-4028-b484-10d1c83e43eb destroyed    8   b886b58a-1e3f-4be1-b9f2-0c2e66c6bc88 2024-08-21T21:27:04.618Z 2024-08-25T18:37:26.915Z
5809e44e-1d90-4526-9c58-8fb4e393ab46 destroyed    8   f15774c1-b8e5-434f-a493-ec43f96cba06 2024-08-20T20:47:56.695Z 2024-08-21T19:10:29.366Z
9131d4d0-8ce4-40ed-b931-614f52469a82 destroyed    8   71def415-55ad-46b4-ba88-3ca55d7fb287 2024-08-14T03:24:27.896Z 2024-08-15T21:20:07.082Z
d8be34b6-f59c-4de4-9852-e60eead5682e destroyed    8   5f6720b8-8a31-45f8-8c94-8e699218f28b 2024-08-01T21:43:21.657Z 2024-08-14T01:19:21.004Z
0a0b6743-0f9c-45c6-9b31-dd83163d6fe8 destroyed    7   2707b587-9c7f-4fb0-a7af-37c3b7a9a0fa 2024-08-01T00:17:06.056Z 2024-08-01T19:07:00.617Z
83e0672d-d1f8-440b-a4c2-5f3e659dc910 destroyed    7   f15774c1-b8e5-434f-a493-ec43f96cba06 2024-07-25T23:41:18.279Z 2024-07-26T16:03:54.205Z
ead4e634-50f8-4d58-bae1-568df37c8ff7 destroyed    7   0c7011f7-a4bf-4daf-90cc-1c2410103300 2024-07-18T22:49:01.899Z 2024-07-25T05:43:53.592Z
436dff2f-8392-4814-b045-98f8eecbbcad destroyed    8   5f6720b8-8a31-45f8-8c94-8e699218f28b 2024-07-13T21:18:07.784Z 2024-07-18T14:43:34.656Z
fe28c172-a9d5-45ac-8a82-b94f51e1f8c1 saga_unwound 2   f15774c1-b8e5-434f-a493-ec43f96cba06 2024-07-13T21:09:49.175Z 2024-09-12T18:00:18.441Z
5eb042a2-5ea4-4aa0-90e1-bd90e0e31af8 destroyed    99  87c2c4fc-b0c7-4fef-a305-78f0ed265bbc 2024-07-13T03:57:18.576Z 2024-07-13T21:07:14.735Z
500811e4-7a5a-4857-98ef-f20ad73438a5 destroyed    7   f15774c1-b8e5-434f-a493-ec43f96cba06 2024-07-12T05:28:37.432Z 2024-07-13T01:04:46.536Z
620e6014-0707-4b46-94ba-ef0c1ce9b5d2 destroyed    7   f15774c1-b8e5-434f-a493-ec43f96cba06 2024-07-11T00:55:48.868Z 2024-07-11T22:55:41.933Z
8f6b2953-1b72-4325-9fb9-08cc1b52d0a5 destroyed    7   71def415-55ad-46b4-ba88-3ca55d7fb287 2024-07-08T21:35:02.137Z 2024-07-10T23:18:21.667Z
eedbe3ee-1d8e-46ae-bd48-393e6cd00368 destroyed    7   2707b587-9c7f-4fb0-a7af-37c3b7a9a0fa 2024-07-07T20:24:54.814Z 2024-07-08T19:53:29.134Z
7650d95c-a9e3-4087-8a1c-959a64f18742 destroyed    7   dd83e75a-1edf-4aa1-89a0-cd8b2091a7cd 2024-07-03T17:35:10.428Z 2024-07-07T17:58:16.378Z
613d7e11-4f49-4a07-b0ba-a7c2617eaaef destroyed    8   71def415-55ad-46b4-ba88-3ca55d7fb287 2024-06-28T23:59:56.326Z 2024-07-01T16:56:39.303Z
01651e7a-88fe-4b09-a812-7d1e42d3cee3 destroyed    8   dd83e75a-1edf-4aa1-89a0-cd8b2091a7cd 2024-06-21T18:09:48.050Z 2024-06-27T17:07:01.677Z
4058a07d-9ceb-42a0-99a9-2abfd280f75a destroyed    8   0c7011f7-a4bf-4daf-90cc-1c2410103300 2024-06-20T19:23:53.106Z 2024-06-20T23:57:03.869Z
2c2619dc-7920-4aa0-bd31-56753dffed2d destroyed    8   b886b58a-1e3f-4be1-b9f2-0c2e66c6bc88 2024-06-14T19:15:06.049Z 2024-06-20T17:26:09.012Z
a4b51b6f-373f-496e-9c59-2c7b2965893c destroyed    8   5f6720b8-8a31-45f8-8c94-8e699218f28b 2024-06-11T23:36:11.780Z 2024-06-13T14:09:03.529Z
15ecaa92-81b9-42d7-84e1-6e21e35c46cf destroyed    29  dd83e75a-1edf-4aa1-89a0-cd8b2091a7cd 2024-06-11T18:55:51.705Z 2024-06-11T20:26:44.169Z
3ab65681-7850-43fe-9f35-6d10a6fb7f7d destroyed    8   dd83e75a-1edf-4aa1-89a0-cd8b2091a7cd 2024-06-08T04:39:12.921Z 2024-06-11T18:31:58.065Z
83c196cb-4410-470d-9c9e-1b428ea2f913 destroyed    8   bd96ef7c-4941-4729-b6f7-5f47feecbc4b 2024-06-07T05:53:04.289Z 2024-06-08T04:38:28.372Z
97303add-944c-4c34-9e15-7a6077e8909a destroyed    8   7b862eb6-7f50-4c2f-b9a6-0d12ac913d3c 2024-05-31T03:44:48.196Z 2024-06-06T22:43:42.122Z
1ad7b454-907a-404c-8bf2-619f6c7ccc60 destroyed    8   87c2c4fc-b0c7-4fef-a305-78f0ed265bbc 2024-05-29T19:27:32.798Z 2024-05-31T00:18:43.729Z
root@oxz_switch1:~#
```
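
The `--history` flag itself is a plain boolean clap argument (declared in the
diff below as `#[arg(short = 'i', long)] history: bool`, so `-i` works as a
short form). The following is a minimal sketch of how such a flag can be
declared and parsed, assuming the clap 4 derive API; `InstanceInfoArgsSketch`
is a hypothetical stand-in for omdb's actual `InstanceInfoArgs`.

```rust
// Minimal sketch of an optional `--history` / `-i` boolean flag, assuming the
// clap 4 derive API. `InstanceInfoArgsSketch` is a stand-in for illustration,
// not omdb's actual argument struct (which uses `InstanceUuid` for the ID).
use clap::Parser;

#[derive(Debug, Parser)]
struct InstanceInfoArgsSketch {
    /// the UUID of the instance to show details for
    #[arg(value_name = "UUID")]
    id: String,

    /// include a list of VMMs and migrations previously associated with this
    /// instance
    #[arg(short = 'i', long)]
    history: bool,
}

fn main() {
    // Equivalent to `omdb db instance show <UUID> --history`.
    let args = InstanceInfoArgsSketch::parse_from([
        "instance-show",
        "e97e9fb9-62c3-4745-9d91-b0b6fa2baeba",
        "--history",
    ]);
    assert!(args.history);
    println!("{args:?}");
}
```

When `history` is false, the command skips the migration and VMM-history
queries entirely, as the diff below shows.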

Depends on #6933 
Closes #6929 
Also addresses some of #6931, but not the networking part.
hawkw authored Oct 26, 2024
1 parent 29f3459 commit 38a34e1
247 changes: 203 additions & 44 deletions dev-tools/omdb/src/bin/omdb/db.rs
@@ -451,6 +451,14 @@ struct InstanceInfoArgs {
/// the UUID of the instance to show details for
#[clap(value_name = "UUID")]
id: InstanceUuid,

/// include a list of VMMs and migrations previously associated with this
/// instance.
///
/// note that this is not exhaustive, as some VMM or migration records may
/// have been hard-deleted.
#[arg(short = 'i', long)]
history: bool,
}

#[derive(Debug, Args)]
@@ -1213,21 +1221,31 @@ async fn lookup_project(

// Disks

#[derive(Tabled)]
#[tabled(rename_all = "SCREAMING_SNAKE_CASE")]
struct DiskIdentity {
name: String,
id: Uuid,
size: String,
state: String,
}

impl From<&'_ db::model::Disk> for DiskIdentity {
fn from(disk: &db::model::Disk) -> Self {
Self {
name: disk.name().to_string(),
id: disk.id(),
size: disk.size.to_string(),
state: disk.runtime().disk_state,
}
}
}

/// Run `omdb db disk list`.
async fn cmd_db_disk_list(
datastore: &DataStore,
fetch_opts: &DbFetchOptions,
) -> Result<(), anyhow::Error> {
#[derive(Tabled)]
#[tabled(rename_all = "SCREAMING_SNAKE_CASE")]
struct DiskRow {
name: String,
id: String,
size: String,
state: String,
attached_to: String,
}

let ctx = || "listing disks".to_string();

use db::schema::disk::dsl;
@@ -1236,6 +1254,26 @@ async fn cmd_db_disk_list(
query = query.filter(dsl::time_deleted.is_null());
}

#[derive(Tabled)]
#[tabled(rename_all = "SCREAMING_SNAKE_CASE")]
struct DiskRow {
#[tabled(inline)]
identity: DiskIdentity,
attached_to: String,
}

impl From<&'_ db::model::Disk> for DiskRow {
fn from(disk: &db::model::Disk) -> Self {
Self {
identity: disk.into(),
attached_to: match disk.runtime().attach_instance_id {
Some(uuid) => uuid.to_string(),
None => "-".to_string(),
},
}
}
}

let disks = query
.limit(i64::from(u32::from(fetch_opts.fetch_limit)))
.select(Disk::as_select())
@@ -1245,16 +1283,7 @@ async fn cmd_db_disk_list(

check_limit(&disks, fetch_opts.fetch_limit, ctx);

let rows = disks.into_iter().map(|disk| DiskRow {
name: disk.name().to_string(),
id: disk.id().to_string(),
size: disk.size.to_string(),
state: disk.runtime().disk_state,
attached_to: match disk.runtime().attach_instance_id {
Some(uuid) => uuid.to_string(),
None => "-".to_string(),
},
});
let rows = disks.iter().map(DiskRow::from);
let table = tabled::Table::new(rows)
.with(tabled::settings::Style::empty())
.with(tabled::settings::Padding::new(0, 1, 0, 0))
@@ -2891,14 +2920,14 @@ async fn cmd_db_instance_info(
args: &InstanceInfoArgs,
) -> Result<(), anyhow::Error> {
use nexus_db_model::schema::{
instance::dsl as instance_dsl, migration::dsl as migration_dsl,
vmm::dsl as vmm_dsl,
disk::dsl as disk_dsl, instance::dsl as instance_dsl,
migration::dsl as migration_dsl, vmm::dsl as vmm_dsl,
};
use nexus_db_model::{
Instance, InstanceKarmicStatus, InstanceRuntimeState, Migration,
Reincarnatability, Vmm,
};
let InstanceInfoArgs { id } = args;
let &InstanceInfoArgs { ref id, history } = args;

let instance = instance_dsl::instance
.filter(instance_dsl::id.eq(id.into_untyped_uuid()))
@@ -3167,40 +3196,170 @@ async fn cmd_db_instance_info(
}
}
}
let past_migrations = migration_dsl::migration
.filter(migration_dsl::instance_id.eq(id.into_untyped_uuid()))

let ctx = || "listing attached disks";
let mut query = disk_dsl::disk
.filter(disk_dsl::attach_instance_id.eq(id.into_untyped_uuid()))
.limit(i64::from(u32::from(fetch_opts.fetch_limit)))
.order_by(migration_dsl::time_created)
// This is just to prove to CRDB that it can use the
// migrations-by-time-created index, it doesn't actually do anything.
.filter(migration_dsl::time_created.gt(chrono::DateTime::UNIX_EPOCH))
.select(Migration::as_select())
.order_by(disk_dsl::time_created.desc())
.into_boxed();
if !fetch_opts.include_deleted {
query = query.filter(disk_dsl::time_deleted.is_null());
}

let disks = query
.select(Disk::as_select())
.load_async(&*datastore.pool_connection_for_tests().await?)
.await
.context("listing migrations")?;
.with_context(ctx)?;

check_limit(&past_migrations, fetch_opts.fetch_limit, || {
"listing migrations"
});
#[derive(Tabled)]
#[tabled(rename_all = "SCREAMING_SNAKE_CASE")]
struct DiskRow {
#[tabled(display_with = "display_option_blank")]
slot: Option<u8>,
#[tabled(inline)]
identity: DiskIdentity,
}

#[derive(Tabled)]
#[tabled(rename_all = "SCREAMING_SNAKE_CASE")]
struct MaybeDeletedDiskRow {
#[tabled(inline)]
r: DiskRow,
#[tabled(display_with = "datetime_opt_rfc3339_concise")]
time_deleted: Option<DateTime<Utc>>,
}

impl From<&'_ db::model::Disk> for DiskRow {
fn from(disk: &db::model::Disk) -> Self {
Self { slot: disk.slot.map(|s| s.into()), identity: disk.into() }
}
}

impl From<&'_ db::model::Disk> for MaybeDeletedDiskRow {
fn from(disk: &db::model::Disk) -> Self {
Self { r: disk.into(), time_deleted: disk.time_deleted() }
}
}

if !disks.is_empty() {
println!("\n{:=<80}\n", "== ATTACHED DISKS");

if !past_migrations.is_empty() {
let rows =
past_migrations.into_iter().map(|m| SingleInstanceMigrationRow {
created: m.time_created,
vmms: MigrationVmms::from(&m),
check_limit(&disks, fetch_opts.fetch_limit, ctx);
let table = if fetch_opts.include_deleted {
tabled::Table::new(disks.iter().map(MaybeDeletedDiskRow::from))
.with(tabled::settings::Style::empty())
.with(tabled::settings::Padding::new(0, 1, 0, 0))
.to_string()
} else {
tabled::Table::new(disks.iter().map(DiskRow::from))
.with(tabled::settings::Style::empty())
.with(tabled::settings::Padding::new(0, 1, 0, 0))
.to_string()
};
println!("{table}");
}

if history {
let ctx = || "listing migrations";
let past_migrations = migration_dsl::migration
.filter(migration_dsl::instance_id.eq(id.into_untyped_uuid()))
.limit(i64::from(u32::from(fetch_opts.fetch_limit)))
.order_by(migration_dsl::time_created.desc())
// This is just to prove to CRDB that it can use the
// migrations-by-time-created index, it doesn't actually do anything.
.filter(
migration_dsl::time_created.gt(chrono::DateTime::UNIX_EPOCH),
)
.select(Migration::as_select())
.load_async(&*datastore.pool_connection_for_tests().await?)
.await
.with_context(ctx)?;

if !past_migrations.is_empty() {
println!("\n{:=<80}\n", "== MIGRATION HISTORY");

check_limit(&past_migrations, fetch_opts.fetch_limit, ctx);

let rows = past_migrations.into_iter().map(|m| {
SingleInstanceMigrationRow {
created: m.time_created,
vmms: MigrationVmms::from(&m),
}
});

let table = tabled::Table::new(rows)
.with(tabled::settings::Style::empty())
.with(tabled::settings::Padding::new(4, 1, 0, 0))
.to_string();
let table = tabled::Table::new(rows)
.with(tabled::settings::Style::empty())
.with(tabled::settings::Padding::new(0, 1, 0, 0))
.to_string();

println!("{table}");
}

println!("\n{:=<80}\n\n{table}", "== MIGRATION HISTORY");
let ctx = || "listing past VMMs";
let vmms = vmm_dsl::vmm
.filter(vmm_dsl::instance_id.eq(id.into_untyped_uuid()))
.limit(i64::from(u32::from(fetch_opts.fetch_limit)))
.order_by(vmm_dsl::time_created.desc())
.select(Vmm::as_select())
.load_async(&*datastore.pool_connection_for_tests().await?)
.await
.with_context(ctx)?;

if !vmms.is_empty() {
println!("\n{:=<80}\n", "== VMM HISTORY");

check_limit(&vmms, fetch_opts.fetch_limit, ctx);

let table = tabled::Table::new(vmms.iter().map(VmmStateRow::from))
.with(tabled::settings::Style::empty())
.with(tabled::settings::Padding::new(0, 1, 0, 0))
.to_string();
println!("{table}");
}
}

Ok(())
}

#[derive(Tabled)]
#[tabled(rename_all = "SCREAMING_SNAKE_CASE")]
struct VmmStateRow {
id: Uuid,
state: db::model::VmmState,
#[tabled(rename = "GEN")]
generation: u64,
sled_id: Uuid,
#[tabled(display_with = "datetime_rfc3339_concise")]
time_created: chrono::DateTime<Utc>,
#[tabled(display_with = "datetime_opt_rfc3339_concise")]
time_deleted: Option<chrono::DateTime<Utc>>,
}

impl From<&'_ Vmm> for VmmStateRow {
fn from(vmm: &Vmm) -> Self {
let &Vmm {
id,
time_created,
time_deleted,
sled_id,
propolis_ip: _,
propolis_port: _,
instance_id: _,
runtime:
db::model::VmmRuntimeState { time_state_updated: _, r#gen, state },
} = vmm;
Self {
id,
state,
time_created,
time_deleted,
generation: r#gen.0.into(),
sled_id,
}
}
}
#[derive(Tabled)]
#[tabled(rename_all = "SCREAMING_SNAKE_CASE")]
struct CustomerInstanceRow {
