vmm_tests: Add test for NVMe-backed SCSI DVD scenario #948
base: main
Conversation
let read_drive = || agent.read_file("/dev/sr0");
let b = read_drive().await.context("failed to read dvd drive")?;
let len = nvme_disk_sectors * sector_size;
if b.len() as u64 != len {
would it be useful to check more than just the length of the returned data? i.e., add a new RamDiskWithDataLayerHandle that allows the disk to be pre-populated with a Vec<u8> of data that you can then compare against?
Sure would. Do you have any objection to adding that as a follow-up PR?
I agree in principle. However, I am struggling a bit with the logic underneath the RamDiskLayerHandle in the VMM tests. Where does the actual ramdisk get allocated?
Yeah, this ties into some of the resource/resolver infrastructure in OpenVMM, which can be a bit tricky to wrap your head around. Once you know where to look, it's not too bad, but the fact that you can't simply "jump to definition" your way from the Handle to the underlying call to RamDisk::new is a bit unfortunate.
Start by taking a look at /vm/devices/storage/disklayer_ram/src/resolver.rs. I think you'll find it shouldn't be too hard to quickly add this new handle type.
That said - if you find yourself falling down a rabbit hole here, I wouldn't block adding this baseline collateral on it.
I have been looking into this for a bit now. It looks like the SQLite backend is able to make this work because its handle just contains a string with the file path of the database. However, if we put the actual RamDiskLayer here, it would create a circular reference, and I don't think it's a good idea to provide a direct reference to the RamState or the RwLock surrounding it. Any thoughts on the design here?
I wouldn't be opposed to this... but a much simpler alternative might be to create a temporary raw file with tempfile::tempfile(), write whatever you want to it, and send it over as a FileDisk.
In fact, I think I'd prefer that. In the future, we could also consider a section-backed file disk, so that we can use a memory/pagefile-backed file. But let's not extend the existing RAM disk to support external data.
I typically bias away from tempfile-based solutions on purely aesthetic grounds, but it's not unreasonable for something like this. Plus, thinking ahead, it seems like this could be a useful test to run with the Hyper-V petri backend, which would probably require using a concrete backing file, right?
Right. For Hyper-V petri, we'll need a VHD. But that's a trivial transformation of a raw file.
On Linux, you could also just use a memfd instead of tempfile(). That doesn't work on Windows, though. We would have to make an mmap-based disk for Windows to be able to use a page-file-backed section.
@eric135 , please file a follow-up issue to track this conversation before closing out this PR.
let (vm, agent) = config
    .with_vmbus_redirect()
    .with_custom_config(|c| {
        c.vpci_devices.extend([VpciDeviceConfig {
I'm tempted to ask you to figure out a way to abstract this out to make it more concise (we have multiple places now creating NVMe disks with mostly the same parameters). But if we're going to stop here, then that's probably over-factoring for the sake of elegance. What do you think?
We would need to account for all the possible configurations of NVMe disks, such as striped, etc., but I think this may be good future work.
Well, imagine that you started with:
fn new_test_vtl2_nvme_device(nsid: u8, size: u64) -> VpciDeviceConfig {
    VpciDeviceConfig {
        vtl: DeviceVtl::Vtl2,
        instance_id: NVME_INSTANCE,
        resource: NvmeControllerHandle {
            subsystem_id: NVME_INSTANCE,
            max_io_queues: 64,
            msix_count: 64,
            namespaces: vec![NamespaceDefinition {
                nsid,
                disk: LayeredDiskHandle::single_layer(RamDiskLayerHandle {
                    len: Some(size),
                })
                .into_resource(),
                read_only: false,
            }],
        }
        .into_resource(),
    }
}
vmm_test_macros doesn't quite seem like the right place for this helper and I don't want to put it directly in the tests file. Any thoughts?
v.dynamic.as_mut().unwrap().storage_controllers.push(
    vtl2_settings_proto::StorageController {
        instance_id: scsi_instance.to_string(),
        protocol: vtl2_settings_proto::storage_controller::StorageProtocol::Scsi.into(),
I'd love to see a gen1 VM equivalent of this test as well (as a separate follow-up, if we go that route).
This looks good to me.
.with_custom_config(|c| {
    c.vpci_devices.extend([new_test_vtl2_nvme_device(
        vtl2_nsid,
        nvme_disk_sectors * sector_size,
Use disk_len instead of doing the math again here.
let mut backing_file = tempfile::tempfile()?;
let mut bytes = vec![0_u8; disk_len as usize];
for i in 0..(disk_len / 0xff) {
    // This should always be divisible by 64
You could also write this code as something like the following, which would be more concise and avoid the index-into-array math.
let data_chunk: Vec<u8> = (0..64).collect();
let data_chunk = data_chunk.as_slice();
let mut bytes = vec![0_u8; disk_len as usize];
bytes.chunks_exact_mut(64).for_each(|v| v.copy_from_slice(data_chunk));
if b.len() as u64 != disk_len {
    anyhow::bail!("expected {} bytes, got {}", disk_len, b.len());
}
if !b[..].eq(&bytes[..]) {
why anyhow::bail instead of assert_eq!(b[..], bytes[..])?
These tests exercise a ramdisk-backed NVMe block device that OpenHCL translates into a SCSI DVD device.