I would like to use this crate in a project I'm working on, and so would like to be able to get a stable version from crates.io. I'm happy to publish it myself, but let me know if you'd prefer to.
With the new version of LLVM in rustc 1.78.0, pointer::write_volatile can sometimes get optimised into something like str w9, [x0], #4. This writes to the MMIO memory as expected, but suppresses the syndrome register (e.g. ESR_EL1) update, which at least in the case of KVM with crosvm prevents the VMM from servicing the fault as it relies on the ESR for this.
This could arguably be considered a bug in rustc as write_volatile is explicitly intended for MMIO, but we may want to consider working around this in the meantime by using inline assembly for volatile memory access on aarch64.
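A minimal sketch of such a workaround (the helper name is illustrative, not the crate's actual API): on aarch64, emit a plain `str` with a simple register-addressing mode via inline assembly, so the compiler cannot select a writeback form; on other architectures, fall back to `write_volatile`.

```rust
/// Hypothetical MMIO store helper. On aarch64 it forces a single `str`
/// with no writeback addressing mode, so a stage-2 fault reports a valid
/// syndrome in ESR_EL1.
///
/// # Safety
/// `ptr` must be a valid, properly aligned address for a 32-bit store.
#[cfg(target_arch = "aarch64")]
unsafe fn mmio_write_u32(ptr: *mut u32, value: u32) {
    core::arch::asm!(
        "str {value:w}, [{ptr}]",
        ptr = in(reg) ptr,
        value = in(reg) value,
        options(nostack, preserves_flags),
    );
}

/// Portable fallback for other architectures, where the issue does not
/// arise and `write_volatile` is fine.
#[cfg(not(target_arch = "aarch64"))]
unsafe fn mmio_write_u32(ptr: *mut u32, value: u32) {
    core::ptr::write_volatile(ptr, value);
}
```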
I suspect that this crate currently assumes little-endian byte order in a bunch of places, and will break on big-endian architectures. We should go through and make the endianness explicit so that it will work on either. (I plan on doing so, this is just to keep track of it.)
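As an illustration of what making the endianness explicit could look like (a hypothetical helper, not existing crate code): decode device fields with `from_le_bytes` instead of reinterpreting raw memory, so the result is the same on little- and big-endian hosts.

```rust
/// Read a 32-bit little-endian field from a byte buffer. Unlike a raw
/// pointer cast or transmute, this is correct regardless of the host's
/// native byte order, because the conversion is spelled out.
fn read_le_u32(bytes: &[u8]) -> u32 {
    u32::from_le_bytes([bytes[0], bytes[1], bytes[2], bytes[3]])
}
```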
```rust
for output in outputs.iter() {
    let desc = &mut self.desc[self.free_head as usize];
    desc.set_buf(output);
    desc.flags.write(DescFlags::NEXT | DescFlags::WRITE);
    last = self.free_head;
    self.free_head = desc.next.read();
}
```
For every output memory buffer, we currently add only one descriptor to the descriptor table. This assumes that the given memory buffer resides on contiguous physical pages, i.e. that the current VA->PA mapping is linear around the buffer's position in virtual memory. If this condition is not satisfied, the second part of the output will be written to the wrong physical page.
One possible solution is to add a descriptor for each contiguous physical memory region that the given memory buffer spans.
By the way, the BlkReq we allocate in VirtIOBlk::read/write_block(_nb) can also straddle two physical pages depending on the current page table, which is harder to reason about.
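A sketch of the splitting idea, with a toy per-page VA->PA table standing in for real page-table walks (all names are hypothetical): walk the buffer page by page, translate each piece, and merge pieces whose physical frames happen to be adjacent, yielding one (paddr, len) range per descriptor.

```rust
use std::collections::HashMap;

const PAGE_SIZE: usize = 4096;

/// Split a virtual buffer [vaddr, vaddr + len) into physically
/// contiguous (paddr, len) regions, one per descriptor. `map` is a toy
/// page-granular VA->PA table; a real kernel would walk its page tables.
fn split_physically_contiguous(
    map: &HashMap<usize, usize>,
    vaddr: usize,
    len: usize,
) -> Vec<(usize, usize)> {
    let mut regions: Vec<(usize, usize)> = Vec::new();
    let mut cur = vaddr;
    let end = vaddr + len;
    while cur < end {
        let page = cur & !(PAGE_SIZE - 1);
        let offset = cur - page;
        let paddr = map[&page] + offset;
        // Stop at the page boundary or the end of the buffer.
        let chunk = (PAGE_SIZE - offset).min(end - cur);
        match regions.last_mut() {
            // Physically adjacent to the previous region: extend it.
            Some((start, rlen)) if *start + *rlen == paddr => *rlen += chunk,
            _ => regions.push((paddr, chunk)),
        }
        cur += chunk;
    }
    regions
}
```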
As of the current version (0.5.0), this library does not provide a way to unload a virtio driver. I need to unload a driver in order to reload it or switch to another driver in certain situations.
While the library provides the ability to load a driver, it lacks a way to unload it. I think adding support for unloading drivers would make the library more complete and flexible.
I would like to be able to dynamically load and unload virtio drivers within an application, to adjust as needed at runtime. Currently, the only way to remove a driver seems to be to restart the application or the OS, which is too heavy-handed.
Please consider adding a function or method to unload a virtio driver at runtime. This would add functionality and flexibility to the library and complement the existing driver-loading support.
Thanks for your interest and contribution!
Additional information:
Library version: 0.5.0
Operating system: rCore (yes, I'm following this OS tutorial)
When I ran the code in ch7 without any modification, I hit a compilation error. I think `#![feature(renamed_spin_loop)]` should be added to lib.rs.
My Rust version is rustc 1.46.0-nightly (50fc24d8a 2020-06-25).
Hello, I have a question. I'm trying to configure the virtio console driver on RISC-V QEMU. I found that the FDT has an interrupt number, but I'm not sure how to enable this device's interrupt through the virtio-drivers library. Could you please tell me how to do it?
I changed the target from riscv64imac-unknown-none-elf to riscv64gc-unknown-none-elf (which seems newer), and used rustc 1.51-nightly with qemu-5.0.
It compiles fine.
But when I run `make qemu`, I get a runtime error:
Hello, I am using virtio-net to implement an interrupt-driven network driver. However, I found that interrupts are not triggered unless the recv method of VirtIONet is called. Do you think VirtIONet should also provide a non-blocking method, as VirtIOBlk does?
It is necessary for some implementations of the Hal interface to know whether the driver or the device should be allowed to write to a memory region that is being allocated, so that it can map it appropriately in the page tables and IOMMUs or equivalent. For example the driver area and descriptor queue should be mapped read-write for the driver but read-only to the device, and the device area vice-versa.
This could be done by adding a parameter to the Hal::dma_alloc method. However, VirtQueue currently allocates all three regions together, as the legacy interface requires them to be laid out contiguously. This is awkward. Options I can see are:
Remove support for the legacy interface. Then the three areas could be allocated separately, with different permissions. This would also reduce the size of each individual allocation which might make things easier for the allocator.
Have dma_alloc take two parameters, something like driver_pages and device_pages, and be expected to allocate a contiguous region of the given total number of pages, but with different permissions for the two subranges. This is an awkward API, but would at least allow us to maintain all current functionality.
Thoughts? Is anyone particularly attached to the legacy MMIO interface? Could we just remove it entirely?
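To make the second option concrete, here is a rough sketch of what the signature could look like. The trait and parameter names are illustrative, not the crate's actual API, and the impl below is a toy identity-mapped allocator purely for demonstration; a real Hal would also program page tables or an IOMMU with the two permission subranges.

```rust
use std::ptr::NonNull;

const PAGE_SIZE: usize = 4096;

/// Sketch of a permissions-aware dma_alloc: one contiguous allocation of
/// `driver_pages + device_pages` pages, where the leading `driver_pages`
/// (descriptor table + driver area) should be device-read-only and the
/// trailing `device_pages` (device area) driver-read-only.
trait DmaHal {
    /// Returns (physical address, virtual pointer) of the zeroed region.
    fn dma_alloc(driver_pages: usize, device_pages: usize) -> (usize, NonNull<u8>);
}

struct TestHal;

impl DmaHal for TestHal {
    fn dma_alloc(driver_pages: usize, device_pages: usize) -> (usize, NonNull<u8>) {
        let total = (driver_pages + device_pages) * PAGE_SIZE;
        // Toy identity mapping: leak a zeroed Vec and treat its address
        // as both VA and "PA". A real implementation would map the two
        // subranges with different permissions here.
        let buf = vec![0u8; total].leak();
        let ptr = NonNull::new(buf.as_mut_ptr()).unwrap();
        (ptr.as_ptr() as usize, ptr)
    }
}
```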
I am having problems redrawing the frame buffer periodically using a timer. The first succeeds, but all further draw calls fail with an IO Error. Maybe I do not understand the procedure correctly...
The MmioTransport is initialized as in the RISC-V example.
A timer interrupt calls draw periodically. The first draw call succeeds; all subsequent ones fail in setup_framebuffer.
I implemented the Hal trait for VirtioHal and used it to init VirtIONet. However, the address of the buf input parameter of the share interface is not within the range of the previously DMA-allocated addresses. I am not sure whether this is the correct behavior.
In these methods, we allocate a BlkReq instance on the stack. The memory it occupies will be reused after the method returns, while the virtio device may still be accessing it, leading to a data race. Note that the blocking versions of these methods do not have this problem: by the time the method returns and the BlkReq is deallocated, the device is guaranteed to have finished handling the request.
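One hedged sketch of how a non-blocking path could avoid this race: box the request header and park it in a pending-request table keyed by the descriptor token, freeing it only when the used ring reports completion. The types and names below are illustrative stand-ins, not the crate's API.

```rust
use std::collections::HashMap;

/// Illustrative stand-in for BlkReq: a header the device reads via DMA.
struct BlkReq {
    sector: u64,
}

/// In-flight requests keyed by descriptor token. Boxing the header and
/// parking it here keeps its memory valid until the device reports
/// completion, instead of letting a stack temporary go out of scope.
struct PendingRequests {
    inflight: HashMap<u16, Box<BlkReq>>,
}

impl PendingRequests {
    fn new() -> Self {
        Self { inflight: HashMap::new() }
    }

    /// Called at submit time, with the token returned when the buffers
    /// were added to the virtqueue.
    fn submit(&mut self, token: u16, req: Box<BlkReq>) {
        self.inflight.insert(token, req);
    }

    /// Called when the used ring reports `token`: only now may the
    /// header memory be dropped or reused.
    fn complete(&mut self, token: u16) -> Option<Box<BlkReq>> {
        self.inflight.remove(&token)
    }
}
```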
Hello, I'd like to ask whether this crate supports network drivers on multicore systems. When I create multiple TCP connections using the send interface of virtio-net on a multicore machine, the packets captured in tcpdump are incomplete and out of order.
```rust
// write a response
let response = "response from server";
syscall::write!(client_fd, response.as_bytes());
```
Additionally, it works fine if there is only a single TCP connection.
All data that the device accesses directly should be allocated from the DMA range. But in vsock.rs, VirtioVsockHdr is allocated on the stack and rx_queue_buffers are allocated on the heap.
```rust
pub fn connect(&mut self, connection_info: &ConnectionInfo) -> Result {
    let header = VirtioVsockHdr {
        op: VirtioVsockOp::Request.into(),
        ..connection_info.new_header(self.guest_cid)
    };
    // Sends a header only packet to the TX queue to connect the device to the listening socket
    // at the given destination.
    self.send_packet_to_tx_queue(&header, &[])
}
```
```rust
// Allocate and add buffers for the RX queue.
let mut rx_queue_buffers = [null_mut(); QUEUE_SIZE];
for (i, rx_queue_buffer) in rx_queue_buffers.iter_mut().enumerate() {
    let mut buffer: Box<[u8; RX_BUFFER_SIZE]> = FromZeroes::new_box_zeroed();
    // Safe because the buffer lives as long as the queue, as specified in the function
    // safety requirement, and we don't access it until it is popped.
    let token = unsafe { rx.add(&[], &mut [buffer.as_mut_slice()]) }?;
    assert_eq!(i, token.into());
    *rx_queue_buffer = Box::into_raw(buffer);
}
let rx_queue_buffers = rx_queue_buffers.map(|ptr| NonNull::new(ptr).unwrap());
```
It is unsafe and unreasonable to share stack or heap memory with the device.
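As a minimal illustration of the invariant being asked for (a hypothetical helper, not crate code): a submit-time check that a buffer lies entirely inside the pre-allocated DMA window, which would catch stack- or heap-allocated buffers like the ones above before they are handed to the device.

```rust
/// Returns true if [ptr, ptr + len) lies entirely within the DMA window
/// [dma_base, dma_base + dma_len). Buffers failing this check (e.g. a
/// stack-allocated header) must not be shared with the device.
fn in_dma_range(dma_base: usize, dma_len: usize, ptr: usize, len: usize) -> bool {
    ptr >= dma_base && ptr + len <= dma_base + dma_len
}
```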