rtio: Asynchronous Real-Time I/O #44999
Merged
Conversation
nashif previously approved these changes (Jun 23, 2022)
mbolivar-nordic previously requested changes (Jun 23, 2022)
A DMA-friendly stream API for Zephyr. Based on ideas from io_uring and iio, this is a queue-based API for I/O operations. It provides a pair of fixed-length ring-buffer-backed queues for submitting I/O requests and receiving I/O completions. Requests may be chained together to ensure the next operation does not start until the current one is complete.

Requests target an abstract rtio_iodev, which is expected to wrap all the hardware particulars of how to perform the operation. For example, with a SPI bus device, what a read and a write mean can be decided by the iodev wrapping a particular device hanging off of a SPI controller.

The queue pair is submitted to an executor, which may be a simple in-place looping executor running in the caller's execution context (thread/stack), but other executors are expected. A thread-pool executor might, for example, allow concurrent request chains to execute in parallel. A DMA executor, in conjunction with DMA-aware iodevs, would allow hardware offloading of operations, going so far as to schedule with priority using hardware arbitration.

Both the iodev and executor are definable by a particular SoC, meaning they can work in conjunction to perform I/O operations using a particular DMA controller or methodology if desired. The application decides entirely how large the queues are, where the buffers to read/write come from (some executors may have particular demands!), and which executor to submit requests to.

Signed-off-by: Tom Burdick <[email protected]>
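To make the queue-pair idea concrete, here is a minimal plain-C sketch of a fixed-length submission/completion ring-buffer pair with an in-place looping "executor". All names here (`sketch_sqe`, `sketch_cqe`, `sketch_queue_pair`, the functions) are hypothetical illustrations of the concept, not the Zephyr RTIO API from this PR:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define SKETCH_Q_DEPTH 4 /* fixed queue depth; power of two for the index mask */

enum sketch_op { SKETCH_OP_READ, SKETCH_OP_WRITE };

struct sketch_sqe {           /* submission queue entry */
    enum sketch_op op;
    void *buf;                /* application-provided buffer */
    size_t len;
    bool chained;             /* next entry must wait for this one to finish */
};

struct sketch_cqe {           /* completion queue entry */
    int result;               /* 0 on success, negative errno-style on failure */
};

struct sketch_queue_pair {
    struct sketch_sqe sq[SKETCH_Q_DEPTH];
    struct sketch_cqe cq[SKETCH_Q_DEPTH]; /* sized to match the SQ */
    uint32_t sq_head, sq_tail;
    uint32_t cq_head, cq_tail;
};

/* Acquire the next free submission slot, or NULL if the SQ is full. */
static struct sketch_sqe *sketch_sqe_acquire(struct sketch_queue_pair *qp)
{
    if (qp->sq_tail - qp->sq_head == SKETCH_Q_DEPTH) {
        return NULL;
    }
    return &qp->sq[qp->sq_tail++ & (SKETCH_Q_DEPTH - 1)];
}

/* In-place looping executor: drain the SQ in order, completing each request
 * synchronously and posting a CQE. A real executor would hand requests to an
 * iodev and post completions from interrupt or DMA context instead. */
static void sketch_submit_all(struct sketch_queue_pair *qp)
{
    while (qp->sq_head != qp->sq_tail) {
        struct sketch_sqe *sqe = &qp->sq[qp->sq_head++ & (SKETCH_Q_DEPTH - 1)];
        (void)sqe; /* a real iodev would perform sqe->op on sqe->buf here */
        qp->cq[qp->cq_tail++ & (SKETCH_Q_DEPTH - 1)].result = 0;
    }
}

/* Consume one completion, or return NULL if none are pending. */
static struct sketch_cqe *sketch_cqe_consume(struct sketch_queue_pair *qp)
{
    if (qp->cq_head == qp->cq_tail) {
        return NULL;
    }
    return &qp->cq[qp->cq_head++ & (SKETCH_Q_DEPTH - 1)];
}
```

The fixed depth means acquisition can fail, which is the backpressure mechanism: the application learns it is submitting faster than completions are consumed.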
Schedules I/O chains in the same order they arrive, providing a fixed amount of concurrency. The low memory cost comes with some computational cost, which is likely acceptable for small amounts of concurrency. The code size is about 4x that of the simple linear executor, which isn't entirely unexpected, since the logic required goes well beyond doing the next thing in the queue.

Signed-off-by: Tom Burdick <[email protected]>
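The scheduling policy described above (arrival order, fixed concurrency cap) can be sketched in a few lines of plain C. This is an illustration of the policy only, with hypothetical names; it is not the executor code from the PR:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_CHAINS 8

/* Start chains strictly in arrival order, but never run more than
 * `concurrency` of them at once. */
struct chain_sched {
    size_t next_to_start;  /* index of next chain to launch, in arrival order */
    size_t n_chains;       /* total chains submitted */
    size_t concurrency;    /* fixed concurrency limit */
    size_t running;        /* chains currently in flight */
    bool done[MAX_CHAINS];
};

/* Launch the next chain; returns its index, or -1 if the concurrency
 * limit is reached or no chains remain. */
static int chain_sched_start(struct chain_sched *s)
{
    if (s->running >= s->concurrency || s->next_to_start >= s->n_chains) {
        return -1;
    }
    s->running++;
    return (int)s->next_to_start++;
}

/* Mark a chain complete, freeing a concurrency slot for the next one. */
static void chain_sched_complete(struct chain_sched *s, int idx)
{
    s->done[idx] = true;
    s->running--;
}
```

The bookkeeping is a handful of counters, which matches the commit's point: memory cost stays tiny, and the extra logic (relative to a linear executor) is the start/complete accounting around the concurrency cap.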
Adds a new sample application that demonstrates using the RTIO subsystem to read periodic sensor data directly into buffers allocated by the application, asynchronously process batches of data with an algorithm, and recycle buffers back for reading additional sensor data.

The sensor iodev in this application is a timer-driven device that executes one read request per timer period. It doesn't actually send any transactions to a real I2C/SPI bus or read any real data into the application-provided buffers. This timer-driven behavior mimics how a real sensor periodically triggers a GPIO interrupt when new data is ready.

The sensor iodev currently uses an internal message queue to store pending requests from the time they are submitted until the next timer expiration. At least one pending request needs to be stored by the iodev to ensure that it has a buffer available to read data into. Any more than that, however, should probably be handled by the application, since it's the application that determines how often it can submit new requests and therefore how deep the queue needs to be.

The sensor iodev is implemented to support multiple instances with devicetree, but additional work remains to enable and use more than one in the application.

Tested on native_posix and frdm_k64f.

Signed-off-by: Maureen Helm <[email protected]>
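The timer-driven iodev pattern (queue a read request with an application buffer, complete at most one request per timer period) can be sketched in plain C. The names (`fake_sensor`, `fake_sensor_submit`, `fake_sensor_tick`) are hypothetical and the "sensor data" is synthetic, mirroring the sample's behavior rather than its actual code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PENDING_DEPTH 4 /* pending-request queue depth */

/* A mock timer-driven sensor: requests wait in a queue with their
 * application-owned buffers, and each timer tick completes at most one. */
struct fake_sensor {
    uint8_t *pending[PENDING_DEPTH]; /* buffers waiting for the next tick */
    size_t head, tail;
    uint8_t sample;                  /* synthetic data source */
};

/* Submit a read request; returns false if the pending queue is full. */
static bool fake_sensor_submit(struct fake_sensor *s, uint8_t *buf)
{
    if (s->tail - s->head == PENDING_DEPTH) {
        return false;
    }
    s->pending[s->tail++ % PENDING_DEPTH] = buf;
    return true;
}

/* Timer expiry handler: complete one pending read per period, if any.
 * Returns the completed buffer, or NULL if nothing was pending. */
static uint8_t *fake_sensor_tick(struct fake_sensor *s)
{
    if (s->head == s->tail) {
        return NULL; /* no buffer available; this period's data is dropped */
    }
    uint8_t *buf = s->pending[s->head++ % PENDING_DEPTH];
    buf[0] = s->sample++; /* pretend we read one sample over I2C/SPI */
    return buf;
}
```

The NULL return on an empty queue is the case the commit message warns about: if the application hasn't kept at least one request pending, the iodev has no buffer to read into when the period expires.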
Notes that the API is currently experimental and is targeting initial inclusion in 3.2. Signed-off-by: Tom Burdick <[email protected]>
nashif approved these changes (Jun 27, 2022)
laurenmurphyx64 approved these changes (Jun 28, 2022)
Labels: area: API (Changes to public APIs), area: Documentation, area: Tests (Issues related to a particular existing or missing test)
Enables asynchronous I/O with interrupt- or DMA-driven transfers across multiple devices through an io_uring-like API.
The docs are the most helpful starting place:
https://github.com/teburd/zephyr/blob/rtio_next/doc/services/rtio/index.rst
Enhancements moved to Issue #46658
This is an experimental API and subsystem; it is useful now, but will require improvements and refinements to cover the expected use cases.