Added interrupt driven UARTE (open for feedback, do not merge yet) #102
Conversation
Would you mind if I suggested an alternative solution? Instead of using a … I'm not sure which solution is best, but I'd be happy to try implementing it so we can compare.
Not quite sure what difference you are suggesting; this sounds exactly like what we are doing with the … Or did you have something different in mind?
I've written up a boilerplate implementation as it's probably easier to explain than with words: SamP20@54bb0c1. The main changes are that …

Edit: I haven't included the details of how the interrupts work with this, although for RX this would involve checking the ENDRX event, and if set then disabling the ENDRX_STARTRX short, and updating any information to keep track of which ReadHandle is active.
@SamP20 I had a look at the implementation. The comment I have is that now you have pushed the ownership issue to the user by taking a reference to the memory, without any significant improvement in performance (the …).

Do you have an example use-case where manually handling the ownership is advantageous, or where automatically handling ownership is an issue?
After looking through your implementation some more, I'm starting to see the advantages of your solution, and to be honest I think I had completely the wrong idea with my approach, so sorry about that! That being said, there is a use case I'd like to support; however, I don't know if it's mutually exclusive with what you've implemented.

Firstly I'd like to recognise the merits of your solution when paired with RTFM. It returns the data in the …

I'm working on a thread scheduling library which takes a different approach. Without going into too much detail, I can spawn multiple threads that can take ownership of variables from their surrounding context (almost identical to …).

With that in mind I'd like to be able to do something similar to the pseudocode below (the inner read loop would be wrapped up in a macro):
In the above implementation the …

In an interrupt handler I would like something similar to this:
It may be that this usage is too far from the current implementation, and that would be understandable; however, the solution in this PR currently doesn't follow Rust's ideal of zero cost abstractions due to the memory overhead of the …

I hope that wasn't too much to take in, and let me know if you need me to clarify anything. I do hope there's a solution that caters for the best of both worlds :)
Hi Sam,
What do you think? |
On a side note: if doing a threading API, make sure never to involve any spinning Mutex behavior; it's OK to try_lock, and yield on fail. Bear in mind that the RTFM resource lock is just claiming the resource, it never needs to wait for it (it's guaranteed to succeed). Hence you never run into the dreaded deadlocking associated with mutexes. In fact the try_read (as sketched above) will never fail; your cooperative thread will be woken by a message (which might be a transmit error), but never of WouldBlock type, which reflects the reactive nature of the system, which is inherent to embedded systems running on top of interrupt driven hardware.

Best
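The "try, then yield on failure" pattern described above can be sketched in plain Rust. Note that `with_lock` and `demo` are hypothetical names, and `std::sync::Mutex` plus `thread::yield_now` stand in for whatever scheduler a real embedded threading library would yield to; this is not an API from this PR:

```rust
use std::sync::Mutex;
use std::thread;

// Try to acquire the lock; on failure, cooperatively yield instead of
// spinning, then retry. A contended thread never busy-waits.
fn with_lock<T, R>(m: &Mutex<T>, f: impl FnOnce(&mut T) -> R) -> R {
    let mut f = Some(f);
    loop {
        match m.try_lock() {
            Ok(mut guard) => return (f.take().unwrap())(&mut guard),
            Err(_) => thread::yield_now(), // contended: yield, retry later
        }
    }
}

fn demo() -> u32 {
    let counter = Mutex::new(0u32);
    for _ in 0..10 {
        with_lock(&counter, |c| *c += 1);
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("{}", demo()); // prints 10
}
```

In an RTFM-style system the `Err` arm would never actually be taken, since claiming a resource is guaranteed to succeed.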
@SamP20 Thanks for giving insight into your use-case! I'd like to propose a different way of performing "yields". Let's say you have 3 processing steps that are all part of the same interrupt handler, and yielding is needed:

```rust
#[interrupt]
fn handler() {
    processing1();
    yield();
    processing2();
    yield();
    processing3();
}
```

This can be mapped to RTFM (or other event based systems) by having a state machine as:

```rust
enum States {
    Processing1,
    Processing2,
    Processing3,
}

#[interrupt]
fn handler_sm() {
    static mut state: States = States::Processing1;
    match state {
        States::Processing1 => {
            processing1();
            state = States::Processing2;
        }
        States::Processing2 => {
            processing2();
            state = States::Processing3;
        }
        States::Processing3 => {
            processing3();
            state = States::Processing1;
        }
    }
}
```

However, this is quite verbose to always write over and over again, so I have been working on a State Machine Language DSL, in which the following could be done to get the same behavior:

```rust
use sml::statemachine;

statemachine!(
    *State1 + ProcessingEvent / processing1 = State2,
    State2 + ProcessingEvent / processing2 = State3,
    State3 + ProcessingEvent / processing3 = State1,
);

#[interrupt]
fn handler_sml() {
    static mut sm: StateMachine = StateMachine::new();
    sm.process_event(States::ProcessingEvent);
}
```

What do you think of this pattern? What are your thoughts / worries? With this we can always stay within the strong guarantees of RTFM (or similar) and still get yields. On this note (@SamP20):
The meaning of zero cost is that there is no way to do it better by hand (and not that the operation is free), and in the end you must solve the ownership problem, together with handling of "chunks" with correct ownership, where …

Or maybe I misunderstood your thought about this? :) I'm very open to discussing this!
Thanks for the state machine idea. That sounds almost like
How does the UARTE fit into your state machine model? Also how would you manage reading data from different peripherals in different states? For example in one state you might be waiting on UART0 data, but in a later state you might be waiting on the i2c bus.

When I talk about zero cost, I mean if I import the nrf52-hal crate but decide not to use the UARTE, then it shouldn't consume any extra memory in the final code. If any static pools are required then in my opinion it's the end user that should create them.

@perlindgren I already have mutexes yielding on failure. They also keep a list of waiting threads so they can be woken up once the mutex is released. Take a look here if you're curious. Also I'm unable to have threads preempt each other at the moment due to how they're implemented. The ARM Cortex architecture has two stack pointers, MSP and PSP. Most of the time the MSP is used, but I have the …
@SamP20, some updates are on the way.

Best regards
OW this all sounds great =) Looking forward to merging this ;)
I added tentative support for setting DMA_SIZE from the user's side. The `pool!` needs to be in the driver (for the driver to see it). On the user side you set a feature to choose the size.
So if you don't give an explicit DMA_SIZE it will default to 16.
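On the user side, selecting one of these features could look like the sketch below; the HAL crate name and version are placeholders for illustration, while the `UARTE_DMA_SIZE_*` feature names are the ones from this PR:

```toml
# User's Cargo.toml (sketch): pick the DMA block size via a feature.
# Crate name and version are assumptions, not this repository's actual names.
[dependencies.nrf52-hal]
version = "0.1"
features = ["UARTE_DMA_SIZE_32"]
```

With no `UARTE_DMA_SIZE_*` feature enabled, the driver falls back to the 16 byte default.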
An example using the DMA pool can be found in … It requires some local crates in the parent directory (or you may change the
The The Best regards, |
Thanks for the heaps of work, I am really excited to see this coming! What I wonder about is how you would handle this with the …
@Yatekii, well the approach I took for now was an enumeration of DMA_SIZE_xx, but that is clearly not the way to go if we want it to be fully generic. We will see what const generics might bring to the table (hopefully it will be of help). In the meantime, we can take a detour through the module system, and use a crate to store the default size, and just override the crate by a patch section in the user Cargo.toml. It's quite straightforward, and more generic than the "feature" enumeration. Right now I'm hunting a bug to be able to run everything on stable; in the worst case we just have to wait out const_fn, which is due to land sometime during the summer.

Best regards
Sounds good to me! Thanks! |
Seems like the |
The code is now updated with byte timeout functionality and being able to read and write fewer bytes than a full DMA package! |
The public interface has been simplified and more checks have been added as well. |
Ah, the timeout feels nice. Fix for the printer, maybe something like
Also, thoughts on impl Debug for the hprintln, where Debug for MaybeUninit doesn't work currently?
|
Thanks for the feedback @jacobrosenthal ! |
Interesting error when merging with master: it seems that the nRF9160 core does not have support for the …
Does heapless not support thumbv8* yet? |
It seems so, yes - from a Cortex core perspective it should be fine. I think the feature flag check in heapless just needs updating.
@jamesmunns I made a PR for it here: rust-embedded/heapless#113 |
Probably outside of the scope of this PR, but any thoughts on avoiding multiple buffers by writing directly into the dmapoolnode? TL;DR I think this is the old rolling-your-own core::io problem, right? I'm looking at cbor which recently got no_std support. This is the naive multiple copy scenario from the docs (haven't tested beyond that it compiles):
You can see it news up a SliceWriter with a &mut [u8] https://github.com/pyfisch/cbor/blob/0dfd50959a53dbf1a3d1ac55cc1735b104956c8f/src/write.rs#L139 but our dmapoolnode doesn't degrade to that. It also implements their fake writer trait only supporting a write_all fn https://github.com/pyfisch/cbor/blob/0dfd50959a53dbf1a3d1ac55cc1735b104956c8f/src/write.rs#L24 and your dmapoolnode exposes a write trait. Anyone heard anything on core::io? Should dmapoolnode implement something here? Or should I bother cbor? Or am I missing a simple solution?
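A minimal sketch of the zero-copy direction being asked about: a writer over a caller-provided `&mut [u8]`, so a serializer could emit bytes straight into (say) a DMA pool node's storage instead of filling an intermediate buffer that then gets copied. `BufWriter` and `demo` are hypothetical names, not this PR's API and not cbor's `SliceWriter`:

```rust
// Hypothetical sketch: a fixed-capacity writer over borrowed storage.
// A serializer that accepts such a writer can fill DMA-backed memory
// directly, avoiding the intermediate-buffer copy.

struct BufWriter<'a> {
    buf: &'a mut [u8],
    len: usize,
}

impl<'a> BufWriter<'a> {
    fn new(buf: &'a mut [u8]) -> Self {
        BufWriter { buf, len: 0 }
    }

    // A write_all in the spirit of cbor's minimal writer trait:
    // error out if the fixed buffer would overflow.
    fn write_all(&mut self, data: &[u8]) -> Result<(), ()> {
        let end = self.len.checked_add(data.len()).ok_or(())?;
        if end > self.buf.len() {
            return Err(()); // would not fit in the DMA-sized buffer
        }
        self.buf[self.len..end].copy_from_slice(data);
        self.len = end;
        Ok(())
    }

    fn written(&self) -> &[u8] {
        &self.buf[..self.len]
    }
}

fn demo() -> usize {
    // Pretend this is the 16-byte storage of a DMA pool node.
    let mut storage = [0u8; 16];
    let mut w = BufWriter::new(&mut storage);
    w.write_all(b"hello").unwrap();
    w.write_all(b" dma").unwrap();
    assert_eq!(w.written(), b"hello dma");
    // Writing past the fixed capacity fails instead of reallocating.
    assert!(w.write_all(&[0u8; 16]).is_err());
    w.written().len()
}

fn main() {
    println!("{}", demo()); // prints 9
}
```

The open question in the thread (how this interacts with `MaybeUninit`-backed node storage) is not addressed by this sketch.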
Good feedback, thanks! This way you can short circuit, and not need the intermediary buffer, as long as the thing before supports directly pumping data to a …
Yeah, I think that would do it for this case. I'd offer, but I'm not sure how that works with MaybeUninit.
Thanks for the find, @jacobrosenthal, I'll change to that! Also, I just rebased all this on master to be in sync again.
So this has been working for a long while now; how does @nrf-rs/all feel about this? Time to start bikeshedding names? Or are there questions/thoughts that need to be handled?
Did a rough review, left some comments. Also needs a rebase.
```diff
@@ -5,7 +5,7 @@ authors = ["James Munns <[email protected]>"]
 edition = "2018"

 [dependencies]
-cortex-m-rtfm = "0.4.3"
+cortex-m-rtfm = "0.4"
```
Why make the version requirement less specific? I think it can be useful to see what the latest version was, when a dependency was added or last updated.
I also think being less specific can make breakage more likely in some scenarios, although this is unlikely to be the case here.
```toml
cortex-m-rtfm = "0.4"
panic-semihosting = "0.5.1"
cortex-m-semihosting = "0.3.3"
heapless = ">= 0.5.0"
```
Why the `>=` requirement? Won't this include `0.6` and later versions, which may break this code?
```rust
);
uarte
    .config
    .write(|w| w.hwfc().bit(hardware_flow_control).parity().variant(parity));
```
I know everyone's tired of hearing it, but this sucks. While tool-supported consistent formatting has its merits, that whole notion, that it needs to be applied universally, and that everyone needs to agree, is just plain wrong.
I don't care about consistency in formatting. I care about readability. This is a clear example where readability suffers, as is every other svd2rust-generated API. Since using svd2rust APIs is the whole point of this project, using rustfmt here is simply a bad idea.
I won't block merging this PR over this, but I'd really appreciate it if not every pull request came with a million formatting changes to unrelated code. This just makes the diffs harder to review and in many cases (like this one) it makes the code plain worse.
Actually I have to disagree. I find the official style much more readable than the previous version.
I have to say I'm baffled about this comment. Those svd2rust function calls always come in pairs, so I figured it was self-evident that any formatting choice that makes it easy to immediately recognize what those pairs are would be a readability win.
Could you elaborate what you find more readable about the new style here? I'm genuinely interested.
I don't want to hunt for the closure argument on the next line; the merging of the pairs (due to fitting in a single line) I don't mind as much.

If anything I'd have preferred:

```rust
uarte.config.write(|w| w
    .hwfc().bit(hardware_flow_control)
    .parity().variant(parity)
);
```

or the version rustfmt would have generated for a longer version, like

```rust
uarte.config.write(|w| {
    w.hwfc()
        .bit(hardware_flow_control)
        .parity()
        .variant(parity)
});
```

because it highlights exactly the flow of the register manipulation:

- Take the register provided as closure argument
- Extract the position of the `hwfc` bits
- Set the bits to `hardware_flow_control`
- Extract the position of the `parity` bits
- Set the bits to `parity`

I'm not too hot about the pair argument since they're not expressed in the API; it's implicit knowledge that you have and easy to get wrong. If anything the API should provide setters instead, like:

```rust
uarte.config.write(|w| w.hwfc(hardware_flow_control).parity(parity));
```

but it doesn't, so the point is moot.
> If anything I'd have preferred [...] the version rustfmt would have generated for a longer version [...] because it highlights exactly the flow of the register manipulation
I don't see how that version highlights the flow of register manipulation better than any other. The order of operations is the same in all versions. Only whether they are split over lines, and how, changes.
> I'm not too hot about the pair argument since they're not expressed in the API, it's implicit knowledge that you have and easy to get wrong

I'm not sure I understand what you mean. The API doesn't use the word "pair", but the method that selects a field and the method that sets its value clearly belong together. The first is a method of `w`, the second is a method of the type returned by the first method.
If we ignore how the API is structured, they still clearly belong together. One method sets the value of a field and doesn't make any sense without the method that selects which field that is. That you would prefer to have only one method that does both (your last code example) supports that point.
That this isn't clear to everyone is exactly why the formatting should highlight those relationships, not hide them.
> if anything the API should provide setters instead like:
I might prefer that too, but as you say, the point is moot.
Isn't the whole point of machine formatted code to avoid random unrelated style changes? If we don't like what the tool does, we should teach the tool to do what we do like (rustfmt.toml, lints, or whatever), and if it is incapable of being taught, I think we should fix the tool, or live with it.

One possible workaround is to stop chaining all the calls together:

```rust
peripheral.register.write(|w| {
    w.foo().value(1);
    w.bar().value(2);
    unsafe { w.baz().bits(25); }
});
```
> Isn't the whole point of machine formatted code to avoid random unrelated style changes?
If that's the case, it's failing spectacularly, since it's doing the exact opposite.
> If we don't like what the tool does, we should teach the tool to do what we do like (rustfmt.toml, lints, or whatever) and if it is incapable of being taught, I think we should fix the tool
Then please go teach the tool how to understand code beyond the syntax level so it can make sensible decisions. Personally, I don't feel like dropping everything I currently do for a decade or so to make that happen.
> or live with it.
I don't believe it's self-evident that code should be machine-formatted always in every situation. I know it's supposed to be such a universal solution, but that by itself won't make me roll over against my better judgement.
> One possible workaround is to stop chaining all the calls together:
I think that's an excellent suggestion. That still won't make me love rustfmt, but it would take away my biggest gripe.
```diff
@@ -303,8 +287,7 @@ impl<T> Uarte<T> where T: Instance {
             //
             // The PTR field is a full 32 bits wide and accepts the full range
             // of values.
-            unsafe { w.ptr().bits(rx_buffer.as_ptr() as u32) }
-        );
+            unsafe { w.ptr().bits(rx_buffer.as_ptr() as u32) });
```
To follow up on my rant on formatting: This is an example where the code didn't get any worse due to the formatting change, but it took me way too long to figure out what had changed about this line, and why that's okay. You're making my life as a reviewer hard for no conceivable benefit.
That wouldn't be a problem if it was correctly formatted from the start... One of the big benefits of using rustfmt consistently throughout is that you eliminate random style changes.
And I would be all for that, if it didn't come at the cost of less readability in many other places.
```rust
/// DMA block size
/// Defaults to UARTE_DMA_SIZE = 16 if not explicitly set
#[cfg(not(any(
    feature = "UARTE_DMA_SIZE_4",
    feature = "UARTE_DMA_SIZE_8",
    feature = "UARTE_DMA_SIZE_16",
    feature = "UARTE_DMA_SIZE_32",
    feature = "UARTE_DMA_SIZE_64",
    feature = "UARTE_DMA_SIZE_128",
    feature = "UARTE_DMA_SIZE_255"
)))]
pub const UARTE_DMA_SIZE: usize = 16;

#[cfg(feature = "UARTE_DMA_SIZE_4")]
pub const UARTE_DMA_SIZE: usize = 4;

#[cfg(feature = "UARTE_DMA_SIZE_8")]
pub const UARTE_DMA_SIZE: usize = 8;

#[cfg(feature = "UARTE_DMA_SIZE_16")]
pub const UARTE_DMA_SIZE: usize = 16;

#[cfg(feature = "UARTE_DMA_SIZE_32")]
pub const UARTE_DMA_SIZE: usize = 32;

#[cfg(feature = "UARTE_DMA_SIZE_64")]
pub const UARTE_DMA_SIZE: usize = 64;

#[cfg(feature = "UARTE_DMA_SIZE_128")]
pub const UARTE_DMA_SIZE: usize = 128;

// Currently causes internal OOM, needs fixing
// Maximum DMA size is 255 of the UARTE peripheral, see `MAXCNT` for details
#[cfg(feature = "UARTE_DMA_SIZE_255")]
pub const UARTE_DMA_SIZE: usize = 255;
```
I don't like this. What if multiple crates in the dependency graph depend on the HAL, but specify different sizes? Then it won't compile.

This should be added as a generic argument on `UarteDMAPoolNode` and wherever else it's needed.
Agreed, it's ugly. If I recall, it needs to be a constant as we are using the size for allocation. Not sure if we can currently do it in Rust in a more elegant way. I'm open to suggestions here.
Ah right. I remember there are limitations around array lengths (although I don't recall the details), so it makes sense you needed to do this to get it to work.
I'm fine with merging it like this for now. Const generics seem to be making progress lately, so hopefully we can fix it by the time someone runs into the dependency graph problems I described.
I was aiming at zero-cost and thus avoiding passing arguments and run-time assertions (at least this approach is declarative and zero cost). Regarding Rust's feature handling, I'm still hoping for a complete feature algebra, but there might be some soundness things lurking that are blocking. Regarding const generics/eval, there is a lot going on: const assert, etc. Maybe there will be (or even is) a better solution possible.
You could use `typenum` to provide the size as a type.
It's used to allocate memory; not sure if we have const eval for usize.
Well, if you use typenum the value is evaluated at compile time so it should be zero cost.
If `typenum` is what `heapless` uses, then we should do that.
`heapless` uses `generic-array`, which uses `typenum`.
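For reference, the generic-argument idea the reviewers are converging on could be sketched roughly as below, written with const generics (which were not yet stable at the time of this PR; `typenum`/`generic-array` would play the same role back then). The `DmaNode` name and its fields are hypothetical, not the PR's actual types:

```rust
// Hypothetical sketch: carry the DMA block size as a generic parameter
// on the node type instead of a crate-wide cargo feature, so different
// crates in the dependency graph can pick different sizes.

struct DmaNode<const N: usize> {
    len: u8,
    buf: [u8; N], // storage sized per-instantiation, still zero cost
}

impl<const N: usize> DmaNode<N> {
    const fn capacity() -> usize {
        N
    }

    fn new() -> Self {
        DmaNode { len: 0, buf: [0u8; N] }
    }
}

fn main() {
    // Two users of the same HAL can pick different sizes without
    // conflicting feature flags in the dependency graph.
    let small: DmaNode<16> = DmaNode::new();
    let large: DmaNode<255> = DmaNode::new();
    let _ = (small.len, large.buf.len());
    println!("{} {}", DmaNode::<16>::capacity(), DmaNode::<255>::capacity()); // prints "16 255"
}
```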
```rust
/// Each node in the `UarteDMAPool` consists of this struct.
pub struct UarteDMAPoolNode {
```
Should be `UartDmaPoolNode` as per established convention. I'm all for ignoring conventions when there's a good reason, but in this case, I think the conventional name is easier to parse.
@korken89 and @jacobrosenthal I added a usage example here: It still needs more testing but it looks like it's working fine; let me know what you think. Thanks for this work btw.

Edit: I just realized that there's a problem in …
Thanks for the ping! I am currently a bit swamped, but I'll try and come back on this in the near future. |
Hi nrf team,

We have added the interrupt driven code from the RTFM workshop @ Oxidize and would like feedback.

Purpose:
- `RTFM` (example will be added)

Please note:
- The `DMAPool` is fixed to 16 byte size, which we should discuss. Currently this means that only full 16 byte chunks can be sent or received, which we should look into fixing.
- `min-const-fn` in `heapless`, but we are currently looking into that

Future extensions:

Best Regards,
Emil Fresk & Per Lindgren (@perlindgren)