Change client resolution semantics to match what's needed for 3PH #494
One notion I have is that it might help to introduce more asymmetry between SendCall and RecvCall; initially these were just a question of who allocated the memory, with Recv being more appropriate for calls coming from the network, and Send more appropriate for calls going to it. Then, when we added flow control support, we made it so that Send respects the FlowController but Recv does not, per the different use cases. Maybe we can get away with doing a similar thing here: when app code makes calls (using Send) we deep-shorten (using resolveHook internally), but when the rpc system makes calls (using Recv) we don't, and just pass them down the chain naively. I need to think this through, but this might give us what we want.
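To make that concrete, here's a minimal Go sketch of the proposed asymmetry; the hook type and helper functions are invented stand-ins for illustration, not go-capnp's actual ClientHook interface:

```go
package main

import "fmt"

// Invented stand-in types, not go-capnp's real internals: this only
// illustrates the proposed Send/Recv asymmetry.
type hook interface {
	call(msg string)
	resolution() hook // the hook this one has resolved to, or nil
}

type proxyHook struct {
	name     string
	resolved hook
}

func (p *proxyHook) resolution() hook { return p.resolved }

func (p *proxyHook) call(msg string) {
	if p.resolved != nil {
		// Unshortened path: relay one hop down the chain.
		fmt.Printf("%s relays %q\n", p.name, msg)
		p.resolved.call(msg)
		return
	}
	fmt.Printf("%s handles %q\n", p.name, msg)
}

// sendCall models app-originated calls: deep-shorten first, walking the
// resolution chain as far as it goes (what resolveHook does today).
func sendCall(h hook, msg string) {
	for h.resolution() != nil {
		h = h.resolution()
	}
	h.call(msg)
}

// recvCall models rpc-system-originated calls: no shortening; just hand
// the call to the next hop and let the chain relay it naively.
func recvCall(h hook, msg string) {
	h.call(msg)
}

func main() {
	s := &proxyHook{name: "S"}
	r := &proxyHook{name: "R", resolved: s}
	q := &proxyHook{name: "Q", resolved: r}

	sendCall(q, "from app")     // S handles it directly
	recvCall(q, "from network") // Q relays, R relays, S handles
}
```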
There is something I don't quite understand.
My understanding is that we are dealing with a chain P -> Q -> R -> S. The argument seems to be that performance isn't degraded (much?) because Q has resolved to R, leaving us with P -> R -> S. Having typed all this out, I am slightly more confident that it is the latter, but then this passage would contradict that conclusion:
P.S.: maybe it would make sense for @kentonv to weigh in here?
Yeah, that's kinda what I'm puzzling over as well; it seems like the docs essentially assume something at a higher level of abstraction is going to do the "deep" shortening, and I'm fuzzy on what that is supposed to be. Would definitely appreciate @kentonv's thoughts. Also, from talking to ocapn folks, I have learned that E didn't have anything like disembargos -- it just punted and took the latency hit of having to queue calls locally and wait for all prior calls on the promise to return before actually sending them. It is more obvious to me how that would work.
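For what it's worth, the E-style approach could be modeled roughly like this; a toy sketch with made-up types, just to pin down the queue-and-drain behavior described above:

```go
package main

import "fmt"

// Toy model (made-up types) of the E-style approach: no disembargos;
// calls on a promise queue locally and are only released once the
// promise has resolved AND every call sent before resolution has
// returned, preserving call order.
type call struct{ method string }

type promise struct {
	inFlight int          // pre-resolution calls not yet returned
	queue    []call       // calls waiting for resolution + drain
	resolved func(c call) // delivery to the resolution, once known
}

func (p *promise) send(c call) {
	if p.resolved == nil || p.inFlight > 0 {
		p.queue = append(p.queue, c) // sending now could break ordering
		return
	}
	p.resolved(c)
}

// returned records that one pre-resolution call finished; once the
// pipeline drains, queued calls are released in order.
func (p *promise) returned() {
	p.inFlight--
	if p.inFlight == 0 && p.resolved != nil {
		for _, c := range p.queue {
			p.resolved(c)
		}
		p.queue = nil
	}
}

func main() {
	p := &promise{inFlight: 2} // two calls already sent on the promise path
	p.resolved = func(c call) { fmt.Println("delivered:", c.method) }

	p.send(call{"foo"}) // queued: earlier calls still in flight
	p.returned()
	p.returned()        // drain complete; "foo" delivered here
	p.send(call{"bar"}) // delivered immediately
}
```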
Sorry, I don't follow what the point of confusion is here. If you have a path:

P -> Q -> R -> S

and you want to resolve Q -> R, to make it:

P -> R -> S

the rule here is: once you have decided this, you cannot subsequently decide that you want to resolve Q directly to S instead. Q is now permanently a proxy that relays to R. But this is OK, because Q won't be around much longer anyway; as soon as P has received the message that Q should be resolved to R, P will stop sending messages to Q. Moreover, P can then discover that R should further resolve to S, at which point P can begin talking directly to S.

FWIW, this issue applies even in two-party scenarios, when you have promise chains pointing back and forth between the two parties.
@kentonv I think you've just cleared it up (for me at least). Thank you!
I think a point of confusion here is that @zenhack is imagining Q has clients other than P. That is not the case here. I am using Q to designate a specific export on a specific connection, where the other end of that connection is P's vat.
Let's label the edges:

P -q-> Q -r-> R -s-> S

Here, capital letters are objects, and lower-case letters are specific exports over the specific connections connecting those objects. Q was originally a promise but has resolved to R, and it informs P of this replacement. Now, the rule is: all future messages arriving over the edge q must be forwarded to R over the edge r, forever.
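In code, the rule might look something like this minimal sketch; all types are invented for illustration (neither implementation actually structures it this way), with deliver on an edge modeling a message arriving over that specific export:

```go
package main

import "fmt"

// Invented types sketching the rule above: an export that has announced
// a resolution becomes a permanent relay to that one target.
type target interface{ deliver(msg string) }

type object struct{ name string }

func (o *object) deliver(msg string) { fmt.Printf("%s got %q\n", o.name, msg) }

// export models one edge: a specific export on a specific connection.
type export struct {
	local   target // what this export originally pointed at
	relayTo target // set exactly once, when Resolve is sent; never changed
}

func (e *export) deliver(msg string) {
	if e.relayTo != nil {
		// The permanent-proxy rule: everything arriving over this edge
		// is forwarded along the announced resolution, forever. The edge
		// is never re-resolved to something deeper (e.g. S); P learns
		// about S by following R directly.
		e.relayTo.deliver(msg)
		return
	}
	e.local.deliver(msg)
}

func main() {
	r := &export{local: &object{"R"}} // edge r, pointing at R
	q := &export{local: &object{"Q"}} // edge q, originally at promise Q

	q.relayTo = r // Q resolved to R; q now relays to r forever

	q.deliver("sent before P processed the Resolve") // R got it, via q -> r
	r.deliver("sent after P switched to R")          // R got it, via r
}
```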
Ok, yeah, I think part of what I'm snagging on is that the distinction between objects and specific exports in go-capnp makes this hard, and the implementation will probably need to be tweaked to facilitate it. The entries in a connection's export table literally store the same object that an app would use to make calls. So it sounds like the way…
(Just looked at the C++ implementation, I see that the exports table stores a ClientHook, not a Client, so yeah I think I get it now)
Yeah. In C++, an imported RPC promise capability is actually implemented using two layers: the inner layer is a ClientHook targeting the specific import slot, and the outer layer waits for resolution and then redirects to the resolved destination. The export table just contains ClientHooks. The way we implement this rule is: once we've sent out a message indicating that an export should be resolved, we update the export table entry to point directly at the inner ClientHook, whereas previously it pointed at the outer one. Specifically in…
Of course, other implementations are possible.
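For instance, a rough Go rendering of the two-layer structure described above might look like the following; all names here are invented for illustration and match neither go-capnp's internals nor the C++ implementation's actual types:

```go
package main

import "fmt"

type caller interface{ call(method string) }

// innerHook targets a specific import slot on a specific connection.
type innerHook struct{ importID uint32 }

func (h *innerHook) call(method string) {
	fmt.Printf("send %q to import %d\n", method, h.importID)
}

// outerHook is what application code holds for an imported promise: it
// waits for resolution, then redirects calls to the resolved destination.
type outerHook struct {
	inner    *innerHook
	resolved caller // nil until a Resolve arrives
}

func (h *outerHook) call(method string) {
	if h.resolved != nil {
		h.resolved.call(method) // post-resolution: follow the new target
		return
	}
	h.inner.call(method) // pre-resolution: target the import slot
}

func main() {
	inner := &innerHook{importID: 5}
	outer := &outerHook{inner: inner}

	// Export table entries hold hooks. Before this export is resolved,
	// the entry points at the outer hook, which follows resolutions.
	exportEntry := caller(outer)

	// Once we send a Resolve for this export, we pin the entry to the
	// inner hook, so calls arriving on this export keep taking the path
	// we announced, even if the outer hook later resolves further.
	exportEntry = inner

	outer.resolved = &innerHook{importID: 9} // a later, deeper resolution
	exportEntry.call("relayed call")         // still goes to import 5
	outer.call("app-side call")              // shortened to import 9
}
```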
Makes sense. Thanks for your help!
Background from rpc.capnp:
go-capnp/std/capnp/rpc.capnp, lines 686 to 709 at 1a829fd
Right now, when a capability is resolved, we overwrite the internal clientHook. This has a couple of implications:

(1) sendCap() encodes the capability as its resolution, rather than as the original promise.
(2) Calls made after resolution are shortened all the way to the deepest resolution in the chain.
(2) sounds like a good thing initially, but per the docs above it will likely be problematic in a level 3 implementation. (1) is probably not OK either: if we are to avoid multi-hop shortening, sendCap() needs to continue to encode the capability as the original promise.
My current thinking is that we need to refactor the internals of Client so that, on resolution, the initial clientHook is not dropped, and after resolution we always invoke the first resolution, rather than doing what resolveHook() does and walking as deep into the chain as it can. I think this will work for basic correctness (a rough sketch follows below), but I see two downsides:
I can envision a scenario where:
I want to think a bit more to see if there's a way we can keep this transparent; I have some ideas to investigate that may not pan out.
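Here's the rough sketch mentioned above, with simplified stand-in types rather than go-capnp's real Client/clientHook internals; the point is just the one-step dispatch and the stable sendCap() encoding:

```go
package main

import "fmt"

// Simplified stand-ins for Client and clientHook; the real internals are
// more involved. The proposal: keep the original hook on resolution, and
// dispatch to the *first* resolution only, instead of walking the chain
// as deep as possible the way resolveHook() does today.
type clientHook struct {
	name       string
	resolvedTo *clientHook // first resolution, if any; never overwritten
}

type client struct{ hook *clientHook }

// call dispatches exactly one step: to the first resolution if there is
// one, otherwise to the original hook. No deep walk.
func (c *client) call(method string) {
	h := c.hook
	if h.resolvedTo != nil {
		h = h.resolvedTo
	}
	fmt.Printf("%s handles %q\n", h.name, method)
}

// sendCap encodes the capability for the wire. Because the original hook
// is never dropped, we can keep encoding the capability as the original
// promise, avoiding multi-hop shortening on the wire.
func (c *client) sendCap() string { return c.hook.name }

func main() {
	q := &clientHook{name: "Q"}
	cl := &client{hook: q}

	// Q resolves to R, and R itself later resolves to S.
	q.resolvedTo = &clientHook{name: "R", resolvedTo: &clientHook{name: "S"}}

	cl.call("foo")            // handled by R, not S
	fmt.Println(cl.sendCap()) // still encodes as Q
}
```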
@lthibault interested in your thoughts.