Currently, all addressing on the binding side is done through strings. On the client and API side this is fine and should not change. However, there is an optimization opportunity in the relay protocol.
If the client uses long identification strings (cluster/topic names), it places a double burden on the system: every time a message is sent to that address, the binding needs to pass the long string to the relay node, and on the relay side, Iris needs to hash that string into its final numerical Scribe/Pastry address for every single message.
This could be made significantly faster by introducing a lookup mechanism into the relay protocol. Whenever a binding encounters a new id string, it looks up the string's hashed id via a relay request and caches it. Whenever a message is sent to that address afterwards, the cached numerical id is used instead of the textual one.
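As a rough sketch of what such a binding-side cache could look like (the `addrID` type, the `addrCache` struct and the `lookup` callback are illustrative assumptions, not actual relay protocol types):

```go
package iris

import "sync"

// addrID is the fixed-size numerical address the relay would return
// for a textual cluster/topic name (Iris uses 6-byte internal ids).
type addrID [6]byte

// addrCache maps textual addresses to their cached numerical ids.
type addrCache struct {
	lock  sync.RWMutex
	cache map[string]addrID
}

func newAddrCache() *addrCache {
	return &addrCache{cache: make(map[string]addrID)}
}

// resolve returns the cached numerical id for a textual address,
// falling back to a (hypothetical) relay lookup request on first use.
func (c *addrCache) resolve(name string, lookup func(string) (addrID, error)) (addrID, error) {
	c.lock.RLock()
	id, ok := c.cache[name]
	c.lock.RUnlock()
	if ok {
		return id, nil
	}
	// First encounter: ask the relay to hash the name, then cache it.
	id, err := lookup(name)
	if err != nil {
		return addrID{}, err
	}
	c.lock.Lock()
	c.cache[name] = id
	c.lock.Unlock()
	return id, nil
}
```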
The advantage is that Iris currently uses 6-byte internal ids for clusters and topics. The address blob transferred between binding and relay would thus be much smaller, and the relay side would no longer need to post-process the address.
The question that arises is how to differentiate between clusters and topics in this new scheme, since the old one concatenated a prefix value onto the textual address. Prefixing the binary address is obviously not good, since Pastry routes on address prefixes and a fixed prefix would ruin the uniform distribution of the hashed addresses. Suffixing, on the other hand, might work; see the sketch below. Will need to explore this a bit.
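One possible reading of the suffixing idea, assuming the type tag replaces the final byte of the 6-byte id and the rest is filled from a truncated SHA-1 hash (both assumptions; the tag values are made up for illustration):

```go
package iris

import "crypto/sha1"

// Hypothetical type tags; their values are illustrative only.
const (
	tagCluster byte = 0x01
	tagTopic   byte = 0x02
)

// hashAddress derives a 6-byte numerical id from a textual name,
// reserving the final byte for the cluster/topic tag. Appending the
// tag instead of prepending it keeps the high-order bytes uniformly
// distributed, which is what Pastry routes on.
func hashAddress(name string, tag byte) [6]byte {
	sum := sha1.Sum([]byte(name))
	var id [6]byte
	copy(id[:5], sum[:5]) // first 5 bytes: uniformly distributed hash
	id[5] = tag           // last byte: type tag, outside the routed prefix
	return id
}
```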
The same question arises for the sub-group optimization in the Scribe layer. The solution will probably be the same.
But every relay has a unique ID; embedding this in the message would enable the receiving end to send a message back to the originator, no? Maybe this only works on the Pastry level but not on the Scribe level; I don't know enough about how that is routed through the network.
My comment is based on what I have read and understood so far about Iris, and I likely need to learn more :-)
You understood it correctly, and that is indeed how the request/reply pattern works. The requesting relay embeds its own ID into the message, which gets routed to its destination by Scribe, after which the reply is returned to the originator over Pastry directly.
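For illustration, the pattern described above could be sketched as (struct and field names are assumptions, not the actual Iris wire format):

```go
package iris

// request travels to its destination via Scribe; the embedded
// originator id lets the responder skip Scribe entirely and route
// the reply back over Pastry directly.
type request struct {
	Origin  []byte // Pastry id of the requesting relay
	Payload []byte // application data
}

type reply struct {
	Dest    []byte // copied from request.Origin, routed directly via Pastry
	Payload []byte
}
```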
The issue/question is rather about an optimization possibility: currently all semantic addresses are turned into hashes on the relay side, and the question is whether it would be worthwhile to convert them to hashes on the client side instead (hence saving repeated hashing effort). However, I'm keen on dropping this, for now at least, since it adds a significant complexity overhead on top of the relay protocol, which is already getting tricky in certain spots.