
[22281] When communicating in data sharing mode, if the QoS is set to best-effort, the writer will return an error code RETCODE_OUT_OF_RESOURCES after sending a number of data equal to the history depth. However, if the QoS is set to reliable, this issue does not occur. Why is that? #5411

Open
Eternity1987 opened this issue Nov 19, 2024 · 4 comments
Labels
in progress Issue or PR which is being reviewed

Comments

@Eternity1987

Is there an already existing issue for this?

  • I have searched the existing issues

Expected behavior

When the QoS is set to best-effort, data can be sent continuously without any issues.

Current behavior

If the QoS is set to best-effort, the writer returns the error code RETCODE_OUT_OF_RESOURCES after sending a number of samples equal to the history depth.

Steps to reproduce

A writer and a reader communicate; the data type is bounded and plain, and data sharing is ON.
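
For reference, a minimal sketch (not the actual reproducer) of a writer configured as described above, using the Fast DDS 2.x C++ API; the history depth and helper-function name are illustrative assumptions:

```cpp
// Minimal sketch of the writer-side QoS described above.
// The history depth is an assumed value, not the reporter's actual setting.
#include <fastdds/dds/publisher/Publisher.hpp>
#include <fastdds/dds/publisher/DataWriter.hpp>
#include <fastdds/dds/publisher/qos/DataWriterQos.hpp>
#include <fastdds/dds/topic/Topic.hpp>

using namespace eprosima::fastdds::dds;

DataWriter* create_datasharing_writer(Publisher* publisher, Topic* topic)
{
    DataWriterQos qos = DATAWRITER_QOS_DEFAULT;
    qos.reliability().kind = BEST_EFFORT_RELIABILITY_QOS;  // the failing case
    qos.history().kind = KEEP_LAST_HISTORY_QOS;
    qos.history().depth = 10;                               // assumed depth
    qos.data_sharing().automatic();                          // data sharing ON

    // The topic must use a bounded, plain type for data sharing / zero copy.
    return publisher->create_datawriter(topic, qos);
}
```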

Fast DDS version/commit

2.14.3

Platform/Architecture

Ubuntu Focal 20.04 amd64

Transport layer

Data-sharing delivery, Zero copy

Additional context

(screenshot attached)

XML configuration file

No response

Relevant log output

No response

Network traffic capture

No response

@Eternity1987 Eternity1987 added the triage Issue pending classification label Nov 19, 2024
@Eternity1987
Author

Why does this phenomenon occur when the kind of the DurabilityQosPolicy is TRANSIENT_LOCAL_DURABILITY_QOS, while messages are sent normally if it is VOLATILE_DURABILITY_QOS? Can you provide some insight into why this might happen, such as an ACKNACK or GAP not being received?
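
For clarity, a sketch of the two durability settings being compared (the helper function is illustrative only):

```cpp
#include <fastdds/dds/publisher/qos/DataWriterQos.hpp>

using namespace eprosima::fastdds::dds;

// Illustrative helper: the only difference between the two scenarios above.
void set_durability(DataWriterQos& qos, bool use_volatile)
{
    // TRANSIENT_LOCAL_DURABILITY_QOS reproduces the RETCODE_OUT_OF_RESOURCES error;
    // VOLATILE_DURABILITY_QOS keeps sending normally, per the comment above.
    qos.durability().kind = use_volatile ? VOLATILE_DURABILITY_QOS
                                         : TRANSIENT_LOCAL_DURABILITY_QOS;
}
```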

@Eternity1987
Author

When free_payloads_ is empty, my program doesn't execute advance_till_first_non_removed. A single loan-and-write cycle consumes two CacheChange_t from free_payloads_. How can I ensure that release_payload is executed? Otherwise, even if a CacheChange_t is available when the loan is taken, it might no longer be available by the time write is called.

(screenshot attached)
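
For context, a sketch of the loan-and-write cycle being discussed; the error handling is an assumption, and loan_sample/discard_loan are the public DataWriter calls that sit above the payload pool internals (free_payloads_, release_payload):

```cpp
#include <fastdds/dds/publisher/DataWriter.hpp>

using namespace eprosima::fastdds::dds;

// Sketch of one loan-and-write cycle on a zero-copy (data-sharing) writer.
// When the shared payload pool is exhausted, loan_sample() is where
// RETCODE_OUT_OF_RESOURCES surfaces to the application.
bool publish_one(DataWriter* writer)
{
    void* sample = nullptr;
    ReturnCode_t ret = writer->loan_sample(sample);
    if (ret == ReturnCode_t::RETCODE_OUT_OF_RESOURCES)
    {
        // No free payload in the pool: previously written samples
        // have not been released back to the pool yet.
        return false;
    }
    if (ret != ReturnCode_t::RETCODE_OK)
    {
        return false;
    }

    // ... fill the loaned sample in shared memory here ...

    if (!writer->write(sample))
    {
        // Return the loan if the write could not be performed.
        writer->discard_loan(sample);
        return false;
    }
    return true;
}
```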


@rsanchez15 rsanchez15 changed the title When communicating in data sharing mode, if the QoS is set to best-effort, the writer will return an error code RETCODE_OUT_OF_RESOURCES after sending a number of data equal to the history depth. However, if the QoS is set to reliable, this issue does not occur. Why is that? [22281] When communicating in data sharing mode, if the QoS is set to best-effort, the writer will return an error code RETCODE_OUT_OF_RESOURCES after sending a number of data equal to the history depth. However, if the QoS is set to reliable, this issue does not occur. Why is that? Nov 22, 2024
@cferreiragonz
Contributor

Hi @Eternity1987. Thank you for reporting this.

We will address it and try to reproduce it in the following weeks.

@cferreiragonz cferreiragonz added in progress Issue or PR which is being reviewed and removed triage Issue pending classification labels Nov 27, 2024
@cferreiragonz
Contributor

Hi @Eternity1987.

Unfortunately, I was not able to reproduce the behavior on Fast DDS v2.14.3. The quickest way to identify the potential issue would be if you could provide some code or a reproducer where this behavior is reflected.

Additionally, it would be helpful if you could share more details about the QoS settings you are using for both the DataWriter and DataReader. In your initial comment, you mentioned that the issue occurs when the ReliabilityQosPolicy is set to BEST_EFFORT_RELIABILITY_QOS, but in a later comment, you suggest it might be resolved by changing the DurabilityQosPolicy to VOLATILE_DURABILITY_QOS. Could you clarify whether the issue is related to one or the other, or if it occurs with a specific combination of both QoS settings?
