Update toggle test failing at random #966

Open
alecandido opened this issue Aug 2, 2024 · 5 comments
Labels: bug (Something isn't working), tests

Comments

@alecandido
Member

I haven't yet identified any random numbers involved, but it seems that this test is failing at random:

@pytest.mark.parametrize("global_update", [True, False])
@pytest.mark.parametrize("local_update", [True, False])
def test_update_argument(platform, global_update, local_update, tmp_path):
    """Test possible update combinations between global and local."""
    NEW_CARD = modify_card(
        UPDATE_CARD, local_update=local_update, global_update=global_update
    )
    # platform = deepcopy(GlobalBackend().platform)
    old_readout_frequency = platform.qubits[0].readout_frequency
    old_iq_angle = platform.qubits[1].iq_angle
    Runcard.load(NEW_CARD).run(
        tmp_path,
        mode=AUTOCALIBRATION,
        platform=platform,
    )
    if local_update and global_update:
        assert old_readout_frequency != approx(platform.qubits[0].readout_frequency)
        assert old_iq_angle != approx(platform.qubits[1].iq_angle)
    else:
        assert old_readout_frequency == approx(platform.qubits[0].readout_frequency)
        assert old_iq_angle == approx(platform.qubits[1].iq_angle)

https://github.com/qiboteam/qibocal/actions/runs/10219428483/job/28277670887
(the second attempt passes without having changed anything; I also noticed that it fails only on the win-py3.11 combination, while win-py3.9, linux, and darwin work, so it seems to be down to chance)

@alecandido added the bug and tests labels on Aug 2, 2024
@alecandido
Member Author

@andrea-pasquale it seems you wrote this test. Any clue about it?

(it should not be difficult to debug, and anyone could do it, so it doesn't necessarily have to be you - but maybe you already have an idea)

@andrea-pasquale
Contributor

Yes, I know.
The test fails when the Lorentzian fit fails, and therefore the parameters are not updated.
As a patch, we could keep the test only for the single-shot classification, which should never fail.

@alecandido
Member Author

> The test fails when the Lorentzian fit fails, and therefore the parameters are not updated.

But how is it possible that the Lorentzian fit passes just by rerunning?
Is curve_fit using some sort of random numbers internally? If so, could we fix the seed for that?
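
(For reference, a quick standalone check, not taken from the qibocal codebase: with its default least-squares backend, scipy.optimize.curve_fit appears deterministic for fixed input data, so if that holds, the run-to-run variation would have to come from the data fed into the fit rather than from the fit itself.)

```python
import numpy as np
from scipy.optimize import curve_fit


def lorentzian(x, amplitude, center, width, offset):
    # Standard Lorentzian line shape, as used in spectroscopy fits.
    return offset + amplitude / (1 + ((x - center) / width) ** 2)


# Fixed synthetic data: noise is generated once, with a fixed seed.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 201)
y = lorentzian(x, 1.0, 0.1, 0.05, 0.2) + rng.normal(scale=0.05, size=x.size)

# Two fits on identical data give identical parameters:
# curve_fit itself does not introduce randomness here.
popt1, _ = curve_fit(lorentzian, x, y, p0=[1.0, 0.0, 0.1, 0.0])
popt2, _ = curve_fit(lorentzian, x, y, p0=[1.0, 0.0, 0.1, 0.0])
assert np.array_equal(popt1, popt2)
```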

@andrea-pasquale
Contributor

The output is different every time because we are testing with the dummy platform, right?
Given that we don't fix the platform's seed, different runs will give different results.
So one option is to fix the seed of dummy.
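
A minimal sketch of how that could look, assuming the dummy platform draws its samples from NumPy's global RNG (the fixture name and its placement in conftest.py are illustrative, not the actual qibocal setup):

```python
# conftest.py (sketch)
import numpy as np
import pytest


@pytest.fixture(autouse=True)
def fixed_seed():
    """Seed NumPy's global RNG before each test so dummy acquisition is reproducible."""
    np.random.seed(42)
    yield
```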

@alecandido
Member Author

That is definitely a seed that should be fixed.

Then, we may still decide that the fit is too unstable to be tested in the CI, especially with dummy, and that we'd rather design a closure test for it.
But all seeds should be fixed, for reproducibility.
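
As a rough illustration of the closure-test idea (a standalone sketch, not tied to qibocal's fitting code): generate data from known Lorentzian parameters with a fixed seed, run the fit, and check that the inputs are recovered within tolerance.

```python
import numpy as np
from pytest import approx
from scipy.optimize import curve_fit


def lorentzian(x, amplitude, center, width, offset):
    return offset + amplitude / (1 + ((x - center) / width) ** 2)


def test_lorentzian_fit_closure():
    """Closure test: the fit must recover the parameters that generated the data."""
    true_params = [1.0, 0.1, 0.05, 0.2]
    rng = np.random.default_rng(7)  # fixed seed, as argued above
    x = np.linspace(-1, 1, 201)
    y = lorentzian(x, *true_params) + rng.normal(scale=0.01, size=x.size)
    fitted, _ = curve_fit(lorentzian, x, y, p0=[0.8, 0.0, 0.1, 0.0])
    assert fitted == approx(true_params, rel=0.1)
```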
