
3D fits review #956

Open
alecandido opened this issue Aug 1, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@alecandido
Member

alecandido commented Aug 1, 2024

3D fits are now mostly based on the extract_feature() function, built around the idea of masks (a first, row-wise mask, with a second, global one applied on top of it).
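
For concreteness, a minimal sketch of the two-mask idea (an illustration only, not the actual extract_feature() implementation; array shapes, names, and the percentile are assumptions):

```python
import numpy as np


def extract_feature_sketch(x, y, z, feat="max", percentile=75):
    """Pick one candidate pixel per horizontal slice (row-wise mask), then
    keep only the candidates surviving a global cut (second mask)."""
    # z is assumed to have shape (len(y), len(x)), one row per y value
    idx = np.argmax(z, axis=1) if feat == "max" else np.argmin(z, axis=1)
    candidates = z[np.arange(z.shape[0]), idx]
    # global mask on top of the row-wise one, meant to drop outliers
    if feat == "max":
        keep = candidates >= np.percentile(candidates, percentile)
    else:
        keep = candidates <= np.percentile(candidates, 100 - percentile)
    return x[idx[keep]], y[keep]
```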

However, this function is:

  • potentially discarding quite useful information
  • retaining more points in flatter, less-resolved regions (weighting them more than sharply resolved ones)
  • sensitive to the background behavior
    • it is not necessarily able to distinguish a localized, well-resolved signal if the background reaches a comparable magnitude, even in a region far from the signal
    • it may discard relevant points if the signal region is not flat enough in value (the tails of the function usually carry a much fainter signal, though still locally distinguishable)

For this reason, we would need to reassess the general strategy for 3D fits, e.g. as used in the qubit flux dependence (but the relevant part of feature extraction is common to all the 3D fits).

The material here is definitely work-in-progress, extracted from some comments. It should be rearranged.
  • also return an error band, not just the array of points
  • include the errors in the function fit
  • draw the error band in the plots (we could do it with a completely transparent band, filled with a pattern of diagonal white lines)
  • do not return all points: if there is no valley/hill, just skip the point
    • the valley/hill should be identified through a relative variation
    • that could be: "find your best point, then check its relative distance from the average and from the other extreme" (see the sketch after this list)
    • additional points, outside the "function" domain, may confuse the subsequent fit
    • a more complicated option is to attempt a direct minimization in 3D, but this would require fine-tuning of the initial guesses (so it is not clear it would be more robust, though it could be something to explore)
  • replace feat: str (["max", "min"]) with a boolean toggle (there are only two options anyhow)
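
A hedged sketch of the per-slice valley/hill selection described above (the function name, thresholds, and error estimate are illustrative assumptions, not the current Qibocal API):

```python
import numpy as np


def detect_feature(x, row, maximum=True, rel_threshold=0.7):
    """Return (position, error) of the slice extremum, or None when the slice
    is too flat to host a distinguishable valley/hill."""
    signal = row if maximum else -row  # boolean toggle instead of feat: str
    best = np.argmax(signal)
    excursion = signal[best] - signal.min()
    if excursion == 0:
        return None
    # relative variation: distance of the best point from the slice average,
    # compared with the distance from the other extreme; for a flat, noisy
    # slice this ratio sits near 0.5, for a sharp isolated peak it approaches 1
    if (signal[best] - signal.mean()) / excursion < rel_threshold:
        return None  # no valley/hill in this slice: skip the point
    # crude error band: half-width of the region within one standard deviation
    # of the peak, assuming a uniformly spaced x grid
    span = np.count_nonzero(signal >= signal[best] - signal.std())
    return x[best], span * abs(x[1] - x[0]) / 2
```

The resulting (point, error) pairs could then enter the subsequent 1D fit, e.g. as the sigma argument of scipy.optimize.curve_fit (with absolute_sigma=True), and the same errors could be drawn as a band in the plots.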

#914 (comment)

In any case, I would advocate once more for the line-by-line intervals, or for the 2D fit (if possible). Not in this PR, of course.

In this case, what you're selecting is a few pixels, to be treated as "scatter points" in your fit.

However, while the two-mask mechanism is somewhat robust, it is still prone to noise occurring outside the function window (on the vertical axis).
Indeed, you can clearly see in the plots that the function is usually visible only on a portion of the vertical axis (the independent variable), yet only the second mask acts globally, and it is only there to remove outliers.
So, if there are vertical tails, they would be included as points in the fit, and they could bias the minimization.

Maybe we could keep playing with percentiles and masks, but to me the most promising route is to analyze the horizontal slices one at a time (since we know the vertical axis to be the independent variable of the function we are seeking), and to deweight the information about the function location within each slice by the slice's SNR, i.e. its flatness.
I don't know how far you explored this route in the past, but I'd be interested to know.
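
A sketch of this slice-by-slice idea (illustrative only; the SNR proxy and the weighting scheme are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit


def fit_slices(x, y, z, model, p0):
    """Extract one point per horizontal slice and deweight it according to
    the flatness of the slice, used here as a proxy for its SNR."""
    positions, sigmas = [], []
    for row in z:  # one horizontal slice per value of the vertical axis y
        snr = (row.max() - row.mean()) / (row.std() + 1e-12)
        positions.append(x[np.argmax(row)])
        # flatter slice -> lower SNR -> larger uncertainty -> smaller weight
        sigmas.append(1.0 / max(snr, 1e-3))
    # y is the independent (vertical) axis, the extracted positions are the
    # dependent variable; the per-slice uncertainties act as fit weights
    return curve_fit(model, y, np.array(positions), p0=p0,
                     sigma=np.array(sigmas), absolute_sigma=False)
```

Here the deweighting happens entirely through the sigma argument: points coming from flat, poorly resolved slices contribute less to the chi-square.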

#914 (comment)

I realized that the full 3D fit is impossible, because we would need to:

  • either assume the function region and the background region to be flat, at two different levels (which they are not)
  • or have a model for the background value, and even for the signal value in the function region (which we do not have)

The option of identifying the two regions and assigning them different levels is the same as extracting the feature in advance, so there is no advantage in a full 3D fit (there would have been one only if a rotation were possible, but our two axes are not equivalent, so it is useless).
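
To make the argument concrete, this is what a direct two-level 3D model could look like (the box profile and the quadratic curve are purely illustrative assumptions); evaluating it already requires deciding which pixels belong to the signal region, i.e. the same information the feature extraction provides:

```python
import numpy as np


def two_level_model(xy, signal_level, background_level, half_width, a, b, c):
    """z(x, y) = signal_level inside a band of the given half-width around
    the curve x = a * y**2 + b * y + c, and background_level elsewhere."""
    x, y = xy
    inside = np.abs(x - (a * y**2 + b * y + c)) < half_width
    return np.where(inside, signal_level, background_level)
```

Fitting this directly would only work if the two levels were approximately constant, which is exactly the assumption rejected above; a non-flat background (or signal) would additionally need its own model.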

@igres26
Contributor

igres26 commented Aug 1, 2024

Just to add a bit more information here, for something different from flux dependence that might be relevant.

I have been consistently seeing fits like this in the Chevron experiments: http://login.qrccluster.com:9000/8VjfsM1eRjGUBtrp5CmAvw==/

This seems to happen with both positive and negative pulse length, and might be related to the Fourier treatment of the data.

(if this is not related to the fitting you had in mind @alecandido I can move this to a separate issue, but it felt appropriate)

@alecandido
Member Author

Thanks @igres26, it might well be related. I'm not sure whether my interpretation is correct (I should check by plotting the masks), but I believe that many wrong fits fail because of the feature extraction. Even if just some of the retained pixels are scattered in peripheral regions, they may considerably affect the fit. And if there are enough of them, the fit may be attempting something unreasonable, leading to more or less random numbers.

Even issues with the punchout might be related. The qubit flux dependence is just a usually cleaner environment to debug.

@alecandido alecandido added the enhancement New feature or request label Aug 2, 2024