3D fits are now mostly based on the `extract_feature()` function, constructed on the idea of masks (a first row-wise mask, and a second global one on top of the first).
However, this function:

- potentially discards quite useful information
- retains more points in flatter, less-resolved regions (weighting them more than sharply resolved ones)
- is prone to the background behavior: it is not necessarily able to distinguish a localized and resolved signal when the background reaches a comparable magnitude, even in a region far from the signal
- would potentially discard relevant points if the signal region is not flat enough (the tails of the functions usually carry a much fainter, though locally distinguishable, signal)
For this reason, we would need to reassess the general strategy for 3D fits, e.g. as used in the qubit flux dependence (but the relevant part of feature extraction is common to all the 3D fits).
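To make the discussion above concrete, here is a minimal sketch of a two-mask extraction as described (a first row-wise mask keeping the extremal pixel of each row, and a second global percentile cut on top to remove outliers). The function name, signature, and the percentile cut are illustrative assumptions, not the actual qibocal implementation.

```python
import numpy as np

def extract_feature_sketch(x, y, z, feat="max", global_cut=60.0):
    """Return the (x, y) coordinates of the selected pixels.

    x, y, z are flat 1D arrays of equal length (a flattened 2D scan).
    NOTE: hypothetical sketch, not the library's actual extract_feature().
    """
    pick = np.argmax if feat == "max" else np.argmin
    xs, ys, zs = [], [], []
    # first mask: row-wise, keep the extremal pixel of each row
    for xv in np.unique(x):
        sel = x == xv
        i = pick(z[sel])
        xs.append(xv)
        ys.append(y[sel][i])
        zs.append(z[sel][i])
    xs, ys, zs = np.array(xs), np.array(ys), np.array(zs)
    # second mask: global percentile cut on the retained values
    if feat == "max":
        keep = zs >= np.percentile(zs, global_cut)
    else:
        keep = zs <= np.percentile(zs, 100.0 - global_cut)
    return xs[keep], ys[keep]
```

Note how the sketch exhibits exactly the weaknesses listed above: every row contributes a point (the row-wise mask retains flat, noise-dominated rows), and only the global cut, which knows nothing about locality, can remove them.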
The material here is definitely work-in-progress, extracted from some comments. It should be rearranged.
- also return an error band, not just the array of points
- include the errors in the function fit
- draw the error band in the plots (this can be done with a completely transparent background, patterned with diagonal white lines)
- do not return all points: if there is no valley/hill in a row, just skip that point
  - the valley/hill should be identified through a relative variation
  - that could be: "find your best point, then check its relative distance from the average and from the other extreme"
- additional points, outside the "function" domain, may confuse the subsequent fit
- a more complicated option is to attempt a direct minimization in 3D, but this would require fine-tuning the initial guesses (so it is not clear it would be more robust, though it could be something to explore)
- replace `feat: str` (`["max", "min"]`) with a boolean toggle (there are only two options anyhow)
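The per-row selection proposed in the list above could be sketched as follows: find the row's extremal pixel, compare its distance from the row average against the total spread, and skip the row when the relative variation is too small to call it a genuine valley/hill. The function name, the `rel_threshold` value, and the uncertainty estimate are all assumptions for illustration; the boolean `maximum` toggle replaces `feat: str` as suggested.

```python
import numpy as np

def find_hill(y, z, rel_threshold=0.5, maximum=True):
    """Return (y_best, y_err) for one row, or None if no clear hill/valley.

    Hypothetical sketch: `maximum` is the boolean toggle replacing
    feat: str (["max", "min"]).
    """
    if not maximum:
        z = -z  # a valley in z is a hill in -z
    best = np.argmax(z)
    avg = z.mean()
    span = z.max() - z.min()
    # relative variation: distance of the best point from the average,
    # compared with the spread toward the other extreme
    if span == 0 or (z[best] - avg) / span < rel_threshold:
        return None  # too flat: skip this row entirely
    # crude uncertainty: half-width of the region clearly above average
    above = np.flatnonzero(z >= avg + rel_threshold * span)
    y_err = 0.5 * (y[above].max() - y[above].min()) if above.size > 1 else 0.0
    return y[best], y_err
```

Rows returning `None` are simply dropped, so points outside the "function" domain never enter the fit, and the per-row `y_err` values provide the error band to propagate into the function fit and the plots.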
In any case, I would advocate once more for the intervals line by line, or the 2D fit (if possible). Not in this PR, of course.
In this case, what you're selecting are a few pixels, to treat as "scatter points" in your fit.
However, while the two-mask mechanism is somewhat robust, it is actually prone to noise occurring outside the function window (on the vertical axis).
Indeed, you can clearly see in the plots that the function is usually visible on just a portion of the vertical axis (the independent variable), while only the second mask acts globally, and it is only there to remove outliers.
So, if you have vertical tails, they would be included as points in the fit, and they could bias the minimization.
Maybe we could keep playing with percentiles and masks, but the most promising way, to me, is to analyze the horizontal slices one at a time (since we know the vertical axis to be the independent variable of the function we're seeking), and to deweight the information on the function location within each slice by the slice's SNR, i.e. its flatness.
I don't know how much you went through this way in the past, but I'd be interested to know.
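An exploratory sketch of this slice-by-slice idea: locate the feature in each horizontal slice independently, weight it by the slice's SNR (a flat slice carries little information), and feed the weights into the fit as per-point sigmas. The SNR estimator and the use of `scipy.optimize.curve_fit` are illustrative assumptions, not an established recipe.

```python
import numpy as np
from scipy.optimize import curve_fit

def slice_points(x_values, y_grid, z_rows, maximum=True):
    """One candidate point per slice, weighted by the slice's SNR.

    Hypothetical sketch: SNR here is the peak height over the noise level,
    so flat (uninformative) slices get small weights instead of being
    trusted as much as sharply resolved ones.
    """
    xs, ys, ws = [], [], []
    for xv, z in zip(x_values, z_rows):
        zz = z if maximum else -z
        snr = (zz.max() - np.median(zz)) / (zz.std() + 1e-12)
        xs.append(xv)
        ys.append(y_grid[np.argmax(zz)])
        ws.append(snr)
    return np.array(xs), np.array(ys), np.array(ws)

def weighted_fit(model, xs, ys, ws, p0):
    """Deweight low-SNR slices by passing 1/weight as per-point sigma."""
    sigma = 1.0 / np.clip(ws, 1e-6, None)
    return curve_fit(model, xs, ys, p0=p0, sigma=sigma)
```

The key design point is that no slice is discarded by a hard cut: flat slices still contribute, but with an uncertainty so large that they barely influence the minimization.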
I realized that the full 3D fit is impossible, because we would need to:

- either assume the function region and the background region to be flat at two different levels (which they are not)
- or have a model for the background value, and even for the signal value in the function region (which we do not have)
The option of identifying the two regions and assigning them different levels is the same as extracting the feature in advance, so there is no advantage in a full 3D fit (there would have been an advantage only if a rotation were possible, but our two axes are not equivalent, so it is useless).
Thanks @igres26, it might be related as well. I'm not sure whether my interpretation is correct (I should check by plotting the masks), but I believe that many wrong fits fail because of the feature extraction. Even if just some of the retained pixels are scattered in peripheral regions, they may considerably affect the fit. And if there are enough of them, the fit may be chasing something unreasonable, leading to more or less random numbers.
Even issues with the punchout might be related. The qubit flux dependence is just a usually cleaner environment to debug.