Create workflow for Z-phase calibration #6728
base: main
Conversation
Force-pushed from fa125bb to 07ba30a.
Codecov Report. Attention: Patch coverage is

@@            Coverage Diff             @@
##              main    #6728     +/-  ##
==========================================
- Coverage    97.83%   97.82%   -0.02%
==========================================
  Files         1077     1079       +2
  Lines        92558    92695     +137
==========================================
+ Hits         90555    90677     +122
- Misses        2003     2018      +15
==========================================

View full report in Codecov by Sentry.
This is really great, Nour!!

A few nits:

I think the default values in XEBPhasedFSimCharacterizationOptions should be:

characterize_theta=False,
characterize_phi=False,
zeta_default=0,
chi_default=0,
gamma_default=0

As written, parameters like zeta_default are optional, but leaving them as None makes it complain, e.g. ValueError: zeta_default was not set.
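A minimal self-contained sketch of the defaulting behavior being suggested (this is a hypothetical stand-in, not the actual Cirq class; field names mirror the options listed above):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class OptionsSketch:
    """Hypothetical stand-in for XEBPhasedFSimCharacterizationOptions,
    illustrating zero defaults for the z-phase angles."""

    characterize_theta: bool = False
    characterize_phi: bool = False
    zeta_default: Optional[float] = 0.0
    chi_default: Optional[float] = 0.0
    gamma_default: Optional[float] = 0.0

    def validate(self) -> None:
        # With zero defaults, an unset angle no longer trips this check
        # (which otherwise raises "ValueError: zeta_default was not set.").
        for name in ('zeta_default', 'chi_default', 'gamma_default'):
            if getattr(self, name) is None:
                raise ValueError(f'{name} was not set.')
```

With these defaults, constructing the options with no arguments validates cleanly instead of raising.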
When running this, I get a lot of runtime warnings ("invalid value encountered in equal") coming from cirq/value/value_equality_attr.py:82.
I tried running this twice on hardware. The first time, I got zeta=0.014562062933646333, chi=0.0651809408437691, gamma=-0.0799040390729371, and the second time, I got zeta=0.037173, chi=0.048721, gamma=-0.076402, which is basically consistent, but it would be good to quantify the statistical uncertainty if that isn't too difficult.
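One cheap way to put rough error bars on the fitted angles is the spread across repeated runs; a sketch using the two runs quoted above (the sample standard deviation is a crude stand-in — with more repetitions a bootstrap over circuit samples would be better):

```python
import numpy as np

# Rows are the two hardware runs quoted above: (zeta, chi, gamma).
runs = np.array([
    [0.014562062933646333, 0.0651809408437691, -0.0799040390729371],
    [0.037173, 0.048721, -0.076402],
])

mean = runs.mean(axis=0)
# Sample standard deviation across runs as a rough uncertainty estimate.
std = runs.std(axis=0, ddof=1)
```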
A method for making a plot like the one that you posted would be really helpful: one that shows the fidelity vs depth for the original and reoptimized gates.
In order to use the results of this characterization tool, a simple transformer that replaces the original gates with the appropriate reoptimized gates could be helpful, but that can go in a separate PR.
Please reduce the test duration to the order of seconds and make the axes argument in plot optional. Otherwise LGTM.
Returns:
  - An `XEBCharacterizationResult` object that contains the calibration result.
  - A `pd.DataFrame` comparing the before and after fidilities.
Nit - "fidelities"
@@ -103,7 +108,11 @@ def benchmark_2q_xeb_fidelities(
    def per_cycle_depth(df):
        """This function is applied per cycle_depth in the following groupby aggregation."""
        fid_lsq = df['numerator'].sum() / df['denominator'].sum()
        ret = {'fidelity': fid_lsq}
        # Note: both df['denominator'] an df['x'] are constants.
Nit: "an" --> "and"
def plot_z_phase_calibration_result(
    before_after_df: 'pd.DataFrame',
    axes: np.ndarray[Sequence[Sequence['plt.Axes']], np.dtype[np.object_]],
Most other plotting functions have axes as an optional argument, which is created and returned if not provided. Can we follow the same pattern here? Also, please simplify the axes type to Sequence['plt.Axes'].
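A sketch of the create-if-missing pattern being requested (the function name is hypothetical; the shape follows the convention described above):

```python
from typing import Optional, Sequence

import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt


def plot_with_optional_ax(
    xs: Sequence[float], ys: Sequence[float], ax: Optional[plt.Axes] = None
) -> plt.Axes:
    """If no axes is given, create one; always return the axes used."""
    if ax is None:
        _, ax = plt.subplots(1, 1)
    ax.plot(xs, ys)
    return ax
```

Callers who want a standalone figure pass nothing; callers composing a larger grid pass their own axes.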
def plot_z_phase_calibration_result(
    before_after_df: 'pd.DataFrame',
    axes: np.ndarray[Sequence[Sequence['plt.Axes']], np.dtype[np.object_]],
    *,
@eliottrosenberg - Would it be useful to provide an optional pairs argument if only specific qubit pairs are desired for plotting?
@pytest.mark.parametrize(['angles', 'error'], _create_tests(n=10, seed=32432432))
def test_calibrate_z_phases(angles, error):
The tests are very slow and take about 1 minute to run on my desktop with --numprocesses=2 (the number of cores on GHA). They make up the bulk of the 20 slowest durations in this PR's CI at https://github.com/quantumlib/Cirq/actions/runs/11152830363/job/30999210792?pr=6728#step:7:507.

Please reduce the number and computational size of the tests so we don't add more than a few seconds to the pytest time. The main purpose of unit tests is to exercise all code branches and check a few small, quick computations. If you need to run larger tests, please move them to a separate test method decorated with @pytest.mark.slow.
_, axes = plt.subplots(1, 2)
plot_z_phase_calibration_result(before_after_df=df, axes=axes)

np.testing.assert_allclose(axes[0].lines[0].get_xdata().astype(float), [1, 2, 3])
It seems the arrays passed to plotting have dtype=object; otherwise this should work without astype(float). Is it feasible to convert to numerical arrays in or before the plotting code?
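A sketch of the conversion being suggested (the example data is made up): casting object-dtype columns to float once, before plotting, means callers don't need astype(float) and equality checks stop warning.

```python
import numpy as np

# e.g. a DataFrame column that came through a groupby with dtype=object
raw = np.array([0.99, 0.98, None], dtype=object)

# Cast once to a numeric array; None becomes nan.
clean = np.asarray(raw, dtype=float)
```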
This calibration workflow is created for excitation-preserving two-qubit gates and assumes an error model that can be described by small random z-rotations, e.g.

$$U_\mathrm{err} = \left(R_z(a) \otimes R_z(b)\right) \, U \, \left(R_z(c) \otimes R_z(d)\right)$$

for some small angles $a$, $b$, $c$, and $d$. When the error model doesn't apply, the workflow may give absurd numbers (e.g. fidelities $\notin [0, 1]$). The fidelities can be slightly outside the $[0, 1]$ interval because they are statistical estimates, as can be seen in the following figure, which compares the estimated fidelity of a CZ surrounded by random z-rotations before and after calibration.
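A minimal numerical check of that error model (the angle values are hypothetical, and the CZ matrix and `rz` helper are written out directly): wrapping the noisy gate with the inverse z-rotations recovers the ideal gate exactly, which is what the calibration exploits.

```python
import numpy as np


def rz(angle: float) -> np.ndarray:
    """Single-qubit z-rotation matrix."""
    return np.diag([np.exp(-1j * angle / 2), np.exp(1j * angle / 2)])


cz = np.diag([1, 1, 1, -1]).astype(complex)
a, b, c, d = 0.03, -0.02, 0.05, 0.01  # hypothetical small error angles

# Error model: small z-rotations before and after the ideal gate.
u_err = np.kron(rz(a), rz(b)) @ cz @ np.kron(rz(c), rz(d))

# Calibration undoes the error by wrapping with the inverse rotations.
u_fixed = np.kron(rz(-a), rz(-b)) @ u_err @ np.kron(rz(-c), rz(-d))
```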
test notebook: https://colab.sandbox.google.com/drive/10gZ5dggYKH_xSsCJFpg__GakxvoaZDIi