
Change expectation value metric to HOP for QV circuits #223

Merged (36 commits) on Feb 19, 2025

Conversation

@Misty-W (Collaborator) commented on Feb 11, 2025

Partially addresses #170 (to be split into sub-issues).

Reporting the expectation value of the ZZZZ... observable doesn't make much sense for quantum volume (QV) circuits; the Heavy Output Probability (HOP) is a better metric for them.

This PR changes benchmarks/scripts/expval_benchmark.py to check whether the compiled circuit is a QV circuit and, if so, compute the distribution of bitstrings, execute the circuit on a noisy simulator, and compute the heavy output metric. The same process is repeated with the uncompiled circuit, but on a noiseless simulator, for comparison.
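The heavy output metric described above can be sketched in plain Python (a hypothetical helper for illustration, not the code from this PR): the heavy set is the set of bitstrings whose ideal probability exceeds the median of the ideal distribution, and the HOP is the fraction of noisy shots that land in that set.

```python
from statistics import median

def heavy_output_probability(ideal_probs, noisy_counts):
    """Fraction of noisy shots whose bitstring lies in the heavy set.

    ideal_probs:  {bitstring: probability} from a noiseless simulation
    noisy_counts: {bitstring: shot count} from a noisy execution
    """
    # Heavy outputs: bitstrings whose ideal probability is above the median.
    med = median(ideal_probs.values())
    heavy_set = {b for b, p in ideal_probs.items() if p > med}

    total_shots = sum(noisy_counts.values())
    heavy_shots = sum(c for b, c in noisy_counts.items() if b in heavy_set)
    return heavy_shots / total_shots

# Toy 2-qubit example (made-up numbers):
ideal = {"00": 0.4, "01": 0.3, "10": 0.2, "11": 0.1}
noisy = {"00": 350, "01": 300, "10": 250, "11": 100}
hop = heavy_output_probability(ideal, noisy)
```

Here the heavy set is {"00", "01"}, and 650 of 1000 noisy shots fall in it, so the HOP is 0.65. A circuit passing the QV test typically needs HOP above 2/3.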

Locally run benchmarks with the modified QV execution flow:

[screenshots of local benchmark results]

@Misty-W Misty-W marked this pull request as ready for review February 11, 2025 21:01
@Misty-W Misty-W requested a review from natestemen February 11, 2025 21:01
def eval_exp_vals(compiled_circuit, uncompiled_qiskit_circuit, circuit_name):
    """Calculates the expectation values of observables based on input benchmark circuit."""
    circuit_short_name = circuit_name.split("_N")[0]
    if circuit_short_name == "qv":
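The dispatch in the snippet above relies on a naming convention: everything before the "_N" suffix in the circuit name identifies the benchmark type. A minimal standalone sketch of that convention (the example name "qv_N16" is hypothetical):

```python
# Benchmark type is recovered from the circuit name by convention:
# the prefix before "_N<qubits>" names the benchmark, e.g. "qv".
def benchmark_type(circuit_name: str) -> str:
    return circuit_name.split("_N")[0]

kind = benchmark_type("qv_N16")  # -> "qv"
```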
A Collaborator commented:

Nit -- not for this release, but as part of the rest of #170 it might be worth a more explicit structure for benchmarks, their relevant observables, etc. Relying on naming conventions, and having such a wide variety of return tuples from this function, will be hard to manage as the benchmark suite evolves.
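One possible shape for the more explicit structure suggested here, purely illustrative (the `BenchmarkSpec` registry and the "mirror" entry are invented for this sketch, not part of the repository):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BenchmarkSpec:
    """Illustrative registry entry tying a benchmark to its metric."""
    name: str
    metric: str  # e.g. "expval" or "hop"

# Explicit mapping instead of parsing circuit names at call sites.
REGISTRY = {
    "qv": BenchmarkSpec(name="qv", metric="hop"),
    "mirror": BenchmarkSpec(name="mirror", metric="expval"),
}
```

A registry like this would also let each entry carry its own return type, avoiding the wide variety of return tuples noted above.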

@natestemen (Member):

I did some refactoring to expval_benchmarking.py that will need to be merged here. Sorry for the inconvenience!

Base automatically changed from plot-expectation-value to main February 13, 2025 18:43
@Misty-W (Collaborator, Author) commented on Feb 18, 2025

Refactor complete, ready for review.

@natestemen (Member) left a comment:

Nice change! Should we also remove all the previous expectation value benchmark data with this PR, since comparing to older data will no longer be an apples-to-apples comparison?


def estimate_heavy_output(
    circuit: qiskit.QuantumCircuit,
    qv_1q_err: float = 0.002,
A Collaborator commented:

For posterity's sake, how would we know how these values were determined? Should we have some comments or details on that process, in case we need to revisit them or a question comes up?

In hindsight, I have the same question for @natestemen and the other observable benchmark he added.

A Member replied:

> how would we know how these values were determined?

I played with "reasonable" values, meaning values within the range of what we are seeing on hardware. There wasn't much more to it than that.
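A rough back-of-envelope way to sanity-check such error rates (an assumption of this sketch, not something from the PR): under a simple depolarizing model, circuit success probability decays roughly as a product over gates. The `p1 = 0.002` default echoes the `qv_1q_err` default shown above; the two-qubit rate `p2` and the gate counts are placeholders.

```python
# Crude fidelity estimate under per-gate depolarizing error rates.
# p1/p2 are the 1q/2q error rates; n_1q/n_2q are gate counts.
def approx_circuit_fidelity(n_1q, n_2q, p1=0.002, p2=0.02):
    return (1 - p1) ** n_1q * (1 - p2) ** n_2q

# Hypothetical QV-sized circuit: 100 one-qubit and 40 two-qubit gates.
f = approx_circuit_fidelity(n_1q=100, n_2q=40)
```

For these placeholder counts the estimate lands well below 1, which is consistent with the observation that hardware-realistic rates noticeably depress heavy output probability.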

@bachase (Collaborator) left a comment:

Piggybacked on a few comments but overall 👍

@Misty-W Misty-W merged commit 46f2d90 into main Feb 19, 2025
2 checks passed
@Misty-W Misty-W deleted the match-observables-with-circuits branch February 19, 2025 19:17
@Misty-W Misty-W linked an issue Feb 21, 2025 that may be closed by this pull request
Successfully merging this pull request may close these issues.

Implement custom observables for each benchmark
5 participants