Change expectation value metric to HOP for QV circuits #223
Conversation
def eval_exp_vals(compiled_circuit, uncompiled_qiskit_circuit, circuit_name):
    """Calculates the expectation values of observables based on input benchmark circuit."""
    circuit_short_name = circuit_name.split("_N")[0]
    if circuit_short_name == "qv":
Nit -- Not for this release, but as part of the rest of #170, it might be worth a more explicit structure for benchmarks and their relevant observables. Relying on naming conventions, and having such a wide variety in the return tuples of this function, will be hard to manage as the benchmark suite evolves.
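To illustrate the suggestion, one possible "more explicit structure" is a small registry that maps each benchmark to its metric and observable, replacing the `"_N"` name-convention dispatch. This is only a sketch; `BenchmarkSpec`, `BENCHMARKS`, and the `"zz_obs"` entry are hypothetical names, not part of the repository.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical registry: each benchmark declares its metric explicitly
# instead of being inferred from a "name_N..." convention.
@dataclass(frozen=True)
class BenchmarkSpec:
    name: str
    metric: str                 # e.g. "expval" or "hop"
    observable: Optional[str]   # Pauli string for expectation-value benchmarks

BENCHMARKS = {
    "qv": BenchmarkSpec(name="qv", metric="hop", observable=None),
    # "zz_obs" is a placeholder for an expectation-value benchmark.
    "zz_obs": BenchmarkSpec(name="zz_obs", metric="expval", observable="ZZZZ"),
}

def lookup(circuit_name: str) -> BenchmarkSpec:
    """Resolve a circuit name like 'qv_N10' to its benchmark spec."""
    return BENCHMARKS[circuit_name.split("_N")[0]]
```

With a structure like this, `eval_exp_vals` could branch on `spec.metric` rather than on string prefixes, and each benchmark's return shape could be documented in one place.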
I did some refactoring, and added some logging to help me debug this that could be useful in the future.
Refactor complete, ready for review.
Nice change! Should we also remove all the previous expectation value benchmark data with this PR, since comparing to older data will no longer be an apples-to-apples comparison?
def estimate_heavy_output(
    circuit: qiskit.QuantumCircuit,
    qv_1q_err: float = 0.002,
For posterity's sake, how would we know how these values were determined? Should we have some comments or details on that process, in the event we need to revisit it or there's a question?
In hindsight, I have the same question for @natestemen and the other observable benchmark he added.
how would we know how these values were determined?
I played with "reasonable" values, meaning values within the range of what we are seeing on hardware. There wasn't much more to it than that.
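One way to sanity-check defaults like `qv_1q_err` is a back-of-envelope fidelity estimate: if each gate fails independently, overall circuit fidelity is roughly the product of per-gate success probabilities. The sketch below is illustrative only; the two-qubit rate `p_2q` and the gate counts are assumed values, not taken from the benchmark code.

```python
def estimate_circuit_fidelity(n_1q: int, n_2q: int,
                              p_1q: float = 0.002, p_2q: float = 0.02) -> float:
    """Back-of-envelope success estimate assuming independent gate errors.

    p_1q mirrors the qv_1q_err default above; p_2q is a hypothetical
    two-qubit error rate, chosen only for illustration.
    """
    return (1.0 - p_1q) ** n_1q * (1.0 - p_2q) ** n_2q

# A small QV-style circuit might contain ~50 single-qubit and ~25
# two-qubit gates; its estimated fidelity under these rates:
f = estimate_circuit_fidelity(n_1q=50, n_2q=25)
```

Recording a calculation like this next to the defaults would answer the "how were these determined" question for future readers.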
Piggybacked on a few comments but overall 👍
Co-authored-by: nate stemen <[email protected]>
Partially addresses #170 (to be split into sub-issues).
Reporting the expectation value of the ZZZZ... observable doesn't make as much sense for quantum volume (QV) circuits; it's better to use the Heavy Output Probability (HOP).
This PR changes
benchmarks/scripts/expval_benchmark.py
to check whether the compiled circuit is a QV circuit and, if so, compute the ideal distribution of bitstrings, execute the circuit on a noisy simulator, and compute the heavy output metric. The same process is repeated with the uncompiled circuit on a noiseless simulator for comparison.
Locally run benchmarks with the modified QV execution flow:
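The heavy-output step described above can be sketched as follows: the heavy set is the bitstrings whose ideal probability exceeds the median ideal probability, and the HOP is the fraction of noisy-execution shots that land in that set. The function name and input shapes here are illustrative, not the PR's actual implementation.

```python
import statistics

def heavy_output_probability(ideal_probs: dict, noisy_counts: dict) -> float:
    """Fraction of noisy shots landing in the heavy-output set.

    Heavy outputs are bitstrings whose ideal probability exceeds the
    median ideal probability (the standard QV definition).
    """
    median = statistics.median(ideal_probs.values())
    heavy = {b for b, p in ideal_probs.items() if p > median}
    shots = sum(noisy_counts.values())
    return sum(c for b, c in noisy_counts.items() if b in heavy) / shots

# Toy example: "00" and "10" are heavy (ideal prob > median of 0.2);
# 800 of 1000 noisy shots fall on them, so the HOP is 0.8.
probs = {"00": 0.5, "01": 0.1, "10": 0.3, "11": 0.1}
counts = {"00": 600, "01": 100, "10": 200, "11": 100}
hop = heavy_output_probability(probs, counts)
```

In the PR's flow, `ideal_probs` would come from the noiseless simulation of the uncompiled circuit and `noisy_counts` from the noisy execution of the compiled one.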