For some benchmarks, such as correlation and covariance, the matrices are initialized as symmetric matrices. This unfortunately allows incorrect implementations to pass validation. Is there a specific reason the matrices are initialized as symmetric in their current form?
When I experimented with random initialization, the benchmarks were less susceptible to validating incorrect implementations.
Happy to put in a PR addressing this.
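The masking effect can be sketched as follows (a minimal NumPy illustration with a hypothetical covariance kernel, not the benchmark's actual code): when the input is symmetric, a kernel that accidentally consumes the transposed matrix produces the same result as a correct one, so validation passes; random data exposes the bug.

```python
import numpy as np

def covariance(data):
    # Correct kernel: rows are observations, columns are variables.
    centered = data - data.mean(axis=0)
    return centered.T @ centered / (data.shape[0] - 1)

def covariance_buggy(data):
    # Hypothetical bug: the kernel operates on the transposed matrix,
    # i.e., it swaps the roles of rows and columns.
    return covariance(data.T)

rng = np.random.default_rng(0)
n = 4

# Symmetric initialization: A == A.T, so the buggy kernel
# agrees with the correct one and the bug goes undetected.
a = rng.random((n, n))
sym = (a + a.T) / 2
print(np.allclose(covariance(sym), covariance_buggy(sym)))  # True

# Random (non-symmetric) initialization: the transposed kernel
# no longer matches, so validation catches the bug.
rnd = rng.random((n, n))
print(np.allclose(covariance(rnd), covariance_buggy(rnd)))
```

Note that this particular masking only arises for square inputs, where transposing preserves the shape.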
Benchmarks that do not use random initialization instead follow the data generation of the original codes. For example, correlation and covariance are taken from Polybench. All Polybench benchmarks use the same initialization as the C version, with the only exception being GramSchmidt, which transposes the matrix.
The main reason not to use random data is that some of the problems are only well defined if the input satisfies certain properties; inverting a singular (or nearly singular) matrix, for example, is not. Therefore, I would check whether the codes actually work properly with random data or whether they produce "garbage."
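The well-definedness concern can be illustrated with a minimal NumPy sketch (not one of the benchmark kernels): Cholesky factorization requires a symmetric positive-definite input, a property that arbitrary random data does not guarantee, while a structured initialization (here, a diagonal shift) restores it.

```python
import numpy as np

# A symmetric matrix with a negative eigenvalue (3 and -1):
# Cholesky factorization is undefined on it, so the kernel fails.
bad = np.array([[1.0, 2.0],
                [2.0, 1.0]])
try:
    np.linalg.cholesky(bad)
    failed = False
except np.linalg.LinAlgError:
    failed = True
print(failed)  # True

# Structured initialization: shifting the diagonal makes the
# matrix positive definite (eigenvalues 5 and 1), so the
# factorization succeeds and L @ L.T reconstructs the input.
good = bad + 2.0 * np.eye(2)
L = np.linalg.cholesky(good)
print(np.allclose(L @ L.T, good))  # True
```

This is why purely random initialization would need an extra step (e.g., making the matrix diagonally dominant) for benchmarks whose inputs must satisfy such properties.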