
Question Regarding frameAP Correctness #29

Open
JhengHsien opened this issue Dec 29, 2024 · 0 comments

I have a question regarding the frameAP calculation in your code, specifically about how true positives (TP) and false positives (FP) are determined.

From my understanding:

  • The current implementation does not appear to filter predictions so that only the highest-scoring class prediction is considered for each frame.
  • For example, a detection for class "1" whose box has a high IoU with a ground-truth box is still counted toward TP or FP even when class "1" is not the correct class, because there is no check that it is the class with the highest score.
  • As a result, any prediction whose IoU passes the threshold is counted as a TP, which could artificially inflate the AP for that class: the prediction contributes to the metric even though it is not necessarily the most confident one for that frame (see the sketch below).
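
To make my concern concrete, here is a minimal sketch of how I understand a typical per-class frameAP matching loop to work (this is my own simplified code, not taken from this repository; all names are hypothetical):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-10)

def frame_ap_matching(detections, ground_truths, iou_thresh=0.5):
    """Greedy TP/FP assignment for one class.

    detections:    list of (frame_id, score, box) for this class
    ground_truths: dict frame_id -> list of boxes for this class
    """
    # Process detections in order of descending confidence.
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    matched = {f: np.zeros(len(b), dtype=bool) for f, b in ground_truths.items()}
    tp, fp = [], []
    for frame_id, score, box in detections:
        gts = ground_truths.get(frame_id, [])
        ious = np.array([iou(box, g) for g in gts]) if gts else np.array([])
        best = int(ious.argmax()) if ious.size else -1
        if best >= 0 and ious[best] >= iou_thresh and not matched[frame_id][best]:
            # The detection becomes a TP purely because its IoU passes the
            # threshold for this class; there is no check that this class is
            # the highest-scoring prediction for the frame/box.
            matched[frame_id][best] = True
            tp.append(1)
            fp.append(0)
        else:
            tp.append(0)
            fp.append(1)
    # Precision/recall and the AP value would be computed from the
    # cumulative TP/FP counts.
    return np.cumsum(tp), np.cumsum(fp)
```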

Could you clarify whether this behavior is intentional, or whether there is a missing step that filters predictions by the highest score per frame? I want to make sure I am interpreting your implementation correctly and understand how it aligns with the evaluation protocol.
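
For reference, the filtering step I have in mind would look roughly like the sketch below, applied per frame before the per-class matching above (again, purely hypothetical code, not from this repository; it assumes the same box can appear once per class with different scores):

```python
def keep_top_class_only(frame_detections):
    """Keep, for each predicted box in a frame, only its highest-scoring class.

    frame_detections: list of (class_id, score, box), where the same box may
    appear once per class with a different score.
    """
    best_per_box = {}
    for class_id, score, box in frame_detections:
        key = tuple(box)
        if key not in best_per_box or score > best_per_box[key][1]:
            best_per_box[key] = (class_id, score, box)
    return list(best_per_box.values())
```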

Thank you for your time and insights; I would appreciate your clarification.
