I have a question regarding the frameAP calculation in your code, specifically about how true positives (TP) and false positives (FP) are determined.
From my understanding:
- The current implementation appears to lack a filtering step that keeps only the highest-scoring class prediction for each frame.
- For example, if class "1" is not the correct class but one of its detections overlaps the ground truth with a high IoU, that detection still enters the TP/FP counts without any check that class "1" is the highest-scoring prediction for that frame.
As a result, once the IoU threshold is met the detection is counted as a TP, which could artificially inflate the AP for that class: the prediction is not necessarily the most confident one, yet it still contributes to the evaluation metric.
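For concreteness, here is a minimal sketch of the filtering step I have in mind, assuming detections are stored as rows of `[video_id, frame_id, class_id, score, x1, y1, x2, y2]`. The function name and array layout are hypothetical, chosen only for illustration, not taken from your code:

```python
import numpy as np

def filter_top_class_per_frame(detections):
    """Keep, for each (video_id, frame_id), only the single detection
    with the highest class score, dropping all lower-scoring ones.

    Assumes each row is [video_id, frame_id, class_id, score,
    x1, y1, x2, y2] -- a hypothetical layout for illustration.
    """
    best = {}
    for det in detections:
        key = (det[0], det[1])  # (video_id, frame_id)
        if key not in best or det[3] > best[key][3]:
            best[key] = det
    return np.stack(list(best.values())) if best else np.empty((0, 8))


# Toy example: two class predictions on the same frame. Only the
# class-2 detection (score 0.9) survives; the class-1 detection
# (score 0.4) would no longer be matched against the ground truth,
# even if its IoU is high.
dets = np.array([
    [0, 5, 1, 0.4, 10, 10, 50, 50],
    [0, 5, 2, 0.9, 12, 11, 52, 49],
])
print(filter_top_class_per_frame(dets))
```

I realize the standard frame-AP protocol evaluates each class independently, so a per-frame filter like this may not belong in the metric at all; that is exactly what I would like to confirm.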
Could you clarify whether this behavior is intentional, or whether a step to filter predictions by the highest score per frame is missing? I want to make sure I am interpreting your implementation correctly and understand how it aligns with the evaluation protocol.
Thank you for your time and insights. I appreciate your clarification.