1. In general, as you lower the confidence threshold, recall increases and precision decreases. So what you are describing is that precision approaches 0 once your model reaches a certain recall level, so the last few low-confidence predictions do not improve your mAP score.
2. My thought on your 2nd question: how does a 0.6 confidence in one model compare with a 0.4 confidence in another model? The answer is that we likely don't know. What matters more is how your top 10 predictions compare with my top 10. That is why mAP ranks predictions by their confidence scores. Should you send only predictions with confidence above 0.5? If you think anything below 0.5 is garbage and will not improve your recall without dropping precision close to 0, then you can stop sending those to the scoring server.
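The two points above can be made concrete with a small sketch of how AP is computed: predictions are ranked by confidence, precision and recall are accumulated down the ranked list, and AP sums precision over each recall increment. The function and the example numbers below are hypothetical, not from the original discussion; they use the non-interpolated form of AP for simplicity.

```python
def average_precision(confidences, is_correct, num_ground_truth):
    """AP over a ranked prediction list (no interpolation).

    confidences: per-prediction confidence scores.
    is_correct: whether each prediction matched a ground-truth box.
    num_ground_truth: total ground-truth boxes for this class.
    """
    # mAP only cares about the ranking, so sort by confidence descending.
    ranked = sorted(zip(confidences, is_correct), key=lambda p: -p[0])
    tp = 0
    ap = 0.0
    prev_recall = 0.0
    for i, (_, correct) in enumerate(ranked, start=1):
        if correct:
            tp += 1
            precision = tp / i
            recall = tp / num_ground_truth
            # Each new true positive adds a slice of area under the PR curve.
            ap += precision * (recall - prev_recall)
            prev_recall = recall
    return ap

# Ten hypothetical predictions for a class with 5 ground-truth boxes.
# The low-confidence hits near the end add little area, because by then
# precision has already dropped.
confs   = [0.95, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
correct = [True, True, False, True, False, False, True, False, False, True]
print(round(average_precision(confs, correct, 5), 3))  # → 0.764
```

Note that only the relative ordering of confidences matters here, which is why a 0.6 in one model and a 0.4 in another are not directly comparable: rescaling all of one model's scores leaves its AP unchanged.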

Written by Deep Learning
