Float64 replaced by AbstractFloat for regression #226
Conversation
Codecov Report: All modified lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff            @@
##              dev     #226      +/-  ##
==========================================
+ Coverage   87.50%   87.65%   +0.15%
==========================================
  Files          10       10
  Lines        1328     1329       +1
==========================================
+ Hits         1162     1165       +3
+ Misses        166      164       -2

☔ View full report in Codecov by Sentry.
Thanks for opening this pull request @xinadi. I was trying to reproduce which situations this PR fixes. Could you add a test that fails on the current version?
@rikhuijzer I added the test.
The idea of the test is as follows: if classification takes place, each prediction will equal some value from the training label set, because the estimate is based on votes. For regression, on the other hand, predictions are means and will generally not match any training label exactly. In theory, even with this PR the test could fail in the situation where, for each prediction, all the trees estimate the same value. But with 10^3 predictions and machine-precision means, the probability of that should be very small. Also, as I understand it, the RNG seed guarantees the same numbers, so I see the test working correctly on my PC.
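A minimal sketch of how such a test could look, assuming DecisionTree.jl's `build_forest`/`apply_forest` API; the data, seed, and sizes here are illustrative, not the PR's actual test:

```julia
using DecisionTree, Random, Test

# Fixed seed so the check is deterministic across runs.
rng = MersenneTwister(42)
features = rand(rng, Float32, 100, 3)
labels   = rand(rng, Float32, 100)   # non-Float64 float labels -> regression

model = build_forest(labels, features)
preds = apply_forest(model, rand(rng, Float32, 1000, 3))

# Under (mis)classification every prediction would be a vote for one of the
# training labels; under regression predictions are leaf means, so almost
# none of them should coincide exactly with a training label.
@test !all(p -> p in labels, preds)
```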
@@ -293,11 +293,7 @@ function apply_tree(tree::LeafOrNode{S,T}, features::AbstractMatrix{S}) where {S,T}
     for i in 1:N
         predictions[i] = apply_tree(tree, features[i, :])
     end
-    if T <: Float64
This is a weird piece of code. If `Float64`, then convert to `Float64`. I agree with the deletion and we'll see whether a bug pops up.
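For context, a sketch of what the simplified `apply_tree` plausibly looks like after the deletion; the signature comes from the diff header, while the exact body is an assumption:

```julia
# Sketch, not the verbatim source: predictions are preallocated with element
# type T, so no post-hoc conversion is needed.
function apply_tree(tree::LeafOrNode{S,T}, features::AbstractMatrix{S}) where {S,T}
    N = size(features, 1)
    predictions = Array{T}(undef, N)
    for i in 1:N
        predictions[i] = apply_tree(tree, features[i, :])
    end
    # The deleted `if T <: Float64` branch converted the result to Float64;
    # that was redundant, since `predictions` already has element type T.
    return predictions
end
```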
Thank you @xinadi!
Based on #225.
Occurrences of `Float64` related to labels were replaced by `AbstractFloat`.
The problem noted in #225 (comment) is still not solved, as it requires more refactoring of the parameters. But the original issue is solved: float types other than `Float64` now lead to regression, not classification.
Also, I see a small performance boost and a reduced number of allocations on my data.
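As an illustration of the fixed behavior (a hedged sketch: `build_tree`/`apply_tree` are DecisionTree.jl's public API, but the data and shapes here are made up):

```julia
using DecisionTree

# Hypothetical data; any AbstractFloat label type should now trigger regression.
X = rand(Float32, 200, 4)
y = rand(Float32, 200)

model = build_tree(y, X)     # with this PR: a regression tree, not classification
yhat  = apply_tree(model, X) # leaf means; no forced conversion to Float64
```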