[docs] unify language and make small improvements in some param descriptions #6618
Conversation
// descl2 = must be passed through parameters explicitly in the C API
// descl2 = **Note**: cannot be used in CLI version
@@ -569,7 +571,7 @@ struct Config {
 // check = >= 0.0
 // desc = controls smoothing applied to tree nodes
 // desc = helps prevent overfitting on leaves with few samples
-// desc = if set to zero, no smoothing is applied
+// desc = if ``0.0`` (the default), no smoothing is applied
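For context, a minimal sketch of setting this smoothing parameter from the Python API. The parameter name `path_smooth` is an assumption inferred from the description above ("controls smoothing applied to tree nodes"); verify it against the parameters docs for your LightGBM version.

```python
# Sketch only: ``path_smooth`` is assumed to be the parameter described in the
# diff above; per the new wording, ``0.0`` (the default) applies no smoothing.
params = {
    "objective": "regression",
    "num_leaves": 31,
    "path_smooth": 1.0,  # must be >= 0.0; > 0.0 smooths leaves with few samples
}
# train as usual, e.g.: booster = lightgbm.train(params, train_set)
```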
Similarly to `monotone_penalty`, `feature_pre_filter`, and `predict_disable_shape_check`.
@@ -619,24 +621,27 @@ struct Config {
 // desc = enabling this will discretize (quantize) the gradients and hessians into bins of ``num_grad_quant_bins``
 // desc = with quantized training, most arithmetics in the training process will be integer operations
 // desc = gradient quantization can accelerate training, with little accuracy drop in most cases
-// desc = **Note**: can be used only with ``device_type = cpu`` and ``device_type=cuda``
+// desc = **Note**: works only with ``cpu`` and ``cuda`` device type
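A hedged sketch of what enabling this feature might look like from the Python API. `num_grad_quant_bins` comes from the descriptions above; the flag name `use_quantized_grad` is an assumption and should be checked against the parameters docs.

```python
# Sketch only: ``use_quantized_grad`` is an assumed flag name for enabling
# quantized training; ``num_grad_quant_bins`` is taken from the diff above.
params = {
    "objective": "binary",
    "device_type": "cpu",        # per the note: works only with cpu and cuda
    "use_quantized_grad": True,  # discretize gradients/hessians into bins
    "num_grad_quant_bins": 4,    # number of bins for gradient quantization
}
# with these params, most training arithmetic becomes integer operations
```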
"can be used only ..." for something that is not supposed to work in other environments at all, "Note: works only ..." for something that is supposed to work in other environments but currently doesn't work (not implemented yet).
Totally support these changes, thank you for this!
I left one small suggestion, but leaving an "Approve" review so you can merge this whenever you want. If you disagree with my suggestion, it's ok to leave this as-is... I don't feel strongly about it.
include/LightGBM/config.h
Outdated
// desc = set this to ``true`` to use double precision math on GPU (by default single precision is used)
// desc = **Note**: can be used only in OpenCL implementation, in CUDA implementation only double precision is currently supported
bool gpu_use_dp = false;

// check = >0
// desc = used only with ``cuda`` device type
// desc = number of GPUs
// desc = **Note**: can be used only in CUDA implementation
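A sketch of how these GPU-related parameters might be combined in the Python API. `gpu_use_dp` is taken from the snippet above; `num_gpu` is an assumed name for the "number of GPUs" parameter and should be verified against the parameters docs.

```python
# Sketch only: ``num_gpu`` is an assumed parameter name; ``gpu_use_dp``
# appears in the config snippet above.
params = {
    "device_type": "cuda",  # the GPU-count parameter is used only with cuda
    "num_gpu": 1,           # must be > 0
    # gpu_use_dp applies only to the OpenCL implementation; per the old
    # description, the CUDA implementation currently supports only one precision.
}
```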
similar to the comment on gpu_use_dp
, what do you think about combining two lines here? So we'd have:
// desc = **Note**: can be used only in CUDA implementation (``device_type="cuda"``)
Thanks very much for your attention to detail 😁
No description provided.