adelie.adelie_core.glm.GlmBinomialProbit64
- class adelie.adelie_core.glm.GlmBinomialProbit64
Core GLM class for the Binomial family with probit link.
Methods
- __init__(self, arg0, arg1)
- gradient(self, arg0, arg1): Computes the gradient of the negative loss function.
- hessian(self, arg0, arg1, arg2): Computes a diagonal hessian majorization of the loss function.
- inv_hessian_gradient(self, arg0, arg1, arg2, ...): Computes the inverse hessian of the (negative) gradient of the loss function.
- inv_link(self, arg0, arg1): Computes the inverse link function.
- loss(self, arg0): Computes the loss function.
- loss_full(self): Computes the loss function at the saturated model.
Attributes
- is_multi: True if it defines a multi-response GLM family.
- name: Name of the GLM family.
- __init__(self: adelie.adelie_core.glm.GlmBinomialProbit64, arg0: numpy.ndarray[numpy.float64[1, n]], arg1: numpy.ndarray[numpy.float64[1, n]]) -> None
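A minimal construction sketch (an assumption about the pybind signature: arg0 is taken to be the binary response and arg1 the nonnegative observation weights, as in the other adelie GLM families); in typical use this object is produced by the user-facing adelie.glm.binomial helper rather than instantiated directly.

import numpy as np
from adelie.adelie_core.glm import GlmBinomialProbit64

n = 5
y = np.array([0.0, 1.0, 1.0, 0.0, 1.0])   # assumed: binary response in {0, 1}
w = np.full(n, 1.0 / n)                    # assumed: nonnegative observation weights
glm = GlmBinomialProbit64(y, w)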
- gradient(self: adelie.adelie_core.glm.GlmBase64, arg0: numpy.ndarray[numpy.float64[1, n]], arg1: numpy.ndarray[numpy.float64[1, n], flags.writeable]) -> None
Computes the gradient of the negative loss function.
Computes the (negative) gradient \(-\nabla \ell(\eta)\).
- Parameters:
- eta : (n,) ndarray
Natural parameter.
- grad : (n,) ndarray
The gradient to store.
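A short usage sketch, continuing the construction example under __init__ above; grad must be a contiguous, writeable float64 buffer of length n and is overwritten in place.

eta = np.zeros(n)          # natural parameter (linear predictor)
grad = np.empty(n)         # writeable output buffer
glm.gradient(eta, grad)    # grad now holds the negative gradient of the loss at eta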
- hessian(self: adelie.adelie_core.glm.GlmBase64, arg0: numpy.ndarray[numpy.float64[1, n]], arg1: numpy.ndarray[numpy.float64[1, n]], arg2: numpy.ndarray[numpy.float64[1, n], flags.writeable]) -> None
Computes a diagonal hessian majorization of the loss function.
Computes a diagonal majorization of the hessian \(\nabla^2 \ell(\eta)\).
Note
Although the hessian is in general a fully dense matrix, we only require the user to output a diagonal matrix. It is recommended that the diagonal matrix dominates the full hessian. However, in some cases, the diagonal of the hessian suffices even when it does not majorize the hessian. Interestingly, most hessian computations become greatly simplified when evaluated using the gradient.
- Parameters:
- eta : (n,) ndarray
Natural parameter.
- grad : (n,) ndarray
Gradient as in the gradient() method.
- hess : (n,) ndarray
The hessian to store.
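Continuing the same sketch (eta and grad as in the gradient() example), a hedged call pattern; hess is overwritten with the diagonal majorization.

hess = np.empty(n)              # writeable output buffer
glm.hessian(eta, grad, hess)    # hess now holds a diagonal majorization of the hessian at eta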
- inv_hessian_gradient(self: adelie.adelie_core.glm.GlmBase64, arg0: numpy.ndarray[numpy.float64[1, n]], arg1: numpy.ndarray[numpy.float64[1, n]], arg2: numpy.ndarray[numpy.float64[1, n]], arg3: numpy.ndarray[numpy.float64[1, n], flags.writeable]) -> None
Computes the inverse hessian of the (negative) gradient of the loss function.
Computes \(-(\nabla^2 \ell(\eta))^{-1} \nabla \ell(\eta)\).
Note
Unlike the hessian() method, this function may use the full hessian matrix. The diagonal hessian majorization is provided in case it speeds up computations, but it can be ignored. The default implementation simply computes grad / (hess + eps * (hess <= 0)) where eps is given by hessian_min.
- Parameters:
- eta : (n,) ndarray
Natural parameter.
- grad : (n,) ndarray
Gradient as in the gradient() method.
- hess : (n,) ndarray
Hessian as in the hessian() method.
- inv_hess_grad : (n,) ndarray
The inverse hessian gradient to store.
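Continuing the sketch, the call below follows the documented default behavior (the guarded elementwise ratio); the exact safeguard eps = hessian_min is an internal detail of the library.

inv_hess_grad = np.empty(n)                               # writeable output buffer
glm.inv_hessian_gradient(eta, grad, hess, inv_hess_grad)
# per the note above, the default is roughly:
#   inv_hess_grad = grad / (hess + eps * (hess <= 0)),  with eps = hessian_min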
- inv_link(self: adelie.adelie_core.glm.GlmBase64, arg0: numpy.ndarray[numpy.float64[1, n]], arg1: numpy.ndarray[numpy.float64[1, n], flags.writeable]) -> None
Computes the inverse link function.
Computes \(g^{-1}(\eta)\) where \(g(\mu)\) is the link function.
- Parameters:
- eta : (n,) ndarray
Natural parameter.
- out : (n,) ndarray
Inverse link \(g^{-1}(\eta)\).
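For the probit link, \(g(\mu) = \Phi^{-1}(\mu)\), so the inverse link is the standard normal CDF. A quick sanity check against scipy (assuming it is available), continuing the sketch above:

from scipy.stats import norm

mu = np.empty(n)                         # writeable output buffer
glm.inv_link(eta, mu)                    # mu = g^{-1}(eta)
assert np.allclose(mu, norm.cdf(eta))    # probit: g^{-1}(eta) = Phi(eta)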
- loss(self: adelie.adelie_core.glm.GlmBase64, arg0: numpy.ndarray[numpy.float64[1, n]]) -> float
Computes the loss function.
Computes \(\ell(\eta)\).
- Parameters:
- eta : (n,) ndarray
Natural parameter.
- Returns:
- loss : float
Loss.
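A one-line usage sketch, continuing the example above:

val = glm.loss(eta)    # scalar value of the loss at the given eta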
- loss_full(self: adelie.adelie_core.glm.GlmBase64) -> float
Computes the loss function at the saturated model.
Computes \(\ell(\eta^\star)\) where \(\eta^\star\) is the minimizer.
- Returns:
- loss : float
Loss at the saturated model.
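Because loss_full() returns the minimum of the loss, the gap loss(eta) - loss_full() is nonnegative and can serve as a deviance-like measure of fit; a small sketch continuing the example above:

sat = glm.loss_full()        # loss at the saturated model (the minimizer)
gap = glm.loss(eta) - sat    # nonnegative; zero only if eta attains the minimum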
- is_multi
True if it defines a multi-response GLM family. It is always False for this class.
- name
Name of the GLM family.
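A quick way to inspect these attributes on the instance from the sketch above:

print(glm.name)        # GLM family name string
print(glm.is_multi)    # False for this class (single-response family)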