adelie.adelie_core.glm.GlmMultiBase64#
- class adelie.adelie_core.glm.GlmMultiBase64#
 Base multi-response GLM class.
The generalized multi-response linear model is given by the (weighted) negative log-likelihood
\[\begin{align*} \ell(\eta) = \frac{1}{K} \sum\limits_{i=1}^n w_{i} \left( -\sum\limits_{k=1}^K y_{ik} \eta_{ik} + A_i(\eta) \right) \end{align*}\]
We define \(\ell(\eta)\) as the loss and \(A(\eta) := K^{-1} \sum_{i=1}^n w_{i} A_i(\eta)\) as the log-partition function. Here, \(w \geq 0\) is the vector of observation weights and each \(A_i\) is a convex function.
The purpose of a GLM class is to define methods that evaluate key quantities regarding this model that are required for solving the group lasso problem.
Every multi-response GLM-like class must inherit from this class and override the methods before passing into the solver.
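To make the override contract concrete, below is a minimal sketch of a multi-response Gaussian family (identity link, \(A_i(\eta) = \sum_k \eta_{ik}^2 / 2\)) written against the method signatures documented below. The class name MyMultiGaussian and the family name "my_mgaussian" are illustrative, and whether the solver accepts a pure-Python subclass of this pybind11 base (rather than the wrappers in adelie.glm) depends on the build, so treat this as a sketch rather than a reference implementation:

    import numpy as np

    from adelie.adelie_core.glm import GlmMultiBase64


    class MyMultiGaussian(GlmMultiBase64):
        """Multi-response Gaussian: A_i(eta) = sum_k eta_ik^2 / 2, identity link."""

        def __init__(self, y, weights):
            y = np.ascontiguousarray(y, dtype=np.float64)    # (n, K), C-contiguous
            weights = np.asarray(weights, dtype=np.float64)  # (n,)
            super().__init__("my_mgaussian", y, weights)
            self._y, self._w, self._K = y, weights, y.shape[1]

        def gradient(self, eta, grad):
            # Negative gradient of the loss: w_i (y_ik - eta_ik) / K.
            grad[...] = self._w[:, None] * (self._y - eta) / self._K

        def hessian(self, eta, grad, hess):
            # The hessian is exactly diagonal here: w_i / K for every (i, k).
            hess[...] = np.broadcast_to(self._w[:, None] / self._K, eta.shape)

        def inv_link(self, eta, out):
            # Identity link: g^{-1}(eta) = eta.
            out[...] = eta

        def loss(self, eta):
            return np.sum(self._w[:, None] * (-self._y * eta + 0.5 * eta**2)) / self._K

        def loss_full(self):
            # Saturated model eta* = y, so the loss is -(1/(2K)) sum_i w_i sum_k y_ik^2.
            return -0.5 * np.sum(self._w[:, None] * self._y**2) / self._K

        # inv_hessian_gradient is not overridden; the default elementwise
        # grad / hess fallback (see inv_hessian_gradient below) suffices here.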
Methods
- __init__(self, name, y, weights)
- gradient(self, arg0, arg1): Computes the gradient of the negative loss function.
- hessian(self, arg0, arg1, arg2): Computes a diagonal hessian majorization of the loss function.
- inv_hessian_gradient(self, arg0, arg1, arg2, ...): Computes the inverse hessian applied to the (negative) gradient of the loss function.
- inv_link(self, arg0, arg1): Computes the inverse link function.
- loss(self, arg0): Computes the loss function.
- loss_full(self): Computes the loss function at the saturated model.
Attributes
- is_multi: True if it defines a multi-response GLM family.
- name: Name of the GLM family.
- __init__(self: adelie.adelie_core.glm.GlmMultiBase64, name: str, y: numpy.ndarray[numpy.float64[m, n], flags.c_contiguous], weights: numpy.ndarray[numpy.float64[1, n]]) → None#
 
- gradient(self: adelie.adelie_core.glm.GlmMultiBase64, arg0: numpy.ndarray[numpy.float64[m, n], flags.c_contiguous], arg1: numpy.ndarray[numpy.float64[m, n], flags.writeable, flags.c_contiguous]) → None#
 Computes the gradient of the negative loss function.
Computes the (negative) gradient \(-\nabla \ell(\eta)\).
- Parameters:
- eta : (n, K) ndarray
Natural parameter.
- grad : (n, K) ndarray
 The gradient to store.
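A hedged usage sketch: the result is written in place into a preallocated C-contiguous float64 array with the same (n, K) shape as eta. The ad.glm.multigaussian constructor is assumed here as one concrete multi-response family from adelie's public glm module:

    import numpy as np
    import adelie as ad

    n, K = 100, 3
    y = np.random.rand(n, K)          # C-contiguous float64, shape (n, K)
    glm = ad.glm.multigaussian(y)     # any concrete multi-response GLM works
    eta = np.zeros((n, K))            # natural parameter
    grad = np.empty((n, K))           # preallocated output buffer
    glm.gradient(eta, grad)           # fills grad with the negative gradient at eta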
- hessian(self: adelie.adelie_core.glm.GlmMultiBase64, arg0: numpy.ndarray[numpy.float64[m, n], flags.c_contiguous], arg1: numpy.ndarray[numpy.float64[m, n], flags.c_contiguous], arg2: numpy.ndarray[numpy.float64[m, n], flags.writeable, flags.c_contiguous]) → None#
 Computes a diagonal hessian majorization of the loss function.
Computes a diagonal majorization of the hessian \(\nabla^2 \ell(\eta)\).
Note
Although the hessian is in general a fully dense matrix, we only require the user to output a diagonal matrix. It is recommended that the diagonal matrix dominates the full hessian. However, in some cases, the diagonal of the hessian suffices even when it does not majorize the hessian. Interestingly, most hessian computations become greatly simplified when evaluated using the gradient (see the sketch after the parameter list below).
- Parameters:
- eta : (n, K) ndarray
Natural parameter.
- grad : (n, K) ndarray
Gradient as in the gradient() method.
- hess : (n, K) ndarray
The hessian to store.
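To illustrate the remark in the note that hessian computations simplify when expressed through the gradient, the sketch below (an illustration, not adelie's implementation) recovers the mean parameters of a multinomial-style family from its negative gradient \(w_i (y_{ik} - p_{ik}) / K\) and forms the diagonal hessian \(w_i p_{ik} (1 - p_{ik}) / K\) without re-evaluating the softmax:

    import numpy as np

    def diag_hessian_from_gradient(y, weights, grad, K):
        # Guard zero weights; those rows have zero gradient and a zero hessian row anyway.
        w_safe = np.where(weights > 0, weights, 1.0)[:, None]
        p = y - K * grad / w_safe       # recover means p_ik from the negative gradient
        return weights[:, None] * p * (1.0 - p) / K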
- inv_hessian_gradient(self: adelie.adelie_core.glm.GlmMultiBase64, arg0: numpy.ndarray[numpy.float64[m, n], flags.c_contiguous], arg1: numpy.ndarray[numpy.float64[m, n], flags.c_contiguous], arg2: numpy.ndarray[numpy.float64[m, n], flags.c_contiguous], arg3: numpy.ndarray[numpy.float64[m, n], flags.writeable, flags.c_contiguous]) → None#
Computes the inverse hessian applied to the (negative) gradient of the loss function.
Computes \(-(\nabla^2 \ell(\eta))^{-1} \nabla \ell(\eta)\).
Note
Unlike the hessian() method, this function may use the full hessian matrix. The diagonal hessian majorization is provided in case it speeds up computations, but it can be ignored. The default implementation simply computes grad / (hess + eps * (hess <= 0)) where eps is given by adelie.adelie_core.configs.Configs.hessian_min (a NumPy sketch of this fallback follows the parameter list below).
- Parameters:
- eta : (n, K) ndarray
Natural parameter.
- grad : (n, K) ndarray
Gradient as in the gradient() method.
- hess : (n, K) ndarray
Hessian as in the hessian() method.
- inv_hess_grad : (n, K) ndarray
The inverse hessian gradient to store.
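For reference, a NumPy sketch of the default elementwise fallback described in the note above, where eps stands in for adelie.adelie_core.configs.Configs.hessian_min:

    import numpy as np

    def default_inv_hessian_gradient(grad: np.ndarray, hess: np.ndarray, eps: float) -> np.ndarray:
        # Elementwise grad / hess, with eps guarding non-positive hessian entries.
        return grad / (hess + eps * (hess <= 0))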
- inv_link(self: adelie.adelie_core.glm.GlmMultiBase64, arg0: numpy.ndarray[numpy.float64[m, n], flags.c_contiguous], arg1: numpy.ndarray[numpy.float64[m, n], flags.writeable, flags.c_contiguous]) → None#
 Computes the inverse link function.
Computes \(g^{-1}(\eta)\) where \(g(\mu)\) is the link function.
- Parameters:
- eta : (n, K) ndarray
Natural parameter.
- out : (n, K) ndarray
 Inverse link \(g^{-1}(\eta)\).
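As illustrations of what an override might compute: for a multi-response Gaussian the inverse link is the identity, while for a multinomial-style family it is the row-wise softmax. A sketch of the latter, written in place as the signature requires:

    import numpy as np

    def softmax_inv_link(eta, out):
        # Row-wise softmax, stabilized by subtracting the row maximum.
        z = eta - eta.max(axis=1, keepdims=True)
        np.exp(z, out=out)
        out /= out.sum(axis=1, keepdims=True)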
- loss(self: adelie.adelie_core.glm.GlmMultiBase64, arg0: numpy.ndarray[numpy.float64[m, n], flags.c_contiguous]) → float#
 Computes the loss function.
Computes \(\ell(\eta)\).
- Parameters:
- eta : (n, K) ndarray
 Natural parameter.
- Returns:
- loss : float
 Loss.
- loss_full(self: adelie.adelie_core.glm.GlmMultiBase64) → float#
 Computes the loss function at the saturated model.
Computes \(\ell(\eta^\star)\) where \(\eta^\star\) is the minimizer.
- Returns:
- loss : float
 Loss at the saturated model.
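A hedged consistency check relating loss() and loss_full(): for an identity-link multi-response Gaussian family the saturated minimizer is \(\eta^\star = y\), so evaluating the loss there should match loss_full(). The ad.glm.multigaussian constructor is assumed as one concrete family from adelie's public glm module:

    import numpy as np
    import adelie as ad

    n, K = 50, 4
    y = np.random.rand(n, K)              # C-contiguous float64
    glm = ad.glm.multigaussian(y)
    eta_star = np.ascontiguousarray(y)    # saturated model under the identity link
    assert np.isclose(glm.loss(eta_star), glm.loss_full())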
- is_multi#
True if it defines a multi-response GLM family. It is always True for this base class.
- name#
 Name of the GLM family.