geometric_kernels.kernels

This module provides the abstract base class for geometric kernels and specialized classes for various types of spaces.

Unless you know exactly what you are doing, always use the MaternGeometricKernel that “just works”.

Package Contents

class geometric_kernels.kernels.BaseGeometricKernel(space)[source]

Bases: abc.ABC

Abstract base class for geometric kernels.

Parameters:

space (geometric_kernels.spaces.Space) – The space on which the kernel is defined.

abstract K(params, X, X2=None, **kwargs)[source]

Compute the cross-covariance matrix between two batches of vectors of inputs, or batches of matrices of inputs, depending on the space.

Parameters:
  • params (beartype.typing.Dict[str, lab.Numeric]) –

    A dict of kernel parameters, typically containing two keys: “lengthscale” for length scale and “nu” for smoothness.

    The types of values in the params dict determine the output type and the backend used for the internal computations; see the warning below for more details.

    Note

    The values params[“lengthscale”] and params[“nu”] are typically (1,)-shaped arrays of the suitable backend. This indicates the backend to be used for internal computations.

    In some cases, for example, when the kernel is ProductGeometricKernel, the values of params may be (s,)-shaped arrays instead, where s is the number of factors.

    Note

    Finite values of params[“nu”] typically correspond to the generalized (geometric) Matérn kernels.

    Infinite params[“nu”] typically corresponds to the heat kernel (a.k.a. diffusion kernel, generalized squared exponential kernel, generalized Gaussian kernel, generalized RBF kernel). Although it is often considered to be a separate entity, we treat the heat kernel as a member of the Matérn family, with smoothness parameter equal to infinity.

  • X (lab.Numeric) – A batch of N inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

  • X2 (beartype.typing.Optional[lab.Numeric]) –

    A batch of M inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

    X2=None sets X2=X.

    Defaults to None.

Returns:

The N x M cross-covariance matrix.

Return type:

lab.Numeric

Warning

The types of values in the params dict determine the backend used for internal computations and the output type.

Even if, say, geometric_kernels.jax is imported but the values in the params dict are NumPy arrays, the output type will be a NumPy array, and NumPy will be used for internal computations. To get a JAX array as an output and use JAX for internal computations, all the values in the params dict must be JAX arrays.
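
For illustration, below is a minimal sketch of how the backend is selected. It uses the concrete MaternGeometricKernel on a 2-dimensional Hypersphere; the specific space, the three example points, and the conversion to JAX are assumptions made purely for the example.

import numpy as np
import jax.numpy as jnp

import geometric_kernels.jax  # noqa: F401  (enables the JAX backend)
from geometric_kernels.spaces import Hypersphere
from geometric_kernels.kernels import MaternGeometricKernel

kernel = MaternGeometricKernel(Hypersphere(dim=2))
X = np.eye(3)  # three points on the 2-sphere (unit vectors in R^3)

params_np = kernel.init_params()       # NumPy-valued params
K_np = kernel.K(params_np, X)          # NumPy backend, NumPy output

# Replacing the values with JAX arrays switches the backend and the output type.
params_jax = {k: jnp.asarray(v) for k, v in params_np.items()}
K_jax = kernel.K(params_jax, jnp.asarray(X))  # JAX backend, JAX output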

abstract K_diag(params, X, **kwargs)[source]

Returns the diagonal of the covariance matrix self.K(params, X, X), typically in a more efficient way than actually computing the full covariance matrix with self.K(params, X, X) and then extracting its diagonal.

Parameters:
  • params (beartype.typing.Dict[str, lab.Numeric]) – Same as for K().

  • X (lab.Numeric) – A batch of N inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

Returns:

The N-dimensional vector representing the diagonal of the covariance matrix self.K(params, X, X).

Return type:

lab.Numeric

abstract init_params()[source]

Initializes the dict of the trainable parameters of the kernel.

It typically contains only two keys: “nu” and “lengthscale”.

This dict can be modified and is passed to methods such as K() or K_diag() as the params argument.

Note

The values in the returned dict are always of the NumPy array type. Thus, if you want to use some other backend for internal computations when calling K() or K_diag(), you need to replace the values with the analogs typed as arrays of the desired backend.

Return type:

beartype.typing.Dict[str, lab.NPNumeric]

property space: beartype.typing.Union[geometric_kernels.spaces.Space, beartype.typing.List[geometric_kernels.spaces.Space]]

The space on which the kernel is defined.

Return type:

beartype.typing.Union[geometric_kernels.spaces.Space, beartype.typing.List[geometric_kernels.spaces.Space]]

class geometric_kernels.kernels.MaternFeatureMapKernel(space, feature_map, key, normalize=True)[source]

Bases: geometric_kernels.kernels.base.BaseGeometricKernel

This class computes a (Matérn) kernel based on a feature map.

\[k_{\nu, \kappa}(x, y) = \langle \phi_{\nu, \kappa}(x), \phi_{\nu, \kappa}(y) \rangle_{\mathbb{R}^n}\]

where \(\langle \cdot , \cdot \rangle_{\mathbb{R}^n}\) is the standard inner product in \(\mathbb{R}^n\) and \(\phi_{\nu, \kappa}: X \to \mathbb{R}^n\) is an arbitrary function called a feature map. We assume that it depends on the smoothness and length scale parameters \(\nu\) and \(\kappa\), respectively, which makes this kernel specifically Matérn.

Note

A brief introduction to feature maps and related kernels can be found on this page.

Note that the finite-dimensional feature maps this kernel is meant to be used with are, in most cases, approximations of intractable infinite-dimensional feature maps.

Parameters:
  • space (geometric_kernels.spaces.base.Space) – The space on which the kernel is defined.

  • feature_map (geometric_kernels.feature_maps.FeatureMap) – A FeatureMap object that represents an arbitrary function \(\phi_{\nu, \kappa}: X \to \mathbb{R}^n\), where \(X\) is the space, \(n\) can be an arbitrary finite integer, and \(\nu, \kappa\) are the smoothness and length scale parameters.

  • key (lab.RandomState) –

    Random state, either np.random.RandomState, tf.random.Generator, torch.Generator or jax.tensor (which represents a random state).

    Many feature maps used in the library are randomized, thus requiring a key to work. The MaternFeatureMapKernel uses this key to make them (and thus the kernel) deterministic, applying the utility function make_deterministic() to the pair feature_map, key.

    Note

    Even if the feature_map is deterministic, you need to provide a valid key, although it will essentially be ignored. In the future, we should probably make the key parameter optional.

  • normalize (bool) –

    This parameter is directly passed on to the feature_map as a keyword argument “normalize”. If normalize=True, then either \(k(x, x) = 1\) for all \(x \in X\), or \(\int_X k(x, x) d x = 1\), depending on the type of the feature map and on the space \(X\).

    Note

    For many kernel methods, \(k(\cdot, \cdot)\) and \(a k(\cdot, \cdot)\) are indistinguishable, whatever the positive constant \(a\) is. For these, it makes sense to use normalize=False to save some computational overhead. For others, like Gaussian process regression, the normalization of the kernel might be important. In these cases, you will typically want to set normalize=True.
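
Below is a minimal usage sketch, assuming the NumPy backend, the 2-dimensional Hypersphere as the space, and the default feature map produced by default_feature_map(); the parameter values and the number of features are illustrative only.

import numpy as np

from geometric_kernels.spaces import Hypersphere
from geometric_kernels.kernels import MaternFeatureMapKernel, default_feature_map

space = Hypersphere(dim=2)
feature_map = default_feature_map(space=space, num=16)  # approximate feature map
key = np.random.RandomState(1234)  # required even if the feature map is deterministic

kernel = MaternFeatureMapKernel(space, feature_map, key)
params = kernel.init_params()
params["nu"] = np.array([2.5])           # Matern-5/2
params["lengthscale"] = np.array([0.5])

X = np.eye(3)                      # three points on the 2-sphere (unit vectors in R^3)
K = kernel.K(params, X)            # 3 x 3 cross-covariance matrix
K_diag = kernel.K_diag(params, X)  # its diagonal, shape (3,)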

K(params, X, X2=None, **kwargs)[source]

Compute the cross-covariance matrix between two batches of vectors of inputs, or batches of matrices of inputs, depending on the space.

Parameters:
  • params (beartype.typing.Dict[str, lab.Numeric]) –

    A dict of kernel parameters, typically containing two keys: “lengthscale” for length scale and “nu” for smoothness.

    The types of values in the params dict determine the output type and the backend used for the internal computations; see the warning below for more details.

    Note

    The values params[“lengthscale”] and params[“nu”] are typically (1,)-shaped arrays of the suitable backend. This indicates the backend to be used for internal computations.

    In some cases, for example, when the kernel is ProductGeometricKernel, the values of params may be (s,)-shaped arrays instead, where s is the number of factors.

    Note

    Finite values of params[“nu”] typically correspond to the generalized (geometric) Matérn kernels.

    Infinite params[“nu”] typically corresponds to the heat kernel (a.k.a. diffusion kernel, generalized squared exponential kernel, generalized Gaussian kernel, generalized RBF kernel). Although it is often considered to be a separate entity, we treat the heat kernel as a member of the Matérn family, with smoothness parameter equal to infinity.

  • X (lab.Numeric) – A batch of N inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

  • X2 (beartype.typing.Optional[lab.Numeric]) –

    A batch of M inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

    X2=None sets X2=X.

    Defaults to None.

Returns:

The N x M cross-covariance matrix.

Warning

The types of values in the params dict determine the backend used for internal computations and the output type.

Even if, say, geometric_kernels.jax is imported but the values in the params dict are NumPy arrays, the output type will be a NumPy array, and NumPy will be used for internal computations. To get a JAX array as an output and use JAX for internal computations, all the values in the params dict must be JAX arrays.

K_diag(params, X, **kwargs)[source]

Returns the diagonal of the covariance matrix self.K(params, X, X), typically in a more efficient way than actually computing the full covariance matrix with self.K(params, X, X) and then extracting its diagonal.

Parameters:
  • params (beartype.typing.Dict[str, lab.Numeric]) – Same as for K().

  • X (lab.Numeric) – A batch of N inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

Returns:

The N-dimensional vector representing the diagonal of the covariance matrix self.K(params, X, X).

init_params()[source]

Initializes the dict of the trainable parameters of the kernel.

Returns dict(nu=np.array([np.inf]), lengthscale=np.array([1.0])).

This dict can be modified and is passed to methods such as K() or K_diag() as the params argument.

Note

The values in the returned dict are always of the NumPy array type. Thus, if you want to use some other backend for internal computations when calling K() or K_diag(), you need to replace the values with the analogs typed as arrays of the desired backend.

Return type:

beartype.typing.Dict[str, lab.NPNumeric]

class geometric_kernels.kernels.MaternGeometricKernel[source]

This class represents a Matérn geometric kernel that “just works”. Unless you really know what you are doing, you should always use this kernel class.

Upon creation, it resolves into a specific geometric kernel based on the provided space and, optionally, also returns the associated (approximate) feature map.

__new__(space, num=None, normalize=True, return_feature_map=False, **kwargs)[source]

Construct a kernel and (if return_feature_map is True) a feature map on space.

Note

See this page for a brief introduction to feature maps.

Parameters:
  • space (geometric_kernels.spaces.Space) – Space to construct the kernel on.

  • num (int) –

    If provided, controls the “order of approximation” of the kernel. For the discrete spectrum spaces, this means the number of “levels” that go into the truncated series that defines the kernel (for example, these are unique eigenvalues for the Hypersphere or eigenvalues with repetitions for the Graph or for the Mesh). For the non-compact symmetric spaces (NoncompactSymmetricSpace), this is the number of random phases used to construct the kernel.

    If num=None, we use a (hopefully) reasonable default, which is space-dependent.

  • normalize (bool) –

    Normalize the kernel (and the feature map). If normalize=True, then either \(k(x, x) = 1\) for all \(x \in X\), where \(X\) is the space, or \(\int_X k(x, x) d x = 1\), depending on the space.

    Defaults to True.

    Note

    For many kernel methods, \(k(\cdot, \cdot)\) and \(a k(\cdot, \cdot)\) are indistinguishable, whatever the positive constant \(a\) is. For these, it makes sense to use normalize=False to save some computational overhead. For others, like Gaussian process regression, the normalization of the kernel might be important. In these cases, you will typically want to set normalize=True.

  • return_feature_map (bool) –

    If True, return a feature map (needed e.g. for efficient sampling from Gaussian processes) along with the kernel.

    Default is False.

  • **kwargs – Any additional keyword arguments to be passed to the kernel (like key).

Note

For non-compact symmetric spaces, like Hyperbolic or SymmetricPositiveDefiniteMatrices, the key must be provided in **kwargs.
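
Below is a minimal sketch, assuming the NumPy backend; the spaces (Hypersphere and Hyperbolic) and all parameter values are chosen purely for illustration.

import numpy as np

from geometric_kernels.spaces import Hyperbolic, Hypersphere
from geometric_kernels.kernels import MaternGeometricKernel

# Discrete spectrum space: no key is required.
sphere = Hypersphere(dim=2)
kernel, feature_map = MaternGeometricKernel(sphere, return_feature_map=True)

params = kernel.init_params()
params["nu"] = np.array([1.5])           # Matern-3/2
params["lengthscale"] = np.array([0.5])

X = np.eye(3)            # three points on the 2-sphere (unit vectors in R^3)
K = kernel.K(params, X)  # 3 x 3 cross-covariance matrix

# Non-compact symmetric space: a key must be passed via **kwargs.
hyperbolic_kernel = MaternGeometricKernel(Hyperbolic(dim=2), key=np.random.RandomState(0))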

class geometric_kernels.kernels.MaternKarhunenLoeveKernel(space, num_levels, normalize=True)[source]

Bases: geometric_kernels.kernels.base.BaseGeometricKernel

This class approximates a Matérn kernel by its truncated Mercer decomposition, in terms of the eigenfunctions and eigenvalues of the Laplacian on the space.

\[k(x, x') = \sum_{l=0}^{L-1} S(\sqrt{\lambda_l}) \sum_{s=1}^{d_l} f_{ls}(x) f_{ls}(x'),\]

where \(\lambda_l\) and \(f_{ls}(\cdot)\) are the eigenvalues and eigenfunctions of the Laplacian such that \(\Delta f_{ls} = \lambda_l f_{ls}\), and \(S(\cdot)\) is the spectrum of the Matérn kernel. The eigenvalues and eigenfunctions belong to the DiscreteSpectrumSpace instance.

We denote

\[G_l(\cdot, \cdot') = \sum_{s=1}^{d_l} f_{ls}(\cdot) f_{ls}(\cdot')\]

and term the sets \([f_{ls}]_{s=1}^{d_l}\) “levels”.

For many spaces, like the sphere, we can employ addition theorems to efficiently compute \(G_l(\cdot, \cdot')\) without calculating the individual \(f_{ls}(\cdot)\). Note that \(\lambda_l\) are not required to be unique: it is possible that for some \(l,l'\), \(\lambda_l = \lambda_{l'}\). In other words, the “levels” do not necessarily correspond to full eigenspaces. A level may even correspond to a single eigenfunction.

Note

A brief introduction to the theory behind MaternKarhunenLoeveKernel can be found on these documentation pages.

Parameters:
  • space (geometric_kernels.spaces.DiscreteSpectrumSpace) – The space to define the kernel upon.

  • num_levels (int) – Number of levels to include in the summation.

  • normalize (bool) – Whether to normalize the kernel to have unit average variance.
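
Below is a minimal sketch, assuming the NumPy backend and the 2-dimensional Hypersphere as an example DiscreteSpectrumSpace; the number of levels and the parameter values are illustrative only.

import numpy as np

from geometric_kernels.spaces import Hypersphere
from geometric_kernels.kernels import MaternKarhunenLoeveKernel

space = Hypersphere(dim=2)
kernel = MaternKarhunenLoeveKernel(space, num_levels=10)

params = kernel.init_params()
params["nu"] = np.array([np.inf])        # heat kernel (infinite smoothness)
params["lengthscale"] = np.array([0.3])

X = np.eye(3)                          # three points on the 2-sphere (unit vectors in R^3)
K = kernel.K(params, X)                # 3 x 3 cross-covariance matrix
spectrum = kernel.eigenvalues(params)  # [10, 1]-shaped kernel "eigenvalues"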

K(params, X, X2=None, **kwargs)[source]

Compute the cross-covariance matrix between two batches of vectors of inputs, or batches of matrices of inputs, depending on the space.

Parameters:
  • params

    A dict of kernel parameters, typically containing two keys: “lengthscale” for length scale and “nu” for smoothness.

    The types of values in the params dict determine the output type and the backend used for the internal computations; see the warning below for more details.

    Note

    The values params[“lengthscale”] and params[“nu”] are typically (1,)-shaped arrays of the suitable backend. This indicates the backend to be used for internal computations.

    In some cases, for example, when the kernel is ProductGeometricKernel, the values of params may be (s,)-shaped arrays instead, where s is the number of factors.

    Note

    Finite values of params[“nu”] typically correspond to the generalized (geometric) Matérn kernels.

    Infinite params[“nu”] typically corresponds to the heat kernel (a.k.a. diffusion kernel, generalized squared exponential kernel, generalized Gaussian kernel, generalized RBF kernel). Although it is often considered to be a separate entity, we treat the heat kernel as a member of the Matérn family, with smoothness parameter equal to infinity.

  • X (lab.Numeric) – A batch of N inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

  • X2 (beartype.typing.Optional[lab.Numeric]) –

    A batch of M inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

    X2=None sets X2=X.

    Defaults to None.

Returns:

The N x M cross-covariance matrix.

Return type:

lab.Numeric

Warning

The types of values in the params dict determine the backend used for internal computations and the output type.

Even if, say, geometric_kernels.jax is imported but the values in the params dict are NumPy arrays, the output type will be a NumPy array, and NumPy will be used for internal computations. To get a JAX array as an output and use JAX for internal computations, all the values in the params dict must be JAX arrays.

K_diag(params, X, **kwargs)[source]

Returns the diagonal of the covariance matrix self.K(params, X, X), typically in a more efficient way than actually computing the full covariance matrix with self.K(params, X, X) and then extracting its diagonal.

Parameters:
  • params – Same as for K().

  • X (lab.Numeric) – A batch of N inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

Returns:

The N-dimensional vector representing the diagonal of the covariance matrix self.K(params, X, X).

Return type:

lab.Numeric

eigenvalues(params, normalize=None)[source]

Eigenvalues of the kernel.

Parameters:
  • params (beartype.typing.Dict[str, lab.Numeric]) – Parameters of the kernel. Must contain keys “lengthscale” and “nu”. The shapes of params[“lengthscale”] and params[“nu”] are (1,).

  • normalize (beartype.typing.Optional[bool]) –

    Whether to normalize the kernel to have unit average variance. If None, uses self.normalize to decide.

    Defaults to None.

Returns:

An [L, 1]-shaped array.

Return type:

lab.Numeric

init_params()[source]

Initializes the dict of the trainable parameters of the kernel.

Returns dict(nu=np.array([np.inf]), lengthscale=np.array([1.0])).

This dict can be modified and is passed to methods such as K() or K_diag() as the params argument.

Note

The values in the returned dict are always of the NumPy array type. Thus, if you want to use some other backend for internal computations when calling K() or K_diag(), you need to replace the values with the analogs typed as arrays of the desired backend.

Return type:

beartype.typing.Dict[str, lab.NPNumeric]

property eigenfunctions: geometric_kernels.spaces.eigenfunctions.Eigenfunctions

Eigenfunctions of the kernel.

Return type:

geometric_kernels.spaces.eigenfunctions.Eigenfunctions

property eigenvalues_laplacian: lab.Numeric

Eigenvalues of the Laplacian.

Return type:

lab.Numeric

property space: geometric_kernels.spaces.DiscreteSpectrumSpace

The space on which the kernel is defined.

Return type:

geometric_kernels.spaces.DiscreteSpectrumSpace

class geometric_kernels.kernels.ProductGeometricKernel(*kernels, dimension_indices=None)[source]

Bases: geometric_kernels.kernels.base.BaseGeometricKernel

Product kernel, defined as the product of a sequence of kernels.

See this page for a brief account of the theory behind product kernels, and the Torus.ipynb notebook for a tutorial on how to use them.

Parameters:
  • *kernels (geometric_kernels.kernels.base.BaseGeometricKernel) – A sequence of kernels to compute the product of. Cannot contain another instance of ProductGeometricKernel. We denote the number of factors, i.e. the length of the “sequence”, by s.

  • dimension_indices (beartype.typing.Optional[beartype.typing.List[beartype.typing.List[int]]]) –

    Determines how a product kernel input vector x is to be mapped into the inputs xi for the factor kernels. xi are assumed to be equal to x[dimension_indices[i]], possibly up to a reshape. Such a reshape might be necessary to accommodate the spaces whose elements are matrices rather than vectors, as determined by element_shapes. The transformation of x into the list of xis is performed by project_product().

    If None, assumes that each input is a flattened and concatenated representation of the inputs to the factor kernels, in the same order as the factor spaces. In this case, the inverse to project_product() is make_product().

    Defaults to None.

Note

params of a ProductGeometricKernel are such that params[“lengthscale”] and params[“nu”] are (s,)-shaped arrays, where s is the number of factors.

Here, params[“lengthscale”][i] stores the length scale parameter for the i-th factor kernel, and the same goes for params[“nu”]. Importantly, this enables automatic relevance determination (ARD)-like behavior.
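
Below is a minimal sketch, assuming the NumPy backend and a product of two Circle factors (a torus); the factor kernels and all parameter values are assumptions made for the example.

import numpy as np

from geometric_kernels.spaces import Circle
from geometric_kernels.kernels import MaternGeometricKernel, ProductGeometricKernel

k1 = MaternGeometricKernel(Circle())  # factor kernel on the first circle
k2 = MaternGeometricKernel(Circle())  # factor kernel on the second circle
kernel = ProductGeometricKernel(k1, k2)

params = kernel.init_params()                 # "nu" and "lengthscale" are (2,)-shaped
params["nu"] = np.array([0.5, 2.5])
params["lengthscale"] = np.array([0.3, 1.0])  # ARD-like: one length scale per factor

# With dimension_indices=None, each row concatenates the two angles in [0, 2*pi).
X = np.array([[0.0, 0.0],
              [np.pi / 2, np.pi],
              [np.pi, np.pi / 2]])

K = kernel.K(params, X)  # 3 x 3 cross-covariance matrix on the torus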

K(params, X, X2=None, **kwargs)[source]

Compute the cross-covariance matrix between two batches of vectors of inputs, or batches of matrices of inputs, depending on the space.

Parameters:
  • params (beartype.typing.Dict[str, lab.Numeric]) –

    A dict of kernel parameters, typically containing two keys: “lengthscale” for length scale and “nu” for smoothness.

    The types of values in the params dict determine the output type and the backend used for the internal computations; see the warning below for more details.

    Note

    The values params[“lengthscale”] and params[“nu”] are typically (1,)-shaped arrays of the suitable backend. This indicates the backend to be used for internal computations.

    In some cases, for example, when the kernel is ProductGeometricKernel, the values of params may be (s,)-shaped arrays instead, where s is the number of factors.

    Note

    Finite values of params[“nu”] typically correspond to the generalized (geometric) Matérn kernels.

    Infinite params[“nu”] typically corresponds to the heat kernel (a.k.a. diffusion kernel, generalized squared exponential kernel, generalized Gaussian kernel, generalized RBF kernel). Although it is often considered to be a separate entity, we treat the heat kernel as a member of the Matérn family, with smoothness parameter equal to infinity.

  • X – A batch of N inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

  • X2

    A batch of M inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

    X2=None sets X2=X.

    Defaults to None.

Returns:

The N x M cross-covariance matrix.

Return type:

lab.Numeric

Warning

The types of values in the params dict determine the backend used for internal computations and the output type.

Even if, say, geometric_kernels.jax is imported but the values in the params dict are NumPy arrays, the output type will be a NumPy array, and NumPy will be used for internal computations. To get a JAX array as an output and use JAX for internal computations, all the values in the params dict must be JAX arrays.

K_diag(params, X)[source]

Returns the diagonal of the covariance matrix self.K(params, X, X), typically in a more efficient way than actually computing the full covariance matrix with self.K(params, X, X) and then extracting its diagonal.

Parameters:
  • params – Same as for K().

  • X – A batch of N inputs, each of which is a vector or a matrix, depending on how the elements of the self.space are represented.

Returns:

The N-dimensional vector representing the diagonal of the covariance matrix self.K(params, X, X).

init_params()[source]

Returns a dict params where params[“lengthscale”] is the concatenation of all the self.kernels[i].init_params()[“lengthscale”] values, and similarly for params[“nu”].

Return type:

beartype.typing.Dict[str, lab.NPNumeric]

property space: beartype.typing.List[geometric_kernels.spaces.Space]

The list of spaces upon which the factor kernels are defined.

Return type:

beartype.typing.List[geometric_kernels.spaces.Space]

geometric_kernels.kernels.default_feature_map(*, space=None, num=None, kernel=None)[source]

Constructs the default feature map for the specified space or kernel.

Parameters:
  • space (geometric_kernels.spaces.Space) – A space to construct the feature map on. If provided, kernel must either be omitted or set to None.

  • kernel (geometric_kernels.kernels.base.BaseGeometricKernel) – A kernel to construct the feature map from. If provided, space and num must either be omitted or set to None.

  • num (int) – Controls the number of features (dimensionality of the feature map). If omitted or set to None, the default value for each respective space is used. Must only be provided when constructing a feature map on a space (not from a kernel).

Returns:

A callable representing the respective feature map.
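
Below is a minimal sketch, assuming the NumPy backend and the 2-dimensional Hypersphere; it shows both ways of constructing the default feature map. The exact call signature of the returned callable depends on the space and the type of feature map.

from geometric_kernels.spaces import Hypersphere
from geometric_kernels.kernels import MaternKarhunenLoeveKernel, default_feature_map

space = Hypersphere(dim=2)

# From a space: num controls the number of features (the order of approximation).
feature_map_from_space = default_feature_map(space=space, num=16)

# From an already constructed kernel: space and num must not be provided.
kernel = MaternKarhunenLoeveKernel(space, num_levels=16)
feature_map_from_kernel = default_feature_map(kernel=kernel)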