Kernels on Graphs¶
Warning
You can get by fine without reading this page for almost all use cases: just use the standard MaternGeometricKernel, following this example notebook if you are modeling a function of nodes, or this example notebook if you are modeling a function of edges.
This is optional material meant to explain the basic theory; it is based mainly on Borovitskiy et al. [2021].
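For a quick start, here is a minimal sketch of what using MaternGeometricKernel looks like in code. The adjacency matrix is made up for illustration, and the parameter-handling calls (init_params, K) reflect one recent version of the GeometricKernels API, which may differ slightly between releases.

```python
import numpy as np
import geometric_kernels  # registers the default (NumPy) backend
from geometric_kernels.spaces import Graph
from geometric_kernels.kernels import MaternGeometricKernel

# A toy weighted undirected graph given by its symmetric adjacency matrix.
A = np.array([
    [0.0, 1.0, 0.0, 2.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
    [2.0, 0.0, 1.0, 0.0],
])

space = Graph(A)                        # the node set of the graph
kernel = MaternGeometricKernel(space)
params = kernel.init_params()           # dict with "nu" and "lengthscale"
params["nu"] = np.array([3 / 2])
params["lengthscale"] = np.array([1.0])

nodes = np.arange(A.shape[0]).reshape(-1, 1)  # inputs are node indices
K = kernel.K(params, nodes)                   # 4 x 4 kernel matrix
```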
Node Set of a Graph¶
Consider a (weighted) undirected graph with \(N\) nodes determined by a (weighted) adjacency matrix \(\mathbf{A}\) of size \(N \times N\). The weights are nonnegative: \(\mathbf{A}_{i j} \geq 0\).
Let \(\mathbf{D}\) denote the degree matrix, i.e. the diagonal matrix with \(\mathbf{D}_{j j} = \sum_{n=1}^N \mathbf{A}_{n j}\). Then there are two notions of the graph Laplacian you can use to define kernels: the unnormalized graph Laplacian \(\mathbf{\Delta}_{un} = \mathbf{D} - \mathbf{A}\) and the symmetric normalized graph Laplacian \(\mathbf{\Delta}_{no} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}\) where \(\mathbf{I}\) denotes the identity matrix. Which one to use is up to you. The performance will depend on the task at hand. In practice, it makes sense to try both.
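As a concrete illustration, here is a small NumPy sketch of both Laplacians; the adjacency matrix is made up, and the snippet assumes the graph has no isolated nodes, so that \(\mathbf{D}^{-1/2}\) exists.

```python
import numpy as np

# A toy weighted undirected (symmetric) adjacency matrix.
A = np.array([
    [0.0, 1.0, 0.0, 2.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
    [2.0, 0.0, 1.0, 0.0],
])
N = A.shape[0]

D = np.diag(A.sum(axis=0))                          # degree matrix
Delta_un = D - A                                    # unnormalized graph Laplacian
D_inv_sqrt = np.diag(np.diag(D) ** -0.5)            # D^{-1/2}
Delta_no = np.eye(N) - D_inv_sqrt @ A @ D_inv_sqrt  # symmetric normalized Laplacian
```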
If \(\mathbf{\Delta}\) is either \(\mathbf{\Delta}_{un}\) or \(\mathbf{\Delta}_{no}\), it is a symmetric positive semidefinite matrix. Because of this, there is an orthonormal basis \(\{\boldsymbol f_l\}_{l=0}^{N-1}\) of \(\mathbb{R}^N\) consisting of eigenvectors such that
\[
\mathbf{\Delta} \boldsymbol f_l = \lambda_l \boldsymbol f_l, \qquad 0 = \lambda_0 \leq \lambda_1 \leq \ldots \leq \lambda_{N-1}.
\]
The eigenvectors \(\boldsymbol f_l\) can be regarded as functions on the graph nodes: \(f_l(j) = (\boldsymbol f_l)_j\).
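Continuing the sketch above, np.linalg.eigh returns exactly such an orthonormal eigenbasis, with eigenvalues in ascending order:

```python
lam, F = np.linalg.eigh(Delta_un)  # lam[l] = lambda_l, column F[:, l] = f_l

def f(l, j):
    """The l-th eigenvector viewed as a function on nodes: f_l(j) = (f_l)_j."""
    return F[j, l]
```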
For graphs, MaternGeometricKernel is an alias to MaternKarhunenLoeveKernel. The latter is given by the formula
\[
k_{\nu, \kappa}(i, j) = \frac{1}{C_{\nu, \kappa}} \sum_{l=0}^{L-1} \Phi_{\nu, \kappa}(\lambda_l) f_l(i) f_l(j),
\qquad
\Phi_{\nu, \kappa}(\lambda) =
\begin{cases}
\left(\frac{2 \nu}{\kappa^2} + \lambda\right)^{-\nu} & \nu < \infty \text{ (Matérn)}, \\
e^{-\frac{\kappa^2}{2} \lambda} & \nu = \infty \text{ (heat, a.k.a. RBF)},
\end{cases}
\]
where \(1 \leq L \leq N\) controls the quality of approximation of the kernel.
Throughout the package, \(L\) is referred to as the number of levels (though this term may have a different meaning for other spaces [1]).
Setting \(L = N\) gives you the exact kernel but usually requires \(O(N^3)\) operations to compute the eigenpairs.
Setting \(L \ll N\) can in principle allow much faster eigenpair computation for some graphs. Such techniques are, however, not (yet) implemented in the library.
The constant \(C_{\nu, \kappa}\) ensures that the average variance is equal to \(1\), i.e. \(\frac{1}{N} \sum_{n=1}^N k_{\nu, \kappa}(n, n) = 1\).
It is easy to show that \(C_{\nu, \kappa} = \frac{1}{N} \sum_{l=0}^{L-1} \Phi_{\nu, \kappa}(\lambda_l)\).
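To make the formula concrete, here is a plain NumPy sketch of the truncated kernel; the helper names phi and matern_kl_kernel are made up for illustration and are not part of the GeometricKernels API.

```python
def phi(lam, nu, kappa):
    """The spectral function Phi_{nu, kappa}: Matern for finite nu, heat for nu = inf."""
    if np.isinf(nu):
        return np.exp(-(kappa**2 / 2.0) * lam)
    return (2.0 * nu / kappa**2 + lam) ** (-nu)

def matern_kl_kernel(lam, F, nu, kappa, L):
    """k(i, j) = (1/C) sum_{l < L} Phi(lambda_l) f_l(i) f_l(j), average variance 1."""
    w = phi(lam[:L], nu, kappa)
    C = w.sum() / F.shape[0]            # C_{nu, kappa} = (1/N) sum_l Phi(lambda_l)
    return (F[:, :L] * w) @ F[:, :L].T / C

K = matern_kl_kernel(lam, F, nu=3 / 2, kappa=1.0, L=len(lam))  # L = N: exact kernel
```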
Notes:
The “variance” \(k(n, n)\) can vary from point to point.
Unlike in the Euclidean or the manifold case, the values \(1/2, 3/2, 5/2\) may fail to be reasonable choices of \(\nu\). On the other hand, the parameter \(\nu\) can be optimized in the same way the lengthscale is. Keep in mind, though, that the optimization problem may require some trial and error to find a good initialization, and that reasonable values of \(\kappa\) and \(\nu\) will depend heavily on the specific graph in a way that is hard to predict.
Consider \(\mathbf{A}' = \alpha^2 \mathbf{A}\) for some \(\alpha > 0\). Then, for the normalized graph Laplacian \(\mathbf{\Delta}_{no}\), we have \(k_{\nu, \kappa}' (i, j) = k_{\nu, \kappa} (i, j)\) where \(k_{\nu, \kappa}'\) is the kernel corresponding to \(\mathbf{A}'\) instead of \(\mathbf{A}\). On the other hand, for the unnormalized graph Laplacian \(\mathbf{\Delta}_{un}\), we have \(k_{\nu, \kappa}' (i, j) = k_{\nu, \alpha \cdot \kappa} (i, j)\), i.e. the lengthscale changes.
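The last note is easy to check numerically with the helpers above: scaling \(\mathbf{A}\) by \(\alpha^2\) scales the unnormalized Laplacian, and hence its eigenvalues, by \(\alpha^2\), which the spectrum \(\Phi_{\nu, \kappa}\) absorbs into the lengthscale.

```python
alpha = 1.7
lam_scaled, F_scaled = np.linalg.eigh(alpha**2 * Delta_un)  # Laplacian for A' = alpha^2 A

K_scaled = matern_kl_kernel(lam_scaled, F_scaled, nu=3 / 2, kappa=1.0, L=N)
K_rescaled = matern_kl_kernel(lam, F, nu=3 / 2, kappa=alpha * 1.0, L=N)
assert np.allclose(K_scaled, K_rescaled)  # k'_{nu, kappa} == k_{nu, alpha * kappa}
```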
Edge Set of a Graph¶
If you want to model a signal on the edges of a graph \(G\), you can consider modeling the signal on the nodes of the line graph. The line graph of \(G\) has a node for each edge of \(G\), and two of its nodes are connected by an edge if the corresponding edges in \(G\) share a common node. To build the line graph, you can use the line_graph function from networkx, as in the sketch below. Alternatively, especially for flow-type data, you might want to use the specialized edge kernels that we briefly describe below.
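A minimal networkx sketch (the example graph is arbitrary):

```python
import networkx as nx

G = nx.karate_club_graph()                   # any undirected example graph
LG = nx.line_graph(G)                        # one node per edge of G
LG = nx.convert_node_labels_to_integers(LG)  # relabel edge-tuples as 0..N-1
A_line = nx.to_numpy_array(LG)               # adjacency matrix for the node-set kernels above
```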
To define kernels for flow-type data on graph edges, the graph is extended to a simplicial 2-complex. You can ignore the concept of a simplicial 2-complex and treat the space as a graph because GeometricKernels can automatically extend your graph to a simplicial 2-complex in a sensible way. However, if you prefer to ignore this concept, you may want to disregard the rest of this section too.
Simplicial 2-complexes¶
A simplicial 2-complex is a collection of \(N_0\) nodes, \(N_1\) edges, and \(N_2\) triangles such that each triangle is a triplet of nodes and each edge is a pair of nodes. The edges are not directed, but they are oriented [2]. An oriented edge, denoted \(e = [i, j]\), is an ordering of \(\{i, j\}\). This is not a directed edge allowing flow only from \(i\) to \(j\), but rather an assignment of the sign of the flow: from \(i\) to \(j\) it is positive, and the reverse is negative. For \(e = [i, j]\), we denote the same edge with reverse orientation by \(-e = [j, i]\). The same goes for oriented triangles, denoted \(t = [i, j, k]\), where \(i, j, k\) are nodes, with
\[
[i, j, k] = [j, k, i] = [k, i, j] = -[i, k, j] = -[k, j, i] = -[j, i, k].
\]
A function \(f : E \to \mathbb{R}\) on the edge set \(E\) of a simplicial 2-complex is required to be alternating, i.e. \(f(-e) = -f(e)\) for all \(e \in E\). Such a function may be identified with the vector \(\mathbf{f} \in \mathbb{R}^{N_1}\) of its values on the edges taken in some fixed reference orientation.
Hodge Laplacian¶
Given a simplicial 2-complex, we can define the discrete Hodge Laplacian, which operates on the space of edge flows, as
\[
\mathbf{L} = \mathbf{L}_d + \mathbf{L}_u, \qquad \mathbf{L}_d = \mathbf{B}_1^\top \mathbf{B}_1, \qquad \mathbf{L}_u = \mathbf{B}_2 \mathbf{B}_2^\top,
\]
where \(\mathbf{B}_1\) is the oriented node-to-edge incidence matrix and \(\mathbf{B}_2\) is the oriented edge-to-triangle incidence matrix.
The Hodge Laplacian \(\mathbf{L}\) describes the connectivity of edges, where the down part \(\mathbf{L}_d\) and the up part \(\mathbf{L}_u\) encode how edges are adjacent, respectively, through nodes and via triangles. The matrix \(\mathbf{L}\) is positive semidefinite, admitting an orthonormal basis of eigenvectors \(\mathbf{f}_1, \ldots, \mathbf{f}_{N_1}\) with eigenvalues \(\lambda_l \geq 0\):
\[
\mathbf{L} \mathbf{f}_l = \lambda_l \mathbf{f}_l, \qquad 1 \leq l \leq N_1.
\]
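Here is a toy NumPy sketch of these definitions for a made-up complex with four nodes, four edges, and one triangle; the incidence sign conventions below are one common choice and are an assumption of this illustration.

```python
import numpy as np

# Oriented edges: e0 = [0, 1], e1 = [1, 2], e2 = [0, 2], e3 = [2, 3].
# B1[n, e] = -1 if node n is the tail of edge e, +1 if it is the head.
B1 = np.array([
    [-1,  0, -1,  0],
    [ 1, -1,  0,  0],
    [ 0,  1,  1, -1],
    [ 0,  0,  0,  1],
])

# One oriented triangle t0 = [0, 1, 2], with boundary e0 + e1 - e2.
# B2[e, t] = +1 / -1 if edge e occurs in triangle t with matching / opposite orientation.
B2 = np.array([[1], [1], [-1], [0]])

L_d = B1.T @ B1      # down part: edges adjacent through shared nodes
L_u = B2 @ B2.T      # up part: edges adjacent through shared triangles
L_hodge = L_d + L_u  # Hodge Laplacian acting on edge flows

lam_e, F_e = np.linalg.eigh(L_hodge)  # eigenpairs lambda_l, f_l used below
```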
Matérn Karhunen–Loève Kernel¶
The eigenpairs \(\lambda_l, \mathbf{f}_l\) of the Hodge Laplacian can be used to define the MaternKarhunenLoeveKernel on the set \(E\) of graph edges. Much like for the node set of a graph, this kernel is given by the formula
\[
k_{\nu, \kappa}(e, e') = \frac{1}{C_{\nu, \kappa}} \sum_{l=1}^{L} \Phi_{\nu, \kappa}(\lambda_l) f_l(e) f_l(e'),
\]
where \(L\) is the user-defined truncation parameter, and \(C_{\nu, \kappa}\) is the normalizing constant making sure that \(\frac{1}{N_1} \sum_{e} k_{\nu, \kappa}(e, e) = 1\), where the summation is over all edges \(e\) in positive orientation.
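With the eigenpairs of the toy Hodge Laplacian above, the matern_kl_kernel helper sketched in the node-set section applies verbatim:

```python
# Edge-space Matern Karhunen-Loeve kernel on the toy complex (here L = N_1 = 4).
K_edges = matern_kl_kernel(lam_e, F_e, nu=5 / 2, kappa=1.0, L=len(lam_e))
```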
Matérn Hodge-compositional Kernel¶
Edge flows can be thought of as discrete analogs of vector fields. Much like for vector fields, you can define the divergence and the curl of an edge flow \(\mathbf{f}\), as well as the gradient of a function \(\mathbf{g}\) on the nodes:
\[
\operatorname{div} \mathbf{f} = \mathbf{B}_1 \mathbf{f}, \qquad \operatorname{curl} \mathbf{f} = \mathbf{B}_2^\top \mathbf{f}, \qquad \operatorname{grad} \mathbf{g} = \mathbf{B}_1^\top \mathbf{g}.
\]
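On the toy complex from the earlier sketch, these operators are plain matrix-vector products (with the sign conventions assumed there):

```python
f_flow = np.array([1.0, -0.5, 2.0, 0.0])  # an edge flow on e0, ..., e3
g_node = np.array([0.0, 1.0, 3.0, -1.0])  # a function on the nodes

div_f = B1 @ f_flow      # divergence: net flow into each node
curl_f = B2.T @ f_flow   # curl: circulation around each triangle
grad_g = B1.T @ g_node   # gradient: edge flow induced by the node function g
```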
These operators give rise to the Hodge decomposition of the space of edge flows and, through it, to the MaternHodgeCompositionalKernel kernel, which is the underlying kernel for the MaternGeometricKernel on the GraphEdges space.
Hodge decomposition implies that eigenvectors of the Hodge Laplacian can be chosen in such a way that they form three groups.
The eigenvectors from the first group are denoted by \(f_l^H\) and called harmonic. They satisfy \(\mathbf{L} f_l^H = \operatorname{div} f_l^H = \operatorname{curl} f_l^H = 0\), corresponding to \(\lambda_l^H = 0\) eigenvalues of the Hodge Laplacian \(\mathbf{L}\).
The eigenvectors from the second group are denoted by \(f_l^G\) and called gradient. They are curl-free: \(\operatorname{curl} f_l^G = 0\), corresponding to nonzero eigenvalues \(\lambda_l^G\) of the down Laplacian \(\mathbf{L}_d\).
The eigenvectors from the third group are denoted by \(f_l^C\) and called curl(-adjoint). They are divergence-free: \(\operatorname{div} f_l^C = 0\), corresponding to nonzero eigenvalues \(\lambda_l^C\) of the up Laplacian \(\mathbf{L}_u\).
The Hodge-compositional kernel is built to enable separable control over the different Hodge subspaces. For the truncation parameter \(L\), one obtains \(L\) eigenpairs of the Hodge Laplacian corresponding to the lowest eigenvalues. Of them, \(L_H\) are associated with zero eigenvalues, \(L_G\) with nonzero eigenvalues of the down Laplacian, and \(L_C\) with nonzero eigenvalues of the up Laplacian. The kernel has three vector hyperparameters: \(\mathbf{\nu} = (\nu_H, \nu_G, \nu_C)\); \(\mathbf{\kappa} = (\kappa_H, \kappa_G, \kappa_C)\); and \(\mathbf{\alpha} = (\alpha_H, \alpha_G, \alpha_C)\). It is given by the formula
\[
k_{\mathbf{\nu}, \mathbf{\kappa}, \mathbf{\alpha}}(e, e') = \sum_{\Box \in \{H, G, C\}} \alpha_\Box \, k^\Box_{\nu_\Box, \kappa_\Box}(e, e'),
\qquad
k^\Box_{\nu_\Box, \kappa_\Box}(e, e') = \frac{1}{C^\Box_{\nu_\Box, \kappa_\Box}} \sum_{l=1}^{L_\Box} \Phi_{\nu_\Box, \kappa_\Box}(\lambda^\Box_l) f^\Box_l(e) f^\Box_l(e'),
\]
where each \(C^\Box_{\nu_\Box, \kappa_\Box}\) normalizes the respective sub-kernel.
By setting \(\alpha_{\Box} = 0\), you can effectively “turn off” the corresponding subspace.
By using equal \(\nu_{\Box}\) and \(\kappa_{\Box}\) for all \(\Box\) and choosing appropriate \(\alpha_{\Box}\), you can recover the Matérn Karhunen–Loève kernel, which is a special case of the Hodge-compositional Matérn kernel. [3]
You can also automatically infer \(\mathbf{\alpha}\) from data.
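The following NumPy sketch mirrors the formula above on the toy complex; detecting the gradient subspace via \(\mathbf{L}_d\) and normalizing each sub-kernel to unit average variance are assumptions of this illustration, not necessarily the library's exact implementation.

```python
def hodge_compositional_kernel(lam, F, L_d, nus, kappas, alphas, tol=1e-8):
    """k = sum over {H, G, C} of alpha_box * k_box (a sketch of the formula above)."""
    harmonic = lam < tol                                            # lambda = 0
    gradient = ~harmonic & (np.linalg.norm(L_d @ F, axis=0) > tol)  # hit by L_d
    curl = ~harmonic & ~gradient                                    # the rest
    K = np.zeros((F.shape[0], F.shape[0]))
    for mask, nu, kappa, alpha in zip((harmonic, gradient, curl), nus, kappas, alphas):
        if mask.any():
            w = phi(lam[mask], nu, kappa)
            C = w.sum() / F.shape[0]  # assumed: unit average variance per sub-kernel
            K += alpha * (F[:, mask] * w) @ F[:, mask].T / C
    return K

K_hc = hodge_compositional_kernel(
    lam_e, F_e, L_d,
    nus=(1.5, 1.5, 1.5), kappas=(1.0, 1.0, 1.0), alphas=(1.0, 1.0, 0.0),
)  # alpha_C = 0 "turns off" the curl subspace
```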
Footnotes