Customize Models
One common demand in graph embedding is to customize the model (i.e. the score function). Here we will demonstrate an example of adding a new model to the knowledge graph solver.

First, open include/model/knowledge_graph.h. Fork an existing model class (e.g. TransE) and rename it.
template<class _Vector>
class TransE {
    __host__ __device__ static void forward(...);

    // one backward overload per category of optimizer,
    // i.e. per number of moment statistics (0, 1 or 2)
    template<OptimizerType optimizer_type>
    __host__ __device__ static void backward(...);

    template<OptimizerType optimizer_type>
    __host__ __device__ static void backward(...);

    template<OptimizerType optimizer_type>
    __host__ __device__ static void backward(...);
};
Here a model class contains a forward function and several overloads of the backward function, which correspond to different categories of optimizers (i.e. different numbers of moment statistics). We are going to modify one forward and one backward function, and then copy the changes to the other overloads.
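For example, the forked copy might look like the following. The name MyTransE is just a placeholder used in the sketches on this page; pick any name you like.

template<class _Vector>
class MyTransE {
    __host__ __device__ static void forward(...);

    template<OptimizerType optimizer_type>
    __host__ __device__ static void backward(...);

    // ... the remaining backward overloads, copied verbatim from TransE
};

At this point the fork behaves exactly like TransE; the steps below change its score function.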
Let’s start with the forward function. This function takes a triplet of embedding vectors and outputs a score.
void forward(const Vector &head, const Vector &tail, const Vector &relation,
             Float &output, float margin)
The last argument is either the margin for latent distance models or the l3 regularization coefficient for tensor decomposition models. For TransE, the function is implemented as
output = 0;
FOR(i, dim)
    output += abs(head[i] + relation[i] - tail[i]);
output = margin - SUM(output);
Here we need to replace this piece of code with our own formulas. Note that the function should be compatible with both CPU and GPU, which is easily achieved with the helper macros defined in GraphVite. We just need to use the macro FOR(i, stop) instead of the conventional for (int i = 0; i < stop; i++). For any accumulator x inside the loop (e.g. output in this case), update it with x = SUM(x) after the loop to get the correct value.
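For instance, a multiplicative score in the style of DistMult could be written with the same pattern. This is only a sketch to illustrate the FOR / SUM macros, not the exact implementation shipped with GraphVite; for such a tensor decomposition model the last argument would be the l3 regularization coefficient rather than the margin, and it is not used in the forward pass.

output = 0;
FOR(i, dim)
    output += head[i] * relation[i] * tail[i];
output = SUM(output);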
Now let’s turn to the backward function. In addition to the arguments of forward, it takes the moment statistics, the gradient over the output, the optimizer and the sample weight. For example, here is the overload with 1 moment per embedding.
template<OptimizerType optimizer_type>
void backward(Vector &head, Vector &tail, Vector &relation,
              Vector &head_moment1, Vector &tail_moment1, Vector &relation_moment1,
              float margin, Float gradient, const Optimizer &optimizer, Float weight)
The backward function should compute the gradient for each embedding and update the embeddings with the optimizer. Typically, this is implemented as
auto update = get_update_function_1_moment<Float, optimizer_type>();
FOR(i, dim) {
    Float h = head[i];
    Float t = tail[i];
    Float r = relation[i];
    // sign of (h + r - t), i.e. the derivative of |h + r - t|
    Float s = h + r - t > 0 ? 1 : -1;
    // pass gradient * (partial derivative of output) for each embedding
    head[i] -= (optimizer.*update)(h, -gradient * s, head_moment1[i], weight);
    tail[i] -= (optimizer.*update)(t, gradient * s, tail_moment1[i], weight);
    relation[i] -= (optimizer.*update)(r, -gradient * s, relation_moment1[i], weight);
}
Here we modify this function according to the partial derivatives of our forward function. Once we complete one backward function, we can copy the changes to the other overloads. The only difference among the overloads is that they use different update functions and different numbers of moment statistics.
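Continuing the DistMult-style sketch from above: since the score is the sum of head[i] * relation[i] * tail[i], its partial derivative with respect to head[i] is relation[i] * tail[i], and likewise for the other two embeddings. Following the same convention as the TransE code above, a sketch of the corresponding 1-moment overload (for illustration only; a real tensor decomposition model would also fold the l3 regularization term into each gradient):

auto update = get_update_function_1_moment<Float, optimizer_type>();
FOR(i, dim) {
    Float h = head[i];
    Float t = tail[i];
    Float r = relation[i];
    // partial derivatives of output = sum of h * r * t
    head[i] -= (optimizer.*update)(h, gradient * r * t, head_moment1[i], weight);
    tail[i] -= (optimizer.*update)(t, gradient * h * r, tail_moment1[i], weight);
    relation[i] -= (optimizer.*update)(r, gradient * h * t, relation_moment1[i], weight);
}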
Finally, we have to let the solver know there is a new model. In instance/knowledge_graph.cuh, add the name of your model to the list returned by get_available_models(). Also add run-time dispatch of the new model in train_dispatch() and predict_dispatch(). Both functions branch on the number of moment statistics and then on the model name.
switch (num_moment) {
    case 0:
        if (solver->model == ...)
            ...
    case 1:
        if (solver->model == ...)
            ...
    case 2:
        if (solver->model == ...)
            ...
}
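For example, a branch for the hypothetical MyTransE would mirror the existing TransE branch under the matching case, with the new class swapped into the kernel launch. Here "my_transe" stands for whatever name you registered in get_available_models(); copy the launch itself from the TransE entry in your version of the source.

case 1:
    if (solver->model == ...)
        ...    // existing dispatch entries
    if (solver->model == "my_transe")
        ...    // same kernel launch, substituting MyTransE for the model class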
Recompile the source, and the new model is ready to use.