# Benchmarks
This page contains benchmarks of all models implemented in GraphVite, including their training time and task performance. All experiments are conducted on a server with 24 CPU threads and 4 V100 GPUs.
## Node Embedding
We evaluate node embedding models on 3 datasets, ranging from million-scale to half-billion-scale. The following table shows the size of each dataset, as well as the time and resources required by the LINE model.
| Dataset | \|V\| | \|E\| | Training Time | GPU Memory Cost |
|---|---|---|---|---|
| Youtube | 1.1M | 4.9M | 1.17 mins | 4 × 801 MiB |
| Flickr | 1.7M | 23M | 3.54 mins | 4 × 943 MiB |
| Friendster-small | 7.9M | 447M | 1.84 hrs | 4 × 2.42 GiB |
The learned node embeddings are evaluated on the standard task of multi-label node classification. We report the micro-F1 and macro-F1 of each model for different percentages of labeled training data.
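For reference, here is a minimal sketch of this evaluation protocol, assuming scikit-learn and a binary label-indicator matrix; GraphVite's own evaluation code may use different classifier settings, so treat the hyperparameters below as illustrative:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

def evaluate_node_classification(embeddings, labels, train_portion):
    """embeddings: (N, dim) float array; labels: (N, C) binary indicator matrix."""
    X_train, X_test, y_train, y_test = train_test_split(
        embeddings, labels, train_size=train_portion, random_state=0)
    # one-vs-rest logistic regression on the frozen embeddings
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    return (f1_score(y_test, y_pred, average="micro"),
            f1_score(y_test, y_pred, average="macro"))
```

For a multi-label dataset such as Youtube, `labels` has one column per tag.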
Node classification results on Youtube (columns give the percentage of labeled data):

| Metric | Model | 1% | 2% | 3% | 4% | 5% | 6% | 7% | 8% | 9% | 10% |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Micro-F1 | DeepWalk | 37.41 | 40.48 | 42.12 | 43.63 | 44.47 | 44.83 | 45.41 | 45.77 | 46.11 | 46.39 |
| Micro-F1 | LINE | 38.36 | 40.61 | 42.17 | 43.70 | 44.44 | 44.97 | 45.47 | 45.73 | 46.12 | 46.25 |
| Micro-F1 | node2vec | 37.91 | 40.59 | 42.37 | 43.56 | 44.32 | 44.94 | 45.40 | 45.77 | 46.07 | 46.41 |
| Macro-F1 | DeepWalk | 30.77 | 33.67 | 34.91 | 36.44 | 37.02 | 37.27 | 37.74 | 38.17 | 38.35 | 38.51 |
| Macro-F1 | LINE | 30.90 | 33.69 | 34.88 | 36.40 | 36.75 | 37.44 | 37.89 | 38.07 | 38.30 | 38.40 |
| Macro-F1 | node2vec | 30.70 | 33.69 | 34.84 | 36.17 | 36.45 | 37.42 | 37.68 | 38.05 | 38.32 | 38.62 |
> **See also**
> Configuration files: `deepwalk_youtube.yaml`, `line_youtube.yaml`, `node2vec_youtube.yaml`
For the larger datasets, node2vec fails with an out-of-memory error, since it requires more than 200 GiB of memory to build the alias tables for its 2nd-order random walks.
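The blow-up is easy to see: node2vec's 2nd-order transition probabilities depend on the previous node, so alias sampling needs one table per directed edge (t, v), with one entry per neighbor of v, i.e. Σ_v deg(v)² entries in total for an undirected graph. The sketch below estimates the footprint from a degree sequence (the 12 bytes per entry is our assumption for illustration, not GraphVite's or node2vec's exact memory layout):

```python
import numpy as np

def node2vec_alias_gib(degrees, bytes_per_entry=12):
    """Estimate memory of node2vec's 2nd-order alias tables, in GiB.

    One alias table per directed edge (t, v), with one entry per neighbor
    of v, gives sum_v deg(v)^2 entries in total for an undirected graph.
    Each entry holds a probability and an alias index (~12 bytes, assumed).
    """
    degrees = np.asarray(degrees, dtype=np.float64)
    return (degrees ** 2).sum() * bytes_per_entry / 2 ** 30
```

On heavy-tailed graphs the Σ_v deg(v)² term is dominated by hub nodes, so the tables grow far faster than the edge count alone would suggest.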
Node classification results on Flickr (columns give the percentage of labeled data):

| Metric | Model | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% |
|---|---|---|---|---|---|---|---|---|---|---|
| Micro-F1 | DeepWalk | 62.98 | 63.44 | 63.72 | 63.71 | 63.79 | 63.69 | 63.80 | 63.93 | 63.92 |
| Micro-F1 | LINE | 63.05 | 63.45 | 63.69 | 63.73 | 63.79 | 63.82 | 64.00 | 63.69 | 63.79 |
| Micro-F1 | node2vec | Out of Memory | | | | | | | | |
| Macro-F1 | DeepWalk | 61.72 | 62.12 | 62.36 | 62.38 | 62.42 | 62.36 | 62.44 | 62.58 | 62.55 |
| Macro-F1 | LINE | 61.77 | 62.14 | 62.35 | 62.39 | 62.46 | 62.45 | 62.64 | 62.28 | 62.45 |
| Macro-F1 | node2vec | Out of Memory | | | | | | | | |
> **See also**
> Configuration files: `deepwalk_flickr.yaml`, `line_flickr.yaml`
Node classification results on Friendster-small (columns give the percentage of labeled data):

| Metric | Model | 1% | 2% | 3% | 4% | 5% | 6% | 7% | 8% | 9% | 10% |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Micro-F1 | DeepWalk | 76.93 | 83.96 | 86.41 | 86.91 | 87.94 | 88.49 | 88.84 | 88.96 | 88.90 | 89.18 |
| Micro-F1 | LINE | 76.53 | 83.50 | 85.70 | 87.29 | 87.97 | 88.17 | 88.69 | 88.87 | 88.76 | 89.20 |
| Micro-F1 | node2vec | Out of Memory | | | | | | | | | |
| Macro-F1 | DeepWalk | 71.54 | 81.34 | 84.57 | 85.75 | 86.77 | 87.48 | 87.93 | 88.02 | 88.25 | 88.42 |
| Macro-F1 | LINE | 70.46 | 80.88 | 84.07 | 85.99 | 86.76 | 87.39 | 87.86 | 87.91 | 87.72 | 88.56 |
| Macro-F1 | node2vec | Out of Memory | | | | | | | | | |
> **See also**
> Configuration files: `deepwalk_friendster-small.yaml`, `line_friendster-small.yaml`
## Knowledge Graph Embedding
For knowledge graph embedding, we benchmark TransE, DistMult, ComplEx, SimplE and RotatE on 4 standard datasets. The training time and resource usage of RotatE on these datasets are given in the following table.
| Dataset | \|V\| | \|E\| | \|R\| | Training Time | GPU Memory Cost |
|---|---|---|---|---|---|
| FB15k | 15K | 483K | 1.3K | 27.0 mins | 4 × 785 MiB |
| FB15k-237 | 15K | 272K | 237 | 14.3 mins | 4 × 745 MiB |
| WN18 | 41K | 141K | 18 | 15.3 mins | 4 × 761 MiB |
| WN18RR | 41K | 87K | 11 | 13.8 mins | 4 × 761 MiB |
To evaluate the knowledge graph embeddings, we test them on the link prediction task. We report the results of each model on the test set, where ranking metrics are computed under the filtered setting, i.e., all other true triplets are removed from the candidate set before the true entity is ranked.
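Concretely, the filtered protocol masks every other true answer out of the candidate scores before taking the rank of the target entity. A minimal sketch of the metric computation (illustrative, not GraphVite's evaluation code):

```python
import numpy as np

def filtered_rank(scores, target, other_true):
    """Rank of the true entity among candidates, with other true answers removed.

    scores:     (num_entities,) model scores, higher = better
    target:     index of the entity to rank
    other_true: boolean mask of entities that form *other* true triplets with
                this query (must not include the target itself)
    """
    scores = np.where(other_true, -np.inf, scores)
    return 1 + int((scores > scores[target]).sum())

def ranking_metrics(ranks):
    """Aggregate per-triplet ranks into the reported metrics."""
    ranks = np.asarray(ranks, dtype=np.float64)
    return {
        "MR": ranks.mean(),           # mean rank
        "MRR": (1.0 / ranks).mean(),  # mean reciprocal rank
        "HITS@1": (ranks <= 1).mean(),
        "HITS@3": (ranks <= 3).mean(),
        "HITS@10": (ranks <= 10).mean(),
    }
```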
Results on FB15k:

| Model | MR | MRR | HITS@1 | HITS@3 | HITS@10 |
|---|---|---|---|---|---|
| TransE | 42 | 0.694 | 0.576 | 0.789 | 0.868 |
| DistMult | 136 | 0.747 | 0.684 | 0.793 | 0.849 |
| ComplEx | 50 | 0.678 | 0.571 | 0.755 | 0.857 |
| SimplE | 74 | 0.779 | 0.721 | 0.818 | 0.876 |
| RotatE | 44 | 0.740 | 0.654 | 0.805 | 0.875 |
> **See also**
> Configuration files: `transe_fb15k.yaml`, `distmult_fb15k.yaml`, `complex_fb15k.yaml`, `simple_fb15k.yaml`, `rotate_fb15k.yaml`
Results on FB15k-237:

| Model | MR | MRR | HITS@1 | HITS@3 | HITS@10 |
|---|---|---|---|---|---|
| TransE | 157 | 0.294 | 0.193 | 0.328 | 0.502 |
| DistMult | 272 | 0.281 | 0.182 | 0.312 | 0.490 |
| ComplEx | 193 | 0.311 | 0.212 | 0.348 | 0.513 |
| SimplE | 176 | 0.298 | 0.198 | 0.333 | 0.504 |
| RotatE | 176 | 0.314 | 0.217 | 0.347 | 0.511 |
> **See also**
> Configuration files: `transe_fb15k-237.yaml`, `distmult_fb15k-237.yaml`, `complex_fb15k-237.yaml`, `simple_fb15k-237.yaml`, `rotate_fb15k-237.yaml`
Results on WN18:

| Model | MR | MRR | HITS@1 | HITS@3 | HITS@10 |
|---|---|---|---|---|---|
| TransE | 234 | 0.608 | 0.306 | 0.916 | 0.952 |
| DistMult | 355 | 0.819 | 0.711 | 0.923 | 0.954 |
| ComplEx | 760 | 0.940 | 0.936 | 0.943 | 0.946 |
| SimplE | 412 | 0.948 | 0.944 | 0.950 | 0.954 |
| RotatE | 226 | 0.945 | 0.938 | 0.950 | 0.958 |
> **See also**
> Configuration files: `transe_wn18.yaml`, `distmult_wn18.yaml`, `complex_wn18.yaml`, `simple_wn18.yaml`, `rotate_wn18.yaml`
Results on WN18RR:

| Model | MR | MRR | HITS@1 | HITS@3 | HITS@10 |
|---|---|---|---|---|---|
| TransE | 2620 | 0.215 | 0.012 | 0.382 | 0.526 |
| DistMult | 2954 | 0.467 | 0.416 | 0.489 | 0.562 |
| ComplEx | 7131 | 0.425 | 0.405 | 0.431 | 0.460 |
| SimplE | 4751 | 0.475 | 0.445 | 0.489 | 0.535 |
| RotatE | 1845 | 0.490 | 0.439 | 0.508 | 0.589 |
> **See also**
> Configuration files: `transe_wn18rr.yaml`, `distmult_wn18rr.yaml`, `complex_wn18rr.yaml`, `simple_wn18rr.yaml`, `rotate_wn18rr.yaml`
## Graph & High-dimensional Data Visualization
High-dimensional data visualization is evaluated on two popular image datasets. The training time and resources needed by LargeVis are given in the following table. Note that more than 95% of the GPU memory cost comes from constructing the KNN graph, and this cost can be traded off against speed if necessary.
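The dial between memory and speed is the size of the pairwise-distance buffer used while building the KNN graph. The chunked brute-force sketch below only illustrates that trade-off (GraphVite itself constructs an approximate KNN graph on the GPU; this is not its implementation): halving the chunk size halves the buffer but doubles the number of passes.

```python
import numpy as np

def knn_graph(X, k=10, chunk=1024):
    """k nearest neighbors (Euclidean) of each row of X, computed in chunks."""
    n = len(X)
    neighbors = np.empty((n, k), dtype=np.int64)
    sq = (X ** 2).sum(axis=1)
    for start in range(0, n, chunk):
        stop = min(start + chunk, n)
        # (chunk, n) squared-distance buffer: memory scales with the chunk
        # size, while the number of passes scales inversely with it
        d = sq[start:stop, None] - 2.0 * X[start:stop] @ X.T + sq[None, :]
        d[np.arange(stop - start), np.arange(start, stop)] = np.inf  # no self-loops
        neighbors[start:stop] = np.argpartition(d, k, axis=1)[:, :k]
    return neighbors
```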
| Dataset | Vector | N | dim | Training Time | GPU Memory Cost |
|---|---|---|---|---|---|
| MNIST | Raw pixels | 70K | 784 | 15.1 s | 2.86 GiB |
| ImageNet | ResNet50 feature | 1.33M | 2048 | 16.6 mins | 15.1 GiB |
> **See also**
> Configuration files: `largevis_mnist_2d.yaml`, `largevis_imagenet.yaml`
Here is a 3D visualization result of MNIST.
For ImageNet, since it contains 1000 classes, we visualize the classes according to their hierarchy in WordNet. The following animation shows how the class `english setter` (a kind of dog) is traversed in the hierarchy.