graphvite.optimizer

Optimizer module of GraphVite

class graphvite.optimizer.Optimizer(type=auto, *args, **kwargs)

Create an optimizer instance of any type.

Parameters
  • type (str or auto) – optimizer type, can be ‘SGD’, ‘Momentum’, ‘AdaGrad’, ‘RMSprop’ or ‘Adam’
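
For example, a minimal sketch of the factory in use (an assumption based on the signature above, with the type string dispatching to the concrete class; gv.auto denotes GraphVite's auto value):

import graphvite as gv

# dispatch on the type string; remaining keyword arguments are
# forwarded to the concrete optimizer class
optimizer = gv.optimizer.Optimizer('Adam', lr=1e-4, weight_decay=0)

# with type left as auto, the choice is deferred to the solver
default_optimizer = gv.optimizer.Optimizer(gv.auto)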

class graphvite.optimizer.LRSchedule(*args, **kwargs)

Learning Rate Schedule.

This class has two constructors:

LRSchedule(type='constant')
LRSchedule(schedule_function)
Parameters
  • type (str, optional) – schedule type, either 'constant' or 'linear'

  • schedule_function (callable) – function that returns a multiplicative factor for the learning rate, given the current batch id and the total number of batches
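
For example, the callable form with a hypothetical cosine decay (any function with this signature works; the returned factor scales the initial learning rate):

import math

import graphvite as gv

def cosine_schedule(batch_id, num_batch):
    # decays smoothly from 1 at the first batch to 0 at the last
    return 0.5 * (1 + math.cos(math.pi * batch_id / num_batch))

schedule = gv.optimizer.LRSchedule(cosine_schedule)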

class graphvite.optimizer.SGD(lr=1e-4, weight_decay=0, schedule='linear')

Stochastic gradient descent optimizer.

Parameters
  • lr (float, optional) – initial learning rate

  • weight_decay (float, optional) – weight decay (L2 regularization)

  • schedule (str or callable, optional) – learning rate schedule
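
For reference, a textbook sketch of the update these parameters describe, with the 'linear' schedule written out; GraphVite's GPU kernels are the authoritative implementation and may differ in detail:

import numpy as np

def sgd_step(w, grad, lr, weight_decay, batch_id, num_batch):
    # 'linear' schedule: the learning rate decays linearly to zero
    lr_t = lr * (1 - batch_id / num_batch)
    # weight decay adds an L2 penalty gradient
    return w - lr_t * (grad + weight_decay * w)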

class graphvite.optimizer.Momentum(lr=1e-4, weight_decay=0, momentum=0.999, schedule='linear')

Momentum optimizer.

Parameters
  • lr (float, optional) – initial learning rate

  • weight_decay (float, optional) – weight decay (L2 regularization)

  • momentum (float, optional) – momentum coefficient

  • schedule (str or callable, optional) – learning rate schedule
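
A conventional heavy-ball sketch of this update (schedule factor omitted for brevity; not GraphVite's exact kernel):

import numpy as np

def momentum_step(w, v, grad, lr, weight_decay, momentum):
    grad = grad + weight_decay * w
    # v is an exponentially weighted accumulation of past gradients,
    # controlled by the momentum coefficient
    v = momentum * v + grad
    return w - lr * v, v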

class graphvite.optimizer.AdaGrad(lr=1e-4, weight_decay=0, epsilon=1e-10, schedule='linear')

AdaGrad optimizer.

Parameters
  • lr (float, optional) – initial learning rate

  • weight_decay (float, optional) – weight decay (L2 regularization)

  • epsilon (float, optional) – smoothing term for numerical stability

  • schedule (str or callable, optional) – learning rate schedule
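
A textbook AdaGrad step showing where epsilon enters (a sketch under the same caveats as above):

import numpy as np

def adagrad_step(w, h, grad, lr, weight_decay, epsilon):
    grad = grad + weight_decay * w
    # h accumulates the sum of squared gradients per parameter,
    # so frequently updated parameters get smaller effective rates
    h = h + grad ** 2
    # epsilon keeps the denominator bounded away from zero
    return w - lr * grad / (np.sqrt(h) + epsilon), h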

class graphvite.optimizer.RMSprop(lr=1e-4, weight_decay=0, alpha=0.999, epsilon=1e-8, schedule='linear')

RMSprop optimizer.

Parameters
  • lr (float, optional) – initial learning rate

  • weight_decay (float, optional) – weight decay (L2 regularization)

  • alpha (float, optional) – coefficient for moving average of squared gradient

  • epsilon (float, optional) – smoothing term for numerical stability

  • schedule (str or callable, optional) – learning rate schedule
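
A textbook RMSprop step, sketched under the same caveats:

import numpy as np

def rmsprop_step(w, s, grad, lr, weight_decay, alpha, epsilon):
    grad = grad + weight_decay * w
    # s is a moving average of the squared gradient; alpha controls
    # how much history is retained
    s = alpha * s + (1 - alpha) * grad ** 2
    return w - lr * grad / (np.sqrt(s) + epsilon), s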

class graphvite.optimizer.Adam(lr=1e-4, weight_decay=0, beta1=0.999, beta2=0.99999, epsilon=1e-8, schedule='linear')

Adam optimizer.

Parameters
  • lr (float, optional) – initial learning rate

  • weight_decay (float, optional) – weight decay (L2 regularization)

  • beta1 (float, optional) – coefficient for moving average of gradient

  • beta2 (float, optional) – coefficient for moving average of squared gradient

  • epsilon (float, optional) – smoothing term for numerical stability

  • schedule (str or callable, optional) – learning rate schedule
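
A textbook Adam step relating beta1, beta2 and epsilon (a sketch; GraphVite's kernels may differ in detail):

import numpy as np

def adam_step(w, m, v, t, grad, lr, weight_decay, beta1, beta2, epsilon):
    grad = grad + weight_decay * w
    # moving averages of the gradient and the squared gradient
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # bias correction for the zero-initialized moments (t starts at 1)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + epsilon), m, v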