# Accelerating Natural Gradient with Higher-Order Invariance

TL;DR: Natural gradient update loses its invariance due to the finite step size. In our recent paper published at ICML 2018, we study the invariance of natural gradient from the perspective of Riemannian geometry, and propose several new update rules to improve its invariance. Empirical results show that better invariance can result in faster convergence in several supervised learning, unsupervised learning and reinforcement learning applications.

Probabilistic models are important for inference and prediction in many areas of machine learning. In this post, we focus our discussion on parametric probabilistic models, i.e., models that can be determined by specifying their parameters. Such a model may have several different **parameterizations**. For illustrative purposes, let’s consider a simple linear regression model that captures the joint distribution of input $x$ and label $y$:

$$p_\theta(x, y) = p(x)\, \mathcal{N}(y \mid \theta x + b, \sigma^2),$$

where $\theta$ is the parameter, and $b$ and $\sigma$ are two fixed constants. By substituting $\theta$ with $2\xi$, a different parameterization can be created:

$$q_\xi(x, y) = p(x)\, \mathcal{N}(y \mid 2\xi x + b, \sigma^2).$$

Note that even if $p_\theta$ and $q_\xi$ have different analytical forms, they actually represent the same model family. In particular, $p_\theta = q_{\theta/2}$ for any $\theta$.

Given a loss function $L(\cdot)$, the goal of an optimization algorithm is to find the model $p$ such that $L(p)$ is as small as possible, in the form of finding the optimal parameter $\theta^*$ or $\xi^*$ under some parameterization.
**Although re-parameterization does not change the model family, the performance of optimization algorithms usually differs under different parameterizations.**
In the following section, we use the linear regression model as a running example to elaborate on why this can happen.

### Why optimization can depend on parameterization

Returning to the example of linear regression, a typical loss function is the expected negative log-likelihood, which can be formally written as

$$L(p_\theta) = \mathbb{E}_{x, y}\left[-\log p_\theta(x, y)\right],$$

where the expectation is taken over the data distribution.

Consider using gradient descent to minimize the loss function:

- For $p_\theta$, the update rule is $\theta_{t+1} = \theta_t - h \nabla_\theta L(p_{\theta_t})$, where $h$ is the learning rate and $\nabla_\theta L(p_{\theta_t}) = \mathbb{E}\left[(\theta_t x + b - y)\,x / \sigma^2\right]$

- For $q_\xi$, the update rule is $\xi_{t+1} = \xi_t - h \nabla_\xi L(q_{\xi_t})$, where $h$ is the learning rate and the gradient becomes $\nabla_\xi L(q_{\xi_t}) = \mathbb{E}\left[2(2\xi_t x + b - y)\,x / \sigma^2\right]$

Let’s assume $\theta_t = 2\xi_t$, i.e., at the $t$-th optimization step, $p_{\theta_t}$ represents the same probabilistic model as $q_{\xi_t}$. However, because of the chain rule, the gradients have different scales:

$$\nabla_\xi L(q_{\xi_t}) = 2\,\nabla_\theta L(p_{\theta_t}),$$

and therefore

$$\xi_{t+1} = \xi_t - 2h\,\nabla_\theta L(p_{\theta_t}) \;\neq\; \frac{\theta_{t+1}}{2} = \xi_t - \frac{h}{2}\nabla_\theta L(p_{\theta_t}).$$

Hence the $t$-th optimization step will result in different probabilistic models $p_{\theta_{t+1}} \neq q_{\xi_{t+1}}$. Specifically, one step of gradient descent will result in **different models** depending on **which parameterization is used**. This dependence on model parameterization can be undesirable, as it requires carefully choosing the parameterization to avoid hindering optimization.
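To make this concrete, here is a minimal numeric sketch (not from the paper) of the phenomenon: one gradient-descent step on a tiny least-squares problem, once in the parameterization $\theta$ and once under the assumed substitution $\theta = 2\xi$. The dataset and constants are made up for illustration.

```python
# Hypothetical numbers for illustration: a 1D least-squares loss (the negative
# log-likelihood of a Gaussian regression model with unit variance, up to a
# constant), under two parameterizations of the same model: theta vs. theta = 2*xi.
data = [(1.0, 0.0), (2.0, 2.0)]  # (x, y) pairs, chosen arbitrarily

def grad_theta(theta):
    # d/dtheta of the average squared-error loss 0.5 * (theta*x - y)^2
    return sum((theta * x - y) * x for x, y in data) / len(data)

def grad_xi(xi):
    # chain rule under theta = 2*xi: d/dxi = 2 * d/dtheta, evaluated at 2*xi
    return 2.0 * grad_theta(2.0 * xi)

h = 0.1                    # learning rate
theta0 = 1.0
xi0 = theta0 / 2.0         # both parameterizations start at the same model

theta1 = theta0 - h * grad_theta(theta0)
xi1 = xi0 - h * grad_xi(xi0)

# After one step, 2*xi1 != theta1: the two updates reach DIFFERENT models.
print(theta1, 2.0 * xi1)
```

With these toy numbers, `theta1` is `0.95` while `2 * xi1` is `0.8`: the same nominal learning rate moves the two parameterizations to different models.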

For example, many tweaks are used to parameterize neural networks in such a way that they become more amenable to commonly used first-order optimization methods. We can view normalization methods of network activations, such as Batch Normalization (Ioffe and Szegedy 2015), Layer Normalization (Ba, Kiros, and Hinton 2016), and Weight Normalization (Salimans and Kingma 2016), as special parameterizations of the network to facilitate first-order gradient descent methods.

Luckily, this dependence on model parameterization is not universally true for every optimization method. For example, the Newton-Raphson method is invariant to affine transformations of model parameters. Therefore, for the linear regression case discussed above, the Newton-Raphson method will ensure that $p_{\theta_t} = q_{\xi_t}$ as long as $p_{\theta_0} = q_{\xi_0}$, because the new parameter $\xi = \theta / 2$ is an affine transformation of $\theta$.

Following this thread, we next discuss the Natural Gradient Method (Amari 1998)—an optimization method that uses a similar preconditioning matrix to that of the Newton-Raphson method, but is said to be **invariant to arbitrary differentiable transformations of model parameters** when the learning rate is **small** enough.

### Natural gradient method

From the example of applying gradient descent to linear regression, we can easily see that switching between model parameterizations can cause the gradients and parameters to change in different ways. So how can we ensure that when the parameterization changes, the gradient will also change in a clever way to compensate for its effect on these optimization algorithms?

In 1998, inspired by research on information geometry, Shun’ichi Amari proposed to use natural gradient (Amari 1998) to solve this problem. A full understanding of natural gradient requires some basic concepts from Riemannian geometry. Although it is not required for understanding the rest of the post, we encourage interested readers to review the following introduction to Riemannian geometry.

##### An Introduction to Riemannian Geometry


The purpose of this text is to give readers the minimal background needed to understand the motivations behind natural gradient, invariance, and our proposed improvements in the later part of this blog. For a more rigorous treatment of this topic, please refer to the textbooks (Amari and Nagaoka 2007) and (Petersen 2006).

Good theory starts with beautiful notation. We will first introduce the Einstein summation convention, a great way to eschew boilerplate in differential geometry formulas.

##### Einstein summation convention

The convention states that

When any index variable appears twice in a term, once as a superscript and once as a subscript, it indicates summation of the term over all possible values of the index variable.

For example, $\lambda^i e_i := \sum_{i=1}^n \lambda^i e_i$ when the index variable $i \in \{1, 2, \cdots, n\}$. Note that here the superscripts **are not exponents but are indices of coordinates, coefficients or basis vectors**. We will write the summation symbol explicitly if we do not assume Einstein’s summation convention on some index, for instance, $\sum_i \lambda^i e_i$.
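In code, a repeated upper–lower index pair is just a nested sum. A small illustrative sketch (dimensions and numbers are arbitrary assumptions) of the contraction $g_{ij} u^i v^j$:

```python
# Explicit double sum behind the Einstein-notation expression g_ij u^i v^j.
g = [[2.0, 0.0],
     [0.0, 3.0]]   # metric coefficients g_ij (a symmetric 2x2 example)
u = [1.0, 2.0]     # vector coefficients u^i
v = [4.0, -1.0]    # vector coefficients v^j

inner = sum(g[i][j] * u[i] * v[j] for i in range(2) for j in range(2))
print(inner)  # 2*1*4 + 3*2*(-1) = 2
```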

##### Riemannian manifold

Riemannian geometry is used to study intrinsic properties of differentiable manifolds equipped with metrics. In machine learning, we often describe the space of some probabilistic models as a manifold. Informally, a *manifold* $\mathcal{M}$ of dimension $n$ is a smooth space whose local regions resemble $\mathbb{R}^n$. Smoothness allows us to define a one-to-one local smooth mapping $\phi: \mathcal{M} \rightarrow \mathbb{R}^n$ called a *chart* or *coordinate system*, and for any $p \in \mathcal{M}$, $\phi(p)$ represents the coordinates of $p$. For instance, if $p_\theta$ is a parameterized distribution, $\phi(p_\theta)$ will refer to its parameters $\theta$. Generally, manifolds may need a group of charts for a complete description.

##### Tangent and cotangent spaces

Local resemblance to $\mathbb{R}^n$ provides the foundation for defining a linear space attached to each point $p \in \mathcal{M}$ called the *tangent space* $\mathcal{T}_p \mathcal{M}$. Formally, $\mathcal{T}_p \mathcal{M}$ is the collection of all vectors obtained by differentiating smooth curves
through $p$, and consists of all tangent vectors to the manifold at point $p$.
Each tangent vector $v \in \mathcal{T}_p \mathcal{M}$ can be **identified with a directional differential operator** acting on functions defined on $\mathcal{M}$, with direction $v$.

Let $\theta = \phi(p)$ be the coordinates of $p$. It can be shown that the set of operators $\{\partial / \partial \theta^i\}_{i=1}^n$ forms a basis for $\mathcal{T}_p \mathcal{M}$ and is called the *coordinate basis*. Note that $\partial / \partial \theta^i$ is abbreviated to $\partial_i$, and $p$ is often identified with its coordinates $\theta$ in the sequel.

Any vector space $V$ has a corresponding dual vector space $V^*$ consisting of all **linear functionals** on $V$. The dual vector space of $\mathcal{T}_p \mathcal{M}$ is called the *cotangent space* and is denoted $\mathcal{T}_p^* \mathcal{M}$. Because $\mathcal{T}_p \mathcal{M}$ is finite-dimensional, $\mathcal{T}_p^* \mathcal{M}$ has the same dimension $n$ as $\mathcal{T}_p \mathcal{M}$. $\mathcal{T}_p^* \mathcal{M}$ admits the *dual coordinate basis* $\{\mathrm{d}\theta^i\}_{i=1}^n$. These two sets of bases satisfy $\mathrm{d}\theta^i(\partial_j) = \delta^i_j$, where $\delta^i_j$ is the Kronecker delta. To gain some intuition, if $\mathcal{T}_p \mathcal{M}$ is the space of all $n$-dimensional column vectors, $\mathcal{T}_p^* \mathcal{M}$ can be identified with the space of $n$-dimensional row vectors.

Vectors and covectors are geometric objects that exist independently of the coordinate system. Although their geometric properties do not depend on the basis used, their representation (in terms of coefficients) is of course basis-dependent. In particular, a change in the coordinate system (e.g., switching from Cartesian to polar coordinates) around $p$ will alter the coordinate basis of $\mathcal{T}_p \mathcal{M}$, resulting in transformations of coefficients of both vectors and covectors. Let the new chart be $\phi'$ and let $\theta' = \phi'(p)$. The new coefficients of a vector $a = a^i \partial_i$ will be given by

$$a'^i = \frac{\partial \theta'^i}{\partial \theta^j} a^j,$$

while the new coefficients of a covector $b = b_i \mathrm{d}\theta^i$ will be determined by

$$b'_i = \frac{\partial \theta^j}{\partial \theta'^i} b_j.$$

Due to the difference in transformation rules, we call $a^i$ *contravariant* and $b_i$ *covariant*, as indicated by superscripts and subscripts respectively. For example, imagine a constant rescaling of the coordinates, such that all the new basis vectors are scaled by $1/2$. To preserve the identity of a vector, its coefficients in this new basis will have to increase by a factor of $2$ (changing contra-variantly with respect to the basis vectors). In contrast, the coefficients of a covector in the new basis will have to be scaled by $1/2$ (changing co-variantly), so that it yields the same result when applied to a vector whose coefficients are twice as large in the new basis.
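The rescaling example can be checked numerically. This sketch (with arbitrary coefficients) scales every basis vector by $1/2$: vector coefficients double, covector coefficients halve, and the pairing $b_i a^i$ is unchanged.

```python
# Contravariant vs. covariant behavior under a basis rescaling by 1/2.
a = [1.0, -2.0]   # vector coefficients a^i in the old basis
b = [3.0, 0.5]    # covector coefficients b_i in the old basis

a_new = [2.0 * c for c in a]   # contravariant: coefficients scale by 2
b_new = [0.5 * c for c in b]   # covariant: coefficients scale by 1/2

pairing_old = sum(bi * ai for bi, ai in zip(b, a))
pairing_new = sum(bi * ai for bi, ai in zip(b_new, a_new))
print(pairing_old, pairing_new)  # identical: the covector applied to the vector
```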

Riemannian manifolds are those equipped with a positive definite metric tensor $g_p(\cdot, \cdot)$. It is a symmetric bilinear positive definite scalar function of two vectors which defines the inner product structure of $\mathcal{T}_p \mathcal{M}$, and makes it possible to define geometric notions such as angles and lengths of curves. Namely, the inner product of two vectors $u = u^i \partial_i$, $v = v^j \partial_j \in \mathcal{T}_p \mathcal{M}$ is defined as $\langle u, v \rangle := g(u, v) = g_{ij} u^i v^j$, where $g_{ij} := g(\partial_i, \partial_j)$. Here we drop the subscript $p$ to avoid cluttering notations, although $g$ can depend on $p$. For convenience, we denote the inverse of the metric tensor using superscripts, i.e., $g^{ik} g_{kj} = \delta^i_j$.

The introduction of inner products induces a natural map from a tangent space to its dual. Let $v = v^i \partial_i \in \mathcal{T}_p \mathcal{M}$; its natural correspondence in $\mathcal{T}_p^* \mathcal{M}$ is the covector $v^* := g(v, \cdot)$. Let $v^* = v_j \mathrm{d}\theta^j$; for any vector $w = w^k \partial_k$, we have $v^*(w) = v_k w^k = g(v, w) = g_{jk} v^j w^k$. Since the equation holds for arbitrary $w$, we have $v_k = g_{jk} v^j$ and, conversely, $v^j = g^{jk} v_k$. Therefore the metric tensor relates the coefficients of a vector and its covector by lowering and raising indices.

##### Geodesics

Consider a curve $\gamma(t)$ in $\mathbb{R}^2$. Using Cartesian coordinates $x(t) := \phi(\gamma(t))$, it is easy to verify that $\gamma$ is a straight line if it satisfies the following equation: $\ddot{x}(t) = 0$, i.e., $\ddot{x}^i(t)\, \partial_i = 0$. However, if we represent the curve using polar coordinates $r(t) := \psi(\gamma(t))$, then $\ddot{r}(t) = 0$ does *not* correspond to a straight line. The issue is that the coordinate basis depends on the position and will change as we move. For example, the radial directions at two points on a circle will be different.

We denote $\frac{\mathrm{d}}{\mathrm{d}t} \partial_i$
as the vector describing how fast the basis changes in the direction of $\dot\gamma$. It is given by $\frac{\mathrm{d}}{\mathrm{d}t} \partial_i = \dot\gamma^j \Gamma^k_{ji} \partial_k$, where $\Gamma^k_{ji}$ is the *connection*, which specifies additional manifold structure associated with curvature and parallelness. Here we use the Levi-Civita connection, which is defined in terms of the metric as

$$\Gamma^k_{ij} = \frac{1}{2} g^{kl} \left( \partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij} \right).$$
A connection also specifies a way to transport a vector along a curve so that it stays parallel (to itself) with respect to the connection. Given the change rate of bases specified by the connection, the change rate of any vector $v = v^i \partial_i$ along the direction $\dot\gamma$ can be computed by

$$\frac{\mathrm{d}v}{\mathrm{d}t} = \left( \frac{\mathrm{d}v^k}{\mathrm{d}t} + \Gamma^k_{ij} \dot\gamma^i v^j \right) \partial_k.$$
We define the *parallel transport* of a vector $v$ along a curve $\gamma(t)$ as the solution to $\frac{\mathrm{d}v^k}{\mathrm{d}t} + \Gamma^k_{ij} \dot\gamma^i v^j = 0$, where $\dot\gamma$ represents the tangent vector of $\gamma$. This notion can be used to define the analogue of “straight lines’’, or geodesics, in general manifolds. A curve is a geodesic if the tangent vector $\dot\gamma$ is propagated parallelly to itself along the curve. This gives the geodesic equation $\frac{\mathrm{d}\dot\gamma^k}{\mathrm{d}t} + \Gamma^k_{ij} \dot\gamma^i \dot\gamma^j = 0$, or equivalently, writing $\theta(t)$ for the coordinates of $\gamma(t)$,

$$\ddot\theta^k + \Gamma^k_{ij} \dot\theta^i \dot\theta^j = 0.$$
Note that for $\mathbb{R}^n$ with the Euclidean metric, $\Gamma^k_{ij} = 0$ and we recover usual straight lines. Let $v \in \mathcal{T}_p \mathcal{M}$; there exists a unique geodesic $\gamma_v(t)$ satisfying $\gamma_v(0) = p$, $\dot\gamma_v(0) = v$. This enables us to define the *exponential map*, i.e., $\mathrm{Exp}(p, v) := \gamma_v(1)$. By simple scaling we also have $\mathrm{Exp}(p, h v) = \gamma_v(h)$. Consistent with our intuition, geodesics (w.r.t. the Levi-Civita connection) also correspond to local shortest paths between two points.
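For intuition, the exponential map can be computed numerically by integrating the geodesic equation. A toy sketch under assumptions not in the post: on the 1D manifold with metric $g(s) = 2/s^2$ (the Fisher metric of $\mathcal{N}(0, s^2)$ in the scale parameter $s$), the connection coefficient is $\Gamma = -1/s$ and the exponential map has the closed form $\mathrm{Exp}(s_0, v) = s_0 e^{v/s_0}$, which we can compare against a crude Euler integration of the geodesic ODE.

```python
import math

# Integrate the geodesic equation  s'' + Gamma * (s')^2 = 0  with Gamma = -1/s,
# i.e. s'' = (s')^2 / s, by forward Euler from t = 0 to t = 1.
def exp_map_numeric(s0, v, n=50000):
    s, sdot, dt = s0, v, 1.0 / n
    for _ in range(n):
        sddot = sdot * sdot / s   # = -Gamma * (s')^2
        s += dt * sdot
        sdot += dt * sddot
    return s

s0, v = 1.0, 0.3
print(exp_map_numeric(s0, v), s0 * math.exp(v / s0))  # the two agree closely
```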

##### Summary

As a summary, we provide a graphical illustration of relevant concepts in Riemannian geometry.

From the above formulation, we observe that every concept and relation is defined via coordinates or coefficients. However, we need to discern the important ontological difference between an object and its coordinates. The manifold itself, along with geodesics and vectors (covectors) in its tangent (cotangent) spaces, is **intrinsic and independent of coordinates** provided by charts. Riemannian geometry studies intrinsic properties of manifolds via the lens of coordinates. As long as the coordinates are transformed accordingly, e.g., both transform covariantly or contravariantly, the objects and their relations will not change. This is where invariance emerges.

##### Relation to natural gradient method

Let $p_\theta(x, y)$ be a probabilistic model parameterized by $\theta$, and $L(p_\theta) = \mathbb{E}_{x, y}[-\log p_\theta(x, y)]$ be the expected negative log-likelihood. Here $x, y$ are random variables, $p_\theta(x, y)$ is their joint distribution, $p(x)$ is the marginal distribution of $x$ and is independent of $\theta$. In a Riemannian geometry framework, the set of all possible models $\{p_\theta\}$ constitutes a manifold $\mathcal{M}$, and the parameter $\theta$ provides a chart. Our goal is to find a model that minimizes the (empirical) loss $L$, which is a function defined on the manifold.

The well known update rule of gradient descent, $\theta_{t+1} = \theta_t - h \lambda \nabla_\theta L$, can be viewed as approximately solving the ordinary differential equation (ODE) $\frac{\mathrm{d}\theta}{\mathrm{d}t} = -\lambda \nabla_\theta L$ with the forward Euler method. Here $\lambda$ is a time scale constant, $h$ is the step size, and their product $h\lambda$ is the learning rate. However, the gradient descent ODE is not invariant to reparameterizations. For example, if we rescale $\theta$ to $2\theta$, $\nabla_\theta L$ will be downscaled to $\frac{1}{2}\nabla_\theta L$. This is more evident from a differential geometric point of view. As can be verified by the chain rule, $\frac{\mathrm{d}\theta}{\mathrm{d}t}$ transforms contravariantly and is therefore a vector in $\mathcal{T}_p \mathcal{M}$, while $\nabla_\theta L$ transforms covariantly, thus being a covector in $\mathcal{T}_p^* \mathcal{M}$. Accordingly, the l.h.s. and r.h.s. of the ODE transform with different rules and **do not type check**. As a result, optimizing with gradient descent is using a parameterization-dependent method to solve a parameterization-independent problem, which is aesthetically unsatisfying.

Natural gradient alleviates this issue by approximately solving an invariant ODE. Recall that we can raise or lower an index given a metric tensor $g_{ij}$, and hereby we choose the Fisher information metric given by

$$g_{ij}(\theta) = \mathbb{E}_{p_\theta(x, y)}\left[\partial_i \log p_\theta(x, y)\, \partial_j \log p_\theta(x, y)\right].$$

By raising the index of $\partial_j L$, the r.h.s. of the gradient descent ODE becomes a vector in $\mathcal{T}_p \mathcal{M}$ and both sides transform contravariantly. The new invariant ODE is now $\frac{\mathrm{d}\theta^i}{\mathrm{d}t} = -\lambda g^{ij} \partial_j L$, and the forward Euler approximation becomes $\theta_{t+1} = \theta_t - h \lambda F_{\theta_t}^{-1} \nabla_{\theta_t} L$, which is the traditional natural gradient update.

Intuitively, natural gradient is not the gradient with respect to model parameters; rather, it is the gradient on the manifold of probabilistic models. As opposed to the traditional gradient, which considers how perturbations of the model parameters affect the loss function, natural gradient considers how the loss function changes when the probabilistic model moves a little bit on the manifold of our model family. Because the model family does not change with respect to parameterizations, the natural gradient will also remain the same. This means that the form of the natural gradient will transform appropriately between parameterizations, automatically ensuring that the natural gradients under two different parameterizations correspond to the same “gradient entity” on the manifold.

Mathematically, suppose the loss function is $L(p_\theta)$; we can write the natural gradient as

$$\widetilde{\nabla}_\theta L(p_\theta) := F_\theta^{-1} \nabla_\theta L(p_\theta),$$

where $F_\theta$ is the Fisher information matrix for $p_\theta(x, y)$, i.e.,

$$F_\theta = \mathbb{E}_{p_\theta(x, y)}\left[\nabla_\theta \log p_\theta(x, y)\, \nabla_\theta \log p_\theta(x, y)^\top\right],$$
and we use a tilde mark (~) to differentiate the notation of natural gradient from that of ordinary gradient.

As a sanity check, let’s see how natural gradient eliminates the non-invariance of gradient descent for our linear regression example. For the two parameterizations of Gaussian conditional distributions, the derivatives of the log densities are respectively

$$\nabla_\theta \log p_\theta(x, y) = \frac{(y - \theta x - b)\,x}{\sigma^2}, \qquad \nabla_\xi \log q_\xi(x, y) = \frac{2(y - 2\xi x - b)\,x}{\sigma^2}.$$

Therefore, suppose $\theta_t = 2\xi_t$, we have $\nabla_\xi \log q_{\xi_t}(x, y) = 2\,\nabla_\theta \log p_{\theta_t}(x, y)$ and therefore $F_{\xi_t} = 4 F_{\theta_t}$. From our previous analysis on gradient descent, we already know that $\nabla_\xi L(q_{\xi_t}) = 2\,\nabla_\theta L(p_{\theta_t})$. As a result, the natural gradient scales as

$$\widetilde{\nabla}_\xi L(q_{\xi_t}) = F_{\xi_t}^{-1} \nabla_\xi L(q_{\xi_t}) = \frac{1}{4} F_{\theta_t}^{-1} \cdot 2\,\nabla_\theta L(p_{\theta_t}) = \frac{1}{2} \widetilde{\nabla}_\theta L(p_{\theta_t}),$$

which indicates invariance of the optimization step:

$$\xi_{t+1} = \xi_t - \frac{h}{2} \widetilde{\nabla}_\theta L(p_{\theta_t}) = \frac{1}{2}\left(\theta_t - h \widetilde{\nabla}_\theta L(p_{\theta_t})\right) = \frac{\theta_{t+1}}{2}.$$
More generally, consider an ordinary differential equation (ODE) that describes how the parameter $\theta$ should change as a function of time $t$, following the natural gradient smoothly:

$$\frac{\mathrm{d}\theta(t)}{\mathrm{d}t} = -\lambda \widetilde{\nabla}_{\theta(t)} L(p_{\theta(t)}).$$

Here $\lambda$ is just a prespecified scaling constant. If we change the parameterization from $\theta$ to $\xi$, we get a similar ODE:

$$\frac{\mathrm{d}\xi(t)}{\mathrm{d}t} = -\lambda \widetilde{\nabla}_{\xi(t)} L(q_{\xi(t)}).$$

The following observation on invariance can be derived with knowledge of information theory and Riemannian geometry:

Suppose $\xi = f(\theta)$ is a smoothly differentiable transformation of $\theta$. If $p_{\theta(0)} = q_{\xi(0)}$, and $\theta(t)$, $\xi(t)$ are solutions of the corresponding natural gradient ODEs, then we have $p_{\theta(t)} = q_{\xi(t)}$ for any $t \geq 0$.

However, it is usually intractable to solve such a complicated ODE for practical optimization tasks, and various approximation schemes must be used. From the perspective of numerical methods for ODEs, the naive natural gradient descent method

$$\theta_{t+1} = \theta_t - h \lambda \widetilde{\nabla}_{\theta_t} L(p_{\theta_t})$$

can be viewed as approximately solving the ODE

$$\frac{\mathrm{d}\theta(t)}{\mathrm{d}t} = -\lambda \widetilde{\nabla}_{\theta(t)} L(p_{\theta(t)})$$

using the forward Euler method with a step size of $h$. **Such an approximation with discrete step size will potentially cause loss of invariance**.

The following picture illustrates this point more clearly.

An illustration of optimization trajectories on the manifold (left), parameter space (middle) and another parameter space (right). The *red* dotted arrow represents a truly invariant trajectory, while the light and dark *blue* solid arrows represent trajectories of vanilla natural gradient updates under different parameterizations.

Suppose we are doing optimization on a manifold of probabilistic models $\mathcal{M}$, and consider two different parameterizations with parameter spaces denoted as $\Theta$ and $\Xi$ respectively. The red arrow shows the invariant optimization trajectory obtained by accurately solving the natural gradient ODE. The light and dark blue arrows depict the optimization trajectory of the vanilla natural gradient descent under the two different parameterizations. For the red arrows, we show the trajectory on the manifold $\mathcal{M}$, as well as trajectories in the parameter spaces $\Theta$ and $\Xi$. For the blue arrows, we only show trajectories on $\mathcal{M}$ and the corresponding parameter space.

Because vanilla natural gradient descent is a linear update, the blue trajectories in the parameter spaces $\Theta$ and $\Xi$ are straight lines. However, in order to obtain the invariant trajectory (red arrow), the update has to be curved in parameter space. The update of the vanilla natural gradient descent is straight in the parameter space, and therefore its trajectory will not be truly invariant on $\mathcal{M}$. For more invariant solutions, we need more complicated update rules that can perform curved updates in parameter space as well.

Because the natural gradient ODE solution is invariant, a straightforward approach to improving the invariance of the optimization is to exploit more accurate solvers. In numerical analysis, the accuracy of a solver is measured by computing its convergence order to the exact solution as the step size $h \rightarrow 0$. We say that a numerical ODE solver is $k$-th order accurate if the error between exact and approximate solutions decreases as $O(h^k)$. The forward Euler solver used in the traditional natural gradient update is a simple first-order solver.

In a similar way, we can also measure the “invariance order” of a numerical ODE solver. We say that a solver is $k$-th order invariant if the error between the approximate solution and **some exactly invariant trajectory** decreases as $O(h^k)$. Because the exact solution of the natural gradient ODE is invariant, any $k$-th order accurate solver is also $k$-th order invariant. We therefore have two ways to improve the invariance of natural gradient update — (1) using higher-order accurate solvers for the natural gradient ODE, or (2) using more invariant (but not necessarily more accurate) solvers.
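The accuracy orders are easy to verify empirically. A toy check under assumptions not in the post: solve $\mathrm{d}\theta/\mathrm{d}t = -\theta$ (exact solution $e^{-t}$) with forward Euler and with the midpoint method; halving $h$ should roughly halve Euler's error (first order) and quarter the midpoint error (second order).

```python
import math

def solve(h, step):
    # integrate dtheta/dt = -theta from t = 0 to t = 1, starting at theta = 1
    theta = 1.0
    for _ in range(round(1.0 / h)):
        theta = step(theta, h)
    return theta

f = lambda theta: -theta
euler = lambda theta, h: theta + h * f(theta)
midpoint = lambda theta, h: theta + h * f(theta + 0.5 * h * f(theta))

exact = math.exp(-1.0)
e1, e2 = abs(solve(0.1, euler) - exact), abs(solve(0.05, euler) - exact)
m1, m2 = abs(solve(0.1, midpoint) - exact), abs(solve(0.05, midpoint) - exact)
print(e1 / e2, m1 / m2)  # roughly 2 and 4: first- vs. second-order convergence
```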

### Numerical ODE solvers that achieve better invariance

We discuss two approaches to improve the invariance of the natural gradient update:

#### 1. More accurate solvers

The natural gradient ODE has an invariant solution; we just need to improve the accuracy of our numerical ODE solver so that the optimization trajectory is closer to the invariant one. In the paper, we discussed using a second-order solver called the Midpoint Method as a replacement for the first-order forward Euler method.

The Midpoint method can be decomposed into three steps to approximately solve the natural gradient ODE:

- Use natural gradient descent with half the step size to compute the midpoint: $$ \theta_{t+1/2} = \theta_{t} - \frac{1}{2} h \lambda \widetilde{\nabla}_{\theta_t} L(p_{\theta_t}(x,y)) $$
- Compute the gradient at the midpoint: $$ \widetilde{\nabla}_{\theta_{t + 1/2}} L(p_{\theta_{t+1/2}}(x, y)) $$
- Do the final update with the midpoint gradient: $$ \theta_{t + 1} = \theta_{t} - h \lambda \widetilde{\nabla}_{\theta_{t+1/2}} L(p_{\theta_{t+1/2}}(x,y)) $$
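The three steps translate directly into code. A minimal sketch, where `nat_grad` stands for the (assumed available) map $\theta \mapsto \widetilde{\nabla}_\theta L$ and `lr` for the learning rate $h\lambda$:

```python
# Midpoint-method natural gradient step; `nat_grad` is a placeholder callable.
def midpoint_natural_gradient_step(theta, lr, nat_grad):
    mid = theta - 0.5 * lr * nat_grad(theta)   # half-step to the midpoint
    return theta - lr * nat_grad(mid)          # full step with the midpoint gradient

# Tiny usage example with a made-up 1D natural gradient nat_grad(theta) = theta:
new_theta = midpoint_natural_gradient_step(1.0, 0.2, lambda th: th)
print(new_theta)  # 1.0 - 0.2 * (1.0 - 0.1) = 0.82
```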

#### 2. More invariant solvers

Some solvers can be more invariant by making their update rules more agnostic to reparameterizations, even though they are not necessarily more accurate than the forward Euler method. In the paper, we investigated a first-order solver called the Riemannian Euler Method (Bielecki 2002), which is *exactly invariant* to reparameterizations when approximately solving the natural gradient ODE. In other words, when starting from the same initial probabilistic model, the Riemannian Euler method will return the same final probabilistic model after a fixed number of iterations, as long as the learning rate is fixed.

When applied to solving the natural gradient ODE, the Riemannian Euler method uses the following update rule:

$$\theta_{t+1} = \mathrm{Exp}\left(\theta_t,\; -h \lambda \widetilde{\nabla}_{\theta_t} L(p_{\theta_t})\right),$$

where $\mathrm{Exp}$ is the Exponential Map, a concept related to geodesics in Riemannian geometry. Unfortunately, evaluating $\mathrm{Exp}$ requires calculating geodesics on the manifold of probabilistic models, which is usually intractable.

The good news is that this exponential map can be easier to approximate than the invariant solution of the natural gradient ODE. Leveraging the geodesic equation—an ODE governing geodesic lines on a manifold—we can obtain a **second-order** approximation to the exponential map. Equipped with this approximation, the update rule now becomes

$$\theta_{t+1}^k = \theta_t^k - h \lambda\, \widetilde{\nabla}_{\theta_t}^k L - \frac{1}{2} (h \lambda)^2\, \Gamma^k_{ij}\, \widetilde{\nabla}_{\theta_t}^i L\, \widetilde{\nabla}_{\theta_t}^j L.$$

Here $\Gamma^k_{ij}$ represents the result of applying the Levi-Civita connection^{1} to the input vectors. We name this update rule *natural gradient update with geodesic correction*.
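In one dimension the correction term is easy to write out, since the Levi-Civita connection collapses to $\Gamma = g'(\theta) / (2 g(\theta))$. A hedged sketch (the metric and numbers are illustrative assumptions, not the paper's setup):

```python
# Geodesic-corrected natural gradient step for a one-parameter model.
def geodesic_corrected_step(theta, lr, nat_grad_value, g, dg):
    v = lr * nat_grad_value                 # naive natural gradient step h*lambda*F^-1*grad
    gamma = dg(theta) / (2.0 * g(theta))    # 1D Levi-Civita connection coefficient
    return theta - v - 0.5 * gamma * v * v  # second-order exponential-map approximation

# Example metric: g(s) = 2/s^2, the Fisher metric of N(0, s^2) in the scale s.
g = lambda s: 2.0 / s ** 2
dg = lambda s: -4.0 / s ** 3
new_theta = geodesic_corrected_step(1.0, 0.1, 0.5, g, dg)
print(new_theta)  # 1 - 0.05 + 0.5 * 0.05**2 = 0.95125
```

Setting the correction term to zero recovers the naive natural gradient step, which is exactly the first-order approximation discussed next.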

An interesting observation is that the naive natural gradient update amounts to using the Riemannian Euler method with a **first-order** approximation to $\mathrm{Exp}$. This observation confirms that geodesic correction can provide more invariance (asymptotically).

The new numerical ODE solvers we explored so far are all computationally more expensive than the naive natural gradient update. For the midpoint integrator, we must calculate the midpoint first. For geodesic correction, we need to compute the Levi-Civita connection term. Both methods require about twice the computational cost compared to the naive natural gradient update. Luckily, we discovered that there exists a **faster** method for doing geodesic correction which still ensures a second-order approximation to the exponential map, preserving invariance. This faster geodesic correction is roughly **as computationally expensive as** the naive natural gradient update. Please refer to our paper for more details.

### Overview of experimental results

Finally, we apply our techniques to improve the invariance of the natural gradient update and observe how they affect optimization performance. We first examine a toy problem to demonstrate that our methods achieve better invariance, then show that they lead to improved optimization for several tasks in various areas of machine learning, including supervised/unsupervised learning and reinforcement learning.

#### Testing invariance on a toy problem

We experiment with different natural gradient updates to fit a univariate Gamma distribution under different parameterizations. The canonical parameterization of a Gamma distribution is

$$p(x \mid \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x},$$

where $\Gamma(\cdot)$ denotes the Gamma function.

Aside from this canonical parameterization, we also test three other parameterizations, each obtained by a smooth reparameterization of $\alpha$, $\beta$ or both (see the paper for the exact substitutions).

Next, we test all the natural gradient updates to see how well they can fit the parameters of a univariate Gamma distribution under different parameterizations. In the following figure, we show how the negative log-likelihood (our training loss) decreases with respect to training iterations. We use **ng** to denote the naive natural gradient update, **mid** for midpoint integrator, **geo** for geodesic correction, **geo_f** for the faster version of geodesic correction, **ng(exact)** for solving the natural gradient ODE exactly, and **geo(exact)** for using the exact Riemannian Euler method (without approximating the exponential map).

As the theory predicts, the curves of **ng(exact)** and **geo(exact)** are the same for all four different parameterizations. All of our proposed methods—**mid**, **geo** and **geo_f**—are closer to the exact invariant optimization curves. In stark contrast, the naive natural gradient update leads to different curves under different parameterizations, and yields slower convergence.

#### Fitting deep auto-encoders and deep classifiers

We then show the results of training deep autoencoders and classifiers on the MNIST dataset^{2}. The following figure illustrates how the different optimization algorithms fare against each other. The y-axis shows the training error. Each optimization algorithm has two curves of the same color: the solid curve plots training error against the number of iterations (bottom x-axis), while the dashed curve plots it against wall-clock time (top x-axis). We can observe that the improved natural gradient updates converge faster in terms of the number of iterations, and the faster geodesic correction also achieves lower loss in shorter wall-clock time, compared to the naive update.


#### Model-free reinforcement learning for continuous control

Finally, we show that our techniques also improve the performance of natural gradient-based optimization algorithms in reinforcement learning. In the experiments, we apply our techniques to ACKTR (Wu et al. 2017), a natural gradient algorithm for policy optimization in reinforcement learning. The results on the HalfCheetah^{3} environment are shown in the following figure, where we obtain higher rewards faster after combining our techniques with ACKTR.

### Conclusion

In this work, we analyze the invariance of optimization algorithms. Based on our analysis, we propose several methods to improve the invariance of natural gradient update. We empirically report that being more invariant leads to faster optimization for many machine learning tasks.

#### Footnotes

#### References

- Ioffe, Sergey, and Christian Szegedy. 2015. “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” *arXiv preprint arXiv:1502.03167*.
- Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. “Layer Normalization.” *arXiv preprint arXiv:1607.06450*.
- Salimans, Tim, and Diederik P. Kingma. 2016. “Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks.” In *Advances in Neural Information Processing Systems*, 901–9.
- Amari, Shun-ichi. 1998. “Natural Gradient Works Efficiently in Learning.” *Neural Computation* 10 (2): 251–76.
- Amari, Shun-ichi, and Hiroshi Nagaoka. 2007. *Methods of Information Geometry*. Vol. 191. American Mathematical Soc.
- Petersen, Peter. 2006. *Riemannian Geometry*. Vol. 171. Springer.
- Bielecki, Andrzej. 2002. “Estimation of the Euler Method Error on a Riemannian Manifold.” *Communications in Numerical Methods in Engineering* 18 (11): 757–63.
- Wu, Yuhuai, Elman Mansimov, Roger B. Grosse, Shun Liao, and Jimmy Ba. 2017. “Scalable Trust-Region Method for Deep Reinforcement Learning Using Kronecker-Factored Approximation.” In *Advances in Neural Information Processing Systems*, 5279–88.