# The mathematical symmetry of the differential pair

In this article, we discuss the well-known solutions of a widely used circuit: the differential pair. Many textbooks present a study of this circuit based on the behavior of the differential and common modes. We would like to present a mathematical justification of why this approach works so well. In particular, we will show how the differential and common modes are related to the eigenvalues and eigenvectors of the algebraic operator representing the action of the circuit on the signals.

Figure 1 depicts the well known circuit of a differential pair, built with two NPN bipolar transistors.

Figure 1: a classical NPN-based differential pair circuit.

The article is organized as follows. We begin with a rather conventional small signal analysis of the circuit shown in figure 1. Then, we observe that the effect of the circuit on the inputs can be described by a suitable matrix. It turns out that this matrix is diagonalizable: this property has some profound consequences, leading to the differential and common modes. We conclude by demonstrating that the diagonalization of the matrix allows us to perform a simplified small signal analysis of the circuit, which confirms some of the properties calculated mathematically.

## Small signal analysis

If we consider that the calculation of the bias point has already been done, the linear small signal equivalent circuit of the differential pair is shown in Figure 2. We neglected the Early effect and used the π model for the bipolar transistors, with the advantage that the same analysis can be carried out for MOS transistors. At this point, we do not yet suppose that the circuit has some degree of symmetry. In particular, we consider that the two transistors Q1 and Q2 are not identical and are thus represented by different trans-conductances gm,1 and gm,2, as well as by different base/emitter differential resistances rπ,1 and rπ,2.

Concerning notation, we use lower case letters to indicate the small signal variation around the bias point. For example, in Figure 1 the voltage Ve,1 indicates the voltage one might measure between the base of Q1 and the reference node with an oscilloscope (bias point plus variations). In Figure 2, on the other hand, the voltage ve,1 indicates only the variation around the bias point. In the schematics, the blue arrows indicate the voltages using the Italian and French convention, where the head is the conventional +.

Figure 2: the small signal equivalent of the circuit of figure 1.

Different strategies might be applied for the small signal analysis. We chose to calculate the voltage drop across RE, which we call vA. To do so, one can calculate the two emitter currents i1 and i2, whose sum constitutes the current flowing in RE:

$i_\mathrm{1} = (g_\mathrm{m,1}+g_\mathrm{\pi,1})v_\mathrm{1}$

$i_\mathrm{2} = (g_\mathrm{m,2}+g_\mathrm{\pi,2})v_\mathrm{2}$

where gπ,1 = 1 / rπ,1 and gπ,2 = 1 / rπ,2, and v1 and v2 are the voltage drops across the resistances rπ,1 and rπ,2. By relating v1 and v2 to the input voltages ve,1 and ve,2, the expression of vA can be deduced:

$v_\mathrm{A}=\frac{R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1})v_\mathrm{e,1}+R_\mathrm{E}(g_\mathrm{m,2}+g_\mathrm{\pi,2})v_\mathrm{e,2}}{1+R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1}+g_\mathrm{m,2}+g_\mathrm{\pi,2})}$

Knowing vA, it is easy to calculate the current flowing in the controlled sources, and so the outputs vs,1 and vs,2:

$v_\mathrm{s,1}=\frac{-R_\mathrm{c1}g_\mathrm{m,1}[R_\mathrm{E}(g_\mathrm{m,2}+g_\mathrm{\pi,2})+1]v_\mathrm{e,1}+R_\mathrm{c1}g_\mathrm{m,1}R_\mathrm{E}(g_\mathrm{m,2}+g_\mathrm{\pi,2})v_\mathrm{e,2} }{1+R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1}+g_\mathrm{m,2}+g_\mathrm{\pi,2})}$

$v_\mathrm{s,2}=\frac{R_\mathrm{c2}g_\mathrm{m,2}R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1})v_\mathrm{e,1} -R_\mathrm{c2}g_\mathrm{m,2}[R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1})+1]v_\mathrm{e,2} }{1+R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1}+g_\mathrm{m,2}+g_\mathrm{\pi,2})}$
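As a sanity check, the closed forms above can be verified numerically: vA follows from KCL at the common emitter node, and vs,1 = −Rc,1gm,1(ve,1 − vA). The following sketch uses hypothetical component values chosen only for illustration.

```python
# Numeric cross-check of the closed forms for vA and vs,1 derived above.
# All component values are hypothetical, chosen only for illustration.
Rc1, RE = 5e3, 10e3                          # ohms
gm1, gm2 = 40e-3, 42e-3                      # siemens (mismatched pair)
gpi1, gpi2 = 0.4e-3, 0.42e-3                 # siemens
ve1, ve2 = 1e-3, -0.5e-3                     # input variations, volts

g1, g2 = gm1 + gpi1, gm2 + gpi2
den = 1 + RE * (g1 + g2)                     # common denominator of all formulas

# Closed-form vA and vs,1 from the text
vA = RE * (g1 * ve1 + g2 * ve2) / den
vs1 = (-Rc1 * gm1 * (RE * g2 + 1) * ve1 + Rc1 * gm1 * RE * g2 * ve2) / den

# Independent route: vs,1 = -Rc1*gm1*v1 with v1 = ve1 - vA
assert abs(vs1 - (-Rc1 * gm1 * (ve1 - vA))) < 1e-12
```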

So far, the circuit under study is not symmetrical and the analysis we performed is completely standard. In the next paragraph, we will introduce a mathematical interpretation of the equations relating vs,1 and vs,2 to ve,1 and ve,2 that will prove useful in the case of a symmetrical circuit.

## A slightly mathematical point of view

The small signal analysis of the differential pair shown in Figure 2 can be interpreted in a mathematical way:

• it is a circuit with two inputs and two outputs
• as long as the transistors are operated in their linear region (and thus the small signal equivalent circuit shown in Figure 2 is valid), its action can be written in matrix form:

$\begin{pmatrix} v_\mathrm{s,1}\\ v_\mathrm{s,2}\\ \end{pmatrix}= A \begin{pmatrix} v_\mathrm{e,1}\\ v_\mathrm{e,2}\\ \end{pmatrix}$

where A is a 2x2 matrix:

$A=\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{pmatrix}$

$a_{11}=\left. \frac{v_\mathrm{s,1}}{v_\mathrm{e,1}} \right |_{v_\mathrm{e,2}=0}= -R_\mathrm{c1}g_\mathrm{m,1}\frac{R_\mathrm{E}(g_\mathrm{m,2}+g_\mathrm{\pi,2})+1}{1+R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1}+g_\mathrm{m,2}+g_\mathrm{\pi,2})}$

$a_{12}=\left. \frac{v_\mathrm{s,1}}{v_\mathrm{e,2}} \right |_{v_\mathrm{e,1}=0}= R_\mathrm{c1}g_\mathrm{m,1}\frac{R_\mathrm{E}(g_\mathrm{m,2}+g_\mathrm{\pi,2})}{1+R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1}+g_\mathrm{m,2}+g_\mathrm{\pi,2})}$

$a_{21}=\left. \frac{v_\mathrm{s,2}}{v_\mathrm{e,1}} \right |_{v_\mathrm{e,2}=0}= R_\mathrm{c2}g_\mathrm{m,2}\frac{R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1})}{1+R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1}+g_\mathrm{m,2}+g_\mathrm{\pi,2})}$

$a_{22}=\left. \frac{v_\mathrm{s,2}}{v_\mathrm{e,2}} \right |_{v_\mathrm{e,1}=0}= -R_\mathrm{c2}g_\mathrm{m,2}\frac{R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1})+1}{1+R_\mathrm{E}(g_\mathrm{m,1}+g_\mathrm{\pi,1}+g_\mathrm{m,2}+g_\mathrm{\pi,2})}$
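A small numeric sketch may help here: the function below assembles A from the four expressions above. The helper name `pair_matrix` and all component values are hypothetical, for illustration only.

```python
import numpy as np

def pair_matrix(Rc1, Rc2, RE, gm1, gm2, gpi1, gpi2):
    """Assemble the 2x2 small-signal matrix A of the differential pair."""
    den = 1.0 + RE * (gm1 + gpi1 + gm2 + gpi2)        # common denominator
    a11 = -Rc1 * gm1 * (RE * (gm2 + gpi2) + 1.0) / den
    a12 =  Rc1 * gm1 * RE * (gm2 + gpi2) / den
    a21 =  Rc2 * gm2 * RE * (gm1 + gpi1) / den
    a22 = -Rc2 * gm2 * (RE * (gm1 + gpi1) + 1.0) / den
    return np.array([[a11, a12], [a21, a22]])

# With identical transistors and loads, A turns out to be symmetric
A = pair_matrix(Rc1=5e3, Rc2=5e3, RE=10e3,
                gm1=40e-3, gm2=40e-3, gpi1=0.4e-3, gpi2=0.4e-3)
assert abs(A[0, 1] - A[1, 0]) < 1e-12
```

The symmetry of A in the matched case is exactly the property exploited later in the article.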

This matrix mathematically represents a linear map acting on a vector describing the electrical state of the inputs and giving another vector which represents the electrical state of the outputs. In fact, what we called "the electrical state" can be seen mathematically as a point in a vector space, which in this case is two-dimensional. Giving the coordinates of a point in a vector space requires the choice of a basis, which is to some extent conventional. For example, we have already made an implicit choice for the basis:

$\begin{pmatrix} v_\mathrm{e,1}\\ v_\mathrm{e,2}\\ \end{pmatrix} =v_\mathrm{e,1} \begin{pmatrix} 1\\ 0\\ \end{pmatrix} + v_\mathrm{e,2} \begin{pmatrix} 0\\ 1\\ \end{pmatrix}$

In fact, the two vectors $\begin{pmatrix} 1\\ 0\\ \end{pmatrix}$ and $\begin{pmatrix} 0\\ 1\\ \end{pmatrix}$ are the natural basis of the vector space in which the linear map defined by the matrix A is described. It can be noticed that, being square, the matrix A can be associated with an endomorphism.

NOTE: an endomorphism is a linear map whose input and output vector spaces coincide. This may seem surprising in our application, where the input and output vectors represent the electrical state of nodes that are well separated in the circuit. However, the description of the signals shares the very same mathematical structure. A consequence is that the output signals can be used to feed another, identical differential pair.

This mathematically reflects the choice of a two-dimensional vector space for representing both the electrical state of the inputs and that of the outputs.

Very often, the circuit is symmetrical (i.e. Rc1 = Rc2 = Rc, gm,1 = gm,2 = gm, gπ,1 = gπ,2 = gπ). It might not, however, be operated in a symmetrical way, which means that in general $v_\mathrm{e,1}\neq v_\mathrm{e,2}$.

Several questions arise at this point.

• How is the symmetry reflected in the behavior of the circuit?
• Is there a better choice for the basis of the vector spaces representing the electrical state of the inputs and the outputs of the circuit?
• What is the simplest possible mathematical representation of the circuit?

In the next paragraphs, we try to discuss some answers.

## Eigenvalues, eigenvectors and symmetry

Trying to answer these questions leads us to some interesting mathematical observations. First of all, when the circuit is symmetrical, the matrix A introduced above is symmetric too. This is a direct consequence of the fact that the symmetry of the circuit is respected by the nodes chosen as inputs and outputs:

$a_{11}=a_{22}= -R_\mathrm{c}g_\mathrm{m}\frac{R_\mathrm{E}(g_\mathrm{m}+g_\mathrm{\pi})+1}{1+2R_\mathrm{E}(g_\mathrm{m}+g_\mathrm{\pi})}$

$a_{12}=a_{21}= R_\mathrm{c}g_\mathrm{m}\frac{R_\mathrm{E}(g_\mathrm{m}+g_\mathrm{\pi})}{1+2R_\mathrm{E}(g_\mathrm{m}+g_\mathrm{\pi})}$

We saw that A can be seen as an endomorphism. An important consequence of the matrix being symmetric is that it can be shown to be diagonalizable. In other words, there exists a basis of eigenvectors for the vector space representing the electrical state of the input and output signals. By using it, we are able to represent the circuit in a very convenient way.

First of all, let's briefly refresh the mathematical concepts of eigenvectors and eigenvalues of a matrix. The definition is as follows: given an $n\times n$ matrix A, if there is an $n\times 1$ vector $v\neq 0$ such that:

Av = λv

where λ is a complex number, the vector v is called a (right) eigenvector associated with the eigenvalue λ.

There is an interesting interpretation of eigenvectors when A describes the effect of a physical system: when the input vector is an eigenvector, the output is simply the input scaled by the eigenvalue. Having a basis composed of eigenvectors means that every point of the space can be described as a linear combination of vectors on which the action of the physical system is particularly simple.

Let's thus calculate the eigenvectors and eigenvalues of the matrix A associated with the effect of the differential pair on a pair of signals applied at the bases of the transistors Q1 and Q2. First of all, let's calculate the eigenvalues by observing that, from the definition:

Av − λv = 0

must hold for some non-zero vector v (and all its multiples). This means that the following determinant must vanish, so that the associated linear system is singular (this gives the characteristic polynomial):

det(A − λI) = 0

where I denotes the identity matrix of the same size as A. This calculation strategy is convenient only for very small matrices, and it is perfectly adapted to our problem. When seeking eigenvectors and eigenvalues of $2000\times 2000$ matrices, better numerical methods exist, such as the QR algorithm. After some algebra, we obtain:

$\lambda^2 - (a_{11}+a_{22})\lambda + a_{11}a_{22} - a_{12}a_{21} = 0$

It appears that the coefficient of λ is minus the trace of the matrix A, while the constant term is its determinant. By substituting all the terms and doing the calculations, it is straightforward to show that there are two distinct eigenvalues:

$\lambda_1 = -R_\mathrm{c}g_\mathrm{m}$

$\lambda_2 = -R_\mathrm{c}g_\mathrm{m}\frac{1}{1+2R_\mathrm{E}(g_\mathrm{m}+g_\mathrm{\pi})}$

The calculation of the eigenvectors associated with λ1 and λ2 gives, respectively, the two following vectors:

$e_1 = \alpha \begin{pmatrix} 1 \\ -1 \end{pmatrix}$

$e_2 = \beta \begin{pmatrix} 1 \\ 1 \end{pmatrix}$

where α and β are two arbitrary non-zero constants. By choosing the two eigenvectors as the basis used to represent the electrical state of the input and output signals, the effect of the circuit is given by a new matrix D, which is diagonal:

$D = \begin{pmatrix} \lambda_1 & 0\\ 0 & \lambda_2\\ \end{pmatrix} = \begin{pmatrix} -R_\mathrm{c}g_\mathrm{m} & 0 \\ 0 & -R_\mathrm{c}g_\mathrm{m}\frac{1}{1+2R_\mathrm{E}(g_\mathrm{m}+g_\mathrm{\pi})} \\ \end{pmatrix}$
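As a check, the closed-form eigenvalues can be compared with a numerical eigendecomposition. This is a sketch with hypothetical element values; `numpy.linalg.eigvals` does the numerical work.

```python
import numpy as np

# Hypothetical, illustrative element values
Rc, RE, gm, gpi = 5e3, 10e3, 40e-3, 0.4e-3
den = 1.0 + 2.0 * RE * (gm + gpi)

# Symmetric matrix A of the matched pair
a_diag = -Rc * gm * (RE * (gm + gpi) + 1.0) / den    # a11 = a22
a_off  =  Rc * gm * RE * (gm + gpi) / den            # a12 = a21
A = np.array([[a_diag, a_off], [a_off, a_diag]])

lam1 = -Rc * gm            # closed-form differential-mode eigenvalue
lam2 = -Rc * gm / den      # closed-form common-mode eigenvalue

eigvals = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(eigvals, np.sort([lam1, lam2]))
```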

The eigenvectors allow us to change the point of view on the circuit: it is not difficult to see that, by assembling a matrix W from the eigenvectors, one obtains:

$A = WDW^{-1}$

where:

$W=(e_1, e_2)=\begin{pmatrix} \alpha & \beta \\ -\alpha & \beta \\ \end{pmatrix}$

$W^{-1}=\begin{pmatrix} \frac{1}{2\alpha}& -\frac{1}{2\alpha} \\ \frac{1}{2\beta} & \frac{1}{2\beta}\\ \end{pmatrix}$
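The factorization of A can also be checked numerically. In this sketch the element values are hypothetical, and α = 1/2 with β = 1 is one possible choice of the arbitrary constants.

```python
import numpy as np

Rc, RE, gm, gpi = 5e3, 10e3, 40e-3, 0.4e-3    # hypothetical values
den = 1.0 + 2.0 * RE * (gm + gpi)
a_diag = -Rc * gm * (RE * (gm + gpi) + 1.0) / den
a_off  =  Rc * gm * RE * (gm + gpi) / den
A = np.array([[a_diag, a_off], [a_off, a_diag]])

alpha, beta = 0.5, 1.0                        # arbitrary non-zero scales
W = np.array([[alpha, beta],
              [-alpha, beta]])                # columns are e1 and e2
D = np.diag([-Rc * gm, -Rc * gm / den])       # eigenvalues lambda_1, lambda_2

# A = W D W^-1 holds regardless of the choice of alpha and beta
assert np.allclose(A, W @ D @ np.linalg.inv(W))
```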

In other words, the basis of eigenvectors defines a coordinate change in which the representation of the circuit is particularly simple: the diagonal matrix D. Understanding the electrical interpretation of the eigenvectors and eigenvalues is particularly enlightening from a physical point of view. The next paragraph is thus devoted to this discussion.

## Small signal analysis based on the symmetry of the circuit

The coordinate transform described by the eigenvectors has a deep electrical significance. In fact, we saw that the matrices W and $W^{-1}$ implement a coordinate change. If we write:

$v_\mathrm{e}= \begin{pmatrix} v_\mathrm{e,1}\\v_\mathrm{e,2} \end{pmatrix}$

which gives the representation of the electrical state of the inputs using the "natural" basis described above, we can write:

$v_\mathrm{e}^{\prime}=\begin{pmatrix}v_\mathrm{e,d}\\v_\mathrm{e,cm}\end{pmatrix}=W^{-1}v_\mathrm{e}$

and thus, by choosing for example α = 1 / 2 and β = 1:

$\left \{ \begin{align} v_\mathrm{e,d}&=v_\mathrm{e,1}-v_\mathrm{e,2}\\ v_\mathrm{e,cm}&=\frac{v_\mathrm{e,1}+v_\mathrm{e,2}}{2} \end{align} \right.$

$v_\mathrm{e}$ and $v_\mathrm{e}^\prime$ describe the same electrical state of the inputs, represented in two different bases. The same can be done for the outputs.

Electronics engineers will easily recognize that ve,d is nothing but the good old differential mode, whereas ve,cm is the common mode of the input signals. It is very well known that it is convenient to analyze the differential pair by representing signals as a linear combination of the differential and common modes:

$v_\mathrm{e}=W v_\mathrm{e}^{\prime}$

$\left \{ \begin{align} v_\mathrm{e,1}&=v_\mathrm{e,cm}+v_\mathrm{e,d}/2\\ v_\mathrm{e,2}&=v_\mathrm{e,cm}-v_\mathrm{e,d}/2 \end{align} \right.$
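A short sketch of this change of coordinates, with α = 1/2 and β = 1 as above (the input values are hypothetical):

```python
import numpy as np

W = np.array([[0.5, 1.0],
              [-0.5, 1.0]])       # columns: e1 (alpha = 1/2), e2 (beta = 1)
W_inv = np.linalg.inv(W)          # maps node voltages to modes

ve = np.array([1.2, 0.8])         # hypothetical ve,1 and ve,2 (volts)
ved, vecm = W_inv @ ve            # differential and common modes

assert abs(ved - (ve[0] - ve[1])) < 1e-12          # ve,d = ve,1 - ve,2
assert abs(vecm - (ve[0] + ve[1]) / 2) < 1e-12     # ve,cm = (ve,1 + ve,2)/2
assert np.allclose(W @ np.array([ved, vecm]), ve)  # round trip
```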

It should be clear now that this practice has a profound mathematical justification: it amounts to choosing a basis of eigenvectors for the input and output signals. Let's now perform the small signal analysis based on the differential and common modes (as it can be found in many textbooks, such as [1] and [2]), to check whether we find the same results obtained above.

### Purely differential excitation

Let's consider, at the top of figure 3, the small signal equivalent circuit of the (symmetric) differential pair, excited by a purely differential input ve,d. As the excitation is antisymmetric and the circuit is symmetrical, the electrical state of the circuit will be antisymmetric.

Figure 3: the small signal equivalent of a symmetric differential pair excited by a purely differential input.

Since by antisymmetry i1 = −i2, no current flows in RE and the voltage drop across it is zero. This means that the circuit analysis can be simplified, since the circuit is perfectly equivalent to the one shown at the bottom of figure 3. The output signal is thus:

$\frac{v_\mathrm{s,d}}{2} = -R_\mathrm{c}g_\mathrm{m}\frac{v_\mathrm{e,d}}{2}$

and thus:

$v_\mathrm{s,d} = -R_\mathrm{c}g_\mathrm{m}v_\mathrm{e,d}$

The voltage gain for the differential mode (the ratio between the differential mode of the outputs and that of the inputs) is $-R_\mathrm{c}g_\mathrm{m}$, which is exactly the eigenvalue λ1 we calculated above!

### Purely common mode excitation

When the excitation is only due to the common mode, figure 4 shows the small signal equivalent circuit and the symmetry of its electrical state.

Figure 4: the small signal equivalent circuit for the symmetric differential pair excited only by the common mode.

The symmetry implies that i1 = i2, and thus that, if the resistance RE is split into two halves, no current flows between them. This means that nothing changes if we remove the connection between the nodes A and A' (see figure 4 at the bottom).

These considerations allow us to write the common mode gain:

$v_\mathrm{s,cm}=-R_\mathrm{c}g_\mathrm{m}\frac{1}{1+2R_\mathrm{E}(g_\mathrm{m}+g_\mathrm{\pi})}v_\mathrm{e,cm}$

which is, as we expected, nothing but the eigenvalue λ2.
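Tying the two analyses together: applying A to a purely differential and to a purely common-mode input scales each by the corresponding eigenvalue. A numeric sketch with the same hypothetical values used earlier:

```python
import numpy as np

Rc, RE, gm, gpi = 5e3, 10e3, 40e-3, 0.4e-3   # hypothetical values
den = 1.0 + 2.0 * RE * (gm + gpi)
a_diag = -Rc * gm * (RE * (gm + gpi) + 1.0) / den
a_off  =  Rc * gm * RE * (gm + gpi) / den
A = np.array([[a_diag, a_off], [a_off, a_diag]])

v_diff = np.array([1.0, -1.0])   # purely differential excitation
v_cm   = np.array([1.0,  1.0])   # purely common-mode excitation

# Each mode is simply scaled by its eigenvalue (the mode gain)
assert np.allclose(A @ v_diff, -Rc * gm * v_diff)        # lambda_1
assert np.allclose(A @ v_cm, (-Rc * gm / den) * v_cm)    # lambda_2
```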

## Conclusion

In this article, we discussed the symmetry of the very well known differential pair circuit from a mathematical point of view. The question was whether the symmetry can be derived mathematically from an analysis which is not based on it. We started with a complete analysis of the circuit when the differential pair is operated in the linear region. It appears that there is a mathematical justification for the usual description based on the differential and common modes. In fact, the differential and common modes can be seen as eigenvectors of the (symmetric) matrix which represents the effect of the circuit on the two-dimensional vector space of the electrical states of the inputs. If the circuit is symmetrical, this matrix is symmetric too, and choosing a basis of eigenvectors diagonalizes it. We then exploited the mathematical results to perform a second analysis, this time based on the symmetry, in which the simplification of the algebra is evident. In particular, the two eigenvalues are the differential and common mode voltage gains of the circuit.

## Bibliography

[1] - B. Razavi, "Design of Analog CMOS Integrated Circuits", McGraw-Hill, 2001.

[2] - A. S. Sedra, K. C. Smith, "Microelectronic Circuits", sixth edition, Oxford University Press, 2009.
