Archive for January 2022

Covariant and Contravariant Functor in Category Theory

Recently, I stumbled upon an interesting fact about the difference in meaning of the words covariant and contravariant in mathematics (category theory) and physics (tensor analysis). That difference makes it harder for a mathematician and a physicist to discuss these two words without first agreeing on which sense is meant.


In category theory, we can think of a functor as a mapping of objects and morphisms between categories. If a functor preserves the direction of morphisms, it is a covariant functor; if it reverses the direction of morphisms, it is a contravariant functor. John Baez briefly mentions them in his book "Gauge Fields, Knots and Gravity". The identity functor is a covariant functor, and so are tangent vectors, while cotangent vectors and 1-forms are contravariant. (A 1-form in this case is the differential of a function; however, if the differential of a function is to be thought of as a vector field, then the vector fields are covariant.)


Suppose we have a map $\phi:M \rightarrow N$ from one manifold to another. On $N$, we have real-valued functions $\psi:N\rightarrow \mathbb{R}$. To get real-valued functions on $M$, we pull back $\psi$ from $N$ to $M$ by $\phi$:

$$\phi^* \psi = \psi \circ \phi.$$

We see that the real-valued functions travel in the direction opposite to $\phi$, from $N$ back to $M$. So they are contravariant.
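As a concrete sketch (in Python with sympy, using a made-up $\phi$ and $\psi$ purely for illustration), the pullback is nothing more than pre-composition with $\phi$:

```python
import sympy as sp

u, v = sp.symbols('u v')   # coordinates on M
x, y = sp.symbols('x y')   # coordinates on N

# phi: M -> N, written out in coordinates (a made-up map)
phi = {x: u**2 + v, y: sp.sin(v)}

# psi: N -> R, a made-up real-valued function on N
psi = x*y + y**2

# Pullback phi^* psi = psi o phi: substitute the coordinate expressions of phi into psi
pullback_psi = psi.subs(phi)
print(sp.simplify(pullback_psi))   # a real-valued function of (u, v), i.e. a function on M
```

Note the reversal of direction: $\phi$ goes from $M$ to $N$, but $\phi^*$ carries functions on $N$ back to functions on $M$.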

 

In tensor analysis, one says that $X_\mu$ is covariant and $X^\mu$ is contravariant. (It is important to note that the basis vectors $\partial_\mu$ are covariant, while the components $v^\mu$ of a vector are contravariant.)
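To spell out the index statement (the standard coordinate transformation laws, written here only as a reminder), under a change of coordinates $x^\mu \rightarrow x^{\mu'}$ the basis vectors and the components transform oppositely,

$$\partial_{\mu'} = \frac{\partial x^{\mu}}{\partial x^{\mu'}}\,\partial_{\mu}, \qquad v^{\mu'} = \frac{\partial x^{\mu'}}{\partial x^{\mu}}\, v^{\mu},$$

so that the vector $v = v^\mu \partial_\mu$ itself is unchanged; this is the sense in which $\partial_\mu$ is called covariant and $v^\mu$ contravariant.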


Heterotic Strings

Here are my 10 pages of handwritten (rough) notes on heterotic string theory. We work with both the $SO(32)$ and $E_8 \times E_8$ theories. For reference, one can use Superstring Theory, Vol. 1 and Vol. 2, by Green, Schwarz, and Witten.

Heterotic Strings


A Few Comments on Entropy

Entropy $S(x)$ is a measure of the randomness of a variable $x$. It is important in information theory, and it shares similarities with the entropy that we have in thermodynamics. We write the entropy as

$$S(x) = -\sum_x p(x) \log p(x),$$

where $p(x)$ is the probability mass function of the variable. In quantum information theory (or quantum Shannon theory), we use density matrices in place of the mass function. We mostly prefer logarithms in base 2, and the entropy is then measured in bits.
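As a quick numerical illustration (a minimal Python sketch with made-up distributions; the helper name `shannon_entropy` is mine):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a probability mass function p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                  # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log2(p)))

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: uniform over four symbols
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))      # ~1.36 bits: biased, hence less random
```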

Suppose that Alice has sent a message that contains either $a$ or $b$, each occurring with probability one half. In this case, the binary entropy looks like the figure below; when $p=1/2$ and $(1-p)=1/2$, the entropy becomes $1$ bit:

$$S(p)=-p\log p - (1-p) \log (1-p).$$
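A small check of this formula (again a Python sketch, assuming logarithms in base 2):

```python
import numpy as np

def binary_entropy(p):
    """Binary entropy S(p) = -p log2 p - (1-p) log2 (1-p), in bits."""
    if p in (0.0, 1.0):                 # convention: 0 * log 0 = 0
        return 0.0
    return float(-p*np.log2(p) - (1 - p)*np.log2(1 - p))

for p in (0.1, 0.5, 0.9):
    print(p, binary_entropy(p))         # peaks at 1 bit for p = 0.5, symmetric about it
```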

For more than one variable, we have joint entropy

$$S(x,y) = -\sum_{x}\sum_{y} p(x,y) \log p(x,y)$$
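In the same spirit, here is a minimal sketch of the joint entropy for a made-up joint distribution $p(x,y)$:

```python
import numpy as np

# A made-up joint pmf p(x, y): rows index x, columns index y.
pxy = np.array([[0.25, 0.25],
                [0.40, 0.10]])

# S(x, y) = -sum_{x, y} p(x, y) log2 p(x, y)
joint_entropy = float(-np.sum(pxy * np.log2(pxy)))
print(joint_entropy)                    # ~1.86 bits
```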

If, for instance, Alice sends a message consisting of the symbols $a$ and $b$,

$$ababcbcbcba$$

then the message received by Bob is described by the conditional entropy, which is built from the conditional probability (illustrated numerically below)

$$P_{x \mid y}\left(x_{i} \mid y_{j}\right)=\frac{P_{x, y}\left(x_{i}, y_{j}\right)}{P_{y}\left(y_{j}\right)},$$

and (changing the notation a bit, calling $X, Y$ the random variables)

$$I(X; Y)=\sum_{x} \sum_{y} p(x,y) \log \frac{p(x,y)}{p(x)p(y)} = S_{X}+S_{Y}-S_{XY}$$

is the mutual information between the two variables $X$ and $Y$. The mutual information $I(X;Y)$ is the relative entropy between the joint probability mass function $p(x,y)$ and the product distribution $p(x)p(y)$. (I recommend T. Cover and J. Thomas, Elements of Information Theory, John Wiley & Sons, 2006, for introductory material.)
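Finally, here is a small numerical check (with the same kind of made-up joint distribution as above) that the double sum agrees with $S_{X}+S_{Y}-S_{XY}$, together with the conditional probability $p(x \mid y) = p(x,y)/p(y)$ and the conditional entropy mentioned earlier:

```python
import numpy as np

# The same made-up joint pmf p(x, y) as above: rows index x, columns index y.
pxy = np.array([[0.25, 0.25],
                [0.40, 0.10]])
px = pxy.sum(axis=1)                    # marginal p(x)
py = pxy.sum(axis=0)                    # marginal p(y)

# Conditional probability p(x|y) = p(x, y) / p(y): divide each column by p(y_j)
p_x_given_y = pxy / py

def H(p):
    """Shannon entropy in bits of a (possibly joint) pmf."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Mutual information from the double sum ...
I_sum = float(np.sum(pxy * np.log2(pxy / np.outer(px, py))))
# ... and from the entropies, I(X;Y) = S_X + S_Y - S_XY
I_entropy = H(px) + H(py) - H(pxy)
print(I_sum, I_entropy)                 # both ~0.07 bits for this distribution

# Conditional entropy S(X|Y) = S(X,Y) - S(Y): the uncertainty left about Alice's
# symbol once Bob has received his.
print(H(pxy) - H(py))                   # ~0.93 bits
```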

A look at general $x\log x$.
