## Old posts are being manually restored

In September, during a server update compounded by an issue with the backup system, the site was corrupted. Some posts have been lost entirely. All previous comments have been lost.

I have been running this blog regularly since 2013. I am manually restoring the posts one by one. If you have a post you’d like to reference in particular, email me and I will prioritize putting that up.

In the meantime, most of the contents are still available on the Wayback Machine.

## Fiber Bundles of Formal Disks

Here is an incomplete proof that varieties are fiber bundles of formal disks over their de Rham stacks. The fact makes intuitive sense: the de Rham stack is the variety without infinitesimal data, and by adding the infinitesimal data (formal disks) back in, you recover the variety. However, the fact that you can build anything non-infinitesimal out of formal disks fills me with confusion and awe.

Acknowledgements: This is the result of a working group with Dan Fletcher, Adam Holeman, and me, as part of the Northwestern Homotopy Working Seminar (started by Matthew Weatherly, Grigory Kondyrev, and me). Dan was not at the working session in which Adam and I figured out the proof, which is why his name is not mentioned below, but he was very helpful in understanding the claim. The proof is based on an idea of Yaroslav Khromenkov, which Adam and I understood and corrected during a working session.

## A Gentle Introduction to Tensors and Monoids

There are at least three distinct conceptual roles which vectors and vector spaces play in mathematics:

• A vector is a column of numbers. This is the way vector spaces appear in quantum mechanics, sections of line bundles, elementary linear algebra, etc.
• A vector is a weighted direction in space. Vector spaces of this kind are often the infinitesimal data of some global structure, such as tangent spaces to manifolds, Lie algebras of Lie groups, and so on.
• A vector is an element of a module over the base ring/field.

What is a module? The basic idea is that a module $V$ is an object equipped with an action by a monoid $A$. (This is closely related to the concept of a representation of a group.)

Let’s take an example that you’re familiar with, vector spaces, and generalize it to get some intuition for working with modules.

Fields $\hookrightarrow$ Rings

If $K$ is a field, then a $K$-vector space (a vector space over $K$) is precisely a $K$-module.

$K$-Vector Spaces $\hookrightarrow$ $K$-modules

For the categorically minded: a familiar example of a module is a vector space $V$ over a field $K$; this is a module over $K$ in the category of abelian groups: every element of $K$ acts on the vector space by scalar multiplication of vectors, and this action respects the addition of vectors.

Tensors play analogous conceptual roles.

• A tensor is a multidimensional array of numbers.
• A tensor is multiple weighted directions in space.
• A tensor is an element of a free monoid over the base ring. (If you don’t know what a free monoid is, don’t worry. I’ll go over them later in this post.)

An explanation of tensors as type constructors is in the postscript for fellow Haskell enthusiasts.

#### Gaining Intuition Through the Classical Approach

The ‘classical’ approach to tensor theory views tensors as multidimensional arrays that are $n$D generalizations of 0D scalars, 1D vectors, and 2D matrices. The ‘components’ of the tensor are the entries of the array, labeled by its indices.

We can then visualize higher-order analogues of matrix rows and columns as fibers and slices. A fiber is obtained by fixing every index but one, while a slice is obtained by fixing all but two indices.
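To make this concrete, here is a small sketch of fibers and slices of an order-3 array, using NumPy (the particular array and shape are my own illustrative choices):

```python
import numpy as np

T = np.arange(24).reshape(2, 3, 4)  # an order-3 tensor with shape (2, 3, 4)

# A fiber: fix every index but one.  Fixing i=0, j=1 leaves a mode-3 fiber.
fiber = T[0, 1, :]       # shape (4,)

# A slice: fix all but two indices.  Fixing i=0 leaves a 3x4 slice.
slice_ = T[0, :, :]      # shape (3, 4)

print(fiber.shape, slice_.shape)
```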

An algebra is a vector space equipped with a bilinear product that distributes over vector addition (no commutativity, associativity, or identity required). In this article, however, I will use the word “algebra” to mean “associative algebra with unit.”

#### Gluing Stuff Together to Make Other Stuff

A monoid is a set with a binary operation (let’s call it multiplication and denote it as infix *). We demand that this operation:

• has a special identity element $1$ such that $1*a=a*1=a$
• is associative: $(a * b) * c = a * (b * c)$

The easiest way to understand a free monoid is to construct one.

1. Pick a set, any set, and call it the set of generators.

• Assign letters of the alphabet to your generators. Say you have fewer than 26 generators and you call them $a, b, c$, etc.

2. Add one more element to the set and call it the unit.

• Reserve the empty string (which I’ll denote "") for your unit element.

3. Define multiplication by the unit to always return the other multiplicand.

• ""a = a, and a"" = a

4. For every pair of generators, create a new element and call it their product.

• When asked for the product of, say, $a$ and $t$, call it $at$. The product of $c$ and $at$ will be $cat$, the product of $ca$ with $t$ is also $cat$.

As you can see, a free monoid generated by an alphabet is equivalent to the set of strings over that alphabet, with the product defined as string concatenation.

Lists are free monoids. Take any finite or infinite set and create ordered lists of its elements. Your unit element is the empty list, and “multiplication” is the concatenation of lists. Strings are just a special type of lists based on a finite set of generators. But you can have lists of integers, 3-D points, or even lists of lists.
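As a minimal sketch of the above (names like `mul` are my own), the free monoid as Python lists, with the empty list as the unit and concatenation as multiplication:

```python
# The free monoid on a set of generators, modeled as Python lists.
unit = []                 # the unit element: the empty list

def mul(x, y):
    """Monoid multiplication: list concatenation."""
    return x + y

a, t = ["a"], ["t"]
at = mul(a, t)            # ['a', 't']
cat = mul(["c"], at)      # ['c', 'a', 't']

# The monoid laws hold:
assert mul(unit, a) == a and mul(a, unit) == a         # identity
assert mul(mul(a, t), ["x"]) == mul(a, mul(t, ["x"]))  # associativity
```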

Let’s construct another one!

We can construct an algebra for any $R$-linear space $V$ by repeatedly gluing $V$ together via the tensor product.

To start, we define the $1$-tensor space on $V$: $\mathcal{T}^1(V) = V$.

We can recursively define the $k$-tensor space on $V$, $\mathcal{T}^k(V)$, by setting $\mathcal{T}^0(V) = \mathbb{K}$ and $\mathcal{T}^k(V) = V \otimes \mathcal{T}^{k-1}(V)$.

Moreover, we can define the tensor algebra on $V$ as $\mathcal{T}(V) = \bigoplus_{n \in \mathbb{N}} \mathcal{T}^n(V)$.

In other words: $\mathcal{T}(V) = \mathbb{K} \oplus V \oplus (V \otimes V) \oplus \dots$

This is the coproduct of all tensor powers of $V$. The tensor algebra is the free monoid on the linear space $V$. As with other free constructions, $\mathcal{T}$ is left adjoint to the forgetful functor that takes an $R$-algebra to its underlying $R$-linear space.

Keep in mind that Lin, the category of linear spaces and linear maps, is a monoidal category (a category $(C, \otimes)$ equipped with a tensor product). We can take the tensor product of linear spaces and of linear maps.

The functoriality of $\mathcal{T}$ means that any linear map $\phi: V \rightarrow W$ extends uniquely to an algebra homomorphism $\mathcal{T}(\phi): \mathcal{T}(V) \rightarrow \mathcal{T}(W)$. Recall that a map $f: A \to B$ between two algebras is an algebra morphism if $f$ is linear and preserves multiplication: $f(a_1a_2) = f(a_1)f(a_2)$.

#### Tensors are the Building Blocks of Multilinear Algebra

The tensor product is an operation combining vector spaces, and tensors are the elements of the resulting vector space.

Let’s start with an example of the tensor product of two vectors $\phi, \psi \in \mathbb{C}^2$:
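Componentwise, the tensor product of two vectors is the flattened outer product, which NumPy computes as the Kronecker product (the particular vectors below are my own illustrative choices):

```python
import numpy as np

phi = np.array([1, 2j])   # a vector in C^2
psi = np.array([3, 4])    # another vector in C^2

# Their tensor product lives in C^2 (x) C^2, which is isomorphic to C^4;
# its components are the products phi_i * psi_j, flattened into one vector.
tensor = np.kron(phi, psi)   # components: 3, 4, 6j, 8j
```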

The tensor product of $A, B$ over $R$ consists of:

• an abelian group $A \otimes_R B$
• a bilinear map $\otimes: A \times B \to A \otimes_R B$
• the universal property, described below. At first glance, the diagram might upset category theorists (not everything in it lives in Vect). If we note that it takes place in a multicategory (since our objects are modules and our morphisms are $k$-linear and bilinear maps), your feelings of angst toward the diagram will hopefully subside.

Given:

• $R$-module $M$
• $R$-bilinear map $f: A \times B \to M$

The universal property means that there exists a unique $R$-linear map $\hat{f}: A \otimes_R B \to M$ such that the above diagram commutes (i.e. every $R$-bilinear map defined on the product $A \times B$ factors through $A \otimes_R B$ uniquely).

The tensor product $\otimes$ of $A$ and $B$ (the tensor space) is any linear space having this universal property. Note that the following diagram is the same diagram as above. In other words, a linear map out of the tensor space corresponds to a bilinear map out of the original linear spaces. Note that a bilinear map can be naturally extended to a multilinear map by currying. The rules for manipulation of tensors arise as an extension of linear algebra to multilinear algebra.

We can think of composition as doing things in series, and tensoring as doing things in parallel. The second bifunctor condition can be sloganized as: doing things in parallel, in series, is the same as doing things in series, in parallel. If you’re an algebraist, this equivalent presentation may be more palatable: $Hom(M,M) \otimes Hom(N,N) \simeq Hom(M \otimes N, M \otimes N)$
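The factorization can be sketched numerically: a bilinear map $f(a,b) = a^T M b$ factors through the tensor product as a linear map $\hat{f}$ on $\mathbb{R}^3 \otimes \mathbb{R}^4 \cong \mathbb{R}^{12}$ (the matrix $M$ and the dimensions are arbitrary choices for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 4))   # encodes the bilinear form f(a, b) = a^T M b

def f(a, b):
    """A bilinear map on R^3 x R^4."""
    return a @ M @ b

def f_hat(t):
    """The induced *linear* map on R^3 (x) R^4, i.e. on R^12."""
    return M.flatten() @ t

a = rng.standard_normal(3)
b = rng.standard_normal(4)

# f factors through the tensor product: f = f_hat o (x)
assert np.isclose(f(a, b), f_hat(np.kron(a, b)))
```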

#### Motivating Example: Application to Quantum Computing

Tensor products are used to describe systems consisting of multiple subsystems.

Each subsystem is a vector in a Hilbert space. A qubit is a 2D, normalized, complex vector in a Hilbert space with basis vectors $|0\rangle$ and $|1\rangle$.

The state space of a quantum computer with $n$ qubits can be represented as the tensor product of the respective state spaces of all the individual qubits.

Suppose we have two systems $I$ and $II$, with corresponding Hilbert spaces $\mathcal{V}_I$ and $\mathcal{V}_{II}$.

If the state vectors $|\phi_{I}\rangle$ and $|\phi_{II}\rangle$ describe the states of the subsystems, the state of the total system is $|\phi_{I}\rangle \otimes |\phi_{II}\rangle$.
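As a small numerical sketch (my own example), the tensor product of two single-qubit states is computed with the Kronecker product:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# State of system I: |0>.  State of system II: (|0> + |1>)/sqrt(2).
phi_I = ket0
phi_II = (ket0 + ket1) / np.sqrt(2)

# The total two-qubit state lives in the 4-dimensional tensor product space.
total = np.kron(phi_I, phi_II)

assert total.shape == (4,)
assert np.isclose(np.linalg.norm(total), 1.0)  # still normalized
```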

#### Contra- and Covariance: A Discussion of Tensor Types

Contra- and covariance refer to how a change of scale in the reference axes affects the components of the object.

In general, most things that we think of as vectors, such as a position or displacement, are contravariant vectors. These are represented as column vectors in linear algebra. Covariant vectors, or one-forms, are represented by row vectors in linear algebra. In the language of quantum mechanics, contravariant vectors are kets $v^i \sim |v\rangle$, while covariant vectors are bras $v_i \sim \langle v|$.

Tensors can be defined as objects in multilinear algebra that can have aspects of both covariance and contravariance.

Tensors are classified according to their type $(p, q)$, where $p$ is the number of contravariant indices, $q$ is the number of covariant indices, and $p + q$ gives the total order of the tensor.

The geometric product is based on simple geometric principles. The first premise is that the product $a \wedge b$ of two contravariant vectors produces an area-like object called a bivector. Multiplying three vectors together, $a \wedge b \wedge c$, produces a volume-like object called a trivector. In general, multiplying $p$ vectors together produces a $p$-vector.

A scalar is a grade-0 object, denoted as $\langle A \rangle_0$, a vector is a grade-1 object $\langle A \rangle_1$, a bivector is a grade-2 object $\langle A \rangle_2$, and in general a $p$-vector is a grade-$p$ object that defines a $p$-volume.

Adding different grade objects creates a multivector of the form $A = \langle A \rangle_0 + … + \langle A \rangle_{p+q}$.

#### Postscript: Tensors as Type Constructors

In Haskell, we construct a free vector space over a type by taking some type a as a basis, and forming k-linear combinations of elements of a, represented as Vect k a.

Given two vector spaces A = Vect k a and B = Vect k b, we can form their tensor product A $\otimes$ B = Vect k (Tensor a b).

The tensor product is the vector space whose basis is given by all expressions of the form a $\otimes$ b. A Tensor is a type constructor on basis types, taking basis types a, b  for vector spaces A, B, and returning a basis type for the tensor product A $\otimes$ B.

In other words, the function tensor is a bilinear function that takes each pair of basis elements a, b in the input to a basis element (a,b) in the output.

The power of this bilinear function is that it is in some sense “the mother of all bilinear functions”. Specifically, you can specify a bilinear function completely by specifying what happens to each pair (a,b) of basis elements.

It follows that any bilinear function f :: Vect k (Either a b) -> Vect k t can be factored as f = f' . tensor, where f' :: Vect k (a,b) -> Vect k t is the linear function having the required action on the basis elements (a,b) of Vect k (a,b).

Hungry for more Haskell? Here’s a post written in literate Haskell using examples from computer vision.

## An Informal Categorical Introduction to Lie’s Theorems

This quick post assumes basic knowledge of Lie algebras and category equivalence. I am new to category theory, and appreciative of constructive feedback.

We commonly study smooth manifolds, e.g. Lie groups, by studying their tangent spaces. Since the product map induces a map from one tangent space to another, we can oftentimes just consider the tangent space to a Lie group at the identity. This tangent space can be equipped with a structure induced by the group structure, called a Lie algebra structure.

#### The Theorems of Lie

The assignment $G \mapsto \text{Lie}(G)$ is functorial. The theorems of Lie, in their modern incarnation, emerge out of the attempt to see how close this functor is to being an equivalence of categories. Note that we are working in Diff.

Lie proved that the category of local real Lie groups is equivalent to the category of finite-dimensional real Lie algebras. This equivalence was extended to the global case by Cartan: the category of finite-dimensional real Lie algebras is equivalent to the category of simply connected Lie groups. Note: we cannot drop the condition that $G$ be simply connected, as, for example, $G = S^1$ and $G = \mathbb{R}$ have the same Lie algebra but are not isomorphic.

#### Lie I: Groups $\to$ Algebras

The assignment $G \mapsto$ Lie$(G)$ induces a functor Lie: LieGrp $\to$ LieAlg, and for each morphism $g: G \to H$ of Lie groups the following diagram commutes. The Lie algebra homomorphism Lie$(g)$ is the first-order infinitesimal of the group homomorphism $g$.

#### Lie II: Do You Even Lift, Lie? Algebras $\to$ Groups

Let $G$ and $H$ be Lie groups with Lie algebras Lie$(G)$ and Lie$(H)$, and let $f:$ Lie$(G) \to$ Lie$(H)$ be a Lie algebra homomorphism. For notational convenience, we denote Lie$(G)$ by the lowercase gothic letter $\mathfrak{g}$. Lie II states that if $G$ is simply connected, there exists a unique morphism $F: G \to H$ lifting $f$, i.e. such that $f =$ Lie$(F)$.

#### Lie-Cartan III: Groups $\leftrightarrow$ Algebras

The functor Lie cannot be inverted because locally isomorphic Lie groups have isomorphic Lie algebras. However, we can invert Lie on the subcategory of simply connected Lie groups. The essential surjectivity of this functor is the third theorem.

For every finite-dimensional real Lie algebra $\mathfrak{g}$ there exists a Lie group $G$ with Lie algebra $\mathfrak{g}$. Note that $G$ is not necessarily unique.

## Introduction to Bundles

Treating spaces as fiber bundles allows us to tame twisted beasts. Most of spin geometry is phrased in the language of fiber bundles, and this post will begin to introduce that language – extremely powerful in its simplicity.

#### Introduction to Fiber Bundles

If we glue lines onto every point $b$ in a circle (or a circle to every point of a line), we get a cylinder. In other words, a cylinder is the product space $S^1 \times [0,1]$. If we glue lines onto every point of a circle, progressively twisting each individual line, we get a Mobius strip. A fiber bundle with fiber $F$ consists of two topological spaces, a total space $E$ and a base space $B$, together with a projection map $\pi: E \to B$ which projects the total space onto its base space.

If you flip the arrow around, the inverse image $\pi^{-1}$ of the projection map sends every $b$ in the base space to its corresponding fiber $\pi^{-1}(b)$ in the total space. Similarly, a neighborhood $N$ of $b$ is sent to $\pi^{-1}(N)$, the union of the fibers over its points. We can locally treat the Mobius strip as a plane, in the same way that we can locally treat a cylinder as a plane.

This property allows us to vastly simplify calculations; it allows us to locally treat twisted spaces like their non-twisted counterparts. We can generalize this property as follows:

For every $b \in B$ there is a neighborhood $N$ of $b$ s.t. the following diagram commutes.  Formally:

A fiber bundle (with structure group $G$ and fiber $F$) over $B$ is a smooth surjection $\pi: E \to B$ together with a local triviality condition: every $b \in B$ has a neighborhood $N$ and a diffeomorphism $\phi: \pi^{-1}(N) \to N \times F$ s.t. the following diagram commutes.

Another notation commonly used to represent the fiber in $E$ over $b$ is $E_b$:

$E_b \equiv \pi^{-1}(b)$

$E = \coprod\limits_{b \in B} E_b = \coprod\limits_{b \in B} \pi^{-1}(b)$

As an aside: How can we formally construct a twisted space?

A Mobius strip := $[0,1] \times [0,1] /\sim$, where the equivalence relation is $(0,t) \sim (1, 1-t)$.

Basically, this equivalence relation gives us gluing instructions.

We must twist the plane an odd number of times so that the points $(0,t)$ and $(1, 1-t)$ are identified.

#### Get Your Group On: Actions and Torsors

If you are unfamiliar with smooth groups and representations, I recommend reading Studying Symmetry for context before venturing onward.

What does it mean for $G \curvearrowright X$ (a group $G$ to “act” on $X$)?

Suppose we write the group operation as multiplication and the identity element as $1$.

A $G$-action on $X$ takes any $g \in G$ and any $x \in X$, and returns $gx \in X$. In other words:

For it to be a $G$-action, we demand that it obeys $1x = x$ and $(gh)x = g(hx)$. These properties may look familiar to the categorically inclined. Every group $G$ is a category with a single object, whose morphisms are the elements of $G$.

An “action” of $G$ on an object in the category $C$ is simply a functor $G \to C$.

• If $C$ is a group, this action is a group homomorphism.
• If $C$ is $\text{Vect}$, this action is a linear representation of $G$.

A $G$-torsor is a special type of $G$-action which satisfies the following: for any 2 elements $x_1,x_2$ in our $G$-torsor, $\exists! g \in G$ that satisfies $gx_1 = x_2$.

This means that for any two elements of our torsor, we can talk about their “ratio” $x_2/x_1$: the unique element $g$ which satisfies the above equation.
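A minimal sketch of this uniqueness, with $\mathbb{Z}/12$ (a clock face) acting on itself by addition as the torsor (my own toy example):

```python
# Z/12 acting on itself by addition is a torsor: for any x1, x2 there is
# exactly one g with g + x1 = x2 (mod 12), namely the "ratio" x2/x1.
n = 12
for x1 in range(n):
    for x2 in range(n):
        solutions = [g for g in range(n) if (g + x1) % n == x2]
        assert solutions == [(x2 - x1) % n]   # unique solution
```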

#### Principal Bundles

A principal bundle is a bundle whose fibers are torsors.

A principal $G$-bundle over $X$ is essentially a bundle of “affine $G$-spaces” over $X$. To be precise, it is a fiber bundle $P \xrightarrow{\pi} X$ together with a continuous right action of $G$ on $P$ which preserves the fibers and acts simply transitively on them. Thus, the fibers are exactly the orbits of $G$.

Let $G$ be a Lie group, and $F$ be our typical fiber. We have a faithful smooth group action:
$\rho: G \times F \to F$, which we can curry into
$\rho: G \to (F \to F)$ and rewrite as
$\rho: G \to \text{Aut}(F)$

In the most general case, $\text{Aut}(F) = \text{Diff}(F)$ (the group of diffeomorphisms), but we will shortly be working with vector bundles, for which $F$ is a vector space and $\text{Aut}(F) = GL(F)$.
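The currying step above is easy to sketch in code; here is a toy version with $\mathbb{Z}$ acting on $\mathbb{R}$ by translation (an illustrative action of my own choosing, not the bundle-specific one):

```python
# Currying an action rho: G x F -> F into rho: G -> (F -> F).
def rho(g, x):
    """Z acting on R by translation (a faithful action)."""
    return x + g

def curried(g):
    """The curried form: each g gives a self-map of F."""
    return lambda x: rho(g, x)

act_by_3 = curried(3)          # an element of Aut(F), i.e. a map F -> F
assert act_by_3(10) == 13
assert curried(0)(7) == 7      # the identity of G acts trivially
```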

Let $(GL(E))_b$ be the set of orthonormal frames of the vector space $E_b$. Note that the set of all orthonormal frames is a right $\text{O}(n)$-torsor.

Given a vector bundle $E \xrightarrow{\pi} B$ with a vector space $F$ as the fiber, we can construct its principal bundle $GL(E) \xrightarrow{\Pi} B$ by mapping each fiber $E_b$ to the bundle of orthonormal frames over that fiber, $GL(E_b)$.

#### A Taste of What’s to Come

“Fiber bundles are themselves merely the culmination of the centuries-long struggle to come fully and properly to grips with the idea of a multiple-valued function.

One of the purposes of cohomology is to specify how the typical fibers and the base may be combined to make a variety of fiber bundles (a way to classify and distinguish the different possibilities) — these are the so-called characteristic classes (such as Chern classes).”

## Studying Symmetry

Group(oid) theory is the study of symmetry. When we deal with objects that appear symmetric, group theory assists in analyzing them. The label of “symmetric” is applied to anything which stays invariant under some collection of transformations.

This can apply to geometric figures: the unit circle $S^1$ is highly symmetric, for it is invariant under any rotation. It also applies to more abstract objects, such as functions: the trigonometric functions $\sin(\theta)$ and $\cos(\theta)$ are invariant when we replace $\theta$ with $\theta+\tau$.

Both functions are periodic with period $\tau$. Periodicity is a type of symmetry. Important points for $\cos(\theta)$ and $\sin(\theta)$ in terms of their period $\tau$:

The subject of Fourier analysis is concerned with representing a wave-like ($\tau$-periodic) function as a combination of simple sine waves (simpler $\tau$-periodic functions). More formally, it decomposes any periodic function into the sum of a set of oscillating functions (sines and cosines, or equivalently, complex exponentials).

Fourier analysis is central to spectroscopy, passive sonar, image processing, x-ray crystallography, and more. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.
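A minimal sketch of that pipeline (transform, manipulate the spectrum, invert) using NumPy's FFT; the signal and cutoff below are my own illustrative choices:

```python
import numpy as np

# A periodic signal: a 3-cycle sine plus a 40-cycle sine over one period.
N = 256
t = np.linspace(0, 1, N, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(signal)   # decompose into complex exponentials
spectrum[20:] = 0                # manipulate: crude low-pass filter
filtered = np.fft.irfft(spectrum)  # reverse the transformation

# Only the low-frequency component survives the filter.
assert np.allclose(filtered, np.sin(2 * np.pi * 3 * t), atol=1e-8)
```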

Fourier analysis is a fusion of analysis, linear algebra, and group theory. [Source]

Furthermore, group theory is the beating heart of physics.

According to Noether’s theorem, every continuous symmetry of a physical system corresponds to a conservation law of the system. This ties into everything from the conservation of electric charge to the first law of thermodynamics.

We care a great deal about group representations, especially of Lie groups, since these representations often point the way to the “possible” physical theories. Examples of the use of groups in physics include the Standard Model and gauge theory.

Modern particle physics would not exist without group theory; in fact, group theory predicted the existence of many elementary particles before they were found experimentally. [Source]

Studying symmetry allows us to discover the laws of the universe.

#### What is a group?

A group is a set together with an associative binary operation, an identity element, and an inverse for every element.

#### Homomorphisms Between Groups

A group homomorphism is a structure-preserving map $\psi$ from a group $(G, \cdot)$ to a group $(H, *)$, which means that $\forall g_1, g_2 \in G$, $\psi(g_1 \cdot g_2) = \psi(g_1) * \psi(g_2)$.

A great example of a group homomorphism is the exponential (with the logarithm as its inverse):

The exponential $\exp: (\mathbb{R}, +) \rightarrow (\mathbb{R}^+, *)$ is a morphism from the reals under addition to the positive reals under multiplication.

The operation on the domain $\mathbb{R}$ is addition, while the operation on the range $\mathbb{R}^+$ is multiplication.

Thus, to show $\exp$ is a homomorphism, I must show that $$\forall x, y \in \mathbb{R},\ \exp(x+y) = \exp(x)*\exp(y)$$ Recall that $\exp(x+y) \equiv e^{x+y}$ and $\exp(x) * \exp(y) = e^xe^y$, so the equation to be verified comes down to the familiar identity $e^{x+y} = e^xe^y$. Thus $\exp$ is a homomorphism!
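The identity is easy to spot-check numerically (an illustrative check, not a proof):

```python
import math

# Check the homomorphism property exp(x + y) == exp(x) * exp(y)
# on a few sample points.
for x, y in [(0.0, 1.0), (2.5, -1.3), (10.0, 0.001)]:
    assert math.isclose(math.exp(x + y), math.exp(x) * math.exp(y))
```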

Note that groups together with group homomorphisms form a category.

#### Interested?

Group Explorer (free) allows you to autogenerate visualizations of groups, homomorphisms, subgroup lattices, and more.

Visual Group Theory: Nathan Carter’s expository text is a beautifully illustrated, gentle introduction to groups, ending in quintics.

Introduction to Tensors and Group Theory for Physicists: A unifying book, best suited to those familiar with linear algebra and basic physics.

#### Linear groups

A group in which the elements are matrices and the group operation is matrix multiplication is called a linear group (or matrix group).

Since in a group every element must be invertible, the most general linear groups are the groups of all invertible matrices of a given size, called the general linear groups $GL(n)$.

Any property of matrices that is preserved under matrix multiplication and inverses can be used to define further linear groups.

Elements of $GL(n)$ with determinant 1 form a subgroup called the special linear group $SL(n)$. Orthogonal matrices ($M^{T}M = I$) form the orthogonal group $O(n)$. The elements of the special orthogonal group $SO(n)$ are both orthogonal and have determinant 1.

Linear groups pop up in virtually any investigation of objects with symmetries, such as molecules in chemistry, particles in physics, and projective spaces in geometry.
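These defining properties are easy to check numerically; here is an illustrative sketch (the particular matrices are my own choices):

```python
import numpy as np

def is_orthogonal(M):
    """Check the defining property M^T M = I of O(n)."""
    return np.allclose(M.T @ M, np.eye(M.shape[0]))

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation: in SO(2)
F = np.array([[1.0, 0.0], [0.0, -1.0]])           # a reflection: in O(2)

assert is_orthogonal(R) and is_orthogonal(F)
assert is_orthogonal(R @ F)                # closed under multiplication
assert is_orthogonal(np.linalg.inv(R))     # closed under inverses
assert np.isclose(np.linalg.det(R), 1.0)   # R is special orthogonal
assert np.isclose(np.linalg.det(F), -1.0)  # F is orthogonal but not special
```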

• Geometry is the study of invariants of the action of a matrix group on a space.
• Particle physics, 4 dimensional topology, and Yang-Mills connections are inter-related theories based heavily on matrix groups, particularly on a certain double cover between two matrix groups (which I’ll cover in Clifford’s Road to Spinors).
• Quantum computing is based on the group of unitary matrices. “A quantum computation, according to one widely used model, is nothing but a sequence of simple unitary matrices. One starts with a small repertoire of some 2×2 and some 4×4, and combines them to generate, with arbitrarily high precision, an approximation to any desired unitary transformation on a huge vector space.” – William Wootters
• Riemannian geometry relies heavily on matrix groups, in part because the isometry group of any compact Riemannian manifold is a matrix group.

#### Circle $\cong$ SO(2)

We will begin with the simplest example of a smooth group: the group of proper rotations in 2 dimensions, isomorphic to SO(2).

The composition of rotations by two angles $\theta_1$ and $\theta_2$ corresponds to a rotation by the angle $\theta_1 + \theta_2$.

$$R(\theta_1)R(\theta_2) := R(\cdot(\theta_1, \theta_2)) = R(\theta_1 + \theta_2)$$

The map $\cdot(\theta_1, \theta_2) = \theta_1 + \theta_2$, which takes two elements of the group as arguments and returns another element of the group ($\cdot: G \times G \to G$), is smooth (continuous and differentiable).

We have another important property: the proper rotations are periodic. Rotations by angles differing by a multiple of $\tau$ are identical.

$$R(\theta + \tau) = R(\theta)$$

As a manifold, this group is the circle $S^1$.
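The two displayed properties can be verified directly on rotation matrices (a small sketch; the angles are arbitrary):

```python
import numpy as np

def R(theta):
    """The rotation of the plane by angle theta, a point of SO(2)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

tau = 2 * np.pi
t1, t2 = 0.4, 1.1

assert np.allclose(R(t1) @ R(t2), R(t1 + t2))  # composition adds angles
assert np.allclose(R(t1 + tau), R(t1))         # periodicity
```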

The continuity and differentiability of the product map has a very profound consequence: the elements of the group are determined by the elements close to the identity (the infinitesimal transformations).

Indeed, if we wish to determine how a rotation $R(\theta)$ depends on $\theta$, we look at how $R$ changes with respect to infinitesimal change of $\theta$.

For groups of linear transformations on a space, we can use the language of differential operators or that of matrix components, according to our taste and convenience. The rotation $R(\theta + d\theta)$ can also be described as the product $R(d\theta)R(\theta)$. [Source]

This rotation group $SO(2)$ can be regarded as a group of transformations acting on its group manifold $S^1$.

Smooth groups are conventionally referred to as Lie groups, after Sophus Lie.

#### Why Study Smooth Groups?

Groups elegantly represent the symmetries of geometric objects. For example, the finitely many symmetries of polygons are captured by the Dihedral groups.

The infinitely many symmetries of circles require more sophistication. Observe that an axis of symmetry exists for every angle in $[0,\tau]$, so there should exist a continuous map from $[0,\tau]$ into any group representing the symmetries of a circle. The pristine algebraic nature of a group fails to capture this notion of continuity, so we must enrich it to obtain the smooth groups.

Motivated by geometry, smooth groups merge the perspectives of algebra and analysis, tying together these normally disparate fields with great efficacy.

Not convinced by their beauty and simplicity alone?

Smooth groups are useful:

• The study of irreducible representations of the smooth group $SO(3)$ led to an explanation of the Periodic Table.
• The study of irreducible representations of the smooth group $SU(2)$ naturally leads to the Dirac equation describing the electron.
• The classification of the unitary representations of the Poincare group earned Wigner the 1963 Nobel prize in physics.
• The Standard Model, which unifies three of the four fundamental forces in nature, is described by the smooth group $SU(3)\times SU(2) \times U(1)$.

#### What is a Smooth Group?

A group object in the category of smooth manifolds is called a smooth group.

Equivalently, a group $G$ is a smooth group if it is a manifold, and the product and inverse operations $\cdot:G\times G\rightarrow G$ and $^{-1}:G \rightarrow G$ are smooth maps.

#### Representation: A Special Kind of Homomorphism

SO(3) describes the rotational symmetries of 3-dimensional Euclidean space. SO(3) acts on $\mathbb{R}^3$ (any element of SO(3) defines a linear transformation of $\mathbb{R}^3$).

We can generalize this to say a group $G$ acts on a vector space $V$ if there is a map $\phi$ from $G$ to linear transformations of $V$ s.t. $\forall v \in V; g,h \in G$

$$\phi(gh)v = \phi(g)\phi(h)v$$

The map $\phi$ is called a representation of $G$ on $V$; it’s really just a special kind of homomorphism.

Recall that the general linear group $GL(V)$ is the group of all invertible linear transformations of $V$. A representation of our smooth group $G$ on $V$ is merely a homomorphism

$$\phi: G \rightarrow GL(V)$$

We can think of a representation as a linear action of a group/algebra on a linear space (since to every $g \in G$ there is an associated linear operator $\phi(g)$ which acts on a linear space $V$).
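As a sketch, here is a representation of the cyclic group $\mathbb{Z}/4$ on $\mathbb{R}^2$, sending the generator to a quarter-turn rotation (my own toy example):

```python
import numpy as np

def phi(g):
    """Representation of Z/4 on R^2: g in {0,1,2,3} acts as rotation
    by g quarter turns."""
    theta = g * np.pi / 2
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# The homomorphism property phi(gh) = phi(g) phi(h),
# where the group operation is addition mod 4.
for g in range(4):
    for h in range(4):
        assert np.allclose(phi((g + h) % 4), phi(g) @ phi(h))
```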

Recall that a smooth group is a group object in the category Diff. When $G$ is a smooth group, we usually restrict our attention to representations of $G$ in $GL(V)$, where $V$ is finite-dimensional and $\phi$ is a smooth map. Since we are operating on a smooth manifold, we can apply the tools of differential geometry.

#### Symmetries of Differential Equations

The group of symmetries of a differential equation in the independent variables $(x, y, \dots)$ and dependent variables $(u, v, \dots)$ is the set of all transformations of those variables that transform solutions to solutions.

Lie proved that if the group of symmetries is solvable then the differential equation can be integrated by quadratures, and he found a method to compute the symmetries.

Example: The heat equation:

#### A Note for the Adventurous

If we’re feeling algebraic, we can consider the set of invertible matrices over an arbitrary unital ring $R$. Thus $GL_n: R \rightarrow GL_n(R)$ becomes a presheaf of groups on $Aff = Ring^{op}$.

Postscript: Adding the unitary group to the visualization, while establishing completeness, disrupts the symmetric aesthetic.

## A Unifying Language

Mathematics is a huge subject.

Category theory is one area of mathematics dedicated to exploring the commonality of structure between different branches of mathematics.

Categorical language allows us to ascend a layer of abstraction, and recognize the obvious underlying principles that guide seemingly unrelated concepts. Generality facilitates connections.

#### What is a category?

A category $C$ consists of:

1. a class of objects $Ob(C)$
2. For every ordered pair of objects $X$ and $Y$, a set $C(X,Y)$ of morphisms with domain $X$ and range $Y$ [$C(X,Y)$ is possibly empty] Note: $f \in C(X,Y)$ $\equiv$ $f : X\rightarrow Y$ $\equiv$ $X \overset{f}{\rightarrow} Y$.
3. For every object an identity morphism $Id_x \in C(X,X)$.
4. A composition law $$C(X,Y) \times C(Y,Z) \rightarrow C(X,Z)$$ $$(g,f) \mapsto f\cdot g$$

The concept of composition follows naturally from the definition of path equivalence in graph theory: two paths with the same source and destination are equal. Additionally, categories must satisfy the laws of associativity and identity.

#### Category Laws

Categories must obey 2 laws:

1. Composition must be associative.
2. Every object $a$ in $C$ has an identity morphism $id_a$, which is the analogue of a loop in graph theory. The identity morphism connects the object $a$ to itself: $$id_a: a \rightarrow a$$

Two paths are equal if the sources and destinations of the paths are equal. With this in mind, we can represent the category laws of identity and associativity with diagrams.

Follow the arrows, and recall that we write function composition backward! If we traverse $f$ then $g$, the convention is to write $g \circ f$.

Examples:

Groups, together with group homomorphisms, form a category (we will discuss these next lecture for those who have not danced with abstract algebra).

Each of the natural numbers is a category.

#### Categories have some nice properties

Any property which can be expressed purely in terms of categories, objects, morphisms, and composition is a categorical property. Some examples:

• Dual: $D$ is $C$ with reversed morphisms
• Initial: $Z \in obj(C)$ s.t. $\forall Y \in obj(C)$, $\#hom(Z,Y) = 1$. In other words: an object is initial if there exists a unique morphism from it to every object in $C$.
• Terminal: $T \in obj(C)$ s.t. $T$ is initial in the dual of $C$
• Functor: Structure preserving mapping between categories

#### Homomorphisms Between Categories: What the func is a functor?

A functor $F$ associates each object and morphism of $C$ with an object and morphism of $D$.

#### A Reflection on the Unification of Familiar Concepts

A few categories you have likely encountered before without recognizing it:

• Set (sets and functions)
• Vec (vector spaces and linear transformations)
• Top (topological spaces and continuous maps)
• Grp (groups and homomorphisms) — we will be discussing these next lecture for those who have not danced with abstract algebra.
• Ab (abelian groups and homomorphisms)
• Cat (categories and functors)

A categorical key to highlight some of the relationships between structure-preserving maps:

Here are some more advanced examples for those who feel groovy and algebraic:

• $R$-Mod (R-modules and homomorphisms)
• $Gr_R$ ($\mathbb{Z}$-graded $R$-modules and graded $R$-module homomorphisms)