There are many ways to define an elliptic curve. To begin: an elliptic curve is an equation of the form:

$y^2$ = polynomial of degree $3$ in $x$ without a multiple root

Sidenote: This is a bit misleading, for the elliptic curve is not actually the affine variety associated to $y^2=x^3+ax+b$. An “elliptic curve” refers to the projective variety associated to the corresponding homogeneous polynomial $y^2z=x^3+axz^2+bz^3$.
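The "no multiple root" condition is easy to test: the cubic $x^3 + ax + b$ has a repeated root exactly when $4a^3 + 27b^2 = 0$. A minimal sketch in Python (function name my own):

```python
def is_elliptic(a, b):
    """y^2 = x^3 + a*x + b defines an elliptic curve iff the cubic
    x^3 + a*x + b has no repeated root, i.e. 4a^3 + 27b^2 != 0."""
    return 4 * a**3 + 27 * b**2 != 0

# y^2 = x^3 - x: the cubic x(x-1)(x+1) has three distinct roots
print(is_elliptic(-1, 0))   # True
# y^2 = x^3 - 3x + 2: here x^3 - 3x + 2 = (x - 1)^2 (x + 2) has a double root
print(is_elliptic(-3, 2))   # False
```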

Elliptic curves are really quite extraordinary. Allow me to boast a few beautiful facts about elliptic curves, before we whip out the language of schemes and make some structural statements.

There are likely inaccuracies in this post, as I am just beginning to learn the basics of algebraic geometry. Constructive criticism is strongly encouraged.

There once was a line…

Let’s look at the affine line over $\mathbb{C}$. This is just the complex line with no distinguished element (i.e., a plane which forgot its origin — it’s 2 real dimensions, or, equivalently, 1 complex dimension).

$\mathbb{A}^1 \simeq \text{Spec } \mathbb{C}[x]$

As we know, the affine line over the field $\mathbb{K}$ is isomorphic to the spectrum of a ring of single-variable polynomials (with coefficients in $\mathbb{K}$). If you aren’t familiar with this isomorphism, I recommend popping over to Spectrum of a Ring. For simplicity, let’s work with the field $\mathbb{C}$, although I’m pretty sure the rest of this post still works for any $\mathbb{K}$.

Is there a reasonable way to take in two points, and ask for a third?

When I define a polynomial, I am simply handing you an indexed collection of coefficients.

A polynomial with two variables, $x, y$ and coefficients $c$, is of the form:

$F(x, y) = \sum\limits_{ij} c_{ij} x^i y^j$

The coefficients of a polynomial form a ring. In other words, the coefficients $c_{ij}$ are members of a coefficient ring $R$. When we say $F$ is over $R$, we mean that $F$ has coefficients in $R$.

Example: The polynomial
$F(x,y) = 7 + 5xy^2 + 2x^3$ can be written out as
$F(x,y) =7x^0y^0 + 5x^1y^2 + 2x^3y^0$ such that
$c_{00} = 7$, $c_{12} = 5$, $c_{30} = 2$, and the rest of $c_{ij} = 0$.

Alright, now let’s change the coefficients; reassign $c_{00} = 4$, $c_{78} = 3$, and all other $c_{ij} = 0$.

Out pops a very different polynomial $P(x,y) = 4 + 3x^7y^8$.
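The "polynomial = indexed collection of coefficients" picture is easy to make concrete. A small sketch in Python (names my own), representing a two-variable polynomial as a dict from exponent pairs $(i, j)$ to coefficients $c_{ij}$:

```python
def evaluate(coeffs, x, y):
    """Evaluate F(x, y) = sum_{ij} c_ij * x^i * y^j, where coeffs is a
    dict mapping exponent pairs (i, j) to coefficients c_ij."""
    return sum(c * x**i * y**j for (i, j), c in coeffs.items())

# F(x, y) = 7 + 5xy^2 + 2x^3 from the example above
F = {(0, 0): 7, (1, 2): 5, (3, 0): 2}
# Reassigning the coefficients gives a different polynomial:
# P(x, y) = 4 + 3x^7 y^8
P = {(0, 0): 4, (7, 8): 3}

print(evaluate(F, 1, 2))  # 7 + 5*1*4 + 2*1 = 29
print(evaluate(P, 1, 1))  # 4 + 3 = 7
```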

In other words, by altering the coefficients $c_{ij}$ of $F(x,y)$ via a ring homomorphism $u: R \to R'$ (taking each coefficient $c_{ij} \in R$ to $u(c_{ij}) \in R'$)…

…we can get from $F(x,y)$ to any other polynomial $F'(x,y)$.

What’s a group-y polynomial?

Intuitively, a polynomial is “group-y” if there’s a constraint on our coefficients that forces the polynomial to satisfy the laws of a commutative group.

Concretely, a group-y polynomial is an operation of the form $F(x,y) = \sum\limits_{ij}c_{ij}x^iy^j$ such that

$F(x,y) = F(y,x)$ commutativity

$F(x, 0) = x = F(0, x)$ identity

$F(F(x,y), z) - F(x, F(y,z)) = 0$ associativity
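As a sanity check, the multiplicative formal group law $F(x,y) = x + y + xy$ satisfies all three constraints. Here is a quick numerical spot-check in Python (a sketch, not a proof):

```python
def F(x, y):
    # the multiplicative formal group law: x + y + xy
    return x + y + x * y

samples = [(2, 3, 5), (-1, 4, 7), (0, 6, -2)]
for x, y, z in samples:
    assert F(x, y) == F(y, x)              # commutativity
    assert F(x, 0) == x and F(0, x) == x   # identity
    assert F(F(x, y), z) == F(x, F(y, z))  # associativity
print("all three constraints hold on the sample points")
```

(Expanding $F(F(x,y),z)$ by hand gives $x+y+z+xy+xz+yz+xyz$, which is symmetric in all three variables, so associativity in fact holds exactly.)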

We can make sure that our polynomial satisfies these constraints! How? We mod out the coefficient ring $\mathbb{Z}[c_{ij}]$ by the ideal $I$ — generated by the relations among the $c_{ij}$ imposed by these constraints.

If you’d like to see the explicit relations, I wrote a cry for help post on Stack Overflow.

The ring of coefficients that results is called the Lazard ring $L = \mathbb{Z}[c_{ij}]/I$.

It’s important to note here that group-y polynomials are morphisms out of the Lazard ring, not elements of the Lazard ring (i.e., an assignment of values to each of the $c_{ij}$ describes a group-y polynomial, but the ring of the $c_{ij}$ itself is just a polynomial ring).

More formally: for any ring $R$ with a group-y polynomial $f(x,y) \in R[[x,y]]$, there is a unique ring morphism $L \to R$ that sends the universal group-y polynomial $\ell$ to $f$.

$\text{Hom}(L, R) \simeq \{F_R\}$

(where $F_R$ denotes a group-y polynomial with coefficients in $R$, and $\text{Hom}(L, R)$ the set of ring morphisms $L \to R$)

This makes sense. If it doesn’t, then scroll up a bit! As we saw above, a change of base ring corresponds to a new group-y polynomial.

Grading the Lazard Ring

As we’ve noted, the Lazard ring $L = \mathbb{Z}[c_{ij}]/I$ is the quotient of a polynomial ring on the $c_{ij}$ by some relations.

Lazard proved that it is also a polynomial ring (no relations) on a different, countably infinite set of generators: $L \simeq \mathbb{Z}[t_1, t_2, t_3, \ldots]$, and this isomorphism respects a natural grading.

Thanks to Alex Mennen for deriving constraints the associativity condition puts on our coefficients; thanks to Qiaochu Yuan and Josh Grochow for kindly explaining some basic details and mechanics of the Lazard ring.

For your ventures ahead…

In this post, I have committed two semantic sins in the name of pedagogy. Namely, sins of oversimplification which I’ll attempt to rectify so that you aren’t hopelessly confused by the literature:

group-y polynomial = “1-dimensional abelian formal group law”

polynomial = “formal power series”

Conventionally, a “polynomial” is a special case of a formal power series: one with only finitely many nonzero coefficients (so it can be evaluated at a point without worrying about convergence).

polynomials $\subset$ formal power series

The polynomial ring $R[x]$ is the ring of all polynomials (in one variable $x$) over a given coefficient ring $R$.

The ring of formal power series $R[[x]]$ is the ring of all formal power series (in one variable $x$) over a given coefficient ring $R$.

polynomial ring $\subset$ ring of formal power series
$R[x] \subset R[[x]]$

There are at least three distinct conceptual roles which vectors and vector spaces play in mathematics:

A vector is a column of numbers. This is the way vector spaces appear in quantum mechanics, sections of line bundles, elementary linear algebra, etc.

A vector is a weighted direction in space. Vector spaces of this kind are often the infinitesimal data of some global structure, such as tangent spaces to manifolds, Lie algebras of Lie groups, and so on.

A vector is an element of a module over the base ring/field.

What is a module? The basic idea is that a module $V$ is an object equipped with an action by a monoid $A$. (This is closely related to the concept of a representation of a group.)
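A minimal instance of this "action" picture (helper names my own): the integers act on any abelian group by repeated addition, making every abelian group a $\mathbb{Z}$-module. A sketch over the abelian group $\mathbb{Z}/12$:

```python
M = 12  # work in the abelian group Z/12 under addition mod 12

def scale(n, v):
    """The action of the ring Z on Z/12: n . v = v + v + ... + v (n times)."""
    return (n * v) % M

# module axioms, spot-checked at sample values:
n, m, v, w = 5, 7, 9, 4
assert scale(n, (v + w) % M) == (scale(n, v) + scale(n, w)) % M  # n.(v+w) = n.v + n.w
assert scale(n + m, v) == (scale(n, v) + scale(m, v)) % M        # (n+m).v = n.v + m.v
assert scale(n * m, v) == scale(n, scale(m, v))                  # (nm).v = n.(m.v)
assert scale(1, v) == v                                          # 1.v = v
print("Z/12 carries a Z-module structure")
```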

Let’s take an example that you’re familiar with, vector spaces, and generalize it to get some intuition for working with modules.

Fields $\hookrightarrow$ Rings

If $K$ is a field, then a $K$-vector space (a vector space over $K$) $\equiv$ $K$-module.

$K$-Vector Spaces $\hookrightarrow$ $K$-modules

For the categorically minded: a familiar example of a module is a vector space $V$ over a field $K$; this is a module over $K$ in the category of abelian groups: every element of $K$ acts on the vector space by a multiplication of vectors, and this action respects the addition of vectors.

Tensors play analogous conceptual roles.

A tensor is a multidimensional array of numbers.

A tensor is multiple weighted directions in space.

A tensor is an element of a free monoid over the base ring. (If you don’t know what a free monoid is, don’t worry. I’ll go over them later in this post.)

Group(oid) theory is the study of symmetry. When we are dealing with objects that appear symmetric, group theory assists with analysis of these objects. The label of “symmetric” is applied to anything which stays invariant under some transformations.

This can apply to geometric figures (the unit circle, $S^1$, is highly symmetric, for it is invariant under any rotation). It also applies to more abstract objects, such as functions: the trigonometric functions $\sin(\theta)$ and $\cos(\theta)$ are invariant when we replace $\theta$ with $\theta+\tau$.

Both functions are periodic with period $\tau$. Periodicity is a type of symmetry: the important points of $\cos(\theta)$ and $\sin(\theta)$ (their zeros, maxima, and minima) recur at intervals determined by their period $\tau$.

The subject of Fourier analysis is concerned with representing a wave-like ($\tau$-periodic) function as a combination of simple sine waves (simpler $\tau$-periodic functions). More formally, it decomposes any periodic function into the sum of a set of oscillating functions (sines and cosines, or equivalently, complex exponentials).
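A minimal discrete illustration of this decomposition, using only Python's standard library (names my own): sample one period of a signal and project it onto complex exponentials.

```python
import cmath
import math

def dft_coefficient(samples, k):
    """k-th discrete Fourier coefficient of one sampled period: the 'amount'
    of the oscillation e^{i*k*theta} present in the signal."""
    N = len(samples)
    return sum(s * cmath.exp(-2j * cmath.pi * k * n / N)
               for n, s in enumerate(samples)) / N

N = 64
# sample one period of f(theta) = 3*cos(theta), a tau-periodic function
samples = [3 * math.cos(2 * math.pi * n / N) for n in range(N)]

# Since cos(theta) = (e^{i theta} + e^{-i theta})/2, the coefficients at
# k = 1 and k = -1 should each have magnitude 3/2, and k = 2 should vanish.
print(abs(dft_coefficient(samples, 1)))   # ~1.5
print(abs(dft_coefficient(samples, 2)))   # ~0.0
```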

Fourier analysis is central to spectroscopy, passive sonar, image processing, x-ray crystallography, and more. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.

Fourier analysis is a fusion of analysis, linear algebra, and group theory. [Source]

Furthermore, group theory is the beating heart of physics.

We care a great deal about group representations, especially of Lie groups, since these representations often point the way to the “possible” physical theories. Examples of the use of groups in physics include the Standard Model and gauge theory.

Modern particle physics would not exist without group theory; in fact, group theory predicted the existence of many elementary particles before they were found experimentally. [Source]

Studying symmetry allows us to discover the laws of the universe.

What is a group?

More formally, a group is a set $G$ together with a binary operation $\cdot: G \times G \to G$ that is associative, has an identity element $e$, and assigns to every element an inverse.

Homomorphisms Between Groups

A group homomorphism is a structure-preserving map $\psi$ from a group $(G, \cdot)$ to a group $(H, *)$, satisfying:

$\psi$ preserves the identity, which means $\psi(e_G) = e_H$

$\psi$ preserves the operation, which means $\forall g_1, g_2 \in G$, $\psi(g_1 \cdot g_2) = \psi(g_1) * \psi(g_2)$

The exponential map $\exp: (\mathbb{R}, +) \rightarrow (\mathbb{R}^+, *)$ is a morphism from the reals under addition to the positive reals under multiplication.

The operation on the domain $\mathbb{R}$ is addition, while the operation on the range $\mathbb{R}^+$ is multiplication.

Thus, to show $\exp$ is a homomorphism, I must show that $$\forall x, y \in \mathbb{R}, \exp(x+y) = \exp(x)\exp(y)$$ Recall that $\exp(x+y) \equiv e^{x+y}$, and $\exp(x)\exp(y) = e^xe^y$, so the equation to be verified comes down to the familiar identity $e^{x+y} = e^xe^y$; thus $\exp$ is a homomorphism!

Note that groups together with group homomorphisms form a category.

Interested?

Group Explorer (free) allows you to autogenerate visualizations of groups, homomorphisms, subgroup lattices, and more.

Visual Group Theory: Nathan Carter’s expository text is a beautifully illustrated, gentle introduction to groups, ending in quintics.

A group in which the objects are matrices and the group operation is matrix multiplication is called a linear group (or matrix group).

Since in a group every element must be invertible, the most general linear groups are the groups of all invertible matrices of a given size, called the general linear groups $GL(n)$.

Any property of matrices that is preserved under matrix multiplication and inverses can be used to define further linear groups.

Elements of $GL(n)$ with determinant 1 form a subgroup called the special linear group $SL(n)$. Orthogonal matrices ($M^{T}M = I$) form the orthogonal group $O(n)$. The elements of the special orthogonal group $SO(n)$ are both orthogonal and have determinant 1.
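These defining properties are mechanical to check. A sketch in pure Python for the $2 \times 2$ case (function name my own):

```python
import math

def is_special_orthogonal(M, tol=1e-9):
    """Check M^T M = I and det M = 1 for a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = M
    # the columns of M are orthonormal  <=>  M^T M = I
    orthonormal = (abs(a*a + c*c - 1) < tol and
                   abs(b*b + d*d - 1) < tol and
                   abs(a*b + c*d) < tol)
    det_one = abs(a*d - b*c - 1) < tol
    return orthonormal and det_one

theta = 0.7
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]   # a rotation: lies in SO(2)
S = [[1, 0], [0, -1]]                        # a reflection: det = -1, in O(2) but not SO(2)
print(is_special_orthogonal(R))  # True
print(is_special_orthogonal(S))  # False
```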

Linear groups pop up in virtually any investigation of objects with symmetries, such as molecules in chemistry, particles in physics, and projective spaces in geometry.

Geometry is the study of invariants of the action of a matrix group on a space.

Particle physics, 4 dimensional topology, and Yang-Mills connections are inter-related theories based heavily on matrix groups, particularly on a certain double cover between two matrix groups (which I’ll cover in Clifford’s Road to Spinors).

Quantum computing is based on the group of unitary matrices. “A quantum computation, according to one widely used model, is nothing but a sequence of simple unitary matrices. One starts with a small repertoire of some 2×2 and some 4×4, and combines them to generate, with arbitrarily high precision, an approximation to any desired unitary transformation on a huge vector space.” – William Wootters

Riemannian geometry relies heavily on matrix groups, in part because the isometry group of any compact Riemannian manifold is a matrix group.

Circle $\cong$ SO(2)

We will begin with the simplest example of a smooth group: the group of proper rotations in 2 dimensions, isomorphic to SO(2).

The composition of rotations by two angles $\theta_1$ and $\theta_2$ corresponds to a rotation by the angle $\theta_1 + \theta_2$.

The product map $\cdot: G \times G \to G$, given by $\cdot(\theta_1, \theta_2) = \theta_1 + \theta_2$, takes two elements of the group as arguments and returns another element of the group; moreover, it is smooth (continuous and differentiable).

We have another important property: the proper rotations are periodic. Rotations by angles differing by multiples of $\tau$ are identical.

$$R(\theta + \tau) = R(\theta)$$

As a manifold, this group is the circle $S^1$.
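Both the composition law and the periodicity can be verified directly with $2 \times 2$ rotation matrices; here is a quick numerical sketch (helper names my own):

```python
import math

def R(theta):
    """2x2 rotation matrix for angle theta, an element of SO(2)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# composition: R(t1) R(t2) = R(t1 + t2)
t1, t2 = 0.9, 1.7
prod, expected = matmul(R(t1), R(t2)), R(t1 + t2)
assert all(math.isclose(prod[i][j], expected[i][j], abs_tol=1e-12)
           for i in range(2) for j in range(2))

# periodicity: R(theta + tau) = R(theta), with tau = 2*pi
tau = 2 * math.pi
assert all(math.isclose(R(0.3 + tau)[i][j], R(0.3)[i][j], abs_tol=1e-12)
           for i in range(2) for j in range(2))
print("R(t1) R(t2) = R(t1 + t2) and R(theta + tau) = R(theta)")
```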

The continuity and differentiability of the product map has a very profound consequence: the elements of the group are determined by the elements close to the identity (the infinitesimal transformations).

Indeed, if we wish to determine how a rotation $R(\theta)$ depends on $\theta$, we look at how $R$ changes with respect to infinitesimal change of $\theta$.

For groups of linear transformation on a space, we can use the language of differential operators or that of matrix components, according to our taste and convenience.

This rotation group $SO(2)$ can be regarded as a group of transformations acting on its group manifold $S^1$.

Smooth groups are conventionally referred to as Lie groups, after Sophus Lie.

Why Study Smooth Groups?

Groups elegantly represent the symmetries of geometric objects. For example, the finitely many symmetries of polygons are captured by the Dihedral groups.

The infinitely many symmetries of circles require more sophistication. Observe that an axis of symmetry exists for every angle in $[0,\tau]$, so there should exist a continuous map from $[0,\tau]$ into any group representing the symmetries of a circle. The pristine algebraic nature of a group fails to capture this notion of continuity, so we must enrich it to obtain the smooth groups.

Motivated by geometry, smooth groups merge the perspectives of algebra and analysis, tying together these normally disparate fields with great efficacy.

Not convinced by their beauty and simplicity alone?

Smooth groups are useful:

The study of irreducible representations of the smooth group $SO(3)$ led to an explanation of the Periodic Table.

The study of irreducible representations of the smooth group $SU(2)$ naturally leads to the Dirac equation describing the electron.

The classification of the unitary representations of the Poincaré group earned Wigner the 1963 Nobel prize in physics.

The Standard Model, which unifies 3 of the 4 fundamental forces in nature, is described by the smooth group $SU(3)\times SU(2) \times U(1)$.

What is a Smooth Group?

A group object in the category of smooth manifolds is called a smooth group.

Equivalently, a group $G$ is a smooth group if it is a manifold, and the product and inverse operations $\cdot:G\times G\rightarrow G$ and $^{-1}:G \rightarrow G$ are smooth maps.

Representation: A Special Kind of Homomorphism

SO(3) describes the rotational symmetries of 3-dimensional Euclidean space. SO(3) acts on $\mathbb{R}^3$ (any element of SO(3) defines a linear transformation of $\mathbb{R}^3$).

We can generalize this to say a group $G$ acts on a vector space $V$ if there is a map $\phi$ from $G$ to linear transformations of $V$ s.t. $\forall v \in V; g,h \in G$

$$\phi(gh)v = \phi(g)\phi(h)v$$

The map $\phi$ is called a representation of $G$ on $V$; it’s really just a special kind of homomorphism.

Recall that the general linear group $GL(V)$ is the group of all invertible linear transformations of $V$. A representation of our smooth group $G$ on $V$ is merely a homomorphism

$$\phi: G \rightarrow GL(V)$$

We can think of a representation as a linear action of a group/algebra on a linear space (since to every $g \in G$ there is an associated linear operator $\phi(g)$ which acts on a linear space $V$).
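A concrete sketch of this (my own construction, about the smallest nontrivial example there is): a representation of the cyclic group $\mathbb{Z}/4$ on $\mathbb{R}^2$, sending the generator to rotation by a quarter turn.

```python
# phi: Z/4 -> GL(2, R), sending k to rotation by k quarter-turns.
# Quarter-turn rotation matrices have integer entries, so the check is exact.
QUARTER = [[0, -1], [1, 0]]
IDENTITY = [[1, 0], [0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def phi(k):
    """The representation: phi(k) = (quarter turn)^k."""
    M = IDENTITY
    for _ in range(k % 4):
        M = matmul(M, QUARTER)
    return M

# the homomorphism property phi(g + h) = phi(g) phi(h), checked on all of Z/4
for g in range(4):
    for h in range(4):
        assert phi((g + h) % 4) == matmul(phi(g), phi(h))
print("phi is a representation of Z/4 on R^2")
```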

Recall that a smooth group is a group object in Diff, the category of smooth manifolds. When $G$ is a smooth group, we usually restrict our attention to representations of $G$ in $GL(V)$, where $V$ is finite dimensional and $\phi$ is a smooth map. Since we are operating on a smooth manifold, we can apply the tools of differential geometry.

Symmetries of Differential Equations

The group of symmetries of a differential equation in independent variables $(x, y, \ldots)$ and dependent variables $(u, v, \ldots)$ is the set of all transformations of these variables that transform solutions to solutions.

Lie proved that if the group of symmetries is solvable then the differential equation can be integrated by quadratures, and he found a method to compute the symmetries. For more on this, the keyword is “heat equation.”

A Note for the Adventurous

If we’re feeling algebraic, we can consider the set of invertible matrices over an arbitrary unital ring $R$. Thus $GL_n: R \mapsto GL_n(R)$ becomes a presheaf of groups on $Aff = Ring^{op}$.
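To make this concrete: evaluating $GL_2$ at a finite ring like $\mathbb{Z}/p$ gives an honest finite group. A brute-force sketch (my own code) counts its elements and agrees with the standard formula $|GL_2(\mathbb{F}_p)| = (p^2 - 1)(p^2 - p)$:

```python
from itertools import product

def gl2_order(p):
    """Count invertible 2x2 matrices over Z/p (p prime): GL_2 evaluated at Z/p.
    A 2x2 matrix over a field is invertible iff its determinant is nonzero."""
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if (a * d - b * c) % p != 0)

print(gl2_order(2))  # 6, matching (4 - 1)(4 - 2)
print(gl2_order(3))  # 48, matching (9 - 1)(9 - 3)
```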

Postscript: Adding the unitary group to the visualization, while establishing completeness, disrupts the symmetric aesthetic.