<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://rin.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://rin.io/" rel="alternate" type="text/html" /><updated>2026-04-04T07:27:43+00:00</updated><id>https://rin.io/feed.xml</id><title type="html">Good Fibrations</title><subtitle>math is art</subtitle><entry><title type="html">Using Automorphism Groups of Curves to Control the Slopes of their Jacobians</title><link href="https://rin.io/gausssums/" rel="alternate" type="text/html" title="Using Automorphism Groups of Curves to Control the Slopes of their Jacobians" /><published>2025-05-23T00:00:00+00:00</published><updated>2025-05-23T00:00:00+00:00</updated><id>https://rin.io/gausssums</id><content type="html" xml:base="https://rin.io/gausssums/"><![CDATA[<p>I’ve felt for a long time that automorphisms of curves should control or at least exert serious force on the slopes on their Jacobians. Symmetry forces height, as I’ve written about previously in <a href="https://rin.io/every-height/">Models of Formal Groups Laws of Every Height</a>, and <a href="https://rin.io/height-is-symmetry/">Endomorphisms Directly Control Slope</a>.</p>

<p>In this post, I conjecture that the Frobenius eigenvalues of Artin-Schreier-Witt Curves (\( \mathbb{Z}/p^k \)-covers) are Gauss sums, generalizing an old and classic theorem of Davenport and Hasse (fuck nazis though). I explain and expand (i.e., make the paper type check and fill in details) a gorgeous reproof of this theorem using local systems by Robert Coleman.</p>

<p>This playful note is a step toward exploring this force, and outlines a conjectural approach for further exploiting it. Enjoy and click. <a href="/pdfs/Gauss_sums.pdf">Read about my Gauss Sums :P</a></p>

<object data="https://rin.io/pdfs/Gauss_sums.pdf" type="application/pdf" width="700px" height="700px">
    <embed src="https://rin.io/pdfs/Gauss_sums.pdf" type="application/pdf" />
    <p>This browser does not support PDFs. Please download the PDF to view it: <a href="https://rin.io/pdfs/Gauss_sums.pdf">Download PDF</a>.</p>
</object>]]></content><author><name></name></author><category term="math" /><summary type="html"><![CDATA[I’ve felt for a long time that automorphisms of curves should control or at least exert serious force on the slopes on their Jacobians. Symmetry forces height, as I’ve written about previously in Models of Formal Groups Laws of Every Height, and Endomorphisms Directly Control Slope.]]></summary></entry><entry><title type="html">Cut and Paste Invariants and Duality: A Motivating Example Via zeta(-1), zeta(2) and SL_2Z</title><link href="https://rin.io/SL2Z/" rel="alternate" type="text/html" title="Cut and Paste Invariants and Duality: A Motivating Example Via zeta(-1), zeta(2) and SL_2Z" /><published>2025-02-14T00:00:00+00:00</published><updated>2025-02-14T00:00:00+00:00</updated><id>https://rin.io/SL2Z</id><content type="html" xml:base="https://rin.io/SL2Z/"><![CDATA[<p>Here’s something enticing and strange: there are two “cut and paste” invariants of the same group which are equal to dual zeta values!</p>
<ul>
  <li>the euler characteristic \( \chi(SL_2(\mathbb{Z})) = \zeta(-1), \) and</li>
  <li>the tamagawa measure \( \mu(SL_2(\mathbb{Z})\backslash SL_2(\mathbb{R})) = \zeta(2).\)</li>
</ul>

<p>I repeat, these a priori unrelated invariants of our beloved example case \( SL_2(\mathbb{Z}) \) are dual zeta values! Is it a general pattern? Let’s define and explain these results to spiritually prepare ourselves for the general picture!</p>

<h1 id="euler-characteristics-of-arithmetic-groups--star-of-the-show--chisl_2mathbbz--zeta-1-">Euler Characteristics of Arithmetic Groups – star of the show: \( \chi(SL_2(\mathbb{Z})) = \zeta(-1) \)</h1>

<p>An arithmetic group is the integer points of an algebraic group (i.e., a group object in the category of varieties). For example, \( SL_2(\mathbb{Z}) \) is the arithmetic group associated to the algebraic group \( \mathrm{SL}_2 \). In general, \( \chi(G) \) for arithmetic groups <em>is related to L-functions and zeta functions.</em></p>

<p>The euler characteristic is defined for a torsion free group \( G' \) (of finite homological type) as follows:  \(\chi(G') = \sum_i (-1)^i \operatorname{rank} H^i(G', \mathbb{Z}).\)  This is supposed to generalize the notion of euler characteristic for surfaces of polyhedra, where \( \chi(P) = V - E + F \); the analogy is that the alternating sum over cohomological degrees plays the role of the alternating sum over cells: \( V \) \( 0 \)-cells, \( E \) \( 1 \)-cells, and \( F \) \( 2 \)-cells.</p>
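As a quick sanity check on the sign convention (this snippet is mine, not part of the original argument), here is a minimal script verifying \( V - E + F = 2 \) for a few convex polyhedra, all homeomorphic to the sphere:

```python
# Sanity check of chi(P) = V - E + F on familiar polyhedra.
# Each is homeomorphic to S^2, so chi should be 2 in every case.
polyhedra = {
    "tetrahedron": (4, 6, 4),    # (V, E, F)
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
}
for name, (v, e, f) in polyhedra.items():
    assert v - e + f == 2, name
```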

<p>For example:</p>
<ul>
  <li>if we consider \( F_n \), the free group of rank \( n \geq 0 \), then \( BF_n \) is a wedge of \( n \) circles, with one vertex and \( n \) \( 1 \)-cells; therefore \( \chi(F_n) = 1-n \),</li>
  <li>if we consider the free abelian group of rank \( n \geq 1 \), then \( B\mathbb{Z}^n \) is the \( n \)-torus \( (S^1)^n \), which has euler characteristic \( \chi(S^1)^n = 0. \)</li>
</ul>

<p>The euler characteristic is then defined for any group \( G \), in terms of a finite-index torsion free subgroup \( G' \leq G \), as 
\(\chi(G) = [G: G']^{-1}\chi(G')\)</p>

<p>Note: This doesn’t depend on the choice of \( G' \): consider two different torsion free subgroups \( \Gamma' \) and \( \Gamma''\) of \( \Gamma \), and define \( \Gamma_0 := \Gamma' \cap \Gamma'' \), then</p>

\[\frac{\chi(\Gamma')}{[\Gamma: \Gamma']} = \frac{\chi(\Gamma_0)/[\Gamma': \Gamma_0]}{[\Gamma: \Gamma']} = \frac{\chi(\Gamma_0)}{[\Gamma : \Gamma_0]} = \frac{\chi(\Gamma'')}{[\Gamma:\Gamma'']}\]

<p>It’s time to stare at the Gauss Bonnet Thm: Let \( Y \) be a closed Riemannian manifold; then there’s a canonical measure \( \mu \) on \( Y \) such that \(\chi(Y) = \mu(Y).\)</p>

<p>There’s a version for orbifolds: take \( Y = BG\), and let \( X \) be its universal cover; then there’s a canonical measure such that</p>

\[\chi(Y)= \mu(Y)\]

<p>Further, \( \chi(BG) = \chi(G). \)</p>

<p>Let’s look at \( SL_2(\mathbb{Z}) \). A free group \( F_2 \) of rank 2 lives inside of \( SL_2(\mathbb{Z}) \) as a torsion free subgroup via the ping pong lemma; one can take the commutator subgroup, which is free of rank 2 and has index 12 (the abelianization of \( SL_2(\mathbb{Z}) \) is \( \mathbb{Z}/12 \)).</p>

<p>Thus, \(\chi(SL_2(\mathbb{Z})) = [SL_2(\mathbb{Z}): F_2]^{-1} \chi(F_2) = \frac{1}{12} (1-2) = -\frac{1}{12} = \zeta(-1).\)</p>
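The arithmetic of this last step can be sketched in exact rational arithmetic (taking the index 12 as given, per the discussion above):

```python
from fractions import Fraction

def chi_free_group(n):
    """chi(F_n) = 1 - n, since BF_n is a wedge of n circles."""
    return Fraction(1 - n)

# chi(SL_2(Z)) = chi(F_2) / [SL_2(Z) : F_2], with index 12.
index = 12
chi_sl2z = chi_free_group(2) / index
assert chi_sl2z == Fraction(-1, 12)   # the value of zeta(-1)
```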

<h1 id="the-tamagawa-number-of-arithmetic-groups--star-of-the-show--mu_inftysl_2mathbbzbackslash-sl_2mathbbq--zeta2-">The Tamagawa Number of Arithmetic Groups – star of the show: \( \mu_\infty(SL_2(\mathbb{Z})\backslash SL_2(\mathbb{R})) = \zeta(2) \)</h1>

<p>Given an algebraic group \( G \) over \( \mathbb{Q} \), it comes equipped with a canonical measure on its adelic points \( G(\mathbb{A}) \). All other measures are a constant times this measure. It is defined by considering the measure associated to a differential form.</p>

<p>Given an algebraic group \( G \), pick any \( \omega \) which is a top degree invariant differential form on \( G \) defined over \( \mathbb{Q}. \) Then, define the measure \( \mu \) place-wise (a la the adelic perspective of Tate) on each \( G(\mathbb{Q}_v) \) by \( \mu_v := |\omega|_v \), where the local measures on \( (\mathbb{Q}_v, +) \) are normalized such that:</p>
<ul>
  <li>for finite places \( v = p \), \( \mu_p(\mathbb{Z}_p)=1 \),</li>
  <li>and at the infinite place, \(\mu_\infty([0,1]) = 1.\)</li>
</ul>

<p>You might be like – we chose \( \omega \) there, seems noncanonical to me. Well it is noncanonical if we consider it individually over each place, but all together, no matter which choice of \( \omega \) we take, we will get the same answer: by the product formula for global fields, any nonzero \( f \in \mathbb{Q}^\times \) satisfies \( \prod_v |f|_v = 1, \) so rescaling \( \omega \) by \( f \) leaves the product measure unchanged.</p>

<h2 id="non-archimedian-case-family-friendly--prod_p-mu_psl_2mathbbz_p--zeta2-1-">Non-archimedian case (family friendly) \( \prod_p \mu_p(SL_2(\mathbb{Z}_p)) = \zeta(2)^{-1} \)</h2>

<p>In this section we wish to show that \(\prod_p \mu_p(SL_2(\mathbb{Z}_p)) = \prod_{p} (1-p^{-2}) = \zeta(2)^{-1}.\)</p>

<p>We may compute the Tamagawa measure for \( SL_2(\mathbb{Q}_p) \) using the Iwasawa decomposition. We get that the Haar measure on \( SL_2(\mathbb{Q}_p) \) is \( |x^{-1}|_p\,dx\,dy\,dz, \) where \( |x^{-1}|_p = p^{v_p(x)} \). A nice reference on this is the <a href="https://www-users.cse.umn.edu/~garrett/m/v/volumes.pdf">notes of Paul Garrett</a>.</p>

<p>Now consider the reduction map:</p>

\[f: SL_2(\mathbb{Z}_p) \to SL_2(\mathbb{F}_p)\]

<p>\(\mu_p(SL_2(\mathbb{Z}_p))= \# SL_2(\mathbb{F}_p)\,\mu_p(\text{ker } f)\)  and \(\mu_p(\text{ker } f) = \int_{x,y,z} |x^{-1}|_p \,dx\,dy\,dz = p^{-3}.\)</p>

<p>Lemma: \( \# GL_2(\mathbb{F}_p) = (p^2-1)(p^2-p). \)</p>

<p>Proof: This is because when forming a matrix in \( GL_2 \), the first column \( r_1 \) of the matrix can be anything but the \( 0 \) vector (giving us \( p^2 - 1\) choices), and the second column \( r_2 \) can be anything other than a multiple of the first (excluding the \( p \) multiples \( r_2 = cr_1 \) leaves \( p^2 - p \) choices).</p>

<p>Lemma: \( \# SL_2(\mathbb{F}_p) = \frac{\# GL_2(\mathbb{F}_p)}{p-1} = p^3 - p = p^3(1-p^{-2}), \) since \( SL_2 \) is the kernel of the surjective determinant map \( \det: GL_2(\mathbb{F}_p) \to \mathbb{F}_p^\times. \)</p>
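Both counting lemmas are easy to verify by brute force for small primes; here is a sketch (my own check, not part of the proof) that simply enumerates all \( p^4 \) matrices:

```python
from itertools import product

def matrix_counts(p):
    """Brute-force |GL_2(F_p)| and |SL_2(F_p)| over all p^4 two-by-two matrices."""
    gl = sl = 0
    for a, b, c, d in product(range(p), repeat=4):
        det = (a * d - b * c) % p
        if det != 0:
            gl += 1          # invertible: counts toward GL_2
        if det == 1:
            sl += 1          # determinant one: counts toward SL_2
    return gl, sl

for p in (2, 3, 5, 7):
    gl, sl = matrix_counts(p)
    assert gl == (p**2 - 1) * (p**2 - p)
    assert sl == p**3 - p    # = |GL_2(F_p)| / (p - 1)
```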

<p>Thus, we have proven \( \mu_p(SL_2(\mathbb{Z}_p)) = p^{-3} \cdot p^3(1-p^{-2}) = 1-p^{-2}, \) and we may conclude:</p>

\[\prod_{p} \mu_p(SL_2(\mathbb{Z}_p)) = \prod_{p} (1-p^{-2}) = \zeta(2)^{-1}.\]
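We can watch this Euler product converge numerically (a sanity-check sketch of my own, truncating the product at primes below \( 10^4 \)):

```python
import math

def primes_below(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, ok in enumerate(sieve) if ok]

# Partial Euler product prod_p (1 - p^{-2}), which should approach
# 1/zeta(2) = 6/pi^2 as the cutoff grows.
prod = 1.0
for p in primes_below(10_000):
    prod *= 1 - p**-2

assert abs(prod - 6 / math.pi**2) < 1e-4
```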

<h1 id="lets-get-archimedian--mu_inftysl_2mathbbzbackslash-sl_2mathbbq--zeta2---to-be-continued">Let’s get Archimedian \( \mu_\infty(SL_2(\mathbb{Z})\backslash SL_2(\mathbb{Q})) = \zeta(2) \)  (to be continued…)</h1>]]></content><author><name></name></author><category term="math" /><summary type="html"><![CDATA[Here’s something enticing and strange: there are two “cut and paste” invariants of the same group which are equal to dual zeta values! the euler characteristic \( \chi(SL_2(\mathbb{Z})) = \zeta(-1), \) and the tamagawa measure of \( \mu(SL_2(\mathbb{Z})\backslash SL_2(\mathbb{R}) = \zeta(2)\).]]></summary></entry><entry><title type="html">The Bernoulli Numbers Come from a Shift Operator</title><link href="https://rin.io/faulhaber-shift/" rel="alternate" type="text/html" title="The Bernoulli Numbers Come from a Shift Operator" /><published>2025-01-28T00:00:00+00:00</published><updated>2025-01-28T00:00:00+00:00</updated><id>https://rin.io/faulhaber-shift</id><content type="html" xml:base="https://rin.io/faulhaber-shift/"><![CDATA[<p><img src="/images/german-tales.jpg" alt="" /></p>

<p>The Bernoulli numbers \( B_k \) are defined in terms of the following generating series.</p>

\[\frac{x}{e^x-1} = \sum_{k \geq 0} B_k \frac{x^k}{k!}\]

<p>But why? Where did this come from? Well, the mathematicians of the time were contemplating the following sorts of patterns:</p>

\[\begin{aligned}  1+2+\cdots+n &amp;= \frac{n(n+1)}{2}  \\ 1^2+2^2+\cdots+n^2 &amp; = \frac{n(n+1)(2n+1)}{6}  \\ 1^3+2^3+\cdots+n^3 &amp; = \frac{n^2(n+1)^2}{4}  \\  1^s + 2^s +  \cdots + n^s &amp; = \text{ }?? \end{aligned}\]

<p>The general formula requires \( B_k \) :)</p>

<p>To give you an idea of how hard this is, I don’t even know of a proof of the \( s = 2 \) case that doesn’t involve guessing the formula and then showing it inductively (other than the Bernoulli version I’m about to show).</p>

<p>Today I’m giving a modern explanation of Faulhaber’s trick. This is based on <a href="https://math.ucr.edu/home/baez/qg-winter2004/bernoulli.pdf">notes of John Baez</a>.</p>

<p>Consider \( \mathcal{E} \), the space of entire functions on \( \mathbb{C} \). Via Taylor expansion at \( 0 \), we can consider \( \mathcal{E} \subset \mathbb{C}[[z]]. \)</p>

<p>Next, consider the finite difference operator</p>

\[\Delta(f(z)) = f(z+1) - f(z) .\]

<p>Discrete Fundamental Theorem of Calculus: If \( \Delta F = f \), then \(\sum_{i = 0}^{n-1} f(i) = F(n) - F(0)\)</p>
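The discrete fundamental theorem of calculus is just telescoping; here is a tiny check of my own with \( f(z) = z \) and the antidifference \( F(z) = z(z-1)/2 \):

```python
# Discrete FTC: if Delta F = F(z+1) - F(z) = f(z), then
# sum_{i=0}^{n-1} f(i) telescopes to F(n) - F(0).
f = lambda z: z
F = lambda z: z * (z - 1) // 2   # Delta F(z) = z(z+1)/2 - z(z-1)/2 = z

for n in range(1, 50):
    assert sum(f(i) for i in range(n)) == F(n) - F(0)
```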

<p>For example, here’s a <em>big clue</em>: If \( f(z) = z^s \), then if \( \exists F \) such that \( \Delta F = f \), then 
\(\sum^{n-1}_{i=0} i^s = F(n) - F(0).\)</p>

<p>Let’s set some groundwork before proceeding, we consider an operator (called the ‘‘annihilation operator” in conformal field theory):</p>

\[\begin{aligned} a: \mathcal{E} &amp; \to \mathcal{E} \\ f(z) &amp;\mapsto \frac{d}{dz}f(z) \end{aligned}\]

<p>Let us further define our friend the exponential</p>

\[\begin{aligned} e^{ta}: \mathcal{E} &amp; \to \mathcal{E} \\ f(z) &amp;\mapsto e^{ta}f(z) := \sum_{k \geq 0} \frac{(ta)^k }{k!}(f(z)) \end{aligned}\]

<p>Lemma: \(e^{ta}f(z) = f(z+t)\)</p>

<p>Proof: I can’t help myself, I just love power series. We prove this by expanding both sides and observing that they are the same. Take \( f(z) := \sum_n c_nz^n \), then</p>

<p>\(\begin{aligned} e^{ta}f(z) &amp;= f(z) \\&amp; + (t\frac{d}{dz})f(z) \\ &amp; +  \frac{1}{n!}(t\frac{d}{dz})^nf(z)  \\ &amp; + \cdots \end{aligned}\)
i.e., 
\(\begin{aligned} e^{ta}f(z) &amp;= c_0 + c_1z + c_2z^2 + \cdots + c_nz^n + \cdots \\&amp; + c_1t + 2c_2tz + \cdots + nc_ntz^{n-1} + \cdots \\ &amp;  + c_nt^n + (n+1)c_{n+1}t^nz + \cdots  \\ &amp; + \cdots \end{aligned}\)</p>

<p>Let’s expand the other guy, 
\(\begin{aligned} f(z+t) &amp;= c_0 + c_1(z+t) + c_2(z+t)^2 + \cdots + c_n(z+t)^n + \cdots  \\ &amp;= (c_0 + c_1t + c_2t^2 + \cdots + c_nt^n +  \cdots) + (c_1 + 2c_2t + \cdots + nc_nt^{n-1} + \cdots)z + \cdots \end{aligned}.\)</p>

<p>It’s the same yay. That means we can write \(\Delta = e^{a} - 1.\)</p>
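Since \( e^{ta} \) truncates to a finite sum on polynomials, the lemma can be checked exactly. Here is a small sketch of mine in exact rational arithmetic (polynomials stored as coefficient lists):

```python
from fractions import Fraction

def deriv(coeffs):
    """Derivative of a polynomial given by its coefficients [c_0, c_1, ...]."""
    return [Fraction(k) * c for k, c in enumerate(coeffs)][1:]

def shift(coeffs, t):
    """Apply e^{ta} = sum_k (t a)^k / k!; the series terminates on polynomials."""
    total = [Fraction(0)] * len(coeffs)
    term, fact, k = list(coeffs), Fraction(1), 0
    while term:
        for i, c in enumerate(term):
            total[i] += Fraction(t) ** k / fact * c
        term = deriv(term)
        k += 1
        fact *= k
    return total

def evaluate(coeffs, z):
    return sum(c * Fraction(z) ** k for k, c in enumerate(coeffs))

# f(z) = 1 + 2z + 3z^2: check that e^{ta} f(z) = f(z + t) for t = 5.
f = [Fraction(1), Fraction(2), Fraction(3)]
g = shift(f, 5)
for z in range(-3, 4):
    assert evaluate(g, z) == evaluate(f, z + 5)
```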

<p>So, given our big clue above, if we can find \( \Delta^{-1} \), we can solve our addition problem.</p>

<h1 id="lets-playguess-that-inverse">Let’s play…GUESS THAT INVERSE!!</h1>

<p>If we consider \( e^x - 1 \) as a function, we want to find another function \( g(x) \) such that \( (e^x-1)g(x) = 1 \), then:</p>

<p>GUESS #1: \( \frac{1}{e^x-1} \) <em>beeping noise</em> nope this is not entire, has a pole when \( x= 0 \).</p>

<p>GUESS #2: \( \frac{x}{e^x-1} \) we can plug that pole, and get an entire function!! However, we have this extra \( x \) factor floating around (since we want to compose to \( 1 \), i.e., the identity operator).</p>

<p>If we were working non-commutatively, then, we’d want to look at either</p>

<p>NON-COMMUTATIVE GUESS #3 : \( x^{-1} \frac{x}{e^x - 1} \) or \( \frac{x}{e^x - 1} x^{-1} \)</p>

<p>Alright, we have some candidates. Let’s get back to operators to make this into real math. We define</p>

\[\begin{aligned} a^{-1}: \mathcal{E} &amp; \to \mathcal{E} \\ f(z) &amp;\mapsto \int_0^z f(u) du \end{aligned}\]

<p>Note, \( a a^{-1} f = f \), but  the other order is not always true \( a^{-1} a f \neq f \)  (because of the icky \( + c \) ).</p>

<p>That means our noncommutative guess  \( \frac{x}{e^x - 1} x^{-1} \) is a good bet! In fact, we define the following operator:</p>

\[\begin{aligned}  \frac{a}{e^a-1} \colon \mathcal{E} &amp;\to \mathcal{E} \\ f(z) &amp;\mapsto \sum_{k \geq 0} B_k \frac{a^k}{k!}f(z)\end{aligned}\]

<p>and finally, we claim \(\Delta^{-1} = \frac{a}{e^a-1}a^{-1}.\)</p>

<h1 id="tying-it-all-together-to-finish-the-job">Tying it all together to finish the job</h1>

<p>We take \( f(z) = z^s \);  we need to calculate \( \Delta^{-1}f = \frac{a}{e^a-1}a^{-1}f \).</p>

\[\begin{aligned} \Delta^{-1}f(z) &amp;= \frac{a}{e^a-1}(a^{-1}z^s) \\ &amp;= \frac{a}{e^a-1}(\frac{z^{s+1}}{s+1}) \\ &amp;= \sum_k \frac{B_k}{k!} (\frac{d}{dz})^k(\frac{z^{s+1}}{s+1}) \\ &amp;= \sum_{k = 0}^{s+1} \frac{B_k}{k!}(s+1)(s)\cdots(s+2-k) \frac{z^{s+1-k}}{s+1}  \\ &amp;= \frac{1}{s+1}\sum_{k = 0}^{s+1} B_k { s+1 \choose k} z^{s+1-k}  \end{aligned}\]

<p>Note in particular that \( (\Delta^{-1}f)(0) = \frac{B_{s+1}}{s+1} \): only the \( k = s+1 \) term survives at \( z = 0 \), and this constant cancels in the difference \( (\Delta^{-1}f)(n) - (\Delta^{-1}f)(0) \). Using our discrete fundamental theorem of calculus, we see the following:</p>

<p>Since \( \Delta(\Delta^{-1}f) = f\), let \( f = z^s \), then</p>

\[\begin{aligned} \sum_{i=0}^{n-1} i^s &amp;= (\Delta^{-1}f)(n) -  (\Delta^{-1}f)(0)  \\  &amp;= \frac{1}{s+1}\sum_{k = 0}^{s} B_k { s+1 \choose k} n^{s+1-k} \end{aligned}\]

<p>Tada! We did it!</p>
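The final formula can be checked exactly; here is a sketch of my own, computing the Bernoulli numbers from the standard recurrence \( \sum_{k=0}^{m-1} \binom{m+1}{k} B_k = -(m+1)B_m \) (equivalently \( \sum_{k \le m} \binom{m+1}{k} B_k = 0 \) for \( m \geq 1 \), in the \( B_1 = -\tfrac12 \) convention matching the right inverse above):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0, ..., B_n with B_1 = -1/2."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / Fraction(m + 1))
    return B

def faulhaber(s, n):
    """(1/(s+1)) sum_{k=0}^{s} B_k C(s+1, k) n^{s+1-k} = 0^s + 1^s + ... + (n-1)^s."""
    B = bernoulli(s)
    return sum(B[k] * comb(s + 1, k) * Fraction(n) ** (s + 1 - k)
               for k in range(s + 1)) / (s + 1)

for s in range(1, 8):
    for n in range(1, 20):
        assert faulhaber(s, n) == sum(Fraction(i) ** s for i in range(n))
```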

<p>There is also the following fabulous quick derivation trick of the Bernoulli numbers, which we discussed several years ago in <a href="https://rin.io/derivation-of-the-bernoulli-numbers/">Umbral Calculus Derivation of the Bernoulli Numbers</a>.</p>

<p>A quick way to remember the sum formula is to set \( B^i \) equal to the Bernoulli number \( B_i \). This can be earnestly done if one uses umbral methods.</p>

\[1^s + \cdots + n^s = \frac{(B + n + 1)^{s+1}-B^{s+1}}{s+1}\]

<h1 id="graph-laplacians-in-more-generality">Graph Laplacians in More Generality</h1>

<p>The definition of the graph Laplacian for a graph \( G \) is the following. Let \( v \in V \) be the vertices in the graph, and let \( N(v) \) be the set of vertices neighboring the vertex \( v \).</p>

\[\nabla_G f(v) = \frac{1}{|N(v)|} \sum_{w \in N(v)} (f(v) - f(w))\]

<p>The directed graph Laplacian instead uses \( N^+(v) \), the set of out-neighbors of \( v \) (the vertices reached from \( v \) by an edge):</p>

\[\nabla_G f(v) = \frac{1}{|N^+(v)|} \sum_{w \in N^+(v)} (f(v) - f(w))\]
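A minimal implementation of the normalized graph Laplacian above (my own sketch, on a toy 4-cycle), checking that constant functions lie in its kernel:

```python
def graph_laplacian(neighbors, f):
    """(Lf)(v) = (1/|N(v)|) * sum_{w in N(v)} (f(v) - f(w))."""
    return {v: sum(f[v] - f[w] for w in ws) / len(ws)
            for v, ws in neighbors.items()}

# Toy example: the 4-cycle 0-1-2-3-0, with the indicator function of vertex 0.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
f = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}
Lf = graph_laplacian(cycle, f)

# Constant functions are in the kernel of the Laplacian.
const = graph_laplacian(cycle, {v: 7.0 for v in cycle})
assert all(x == 0.0 for x in const.values())
```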

<p>We can define the graph Laplacian to act on entire functions on \( \mathbb{C} \):</p>

\[\nabla_{G, v} f(z) = \frac{1}{|N(v)|} \sum_{w \in N(v)} (f(z+v) - f(z+w))\]

<p>Notice that if we consider the lattice of \( \mathbb{Z} \) in \( \mathbb{R} \) to be directed (going toward positive numbers), we must make a slight type change.</p>

\[\nabla_{\mathbb{Z}, 0} f(z) = f(z) - f(z+1) .\]

<p>This is quite interesting because the <em>left inverse</em> of a graph Laplacian is the Green’s function associated to the uniform random walk on the graph, where you hop to each of your neighbors of distance one away with equal probability. The Bernoulli numbers are derived from the <em>right inverse</em> of \( \Delta = -\nabla_{\mathbb{Z}, 0} \). That’s a big difference in description of the inverses on both sides!</p>

<p>Let’s use this language to boogie. Take the lattice \( \Lambda := \mathbb{Z} + \tau\mathbb{Z} \subset \mathbb{C} \), with the directions pointing toward the positive imaginary and positive real directions. The out-neighbors of the vertex \( 0 \) are \( 1 \) and \( \tau \), so</p>

\[\begin{aligned} \nabla_{\Lambda, 0} f(z) &amp; = \frac12 (f(z) - f(z+1)) + \frac{1}{2}(f(z) - f(z+\tau))  \\ &amp;= f(z) - \frac12(f(z+1) + f(z+\tau)) \\ &amp; = \big( \mathrm{id} - \frac{1}{2}(e^a + e^{\tau a})\big) f(z) \end{aligned}\]

<p>If we allow ourselves full license to fuck around, let’s set \( \tau = i \) and play GUESS THAT INVERSE. We consider the function 
\(1-\frac{1}{2}(e^x + e^{\tau x})\)
and look at its inverse: 
\(\begin{aligned} \frac{1}{1-\frac{1}{2}(e^x + e^{\tau x})} &amp;= \frac{2}{e^x + e^{\tau x}-2} \\ &amp;= \frac{2}{\sum_n \frac{(1+\tau^n)x^n}{n!}-2}\end{aligned}\)</p>

<p>This function has a pole of order 3 at 0, and for the simplest case \( \tau = i \), the remaining poles are of order one at roots of \( i \). We can plug those poles, no problem:</p>

\[\frac{2a^3(a+1)(a-1)(a+i)(a-i)}{e^a + e^{ia} - 2}\]

<p>I’m not sure where to go from here, I’d love for this to in some way compare to the Kronecker-Eisenstein series 
(which tantalizingly involves the character \( \phi(z) = e^{\frac{z-\overline{z}}{A}} \)). Here, \( A = \frac{\text{im } \tau}{\pi} \) is the area of the fundamental domain of the lattice \( \Lambda \) divided by \( \pi \). The Kronecker-Eisenstein series is defined as follows, (typically the role of the letter \( b \) is played by \( a \), but we are already using \( a \) to stand for annihilation operator).</p>

\[\kappa_{b}(z, w, s, \tau) = \sum_{\lambda \in \Lambda}^*\frac{(\overline{z}+\overline{\lambda})^b}{|z+\lambda|^s}\phi(z \overline{w}),\]

<p>where the \( * \) means that the summation excludes \( \lambda = -z \) if \( z \in \Lambda \).</p>

<p>A good reference on these is <a href="https://webusers.imj-prg.fr/~pierre.charollois/Charollois-Sczech_5.pdf">Elliptic Functions according to Eisenstein and
Kronecker : An Update</a>. See in particular the elliptic analogs of Bernoulli numbers discussed on page 8.</p>

<p>This is a nostalgic topic for me which feels like home to step back to. I previously contemplated the analogs of such shifts in 2015 while I was still a robotics inventor sitting in the back of lectures at Berkeley <a href="https://rin.io/euler-maclaurin-slipper/">On Detiling Polynomials: A Generalization of the Euler MacLaurin Formula</a>.</p>]]></content><author><name></name></author><category term="math" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">A Song About Computing Sheaf Cohomology with Cech Covers</title><link href="https://rin.io/cech-covers/" rel="alternate" type="text/html" title="A Song About Computing Sheaf Cohomology with Cech Covers" /><published>2025-01-17T00:00:00+00:00</published><updated>2025-01-17T00:00:00+00:00</updated><id>https://rin.io/cech-covers</id><content type="html" xml:base="https://rin.io/cech-covers/"><![CDATA[<p><a href="/images/Cech cover.m4a">Cech Covers</a> (click the link to listen to us). I wrote this song with my beloved old room mate Christian Gorski in my last year of grad school while I was wrapping up my thesis. For weeks, I was doing nothing but computing etale sheaf cohomologies of ramified covers of the projective plane. I would decompose sheaves on these curves by cutting around the neighborhood of ramification point (the stomach), and capturing the properties of the ramification group from the ramification point (the heart). The gluing back datum was the punctured neighborhood of the sheaf.</p>

<p>Let \(X\) be a curve over a complete local ring, and let us decompose it into two affine pieces \(X = A \cup B.\)</p>

<p>Given a sheaf \(\mathcal{F}\) over \(X\), we have the following Mayer-Vietoris sequence.</p>

\[0 \to H^0(X, \mathcal{F}) \hookrightarrow  \mathcal{F}(A) \times \mathcal{F}(B) \to \mathcal{F}(A \cap B) \to H^1(X, \mathcal{F}) \to 0.\]

<p>In particular, if we take the decomposition of \(X\) into the point at infinity and its complement, \(X = U \cup \{\infty\}\), we get the following sequence; let \(T\) be a local parameter at \(\infty\).</p>

\[0 \to H^0(X, \mathcal{F}) \hookrightarrow  \mathcal{F}(U) \times \mathcal{F}^\wedge_\infty \to \mathcal{F}^\wedge_\infty[\frac{1}{T}] \to H^1(X, \mathcal{F}) \to 0.\]

<p>A global section in \(H^0(X, \mathcal{F})\) (e.g. for \( \mathcal{F} = \Omega^1_X \)) can be specified either in terms of \(\mathcal{F}(U)\) or in terms of \(\mathcal{F}^\wedge_\infty.\) This is because we are considering \(H^0(X, \mathcal{F})\) as a subset of \(\mathcal{F}(U) \times \mathcal{F}^\wedge_\infty\).</p>

<h1 id="lyrics">Lyrics:</h1>
<p>I’ll cut you into manageable pieces<br /> 
I hope you’re not too hard to glue back together<br /> 
There’s so many ways to form an affine cover<br /> 
Yet I keep slicing open your stomach<br /> 
And I pull out your heart<br /> 
I don’t know why<br /> 
that’s where I like to start<br /> 
<br /> 
It’s an easy thing to fall in love with a hand<br /> 
When its lifeless and the fingers are missing<br /> 
I can trace the truth<br />
As it sits motionless<br /> 
beneath my glass, slowly twitching<br /> 
<br /> 
And when I stitch it back together<br /> 
It jumps and motions toward the sky<br /> 
There’s a hanging sequence of all of your pieces<br /> 
And your other hand is waving hi<br /> 
<br /> 
I’ll cut you into manageable pieces<br /> 
I hope you’re not too hard to glue back together<br /> 
There’s so many ways to form an affine cover<br /> 
Yet I keep slicing open your stomach<br /> 
And I pull out your heart<br /> 
I don’t know why<br /> 
that’s where I like to start<br /> 
<br /> 
Those intersections between your skin<br /> 
Reveal raw flesh that I’m kissing<br /> 
There’s so many ways to stitch you back<br /> 
I GIVE YOU MORE EYES MORE EYES MORE EYES MORE EYES<br />
YOUR DICK IS MISSING, i needed it to make more eyes<br /> 
<br /> 
I love to cut you apart to find the ways you combine<br /> 
How many times can I do this before you fall apart forever?<br /></p>]]></content><author><name></name></author><category term="art" /><category term="math" /><category term="music" /><summary type="html"><![CDATA[Cech Covers (click the link to listen to us). I wrote this song with my beloved old room mate Christian Gorski in my last year of grad school while I was wrapping up my thesis. For weeks, I was doing nothing but computing etale sheaf cohomologies of ramified covers of the projective plane. I would decompose sheaves on these curves by cutting around the neighborhood of ramification point (the stomach), and capturing the properties of the ramification group from the ramification point (the heart). The gluing back datum was the punctured neighborhood of the sheaf.]]></summary></entry><entry><title type="html">The Crystalline Period Map</title><link href="https://rin.io/crystalline-period/" rel="alternate" type="text/html" title="The Crystalline Period Map" /><published>2025-01-17T00:00:00+00:00</published><updated>2025-01-17T00:00:00+00:00</updated><id>https://rin.io/crystalline-period</id><content type="html" xml:base="https://rin.io/crystalline-period/"><![CDATA[<p><img src="/images/lubin-tate.jpg" alt="" />  This drawing is an old drawing I made when I was preparing for my qualifying exam in my second year of grad school at Northwestern. It is the crystalline period map. The tower to the left is the “Lubin-Tate” tower, the deeper it goes the more level structure. In the upper right corner there is projective space, and the “cat like” creatures below are moduli stacks of curves. I always draw the moduli stack of elliptic curves as a cat because it has 2 stacky points (its ears) and a third stacky point if you compactify “the sock” of the fundamental domain (its tail). At the time I was working on what became my PhD thesis: using moduli stacks of curves with marked points to understand moduli stacks of formal groups with level structure.</p>

<p>If I was braver then, I would have added this drawing to the paper which I wrote at an Arizona Winter School. I still think that paper is a good resource for learning quickly about the Fargues-Fontaine curve. <a href="https://arxiv.org/pdf/1911.08615">A Global Crystalline Period Map</a></p>

<p>All of this is on my mind because I have had the great privilege and joy of returning to this topic after years away.</p>

<p>While we are sharing old things, in 2018 or 2019, Artem, Kolya and I made an outline of an approach to rational Chromatic Vanishing  \(H^*(J_h, W(k)) \otimes \mathbb{Q} \simeq H^*(J_h, W(k)[[u_1, ..., u_{h-1}]][u^{\pm}]) \otimes \mathbb{Q}\) using the two tower isomorphism, which reduces the problem to directly calculating the \( GL_h(\mathbb{Q}_p) \) action on constant functions on the Drinfeld projective space: <a href="/pdfs/chromaticvanishingapproach.pdf">Chromatic Vanishing Approach</a>.</p>

<p>At the time, the tools to compute the p-adic cohomology of the Drinfeld projective plane were not available to us so we ulitmately were scooped.  I was initially sad about this because I felt then I could not prioritize thinking about these awesome and gorgeous concepts. However, recently I have begun with many others a spin-off project, which I am delighted and healed by. :)</p>]]></content><author><name></name></author><category term="art" /><category term="math" /><summary type="html"><![CDATA[This drawing is an old drawing I made when I was preparing for my qualifying exam in my second year of grad school at Northwestern. It is the crystalline period map. The tower to the left is the “Lubin-Tate” tower, the deeper it goes the more level structure. In the upper right corner there is projective space, and the “cat like” creatures below are moduli stacks of curves. I always draw the moduli stack of elliptic curves as a cat because it has 2 stacky points (its ears) and a third stacky point if you compactify “the sock” of the fundamental domain (its tail). At the time I was working on what became my PhD thesis: using moduli stacks of curves with marked points to understand moduli stacks of formal groups with level structure.]]></summary></entry><entry><title type="html">Fuck Perfectionism</title><link href="https://rin.io/fuck-perfectionism/" rel="alternate" type="text/html" title="Fuck Perfectionism" /><published>2025-01-16T00:00:00+00:00</published><updated>2025-01-16T00:00:00+00:00</updated><id>https://rin.io/fuck-perfectionism</id><content type="html" xml:base="https://rin.io/fuck-perfectionism/"><![CDATA[<p>I have been working recently to counter the writers block that has formed insidiously from an unhealthy creeping perfectionism. 
In order to do this, I will post some old art and music which at the time I felt was “not good enough to share” or “inappropriate for a professional mathematician to have associated to them.” But all of these labels and caring too much about what other people think – I am sick of them. I don’t want them in my value system. I just want to be my full authentic self. So I will be. Enjoy!</p>]]></content><author><name></name></author><category term="psych" /><summary type="html"><![CDATA[I have been working recently to counter the writers block that has formed insidiously from an unhealthy creeping perfectionism. In order to do this, I will post some old art and music which at the time I felt was “not good enough to share” or “inappropriate for a professional mathematician to have associated to them.” But all of these labels and caring too much about what other people think – I am sick of them. I don’t want them in my value system. I just want to be my full authentic self. So I will be. Enjoy!]]></summary></entry><entry><title type="html">The Biome</title><link href="https://rin.io/biome/" rel="alternate" type="text/html" title="The Biome" /><published>2024-05-06T00:00:00+00:00</published><updated>2024-05-06T00:00:00+00:00</updated><id>https://rin.io/biome</id><content type="html" xml:base="https://rin.io/biome/"><![CDATA[<p>We outline connections between the gut microbiome, autoimmune conditions, neuropathic pain, eye pain, chemical intolerance, and a specific set of “overactive” mental illnesses. All seem to be connected to a sensory processing disorder. This is joint work with Luca Estinto.</p>

<p>We propose there is a collection of disorders characterized by sensory processing issues. This collection of symptoms affects many and is rarely viewed holistically. We propose to view them holistically and to approach their treatment as such – keeping in mind that autoimmune disorders are usually best detected genetically, and that medication-wise, sensory processing is best treated using tricyclic antidepressants, GABA analogues, and glasses prescribed by a behavioural optometrist.</p>

<h2 id="sensory-processing-disorders-and-connectivity">Sensory processing disorders and connectivity</h2>

<p>People who react strongly to perfumes and strong smells (chemical intolerance) perceive a smell at a constant level of intensity: the signal is processed at the same intensity over time. Most people perceive the signal intensity to decrease over time; they “get used to it”. Signal intensity is monitored via brain blood flow (measured using EEG and fMRI). The same blood flow pattern as those with chemical intolerance is found in those with chronic pain. [(2012)<a href="https://www.sciencedaily.com/releases/2012/01/120120182914.htm">Summary of Linuss Andersson (Sick of Smells: Empirical Findings and a Theoretical Framework for Chemical Intolerance)</a>] [(2018) <a href="https://journals.lww.com/joem/Fulltext/2018/02000/Multiple_Chemical_Sensitivity__Review_of_the_State.5.aspx">Multiple Chemical Sensitivity Review of the State of the Art in Epidemiology, Diagnosis, and Future Perspectives</a>].</p>

<h2 id="sensory-processing-disorders-and-glutamate">Sensory processing disorders and glutamate</h2>

<p>If you want to imagine what life is like as a highly sensitive person recall what it is like to have a hangover — when you have a hangover you have an excess of free glutamate which causes overexcitability of sensory processing.</p>

<p>Children with sensory processing disorders have disconnected white matter (measured using DTI) [(2016)<a href="https://www.ucsf.edu/news/2016/01/401461/brains-wiring-connected-sensory-processing-disorder">Brain’s Wiring Connected to Sensory Processing Disorder</a>].</p>

<p>It is worth noting that Gabapentin (600mg) seems quite effective in cases of patients who tend toward alcohol to calm overexcitability and anxiety.  Gabapentin is structurally similar to the neurotransmitter glutamate and competitively inhibits branched-chain amino acid aminotransferase (BCAT), slowing down the synthesis of glutamate.</p>

<h2 id="excitable-mental-illness-and-the-role-of-glutamate">Excitable Mental Illness and the role of Glutamate</h2>

<p>A sensory processing disorder makes excitable mental illnesses more likely. On the mental illness side, ADHD, Asperger’s, bipolar disorder, anxiety, and autism show “ring of fire” blood flow patterns in the brain; too much glutamate causes overexcitability of the brain (measured using SPECT). [<a href="https://pubmed.ncbi.nlm.nih.gov/?term=AMEN+DG+OUTCOMES">Amen Clinics Articles</a>] Their clinic uses SPECT imaging as a diagnostic tool for psychiatric treatment [<a href="https://pubmed.ncbi.nlm.nih.gov/23709407/">Amen Clinics: Multi-site six month outcome study of complex psychiatric patients evaluated with addition of brain SPECT imaging</a>]. Some examples (all photos are from the Amen Clinic):</p>

<p><img src="/images/bipolar_amen.png" alt="image" /></p>

<p><img src="/images/ocd-amen.png" alt="image" /></p>

<p><img src="/images/asd_amen.png" alt="image" /></p>

<h2 id="spd-causing-visual-tunneling-and-autoimmune-conditions-affecting-eyesight">SPD causing Visual Tunneling and Autoimmune conditions affecting Eyesight</h2>

<p>Visual tunneling is a particularly pervasive issue that affects all senses; it results from an overload of visual sensation in the brain. Tunneling is a subconscious spatial adaptation to reduce the information the individual handles: the individual is either overwhelmed with information or has trouble sorting out the significant area for attention. [(2014) <a href="https://www.ovpjournal.org/uploads/2/3/8/9/23898265/ovp2-1_viewpoint_getzell_web.pdf">Visual Tunneling: A Pervasive Vision Disorder</a>]</p>

<p>“Cognitively speaking, for these individuals to succeed, they subconsciously fragment space or tunnel, i.e. reduce the amount of information processed in order not to be overwhelmed in dealing with abstract or conceptual activities. In the physiological aspect, the complications of this adaptation for the hyperopic patient include moving the pelvis backward and the forehead forward, and the eyes feel like they are turned out.” This visual tunneling affects the individual’s ability to control their attention and produces motor and postural changes: one might walk with smaller steps, round their shoulders, lock their knees when standing, and develop upper back, shoulder, or neck tension.</p>

<p>Nutrient deficiencies such as low vitamin D, as well as being sensorily overwhelmed, cause eyesight issues (autoimmune diseases of the eye and visual tunneling, respectively) [source?].</p>

<h2 id="sensory-processing-disorders-and-autoimmune-disease">Sensory processing disorders and Autoimmune Disease</h2>

<p>People who are highly sensitive (a genetic trait) are more likely to have autoimmune diseases such as diabetes and, I would postulate, celiac. [(2018)<a href="https://www.sciencedirect.com/science/article/abs/pii/S0882596317302397">Sensory Processing Sensitivity and Type 1 Diabetes</a>] [Celiac and Diabetes are often comorbid]</p>

<p>This study does not show causality; it shows that 50% of children with autism also suffer comorbid gastrointestinal symptoms, such as celiac disease or constipation. Children with ASD have altered gut microbial composition, with decreased gut bacterial variety and underrepresentation of potentially helpful bacteria such as Bifidobacterium, Prevotella, and Desulfovibrio, as well as a decrease in short-chain fatty acids. <a href="https://ejo.springeropen.com/articles/10.1186/s43163-022-00270-6">Screening of gastrointestinal symptoms and celiac disease in children with autism spectrum disorder</a></p>

<h2 id="enteric-nervous-system-and-excitable-mental-illness">Enteric Nervous System and Excitable Mental Illness</h2>

<p>Our guts produce 90% of the body’s serotonin; if you are low on folic acid or any other basic vitamins/nutrients, you literally cannot create serotonin, dopamine, and other neurotransmitters. [add] Would a severe serotonin deficiency show up in a SPECT scan?</p>

<p>There is ongoing work to directly detect and monitor this for diagnostic and treatment purposes. [<a href="https://ece.umd.edu/release/symptoms-all-in-your-heador-in-your-gut-maybe-a-little-of-both">UMD: building an ingestible capsule that can monitor and model gut microbiome serotonin activity</a>]</p>

<p><a href="https://www.nature.com/articles/s41398-019-0389-6">Altered composition and function of intestinal microbiota in autism spectrum disorders: a systematic review</a>. The overall changing of gut bacterial community in terms of β-diversity was consistently observed in ASD patients compared with HCs. Furthermore, Bifidobacterium, Blautia, Dialister, Prevotella, Veillonella, and Turicibacter were consistently decreased, while Lactobacillus, Bacteroides, Desulfovibrio, and Clostridium were increased in patients with ASD relative to HCs in certain studies.</p>

<p><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7661167/">A cross-sectional study of gastrointestinal symptoms, depressive symptoms and trait anxiety in young adults</a></p>

<p>There has been some recent research on fecal transplants curing some forms of mental illness. The inverse is also true, “with the transmission of depressive and anxiety-like symptoms and behaviours resulting from the transplantation of microbiota from psychiatrically ill donors to healthy recipients”. We collect some examples here:</p>

<p><a href="https://pubmed.ncbi.nlm.nih.gov/32539741/">Effect of fecal microbiota transplant on symptoms of psychiatric disorders: a systematic review</a>.</p>

<p><a href="https://clinicaltrials.gov/ct2/show/NCT03281044">Fecal Microbiota Transplantation in Depression</a></p>

<p><a href="https://journals.physiology.org/doi/full/10.1152/ajpgi.00194.2019">Posttraumatic stress disorder is associated with altered gut microbiota that modulates cognitive performance in veterans with cirrhosis</a>.</p>

<p><a href="https://clinicaltrials.gov/ct2/show/NCT04109196">Examining Changes in Microbiota Over the Course of PTSD Treatment</a>.</p>

<p><a href="https://www.statnews.com/wp-content/uploads/2022/02/OLP-vs-DBP-PAIN-2021.pdf">Open-label placebo vs double-blind placebo for irritable bowel syndrome: a randomized clinical trial</a></p>

<h2 id="enteric-nervous-systemspd-causing-lowered-immune-system-causing-autoimmune-disease">Enteric Nervous System/SPD causing Lowered Immune System causing Autoimmune Disease</h2>

<p>If you have a weakened immune system (low IgA, IgG, or IgM), you are more likely to have an autoimmune disease.</p>

<p><em>Remark: You are also more likely than average to have an autoimmune disease if you have an abnormally strong immune system. That is not the case we are discussing here.</em></p>

<p>The enteric nervous system generates IgA and IgG in our body. There is a link between IgA levels and a person’s perception of stress, implying the nervous system has the ability to control IgA levels. <a href="https://www.sciencedirect.com/science/article/abs/pii/S0306453097000425?via%3Dihub">Effects of psychological stress on serum immunoglobulin, complement and acute phase protein concentrations in normal volunteers</a></p>

<p>People with IgA deficiency should be tested for celiac disease because they are 10 to 20 times more likely to develop an autoimmune response to gluten than the general population. [<a href="https://www.beyondceliac.org/celiac-disease/related-conditions/iga-deficiency/">Beyond Celiac: IgA Deficiency</a>]</p>

<p>Using tests of raised IgA and IgG to detect celiac disease is harmful and ineffective, as naturally lowered IgA and IgG are often comorbid with celiac disease. In such patients, the raised IgA and IgG register as normal levels on tests, giving many false negatives. The most effective test is a DNA test for the celiac markers. 
[(2017)<a href="https://pubmed.ncbi.nlm.nih.gov/28437323/">Lack of Utility of Anti-tTG IgG to Diagnose Celiac Disease When Anti-tTG IgA Is Negative</a>].</p>

<p>Immunosenescence, the ageing of the immune system, is linked to an increase in autoantibody frequency as antibody levels decline [(2004) <a href="https://www.sciencedirect.com/topics/medicine-and-dentistry/immunosenescence">The Neuroendocrine Immune Network in Ageing</a> ]. One way to frame the connection between SPD and being immunocompromised is that higher sensitivity leads to greater stress / perceived stress, which decreases immunity, which in turn causes an increase in autoantibodies that would otherwise have occurred only during someone’s natural immunosenescence. [(2004) <a href="https://www.sciencedirect.com/science/article/abs/pii/S1568997204000424">Inflamm-aging: autoimmunity, and the immune-risk phenotype</a> ]</p>

<h2 id="mechanisms-of-functional-dyspepsia-and-treatment-with-tricyclic-antidepressent">Mechanisms of Functional Dyspepsia and Treatment with Tricyclic Antidepressants</h2>

<p>A proposed mechanism is that there is original inflammation, followed by continued faulty pain signaling after the input is gone; that is, a central sensitization model. [(2002) <a href="https://pubmed.ncbi.nlm.nih.gov/12089851/">Functional dyspepsia–a psychosomatic disease</a> ][<a href="https://www.gastroenterologyandhepatology.net/archives/february-2020/functional-dyspepsia-a-review-of-the-symptoms-evaluation-and-treatment-options/">Functional Dyspepsia: A Review of the Symptoms, Evaluation, and Treatment Options</a> ] [(2005-2015) <a href="https://mayoclinic.pure.elsevier.com/en/projects/antidepressant-therapy-for-functional-dyspepsia">Antidepressant Therapy for Functional Dyspepsia</a> ] [(2018) <a href="https://www.healio.com/news/gastroenterology/20180108/antidepressants-improve-functional-dyspepsia-symptoms-but-how-remains-unclear">Antidepressants improve functional dyspepsia symptoms, but how remains unclear</a> ]</p>

<h2 id="autoimmune-disease-is-under-direct-enteric-neural-control">Autoimmune Disease is Under Direct Enteric Neural Control</h2>

<p>The following are quotes from this paper:
<a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC1727729/pdf/v045p00679.pdf">Fast acting nervous regulation of immunoglobulin A secretion from isolated perfused porcine ileum</a></p>

<p>“As in the salivary gland, sIgA is rapidly secreted in mucosal explants from porcine distal colon in response to both cholinergic and adrenergic receptor agonists, but remains unaltered by acute exposure to an enteroadherent microbial pathogen. The enteric neurotransmitters acetylcholine and NE appear to act on muscarinic cholinergic and alpha-adrenergic receptors expressed by crypt epithelial cells to promote transcytosis of the polymeric Ig receptor and enhance vectorial secretion of both sIgA and free SC towards the mucosal surface. In its early stages, this action appears to be temporally coincident with, but mechanistically independent of active ion transport. The precise receptor types and associated signal transduction mechanisms mediating sIgA secretion by these and possibly other enteric neurotransmitters at this mucosal immune effector site remain to be characterized.”</p>

<p>“In a more complex, vascularly-perfused porcine ileum preparation however, electrical stimulation of extrinsic nerves innervating the intestine increases sIgA secretion only after atropine and phentolamine pretreatment; this effect is inhibited by hexamethonium, a blocker of ganglionic neurotransmission. This result suggests that there may exist important inhibitory reflex circuits containing muscarinic cholinergic and alpha-adrenergic receptors which modulate the enteric neural control of sIgA output. Additional investigations will be necessary to establish how basal sIgA secretion onto the mucosal surface in vivo is modulated by the release of acetylcholine from enteric nerves.”</p>

<h2 id="virome-mediates-stress-and-memory-responses">Virome Mediates Stress and Memory Responses</h2>
<p>This will be the topic of a future post, but it is critical to note that the virome in fact controls the gut biome entirely: bacteriophages maintain the biodiversity of a healthy gut.</p>

<p><img src="/images/virome.png" alt="image" /></p>

<p><a href="https://archive.is/20240713074246/https://www.newscientist.com/article/mg26334991-200-the-vital-viruses-that-shape-your-microbiome-and-your-health/">The vital viruses that shape your microbiome and your health</a></p>

<h2 id="keyword-summary">Keyword summary:</h2>
<ul>
  <li>visual tunneling</li>
  <li>chemical intolerance</li>
  <li>chronic pain</li>
  <li>excess free glutamate</li>
  <li>high sensitivity</li>
  <li>autism/OCD/ADD/PTSD/bipolar</li>
  <li>functional dyspepsia</li>
  <li>sensory processing disorder</li>
  <li>lowered immune system</li>
  <li>likelihood to have autoimmune disease</li>
  <li>enteric nervous system</li>
  <li>lowered ability to create serotonin</li>
  <li>joint hyperflexibility</li>
</ul>]]></content><author><name></name></author><category term="bio" /><summary type="html"><![CDATA[We outline connections between the gut microbiome, autoimmune conditions, neuropathic pain, eye pain, chemical intolerance, and a specific set of “overactive” mental illnesses. All seem to be connected to a sensory processing disorder. This is joint work with Luca Estinto.]]></summary></entry><entry><title type="html">Half Haunted: The 1/2 in Harish-Chandra via the Fourier Transform</title><link href="https://rin.io/fourier-harish-chandra/" rel="alternate" type="text/html" title="Half Haunted: The 1/2 in Harish-Chandra via the Fourier Transform" /><published>2023-12-31T00:00:00+00:00</published><updated>2023-12-31T00:00:00+00:00</updated><id>https://rin.io/fourier-harish-chandra</id><content type="html" xml:base="https://rin.io/fourier-harish-chandra/"><![CDATA[<p>This post is written together <em>with Josh Mundinger</em>. <a href="https://rin.io/harish-chandra/">Last time</a>, we compared the Harish-Chandra isomorphism \(Z(U\mathfrak g) \cong (\text{Sym} \mathfrak h)^{W,\cdot}\) for \(\mathfrak g=  \mathfrak{sl}_2\) to the Duflo isomorphism \(Z(U\mathfrak g) \cong (\text{Sym } \mathfrak g)^{\mathfrak g} \cong (\text{Sym} \mathfrak h)^W\), and found that they differ exactly by a translation by \(\rho\). In this blog post, we study just the Harish-Chandra map \(Z(U\mathfrak g) \to \mathbb C[\mathfrak h]\), using the Fourier transform to explain why the image is invariant under the dot action \((W,\cdot)\). Recall that the Harish-Chandra map sends \(z \in Z(U\mathfrak g)\) to the action of \(z\) on the Verma module \(M_\lambda\). The dot action of \(W\) is defined by \(w \cdot \lambda = w(\lambda + \rho) - \rho\). Thus, 
for \(\mathfrak{sl}_2\), we need to show that the center of \(U\mathfrak{sl}_2\) acts by the same scalar on \(M_{\lambda}\) and \(M_{-\lambda - 2}\).</p>

<h2 id="twisted-differential-operators">Twisted differential operators</h2>
<p>The Beilinson-Bernstein localization theorem provides a geometric way to understand the universal enveloping algebra of a semisimple Lie algebra \(\mathfrak g\) through differential operators.
For a smooth variety \(X/\mathbb C\), the <strong>sheaf of differential operators</strong> \(D_X\) on \(X\) is the sheaf of \(\mathbb C\)-linear operators \(\mathcal{O}_X \to \mathcal{O}_X\) which locally look like 
\[\sum_{i_1,\ldots,i_n} f_{i_1,\ldots, i_n} \frac{\partial^{i_1}}{\partial x_1^{i_1}}\frac{\partial^{i_2}}{\partial x_2^{i_2}}\cdots \frac{\partial^{i_n}}{\partial x_n^{i_n}},\]
where \(f_{i_1,\ldots, i_n}\) are regular functions and \(x_1,\ldots, x_n\) are local coordinates.
More generally, if \(\mathcal L\) is a line bundle on \(X\), then the sheaf of differential operators \(D^{\mathcal L}\) on \(\mathcal L\) is the sheaf of operators \(\mathcal L \to \mathcal L\) which look locally as above. This ring may be expressed in terms of \(D_X\) by the formula 
\[ D^{\mathcal L}  = \mathcal L \otimes_{\mathcal O_X} D_X \otimes_{\mathcal O_X} \mathcal L^{-1}. \]
In our case, we are interested in \(X = \mathbb P^1 = SL_2/B\), the flag variety for \(SL_2\).
Line bundles on \(\mathbb P^1\) are exactly \(\mathcal O(\lambda)\) for \(\lambda \in \mathbb Z\),
and we will write \(D^{\lambda} = D^{\mathcal O(\lambda)}\).</p>

<p>It turns out that the algebra \(D^\lambda\) makes sense even when \(\lambda\) is not an integer!
We have a map 
\[\pi: \mathbb{C}^2\setminus 0 \to \mathbb{P}^1.\]
This is a quotient map for the action of \(\mathbb G_m\) by dilation on \(\mathbb C^2\).
The derivative of the \(\mathbb G_m\)-action is given by the Euler vector field 
\[ eu = x_1 \frac{\partial}{\partial x_1} + x_2 \frac{\partial}{\partial x_2}\]
where \(x_1\) and \(x_2\) are linear coordinates on \(\mathbb C^2\).
In these terms,
\[D^\lambda := \left(\pi_*\left(D_{\mathbb C^2 \setminus 0}/(eu - \lambda)\right)\right)^{\mathbb G_m}.\]
The above formula makes sense for any complex number \(\lambda\). The sheaf of rings \(D^\lambda\) is a sheaf of <strong>twisted differential operators</strong> on \(\mathbb P^1\).</p>
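<p>To see the role of \(eu\) concretely: by Euler’s identity, a function \(f\) on \(\mathbb C^2 \setminus 0\) homogeneous of degree \(d\) satisfies</p>

```latex
f(t x_1, t x_2) = t^{d} f(x_1, x_2)
  \quad\Longrightarrow\quad
  (eu\, f)(x) = x_1 \frac{\partial f}{\partial x_1} + x_2 \frac{\partial f}{\partial x_2} = d\, f(x),
```

<p>so imposing \(eu = \lambda\) amounts to working with “degree \(\lambda\)” objects, even when \(\lambda\) is not an integer and there is no honest line bundle \(\mathcal O(\lambda)\).</p>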

<h2 id="infinitesimal-action-and-a-lemma-of-beilinson-bernstein">Infinitesimal action and a Lemma of Beilinson-Bernstein</h2>

<p>The theorem of Beilinson and Bernstein relates \(\mathfrak g\) to the flag variety \(X = G/B\).
The line bundles on \(G/B\) are \(\mathcal O(\lambda)\) for \(\lambda\) a character of \(B/[B,B]\). 
In case \(G = SL_2\), then \(G/B = \mathbb P^1\), and \(\mathcal O(\lambda)\) for integer \(\lambda\) are the usual line bundles on \(\mathbb P^1\).</p>

<p><strong>Lemma</strong> (Global Sections of D)
    If \(G\) is a semisimple complex algebraic group and \(X = G/B\) is the flag variety of \(G\), then 
    \[\Gamma(X,D^\lambda) \cong U\mathfrak g/I_\lambda\]
    where \(I_\lambda\) is the ideal generated by the kernel of the action of \(Z(U\mathfrak g)\) on the Verma module \(M_\lambda\).</p>

<p>The map in this Lemma is induced by the <strong>infinitesimal action</strong> of \(G\) on \(X = G/B\).
Given a path \(\gamma: (-\epsilon, \epsilon) \to G\) with \(\gamma(0)=1\), we obtain a family of automorphisms of \(X\); differentiating them gives a vector field on \(X\). This vector field depends only on \(\gamma'(0)\), and so we obtain a Lie algebra homomorphism
\[ \mathfrak g \to \text{Vect}(X).\]
In case \(G = SL_2\), \(X = \mathbb P^1\), we can calculate these operators on \(\mathbb C^2 \setminus 0\), then descend using the map \(\pi: \mathbb C^2 \setminus 0 \to \mathbb P^1\).
Here are formulas for the infinitesimal action on \(\mathbb C^2 \setminus 0\):</p>

<p><img src="/images/Screenshot from 2023-12-31 18-36-02.png" alt="" /></p>

<p>These formulas give a Lie algebra homomorphism \(\mathfrak{sl}_2\to \text{Vect}(\mathbb C^2 \setminus 0) \to D_{\mathbb C^2 \setminus 0}\),
inducing a homomorphism of associative algebras \(U\mathfrak{sl}_2 \to D_{\mathbb C^2 \setminus 0}\).
The image of \(\mathfrak{sl}_2\) is \(\mathbb G_m\)-invariant since the action of \(SL_2\) commutes with dilation.
So we get a ring homomorphism
\[\mathrm{act}: U\mathfrak{sl}_2 \to D^\lambda = \pi_\ast(D_{\mathbb C^2 \setminus 0}/(eu - \lambda))^{\mathbb G_m}\]
for all \(\lambda \in \mathbb C\).
We have now given explicit formulas for the map in the Lemma above.</p>

<h2 id="fourier">Fourier</h2>
<p>In this section we show that the Fourier transform descends to an equivalence of categories from the category of \(D^\lambda\)-modules to the category of \(D^{-\lambda - 2}\)-modules for generic \(\lambda\). The Fourier transform naturally transforms differential operators on a vector space \(V\) to differential operators on the dual space \(V^*\). We show how this induces a map on twisted differential operators on \(\mathbb P^1\), where the \(\rho\)-shift will naturally appear.</p>

<p>Let \(V = \mathbb{C}^2\) with coordinates \(x_1,x_2\). On \( V^* \), we will use dual coordinates  \((\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}) =: (y_1, y_2)\). Note that for \(\mathfrak g\) acting on a vector space \(V\), the action of \(x\in \mathfrak g\) on \(V^*\) is defined as \(x(v^*) = v^* \circ(-x)\).
Thus, on the dual basis, we get the following action of \(\mathfrak{sl}_2\):</p>

<p><img src="/images/Screenshot from 2023-12-31 17-42-34.png" alt="" /></p>

<p>Note that these matrices are different from the ones we had before: in this basis, \(E\) acts by \(-F\) on the dual, \(F\) acts by \(-E\), and \(H\) acts by \(-H\) on \(V^*\).</p>

<p>The Fourier transform \(\phi: D_V \to D_{V^*}\) is defined by 
\[ \phi(x_i) = \frac{\partial}{\partial y_i}, \quad \phi\left(\frac{\partial}{\partial x_i}\right) = - y_i.\]
The goal of this section is to show that the Fourier transform \(\phi\) induces the following commutative triangle.</p>

<p><img src="/images/Screenshot from 2023-12-31 17-42-15.png" alt="" /></p>

<p>where \(\mathrm{act}\) is the infinitesimal action of the last section.
This commutative triangle implies that the central character of \(\lambda\) is the same as the central character of \(-\lambda - 2\), which gives us that the Harish-Chandra homomorphism is invariant under the \((W,\cdot)\) action.</p>

<p>We begin by showing the vertical arrow is well defined by an explicit calculation with the Euler fields on \(V\) and \( V^* \):
\[\phi (eu) = \sum_i \left(-\frac{\partial}{\partial y_i}y_i\right) = \sum_i \left(-y_i \frac{\partial}{\partial y_i}-1\right)= -eu - 2.\]
Hence \( \phi\left( (eu - \lambda)D_{V}\right) = (eu - (-\lambda - 2))D_{V^*} \).
Now \(D_V\) and \(D_{V \setminus 0}\) are not the same, as the latter is a sheaf on the non-affine scheme \(V \setminus 0\). Nonetheless we get a map
\[ \phi: D_{V \setminus 0}/ (eu - \lambda)D_{V\setminus 0} \to D_{V^* \setminus 0}/ (eu - (-\lambda - 2))D_{V^* \setminus 0}, \]
and thus \(D^\lambda_{\mathbb P V} \to D^{-\lambda - 2}_{\mathbb PV^*}\).</p>
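<p>The operator identity \(\phi(eu) = -eu - 2\) is easy to machine-check. Here is a small Python sketch (the encoding of polynomials as exponent-to-coefficient dictionaries is our own bookkeeping, nothing canonical): since \(\phi\) is an algebra map, \(\phi(eu) = \sum_i \frac{\partial}{\partial y_i}\circ(-y_i)\), and we compare this with \(-eu - 2\) on monomials.</p>

```python
from itertools import product

# Polynomials in y1, y2 as dicts {(a, b): coeff} for monomials y1^a * y2^b.
def d(i, p):
    """Partial derivative with respect to y_i (i = 0 or 1)."""
    out = {}
    for exp, c in p.items():
        if exp[i] > 0:
            e = list(exp); e[i] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + c * exp[i]
    return out

def mul_y(i, p):
    """Multiplication by y_i."""
    out = {}
    for exp, c in p.items():
        e = list(exp); e[i] += 1
        out[tuple(e)] = out.get(tuple(e), 0) + c
    return out

def scale(s, p):
    return {e: s * c for e, c in p.items()}

def add(p, q):
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c != 0}

# phi(eu) = phi(x1)phi(d_x1) + phi(x2)phi(d_x2) = d_y1 o (-y1) + d_y2 o (-y2)
def phi_eu(p):
    return add(d(0, scale(-1, mul_y(0, p))), d(1, scale(-1, mul_y(1, p))))

# The operator -eu - 2 acting on C[y1, y2].
def minus_eu_minus_2(p):
    return add(scale(-1, add(mul_y(0, d(0, p)), mul_y(1, d(1, p)))), scale(-2, p))

# The two operators agree on all monomials y1^a y2^b, hence on all polynomials.
for a, b in product(range(5), repeat=2):
    assert phi_eu({(a, b): 1}) == minus_eu_minus_2({(a, b): 1})
```

<p>On the monomial \(y_1^a y_2^b\) both sides give the scalar \(-(a+b+2)\), which is the content of the displayed computation above.</p>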

<p>To fill in the triangle, we need to check that \(\phi\) intertwines the infinitesimal action of \(\mathfrak{sl}_2\) on \(V\) and \(V^*\).
We calculated both of these earlier! The Fourier transform sends the infinitesimal actions defined by equations (1), (2), (3) to (4), (5), (6) respectively. Thus we conclude the triangle commutes.</p>

<h2 id="central-characters">Central Characters</h2>
<p>By the Lemma (Global Sections of D), this commutative triangle induces the following triangle on global sections:</p>

<p><img src="/images/Screenshot from 2023-12-31 17-42-53.png" alt="" /></p>

<p>where \(I_\lambda\) is the ideal of the central character of \(\lambda\) under the Harish-Chandra homomorphism.</p>

<p>This shows that the Harish-Chandra homomorphism is invariant under \(\lambda \mapsto -\lambda - 2\), as desired.</p>

<p>The reason that the shift \(\lambda \mapsto -\lambda - 2\) shows up in today’s story comes from the Fourier transform of the Euler field. When the Fourier transform is treated in a more coordinate-free manner, the canonical bundle shows up. The canonical bundle of \(\mathbb P^1\) is \(\mathcal O(-2)\), and that \(-2\) is the same \(-2\) as in \(-\lambda - 2\). We will elaborate on this in the future.</p>]]></content><author><name></name></author><category term="math" /><summary type="html"><![CDATA[This post is written together with Josh Mundinger. Last time, we compared the Harish-Chandra isomorphism \(Z(U\mathfrak g) \cong (\text{Sym} \mathfrak h)^{W,\cdot}\) for \(\mathfrak g= \mathfrak{sl}_2\) to the Duflo isomorphism \(Z(U\mathfrak g) \cong (\text{Sym } \mathfrak g)^{\mathfrak g} \cong (\text{Sym} \mathfrak h)^W\), and found that they differ exactly by a translation by \(\rho\). In this blog post, we study just the Harish-Chandra map \(Z(U\mathfrak g) \to \mathbb C[\mathfrak h]\), using the Fourier transform to explain why the image is invariant under the dot action \((W,\cdot)\). Recall that the Harish-Chandra map sends \(z \in Z(U\mathfrak g)\) to the action of \(z\) on the Verma module \(M_\lambda\). The dot action of \(W\) is defined by \(w \cdot \lambda = w(\lambda + \rho) - \rho\). Thus, for \(\mathfrak sl_2\), we need to show that the center of \(U\mathfrak sl_2\) acts by the same scalar on \(M_{\lambda}\)and \(M_{-\lambda - 2}\).]]></summary></entry><entry><title type="html">Half Haunted: Relating the 1/2’s in Duflo and Harish-Chandra</title><link href="https://rin.io/harish-chandra/" rel="alternate" type="text/html" title="Half Haunted: Relating the 1/2’s in Duflo and Harish-Chandra" /><published>2023-09-16T00:00:00+00:00</published><updated>2023-09-16T00:00:00+00:00</updated><id>https://rin.io/harish-chandra</id><content type="html" xml:base="https://rin.io/harish-chandra/"><![CDATA[<p>This post is written together <em>with Josh Mundinger</em>. 
We seek to understand the relations between \(1/2\)’s that appear across mathematics. From the Riemann Hypothesis to the L2 norm, we aim to see the myriad and enticing ways this unfurls; each instance of \(1/2\) connected in an anarchic network of equals. In this blog post, we examine a specific example arising in representation theory: the center of the universal enveloping algebra \(U\mathfrak g\) of a Lie algebra  \(\mathfrak g\).</p>

<p>The PBW theorem implies there is an isomorphism of vector spaces 
\[ sym: \text{Sym}(\mathfrak g)^\mathfrak g \cong Z(U\mathfrak g),\]
which on \(\text{Sym}^i\) is given by the symmetrization map 
\[ sym: x_1x_2\cdots x_i \mapsto \frac{1}{i!} \sum_{\sigma \in S_i} x_{\sigma 1}x_{\sigma 2}\cdots x_{\sigma i}.\]</p>

<p>Note that this is not the identity map because \(\text{Sym}\) indicates coinvariants and not invariants of the symmetric group. The symmetrization map is not a ring homomorphism. Duflo showed, however, that by <em>twisting</em> the symmetrization map, one obtains a ring homomorphism. 
The Chevalley restriction theorem then identifies  \(\text{Sym}(\mathfrak g)^\mathfrak g\) with \(\text{Sym}(\mathfrak h)^W\), where \(\mathfrak h\) is a Cartan subalgebra and \(W\) is the Weyl group.</p>

<p>On the other hand, when  \(\mathfrak g\) is semisimple, there is another way to understand  \(Z(U\mathfrak g)\). 
The Harish-Chandra isomorphism is a “quantized” version of the Chevalley isomorphism, 
\[ \text{Sym}(\mathfrak{g})^{\mathfrak{g}} \to \text{Sym}(\mathfrak{h})^{W}. \]
It’s called quantized because we are taking the universal enveloping algebra, which adds a parameter deforming the bracket of our Lie algebra (literally a deformation parameter). Then we get back down to the classical case by taking the associated graded. 
Harish-Chandra proved that there is an isomorphism  \(Z(U\mathfrak g) \cong (U\mathfrak h)^{(W,\cdot)}\).
Here \(W\) acts on \(\mathfrak h\) via the <em>dot action</em> instead of the usual action: the origin is shifted by \(\rho\), \(1/2\) of the sum of the positive roots of \(\mathfrak g\).</p>

<p>Our quest now leads us to the natural question: are the 1/2 in  \(\rho\) and the 1/2 in Duflo’s differential operator the same 1/2?</p>

<p>In the case of \(\mathfrak{sl}_2\), we show via explicit computation that the answer is yes!</p>

<p>In a followup post, we will further see why both arise via a lovely geometric flip and adjustment. The geometric adjustment is by another  \(1/2\): a square root of the canonical bundle on a variety.</p>

<h2 id="the-square-with-rho-and-duflo">The square with \(\rho\) and Duflo</h2>

<p>We investigate the compatibility between three different maps between semi-simple Lie algebras. In particular, we wish to fill in the remaining arrow. We show this commutes explicitly in the case of \(\mathfrak{sl}_2\) by diagram chasing the generator by hand. Let’s go!</p>

<p><img src="/images/Screenshot from 2023-10-17 15-42-38.png" alt="" /></p>

<p>The Lie algebra \(\mathfrak{sl}_2\) has basis \(E,F,H\) with commutation relations 
\[ [E,F]=H, \qquad [H,E] = 2E, \qquad [H,F] = -2F.\]</p>

<p>The matrices representing \(E, F, H\) are as follows: 
 <img src="/images/Screenshot from 2023-12-07 16-56-15.png" alt="" /></p>

<p>The Cartan subalgebra \(\mathfrak h\) is spanned by \(H\),
and the Weyl group \(W = \mathbb Z/2\mathbb Z\) acts on \(\mathfrak h\) by \(H \mapsto -H\).</p>

<h2 id="harish-chandra">Harish-Chandra</h2>

<p>The Harish-Chandra homomorphism is defined as follows: there is the Verma module
\( M_{\lambda} = U(\mathfrak{sl}_{2})/U(\mathfrak{sl}_{2}) \cdot (E, H - \lambda),\)
which we can think of as generated by an element \( 1_{\lambda} \) satisfying \( E1_{\lambda} = 0 \) and \( H1_{\lambda} = \lambda 1_{\lambda} \).
We can think of \(\lambda\) as a basis for \(\mathfrak{h}^\ast\) dual to the basis \(H\) of \(\mathfrak{h}\).
The weight-\(\lambda\) space of \(M_{\lambda} \) is spanned by \(1_{\lambda}\) and preserved by any central element \(z\), so \(z 1_{\lambda} = c 1_{\lambda}\) for a scalar \(c\); since \(1_{\lambda}\) generates \(M_{\lambda}\) and \(z\) is central, \( Z(U \mathfrak{sl}_{2}) \) acts on \(M_{\lambda}\) by scalars.
The image of \(z \in Z(U\mathfrak{sl}_{2}) \) in \(\mathbb{C}[\mathfrak{h}] = \mathbb{C}[\lambda]\) is the function sending \(\lambda\) to the scalar \( z|_{M_{\lambda}} \).</p>

<p>In the case of \( \mathfrak{sl}_{2} \), we know the center of \(U\mathfrak{sl}_{2} \) is generated by the Casimir operator \(\Omega = EF + FE + \frac{1}{2} H^2 \). That is, \[ Z(U(\mathfrak{sl}_{2})) \simeq \mathbb{C}[\Omega]. \]</p>

<p>Let’s compute the action of Casimir \(\Omega\) on \(1_\lambda\):</p>

<p><img src="/images/Screenshot from 2023-10-17 15-42-48.png" alt="" /></p>

<p>So we have shown that under the Harish-Chandra map,
<img src="/images/Screenshot from 2023-10-17 15-42-52.png" alt="" /></p>

<p>Let’s check that \(\frac{1}{2}\lambda^2 + \lambda\) is invariant under the dot action of \(W\). For \(\mathfrak{sl}_2\), we have \([H, E]=2E\), so the root associated to \(E\) is the function \(\alpha(H) = 2\). Then \(\rho\) is \(\frac12\) of the sum of the positive roots; in other words, \(\rho = \frac12 (2) = 1\). The dot action of the nontrivial element of \(W\) sends \(\lambda \mapsto -(\lambda+1)-1 = -\lambda - 2.\)</p>
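<p>Indeed, substituting \(-\lambda - 2\) for \(\lambda\) confirms the invariance:</p>

```latex
\tfrac{1}{2}(-\lambda-2)^2 + (-\lambda-2)
  = \tfrac{1}{2}\left(\lambda^2 + 4\lambda + 4\right) - \lambda - 2
  = \tfrac{1}{2}\lambda^2 + \lambda .
```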

<p>So we have shown that under the Harish-Chandra map,
<img src="/images/Screenshot from 2023-10-17 15-42-58.png" alt="" /></p>

<h2 id="duflo">Duflo</h2>

<p>The Duflo map is defined on the level of vector spaces by acting by a linear operator \(\partial_q: S\mathfrak g \to S\mathfrak g \) followed by the symmetrization map \(S\mathfrak{g} \to U\mathfrak{g} \). We start by expanding a function \(q:\mathfrak{g} \to \mathbb C \) into a power series: a power series expansion of \(q: \mathfrak{g} \to \mathbb{C}\) in terms of a coordinate basis of \(\mathfrak{g} \) is a power series in \(\mathfrak{g}^\ast \). To get \(\partial_q \) from \(q \), we replace each element of \(\mathfrak{g}^\ast \) in the power series with the corresponding partial derivative, obtaining a differential operator \(\text{Sym}\mathfrak{g} \to \text{Sym}\mathfrak{g} \) which we apply to the polynomials that comprise \(\text{Sym}\mathfrak{g} \).</p>

<p>We now define the power series \(q(x) \). Let \(p(x) \) be the function \(\mathfrak g \to \mathbb C \) given by 
\[ p(x) = \det\left( \frac{\sinh(\mathrm{ad}(x)/2)}{\mathrm{ad}(x)/2}\right) = 1+ \sum_{n=1}^\infty \frac{1}{2^{2n}(2n+1)!}\mathrm{tr}(\mathrm{ad}(x)^{2n}), \]
and let \(q(x) \) be the square root of \(p(x) \) with \(q(0) = 1 \).</p>

<p>We now explicitly express the process of computing the Duflo map with our favorite example \(\mathfrak{sl}_2 \), using the basis \({E,F,H} \).
We know the center of \(U\mathfrak{sl}_2 \) is generated by the Casimir operator \(\Omega = EF + FE + \frac{1}{2} H^2 \).
Hence, it suffices to compute the action of the Duflo operator on \(\frac{1}{2}H^2 + 2EF \in (S\mathfrak{g})^{\mathfrak g}\). 
Since the Casimir is quadratic, we only need to know \(q \) up to second order: we have \(q(x) = 1 + \frac{1}{2} \left(\frac{1}{4(6)}tr(ad(x)^2)\right) + O(x^4) = 1 + \frac{1}{48}tr(ad(x)^2) + O(x^4) \).
Let’s compute \(tr(ad(x)^2) \) in our coordinates \({E,F,H} \).
If 
\[ x = \begin{pmatrix} a &amp; b \\ c &amp; -a \end{pmatrix} = aH + bE + cF ,\]
then \(\mathrm{tr}(\mathrm{ad}(x)^2) = 8a^2 + 8bc \).
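<p>This trace identity can also be machine-checked. The Python sketch below (the bracket-and-coordinates encoding is our own choice) builds \(\mathrm{ad}(x)\) in the ordered basis \((H, E, F)\) from \(2\times 2\) commutators and verifies \(\mathrm{tr}(\mathrm{ad}(x)^2) = 8a^2 + 8bc\) on a grid of integer samples:</p>

```python
def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(X, Y):
    """The commutator [X, Y] = XY - YX of 2x2 matrices."""
    XY, YX = mmul(X, Y), mmul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(2)] for i in range(2)]

H = [[1, 0], [0, -1]]
E = [[0, 1], [0, 0]]
F = [[0, 0], [1, 0]]

def coords(M):
    """Coordinates of a traceless 2x2 matrix M = alpha*H + beta*E + gamma*F."""
    return [M[0][0], M[0][1], M[1][0]]

def ad(x):
    """Matrix of ad(x) in the ordered basis (H, E, F); columns are bracket images."""
    cols = [coords(bracket(x, B)) for B in (H, E, F)]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def tr_ad_sq(a, b, c):
    """tr(ad(x)^2) for x = aH + bE + cF."""
    M = ad([[a, b], [c, -a]])
    return sum(M[i][j] * M[j][i] for i in range(3) for j in range(3))

# Check tr(ad(x)^2) = 8a^2 + 8bc on a grid of sample values.
for a in range(-2, 3):
    for b in range(-2, 3):
        for c in range(-2, 3):
            assert tr_ad_sq(a, b, c) == 8 * a * a + 8 * b * c
```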
Hence our differential operator \(\partial_q \) is 
\[ \partial_q = 1 + \frac{1}{48} (8 \partial_H^2 + 8 \partial_E \partial_F) + h.o.t. = 1 + \frac{1}{6}(\partial_H^2 + \partial_E\partial_F) + h.o.t.\]</p>

<p>Applying this differential operator to 
\(\frac{1}{2}H^2 + 2 EF \in (\text{Sym}\mathfrak g)^{\mathfrak g}\)
gives 
\[ \partial_q(\frac{1}{2}H^2 + 2EF) = \frac{1}{2}H^2 + 2EF + \frac{1}{6}(1 + 2).\]
So applying the symmetrization map gives 
\[ \operatorname{Duflo}\left(\frac{1}{2}H^2 + 2EF\right) = \Omega + \frac{1}{2},\]
where \(\Omega \) is the Casimir.</p>
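<p>Since \(\text{Sym}\,\mathfrak g\) is commutative, we can replay this computation with ordinary commuting variables. A minimal sketch with sympy (my addition; we truncate \(\partial_q\) at second order, which is all a quadratic element sees):</p>

```python
import sympy as sp

H, E, F = sp.symbols('H E F')  # commuting stand-ins for Sym(sl2)
f = sp.Rational(1, 2)*H**2 + 2*E*F  # the symmetrized Casimir

# Duflo operator truncated at second order:
# 1 + (1/6)(d_H^2 + d_E d_F); higher-order terms annihilate quadratics
Df = f + sp.Rational(1, 6)*(sp.diff(f, H, 2) + sp.diff(f, E, F))
print(sp.expand(Df - f))  # 1/2, the shift turning Omega into Omega + 1/2
```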

<h2 id="chevalley">Chevalley</h2>

<p>Now we compare with the Chevalley isomorphism. The Chevalley map is defined by viewing an element of \(\text{Sym}(\mathfrak g)^{\mathfrak g}\)
as an element of \(\mathbb C[\mathfrak g]\) via the Killing form, 
restricting to \(\mathfrak h\),
then again using the Killing form to get an element of \(U\mathfrak h\).</p>

<p>We now express this explicitly in the case of \(\mathfrak{sl}_2\). Under the Killing form, the dual basis to \(\langle H,E,F\rangle\) is \(\langle \frac{1}{8}H, \frac{1}{4}F, \frac{1}{4}E\rangle\).
Hence, dualizing, restricting to \(\mathfrak h\), and dualizing again sends \(H\) to \(H\), \(E\) to \(0\), and \(F\) to \(0\),
thus sending our element \(\frac{1}{2}H^2 + 2EF \in \text{Sym}(\mathfrak g)^{\mathfrak g}\) to \(\frac{1}{2}H^2\). This is a \(W\)-invariant element of \(U\mathfrak h\) which acts on a highest weight vector \(1_{\lambda}\) by \(\frac{1}{2}\lambda^2\).</p>

<h2 id="filling-in-the-square">Filling in the Square</h2>

<p>We know the center of \( U\mathfrak{sl}_2 \) is generated by the Casimir operator \(\Omega = EF + FE + \frac{1}{2} H^2\). Hence, it suffices to diagram chase this generator through our square to conclude that our square commutes. We showed that the Duflo map sends the symmetrized Casimir operator \(\frac{1}{2}H^2 + 2EF \in (S\mathfrak g)^{\mathfrak g} \) to \( \Omega + \frac{1}{2}.\) Further composing with the Harish-Chandra map gives that 
\(\frac{1}{2}H^2 + 2EF \in (S\mathfrak g)^{\mathfrak g}\)
is sent to \(\frac{1}{2}\lambda^2 + \lambda + \frac{1}{2} = \frac{1}{2}\left(\lambda+ 1 \right)^2\).</p>

<p>We now diagram chase the other side. We showed that the Chevalley map sends the symmetrized Casimir operator \(\frac{1}{2}H^2 + 2EF\) to \(\frac{1}{2} \lambda^2\), where \( \lambda \) is the eigenvalue of \(H\) on the highest weight vector \( 1_\lambda \). The \(\rho\)-shift map sends \(\lambda\) to \(\lambda + 1\), so the image \(\frac{1}{2} \lambda^2\) of the Chevalley map is sent to \(\frac{1}{2} (\lambda + 1)^2\). Therefore, the diagram commutes. We have filled in the square victoriously!</p>
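<p>We can also watch both sides of the square agree in a concrete irrep. A minimal numerical sketch (my addition, using the standard weight basis \(v_0, \dots, v_n\) of the irrep of highest weight \(\lambda = n\)): the Casimir \(\Omega\) acts by the scalar \(\frac{1}{2}n^2 + n\), so \(\Omega + \frac{1}{2}\) acts by \(\frac{1}{2}(n+1)^2\), matching the Harish-Chandra side.</p>

```python
import numpy as np

n = 3            # highest weight lambda = n
d = n + 1        # dimension of the irrep
# weight basis v_0, ..., v_n: H v_k = (n-2k) v_k, F v_k = v_{k+1},
# E v_k = k(n-k+1) v_{k-1}
H = np.diag([float(n - 2*k) for k in range(d)])
E = np.zeros((d, d))
F = np.zeros((d, d))
for k in range(d - 1):
    F[k + 1, k] = 1.0
    E[k, k + 1] = (k + 1) * (n - k)

Omega = E @ F + F @ E + 0.5 * H @ H
# Omega is the scalar n^2/2 + n, so Omega + 1/2 is (n+1)^2/2
print(Omega[0, 0] + 0.5, (n + 1)**2 / 2)  # 8.0 8.0
```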

<p><img src="/images/Screenshot from 2023-10-17 15-51-12.png" alt="" /></p>]]></content><author><name></name></author><category term="math" /><summary type="html"><![CDATA[This post is written together with Josh Mundinger. We seek to understand the relations between \(1/2\)’s that appear across mathematics. From the Riemann Hypothesis to the L2 norm, we aim to see the myriad and enticing ways this unfurls; each instance of \(1/2\) connected in an anarchic network of equals. In this blog post, we examine a specific example arising in representation theory: the center of the universal enveloping algebra \(U\mathfrak g\) of a Lie algebra \(\mathfrak g\).]]></summary></entry><entry><title type="html">Stop Staring and Compute! Automorphism Groups of Rational Curves</title><link href="https://rin.io/stop-staring-and-compute-automorphism-groups-of-curves/" rel="alternate" type="text/html" title="Stop Staring and Compute! Automorphism Groups of Rational Curves" /><published>2022-02-23T00:00:00+00:00</published><updated>2022-02-23T00:00:00+00:00</updated><id>https://rin.io/stop-staring-and-compute-automorphism-groups-of-curves</id><content type="html" xml:base="https://rin.io/stop-staring-and-compute-automorphism-groups-of-curves/"><![CDATA[<p>Let’s compute the automorphism groups of some curves. Often presented as an insurmountable task, we must courageously go forward. There are many ways to do this! We will be using 3 different algorithms, so choose according to your taste. If you do not have SageMath installed on your computer, you can use <a href="https://sagecell.sagemath.org/">the sagecell emulator</a>.</p>

<h2 id="directly-into-your-terminal-using-sagemath">Directly into your terminal using SageMath</h2>

<p>Let’s start with the most straightforward one. Run sage and type this into your terminal, hitting enter after each line.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>A.&lt;x,y&gt;=AffineSpace(QQ,2)
C=Curve(y^8-x*(x-1)^4)
S=C.riemann_surface(prec=100)
G=S.symplectic_automorphism_group()
print(G.gens()[0:15])
print(G.order())
</code></pre></div></div>

<p>Technical P.S.: For sufficiently high degree curves, this will throw a segmentation fault. One must then narrow down where it is coming from. There are three hidden steps before the endomorphism basis calculation, so check which one throws the error:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>S=C.riemann_surface(prec=100)
_ = S.homology_basis() # [1]
_ = S.cohomology_basis() # [2]
_ = S.period_matrix() # [3]
</code></pre></div></div>

<h2 id="sage-function-for-rational-curves">Sage Function for Rational Curves</h2>

<p>Next, the same method wrapped in a function. Save this file, and start sage in the same folder. Then, load("file.sage") will run the program. This is just a rephrasing of the previous method.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>from sage.schemes.riemann_surfaces.riemann_surface import RiemannSurface, RiemannSurfaceSum
R.&lt;x,y&gt; = QQ[]
def aut(f):
    print(f)
    S = RiemannSurface(f, prec=100)
    try:
        G = S.symplectic_automorphism_group()
        print("OH LAWD ITS COMING")
        print(G.gens()[0:15])
        print(G.order())
        #check_order(G.gens())
    except Exception as e:
        print(e)
# any plane curve goes here
f = y^8 - x*(x-1)^4
aut(f)
</code></pre></div></div>

<h2 id="alternative-code">Alternative code</h2>

<p>We can also do:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># f2 = any plane curve, f1 = a hyperelliptic curve of the same genus
S1 = RiemannSurface(f1, prec = 100)
S2 = RiemannSurface(f2, prec = 100)
T = RiemannSurfaceSum([ S1 ])
T.tau = S2.riemann_matrix()
</code></pre></div></div>

<h2 id="finding-group-structure-of-output-set">Finding Group Structure of Output Set</h2>

<p>We get the following matrix output. Then, we clean the matrix output and declare it to be a group in GAP (available as the gap-core Linux package). GAP then returns a structure description.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>([ 0 0 1 -1]
[ 0 0 -1 0]
[ 0 1 1 0]
[ 1 1 0 1], [ 1 0 -1 1]
[ 0 1 1 0]
[ 0 -1 0 0]
[-1 -1 0 0], [ 1 1 0 1]
[-1 0 1 -1]
[ 0 -1 -1 1]
[-1 -1 -1 0], [ 0 -1 -1 1]
[ 1 1 1 0]
[-1 -1 0 -1]
[-1 0 1 -1], [ 0 0 0 -1]
[ 0 0 -1 1]
[ 1 1 1 0]
[ 1 0 0 1], [ 1 0 0 1]
[ 0 1 1 -1]
[-1 -1 0 0]
[-1 0 0 0], [ 0 0 -1 1]
[ 0 0 0 -1]
[ 1 0 0 1]
[ 1 1 1 0], [ 0 0 -1 0]
[ 0 0 1 -1]
[ 1 1 0 1]
[ 0 1 1 0], [ 1 1 1 0]
[ 0 -1 -1 1]
[-1 0 1 -1]
[-1 -1 0 -1], [ 1 0 -1 1]
[-1 -1 0 -1]
[ 1 1 1 0]
[ 0 1 1 -1], [ 0 -1 -1 1]
[-1 0 0 -1]
[ 1 0 0 0]
[ 1 1 0 0], [ 0 1 1 0]
[ 1 0 -1 1]
[-1 -1 0 0]
[ 0 -1 0 0], [0 1 0 0]
[1 0 0 0]
[0 0 0 1]
[0 0 1 0], [-1 0 0 0]
[ 1 1 0 0]
[ 0 -1 -1 1]
[ 1 0 0 1], [ 1 0 0 1]
[-1 -1 -1 0]
[ 0 0 1 -1]
[ 0 0 0 -1])
</code></pre></div></div>

<p>We clean this output with regex, or manually as follows:</p>

<ol>
  <li>find ], [ replace ]],[[</li>
  <li>find \n, replace ,</li>
  <li>find doublespace, replace space</li>
  <li>find [space, replace [</li>
  <li>find space replace ,</li>
  <li>find ,, replace ,</li>
  <li>find [, replace [</li>
  <li>change ([ and )] to ([[ and ]]) resp.</li>
  <li>Open gap and plug in:</li>
</ol>
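<p>The manual find-and-replace steps above can be automated. A minimal sketch in plain Python (my own hypothetical helper, assuming \(4 \times 4\) matrices printed in the Sage format shown above): it collects the rows with a regex, regroups them four at a time, and emits the GAP command.</p>

```python
import re

def sage_gens_to_gap(output, dim=4):
    """Turn Sage's printed tuple of integer matrices into a GAP command.
    Rows look like '[ 0 0 1 -1]'; we regroup them dim rows at a time."""
    rows = re.findall(r"\[([-\d ]+)\]", output)
    rows = [",".join(r.split()) for r in rows]
    mats = ["[[" + "],[".join(rows[i:i + dim]) + "]]"
            for i in range(0, len(rows), dim)]
    return "G := Group(" + ",".join(mats) + "); StructureDescription(G);"

# a two-generator sample in the same shape as the output above
sample = """([ 0 0 1 -1]
[ 0 0 -1 0]
[ 0 1 1 0]
[ 1 1 0 1], [0 1 0 0]
[1 0 0 0]
[0 0 0 1]
[0 0 1 0])"""
print(sage_gens_to_gap(sample))
```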

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>G := Group([[0,0,1,-1],[0,0,-1,0],[0,1,1,0],[1,1,0,1]],[[1,0,-1,1],[0,1,1,0],[0,-1,0,0],[-1,-1,0,0]],[[1,1,0,1],[-1,0,1,-1],[0,-1,-1,1],[-1,-1,-1,0]],[[0,-1,-1,1],[1,1,1,0],[-1,-1,0,-1],[-1,0,1,-1]],[[0,0,0,-1],[0,0,-1,1],[1,1,1,0],[1,0,0,1]],[[1,0,0,1],[0,1,1,-1],[-1,-1,0,0],[-1,0,0,0]],[[0,0,-1,1],[0,0,0,-1],[1,0,0,1],[1,1,1,0]],[[0,0,-1,0],[0,0,1,-1],[1,1,0,1],[0,1,1,0]],[[1,1,1,0],[0,-1,-1,1],[-1,0,1,-1],[-1,-1,0,-1]],[[1,0,-1,1],[-1,-1,0,-1],[1,1,1,0],[0,1,1,-1]],[[0,-1,-1,1],[-1,0,0,-1],[1,0,0,0],[1,1,0,0]],[[0,1,1,0],[1,0,-1,1],[-1,-1,0,0],[0,-1,0,0]],[[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]],[[-1,0,0,0],[1,1,0,0],[0,-1,-1,1],[1,0,0,1]],[[1,0,0,1],[-1,-1,-1,0],[0,0,1,-1],[0,0,0,-1]]); StructureDescription(G);
</code></pre></div></div>

<p>Check that this structure description matches the order claimed by the sage program. For context on how this works, see section 5.1 of this paper: <a href="https://arxiv.org/pdf/1811.07007v2.pdf">https://arxiv.org/pdf/1811.07007v2.pdf</a>. The reason the numerical approximation is sound is explained in section 5.3.</p>

<p>If you’d like to go even further and compute the period matrix as well, see section 5.2 of this paper: <a href="https://arxiv.org/pdf/1811.07007v2.pdf">https://arxiv.org/pdf/1811.07007v2.pdf</a>.</p>

<h2 id="sage-program-for-superelliptic-only-curves">Sage Program for Superelliptic (only?) Curves</h2>

<p>This is much faster than the previous code, was <a href="https://doc.sagemath.org/html/en/reference/curves/sage/schemes/riemann_surfaces/riemann_surface.html">implemented by Bruin-Sijsling-Zotine</a>, and is based on this algorithm of Molin-Neurohr: <a href="https://arxiv.org/abs/1707.07249">https://arxiv.org/abs/1707.07249</a>. Unfortunately, I also find it quite finicky sometimes, which is why I list it second.</p>

<h2 id="magma-program-for-superelliptic-curves">Magma Program for Superelliptic Curves</h2>

<p>Magma doesn’t throw segmentation faults as often. To use this method, one must have Magma on your own machine (not just the online Magma calculator), as it is necessary to concurrently use this package of Edgar Costa, which you must download: <a href="https://github.com/edgarcosta/endomorphisms">https://github.com/edgarcosta/endomorphisms</a></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SetVerbose("EndoFind", 0);
SetVerbose("CurveRec", 0);
prec := 300;
F := RationalsExtra(prec);
CC := F`CC;
R&lt;x&gt; := PolynomialRing(F);
p := x^4 - x;
e := 3;
// Construct superelliptic curve y^3 = x^4 - x
S := RiemannSurface(p, e : Precision := Precision(CC));
P := BigPeriodMatrix(S);
P := ChangeRing(P, CC);
GeoEndoRepCC := GeometricEndomorphismRepresentationCC(P);
GeoEndoRep := GeometricEndomorphismRepresentation(P, F);
print GeoEndoRep;
</code></pre></div></div>]]></content><author><name></name></author><category term="code" /><category term="math" /><summary type="html"><![CDATA[Let’s compute the automorphism groups of some curves. Often presented as an insurmountable task, we must courageously go forward. There are many ways to do this! We will be using 3 different algorithms, so choose according to your taste. If you do not have SageMath installed on your computer, you can use the sagecell emulator.]]></summary></entry></feed>