Ashika Jayanthy's Blog
Learning how I learn
Tuesday, December 23, 2025
I’ve always found that my curiosity moves in cycles, and right now, the tide is pulling me away from Number Theory. While the journey through Silverman has been rewarding, the recent struggle has started to outweigh the joy. Since this is a hobby driven by passion rather than a deadline, I’m choosing to step back and let the soil sit fallow for a while. I know the interest will return: it always does. In the meantime, I’m shifting my focus to Schuller’s lectures and my upcoming posts on developmental biology.
Monday, December 22, 2025
Playing around with Collatz
I spoke to Gemini about my dilemma and it suggested the following:
Since you’re looking to inject some life back into the grind, let’s pivot away from dense proofs and toward a problem that feels more like a "numerical magic trick."
One of the most famous and accessible gateways into recreational number theory is the Collatz Conjecture (often called the 3n + 1 problem). It is deceptively simple to understand but has remained unsolved for nearly a century.
The Challenge: The "Hailstone" Numbers
The rules are simple. Pick any positive integer n and follow these two rules:
* If n is even, divide it by 2: n/2
* If n is odd, triple it and add 1: 3n + 1
The Goal: Repeat the process. The "conjecture" is that no matter what number you start with, you will eventually hit the number 1 (and then get stuck in a 4 → 2 → 1 loop).
Try This:
Pick a small number like 7 and trace its path. You’ll notice it doesn't just drop straight down; it "hails" upward before crashing:
7 → 22 → 11 → 34 → 17 → 52 → 26 → 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
Why This is "Fun" (and Frustrating):
* The Chaos: Starting with 26 takes only 10 steps to reach 1. Starting with 27 takes 111 steps, climbing as high as 9,232 before finally plummeting back to 1.
* The Mystery: Mathematicians have tested numbers up to 2^68 and they all lead to 1. However, no one has been able to prove that it happens for every single number. Paul Erdős once said, "Mathematics may not be ready for such problems."
Your "Recreational" Task
If you want a quick distraction from your formal studies, try to find a number under 100 that takes the most steps to reach 1 (this is called its total stopping time).
I wrote some quick Python code to compute this, and the answer is 97, which takes 118 steps to reach 1.
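A minimal version of that computation looks like this (a sketch of the idea, not the exact script I ran):

```python
def collatz_steps(n):
    """Number of Collatz steps for n to reach 1 (its total stopping time)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Which starting value under 100 takes the longest to reach 1?
champion = max(range(1, 100), key=collatz_steps)
print(champion, collatz_steps(champion))  # 97 118
```

Running it also confirms the chaos mentioned above: 26 takes 10 steps, while its neighbor 27 takes 111.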
Square free numbers
When I first encountered square-free numbers, I did not start with a definition. I started with a picture.
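For anyone who wants the definition to sit alongside the picture: an integer is square-free when no perfect square larger than 1 divides it. A toy check (unoptimized trial division, just to make the idea concrete):

```python
def is_squarefree(n):
    """True if no perfect square greater than 1 divides n."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

# The square-free numbers up to 30
print([n for n in range(1, 31) if is_squarefree(n)])
```

Numbers like 12 (divisible by 4) and 18 (divisible by 9) drop out; 30 = 2 · 3 · 5 stays in.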
Sunday, December 21, 2025
Geometry is inevitable in development
I keep noticing that the more seriously I think about development, the more geometry sneaks into the picture.
At first, this feels a little strange. Development is about genes, molecules, signaling pathways, and transcription factors. None of these look geometric in any obvious way. Yet the moment we stop staring at single genes and start looking at populations of cells across time, ideas like distance, direction, and cost become hard to avoid.
The shift happens when we change what we think the object of interest is. If we focus on individual cells, we naturally ask what a cell is. What markers it expresses. What label it should be given. But development is not really about isolated cells. It is about how large populations reorganize themselves over time. The object is no longer a point. It is a distribution.
Once you are dealing with distributions, geometry arrives almost automatically. A distribution lives on a space, and that space has structure. In single cell data, that space is gene expression space, where each axis corresponds to a gene and each cell is a point with coordinates given by RNA counts. This space is high dimensional, but it is not arbitrary. Some directions correspond to coherent biological programs. Others are noisy or effectively inaccessible. The geometry is shaped by regulatory constraints.
Now think about development in this space. At an early time, the population occupies one region. Later, it occupies another. Development is not just that new regions appear. It is that mass moves. Cells drift, split, and concentrate along certain directions. Asking how this happens without invoking geometry starts to feel artificial.
This is why optimal transport feels less like a clever mathematical trick and more like a natural language for the problem. The moment you ask how one distribution becomes another, you are implicitly asking about paths and costs. How far does mass have to move? Which directions are cheap? Which are expensive? What rearrangements preserve global structure while allowing local change?
What matters here is not the optimization itself. Cells are not solving equations. The optimization is a lens we use to reveal constraints that are already there. When the minimal cost flow between two developmental stages has a particular structure, that structure is telling us something about the underlying landscape. It is telling us which futures were easy to reach and which required significant coordinated change.
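A toy illustration of that minimal-cost-flow idea, using made-up one-dimensional values standing in for expression levels (real single-cell data is high dimensional, and real analyses do far more than this; this sketch assumes only NumPy and SciPy):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# Hypothetical 1-D "expression" values for a cell population
# sampled at an early and a later developmental timepoint.
early = rng.normal(loc=0.0, scale=1.0, size=500)
late = rng.normal(loc=2.0, scale=1.0, size=500)

# The 1-Wasserstein distance is the minimal total distance that
# probability mass must travel to turn one distribution into the other.
cost = wasserstein_distance(early, late)
print(round(cost, 2))
```

Here the cost comes out close to the mean shift of 2, because the cheapest plan simply slides every cell along the axis. Structure in the optimal plan, not the number itself, is what carries the biological information.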
Seen this way, fate is no longer a switch that flips at a precise moment. It is a direction in space that becomes progressively cheaper to move along. Early cells do not contain a fate label in any sharp sense. They sit in regions where some futures are nearby and others are far away. Geometry replaces decision.
This perspective also explains why development is so robust. If outcomes depended on brittle, local decisions, small perturbations would derail the process. But if development follows broad, low cost corridors in a structured space, then noise can be absorbed. Many microscopic paths can lead to the same macroscopic arrangement. Geometry makes room for flexibility without losing form.
So when geometry appears in biology, I no longer see it as metaphor. I see it as a consequence of asking the right kind of question. The moment we ask how something becomes something else, rather than what it is at a single moment, we are forced to care about space, distance, and flow.
For me, this has been the quiet lesson behind optimal transport. Not that it gives better answers in every case, but that it nudges me toward a different way of seeing. Development stops looking like a sequence of states and starts looking like a continuous, constrained motion. And once you see it that way, geometry is not optional. It is already there, waiting to be named.
Lhx2 in brain development
Lhx2 sits at a critical junction in early brain development, acting less like a builder of specific structures and more like a regulator of what is allowed to form. It is a transcription factor expressed early in the developing forebrain, and its main role is to set spatial, developmental, and identity boundaries within neural tissue.
One of Lhx2’s most important functions is in cortical patterning. During early development, the cortex must choose between multiple possible fates: sensory cortex, hippocampus, or signaling centers that instruct surrounding tissue. Lhx2 suppresses the formation of these signaling centers, particularly the cortical hem. By doing so, it preserves cortical identity and prevents large regions of the forebrain from being diverted into organizer-like roles.
This constraint-based role has striking consequences. When Lhx2 is absent or reduced, cortical tissue can be overtaken by hem-like signaling regions, leading to excessive hippocampal patterning at the expense of neocortex. When Lhx2 is present, it limits these signals, ensuring that hippocampal development is tightly localized and that the cortex develops with proper size and structure.
Seen this way, Lhx2 is not specifying fine-grained neural circuits or cell types. Instead, it enforces a global developmental contract: cortex remains cortex, signaling remains constrained, and downstream differentiation happens within safe bounds. Many later developmental processes depend on this early restriction.
Lhx2 is a reminder that brain development is not only about generating complexity, but about preventing the wrong kinds of complexity from emerging too early or in the wrong place.
Saturday, December 20, 2025
Perks of thinking big picture → small
My learning style tends to start from a distance. I need to see the shape of an idea before I can engage with its details. I begin with a rough, sometimes imprecise, big-picture understanding and then return to it repeatedly, each time zooming in a little more. My previous post touches on this way of thinking.
The advantage of approaching things this way is that connections appear early. Even before I understand the machinery of a subject, I can often see how it resonates with ideas from elsewhere. That is largely what makes this blog possible. It lives in the space where incomplete understanding is still useful, where themes and structures can be compared long before the proofs are fully in place.
The downside, of course, is that the details take time. Precision, technique, and fluency do not arrive all at once. They have to be built patiently, and sometimes the gap between the intuition and the formalism feels uncomfortably wide. But this is the phase I am in now: slowly filling in the missing pieces, tightening loose ideas, and learning to stay with a problem long enough for it to become sharp.
So far, this process has been demanding but deeply engaging. The excitement comes not from quick mastery, but from watching vague outlines turn into something solid. I don’t know how long this refinement will take, and I’m no longer in a hurry to find out. For now, it feels like the right way to learn, and I hope it continues in this direction.
A cool talk I attended at TIFR
I attended this talk on knots, hard problems and curved spaces at TIFR recently. This is where I was first introduced to knots. At first glance, knots feel almost too physical to belong in mathematics. You take a piece of rope, you tangle it, you pull on it, and you ask a very simple question: is this the same knot as before, or did I actually make something new?
That question turns out to be much deeper than it sounds.
In math, a knot is not a real rope. It has no thickness. You are allowed to stretch it, bend it, and wiggle it as much as you like, as long as you never cut it and never let it pass through itself. Two knots are considered the same if you can deform one into the other using only these allowed moves.
This immediately rules out a lot of everyday intuition. Tightness doesn’t matter. Length doesn’t matter. Even how “messy” it looks doesn’t matter. Only the underlying structure survives.
The simplest knot is actually no knot at all. A simple loop is called the unknot. Many complicated-looking tangles turn out, after enough patient deformation, to be just the unknot in disguise. This already hints at the problem: how do you know whether a knot is truly knotted?
Staring at it doesn’t scale.
This is where groups enter the picture.
A group, very roughly, is a way of keeping track of actions that can be combined and undone. You don’t need the full definition yet. What matters is this: groups are excellent at remembering structure when shapes are allowed to bend and stretch.
The key idea in knot theory is to associate a group to a knot. Not because the knot “is” a group, but because the group acts like a fingerprint. If two knots are genuinely different, their associated groups will also be different in precise ways.
Here is the intuition.
Take a knot sitting in space. Now imagine walking around it without crossing through the knot itself. You can loop around different parts, go over and under crossings, and return to where you started. Different paths can sometimes be smoothly deformed into each other, and sometimes they can’t. The rules for how these paths combine form a group.
This group encodes how the space around the knot is tangled.
If the knot is trivial, the surrounding space is simple, and the group is simple. If the knot is genuinely knotted, the space around it has twists you cannot undo, and the group remembers them even when the picture looks innocent.
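To make this concrete (a standard example from knot theory, not something covered in the talk): the fundamental group of the space around the unknot is just the integers, one winding number per loop, while the trefoil's knot group has a genuinely tangled presentation:

```latex
\pi_1\left(S^3 \setminus \text{unknot}\right) \cong \mathbb{Z},
\qquad
\pi_1\left(S^3 \setminus \text{trefoil}\right) \cong \langle\, x, y \mid xyx = yxy \,\rangle.
```

The trefoil group is non-abelian (it maps onto the symmetric group S_3), while the integers are abelian, so no amount of deformation can ever turn the trefoil into the unknot.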
What I like about this construction is that it reflects a broader mathematical pattern. Instead of trying to solve a visual problem visually, you translate it into algebra. You stop asking “what does this look like?” and start asking “what operations are allowed, and what do they force?”
For a beginner, the magic is not in the technical details, but in the shift in perspective. A knot is no longer just an object. It is a source of constraints. The group associated to it captures what you cannot do, no matter how cleverly you deform it.
This also explains why knots show up in unexpected places: DNA, fluids, physics, even computation. Anywhere structure is preserved under continuous deformation, knot-like reasoning applies. And wherever constraints matter more than appearances, groups tend to follow.
You don’t need to master group theory to appreciate this connection. What matters is the lesson it teaches early on: difficult geometric questions often become clearer when you ask what transformations are allowed and what information survives them.