I’ve always found that my curiosity moves in cycles, and right now, the tide is pulling me away from Number Theory. While the journey through Silverman has been rewarding, the recent struggle has started to outweigh the joy. Since this is a hobby driven by passion rather than a deadline, I’m choosing to step back and let the soil sit fallow for a while. I know the interest will return: it always does. In the meantime, I’m shifting my focus to Schuller’s lectures and my upcoming posts on developmental biology.
Tuesday, December 23, 2025
Monday, December 22, 2025
Playing around with Collatz
I spoke to Gemini about my dilemma and it suggested the following:
Since you’re looking to inject some life back into the grind, let’s pivot away from dense proofs and toward a problem that feels more like a "numerical magic trick."
One of the most famous and accessible gateways into recreational number theory is the Collatz Conjecture (often called the 3n + 1 problem). It is deceptively simple to understand but has remained unsolved for nearly a century.
The Challenge: The "Hailstone" Numbers
The rules are simple. Pick any positive integer n and follow these two rules:
* If n is even, divide it by 2: n/2
* If n is odd, triple it and add 1: 3n + 1
The Goal: Repeat the process. The "conjecture" is that no matter what number you start with, you will eventually hit the number 1 (and then get stuck in a 4 \to 2 \to 1 loop).
Try This:
Pick a small number like 7 and trace its path. You’ll notice it doesn't just drop straight down; it "hails" upward before crashing:
7 \to 22 \to 11 \to 34 \to 17 \to 52 \to 26 \to 13 \to 40 \to 20 \to 10 \to 5 \to 16 \to 8 \to 4 \to 2 \to 1
Why This is "Fun" (and Frustrating):
* The Chaos: Starting with 26 takes only 10 steps to reach 1. Starting with 27 takes 111 steps, climbing as high as 9,232 before finally plummeting back to 1.
* The Mystery: Mathematicians have tested numbers up to 2^{68} and they all lead to 1. However, no one has been able to prove that it happens for every single number. Paul Erdős once said, "Mathematics may not be ready for such problems."
Your "Recreational" Task
If you want a quick distraction from your formal studies, try to find a number under 100 that takes the most steps to reach 1 (this is called its total stopping time).
I wrote some quick Python code to compute this: the winner is 97, which takes 118 steps to reach 1.
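The script itself was throwaway, but something like this minimal sketch reproduces the result:

```python
def total_stopping_time(n):
    """Number of Collatz steps needed for n to reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Starting value under 100 with the longest path to 1.
best = max(range(1, 100), key=total_stopping_time)
print(best, total_stopping_time(best))  # 97 118
```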
Square-free numbers
When I first encountered square-free numbers, I did not start with a definition. I started with a picture.
Sunday, December 21, 2025
Geometry is inevitable in development
I keep noticing that the more seriously I think about development, the more geometry sneaks into the picture.
At first, this feels a little strange. Development is about genes, molecules, signaling pathways, and transcription factors. None of these look geometric in any obvious way. Yet the moment we stop staring at single genes and start looking at populations of cells across time, ideas like distance, direction, and cost become hard to avoid.
The shift happens when we change what we think the object of interest is. If we focus on individual cells, we naturally ask what a cell is. What markers it expresses. What label it should be given. But development is not really about isolated cells. It is about how large populations reorganize themselves over time. The object is no longer a point. It is a distribution.
Once you are dealing with distributions, geometry arrives almost automatically. A distribution lives on a space, and that space has structure. In single cell data, that space is gene expression space, where each axis corresponds to a gene and each cell is a point with coordinates given by RNA counts. This space is high dimensional, but it is not arbitrary. Some directions correspond to coherent biological programs. Others are noisy or effectively inaccessible. The geometry is shaped by regulatory constraints.
Now think about development in this space. At an early time, the population occupies one region. Later, it occupies another. Development is not just that new regions appear. It is that mass moves. Cells drift, split, and concentrate along certain directions. Asking how this happens without invoking geometry starts to feel artificial.
This is why optimal transport feels less like a clever mathematical trick and more like a natural language for the problem. The moment you ask how one distribution becomes another, you are implicitly asking about paths and costs. How far does mass have to move. Which directions are cheap. Which are expensive. What rearrangements preserve global structure while allowing local change.
What matters here is not the optimization itself. Cells are not solving equations. The optimization is a lens we use to reveal constraints that are already there. When the minimal cost flow between two developmental stages has a particular structure, that structure is telling us something about the underlying landscape. It is telling us which futures were easy to reach and which required significant coordinated change.
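To make this concrete for myself, here is a toy sketch with entirely made-up data: two synthetic "populations" of fifty points each in a two-gene expression space, matched by scipy's linear_sum_assignment as a cheap stand-in for a real optimal transport solver of the kind used on single cell data.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Hypothetical populations in a two-gene expression space.
early = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(50, 2))  # earlier time point
late = rng.normal(loc=[3.0, 2.0], scale=0.5, size=(50, 2))   # later time point

# Cost of moving each early cell to each late cell: squared distance.
cost = ((early[:, None, :] - late[None, :, :]) ** 2).sum(axis=-1)

# Minimal-cost matching between the populations; with equal numbers of
# equally weighted points this is a discrete optimal transport plan.
rows, cols = linear_sum_assignment(cost)
print("average transport cost:", cost[rows, cols].mean())
```

The number itself means little; what matters is that the matching tells you which regions of the early population feed which regions of the late one, and how expensive that movement was.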
Seen this way, fate is no longer a switch that flips at a precise moment. It is a direction in space that becomes progressively cheaper to move along. Early cells do not contain a fate label in any sharp sense. They sit in regions where some futures are nearby and others are far away. Geometry replaces decision.
This perspective also explains why development is so robust. If outcomes depended on brittle, local decisions, small perturbations would derail the process. But if development follows broad, low cost corridors in a structured space, then noise can be absorbed. Many microscopic paths can lead to the same macroscopic arrangement. Geometry makes room for flexibility without losing form.
So when geometry appears in biology, I no longer see it as metaphor. I see it as a consequence of asking the right kind of question. The moment we ask how something becomes something else, rather than what it is at a single moment, we are forced to care about space, distance, and flow.
For me, this has been the quiet lesson behind optimal transport. Not that it gives better answers in every case, but that it nudges me toward a different way of seeing. Development stops looking like a sequence of states and starts looking like a continuous, constrained motion. And once you see it that way, geometry is not optional. It is already there, waiting to be named.
Lhx2 in brain development
Lhx2 sits at a critical junction in early brain development, acting less like a builder of specific structures and more like a regulator of what is allowed to form. It is a transcription factor expressed early in the developing forebrain, and its main role is to set boundaries: spatial, developmental, and identity-related within neural tissue.
One of Lhx2’s most important functions is in cortical patterning. During early development, the cortex must choose between multiple possible fates: sensory cortex, hippocampus, or signaling centers that instruct surrounding tissue. Lhx2 suppresses the formation of these signaling centers, particularly the cortical hem. By doing so, it preserves cortical identity and prevents large regions of the forebrain from being diverted into organizer-like roles.
This constraint-based role has striking consequences. When Lhx2 is absent or reduced, cortical tissue can be overtaken by hem-like signaling regions, leading to excessive hippocampal patterning at the expense of neocortex. When Lhx2 is present, it limits these signals, ensuring that hippocampal development is tightly localized and that the cortex develops with proper size and structure.
Seen this way, Lhx2 is not specifying fine-grained neural circuits or cell types. Instead, it enforces a global developmental contract: cortex remains cortex, signaling remains constrained, and downstream differentiation happens within safe bounds. Many later developmental processes depend on this early restriction.
Lhx2 is a reminder that brain development is not only about generating complexity, but about preventing the wrong kinds of complexity from emerging too early or in the wrong place.
Saturday, December 20, 2025
Perks of thinking big picture --> small
My learning style tends to start from a distance. I need to see the shape of an idea before I can engage with its details. I begin with a rough, sometimes imprecise, big-picture understanding and then return to it repeatedly, each time zooming in a little more. My previous post touches on this way of thinking.
The advantage of approaching things this way is that connections appear early. Even before I understand the machinery of a subject, I can often see how it resonates with ideas from elsewhere. That is largely what makes this blog possible. It lives in the space where incomplete understanding is still useful, where themes and structures can be compared long before the proofs are fully in place.
The downside, of course, is that the details take time. Precision, technique, and fluency do not arrive all at once. They have to be built patiently, and sometimes the gap between the intuition and the formalism feels uncomfortably wide. But this is the phase I am in now: slowly filling in the missing pieces, tightening loose ideas, and learning to stay with a problem long enough for it to become sharp.
So far, this process has been demanding but deeply engaging. The excitement comes not from quick mastery, but from watching vague outlines turn into something solid. I don’t know how long this refinement will take, and I’m no longer in a hurry to find out. For now, it feels like the right way to learn, and I hope it continues in this direction.
A cool talk I attended at TIFR
I attended this talk on knots, hard problems and curved spaces at TIFR recently. This is where I was first introduced to knots. At first glance, knots feel almost too physical to belong in mathematics. You take a piece of rope, you tangle it, you pull on it, and you ask a very simple question: is this the same knot as before, or did I actually make something new?
That question turns out to be much deeper than it sounds.
In math, a knot is not a real rope. It has no thickness. You are allowed to stretch it, bend it, and wiggle it as much as you like, as long as you never cut it and never let it pass through itself. Two knots are considered the same if you can deform one into the other using only these allowed moves.
This immediately rules out a lot of everyday intuition. Tightness doesn’t matter. Length doesn’t matter. Even how “messy” it looks doesn’t matter. Only the underlying structure survives.
The simplest knot is actually no knot at all. A simple loop is called the unknot. Many complicated-looking tangles turn out, after enough patient deformation, to be just the unknot in disguise. This already hints at the problem: how do you know whether a knot is truly knotted?
Staring at it doesn’t scale.
This is where groups enter the picture.
A group, very roughly, is a way of keeping track of actions that can be combined and undone. You don’t need the full definition yet. What matters is this: groups are excellent at remembering structure when shapes are allowed to bend and stretch.
The key idea in knot theory is to associate a group to a knot. Not because the knot “is” a group, but because the group acts like a fingerprint. If two knots are genuinely different, their associated groups will also be different in precise ways.
Here is the intuition.
Take a knot sitting in space. Now imagine walking around it without crossing through the knot itself. You can loop around different parts, go over and under crossings, and return to where you started. Different paths can sometimes be smoothly deformed into each other, and sometimes they can’t. The rules for how these paths combine form a group.
This group encodes how the space around the knot is tangled.
If the knot is trivial, the surrounding space is simple, and the group is simple. If the knot is genuinely knotted, the space around it has twists you cannot undo, and the group remembers them even when the picture looks innocent.
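To see what "the group remembers them" can mean in the simplest cases, here are the standard fingerprints for the unknot and the trefoil, quoted rather than derived:

```latex
% Loops around an unknotted circle just wind some integer number of times,
% so the group of the unknot is the integers under addition.
\pi_1\bigl(\mathbb{R}^3 \setminus \text{unknot}\bigr) \;\cong\; \mathbb{Z}

% The trefoil's group has two generators and one braid-like relation.
% It is non-abelian, so it cannot be \mathbb{Z}, and the trefoil is
% therefore genuinely knotted.
\pi_1\bigl(\mathbb{R}^3 \setminus \text{trefoil}\bigr)
  \;\cong\; \langle\, a, b \mid aba = bab \,\rangle
```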
What I like about this construction is that it reflects a broader mathematical pattern. Instead of trying to solve a visual problem visually, you translate it into algebra. You stop asking “what does this look like?” and start asking “what operations are allowed, and what do they force?”
For a beginner, the magic is not in the technical details, but in the shift in perspective. A knot is no longer just an object. It is a source of constraints. The group associated to it captures what you cannot do, no matter how cleverly you deform it.
This also explains why knots show up in unexpected places: DNA, fluids, physics, even computation. Anywhere structure is preserved under continuous deformation, knot-like reasoning applies. And wherever constraints matter more than appearances, groups tend to follow.
You don’t need to master group theory to appreciate this connection. What matters is the lesson it teaches early on: difficult geometric questions often become clearer when you ask what transformations are allowed and what information survives them.
Organizers as Constraints: How Development Avoids Chaos

Figure from "Signals from the edges: The cortical hem and antihem in telencephalic development", Subramanian et al.
An organizer is a small, specialized region of developing tissue whose job is not to become a major structure itself, but to instruct surrounding tissue on what to become. It does this by emitting signals called morphogens that spread outward and impose order on what would otherwise be undifferentiated cells.
The key feature of an organizer is asymmetry. It breaks uniformity. Early embryos often begin as nearly identical collections of cells. An organizer introduces direction: front versus back, top versus bottom, center versus edge. Once these axes are established, development becomes constrained and coordinated rather than arbitrary.
Organizers do their work through gradients. Cells close to the organizer receive strong signals and adopt one fate; cells farther away receive weaker signals and adopt different fates. Importantly, cells are not told exactly what to become. They are given positional information and must interpret it locally, using their own internal state and neighboring cues.
In the brain, organizers play a particularly delicate role. They must pattern large territories without overwhelming them. Too much organizer activity can cause signaling regions to expand at the expense of functional tissue. Too little, and the tissue lacks structure and orientation. Proper brain development depends on tight control over where organizers form and how far their influence extends.
Seen abstractly, an organizer is a source of constraints rather than content. It does not encode the final form of organs or circuits. Instead, it sets the rules under which complexity is allowed to emerge. Development succeeds not because organizers build everything, but because they prevent everything from trying to form everywhere at once.
When a Set Becomes a Space
In Frederic Schuller’s Geometric Anatomy of Theoretical Physics, the order is very deliberate:
- logic → sets
- sets → topological spaces
- topological spaces → manifolds
- manifolds → tangent spaces
- tangent spaces → bundles
- bundles → symmetry (groups, Lie groups)
As a result, I’ve been thinking a lot about what it means to add structure to a set. Topology is one of the cleanest places where this question shows up.
Topology begins in a place that looks almost too simple to matter: with a set. A set is just a collection of elements. At this stage there is no geometry, no notion of distance, no idea of closeness or continuity. The elements could be numbers, points, or abstract objects, but the set itself carries no information beyond membership. Two sets with the same elements are indistinguishable, and nothing inside the set tells us how its elements relate to one another.
The moment we start asking questions like which points are near each other, what it means to move smoothly, or whether a shape has a hole, a bare set is no longer enough. We need structure. Topology is the study of what happens when we add the weakest possible structure that still lets us talk about continuity. Instead of introducing distances or angles, topology takes a different route. It asks us to specify which subsets of our set should be considered open.
A topological space is a set together with a collection of subsets called open sets, chosen so that three simple rules hold. The empty set and the whole set are open, arbitrary unions of open sets are open, and finite intersections of open sets are open. These rules are not arbitrary. They encode exactly what is needed to talk about local behavior. Once open sets are fixed, we have a precise notion of when points are near each other, even though no distances have been mentioned.
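To make the three rules feel concrete, here is a minimal sketch that checks them on a small finite set; the candidate collections are made-up examples, and finiteness means pairwise checks stand in for arbitrary unions.

```python
from itertools import combinations

def is_topology(X, candidates):
    """Check the three open-set axioms for a candidate topology on a finite set X."""
    opens = {frozenset(s) for s in candidates}
    X = frozenset(X)
    # Rule 1: the empty set and the whole set are open.
    if frozenset() not in opens or X not in opens:
        return False
    # Rules 2 and 3: closure under unions and intersections.
    # For a finite collection, pairwise closure implies the general case.
    for a, b in combinations(opens, 2):
        if a | b not in opens or a & b not in opens:
            return False
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, {1, 2, 3}]))  # True
print(is_topology(X, [set(), {1}, {2}, {1, 2, 3}]))     # False: {1} ∪ {2} is missing
```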
This shift is subtle but powerful. In topology, continuity is no longer defined using limits or epsilons. A function between two topological spaces is continuous if the preimage of every open set is open. Continuity becomes a structural property rather than a numerical one. Because of this, topology focuses on properties that remain unchanged under continuous deformations. Stretching and bending are allowed, tearing and gluing are not. This is why, from a topological point of view, a coffee mug and a donut are the same object: each has a single hole, and that feature survives all continuous distortions.
So far, this perspective is global. But many spaces that are globally complicated behave very simply when examined closely. If you zoom in far enough on the surface of a sphere, it looks flat. The Earth feels like a plane when you stand on it, even though it is curved as a whole. Topology captures this idea by focusing on local neighborhoods. Instead of asking what a space looks like everywhere at once, it asks what it looks like near each point.
This leads naturally to the idea of a topological manifold. A manifold is a topological space with the property that every point has a neighborhood that looks like ordinary Euclidean space. Locally, the space behaves like \mathbb{R}^n, even if globally it twists, curves, or folds back on itself. Additional technical conditions ensure that points can be separated and that the space is not pathologically large, but the core idea is local simplicity paired with global complexity.
Circles, spheres, and tori are all examples of manifolds. Each looks like flat space in small regions, but has a distinct global structure. This combination is what makes manifolds so important. They are the natural setting for calculus, geometry, and physics. Motion, fields, and differential equations all make sense locally, while topology governs the global constraints.
What topology deliberately forgets are exact distances, angles, and curvature. What it keeps are deeper features such as connectedness, continuity, and the number of holes in a space. This selective forgetting is not a loss of information but a refinement of attention. By stripping away what is inessential, topology reveals what truly persists.
Seen this way, topology is not an abstract detour but a careful progression. Starting from sets, we add just enough structure to speak about nearness and continuity, and from there arrive at spaces that are flexible enough to describe the shapes underlying geometry and physics. Once that viewpoint settles in, topology stops feeling optional. It begins to feel like the natural language for talking about form without coordinates.
Friday, December 19, 2025
Major themes in brain development
Brain development is often presented as a catalog of mechanisms. Signaling pathways, transcription factors, migration modes, synapse formation. Each piece is described in detail, and the overall picture is expected to emerge from accumulation. When I step back, though, what stands out are not the mechanisms themselves, but a small number of recurring themes that cut across scales and systems.
Why State Spaces Must Be Sets Before They Can Be Anything Else
So let's begin with the Schuller lectures. I watched the whole series a few years ago without understanding anything of what was going on, but this time I return armed with Linear Algebra, Calculus, some Real and Complex Analysis, and more Probability knowledge. Let us see how far I am able to go this time.
Introduction/Logic of propositions and predicates- 01 - Frederic Schuller - YouTube
After realizing that physical theories begin by restricting what can be observed, the next question is where those observables are supposed to live. Schuller’s answer is deliberately unglamorous: before a state space can have geometry, algebra, or dynamics, it must first be a set.
Update to the goals of this blog.
This theme has run through the recent posts on this blog. In algebra, it shows up as the distinction between groups, rings, fields, and modules. In number theory, as cancellation and non-collapse.
To push this way of thinking further, I’ve started working through Frederic Schuller’s Lectures on the Geometric Anatomy of Theoretical Physics. Schuller begins with logic, then builds sets, topology, and geometry from first principles. It feels like a direct stress test of this constraint-based view, scaling it from discrete arithmetic to the continuous structures underlying physics. I fully expect to fail as this leap is immense. However, I've decided to give it a whirl with the same questions in mind: what rules are fixed at the start, what operations preserve information, and which definitions prevent collapse rather than describe outcomes?
Going forward, I’ll also be writing about developmental biology, especially developmental neuroscience, through the same lens. I’m interested in development not as instructions being executed, but as possibilities being progressively removed. Gradients, transcription factors, and boundaries act as constraints that preserve distinction and make reliable structure inevitable.
This is not a departure from mathematics, but an extension of the same way of thinking into biology. Across all of these domains, definitions and structures are safety guarantees. The goal here is not coverage, but clarity, where I document the moment when a system stops feeling clever and starts feeling inevitable.
Vector spaces
When I first learned the definition of a vector space, it felt like a list of rules glued together by convention. There were vectors, scalars, two operations, and a long list of axioms that all seemed reasonable but unmotivated. What I did not understand was why the scalars had to come from a field. Why not a ring? Why not the integers?
Congruence
The concept of congruence provides a natural language for discussing divisibility by focusing on remainders rather than quotients. At its core, saying that two numbers are congruent modulo n simply means that their difference is a multiple of n. This definition transforms the way we perceive the number line, shifting from an infinite string of unique values to a repeating cycle where numbers wrap around themselves like hours on a clock. This perspective allows us to treat the property of divisibility as a form of modular equality, which unlocks the powerful tools of algebra for use in number theory.
One of the most vital insights of this system is that congruences behave remarkably like standard equations. We can add, subtract, and multiply congruences just as we do with traditional equalities. If we know that two numbers are equivalent to their respective counterparts under a specific modulus, their sums and products will maintain that same relationship. This arithmetic consistency allows us to perform massive calculations by constantly reducing the numbers involved to their smallest remainders. By keeping values manageable without losing essential information regarding divisibility, we can solve problems involving enormous exponents or large products that would otherwise be computationally impossible.
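Python's built-in three-argument pow reduces modulo n at every step, which is exactly this idea in practice; a one-line illustration with arbitrarily chosen numbers:

```python
# 7 raised to an astronomically large power, reduced mod 13 at every step.
# The integer 7**(10**18) could never be written down, but its remainder
# is computed instantly because only remainders are ever stored.
print(pow(7, 10**18, 13))  # prints 9
```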
However, the analogy between congruences and standard equations meets a significant obstacle when it comes to division. In normal arithmetic, if ax = ay and a is not zero, we can always cancel a to find that x = y. In the world of congruences, this move is only legitimate if the number being cancelled shares no common factors with the modulus. The greatest common divisor plays a starring role here, acting as a gatekeeper for reversibility. If the divisor and the modulus have a common factor greater than one, the attempt to undo multiplication can lead to multiple distinct solutions or no solutions at all. This subtle trap requires a constant awareness of the relationship between the numbers involved and the modulus itself.
The utility of this system is most evident when solving linear congruence equations of the form ax \equiv b \pmod{n}. Solving such an equation is fundamentally the same task as finding integer solutions to a linear equation in two variables. By using the Euclidean Algorithm, we can systematically determine whether a solution exists and then find it. This connection bridges the gap between basic arithmetic and the deeper structural logic of the integers. It reveals that congruences are not just a shorthand for remainders, but a robust framework for exploring how numbers interact when they are constrained to a finite cycle.
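Here is a minimal sketch of how that procedure can look in code; it is my own toy implementation of the Euclidean Algorithm route, not anything lifted from a textbook:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_linear_congruence(a, b, n):
    """All solutions x of a*x ≡ b (mod n), or [] if none exist."""
    g, x, _ = extended_gcd(a, n)
    if b % g != 0:
        return []                     # gcd(a, n) does not divide b: no solutions
    x0 = (x * (b // g)) % n           # one particular solution
    step = n // g
    return sorted((x0 + k * step) % n for k in range(g))

print(solve_linear_congruence(6, 4, 10))  # gcd(6, 10) = 2 divides 4: [4, 9]
print(solve_linear_congruence(2, 1, 4))   # gcd(2, 4) = 2 does not divide 1: []
```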
Groups, rings and fields
A group represents the simplest case of total symmetry. Because there is only a single operation and every element has a guaranteed inverse, nothing can collapse. If two expressions are equal, you can always cancel the same element from both sides. This is not a mere convenience but the entire point of the structure. The integers under addition form a group because subtraction is always possible, meaning every step has a guaranteed way back to the start.
A ring adds a second operation of multiplication but crucially does not demand that it be reversible. While addition still behaves like a group and additive cancellation remains safe, multiplication is allowed to fail. Information can collapse, and two nonzero elements can multiply to zero. This is not a flaw in the definition; it is the specific freedom the definition is designed to allow. In arithmetic modulo 6, the classes of 2 and 3 are both nonzero, yet their product is zero. Once this happens, division and cancellation become unsafe. Any proof that assumes otherwise will quietly break because zero divisors are the exact features that distinguish rings from fields.
A field is what results when you refuse to allow this kind of collapse. Formally, it is a ring in which every nonzero element has a multiplicative inverse. Conceptually, it is a ring where multiplication is forced to behave like a group operation once zero is excluded. In this environment, cancellation works, division is legitimate, and products do not lose information. This explains why working modulo a prime feels different than working modulo a composite number. In arithmetic modulo a prime, zero divisors disappear and every nonzero element has an inverse. The same symbolic manipulations that fail modulo 6 suddenly become valid because the structure has changed.
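A short sketch makes the contrast between the two contracts visible; toy code comparing arithmetic modulo 6 with arithmetic modulo 7:

```python
def has_zero_divisors(n):
    """True if two nonzero residues multiply to 0 mod n."""
    return any((a * b) % n == 0 for a in range(1, n) for b in range(1, n))

def inverses(n):
    """Each nonzero residue mapped to its multiplicative inverse mod n, if it has one."""
    return {a: b for a in range(1, n) for b in range(1, n) if (a * b) % n == 1}

print(has_zero_divisors(6), inverses(6))  # True  {1: 1, 5: 5} — only 1 and 5 are invertible
print(has_zero_divisors(7), inverses(7))  # False {1: 1, 2: 4, 3: 5, 4: 2, 5: 3, 6: 6}
```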
This reframing changed how proofs feel to me. I no longer treat these terms as mere labels, but as contracts. Before making an algebraic move, I check the structure to see if I am allowed to cancel or divide. When those questions are answered by the definition, the proof stops feeling like a sleight of hand. The hierarchy is natural because groups guarantee reversibility for one operation, rings accept the loss of information when a second operation is introduced, and fields restore reversibility by forbidding that collapse. The definitions are precise statements about what kinds of mistakes the algebra will never let you make.
Thursday, December 18, 2025
my family! Guest post by 7yo niece Part III
Nitin (4 yo brother): Baabaa boo boo gee gee ga ga
Nitya: NITIN STOP BEING SO ANNOYING
Nitin: I don't want to
So that's the introduction for my family but there's more. First of all, there is a lot of shouting and messiness. Second of all, we always like have it really loud outside in our backyard. Its really annoying too. Then, me and my brother are always getting into fights and one of the parents have to shout at Nitin or me but they dont know which one started it. So they basically blame one of us. I have no time to give you an example so let me just say these three things. And I want you to remember these three things. Fighting, noisiness and messiness. So that was the third part of my blog. For now.
Guest Post by my 7 yo niece, Part II
N: hi this is me again and this is my blog about brainvita! so today we played brainvita and you know that you have four options in the beginning and they're all the same.
A: Are they exactly the same?
N: Yeah, basically. So we started with one of the four options we had and we kept going and in the end we just kept ending up with 4. It was really interesting. Then we asked ChatGPT what the strategy was. When I was reading what ChatGPT had written for us, my aunt (A) experimented with the game and she got 2 surprisingly. I asked her what she did, and she said "I cleared the edges first".
A: So what did you make of that
N: We learnt that if you clear the edges first and preserve the middle parts, you could get a lot of pegs out and less pegs in.
And that was my blog.
Guest post by my 7 year old niece
today we drew pictures, colored and played brain vita. here are some pictures we drew. we were playing a game where each of us draws something random and hands it over to the other, and we are supposed to finish the drawing. I got the idea from a friend of mine. maybe you could play this game with your family and friends too!
Shannon's Entropy from the ground up
I'm diverging a bit today. My friend Beli showed me this proof and I understood Shannon's entropy from the ground up for the first time and wanted to share.
Shannon’s entropy is usually presented as a formula,

H(p_1, \dots, p_n) = -K \sum_i p_i \log p_i,

but the real question is why this expression is forced on us. The answer comes from thinking carefully about uncertainty and how it behaves when decisions are broken into stages.
Consider a situation with n equally likely outcomes. The uncertainty associated with this situation depends only on n. Call this quantity f(n). If we make a sequence of independent choices, uncertainty should add. That gives the functional equation

f(mn) = f(m) + f(n).

The only solutions of this form are logarithms, so for equally likely outcomes we must have

f(n) = K \log n

for some constant K, which sets the choice of units.
Now consider a general probability distribution (p_1, \dots, p_n). To connect this case to the equal-probability case, approximate each probability by a rational number. Write

p_i = n_i / N,

where the integers n_i sum to N.
Imagine a two-stage decision process. In the first stage, we choose one of the n groups, where group i has probability p_i. In the second stage, once group i is chosen, we choose uniformly among its n_i equally likely outcomes.
Viewed as a single process, this is equivalent to choosing uniformly among N outcomes. The total uncertainty is therefore

f(N) = K \log N.

Viewed in two stages, the total uncertainty is the uncertainty of the first choice plus the expected uncertainty of the second choice. The first stage contributes H(p_1, \dots, p_n). The second stage contributes

\sum_i p_i f(n_i) = K \sum_i p_i \log n_i.

Consistency requires these two descriptions to agree, so

K \log N = H(p_1, \dots, p_n) + K \sum_i p_i \log n_i.

Rearranging gives

H(p_1, \dots, p_n) = K \log N - K \sum_i p_i \log n_i.

Substitute n_i = p_i N. Then \log n_i = \log p_i + \log N, and so

H(p_1, \dots, p_n) = K \log N - K \sum_i p_i \log p_i - K \sum_i p_i \log N.

Since the probabilities sum to one, the last term is just K \log N, and the expression collapses to

H(p_1, \dots, p_n) = -K \sum_i p_i \log p_i.

This is Shannon’s entropy formula.
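To convince myself that the two-stage bookkeeping really balances, here is a tiny numerical check; my own toy snippet, with K = 1, natural logarithms, and a made-up distribution:

```python
import math

def entropy(ps):
    """Shannon entropy -sum(p log p), with K = 1 and natural logs."""
    return -sum(p * math.log(p) for p in ps if p > 0)

# A rational distribution p_i = n_i / N, as in the two-stage argument.
ns = [2, 3, 5]
N = sum(ns)
ps = [n / N for n in ns]

# One-stage view: choose uniformly among N outcomes.
one_stage = math.log(N)

# Two-stage view: pick a group (uncertainty H(p)), then pick uniformly
# inside group i (uncertainty log n_i), weighted by p_i.
two_stage = entropy(ps) + sum(p * math.log(n) for p, n in zip(ps, ns))

print(abs(one_stage - two_stage) < 1e-12)  # True: the two views agree
```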
What matters is not the algebra but the structure of the argument. Uncertainty must be additive across independent choices. It must reduce to a logarithm in the equally likely case. And it must behave consistently when a decision is broken into stages. These requirements leave no freedom. The entropy formula is not chosen. It is forced.
Zero divisors
When working with modular arithmetic, some systems feel reliable and others do not. Arithmetic modulo a prime allows cancellation and division by any nonzero element. Arithmetic modulo a composite does not. This difference is not cosmetic. It comes from a structural feature called a zero divisor.
A zero divisor is a nonzero element that multiplies with another nonzero element to give zero. For example, in arithmetic modulo 8,

2 \times 4 \equiv 0 \pmod{8}.

Both 2 and 4 are nonzero, yet their product is zero. This is the defining behavior of zero divisors.
To understand why zero divisors cause trouble, it helps to look at multiplication as a function.
Fix a nonzero element a. Consider the operation that takes any element x and maps it to ax. This is a function from the ring to itself.
A function is called injective if different inputs always give different outputs. In words, nothing collapses. If multiplying by a is injective, then whenever ax = ay, the only possibility is that x = y.
A function is called surjective if every element of the target is hit by the function. In this context, surjectivity means that every element of the ring can be written as ax for some x. In particular, the number 1 must appear as a product ax = 1. When that happens, x is the multiplicative inverse of a.
Zero divisors are precisely what break injectivity. If ab = 0 with a \neq 0 and b \neq 0, then multiplying by a sends both b and 0 to the same output. Distinct inputs collapse. Cancellation is no longer valid.
This immediately explains why zero divisors cannot have inverses. If multiplying by a collapses information, there is no way to reverse the operation.
In a finite ring with no zero divisors, something important happens. Multiplying by a nonzero element cannot collapse distinct elements, so the operation is injective. In a finite set, injective functions are automatically surjective. Nothing can be missed. As a result, multiplying by a nonzero element must hit 1, which means that element has an inverse.
This is why a finite ring with no zero divisors must already be a field. Finiteness turns injectivity into surjectivity, and the absence of zero divisors turns multiplication into a reversible operation.
This also explains why primes matter. Arithmetic modulo a prime has no zero divisors, so multiplication by any nonzero element is both injective and surjective. Arithmetic modulo a composite does not. The difference shows up not in the symbols, but in which logical moves are allowed.
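The injectivity and surjectivity claims are easy to watch directly. Here is a tiny sketch that tabulates the map sending x to ax, for a = 2, first modulo 8 and then modulo 7:

```python
def multiplication_map(a, n):
    """Pairs (x, a*x mod n) for every residue x."""
    return [(x, (a * x) % n) for x in range(n)]

# Mod 8, multiplying by the zero divisor 2 collapses inputs (0 and 4 both land
# on 0) and never hits the odd residues, so 2 has no inverse.
print(multiplication_map(2, 8))

# Mod 7, multiplying by 2 permutes the nonzero residues, so 1 is hit and
# 2 has an inverse: 4, since 2 * 4 = 8 ≡ 1 (mod 7).
print(multiplication_map(2, 7))
```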
Zero divisors are the exact points where multiplication stops preserving information. Once they appear, cancellation fails, inverses disappear, and familiar arguments break.
Tuesday, December 16, 2025
Fermat's Little Theorem
Fermat’s Little Theorem says that if p is a prime number and a is not divisible by p, then

a^{p-1} \equiv 1 \pmod{p}.
For a long time, this statement felt clever but unanchored to me. It looked like a fact about large powers, or about numbers mysteriously wrapping around. I understood the manipulation, but not the inevitability. The result felt impressive in the way a trick feels impressive.
What finally shifted my understanding was realizing that the theorem is not about powers at all. It is about permutation.
When you work modulo a prime, the nonzero residues form a finite multiplicative world. Every element has an inverse. In that world, multiplication by any nonzero number does not create anything new. It simply rearranges what is already there.
If you take the set \{1, 2, \dots, p-1\} and multiply every element by a, you do not get a new set. You get the same elements in a different order. The structure is preserved. Nothing collapses. Nothing escapes. Multiplication acts as a permutation of a fixed system.
Once I saw that, exponentiation stopped looking like growth along a line. Repeated multiplication is just repeated rearrangement inside a closed space. Cycles are unavoidable, and the size of the system dictates when you return to where you started. The theorem stops feeling clever and starts feeling necessary.
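A tiny sketch makes the permutation visible; the choices p = 7 and a = 3 are arbitrary:

```python
p, a = 7, 3

residues = list(range(1, p))                # {1, 2, ..., p - 1}
shuffled = [(a * x) % p for x in residues]  # multiply everything by a

# Same elements, different order: multiplication by a is a permutation.
print(sorted(shuffled) == residues)  # True

# Taking the product of both lists gives a^(p-1) * (p-1)! ≡ (p-1)! (mod p);
# cancelling (p-1)! is what leaves Fermat's Little Theorem.
print(pow(a, p - 1, p))  # 1
```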
Around the same time, I noticed something similar happening in my drawing practice.
In drawing, an early mistake in proportion or angle creates a broken structure. The natural response is to erase, abandon the piece, or start over. I used to do that often. Now I try to do the opposite. I finish the drawing anyway.
Once a mistake is made, nothing new can be added to erase it. The structure is fixed. All that remains is rearrangement. Line weight, shading, emphasis, and contrast begin to act like permutations. They do not remove the original error. They redistribute attention, balance, and form within the constraints that already exist.
The drawing becomes a closed system. You are no longer creating freely. You are rearranging.
This is where the connection became clear to me. In both mathematics and drawing, I was initially trying to escape constraints. I wanted the powers to behave nicely. I wanted the sketch to stay anatomically correct. When that failed, I felt stuck.
The breakthrough came from accepting the system as it was and working entirely inside it.
In Fermat’s Little Theorem, once you accept the finite multiplicative structure modulo a prime, the result follows automatically. In drawing, once you accept the broken proportions, the work becomes about problem-solving rather than correction. Progress can come not from adding something new, but from redistributing what is already there.
That is why I am comfortable sharing failed sketches alongside unfinished mathematical understanding. They are records of the same practice. Completion without denial. Learning without erasure. Respect for structure rather than frustration with it.
Neither the theorem nor the drawing is interesting because it looks polished. They matter because they show how much work structure does for you once you stop fighting it.
Also, here's my first serious hand sketch.




