# Meditation on Software 1

There is something very wrong with how we write code.

[October 31, 2021]

There is something very wrong with how we write code.

[October 15, 2020]

More exterior algebra notes. This is a reference for (almost) all of the many operations that I am aware of in the subject. I will make a point of giving explicit algorithms and an explicit example of each, in the lowest dimension that can still be usefully illustrative.

Warning: very long.

[August 10, 2020]

Rapid-fire non-rigorous intuitions for calculus on complex numbers. Not an introduction to the subject.

[July 24, 2020]

Here’s what I know about QM. I’m trying to learn QFT and it helps to have the prerequisites compressed into the simplest possible representation. It also helps me to write everything down in a compressed form so I can reference it more easily.

This will make no sense if you don’t already have a good understanding of quantum mechanics.

Conventions: \(c = 1\), \(g_{\mu \nu} = \mathrm{diag}(+, -, -, -)\). I like to write \(S_{\vec{x}}\) for \(\nabla S\).

[December 22, 2019]

I think that the Many-Worlds Interpretation (MWI) of quantum mechanics is probably ‘correct’. There is no reason to think that the rules of atomic phenomena would stop applying at larger scales when an experimenter becomes entangled with their experiment.

However, MWI has the problem (shared with all the other mainstream interpretations of QM) that it does not explain why quantum randomness leads to the probabilities that we observe. The so-called Born Rule says that if a system is in a state \(\alpha \| 0 \> + \beta \| 1 \>\), upon ‘measurement’ (in which we entangle with one or the other outcome), we measure the eigenvalue associated with the state \(\| 0 \>\) with probability

\[P[0] = \| \alpha \|^2\]

The Born Rule is normally included as an additional postulate in MWI, which is somewhat unsatisfying. Or at least, it is apparently difficult to justify: I’ve read a bunch of attempts, each of which remarks that there have not been any other satisfactory attempts. I think it would be unobjectionable to say that there is no consensus on how to motivate the Born Rule from MWI without any other assumptions.
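To make the claim concrete, here is a minimal numerical sketch (amplitudes chosen arbitrarily, not tied to any particular experiment): sampling measurement outcomes with the Born probabilities reproduces \(\| \alpha \|^2\) as an empirical frequency.

```python
import random

# Hypothetical amplitudes for a normalized state alpha|0> + beta|1>.
alpha, beta = 0.6, 0.8
assert abs(alpha**2 + beta**2 - 1) < 1e-12

# Simulate many measurements: per the Born Rule, outcome 0 occurs
# with probability |alpha|^2 = 0.36.
random.seed(0)
trials = 100_000
zeros = sum(random.random() < alpha**2 for _ in range(trials))
print(zeros / trials)  # empirical frequency, close to 0.36
```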

Anyway, here’s an argument I came across that I find somewhat compelling. It argues that the Born Rule can emerge from interference if you assume that every *measurement* of a probability that you’re exposed to (which I guess is a Many-Worlds-ish idea) is assigned a random, uncorrelated phase.

[September 15, 2019]

Most of our descriptions of how our brains work are fundamentally *vague*. We speak of our brains performing verbs like “think”, “realize”, “forget”, or “hope” but we aren’t talking about what’s going on mechanically to result in those qualities.

Sure, these can all be assigned truth values, in the sense that if everyone generally agrees that someone ‘realized’ something, we might define their brain to have performed the objective act of ‘realization’. But this gives no *technical* understanding of what the process of realization is – beyond, perhaps, some hand-wavey story about connections being bridged between neurons.

So, sometime in the last few years the English-speaking Internet became aware of the condition called aphantasia. Aphantasia is when a person is unable to picture images in their thoughts – they don’t have a “mind’s eye” at all.

This is interesting because, in contrast to the above, aphantasia is a concrete description of how the brain works. Some people see an image in their head when they draw or recall something; others don’t. Their brains work in materially different ways. I would have no idea how to figure out if two people “realize” something via different mechanisms, but I can be sure that two people’s brains operate differently if one sees pictures and the other doesn’t.

[January 27, 2019]

*Vector spaces are assumed to be finite-dimensional and over \(\bb{R}\). The grade of a multivector \(\alpha\) will be written \(\| \alpha \|\), while its magnitude will be written \(\Vert \alpha \Vert\). Bold letters like \(\b{u}\) will refer to (grade-1) vectors, while Greek letters like \(\alpha\) refer to arbitrary multivectors with grade \(\| \alpha \|\).*

More notes on exterior algebra. This time, the interior product \(\alpha \cdot \beta\), with a lot more concrete intuition than you’ll see anywhere else, but still not enough.

I am not the only person who has had trouble figuring out what the interior product is for. This is what I have so far…

[January 26, 2019]

*Previously: matrices and inner products on exterior algebras.*

*Vector spaces are assumed to be finite-dimensional and over \(\bb{R}\). The grade of a multivector \(\alpha\) will be written \(\| \alpha \|\), while its magnitude will be written \(\Vert \alpha \Vert\). Bold letters like \(\b{u}\) will refer to (grade-1) vectors, while Greek letters like \(\alpha\) refer to arbitrary multivectors with grade \(\| \alpha \|\).*

[December 28, 2018]

Here is a survey of my understanding of each of the main types of Taylor series:

- single-variable
- multivariable \(\bb{R}^n \ra \bb{R}\)
- multivariable \(\bb{R}^n \ra \bb{R}^m\)
- complex \(\bb{C} \ra \bb{C}\)

I thought it would be useful to have everything I know about these written down in one place.

These notes are not pedagogical; they’re for crystallizing everything when you already have a partial understanding of what’s going on. Particularly, I don’t want to have to remember the difference between all the different flavors of Taylor series, so I find it helpful to just cast them all into the same form, which is possible because they’re all the same thing (seriously why aren’t they taught this way?).

In these notes I am going to ignore discussions of convergence so that more ground can be covered. Generally it’s important to address convergence in order to, well, not be wrong. And I’m certain that I’ve made statements which are wrong below. But I am just trying to make sure I understand what happens when everything works, because in my interests (physics) it usually does.
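As a concrete instance of the “they’re all the same thing” claim, here is a quick numerical check of the second-order expansion \(f(\b{x} + \b{a}) \approx f(\b{x}) + \nabla f \cdot \b{a} + \frac{1}{2} \b{a}^T H \b{a}\) for a function \(\bb{R}^2 \ra \bb{R}\) (the function and expansion point are my own arbitrary choices):

```python
import math

# Second-order Taylor expansion of f : R^2 -> R, checked numerically.
def f(x, y):
    return math.exp(x) * math.sin(y)

x, y = 0.3, 0.5        # expansion point
ax, ay = 0.01, -0.02   # small displacement

e, s, c = math.exp(x), math.sin(y), math.cos(y)
grad = (e * s, e * c)                        # (f_x, f_y), computed by hand
hess = ((e * s, e * c), (e * c, -e * s))     # ((f_xx, f_xy), (f_yx, f_yy))

taylor = (f(x, y)
          + grad[0] * ax + grad[1] * ay
          + 0.5 * (hess[0][0] * ax**2
                   + 2 * hess[0][1] * ax * ay
                   + hess[1][1] * ay**2))
exact = f(x + ax, y + ay)
print(abs(taylor - exact))  # third-order small (~1e-6 here)
```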

[November 1, 2018]

You may have seen the YouTube video by Numberphile that circulated around social media a few years ago. It showed an ‘astounding’ mathematical result:

\[1+2+3+4+5+\ldots = -\frac{1}{12}\]

(quote: “the answer to this sum is, remarkably, minus a twelfth”)

Then they tell you that this result is used in many areas of physics, and show you a page of a string theory textbook (*oooo*) that states it as a theorem.

The video caused a bit of an uproar at the time, since it was many people’s first introduction to the (rather outrageous) idea and they had all sorts of (very reasonable) objections.

I’m interested in talking about this because: I think it’s important to think about how to deal with experts telling you something that seems insane, and this is a nice microcosm for that problem.

Because, well, the world of mathematics seems to have been irresponsible here. It’s fine to get excited about strange mathematical results. But it’s not fine to present something that requires a lot of asterisks and disclaimers as simply “true”. The equation is *true* only in the sense that if you subtly change the meanings of lots of symbols, it can be shown to become true. But that’s not the same thing as quotidian, useful, everyday truth. And now that this is ‘out’, as it were, we have to figure out how to cope with it. Is it true? False? Something else? Let’s discuss.
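For what it’s worth, here is a sketch of one standard way (exponentially smoothed partial sums — my choice of regularization, not the video’s argument) in which the \(-\frac{1}{12}\) genuinely shows up, as the finite part of a divergent expression:

```python
import math

# Damp each term of 1+2+3+... by exp(-n*eps).  As eps -> 0 the total
# behaves like 1/eps^2 - 1/12 + O(eps^2), so -1/12 appears as the
# finite part left over after subtracting the divergence.
eps = 0.01
s = sum(n * math.exp(-n * eps) for n in range(1, 10_000))
print(s - 1 / eps**2)  # ≈ -1/12 ≈ -0.0833
```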

[October 9, 2018]

*(See this previous post for some of the notations used here.)*

*(Not intended for any particular audience. Mostly I just wanted to write down these derivations in a presentable way because I haven’t seen them from this direction before.)*

*(Vector spaces are assumed to be finite-dimensional and over \(\bb{R}\))*

Exterior algebra is obviously useful any time you’re anywhere near a cross product or determinant. I want to show how it also comes with an inner product which can make certain formulas in the world of vectors and matrices vastly easier to prove.
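As a tiny illustration of the cross-product connection (my own example): in \(\bb{R}^3\), the wedge product \(\b{u} \wedge \b{v}\) has components \(u_i v_j - u_j v_i\), and its three independent components are exactly the cross product’s coordinates.

```python
# Components of u ∧ v in the bivector basis order (yz, zx, xy).
def wedge2(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# The familiar cross product, for comparison: the formulas coincide.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1, 2, 3), (4, 5, 6)
print(wedge2(u, v))  # (-3, 6, -3), same as cross(u, v)
```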

[October 8, 2018]

*(This is not really an intro to the subject. I don’t have an audience in mind for this. I’ve written my notes out in an expository style because it helps me retain what I study.)*

*(Vector spaces are assumed to be finite-dimensional and over \(\bb{R}\) with the standard inner product unless otherwise noted.)*

Exterior algebra (also known as ‘multilinear algebra’, which is arguably the better name) is an obscure and technical subject. It’s used in certain fields of mathematics, primarily abstract algebra and differential geometry, and it comes up a lot in physics, often in disguise. I think it ought to be *far* more widely studied, because it turns out to take a lot of the mysteriousness out of the otherwise technical and tedious subject of linear algebra. But in most of the places it turns up, it is heavily obfuscated. So my aim is to study exterior algebra and do some ‘refactoring’: to make it more explicit, so it seems like a subject worth studying in its own right.

In general I’m drawn to whatever makes computation and intuition simple, and this is it. In college I learned about determinants and matrix inverses and never really understood how they work; they were impressive constructions that I memorized and then mostly forgot. Exterior Algebra turns out to make them into simple intuitive procedures that you could rederive whenever you wanted.

[August 6, 2018]

Here’s a summary of the concept of oriented area and the “shoelace formula”, and some equations I found while playing around with it that turned out not to be novel.
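For reference, the shoelace formula itself is short enough to state as code (a minimal sketch, with a made-up square as the test polygon): the oriented area of a polygon with vertices \((x_i, y_i)\) in order is \(\frac{1}{2} \sum_i (x_i y_{i+1} - x_{i+1} y_i)\).

```python
# Oriented (signed) area of a simple polygon via the shoelace formula.
# Counterclockwise vertex order gives positive area, clockwise negative.
def oriented_area(pts):
    n = len(pts)
    return 0.5 * sum(pts[i][0] * pts[(i + 1) % n][1]
                     - pts[(i + 1) % n][0] * pts[i][1]
                     for i in range(n))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # counterclockwise unit square
print(oriented_area(square))        # 1.0
print(oriented_area(square[::-1]))  # -1.0: reversing the order flips the sign
```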

I wanted to write this article because I think the concept deserves to be better popularized, and it is useful to me to have my own reference on the subject. Some resources I have found, including Wikipedia, cite a 1959 monograph entitled *Computation of Areas of Oriented Figures* by A.M. Lopshits, originally printed in Russian and translated to English by Massalski and Mills, which I have not been able to find online. I did find a copy via a university library, and I thought I would summarize its contents to make them more available to a casual Internet reader.

I also wanted to practice making beautiful math diagrams. Which went okay, but god is it ever not worth the effort.

[June 15, 2018]

A friend is writing her master’s thesis in a subfield where data is typically summarized using *geometric* statistics: geometric means and geometric standard deviations (GSD), and sometimes even geometric standard errors – whatever those are. And occasionally ‘geometric confidence intervals’ and ‘geometric interquartile ranges’.

Most of which are (a) not something anyone really has intuition for and (b) surprisingly hard to find references for online, compared to regular ‘arithmetic’ statistics.

I was trying to help her understand these, but it took a lot of work to find easily-readable references online, so I wanted to write down what I figured out.
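The short version of what I figured out, as a sketch (made-up data; assuming the usual log-transform definitions): geometric statistics are ordinary statistics computed on the logarithms of the data and then exponentiated back.

```python
import math
import statistics

# Geometric mean and geometric standard deviation (GSD): take logs,
# compute the ordinary mean and standard deviation, exponentiate.
# The GSD is a *multiplicative* spread: typical values lie within a
# factor of GSD of the geometric mean.
data = [1.2, 3.4, 0.8, 2.5, 5.1]   # made-up positive data
logs = [math.log(x) for x in data]

gm = math.exp(statistics.mean(logs))
gsd = math.exp(statistics.stdev(logs))   # sample (n-1) standard deviation

print(gm, gsd)  # roughly 2.11 and 2.13 for this data
```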

[April 19, 2018]

A rant.

My bike was stolen out of the backyard last night, so I’m feeling a little more aggravated by everything than usual.

This has had the effect of reminding me of a recurring sensation in my life as a software developer: that dealing with technology can be a *fundamentally miserable experience*, and that the skill of being ‘good’ at software is often mostly the same skill as *being able to take a lot of crap from faceless, abusive machines in ways that you feel powerless to do anything about.*

So while I’m all for the “let’s teach everybody to code!” movement, I do sometimes wish we’d stop writing yet another Learn Machine Learning With Python Tutorial, or whatever, and maybe just take some time to make the world around us better in little incremental ways, by making what we’ve already got *suck* less, for ourselves and for all the newcomers and for just everyone, so we can have less stress and more peace in our lives.

Basically some days I can’t honestly tell anyone they should get into this, when on a good day you get to slowly hack your way through bullshit and on a bad day you might just succumb and give up.

[March 30, 2018]

*(Notes. Definitely not interesting unless, at minimum, you really really liked calculus.)*

We can often write a differentiable function \(f(x)\) as a Taylor series around a point \(x\), approximating it in terms of its derivatives at that point:

\[f(x+a) = \sum_{n=0}^{\infty} \frac{a^{n} f^{(n)}(x)}{n!}\]

And, under certain conditions, this series will converge exactly to the values of the function at nearby points.
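A quick numerical check of the truncated series, using \(f = \exp\) as the example (my choice, convenient because every derivative is \(e^x\)):

```python
import math

# Truncated Taylor series of exp around x, evaluated at x + a:
# f(x+a) ≈ sum over n < terms of a^n f^(n)(x) / n!.
def exp_taylor(x, a, terms):
    fx = math.exp(x)  # every derivative of exp at x equals e^x
    return sum(fx * a**n / math.factorial(n) for n in range(terms))

x, a = 1.0, 0.5
for n in (1, 2, 4, 8):
    print(n, abs(exp_taylor(x, a, n) - math.exp(x + a)))
# the error shrinks rapidly as more terms are kept
```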

[February 23, 2018]

*(Only interesting if you already know some things about information theory, probably)*

*(Disclaimer: Notes. Don’t trust me, I’m not, like, a mathematician.)*

I have been reviewing concepts from Information Theory this week, and I’ve realized that I never really understood what (Shannon) Entropy was all about.

Specifically: I have finally understood how entropy is *not* a property of probability distributions per se, but a property of streams of information. When we talk about ‘the entropy of a probability distribution’, we’re implicitly talking about the stream of information produced by sampling from that distribution. Some of the equations make a lot more sense when you keep this in mind.
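A minimal computation of \(H\) (my own toy example): the entropy of a biased coin’s stream is lower than a fair coin’s, because sampling from it produces a more predictable stream.

```python
import math

def entropy(probs):
    # average information (in bits) per sample drawn from this distribution
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 bit per symbol: a fair coin is maximally surprising
print(entropy([0.9, 0.1]))  # ≈ 0.469 bits: a biased coin's stream is more predictable
```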

[January 2, 2018]

In 2018 I am going to write. Mostly: because I don’t remember anything unless I write it out for myself. And a little bit: because I have a lot I want to say.

Update: cool, I actually did some writing in 2018.