1) The context which makes me ask this (optional, but might give better insight into the question)

I’ve been musing on the difference between mathematics and computer science for a while, namely as to why I personally got along very well with the latter but not the former.

There’s a famous quote about programming/CS which I’ve seen attributed to many people: “There are only two hard things in Computer Science: cache invalidation and naming things.”

And I think this quote represents the difference between CS and mathematics quite well, in that programmers are forced to work in a “finite” space where concepts such as infinity and shape are replaced by concepts such as memory and accuracy. That small detail means the two go in completely different directions, since you can’t really talk about calculus or linear algebra, or even basic things such as convergence and divergence, in the discrete and finite world of computers.

But the other thing which strikes me as quite different between CS and math is that the former places a lot of emphasis on naming. Indeed, many people who have worked in the field will tell you that buggy code can be sniffed out from the names of things alone, and that a decent paradigm for deciding how to write code is to think about how you can give things names which make sense (though I haven’t met many practitioners of the latter).

2) Some examples of the problems in naming

Is there something mathematics can learn from the field of computer science in terms of naming things by combining concepts and going for intuitive names, rather than shorthands and eponyms?

For example, it always struck me as weird that something like “the Laplace transform” existed, since I always thought it could just be called “the complex Fourier transform”. That doesn’t express the difference between the two perfectly, but it’s short, and it at least alludes to the contents of the concept, provided one knows what a Fourier transform is. Whereas someone could perfectly understand what a Fourier transform is, but, had they never heard of the Laplace transform, they may think they have reached uncharted waters when someone mentions it.
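For concreteness, this is roughly the relationship the renaming would hint at (standard one-sided definitions):

```latex
% The (one-sided) Laplace transform, with a complex argument s:
\mathcal{L}\{f\}(s) = \int_0^{\infty} f(t)\, e^{-st}\, dt, \qquad s = \sigma + i\omega
% Restricting s to the imaginary axis (\sigma = 0) recovers the one-sided
% Fourier transform -- hence the idea of "complex Fourier transform":
\mathcal{L}\{f\}(i\omega) = \int_0^{\infty} f(t)\, e^{-i\omega t}\, dt
```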

A better example of how omnipresent eponyms are in mathematics came to me whilst reading this paper:

http://www.iiisci.org/journal/CV$/sci/pdfs/GS315JG.pdf

This is a simple comparison of various binary distance/similarity functions: 76 of them, to be precise, some very similar, some very basic, some very useless. But they all bear the name of a person, which makes understanding how they relate to each other rather impenetrable, since you have only the equation to go by and the names are actively distracting you from the point. I’m quite sure, however, that all 76 functions (or at least a good percentage of them) could be given descriptive names that make their relationships to one another apparent.
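To make the point concrete, here is a quick sketch of two of the eponymous measures from that survey, written out for binary vectors using the paper’s `a`/`b`/`c` counting convention. The descriptive alternative names in the comments are my own suggestions, not established terminology:

```python
def counts(x, y):
    """Count agreements/disagreements between two binary vectors:
    a = positions where both are 1, b = only x is 1, c = only y is 1."""
    a = sum(1 for xi, yi in zip(x, y) if xi and yi)
    b = sum(1 for xi, yi in zip(x, y) if xi and not yi)
    c = sum(1 for xi, yi in zip(x, y) if not xi and yi)
    return a, b, c

def jaccard(x, y):
    """Eponym; could be named `intersection_over_union`."""
    a, b, c = counts(x, y)
    return a / (a + b + c)

def dice(x, y):
    """Eponym; could be named `double_weighted_intersection_over_union` --
    it is jaccard with the shared count a weighted twice."""
    a, b, c = counts(x, y)
    return 2 * a / (2 * a + b + c)
```

Seen side by side like this, the two are obviously one tweak apart (in fact `dice = 2*j / (1 + j)` where `j` is the Jaccard value), but nothing in the names “Jaccard” and “Dice” tells you that.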

A similar point could be made about shorthand notation that loses all meaning to the “uninitiated”. For example, everyone knows what `+` means, almost everyone knows `x` stands for an unknown or free variable, but not that many people know `ε` could mean a very small yet non-zero number… etc. (There are more obscure things, but at this point I’d start to go into notations that are obscure because they are very domain-specific.) So isn’t there a point where it may be healthy to drop shorthand notation? Not as in switching `+` with `add`, but maybe switching `x` with `unknown`, most certainly switching `ε` with `very_small` or `tiny_nr`, and switching `∇` to `grad`.
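As a sketch of what that renaming looks like in practice, here is a numerical gradient where `very_small` plays the role of ε and `grad` the role of ∇. The names and the central-difference scheme are my illustrative choices, not a standard API:

```python
def grad(f, point, very_small=1e-6):
    """Approximate the gradient of f at `point` by central differences,
    nudging each coordinate by `very_small` in both directions."""
    gradient = []
    for i in range(len(point)):
        bumped_up = list(point)
        bumped_down = list(point)
        bumped_up[i] += very_small
        bumped_down[i] -= very_small
        gradient.append((f(bumped_up) - f(bumped_down)) / (2 * very_small))
    return gradient

# For f(x, y) = x^2 + y^2 we have grad f = (2x, 2y), so at (1, 2)
# the result should be close to (2, 4).
unknowns = [1.0, 2.0]
print(grad(lambda p: p[0] ** 2 + p[1] ** 2, unknowns))
```

Whether `very_small` is actually clearer than `ε` once you’re fluent in the notation is, of course, exactly the question at issue.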

3) The actual questions I have

Is there justification for the heavy use of eponyms and shorthand notation in modern mathematics? Do you think a “re-writing” of mathematics using more meaningful names is possible or desirable? Do you see such a change coming in the future?