Some background: I’m a freshman undergraduate majoring in CS and Math, planning to go to graduate school in the latter. So far, my favorite subject has been Analysis, though I’m also excited to learn more about PDEs, Topology, and Algebra (not necessarily in that order).
I’m petitioning to skip the course in Analysis that maps to Abbott’s book and go directly to a course that uses Rudin’s “Principles of Mathematical Analysis” instead. With this, I would be able to take graduate courses in real/complex analysis during my undergraduate years. I also plan to study subjects on my own over the summer: Analysis (with problems from Abbott/Rudin), Algebra (with Dummit and Foote), and PDEs/Point-Set Topology (currently just using YouTube lectures, but open to textbook recommendations).
At the moment I am personally most fascinated by Dynamical Systems. Admittedly, I don’t know as much about them as I should, and since I am presently unable to study them rigorously, I plan to build a solid rigorous foundation in “pure” subjects like Analysis, Topology, and Algebra before devoting all my efforts to them. Still, I am more than fascinated by what I do know, and I would like to eventually learn the subject rigorously from the ground up.
Would anyone be able to recommend a short- and long-term study plan for this? Furthermore, how do graduate school applications work for applied math programs as opposed to pure math programs? Is there any way to get a head start on these applications now (I expect to graduate within an epsilon ball of a 3.5 major GPA, though with as many graduate courses in Analysis, Algebra, and Topology as my department will let me take)?
There appears to be a major difference between the curricula of US universities (especially small liberal arts colleges) and those of European universities. I went to a small liberal arts college and majored in math. I took the standard undergrad curriculum classes (e.g., analysis, Galois theory, topology), but I also took history, literature, and foreign language classes, even after declaring my major at the end of my second year. In Europe, students decide on their major early and take almost nothing but math. As an undergrad, I would have opposed taking only math, because I was interested in academic subjects besides math. But in grad school I felt behind, because I had taken no graduate-level math courses (they were all “advanced undergraduate”), whereas basically everyone else had something like a master’s degree.
Looking back, there does seem to be real tension between doing what is best mathematically and taking classes in other subjects. What are your thoughts on this?
Let me be very blunt. My question is: Why would grad admissions at a strong grad program ever pick someone who attended a small liberal arts college, when they could pick from students who have taken a massive number of graduate math courses? Is there even a point for such a student to apply?
When I went to college, some professors assured me that I didn’t need to have taken a bunch of graduate courses already to get into a good program. I’m not sure whether that’s still true. It seems like the best option is to fill up on as much math as possible, to the exclusion of anything else you might be interested in.
I see that the integrand is a fractional variant of the difference quotient. However, that’s as far as I can get… I suppose that the seminorm being finite is a regularity condition in the sense that it says the difference quotient doesn’t behave too poorly near the diagonal.
I want to understand it as a seminorm on the fractional analogue of a weak derivative, if that’s possible.
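For reference, the seminorm I’m looking at is (what I believe is) the standard Gagliardo seminorm for $W^{s,p}(\Omega)$:

```latex
[u]_{W^{s,p}(\Omega)}
  = \left( \int_{\Omega}\!\int_{\Omega}
      \frac{|u(x)-u(y)|^{p}}{|x-y|^{n+sp}} \, dx \, dy \right)^{1/p},
  \qquad 0 < s < 1, \quad 1 \le p < \infty.
```

Written this way, the integrand is the $p$-th power of the fractional difference quotient $|u(x)-u(y)|/|x-y|^{s}$ weighted by the singular kernel $|x-y|^{-n}$, which is why finiteness amounts to control near the diagonal $x = y$.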
I see how to show that when a function is affine, its epigraph is a halfspace.
The converse is the following: when a function’s epigraph is a halfspace, the function is affine. It seems obvious, but I can’t find a clean proof. It has to start from the equality of two sets and arrive at an explicit form for the function.
(This is not homework; it’s from an exercise in an archived online course. Sadly, it is not proved in the exercise solutions.)
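To fix notation (this is my own attempt at setting it up, assuming $f:\mathbb{R}^n\to\mathbb{R}$ is real-valued everywhere): the set equality in question is

```latex
\operatorname{epi} f
  = \{(x,t)\in\mathbb{R}^n\times\mathbb{R} : t \ge f(x)\}
  = \{(x,t) : \langle a,x\rangle + b\,t \ge c\}, \qquad (a,b)\neq(0,0).
```

The step I keep circling is ruling out $b \le 0$: for each fixed $x$, the slice $\{t : (x,t)\in\operatorname{epi} f\}$ is the upward ray $[f(x),\infty)$, whereas the slice of the halfspace is all of $\mathbb{R}$ or empty when $b=0$, and a downward ray when $b<0$. If that forces $b>0$, comparing slices gives $f(x) = \bigl(c-\langle a,x\rangle\bigr)/b$, which is affine. Is that the clean way to do it?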
Assume the infinite sum over a sequence of real numbers A converges. Define a sequence B whose kth element is one greater than the kth element of A, i.e. b_k = a_k + 1.
Does the infinite product over B necessarily converge?
If not, what is a counterexample? If so, how do I prove it? For every convergent sequence A I have tried, the infinite product over B converged too, so I conjecture that the statement is true.
Also: Does a convergent B imply a convergent A?
Does a divergent A imply a divergent B?
Does a divergent B imply a divergent A?
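For concreteness, here is a minimal version of the numerical check I have been running (the helper name is my own); both sample sequences below have convergent sums and, as far as I can tell, convergent products:

```python
import math

def partial_product(a, n):
    """Product of (1 + a(k)) for k = 1..n, with a given as a function of k."""
    p = 1.0
    for k in range(1, n + 1):
        p *= 1.0 + a(k)
    return p

# a_k = 1/k^2: the sum converges absolutely, and the partial products
# settle near sinh(pi)/pi (a classical closed form for this product).
print(partial_product(lambda k: 1.0 / k**2, 100_000))
print(math.sinh(math.pi) / math.pi)  # reference value, ~3.676

# a_k = (-1)^k / (k + 1): the sum converges only conditionally, yet the
# partial products telescope toward 1/2.
print(partial_product(lambda k: (-1) ** k / (k + 1), 100_000))
```

One convention worth noting: an infinite product is usually said to diverge if its partial products tend to 0, so a sequence whose partial products drift to 0 would count as a counterexample even though the limit exists.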
Does anyone know of a method to check whether, for a given Rubik’s cube scramble, the reverse of the scramble is the shortest way (in number of moves) to get back to the solved state? Equivalently: is a given scramble the shortest way to reach the scrambled state?
I need this for a programming project, so it’s a bonus if the method can be implemented easily.
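For scale, here is the brute-force baseline I can already implement (a generic permutation-puzzle sketch; the toy moves below are made up, not real cube turns). It checks optimality by trying every strictly shorter move sequence, which is only feasible for short scrambles:

```python
from itertools import product

def compose(p, q):
    """Apply permutation p, then q (permutations as tuples mapping i -> p[i])."""
    return tuple(q[p[i]] for i in range(len(p)))

def run(moves, seq, n):
    """State reached from the identity by applying the moves in seq, in order."""
    state = tuple(range(n))
    for m in seq:
        state = compose(state, moves[m])
    return state

def scramble_is_shortest(moves, scramble, n):
    """True iff no strictly shorter move sequence reaches the scrambled state.

    Plain brute force over all shorter sequences -- exponential in the
    scramble length, so only practical for short scrambles.
    """
    target = run(moves, scramble, n)
    for length in range(len(scramble)):
        for seq in product(moves, repeat=length):
            if run(moves, seq, n) == target:
                return False
    return True

# Toy 4-element puzzle (made-up generators, not a real cube):
toy = {"A": (1, 2, 3, 0), "B": (1, 0, 3, 2)}
print(scramble_is_shortest(toy, ["A"], 4))                 # True
print(scramble_is_shortest(toy, ["A", "A", "A", "A"], 4))  # False: A^4 = identity
```

My understanding is that practical versions of this search are exactly what optimal solvers do with heavy pruning (Korf-style IDA* with pattern databases), so an alternative is to run an existing optimal solver on the scrambled state and compare the length of its solution to the length of the scramble.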