All fields of medicine are characterized by a history of wrong ideas, many of them very strange from a modern viewpoint. These ideas were slowly winnowed out as technology advanced to the point of being able to prove them wrong, and as the culture of science advanced to the point at which evidence was taken seriously. Considerations of aging are no exception, and like most of the very complex issues in biology, aging is arguably an area in which the wrong and the strange persisted to a later date than was the case for other areas of medical science.
Many early theories of aging revolved around loss of some form of resource: that people were born with a given amount, that it was needed for life, and that the process of living depleted it. No such resource exists, of course. Rate of living theories of aging might be seen as the more modern last gasp of that sort of thinking regarding fixed limits and the passage of time. In reality, life span is fluid, determined by the accumulation of cell and tissue damage that arises as a side-effect of the normal operation of cellular metabolism, and by the rising mortality rate caused by the presence of that damage and its consequences. It is the damage that is important, not the time spent alive. Repair the damage and people will live longer in good health; the first rejuvenation therapies, such as those that destroy senescent cells, should prove that point in the years ahead.
In the second half of the 19th century, doctors believed that old age occurred when the body ran out of "vital energy" - which was no mere metaphor. The stuff was thought to be tangible, literally present in the body and its fluids. Everyone had a finite reservoir of vital energy that gradually became depleted over a lifetime. When you began to run low on vitality, you were old; death followed when the tank was empty.
For the era's doctors, the concept conveniently solved the mystery of why illness seemed far more curable in the young than the old. Physicians supposed that the loss of vitality created a "predisposing debility," as one historian has put it, making the older body vulnerable to a host of secondary maladies. The theory also fit with American religious thought as influenced by the Second Great Awakening, which peaked in the 1830s. The amount of vitality you were endowed with at birth was simply your lot. Whether you used it well or squandered it, however, was your personal responsibility.
In continental Europe in the 1850s and 1860s, vitality theory began to wane as French and German pathologists realized that the lesions, fibrous tissue and calcium deposits they discovered in older people's cadavers could provide an explanation for some of the complaints of old age. But in the United States and Britain, many of those aware of these continental findings simply doubled down on their existing beliefs: Any wasting observed in cadavers was simply due to the loss of vital energy.
Perhaps the best evidence for that point of view was the moment in a patient's life when vitality began to appreciably decline, which English-speaking physicians named the "climacteric period," or "climacteric disease." In women, the climacteric period was believed to begin between ages 45 and 55 and was associated with menopause; in men, it took place between 50 and 75 and was indicated by such signs as wrinkles, white hair, and complaints of feebleness.
Eventually, insights from the medical field of pathology discredited vital-energy theory, but only after it molded the development of a long-lasting set of social, cultural, and economic institutions. The first dedicated old-age homes, the rise of public and private pensions, the normalization of retirement both as something bad your boss could do to you and as a new stage of life - these all marinated in vital-energy theory for decades before emerging fully baked into the 20th century, complete with implications for what it meant to be an "older person."