The Importance of Age-Related Suffering


In an interview with London Real, and elsewhere, Aubrey de Grey has stated that “aging is the world’s most important problem, in this very obvious sense that it causes the most suffering.” (Brian Rose, who is himself in the throes of aging, can’t help but emphatically agree with de Grey’s incredibly tenuous statements.) There are a couple of implicit claims in here that I take issue with. For one thing, it is impossible to definitively state that the suffering related to aging and death is greater than the suffering associated with distinct categories of human experience. Generally, being a low-ranking human, which is more commonly conceptualized as “inequality,” causes a comparably tremendous amount of suffering. Kids are traumatized by their peers, and adults work awful jobs on which they can barely survive. Either way, this line of thinking is moot because suffering cannot be directly quantified.

De Grey’s argument implies that suffering is quantifiable, that one unit of one type of suffering is equal to one unit of another type. Given these tenets, age-related suffering is more important simply because there is more of it. Assuming that suffering is quantifiable, and that the aggregate suffering from aging really is greater than from anything else, is there any way in which suffering could be weighted differently, such that early-life suffering would be more important despite being lesser?

To answer this question, suffering could be conceptualized similarly to money, as a thing that has a present value and a discounted future value. Going one step further and using an interest model, early-life suffering can be quantifiably greater in the long term because of compounding. This may seem hopelessly abstract, but it is practically true. Consider the fact that psychological trauma during childhood disposes one to suffer throughout life. The cost of those experiences recurs through memories, and through subconsciously determined patterns of behavior, which emerge as opportunity costs to greater fulfillment in life. An example would be a person who subconsciously thinks that they don’t deserve anything good and sabotages their own endeavors.
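The compounding analogy can be sketched with code. Everything here is hypothetical: the “units” of suffering, the 5% annual rate, and the time horizons are made-up numbers chosen only to show how a smaller early-life quantity, compounding for longer, can outgrow a larger late-life one.

```python
# Hypothetical illustration of the compounding-suffering analogy.
# All numbers are invented for the sake of the example.

def compounded_suffering(initial_units: float, rate: float, years: int) -> float:
    """Long-term 'cost' of suffering under simple compound growth,
    analogous to the future value of money."""
    return initial_units * (1 + rate) ** years

# One unit of childhood suffering, compounding at an assumed 5% per year
# (via recurring memories and self-sabotaging behavior) over 60 years:
early = compounded_suffering(1.0, 0.05, 60)

# Ten units of suffering incurred late in life, with only 10 years left to compound:
late = compounded_suffering(10.0, 0.05, 10)

print(f"early-life unit grows to {early:.1f}")   # ~18.7
print(f"late-life units grow to {late:.1f}")     # ~16.3
```

Under these made-up parameters, the single early unit ends up costing more than the ten late units, which is the whole point of the weighting argument.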

From a macro perspective, there is a potential ethical issue that arises from diverting resources away from early-life suffering to late-life suffering. Assuming a zero-sum sort of reality, is it wrong that a dollar going to age-mitigation research is a dollar that isn’t going towards keeping kids free of trauma, given that the dollar has the same marginal impact on suffering?

On a separate topic, there is the issue of the environment. Given that there is environmental catastrophe from human consumption of resources, which goes beyond the usage of fossil fuels, there is a strong probability that the effects on the environment will cause tremendous suffering for humans, above and beyond age-related suffering. This framework suggests that age-related suffering is not the most important issue, even in terms of suffering.

Assume that the environment will be okay (maybe degrading linearly rather than catastrophically), at least enough for the relatively wealthy to avoid harm, while mass extinction goes on. What are you left with? Implicitly, there is the statement that human experience is paramount over the objective physical world and non-human experience. I get that you can’t definitively state that the environment is intrinsically valuable outside of what is necessary for human survival. Still, it seems intuitively shitty to think of old people dying as more important than ecological collapse.

From a less extreme, and more practical, perspective, one could say that the most important beneficial endeavor for humans is increasing health for all individuals, which would decrease suffering across all ages. Anti-aging research is interesting and worthwhile, but it’s not even close to being the most important issue facing humanity. It may be the most important issue to a wealthy individual with the ability to insulate themselves from socio-environmental catastrophe. However, it is almost tautologically not the most important issue facing the average human.

The EPIC Fitness Summit: What Was Its Real Purpose?

Last month there was a fitness/nutrition conference in England called the EPIC Fitness Summit. This conference innovatively markets itself towards the tribal attitudes that underlie scientific knowledge within the fitness world. As such, the format emphasized debating, and the event was advertised as a sort of showdown between pop-nutrition figures. The general theme across all debates was the calories-in/calories-out model vs. the hormonal emphasis on carbohydrate consumption.

I wasn’t actually there, though my frantic search for footage of the debates brought me to statements via social media, which gave me enough awareness to realize that the event was completely devoid of educational value to the participants.

I find that in the world of performance science, there are two types of people: those who emphasize “evidence-based conclusions” and those who rely more on fundamental facts to reason upward. Ideally, a scientist or a scientifically minded practitioner emphasizes both procedures, understanding that pure theory can be based on false assumptions, and that empirical evidence can be corrupt or misinterpreted.

This is just my own personal observation, but it seems to me that the ones skewed towards “evidence based conclusions” are not as intelligent. If you are not that smart, then you probably don’t actually understand biochemistry, or any other hard science that would lend itself to obtaining a fundamental model for understanding biology. For this person, quantity of research is their only proxy for truth. In addition, these people are dicks who use information to assert dominance over others.

In a review of the Taubes–Aragon debate, the reviewer implicitly stated that he believed Aragon to be more correct because of the large quantity of research that he had presented. I understand that there is a consensus out there that Taubes was unprepared and that he acted rudely towards Aragon. This may be so. In any case, this reviewer is emblematic of the type of soft reasoning that is normal within the fitness/nutrition community: an implicit sense that an argument supported by a large amount of research is more credible than an argument supported by less research.

The truth is that relative quantities of research prove nothing. From a naive probabilistic perspective, one could reasonably suppose that a larger body of research showing one relationship is more likely to be true than a smaller set of research showing a contradictory result. There are two potential issues with this perspective.

One arises from the fact that complex, multivariate systems are difficult to study. The classic assumption of a variable having a consistent relationship, holding all other variables constant, is not absolutely true. Alternate paradigms exist in which a variable’s function changes completely. For example, cortisol is widely believed to contribute to weight gain, and it does when insulin is present. However, when insulin is not present, cortisol is extremely catabolic to adipose tissue. One variable’s function can change completely across contexts.

The other issue is that beliefs perpetuate themselves. Once a belief is embedded in education, future scientists acquire that bias and begin testing for particular results under a narrow scope of conditions. And a person who challenges conventional beliefs may not get funding. So it is not necessarily true that research quantity absolutely increases truth probability.

Moving on: what was the EPIC Fitness Summit’s true function, and what actual benefit did it provide for the people who attended, given that it largely consisted of the types who fall into the evidence-based bias?

This demographic cannot reason scientifically, and uses scientific data as a tool for perpetuating preexisting beliefs. A debate in this context crudely formats scientific knowledge into a primitively conflictual word fight. On top of this, it’s not necessarily true that the correct person with the most logical and relevant argument wins. Men like Alan Aragon resort to irrational arguments, ad hominem attacks, and verbal parlor tricks to woo their audience.

The result is that no one learns anything, and the value is limited to that of a thrilling fight.

Why You Should Train Your Brain With Math

Most of us have had some degree of frustration with math. You couldn’t follow the teacher’s practice problems, and class was useless. You didn’t learn foundational rules like the order of operations well, and you constantly got stuck. Just when you felt like you were starting to enjoy math, you took a course in proof writing and realized that “real math” is the reason John Nash lost his mind. Did I just say all of that in the second person? That’s because those three sentences sum up my personal experience with math. I was pretty bad at math early on, but I overcame all of that and minored in math during college… which ultimately resulted in mediocre grades. Considering all of the frustration I’ve had with math, it’s ironic that I now practice math with the same enthusiasm that extraterrestrial predators have for murdering Central Americans.

A couple of days a week, I spend the first hour of my day going through problems in the ACTEX manual for the actuarial probability exam. For those who are unfamiliar, the probability actuary exam is extremely difficult. It takes calculus-based statistics, which is kind of hard, and tests the hardest possible word problems with that material. To put this in context, all of the material on this exam is covered in a college course called mathematical statistics, and you could ace that course and still fail the P/1 exam. You need to be able to think on your feet and solve novel problems in order to pass.
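To give a feel for the material, here is an illustrative, textbook-style problem (not an actual exam question) of the kind of calculus-based probability the exam builds its word problems on, using the memorylessness of the exponential distribution:

```python
# Illustrative problem: device lifetimes are exponential with mean 10 years.
# What is the probability a device lasts more than 15 years, given that it
# has already survived 5? Memorylessness says this equals P(X > 10).

import math

def exp_survival(t: float, mean: float) -> float:
    """P(X > t) for an exponential distribution: the closed form of
    integrating the density lambda*exp(-lambda*x) from t to infinity."""
    return math.exp(-t / mean)

# Conditional probability computed directly from the definition:
conditional = exp_survival(15, 10) / exp_survival(5, 10)

# The memoryless shortcut:
memoryless = exp_survival(10, 10)

print(round(conditional, 4), round(memoryless, 4))  # both 0.3679
```

The exam's trick is recognizing shortcuts like this under time pressure instead of grinding out the integral.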

Anyway, I actually took the exam two years ago, and failed miserably because I got the study guide only two weeks before the test, and because I figured beer and “stimulants” were legitimate study tools. I have no intention of retaking the exam or becoming an actuary, though I’m sort of curious to see how I’d do these days, as my IQ has gone up since I first took it. What purpose, then, is there in doing any of this? Well, I have my personal reasons, but there is another, which is sufficient in and of itself for me to pursue this practice. Mathematical study is the most effective practice for improving cognitive performance. It is like high-intensity interval training for the brain.

A side note on “brain games”: the argument for learning math is structurally similar to that for brain games. “Do this particular mental activity in order to have a transfer effect to practical cognitive domains.” However, there is a profound difference between the two. Brain games can be helpful if you do almost nothing with your life; pretty much all of their benefits can be achieved just by doing things in real life. So, instead of playing games with tenuous benefits, why not spend your time using your brain for things that are productive? (If you start a business, you’ll learn extremely efficiently and you’ll get better at decision making.) Brain games tend to be marketed to those who don’t have the ability to spend their free time in a way that doesn’t involve entertainment. In this way, brain games are like the gummy bear vitamin of mental activity.

Anyone reading this article is probably smart and occupies themselves with mentally stimulating activities on a daily basis. For these people, what is the point? Going back to the exercise analogy, exercise should have the primary effect of increasing energy levels for life’s demands. In a similar way, homing in on difficult math problems improves general problem solving and pattern recognition. The time and energy that you put into this is an investment that will improve productivity in almost every domain of life, and thus life itself improves. Brain games are like elliptical machines; math is like high-intensity power lifting. Capisce?

The Nearly Obsolete Meta Sense of Decision Making

At a particular point in the life of a society, there is a set of behaviors along an average individual’s lifespan which are optimal in terms of expected benefit, given a risk-averse disposition. However, the average person does not have the theoretical knowledge or the reasoning capacity to come to this determination autonomously. So, societal mechanisms exist by which a meta sense of optimal choice proliferates. This social sense takes a behavior that is determined by multivariate models and defines it as a binary dictum. Moreover, the impetus for obeying this binary dictum is social cost rather than the underlying economics of the decision.

For example, when college was an optimal decision, the reasonable statement when advising a young person would have been, “College is a challenging environment that makes you more economically valuable. On top of that, it’s a highly potent signal as to your value, making it a less risky path to a high-paying job than not going to college.” Instead, most people reduced this reality to “you must go to college, or you will be poor, and your social network will disown you.”

While it is incorrect to have an overly simplified model of a particular thing, it has worked out okay for society, because the binary model is a rough approximation of the multivariate model, as long as the parameters remain constant.

The problem that we encounter today is that the rate of change, or paradigm-altering disruption, has been accelerating. So we can’t assume that parameters will remain constant, which is what the simplified models require. Anything can change. So, the days of context-free advice working are over. In order to make good decisions, one will need to attain the ability to think in terms of models*, and build up the mental framework that is pertinent to one’s particular life.

*To clarify, models are synonymous with systems. Being able to think in models, or systemically, means that you can conceive of multiple variables interacting with each other, and think your way through the ripple effect that would occur if any variable were to change.
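As a sketch of what thinking in models looks like, here is a toy version of the college decision. All parameters and numbers are hypothetical; the point is that a multivariate model lets you see how a shift in any one variable ripples through and can flip the conclusion, which the binary dictum cannot do.

```python
# Toy multivariate model of the college decision (all numbers hypothetical),
# contrasting model-based thinking with the binary "you must go" dictum.

def college_net_value(wage_premium: float, years_of_premium: int,
                      tuition: float, foregone_wages: float,
                      completion_prob: float) -> float:
    """Expected net benefit of attending college under a flat, undiscounted
    model: premium earned if you finish, minus costs paid either way."""
    expected_gain = completion_prob * wage_premium * years_of_premium
    return expected_gain - tuition - foregone_wages

# Parameters from the era when "go to college" was a safe binary rule:
old_world = college_net_value(wage_premium=15_000, years_of_premium=40,
                              tuition=40_000, foregone_wages=80_000,
                              completion_prob=0.9)

# Same model after parameters shift (tuition up, premium down, dropout up):
new_world = college_net_value(wage_premium=8_000, years_of_premium=40,
                              tuition=160_000, foregone_wages=80_000,
                              completion_prob=0.5)

print(old_world > 0, new_world > 0)  # True False
```

The binary dictum hard-codes the old-world answer; the model recomputes the answer whenever a parameter moves.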

Comparison of Modern Wages with Paleolithic Work Patterns: Part 1

Intro

Notions of fairness are par for the course in the discourse surrounding the minimum wage. Some people look at the issue in terms of work and effort; for example, that it is unfair for fast food employees to get paid less than other workers when the average effort expenditure is subjectively similar. Other people look towards economic theory to argue that free markets are efficient and that manipulation of labor markets through mandatory wages would have negative consequences such as decreased employment. These people have a mental model of economics and wages that is structurally similar to certain interpretations of the nature of life: economic forces, like nature, are indifferent to normative notions of fairness, and one can merely adapt to these forces rather than alter them in vain. As an aside, there is a pretty big overlap between Paleo enthusiasts and libertarians. Robb Wolf claims to be a die-hard libertarian.

Although economic forces dictate behavior in a mechanistic way, it is an error to assume that uninhibited markets imply optimal outcomes. Remember, people who took a handful of undergraduate economics courses as part of their business minor: the invisible hand and game theory are not friends.

Given that artificial economic structures and nature are inherently unfair, how can we make things better in an absolute sense? I have no idea, but it might be interesting to compare artificial human ecologies with the natural conditions faced by Paleolithic humans to see how they stack up. Ultimately, societal economics simulates mechanisms in the natural world. Of course it is unfair; however, maybe we can determine whether this simulation is an improvement over the original environment that humans were first born into. Whether and how artificial economic environments can be improved is a separate question.

Our first issue is setting up parameters that roughly model our definition of Paleolithic humans. This is more problematic than you would think. An assumption contained within the dialogue pertaining to the lifestyles of Paleolithic humans is that human life was static until it suddenly changed with agriculture. Sure, agriculture was a huge change, but lots of things changed between the emergence of humans and the Neolithic era. Between advances in technology, increased starch consumption, increased population densities, and interbreeding with other hominids, humans changed a lot before agriculture. In fact, one could argue that these changes brought humans to the point where agriculture was the next incremental step, rather than some sort of quantum leap.

So what is the point in looking towards Paleolithic humans if they are really just a moving target? I could have framed this article as an economic comparison of minimum wage workers to present-day hunter-gatherers, which would have cut out the middleman and the speculation. As mentioned earlier, this is an analysis of the effect of two systems on humans: artificial economic systems and nature. So this isn’t entirely a “let’s compare ourselves to hunter-gatherers due to the non-tautological implication that everything they did was good” exercise; still, it probably makes more sense to compare industrial productivity to modern hunter-gatherers. There’s just more data, the population is more narrowly defined, and the proposed differences between Paleolithic and modern hunter-gatherers are probably small, and speculative. Regardless, I’m still going to frame this analysis around Paleolithic humans, because it is more interesting.

My decision to focus on Paleolithic humans exemplifies a trending fascination with extrapolating Paleolithic human behaviors into a baseline for what modern people should do. This has coincided with the busting of the myth of progress by the economic shift of 2008. It’s probably just a coincidence, but I do think it’s interesting that the Paleo diet emerged not long after the 2008 financial recession, or maybe later on, after the realization that no true recovery was in sight.

Anyway, the machinations of economic forces that converge towards extremely unfair conditions are disconcerting, and I think what we really want is a contextual basis, or model, in which things work out well. Paleolithic humans, as opposed to modern hunter-gatherers, provide a fantastical backdrop for that projection. Also, it doesn’t have to involve black people.

Applied Evolutionary Hypotheses Vs. Neolithic Adaptation

Evolutionary theory has increasingly seeped into public awareness as a model for understanding human biology. Books like The Red Queen and Sperm Wars taught us that our sexual behaviors are the product of millions of years of sexual selection, which has been instrumental both to human survival and to shaping general behavior.

Accelerating pop-culture awareness of evolutionary theory paved the way for the Paleo diet, and the application of evolutionary theory to nutrition and exercise. The idea caught on that Paleolithic humans had genes that were optimized for their lifestyle, and that we pretty much have those same genes. So it follows that if we really want to be healthy, then we should emulate the behaviors that our mostly Paleolithic genes are adapted towards. Unfortunately, proponents of the Paleo diet have failed to adequately evaluate this hypothesis through the scientific method, and over the last couple of years this failure has been thoroughly belabored.

What should be noted is that the statement “what was good for Paleolithic humans is good for modern humans” is NOT a tautology. There are instances in which acting on some paleo-perceived intuition will not be optimal for you, even if that intuition is true of what humans back then were actually doing (and assumptions about what was true of all humans over the course of 100,000 years are tenuous). The supposed tautology fails because all humans have genetically adapted to Neolithic agrarian diets to some extent. That’s not to say that people are completely different now. As a hypothetical example, suppose that the average human has retained 95% of the Paleolithic adaptations and has gained 5% new adaptations that have nulled previous ones. To some extent, most people have adapted to higher-carb diets and grain toxins. That’s not to say that maintaining such a diet is optimal. As a random example, honey badgers can survive poisonous snake bites. Would they be less healthy without getting bitten by the snakes that they fearlessly slaughter? Maybe, maybe not; the point is that genetic adaptations to environmental stimuli do not imply that those inputs are required for optimal health.

Another possibility is that there is a nutritional context for optimal health that doesn’t conform to the Paleolithic environment. Some people are into super-high-fat diets. There probably weren’t any Paleolithic humans who stumbled into eating highly ketogenic diets naturally. However, such a diet might turn out to be optimal even if it doesn’t conform to an environmental context that our ancestors were adapting to.

So, given the possibility that we can do better than Paleolithic humans, that we have retained the vast majority of the genes and adaptations that they acquired, but that most of us have novel adaptations from agriculture, what should we do? Thinking about how early humans lived is interesting and potentially informative. For example, there is the possibility, courtesy of John Kiefer, that early humans were somewhat adapted to high-glycemic, high-fructose meals that were gorged upon whenever fruit was available. This is potentially a justification for carb back-loading, and possibly for preferring simple sugars to starches in a differentiated context. Generally, evolutionarily derived hypotheses can be a shortcut to setting up guidelines and permissive fluctuations, without going through the work of checking whether those hypotheses are true, or when there is a lack of relevant scientific research. Ultimately, it is preferable to look up the research whenever you have a question, or to go through a biology textbook to understand mechanisms.

Keep in mind that we don’t know everything, because there are things that can’t be researched, there are hypotheses we haven’t imagined or can’t imagine, and there is a lot of biased research polluting the literature. Look up the data and the facts, but don’t be afraid to go out on a limb when it comes to the unknown.