Posts in Philosophy (14 found)
devansh 4 days ago

Is Complexity just an illusion?

Most of what we call "complexity" is not a property of reality. It's a property of our descriptions of reality. The world is what it is; what changes is the language you have available to carve it up. When someone says "that's a golden retriever," they're not just using two words, they're using a compressed concept that bundles size, coat, temperament, typical behavior, and a bunch of implied background. If you don't share that vocabulary, you're forced into a longer, clumsier description of the same dog. The dog didn't get more complex. Your map did.

This is why expertise feels like magic. A chess novice sees a board with dozens of pieces and a combinatorial explosion of interactions. A grandmaster sees "a fork motif," "a weak back rank," "a pinned knight," and a small set of candidate lines. They're not seeing less detail. They're carrying a better compression scheme. They have words for patterns that occur often, and those words collapse chaos into structure. Complexity shrinks when you acquire the right abstractions.

Once you internalize this, you stop worshipping "simple explanations" in the naive sense. People don't actually want explanations that are short. They want explanations that keep working when conditions change, that don't fall apart on new data, and that don't assume more than the evidence forces. Word count is not the virtue. Appropriate restraint is. Compare the proverb "Red sky at night, sailor's delight" to a messier but truer model: weather depends on pressure systems, humidity, wind, and local geography; red skies correlate sometimes, depending on context. The proverb is shorter. The second is less wrong in more places because it commits less.

This is also why simplicity often correlates with truth in mature domains. Over time, languages evolve to give short handles to recurring, broadly useful structure. We coin compact terms like "germs," "incentives," "feedback loops," "network effects." They're easy to say because the underlying patterns are valuable and frequent, so the culture compresses them into vocabulary. The causality isn't "short explanations generalize." It's "general structure gets named," and once named it looks simple. Simplicity is often a dashboard indicator, not the engine.

Learning anything complex is mostly representation engineering in your own head. You are not trying to stuff facts into memory. You are trying to acquire compressions: concepts that turn many details into a small number of stable handles. What follows is a basic mental model:

1) Steal the field's primitives before you invent your own. Every domain has a small set of basic concepts that do a shocking amount of work. If you skip them, you'll experience the domain as irreducible complexity. In calculus, "derivative" is not a symbol; it's "local linear approximation." Once that clicks, a lot of problems stop being special cases. In economics, "opportunity cost" and "incentives" are compression handles that cut through moralizing narratives. In product work, "retention," "activation," and "unit economics" prevent you from drowning in vibes. Early learning should look like building a precise glossary, not collecting trivia.

2) Build a pattern library by grinding examples until the patterns name themselves. Experts aren't mainly smarter; they've seen enough instances to chunk reality. You get there by doing many small reps, not by reading one long explanation. Read one worked example, then do three similar ones from scratch. In chess, drill forks and pins until you stop counting pieces and start seeing motifs. In programming, you want "race condition," "off-by-one," "state leak," "cache invalidation" to become immediate hypotheses, not postmortem discoveries. Practice isn't repetition for discipline's sake; it's training your brain to compress recurring structure.

3) Learn with falsifiable predictions, not passive recognition. If you can only nod along, you don't have the abstraction. Force yourself to predict outcomes before checking. If you're learning statistics, predict how changing sample size affects variance (there's a small simulation sketched at the end of this post). If you're learning sales, predict which segment will churn and why. If you're learning systems, predict the failure mode under load. This converts knowledge from "a story I can repeat" into "a model that constrains reality."

4) Control commitment: go from broad to narrow. When something breaks or surprises you, generate hypotheses ranked by how much they commit. Start with coarse categories ("measurement issue," "traffic shift," "pricing edge case," "product regression") before picking a single narrative. Then test to eliminate. This is how experts stay accurate: they don't jump to the cleanest story; they keep the hypothesis space alive until evidence collapses it. The question "what does this rule out?" becomes your guardrail.

5) Upgrade your vocabulary deliberately. When you encounter a recurring cluster of details, name it. Give yourself a handle. The handle can be a formal term from the field or your own shorthand, but it must point to a repeatable pattern you can recognize and use. This is how you compound. Each new concept is a new compression tool; it makes future learning cheaper.

If you do this well, "complex topics" start to feel different. Not because the world got simpler, but because you stopped paying unnecessary translation costs. The deepest form of intelligence isn't producing the shortest answer. It's finding the abstraction level where the real structure becomes easy to express, and then refusing to overcommit beyond the evidence. So is complexity an illusion? idk, you tell me. The kinds of complexity people complain about are "hard to describe, hard to predict, hard to compress", and that is often a signal that your vocabulary is misaligned with the structure of the thing. The tax is rarely levied by the territory. It's paid at the currency exchange between reality and the symbols you're using. And the highest-leverage move, more often than people admit, is to upgrade the map.
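To make the prediction drill from point 3 concrete, here is a minimal sketch of the statistics example referenced above, using only Python's standard library. The distribution, sample sizes, and trial count are arbitrary illustration choices, not anything prescribed in the post. Write down how you expect the spread of the sample mean to change before running it, then compare.

    # Prediction drill: how does sample size affect the variance of the sample mean?
    # Commit to a prediction first, then run and compare.
    import random
    import statistics

    random.seed(42)  # fixed seed so runs are reproducible

    def spread_of_sample_mean(sample_size, trials=2000):
        """Standard deviation of the sample mean across many repeated samples."""
        means = [
            statistics.fmean(random.gauss(0, 1) for _ in range(sample_size))
            for _ in range(trials)
        ]
        return statistics.stdev(means)

    for n in (10, 100, 1000):
        print(f"n={n:>4}  spread of the sample mean ~ {spread_of_sample_mean(n):.3f}")

    # Expected pattern: the spread shrinks roughly like 1/sqrt(n),
    # i.e. about 0.32, 0.10, 0.03 for n = 10, 100, 1000.

The exact numbers don't matter; what matters is committing to the rough 1/sqrt(n) prediction before looking, which turns recognition into a test of your model.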

1 view
devansh 1 month ago

Do Your Bit Anyway

The chance that your life will change human history is basically zero. You're not Caesar, you're not Newton, and you're not some chosen one. When you look at history from far away, 99.99% of everything people do disappears into nothing. About 117 billion humans have ever lived. Maybe 10,000 names are still remembered. That's about 0.0000085% of everyone who ever existed (10,000 out of 117 billion). Even smart people can't name more than 100 historical figures. The rest, 116,999,990,000 people, lived, struggled, loved, and died completely forgotten. This isn't meant to depress you. It's just math. Wanting to "change the world" is often just ego dressed up as kindness. If you only do things because you want to be famous, you'll end up doing nothing. You'll be paralyzed because the bar is impossibly high and you won't live to see if you made it. Thinking "nothing matters" because "I won't be famous" is a mistake. It confuses being visible with being useful. Your heart valve isn't visible, but try living without it. We believe a lie called the "Great Man Theory", that history happens because of special individuals. This is wrong. It just makes for better stories. Stories need heroes. Textbooks need chapter titles. But reality doesn't work that way. Napoleon didn't conquer Europe alone. He was the tip of a spear made by millions of French people: farmers, blacksmiths, clerks, mothers who raised soldiers. Every big historical moment isn't one event. It's millions of small actions piling up until something breaks through. Look at the moon landing. Everyone remembers Neil Armstrong. But it took 400,000 people to get him there. Engineers, mathematicians, seamstresses. One was Margaret Hamilton, who wrote the computer code. But even she needed the janitor who kept the computer lab clean so dust wouldn't break the machines. We don't know the janitor's name. But without him or her, the computer fails. Without the computer, the rocket crashes. The "Great Man" is just the crack you see in the dam. But the pressure that breaks the dam comes from all the water behind it that you can't see. You are the water. Without millions of anonymous people, the "great" have nothing to stand on. Einstein's theories needed hundreds of years of math from forgotten scholars. Shakespeare's plays were performed by actors we don't remember, in theaters built by carpenters we'll never know. Greatness is always a group project, even when history gives one person credit. The best reason to do your part comes from physics. The natural state of everything is chaos. Things break down, fall apart, rot. This is entropy, and it's always happening, and will keep happening for the rest of time. Civilization isn't solid and stable. It's fragile. It needs constant work to keep it from collapsing. The road must be fixed. If not, potholes appear. Traffic jams happen. Accidents increase. Trucks carrying food get delayed. Food rots. Prices go up. People suffer. The child must be fed. If not, their brain doesn't develop right. You get a generation that can't think well enough to keep society running. Society collapses, thus civilization collapses. Contracts must be kept. If not, trust dies. Trade becomes impossible. Everyone has to grow their own food. We're back in the stone age. The truth must be told. If not, nobody can work together. Society breaks into tribes that can't even agree on what's real. When you do your job well, raise your kid with care, or refuse to lie, you're not changing humanity's direction.
You're doing something more important, you're keeping it from falling apart. People think "important" means steering the ship to a new place. They forget that the ship needs to float first. If millions of people stop doing their "small" jobs well, society doesn't change direction, it sinks. Your small part is what keeps everything standing. Every teacher who explains math clearly, every plumber who fixes a leak right, every nurse who double-checks the medicine, they're all fighting chaos. They're holding back collapse. You can't judge how important your actions are. Human systems are chaotic. Small things create huge results in ways you can't predict. A boring biology teacher in the 1800s gives a dull lesson. He feels useless. But that lesson makes one student curious. That student later discovers penicillin. The teacher is forgotten, but without him, the 200 million people penicillin saved would have died. Those people have kids. Those kids create new things. One boring Tuesday in 1820 created results that never end. In 1962, a Soviet officer named Vasili Arkhipov was on a submarine during the Cuban Missile Crisis. His fellow officers voted to fire a nuclear weapon at American ships. Arkhipov voted no. They needed everyone to agree. The weapon wasn't fired. Nuclear war was avoided. Billions of people lived because of one "no." Yet Arkhipov died unknown. His story stayed secret for decades. Why did Arkhipov say no? Maybe his mother taught him patience as a kid. Maybe a neighbor was kind to him once, so he valued human life. Maybe a teacher taught him to think for himself instead of just following orders. A billion "small" moments created the man who saved the world. Stopping because you can't see results is arrogant. It assumes you have god-like vision to trace every effect of your actions into the future. You don't. You're in a chaotic system where small inputs create wild, unpredictable outputs. So the only smart move is to focus on the input, do quality work, even when you can't see the output. You don't know which of your actions will matter forever, so treat every action like it might. Strip away all results and ask yourself "What if everyone thought like you?" If everyone said "my actions don't matter", disaster happens immediately. If every engineer said "my bridge inspection doesn't matter", bridges collapse. If every programmer said "my code quality doesn't matter", planes crash. If every parent said "raising my kid doesn't matter", civilization ends in one generation. If every voter said "my vote doesn't matter", democracy dies. You must keep doing your part not for a reward, but because the opposite is impossible to defend. You want civilization's benefits - safety, medicine, clean water, laws. But you refuse to do the small work to maintain it? That makes you a parasite feeding off everyone else's effort. Civilization is a coordination game. The only stable version is where most people contribute most of the time. Cheating helps you short-term but destroys everything long-term. Your "small" contribution is your payment for living in a world with hospitals, running water, and justice systems. I've said you're statistically nothing. Now I'm saying human potential is huge. Both are true. The key is understanding the difference between "probably" and "possibly." You probably won't change the world. But you possibly can. History is full of people who looked ordinary, until they weren't. Any average person can become world-class at a skill through 10,000+ hours of serious practice. Expertise is built, not born. Today's technology gives you leverage. One person with a laptop can build software billions use. One person with a camera can influence millions. One person with a pen can change laws. The gap between what one person can do and global impact has never been smaller. Saying "you can do anything" doesn't mean "you will do anything." Possibility needs activation.
Most people could achieve great things but lack one or more of these:
- They don't know what they want.
- They know what they want but can't keep working for years.
- They won't endure failure, poverty, or mockery.
- They don't find the right opportunity, or they do but don't recognize it.

Most people won't activate their potential. But the fact that you probably won't doesn't mean you can't. So which is it? Are you nothing or everything? Both, depending on the scale. On history's scale, you're almost certainly nothing. Your name will be forgotten. Your personal problems will dissolve into noise. On life's scale, you're powerful. You can choose your next move. You can learn any skill. You can be kind. You can absolutely change the lives of people near you. Stop craving to "change humanity's course." That's a fantasy that will torture you because it's unreachable. Reality is local. Reality is the interaction right in front of you. It's with the people you meet. It's in the life you live. It's in the positive change you bring to others' lives. You are the foundation. A cathedral is known for its spire, but the foundation holds it up. The stones in the foundation are buried in darkness, never seen, never praised, never remembered. But if they shift, the spire falls. Doing your part quietly, skillfully, without anyone watching - this is the ultimate rebellion against meaninglessness. It says that order is better than chaos, even when nobody's looking. It recognizes that you're both nothing in the grand scheme and absolutely essential right now. And yet, while you do your part, never forget that you have the ability to become the spire. History isn't finished. The next Einstein, the next Lincoln, the next person who bends civilization's path might be you. Probably not. But possibly yes. The only way to guarantee it won't be you is to not try. So do your part. Do it well. Maintain the foundation. But also - build your tower. The universe owes you nothing. It also stops you from nothing. Everything is possible. Nothing is guaranteed. The only waste is unused potential. Now get to work.

13 views
annie's blog 1 month ago

Dishonesty is a rejection of life

Any future perfectly known, said Alan Watts, is already the past. But life is not in the past. Life is now, life is here, life is this moment. The only way to live it is to be as truthful as you can be. With others, of course. But mostly with yourself. Doing anything else is not living or being in the moment.  Anything less than truthfulness is an attempt to distort the past or control the future. When you’re busy trying to distort or cover or rearrange the past, you’re not in the present. When you’re focused on managing and controlling the future, you’re not in the present. You are in a time that does not exist: past or future. When you focus on the past or the future, you opt out of existing in the present. As long as you choose to stay there, in the not-now, you don’t exist in the now. Since now is all that exists, we might say you opt out of existing at all. Until you return to what does exist, the only thing that exists (if anything does): the present, this moment, now.

0 views
annie's blog 3 months ago

Shelter or prison

A mental model or set of values starts as a shelter from the unrelenting chaos of reality. We need these shelters. Living without them isn't really possible. We can't take in and process adequate information fast enough to make truly new decisions. We need to categorize things and go with default reactions, otherwise we'll get stuck, overwhelmed, never able to move from processing and analysis to action. Beliefs, mental models, values: These are shortcuts to decision-making. We adopt the ones we are given, adapt them according to our experiences, and use them as a way to understand the world (at least in some fashion). They tell us what the best thing is when we face a choice. They tell us how to react to other people's choices. These structures give us shelter from chaos. They give us shortcuts so we can live. We stack a bunch of these structures together and call it something bigger: a religion, a culture, civilization. The interactions between the structures form the system we understand as reality. The problem with every system is how it evolves. It begins as a means of supporting the structures, keeping everything working; it ends up as a self-referential entity with the core goal of sustaining itself. The individuals within a system may change and grow and need the system to change and grow with them. But systems resist change. The individuals in a system are often not served by the system, but they're serving it. They're trapped within it. Does it shelter them? Does it provide some resources? Does it, perhaps, even keep them alive? Sure. So does a prison. Sci-fi tells us to fear AI: at some point, the artificial intelligence will become real, exert will, take over. But we should, instead, look at what we've already created that has taken over: our structures, our systems, our organizations, our civilizations. Gaining sentience was not even necessary. We, the inhabitants of the system, provide the necessary sentience to grease the wheels, crank the gears, repair the breaks, patch the holes. How could we refuse? After all, it keeps us alive. This shelter, this system, this prison.

1 view
Brain Baking 3 months ago

What Philosophy Tells Us About Card Play

Given the extensive history behind a simple pack of standard playing cards, it should not surprise you that cards can be seen as a mirror of society: that's essentially why the court cards have kings, queens, and jacks in them. As early as 1377, Johannes of Rheinfelden wrote De moribus et disciplina humanae conversationis, id est ludus cartularum, a treatise on card play in Europe. It is the oldest surviving description of medieval card play. In essence, when you play a game of Whist, you're playing with the remains of the medieval European feudal system. That sounds a bit ominous, so let's skip the grim history lesson and instead focus on what philosophy can tell us about card play. Would philosophers be able to offer interesting insights on why humans like to play and why we should (not) keep on doing it? Arthur Schopenhauer detested card games, and indeed any form of leisure activity. According to him, the clear lack of any intellectual deed would distract us from pondering the real questions of life. Schopenhauer thinks that by playing cards, you're merely fulfilling a basic instinct-level need instead of enjoying higher intellectual pleasures (from Aphorisms on the Wisdom of Life): Dancing, the theatre, society, card‑playing, games of chance, horses, women, drinking, travelling, and so on… are not enough to ward off boredom where intellectual pleasures are rendered impossible by lack of intellectual needs. […] Thus a peculiar characteristic of the Philistine is a dull, dry seriousness akin to that of animals. In The Wisdom of Life, and Other Essays, he scoffs at us players, declaring us "bankrupt of thought": Hence, in all countries the chief occupation of society is card‑playing, and it is the gauge of its value, and an outward sign that it is bankrupt in thought. Because people have no thoughts to deal in, they deal cards, and try and win one another's money. Idiots! That's certainly an original way of putting it. Schopenhauer is well-known for being the grumpy old depressive philosopher who bashes on anything he can think of, except for music and walking with his dog. I guess he failed to see that just having fun is what makes living bearable. Criticising play in general is a common recurring theme in philosophy: play is said to distract from the very essence of thinking. In On Consolation, Seneca the Younger criticises Gaius Caesar for gambling to distract his grief after losing his sister Drusilla. According to Seneca, that's evidence of moral failure. Speaking of which, Michel de Montaigne also seems to categorize card play as a stern morality exercise. In Of the Art of Conference, he notes that even in casual play sessions together with his wife and daughter, one has to stay honest by treating these small actions of integrity—by not cheating and following suit, I guess?—the same as the bigger stakes in life. In another of his essays, Of Drunkenness, he directly compares life to a game of chance where chance can easily mess up any plans we prepared. We, just like the card drawn from the deck, are at the mercy of Lady Luck. Maybe many philosophers dislike games of chance because they do not want to admit that much of our life's experience is left to chance.[1] Perhaps that's why you gotta roll with the cards you're dealt. Fifty years later, Blaise Pascal acknowledged Montaigne's idea. He wrote extensively on wagering and viewed the human condition as one of uncertainty.
We must make decisions with incomplete information—and live with the consequences that come with them. Doesn't that sound like making a move in any game? At the other end of the spectrum, we find Johan Huizinga's Homo Ludens directly opposing Schopenhauer's negative opinion on play. In the thick tome, Huizinga explores the very nature of play as a fundamental element of our human culture. Play is essential to keep our sanity. Play is what makes us human. Huizinga briefly mentions card gaming as an example of a game with a clear set of rules defining boundaries and structure. Within that boundary, players can foster their skills. Huizinga seems to discard Schopenhauer's bankruptcy idea completely. Play—including card play—is an essential part of culture that embodies order, freedom, and creativity, and even has a social and psychological function. Culture develops through play. Of course, Huizinga extensively studied play as part of his academic research, meaning it would be a bit silly if he were to discard the subject as superfluous. In 1958, Roger Caillois built on top of Huizinga's ideas in Les jeux et les hommes, investigating and categorizing games into different systems. Card games fall under games of chance but also contain a competitive aspect. The interesting thing Caillois notes is that cultures handle dealing with chance differently: some celebrate it and embrace their fate, while others desperately try to master it (and usually fail). Guess which category our Western society falls under. It doesn't take a big stretch to connect Caillois' card play with the art of living. How do we live in relation to chance? Do we embrace it or try to resist and shape it? Life, just like card games, is not about winning, but about playing well. The act of playing cards can embody the act of living: we must navigate uncertainty, play and work within a set of constraints, read others and try to adapt to their moves, and perhaps above all find meaning in playing the game for the sake of playing the game. In the end, everybody wins, right? Or was it the house that always wins? I forgot. This article is part eight in a series on trick taking and card games. Stay tuned for more!

[1] Note that I'm interchanging the words luck and chance here even though, depending on your interpretation, they are not the same.

By Wouter Groeneveld on 29 September 2025.

9 views
DHH 5 months ago

The beauty of ideals

Ideals are supposed to be unattainable for the great many. If everyone could be the smartest, strongest, prettiest, or best, there would be no need for ideals — we'd all just be perfect. But we're not, so ideals exist to show us the peak of humanity and to point our ambition and appreciation toward it. This is what I always hated about the 90s. It was a decade that made it cool to be a loser

0 views
sunshowers 11 months ago

Free will quite clearly doesn't exist

There's absolutely no brand-new insight in this post. A huge amount of credit goes to various philosophers and thinkers, especially folks like Aaron Rabinowitz, for shaping my views. Any errors in it are my own. In the natural world, everything that occurs is the result of a chain of causation all the way to the big bang. The chain of causation is: almost entirely deterministic, with some chaotic effects that appear to be random on the outside but in reality are deterministic, and also with a small amount of true (quantum) randomness that very occasionally turns into something big. (We'll summarize all of these causes into a single word, "deterministic.") At no point does anything we know from studying the natural world resemble anything like our common understanding of free will. There aren't even the vaguest hints or suggestions of anything like that. So free will in its common form is inherently a supernatural belief. It's fine if you believe in supernatural phenomena. But you're not going to make people who see no need to believe in supernatural phenomena, like myself, agree with you on that basis. This is not unfalsifiable! One of the characteristics of a naturalistic view is invariance: for example, the laws of the universe stay the same when you move around from place to place, and/or over time. Very clear evidence of supernatural interventions would be a different set of rules governing the universe at one particular place or time, in a way that doesn't generalize. Such evidence has never been presented.[1] A general response to pointing out this basic truth is compatibilism. This term refers to a group of positions that accept determinism but try to preserve something resembling "willpower", "agency" or "moral capacity". But that's just shifting the goalposts. It is true that the ability to make globally optimal decisions is valuable, and it is also true that this varies by person and over time. But that variance is determined by the same processes that are in charge of everything else. Why wouldn't it be? We've all had some days where writing the right code, doing the right thing, has been harder than others. That, too, traces its chain of causation back to the big bang. Why do so many humans believe in free will? The widespread belief in free will is also deterministic, of course. Like everything else about us, it's a result of a chain of causality, i.e. some combination of environmental, genetic, and random effects. It may be a helpful bias to have in some situations. But we have plenty of other forms of biased thinking. Part of becoming Better is understanding how our biased thinking might cloud our understanding of society and lead to worse outcomes. In particular, our belief in free will and the closely-related notion of moral responsibility clouds our ability to see developers writing bad code as a result of bad development tools, and incarceration and other kinds of torture, for what they are. A belief that fear and terror prevent people from doing things you don't want them to do isn't incompatible with determinism, but it's best to be honest about what you truly believe here. The last stand of the free-will defenders tends to be some variety of "yes, it's false, but laypeople need to believe in it to cope with reality." For many of us, our cultures have not memetically prepared us to deal with a lack of free will.[2] This is unfortunate, because recognizing that free will does not exist is not a reason for nihilism or fatalism. It is a tremendous gift of knowledge! If all behavior is some combination of environmental, genetic, and random luck, then it follows that the easiest point of leverage is to make the environment better.
We now have pretty good data that better environments can cause, say, memory safety bugs to decrease from 76% to 24%, or fewer children to develop asthma. Why wouldn't better environments also help us reason more clearly, make better decisions, and generally live life in a more pro-social manner? Overall, I think it is far more interesting to skip over all this free will stuff and instead do what the French philosophers did: examine how environments exercise control over us, in order to suggest ways to reshape them. So when people exercise catastrophically poor judgment, whether writing code in memory-unsafe languages or making the most important political decisions of their lives, it is important to understand that their behaviors are a result of the environment failing them, not a personal moral failing.

[1] This is Reddit atheism 101. It is important to occasionally remind ourselves about why we started believing in these foundational ideas.
[2] Like everything else in existence, this, too, is determined.

0 views
Jimmy Miller 1 year ago

Dec 13: What Knowledge Isn't

I read four different papers today. None of them stuck with me. There were a few papers on social aspects of programming, all of an empirical bent. I was hoping to find a good paper that captured the sense of frustration I feel with Agile; a paper that could summarize the perverse labor relationship that Agile has forged between software engineers, product managers, and the software itself. But I've tried in vain. I looked through my absurd back catalog. None of them stood out to me as papers I wanted to discuss today. So I'm instead giving you yet another philosophy paper. But this paper leaves you with no excuse not to read it, coming in at under 3 pages. This philosophy paper turned the world of epistemology on its head. Introducing Edmund L. Gettier's Is Justified True Belief Knowledge?

My name is Jimmy.
Agile is a terrible way to make software.
The earth is round.
My dog sheds a ton of hair.
No prime numbers are prime ministers.

These are all examples of things I think I know. But how can I be sure? What exactly does it mean for something to be knowledge? Well, when we look at the kinds of things we intuitively count as knowledge, there seems to be fairly wide agreement on the core of the account. First, the things you know, you must believe. Now of course we might say something like "I know I'm going to fail, but I don't want to believe it." But here we aren't being literal. When we say that we know something we are at least saying we believe it. Or to make it silly programmer speech, knowledge is a subclass of belief. But not all belief rises to the level of knowledge. What is missing? "I know that the earth is flat." "No, you don't, because it isn't." Replace the word "know" above with the word "believe". Note the asymmetry. With "know", this logic feels sound, but replacing it with "believe" makes the retort nonsensical. What does this show? That we can't know things that aren't true. Philosophers call this the "factive" portion of knowledge. In order for something to count as knowledge, it must actually be true. But it's not enough for something to be true for you to correctly assert that you know it. "It's going to rain tomorrow" "How do you know that?" "I rolled a d20 and got a natural 20" Unless the above exchange was uttered during some TTRPG, we would in fact reject this person's claim to knowledge. But what if it turns out it did in fact rain tomorrow? Did they know it all along? No, they just got lucky. What they lacked was any good reason for that belief. Philosophers call this "justification". Justification means that a person is internally in possession of a good reason for their belief. When asked why they believe something, they could offer a good reason. But does this intuitive view work? Gettier shows us that it doesn't. You should read the paper. So I'm not even going to give a Gettier example. Instead, I'll use a different example: the pyromaniac (Skyrms 1967). A pyromaniac reaches eagerly for his box of Sure-Fire matches. He has excellent evidence of the past reliability of such matches, as well as of the present conditions — the clear air and dry matches — being as they should be, if his aim of lighting one of the matches is to be satisfied. He thus has good justification for believing, of the particular match he proceeds to pluck from the box, that it will light. This is what occurs, too: the match does light. However, what the pyromaniac did not realize is that there were impurities in this specific match, and that it would not have lit if not for the sudden (and rare) jolt of Q-radiation it receives exactly when he is striking it. His belief is therefore true and well justified. But is it knowledge?
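Pulling the three conditions above together, the classical analysis Gettier targets can be stated compactly. This is a sketch in standard shorthand of the usual three-clause "justified true belief" formulation, not notation from Gettier's paper:

% requires amsmath for \text
\[
  S \text{ knows that } p \iff
  \underbrace{p \text{ is true}}_{\text{truth}}
  \;\wedge\;
  \underbrace{S \text{ believes that } p}_{\text{belief}}
  \;\wedge\;
  \underbrace{S \text{ is justified in believing that } p}_{\text{justification}}
\]

The pyromaniac satisfies all three clauses, which is exactly why the question above bites.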
Gettier shows us in the paper multiple cases where true justified belief fails to be knowledge. So what exactly is knowledge? Well, this is a whole area of debate in the philosophical community. But out of this debate has spawned fascinating, interesting work. There are debates about whether the criteria for knowledge require something internal (i.e. mental) or external (the world, our brain chemistry, etc). There are debates about whether there is one unified conception of knowledge, or if knowledge is contextual. Finally, there are debates on whether knowledge has an analysis at all or if we ought to take knowledge as fundamental and define things like belief in terms of knowledge. The example given above and the examples given in the paper may seem far-fetched, but it is my contention that programming is a constant Gettier problem generator. How many times have you believed you had found the cause of some bug, had good reason to believe it, and turned out to be correct, only to realize when looking closer at the bug that all your reasons were completely wrong? You had true justified belief, but not knowledge. Philosophy (well, analytic philosophy, which is the kind I enjoy) is a lot like programming. It goes deep on topics most people don't care about. It introduces concepts that seem overly technical and complicated. There are endless debates and disagreements with no hope of reconciliation. But what I see from philosophy that I find so hard to find in these computer science papers is an awareness of the complexity of our topics. A willingness to engage with those who disagree. No pretension that your approach is the best approach. No need to have a related work section, because instead, you directly engage with others' works.

0 views
Jimmy Miller 1 year ago

Dec 2: Software is an Abstract Artifact

For this next paper, we are looking at a question I'm sure many will think is pointless. What exactly is software? But by this, we mean something a bit different than offering a definition. The goal here isn't to figure out how to split the world into software things and non-software things. Nor is it to distinguish hardware from software. Instead, the question is, assuming we have software, what sort of thing is it? This may feel like a weird question. But why? I think it's not a very fashionable question. After all, what kinds of things are there? Are there ghosts and spirits and other spooky substances? Of course not! What could there be other than things made of atoms? Nurbay Irmak gives us what I think is a rather interesting view. The aptly named "Software is an Abstract Artifact" tells us precisely what software is. But if you aren't a philosophy nerd, that title might not be enlightening to you. My goal in this brief paper summary is to make these notions clear. I will not attempt to argue for Irmak's view but simply state it, to try and make clear what the paper presents. But you must be reminded, this is merely a summary. Let's start with a particular example of software and consistently use it throughout. Irmak often uses Windows 7, so we will stick with that example. Windows 7 is software, I think we can all agree to that. But what is Windows 7? Let's say I have a disc that contains Windows 7. Is Windows 7 identical to that physical disc? If I incinerate that disc, do I destroy Windows 7 so that it no longer exists? I doubt many would think so. Is Windows 7 identical to the bytes on the disc? What if I store the bytes in a different order, or I compress the bytes? Does that mean I no longer have a copy of Windows 7 but some totally different piece of software? That seems unlikely. These are the kinds of questions Irmak wants us to ask. Is Windows 7 identical to a copy of it? Is it identical to some text? Is it identical to an execution of it? Or is it identical to what he calls "the algorithm" of it? Algorithm here might be better understood as some abstract structure. Take the program that is Windows 7 and convert it to some formalism like lambda calculus or a Turing machine; consider Windows 7 from some mathematical point of view. That is what Irmak means by "algorithm", the mathematical structure that Windows 7 realizes. Is Windows 7 identical with these things? But what exactly does it mean for two things to be "identical"? Well, a common way to think about this in philosophy is called "Leibniz's law" or the "identity of indiscernibles". Put simply, it says that if you have two potentially distinct objects X and Y, X is identical to Y just in case X has all the same properties as Y and vice versa. So, consider the case where I am holding a red ball in my right hand and a red ball in my left hand. Imagine that these balls weigh the same, look the same, even down to the microscopic level, and are made of the same material. We know they aren't identical because one has the property of "being held in my right hand" and the other lacks that property. So, when we are looking for what Windows 7 is identical to, this is the criterion we are applying. When asked above if Windows 7 was identical to the disc containing Windows 7, we were actually applying this principle. Windows 7 has the property of "surviving being incinerated", but the disc does not. For the bytes example, Windows 7 can be instantiated by many different shapes of bytes, but a particular shape of bytes has its shape essentially.
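Stated compactly, the principle being applied here looks like this (a sketch in standard logical notation, with F ranging over properties; the symbols are mine, not Irmak's):

\[
  X = Y \;\leftrightarrow\; \forall F\,\bigl(F(X) \leftrightarrow F(Y)\bigr)
\]

The direction doing the work in these comparisons is the contrapositive: find a single property that holds of one candidate and not the other, such as "survives the disc being incinerated", and the two are not identical.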
We can continue for things like the execution: if Windows 7 is identical to its executions, that means if all computers executing Windows 7 were shut down, Windows 7 would cease to exist; but intuitively, that's not the case. And the text of Windows 7: if we put the text in a different encoding, or we changed all the variables, or we moved some function, etc., we'd still have Windows 7, despite the texts themselves not being identical. It seems Windows 7 lives above and beyond any particular instance of these Windows 7 artifacts. But how are we deciding that? Irmak believes we should start with our everyday notions. When we talk about software, these are the kinds of things we generally believe about it. The way we talk about software, we being the everyday folk, should inform our theory as to what kind of thing software is. This is crucially important for our last potential candidate. Is software identical to the "algorithm"? Well, according to Irmak, the algorithm is a mathematical platonic entity. That is, a non-spatial, non-temporal, unchanging thing. If software is identical to the algorithm, it must not be changeable. But software can change and still be the same software! A patch to Windows 7 does not make it no longer Windows 7. In the paper, Irmak gets into versioning, but as this summary is going a bit long, I will completely elide that discussion. Let's just take it for granted that this point is right. If so, software is not identical to the algorithm. What is it then? As we are told in the title, software is an abstract artifact. What is an artifact? An artifact is any human-made object. A shovel, a computer, and a statue are all artifacts. But these artifacts are concrete. What does that mean? It means they are spatio-temporal. Concrete artifacts exist in time and space; they have a definite location. We can point to them. Software we cannot. Now this may seem all a bit weird, but Irmak shows us that software isn't necessarily alone in this regard. Consider music. Music has elements like sound structure, score, copy of score, and performance. These are quite related to algorithms, text, copies, and execution! A piece of music isn't identical to any of these things. The last thing I'll mention here is that Irmak discusses when programs cease to exist. Here are his four criteria:

1. Their authors ceased to exist.
2. All of their copies are destroyed.
3. They are not performed or executed ever again.
4. There are no memories of them.

I think there is some intuitive appeal to these criteria. But I'm unsure about 1 and 4. Is a vague memory of a piece of software enough for it to exist? Do the authors really need to "cease to exist" for the program to cease to exist? What about that first-ever program I wrote that I have no memory of? Irmak gives us a fascinating look at the ontology of software. I think it's a great paper for introducing the ideas, but as the paper itself says, it is far from a full discussion. In a bonus episode of the Future of Coding podcast I talk about James Grimmelmann's distinctions between the naive, literal, and functional meaning of code. I think a similar kind of distinction can be drawn here between a program and software. I would say that Windows 7 is software, but two different versions of Windows 7 are different programs. Further, when we consider forks, it becomes tricky to say which fork continues to be the same software: is it the one that keeps the name, or is it the one the community adopts? Programs by contrast have much clearer identity criteria. To me, they are just identical to the "algorithm" in Irmak's parlance. What this means, of course, is that programs are eternal, unchanging things.
We don't create programs as much as discover them. That to me is not a fact to run away from, but to embrace.

0 views
Jimmy Miller 1 year ago

Conceptual Preservation

In the last post, I mentioned how we ought to learn from philosophers who have already explored these notions of Conceptual Engineering. Here we will do exactly that by focusing on the work of philosopher Matthew Lindauer, Conceptual Engineering as Concept Preservation. Despite the title, Lindauer is not arguing that conceptual engineering is merely concept preservation, but rather that concept preservation is an important aspect of Conceptual Engineering. In fact, Lindauer's paper focuses on concept preservation because, as he sees it, it may be the easiest or clearest case of conceptual engineering. Lindauer brings up concept preservation as a way to combat some constraints that certain philosophers have placed on Conceptual Engineering. Most notably, Herman Cappelen has claimed that we don't really know how concepts change over time (Inscrutability), nor can we control how they change over time (Lack of Control). Lindauer wants to suggest that Inscrutability and Lack of Control are much less of a concern when it comes to concept preservation than to re-engineering and de novo engineering efforts. We won't dive into his argument here, but keeping it as background is important as we think about conceptual preservation as applied to our programming practice. Conceptual Engineering involves taking a normative stance, asking what our concepts ought to be, rather than what they are. Frequently we find concepts that are deficient in some regard and we aim to fix them or, to use the term of art, to ameliorate them. But what if our concepts are good? What if we have found a concept that we think is beneficial to keep? Are we done with our conceptual engineering work? Not by any means. Over time, semantic drift is inevitable if our words and concepts aren't protected and preserved. As programmers, we tend to think that our responsibility is to write clear, clean code. If we do this, others will be able to understand our ideas and the codebase will be properly maintainable. Yet, this never seems to happen in practice. As others begin to contribute to our codebase, our concepts become lost. No longer do our interfaces mean what they once meant. No longer do our concepts drive changes in the codebase. Instead, people approach our codebase with entirely different preconceived notions and ascribe a new meaning to our concepts. This meaning can start to catch on, and now we have two different groups with two different conceptions of the same term. These competing conceptions wreak havoc on our codebase. We must work hard to preserve the concepts that matter. We must get them into others' heads; we must teach others to act correctly in light of these concepts. A failure to preserve concepts can lead to undesirable effects. Think about how Alan Kay feels about the new understanding of Object Oriented Programming. Think about the way in which Agile has changed from something intended to help programmers to something intended to micro-manage them. As programmers, we must expand our notion of what is our responsibility. We must work hard to preserve our good concepts and change our bad ones. There are two solutions for ensuring concept preservation that might come to mind, which I want to argue are dead ends. First is the idea of self-documenting code, code that by its very nature ought to make our concepts clear. The second is actual documentation, whether that be in the form of docstrings, long-form docs, architecture decision records, or any other format.
Neither of these initiatives is enough to secure our concepts. To make this result a bit less surprising, consider the conceptual drift of English words. Does a dictionary prevent drift from occurring? Let's tackle these in turn and show why they are insufficient. First, self-documenting code can definitely be a good goal. Writing clear variable and function names is fantastic and can certainly aid in comprehension. But no matter how clear our code is, it does not capture our concept completely. Our code underdetermines our concept. No matter what structure we give to our code, there will be multiple concepts of the code that are compatible with it. This is especially true at all the points of our code we have intentionally made extensible. Extensible code necessarily allows our concepts to be open to drift over time; this can be very good. What about documentation proper? Of course, our code underdetermines our concepts. That is why we document it, describe background, talk about edge cases, discuss possible future extensions. It could be argued that our concepts are still underdetermined by the documentation, but rather than do that, let's take a different tack. Preserving a concept isn't merely writing it down; it is ensuring that people who utilize this concept have in mind the same concept as what you intended. Further than that, it is that their actions are consistent with that conception. Imagine a codebase that follows the MVC pattern. Let's assume that the codebase is heavily documented and has canonical definitions of Model, View, and Controller. There are two ways in which the concepts here can drift. First, a team member may have misunderstood some concepts (say, some distinction between View and Controller). When teaching these concepts to a junior engineer, they explain them incorrectly. This incorrect understanding spreads and makes its way to the whole team. In this group, our conception has changed despite our existing documentation. In fact, at some point, someone might even go and update the documentation with this new "correct" understanding. But concept preservation can fail even if no one has a different conception. Imagine the same codebase above, but this time all team members have internalized the same conception. If you asked them to explain MVC they'd give you the same explanations. But there is a team member who constantly makes the mistake of putting code in the wrong place. This isn't because of some bad conception, just a simple mistake. Over time others begin to follow this example as they create new code. This way of organizing code becomes habit. Here, our internalized concept hasn't changed, and yet our practice reveals an inconsistency. We have in our actions allowed conceptual drift to occur. To be clear, writing clean code and good documentation are good practices to help with concept preservation. They just aren't sufficient. Nor will they be as effective as they could be if we don't consciously think about them as serving that purpose. The same applies to other practices. Code review is a fantastic place to help prevent conceptual drift. But many times it becomes about style over substance. Pair programming is a great way to detect conceptual drift; it allows you to go beyond the code artifact and actually investigate people's statements of their beliefs. I'd imagine right now you can think of a number of practices that can be used here.
Diagrams, presentations, glossaries: in general, any presentation or explanation of our concepts can help us with preservation. But I want to stress that none of these things exhaust our concepts. Our concepts are inextricably bound up in human beings themselves. To be in possession of a concept is not merely to be able to repeat it. It is to be able to act in certain ways; it is to be able to apply the concept in a number of different contexts. It is to be able to understand how the concept bears on counterfactual situations. In other words, ensuring that others know facts about our concept is not enough to preserve it. This is a mistake software architects often make. An architect may give a presentation on the desired system architecture, they may write extensive documentation, ensure each team member can explain the architecture, and yet find that after the work has begun the system is entirely different from what was planned. Why is this? First, it may be that despite a person's ability to articulate the infrastructure, they do not have that deeper knowledge of how to make it a reality. But it may also be that the architect has not done their job of convincing others. Preserving our concepts requires persuasion. Merely building a big codebase of "best practices" or creating standard boilerplate for new projects does not actually get these concepts into other heads and help them live them out in their actions. Other engineers may disagree (and perhaps rightly so) with your conceptions. These engineers may intentionally cause these concepts to drift, twisting them until they fit the shape they were looking for. This is the inevitable outcome of forced concepts. If we want to practice conceptual preservation, we must ask ourselves if our concepts are worth preserving. We must ask ourselves if others deem them to be so. We cannot preserve our concepts by fiat. No amount of power can keep a concept in place. So, if conceptual preservation is important to good codebases, we may want to reconsider our practices. Are architects a good solution to ensure quality? Is the blessed infrastructure team that imposes concepts a good idea? Perhaps not.

0 views

On being parasitical

An exploration of the concept of being parasitical, examining moral considerations, and ultimately questioning where individuals draw the line in accepting or rejecting such behavior.

0 views
flowtwo.io 6 years ago

The Myth of Sisyphus

The Myth of Sisyphus begins, earnestly, with a simple question: why don't we just all kill ourselves? Although Albert Camus phrases it a little more eloquently, he nonetheless is quite serious that it's a question of utmost importance. His rationale is rooted in what he believes is a fundamental truth about existence: life has no meaning; no absolute, clear meaning that any individual could ever know. Despite knowing this, Camus is also certain of another thing—he will never stop in this futile search for meaning. In fact, no one will ever stop; it's within our nature to seek these answers. Knowing these two facts, the rational individual could reasonably decide that this is not worth all the struggle and suffering...why not take back some control of your existence and end things on your terms, rather than when this cold and unreasonable universe decides to? Living, naturally, is never easy. You continue making the gestures commanded by existence for many reasons, the first of which is habit. Dying voluntarily implies that you have recognized, even instinctively, the ridiculous character of that habit, the absence of any profound reason for living, the insane character of that daily agitation, and the uselessness of suffering — Albert Camus, The Myth of Sisyphus, pg. 78 Pretty heavy stuff. In The Myth of Sisyphus, Camus attempts to clearly illustrate the nature of the absurd, what it is and how it's reflected in various aspects of society. He then sets to work on deconstructing the essential question: why should one continue existing in an absurd universe once they've become aware of its incongruous nature? Albert Camus, born in 1913, was a French philosopher known for popularizing the philosophy of absurdism through a variety of stories and essays like The Myth of Sisyphus. Camus was also an author, journalist, and all in all a pretty fun and sociable guy from what I've read. I mention this because it's typical to think of philosophers as old, bearded, armchair-dwelling hermits without a care for reality. By all accounts it sounds like Camus was an outgoing and friendly man who enjoyed sport, politics, and women. These details are important to keep in mind when reading Camus and his ideas on life, especially since the core tenet of absurdism is that we will never find meaning in existence. As a result of his literary accomplishments, Camus became the second youngest recipient of the Nobel Prize in Literature, winning it in 1957 at the age of 44. In life, the search for purpose can be an emotional, lifelong journey. I find that my understanding of purpose is fleeting and ephemeral; I have moments of lucidity where the form of an answer becomes reachable to my consciousness. I say reachable because I would never have the gall to admit I have found life's meaning, only to have felt it was close by. Unfortunately, I think my confidence in these moments of contemplation is driven by emotion. In other words, the entire view of one's existence is slowly shaped by these memorable, yet irrational moments. For example, doing a good deed for someone makes us feel a sense of fulfillment—inviting us to think that meaning is found within this connection with others. We should strive to be altruistic and co-operative because acting in this way, helping others, is our intended purpose. And yet, in different times we may accomplish some great feat or complete a goal we were emotionally invested in. In this space we feel that our purpose is to strive for personal success and betterment.
All life is Darwinian after all, and our society is built upon a hierarchy, so should we not conclude that, in this primal existence, our goal is to be the best we can be?

Of course, all this searching and contemplation presupposes one fact about the searcher: they aren't religious. Religion offers answers to these questions; all it takes is a leap of faith, and the need to search is gone. Religion is a belief in the existence of a higher power, one who has decided that you should exist. A transcendent entity who has laid out a set of teachings and commands that provide a blueprint for how to live. These blueprints are divine, born of a place outside our earthly reality, so their correctness doesn't need to be questioned. As for purpose, most popular religions offer the reward of infinite happiness to those who choose to believe and follow these doctrines throughout life. I admit this is a gross simplification of the broad category of beliefs considered religions, but the point is that religion offers answers. Camus, from the very start of The Myth of Sisyphus, begins his examination from an areligious standpoint.

In this sense it may be said that there is something provisional in my commentary: one cannot prejudge the position it entails — Camus, pg. 126

The entire essay is based, a priori, on an acceptance of the absurd. Camus wished to explore the consequences of this conclusion rather than its legitimacy. He explains that, as a rational individual, he can only base his truths on empirical and sensory information. Religion offers no solace in this search for purpose because it provides no evidence. In fact, Camus' concept of absurdity begins with the belief that evidence for life's meaning will never be found. This fact, that it doesn't matter whether life has meaning because we cannot know it, is the first leg the absurd stands upon. The second leg is found in the mechanism of consciousness. The absurd arises from the incongruity between our incessant search for meaning and the overwhelming evidence that no one shall ever find it.

The absurd is born of this confrontation between the human need and the unreasonable silence of the world. This must not be forgotten. This must be clung to because the whole consequence of a life can depend on it. The irrational, the human nostalgia, and the absurd that is born of their encounter - these are the three characters in the drama that must necessarily end with all the logic of which an existence is capable — Camus, pg. 50

Here, Camus is alluding to his ultimate goal: deriving meaning from this absurd conclusion. How can one continue living within the confines of an absurd existence? To answer this question, our scholarly author enlists the help of a classical figure from Greek mythology, after whom the essay is named. Sisyphus was the king of Ephyra, punished by the gods after a series of self-aggrandizing and deceitful acts. Sisyphus is depicted as a crafty, arrogant individual who tries on several occasions to outsmart the gods. In my opinion, the arrogance and self-centeredness of Sisyphus represent the phase of innocence and ignorance we all experience as children. From the moment of birth, our only care is for our own needs, and our universe is simply the confines of our immediate environment. Life is simple: we are blissfully unaware of and unconcerned with these existential dilemmas. As children, we might even feel capable of outsmarting the gods of our universe: our parents.
It's only as we age that we begin to grasp the nature of reality... or, better yet, we begin to lose this grasp. Reality becomes less clear and our motivations become more complex. This is when the notion of the absurd may manifest itself in our consciousness.

For his childlike actions and insulting hubris against the gods, Sisyphus was handed a severe and unique punishment in hell. Sisyphus was condemned to the task of pushing a large stone up a mountain, but the stone was enchanted by Zeus to always roll back down once it reached the top, forcing Sisyphus to climb back down the mountain and start pushing all over again. This was his eternal fate: a supreme exercise in futility (the punishment is so famous that the term Sisyphean is used to refer to a laborious and futile task). In Camus's eyes, Sisyphus is the absurd hero: one condemned to a meaningless fate requiring constant work and suffering. Just as the innocent child eventually grows up and must take on the burden of understanding the world around her, so too must Sisyphus pay for his earthly passions:

His scorn of the gods, his hatred of death, and his passion for life won him the unspeakable penalty in which his whole being is exerted toward accomplishing nothing. This is the price that must be paid for the passions of this earth — Camus, pg. 110

Sisyphus' eternal fate has been laid out before him; this much we know for certain. But Camus sought to imagine how Sisyphus might find solace in his unconquerable task. How can the former king, as he stares at the rock at the bottom of the mountain, fully aware of his wretched condition, start pushing once again? How can he continue when, each and every time he reaches the summit, his efforts and accomplishment literally roll away from him? It's precisely in this moment that Sisyphus can transcend his destiny. Nothing but his own will carries him back down that hill.

That hour like a breathing space which returns as surely as his suffering, that is the hour of consciousness. At each of those moments when he leaves the heights and gradually sinks toward the lairs of the gods, he is superior to his fate. He is stronger than his rock — Camus, pg. 113

Camus revealed something very poignant about the human experience through the story of Sisyphus. Anyone who has ever questioned their beliefs knows the feeling that Camus describes. These are the moments in our lives when existential doubt creeps in, or maybe it's lucidity. We question our foundations, our own "rock" if you will. These personal confrontations with the void are the absurd. But things need not be so grandiose; I believe the core lesson is that strength is born from these moments. Choosing to believe in your rock, pushing forward in the face of uncertainty—this is the price of living. Camus concludes:

One always finds one's burden again. But Sisyphus teaches the higher fidelity that negates the gods and raises rocks. He too concludes that all is well. This universe without a master seems to him neither sterile nor futile. Each atom of that stone, each mineral flake of that night-filled mountain, in itself forms a world. The struggle itself toward the heights is enough to fill a man's heart. One must imagine Sisyphus happy — Camus, pg. 128

If you find the concept of the absurd rather extreme, or too abstract, that's understandable. Having the time to contemplate suicide because you haven't found enough meaning in life is something of a privileged existence.
The vast majority of humans are almost completely occupied each day with not dying involuntarily. For them, the continuation of life itself provides enough purpose and motivation. However, I'm not sure this necessarily invalidates absurdism. What is important to each individual cannot be compared directly. We all experience our own reality, and every heaven and every hell is uniquely ours.

This concept of the absurd has resonated with a lot of people, given that Camus is best known for it and it's still talked about today. I've thought about the idea a lot lately, having just finished reading the essay for the second time. One night over the holidays, while I was drunk and in the bathroom (where all great epiphanies happen), I thought of a way to reinterpret the absurd in the context of daily life, in a way that makes sense to me, at least.

In life, in almost any pursuit we occupy ourselves with, we all strive to be better, with the underlying implicit goal that we want to be the best. Whether it's tax accounting, saxophone playing, basketball, personal attractiveness, or gardening, there is at least one metric of quality, if not several, that we try to improve. Whether these metrics are absolute or relative to others doesn't matter; if you are seriously involved with an activity, you intrinsically feel the desire to become better. You might object to this premise by asserting that you have no desire to be the best in the world at tax accounting, or whatever you spend your time on. And while I agree that the pursuit of being the greatest is not normally a conscious idea, in principle I believe human nature is always pushing us to be better. Now, our critically thinking tax accountant might agree to that, but they may also claim that they'd be perfectly happy being the 2nd best accountant in the world, or even the 1000th best. For argument's sake, let's assume they do in fact reach this upper pantheon of accounting supremacy. At that point, would any rationally acting accountant cease trying to improve? I believe not, and I believe all humans, barring external factors and circumstance, will never give up this pursuit to be the best in whatever disciplines they happen to occupy themselves with. It's analogous to our never-ending search for meaning and purpose in life.

The second tenet of this analogy is a statistical one. Given that there are over 7 billion people on the planet, it is overwhelmingly likely you will never become the best in the world at anything. This is a fact of life that only someone with an enormous ego would fail to accept. But regardless of your confidence or your ego, we all strive for betterment. We accept that we will not be the best, and in some cases we may never even improve significantly—yet we all try. This unshakable conviction that is hardwired into us is the same mentality we must harness to face the absurd. Despite acknowledging the fact that you or I will never find meaning in life, we must search for meaning every day. Faith requires one to believe in irrefutable answers, while the absurd, on the contrary, invites one to never stop looking for them:

But it is bad to stop, hard to be satisfied with a single way of seeing, to go without contradiction, perhaps the most subtle of all spiritual forces. The preceding merely defines a way of thinking. But the point is to live. — Camus, pg. 64

"To live" is Camus' answer to the question posed at the beginning of this essay.
Meaning can be found in the act of living without it; this tension represents one's dedication to truth. Suicide is a repudiation.

The absurd man can only drain everything to the bitter end, and deplete himself. The absurd is his extreme tension, which he maintains constantly by solitary effort, for he knows that in that consciousness and in that day-to-day revolt he gives proof of his only truth, which is defiance. — Camus, pg. 77
