Continuous Variation

What We Talk About When We Talk About AI

Rationality, Scarcity and the Past, Present & Future of AI

C Trombley One
Dec 12, 2025

“If, as Leibniz has prophesied, libraries one day become cities, there will still be dark and dismal streets and alleyways as there are now”

  • Lichtenberg

Table Of Contents

  1. Life Is Just One Nonlinear Problem In Random Theory After Another

    1. Introduction and meeting Herbert Simon

  2. “On How to Decide What to Do”

    1. Answer to the question: What is AI?

  3. “Trial and Error Search in Solving Difficult Problems”

    1. What was and what is the relation between AI research and consciousness?

PAYWALL

  1. An Excerpt From Sciences of the Artificial

    1. Generative Artificial Intelligence as Variety Emitter

  2. “Rationality as Process and Product of Thought”

    1. Information and Attention as the scarce resources in AI workflows

  3. The Method of Distinguishing the Real from the Imaginary

    1. What has happened in the fields AI has already revolutionized and what is wrong with AI hype

Life Is Just One Nonlinear Problem In Random Theory After Another

“The modern prince, the myth-prince, cannot be a real person, a concrete individual; it can only be an organism, a complex element of society in which a collective will, recognized and partially affirmed in action, has already begun to take concrete form.”

  • Antonio Gramsci, “Noterelle sulla politica del Machiavelli”

We are currently in an AI boom. There are estimates that world capital outlays for computational power in this boom period will exceed $7T by 2030. The high power demands and high power-draw volatility of AI tasks are already shaping the near future of energy investment. Meanwhile, AI-related capital cycles rapidly as new chips, new algorithms and new data sources appear. The changes in capital structure and the related social impacts are the truly interesting story, but they are not the story I will tell you today - because CVAR founder and co-editor Alex Williams is doing great work on that front. What we are here to do today is get a grasp on a definition of what AI is and why it, specifically, can have a boom.

By “AI” I don’t mean to limit myself to Generative Artificial Intelligence (GAI). Because of this broad focus, I may use slightly different language than you are used to. As Polya said, an important step in solving a problem is to attack a more difficult and general problem! I will try to stay away from the limiting “AI = GAI” mindset. To accomplish this, I will go back long before the beginning of the GAI boom, to the work of Herbert Simon.

In the old conception of labor as a deliberate change in the state of the world, instruments of labor like AI are of particular interest because they lie on the spectrum close to deliberation rather than close to nature. Thus the end user, the laborer, can benefit from having a strong conception of AI. This is unlike, for example, a lawnmower. A deeper knowledge of the lawnmowing system probably does not make a laborer more effective at transforming the world into a place with more uniform grass heights. This is not to say landscaping doesn’t require thought, but rather that to the extent the thought is about the lawnmowing system, it is probably “lawnmower repair” rather than landscaping proper.

Let’s delimit the arena of Platonic space before we get to a definition. Artificial Intelligence (AI) tools, writ broadly, are products designed to help make decisions. Thus AI tools are in the same sport as spreadsheets and ouija boards but play different positions, like Junghoo Hu and James Watt. AI tools help us solve what Norbert Wiener called “nonlinear problems in random theory”, expanding the realm of possible actions. An AI tool is being used correctly when it expands its user’s administrative capacity - their ability to “effectively implement policies, manage resources, and deliver services.” We have previously discussed administrative capacity; the linkages between AI and that discussion will be returned to below.

I want to respect your time as a reader, so let me be as clear about what this post is not as about what it is. This post is not a comment on whether any or all AI companies are in a “bubble”. However, I want this post to help you manage AI use, and the AI Bubble topic can derail administrative discussions. To begin with: a market bubble is not “overvaluation” by itself but a complex triadic relationship between prices, values, and credit and other market ratios. Understanding this triad will aid in making strategic decisions. Breaking that triad out one by one:

  1. A price is a ratio at which an actor is willing to part with one good or service for certain quantities of a possibly distinct good or service.

  2. Broadly, “values” in the economic sense are attempts to understand price abstracted from peculiar market conditions. There is not one precise meaning to “value”.

    1. For concrete examples of “values”, this Substack piece gives two subtly different definitions, from Benjamin Graham and Warren Buffett. These value investors base their purchases of stocks, bonds etc. on when market prices and value estimates diverge, making careful definitions important to them.

  3. I will give two traditional metrics for assessing different notions of bubbles, one for overvaluation risk and one for liquidity risk.

    1. A high price/earnings to growth (PEG) ratio is sometimes interpreted as an indicator of overvaluation. While not sufficient by itself, I will use it as a back of the envelope indicator.

    2. A conservative credit ratio for liquidity risk is the level of cash held to the value of liabilities (the cash ratio). A lower cash ratio means more liquidity risk.

Loose discussions of bubbles run distinct things like overvaluation risk and liquidity risk together, but under certain business strategies they can actually anti-correlate. This may seem like a lot of abstraction to take in at once, but as I said: a market bubble is a complex phenomenon.
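To make the two metrics above concrete, here is a minimal sketch in Python. All the numbers are made up for illustration; they describe no real company.

```python
# Back-of-the-envelope bubble indicators (illustrative figures only).

def peg_ratio(price: float, eps: float, growth_pct: float) -> float:
    """Price/earnings-to-growth: the P/E ratio divided by expected
    earnings growth (in percent). High values hint at overvaluation."""
    return (price / eps) / growth_pct

def cash_ratio(cash: float, current_liabilities: float) -> float:
    """Cash on hand relative to current liabilities.
    Lower values mean more liquidity risk."""
    return cash / current_liabilities

# Hypothetical firm: $100 share price, $2 earnings per share,
# 20% expected growth, $5B cash against $1B current liabilities.
print(peg_ratio(100, 2, 20))   # 2.5  -> possible overvaluation signal
print(cash_ratio(5e9, 1e9))    # 5.0  -> low liquidity risk
```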

To illustrate how these complexities work in practice, take the controversial AI company Palantir. Palantir’s PEG ratio has been consistently above 2 for more than a year (see current ratios here). Meanwhile, Palantir’s cash ratio is currently more than 5 and has been more than 3 for several years (see here). Both of these are related to Palantir’s aggressive stock-based compensation strategy. What this shows is that the question of a “market bubble” will bring us into dense balance-sheet questions unrelated to the use of AI.

Given all this, the only responsible, high-level conclusions I can draw are:

  1. AI probably has positive long run value, as shown by the level of investment by highly diverse actors.

  2. Stock prices of some AI service providers seem high by some traditional metrics. If these metrics indicate market realities, this would make those companies “overvalued”.

  3. This is not sufficient evidence for or against a “bubble” in some senses of the word.

This is the limit of what these simple indicators can tell us.

Market matters aside, let’s get back to the main topic. The goal of this piece is to help you use AI tools to better expand your administrative capacity by explaining some of the basic concepts of AI. The method of this piece is to use the work of the father of AI, Herbert Simon, as a guide to conceptual foundations.

Herbert Simon was a fascinating individual. I strongly recommend his autobiography Models Of My Life. Simon recounts formative experiences which led him to realize the difference between his perceptions and the world. His family went strawberry picking, and Simon recalled crying because the other children got more strawberries than he did. His family had to explain that the strawberries had a mysterious invisible property - “being red” - that let the other children distinguish between strawberry and leaf. That experience of the interplay of hidden information and decision making would shape much of his career.

After coming of age, Herbert Simon enrolled in the University of Chicago, drawn to higher education by the Ely-Commons tradition of social science. He remembered Nicholas Rashevsky, Henry Schultz and Rudolf Carnap as particular mentors. From Rashevsky he learned the power of boldly simple models of complex phenomena - and the danger of a cavalier approach to data. From Schultz he learned the difference between a good fit and (what Deborah Mayo calls) a severe test, the core of the Neyman-Pearson philosophy of statistics - in layman’s terms, the difference between an idea that probably works and an idea that’s probably correct (a small subset of what works!). From Carnap he learned the ideal of logical positivism, or logical empiricism - a scientific language in which not just some of the important concepts but all content words are defined operationally.

With this educational background, Herbert Simon would become one of the most important computer scientists of all time - of course, his degree was in public administration. How did his horizons expand into the nascent field of computer science? The dark star that guided Simon through his transit towards digital Bethlehem was the attempt “to explain how organizations can expand human rationality, a view quite opposed to popular folklore in our society, which commonly sees them as dehumanizing bureaucracies.” (Simon, Models Of My Life). This optimistic view inspired my use of a quotation from Gramsci at the head of this section: the “modern prince” is more than any one individual could be. In plain terms, effective organizational leadership is when that organization can do more than any sub-coalition of the organization.

So how did this public administration theorist become the father of AI? Here is how the man himself put it: “This sudden and permanent change came about because Al Newell, Cliff Shaw, and I caught a glimpse of a revolutionary use for the electronic computers that were just then making their first public appearance. We seized the opportunity we saw to use the computer as a general processor for symbols (hence for thoughts) rather than just a speedy engine for arithmetic. By the end of 1955 we had invented list-processing languages for programming computers and had used them to create the Logic Theorist, the first computer program that solved non-numerical problems by selective search. It is for these two achievements that we are commonly adjudged to be the parents of artificial intelligence.” Al Newell had taken classes with George Polya, who dubbed the new science of the psychology of discovery ‘heuristics’. Cliff Shaw provided programming experience. Together, the three men endeavored to make a machine which could, among other things, prove 1+1=2.

As we will see, some of Herbert Simon’s original AI tools are, of course, primitive. His theories evolved along with the tools from those starting points. But one must remember that he was fighting with both arms engaged. His right arm was tied back by Father Time; the rope Father Time used was the ludicrously primitive non-AI aspects of the digital tools available. Newell, Simon and Shaw didn’t store their work as files in folders, because the metaphor of the central linked list in a shared computer system as a directory of files is one that had to be created. His other arm was engaged by the practice of empirical psychology. Newell, Simon and Shaw didn’t just want the best AI tool for the job - they wanted a model of how a human did the job! I will discuss this more later.

Even with both arms engaged in work we no longer consider AI, Herbert Simon and his students spent decades refining the theories and developing tools for AI. They kept their collective noses to the grindstone, especially with respect to basic concepts. Thus his definitions and analysis - while definitely idiosyncratic - are in some ways clearer than those of some modern commentators.

With his unique background and decades of empirical and conceptual work, Herbert Simon really did understand something about modern AI that many today miss. This post uses Simon’s ideas as a jumping-off point to cut through the noise and explain what AI is and can do. Before the paywall, I unpack Herbert Simon’s analysis of the meaning of AI, drawing on “On How To Decide What To Do”. This locates AI within the space of decision making tools. Then, as a treat, I use “Trial and Error Search in Solving Difficult Problems” to draw a strong distinction between AI research and consciousness research. This closes one source of AI hype. Beyond the paywall, I take the fundamental cybernetic concepts of ‘variety’ and ‘information’ to map the real decision making niche of GAI. It turns out to be perfectly described by a comment from Herbert Simon in Sciences of the Artificial! Next, I draw on sharp passages from “Rationality As Process And Product Of Thought” to derive necessary conditions for the efficient use of AI.

Finally, the post ends by putting Simon in conversation with other thinkers to provide clear thinking about AI’s future. Frank Knight’s economic analysis of production proves to be an excellent skewer for AI Hype. For the last button on this long article, I combine Simon’s definition with powerful arguments from philosopher Karl Popper to bury AI hype once and for all.

On How To Decide What To Do

“Proper evaluations of words and letters in their phonetic and associated sense can bring the people of earth to the clear light of pure cosmic wisdom.”

  • Sun Ra, Cosmic Tones for Mental Therapy

In this paper, Herbert Simon gives a definition of Artificial Intelligence (AI). Though the paper was written in 1978, the definition is the result of nearly 20 years of Simon’s AI research. As such you will find a meaning refined by many years of use and research, and thus something that cuts through popular misconceptions. A true proper evaluation of words, as Sun Ra said.

Specifically, Simon locates Artificial Intelligence (AI) proper at one extreme of a spectrum of decision making techniques, with optimizing at one end and satisficing at the other. AI is on the satisficing end, which makes sense, because the notion of satisficing was fundamental to his scientific life. Simon labels the other extreme Operations Research (OR).

Understanding the optimizing end of the spectrum makes Simon’s placement of AI much clearer. In OR, the interest is in the existence, uniqueness and computation of an optimum - that is, a special “best point” within the context of the model. All other facts about the model are relative to the interest in that optimum.

A basic example of an OR-type approach is the analysis of Jan-Ken-Pon (rock-paper-scissors) in game theory. This will show in a simple visual way how OR can value an exact optimum within a model at the expense of real-life complexities.

The first step in any analysis is to discover the space of possibilities. A “strategy” is a triple of probabilities (p, q, r) such that p ≥ 0 is the probability of throwing paper, q ≥ 0 the probability of throwing rock, r ≥ 0 the probability of throwing scissors, and p + q + r = 1. A strategy is called “pure” if and only if one of p, q or r is equal to one; otherwise the strategy is “mixed”. By Viviani’s theorem, each strategy can be identified with a color in the rainbow-colored simplex above. The bottom left corner, in violet, represents the strategy of always throwing rock. The orange corner represents a strategy of pure paper. The spectrum between them corresponds to a certain probability of throwing rock and throwing paper otherwise. The centroid of the triangle is a sort of grey cyan.
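To make the identification concrete: by Viviani’s theorem, the three perpendicular distances from an interior point of an equilateral triangle to its sides sum to the triangle’s height, so a probability triple picks out a unique point. A minimal sketch of that mapping in Python (the coordinate convention here is my own, not taken from the figure):

```python
import math

def strategy_to_point(p: float, q: float, r: float) -> tuple[float, float]:
    """Map (paper, rock, scissors) probabilities to a point in an
    equilateral triangle of unit height via barycentric coordinates.
    By Viviani's theorem the point's three side-distances are p, q, r."""
    assert abs(p + q + r - 1) < 1e-9 and min(p, q, r) >= 0
    rock = (0.0, 0.0)                    # bottom-left corner
    paper = (2 / math.sqrt(3), 0.0)      # bottom-right corner
    scissors = (1 / math.sqrt(3), 1.0)   # apex
    x = q * rock[0] + p * paper[0] + r * scissors[0]
    y = q * rock[1] + p * paper[1] + r * scissors[1]
    return (x, y)

print(strategy_to_point(1, 0, 0))        # pure paper: the paper corner
print(strategy_to_point(1/3, 1/3, 1/3))  # the centroid
```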

What does the OR-type approach tell us? The strategies of special interest are the strategies which are the best response to some strategy. There are thus only four points of special interest: the three pure strategies and the centroid. If the opponent’s strategy is off the centroid, then the best response is the pure strategy that beats whichever throw they favor. Take some examples:

  1. (Orange corner) If they throw paper more often than the other two, then throw scissors.

  2. (Indigo, bottom center) If they play paper and scissors in equal proportion and rock rarely, then play scissors.

  3. (Grey cyan centroid) If they play paper, rock and scissors in equal proportion, then do the same.

The centroid is the unique point which is stable under strategy deviation - given that both players are playing the centroid, neither wants to change their strategy. It is the so-called “Nash Equilibrium”. Does this closely resemble reality? At high levels, Jan Ken Pon is about searching for tells in the opponent’s throw. That part of Jan Ken Pon is not in this model. It could be incorporated into a higher dimensional model, which would have its own set of best responses and Nash Equilibria. This is what I mean by “optimum … within the context of the model”.

The benefits of OR modeling are obvious. It allowed us to go from a two dimensional continuum of possibilities to four points of special interest, with one point most special among them (the Nash Equilibrium). But the costs are intense. The set on which the best response is the “stable” Nash equilibrium is zero-dimensional. If your opponent has a tendency away from the Nash equilibrium, no matter how slight, then you should be playing a pure strategy. The best response is not continuous around the most interesting point! This is exactly the brittleness that Simon argued makes pure optimization unsuitable as a model of intelligence.
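That discontinuity is easy to verify numerically. A minimal sketch (my own illustration, not from Simon): compute the expected payoff of each pure throw against an opponent’s mixed strategy, first at the centroid and then at a point nudged slightly off it.

```python
# Expected payoff of each pure throw against an opponent who mixes
# (paper, rock, scissors) with probabilities (p, q, r), p + q + r = 1.
def pure_payoffs(p: float, q: float, r: float) -> dict[str, float]:
    return {
        "paper": q - r,     # paper beats rock, loses to scissors
        "rock": r - p,      # rock beats scissors, loses to paper
        "scissors": p - q,  # scissors beats paper, loses to rock
    }

print(pure_payoffs(1/3, 1/3, 1/3))
# Every payoff is 0: at the centroid you are completely indifferent.

nudged = pure_payoffs(1/3 + 0.01, 1/3 - 0.005, 1/3 - 0.005)
print(max(nudged, key=nudged.get), nudged)
# An arbitrarily small tilt toward paper makes "scissors" the unique
# best response: the jump from "anything goes" to one pure strategy.
```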

So much for the OR approach to the problem. In AI, the interest is in moving through the range of a model in a positive direction until one finds an output which works at least to an acceptable extent - what Simon called “satisficing”. That is to say, in an AI method the outcome may be of interest for reasons other than having a special claim on being the optimum within the model. Good outcomes are not defined relative to a supposed “best”, even within the AI’s model of what the world is like.

Coming back to Jan Ken Pon, an AI approach might be to have a computer go through hours and hours of footage of paper-rock-scissors games until an underlying neural network converges on throwing paper when the opponent’s hand appears to be throwing rock, and so on. The state of the neural network has no known relation to the ideal function classifying the real hand (a purely Platonic object). The state is judged on the appearance of convergence when fed more data and on success at Jan Ken Pon.

The benefits of AI modeling are obvious. Because we give up the obsession with optima, we can have models of enormous complexity - equal to the task at hand. We can get model outputs that are better behaved over model inputs. But the costs are intense. The model is now a complex and mysterious thing. Output can depend sensitively on things which are objectively ridiculous. In Jan Ken Pon, maybe due to an accident in the training data, the neural network cares intensely whether the thrower has a freckle over the pisiform bone in the wrist. We are back at the old problem of what works (in the training data) vs what’s correct (in reality).
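The optimizing/satisficing contrast fits in a few lines of code. In this toy sketch (mine, not Simon’s), both searchers score candidates with the same function; the optimizer must scan the whole space, while the satisficer stops at the first candidate that clears an aspiration level.

```python
import random

def score(x: float) -> float:
    """Some expensive measure of how good a candidate is.
    Its peak (at x = 0.7) is unknown to both searchers."""
    return -(x - 0.7) ** 2

candidates = [random.random() for _ in range(100_000)]

# OR-style optimizing: evaluate everything, keep the single best point.
optimum = max(candidates, key=score)

# AI-style satisficing: accept the first candidate that is good enough.
ASPIRATION = -0.001
satisficed = next(x for x in candidates if score(x) >= ASPIRATION)

print(f"optimum:    {optimum:.4f} (100,000 evaluations)")
print(f"satisficed: {satisficed:.4f} (typically a few dozen evaluations)")
```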

Simon’s definition of AI is more than just clear - it is an operational definition which allows us to evaluate specific tools. In the next section, I will leverage that operationality to make distinctions between AI research and other branches of the science of decision making, such as consciousness research. Later, we will see how it is strong enough to help users of AI (including GAI) develop better processes for production. Finally, at the end of this piece, we will see how Simon’s definition, with a bit of help from Popperian logic, deflates AI hype.

Trial and Error Search in Solving Difficult Problems

“10. Never hesitate to make a move for fear of losing. Whenever you think a move is good, go ahead and make it. Experience is the best teacher. Bear in mind that you may learn much more from a game you lose than from a game you win. You will have to lose hundreds of games before becoming a good player.”

  • José Raúl Capablanca, A Primer of Chess

In the previous section, we saw how Simon’s definition of AI located it within the space of decision making tools. In this section, we go over where AI is located in the space of scientific theories of the mind. This analysis will use the operational nature of Simon’s definition extensively.

Now, intelligence (artificial or otherwise) by Simon’s definition has only vague and complex relations with consciousness. Decision making involves conscious and unconscious processes. Consciousness, whatever else it is, is a property that some humans have intermittently, and no one cognitive ability fully defines that property. Likely consciousness is a complex structure of abilities - a “bag of tricks”, in Dennett’s phrase. AI tools can have some - perhaps even all! - of these without being conscious.

Despite not equating intelligence and consciousness, Simon still hoped that a biomimetic approach would inform successful AI tools, such as automated chess players. See, for example, “Trial and Error Search in Solving Difficult Problems”. In this paper, Simon used a satisficing approach - a now primitive-seeming search-plus-heuristics procedure - to analyze chess. He used experimental psychology to try to measure how top chess players reasoned about chess and then make a computer model of human chess playing - artificial intelligence. He included psychological analyses of chess grandmasters to tune his search depth and inspire heuristics.
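“Search plus heuristics” has a standard skeleton. Here is a minimal depth-limited sketch - my generic reconstruction of the idea, not Simon’s actual program, and on a toy subtraction game rather than chess (take 1-3 objects from a pile; taking the last object wins):

```python
# Depth-limited search with a heuristic at the horizon: exact play near
# the root, an educated guess where the search is cut off.

def moves(pile: int) -> list[int]:
    return [t for t in (1, 2, 3) if t <= pile]

def heuristic(pile: int) -> float:
    """Crude evaluation used where search is cut off. Piles that are
    multiples of 4 are bad for the player to move - Simon's chess
    heuristics played exactly this role, just with far more knowledge."""
    return -0.5 if pile % 4 == 0 else 0.5

def negamax(pile: int, depth: int) -> float:
    if pile == 0:
        return -1.0  # the previous player took the last object and won
    if depth == 0:
        return heuristic(pile)
    return max(-negamax(pile - t, depth - 1) for t in moves(pile))

# From a pile of 10, search 3 plies deep and pick the best move.
best = max(moves(10), key=lambda t: -negamax(10 - t, 3))
print(best)  # 2: leaving a multiple of 4 is the winning move
```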

It’s true that modern chess engines do not use this psychologically grounded approach. But recall that Simon’s definition of AI is a spectrum. On that spectrum, modern chess engines are even more AI in Simon’s sense than his own programs were! AI chess systems such as Stockfish use massive neural networks to build a completely opaque model of chess and then satisfice within that model. One can even tune the degree of satisficing! The Stockfish model has no known relation to the ideal chess algorithm, a computationally intractable Platonic object about which little is known beyond non-constructive existence proofs (Zermelo’s theorem). Rather, Stockfish has taken Capablanca’s advice above into the 21st century by losing billions of games against the best chess player in the world (itself).
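“Tuning the degree of satisficing” is quite literal. A UCI engine like Stockfish exposes knobs for how hard it searches before settling on a move. A minimal sketch using the python-chess library, assuming Stockfish is installed locally (the binary path is an assumption; adjust for your system):

```python
import chess
import chess.engine

# Assumes a Stockfish binary on the PATH; point to the file otherwise.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")

# Two satisficing dials: how strong the engine tries to be, and how
# long it may search before accepting a "good enough" move.
engine.configure({"Skill Level": 5})  # 0-20; lower = cruder satisficing
board = chess.Board()
result = engine.play(board, chess.engine.Limit(time=0.1))  # 100 ms budget

print(result.move)  # an acceptable move, with no claim to being optimal
engine.quit()
```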

The point here is that most modern AI research is not even trying to make conscious tools. Successful modern AI researchers’ goal has not been to make a tool that reasons like a human per se, at least not in a biomimetic manner. Rather, the main goal has been to develop innovative tools that solve recognizable problems.

Many great minds have attempted to put necessary conditions on what consciousness is. For example, Kant argued that the ability to reason spatially is necessary for consciousness. But humans reason spatially in dreams, showing that the ability is not sufficient. It’s clear that AI is better now than ever before at performing the tasks these great minds have argued necessary for consciousness. For example, AI can now reason about space well enough to create simulacra of objects moving through space (i.e. “animate”). But it would be a mistake (namely, “affirming the consequent”) to think that because AI is getting better at these tasks, it is becoming more conscious. That is a confusion of necessary and sufficient conditions.
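In schematic form (the notation is mine): write $C$ for “is conscious” and $T$ for “performs the task”. A necessity claim asserts $C \to T$; the hype inference runs the arrow backwards:

\[
\frac{C \to T \qquad T}{C} \quad \text{(invalid: affirming the consequent)}
\qquad\qquad
\frac{C \to T \qquad \lnot T}{\lnot C} \quad \text{(valid: modus tollens)}
\]

Failing the task can still count as evidence against consciousness; passing it counts for nothing by itself.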

There is an interesting result of this. With Simon’s definition of AI, Artificial General Intelligence (AGI) can be defined as a system which can output a satisficing model of your input and then find a satisficing response within that model - an AI which outputs AI. There is no need for an AGI to be any more conscious than a sleeping man or a large stone. In that peculiarly deflating sense, we already have AGI … but are not even approaching consciousness.
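That “AI which outputs AI” structure is easy to caricature in code. A toy sketch, entirely my own illustration of the deflated definition (the thresholds and the frequency model are made up): stage one accepts a model of the input as soon as it is good enough, and stage two searches that model for an acceptable response.

```python
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def satisficing_model(history: list[str], good_enough: float = 0.4) -> Counter:
    """Stage one: a crude frequency model of an opponent's throws,
    accepted once its top guess would have been right at least
    'good_enough' of the time. No claim to being the best model."""
    counts = Counter(history)
    top_share = counts.most_common(1)[0][1] / len(history)
    assert top_share >= good_enough, "not good enough yet; gather more data"
    return counts

def satisficing_response(model: Counter) -> str:
    """Stage two: search within the model for an acceptable response."""
    predicted = model.most_common(1)[0][0]
    return BEATS[predicted]

history = ["rock", "rock", "paper", "rock", "scissors", "rock"]
model = satisficing_model(history)   # output #1: a satisficing model
print(satisficing_response(model))   # output #2: "paper", good enough
```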

We’ve now seen that Simon’s unusual but deeply clarifying definition of AI doesn’t just tidy up some old debates. His analyses actually sharpen how we think about intelligence, consciousness and modern systems far beyond those he had access to. Now comes a change of tactic toward specifics. Below the paywall, I’ll take Simon’s insights and turn them into practical guidance for GAI use. I’ll start by giving a clean, usable definition of GAI for real-world work. The result of this analysis is put perfectly by a comment of Herbert Simon’s in Sciences of the Artificial. From there, I’ll draw on Simon’s pioneering work in the economics of information and attention to outline a necessary condition for efficient AI use. Finally, I’ll draw on broader traditions to show why some of today’s loudest “AI hype” claims fall apart. If you want a rational framework for thinking about AI in the present and future - or just a way to separate the useful from the absurd - these next sections will help you.
