October 1962: Cybernetics, Computation, and the Recurring Dream of Central Planning

On the evening of October 15, 1962, while most Americans were still unaware that the world was drifting toward nuclear confrontation, a CIA analyst named John J. Ford gave an informal presentation at the home of Defense Secretary Robert McNamara. The audience included Attorney General Robert F. Kennedy and other senior figures of the Kennedy administration. The subject was not missiles, bombers, or Cuba. It was something stranger, more abstract, and, in Ford’s judgment, no less consequential: a Soviet scientific movement called cybernetics, and a proposal then taking shape in Moscow and Kiev to apply it to the management of the entire Soviet economy.

Ford warned that the United States had no comparable program and, perhaps more importantly, no philosophy for developing one. He believed the Soviet commitment to cybernetic planning posed a serious threat to Western society. He was halfway through his remarks when the presentation was interrupted by news from Cuba.

The Cuban Missile Crisis began that night.

For thirteen days the world stood closer to nuclear war than it ever had before or has since. What is striking is that even during those thirteen days the senior officials of the Kennedy administration were also being briefed on something they considered a separate and equally serious Soviet threat: not a weapon, but an idea. On October 17, Ford submitted a written version of his interrupted presentation to Arthur Schlesinger, Jr., the President’s Special Assistant. On October 20, with Soviet missiles still in Cuba and American ships moving toward the quarantine line, Schlesinger wrote to Robert Kennedy that the all-out Soviet commitment to cybernetics could give the Soviets “a tremendous advantage.” By 1970, he warned, the USSR might possess a radically new production technology, with entire industries managed by closed-loop, feedback-controlled, self-teaching computers. If American neglect of cybernetics continued, Schlesinger concluded, “we are finished.”

A reader encountering this episode for the first time might assume that the Kennedy administration was succumbing to the strategic fever of a country haunted by Sputnik and worried that it might be outpaced again. The assumption is not unreasonable. American intelligence in this period systematically overestimated Soviet capabilities. The bomber gap of the late 1950s was inflated. The missile gap of the early 1960s turned out, embarrassingly, to favor the United States. The intelligence community’s projections of Soviet economic growth, as later analyses would show, were overstated for years. The Schlesinger memo about cybernetics belongs partly to this world of Cold War threat inflation.

But only partly.

The cybernetics panel that President Kennedy directed his Science Advisor, Jerome Wiesner, to convene a few weeks later included Kenneth Arrow and Herbert Simon, both of whom would later win the Nobel Prize in Economics, along with several of the most accomplished mathematicians, engineers, and scientists in the country. These were serious people, and they took the proposition seriously because the proposition itself was serious. A nationwide computer network coordinating the economic activity of two hundred million people in real time, replacing the paper-based central planning that everyone in Moscow knew was creaking under its own weight, would have been an extraordinary achievement. If it had worked, the world would look different. The American observers of 1962 knew this. They could not yet know that it would not work.

Today the vocabulary has changed. Cybernetics has become artificial intelligence. Closed-loop control has become algorithmic coordination. The dream of the all-seeing planning machine has acquired better language and more impressive tools. But the dream itself has returned, as old dreams often do, wearing the costume of the present.

The man at the center of the Soviet proposal was Viktor Glushkov, the director of the Institute of Cybernetics in Kiev. Glushkov was a mathematician by education, a cyberneticist by ambition, and by most accounts a figure of considerable intellectual force. He was known among colleagues for the speed of his calculations, the precision of his proposals, and the disarming habit of quoting Marx from memory to defeat Communist Party ideologues who questioned his loyalty. Beginning in 1962, he ran the Kiev institute for twenty years and filled it with bright young researchers whose average age was about twenty-five. The place had an energy unusual in the Soviet scientific establishment of the period.

Glushkov’s proposal carried a name that telegraphed its ambition: The All-State Automated System for the Gathering and Processing of Information for the Accounting, Planning and Governance of the National Economy, USSR. In Russian, the system was known by the acronym OGAS. The shorter name is easier to handle. It also gives no sense of what the proposal actually involved. 

Glushkov wanted to replace the existing apparatus of Soviet central planning with a unified computational network. Production data would be collected in real time, processed through advanced mathematical techniques, and transformed into allocation instructions that would be sent back to the enterprises being directed. Where paper documents currently flowed from factory to ministry to Gosplan and back again, with delays measured in weeks or months, the network would coordinate the national economy in something approaching real time.

The architecture was striking in its scale. A central computer in Moscow would coordinate the network. Up to two hundred regional centers in major cities would handle intermediate processing. Roughly twenty thousand local terminals placed in factories and government agencies across the Soviet Union would communicate with the regional centers and with one another through the existing telephone infrastructure. The terminals were to be capable of transmitting data to any other terminal on the network.

This was 1962. The ARPANET, the predecessor to the modern internet, was still seven years away.

The technical credibility of the proposal was real. Glushkov was not a crank. The cybernetic methods he proposed were being developed in parallel at MIT, at RAND, and throughout the Western operations research community. What he was proposing was an extraordinary engineering project, not a fantasy. Had the resources been provided and the resistance overcome, something would have been built. What that something would have looked like is a question we cannot fully answer, because the project was never built.

The explanation is institutional. The Politburo, especially after Brezhnev consolidated power, declined to authorize full implementation. The Central Statistical Administration, which would have lost much of its role, resisted from within. The ministerial structure of the Soviet economy, jealous of its prerogatives, resisted as well. By the early 1970s, the grand version of OGAS had been abandoned.

This is the story as it has come down to us, and it is true. The bureaucratic resistance was real. The infighting was real. The Brezhnev-era caution that smothered so many ambitious projects was real. Yet this account, while correct, leaves the more interesting question untouched.

Suppose the conventional story had gone the other way. Suppose the Politburo had authorized full funding in 1962. Suppose Brezhnev had forced the ministries into cooperation. Suppose the Central Statistical Administration had been reorganized to support rather than resist the new system. Suppose the engineering had been delivered on schedule, the twenty thousand terminals installed, the regional centers built, and the telephone infrastructure upgraded to support real-time communication across eleven time zones.

Suppose, in short, that OGAS had succeeded exactly as Glushkov intended.

It still would have failed.

It would have been a different kind of failure from the one that actually occurred, and it would have been less visible at first, because the system would have appeared to be working. But the failure was already implied by the architecture of the project. OGAS was going to encounter a problem its designers had not fully seen: the difference between information and knowledge.

Information, in the strict sense required by the cybernetic project, is data about a defined set of possibilities. It consists of things that have already been named: tons of steel, hours of labor, units of output, defects per thousand, machine hours, delivery intervals, inventory levels. It can be counted, compared, stored, transmitted, and aggregated. But it requires categories that already exist. It requires those categories to remain stable while the system works with them. And it requires a channel that does not substantially alter what passes through it on the way from one point to another.

Claude Shannon, working at Bell Labs in the late 1940s, gave the most careful account of these conditions that has ever been produced. His theory was not a theory of meaning, judgment, or economic life. It was a theory of communication. It told us what must be true for a message to be transmitted through a channel. Glushkov’s network, at its foundation, was a Shannon-style information system at a scale no one had ever attempted.
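To make the point concrete: Shannon’s measure of information is defined only over a fixed, fully named set of possibilities with known probabilities. A minimal sketch (the steel-grade categories and probabilities below are hypothetical, chosen for illustration only):

```python
import math

def shannon_entropy(distribution):
    """Average information, in bits, carried by a message drawn from a
    fixed, named set of possibilities with known probabilities."""
    return -sum(p * math.log2(p) for p in distribution.values() if p > 0)

# The measure applies only once every possibility has a name and a
# probability. A category that does not yet exist simply cannot
# appear in the dictionary at all.
steel_grades = {"grade_A": 0.5, "grade_B": 0.25, "grade_C": 0.25}
print(shannon_entropy(steel_grades))  # 1.5 bits
```

Everything in this calculation presupposes the possibility space. That is exactly the condition the essay describes: the categories must exist, and stay stable, before a single bit can be counted.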

Knowledge is something else.

Information answers questions within a world already named. Knowledge re-names the world.

Consider a single worker on the floor of a Soviet factory in 1965. He has been at his station for years. He knows the lathe he operates the way a violinist knows a particular instrument: its small variations from day to day, the sound it makes when it needs calibration, the way it responds when the steel he is cutting is harder or softer than the last batch. His knowledge does not operate over a neatly defined set of possibilities. It responds to the situation before him, including features no one has named in advance. It is not measured in units. It cannot be moved through a channel without being altered, because it lives in his attention to the lathe and would not survive being separated from that activity.

The Hungarian-born chemist and philosopher Michael Polanyi saw this clearly. He observed that we know more than we can tell. The phrase is often treated as a comment on the limits of language, as if articulation were merely slow or incomplete. Polanyi meant something stronger. He meant that articulation, when it occurs, changes what is being articulated. The worker’s judgment is not a hidden proposition awaiting transcription. It is a form of knowing that lives only in the act.

Now consider the worker’s neighbor, an engineer in the same factory. The engineer notices that the aluminum offcuts piling up in the corner could be reworked into a component that is not in the official catalog but that a nearby motor plant has been asking for. He brings the proposal to his supervisor. The supervisor explains that the catalog does not contain the component and the system cannot process what the catalog does not name.

The worker knows more than he can tell. The engineer sees something no one has yet named. The first problem defeats transmission. The second defeats planning.

The engineer’s knowledge is not merely tacit. It is generative. It brings a category into being where none had existed before. No information apparatus could have captured it in advance, because information requires a defined possibility space and the possibility had not yet entered one. The new component did not exist as an entry in the system, and therefore could not appear as a demand, a supply, an allocation, or a shortage. It had to be seen before it could be counted. It had to be created before it could be optimized.
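The supervisor’s objection can be sketched in a few lines. The catalog entries here are hypothetical, invented for illustration; the point is structural, not historical:

```python
# A planning system can allocate only what its catalog already names.
catalog = {"steel_sheet": 0, "copper_wire": 0}  # known categories -> planned units

def plan_allocation(component, units):
    """Register planned units for a component, if the catalog names it."""
    if component not in catalog:
        # Demand for an unnamed category cannot be registered at all.
        raise KeyError(f"{component!r} is not in the catalog")
    catalog[component] += units

plan_allocation("steel_sheet", 100)          # works: the category exists
try:
    plan_allocation("aluminum_bracket", 50)  # the engineer's new component
except KeyError:
    pass  # invisible to the plan until someone names it first
```

The new component is not a shortage, a surplus, or a demand inside the system. It is nothing at all until the catalog itself is revised, and revising the catalog is an act of knowledge, not of information processing.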

This distinction matters because economic life is full of such moments. Most are small. Few receive names. Almost none appear in official histories. Yet civilization is largely made from them: a substitution improvised under constraint, a process altered by experience, a tool used in an unintended way, a market discovered because someone noticed that what appeared to be waste in one setting had value in another. The economy is not simply a stockpile of known things being moved more or less efficiently among known uses. It is an open process in which new uses, new meanings, new arrangements, and new categories are continually being brought into the world.

Hayek’s point was never merely that planners lacked enough data. It was that the relevant knowledge often comes into being only through the process the planner hopes to replace.

Glushkov’s network was designed to carry information. The activity it was designed to direct was constituted by knowledge. That was the mismatch. The cybernetic vision imagined information so comprehensive that it would become control. But information and knowledge are not points on a spectrum. They are different kinds of things. This is the structural problem, and it is older than Glushkov, older than computers, older than the Soviet experiment.

The Soviet economy of the 1960s was already demonstrating this, not in some hypothetical future but in the very years OGAS was being proposed. Alongside the official planning apparatus there had emerged what Soviet citizens and Western analysts came to call the second economy: informal arrangements, side production, barter networks, favors, substitutions, and small-scale enterprise operating outside Gosplan but inside the real life of the country.

The second economy was not an exotic exception. It was the ordinary evidence of a system meeting its structural limit. A factory needed a bearing the catalog did not list. The catalog did not list it because the bearing had been modified by a foreman six months earlier in response to a problem that emerged a year before that. The factory could not order the bearing through official channels because, officially, the bearing did not exist. So the factory traded informally with another enterprise that had begun making something similar through equally unofficial channels.

This was not chaos in the absence of order. It was order appearing where the formal system had failed to make room for reality.

The planners could collect data about the second economy. They could study it, denounce it, criminalize it, or try to absorb it. What they could not do was direct it without destroying the activity that made it useful. If the apparatus had absorbed the second economy, a third economy would have emerged in the spaces the apparatus had not yet learned to name. The difficulty was not that useful activity happened to be outside the system. The difficulty was that useful activity was always producing the future that the system would need to specify in advance.

This is the reason market economies do not solve the knowledge problem by knowing everything. They solve it by allowing no one to need to. Prices are information, but they are information of a special kind: compressed signals left behind by countless acts of judgment, experiment, failure, urgency, imagination, fear, and discovery. They carry knowledge without containing it. They let economic participants coordinate without requiring any one participant, committee, ministry, model, or machine to possess the whole.

A price does not tell us everything. It does something better. It tells us enough.

This is why the planning temptation is so persistent. It begins with a truth. Information matters. Better information matters enormously. A factory with no inventory data is blind. A portfolio manager with no prices is lost. A logistics network without feedback is merely a hope with trucks attached. No serious person should sneer at data, computation, optimization, or artificial intelligence. Tools matter. The human story is partly the story of better tools extending the reach of human intelligence.

But the temptation begins when a tool that works within categories is mistaken for an intelligence that can stand above the process by which categories are created.

The current version of this temptation does not call itself central planning. It does not invoke Glushkov, cybernetics, or the Soviet experiment. It speaks instead of foundation models, integrated data systems, agentic artificial intelligence, automated decision-making, and algorithmic coordination at scales no human institution could manage. Strip the vocabulary away and the proposition is recognizable: with sufficient computational capacity and sufficient data, we may at last be able to do what every prior attempt at comprehensive economic direction could not. The hard part was always the information. Now the information problem has been solved.

The argument is being made by serious people, and seriousness should be answered seriously. Contemporary AI systems can do things no previous generation of computational tools could do. They can identify patterns in data that human analysts would miss. They can summarize, classify, translate, simulate, and recommend across domains with remarkable fluency. They can act in environments and adjust behavior in response to feedback. Their capabilities are real. Their improvement is real. Their economic consequences will be real.

But they cannot do the thing the planning argument requires.

The limitation is not model size, training data, or computational power. Those things will improve, and in many domains the improvement will be astonishing. The limitation is structural. AI systems operate on records of activity that has already occurred. They detect patterns within categories that have become legible. They can help us search the known world with greater force. They can even help us combine known things in startling ways. But economic life is not confined to the known world. It is continuously producing new categories through action itself.

When an AI system encounters a genuinely new economic category — a new kind of work, a new arrangement between buyers and sellers, a new use for an existing material, a new preference that becomes visible only after a product appears — it cannot have directed the activity around that category from above. It can respond once the category has been generated. It can learn from it. It can amplify it. It can even participate in extending it. But the category had to enter the world before the system could treat it as information.

There is an important refinement here. If an AI system actively generates new economic categories — designing new products, identifying new markets, creating new arrangements — it is not directing the catallactic process from outside. It has joined the process. It has become one participant among many, perhaps a powerful participant, perhaps an astonishing one, but still a participant. The structural point remains. No participant in economic activity, however capable, can stand outside the activity and direct the whole of it. To do so would require specifying the categories that the activity itself is creating in real time.

This matters for investors because the same error appears in softer form whenever analytical tools are treated as oracles. A sophisticated model can analyze the world that has become legible. It can detect relationships inside existing categories. It can help distinguish noise from signal, excess from value, and durable trends from fashionable hallucinations. Used well, such tools are valuable. Used reverently, they are dangerous.

The future that matters most to investors is not the extrapolation of an existing category but the emergence of a new one. Before the category exists, it cannot appear in the data in the way a model requires. Before the market has named it, the investor must judge through analogy, imagination, institutional understanding, competitive dynamics, technological possibility, and a sense of human behavior. These are not substitutes for analysis. They are the conditions under which analysis becomes useful.

Markets, at their best, are not forecasting machines. They are discovery systems. They permit experiment, error, imitation, failure, adaptation, and repricing without requiring anyone to write the future down beforehand. They do not eliminate ignorance. They metabolize it.

This is why serious investing has always required both discipline and humility. Discipline, because information is indispensable and must be treated carefully. Humility, because information is not knowledge, and knowledge is not omniscience. The investor who ignores models is negligent. The investor who worships them is merely negligent with fancier equipment.

The Kennedy officials who took an interest in Soviet cybernetics in the autumn of 1962 were responding to something real. Ford was right that the proposition deserved attention. Schlesinger was right that the ambition could not be dismissed. The Wiesner panel was right to include some of the most accomplished economic and scientific minds in the country, because the question was worth asking carefully.

They were wrong, in the event, about what would happen. OGAS was not built. The Soviet economy was not transformed. American neglect of cybernetics did not finish the United States.

But the ambition they were responding to has not died and likely never will. It has appeared in every generation under the technological language available at the time: mechanical computation, electronic computation, systems analysis, operations research, cybernetics, networked databases, algorithmic optimization, foundation models. Each generation believes the previous failure was a failure of tools. Each generation believes the new apparatus has finally changed the situation. Each generation is tempted to mistake a larger map for the territory, a better signal for the thing signaled, a faster machine for a wiser civilization.

The mistake is always the same.

Economic activity is not an object sitting still beneath an apparatus. It is the activity that generates the categories any apparatus would need in order to operate on it. The future of the economy is not merely hidden. It is being created.

That is the fact the machine could not know.

And it is the fact the careful investor must never forget.

James W. Vermillion III

