Gluts of Good Guesses

Fifty years? We’re already there.

Our systems are still fairly limited, and yet they’re already growing far beyond our ability to comprehend them.

If no one truly knows how a pencil is made, let alone how a computer really works, it shouldn’t surprise us that delegated problem-solving will soon be the norm across most complex endeavors, including math and science.

This is inevitable for a very specific reason: when problems reach a certain point of computational complexity, the difficulty of understanding our own solutions to them becomes prohibitive, even for the very experts and thought leaders who designed those solutions in the first place.

Is that a problem?

It’s hard to argue with the results, so we may just have to trust the process.

—Or in this case, the processing.

We’ll take things on faith, I suppose.


“Computers are already revolutionizing the practice of mathematics. Because of computers, the current religion will completely change. In a hundred years, possibly fifty years, people will make fun of the superstitions of twentieth century mathematicians with their pathetic belief in absolute certainty. Because computers are already, like AlphaGo, much better than any Go player. And in chess playing, DeepBlue can beat anyone. So analogously, for mathematics and all the theorems of mathematicians, in fifty years computers will find much better proofs all by themselves, without any human help, just by doing machine learning. This is the future. Eventually, all the corpus of mathematics will be done from scratch, ab initio, by computers.” — Doron Zeilberger

Patterson in Pursuit, Ep. 97: Math Heresy and Ultra-finitism (2019)


“Today, machine learning programs do a pretty good job most of the time, but they don’t always work. People don’t understand why they work or don’t work. If I’m working on a problem and need to understand exactly why an algorithm works, I’m not going to apply machine learning. On the other hand, one of my colleagues is analyzing mammograms with machine learning and finding evidence that cancer can be detected much earlier. AI is an application rather than a core discipline. It’s always been used to do something.”

— Barbara Liskov, in Quanta


(This headline is dumb, as most are, but the content fits.)

Number Theorist Fears All Published Math Is Wrong:

Buzzard’s point is that modern mathematics has become overdependent on the word of the elders because results have become so complex. A new proof might cite 20 other papers, and just one of those 20 might involve 1,000 pages of dense reasoning. If a respected senior mathematician cites the 1,000-page paper, or otherwise builds off it, then many other mathematicians might just assume that the 1,000-page paper (and the new proof) is true and won’t go to the trouble of checking it. But mathematics is supposed to be universally provable, not contingent on a handful of experts.

This overreliance on the elders leads to a brittleness in the understanding of truth. A proof of Fermat’s Last Theorem, a problem posed in 1637 and once considered by the Guinness Book of World Records to be the world’s “most difficult math problem,” was published in the 1990s. Buzzard posits that no one actually completely understands it, or knows whether it’s true.

“I believe that no human, alive or dead, knows all the details of the proof of Fermat’s Last Theorem. But the community accept the proof nonetheless,” Buzzard wrote in a slide presentation, because “the elders have decreed that the proof is OK.”


DeepMind Solves Quantum Chemistry:

There are roughly 70,000 parameters in the network, and the real question is this: what generalization is occurring here? The network is trained just once, and after that it seems capable of finding approximate wave functions that give good results for the predicted properties.

“Importantly, one network architecture with one set of training parameters has been able to attain high accuracy on every system examined.”
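
To make the variational idea concrete, here is a minimal sketch of the variational Monte Carlo loop that this kind of work builds on, shrunk from a deep network for many interacting electrons down to a one-parameter Gaussian trial wavefunction for a single 1D harmonic oscillator. The toy Hamiltonian, the ansatz, and the learning rate are all my own illustration, not the authors’ FermiNet.

```python
# Toy variational Monte Carlo: optimize a trial wavefunction for the
# 1D harmonic oscillator (hbar = m = omega = 1).  The paper does the
# analogous thing with a deep-network ansatz for many interacting electrons;
# here the whole "network" is one parameter alpha in psi(x) = exp(-alpha * x^2).
import numpy as np

rng = np.random.default_rng(0)

def local_energy(x, alpha):
    # E_L = (H psi) / psi for psi = exp(-alpha x^2), H = -1/2 d^2/dx^2 + x^2 / 2
    return alpha + x**2 * (0.5 - 2.0 * alpha**2)

def metropolis_sample(alpha, n_samples=20000, step=1.0):
    # Draw x ~ |psi|^2 with a plain Metropolis random walk.
    x, samples = 0.0, np.empty(n_samples)
    for i in range(n_samples):
        x_new = x + rng.uniform(-step, step)
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        samples[i] = x
    return samples

alpha = 1.2                                # deliberately poor starting guess
for it in range(40):
    xs = metropolis_sample(alpha)
    e_loc = local_energy(xs, alpha)
    dlogpsi = -xs**2                       # d(log psi)/d(alpha)
    energy = e_loc.mean()
    # Standard VMC estimator for the gradient of the energy.
    grad = 2.0 * ((e_loc * dlogpsi).mean() - energy * dlogpsi.mean())
    alpha -= 0.1 * grad
    print(f"iter {it:2d}  alpha = {alpha:.4f}  <E> = {energy:.4f}")
# Drifts toward alpha = 0.5 and <E> = 0.5, the exact ground state.
```

The loop is the same in spirit as the paper’s: sample configurations from |ψ|², average the local energy, nudge the variational parameters downhill. The difference is tens of thousands of network weights in place of a single α.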

How much can we trust a neural network that we simply don’t understand? That is the one potential misgiving about relying on these findings. But the results are probably too good to ignore just because we don’t understand how the network constructs its optimal functions. As the paper concludes:

“This has the potential to bring to quantum chemistry the same rapid progress that deep learning has enabled in numerous fields of artificial intelligence.”

David Pfau, James S. Spencer, Alexander G. de G. Matthews and W. M. C. Foulkes, Ab-Initio Solution of the Many-Electron Schrödinger Equation with Deep Neural Networks, DeepMind and Imperial College


Jumping the gap may make electronics faster:

The problem with using SPP waves in designing circuits is that while researchers know experimentally that they exist, the theoretical underpinnings of the phenomenon are less well defined. The Maxwell equations that govern SPP waves cover a continuum of frequencies and are complicated.

“Instead of solving the Maxwell equations frequency by frequency, which is impractical and prone to debilitating computational errors, we took multiple snapshots of the electromagnetic fields,” said Lakhtakia.

These snapshots, strung together, become a movie that shows the propagation of the pulse-modulated SPP wave.
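
To give a rough sense of what “snapshots strung together into a movie” means computationally, here is a minimal time-domain sketch in that spirit: a generic 1D finite-difference update of Maxwell’s curl equations, driven by a pulse-modulated source, with the field recorded at regular intervals. The grid, units, and source are invented for illustration; the actual silver/silicon SPP geometry and the group’s real code are far more involved.

```python
# Minimal 1D FDTD "snapshot" sketch: march Maxwell's curl equations forward in
# time and record frames, instead of solving frequency by frequency.
# A generic vacuum toy, not the paper's silver/silicon SPP setup.
import numpy as np

nx, nt = 400, 380            # grid cells, time steps
ez = np.zeros(nx)            # electric field
hy = np.zeros(nx)            # magnetic field
snapshots = []               # the "movie": one frame every few steps

for t in range(nt):
    # Leapfrog updates in normalized units (Courant number = 1).
    hy[:-1] += ez[1:] - ez[:-1]
    ez[1:]  += hy[1:] - hy[:-1]
    # Pulse-modulated source injected near the left edge.
    ez[10] += np.exp(-((t - 60) / 20.0) ** 2) * np.sin(0.3 * t)
    if t % 20 == 0:
        snapshots.append(ez.copy())

print(f"collected {len(snapshots)} snapshots of the propagating pulse")
```

Storing full-field frames like this is cheap in one dimension; presumably it is keeping such frames around for realistic three-dimensional geometries that drives the appetite for memory Lakhtakia mentions below.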

“We are studying tough problems,” said Lakhtakia. “We are studying problems that were unsolvable 10 years ago. Improved computational components changed our way of thinking about these problems, but we still need more memory.”

More information: Rajan Agrahari et al, Information Transfer by Near-Infrared Surface-Plasmon-Polariton Waves on Silver/Silicon Interfaces, Scientific Reports (2019). DOI: 10.1038/s41598-019-48575-6


Mildly related addendum:

“In the early 21st century, there is far too much disorganization in science and commerce, so that the work produced by one scientist is difficult for others to find, let alone use. In order to connect different disciplines and improve communication of ideas, we need a meta-language that captures the structures of and internal relationships between these ideas, so that formal analogies can be made between disciplines. For example, category theory may help to formalize well-known engineering analogies, such as that of force and velocity versus voltage and current.”

David Spivak, Notes on Applied Category Theory (Draft), École Polytechnique Fédérale de Lausanne, 2015
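
The force/velocity versus voltage/current analogy Spivak points to can be made concrete even without category theory: a damped, driven second-order system describes both a mass-spring-damper and a series RLC circuit, so a single routine serves both readings. The numbers below are arbitrary, and the code is only an illustration of the shared structure that such a meta-language would aim to formalize.

```python
# One routine, two readings: the mechanical equation m*x'' + c*x' + k*x = F(t)
# and the electrical equation L*q'' + R*q' + q/C = V(t) have the same shape,
# so force <-> voltage and velocity <-> current.  Illustration only.

def simulate(inertia, damping, stiffness, drive, dt=1e-3, steps=5000):
    """Integrate inertia*u'' + damping*u' + stiffness*u = drive(t) (semi-implicit Euler)."""
    u, du = 0.0, 0.0                 # displacement/charge and velocity/current
    for n in range(steps):
        ddu = (drive(n * dt) - damping * du - stiffness * u) / inertia
        du += ddu * dt
        u += du * dt
    return u, du

# Mechanical reading: 2 kg mass, 0.5 N*s/m damper, 8 N/m spring, 1 N step force.
x, v = simulate(inertia=2.0, damping=0.5, stiffness=8.0, drive=lambda t: 1.0)
# Electrical reading: 2 H inductor, 0.5 ohm resistor, 1/C = 8, 1 V step voltage.
q, i = simulate(inertia=2.0, damping=0.5, stiffness=8.0, drive=lambda t: 1.0)

print(f"velocity {v:.4f} m/s  <->  current {i:.4f} A")   # identical by construction
```

The point is not the physics but the fact that the two descriptions are the same equation with relabeled symbols, which is roughly the kind of structural identity a categorical meta-language tries to capture systematically.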
