It's Bye, For Now
My motivation and time for technology have increasingly waned over the past year. I don't tinker as much as I used to, I've largely hit the end of where I can go with Unix-based systems, my day-to-day technology stack has mostly stabilized, the future direction of some technologies I've invested in has gone sideways at best (and at worst backwards), and I no longer feel the need to keep up with technology that seems to move arbitrarily. When I first got into technology, I made an assumption that turned out to be my critical error: that technology is the realm of intelligent decision making. I thought technology would solve actual problems, that someone would implement a solution, and that we would continue using it forever. After all, a date-time library, once built, doesn't seem to ever need updates; this should have been solved in the 70s. And yet every stack I go to seems to want to reinvent the wheel, and oftentimes this reimplementation seems to be the result of ignorance of our past technology. I've accepted now that I was wrong in my assumption, that the field has the same problems as every other field, and that investing time for the sake of investing time is not particularly fruitful. And so I am taking a hiatus. However, I did not want to leave the reader hanging, so to speak, so here are my final thoughts on technology, where we are, and where I think we will go.
The Actual Future Of Technology Is Not AI, But Rather Physics
We like to think that technology, which in the modern context (as "technology" tends to be a moving term for various "revolutions" in innovation) really refers to transistor-based technology, is "the future". However, people have to understand that the transistor and classical compute technology is now an ancient dinosaur that has been around for over half a century. We have optimized it, refined it, and used a few clever tricks to get around inherent limitations in this technology (for example, multiprocessing using multiple CPU cores). However, no matter what we do, it is impossible for this technology to achieve anything greater than the lowest-level physics primitive it is built on: the speed of light. Light has a finite speed, a limit that, per Einstein's most famous equation, cannot be bypassed without the mass of the object increasing towards infinity. And, unfortunately for us, compute and latency are both bottlenecked by this value. Our fiber optic cables ship bits at near light speed, and the attempts to improve latency via 5G were really about improving the "last mile" bottleneck in bit shipping that our older infrastructure faces. And even then, shipping higher-frequency radio waves to nodes that then forward bits at near light speed is obviously slower than plugging into the light-speed network directly (hence why your ethernet-hardwired machine can achieve an order of magnitude more throughput than a wifi chip). Similarly, for compute, we are not going to push electrons through transistors faster than light speed; it's simply an impossibility in the physics of energy-based technologies.
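To make that bottleneck concrete, here is a rough back-of-the-envelope sketch (the numbers are assumptions for illustration, not from any specific network: a roughly 5,600 km New York to London fiber path and a typical fiber refractive index of about 1.47) showing the hard floor that the speed of light places on round-trip latency:

```python
# Back-of-the-envelope: the light-speed floor on network latency.
# Assumed figures: a ~5,600 km straight fiber run (roughly the New York -
# London great-circle distance) and light traveling at about c/1.47 in fiber.

C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
REFRACTIVE_INDEX = 1.47          # typical for optical fiber
DISTANCE_KM = 5_600              # rough NYC-London great-circle distance

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time, ignoring routing, switching, and queuing."""
    speed_in_fiber = C_VACUUM_KM_S / REFRACTIVE_INDEX   # ~204,000 km/s
    one_way_s = distance_km / speed_in_fiber
    return 2 * one_way_s * 1000                          # milliseconds

if __name__ == "__main__":
    # Prints roughly 55 ms; real-world RTTs are higher due to indirect
    # paths and equipment delays, but nothing can go below this floor.
    print(f"Minimum NYC-London RTT: {min_round_trip_ms(DISTANCE_KM):.1f} ms")
```

No amount of clever engineering gets under that figure; everything we build on top only adds to it.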
So where does that leave us with our current technology? Nowhere, actually; we are at the end of an old revolution that started many, many years ago. Instead, the actual future of technology must attempt to find a way to bypass that light speed limitation, which will require an understanding of, and the ability to manipulate, massless physics so that E=mc^2 does not hold. Maybe it is the case that there is no physics out there like this, that light truly is the limit. However, there is promise. Again from Einstein: he observed a paradox, often called Spooky Action At A Distance, in which correlations were found between measurements of quantum particles separated by great distances. If this phenomenon can be understood, and then manipulated, then it is entirely possible to ship bits instantly to any location in the observable universe, as the interaction between the particles is not bottlenecked by the speed of light. It is possible, albeit highly dubious due to Fermi's Paradox, that a highly intelligent, highly technologically advanced life form has already understood this phenomenon and is shipping quantum bits to Earth (in theory a life form could build a "quantum router" that constantly fires quantum bits at every detectable massive object in the universe, waiting for an observer life form to pick them up). However, it is also entirely possible, and perhaps more likely, that the observed correlations were due to measurement error, which unfortunately is quite a common occurrence in science. Nevertheless, I am hopeful about this physics, and believe that the benefits of a lead such as this far outweigh the cost of research, such that, even if no breakthrough is found, it is still a highly worthwhile area of exploration.
Current Quantum Will Not Replace, But Rather Augment, Classical Computing
One more thing about quantum computers. One misinterpretation that many people have is that quantum will replace classical computing. However, the quantum computers being designed currently are only really suitable for solving certain difficult mathematical problems, and even then it is impossible for them to solve many problems that are simply a matter of iteration. There was concern among many cybersecurity researchers that the RSA algorithm could be broken via Shor's Algorithm. This would pose a problem for the most commonly used algorithm for asymmetric key cryptography. However, Shor's Algorithm cannot be applied to the algorithms used for symmetric key cryptography (AES being the most common) or hashing (the SHA algorithms). Rather, in those scenarios, increasing the key length is sufficient to prevent a quantum solution. And even then, there is one extremely simple, but not very practical, encryption algorithm that cannot be broken at all, even by brute force: the One-Time Pad algorithm. Similarly, breaking hashing algorithms typically requires brute force to find a hash collision, which quantum computers (at least with the current approaches) can accelerate only modestly, not enough to threaten a sufficiently long hash. Nevertheless, our current quantum is excellent at solving some optimization problems, which can save billions of dollars and many lives per year.
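As a quick illustration of just how simple the One-Time Pad is, here is a minimal sketch (illustrative only, not production crypto): the message is XORed with a truly random key of equal length that is used exactly once.

```python
import secrets

# Minimal one-time pad sketch (illustrative only, not production crypto).
# Security requires the key to be truly random, as long as the message,
# kept secret, and never reused.

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(message))               # fresh random key
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))  # XOR is its own inverse

if __name__ == "__main__":
    ct, key = otp_encrypt(b"catch you in the wired")
    assert otp_decrypt(ct, key) == b"catch you in the wired"
```

Because every possible plaintext of the same length corresponds to some key, an attacker with unlimited compute, quantum or classical, learns nothing from the ciphertext alone; the impracticality comes entirely from having to share and protect a key as long as the message itself.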
Our Current AI Implementations Are Inefficient Dead Ends For AGI, But Still Produce Very Useful Technologies
Historically, there have been two approaches to implementing Artificial Intelligence through classical compute: Symbolic and Perceptron approaches. Symbolic approaches were quite thoroughly explored during the early AI summers, as the Perceptron approaches were simply too compute-expensive at the time. However, that approach proved to be a dead end, at least at a practical level, due to the "dogfooding problem" (it was rarely the case that the experts were the implementers), and even then the systems were quite domain-specific and highly unlikely to have ever produced the sophisticated chat technologies of Perceptron approaches. Instead, AI went into winters until the most recent summer, when we just so happened to have enough compute to produce impressive Perceptron models (such as LLMs via neural networks). However, in the end the Perceptron approach is also a dead end for AGI due to its extreme inefficiency relative to the human brain. To produce about 1000 tokens, roughly 200-300 English words, our current LLMs perform on the order of 1 quadrillion (1,000,000,000,000,000) operations, vastly less energy efficient than what the human brain can do. Similarly, while AlphaGo was able to beat the top human Go champion, winning four of the five games in their match, the champion did beat AlphaGo in one game. Furthermore, the number of games AlphaGo had to play to reach that level was over a billion, while the human champion had played perhaps 40,000-50,000 games in his lifetime, indicating limits to the intelligence of the algorithms produced by Perceptron models compared to the algorithm used by the human brain. True Artificial General Intelligence would likely require an algorithm similar to the human brain's, which is not possible on classical computers, as the architectures simply differ too significantly. Brain architecture is a reconfiguring, highly parallel, and highly redundant system, while classical computer architecture is static and not redundant at all. And so, I'm extremely doubtful that our Perceptron-based approaches running on classical computer architecture will produce the mythic Artificial General Intelligence.
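For reference, the Perceptron that this whole family of approaches is named after is a remarkably small idea: a weighted sum, a threshold, and a weight nudge after each mistake. Here is a minimal sketch (the logical AND function is chosen purely as a toy example):

```python
# Minimal single-unit Perceptron sketch: a weighted sum, a hard threshold,
# and a simple error-driven weight update. Toy task: the logical AND gate.

def predict(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Nudge the weights toward the correct answer after each mistake.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

if __name__ == "__main__":
    and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train(and_gate)
    print([predict(w, b, x) for x, _ in and_gate])  # expected: [0, 0, 0, 1]
```

Stack enormous numbers of these units, swap the hard threshold for smooth activations, and train with gradient descent, and you arrive at the modern neural network; the core idea has not changed, only the scale.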
However, despite the limitations of Perceptron models for producing AGI, they can still produce some very impressive and useful technologies. I find myself more and more often reaching for Perceptron-based AI tools these days to solve tasks that are essentially impossible to solve with other algorithmic approaches: for example, background image removal, text-to-speech generation, general-purpose language translation (granted, it tends to be bad at conveying writer intent, but it is still quite useful for translating knowledge that is formulaic in nature), language intent mapping, etc. Ironically, I don't use our current suite of Perceptron-based models (for example, LLMs such as ChatGPT) for search or programming, as these require a degree of reflection that the averaged output of the GPT family can't realistically produce. Simply put, it misses too much critical background context to be effective for my use cases.
The "Infinite Compute" Machine As The Solution To Everything
To put in perspective how far we actually are from the compute power we need, let's take a simple example from biology. The number of cells in a fully grown human body is, roughly estimating, 30 trillion. Now suppose we want to model a human body perfectly in software. Logically, we need to store all of the cell information. Even if we could store everything we need to know about a single cell in 1 byte of data (which is almost certainly far less than the number of bits we would actually need per cell), representing every cell in the body in RAM would take ~30 terabytes. Then, on top of that, and by far the biggest bottleneck that classical compute will never be able to overcome, the cells interact with each other, creating a network of possibly quadrillions of interactions that occur within the body every day. Using the entirety of the computing resources on Earth, we are nowhere even close to having enough RAM or compute. However, envision a world where we have infinite compute running at infinite speed. In this case, the human body modeling problem becomes trivial: simply take a small sample of DNA, let a simulation calibrate itself to the current body state, and then let the simulation predict the future perfectly as it models every possible interaction that can occur. Within seconds, the infinite compute machine generates a custom-made drug that solves any ailment that anyone has. Further, distinctions such as the physical aspects of humans no longer remain relevant, as it's a quick custom drug to modify anything. Similarly, languages no longer remain relevant either, as a model exchange occurs via computer between the two speakers, allowing automatic translation that instantly understands the exact intent of both. It would be a very interesting world, one that I am almost certain I will never see, but I am happy in the optimistic thought that one day it will be a reality for people.
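The back-of-the-envelope arithmetic behind the storage part of that claim looks like this (the figures are assumptions for illustration: 30 trillion cells, 1 byte per cell, and a generously sized 1 TB machine):

```python
# Back-of-the-envelope check on the storage claim above.
# Assumed figures, not measurements: 30 trillion cells, 1 byte per cell,
# compared against a very generous 1 TB of RAM on a single machine.

CELLS_IN_HUMAN_BODY = 30_000_000_000_000   # ~30 trillion
BYTES_PER_CELL = 1                          # wildly optimistic lower bound
RAM_PER_MACHINE = 1 * 10**12                # 1 TB

total_bytes = CELLS_IN_HUMAN_BODY * BYTES_PER_CELL
print(f"Storage needed: {total_bytes / 10**12:.0f} TB")           # ~30 TB
print(f"Machines needed at 1 TB each: {total_bytes // RAM_PER_MACHINE}")
# And this is only the static state of one body, before simulating a single
# one of the quadrillions of daily cell interactions.
```

The static snapshot alone is already enormous, and it is the interaction network on top of it, not the storage, that pushes the problem permanently out of reach for classical machines.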
That's All For Now
It's been fun everyone. I've enjoyed the time I've spent blogging, despite the amount of work that goes into it. And even though I need some time, I will almost certainly be back. It may not be soon, it may not be for years even, but I always find myself coming back to the old web, wanting to recapture the magic that once was, where so many communities were built through the sheer joy of the people who were part of them. It was a magical time, and that magic is still out there, in the small, often forgotten, corners of the web. So with that, I take my leave.
Catch you in the Wired, Lain
--Yukinu
PS, The Ubunchu archives are still available at these links: