One chart best captures the AI meme-stock nature of equity markets right now: SoftBank, which owns a considerable chunk of still-unlisted OpenAI equity.
Before pulling back recently, SoftBank stock zoomed from $20 to $88 in just six months. It is currently trading at $55.
P.S. If all the AI data centers announced for the US are actually built, their electrical power requirements will reach roughly 30-50 GW by 2030. To put that number into perspective, it is the equivalent of at least 10 new Palo Verde nuclear plants. Palo Verde is, by far, the largest nuclear plant in the US; it took 12 years to build and cost $6 billion.
Today, an equivalent plant would cost at least $35 billion due to inflation and additional regulatory demands. So… a MINIMUM of $350 billion and at least a decade to build. That’s before NIMBY even comes into play.
So… how realistic are AI growth projections? I’m speaking as an engineer here…
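For anyone who wants to check the arithmetic, here is a rough back-of-the-envelope sketch in Python. The 3.9 GW figure for Palo Verde’s net capacity is my assumption, not something stated above; the other numbers are the ones quoted in the post.

    # Back-of-the-envelope behind the figures above.
    palo_verde_gw = 3.9        # approximate net capacity of Palo Verde (my assumption)
    cost_per_plant_bn = 35     # estimated cost of an equivalent plant today
    plants = 10                # the post's round number of Palo Verde equivalents

    print(plants * palo_verde_gw)       # ~39 GW, inside the 30-50 GW projection
    print(plants * cost_per_plant_bn)   # 350 -> the $350 billion minimum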

The problem with A.I. is far deeper than a bubble. In machine learning, it is possible to get an algorithm to do anything you want as long as there is enough training data. After all, one can simply get the algorithm to return the closest example in its training dataset (i.e., memorization). This has been known for a long time, roughly a hundred years.
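To make the "closest example" point concrete, here is a minimal, purely illustrative sketch of such a memorizer: a one-nearest-neighbour lookup over the training set (the class name and the toy data are made up):

    import numpy as np

    class MemorizingModel:
        """Answers any query by returning the label of the closest stored training example."""
        def fit(self, X, y):
            self.X = np.asarray(X, dtype=float)
            self.y = list(y)
            return self

        def predict(self, x):
            # No generalization, just a lookup: distance to every memorized example.
            distances = np.linalg.norm(self.X - np.asarray(x, dtype=float), axis=1)
            return self.y[int(np.argmin(distances))]

    model = MemorizingModel().fit([[0, 0], [1, 1], [5, 5]], ["a", "b", "c"])
    print(model.predict([0.9, 1.2]))   # "b" -- whatever was closest in the training set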
What the A.I. people have been doing lately is creating extremely good demos using extremely large datasets, and using this to claim they have developed intelligence. This is fundamentally not intelligence but fancy memorization. Kind of like a stupid kid who scores well on the SATs by memorizing all the answers.
The problem with the memorizing solution is that it requires huge models, because you are basically memorizing every answer, albeit in a fairly efficient manner. These models swallow huge amounts of electricity just to answer a simple question. Hence the data center build-out you are seeing.
From the financial perspective, what they are doing is creating nice demos which may never be financially rewarding. The trick lies here: most finance people think that technology will naturally get cheaper and better with time, and that the important part is a fancy demo. However, this only happens if we understand the technology in question and can thus optimize it. In the memorization solution, there is no understanding. Thus, the technology will not get more efficient with time; in fact, it might get less so as they push for bigger models. The result is fancy demos that never turn a profit. The classic example is self-driving, but it may even apply to ChatGPT.
For the technically inclined: I am aware that A.I. seems to generalize a little beyond the training data, and this has often been touted as a form of intelligence. However, one must note two things. First, the generalization is very limited. Second, this limited generalization can also be achieved through fancy memorization rather than intelligence, by projecting the data into a feature space where the separation between right and wrong answers is maximized. I have yet to see evidence that A.I. is doing intelligence rather than fancy memorization, and I have in fact seen much evidence of the latter.
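A toy illustration of that second point: the same nearest-neighbour memorization, but applied after projecting the data into a hand-picked feature space. The task, the feature, and the numbers are all invented for illustration.

    import numpy as np

    # Toy task: is a point inside the unit circle? The "model" is pure memorization:
    # return the label of the nearest stored example.
    X_train = np.array([[0.2, 0.3], [0.5, 0.1], [1.5, 0.2], [-1.2, -0.9]])
    y_train = ["inside", "inside", "outside", "outside"]

    def phi(p):
        # Hand-picked feature: squared distance from the origin. In this space the
        # two classes separate cleanly, so memorization looks like it "generalizes".
        return np.array([p[0] ** 2 + p[1] ** 2])

    def nearest_label(p, feature=lambda q: np.asarray(q, dtype=float)):
        feats = np.array([feature(x) for x in X_train])
        d = np.linalg.norm(feats - feature(p), axis=1)
        return y_train[int(np.argmin(d))]

    query = [-0.6, -0.5]                       # inside the circle, far from any "inside" example
    print(nearest_label(query))                # raw memorization: "outside" (wrong)
    print(nearest_label(query, feature=phi))   # memorization in a good feature space: "inside"

The apparent generalization comes entirely from the choice of feature space, not from anything the lookup itself understands.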
By the way, the corruption in A.I. is off the charts. Just a simple example.
This guy claims he has written 10,000 papers. Is that humanly possible? In any sane community, he would be in jail. The fact that he is a "respected" professor tells you everything you need to know about the field and its attitude towards fraud.
https://scholar.google.com/citations?hl=en&user=DNuiPHwAAAAJ&view_op=list_works&sortby=pubdate