I am curious for myself. The market opened about 1.75% down, quickly went to down 2.5% before 9am, and rallied all the way back to close just 0.15% down on the day, so a rally of about 1.6% from open to close. Could I have made significant money with $NDXP options today? My gut tells me no, but I want to see.
Specifically, the most likely trade I would have done is buy $NDXP call options at the open, struck at yesterday’s close (i.e., betting $NDX would get back to flat by the end of the day). Those obviously would have expired worthless, as NDX did not end the day positive. But how about some others? The VIX is currently at 21, and my gut feel is that it is too high to make money buying out-of-the-money calls, but let’s see.
First, what if I had bought at the open with a strike 0.5% below yesterday’s close? NDX closed Friday at 19,280, and 0.5% down from that is roughly 19,180. NDX opened today at 18,990. The first trade for .NDXP-25-03-31-C-19,180 was at $19. So you would have made $100 / $19 = about 5x your money. Damn. That is a lot.
OK, so how about if you had bought .NDXP-25-03-31-C-19,200 at 8:48am, when the market was down the full 2.5%? You could have picked it up for $9. It would have ended at $80. Almost 9X. Damn.
What about an ATM call? You could have bought .NDXP-25-03-31-C-19,000 at 8:30am for $100, or at 8:45am for $40. That would have closed at $280. Meaning roughly 3X or 7X your money.
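Just to sanity-check the returns above, here is a quick Python sketch that recomputes the multiples from the fills quoted in this post (the entry premiums and closing values are the ones above; it ignores fees and assumes you could actually get filled at those prices):

```python
# Rough sanity check on the hypothetical 0DTE $NDXP call trades described above.
# Premiums and settlement values are the ones quoted in the post; returns
# ignore commissions and assume the quoted fills were actually available.

def return_multiple(entry_premium: float, settle_value: float) -> float:
    """How many times your money the option returned (1.0 = break even)."""
    return settle_value / entry_premium

trades = [
    # (description,                                entry premium, value at close)
    (".NDXP-25-03-31-C-19,180 bought at the open", 19.0, 100.0),
    (".NDXP-25-03-31-C-19,200 bought at 8:48am",    9.0,  80.0),
    (".NDXP-25-03-31-C-19,000 bought at 8:30am",  100.0, 280.0),
    (".NDXP-25-03-31-C-19,000 bought at 8:45am",   40.0, 280.0),
]

for name, entry, settle in trades:
    print(f"{name}: {return_multiple(entry, settle):.1f}x your money")
```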
Damn. Well, I guess you could have made money today.
OK, I’ll start this one off by admitting this post is total procrastination. It is Friday morning and I should be doing something productive, but instead I want to look at the metrics for the 2025 OKC Thunder. The stock market is down another 2% today, so don’t look there. My curiosity was piqued when, on local sports radio, I heard the announcer say that last week’s game between OKC and the LA Clippers was the first one-possession game OKC has played all year. It is late March. So, let’s dig through the numbers:
As I write this the Thunder are 61-12 with 9 regular season games left to go. They have already wrapped up the #1 seed in the Western conference; no other team in the conference has even locked up a playoff spot yet (!!). In 1-possession games this year (defined as a final margin of 3 points or fewer, plus OT games) the Thunder are 1-4. So the radio talking head was wrong: it was not the Thunder’s first 1-possession game, it was just the first 1-possession game the Thunder have won. The Thunder are 6-2 in 2-possession games, and 54-6 in 3+ possession games. Here is the full record:
The record is sorted by point differential. OT games are boxed in, 1-possession games are in puke-yellow, and 2-possession games are in sea-foam-green. 3-possession+ games are in white.
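For anyone who wants to reproduce the bucketing, the rule is simple: overtime or a final margin within 3 points counts as a 1-possession game, a margin of 4 to 6 is a 2-possession game, and anything bigger is 3+. A minimal sketch of that classification (the sample games here are made up, not pulled from the actual schedule):

```python
# Classify games into the possession buckets used above: OT or a margin
# within 3 points = 1-possession, a margin of 4-6 = 2-possession, else 3+.

def possession_bucket(margin: int, overtime: bool) -> str:
    if overtime or abs(margin) <= 3:
        return "1-possession"
    if abs(margin) <= 6:
        return "2-possession"
    return "3+ possession"

# (final margin from OKC's perspective, went to overtime?) -- hypothetical games
sample_games = [(+2, False), (-1, True), (+5, False), (+24, False)]
for margin, ot in sample_games:
    print(f"margin {margin:+d}, OT={ot} -> {possession_bucket(margin, ot)}")
```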
The Thunder play a one-possession game about once a month. That is nuts. Compare the current champs, the Boston Celtics:
The Celtics have played twice as many 1-possession games (10) and won 7 of those.
So if you convert the Thunder’s 1-4 record in 1-possession games to 4-1 (or even better, 5-0), they would be 64-9 (or 65-8) right now, and theoretically could win out to go 73-9 (or 74-8). Only slightly crazy talk, because look at the 2016 Golden State Warriors:
Notice their good fortune in close games: they went 10-0 in games decided by 2 possessions, and 7-2 in 1-possession games, for a total of 17-2 in games decided by 2 possessions or fewer. Their 3+ possession record of 56-7 is going to end up being worse than the Thunder’s, who could finish as high as 63-6.
It has been widely reported this year that OKC’s average margin of victory (currently +13.1) is the largest in NBA history. It handily beats the 2nd and 3rd best teams this year (Cleveland at +10.4 and Boston at +9.1); to put in perspective how good those two numbers are on their own, the 4th and 5th place teams this year are in the +4 range. It even handily beats Jordan’s 1995-96 Bulls (+12.3) and the 2015-16 GSW team (+10.8) that went 73-9. The current best all-time is the 1971-72 Lakers at +13.9. But that number needs to be adjusted for pace: back in the 1970s there were many more possessions per 48 minutes. No 3-point line, no offensive sets, just fast breaks and dunks (Showtime!). The metric that does this adjustment is net rating. Net rating adjusts for pace, making it possible to compare teams that play at different speeds. For example, a fast-paced team might have a good point differential simply because it plays more possessions per game, while net rating reveals whether it is actually more efficient on a per-possession basis. Here are the net-rating comparisons:
So, on a net rating basis, OKC is up almost +3 on the 2016 GSW and even further ahead of Jordan’s Bulls.
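For reference, net rating is just points scored minus points allowed per 100 possessions, with possessions usually estimated from box-score totals. Here is a minimal sketch of the calculation; the possessions formula is the common FGA - ORB + TOV + 0.44*FTA approximation, and the season totals are placeholders, not the actual 2025 numbers:

```python
# Net rating = offensive rating - defensive rating, both per 100 possessions.

def estimate_possessions(fga: float, orb: float, tov: float, fta: float) -> float:
    """Common box-score estimate of possessions: FGA - ORB + TOV + 0.44 * FTA."""
    return fga - orb + tov + 0.44 * fta

def net_rating(points_for: float, points_against: float, possessions: float) -> float:
    """Points scored minus points allowed, per 100 possessions."""
    off_rtg = 100 * points_for / possessions
    def_rtg = 100 * points_against / possessions
    return off_rtg - def_rtg

# Placeholder season totals. One shared possession count is used for both ends,
# the usual simplification since a team and its opponents play almost exactly
# the same number of possessions.
poss = estimate_possessions(fga=6500, orb=850, tov=950, fta=1600)
print(f"Net rating: {net_rating(points_for=8800, points_against=7840, possessions=poss):+.1f}")
```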
Neville’s Take:
So just how good are these Thunder? Let me make a prediction. The Thunder will be the first team to go 16-0 in the playoffs. It is very likely they sweep anyone in the West, and then the ECF will have Cleveland and Boston slug it out, the winner there being tired and no match for a rested, healthy Thunder. If you are in Vegas, put some money on that prop bet and send a check my way for Father’s Day. We all need it after the stock market today 🙂
Just finished a great read recommended to me by Michael Palmer. The Frackers largely centers on the American shale oil boom from roughly 2000 until about 2014, when the book was written and published. The cast of characters is nothing less than American heroes: George Mitchell, Harold Hamm, Aubrey McClendon/Tom Ward, Charif Souki. Several of these heroes ended up bankrupt or even dead, and the impact they have had on our way of life is not appreciated by as many people as it should be.
To put it in perspective, the USA (including Alaska) has around 3-4% of the world’s proven oil reserves (around 50,000 million barrels, out of a world supply of 1,500,000 million barrels). However, we are the #1 producer in the world, pumping 15% of the world’s supply (13 million barrels each day, where the world pumps 83 million barrels). In 2005 we pumped only about 5 million barrels per day, with many assuming domestic oil would run out, but the work of these people has pushed us from 5 million to 13 million. As a consequence our gas and electricity bills are less than half of western Europe’s: natural gas in Europe costs $10 per million BTUs, gas in Asia is about $12 per million, and in the USA it is just $2-3 per million. Natural gas (frequently produced alongside oil) is also much cleaner burning than coal. If not for these men we would be burning 2-3x as much coal, polluting the environment, and paying 3x as much for the ability to do so.
George Mitchell, who developed the Woodlands area north of Houston, started commercial development of horizontal drilling and fracking in the 1980s and 1990s. Oil drilling before Mitchell was basically: drill a vertical hole in the earth like a big straw and pump the oil out. Most fields (like Saudi Arabia’s easy oil) are just sitting there in a giant pool. This domestic revolution was shale oil: liquid oil that is there, but trapped inside rock. It takes guts to drill down 2 miles into rock, turn the bore horizontal, drill another 3 miles, then send explosive charges down along with water to blow those rocks apart and recover the oil. You can see how it is much easier to just drill it in the Middle East and pay to import it.
Aubrey McClendon and Tom Ward (via Chesapeake Energy) really supersized the process and embraced debt to expand operations. Aubrey in particular is someone who should be taught about in OKC metro public schools: he brought forth Classen Curve, transformed the city with the Olympic rowing river south of downtown, and helped bring the Thunder to OKC; he really changed the fate of OKC for the better. Sadly, Obama could not have given a flip about any of this, and Obama’s DOJ witch-hunted him because he lived on the edge with debt and largesse. They indicted him with jail time in mind, and it was too much for Aubrey: distracted, he was killed in a car crash 24 hours later. This is after Aubrey made many, many landowners very rich by paying billions of dollars for mineral rights. He employed more landmen than other companies had employees. Shame on our government at that time.
Charif Souki is super fascinating. He actually managed the restaurant in LA at the center of the OJ Simpson / Nicole Brown / Ron Goldman case. He decided to leave LA after that, move to Louisiana, and get involved in oil. Specifically, he saw all the media reports that the USA was running out of oil and decided to build multibillion-dollar import terminals for liquefied natural gas drilled overseas and shipped into America. At that time both the USA and the rest of the world were at about $2-3 per million BTU, and he foresaw a time when the rest of the world would stay at $3 and the USA would go to $10. Well, as it turned out, because of this domestic shale boom and Russia/Ukraine, the USA is at $2-3 and Europe is at $10. He reconfigured his company midstream to go from importing natural gas to exporting it, and now LNG trades at $230 per share (up 20x since 2010).
Anyways, a great read. The author is on X at @GZuckerman. I love a good nonfiction story, and all Oklahomans should know this one.
I had a lightbulb moment today. I am taking a class on neural networks taught by the excellent Dr. G. at Stanford continuing education. Last lecture we talked about a simple neural network identifying an image, say a boat/plane/car/train. The neural net starts blank, and you feed it labeled images of boats/planes/etc. That input changes the weights of the perceptrons (neuron-mimicking structures in a machine). These weights are simple numbers: think 4, 7, 12.5, whatever. The point is simple numbers (weights) only. These perceptrons connect to each other and have an activation function, so a 12.5 from one perceptron is fed to perceptron #2, and the 2nd perceptron may (or may not) fire a number downstream after being fed that 12.5. That’s it. After being trained on numerous boats/planes/cars/trains, if you feed the network a new boat it has not seen before, it is likely to spit out “boat” because this new image fed a 12.6 to the downstream perceptrons: not exactly 12.5, but much closer than a plane or car would produce.
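To make the “just numbers” point concrete, here is a toy sketch of that idea: a few perceptron-like units whose only memory is a handful of plain numbers (weights), each passing a number downstream through an activation function. The feature values and weights below are invented for illustration; a real image classifier has millions of weights and learns them from the labeled examples rather than having them typed in.

```python
import math

def sigmoid(x: float) -> float:
    """Activation function: squashes the weighted sum into a 0-to-1 'how hard do I fire' value."""
    return 1.0 / (1.0 + math.exp(-x))

def perceptron(inputs: list[float], weights: list[float], bias: float) -> float:
    """A perceptron's entire memory is its weights and bias: plain numbers, nothing else."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

# Tiny two-layer network: two hidden perceptrons feed one output perceptron.
image_features = [0.9, 0.2, 0.7]   # stand-in for numbers derived from an image

hidden_1 = perceptron(image_features, weights=[4.0, -2.0, 1.5], bias=0.3)
hidden_2 = perceptron(image_features, weights=[-1.0, 3.0, 2.0], bias=-0.5)
boat_score = perceptron([hidden_1, hidden_2], weights=[2.5, -1.0], bias=0.1)

print(f"'boat' activation: {boat_score:.3f}")  # closer to 1.0 means "looks more like a boat"
```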
The key point to understand from that boat example is that the AI (specifically large language models) does not “store” source materials. There is no hard drive with images of boats that can be pulled up. The network has seen many boats, and that has caused the weights to be what they are. The only memory is those numbers (the weights), not source material such as words or images. That bears repeating: if I have a model like gemma-2-27b that is 50GB large, those 50GB are all model weights, with absolutely no supplemental material.
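You can check this yourself: when you download an open-weights model, weight tensors are essentially all you get. A rough sketch using the Hugging Face transformers library (this assumes you have access to the google/gemma-2-27b checkpoint and a machine with enough memory to load it; the exact byte count depends on the precision you load it in):

```python
import torch
from transformers import AutoModelForCausalLM

# Load the checkpoint; the download is essentially config files plus weight tensors.
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-27b", torch_dtype=torch.bfloat16
)

n_params = sum(p.numel() for p in model.parameters())
n_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f"{n_params / 1e9:.1f}B parameters, ~{n_bytes / 1e9:.0f} GB of weights")

# Roughly 27 billion numbers at 2 bytes each, around 54 GB: that is the whole model.
# No stored document text, no image archive, no lookup table of source material.
```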
Think about your physics test back in college: your teacher allowed you to write anything you wanted, formulas, pictures, whatever, on a 3×5 note card, and as long as you could fit it on that note card you could bring it in during test time. So your brain had the ideas and methods, but you had a note card to remember the exact derivation of final speed based on acceleration and time. What I am trying to say is that the AI language model has no note card. It does not have 50GB of weights plus the text of the Declaration of Independence; it just has 50GB of weights. Sure, it has read (been trained on) the Declaration of Independence, but when I ask Grok/Claude/ChatGPT what the 3rd line of the Declaration of Independence is, it *does not* pull up the internet, read the text, and then tell me the answer. It simply pulls the answer out of those 50GB of weights. (Now, this is not exactly true anymore; Grok and the other LLMs can search the internet and take in the results, but a traditional old-school LLM like gemma-2-27b does not need, and can not use, any internet access whatsoever.)
So from those 50GB of weights (not really that big, about the size of 10 movies) it can think (or predict) its way through the words of the Declaration of Independence. Or the Emancipation Proclamation.
So I asked Ara (the xAI voice assistant) to read me, word for word, the Emancipation Proclamation. It said that, from its 50GB of weights, it could tell me the document is 270 words long and 5 paragraphs, and it could give me the gist of each section, but that it probably could not recite it word for word. I pulled up Lincoln’s handwritten version from the National Archives and read along as I asked Grok to give it to me word for word, or try its best. It nailed EVERY SINGLE WORD! All from the 50GB of weights. I even asked it to tell me about the exceptions Lincoln squeezed in between the lines, where the line spacing is off. This is a very obscure reference. If you do a Google search for ’emancipation proclamation “Norfolk” and “Portsmouth” “line spacing”’ you will not get any results. This is just something you have to read and look at. But Grok, after successfully reading me the whole thing (again from “memory,” aka the 50GB of model weights), correctly told me that the exceptions for Norfolk and Portsmouth were written in between the normal line spacing.
So the lightbulb for me? An LLM is not just smart: it has a photographic memory. It does not need to look up source material on demand; it can pull EXACT COPIES of things straight from its weights. Maybe today that is only 270 words like the Emancipation Proclamation, but tomorrow, everything.