Story Grid

This blog covers more alternative story reading orders added this week to the Story Grid app. For our technical readers, there are notes on various problems of manuscript recovery. Also included this week are Qu Map notes on a major prophetic land purchase in Louisiana by SpaceX. Finally, watch dates, shop work, war status and a headline review.

Story Grid

The link here is to the Story Grid app. The permanent link is off the front page at Paleo.In. In last week's blog I introduced a new set of pop ups which deal with alternative reading orders for stories along the rows and columns of this app. These alternative story reading orders follow the Vine and River pairing systems of the alphabet itself. Sabbath Reads, which have their own app, are alternatives that read down through the columns.

As a theoretical concept, these alternative reading orders appear to be part of how the inspired text avoids the need for tooling like we see in modern Study Bibles. Alternative reads bring stories together in ways that naturally provide interpretive context, without the need for an expert to force some interpretation.

Diagonal Reads

So what has been added to the Story Grid app this week are a series of pop ups that provide alternative story reading orders that work along the diagonals of the overall Story Grid itself.

Because of the way the Story Grid app is constructed, these alternative reads appear within the new set of pop ups that I introduced last week. So what is happening this week is an expansion of the set of alternative reads. This time, the new reads are along the diagonals of the Story Grid itself.

These new diagonal reads are the most speculative set of otherwise rigorous reading orders that are possible given the basic geometry of the main grid itself. These have been added in order to study how useful diagonal reads might be. Remember, of course, that as of this week the entire grid itself has not yet been fully determined. So this is an ongoing study.

The purpose of diagonal reads is to take lists of stories found along the diagonals of the Story Grid app. Negative diagonals go down when moving to the right across the table. Positive diagonals go up when moving to the right across the table.

The 2 most interesting diagonals are the center diagonals. The center negative diagonal moves from the top left to the bottom right. The center positive diagonal moves from the bottom left to the top right. Neither of these diagonals has any wrapping.

Once off the center diagonals, the other diagonals all wrap the table in some way. Conceptually, these diagonal wraps are done as though the Story Grid was rolled. You can study the story numbers in the various reading order lists in the app to see the specifics.
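The rolling idea can be sketched in a few lines of JavaScript. This is my own illustration only, assuming simple modulo wrapping; the function names and the exact wrapping rule in the app's actual code may differ.

```javascript
// Sketch of listing cell coordinates along wrapped diagonals of an
// N x N grid. Illustrative only, not the Story Grid app's code.

// Negative diagonal: row grows as column grows (down-right).
// Offset 0 is the center diagonal, which never wraps.
function negativeDiagonal(n, offset) {
  const cells = [];
  for (let col = 0; col < n; col++) {
    // Wrap the row as though the grid were rolled into a tube.
    cells.push([(col + offset + n) % n, col]);
  }
  return cells;
}

// Positive diagonal: row shrinks as column grows (up-right).
// Offset n - 1 is the center diagonal, bottom left to top right.
function positiveDiagonal(n, offset) {
  const cells = [];
  for (let col = 0; col < n; col++) {
    cells.push([(offset - col + 2 * n) % n, col]);
  }
  return cells;
}

// The center negative diagonal runs top left to bottom right, no wrap.
console.log(negativeDiagonal(25, 0).every(([r, c]) => r === c)); // true
```

Off-center offsets produce the wrapped reads; the app's reading order lists show the specific story numbers for each.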

Finding the Diagonal Data

Ryan went through several rounds of experiments this past week in order to find a nice place to put the launch buttons for finding diagonal read data.

In the end, the links to this function are tucked into the Column and Row reading order pop ups already in the app. The pop ups along the top header now include a link for opening negative diagonal related data. The pop ups in the left headers now include links for the positive diagonal related data.

I would recommend playing with the app to see how this works.

Ryan is perhaps the most important user of these various alternative reads. He is using these to help establish the correct set of inspired stories and their correct placement. The center of the Story Grid is perhaps the most difficult area to solve because it is less constrained than the edges.

History

This app has been undergoing changes in the shape of the grid for many years. Originally, we had no idea how many stories there might be, nor what the arrangement of stories might be. While we were working out what the shape might be, that app included alternative grid layouts in order to study the question better.

About a year and a half ago we settled on the 25x25 grid size that you see now in that app. This overall size is remarkably compelling because it causes the alphabet to inform the total number of stories. It allows 3D related design features of the alphabet to inform what is going on across the table. This informs both rows and columns of the table, which is very nice.

At this point we are attempting to find all the ways this story grid might be useful to readers of the inspired text. Alternative reading orders along the major axis of this table appear to be quite useful.

Use of the new diagonal reading orders will ultimately determine if these are also useful to readers of the Testimony. It will become clearer once the final story placement is settled.

Paleo Bible Work

This past week I finally started turning my attention to the manuscript side of the upcoming Paleo Bible app. I am still not sure when this will go live. The basic recovery workflow still needs to be worked out in detail. That workflow is where my attention is currently focused. For the record, I want to share some of what I am learning.

Let me start by saying this is a problem that has been in front of us for a long time. I have started down various parts of the recovery work several times over the past 17 years. This week I have been reviewing our vault of old git projects in order to understand failed attempts at various parts of this problem.

Perhaps the first step we ever took at recovery was working out the Paleo Keyboard layout and related configuration files that we use whenever we need to type Paleo letters. A week ago, when setting up that keyboard on our new PopOS 24.04, I noted those files are dated from 2009. So this has been a long term project.

There are various phases to the recovery work. Let me list those here. Some of these have been tried various times. Some are new.

Inbound Manuscripts

The world of Bible manuscripts is a haphazard mess. Ultimately the texts that matter to us were published in the 1800s. Bible manuscripts must be found and converted into a format that we can use. The NT and OT texts are usually in different sources and normally not in the same file formats.

So that we don't go crazy, we have our own custom file format for Bible manuscripts.

At some point years ago we found 1 particular downloadable Bible with a very dense style of markup that we particularly liked. We redefined that format for our own use, but are still keeping the general dense markup approach.

We occasionally add new markup to track new needs from Ryan's ongoing work. Our build environment is now highly configurable and generally knows how to convert that format to HTML for use on web pages. Most new markup just needs HTML fragments added into a specific configuration file in order to know what HTML to use for any given new Bible markup feature.
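The configuration idea can be sketched as a map from markup features to HTML fragments, so that new markup only needs a new entry. Everything below, feature codes and fragment shapes included, is invented for illustration; the real build environment's config file differs.

```javascript
// Invented example of mapping markup features to HTML fragments.
// The real configuration file and markup codes are different.
const fragments = {
  em:   (text) => `<em>${text}</em>`,
  note: (text) => `<span class="note">${text}</span>`,
};

// Render one markup feature; unknown codes pass text through untouched.
function render(code, text) {
  const frag = fragments[code];
  return frag ? frag(text) : text;
}

console.log(render('note', 'a scribal note')); // <span class="note">a scribal note</span>
```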

All of our inbound Bible related files start with conversion into this format. Texts which need advanced formatting features, like the BRB, can then be edited as needed. All of this inbound conversion work is essentially done. We only drop to the format conversion level when we find a new manuscript.

That file format has suited us well for many years. But, this week, I finally gave up on that format for audit work. I have started experimenting with an alternative.

Audit and recovery work needs to view the inbound text as a simple list of words. That is all. Chapter and word breaks, as found in all conventional markup, just get in the way. Let me explain.

Tool Chain

Our inbound files are still in our conventional format. Our outbound files must also be in that format. This format gives us access to a huge range of formatting features that are possible in the future Paleo Bible app itself.

But, between the inbound text and the outbound text we need a simple word list format that is easy for code to scan and change as needed through the various steps in the recovery process.

Rethinking File Formats

For a long time I have only used Javascript as a programming language and .json files for data storage. Our apps run in web browsers and this is the natural language for that environment. Our new build environment is in that same language, running on nodejs for those who may care. Passing data around the build steps is easy. Passing data from the build environment to the runtime is also very easy.

In some of the tooling I recently rebuilt for Ryan I started using .json equivalent .js files. This is basically a commented and editable form of .json data but stored in a .js file. Code using these files sees the inbound file as though it were .json data. Since it is a .js file, it is easily handled by text editors which handle syntax highlighting.

Importantly, .js files support comments. So the raw data can be decorated with whatever is needed for human inspection and, when needed, human editing. .js files also generate useful error messages if they are hand edited and a mistake is made.

These files are also easily saved under revision control, which is good. This general format is basically a simple hierarchical database file format, without the cruft and intellectual overhead of an actual database manager, which is also very nice.

These files are capable of containing code. But, as a convention, we don't put code into these special files. For reference, the Story Grid app's map of stories is built around this format. Ryan regularly uses tools in his text editor to change the layout of the story grid itself. This is not hard. It is easy to understand. Though Ryan is editing .js files, he does not need to know how to code.

Bible Books Into Javascript

So I did some tests this week on extracting raw word lists for the 92 books of our canon into this strange .json equivalent .js file format. This converts the Bible text, with its complex chapter/verse/word addressing, into a single long word list. This is the format needed for creating an English interlinear, then for audit, and then for recovery.

This format is fully commented with address data, so a human can open the file and know exactly where in the Bible any given word is actually located. But, as far as software is concerned, this starts as a simple long list of words.

This format saves all addressing in a separate place in order to know how to restore those word lists back into the highly formatted version for display in the Paleo Bible app at the end of the audit process.

But what matters most, what I saw this week, is how easy it is to load and use Paleo word lists in this format. With only 1 simple load, running at the parser speed of nodejs, the entire Bible becomes immediately available in memory. This is as word lists, ready for whatever task may then be needed.
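Here is what one of these word list files might look like, in miniature. The words, field names, and layout are placeholders I invented; the real format carries real address data for every word.

```javascript
// Invented miniature of the word-list format: a flat word list with
// the addressing kept separately, so code can treat the text as one
// long array of words while humans still see the addresses.
const wordList = {
  words: ['word-1', 'word-2', 'word-3'], // placeholder Paleo words
  addresses: [
    { book: 'Genesis', chapter: 1, verse: 1, word: 1 },
    { book: 'Genesis', chapter: 1, verse: 1, word: 2 },
    { book: 'Genesis', chapter: 1, verse: 1, word: 3 },
  ],
};

// Code works on the flat list; the addresses allow the corrected
// words to be restored to their formatted outbound files later.
console.log(wordList.words.length === wordList.addresses.length); // true
```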

The final step in this new tool chain is to insert the corrected word lists back into the files that are formatted with all the markup used for presenting Bibles on screen in the apps. The final Paleo Bible app is then technically able to display the same rich coloring and annotation as found in the BRB.

In playing with files in this format this week I saw more changes that are needed. The system needs to track sub-words in order for inbound Paleo words to become more than 1 word on the outbound side. There is another question about tracking punctuation as a type of sub-word. More testing is needed for this.

I still need to write the code that inserts these word lists back into the outbound files. This does not look all that hard to do. More on this in future blogs.

With simple word lists available for quick loading, it was also possible to play with that format. I wrote some simple apps that can summarize the inbound word lists themselves. Here are some of the data I found this week.

Manuscript Stats

We have a 92 book canon that forms the base text for the Paleo Bible. These books include the 66 books of the standard Protestant canon. Then we add many more books. Basically everything we could find in Aramaic that was included in someone's Bible. It also had to be printed in the 1800s and must now be out of copyright.

This working canon includes Intertestamental works, Additions to Daniel, the Maccabean Revolt, more for Ezra, Manasseh and Psalms. Finally there are a group of books from Early Christianity.

This is the largest set of books we have looked at in terms of recovery. For most of these books we have at least one English to Aramaic interlinear. So we have the resources we need to figure out the English translation for most of the vocabulary words used in the text as passed down by history.

So here are some basic stats. Across those 92 books are 536,014 individual inbound Paleo words. I am using a Layer 3 definition, so units of Aramaic grammar. Out of this inbound string of words are 50,907 unique words. These 50,907 words are the inbound data for the lexicon related tooling I am currently planning.

For comparison, a full KJV Bible has a little under 15,000 unique words. Of course our canon is a little bigger, so this is not a perfect comparison. But the difference between 15K unique words in the KJV vs. 51K in raw Paleo is mostly the grammatical grouping in Aramaic as compared to English.

We know from the Shepherd's staff work that prefix letters in perfect Paleo always become stand alone single letter words. So this 51K count of unique words will go way down once we have worked out the Lexicon data.

Planning

I have been reviewing the code from past attempts at cracking this riddle to see what, if anything, I could salvage. All I really learned from this review is what not to try again.

It is clear that in past attempts I was mostly worried about getting this problem solved quickly. This was because for years we were running out of money, so time mattered. But "quickly" is not a way to also learn a recovered language. Working quickly is sometimes called the "tyranny of the urgent." It is a trap.

Now that we ran out of money, we find we are still here. Still working. So running out of money didn't really matter.

So now, we are most interested in getting the recovery problem solved as ACCURATELY as possible. We don't really care how long it takes. Ultimately, we still want to be as efficient as possible. If we can stay efficient, then the time it takes is not a real concern.

So, I am being slow when thinking about this code to make sure I have a good recovery plan.

Magic Stats

To crack each of those 536,014 inbound words requires an entry in a lexicon. That lexicon needs to explain what to do with each of 50,907 unique words.

I ran another stat. There are 1379 unique words that occur at least 50 times in the inbound text. Of those, Mo-Ne, the standalone word for "from," is the most common. Mo-Ne has 11,521 copies in the inbound manuscript. Next up is Oo-Lu. This is Paleo for "on" or "upon." This word has 6,234 copies in the inbound manuscript. See the magic?

If we started by loading a new lexicon with those 1379 most common unique words, then the lexicon could explain what to do with 332,901 of the words from that inbound stream of 536,014 words.

For 1379/50,907 or 2.7% of the work, the lexicon will cover 332,901/536,014 = 62.1% of the inbound text. Magic indeed. I have never been this clever before.
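The arithmetic behind that 62.1% is a single frequency pass over the flat word list. The function below is my own illustration of that pass, not the actual stats code.

```javascript
// Sketch of the coverage arithmetic: count word frequencies, then see
// what fraction of the running text is covered by the words occurring
// at least minCount times. Illustrative only.
function coverage(words, minCount = 50) {
  const freq = new Map();
  for (const w of words) freq.set(w, (freq.get(w) || 0) + 1);
  let covered = 0;
  let common = 0;
  for (const n of freq.values()) {
    if (n >= minCount) { common++; covered += n; }
  }
  return { unique: freq.size, common, covered, total: words.length };
}

// Tiny made-up example. On the real data the same pass reports
// 50,907 unique words, 1379 of them occurring 50+ times, together
// covering 332,901 of the 536,014 inbound words.
const demo = coverage(['a', 'a', 'a', 'b', 'c'], 3);
console.log(demo); // { unique: 3, common: 1, covered: 3, total: 5 }
```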

More Magic

Looking more carefully at this problem I could see a few other things going on. Let me start with Abraham. 146 times it is just the stand alone name, Wa-Ba-Re-Fe-Mo, as we expect. 71 times it is a Du then Abraham, so in English usually "of Abraham." 67 times it is Lu then Abraham, so "to Abraham" in English. The other forms are "In Abraham," "And Abraham," "And To Abraham" and then a strange case from the Prayer of Manasseh 8:2 where we find "Who To Abraham."

So 3 of these Abraham entries occur over 50 times. These would be caught in our net of all words occurring at least 50 times.

But by adding all 7 entries for Abraham into the lexicon at the same time, we would fully handle all occurrences of Abraham. Not hard. A text editor with reasonable file formats makes this quick and easy.
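Such a group of entries might look like the sketch below. Only the Du ("of") and Lu ("to") prefixes are named above; the entry shape and key spellings are my own guesses, not the real lexicon format.

```javascript
// Invented sketch of lexicon entries for the prefixed forms of one
// name. The entry shape and key spellings are hypothetical.
const lexicon = {
  'Abraham':    { english: 'Abraham' },
  'Du+Abraham': { english: 'of Abraham', split: ['Du', 'Abraham'] },
  'Lu+Abraham': { english: 'to Abraham', split: ['Lu', 'Abraham'] },
  // ...the remaining 4 forms would be added the same way
};

console.log(Object.keys(lexicon).length); // 3
```

Because the entries are plain data, a text editor can add the whole family of forms for a name in one quick edit.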

Let me take the name Joshua. This name has 15 different forms in the inbound string of words. So by hitting 15 entries, the name Joshua is covered across the entire manuscript. 1033 times we find stand alone Joshua. 140 times we find "Of Joshua." 116 times we find "To Joshua." 41 times "In Joshua." 37 times "And Joshua." 14 times "Who In Joshua." (These start at Romans 8:2 and may be in a question.) The other entries are minor, but still very easy to enter into a lexicon using a text editor.

In a few of those forms the name Joshua is buried as part of a more complex name. Joshua is buried in "Abishua" in 1 Chronicles 4-5. (This is a contraction of Father and Joshua. Chronicles is not likely inspired.)

Joshua is also buried in "Melchi Shua." See 1 Samuel 14:49 and 1 Samuel 21:2 for examples. This comes in as a single word, without a word break. So the English translation is hiding how that name is a contraction of "King" and "Joshua."

If Ryan and/or I were actually working these problems, this strange word would cause a stop for conversation.

Should the interlinear in the Paleo Bible read this as the contraction "King-Joshua" in English? Anyone who could read the Paleo would easily see this name as spelled exactly this way. Should we let English readers see it this way too?

There is much work ahead which is full of discovery and trouble. We are about to see through centuries of fraud by scribes who were trying to hide details of the text from generations of readers. They wanted a secret keeping priesthood. They did not want real readers.

Whatever the correct handling of these other forms of "Joshua" might be, our lexicon can be easily updated later if we had a better understanding of how to handle the specific details of each case. We are setting up for an iterative process that gets better with time.

In any case, with 1 quick edit, easy to add into the Lexicon, all of the references to Joshua throughout the inbound text are covered. 1404 inbound words are handled. About 1/2 million still to go.

Shifting Strategy: Problems In The Past

In the past I thought the entire list of unique words needed to be handled like I've described above. As I dove into these lists I got bogged down. I thought I had a technical, tooling related problem. So I changed the tooling, changed file formats, started new projects and tried again.

Looking back on this problem now, I can see I always ground to a halt because the language gets complicated and needs attention from the perspective of the actual words used in actual sentences.

This is especially so with vocabulary words that are rare across the text and that also show up with rare patterns of grammar. For many words it is almost impossible to figure out what to do without also looking at the word in a sentence that is mostly already solved.

Human readers of historical Aramaic must have had a certain skill in this area that they picked up when they were learning to read. Somehow, we must move through that same learning process in order to solve this general problem of finding the inspired text.

So this word list view of the problem is not the right way to think about this problem, especially all the way to the bottom. It only comfortably reaches about 60% coverage, as I explained above.

After this 60% level of coverage, the strategy needs to shift. After all the common, often small, words are entered, then the problem shifts to passages and finding the correct spelling of words. So far, we still don't know how to correctly spell anything. All we have done so far is split off grammar.

Shifting Strategy: Solving Passages

Correct outbound word spellings, which is also a function of the lexicon, cannot be known until after words pass audit. Taken alone, many words with added vowels still pass audit, even if added vowels are not inspired. The correct and inspired spelling must be proven by testing of words done inside longer inspired passages.

So once the lexicon has been primed with the very common inbound words, the strategy needs to shift. The next goal is to start getting all the words in certain passages entered into the Lexicon. This allows the audit code to start working on determining inspired spelling and inspired grammar.

Not all passages are created equal. Some parts of the text are better for solving this problem. These are the same passages that now look to be the intended places where young students learn to read.

These natural starting places are Proverbs and Psalms. These appear intended as places for learning the language. These have simple uses of grammar. They have simple vocabulary words. These places look ideal to start attempting sentence level, or story level, audit and recovery. By starting here, we go through the same language learning process for Paleo as native historical Aramaic speakers went through when they learned to read.

So the lexicon loading process shifts from common words, to the words found in Proverbs and Psalms. As all the words from those stories get filled in, the automated audit process can then start to work against long strings of words.

Some inspired sentences in Proverbs and Psalms should now start to pass audit. So the words in those sentences now have known good spelling. Those passages now show inspired grammar.

Reaching Break In

At this point, we reach "break in." We can now start teaching the lexicon the correct outbound spelling of inbound words. Once we reach this point the lexicon now knows how to spell words. The audit code need no longer test all combinations of letters.

Once break in is reached, the system will start an iterative cycle. At first only a few words in the lexicon have known good spelling. Each time another passage passes audit, then additional words are found to pass audit and the lexicon gets updated. That starts slowly but will get faster and faster as more lexicon words have correct spelling.
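The iterative cycle can be sketched as a loop, with every name below invented. Each pass audits passages against the lexicon; newly proven spellings are harvested back into the lexicon, and the loop repeats until a full pass adds nothing new.

```javascript
// Sketch of the break-in cycle. All names are hypothetical; the real
// audit code is still being planned.
function breakIn(passages, lexicon, auditPassage) {
  let progress = true;
  while (progress) {
    progress = false;
    for (const passage of passages) {
      const proven = auditPassage(passage, lexicon); // null if audit fails
      if (!proven) continue;
      for (const [word, spelling] of proven) {
        if (!lexicon.has(word)) {
          lexicon.set(word, spelling); // harvest the proven spelling
          progress = true;
        }
      }
    }
  }
  return lexicon;
}

// Stub audit: a passage "passes" when at most 1 of its words is
// unknown, and then all its words get a (fake) proven spelling.
function stubAudit(passage, lexicon) {
  const unknown = passage.filter((w) => !lexicon.has(w));
  if (unknown.length > 1) return null;
  return passage.map((w) => [w, w.toUpperCase()]);
}

// One seed word unlocks the first passage, which unlocks the second.
const lex = breakIn([['a', 'b'], ['b', 'c']], new Map([['a', 'A']]), stubAudit);
console.log([...lex.keys()]); // [ 'a', 'b', 'c' ]
```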

Passages with uncommon words and/or complex grammar can be skipped and handled later in this process. So once Psalms and Proverbs are mostly covered, then other areas across the text can be worked. Interesting initial passages are still passages with simple grammar. Genealogies, for example, will also make good starting points. Stories that we know must be inspired are also good starting points.

Passages that are almost certainly NOT inspired can be avoided until late in this work. 1 Chronicles, with long lists of strange names, is an example of a place to avoid. This is also why entering all names into the lexicon is not a good way to start.

More Planning

I still need to plan out the code for the audit side of this work. This is deeper into the recovery pipeline. Some divine names, for example, have been changed. Those go into the lexicon as well, but indicate to the system what alternative whole words need to be tested.

This is a tool chain problem that still needs to be worked out. What are the exact files? How are they formatted? What does the code for each step actually do? I need to prototype this process all the way to the bottom.

The audit process will also add audit marks to all the running letter strings that show up in the Paleo Bible app. This pass will determine what might actually be inspired. It will provide the means for anyone to inspect the work.

All this lexicon and audit work happens before the inbound word list is then combined back into the standard Bible format we use around here. I intend this to be an insertion into yet another set of files. These files will not be created from scratch with each iteration in order to preserve overall formatting.

Ultimately, the resulting text can then be carefully formatted while the audit work is still determining correct spelling and inspiration.

Theoretically, the Paleo Bible can be formatted as detailed as the BRB, should we have the need. That work can be done in parallel to the other work.

I still need to convince myself I have a working plan for that side of the recovery process. More prototyping to come. More on this problem in future weeks. I do, though, finally have a plan.

Qu Map App

The link here is to our Qu Map app. The permanent link is off the front of Paleo.In. This app shows how the letters of the Paleo Alphabet map to geographic locations. We first saw this at Disneyland in California, but later saw how it maps out to other places in the world. Perhaps the most interesting is across 25 major places in the USA.

Elon Musk has been filling in a couple places on that USA version of the Qu Map. First, he built his main AI data center in Memphis, TN, which is the center of the Qu Map itself. The Qu letter appears to be a brain. The center of that map is Memphis. So an artificial brain at the middle is a reasonable match.

I was working on the computer this week while a live stream of a SpaceX wet dress rehearsal was playing in the background. There was a comment made on that stream of a purchase, or maybe pending purchase, of property in Louisiana by SpaceX. This was not all that interesting until someone said SpaceX was looking to purchase 200+ SQUARE MILES of land from ExxonMobil.

This was reported to be more land than Kennedy Space Center in Florida, which SpaceX shares with many other launch providers. So this would be a huge base for launching rockets. Especially interesting would be rockets launched to polar orbits used for AI data centers.

That wet dress rehearsal went fine. They are expected to make a full launch attempt the Tuesday after this blog goes out. But I wanted to learn more about this land purchase because it would match another part of the Qu Map. Musk already matches the Sa in Texas and the Ba in Florida. So is he about to match the Dot in Louisiana?

Pecan Island

The link here is to a post on X where S.E. Robinson takes his drone and shows the location of potentially future SpaceX property for a launch facility in Louisiana. Robinson already flies over Tesla's factory in Austin.

In this post, Robinson gives more specific details. This is 212 square miles of land, 136,000 acres. In this flight he shows the locks on the Intracoastal Waterway that you can then find on Google Maps if you want to see more details. This land is currently undeveloped. It was probably oil well territory in the past. There is a small town to the west of where Robinson took his drone.

This is south of I-10 and west of New Orleans. It would have easy access by sea from the current rocket factory in Texas.

Qu Map Interpretation

You can explore the Qu Map app and find New Orleans on the national map for the USA. The king will turn out to be Enoch, the first walk-off. He was taken to heaven because Joshua was pleased with him. Going to heaven is part of his written story. This is VERY rare in the text of scripture, so it is a nearly unique feature to Enoch and this location on the Qu Map.

Within New Orleans itself are the main factories that in the past produced the Saturn V rockets for the moon program. They also produced parts for the Shuttle program and now produce parts for the Artemis program. These are all markers on the ground for Enoch.

Shifting the region to producing parts for SpaceX would make canceling the Artemis program easier to do politically. It would provide prophetic continuity for the region from the past to the future.

The Dot letter is seed. New Orleans is a major exporter of American grain, grown up river from that port. You can think of the space program as sowing seed into space. The Starlink constellation is like seeds flying in space.

All of this is interesting because a huge SpaceX facility in this same area would be another prophetic match to that general geographic location.

A final curiosity is that Judah, so Russia, is the tribe that would normally go with this location. It was on a failed trip to Russia that Musk decided he needed to make his own rockets. Curious indeed. Maybe once World War III is over, SpaceX should be making rockets in Russia too.

Watch Dates

The next dates we are watching are 2026-05-19 for the replay of the last year of Jehoshaphat. Then 2026-05-20 for the replay of the first year of Jehoram's reign. These are the Tuesday and Wednesday after this blog goes out.

We might see Trump restart war with Iran. Noah's story does not demand this, but a continuation of the war might land on some other timeline. Also watch Ukraine. This is the other active venue for this new World War.

As always, I am speculating as to where to look. Real headlines can come from anywhere.

Shop Work

This week we replaced Ryan's laptop. The machine already had some hardware issues with the keyboard. With an external monitor and keyboard on his desk, these were not issues before. They became serious when updating to PopOS 24.04. The fix was to just replace this nearly 6 year old machine.

This was an almost identical replacement to my own laptop replaced about a year ago. Prices have gone up considerably for a nearly identical machine.

Serious price increases are something I don't think have happened in my entire adult lifetime of buying computers. Moore's law appears over, at least for Americans buying imported computers.

Ryan is now doing a fresh install of PopOS 24.04. This is not particularly difficult, but the build environment takes some setup for a local copy of the web server.

I will follow Ryan with my own laptop PopOS 24.04 update here soon. One problem at a time.

Iran War Status

This week was relatively quiet in the Iran war because Trump was off to China to visit Xi. Trump seems to have paused the war in order to not get flack from the Chinese during this trip to Beijing. Trump was also waiting out the 60 day War Powers window. He can more easily restart the war after returning to DC this coming weekend.

There is also growing realization that Trump appears to have some sort of mental disorder. He is widely known to be unable to read status reports. He now also appears able to only track 1 or 2 points at a time. So the Iranian list of items for ending the war is beyond Trump's mental capacity to understand.

Ultimately, Trump's mental incapacity appears as convenient cover for the fact he is a vassal. He does not need to know what is going on. He only needs to read scripts and do a little ad-lib in front of cameras.

On the Iranian side, nightly rallies in support of the Iranian government continued. The Iranians continued to dig out their weapons tunnels. Most everyone on the podcast circuit expects the war to resume once Trump returns to DC. Though nobody appears to want to say so very loudly.

The oil supply shock continues to roll out around the world. "Tank Bottom" is a term rarely voiced in the fuels industry because there are rarely supply shocks strong enough to empty all storage tanks. This time is different.

"Tank Bottom" is expected in the USA around July 4, 2026. Different areas in the USA will have different Tank Bottom dates. July 4 is a consensus date for the USA as a whole. After that point, there will only be oil reaching American refineries as it is pumped from the ground. Oil prices must go much higher at that point in order to shed demand.

Many refineries will not be able to operate continuously after that date. Also worth noting there have been a series of strange explosions at refineries going on around the world. This may be a reporting bias because everyone in social media is watching. In any case, many think this looks planned.

Effects of this oil shock are hard to anticipate because nothing like it has happened in the modern era. The last time something similar happened in the USA was in 1973. This time is different. Even if Trump returned from China and caved to all of Iran's demands, it would be many weeks before fuels started flowing as before.

This looks to be the main fulfillment of the Noah Flood story. That story has 40 days of rain, but then rising waters continued to destroy the world. So the 40 days of war against Iran, even though they stopped shooting, continues to destroy much of the world.

Headline Review

The following caught my attention this week.

Victory Day Parade

The link here is to a post on X by Stanislav Krapivnik. It gives details on the Victory Day parade in Moscow on May 9, 2026. This was the 81st anniversary of the victory over Nazi Germany by the Soviet Union.

There was serious trouble with Ukraine threatening to bomb the parade. This did not happen, but it was a serious risk. Stanislav posted extensively that day about that war's meaning and its impact on modern Russia. We are headed into a replay of that same war, so it is important to understand.

Stalin's Victory Speech

The link here is to a post on Telegram with a copy of Stalin's victory speech at the end of World War II. Another reminder of the World War replay expected as part of the replay of Resurrection Sunday.

Detailed Oil Status

The link here is to a post on X that details the number of days of oil supply remaining, listed by major country. It also lists any major actions by these governments to constrain use or help retail buyers. Many countries in Asia have perhaps another 2 weeks before oil is gone.
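"Days of supply" figures like the ones in that post come from a simple formula: stored inventory divided by the net draw rate (consumption minus whatever is still arriving from production and imports). Here is a minimal sketch of that calculation; the numbers in the example are illustrative assumptions, not figures from the post:

```python
def days_of_supply(inventory_barrels, daily_consumption, daily_supply):
    """Days until storage is empty at the current net draw rate.

    inventory_barrels: barrels currently in storage
    daily_consumption: barrels consumed per day
    daily_supply: barrels still arriving per day (production + imports)
    """
    net_draw = daily_consumption - daily_supply
    if net_draw <= 0:
        return float("inf")  # stocks are not draining
    return inventory_barrels / net_draw

# Illustrative only: 400M barrels stored, 20M/day consumed, 12M/day still arriving
print(days_of_supply(400e6, 20e6, 12e6))  # 50.0 days
```

The formula also shows why "Tank Bottom" dates differ by region: each area has its own storage, consumption, and remaining inflow.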

Aramco CEO Comments

This link is to another post on Telegram citing the Saudi Aramco CEO Amin Nasser. He indicates 100 million barrels of oil each week are currently blocked from the markets. He stated demand would rebound if normal flows were to resume.

Imagine how many people are without fuel because of that shutoff. How many will die of famine because of this?

Timing to Recovery

The link here is to another post on X detailing the time needed to refill the oil delivery network. Even if the Strait of Hormuz opened immediately, it could not fully recover before 2027.

Demolitions in Silwan

The link here is to a Cradle Media post on Telegram detailing Israeli destruction of Palestinian neighborhoods in and near Jerusalem. 20,000 houses are said to be under demolition orders. This includes the "City of David." See 2 Samuel 5:7. This is the namesake and heart of Zion, so we would expect the Jews to want to recover this land.

This activity looks to include clearing of the above ground areas south of the walled city of Jerusalem. This is the route of the currently underground tunnels that connect to the pool of Siloam at the bottom of the hill. See John 9:7.

Videos I have previously posted here in the blog showed Israeli officials as not intending to disrupt the above ground areas near those tunnels. This post suggests those claims were a lie. The intent appears to be uncovering the entire area, removing all current above ground housing. Who knows what they might find buried below those structures.

That post indicates these will become "biblical-themed tourist parks." The Israelis probably think of this as preparing pilgrimage locations. What might Joshua think about this? Probably Zacchaeus.

FFmpeg

The link here is to a YouTube video on the Lex Fridman podcast. This is a long episode where Lex covers the key people and history of FFmpeg.

This is perhaps one of the most important, most widespread, and most difficult software libraries ever written.

Every reader of this blog has almost certainly used this code. This is ultimately a story about people with passion who want to change the world. Even when there is no money to make it happen. Very inspiring.

BRICS FM Meeting in India

While Trump was in China, the BRICS Foreign Ministers were meeting in India. That BRICS meeting may have been more important. Iran is gaining stature on the world stage and BRICS is one of the venues where this is happening. The video is a clip of the Iran FM at that meeting. This is just for the record to note that meeting.

Changing War in Ukraine

The link here is to a Daniel Davis / Deep Dive video on YouTube. Scott Ritter is the guest. They discuss Russia's changing war strategy in response to Europe going to war against Russia. The Russians appear to be preparing conventional weapons strikes against European targets to attempt to stop the European war against Russia in Ukraine.

As we are watching for a replay of World War II, this is basically more evidence that such a replay will happen and that preparations are now underway to get it started. As Ritter suggests, the logic in Russia is pretty simple: since the Nazis are now fighting us again in Ukraine, we need to start taking the war home to Germany and the rest of northern Europe. This will likely start with conventional weapons strikes on military and weapons manufacturing targets in Germany. Europe running out of fuel, expected within a matter of weeks, may slow down their growing Nazism.

Musk's All In Podcast

The link here is to a post on X by Elon Musk. Musk is making huge bets on AI and AI data centers in space. This All In Podcast covers much of the current business case for this. Doing this work in space is far better than on the ground. SpaceX could become one of the most valuable companies ever created.

Trump in China Summary

The link here is to a post on Telegram with a general summary of Trump's trip to China. Trump was there to mostly work business deals. Xi wanted a more substantive meeting, which did not happen. (The USA is run by a criminal gang. China is a civilization.)

Musk in China

The link here is to a post on Telegram with a video of Elon Musk in the group that visited China with Trump this week. He is our favorite billionaire for a reason. He pulls out his cell phone and does a 360 video. His peers are worried about decorum. Very nice.

Elon's Son in China

Elon brought his son X along for the ride to China. There were some fun photos of that father and son team. This link gives some commentary on Chinese interpretation of those photos of Elon with his son. It speaks of Elon building a family dynasty, which is important in Chinese business relationships.

Tesla Semi Production Begins

This link is to a YouTube video by the Electric Viking detailing how Tesla is beginning full-scale production of electric Semis. Full capacity of the factory is 50,000 per year. It will take time to ramp to that number. Customers are already putting in large orders. Congratulations to the rightfully happy Tesla crew, who have pulled this off.

50,000 sounds like a big number, but there are just under 50,000 miles of Interstate freeways in the USA. So 1 new Semi per mile of freeway per year is all they can produce.

Ryan and I have driven I-40 across the country. New Mexico is particularly beautiful, with rolling hills where it is possible to see great distances along that road.

We've seen endless streams of nearly bumper-to-bumper Semis running from horizon to horizon on that freeway. (With cars relegated to the left lane.) It will be many YEARS of production before electric Semis could ever begin to replace the current fleet of Semis.
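As a back-of-envelope check on "many YEARS," here is a minimal sketch. The fleet size below is an assumed round number for illustration, not a figure from the post:

```python
# Back-of-envelope: years to replace the current US semi fleet at
# Tesla's stated full factory capacity of 50,000 units per year.
fleet_size = 2_000_000        # assumed Class 8 tractors in service (illustrative)
production_per_year = 50_000  # full factory capacity, per the post

years_to_replace = fleet_size / production_per_year
print(years_to_replace)  # 40.0
```

Even under that rough assumption, and ignoring fleet growth and retirements, full replacement takes decades.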

More Later,

Phil