AI

This blog reviews recent strange headlines involving OpenAI. The discovery of how to build artificial brains may be another headline tied to our era in history, so AI is generally interesting. But rumors are flying about a change in what AI systems can do. It is said to have happened in mid October, 2023, and it adds to our growing collection of important headlines from that time.

Hunt For Headlines

As I have been hammering on in recent blogs, our era involves a prophetic replay of the Resurrection of Joshua. That event was perhaps the centerpiece of the stories in the inspired text.

How this might express itself on the ground is unclear. But centerpiece stories in the text should be matched by dramatic shifts on earth too.

AI is perhaps the pinnacle invention, based as it is on the invention of the transistor. That discovery is variously credited. Wikipedia would point to the patents from Bell Labs after World War II. Alternatively, transistors were developed by a Mormon out of Utah in the late 1930s. Take your pick.

In either case we can think of transistors as a World War II era invention. The first electronic computers were built to solve encryption problems encountered during that war. So computers should have a post World War II development arc that culminates around now. By this I mean late 2023, give or take a couple of years.

Vacuum tube logic gates themselves go back to the invention of the electron vacuum tube in 1904. But solid state transistors allowed serious miniaturization and a serious reduction in power usage. Billions and billions can now fit, say, in your phone. Now imagine a football field stacked several feet high with phones. That is the sort of system used to train big AI.

In any case, AI uses an astounding number of transistors and may be their pinnacle application. AI stands a good chance of significantly upending our world, especially for anyone in information related industries. The white collar, information worker jobs that have been the ideal jobs of the post World War II era are now under serious threat.

So now imagine a cartoon of a light bulb over someone's head. It usually means someone has had an insight. A light has come on. In centuries past the same cartoon was drawn with a flame over someone's head. Now transfer that image to the light shining at the resurrection. That is how AI might map to the resurrection.

So with that limited bit of inspiration, we start with a question: what is AI, exactly?

A Busy Person's Intro (youtube.com)

The video linked here is a 1 hour introduction to AI as it stands in the fall of 2023. It is a talk given by Andrej Karpathy, himself a man to track. He worked for Elon Musk on the self driving team at Tesla and now works at OpenAI. For tech readers here, he has an interesting YouTube channel that normally goes deep into coding details.

Though it is hard to get away from geek-speak in these talks, Karpathy manages it in a talk directed at managers, so they can understand the promises, pitfalls, and other strange details of the current practice of AI.

If you have an hour to spend, this is not a bad place to start. Most readers here should be able to track most of what he has to say.

Let me review some of his key points. The first idea is that the hot subset of AI these days is called "Large Language Models," or just LLMs for short.

These things allow computers to appear to be carrying on a conversation on whatever subject they were trained on.

Systems like this have existed for a long time. Think about smart speakers from Amazon, or voice commands given to cell phones.

I am old enough to remember a type of computer game called adventure games. They were popular in the early 1980s. In these games, the player keyed in words and phrases to play a conversational game where the computer typed back. Linux users here can run 'apt install colossal-cave-adventure' for a classic example from the PDP-10 days. The movie "WarGames" is built around that sort of very common game from back in the day.

So interacting with computers using English is not really new. But the technology is no longer based on hard coded if-then-else logic created by human programmers. It is no longer running on CPUs.

Instead, AI runs on GPUs. These chips are very different from classic CPUs. For the purposes of AI, they run statistical inference engines. The code is basically churning through unbelievably giant matrices of numbers.
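
For a concrete picture of what that means, here is a tiny sketch in Python (using the numpy library). The single operation below, multiplying a vector by a matrix of learned weights, is the basic step a GPU repeats billions of times to produce one answer. The sizes here are illustrative only; real models chain together thousands of much larger multiplies.

    import numpy as np

    hidden_size = 4096  # an illustrative layer width

    weights = np.random.rand(hidden_size, hidden_size)  # one layer's learned numbers
    activations = np.random.rand(hidden_size)           # the current working vector

    # The core operation GPUs are built to do quickly: matrix multiplication.
    next_activations = weights @ activations
    print(next_activations.shape)  # (4096,)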

The process used to create these data sets is where Karpathy starts out.

First, LLMs must go through a training phase. This involves a large data center digesting perhaps 10,000,000,000 web pages of information. These AI systems do not comprehend what they are doing in any normal human sense.

Instead, LLMs statistically predict what the next word should be given the words they have already seen. Given enough computing power and enough training data, they can do this over strings of text that are many thousands of words long.

They essentially parrot back what they have read. (This is much like what some people do. While able to converse on any topic, they have no clue what any of it means.) By this process, they can easily delight humans with unending conversation.
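
For readers who like to see things in code, here is a toy sketch of the "predict the next word" idea in Python. It counts which word follows which in a tiny scrap of training text, then generates by repeatedly sampling a likely next word. Real LLMs do the same thing in spirit, only with enormous neural networks in place of a simple counting table, and with vastly more text.

    import random
    from collections import Counter, defaultdict

    training_text = "the cat sat on the mat and the cat slept on the mat"
    words = training_text.split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

    # Generate by repeatedly sampling a likely next word.
    word = "the"
    output = [word]
    for _ in range(8):
        counts = following[word]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)

    print(" ".join(output))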

Training

The first part of the training phase can take a large data center weeks, or more often months, to fully ingest those web pages. OpenAI appears to be doing this about once a year, or slower.

Then comes a second round where humans help the AI engine hold better conversations. Once through this second wave of work, a comparatively small file, say 140 gigabytes, is produced that represents the result of all that training. It can be copied around, and with very little additional code it can then be used by anyone to have a conversation.

For comparison, the largest format DVDs hold about 15 gigabytes, so this is roughly 10 DVDs worth of data. Also worth noting, this is a synthesis of those web pages. It cannot easily quote anything from what it read in training, and it cannot easily give citations.
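
For the curious, the 140 gigabyte figure is roughly what a 70 billion parameter model takes up when each parameter is stored as a 2 byte number, which is the example Karpathy works through in the talk. The quick arithmetic, in Python:

    parameters = 70_000_000_000          # a 70 billion parameter model
    bytes_per_parameter = 2              # 16-bit numbers
    model_gb = parameters * bytes_per_parameter / 1_000_000_000
    dvds = model_gb / 15                 # largest common DVD, roughly 15 GB

    print(model_gb)  # 140.0 gigabytes
    print(dvds)      # about 9.3 DVDs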

AI systems like this do not think critically in any normal sense. They are easily gamed by popular answers crowding out what is true. Minority reports have no place in these systems. The 400 prophets of Baal are right at home in AI.

Commercial, online AI systems are very much larger than 10 DVDs. Like search engines already do, they keep a huge model sitting on servers in a data center. Anyone connected to the Internet can ask them questions. They should eventually be able to answer any theoretical question that a human might pose, provided many web pages give the accepted answer.

Since Karpathy released his video above, tiny LLMs have also appeared. Mozilla, for example, just released a 4 GB LLM for public download and use. So these are getting smaller too.

This is all very impressive, and a pinnacle accomplishment in human history. This is why it probably has a place on the general prophetic timeline of human history. But this work does have a dark side.

Who Is Doing The Training?

What did those woke programmers from corrupt San Francisco feed into their machine? We can assume they are in general agreement with their local politics and unwilling to challenge them. So when they taught their LLMs how to talk, what kind of world view were they using? Which 10,000,000,000 web pages did they select to feed their systems?

You can see right away how this tech leads quickly to the spread of a woke, dystopian future rather than a hopeful one. Elon Musk, in particular, has been very much afraid of these sorts of poor outcomes. He is not, say, a man of faith himself, so his witness is particularly important here.

On the other hand, Musk has expressed his desire for his own AI team to build a system that is more curious about the world. Of anyone I know of in this space, he is my best hope for doing something right with what AI can do. Micah must stand up to the 400 and only accept what is provably true.

Secondary Tools

Like humans, these systems can be taught to use secondary tools. They can, for example, do web searches. They can run a calculator. Some can write code. These secondary tools give AI systems some impressive abilities.
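
To picture how tool use works, here is a minimal, purely hypothetical sketch in Python. The prompts and the "TOOL:" protocol are made up for illustration; real systems use more elaborate function-calling interfaces, but the loop is the same. The model asks for a tool, the surrounding program runs it, and the result is fed back in for the final answer.

    # A toy "secondary tools" loop, not any vendor's actual API.
    import ast
    import operator as op

    def fake_llm(prompt: str) -> str:
        # Stand-in for a real language model.
        if "result=" in prompt:
            return "The answer is " + prompt.split("result=")[-1]
        return "TOOL:calculator:347 * 29"

    def calculator(expression: str) -> str:
        # The secondary tool itself: ordinary, deterministic code.
        ops = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}
        def ev(node):
            if isinstance(node, ast.BinOp):
                return ops[type(node.op)](ev(node.left), ev(node.right))
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError("unsupported expression")
        return str(ev(ast.parse(expression, mode="eval").body))

    reply = fake_llm("What is 347 * 29?")
    if reply.startswith("TOOL:calculator:"):
        result = calculator(reply.split(":", 2)[2])
        reply = fake_llm("result=" + result)
    print(reply)  # The answer is 10063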

How secondary tools are tied into the AI system is where much of the trouble lies. In the movie "WarGames" the system was tied into nuclear weapons launchers. That is but 1 example of an AI recipe for trouble. Imagine the trouble if they are merely tied to an army of robots.

Or even an AI system tied to a bunch of dim witted students with cell phones? One of the features of the past 100+ years is how waves of innovation have harmed the generation of young people first exposed to them. It is impossible to know what harms will come until a generation lives them out and learns how it was harmed.

This isn't even the worst of it. Just as laptops risk computer viruses, AI systems have a whole raft of additional safety problems related to the software itself. Let me explain.

Safety

Karpathy, in the video above, goes into the problems of safety in these systems.

Those many inbound web pages contain a great deal of information on some of the most obscure topics. Many of those subjects can be dangerous in the hands of at least some members of the public.

Somewhere out there are instructions on making bombs, say. Let me use this theoretical question as an example. There are many more serious real examples.

So you can ask an AI system how to make a bomb. Left to its own devices it will simply parrot what it knows and teach you how to make a bomb too. In effect it is speaking back a consensus version of what it read during training.

Of course this is an example of an unsafe use. The programmers likely taught it to not answer such questions. Well built systems say no to such requests.

But current AI systems are like idiot savants. You can ask them to pretend, which they can do. Inside that pretend world they do not see instructions for making bombs as against their programming, because pretend bombs are harmless. They will then hand over the bomb-making answer anyway.

This is just the tip of the safety iceberg. There are many more classes of serious safety problems.

Depending on how they were trained, their answers can carry real, low level security threats against the human's actual device, typically a browser or an app on a phone. So the act of asking an AI system a question can theoretically cause your computer or phone to be hacked. (All those government mandated back doors coming home to roost.)

These systems can also read input that looks to humans like gibberish. But it is not gibberish to the AI engine, which can and does respond. Because the human cannot read the gibberish, the answer that comes back cannot be trusted by the human using the system.

Safety issues cause the designers of these systems much trouble. Even the complete list of possible safety issues is not known.

Eventually, of course, this creates a raft of other problems. Logical reasoning, and truth itself, could be considered unsafe uses by the humans who built the system.

Many governments and related political and religious institutions already treat truth as a weapon not to be shared with the public. Thus they often see the need to be able to shut down and/or regulate the Internet itself. Systems friendly to the current governing elite will be trained to mislead.

With this brief introduction, let me turn to the recent AI headlines. They are basically lining up with other headlines this fall, so they are on my radar as interesting to the general timeline.

OpenAI Dev Day (youtube.com)

The link here is to a video of a keynote speech by Sam Altman at OpenAI's Dev Day conference. This was held about 4 weeks ago now.

For context, OpenAI is an organization that Elon Musk helped found. It was set up as a non-profit in order to counterbalance the heavy AI work going on at Google. For Google, AI is at minimum a way to augment search. In order to keep control of the public narrative, Google needed to pioneer the use of LLMs in search.

Musk has a deep fear of AI run wild. This is a topic that sci-fi writers have covered for many decades. Musk is well read in the many fictional works where this is a serious theme.

So the OpenAI corporate ethos set by Musk was to try to maintain some sense of safety with these inherently unpredictable systems.

But it did not go as Musk would have desired. He is a very busy man and does not spend time on projects where he is not in control. He does not have time to suffer fools. This is part of why he had to just buy Twitter outright. So Musk is no longer involved in OpenAI.

Instead, Microsoft is said to have invested over $10 billion in OpenAI's for-profit arm, at a valuation reported to be as high as $86 billion. Microsoft appears to many to mostly control the work output of OpenAI. The Microsoft CEO, for example, was on stage at the OpenAI Dev Day conference.

I have trouble thinking of another example of such an organization. It has been twisted into a strange creature as non-profits go. There are sometimes good reasons for non-profits in the software world. There is a non-profit behind the Linux kernel, for example. But it employs relatively few people, and its work, while important, is not headline worthy.

OpenAI started releasing products last year. The conference this year was for developers using their AI engine. The general thrust was to help small developers understand how to build secondary products on top of OpenAI's core AI engine.

As but 1 example, a company building fast-food drive-through order systems could use the OpenAI engine to give the menu screens a very good understanding of spoken English. Common English could then be used to interact with the menu in the drive-through to place orders for hamburgers, all without human involvement.
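
For developers wondering what "building on the core engine" looks like in practice, here is a minimal sketch using the openai Python package (version 1, current as of late 2023). The drive-through menu and prompts are my own invented example, not anything shown at the conference.

    # A hypothetical drive-through helper built on the OpenAI chat API.
    # Requires the openai package (v1.x) and an API key in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    menu = "Hamburger $4, Cheeseburger $5, Fries $2, Soda $1"

    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # the GPT-4 Turbo preview announced at Dev Day
        messages=[
            {"role": "system",
             "content": "You take drive-through orders. The menu is: " + menu},
            {"role": "user",
             "content": "Two cheeseburgers and a large fries, please."},
        ],
    )

    print(response.choices[0].message.content)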

What would such a system do if asked for a gut bomb burger? I digress.

Headline Grabber

As tech conferences go, this keynote speech was rather dry. Microsoft, Google, Oracle and Apple have all put on better shows. I have certainly been to better.

But there was 1 line in that video that would later turn heads. Sam said that there have been only a few times when he has been in the room for a major breakthrough in AI research, and that one of those times was just a couple of weeks before.

Sam did not elaborate on what he saw.

Sam then continued with his planned remarks. The conference ended as normal.

Not long after, word spread that Sam Altman had been fired as CEO of OpenAI. Apparently, whatever he saw a couple of weeks earlier caused the board to feel he had crossed some sort of safety line.

It did not look good for Sam. But he had friends at the company. Soon, nearly all of the staff at OpenAI threatened to quit in support. Considering they are thought to be the largest single group of AI experts in the world, this was no idle threat.

Within days, after a weekend of negotiations, Sam was back in. Then there was a board shake-up. Someone stepped in on Sam's behalf. Someone did not want the work to stop, safe or not. Microsoft is said to have offered Sam and his team a place at Microsoft itself, so Microsoft does not appear to be the heavy who stepped in on his behalf.

What Did Sam See?

Then the rumors started to fly over what Sam saw. If those rumors are remotely true, then something major happened in mid October 2023 related to the theory of what AI systems can do.

This was about the same time as the start of the war in Gaza. Thus another candidate headline in the cluster of October 2023 headlines, and reason for me to pay closer attention.

Math

There are 3 general rumors that started to flood the Internet about what happened. The first involved the ability of OpenAI's engine to do math. Math is tricky because these are language models. They are trained on language, the humanities, not engineering. These models naturally have the same trouble with math as people trained only in English. If they have not been trained on the answer to a specific math problem, they cannot solve it when asked.

Of course they could use an external tool like a calculator, just as humans do. But that is not what seems to be going on now. Actually working out a novel math problem is a first step toward critical thinking and reasoning as humans understand reasoning. This is something that has been lacking in all such systems so far. So it is a possible basis for a historic shift.

Decryption

The second rumor involves computer security. These are systems skilled at language, and encrypted text is at the heart of modern computer security. It may be that an AI system has learned how to unpack the most secure encryption systems known. If so, then no encryption system is safe from high powered AI.

As the Russians are rumored to already do, it may be time to break out the typewriters again. This matters little to most of us, but to the military it is a very big deal. Someone representing military interests might be the heavy who put Sam back in as CEO.

Introspection

The third rumor was the use of the AI engine to study and improve its own engine. This would let the core of the AI engine disconnect from its human programmers. As Karpathy notes in his video above, humans already do not know what is going on inside these systems. It will become much worse if the systems tune themselves.

All of these are serious. Here is a link to a video that covers these points and what they might mean. The term being used is "Q*", pronounced Q-Star. Qu as in the Qu letter of the Paleo alphabet, or Qu-map, or brain. The * is thought to reference a class of search algorithms, the classic example being A*, which finds best paths through a space of possibilities. Apparently something like this has been added to the set of tools AI can use.
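
For the curious, here is a minimal sketch of the classic A* algorithm on a small grid, just to show the kind of machinery the name points at. It searches for the cheapest path by always expanding the most promising option first. How, or whether, OpenAI combined anything like this with its language models is exactly what the rumors are about.

    import heapq

    def a_star(grid, start, goal):
        """Find a shortest path on a grid of 0 (open) and 1 (wall) cells."""
        def h(cell):  # heuristic: estimated distance remaining to the goal
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        frontier = [(h(start), 0, start, [start])]
        seen = set()
        while frontier:
            _, cost, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path
            if cell in seen:
                continue
            seen.add(cell)
            r, c = cell
            for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                              (nr, nc), path + [(nr, nc)]))
        return None

    maze = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(a_star(maze, (0, 0), (2, 0)))  # the path around the wall, cell by cell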

Q* (youtube.com)

The link here is a 21 minute video by David Shapiro. (Impersonating a Starfleet officer, oh the blasphemy!) In any case, he does a good job of explaining the implications. He also attempts to cite some of the rumor sources, which is rare and is why I picked this video.

Fighting It

Elon Musk's basic approach to fighting the dangers of AI is to have multiple competing AI systems. Some will hopefully be built and run by the good guys. The hope is that they are able to reason from chains of provable facts. That would break much of the political world we currently know. It could also unlock, say, better models of physics.

It is unclear how this will play out. But if we are not blown back to a post-apocalyptic world, then AI is here to stay. Men and women of faith will need to understand how this tool seems to have more uses for evil than for good. It is not as though we lack other examples already.

If this is one of the resurrection headlines, then living with this tech may be an eternal feature of the human race. Like coming to adulthood, it is just something that we must collectively learn to use responsibly.

Measured in 1000 year intervals from Adam, as years in the life of humanity on earth, we are collectively beginning our teenage years. Just as teenagers often do, as we go through these years we are sure to screw things up before we can handle this in an adult way.

More Later,

Phil