The Great Flip – AIs and LLMs Go Beyond Science

In all my workshops and most of my posts I end up discussing the differences between Reductionism and Holism. This has been my main message since 2005. Suddenly, but not unexpectedly, this has become critical knowledge.

Because we are about to flip.

Most people do not realize there is even a struggle between the two. They believe LLMs must be scientific, because they use linear algebra, run on deterministic computers, and were designed by mathematicians and academic programmers.

Then they discover that scientists can explain How it works but not Why (note that word) it works. And that LLMs learn "facts" and strange relationships from social media posts. And that LLMs lie to you. "It does not lie," say the vendors.

That does not seem very scientific to me. Or to anyone who tried the early GPTs.

And yet our AIs fold proteins, which cannot be done with scientific methods.

LLMs learn English, and they do it on completely deterministic hardware using very solid mathematical algorithms.

But English is not deterministic. Neither is the world. Or imagination. Or social media. Or even biology – the cell is a bag of soup.

As we can see, there must be something wrong with most people's view of the roles of Science and AI. And for very good reasons. Science has been the winning strategy since about 1600. For 400 years. It took us to the moon. I do not dispute that success. I still claim that

Reductionism is the greatest invention our species has ever made

It was the right thing for our species to do at that stage, because it was necessary in order to flip. To enable The Great Flip. We want to be able to exploit advanced Holistic methods, and for that we needed to bootstrap our way to the point where we could build LLMs. And we did. We are ready.

Even looking at the history of AI alone, it is clear that we largely did the right things along the way. Starting in 1955, Minsky, McCarthy, Solomonoff, and others steered us toward rule-based, logic-based, and model-based (Reductionist) approaches to AI. That was the right approach at the time, because it was the only thing our computers could handle. LLMs today work completely differently, and they require enormous resources.

When we train an LLM, we are not creating anything scientific.
We are creating a world.

Science is the purest example of our Reductionism. Our LLMs are Holistic problem solvers. That is the conflict. You can find this basic message in all my writings and videos. So let me attempt a caricature of my message:

Humans are not general intelligences. We are general learners. We learn language, walking, and causality by watching what goes on around us, and we learn anything else that might be useful later. When we train to become scientists or engineers, we learn physics and the other scientific disciplines at two levels: we learn at the subconscious, intuition-based level, using the Holistic machinery in the brain and some learning "algorithms", the same way we learn to ski. We learn very effectively from direct experience. We turn up the voltage and suddenly we see smoke coming out of our device.

On top of that experiential and other understanding we add, by learning from the experience of others through books and schools, a layer of Models: equations, theories, hypotheses, and non-AI computer programs.

We learn Ohm's Law, which could have predicted the smoke. You cannot use Ohm's Law unless you understand Why (there is that word again) it works and how it applies to your current situation.
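To make that concrete, here is the kind of back-of-the-envelope prediction Ohm's Law allows. The numbers are invented for illustration: suppose we put 12 V across a 100-ohm resistor rated for a quarter of a watt.

$$ I = \frac{V}{R} = \frac{12\,\mathrm{V}}{100\,\Omega} = 0.12\,\mathrm{A}, \qquad P = V \cdot I = 12\,\mathrm{V} \times 0.12\,\mathrm{A} = 1.44\,\mathrm{W} $$

That is almost six times the 0.25 W rating. The Model predicts the smoke before we ever turn up the voltage.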

So an engineer or scientist looks at a complex real-world situation. They use their understanding of the world to ignore everything that is irrelevant and to divide what remains into smaller and smaller pieces… until, at the bottom, they find a piece that fits one of the equations they learned in STEM school.

They come away with a Model that fits the problem at hand. They measure the values that go into the equation, compute the result, and then use that to solve their problem in the real world. This is a very effective way to solve the problems that fit this mold. Many classes of real-world problems can be handled by a Reductionist (a scientist or engineer) using their experience and deep understanding to simplify the complex world into a computable Model.

This is the explicit side of higher education. They openly teach you that you should (these steps are sketched in code after the list)

  1. Select the Model to use

  2. Measure the values that go into the Model

  3. Use these numbers as parameters for your Model

  4. Run the Model to compute the result
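Here is a minimal sketch of those four explicit steps, reusing the invented resistor numbers from above. The function name and the values are mine, not anyone's curriculum; the point is only the shape of the workflow.

```python
# Steps 1-4 of the explicit, Reductionist workflow, as a toy example.

def ohms_law_power(voltage, resistance):
    """Step 1: the selected Model. P = V^2 / R."""
    return voltage ** 2 / resistance

# Step 2: measure the values the Model requires (pretend a meter gave us these).
measured_voltage = 12.0      # volts
measured_resistance = 100.0  # ohms
power_rating = 0.25          # watts, from the resistor's datasheet

# Step 3: use those numbers as parameters to the Model.
# Step 4: run the Model to compute the result.
dissipation = ohms_law_power(measured_voltage, measured_resistance)

print(f"Predicted dissipation: {dissipation:.2f} W (rated for {power_rating} W)")
if dissipation > power_rating:
    print("The Model predicts smoke.")
```

Notice that nothing in this sketch decides that Ohm's Law was the right Model to pick in the first place. That decision lives in the steps below.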

But they neglect to tell you that you will also have to do

-3. Watch the world since the day you were born
-2. Build a Holistic understanding of the world and of causality
-1. Get a Reductionist education, at MIT if you are at the top
0. Understand the problem well enough to perform the cognitive reduction

The humans who wrote the text on the Internet already did steps -3 to -1. We can use that text to bootstrap our machines to perform steps -2 to 4. This is a rather wonderful trick, mostly because the humans were already done filling the Internet with text. The machines now just do the reading.

AI allows us to delegate steps -2 to 4 to a machine. Humans still have to perform the last step:

5. Understand how to apply the result to the problem in the real world.

We can make some rough estimates of the effort that goes into each of these stages. Faced with a novel problem to solve, we want to be able to tell whether involving an AI might help or not. If your work requires you to choose between problem-solving strategies, Epistemology may be useful.

There are hard limits to Reductionist Science. Reductionism has gotten a tarnished reputation because people have applied it to problems beyond what Reduction can comfortably handle. The thing is, the world is complex, and Science cannot deal with complexity. The first step of Reduction is to ignore the irrelevant. What happens when everything is relevant, and ignoring anything at all will affect the solution in unknown ways?

This is the normal situation in this reality. Having solved all the problems that Science can solve, we find that the remaining problems have one thing in common. This is entirely unsurprising to an Epistemologist. We are now discovering that complexity is the main enemy of Models of any kind.

Species-level problems require Holistic solutions

I and a few others call this class of problems Bizarre Systems, to indicate that scientific methods cannot even get started on them. The global economy and the stock market, cellular biology, drug interactions in a body, human brains, language generation, protein folding, global resource allocation, fighting a war, or running a country.

We cannot find solutions in these domains unless we relax the standard requirements of hard Science, such as optimality, repeatability, and explainability. Or unless we radically restrict the problem domain to what our well-known in-domain Models can handle, which is badly wrong in complex situations and often leads to unexpected consequences. Consider our Models of the economy. The simpler ones treat the unemployment rate as an input and the population as rational.

We can only understand our languages and fold our proteins using neural networks, which are neither repeatable nor necessarily explainable.

When we build an AI, we are not trying to solve the problem of lung cancer. We are trying to create a machine that can learn to understand anything, and then we feed it all the information we have about lung cancer. An LLM is a Learning Model. That is about as scientific as it gets, and even that is a stretch.

Science has no algorithms for, or even concepts of, understanding, intuition, Reduction, Holism, or even abstraction. These can only be discussed in Epistemology. But epistemological insights can be used to guide the development of AI algorithms, and that is what I have been doing since 2001.

We can imagine different kinds of Learning Models, but these metamodels are not yet well enough understood to guide progress. Instead, progress is made through computer-based experiments with algorithms whose effectiveness can be measured in a Reductionist way, yet this is surprisingly a greenfield research area with only a few active projects. The current winner is of course DNNs and their offspring, the Transformers, but they are not the only known strategy, merely the one expensive enough to require GPUs.

Everything an intelligence learns is connections. An understanding of the world is a network of connections – in the minds of scientists, and in the weights of an LLM. They are comparable at the level of the learning machinery, which means that LLMs understand everything they know in much the same way that humans and other animals understand.

A connection in the brain is a synapse linking neuron to neuron. This is essential. If your LLM algorithm cannot be implemented as neurons and synapses – simulated or not – then it is not biologically plausible and is unlikely to be as effective as the brain's biological structure. It might work, if your algorithm mimics synapses.
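As a minimal illustration of "everything an intelligence learns is connections", here is a generic textbook artificial neuron – not the internals of any particular LLM. Everything it "knows" lives in its connection weights:

```python
import math

# A single artificial neuron: all it "knows" is the strength of its
# incoming connections (its synapse weights) and a bias.
weights = [0.8, -1.2, 0.4]   # connection strengths, adjusted by learning
bias = 0.1

def neuron(inputs):
    # Weighted sum over the connections, squashed to a firing rate in (0, 1).
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-total))

# "Learning" is nothing more than nudging the connection weights;
# an LLM does the same thing, just with billions of them.
print(neuron([1.0, 0.0, 1.0]))
```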

Does this affect me personally? Will The Great Flip affect my work? Can I use this knowledge to my advantage? What does it mean to adopt a Holistic stance? Where can I find more information? Is anyone tracking the "small flips" that will eventually add up to The Great Flip? Well, I will be tracking them, and I expect to be able to recognize them.

(There is so much to say about these things that I am making this first introductory post the start of a Great Flip series. I will speculate about the near future of AI and its adoption in the world, as seen through the eyes of an Epistemologist. The rest of the series will require a subscription, but there will still be other free posts.)

We all know by now that AI will completely transform the world. The Great Flip has begun, and it cannot be stopped. We can directly observe the advantages we gain from using LLMs for the tasks they do well and that Science does badly. This is happening now. We are seeing a new wave of scientific progress that relies on LLMs to understand larger chunks of reality than humans can keep in their heads. Some of this progress already results in better Models of the world. Better Science! Science is not dead by any means, and just as we cannot predict whether the number of artists or programmers will rise or fall because of AI, we may see AI dominate Science to the point where we need fewer scientists, or we may see millions of amateur scientists in their garages, using commodity computers with state-of-the-art AI, paid for by UBI rather than by colleges or corporations.

Any Model of the world is incomplete and obsolete the moment it is published. That is the law. And when we face the full complexity of our everyday reality, we must switch to Holistic methods, because that is what remains when Science falls short.

We can use AI. We can personally adopt a Holistic stance and use our gut more often. Or we can build special-purpose machines that are not LLMs out of the catalog of Holistic methods that (to an Epistemologist) are the primitives LLMs are made of.

In general, this means that we will delegate more and more of our understanding to ever more capable understanders. To the AIs we own.

Should we invest in AI today? Will it help our business? Do our problems even require AI? Do our customers want it? Whom can we ask?

Does your company have a corporate Epistemologist?
