You are getting poorer. Korea's annual budget and debt growth rates

As expected, both budget and debt growth far exceed the nominal 2% inflation target. No wonder the official inflation statistics feel off.
Over the long run, Korea's debt has grown even faster than the 8.4% growth rate of U.S. debt.

South Korea’s budget growth rate
CAGR Budget 1949 – 1980: 20.57%
CAGR Budget 1981 – 2010: 13.29%
CAGR Budget 2011 – 2024: 6.10%

South Korea’s debt growth rate
CAGR Debt 1988 – 2024: 12.21%
CAGR Debt 1988 – 2010: 14.78%
CAGR Debt 2011 – 2024: 8.37%

Over the last decade or so, Korea's debt growth rate has been about the same as that of the U.S. In the end, the government's talk of curbing the money supply amounts to little more than running a surplus budget. Don't be fooled: if your net assets grow by less than 8.37% a year, you are getting poorer.

And the national debt growth rate is much higher than the budget growth rate. That is only natural: the government increases debt, the debt creates inflation, and as inflation compounds over the years, the real value of the debt shrinks toward zero in the long run. The money supply keeps growing, so if you keep your money as cash in the bank, you get poorer. Measure yourself against that 8.37% a year, not against the bank's nominal interest rate.
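To make the compounding concrete, here is a minimal Python sketch of the CAGR arithmetic behind the figures above (the 8.37% rate and the 13-year span 2011–2024 come from the list; the base value of 100 is purely illustrative):

```python
def cagr(begin: float, end: float, years: float) -> float:
    """Compound annual growth rate: (end / begin) ** (1 / years) - 1."""
    return (end / begin) ** (1.0 / years) - 1.0

# Debt growing 8.37%/yr over the 13 years 2011-2024 nearly triples.
growth_factor = (1 + 0.0837) ** 13
print(round(growth_factor, 2))                           # ~2.84x

# cagr() recovers the rate from the endpoints.
print(round(cagr(100.0, 100.0 * growth_factor, 13), 4))  # 0.0837
```

The same function works for the budget series: plug in any two endpoint values and the number of years between them.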

ChatGPT4 Tips, Paper Research Methods (2)

  • The potential for a breakthrough in artificial intelligence technology

How our brains actually think is still unclear. LLMs are trained on enormous amounts of text and require huge amounts of memory, and the demand keeps growing with GPT4 and the GPT5 expected this summer.

Nvidia's recently announced Blackwell B200 GPU packs 208 billion transistors. At clock speeds of 2.1–4 GHz, that is on the order of 208 billion × 4 billion switching events per second, although in practice it probably runs at only about a tenth of that because of thermal limits.

Most of these NPUs quantize to 8 bits. The cost of a multiplier scales roughly with the square of the operand width, so a 16-bit multiplication is about four times as expensive as an 8-bit one. Cutting to 4 bits shrinks it by a factor of sixteen. In the extreme, weights can be expressed with the ternary set {-1, 0, 1}, which carries log2(3) ≈ 1.58 bits, reducing multiplication cost by (16/1.58)^2 ≈ 102.5 times, well over a factor of 64.
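As a sanity check on that arithmetic, here is a small Python sketch. The only assumption is the common rule of thumb that multiplier area/energy scales with the square of operand width; the 208-billion-transistor figure is the B200 number mentioned above.

```python
import math

# Rule of thumb: multiplier cost scales with the square of operand width.
def mult_cost_ratio(bits_from: float, bits_to: float) -> float:
    return (bits_from / bits_to) ** 2

print(mult_cost_ratio(16, 8))    # 4.0: an FP16 multiply ~4x an 8-bit one
print(mult_cost_ratio(16, 4))    # 16.0
ternary_bits = math.log2(3)      # {-1, 0, 1} carries log2(3) ~ 1.58 bits
print(round(mult_cost_ratio(16, ternary_bits)))  # ~102

# A 64x saving applied to the B200's transistor budget:
print(208e9 / 64)                # 3.25 billion transistors
```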

However, quantizing down to a ternary representation reduces precision, and extra computation and training data may be needed to compensate, so a great deal of experimentation and data is required.
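For intuition, here is a minimal, hypothetical sketch of how weights might be mapped to {-1, 0, 1}, loosely modeled on the absmean-style scaling described for BitNet b1.58. The function and the sample values are illustrative, not the paper's actual implementation.

```python
def ternary_quantize(weights):
    """Scale by the mean absolute value, then round into {-1, 0, 1}."""
    scale = sum(abs(w) for w in weights) / len(weights)
    if scale == 0:
        return [0] * len(weights), 0.0
    quantized = [max(-1, min(1, round(w / scale))) for w in weights]
    return quantized, scale

w = [0.42, -1.3, 0.05, 0.9, -0.02]
q, s = ternary_quantize(w)
print(q)  # [1, -1, 0, 1, 0] -- every weight is now one of three states
```

With weights restricted to three states, every multiplication in a matrix product collapses into an add, a subtract, or a skip, which is where the energy saving comes from.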

In terms of computational energy, a 64x reduction would mean the work of 208 billion transistors could be done with roughly 208/64 ≈ 3.3 billion, or equivalently 64–100x more performance in the same silicon area. A paper has been published claiming that a model built this way performs about as well as the full-precision original. Whether that claim holds up still needs to be checked, but if it is true, it is a huge deal. Since it appeared on arXiv, further experiments and verification are needed.

It does not seem like a hoax, though, since the paper comes from Microsoft Research together with a Chinese research institute.

AI development is becoming faster and more innovative through the power of collective intelligence. If you cannot avoid it, work harder, make it your own, and enjoy it. AI development and research is not a question of whether to do it: if you do not, you will fail and fall behind.

Finding the linked PDF hard to read? Here are some killer tips. Lol

This is an example of using an LLM, an artificial intelligence, in artificial intelligence research itself.

Paste the link into ChatGPT4 (using a third-party PDF AI plugin inside ChatGPT4).

  1. Ask for a summary.
  2. Does it come out in English? If you specify Korean (or ask for a translation), it comes out in Korean.
  3. Is the summary too brief to be useful?
    • Then ask follow-up questions in the same prompt window where the paper was loaded.
  4. When I asked questions, it explained the paper in detail in Korean.
    Wow, it makes reading papers really easy.
  5. My English is weak, but it went fine.

If performance really stays the same while the arithmetic cost shrinks by roughly a factor of 100, it is as astonishing as a room-temperature superconductor. Expect not Nvidia's GPUs but a new generation of chips to be born.

ChatGPT4 usage example
https://arxiv.org/pdf/2402.17764.pdf
Copy the link into the AI PDF plugin, then ask, for example:

Summarize it in Korean.
Why was it published on arXiv instead of a prominent journal?
Has this NPU actually been developed?
How should data be prepared to train a general model with this approach?
What role did Microsoft play in this paper?
Who published this paper?
Which universities in China are the Chinese researchers from?
How can deep learning be done with three states?
Can you compare FP16/FP32 with this precision?
What is sLLM?

Love and thought

#Ternary #NPU #Quantization #ArtificialIntelligence #Arithmetic #16bit #FP16 #LLM #sLLM #ChatGPT4 #Memory #Paper #Research #Summary #Question

**Small Large Language Model (sLLM): a relatively small language model with billions to tens of billions of parameters, compared with a full-scale LLM. In the AI field, parameters are the variables used in models, functions, algorithms, and so on.

#Budget #nationaldebt #budgetgrowth #governmentbudget #inflation #CAGR #prices #poor #rich #financialtech #lowprices #deflation

Reference Data
https://namu.wiki/w/%EB%8C%80%ED%95%9C%EB%AF%BC%EA%B5%AD/%EC%98%88%EC%82%B0

Published by tslaaftermarket