[Technical Master – AI/Cloud] Cloud operators are competing in semiconductors

Microsoft is the last of the big three cloud service providers to offer customized semiconductors for cloud and AI.

Google, which stepped into the AI inference market with its tensor processing units (TPUs) back in 2016 (the TPU played a prominent role in AlphaGo), announced general availability of Cloud TPU v5e for cost-effective AI model training and inference on November 9.

(Of course, Google’s TPU is co-developed with Broadcom rather than designed entirely in-house, and when Broadcom reportedly tried to raise prices significantly, there was news that Google might work with Marvell from the 6th generation onward.)
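As a side note, here is a minimal sketch of what consuming such an accelerator looks like in practice, assuming a Cloud TPU v5e VM with a TPU-enabled JAX build; JAX and this setup are my own illustration, not something the post specifies.

```python
# Minimal sketch (illustration only): check that JAX sees the TPU cores on a
# Cloud TPU v5e VM and dispatch a trivial computation to them.
import jax
import jax.numpy as jnp

devices = jax.devices()  # on a v5e VM this lists the attached TPU cores
print(f"Found {len(devices)} devices:", devices)

x = jnp.ones((2048, 2048))
y = jnp.dot(x, x)        # runs on the accelerator (falls back to CPU elsewhere)
print(float(y.sum()))
```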

Among cloud operators, Amazon Web Services (AWS) has been the most broadly active in this market, launching a range of chips: the ARM-based Graviton family, Trainium for AI training, and Inferentia for inference.

Its strategy has been to cut costs sharply: customers who might otherwise leave AWS because of high bills can instead run their workloads cheaply on ARM-based Graviton instances.

On the server CPU side there are powerful players such as Intel and AMD, but the strategy is to stay competitive by letting cost-effective workloads run on ARM.
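To make that cost lever concrete, here is a minimal sketch of how a customer would opt into a Graviton-based instance with boto3; the AMI ID is a placeholder and c7g.xlarge is just one example of an ARM instance type, none of which come from the post itself.

```python
# Minimal sketch (placeholders, not from the post): launching a Graviton-based
# EC2 instance with boto3. Replace the AMI ID with a real arm64 image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder arm64 AMI
    InstanceType="c7g.xlarge",        # ARM-based Graviton3 compute instance
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```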

Whereas AWS entered this market by acquiring its own ARM chip company, Microsoft, Google, and Oracle have introduced and operated servers built on ARM-based chips from Ampere Computing. Microsoft now offers Cobalt 100 as well, giving customers a wider choice.

Chinese cloud operators Alibaba Cloud, Huawei, and Tencent all design ARM-based server chips and deploy them themselves. The same goes for AI chips.

In the AI chip market, Nvidia is unrivaled in training. When AMD said it would ship the MI300X with 192GB of memory at the end of this year, Nvidia, whose H100 carries only 80GB, responded that it would release the H200 with 141GB. The memory in question is HBM3e (from SK Hynix).
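To see why those capacities matter, here is a rough back-of-envelope sketch based on my own assumptions (2 bytes per parameter in FP16/BF16, roughly 20% of memory reserved for KV cache and overhead), not on vendor figures: HBM capacity roughly bounds how large a model a single accelerator can hold.

```python
# Rough back-of-envelope: FP16/BF16 parameters that fit in a given HBM capacity.
# Assumes 2 bytes per parameter and ~20% reserved for KV cache, activations,
# and framework overhead. These ratios are illustrative, not vendor numbers.
BYTES_PER_PARAM = 2
OVERHEAD_FRACTION = 0.20

def max_params_billion(hbm_gb: float) -> float:
    usable_bytes = hbm_gb * 1e9 * (1 - OVERHEAD_FRACTION)
    return usable_bytes / BYTES_PER_PARAM / 1e9

for name, hbm_gb in [("H100 (80GB)", 80), ("H200 (141GB)", 141), ("MI300X (192GB)", 192)]:
    print(f"{name}: roughly {max_params_billion(hbm_gb):.0f}B parameters per device")
```

By that crude measure, roughly 32B parameters fit in 80GB versus about 77B in 192GB, which is why the capacity race matters for serving large models.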

Cloud operators are creating ‘purpose-built semiconductors’ to serve their own services and the varied needs of their customers.

While partnering with general-purpose CPU and GPU companies such as Intel, AMD, Nvidia, and Ampere, they are unwilling to be tied to any single vendor’s supply, so they are simultaneously securing ARM-based server chips and AI training and inference chips of their own.

It is not easy to replace Nvidia, AMD, or Intel. It is also worth asking whether it is economical to sustain a continuous roadmap through in-house design when the chips are used only internally. I think the key is how to keep striking the right balance. Microsoft says it has already started designing second-generation versions of its AI chip Maia 100 and its ARM-based Cobalt.

Perhaps the next versions will significantly increase the amount of HBM installed. Are SK Hynix and Samsung Electronics lobbying well? Come to think of it, cloud operators are the ones buying trillions of won worth of memory. I wonder whether those two companies buy and use the cloud services in return. ^.^

Korean cloud operators do not have the market scale to build their own AI chips or ARM-based server chips. However, since Naver alone is targeting the Japanese, Southeast Asian, and American markets with its Korean services, LINE, and webtoon offerings, there is room to build economies of scale as a challenger.

In the case of GAK Sejong, it is said to rank first among Korea’s supercomputers, having deployed 2,240 Nvidia A100 GPUs. That is a great achievement, but when I heard what Microsoft’s chairman said with a smile at dawn, I thought the government really needs to do much more to help these companies.

“We used NVIDIA GPUs to build the most powerful AI supercomputing infrastructure in the cloud,” said Satya Nadella, chairman and CEO of Microsoft. “OpenAI uses this infrastructure to deliver its leading LLMs right now. In fact, last week Azure made the largest-ever submission to the MLPerf benchmarking consortium, running on 10,000 H100 GPUs, three times the previous record, and delivering better performance than any other cloud.

“And on the latest TOP500 list of the world’s supercomputers, Azure ranks third as the most powerful supercomputer in the public cloud. That made the news. What didn’t make the news is that we didn’t submit the entire supercomputer; only part of it was submitted. So I’m very happy to be the only public cloud ranked third.”

Microsoft submitted only part of its supercomputer, not the whole thing, and still ranks third among the world’s supercomputers. Only Microsoft, AWS, and Google Cloud can play at this scale, and that is chilling.
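To put that scale gap in rough numbers, here is a sketch comparing the two clusters mentioned above using approximate peak dense BF16/FP16 tensor throughput per GPU (about 312 TFLOPS for an A100 and about 990 TFLOPS for an H100 SXM); these are assumed vendor peak figures, not measured MLPerf or TOP500 results.

```python
# Rough comparison of aggregate peak tensor throughput (dense BF16/FP16).
# ~312 TFLOPS per A100 and ~990 TFLOPS per H100 SXM are approximate vendor
# peaks; real sustained training throughput is considerably lower.
A100_TFLOPS = 312
H100_TFLOPS = 990

gak_sejong = 2_240 * A100_TFLOPS      # GAK Sejong cluster cited above
azure_mlperf = 10_000 * H100_TFLOPS   # Azure's MLPerf submission cited above

print(f"GAK Sejong : ~{gak_sejong / 1e3:,.0f} PFLOPS peak")
print(f"Azure run  : ~{azure_mlperf / 1e3:,.0f} PFLOPS peak")
print(f"Ratio      : ~{azure_mlperf / gak_sejong:.0f}x")
```

Even on paper, that single MLPerf run is more than ten times larger than Korea’s top cluster, and it is only a slice of Azure’s total fleet.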

Korean cloud operators are competing with these companies. The infrastructure war already feels over, and OpenAI sits on top of it all, so what chance is there?

Even in this situation, we have no choice but to applaud those who are struggling to protect and defend Korean services. As I get older, I find myself bowing ever more to their efforts, given that they could earn many times more by moving to overseas companies, yet they stay at their posts.

During the mobile revolution, the government and media fretted that Samsung Electronics and LG Electronics would die because of Apple, yet they do not seem to feel that same sense of crisis during the AI revolution. SK Hynix and Samsung Electronics seem to think they can do well enough selling HBM to Nvidia and AMD, or that by making PIM down the road the two companies can hold their ground.

I have digressed into the story of domestic cloud operators, and the same goes for AI semiconductor startups. Yesterday, the AI semiconductor company Sapeon held its first briefing since its founding. With the launch of the X330 it can now target the data center market in earnest, and it is said to be mobilizing all of its strengths and networks to run tests with overseas cloud operators.

In Korea, the chip is said to be going into the AI service infrastructure of NHN Cloud and of SK Telecom, an investor. It will not be an easy challenge for anyone amid this upheaval. Considering that SK Hynix is hitting its stride in the HBM market while SK Group explores possibilities by investing in non-memory areas such as Sapeon, we cannot help but applaud them.

SK Telecom’s attempt to bring in and test Ampere-based HP servers would also be interesting. I need to go hear CEO Yoo Myung-hwan speak on pioneering the ARM server market, but I think I will catch that in December; then you will hear a lot more about ARM. An ARM forum was also held in Korea today, but I could not go because of a cold.

That’s all for today. Will the 30th anniversary event always be used.

#ai #microsoft #aws #gcp #arm #maia #cobalt
