Nvidia on a Tear: Back-to-Back Investments in Three Generative AI Unicorns, Single-Handedly Boosting TSMC's 5nm Capacity
Original Source: Core Things
Core Things reported on June 30 that whether you look at the first half of this year or just this past week, Nvidia has been a big winner.
In the new wave of artificial intelligence enthusiasm driven by generative AI, Nvidia has become one of the hottest stocks: its share price has soared 185% so far this year, its market value has surpassed US$1 trillion, and it leads the hardware race underpinning generative AI and large-model R&D.
When it comes to investing in generative AI startups, Nvidia's presence is becoming harder and harder to ignore. This Thursday, Inflection AI, a US AI chatbot startup, announced $1.3 billion in new financing at a valuation of about $4 billion, while Runway, a US AI video startup, announced the completion of $141 million in new financing at a valuation of about $1.5 billion. Nvidia is among the investors in both rounds.
According to market research firm TrendForce, Nvidia is expected to overtake Qualcomm as the world's largest chip design company in the second quarter of 2023, as demand for AI-related chips drives revenue growth.
Raking in orders in the AI chip market on one hand and investing heavily in the generative AI race on the other, Nvidia's AI ambitions are getting harder and harder to miss.
01. Back-to-Back Investments in Generative AI Unicorns: Nvidia Rapidly Expands Its AI Investment Map
Nvidia's two latest generative AI unicorn investments are Inflection AI, a large-language-model startup building ChatGPT-like products, and Runway, an AI video-editing startup that lets users create short videos simply by typing text. Both have assembled star-studded investor lineups.
In addition to Nvidia, Inflection AI's new round drew capital from technology leaders including LinkedIn co-founder Reid Hoffman, Microsoft co-founder Bill Gates, and former Google CEO Eric Schmidt. Runway's latest round includes Google, Salesforce, and other technology giants, bringing its cumulative financing to roughly $237 million.
Earlier, on June 9 this year, Cohere, a Canadian AI startup also working on ChatGPT-like chatbots, announced the completion of a US$270 million Series C at a valuation of about US$2.2 billion; Nvidia, Oracle, Salesforce, and others all participated in that round.
It is especially worth noting that Inflection AI, co-founded in 2022 by DeepMind co-founder Mustafa Suleyman, who serves as its CEO, received a flood of offers after launching its Pi chatbot and then closed this large round, with Nvidia as the only new investor. By valuation, the company has now surpassed Cohere to become the world's third-largest generative AI unicorn, behind only OpenAI and Anthropic.
Inflection AI recently launched its first proprietary language model, Inflection-1, which it says was trained on thousands of Nvidia H100 GPUs over very large datasets, with performance comparable to GPT-3.5, Chinchilla, and PaLM-540B.
02. Training GPT-3 in 11 Minutes: Nvidia GPUs Dominate Large-Model Benchmarks
Inflection AI is working with Nvidia to build one of the world's largest GPU clusters for training large AI models. Through its partnership with Nvidia and cloud service provider CoreWeave, the supercomputer will be expanded to 22,000 H100 GPUs, far exceeding the 16,000 A100s in Meta's RSC supercomputing cluster.
Founded in 2017, CoreWeave claims to offer computing power "up to 80% cheaper than traditional cloud providers." Nvidia previously invested $100 million in CoreWeave, and according to media reports in June this year, Microsoft has agreed to invest billions of dollars in CoreWeave over the next few years for cloud computing infrastructure.
In the latest round of the authoritative MLPerf AI performance benchmark, a cluster jointly built by Nvidia and CoreWeave with 3,584 H100 GPUs trained the GPT-3 large-language-model benchmark in under 11 minutes.
Not surprisingly, Nvidia continues to dominate benchmarks with its flagship computing chip, the H100 GPU.
The latest MLPerf Training 3.0 round added a GPT-3 large-model benchmark, and Nvidia and Intel were the only two submitters. Nvidia set the fastest GPT-3 training record with 3,584 GPUs, while Intel's Habana Gaudi2 AI chips demonstrated competitiveness in ease of use and cost-effectiveness by running GPT-3 on smaller systems: total training time was just over 5 hours on 384 Gaudi2 chips and just over 7 hours on 256 chips.
In some tests, Gaudi2 training performance exceeded that of Nvidia's A100 GPU. Intel also plans to further narrow the gap between Gaudi2 and the H100 through software optimization: it will release software support and new features for FP8 in September, and it predicts that Gaudi2 will then surpass the H100 in cost-performance. AMD, considered Nvidia's strongest rival, did not submit test results.
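The two Gaudi2 data points reported above also let us estimate how well training scaled as chips were added. A minimal sketch, treating "just over 5 hours" and "just over 7 hours" as approximately 5.0 h and 7.0 h (an assumption for illustration, not figures from the submission):

```python
# Rough scaling check on the reported Gaudi2 MLPerf GPT-3 times.
# The 5.0 h / 7.0 h inputs approximate "just over 5/7 hours" from the article.

def scaling_efficiency(chips_small, time_small_h, chips_large, time_large_h):
    """Ratio of ideal (linear-speedup) time to the observed time on the larger system."""
    ideal_time_h = time_small_h * chips_small / chips_large
    return ideal_time_h / time_large_h

eff = scaling_efficiency(256, 7.0, 384, 5.0)
print(f"Scaling 256 -> 384 chips: ~{eff:.0%} of ideal linear speedup")
```

Under these rounded inputs the run retains roughly 93% of ideal linear speedup when growing from 256 to 384 chips, which is consistent with the article's framing of Gaudi2 as competitive on smaller systems.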
MLPerf benchmark results are published by MLCommons. According to MLCommons executive director David Kanter, GPT-3 is the most computationally demanding of the MLPerf benchmarks: most benchmark networks in MLPerf can run on a single processor, but GPT-3 requires at least 64.
03. The Large-Model Arms Race Heats Up, and Demand for High-Compute AI Chips Skyrockets
Technology companies are racing to integrate AI into their products and services, and investors are pouring money into generative AI startups. Clearly, no one wants to miss this potentially historic wave of growth for lack of computing power.
Just this week came the largest generative AI acquisitions to date at home and abroad: US big-data super-unicorn Databricks agreed to acquire MosaicML, a US large-language-model startup, for $1.3 billion (about 9.4 billion yuan), and Meituan announced yesterday that it will acquire Light Years Beyond, a large-model startup founded by Meituan co-founder Wang Huiwen, for 2.065 billion yuan.
Training generative AI models is inseparable from expensive data-center computing chips. Against the backdrop of a fierce large-model arms race, market demand for high-compute AI chips continues to climb. For now, there is only one clear chip winner in training large AI models: Nvidia.
In November last year, Oracle announced the purchase of tens of thousands of A100 and H100 GPUs to build a new computing center. At its I/O developer conference in May this year, Google announced the A3, an AI supercomputer with 26,000 H100s. This week, media reported that Oracle is spending billions of dollars on Nvidia chips to expand its cloud computing services for the new AI wave.
Whether Nvidia can keep winning in the future is closely tied to the Chinese market. According to its financial reports, mainland China and Hong Kong accounted for 22% of Nvidia's revenue last year. According to LatePost, after this year's Spring Festival, major Chinese internet companies with cloud computing businesses placed large orders with Nvidia, including orders on the order of 10,000 cards each, estimated at more than 1 billion yuan at list prices. ByteDance alone may have placed orders this year approaching the total number of commercial GPUs Nvidia sold in China last year. Excluding this year's new orders, ByteDance's combined total of A100s and their predecessor V100s is close to 100,000 units; counting A100s and H800s both delivered and not yet delivered, the total is also about 100,000 units.
After rumors emerged this week that the U.S. Department of Commerce was considering further restrictions on Nvidia's A800 and H800 exports to China, Nvidia Chief Financial Officer Colette Kress warned that, over the long term, such restrictions "will cause U.S. industry to permanently lose the opportunity to compete and lead in one of the largest markets in the world, and will affect our future business and financial performance."
04. Conclusion: One Step Away From Becoming the World's Largest Chip Design Company
Whether measured by product performance, new orders, financial results, stock performance, ecosystem expansion, or investment footprint, Nvidia has already visibly won.
On May 25, Nvidia released its results for the first quarter of fiscal year 2024, reporting quarterly revenue of US$7.19 billion and guiding second-quarter revenue to US$11.00 billion. On June 12, Taiwanese media reported that, driven by increased orders for Nvidia AI chips, utilization of TSMC's advanced processes has risen significantly, with 5nm capacity utilization climbing from just over 50% to 70%-80%.
According to data recently released by market research firm TrendForce, boosted by explosive demand for generative AI and cloud computing power and by the launch of the new GeForce RTX 40 series, Nvidia's revenue in the first quarter of 2023 rose 13.5% to US$6.73 billion, lifting its share of the chip-design market to 19.9%.
TrendForce predicts that, with AI-related chip deployments driving notable revenue growth, Nvidia will overtake Qualcomm in the second quarter of 2023 to become the world's largest fabless chip design company.
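The TrendForce figures quoted above imply two numbers the article does not state directly: Nvidia's prior-quarter revenue and the size of the overall chip-design market. A back-of-the-envelope sketch (illustrative arithmetic only; the source's own rounding may differ slightly):

```python
# Implied figures from the TrendForce data: 13.5% quarter-over-quarter growth
# to $6.73B revenue, at a 19.9% share of the chip-design market.

q1_revenue_busd = 6.73   # Nvidia Q1 2023 revenue, billions of USD
growth = 0.135           # 13.5% QoQ growth
share = 0.199            # 19.9% market share

prev_quarter = q1_revenue_busd / (1 + growth)  # implied prior-quarter revenue
market_total = q1_revenue_busd / share         # implied total design market size

print(f"Implied prior-quarter revenue: ~${prev_quarter:.2f}B")
print(f"Implied total chip-design market: ~${market_total:.1f}B")
```

On these inputs, the prior quarter comes out to roughly $5.9 billion and the total market to roughly $34 billion per quarter, which gives a sense of the scale Qualcomm and the rest of the fabless field are competing over.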
With Nvidia established as the biggest beneficiary of the generative AI era, chip giants such as Intel and AMD are circling, trying to win a share of the AI computing market through hardware-software co-optimization. Meanwhile, whether China's domestic AI chip companies can catch this wave of large-model training and deployment has also become a focus of the industry.