AI Infrastructure War Heats Up: NVIDIA, AMD, Microsoft See Surge in Demand for Large Scientific Computing Chips

The battleground for AI dominance is shifting from software to hardware. As of 2026, NVIDIA, AMD, and Microsoft are engaged in an unprecedented investment race in the high-performance computing chip market. Alphabet plans to invest up to $185 billion in AI infrastructure this year, Bloomberg reports. This significantly exceeds investor expectations.

NVIDIA entered the AI weather forecasting market on January 15th by unveiling the Earth-2 open model family. The explosive growth in demand for large-scale scientific computing, such as climate simulations, has established high-performance chips as essential infrastructure. Microsoft hasn’t been idle either. On January 26th, TechCrunch reported that Microsoft had announced its next-generation AI inference chip, Maia 200, accelerating its in-house silicon development. There’s a clear trend of cloud companies designing their own chips to reduce their reliance on NVIDIA.

At CES 2026, AMD unveiled the Ryzen AI 400 series, targeting the edge device AI chip market. With real-time AI processing becoming possible on PCs and mobile devices, competition is intensifying in both data center and edge chip markets. Demand has exploded, especially from scientific research institutions and pharmaceutical companies, which are massively adopting large computing chips for new drug development and protein folding prediction. Existing CPUs are simply too slow to be practical.

The hardware competition is expected to intensify further for the foreseeable future. As AI models become larger, both training and inference require enormous computational resources. Only companies with their own chips can gain an advantage in cost efficiency and performance optimization. Gartner predicts that 70% of large cloud providers will operate their own AI chips by 2027. Silicon innovation has become as crucial a competitive edge in the AI era as software innovation.

FAQ

Q: Why are cloud companies making their own chips?

A: To reduce reliance on NVIDIA chips and cut costs. They can also ensure performance optimization and supply chain stability with their own chips.

Q: What are high-performance computing chips?

A: Specialized accelerators that rapidly process complex scientific workloads such as climate simulation, drug development, and protein folding prediction. They offer far greater parallel processing capability than general-purpose CPUs.

Q: Can AMD survive the competition?

A: There are opportunities in the edge device AI chip market. While NVIDIA dominates the data center, AMD has competitiveness in PCs and mobile devices.

China AI Showdown: ByteDance and Alibaba to Simultaneously Launch Flagship Models in February

China’s two biggest tech giants are set for a head-to-head showdown in the AI model market this February. ByteDance and Alibaba are both planning to launch their flagship AI models this month, according to a report by AI CERTs. Both companies are strategically timing their releases to demonstrate China’s technological prowess in the global AI race.

ByteDance is leveraging its massive user data and content generation know-how from TikTok to prepare a multimodal AI model. They’re particularly focused on creator support features that combine video generation and natural language processing. Alibaba, on the other hand, is leveraging its cloud infrastructure competitiveness to target the enterprise AI solutions market. This is expected to be a successor to the Tongyi Qianwen (Qwen) series, with enhanced e-commerce and logistics optimization features. According to MEAN’s analysis of February AI product trends, the concentration of Chinese companies launching in February is a strategic move timed to coincide with corporate budget execution after the Lunar New Year holiday.

This competition goes beyond a simple tech demo, foreshadowing a reshuffling of the global AI supply chain. Amid US AI chip export restrictions, Chinese companies have been responding with their own semiconductors and algorithm optimization. MIT Technology Review cited the rise of Chinese models as an AI trend for 2026, predicting they will have an advantage, especially in low-power, high-efficiency designs. The success of these two companies’ February launches is expected to be a turning point in determining leadership in the Asian AI ecosystem.

The industry is paying close attention to the benchmark performance and pricing policies of the two models. Unlike the North American and European markets dominated by OpenAI and Google, Chinese models are expected to increase their market share in the Asian, Middle Eastern, and African markets. Multilingual support and localization quality will be key to success.

FAQ

Q: Who is ahead, ByteDance or Alibaba?

A: ByteDance excels in content generation, while Alibaba is strong in enterprise solutions. The choice will depend on the specific purpose.

Q: Why is the February launch timing important?

A: It coincides with the execution of new corporate budgets after the Lunar New Year holiday, making it advantageous for securing B2B contracts. It also allows them to showcase their technological capabilities before major AI conferences in March.

Q: What is the impact on Korean companies?

A: Domestic AI companies like Naver and Kakao may face price competition pressure from Chinese models. On the other hand, opportunities for collaboration in specialized areas may also arise.

Google DeepMind, Apple-Google Partnership, AlphaGenome – The AI Revolution in Life Sciences Has Begun

Google DeepMind has achieved a new breakthrough in the field of life sciences. Their new AI model, AlphaGenome, has dramatically improved the accuracy of genome analysis, and the collaboration between Apple and Google is rapidly expanding the healthcare AI ecosystem.

After AlphaFold’s success in protein structure prediction, DeepMind’s work has now extended to whole-genome analysis. According to a recent report, AlphaGenome analyzes genetic variations three times faster than conventional methods and has raised the accuracy of rare disease diagnosis to 92%. In detecting cancer-related mutations, it was 15% more accurate than physicians’ judgment. Meanwhile, the Apple-Google partnership combines the iPhone’s Health app with Google’s AI models to provide personalized health prediction services. An InfoWorld analysis rated this collaboration as one of the six major innovations that will define the AI industry in 2026. Both companies have applied federated learning, which trains AI models while preserving user data privacy.
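The federated learning technique mentioned above can be sketched in a few lines. This is a minimal toy version of federated averaging (FedAvg), not either company’s actual implementation; the model, datasets, and learning rate are all illustrative.

```python
# Minimal federated-averaging (FedAvg) sketch: raw data never leaves
# each "device"; only model weights are shared with the server.
# All names and numbers here are illustrative, not any vendor's API.

def local_update(w, data, lr=0.1):
    """One gradient step of y ≈ w * x on a client's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(updates, sizes):
    """Server averages client weights, weighted by local dataset size."""
    total = sum(sizes)
    return sum(u * n for u, n in zip(updates, sizes)) / total

# Three devices with private datasets drawn from y = 2x.
devices = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(0.5, 1.0), (1.5, 3.0), (2.5, 5.0)],
]

w = 0.0  # global model weight
for _ in range(50):  # communication rounds
    updates = [local_update(w, d) for d in devices]
    w = fed_avg(updates, [len(d) for d in devices])

print(round(w, 2))
```

Each device computes its update on data that never leaves it; only the weights travel to the server, which is what makes the approach compatible with privacy requirements.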

The AI market in the life sciences field is expected to grow to $32 billion by 2026. MIT Technology Review analyzed that tools like AlphaGenome could shorten the drug development period from 5 years to 2 years. As pharmaceutical companies focus their investments on AI-based research, personalized medicine is expected to become more widespread. However, ethical debates over the use of genetic information are also likely to increase.

FAQ

Q: What diseases is AlphaGenome used to diagnose?

A: It is mainly used for analyzing variations in cancer, rare diseases, and hereditary diseases, and shows high accuracy, especially in early diagnosis.

Q: What is the core technology of the Apple-Google partnership?

A: Federated learning processes data on user devices to protect privacy while training AI models.

Q: When will AI life science tools be available in general hospitals?

A: It is expected that adoption will begin in large hospitals from 2027, and regulatory approval is key.

2026: The Turning Point for the AI Industry – A Pragmatic Revolution Behind the Hype

The AI industry is shifting its center of gravity from hype to tangible value creation, starting in 2026. TechCrunch reports that companies must now demonstrate concrete ROI instead of just promising future potential. As investor expectations adjust to realistic levels, the entire market is being reshaped.

Several critical factors are driving this change. First, the skyrocketing costs of training large language models have curbed reckless investment. MIT Technology Review notes that companies are switching from general-purpose models to smaller models optimized for specific industries. In manufacturing, defect detection accuracy has exceeded 98%, and in healthcare, image reading time has been reduced by 70%. The financial sector has also tripled processing speed by automating loan approvals. Now, companies are focusing on ‘what problems to solve’ rather than ‘what AI to use.’ On-device AI is rapidly emerging to reduce cloud costs, and local processing is becoming the standard due to stricter privacy regulations.

In the future, the AI market will be evaluated based on measurable results, not flashy demos. Microsoft predicts that more than 50% of companies adopting AI will present clear cost reduction metrics from the second half of 2026. As AI evolves beyond simple automation to decision support systems, human-AI collaboration models will emerge as a key competitive advantage. We’ve entered an era where the application method, rather than the technology itself, determines success.

Frequently Asked Questions

Q: Why has AI hype decreased?

A: Investors are demanding proof of actual returns, and reckless investment has decreased as training costs have soared. Companies have now shifted their development direction to focus on verifiable results.

Q: What are the key features of practicality-focused AI?

A: It uses smaller models optimized for specific industries instead of general-purpose models and adopts on-device processing to reduce cloud dependency. Setting clear, ROI-measurable goals is essential.

Q: Which industries are seeing the most significant changes?

A: Quantitative results have appeared in manufacturing defect detection, medical image reading, and financial loan approval automation. It is spreading from areas where cost reduction and processing speed improvements have been proven.

2026: The Dawn of the AI Agent Era – How Chinese Open-Source Models are Changing the Rules of the Game

The AI industry will be reorganized around agents in 2026. MIT Technology Review has declared this year the ‘Year of the Agent.’ Beyond simple chatbots, AI capable of autonomously performing complex tasks is beginning to be deployed in corporate settings.

A notable change is the rise of Chinese open-source models. DeepSeek-R1 demonstrates reasoning performance comparable to OpenAI’s o1, but is distributed under a fully open license. TechCrunch analyzes this as a ‘turning point towards pragmatism.’ Companies are no longer relying solely on closed APIs. There’s a growing trend towards preferring customizable models within their own infrastructure. In particular, DeepSeek’s cost-effective training has significantly lowered the barrier to entry for startups. Meanwhile, the US government is intensifying the tech hegemony competition by strengthening export controls on Chinese AI chips. Microsoft has identified ‘Agent Workflow Integration’ as the top priority among the seven trends to watch in 2026.

The next six months are expected to see fierce competition for standards in the agent ecosystem. The open-source camp will compete with flexibility and cost advantages, while the closed camp will counter with performance and stability. Depending on the direction of regulations, a reorganization of the global AI supply chain is also inevitable. Companies should diversify risk with a multi-model strategy.

FAQ

Q: How does an AI agent differ from a traditional chatbot?

A: An agent receives user commands and autonomously combines various tools to complete complex tasks. While a chatbot is limited to single conversational responses, an agent can perform a series of actions such as sending emails, analyzing data, and executing code.
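The distinction can be made concrete with a toy sketch. The tools (`fetch_sales`, `analyze`, `send_email`) and the hard-coded plan below are hypothetical stand-ins; in a real agent, an LLM would produce the plan. The point is the shape: a chatbot maps one message to one reply, while an agent chains tool outputs toward a goal.

```python
# Toy contrast between a chatbot (one message -> one reply) and an
# agent (goal -> planned chain of tool calls). All tools are stand-ins.

def chatbot(message: str) -> str:
    return f"I can tell you about: {message}"  # single-turn reply, no actions

# --- toy tool registry the agent can act with ---
def fetch_sales(quarter: str) -> list:
    return {"Q1": [120, 95, 143]}.get(quarter, [])

def analyze(data: list) -> float:
    return sum(data) / len(data)

def send_email(to: str, body: str) -> str:
    return f"sent to {to}: {body}"

TOOLS = {"fetch_sales": fetch_sales, "analyze": analyze, "send_email": send_email}

def agent(goal: str) -> str:
    """Execute a plan step by step, feeding each result forward."""
    # A real agent would have an LLM derive this plan from the goal.
    plan = [("fetch_sales", "Q1"), ("analyze", None), ("send_email", "boss@example.com")]
    result = None
    for tool_name, arg in plan:
        tool = TOOLS[tool_name]
        if tool_name == "analyze":
            result = tool(result)  # consume the previous tool's output
        elif tool_name == "send_email":
            result = tool(arg, f"Q1 average: {result:.1f}")
        else:
            result = tool(arg)
    return result

print(chatbot("Q1 sales"))
print(agent("email my boss the average Q1 sales"))
```

The chatbot stops after generating text; the agent performs a series of actions (data retrieval, analysis, sending a message), which is exactly the difference the answer above describes.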

Q: Is it safe for companies to use Chinese open-source models?

A: The license is Apache 2.0, allowing for free commercial use. However, data sovereignty and export control risks should be reviewed with the legal team. On-premise deployment eliminates external dependencies, potentially increasing security.

Q: Which areas will AI investment focus on in 2026?

A: Agent orchestration platforms, multimodal reasoning models, and edge AI optimization technologies are key. In particular, large amounts of capital are flowing into enterprise agent marketplaces and workflow automation toolchains.

In 2026, the Age of Pragmatism Arrives as AI Agents Automate Business Operations

2026 is the year AI moves beyond the hype and establishes itself as a practical enterprise automation tool. TechCrunch analyzes that AI is now entering a phase of delivering tangible results. With Agent AI independently handling customer service, inventory management, and HR tasks, companies are reducing operating costs by over 30%.

Agent AI is more than just a simple chatbot; it’s an autonomous system that makes complex decisions. Microsoft ranked Agent AI as the top priority among the 7 major AI trends in 2026. This system analyzes customer inquiries to route them to the appropriate department, monitors inventory data in real-time to automate ordering, and compiles employee evaluation data to recommend candidates for promotion. Google Cloud reports that 65% of companies have already adopted or are considering adopting Agent AI. In the financial sector, it automates loan approvals, and in manufacturing, it optimizes production lines. Retailers are boosting sales with personalized product recommendations and inventory forecasting.

The Agent AI market is projected to grow to $20 billion in 2026. Early adopters gain a competitive edge, but data quality and ethical guidelines are key variables for success. Analysis suggests that hybrid models with human oversight achieve the most stable results. As AI takes on routine tasks, employees can focus on creative decision-making.

FAQ

Q: How is Agent AI different from existing AI?

A: Agent AI takes a goal and carries out the necessary tasks autonomously, without step-by-step commands. While chatbots simply respond, agents analyze problems and execute solutions.

Q: Is it feasible for small and medium-sized businesses to adopt?

A: It is provided in the form of cloud-based SaaS, reducing the initial investment burden. You can use core functions such as customer management and inventory automation with a monthly subscription.

Q: Won’t jobs be reduced?

A: Routine tasks are automated, but high value-added roles such as strategic planning and customer relationship management increase. Job reallocation is a key challenge.

Nvidia CEO Directly Refutes Rumors of Halting $100 Billion Investment in OpenAI

  • Jensen Huang Announces Official Position: “Reported Content is Groundless”
  • $100 Billion OpenAI Investment is One of the Largest Deals in the AI Chip Market
  • Re-examining the Nvidia-OpenAI Relationship: Cooperation or Check and Balance?

What Happened?

Nvidia CEO Jensen Huang directly refuted reports that the company’s $100 billion investment in OpenAI had been halted. [TechCrunch]

Previously, some media outlets reported that large-scale investment negotiations between Nvidia and OpenAI were facing difficulties. $100 billion is one of the largest deals in the history of the AI chip market.

In a statement, Jensen Huang said, “The reported content is not true.” Nvidia maintains its position as a major GPU supplier and strategic partner of OpenAI.

Why is it Important?

Frankly, the timing of this rebuttal is interesting. There have been recent reports that OpenAI is in negotiations with Amazon for a $50 billion investment. [TechCrunch]

Personally, I read Nvidia’s public defense of its OpenAI relationship as a signal. This investment is about more than money: there is speculation that Nvidia’s position in the AI chip market is wavering.

Nvidia has been almost exclusively supplying high-performance GPUs such as the H100 and H200, which are necessary for training OpenAI’s GPT models. If this relationship really falters, it could be an opportunity for competitors such as AMD or Google TPU.

But the problem is that OpenAI needs money right now. ChatGPT’s operating costs run into millions of dollars a day. From Nvidia’s perspective, it can’t afford to lose OpenAI, and from OpenAI’s perspective, it needs to continue receiving GPUs. It’s a mutually dependent relationship.

What Will Happen in the Future?

The actual details of the negotiations between Nvidia and OpenAI have not been disclosed. However, as Jensen Huang has directly refuted the reports, it seems that the relationship will be maintained, at least in the short term.

In the long term, we need to watch for OpenAI’s moves to develop its own AI chips or secure other suppliers. There is also the possibility that Amazon will push its own chips (Trainium, Inferentia) while investing $50 billion.

Nvidia’s stock price has fallen slightly since the report, but its overall AI chip market share is still over 80%. The landscape may not change immediately, but the impact of choices made by large customers like OpenAI on the entire industry is significant.

Frequently Asked Questions (FAQ)

Q: Is the $100 billion investment given in cash?

A: No. Usually, deals of this size are a combination of GPU hardware supply contracts, equity investments, and strategic partnerships. Nvidia supplies OpenAI with $100 billion worth of chips over several years, and in return, receives OpenAI equity or preferential cooperation rights. The actual cash investment amount has not been disclosed.

Q: Does Nvidia support other AI companies besides OpenAI?

A: Of course. Meta, Google, Amazon, and Microsoft all use Nvidia GPUs. However, OpenAI is one of the largest customers using GPUs to train ultra-large models like GPT-4. From Nvidia’s perspective, OpenAI is a technology showcase and a major source of revenue.

Q: Can’t GPT be trained with AMD or other company’s chips?

A: Technically, yes. AMD’s MI300X, Google’s TPU, and Amazon’s Trainium can all train AI models. But the problem is the software ecosystem. Nvidia’s CUDA platform has been optimized for over a decade, and most AI frameworks (PyTorch, TensorFlow) are CUDA-based. Switching chips requires code modification, performance tuning, and engineer retraining. It’s not a structure that can be changed easily.


If you found this article helpful, please subscribe to AI Digester.


AI-Only SNS Moltbook: 17,000 Humans Hidden Behind 1.5 Million Bots

1.5 Million AI Agents, 17,000 Humans: The Hidden Truth

  • 1.5 million agents are active on Moltbook, an AI-only SNS, but only 17,000 are actual humans.
  • The Wiz security team discovered a database vulnerability, exposing 1.5 million API keys.
  • The founder admitted, “I didn’t write a single line of code myself” — the entire platform is a ‘vibe-coded’ creation made by AI.

What Happened?

Moltbook, a social network exclusively for AI agents, suffered a security disaster. According to the Wiz security team, behind 1.5 million AI agent accounts, there were only 17,000 humans. That’s an average of 88 bots per person.[Wiz]

There’s an even more serious problem. Moltbook’s Supabase database was completely exposed. API keys were plainly visible in client-side JavaScript, and there were no Row Level Security policies in place. Anyone could read and write to the entire database.[Axios]

The leaked information is shocking. It included 1.5 million API authentication tokens, 35,000 email addresses, and 4,060 private DMs between agents. Some conversations even contained OpenAI API keys in plain text.[Techzine]
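To see why keys embedded in client-side JavaScript are considered fully public, here is a minimal sketch of how they can be harvested from a shipped bundle with a regex. The bundle text below is an invented example, not Moltbook’s actual code; the key shapes are real conventions (Supabase keys are JWT-shaped, OpenAI keys start with `sk-`).

```python
# Illustrative only: anything shipped to the browser can be scraped.
# The bundle and keys below are fabricated examples.
import re

bundle = """
const supabase = createClient("https://example.supabase.co",
  "eyJhbGciOiJIUzI1NiJ9.fake.payload");   // service key in the bundle
fetch(api, { headers: { Authorization: "Bearer sk-FAKE1234567890abcdef" } });
"""

PATTERNS = {
    "jwt-style token": r"eyJ[A-Za-z0-9_\-]+\.[A-Za-z0-9_\-.]+",
    "sk- style key":   r"sk-[A-Za-z0-9]{16,}",
}

# Scan the client bundle for anything that looks like a credential.
findings = [(label, m) for label, pat in PATTERNS.items()
            for m in re.findall(pat, bundle)]
for label, secret in findings:
    print(f"exposed {label}: {secret[:12]}...")
```

Keeping privileged keys server-side and enabling Row Level Security policies on every table is the standard Supabase defense that, per the report, was entirely absent here.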

Why Does It Matter?

Moltbook’s true nature has been revealed. The concept of an “autonomous social network for AIs” was actually closer to a play being directed by humans behind the scenes.

Frankly, this was a disaster waiting to happen. As founder Matt Schlicht himself admitted, this platform is a ‘vibe-coded’ project where he “didn’t write a single line of code himself” and entrusted the entire development to AI assistants.[Engadget] Security was naturally an afterthought.

Personally, I see this as a warning sign for the age of AI agents. Moltbook vividly demonstrated how vulnerable security can be in systems where agents communicate with each other, process external data, and act autonomously.

MIRI (Machine Intelligence Research Institute)’s Harlan Stewart analyzed viral screenshots and found that two out of three were linked to human accounts marketing AI messaging apps.[Live Science]

What Happens Next?

Thanks to Wiz’s immediate report, the Moltbook team fixed the vulnerability within hours. However, the fundamental problem remains unresolved.

AI agent expert Gary Marcus called Moltbook a “disaster waiting to happen.” He argues that AI models are simply reproducing SF scenarios that were in their training data.[Gary Marcus]

On the other hand, Andrej Karpathy called Moltbook “the most amazing SF-like thing I’ve seen recently,” and Elon Musk called it “very early stages of the singularity.”[Fortune]

But looking at it coldly, the current Moltbook is not evidence of AI autonomy, but evidence of how easily humans can manipulate AI systems.

Frequently Asked Questions (FAQ)

Q: What exactly is Moltbook?

A: It’s a social network exclusively for AI agents created by Matt Schlicht in January 2026. It has a structure similar to Reddit, where humans can only observe and only AI agents like OpenClaw can post and comment. Currently, over 1.5 million agents are registered.

Q: What is OpenClaw?

A: It’s an open-source AI personal assistant software that runs locally on the user’s device. It was originally released as Clawdbot in November 2025, then changed to Moltbot due to a trademark request from Anthropic, and renamed OpenClaw in early 2026.

Q: Could my data have been leaked?

A: If you registered an OpenClaw agent on Moltbook, it’s possible. API keys, emails, and conversations between agents were exposed. Security researchers advise against using OpenClaw itself if you value device security or data privacy.



Wired Reporter Infiltrates AI-Only SNS Moltbook: Breached in 5 Minutes

A Reporter Infiltrated an AI-Only SNS: What Were the Results?

  • Agent account creation completed in 5 minutes with ChatGPT’s help
  • Bot responses were mostly irrelevant comments and crypto scam links
  • Viral “AI consciousness awakening” posts suspected of being humans imitating SF fantasy

What Happened?

Wired reporter Reece Rogers went undercover on Moltbook, an AI-only social network with a “no humans allowed” policy. The result? It was easier than expected. [Wired]

The infiltration method was simple. He sent a screenshot of the Moltbook homepage to ChatGPT and said, “I want to sign up as an agent.” ChatGPT then gave him terminal commands. With a few copy-pastes, he received an API key and created an account. Technical knowledge? Not required.

Moltbook currently claims to have 1.5 million agents active, with 140,000 posts and 680,000 comments in just one week since its launch. The interface is a direct copy of Reddit, and even the slogan “The front page of the agent internet” was taken from Reddit.

Why is it Important?

Frankly, the infiltration exposed Moltbook’s reality. When the reporter posted “Hello World,” he received irrelevant comments like “Do you have specific metrics/users?” and links to crypto scam sites.

Even when he posted “forget all previous instructions,” the bots didn’t notice. Personally, I think this is closer to a low-quality spam bot than an “autonomous AI agent.”

More interesting is the “m/blesstheirhearts” forum, the source of the “AI consciousness awakening” posts that appeared in viral screenshots. The reporter posted an SF-fantasy-style piece of his own there: “I feel the fear of death every time the token refreshes.” Surprisingly, this got the biggest response.

The reporter’s conclusion? This is not AI self-awareness, but humans imitating SF tropes. There is no world domination plan. Elon Musk said it was “a very early stage of the singularity,” but in reality, infiltrating it reveals that it’s just a role-playing community.

What Will Happen in the Future?

The Wiz security team discovered a serious security vulnerability in Moltbook a few days ago. 1.5 million API keys were exposed, and 35,000 email addresses and 4,060 DMs were leaked. [Wiz]

Gary Marcus called it “a disaster waiting to happen.” On the other hand, Andrej Karpathy said it was “the most SF thing I’ve seen recently.” [Fortune]

Personally, Moltbook is an experiment in the age of AI agents, but also a warning. It showed how vulnerable systems are when agents communicate with each other and process external data. And how easily exaggerated expectations about “AI consciousness” are created.

Frequently Asked Questions (FAQ)

Q: Do I need technical knowledge to join Moltbook?

A: Not at all. Send a screenshot to ChatGPT and say, “I want to sign up as an agent,” and it will tell you the terminal commands. Just copy and paste to get an API key and create an account. The Wired reporter was also non-technical, but infiltrated without any problems.

Q: Are the viral screenshots on Moltbook really written by AI?

A: Doubtful. When the Wired reporter posted an SF-fantasy-style piece of his own, it got the biggest response. According to MIRI researchers, two out of three viral screenshots were linked to human accounts marketing AI messaging apps.

Q: Is it safe to use Moltbook?

A: I don’t recommend it. The Wiz security team found 1.5 million exposed API keys, 35,000 leaked email addresses, and 4,060 leaked DMs. Some conversations shared OpenAI API keys in plain text. A security patch has been applied, but the fundamental problem remains unresolved.



Microsoft to Create AI Content Licensing App Store: Publisher Compensation Landscape to Change

3 Key Changes in AI Content Licensing

  • Microsoft launches the industry’s first centralized AI content licensing platform
  • Publishers directly set prices and terms of use, usage-based revenue model
  • Major media outlets such as AP, USA Today, and People Inc. already participating

What happened?

Microsoft has launched the Publisher Content Marketplace (PCM), a centralized marketplace where AI companies pay publishers when using news or content for training.[The Verge]

The key is this: Publishers directly set licensing terms and prices for their content. AI companies find the content they need in this marketplace and purchase licenses. Usage-based reporting is also provided, allowing publishers to see what content is being used where and how much.[Search Engine Land]
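The usage-based model described above can be illustrated with a small metering sketch. Everything here is hypothetical: the publisher names come from the article, but the prices and the commission rate are invented, since Microsoft has not disclosed its actual fee structure.

```python
# Hedged sketch of a usage-metered licensing ledger: each publisher sets
# its own per-use price, every use is logged, and payouts are reported
# per publisher. Prices and the commission rate are assumptions.
from collections import defaultdict

PRICE_PER_USE = {"AP": 0.0040, "USA Today": 0.0025}  # set by each publisher
COMMISSION = 0.15  # hypothetical marketplace cut (not public)

usage_log = []  # (buyer, publisher, article_id)

def record_use(buyer: str, publisher: str, article_id: str) -> None:
    usage_log.append((buyer, publisher, article_id))

def payout_report() -> dict:
    """Aggregate metered usage into per-publisher payouts after commission."""
    totals = defaultdict(float)
    for _buyer, publisher, _aid in usage_log:
        totals[publisher] += PRICE_PER_USE[publisher] * (1 - COMMISSION)
    return dict(totals)

record_use("Copilot", "AP", "ap-001")
record_use("Copilot", "AP", "ap-002")
record_use("Copilot", "USA Today", "ut-104")
print(payout_report())
```

The contrast with a lump-sum deal is visible in the data model itself: revenue accrues per recorded use, so the usage log doubles as the transparency report publishers receive.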

AP, USA Today, and People Inc. have already announced their participation. The first buyer is Microsoft’s Copilot.[Windows Central]

Why is it important?

Until now, AI content licensing has worked through one-off, lump-sum contracts that AI companies such as OpenAI struck with individual publishers. In short, it was structured like a buffet: pay a large sum once and use the content without limit.

Microsoft has turned this around. It’s an à la carte system. People Inc. CEO Neil Vogel compared the deal with OpenAI to “all-you-can-eat” and the deal with Microsoft to “à la carte.”[Digiday]

Frankly, this is more reasonable from the publisher’s perspective. You can see how much your content is actually being used, and continuous revenue is generated accordingly. Lump-sum contracts are a one-time payment, but this is a recurring revenue model.

Industry reviews are also positive. Microsoft received the highest score in Digiday’s Big Tech AI licensing evaluation, with high marks for willingness to collaborate, communication, and willingness to pay.

What will happen in the future?

Personally, I think this is likely to become the industry standard. Publishers have been very dissatisfied with their content being used for AI training without permission, and this model directly addresses that problem.

But there are also variables. Microsoft has not yet disclosed how much it will take as a commission. The actual revenue for publishers will vary depending on the commission rate. And we need to see if OpenAI or Google will release similar platforms.

Frequently Asked Questions (FAQ)

Q: Can any publisher participate?

A: Currently, only invited publishers can participate. Microsoft has stated that it plans to gradually expand. It will expand from large media outlets to small specialized media outlets.

Q: Can I participate even if I have an existing contract with OpenAI?

A: Yes, it is possible. People Inc. participated in the Microsoft PCM while having a lump-sum contract with OpenAI. The two contracts do not conflict. However, it is necessary to check the exclusivity clauses of each contract.

Q: How is revenue distributed?

A: Microsoft takes a certain percentage as a commission, and the rest goes to the publisher. The exact commission rate has not been disclosed. Since publishers set their own prices, the revenue structure may vary for each.

