SpaceX-xAI $1.25 Trillion Merger Official: Largest M&A in History Opens Era of Space Data Centers

Update (2026-02-02): SpaceX-xAI merger officially announced. Confirmed at $1.25 trillion valuation, breaking the all-time largest M&A record.

SpaceX-xAI Merger Official: $1.25 Trillion, Largest M&A in History

  • SpaceX has officially acquired xAI. Combined valuation of $1.25 trillion sets the all-time M&A record
  • xAI shareholders will receive 0.1433 SpaceX shares, valued at $526.59 each, per xAI share
  • Musk cited building space data centers as the core reason for the merger

What Happened?

Bottom line: Musk actually went through with it. On February 2nd, SpaceX officially acquired xAI.[TechCrunch]

Combined enterprise value is $1.25 trillion. SpaceX is valued at $1 trillion (up from $800 billion in December 2025 secondary sale), xAI at $250 billion.[Bloomberg]

Deal structure is an all-stock exchange. xAI shareholders receive 0.1433 SpaceX shares, valued at $526.59 each, for every xAI share. xAI employees also have a cash-out option at $75.46 per share.[CNBC]
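As a sanity check on the terms, the cash-out price matches the value implied by the exchange ratio: 0.1433 SpaceX shares at $526.59 each works out to about $75.46 per xAI share.

```python
# Implied value of one xAI share under the all-stock exchange,
# using the figures reported above.
exchange_ratio = 0.1433         # SpaceX shares received per xAI share
spacex_price = 526.59           # USD per SpaceX share
implied_value = exchange_ratio * spacex_price
print(f"${implied_value:.2f} per xAI share")  # lines up with the $75.46 cash-out
```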

This is the largest M&A in history, surpassing the record set 25 years earlier by Vodafone’s $203 billion acquisition of Mannesmann in 2000.[Fortune]

Why Does This Matter?

The key is space data centers. Musk stated in an internal memo that “within 2-3 years, space will become the lowest-cost location for AI computation.”[TechCrunch]

SpaceX recently applied to the FCC for permission to launch 1 million satellites. This is part of the “orbital data center” project. The goal is to combine the Starlink satellite network (currently over 9,000 satellites) with xAI’s Grok model.

Honestly, the concept itself is brilliant. The logic is to solve terrestrial data center power and cooling issues in space. But feasibility remains questionable. Satellite communication latency, hardware maintenance, and cosmic radiation issues remain unsolved.

Personally, I see a more realistic reason. xAI is currently burning $1 billion a month, while SpaceX generated $8 billion in profit on $15-16 billion in revenue in 2025. A cash-generating company absorbed a cash-burning one.

What’s Next?

IPO is next. At $1.25 trillion valuation, it would immediately enter the Top 10 US public companies by market cap. June listing is likely.[Sherwood News]

Tesla merger possibility has been ruled out for now. The SpaceX-Tesla scenario discussed in previous reports was not included in this announcement.

But regulatory risk remains. We need to watch how FTC and DOJ view this mega-consolidation of space and AI assets. Musk’s political influence is a variable.

Frequently Asked Questions (FAQ)

Q: What happens to xAI shareholders?

A: They receive 0.1433 SpaceX shares, valued at $526.59 each, per xAI share. Employees can choose a cash-out at $75.46 per share. Since xAI acquired X (Twitter) last year, X shareholders also indirectly gain SpaceX shares. The first public trading opportunity opens after the IPO.

Q: Are space data centers really possible?

A: Technically, yes. SpaceX’s application for 1 million satellite permits to the FCC is real. But timing and economics are uncertain. Musk claims space will be the lowest-cost AI computation location within 2-3 years, but satellite communication latency and hardware maintenance issues remain.

Q: When can regular investors invest?

A: When the IPO happens. June listing is likely, and at $1.25 trillion it would be one of the largest IPOs ever. SpaceX has been private, making it inaccessible to regular investors. This merger opens the opportunity to invest in xAI and Starlink business at once.


If you found this article useful, please subscribe to AI Digester.


Google AI Decodes Genomes of 1.85 Million Endangered Species


  • Google AI expands genetic information preservation for endangered species
  • DeepPolisher reduces genome assembly errors by 50%
  • Earth BioGenome Project targets 10,000 species by 2026

What Happened?

Google announced a project to preserve the genetic information of endangered species using AI.[Google Blog]

The core technologies are DeepVariant and DeepPolisher. DeepVariant is a deep learning model that identifies DNA variants, while DeepPolisher reduces genome assembly errors by 50%.[New Atlas]

These tools are being deployed for the Earth BioGenome Project (EBP). The goal is to decode 1.85 million species, with 3,000 species completed so far.[EBP]

Why Does It Matter?

Simply put, it is creating a genetic backup before extinction.

Personally, I see AI playing a decisive role here. While sequencing costs have plummeted, data analysis was the bottleneck. AI is solving this bottleneck.

EBP aims for 10,000 species by 2026. It currently processes 20 species per week, but the target requires 67.[Science]
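Those rates are easy to reconstruct from the figures above, assuming the 10,000-species target falls at the end of 2026, roughly two years (about 104 weeks) out — the timeline is an assumption, not a figure from EBP.

```python
# Required weekly sequencing rate to hit the 2026 EBP target,
# assuming ~104 weeks remain (an assumption, not an EBP figure).
target_species = 10_000
completed_species = 3_000
weeks_remaining = 104
required_rate = (target_species - completed_species) / weeks_remaining
print(round(required_rate), "species per week needed, vs 20 today")
```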

What Comes Next?

UNEP-WCMC and Google have started analyzing wildlife trade data with AI.[UNEP-WCMC] The scope is expanding from genome preservation to illegal trade monitoring.

Frequently Asked Questions (FAQ)

Q: Can genome preservation bring back extinct species?

A: Theoretically, it opens the possibility. If genetic information is preserved, future technology could attempt restoration. However, current technology makes it difficult. The current goal is to record the genetic diversity of living species for conservation strategies. Prevention takes priority over restoration.

Q: How does DeepVariant work?

A: It converts DNA sequencing data into image-like formats and analyzes them with deep learning. It has higher variant detection accuracy than traditional statistical methods. After its 2018 release, it contributed to completing the first complete human genome. It is open-source, so any researcher can use it.
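To make the “image-like format” idea concrete, here is a toy sketch: aligned reads are stacked into a grid and mismatches against the reference are marked. This is not DeepVariant’s real encoding (which builds multi-channel pileup tensors from sequencing files); it only illustrates the shape of the input.

```python
# Toy pileup: each row is a read, each column a reference position.
# 1 = base matches the reference, 2 = mismatch (a candidate variant signal).
reference = "ACGTACGT"
reads = ["ACGTACGT", "ACGAACGT", "ACGAACGT"]  # two reads disagree at position 3

pileup = [
    [1 if ref_base == read_base else 2
     for ref_base, read_base in zip(reference, read)]
    for read in reads
]
for row in pileup:
    print(row)  # the mismatch column is what the model learns to classify
```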

Q: Is sequencing 1.85 million species realistic?

A: It is challenging. Since starting in 2018, 3,000 species have been completed. The phase 2 goal is 150,000 species by 2030, requiring a 36x increase in weekly processing. Both AI analysis speed improvements and infrastructure innovations like portable sequencing labs are needed simultaneously.



Anthropic Declares No Ads on Claude, Takes Aim at ChatGPT During Super Bowl

Anthropic “No Ads on Claude”, Takes Aim at ChatGPT During Super Bowl

  • Claude no-ads policy officially announced
  • Super Bowl ad directly targets ChatGPT
  • Subscription-based business model emphasized

What Happened?

Anthropic declared it will not put ads on Claude.[CNBC] This came right after OpenAI announced ChatGPT ad testing.[Axios]

Why Does It Matter?

This is a business-model contest among AI chatbots. OpenAI chose ads; Anthropic chose subscriptions only. 80% of Anthropic’s $9 billion annual revenue comes from paid users.[Neowin]

What Comes Next?

The AI chatbot market may split along the ads-versus-no-ads line. Anthropic’s Super Bowl ad made the positioning explicit: “Ads are coming to AI. Not to Claude.”[Adweek]

Frequently Asked Questions (FAQ)

Q: Is Claude free?

A: There is a free tier but it’s limited. Paid subscriptions offer more usage.

Q: Where are ChatGPT ads shown?

A: On free and Go tiers. Pro and above have no ads.

Q: How much does a Super Bowl ad cost?

A: Over $7 million for 30 seconds.



NVIDIA Takes #1 in Document Search: Nemotron ColEmbed V2 Released

Achieved #1 Overall on ViDoRe V3 Benchmark

  • Scored NDCG@10 of 63.42, ranking #1 overall on ViDoRe V3 benchmark
  • Available in three model sizes: 3B, 4B, and 8B for diverse use cases
  • Late-Interaction approach enables simultaneous text and image search

What Happened?

NVIDIA released Nemotron ColEmbed V2, a multimodal document search model.[Hugging Face] This model specializes in Visual Document Retrieval, searching documents containing visual elements using text queries. It achieved #1 overall on the ViDoRe V3 benchmark with an NDCG@10 score of 63.42.[NVIDIA]

The model comes in three sizes. The 8B model delivers top performance (63.42), the 4B ranks 3rd with 61.54, and the 3B ranks 6th with 59.79. It uses ColBERT-style Late-Interaction mechanism to calculate precise similarity at the token level.[Hugging Face]
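NDCG@10 rewards putting the most relevant documents at the top of the first ten results. A minimal reference implementation of the metric (the relevance labels in the example are made up for illustration):

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k: discounted cumulative gain of the ranking, normalized by
    the DCG of the ideal (relevance-sorted) ordering."""
    def dcg(rels):
        return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Hypothetical graded relevance labels for retrieved documents, in rank order.
print(ndcg_at_k([3, 2, 0, 1]))  # < 1.0 because a relevant doc is ranked too low
```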

Why Does It Matter?

Enterprise documents are not just text. They contain tables, charts, and infographics. Traditional text-based search misses these visual elements. Nemotron ColEmbed V2 understands both images and text together, improving search accuracy.

This is particularly valuable for RAG (Retrieval-Augmented Generation) systems. Before an LLM generates a response, it needs to find relevant documents. The accuracy of this retrieval step determines the final response quality. Key improvements over V1 include advanced model merging techniques and multilingual synthetic data training.

What Comes Next?

Multimodal search is becoming a necessity, not an option. NVIDIA plans to integrate this model into its NeMo Retriever product line. Competition in document search accuracy for enterprise RAG pipelines is about to intensify. However, the Late-Interaction approach requires storing token-level embeddings, which means higher storage costs.

Frequently Asked Questions (FAQ)

Q: What is Late-Interaction?

A: Traditional embedding models compress an entire document into a single vector. Late-Interaction keeps a separate vector for each token: for every query token, it takes the maximum similarity over all document tokens, then sums those maxima. It is more precise but requires more storage space.
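A minimal sketch of that scoring rule, with tiny hand-written vectors standing in for real token embeddings:

```python
import math

def maxsim(query_vecs, doc_vecs):
    """ColBERT-style late interaction: for each query token, take its best
    cosine similarity over all document tokens, then sum those maxima."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    return sum(max(cosine(q, d) for d in doc_vecs) for q in query_vecs)

query = [[1.0, 0.0], [0.0, 1.0]]                 # 2 query tokens, dim 2
document = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]  # 3 document tokens, dim 2
print(maxsim(query, document))  # near 2.0: both query tokens find a close match
```

Storing one vector per token instead of one per document is exactly what drives the higher storage cost.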

Q: Which model size should I choose?

A: Use the 8B model if accuracy is the top priority. The 4B offers a good balance between cost and speed. The 3B remains competitive in resource-constrained environments. All are available for free on Hugging Face.

Q: Can I apply this to existing RAG systems?

A: Yes. Load it via Hugging Face Transformers and replace the embedding model in your existing pipeline. You may need to adjust the vector DB indexing method due to Late-Interaction characteristics. NVIDIA NGC also provides containers.



GitHub Agent HQ: Unified Control for Claude, Codex, and 6 AI Coding Agents

GitHub Agent HQ: 6 AI Agents Unified

  • GitHub announced Agent HQ for unified management of AI agents including Claude, Codex, and Jules
  • All agents available with existing Copilot subscription
  • Shift from agent selection era to collaboration era

What Happened?

GitHub unveiled ‘Agent HQ’, an integrated platform for AI coding agents. This is the biggest change since Copilot launched.[The New Stack]

It supports Claude, Codex, Jules, Cognition, and xAI agents. All available with existing Copilot subscription.[Security Brief]

Why Does It Matter?

It solves developers’ tool selection dilemma. Mission Control enables managing multiple agents simultaneously. Including competitor agents is an unprecedented strategy.[iTWire]

What’s Next?

Full agent integration expected by 2026. GitHub ecosystem integration becomes more important than individual agent performance.

Frequently Asked Questions (FAQ)

Q: Are there additional costs for Agent HQ?

A: Existing Copilot paid subscribers use all agents at no extra cost. External agents like Claude, Codex, and Jules are included in the same subscription.

Q: Where can I use Mission Control?

A: Available on GitHub web, VS Code, mobile app, and CLI. Check agent task status, adjust direction, and approve code all in one place.

Q: Which AI agents are supported?

A: GitHub Copilot is built-in, with Claude Code, Codex, Jules, Cognition, and xAI added. Each agent handles tasks from issue processing to PR responses.



MIT Professor Antonio Torralba Named 2025 ACM Fellow


  • World-leading authority in computer vision and machine learning
  • Three MIT alumni also named ACM Fellows
  • ACM Fellowship is the highest honor in computing

What Happened?

Professor Antonio Torralba of MIT’s Department of Electrical Engineering and Computer Science has been named a 2025 ACM Fellow.[MIT News] Professor Torralba was recognized for his contributions to computer vision, machine learning, and human visual perception. Three MIT alumni (Eytan Adar, George Candea, and Suk-Gwon Edward Suh) were also included in this cohort.

ACM Fellowship is the highest honor given to experts who have achieved outstanding accomplishments in computing and information technology.[ACM] Professor Torralba is also a principal researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds and Machines (CBMM).

Why Does This Matter?

Professor Torralba’s research aims to “build systems that perceive the world like humans do.” This is core technology for AI applications such as autonomous driving, medical image analysis, and robotics. He co-authored the 800+ page textbook “Foundations of Computer Vision” and previously served as director of MIT Quest for Intelligence and MIT-IBM Watson AI Lab.

What’s particularly notable is that his research extends beyond academic achievements. His influence across academia has been recognized through the 2021 AAAI Fellowship and an honorary doctorate from the Polytechnic University of Catalonia in 2022. As the faculty lead for AI and decision-making at MIT, he is also contributing to training the next generation of AI researchers.

What’s Next?

Computer vision is emerging as a core pillar of multimodal AI. Research led by experts like Professor Torralba is expected to lead to the development of more sophisticated visual recognition systems. Combined with MIT’s strong AI research ecosystem, industrial applications are also expected to expand.

Frequently Asked Questions (FAQ)

Q: What is an ACM Fellow?

A: ACM Fellowship is the highest honor awarded by the Association for Computing Machinery (ACM). It is given to experts who have achieved outstanding accomplishments or made exceptional contributions to the computing and information technology community. Only a small number of researchers worldwide receive this honor each year.

Q: What are Professor Antonio Torralba’s main research areas?

A: Professor Torralba researches computer vision, machine learning, and human visual perception. His goal is to build AI systems that perceive the world like humans do. He conducts research at CSAIL and the Center for Brains, Minds and Machines, and leads the AI faculty at MIT.

Q: Who are the MIT alumni named alongside him?

A: Eytan Adar (Class of 1997), George Candea (Class of 1997), and Suk-Gwon Edward Suh (MS 2001, PhD 2005) were also named 2025 ACM Fellows, recognized for their outstanding achievements in computing.



OpenAI Codex App Server Released: The Rise of General-Purpose Agent Harness

OpenAI Codex App Server: A New Standard for Coding Agents

  • OpenAI releases Codex App Server architecture
  • JSON-RPC 2.0 based bidirectional communication protocol
  • Over 1 million developers already using Codex

What Happened?

OpenAI has publicly disclosed the App Server architecture, the core infrastructure of Codex. The Codex App Server is the interface that powers rich clients like VS Code extensions.[OpenAI Developers] It manages authentication, conversation history, approval processes, and streaming agent events in a unified manner.

The protocol is based on JSON-RPC 2.0 and communicates bidirectionally in JSONL format via stdio.[OpenAI Developers] It is structured around three core concepts: Thread (conversation), Turn (single request-response), and Item (message, command, file change).
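As a concrete sketch of that wire format, a client writes one JSON-RPC 2.0 object per line to the server’s stdin and reads JSONL responses back. The method name and params below are illustrative placeholders, not the actual App Server API surface.

```python
import json
import sys

# One JSON-RPC 2.0 request, serialized as a single JSONL line. The method
# and params are hypothetical stand-ins for the real App Server protocol.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "thread/start",   # placeholder name, not a documented method
    "params": {"prompt": "Fix the failing test in utils.py"},
}
line = json.dumps(request)
sys.stdout.write(line + "\n")   # the server would reply with JSONL events
                                # (turn started, items streamed, turn completed)
```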

Why Does This Matter?

There is a reason why Codex is called “a general-purpose agent harness disguised as a programmer tool.”[Simon Willison] With the App Server now public, developers can deeply integrate Codex into their own products. Beyond existing CLIs or simple API calls, they can now directly implement real-time agent event streaming and approval flows.

Since the release of GPT-5.2-Codex, total Codex usage has doubled, and over 1 million developers used Codex in the past month.[Simon Willison] With the macOS app launch, parallel multi-agent execution and automation scheduling features have been added, marking the full-scale arrival of agent coding workflows.

What Comes Next?

App Server v2 already broadcasts collaboration tool calls as item events in the turn stream. You can specify agent role presets with spawn_agent and interrupt running agents with send_input. Multi-agent collaboration is expected to become more sophisticated.

Currently, automation features require local execution, but a cloud-based version has been announced. Windows support is also being prepared on an Electron basis, though it is delayed due to OS-level sandboxing limitations. With MCP (Model Context Protocol) integration and OAuth login flow support, external service integration is expected to expand.

Frequently Asked Questions (FAQ)

Q: Is Codex App Server free to use?

A: Currently, both free and paid ChatGPT users can use Codex features. Plus, Pro, Business, Enterprise, and Edu users have temporarily received a 2x increase in request limits. The open-source implementation can be found on GitHub (openai/codex/codex-rs/app-server).

Q: What is the difference between existing Codex CLI and App Server?

A: The CLI handles single sessions in the terminal, while the App Server manages the entire agent ecosystem including authentication, conversation history, approval flows, and real-time event streaming. To integrate Codex into your own product, you should use the App Server.

Q: What products can be built with App Server?

A: You can build IDE integrations like VS Code extensions, custom coding agent platforms, and automated code review systems. With the Thread/Turn/Item based protocol, conversation state management is systematic, and the approval system allows you to control agent file modifications and command execution.



Google Annual Revenue Surpasses $400B: A Historic AI-Driven Achievement


  • Alphabet becomes the first company to reach $400 billion in annual revenue
  • Google Cloud grows 48%
  • $185 billion AI investment planned for 2026

What Happened?

Alphabet announced its Q4 2025 results. Annual revenue surpassed $400 billion for the first time.[CNBC] Cloud revenue jumped 48%, leading the growth.[Benzinga]

Why Does It Matter?

48% cloud growth outpaces AWS and Azure. Gemini surpassed 750 million users, with serving costs reduced by 78%.[9to5Google]

What Comes Next?

$185 billion capital expenditure is planned for 2026. The Big Tech AI arms race is intensifying.

Frequently Asked Questions (FAQ)

Q: Why is cloud growing so fast?

A: Enterprises are adopting cloud for AI training and inference. TPU and Gemini are key drivers.

Q: What is the impact of massive investment?

A: Short-term margin pressure, but the market views AI investment as essential.

Q: What does 750 million Gemini users mean?

A: Gemini is holding its ground against ChatGPT. Platform integration gives it an edge.



Google-Apple AI Deal Worth $1 Billion Annually


  • Apple adopts Google Gemini for Siri
  • Custom model with 1.2 trillion parameters
  • iOS 26.4 beta launching late February

What Happened?

Apple is integrating Google Gemini into Siri. The deal is worth $1 billion annually.[1] The custom model has 1.2 trillion parameters, eight times larger than Apple’s own system. Alphabet disclosed this during earnings but dodged follow-up investor questions.[2]

Why Does This Matter?

Google already pays Apple $20 billion annually to remain the default search engine. Now they’ve added an AI partnership. Compared to Anthropic’s $1.5 billion ask,[1] Google’s $1 billion deal is a strategic win.

What’s Next?

Tim Cook said “a more personalized Siri is coming this year.” It will debut in the iOS 26.4 beta in late February. However, Gmail access won’t be available.

Frequently Asked Questions (FAQ)

Q: How much is the deal worth?

A: $1 billion annually. Less than Anthropic’s $1.5 billion ask.

Q: When does the new Siri launch?

A: iOS 26.4 beta in late February. Includes on-screen understanding and personal context features.

Q: Why did they avoid questions?

A: Likely due to NDA terms and antitrust concerns.


If you found this useful, subscribe to AI Digester.

References

MIT Kitchen Cosmo: AI Generates Recipes from Your Refrigerator Ingredients

3 Key Points

  • MIT developed an AI-powered kitchen device called Kitchen Cosmo
  • Uses a camera to recognize ingredients and prints customized recipes
  • Introduces the concept of Large Language Objects that extends LLMs into the physical world

What is Going On?

MIT architecture students have developed an AI-based kitchen device called Kitchen Cosmo.[MIT News] Standing about 45cm (18 inches) tall, this device uses a webcam to recognize ingredients, accepts user input through a dial, and uses a built-in thermal printer to output recipes.

The project was conducted at MIT Design Intelligence Lab, led by Professor Marcelo Coelho. Graduate student Jacob Payne and design major Ayah Mahmoud participated in the development.[MIT News]

Why Is It Important?

Honestly, what makes this project interesting is more about the philosophy than the technology itself. Professor Coelho calls it Large Language Objects (LLOs). It is a concept of taking LLMs off the screen and moving them into physical objects.

Professor Coelho said, “This new form of intelligence is powerful, but it remains ignorant of the world outside of language.” Kitchen Cosmo bridges that gap.

Personally, I think this shows the future of AI interfaces. Instead of touching and typing on screens, you show objects and turn dials. This is especially useful in situations where your hands are busy, like cooking.

What Will Happen in the Future?

The research team plans to add real-time cooking tips and collaborative features for multiple users, including role-sharing during cooking, in the next version.[MIT News] Student Jacob Payne said, “AI can help find creative ways when figuring out what to make with leftover ingredients.”

It is unclear whether this research will lead to a commercial product. However, attempts to extend LLMs into physical interfaces will increase in the future.

Frequently Asked Questions (FAQ)

Q: What ingredients can Kitchen Cosmo recognize?

A: It uses a Vision Language Model to recognize ingredients captured by the camera. It can identify common food items like fruits, vegetables, and meat, and generate recipes considering basic seasonings and condiments typically found at home. However, specific recognition accuracy has not been disclosed.

Q: What factors are reflected in recipe generation?

A: Users can input meal type, cooking techniques, available time, mood, dietary restrictions, and number of servings. They can also select flavor profiles and regional cooking styles (e.g., Korean, Italian). All these conditions are combined to generate customized recipes.

Q: Can the general public purchase it?

A: Currently at the prototype stage in MIT research lab, no commercialization plans have been announced. Since it started as an academic research project, commercialization is expected to take time. However, similar concept products may emerge from other companies.

