DeepSeek Moment 1 Year: 113,000 Qwen Derivative Models, 4x Llama

DeepSeek Moment 1 Year: 3 Changes Proven by Numbers

  • Over 113,000 Qwen-derived models – 4 times that of Meta Llama (27,000)
  • DeepSeek ranks 1st and Qwen 4th by follower count on Hugging Face
  • Chinese AI organizations shift direction: “Open source is the strategy”

What happened?

Hugging Face released an analysis report on the 1st anniversary of the ‘DeepSeek Moment’.[Hugging Face] This is the final part of a three-part series that summarizes with data how the Chinese open-source AI ecosystem has grown since the emergence of DeepSeek in January 2025.

Let’s look at the key figures. The number of derivative models based on Qwen (Alibaba) exceeded 113,000 by mid-2025. Including repositories tagged with Qwen, there are over 200,000.[Hugging Face] This is an overwhelming number compared to Meta’s Llama (27,000) or DeepSeek (6,000).

Why is it important?

Frankly, even a year ago, there was a widespread perception of Chinese AI as ‘copycats’. But now it’s different.

ByteDance, DeepSeek, Tencent, and Qwen are all ranked high in Hugging Face’s popular papers. DeepSeek ranks 1st and Qwen ranks 4th in the number of followers. Looking at Alibaba as a whole, the number of derived models is comparable to that of Google and Meta combined.[Hugging Face]

Personally, I’m paying attention to Alibaba’s strategy. They structured Qwen not as a single flagship model, but as a ‘family’. It supports various sizes, tasks, and modalities. In simple terms, it means “Use our model as a general-purpose AI infrastructure.”

What will happen in the future?

Hugging Face analyzed that “open source is a short-term dominance strategy for Chinese AI organizations.” The interpretation is that they are aiming for large-scale integration and deployment by sharing not only models but also papers and deployment infrastructure.

It has been confirmed by numbers after a year that the DeepSeek Moment was not a one-off event. The center of gravity of the global AI open-source ecosystem is shifting.

Frequently Asked Questions (FAQ)

Q: Why are there more Qwen-derived models than Llama?

A: Alibaba released Qwen in various sizes and modalities, expanding its scope of application. In particular, Chinese developers use it a lot for local deployment. The strategy of continuously updating both Hugging Face and ModelScope was also effective.

Q: Is DeepSeek still important?

A: Yes. DeepSeek is the organization with the most followers on Hugging Face. However, it lags behind Qwen in the number of derived models. DeepSeek has strengths in papers and research contributions, while Qwen focuses on ecosystem expansion.

Q: What does it mean for Korean developers?

A: Qwen-based models are strengthening Korean language support. Because it is open source, local deployment and fine-tuning are free. It has become a good environment for experimenting without cost burden. However, license conditions vary from model to model, so confirmation is necessary.


If you found this article helpful, please subscribe to AI Digester.

Reference Materials

Fitbit Founder, Two Years After Leaving Google, Announces Family Health AI ‘Luffu’

Fitbit Founders Return with Family Health AI, Two Years After Leaving Google

  • Fitbit co-founders James Park and Eric Friedman announce new startup Luffu
  • AI integrates and manages health data for the entire family, automatically detecting anomalies
  • Targeting 63 million family caregivers in the US, planning to launch an app first and then expand into hardware

What happened?

James Park and Eric Friedman, who created Fitbit, have announced a new startup called Luffu two years after leaving Google.[PRNewswire]

Luffu claims to be an “intelligent family care system.” It is a platform that integrates and manages the health data of the entire family, not just individuals, using AI. This includes children, parents, spouses, and even pets.[TechCrunch]

Currently, there are about 40 employees, most of whom are from Google and Fitbit. It is self-funded and has not received external investment.[PRNewswire]

Why is it important?

Personally, what makes this announcement interesting is that while Fitbit focused on “personal health,” Luffu is trying to create a new category called “family health.”

Approximately 63 million adults in the United States are responsible for family care.[PRNewswire] They are busy taking care of their children, careers, and elderly parents at the same time. However, most healthcare apps are designed for individuals, making it difficult to manage at the family level.

Luffu is targeting this gap. Frankly, Apple Health and Google Fit have very few family sharing features. No one has properly captured this market yet.

James Park said, “At Fitbit, we focused on personal health, but after Fitbit, health became bigger to me than just thinking about myself.”[PRNewswire]

How does it work?

The key to Luffu is that the AI works quietly in the background. You don’t need to keep prompting it like a chatbot.

  • Data Collection: Enter health information via voice, text, and photos. Can also be linked to devices or medical portals.
  • Pattern Learning: AI identifies daily patterns for each family member.
  • Anomaly Detection: Automatically notifies you of missed medication, changes in vital signs, and abnormal sleep patterns.
  • Natural Language Questions: AI answers questions like, “Is Dad’s new diet affecting his blood pressure?”
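Luffu hasn’t published any technical details, so purely as an illustration of what background anomaly detection on a medication log might look like, here is a minimal sketch. The `missed_doses` function and its simple daily-schedule model are hypothetical, not Luffu’s API:

```python
from datetime import date, timedelta

def missed_doses(expected_per_day, logs, start, end):
    """Flag days on which fewer doses were logged than the schedule expects.

    expected_per_day: doses the schedule calls for each day
    logs: list of date objects, one per recorded dose
    Returns the list of days with at least one missed dose.
    """
    flagged = []
    day = start
    while day <= end:
        taken = sum(1 for d in logs if d == day)
        if taken < expected_per_day:
            flagged.append(day)
        day += timedelta(days=1)
    return flagged

# Example: a 2-dose/day schedule with one dose missing on Jan 2
logs = [date(2026, 1, 1), date(2026, 1, 1), date(2026, 1, 2)]
print(missed_doses(2, logs, date(2026, 1, 1), date(2026, 1, 2)))
# → [datetime.date(2026, 1, 2)]
```

A real system would of course learn per-person baselines rather than use a fixed schedule; the point is only that the check runs passively over logged data, with no chat interaction.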

Privacy is also emphasized. The aim is to be “a guardian, not surveillance,” and users control what information is shared with whom.[PRNewswire]

What will happen in the future?

Luffu plans to start with an app and expand into hardware. It’s similar to the path Fitbit took, but this time it seems they are trying to build a device ecosystem for the whole family.

Currently, it is in private beta testing, and you can register on the waiting list on the website (luffu.com).[PRNewswire]

It is operating with its own funds without external investment, which can be interpreted as a commitment to focusing on the product without VC pressure. This is a different approach than with Fitbit.

Frequently Asked Questions (FAQ)

Q: When will Luffu be released?

A: Currently in limited beta testing. No official release date has been announced. You can join the waiting list at luffu.com to receive a beta invitation. The app will ship first, followed by dedicated hardware.

Q: Is it compatible with Fitbit?

A: The official announcement only mentions compatibility with devices and medical portals; direct Fitbit integration has not been confirmed. Since Google acquired Fitbit and the founders have since left Google, the relationship may be complicated.

Q: How much does it cost?

A: Pricing policy has not yet been disclosed. Since it is operating with its own funds, there is a possibility of a subscription model or premium feature monetization, but we have to wait for the official announcement. Separate pricing is expected when the hardware is released.


If you found this article useful, please subscribe to AI Digester.

References

Claude Code Major Outage: Developers Forced into ‘Coffee Break’

Claude Code Outage: 3 Key Points

  • API error rates surged across all of Anthropic’s Claude models
  • Claude Code users halted work due to 500 errors
  • Microsoft AI team also uses this service — impacting the entire industry

What Happened?

Claude Code experienced a major outage. Developers encountered 500 errors when accessing the service, and Anthropic officially announced an increase in API error rates “across all Claude models.”[The Verge]

Anthropic stated that they identified the issue and are working on a fix. The current status page indicates that the outage has been resolved.[Anthropic Status]

Why is it Important?

Claude Code is not just an AI tool. It has become a core infrastructure that many developers, including the Microsoft AI team, rely on in their daily work.

Frankly, such outages are rare. According to the Anthropic status page, Claude Code’s 90-day uptime is 99.69%. But the problem is that even less than 1% downtime has a significant impact on developer productivity.
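For scale, a quick back-of-the-envelope conversion (my arithmetic, not a figure from Anthropic): 99.69% uptime over a 90-day window still leaves roughly 6.7 hours of downtime.

```python
# 99.69% uptime over a 90-day window, converted to downtime hours
uptime = 0.9969
hours = 90 * 24 * (1 - uptime)
print(round(hours, 1))  # → 6.7
```

Spread across three months that sounds small, but concentrated into a single working day it is exactly the kind of outage developers notice.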

Personally, I see this incident as a warning about the dependence on AI coding tools. If you put all your workflows on a single service, you have no alternative when an outage occurs.

Recent Anthropic Service Issues

It is also worth noting that this outage is not an isolated incident:

  • Yesterday (February 2nd): Errors occurred in Claude Opus 4.5
  • Earlier this week: Fixed issues with AI credit system purchases
  • January 31st: Claude Code 2.1.27 memory leak — patched to 2.1.29

Multiple issues cropping up in such a short period is disappointing from a service-stability standpoint.

What Happens Next?

It’s a good sign that Anthropic is responding quickly. However, it’s time for developers to consider a backup plan.

Alternatives to Claude Code include tools like Goose (free) and pi-mono (open source). They are not complete replacements, but they can help maintain minimal work continuity in the event of an outage.

Frequently Asked Questions (FAQ)

Q: How often does Claude Code outage occur?

A: According to Anthropic’s official data, the 90-day uptime is 99.69%. Major outages like this are rare, but there have been several minor issues in recent weeks. It’s not something to completely ignore.

Q: What are the alternatives in case of an outage?

A: Goose is a free AI coding agent, and pi-mono is an open-source alternative with 5.9k stars on GitHub. Neither covers all the features of Claude Code, but they are options to continue working in an emergency.

Q: Does Anthropic provide compensation?

A: To date, Anthropic has not announced a compensation policy for outages. For users on usage-based billing, not being charged during downtime is the only de facto compensation.


If you found this article helpful, please subscribe to AI Digester.

References

AWS SageMaker Data Agent: Weeks of Medical Data Analysis → Days

Healthcare Data Analysis, Weeks Reduced to Days

  • AWS SageMaker Data Agent: AI agent that analyzes healthcare data in natural language
  • Cohort comparison and survival analysis can be performed without code
  • Released in November 2025, free to use in SageMaker Unified Studio

What Happened?

AWS has unveiled SageMaker Data Agent, an AI agent for healthcare data analysis. When epidemiologists or clinical researchers ask questions in natural language, the AI automatically generates and executes SQL and Python code.[AWS]

Previously, healthcare data analysis required navigating multiple systems, waiting for data access permissions, understanding schemas, and writing code directly. This process took weeks. SageMaker Data Agent reduces this to days, or even hours.[AWS]

Why is it Important?

Frankly, healthcare data analysis has always been a bottleneck. Epidemiologists spent 80% of their time on data preparation and only 20% on actual analysis. The reality was that they could only conduct 2-3 studies per quarter.

SageMaker Data Agent reverses this ratio. It significantly reduces data preparation time, allowing for more focus on actual clinical analysis. Personally, I believe this will directly impact the speed of discovering patient treatment patterns.

It’s particularly impressive that complex tasks like cohort comparison and Kaplan-Meier survival analysis can be requested in natural language. You can say something like, “Perform survival analysis for male vs. female patients with viral sinusitis,” and the AI automatically plans, writes, and executes the code.[AWS]
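AWS doesn’t show the generated code in the announcement, but for a sense of what such a request boils down to, here is a dependency-free sketch of the Kaplan-Meier product-limit estimator. The agent would more likely emit a call to a survival-analysis library; this hand-rolled version is just illustrative:

```python
def kaplan_meier(durations, events):
    """Product-limit survival estimate: S(t) = prod over event times of (1 - d_i / n_i).

    durations: follow-up time for each patient
    events: 1 if the event occurred, 0 if the patient was censored
    Returns (event_times, survival_probabilities).
    """
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    at_risk = len(durations)           # n_i: patients still under observation
    s, times, probs = 1.0, [], []
    i = 0
    while i < len(order):
        t = durations[order[i]]
        deaths = removed = 0
        # Group all patients sharing this follow-up time
        while i < len(order) and durations[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:                     # survival only drops at event times
            s *= 1 - deaths / at_risk
            times.append(t)
            probs.append(s)
        at_risk -= removed             # censored patients leave the risk set too
    return times, probs

# 5 patients: events at t=2 and t=4; censored at t=3, 5, 6
times, probs = kaplan_meier([2, 3, 4, 5, 6], [1, 0, 1, 0, 0])
# times == [2, 4]; probs ≈ [0.8, 0.533]
```

Comparing two cohorts (e.g. male vs. female) is then just running this per group and plotting both curves.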

How Does it Work?

SageMaker Data Agent operates in two modes. First, code can be generated directly from inline prompts in notebook cells. Second, the Data Agent panel can break down complex analysis tasks into structured steps for processing.[AWS]

The agent understands the current notebook state and generates contextually relevant code by understanding the data catalog and business metadata. It doesn’t just spit out code snippets, but creates an entire analysis plan.[AWS]

What Happens Next?

According to a Deloitte survey, 92% of healthcare executives are investing in or experimenting with generative AI.[AWS] The demand for healthcare AI analysis tools will continue to increase.

If agentic AI like SageMaker Data Agent accelerates healthcare research, it could positively impact new drug development and the discovery of treatment patterns. However, one concern is data quality. No matter how fast the AI is, if the input data is messy, the results will be messy too.

Frequently Asked Questions (FAQ)

Q: What is the cost of SageMaker Data Agent?

A: SageMaker Unified Studio itself is free. However, you are charged for the actual computing resources used (EMR, Athena, Redshift, etc.). The notebook has a free tier of 250 hours for the first two months, so you can test it out lightly.

Q: What data sources are supported?

A: It connects to AWS Glue Data Catalog, Amazon S3, Amazon Redshift, and various other data sources. If you have an existing AWS data infrastructure, you can integrate it immediately. It is also compatible with healthcare data standards such as FHIR and OMOP CDM.

Q: Which regions are available?

A: It is available in all AWS regions where SageMaker Unified Studio is supported. It is best to check the AWS official documentation for Seoul region support.


If you found this article useful, please subscribe to AI Digester.

References

Lotus Health AI Raises $35 Million in Funding as a Free AI Primary Care Physician

Free AI Primary Care Physician Receives $35 Million Investment

  • Lotus Health AI secures $35 million in Series A funding from CRV and Kleiner Perkins
  • Provides 24/7 free primary care services in 50 languages, operating in all 50 US states
  • In an era where 230 million people ask ChatGPT health questions weekly, the AI healthcare market enters full-scale competition

What Happened?

Lotus Health AI received $35 million in a Series A round co-led by CRV and Kleiner Perkins.[TechCrunch] This startup utilizes large language models (LLMs) to provide 24/7 free primary care services in 50 languages.

Founder KJ Dhaliwal previously sold the South Asian dating app Dil Mil for $50 million in 2019.[Crunchbase] Inspired by his childhood experience of interpreting for his parents in medical settings, he launched Lotus Health AI in May 2024 with the goal of addressing inefficiencies in the US healthcare system.

Why is it Important?

Frankly, the size of this investment is notable. The average investment for AI healthcare startups is $34.4 million, and Lotus Health AI matched this level in its Series A.[Crunchbase]

Understanding the background helps. According to OpenAI, 230 million people ask ChatGPT health-related questions every week.[TechCrunch] This means people are already seeking health advice from AI. However, ChatGPT cannot provide medical treatment. Lotus Health AI is targeting this niche.

Personally, the “free” model is the most interesting part. Considering how expensive healthcare is in the US, free primary care is a genuinely disruptive value proposition. Of course, the revenue model is still unclear.

What Happens Next?

The AI healthcare market is expected to enter full-scale competition. OpenAI also entered this market last January with the launch of ChatGPT Health. It provides personalized health advice by integrating with Apple Health, MyFitnessPal, and more.[OpenAI]

Regulatory risks remain. Even OpenAI’s terms of service state, “Do not use for diagnostic or treatment purposes.” Several lawsuits have already been filed over harm caused by AI medical advice. How Lotus Health AI manages these risks remains to be seen.

Frequently Asked Questions (FAQ)

Q: Is Lotus Health AI really free?

A: It is free for patients. However, the specific revenue model has not yet been disclosed. There are various possibilities, such as a B2B model targeting insurance companies or employers, or adding premium services. It seems they are aiming for economies of scale by providing services in all 50 states.

Q: What is the difference between it and a general AI chatbot?

A: Lotus Health AI is a medical service specialized in primary care. Unlike general chatbots, it holds medical service licenses in all 50 US states. The key difference is that it can perform actual medical treatment, not just provide health information.

Q: Does it support Korean?

A: It stated that it supports 50 languages, but the specific language list has not been disclosed. It is necessary to confirm whether Korean is supported. Currently, the service is only available in the US, and there are no plans for overseas expansion announced yet.


If this article was helpful, please subscribe to AI Digester.

References

H Company Holo2: Achieved 1st Place in UI Localization Benchmark

A 235B-Parameter Model Upends the UI Automation Landscape

  • Achieved SOTA with 78.5% on the ScreenSpot-Pro benchmark
  • 10-20% performance improvement with Agentic Localization
  • Accurately identifies small UI elements even in 4K high-resolution interfaces

What Happened?

H Company has released Holo2-235B-A22B, a model specializing in UI Localization (identifying the location of user interface elements).[Hugging Face] This 235B parameter model accurately locates UI elements such as buttons, text fields, and links in screenshots.

The key is Agentic Localization technology. Instead of providing an answer in one go, it refines predictions over multiple steps. This allows it to accurately pinpoint small UI elements even on 4K high-resolution screens.[Hugging Face]
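H Company hasn’t published the exact refinement loop, but the coarse-to-fine idea can be sketched as follows: predict a point, crop around the guess, and re-run the model on the crop so a small UI element fills more of the model’s input. The `predict` interface and the stub model below are my own assumptions for illustration, not Holo2’s API:

```python
def agentic_localize(predict, width, height, steps=3, zoom=0.25):
    """Coarse-to-fine point localization (sketch of the agentic idea).

    predict(x0, y0, x1, y1) -> (x, y): hypothetical model call that
    returns a click point inside the given crop. Each step shrinks
    the crop around the previous guess before asking again.
    """
    x0, y0, x1, y1 = 0.0, 0.0, float(width), float(height)
    for _ in range(steps):
        x, y = predict(x0, y0, x1, y1)
        w, h = (x1 - x0) * zoom, (y1 - y0) * zoom
        x0, x1 = max(0, x - w / 2), min(width, x + w / 2)
        y0, y1 = max(0, y - h / 2), min(height, y + h / 2)
    return x, y

# Stub model whose error is proportional to crop size: refinement
# converges on the true target although each single call is imprecise.
target = (3000, 1900)
def stub(x0, y0, x1, y1):
    err = (x1 - x0) * 0.05  # 5% of crop width
    return min(max(target[0] + err, x0), x1), min(max(target[1], y0), y1)

x, y = agentic_localize(stub, 3840, 2160, steps=4)
# x, y land within a few pixels of target on a 4K screen
```

The toy stub makes the mechanism visible: a single full-screen call misses by ~190 px, while four refinement steps get within a few pixels, which matches the intuition behind the reported 10-20% gain.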

Why is it Important?

The GUI agent field is heating up. Big-tech products like Anthropic’s Claude Computer Use and OpenAI’s Operator are shipping UI automation features in rapid succession. Yet a small startup, H Company, has taken the top spot on this field’s key benchmark.

Personally, I’m paying attention to the agentic approach. Existing models often failed because they tried to pinpoint the location in one go, but the approach of refining predictions through multiple attempts has proven effective. The 10-20% performance improvement figure proves this.

Frankly, 235B parameters is quite heavy. We’ll have to see how quickly it operates in a real production environment.

What Will Happen in the Future?

As GUI agent competition intensifies, UI Localization accuracy is expected to become a key differentiator. Since the H Company model has been released as open source, other agent frameworks are likely to integrate it.

It could also impact the RPA (Robotic Process Automation) market. While existing RPA tools were rule-based, vision-based UI understanding could now become the standard.

Frequently Asked Questions (FAQ)

Q: What exactly is UI Localization?

A: It is the technology of finding the exact coordinates of a specific UI element (button, input field, etc.) by looking at a screenshot. Simply put, it’s AI knowing where to click on the screen. It is a core technology of GUI automation agents.

Q: What’s different from existing models?

A: Agentic Localization is key. Instead of trying to get it right in one go, it refines predictions over multiple steps. It’s similar to how a person scans the screen to find a target. This method has achieved a 10-20% performance improvement.

Q: Can I try the model myself?

A: It is publicly available on Hugging Face for research purposes. However, since it is a 235B parameter model, it requires significant GPU resources. It is more suitable for research or benchmarking purposes than for actual production applications.


If you found this article useful, please subscribe to AI Digester.

Reference Materials

Apple Xcode 26.3: Anthropic Claude Agent + OpenAI Codex, Both On Board

Two AI Coding Engines Land on Xcode Simultaneously

  • Anthropic Claude Agent and OpenAI Codex are now directly available within Xcode.
  • Third-party agents can also be connected with Model Context Protocol support.
  • Release Candidate (RC) available to developer program members starting today.

What Happened?

Apple announced official support for agentic coding in Xcode 26.3. [Apple Newsroom] Anthropic’s Claude Agent and OpenAI’s Codex can be used directly within the IDE.

Agentic coding goes beyond AI simply suggesting code snippets; it involves analyzing project structure, breaking down tasks independently, and autonomously running build-test-fix cycles. In simple terms, AI works like a junior developer.

Susan Prescott, Apple’s Vice President of Worldwide Developer Relations, stated that “Agentic coding maximizes productivity and creativity, allowing developers to focus on innovation.”[Apple Newsroom]

Why is it Important?

Personally, I think this is a pretty big change. For two reasons.

First, Apple has officially entered the AI coding tool competition. Independent tools like Cursor, GitHub Copilot, and Claude Code have been growing the market, but now the platform owner is directly involved.

Second, it embraces both Anthropic and OpenAI. Typically, Big Tech companies form exclusive partnerships with one AI vendor. But Apple is playing both sides. While claiming to give developers a choice, it seems like they’re hedging their bets because they don’t know which model will win.

The support for Model Context Protocol (MCP) is also noteworthy. This is an open standard for connecting AI agents with external tools, led by Anthropic.[TechCrunch] Apple’s adoption of this signals a departure from its closed ecosystem strategy.
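Apple hasn’t documented Xcode’s MCP configuration yet. For reference, this is the JSON shape existing MCP clients (such as Claude Desktop) use to register a server; the server name and package below are placeholders, not a real Xcode or MCP package:

```json
{
  "mcpServers": {
    "issue-tracker": {
      "command": "npx",
      "args": ["-y", "@example/issue-tracker-mcp"],
      "env": { "TRACKER_TOKEN": "…" }
    }
  }
}
```

Each entry tells the client how to launch a server process; the agent then discovers that server’s tools over the protocol. If Xcode follows the standard, third-party tools would plug in through a config of roughly this form.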

What Will Happen in the Future?

Over a million iOS/macOS developers use Xcode. If they become accustomed to agentic coding, the development paradigm itself could change.

However, there are also concerns. If AI autonomously modifies code, security vulnerabilities or unexpected bugs could arise. We need to see how Apple manages this aspect.

The competitive landscape is also interesting. OpenAI independently released the Codex app for macOS a day earlier.[TechCrunch] The timing is curious, with the integration announcement with Apple coming the very next day.

Frequently Asked Questions (FAQ)

Q: When will Xcode 26.3 be officially released?

A: The Release Candidate (RC) version is currently available to Apple Developer Program members. The official version will be distributed through the App Store soon. The exact date has not yet been announced.

Q: Which should I use, Claude Agent or Codex?

A: It depends on the nature of the project. Claude excels at understanding long code and ensuring safety, while Codex specializes in rapid code generation. Try both and choose the one that suits you best. That’s why Apple gives you the choice.

Q: Can existing Xcode 26 users upgrade?

A: Yes. This is an extension of the Swift coding assistant feature introduced in Xcode 26, so existing users can immediately use the agentic coding feature by updating to 26.3.


If you found this article helpful, please subscribe to AI Digester.

References

  • Xcode 26.3 unlocks the power of agentic coding – Apple Newsroom (2026-02-03)
  • Agentic coding comes to Apple’s Xcode 26.3 with agents from Anthropic and OpenAI – TechCrunch (2026-02-03)
  • OpenAI launches new macOS app for agentic coding – TechCrunch (2026-02-02)

Amazon Bedrock AgentCore: 9 Rules for Enterprise AI Agents

Enterprise AI Agents: 9 Core Rules

  • AWS releases Amazon Bedrock AgentCore best practices
  • Presents session isolation via microVMs and multi-agent collaboration patterns
  • Distinguishing agentic vs. deterministic code is key

What Happened?

AWS has released a guide to building enterprise AI agents on Amazon Bedrock AgentCore.[AWS] AgentCore is a platform for creating, deploying, and managing AI agents at scale.

The guide centers on 9 rules: narrowing scope, observability, tool definition, automated evaluation, multi-agent design, scaling, code separation, testing, and organizational rollout.

Why is it Important?

Frankly, AI agent demos and production are different games. This guide is an attempt to bridge that gap.

The AgentCore Gateway stands out. It centrally manages scattered tools such as MCP servers and Lambda functions, and finds the right tool via semantic search.

Session isolation is another highlight. Each session runs in a separate microVM, and the VM is terminated when the session ends.

What Will Happen Next?

Personally, the “agentic vs. deterministic code” distinction is the most practical rule: do date calculations in code, leave intent understanding to agents. The team that finds this balance will win.
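The guide states the rule abstractly; a minimal sketch of the split might look like this (the router and the `ask_agent` stub are my own illustration, not AgentCore APIs):

```python
from datetime import date
import re

def days_until(target: date, today: date) -> int:
    # Deterministic: date arithmetic belongs in plain code, not a model call.
    return (target - today).days

def handle(query: str, today: date, ask_agent):
    """Route a query: exact, rule-shaped work stays in code;
    open-ended intent goes to the agent (ask_agent is a stand-in)."""
    m = re.fullmatch(r"days until (\d{4})-(\d{2})-(\d{2})", query)
    if m:
        y, mo, d = map(int, m.groups())
        return days_until(date(y, mo, d), today)
    return ask_agent(query)  # e.g. "summarize last quarter's incidents"

print(handle("days until 2026-03-01", date(2026, 2, 4), str))  # → 25
```

The deterministic path is exact, cheap, and testable; the agent path handles everything a regex can’t. Pushing date math into the model would only add latency and error.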

Frequently Asked Questions (FAQ)

Q: What is the difference between AgentCore and existing Bedrock Agents?

A: Bedrock Agents focuses on building single agents. AgentCore is an enterprise platform that includes large-scale operation of multiple agents, tool integration, and session management.

Q: What about multi-agent collaboration?

A: It supports sequential, hierarchical, and P2P patterns. Context is shared via AgentCore Memory, and handoffs are monitored with OpenTelemetry.

Q: How is security ensured?

A: Identity handles authentication, Policy handles authorization, and Gateway performs pre-execution validation. Each session runs in an isolated microVM.


If you found this article helpful, please subscribe to AI Digester.

Reference Materials
