
LLMs as Operating Systems: Agent Memory

California, United States

LLMs as Operating Systems: Agent Memory is a non-degree program at DeepLearning.AI, open to international students and taught in English.

📖 Introduction

DeepLearning.AI is an online education platform founded in 2017 by Andrew Ng, a leading AI expert and co-founder of Coursera. As a private organization, DeepLearning.AI specializes in AI and machine learning education, offering high-quality courses, specializations, and professional certifications in collaboration with top institutions and industry leaders. The platform is known for its practical, hands-on approach to teaching AI concepts and its focus on making cutting-edge AI knowledge accessible to learners worldwide.

📚 About the Program

Learn how to build agentic memory into your applications in this short course, LLMs as Operating Systems: Agent Memory, created in partnership with Letta and taught by its founders, Charles Packer and Sarah Wooders.

An LLM can use any information stored in its input context window, but that window has limited space. A longer input context also costs more and slows processing, so managing the context window, and deciding what goes into it, becomes very important.

In the MemGPT research paper "Towards LLMs as Operating Systems," the authors (two of whom are Charles and Sarah) proposed using an LLM agent to manage the context window itself, building a system that provides applications with managed, persistent memory. Examples of managing agent memory include:

1. Controlling conversation memory. As conversations grow beyond defined limits, move information from context to a persistent, searchable database; summarize information to keep relevant facts in context memory; and restore relevant conversation elements as the conversation flow requires.
2. Persisting and editing facts such as names, dates, and preferences, and making them available in context.
3. Persisting and tracking task-specific information. For example, a research agent needs to keep research information in context memory, swapping the most relevant information in from a searchable database to replace older information.

In this course, you'll learn:

1. How to build an agent with self-editing memory from scratch, using tool-calling and multi-step reasoning.
2. Letta, an open-source framework that adds memory to your LLM agents, giving them advanced reasoning capabilities and transparent long-term memory.
3. The key ideas behind the MemGPT paper: the two tiers of memory inside and outside the context window, and how agent state, comprising memory, tools, and messages, is turned into prompts.
4. How to create and interact with a MemGPT agent using the Letta framework, and how to build and edit its core and archival memory.
5. How core memory is designed and implemented, with an example of customizing it with blocks and memory tools.
6. How to implement multi-agent collaboration, both by sending messages and by sharing memory blocks.

By the end of this course, you will have the tools to build LLM applications that leverage virtual context, extending memory beyond the finite context window of LLMs.
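As a rough sketch of the first of these ideas (not the course's code and not Letta's actual API), the Python example below models a two-tier agent state: editable "core" memory blocks that are compiled into the prompt, plus an out-of-context archival store, exposed through tool-style functions whose names echo those used in MemGPT (core_memory_replace, archival_memory_insert, archival_memory_search). The class names, the toy keyword search, and the demo values are illustrative assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryBlock:
    """An in-context ("core") memory block the agent is allowed to rewrite."""
    label: str
    value: str
    limit: int = 2000  # rough character budget so the block stays small in context


@dataclass
class AgentState:
    """Two-tier memory: core blocks live inside the prompt, archival lives outside it."""
    core: dict = field(default_factory=dict)       # label -> MemoryBlock
    archival: list = field(default_factory=list)   # stand-in for a searchable database
    messages: list = field(default_factory=list)   # recent conversation history

    # --- self-editing memory tools an LLM agent would call ---
    def core_memory_replace(self, label: str, old: str, new: str) -> str:
        block = self.core[label]
        block.value = block.value.replace(old, new)
        return f"Updated core memory block '{label}'."

    def archival_memory_insert(self, text: str) -> str:
        self.archival.append(text)
        return "Stored in archival memory."

    def archival_memory_search(self, query: str) -> list:
        # Toy keyword search; a real system would use embeddings or a vector database.
        return [t for t in self.archival if query.lower() in t.lower()]

    # --- turning agent state (memory + messages) into a prompt ---
    def compile_prompt(self, system: str) -> list:
        core_text = "\n".join(
            f"<{b.label}>\n{b.value}\n</{b.label}>" for b in self.core.values()
        )
        system_msg = {"role": "system", "content": f"{system}\n\n# Core memory\n{core_text}"}
        return [system_msg, *self.messages]


if __name__ == "__main__":
    state = AgentState(core={
        "human": MemoryBlock("human", "Name: unknown"),
        "persona": MemoryBlock("persona", "You are a helpful research assistant."),
    })
    # Simulate the tool calls an LLM would make after the user introduces themselves.
    state.messages.append({"role": "user", "content": "Hi, I'm Priya. I study agent memory."})
    print(state.core_memory_replace("human", "Name: unknown", "Name: Priya"))
    print(state.archival_memory_insert("Priya studies agent memory."))
    print(state.archival_memory_search("agent memory"))
    print(state.compile_prompt("Answer using your core memory.")[0]["content"])
```

In a full agent loop, these functions would be registered as tools with the LLM, which would decide during multi-step reasoning when to edit core memory, when to push older conversation turns into archival storage, and when to search archival memory back into context.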


🏫 About the University

DeepLearning.AI is dedicated to advancing artificial intelligence education and empowering individuals to build careers in AI and machine learning. The platform offers a range of courses, including the renowned "Deep Learning Specialization" and "AI for Everyone," designed to cater to beginners, professionals, and researchers. By collaborating with leading experts and institutions, DeepLearning.AI provides industry-relevant content that bridges the gap between theoretical knowledge and real-world applications. Through its online courses, research initiatives, and community-driven projects, DeepLearning.AI plays a crucial role in shaping the future of AI education and innovation.

💰 Fees

Application Fee: $0 USD

Service Fee: $0 USD

Tuition Fee: $120 USD per year

✅ Entry Requirements

The minimum age is 18 and the maximum age is 50.

Fluent English is required.

Minimum education level: Bachelor's degree.

Students from all countries are eligible to apply to this program.

📬 Admissions Process


Step 1: Choose programs

Step 2: Apply online

Step 3: Enroll


