
Microsoft AutoGen lets you build AI Agent frameworks

Microsoft has quietly made available a new framework that enables development of large language model (LLM) applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools. This quick guide provides an overview of what you can expect from the new Microsoft AutoGen AI Agent system and how to install it.

In collaboration with Penn State University and the University of Washington, Microsoft has developed and released AutoGen, a tool that enhances the capabilities of large language models (LLMs) by enabling multi-agent conversations. This innovative framework is now available on GitHub, opening up a new world of possibilities for developers and researchers in the field of artificial intelligence (AI).

Microsoft AutoGen AI Agent made available via GitHub

AutoGen is a groundbreaking tool that simplifies the development of next-generation LLM applications. It does this by enabling multi-agent conversations with minimal effort, thereby streamlining the orchestration, automation, and optimization of complex LLM workflows. This not only maximizes the performance of LLM models but also helps to overcome their inherent weaknesses.


One of the key features of AutoGen is its support for diverse conversation patterns. Developers can customize and converse with agents to build a wide range of conversation patterns. This flexibility extends to conversation autonomy, the number of agents, and agent conversation topology. This versatility sets AutoGen apart from other AI tools, offering a level of customization and interaction that is not commonly found in other frameworks.


AutoGen also provides a collection of working systems with varying levels of complexity. These systems span a wide range of applications from various domains, demonstrating the tool’s adaptability and wide-ranging potential. This feature is a testament to AutoGen’s potential impact on various fields, from customer service to healthcare, education, and beyond.

AssistantAgent

  • The AssistantAgent is designed to act as an AI assistant, using LLMs by default but not requiring human input or code execution. When it receives a message (typically a description of a task that needs to be solved), it can write Python code (in a Python coding block) for a user to execute.
  • Under the hood, the Python code is written by an LLM (e.g., GPT-4). The agent can also receive the execution results and suggest code with bug fixes. Its behavior can be altered by passing a new system message, and the LLM inference configuration can be set via llm_config.

UserProxyAgent

  • The UserProxyAgent is conceptually a proxy agent for humans: by default it solicits human input as the agent’s reply at each interaction turn, and it also has the capability to execute code and call functions. The UserProxyAgent triggers code execution automatically when it detects an executable code block in the received message and no human user input is provided.
  • Code execution can be disabled by setting code_execution_config to False. LLM-based replies are disabled by default; they can be enabled by setting llm_config to a dict corresponding to the inference configuration, in which case the UserProxyAgent generates replies using an LLM whenever code execution is not performed.

In terms of technical capabilities, AutoGen provides a drop-in replacement for openai.Completion or openai.ChatCompletion as an enhanced inference API. This allows for easy performance tuning, utilities like API unification & caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, and more. These features help to maximize the utility of expensive LLMs such as ChatGPT and GPT-4.


By automating chat among multiple capable agents, AutoGen allows for the collective performance of tasks autonomously or with human feedback. This includes tasks that require using tools via code, further expanding the potential applications of this tool.

The release of AutoGen by Microsoft is a significant milestone in the rapid advancement of AI technology. By making this tool available on GitHub, Microsoft is not only democratizing access to advanced AI tools but also fostering a collaborative environment where developers and researchers can contribute to the evolution of this technology.

AutoGen represents a significant leap forward in the field of AI. Its ability to enable multi-agent conversations, coupled with its customizable and conversable agents, sets it apart from other AI tools. As this technology continues to evolve, it will be interesting to see the innovative applications that developers and researchers will come up with using AutoGen.

John Smith

