Tutorial — Autonomous Agents
Llama 3 is a brand new state-of-the-art model from Meta AI that matches Claude 3 and GPT-4 Turbo in performance.
In this tutorial, we will build a memory recorder module for autonomous agents.
I’ll use the Groq API in this tutorial for inference, because it offers:
- Very fast inference speed
- A free tier
- Competing models from Mistral within the same API.
Alternatively, we could:
- Run a smaller 7B model on an Nvidia RTX 3060 12GB (around $300),
- Rent cloud GPUs starting from $0.20 per hour for the cheapest GPUs.
Let’s get started!
I’ll start by installing and importing the libraries.
!pip install groq  # install on first run
import os
import json
from datetime import datetime
from groq import Groq
Let’s set up the client with the API key:
client = Groq(
    api_key=os.environ.get("groq_key"),
)
Next, I’ll define the variables that I want to associate with the “internal memory” of the autonomous agent:
now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
persona = "Teemu"
home_location = "Madrid"
I have also defined here the memory structure that I want the Llama 3 model to follow:
schema = {
    "type": "object",
    "properties": {
        "date": {
            "type": "string",
            "description": "The current date (YYYY-MM-DD HH:MM:SS format)"
        },
        "me": {
            "type": "array",
            "description": "My name"
        },
        "people": {
            "type": "array",
            "description": "List of people involved in the event (optional)"
        },
        "feeling": {
            "type": "string",
            "description": "The main character's feeling during the event"
        },
        "short_description": {
            "type": "string",
            "description": "A brief description of the event"
        },
        "weather": {
            "type": "string",
            "description": "Current weather conditions (e.g., sunny, rainy, cloudy)"
        },
        "location": {
            "type": "string",
            "description": "Location name (e.g., city, town)"
        },
        "insight": {
            "type": "string",
            "description": "Additional details or insights about the event"
        },
        "memorable_because": {
            "type": "string",
            "description": "The reason why the event is memorable"
        }
    }
}
I can now save this JSON schema:
with open("my_schema.json", "w") as f:
json.dump(schema, f)
I can likewise load this JSON schema back:
with open("my_schema.json", "r") as f:
my_schema = json.load(f)
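Since the model is only steered toward the schema (JSON mode guarantees syntactically valid JSON, not schema conformance), it can pay off to sanity-check each record. Below is my own minimal sketch using only the standard library; a full validator such as the jsonschema package would be more thorough:

```python
# Map JSON Schema type names to the corresponding Python types.
TYPE_MAP = {"string": str, "array": list, "object": dict}

def check_memory(memory: dict, schema: dict) -> list:
    """Return a list of problems; an empty list means the record matches."""
    problems = []
    for key, spec in schema.get("properties", {}).items():
        if key not in memory:
            problems.append(f"missing: {key}")
        elif not isinstance(memory[key], TYPE_MAP.get(spec.get("type"), object)):
            problems.append(f"type mismatch: {key}")
    return problems

# Example use with the loaded schema:
# issues = check_memory(json.loads(response_text), my_schema)
```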
I’ll now define the user prompt:
prompt_by_user = "Today was sunny day and then rained, I went to city to have a dinner with friends and I ate the best Sushi I have ever tested in restaurant called Sushita Cafe, where my friend Paco is a chef."
Next, I’ll make the API call, where I include the current time, location, and persona in the system prompt, and force Llama 3 to respond with a JSON object.
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": f"You are a helpful memory recorder.\n"
                       f"Write outputs in JSON in schema: {my_schema}.\n"
                       f"Current time is {now}.\n"
                       f"I am {persona} living in {home_location} and events may take place "
                       f"in more specific places inside the home location or outside it, "
                       f"so record precisely.\n",
        },
        {
            "role": "user",
            "content": prompt_by_user,
        }
    ],
    model="llama3-70b-8192",
    response_format={"type": "json_object"},
)
I can finally print the output:
print(chat_completion.choices[0].message.content)
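To actually accumulate memories across runs, the response can be parsed back into a dict and appended to a log file. This is a sketch of one possible approach; the JSONL format and the file name memories.jsonl are my own choices, not part of the Groq API:

```python
import json

def append_memory(raw_json: str, path: str = "memories.jsonl") -> dict:
    """Parse the model's JSON reply and append it as one line to a JSONL log."""
    memory = json.loads(raw_json)
    with open(path, "a") as f:
        f.write(json.dumps(memory) + "\n")
    return memory

# Usage with the API response:
# append_memory(chat_completion.choices[0].message.content)
```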
The result will look like the following. The response integrates the current time and the persona directly into the Llama 3 model output, and it fills in the attributes as if the person had experienced the event himself.
The location attribute is particularly interesting. The persona's home location is Madrid, but since the event takes place in a restaurant called “Sushita Cafe”, the model merges these pieces of information into a unified location attribute without any third-party libraries in between.
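If a downstream component needs the venue and the city separately, the combined attribute can be split again afterwards. A naive sketch, assuming the model returns a comma-separated value such as "Sushita Cafe, Madrid" (the exact output format is not guaranteed):

```python
def split_location(location: str) -> dict:
    """Naively split a 'venue, city' string on its last comma."""
    if "," in location:
        venue, city = location.rsplit(",", 1)
        return {"venue": venue.strip(), "city": city.strip()}
    # No comma: treat the whole string as the city/town name.
    return {"venue": None, "city": location.strip()}
```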
Let’s wrap up the tutorial.
We have now built a memory module for autonomous agents using the Llama 3 70B model:
- State-of-the-Art (SOTA) LLM
- Supports JSON mode
- Integrates retrieved information directly into the JSON schema
- Inference speed is fast with the Groq API
The best part is that Meta AI will release an even larger 405B-parameter model later this year, and preliminary evaluations indicate it will be SOTA-level, possibly even beyond the upcoming GPT-5 model.