Tutorial — Autonomous Agents
Llama 3 is a brand-new state-of-the-art model from Meta AI, which matches Claude 3 and GPT-4 Turbo in performance.
In this tutorial, we are going to build a memory recorder module for an autonomous agent.
I will use the Groq API in this tutorial for inference, because it offers:
- The fastest inference speed
- A free tier
- Competing models such as Mistral within the same API documentation.
Alternatively, we could:
- Run a smaller 7B model locally on an Nvidia 3060 12GB (around $300); see the sketch after this list,
- Rent cloud GPUs starting from $0.2 per hour for the cheapest options.
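For the local route, here is a minimal sketch. The specifics are my assumptions, not part of this tutorial's setup: I use the mistralai/Mistral-7B-Instruct-v0.2 checkpoint as the "smaller 7B model" and 4-bit quantization so it fits in 12 GB of VRAM; it requires pip install transformers accelerate bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed 7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # 4-bit to fit a 12 GB card
    device_map="auto",
)
inputs = tokenizer("Record this memory as JSON: ...", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))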
Let’s get started!
I will start by importing the libraries.
!pip install groq # install on the first run
import os
import json
from datetime import datetime
from groq import Groq
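Before creating the client, make sure the key is actually available in the environment (an assumption on my part: the key was generated in the Groq console and is stored under the same groq_key name used below):
# Set the key for this session if it was not exported in the shell; avoid hard-coding real keys.
if "groq_key" not in os.environ:
    os.environ["groq_key"] = input("Paste your Groq API key: ")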
Let’s import the API key and create the client:
client = Groq(
    api_key=os.environ.get("groq_key"),
)
I will next define variables that I want to associate with the “internal memory” of the autonomous agent:
now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
persona = "Teemu"
home_location = "Madrid"
I have also defined here the memory structure, which I want the Llama 3 model to follow:
schema = {
    "type": "object",
    "properties": {
        "date": {
            "type": "string",
            "description": "The current date (YYYY-MM-DD HH:MM:SS format)"
        },
        "me": {
            "type": "array",
            "description": "My name"
        },
        "people": {
            "type": "array",
            "description": "List of people involved in the event (optional)"
        },
        "feeling": {
            "type": "string",
            "description": "The main character's feeling during the event"
        },
        "short_description": {
            "type": "string",
            "description": "A short description of the event"
        },
        "weather": {
            "type": "string",
            "description": "Current weather conditions (e.g., sunny, rainy, cloudy)"
        },
        "location": {
            "type": "string",
            "description": "Location name (e.g., city, town)"
        },
        "insight": {
            "type": "string",
            "description": "Additional details or insights about the event"
        },
        "memorable_because": {
            "type": "string",
            "description": "The reason why the event is memorable"
        }
    }
}
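As a quick sanity check (my addition, assuming the jsonschema package is installed via pip install jsonschema), we can validate a hand-written record against this structure:
from jsonschema import ValidationError, validate

example_memory = {
    "date": "2024-04-21 20:15:00",
    "me": ["Teemu"],
    "feeling": "happy",
    "short_description": "Dinner with friends",
}
try:
    validate(instance=example_memory, schema=schema)  # raises on a type mismatch
    print("memory matches the schema")
except ValidationError as e:
    print("schema violation:", e.message)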
I can now save this JSON schema:
with open("my_schema.json", "w") as f:
    json.dump(schema, f)
I can likewise load this JSON schema back:
with open("my_schema.json", "r") as f:
    my_schema = json.load(f)
I will now define the user prompt:
prompt_by_user = "Today was a sunny day and then it rained. I went to the city to have dinner with friends, and I ate the best sushi I have ever tasted in a restaurant called Sushita Cafe, where my friend Paco is a chef."
I will next make the API call, where I have included the current time, location and persona in the system prompt, and I force Llama 3 to respond with a JSON object.
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": f"You are a helpful memory recorder.\nWrite outputs in JSON using the schema: {my_schema}.\nCurrent time is {now}.\nI am {persona} living in {home_location} and events may take place in more specific places inside the home location or outside it, so record precisely.\n",
            # "content": "You are a helpful memory recorder. Write outputs in JSON schema.\n",
            # f" The JSON object must use the schema: {json.dumps(my_schema, indent=1)}",
        },
        {
            "role": "user",
            "content": prompt_by_user,
        }
    ],
    model="llama3-70b-8192",
    response_format={"type": "json_object"},
)
I can finally print the output:
print(chat_completion.choices[0].message.content)
The result will look like the following. The response integrates the current time and the persona directly into the Llama 3 model output, and it fills the attributes as if the person had experienced the events himself.
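A hand-written illustration of the expected shape (not an actual model transcript; the field values here are mine):
{
    "date": "2024-04-21 20:15:00",
    "me": ["Teemu"],
    "people": ["Paco"],
    "feeling": "delighted",
    "short_description": "Dinner with friends at Sushita Cafe",
    "weather": "sunny, later rainy",
    "location": "Sushita Cafe, Madrid",
    "insight": "The best sushi I have ever tasted",
    "memorable_because": "Best sushi ever, served where my friend Paco is the chef"
}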
The location attribute is particularly interesting. The persona's home location is Madrid, but because the event takes place in a restaurant called “Sushita Cafe”, the model merges both pieces of information into a unified location attribute without any third-party libraries in between.
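To make this a running recorder rather than a one-off call, one could parse the response and append it to a simple log (a minimal sketch, assuming a local memories.jsonl file; json is already imported above):
memory = json.loads(chat_completion.choices[0].message.content)
with open("memories.jsonl", "a") as f:
    f.write(json.dumps(memory) + "\n")
# Each line of memories.jsonl now holds one schema-shaped memory record.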
Let’s wrap up the tutorial.
We have now built a memory module for autonomous agents using the Llama 3 70B model:
- State-of-the-Art (SOTA) LLM
- Supports JSON mode
- Integrates retrieved data directly into the JSON schema
- Inference speed is fast with the Groq API
The best part is that Meta AI will release an even larger 405B-parameter model later this year, which preliminary evaluations indicate will be SOTA-level, possibly even beyond the upcoming GPT-5 model.