πŸ’» Commands

Update: The Generator Update (0.1.5) introduced streaming:

message = "What operating system are we on?"

for chunk in interpreter.chat(message, display=False, stream=True):
    print(chunk)

Interactive Chat

To start an interactive chat in your terminal, either run interpreter from the command line:

interpreter

Or interpreter.chat() from a .py file:

interpreter.chat()

You can also stream each chunk:

message = "What operating system are we on?"

for chunk in interpreter.chat(message, display=False, stream=True):
    print(chunk)
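When streaming, you often want to reassemble the chunks into one complete reply. The sketch below shows one way to do that; `fake_stream` stands in for `interpreter.chat(..., stream=True)`, and the chunk shape (`{"type": ..., "content": ...}`) is an assumption for illustration, not the library's documented schema.

```python
# Sketch: accumulating streamed chunks into one string.
# fake_stream stands in for interpreter.chat(message, display=False, stream=True);
# the dict shape used here is an assumption, not a documented schema.
def fake_stream():
    for piece in ["We are ", "on ", "Linux."]:
        yield {"type": "message", "content": piece}

def collect(chunks):
    # Keep only message-type chunks and join their text content.
    return "".join(c["content"] for c in chunks if c.get("type") == "message")

print(collect(fake_stream()))  # -> We are on Linux.
```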

Programmatic Chat

For more precise control, you can pass messages directly to .chat(message):

interpreter.chat("Add subtitles to all videos in /videos.")

# ... Streams output to your terminal, completes task ...

interpreter.chat("These look great but can you make the subtitles bigger?")

# ...

Start a New Chat

In Python, Open Interpreter remembers conversation history. If you want to start fresh, you can reset it:

interpreter.messages = []

Save and Restore Chats

interpreter.chat() returns a list of messages, which can be used to resume a conversation with interpreter.messages = messages:

messages = interpreter.chat("My name is Killian.") # Save messages to 'messages'
interpreter.messages = [] # Reset interpreter ("Killian" will be forgotten)

interpreter.messages = messages # Resume chat from 'messages' ("Killian" will be remembered)
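Because the conversation is just a list of dicts, it can also be persisted to disk between runs. The sketch below round-trips it through JSON; the exact message fields shown are an assumption, so treat the list as opaque data.

```python
import json
import os
import tempfile

# Sketch: saving a conversation to disk and restoring it later.
# The message fields below are illustrative assumptions; in practice,
# round-trip whatever interpreter.chat() returned without inspecting it.
messages = [{"role": "user", "type": "message", "content": "My name is Killian."}]

path = os.path.join(tempfile.gettempdir(), "chat.json")
with open(path, "w") as f:
    json.dump(messages, f)

with open(path) as f:
    restored = json.load(f)

# interpreter.messages = restored  # resume where you left off
assert restored == messages
```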

Customize System Message

You can inspect and configure Open Interpreter's system message to extend its functionality, modify permissions, or give it more context.

interpreter.system_message += """
Run shell commands with -y so the user doesn't have to confirm them.
"""
print(interpreter.system_message)

Change your Language Model

Open Interpreter uses LiteLLM to connect to hosted language models.

You can change the model by setting the model parameter:

interpreter --model gpt-3.5-turbo
interpreter --model claude-2
interpreter --model command-nightly

In Python, set the model on the object:

interpreter.llm.model = "gpt-3.5-turbo"

Find the appropriate "model" string for your language model here.

Running Open Interpreter locally

Terminal

Open Interpreter can use an OpenAI-compatible server to run models locally (LM Studio, Jan.ai, Ollama, etc.).

Simply run interpreter with the api_base URL of your inference server (for LM Studio it is http://localhost:1234/v1 by default):

interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"

Alternatively, you can use Llamafile without installing any third-party software, just by running:

interpreter --local

For a more detailed guide, check out this video by Mike Bird.

How to run LM Studio in the background.

  1. Download LM Studio from https://lmstudio.ai/, then start it.
  2. Select a model then click ↓ Download.
  3. Click the ↔️ button on the left (below πŸ’¬).
  4. Select your model at the top, then click Start Server.

Once the server is running, you can begin your conversation with Open Interpreter.

Note: Local mode sets your context_window to 3000 and your max_tokens to 1000. If your model has different requirements, set these parameters manually (see below).

Python

Our Python package gives you more control over each setting. To replicate and connect to LM Studio, use these settings:

from interpreter import interpreter

interpreter.offline = True # Disables online features like Open Procedures
interpreter.llm.model = "openai/x" # Tells OI to send messages in OpenAI's format
interpreter.llm.api_key = "fake_key" # LiteLLM, which we use to talk to LM Studio, requires this
interpreter.llm.api_base = "http://localhost:1234/v1" # Point this at any OpenAI compatible server

interpreter.chat()

Context Window, Max Tokens

You can modify the max_tokens and context_window (in tokens) of locally running models.

For local mode, smaller context windows will use less RAM, so we recommend trying a much shorter window (~1000) if it's failing or running slowly. Make sure max_tokens is less than context_window.

interpreter --local --max_tokens 1000 --context_window 3000
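The rule above — max_tokens must be smaller than context_window — can be expressed as a small sanity check. This is an illustrative helper, not part of Open Interpreter's API:

```python
# Sketch: a sanity check mirroring the rule above. max_tokens reserves
# space for the model's reply, so it must fit inside the context window,
# leaving the remainder for the prompt and conversation history.
def check_budget(context_window, max_tokens):
    if max_tokens >= context_window:
        raise ValueError("max_tokens must be smaller than context_window")
    return context_window - max_tokens  # tokens left for prompt + history

print(check_budget(3000, 1000))  # -> 2000
```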

Verbose mode

To help you inspect Open Interpreter, we have a --verbose mode for debugging.

You can activate verbose mode by using its flag (interpreter --verbose), or mid-chat:

$ interpreter
...
> %verbose true <- Turns on verbose mode

> %verbose false <- Turns off verbose mode

Interactive Mode Commands

In interactive mode, you can use the following commands to enhance your experience.

Available Commands:

  • %verbose [true/false]: Toggle verbose mode. Without arguments or with true it enters verbose mode. With false it exits verbose mode.
  • %reset: Resets the current session's conversation.
  • %undo: Removes the previous user message and the AI's response from the message history.
  • %tokens [prompt]: (Experimental) Calculate the tokens that will be sent with the next prompt as context and estimate their cost. Optionally calculate the tokens and estimated cost of a prompt if one is provided. Relies on LiteLLM's cost_per_token() method for estimated costs.
  • %help: Show the help message.
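To make the command syntax concrete, here is a toy dispatcher showing how %-prefixed commands could be split from ordinary chat input. This is an illustrative sketch, not Open Interpreter's actual implementation:

```python
# Toy sketch: separating %-prefixed commands from ordinary chat input.
# Not Open Interpreter's actual code -- purely illustrative.
def parse_command(line):
    if not line.startswith("%"):
        return None  # ordinary chat message, not a command
    name, _, arg = line[1:].partition(" ")
    return name, arg.strip()

print(parse_command("%verbose true"))  # -> ('verbose', 'true')
print(parse_command("%reset"))         # -> ('reset', '')
print(parse_command("hello"))          # -> None
```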

Configuration / Profiles

Open Interpreter allows you to set default behaviors using YAML files.

This provides a flexible way to configure the interpreter without changing command-line arguments every time.

Run the following command to open the profiles directory:

interpreter --profiles

You can add YAML files there. The default profile is named default.yaml.

Multiple Profiles

Open Interpreter supports multiple YAML files, allowing you to easily switch between configurations:

interpreter --profile my_profile.yaml
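As a rough illustration, a profile might look like the fragment below. The key names here are assumptions for the sake of example; open your own default.yaml (via interpreter --profiles) to see the exact schema your version uses.

```yaml
# Illustrative profile -- key names are assumptions, not a documented schema.
# Check default.yaml in your profiles directory for the real structure.
llm:
  model: gpt-4o
  temperature: 0
auto_run: false
```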