* First commit of memgpt client and some messy test code
* rolled back unnecessary changes to abstract interface; switched client to always use Queueing Interface
* Added missing interface clear() in run_command; added convenience method for checking if an agent exists, used that in create_agent
* Formatting fixes
* Fixed incorrect naming of get_agent_memory in rest server
* Removed erroneous clear from client save method; Replaced print statements with appropriate logger calls in server
* Updated readme with client usage instructions
* added tests for Client
* make printing to the terminal toggleable on QueueingInterface (should probably refactor this into a logger)
* turn off printing to stdout via interface by default
* allow importing the python client in a similar fashion to openai-python (see https://github.com/openai/openai-python); a usage sketch follows this commit list
* Allowed quickstart on init of client; updated readme and test_client accordingly
* oops, fixed name of openai_api_key config key
* Fixed small typo
* Fixed broken test by adding memgpt hosted model details to agent config
* silence llamaindex 'LLM is explicitly disabled. Using MockLLM.' on server
* default to openai if user's memgpt directory is empty (first time)
* correct type hint
* updated section on client in readme
* added comment about how MemGPT config != Agent config
* patch unrelated test
* update wording on readme
* patch another unrelated test
* added python client to readme docs
* Changed 'user' to 'human' in example; Defaulted AgentConfig.model to 'None'; Fixed issue in create_agent (accounting for dict config); matched test code to example
* Fixed advanced example
* patch test
* patch
---------
Co-authored-by: cpacker <packercharles@gmail.com>
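
A rough sketch of the kind of client usage these commits describe. The import path, constructor argument, and method names (`MemGPT`, `quickstart`, `agent_exists`, `create_agent`, `user_message`) are assumptions based on the commit messages above, not a confirmed API:

```python
# Hypothetical usage sketch only -- the names below are assumptions,
# not the confirmed memgpt client API from these commits.
from memgpt import MemGPT  # assumed import, in the spirit of openai-python

# quickstart on init (assumed keyword, per "Allowed quickstart on init of client")
client = MemGPT(quickstart="openai")

agent_name = "demo_agent"
if not client.agent_exists(agent_name):  # assumed convenience existence check
    client.create_agent({"name": agent_name, "human": "cs_phd", "persona": "sam"})

# Send a message and print whatever the agent returns
for message in client.user_message(agent_id=agent_name, message="Hello!"):
    print(message)
```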
* I added a "/retry" command to retry and get another answer.
- Implemented by popping messages until the last user message is reached, then extracting that user message and sending it again (see the sketch after this commit list). This also works with state files and after manually popping messages.
- Updated the README to include /retry
- Updated the README for "pop" with a parameter and changed the default to 3, since this pops "function/assistant/user", which is the usual turnaround.
* disclaimer
---------
Co-authored-by: Charles Packer <packercharles@gmail.com>
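
A minimal, self-contained sketch of the pop-then-resend logic described above; `messages` and `send_fn` are illustrative stand-ins, not the actual MemGPT internals:

```python
# Illustrative sketch of the /retry behaviour: pop messages back to the most
# recent user message, then send that same message again.
def retry(messages, send_fn):
    """messages: list of {'role': ..., 'content': ...} dicts (assumed shape)."""
    last_user_content = None
    while messages:
        msg = messages.pop()          # drop function/assistant messages
        if msg["role"] == "user":
            last_user_content = msg["content"]
            break                     # stop once the last user message is found
    if last_user_content is not None:
        send_fn(last_user_content)    # re-send to get another answer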
* I added commands to shape the conversation (see the sketch after this commit list):
`/rethink <text>` will change the internal dialog of the last assistant message.
`/rewrite <text>` will change the last answer of the assistant.
Both commands can be used to change how the conversation continues in
some pretty drastic and powerful ways.
* remove magic numbers
* add disclaimer
---------
Co-authored-by: cpacker <packercharles@gmail.com>
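
A sketch of what the two commands do conceptually; the message structure (role, content, function_call fields) is an assumption for illustration, not the project's actual message schema:

```python
import json

# Illustrative sketch: both commands walk back to the last assistant message
# and overwrite part of it (field names are assumptions).
def rethink(messages, new_thought):
    """Replace the internal dialog of the last assistant message."""
    for msg in reversed(messages):
        if msg["role"] == "assistant":
            msg["content"] = new_thought  # assumed: monologue kept in 'content'
            return

def rewrite(messages, new_answer):
    """Replace the assistant's last visible answer."""
    for msg in reversed(messages):
        if msg["role"] == "assistant" and msg.get("function_call"):
            # assumed: the user-facing answer is the send_message argument
            args = json.loads(msg["function_call"]["arguments"])
            args["message"] = new_answer
            msg["function_call"]["arguments"] = json.dumps(args)
            return
```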
* I made dump show more messages and added a count (the last x)
There seem to have been some implementation changes, so the current
dump message helper functions no longer show much useful info.
I changed it so that you can `dump 5` (last 5 messages) and it will
print human-readable output (see the sketch after this commit list).
This gives you a better understanding of what is going on.
As some messages are still not shown, I also display the (reverse) index of each
printed message, so one can see how far to "pop" to reach a specific point
without resorting to /dumpraw.
* black
* patch
---------
Co-authored-by: Charles Packer <packercharles@gmail.com>
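
A small sketch of the `dump <count>` output described above, including the reverse index used to decide how far to pop; the message shape is assumed for illustration:

```python
# Illustrative sketch: print the last `count` messages in readable form,
# tagging each with a reverse index so the user knows how many to /pop.
def dump(messages, count=None):
    selected = messages if count is None else messages[-count:]
    total = len(selected)
    for offset, msg in enumerate(selected):
        reverse_index = total - offset        # 1 == most recent message
        role = msg.get("role", "?")
        content = msg.get("content", "")
        print(f"[{reverse_index}] {role}: {content}")
```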