* mark deprecated API section
* CLI bug fixes for Azure
* check Azure before running
* Update README.md
* Update README.md
* bug fix with persona loading
* remove print
* make errors for CLI flags clearer
* format
* fix imports
* fix imports
* add prints
* update lock
* update config fields
* cleanup config loading
* commit
* remove asserts
* refactor configure
* put into different functions
* add embedding default
* pass in config
* fixes
* allow overriding the OpenAI embedding endpoint
* black
* trying to patch tests (some circular import errors)
* update flags and docs
* patched support for local LLMs using an endpoint and endpoint type passed via configs, not env vars (see the config sketch after this list)
* missing files
* fix naming
* fix import
* fix two runtime errors
* patch Ollama typo; move the Ollama model question before the wrapper question; rephrase the question to include a link to the ReadTheDocs; also default to an Ollama model that includes a tag
* disable debug messages
* made error message for failed load more informative
* don't print dynamic linking function warning unless --debug
* updated tests to work with the new CLI workflow (disabled the OpenAI config test for now)
* added skips for tests when vars are missing
* update bad arg
* revise test to soft pass on empty string too
* don't run configure twice
* extend timeout (trying to pass despite the NLTK download)
* update defaults
* typo with endpoint type default
* patch runtime errors for when model is None
* catch another case of `'x' in model` failing when model is None (preemptively)
* allow overrides to local-LLM-related config params
* made model wrapper selection a pick from a list instead of raw input
* update test for select instead of input
* Fixed a bug in the endpoint when using the local->openai selection; also added a validation loop to manual endpoint entry
* updated error messages to be more informative with links to readthedocs
* add back gpt-3.5-turbo
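A minimal sketch of what the config-driven endpoint override amounts to (the section and key names here are illustrative assumptions, not a guaranteed match for MemGPT's exact config schema):

```python
# Sketch: read the endpoint and endpoint type from ~/.memgpt/config rather
# than env vars; section/key names are assumptions for illustration.
import configparser
import os

config = configparser.ConfigParser()
config.read(os.path.join(os.path.expanduser("~"), ".memgpt", "config"))

# Fall back to OpenAI defaults when the override keys are absent.
endpoint = config.get("model", "model_endpoint", fallback="https://api.openai.com/v1")
endpoint_type = config.get("model", "model_endpoint_type", fallback="openai")
```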
---------
Co-authored-by: cpacker <packercharles@gmail.com>
* partial
* working schema builder, tested that it matches the hand-written schemas (see the sketch after this list)
* correct another schema diff
* refactor
* basic working test
* refactored preset creation to use YAML files
* added docstring-parser
* add code for dynamic function linking in agent loading
* pretty schema diff printer
* support pulling from ~/.memgpt/functions/*.py
* clean
* allow looking for system prompts in ~/.memgpt/system_prompts
* create ~/.memgpt/system_prompts if it doesn't exist
* pull presets from ~/.memgpt/presets in addition to examples folder
* add support for loading agent configs that have additional keys
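As a rough illustration of the schema builder above (the type mapping and skip list are simplified assumptions, not MemGPT's actual implementation):

```python
# Sketch: derive an OpenAI function schema from a function's signature and
# docstring using docstring-parser. A real builder would map annotations to
# JSON types; everything is "string" here for brevity.
import inspect
from docstring_parser import parse

def build_schema(func):
    doc = parse(inspect.getdoc(func) or "")
    param_docs = {p.arg_name: p.description for p in doc.params}
    properties = {
        name: {"type": "string", "description": param_docs.get(name, "")}
        for name in inspect.signature(func).parameters
        if name != "self"  # hypothetical skip list
    }
    return {
        "name": func.__name__,
        "description": doc.short_description or "",
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }
```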
---------
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
* Remove AsyncAgent and async from the CLI
Refactor agent.py and memory.py
Refactor interface.py
Refactor main.py
Refactor openai_tools.py
Refactor cli/cli.py
stray asyncs
save
make legacy embeddings not use async
Refactor presets
Remove deleted function from import
* remove stray prints
* typo
* another stray print
* patch test
---------
Co-authored-by: cpacker <packercharles@gmail.com>
* Added a `/retry` command to retry and get another answer.
- Implemented by popping messages until hitting the last user message, then
extracting the user's last message and sending it again (sketched below).
This also works with state files and after manually popping messages.
- Updated the README to include /retry
- Updated the README for `/pop` with a parameter and changed the default to 3,
since this pops "function/assistant/user", which is the usual turnaround.
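A rough sketch of the `/retry` mechanism (the message shape and helper names are assumptions, not MemGPT's internals):

```python
# Illustrative /retry: pop until the last user message, then re-send it.
def retry(messages, send_user_message):
    while messages and messages[-1]["role"] != "user":
        messages.pop()
    if not messages:
        return  # nothing to retry
    last_user = messages.pop()  # re-sending will append it again
    send_user_message(last_user["content"])
```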
* disclaimer
---------
Co-authored-by: Charles Packer <packercharles@gmail.com>
* Added commands to shape the conversation:
`/rethink <text>` will change the internal dialog of the last assistant message.
`/rewrite <text>` will change the last answer of the assistant.
Both commands can be used to change how the conversation continues in
some pretty drastic and powerful ways.
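Roughly how such handlers can work (illustrative only; the real message objects and function-call layout may differ):

```python
import json

def rethink(messages, new_text):
    # /rethink: replace the internal monologue of the last assistant message.
    for msg in reversed(messages):
        if msg["role"] == "assistant":
            msg["content"] = new_text
            return

def rewrite(messages, new_text):
    # /rewrite: replace what the assistant actually said to the user, assumed
    # here to live in a send_message function call's arguments.
    for msg in reversed(messages):
        if msg["role"] == "assistant" and msg.get("function_call"):
            msg["function_call"]["arguments"] = json.dumps({"message": new_text})
            return
```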
* remove magic numbers
* add disclaimer
---------
Co-authored-by: cpacker <packercharles@gmail.com>
* Made `/dump` show more messages and added a count (the last x)
There seem to have been some implementation changes, so the current dump
message helper functions do not show much useful info. I changed it so that
you can `/dump 5` (last 5 messages) and it will print user-readable output.
This gives you a better understanding of what is going on.
As some messages are still not shown, I also print each message's (reverse)
index, so one can see how far to `/pop` to reach a specific point without
resorting to `/dumpraw`.
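A minimal sketch of the counted dump with reverse indices (message shape assumed for illustration):

```python
# Illustrative /dump <count>: print the last `count` messages readably, with
# a reverse index showing how many /pop's it takes to reach each one.
def dump(messages, count=None):
    shown = messages if count is None else messages[-count:]
    total = len(shown)
    for i, msg in enumerate(shown):
        print(f"[{total - i}] {msg['role']}: {msg.get('content') or ''}")
```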
* black
* patch
---------
Co-authored-by: Charles Packer <packercharles@gmail.com>
* strip '/' and use osp.join (sketched below)
* grepped for MEMGPT_DIR, found more places to replace '/'
* typo
* grep pass over filesep
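The gist of the separator fix (MEMGPT_DIR shown as it is conventionally derived; the exact constant may differ):

```python
import os

MEMGPT_DIR = os.path.join(os.path.expanduser("~"), ".memgpt")

# Before (breaks on Windows): config_path = MEMGPT_DIR + "/config"
config_path = os.path.join(MEMGPT_DIR, "config")  # portable path handling
```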
---------
Co-authored-by: Vivian Fang <hi@vivi.sh>
* stub out the tests to make sure the GitHub workflow is fine
* black test
* strip circular import
* further dummy-fy the test
* use pexpect (see the sketch after this list)
* need y
* Update tests.yml
* Update tests.yml
* added prints
* sleep before decode print
* updated test to match legacy flow
* revising test where it fails
* comment out the 'Enter your message' check for now; pexpect seems to be stuck, only seeing the bootup message
* weird, now it's not showing 'Bootup sequence complete'?
* added debug
* handle none
* allow more time
* loosen string check
* add enter after commands
* modify saved component snippet
* add try again check
* more sendlines
* more excepts
* test passing locally
* Update tests.yml
* don't clearline
* add a catch for an EOF that seems to only happen on GitHub Actions (Ubuntu) but not macOS
* more eof
* try flushing
* add strip_ui flag
* fix archival_memory_search and memory print output
* Don't use questionary for input if strip_ui
* Run black
* Always strip UI if TEST is set
* Add another flush
* expect Enter your message
* more debug prints
* one more shot at printing debug info
* stray Fore color in stripped UI
* tests pass locally
* cleanup
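The rough shape of the resulting pexpect-driven test (the command, prompts, and timeout below are illustrative, not the exact test code):

```python
import pexpect

# Spawn the CLI; the long timeout accommodates slow first-run setup such as
# the NLTK download mentioned above.
child = pexpect.spawn("memgpt run", timeout=120, encoding="utf-8")
child.expect("Enter your message")
child.sendline("/save")
child.sendline("")  # extra Enter after commands, as the fixes above add
# Catch EOF/TIMEOUT explicitly so CI (GitHub Actions) doesn't hard-fail.
child.expect(["Saved", pexpect.EOF, pexpect.TIMEOUT])
child.close()
```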
---------
Co-authored-by: Vivian Fang <hi@vivi.sh>