Commit Graph

89 Commits

Author SHA1 Message Date
Sarah Wooders
10e956a7be Enable loading local agents with refactored recall memory + update MemGPTConfig to default to sqlite + chroma 2023-12-26 17:06:56 +04:00
Sarah Wooders
1c018cc14e Add more comprehensive tests, make row ids be strings (not integers) 2023-12-26 17:05:58 +04:00
Sarah Wooders
de88c9d50d Refactor chroma integration 2023-12-26 17:05:41 +04:00
Sarah Wooders
9b3d59e016 Support recall and archival memory for postgres
working test
2023-12-26 17:05:24 +04:00
Max Blackmer, CSM
e67c521da8
Merge branch 'main' into feature/global-logging 2023-12-25 14:53:19 -05:00
Charles Packer
b97064e372
feat: pull model list for openai-compatible endpoints (#630)
* allow entering custom model name when using openai/azure

* pull models from endpoint

* added/tested vllm and azure

* no print

* make red

* make the endpoint question give you an opportunity to enter your openai api key again in case you made a mistake / want to swap it out

* add cascading workflow for openai+azure model listings

* patched bug w/ azure listing
2023-12-21 23:27:48 -08:00
Charles Packer
09c7fa763b
fix: CLI conveniences (add-on to #674) (#675)
* for openai, check for key and if missing allow user to pass it; for azure, throw error if the key isn't present

* correct prior checking of azure to be more strict, added similar checks at the embedding endpoint config stage

* forgot to override value in config before saving

* clean up the ValueErrors from missing keys so that no stacktrace gets printed, make success text green to match others
2023-12-21 21:35:19 -08:00
Matheus
cfbec583ae
fix: Throw "env vars not set" early + enhance attach for KeyboardInterrupt (#669) (#674) 2023-12-21 20:47:34 -08:00
Max Blackmer
72f28d3853 [cpacker#319] run Black Reformat on files. 2023-12-19 15:09:08 -05:00
Charles Packer
c0290088db
feat: Migrate docs (#646)
* updated docs for readme

* Update index.md

* Update index.md

* added header

* broken link

* sync heading sizes

* fix various broken rel links

* Update index.md

* added webp

* Update index.md

* strip mkdocs/rtk files

* replaced readthedocs references with readme
2023-12-18 20:29:24 -08:00
Max Blackmer
f79147b5a6 Merge branch 'feature/global-logging' of github.com:agiletechnologist/MemGPT into feature/global-logging 2023-12-18 16:55:23 -05:00
Max Blackmer
119b1afccd [#319] Global Logging Configuration with directory fixes at config load. 2023-12-18 16:51:23 -05:00
Charles Packer
5c49265aba
migrate to using completions endpoint by default (#628)
* migrate to using completions endpoint by default

* added note about version to docs
2023-12-15 12:29:52 -08:00
Charles Packer
b8b375d663
Patch azure embeddings + handle azure deployments properly (#594)
* Fix bug where embeddings endpoint was getting set to deployment, upgraded pinned llama-index to use new version that has azure endpoint

* updated documentation

* added memgpt example for openai

* change wording to match configure
2023-12-08 16:31:43 -08:00
Sarah Wooders
9c2e6b774c
Chroma storage integration (#285) 2023-12-05 17:49:00 -08:00
Charles Packer
ec7fa25c07
Update AutoGen documentation and notebook example (#540)
* Update AutoGen documentation

* Update webui.md

* Update webui.md

* Update lmstudio.md

* Update lmstudio.md

* Update mkdocs.yml

* Update README.md

* Update README.md

* Update README.md

* Update autogen.md

* Update local_llm.md

* Update local_llm.md

* Update autogen.md

* Update autogen.md

* Update autogen.md

* refreshed the autogen examples + notebook (notebook is untested)

* unrelated patch of typo I noticed

* poetry remove pyautogen, then manually removed autogen extra in .toml

* add pdf dependency

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2023-11-30 17:45:04 -08:00
Sarah Wooders
2adc75d10b
Remove usage of BACKEND_TYPE (#539) 2023-11-30 14:18:25 -08:00
Sarah Wooders
dd5a110be4
Removing dead code + legacy commands (#536) 2023-11-30 13:37:11 -08:00
Charles Packer
bc0c1e4a37
Remove openai package and migrate to requests (#534) 2023-11-30 13:00:13 -08:00
Sarah Wooders
febc7344c7
Add support for HuggingFace Text Embedding Inference endpoint for embeddings (#524) 2023-11-27 16:28:49 -08:00
Sarah Wooders
da081667d6
Add warning if no data sources loaded on /attach command (#513)
* minor fix

* warn instead of error when no data sources are loaded
2023-11-27 13:00:23 -08:00
Charles Packer
2f6ad7878f
vLLM support (#492)
* init vllm (not tested), uses POST API not openai wrapper

* add to cli config list

* working vllm endpoint

* add model configuration for vllm

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2023-11-21 15:16:03 -08:00
Prashant Dixit
f957209c35
Lancedb storage integration (#455) 2023-11-17 11:36:30 -08:00
Charles Packer
34b8b517ce
move webui to new openai completions endpoint, but also provide existing functionality via webui-legacy backend (#468) 2023-11-15 23:08:30 -08:00
Oliver Smith
7aafe34964
When default_mode_endpoint has a value, it needs to become model_endpoint. (#452)
Co-authored-by: Oliver Smith <oliver.smith@superevilmegacorp.com>
2023-11-15 01:18:23 -08:00
Sarah Wooders
28514da5df
Refactor config + determine LLM via config.model_endpoint_type (#422)
* mark deprecated API section

* CLI bug fixes for azure

* check azure before running

* Update README.md

* Update README.md

* bug fix with persona loading

* remove print

* make errors for cli flags more clear

* format

* fix imports

* fix imports

* add prints

* update lock

* update config fields

* cleanup config loading

* commit

* remove asserts

* refactor configure

* put into different functions

* add embedding default

* pass in config

* fixes

* allow overriding openai embedding endpoint

* black

* trying to patch tests (some circular import errors)

* update flags and docs

* patched support for local llms using endpoint and endpoint type passed via configs, not env vars

* missing files

* fix naming

* fix import

* fix two runtime errors

* patch ollama typo, move ollama model question pre-wrapper, modify question phrasing to include link to readthedocs, also have a default ollama model that has a tag included

* disable debug messages

* made error message for failed load more informative

* don't print dynamic linking function warning unless --debug

* updated tests to work with new cli workflow (disabled openai config test for now)

* added skips for tests when vars are missing

* update bad arg

* revise test to soft pass on empty string too

* don't run configure twice

* extend timeout (try to pass against nltk download)

* update defaults

* typo with endpoint type default

* patch runtime errors for when model is None

* catching another case of 'x in model' when model is None (preemptively)

* allow overrides to local llm related config params

* made model wrapper selection from a list vs raw input

* update test for select instead of input

* Fixed bug in endpoint when using local->openai selection, also added validation loop to manual endpoint entry

* updated error messages to be more informative with links to readthedocs

* add back gpt3.5-turbo

---------

Co-authored-by: cpacker <packercharles@gmail.com>
2023-11-14 15:58:19 -08:00
Charles Packer
b789549d02
Configurable presets to support easy extension of MemGPT's function set (#420)
* partial

* working schema builder, tested that it matches the hand-written schemas

* correct another schema diff

* refactor

* basic working test

* refactored preset creation to use yaml files

* added docstring-parser

* add code for dynamic function linking in agent loading

* pretty schema diff printer

* support pulling from ~/.memgpt/functions/*.py

* clean

* allow looking for system prompts in ~/.memgpt/system_prompts

* create ~/.memgpt/system_prompts if it doesn't exist

* pull presets from ~/.memgpt/presets in addition to examples folder

* add support for loading agent configs that have additional keys

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2023-11-13 10:43:28 -08:00
Sarah Wooders
e0455b7116
Fix model configuration for when config.model == "local" previously (#415)
* fix agent load

* fix model config
2023-11-10 12:16:33 -08:00
Charles Packer
cb50308ef6
Fix max tokens constant (#374)
* stripped LLM_MAX_TOKENS constant, instead it's a dictionary, and context_window is set via the config (defaults to 8k)

* pass context window in the calls to local llm APIs

* safety check

* remove dead imports

* context_length -> context_window

* add default for agent.load

* in configure, ask for the model context window if not specified via dictionary

* fix default, also make message about OPENAI_API_BASE missing more informative

* make openai default embedding if openai is default llm

* make openai on top of list

* typo

* also make local the default for embeddings if you're using localllm instead of the locallm endpoint

* provide --context_window flag to memgpt run

* fix runtime error

* stray comments

* stray comment
2023-11-09 17:59:03 -08:00
Vivian Fang
e5c0e1276b
Remove AsyncAgent and async from cli (#400)
* Remove AsyncAgent and async from cli

Refactor agent.py memory.py

Refactor interface.py

Refactor main.py

Refactor openai_tools.py

Refactor cli/cli.py

stray asyncs

save

make legacy embeddings not use async

Refactor presets

Remove deleted function from import

* remove stray prints

* typo

* another stray print

* patch test

---------

Co-authored-by: cpacker <packercharles@gmail.com>
2023-11-09 14:51:12 -08:00
Sarah Wooders
069780bc05
Use ~/.memgpt/config to set questionary defaults in memgpt configure + update tests to use specific config path (#389) 2023-11-09 14:01:11 -08:00
Sarah Wooders
e9a2f8e762
Replace memgpt run flags error with warning + remove custom embedding endpoint option + add agent create time (#364) 2023-11-09 09:10:17 -08:00
Charles Packer
aa94ce9515
add gpt-4-turbo (#349)
* add gpt-4-turbo

* add in another place

* change to 3.5 16k
2023-11-06 21:53:49 -08:00
Sarah Wooders
4d1cb31b97
Fix config tests (#343)
Co-authored-by: Vivian Fang <hi@vivi.sh>
2023-11-06 18:43:23 -08:00
Sarah Wooders
e2a685acba
Specify model inference and embedding endpoint separately (#286) 2023-11-06 17:19:45 -08:00
Dividor
e0a653d395
Aligned code with README: the environment variable for Azure embeddings should be AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT (#308) 2023-11-05 10:01:02 -08:00
Sarah Wooders
2492db6b59
VectorDB support (pgvector) for archival memory (#226) 2023-11-03 16:19:15 -07:00
Sarah Wooders
de6f6e857f
Cli bug fixes (loading human/persona text, azure setup, local setup) (#222)
* mark deprecated API section

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* CLI bug fixes for azure

* check azure before running

* Update README.md

* Update README.md

* bug fix with persona loading

* revert readme

* remove print
2023-10-31 13:51:20 -07:00
Sarah Wooders
b7f9560bef
Refactoring CLI to use config file, connect to Llama Index data sources, and allow for multiple agents (#154)
* Migrate to `memgpt run` and `memgpt configure` 
* Add Llama index data sources via `memgpt load` 
* Save config files for defaults and agents
2023-10-30 16:47:54 -07:00