Commit Graph

18 Commits

Author  SHA1  Message  Date

Sarah Wooders  f38ca86b4d  Increase workflow test timeout time + add test prints  2023-12-27 15:42:11 +04:00
Sarah Wooders  34be9b87d3  Run black formatter  2023-12-26 17:53:57 +04:00
Sarah Wooders  1b968e372b  Set get_all limit to None by default and add postgres to archival memory tests  2023-12-26 17:07:54 +04:00
Sarah Wooders  04f8576c66  Support attaching data sources to agents for storage refactor  2023-12-26 17:07:28 +04:00
Sarah Wooders  0694731760  Support metadata table via storage connectors for data sources  2023-12-26 17:06:58 +04:00
Sarah Wooders  bee61270a1  Remove broken tests from chroma merge (#584)  2023-12-05 22:09:44 -08:00
Sarah Wooders  9c2e6b774c  Chroma storage integration (#285)  2023-12-05 17:49:00 -08:00
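
The Chroma storage integration (#285) above backs agent storage with the chromadb client. As a rough illustration of the kind of calls such a connector wraps (the collection name and sample passage are placeholders, not MemGPT's actual schema), a minimal sketch:

```python
# Minimal chromadb usage sketch; "archival_memory" and the sample passage are
# illustrative placeholders, not the names used by the MemGPT connector.
import chromadb

client = chromadb.PersistentClient(path="./chroma_storage")  # on-disk store
collection = client.get_or_create_collection(name="archival_memory")

# Insert a passage, then query it back by semantic similarity.
collection.add(ids=["passage-1"], documents=["The user prefers concise answers."])
results = collection.query(query_texts=["user preferences"], n_results=1)
print(results["documents"])
```
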
Prashant Dixit  f957209c35  Lancedb storage integration (#455)  2023-11-17 11:36:30 -08:00
Sarah Wooders  28514da5df  Refactor config + determine LLM via config.model_endpoint_type (#422)  2023-11-14 15:58:19 -08:00
  * mark deprecated API section
  * CLI bug fixes for azure
  * check azure before running
  * Update README.md
  * Update README.md
  * bug fix with persona loading
  * remove print
  * make errors for cli flags more clear
  * format
  * fix imports
  * fix imports
  * add prints
  * update lock
  * update config fields
  * cleanup config loading
  * commit
  * remove asserts
  * refactor configure
  * put into different functions
  * add embedding default
  * pass in config
  * fixes
  * allow overriding openai embedding endpoint
  * black
  * trying to patch tests (some circular import errors)
  * update flags and docs
  * patched support for local llms using endpoint and endpoint type passed via configs, not env vars
  * missing files
  * fix naming
  * fix import
  * fix two runtime errors
  * patch ollama typo, move ollama model question pre-wrapper, modify question phrasing to include link to readthedocs, also have a default ollama model that has a tag included
  * disable debug messages
  * made error message for failed load more informative
  * don't print dynamic linking function warning unless --debug
  * updated tests to work with new cli workflow (disabled openai config test for now)
  * added skips for tests when vars are missing
  * update bad arg
  * revise test to soft pass on empty string too
  * don't run configure twice
  * extend timeout (try to pass against nltk download)
  * update defaults
  * typo with endpoint type default
  * patch runtime errors for when model is None
  * catching another case of 'x in model' when model is None (preemptively)
  * allow overrides to local llm related config params
  * made model wrapper selection from a list vs raw input
  * update test for select instead of input
  * Fixed bug in endpoint when using local->openai selection, also added validation loop to manual endpoint entry
  * updated error messages to be more informative with links to readthedocs
  * add back gpt3.5-turbo
  Co-authored-by: cpacker <packercharles@gmail.com>
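
Commit 28514da5df moves LLM backend selection into the config file via config.model_endpoint_type instead of environment variables. A minimal sketch of that pattern, assuming an INI-style config with a [model] section; the file location, section name, and every key other than model_endpoint_type are assumptions for illustration, not the project's actual layout:

```python
# Sketch of config-driven backend selection; the config path, section, and
# fallback values here are illustrative assumptions.
import configparser
from pathlib import Path

config = configparser.ConfigParser()
config.read(Path.home() / ".memgpt" / "config")  # hypothetical location

endpoint_type = config.get("model", "model_endpoint_type", fallback="openai")
endpoint = config.get("model", "model_endpoint", fallback="https://api.openai.com/v1")

if endpoint_type == "openai":
    print(f"Using OpenAI-compatible endpoint at {endpoint}")
elif endpoint_type in ("ollama", "webui", "lmstudio"):
    print(f"Using local LLM backend '{endpoint_type}' at {endpoint}")
else:
    raise ValueError(f"Unknown model_endpoint_type: {endpoint_type}")
```
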
Sarah Wooders  2492db6b59  VectorDB support (pgvector) for archival memory (#226)  2023-11-03 16:19:15 -07:00
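
Commit 2492db6b59 adds pgvector-backed archival memory. A minimal sketch of the underlying Postgres pattern; the DSN, table layout, and 3-dimensional vectors are illustrative only and do not reflect MemGPT's actual schema or embedding dimension:

```python
# Illustrative pgvector usage, not MemGPT's actual storage connector.
import psycopg2

conn = psycopg2.connect("postgresql://memgpt:memgpt@localhost:5432/memgpt")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute(
        "CREATE TABLE IF NOT EXISTS archival_memory ("
        " id SERIAL PRIMARY KEY, passage TEXT, embedding vector(3))"  # real embeddings are e.g. 1536-d
    )
    cur.execute(
        "INSERT INTO archival_memory (passage, embedding) VALUES (%s, %s::vector)",
        ("example passage", "[0.1, 0.2, 0.3]"),
    )
    # Nearest-neighbor lookup by L2 distance.
    cur.execute(
        "SELECT passage FROM archival_memory ORDER BY embedding <-> %s::vector LIMIT 5",
        ("[0.1, 0.2, 0.3]",),
    )
    print(cur.fetchall())
conn.close()
```
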
Charles Packer  270913218a  Add basic tests that are run on PR/main (#228)  2023-11-01 17:01:45 -07:00
  * make tests dummy to make sure github workflow is fine
  * black test
  * strip circular import
  * further dummy-fy the test
  * use pexpect
  * need y
  * Update tests.yml
  * Update tests.yml
  * added prints
  * sleep before decode print
  * updated test to match legacy flow
  * revising test where it fails
  * comment out enter your message check for now, pexpect seems to be stuck on only setting the bootup message
  * weird now it's not showing Bootup sequence complete?
  * added debug
  * handle none
  * allow more time
  * loosen string check
  * add enter after commands
  * modify saved component snippet
  * add try again check
  * more sendlines
  * more excepts
  * test passing locally
  * Update tests.yml
  * don't clearline
  * add EOF catch that seems to only happen on github actions (ubuntu) but not macos
  * more eof
  * try flushing
  * add strip_ui flag
  * fix archival_memory_search and memory print output
  * Don't use questionary for input if strip_ui
  * Run black
  * Always strip UI if TEST is set
  * Add another flush
  * expect Enter your message
  * more debug prints
  * one more shot at printing debug info
  * stray fore color in stripped ui
  * tests pass locally
  * cleanup
  Co-authored-by: Vivian Fang <hi@vivi.sh>
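
The test-suite commit above drives the CLI with pexpect (spawn the process, wait for a prompt, send input, expect EOF). A stripped-down sketch of that pattern; the exact command, prompt strings, and timeout are assumptions based on the bullet messages, not the repo's actual test:

```python
# Minimal pexpect-driven CLI test sketch; command and prompt text are assumed.
import pexpect

child = pexpect.spawn("memgpt run", encoding="utf-8", timeout=120)
child.expect("Enter your message")  # wait for the interactive prompt
child.sendline("hello")             # type a message and press enter
child.sendline("/exit")             # quit the CLI
child.expect(pexpect.EOF)           # process should terminate cleanly
```
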
Charles Packer  0a075f448b  patch bug (#221)  2023-10-31 11:43:38 -07:00
Sarah Wooders  b7f9560bef  Refactoring CLI to use config file, connect to Llama Index data sources, and allow for multiple agents (#154)  2023-10-30 16:47:54 -07:00
  * Migrate to `memgpt run` and `memgpt configure`
  * Add Llama Index data sources via `memgpt load`
  * Save config files for defaults and agents
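
Commit b7f9560bef introduces the config-driven CLI entry points listed above. A usage sketch with the commands named in the commit body; subcommand arguments and flags are omitted because they are not spelled out here:

```bash
# Commands named in the commit body; check `memgpt <command> --help` for exact flags.
memgpt configure   # interactively write the default config file
memgpt load        # ingest a Llama Index data source for later use by agents
memgpt run         # start (or resume) an agent using the saved config
```
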
Sarah Wooders  18f14968b5  update poetry  2023-10-26 17:43:37 -07:00
Sarah Wooders  0f251af761  reformat  2023-10-26 16:08:25 -07:00
Sarah Wooders  686bee8a0a  add database test  2023-10-26 15:30:31 -07:00
Sarah Wooders  541b2d1403  try to avoid changing current cli logic flow  2023-10-26 14:35:49 -07:00
Sarah Wooders  d3e7fd0517  add archival memory test  2023-10-26 14:25:46 -07:00