Commit Graph

53 Commits

Author SHA1 Message Date
Sarah Wooders
89b51a12cc Fix bug with supporting paginated search for recall memory 2024-01-03 18:32:13 -08:00
Sarah Wooders
f71c855dce Change Message data type to use tool format and create tool_call_id field 2024-01-03 18:03:11 -08:00
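The tool-format change in the commit above can be sketched roughly as follows; the class and field names here are illustrative assumptions, not the repository's actual `data_types.py` definitions:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a Message record in the OpenAI-style tool format,
# where a "tool" message carries a tool_call_id linking it back to the
# assistant's tool call. Names are assumptions for illustration only.
@dataclass
class Message:
    role: str                           # e.g. "user", "assistant", or "tool"
    text: str
    tool_call_id: Optional[str] = None  # set only on tool responses

msg = Message(role="tool", text='{"status": "OK"}', tool_call_id="call_123")
```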
Charles Packer
d6a56b262e Merge branch 'main' into cherry-pick-storage-refactor 2023-12-30 21:38:58 -08:00
Charles Packer
a3e94ae19e fix: patch TEI error in load (#725)
* patch TEI error in load (now get different error)

* more hiding of MOCKLLM

* fix embedding dim

* refactored bandaid patches into custom embedding class return object patch
2023-12-27 22:09:29 -08:00
Sarah Wooders
83c930e9ce Deprecate list_loaded_data for listing sources, and use metadata DB instead 2023-12-26 18:47:16 +04:00
Sarah Wooders
34be9b87d3 Run black formatter 2023-12-26 17:53:57 +04:00
Sarah Wooders
48f1fd490e Bugfixes for get_all function and code cleanup to match main 2023-12-26 17:50:49 +04:00
Sarah Wooders
1b968e372b Set get_all limit to None by default and add postgres to archival memory tests 2023-12-26 17:07:54 +04:00
Sarah Wooders
0694731760 Support metadata table via storage connectors for data sources 2023-12-26 17:06:58 +04:00
Sarah Wooders
9b3d59e016 Support recall and archival memory for postgres
working test
2023-12-26 17:05:24 +04:00
Sarah Wooders
9f3806dfcb Add in memory storage connector implementation for refactored storage 2023-12-26 17:05:24 +04:00
Sarah Wooders
cb340ff884 Add data_types.py file for standard data types 2023-12-26 17:05:12 +04:00
Sarah Wooders
d258f03899 Define refactored storage table types (archival, recall, documents, users, agents) 2023-12-26 17:04:11 +04:00
Charles Packer
622ae07208 fix: misc fixes (#700)
* add folder generation

* disable default temp until more testing is done

* apply embedding payload patch to search, add input checking for better runtime error messages

* streamlined memory pressure warning now that heartbeats get forced
2023-12-25 01:29:13 -08:00
Charles Packer
7a019f083b moved configs for hosted to https, patched bug in embedding creation (#685) 2023-12-23 11:40:07 -08:00
Charles Packer
0e69058331 feat: Add new wrapper defaults (#656) 2023-12-21 17:05:38 +04:00
Charles Packer
093df8b46b fix runtime error (#586) 2023-12-05 23:01:37 -08:00
Sarah Wooders
9c2e6b774c Chroma storage integration (#285) 2023-12-05 17:49:00 -08:00
Sarah Wooders
dd5a110be4 Removing dead code + legacy commands (#536) 2023-11-30 13:37:11 -08:00
Sarah Wooders
e24a1a3908 Add user field for vLLM endpoint (#531) 2023-11-29 12:30:42 -08:00
Charles Packer
4dfa063d65 Clean memory error messages (#523)
* Raise a custom keyerror instead of basic keyerror to clarify issue to LLM processor

* remove self value from error message passed to LLM processor

* simplify error message propagated to llm processor
2023-11-27 16:41:42 -08:00
Sarah Wooders
febc7344c7 Add support for HuggingFace Text Embedding Inference endpoint for embeddings (#524) 2023-11-27 16:28:49 -08:00
Charles Packer
c8cf4a6536 extra arg being passed causing a runtime error (#517) 2023-11-27 11:36:26 -08:00
Charles Packer
fd0c4e3393 Fix #487 (summarize call uses OpenAI even with local LLM config) (#488)
* use new chatcompletion function that takes agent config inside of summarize

* patch issue with model now missing
2023-11-19 14:54:12 -08:00
sahusiddharth
5d2865c0a7 Docs: Fix typos (#477) 2023-11-17 15:12:14 -08:00
Charles Packer
c1be8d866a patch #428 (#433) 2023-11-12 22:59:53 -08:00
Charles Packer
cb50308ef6 Fix max tokens constant (#374)
* stripped LLM_MAX_TOKENS constant, instead it's a dictionary, and context_window is set via the config (defaults to 8k)

* pass context window in the calls to local llm APIs

* safety check

* remove dead imports

* context_length -> context_window

* add default for agent.load

* in configure, ask for the model context window if not specified via dictionary

* fix default, also make message about OPENAI_API_BASE missing more informative

* make openai default embedding if openai is default llm

* make openai on top of list

* typo

* also make local the default for embeddings if you're using localllm instead of the locallm endpoint

* provide --context_window flag to memgpt run

* fix runtime error

* stray comments

* stray comment
2023-11-09 17:59:03 -08:00
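The constant-to-dictionary change described in the bullets above might look roughly like this; the dictionary name matches the commit text, but the specific models, values, and helper function are assumptions for illustration (the commit says the context window defaults to 8k):

```python
# Sketch: instead of one LLM_MAX_TOKENS constant, keep a per-model dictionary
# and fall back to a default when the model is unknown. Entries are illustrative.
LLM_MAX_TOKENS = {
    "DEFAULT": 8192,        # commit text: context_window defaults to 8k
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
}

def get_context_window(model: str) -> int:
    """Look up the model's context window, falling back to the default."""
    return LLM_MAX_TOKENS.get(model, LLM_MAX_TOKENS["DEFAULT"])
```

In `configure`, the commit then asks the user for a context window only when the model is not covered by the dictionary, mirroring this fallback.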
Vivian Fang
e5c0e1276b Remove AsyncAgent and async from cli (#400)
* Remove AsyncAgent and async from cli

Refactor agent.py memory.py

Refactor interface.py

Refactor main.py

Refactor openai_tools.py

Refactor cli/cli.py

stray asyncs

save

make legacy embeddings not use async

Refactor presets

Remove deleted function from import

* remove stray prints

* typo

* another stray print

* patch test

---------

Co-authored-by: cpacker <packercharles@gmail.com>
2023-11-09 14:51:12 -08:00
Sarah Wooders
e5002192b4 Add support for larger archival memory stores (#359) 2023-11-09 09:09:57 -08:00
Sarah Wooders
b1c5566168 Dependency management (#337)
* Divides dependencies into `pip install pymemgpt[legacy,local,postgres,dev]`. 
* Update docs
2023-11-06 19:45:44 -08:00
Charles Packer
a4d7732a9e Create docs pages (#328)
* Create docs  (#323)

* Create .readthedocs.yaml

* Update mkdocs.yml

* update

* revise

* syntax

* syntax

* syntax

* syntax

* revise

* revise

* spacing

* Docs (#327)

* add stuff

* patch homepage

* more docs

* updated

* updated

* refresh

* refresh

* refresh

* update

* refresh

* refresh

* refresh

* refresh

* missing file

* refresh

* refresh

* refresh

* refresh

* fix black

* refresh

* refresh

* refresh

* refresh

* add readme for just the docs

* Update README.md

* add more data loading docs

* cleanup data sources

* refresh

* revised

* add search

* make prettier

* revised

* updated

* refresh

* favi

* updated

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2023-11-06 12:38:49 -08:00
Charles Packer
f46cc3b15b Remove embeddings as argument in archival_memory.insert (#284) 2023-11-05 12:48:22 -08:00
Vivian Fang
d6c74337ac hotfix DummyArchivalMemoryWithFaiss 2023-11-03 16:41:06 -07:00
Sarah Wooders
2492db6b59 VectorDB support (pgvector) for archival memory (#226) 2023-11-03 16:19:15 -07:00
Charles Packer
4c80222e0c strip '/' and use osp.join (Windows support) (#283)
* strip '/' and use osp.join

* grepped for MEMGPT_DIR, found more places to replace '/'

* typo

* grep pass over filesep

---------

Co-authored-by: Vivian Fang <hi@vivi.sh>
2023-11-03 13:54:29 -07:00
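The Windows fix above amounts to replacing hard-coded `'/'` separators with `os.path.join`; a minimal sketch (the directory and file names here are illustrative, not the exact paths the commit touched):

```python
import os

# Build paths with os.path.join so the OS-appropriate separator is used,
# rather than concatenating strings with a hard-coded '/'.
MEMGPT_DIR = os.path.join(os.path.expanduser("~"), ".memgpt")  # illustrative

config_path = os.path.join(MEMGPT_DIR, "config")
agents_dir = os.path.join(MEMGPT_DIR, "agents")
```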
Charles Packer
31fd9efc9b Patch summarize when running with local llms (#213)
* trying to patch summarize when running with local llms

* moved token magic numbers to constants, made special localllm exception class (TODO catch these for retry), fix summarize bug where it exits early if empty list

* missing file

* raise an exception on no-op summary

* changed summarization logic to walk forwards in list until fraction of tokens in buffer is reached

* added same diff to sync agent

* reverted default max tokens to 8k, cleanup + more error wrapping for better error messages that get caught on retry

* patch for web UI context limit error propagation, using best guess for what the web UI error message is

* add webui token length exception

* remove print

* make no wrapper warning only pop up once

* cleanup

* Add errors to other wrappers

---------

Co-authored-by: Vivian Fang <hi@vivi.sh>
2023-11-02 23:44:02 -07:00
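Two of the bullets above describe the new summarization cutoff (walk forward through the message list until a fraction of the token budget is reached) and the no-op guard (raise on an empty list). A hedged sketch, with a hypothetical function name and stubbed per-message token counts:

```python
def summarize_cutoff(token_counts: list[int], context_window: int,
                     frac: float = 0.75) -> int:
    """Return the index up to which messages should be summarized.

    Walks forward from the start of the list, accumulating tokens until
    the running total reaches `frac` of the context window. `frac` and
    the function name are illustrative assumptions, not the repo's code.
    """
    if not token_counts:
        # the commit raises an exception on a no-op (empty) summary
        raise ValueError("cannot summarize an empty message list")
    budget = frac * context_window
    running = 0
    for i, n in enumerate(token_counts):
        running += n
        if running >= budget:
            return i + 1  # summarize messages [0, i]
    return len(token_counts)
```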
Robin Goetz
54e5aef0d5 fix: LocalArchivalMemory prints ref_doc_info only if not using EmptyIndex (#240)
Currently, running the /memory command breaks the application if the LocalArchivalMemory
has no existing archival storage and defaults to the EmptyIndex. This is caused by EmptyIndex
not having a ref_doc_info implementation and throwing an exception when it is used to print
the memory information to the console. This hotfix simply makes sure that we do not call
the function when using EmptyIndex, and instead print a message to the console indicating
an EmptyIndex is in use.
2023-11-01 18:45:04 -07:00
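The guard this hotfix describes can be sketched as a simple type check before calling `ref_doc_info`; the classes below are illustrative stand-ins, not the actual LlamaIndex or MemGPT types:

```python
class EmptyIndex:
    """Stand-in for the index used when no archival storage exists yet."""
    pass

class VectorIndex:
    """Stand-in for a populated index that does implement ref_doc_info."""
    def ref_doc_info(self):
        return {"doc-1": {"chunks": 3}}

def print_memory_info(index) -> str:
    # Only call ref_doc_info when the index is not an EmptyIndex,
    # since EmptyIndex has no implementation and would raise.
    if isinstance(index, EmptyIndex):
        return "archival memory is empty (EmptyIndex in use)"
    return str(index.ref_doc_info())
```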
Vivian Fang
8a3724c449 await async_get_embeddings_with_backoff (#239) 2023-11-01 01:43:17 -07:00
Charles Packer
8cfd9a512f len needs to be implemented in all memory classes (#236)
* len needs to be implemented in all memory classes so that the pretty print of memory shows statistics

* stub
2023-11-01 01:02:25 -07:00
Vivian Fang
d66deef734 Fix conversation_date_search async bug (#215)
* Fix conversation_date_search async bug

* Also catch TypeError
2023-10-31 00:35:09 -07:00
Vivian Fang
44d4ab4950 hotfix LocalArchivalMemory (#209) 2023-10-30 20:37:33 -07:00
Sarah Wooders
b7f9560bef Refactoring CLI to use config file, connect to Llama Index data sources, and allow for multiple agents (#154)
* Migrate to `memgpt run` and `memgpt configure` 
* Add Llama index data sources via `memgpt load` 
* Save config files for defaults and agents
2023-10-30 16:47:54 -07:00
Vivian Fang
a78ba2f539 Hotfix bug from async refactor (#203) 2023-10-30 15:38:25 -07:00
Kamelowy
77f7f195b7 New wrapper for Zephyr models + little fix in memory.py (#183)
* VectorIndex -> VectorStoreIndex

VectorStoreIndex is imported but non-existent VectorIndex is used.

* New wrapper for Zephyr family of models.

With inner thoughts.

* Update chat_completion_proxy.py for Zephyr Wrapper
2023-10-29 21:17:01 -07:00
Charles Packer
e621b4c1ca black patch on outstanding files that were causing workflow fails on PRs (#193) 2023-10-29 20:53:46 -07:00
Vivian Fang
45692f4f8a Add synchronous memgpt agent (#156) 2023-10-27 16:48:14 -07:00
Sarah Wooders
0f251af761 reformat 2023-10-26 16:08:25 -07:00
Sarah Wooders
686bee8a0a add database test 2023-10-26 15:30:31 -07:00
Sarah Wooders
61269a2f4e add llama index querying 2023-10-26 14:25:35 -07:00
Charles Packer
5714cda986 fixed bug where persistence manager was not saving in demo CLI 2023-10-17 23:40:31 -07:00