Charles Packer
0681e89c01
chore: .gitattributes ( #1511 )
2024-07-04 14:45:35 -07:00
Sarah Wooders
c9f62f54de
feat: refactor CoreMemory to support generalized memory fields and memory editing functions ( #1479 )
...
Co-authored-by: cpacker <packercharles@gmail.com>
Co-authored-by: Maximilian-Winter <maximilian.winter.91@gmail.com>
2024-07-01 11:50:57 -07:00
madgrizzle
6f2a6c0585
fix: various breaking bugs with local LLM implementation and postgres docker. ( #1355 )
2024-05-12 11:53:46 -07:00
Sarah Wooders
da21e7edbc
fix: refactor create(..) call to LLMs to not require AgentState ( #1307 )
...
Co-authored-by: cpacker <packercharles@gmail.com>
2024-04-28 15:21:20 -07:00
Charles Packer
a86374b464
ci: update workflows (add autoflake and isort) ( #1300 )
2024-04-27 11:54:34 -07:00
Charles Packer
eaed123af8
chore: run autoflake + isort ( #1279 )
2024-04-20 11:40:22 -07:00
Charles Packer
bdf7aeb247
feat: add Google AI Gemini Pro support ( #1209 )
2024-04-10 19:43:44 -07:00
Sarah Wooders
ebfe9495e6
feat: add remaining Python client support for REST API routes + tests ( #1160 )
2024-03-17 17:34:37 -07:00
Charles Packer
ff986ad384
feat: add archival memory GET, POST, DEL to REST API ( #1119 )
2024-03-09 14:23:36 -08:00
Charles Packer
dcf746cd91
feat: one time pass of autoflake + add autoflake to dev extras ( #1097 )
...
Co-authored-by: tombedor <tombedor@gmail.com>
2024-03-05 16:35:12 -08:00
tombedor
9681a60bc9
fix: configure black ( #1072 )
2024-02-29 15:19:08 -08:00
Sarah Wooders
38c184caf8
feat: refactor loading and attaching data sources, and upgrade to llama-index==0.10.6 ( #1016 )
2024-02-18 16:57:01 -08:00
Sarah Wooders
cca9b38294
feat: Store embeddings padded to size 4096 to allow DB storage of varying size embeddings ( #852 )
...
Co-authored-by: cpacker <packercharles@gmail.com>
2024-01-19 16:03:13 -08:00
Charles Packer
e4fab1653e
refactor: remove User LLM/embed. defaults, add credentials file, add authentication option for custom LLM backends ( #835 )
2024-01-18 16:11:35 -08:00
Sarah Wooders
643ae41f4b
feat: Get in-context Message.id values from server ( #851 )
2024-01-18 12:42:55 -08:00
Tom Bedor
fb5e73f447
fix: fix typo in memory.py
2024-01-15 13:54:13 -08:00
Sarah Wooders
3d7ab581b2
passing tests
2024-01-10 20:20:16 -08:00
Sarah Wooders
9edd304f61
fix archival reference
2024-01-10 19:47:39 -08:00
Sarah Wooders
b825e5664a
Remove usage of agent_config from agent.py
2024-01-09 11:22:39 -08:00
Sarah Wooders
8c06cc4bf7
refactor!: Migrate users + agent information into storage connectors ( #785 )
...
Co-authored-by: cpacker <packercharles@gmail.com>
2024-01-08 15:59:49 -08:00
Sarah Wooders
89b51a12cc
Fix bug with supporting paginated search for recall memory
2024-01-03 18:32:13 -08:00
Sarah Wooders
f71c855dce
Change Message data type to use tool format and create tool_call_id field
2024-01-03 18:03:11 -08:00
Charles Packer
d6a56b262e
Merge branch 'main' into cherry-pick-storage-refactor
2023-12-30 21:38:58 -08:00
Charles Packer
a3e94ae19e
fix: patch TEI error in load ( #725 )
...
* patch TEI error in load (now get different error)
* more hiding of MOCKLLM
* fix embedding dim
* refactored bandaid patches into custom embedding class return object patch
2023-12-27 22:09:29 -08:00
Sarah Wooders
83c930e9ce
Deprecate list_loaded_data for listing sources, and use metadata DB instead
2023-12-26 18:47:16 +04:00
Sarah Wooders
34be9b87d3
Run black formatter
2023-12-26 17:53:57 +04:00
Sarah Wooders
48f1fd490e
Bugfixes for get_all function and code cleanup to match main
2023-12-26 17:50:49 +04:00
Sarah Wooders
1b968e372b
Set get_all limit to None by default and add postgres to archival memory tests
2023-12-26 17:07:54 +04:00
Sarah Wooders
0694731760
Support metadata table via storage connectors for data sources
2023-12-26 17:06:58 +04:00
Sarah Wooders
9b3d59e016
Support recall and archival memory for postgres
...
working test
2023-12-26 17:05:24 +04:00
Sarah Wooders
9f3806dfcb
Add in memory storage connector implementation for refactored storage
2023-12-26 17:05:24 +04:00
Sarah Wooders
cb340ff884
Add data_types.py file for standard data types
2023-12-26 17:05:12 +04:00
Sarah Wooders
d258f03899
Define refactored storage table types (archival, recall, documents, users, agents)
2023-12-26 17:04:11 +04:00
Charles Packer
622ae07208
fix: misc fixes ( #700 )
...
* add folder generation
* disable default temp until more testing is done
* apply embedding payload patch to search, add input checking for better runtime error messages
* streamlined memory pressure warning now that heartbeats get forced
2023-12-25 01:29:13 -08:00
Charles Packer
7a019f083b
moved configs for hosted to https, patched bug in embedding creation ( #685 )
2023-12-23 11:40:07 -08:00
Charles Packer
0e69058331
feat: Add new wrapper defaults ( #656 )
2023-12-21 17:05:38 +04:00
Charles Packer
093df8b46b
fix runtime error ( #586 )
2023-12-05 23:01:37 -08:00
Sarah Wooders
9c2e6b774c
Chroma storage integration ( #285 )
2023-12-05 17:49:00 -08:00
Sarah Wooders
dd5a110be4
Removing dead code + legacy commands ( #536 )
2023-11-30 13:37:11 -08:00
Sarah Wooders
e24a1a3908
Add user field for vLLM endpoint ( #531 )
2023-11-29 12:30:42 -08:00
Charles Packer
4dfa063d65
Clean memory error messages ( #523 )
...
* Raise a custom keyerror instead of basic keyerror to clarify issue to LLM processor
* remove self value from error message passed to LLM processor
* simplify error message propagated to LLM processor
2023-11-27 16:41:42 -08:00
Sarah Wooders
febc7344c7
Add support for HuggingFace Text Embedding Inference endpoint for embeddings ( #524 )
2023-11-27 16:28:49 -08:00
Charles Packer
c8cf4a6536
extra arg being passed causing a runtime error ( #517 )
2023-11-27 11:36:26 -08:00
Charles Packer
fd0c4e3393
Fix #487 (summarize call uses OpenAI even with local LLM config) ( #488 )
...
* use new chatcompletion function that takes agent config inside of summarize
* patch issue with model now missing
2023-11-19 14:54:12 -08:00
sahusiddharth
5d2865c0a7
Docs: Fix typos ( #477 )
2023-11-17 15:12:14 -08:00
Charles Packer
c1be8d866a
patch #428 ( #433 )
2023-11-12 22:59:53 -08:00
Charles Packer
cb50308ef6
Fix max tokens constant ( #374 )
...
* stripped LLM_MAX_TOKENS constant, instead it's a dictionary, and context_window is set via the config (defaults to 8k)
* pass context window in the calls to local llm APIs
* safety check
* remove dead imports
* context_length -> context_window
* add default for agent.load
* in configure, ask for the model context window if not specified via dictionary
* fix default, also make message about OPENAI_API_BASE missing more informative
* make openai default embedding if openai is default llm
* make openai on top of list
* typo
* also make local the default for embeddings if you're using localllm instead of the locallm endpoint
* provide --context_window flag to memgpt run
* fix runtime error
* stray comments
* stray comment
2023-11-09 17:59:03 -08:00
Vivian Fang
e5c0e1276b
Remove AsyncAgent and async from cli ( #400 )
...
* Remove AsyncAgent and async from cli
Refactor agent.py memory.py
Refactor interface.py
Refactor main.py
Refactor openai_tools.py
Refactor cli/cli.py
stray asyncs
save
make legacy embeddings not use async
Refactor presets
Remove deleted function from import
* remove stray prints
* typo
* another stray print
* patch test
---------
Co-authored-by: cpacker <packercharles@gmail.com>
2023-11-09 14:51:12 -08:00
Sarah Wooders
e5002192b4
Add support for larger archival memory stores ( #359 )
2023-11-09 09:09:57 -08:00
Sarah Wooders
b1c5566168
Dependency management ( #337 )
...
* Divides dependencies into `pip install pymemgpt[legacy,local,postgres,dev]`.
* Update docs
2023-11-06 19:45:44 -08:00