Commit Graph

20 Commits

Author | SHA1 | Message | Date
Sarah Wooders
c9f62f54de
feat: refactor CoreMemory to support generalized memory fields and memory editing functions (#1479)
Co-authored-by: cpacker <packercharles@gmail.com>
Co-authored-by: Maximilian-Winter <maximilian.winter.91@gmail.com>
2024-07-01 11:50:57 -07:00
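A rough sketch of what "generalized memory fields" with editing functions could look like (class and method names here are illustrative, not the actual MemGPT implementation): core memory becomes a mapping of named sections, each editable through append/replace helpers instead of hard-coded persona/human attributes.

    from typing import Dict

    class CoreMemory:
        # Hypothetical generalized core memory: arbitrary named sections, not fixed persona/human fields.
        def __init__(self, sections: Dict[str, str], limit: int = 2000):
            self.sections = dict(sections)
            self.limit = limit  # per-section character budget

        def memory_append(self, name: str, content: str) -> None:
            # Append content to one named section, enforcing the budget.
            updated = (self.sections.get(name, "") + "\n" + content).strip()
            if len(updated) > self.limit:
                raise ValueError(f"section '{name}' would exceed {self.limit} characters")
            self.sections[name] = updated

        def memory_replace(self, name: str, old: str, new: str) -> None:
            # Replace an exact substring within one named section.
            if old not in self.sections.get(name, ""):
                raise ValueError(f"'{old}' not found in section '{name}'")
            self.sections[name] = self.sections[name].replace(old, new)

    memory = CoreMemory({"persona": "I am a helpful assistant.", "human": ""})
    memory.memory_append("human", "The user's name is Sarah.")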
Sarah Wooders
da21e7edbc
fix: refactor create(..) call to LLMs to not require AgentState (#1307)
Co-authored-by: cpacker <packercharles@gmail.com>
2024-04-28 15:21:20 -07:00
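The point of the refactor, loosely sketched (hypothetical signatures, not the actual MemGPT API): the create(...) call takes the llm_config, messages, and function schemas it actually needs, so callers no longer have to construct a full AgentState first.

    from typing import List, Optional

    def call_backend(llm_config: dict, messages: List[dict], functions: Optional[List[dict]]) -> dict:
        # Stand-in for the real completion request.
        return {"model": llm_config.get("model"), "echo": messages[-1]["content"]}

    def create(llm_config: dict, messages: List[dict], functions: Optional[List[dict]] = None) -> dict:
        # Only the pieces the LLM call needs -- no AgentState required.
        return call_backend(llm_config, messages, functions)

    print(create({"model": "gpt-4"}, [{"role": "user", "content": "hi"}]))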
Charles Packer
a86374b464
ci: update workflows (add autoflake and isort) (#1300) 2024-04-27 11:54:34 -07:00
Charles Packer
eaed123af8
chore: run autoflake + isort (#1279) 2024-04-20 11:40:22 -07:00
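For context, the kind of edit these two tools make (illustration only, not code from the repository): autoflake removes unused imports, isort groups and alphabetizes what remains.

    # Before the pass, a module might start with:
    #     import sys      # unused -> removed by autoflake
    #     import os
    #     import json
    # After autoflake + isort:
    import json
    import os

    print(json.dumps({"cwd": os.getcwd()}))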
Charles Packer
bdf7aeb247
feat: add Google AI Gemini Pro support (#1209) 2024-04-10 19:43:44 -07:00
Charles Packer
15a149233c
fix: patch out-of-sync / missing tzinfo timestamps coming back from API server (#1182) 2024-03-26 20:37:44 -07:00
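The usual fix for naive timestamps coming off the wire is to attach (or convert to) UTC explicitly; a standard-library sketch of that kind of patch, not the project's exact code:

    from datetime import datetime, timezone

    def force_utc(dt: datetime) -> datetime:
        # Attach UTC to naive timestamps; convert aware ones so comparisons never mix naive and aware.
        if dt.tzinfo is None:
            return dt.replace(tzinfo=timezone.utc)
        return dt.astimezone(timezone.utc)

    print(force_utc(datetime(2024, 3, 26, 20, 37, 44)))  # 2024-03-26 20:37:44+00:00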
Charles Packer
0ac67d2733
fix: additions to utc patch (#1176) (#1177) 2024-03-21 13:02:19 -07:00
Charles Packer
03abc4fce0
feat: migrate all calls to datetime.now() to datetime.now(UTC) (#1176) 2024-03-21 12:36:56 -07:00
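The migration itself is a one-line change wherever a timestamp is created: replace naive datetime.now() with a timezone-aware UTC call.

    from datetime import datetime, timezone

    naive = datetime.now()               # old: no tzinfo, ambiguous across machines
    aware = datetime.now(timezone.utc)   # new: explicit UTC (datetime.UTC is an alias on Python 3.11+)
    print(naive.tzinfo, aware.tzinfo)    # None UTC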
Charles Packer
2ca92d6955
feat: pass message UUIDs during message streaming (POST SSE send_message) (#1120) 2024-03-10 15:34:37 -07:00
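A generic sketch of the idea (field names illustrative, not the actual send_message response schema): each server-sent-events chunk carries the message's UUID so the client can tie streamed tokens back to one message.

    import json
    import uuid

    def sse_chunks(message_text: str):
        message_id = str(uuid.uuid4())
        for token in message_text.split():
            payload = {"id": message_id, "delta": token}
            yield f"data: {json.dumps(payload)}\n\n"   # standard SSE framing

    for line in sse_chunks("hello from the stream"):
        print(line, end="")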
Charles Packer
dcf746cd91
feat: one time pass of autoflake + add autoflake to dev extras (#1097)
Co-authored-by: tombedor <tombedor@gmail.com>
2024-03-05 16:35:12 -08:00
Charles Packer
5674121b33
fix: Patch typo in base.py (#1050) 2024-02-24 20:05:45 -08:00
tombedor
1dca90588a
fix: set json loads strict to false (#946) 2024-01-31 15:50:08 -08:00
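strict=False lets json.loads accept raw control characters (literal newlines, tabs) inside string values instead of raising, which LLM outputs sometimes contain.

    import json

    raw = '{"text": "line one\nline two"}'         # literal newline inside the string value
    # json.loads(raw) raises "Invalid control character"; strict=False parses it:
    print(json.loads(raw, strict=False)["text"])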
Charles Packer
6edffe036e
fix: use utf-8 encodings for all text files (#918) 2024-01-30 11:59:58 -08:00
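Without an explicit encoding, open() falls back to the platform default (e.g. cp1252 on Windows), which breaks on non-ASCII content; passing encoding="utf-8" everywhere makes file I/O deterministic.

    from pathlib import Path

    path = Path("persona.txt")
    path.write_text("café ☕", encoding="utf-8")    # same idea as open(path, "w", encoding="utf-8")
    print(path.read_text(encoding="utf-8"))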
Charles Packer
e4fab1653e
refactor: remove User LLM/embed. defaults, add credentials file, add authentication option for custom LLM backends (#835) 2024-01-18 16:11:35 -08:00
ifsheldon
f88f930354
fix: Turn off all ensure_ascii of json.dumps (#800) 2024-01-11 23:54:35 -08:00
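By default json.dumps escapes every non-ASCII character to a \uXXXX sequence; ensure_ascii=False keeps the text readable, which matters when the serialized JSON goes back into prompts or is shown to users.

    import json

    data = {"name": "Müller", "emoji": "🙂"}
    print(json.dumps(data))                      # {"name": "M\u00fcller", "emoji": "\ud83d\ude42"}
    print(json.dumps(data, ensure_ascii=False))  # {"name": "Müller", "emoji": "🙂"}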
Charles Packer
40c62f9a75
feat: added new 'hint' wrappers that inject hints into the pre-prefix (#707)
* added new 'hint' wrappers that inject hints into the pre-prefix

* modified basic search functions with extra input sanitization

* updated first message prefix
2023-12-25 11:29:42 -08:00
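A minimal sketch of the hint-wrapper idea with invented names (the actual wrapper code differs): when a hint is present, it is injected ahead of the normal system prefix.

    from typing import Optional

    def wrap_with_hint(system_prefix: str, user_message: str, hint: Optional[str] = None) -> str:
        # The "pre-prefix" is whatever comes before the normal system prompt.
        pre_prefix = f"[HINT] {hint}\n" if hint else ""
        return f"{pre_prefix}{system_prefix}\nUSER: {user_message}"

    print(wrap_with_hint("You are MemGPT.", "hello", hint="Reply with a JSON function call."))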
Charles Packer
8f178e18ca
Add safeguard on tokens returned by functions (#576)
* swapping out hardcoded str for prefix (forgot to include in #569)

* add extra failout when the summarizer tries to run on a single message

* added function response validation code, currently will truncate responses based on character count

* added return type hints (functions/tools should either return strings or None)

* discuss function output length in custom function section

* made the truncation more informative
2023-12-13 21:57:50 -08:00
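A sketch of the character-count safeguard described above (the limit and helper name are illustrative, not MemGPT's actual constants): function/tool returns are coerced to strings and truncated with an explicit note rather than silently blowing up the context window.

    FUNCTION_RETURN_CHAR_LIMIT = 6000  # illustrative limit

    def validate_function_response(response, limit: int = FUNCTION_RETURN_CHAR_LIMIT) -> str:
        # Functions/tools should return strings or None; stringify anything else.
        if response is None:
            response = "None"
        elif not isinstance(response, str):
            response = str(response)
        # Truncate oversized outputs and say so in the message the agent sees.
        if len(response) > limit:
            overflow = len(response) - limit
            response = response[:limit] + f"... [truncated {overflow} more characters]"
        return response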
Sarah Wooders
dd5a110be4
Removing dead code + legacy commands (#536) 2023-11-30 13:37:11 -08:00
Sarah Wooders
28514da5df
Refactor config + determine LLM via config.model_endpoint_type (#422)
* mark deprecated API section

* CLI bug fixes for azure

* check azure before running

* Update README.md

* Update README.md

* bug fix with persona loading

* remove print

* make errors for cli flags more clear

* format

* fix imports

* fix imports

* add prints

* update lock

* update config fields

* cleanup config loading

* commit

* remove asserts

* refactor configure

* put into different functions

* add embedding default

* pass in config

* fixes

* allow overriding openai embedding endpoint

* black

* trying to patch tests (some circular import errors)

* update flags and docs

* patched support for local llms using endpoint and endpoint type passed via configs, not env vars

* missing files

* fix naming

* fix import

* fix two runtime errors

* patch ollama typo, move ollama model question pre-wrapper, modify question phrasing to include link to readthedocs, also have a default ollama model that has a tag included

* disable debug messages

* made error message for failed load more informative

* don't print dynamic linking function warning unless --debug

* updated tests to work with new cli workflow (disabled openai config test for now)

* added skips for tests when vars are missing

* update bad arg

* revise test to soft pass on empty string too

* don't run configure twice

* extend timeout (try to pass against nltk download)

* update defaults

* typo with endpoint type default

* patch runtime errors for when model is None

* catching another case of 'x in model' when model is None (preemptively)

* allow overrides to local llm related config params

* made model wrapper selection from a list vs raw input

* update test for select instead of input

* Fixed bug in endpoint when using local->openai selection, also added validation loop to manual endpoint entry

* updated error messages to be more informative with links to readthedocs

* add back gpt3.5-turbo

---------

Co-authored-by: cpacker <packercharles@gmail.com>
2023-11-14 15:58:19 -08:00
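Very loosely, what "determine LLM via config.model_endpoint_type" means in practice (the backend names and dispatch shown here are illustrative, not the commit's code): the config field selects which completion backend gets called.

    def get_completion_fn(model_endpoint_type: str):
        # Illustrative dispatch table keyed on the config field.
        backends = {
            "openai": lambda prompt: f"[openai] {prompt}",
            "azure":  lambda prompt: f"[azure] {prompt}",
            "ollama": lambda prompt: f"[ollama] {prompt}",
        }
        if model_endpoint_type not in backends:
            raise ValueError(f"unknown model_endpoint_type: {model_endpoint_type!r}")
        return backends[model_endpoint_type]

    print(get_completion_fn("ollama")("hello"))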
Charles Packer
b789549d02
Configurable presets to support easy extension of MemGPT's function set (#420)
* partial

* working schema builder, tested that it matches the hand-written schemas

* correct another schema diff

* refactor

* basic working test

* refactored preset creation to use yaml files

* added docstring-parser

* add code for dynamic function linking in agent loading

* pretty schema diff printer

* support pulling from ~/.memgpt/functions/*.py

* clean

* allow looking for system prompts in ~/.memgpt/system_prompts

* create ~/.memgpt/system_prompts if it doesn't exist

* pull presets from ~/.memgpt/presets in addition to examples folder

* add support for loading agent configs that have additional keys

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2023-11-13 10:43:28 -08:00
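The "working schema builder" plus docstring-parser combination can be sketched roughly like this (simplified: every parameter is typed as a string here): a function's docstring is parsed into a name/description/parameters schema, so user-supplied functions in ~/.memgpt/functions/*.py can be exposed to the model without hand-written JSON.

    import inspect
    from docstring_parser import parse

    def build_schema(func) -> dict:
        # Parse a Google-style docstring into an OpenAI-style function schema (simplified).
        doc = parse(func.__doc__ or "")
        properties = {p.arg_name: {"type": "string", "description": p.description} for p in doc.params}
        required = [name for name, p in inspect.signature(func).parameters.items()
                    if p.default is inspect.Parameter.empty and name != "self"]
        return {
            "name": func.__name__,
            "description": doc.short_description or "",
            "parameters": {"type": "object", "properties": properties, "required": required},
        }

    def archival_memory_search(query: str, page: str = "0"):
        """Search archival memory.

        Args:
            query (str): Keywords to search for.
            page (str): Page of results to return.
        """
        return None

    print(build_schema(archival_memory_search))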