Commit Graph

16 Commits

Author SHA1 Message Date
Charles Packer
bccd990ab6
fix: increase the func return char limit (#714)
* increase the function return limit

* disable truncation for base search functions

* added stdout suppression to remove MockLLM warning
2023-12-27 01:33:30 -08:00
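The stdout suppression mentioned in the last bullet is, in generic Python, most easily done with `contextlib.redirect_stdout`. A minimal sketch follows; the helper name and usage are assumptions for illustration, not the code from this commit:

```python
import contextlib
import io

def call_quietly(fn, *args, **kwargs):
    """Run fn while discarding anything it prints to stdout (e.g. a noisy library warning)."""
    with contextlib.redirect_stdout(io.StringIO()):
        return fn(*args, **kwargs)

# usage: call_quietly(print, "this line never reaches the terminal")
```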
Charles Packer
622ae07208
fix: misc fixes (#700)
* add folder generation

* disable default temp until more testing is done

* apply embedding payload patch to search, add input checking for better runtime error messages

* streamlined memory pressure warning now that heartbeats get forced
2023-12-25 01:29:13 -08:00
Charles Packer
419ffc4bb8
feat: added basic heartbeat override heuristics (#621)
* added basic heartbeat override

* tested and working on lmstudio (patched typo + patched new bug emerging in latest lmstudio build)

* added lmstudio patch to chatml wrapper

* update the system messages to be informative about the source

* updated string constants after some tuning
2023-12-24 23:46:00 -08:00
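A minimal sketch of what a heartbeat-override heuristic can look like, assuming the override keys off the name of the function that was just called; the set contents and names below are illustrative assumptions, not MemGPT's actual constants:

```python
# Hypothetical set of functions whose results the agent should always get to read;
# not MemGPT's actual constant.
ALWAYS_NEEDS_FOLLOWUP = {"conversation_search", "archival_memory_search"}

def override_heartbeat(function_name: str, model_requested_heartbeat: bool) -> bool:
    """Force a heartbeat after functions that always need a follow-up step,
    otherwise defer to whatever the model asked for."""
    if function_name in ALWAYS_NEEDS_FOLLOWUP:
        return True
    return model_requested_heartbeat
```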
Charles Packer
8f178e18ca
Add safeguard on tokens returned by functions (#576)
* swapping out hardcoded str for prefix (forgot to include in #569)

* add extra failout when the summarizer tries to run on a single message

* added function response validation code, currently will truncate responses based on character count

* added return type hints (functions/tools should either return strings or None)

* discuss function output length in custom function section

* made the truncation more informative
2023-12-13 21:57:50 -08:00
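A minimal sketch of the character-count validation and truncation described in the bullets above; the limit value, helper name, and truncation message are assumptions for illustration, not the exact safeguard added in this PR:

```python
from typing import Optional

FUNCTION_RETURN_CHAR_LIMIT = 6000  # hypothetical value, not the limit set in this PR

def validate_function_response(response: Optional[str], truncate: bool = True) -> str:
    """Coerce a tool return value to a string and truncate it if it is too long."""
    if response is None:
        response = "None"  # tools are expected to return a string or None
    elif not isinstance(response, str):
        response = str(response)
    if truncate and len(response) > FUNCTION_RETURN_CHAR_LIMIT:
        note = f"... [NOTE: function output truncated at {FUNCTION_RETURN_CHAR_LIMIT} characters]"
        response = response[: FUNCTION_RETURN_CHAR_LIMIT - len(note)] + note
    return response
```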
Charles Packer
7615830a73
use a consistent warning prefix across codebase (#569) 2023-12-04 11:38:51 -08:00
Sarah Wooders
dd5a110be4
Removing dead code + legacy commands (#536) 2023-11-30 13:37:11 -08:00
Charles Packer
b789549d02
Configurable presets to support easy extension of MemGPT's function set (#420)
* partial

* working schema builder, tested that it matches the hand-written schemas

* correct another schema diff

* refactor

* basic working test

* refactored preset creation to use yaml files

* added docstring-parser

* add code for dynamic function linking in agent loading

* pretty schema diff printer

* support pulling from ~/.memgpt/functions/*.py

* clean

* allow looking for system prompts in ~/.memgpt/system_prompts

* create ~/.memgpt/system_prompts if it doesn't exist

* pull presets from ~/.memgpt/presets in addition to examples folder

* add support for loading agent configs that have additional keys

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
2023-11-13 10:43:28 -08:00
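A minimal sketch of building an OpenAI-style function schema from a Python function's signature plus its docstring, using the docstring-parser dependency added in this PR; the type mapping is deliberately simplified and the helper name is an assumption, not the actual schema builder:

```python
import inspect
from docstring_parser import parse  # the docstring-parser dependency added in this PR

def build_function_schema(func) -> dict:
    """Derive an OpenAI-style function schema from a function's signature and docstring."""
    doc = parse(func.__doc__ or "")
    param_descriptions = {p.arg_name: p.description for p in doc.params}
    properties, required = {}, []
    for name, param in inspect.signature(func).parameters.items():
        if name == "self":
            continue
        properties[name] = {
            "type": "string",  # simplification: a real builder maps annotations to JSON types
            "description": param_descriptions.get(name, ""),
        }
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": func.__name__,
        "description": doc.short_description or "",
        "parameters": {"type": "object", "properties": properties, "required": required},
    }
```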
Charles Packer
cb50308ef6
Fix max tokens constant (#374)
* stripped the LLM_MAX_TOKENS constant; it's now a dictionary, and context_window is set via the config (defaults to 8k)

* pass context window in the calls to local llm APIs

* safety check

* remove dead imports

* context_length -> context_window

* add default for agent.load

* in configure, ask for the model context window if not specified via dictionary

* fix default, also make message about OPENAI_API_BASE missing more informative

* make openai default embedding if openai is default llm

* move openai to the top of the list

* typo

* also make local the default for embeddings if you're using localllm instead of the locallm endpoint

* provide --context_window flag to memgpt run

* fix runtime error

* stray comments

* stray comment
2023-11-09 17:59:03 -08:00
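A minimal sketch of the constant-to-dictionary change described above, assuming a per-model table with a DEFAULT fallback plus an optional --context_window override; the model names and token values are illustrative, not the exact table from this commit:

```python
from typing import Optional

# Illustrative per-model table, not the exact values from this commit.
LLM_MAX_TOKENS = {
    "DEFAULT": 8192,
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
}

def get_context_window(model: str, override: Optional[int] = None) -> int:
    """Prefer an explicit --context_window override, then the per-model table, then the default."""
    if override is not None:
        return override
    return LLM_MAX_TOKENS.get(model, LLM_MAX_TOKENS["DEFAULT"])
```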
Charles Packer
31fd9efc9b
Patch summarize when running with local llms (#213)
* trying to patch summarize when running with local llms

* moved token magic numbers to constants, made a special localllm exception class (TODO: catch these for retry), fixed a summarize bug where it exits early on an empty list

* missing file

* raise an exception on no-op summary

* changed summarization logic to walk forwards in the list until a fraction of the tokens in the buffer is reached

* added same diff to sync agent

* reverted default max tokens to 8k, cleanup + more error wrapping for better error messages that get caught on retry

* patch for web UI context limit error propagation, using best guess for what the web UI error message is

* add webui token length exception

* remove print

* make no wrapper warning only pop up once

* cleanup

* Add errors to other wrappers

---------

Co-authored-by: Vivian Fang <hi@vivi.sh>
2023-11-02 23:44:02 -07:00
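A minimal sketch of the "walk forwards" cutoff described above: accumulate per-message token counts from the oldest message until a target fraction of the buffer is covered, and hand everything before that point to the summarizer. The 0.75 fraction and helper name are assumptions for illustration:

```python
from typing import List

def pick_summarize_cutoff(token_counts: List[int], desired_fraction: float = 0.75) -> int:
    """Walk forwards from the oldest message, accumulating token counts until
    `desired_fraction` of the buffer's tokens is covered; messages before the
    returned index are the ones to summarize."""
    target = desired_fraction * sum(token_counts)
    running = 0
    for i, count in enumerate(token_counts):
        running += count
        if running >= target:
            return i + 1  # include the message that crossed the threshold
    return len(token_counts)
```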
Sarah Wooders
01be032f7f add black to poetry and reformat 2023-10-26 15:33:50 -07:00
Vivian Fang
f48c81d9a0 Revert "Revert "cleanup""
This reverts commit 6cd2a0049b.
2023-10-25 12:42:35 -07:00
Vivian Fang
6cd2a0049b Revert "cleanup"
This reverts commit 85d9fba811, reversing
changes made to a7e06d0acc.
2023-10-25 01:02:43 -07:00
Vivian Fang
30ced8d5c8 fix memgpt_dir circular import 2023-10-24 13:28:17 -07:00
Vivian Fang
2a7e943186 fix summarizer 2023-10-15 21:07:45 -07:00
cpacker
f69bb699fe relax inner monologue check based on model 2023-10-14 17:59:24 -07:00
Charles Packer
5ed4b8eb92 init commit 2023-10-12 18:48:58 -07:00