Removed logging configuration from the config files and migrated it to constants.py
Modified log.py to configure logging using those constants
Conflicts:
memgpt/config.py resolved
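A minimal sketch of the pattern described above: driving `logging` setup from shared module-level constants rather than per-config settings. The constant names and the `get_logger` helper are hypothetical, not the actual contents of constants.py or log.py.

```python
# Illustrative sketch only: logger configured from shared constants,
# as a constants.py / log.py split might look. Names are hypothetical.
import logging

# Hypothetical constants, as might live in constants.py
LOGGER_NAME = "memgpt"
LOGGER_DEFAULT_LEVEL = logging.INFO
LOGGER_LOG_FORMAT = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"


def get_logger() -> logging.Logger:
    """Build the shared logger from the constants (as log.py might)."""
    logger = logging.getLogger(LOGGER_NAME)
    logger.setLevel(LOGGER_DEFAULT_LEVEL)
    if not logger.handlers:  # avoid stacking duplicate handlers on repeat calls
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(LOGGER_LOG_FORMAT))
        logger.addHandler(handler)
    return logger
```

The guard on `logger.handlers` matters because `get_logger()` may be called from many modules; without it, each call would attach another handler and duplicate every log line.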
* add folder generation
* disable default temp until more testing is done
* apply embedding payload patch to search, add input checking for better runtime error messages
* streamlined memory pressure warning now that heartbeats get forced
* added basic heartbeat override
* tested and working on lmstudio (patched typo + patched new bug emerging in latest lmstudio build)
* added lmstudio patch to chatml wrapper
* update the system messages to be informative about the source
* updated string constants after some tuning
* swapping out hardcoded str for prefix (forgot to include in #569)
* add extra failout when the summarizer tries to run on a single message
* added function response validation code, currently will truncate responses based on character count
* added return type hints (functions/tools should either return strings or None)
* discuss function output length in custom function section
* made the truncation more informative
* partial
* working schema builder, tested that it matches the hand-written schemas
* correct another schema diff
* refactor
* basic working test
* refactored preset creation to use yaml files
* added docstring-parser
* add code for dynamic function linking in agent loading
* pretty schema diff printer
* support pulling from ~/.memgpt/functions/*.py
* clean
* allow looking for system prompts in ~/.memgpt/system_prompts
* create ~/.memgpt/system_prompts if it doesn't exist
* pull presets from ~/.memgpt/presets in addition to examples folder
* add support for loading agent configs that have additional keys
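The schema-builder commits above describe generating function/tool schemas from Python functions instead of hand-writing them. A minimal sketch of the idea using only `inspect` (the actual builder also parses parameter docs via the docstring-parser package; all names here are hypothetical, not MemGPT's API):

```python
# Illustrative sketch: derive an OpenAI-style function schema from a
# Python function's signature and docstring. Not the actual builder.
import inspect

# Map Python annotations to JSON-schema type names (strings by default)
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}


def build_schema(func) -> dict:
    """Build a function-calling schema from a function's signature."""
    properties = {}
    required = []
    for name, param in inspect.signature(func).parameters.items():
        if name == "self":
            continue
        properties[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default means the caller must supply it
    return {
        "name": func.__name__,
        "description": (inspect.getdoc(func) or "").split("\n")[0],
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }


def send_message(message: str) -> None:
    """Send a message to the user."""
```

With a generator like this, the "pretty schema diff printer" mentioned above becomes a test harness: build the schema from the source function and diff it against the hand-written one to confirm they match.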
---------
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
* replaced the LLM_MAX_TOKENS constant with a dictionary; context_window is now set via the config (defaults to 8k)
* pass context window in the calls to local llm APIs
* safety check
* remove dead imports
* context_length -> context_window
* add default for agent.load
* in configure, ask for the model context window if not specified via dictionary
* fix default, also make message about OPENAI_API_BASE missing more informative
* make openai default embedding if openai is default llm
* move openai to the top of the list
* typo
* also make local the default for embeddings if you're using the localllm endpoint
* provide --context_window flag to memgpt run
* fix runtime error
* stray comments
* stray comment
* trying to patch summarize when running with local llms
* moved token magic numbers to constants, added a dedicated localllm exception class (TODO: catch these for retry), and fixed a summarize bug where it exited early on an empty list
* missing file
* raise an exception on no-op summary
* changed summarization logic to walk forward through the message list until a target fraction of tokens is in the buffer
* added same diff to sync agent
* reverted default max tokens to 8k, cleanup + more error wrapping for better error messages that get caught on retry
* patch for web UI context limit error propagation, using best guess for what the web UI error message is
* add webui token length exception
* remove print
* make no wrapper warning only pop up once
* cleanup
* Add errors to other wrappers
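The walk-forward summarization change above can be sketched as follows. The constant name, function name, and exact fraction are assumptions for illustration, not MemGPT's actual code; the sketch also shows the no-op/single-message failout mentioned earlier.

```python
# Illustrative sketch: walk forward from the oldest message until a target
# fraction of total tokens sits in the to-be-summarized buffer.
# Constant and function names are hypothetical.
MESSAGE_SUMMARY_TRUNC_TOKEN_FRAC = 0.75  # assumed value


def summary_cutoff(token_counts: list[int],
                   frac: float = MESSAGE_SUMMARY_TRUNC_TOKEN_FRAC) -> int:
    """Return the index of the first message to KEEP.

    Everything before the returned index goes into the summarizer.
    """
    if len(token_counts) <= 1:
        # failout: refuse to summarize an empty or single-message list
        raise ValueError("not enough messages to summarize")
    budget = frac * sum(token_counts)
    running = 0
    for i, count in enumerate(token_counts):
        running += count
        if running >= budget:  # buffer has reached the token fraction
            return i + 1
    return len(token_counts)
```

Walking forward from the oldest message (rather than counting back from the newest) keeps the most recent context in the window while the summarized prefix absorbs the required fraction of tokens.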
---------
Co-authored-by: Vivian Fang <hi@vivi.sh>