* feat: add loading indicator when creating new agent
* feat: reorder front page to avoid overflow and always show add button
* feat: display function calls
* feat: set up proxy during development & remove explicit inclusion of host/port in backend calls
* fix: introduce api prefix, split up fastapi server to become more modular, use app directly instead of subprocess
The API prefix allows us to create a proxy for frontend development that relays all /api
requests to our FastAPI server while serving the development files for all other paths.
Splitting up the FastAPI server will let us divide up the work better in the future.
Using the application object directly in our CLI instead of spawning a subprocess makes
debugging possible during development, and this Python-native approach is cleaner overall.
We can discuss whether we should keep the API prefix or distinguish between a REST-only
mode and one that also serves the static files for the GUI.
This is just my initial take on things.
* chore: build latest frontend
* updated local APIs to return usage info (#585)
* updated APIs to return usage info
* tested all endpoints
* added autogen as an extra (#616)
* added autogen as an extra
* updated docs
Co-authored-by: hemanthsavasere <hemanth.savasere@gmail.com>
* Update LICENSE
* Add safeguard on tokens returned by functions (#576)
* swapping out hardcoded str for prefix (forgot to include in #569)
* add extra failout when the summarizer tries to run on a single message
* added function response validation code, currently will truncate responses based on character count
* added return type hints (functions/tools should either return strings or None)
* discuss function output length in custom function section
* made the truncation more informative
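A character-count safeguard along these lines might look like the following sketch (the default limit and the wording of the truncation note are assumptions, not the project's actual values):

```python
def validate_function_response(response, max_chars: int = 3000) -> str:
    """Coerce a function/tool return value to a string and truncate
    overly long output, appending a note so the truncation is informative."""
    if response is None:
        # Functions/tools may return None; normalize to a string.
        response_str = "None"
    elif isinstance(response, str):
        response_str = response
    else:
        response_str = str(response)
    if len(response_str) > max_chars:
        note = (
            f"... [NOTE: function output was truncated since it exceeded "
            f"{max_chars} characters]"
        )
        response_str = response_str[:max_chars] + note
    return response_str
```

Truncating by characters rather than tokens is a coarse guard, but it avoids pulling in a tokenizer just to bound runaway function output.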
* patch bug where None.copy() throws runtime error (#617)
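The None.copy() class of bug is typically patched with a guard of this shape (a simplified illustration, not the actual patched call site):

```python
def safe_copy(d):
    """Return a shallow copy of d, or an empty dict if d is None.

    Calling None.copy() raises AttributeError at runtime, so guard first.
    """
    return {} if d is None else d.copy()
```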
* allow passing custom host to uvicorn (#618)
* feat: initial poc for socket server
* feat: initial poc for frontend based on react
Set up an Nx workspace, which makes it easy to manage dependencies, and added shadcn components
that allow us to build a good-looking UI in a fairly simple way.
The UI is a very simple and basic chat that starts with a message from the user and then simply displays the
answer string sent back from the FastAPI WebSocket endpoint.
* feat: map arguments to JSON and return new messages
Except for the previous user message, we return all newly generated messages and let the frontend figure out how to display them.
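Decoding function-call arguments and dropping the echoed user message might look roughly like this (a sketch; the message shapes are assumed, not the project's actual schema):

```python
import json

def prepare_response(all_new_messages, user_message):
    """Return every newly generated message except the user's own input,
    decoding function-call arguments from JSON strings into objects so the
    frontend can render them directly."""
    out = []
    for msg in all_new_messages:
        if msg == user_message:
            # Skip the echoed user message; the frontend already has it.
            continue
        if isinstance(msg, dict) and "function_call" in msg:
            call = dict(msg["function_call"])
            if isinstance(call.get("arguments"), str):
                call["arguments"] = json.loads(call["arguments"])
            msg = {**msg, "function_call": call}
        out.append(msg)
    return out
```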
* feat: display messages based on role and show inner thoughts and connection status
* chore: build newest frontend
* feat(frontend): show loader while waiting for first message and disable send button until connection is open
* feat: make agent send the first message and loop similar to CLI
Currently the CLI loops until the correct function call sends a message to the user; this is an initial attempt to achieve similar behavior in the socket server.
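That loop might translate to the socket server roughly as follows (all names here, including the heartbeat nudge, are illustrative rather than the project's actual API):

```python
def step_until_user_message(agent, user_input, max_steps: int = 10):
    """Keep stepping the agent until one of its function calls actually
    sends a message to the user, mirroring the CLI loop."""
    messages = []
    next_input = user_input
    for _ in range(max_steps):
        new_messages, sent_to_user = agent.step(next_input)
        messages.extend(new_messages)
        if sent_to_user:
            break
        # No user-visible message yet: nudge the agent with a heartbeat
        # so it keeps working, the same trick the CLI uses.
        next_input = "[heartbeat]"
    return messages
```

The `max_steps` bound keeps a misbehaving agent from looping forever inside a single socket request.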
* chore: build new version of frontend
* fix: rename lib directory so it is not excluded as part of python gitignore
* chore: rebuild frontend app
* fix: save agent at end of each response to allow the conversation to carry on over multiple sessions
* feat: restructure server to support multiple endpoints and add agents and sources endpoint
* feat: setup frontend routing and settings page
* chore: build frontend
* feat: another iteration of web interface
Changes include: a WebSocket for chat, switching between different agents, and the introduction of zustand state management.
* feat: adjust frontend to work with memgpt rest-api
* feat: adjust existing rest_api to serve and interact with frontend
* feat: build latest frontend
* chore: build latest frontend
* fix: cleanup workspace
---------
Co-authored-by: Charles Packer <packercharles@gmail.com>
Co-authored-by: hemanthsavasere <hemanth.savasere@gmail.com>
Removed logging settings from configurations and migrated them to constants.py
Modified log.py to configure logging using those constants
Conflicts:
memgpt/config.py resolved
* First commit of memgpt client and some messy test code
* rolled back unnecessary changes to abstract interface; switched client to always use Queueing Interface
* Added missing interface clear() in run_command; added convenience method for checking if an agent exists, used that in create_agent
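The exists-check reused inside create_agent might be shaped like this (a hypothetical stand-in; the class and method names are illustrative, not the actual client API):

```python
class Client:
    """Hypothetical sketch of a client, showing only the agent-exists
    convenience method and its reuse inside create_agent."""

    def __init__(self):
        self._agents = {}

    def agent_exists(self, agent_name):
        # Convenience check so callers don't poke at internal state.
        return agent_name in self._agents

    def create_agent(self, agent_name):
        if self.agent_exists(agent_name):
            raise ValueError(f"agent {agent_name!r} already exists")
        self._agents[agent_name] = {"name": agent_name}
        return self._agents[agent_name]
```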
* Formatting fixes
* Fixed incorrect naming of get_agent_memory in rest server
* Removed erroneous clear from client save method; Replaced print statements with appropriate logger calls in server
* Updated readme with client usage instructions
* added tests for Client
* make printing to terminal toggleable on the queueing interface (should probably refactor this to use a logger)
* turn off printing to stdout via interface by default
* allow importing the python client in a similar fashion to openai-python (see https://github.com/openai/openai-python)
* Allowed quickstart on init of client; updated readme and test_client accordingly
* oops, fixed name of openai_api_key config key
* Fixed small typo
* Fixed broken test by adding memgpt hosted model details to agent config
* silence llamaindex 'LLM is explicitly disabled. Using MockLLM.' on server
* default to openai if user's memgpt directory is empty (first time)
* correct type hint
* updated section on client in readme
* added comment about how MemGPT config != Agent config
* patch unrelated test
* update wording on readme
* patch another unrelated test
* added python client to readme docs
* Changed 'user' to 'human' in example; Defaulted AgentConfig.model to 'None'; Fixed issue in create_agent (accounting for dict config); matched test code to example
* Fixed advanced example
* patch test
* patch
---------
Co-authored-by: cpacker <packercharles@gmail.com>
* made quickstart to openai or memgpt the default option when the user doesn't have a config set
* modified formatting + message styles
* revised quickstart guides in docs to talk about quickstart command
* make message consistent
---------
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
* Revert "Revert "nonfunctional 404 quickstart command w/ some other typo corrections""
This reverts commit 5dbdf31f1c.
* Revert "Revert "added example config file""
This reverts commit 72a58f6de3.
* tested and working
* added and tested openai quickstart, added fallback if internet 404's to pull from local copy
* typo
* updated openai key input message to include html link
* renamed --type to --backend; added a --latest flag that fetches from online, while the default is to pull from the local file
* fixed links
* added memgpt server command
* added the option to specify a port (rest default 8283, ws default 8282)
* fixed import in test
* added agent saving on shutdown
* added basic locking mechanism (assumes only one server.py is running at the same time)
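A basic single-instance lock of this kind can be sketched with an atomic pid lockfile (a stdlib-only illustration; the actual mechanism may differ):

```python
import os
import tempfile

class ServerLock:
    """Crude advisory lock: assumes only one server.py runs at a time."""

    def __init__(self, path=None):
        self.path = path or os.path.join(tempfile.gettempdir(), "server.lock")

    def acquire(self):
        try:
            # O_EXCL makes creation atomic: open fails if the file exists,
            # so two servers cannot both win the race.
            fd = os.open(self.path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except FileExistsError:
            raise RuntimeError("another server instance appears to be running")
        os.write(fd, str(os.getpid()).encode())
        os.close(fd)

    def release(self):
        if os.path.exists(self.path):
            os.remove(self.path)
```

A stale lockfile left by a crashed server would need manual cleanup under this scheme, which is why it only "assumes" a single instance rather than guaranteeing it.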
* remove 'STOP' from buffer when converting to list for the non-streaming POST response
* removed duplicate on_event (redundant to lifespan)
* added GET agents/memory route
* added GET agent config
* added GET server config
* added PUT route for modifying agent core memory
* refactored to put server loop in separate function called via main