* autogenerate openapi file on server startup
* added endpoint for paginated retrieval of in-context agent messages
* missing diff
* added ability to pass system messages via message endpoint
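A rough sketch of how the two message-endpoint changes above could be exercised from Python; the paths, query parameters, and payload fields here are assumptions, not the server's actual contract:

```python
import requests

BASE_URL = "http://localhost:8283"  # REST server default port (see below)
AGENT_ID = "agent_1"                # hypothetical agent identifier

# paginated retrieval of in-context agent messages
resp = requests.get(
    f"{BASE_URL}/agents/{AGENT_ID}/messages",
    params={"start": 0, "count": 20},  # page offset + page size (assumed names)
)
resp.raise_for_status()
print(resp.json())

# passing a system message via the message endpoint
resp = requests.post(
    f"{BASE_URL}/agents/{AGENT_ID}/messages",
    json={"message": "reminder: keep answers short", "role": "system"},  # assumed fields
)
resp.raise_for_status()
```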
* patched bad Depends usages into Query params to fix param info not showing up in GET requests; fixed some bad copy-paste
* feat: add dark mode & make minor UI improvements
added dark mode toggle & picked a color scheme that is closer to the memgpt icons
cleaned up the home page a little bit.
* feat: add thinking indicator & make minor UI improvements
we now show a thinking indicator while the current message is loading.
removed the status indicator since we no longer use websockets.
also adjusted some of the chat styles to better fit the new theme.
* feat: add memory viewer and allow memory edit
* chore: build frontend
* First commit of memgpt client and some messy test code
* rolled back unnecessary changes to abstract interface; switched client to always use Queueing Interface
* Added missing interface clear() in run_command; added convenience method for checking if an agent exists, used that in create_agent
* Formatting fixes
* Fixed incorrect naming of get_agent_memory in rest server
* Removed erroneous clear from client save method; Replaced print statements with appropriate logger calls in server
* Updated readme with client usage instructions
* added tests for Client
* make printing to the terminal toggleable on the queueing interface (should probably refactor this to a logger)
* turn off printing to stdout via interface by default
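A minimal sketch of the toggleable-printing idea from the two items above; the class and attribute names are made up for illustration, not the actual interface code:

```python
class BufferedInterface:
    """Collects agent output in a buffer; printing to stdout is opt-in."""

    def __init__(self, print_to_terminal: bool = False):  # off by default
        self.print_to_terminal = print_to_terminal
        self.buffer = []

    def _emit(self, msg: str):
        self.buffer.append(msg)
        if self.print_to_terminal:
            print(msg)

    def assistant_message(self, msg: str):
        self._emit(f"[assistant] {msg}")

    def clear(self):
        self.buffer = []
```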
* allow importing the python client in a similar fashion to openai-python (see https://github.com/openai/openai-python)
* Allowed quickstart on init of client; updated readme and test_client accordingly
* oops, fixed name of openai_api_key config key
* Fixed small typo
* Fixed broken test by adding memgpt hosted model details to agent config
* silence llamaindex's 'LLM is explicitly disabled. Using MockLLM.' warning on the server
* default to openai if user's memgpt directory is empty (first time)
* correct type hint
* updated section on client in readme
* added comment about how MemGPT config != Agent config
* patch unrelated test
* update wording on readme
* patch another unrelated test
* added python client to readme docs
* Changed 'user' to 'human' in example; Defaulted AgentConfig.model to 'None'; Fixed issue in create_agent (accounting for dict config); matched test code to example
* Fixed advanced example
* patch test
* patch
---------
Co-authored-by: cpacker <packercharles@gmail.com>
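Putting the client items above together, usage roughly follows the openai-python import style; the class and method names below are illustrative assumptions rather than the exact client API:

```python
# illustrative only: class/method names are assumptions, not the real client surface
from memgpt import MemGPT

client = MemGPT(
    quickstart="openai",                  # run quickstart on init, as described above
    config={"openai_api_key": "sk-..."},  # config key name from the fix above
)

agent_id = "my_agent"
if not client.agent_exists(agent_id=agent_id):  # convenience existence check
    client.create_agent(
        agent_config={"name": agent_id, "human": "basic", "persona": "sam_pov"}
    )

response = client.user_message(agent_id=agent_id, message="hello!")
print(response)
```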
* added memgpt server command
* added the option to specify a port (REST default 8283, WS default 8282)
* fixed import in test
* added agent saving on shutdown
* added basic locking mechanism (assumes only one server.py is running at the same time)
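A sketch of the single-instance lock described above (write a lock file at startup, remove it on shutdown); the lock path is an assumption:

```python
import os
import sys

LOCK_PATH = os.path.expanduser("~/.memgpt/server.lock")  # hypothetical location

def acquire_lock():
    os.makedirs(os.path.dirname(LOCK_PATH), exist_ok=True)
    try:
        # O_EXCL fails if the lock file already exists, i.e. another server.py is running
        fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        sys.exit("another server instance appears to be running (lock file exists)")
    with os.fdopen(fd, "w") as f:
        f.write(str(os.getpid()))

def release_lock():
    # called on shutdown, alongside agent saving
    if os.path.exists(LOCK_PATH):
        os.remove(LOCK_PATH)
```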
* remove 'STOP' from the buffer when converting it to a list for the non-streaming POST response
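Roughly what that buffer-to-list conversion looks like; the buffer being a queue is an assumption:

```python
import queue

def drain_buffer(q: queue.Queue) -> list:
    """Drain the interface buffer into a list for the non-streaming POST response,
    dropping the 'STOP' sentinel that only matters for streaming."""
    items = []
    while not q.empty():
        item = q.get_nowait()
        if item != "STOP":
            items.append(item)
    return items
```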
* removed duplicate on_event handler (redundant with lifespan)
* added GET agents/memory route
* added GET agent config
* added GET server config
* added PUT route for modifying agent core memory
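The new routes in the four items above can be hit with plain HTTP calls; the paths and payload shape below are guesses based on the route names:

```python
import requests

BASE = "http://localhost:8283"
params = {"agent_id": "agent_1", "user_id": "user_1"}  # hypothetical identifiers

agent_memory = requests.get(f"{BASE}/agents/memory", params=params).json()          # GET agent memory
agent_config = requests.get(f"{BASE}/agents/config", params=params).json()          # GET agent config
server_config = requests.get(f"{BASE}/config", params={"user_id": "user_1"}).json() # GET server config

# PUT route for modifying agent core memory
requests.put(
    f"{BASE}/agents/memory",
    json={**params, "human": "The user's name is Charles.", "persona": None},
)
```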
* refactored to put server loop in separate function called via main
* init server refactor
* refactored websockets server/client code to use internal server API
* added intentional fail on test
* update workflow to try and get test to pass remotely
* refactor to put websocket code in a separate subdirectory
* added fastapi rest server
* add error handling
* modified interface return style
* disabled certain tests on remote
* added SSE response option for user_message
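With the SSE option, user_message can stream chunks as they are produced instead of returning one JSON body; a minimal FastAPI-style sketch (route path and helper are invented for illustration):

```python
import asyncio
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def run_agent_step(message: str):
    """Stand-in for the real agent step; yields output chunks as they appear."""
    for chunk in ("thinking...", f"echo: {message}", "done."):
        await asyncio.sleep(0)  # pretend work
        yield chunk

@app.post("/agents/message")
async def user_message(message: str, stream: bool = False):
    if not stream:
        # non-streaming: collect everything into a single JSON body
        return {"messages": [chunk async for chunk in run_agent_step(message)]}

    async def event_stream():
        # Server-Sent Events: one "data:" line per chunk, blank line ends the event
        async for chunk in run_agent_step(message):
            yield f"data: {chunk}\n\n"
        yield "data: [DONE]\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")
```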
* fix ws interface test
* fallback for OpenAI key
* add soft fail for test when localhost is borked
* add step_yield for all server related interfaces
* extra catch
* update toml + lock with server add-ons (add uvicorn+fastapi, move websockets to server extra)
* regen lock file
* added pytest-asyncio as an extra in dev
* add pydantic to deps
* renamed CreateConfig to CreateAgentConfig
* fixed POST request for creating agent + tested it
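For the last two items, the create-agent POST takes a pydantic body along these lines; the field names are guesses, not the actual CreateAgentConfig schema:

```python
from typing import Optional
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CreateAgentConfig(BaseModel):
    # guessed fields; the real schema lives in the server code
    user_id: str
    name: str
    persona: Optional[str] = None
    human: Optional[str] = None
    model: Optional[str] = None

@app.post("/agents")
def create_agent(body: CreateAgentConfig):
    # hand the parsed config to the underlying server (omitted here)
    return {"agent_name": body.name, "created": True}
```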