An MCP server must fetch very specific information along with the right context. We have already seen multiple memory-fetching implementations built to work around current context-length limits. Even when the context window grows (as with Gemini 2.5 Pro), a model's ability to reason intelligently over vast amounts of data tends to decline. There is therefore a clear need for intelligent fetching systems over large blobs of information. Several methods have emerged for this:
In this technique, the information blob is analyzed and a schema is derived from it. The information is then stored in a database under that schema. Once the database is populated, the schema is handed to the LLM, which writes queries against the database to fetch the required information.
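The flow above can be sketched with SQLite. This is a minimal, hypothetical example: the table, the sample rows, and the hard-coded query standing in for LLM output are all invented for illustration. In a real server, the string returned by `schema_for_llm` would go into the LLM's prompt, and the query would come back from the model.

```python
import sqlite3

# Step 1: the blob has been parsed into a derived schema (hypothetical table).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE notes (id INTEGER PRIMARY KEY, topic TEXT, body TEXT, created TEXT)"
)

# Step 2: populate the database with the extracted information (sample data).
conn.executemany(
    "INSERT INTO notes (topic, body, created) VALUES (?, ?, ?)",
    [
        ("billing", "Invoice #42 is overdue", "2024-05-01"),
        ("billing", "Refund issued for order 7", "2024-05-03"),
        ("support", "User reported login bug", "2024-05-02"),
    ],
)

def schema_for_llm(conn):
    """Return the CREATE statements; this string is what the LLM is shown."""
    rows = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return "\n".join(r[0] for r in rows)

# Step 3: the LLM, given the schema, writes a query. Hard-coded here so the
# sketch stays self-contained.
llm_generated_query = (
    "SELECT body FROM notes WHERE topic = 'billing' ORDER BY created"
)
rows = conn.execute(llm_generated_query).fetchall()
print(rows)  # → [('Invoice #42 is overdue',), ('Refund issued for order 7',)]
```

A nice property of this design is that only the schema, not the data, occupies the LLM's context; the database does the heavy lifting of filtering over the full blob.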