CompactRAG: Reducing LLM Calls and Token Overhead in Multi-Hop Question Answering | ScienceToStartup