# dumbpilot

Get inline completions using llama.cpp as a server backend.

## Usage

1. Start `llama.cpp/server` in the background or on a remote machine (see the launch example after this list).
2. Configure the host the extension should connect to (see the settings sketch below).
3. Press `ctrl+shift+l` to trigger a code prediction.
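
As a reference for step 1, here is a minimal sketch of launching the server locally; the binary name varies by llama.cpp version (older builds ship it as `./server`, newer ones as `llama-server`), and the model path is a placeholder:

```sh
# Build llama.cpp with server support, then launch the HTTP server.
# Replace the model path with any GGUF model you have downloaded.
./llama-server -m ./models/model.gguf --host 0.0.0.0 --port 8080
```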
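
For step 2, assuming dumbpilot runs as an editor extension configured through JSON settings (e.g. VS Code's `settings.json`), pointing it at the server might look like the snippet below. The key `dumbpilot.endpoint` is a hypothetical name used for illustration; check the extension's contributed settings for the actual key:

```jsonc
// settings.json (User or Workspace)
// "dumbpilot.endpoint" is a hypothetical key used for illustration only;
// the host and port should match the running llama.cpp server.
{
    "dumbpilot.endpoint": "http://127.0.0.1:8080"
}
```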