dumbpilot
Get inline completions using llama.cpp as a server backend.
Usage
- start a llama.cpp server in the background or on a remote machine
- configure the host
- press `ctrl+shift+l` to request a code prediction
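The steps above can be sketched as follows. This is a minimal example, assuming you have built llama.cpp and downloaded a GGUF model; the binary name and flags are llama.cpp's own, while the model path is a placeholder you should replace:

```shell
# Start llama.cpp's built-in HTTP server on port 8080,
# listening on all interfaces so a remote editor can reach it.
# (Replace the model path with your own GGUF file.)
./llama-server -m ./models/your-model.gguf --host 0.0.0.0 --port 8080

# Quick sanity check that the server is up:
curl http://localhost:8080/health
```

Then point the extension's host setting at that machine and port.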