
# dumbpilot

Get inline completions using llama.cpp as a server backend.

## Usage

1. Start `llama.cpp/server` in the background or on a remote machine.
2. Configure the host.
3. Press `ctrl+shift+l` to request a code completion.
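Under the hood, the steps above amount to sending the text before the cursor to the server's `/completion` endpoint and inserting the generated text. A minimal TypeScript sketch of that request, assuming a llama.cpp server on `localhost:8080` (the `/completion` route, `n_predict`, and the `content` response field are from llama.cpp's server API; the helper names and parameter values here are illustrative, not the extension's actual code):

```typescript
// Shape of the request body accepted by llama.cpp's /completion endpoint
// (only the fields used in this sketch).
type CompletionRequest = {
  prompt: string;      // text before the cursor, used as the prefix
  n_predict: number;   // max number of tokens to generate
  temperature: number; // low temperature keeps completions deterministic
};

// Build a request body for a given code prefix.
function buildRequest(prefix: string): CompletionRequest {
  return { prompt: prefix, n_predict: 128, temperature: 0.2 };
}

// POST the prefix to the configured host and return the generated text.
async function complete(host: string, prefix: string): Promise<string> {
  const res = await fetch(`http://${host}/completion`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRequest(prefix)),
  });
  const data = await res.json();
  return data.content; // llama.cpp's server returns the completion here
}
```

In the extension, the returned string would be surfaced through VS Code's inline-completion machinery rather than printed directly.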