
dumbpilot

Get inline completions using llama.cpp as a server backend

Usage

  1. Start llama.cpp/server in the background or on a remote machine (see the example invocation below)
  2. Configure the host in the extension settings (see the sketch below)
  3. Press ctrl+shift+l to trigger a code prediction
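
For step 1, one possible way to build and launch the llama.cpp HTTP server; the model path and port are placeholders, and the exact build target and flags may differ depending on your llama.cpp checkout:

```sh
# build the server example and start it with a local GGUF model
make server
./server -m models/your-model.gguf --host 127.0.0.1 --port 8080
```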
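For step 2, the host would typically go in VS Code's settings.json. The setting key below is hypothetical and only illustrates the idea; check the extension's contributed configuration in its package.json for the real name:

```jsonc
{
    // hypothetical key, illustrative only
    "dumbpilot.llamaHost": "http://127.0.0.1:8080"
}
```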
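Under the hood, an extension like this forwards the text before the cursor to the server's /completion endpoint and inserts the returned continuation as an inline suggestion. A minimal TypeScript sketch, assuming a late-2023 llama.cpp server and its default JSON fields (verify the endpoint and field names against your build):

```typescript
// Sketch of a single completion request to a llama.cpp server.
// Assumes Node 18+ (global fetch) and the /completion API of
// late-2023 llama.cpp builds; not the extension's actual code.
async function fetchCompletion(prefix: string): Promise<string> {
    const response = await fetch("http://127.0.0.1:8080/completion", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            prompt: prefix,   // text before the cursor
            n_predict: 128,   // max tokens to generate
            temperature: 0.2, // low temperature suits code completion
        }),
    });
    const data = await response.json();
    return data.content;      // generated continuation
}
```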