# dumbpilot

Get inline completions using llama.cpp as a server backend.

## Usage

1. Start `llama.cpp/server` in the background or on a remote machine (see the example below).
2. Configure the host in the extension settings (a sketch follows this list).
3. Press `Ctrl+Shift+L` to request a code prediction.
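
For step 1, a minimal way to launch the server, assuming a llama.cpp checkout built with the `server` example (flag names as of late 2023; the model path is a placeholder):

```sh
# From the llama.cpp build directory: serve a GGUF model over HTTP.
# -c sets the context size; --host 0.0.0.0 exposes the server to remote machines.
./server -m models/7B/model.gguf -c 2048 --host 0.0.0.0 --port 8080
```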
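
For step 2, point the extension at that host. The actual setting keys are declared in this repo's `package.json`; the key name below is an assumption for illustration only:

```jsonc
// .vscode/settings.json (hypothetical key, check package.json for the real name)
{
  "dumbpilot.endpoint": "http://127.0.0.1:8080"
}
```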
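
The shortcut in step 3 can be rebound like any VS Code keybinding. The command ID below is a hypothetical placeholder, not confirmed by the source; look up the real one in `package.json`:

```jsonc
// keybindings.json (the command ID is an assumption)
[
  {
    "key": "ctrl+shift+l",
    "command": "dumbpilot.predict"
  }
]
```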