
Gp.nvim

Gp.nvim (GPT prompt) Neovim AI plugin: ChatGPT sessions & Instructable text/code operations & Speech to text [OpenAI, Ollama, Anthropic, ..]

Install / Use

/learn @Robitx/Gp.nvim
About this skill

Quality Score: 0/100

Supported Platforms:

  • Claude Code
  • Claude Desktop
  • GitHub Copilot
  • Gemini CLI

README

<!-- panvimdoc-ignore-start -->

<a href="https://github.com/Robitx/gp.nvim/blob/main/LICENSE"><img alt="GitHub" src="https://img.shields.io/github/license/robitx/gp.nvim"></a> <a href="https://github.com/Robitx/gp.nvim/stargazers"><img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/Robitx/gp.nvim"></a> <a href="https://github.com/Robitx/gp.nvim/issues"><img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/Robitx/gp.nvim"></a> <a href="https://github.com/Robitx/gp.nvim/pulls"><img alt="GitHub closed pull requests" src="https://img.shields.io/github/issues-pr-closed/Robitx/gp.nvim?label=PRs"></a> <a href="https://github.com/Robitx/gp.nvim/graphs/contributors"><img alt="GitHub contributors" src="https://img.shields.io/github/contributors-anon/Robitx/gp.nvim"></a> <a href="https://github.com/search?q=%2F%5E%5B%5Cs%5D*require%5C%28%5B%27%22%5Dgp%5B%27%22%5D%5C%29%5C.setup%2F+language%3ALua&type=code&p=1"><img alt="Static Badge" src="https://img.shields.io/badge/Use%20in%20the%20Wild-8A2BE2"></a> <a href="https://discord.gg/dYyHmyNpv7"><img alt="Discord" src="https://img.shields.io/discord/1200485978725433484?label=Discord"></a>

Gp.nvim (GPT prompt) Neovim AI plugin

<!-- panvimdoc-ignore-end --> <br>

ChatGPT-like sessions, instructable text/code operations, speech to text and image generation in your favorite editor.

<p align="left"> <img src="https://github.com/Robitx/gp.nvim/assets/8431097/cb288094-2308-42d6-9060-4eb21b3ba74c" width="49%"> <img src="https://github.com/Robitx/gp.nvim/assets/8431097/c538f0a2-4667-444e-8671-13f8ea261be1" width="49%"> </p>

YouTube demos

Goals and Features

The goal is to extend Neovim with the power of GPT models in a simple, unobtrusive, extensible way,
trying to keep things as native as possible by reusing and integrating well with the natural features of (Neo)vim.

  • Streaming responses
    • no spinner wheel and waiting for the full answer
    • response generation can be canceled halfway through
    • properly working undo (response can be undone with a single u)
  • Infinitely extensible via hook functions specified as part of the config
  • Minimum dependencies (neovim, curl, grep and optionally sox)
    • zero dependencies on other lua plugins to minimize chance of breakage
  • ChatGPT like sessions
    • just good old Neovim buffers formatted as markdown with autosave and a few buffer-bound shortcuts
    • last chat also quickly accessible via a toggleable popup window
    • chat finder - management popup for searching, previewing, deleting and opening chat sessions
  • Instructable text/code operations
    • templating mechanism to combine user instructions, selections etc into the gpt query
    • multimodal - same command works for normal/insert mode, with selection or a range
    • many possible output targets - rewrite, prepend, append, new buffer, popup
    • non-interactive command mode available for common repetitive tasks implementable as simple hooks
      (explain something in a popup window, write unit tests for selected code into a new buffer,
      finish selected code based on comments in it, etc.)
    • custom instructions per repository with .gp.md file
      (instruct gpt to generate code using certain libs, packages, conventions and so on)
  • Speech to text support
    • a mouth is 2-4x faster than fingers when it comes to outputting words - use it where it makes sense
      (dictating comments and notes, asking gpt questions, giving instructions for code operations, ..)
  • Image generation
    • be even less tempted to open the browser with the ability to generate images directly from Neovim
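The hook functions mentioned above are plain Lua functions placed in the hooks table of the setup config; Gp exposes each one as a user command named after its key. A minimal sketch in the style of the upstream examples (adapt the body to your needs):

```lua
require("gp").setup({
    hooks = {
        -- :GpBufferChatNew opens a fresh chat with the whole current
        -- buffer passed in as context (the "%" range selects all lines)
        BufferChatNew = function(gp, _)
            vim.api.nvim_command("%" .. gp.config.cmd_prefix .. "ChatNew")
        end,
    },
})
```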

Install

1. Install the plugin

Snippets for your preferred package manager:

```lua
-- lazy.nvim
{
    "robitx/gp.nvim",
    config = function()
        local conf = {
            -- For customization, refer to Install > Configuration in the Documentation/Readme
        }
        require("gp").setup(conf)

        -- Setup shortcuts here (see Usage > Shortcuts in the Documentation/Readme)
    end,
}
```

```lua
-- packer.nvim
use({
    "robitx/gp.nvim",
    config = function()
        local conf = {
            -- For customization, refer to Install > Configuration in the Documentation/Readme
        }
        require("gp").setup(conf)

        -- Setup shortcuts here (see Usage > Shortcuts in the Documentation/Readme)
    end,
})
```

```vim
" vim-plug
Plug 'robitx/gp.nvim'
```

```lua
local conf = {
    -- For customization, refer to Install > Configuration in the Documentation/Readme
}
require("gp").setup(conf)

-- Setup shortcuts here (see Usage > Shortcuts in the Documentation/Readme)
```
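For the shortcut setup mentioned in the snippets, plain vim.keymap.set against the Gp user commands is enough. The bindings below are only illustrative, not defaults shipped by the plugin:

```lua
-- normal/insert mode: open a new chat / toggle the last chat popup
vim.keymap.set({ "n", "i" }, "<C-g>c", "<cmd>GpChatNew<cr>", { desc = "Gp: new chat" })
vim.keymap.set({ "n", "i" }, "<C-g>t", "<cmd>GpChatToggle<cr>", { desc = "Gp: toggle chat" })
-- visual mode: rewrite the current selection in place
vim.keymap.set("v", "<C-g>r", ":<C-u>'<,'>GpRewrite<cr>", { desc = "Gp: rewrite selection" })
```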

2. OpenAI API key

Make sure you have an OpenAI API key. Get one here and use it in 5. Configuration. Also consider setting up usage limits so you won't get surprised at the end of the month.

The OpenAI API key can be passed to the plugin in multiple ways:

| Method | Example | Security Level |
| ----------------- | -------------------------------------------------------------- | -------------- |
| hardcoded string | openai_api_key: "sk-...", | Low |
| default env var | set OPENAI_API_KEY environment variable in shell config | Medium |
| custom env var | openai_api_key = os.getenv("CUSTOM_ENV_NAME"), | Medium |
| read from file | openai_api_key = { "cat", "path_to_api_key" }, | Medium-High |
| password manager | openai_api_key = { "bw", "get", "password", "OAI_API_KEY" }, | High |

If openai_api_key is a table, Gp runs it asynchronously to avoid blocking Neovim (password managers can take a second or two).
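For example, the "read from file" method from the table above looks like this in practice (the key file path is of course yours to choose):

```lua
require("gp").setup({
    -- the table form is executed as an external command, asynchronously,
    -- and its stdout is used as the API key
    openai_api_key = { "cat", vim.fn.expand("~/.config/openai/api_key") },
})
```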

3. Multiple providers

The following LLM providers are currently supported besides OpenAI:

  • Ollama for local/offline open-source models. The plugin assumes you have the Ollama service up and running with configured models available (the default Ollama agent uses Llama3).
  • GitHub Copilot with a Copilot license (zbirenbaum/copilot.lua or github/copilot.vim for autocomplete). You can access the underlying GPT-4 model without paying anything extra (essentially unlimited GPT-4 access).
  • Perplexity.ai Pro users have $5/month free API credits available (the default PPLX agent uses Mixtral-8x7b).
  • Anthropic to access Claude models, which currently outperform GPT-4 in some benchmarks.
  • Google Gemini with a quite generous free tier, but some geo-restrictions apply (EU).
  • Any other "OpenAI chat/completions" compatible endpoint (Azure, LM Studio, etc.)

Below is an example of the relevant configuration part enabling some of these. The secret field has the same capabilities as openai_api_key (which is still supported for compatibility).

```lua
	providers = {
		openai = {
			endpoint = "https://api.openai.com/v1/chat/completions",
			secret = os.getenv("OPENAI_API_KEY"),
		},

		-- azure = {...},

		copilot = {
			endpoint = "https://api.githubcopilot.com/chat/completions",
			secret = {
				"bash",
				"-c",
				"cat ~/.config/github-copilot/apps.json | sed -e 's/.*oauth_token...//;s/\".*//'",
			},
		},

		pplx = {
			endpoint = "https://api.perplexity.ai/chat/completions",
			secret = os.getenv("PPLX_API_KEY"),
		},

		ollama = {
			endpoint = "http://localhost:11434/v1/chat/completions",
		},

		googleai = {
			endpoint = "https://generativelanguage.googleapis.com/v1beta/models/{{model}}:streamGenerateContent?key={{secret}}",
			secret = os.getenv("GOOGLEAI_API_KEY"),
		},

		anthropic = {
			endpoint = "https://api.anthropic.com/v1/messages",
			secret = os.getenv("ANTHROPIC_API_KEY"),
		},
	},
```

Each of these providers has some agents preconfigured. Below is an example of how to disable the predefined ChatGPT3-5 agent and create a custom one. If the provider field is missing, OpenAI is assumed for backward compatibility.

```lua
	agents = {
		{
			name = "ChatGPT3-5",
			disable = true,
		},
		{
			name = "MyCustomAgent",
			provider = "copilot",
			chat = true,
			command = true,
			model = { model = "gpt-4-turbo" },
			system_prompt = "Answer any query with just: Sure thing..",
		},
	},
```
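Configured agents can then be switched at runtime with the :GpAgent and :GpNextAgent commands; a binding such as the following (illustrative, not a default) makes cycling quick:

```lua
-- cycle to the next available chat/command agent
vim.keymap.set({ "n", "i" }, "<C-g>a", "<cmd>GpNextAgent<cr>", { desc = "Gp: next agent" })
```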

4. Dependencies

The core plugin only needs curl installed to make calls to the OpenAI API and grep for ChatFinder, so Linux, BSD and macOS should be covered.

Voice commands (:GpWhisper*) depend on SoX (Sound eXchange) to handle audio recording and processing:

  • macOS: brew install sox
  • Ubuntu/Debian: apt-get install sox libsox-fmt-mp3
  • Arch Linux: pacman -S sox
  • Red Hat/CentOS: yum install sox
  • NixOS: nix-env -i sox
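If you want Neovim to warn you early about a missing external tool, a tiny check like this in your config is enough (sox is only needed for the voice commands):

```lua
-- warn about missing external dependencies at startup
for _, tool in ipairs({ "curl", "grep", "sox" }) do
    if vim.fn.executable(tool) == 0 then
        vim.notify("gp.nvim: external dependency not found: " .. tool, vim.log.levels.WARN)
    end
end
```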

5. Configuration

Below is a linked snippet with the default values, but I suggest starting with the most minimal config possible (just openai_api_key if you don't have the OPENAI_API_KEY env variable set up). Defaults change over time to improve things and options might get deprecated, so it's better to change only the things where the default doesn't fit your needs.

<!-- README_REFERENCE_MARKER_REPLACE_NEXT_LINE -->

https://github.com/Robitx/gp.nvim/blob/b910a540f16875af7
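In line with the advice above, a realistic config usually stays tiny, overriding just the key and perhaps a path or two (chat_dir is one of the documented options; verify option names against the linked defaults for your version):

```lua
require("gp").setup({
    -- only needed if OPENAI_API_KEY is not exported in your shell
    openai_api_key = os.getenv("OPENAI_API_KEY"),
    -- keep chat markdown files under Neovim's data directory
    chat_dir = vim.fn.stdpath("data") .. "/gp/chats",
})
```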

View on GitHub
GitHub Stars: 1.3k
Category: Development
Updated: 2 days ago
Forks: 126

Languages

Lua

Security Score

100/100

Audited on Mar 25, 2026

No findings