Visual Studio Code client for TabNine. https://marketplace.visualstudio.com/items?itemName=TabNine.tabnine-vscode
This is the Visual Studio Code Tabnine client, an advanced AI-based autocomplete for all programming languages. Tabnine indexes your entire project by reading your `.gitignore` (or similar files) to determine which files to index.
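As a rough illustration of the filtering step described above, the sketch below shows how an indexer might honor `.gitignore`-style patterns. This is a simplified approximation using Python's `fnmatch`, not Tabnine's actual implementation, and it ignores the more advanced parts of gitignore syntax (negation, anchoring, `**`):

```python
# Simplified sketch: decide which files to index based on ignore patterns.
# NOT Tabnine's real logic; gitignore semantics are more involved.
import fnmatch


def load_ignore_patterns(gitignore_text):
    """Parse ignore patterns, skipping blank lines and comments."""
    patterns = []
    for line in gitignore_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            patterns.append(line.rstrip("/"))
    return patterns


def should_index(path, patterns):
    """Index a file only if no pattern matches it or any parent directory."""
    parts = path.split("/")
    for pattern in patterns:
        if any(fnmatch.fnmatch(part, pattern) for part in parts):
            return False
    return True


patterns = load_ignore_patterns("node_modules/\n*.log\n# build output\ndist/\n")
```

With these patterns, `should_index("src/app.py", patterns)` is true, while `node_modules/lib/x.js` and `debug.log` are skipped.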
Tabnine is part of Codota.
The following is a brief guide to using Tabnine in Visual Studio Code. First, install Tabnine. Second, open the Tabnine Settings page via the `TabNine: Open Settings` command from the Command Palette (Ctrl+Shift+P), and verify that the Tabnine local model has loaded successfully, as shown in the following screenshot:
Tabnine is a textual autocomplete extension. As you type in your editor, the Tabnine completion dialog appears, offering suggestions based on the text you have typed.
Many users choose to disable the default behavior of using Enter to accept completions, to avoid accepting a completion when they intended to start a new line. You can do this by going to Settings → Editor: Accept Suggestion On Enter and setting it to off.
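If you prefer to edit `settings.json` directly, the equivalent change is a single entry (`editor.acceptSuggestionOnEnter` is a standard VS Code setting):

```json
{
  // Stop Enter from accepting a completion; use Tab to accept instead.
  "editor.acceptSuggestionOnEnter": "off"
}
```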
Deep Tabnine is trained on around 2 million files from GitHub. During training, Tabnine’s goal is to predict the next token given the tokens that came before. To achieve this goal, Tabnine learns complex behaviour, such as type inference in dynamically typed languages.
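To make the training objective concrete, here is a toy next-token predictor based on bigram counts. This is only an illustration of "predict the next token given the tokens that came before"; Deep Tabnine itself uses a GPT-2-style Transformer, not frequency counts:

```python
# Toy illustration of the next-token objective (NOT Tabnine's model).
from collections import Counter, defaultdict


def train_bigram(tokens):
    """For each token, count which tokens tend to follow it."""
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model


def predict_next(model, prev):
    """Return the continuation seen most often after `prev`."""
    if prev not in model:
        return None
    return model[prev].most_common(1)[0][0]


code = "def add ( a , b ) : return a + b".split()
model = train_bigram(code)
```

After training on this one snippet, `predict_next(model, ":")` returns `"return"`, because that is the only continuation the model has seen after `:`.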
Deep Tabnine can use subtle clues that are difficult for traditional tools to access. For example, the return type of `app.get_user()` is assumed to be an object with setter methods, while the return type of `app.get_users()` is assumed to be a list.
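The kind of code this clue comes from might look like the following. All names here (`App`, `User`, `set_name`) are hypothetical examples, not Tabnine APIs; the point is only that a singular getter returns one object with setters while the plural getter returns a list:

```python
# Hypothetical code illustrating the singular/plural naming clue above.
class User:
    def __init__(self, name):
        self.name = name

    def set_name(self, name):  # a setter a completer might suggest
        self.name = name


class App:
    def __init__(self):
        self._users = [User("ada"), User("grace")]

    def get_user(self):
        """Return a single User object."""
        return self._users[0]

    def get_users(self):
        """Return a list of User objects."""
        return list(self._users)


app = App()
```

A model that has seen many such codebases can learn to suggest `.set_name(...)` after `app.get_user()` but list operations after `app.get_users()`.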
Deep Tabnine is based on GPT-2, which uses the Transformer network architecture. This architecture was first developed to solve problems in natural language processing. Although modelling code and modelling natural language might appear to be unrelated tasks, modelling code requires understanding English in some unexpected ways.
Running Tabnine locally consumes memory on your computer and may introduce latency that is noticeable on some PCs. With that in mind, Tabnine has developed a cloud solution, called Tabnine Deep Cloud.
We understand that users concerned with their privacy prefer to keep their code on their own machine. Rest assured that we’re taking the following steps to address this concern:

- For individual developers, we are working on a reduced-size model which can run on a laptop with reasonable latency. Update: we’ve released Tabnine Local.
- For enterprise users, we will soon roll out the option to license the model and run it on your own hardware. We can also train a custom model for you which understands the unique patterns and style within your codebase. If this could be relevant to you or your team, we would love to hear more about your use case at [email protected]

Enabling Tabnine Deep Cloud sends small parts of your code to our servers to provide GPU-accelerated completions. Other than for the purpose of fulfilling your query, your data isn’t used, saved, or logged in any way.
Tabnine works for all programming languages. Tabnine does not require any configuration in order to work. Tabnine does not require any external software (though it can integrate with it). Since Tabnine does not parse your code, it will never stop working because of a mismatched bracket.
By default, Tabnine makes web requests only for the purposes of downloading updates and validating registration keys. In this case your code is not sent anywhere, even to Tabnine servers. You may opt in to Tabnine Deep Cloud, which allows you to use Tabnine’s servers for GPU-accelerated completions powered by a deep learning model. If sending code to a cloud service is not possible, we also offer a self-hosted option. Contact us at [email protected]
A note on licensing: this repo includes source code as well as packaged Tabnine binaries. The MIT license only applies to the source code, not the binaries. The binaries are covered by the Tabnine End User License Agreement.