
中文版 | 日本語版 | 한국어 | Русский | Português

Project Guidelines · PRs Welcome

While developing a new project is like rolling on a green field for you, maintaining it is a potential dark twisted nightmare for someone else. Here's a list of guidelines we've found, written and gathered that (we think) work really well with most JavaScript projects here at elsewhen. If you want to share a best practice, or think one of these guidelines should be removed, feel free to share it with us.


1. Git


1.1 Some Git rules

There is a set of rules to keep in mind:

* Perform work in a feature branch.

    _Why:_
    > Because this way all work is done in isolation on a dedicated branch rather than the main branch. It allows you to submit multiple pull requests without confusion. You can iterate without polluting the master branch with potentially unstable, unfinished code. [read more...](https://www.atlassian.com/git/tutorials/comparing-workflows#feature-branch-workflow)
* Branch out from `develop`.

    _Why:_
    > This way, you can make sure that code in `master` will almost always build without problems, and can be mostly used directly for releases (this might be overkill for some projects).

* Never push into the `develop` or `master` branch. Make a Pull Request.

    _Why:_
    > It notifies team members that a feature is complete. It also enables easy peer review of the code and provides a dedicated forum for discussing the proposed feature.

* Update your local `develop` branch and do an interactive rebase before pushing your feature and making a Pull Request.

    _Why:_
    > Rebasing will merge in the requested branch (`master` or `develop`) and apply the commits that you have made locally to the top of the history without creating a merge commit (assuming there were no conflicts). The result is a nice, clean history. read more ...

* Resolve potential conflicts while rebasing and before making a Pull Request.

* Delete local and remote feature branches after merging.

    _Why:_
    > Otherwise dead branches will clutter up your list of branches. It also ensures you only ever merge the branch back into (`master` or `develop`) once. Feature branches should only exist while the work is still in progress.

* Before making a Pull Request, make sure your feature branch builds successfully and passes all tests (including code style checks).

    _Why:_
    > You are about to add your code to a stable branch. If your feature-branch tests fail, there is a high chance that your destination branch build will fail too. Additionally, you need to apply the code style check before making a Pull Request. It aids readability and reduces the chance of formatting fixes being mingled in with actual changes.

* Use this `.gitignore` file.

    _Why:_
    > It already has a list of system files that should not be sent with your code into a remote repository. In addition, it excludes settings folders and files for most used editors, as well as the most common dependency folders.

* Protect your `develop` and `master` branch.

    _Why:_
    > It protects your production-ready branches from receiving unexpected and irreversible changes. read more... GitHub, Bitbucket and GitLab

1.2 Git workflow

For most of the reasons above, we use Feature-branch-workflow with Interactive Rebasing and some elements of Gitflow (naming and having a develop branch). The main steps are as follows:

* For a new project, initialize a git repository in the project directory. For subsequent features/changes this step should be ignored.

    ```sh
    cd <project directory>
    git init
    ```
* Checkout a new feature/bug-fix branch.

    ```sh
    git checkout -b <branchname>
    ```
* Make Changes.

    ```sh
    git add <file1> <file2> ...
    git commit
    ```

    _Why:_
    > `git add <file1> <file2> ...` - you should add only files that make up a small and coherent change.

    > `git commit` will start an editor which lets you separate the subject from the body.

    > Read more about it in section 1.3.

    _Tip:_
    > You could use `git add -p` instead, which will give you a chance to review all of the introduced changes one by one, and decide whether to include them in the commit or not.

* Sync with remote to get changes you’ve missed.

    ```sh
    git checkout develop
    git pull
    ```

    _Why:_
    > This will give you a chance to deal with conflicts on your machine while rebasing (later) rather than creating a Pull Request that contains conflicts.

* Update your feature branch with the latest changes from `develop` by interactive rebase.

    ```sh
    git checkout <branchname>
    git rebase -i --autosquash develop
    ```

    _Why:_
    > You can use `--autosquash` to squash all your commits to a single commit. Nobody wants many commits for a single feature in the develop branch. read more...

* If you don’t have conflicts, skip this step. If you have conflicts, resolve them and continue the rebase.

    ```sh
    git add <file1> <file2> ...
    git rebase --continue
    ```
* Push your branch. Rebase will change history, so you'll have to use `-f` to force changes into the remote branch. If someone else is working on your branch, use the less destructive `--force-with-lease`.

    ```sh
    git push -f
    ```

    _Why:_
    > When you do a rebase, you are changing the history on your feature branch. As a result, Git will reject a normal `git push`. Instead, you'll need to use the `-f` or `--force` flag. read more...

* Make a Pull Request.

* The Pull Request will be accepted, merged and closed by a reviewer.

* Remove your local feature branch if you're done.

    ```sh
    git branch -d <branchname>
    ```

    To remove all branches which are no longer on the remote:

    ```sh
    git fetch -p && for branch in `git branch -vv --no-color | grep ': gone]' | awk '{print $1}'`; do git branch -D $branch; done
    ```

1.3 Writing good commit messages

Having a good guideline for creating commits and sticking to it makes working with Git and collaborating with others a lot easier. Here are some rules of thumb (source):

* Separate the subject from the body with a newline between the two.

    _Why:_
    > Git is smart enough to distinguish the first line of your commit message as your summary. In fact, if you try `git shortlog` instead of `git log`, you will see a long list of commit messages, consisting of the id of the commit and the summary only.

* Limit the subject line to 50 characters and wrap the body at 72 characters.

    _Why:_
    > Commits should be as fine-grained and focused as possible; a commit message is not the place to be verbose. read more...

* Capitalize the subject line.

* Do not end the subject line with a period.

* Use imperative mood in the subject line.

    _Why:_
    > Rather than writing messages that say what a committer has done, it's better to consider these messages as the instructions for what is going to be done after the commit is applied to the repository. read more...

* Use the body to explain what and why as opposed to how.
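Several of the rules above are mechanical enough to check automatically. As a minimal sketch (not an official tool; wiring it into a `commit-msg` Git hook is left out, and the checks are illustrative), a validator could look like:

```javascript
// Check a commit message against the subject/body rules above.
// Returns a list of problems; an empty list means the message passes.
function checkCommitMessage(message) {
  const lines = message.split('\n');
  const subject = lines[0] || '';
  const separator = lines[1];
  const problems = [];
  if (subject.length > 50) problems.push('limit the subject line to 50 characters');
  if (subject && subject[0] !== subject[0].toUpperCase()) problems.push('capitalize the subject line');
  if (subject.endsWith('.')) problems.push('do not end the subject line with a period');
  if (lines.length > 1 && separator !== '') problems.push('separate subject from body with a blank line');
  for (const bodyLine of lines.slice(2)) {
    if (bodyLine.length > 72) problems.push('wrap the body at 72 characters');
  }
  return problems;
}
```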

2. Documentation

* Use this template for `README.md`. Feel free to add uncovered sections.
* For projects with more than one repository, provide links to them in their respective `README.md` files.
* Keep `README.md` updated as the project evolves.
* Comment your code. Try to make it as clear as possible what you are intending with each major section.
* If there is an open discussion on GitHub or Stack Overflow about the code or approach you're using, include the link in your comment.
* Don't use comments as an excuse for bad code. Keep your code clean.
* Don't use clean code as an excuse to not comment at all.
* Keep comments relevant as your code evolves.

3. Environments

* Define separate `development`, `test` and `production` environments if needed.

    _Why:_
    > Different data, tokens, APIs, ports etc... might be needed in different environments. You may want an isolated `development` mode that calls a fake API which returns predictable data, making both automated and manual testing much easier. Or you may want to enable Google Analytics only on `production` and so on. read more...

* Load your deployment-specific configurations from environment variables and never add them to the codebase as constants. Look at this sample.

    _Why:_
    > You have tokens, passwords and other valuable information in there. Your config should be correctly separated from the app internals as if the codebase could be made public at any moment.

    _How:_
    > Use `.env` files to store your variables and add them to `.gitignore` to be excluded. Instead, commit a `.env.example` which serves as a guide for developers. For production, you should still set your environment variables in the standard way. read more

* It’s recommended to validate environment variables before your app starts. Look at this sample using `joi` to validate provided values.

    _Why:_
    > It may save others from hours of troubleshooting.
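The linked sample uses `joi`; as a dependency-free sketch of the same fail-fast idea (the variable names `NODE_ENV` and `PORT` and the function name are illustrative assumptions, not part of the guide), startup validation could look like:

```javascript
// Validate required environment variables before the app starts.
// Throws with all problems at once, instead of failing mysteriously later.
function validateEnv(env) {
  const errors = [];
  if (!['development', 'test', 'production'].includes(env.NODE_ENV)) {
    errors.push('NODE_ENV must be development, test or production');
  }
  const port = Number(env.PORT);
  if (!Number.isInteger(port) || port <= 0 || port > 65535) {
    errors.push('PORT must be a valid TCP port number');
  }
  if (errors.length > 0) {
    // Fail fast: a clear message here saves hours of troubleshooting later.
    throw new Error('Invalid environment:\n' + errors.join('\n'));
  }
  return { nodeEnv: env.NODE_ENV, port };
}
```

Call it once at the top of your entry point, e.g. `validateEnv(process.env)`.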

3.1 Consistent dev environments:

* Set your node version in `engines` in `package.json`.

    _Why:_
    > It lets others know the version of node the project works on. read more...

* Additionally, use `nvm` and create a `.nvmrc` in your project root. Don't forget to mention it in the documentation.

    _Why:_
    > Anyone who uses `nvm` can simply use `nvm use` to switch to the suitable node version. read more...

* It's a good idea to set up a `preinstall` script that checks node and npm versions.

    _Why:_
    > Some dependencies may fail when installed by newer versions of npm.
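Such a `preinstall` check can be a small node script. This is a hypothetical sketch (the script path and the minimum version are assumptions; you would wire it up with `"preinstall": "node scripts/check-versions.js"` in `package.json`):

```javascript
// Fail the install early if the running node is older than the project requires.
const REQUIRED_NODE_MAJOR = 14; // illustrative minimum, adjust per project

function checkNodeVersion(version, requiredMajor) {
  // version looks like "v18.19.0"; compare only the major part
  const major = Number(version.replace(/^v/, '').split('.')[0]);
  return major >= requiredMajor;
}

if (!checkNodeVersion(process.version, REQUIRED_NODE_MAJOR)) {
  console.error(`node >= ${REQUIRED_NODE_MAJOR} is required, found ${process.version}`);
  process.exit(1);
}
```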

* Use a Docker image if you can.

    _Why:_
    > It can give you a consistent environment across the entire workflow, without much need to fiddle with dependencies or configs. read more...

* Use local modules instead of globally installed modules.

    _Why:_
    > It lets you share your tooling with your colleagues instead of expecting them to have it globally on their systems.

3.2 Consistent dependencies:

* Make sure your team members get the exact same dependencies as you.

    _Why:_
    > Because you want the code to behave as expected and identically on any development machine. read more...

    _How:_
    > Use `package-lock.json` on `npm@5` or higher.

    _I don't have `npm@5`:_
    > Alternatively you can use `Yarn` and make sure to mention it in `README.md`. Your lock file and `package.json` should have the same versions after each dependency update. read more...

    _I don't like the name `Yarn`:_
    > Too bad. For older versions of `npm`, use `--save --save-exact` when installing a new dependency and create `npm-shrinkwrap.json` before publishing. read more...

4. Dependencies

* Keep track of your currently available packages: e.g., `npm ls --depth=0`. read more...

* See if any of your packages have become unused or irrelevant: `depcheck`. read more...

    _Why:_
    > You may include an unused library in your code and increase the production bundle size. Find unused dependencies and get rid of them.

* Before using a dependency, check its download statistics to see if it is heavily used by the community: `npm-stat`. read more...

    _Why:_
    > More usage mostly means more contributors, which usually means better maintenance, and all of these result in quickly discovered bugs and quickly developed fixes.

* Before using a dependency, check to see if it has a good, mature version release frequency with a large number of maintainers: e.g., `npm view async`. read more...

    _Why:_
    > Having loads of contributors won't be as effective if maintainers don't merge fixes and patches quickly enough.

* If a less known dependency is needed, discuss it with the team before using it.

* Always make sure your app works with the latest version of its dependencies without breaking: `npm outdated`. read more...

    _Why:_
    > Dependency updates sometimes contain breaking changes. Always check their release notes when updates show up. Update your dependencies one by one; that makes troubleshooting easier if anything goes wrong. Use a cool tool such as npm-check-updates.

* Check to see if the package has known security vulnerabilities with, e.g., Snyk.

5. Testing

* Have a `test` mode environment if needed.

    _Why:_
    > While sometimes end to end testing in `production` mode might seem enough, there are some exceptions. One example is that you may not want to enable analytical information in `production` mode and pollute someone's dashboard with test data. The other example is that your API may have rate limits in `production` and block your test calls after a certain number of requests.
* Place your test files next to the tested modules using a `*.test.js` or `*.spec.js` naming convention, like `moduleName.spec.js`.

    _Why:_
    > You don't want to dig through a folder structure to find a unit test. read more...

* Put your additional test files into a separate test folder to avoid confusion.

    _Why:_
    > Some test files don't particularly relate to any specific implementation file. Put them in the folder that is most likely to be found by other developers: the `__test__` folder. This name, `__test__`, is also standard now and gets picked up by most JavaScript testing frameworks.

* Write testable code, avoid side effects, extract side effects, write pure functions.

    _Why:_
    > You want to test business logic as separate units. You have to "minimize the impact of randomness and nondeterministic processes on the reliability of your code". read more...

    > A pure function is a function that always returns the same output for the same input. Conversely, an impure function is one that may have side effects or depend on conditions from the outside to produce a value. That makes it less predictable. read more...
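To make the distinction concrete, here is a small illustrative pair (the cart shape and function names are invented for the example):

```javascript
// Impure: mutates its input and depends on the current time,
// so the same call can produce different results.
function addItemImpure(cart, item) {
  cart.items.push({ ...item, addedAt: Date.now() });
  return cart;
}

// Pure: same input always yields the same output, no mutation.
// The nondeterministic part (the timestamp) is passed in by the caller.
function addItem(cart, item, addedAt) {
  return { ...cart, items: [...cart.items, { ...item, addedAt }] };
}
```

The pure version is trivial to unit test: no setup, no mocking of clocks, just inputs and an expected output.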

* Use a static type checker.

    _Why:_
    > Sometimes you may need a static type checker. It brings a certain level of reliability to your code. read more...

* Run tests locally before making any pull requests to `develop`.

    _Why:_
    > You don't want to be the one who caused the production-ready branch build to fail. Run your tests after your `rebase` and before pushing your feature branch to a remote repository.

* Document your tests, including instructions, in the relevant section of your `README.md` file.

    _Why:_
    > It's a handy note you leave behind for other developers, DevOps experts, QA, or anyone who gets lucky enough to work on your code.

6. Structure and Naming

* Organize your files around product features / pages / components, not roles. Also, place your test files next to their implementation.

    **Bad**

    ```
    .
    ├── controllers
    |   ├── product.js
    |   └── user.js
    ├── models
    |   ├── product.js
    |   └── user.js
    ```

    **Good**

    ```
    .
    ├── product
    |   ├── index.js
    |   ├── product.js
    |   └── product.test.js
    ├── user
    |   ├── index.js
    |   ├── user.js
    |   └── user.test.js
    ```

    _Why:_
    > Instead of a long list of files, you will create small modules that encapsulate one responsibility including its test and so on. It gets much easier to navigate through and things can be found at a glance.
* Put your additional test files in a separate test folder to avoid confusion.

    _Why:_
    > It is a time saver for other developers or DevOps experts in your team.

* Use a `./config` folder and don't make different config files for different environments.

    _Why:_
    > When you break down a config file for different purposes (database, API and so on), putting them in a folder with a very recognizable name such as `config` makes sense. Just remember not to make different config files for different environments. It doesn't scale cleanly: as more deploys of the app are created, new environment names are necessary. Values to be used in config files should be provided by environment variables. read more...

* Put your scripts in a `./scripts` folder. This includes `bash` and `node` scripts.

    _Why:_
    > It's very likely you'll end up with more than one script: production build, development build, database feeders, database synchronization and so on.

* Place your build output in a `./build` folder. Add `build/` to `.gitignore`.

    _Why:_
    > Name it what you like; `dist` is also cool. But make sure to keep it consistent with your team. What gets in there is most likely generated (bundled, compiled, transpiled) or moved there. What you can generate, your teammates should be able to generate too, so there is no point committing it into your remote repository. Unless you specifically want to.

7. Code style

7.1 Some code style guidelines

* Use stage-2 and higher JavaScript (modern) syntax for new projects. For old projects, stay consistent with the existing syntax unless you intend to modernise the project.

    _Why:_
    > This is all up to you. We use transpilers to take advantage of new syntax. stage-2 is more likely to eventually become part of the spec with only minor revisions.

* Include code style checks in your build process.

    _Why:_
    > Breaking your build is one way of enforcing code style on your code. It prevents you from taking it less seriously. Do it for both client and server-side code. read more...

* Use ESLint to enforce code style.

    _Why:_
    > We simply prefer `eslint`; you don't have to. It has more rules supported, the ability to configure the rules, and the ability to add custom rules.

* Use Flow type style check rules for ESLint when using FlowType.

    _Why:_
    > Flow introduces a few syntaxes that also need to follow a certain code style and be checked.

* Use `.eslintignore` to exclude files or folders from code style checks.

    _Why:_
    > You don't have to pollute your code with `eslint-disable` comments whenever you need to exclude a couple of files from style checking.

* Remove any of your `eslint` disable comments before making a Pull Request.

    _Why:_
    > It's normal to disable style checks while working on a code block to focus more on the logic. Just remember to remove those `eslint-disable` comments and follow the rules.

* Depending on the size of the task, use `//TODO:` comments or open a ticket.

    _Why:_
    > So you can remind yourself and others about a small task (like refactoring a function or updating a comment). For larger tasks use `//TODO(#3456)`, which is enforced by a lint rule, where the number is an open ticket.

* Always comment and keep comments relevant as code changes. Remove commented-out blocks of code.

    _Why:_
    > Your code should be as readable as possible; you should get rid of anything distracting. If you refactored a function, don't just comment out the old one, remove it.

* Avoid irrelevant or funny comments, logs or naming.

    _Why:_
    > While your build process may (should) get rid of them, sometimes your source code may get handed over to another company/client and they may not share the same banter.

* Make your names searchable with meaningful distinctions; avoid shortened names. For functions use long, descriptive names. A function name should be a verb or a verb phrase, and it needs to communicate its intention.

    _Why:_
    > It makes it more natural to read the source code.

* Organize your functions in a file according to the step-down rule. Higher level functions should be on top and lower levels below.

    _Why:_
    > It makes it more natural to read the source code.

7.2 Enforcing code style standards

* Use a `.editorconfig` file, which helps developers define and maintain consistent coding styles between different editors and IDEs on the project.

    _Why:_
    > The EditorConfig project consists of a file format for defining coding styles and a collection of text editor plugins that enable editors to read the file format and adhere to defined styles. EditorConfig files are easily readable and they work nicely with version control systems.

* Consider using Git hooks.

    _Why:_
    > Git hooks greatly increase a developer's productivity. Make changes, commit and push to staging or production environments without the fear of breaking builds. read more...

* Use Prettier with a precommit hook.

    _Why:_
    > While `prettier` itself can be very powerful, it's not very productive to run it simply as an npm task alone each time to format code. This is where `lint-staged` (and `husky`) come into play. Read more on configuring `lint-staged` here and on configuring `husky` here.

8. Logging

* Avoid client-side console logs in production.

    _Why:_
    > Even though your build process can (should) get rid of them, make sure your code style checker warns you about leftover console logs.

* Produce readable production logging. Ideally use logging libraries to be used in production mode (such as winston or node-bunyan).

    _Why:_
    > It makes your troubleshooting less unpleasant with colorization, timestamps, logging to a file in addition to the console, or even logging to a file that rotates daily. read more...
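As a rough sketch of what such libraries provide out of the box (winston and node-bunyan have their own, richer APIs; the factory and options below are invented for illustration):

```javascript
// Minimal leveled, timestamped logger emitting one JSON line per entry,
// which keeps production logs machine-parseable.
function createLogger(minLevel = 'info') {
  const levels = ['debug', 'info', 'warn', 'error'];
  const log = (level, message, meta = {}) => {
    if (levels.indexOf(level) < levels.indexOf(minLevel)) return null; // filtered out
    const entry = { level, message, timestamp: new Date().toISOString(), ...meta };
    console.log(JSON.stringify(entry));
    return entry;
  };
  return {
    debug: (m, meta) => log('debug', m, meta),
    info: (m, meta) => log('info', m, meta),
    warn: (m, meta) => log('warn', m, meta),
    error: (m, meta) => log('error', m, meta),
  };
}
```

A real library adds transports (files, rotation), colorized console output and more, which is exactly why it beats scattered `console.log` calls in production.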

9. API

9.1 API design

    _Why:_
    > Because we try to enforce development of sanely constructed RESTful interfaces, which team members and clients can consume simply and consistently.

    _Why:_
    > Lack of consistency and simplicity can massively increase integration and maintenance costs, which is why API design is included in this document.

* We mostly follow resource-oriented design. It has three main factors: resources, collections, and URLs.

    * A resource has data, gets nested, and there are methods that operate against it.
    * A group of resources is called a collection.
    * A URL identifies the online location of a resource or collection.

    _Why:_
    > This is a very well-known design to developers (your main API consumers). Apart from readability and ease of use, it allows us to write generic libraries and connectors without even knowing what the API is about.

* Use kebab-case for URLs.

* Use camelCase for parameters in the query string or resource fields.

* Use plural kebab-case for resource names in URLs.

* Always use a plural noun for naming a URL pointing to a collection: `/users`.

    _Why:_
    > Basically, it reads better and keeps URLs consistent. read more...

* In the source code, convert plurals to variables and properties with a List suffix.

    _Why:_
    > Plural is nice in the URL but in the source code, it’s just too subtle and error-prone.

* Always use a singular concept that starts with a collection and ends with an identifier:

    ```
    /students/245743
    /airports/kjfk
    ```

* Avoid URLs like this:

    ```
    GET /blogs/:blogId/posts/:postId/summary
    ```

    _Why:_
    > This is not pointing to a resource but to a property instead. You can pass the property as a parameter to trim your response.

* Keep verbs out of your resource URLs.

    _Why:_
    > Because if you use a verb for each resource operation, you will soon have a huge list of URLs and no consistent pattern, which makes it difficult for developers to learn. Plus, we use verbs for something else.

* Use verbs for non-resources. In this case, your API doesn't return any resources. Instead, you execute an operation and return the result. These are not CRUD (create, retrieve, update, and delete) operations:

    ```
    /translate?text=Hallo
    ```

    _Why:_
    > Because for CRUD we use HTTP methods on `resource` or `collection` URLs. The verbs we were talking about are actually `Controllers`. You usually don't develop many of these. read more...

* If the request body or response type is JSON, then please follow camelCase for JSON property names to maintain consistency.

    _Why:_
    > This is a JavaScript project guideline, where the programming language for generating and parsing JSON is assumed to be JavaScript.

* Even though a resource is a singular concept that is similar to an object instance or database record, you should not use your `table_name` for a resource name or `column_name` for a resource property name.

    _Why:_
    > Because your intention is to expose resources, not your database schema details.

* Again, only use nouns in your URL when naming your resources and don’t try to explain their functionality.

    _Why:_
    > Only use nouns in your resource URLs; avoid endpoints like `/addNewUser` or `/updateUser`. Also avoid sending resource operations as a parameter.

* Explain the CRUD functionalities using HTTP methods:

    _How:_
    > `GET`: To retrieve a representation of a resource.

    > `POST`: To create new resources and sub-resources.

    > `PUT`: To update existing resources.

    > `PATCH`: To update existing resources. It only updates the fields that were supplied, leaving the others alone.

    > `DELETE`: To delete existing resources.

* For nested resources, use the relation between them in the URL. For instance, using `id` to relate an employee to a company.

    _Why:_
    > This is a natural way to make resources explorable.

    _How:_
    > `GET /schools/2/students`, should get the list of all students from school 2.

    > `GET /schools/2/students/31`, should get the details of student 31, which belongs to school 2.

    > `DELETE /schools/2/students/31`, should delete student 31, which belongs to school 2.

    > `PUT /schools/2/students/31`, should update the info of student 31. Use PUT on a resource URL only, not a collection.

    > `POST /schools`, should create a new school and return the details of the new school created. Use POST on collection URLs.

* Use a simple ordinal number for a version with a `v` prefix (v1, v2). Move it all the way to the left in the URL so that it has the highest scope:

    ```
    http://api.domain.com/v1/schools/3/students
    ```

    _Why:_
    > When your APIs are public for other third parties, upgrading the APIs with some breaking change would also lead to breaking the existing products or services using your APIs. Using versions in your URL can prevent that from happening. read more...

* Response messages must be self-descriptive. A good error message response might look something like this:

    ```json
    {
        "code": 1234,
        "message": "Something bad happened",
        "description": "More details"
    }
    ```

    or for validation errors:

    ```json
    {
        "code": 2314,
        "message": "Validation Failed",
        "errors": [
            {
                "code": 1233,
                "field": "email",
                "message": "Invalid email"
            },
            {
                "code": 1234,
                "field": "password",
                "message": "No password provided"
            }
        ]
    }
    ```

    _Why:_
    > Developers depend on well-designed errors at the critical times when they are troubleshooting and resolving issues after the applications they've built using your APIs are in the hands of their users.

    _Note: Keep security exception messages as generic as possible. For instance, instead of saying ‘incorrect password’, you can reply back saying ‘invalid username or password’ so that we don’t unknowingly inform the user that the username was indeed correct and only the password was incorrect._
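A sketch of producing the shapes above in code (the helper names are invented for illustration; the numeric codes are the document's own examples):

```javascript
// Build the validation-error shape shown above.
function validationError(errors) {
  return { code: 2314, message: 'Validation Failed', errors };
}

// Security errors stay deliberately generic: never reveal whether the
// username or the password was the incorrect part.
function authenticationError() {
  return {
    code: 401,
    message: 'Authentication failed',
    description: 'Invalid username or password',
  };
}
```

Centralizing error construction like this keeps every endpoint's failures self-descriptive and consistent.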

* Use these status codes to send with your response to describe whether everything worked, the client app did something wrong, or the API did something wrong.

    _Which ones:_
    > `200 OK` response represents success for `GET`, `PUT` or `POST` requests.

    > `201 Created` for when a new instance is created. Creating a new instance using the `POST` method returns a `201` status code.

    > `204 No Content` response represents success but there is no content to be sent in the response. Use it when a `DELETE` operation succeeds.

    > `304 Not Modified` response is to minimize information transfer when the recipient already has cached representations.

    > `400 Bad Request` for when the request was not processed, as the server could not understand what the client is asking for.

    > `401 Unauthorized` for when the request lacks valid credentials and it should re-request with the required credentials.

    > `403 Forbidden` means the server understood the request but refuses to authorize it.

    > `404 Not Found` indicates that the requested resource was not found.

    > `500 Internal Server Error` indicates that the request is valid, but the server could not fulfill it due to some unexpected condition.

    _Why:_
    > Most API providers use a small subset of HTTP status codes. For example, the Google GData API uses only 10 status codes, Netflix uses 9, and Digg only 8. Of course, these responses contain a body with additional information. There are over 70 HTTP status codes. However, most developers don't have all 70 memorized. So if you choose status codes that are not very common you will force application developers away from building their apps and over to Wikipedia to figure out what you're trying to tell them. read more...

* Provide total numbers of resources in your response.

* Accept `limit` and `offset` parameters.

* The amount of data the resource exposes should also be taken into account. The API consumer doesn't always need the full representation of a resource. Use a fields query parameter that takes a comma separated list of fields to include:

    ```
    GET /student?fields=id,name,age,class
    ```
* Pagination, filtering, and sorting don’t need to be supported from the start for all resources. Document those resources that offer filtering and sorting.
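The `fields` parameter above can be honored with a simple projection. This is an illustrative sketch (the function name and resource shape are assumptions, not a library API):

```javascript
// Project a resource down to only the fields the client asked for,
// e.g. fieldsParam = "id,name" from GET /student?fields=id,name.
function pickFields(resource, fieldsParam) {
  if (!fieldsParam) return { ...resource }; // no param: full representation
  const wanted = fieldsParam.split(',').map((f) => f.trim());
  const projected = {};
  for (const field of wanted) {
    if (field in resource) projected[field] = resource[field];
  }
  return projected;
}
```

Unknown field names are silently dropped here; an API could equally reply `400 Bad Request` for them, matching the validation advice in section 9.2.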

9.2 API security

These are some basic security best practices:

* Don't use basic authentication unless over a secure connection (HTTPS). Authentication tokens must not be transmitted in the URL: `GET /users/123?token=asdf....`

    _Why:_
    > Because the token, or user ID and password, is passed over the network as clear text (it is base64 encoded, but base64 is a reversible encoding), the basic authentication scheme is not secure. read more...

* Tokens must be transmitted using the Authorization header on every request: `Authorization: Bearer xxxxxx, Extra yyyyy`.

* Authorization Codes should be short-lived.

* Reject any non-TLS requests by not responding to any HTTP request, to avoid any insecure data exchange. Or respond to HTTP requests with `403 Forbidden`.

* Consider using Rate Limiting.

    _Why:_
    > To protect your APIs from bot threats that call your API thousands of times per hour. You should consider implementing rate limiting early on.

* Setting HTTP headers appropriately can help to lock down and secure your web application. read more...

* Your API should convert the received data to its canonical form or reject it. Return `400 Bad Request` with details about any errors from bad or missing data.

* All the data exchanged with the REST API must be validated by the API.

* Serialize your JSON.

    _Why:_
    > A key concern with JSON encoders is preventing arbitrary JavaScript remote code execution within the browser... or, if you're using node.js, on the server. It's vital that you use a proper JSON serializer to encode user-supplied data properly to prevent the execution of user-supplied input in the browser.
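A small illustration of the difference between a proper serializer and hand-built "JSON" (the injection payload and function names are contrived for the example):

```javascript
// Safe: JSON.stringify escapes quotes and other special characters,
// so user-supplied data cannot break out of its string value.
function serializeUser(user) {
  return JSON.stringify(user);
}

// BAD: concatenating user input into a JSON-looking string leaves it
// unescaped, so a crafted name can inject extra keys into the output.
function naiveSerialize(user) {
  return `{"name": "${user.name}"}`;
}
```

With a name like `Alice", "admin": "true`, the naive version emits a document containing an `admin` key the user never should have been able to set, while the proper serializer round-trips the name intact.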

* Validate the content-type and mostly use `application/*json` (Content-Type header).

    _Why:_
    > For instance, accepting the `application/x-www-form-urlencoded` mime type allows the attacker to create a form and trigger a simple POST request. The server should never assume the Content-Type. A lack of a Content-Type header or an unexpected Content-Type header should result in the server rejecting the content with a `4XX` response.

9.3 API documentation

* Fill the `API Reference` section in the README.md template for APIs.
* Describe API authentication methods with a code sample.
* Explain the URL structure (path only, no root URL) including the request type (method).

For each endpoint explain:

* URL params. If URL params exist, specify them in accordance with the name mentioned in the URL section:

    ```
    Required: id=[integer]
    Optional: photo_id=[alphanumeric]
    ```

* If the request type is POST, provide working examples. URL params rules apply here too. Separate the section into Optional and Required.

* Success Response. What should the status code be and is there any return data? This is useful when people need to know what their callbacks should expect:

    ```
    Code: 200
    Content: { id : 12 }
    ```

* Error Response. Most endpoints have many ways to fail, from unauthorized access to wrongful parameters etc. All of those should be listed here. It might seem repetitive, but it helps prevent assumptions from being made. For example:

    ```json
    {
        "code": 401,
        "message": "Authentication failed",
        "description": "Invalid username or password"
    }
    ```

* Use API design tools. There are lots of open source tools for good documentation, such as API Blueprint and Swagger.

10. Licensing

Make sure you use resources that you have the rights to use. If you use libraries, remember to look for MIT, Apache or BSD licenses, but if you modify them, then take a look at the license details. Copyrighted images and videos may cause legal problems.


Sources: RisingStack Engineering, Mozilla Developer Network, Heroku Dev Center, Airbnb/javascript, Atlassian Git tutorials, Apigee, Wishtack

Icons by icons8
