Samples and resources of how to design WebApi with .NET Core
Feel free to create an issue if you have any questions or requests for more explanation or samples. I also accept Pull Requests!
💖 If this repository helped you - I'd be more than happy if you join the group of my official supporters at:
From the documentation: "Routing is responsible for matching incoming HTTP requests and dispatching those requests to the app's executable endpoints."
In other words, routing is responsible for finding the exact endpoint based on the request parameters - usually through URL pattern matching. The endpoint executes the logic that creates an HTTP response based on the request.
To use routing and endpoints, you need to call the `UseRouting` and `UseEndpoints` extension methods on the app builder in the `Startup.Configure` method. That will register routing in the middleware pipeline.

Note that those methods should be registered in the order presented above. If the order is changed, then routing won't be registered properly.
Templates add flexibility to the supported URL definitions.
The simplest option is a static URL, where you have just the URL, eg.:
- `/Reservations/List`
- `/GetUsers`
- `/Orders/ByStatuses/Closed`
Static URLs are fine for list endpoints, but not when we'd like to get a single record. To allow dynamic matching (eg. a reservation by id) we need to use parameters. They can be added using the `{parameterName}` syntax, eg.:
- `/Reservations/{id}`
- `/users/{id}/orders/{orderId}`
Parameters don't need to replace a whole URL segment. You can also do eg.:
- `/Reservations?status={reservationStatus}&user={userId}` - this will get parameters from the query string and match eg. `/Reservations?status=Open&userId=123`, with the `status` parameter equal to `Open` and `userId` equal to `123`,
- `/Download/{fileName}.{extension}` - this will match eg. `/Download/testFile.txt` and end up with two route data parameters - `fileName` with the value `testFile` and `extension` with `txt` accordingly,
- `/Configuration/{entityType}Dictionary` - this will match `/Configuration/OrderStatusDictionary` and will have the `entityType` parameter with the value `OrderStatus`.
You can also add catch-all parameters - `{**parameterName}` - that can be used as a fallback when no route was found:
- `/Reservations/{id}/{**reservationPath}` - this will match eg. `/Reservations/123/changeStatus/confirmed` and will have the `reservationPath` parameter with the value `changeStatus/confirmed`.
It's also possible to make a parameter optional by adding `?` after its name:
- `/Reservations/{id?}` - this will match both the `/Reservations` and `/Reservations/123` routes.
Route template parameters can contain constraints to narrow down the matched results. To use one, you need to add the constraint name after the parameter name: `{parameter:constraintName}`. There is a number of predefined route constraints, eg.:
- `/Reservations/{id:guid}` - will match eg. `/Reservations/632863d2-5cbf-4c9f-92e1-749d264d965e` but won't match eg. `/Reservations/123`,
- `/Reservations/top/{limit:int:min(1):max(10)}` - this will only allow integer values between `1` and `10` for the `limit` parameter, so it will allow getting at most the top 10 reservations,
- `/Inbox?from={fromEmailAddress:regex([A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4})}` - a regex can also be used to eg. check an email address or provide a more advanced format check. This will match `/Inbox?from=[email protected]` and will have the `fromEmailAddress` parameter with the value `[email protected]`.

See more constraint examples in the route constraint documentation.
Note - a failing constraint will result in a `400 - Bad Request` status code. However, the messages are generic and not user friendly, so if you'd like to make them more related to your business case, it's suggested to move such checks into validation inside the code.
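For example, a request failing the `guid` constraint above would be rejected by routing before reaching any action (the response shown is illustrative - the exact body depends on the framework version and error handling setup):

```http
GET http://example.org/Reservations/123

400 Bad Request
```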
You can also define your own custom constraint. A sample use case would be providing validation for your business id format.
See the sample below that validates whether a reservation id is built from 3 non-empty parts split by `|`:
```csharp
public class ReservationIdConstraint : IRouteConstraint
{
    public bool Match(
        HttpContext httpContext,
        IRouter route,
        string routeKey,
        RouteValueDictionary values,
        RouteDirection routeDirection)
    {
        if (routeKey == null)
        {
            throw new ArgumentNullException(nameof(routeKey));
        }

        if (values == null)
        {
            throw new ArgumentNullException(nameof(values));
        }

        if (!values.TryGetValue(routeKey, out var value) || value == null)
        {
            return false;
        }

        var reservationId = Convert.ToString(value, CultureInfo.InvariantCulture);

        return reservationId
            .Split("|")
            .Where(part => !string.IsNullOrWhiteSpace(part))
            .Count() == 3;
    }
}
```
You need to register it in `Startup.ConfigureServices` via the `AddRouting` method:
```csharp
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // registers controllers in dependency injection container
        services.AddControllers();

        services.AddRouting(options =>
        {
            options.ConstraintMap.Add("reservationId", typeof(ReservationIdConstraint));
        });
    }

    // (...)
}
```
Then you can use it in a route:
- `/Reservations/{id:reservationId}` - this will match `/Reservations/RES|123|01` (and get the `id` parameter with the value `RES|123|01`) but won't match `/Reservations/123`.
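To see the format check in isolation, the splitting logic from `Match` can be sketched as a standalone snippet (`IsValidReservationId` is a hypothetical helper, not part of the framework):

```csharp
using System;
using System.Linq;

// hypothetical helper mirroring the constraint's format check:
// a valid reservation id consists of 3 non-empty parts split by `|`
static bool IsValidReservationId(string reservationId) =>
    reservationId
        .Split("|")
        .Count(part => !string.IsNullOrWhiteSpace(part)) == 3;

Console.WriteLine(IsValidReservationId("RES|123|01")); // True
Console.WriteLine(IsValidReservationId("123"));        // False
Console.WriteLine(IsValidReservationId("RES||01"));    // False
```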
Routing is split into the following steps:
- parse the request URL,
- perform matching against the registered routes (it's done in parallel, so the order of registration doesn't matter),
- from the matching routes, remove all that do not satisfy the route constraints (eg. a route parameter defined as `int` was not numeric),
- from the remaining routes, select the single best match (the most concrete one) if possible. If there is still more than one match, an exception is thrown. If there was only a single match, but a value does not match the constraint, an exception will also be thrown.
Having eg. the following routes:
- `/Clients/List`
- `/Clients/{id}`
- `/Reservations/{id:alpha}`
- `/Reservations/{id:int}`
- `/Reservations/List`
and trying to match `/Reservations/List`, the routing process will find the following matching templates:
- `/Reservations/{id:alpha}`
- `/Reservations/{id:int}`
- `/Reservations/List`

It matched the `Reservations` part and then both `{id}` routes (as `List` could be just a string id) and the concrete `List` part.
Then the constraints are verified and we end up with two routes (as `{id:int}` does not match, because `List` is not an integer):
- `/Reservations/{id:alpha}`
- `/Reservations/List`

From this set both are matching, but `List` is more concrete.
Accordingly:
- trying to match `/Reservations/abcde`, routing will match the `/Reservations/{id:alpha}` route,
- trying to match `/Reservations/123`, routing will match the `/Reservations/{id:int}` route.
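As a rough illustration (a simplification, not how ASP.NET Core actually implements constraint matching), the `int` and `alpha` checks from the walkthrough boil down to:

```csharp
using System;
using System.Linq;

// simplified stand-ins for the built-in `int` and `alpha` route constraints
static bool MatchesIntConstraint(string value) => int.TryParse(value, out _);

static bool MatchesAlphaConstraint(string value) =>
    value.Length > 0 && value.All(char.IsLetter);

Console.WriteLine(MatchesIntConstraint("List"));   // False - `List` is not numeric
Console.WriteLine(MatchesAlphaConstraint("List")); // True - `List` is a valid alpha id
Console.WriteLine(MatchesIntConstraint("123"));    // True
```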
ASP.NET Core allows defining raw endpoints without the need to use controllers. They can be defined inside the `UseEndpoints` method, by calling the `MapGet`, `MapPost` etc. methods:
```csharp
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // registers routing in middleware pipeline
        app.UseRouting();

        // defines endpoints to be routed
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapGet("/Reservations/{id}", async context =>
            {
                var id = context.Request.RouteValues["id"];
                await context.Response.WriteAsync($"Reservation with {id}!");
            });
        });
    }
}
```
Using endpoints currently requires a lot of bare-bones code. This will change with .NET 5, where endpoint routing will get a set of useful methods making it a first-class citizen. See more in the accepted API review: link.
HTTP requests can be mapped to controllers in two ways: conventionally or through attributes.
Conventional mapping is done by calling the `MapControllerRoute` method inside `UseEndpoints`. It allows providing a route template (`pattern`), a name, and the controller action mapping.
```csharp
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // registers routing in middleware pipeline
        app.UseRouting();

        // defines endpoints to be routed
        app.UseEndpoints(endpoints =>
        {
            // defines concrete routing to single controller action
            endpoints.MapControllerRoute(name: "blog",
                pattern: "Reservations/{id}",
                defaults: new { controller = "Reservations", action = "Get" });

            // defines "catch-all" routing that will route all requests
            // matching `/Controller/Action` or `/Controller/Action/id`
            endpoints.MapControllerRoute(name: "default",
                pattern: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}
```
An important thing to note is that controllers should have the `Controller` suffix in the name (eg. `ReservationsController`), but routes should be defined without it (so `Reservations`).
Controllers are derived from the MVC pattern concept. They are responsible for orchestration between requests (inputs) and models. Routing can be defined by putting attributes on top of method and controller definition.
If you want to use controllers, then you should also call `AddControllers` in `ConfigureServices` (to register them in the dependency injection container) and `MapControllers` inside `UseEndpoints` to map the controllers' route configuration.
```csharp
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // registers controllers in dependency injection container
        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // registers routing in middleware pipeline
        app.UseRouting();

        // defines endpoints to be routed
        app.UseEndpoints(endpoints =>
        {
            // maps controllers routes to endpoints
            endpoints.MapControllers();
        });
    }
}
```
Route attribute
The most generic attribute is `[Route]`. It defines the route that will direct to the method it's marking.
```csharp
public class ReservationsController : Controller
{
    [Route("")]
    [Route("Reservations")]
    [Route("Reservations/List")]
    [Route("Reservations/List/{status?}")]
    public IActionResult List(string status)
    {
        //(...)
    }

    [Route("Reservations/Summary")]
    [Route("Reservations/Summary/{userId?}")]
    public IActionResult Summary(int? userId)
    {
        // (...)
    }
}
```
In this example, the routes:
- `/`, `/Reservations`, `/Reservations/List`, `/Reservations/List/Open` will be routed to the `List` method,
- `/Reservations/Summary`, `/Reservations/Summary/123` will be routed to the `Summary` method.
An important note is that you should not use `action`, `area`, `controller`, `handler` or `page` as route template variables (eg. `/Reservations/{page}`). Those names are reserved for the internals of the routing logic. Using them will make routing fail.
HTTP methods attributes
ASP.NET Core also provides more specific attributes representing the HTTP methods: `[HttpGet]`, `[HttpPost]`, `[HttpPut]`, `[HttpDelete]`, `[HttpHead]`, `[HttpPatch]`. Besides the URL routing, they also perform matching based on the HTTP method. Normally, when using them, you should add a `[Route]` attribute on the controller that will add a prefix to all the routes defined by the HTTP verb attributes.

Sample of the most common CRUD controller definition:
```csharp
[Route("api/[controller]")]
[ApiController]
public class ReservationsController : ControllerBase
{
    [HttpGet]
    public IActionResult List([FromQuery] string filter)
    {
        //(...)
    }

    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        // (...)
    }

    [HttpPost]
    public IActionResult Create([FromBody] CreateReservation request)
    {
        // (...)
    }

    [HttpPut("{id}")]
    public IActionResult Put(int id, [FromBody] UpdateReservation request)
    {
        // (...)
    }

    [HttpDelete("{id}")]
    public IActionResult Delete(int id)
    {
        // (...)
    }
}
```
Using `[Route("api/[controller]")]` will define the route based on the controller name - in this case it will be `/api/Reservations`. By convention, WebApi routes usually start with an `/api` prefix, but the prefix is optional and can have a different value. If you'd like, you could also add a suffix, eg. `[Route("api/[controller]/open")]`, if eg. you'd like to have a dedicated controller for open reservations. The benefit of using `[controller]` is that when you rename the controller, the route will also be updated. If you want to avoid accidental route name changes, then you should use a concrete route, eg. `[Route("api/reservations")]`.
Having that:
- `GET /api/Reservations` will be routed to the `List` method. The value for the `filter` parameter, because of the `[FromQuery]` attribute, will be mapped from the request query string. For `GET /api/Reservations?filter=open` it will have the value `open`; for the default route `GET /api/Reservations` it will be `null`,
- `GET /api/Reservations/123` will be routed to the `Get` method. The value of the `id` parameter will be taken, by convention, from the route parameter,
- `POST /api/Reservations` will be routed to the `Create` method. The value for the `request` parameter, because of the `[FromBody]` attribute, will be mapped from the request body (so eg. JSON sent from the client),
- `PUT /api/Reservations/123` will be routed to the `Put` method,
- `DELETE /api/Reservations/123` will be routed to the `Delete` method.
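To make the mapping tangible, here are a few illustrative requests against such a controller (the host and the payload fields are hypothetical):

```http
GET http://example.org/api/Reservations?filter=open

POST http://example.org/api/Reservations
Content-type: application/json

{ "seatId": "..." }

DELETE http://example.org/api/Reservations/123
```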
It's not mandatory to use a route prefix. Most of the time it's useful, but when you have nesting inside the API, it's worth setting routes up manually, eg.:
```csharp
[ApiController]
public class UserReservationsController : ControllerBase
{
    [HttpGet("api/users/{userId}/reservations")]
    public IActionResult List(int userId, [FromQuery] string filter)
    {
        //(...)
    }

    [HttpGet("api/users/{userId}/reservations/{id}")]
    public IActionResult Get(int userId, int id)
    {
        // (...)
    }

    [HttpPost("api/users/{userId}/reservations/{id}")]
    public IActionResult Create(int userId, [FromBody] CreateReservation request)
    {
        // (...)
    }

    [HttpPut("api/users/{userId}/reservations/{id}/status")]
    public IActionResult Put(int userId, int id, [FromBody] UpdateReservationStatus request)
    {
        // (...)
    }
}
```
Let's go back in time. In 2000, Roy Fielding wrote a doctoral dissertation titled "Architectural Styles and the Design of Network-based Software Architectures". This dissertation gave rise to "REpresentational State Transfer" - REST. Roy created REST as an architectural style based on the principles that make the Internet so successful. The World Wide Web runs on HTTP, which has a number of conventions that provide the basis for scalability, fault tolerance, and loose coupling. REST and HTTP are not the same thing, but REST fully embraces HTTP. It means that it uses verbs, status codes, headers, and resources identified by URIs in order to fulfill the constraints that together compose the so-called RESTful style. What are those constraints?
REST, like any other architectural style, describes constraints that, composed together, define the basis of the RESTful style.
This constraint mainly specifies that there's a distinction between a client and a server. This separation allows the components to evolve independently, thus improving portability and scalability.
Each request must have all the information necessary for its correct completion. It means that all the state that's contained for a given web request is contained within the request itself as a part of the URI, query string parameters, body, or headers. Since there is no session related dependency, each server can handle any request thus API can be easily scaled. Removing all server-side state synchronization logic also makes REST APIs less complex.
The server should label what data within a response to a request can be cached and what cannot. If a response can be cached, then a client cache is given the right to reuse that response data for later, equivalent requests. Following this constraint gives the potential to partially or completely eliminate some interactions, thus improving performance and scalability and also decreasing latency.
The client can make a request and the response could come from a web server, a load balancer, a cache, etc. For the client, it doesn't really matter where the data is coming from as long as it gets the requested information. In other words, before the server completes the response, it can perform additional operations that the client does not need to know.
This is the only optional constraint. Most of the time, the server will be sending static representations of resources in the form of XML or JSON, but on demand it can send additional code (eg. JavaScript) that can be executed on the client side. This simplifies clients by reducing the number of features required to be pre-implemented.
The server should provide an API that will be well understood by all applications communicating with it. By designing one interface, we should respond to the needs of all applications that use it. In order to obtain such a uniform interface, four additional constraints must be met.
On the basis of a single request, the server can identify the resource it concerns. For that purpose, most often the Uniform Resource Identifier - URI - is used. It distinguishes a resource from any other, and through it the interaction with that resource takes place. In the example below, we have an address pointing at a specific employee with id 123. This address is the URI, which is the identifier, and the returned employee is the resource.
```http
GET http://example.org/employees/123
```
```http
200 OK

{ "employeeId": 123, "firstName": "John", "lastName": "Doe" }
```
The server can return the response in various formats (media types) like HTML, XML, JSON etc. That format is the representation of the identified resource that the client can understand and manipulate. It is possible for the client to request a specific representation that fits its needs. This is accomplished via the Accept header.
```http
GET http://example.org/employees/123
Accept: application/xml
```
```http
200 OK

<employee>
  <employeeId>123</employeeId>
  <firstName>John</firstName>
  <lastName>Doe</lastName>
</employee>
```

Clients are also allowed to indicate their preferred representation when sending data to the server. This is accomplished via the Content-type header. The server response should not be affected by the chosen format.
```http
POST http://example.org/employees
Content-type: application/json

{ "firstName": "John", "lastName": "Doe" }
```
```http
201 Created
Location: http://example.org/employees/123
```
A message, which is a request or a response, is considered self-descriptive when it contains all the information necessary to complete the task. In other words, it should contain all the information that the recipient needs to understand it. Below is an example of a self-descriptive message. It contains information about the protocol, the host, which type of action needs to be performed (HTTP method), and the desired resource representation to be returned (Accept header). Such a message will be well understood by the server.
```http
GET /employees/123 HTTP/1.1
Host: example.org
Accept: application/json
```

The server can respond accordingly. That message is also self-descriptive: it tells the client that the operation was successful by returning the appropriate status code, and it tells how to interpret the message body by specifying the Content-Type header.
```http
HTTP/1.1 200 OK
Content-Type: application/json

{ "employeeId": 123, "firstName": "John", "lastName": "Doe" }
```
Together, the first three uniform interface constraints imply the fourth. It can be summarised as: sending self-descriptive messages to uniquely identified resources, using representations, changes the state of the application. This constraint allows comparing a RESTful API to a website. As a website is a collection of links leading to subsequent subpages, HATEOAS says that the same can be done with an API. Think of it as a situation at an office when you want to start a new business. You can't just go there and "POST" a new company. You must submit an application for creating a new company, and then you will receive an answer like: "Thank you for submitting an application. Here are the next possible steps that you can perform: cancellation of the application, address change, financing".
```http
POST http://example.org/companies

{
  "name": "NewOne",
  "address": "Example 5",
  "owner": { "firstName": "John", "lastName": "Doe" }
}
```
```http
HTTP/1.1 201 Created

{
  "companyId": 1234,
  "name": "NewOne",
  "address": "Example 5",
  "owner": { "firstName": "John", "lastName": "Doe" },
  "_links": {
    "self": { "href": "http://example.org/companies/1234", "method": "GET" },
    "cancellation": { "href": "http://example.org/companies/1234", "method": "DELETE" }
  }
}
```
By default in .NET Core there are six levels of logging (available through the `LogLevel` enum):
- `Trace` (value `0`) - the most detailed and verbose information about the application flow,
- `Debug` (`1`) - useful information during the development process (eg. local environment bug investigation),
- `Information` (`2`) - usually important information about the application flow that can be useful for diagnostics and tracing the flow,
- `Warning` (`3`) - a potential unexpected application event or an error that's not blocking the flow (eg. the operation was successfully saved to the database but the notification failed, or a transient error occurred but succeeded after a retry),
- `Error` (`4`) - an unexpected application error - eg. no record found to update, database timeout, argument exception etc.,
- `Critical` (`5`) - critical events that require immediate action, like an application or system crash, running out of disk space, or a database in an irrecoverable state,
- `None` (`6`) - no logs at all; usually used in the configuration to disable logging for a selected category.
It's important to keep in mind that `Trace` and `Debug` should not be used in production; they should be used only for development/debugging purposes (`Trace` is disabled by default). Because of their nature, to be useful they may contain sensitive application information (eg. system secrets, PII/GDPR data). Because of that, we need to be sure that they are disabled in the production environment, as otherwise this may end up as a security leak. As they're also verbose, keeping them on in a production system may significantly increase the cost of log storage. Plus, too many logs make the output noisy and hard to read.
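One common safeguard is an environment-specific configuration file. Here's a minimal sketch, assuming a hypothetical `appsettings.Production.json`, that raises the minimum level so `Trace` and `Debug` entries are never written in production:

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  }
}
```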
Each logger instance needs to have an assigned category. Categories allow grouping log messages (as the category will be added to each log entry). By convention, the category should be passed as the type parameter of `ILogger<T>`. Usually it's the class into which we're injecting the logger, eg.:
```csharp
[Route("api/Reservations")]
public class ReservationsController : Controller
{
    private readonly ILogger logger;

    public ReservationsController(ILogger<ReservationsController> logger)
    {
        this.logger = logger;
    }

    [HttpPost]
    public async Task<IActionResult> Create([FromBody] CreateReservationRequest request)
    {
        var reservationId = Guid.NewGuid();

        // (...)

        logger.LogInformation("Created reservation with {ReservationId}", reservationId);

        return Created("api/Reservations", reservationId);
    }
}
```
A log category created with the type parameter will contain the full type name (so eg. `LoggingSamples.Controllers.ReservationController`).
It's also possible (however not recommended) to define it through the `ILoggerFactory.CreateLogger(string categoryName)` method:
```csharp
[Route("api/Reservations")]
public class ReservationsController : Controller
{
    private readonly ILogger logger;

    public ReservationsController(ILoggerFactory loggerFactory)
    {
        this.logger = loggerFactory.CreateLogger("LoggingSamples.Controllers.ReservationController");
    }
}
```
Categories are useful for searching through logs and diagnosing issues. As mentioned in the previous section, it's also possible to define different log levels per category in the configuration.
Eg. if you have a default log level of `Information` and you need to investigate issues occurring in a specific controller (eg. `ReservationsController`), then you can change the log level to `Debug` for that dedicated category.
```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "LoggingSamples.Controllers.ReservationController": "Debug"
    }
  }
}
```
Then, for all categories but `LoggingSamples.Controllers.ReservationController`, you'll have logs for `Information` and above (`Information`, `Warning`, `Error`, `Critical`), and for `LoggingSamples.Controllers.ReservationController` also `Debug`.
The other example is disabling logs from a selected category, eg. because:
- you noticed that it's logging some sensitive information and you need to change that quickly,
- you want to mute some unimportant system logs,
- you want to make sure that logs from a specific category (eg. `LoggingSamples.Controllers.AuthenticationController`) won't ever be logged in production.
```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "LoggingSamples.Controllers.AuthenticationController": "None"
    }
  }
}
```
Besides categories, it's possible to define logging scopes. They allow adding a set of custom information to each log entry.
Scopes are disabled by default - if you'd like to use them then you need to toggle them on in configuration:
```json
{
  "Logging": {
    "IncludeScopes": true,
    "LogLevel": {
      "Default": "Information"
    }
  }
}
```
Having that, you can use the `ILogger.BeginScope` method to define one or more logging scopes.
The first potential use case is to always add the entity type and identifier to all logs in the business logic, so you don't need to add them to each entry - eg. the reservation id during its update. You can also create nested scopes.
```csharp
[HttpPut]
public async Task<IActionResult> Update(Guid id, [FromBody] UpdateReservationRequest request)
{
    using (logger.BeginScope("For {EntityType}", "Reservation"))
    {
        using (logger.BeginScope("With {EntityId}", id))
        {
            logger.LogInformation("Starting reservation update process for {request}", request);

            // (...)
        }
    }

    return Ok();
}
```
You can also create scopes in an aspect-oriented way - eg. in a middleware, to inject scopes globally. An example would be injecting request information, eg. the client IP or user id, as a logging scope.
The sample below shows how to inject a CorrelationId into the logger scope.
```csharp
public class CorrelationIdMiddleware
{
    private readonly RequestDelegate next;
    private readonly ILogger logger;

    public CorrelationIdMiddleware(RequestDelegate next, ILoggerFactory loggerFactory)
    {
        this.next = next;
        logger = loggerFactory.CreateLogger<CorrelationIdMiddleware>();
    }

    public async Task Invoke(HttpContext context /* other scoped dependencies */)
    {
        var correlationId = Guid.NewGuid();

        using (logger.BeginScope("CorrelationID: {CorrelationID}", correlationId))
        {
            await next(context);
        }
    }
}
```
The other option for grouping logs is log events. They are normally used to group logs by purpose - eg. updating an entity, starting a controller action, not finding an entity etc. To define them, you need to provide a standardized list of int event ids. Eg.:
```csharp
public class LogEvents
{
    public const int InvalidRequest = 911;
    public const int ConflictState = 112;
    public const int EntityNotFound = 1000;
}
```
Sample usage:
```csharp
[HttpPut]
public IActionResult Update([FromBody] UpdateReservation request)
{
    logger.LogInformation("Initiating reservation creation for {seatId}", request?.SeatId);

    if (request?.SeatId == null || request?.SeatId == Guid.Empty)
    {
        logger.LogWarning(LogEvents.InvalidRequest, "Invalid {SeatId}", request?.SeatId);

        return BadRequest("Invalid SeatId");
    }

    if (request?.ReservationId == null || request?.ReservationId == Guid.Empty)
    {
        logger.LogWarning(LogEvents.InvalidRequest, "Invalid {ReservationId}", request?.ReservationId);

        return BadRequest("Invalid ReservationId");
    }

    // (...)

    return Created("api/Reservations", reservation.Id);
}
```
To set up the Docker configuration, you need to create a Dockerfile (usually it's located in the root project folder).
Docker allows defining a complete build and runtime setup. It also supports multistage builds: in the first stage you can use different tools for building the binaries, and in the next stage you just copy the prepared binaries and host them in the final image. Thanks to that, the final Docker image is smaller and more secure, as it doesn't contain eg. the source code and build tools.
Microsoft provides Docker images that can be used as a base for the Docker configuration. You can choose from various images, but usually you're using either:
- `mcr.microsoft.com/dotnet/core/sdk:3.1` - Debian based,
- `mcr.microsoft.com/dotnet/core/sdk:3.1-alpine` - Alpine based, trimmed to have only the basic tools preinstalled.

It's recommended to start with `alpine`, as it's much smaller, and use the regular image if you need a more advanced configuration that's lacking in Alpine. There are also Windows containers, but they're rarely used; for most cases, Linux based images will be the first choice.
See the example `DOCKERFILE`:
```dockerfile
########################################
# First stage of multistage build
########################################
# Use build image with label `builder`
########################################
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-alpine AS builder

# Setup working directory for project
WORKDIR /app

# Copy project files
COPY *.csproj ./

# Restore nuget packages
RUN dotnet restore

# Copy project files
COPY . ./

# Build project with Release configuration
# and no restore, as we did it already
RUN dotnet build -c Release --no-restore

# Test project with Release configuration
# and no build, as we did it already
#RUN dotnet test -c Release --no-build

# Publish project to output folder
# and no build, as we did it already
RUN dotnet publish -c Release --no-build -o out

########################################
# Second stage of multistage build
########################################
# Use other build image as the final one
# that won't have source codes
########################################
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-alpine

# Setup working directory for project
WORKDIR /app

# Copy published in previous stage binaries
# from the `builder` image
COPY --from=builder /app/out .

# Set URL that App will be exposed
ENV ASPNETCORE_URLS="http://*:5000"

# sets entry point command to automatically
# run application on `docker run`
ENTRYPOINT ["dotnet", "DockerContainerRegistry.dll"]
```
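With that Dockerfile in place, a typical local build-and-run session could look as follows (the image tag `docker-container-registry` is just an example name; the port matches the `ASPNETCORE_URLS` setting above):

```shell
# build the image from the Dockerfile in the current directory
docker build -t docker-container-registry .

# run it, exposing the container port 5000 on the host
docker run -p 5000:5000 docker-container-registry
```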
All modern IDEs allow debugging ASP.NET Core applications running inside local Docker. See links:
Azure DevOps has a built-in `AzureCLI` task that's able to run Azure CLI commands.
To use it, you need to configure an Azure Resource Manager connection. It's possible to do that either with the default service principal or by setting up a custom one with a restricted set of permissions.
To allow new resource group creation, you need to add at least the `Microsoft.Resources/subscriptions/resourcegroups/write` permission at the subscription level. You can do that through the `Access Control (IAM)` section (Home => Subscriptions => Select subscription => IAM). Then you need to assign a role that has that permission (eg. `Contributor` - but beware, using it might be dangerous, as it has high-level access permissions; someone with access to Azure DevOps could get access to subscription management). You can also define your own custom role with a minimum set of permissions.
A sample usage would be creating a new resource group and an Azure Container Registry:
```yaml
parameters:
  vmImageName: 'ubuntu-16.04'
  resourceGroupName: ''
  imageRepository: ''
  subscription: ''

stages:
- stage: create_azure_group_and_azure_docker_registry
  displayName: Create Azure Group And Azure Docker Registry
  jobs:
  - job: create_azure_group_and_azure_docker_registry
    pool:
      vmImage: ${{ parameters.vmImageName }}
    steps:
    - task: AzureCLI@2
      displayName: Create Resource Group
      inputs:
        azureSubscription: ${{ parameters.subscription }}
        scriptLocation: 'inlineScript'
        inlineScript: az group create --name ${{ parameters.resourceGroupName }} --location northeurope
    - task: AzureCLI@2
      displayName: Create Azure Container Registry
      inputs:
        azureSubscription: ${{ parameters.subscription }}
        scriptLocation: 'inlineScript'
        inlineScript: az acr create --resource-group ${{ parameters.resourceGroupName }} --name ${{ parameters.imageRepository }} --sku Basic
```
A sample usage of this template would look like:
```yaml
variables:
  vmImageName: 'ubuntu-16.04'
  imageRepository: dockercontainerregistrysample
  dockerRegistryServiceConnection: AzureDockerRegistry
  resourceGroupName: WebApiWithNetCore
  subscription: AzureWebApiWithNetCore

stages:
```
Links:
Set up the universal template as follows (with eg. the filename `BuildAndPublishDocker.yml`):
```yaml
parameters:
- name: imageRepository
- name: dockerRegistryServiceConnection
- name: tag
  type: string
- name: vmImageName
  default: 'ubuntu-16.04'
- name: dockerfilePath
  default: DOCKERFILE

######################################################
######################################################
stages:
- stage: build_and_push_docker_image
  displayName: Build and push Docker image
  jobs:
  - job: Build
    displayName: Build job
    pool:
      vmImage: ${{ parameters.vmImageName }}
    steps:
    - checkout: self
    - task: Docker@2
      displayName: Build a Docker image
      inputs:
        command: build
        repository: ${{ parameters.imageRepository }}
        dockerfile: ${{ parameters.dockerfilePath }}
        containerRegistry: ${{ parameters.dockerRegistryServiceConnection }}
        tags: |
          ${{ parameters.tag }}
    - task: Docker@2
      displayName: Push a Docker image to container registry
      condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
      inputs:
        command: push
        repository: ${{ parameters.imageRepository }}
        dockerfile: ${{ parameters.dockerfilePath }}
        containerRegistry: ${{ parameters.dockerRegistryServiceConnection }}
        tags: |
          ${{ parameters.tag }}
```
Before running the pipeline, you need to perform the following steps manually using Azure Cloud Shell:

1. Create the Azure Resource Group, e.g.:
   `az group create --name WebApiWithNETCore --location westus`
2. Create the Azure Container Registry, e.g.:
   `az acr create --resource-group WebApiWithNETCore --name dockercontainerregistrysample --sku Basic`
3. Set up the service connection in Azure DevOps. See more in the documentation.
Use the defined stage template and define the needed variables, e.g.:
```yaml
variables:
  # image version (tag) variables
  major: 1
  minor: 0
  patch: 0
  build: $[counter(variables['minor'], 0)] # this will reset when we bump minor
  tag: $(major).$(minor).$(patch).$(build)

  vmImageName: 'ubuntu-16.04'
  dockerfilePath: CD/DockerContainerRegistry/DOCKERFILE
  imageRepository: dockercontainerregistrysample
  dockerRegistryServiceConnection: AzureDockerRegistry

stages:
```
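Under that `stages:` key, the stage template defined earlier can be referenced; a minimal sketch, assuming the template was saved as `BuildAndPublishDocker.yml` next to the pipeline file:

```yaml
stages:
# Reference the stage template and pass the pipeline variables
# through as template parameters.
- template: BuildAndPublishDocker.yml
  parameters:
    imageRepository: $(imageRepository)
    dockerRegistryServiceConnection: $(dockerRegistryServiceConnection)
    tag: $(tag)
    vmImageName: $(vmImageName)
    dockerfilePath: $(dockerfilePath)
```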
See more in the pipeline definition: link.
Links:
- Microsoft Documentation - Azure Container Registry Authentication
Before running the pipeline, you need to perform the following steps manually:

1. Create an account and sign in to Docker Hub.
2. Create a repository (its name will be your image name), selecting your Git repository.
3. Set up the service connection in Azure DevOps. See more in the documentation.
Use the defined stage template and define the needed variables, e.g.:
```yaml
variables:
  # image version (tag) variables
  major: 1
  minor: 0
  patch: 0
  build: $[counter(variables['minor'], 0)] # this will reset when we bump minor
  tag: $(major).$(minor).$(patch).$(build)

  vmImageName: 'ubuntu-16.04'
  dockerfilePath: CD/DockerContainerRegistry/DOCKERFILE
  imageRepository: oskardudycz/dockercontainerregistrysample
  dockerRegistryServiceConnection: DockerHubDockerRegistry

stages:
```
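As a side note on the versioning scheme, the `tag` variable simply concatenates the four numeric parts; a plain-shell sketch of that expansion (the `build=5` value is just an example, in the pipeline it comes from the `counter()` expression):

```shell
# Local illustration of how tag: $(major).$(minor).$(patch).$(build) expands.
# In Azure Pipelines, build is produced by the counter() expression.
major=1
minor=0
patch=0
build=5   # example value; the real one is the pipeline counter
tag="$major.$minor.$patch.$build"
echo "$tag"   # prints 1.0.0.5
```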
Before running the pipeline:

1. Create an account and sign in to Docker Hub.
2. Go to Account Settings => Security: link and click **New Access Token**.
3. Provide the name of your access token, save it, and copy the value (you won't be able to see it again; you'd have to regenerate it).
4. Go to your GitHub secrets settings (Settings => Secrets, URL: `https://github.com/{your_username}/{your_repository_name}/settings/secrets/actions`).
5. Create two secrets (they won't be visible to other users and will be used in the workflow):
   - `DOCKERHUB_USERNAME` - the name of your Docker Hub account (not to be mistaken with your GitHub account),
   - `DOCKERHUB_TOKEN` - the value of the token generated in step 3.
Then add a new file in the `.github/workflows` repository folder, e.g. `buildandpublishdockertodockerhub.yml`:
```yaml
name: Build And Publish Docker To DockerHub

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check Out Repo
        uses: actions/checkout@v2
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          # Use secrets defined in the GitHub repository,
          # based on the token generated in Docker Hub
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          # build the image in pull requests,
          # publish only if the branch is `main`
          push: ${{ github.ref == 'refs/heads/main' }}
          # tag under which the Docker image should be published
          tags: oskardudycz/webapi_net_core_github_actions:latest
          # path to your project subfolder
          context: ./CD/DockerContainerRegistry
          # path to the Dockerfile
          file: ./CD/DockerContainerRegistry/DOCKERFILE
      - name: Image digest
        run: echo ${{ steps.docker_build.outputs.digest }}
```