1. Installation
There are several ways to install Ontrack.
1.1. Prerequisites
Ontrack has been tested on different Linux variants (Ubuntu, Debian, CentOS) and should also work on Windows.
Ontrack requires at least JDK 8 build 25. More recent JDK 8 versions are fine. However, no testing has been done yet with JDK 9 or later versions.
Ontrack runs fine with 512 MB of memory. However, consider upgrading to 2 GB of memory if you intend to host a lot of projects. See the different installation modes (Docker, RPM, etc.) to learn how to set up the memory settings.
Ontrack stores its data in a Postgres database and can optionally use ElasticSearch for search indexes.
1.2. Quick start
The fastest way to start Ontrack is to use Docker Compose:
curl -fsSLO https://raw.githubusercontent.com/nemerosa/ontrack/master/compose/docker-compose.yml
docker-compose up -d
This sets up:
- a Postgres database
- an ElasticSearch instance (single node)
- Ontrack running on port 8080
Go to http://localhost:8080 and start using Ontrack. The initial administrator credentials are admin / admin.
See [usage] to start using Ontrack.
1.3. Postgres database
Unless you choose to deploy with Docker Compose, you will need to have a Postgres database accessible by Ontrack.
Version 9.5 of Postgres has been tested successfully with Ontrack, but any later 9.x version should be OK as well.
No test with Postgres 10.x has been performed yet.
Ontrack will by default try to connect to jdbc:postgresql://postgresql/ontrack, using ontrack / ontrack as credentials.
Those parameters can be configured using normal Spring Boot JDBC configuration, for example using arguments at startup:
--spring.datasource.url=jdbc:postgresql://localhost:5432/ontrack
--spring.datasource.username=myuser
--spring.datasource.password=password
See the Spring Boot documentation for other ways to pass these configuration parameters.
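For example, the same datasource settings can be expressed in an application.yml file; the URL and credentials below are the illustrative values used above, not defaults:

```yaml
# Illustrative Spring Boot datasource configuration (application.yml).
# The URL, username and password are example values.
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/ontrack
    username: myuser
    password: password
```

Spring Boot merges this file with any command line arguments, the latter taking precedence.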
1.4. Installing using Docker
Ontrack is distributed as a Docker image on the Docker Hub, as nemerosa/ontrack:develop-fb2add2.
1.4.1. Overview
The Ontrack image exposes port 8080.
Two volumes are defined:
- /var/ontrack/data - contains some working files but also the log files
- /var/ontrack/conf - contains the configuration files for Ontrack (see later)
Several database setup modes are possible, as described in the following sections.
1.4.2. Basic deployment
You can start Ontrack as a container, with the database and configuration shared on the host, using:
docker run --detach \
--publish=8080:8080 \
--volume=/var/ontrack/data:/var/ontrack/data \
--volume=/var/ontrack/conf:/var/ontrack/conf \
nemerosa/ontrack
The configuration files for Ontrack can be put on the host in /var/ontrack/conf, and the database and working files will be available in /var/ontrack/data. The application will be available on port 8080 of the host.
Java options, like memory settings, can be passed to the Docker container using the JAVA_OPTIONS environment variable:
docker run \
...
--env "JAVA_OPTIONS=-Xmx2048m" \
...
Additional arguments for the Ontrack process, like configuration arguments passed on the command line, can be provided using the ONTRACK_ARGS environment variable:
docker run \
...
--env "ONTRACK_ARGS=..."
...
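Putting the pieces together, a complete invocation might look like the sketch below; the memory setting and the datasource argument are illustrative values, not defaults:

```shell
# Illustrative full docker run combining the volumes and both
# environment variables described above. The -Xmx value and the
# datasource URL are example values to adapt to your setup.
docker run --detach \
  --publish=8080:8080 \
  --volume=/var/ontrack/data:/var/ontrack/data \
  --volume=/var/ontrack/conf:/var/ontrack/conf \
  --env "JAVA_OPTIONS=-Xmx2048m" \
  --env "ONTRACK_ARGS=--spring.datasource.url=jdbc:postgresql://localhost:5432/ontrack" \
  nemerosa/ontrack
```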
1.4.3. Docker Compose deployment
Create the following file:
version: "2.1"
services:
  # Ontrack container
  ontrack:
    image: nemerosa/ontrack:3
    environment:
      PROFILE: prod
    links:
      - "postgresql:postgresql"
    ports:
      - "8080:8080"
  # Postgresql database
  postgresql:
    image: postgres:9.5.2
    environment:
      POSTGRES_DB: ontrack
      POSTGRES_USER: ontrack
      POSTGRES_PASSWORD: ontrack
    ports:
      - "5432"
In the same directory, run:
docker-compose up -d
After some time, Ontrack becomes available at http://localhost:8080.
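If you script the startup, you may want to wait until Ontrack answers before proceeding; a minimal sketch, assuming the root endpoint responds once the application is up:

```shell
# Poll the Ontrack root URL until it answers (assumes the root
# endpoint returns a successful status once the application is up).
until curl -fs http://localhost:8080 > /dev/null; do
  sleep 5
done
echo "Ontrack is up"
```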
1.5. RPM installation
You can install Ontrack using an RPM file you can download from the releases page.
The RPM is continuously tested on CentOS 6.7 and CentOS 7.1.
To install Ontrack:
rpm -i ontrack.rpm
The following directories are created:
| Directory | Description |
|---|---|
| | Binaries and scripts |
| | Working and configuration directory |
| | Logging directory |
You can optionally create an application.yml configuration file in /usr/lib/ontrack. For example, to customise the port Ontrack is running on:
server:
port: 9080
Ontrack is installed as a service using /etc/init.d/ontrack.
# Starting Ontrack
service ontrack start
# Status of Ontrack
service ontrack status
# Stopping Ontrack
service ontrack stop
To upgrade Ontrack:
# Stopping Ontrack
sudo service ontrack stop
# Updating
sudo rpm --upgrade ontrack.rpm
# Starting Ontrack
sudo service ontrack start
The optional /etc/default/ontrack file can be used to define environment variables like JAVA_OPTIONS or ONTRACK_DB_URL (to use the H2 server mode).
For example:
JAVA_OPTIONS=-Xmx2048m
ONTRACK_DB_URL=jdbc:h2:tcp://h2:9082/ontrack;MODE=MYSQL
The ONTRACK_ARGS environment variable can be used to pass additional application parameters.
1.6. Debian installation
You can install Ontrack using a Debian file (.deb) you can download from the releases page.
To install Ontrack:
dpkg -i ontrack.deb
The following directories are created:
| Directory | Description |
|---|---|
| | Binaries and scripts |
| | Working and configuration directory |
| | Logging directory |
Ontrack is installed as a service using /etc/init.d/ontrack.
# Starting Ontrack
service ontrack start
# Status of Ontrack
service ontrack status
# Stopping Ontrack
service ontrack stop
The optional /etc/default/ontrack file can be used to define environment variables like JAVA_OPTIONS or ONTRACK_DB_URL (to use the H2 server mode).
For example:
JAVA_OPTIONS=-Xmx2048m
ONTRACK_DB_URL=jdbc:h2:tcp://h2:9082/ontrack;MODE=MYSQL
The ONTRACK_ARGS environment variable can be used to pass additional application parameters.
1.7. Standalone installation
Ontrack can be downloaded as a JAR and started as a Spring Boot application.
Download the JAR from the Ontrack release page and start it using java -jar ontrack.jar.
Options can be passed on the command line.
See the Docker installation section for information on how to connect to the database.
1.8. Configuration
As a regular Spring Boot application, Ontrack can be configured using system properties and/or property files and/or YAML files. See the Spring Boot documentation for more details.
The way to provide a YAML application.yml configuration file or command line arguments will vary according to the installation (Docker, RPM, etc.). See the corresponding section above for more details.
For example, to set the port Ontrack is running on, you can use the server.port property. Using a YAML file:
server:
  port: 9999
or the command line option:
--server.port=9999
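As with any Spring Boot application, the same property can also be supplied as an environment variable through relaxed binding, for example:

```shell
# Relaxed binding: SERVER_PORT maps to the server.port property.
SERVER_PORT=9999 java -jar ontrack.jar
```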
See Configuration properties for the list of all available properties.
2. Basics
2.1. Managing projects
2.1.1. Project favorites
When the list of projects becomes too large to be manageable, the user can select some of them as favorite projects.
The user must be logged in to select and see their favorite projects.
The list of favorite projects appears at the top of the home page and each of them displays the status of its branches:
- name of the branch
- highest promotion for the branch with its associated build
For a Git-based project, the list of branches is restricted by its associated branching model (only "main" branches are displayed). For non-Git-based projects, as of now, no restriction on the branches is applied.
In order to make a project a favorite, you have to click on the little star icon which is on the left of the project name in the list of all projects:
To unselect a project as favorite, you do the same (this time, the star is marked as yellow):
Note that you can also unselect a project as favorite from the list of favorite projects:
Branches of a project can also be selected as favorite - see Branch favorites.
2.1.2. Project labels
Projects can be associated with labels, which allows them to be classified.
Labels are defined by:
- an optional category
- a name (unique within the category)
- an optional description
- a color (always given in the form #RRGGBB, for example black is #000000)
Using the labels
Project labels are displayed in the home page, in the list of projects:
They also appear in the project page, under the project’s name:
In the home page, projects can be filtered using labels. Several options are available.
You can type parts of the label in the Label filter box at the top of the project list. This displays a list of matching labels from which you can select an actual label:
Once the item is selected, the list of projects is filtered accordingly:
You can also select the label directly from the Label dropdown:
Finally, from the home page or from the project page, clicking on a label will select this label as a filter.
The selected filter is stored at browser level and is therefore preselected the next time you go to the home page. You can clear the selected label by either:
Note that upon a label selection, this selection also appears in the URL of your browser and can be used as a permalink to this filter.
Assigning labels to a project
Only some users are allowed to assign labels to projects.
See Security for the list of available roles.
If the user is authorized to assign labels to a project, a pencil icon appears close to the list of labels and the Labels command is available in the page menu. Both commands perform the same operation.
Those commands display a dialog which allows the selection (and unselection) of labels among a list. When exiting the dialog through the OK button, the selection of labels is applied to the project and the project page is reloaded.
The list of available labels can be filtered using the text box at the top of the list.
Management of labels
Authorized users can manage the list of labels from their user menu.
The label management page allows the user to create, update and delete labels.
In the Projects column, the number of projects associated with the label on the line is displayed. If greater than zero, it is a link to the home page, with the corresponding label being selected.
The edition dialog for a label looks like:
The color editor, as of now, relies on the browser's default color editor, so the rendering might differ from browser to browser.
If authorised, the creation of a label is also available from the project label assignment dialog. If the filter being typed does not match any label, a button appears which allows the creation of the new label:
Once the label is created, it’s selected and filtered by default:
Automation of labels
Some labels can be created and assigned automatically using the concept of the "label providers".
The main thing to remember about automatically assigned labels is that they cannot be edited, deleted or unselected.
By default, automated labels are NOT enabled. In order to enable their collection, you can:
- set the ontrack.config.job-label-provider-enabled configuration property to true
- or go to the Settings and navigate to the Label provider job section:
The following settings are available:
- enabled - enables or disables the collection of automated labels. This overrides the settings defined by ontrack.config.job-label-provider-enabled
- interval - how often the collection of labels must be performed
- job per project - by default, only one job is created for the collection of all labels. Set this option to split this job per project.
Those options are mostly used for tuning performance on very large Ontrack instances.
To create a label provider, you have to create an extension: see Label providers.
2.2. Managing branches
2.2.1. Managing the branches in the project page
If you click on the Show all branches button in the project page, you can display all the branches, including the ones being disabled and the templates.
According to your authorizations, the following commands will be displayed as icons just on the right of the branch name, following any other decoration:
- disabling the branch
- enabling the branch
- deleting the branch
This gives you quick access to the management of the branches in a project. Only the deletion of a branch will prompt you for confirmation.
2.2.2. Branch favorites
Instead of selecting a whole project as a favorite, one might find it more convenient to select only some of its branches.
This reduces the clutter on the home page when projects tend to have a lot of branches.
All favorite branches do appear on the home page, together with any favorite project:
The favorite branches of a given project do also appear on the project page:
In both cases, the following information is displayed:
- latest build
- latest build per promotion
Branches can be unselected as favorite using the star left of their name.
In order to select a branch as favorite, use the little star left of its name in the branch list in the project page:
You can use this star to unselect it as well. When selected, the star is marked as yellow.
2.2.3. Branch templating
In a context where branches are numerous, because your workflow implies the creation of many branches (feature branches, release branches, etc.), each of them associated with its own pipeline, creating the branches by hand, even by cloning or copying them, would require too much effort.
Ontrack gives the possibility to create branch templates and to automatically create branches using this template according to a list of branches. This list of branches can either be static or provided by the SCM.
See Branch templates for details about using this feature.
2.2.4. Managing stale branches
By default, Ontrack will keep all the branches of a project forever. This can lead to a big number of branches to be displayed.
You can configure a project to disable branches after a given number of days has elapsed since the last build, and then to delete them after an additional number of days has elapsed again.
To configure this:
- go to the project page
- select the Stale branches property and add it:
- set the number of days before disabling and the number of days before deleting
If the disabling days are set to 0, no branch will ever be disabled or deleted.
If the deleting days are set to 0, no branch will ever be deleted.
You can also set a list of promotion levels - a branch which is or has been promoted to such a promotion level will not be eligible for being disabled or deleted.
In the sample above, stale branches will be disabled after 60 days (no longer shown by default), and deleted after a further 300 days (so after 360 days in total). Branches which have at least one build promoted to PRODUCTION will never be disabled or deleted.
Note that the Stale branches property can also be set programmatically using the DSL.
2.2.5. Validation stamp filters
When a branch defines many validation stamps, the view can become cluttered and lose its usefulness, because it displays too much information.
Validation stamp filters can be defined to restrict the view to a set of known validation stamps.
Using filters
Validation stamp filters can be selected in the branch view, just on the left of the list of validation stamp headers:
When a filter is selected, it is marked as such and only associated validation stamp columns are shown in the view:
The validation stamp filter menu is also marked in orange to indicate that a filter has been applied.
When the filter is applied, its name appears also in the URL. This can be used as a permalink:
You can remove the filter by selecting Clear validation stamp filter in the filter menu:
Editing filters
Only authorized users are allowed to edit the validation stamp filters for a branch. See Authorisations for more details.
A validation stamp filter is defined by:
- a name
- a list of validation stamp names to include
While it is possible to edit a filter using a dialog (see later), it is far easier to use the in-place editor.
Start by creating a validation stamp filter by selecting the New Filter… entry in the filter menu:
This displays a dialog to create the new filter:
Only the name is required; all current validation stamps are included by default.
When created, the filter can be directly edited in-place:
The following actions are possible:
- by clicking on the Select none button, no validation stamp is associated with the filter
- by clicking on the Select all button, all validation stamps are associated with the filter
- by clicking on the Done with edition button, the in-place edition stops and the normal display is resumed
You can also click on a validation stamp to remove it or to add it to the filter.
In case the validation stamp is associated with the filter, a minus icon appears close to its name. If it is not associated, the icon is dimmed and a plus icon appears:
Note that you can also stop the edition by selecting the eye icon in the menu:
To start editing an existing filter, just click also on the eye icon close to its name:
Selecting any other filter, or removing the filter, will also stop the in-place edition.
To edit a filter directly, you can also select the pencil icon and edit the filter using a dialog:
This displays an edition dialog allowing you to change the name and the list of validation stamps.
For a filter associated with a branch (see below, sharing), names can be selected among the validation stamps of the branch. For a filter associated with a project, the list of validation stamps for all the branches is available. For a global filter, names are no longer selected but must be edited.
Finally, to delete a filter, click on the trash icon:
A confirmation will be asked before the deletion actually occurs.
Sharing
A filter is created by default at branch level and is only visible when the associated branch is displayed.
An authorized user can:
- share the filter at project level - in this case, the filter is available for all the branches of the project
- share the filter at global level - in this case, the filter is available for all projects and all branches
A filter shared at project level is shown with a [P] close to its name, and a global filter with a [G]:
In the screenshot above:
- DEPLOYMENT is associated with the current branch
- DOCUMENTATION is associated with the project
- the other filters are global
To share a filter at project level, click on the share icon:
To share a filter at global level, click on the share icon:
Authorisations
According to the role of the authenticated user, the following actions are possible:
| Scope | Action | Participant | Validation stamp manager | Project manager/owner | Administrator |
|---|---|---|---|---|---|
| Branch | Create | Yes | Yes | Yes | Yes |
| Branch | Edit | Yes | Yes | Yes | Yes |
| Branch | Delete | Yes | Yes | Yes | Yes |
| Branch | Share to project | No | Yes | Yes | Yes |
| Project | Edit | No | Yes | Yes | Yes |
| Project | Delete | No | Yes | Yes | Yes |
| Project | Share to global | No | No | No | Yes |
| Global | Edit | No | No | No | Yes |
| Global | Delete | No | No | No | Yes |
2.3. Managing validation stamps
2.3.1. Validation stamp data configuration
Validation stamps are often created by tests, or scans, or any other kind of automated process. This is often associated with some metrics. For example:
- a security scan could bring a list of critical and high defects
- a test run has a list of passed and total tests
- a coverage test has a coverage percentage
- etc.
Ontrack is able to associate such data with a validation run for a given build and validation stamp.
The only precondition is that the validation stamp must be configured for a given type of data.
When creating or editing a validation stamp, you can select the type of data you want to associate with any validation run for this validation stamp.
There is a list of predefined data types:
- plain text
- CHML (Critical / High / Medium / Low)
- Number (with or without threshold)
- Fraction (with or without threshold)
- Percentage (with or without threshold)
- Metrics (arbitrary map of names to floating point numbers)
You can create your own validation data types.
Every type is associated with a configuration. For some of them, nothing is needed.
For data types with thresholds, you might want to set thresholds for generating warnings or failures, according to a given direction.
For example, a validation stamp could be associated with a number of passed tests linked with a total number of tests. This is a "fraction" type, where the numerator is the number of passed tests, and the denominator is the total number of tests.
The warning and failure thresholds are expressed as percentages, and we can choose a direction (higher is better by default).
If the failure threshold is 50, the total number of tests is 200 and the number of passed tests is 99, then we are below the threshold of 50% (99/200 = 49.5%) and the validation run is marked as a failure.
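The arithmetic of this example can be checked quickly; this is a plain shell sketch of the computation, not something Ontrack itself runs:

```shell
# Fraction example: 99 passed tests out of 200 total.
passed=99
total=200
# Percentage of passed tests: 99 / 200 = 49.5%
pct=$(awk "BEGIN { printf \"%.1f\", $passed * 100 / $total }")
# Compare against the failure threshold of 50% (higher is better)
status=$(awk "BEGIN { if ($passed * 100 / $total < 50) print \"FAILED\"; else print \"PASSED\" }")
echo "$pct% -> $status"
```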
For the creation of data associated with validation runs, see the associated documentation.
In order to create a validation stamp with some validation data type, you can also use the DSL:
def branch = ...
branch.validationStamp("Text data") {
setTextDataType()
}
branch.validationStamp("CHML data") {
setCHMLDataType("HIGH", 1, "CRITICAL", 1)
}
branch.validationStamp("Number data") {
setNumberDataType(10, 100, true)
}
branch.validationStamp("Percentage data") {
setPercentageDataType(0, 50, false)
}
branch.validationStamp("Fraction data") {
setFractionDataType(100, 90, true)
}
branch.validationStamp("Metrics data") {
setMetricsDataType()
}
2.3.2. Auto creation of validation stamps
Creating the validation stamps for each branch, or making sure they are always up to date, can be a non-trivial task. Mechanisms like cloning or templates can help, but then one must still make sure that the list of validation stamps in the template is up to date and that the template is regularly synchronized.
Another approach is to allow projects to create automatically the validation stamps on demand, whenever a build is validated. This must of course be authorised at project level and a list of predefined validation stamps must be maintained globally.
Predefined validation stamps
The management of predefined validation stamps is accessible to any Administrator, in their user menu.
They can create, edit and delete predefined validation stamps, and associate images with them.
Deleting a predefined validation stamp has no impact on the ones which were created from it in the branches. No link is kept between the validation stamps in the branches and the predefined ones.
Configuring projects
By default, a project does not authorise the automatic creation of validation stamps. In case one attempts to validate a build using a non-existing validation stamp, an error is thrown.
In order to enable this feature on a project, add the Auto validation stamps property to the project and set Auto creation to Yes.
Disabling the auto creation can be done either by setting Auto creation to No or by removing the property altogether.
Auto creation of validation stamps
When the auto creation is enabled, build validations using a validation stamp name will follow this procedure:
- if the validation stamp is already defined in the branch, it is of course used
- if the validation stamp is predefined, it is used to create a new one on the branch, which is then used
- in any other case, an error is displayed
The auto creation of validation stamps is available only through the DSL or through the API. It is not accessible through the GUI, where only the validation stamps of the branch can be selected for a build validation.
Auto creation of validation stamps when not predefined
You can also configure the project so that validation stamps are created on demand, even when no predefined validation stamp exists.
In this case:
- if the validation stamp is already defined in the branch, it is of course used
- if the validation stamp is predefined, it is used to create a new one on the branch, which is then used
- in any other case, a new validation stamp is created in the branch, with the requested name (and with an empty image and a default description)
Predefined validation stamps and validation data
As for validation stamps associated with branches, the predefined validation stamps can be associated with some validation data.
2.3.3. Bulk update of validation stamps
Validation stamps are attached to a branch but, in practice, they are often duplicated across a project's branches and among all the projects. Updating the description and the image of a validation stamp can quickly become cumbersome.
Predefined validation stamps can mitigate this, but they won't solve the issue when validation stamps are created automatically even when not predefined.
In order to update all the validation stamps having the same name, across all branches and all projects, you can use the Bulk update command in the validation stamp page:
A confirmation will be asked and all the validation stamps having the same name, across all branches and all projects, will be updated with the same image and the same description.
A predefined validation stamp will also be updated or created.
In order to perform a bulk update, you must be an administrator or have been granted the global validation manager role.
Any validation data configuration is also part of the bulk update.
2.4. Managing promotion levels
2.4.1. Auto promotion
By default, a build is promoted explicitly, by associating a promotion with it.
By configuring an auto promotion, we allow a build to be automatically promoted whenever a given list of validations or promotions have passed on this build.
For example, if a build has passed integration tests on platforms A, B and C, we can imagine automatically promoting this build to a promotion level, without having to do it explicitly.
In order to configure the auto promotion, go to the promotion level and set
the "Auto promotion" property. You then associate the list of validation stamps
that must be PASSED
or promotion levels which must be granted on a build in
order to get this build automatically promoted.
The list of validation stamps can be defined by:
- selecting a fixed list of validation stamps
- selecting the validation stamps based on their name, using include and exclude regular expressions
A validation stamp explicitly defined in the list is always taken into account in the auto promotion, whatever the values of the include and exclude regular expressions.
The list of promotion levels can be set independently of the list of validation stamps.
2.4.2. Promotion checks
By default, a build can be promoted to any promotion independently of any other constraint.
In particular, promotions are ordered and a build can be promoted to a given promotion without the previous ones being granted.
If this "lax" behavior is not desirable, you can configure this in several ways, using promotion checks.
This check on previous promotions is built into Ontrack, but others could be added by creating extensions.
Previous promotion check
One check is to make sure that a promotion is granted only if the previous one is granted. For example, if we have BRONZE → SILVER → GOLD as promotions for a branch:
- SILVER could not be promoted if BRONZE is not
- GOLD could not be promoted if SILVER is not
You can activate this behaviour:
- globally, by activating the settings for "Previous promotion condition"
- at project, branch or even promotion level, by setting and activating the "Previous promotion condition" property
The order of priority of this check is as follows:
- promotion level "Previous promotion condition" property (takes precedence over a branch setup)
- branch "Previous promotion condition" property (takes precedence over a project setup)
- project "Previous promotion condition" property (takes precedence over global settings)
- global "Previous promotion condition" settings
Previous dependencies check
You can also define a set of dependencies to a promotion, like a list of promotions which must be granted before a promotion is itself granted.
For example, if we have IRON → BRONZE → SILVER → GOLD as promotions for a branch, we could decide that GOLD cannot be promoted if either BRONZE or SILVER is missing.
You can activate this behaviour by setting the "Promotion dependencies" property on a promotion level, and selecting the dependencies it depends on.
2.4.3. Auto creation
Creating the promotion levels for each branch, or making sure they are always up to date, can be a non-trivial task. Mechanisms like cloning or templating can help, but then one must still make sure that the list of promotion levels in the template is up to date and that the template is regularly synchronized.
Another approach is to allow projects to create automatically the promotion levels on demand, whenever a build is promoted. This must of course be authorized at project level and a list of predefined promotion levels must be maintained globally.
Predefined promotion levels
The management of predefined promotion levels is accessible to any Administrator, in their user menu.
They can create, edit and delete predefined promotion levels, and associate images with them.
Deleting a predefined promotion level has no impact on the ones which were created from it in the branches. No link is kept between the promotion levels in the branches and the predefined ones.
Configuring projects
By default, a project does not authorize the automatic creation of promotion levels. In case one attempts to promote a build using a non-existing promotion level, an error is thrown.
In order to enable this feature on a project, add the Auto promotion levels property to the project and set Auto creation to Yes.
Disabling the auto creation can be done either by setting Auto creation to No or by removing the property altogether.
Auto creation of promotion levels
When the auto creation is enabled, build promotions using a promotion level name will follow this procedure:
- if the promotion level is already defined in the branch, it is of course used
- if the promotion level is predefined, it is used to create a new one on the branch, which is then used
- in any other case, an error is displayed
The auto creation of promotion levels is available only through the DSL or through the API. It is not accessible through the GUI, where only the promotion levels of the branch can be selected for a build promotion.
2.5. Managing the builds
The builds are displayed for a branch.
2.5.1. Filtering the builds
By default, only the last 10 builds of a branch are shown, but you have the possibility to create build filters in order to change the list of displayed builds for a branch.
The management of filters is done using the Filter buttons at the top-left and bottom-left corners of the build list. Those buttons behave exactly the same way. They are not displayed if no build has ever been created for the branch.
Some filters, like Last build per promotion, are predefined, and you just have to select them to apply them.
You can create custom filters using the build filter types which are in the New filter section at the end of the Filter menu. You fill in the filter parameters and apply the filter by clicking on OK.
If you give your filter a name, this filter will be saved locally for the current branch and can be reused later when using the same browser on the same machine account. If you are logged in, you can save this filter for your account at Ontrack level so you can reuse it from any workstation.
If the filter is not named, it will be applied all the same, but it won't be editable nor can it be saved.
You can delete and edit any of your own filters.
You can disable any filtering by selecting Erase filter. You then return to the default: the last 10 builds. Note that the saved filters are not impacted by this operation.
Sharing filters
By selecting the Permalink option in the Filter menu, you update your browser's URL to include information about the currently selected filter. By copying this URL and sending it to another user, this other user will be able to apply the same filter as you, even if they did not create it in the first place.
Even anonymous (unnamed) filters can be shared this way.
2.5.2. Build links
A build can be linked to other builds. This is particularly useful to represent dependencies between builds and projects.
Definition of links
If authorized, you’ll see a Build links command at the top of the build page:
Clicking on this link will open a dialog which allows you to define the list of links:
Note that:
-
usually, you’ll probably edit those links in an automated process using the DSL
-
you cannot define or see links to builds for which the project is not accessible to you
Decorations
The build links are displayed as decorations in the build page header:
or in the list of builds:
In both cases, the decoration is clickable. If the target build has been promoted, the associated promotions will also be displayed.
If the target project (the project containing the build targeted by the link) has been configured accordingly, the label associated to the build will be displayed instead of its name. |
When the list of dependencies becomes too big, the decoration can be more cumbersome than useful. See the Filtering the build links section below on tips for customizing the display of the decoration. |
Information
The builds which are linked to a given build or which are used by this build are displayed on the build page:
Querying
The build links properties can be used for queries:
-
in build searches
-
in global searches
In all those cases, the syntax to find a match is:
-
project, project: or project:* - all builds which contain a build link to the "project" project
-
project:build - all builds which contain a link to the build "build" in the "project" project
-
project:build* - all builds which contain a link to a build starting with "build" in the "project" project. The * wildcard can be used in any place.
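The matching rules above can be sketched as follows. This is an illustrative sketch only, not Ontrack's actual implementation; the function name is hypothetical.

```python
import fnmatch

# Sketch of the build-link search syntax: "project", "project:" and
# "project:*" match any build link to the project, while the build part
# may use "*" wildcards anywhere. Illustrative only.
def link_matches(query: str, project: str, build: str) -> bool:
    if ":" in query:
        q_project, q_build = query.split(":", 1)
    else:
        q_project, q_build = query, "*"
    if q_build == "":
        q_build = "*"  # "project:" behaves like "project:*"
    return fnmatch.fnmatchcase(project, q_project) and fnmatch.fnmatchcase(build, q_build)

print(link_matches("ontrack:3.*", "ontrack", "3.41.0"))  # True
```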
Filtering the build links
Once a build has too many dependencies, the decoration is too cluttered and cannot be used correctly:
In order to reduce this clutter, you can act at several levels:
-
setting a global property so that only "main" build links are displayed
Only the administrators can set those global settings. Navigate to the Settings in the user menu, navigate to Main build links and edit the Project labels.
Enter a list of project labels which will be considered as "main links" and must always be displayed in the build decoration.
-
setting the project so that only "main" build links are displayed. Optionally, the global settings can be overridden.
In the source project (the one having the builds with many links to other projects), add the "Main build links" property and edit the list of labels designating the projects which must always be displayed.
By default, the global settings and the project settings are merged together. You can override this behaviour and take into account only the project settings by checking the "Override global settings" checkbox.
Given a project "source" whose one build depends on "product" (labeled with "main"), "library" (labeled "module") and many other projects, if one sets the following settings:
-
global settings: main
-
project "source" settings: module and no override
Then, only the "product" dependency is displayed in the decoration:
The last link icon allows navigating to the source build and listing all its dependencies. If none of the source build's dependencies were flagged as "main builds", only this icon would appear. |
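The merge behaviour between global and project settings described above can be sketched as follows. The function name and shape are illustrative assumptions, not Ontrack's API.

```python
# Sketch of the "main build links" label resolution: by default the
# global and project label lists are merged; when the project overrides
# the global settings, only the project labels are kept.
def main_link_labels(global_labels, project_labels, override=False):
    if override:
        return set(project_labels)
    return set(global_labels) | set(project_labels)

print(sorted(main_link_labels(["main"], ["module"])))  # ['main', 'module']
```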
2.5.3. Run info
Builds can be associated with some run info which contains details about the source, the trigger and the duration of this build.
Information about the duration of the builds is shown just to the right of the build name in the branch page:
or in the list of extensions in the build page:
More details about run information at Run info.
2.6. Managing validation runs
Validation runs associate a validation stamp to a build, and a status describing the validation: passed, failed, etc.
Additionally, a run info can be associated with the validation run to show information like a duration, a source or a trigger.
Validation runs can be created manually, but more often, they will be created automatically by a CI engine like Jenkins.
Validation runs can be seen:
-
in the branch overview - only their latest status is then visible
-
in the branch overview, by clicking on a validation run (at the intersection of a validation stamp and a build), you can either create a new validation run (when there is none) or see the list of them and their statuses.
-
in the build page
-
in the validation stamp page
For one validation run, one can add comments and update the status to reflect the current situation (for example, to mention that a failure is under investigation).
2.6.2. Validation run status comment edition
Authorized users can edit the description entered for a validation run status:
Administrators, project owners & managers, and global & project validation managers can edit any comment. The author of a validation run status change can edit their own comment.
2.6.3. Hyperlinks in descriptions
The free text which is entered as description for the validation run statuses can be automatically extended with hyperlinks.
Such link expansions are done for:
-
raw hyperlinks in the text
-
issue references, depending on the project configuration
For example, if one enters the following text as a validation run status:
For more information, see http://nemerosa.github.io/ontrack/
the link will be rendered as such:
If the project is configured with JIRA, any reference will be converted to a link to the issue. So a text like
See CLOUD-6800
would be rendered as:
Same thing for GitHub or GitLab, a text like:
See #631
will be rendered as:
See Free text annotations for a way to extend this hyperlinking feature.
2.6.4. Validation run data
Some additional data can be associated with a validation run, according to a format defined by their validation stamp. For example, passed and total tests, or a coverage percentage.
When creating a validation run, either through the GUI or through any of the APIs, the data will be validated according to the rules defined by the validation stamp.
In particular, the validation run status (passed, warning, failed, etc.) will be computed in some cases, when a threshold of quality is associated with the validation stamp (like a % of passed tests).
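A threshold-based status computation could look like the sketch below. The exact threshold semantics are an assumption for illustration, not Ontrack's actual rules.

```python
# Sketch: turning a quality percentage (e.g. % of passed tests) into a
# validation run status using two configured thresholds.
def computed_status(percentage: float, failure_below: float, warning_below: float) -> str:
    if percentage < failure_below:
        return "FAILED"
    if percentage < warning_below:
        return "WARNING"
    return "PASSED"

print(computed_status(95, 50, 98))  # WARNING
```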
In order to create a validation run with some validation data, you can also use the DSL:
def build = ...
build.validateWithText("Text data", "PASSED", "Some text")
build.validateWithCHML("CHML data", 1, 10, 100, 1000)
build.validateWithNumber("Number data", 80)
build.validateWithPercentage("Percentage data", 57)
build.validateWithFraction("Fraction data", 99, 100)
build.validateWithMetrics("Metrics data", [
"js.bundle" to 1500.56,
"js.error" to 111
])
For a custom validation data type, you can use:
build.validateWithData(
"Validation stamp name",
[:] // Validation data
)
The actual validation data type is taken from the validation stamp. |
Run data metrics
While the validation run data is available from Ontrack, it can also be exported to other databases.
As of today, only InfluxDB is supported.
InfluxDB connector must be enabled - see InfluxDB metrics. |
In order to export Ontrack validation runs as points into an InfluxDB database, the following elements must be configured:
Property | Environment variable | Default | Description |
---|---|---|---|
|
|
|
If |
Each point contains the following information:
-
name: ontrack_value_validation_data
-
tag project - name of the project
-
tag branch - name of the branch
-
tag build - name of the build
-
tag validation - name of the validation
-
tag status - status of the validation
-
tag type - FQCN of the validation data type
-
field values depend on the type of data
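Put together, such a point could look like the following InfluxDB line-protocol sketch, using the measurement name and tags listed above. The field names and the "type" value are illustrative placeholders, since they depend on the validation data type.

```python
# Sketch: build an InfluxDB line-protocol string for a validation run
# point (measurement name, tag set, field set).
def to_line_protocol(tags: dict, fields: dict) -> str:
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"ontrack_value_validation_data,{tag_part} {field_part}"

line = to_line_protocol(
    {"project": "ontrack", "branch": "release-3.41", "build": "3.41.0",
     "validation": "unit-tests", "status": "PASSED",
     "type": "some.validation.DataType"},  # FQCN placeholder
    {"passed": 100, "failed": 2},
)
print(line)
```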
2.7. Properties
All entities can be associated with properties.
2.7.1. Build link display options property
By default, when a build link is displayed as a decoration in the source build, the target build name is used.
The target project can be configured to display any label associated with the build instead of using the build name.
In the target project page, select the "Build link display options" property and configure it to use the label as display option for the build link decoration:
2.7.2. Message property
The Message property can be associated with any entity in Ontrack.
A message is the association of some text with a type: information, warning or error:
The message is of course displayed in the list of properties:
and as a decoration:
2.7.3. Meta information property
Some arbitrary meta information properties can be associated with any entity in Ontrack, using a set of values for some names, and optionally some links and categories.
Querying
The meta information properties can be used for queries:
-
in build searches
-
in global searches
In all those cases, the syntax to find a match is:
-
name: or name:* - all entities which contain a "name" meta information property
-
name:value - all entities which contain a "name" meta information property with the exact "value"
-
name:val* - all entities which contain a "name" meta information property whose value starts with "val"
-
the * wildcard can be used in any place
Neither the link nor the category can be used for the search, only the name and the value. |
2.7.4. Release property
Also known as label property. |
A release label can be associated with any build.
Select the "Release" property and define its label:
The release label is then displayed as a decoration:
-
in the build page
-
in the build list
3. Topics
3.1. Branch templates
In a context where branches are numerous, because the workflow you’re working with implies the creation of many branches (feature branches, release branches, …), each of them associated with its own pipeline, creating the branches by hand, even by cloning or copying them, would be too much of an effort.
Ontrack gives the possibility to create branch templates and to automatically create branches using this template according to a list of branches. This list of branches can either be static or provided by the SCM.
3.1.1. Template Definition
We distinguish between:
-
the branch template definition - which defines a template for a group of branches
-
the branch template instances - which are branches based on a template definition
There can be several template definitions per project, each with its own set of template instances.
A template definition is a branch:
-
which is disabled (not visible by default)
-
which has a special decoration for quick identification in the list of branches for a project
-
which has a list of template parameters:
-
names
-
description
-
whose descriptions and property values use ${name} expressions, where name is a template parameter.
One can create a template definition from any branch following those rules:
-
the user must be authorized to manage branch templates for a project
-
the branch must not be already a template instance
-
the branch must not have any existing build
3.1.2. Template Instances
A template instance is also a branch:
-
which is linked to a template definition
-
which has a set of name/values linked to the template parameters
-
which has a special decoration for quick identification in the list of branches for a project
-
it is a "normal branch" as far as the rest of Ontrack is concerned, but:
-
it cannot be edited
-
no property can be edited nor deleted (they are linked to the template definition)
There are several ways to create template instances:
-
from a definition, we can create one instance by providing:
-
a name for the instance
-
values for each template parameter
-
we can define template synchronization settings linked to a template definition:
-
source of instance names - this is an extension point. This can be:
-
a list of names
-
a list of actual branches from a SCM, optionally filtered. The SCM information is taken from the project definition.
-
an interval of synchronization (manual or every x minutes)
-
a list of template expressions for each template parameter, which define how to map an instance name into an actual parameter value (see below)
The actual creation of the instance is done using the cloning and copy techniques already in place in Ontrack. The replacement is done using the template parameters and their values (computed or not).
The manual creation of an instance follows the same rules as the creation of a branch. If the branch already exists, an error is thrown.
For automatic synchronization from a list of names (static or from a SCM):
-
if a previously linked branch does not exist any longer, it is disabled (or deleted directly, according to some additional settings for the synchronization)
-
if a branch already exists with the same name, but is not a template instance, a warning is emitted
-
if a branch already exists, its descriptions and property values are synced again
-
if a branch does not exist, it is created as usual
Reports about the synchronization (syncs, errors and warnings) are visible in the Events section, in the template definition or in the template instances.
The same synchronization principle applies to branch components: promotion levels, validation stamps and properties.
Finally, at a higher level, cloning a project would also clone the template definitions (not the instances).
3.1.3. Template expressions
Those expressions are defined for the synchronization between template definitions and template instances. They bind a parameter name and a branch name to an actual parameter value.
A template expression is a string that contains references to the branch name using the ${…} construct, where the content is a Groovy expression in which the branchName variable is bound to the branch name.
Note that those Groovy expressions are executed in a sandbox that prevents malicious code execution.
Examples
In a SVN context, we can bind the branch SVN configuration (branch location and tag pattern) this way, using simple replacements:
branchLocation: branchName -> /project/branches/${branchName}
tagPattern: branchName -> /project/tags/{build:${branchName}*}
In a Jenkins context, we can bind the job name for a branch:
jobName: branchName -> PROJECT_${branchName.toUpperCase()}_BUILD
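The expansion mechanism can be sketched as follows. Ontrack evaluates Groovy in a sandbox; this illustration uses Python expressions instead (so .upper() stands in for Groovy's .toUpperCase()), and the function name is hypothetical.

```python
import re

# Sketch of template expression expansion: each ${...} is evaluated as
# an expression with branchName bound to the branch name.
def expand(template: str, branch_name: str) -> str:
    def repl(match):
        return str(eval(match.group(1), {"__builtins__": {}}, {"branchName": branch_name}))
    return re.sub(r"\$\{([^}]+)\}", repl, template)

print(expand("PROJECT_${branchName.upper()}_BUILD", "release-1.1"))  # PROJECT_RELEASE-1.1_BUILD
```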
3.2. Working with SCM
Source Control Management (SCM) is at the core of Continuous Integration and Continuous Delivery chains. It’s therefore not a surprise that they play a major role in Ontrack.
Out of the box, several SCM are supported:
-
Git based systems
As of version 3.41, Subversion is now deprecated and support for it will be removed in version 4.x |
3.2.1. SCM Catalog
Ontrack can collect information about all registered SCMs and correlate this information with the Ontrack projects.
Model
A SCM Catalog entry represents a physical SCM repository which is accessible by Ontrack. An entry contains the following information:
-
SCM - type of SCM, like github or bitbucket.
-
Configuration - associated configuration in Ontrack to access this repository (URL, credentials, etc.).
-
Repository - identifier for this repository. It depends on the type of SCM. For example, for GitHub, it can be the name of the repository, like nemerosa/ontrack.
A SCM catalog entry can be:
-
linked if an Ontrack project exists which is associated to this repository
-
unlinked otherwise
Some Ontrack projects are orphan if they are not associated with any repository accessible by Ontrack or if their associated repository is not accessible.
SCM Catalog list
To access the SCM catalog, you must be logged in; then select the SCM Catalog item in your user menu.
The list looks like:
The Project column indicates if the entry is linked or unlinked. In case it is linked, a link to the Ontrack project page is available.
Filtering is possible using text boxes at the top. You can also navigate back and forth in the list using the Previous and Next buttons.
The main filter, labelled Only SCM entries, selects the type of entry:
-
Only SCM entries - selected by default, shows all repositories accessible by Ontrack
-
All entries and orphan projects - additionally, shows the orphan projects
-
Linked entries only - shows only the entries which are linked to projects
-
Unlinked entries only - shows only the unlinked entries
-
Orphan projects only - shows only the orphan projects, as shown below:
In this case, only the link to the project is available since no repository information is accessible.
Orphan project decoration
Since orphan projects are an anomaly (because every Ontrack project should be associated with some kind of SCM), they get a special decoration, so that they can easily be identified (and fixed):
Project labels
If the collection of project labels is enabled, the following labels will be set for projects:
-
scm-catalog:entry
when the project is associated with a SCM Catalog entry -
scm-catalog:no-entry
when the project is NOT associated with a SCM Catalog entry
Those labels can be used to filter orphan projects on the home page for example, or in GraphQL queries.
GraphQL schema
The SCM Catalog is accessible through the Ontrack GraphQL schema.
At root level, the scmCatalog query allows querying the SCM Catalog itself and filtering the catalog.
For example, to get the list of orphan projects:
{
scmCatalog(link: "ORPHAN") {
pageItems {
project {
name
}
}
}
}
or to get the entries which are unlinked:
{
scmCatalog(link: "UNLINKED") {
pageItems {
entry {
scm
config
repository
repositoryPage
}
}
}
}
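Such queries can be sent to Ontrack over HTTP as a standard GraphQL POST request. The /graphql path, the port and the authentication details are assumptions here; check your instance's API documentation before use.

```python
import json
from urllib import request

# Sketch: wrap a GraphQL query into a JSON POST body for an assumed
# Ontrack GraphQL endpoint.
query = '{ scmCatalog(link: "ORPHAN") { pageItems { project { name } } } }'
body = json.dumps({"query": query}).encode("utf-8")
req = request.Request(
    "http://localhost:8080/graphql",  # endpoint URL is an assumption
    data=body,
    headers={"Content-Type": "application/json"},
)
# response = request.urlopen(req)  # requires a running Ontrack instance
```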
See the GraphQL schema documentation for more fields and filters. |
Additionally, the scmCatalogEntry field is available on the Project type to provide information about any associated SCM Catalog entry:
{
projects(name: "ontrack") {
scmCatalogEntry {
scm
config
repository
repositoryPage
}
}
}
Metrics
The following metrics are available:
-
ontrack_extension_scm_catalog_total (gauge) - count of SCM catalog entries + orphan projects
-
ontrack_extension_scm_catalog_entries (gauge) - count of SCM catalog entries
-
ontrack_extension_scm_catalog_linked (gauge) - count of linked SCM catalog entries
-
ontrack_extension_scm_catalog_unlinked (gauge) - count of unlinked SCM catalog entries
-
ontrack_extension_scm_catalog_orphan (gauge) - count of orphan projects
Administration
This feature is enabled by default but can be controlled using some administrative jobs:
-
Collection of SCM Catalog - gets the list of repositories accessible from Ontrack. Runs once a day.
-
Catalog links collection - gets the links between the projects and associated SCM repositories. Runs once a day.
-
Collection of SCM Catalog metrics - computes some metrics about the SCM catalog
Specific configuration for GitHub
The GitHub repositories are not collected unless their organization is specifically allowed. By default, none are.
In order to enable the scanning of a GitHub organization, log in as administrator, go to the Settings, scroll to the GitHub SCM Catalog section and enter the names of the organizations to authorise for collection. For example, below, only the nemerosa organization is allowed:
3.3. Working with Git
3.3.1. Working with GitHub
GitHub is an enterprise Git repository manager, available on the cloud or hosted on premises.
When working with Git in Ontrack, one can configure a project to connect to a GitHub repository.
General configuration
The access to a GitHub instance must be configured.
-
as administrator, go to the GitHub configurations menu
-
click on Create a configuration
-
in the configuration dialog, enter the following parameters:
-
Name - unique name for the configuration
-
URL - URL to the GitHub instance. If left blank, it defaults to the https://github.com location
-
User & Password - credentials used to access GitHub - Ontrack only needs a read access to the repositories
-
OAuth2 token - authentication can also be performed using an API token instead of using a user/password pair
The existing configurations can be updated and deleted.
Although it is possible to work with an anonymous user when accessing GitHub, this is not recommended: API calls will be rate-limited, which can lead to errors. |
Project configuration
The link between a project and a GitHub repository is defined by the GitHub configuration property:
-
Configuration - selection of the GitHub configuration created before - this is used for the accesses
-
Repository - GitHub repository, like nemerosa/ontrack
-
Indexation interval - interval (in minutes) between each synchronisation (Ontrack maintains internally a clone of the GitHub repositories)
-
Issue configuration - issue service. If not set or set to "GitHub issues", the issues of the repository will be used
Branches can be configured for Git independently.
SCM Catalog configuration
The SCM Catalog feature requires some additional configuration for GitHub. See the specific section for more information.
3.3.2. Working with GitLab
GitLab unifies issues, code review, CI and CD into a single UI.
When working with Git in Ontrack, one can configure a project to connect to a GitLab repository.
General configuration
The access to a GitLab instance must be configured.
-
as administrator, go to the GitLab configurations menu
-
click on Create a configuration
-
in the configuration dialog, enter the following parameters:
-
Name - unique name for the configuration
-
URL - URL to the GitLab instance (not the repository, the GitLab server)
-
User & Personal Access Token - credentials used to access GitLab
-
Ignore SSL Certificate - select Yes if the SSL certificate for your GitLab instance cannot be trusted by default.
You cannot use the account’s password - only Personal Access Tokens are supported. |
The existing configurations can be updated and deleted.
Project configuration
The link between a project and a GitLab repository is defined by the GitLab configuration property:
-
Configuration - selection of the GitLab configuration created before - this is used for the access
-
Issue configuration - select the source of issues for this project. This can be any ticketing system (like JIRA) or the built-in issue management for this GitLab project (displayed as "GitLab issues")
-
Repository - repository name, like nemerosa/ontrack
-
Indexation interval - how often, in minutes, the content of this repository must be synchronised with Ontrack. Use 0 to disable automatic synchronisation (it can still be done manually).
Branches can be configured for Git independently.
3.3.3. Working with BitBucket
BitBucket is an enterprise Git repository manager by Atlassian.
When working with Git in Ontrack, one can configure a project to connect to a Git repository defined in BitBucket in order to access the change logs.
General configuration
The access to a BitBucket instance must be configured.
-
as administrator, go to the BitBucket configurations menu
-
click on Create a configuration
-
in the configuration dialog, enter the following parameters:
-
Name - unique name for the configuration
-
URL - URL to the Stash instance
-
User & Password - credentials used to access Stash - Ontrack only needs a read access to the repositories
The existing configurations can be updated and deleted.
Project configuration
The link between a project and a Stash repository is defined by the Stash configuration property:
-
Configuration - selection of the Stash configuration created before - this is used for the access and the issues management
-
Project - name of the Stash project
-
Repository - name of the Stash repository
-
Indexation interval - interval (in minutes) between each synchronization (Ontrack maintains internally a clone of the BitBucket repositories)
-
Issue configuration - configured issue service to use when looking for issues in commits.
Branches can be configured for Git independently.
3.3.4. Git searches
The Git Ontrack extension provides 3 search indexers:
-
looking for Ontrack branches based on the name of the Git branch
-
looking for commits (using hashes, authors, description, …)
-
looking for issues mentioned in commit messages (issue key)
The following configuration properties are available to tune the Git search capabilities:
# How often the full re-indexation of commits must be performed
# The schedule is either a number of minutes, or can use
# a duration notation, like 1h, 60m, 1d, etc.
# Even for big volumes, 1 hour is more than enough.
ontrack.config.search.git.commits.schedule = 1h
# Set to false to disable the automated regular indexation
# of Git commits. If disabled, the indexation job is still present
# but must be run manually.
ontrack.config.search.git.commits.scheduled = true
3.3.5. Working with Subversion
Ontrack allows you to configure projects and branches to work with Subversion in order to:
-
get change logs
-
search for issues linked to builds and promotions
-
search for revisions
Subversion configurations
In order to be able to associate projects and branches with Subversion information, an administrator must first define one or several Subversion configurations.
As an administrator, go to the user menu and select SVN configurations.
In this page, you can create, update and delete Subversion configurations. Parameters for a Subversion configuration are:
-
a name - it will be used for the association with projects
-
a URL - Ontrack supports the svn, http and https protocols - if the SSL certificate is not recognized by default, some additional configuration must be done at system level.
The URL must be the URL of the repository. |
-
a user and a password if the access to the repository requires authentication
-
a tag filter pattern - optional, a regular expression which defines which tags must be indexed
-
several URLs used for browsing
-
indexation interval in minutes (see below)
-
indexation start - the revision where to start the indexation from
-
issue configuration - issue service associated with this repository
Indexation
Ontrack works with Subversion by indexing some repository information locally, in order to avoid going over the network for each Subversion query.
This indexation is controlled by the parameters of the Subversion configuration: starting revision and interval. If this interval is set to 0, the indexation will have to be triggered manually.
Among the information being indexed, the copy of tags is performed and can be filtered if needed.
In order to access the indexation settings of a Subversion configuration, click on the Indexation link.
From the indexation dialog, you can:
-
force the indexation from the latest indexed revision
-
reindex a range of revisions
-
erase all indexed information, and rerun it
The indexations run in background.
Project configuration
You can associate a project with a Subversion configuration by adding the SVN configuration property and selecting:
-
a Subversion configuration using its name
-
a reference path (typically to the trunk)
Like all paths in Subversion configurations of projects and branches, this is a relative path to the root of the repository. Not an absolute URL. |
From then on, you can start configuring the branches of the project.
Branch configuration
You can associate a branch with Subversion by adding the SVN configuration property and selecting:
-
a path to the branch
-
a build revision link and its configuration if any
The path to the branch is relative to the URL of the SVN repository. |
The build commit link defines how to associate a build and a location in Subversion (tag, revision, …). This link works in both directions, since we also need to find builds based on Subversion information.
Build commit links are extension points - the following are available in Ontrack.
Tag name
The build name is considered a tag name in the tags folder for the branch.
For example, if the branch path is /projects/myproject/branches/1.1 then the tags folder is /projects/myproject/tags and build names will be looked for in this folder.
No configuration is needed.
Tag pattern name
The build name is considered a tag name in the tags folder but must follow a given pattern.
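The pattern check can be sketched with simple glob matching. This is an illustrative assumption; the exact pattern dialect Ontrack uses for tag patterns may differ.

```python
import fnmatch

# Sketch: decide whether a tag name matches the configured tag pattern,
# e.g. only tags of the form "1.1.*" are considered builds of the branch.
def tag_matches(pattern: str, tag_name: str) -> bool:
    return fnmatch.fnmatchcase(tag_name, pattern)

print(tag_matches("1.1.*", "1.1.4"))  # True
```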
Build / tag synchronization
For branches whose builds are associated with tags, you have the option to enable a synchronization between the builds in Ontrack and the tags in the Subversion branch.
In the branch page, add the SVN synchronisation property and configure it:
Parameter | Description |
---|---|
|
If set to |
|
The frequency, in minutes, of the synchronization. If set to |
In order to disable the tag/build synchronization globally, without having to change all the configured branches manually, add the following entry to the Ontrack configuration file (application.yml) and restart Ontrack:
|
3.4. Change logs
When working with Git or Subversion, you have the opportunity to get a change log between two builds of the same project.
3.4.1. Selection
In the branch view, you can select the two boundaries of the change log by:
-
clicking on a build row
-
clicking on another build row while holding the shift key
Two arrows indicate the current selection:
Note that by default the first and the last build of the current view are selected as boundaries. |
The Change log button displays the change log page, which contains four sections:
-
general information about the two build boundaries
-
the commits (for Git) or revision (for Subversion) section
-
the issues section
-
the file changes selection
Only the first section (build information) is always displayed - the three other ones are displayed only when you request them by clicking on one of the corresponding buttons or links.
4. File changes
The list of file changes between the two build boundaries is displayed here:
Each file change is associated with the corresponding changes. This includes the list of revisions for Subversion.
Additionally, you can define filters on the file changes, in order to have access to a list of files impacted by the change log.
By entering an ANT-like pattern, you can display the file paths which match:
For more complex selections, you can click on the Edit button, and a dialog box allows you to define:
-
a name for your filter
-
a list of ANT-like patterns to match
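The ANT-like matching mentioned above can be sketched as follows: "**" spans directories while "*" stays within one path segment. This is an illustrative sketch; the exact pattern dialect used by Ontrack may differ.

```python
import re

# Sketch: translate an ANT-like pattern into a regular expression and
# match it against a full file path.
def ant_match(pattern: str, path: str) -> bool:
    regex = re.escape(pattern)
    regex = regex.replace(r"\*\*/", "(?:.*/)?").replace(r"\*\*", ".*").replace(r"\*", "[^/]*")
    return re.fullmatch(regex, path) is not None

print(ant_match("**/*.java", "src/main/java/App.java"))  # True
```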
If you are authorized, you can also save this filter for the project, allowing its selection by all users.
In the list of filters, you find the filters you have defined and the ones which have been shared for the whole project. The latter ones are marked with an asterisk (*):
You can update and delete filters. Note that the shared filters won’t be actually updated or deleted, unless you are authorized.
Finally, you can get the unified diff for the selected filter by clicking on the Diff button:
This will display a dialog with:
-
the unified diff you can copy
-
a permalink which allows you download the diff from another source
You can obtain a quick diff on one file by clicking on the icon at the right of a file in the change log:
4.1. Searching
The top search box allows searching for objects stored in Ontrack, like projects, branches or builds, based on their names or on their properties, like their label, Git branch, etc. Other items, like Git commits or issues mentioned in commit messages, can also be looked for.
In the top search box, you can:
-
select the type of object you look for
-
enter search tokens
Upon hitting the Enter key, a search is performed using the token and the given type if selected.
The search page is displayed and repeats the type and token.
If no result is found, the search page displays a warning.
If exactly 1 result is returned, Ontrack will automatically redirect to the page associated with this result.
If there is more than one result, the list is displayed, up to 20 results. If more results are available, a More link is displayed, which loads up to 20 more results.
4.1.1. Searching engine
By default, a built-in engine is used to provide results, but this engine is quite slow and can be replaced by a much faster ElasticSearch-based engine.
The ElasticSearch engine will become the default one starting from version 4.0. |
See ElasticSearch search engine on how to enable the ElasticSearch based engine.
4.2. Project indicators
Project indicators are set at project level to hold values about different types of information.
Those types of information are grouped into categories and can have a specific value type, like a boolean (yes/no), a percentage, a numeric value, etc. Types can be entered manually, imported or computed.
Each of those indicator values has a level of compliance which is computed as a
percentage (from 0% - very bad - to 100% - very good) according to the configuration
of the type. The compliance is also associated with a rating, from F (very bad)
to A (very good).
The indicator values can be entered manually at project level or be computed.
Projects can be grouped together in portfolios which are also associated with a subset of categories. And a global view of all portfolios is associated with a specific subset of categories.
Finally, the history of indicators is retained by Ontrack and can be used to compute trends at the different levels (at project level, at portfolio level or globally).
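The compliance-to-rating derivation described above can be sketched as follows. Note that the percentage thresholds below are illustrative assumptions, not Ontrack's actual configuration:

```python
# Sketch of the compliance-to-rating mapping: Ontrack computes a
# compliance percentage (0% = very bad, 100% = very good) and derives
# a rating from F to A. The thresholds here are illustrative
# assumptions, not the values Ontrack actually uses.

def rating(compliance: int) -> str:
    """Map a compliance percentage (0-100) to a rating letter F..A."""
    if not 0 <= compliance <= 100:
        raise ValueError("compliance must be between 0 and 100")
    thresholds = [(90, "A"), (75, "B"), (60, "C"), (40, "D"), (20, "E")]
    for limit, letter in thresholds:
        if compliance >= limit:
            return letter
    return "F"
```

A 100% compliance always maps to the best rating (A) and 0% to the worst (F); only the intermediate cut-off points are assumptions here.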
4.2.1. Indicators authorization model
Having access to a project grants automatically access to viewing the associated indicators.
However, managing indicators, types & portfolios is granted according to the following matrix:
| Function | Administrator | Global indicator manager | Global indicator portfolio manager | Project manager/owner | Project indicator manager |
|---|---|---|---|---|---|
| Global indicators | Yes | Yes | No | No | No |
| Type and category management (1) | Yes | Yes | No | No | No |
| Portfolio management | Yes | Yes | Yes | No | No |
| Indicator edition (2) | Yes | Yes | No | Yes | Yes |

(1) Imported types & categories are not open to edition.
(2) Computed indicators are not open to manual edition.
4.2.2. Indicator types management
Categories & types can be managed manually by an authorized user using the following user menus:
-
Indicator categories
-
Indicator types
A category must have the following attributes:
-
id - unique ID for this category among all the categories
-
name - display name for this category
A type must have the following attributes:
-
id - unique ID for this type among all the types
-
name - display name for this type
-
link - optional URL for more information about this type
-
value type - the type of value held by this indicator type, for example a percentage or a boolean
-
value config - configuration for the value type, used to compute the indicator compliance and rating
Categories and types can also be imported or computed. In such a case, both the category and the type are associated with a source and they cannot be edited.
Value types
The following value types are available in Ontrack:
| Type | Description | Configuration | Example |
|---|---|---|---|
| Yes/No | Value which can be either Yes or No | | "Project should be built with Gradle" - because of the "should", the indicator required value is set to false |
| Percentage | Integer value between 0 and 100 | | "Test coverage" is expressed as a percentage where higher is better (for example, any value >= 80% rated A); "Duplicated code" can also be expressed as a percentage, but this time lower is better (for example, any value <= 10% rated A) |
| Number | Integer value >= 0 | | "Number of blocking issues" is expressed as a number where a value of 0 has a rating of A; "Number of tests" could be expressed as a number where higher is better |

Additional value types can be created by registering an extension implementing the corresponding value type interface.
4.2.3. Indicator edition
An authorized user can edit the indicators for a project by going to the Tools menu and selecting Project indicators:
All available types are displayed, grouped by categories, and each indicator is shown together with its value and its rating:
If the indicator has a previous value, its previous rating is displayed.
If the indicator is open to edition, the user can click on the pencil icon to edit the value according to the value type. Upon validation, a new indicator value is stored; the old value is kept for history and trend computation.
Comments can be associated with indicator values. Links & issue references will be rendered as links.
An authorized user can also delete the indicator; this actually registers a new null value for the indicator. The historical values are kept.
The history of an indicator can be accessed by clicking on the History icon:
The list of portfolios the project belongs to is displayed at the top of the indicator list:
4.2.4. Indicator portfolios
Portfolios are available in the Indicator portfolios user menu and the associated page displays the list of already created portfolios.
Each portfolio is associated with a list of selected global categories and each of those categories is associated with the average rating for all the projects and all the types of this category.
Only indicators having an actual value are used to compute the average rating. The indicators which are not set are not used for the computation, and the ratio of the number of indicators being set to the total number of indicators is also displayed. This gives an idea of the trust we can have in this average rating.
The minimum ratings are also mentioned if they diverge from the average.
The trend period allows you to display the average value from the past and to compare it with the current value.
The trend computation is currently not correct - see the #793 issue.
Global indicators
Authorized users can edit the list of categories which are displayed on the portfolio overview by clicking on the Global indicators command:
On the associated page, the user can select / unselect the categories which must be displayed for all portfolios:
Closing this page goes back to the portfolio overview.
Management of portfolios
Authorized users can create, edit and delete portfolios.
Creating a portfolio is done using the Create portfolio command:
The portfolio creation dialog requires:
- an ID - must be unique among all the portfolios and will be used as an identifier. It must therefore comply with the regular expression [a-z0-9:-]+ (lowercase letters, digits, colons or dashes). The ID cannot be modified later on.
- a display name
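The ID constraint above can be checked against the documented regular expression; a minimal sketch (the sample IDs are made up for illustration):

```python
import re

# The portfolio ID must match [a-z0-9:-]+ (lowercase letters, digits,
# colons or dashes), as stated by the portfolio creation dialog.
PORTFOLIO_ID = re.compile(r"^[a-z0-9:-]+$")

def is_valid_portfolio_id(candidate: str) -> bool:
    """Return True if the candidate is an acceptable portfolio ID."""
    return bool(PORTFOLIO_ID.match(candidate))
```

Uppercase letters, spaces or any other character outside the allowed set make the ID invalid.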
Once created, the portfolio appears on the portfolio overview and can be edited or deleted using the appropriate icons:
-
the portfolio name is actually a link going to the detailed portfolio view
-
the arrow icon goes to the home page and displays only the projects associated with this portfolio
-
the edition icon goes to the portfolio edition page
-
the deletion icon displays a warning and allows the user to delete the portfolio.
The deletion of a portfolio does not delete any indicator in any project.
Portfolio page
By clicking on the portfolio name in the portfolio overview, you get to a page displaying:
-
the list of projects associated with this portfolio
-
the list of categories associated with this portfolio
-
the average indicator rating for project and for each category
As for the portfolio overview, the average rating is computed only using the indicators which are actually set, and the ratio of filled vs. total indicators is displayed.
The trend period selector allows you to check the past average values and the associated trends.
Clicking on a project name goes to the project indicators page.
Clicking on a category name goes to a page displaying a detailed view of indicators for all the types in this category and for all the projects of this portfolio:
In this view, clicking on the icon to the right of the type name will bring up a page displaying the indicator values for this type for all the projects of this portfolio:
According to your rights, you can edit and delete indicator values from this page.
Portfolio edition
The portfolio edition page allows you to:
-
edit the portfolio display name (not the ID)
-
set a label to select the associated projects
-
select the categories associated with this portfolio
The label allows a portfolio to be associated with all projects which have this label. See Project labels for more information on how to manage labels.
"Global indicator portfolio managers" and "Global indicator managers" can associate existing labels to projects but cannot create new labels.
4.2.5. Importing categories and types
While indicator categories and types can be entered manually, it is also possible to import lists of categories and their associated types.
To import categories & types in Ontrack, you need a user allowed to manage types, and you can use the POST /extension/indicators/imports end point, passing a JSON payload.
For example, with Curl:
curl --user <user> \
-H "Content-Type: application/json" \
-X POST \
http://ontrack/extension/indicators/imports \
--data @payload.json
where:
{
"source": "principles",
"categories": [
{
"id": "service-principles",
"name": "Service Principles",
"types": [
{
"id": "java-spring-boot",
"name": "SHOULD Use Java & spring boot stack",
"required": false,
"link": "https://example.com/architecture-principles/latest/service_principles.html#java-spring-boot"
}
]
}
]
}
The source is an ID identifying the nature of this list.
Each category must have an id (unique in Ontrack) and a display name.
Each type must have:
- an id (unique in Ontrack)
- a display name
- a required flag - as of now, only "Yes/No" value types are supported
- an optional link to some external documentation
Upon import:
-
new categories & types are created
-
existing categories & types are updated and associated indicators are left untouched
-
removed categories & types are marked as deprecated, and associated indicators are kept
Instead of marking obsolete categories & types as deprecated, they can be deleted using a dedicated import option.
Imported categories & types cannot be edited.
4.2.6. Computing indicators
It is possible to define some types whose value is not entered manually but is computed by Ontrack itself.
You do so by registering an extension which implements the IndicatorComputer interface, or which extends the AbstractBranchIndicatorComputer class when the value must be computed from the "main branch" of a project.
See the documentation of those two types for more information.
The SonarQubeIndicatorComputer extension is an example of such an implementation.
Computed categories & types cannot be edited, and their values cannot be edited manually.
5. Administration
5.1. Security
The Ontrack security is based on accounts and account groups, and on authorizations granted to them.
5.1.1. Concepts
Each action in Ontrack is associated with an authorization function, and those functions are grouped together in roles which are granted to accounts and account groups.
An account can belong to several account groups, and its final set of authorization functions is the aggregation of the rights given to the account and to its groups.
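The aggregation rule just described is a plain set union; a small sketch (the function names are made up for illustration):

```python
# An account's effective authorization functions are the union of the
# functions granted directly to the account and those granted to each
# of its groups. The function names below are illustrative only.

def effective_functions(account_functions, group_functions_list):
    """Aggregate the account's own functions with its groups' functions."""
    functions = set(account_functions)
    for group_functions in group_functions_list:
        functions |= set(group_functions)
    return functions
```

A function granted either directly or through any single group is enough for the account to hold it.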
5.1.2. Roles
As of now, only roles can be assigned to groups and accounts, and the list of roles and their associated functions is defined by Ontrack itself.
Ontrack distinguishes between global roles and project roles.
Extensions can contribute to built-in roles and functions - see Extending the security for details.
Global roles
An ADMINISTRATOR has access to all the functions of Ontrack, in all projects. At least one account with this role should be defined.
By default, right after installation, a default admin account is created with the ADMINISTRATOR role, having admin as password. This password should be changed as soon as possible.
A CREATOR can create any project and can, on all projects, configure them, create branches, manage branch templates, create promotion levels and validation stamps. This role should be attributed to service users in charge of automating the definition of projects and branches.
An AUTOMATION user can do the same things as a CREATOR but can, on all projects, additionally edit promotion levels and validation stamps, create builds, promote and validate them, synchronize branches with their template, manage account groups and project permissions. This role is suited for build and integration automation (CI).
A CONTROLLER can, on all projects, create builds, promote and validate them, synchronize branches with their template. It is suited for a basic CI need when the Ontrack structure already exists and does not need to be created.
A GLOBAL VALIDATION MANAGER can manage validation stamps across all projects.
A PARTICIPANT can view all projects, and can add comments to all validation runs.
A READ_ONLY can view all projects, but cannot perform any action on them.
The global roles can only be assigned by an administrator, in the Account management page, by going to the Global permissions command.
A global permission is created by associating:
-
a permission target (an account or a group)
-
a global role
Creation:
-
type the first letter of the account or the group you want to add a permission for
-
select the account or the group
-
select the role you want to give
-
click on Submit
Global permissions are created or deleted, not updated.
Project roles
A project OWNER can perform all operations on a project except deleting it.
A project PARTICIPANT has the right to see a project and to add comments in the validation runs (comment + status change).
A project VALIDATION_MANAGER can manage the validation stamps and create/edit the validation runs.
A project PROMOTER can create and delete promotion runs, can change the validation runs statuses.
A project PROJECT_MANAGER combines the functions of a PROMOTER and of a VALIDATION_MANAGER. They can additionally manage branches (creation / edition / deletion) and the common build filters, and can also assign labels to the project.
A project READ_ONLY user can view this project, but cannot perform any action on it.
Only project owners, automation users and administrators can grant rights in a project.
In the project page, select the Permissions command.
A project permission is created by associating:
-
a permission target (an account or a group)
-
a project role
Creation:
-
type the first letter of the account or the group you want to add a permission for
-
select the account or the group
-
select the role you want to give
-
click on Submit
Project permissions are created or deleted, not updated.
5.1.3. Accounts
Accounts are created with either:
-
built-in authentication, with a password stored and encrypted in Ontrack itself
-
LDAP authentication, when an LDAP instance is enabled (see LDAP setup)
5.1.4. Account groups
An administrator can create groups using a name and a description, and assign them a list of global or project roles.
An account can be assigned to several groups.
If LDAP is enabled, some LDAP groups can be mapped to the account groups.
5.1.5. General settings
By default, all users (including anonymous ones) have access to all the projects, at least in read only mode.
You can disable this anonymous access by going to the Settings and clicking the Edit button in the General section. There you can set the Grants project view to all option to No.
5.1.6. Extending the security
Extensions can extend the security model beyond what is defined in the Ontrack core. See Extending the security for more details.
5.2. LDAP setup
It is possible to enable authentication using an LDAP instance and to map LDAP-defined groups onto Ontrack groups.
5.2.1. LDAP general setup
As an administrator, go to the Settings menu. In the LDAP settings section, click on Edit and fill the following parameters:
- Enable LDAP authentication: Yes
- URL: URL to your LDAP
- User and Password: credentials needed to access the LDAP
- Search base: query to get the user
- Search filter: filter on the user query
- Full name attribute: attribute which contains the full name, cn by default
- Email attribute: attribute which contains the email, email by default
- Group attribute: attribute which contains the list of groups a user belongs to, memberOf by default
- Group filter: optional, name of the OU field used to filter groups a user belongs to
As of version 2.14, the list of groups (indicated by the memberOf attribute or any other attribute defined by the Group attribute property) is not searched recursively: only the direct groups are taken into account.
For example:
The settings shown above are suitable for use with an Active Directory LDAP instance.
5.2.2. LDAP group mapping
An LDAP group a user belongs to can be mapped onto an Ontrack group.
As an administrator, go to the Account management menu and click on the LDAP mapping command.
This command is only available if the LDAP authentication has been enabled in the general settings.
To add a new mapping, click on Create mapping and enter:
-
the name of the LDAP group you want to map
-
the Ontrack group which must be mapped
For example, if you map the ontrack_admin LDAP group to an Administrators group in Ontrack, any user who belongs to ontrack_admin will automatically be assigned to the Administrators group when connecting.
This assignment based on mapping is dynamic only, and no information is stored about it in Ontrack.
Note that those LDAP mappings can be generated using the DSL.
Existing mappings can be updated and deleted.
5.3. Administration console
The Administration console is available to the Administrators only and is accessed through the user menu.
It allows an administrator to:
-
manage the running jobs
-
see the state of the external connections
-
see the list of extensions
5.3.1. Managing running jobs
The list of all registered jobs is visible to the administrator. From there, you can see:
-
general information about the jobs: name, description
-
the run statistics
Filtering the jobs
The following filters are available:
-
status
-
idle jobs: jobs which are scheduled, but not running right now
-
running jobs: jobs which are currently running
-
paused jobs: jobs which are normally scheduled but which have been paused
-
disabled jobs: jobs which are currently disabled
-
invalid jobs: jobs which have been marked as invalid by the system (because their context is no longer applicable for example)
-
category and type of the job
-
error status - jobs whose last run raised an error
-
description - filtering using a search token on the job description
Controlling the jobs
For one job, you can:
-
force it to run now, if not already running or disabled
-
pause it if it is a scheduled job
-
resume it if it was paused
-
remove it if it is an invalid job
You can also pause or resume all the jobs using the Actions menu. All jobs currently selected through the filter will be impacted.
The same Actions menu also allows you to clear the current filter and display all the jobs.
5.4. Application log messages
The list of application log messages is available to the Administrators only and is accessed through the user menu.
It allows an administrator to manage the error messages.
The log items are displayed from the most recent to the oldest. By default, only 20 items are displayed on a page. You can navigate from page to page by using the Previous and Next buttons.
You can filter the log entries you want to see by using the filter fields:
-
after - only log entries created after this time will be displayed
-
before - only log entries created before this time will be displayed
-
authentication - you can enter the name of a user, and only errors having occurred to this user will be displayed.
-
free text - this text will be searched in all other fields of the log message: details, information, type.
Click on the Filter button to activate the filter and on Reset filter to delete all fields.
You can refresh the log entries by clicking the Refresh log button.
Finally, you can remove all log entries (all of them, independently from the current filter) by clicking on the Delete all entries button. A confirmation will be asked.
Log entries are kept only for 7 days. This delay can be configured. See the documentation for more information.
You can click on the Details… button of a log entry to get more details about the error:
If available, the stack trace can be selected and copied (actually, like any other element of this dialog). Dismiss the dialog by clicking on the OK button.
5.5. Status page
The status page is available to the Administrators only and is accessed through the user menu.
It displays two sections:
-
the health of Ontrack itself, based on Spring Boot health information
-
the statuses of all the connectors
For example:
This information is also available through several management end points.
6. Integration
6.1. ElasticSearch search engine
Enable the ElasticSearch engine by setting the ontrack.config.search.engine configuration property to elasticsearch.
Additionally, the spring.elasticsearch.rest.uris property must be set to specify where ElasticSearch is deployed, together with any credentials that are needed (see the Spring Boot documentation).
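Put together, the two settings above might look like this in an application.properties file (the host and port are example values):

```properties
# Switch the search engine from the built-in one to ElasticSearch
ontrack.config.search.engine=elasticsearch
# Where the ElasticSearch cluster is reachable (example value)
spring.elasticsearch.rest.uris=http://elasticsearch:9200
```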
6.1.1. ElasticSearch configuration properties
See Configuration properties for more search configuration properties.
6.1.2. ElasticSearch indexers
See Extending the search for extending search capabilities of Ontrack.
6.2. Integration with Jenkins
The best way to integrate Ontrack with your Jenkins instance is to use the Ontrack plug-in.
Look at its documentation for details about its configuration and how to use it.
6.3. Monitoring
Ontrack is based on Spring Boot and exports metrics and health indicators that can be used to monitor the status of the application.
6.3.1. Health
The /manage/health end point provides a JSON tree which indicates the status of all connected systems: JIRA, Jenkins, Subversion repositories, Git repositories, etc.
Note that an administrator can have access to this information as a dashboard in the Admin console (accessible through the user menu).
6.3.2. Metrics
Since version 2.35 / 3.35, Ontrack uses the Micrometer framework to manage metrics, in order to allow a better integration with Spring Boot 2. See Metrics migration for information about the migration.
By default, Ontrack supports two external registries for metrics:
-
InfluxDB
-
Prometheus
The export to those engines is disabled by default and must be enabled explicitly.
For example, for Prometheus, you can use the management.metrics.export.prometheus.enabled property or the MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED environment variable.
For the rest of the configuration, you have to consult the Spring Boot or Micrometer documentation.
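For instance, in application.properties (the property name is the one quoted above; the value simply turns the export on):

```properties
# Enable the export of metrics to the Prometheus registry
management.metrics.export.prometheus.enabled=true
```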
List of metrics
The list of Ontrack-specific metrics, with their tags and values, is available through the application's management end points.
General metrics:
- ontrack_error (counter) - number of errors (the type tag contains the type of error)
Statistics about the objects stored by Ontrack:
- ontrack_entity_project_total (gauge) - total number of projects
- ontrack_entity_branch_total (gauge) - total number of branches
- ontrack_entity_build_total (gauge) - total number of builds
- ontrack_entity_promotionLevel_total (gauge) - total number of promotion levels
- ontrack_entity_promotionRun_total (gauge) - total number of promotion runs
- ontrack_entity_validationStamp_total (gauge) - total number of validation stamps
- ontrack_entity_validationRun_total (gauge) - total number of validation runs
- ontrack_entity_validationRunStatus_total (gauge) - total number of validation run statuses
- ontrack_entity_property_total (gauge) - total number of properties
- ontrack_entity_event_total (gauge) - total number of events
General metrics about jobs:
- ontrack_job_count_total (gauge) - total number of jobs
- ontrack_job_running_total (gauge) - total number of running jobs
- ontrack_job_error_total (gauge) - total number of jobs in error
- ontrack_job_paused_total (gauge) - total number of paused jobs
- ontrack_job_disabled_total (gauge) - total number of disabled jobs
- ontrack_job_invalid_total (gauge) - total number of invalid jobs
- ontrack_job_error_count_total (gauge) - total number of errors among all the jobs
Information about individual jobs:
- ontrack_job_duration_ms (timer) - duration of the execution of the job
- ontrack_job_run_count (counter) - number of times a job has run
- ontrack_job_errors (counter) - number of errors for this job
Job metrics carry additional tags identifying the job.
Run information:
- ontrack_run_build_time_seconds (timer) - duration of a run for a build. It is associated with project and branch tags.
- ontrack_run_validation_run_time_seconds (timer) - duration of a run for a validation run. It is associated with project, branch, validation_stamp and status tags.
More details at Run info.
Information about connectors (Jenkins, JIRA, Git, etc.):
- ontrack_connector_count (gauge) - number of connectors
- ontrack_connector_up (gauge) - number of UP connectors
- ontrack_connector_down (gauge) - number of DOWN connectors
Connector metrics carry additional tags identifying the connector.
InfluxDB metrics
This is an experimental feature. In the future, especially when migrating to Spring Boot 2.0, the configuration might change. The feature is very likely to stay though.
The InfluxDB extension is shipped by default with Ontrack but is activated only if some properties are correctly set:
| Property | Environment variable | Default | Description |
|---|---|---|---|
| | | | Enables the export of run info to InfluxDB |
| | | "http://localhost:8086" | URI of the InfluxDB database |
Optionally, the following properties can also be set:
| Property | Environment variable | Default | Description |
|---|---|---|---|
| | | "root" | User name to connect to the InfluxDB database |
| | | "root" | Password to connect to the InfluxDB database |
| | | "ontrack" | Name of the InfluxDB database |
| | | | If … |
| | | | If … |
| | | | Level of log when communicating with InfluxDB. Possible values are: … |
When an InfluxDB connector is correctly set, some Ontrack information is automatically sent to create timed values.
6.4. Management end point
Ontrack exposes additional Spring Boot actuator end points.
6.4.1. Connectors
The connectors are used to connect to external systems like Jenkins, JIRA, Git repositories, etc. The manage/connectors end point allows an administrator to get information about the state of those connectors.
The connector statuses are also exposed as metrics.
6.5. GraphQL support
Since version 2.29, Ontrack provides some support for GraphQL.
While most of the Ontrack model is covered, only the query mode is supported right now. Support for mutations might be integrated in later releases.
The GraphQL end point is available at the /graphql context path. For example, if Ontrack is available at http://localhost:8080, then the GraphQL end point is available at http://localhost:8080/graphql.
Ontrack supports all capabilities of GraphQL schema introspection.
Example of a GraphQL query, to get the list of branches for a project:
{
projects (id: 10) {
branches {
id
name
}
}
}
6.6. Calling with Curl
One basic way to integrate with the GraphQL interface of Ontrack is to use Curl.
Given the following file:
{
  "query": "{ projects (id: $projectId) { branches { id name }}}",
  "variables": {
    "projectId": 10
  }
}
You can POST this file to the Ontrack GraphQL end point, for example:
curl -X POST --user user http://localhost:8080/graphql --data @query.json -H "Content-Type: application/json"
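The payload can also be built programmatically; this sketch only constructs and checks the JSON body (no server call is made, so the URL and credentials are out of scope here):

```python
import json

# Build the JSON payload expected by the /graphql end point:
# a "query" string plus a "variables" object, mirroring the
# query.json file used in the curl example above.
def graphql_payload(query: str, variables: dict) -> str:
    """Serialize a GraphQL query and its variables to a JSON string."""
    return json.dumps({"query": query, "variables": variables})

payload = graphql_payload(
    "{ projects (id: $projectId) { branches { id name }}}",
    {"projectId": 10},
)
```

The resulting string can then be posted with any HTTP client, exactly like the file in the curl command.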
6.7. Using the DSL
The simplest way is to use the Ontrack DSL to run a query:
def result = ontrack.graphQLQuery(
'''{
projects (id: $projectId) {
id
branches {
id
name
}
}
}''',
[
projectId: 10,
]
)
assert result.errors != null && result.errors.empty
assert result.data.projects.size() == 1
assert result.data.projects.get(0).id == 10
See the graphQLQuery documentation for more details.
6.8. GraphiQL support
Ontrack supports GraphiQL and allows you to experiment with Ontrack GraphQL queries directly in your browser.
You can access the GraphiQL IDE page by clicking on the GraphiQL command in the top right corner of the home page of Ontrack:
You can then type and experiment with your GraphQL queries:
The access rights used for your GraphQL queries are inherited from your Ontrack connection. Connect with your user in Ontrack before switching to the GraphiQL IDE. Logging in from within GraphiQL is not supported yet.
6.9. Extending the GraphQL schema
The core Ontrack GraphQL query schema can be extended by custom extensions.
See Extending GraphQL for more information.
6.10. Encryption service
Secrets used by Ontrack are encrypted using keys managed by a ConfidentialStore.
Ontrack provides three types of storage:
-
file based storage (default)
-
Vault storage
-
database storage
If needed, you can also create your own form of storage using extensions.
6.10.1. Selection of the confidential store
The selection of the confidential store is done at startup time using the ontrack.config.key-store configuration property. It defaults to file (see below).
Additional configuration properties might be needed according to the type of store.
6.10.2. File confidential store
This is the default store, but its selection can be made explicit by setting the ontrack.config.key-store configuration property to file.
This store saves the keys in the working directory, under the security/secrets subfolder. A master.key file is used to encrypt the individual keys themselves, so two files will typically be present:
-
master.key
-
net.nemerosa.ontrack.security.EncryptionServiceImpl.encryption
6.10.3. JDBC confidential store
This store manages the keys directly in the Ontrack database. It can be
selected by setting the ontrack.config.key-store
configuration property to jdbc
.
This store is intrinsically insecure since it stores the keys in the same location where the secrets are themselves stored.
No further configuration is needed.
6.10.4. Vault confidential store
By setting the ontrack.config.key-store configuration property to vault, Ontrack will use Vault to store its encryption keys.
The following configuration properties are available to configure the connection to Vault:
| Property | Default | Description |
|---|---|---|
| | | URI to the Vault end point |
| | | Token authentication |
| | | Path prefix for the storage of the keys |

WARNING: As of now, the support for Vault storage is experimental and is subject to change in later releases. In particular, the authentication mechanism might change.
6.10.5. Migrating encryption keys
In the event you want to migrate the encryption keys from one type of storage to another, follow this procedure.
In the procedure below, ${ONTRACK_URL} designates the Ontrack URL and ${ONTRACK_ADMIN_USER} the name of an Ontrack user which has the ADMINISTRATOR role.
Using the initial configuration for the store, start by exporting the key:
curl ${ONTRACK_URL}/admin/encryption \
--user ${ONTRACK_ADMIN_USER} \
--output ontrack.key
This command will export the encryption key into the local ontrack.key file.
Start Ontrack using the new configuration.
There might be errors at startup, when some jobs start to collect data from the external applications. Those errors can be safely ignored for now.
Import the key file into Ontrack:
curl ${ONTRACK_URL}/admin/encryption \
--user ${ONTRACK_ADMIN_USER} \
-X PUT \
-H "Content-Type: text/plain" \
--data @ontrack.key
Restart Ontrack.
6.10.6. Losing the encryption keys
In case you lose the encryption keys, the consequence will be that the secrets stored by Ontrack won’t be able to be decrypted. This will typically make the external applications your Ontrack instance connects to unreachable.
The only way to fix this is to re-enter the secrets.
Some pages might not display correctly if some applications are not reachable.
6.11. Run info
Builds and validation runs can be associated with some run information which contains:
-
source of the information, like a Jenkins job
-
trigger of the information, like a SCM change
-
duration of the collection for the information (like the duration of a job)
6.11.1. Collection of run info
Run info can be attached to a build or a validation run using the REST API or the DSL of Ontrack.
This is typically done at CI engine level, where a solution like the Ontrack Jenkins plugin simplifies the operation.
When using the Jenkins pipeline as code, the ontrackBuild and ontrackValidate steps will do this automatically, so there is nothing to change. For example:
post {
success {
ontrackBuild project: "xxx", branch: "1.0", build: version
}
}
When using the DSL, the run info must be specified explicitly. The Jenkins plugin provides a jenkins.runInfo binding which contains some run info ready to be passed:
ontrackScript script: """
def b = ontrack.build(...)
b.runInfo = jenkins.runInfo
"""
6.11.2. Displaying the run info
The run info is displayed in the branch overview and the build page for builds, and in the validation stamp and the validation run pages for the validation runs.
It is of course available through the REST API, GraphQL and the DSL.
6.11.3. Exporting the run info
While the run info is available from Ontrack, it can also be exported to other databases.
As of today, only InfluxDB is supported.
Exporting the run info to InfluxDB
The InfluxDB connector must be enabled - see InfluxDB metrics.
In order to export Ontrack run info as points into an InfluxDB database, the corresponding configuration properties (or their environment variable equivalents) must be set; see the InfluxDB metrics section for the property reference.
Exporting the run info using extensions
It’s possible to manage your own export of run info by creating a RunInfoListener
component.
See Run info listeners for more information.
6.12. Integration with SonarQube
Projects can be configured so that any build scanned by SonarQube gets its measures registered in Ontrack; those measures can then be exported to an event storage like InfluxDB.
6.12.1. General configuration
One configuration must be created per SonarQube server you want to integrate.
As an administrator, you need to select "SonarQube configurations" in your user menu and create SonarQube configurations by setting three parameters:
-
Name - name for this configuration
-
URL - the root URL of the SonarQube server
-
Token - an authentication token to get information from SonarQube
6.12.2. Global settings
As an administrator, go to the Settings menu. In the SonarQube section, click on Edit and fill the following parameters:
Name | Default value | Description
---|---|---
Measures | | List of SonarQube metric names to collect. They can be completed or overridden at project level.
Disabled | | Global flag to disable the collection of SonarQube measures
6.12.3. Project configuration
In order to enable the collection of SonarQube measures for a project, it must be associated with the "SonarQube" property.
The property needs the following parameters:
Name | Default value | Description
---|---|---
Configuration | Required |
Key | Required | Key of the project in SonarQube (typically …)
Validation stamp | | Name of the validation stamp which, when granted to a build, triggers the collection of SonarQube measures.
Measures | Empty | List of SonarQube metric names to collect for this project, additionally to those defined globally.
Override | | If set to …
Branch model | | If set to …
Branch pattern | Empty | If set, it defines a regular expression to use against the branch name (or Git path)
The Branch model and Branch pattern can be combined together.
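As an illustration of how such a pattern restricts the collection (the regular expression and the helper below are hypothetical, not part of the Ontrack property itself), plain Java regex matching against branch names could look like:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class BranchPatternSketch {

    // Hypothetical example pattern: collect SonarQube measures
    // only for release branches.
    static final Pattern BRANCH_PATTERN = Pattern.compile("release/.*");

    // Returns the branch names (or Git paths) eligible for collection
    static List<String> eligible(List<String> branchNames) {
        return branchNames.stream()
                .filter(name -> BRANCH_PATTERN.matcher(name).matches())
                .collect(Collectors.toList());
    }
}
```

With branches master, release/1.0 and feature/login, only release/1.0 would be kept by this sketch.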
6.12.4. Build measures
Once SonarQube measures have been collected for a build, they are available in the Information section of the build page.
6.12.5. Export of measures
Once SonarQube measures have been collected for a build, they are automatically exported as metrics (for example to InfluxDB) if a metrics backend is enabled.
See InfluxDB metrics for more information.
The metrics are the following.
Collection metrics
All metrics linked to the collection of the measures are associated with the following tags:
-
project - name of the build’s project
-
branch - name of the build’s branch
-
uri - SonarQube URL
The following metrics are collected:
-
ontrack_sonarqube_collection_started_count - counter - number of times a collection is started
-
ontrack_sonarqube_collection_success_count - counter - number of times a collection is a success
-
ontrack_sonarqube_collection_error_count - counter - number of times a collection is a failure
-
ontrack_sonarqube_collection_time - timer - histogram of times for the collections
Missing measures
-
ontrack_sonarqube_collection_none - counter - number of times a measure is collected but no such measure was available in SonarQube
This metric is associated with the following tags:
-
project - name of the build’s project
-
branch - name of the build’s branch
-
uri - SonarQube URL
-
measure - name of the measure
Measures
Measures associated with builds are exported to metrics using:
-
metric name - ontrack_sonarqube_measure
-
tags:
-
project - name of the build’s project
-
branch - name of the build’s branch
-
build - name of the build for which measures are collected
-
version - display name of the build
-
status - the validation run status reported for the stamp
-
measure - name of the measure
-
value - value of the measure
-
timestamp of the metric is the creation time of the build
7. DSL
Ontrack provides several ways of interaction:
-
the graphical user interface (GUI)
-
the REST API (UI - also used internally by the GUI)
-
the Domain Specific Language (DSL)
Using the DSL, you can write script files which interact remotely with your Ontrack instance.
7.1. DSL Usage
In some cases, like when using the Ontrack Jenkins plug-in, you can just write some Ontrack DSL to use it, because the configuration would have been done for you.
In some other cases, you have to set up the Ontrack DSL environment yourself.
7.1.1. Embedded
You can embed the Ontrack DSL in your own code by importing it.
Using Maven:
<dependencies>
    <dependency>
        <groupId>net.nemerosa.ontrack</groupId>
        <artifactId>ontrack-dsl</artifactId>
        <version>{{ontrack-version}}</version>
    </dependency>
</dependencies>
Using Gradle:
compile 'net.nemerosa.ontrack:ontrack-dsl:{{ontrack-version}}'
7.1.2. Standalone shell
See DSL Tool.
7.1.3. Connection
Before calling any DSL script, you have to configure an Ontrack
instance
which will connect to your remote Ontrack location:
import net.nemerosa.ontrack.dsl.*;
String url = "http://localhost:8080";
String user = "admin";
String password = "admin";
Ontrack ontrack = OntrackConnection.create(url)
// Logging
.logger(new OTHttpClientLogger() {
public void trace(String message) {
System.out.println(message);
}
})
// Authentication
.authenticate(user, password)
// OK
.build();
7.1.4. Retry mechanism
By default, if the remote Ontrack API cannot be reached, the calls will fail. You can enable a retry mechanism by defining a maximum number of retries and a delay between the retries (defaults to 10 seconds):
Ontrack ontrack = OntrackConnection.create(url)
// ...
// Max retries
.maxTries(10)
// Delay between retries (1 minute here)
.retryDelaySeconds(60)
// OK
.build();
7.1.5. Calling the DSL
The Ontrack DSL is expressed through Groovy and can be called using the
GroovyShell
:
import groovy.lang.Binding;
import groovy.lang.GroovyShell;
Ontrack ontrack = ...
Map<String, Object> values = new HashMap<>();
values.put("ontrack", ontrack);
Binding binding = new Binding(values);
GroovyShell shell = new GroovyShell(binding);
Object shellResult = shell.evaluate(script);
7.2. DSL Samples
7.2.1. DSL Security
The DSL allows you to manage accounts and account groups.
Management of accounts
To add or update a built-in account:
ontrack.admin.account(
"dcoraboeuf", // Name
"Damien Coraboeuf", // Display name
"[email protected]", // Email
"my-secret-password", // Password
[ // List of groups (optional)
"Group1",
"Group2"
]
)
To get the list of accounts:
def accounts = ontrack.admin.accounts
def account = accounts.find { it.name == 'dcoraboeuf' }
assert account != null
assert account.fullName == "Damien Coraboeuf"
assert account.email == "[email protected]"
assert account.authenticationSource.allowingPasswordChange
assert account.authenticationSource.id == "password"
assert account.authenticationSource.name == "Built-in"
assert account.role == "USER"
assert account.accountGroups.length == 2
LDAP accounts cannot be created directly. See the documentation for more details. |
Account permissions
To give a role to an account:
ontrack.admin.setAccountGlobalPermission(
    'dcoraboeuf', "ADMINISTRATOR"
)
ontrack.project('PROJECT')
ontrack.admin.setAccountProjectPermission(
    'PROJECT', 'dcoraboeuf', "OWNER"
)
To get the list of permissions for an account:
def permissions = ontrack.admin.getAccountProjectPermissions('PROJECT', 'dcoraboeuf')
assert permissions != null
assert permissions.size() == 1
assert permissions[0].id == 'OWNER'
assert permissions[0].name == 'Project owner'
Management of account groups
To add or update an account group:
ontrack.admin.accountGroup('Administrators', "Group of administrators")
To get the list of groups:
def groups = ontrack.admin.groups
def group = groups.find { it.name == 'Administrators' }
assert group.name == 'Administrators'
assert group.description == "Group of administrators"
Account group permissions
To give a role to an account group:
ontrack.admin.setAccountGroupGlobalPermission(
'Administrators', "ADMINISTRATOR"
)
ontrack.project('PROJECT')
ontrack.admin.setAccountGroupProjectPermission(
'PROJECT', 'Administrators', "OWNER"
)
To get the list of permissions for an account group:
def permissions = ontrack.admin.getAccountGroupProjectPermissions('PROJECT', 'Administrators')
assert permissions != null
assert permissions.size() == 1
assert permissions[0].id == 'OWNER'
assert permissions[0].name == 'Project owner'
DSL LDAP mapping
The LDAP mappings can be generated using the DSL.
To add or update an LDAP mapping:
ontrack.admin.ldapMapping 'ldapGroupName', 'groupName'
To get the list of LDAP mappings:
LDAPMapping mapping = ontrack.admin.ldapMappings[0]
assert mapping.name == 'ldapGroupName'
assert mapping.groupName == 'groupName'
7.2.2. DSL Images and documents
Some resources can be associated with images (like promotion levels and validation stamps) and some documents can be downloaded.
When uploading a document or an image, the DSL will accept any object (see below), optionally associated with a MIME
content type (the content type is either read from the source object or defaults to image/png
).
The object can be any of:
-
a URL object - the MIME type and the binary content will be downloaded using the URL - the URL must be accessible anonymously
-
a File object - the binary content is read from the file and the MIME type must be provided
-
a valid URL string - same as a URL - see above
-
a file path - same as a File - see above
For example:
ontrack.project('project') {
branch('branch') {
promotionLevel('COPPER', 'Copper promotion') {
image '/path/to/local/file.png', 'image/png'
}
}
}
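The type resolution rules above can be sketched as follows; the helper name contentTypeOf is hypothetical and only illustrates the behaviour (an explicit type wins, then a guess from the name, then the image/png default):

```java
import java.io.File;
import java.net.URL;
import java.net.URLConnection;

public class UploadTypeSketch {

    // Hypothetical helper mirroring the rules above: explicit type wins,
    // otherwise guess from the name, otherwise default to image/png.
    static String contentTypeOf(Object source, String explicitType) {
        if (explicitType != null) {
            return explicitType;
        }
        String name = null;
        if (source instanceof URL) {
            name = ((URL) source).getPath();
        } else if (source instanceof File) {
            name = ((File) source).getName();
        } else if (source instanceof String) {
            name = (String) source;
        }
        String guessed = name != null ? URLConnection.guessContentTypeFromName(name) : null;
        return guessed != null ? guessed : "image/png";
    }
}
```

For example, contentTypeOf("/path/to/local/file.png", null) resolves to image/png, while passing an explicit type overrides the guess.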
Document and image downloads return a Document object which has two properties:
-
content - byte array
-
type - MIME content type
For example, to store a promotion level’s image into a file:
File file = ...
def promotionLevel = ontrack.promotionLevel('project', 'branch', 'COPPER')
file.bytes = promotionLevel.image.content
7.2.3. DSL Change logs
When a branch is configured for a SCM (Git, Subversion), a change log can be computed between two builds and following collections can be displayed:
-
revisions or commits
-
issues
-
file changes
Change logs can also be computed between builds which belong to different branches, as long as they are in the same project. This is only supported for Git, not for Subversion. |
Getting the change log
Given two builds, one gets access to the change log using:
def build1 = ontrack.build('proj', 'master', '1')
def build2 = ontrack.build('proj', 'master', '2')
def changeLog = build1.getChangeLog(build2)
The returned change log might be null if the project and branches are not correctly configured.
On the returned ChangeLog object, one can access commits, issues and file changes.
Commits
The list of commits can be accessed using the commits
property:
changeLog.commits.each {
println "* ${it.shortId} ${it.message} (${it.author} at ${it.timestamp})"
}
Each item in the commits collection has the following properties:
-
id - identifier, revision or commit hash
-
shortId - short identifier, revision or abbreviated commit hash
-
author - name of the committer
-
timestamp - ISO date for the commit time
-
message - raw message for the commit
-
formattedMessage - HTML message with links to the issues
-
link - link to the commit
This covers only the common attributes provided by Ontrack - additional properties are also available for a specific SCM. |
Issues
The list of issues can be accessed using the issues
property:
changeLog.issues.each {
println "* ${it.displayKey} ${it.status} ${it.summary}"
}
Each item in the issues collection has the following properties:
-
key - identifier, like 1
-
displayKey - display key (like #1)
-
summary - short title for the issue
-
status - status of the issue
-
url - link to the issue
This covers only the common attributes provided by Ontrack - additional properties are also available for a specific issue service. |
Exporting the change log
The change log can also be exported as text (HTML and Markdown are also available):
String text = changeLog.exportIssues(
format: 'text',
groups: [
'Bugs' : ['defect'],
'Features' : ['feature'],
'Enhancements': ['enhancement'],
],
exclude: ['design', 'delivery']
)
-
format can be one of text (default), html or markdown
-
groups allows grouping issues per type; if not defined, no grouping is done
-
exclude defines the types of issues to not include in the change log
-
altGroup defaults to "Other" and is the name of the group collecting the remaining issues which do not fit in any other group
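The grouping semantics can be sketched as follows (an illustration of the rules above, not Ontrack's actual implementation; the class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class IssueGroupingSketch {

    /**
     * Groups issue types into named groups, excluding some types and
     * putting the remaining ones into an "alt" group ("Other" by default).
     */
    static Map<String, List<String>> groupIssues(
            List<String> issueTypes,              // one type per issue
            Map<String, List<String>> groups,     // group name -> issue types
            Set<String> exclude,
            String altGroup) {
        Map<String, List<String>> result = new LinkedHashMap<>();
        for (String type : issueTypes) {
            // Excluded types never appear in the change log
            if (exclude.contains(type)) continue;
            // First matching group wins; otherwise fall back to the alt group
            String target = altGroup;
            for (Map.Entry<String, List<String>> group : groups.entrySet()) {
                if (group.getValue().contains(type)) {
                    target = group.getKey();
                    break;
                }
            }
            result.computeIfAbsent(target, k -> new ArrayList<>()).add(type);
        }
        return result;
    }
}
```

With the groups from the example above, a "defect" issue lands in "Bugs", a "design" issue is dropped, and an unmatched type ends up in "Other".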
File changes
The list of file changes can be accessed using the files
property:
changeLog.files.each {
println "* ${it.path} (${it.changeType})"
}
Each item in the files collection has the following properties:
-
path - path changed
-
changeType - nature of the change
-
changeTypes - list of changes on this path
This covers only the common attributes provided by Ontrack - additional properties are also available for a specific SCM. |
7.2.4. DSL Branch template definitions
Using the template(Closure)
method on a branch, one can define the template
definition for a branch.
For example:
template {
parameter 'gitBranch', 'Name of the Git branch', 'release/${sourceName}'
fixedSource '1.0', '1.1'
}
-
def parameter(String name, String description = '', String expression = '') — defines a parameter for the template, with an optional expression based on a source name
-
def fixedSource(String… names) — sets a synchronization source on the template, based on a fixed list of names
You can then use this branch definition in order to generate or update branches from it:
// Create a template
ontrack.branch('project', 'template') {
template {
parameter 'gitBranch', 'Name of the Git branch', 'release/${sourceName}'
}
}
// Creates or updates the TEST instance
ontrack.branch('project', 'template').instance 'TEST', [
gitBranch: 'my-branch'
]
7.2.5. DSL SCM extensions
If a SCM (Subversion or Git) is correctly configured on a branch, it is possible to download some files.
This is allowed only for the project owner. |
For example, the following call:
def text = ontrack.branch('project', 'branch').download('folder/subfolder/path.txt')
will download the folder/subfolder/path.txt file from the corresponding SCM branch. An OTNotFoundException exception is thrown if the file cannot be found.
7.3. DSL Tool
Ontrack comes with an Ontrack DSL Shell tool that you can download from the releases page.
The ontrack-dsl-shell.jar
is a fully executable JAR, published in GitHub
release and in the Maven Central, and can be used to setup a running instance
of Ontrack:
ontrack-dsl-shell.jar --url ... --user ... --password ... --file ...
You can display the full list of options using ontrack-dsl-shell.jar --help.
The --file
argument is the path to a file containing the Ontrack DSL
to execute. If not set, or set to -
, the DSL is taken from the standard
input. For example:
cat project-list.groovy | ontrack-dsl-shell.jar --url https://ontrack.nemerosa.net
where project-list.groovy
contains:
ontrack.projects*.name
This would return a JSON like:
[
"iteach",
"ontrack",
"ontrack-jenkins",
"versioning"
]
The tool always returns its response as JSON and its output can be pipelined with tools like
jq
. For example:
cat project-list.groovy | ontrack-dsl-shell.jar --url https://ontrack.nemerosa.net | jq .
The JAR is a real executable, so there is no need to use java -jar on Unix-like systems or macOS.
7.4. DSL Reference
See the appendixes.
7.5. DSL Samples
Creating a build:
ontrack.branch('project', 'branch').build('1', 'Build 1')
Promoting a build:
ontrack.build('project', '1', '134').promote('COPPER')
Validating a build:
ontrack.build('project', '1', '134').validate('SMOKETEST', 'PASSED')
Getting the last promoted build:
def buildName = ontrack.branch('project', 'branch').lastPromotedBuilds[0].name
Getting the last build of a given promotion:
def branch = ontrack.branch('project', 'branch')
def builds = branch.standardFilter withPromotionLevel: 'BRONZE'
def buildName = builds[0].name
Configuring a whole branch:
ontrack.project('project') {
branch('1.0') {
promotionLevel 'COPPER', 'Copper promotion'
promotionLevel 'BRONZE', 'Bronze promotion'
validationStamp 'SMOKE', 'Smoke tests'
}
}
Creating a branch template and an instance out of it:
// Branch template definition
ontrack.project(project) {
config {
gitHub 'ontrack'
}
branch('template') {
promotionLevel 'COPPER', 'Copper promotion'
promotionLevel 'BRONZE', 'Bronze promotion'
validationStamp 'SMOKE', 'Smoke tests'
// Git branch
config {
gitBranch '${gitBranch}'
}
// Template definition
template {
parameter 'gitBranch', 'Name of the Git branch'
}
}
}
// Creates a template instance
ontrack.branch(project, 'template').instance 'TEST', [
gitBranch: 'feature/test'
]
8. Contributing
Contributions to Ontrack are welcome!
-
Fork the GitHub project
-
Code your fixes and features
-
Create a pull request
-
Your pull requests, once tested successfully, will be integrated into the
master
branch, waiting for the next release
8.1. Branching strategy
The branching strategy used for Ontrack is based on the Git Flow.
-
development of features always goes to feature/ branches created from the develop branch
-
new releases are created by branching from the develop branch, using release/ as a prefix
-
pull requests must be made against the develop branch
-
the master branch contains an image of the latest release - no development is done on it
The versioning is automated using the Gradle Versioning plug-in. No file needs to be updated to set the version.
8.2. Development
8.2.1. Environment set-up
The following tools must be installed before you can start coding with Ontrack:
-
JDK 8u181 or later
-
Docker 17.12.0 or more recent
-
Docker Compose 1.18.0 or more recent
Postgres set-up
Starting from release 3, Ontrack uses a PostgreSQL database for its backend.
By default, the application will try to access it at
jdbc:postgresql://localhost:5432/ontrack
using ontrack
/ ontrack
as
credentials.
You can of course set the database yourself, but the best way is to run the following Gradle command:
./gradlew devStart
This will use Docker in order to set up a Postgres database container, exposing
the port 5432
on the host and named postgresql
.
To override the exposed port or the name of the Postgres container (which defaults to postgresql), use the corresponding Gradle options.
This container is used only for your local development, and is not used for running the integration tests, where another container is automatically created and destroyed.
To stop the development environment, you can run:
./gradlew devStop
8.2.2. Building locally
./gradlew clean build
To launch the integration tests or acceptance tests, see Testing.
8.2.3. Launching the application from the command line
Just run:
./gradlew :ontrack-ui:bootRun
The application is then available at http://localhost:8080
The dev profile is activated by default and the working files and database file will be available under ontrack-ui/work.
8.2.4. Launching the application in the IDE
Prepare the Web resources by launching:
./gradlew dev
In order to launch the application, run the
net.nemerosa.ontrack.boot.Application
class with
--spring.profiles.active=dev
as argument.
The application is then available at http://localhost:8080
8.2.5. Developing for the web
If you develop on the web side, you can enable a LiveReload watch on the web resources:
./gradlew watch
Upon a change in the web resources, the browser page will be reloaded automatically.
8.2.6. Running the tests
See Testing.
8.2.8. Delivery
Official releases for Ontrack are available at:
-
GitHub for the JAR, RPM and Debian packages
-
Docker Hub for the Docker images
See the Installation documentation to know how to install them.
To create a package for delivery, just run:
./gradlew \
clean \
test \
integrationTest \
dockerLatest \
build
This will create:
-
an ontrack-ui.jar
-
a nemerosa/ontrack:latest Docker image in your local registry
If you’re not interested in having a Docker image, just omit the dockerLatest task.
Versioning
The version of the Ontrack project is computed automatically from the current SCM state, using the Gradle Versioning plug-in.
Deploying in production
See the Installation documentation.
8.2.9. Glossary
Form
Creation or update links can be accessed using the GET
verb in order to get
a form that allows the client to carry out the creation or update.
Such a form will give information about:
-
the fields to be created/updated
-
their format
-
their validation rules
-
their description
-
their default or current values
-
etc.
The GUI can use those forms in order to automatically (and optionally) display dialogs to the user. Since the model is responsible for the creation of those forms, this makes the GUI layer more resilient to the changes.
Link
In resources, links are attached to model objects, in order to implement a HATEOAS principle in the application interface.
HATEOAS does not rely exclusively on HTTP verbs since this would not allow a strong implementation of the actual use cases and possible navigations (which HATEOAS is all about).
For example, the "Project creation" link on the list of projects is not
carried by the sole POST
verb, but by a _create
link. This link can be
accessed through verbs:
-
OPTIONS - list of allowed verbs
-
GET - access to a form that allows to create the object
-
POST (or PUT for an update) - actual creation (or update) of the object
Model
Representation of a concept in the application. This reflects the ubiquitous language used throughout the application, and is used in all layers: as POJOs on the server side, and as JSON objects on the client side.
Repository
Model objects are persisted, retrieved and deleted through repository objects. Repositories act as a transparent persistence layer and hide the actual technology being used.
Resource
A resource is a model object decorated with links that allow the client side to interact with the API following the HATEOAS principle. More than just providing access to the model structure, those links reflect the actual use cases and the corresponding navigation. In particular, the links are driven by the authorizations (a "create" link not being there if the user is not authorized). See Link for more information.
Service
Services are used to provide interactions with the model.
8.3. Architecture
8.3.1. Modules
Not all modules nor links are shown here in order to keep some clarity. The Gradle build files in the source remain the main source of authority. |
Modules are used in ontrack for two purposes:
-
isolation
-
distribution
We distinguish also between:
-
core modules
-
extension modules
Extension modules rely on the extension-support
module to be compiled and
tested. The link between the core modules and the extensions is done through
the extension-api
module, visible by the two worlds.
Modules like common
, json
, tx
or client
are purely utilitarian
(actually, they could be extracted from ontrack
itself).
The main core module is the model
one, which defines both the API of the
Ontrack services and the domain model.
8.3.2. UI
Resources
The UI is realized by REST controllers. They manipulate the model and get access to it through services.
In the end, the controllers return model objects that must be decorated with links in order to achieve HATEOAS.
The controllers are not directly responsible for the decoration of the model objects as resources (model + links). This is instead the responsibility of the resource decorators.
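The decoration idea can be pictured with a minimal sketch; the class, method and URI paths below are hypothetical illustrations, not Ontrack's actual resource decorator API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ResourceSketch {

    // Hypothetical decorator: wraps a model object with HATEOAS-style links.
    static Map<String, Object> decorateProject(String projectName, boolean canDelete) {
        Map<String, Object> resource = new LinkedHashMap<>();
        resource.put("name", projectName);
        // Links reflect the authorizations: the "_delete" link is only
        // present when the caller is allowed to delete the project.
        resource.put("_self", "/rest/projects/" + projectName);
        if (canDelete) {
            resource.put("_delete", "/rest/projects/" + projectName);
        }
        return resource;
    }
}
```

The key point is that the link set is computed per request, from the model object and the caller's authorizations, not hard-coded in the controller.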
The model objects are not returned as such; often their content needs to be filtered out. For example, when getting the list of branches for a project, we do not want each branch to bring along its own copy of the project object. This is achieved using the model filtering techniques.
8.3.3. Forms
Simple input forms do not need a lot of effort to design in Ontrack. They can be used directly in pages or in modal dialogs.
Server components (controllers, services, …) create instances of the Form object and the client library (service.form.js) is responsible for their rendering:
Form object
The Form object is created by adding Field objects into it using its with() method:
import net.nemerosa.ontrack.model.form.Form;
public Form getMyForm() {
return Form.create()
.with(field1)
.with(field2)
;
}
See the next section on how to create the field objects. The Form
object
contains utility methods for common fields:
Form.create()
// `name` text field (40 chars max), with "Name" as a label
// constrained by the `[A-Za-z0-9_\.\-]+` regular expression
// Suitable for most name fields on the Ontrack model objects
// (projects, branches, etc.)
.name()
// `password` password field (40 chars max) with "Password" as a label
.password()
// `description` memo field (500 chars max), optional, with "Description"
// as a label
.description()
// `dateTime` date/time field (see below) with "Date/time" as a label
.dateTime()
// ...
;
In order to fill the fields with actual values, you can either use the
value(…)
method on the field object (see next section) or use
the fill(…)
method on the Form
object.
Map<String, ?> data = ...
Form.create()
// ...
// Sets `value` as the value of the field with name "fieldName"
.fill("fieldName", value)
// Sets all values in the map using the key as the field name
.fill(data)
Fields
Common field properties
Property | Method | Default value | Description
---|---|---|---
 | | required | Mapping
 | | none | Display name
 | | | Is the input required?
 | | | Is the input read-only?
 | | none | Message to display if the field content is deemed invalid
 | | none | Help message to display for the field (see below for special syntax)
 | | none | Expression which defines if the field is displayed or not - see below for a detailed explanation
 | | none | Value to associate with the field
text
field
The text field is a single line text entry field, mapped to the HTML <input type="text"/> form field.
Property | Method | Default value | Description
---|---|---|---
 | | | Maximum length for the text
 | | | The text must comply with this regex in order to be valid
Example:
Form.create()
.with(
Text.of("name")
.label("Name")
.length(40)
.regex("[A-Za-z0-9_\\.\\-]+")
.validation("Name is required and must contain only alpha-numeric characters, underscores, points or dashes.")
)
namedEntries
field
A multi-entry list of name/value fields:
The user can:
-
add / remove entries in the list
-
set a name and a value for each item
-
the name might be optional - the value is not
Property | Method | Default value | Description
---|---|---|---
 | | "Name" | Label for the "name" input part of an entry.
 | | "Value" | Label for the "value" input part of an entry.
 | | | If the name part is required.
 | | "Add an entry" | Label for the "add" button.
Example:
Form.create()
.with(
NamedEntries.of("links")
.label("List of links")
.nameLabel("Name")
.valueLabel("Link")
.nameOptional()
.addText("Add a link")
.help("List of links associated with a name.")
.value(value != null ? value.getLinks() : Collections.emptyList())
)
8.3.4. Model
The root entity in Ontrack is the project.
Several branches can be attached to a project. Builds can be created within a branch and linked to other builds (same or other branches).
Promotion levels and validation stamps are attached to a branch:
-
a promotion level is used to define the promotion a given build has reached. A promotion run defines this association.
-
a validation stamp is used to qualify some tests or other validations on a build. A validation run defines this association. There can be several runs per build and per validation stamp. A run itself has a sequence of statuses attached to it: passed, failed, investigated, etc.
Builds and validation runs can be attached to some "run info" which gives additional information like the duration of the build or the validation.
Branches, promotion levels and validation stamps define the static structure of a project.
8.3.6. Jobs
Ontrack makes a heavy use of jobs in order to schedule regular activities, like:
-
SCM indexation (for SVN for example)
-
SCM/build synchronisations
-
Branch templating synchronisation
-
etc.
Services and extensions are responsible for providing Ontrack with the list of
jobs they want to be executed. They do this by implementing the
JobProvider
interface that simply returns a collection of `JobRegistration`s
to register at startup.
One component can also register a JobOrchestratorSupplier, which also provides a stream of `JobRegistration`s, but is more dynamic since the list of jobs to register is determined regularly.
The job scheduler is in charge of collecting all registered jobs and running them all.
Job architecture overview
This section explains the underlying concepts behind running the jobs in Ontrack.
When a job is registered, it is associated with a schedule. This schedule is dynamic and can change over time. For example, the indexation of a Git repository for a project might be scheduled every 30 minutes, and later changed to every 60 minutes: the job registration schedule is then updated accordingly.
A job provides the following key elements:
-
a unique identifier: the job key
-
a task to run, provided as a JobRun interface:
@FunctionalInterface
public interface JobRun {
void run(JobRunListener runListener);
}
The task defined by the job can use the provided JobRunListener to provide feedback on the execution or to log execution messages.
The job task is wrapped into a Runnable which is responsible for collecting statistics about the job execution, like the number of runs, durations, etc.
In the end, the JobScheduler
can be associated with a JobDecorator
to return another Runnable
layer if needed.
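This wrapping can be sketched as follows (a simplified illustration of the statistics collection, not the actual Ontrack scheduler code):

```java
import java.util.concurrent.atomic.AtomicLong;

public class JobStatisticsSketch {

    final AtomicLong runCount = new AtomicLong();
    final AtomicLong lastDurationMs = new AtomicLong();

    // Wraps the raw job task into a Runnable that records statistics
    // about each execution (run count and last duration).
    Runnable instrument(Runnable task) {
        return () -> {
            long start = System.currentTimeMillis();
            try {
                task.run();
            } finally {
                runCount.incrementAndGet();
                lastDurationMs.set(System.currentTimeMillis() - start);
            }
        };
    }
}
```

A JobDecorator, in the same spirit, would take the instrumented Runnable and return yet another Runnable layer.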
The job scheduler is responsible for orchestrating the jobs. The list of jobs is maintained in memory using an index
associating the job itself, its schedule and its current scheduled task (as a ScheduledFuture
).
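The index can be pictured with a plain ScheduledExecutorService; this sketch assumes a fixed-rate schedule and is only an illustration of the cancel-and-replace idea, not the actual implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class SchedulerIndexSketch {

    final ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
    final Map<String, ScheduledFuture<?>> index = new ConcurrentHashMap<>();

    // Re-registering a job with a new schedule cancels the current
    // scheduled task and replaces it in the index.
    void schedule(String jobKey, Runnable task, long periodSeconds) {
        ScheduledFuture<?> previous = index.put(
                jobKey,
                executor.scheduleAtFixedRate(task, 0, periodSeconds, TimeUnit.SECONDS));
        if (previous != null) {
            previous.cancel(false);
        }
    }
}
```

Changing a job's schedule, as in the Git indexation example above, thus amounts to cancelling the current ScheduledFuture and registering a new one under the same key.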
Job registration
A JobRegistration is the association of a Job and a Schedule (the run frequency for the job).
A Schedule
can be built in several ways:
// Registration only, no schedule
Schedule.NONE
// Every 15 minutes, starting now
Schedule.everyMinutes(15)
// Every minute, starting now
Schedule.EVERY_MINUTE
// Every day, starting now
Schedule.EVERY_DAY
// Every 15 minutes, starting after 5 minutes
Schedule.everyMinutes(15).after(5)
See the Schedule class for more options.
By enabling the scattering options, one can control the schedule by adding a startup delay at the beginning of the job.
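Scattering can be pictured like this; the helper and the ratio parameter are illustrative assumptions, not the actual scheduler configuration:

```java
import java.util.concurrent.ThreadLocalRandom;

public class ScatteringSketch {

    /**
     * Computes a random startup delay as a fraction of the job period,
     * so that jobs registered at the same time do not all fire at once.
     */
    static long scatteredInitialDelay(long periodMillis, double scatteringRatio) {
        long window = (long) (periodMillis * scatteringRatio);
        return window <= 0 ? 0 : ThreadLocalRandom.current().nextLong(window);
    }
}
```

For a 60-minute period and a ratio of 0.5, each job would start somewhere within the first 30 minutes.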
The Job interface must define the unique key for the job. A key is unique within a type, and a type is unique within a category.
Typically, the category and the type will be fixed (constants) while the key will depend on the job parameters and context. For example:
JobCategory CATEGORY = JobCategory.of("category").withName("My category");
JobType TYPE = CATEGORY.getType("type").withName("My type");
public JobKey getKey() {
    return TYPE.getKey("my-id");
}
The Job
provides also a description, and the desired state of the job:
-
disabled or not - might depend on the job parameters and context. For example, the job which synchronizes a branch instance with its template will be disabled if the branch is disabled
-
valid or not - when a job becomes invalid, it is not executed, and will be unregistered automatically. For example, a Subversion indexation job might become invalid if the associated repository configuration has been deleted.
Finally, of course, the job must provide the task to actually execute:
public JobRun getTask() {
return (JobRunListener listener) -> ...
}
The task takes as parameter a JobRunListener
.
All job tasks run with administrator privileges. Job tasks can throw runtime exceptions - they will be caught by the job scheduler and displayed in the administration console.
8.3.7. Encryption
Ontrack will store secrets, typically passwords and tokens, together with the configurations needed to connect to external applications: Git, Subversion, JIRA, etc.
In order to protect the integrity of those external applications, those secrets must be protected.
Ontrack does so by encrypting those secrets in the database, using the
AES128
algorithm. The EncryptionService
is used for encryption.
The key needed for the encryption is stored and retrieved using a
ConfidentialStore
service.
See Encryption service for more details about using a confidential store.
8.3.8. Build filters
The build filters are responsible for the filtering of builds when listing them for a branch.
Usage
By default, only the last 10 builds are shown for a branch, but a user can choose to create filters for a branch, and to select them.
The filters they create are saved for later use: locally, in the browser's local storage, and remotely, on the server, if the user is connected.
For a given branch, a filter is identified by a name. The list of available filters for a branch is composed of those stored locally and of those returned by the server. The latter ones have priority when there is a name conflict.
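The name-conflict rule can be sketched as a simple map merge where the server-side filters overwrite the local ones (hypothetical helper, not an Ontrack API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: merging locally stored filters with server-side ones,
// where server-side filters win on a name conflict.
public class FilterMerge {
    public static Map<String, String> merge(
            Map<String, String> local,
            Map<String, String> server) {
        Map<String, String> merged = new LinkedHashMap<>(local);
        // putAll overwrites any local entry having the same name
        merged.putAll(server);
        return merged;
    }
}
```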
Implementation
The BuildFilter
interface defines how to use a filter. This filter takes as
parameters:
-
the current list of filtered builds
-
the branch
-
the build to filter
It returns two boolean results:
-
is the build to be kept in the list?
-
do we need to go on with looking for other builds?
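A minimal sketch of this two-boolean contract, assuming hypothetical names (the real BuildFilter API differs), with a "last N builds" rule as an example:

```java
// Sketch: the two-boolean result of a build filter, illustrated
// with a "keep only the last N builds" rule.
public class BuildFilterSketch {
    public static final class Result {
        public final boolean keep;  // is the build to be kept in the list?
        public final boolean goOn;  // do we need to go on looking for builds?
        public Result(boolean keep, boolean goOn) {
            this.keep = keep;
            this.goOn = goOn;
        }
    }

    // currentCount is the number of builds already kept in the list.
    public static Result lastN(int currentCount, int maxCount) {
        boolean keep = currentCount < maxCount;
        // Once the list would be full, stop scanning older builds
        boolean goOn = currentCount + 1 < maxCount;
        return new Result(keep, goOn);
    }
}
```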
The BuildFilterService
is used to manage the build filters:
-
by creating BuildFilter instances
-
by managing BuildFilterResource instances
The service determines the actual BuildFilter implementation by its type, and uses
injected `BuildFilterProvider`s to get an instance.
8.3.9. Reference services
This is a work in progress and this list is not exhaustive yet. In the meantime, the best source of information remains the source code…
Service | Description
---|---
StructureService | Access to projects, branches and all entities
SecurityService | Checks the current context for authorizations
PropertyService | Access to the properties of the entities
EntityDataService | Allows storing and retrieving arbitrary data with entities
EntityDataStore | Allows storing audited and indexed data with entities. See EntityDataStore below.
EntityDataStore
The EntityDataStore
is a service which allows extensions to store
some data associated with entities with the following properties:
-
data stored as JSON
-
data always associated with an entity
-
indexation by category and name, not necessarily unique
-
grouping of data using a group ID
-
unique generated numeric ID
-
audit data - creation + updates
See the Javadoc for net.nemerosa.ontrack.repository.support.store.EntityDataStore
for more details.
Example:
@Autowired
EntityDataStore store;
@Autowired
SecurityService securityService;
Branch branch = ...
store.add(branch, "Category", "Name", securityService.getCurrentSignature(), null, json);
8.3.10. Technology
Client side
One page only, pure AJAX communication between the client and the server.
-
AngularJS
-
Angular UI Router
-
Angular UI Bootstrap
-
Bootstrap
-
Less
8.4. Testing
-
unit tests are always run and should not access resources nor load the application context (think: fast!)
-
integration tests can access resources or load the application context, and run against a database
-
acceptance tests are run against the deployed and running application.
8.4.1. Running the unit and integration tests
In order to run the unit tests only:
./gradlew test
In order to add the integration tests:
./gradlew integrationTest
From your IDE, you can launch both unit and integration tests using the default JUnit integration.
8.4.2. Acceptance tests
On the command line
This requires Docker & Docker Compose to be installed and correctly configured.
The application can be deployed on a local Docker container:
./gradlew localAcceptanceTest
If the Docker container used for the tests must be kept, add the -x localComposeDown
option to the arguments.
To only deploy the application in a container without launching any test,
you can also run ./gradlew localComposeUp .
From the IDE
In order to develop or test acceptance tests, you might want to run them from your IDE.
-
Make sure you have a running application somewhere, either by launching it from your IDE (see Integration with IDE) or by running
ciStart
(see previous section). -
Launch all, some or one test in the
ontrack-acceptance
module after having set the following system properties:-
ontrack.url
- the URL of the running application to test - defaults to http://localhost:8080 -
ontrack.disableSSL
- true if the server is running with a self-signed certificate, and if you’re using https
Standalone mode
For testing ontrack
in real mode, with the application to test deployed on a
remote machine, it must be possible to run the acceptance tests in standalone
mode, without having to check out the code and build it.
Running the acceptance tests using the ciAcceptanceTest Gradle task
remains the easiest way.
The acceptance tests are packaged as a standalone JAR, that contains all the dependencies.
To run the acceptance tests, you need a JDK8 and you have to run the JAR using:
java -jar ontrack-acceptance.jar <options>
The options are:
-
--option.acceptance.url=<url>
to specify the <url>
where ontrack
is deployed. It defaults to http://localhost:8080 -
--option.acceptance.admin=<password>
to specify the administrator password. It defaults toadmin
. -
--option.acceptance.context=<context>
can be specified several times to define the context(s) the acceptance tests are running in (like --option.acceptance.context=production
for example). According to the context, some tests can be excluded from the run.
The results of the tests are written as a JUnit XML file, in
build/acceptance/ontrack-acceptance.xml
.
The directory can be changed using the ontrack.acceptance.output-dir
argument
or system property and defaults to build/acceptance
.
The JUnit result file name can be changed using the ontrack.acceptance.result-file-name
argument
or system property and defaults to ontrack-acceptance.xml
.
8.4.3. Developing tests
Unit tests are JUnit tests whose class name ends with Test
.
Integration tests are JUnit tests whose class name ends with IT
.
Acceptance tests are JUnit tests whose class name starts with ACC
and are
located in the ontrack-acceptance
module.
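These naming conventions can be summed up by a tiny classifier (illustrative only):

```java
// Sketch of the naming conventions used to classify test classes.
public class TestClassifier {
    public static String classify(String className) {
        if (className.startsWith("ACC")) return "acceptance";
        if (className.endsWith("IT")) return "integration";
        if (className.endsWith("Test")) return "unit";
        return "other";
    }
}
```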
Integration test context
Integration tests will usually load an application context and connect to a Postgres database.
For commodity, those tests will inherit from the AbstractITTestSupport
class,
and more specifically:
-
from
AbstractRepositoryTestSupport
for JDBC repository integration tests -
from
AbstractServiceTestSupport
for service integration tests
Configuration for the integration tests is done in the ITConfig
class.
8.5. Extending Ontrack
Ontrack allows extensions to contribute to the application, and actually, most of the core features, like Git change logs, are indeed extensions.
This page explains how to create your own extensions and deploy them alongside Ontrack. The same coding principles also apply to coding core extensions and packaging them in the main application.
Having the possibility to have external extensions in Ontrack is very new and the way to provide them is likely to change (a bit) in the next versions. In particular, the extension mechanism does not provide classpath isolation between the "plugins".
8.5.1. Preparing an extension
In order to create an extension, you have to create a Java project.
The use of Kotlin is also possible.
Note that Ontrack needs at least a JDK8u65 to run.
Your extension needs to be a Gradle project and have at least this minimal
build.gradle
file:
Maven might be supported in the future.
buildscript {
   repositories {
      mavenCentral()
      jcenter()
   }
   dependencies {
      classpath 'net.nemerosa.ontrack:ontrack-extension-plugin:{{ontrack-version}}'
   }
}

repositories {
   mavenCentral()
}

apply plugin: 'ontrack'
The buildscript
section declares the version of Ontrack you’re building your
extension for. Both the mavenCentral
and the jcenter
repositories are needed
to resolve the path for the ontrack-extension-plugin
since the plugin is
itself published in the Maven Central and some of its dependencies are in
JCenter.
The repository declaration might be simplified in later versions.
The plug-in must declare the Maven Central as repository for the dependencies (Ontrack libraries are published in the Maven Central).
Finally, you can apply the ontrack
plug-in. This one will:
-
apply the
java
plug-in. If you want to use Groovy, you’ll have to apply this plug-in yourself. Kotlin is very well supported. -
add the
ontrack-extension-support
module to thecompile
configuration of your extension -
define some tasks used for running, testing and packaging your extension (see later)
8.5.2. Extension ID
Your extension must be associated with an identifier, which will be used throughout all the extension mechanism of Ontrack.
If the name
of your extension project looks like ontrack-extension-<xxx>
,
the xxx
will be the ID of your extension. For example, in the
settings.gradle
file:
rootProject.name = 'ontrack-extension-myextension'
then myextension
is your extension ID.
If, for any reason, you do not want to use ontrack-extension-
as a prefix for
your extension name, you must specify it using the ontrack
Gradle extension
in the build.gradle
file:
ontrack {
id 'myextension'
}
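The ID derivation rule can be sketched as follows (assumed logic, mirroring the naming convention above):

```java
// Sketch: deriving the extension ID from the project name when it
// follows the ontrack-extension-<id> convention.
public class ExtensionId {
    private static final String PREFIX = "ontrack-extension-";

    public static String fromProjectName(String name) {
        return name.startsWith(PREFIX)
                ? name.substring(PREFIX.length())
                // otherwise the ID must be set explicitly in build.gradle
                : name;
    }
}
```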
8.5.3. Coding an extension
All your code must belong to a package starting with net.nemerosa.ontrack
in
order to be visible by the Ontrack application.
Typically, this should be like: net.nemerosa.ontrack.extension.<id>
where
id
is the ID of your extension.
This limitation about the package name is likely to be removed in future versions of Ontrack.
You now must declare your extension to Ontrack by creating an extension feature class:
package net.nemerosa.ontrack.extension.myextension;
import net.nemerosa.ontrack.extension.support.AbstractExtensionFeature;
import net.nemerosa.ontrack.model.extension.ExtensionFeatureOptions;
import org.springframework.stereotype.Component;
@Component
public class MyExtensionFeature extends AbstractExtensionFeature {
public MyExtensionFeature() {
super(
"myextension",
"My extension",
"Sample extension for Ontrack",
ExtensionFeatureOptions.DEFAULT
);
}
}
The @Component
annotation makes this extension feature visible by Ontrack.
The arguments for the extension feature constructor are:
-
the extension ID
-
the display name
-
a short description
-
the extension options (see below)
8.5.4. Extension options
If your extension has some web components (templates, pages, etc.), it must declare this fact:
ExtensionFeatureOptions.DEFAULT.withGui(true)
If your extension depends on other extensions, it must declare them. For
example, to depend on GitHub and Subversion extensions, first declare them as
dependencies in the build.gradle
:
ontrack {
uses 'github'
uses 'svn'
}
then, in your code:
@Component
public class MyExtensionFeature extends AbstractExtensionFeature {
@Autowired
public MyExtensionFeature(
GitHubExtensionFeature gitHubExtensionFeature,
SVNExtensionFeature svnExtensionFeature
) {
super(
"myextension",
"My extension",
"Sample extension for Ontrack",
ExtensionFeatureOptions.DEFAULT
.withDependency(gitHubExtensionFeature)
.withDependency(svnExtensionFeature)
);
}
}
8.5.5. Writing tests for your extension
Additionally to creating unit tests for your extension, you can also write integration tests, which will run with the Ontrack runtime enabled.
This feature is only available starting from version 2.23.1.
When the ontrack-extension-plugin
is applied to your code, it makes the
ontrack-it-utils
module available for the compilation of your tests.
In particular, this allows you to create integration tests which inherit from
AbstractServiceTestSupport
, to inject directly the Ontrack services you need
and to use utility methods to create a test environment.
For example:
public class MyTest extends AbstractServiceTestSupport {
@Autowired
private StructureService structureService;
@Test
public void sample_test() {
// Creates a project
Project p = doCreateProject();
// Can retrieve it by name...
asUser().withView(p).execute(() ->
assertTrue(structureService.findProjectByName(p.getName()).isPresent())
);
}
}
8.5.6. List of extension points
Ontrack provides the following extension points:
-
Properties - allows attaching a property to an entity
-
Decorators - allows displaying a decoration for an entity
-
User menu action - allows adding an entry in the connected user menu
-
Settings - allows adding an entry in the global settings
-
Metrics - allows contributing to the metrics exported by Ontrack
-
Event types - allows defining additional event types.
-
GraphQL - allows contributions to the GraphQL Ontrack schema.
-
Encryption key store - allows defining a custom store for the encryption keys.
-
TODO Entity action - allows adding an action for the page of an entity
-
TODO Entity information - allows adding some information into the page of an entity
-
TODO Search extension - provides a search end point for global text-based searches
-
TODO Issue service - provides support for a ticketing system
-
TODO SCM service - provides support for an SCM system
-
TODO Account management action - allows adding an action into the account management
Other topics:
-
TODO Creating services
-
TODO Creating jobs
See Reference services for a list of the core Ontrack services.
8.5.7. Running an extension
A Postgres database must be available to run an extension, since it is needed by Ontrack. See the development section to see how to quickly set it up.
Using Gradle
To run your extension using Gradle:
./gradlew ontrackRun
This will make the application available at http://localhost:8080
The ontrackRun
Gradle task can be run directly from Intellij IDEA and if
necessary in debug mode.
When running with Gradle in your IDE, if you edit some web resources and
want your changes available in the browser, just rebuild your project
(Ctrl F9 in Intellij) and refresh your browser.
8.5.8. Packaging an extension
Just run:
./gradlew clean build
The extension is available as a JAR (together with its transitive dependencies,
see below) in build/dist
.
8.5.9. Extension dependencies
If your extension depends on libraries which are not brought in by Ontrack, you have to collect them explicitly and put them in the same directory which contains your main JAR file.
The Ontrack plug-in provides an ontrackPrepare
task which copies
all dependencies (transitively) and the main JAR in the build/dist
directory.
This task is called by the main build
task.
8.5.10. Deploying an extension
Using the Docker image
The Ontrack Docker image uses the
/var/ontrack/extensions
volume to load extensions from. Bind this volume to
your host or to a data container to start putting extensions in it.
For example:
docker run --volume /extension/on/host:/var/ontrack/extensions ...
You can also create your own image. Create the following Dockerfile
:
# Base Ontrack image
FROM nemerosa/ontrack:<yourversion>
# Overrides the extension directory, as to NOT use a volume
ENV EXTENSIONS_DIR /var/ontrack/your-extension
# Copies the extensions in the target volume
COPY extensions/*.jar /var/ontrack/your-extension/
We assume here that your extensions are packaged in an extensions
folder
at the same level as your Dockerfile
:
/-- Dockerfile
|-- extensions/
    |-- extension1.jar
    |-- extension2.jar
When using a child Dockerfile, the extension directory has to be customized because we cannot use the VOLUME in this case.
Using the CentOS or Debian/Ubuntu package
The RPM and Debian packages both
use the /usr/lib/ontrack/extensions
directory for the location of the
extensions JAR files.
You can also create a RPM or Debian package which embeds both Ontrack and your extensions.
The means to achieve this depend on your build technology but the idea is the same in all cases:
Your package must:
-
put the extension JAR files in
/usr/lib/ontrack/extensions
-
have a dependency on the
ontrack
package
In standalone mode
When running Ontrack directly, you have to set the
loader.path
to a directory containing the extensions JAR files:
java -Dloader.path=/path/to/extensions -jar ... <options>
8.5.11. Extending properties
Any entity in Ontrack can be associated with a set of properties. Extensions can contribute to create new ones.
A property is the association of some Java components and an HTML template to render it on the screen.
Java components
First, a property must be associated with some data. Just create an invariant POJO like, for example:
package net.nemerosa.ontrack.extension.myextension;
import lombok.Data;
@Data
public class MyProperty {
private final String value;
}
Note that Ontrack extensions can benefit from using Lombok in order to reduce the typing, but this is not mandatory at all.
Then, you create the property type itself, by implementing the PropertyType
interface or more easily by extending the AbstractPropertyType
class. The
parameter for this class is the data created above:
@Component
public class MyPropertyType extends AbstractPropertyType<MyProperty> {
}
The @Component
annotation registers the property type in Ontrack.
A property, like any extension, is always associated with an extension feature, which is typically injected:
@Autowired
public MyPropertyType(MyExtensionFeature extensionFeature) {
super(extensionFeature);
}
Now, several methods need to be implemented:
-
getName
andgetDescription
return respectively a display name and a short description for the property -
getSupportedEntityTypes
returns the set of entities the property can be applied to. For example, if your property can be applied only on projects, you can return:
@Override
public Set<ProjectEntityType> getSupportedEntityTypes() {
return EnumSet.of(ProjectEntityType.PROJECT);
}
-
canEdit
allows you to control who can create or edit the property for an entity. TheSecurityService
allows you to test the authorizations for the current user. For example, in this sample, we authorize the edition of our property only for users being granted to the project configuration:
@Override
public boolean canEdit(ProjectEntity entity, SecurityService securityService) {
return securityService.isProjectFunctionGranted(entity, ProjectConfig.class);
}
-
canView
allows you to control who can view the property for an entity. Like forcanEdit
, theSecurityService
is passed along, but you will typically returntrue
:
@Override
public boolean canView(ProjectEntity entity, SecurityService securityService) {
return true;
}
-
getEditionForm
returns the form being used to create or edit the property. Ontrack usesForm
objects to generate automatically user forms on the client. See its Javadoc for more details. In our example, we only need a text box:
@Override
public Form getEditionForm(ProjectEntity entity, MyProperty value) {
return Form.create()
.with(
Text.of("value")
.label("My value")
.length(20)
.value(value != null ? value.getValue() : null)
);
}
-
the
fromClient
andfromStorage
methods are used to parse back and forth the JSON into a property value. Typically:
@Override
public MyProperty fromClient(JsonNode node) {
return fromStorage(node);
}
@Override
public MyProperty fromStorage(JsonNode node) {
return parse(node, MyProperty.class);
}
-
the
getSearchKey
is used to provide an indexed search value for the property:
@Override
public String getSearchKey(MyProperty value) {
return value.getValue();
}
-
finally, the
replaceValue
method is called when the property has to be cloned for another entity, using a replacement function for the text values:
@Override
public MyProperty replaceValue(MyProperty value, Function<String, String> replacementFunction) {
return new MyProperty(
replacementFunction.apply(value.getValue())
);
}
Web components
A HTML fragment (or template) must be created at:
src/main/resources
\-- static
    \-- extension
        \-- myextension
            \-- property
                \-- net.nemerosa.ontrack.extension.myextension.MyPropertyType.tpl.html
Replace myextension, the package name and the property type accordingly, of course.
The tpl.html
will be used as a template on the client side and will have
access to the Property
object. Typically, only its value
field, of the
property data type, will be used.
The template uses the AngularJS template mechanism.
For example, to display the property as bold text in our sample:
<b>{{property.value.value}}</b>
The property must be associated with an icon, typically PNG, 24 x 24, located at:
src/main/resources
\-- static
    \-- extension
        \-- myextension
            \-- property
                \-- net.nemerosa.ontrack.extension.myextension.MyPropertyType.png
Property search
By default, properties are not searchable - their value cannot be used to perform search.
If the property contains some text, it might be suitable to allow this property to be used in search.
To enable this, two main methods must be provided:
-
containsValue
-
getSearchArguments
The containsValue
is used to check if a given string token is
present or not in an instance of a property value. Let’s take a
property data type which has a text
field, we could implement
the containsValue
method by checking if this field contains
the search token in a case insensitive manner:
override fun containsValue(value: MessageProperty, propertyValue: String): Boolean {
return StringUtils.containsIgnoreCase(value.text, propertyValue)
}
The getSearchArguments
method is more complex - it allows the
Ontrack search engine to plug some SQL fragment into a more
global search, for example like when searching for builds.
This method returns a PropertySearchArguments
instance with three properties:
-
jsonContext
- expression to join with the PROPERTIES table in order to constrain the JSON scope, for example jsonb_array_elements(pp.json->'items') as item. This expression is optional. -
jsonCriteria
- criteria to act on the jsonContext defined above, based on a search token, for example: item->>'name' = :name and item->>'value' ilike :value. This expression is optional. Variables in this expression can be mapped to actual parameters using the criteriaParams map parameter below. -
criteriaParams
- map of parameters for the criteria, for example :name → "name" and :value → "%value%". See the Spring documentation for more information.
Most of the time, the jsonContext
and jsonCriteria
expressions will
rely on the json
column of the PROPERTIES
table, which is
a Postgres JSONB data type
containing a JSON representation of the property data type.
Refer to the Postgres JSON documentation for more information about the syntax to use in those expressions.
Example, for a property data type having a links
list of name/value
strings,
and we want to look in the value
field in a case insensitive way:
override fun getSearchArguments(token: String): PropertySearchArguments? {
return PropertySearchArguments(
jsonContext = "jsonb_array_elements(pp.json->'links') as link",
jsonCriteria = "link->>'value' ilike :value",
criteriaParams = mapOf(
"value" to "%$token%"
)
)
}
8.5.12. Extending decorators
A decorator is responsible for displaying a decoration (icon, text, label, etc.) close to an entity name, on the entity page itself or in a list of those entities. Extensions can contribute to create new ones.
A decorator is the association of some Java components and an HTML template to render it on the screen.
Java components
First, a decorator must be associated with some data. You can use any type,
like a String
, an enum
or any other invariant POJO. In our sample,
we’ll take a String
, which is the value of the MyProperty
property
described as example in Extending properties.
Then, you create the decorator itself, by implementing the
DecorationExtension
interface and extending the AbstractExtension
. The
parameter type is the decorator data defined above.
@Component
public class MyDecorator extends AbstractExtension implements DecorationExtension<String> {
}
The @Component
annotation registers the decorator in Ontrack.
A decorator, like any extension, is always associated with an
extension feature, which is typically injected. Other services can be
injected at the same time. For example, our sample decorator needs to get a
property on an entity so we inject the PropertyService
:
private final PropertyService propertyService;
@Autowired
public MyDecorator(MyExtensionFeature extensionFeature, PropertyService propertyService) {
super(extensionFeature);
this.propertyService = propertyService;
}
Now, several methods need to be implemented:
-
getScope
returns the set of entities the decorator can be applied to. For example, if your property can be applied only on projects, you can return:
@Override
public EnumSet<ProjectEntityType> getScope() {
return EnumSet.of(ProjectEntityType.PROJECT);
}
-
getDecorations
returns the list of decorations for an entity. In our case, we want to return a decoration only if the project is associated with theMyProperty
property and return its value as decoration data.
@Override
public List<Decoration<String>> getDecorations(ProjectEntity entity) {
return propertyService.getProperty(entity, MyPropertyType.class).option()
.map(p -> Collections.singletonList(
Decoration.of(
MyDecorator.this,
p.getValue()
)
))
.orElse(Collections.emptyList());
}
Web components
A HTML fragment (or template) must be created at:
src/main/resources
\-- static
    \-- extension
        \-- myextension
            \-- decoration
                \-- net.nemerosa.ontrack.extension.myextension.MyDecorator.tpl.html
Replace myextension, the package name and the decorator type accordingly, of course.
The tpl.html
will be used as a template on the client side and will have
access to the Decoration
object. Typically, only its data
field, of the
decoration data type, will be used.
The template uses the AngularJS template mechanism.
For example, to display the decoration data as bold text in our sample:
<!-- In this sample, `data` is a string -->
<b>{{decoration.data}}</b>
8.5.13. Extending the user menu
An extension can add an entry in the connected user menu, in order to point to an extension page.
Extension component
Define a component which extends AbstractExtension
and implements
UserMenuExtension
:
package net.nemerosa.ontrack.extension.myextension;
@Component
public class MyUserMenuExtension extends AbstractExtension implements UserMenuExtension {
@Autowired
public MyUserMenuExtension(MyExtensionFeature extensionFeature) {
super(extensionFeature);
}
@Override
public Action getAction() {
return Action.of("my-user-menu", "My User Menu", "my-user-menu-page");
}
@Override
public Class<? extends GlobalFunction> getGlobalFunction() {
return ProjectList.class;
}
}
In this sample, my-user-menu-page
is the relative routing path to the
page the user action must point to.
The getGlobalFunction
method returns the function needed for authorizing
the user menu to appear.
8.5.14. Extending pages
Extensions can also contribute to pages.
Extension menus
Extension pages must be accessible from a location:
-
the global user menu
-
an entity page
From an entity page
In order for an extension to contribute to the menu of an entity page, you have
to implement the ProjectEntityActionExtension
interface and extend the
AbstractExtension
.
@Component
public class MyProjectActionExtension extends AbstractExtension implements ProjectEntityActionExtension {
}
The @Component
annotation registers the extension in Ontrack.
An action extension, like any extension, is always associated with
an extension feature, which is typically injected. Other services can be
injected at the same time. For example, our sample extension needs to get a
property on an entity so we inject the PropertyService
:
private final PropertyService propertyService;
@Autowired
public MyProjectActionExtension(MyExtensionFeature extensionFeature, PropertyService propertyService) {
super(extensionFeature);
this.propertyService = propertyService;
}
The getAction
method returns an optional Action
for the entity. In our
sample, we want to associate an action with entity if it is a project and if it
has the MyProperty
property being set:
@Override
public Optional<Action> getAction(ProjectEntity entity) {
if (entity instanceof Project) {
return propertyService.getProperty(entity, MyPropertyType.class).option()
.map(p ->
Action.of(
"my-action",
"My action",
String.format("my-action/%d", entity.id())
)
);
} else {
return Optional.empty();
}
}
The returned Action
object has the following properties:
-
an
id
, uniquely identifying the target page in the extension -
a
name
, which will be used as display name for the menu entry -
a URI fragment, which will be used for getting to the extension end point (see later). Note that this URI fragment will be prepended by the extension path. So in our example, the final path for the
SAMPLE
project with id12
would be:extension/myextension/my-action/12
.
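The path composition described above can be sketched as follows (hypothetical helper, not an Ontrack API):

```java
// Sketch: the final action path is the URI fragment prepended
// with "extension/" and the extension ID.
public class ActionPath {
    public static String of(String extensionId, String uri) {
        return "extension/" + extensionId + "/" + uri;
    }
}
```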
Extension page
Before an extension can serve some web components, it must be declared as
being GUI related. See the documentation to enable this
(ExtensionFeatureOptions.DEFAULT.withGui(true)).
The extension must define an AngularJS module file at:
src/main/resources
\-- static
    \-- extension
        \-- myextension
            \-- module.js
The module.js
file name is fixed and is used by Ontrack to load the web
components of your extension at startup.
This is an AngularJS (1.2.x) module file and can declare its configuration, its
services, its controllers, etc. Ontrack uses
UI Router, version 0.2.11, for
the routing of the pages, allowing a routing declaration at module level.
For our example, we want to declare a page for displaying information for
extension/myextension/my-action/{project}
where {project}
is the ID of
one project:
angular.module('ontrack.extension.myextension', [
'ot.service.core',
'ot.service.structure'
])
// Routing
.config(function ($stateProvider) {
$stateProvider.state('my-action', {
url: '/extension/myextension/my-action/{project}',
templateUrl: 'extension/myextension/my-action.tpl.html',
controller: 'MyExtensionMyActionCtrl'
});
})
// Controller
.controller('MyExtensionMyActionCtrl', function ($scope, $stateParams, ot, otStructureService) {
var projectId = $stateParams.project;
// View definition
var view = ot.view();
view.commands = [
// Closing to the project
ot.viewCloseCommand('/project/' + projectId)
];
// Loads the project
otStructureService.getProject(projectId).then(function (project) {
// Breadcrumbs
view.breadcrumbs = ot.projectBreadcrumbs(project);
// Title
view.title = "Project action for " + project.name;
// Scope
$scope.project = project;
});
})
;
The routing configuration declares that the end point at
/extension/myextension/my-action/{project}
will use the
extension/myextension/my-action.tpl.html
view and the
MyExtensionMyActionCtrl
controller defined below.
The ot
and otStructureService
are Ontrack Angular services, defined
respectively by the ot.service.core
and ot.service.structure
modules.
The MyExtensionMyActionCtrl
controller:
-
gets the project ID from the state (URI) definition
-
it defines an Ontrack view, and defines a close command to go back to the project page
-
it then loads the project using the
otStructureService
service and upon loading completes some information into the view
Finally, we define a template at:
src/main/resources
\-- static
    \-- extension
        \-- myextension
            \-- my-action.tpl.html
which contains:
<ot-view>
Action page for {{project.name}}.
</ot-view>
The ot-view
is an Ontrack directive which does all the layout magic for you.
You just have to provide the content.
Ontrack is using Bootstrap 3.x for the layout and basic styling, so you can start structuring your HTML with columns, rows, tables, etc. For example:
<ot-view>
<div class="row">
<div class="col-md-12">
Action page for {{project.name}}.
</div>
</div>
</ot-view>
8.5.15. Extending event types
Extensions can define additional event types which can then be used to add custom events to entities.
To register a custom event type:
public static final EventType CUSTOM_TYPE = SimpleEventType.of("custom-type", "My custom event");

@Autowired
public MyExtension(..., EventFactory eventFactory) {
    super(extensionFeature);
    eventFactory.register(CUSTOM_TYPE);
}
Then, you can use it this way when you want to attach an event to, let’s say, a build:
EventPostService eventPostService;
Build build;
...
eventPostService.post(
Event.of(MyExtension.CUSTOM_TYPE).withBuild(build).get()
);
8.5.16. Extending validation data
If built-in validation data types are not enough, additional ones can be created using the extension mechanism.
To register a custom validation data type:
-
implement a component implementing the
ValidationDataType
interface or, preferably, extending the AbstractValidationDataType
class (which provides some utility validation methods)
-
look at the Javadoc of the
ValidationDataType
interface to get the list of methods to implement and some guidance
The main choice to consider is about the configuration data type (C) and the data type (T).
The data type is the type of the data you actually associate with a validation run. For
example, for some code coverage, it would be a percentage, and therefore represented as an Int.
It could be any other type, either complex or simple.
The configuration data type is responsible for the configuration of the validation stamp, how the actual data will be interpreted when it comes to computing a status. It could be one or several thresholds for example.
The best way to get started is to copy the code of existing built-in data types.
8.5.17. Extending GraphQL
Extensions can contribute to the Ontrack GraphQL core schema:
-
custom types
-
root queries
-
additional fields in project entities
Preparing the extension
In your extension module, import the ontrack-ui-graphql
module:
dependencies {
compile "net.nemerosa.ontrack:ontrack-ui-graphql:${ontrackVersion}"
}
If you want to write integration tests for your GraphQL extension, you have to include the GraphQL testing utilities:
dependencies {
testCompile "net.nemerosa.ontrack:ontrack-ui-graphql:${ontrackVersion}:tests"
}
Custom types
To define an extra type, you create a component which implements the
GQLType
interface:
@Component
public class PersonType implements GQLType {
@Override
public GraphQLObjectType getType() {
return GraphQLObjectType.newObject()
.name("Person")
.field(f -> f.name("name")
.description("Name of the person")
.type(GraphQLString)
)
.build();
}
}
See the graphql-java documentation for the description of the type construction.
You can use this component in other ones, like in queries, field definitions or other types, like shown below:
@Component
public class AccountType implements GQLType {
private final PersonType personType;
@Autowired
public AccountType (PersonType personType) {
this.personType = personType;
}
@Override
public GraphQLObjectType getType() {
return GraphQLObjectType.newObject()
.name("Account")
.field(f -> f.name("username")
.description("Account name")
.type(GraphQLString)
)
.field(f -> f.name("identity")
.description("Identity")
.type(personType.getType())
)
.build();
}
}
You can also create GraphQL types dynamically by using introspection of your model classes.
Given the following model:
@Data
public class Person {
private final String name;
}
@Data
public class Account {
private final String username;
private final Person identity;
}
You can generate the Account
type by using:
@Override
public GraphQLObjectType getType() {
return GraphQLBeanConverter.asObjectType(Account.class);
}
The GraphQLBeanConverter.asObjectType method is still very
experimental and its implementation is likely to change in the next versions
of Ontrack. For example, Map and Collection types are not supported.
Root queries
Your extension can contribute to the root query by creating a component
implementing the GQLRootQuery
interface:
@Component
public class AccountGraphQLRootQuery implements GQLRootQuery {
private final AccountType accountType;
@Autowired
public AccountGraphQLRootQuery(AccountType accountType) {
this.accountType = accountType;
}
@Override
public GraphQLFieldDefinition getFieldDefinition() {
return GraphQLFieldDefinition.newFieldDefinition()
.name("accounts")
.argument(a -> a.name("username")
.description("User name pattern")
.type(GraphQLString)
)
.type(accountType.getType())
.dataFetcher(...)
.build();
}
}
This root query can then be used into your GraphQL queries:
{
accounts(username: "admin*") {
username
identity {
name
}
}
}
Extra fields
The Ontrack GraphQL extension mechanism allows contributions to the project entities like the projects, builds, etc.
For example, to contribute an owner
field of type Account
on the Branch
project entity:
@Component
public class BranchOwnerGraphQLFieldContributor
implements GQLProjectEntityFieldContributor {
private final AccountType accountType;
@Autowired
public BranchOwnerGraphQLFieldContributor(AccountType accountType) {
this.accountType = accountType;
}
@Override
public List<GraphQLFieldDefinition> getFields(
Class<? extends ProjectEntity> projectEntityClass,
ProjectEntityType projectEntityType) {
return Collections.singletonList(
GraphQLFieldDefinition.newFieldDefinition()
.name("owner")
.type(accountType.getType())
.dataFetcher(GraphqlUtils.fetcher(
Branch.class,
(environment, branch) -> ...
))
.build()
);
}
}
You can now use the owner
field in your queries:
{
branches(id: 1) {
name
project {
name
}
owner {
username
identity {
name
}
}
}
}
Built-in scalar fields
The Ontrack GraphQL module adds the following scalar types, which you can use in your field or type definitions:
-
GQLScalarJSON.INSTANCE
- maps to a JsonNode
-
GQLScalarLocalDateTime.INSTANCE
- maps to a LocalDateTime
You can use them directly in your definitions:
...
.field(f -> f.name("content").type(GQLScalarJSON.INSTANCE))
.field(f -> f.name("timestamp").type(GQLScalarLocalDateTime.INSTANCE))
...
Testing GraphQL
In your tests, create a test class which extends AbstractQLITSupport
and
use the run
method to execute a GraphQL query:
class MyTestIT extends AbstractQLITSupport {
@Test
void my_test() {
def p = doCreateProject()
def data = run("""{
projects(id: ${p.id}) {
name
}
}""")
assert data.projects.first().name == p.name
}
}
While it is possible to run GraphQL tests in Java, it’s easier to do so using Groovy.
8.5.18. Extending cache
Ontrack uses Caffeine to cache some data in memory to avoid reloading it from the database. The cache behaviour can be configured using properties.
Extensions can also use the Ontrack cache and make it configurable.
In order to declare one or several caches, just declare a Component
which
implements CacheConfigExtension
and set the
Caffeine spec
string for each cache.
@Component
class MyCacheConfigExtension : CacheConfigExtension {
override val caches: Map<String, String>
get() = mapOf(
"myCache" to "maximumSize=1000,expireAfterWrite=1h,recordStats"
)
}
The cache statistics are available as
metrics
if the recordStats flag is set.
The caches thus declared become configurable through external configuration. For example:
ontrack:
config:
cache:
specs:
myCache: "maximumSize=2000,expireAfterWrite=1d,recordStats"
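The spec strings above follow Caffeine's comma-separated key=value (or bare flag) syntax. As an illustration of that format only, here is a minimal, hypothetical parser sketch (not part of Ontrack or Caffeine):

```kotlin
// Hypothetical sketch: split a Caffeine-style spec string into its settings.
// Bare flags (like "recordStats") are stored with an empty value.
fun parseCacheSpec(spec: String): Map<String, String> =
    spec.split(",")
        .map { it.trim() }
        .filter { it.isNotEmpty() }
        .associate { entry ->
            val parts = entry.split("=", limit = 2)
            if (parts.size == 2) parts[0] to parts[1] else parts[0] to ""
        }
```

For example, `parseCacheSpec("maximumSize=2000,expireAfterWrite=1d,recordStats")` yields the three settings shown in the configuration above.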
In order to use the cache in the code, you can just use the Spring cache annotations. For example:
@Service
class MyServiceImpl: MyService {
@Cacheable(cacheNames = "myCache")
fun getValue(id: Int): MyObject = ...
}
8.5.19. Extending metrics
There are several ways to contribute to metrics in Ontrack:
Meter registry direct usage
Starting from version 2.35/3.35, the metrics framework used by Ontrack has been migrated to Micrometer. This is a breaking change: the way extensions contribute metrics is completely different and some migration effort is required.
In order for extensions to add their own metrics, they can
interact directly with an injected MeterRegistry
and then
get gauges, timers, counters, etc.
Or they can create some MeterBinder
beans to register
some gauges at startup time.
Usually, migrating (monotonic) counters and timers will be straightforward:
val meterRegistry: MeterRegistry
meterRegistry.counter("...", tags).increment()
meterRegistry.timer("...", tags).record {
// Action to time
}
For gauges, you have to register them so that they can be called at any time by the meter registry:
val meterRegistry: MeterRegistry
meterRegistry.gauge("...", tags,
sourceObject,
{ obj -> /* Gets the gauge value from the object */ }
)
See the Micrometer documentation for more information on how to register metrics.
Validation run metrics
Every time a validation run is created, an event is sent
to all instances of ValidationRunMetricsExtension.
You can register an extension to react to this creation:
class InfluxDBValidationRunMetricsExtension(myExtensionFeature: MyExtensionFeature) : AbstractExtension(myExtensionFeature), ValidationRunMetricsExtension {
override fun onValidationRun(validationRun: ValidationRun) {
// Does something with the created validation run
}
}
Run info listeners
Builds and validation runs can be associated with some run info, which contain information about the execution time, source and author.
Every time a run info is created, an event is sent to all instances of
RunInfoListener
. To react to those run info events, you can also declare
a @Component
implementing RunInfoListener
. For example:
@Component
class MyRunInfoListener : RunInfoListener {
override fun onRunInfoCreated(runnableEntity: RunnableEntity, runInfo: RunInfo) {
// Exports the run info to an external metrics system
}
}
Metrics export service
The MetricsExportService
can be used to export any set of metrics, to any registered
metrics system.
As of now, only InfluxDB is supported.
To export a metric, just call the exportMetrics
method on the service:
metricsExportService.exportMetrics(
"my-metric-name",
tags = mapOf(
"tag1" to "name1",
"tag2" to "name2"
),
fields = mapOf(
"value1" to value1,
"value2" to value2
),
timestamp = Time.now()
)
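Since InfluxDB is the currently supported backend, it may help to see how such a metric maps onto InfluxDB's line protocol (`measurement,tags fields timestamp`). This is a hypothetical rendering sketch of the wire format, not Ontrack's actual exporter code:

```kotlin
// Hypothetical sketch of InfluxDB line-protocol rendering:
// <measurement>,<tag=value,...> <field=value,...> <timestamp>
fun toLineProtocol(
    name: String,
    tags: Map<String, String>,
    fields: Map<String, Any>,
    timestamp: Long
): String {
    val tagPart = tags.entries.joinToString(",") { "${it.key}=${it.value}" }
    val fieldPart = fields.entries.joinToString(",") { (k, v) ->
        // String field values are quoted, numbers are written as-is
        if (v is String) "$k=\"$v\"" else "$k=$v"
    }
    return "$name,$tagPart $fieldPart $timestamp"
}
```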
Metrics exporters must declare an extension of the appropriate type.
8.5.20. Using Kotlin in extensions
Just mention kotlin()
in the Ontrack configuration in your
build.gradle
file:
ontrack {
kotlin()
...
}
The Kotlin Gradle plug-in will be automatically applied and the Kotlin
JVM for JRE8, with the same version as Ontrack, will be added
in compileOnly
mode to your dependencies. Enjoy!
8.5.21. Extending the settings
An extension can add an entry to the list of global settings.
Start by creating an invariant class which contains the data to manage in the new settings.
In the sample below, we use some PuppetDB connection settings, which need a URL, a user name and a password.
@Data
public class PuppetDBSettings {
private final String url;
private final String username;
private final String password;
}
The settings are managed in Ontrack by two distinct services:
-
a manager - responsible for editing the settings
-
a provider - responsible for retrieving the settings
As of today, these cannot be the same class.
To define the manager, extend the AbstractSettingsManager
class and use your settings class as a parameter:
@Component
public class PuppetDBSettingsManager extends AbstractSettingsManager<PuppetDBSettings> {
private final SettingsRepository settingsRepository;
private final EncryptionService encryptionService;
@Autowired
public PuppetDBSettingsManager(CachedSettingsService cachedSettingsService, SecurityService securityService, SettingsRepository settingsRepository, EncryptionService encryptionService) {
super(PuppetDBSettings.class, cachedSettingsService, securityService);
this.settingsRepository = settingsRepository;
this.encryptionService = encryptionService;
}
@Override
protected void doSaveSettings(PuppetDBSettings settings) {
settingsRepository.setString(PuppetDBSettings.class, "url", settings.getUrl());
settingsRepository.setString(PuppetDBSettings.class, "username", settings.getUsername());
settingsRepository.setPassword(PuppetDBSettings.class, "password", settings.getPassword(), false, encryptionService::encrypt);
}
@Override
protected Form getSettingsForm(PuppetDBSettings settings) {
return Form.create()
.with(
Text.of("url")
.label("URL")
.help("URL to the PuppetDB server. For example: http://puppetdb")
.value(settings.getUrl())
)
.with(
Text.of("username")
.label("User")
.help("Name of the user used to connect to the PuppetDB server.")
.optional()
.value(settings.getUsername())
)
.with(
Password.of("password")
.label("Password")
.help("Password of the user used to connect to the PuppetDB server.")
.optional()
.value("") // Password never sent to the client
);
}
@Override
public String getId() {
return "puppetdb";
}
@Override
public String getTitle() {
return "PuppetDB settings";
}
}
To define the provider, implement the SettingsProvider
interface and use your settings class as a parameter:
@Component
public class PuppetDBSettingsProvider implements SettingsProvider<PuppetDBSettings> {
private final SettingsRepository settingsRepository;
private final EncryptionService encryptionService;
@Autowired
public PuppetDBSettingsProvider(SettingsRepository settingsRepository, EncryptionService encryptionService) {
this.settingsRepository = settingsRepository;
this.encryptionService = encryptionService;
}
@Override
public PuppetDBSettings getSettings() {
return new PuppetDBSettings(
settingsRepository.getString(PuppetDBSettings.class, "url", ""),
settingsRepository.getString(PuppetDBSettings.class, "username", ""),
settingsRepository.getPassword(PuppetDBSettings.class, "password", "", encryptionService::decrypt)
);
}
@Override
public Class<PuppetDBSettings> getSettingsClass() {
return PuppetDBSettings.class;
}
}
That’s all there is to do. Now, the new settings will automatically appear in the Settings page and can be edited using the form defined above.
8.5.22. Extending the security
The security model of Ontrack can be extended to fit for specific needs in extensions.
Adding functions
All authorizations in the code are granted through functions. We distinguish between:
-
global functions about Ontrack in general
-
project functions linked to a given project
Global roles are then linked to a number of global functions and project functions.
On the other hand, project roles can only be linked to project functions.
The association of core functions and core roles is fixed in the Ontrack core, but extensions can:
-
define new global and project functions
-
assign them to existing roles
For security reasons, extensions cannot associate existing core functions with roles.
In order to define a global function, just define an interface
which extends GlobalFunction
:
public interface MyGlobalFunction extends GlobalFunction {}
Almost the same thing for a project function:
public interface MyProjectFunction extends ProjectFunction {}
No method is to be implemented.
Now, you can link those functions to existing roles by providing
a RoleContributor
component. In our example, we want to grant
the global function and the project function to the AUTOMATION
global role and the project
function to the PROJECT_OWNER
project role.
@Component
public class MyRoleContributor implements RoleContributor {
@Override
public Map<String, List<Class<? extends GlobalFunction>>> getGlobalFunctionContributionsForGlobalRoles() {
return Collections.singletonMap(
Roles.GLOBAL_AUTOMATION,
Collections.singletonList(
MyGlobalFunction.class
)
);
}
@Override
public Map<String, List<Class<? extends ProjectFunction>>> getProjectFunctionContributionsForGlobalRoles() {
return Collections.singletonMap(
Roles.GLOBAL_AUTOMATION,
Collections.singletonList(
MyProjectFunction.class
)
);
}
@Override
public Map<String, List<Class<? extends ProjectFunction>>> getProjectFunctionContributionsForProjectRoles() {
return Collections.singletonMap(
Roles.PROJECT_OWNER,
Collections.singletonList(
MyProjectFunction.class
)
);
}
}
All available roles are listed in the Roles interface.
You can now check for those functions in your code by injecting
the SecurityService
:
private final SecurityService securityService;
...
if (securityService.isGlobalFunctionGranted(MyGlobalFunction.class)) {
...
}
if (securityService.isProjectFunctionGranted(project, MyProjectFunction.class)) {
...
}
or:
private final SecurityService securityService;
...
securityService.checkGlobalFunction(MyGlobalFunction.class);
securityService.checkProjectFunction(project, MyProjectFunction.class);
The project functions can be tested on a Project or any other
entity which belongs to a project (branches, builds, etc.).
Adding roles
Both global and project roles can be added
using the same RoleContributor
extension class, by
overriding the following methods:
@Component
public class MyRoleContributor implements RoleContributor {
@Override
public List<RoleDefinition> getGlobalRoles() {
return Collections.singletonList(
new RoleDefinition(
"MY_GLOBAL_ROLE",
"My Global Role",
"This is a new global role"
)
);
}
@Override
public List<RoleDefinition> getProjectRoles() {
return Collections.singletonList(
new RoleDefinition(
"MY_PROJECT_ROLE",
"My Project Role",
"This is a new project role"
)
);
}
}
A new role can inherit from a built-in role. The same principle applies for global roles.
Those roles become eligible for selection when managing accounts and groups.
Note that functions (built-in or contributed) can be associated with those new roles - see Adding functions. By default, no function is associated with a contributed role.
8.5.23. Extending confidential stores
Extensions can define a custom confidential store used to store encryption keys.
Create a component which extends the AbstractConfidentialStore
class:
@Component
@ConditionalOnProperty(name = OntrackConfigProperties.KEY_STORE, havingValue = "custom")
public class CustomConfidentialStore extends AbstractConfidentialStore {
public CustomConfidentialStore() {
LoggerFactory.getLogger(CustomConfidentialStore.class).info(
"[key-store] Using custom store"
);
}
@Override
public void store(String key, byte[] payload) throws IOException {
// ...
// Stores the key
}
@Override
public byte[] load(String key) throws IOException {
// ...
// Retrieves the key or ...
return null;
}
}
Note the use of ConditionalOnProperty,
which allows selecting
this store when the ontrack.config.key-store
property is set to custom.
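To make the store/load contract concrete, here is a minimal in-memory sketch. It is hypothetical (the class name and everything else is illustrative), stripped of the Spring and Ontrack base classes, and only suitable for tests since keys are lost on restart:

```kotlin
// Hypothetical in-memory confidential store sketch: keys are kept in a map
// and lost when the JVM stops - only useful for demos or tests.
class InMemoryConfidentialStore {
    private val keys = mutableMapOf<String, ByteArray>()

    fun store(key: String, payload: ByteArray) {
        // Copy the payload so later mutations of the caller's array are not reflected
        keys[key] = payload.copyOf()
    }

    // Returns null when the key has never been stored
    fun load(key: String): ByteArray? = keys[key]?.copyOf()
}
```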
8.5.24. Free text annotations
Some free text can be entered as description for some elements of the model and can be automatically extended with hyperlinks.
See Hyperlinks in descriptions for this feature in the validation run statuses.
Using extensions, it is possible to extend this hyperlinking to other elements.
For example, let’s imagine that we have a system
where all references like [1234]
can be replaced
with a link to http://reference/1234
with 1234
as the link text.
For this, you have to create a @Component
bean which
implements the FreeTextAnnotatorContributor
interface.
The getMessageAnnotators
method returns a list
of `MessageAnnotator`s used to transform the text
into a tree of nodes (typically some HTML).
In our example, this can give something like:
@Component
class RefFreeTextAnnotatorContributor : FreeTextAnnotatorContributor {
override fun getMessageAnnotators(entity: ProjectEntity): List<MessageAnnotator> {
val regex = "\\[(\\d+)\\]".toRegex()
return listOf(
RegexMessageAnnotator(
"\\[\\d+\\]"
) { match ->
val result = regex.matchEntire(match)
result
?.let {
val id = it.groupValues[1].toInt(10)
MessageAnnotation.of("a")
.attr("href", "http://reference/$id")
.text(id.toString())
}
?: match
}
)
}
}
This component returns a single RegexMessageAnnotator
(other implementations are of course possible,
but this one is very convenient) which, given a regular expression, transforms any match
into something else.
In our example, we extract the ID from the expression and return a link.
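The transformation itself can be illustrated in plain Kotlin, independently of the Ontrack classes. This is a hypothetical sketch of what the RegexMessageAnnotator above ends up producing, using only the standard Regex API:

```kotlin
// Hypothetical sketch: replace [1234]-style references with HTML links,
// mimicking the output of the annotator defined above.
fun annotateReferences(text: String): String =
    Regex("\\[(\\d+)\\]").replace(text) { match ->
        // Group 1 captures the digits between the brackets
        val id = match.groupValues[1]
        """<a href="http://reference/$id">$id</a>"""
    }
```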
8.5.25. Label providers
Labels can be created and associated manually with projects.
Ontrack also allows some automation of this process using the concept of a label provider.
Labels created and associated with projects by label providers cannot be managed manually: they cannot be edited, deleted or unselected.
Implementation
A label provider is a Service
which extends
the LabelProvider
class and returns a list of
labels for a project.
For example, we could have a label provider which associates a "quality" label according to the "health" of the validation stamps in all "main" branches of the project. The label category would be "quality" and different names could be "high", "medium" and "low".
The code would look like:
@Service
class QualityLabelProvider : LabelProvider {
override val name: String = "Quality"
override val isEnabled: Boolean = true
override fun getLabelsForProject(project: Project): List<LabelForm> {
// Computes quality of the project
val quality: String = ...
// Returns a label
return listOf(
LabelForm(
category = "quality",
name = quality,
description = "",
color = ... // Computes color according to quality
)
)
}
}
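The quality computation itself (elided as `...` above) is plain domain logic. As a hypothetical illustration, a "health" percentage could be mapped to a label name like this (the thresholds are arbitrary, not an Ontrack convention):

```kotlin
// Hypothetical sketch: derive a quality label name from a health percentage.
// The thresholds are illustrative only.
fun qualityName(healthPercent: Int): String = when {
    healthPercent >= 80 -> "high"
    healthPercent >= 50 -> "medium"
    else -> "low"
}
```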
Activation
Even if you code such a label provider, nothing will happen until you activate the collection of labels.
Ontrack disables this collection by default, because there is no default label provider and that would be a useless job.
To activate the label collection job, just set the
ontrack.config.job-label-provider-enabled
configuration property
to true
.
Additionally, the label collection can be configured by administrators in the Settings:
-
Enabled - Check to enable the automated collection of labels for all projects. This can generate a high level activity in the background.
-
Interval - Interval (in minutes) between each label scan.
-
Per project - Check to have one distinct label collection job per project.
8.5.26. Extending promotion checks
Promotion checks like "checking if the previous promotion is granted"
are built into Ontrack, but you can create your own by providing
implementations of the PromotionRunCheckExtension
extension.
For example, to create a check on the name of the promotion level, that it should be uppercase only:
@Component
class UppercasePromotionRunCheckExtension(
extensionFeature: YourExtensionFeature
): AbstractExtension(extensionFeature), PromotionRunCheckExtension {
override fun checkPromotionRunCreation(promotionRun: PromotionRun) {
if (promotionRun.promotionLevel.name != promotionRun.promotionLevel.name.toUpperCase()) {
throw UppercasePromotionRunCheckException(/* ... */)
}
}
}
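The rule enforced above reduces to a simple predicate, shown here as a standalone sketch (the function name is illustrative, not part of Ontrack):

```kotlin
// Sketch of the rule used by the extension above: a promotion level name
// passes the check only when it is already fully uppercase.
fun isUppercasePromotionName(name: String): Boolean =
    name == name.uppercase()
```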
8.5.27. Extending the search
The Search capabilities of Ontrack can be extended through extensions and the core capabilities are also coded through extensions.
A Search extension is a component which implements the SearchIndexer
interface.
In versions 3.40 and before, search extensions were implemented using a different mechanism.
Search indexer overview
A SearchIndexer
is responsible for two things:
-
feeding a search index
-
transforming found index entries into displayable search results
The SearchIndexer
must be parameterized by a SearchItem
class -
see Search index items below.
The indexerName
is the display name for the indexer, used to log
information or to name the indexation jobs.
Indexation jobs can be totally disabled by setting
the isIndexationDisabled
property to true. They cannot even be triggered
property. They cannot even be triggered
manually - set isIndexationDisabled
to true
when search indexes
are not applicable. For example, some SearchIndexer
instances
might be fed by other indexers.
The indexerSchedule
is used to set a schedule to the
indexation job. It defaults to Schedule.NONE
meaning that the job can be run only manually. Set another
schedule for an automated job.
The indexName
defines the name of the technical index used
by this SearchIndexer
- when using ElasticSearch, it corresponds
to the name of the ElasticSearch index to use. The index can be configured
by setting the indexMapping
property - see Search indexation mapping
for more information on this subject.
At Ontrack startup time, all indexes are created (in ElasticSearch) and their mappings updated.
The searchResultType
defines the type of result returned by an index
search capability. It’s used:
-
to provide a user a way to filter on the types of results
-
a way for the front-end to associate an icon to the type of result
For example:
@Component
class MySearchIndexer: SearchIndexer<MySearchItem> {
override val searchResultType = SearchResultType(
feature = feature.featureDescription,
id = "my-result",
name = "My result",
description = "Use a comma-separated list of tokens"
)
}
The feature
is the ExtensionFeature
associated with this
SearchIndexer
(see Coding an extension).
The description
property is used to describe the type of search
token one should use to find this type of result (when
applicable).
Search indexation
The indexAll
method is called by the system when the indexation
job for this indexer is enabled (it is by default, unless
isIndexationDisabled
returns true
).
It must:
-
loop over all items to be indexed for a search (for example: all projects for the project indexer)
-
transform all those items into instances of the
SearchItem
class associated with this indexer (for example: keeping only the project ID, its name and description)
-
call the provided
processor
function
For example:
override fun indexAll(processor: (ProjectSearchItem) -> Unit) {
structureService.projectList.forEach { project ->
processor(ProjectSearchItem(project))
}
}
Behind the scene, the indexation job will send the items to index to an index service in batches (which makes the indexation quite performant).
The batch size is set by default to 1000
but can be:
-
configured using the
ontrack.config.search.index.batch
property -
set explicitly using the
indexBatch
property of theSearchIndexer
(this takes precedence)
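The batching behaviour described above can be sketched with the standard chunked function. This is a hypothetical illustration of what the indexation job does, not Ontrack's actual code:

```kotlin
// Hypothetical sketch: feed items to an index service in fixed-size batches,
// similar to what the indexation job does with its default batch size of 1000.
fun <T> indexInBatches(items: Sequence<T>, batchSize: Int, submit: (List<T>) -> Unit) {
    items.chunked(batchSize).forEach { batch -> submit(batch) }
}
```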
Search results
When a search is performed, the SearchService
will call
the toSearchResult
method of the SearchIndexer
in order
to transform an indexed item into a result which can be displayed
to the user.
Usually, the indexer will:
-
load the actual Ontrack object or extract information from the indexed item (this latter method is preferred for performance reasons)
-
in particular, it’ll check if the target object makes sense: does it still exist? Is it authorized to the current user?
-
setup a
SearchResult
instance to describe the result
For example, for the build indexer:
override fun toSearchResult(id: String, score: Double, source: JsonNode): SearchResult? =
structureService.findBuildByID(ID.of(id.toInt()))?.run {
SearchResult(
title = entityDisplayName,
description = description ?: "",
uri = uriBuilder.getEntityURI(this),
page = uriBuilder.getEntityPage(this),
accuracy = score,
type = searchResultType
)
}
In this example:
-
findBuildByID
checks both the existence of the build and if it is accessible by the current user, returning null
when not available
-
the title
of the result is set to the complete build name (including project and branch name)
-
the uri
and page
can be computed using an injected URIBuilder
-
the accuracy
is the score returned by ElasticSearch
-
for the type
just use the searchResultType
of the indexer
As of now, the accuracy is used for sorting results, but is not displayed.
Search index items
The SearchItem
class used to parameterize the SearchIndexer
must return two values:
-
id
- the unique ID of this item in the index -
fields
- a map of values to store together with the index
Most of the times, you can define:
-
a primary constructor listing the properties you want to store
-
a secondary constructor using the domain model of Ontrack
Example for the Git commit indexer:
class GitCommitSearchItem(
val projectId: Int,
val gitType: String,
val gitName: String,
val commit: String,
val commitShort: String,
val commitAuthor: String,
val commitMessage: String
) : SearchItem {
constructor(project: Project, gitConfiguration: GitConfiguration, commit: GitCommit) : this(
projectId = project.id(),
gitType = gitConfiguration.type,
gitName = gitConfiguration.name,
commit = commit.id,
commitShort = commit.shortId,
commitAuthor = commit.author.name,
commitMessage = commit.shortMessage
)
override val id: String = "$gitName::$commit"
override val fields: Map<String, Any?> = asMap(
this::projectId,
this::gitType,
this::gitName,
this::commit,
this::commitAuthor,
this::commitShort,
this::commitMessage
)
}
For the fields
of the item, try to use only simple types
or lists of simple types.
The asMap
utility method is optional and can be replaced by
a direct map construction. However, it avoids hard-coding
the field names and uses the property references instead.
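A sketch of what an asMap-style helper can look like, using Kotlin bound property references. This is a hypothetical reconstruction for illustration; the real Ontrack utility may differ:

```kotlin
import kotlin.reflect.KProperty0

// Hypothetical sketch of an asMap-style helper: builds a map from bound
// property references, so field names are never hard-coded as strings.
fun asMap(vararg properties: KProperty0<Any?>): Map<String, Any?> =
    properties.associate { it.name to it.get() }

// Illustrative item using the helper, in the style of the search items above
class Item(val id: Int, val label: String) {
    val fields: Map<String, Any?> = asMap(this::id, this::label)
}
```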
Search indexation mapping
By default, indexes are mapped automatically to the provided fields (like in ElasticSearch) but explicit mappings can be provided to:
-
disable the indexation of some fields (like the
projectId
in the example above - while this field is needed for creating a search result, it should not be used for searches)
-
set a type, like keyword or text (the search won’t work the same way)
-
boost the search result score on some fields (a match on a key might be better than a match on a free description text)
In order to specify a mapping, the indexMapping
of the SearchIndexer
must return an instance of SearchIndexMapping
.
While it’s possible to build such an instance manually, it’s more convenient to use the provided DSL. For example, for the Git commit indexer mentioned above:
override val indexMapping: SearchIndexMapping? = indexMappings<GitCommitSearchItem> {
+GitCommitSearchItem::projectId to id { index = false }
+GitCommitSearchItem::gitType to keyword { index = false }
+GitCommitSearchItem::gitName to keyword { index = false }
+GitCommitSearchItem::commit to keyword { scoreBoost = 3.0 }
+GitCommitSearchItem::commitShort to keyword { scoreBoost = 2.0 }
+GitCommitSearchItem::commitAuthor to keyword()
+GitCommitSearchItem::commitMessage to text()
}
The syntax is:
+<SearchItem::property> [to <type> [{ <configuration> }]]*
The type for the property can be set using:
-
id
for along
-
keyword
-
text
-
any other type supported by ElasticSearch using type("typeName")
The configuration is optional but accepts the following properties:
-
index: Boolean
- unset by default - to specify if this property must be indexed or not -
scoreBoost: Double
- multiplier for the significance of a match on this field (similar to the boost factor in ElasticSearch)
A property can be associated with two types, for example when a field can be both considered as a keyword or as plain text.
+SearchItem::myProperty to keyword { scoreBoost = 2.0 } to text()
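To see how such a DSL can be built, here is a self-contained sketch of the same idea in pure Kotlin, using operator overloading (`unaryPlus`) and a member infix `to`. All names and structures here are illustrative only, not Ontrack's actual implementation:

```kotlin
import kotlin.reflect.KProperty

// Hypothetical model of a field mapping and its type configuration
class TypeMapping(val type: String) {
    var index: Boolean? = null
    var scoreBoost: Double? = null
}

class FieldMapping(val property: String) {
    val types = mutableListOf<TypeMapping>()
}

class MappingBuilder {
    val fields = mutableListOf<FieldMapping>()

    // `+SomeItem::property` registers a field by its property name...
    operator fun KProperty<*>.unaryPlus(): FieldMapping =
        FieldMapping(name).also { fields += it }

    // ...and `to` attaches one or several type mappings to it
    infix fun FieldMapping.to(type: TypeMapping): FieldMapping =
        also { it.types += type }

    fun keyword(init: TypeMapping.() -> Unit = {}) = TypeMapping("keyword").apply(init)
    fun text(init: TypeMapping.() -> Unit = {}) = TypeMapping("text").apply(init)
}

fun indexMappings(init: MappingBuilder.() -> Unit): List<FieldMapping> =
    MappingBuilder().apply(init).fields

// Illustrative usage, mirroring the syntax shown above
class CommitItem(val commit: String, val message: String)

val mapping = indexMappings {
    +CommitItem::commit to keyword { scoreBoost = 3.0 }
    +CommitItem::message to keyword() to text()
}
```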
Search indexation jobs
Unless its isIndexationDisabled
property returns true
, every
SearchIndexer
is associated with a job which
runs the indexation of all
items.
By default, those jobs must be launched manually but the indexerSchedule
can be used to define a run schedule.
Additionally, there is an "All re-indexations" job which launches all re-indexations; this is useful when migrating Ontrack to a deployment using ElasticSearch or to reset all indexes.
Search result icon
The searchResultType
returned by a SearchIndexer
contains
a feature description and an ID. Both are used to identify the
path to an icon which is used on client side:
-
in the search box dropdown to select and restrict the type of result
-
in the list of results
The icon (PNG, square, will be rescaled at client side) must be put in
the resources
at:
static/extension/<feature>/search-icon/<id>.png
where:
- <feature> is the feature ID
- <id> is the search result type ID
Search indexing on events
Re-indexation of a complete index is costly. While some indexes have no choice but to recompute the index regularly, it is more efficient to follow these steps:
- re-index once (when Ontrack is migrated to ElasticSearch)
- populate the index on events
Example: the project index is updated when a project is created, updated or deleted.
The type of event to listen to depends on the type of indexed item, but most cases are covered by:
- implementing EventListener - when you want to listen to events on project entities like projects, branches, validation runs, etc.
- PropertyType.onPropertyChanged / onPropertyDeleted - to react to properties being created, updated or deleted
- other, more specialized types of listeners, also available in Ontrack
In all cases, you have to inject the SearchIndexService and call the appropriate methods, typically createSearchIndex, updateSearchIndex and deleteSearchIndex, to update the index.
Don’t try to cover all the cases. For example, if your index is linked to a build property, listen only to the property change, and not to the events occurring to the build, its branch or its project.
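The event-driven indexing pattern above can be sketched with stand-in types. Only the method names createSearchIndex, updateSearchIndex and deleteSearchIndex come from the text; the class below is a simplified illustration, not Ontrack's real SearchIndexService API.

```java
import java.util.HashMap;
import java.util.Map;

public class EventIndexing {
    // Stand-in for Ontrack's SearchIndexService (real signatures differ)
    static class SearchIndexService {
        final Map<Integer, String> index = new HashMap<>();
        void createSearchIndex(int id, String doc) { index.put(id, doc); }
        void updateSearchIndex(int id, String doc) { index.put(id, doc); }
        void deleteSearchIndex(int id) { index.remove(id); }
    }

    public static void main(String[] args) {
        SearchIndexService service = new SearchIndexService();
        // React to individual entity events instead of re-indexing everything:
        service.createSearchIndex(1, "project:ontrack");   // project created
        service.updateSearchIndex(1, "project:ontrack-2"); // project updated
        service.deleteSearchIndex(1);                      // project deleted
        System.out.println(service.index.isEmpty());
    }
}
```

The point of the pattern is that each event triggers exactly one small index operation, so a full re-indexation is only ever needed once, at migration time.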
9. Appendixes
9.1. Configuration properties
Ontrack uses the Spring Boot mechanism for its configuration. See the documentation on how to set those properties in your Ontrack installation.
All Spring Boot properties are available for configuration.
Additionally, Ontrack defines the following ones.
The names of the configuration properties are given in a .properties file format, but you can of course configure them in YAML. They can also be provided as system properties or environment variables. See the Spring Boot documentation for more details.
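For example, the .properties entries below could be written in YAML instead; a sketch using two of the search properties documented in this section:

```yaml
# application.yml - YAML equivalent of two .properties entries
ontrack:
  config:
    search:
      engine: elasticsearch   # ontrack.config.search.engine
      index:
        immediate: false      # ontrack.config.search.index.immediate
```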
This sample file is meant as a guide only. Do not copy/paste the entire content into your application; rather pick only the properties that you need.
When applicable, the default value is mentioned.
# ======================================================
# Ontrack properties
# ======================================================
# Maximum number of days to keep the log entries
ontrack.config.application-log-retention-days = 7
# Maximum number of errors to display as notification in the GUI
ontrack.config.application-log-info-max = 10
# Directory which contains all the working files of Ontrack
# It is usually set by the installation
ontrack.config.application-working-dir = work/files
# Maximum number of builds which can be returned by a build filter
# Any number above is truncated down to this value
ontrack.config.build-filter-count-max = 200
# Testing the configurations of external configurations
# Used only for internal testing, to disable the checks
# when creating external configurations
ontrack.config.configuration-test = true
# Activation of the provided labels collection job
ontrack.config.job-label-provider-enabled = false
# Number of threads to use to run the background jobs
ontrack.config.jobs.pool-size = 10
# Interval (in minutes) between each refresh of the job list
ontrack.config.jobs.orchestration = 2
# Set to true to not start any job at application startup
# The administrator can restore the scheduling jobs manually
ontrack.config.jobs.paused-at-startup = false
# Enabling the scattering of jobs
# When several jobs have the same schedule, this can create a peak of activity,
# potentially harmful for the performances of the application
# Enabling scattering allows jobs to be scheduled with an additional delay, computed
# as a fraction of the period.
ontrack.config.jobs.scattering = false
# Scattering ratio. Maximum fraction of the period to take into account for the
# scattering. For example, setting 0.5 would not add a delay greater than half
# the period of the job. Setting 0 would actually disable the scattering altogether.
ontrack.config.jobs.scattering-ratio = 1.0
# Confidential store for the encryption keys
ontrack.config.key-store = file
# Cache configuration
# Caffeine spec strings per cache type
# See http://static.javadoc.io/com.github.ben-manes.caffeine/caffeine/2.6.0/com/github/benmanes/caffeine/cache/CaffeineSpec.html
# For example, for the `properties` cache:
ontrack.config.cache.specs.properties = maximumSize=1000,expireAfterWrite=1d,recordStats
#################################
# Search configuration properties
#################################
# Search engine to use
# Use `elasticsearch` to switch to ElasticSearch based search
ontrack.config.search.engine = default
# By default, indexation in ElasticSearch is done some time
# after the index has been requested. The flag below
# forces the index to be refreshed immediately.
# This SHOULD NOT be used in production but is very useful
# when testing Ontrack search capabilities
ontrack.config.search.index.immediate = false
# When performing full indexation, the indexation is performed
# by batch. The parameter below sets the size
# of this batch processing.
# Note: this is a default batch size. Custom indexers can
# override it.
ontrack.config.search.index.batch = 1000
# When performing full indexation, the indexation is performed
# by batch. The parameter below enables additional
# logging when indexing actions are actually taken.
ontrack.config.search.index.logging = false
# When performing full indexation, the indexation is performed
# by batch. The parameter below enables additional
# logging for all actions on Git issues.
# Note: if set to `true`, this generates an awful lot of information
# at DEBUG level.
ontrack.config.search.index.tracing = false
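The job scattering described in the comments above (an extra delay computed as a fraction of the period) can be illustrated with a small sketch. This is only an illustration of the delay bound, not Ontrack's actual scheduler code:

```java
import java.util.Random;

public class Scattering {
    // Illustration: the extra scheduling delay is a random fraction of the
    // job period, capped by the scattering ratio (0 disables scattering).
    static long scatteredDelayMillis(long periodMillis, double ratio, Random rnd) {
        return (long) (rnd.nextDouble() * ratio * periodMillis);
    }

    public static void main(String[] args) {
        long period = 60_000; // a job running every minute
        long delay = scatteredDelayMillis(period, 0.5, new Random());
        // with ratio 0.5, the delay never exceeds half the period
        System.out.println(delay >= 0 && delay <= period / 2);
    }
}
```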
9.2. Deprecations and migration notes
9.2.1. Since 3.38
The PropertyType interface getSearchKey method is now deprecated and will be removed in a future version. Returning an empty string or calling the default super method is enough.
If the property is searchable, the getSearchArguments method must be implemented instead. See Extending properties for more information.
9.2.2. Since 3.35
StructureService deprecated method:
- The getValidationRunsForBuildAndValidationStamp(net.nemerosa.ontrack.model.structure.ID, net.nemerosa.ontrack.model.structure.ID) method is deprecated and should be replaced by getValidationRunsForBuildAndValidationStamp(net.nemerosa.ontrack.model.structure.ID, net.nemerosa.ontrack.model.structure.ID, int, int).
GraphQL schema deprecations:
- Build.linkedFrom is now deprecated and must be replaced by either uses or usedBy
9.2.3. Since 2.28
BitBucket global configurations are no longer associated with issue services; only project BitBucket configurations are. This is an alignment with the way the other SCM connections work in Ontrack.
Upgrading to 2.28 performs an automated migration of the global configuration settings to the project ones.
9.2.4. Since 2.16
Support for custom branch and tag patterns in Subversion configurations has been removed. Ontrack now supports only the standard Subversion structure: project/trunk, project/branches and project/tags. This has allowed better flexibility in the association between builds and Subversion locations.
Association between builds and Subversion locations is now configured through a build revision link at branch level. The previous buildPath parameter is converted automatically to the appropriate type of link.
9.3. Roadmap
Here are big ideas for the future of Ontrack. No plan yet, just rough ideas or wish lists.
9.3.1. Use JPA / Hibernate for SQL queries
- caching (non-existent today)
- see impact on a multi-node Ontrack cluster
9.3.2. Using Neo4J as backend
Ontrack basically stores its data as a graph, and Neo4J would be a perfect match for the storage.
Consider:
- migration
- search engine
9.3.3. Global DSL
The current Ontrack DSL can be used only remotely and cannot be run on the server.
We could design a DSL which can be run either:
- remotely - interacting with the HTTP API
- in the server - interacting directly with the services
Additionally, the DSL should be extensible so that extensions can contribute to it, on the same model as the Jenkins Job DSL.
9.4. Certificates
Some resources (Jenkins servers, ticketing systems, SCMs…) will be configured and accessed in Ontrack using the https protocol, possibly with certificates that are not accepted by default.
Ontrack does not offer any mechanism to accept such invalid certificates.
The running JDK has to be configured in order to accept those certificates.
9.4.1. Registering a certificate in the JDK
To register the certificate in your JDK:
sudo keytool -importcert \
-keystore ${JAVA_HOME}/jre/lib/security/cacerts -storepass changeit \
-alias ${CER_ALIAS} \
-file ${CER_FILE}
To display its content:
sudo keytool -list \
-keystore ${JAVA_HOME}/jre/lib/security/cacerts \
-storepass changeit \
-alias ${CER_ALIAS} \
-v
See the complete documentation at http://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html.
9.4.2. Saving the certificate on macOS
- Open the Keychain Access utility (Applications → Utilities)
- Select your certificate or key from the Certificates or Keys category
- Choose File → Export Items…
- In the Save As field, enter a ".cer" name for the exported item, and click Save. You will be prompted to enter a new export password for the item.
9.5. Metrics migration
Since version 2.35 / 3.35, Ontrack uses the Micrometer framework to manage metrics, in order to allow a better integration with Spring Boot 2. This means that old metrics are no longer supported and that any tool / dashboard using the old keys must be adapted. See also Metrics.
Old metric key | New metric key | Tags
---|---|---
(mapping of the old metric keys to their new Micrometer keys; entries marked n/a have no new equivalent, and validation data metrics expose their data <type> as a `type` tag)
If a metric marked as n/a (not available) is still needed, either create an extension to add it or create an issue to have it added.
9.6. Migration from H2 to Postgres
Starting from version 3, Ontrack uses Postgres instead of H2 for its back-end.
9.6.1. Prerequisites
Before you actually start the migration, please make sure you have the following elements:
- a copy of the H2 database file to migrate (typically a data.mv.db file) and the associated credentials
- access to the Postgres database to migrate to and the associated credentials
- a copy of the secret files
For a Docker installation, the database is located
at |
9.6.2. Migration tool
The migration tool is part of the Ontrack release and can be downloaded as ontrack-postgresql-migration.jar from the GitHub release page.
9.6.3. Running the migration
Create a directory, called ${MIGRATION} in the rest of this section, and:
- copy the data.mv.db database file into this directory
- copy the ontrack-postgresql-migration.jar migration tool into this directory
Run the following command:
java -jar ontrack-postgresql-migration.jar
By default, the tool will look for the H2 database file in the current directory, using ontrack / ontrack as credentials.
By default, the tool will use a Postgres database located at jdbc:postgresql://localhost:5432/ontrack, using ontrack / ontrack as credentials.
You can change those default values using the configuration options below.
9.6.4. Migration tool options
Migration options can either be specified on the command line or by creating a local application.properties file. The available options cover the JDBC URL, user and password used to connect to the H2 database, and the JDBC URL, user and password used to connect to the Postgres database.
9.6.5. Secret files
The master.key and net.nemerosa.ontrack.security.EncryptionServiceImpl.encryption files must be copied and put in the correct place.
For a development environment, put those files in work/files/security/secrets, relative to the workspace root.
9.7. Postgres and Flyway
The creation and updates of the Ontrack schema in Postgres is managed using Flyway.
The configuration is initialized in the net.nemerosa.ontrack.repository.RepositoryConfig class, and SQL files are stored in ontrack-database/src/main/resources/ontrack/sql.
As of now, extensions cannot contribute to the schema.
The actual migration is processed using the net.nemerosa.ontrack.service.support.StartupStrategy component. Once the database has been upgraded, all StartupService implementations are started in their specified order.
9.8. Using production data
Sometimes, when developing Ontrack or working with extensions, it might be useful to copy the production data locally.
Make sure to have the pg_dump and psql tools available in your working environment.
They are part of the Postgres installation but can be downloaded individually from the download page.
To export the data from the source database into a local ontrack.sql file:
pg_dump --dbname ontrack --host <source-host> --port 5432 \
--username <source-user> > ontrack.sql
To import this data into the target database:
psql --dbname ontrack --host <target-host> --port 5432 \
--username <target-user> < ontrack.sql
The target database must be empty (no tables, no sequences).
When the migration of data is done, do not forget to also copy the secret files and to put them in the correct location.
For a Docker installation, the secret files are in
|
In the development environment,
the secret files are in
|
9.9. DSL Reference
9.9.1. Ontrack
An Ontrack instance is usually bound to the ontrack
identifier and is the root for all DSL calls.
Method summary |
|
---|---|
Method |
Description |
Looks for a branch in a project. Fails if not found. |
|
Looks for a build by name. Fails if not found. |
|
Configures the general settings of Ontrack. See Config. |
|
Runs an arbitrary DELETE request for a relative path and returns JSON |
|
Downloads an arbitrary document using a relative path. |
|
Finds a project using its name. Returns null if not found. |
|
Runs an arbitrary GET request for a relative path and returns JSON |
|
Access to the administration of Ontrack |
|
Access to the general configuration of Ontrack |
|
Gets the list of projects |
|
Gets the version of the remote Ontrack server |
|
|
|
Runs an arbitrary POST request for a relative path and some data, and returns JSON |
|
Finds or creates a project. |
|
Finds or creates a project, and configures it. |
|
Looks for a promotion level by name. Fails if not found. |
|
Runs an arbitrary PUT request for a relative path and some data, and returns JSON |
|
Launches a global search based on a token. |
|
Resets all search indexes and re-index optionally |
|
Runs an arbitrary GET request for a relative path and returns text |
|
Uploads some typed data on a relative path and returns some JSON |
|
Uploads some arbitrary binary data on a relative path and returns some JSON. See |
|
Looks for a validation stamp by name. Fails if not found. |
branch |
---|
Looks for a branch in a project. Fails if not found. See: Branch |
build |
---|
Looks for a build by name. Fails if not found. See: Build Sample:
|
configure |
---|
Configures the general settings of Ontrack. See Config. |
delete |
---|
Runs an arbitrary DELETE request for a relative path and returns JSON |
download |
---|
findProject |
---|
Finds a project using its name. Returns null if not found. See: Project Sample:
|
get |
---|
Runs an arbitrary GET request for a relative path and returns JSON |
getAdmin |
---|
getConfig |
---|
getProjects |
---|
Gets the list of projects See: Project Sample:
|
getVersion |
---|
Gets the version of the remote Ontrack server |
graphQLQuery |
---|
The It returns a JSON representation of an
The
See GraphQL support for more information about the GraphQL Ontrack integration. |
post |
---|
Runs an arbitrary POST request for a relative path and some data, and returns JSON |
project |
---|
Finds or creates a project. See: Project Sample:
|
project |
---|
Finds or creates a project, and configures it. See: Project Sample:
|
promotionLevel |
---|
Looks for a promotion level by name. Fails if not found. See: PromotionLevel |
put |
---|
Runs an arbitrary PUT request for a relative path and some data, and returns JSON |
search |
---|
Launches a global search based on a token. See: SearchResult |
searchIndexReset |
---|
Resets all search indexes and re-index optionally |
text |
---|
Runs an arbitrary GET request for a relative path and returns text |
upload |
---|
Uploads some typed data on a relative path and returns some JSON Creates a multi-part upload request where the
|
upload |
---|
Uploads some arbitrary binary data on a relative path and returns some JSON. See |
validationStamp |
---|
Looks for a validation stamp by name. Fails if not found. See: ValidationStamp |
9.9.2. AbstractProjectResource
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Configures this entity. |
|
Deletes this entity. |
|
Gets the data for the first decoration of a given type. If no decoration is available, returns null. |
|
Returns the list of decoration data (JSON) for a given decoration type. |
|
Returns the list of decorations for this entity. Each item has a |
|
Returns any description attached to this entity. |
|
Returns the numeric ID of this entity. |
|
Gets the Jenkins decoration for this entity. |
|
Gets any message for this entity |
|
Returns the name of this entity. |
|
|
|
|
|
Sets the value for a property of this entity. Prefer using dedicated DSL methods. |
config |
---|
Configures this entity. |
delete |
---|
Deletes this entity. |
getDecoration |
---|
Gets the data for the first decoration of a given type. If no decoration is available, returns null. |
getDecorations |
---|
Returns the list of decoration data (JSON) for a given decoration type. |
getDecorations |
---|
Returns the list of decorations for this entity. Each item has a |
getDescription |
---|
Returns any description attached to this entity. |
getId |
---|
Returns the numeric ID of this entity. |
getJenkinsJobDecoration |
---|
Gets the Jenkins decoration for this entity. |
getMessageDecoration |
---|
Gets any message for this entity The message is returned as a map containing:
The returned map is Sample:
|
getName |
---|
Returns the name of this entity. |
getProperty |
---|
Gets a property on the project entity. If If
|
property |
---|
Gets a required property on the project entity. If the property does not exist or is not set, a
|
property |
---|
Sets the value for a property of this entity. Prefer using dedicated DSL methods. |
9.9.3. AbstractResource
Method summary |
|
---|---|
Method |
Description |
Gets the internal JSON representation of this resource. |
|
Gets the Web page address for this resource. |
|
Gets a link address. |
|
Gets a link address if it exists. |
getNode |
---|
Gets the internal JSON representation of this resource. |
getPage |
---|
Gets the Web page address for this resource. This method returns the URL of the Web page which displays this resource. This is equivalent to calling |
link |
---|
Gets a link address. The For example, If the link does not exist, an The value of the link can be called using the Ontrack methods:
|
optionalLink |
---|
Gets a link address if it exists. This is the same as the |
9.9.4. Account
Representation of a user account.
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
List of groups this account belongs to. |
|
Source of the account: LDAP, built-in, … |
|
Email for the account. |
|
Display name for the account. |
|
Unique ID for the account. |
|
User name, used for signing in. |
|
Role for the user: admin or not. |
getAccountGroups |
---|
getAuthenticationSource |
---|
Source of the account: LDAP, built-in, … See: AuthenticationSource |
getEmail |
---|
Email for the account. |
getFullName |
---|
Display name for the account. |
getId |
---|
Unique ID for the account. |
getName |
---|
User name, used for signing in. |
getRole |
---|
Role for the user: admin or not. |
9.9.5. AccountGroup
Account group. Just a name and a description.
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Description of the group. |
|
Unique ID for the group. |
|
Name of the group. Unique. |
getDescription |
---|
Description of the group. |
getId |
---|
Unique ID for the group. |
getName |
---|
Name of the group. Unique. |
9.9.6. Admin
Administration management
Method summary |
|
---|---|
Method |
Description |
Creates or updates an account. |
|
Creates or updates an account group. |
|
Gets the list of global roles an account has. See Account permissions. |
|
Gets the list of global roles an account group has. See Account group permissions. |
|
Gets the list of roles an account group has on a project. See Account group permissions. |
|
Gets the list of roles an account has on a project. See Account permissions. |
|
Returns the list of all accounts. |
|
Returns the list of all groups. |
|
Gets the list of LDAP mappings. |
|
Gets the health/status of the application |
|
Creates or updates a LDAP mapping. |
|
Sets a global role on an account. See Account permissions. |
|
Sets a global role on an account group. See Account group permissions. |
|
Sets a project role on an account group. See Account group permissions. |
|
Sets a project role on an account. See Account permissions. |
account |
---|
Creates or updates an account. See: Account This method is used only to create built-in accounts. There is no need to create LDAP-based accounts. The The list of groups the account must belong to is provided using the names of the groups.
|
accountGroup |
---|
Creates or updates an account group. See: AccountGroup |
getAccountGlobalPermissions |
---|
Gets the list of global roles an account has. See Account permissions. |
getAccountGroupGlobalPermissions |
---|
Gets the list of global roles an account group has. See Account group permissions. |
getAccountGroupProjectPermissions |
---|
Gets the list of roles an account group has on a project. See Account group permissions. |
getAccountProjectPermissions |
---|
Gets the list of roles an account has on a project. See Account permissions. |
getAccounts |
---|
getGroups |
---|
getLdapMappings |
---|
getStatus |
---|
Gets the health/status of the application |
ldapMapping |
---|
Creates or updates a LDAP mapping. See: GroupMapping
The The
|
setAccountGlobalPermission |
---|
Sets a global role on an account. See Account permissions. |
setAccountGroupGlobalPermission |
---|
Sets a global role on an account group. See Account group permissions. |
setAccountGroupProjectPermission |
---|
Sets a project role on an account group. See Account group permissions. |
setAccountProjectPermission |
---|
Sets a project role on an account. See Account permissions. |
9.9.7. AuthenticationSource
Authentication source for an account - indicates how the account is authenticated: LDAP, built-in, etc.
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Identifier for the source: ldap, password |
|
Display name for the source |
|
Does this source allow the password to be changed? |
getId |
---|
Identifier for the source: ldap, password |
getName |
---|
Display name for the source |
isAllowingPasswordChange |
---|
Does this source allow the password to be changed? |
9.9.8. Branch
See also: AbstractProjectResource
Go to the methods
Configuration properties |
|
---|---|
See also: ProjectEntityProperties |
|
Configuration property summary |
|
Property |
Description |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Configuration: svn | ||
---|---|---|
To get the SVN branch configuration:
To associate a branch with some Subversion properties:
The parameters are:
Example:
|
Configuration: getSvn |
---|
See svn |
Configuration: gitBranch | ||
---|---|---|
Defines the Git properties for an Ontrack branch.
Examples:
|
Configuration: svnValidatorClosedIssues |
---|
A SVN-enabled branch can be associated with a validator in order to validate if there are some anomalies for the issues in the change logs.
Sets the list of issues statuses which can raise warnings if one of the issues is present after the change log.
Gets the list of statuses to look for. Example:
|
Configuration: getSvnValidatorClosedIssues |
---|
|
Configuration: svnSync |
---|
For a Subversion-enabled branch, an automated synchronisation can be set in order to regularly create builds from the list of tags in Subversion.
Sets a synchronisation every Example:
|
Configuration: getSvnSync |
---|
See svnSync |
Configuration: artifactorySync |
---|
Branch builds can be synchronised with Artifactory:
and the corresponding configuration can be accessed:
Example:
See also Artifactory configuration to have access to the list of available configurations. |
Configuration: getArtifactorySync |
---|
See artifactorySync |
Configuration: getGitBranch |
---|
See gitBranch |
Method summary |
|
---|---|
Method |
Description |
Creates a build for the branch and configures it using a closure. See |
|
Creates a build for the branch |
|
Configures the branch using a closure. |
|
Disables the branch |
|
|
|
Enables the branch |
|
Runs any filter and returns the list of corresponding builds. |
|
Access to the branch properties |
|
|
|
Returns the last promoted builds. |
|
Returns the name of the project the branch belongs to. |
|
Gets the list of promotion levels for this branch. |
|
|
|
Gets the list of validation stamps for this branch. |
|
Creates or updates a new branch from this branch template. See DSL Branch template definitions. |
|
Gets the disabled state of the branch |
|
|
|
Creates a promotion level for this branch. |
|
Creates a promotion level for this branch and configures it using a closure. |
|
Returns a list of builds for the branch, filtered according to given criteria. |
|
Synchronizes the branch template with its associated instances. Will fail if this branch is not a template. |
|
Synchronises the branch instance with its associated template. Will fail if this branch is not a template instance. |
|
Configure the branch as a template definition - see DSL Branch template definitions. |
|
|
|
Creates a validation stamp for this branch and configures it using a closure. |
|
Creates a validation stamp for this branch. |
build |
---|
build |
---|
Creates a build for the branch See: Build For example,
Setting the |
call |
---|
Configures the branch using a closure. |
disable |
---|
Disables the branch |
download | ||
---|---|---|
Download a file from the branch SCM. The branch must be associated with a SCM branch, for Git or Subversion. If not, the call will fail. The
See also DSL SCM extensions. |
enable |
---|
Enables the branch |
filter |
---|
Runs any filter and returns the list of corresponding builds. See: Build
This is a low level method and more specialised methods should instead be used like
|
getConfig |
---|
Access to the branch properties |
getInstance |
---|
If the branch is a template instance, returns a
|
getLastPromotedBuilds |
---|
Returns the last promoted builds. See: Build For example, to get the last promoted build:
|
getProject |
---|
Returns the name of the project the branch belongs to. Sample:
|
getPromotionLevels |
---|
Gets the list of promotion levels for this branch. See: PromotionLevel |
getType |
---|
Returns the type of the branch when it comes to templating. Possible values are:
|
getValidationStamps |
---|
Gets the list of validation stamps for this branch. See: ValidationStamp |
instance |
---|
Creates or updates a new branch from this branch template. See DSL Branch template definitions. |
isDisabled |
---|
Gets the disabled state of the branch |
link |
---|
Links a branch to an existing template. It will fail if the branch is already linked to a template
or is a template definition itself. See The You can put the |
promotionLevel |
---|
Creates a promotion level for this branch. See: PromotionLevel |
promotionLevel |
---|
Creates a promotion level for this branch and configures it using a closure. See: PromotionLevel |
standardFilter | ||||||||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Returns a list of builds for the branch, filtered according to given criteria. See: Build For example, to get the last build of a given promotion:
The
|
sync |
---|
Synchronizes the branch template with its associated instances. Will fail if this branch is not a template. |
syncInstance |
---|
Synchronises the branch instance with its associated template. Will fail if this branch is not a template instance. |
template |
---|
Configure the branch as a template definition - see DSL Branch template definitions. |
unlink |
---|
Disconnects the branch template instance from its template:
|
validationStamp |
---|
Creates a validation stamp for this branch and configures it using a closure. See: ValidationStamp |
validationStamp |
---|
Creates a validation stamp for this branch. See: ValidationStamp |
9.9.9. Build
In order to get the change log between two builds, look at the documentation at DSL Change logs.
See also: AbstractProjectResource
Go to the methods
Configuration properties |
|
---|---|
See also: ProjectEntityProperties |
|
Configuration property summary |
|
Property |
Description |
Gets the Git commit associated to this build. |
|
Gets the Jenkins build property. |
|
Gets any label attached to the build. Returns null if none is attached. |
|
Sets a Git commit associated with this build. |
|
Associates a Jenkins build with this build. |
|
|
Configuration: label |
---|
A label or release can be attached to a build using:
For example:
To get the label associated with a build:
|
Configuration: getLabel |
---|
Configuration: jenkinsBuild | ||
---|---|---|
Associates a Jenkins build with this build. The The The
|
Configuration: getJenkinsBuild | ||
---|---|---|
Gets the Jenkins build property. Returns an object describing the associated Jenkins build or The returned object contains the following attributes:
|
Configuration: gitCommit |
---|
Sets a Git commit associated with this build. When working with Git, a build needs to be associated with a commit. It can be done by using the build name itself as a commit indicator (full, short, or tag) or by putting the commit as a build property:
To get the commit back:
Example:
|
Configuration: getGitCommit |
---|
Gets the Git commit associated to this build. |
Method summary |
|
---|---|
Method |
Description |
|
|
Configuration of the build in a closure. |
|
Gets the build branch name. |
|
Returns the build links associated with this build |
|
|
|
Computes the change log between this build and the one given in parameter. |
|
|
|
Returns the next build in the same branch, or |
|
Returns the previous build in the same branch, or |
|
Gets the build project name. |
|
Gets the list of promotion runs for this build |
|
Returns any label associated with this build. |
|
Gets the associated run info with this build, or |
|
|
|
Gets the list of validation runs for this build |
|
Promotes this build to the given promotion level and configures the created promotion run. |
|
Promotes this build to the given promotion level. |
|
Sets the run info for this build. |
|
|
|
|
|
|
|
Associates some critical / high / medium / low issue counts with the validation. The validation stamp must be configured to accept CHML as validation data. |
|
Associates some data with the validation. |
|
Associates some fraction with the validation. The validation stamp must be configured to accept fraction as validation data. |
|
Associates some arbitrary metrics with the validation. |
|
Associates some number with the validation. The validation stamp must be configured to accept number as validation data. |
|
Associates some percentage with the validation. The validation stamp must be configured to accept percentage as validation data. |
|
Associates some text with the validation. The validation stamp must be configured to accept text as validation data. |
buildLink | ||||
---|---|---|---|---|
A build can be linked to other builds. To create links:
To get the list of linked builds:
|
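A minimal sketch, assuming a `dependency` project whose build `2.0.0` already exists:

```groovy
// Create a link from build 1 of `project` to build 2.0.0 of `dependency`
ontrack.build('project', 'branch', '1').buildLink('dependency', '2.0.0')
// List the linked builds (the attribute names on the link objects are assumptions)
ontrack.build('project', 'branch', '1').buildLinks.each { link ->
    println "${link.project} -> ${link.name}"
}
```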
call |
---|
Configuration of the build in a closure. |
getBranch |
---|
Gets the build branch name. |
getBuildLinkDecorations |
---|
Returns the build links associated with this build |
getBuildLinks |
---|
See buildLink |
getChangeLog |
---|
Computes the change log between this build and the one given in parameter. See: ChangeLog |
getConfig |
---|
|
getNextBuild |
---|
Returns the next build in the same branch, or null if there is none. |
getPreviousBuild |
---|
Returns the previous build in the same branch, or null if there is none. |
getProject |
---|
Gets the build project name. |
getPromotionRuns |
---|
Gets the list of promotion runs for this build. See: PromotionRun |
getReleaseDecoration |
---|
Returns any label associated with this build. |
getRunInfo |
---|
Gets the associated run info for this build, or null if none is set. The returned object has the following properties:
Example:
|
getSvnRevisionDecoration |
---|
Returns any Subversion revision attached to this build.
|
getValidationRuns |
---|
Gets the list of validation runs for this build. See: ValidationRun |
promote |
---|
Promotes this build to the given promotion level and configures the created promotion run. See: PromotionRun |
promote |
---|
Promotes this build to the given promotion level. See: PromotionRun |
setRunInfo |
---|
Sets the run info for this build. Accepted parameters are:
Example:
|
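A hedged sketch; the parameter names below (`runTime` in seconds, `sourceType`, `sourceUri`) are assumptions and must be checked against the accepted parameters listed above:

```groovy
ontrack.build('project', 'branch', '1').setRunInfo(
    runTime: 27,                 // duration in seconds (assumed unit)
    sourceType: 'jenkins',       // hypothetical source identifier
    sourceUri: 'https://jenkins.company.com/job/my-job/27'  // placeholder URI
)
```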
signature |
---|
Sets the signature of the build. This method is granted
only to authorized users. The date is expected to be UTC. |
validate |
---|
Validates the build using the given validation stamp and status, and configures the resulting validation run. See: ValidationRun |
validate |
---|
See: ValidationRun
Validates the build using the given validation stamp and status - possible values for the status are:
|
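For example, assuming a `SMOKE` validation stamp exists on the branch (this mirrors the ValidationRun sample later in this chapter):

```groovy
// Validate build 1 with the SMOKE validation stamp and a FAILED status
def run = ontrack.build('project', 'branch', '1').validate('SMOKE', 'FAILED')
assert run.status == 'FAILED'
```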
validateWithCHML |
---|
Associates some critical / high / medium / low issue counts with the validation. The validation stamp must be configured to accept CHML as validation data. See: ValidationRun |
validateWithData |
---|
Associates some data with the validation. See: ValidationRun |
validateWithFraction |
---|
Associates some fraction with the validation. The validation stamp must be configured to accept fraction as validation data. See: ValidationRun |
validateWithMetrics |
---|
Associates some arbitrary metrics with the validation. See: ValidationRun |
validateWithNumber |
---|
Associates some number with the validation. The validation stamp must be configured to accept number as validation data. See: ValidationRun |
validateWithPercentage |
---|
Associates some percentage with the validation. The validation stamp must be configured to accept percentage as validation data. See: ValidationRun |
validateWithText |
---|
Associates some text with the validation. The validation stamp must be configured to accept text as validation data. See: ValidationRun |
9.9.10. ChangeLog
Change log between two builds. See getChangeLog
method.
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Exports the issue change log. See this section for an example. |
|
List of commits in the change log. |
|
List of file changes in the change log. |
|
Lower boundary of the change log. |
|
List of issues in the change log. |
|
List of issues IDs in the change log. |
|
Upper boundary of the change log. |
|
UUID of the change log. |
exportIssues |
---|
Exports the issue change log. See this section for an example. |
getCommits |
---|
getFiles |
---|
getFrom |
---|
getIssues |
---|
getIssuesIds |
---|
List of issues IDs in the change log. |
getTo |
---|
getUuid |
---|
UUID of the change log. |
9.9.11. ChangeLogCommit
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Gets the author name of the commit |
|
Gets the author email of the commit. Can be null. |
|
Gets the formatted message of the commit, where issues might have been replaced by links. |
|
Gets the full hash of the commit |
|
Gets a link to SCM for this commit. |
|
Gets the trimmed message of the commit. |
|
Gets the abbreviated hash of the commit |
|
Gets the timestamp of the commit as an ISO date string. |
getAuthor |
---|
Gets the author name of the commit |
getAuthorEmail |
---|
Gets the author email of the commit. Can be null. |
getFormattedMessage |
---|
Gets the formatted message of the commit, where issues might have been replaced by links. |
getId |
---|
Gets the full hash of the commit |
getLink |
---|
Gets a link to SCM for this commit. |
getMessage |
---|
Gets the trimmed message of the commit. |
getShortId |
---|
Gets the abbreviated hash of the commit |
getTimestamp |
---|
Gets the timestamp of the commit as an ISO date string. |
9.9.12. ChangeLogFile
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Change type for this file. Can be one of: ADDED, MODIFIED, DELETED, RENAMED, COPIED, UNDEFINED |
|
List of possible change types. Can be one of: ADDED, MODIFIED, DELETED, RENAMED, COPIED, UNDEFINED |
|
Relative path to the file being changed. |
getChangeType |
---|
Change type for this file. Can be one of: ADDED, MODIFIED, DELETED, RENAMED, COPIED, UNDEFINED |
getChangeTypes |
---|
List of possible change types. Can be one of: ADDED, MODIFIED, DELETED, RENAMED, COPIED, UNDEFINED |
getPath |
---|
Relative path to the file being changed. |
9.9.13. ChangeLogIssue
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Gets the display key for this issue. |
|
Gets the technical key for this issue. |
|
Gets the status of this issue. |
|
Gets the summary for this issue. |
|
Gets the last update time for this issue, as an ISO date string. |
|
Gets the URL to this issue. |
getDisplayKey |
---|
Gets the display key for this issue. |
getKey |
---|
Gets the technical key for this issue. |
getStatus |
---|
Gets the status of this issue. |
getSummary |
---|
Gets the summary for this issue. |
getUpdateTime |
---|
Gets the last update time for this issue, as an ISO date string. |
getUrl |
---|
Gets the URL to this issue. |
9.9.14. Config
General configuration of Ontrack.
Method summary |
|
---|---|
Method |
Description |
Creates or updates an Artifactory configuration. |
|
|
|
|
|
|
|
|
|
Checks if the projects are accessible in anonymous mode. |
|
|
|
|
|
Gets an existing label, or returns null if it does not exist. |
|
Gets the list of labels |
|
Gets the global LDAP settings |
|
Gets the main build links settings |
|
Gets the list of predefined promotion levels. |
|
Gets the list of predefined validation stamps. |
|
Gets the previous promotion condition settings |
|
Gets the list of SonarQube configuration ids |
|
Gets the global SonarQube settings |
|
|
|
|
|
Creates or updates a Git configuration. |
|
|
|
|
|
|
|
Creates or updates a Jenkins configuration. |
|
Creates or updates a JIRA configuration. |
|
Creates or updates a label |
|
See |
|
See autoPromotionLevel. |
|
See |
|
See |
|
Sets whether the projects are accessible in anonymous mode. |
|
Sets the global LDAP settings |
|
Sets the main build links settings |
|
Sets the previous promotion condition settings |
|
Sets the global SonarQube settings |
|
Creates or updates a SonarQube configuration. |
|
Creates or updates a BitBucket configuration. |
|
Creates or updates a Subversion configuration. |
artifactory |
---|
Creates or updates an Artifactory configuration. Access to Artifactory is done through the configurations:
The list of Artifactory configurations is accessible:
Example:
|
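A sketch of creating a configuration and listing the existing ones; the configuration name and URL are placeholders, and the two-argument signature is an assumption to be checked against the DSL reference:

```groovy
// Creates or updates an Artifactory configuration (name, URL)
ontrack.config.artifactory 'Artifactory', 'https://artifactory.company.com'
// Lists the names of the existing Artifactory configurations
def names = ontrack.config.artifactory
```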
getArtifactory |
---|
See artifactory |
getGit |
---|
See git |
getGitHub |
---|
See gitHub |
getGitLab |
---|
See gitLab |
getGrantProjectViewToAll |
---|
Checks if the projects are accessible in anonymous mode. Sample:
|
getJenkins |
---|
See jenkins |
getJira |
---|
See jira |
getLabel |
---|
getLabels |
---|
getLdapSettings |
---|
getMainBuildLinks |
---|
Gets the main build links settings |
getPredefinedPromotionLevels |
---|
Gets the list of predefined promotion levels. |
getPredefinedValidationStamps |
---|
Gets the list of predefined validation stamps. |
getPreviousPromotionRequired |
---|
Gets the previous promotion condition settings |
getSonarQube |
---|
Gets the list of SonarQube configuration ids |
getSonarQubeSettings |
---|
Gets the global SonarQube settings |
getStash |
---|
See stash |
getSvn |
---|
See svn |
git | ||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Creates or updates a Git configuration. When working with Git, access to the Git repositories must be configured.
The parameters are the following:
See the documentation to know the meaning of those parameters. Example:
|
gitHub | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|
When working with GitHub, access to the GitHub API must be configured.
The parameters are the following:
See Working with GitHub to know the meaning of those parameters. Example:
You can also configure anonymous access to https://github.com (not recommended) by doing:
|
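A sketch under the assumption that the configuration accepts named parameters such as `url` and `oauth2Token` (these parameter names are assumptions), while the anonymous form takes only a configuration name:

```groovy
// Authenticated access to a GitHub instance (placeholder values)
ontrack.config.gitHub 'GitHub', url: 'https://github.company.com', oauth2Token: 'secret'
// Anonymous access to https://github.com (not recommended)
ontrack.config.gitHub 'github.com'
```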
gitHub |
---|
See gitHub |
gitLab | ||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
When working with GitLab, access to the GitLab API must be configured.
The parameters are the following:
Example:
|
jenkins |
---|
Creates or updates a Jenkins configuration. Access to Jenkins is done through the configurations:
The list of Jenkins configurations is accessible:
Example:
|
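A minimal sketch; the configuration name and URL are placeholders, and the two-argument signature is an assumption:

```groovy
// Creates or updates a Jenkins configuration (name, URL)
ontrack.config.jenkins 'Jenkins', 'https://jenkins.company.com'
// Lists the names of the existing Jenkins configurations
def jenkinsNames = ontrack.config.jenkins
```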
jira |
---|
Creates or updates a JIRA configuration. Access to JIRA is done through the configurations:
The list of JIRA configurations is accessible:
Example:
|
label |
---|
Creates or updates a label. See: Label |
predefinedPromotionLevel |
---|
See autoPromotionLevel. |
predefinedPromotionLevel |
---|
See autoPromotionLevel. |
predefinedValidationStamp |
---|
See autoValidationStamp. |
predefinedValidationStamp |
---|
See autoValidationStamp. |
setGrantProjectViewToAll |
---|
Sets whether the projects are accessible in anonymous mode. Sample:
|
setLdapSettings |
---|
Sets the global LDAP settings |
setMainBuildLinks |
---|
Sets the main build links settings |
setPreviousPromotionRequired |
---|
Sets the previous promotion condition settings |
setSonarQubeSettings |
---|
Sets the global SonarQube settings |
sonarQube |
---|
Creates or updates a SonarQube configuration. |
stash | ||||||
---|---|---|---|---|---|---|
Creates or updates a BitBucket configuration. When working with BitBucket, access to the BitBucket application must be configured.
The parameters are the following:
Example:
|
svn | ||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Creates or updates a Subversion configuration. In order to create, update, and access a Subversion configuration, use:
Some parameters, like
Example of issue link (with JIRA):
|
9.9.15. Document
Definition for a document, for upload and download methods. See also DSL Images and documents.
Method summary |
|
---|---|
Method |
Description |
Returns the content of the document as an array of bytes. |
|
Returns the MIME type of the document. |
|
Returns true if the document is empty and has no content. |
getContent |
---|
Returns the content of the document as an array of bytes. |
getType |
---|
Returns the MIME type of the document. |
isEmpty |
---|
Returns true if the document is empty and has no content. |
9.9.16. GroupMapping
Mapping between a LDAP group and an account group.
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Name of the Ontrack account group. |
|
Name of the LDAP group. |
getGroupName |
---|
Name of the Ontrack account group. |
getName |
---|
Name of the LDAP group. |
9.9.17. LDAPSettings
LDAP settings parameters.
The LDAP settings are defined using the following values:
Parameter | Description |
---|---|
enabled |
Set to true to enable LDAP authentication. |
url |
URL to the LDAP end point. For example, ldaps://ldap.company.com:636. |
searchBase |
DN for the search root, for example dc=company,dc=com. |
searchFilter |
Query to look for an account. |
user |
Service account user used to connect to the LDAP server. |
password |
Password of the service account user to connect to the LDAP. |
fullNameAttribute |
Attribute which contains the display name of the account. Defaults to |
emailAttribute |
Attribute which contains the email of the account. Defaults to |
groupAttribute |
Name of the multi-valued attribute which contains the groups the account belongs to.
Defaults to |
groupFilter |
When getting the list of groups for an account, this list is filtered using a regular expression.
|
When getting the LDAP settings,
the password field is always returned as an empty string.
|
For example, to set the LDAP settings:
ontrack.config.ldapSettings = [
enabled : true,
url : 'ldaps://ldap.company.com:636',
searchBase : 'dc=company,dc=com',
searchFilter: '(sAMAccountName={0})',
user : 'service',
password : 'secret',
]
9.9.18. Label
Label
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Category of the label. |
|
Color of the label. |
|
Description of the label. |
|
ID of the label. |
|
Name of the label. |
getCategory |
---|
Category of the label. |
getColor |
---|
Color of the label. |
getDescription |
---|
Description of the label. |
getId |
---|
ID of the label. |
getName |
---|
Name of the label. |
9.9.19. MainBuildLinks
Configuration which describes the list of build links to display, based on some project labels.
9.9.20. PredefinedPromotionLevel
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Downloads the image for the promotion level. See DSL Images and documents. |
|
Sets the image for this promotion level (must be a PNG file). See DSL Images and documents. |
getImage |
---|
Downloads the image for the promotion level. See DSL Images and documents. See: Document |
image |
---|
Sets the image for this promotion level (must be a PNG file). See DSL Images and documents. |
9.9.21. PredefinedValidationStamp
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Gets the data type for the validation stamp. |
|
Downloads the image for the validation stamp. See DSL Images and documents. |
|
Sets the image for this validation stamp (must be a PNG file). See DSL Images and documents. |
|
Sets a data type for the validation stamp |
getDataType |
---|
Gets the data type for the validation stamp. |
getImage |
---|
Downloads the image for the validation stamp. See DSL Images and documents. See: Document |
image |
---|
Sets the image for this validation stamp (must be a PNG file). See DSL Images and documents. |
setDataType |
---|
Sets a data type for the validation stamp |
9.9.22. Project
The project is the main entity of Ontrack.
// Getting a project
def project = ontrack.project('project')
project {
// Creates a branch for the project
branch('1.0')
}
See also: AbstractProjectResource
Go to the methods
Configuration properties |
|
---|---|
See also: ProjectEntityProperties |
|
Configuration property summary |
|
Property |
Description |
|
|
|
|
Sets the display options for the build links targeting this project. |
|
|
|
|
|
|
|
|
|
|
|
|
|
Gets the options for displaying the builds being used by the builds of this project. |
|
|
|
|
|
|
|
|
|
Configures the project for Git. |
|
Configures the project for GitHub. |
|
|
|
|
|
|
|
Sets the options for displaying the builds being used by the builds of this project. |
|
Sets the SonarQube settings for this project. |
|
Setup of stale branches management. |
|
|
|
Configures the project for Subversion. |
Configuration: stale |
---|
Setup of stale branches management. Stale branches can be automatically disabled or even deleted. To enable this property on a project:
It is possible to keep branches which have been promoted to some levels.
|
Configuration: gitLab |
---|
|
Configuration: getGitLab |
---|
See gitLab |
Configuration: gitHub |
---|
Configures the project for GitHub. |
Configuration: stash |
---|
Associates the project with the BitBucket configuration with the given name. Example:
|
Configuration: getStash |
---|
See stash |
Configuration: git |
---|
Configures the project for Git. Associates a project with a Git configuration.
Gets the associated Git configuration:
Example:
|
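A sketch, assuming a Git configuration named `MyGitConfig` has already been created at the global level:

```groovy
// Associate the project with an existing Git configuration
ontrack.project('project').config {
    git 'MyGitConfig'
}
// Read the association back (the shape of the returned object is an assumption)
def gitConfig = ontrack.project('project').config.git
```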
Configuration: getGit |
---|
See git |
Configuration: svn |
---|
Configures the project for Subversion. To associate a project with an existing Subversion configuration:
To get the SVN project configuration:
Example:
|
Configuration: getSvn |
---|
See svn |
Configuration: getMainBuildLinks |
---|
Gets the options for displaying the builds being used by the builds of this project. See: MainBuildLinks |
Configuration: setMainBuildLinks |
---|
Sets the options for displaying the builds being used by the builds of this project. |
Configuration: sonarQube | ||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Sets the SonarQube settings for this project. To enable SonarQube collection of measures for a project:
The full list of parameters is described below:
|
Configuration: getSonarQube |
---|
See sonarQube |
Configuration: getStale |
---|
See stale |
Configuration: jiraFollowLinks |
---|
See jiraFollowLinks |
Configuration: jiraFollowLinks |
---|
Links between JIRA issues can be followed when getting information about issues. The links to follow can be configured at the project’s level:
The list of links to follow is accessible through:
Example:
|
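A sketch; the link names (`Clones`, `Depends`) are illustrative JIRA link types:

```groovy
// Configure the JIRA links to follow at the project level
ontrack.project('project').config {
    jiraFollowLinks 'Clones', 'Depends'
}
// Read the list of links to follow back
def links = ontrack.project('project').config.jiraFollowLinks
```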
Configuration: getJiraFollowLinks |
---|
See jiraFollowLinks |
Configuration: autoValidationStamp | ||
---|---|---|
Validation stamps can be automatically created for a branch, from a list of predefined validation stamps, if the "Auto validation stamps" property is enabled on a project. To enable this property on a project:
or:
You can also edit the property so that a validation stamp is created even when no predefined validation stamp exists. In this case, the validation stamp will be created with the required name and without any image. To enable this feature:
To get the value of this property:
The list of predefined validation stamps is accessible using:
Each item contains the following properties:
Its image is accessible separately. In order to create or update predefined validation stamps, use the following method:
|
Configuration: getAutoValidationStamp |
---|
|
Configuration: autoPromotionLevel | ||
---|---|---|
Promotion levels can be automatically created for a branch, from a list of predefined promotion levels, if the "Auto promotion levels" property is enabled on a project. To enable this property on a project:
or:
To get the value of this property:
The list of predefined promotion levels is accessible using:
Each item contains the following properties:
Its image is accessible separately. In order to create or update predefined promotion levels, use the following method:
|
Configuration: getAutoPromotionLevel |
---|
|
Configuration: buildLinkDisplayOptions |
---|
Sets the display options for the build links targeting this project. |
Configuration: getBuildLinkDisplayOptions |
---|
|
Method summary |
|
---|---|
Method |
Description |
Assign a label to the project, optionally creating it if requested (and authorized) |
|
Retrieves or creates a branch for the project, and then configures it. |
|
Retrieves or creates a branch for the project |
|
Gets the list of branches for the project. |
|
Access to the project properties |
|
Gets the labels for this project |
|
Searches for builds in the project. |
|
Unassign a label from a project |
assignLabel |
---|
Assign a label to the project, optionally creating it if requested (and authorized) |
branch |
---|
Retrieves or creates a branch for the project, and then configures it. See: Branch |
branch |
---|
Retrieves or creates a branch for the project. See: Branch
If the branch already exists, it is returned as-is.
If the branch does not exist, it is created. Sample:
|
getBranches |
---|
getConfig |
---|
Access to the project properties |
getLabels |
---|
search | |||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Searches for builds in the project. See: Build. Possible options are:
Example of build searches:
|
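A hedged sketch; the option names below (`buildName`, `promotionName`) are assumptions to be checked against the list of possible options above:

```groovy
// Search for builds by name pattern (assumed option name: buildName)
def builds = ontrack.project('project').search(buildName: '1.*')
// Search for builds having a given promotion (assumed option name: promotionName)
def promoted = ontrack.project('project').search(promotionName: 'BRONZE')
```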
unassignLabel |
---|
Unassign a label from a project |
9.9.23. ProjectEntityProperties
Method summary |
|
---|---|
Method |
Description |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Sets a property. |
|
|
getJenkinsBuild |
---|
See jenkinsBuild |
getJenkinsJob |
---|
See jenkinsJob |
getLinks |
---|
See links |
getMessage |
---|
See message |
getMetaInfo |
---|
See metaInfo |
getPreviousPromotionRequired |
---|
|
jenkinsBuild |
---|
For builds, promotion runs and validation runs, it is possible to attach a reference to a Jenkins build:
or to get the build reference:
Example:
Note that Jenkins folders are supported by giving the full job name, including the folder path; for example, a job located in folders A > B > C must be referenced by its full path. See also the Jenkins job property. |
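A sketch; the argument order (configuration name, job name, build number) is an assumption:

```groovy
// Attach a Jenkins build reference to an Ontrack build
ontrack.build('project', 'branch', '1').config {
    jenkinsBuild 'Jenkins', 'my-job', 27   // configuration name, job name, build number
}
// Read the reference back
def jenkinsBuild = ontrack.build('project', 'branch', '1').config.jenkinsBuild
```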
jenkinsJob |
---|
Projects, branches, promotion levels and validation stamps can have a reference to a Jenkins job:
or to get the job reference:
Example:
Note that Jenkins folders are supported by giving the full job name, including the folder path; for example, a job located in folders A > B > C must be referenced by its full path. See also the Jenkins build property. |
links |
---|
Arbitrary named links can be associated with projects, branches, etc.
|
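A sketch; the named-argument form is an assumption, and the link name and URL are placeholders:

```groovy
// Attach a named link to a build
ontrack.build('project', 'branch', '1').config {
    links(documentation: 'https://docs.company.com/build/1')
}
// Read the links back
def links = ontrack.build('project', 'branch', '1').config.links
```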
message |
---|
An arbitrary message, together with a message type, can be associated with any entity. To set the message on any entity:
The following types of messages are supported:
For example, on a build:
To get a message:
See Message property for more details about this property. |
metaInfo |
---|
See metaInfo |
metaInfo | ||
---|---|---|
Arbitrary meta information properties can be associated with any entity. To set a list of meta information properties:
The following method is the preferred one:
This keeps any previous meta information properties and allows links to be specified if needed. To get the meta information, for example on the previous build:
See Meta information property for more details about this property. |
property |
---|
Gets a property on the project entity.
|
property |
---|
Sets a property. |
setPreviousPromotionRequired |
---|
|
9.9.24. PromotionLevel
See also: AbstractProjectResource
Go to the methods
Configuration properties |
|
---|---|
Builds can be auto promoted to a promotion level when the latter is configured to do so. A promotion level is configured for auto promotion using:
To get the list of validation stamps for the auto promotion of a promotion level:
The validation stamps used to define an auto promotion can also be defined using regular expressions:
In this sample, validation stamps are selected by a regular expression on their name. You can also exclude validation stamps using their name:
In this sample, validation stamps matching the exclusion expression are not taken into account for the auto promotion. See also: ProjectEntityProperties |
|
Configuration property summary |
|
Property |
Description |
Sets the validation stamps participating in the auto promotion. |
|
Sets the validation stamps or promotion levels participating in the auto promotion, and sets the include/exclude settings. |
|
Checks if the promotion level is set in auto promotion. |
|
Gets the validation stamps participating in the auto promotion. The returned list can be null if the property is not defined. |
|
Sets the validation stamps participating in the auto promotion. |
Configuration: getAutoPromotion |
---|
Checks if the promotion level is set in auto promotion. |
Configuration: autoPromotion |
---|
Sets the validation stamps participating in the auto promotion. |
Configuration: autoPromotion |
---|
Sets the validation stamps or promotion levels participating in the auto promotion, and sets the include/exclude settings. |
Configuration: setPromotionDependencies |
---|
Sets the validation stamps participating in the auto promotion. |
Configuration: getPromotionDependencies |
---|
Gets the validation stamps participating in the auto promotion. The returned list can be null if the property is not defined. |
Method summary |
|
---|---|
Method |
Description |
Configuration of the promotion level with a closure. |
|
Checks if this promotion level is set in auto decoration mode. |
|
Name of the associated branch. |
|
Access to the promotion level properties |
|
Gets the promotion level image (see DSL Images and documents) |
|
Name of the associated project. |
|
Sets the promotion level image (see DSL Images and documents) |
|
Sets the promotion level image (see DSL Images and documents) |
call |
---|
Configuration of the promotion level with a closure. |
getAutoPromotionPropertyDecoration |
---|
Checks if this promotion level is set in auto decoration mode. |
getBranch |
---|
Name of the associated branch. |
getConfig |
---|
Access to the promotion level properties |
getImage |
---|
getProject |
---|
Name of the associated project. |
image |
---|
Sets the promotion level image (see DSL Images and documents) |
image |
---|
Sets the promotion level image (see DSL Images and documents) |
9.9.25. PromotionRun
You can get a promotion run by promoting a build:
def run = ontrack.build('project', 'branch', '1').promote('BRONZE')
assert run.promotionLevel.name == 'BRONZE'
or by getting the list of promotion runs for a build:
def runs = ontrack.build('project', 'branch', '1').promotionRuns
assert runs.size() == 1
assert runs[0].promotionLevel.name == 'BRONZE'
See also: AbstractProjectResource
Method summary |
|
---|---|
Method |
Description |
Gets the associated promotion level (JSON) |
getPromotionLevel |
---|
Gets the associated promotion level (JSON) |
9.9.26. SearchResult
The SearchResult
class is used for listing the results of a search.
See also: AbstractResource
Sample:
ontrack.project('prj')
def results = ontrack.search('prj')
assert results.size() == 1
assert results[0].title == 'Project prj'
assert results[0].page == 'https://host/#/project/1'
Method summary |
|
---|---|
Method |
Description |
Gets a percentage of accuracy about the result. |
|
Gets a description for the search result. |
|
Gets the URI to display the search result details (Web). |
|
Gets the display name for the search result. |
|
Gets the URI to access the search result details (API). |
getAccuracy |
---|
Gets a percentage of accuracy about the result. |
getDescription |
---|
Gets a description for the search result. |
getPage |
---|
Gets the URI to display the search result details (Web). |
getTitle |
---|
Gets the display name for the search result. |
getUri |
---|
Gets the URI to access the search result details (API). |
9.9.28. ValidationRun
You can get a validation run by validating a build:
def run = ontrack.build(branch.project, branch.name, '2').validate('SMOKE', 'FAILED')
assert run.validationStamp.name == 'SMOKE'
assert run.validationRunStatuses[0].statusID.id == 'FAILED'
assert run.validationRunStatuses[0].statusID.name == 'Failed'
assert run.status == 'FAILED'
or by getting the list of validation runs for a build:
def runs = ontrack.build(branch.project, branch.name, '2').validationRuns
assert runs.size() == 1
assert runs[0].validationStamp.name == 'SMOKE'
assert runs[0].validationRunStatuses[0].statusID.id == 'FAILED'
assert runs[0].status == 'FAILED'
See also: AbstractProjectResource
Method summary |
|
---|---|
Method |
Description |
Gets the data for the validation run. |
|
Gets the last of the statuses. |
|
Gets the associated run info for this validation run, or null if none is set. |
|
Gets the status for this validation run. |
|
Gets the list of statuses |
|
Gets the associated validation stamp (JSON) |
|
Sets the run info for this validation run. |
getData |
---|
Gets the data for the validation run. |
getLastValidationRunStatus |
---|
getRunInfo |
---|
Gets the associated run info for this validation run, or null if none is set. The returned object has the following properties:
|
getStatus |
---|
Gets the status for this validation run. Possible values are:
|
getValidationRunStatuses |
---|
Gets the list of statuses. See: ValidationRunStatus |
getValidationStamp |
---|
Gets the associated validation stamp (JSON) |
setRunInfo |
---|
Sets the run info for this validation run. Accepted parameters are:
|
9.9.29. ValidationRunStatus
See also: AbstractResource
Method summary |
|
---|---|
Method |
Description |
Returns the status description |
|
Returns the numeric ID of this entity. |
|
Returns the status ID in text form |
|
Returns the status ID in JSON form |
|
Returns the status display name |
|
Returns if the status is passed or not |
|
Sets the description on this status |
getDescription |
---|
Returns the status description |
getId |
---|
Returns the numeric ID of this entity. |
getStatus |
---|
Returns the status ID in text form |
getStatusID |
---|
Returns the status ID in JSON form |
getStatusName |
---|
Returns the status display name |
isPassed |
---|
Returns if the status is passed or not |
setDescription |
---|
Sets the description on this status |
9.9.30. ValidationStamp
See also: AbstractProjectResource
Go to the methods
Configuration properties |
|
---|---|
See also: ProjectEntityProperties |
Method summary |
|
---|---|
Method |
Description |
Configuration of the validation stamp with a closure. |
|
Name of the associated branch. |
|
Access to the validation stamp properties |
|
Gets the data type for the validation stamp. |
|
Gets the validation stamp image (see DSL Images and documents) |
|
Name of the associated project. |
|
Gets the validation stamp weather decoration. |
|
Sets the validation stamp image (see DSL Images and documents) |
|
Sets the validation stamp image (see DSL Images and documents) |
|
Sets the data type for this validation stamp to 'CHML' (number of critical / high / medium / low issues). |
|
Sets a data type for the validation stamp |
|
Sets the data type for this validation stamp to 'Fraction'. |
|
Sets the data type for this validation stamp to 'metrics'. |
|
Sets the data type for this validation stamp to 'Number'. |
|
Sets the data type for this validation stamp to 'Percentage'. |
|
Sets the data type for this validation stamp to 'TestSummary'. |
|
Sets the data type for this validation stamp to 'text'. |
call |
---|
Configuration of the validation stamp with a closure. |
getBranch |
---|
Name of the associated branch. |
getConfig |
---|
Access to the validation stamp properties |
getDataType |
---|
Gets the data type for the validation stamp. |
getImage |
---|
getProject |
---|
Name of the associated project. |
getValidationStampWeatherDecoration |
---|
Gets the validation stamp weather decoration. The "weather" of the validation stamp is the status of the last 4 builds having been validated for this validation stamp on the corresponding branch. The returned object contains two attributes:
|
image |
---|
Sets the validation stamp image (see DSL Images and documents) |
image |
---|
Sets the validation stamp image (see DSL Images and documents) |
setCHMLDataType |
---|
Sets the data type for this validation stamp to 'CHML' (number of critical / high / medium / low issues). |
setDataType |
---|
Sets a data type for the validation stamp |
setFractionDataType |
---|
Sets the data type for this validation stamp to 'Fraction'. |
setMetricsDataType |
---|
Sets the data type for this validation stamp to 'metrics'. |
setNumberDataType |
---|
Sets the data type for this validation stamp to 'Number'. |
setPercentageDataType |
---|
Sets the data type for this validation stamp to 'Percentage'. |
setTestSummaryDataType |
---|
Sets the data type for this validation stamp to 'TestSummary'. |
setTextDataType |
---|
Sets the data type for this validation stamp to 'text'. |