Setup Guide

In this guide you learn how to set up the development environment of Artemis. Artemis is based on JHipster, i.e. Spring Boot development on the application server using Java 17, and TypeScript development on the application client in the browser using Angular and Webpack. To get an overview of the technologies used, have a look at the JHipster technology stack and other tutorials on the JHipster homepage.

You can find tutorials on how to set up JHipster in an IDE (IntelliJ IDEA Ultimate is recommended) at https://jhipster.github.io/configuring-ide. Note that the Community Edition of IntelliJ IDEA does not provide Spring Boot support (see the comparison matrix). Before you can build Artemis, you must install and configure the following dependencies/tools on your machine:

  1. Java JDK: We use Java (JDK 17) to develop and run the Artemis application server which is based on Spring Boot.

  2. MySQL Database Server 8: Artemis uses Hibernate to store entities in a MySQL database. Download and install the MySQL Community Server (8.0.x) and configure it according to section MySQL Setup.

  3. Node.js: We use Node LTS (>=16.13.0 < 17) to compile and run the client Angular application. Depending on your system, you can install Node either from source or as a pre-packaged bundle.

  4. Npm: We use Npm (>=8.1.0) to manage client side dependencies. Npm is typically bundled with Node.js, but can also be installed separately.

  5. Graphviz: We use Graphviz to generate graphs within exercise task descriptions. It is not necessary for a successful build, but it is necessary for production setups, as otherwise errors show up during runtime.

  6. A version control and build system is necessary for the programming exercise feature of Artemis. There are multiple stacks available for the integration with Artemis: Bamboo, Bitbucket and Jira, or Jenkins and GitLab. Both stacks are described in their own setup sections below.
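You can quickly check that the installed versions match the requirements above (assuming all tools are on your PATH):

java -version     # should report a JDK 17 build
mysql --version   # should report 8.0.x
node -v           # should report >=16.13.0 <17
npm -v            # should report >=8.1.0
dot -V            # Graphviz, only needed for production setups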



MySQL Setup

The required Artemis schema will be created / updated automatically at startup time of the server application.

As an alternative to a native MySQL setup, you can run the MySQL Database Server inside a Docker container using e.g. docker-compose -f src/main/docker/mysql.yml up.

If you run your own MySQL server, make sure to specify the default character-set as utf8mb4 and the default collation as utf8mb4_unicode_ci. You can achieve this e.g. by using a my.cnf file in the location /etc.

[client]
default-character-set = utf8mb4
[mysql]
default-character-set = utf8mb4
[mysqld]
character-set-client-handshake = TRUE
init-connect='SET NAMES utf8mb4'
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci

Make sure the configuration file is used by MySQL when you start the server. You can find more information on https://dev.mysql.com/doc/refman/8.0/en/option-files.html
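After restarting the MySQL server, you can verify that the settings are active:

mysql -u root --execute "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'collation_server';"

The variables should report utf8mb4 and utf8mb4_unicode_ci respectively.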

Users for MySQL

For the development environment the default MySQL user is ‘root’ with an empty password.
(In case you want to use a different password, make sure to change the value in application-local.yml (spring > datasource > password) and in liquibase.gradle (within the ‘liquibaseCommand’ as argument password)).

Set empty root password for MySQL 8

If you have problems connecting to the MySQL 8 database using an empty root password, you can try the following command to reset the root password to an empty password:

mysql -u root --execute "ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY '';"
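Afterwards, you can verify that the empty password works by running a simple query:

mysql -u root --execute "SELECT VERSION();"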

Warning

Empty root passwords should only be used in a development environment. The root password for a production environment must never be empty.


Server Setup

To start the Artemis application server from the development environment, first import the project into IntelliJ and then make sure to install the Spring Boot plugins to run the main class de.tum.in.www1.artemis.ArtemisApp. Before the application runs, you have to change some configuration options. You can change the options directly in the file application-artemis.yml in the folder src/main/resources/config. However, you have to be careful not to accidentally commit your password. Therefore, we strongly recommend creating a new file application-local.yml in the folder src/main/resources/config, which is ignored by default. You can override the following configuration options in this file.

artemis:
    repo-clone-path: ./repos/
    repo-download-clone-path: ./repos-download/
    encryption-password: <encrypt-password>      # LEGACY: arbitrary password for encrypting database values
    bcrypt-salt-rounds: 11   # The number of salt rounds for the bcrypt password hashing. Lower numbers make hashing faster but less secure, and vice versa.
                             # Please use the bcrypt benchmark tool to determine the best number of rounds for your system. https://github.com/ls1intum/bcrypt-Benchmark
    user-management:
        use-external: true
        password-reset:
             credential-provider: <provider> # The credential provider through which users can log in (e.g. TUMonline)
             links: # The password reset links for different languages
                 en: '<link>'
                 de: '<link>'
        external:
            url: https://jira.ase.in.tum.de
            user: <username>    # e.g. ga12abc
            password: <password>
            admin-group-name: tumuser
        ldap:
            url: <url>
            user-dn: <user-dn>
            password: <password>
            base: <base>
    version-control:
        url: https://bitbucket.ase.in.tum.de
        user: <username>    # e.g. ga12abc
        password: <password>
        token: <token>                 # VCS API token giving Artemis full Admin access. Not needed for Bamboo+Bitbucket
        ci-token: <token from the CI>   # Token generated by the CI (e.g. Jenkins) for webhooks from the VCS to the CI. Not needed for Bamboo+Bitbucket
    continuous-integration:
        url: https://bamboo.ase.in.tum.de
        user: <username>    # e.g. ga12abc
        token: <token>      # Enter a valid token generated by bamboo or leave this empty to use the fallback authentication user + password
        password: <password>
        vcs-application-link-name: LS1 Bitbucket Server     # If the VCS and CI are directly linked (normally only for Bitbucket + Bamboo)
        empty-commit-necessary: true                        # Do we need an empty commit for new exercises/repositories in order for the CI to register the repo
        # Hash/key of the ci-token, equivalent e.g. to the ci-token in version-control
        # Some CI systems, like Jenkins, offer a specific token that gets checked against any incoming notifications
        # from a VCS trying to trigger a build plan. Only if the notification request contains the correct token, the plan
        # is triggered. This can be seen as an alternative to sending an authenticated request to a REST API and then
        # triggering the plan.
        # In the case of Artemis, this is only really needed for the Jenkins + GitLab setup, since the GitLab plugin in
        # Jenkins only allows triggering the Jenkins jobs using such a token. Furthermore, in this case, the value of the
        # hudson.util.Secret is stored in the build plan, so you also have to specify this encrypted string here and NOT the actual token value itself!
        # You can get this by GETting any job.xml for a job with an activated GitLab step and your token value of choice.
        secret-push-token: <token hash>
        # Key of the saved credentials for the VCS service
        # Bamboo: not needed
        # Jenkins: You have to specify the key from the credentials page in Jenkins under which the user and
        #          password for the VCS are stored
        vcs-credentials: <credentials key>
        # Key of the credentials for the Artemis notification token
        # Bamboo: not needed
        # Jenkins: You have to specify the key from the credentials page in Jenkins under which the notification token is stored
        notification-token: <credentials key>
        # The actual value of the notification token to check against in Artemis. This is the token that gets sent with
        # every request the CI system makes to Artemis containing a new result after a build.
        # Bamboo: The token value you use for the Server Notification Plugin
        # Jenkins: The token value you use for the Server Notification Plugin; it is stored under the notification-token credential above
        authentication-token: <token>
    git:
        name: Artemis
        email: artemis@in.tum.de
    athene:
        url: http://localhost
        base64-secret: YWVuaXF1YWRpNWNlaXJpNmFlbTZkb283dXphaVF1b29oM3J1MWNoYWlyNHRoZWUzb2huZ2FpM211bGVlM0VpcAo=
        token-validity-in-seconds: 10800

Replace all entries of the form <...> with proper values, e.g. your TUM Online account credentials to connect to the given instances of JIRA, Bitbucket and Bamboo. Alternatively, you can connect to your local JIRA, Bitbucket and Bamboo instances. It is not necessary to fill in all the fields; most of them can be left blank. Note that additional information about the setup for programming exercises is provided below.

Note

Be careful that you do not commit changes to application-artemis.yml. To avoid this, follow the best practice when configuring your local development environment:

  1. Create a file named application-local.yml under src/main/resources/config.

  2. Copy the contents of application-artemis.yml into the new file.

  3. Update configuration values in application-local.yml.

By default, changes to application-local.yml will be ignored by git so you don’t accidentally share your credentials or other local configuration options. The run configurations contain a profile local at the end to make sure the application-local.yml is considered. You can create your own configuration files application-<name>.yml and then activate the profile <name> in the run configuration if you need additional customizations.

If you use a password, you need to adapt it in gradle/liquibase.gradle.

Run the server via a service configuration

This setup is recommended for production instances as it registers Artemis as a service and e.g. enables auto-restarting of Artemis after the VM running Artemis has been restarted. As an alternative, you can take a look at the section below about running the server via Docker. For development setups, see the other guides below.

This is a service file that works on Debian/Ubuntu (using systemd):

[Unit]
Description=Artemis
After=syslog.target
[Service]
User=artemis
WorkingDirectory=/opt/artemis
ExecStart=/usr/bin/java \
  -Djdk.tls.ephemeralDHKeySize=2048 \
  -DLC_CTYPE=UTF-8 \
  -Dfile.encoding=UTF-8 \
  -Dsun.jnu.encoding=UTF-8 \
  -Djava.security.egd=file:/dev/./urandom \
  -Xmx2048m \
  --add-modules java.se \
  --add-exports java.base/jdk.internal.ref=ALL-UNNAMED \
  --add-exports java.naming/com.sun.jndi.ldap=ALL-UNNAMED \
  --add-opens java.base/java.lang=ALL-UNNAMED \
  --add-opens java.base/java.nio=ALL-UNNAMED \
  --add-opens java.base/sun.nio.ch=ALL-UNNAMED \
  --add-opens java.management/sun.management=ALL-UNNAMED \
  --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED \
  -jar artemis.war \
  --spring.profiles.active=prod,bamboo,bitbucket,jira,ldap,scheduling,openapi
SuccessExitStatus=143
# 'append:' requires systemd 240 or newer; a bare file path is not valid for StandardOutput
StandardOutput=append:/opt/artemis/artemis.log
StandardError=inherit
[Install]
WantedBy=multi-user.target

The following parts might also be useful for other (production) setups, even if this service file is not used:

  • -Djava.security.egd=file:/dev/./urandom: This is required if repositories are cloned via SSH from the VCS.

    The default (pseudo-)random-generator /dev/random is blocking which results in very bad performance when using SSH due to lack of entropy.

The file should be placed at /etc/systemd/system/artemis.service and after running sudo systemctl daemon-reload, you can start the service using sudo systemctl start artemis.

You can stop the service using sudo systemctl stop artemis and restart it using sudo systemctl restart artemis.

Logs can be fetched using sudo journalctl -u artemis -f -n 200.
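To make sure Artemis comes back up automatically after a reboot (the auto-restart behavior mentioned above), enable the service and check its status:

sudo systemctl enable artemis
sudo systemctl status artemis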

Run the server via Docker

Artemis provides a Docker image named ghcr.io/ls1intum/artemis:<TAG/VERSION>.
The current develop branch is provided by the tag develop.
The latest release is provided by the tag latest.
Specific releases like 5.7.1 can be retrieved as ghcr.io/ls1intum/artemis:5.7.1.
Branches tied to a pull request can be obtained by using the tag PR-<PR NUMBER>.

Dockerfile

You can find the latest Artemis Dockerfile at src/main/docker/Dockerfile.

  • The Dockerfile defines three Docker volumes

    • /opt/artemis/config: This will be used to store the configuration of Artemis in YAML files. If this directory is empty, the default configuration of Artemis will be copied upon container start.

      Tip

      Instead of mounting this config directory, you can also use environment variables for the configuration as defined by the Spring relaxed binding. You can either place those environment variables directly in the environment section, or create an .env-file. When starting an Artemis container directly with the Docker-CLI, an .env-file can also be given via the --env-file option.

      To ease the transition of an existing set of YAML configuration files into the environment variable style, a helper script can be used.

    • /opt/artemis/data: This directory should be used for any data (e.g., local clones of repositories). Therefore, configure Artemis to store these files in this directory. In order to do that, you have to change some properties in the configuration files (i.e., artemis.repo-clone-path, artemis.repo-download-clone-path, artemis.course-archives-path, artemis.submission-export-path, and artemis.file-upload-path). Otherwise, you will get permission failures.

    • /opt/artemis/public/content: This directory will be used for branding. You can specify a favicon, imprint.html, and privacy_statement.html here.

  • The Dockerfile sets the correct permissions on the folders that are mounted to the volumes on startup (not recursively).

  • The startup script is located here.

  • The Dockerfile assumes that the mounted volumes are located on a file system with the following locale settings (see #4439 for more details):

    • LC_ALL en_US.UTF-8

    • LANG en_US.UTF-8

    • LANGUAGE en_US.UTF-8
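As a sketch of how these pieces fit together, the following command starts an Artemis container with all three volumes mounted and the configuration supplied via an .env file (the container name, the host paths, the .env file name, and the host port mapping 8080 are assumptions; adjust them to your environment, and drop --env-file if you mount YAML configuration files into /opt/artemis/config instead):

docker run -d --name artemis \
    -p 8080:8080 \
    -v /srv/artemis/config:/opt/artemis/config \
    -v /srv/artemis/data:/opt/artemis/data \
    -v /srv/artemis/branding:/opt/artemis/public/content \
    --env-file ./artemis.env \
    ghcr.io/ls1intum/artemis:latest

Remember to point the path properties listed above (e.g. artemis.repo-clone-path) into /opt/artemis/data, as Artemis otherwise writes outside the mounted volume.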

Run the server via a run configuration in IntelliJ

The project comes with some pre-configured run / debug configurations that are stored in the .idea directory. When you import the project into IntelliJ the run configurations will also be imported.

The recommended way is to run the server and the client separately. This provides fast rebuilds of the server and hot module replacement in the client.

  • Artemis (Server): The server will be started separately from the client. The startup time decreases significantly.

  • Artemis (Client): Will execute npm install and npm run serve. The client will be available at http://localhost:9000/ with hot module replacement enabled (also see Client Setup).

Other run / debug configurations

  • Artemis (Server & Client): Will start the server and the client. The client will be available at http://localhost:8080/ with hot module replacement disabled.

  • Artemis (Server, Jenkins & GitLab): The server will be started separately from the client with the profiles dev,jenkins,gitlab,artemis instead of dev,bamboo,bitbucket,jira,artemis.

  • Artemis (Server, Athene): The server will be started separately from the client with the athene profile enabled (see Athene Service).

Run the server with Spring Boot and Spring profiles

The Artemis server should start up when running the main class de.tum.in.www1.artemis.ArtemisApp using Spring Boot.

Note

Artemis uses Spring profiles to segregate parts of the application configuration and make it only available in certain environments. For development purposes, the following program arguments can be used to enable the dev profile and the profiles for JIRA, Bitbucket and Bamboo:

--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling

If you use IntelliJ (Community or Ultimate) you can set the active profiles by

  • Choosing Run | Edit Configurations...

  • Going to the Configuration Tab

  • Expanding the Environment section to reveal VM Options and setting them to -Dspring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling

Set Spring profiles with IntelliJ Ultimate

If you use IntelliJ Ultimate, add the following entry to the section Active Profiles (within Spring Boot) in the server run configuration:

dev,bamboo,bitbucket,jira,artemis,scheduling

Run the server with the command line (Gradle wrapper)

If you want to run the application via the command line instead, make sure to pass the active profiles to the gradlew command like this:

./gradlew bootRun --args='--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling'

As an alternative, you might want to use Jenkins and GitLab with internal user management in Artemis; then you would use the profiles:

dev,jenkins,gitlab,artemis,scheduling
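The corresponding Gradle wrapper command for this stack is:

./gradlew bootRun --args='--spring.profiles.active=dev,jenkins,gitlab,artemis,scheduling'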

Configure Text Assessment Analytics Service

Text Assessment Analytics is an internal analytics service used to gather data regarding the features of the text assessment process. Certain assessment events are tracked:

  1. Adding new feedback on a manually selected block

  2. Adding new feedback on an automatically selected block

  3. Deleting feedback

  4. Clicking to resolve feedback conflicts

  5. Clicking to view origin submission of automatically generated feedback

  6. Hovering over the text assessment feedback impact warning

  7. Editing/Discarding automatically generated feedback

  8. Clicking the Submit button when assessing a text submission

  9. Clicking the Assess Next button when assessing a text submission

These events are tracked by attaching a POST call to the respective DOM elements on the client side. The POST call accesses the TextAssessmentEventResource, which then adds the events to its respective table. This feature is disabled by default. You can enable it by modifying the configuration in the file src/main/resources/config/application-artemis.yml like so:

info:
   text-assessment-analytics-enabled: true

Client Setup

You need to install Node and Npm on your local machine.

Using IntelliJ

If you are using IntelliJ, you can use the pre-configured Artemis (Client) run configuration that is delivered with this repository:

  • Choose Run | Edit Configurations...

  • Select the Artemis (Client) configuration from the npm section

  • Now you can run the configuration in the upper right corner of IntelliJ

Using the command line

You should be able to run the following command to install development tools and dependencies. You only need to run this command when dependencies change in package.json.

npm install

To start the client application in the browser, use the following command:

npm run serve

This compiles TypeScript code to JavaScript, starts the hot module replacement feature in Webpack (i.e. whenever you change a TypeScript file and save, the client is automatically reloaded with the new code), and starts the client application in your browser at http://localhost:9000. If you have activated the JIRA profile (see Server Setup above) and configured application-artemis.yml correctly, you should be able to log in with your TUM Online account.

In case you encounter JavaScript heap out-of-memory errors when executing npm run serve or any other script from package.json, you can add a memory limit parameter (--max_old_space_size=5120) to the script. You can do this by changing the start script in package.json from:

"start": "ng serve --hmr",

to

"start": "node --max_old_space_size=5120 ./node_modules/@angular/cli/bin/ng serve --hmr",

If you still face the issue, you can try to set a higher value than 5120. Possible values are 6144, 7168, and 8192.

The same change can be applied to any other ng command as in the example above.

Make sure not to commit this change to package.json.
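Alternatively, to avoid modifying package.json at all, you can set the memory limit for a single run via Node's NODE_OPTIONS environment variable:

NODE_OPTIONS="--max-old-space-size=5120" npm run serve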

For more information, review Working with Angular. For further instructions on how to develop with JHipster, have a look at Using JHipster in development.


Customize your Artemis instance

You can define the following custom assets for Artemis to be used instead of the TUM defaults:

  • The logo next to the “Artemis” heading on the navbar → ${artemisRunDirectory}/public/images/logo.png

  • The favicon → ${artemisRunDirectory}/logo/favicon.svg

  • The privacy statement HTML → ${artemisRunDirectory}/public/content/privacy_statement.html

  • The imprint statement HTML → ${artemisRunDirectory}/public/content/imprint.html

  • The contact email address in the application-{dev,prod}.yml configuration file under the key info.contact
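As a minimal sketch, placing these assets could look like this (assuming /opt/artemis as the run directory and that the asset files already exist locally):

mkdir -p /opt/artemis/public/images /opt/artemis/logo /opt/artemis/public/content
cp my-logo.png /opt/artemis/public/images/logo.png
cp my-favicon.svg /opt/artemis/logo/favicon.svg
cp privacy_statement.html imprint.html /opt/artemis/public/content/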


Programming Exercise adjustments

There are several variables that can be configured when using programming exercises. They are presented in this separate section to keep the ‘normal’ setup guide shorter.

Path variables

There are variables for several paths:

  • artemis.repo-clone-path

    Repositories that the Artemis server needs are stored in this folder. This affects e.g. repositories from students who use the online code editor and the template/solution repositories of new exercises, as they are pushed to the VCS after modification.

    Files in this directory are usually not critical, as the latest pushed version of these repositories is also stored in the VCS. However, changes that are saved in the online code editor but not yet committed are lost when this folder is deleted.

  • artemis.repo-download-clone-path

    Repositories that were downloaded from Artemis are stored in this directory.

    Files in this directory can be removed without loss of data, as long as the downloaded repositories are still present in the VCS. Nothing stored in this directory deviates from the data in the VCS, and the contents can be retrieved by performing the download action again.

  • artemis.template-path

    Templates are available within Artemis. The templates should fit most environments, but there might be cases where one wants to change them.

    This value specifies the path to the templates that should overwrite the default ones. Note that this is the path to the folder in which the templates folder is located, not the path to the templates folder itself.

Templates

Templates are shipped with Artemis (they can be found in the src/main/resources/templates folder on GitHub). These templates should fit well for many deployments, but you might want to change some of them for special deployments.

As of now, you can overwrite the jenkins folder that is present within the src/main/resources/templates folder. Files that are present in the file system will be used; if a file is not present in the file system, it is loaded from the classpath (e.g. the .war archive).

We plan to make other folders configurable as well, but this is not supported yet.

Jenkins template

The build process in Jenkins is stored in a config.xml file (src/main/resources/templates/jenkins) that shares common steps for all programming languages (e.g. triggering a build when a push to GitLab occurred). It is extended by a Jenkinsfile that depends on the used programming language and that is included in the generic config.xml file. The build steps (including the used Docker images, the checkout process, the actual build steps, and the reporting of the results to Artemis) are included in the Jenkinsfile.

A sample Jenkinsfile can be found at src/main/resources/templates/jenkins/java/Jenkinsfile. Note that the Jenkinsfile must start either

  • with pipeline (there must not be a comment before pipeline, but there can be one at any other position if the Jenkinsfile syntax allows it)

  • or the special comment // ARTEMIS: JenkinsPipeline in the first line.

The variables #dockerImage, #testRepository, #assignmentRepository, #jenkinsNotificationToken and #notificationsUrl will automatically be replaced (for the normal Jenkinsfile, within the Jenkinsfile-staticCodeAnalysis, #staticCodeAnalysisScript is also replaced).

You should not need to touch any of these variables, except the #dockerImage variable, if you want to use a different agent setup (e.g. a Kubernetes setup).

Caching example for Maven

The Docker image used to run the Maven tests already contains a set of commonly used dependencies (see artemis-maven-docker). This significantly speeds up builds, as the dependencies do not have to be downloaded every time a build is started. However, the dependencies included in the Docker image might not match the dependencies required by your tests (e.g. because you added new dependencies or the Docker image is outdated).

You can additionally cache the Maven dependencies on the machine that runs the builds (that is, outside the Docker container) using the following steps:

Adjust the agent-args and add the environment block.

agent {
    docker {
        image '#dockerImage'
        label 'docker'
        args '-v $HOME/maven-cache-docker:/var/maven'
    }
}
environment {
  JAVA_TOOL_OPTIONS = '-Duser.home=/var/maven'
}
stages {
    stage('Checkout') {

You have to add permissions to the folder (which will be located in the $HOME folder of the user that Jenkins uses), e.g. with sudo chmod 777 maven-cache-docker -R.

Note that this might allow students to access shared resources (e.g. jars used by Maven), and they might be able to overwrite them. You can use Ares to prevent this by restricting the resources the student’s code can access.


Bamboo, Bitbucket and Jira Setup

This section describes how to set up a programming exercise environment based on Bamboo, Bitbucket and Jira.

Please note that this setup will create a deployment that is very similar to the one used in production but has one difference:
In production, the builds are performed within Docker containers that are created by Bamboo (or its build agents). As we run Bamboo in a Docker container in this setup, creating new Docker containers within that container is not recommended (e.g. see this article). There are some solutions where one can pass the Docker socket to the Bamboo container, but none of these approaches works quite well here, as Bamboo uses mounted directories that cause issues.

Therefore, a check is included within the BambooBuildPlanService that ensures that builds are not started in Docker agents if the development setup is present.

Prerequisites:

Docker-Compose

Before you start docker-compose, check whether the Bamboo version in build.gradle (search for com.atlassian.bamboo:bamboo-specs) is equal to the Bamboo version number in the docker-compose file src/main/docker/atlassian.yml. If the version numbers are not equal, adjust the version number. Further details about the docker-compose setup can be found in src/main/docker.

Execute the docker-compose file e.g. with docker-compose -f src/main/docker/atlassian.yml up -d.

Error handling: It can happen that there is an overlap with other Docker networks: ERROR: Pool overlaps with other one on this address space. Use the command docker network prune to resolve this issue.

Make sure that Docker has enough memory (~ 6GB). To adapt it, go to Settings -> Resources.

In case you want to enable Swift or C programming exercises, refer to the README in src/main/docker.

Configure Bamboo, Bitbucket and Jira

By default, the Jira instance is reachable under localhost:8081, the Bamboo instance under localhost:8085 and the Bitbucket instance under localhost:7990.

Get evaluation licenses for Atlassian products: Atlassian Licenses

  1. Get licenses for Bamboo, Bitbucket and Jira Service Management.

    • Bamboo: Select Bamboo (Data Center) and not installed yet

    • Bitbucket: Select Bitbucket (Data Center) and not installed yet

    • Jira: Select Jira Service Management (formerly Service Desk) (Data Center) and not installed yet

  2. Provide the license key you just created during the setup and create an admin user with the same credentials in all 3 applications. For the Bamboo database, you can choose H2. Also, you can select the evaluation/internal/test/dev setups if you are asked. Follow the additional steps for Jira and Bitbucket.

    • Jira:

    • On startup select I'll set it up myself

    • Select Built In Database Connection

    • Create a sample project

    • Bitbucket: Do not connect Bitbucket with Jira yet

  3. Make sure that Jira, Bitbucket and Bamboo have finished starting up.

    (Only Linux & Windows) Make sure that xdg-utils is installed before running the following script.

    xdg-utils for Windows users: An easy way to use xdg-utils on Windows is to install them in the Linux subsystem (WSL), which should be activated anyway when running Docker on Windows. For the installation in the subsystem, the explanation linked above can be used.
    Make sure to execute the script from the subsystem.

    Execute the shell script atlassian-setup.sh in the src/main/docker/atlassian directory (e.g. with ./atlassian-setup.sh from within that directory). This script creates groups and users and assigns the users to their respective groups. In addition, it configures disabled application links between the 3 applications.

  4. Enable the created application links between all 3 applications (OAuth Impersonate). The links should open automatically after the shell script has finished. If not, open them manually.

  5. The script (step 3) has already created the required users and assigned them to their respective groups in Jira. Now, make sure that they are assigned correctly according to the following test setup: users 1-5 are students, 6-10 are tutors, 11-15 are editors and 16-20 are instructors. The usernames are artemis_test_user_{1-20} and the password is again the username. When you create a course in Artemis, you have to manually choose the created groups (students, tutors, editors, instructors).

  6. Use the user directories in Jira to synchronize the users in Bitbucket and Bamboo:

    • Go to Jira → User management → Jira user server → Add application → Create one application for Bitbucket and one for Bamboo → add the IP address 0.0.0.0/0 to IP Addresses

    (Screenshots: adding the Bitbucket and Bamboo applications in the Jira user server settings.)
    • Go to Bitbucket and Bamboo → User Directories → Add Directories → Atlassian Crowd → use the URL http://jira:8080 as Server URL → use the application name and password which you used in the previous step. Also, you should decrease the synchronisation period (e.g. to 2 minutes). Press synchronise after adding the directory; the users and groups should now be available.

    (Screenshot: Adding Crowd Server in Bitbucket)

    (Screenshot: Adding Crowd Server in Bamboo)

  7. Give the test users User access on Bitbucket: Configure → Global permissions

  8. In Bamboo, create a global variable named SERVER_PLUGIN_SECRET_PASSWORD; the value of this variable will be used as the secret. The value of this variable should then be stored in src/main/resources/config/application-artemis.yml as the value of artemis-authentication-token-value. You can create a global variable from the settings in Bamboo.

  9. Download the bamboo-server-notification-plugin and add it to Bamboo. Go to Bamboo → Manage apps → Upload app → select the downloaded .jar file → Upload.

  10. Add Maven and JDK:

    • Go to Bamboo → Server capabilities → Add capabilities menu → Capability type Executable → select type Maven 3.x → insert Maven 3 as executable label → insert /artemis as path.

    • Add capabilities menu → Capability type JDK → insert JDK17 as JDK label → insert /usr/lib/jvm/java-17-oracle as Java home.

  11. Create a Bamboo agent. Configure → Agents → Add local agent

  12. Generate a personal access token

    While username and password can still be used as a fallback, this option is already marked as deprecated and will be removed in the future.

    1. Personal access token for Bamboo.

      • Log in as the admin user and go to Bamboo -> Profile (top right corner) -> Personal access tokens -> Create token

        (Screenshot: creating a personal access token in Bamboo.)
      • Insert the generated token into the file application-artemis.yml in the section continuous-integration:

      artemis:
          continuous-integration:
              user: <username>
              password: <password>
              token: #insert the token here
      

    2. Personal access token for Bitbucket.

    • Log in as the admin user and go to Bitbucket -> View Profile (top right corner) -> Manage account -> Personal access tokens -> Create token

      (Screenshot: creating a personal access token in Bitbucket.)
    • Insert the generated token into the file application-artemis.yml in the section version-control:

    artemis:
        version-control:
            user: <username>
            password: <password>
            token: #insert the token here
    
  13. Add an SSH key for the admin user

    Artemis can clone/push the repositories during setup and for the online code editor using SSH. If the SSH key is not present, the username + token will be used as fallback (and all git operations will use HTTP(S) instead of SSH). If the token is also not present, the username + password will be used as fallback (again, using HTTP(S)).

    You first have to create an SSH key (locally), e.g. using ssh-keygen (more information on how to create an SSH key can be found e.g. at ssh.com or at atlassian.com).

    The list of supported ciphers can be found at Apache Mina.

    It is recommended to use a password to secure the private key, but it is not mandatory.

    Please note that the private key file must be named id_rsa, id_dsa, id_ecdsa or id_ed25519, depending on the ciphers used.

    You now have to extract the public key and add it to Bitbucket. Open the public key file (usually called id_rsa.pub when using RSA) and copy its content (you can also use cat id_rsa.pub to show the public key).

    Navigate to BITBUCKET-URL/plugins/servlet/ssh/account/keys and add the SSH key by pasting the content of the public key.

    <ssh-key-path> is the path to the folder containing the id_rsa file (but without the filename). It will be used in the configuration of Artemis to specify where Artemis should look for the key and store the known_hosts file.

    <ssh-private-key-password> is the password used to secure the private key. It is also needed for the configuration of Artemis, but can be omitted if no password was set (e.g. for development environments).
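    As a minimal sketch, generating a key pair in a dedicated folder could look like this (the folder ~/.ssh/artemis is an assumption; any folder works as long as the configuration points to it):

    mkdir -p ~/.ssh/artemis
    ssh-keygen -t ed25519 -f ~/.ssh/artemis/id_ed25519   # the file name must match the cipher, see above
    cat ~/.ssh/artemis/id_ed25519.pub                    # public key to paste into Bitbucket

    In this example, <ssh-key-path> would be ~/.ssh/artemis.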

Configure Artemis

  1. Modify src/main/resources/config/application-artemis.yml

    repo-clone-path: ./repos/
    repo-download-clone-path: ./repos-download/
    encryption-password: artemis-encrypt         # LEGACY: arbitrary password for encrypting database values
    bcrypt-salt-rounds: 11   # The number of salt rounds for the bcrypt password hashing. Lower numbers make it faster but more unsecure and vice versa.
                             # Please use the bcrypt benchmark tool to determine the best number of rounds for your system. https://github.com/ls1intum/bcrypt-Benchmark
    user-management:
        use-external: true
        external:
            url: http://localhost:8081
            user:  <jira-admin-user>
            password: <jira-admin-password>
            admin-group-name: instructors
        internal-admin:
            username: artemis_admin
            password: artemis_admin
    version-control:
        url: http://localhost:7990
        user:  <bitbucket-admin-user>
        password: <bitbucket-admin-password>
        token: <bitbucket-admin-token>   # step 12.2
        ssh-private-key-folder-path: <ssh-private-key-folder-path>
        ssh-private-key-password: <ssh-private-key-password>
    continuous-integration:
        url: http://localhost:8085
        user:  <bamboo-admin-user>
        password: <bamboo-admin-password>
        token: <bamboo-admin-token>   # step 12.1
        vcs-application-link-name: LS1 Bitbucket Server
        empty-commit-necessary: true
        artemis-authentication-token-value: <artemis-authentication-token-value>   # step 8
    
  2. Modify the application-dev.yml

    server:
        port: 8080                                         # The port of Artemis
        url: http://172.20.0.1:8080                        # needs to be an IP
        # url: http://docker.for.mac.host.internal:8080    # If the above one does not work on macOS, try this one
        # url: http://host.docker.internal:8080            # If the above one does not work on Windows, try this one
    

In addition, you have to start Artemis with the profiles bamboo, bitbucket and jira so that the correct adapters will be used, e.g.:

--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling

Please read Server Setup for more details.

How to verify the connection works?

Artemis → Jira

You can log in to Artemis with the admin user you created in Jira

Artemis → Bitbucket

You can create a programming exercise

Artemis → Bamboo

You can create a programming exercise

Bitbucket → Bamboo

The build of a student's repository starts after pushing to it

Bitbucket → Artemis

When using the code editor, after clicking on Submit, the text Building and testing… should appear.

Bamboo → Artemis

The build result is displayed in the code editor.


Jenkins and GitLab Setup

This section describes how to set up a programming exercise environment based on Jenkins and GitLab. Optional commands are in curly brackets {}.

The following assumes that all instances run on separate servers. If you have a single server, or your own NGINX instance, just skip all NGINX-related steps and use the configurations provided under Separate NGINX Configurations.

If you want to set up everything on your local development computer, ignore all NGINX-related steps. Just make sure that you use unique port mappings for your Docker containers (e.g. 8081 for GitLab, 8082 for Jenkins, 8080 for Artemis).

Prerequisites:

Make sure that docker has enough memory (~ 6GB). To adapt it, go to Preferences -> Resources and restart Docker.

Artemis

In order to use Artemis with Jenkins as continuous integration server and GitLab as version control server, you have to configure the file application-prod.yml (production server) or application-artemis.yml (local development) accordingly. Please note that all values in <..> have to be configured properly. These values will be explained below in the corresponding sections. If you want to set up a local environment, copy the values below into your application-artemis.yml or application-local.yml file (the latter is recommended), and follow the GitLab Server Quickstart guide.

artemis:
 course-archives-path: ./exports/courses
 repo-clone-path: ./repos
 repo-download-clone-path: ./repos-download
 encryption-password: artemis_admin           # LEGACY: arbitrary password for encrypting database values
 bcrypt-salt-rounds: 11  # The number of salt rounds for the bcrypt password hashing. Lower numbers make it faster but more unsecure and vice versa.
                         # Please use the bcrypt benchmark tool to determine the best number of rounds for your system. https://github.com/ls1intum/bcrypt-Benchmark
 user-management:
     use-external: false
     internal-admin:
         username: artemis_admin
         password: artemis_admin
     accept-terms: false
     login:
         account-name: TUM
 version-control:
     url: http://localhost:8081
     user: root
     password: artemis_admin # created in Gitlab Server Quickstart step 2
     token: artemis-gitlab-token # generated in Gitlab Server Quickstart steps 4 and 5
     ci-token: jenkins-secret-token # generated in Jenkins Server Quickstart step 8
 continuous-integration:
     user: artemis_admin
     password: artemis_admin
     url: http://localhost:8082
     empty-commit-necessary: true
     secret-push-token: AQAAABAAAAAg/aKNFWpF9m2Ust7VHDKJJJvLkntkaap2Ka3ZBhy5XjRd8s16vZhBz4fxzd4TH8Su # generated in Automated Jenkins Server step 3
     vcs-credentials: artemis_gitlab_admin_credentials
     artemis-authentication-token-key: artemis_notification_plugin_token
     artemis-authentication-token-value: artemis_admin
     build-timeout: 30
 git:
     name: Artemis
     email: artemis.in@tum.de
jenkins:
    internal-urls:
        ci-url: http://jenkins:8080
        vcs-url: http://gitlab:80
    use-crumb: false
server:
     port: 8080
     url: http://172.17.0.1:8080 # `http://host.docker.internal:8080` for Windows

In addition, you have to start Artemis with the profiles gitlab and jenkins so that the correct adapters will be used, e.g.:

--spring.profiles.active=dev,jenkins,gitlab,artemis,scheduling

Please read Server Setup for more details.

For a local setup on Windows, you can use http://host.docker.internal with the chosen ports appended as the version-control and continuous-integration URLs.

Make sure to change the server.url value in application-dev.yml or application-prod.yml accordingly. This value will be used for the communication hooks from GitLab to Artemis and from Jenkins to Artemis. In case you use a different port than 80 (http) or 443 (https) for the communication, you have to append it to the server.url value, e.g. 127.0.0.1:8080.

When you start Artemis for the first time, it will automatically create an admin user.

Note: Sometimes Artemis does not generate the admin user, which may lead to a startup error. In that case, you have to create the user manually in the MySQL database and in GitLab. Make sure both are set up correctly and follow these steps:

  1. Use the tool mentioned above to generate a password hash (see also the example after this list).

  2. Connect to the database via a client like MySQL Workbench and execute the following query to create the user. Replace artemis_admin and HASHED_PASSWORD with your chosen username and password:

    INSERT INTO `artemis`.`jhi_user` (`id`,`login`,`password_hash`,`first_name`,`last_name`,`email`,
    `activated`,`lang_key`,`activation_key`,`reset_key`,`created_by`,`created_date`,`reset_date`,
    `last_modified_by`,`last_modified_date`,`image_url`,`last_notification_read`,`registration_number`)
    VALUES (1,"artemis_admin","HASHED_PASSWORD","artemis","administrator","artemis_admin@localhost",
    1,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
    
  3. Give the user admin and user roles:

    INSERT INTO `artemis`.`jhi_user_authority` (`user_id`, `authority_name`) VALUES (1,"ROLE_ADMIN");
    INSERT INTO `artemis`.`jhi_user_authority` (`user_id`, `authority_name`) VALUES (1,"ROLE_USER");
    

  4. Create a user in GitLab (http://your-gitlab-domain/admin/users/new) and make sure that the username, email, and password are the same as those of the user from the database:

(Screenshot: creating the admin user in GitLab.)

Starting the Artemis server should now succeed.
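If you need a way to generate the bcrypt hash for step 1 and have Apache's htpasswd utility available, the following sketch produces one (note that htpasswd emits the $2y$ bcrypt variant; verify afterwards that logging in to Artemis actually works):

htpasswd -bnBC 11 "" "your-password" | tr -d ':\n'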

GitLab

GitLab Server Quickstart

The following steps describe how to set up the GitLab server in a semi-automated way. This is ideal as a quickstart for developers. For a more detailed setup, see Manual GitLab Server Setup. In a production setup, you have to at least change the root password (by either specifying it in step 1 or extracting the random password in step 2) and generate random access tokens (instead of the pre-defined values): set the variable GENERATE_ACCESS_TOKENS to true in the gitlab-local-setup.sh script and use the generated tokens instead of the predefined ones.

  1. Start the GitLab container defined in src/main/docker/gitlab-jenkins-mysql.yml by running

    GITLAB_ROOT_PASSWORD=artemis_admin docker-compose -f src/main/docker/gitlab-jenkins-mysql.yml up --build -d gitlab
    

    If you want to generate a random password for the root user, remove the part before docker-compose from the command.

    The file uses the GITLAB_OMNIBUS_CONFIG environment variable to configure the GitLab instance after the container has started. It disables prometheus monitoring, sets the SSH port to 2222, and adjusts the monitoring endpoint whitelist by default.

  2. Wait a couple of minutes since GitLab can take some time to set up. Open the instance in your browser (usually http://localhost:8081).

    You can then login using the username root and your password (which defaults to artemis_admin, if you used the command from above). If you did not specify the password, you can get the initial one using:

    docker-compose -f src/main/docker/gitlab-jenkins-mysql.yml exec gitlab cat /etc/gitlab/initial_root_password
    
  3. Insert the GitLab root user password in the file application-local.yml (in src/main/resources/config) and insert the GitLab admin account. If you copied the template from above and used the default password, this is already done for you.

    artemis:
        version-control:
            url: http://localhost:8081
            user: root
            password: your.gitlab.admin.password # artemis_admin
    
  4. You now need to create an admin access token. You can do that using the following command (which takes a while to execute):

    docker-compose -f src/main/docker/gitlab-jenkins-mysql.yml exec gitlab gitlab-rails runner "token = User.find_by_username('root').personal_access_tokens.create(scopes: [:api, :read_user, :read_api, :read_repository, :write_repository, :sudo], name: 'Artemis Admin Token'); token.set_token('artemis-gitlab-token'); token.save!"
    
    You can also create it manually by navigating to http://localhost:8081/-/profile/personal_access_tokens and generating a token with all scopes.
    Copy this token into the ADMIN_PERSONAL_ACCESS_TOKEN field in the src/main/docker/gitlab/gitlab-local-setup.sh file.
    If you used the command to generate the token, you don't have to change the gitlab-local-setup.sh file.
  5. Adjust the GitLab setup by running the following command, which configures GitLab's network settings to allow local requests:

    docker-compose -f src/main/docker/gitlab-jenkins-mysql.yml exec gitlab /bin/sh -c "sh /gitlab-local-setup.sh"
    

    This script can also generate random access tokens, which should be used in a production setup. Change the variable $GENERATE_ACCESS_TOKENS to true to generate the random tokens and insert them into the Artemis configuration file.

  6. You’re done! Follow the Automated Jenkins Server Setup section for configuring Jenkins.

Manual GitLab Server Setup

GitLab provides no possibility to set a user's password via the API without forcing the user to change it afterwards (see issue 19141). Therefore, you may want to patch the official GitLab Docker image using the following Dockerfile:

FROM gitlab/gitlab-ce:latest
RUN sed -i '/^.*user_params\[:password_expires_at\] = Time.current if admin_making_changes_for_another_user.*$/s/^/#/' /opt/gitlab/embedded/service/gitlab-rails/lib/api/users.rb

This Dockerfile disables the mechanism that sets the password to the expired state after it was changed via the API. If you want to use this custom image, you have to build the image and replace all occurrences of gitlab/gitlab-ce:latest in the following instructions with your chosen image name.
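For example, you can build the patched image in the directory containing the Dockerfile like this (the tag gitlab-ce-patched is an assumption; pick any name you like):

docker build -t gitlab-ce-patched .

Afterwards, use gitlab-ce-patched instead of gitlab/gitlab-ce:latest in the commands below.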

  1. Pull the latest GitLab Docker image (only if you don’t use your custom gitlab image)

    docker pull gitlab/gitlab-ce:latest
    
Start GitLab
  1. Run the image (and change the values for hostname and ports). Add -p 2222:22 if cloning/pushing via ssh should be possible. As GitLab runs in a docker container and the default port for SSH (22) is typically used by the host running Docker, we change the port GitLab uses for SSH to 2222. This can be adjusted if needed.

    Make sure to remove the comments from the command before running it.

    docker run -itd --name gitlab \
        --hostname your.gitlab.domain.com \   # Specify the hostname
        --restart always \
        -m 3000m \                            # Optional argument to limit the memory usage of Gitlab
        -p 8081:80 -p 443:443 \               # Alternative 1: If you are NOT running your own NGINX instance
        -p <some port of your choosing>:80 \  # Alternative 2: If you ARE running your own NGINX instance
        -p 2222:22 \                          # Remove this if cloning via SSH should not be supported
        -v gitlab_data:/var/opt/gitlab \
        -v gitlab_logs:/var/log/gitlab \
        -v gitlab_config:/etc/gitlab \
        gitlab/gitlab-ce:latest
    
  2. Wait a couple of minutes until the container is deployed and GitLab is set up, then open the instance in your browser. You can get the initial password for the root user using docker exec gitlab cat /etc/gitlab/initial_root_password.

  3. We recommend renaming the root admin user to artemis. To rename the user, click on the image at the top right and select Settings. Now select Account on the left and change the username. Use the same password in the Artemis configuration file application-artemis.yml

    artemis:
        version-control:
            user: artemis
            password: the.password.you.chose
    
  4. If you run your own NGINX or if you install GitLab on a local development computer, skip the next steps (5-6)

  5. Configure GitLab to automatically generate certificates using LetsEncrypt. Edit the GitLab configuration

    docker exec -it gitlab /bin/bash
    nano /etc/gitlab/gitlab.rb
    

    And add the following part

    letsencrypt['enable'] = true                          # GitLab 10.5 and 10.6 require this option
    external_url "https://your.gitlab.domain.com"         # Must use https protocol
    letsencrypt['contact_emails'] = ['gitlab@your.gitlab.domain.com'] # Optional
    
    nginx['redirect_http_to_https'] = true
    nginx['redirect_http_to_https_port'] = 80
    
  6. Reconfigure GitLab to generate the certificate.

    # Save your changes and finally run
    gitlab-ctl reconfigure
    

    If this command fails, try using

    gitlab-ctl renew-le-certs
    
  7. Login to GitLab using the Artemis admin account and go to the profile settings (upper right corner → Settings)

    (Screenshot: the Settings entry in the GitLab profile menu.)
GitLab Access Token
  1. Go to Access Tokens

    (Screenshot: the Access Tokens button in the GitLab profile settings.)

  2. Create a new token named “Artemis” and give it all rights.

    (Screenshot: creating the Artemis access token in GitLab.)

  3. Copy the generated token and insert it into the Artemis configuration file application-artemis.yml

    artemis:
        version-control:
            token: your.generated.api.token
    
  4. (Optional, only necessary for local setup) Allow outbound requests to local network

    There is a known limitation for the local setup: webhook URLs for the communication between GitLab and Artemis and between GitLab and Jenkins cannot include local IP addresses. This option can be deactivated in GitLab under <https://gitlab-url>/admin/application_settings/network → Outbound requests. Another possible solution is to register a local URL, e.g. using ngrok, to be available over a domain on the Internet.

  5. Adjust the monitoring-endpoint whitelist. Run the following command

    docker exec -it gitlab /bin/bash
    

    Then edit the GitLab configuration

    nano /etc/gitlab/gitlab.rb
    

    Add the following lines

    gitlab_rails['monitoring_whitelist'] = ['0.0.0.0/0']
    gitlab_rails['gitlab_shell_ssh_port'] = 2222
    

    This will disable the firewall for all IP addresses. If you only want to allow the server that runs Artemis to query the information, replace 0.0.0.0/0 with ARTEMIS.SERVER.IP.ADDRESS/32

    If you use SSH and use a different port than 2222, you have to adjust the port above.

  6. Disable Prometheus. As we encountered issues with the Prometheus log files not being deleted and therefore filling up the disk space, we decided to disable Prometheus within GitLab. If you also want to disable Prometheus, edit the configuration again using

    nano /etc/gitlab/gitlab.rb
    

    and add the following line

    prometheus_monitoring['enable'] = false
    

    The issue with more details can be found here.

  7. Add an SSH key for the admin user.

    Artemis can clone/push the repositories during setup and for the online code editor using SSH. If the SSH key is not present, the username + token will be used as fallback (and all git operations will use HTTP(S) instead of SSH).

    You first have to create an SSH key (locally), e.g. using ssh-keygen (more information on how to create an SSH key can be found e.g. at ssh.com or at gitlab.com).

    The list of supported ciphers can be found at Apache Mina.

    It is recommended to use a password to secure the private key, but it is not mandatory.

    Please note that the private key file must be named id_rsa, id_dsa, id_ecdsa or id_ed25519, depending on the ciphers used.

    You now have to extract the public key and add it to GitLab. Open the public key file (usually called id_rsa.pub when using RSA) and copy its content (you can also use cat id_rsa.pub to show the public key).

    Navigate to GITLAB-URL/-/profile/keys and add the SSH key by pasting the content of the public key.

    <ssh-key-path> is the path to the folder containing the id_rsa file (but without the filename). It will be used in the configuration of Artemis to specify where Artemis should look for the key and store the known_hosts file.

    <ssh-private-key-password> is the password used to secure the private key. It is also needed for the configuration of Artemis, but can be omitted if no password was set (e.g. for development environments).

Reconfigure GitLab

gitlab-ctl reconfigure

Upgrade GitLab

You can upgrade GitLab by downloading the latest Docker image and starting a new container with the old volumes:

docker stop gitlab
docker rename gitlab gitlab_old
docker pull gitlab/gitlab-ce:latest

See https://hub.docker.com/r/gitlab/gitlab-ce/ for the latest version. You can also specify an earlier one.

Note that upgrading to a major version may require following an upgrade path. You can view supported paths here.

Start a GitLab container just as described in Start GitLab and wait for a couple of minutes. GitLab should configure itself automatically. If there are no issues, you can delete the old container using docker rm gitlab_old and the old image (see docker images) using docker rmi <old-image-id>. You can also remove all old images using docker image prune -a

Jenkins

Automated Jenkins Server Setup

The following steps describe how to deploy a pre-configured version of the Jenkins server. This is ideal as a quickstart for developers. For a more detailed setup, see Manual Jenkins Server Setup. In a production setup, you have to at least change the user credentials (in the file jenkins-casc-config.yml) and generate random access tokens and push tokens.

1. Create a new access token in GitLab named Jenkins and give it api and read_repository rights. You can either do it manually or use the following command:

docker-compose -f src/main/docker/gitlab-jenkins-mysql.yml exec gitlab gitlab-rails runner "token = User.find_by_username('root').personal_access_tokens.create(scopes: [:api, :read_repository], name: 'Jenkins'); token.set_token('jenkins-gitlab-token'); token.save!"
  2. You can now deploy Jenkins. A src/main/docker/gitlab-jenkins-mysql.yml file is provided which deploys the Jenkins, GitLab, and MySQL containers bound to static IP addresses. You can deploy them by running:

    JAVA_OPTS=-Djenkins.install.runSetupWizard=false docker-compose -f src/main/docker/gitlab-jenkins-mysql.yml up --build -d
    

    Jenkins is then reachable under http://localhost:8082/ and you can login using the credentials specified in jenkins-casc-config.yml (defaults to artemis_admin as both username and password).

  3. You need to generate the ci-token and secret-push-token. If you used the preset master.key within the file gitlab-jenkins-mysql.yml, you can skip this step. In a production setup, you should use a random master.key; then you have to follow the steps described in Gitlab to Jenkins push notification token to generate the token.

  4. The application-local.yml must be adapted with the values configured in jenkins-casc-config.yml. If you used the preset master.key and are running a development setup, the secrets can be found in the Artemis configuration template posted at the beginning of this page:

artemis:
    user-management:
        use-external: false
        internal-admin:
            username: artemis_admin
            password: artemis_admin
    version-control:
        url: http://localhost:8081
        user: artemis_admin
        password: artemis_admin
        ci-token: # generated in step 9
    continuous-integration:
        url: http://localhost:8082
        user: artemis_admin
        password: artemis_admin
        vcs-credentials: artemis_gitlab_admin_credentials
        artemis-authentication-token-key: artemis_notification_plugin_token
        artemis-authentication-token-value: artemis_admin
        secret-push-token: # generated in step 3
  5. Open src/main/resources/config/application-jenkins.yml and change the following. Again, if you are using a development setup, the template at the beginning of this page already contains the correct values.

jenkins:
    internal-urls:
        ci-url: http://jenkins:8080
        vcs-url: http://gitlab:80
  6. You’re done. You can now run Artemis with the GitLab/Jenkins environment.

Manual Jenkins Server Setup

  1. Pull the latest Jenkins LTS Docker image

    Run the following command to get the latest Jenkins LTS Docker image.

    docker pull jenkins/jenkins:lts
    
  2. Create a custom docker image

    In order to install and use Maven with Java in the Jenkins container, you first have to install Maven, then download Java, and finally configure Maven to use the downloaded Java version instead of the default one. You also need to install Swift and SwiftLint if you want to be able to create Swift programming exercises.

    To perform all these steps automatically, you can prepare a Docker image:

    Create a Dockerfile with the content found here <src/main/docker/jenkins/Dockerfile> or here <src/main/docker/jenkins/swift/Dockerfile> in case you want to additionally install Swift/SwiftLint. Copy it into a file named Dockerfile, e.g. in the folder /opt/jenkins/ using vim Dockerfile.

    Now run the command docker build --no-cache -t jenkins-artemis .

    This might take a while because Docker will download Java, but this is only required once.

  3. If you run your own NGINX or if you install Jenkins on a local development computer, skip the next steps (4-7).

  4. Create a file increasing the maximum file size for the nginx proxy. The nginx-proxy uses a default file limit that is too small for the plugin that will be uploaded later. Skip this step if you have your own NGINX instance.

    echo "client_max_body_size 16m;" > client_max_body_size.conf
    
  5. The NGINX default timeout is pretty low. For the plagiarism check and for unlocking student repositories for the exam, a higher timeout is advisable. Therefore, we write our own nginx.conf and load it in the container.

    user  nginx;
    worker_processes  auto;
    
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    
    events {
        worker_connections  1024;
    }
    
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        fastcgi_read_timeout 300;
        proxy_read_timeout 300;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
        include /etc/nginx/conf.d/*.conf;
    }
    daemon off;
    
  6. Run the NGINX proxy Docker container. This will automatically set up all reverse proxies and force HTTPS on all connections. (This image will also set up proxies for all other running containers that have the VIRTUAL_HOST and VIRTUAL_PORT environment variables.) Skip this step if you have your own NGINX instance.

    docker run -itd --name nginx_proxy \
        -p 80:80 -p 443:443 \
        --restart always \
        -v /var/run/docker.sock:/tmp/docker.sock:ro \
        -v /etc/nginx/certs \
        -v /etc/nginx/vhost.d \
        -v /usr/share/nginx/html \
        -v $(pwd)/client_max_body_size.conf:/etc/nginx/conf.d/client_max_body_size.conf:ro \
        -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
        jwilder/nginx-proxy
    
  7. The NGINX proxy needs another Docker container to generate Let’s Encrypt certificates. Run the following command to start it (make sure to change the email address). Skip this step if you have your own NGINX instance.

    docker run --detach \
        --name nginx_proxy-letsencrypt \
        --volumes-from nginx_proxy \
        --volume /var/run/docker.sock:/var/run/docker.sock:ro \
        --env "DEFAULT_EMAIL=mail@yourdomain.tld" \
        jrcs/letsencrypt-nginx-proxy-companion
    
Start Jenkins
  1. Run Jenkins by executing the following command (change the hostname and choose which port alternative you need)

    docker run -itd --name jenkins \
        --restart always \
        -v jenkins_data:/var/jenkins_home \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v /usr/bin/docker:/usr/bin/docker:ro \
        -e VIRTUAL_HOST=your.jenkins.domain -e VIRTUAL_PORT=8080 \    # Alternative 1: If you are NOT using a separate NGINX instance
        -e LETSENCRYPT_HOST=your.jenkins.domain \                     # Only needed if Alternative 1 is used
        -p 8082:8080 \                                                # Alternative 2: If you ARE using a separate NGINX instance OR you ARE installing Jenkins on a local development computer
        -u root \
        jenkins/jenkins:lts
    

    If you still need the old setup with Python & Maven installed locally, use jenkins-artemis instead of jenkins/jenkins:lts. Also note that you can omit the -u root, -v /var/run/docker.sock:/var/run/docker.sock and -v /usr/bin/docker:/usr/bin/docker:ro parameters if you do not want to run Docker builds on the Jenkins controller (but e.g. use remote agents).
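
    For reference, a minimal variant without controller-side Docker builds (using the Alternative 2 port mapping) could look like this sketch:

    docker run -itd --name jenkins \
        --restart always \
        -v jenkins_data:/var/jenkins_home \
        -p 8082:8080 \
        jenkins/jenkins:lts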

  2. Open Jenkins in your browser (e.g. localhost:8082) and set up the admin user account (install all suggested plugins). You can get the initial admin password using the following command.

    # Jenkins highlights the password in the logs, you can't miss it
    docker logs -f jenkins
    # or alternatively
    docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
    
  3. Set the chosen credentials in the Artemis configuration application-artemis.yml

    artemis:
        continuous-integration:
            user: your.chosen.username
            password: your.chosen.password
    

Required Jenkins Plugins

Note: The custom Jenkins Dockerfile takes advantage of the Plugin Installation Manager Tool for Jenkins to automatically install the plugins listed below. If you used the Dockerfile, you can skip these steps as well as the Server Notification Plugin installation. The list of plugins is maintained in src/main/docker/jenkins/plugins.yml.

You will need to install the following plugins (apart from the recommended ones that got installed during the setup process):

  1. GitLab for enabling webhooks to and from GitLab

  2. Timestamper for adding the time to every line of the build output (Timestamper might already be installed)

  3. Pipeline for defining the build description using declarative files (Pipeline might already be installed)

    Note: This is a suite of plugins that will install multiple plugins

  4. Pipeline Maven to use Maven within the pipelines. If you want to use Docker for your build agents, you may also need to install Docker Pipeline.

  5. Matrix Authorization Strategy Plugin for configuring permissions for users on a project and build plan level (Matrix Authorization Strategy might already be installed).

The plugins above (and the pipeline setup associated with them) were introduced in Artemis 4.7.3. If you are using exercises that were created before 4.7.3, you also have to install these plugins:

Please note that this setup is deprecated and will be removed in the future. Please migrate to the new pipeline-setup if possible.

  1. Multiple SCMs for combining the exercise test and assignment repositories in one build

  2. Post Build Task for preparing build results to be exported to Artemis

  3. Xvfb for exercises based on GUI libraries, for which tests require a virtual display

Choose “Download now and install after restart” and check the “Restart Jenkins when installation is complete and no jobs are running” box.

Timestamper Configuration

Go to Manage Jenkins → Configure System. There you will find the Timestamper configuration, use the following value for both formats:

'<b>'yyyy-MM-dd'T'HH:mm:ssX'</b> '
../../_images/timestamper_config.png
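
With this format, each line of the build output is prefixed with an ISO-8601 timestamp (the <b> tags render it bold), e.g.:

2021-07-15T14:03:27Z Started by user artemis_admin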

Server Notification Plugin

Artemis needs to receive a notification after every build, which contains the test results and additional commit information. For that purpose, we developed a Jenkins plugin that can aggregate and POST JUnit-formatted results to any URL.

You can download the current release of the plugin here (download the .hpi file). Go to the Jenkins plugin page (Manage Jenkins → Manage Plugins) and install the downloaded file under the Advanced tab using Upload Plugin.

../../_images/jenkins_custom_plugin.png

Jenkins Credentials

Go to Manage Jenkins → Security → Manage Credentials → Jenkins → Global credentials and create the following credentials:

GitLab API Token
  1. Create a new access token in GitLab named Jenkins and give it api rights and read_repository rights. For detailed instructions on how to create such a token follow Gitlab Access Token.

    ../../_images/gitlab_jenkins_token_rights.png
  2. Copy the generated token and create new Jenkins credentials:

    1. Kind: GitLab API token

    2. Scope: Global

    3. API token: your.copied.token

    4. Leave the ID field blank

    5. The description is up to you

  3. Go to the Jenkins settings Manage Jenkins → Configure System. There you will find the GitLab settings. Fill in the URL of your GitLab instance and select the just created API token in the credentials dropdown. After you click on “Test Connection”, everything should work fine. If you have problems finding the right URL for your local docker setup, you can try http://host.docker.internal:8081 for Windows or http://docker.for.mac.host.internal:8081 for Mac if GitLab is reachable over port 8081.

    ../../_images/jenkins_gitlab_configuration.png
Server Notification Token
  1. Create a new Jenkins credential containing the token, which gets sent by the server notification plugin to Artemis with every build result:

    1. Kind: Secret text

    2. Scope: Global

    3. Secret: your.secret_token_value (choose any value you want and copy it for the next step)

    4. Leave the ID field blank

    5. The description is up to you

  2. Copy the generated ID of the new credentials and put it into the Artemis configuration application-artemis.yml

    artemis:
        continuous-integration:
            artemis-authentication-token-key: the.id.of.the.notification.token.credential
    
  3. Copy the actual value you chose for the token and put it into the Artemis configuration application-artemis.yml

    artemis:
        continuous-integration:
            artemis-authentication-token-value: the.actual.value.of.the.notification.token
    
GitLab Repository Access
  1. Create a new Jenkins credentials containing the username and password of the GitLab administrator account:

    1. Kind: Username with password

    2. Scope: Global

    3. Username: the_username_you_chose_for_the_gitlab_admin_user

    4. Password: the_password_you_chose_for_the_gitlab_admin_user

    5. Leave the ID field blank

    6. The description is up to you

  2. Copy the generated ID (e.g. ea0e3c08-4110-4g2f-9c83-fb2cdf6345fa) of the new credentials and put it into the Artemis configuration file application-artemis.yml

    artemis:
        continuous-integration:
            vcs-credentials: the.id.of.the.username.and.password.credentials.from.jenkins
    

GitLab to Jenkins push notification token

GitLab has to notify Jenkins build plans if there are any new commits to the repository. The push notification that gets sent here is secured by a token generated by Jenkins. In order to get this token, you have to do the following steps:

  1. Create a new item in Jenkins (use the Freestyle project type) and name it TestProject

  2. In the project configuration, go to Build Triggers → Build when a change is pushed to GitLab and activate this option

  3. Click on Advanced.

  4. You will now have a couple of new options here, one of them being a “Secret token”.

  5. Click on the “Generate” button right below the text box for that token.

  6. Copy the generated value, let’s call it $gitlab-push-token

  7. Apply these changes to the plan (i.e. click on Apply)

../../_images/jenkins_test_project.png
  8. Perform a GET request to the following URL (e.g. with Postman) using Basic Authentication and the username and password you chose for the Jenkins admin account:

    GET https://your.jenkins.domain/job/TestProject/config.xml
    

    If you have xmllint installed, you can use this command, which will output the secret-push-token from steps 9 and 10 (you may have to adjust the username and password):

    curl -u artemis_admin:artemis_admin http://localhost:8082/job/TestProject/config.xml | xmllint --nowarning --xpath "//project/triggers/com.dabsquared.gitlabjenkins.GitLabPushTrigger/secretToken/text()" - | sed 's/^.\(.*\).$/\1/'
    
  9. You will get the whole configuration XML of the just created build plan; there you will find the following tag:

    <secretToken>{$some-long-encrypted-value}</secretToken>
    
../../_images/jenkins_project_config_xml.png

Job configuration XML

  10. Copy the secret-push-token value in the line <secretToken>{secret-push-token}</secretToken>. This is the encrypted value of the gitlab-push-token you generated in step 5.

  11. Now, you can delete this test project and input the following values into your Artemis configuration application-artemis.yml (replace the placeholders with the actual values you wrote down):

    artemis:
        version-control:
            ci-token: $gitlab-push-token
        continuous-integration:
            secret-push-token: $some-long-encrypted-value
    
  12. In a local setup, you have to disable CSRF, otherwise some API endpoints will return HTTP status 403 Forbidden. This is done by executing the following command: docker-compose -f src/main/docker/gitlab-jenkins-mysql.yml exec -T jenkins dd of=/var/jenkins_home/init.groovy < src/main/docker/jenkins/jenkins-disable-csrf.groovy

    The last step is to disable the use-crumb option in application-local.yml:

    jenkins:
        use-crumb: false
    

Upgrading Jenkins

In order to upgrade Jenkins to a newer version, you need to rebuild the Docker image targeting the new version. The stable LTS versions can be viewed through the changelog and the corresponding Docker image can be found on dockerhub.

  1. Open the Jenkins Dockerfile and replace the value of FROM with jenkins/jenkins:lts. After running docker pull jenkins/jenkins:lts, the following steps will use the latest LTS version. You can also use a specific LTS version. For example, if you want to upgrade Jenkins to version 2.289.2, you will need to use the jenkins/jenkins:2.289.2-lts image.

  2. If you’re using docker-compose, you can simply use the following command and skip the next steps.

    docker-compose -f src/main/docker/gitlab-jenkins-mysql.yml up --build -d
    
  3. Build the new Docker image:

    docker build --no-cache -t jenkins-artemis .
    

    The resulting image is named jenkins-artemis.

  4. Stop the current Jenkins container (change jenkins to the name of your container):

    docker stop jenkins
    
  5. Rename the container to jenkins_old so that it can be used as a backup:

    docker rename jenkins jenkins_old
    
  6. Run the new Jenkins instance:

    docker run -itd --name jenkins --restart always \
     -v jenkins_data:/var/jenkins_home \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -p 9080:8080 jenkins-artemis
    
  7. You can remove the backup container if it’s no longer needed:

    docker rm jenkins_old
    

You should also update the Jenkins plugins regularly for security reasons. You can update them directly in the web user interface using the Plugin Manager.

Build agents

You can either run the builds locally (that means on the machine that hosts Jenkins) or on remote build agents.

Configuring local build agents

Go to Manage Jenkins > Manage Nodes and Clouds > master. Configure your master node like this (adjust the number of executors, if needed). Make sure to add the docker label.

../../_images/jenkins_local_node.png

Jenkins local node

Alternative local build agents setup using docker

An alternative way of adding a build agent that uses Docker (similar to the remote agents below) but runs locally is to use the jenkins/ssh-agent Docker image.

Prerequisites:

  1. Make sure to have Docker installed

Agent setup:

  1. Create a new SSH key using ssh-keygen (if a passphrase is added, store it for later)

  2. Copy the public key content (e.g. in ~/.ssh/id_rsa.pub)

  3. Run:

    docker run -d --name jenkins_agent -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/ssh-agent:latest "<copied_public_key>"
    
  4. Get the GID of the ‘docker’ group with cat /etc/group and remember it for later

  5. Enter the agent’s container with docker exec -it jenkins_agent bash

  6. Install Docker with apt update && apt install docker.io

  7. Check if the group ‘docker’ already exists with cat /etc/group. If yes, remove it with groupdel docker

  8. Add a new ‘docker’ group with the same GID as seen in point 4 with groupadd -g <GID> docker

  9. Add ‘jenkins’ user to the group with usermod -aG docker jenkins

  10. Activate changes with newgrp docker

  11. Now check if ‘jenkins’ has the needed permissions to run docker commands

    1. Log in as ‘jenkins’ with su jenkins

    2. Check whether docker inspect <agent_container_name> works or a permission error occurs

    3. If a permission error occurs, try restarting the Docker container

  12. Now you can exit the container by executing exit twice (the first exits the jenkins user session, the second the container)

Add agent in Jenkins:

  1. Open Jenkins in your browser (e.g. localhost:8082)

  2. Go to Manage Jenkins -> Manage Credentials -> (global) -> Add Credentials

    • Kind: SSH Username with private key

    • ID: leave blank

    • Description: Up to you

    • Username: jenkins

    • Private Key: <content of the previously generated private key> (e.g. /root/.ssh/id_rsa)

    • Passphrase: <the previously entered passphrase> (you can leave it blank if none has been specified)

    ../../_images/alternative_jenkins_node_credentials.png
  3. Go to Manage Jenkins -> Manage Nodes and Clouds -> New Node

    • Node name: Up to you (e.g. Docker)

    • Check ‘Permanent Agent’

    ../../_images/alternative_jenkins_node_setup.png
  4. Node settings:

    • # of executors: Up to you (e.g. 4)

    • Remote root directory: /home/jenkins/agent

    • Labels: docker

    • Usage: Only build jobs with label expressions matching this node

    • Launch method: Launch agents via SSH

    • Host: output of command docker inspect --format '{{ .Config.Hostname }}' jenkins_agent

    • Credentials: <the previously created SSH credential>

    • Host Key Verification Strategy: Non verifying Verification Strategy

    • Availability: Keep this agent online as much as possible

    ../../_images/alternative_jenkins_node.png
  5. Save the new node

  6. Node should now be up and running

Installing remote build agents

You might want to run the builds on additional Jenkins agents, especially if a large number of students use the system at the same time. Jenkins supports remote build agents: the actual compilation of the students’ submissions happens on these other machines, but the whole process is transparent to Artemis.

This guide explains setting up a remote agent on an Ubuntu virtual machine that supports docker builds.

Prerequisites:

  • Install Docker on the remote machine: https://docs.docker.com/engine/install/ubuntu/

  1. Add a new user to the remote machine that Jenkins will use: sudo adduser --disabled-password --gecos "" jenkins

  2. Add the jenkins user to the docker group (This allows the jenkins user to interact with docker): sudo usermod -a -G docker jenkins

  3. Generate a new SSH key locally (e.g. using ssh-keygen) and add the public key to the .ssh/authorized_keys file of the jenkins user on the agent VM.

  4. Validate that you can connect to the build agent machine using SSH and the generated private key and validate that you can use docker (docker ps should not show an error)
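
    For example (a sketch; the key path and host name are placeholders):

    # connect as the jenkins user with the generated key and verify Docker access
    ssh -i ~/.ssh/id_rsa jenkins@build-agent.example.com docker ps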

  5. Log in with your normal account on the build agent machine and install Java: sudo apt install default-jre

  6. Add a new secret in Jenkins, enter the private key you just generated and add the passphrase, if set:

    ../../_images/jenkins_ssh_credentials.png

    Jenkins SSH Credentials

  7. Add a new node (select a name and select Permanent Agent): Set the number of executors so that it matches your machine’s specs: This is the number of concurrent builds this agent can handle. It is recommended to match the number of cores of the machine, but you might want to adjust this later if needed.

    Set the remote root directory to /home/jenkins/remote_agent.

    Set the usage to Only build jobs with label expressions matching this node. This ensures that only docker-jobs will be built on this agent, and not other jobs.

    Add a label docker to the agent.

    Set the launch method to Launch via SSH and add the host of the machine. Select the credentials you just created and select Manually trusted key Verification Strategy as Host key verification Strategy. Save it.

    ../../_images/jenkins_node.png

    Add a Jenkins node

  8. Wait a few moments while Jenkins installs its remote agent on the agent’s machine. You can track the progress using the Log page when selecting the agent. System information should also be available.

  9. Change the settings of the master node to be used only for specific jobs. This ensures that the docker tasks are not executed on the master agent but on the remote agent.

../../_images/jenkins_master_node.png

Adjust Jenkins master node settings

  10. You are finished; the new agent should now also process builds.

Jenkins User Management

Artemis supports user management in Jenkins as of version 4.11.0. Creating an account in Artemis will also create an account on Jenkins using the same password. This enables users to log in to and access Jenkins. Updating and/or deleting users from Artemis will also lead to updating and/or deleting them from Jenkins.

Unfortunately, Jenkins does not provide a REST API for user management, which presents the following caveats:

  • The username of a user is treated as a unique identifier in Jenkins.

  • It’s not possible to update an existing user with a single request. We update by deleting the user from Jenkins and recreating it with the updated data.

  • In Jenkins, users are created on an on-demand basis. For example, when a build is performed, its change log is computed, and as a result commits from users whom Jenkins has never seen may be discovered and created.

  • Since Jenkins users may be re-created automatically, issues may occur such as 1) creating a user, deleting it, and then re-creating it and 2) changing the username of the user and reverting back to the previous one.

  • Updating a user will re-create it in Jenkins and therefore remove any additionally saved Jenkins-specific user data such as API access tokens.

Jenkins Build Plan Access Control Configuration

Artemis takes advantage of the Project-based Matrix Authorization Strategy plugin to support build plan access control in Jenkins. This enables specific Artemis users to access build plans and execute actions such as triggering a build. This section explains the changes required in Jenkins in order to set up build plan access control:

  1. Navigate to Manage Jenkins -> Manage Plugins -> Installed and make sure that you have the Matrix Authorization Strategy plugin installed

  2. Navigate to Manage Jenkins -> Configure Global Security and navigate to the “Authorization” section

  3. Select the “Project-based Matrix Authorization Strategy” option

  4. In the table make sure that the “Read” permission under the “Overall” section is assigned to the “Authenticated Users” user group.

  5. In the table make sure that the “Administer” permission is assigned to all administrators.

  6. You are finished. If you want to fine-tune permissions assigned to teaching assistants and/or instructors, you can change them within the JenkinsJobPermission.java file.

../../_images/jenkins_authorization_permissions.png

Caching

You can configure caching for e.g. Maven repositories. See Programming Exercise adjustments for more details.

Separate NGINX Configurations

There are some placeholders in the following configurations. Replace them with your setup-specific values.

GitLab

server {
    listen 443 ssl http2;
    server_name your.gitlab.domain;
    ssl_session_cache shared:GitLabSSL:10m;
    include /etc/nginx/common/common_ssl.conf;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header Referrer-Policy same-origin;
    client_max_body_size 10m;
    client_body_buffer_size 1m;

    location / {
        proxy_pass              http://localhost:<your exposed GitLab HTTP port (default 80)>;
        proxy_read_timeout      300;
        proxy_connect_timeout   300;
        proxy_http_version      1.1;
        proxy_redirect          http://         https://;

        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $scheme;

        gzip off;
    }
}

Jenkins

server {
    listen 443 ssl http2;
    server_name your.jenkins.domain;
    ssl_session_cache shared:JenkinsSSL:10m;
    include /etc/nginx/common/common_ssl.conf;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header Referrer-Policy same-origin;
    client_max_body_size 10m;
    client_body_buffer_size 1m;

    location / {
        proxy_pass              http://localhost:<your exposed Jenkins HTTP port (default 8081)>;
        proxy_set_header        Host                $host:$server_port;
        proxy_set_header        X-Real-IP           $remote_addr;
        proxy_set_header        X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto   $scheme;
        proxy_redirect          http://             https://;

        # Required for new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_buffering off; # Required for HTTP-based CLI to work over SSL

        # workaround for https://issues.jenkins-ci.org/browse/JENKINS-45651
        add_header 'X-SSH-Endpoint' 'your.jenkins.domain.com:50022' always;
    }

    error_page 502 /502.html;
    location /502.html {
        root /usr/share/nginx/html;
        internal;
    }
}

/etc/nginx/common/common_ssl.conf

If you haven’t done so, generate the DH param file: sudo openssl dhparam -out /etc/nginx/dhparam.pem 4096

ssl_certificate     <path to your fullchain certificate>;
ssl_certificate_key <path to the private key of your certificate>;
ssl_protocols       TLSv1.2 TLSv1.3;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_prefer_server_ciphers   on;
ssl_ciphers ECDH+CHACHA20:EECDH+AESGCM:EDH+AESGCM:!AES128;
ssl_ecdh_curve secp384r1;
ssl_session_timeout  10m;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver <if you have any, specify them here> valid=300s;
resolver_timeout 5s;

Deployment of Artemis / GitLab / Jenkins using Docker on a Local Machine

Execute the following steps in addition to the ones described above:

Preparation

  1. Create a Docker network named “artemis” with docker network create artemis.

GitLab

  1. Add the GitLab container to the created network with docker network connect artemis gitlab.

  2. Get the URL of the GitLab container with the first IP returned by docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' gitlab.

  3. Use this IP in the application-artemis.yml file at artemis.version-control.url.

Jenkins

  1. Add the Jenkins container to the created network with docker network connect artemis jenkins.

  2. Get the URL of the Jenkins container with the first IP returned by docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' jenkins.

  3. Use this IP in the application-artemis.yml file at artemis.continuous-integration.url.

Artemis

  1. In docker-compose.yml:

    1. Make sure to use unique ports, e.g. 8080 for Artemis, 8081 for GitLab and 8082 for Jenkins.

    2. Change the SPRING_PROFILES_ACTIVE environment variable to dev,jenkins,gitlab,artemis,scheduling.

  2. In src/main/resources/config/application-dev.yml at server: use port: 8080 for Artemis.

  3. Run docker-compose up.

  4. After the container has been deployed run docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' artemis_artemis-server and copy the first resulting IP.

  5. In src/main/resources/config/application-dev.yml at server: at url: paste the copied IP with the port number, e.g. url: http://172.33.0.1:8080.

  6. Stop the Artemis docker container with Control-C and re-run docker-compose up.


Athene Service

The semi-automatic text assessment relies on the Athene service. To enable automatic text assessments, special configuration is required:

Enable the athene Spring profile:

--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling,athene

Configure API Endpoints:

The Athene service is running on a dedicated machine and is addressed via HTTP. We need to extend the configuration in the file src/main/resources/config/application-artemis.yml like so:

artemis:
  # ...
  athene:
    url: http://localhost
    base64-secret: YWVuaXF1YWRpNWNlaXJpNmFlbTZkb283dXphaVF1b29oM3J1MWNoYWlyNHRoZWUzb2huZ2FpM211bGVlM0VpcAo=
    token-validity-in-seconds: 10800

Apollon Service

The Apollon Converter is needed to convert models from their JSON representation to PDF. Special configuration is required:

Enable the apollon Spring profile:

--spring.profiles.active=dev,bamboo,bitbucket,jira,artemis,scheduling,apollon

Configure API Endpoints:

The Apollon conversion service is running on a dedicated machine and is addressed via HTTP. We need to extend the configuration in the file src/main/resources/config/application-artemis.yml like so:

artemis:
    # ...
    apollon:
        conversion-service-url: http://localhost:8080

Common Setup Problems

General Setup Problems

  • Restarting IntelliJ with invalidated caches (File > Invalidate Caches…) might resolve the current issue.

  • When facing issues with deep dependencies after changes were made to the package.json file, executing npm install --force might resolve the issue.

  • When encountering a compilation error due to an invalid source release, make sure that you have set the Java version properly in 3 places:

    • File > Project Structure > Project Settings > Project > Project SDK

    • File > Project Structure > Project Settings > Project > Project Language Level

    • File > Settings > Build, Execution, Deployment > Build Tools > Gradle > Gradle JVM

Database

  • On the first startup, there might be issues with the text_block table. You can resolve the issue by executing ALTER TABLE text_block CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci; in your database.

  • One typical problem in the development setup is that an exception occurs during the database initialization. Artemis uses Liquibase to automatically upgrade the database schema after the data model has changed. This ensures that the changes can also be applied to the production server. In case you encounter errors with Liquibase checksum values:

    • Run the following command in your terminal / command line: ./gradlew liquibaseClearChecksums

    • You can manually adjust the checksum for a breaking changelog: UPDATE `DATABASECHANGELOG` SET `MD5SUM` = NULL WHERE `ID` = '<changelogId>'

Client

  • If you are using a machine with limited RAM (e.g. ~8 GB RAM) you might have issues starting the Artemis Client. You can resolve this by following the description in Using the command line

Programming Exercise Setup

Atlassian Setup (Bamboo, Bitbucket and Jira)

  • When setting up Bamboo, Bitbucket, and Jira at the same time within the same browser, you might receive the message that the Jira token expired. You can resolve the issue by using another browser for configuring Jira, as there seems to be a synchronization problem within the browser.

  • When you create a new programming exercise and receive the error message The project <ProgrammingExerciseName> already exists in the CI Server. Please choose a different short name! and you have double checked that this project does not exist within the CI Server Bamboo, you might have to renew the trial licenses for the Atlassian products.

    Update Atlassian Licenses: You need to create new Atlassian licenses, which requires you to retrieve the server ID and navigate to the license editing page after creating new trial licenses.
    • Bamboo: Retrieve the Server ID and edit the license in License key details (Administration > Licensing)
    • Bitbucket: Retrieve the Server ID and edit the license in License Settings (Administration > Licensing)
    • Jira: Retrieve the Server ID (System > System info) and edit the JIRA Service Desk License key in Versions & licenses

Multiple Artemis instances

Setup with one instance

Artemis usually runs with one instance of the application server:

../../_images/deployment_before.drawio.png

Setup with multiple instances

There are certain scenarios, where a setup with multiple instances of the application server is required. This can e.g. be due to special requirements regarding fault tolerance or performance.

Artemis also supports this setup (which is also used at the Chair for Applied Software Engineering at TUM).

Multiple instances of the application server are used to distribute the load:

../../_images/deployment_after_simple.drawio.png

A load balancer (typically a reverse proxy such as nginx) is added that distributes the requests to the different instances.

Note: This documentation focuses on the practical setup of this distributed setup. More details regarding the theoretical aspects can be found in the Bachelor’s Thesis Securing and Scaling Artemis WebSocket Architecture, which can be found here: pdf.

Additional synchronization

All instances of the application server use the same database, but other parts of the system also have to be synchronized:

  1. Database cache

  2. WebSocket messages

  3. File system

Each of these three aspects is synchronized using a different solution.

Database cache

Artemis uses a cache provider that supports distributed caching: Hazelcast.

All instances of Artemis form a so-called cluster that allows them to synchronize their cache. You can use the configuration argument spring.hazelcast.interface to configure the interface on which Hazelcast will listen.

../../_images/deployment_hazelcast.drawio.png

One problem that arises with a distributed setup is that all instances have to know each other in order to create this cluster. This is problematic if the instances change dynamically. Artemis uses a discovery service to solve the issue (named JHipster Registry).

Discovery service

JHipster registry contains Eureka, the discovery service where all instances can register themselves and fetch the other registered instances.

Eureka can be configured like this within Artemis:

# Eureka configuration
eureka:
    client:
        enabled: true
        service-url:
            defaultZone: {{ artemis_eureka_urls }}
    instance:
        prefer-ip-address: true
        ip-address: {{ artemis_ip_address }}
        appname: Artemis
        instanceId: Artemis:{{ artemis_eureka_instance_id }}

logging:
    file:
        name: '/opt/artemis/artemis.log'

{{ artemis_eureka_urls }} must be the URL where Eureka is reachable, {{ artemis_ip_address }} must be the IP under which this instance is reachable, and {{ artemis_eureka_instance_id }} must be a unique identifier for this instance. You also have to set the value jhipster.registry.password to the password of the registry (which you will set later).

Note that Hazelcast (which requires Eureka) binds to 127.0.0.1 by default to prevent other instances from forming a cluster without manual intervention. If you set up the cluster on multiple machines (which you should do for a production setup), you have to set the value spring.hazelcast.interface to the IP address of the machine. Hazelcast will then bind to this interface rather than 127.0.0.1, which allows other instances to establish connections to the instance. This setting must be set for every instance, and you have to make sure to adjust the IP address accordingly.
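
For example, an instance reachable under 192.168.1.10 (an example address) would set:

spring:
    hazelcast:
        interface: 192.168.1.10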

Setup

Installing

  1. Create the directory

sudo mkdir /opt/registry/
sudo mkdir /opt/registry/config-server
  2. Download the application

Download the latest version of the jhipster-registry from GitHub, e.g. by using

sudo wget -O /opt/registry/registry.jar https://github.com/jhipster/jhipster-registry/releases/download/v6.2.0/jhipster-registry-6.2.0.jar

Service configuration

  1. sudo vim /etc/systemd/system/registry.service

[Unit]
Description=Registry
After=syslog.target

[Service]
User=artemis
WorkingDirectory=/opt/registry
ExecStart=/usr/bin/java \
    -Xmx256m \
    -jar registry.jar \
    --spring.profiles.active=prod,native
SuccessExitStatus=143
StandardOutput=append:/opt/registry/registry.log
#StandardError=inherit

[Install]
WantedBy=multi-user.target
  2. Set Permissions in Registry Folder

sudo chown -R artemis:artemis /opt/registry
sudo chmod g+rwx /opt/registry
  3. Enable the service

sudo systemctl daemon-reload
sudo systemctl enable registry.service
  4. Start the service (only after performing steps 1-3 of the configuration)

sudo systemctl start registry
  5. Logging

sudo journalctl -f -n 1000 -u registry

Configuration

  1. sudo vim /opt/registry/application-prod.yml

logging:
    file:
        name: '/opt/registry/registry.log'

jhipster:
    security:
        authentication:
            jwt:
                base64-secret: THE-SAME-TOKEN-THAT-IS-USED-ON-THE-ARTEMIS-INSTANCES
    registry:
        password: AN-ADMIN-PASSWORD-THAT-MUST-BE-CHANGED
spring:
    security:
        user:
            password: AN-ADMIN-PASSWORD-THAT-MUST-BE-CHANGED
  2. sudo vim /opt/registry/bootstrap-prod.yml

jhipster:
    security:
        authentication:
            jwt:
                base64-secret: THE-SAME-TOKEN-THAT-IS-USED-ON-THE-ARTEMIS-INSTANCES
                secret: ''

spring:
    cloud:
        config:
            server:
                bootstrap: true
                composite:
                    - type: native
                      search-locations: file:./config-server
  3. sudo vim /opt/registry/config-server/application.yml

# Common configuration shared between all applications
configserver:
    name: Artemis JHipster Registry
    status: Connected to the Artemis JHipster Registry

jhipster:
    security:
        authentication:
            jwt:
                secret: ''
                base64-secret: THE-SAME-TOKEN-THAT-IS-USED-ON-THE-ARTEMIS-INSTANCES

eureka:
    client:
        service-url:
            defaultZone: http://admin:${jhipster.registry.password}@localhost:8761/eureka/

nginx config

You still have to make the registry available:

  1. sudo vim /etc/nginx/sites-available/registry.conf

server {
    listen 443 ssl http2;
    server_name REGISTRY_FQDN;
    ssl_session_cache shared:RegistrySSL:10m;
    include /etc/nginx/common/common_ssl.conf;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header Referrer-Policy same-origin;
    client_max_body_size 10m;
    client_body_buffer_size 1m;

    location / {
        proxy_pass              http://localhost:8761;
        proxy_read_timeout      300;
        proxy_connect_timeout   300;
        proxy_http_version      1.1;
        proxy_redirect          http://         https://;

        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $scheme;

        gzip off;
    }
}
  2. sudo ln -s /etc/nginx/sites-available/registry.conf /etc/nginx/sites-enabled/

This enables the registry in nginx.

  3. sudo service nginx restart

This will apply the config changes and the registry will be reachable.

WebSockets

WebSockets should also be synchronized (so that a user connected to one instance can perform an action which causes an update to users on different instances, without having to reload the page - such as quiz starts). We use a so-called broker for this (named Apache ActiveMQ Artemis).

It relays messages between instances:

../../_images/deployment_broker.drawio.png

Setup

  1. Create a folder to store ActiveMQ

sudo mkdir /opt/activemq-distribution
  2. Download ActiveMQ here: http://activemq.apache.org/components/artemis/download/

sudo wget -O /opt/activemq-distribution/activemq.tar.gz https://downloads.apache.org/activemq/activemq-artemis/2.13.0/apache-artemis-2.13.0-bin.tar.gz
  3. Extract the downloaded contents

cd /opt/activemq-distribution
sudo tar -xf activemq.tar.gz
  4. Navigate to the folder with the CLI

cd /opt/activemq-distribution/apache-artemis-2.13.0/bin
  5. Create a broker in the /opt/broker/broker1 directory, replace USERNAME and PASSWORD accordingly

sudo ./artemis create --user USERNAME --password PASSWORD --require-login /opt/broker/broker1
  6. Adjust the permissions

sudo chown -R artemis:artemis /opt/broker
sudo chmod g+rwx /opt/broker
  7. Adjust the configuration of the broker: sudo vim /opt/broker/broker1/etc/broker.xml

<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:xi="http://www.w3.org/2001/XInclude"
            xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:activemq:core ">

    <name>0.0.0.0</name>

    <journal-pool-files>10</journal-pool-files>

    <acceptors>
        <!-- STOMP Acceptor. -->
        <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;heartBeatToConnectionTtlModifier=6</acceptor>
    </acceptors>

    <connectors>
        <connector name="netty-connector">tcp://localhost:61616</connector>
    </connectors>

    <security-settings>
        <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
        </security-setting>
    </security-settings>

    <address-settings>
        <!--default for catch all-->
        <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
        </address-setting>
    </address-settings>
</core>
</configuration>
  8. Service configuration: sudo vim /etc/systemd/system/broker1.service

[Unit]
Description=ActiveMQ-Broker
After=network.target

[Service]
User=artemis
WorkingDirectory=/opt/broker/broker1
ExecStart=/opt/broker/broker1/bin/artemis run


[Install]
WantedBy=multi-user.target
  9. Enable the service

sudo systemctl daemon-reload
sudo systemctl enable broker1
sudo systemctl start broker1

Configuration of Artemis

Add the following values to your Artemis config:

spring:
    websocket:
        broker:
            username: USERNAME
            password: PASSWORD
            addresses: "localhost:61613"

USERNAME and PASSWORD are the values used in step 5. Replace localhost if the broker runs on a separate machine.

File system

The last (and also easiest) part to configure is the file system: You have to provide a folder that is shared between all instances of the application server (e.g. by using NFS).

You then have to set the following values in the application config:

artemis:
    repo-clone-path: {{ artemis_repo_basepath }}/repos/
    repo-download-clone-path: {{ artemis_repo_basepath }}/repos-download/
    file-upload-path: {{ artemis_repo_basepath }}/uploads
    submission-export-path: {{ artemis_repo_basepath }}/exports

Where {{ artemis_repo_basepath }} is the path to the shared folder.

The file system stores (as its name suggests) files; these are e.g. submissions to file upload exercises, repositories that are checked out for the online editor, course icons, etc.
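
As an illustration, such a shared folder could be provided by mounting the same NFS export on every instance before starting Artemis (a sketch; the server name and paths are hypothetical):

# on each application server, mount the shared export (hypothetical names)
sudo mount -t nfs nfs-server.example.com:/exports/artemis /opt/artemis/shared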

Scheduling

Artemis uses scheduled tasks in various scenarios: e.g. to lock repositories at the due date, clean up unused resources, etc. As we now run multiple instances of Artemis, we have to ensure that the scheduled tasks are not executed multiple times. Artemis uses two approaches for this:

  1. Tasks for quizzes (e.g. evaluation once the quiz is due) are automatically distributed (using Hazelcast)

  2. Tasks for other exercises are only scheduled on one instance:

You must add the scheduling profile to exactly one instance of your cluster. This instance will then perform the scheduled tasks, whereas the other instances will not.
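
As a sketch, only the designated instance includes the scheduling profile in its active profiles:

# instance 1 (runs the scheduled tasks)
--spring.profiles.active=prod,jenkins,gitlab,artemis,scheduling
# instances 2..n
--spring.profiles.active=prod,jenkins,gitlab,artemis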

nginx configuration

You have to change the nginx configuration (of Artemis) to ensure that the load is distributed between all instances. This can be done by defining an upstream (containing all instances) and forwarding all requests to this upstream.

upstream artemis {
    server instance1:8080;
    server instance2:8080;
}
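
A server block then forwards all requests to this upstream; a minimal sketch (a real configuration additionally carries the SSL and header settings shown in the earlier NGINX examples, and the WebSocket endpoints need the upgrade headers):

server {
    listen 443 ssl http2;
    server_name your.artemis.domain;

    location / {
        proxy_pass http://artemis;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # WebSocket connections require HTTP/1.1 and the upgrade headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}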

Overview

All instances can now communicate with each other on 3 different layers:

  • Database cache

  • WebSockets

  • File system

You can see the state of all connected instances within the registry:


../../_images/registry.png

Alternative: Docker-Compose Setup

A fully functioning development environment can also be set up using docker-compose:

  1. Install docker and docker-compose

  2. Configure the credentials in application-artemis.yml in the folder src/main/resources/config as described above

  3. Run docker-compose up

  4. Go to http://localhost:9000

The client and the server will run in different containers. As Npm is used with its live reload mode to build and run the client, any change in the client’s codebase will trigger a rebuild automatically. In case of changes in the codebase of the server, one has to restart the artemis-server container via docker-compose restart artemis-server.

(Native) Running and Debugging from IDEs is currently not supported.

Get a shell into the containers

  • app container: docker exec -it $(docker-compose ps -q artemis-app) sh

  • mysql container: docker exec -it $(docker-compose ps -q artemis-mysql) mysql

Other useful commands

  • Stop the server: docker-compose stop artemis-server (restart via docker-compose start artemis-server)

  • Stop the client: docker-compose stop artemis-client (restart via docker-compose start artemis-client)


Alternative: Kubernetes Setup

This section describes how to set up an environment deployed in Kubernetes.

Prerequisites:

Follow the links to install the tools which will be needed to proceed with the Kubernetes cluster setup.

  • Docker - v20.10.7

    Docker is a platform for developing, shipping and running applications. In our case, we will use it to build the images which we will deploy. It is also needed by k3d to create a cluster, since the cluster nodes are deployed in Docker containers.

  • DockerHub Account

    Docker Hub is a service provided by Docker for finding and sharing container images. A DockerHub account is needed to push the Artemis image which will be used by the Kubernetes deployment.

  • k3d - v4.4.7

    k3d is a lightweight wrapper to run k3s, a lightweight Kubernetes distribution, in Docker. k3d makes it very easy to create k3s clusters, especially for local Kubernetes deployments.

    Windows users can use choco to install it. More details can be found in the link under Other Installation Methods.

  • kubectl - v1.21

    kubectl is the Kubernetes command-line tool, which allows you to run commands against Kubernetes clusters. It can be used to deploy applications, inspect and manage cluster resources, and view logs.

  • helm - v3.6.3

    Helm is the package manager for Kubernetes. We will use it to install cert-manager and Rancher.

Setup Kubernetes Cluster

To be able to deploy Artemis on Kubernetes, you need to set up a cluster. A cluster is a set of nodes that run containerized applications. Kubernetes clusters allow for applications to be more easily developed, moved and managed.

With the following commands, you will set up one cluster with three agents as well as Rancher, which is a platform for cluster management with an easy-to-use user interface.

IMPORTANT: Before you continue make sure Docker has been started.

  1. Set environment variables

    The CLUSTER_NAME, RANCHER_SERVER_HOSTNAME and KUBECONFIG_FILE environment variables need to be set so that they can be used in the next commands. If you don’t want to set environment variables you can replace their values in the commands. What you need to do is replace $CLUSTER_NAME with “k3d-rancher”, $RANCHER_SERVER_HOSTNAME with “rancher.localhost” and $KUBECONFIG_FILE with “k3d-rancher.yml”.

    For macOS/Linux:

    export CLUSTER_NAME="k3d-rancher"
    export RANCHER_SERVER_HOSTNAME="rancher.localhost"
    export KUBECONFIG_FILE="$CLUSTER_NAME.yaml"
    

    For Windows:

    $env:CLUSTER_NAME="k3d-rancher"
    $env:RANCHER_SERVER_HOSTNAME="rancher.localhost"
    $env:KUBECONFIG_FILE="${env:CLUSTER_NAME}.yaml"
    
  2. Create the cluster

    With the help of the command blocks below, you can create a cluster with one server and three agents, for a total of four nodes. Your deployments will be distributed almost equally among the 4 nodes.

    Using k3d cluster list you can see whether your cluster is created and how many of its nodes are running.

    Using kubectl get nodes you can see the status of each node of the newly created cluster.

    You should also write the cluster configuration into the KUBECONFIG_FILE. This configuration will be needed later when you are creating deployments. You can either set the path to the file as an environment variable or replace it with “<path-to-kubeconfig-file>” when needed.

    For macOS/Linux:

    k3d cluster create $CLUSTER_NAME --api-port 6550 --servers 1 --agents 3 --port 443:443@loadbalancer --wait
    k3d cluster list
    kubectl get nodes
    k3d kubeconfig get $CLUSTER_NAME > $KUBECONFIG_FILE
    export KUBECONFIG=$KUBECONFIG_FILE
    

    For Windows:

    k3d cluster create $env:CLUSTER_NAME --api-port 6550 --servers 1 --agents 3 --port 443:443@loadbalancer --wait
    k3d cluster list
    kubectl get nodes
    k3d kubeconfig get ${env:CLUSTER_NAME} > $env:KUBECONFIG_FILE
    $env:KUBECONFIG=($env:KUBECONFIG_FILE)
    
  3. Install cert-manager

    cert-manager is used to add certificates and certificate issuers as resource types in Kubernetes clusters. It simplifies the process of obtaining, renewing and using those certificates. It can issue certificates from a variety of supported sources, e.g. Let’s Encrypt, HashiCorp Vault, Venafi.

    In our case, it will issue self-signed certificates to our Kubernetes deployments to secure the communication between the different deployments.

    Before the installation, you need to add the Jetstack repository and update the local Helm chart repository cache. cert-manager has to be installed in a separate namespace called cert-manager so one should be created as well. After the installation, you can check the status of the installation.

    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    kubectl create namespace cert-manager
    helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.0.4 --set installCRDs=true --wait
    kubectl -n cert-manager rollout status deploy/cert-manager
    
  4. Install Rancher

    Rancher is a Kubernetes management tool that allows you to create and manage Kubernetes deployments more easily than with the CLI tools.

    You can install Rancher using Helm - the package manager for Kubernetes. It has to be installed in a namespace called cattle-system and we should create such a namespace before the installation itself. During the installation, we set the namespace and the hostname on which Rancher will be accessible. Then we can check the installation status.

    For macOS/Linux:

    helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
    helm repo update
    kubectl create namespace cattle-system
    helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=$RANCHER_SERVER_HOSTNAME --version 2.5.9 --wait
    kubectl -n cattle-system rollout status deploy/rancher
    

    For Windows:

    helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
    helm repo update
    kubectl create namespace cattle-system
    helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=${env:RANCHER_SERVER_HOSTNAME} --version 2.5.9 --wait
    kubectl -n cattle-system rollout status deploy/rancher
    
  5. Open Rancher and update the password

Open Rancher on https://rancher.localhost/.

You will be notified that the connection is not private. The reason is that the Rancher deployment uses a self-signed certificate from the unknown authority ‘dynamiclistener-ca’, which is used for secure communication between internal components. Since it’s your local environment, this is not an issue and you can proceed to the website. If you can’t continue using the Chrome browser, you can try another browser, e.g. Firefox.

You will be prompted to set a password which will later be used to log in to Rancher. The password will be used often, so you shouldn’t forget it.

../../_images/rancher_password.png

Then you should save the Rancher Server URL, please use the predefined name.

../../_images/rancher_url.png

After saving, you will be redirected to the main page of Rancher, where you see your clusters. There will be one local cluster.

../../_images/rancher_cluster.png

You can open the workloads using the menu; there will be no workloads deployed at the moment.

../../_images/rancher_nav_workloads.png
../../_images/rancher_empty_workloads.png
  6. Create a new namespace in Rancher

Namespaces are virtual clusters backed by the same physical cluster. Namespaces provide a scope for names: names of resources need to be unique within a namespace, but not across namespaces. Usually, different namespaces are created to separate environment deployments, e.g. development, staging, production.

For our development purposes, we will create a namespace called artemis. It can be done easily using Rancher; an equivalent kubectl command is shown after the steps below.

  1. Navigate to Namespaces using the top menu of Rancher

  2. Select Add Namespace to open the form for namespace creation

    ../../_images/rancher_namespaces.png
  3. Put artemis as namespace’s name and select the Create button

    ../../_images/rancher_create_namespace.png
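
Alternatively, the same namespace can be created from the command line:

kubectl create namespace artemis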

Create DockerHub Repository

The Artemis image will be stored and managed in DockerHub. Kubernetes will pull it from there and deploy it afterwards.

After you log in to your DockerHub account, you can create as many public repositories as you want. To create a repository, select the Create repository button.

DockerHub:

../../_images/dockerhub.png

Fill in the repository name artemis and use the Create button to create your repository.

../../_images/dockerhub_create_repository.png

Configure Docker ID (username)

The username in DockerHub is called Docker ID. You need to set your Docker ID in the artemis-deployment.yml resource so that Kubernetes knows where to pull the image from. Open the src/main/kubernetes/artemis/deployment/artemis-deployment.yml file and edit

template:
   spec:
      containers:
         - image: <DockerId>/artemis

and replace <DockerId> with your Docker ID on DockerHub.

For example, it will look like this:

template:
   spec:
      containers:
         - image: mmehmed/artemis

Configure Artemis Resources

To run Artemis, you need to configure its User Management, Version Control and Continuous Integration. You can run it either with Jira, Bitbucket, and Bamboo, or with Jenkins and GitLab. Make sure to configure the src/main/resources/config/application-artemis.yml file with the proper configuration for User Management, Version Control and Continuous Integration.

You should skip setting the passwords and tokens, since the Docker image we are going to build would otherwise include those secrets. Refer to the chapter Add/Edit Secrets for setting those values.

If you want to configure Artemis with Bitbucket, Jira, and Bamboo, you can set up a connection to existing staging or production deployments. If you want to configure Artemis with local user management and no programming exercises, continue with Configure Local User Management.

Configure Local User Management

If you want to run with local user management and no programming exercise setup, follow these steps:

1. Go to the src/main/resources/config/application-artemis.yml file and set use-external in the user-management section to false. If you have created an additional application-local.yml file as described in the Setup documentation, make sure to edit that one instead.

Another possibility is to add the variable directly in src/main/kubernetes/artemis/configmap/artemis-configmap.yml.

data:
   artemis.user-management.use-external: "false"

2. Remove the jira profile from the SPRING_PROFILES_ACTIVE field in the ConfigMap found at src/main/kubernetes/artemis/configmap/artemis-configmap.yml.
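
As a sketch, assuming the default profile list described in Configure Spring Profiles below (prod together with bamboo,bitbucket,jira), the field would then look as follows; any additional profiles stay untouched:

data:
   SPRING_PROFILES_ACTIVE: prod,bamboo,bitbucket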

Now you can continue with the next step, Build Artemis.

Build Artemis

Build the Artemis application war file using the following command:

./gradlew -Pprod -Pwar clean bootWar
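
You can verify that the war file was created; Gradle places the build output in build/libs by default:

ls build/libs/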

Run Docker Build

Run Docker build and prepare the Artemis image to be pushed to DockerHub using the following command:

docker build -t <DockerId>/artemis -f src/main/docker/Dockerfile .

This will create the Docker image by copying the war file which was generated by the previous command.
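
To confirm that the image was built and is available locally, you can list it:

docker images <DockerId>/artemis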

Push to DockerHub

Push the image to DockerHub from where it will be pulled during the deployment:

docker push <DockerId>/artemis

In case you get an “Access denied” error during the push, first execute

docker login

and then try the docker push command again.

Configure Spring Profiles

ConfigMaps are used to store configuration data in key-value pairs.

You can change the Spring profiles used for running Artemis in the src/main/kubernetes/artemis/configmap/artemis-configmap.yml file by changing SPRING_PROFILES_ACTIVE. The current profiles are set to use Bitbucket, Jira, and Bamboo. If you want to use Jenkins and GitLab, replace bamboo,bitbucket,jira with jenkins,gitlab. You can also change prod to dev if you want to run in the development profile.
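
As a sketch, after switching to Jenkins and GitLab the entry would look along these lines; any additional profiles in your list stay as they are:

data:
   SPRING_PROFILES_ACTIVE: prod,jenkins,gitlab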

Deploy Kubernetes Resources

Kustomization files declare, in one place, all the resources that will be deployed; with their help, the whole deployment can be done with a single command.

Once you have pushed your Artemis image to DockerHub, you can use the kustomization.yml file in src/main/kubernetes to deploy all the Kubernetes resources. You can do so by executing the following command:

kubectl apply -k src/main/kubernetes/artemis --kubeconfig <path-to-kubeconfig-file>

<path-to-kubeconfig-file> is the path to the KUBECONFIG_FILE you created earlier.

In the console, you will see the resources being created. This will take a little while the first time you run it. Be patient!

../../_images/kubectl_kustomization.png
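
You can also watch the resources come up from the command line, using the artemis namespace created earlier:

kubectl get pods --namespace artemis --watch --kubeconfig <path-to-kubeconfig-file>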

Add/Edit Secrets

Once you have deployed Artemis, you need to add/edit the secrets so that it can run successfully.

Open Rancher using https://rancher.localhost/ and navigate to your cluster.

Then navigate to Secrets like shown below:

../../_images/rancher_secrets_menu.png

You will see a list of all defined secrets.

../../_images/rancher_secrets_list.png

Continue with artemis-secrets to see the values in the secret file, then navigate to the edit page.

../../_images/rancher_secrets_edit.png

You can edit each secret or add more secrets. Once you select a value box, the value itself will be shown and you can edit it.

../../_images/rancher_secrets_edit_page.png

After you are done, you can save your changes and redeploy the Artemis workload.
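
The same secret can also be edited with kubectl; note that, unlike in the Rancher UI, the values appear base64-encoded there:

kubectl edit secret artemis-secrets --namespace artemis --kubeconfig <path-to-kubeconfig-file>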

Check the Deployments in Rancher

Open Rancher using https://rancher.localhost/ and navigate to your cluster.

It may take some time, but in the end you should see that all the workloads have the Active status. In case there is a problem with some workload, you can check the logs to see what the issue is.

../../_images/rancher_workloads.png

You can open the Artemis application using the link https://artemis-app.artemis.rancher.localhost/

You will get the same “Connection is not private” issue as when opening https://rancher.localhost/. As said before, this is because a self-signed certificate is used, and it is safe to proceed.

It takes several minutes for the application to start. If you get a “Bad Gateway” error, the application may not have started yet. Wait several minutes; if the issue persists or another one appears, check the pod logs (described in the next chapter).

Check out the Logs

Open the workload whose logs you need to check. There is a list of pods. Open the menu of one of the pods and select View Logs. A pop-up with the logs will open.

../../_images/rancher_logs.png
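
The logs are also accessible via kubectl. This is a sketch that assumes the Artemis workload is the artemis-app deployment in the artemis namespace; check the actual workload name in Rancher first:

# deployment name artemis-app is an assumption; adjust it to your workload
kubectl logs deployment/artemis-app --namespace artemis --follow --kubeconfig <path-to-kubeconfig-file>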

Troubleshooting

If the Artemis application is successfully deployed but an error occurs while trying to run the application, the reason is most likely related to the Artemis yml configuration files. One common error is a missing server.url variable. You can fix it by adding the variable as an environment variable to the Artemis deployment, as sketched below.
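
As a sketch, the corresponding entry in the container spec of src/main/kubernetes/artemis/deployment/artemis-deployment.yml could look like this; SERVER_URL maps to the server.url property via Spring Boot’s relaxed binding, and the value below assumes the application URL used in this guide:

env:
   # value assumes the local Rancher setup from this guide; adjust for your environment
   - name: SERVER_URL
     value: https://artemis-app.artemis.rancher.localhost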

Set Additional Environment Variables

This chapter explains how you can set environment variables for your deployment in case you need them.

Open the Workloads view on Rancher

../../_images/rancher_workloads.png

Enter the details page of the Artemis workload and then select Edit in the three-dot menu

../../_images/workload_edit.png

Expand the Environment Variables menu. After pressing the Add Variable button, two fields will appear where you can enter the variable key and value.

../../_images/workload_set_environment_variable.png

You can add as many variables as you want. Once you are done, save your changes, which will trigger a redeploy of the application.
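
Alternatively, environment variables can be set with kubectl instead of the Rancher UI; again a sketch, assuming the artemis-app deployment in the artemis namespace:

# deployment name artemis-app and the URL are assumptions from this guide
kubectl set env deployment/artemis-app SERVER_URL=https://artemis-app.artemis.rancher.localhost --namespace artemis --kubeconfig <path-to-kubeconfig-file>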