
Installing ADx Core

Having installed and started the conversion service, you can follow this procedure to install ADx Core. When you finish this procedure, you can start using ADx.

The installation process is almost the same as for the conversion service. The difference is that this time you don't have to enter the conversion database details in the configuration file, because you already set them up with the conversion service. Instead, you simply link to the conversion service from the runtime properties.

Prerequisites

Installation

Follow this tutorial to install ADx from a package. Make sure to meet the prerequisites first.

  1. Unzip your package to a directory of your choice.

    When done, the package contents are extracted to a sub-folder of that directory (adx-deployment-package or similar), which contains the deployment scripts and template files used in the following steps.

  2. Open the terminal from the adx-deployment-package folder mentioned above and run the following commands:

    mkdir ../additional-libraries
    
    mkdir ../license
    
    cp example-environment.sh ../environment.sh
    
    cp example-installation-settings.yaml ../installation-settings.yaml
    

    As a result, you have now created the license and additional-libraries folders next to the adx-deployment-package folder, and copied the template files for the installation settings and the environment script.

    Note that you will only need the environment script if the Java directory is not yet added to the PATH on your machine. If that's the case, see the related documentation for more information.

  3. Put the license file provided to you in the license folder. If the file is provided as a .zip archive (or similar), unzip it into this folder. Example: license/example-license.sigxml.glf.

  4. Add the database driver to the additional-libraries folder, as explained below.

    1. First, you need to download the correct driver. Knowing the database type used by ADx in your organization, download one of the drivers below:

      | Database   | Driver download page |
      | ---------- | -------------------- |
      | PostgreSQL | Download. Important: PostgreSQL has been tested with driver version 42.2.6. Please use this version or newer. |
      | Oracle     | Download |
      | MSSQL      | Download. Important: MSSQL has been tested with driver version 7.2.2 and is reported to work with versions 6.3.2 and newer. Do not use older drivers. |
      | DB2        | Download |
    2. Now, add your driver to the additional-libraries folder you created earlier. When the download is complete, simply copy the driver file into this folder.

  5. Now, some configuration will be necessary. Open the installation-settings.yaml file in a text editor of your choice and edit the sections mentioned below (and only these sections).

Installation Path

When installing, you will need write access to create the path mentioned below.

You can either use the default path or set the installationPath: property to the installation directory of your choice. If you do, remember to provide an absolute path in the configuration file. As a result, ADx will be installed in the provided directory. All relative paths used in the configuration file are resolved from the installation directory.

The result should look as follows:

# The directory where the application will be installed. Note that files from previous installations may be overridden.
# Note that there are other (relative) paths specified in this file, which will be resolved relative to this installation path.
# For example, if path is "/opt/braintribe/adx/tribefire", the (default) log files directory is "/opt/braintribe/adx/logs".
installationPath: "/opt/braintribe/adx/tribefire"
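
If the directory does not exist yet, you can create it up front and make sure the installing user can write to it. A minimal sketch, assuming the default path and a dedicated adx service user (both are assumptions; adapt to your environment):

  # Create the installation directory and hand it over to the user that will run the installer
  sudo mkdir -p /opt/braintribe/adx
  sudo chown -R adx:adx /opt/braintribe/adx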

Ports

The ports determine the URL under which ADx will be available when installed (as explained by the comments). For example http://hostname:8080 or https://hostname:8443.

Initially, this section looks as follows:

# ** Ports **
# The following settings are used to configure ports where server listens for requests.
# If multiple instances are installed on the same machine, each instance should use its own port range.
# For example, instance 1 uses 8080,8443,8009,8005 and instance 2 uses 9080,9443,9009,9005, etc.

# The HTTP port where the server listens for requests. If set to e.g. 8080, HTTP (base) url will be http://[host]:8080/.
httpPort: 8080
# The HTTPS port where the server listens for requests. If set to e.g. 8443, HTTPS (base) url will be https://[host]:8443/.
httpsPort: 8443
# The AJP connector port (see https://tomcat.apache.org/tomcat-9.0-doc/config/ajp.html)
ajpPort: 8009
# The Tomcat server port (e.g. for shutdown commands)
serverPort: 8005

In fact, if you don't want to change the ports, you don't have to. If you do, simply change the values of httpPort:, httpsPort:, ajpPort:, and serverPort: to the values you want.
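
If you are not sure whether the chosen ports are free on the machine, you can check before installing. A quick sketch, assuming a Linux host with ss available (netstat works similarly):

  # Lists any process already listening on the default ADx ports; no output means the ports are free
  ss -ltn | grep -E '8080|8443|8009|8005'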

That's it - let's move on to the next part.

HTTPS/SSL

HTTPS/SSL settings control how ADx should be accessed once installed. If you don't need HTTPS access, you can skip this section entirely, and move on to Resources. Otherwise, follow the below procedure.

Initially, this section looks as follows:

# ** HTTPS/SSL **
# Whether or not to enforce HTTPS, i.e. redirect HTTP to HTTPS
#enforceHttps: false

# The path to the SSL keystore file, PKCS 12 format (see https://en.wikipedia.org/wiki/PKCS_12).
# If not set, the default keystore with a self-signed certificate will be used.
#sslKeystoreFile: !com.braintribe.model.resource.FileResource
#  path: "/path/to/keystore.p12"

# One can use openssl to generate a (self signed) keystore file:
#   openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt
#   openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in certificate.crt -inkey privateKey.key -out keystore.pkcs12 -name "tribefire"
# For more information see https://www.openssl.org/.

# The password for the keystore file (see above). Replace "[ENCRYPTED_PASSWORD]" with the encrypted password, e.g. "${decrypt('HMuN/VXo5+L0vVQzuJe7bAOiBmeKzWluP+POb7zjkcLCnzgawUfWmZAIu9eIOfVAzEQn6Q==')}".
#sslKeystorePassword: "${decrypt('[ENCRYPTED_PASSWORD]')}"

# If the keystore file was generated without a password, set the password to empty string.
#sslKeystorePassword: ""

  1. Set enforceHttps: to true:

    # Whether or not to enforce HTTPS, i.e. redirect HTTP to HTTPS
    enforceHttps: true
    
  2. Now, you can either skip the path section entirely to use the default keystore, or generate your own self-signed keystore file. This tutorial shows the second option. If you don't want to do it, move on to Resources.

    1. Remove the # comment marks in front of sslKeystoreFile: and path:. You should get the following result:

      # If not set, the default keystore with a self-signed certificate will be used.
      sslKeystoreFile: !com.braintribe.model.resource.FileResource
        path: "/path/to/keystore.p12"
      
    2. Now we need to generate the keystore. First, let's create a folder where it will be stored. In this tutorial, it's the SSL folder under Home/Documents.

    3. Open the newly created folder and run the terminal from it.

    4. Execute $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt. You will be prompted for some data - this is expected. This command generates the private key and certificate:

      We will need those files to create the keystore.

    5. Execute openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in certificate.crt -inkey privateKey.key -out keystore.pkcs12 -name "tribefire". This generates the keystore.pkcs12 file. You will be prompted for password in the process - remember it, you will need it later!

    6. Now that you have the keystore, you can add its path to the configuration file:

      # If not set, the default keystore with a self-signed certificate will be used.
      sslKeystoreFile: !com.braintribe.model.resource.FileResource
        path: "/home/user/Documents/SSL/keystore.p12"
      
    7. Finally, we need to encrypt the keystore password in order to provide it in the configuration file. Let's go back to the adx-deployment-package directory, where you will find the encrypt.sh script.

    8. Open the terminal and run ./encrypt.sh --value mypassword. You should get the encrypted password as a response:

      VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==
      

      Copy this response and paste it into the sslKeystorePassword property, in place of [ENCRYPTED_PASSWORD]:

      sslKeystorePassword: "${decrypt('VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==')}"
      

That's it - you have now configured HTTPS access for ADx. When installed with these settings, ADx will always redirect to HTTPS, using the keystore you generated.
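
For reference, the whole sequence from this section can also be run back to back. A sketch, assuming the keystore is created under ~/Documents/SSL and that encrypt.sh is run from the adx-deployment-package folder:

  mkdir -p ~/Documents/SSL && cd ~/Documents/SSL
  # Generate a self-signed private key and certificate (you will be prompted for certificate data)
  openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt
  # Bundle them into a PKCS 12 keystore (you will be prompted for the keystore password)
  openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in certificate.crt -inkey privateKey.key -out keystore.pkcs12 -name "tribefire"
  # Encrypt the keystore password for use in installation-settings.yaml
  cd /path/to/adx-deployment-package && ./encrypt.sh --value mypassword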

HTTP Security

You can edit the trustedDomain value to restrict access to this node from other domains. By default, cross-domain access is permitted for all hosts.

# ** HTTP Security **
# The Cross Domain Policy is used to define cross-domain related security settings.
# For example, if there are other web applications which want to embed this application's content in an iframe, e.g. the Web Reader,
# these applications have to be in the trustedDomain configured below. Examples:
#   "*" - permit cross-domain access from any host
#   "*.example.com" - permit cross-domain access from respective hosts on example.com domain
#   "" - disable cross-domain access
# Note that this setting has nothing to do with normal clients connecting to this application or other applications connecting via API.
# For further information read e.g. the HTTP Security section at https://developer.mozilla.org/en-US/docs/Web/HTTP.
crossDomainPolicy: !com.braintribe.model.platform.setup.api.cdp.SimpleCrossDomainPolicy
  trustedDomain: "*"
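
For example, to permit cross-domain access only from hosts on your own domain (example.com is a placeholder here), the section would look like this:

  crossDomainPolicy: !com.braintribe.model.platform.setup.api.cdp.SimpleCrossDomainPolicy
    trustedDomain: "*.example.com"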

Resources

Unless instructed otherwise, you can simply use the default JVM values as shown below:

# ** Resources **
# The initial heap size of the Java Virtual Machine
initialHeapSize: 512m
# The maximum heap size of the Java Virtual Machine
maxHeapSize: 4096m
# The maximum number of connections (or -1 for no limit)
maxConnections: -1
# The maximum number of request worker threads.
maxThreads: 4000

That's it! You can move on to the next section.

Logging

In this section, you need to provide the directory for the log files. The easiest course of action is to use the default settings:

# ** Logging **
# The path of the directory where log files will be written to. You can simply use the provided default location, which is resolved relative to the 'installationPath:' property. '&logFilesDir' specifies an anchor, which makes it possible to reference the value below, see setting 'checkWriteAccessForDirs'.
logFilesDir: &logFilesDir "../logs"

If you do, this section should look as follows:

# ** Logging **
# The path of the directory where log files will be written to. You can simply use the provided default location, which is resolved relative to the 'installationPath:' property. '&logFilesDir' specifies an anchor, which makes it possible to reference the value below, see setting 'checkWriteAccessForDirs'.
logFilesDir: &logFilesDir "../logs"

# Log level for console output. Examples: SEVERE,WARNING,INFO,FINE,FINEST
consoleLogLevel: "INFO"

# Log level for file output. Examples: SEVERE,WARNING,INFO,FINE,FINEST
logFilesLogLevel: "FINE"

# Enables log rotation based on file size (in bytes). If the specified maximum is exceeded, the current log file will be archived and a new one will be created.
logFilesMaxSize: 15000000

# Enables log rotation based on a Cron expression. For more info see https://en.wikipedia.org/wiki/Cron.
# Rotate every midnight
logFilesCronRotate: "0 0 * * *"

# Maximum number of archived log files (see also log rotation settings above). If the maximum is exceeded, the oldest archived log files gets deleted.
logFilesMaxCount: 10

That's it! You can move on to the next section.

Temporary files

Similarly as with the logs, you can simply use the default location for temporary files:

# ** Temporary Files **
# The path of the directory where temporary files will be written to. You can simply use the provided default location, which is resolved relative to the 'installationPath:' property. '&tempDir' specifies an anchor, which makes it possible to reference the value below, see setting 'checkWriteAccessForDirs'.
tempDir: &tempDir "../temp"

That's it! You can move on to the next section.

Admin User

In this section, you need to enter the credentials of the admin user.

  1. As with the keystore, you need to encrypt the admin password. To do so, open the unzipped package, then open the terminal and run ./encrypt.sh --value mypassword. You should get the encrypted password as a response:

    VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==
    

    Copy this response (not the one above but the one you get) and paste it into the password: property. The file should then look as follows:

    # ** Admin User **
    # Configures the credentials for the default admin user.
    # If the user doesn't exist yet, it will be created and the password will be set as specified here.
    # If the user already exists, nothing will be done, i.e. its password will NOT be changed!
    predefinedComponents:
      ADMIN_USER: !com.braintribe.model.user.User
        # Admin user name
        name: "admin"
        # Admin user password. Replace "[ENCRYPTED_PASSWORD]" with the encrypted password, e.g. "${decrypt('HMuN/VXo5+L0vVQzuJe7bAOiBmeKzWluP+POb7zjkcLCnzgawUfWmZAIu9eIOfVAzEQn6Q==')}".
        password: "${decrypt('VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==')}"
    

That's it! You can move on to the next section.

ActiveMQ Settings

Each node provides embedded ActiveMQ communication. You don't need to change these settings unless you change the ActiveMQ runtime properties. If you do, adapt hostAddress accordingly.

# ** ActiveMQ settings **
# Configures the ActiveMQ connection. Since each node provides an embedded ActiveMQ server, the URL just points to localhost.
# Unless a custom ActiveMQ server port is configured below (see AMQ_SERVER_PORT), there is no need to change these settings.
  MQ: !com.braintribe.model.messaging.jms.JmsActiveMqConnection
    name: "ActiveMQ Connection"
    hostAddress: "tcp://localhost:61616"

System Database

In this section, you need to put the information describing your system database (user name, encrypted password, database driver, database URL) into the properties found under connectionDescriptor:. Other properties (apart from the name) must be left with the default values. Let's focus on the section in question:

# Connection settings ( '&systemDatabaseConnection' specifies an anchor, which makes it possible to re-use the connection settings below)
    connectionDescriptor: &systemDatabaseConnection !com.braintribe.model.deployment.database.connector.GenericDatabaseConnectionDescriptor
      # JDBC Driver
      #   Postgres: "org.postgresql.Driver"
      #   Oracle: "oracle.jdbc.OracleDriver"
      #   MSSQL: "com.microsoft.sqlserver.jdbc.SQLServerDriver"
      driver: "[JDBC_DRIVER]"
      # JDBC URL
      #   Postgres: "jdbc:postgresql://localhost:5432/system-db"
      #   Oracle: " jdbc:oracle:thin:@localhost:1521:orcl12c"
      #   MSSQL: "jdbc:sqlserver://localhost:5433;databaseName=system-db;"
      url: "[JDBC_URL]"
      # Database user name
      user: "[DATABASE_USER]"
      # Database user password. Replace "[ENCRYPTED_PASSWORD]" with the encrypted password, e.g. "${decrypt('HMuN/VXo5+L0vVQzuJe7bAOiBmeKzWluP+POb7zjkcLCnzgawUfWmZAIu9eIOfVAzEQn6Q==')}".
      password: "${decrypt('[ENCRYPTED_PASSWORD]')}"

driver:

driver: will be different depending on the database type. Copy the driver property from the table below:

| Database   | driver: value |
| ---------- | ------------- |
| PostgreSQL | "org.postgresql.Driver" (default) |
| Oracle     | "oracle.jdbc.OracleDriver" |
| Oracle 8i  | "oracle.jdbc.driver.OracleDriver" |
| MSSQL      | "com.microsoft.sqlserver.jdbc.SQLServerDriver" |
| MSSQL JTDS | "net.sourceforge.jtds.jdbc.Driver" |
| MySQL      | "com.mysql.jdbc.Driver" |
| DB2        | "com.ibm.db2.jdbc.net.DB2Driver" |

url:

url: will be different depending on the database type. Use the syntax as explained in the table below:

| Database | url: syntax | Example |
| -------- | ----------- | ------- |
| PostgreSQL | "jdbc:postgresql://hostname:port/databaseName". Replace hostname with the actual host name (for example localhost), port with the database port (for example 5432), and databaseName with the name of the database (for example system-db), then copy-paste into the url: property. | "jdbc:postgresql://localhost:5432/system-db" |
| Oracle with service name | "jdbc:oracle:thin:@hostname:port/serviceName". Replace hostname with the actual host name (for example localhost), port with the database port (for example 5432), and serviceName with the database service name (TNS alias, for example system-db), then copy-paste into the url: property. | "jdbc:oracle:thin:@localhost:5432/system-db" |
| Oracle with SID | "jdbc:oracle:thin:@hostname:port:SID". Replace hostname with the actual host name (for example localhost), port with the database port (for example 5432), and SID with the database ID (for example system-db), then copy-paste into the url: property. | "jdbc:oracle:thin:@localhost:5432:system-db" |
| MSSQL | "jdbc:sqlserver://hostname:port;databaseName=name". Replace hostname with the actual host name (for example localhost), port with the database port (for example 5432), and name with the name of the database (for example system-db), then copy-paste into the url: property. | "jdbc:sqlserver://localhost:5432;databaseName=system-db" |
| MySQL | "jdbc:mysql://hostname:port/databaseName". Replace hostname with the actual host name (for example localhost), port with the database port (for example 5432), and databaseName with the name of the database (for example system-db), then copy-paste into the url: property. | "jdbc:mysql://localhost:5432/system-db" |
| DB2 | "jdbc:db2://hostname:port/databaseName". Replace hostname with the actual host name (for example localhost), port with the database port (for example 5432), and databaseName with the name of the database (for example system-db), then copy-paste into the url: property. | "jdbc:db2://localhost:5432/system-db" |

user:

Simply enter the user name with access to the database.

password:

As with all passwords, you need to encrypt the database user password. To do so, open the unzipped package, then open the terminal and run ./encrypt.sh --value mypassword. You should get the encrypted password as a response:

```
VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==
```

Copy this response and paste it into the password: property.
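
Putting the four values together, a filled-in connectionDescriptor for a PostgreSQL system database might look like the sketch below (host, port, database name, user, and encrypted password are example values only):

    connectionDescriptor: &systemDatabaseConnection !com.braintribe.model.deployment.database.connector.GenericDatabaseConnectionDescriptor
      # JDBC Driver
      driver: "org.postgresql.Driver"
      # JDBC URL
      url: "jdbc:postgresql://localhost:5432/system-db"
      # Database user name
      user: "adx-db-user"
      # Database user password, encrypted with encrypt.sh
      password: "${decrypt('VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==')}"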

That's it! You can now proceed to the next section of the configuration file.

Connection Pools

As stated by the configuration file comments, you don't need to change these properties unless instructed otherwise.

Project Settings

The project descriptor in this section defines the name and the display name of the Tomcat service. You don't need to change these settings unless instructed otherwise.

# ** Project Settings **
# Configures the project / product name. Usually these settings don't have to be changed. The names are used e.g. for the Tomcat service name, see below.
projectDescriptor:
  name: "adx"
  displayName: "ADx"

Tomcat Settings

The Tomcat service descriptor in this section defines the name of the service user used when you run the application as a Tomcat service. You don't need to change these settings unless instructed otherwise.

# ** Tomcat Service **
# One can use script /tribefire/runtime/host/bin/tribefire-service.sh to run the application as a Tomcat service.
tomcatServiceDescriptor:
  # Specifies the name of the service user
  user: "service-user"

Runtime Properties

Platform Settings

ADx is powered by the Tribefire platform, configured via the below settings.

  ##############################################################################
  # Platform settings
  ##############################################################################
  # This enables support for encrypted passwords in this configuration file (don't change!)
  TRIBEFIRE_SECURED_ENVIRONMENT: "true"

  # Specifies how long a user session should remain active when there is no activity on the session.
  # After the specified inactive time has passed (i.e. no request with the corresponding session ID has been received by the server),
  # the session is flagged as inactive and consequently removed by a periodic cleanup process.
  # The time span can be specified as a human-readable string, using numbers and the time unit, as in 12h, 30m, 3600s, etc.
  TRIBEFIRE_USER_SESSIONS_MAX_IDLE_TIME: "30m"

  # When this is set to true, the login dialog will offer an option to stay signed-in after a browser restart.
  # If this is set to false, the user session will always be closed when the browser is closed.
  # This is achieved by using a session cookie that stores the user's session ID until the browser is closed.  
  TRIBEFIRE_RUNTIME_OFFER_STAYSIGNED: "true"

  # The public tribefire services URL, i.e. the URL through which clients and other services can reach this service.
  # This is e.g. needed for callbacks, for example when service A invokes an (asynchronous) job on B and passes a callback URL,
  # through which B can notify A when the job is done.
  #
  # In many cases this setting can just be set to "https://(hostname of this machine):(https port configured above)/tribefire-services".
  # A typical use case where the URL is different, is a clustered service, i.e. multiple nodes behind a load balancer.
  # In that case the load balancer URL has to be specified here.
  #
  # Make sure that this public URL is reachable from other machines, e.g. verify that configured ports are opened in the firewall.
  TRIBEFIRE_PUBLIC_SERVICES_URL: "https://[PUBLIC_HOST]:[PUBLIC_PORT]/tribefire-services"

  # Indicates whether or not this node is part of a cluster.
  # (If it is, also ActiveMQ settings must be configured, see above.)
  TRIBEFIRE_IS_CLUSTERED: "false"

TRIBEFIRE_PUBLIC_SERVICES_URL provides the URL on which this service can be reached. When using multiple ADx Core servers, provide the load balancer URL.

> Important: `TRIBEFIRE_PUBLIC_SERVICES_URL` must be reachable from the outside. Make sure it is not blocked by a firewall. 
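
A simple way to verify reachability from another machine is a plain request against the configured URL, for example with curl (the host and port below are placeholders; -k skips certificate validation for self-signed certificates):

  curl -k -I https://adx.example.com:8443/tribefire-services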

TRIBEFIRE_IS_CLUSTERED must be set to true if this node is going to be part of a cluster. In this case, don't forget to add it to the list of hosts in AMQ_CLUSTER_NODES - see ActiveMQ Settings.

ActiveMQ Settings

Enter a full list of ADx nodes in AMQ_CLUSTER_NODES. Optionally, you can also set up multicast communication. If you change the host or port, you also need to adapt the host address in the ActiveMQ Settings section of the installation settings (see above).

  ##############################################################################
  # ActiveMQ settings
  ##############################################################################
  # Each node provides its own, embedded messaging service (based on ActiveMQ).
  # The messaging service enables nodes to communicate with each other, e.g. to send notification events from one node to all others.

  # The ActiveMQ server port.
  AMQ_SERVER_PORT: "61616"

  # A comma-separated list of host names (or IP addresses) of ActiveMQ nodes that should form a cluster.
  # Each address may also include a port (separated by a colon). If no port is specified, it defaults to the port configured in setting AMQ_SERVER_PORT.
  # Example: "adx1.example.com,adx2.example.com,adx3.example.com:61617".
  AMQ_CLUSTER_NODES: "localhost"

  # As an alternative to setting AMQ_CLUSTER_NODES ActiveMQ also supports connection to remote nodes via multicast transport.
  # This feature is disabled by default. Before enabling it ensure that the network (and firewalls, if any) permit multicasts on port 6155.
  #
  # The group name shared between all adx cluster nodes. Setting this value activates the multicast transport.
  #AMQ_DISCOVERY_MULTICAST_GROUP: "adx"
  # The network interface through which multicasts are sent, e.g. "eth0".
  #AMQ_DISCOVERY_MULTICAST_NETWORK_INTERFACE: "[NETWORK_INTERFACE]"
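
For a clustered setup, this setting works together with TRIBEFIRE_IS_CLUSTERED from the platform settings above. A sketch for three hypothetical nodes (host names are placeholders; each property stays in its own section of the runtime properties):

  TRIBEFIRE_IS_CLUSTERED: "true"
  AMQ_CLUSTER_NODES: "adx1.example.com,adx2.example.com,adx3.example.com:61617"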

Conversion Settings

This is where you decide if the conversion should be local or remote (recommended for production setup) and provide information necessary for this node to connect to a remote conversion service.


##############################################################################
# Conversion settings
##############################################################################
# Enables local conversion service. If set to false, the remote conversion service needs to be initialized.
CONV_INITIALIZE: false
# Enables remote conversion service.
# If enabled the following three properties need to be set as well.
DOCUMENTS_REMOTE_CONVERSION: true
# The URL to remote conversion service.
DOCUMENTS_CONVERSION_TFS_URL: "https://[CONV_HOST]:[CONV_PORT]/tribefire-services"
# The name of remote conversion service user. Add the name of the standard conversion user here,
# i.e. the name set in property CONV_STANDARD_USER_NAME in conversion settings.
DOCUMENTS_CONVERSION_USERNAME: "tf-conversion"
# The password of remote conversion service user. Add the password of the standard conversion user here,
# i.e. the password set in property CONV_STANDARD_USER_PASSWORD in conversion settings.
DOCUMENTS_CONVERSION_PASSWORD: "${decrypt('[ENCRYPTED_PASSWORD]')}"

DOCUMENTS_CONVERSION_TFS_URL: - provide the hostname and port of the conversion service installed previously. When using multiple conversion servers, provide the load balancer URL.

> Important: `DOCUMENTS_CONVERSION_TFS_URL:` must be reachable from the outside. Make sure it is not blocked by a firewall. 

DOCUMENTS_CONVERSION_USERNAME: - provide a user name set up on the conversion service. Typically it is the default user set during conversion installation in CONV_STANDARD_USER_NAME.

DOCUMENTS_CONVERSION_PASSWORD: - provide the encrypted password of the above user.
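
As an example, a node using a remote conversion service on conversion.example.com (a placeholder host; the user name and encrypted password must match your conversion installation) would end up with settings along these lines:

CONV_INITIALIZE: false
DOCUMENTS_REMOTE_CONVERSION: true
DOCUMENTS_CONVERSION_TFS_URL: "https://conversion.example.com:8443/tribefire-services"
DOCUMENTS_CONVERSION_USERNAME: "tf-conversion"
DOCUMENTS_CONVERSION_PASSWORD: "${decrypt('VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==')}"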

Fulltext Settings

This section contains the settings related to the Elasticsearch (fulltext) configuration.

  ##############################################################################
  # Fulltext settings
  ##############################################################################
  # Whether an elasticsearch service should be started together with this ADx installation.
  ELASTIC_RUN_SERVICE: "true"
  # Whether to enable the default elasticsearch access.
  #ELASTIC_CREATE_DEMO_ACCESS: "false"
  # The base directory to store elasticsearch indices.
  ELASTIC_SERVICE_DATA_PATH: '${TRIBEFIRE_INSTALLATION_ROOT_DIR}/../elastic'
  # The elasticsearch service port (used for intercommunication between the nodes).
  ELASTIC_PORT: 9300
  # A comma-separated list of host names (or IP addresses) of the nodes that should form a cluster.
  # Each address may also include a port (separated by a colon). If no port is specified, it defaults to the port configured in setting ELASTIC_PORT.
  # Example: "adx1.example.com,adx2.example.com,adx3.example.com:9301"
  # By default this setting just points to TRIBEFIRE_CLUSTER_NODES and usually there is no need to change this.
  # Only exception is when one runs multiple instances on the same host (with different elasticsearch ports).
  ELASTIC_CLUSTER_NODES: "${TRIBEFIRE_CLUSTER_NODES}"

Repository Settings

In this part of the settings file, you will notice a number of ADX_DEFAULT properties. They relate to ADx standard, CMIS, and Documentum repositories (and their cache configuration). If you add these settings now (which is not mandatory, but highly recommended), your newly created repositories will have those default properties, and you won't have to add them manually in ADx.

  ##############################################################################
  # Standard Repository Default Settings - uncomment and adapt values if needed
  # (If you decide to use these settings, you probably also want to configure the default Cache repository settings below.)
  ##############################################################################
  # Whether or not to create a default repository based on the default settings below.
  # The creation of that default repository simplifies the post installation health checks
  # (because the checks can be run directly against that repository without having to create another one first).
  # This approach also verifies that the default settings are correct.
  #ADX_INIT_DEFAULT_REPOSITORY: "true"
  # The Type of the default database. Can be one of the following: Oracle, MSSQL, MySQL, PostgreSQL, DB2.
  #ADX_DEFAULT_DB_TYPE: "[DATABASE_TYPE]"
  # The name of the default database. When using Oracle database, enter the SID.
  #ADX_DEFAULT_DB_NAME: "[DATABASE_NAME]"
  # The hostname/ip of the default database.
  #ADX_DEFAULT_DB_HOST: "[DATABASE_HOST]"
  # The port of the default database.
  #ADX_DEFAULT_DB_PORT: "[DATABASE_PORT]"
  # The username for authentication with the default DB.
  #ADX_DEFAULT_DB_USER: "[DATABASE_USER]"
  # The password for authentication with the default DB. This value has to be encrypted.
  #ADX_DEFAULT_DB_PASSWORD: "${decrypt('[ENCRYPTED_PASSWORD]')}"
  # The default content storage type.
  # Specifies whether to store resources on file system (-> "fs") or in the database (-> "db").
  # Note that this only affects storage of files. Metadata is always stored in the database.
  #ADX_DEFAULT_STORAGE_CONTENT_TYPE: "fs"
  # The default content storage path. (This setting only takes effect if ADX_DEFAULT_STORAGE_CONTENT_TYPE is set to 'fs'.)
  # In clustered environments this must point to a shared file system and all nodes must use the same folder.
  #ADX_DEFAULT_STORAGE_CONTENT_PATH: "${TRIBEFIRE_INSTALLATION_ROOT_DIR}/../repository-resources/content"

  ##############################################################################
  # CMIS Repository Default Settings - uncomment and adapt values if needed
  # (If you decide to use these settings, you probably also want to configure the default Cache repository settings below.)
  ##############################################################################
  # The default CMIS Service URL.
  #ADX_DEFAULT_CMIS_SERVICEURL: "https://[HOST]:[PORT]/emc-cmis/browser"
  # The default CMIS RepoID.
  #ADX_DEFAULT_CMIS_REPOID: "[REPOSITORY_ID]"
  # The default CMIS USER.
  #ADX_DEFAULT_CMIS_USER: "[USER]"
  # The default CMIS password.
  #ADX_DEFAULT_CMIS_PASSWORD: "${decrypt('[ENCRYPTED_PASSWORD]')}"

  ##############################################################################
  # Documentum Repository Default Settings - uncomment and adapt values if needed
  # (If you decide to use these settings, you probably also want to configure the default Cache repository settings below.)
  ##############################################################################
  # The default Documentum Service URL.
  #ADX_DEFAULT_DCTM_SERVICEURL: "https://[HOST]:[PORT]/emc-dfs/services"
  # The default Documentum RepoID.
  #ADX_DEFAULT_DCTM_REPOID: "[REPOSITORY_ID]"
  # The default Documentum USER.
  #ADX_DEFAULT_DCTM_USER: "[USER]"
  # The default Documentum password.
  #ADX_DEFAULT_DCTM_PASSWORD: "${decrypt('[ENCRYPTED_PASSWORD]')}"

  ##############################################################################
  # Cache Repository Default Settings - uncomment and adapt values if needed
  # (If you configured any of the default repository settings above, you probably also want to configure the default Cache repository settings.)
  ##############################################################################
  # The Type of the default cache database. Can be one of the following: Oracle, MSSQL, MySQL, PostgreSQL, DB2.
  #ADX_DEFAULT_CACHE_DB_TYPE: "${ADX_DEFAULT_DB_TYPE}"
  # The name of the default cache database. When using Oracle database, enter the SID.
  #ADX_DEFAULT_CACHE_DB_NAME: "${ADX_DEFAULT_DB_NAME}"
  # The hostname/ip of the default cache database.
  #ADX_DEFAULT_CACHE_DB_HOST: "${ADX_DEFAULT_DB_HOST}"
  # The port of the default cache database.
  #ADX_DEFAULT_CACHE_DB_PORT: "${ADX_DEFAULT_DB_PORT}"
  # The username for authentication with the default cache DB.
  #ADX_DEFAULT_CACHE_DB_USER: "${ADX_DEFAULT_DB_USER}"
  # The password for authentication with the default cache DB. This value has to be encrypted.
  #ADX_DEFAULT_CACHE_DB_PASSWORD: "${ADX_DEFAULT_DB_PASSWORD}"
  # The default cache storage type.
  # Specifies whether to store resources on file system (-> "fs") or in the database (-> "db").
  # Note that this only affects storage of files. Metadata is always stored in the database.
  #ADX_DEFAULT_STORAGE_CACHE_TYPE: "fs"
  # The default storage path. (This setting only takes effect if ADX_DEFAULT_STORAGE_CACHE_TYPE is set to 'fs'.)
  # In clustered environments this must point to a shared file system and all nodes must use the same folder.
  #ADX_DEFAULT_STORAGE_CACHE_PATH: "${TRIBEFIRE_INSTALLATION_ROOT_DIR}/../repository-resources/cache"

  ##############################################################################
  # Cloud Repository Default Settings
  #
  # Content can also be stored in the cloud, e.g Amazon S3.
  # Since this is a less common use case for a default repository, the properties are not listed in these example settings.
  # However, they can be looked up in the documentation:
  # https://adx.tribefire.com/tribefire.adx.phoenix/adx-doc/Installation/runtime_properties.html#adx-cloud-storage-properties
  ##############################################################################

  ##############################################################################
  # Repository Connectivity Default Settings
  # These settings define the default connection privileges for newly created repositories
  ##############################################################################
  # Sets the "Connect Default" property in the Access Control Configuration. Possible values are GRANT (default) and DENY.
  # ADX_DEFAULT_REPOSITORY_CONNECTIVITY_PERMISSION: "GRANT"
  # A comma-separated list of the roles that should automatically get CONNECT access granted.
  # For example: "role1,role2"
  # ADX_DEFAULT_REPOSITORY_CONNECTIVITY_ROLES_GRANT: ""
  # A comma-separated list of the roles that should automatically get CONNECT access denied.
  # For example: "role3,role4"
  # ADX_DEFAULT_REPOSITORY_CONNECTIVITY_ROLES_DENY: ""
  ##############################################################################

These properties are there for your convenience. If you decide to use these settings, simply uncomment the properties (remove the # sign in front of the property name) and set the values. As a result, the properties you enter here will be set as the default values of your repositories when you create them in ADx, saving you manual work. You can even initialize repositories based on your default values upon installation. To do so, set ADX_INIT_DEFAULT_REPOSITORY: to true, and ADx will be installed with initial repositories based on your settings.

Important: If you set defaults on any repository (Standard, CMIS or Documentum), you also have to set cache repository defaults. When using Oracle as the repository database, you can only initialize a repository using the SID.
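
For illustration, a minimal set of uncommented defaults for a PostgreSQL standard repository, including the required cache defaults, might look like the sketch below (database name, host, and user are example values only; the cache defaults simply reuse the repository values, as in the template):

  ADX_INIT_DEFAULT_REPOSITORY: "true"
  ADX_DEFAULT_DB_TYPE: "PostgreSQL"
  ADX_DEFAULT_DB_NAME: "adx-repo-db"
  ADX_DEFAULT_DB_HOST: "localhost"
  ADX_DEFAULT_DB_PORT: "5432"
  ADX_DEFAULT_DB_USER: "adx-db-user"
  ADX_DEFAULT_DB_PASSWORD: "${decrypt('[ENCRYPTED_PASSWORD]')}"
  ADX_DEFAULT_STORAGE_CONTENT_TYPE: "fs"
  ADX_DEFAULT_STORAGE_CONTENT_PATH: "${TRIBEFIRE_INSTALLATION_ROOT_DIR}/../repository-resources/content"
  # Cache defaults are required as soon as any repository defaults are set
  ADX_DEFAULT_CACHE_DB_TYPE: "${ADX_DEFAULT_DB_TYPE}"
  ADX_DEFAULT_CACHE_DB_NAME: "${ADX_DEFAULT_DB_NAME}"
  ADX_DEFAULT_CACHE_DB_HOST: "${ADX_DEFAULT_DB_HOST}"
  ADX_DEFAULT_CACHE_DB_PORT: "${ADX_DEFAULT_DB_PORT}"
  ADX_DEFAULT_CACHE_DB_USER: "${ADX_DEFAULT_DB_USER}"
  ADX_DEFAULT_CACHE_DB_PASSWORD: "${ADX_DEFAULT_DB_PASSWORD}"
  ADX_DEFAULT_STORAGE_CACHE_TYPE: "fs"
  ADX_DEFAULT_STORAGE_CACHE_PATH: "${TRIBEFIRE_INSTALLATION_ROOT_DIR}/../repository-resources/cache"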

You can leave the other properties as is, unless instructed otherwise. For a full list of available properties, see Runtime Properties.

Retry Mechanism Settings

These properties allow you to override the default retry mechanism settings. If you don't change them, retries will simply run based on the below settings.

  ##############################################################################
  # ADx Conversion Jobs Retry Settings
  # This mechanism periodically checks for stale ADx jobs and revives them.
  # (The settings below are for now global settings, which means they can't be
  #  configured on repository level via the administration UI yet.)
  ##############################################################################
  # The interval between checks for stale ADx Content representation jobs.
  # Allowed units: year (y), month (m), day (d), hour (h), minute (min), second (s)
  ADX_DEFAULT_JOB_REVIVE_WORKER_CHECK_INTERVAL: "5 min"
  # Maximum inactivity period of a stale job before a retry.
  # Allowed units: year (y), month (m), day (d), hour (h), minute (min), second (s)
  ADX_DEFAULT_JOB_REVIVE_WORKER_MAX_INACTIVITY_BEFORE_RETRY: "60 min"
  # The maximum number of retries for a single job. If this number is reached,
  # there will be no further retries and the problem has to be analyzed and fixed manually.
  ADX_DEFAULT_JOB_REVIVE_WORKER_MAX_TRIES: "3"

Installation

Congratulations, you are now ready to install ADx!

  1. Go to the directory where you unzipped the package. Open adx-deployment-package.

  2. Open the terminal and run ./install.sh. If your installation settings file is not in the default location, adapt the paths and/or names accordingly (for example ./install.sh --settings /path/to/installation-settings.yaml --environment /path/to/environment.sh).

    If the installation fails, provide the full version of the package (including the -p suffix if it's in your package name) to the support team.

  3. ADx should now be installed in the directory specified in the configuration file. To start it, go to the runtime/host/bin folder inside the installation directory, then run ./tribefire-console-start.sh. Alternatively, you can start the server as a Linux service.

To stop the service, run ./tribefire-console-stop.sh.
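
Assuming the default installation path used in this tutorial, starting and stopping in console mode looks like this:

  cd /opt/braintribe/adx/tribefire/runtime/host/bin
  ./tribefire-console-start.sh   # start ADx in console mode
  ./tribefire-console-stop.sh    # stop it again later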

After start-up, ADx will be available in your browser under the host and port you configured (for example http://localhost:8080). Enjoy using ADx!

Post-installation

Having installed ADx, do the following to verify that the functionality works as expected:

  1. Log in to ADx.

  2. Open the landing page, then click Explore under ADx Admin Access (available under Service Domains). Admin Access opens.

  3. Create a new repository. If you used properties related to default repository settings upon installation, your default values should already be configured.

  4. Run a connection health check on the repository (click Health in the bottom menu, then Connection Check). The resulting report should be all green.

  5. Synchronize the repository (click Synchronize in the bottom menu).

  6. Activate the repository (click Activate in the bottom menu). This action enables the Content Access and brings all endpoints online.

  7. Run a deep health check on the repository (click Health in the bottom menu, then Deep Check). The resulting report should be all green. If the report is OK, you can now start using ADx!

What's next?

Having installed and deployed ADx, you can start using the provided functionality. After you log in, you can find all of your repositories (and their APIs) in the landing page, under Operations. Click Explore to open a repository. See the resources below for more information.

You can always run a number of health checks to make sure everything is running smoothly. For more information, see Running Conversion and Platform Health Checks. If you want to run checks on legacy endpoints, see Running Deep Health Checks on Legacy Endpoints.