
Installing Conversion Service

It is best to install the conversion service separately and then link to it from ADx Core using runtime properties. The conversion service can be resource-heavy, so running it separately helps performance.

Follow this tutorial to install the conversion service from a package. Make sure to meet the prerequisites first.

  1. Unzip your package to a directory of your choice.

    When done, your package is extracted to a sub-folder of this directory (conversion-deployment-package or similar), which contains the installation scripts and template files.

  2. Open the terminal from the conversion-deployment-package folder and run the following commands:

    mkdir ../additional-libraries
    
    mkdir ../license
    
    cp example-environment.sh ../environment.sh
    
    cp example-installation-settings.yaml ../installation-settings.yaml
    

    As a result, you have now created the license and additional-libraries folders next to the conversion-deployment-package folder, and copied the template files for the installation settings and the environment script.

    Note that you will only need the environment script if the Java directory is not yet added to the PATH on your machine. If that's the case, see the related documentation for more information.

  3. Put the license file provided to you in the license folder. If the file is provided as a .zip archive (or similar), unzip it into this folder. Example: license/example-license.sigxml.glf.

  4. Add the database driver to the additional-libraries folder, as explained below.

    1. First, download the correct driver. Depending on the database type used by the conversion service in your organization, download one of the drivers below:

      | Database | Driver download page |
      | --- | --- |
      | PostgreSQL | Download. Important: PostgreSQL has been tested with driver version 42.2.6. Use this version or newer. |
      | Oracle | Download |
      | MSSQL | Download. Important: MSSQL has been tested with driver version 7.2.2 and is reported to work with versions 6.3.2 and newer. Do not use older drivers. |
      | DB2 | Download |
    2. Now, copy the driver .jar file into the additional-libraries folder you created earlier.

  5. Now, some configuration will be necessary. Open the installation-settings.yaml file in a text editor of your choice and edit the sections mentioned below (and only these sections).
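The folder preparation from steps 1-2 can be replayed end-to-end as a script. This is a sketch: the mktemp and touch lines only simulate the unzipped package, and the file names are taken from the example package above.

```shell
# Simulate the unzipped package in a throwaway directory
# (in a real installation, $base is the directory you unzipped to).
set -e
base=$(mktemp -d)
mkdir "$base/conversion-deployment-package"
touch "$base/conversion-deployment-package/example-environment.sh"
touch "$base/conversion-deployment-package/example-installation-settings.yaml"

# Step 2: create the sibling folders and copy the template files
cd "$base/conversion-deployment-package"
mkdir ../additional-libraries
mkdir ../license
cp example-environment.sh ../environment.sh
cp example-installation-settings.yaml ../installation-settings.yaml

# The parent directory now contains: additional-libraries,
# conversion-deployment-package, environment.sh,
# installation-settings.yaml, license
ls "$base"
```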

Installation Path

When installing, you will need write access to create the below-mentioned path.

You can either use the default path or set the installationPath: property to the installation directory of your choice. If you do, remember to provide an absolute path in the configuration file. As a result, the conversion service will be installed in the provided directory. All relative paths used in the configuration file are resolved from the installation directory.

The result should look as follows:

# The directory where the application will be installed. Note that files from previous installations may be overridden.
# Note that there are other (relative) paths specified in this file, which will be resolved relative to this installation path.
# For example, if path is "/opt/braintribe/conversion/tribefire", the (default) log files directory is "/opt/braintribe/conversion/logs".
installationPath: "/opt/braintribe/conversion/tribefire"

Ports

The ports determine the URL under which the conversion service will be available when installed (as explained by the comments). For example http://hostname:8080 or https://hostname:8443.

Initially, this section looks as follows:

# ** Ports **
# The following settings are used to configure ports where server listens for requests.
# If multiple instances are installed on the same machine, each instance should use its own port range.
# For example, instance 1 uses 8080,8443,8009,8005 and instance 2 uses 9080,9443,9009,9005, etc.

# The HTTP port where the server listens for requests. If set to e.g. 8080, HTTP (base) url will be http://[host]:8080/.
httpPort: 8080
# The HTTPS port where the server listens for requests. If set to e.g. 8443, HTTPS (base) url will be https://[host]:8443/.
httpsPort: 8443
# The AJP connector port (see https://tomcat.apache.org/tomcat-9.0-doc/config/ajp.html)
ajpPort: 8009
# The Tomcat server port (e.g. for shutdown commands)
serverPort: 8005

If you don't want to change the ports, you can keep the defaults. Otherwise, simply change the values of httpPort:, httpsPort:, ajpPort:, and serverPort: to the values you want.
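For example, a hypothetical second instance on the same machine could use the next port range, as suggested by the comment above:

```yaml
# Example: port range for a second instance on the same machine
httpPort: 9080
httpsPort: 9443
ajpPort: 9009
serverPort: 9005
```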

That's it - let's move on to the next part.

HTTPS/SSL

HTTPS/SSL settings control how the conversion service should be accessed once installed. If you don't need HTTPS access, you can skip this section entirely, and move on to Resources. Otherwise, follow the below procedure.

Initially, this section looks as follows:

# ** HTTPS/SSL **
# Whether or not to enforce HTTPS, i.e. redirect HTTP to HTTPS
#enforceHttps: false

# The path to the SSL keystore file, PKCS 12 format (see https://en.wikipedia.org/wiki/PKCS_12).
# If not set, the default keystore with a self-signed certificate will be used.
#sslKeystoreFile: !com.braintribe.model.resource.FileResource
#  path: "/path/to/keystore.p12"

# One can use openssl to generate a (self signed) keystore file:
#   openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt
#   openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in certificate.crt -inkey privateKey.key -out keystore.pkcs12 -name "tribefire"
# For more information see https://www.openssl.org/.

# The password for the keystore file (see above). Replace "[ENCRYPTED_PASSWORD]" with the encrypted password, e.g. "${decrypt('HMuN/VXo5+L0vVQzuJe7bAOiBmeKzWluP+POb7zjkcLCnzgawUfWmZAIu9eIOfVAzEQn6Q==')}".
#sslKeystorePassword: "${decrypt('[ENCRYPTED_PASSWORD]')}"

# If the keystore file was generated without a password, set the password to empty string.
#sslKeystorePassword: ""
  1. Set enforceHttps: to true:

    # Whether or not to enforce HTTPS, i.e. redirect HTTP to HTTPS
    enforceHttps: true
    
  2. Now, you can either skip the path section entirely to use the default keystore, or generate your own self-signed keystore file. This tutorial shows the second option. If you don't want to do it, move on to Resources.

    1. Remove the # comment marks in front of sslKeystoreFile: and path:. You should get the following result:

      # If not set, the default keystore with a self-signed certificate will be used.
      sslKeystoreFile: !com.braintribe.model.resource.FileResource
        path: "/path/to/keystore.p12"
      
    2. Now we need to generate the keystore. First, let's create a folder where it will be stored. In this tutorial, it's the SSL folder under Home/Documents.

    3. Open the newly created folder and run the terminal from it.

    4. Execute openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout privateKey.key -out certificate.crt. You will be prompted for some data - this is expected. This command generates the private key and certificate.

      We will need those files to create the keystore.

    5. Execute openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in certificate.crt -inkey privateKey.key -out keystore.pkcs12 -name "tribefire". This generates the keystore.pkcs12 file. You will be prompted for a password in the process - remember it, you will need it later!

    6. Now that you have the keystore, you can add its path to the configuration file:

      # If not set, the default keystore with a self-signed certificate will be used.
      sslKeystoreFile: !com.braintribe.model.resource.FileResource
        path: "/home/user/Documents/SSL/keystore.p12"
      
    7. Finally, we need to encrypt the keystore password in order to provide it in the configuration file. Go back to the unzipped package folder, where you will find the encrypt.sh script.

    8. Open the terminal and run ./encrypt.sh --value mypassword. You should get the encrypted password as a response:

      VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==
      

      Copy this response and paste it into the sslKeystorePassword property, in place of [ENCRYPTED_PASSWORD]:

      sslKeystorePassword: "${decrypt('VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==')}"
      

That's it - you have now configured HTTPS access for the conversion service. When installed with these settings, it should always redirect to HTTPS, using the keystore you generated.
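The keystore generation from the steps above can also be run non-interactively. This sketch adds -subj and -passout (both additions, not part of the original instructions) to skip the prompts, and uses a temporary directory in place of the SSL folder from the tutorial:

```shell
# Non-interactive variant of the keystore steps above (a sketch).
# -subj and -passout are added here only to avoid interactive prompts;
# in a real setup, answer the prompts and pick your own password.
set -e
ssl_dir=$(mktemp -d)
cd "$ssl_dir"
# Generate the private key and self-signed certificate
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
  -keyout privateKey.key -out certificate.crt -subj "/CN=tribefire"
# Bundle them into the PKCS 12 keystore
openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export \
  -in certificate.crt -inkey privateKey.key \
  -out keystore.pkcs12 -name "tribefire" -passout pass:changeme
ls -l keystore.pkcs12
```

The path of the resulting keystore.pkcs12 (and the password you chose) then go into sslKeystoreFile: and sslKeystorePassword: as described above.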

HTTP Security

You can edit the trustedDomain value to restrict access to this node from other domains. By default, cross-domain access is permitted for all hosts.

# ** HTTP Security **
# The Cross Domain Policy is used to define cross-domain related security settings.
# For example, if there are other web applications which want to embed this application's content in an iframe, e.g. the Web Reader,
# these applications have to be in the trustedDomain configured below. Examples:
#   "*" - permit cross-domain access from any host
#   "*.example.com" - permit cross-domain access from respective hosts on example.com domain
#   "" - disable cross-domain access
# Note that this setting has nothing to do with normal clients connecting to this application or other applications connecting via API.
# For further information read e.g. the HTTP Security section at https://developer.mozilla.org/en-US/docs/Web/HTTP.
crossDomainPolicy: !com.braintribe.model.platform.setup.api.cdp.SimpleCrossDomainPolicy
  trustedDomain: "*"

Resources

Unless instructed otherwise, you can simply use the default JVM values as shown below:

# ** Resources **
# The initial heap size of the Java Virtual Machine
initialHeapSize: 512m
# The maximum heap size of the Java Virtual Machine
maxHeapSize: 4096m
# The maximum number of connections (or -1 for no limit)
maxConnections: -1
# The maximum number of request worker threads.
maxThreads: 4000

That's it! You can move on to the next section.

Logging

In this section, you need to provide the directory for the log files. The easiest course of action is to use the default settings:

# ** Logging **
# The path of the directory where log files will be written to. You can simply use the provided default location, which is resolved relative to the 'installationPath:' property. '&logFilesDir' specifies an anchor, which makes it possible to reference the value below, see setting 'checkWriteAccessForDirs'.
logFilesDir: &logFilesDir "../logs"

If you do, this section should look as follows:

# ** Logging **
# The path of the directory where log files will be written to. You can simply use the provided default location, which is resolved relative to the 'installationPath:' property. '&logFilesDir' specifies an anchor, which makes it possible to reference the value below, see setting 'checkWriteAccessForDirs'.
logFilesDir: &logFilesDir "../logs"

# Log level for console output. Examples: SEVERE,WARNING,INFO,FINE,FINEST
consoleLogLevel: "INFO"

# Log level for file output. Examples: SEVERE,WARNING,INFO,FINE,FINEST
logFilesLogLevel: "FINE"

# Enables log rotation based on file size (in bytes). If the specified maximum is exceeded, the current log file will be archived and a new one will be created.
logFilesMaxSize: 15000000

# Enables log rotation based on a Cron expression. For more info see https://en.wikipedia.org/wiki/Cron.
# Rotate every midnight
logFilesCronRotate: "0 0 * * *"

# Maximum number of archived log files (see also log rotation settings above). If the maximum is exceeded, the oldest archived log files gets deleted.
logFilesMaxCount: 10

That's it! You can move on to the next section.

Temporary files

As with the logs, you can simply use the default location for temporary files:

# ** Temporary Files **
# The path of the directory where temporary files will be written to. You can simply use the provided default location, which is resolved relative to the 'installationPath:' property. '&tempDir' specifies an anchor, which makes it possible to reference the value below, see setting 'checkWriteAccessForDirs'.
tempDir: &tempDir "../temp"

That's it! You can move on to the next section.

Admin User

In this section, you need to enter the credentials of the admin user.

  1. As with the keystore, you need to encrypt the admin password. To do so, open the unzipped package, then open the terminal and run ./encrypt.sh --value mypassword. You should get the encrypted password as a response:

    VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==
    

    Copy this response (your own output, not the example above) and paste it into the password: property. The file should then look as follows:

    # ** Admin User **
    # Configures the credentials for the default admin user.
    # If the user doesn't exist yet, it will be created and the password will be set as specified here.
    # If the user already exists, nothing will be done, i.e. its password will NOT be changed!
    predefinedComponents:
      ADMIN_USER: !com.braintribe.model.user.User
        # Admin user name
        name: "admin"
        # Admin user password. Replace "[ENCRYPTED_PASSWORD]" with the encrypted password, e.g. "${decrypt('HMuN/VXo5+L0vVQzuJe7bAOiBmeKzWluP+POb7zjkcLCnzgawUfWmZAIu9eIOfVAzEQn6Q==')}".
        password: "${decrypt('VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==')}"
    

That's it! You can move on to the next section.

ActiveMQ Settings

Each node provides embedded ActiveMQ communication. You don't need to change these settings unless you change the ActiveMQ runtime properties. If you do, adapt hostAddress accordingly.

# ** ActiveMQ settings **
# Configures the ActiveMQ connection. Since each node provides an embedded ActiveMQ server, the URL just points to localhost.
# Unless a custom ActiveMQ server port is configured below (see AMQ_SERVER_PORT), there is no need to change these settings.
  MQ: !com.braintribe.model.messaging.jms.JmsActiveMqConnection
    name: "ActiveMQ Connection"
    hostAddress: "tcp://localhost:61616"

System Database

In this section, you need to put the information describing your system database (user name, encrypted password, database driver, database URL) into the properties found under connectionDescriptor:. Other properties (apart from the name) must be left with the default values. Let's focus on the section in question:

# Connection settings ( '&systemDatabaseConnection' specifies an anchor, which makes it possible to re-use the connection settings below)
    connectionDescriptor: &systemDatabaseConnection !com.braintribe.model.deployment.database.connector.GenericDatabaseConnectionDescriptor
      # JDBC driver
      driver: "org.postgresql.Driver"
      # JDBC URL
      url: "jdbc:postgresql://localhost:5432/system-db"
      # Database user name
      user: "example-db-user"
      # Database user password. Replace "[ENCRYPTED_PASSWORD]" with the encrypted password, e.g. "${decrypt('HMuN/VXo5+L0vVQzuJe7bAOiBmeKzWluP+POb7zjkcLCnzgawUfWmZAIu9eIOfVAzEQn6Q==')}".
      password: "${decrypt('[ENCRYPTED_PASSWORD]')}"

driver:

driver: will be different depending on the database type. Copy the driver property from the table below:

| Database | driver: value |
| --- | --- |
| PostgreSQL | "org.postgresql.Driver" (default) |
| Oracle | "oracle.jdbc.OracleDriver" |
| Oracle 8i | "oracle.jdbc.driver.OracleDriver" |
| MSSQL | "com.microsoft.sqlserver.jdbc.SQLServerDriver" |
| MSSQL JTDS | "net.sourceforge.jtds.jdbc.Driver" |
| MySQL | "com.mysql.jdbc.Driver" |
| DB2 | "com.ibm.db2.jdbc.net.DB2Driver" |

url:

url: will be different depending on the database type. Use the syntax as explained in the table below:

In each syntax, replace hostname with the actual host name (for example localhost), port with the database port, and the database identifier (databaseName, serviceName, or SID) with your own, then copy the result into the url: property.

| Database | url: syntax | Example |
| --- | --- | --- |
| PostgreSQL | "jdbc:postgresql://hostname:port/databaseName" | "jdbc:postgresql://localhost:5432/system-db" |
| Oracle with service name | "jdbc:oracle:thin:@hostname:port/serviceName" | "jdbc:oracle:thin:@localhost:1521/system-db" |
| Oracle with SID | "jdbc:oracle:thin:@hostname:port:SID" | "jdbc:oracle:thin:@localhost:1521:system-db" |
| MSSQL | "jdbc:sqlserver://hostname:port;databaseName=databaseName" | "jdbc:sqlserver://localhost:1433;databaseName=system-db" |
| MySQL | "jdbc:mysql://hostname:port/databaseName" | "jdbc:mysql://localhost:3306/system-db" |
| DB2 | "jdbc:db2://hostname:port/databaseName" | "jdbc:db2://localhost:50000/system-db" |
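As a quick sanity check, you can assemble the value in the shell before pasting it into the configuration file. The host, port, and database name below are placeholders:

```shell
# Assemble a url: value from your own connection data (PostgreSQL shown;
# DB_HOST, DB_PORT and DB_NAME are placeholders to replace)
DB_HOST=localhost
DB_PORT=5432
DB_NAME=system-db
URL="jdbc:postgresql://${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "url: \"${URL}\""
# prints: url: "jdbc:postgresql://localhost:5432/system-db"
```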

user:

Simply enter the user name with access to the database.

password:

As with all passwords, you need to encrypt the database user password. To do so, open the unzipped package, then open the terminal and run ./encrypt.sh --value mypassword. You should get the encrypted password as a response:

```
VTGEjDxNV17nHqxj/aXrLwAKmksFgUWIht5JZdPIZb5r3yeODUE0v+hz72y4TDD7eZfP9Q==
```

Copy this response and paste it into the password: property.

That's it! You can now proceed to the next section of the configuration file.

Connection Pools

As stated by the configuration file comments, you don't need to change these properties unless instructed otherwise.

Conversion Database

The configuration of the conversion database is exactly the same as described for System Database - simply carry out the same procedure in the conversion section of the file, providing data of the conversion database in the process.

Project Settings

The project descriptor in this section defines the name and displayed name of the Tomcat service. You don't need to change these settings unless instructed otherwise.

# ** Project Settings **
# Configures the project / product name. Usually these settings don't have to be changed. The names are used e.g. for the Tomcat service name, see below.
projectDescriptor:
  name: "conversion"
  displayName: "Conversion"

Tomcat Settings

The Tomcat service descriptor in this section defines the name of the service user when you run the application as a Tomcat service. You don't need to change these settings unless instructed otherwise.

# ** Tomcat Service **
# One can use script /tribefire/runtime/host/bin/tribefire-service.sh to run the application as a Tomcat service.
tomcatServiceDescriptor:
  # Specifies the name of the service user
  user: "service-user"

Runtime Properties

Platform Settings

Conversion is powered by the Tribefire platform, configured via the below settings.

  ##############################################################################
  # Platform settings
  ##############################################################################
  # This enables support for encrypted passwords in this configuration file (don't change!)
  TRIBEFIRE_SECURED_ENVIRONMENT: "true"

  # Specifies how long a user session should remain active when there is no activity on the session.
  # After the specified inactive time has passed (i.e. no request with the corresponding session ID has been received by the server),
  # the session is flagged as inactive and consequently removed by a periodic cleanup process.
  # The time span can be specified as a human-readable string, using numbers and the time unit, as in 12h, 30m, 3600s, etc.
  TRIBEFIRE_USER_SESSIONS_MAX_IDLE_TIME: "30m"

  # When this is set to true, the login dialog will offer an option to stay signed-in after a browser restart.
  # If this is set to false, the user session will always be closed when the browser is closed.
  # This is achieved by using a session cookie that stores the user's session ID until the browser is closed.  
  TRIBEFIRE_RUNTIME_OFFER_STAYSIGNED: "true"

  # The public tribefire services URL, i.e. the URL through which clients and other services can reach this service.
  # This is e.g. needed for callbacks, for example when service A invokes an (asynchronous) job on B and passes a callback URL,
  # through which B can notify A when the job is done.
  #
  # In many cases this settings can just be set to "https://(hostname of this machine):(https port configured above)/tribefire-services".
  # A typical use case where the URL is different, is a clustered service, i.e. multiple nodes behind a load balancer.
  # In that case the load balancer URL has to be specified here.
  #
  # Make sure that this public URL is reachable from other machines, e.g. verify that configured ports are opened in the firewall.
  TRIBEFIRE_PUBLIC_SERVICES_URL: "https://[PUBLIC_HOST]:[PUBLIC_PORT]/tribefire-services"

  # Indicates whether or not this node is part of a cluster.
  # (If it is, also ActiveMQ settings must be configured, see above.)
  TRIBEFIRE_IS_CLUSTERED: "false"

TRIBEFIRE_PUBLIC_SERVICES_URL provides the URL on which this service can be reached. When using multiple ADx Core servers, provide the load balancer URL.

> Important: `TRIBEFIRE_PUBLIC_SERVICES_URL` must be reachable from the outside. Make sure it is not blocked by a firewall. 

TRIBEFIRE_IS_CLUSTERED must be set to true if this node is going to be part of a cluster. In this case, don't forget to add it to the list of hosts in AMQ_CLUSTER_NODES - see ActiveMQ Settings.
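For example, a hypothetical two-node cluster behind a load balancer would combine these settings as follows (the host names are invented for illustration):

```yaml
# Load balancer URL, reachable by clients and by the other node
TRIBEFIRE_PUBLIC_SERVICES_URL: "https://conversion-lb.example.com/tribefire-services"
# This node is part of a cluster
TRIBEFIRE_IS_CLUSTERED: "true"
```

The node host names themselves then go into AMQ_CLUSTER_NODES, described in the next section.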

ActiveMQ Settings

Enter the full list of Conversion nodes in AMQ_CLUSTER_NODES. Optionally, you can also set up multicast communication. If you change the host or port, you also need to adapt the host address in the previous ActiveMQ Settings section.

  ##############################################################################
  # ActiveMQ settings
  ##############################################################################
  # Each node provides its own, embedded messaging service (based on ActiveMQ).
  # The messaging service enables nodes to communicate with each other, e.g. to send notification events from one node to all others.

  # The ActiveMQ server port.
  AMQ_SERVER_PORT: "61616"

  # A comma-separated list of host names (or IP addresses) of ActiveMQ nodes that should form a cluster.
  # Each address may also include a port (separated by a colon). If no port is specified, it defaults to the port configured in setting AMQ_SERVER_PORT.
  # Example: "adx1.example.com,adx2.example.com,adx3.example.com:61617".
  AMQ_CLUSTER_NODES: "localhost"

  # As an alternative to setting AMQ_CLUSTER_NODES ActiveMQ also supports connection to remote nodes via multicast transport.
  # This feature is disabled by default. Before enabling it ensure that the network (and firewalls, if any) permit multicasts on port 6155.
  #
  # The group name shared between all adx cluster nodes. Setting this value activates the multicast transport.
  #AMQ_DISCOVERY_MULTICAST_GROUP: "adx"
  # The network interface through which multicasts are sent, e.g. "eth0".
  #AMQ_DISCOVERY_MULTICAST_NETWORK_INTERFACE: "[NETWORK_INTERFACE]"

Conversion Settings

These settings control the storage type and location for conversion files, as well as the default user credentials on this conversion node. Change these settings as required by your organization.

##############################################################################
  # Conversion Settings
  ##############################################################################
  # Configures conversion services to use the conversion database configured above (don't change!)
  CONV_DB_EXISTING_EXTERNAL_ID: "connection.conversion"
  # Specifies whether to store conversion resources on file system (-> "fs") or in the database (-> "db").
  # Note that this only affects storage of files. Metadata is always stored in the database.
  CONV_STORAGE_TYPE: "fs"
  # If conversion resources are stored on the file system (see above), this setting points to the folder where resources are stored.
  # In clustered environments this must point to a shared file system and all nodes must use the same folder.
  CONV_STORAGE_FOLDER: &convStorageFolder "${TRIBEFIRE_INSTALLATION_ROOT_DIR}/../conversion-resources"
  # The name of the standard conversion user. This is the user to be used to send conversion requests (e.g. from ADx). This user does not have the admin role.
  CONV_STANDARD_USER_NAME: "tf-conversion"
  # The password of the standard conversion user (see above). If this property is not set, password will be 'cortex'.
  CONV_STANDARD_USER_PASSWORD: "\${decrypt('[ENCRYPTED_PASSWORD]')}"
  # The maximum age of a conversion job in the access. This only refers to jobs that have not been updated for this amount of time (in ms).
  #CONV_MAX_AGE: "86400000" # 1 day
  # The maximum allowed file size (in bytes) of an input resource.
  #CONV_MAX_INPUT_FILE_SIZE: "157286400" # 150 MB
  # The interval (in ms) how often the system should check for jobs that reached their end of life or should be removed.
  #CONV_CHECK_INTERVAL: "600000" # 10 minutes
  # The number of parallel worker threads. If this value is less than 1, the number will be computed based on available CPU cores and memory.
  #CONV_WORKER_THREADS: "0"

  ##############################################################################
  # Conversion Retry Settings
  # This mechanism periodically checks for stale jobs and revives them.
  ##############################################################################
  # The interval (in ms) of the Job Scheduler checking for stale jobs.
  CONV_JOB_SCHEDULER_INTERVAL: "300000" # 5 minutes
  # The maximum interval of inactivity before a job gets restarted.
  CONV_MAX_INACTIVITY_BEFORE_RETRY: "3600000" # 60 minutes
  # The maximum number of retries for a single job. If this number is reached,
  # there will be no further retries and the problem has to be analyzed and fixed manually.
  CONV_MAX_RETRIES: "3"

  ##############################################################################
  # Conversion Restriction Settings
  # These settings define limits that job input files must not break.
  # If these limits are broken, the job fails automatically to avoid overloading. 
  ##############################################################################
  # Maximum size of an individual input file (in bytes) of a job.
  # CONV_MAX_INPUT_FILE_SIZE: "524288000" # 500 MB
  # Maximum total file size of all input files (in bytes) of a job.
  # CONV_MAX_TOTAL_INPUT_FILE_SIZE: "2147483648" # 2 GB
  # Maximum number of input files of a job.
  # CONV_MAX_NUMBER_OF_INPUT_FILES: "100"
  # Maximum number of objects in a PDF-to-image operation, counted from all pages.
  # CONV_PDF_TO_IMAGE_MAX_NUMBER_OF_OBJECTS: "10000000" # ten million
  # Maximum number of objects per page in a PDF-to-image operation.
  # CONV_PDF_TO_IMAGE_MAX_NUMBER_OF_OBJECTS_PER_PAGE: "1000000" # one million
  # Maximum number of pages in a PDF-to-image operation
  # CONV_PDF_TO_IMAGE_MAX_NUMBER_OF_PAGES: "10000"
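As a rough back-of-the-envelope reading of the retry settings above (an interpretation of the defaults, not an official formula): a job that never recovers waits up to the inactivity limit plus one scheduler tick per round, for at most CONV_MAX_RETRIES rounds, before it needs manual analysis:

```shell
# Worst-case time until a permanently stale job requires manual analysis,
# using the default retry settings above (all values in minutes)
SCHEDULER_INTERVAL_MIN=5   # CONV_JOB_SCHEDULER_INTERVAL (300000 ms)
MAX_INACTIVITY_MIN=60      # CONV_MAX_INACTIVITY_BEFORE_RETRY (3600000 ms)
MAX_RETRIES=3              # CONV_MAX_RETRIES
TOTAL_MIN=$(( MAX_RETRIES * (MAX_INACTIVITY_MIN + SCHEDULER_INTERVAL_MIN) ))
echo "${TOTAL_MIN} minutes"
# prints: 195 minutes
```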

Installation

Congratulations, now you're done configuring the file. Remember to save it (you can change the name and location, but you don't have to), then proceed to installation.

  1. Go to the directory where you unzipped the package.

  2. Open the terminal and run ./install.sh. This command doesn't require adapting, provided you prepared installation-settings.yaml and environment.sh as described. Otherwise, adapt the paths and/or names accordingly (for example, ./install.sh --settings /path/to/installation-settings.yaml --environment /path/to/environment.sh).

    If the installation fails, please quote the full version of the package (with the -p suffix) to the support team.

  3. Conversion should now be installed in the directory specified in the configuration file. To start it, enter tribefire/runtime/host/bin in the installation directory, then run ./tribefire-console-start.sh. Alternatively, you can start the server as a Linux service.

  4. Run the health check on the installed service.

  5. Open the URL entered as TRIBEFIRE_PUBLIC_SERVICES_URL: to manually verify that it works.

  6. You're done!


What's Next?


After start-up, the conversion service will be available in your browser under the host and port you configured (for example http://localhost:8080). You can also run a number of health checks to make sure everything is running smoothly. For more information, see Running Conversion and Platform Health Checks. For checks on legacy features, see Running Deep Health Checks on Legacy Endpoints.

Having installed and deployed the conversion service locally, you can now do the same for ADx Core. See Installing ADx Core for details.