
Docker project for generating a tomcat docker image for Pega

License: Apache License 2.0


docker-pega-web-ready's Introduction

Pega Docker Image

Pega Platform is a distributed web application for customer engagement, customer service, and digital process automation. A Pega deployment consists of a number of containers connecting to a database and any other required backing services. The Pega database contains business rule logic that must be preloaded with an installer for the containers to start successfully. For more information and instructions on how to get started with a container-based deployment of Pega, see Pega's Cloud Choice documentation.


Using this image

This ready Docker image extends a customized Tomcat base image, pegasystems/tomcat:9-jdk11, and contains the components required to run Pega Platform on your deployment nodes. It does not include Pega Platform rules. The image is web-ready: clients build a final image on top of it that includes the Pega .war file of their choice.

Pega offers an alternative, full image which includes the .war file - for details, see pegasystems/pega on DockerHub. Docker images provided by Pegasystems are validated and supported by Pega Support.

Image customizations

If you do not want to use the Pega-provided Docker image, you can copy this repository and build your own image based on your preferred base image such as one enforced by a corporate standard. When making customizations for your environment, check the Pega Platform Support Guide Resources to verify that those changes are supported by your Pega Platform version. If you choose to build your own image, Pega will continue to offer support for Pega Platform, but problems that arise from your custom image are not the responsibility of Pegasystems.

User access and control considerations for this image

Pega provides this web-ready Docker image with a built-in non-root user - pegauser:pegauser (9001:9001) - which allows you to set default, limited user access policies, so that file system access can be controlled by non-root users who deploy the image. The image grants required file access only to pegauser:pegauser. When you build your Pega deployment Docker image from this web-ready image, consider adding any further user access and control restrictions, such as the roles or privileges required for file or directory access and ownership.

Building a deployable Docker image using this web-ready image

For clients who need to build their own deployment image, Pega recommends building your Pega image using your own Dockerfile with contents similar to the example below and specifying the .war file from the Pega distribution kit. You may also specify a database driver as shown in the example. It is a best practice to build this image on a Linux system to retain proper file permissions. Replace the source paths with the actual paths to the Pega Infinity software libraries and specify a valid JDBC driver for your target database to bake it in.

To build the Pega image on JDK 11, use pegasystems/pega-ready:3-jdk11 as the base image. To build the Pega image on JDK 17, use pegasystems/pega-ready:3-jdk17 as the base image. Currently, the latest tag points to the Pega image on JDK 17, but it may point to later versions in the future, so as a best practice, use tags that specify the version you want to deploy.

FROM busybox AS builder

# Expand prweb to target directory
COPY /path/to/prweb.war /prweb.war
RUN mkdir prweb
RUN unzip -q -o prweb.war -d /prweb

# Building the Pega image on JDK 11. To use images on JDK 17, use the tag 3-jdk17.
FROM pegasystems/pega-ready:3-jdk11 

# Copy prweb to tomcat webapps directory
COPY --chown=pegauser:root --from=builder /prweb ${CATALINA_HOME}/webapps/prweb

RUN chmod -R g+rw   ${CATALINA_HOME}/webapps/prweb

# Make a jdbc driver available to tomcat applications
COPY --chown=pegauser:root /path/to/jdbcdriver.jar ${CATALINA_HOME}/lib/

RUN chmod g+rw ${CATALINA_HOME}/webapps/prweb/WEB-INF/classes/prconfig.xml
RUN chmod g+rw ${CATALINA_HOME}/webapps/prweb/WEB-INF/classes/prlog4j2.xml

Build the image using the following command:

docker build -t pega-tomcat .

Since this image uses a hardened base image, it does not include all of the packages you might expect, such as an unzip utility. The multi-stage Docker build shown above therefore unpacks the .war file in a throwaway builder stage and copies only the unzipped content into the final image, which reduces the risk of vulnerabilities. Upon successful completion of the above command, you will have a Docker image named pega-tomcat:latest registered in your local registry, which you can view using the docker images command.

Running the image

You must use an orchestration tool to run Pega applications using containers. Pega provides support for deployments on Kubernetes using either Helm charts or direct yaml files. You can find the source code for the deployment scripts in the pega-helm-charts repository. For information about deploying Pega Platform on a client managed cloud, see the Cloud Choice community article.

Mount points

Mount points are used to link a directory within the Docker container to a durable location on a filesystem. For complete information, see the Docker documentation, bind mounts.

Mount point Purpose
/opt/pega/kafkadata Used to persist Kafka data when you run stream nodes.
/heapdumps Used as the default output directory when you generate a heapdump.
/search_index Used to persist a search index when the node hosts search.
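To illustrate, a direct docker run invocation binding each mount point to a host directory might look like the sketch below. The host paths and the pega-tomcat image name are placeholders, not part of this image; substitute durable locations from your own environment.

```shell
# Hypothetical host directories; substitute durable locations in your environment.
KAFKA_DATA_DIR=/var/pega/kafkadata
HEAPDUMP_DIR=/var/pega/heapdumps
SEARCH_INDEX_DIR=/var/pega/search_index

# One bind mount (-v host:container) per mount point in the table above.
DOCKER_CMD="docker run \
  -v ${KAFKA_DATA_DIR}:/opt/pega/kafkadata \
  -v ${HEAPDUMP_DIR}:/heapdumps \
  -v ${SEARCH_INDEX_DIR}:/search_index \
  pega-tomcat:latest"

echo "${DOCKER_CMD}"
```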

Environment variables

You customize your Docker container at runtime by overriding environment variables using the -e Docker flag.

$ docker run -e "var_1=foo" -e "var_2=bar" <image name>[:tags]

Database connection

Specify your required settings for your connection to the database where Pega will be installed.

Name Purpose Default
JDBC_DRIVER_URI Download (curl) the specified database driver. If you do not specify a driver to download, you must embed the driver into your Docker image. See Constructing Your Image for more information on baking a driver in.
JDBC_URL Specify the JDBC url to connect to your database.
JDBC_CLASS Specify the JDBC driver class to use for your database. org.postgresql.Driver
DB_USERNAME Specify the username to connect to your database.
DB_PASSWORD Specify the password to connect to your database.
RULES_SCHEMA Specify the rules schema for your database. rules
DATA_SCHEMA Specify the data schema for your database. data
CUSTOMERDATA_SCHEMA If configured in your database, set the customer data schema for your database. If you do not provide a value, this setting defaults to dataSchema.
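For example, a PostgreSQL connection can be configured entirely through these variables. Every value below is a placeholder to be replaced with your own details; in production, pass credentials through a secrets mechanism rather than on the command line.

```shell
# Placeholder connection details for a hypothetical PostgreSQL database.
JDBC_URL="jdbc:postgresql://mydb.example.com:5432/pega"
JDBC_CLASS="org.postgresql.Driver"
DB_USERNAME="pegauser"
DB_PASSWORD="changeme"

# Compose a docker run command that supplies the connection settings.
DOCKER_CMD="docker run \
  -e JDBC_URL=${JDBC_URL} \
  -e JDBC_CLASS=${JDBC_CLASS} \
  -e DB_USERNAME=${DB_USERNAME} \
  -e DB_PASSWORD=${DB_PASSWORD} \
  -e RULES_SCHEMA=rules \
  -e DATA_SCHEMA=data \
  pega-tomcat:latest"

echo "${DOCKER_CMD}"
```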

Secured custom artifactory settings for downloading the JDBC driver

If you use a secured custom artifactory to manage your JDBC driver, provide the basic authentication credentials or the API key authentication details required by your custom artifactory's authentication mechanism.

Name Purpose Default
CUSTOM_ARTIFACTORY_USERNAME Custom artifactory basic authentication username.
CUSTOM_ARTIFACTORY_PASSWORD Custom artifactory basic authentication password.
CUSTOM_ARTIFACTORY_APIKEY_HEADER Custom artifactory dedicated APIKey authentication header name.
CUSTOM_ARTIFACTORY_APIKEY Custom artifactory APIKey value for APIKey authentication.
ENABLE_CUSTOM_ARTIFACTORY_SSL_VERIFICATION Sets SSL verification when downloading the JDBC driver from the custom artifactory using curl. false
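As a sketch, the two authentication styles map onto the variables as shown below. The URL, credentials, and key are illustrative; X-JFrog-Art-Api is one commonly used API-key header name, but use whatever header your artifactory expects.

```shell
# Basic-authentication variant: username and password for the artifactory.
BASIC_AUTH_CMD="docker run \
  -e JDBC_DRIVER_URI=https://artifactory.example.com/libs/postgresql.jar \
  -e CUSTOM_ARTIFACTORY_USERNAME=deploy-user \
  -e CUSTOM_ARTIFACTORY_PASSWORD=changeme \
  pega-tomcat:latest"

# API-key variant: a dedicated header name plus the key value, with SSL
# verification of the artifactory certificate turned on.
APIKEY_CMD="docker run \
  -e JDBC_DRIVER_URI=https://artifactory.example.com/libs/postgresql.jar \
  -e CUSTOM_ARTIFACTORY_APIKEY_HEADER=X-JFrog-Art-Api \
  -e CUSTOM_ARTIFACTORY_APIKEY=AKCp5placeholder \
  -e ENABLE_CUSTOM_ARTIFACTORY_SSL_VERIFICATION=true \
  pega-tomcat:latest"

echo "${BASIC_AUTH_CMD}"
echo "${APIKEY_CMD}"
```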

JDBC connection examples

See the following examples for specifying the database and type of driver used for your connection.

PostgreSQL

JDBC_URL=jdbc:postgresql://YOUR_DB_HOST:5432/YOUR_DB_NAME
JDBC_CLASS=org.postgresql.Driver

Oracle

JDBC_URL=jdbc:oracle:thin:@//YOUR_DB_HOST:1521/YOUR_DB_NAME
JDBC_CLASS=oracle.jdbc.OracleDriver

Microsoft SQL Server

JDBC_URL=jdbc:sqlserver://YOUR_DB_HOST:1433;databaseName=YOUR_DB_NAME;selectMethod=cursor;sendStringParametersAsUnicode=false
JDBC_CLASS=com.microsoft.sqlserver.jdbc.SQLServerDriver

For a complete list of supported relational databases, see the Pega Platform Support Guide.

Advanced JDBC configuration

You can specify a variety of settings for your connection to the database where Pega will be installed.

Name Purpose Default
JDBC_MAX_ACTIVE The maximum number of active connections that can be allocated from this pool at the same time. 75
JDBC_MIN_IDLE The minimum number of established connections that should be kept in the pool at all times. 3
JDBC_MAX_IDLE The maximum number of connections that should be kept in the pool at all times. 25
JDBC_MAX_WAIT The number of milliseconds that the database connection pool will wait (when there are no available connections) for a connection to be returned before throwing an exception. 10000
JDBC_INITIAL_SIZE The initial number of database connections that are created when the pool is started. 0
JDBC_CONNECTION_PROPERTIES The database connection pool properties that the deployment sends to the JDBC driver when creating new database connections. The format of the string must be [propertyName=property;]*
JDBC_TIME_BETWEEN_EVICTIONS The number of milliseconds to sleep between runs of the idle connection validation/cleaner thread. 30000
JDBC_MIN_EVICTABLE_IDLE_TIME The number of milliseconds that an object is allowed to sit idle in the database connection pool before it is eligible for eviction. 60000
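As an illustrative tuning example for a busy web node, the pool settings above might be overridden like this. The numbers are placeholders to adjust against your database capacity, and socketTimeout is shown only as an example of the [propertyName=property;]* connection-property format.

```shell
# Pool sizing overrides; defaults from the table above apply when a variable is unset.
DOCKER_CMD="docker run \
  -e JDBC_MAX_ACTIVE=100 \
  -e JDBC_MIN_IDLE=5 \
  -e JDBC_MAX_IDLE=30 \
  -e JDBC_MAX_WAIT=20000 \
  -e JDBC_CONNECTION_PROPERTIES='socketTimeout=90;' \
  pega-tomcat:latest"

echo "${DOCKER_CMD}"
```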

Pega customization

You can specify a variety of settings for nodes in your deployment.

Name Purpose Default
NODE_TYPE Specify a node type or classification to specialize the processing within this container. For more information, see Node types for on-premises environments.
PEGA_DIAGNOSTIC_USER Set a Pega diagnostic username to download log files.
PEGA_DIAGNOSTIC_PASSWORD Set a secure Pega diagnostic password to download log files.
NODE_TIER Specify the display name of the tier to which you logically associate this node.

Customize the Tomcat runtime

You can specify a variety of settings for the Tomcat server running in your deployment.

Name Purpose Default
PEGA_APP_CONTEXT_PATH The application context path that Tomcat uses to direct traffic to the Pega application prweb
PEGA_DEPLOYMENT_DIR The location of the Pega app deployment /usr/local/tomcat/webapps/prweb
JAVA_OPTS Specify any additional parameters that should be appended to the java command.
INITIAL_HEAP Specify the initial size (Xms) of the java heap. 2048m
MAX_HEAP Specify the maximum size (Xmx) of the java heap. 4096m
HEAP_DUMP_PATH Specify a location for a heap dump using -XX:HeapDumpPath /heapdumps
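For instance, a node given roughly 8 GB of container memory might be started with heap settings like the following. The sizes are illustrative, not a recommendation; size the heap against your actual container memory limits.

```shell
# Heap sizing and context-path overrides; unset variables keep the table defaults.
DOCKER_CMD="docker run \
  -e INITIAL_HEAP=4096m \
  -e MAX_HEAP=6144m \
  -e HEAP_DUMP_PATH=/heapdumps \
  -e PEGA_APP_CONTEXT_PATH=prweb \
  pega-tomcat:latest"

echo "${DOCKER_CMD}"
```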

Cassandra settings

For Pega Platform deployments running Pega Decisioning, you must specify how to connect to your organization's existing Cassandra service by using parameters to manage the connection to the service.

Name Purpose Default
CASSANDRA_CLUSTER Enable a connection to your organization's Cassandra service. false
CASSANDRA_NODES Specify a comma-separated list of hosts in your Cassandra service cluster (for example, 10.20.205.26,10.20.205.233).
CASSANDRA_PORT Specify the TCP port to connect to your Cassandra service cluster. 9042
CASSANDRA_USERNAME Specify the plain text username for authentication with your Cassandra service cluster. For better security, avoid plain text usernames and leave this parameter blank; then include the username in an external secrets manager with the key CASSANDRA_USERNAME.
CASSANDRA_PASSWORD Specify the plain text password for authentication with your Cassandra service cluster. For better security, avoid plain text passwords and leave this parameter blank; then include the password in an external secrets manager with the key CASSANDRA_PASSWORD.
CASSANDRA_CLIENT_ENCRYPTION Enable encryption of traffic between Pega Platform instance and your organization's Cassandra service. false
CASSANDRA_CLIENT_ENCRYPTION_STORE_TYPE Specify the archive file format in which Cassandra client encryption keys are held. JKS
CASSANDRA_TRUSTSTORE Specify the path to the truststore file which contains trusted third party certificates that will be used in Cassandra client encryption.
CASSANDRA_TRUSTSTORE_PASSWORD Specify the plain text password for the Cassandra client encryption truststore file. For better security, avoid plain text passwords and leave this parameter blank; then include the password in an external secrets manager with the key CASSANDRA_TRUSTSTORE_PASSWORD.
CASSANDRA_KEYSTORE Specify the path to the keystore file which contains keys and certificates that will be used in Cassandra client encryption to establish secure connection.
CASSANDRA_KEYSTORE_PASSWORD Specify the plain text password for the Cassandra client encryption keystore file. For better security, avoid plain text passwords and leave this parameter blank; then include the password in an external secrets manager with the key CASSANDRA_KEYSTORE_PASSWORD.
CASSANDRA_ASYNC_PROCESSING_ENABLED Enable asynchronous processing of records in DDS Dataset save operation. Failures to store individual records will not interrupt Dataset save operations. false
CASSANDRA_KEYSPACES_PREFIX Specify a prefix to use when creating Pega-managed keyspaces in Cassandra.
CASSANDRA_EXTENDED_TOKEN_AWARE_POLICY Enable an extended token aware policy for use when a Cassandra range query runs. When enabled this policy selects a token from the token range to determine which Cassandra node to send the request. Before you can enable this policy, you must configure the token range partitioner. false
CASSANDRA_LATENCY_AWARE_POLICY Enable a latency awareness policy, which collects the latencies of the queries for each Cassandra node and maintains a per-node latency score (an average). false
CASSANDRA_CUSTOM_RETRY_POLICY Enable the use of a customized retry policy for your Pega Platform deployment for Pega Platform ’23 and earlier releases. After you enable this policy in your deployment configuration, the deployment retries Cassandra queries that time out. Configure the number of retries using the dynamic system setting (DSS): dnode/cassandra_custom_retry_policy/retryCount. The default is 1, so if you do not specify a retry count, timed out queries are retried once. false
CASSANDRA_CUSTOM_RETRY_POLICY_ENABLED Use this parameter in Pega Platform '24 and later instead of CASSANDRA_CUSTOM_RETRY_POLICY. Configure the number of retries using the CASSANDRA_CUSTOM_RETRY_POLICY_COUNT property. false
CASSANDRA_CUSTOM_RETRY_POLICY_COUNT Specify the number of retry attempts when CASSANDRA_CUSTOM_RETRY_POLICY is true. For Pega Platform '23 and earlier releases use the dynamic system setting (DSS): dnode/cassandra_custom_retry_policy/retryCount. 1
CASSANDRA_SPECULATIVE_EXECUTION_POLICY Enable the speculative execution policy for retrieving data from your Cassandra service for Pega Platform '23 and earlier releases. When enabled, the Pega Platform will send a query to multiple nodes in your Cassandra service and process the first query response. This provides lower perceived latencies for your deployment, but puts greater load on your Cassandra service. Configure the speculative execution delay and max executions using the following dynamic system settings (DSS): dnode/cassandra_speculative_execution_policy/delay and dnode/cassandra_speculative_execution_policy/max_executions. false
CASSANDRA_SPECULATIVE_EXECUTION_POLICY_ENABLED Use this parameter in Pega Platform '24 and later instead of CASSANDRA_SPECULATIVE_EXECUTION_POLICY. Configure the speculative execution delay and max executions using the CASSANDRA_SPECULATIVE_EXECUTION_DELAY and CASSANDRA_SPECULATIVE_EXECUTION_MAX_EXECUTIONS properties. false
CASSANDRA_SPECULATIVE_EXECUTION_DELAY Specify the delay in milliseconds before speculative executions are made when CASSANDRA_SPECULATIVE_EXECUTION_POLICY is true. For Pega Platform '23 and earlier releases use the dynamic system setting (DSS): dnode/cassandra_speculative_execution_policy/delay. 100
CASSANDRA_SPECULATIVE_EXECUTION_MAX_EXECUTIONS Specify the maximum number of speculative execution attempts when CASSANDRA_SPECULATIVE_EXECUTION_POLICY is true. For Pega Platform '23 and earlier releases use the dynamic system setting (DSS): dnode/cassandra_speculative_execution_policy/max_executions. 2
CASSANDRA_JMX_METRICS_ENABLED Enable reporting of DDS SDK metrics to a Java Management Extension (JMX) format for use by your organization to monitor your Cassandra service. Setting this property false disables metrics being exposed through the JMX interface; disabling also limits the metrics being collected using the DDS landing page. true
CASSANDRA_CSV_METRICS_ENABLED Enable reporting of DDS SDK metrics to a Comma Separated Value (CSV) format for use by your organization to monitor your Cassandra service. If you enable this property, use the Pega Platform DSS: dnode/ddsclient/metrics/csv_directory to customize the filepath to which the deployment writes CSV files. By default, after you enable this property, CSV files will be written to the Pega Platform work directory. false
CASSANDRA_LOG_METRICS_ENABLED Enable reporting of DDS SDK metrics to your Pega Platform logs. false
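A minimal sketch of enabling the Cassandra connection with client encryption follows. The hosts, truststore path, and image name are placeholders, and the credentials are deliberately omitted so that, as the table advises, they come from an external secrets manager rather than plain-text environment variables.

```shell
# Placeholder Cassandra connection settings; credentials intentionally omitted.
DOCKER_CMD="docker run \
  -e CASSANDRA_CLUSTER=true \
  -e CASSANDRA_NODES=10.20.205.26,10.20.205.233 \
  -e CASSANDRA_PORT=9042 \
  -e CASSANDRA_CLIENT_ENCRYPTION=true \
  -e CASSANDRA_TRUSTSTORE=/opt/pega/certs/truststore.jks \
  pega-tomcat:latest"

echo "${DOCKER_CMD}"
```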

Hazelcast settings

The clustering used in a Pega environment is powered by Hazelcast. Hazelcast can be used in embedded mode with no additional configuration required. Larger deployments of more than 20 Pega containers may benefit from the improved performance and stability of running Hazelcast in a dedicated ReplicaSet. For more information about deploying Pega with Hazelcast as an external server, see the Helm charts and the Pega Community documentation.

Name Purpose Default
HZ_CLIENT_MODE Enables client mode for Pega Infinity. false
HZ_VERSION Hazelcast service version.
HZ_DISCOVERY_K8S Indicates that the Infinity client will use the Kubernetes discovery plugin to look for Hazelcast nodes.
HZ_CLUSTER_NAME Hazelcast cluster name.
HZ_SERVER_HOSTNAME Hazelcast server hostname.
HZ_CS_AUTH_USERNAME Hazelcast username for authentication.
HZ_CS_AUTH_PASSWORD Hazelcast password for authentication.
HZ_SSL_ENABLED Set to true to enable SSL between the Clustering Service and Pega Platform. false
HZ_SSL_PROTOCOL The SSL protocol for the Clustering Service. For example, TLS.
HZ_SSL_CUSTOM_CLASS SSL context factory class fully qualified name com.pega.hazelcast.v5.nio.ssl.BasicSSLContextFactory
HZ_SSL_KEY_STORE_NAME SSL keystore name
HZ_SSL_KEYSTORE_PASSWORD SSL keystore password
HZ_SSL_ALGO SSL algorithm name
HZ_SSL_TRUST_STORE_NAME SSL truststore name
HZ_SSL_TRUSTSTORE_PASSWORD SSL truststore password
HIGHLY_SECURE_CRYPTO_MODE_ENABLED Set to true to enable highly secure encryption mode that complies with NIST SP 800-53 and NIST SP 800-131. false

Contributing

This is an open source project and contributions are welcome. Please see the contributing guidelines to get started.

docker-pega-web-ready's People

Contributors

akshithac-21, amphibithen, apegadavis, cdancy, dcasavant, khick77, kishorv10, madhuriarugula, mandeep-pega, misterdorito, nekhilkotha, pega-abhinav, pega-chikv, pega-roska, pega-talba, pegadave, petehayes, saikarthik528, saurabh-16, shashikant-koder, slimatic, smootherbug, sushmareddyloka, umaveerabasaveswararao, viper0131, vnihal72, wonim2022, yashwanth-p, yashwanth-pega, zitikay


docker-pega-web-ready's Issues

Not able to access /opt/pega/prweb permission denied.

When I run the fully baked Pega image built using docker-pega-web-ready, I get the errors below:
cp: cannot create directory '/opt/pega/prweb/META-INF': Permission denied
cp: cannot create directory '/opt/pega/prweb/WEB-INF': Permission denied
cp: cannot create regular file '/opt/pega/prweb/archive.info': Permission denied
cp: cannot create directory '/opt/pega/prweb/diagnostic': Permission denied
cp: cannot create directory '/opt/pega/prweb/images': Permission denied
Loading prlog4j2 from /opt/pega/config/prlog4j2.xml...
cp: cannot create regular file '/opt/pega/prweb/WEB-INF/classes/': No such file or directory
Loading prconfig from /opt/pega/config/prconfig.xml...
cp: cannot create regular file '/opt/pega/prweb/WEB-INF/classes/': No such file or directory

It looks like the image does not give pegauser the permissions needed to access /opt/pega/prweb.

Add Travis-CI pipeline

This project should build continuously using Travis-CI and publish the resulting pega-ready image to DockerHub.

PegaMKTSMS and PegaMKTEmail are Valid NodeTypes for CDH Deployments

Describe the bug
PegaMKTSMS and PegaMKTEmail are valid NodeTypes for CDH deployments, and there is no configuration option to pass in those NodeTypes.

To Reproduce
Set -DNode=PegaMKTSMS or PegaMKTEmail; the platform returns an error that it is an invalid node type, and that the valid applicable NodeTypes are [Search, WebUser, BIX, BackgroundProcessing, Custom1, Custom2, Custom3, Custom4, Custom5, DDS, ADM, Batch, RealTime, RTDG, Stream].

Expected behavior
For CDH deployments we should include PegaMKTSMS and PegaMKTEmail as valid NodeTypes.

Additional context
As of today there is no way to pass the ApplicableNodeTypes environment value to Tomcat.
The existing cloud Cuttyhunk deployment Tomcat images have an option to pass the applicable NodeTypes; we need a similar parameter in the Tomcat configuration in this repo so that CDH deployments can pass applicableNodeTypes.

kafka data is not persistent

Describe the bug
The stream nodes are defined as stateful sets with a PVC mounted at /opt/pega/streamvol. Nothing gets written at this location; I think the intention is to write Kafka data there. However, there is a setting in prweb.xml which makes the stream nodes write Kafka data to the /opt/pega/kafkadata location, which is not persistent. Any attempt to override this setting using prconfig.xml, DSS, or even context.xml does not work, because prweb.xml is "the" context file for the prweb war.

To Reproduce
Start stream nodes and observe that Kafka logs get written to /opt/pega/kafkadata.

Expected behavior
Kafka data should be written to the persistent /opt/pega/streamvol location. Overriding the Kafka data location to point to /opt/pega/streamvol using this article https://community.pega.com/knowledgebase/articles/decision-management-overview/advanced-configurations-stream-service#4 does not work.

Chart version
What version of the charts are you using? Have you made any customizations?

Server (if applicable, please complete the following information):

  • OS: [Ubuntu 18.04]
  • Environment: [AKS]
  • Database: [PostgreSQL]
  • Pega version [8.4.0]

Additional context
NA

Running image as non-root user fails to copy provided conf files like prconfig.xml

Describe the bug
If I provide a volume mapping for the /opt/pega/config dir when I run the image as a non-root user, the entry script fails to copy over the various config files.

To Reproduce

  1. Create a local directory containing a prconfig.xml file (/var/customconf/prconfig.xml)
  2. Run the image as a non-root user mapping the volume $ docker run -v /var/customconf:/opt/pega/config --user 1000 pega-tomcat:latest

Expected behavior
The mapped files are copied correctly

Screenshots

2019/12/11 13:50:59 unable to chmod temp file: chmod /usr/local/tomcat/conf/Catalina/localhost/prweb.xml: operation not permitted
Loading prlog4j2 from /opt/pega/config/prlog4j2.xml...
cp: cannot create regular file '/usr/local/tomcat/webapps/prweb/WEB-INF/classes/prlog4j2.xml': Permission denied
Loading prconfig from /opt/pega/config/prconfig.xml...
cp: cannot create regular file '/usr/local/tomcat/webapps/prweb/WEB-INF/classes/prconfig.xml': Permission denied
Loading context.xml from /opt/pega/config/context.xml...
2019/12/11 13:50:59 unable to chmod temp file: chmod /usr/local/tomcat/conf/tomcat-users.xml: operation not permitted

Reverse DNS lookups should be disabled

Describe the bug
The Tomcat connector will automatically do reverse DNS lookups for the source IPs of requests. It uses this, for example, to log human readable hostnames instead of IP addresses in the access log files. However, the Tomcat container is typically deployed in a cluster, where external traffic is routed via a load balancer (e.g. AWS ALB), Envoy or kube proxy, etc. So the source IP arriving at Tomcat will not be the original source IP, but instead the IP of the internal routing component (the original source IP is then often found in the 'x-forwarded-for' header). That is: all reverse DNS lookups result in the hostname of the routing component. The DNS lookup adds cost, without providing any value. What's more, the DNS lookups have been identified as a source of spikes in response times - at least in some environments. Disabling reverse DNS lookups resolves this.

To Reproduce
Check the Tomcat access logs - they contain the host names of the internal routing components.

Expected behavior
Reverse DNS lookups should be prevented. As a result the access logs will show IP addresses instead.
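For reference, reverse lookups in Tomcat are governed by the enableLookups attribute on the Connector element in server.xml. One possible fix, sketched here against a throwaway sample file rather than a real image, is to inject enableLookups="false" into the connector definition; in a derived image the same edit would target the server.xml under CATALINA_HOME/conf, and the exact sed pattern must be adapted to your actual connector line.

```shell
# Demonstration against a minimal sample server.xml; in a derived image the
# edit would target ${CATALINA_HOME}/conf/server.xml instead.
SERVER_XML=$(mktemp)
cat > "${SERVER_XML}" <<'EOF'
<Server>
  <Service>
    <Connector port="8080" protocol="HTTP/1.1"/>
  </Service>
</Server>
EOF

# Add enableLookups="false" so the connector logs raw IP addresses instead of
# resolving hostnames via reverse DNS.
sed -i 's/<Connector port="8080"/<Connector port="8080" enableLookups="false"/' "${SERVER_XML}"

grep 'enableLookups="false"' "${SERVER_XML}"
```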

Screenshots
N/a

Desktop (if applicable, please complete the following information):
N/a

Server (if applicable, please complete the following information):
N/a

Additional context
Also see the related Issue #240 of the 'pega-helm-charts' repository. For internal reference, see US-400592

Undocumented build instructions and dependency on container-structure-test for running 'make test'

Describe the bug
The README.md doesn't include the (trivial) instructions on how to build the image ('make all') or how to execute the tests ('make test'). Further, running the tests requires the Google Container Structure Tests tool (https://github.com/GoogleContainerTools/container-structure-test) to be installed, which has not been documented.

To Reproduce
After building the image ('make all') execute the tests ('make test'). This will fail with the following message:

# Execute test cases
container-structure-test test --image qualitytest --config tests/pega-web-ready-testcases.yaml
make: container-structure-test: Command not found
make: *** [Makefile:14: test] Error 127

After installing the Container Structure Test tool, the tests pass.

Expected behavior
The build and test instructions should be documented, including the required dependency on Container Structure Test.

Screenshots
n/a

Desktop (if applicable, please complete the following information):

  • OS: Ubuntu 20.4.1
  • Browser: n/a
  • Version: n/a

Server (if applicable, please complete the following information):
n/a

Additional context
none

Pega8 support

Please add support for native Pega 8.x versions
(remove SMA, upgrade Tomcat to version 8.5 or higher).

Removal of pega7-tomcat-ready from DockerHub

The pega7-tomcat-ready image needs to be removed from DockerHub to avoid confusion and the possibility of users deploying an unsupported version of the Docker image.

Removal is scheduled for Sept 1, 2019.

JDBC Driver Timeout with MSSQL

Describe the bug
When starting Pega Platform using the Docker image and latest MSSQL driver jar, a DB timeout will often occur during startup.

To Reproduce

  • Pega 8.3.0
  • Azure AKS
  • Azure SQL Server

Expected behavior
Startup completes without error.

Screenshots / Logs

com.microsoft.sqlserver.jdbc.SQLServerException: Read timed out
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:2924)
    at com.microsoft.sqlserver.jdbc.TDSChannel.read(IOBuffer.java:2029)
    at com.microsoft.sqlserver.jdbc.TDSReader.readPacket(IOBuffer.java:6418)
    at com.microsoft.sqlserver.jdbc.TDSCommand.startResponse(IOBuffer.java:7581)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet$CursorFetchCommand.doExecute(SQLServerResultSet.java:5459)
    at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7194)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2979)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:248)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet.doServerFetch(SQLServerResultSet.java:5496)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet.next(SQLServerResultSet.java:1038)
    at org.apache.tomcat.dbcp.dbcp2.DelegatingResultSet.next(DelegatingResultSet.java:1160)
    at org.apache.tomcat.dbcp.dbcp2.DelegatingResultSet.next(DelegatingResultSet.java:1160)
    at com.pega.pegarules.internal.bootstrap.phase2.jdbc.AbstractJdbcJarReader.readEntry(AbstractJdbcJarReader.java:408)
    at com.pega.pegarules.internal.bootstrap.phase2.jdbc.PegaJdbcURLConnection.connect(PegaJdbcURLConnection.java:250)
    at com.pega.pegarules.internal.bootstrap.phase2.jdbc.PegaJdbcURLConnection.getInputStream(PegaJdbcURLConnection.java:311)
    at java.base/java.net.URL.openStream(URL.java:1139)
    at com.pega.pegarules.internal.bootstrap.phase2.PRBootstrapImpl.extractJarsFromDB(PRBootstrapImpl.java:1035)
    at com.pega.pegarules.internal.bootstrap.phase2.PRBootstrapImpl._finishInitialization_privact(PRBootstrapImpl.java:264)
    at com.pega.pegarules.internal.bootstrap.phase2.PRBootstrapImpl.finishInitialization(PRBootstrapImpl.java:128)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at com.pega.pegarules.internal.bootstrap.PRBootstrap.checkForStartup(PRBootstrap.java:693)
    at com.pega.pegarules.internal.bootstrap.PRBootstrap.invokeMethodPropagatingThrowable(PRBootstrap.java:419)
    at com.pega.pegarules.boot.internal.extbridge.AppServerBridgeToPega.invokeMethodPropagatingThrowable(AppServerBridgeToPega.java:224)
    at com.pega.pegarules.boot.internal.extbridge.AppServerBridgeToPega.invokeMethod(AppServerBridgeToPega.java:273)
    at com.pega.pegarules.internal.web.servlet.WebAppLifeCycleListenerBoot.contextInitialized(WebAppLifeCycleListenerBoot.java:92)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4685)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5146)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:717)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:690)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:705)
    at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:631)
    at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1831)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
    at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
    at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:526)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:425)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1576)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:309)
    at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
    at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423)
    at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366)
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:936)
    at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:841)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
    at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
    at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.StandardService.startInternal(StandardService.java:421)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:633)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:343)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:474)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.base/java.net.SocketInputStream.socketRead0(Native Method)
    at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
    at com.microsoft.sqlserver.jdbc.TDSChannel.read(IOBuffer.java:2023)

Additional context
This appears to be related to the default socket timeout value. Changing the socket timeout with connectionProperties="socketTimeout=60000" seems to resolve the issue. The root cause seems to be the unit of time for socketTimeout: with some JDBC drivers it is in seconds, with others it is in milliseconds.
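
A hedged sketch of where such a property would live: connectionProperties is an attribute of the JDBC Resource element in the Tomcat context template. The attributes shown here are illustrative, not the template's actual contents. Note the unit mismatch mentioned above: the Microsoft SQL Server driver interprets socketTimeout in milliseconds, while the PostgreSQL driver interprets it in seconds.

```xml
<!-- Illustrative fragment only; the real template defines many more attributes. -->
<Resource name="jdbc/PegaRULES"
          auth="Container"
          type="javax.sql.DataSource"
          connectionProperties="socketTimeout=60000"/>
```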

Expose Cassandra encryption settings

Is your feature request related to a problem? Please describe.
The prconfig settings for using Cassandra client encryption currently have to be set via NODE_SETTINGS

Describe the solution you'd like
These settings should be exposed as environment variables.

Additional context
This will allow easier testing with these settings configured.

Build failing due to clair-scanner connection problem

The current builds are failing due to a clair scanner problem.

$ clair-scanner -w tests/cve-scan-whitelist.yaml -c "http://127.0.0.1:6060" --threshold="High" --ip "$(ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')" $IMAGE_NAME:latest
Device "eth0" does not exist.
2019/10/08 20:14:11 [INFO] ▶ Start clair-scanner
2019/10/08 20:14:16 [INFO] ▶ Server listening on port 9279
2019/10/08 20:14:16 [INFO] ▶ Analyzing 0695c281e5f4d28f2363f55bebfdc752a9e4679eed17a055005113f0635ff613
2019/10/08 20:14:16 [CRIT] ▶ Could not analyze layer: Clair responded with a failure: Got response 400 with message {"Error":{"Message":"could not find layer"}}
The command "clair-scanner -w tests/cve-scan-whitelist.yaml -c "http://127.0.0.1:6060" --threshold="High" --ip "$(ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')" $IMAGE_NAME:latest" exited with 1.

Done. Your build exited with 1.

The cause seems to be Device "eth0" does not exist. Since there were no changes in this repo, this may be caused by a change in the Travis VM.
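
The fragile part is the hard-coded eth0 interface name, which no longer exists on the Travis VM. A defensive sketch of the IP lookup (function names are illustrative; host_ip assumes iproute2 is available where it is called):

```shell
# Extract the first IPv4 address from `ip -4 addr show` output on stdin.
first_ipv4() {
  grep -oE 'inet [0-9]+(\.[0-9]+){3}' | head -n1 | cut -d' ' -f2
}

# Resolve the host IP, falling back to the default-route interface when the
# requested one (e.g. eth0) does not exist.
host_ip() {
  iface="$1"
  if ! ip -4 addr show "$iface" >/dev/null 2>&1; then
    iface=$(ip route show default | grep -oE 'dev [^ ]+' | cut -d' ' -f2 | head -n1)
  fi
  ip -4 addr show "$iface" | first_ipv4
}
```

The clair-scanner call could then use `--ip "$(host_ip eth0)"` instead of the inline grep.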

Read Time Out issues using Tomcat DataSourceFactory

Describe the bug
Read timeouts are observed soon after starting systems. The connection pool appears to become exhausted, and then the environment goes down.

This issue is not seen if the following line is removed from the context.xml template:
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"

The proposal is to remove this declared data source factory, so that we use DBCP2 and leverage the correct connection settings.
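
A sketch of the proposed change: with the factory attribute removed, Tomcat falls back to its built-in DBCP2-based factory for javax.sql.DataSource resources. Attribute values here are illustrative, not the template's actual contents.

```xml
<!-- No factory attribute: Tomcat uses its default (DBCP2-based) resource factory. -->
<Resource name="jdbc/PegaRULES"
          auth="Container"
          type="javax.sql.DataSource"
          validationQuery="SELECT 1"
          testOnBorrow="true"/>
```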

Additional context
This issue is tracked internally by SE-60201

Error page exposes Tomcat version information

Is your feature request related to a problem? Please describe.
The error page displayed by the appserver in certain cases (such as when it fails to parse a URL) exposes the exact version information for the appserver.

Describe the solution you'd like
These error pages should not display the appserver version or the stacktrace.
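
One standard way to satisfy this in Tomcat is to configure the ErrorReportValve inside the Host element of conf/server.xml. This is a sketch of that approach, not necessarily the change the project adopted:

```xml
<!-- Suppress the version banner and stack trace on Tomcat error pages. -->
<Valve className="org.apache.catalina.valves.ErrorReportValve"
       showReport="false"
       showServerInfo="false"/>
```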

Additional context
This is a security requirement

Migrate to travis-ci.com

Describe the bug
Per the change announced on the Travis website:
"Please be aware travis-ci.org will be shutting down by end of May 2021. Please consider migrating to travis-ci.com."
Migrate the repository to travis-ci.com.
Build links on the repository README.md need to be modified.

To Reproduce
NA

Expected behavior
Build history is migrated and accessible on Travis-ci.com
Build links on README.md working

build fails on gpg errors

Current build fails with gpg errors

gpg: directory '/root/.gnupg' created
gpg: keybox '/root/.gnupg/pubring.kbx' created
gpg: cannot open '/dev/tty': No such device or address
gpg: cannot open '/dev/tty': No such device or address
gpg: cannot open '/dev/tty': No such device or address
gpg: cannot open '/dev/tty': No such device or address
gpg: /root/.gnupg/trustdb.gpg: trustdb created

An easy fix is to add --no-tty to the gpg parameters.
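
A sketch of how the fix would look in the affected Dockerfile step; --batch is an additional assumption here that suppresses other interactive prompts:

```dockerfile
# Verify the downloaded jar without requiring a TTY.
RUN gpg --batch --no-tty --verify catalina-jmx-remote.jar.asc catalina-jmx-remote.jar
```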

support for custom certificates to be injectable to infinity

#146
Is your feature request related to a problem? Please describe.
support for custom certificates to be injectable to infinity

Describe the solution you'd like
Mount the certificates into the /opt/pega/certs folder, then import them into the cacerts file in the lib/security folder of the JVM.

Additional context
Currently we have support only for .cer files; this will need to be enhanced in the future.
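
A dry-run sketch of the requested mechanism: for every .cer file mounted under a certificate directory, print the keytool command that would import it into the JVM-wide cacerts trust store. The function name, alias scheme, and the default 'changeit' store password are illustrative assumptions, not the repository's actual implementation; drop the echo to perform the import.

```shell
# Print (dry-run) the keytool import commands for all mounted .cer files.
print_import_commands() {
  cert_dir="$1"    # e.g. /opt/pega/certs
  cacerts="$2"     # e.g. "$JAVA_HOME/lib/security/cacerts"
  for cert in "$cert_dir"/*.cer; do
    [ -e "$cert" ] || continue                 # nothing mounted
    alias="$(basename "$cert" .cer)"           # alias derived from the file name
    echo keytool -importcert -noprompt -trustcacerts \
      -keystore "$cacerts" -storepass changeit \
      -alias "$alias" -file "$cert"
  done
}
```

With the echo removed, each command would need keytool on the PATH and write access to the trust store.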

Allow custom application root context for Pega tomcat nodes

In a multi-tenant environment, application root context may be used to differentiate pega deployments.

If the helm charts define the PEGA_APP_CONTEXT_ROOT env var to be something other than prweb, the extracted prweb.war and associated config should be moved to a webapps directory with the specified name. This enables application access using the custom context.

Although the custom context can be baked into the image, we want customers to have the ability to change the context without building a new image.
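
A sketch of the entrypoint logic this would need: when PEGA_APP_CONTEXT_ROOT is set to something other than prweb, rename the exploded webapp and its context descriptor to match. The function name is illustrative, and the layout assumes a standard CATALINA_HOME tree:

```shell
# Rename the prweb webapp and descriptor to the custom context root, if any.
relocate_context() {
  catalina_home="$1"
  context="${PEGA_APP_CONTEXT_ROOT:-prweb}"
  if [ "$context" != "prweb" ]; then
    mv "$catalina_home/webapps/prweb" "$catalina_home/webapps/$context"
    mv "$catalina_home/conf/Catalina/localhost/prweb.xml" \
       "$catalina_home/conf/Catalina/localhost/$context.xml"
  fi
}
```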

PR builds are failing with docker pull rate limit issue

Describe the bug
PR builds for this repository are failing with a Docker pull rate limit error and are not able to authenticate with the same credentials as used on master.

To Reproduce
Check the Travis builds for PRs.

Expected behavior
Successful run, with images scanned for vulnerabilities.

Screenshots
NA

Desktop (if applicable, please complete the following information):
NA

Server (if applicable, please complete the following information):
https://travis-ci.org/github/pegasystems/docker-pega-web-ready/builds/772285429

Support for authenticated JDBC driver URIs

Authenticated JDBC Driver URI
Currently, we are unable to set JDBC_DRIVER_URI to a web server that requires authentication. Authenticated JDBC driver URI support would be very helpful for easily launching Pega web containers within unsecured networks and when driver downloads require authentication.

Describe the solution you'd like
It would be great if the following variables were exposed so that JDBC_DRIVER_URI can reside on an authenticated server.

  • JDBC_DRIVER_URI_USERNAME
  • JDBC_DRIVER_URI_PASSWORD

Describe alternatives you've considered
Constructing our own image with the drivers baked in would add complexity; hence, the first choice would be support for authenticated JDBC driver URIs.
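
A sketch of how the entrypoint's driver download could honor these variables. The two variable names come from this request; the function name and curl flags are illustrative, and the command is echoed rather than executed so it can be inspected (drop the echo to actually download):

```shell
# Build the curl invocation for the driver download, adding HTTP basic auth
# only when the proposed credentials are present.
download_driver() {
  uri="$1"; dest="$2"
  set -- -fsSL -o "$dest"
  if [ -n "$JDBC_DRIVER_URI_USERNAME" ]; then
    set -- "$@" -u "$JDBC_DRIVER_URI_USERNAME:$JDBC_DRIVER_URI_PASSWORD"
  fi
  echo curl "$@" "$uri"
}
```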

PostgreSQL Running?

Hi all,

Does this Docker image have PostgreSQL already running? I couldn't manage to get it up and running. Sorry for such a newbie question.

Regards,
Zul

Clair Scanner CVE Threshold

Per agreement with the Pega security team, the default build-failure threshold for the image scanner in this OSS project should be High, not Medium.

All CVEs, regardless of severity, will continue to be logged and auditable by viewing the Travis build. The build should only fail for non-whitelisted CVEs of severity 'High' or higher.

prconfig/dsm/services setting doesn't get set for Stream nodes

Describe the bug
Currently the intention is for the 'prconfig/dsm/services' prconfig setting to be set via a JNDI Environment entry contained within the /usr/local/tomcat/conf/Catalina/localhost/prweb.xml file. This is not happening (and, for that matter, seems never to have happened) because the prconfig keys are, for lack of a better term, not JNDI-friendly.

The prweb.xml file for Stream nodes contains the following:

  <Environment name="prconfig/dsm/services" value="StreamServer" type="java.lang.String" />
  <Environment name="prconfig/dsm/services/stream/pyUnpackBasePath" value="/tmp/kafka" type="java.lang.String" />
  <Environment name="prconfig/dsm/services/stream/server_properties/unclean.leader.election.enable" value="false" type="java.lang.String" />

These environment settings all start with the common root 'prconfig/dsm/services', and this is problematic. The name 'prconfig/dsm/services' is bound to a string; however, when Tomcat builds out the JNDI environment on startup, it processes the other config settings (like 'prconfig/dsm/services/stream/pyUnpackBasePath') and in doing so builds out the intervening JNDI contexts ('prconfig', 'prconfig/dsm', 'prconfig/dsm/services', ...). Depending on the order in which this file is processed, it seems that you either see this message during startup:

20-Dec-2019 12:01:31.658 SEVERE [main] org.apache.catalina.core.NamingContextListener.addEnvironment naming.invalidEnvEntryValue

catalina_behavior1.log

or in other environments you see a ClassCastException while processing the longer prconfig keys, because Tomcat attempts to cast the object bound to 'prconfig/dsm/services' to a javax.naming.Context but the name is already bound to a java.lang.String. Both behaviors have been seen; it is not clear why there is a behavioral difference, but my suspicion is that a dependency (the OS, perhaps) being pulled into more recent Docker images is responsible.

To Reproduce
Use the pega-helm-charts to deploy Pega into K8S environment. Examine the /usr/local/tomcat/logs/catalina*.log for log messages related to the processing of prweb.xml.

Expected behavior
The setting ('prconfig/dsm/services') would be set (with value 'StreamServer') for the Pega application. Additionally, no error messages would be reported while tomcat builds out the JNDI environment.

Screenshots
n/a

Desktop (if applicable, please complete the following information):
n/a

Server (if applicable, please complete the following information):
n/a

Additional context
n/a

Pega image doesn't have the Pega Installer

Hi Team,

I've tried to use the same Dockerfile, and it looks like the image pegasystems/tomcat:9-jdk11 as released doesn't have the Pega installer. When I execute it, Tomcat is able to launch but throws the error: Message The requested resource [/prweb] is not available.

I've also checked inside the Tomcat container and can't see the prweb installer files. Please let us know if we need to download and install it manually, or whether it can be configured in the Dockerfile itself.

[Proposal] Image Versioning

Is your feature request related to a problem? Please describe.
As a consumer of the docker-pega-web-ready image, I want more control over the adoption of incoming changes in my pipeline.

Current image versioning and tagging policy

Branch       Tags
master       2.1.0, latest
v2.0.0       2.0.0
v0.1-Pega7   none

Describe the solution you'd like

Branch       Tags
master       2.1.0-%BUILD%, 2.1.0, latest
v2.0.0       2.0.0-%BUILD%, 2.0.0
v0.1-Pega7   none

The %BUILD% value must be unique for every build and sequential for each branch. Unfortunately using something as easy as the travis build # would cause gaps between image version numbers and may result in confusion. If a build number is not feasible, perhaps a build date would be a good option.

Readme for /kafkadata mount point is incorrect

Describe the bug
The README indicates that the Kafka mount point is /kafkadata, but I can see clearly in the files that it's actually using /opt/pega/kafkadata.

To Reproduce
N/A

Expected behavior
Readme should be accurate

Screenshots
(three screenshots attached to the original issue)

Desktop (if applicable, please complete the following information):
N/A

Server (if applicable, please complete the following information):
N/A

Additional context
N/A

NODE_TIER Not exposed as an Env Var in Dockerfile or README

Describe the bug
The NODE_TIER env var is passed in to pega as -DNodeTier=${NODE_TIER} in setenv.sh. However, this env var is not exposed in the Dockerfile, and it is not documented in the README.

To Reproduce
N/A

Expected behavior

  • NODE_TIER should be a declared Environmental Variable in the Dockerfile
  • NODE_TIER should be documented in the README

Screenshots
N/A

Desktop (if applicable, please complete the following information):
N/A

Server (if applicable, please complete the following information):
N/A

Additional context
N/A

Validation query works for Postgres but not for Oracle

Since the validation query validationQuery="SELECT 1" works only for Postgres and not for Oracle, the PR Web application container will fail to start with the error: Failed to validate a newly established connection.

The validation query is hardcoded in docker-pega7-tomcat-ready/conf/Catalina/localhost/prweb.xml
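
For context, Oracle has no bare "SELECT 1"; the standard Oracle idiom is "SELECT 1 FROM DUAL", so the hardcoded query needs to vary by database (or be omitted, since JDBC4 drivers can validate via Connection.isValid()). A sketch with illustrative attributes:

```xml
<!-- Illustrative: for Oracle the validation query needs a FROM clause. -->
<Resource name="jdbc/PegaRULES"
          auth="Container"
          type="javax.sql.DataSource"
          validationQuery="SELECT 1 FROM DUAL"/>
```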

Support for external Kafka service as Stream node

Since 8.7.0, running a Pega Stream node is a deprecated configuration. It would be neat to have the possibility to configure an external Kafka service using environment variables, instead of needing to update prconfig/DSS settings manually, the same way it was done for Hazelcast and Cassandra.

Pega images based on Red Hat Universal Base Image (UBI)?

Hello Pegasystems,

A requirement from our organisation is that all container images must be built on a Red Hat Universal Base Image. Could you give me any information on what base image the Pega images are built on?

If the Pega images are built on another base image (which I expect), would it be possible to rebuild these images with a Red Hat Universal Base Image? Either by you or by us.

Thank you in advance.

gpg: Can't check signature: public key not found

Hi,
I'm trying to create a few Docker images using this source code. The goal is to have a local image per version of Tomcat. But before being able to do this, I can't even build this image.

I clone this repository and run in the current repo:
docker build -t pegaready .

The instructions in the Dockerfile run until stopping at step 29/48:
RUN curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL} -o catalina-jmx-remote.jar && curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL}.asc -o catalina-jmx-remote.jar.asc && gpg --verify catalina-jmx-remote.jar.asc && mv catalina-jmx-remote.jar /usr/local/tomcat/lib/catalina-jmx-remote.jar && rm catalina-jmx-remote.jar.asc
Here is the error:

Step 29/48 : RUN curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL} -o catalina-jmx-remote.jar && curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL}.asc -o catalina-jmx-remote.jar.asc && gpg --verify catalina-jmx-remote.jar.asc && mv catalina-jmx-remote.jar /usr/local/tomcat/lib/catalina-jmx-remote.jar && rm catalina-jmx-remote.jar.asc
 ---> Running in 9c26dac3c27e
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13082  100 13082    0     0  17446      0 --:--:-- --:--:-- --:--:-- 17442
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   836  100   836    0     0   2683      0 --:--:-- --:--:-- --:--:--  2688
gpg: directory `/root/.gnupg' created
gpg: new configuration file `/root/.gnupg/gpg.conf' created
gpg: WARNING: options in `/root/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring `/root/.gnupg/pubring.gpg' created
gpg: assuming signed data in `catalina-jmx-remote.jar'
gpg: Signature made Fri 29 Sep 2017 12:26:00 PM UTC using RSA key ID D63011C7
gpg: Can't check signature: public key not found
The command '/bin/sh -c curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL} -o catalina-jmx-remote.jar && curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL}.asc -o catalina-jmx-remote.jar.asc && gpg --verify catalina-jmx-remote.jar.asc && mv catalina-jmx-remote.jar /usr/local/tomcat/lib/catalina-jmx-remote.jar && rm catalina-jmx-remote.jar.asc' returned a non-zero code: 2

It seems to be a problem with the public key. Could somebody try to build this image to see whether it's a general problem?

Thank you in advance.

Expose CATALINA_OPTS as environment variable

Integration with Pinpoint APM tools.
To integrate Pinpoint or other APM tools, the agent jar path and configuration have to be passed in CATALINA_OPTS. But CATALINA_OPTS is not exposed as an environment variable, so we are unable to set APM tool configuration in CATALINA_OPTS.

Describe the solution you'd like
https://github.com/pegasystems/docker-pega-web-ready/blob/master/tomcat-bin/setenv.sh
CATALINA_OPTS is initialized to an empty string, which discards any value supplied to the container. Removing that initialization will help.
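
An alternative to simply deleting the line would be for setenv.sh to append its defaults to whatever CATALINA_OPTS the container was started with. A sketch (the function name and sample flag are illustrative):

```shell
# Preserve caller-supplied CATALINA_OPTS and append image defaults to it.
append_catalina_opts() {
  CATALINA_OPTS="${CATALINA_OPTS:-} $*"
}

# Example (in setenv.sh): append_catalina_opts -Djava.awt.headless=true
```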

Intermittent Travis Build failures when connecting to clair database

Describe the bug
Travis builds are failing while connecting to clair database for vulnerability scan

To Reproduce
Any branch, PR or master build

Expected behavior
Build should run without any socket connection refused errors.

Server (if applicable, please complete the following information):
https://travis-ci.org/github/pegasystems/docker-pega-web-ready/builds/773888390

Additional context
This seems to be a known issue with clair. Use HOSTIP while starting the server.
arminc/clair-scanner#63

PostgreSQL driver 42.2.11 Exception: Problem committing a transaction

Describe the bug
The recently released PostgreSQL driver version 42.2.11 contains a change in behavior which is resulting in an exception that stops Platform startup. 42.2.10 does not exhibit this.

Looks like this is related to the following change in 42.2.11: pgjdbc/pgjdbc#1729

This is reverted for the next patch (42.2.12) but will be re-introduced in 42.3.0: “This reverts commit adcb194. we still want to do this but it is a breaking change and we will introduce this change in 42.3.0”. Therefore the issue will still need to be fixed.

To Reproduce
Startup a system using PostgreSQL driver version 42.2.11

Logs

20-03-20 13:34:26,294 [os-app1.bounceme.net] [          ] [                    ] [                    ] (ernal.store.ManagedTransaction) ERROR   - There was a problem committing a transaction on database pegarules
com.pega.pegarules.pub.database.DatabaseException: Database-General    There was a problem committing a transaction on database pegarules    0    25P02    The database returned ROLLBACK, so the transaction cannot be committed. Transaction failure cause is <<ERROR: duplicate key value violates unique constraint "pr_sys_ruleset_index_pk"
  Detail: Key (pzinskey)=(SYSTEM-RULESET-INDEX C5F8D11E1C6712B822BE592273FD678F!PEGA-PROCESSCOMMANDER) already exists.>>
DatabaseException caused by prior exception: org.postgresql.util.PSQLException: The database returned ROLLBACK, so the transaction cannot be committed. Transaction failure cause is <<ERROR: duplicate key value violates unique constraint "pr_sys_ruleset_index_pk"
  Detail: Key (pzinskey)=(SYSTEM-RULESET-INDEX C5F8D11E1C6712B822BE592273FD678F!PEGA-PROCESSCOMMANDER) already exists.>>
 | SQL Code: 0 | SQL State: 25P02

    at com.pega.pegarules.data.internal.access.ExceptionInformation.createAppropriateExceptionDueToDBFailure(ExceptionInformation.java:384) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.access.ExceptionInformation.createExceptionDueToDBFailure(ExceptionInformation.java:363) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.store.ManagedTransaction.commit(ManagedTransaction.java:422) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.store.DataStoreManager.commit(DataStoreManager.java:344) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.store.DataStoreManager.doInTransaction(DataStoreManager.java:217) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.access.SaverImpl.save(SaverImpl.java:234) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.access.SaverImpl.saveAllOrNone(SaverImpl.java:128) ~[prprivate-data.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.RulesetIndexWriter.persistRSLRows(RulesetIndexWriter.java:425) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.RulesetIndexWriter.writeIndexRows(RulesetIndexWriter.java:297) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.RulesetIndexWriter.writeRuleSetListHashToDB(RulesetIndexWriter.java:211) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.LocalizedApplicationContextImmutableImpl.<init>(LocalizedApplicationContextImmutableImpl.java:155) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.ApplicationContextFactory.createLocalizedApplicationContext(ApplicationContextFactory.java:92) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.GlobalContextCache.getLocalizedApplicationContext(GlobalContextCache.java:456) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.AuthorizationContextManagerImpl.getContext(AuthorizationContextManagerImpl.java:109) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.SessionAuthorization.getContext(SessionAuthorization.java:497) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.SessionAuthorization.getContext(SessionAuthorization.java:441) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.SessionAuthorization.reset(SessionAuthorization.java:376) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.SessionAuthorization.<init>(SessionAuthorization.java:147) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRRequestorBase.allocateSessionAuthorization(PRRequestorBase.java:548) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRRequestorImpl.allocateSessionAuthorization(PRRequestorImpl.java:2289) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.Authorization.reset(Authorization.java:258) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.Authorization.reset(Authorization.java:196) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.Authorization.onBeforeThreadUse(Authorization.java:1779) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.ThreadPassivation.configureThreadImpl(ThreadPassivation.java:344) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRThreadBase.configureThread(PRThreadBase.java:184) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRThreadImpl.<init>(PRThreadImpl.java:157) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRThreadImpl.acquire(PRThreadImpl.java:182) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.RequestorThreadSync.getOrCreateThread(RequestorThreadSync.java:195) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.RequestorThreadSync.getOrCreateThread(RequestorThreadSync.java:171) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.RequestorThreadSync.getOrCreateThread(RequestorThreadSync.java:167) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.RequestorPassivation.configureRequestorImpl(RequestorPassivation.java:510) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRRequestorBase.configureRequestor(PRRequestorBase.java:491) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRRequestorImpl.<init>(PRRequestorImpl.java:332) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRRequestorImpl.acquire(PRRequestorImpl.java:353) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.MultiThreadRequestorFactory.acquire(MultiThreadRequestorFactory.java:76) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.NodeRequestorMgt.createRequestorImpl(NodeRequestorMgt.java:1671) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.NodeRequestorMgt.createRequestorImpl(NodeRequestorMgt.java:1650) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRNodeImpl.initializeSystem(PRNodeImpl.java:878) ~[prprivate-session.jar:?]

Pega internal ref BUG-549667

JDBC URL will not work for Oracle

The Oracle JDBC URL format is:
jdbc:oracle:thin:@host:port/service-name

The host prefix @ is needed instead of // as defined in docker-entrypoint.sh:

JDBC_URL="jdbc:${JDBC_DB_TYPE}://${DB_HOST}:${DB_PORT}/${DB_NAME}${JDBC_URL_SUFFIX}"

I would suggest adding a new variable like JDBC_URL_PREFIX, which would have the default value '//' and could be overridden with '@'.

The JDBC_URL would then look like this in docker-entrypoint.sh:

JDBC_URL="jdbc:${JDBC_DB_TYPE}:${JDBC_URL_PREFIX}${DB_HOST}:${DB_PORT}/${DB_NAME}${JDBC_URL_SUFFIX}"
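
The suggestion above can be sketched as a small helper (the function name is illustrative; the variable names are the ones the entrypoint already uses, plus the proposed JDBC_URL_PREFIX):

```shell
# Build the JDBC URL, letting JDBC_URL_PREFIX default to "//" so existing
# deployments are unaffected while Oracle can override it with "@".
build_jdbc_url() {
  prefix="${JDBC_URL_PREFIX:-//}"
  echo "jdbc:${JDBC_DB_TYPE}:${prefix}${DB_HOST}:${DB_PORT}/${DB_NAME}${JDBC_URL_SUFFIX}"
}
```

With JDBC_DB_TYPE=oracle:thin and JDBC_URL_PREFIX=@ this yields a URL of the form jdbc:oracle:thin:@host:port/service-name.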

Impact of losing the /kafkadata mount point

We are transitioning to containerized pega applications (our current platform is mesos/dcos - there's no stateful-set like concept there). What is the actual impact of losing the /kafkadata mount point (e.g. during an update, due to a container failure, or if the vm that runs the container goes down) ? Assuming 1 new pega container (re)starts, and it doesn't have the /kafkadata folder, will that result in data loss? Same question in case all the pega container instances (of the same application) are missing the /kafkadata mount (after an update, or in case of nodes failure)?

Open source the git repo / Dockerfile for the base image pegasystems/tomcat:9-jdk11

Is your feature request related to a problem? Please describe.
In order to keep our docker images in sync with pegasystems we would like to use your base image: pegasystems/tomcat:9-jdk11 and build on top of that.

Describe the solution you'd like
I would like to see the Dockefile / git repository used to build that image. Can you opensource that?

The default initial JDBC connection count is too high for large clusters

Describe the bug
The default initial JDBC connection count (JDBC_INITIAL_SIZE) is currently 10, which is higher than needed and too high for large clusters. When multiple PRPC instances start at the same time, this can lead to large connection spikes, potentially exhausting the maximum number of connections configured at the database. A value of 4 should be sufficient.
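
For context, JDBC_INITIAL_SIZE presumably feeds the connection pool's initialSize attribute in the JDBC Resource (the mapping is assumed from the variable name; other attributes are illustrative):

```xml
<Resource name="jdbc/PegaRULES"
          auth="Container"
          type="javax.sql.DataSource"
          initialSize="4"/>
```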

To Reproduce
This is non-trivial to reproduce, since it requires a large cluster. But outages due to connection spikes at startup have been observed in large clusters. Those were resolved by lowering this value to 4.

Expected behavior
When multiple PRPC instances start concurrently, this should not lead to exhaustion of DB connections.

Screenshots
n/a

Desktop (if applicable, please complete the following information):
n/a

Server (if applicable, please complete the following information):
n/a

Additional context
For internal reference, also see US-400591
