pegasystems / docker-pega-web-ready

Docker project for generating a Tomcat Docker image for Pega

License: Apache License 2.0

Shell 63.09% Dockerfile 24.60% HTML 6.64% Makefile 5.68%
Topics: cd, ci, cicd, docker, docker-image, docker-pega-web-ready, pega, ready, tomcat, web

docker-pega-web-ready's People

Contributors

akshithac-21, amphibithen, apegadavis, browm2, cdancy, dcasavant, khick77, kishorv10, madhuriarugula, mandeep-pega, misterdorito, nekhilkotha, pega-abhinav, pega-chikv, pega-roska, pega-sagas1, pega-talba, pegadave, petehayes, saikarthik528, saurabh-16, shashikant-koder, slimatic, smootherbug, sushmareddyloka, viper0131, wonim2022, yashwanth-p, yashwanth-pega, zitikay


docker-pega-web-ready's Issues

Expose Cassandra encryption settings

Is your feature request related to a problem? Please describe.
The prconfig settings for using Cassandra client encryption currently have to be set via NODE_SETTINGS

Describe the solution you'd like
These settings should be exposed as environment variables.

Additional context
This will allow easier testing with these settings configured.
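A hypothetical sketch of what exposing these settings at run time could look like (the variable names below are illustrative only and are not existing image settings):

  docker run \
    -e CASSANDRA_CLIENT_ENCRYPTION=true \
    -e CASSANDRA_TRUSTSTORE=/opt/pega/certs/cassandra-truststore.jks \
    -e CASSANDRA_TRUSTSTORE_PASSWORD=changeit \
    pega-tomcat:latest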

Read Time Out issues using Tomcat DataSourceFactory

Describe the bug
Read timeouts are observed shortly after systems start. The connection pool appears to become exhausted, and the environment then goes down.

This issue is not seen if the following line is removed from the context.xml template:
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"

The proposal is to remove this declared data source factory so that the default DBCP2 pool is used and the correct connection settings take effect.

Additional context
This issue is tracked internally by SE-60201
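For context, a simplified sketch of the kind of JDBC Resource declaration involved (names and attribute values are illustrative, not the template's exact contents); the proposal is to drop the factory attribute so Tomcat falls back to its default DBCP2 pool:

  <Resource name="jdbc/PegaRULES"
            auth="Container"
            type="javax.sql.DataSource"
            factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
            url="jdbc:postgresql://db-host:5432/pega"
            driverClassName="org.postgresql.Driver" />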

Migrate to travis-ci.com

Describe the bug
Per the change announced on the Travis website:
Please be aware travis-ci.org will be shutting down by end of May 2021. Please consider migrating to travis-ci.com.
The repository should be migrated to travis-ci.com, and the build links in the repository README.md need to be updated.

To Reproduce
NA

Expected behavior
  • Build history is migrated and accessible on travis-ci.com
  • Build links in README.md work

JDBC Driver Timeout with MSSQL

Describe the bug
When starting Pega Platform using the Docker image and latest MSSQL driver jar, a DB timeout will often occur during startup.

To Reproduce

  • Pega 8.3.0
  • Azure AKS
  • Azure SQL Server

Expected behavior
Startup completes without error.

Screenshots / Logs

com.microsoft.sqlserver.jdbc.SQLServerException: Read timed out
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:2924)
    at com.microsoft.sqlserver.jdbc.TDSChannel.read(IOBuffer.java:2029)
    at com.microsoft.sqlserver.jdbc.TDSReader.readPacket(IOBuffer.java:6418)
    at com.microsoft.sqlserver.jdbc.TDSCommand.startResponse(IOBuffer.java:7581)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet$CursorFetchCommand.doExecute(SQLServerResultSet.java:5459)
    at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7194)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2979)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:248)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet.doServerFetch(SQLServerResultSet.java:5496)
    at com.microsoft.sqlserver.jdbc.SQLServerResultSet.next(SQLServerResultSet.java:1038)
    at org.apache.tomcat.dbcp.dbcp2.DelegatingResultSet.next(DelegatingResultSet.java:1160)
    at org.apache.tomcat.dbcp.dbcp2.DelegatingResultSet.next(DelegatingResultSet.java:1160)
    at com.pega.pegarules.internal.bootstrap.phase2.jdbc.AbstractJdbcJarReader.readEntry(AbstractJdbcJarReader.java:408)
    at com.pega.pegarules.internal.bootstrap.phase2.jdbc.PegaJdbcURLConnection.connect(PegaJdbcURLConnection.java:250)
    at com.pega.pegarules.internal.bootstrap.phase2.jdbc.PegaJdbcURLConnection.getInputStream(PegaJdbcURLConnection.java:311)
    at java.base/java.net.URL.openStream(URL.java:1139)
    at com.pega.pegarules.internal.bootstrap.phase2.PRBootstrapImpl.extractJarsFromDB(PRBootstrapImpl.java:1035)
    at com.pega.pegarules.internal.bootstrap.phase2.PRBootstrapImpl._finishInitialization_privact(PRBootstrapImpl.java:264)
    at com.pega.pegarules.internal.bootstrap.phase2.PRBootstrapImpl.finishInitialization(PRBootstrapImpl.java:128)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at com.pega.pegarules.internal.bootstrap.PRBootstrap.checkForStartup(PRBootstrap.java:693)
    at com.pega.pegarules.internal.bootstrap.PRBootstrap.invokeMethodPropagatingThrowable(PRBootstrap.java:419)
    at com.pega.pegarules.boot.internal.extbridge.AppServerBridgeToPega.invokeMethodPropagatingThrowable(AppServerBridgeToPega.java:224)
    at com.pega.pegarules.boot.internal.extbridge.AppServerBridgeToPega.invokeMethod(AppServerBridgeToPega.java:273)
    at com.pega.pegarules.internal.web.servlet.WebAppLifeCycleListenerBoot.contextInitialized(WebAppLifeCycleListenerBoot.java:92)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4685)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5146)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:717)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:690)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:705)
    at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:631)
    at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1831)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
    at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
    at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:526)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:425)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1576)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:309)
    at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
    at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423)
    at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366)
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:936)
    at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:841)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
    at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
    at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.StandardService.startInternal(StandardService.java:421)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:633)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:343)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:474)
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.base/java.net.SocketInputStream.socketRead0(Native Method)
    at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
    at com.microsoft.sqlserver.jdbc.TDSChannel.read(IOBuffer.java:2023)

Additional context
This appears to be related to the default socket timeout value. Changing the socket timeout with connectionProperties="socketTimeout=60000" seems to resolve the issue. The root cause seems to be the unit of the socketTimeout value: for some JDBC drivers it is seconds, for others milliseconds.
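A sketch of the workaround as it could appear on the JDBC Resource in context.xml (all other attributes omitted; for the MSSQL driver, socketTimeout is in milliseconds):

  <Resource name="jdbc/PegaRULES"
            type="javax.sql.DataSource"
            connectionProperties="socketTimeout=60000" />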

Open source the git repo / Dockerfile for the base image pegasystems/tomcat:9-jdk11

Is your feature request related to a problem? Please describe.
In order to keep our docker images in sync with pegasystems we would like to use your base image: pegasystems/tomcat:9-jdk11 and build on top of that.

Describe the solution you'd like
I would like to see the Dockerfile / git repository used to build that image. Can you open-source that?

Validation query works for Postgres but not for Oracle

Since the validation query validationQuery="SELECT 1" works only for Postgres and not for Oracle, the PR Web application container will fail to start with the error: Failed to validate a newly established connection.

The validation query is hardcoded in docker-pega7-tomcat-ready/conf/Catalina/localhost/prweb.xml
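For illustration, Oracle requires a FROM clause in SELECT statements, so an Oracle-compatible query would be needed; making the query configurable or database-specific would avoid the hardcoding:

  validationQuery="SELECT 1"            (current: works on Postgres, fails on Oracle)
  validationQuery="SELECT 1 FROM DUAL"  (works on Oracle)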

PegaMKTSMS and PegaMKTEmail are Valid NodeTypes for CDH Deployments

Describe the bug
PegaMKTSMS and PegaMKTEmail are valid NodeTypes for CDH deployments, but there is no configuration option to pass those NodeTypes in.

To Reproduce
Set -DNode=PegaMKTSMS or -DNode=PegaMKTEmail; the platform returns an error that this is an invalid NodeType and that the valid applicable NodeTypes are [Search, WebUser, BIX, BackgroundProcessing, Custom1, Custom2, Custom3, Custom4, Custom5, DDS, ADM, Batch, RealTime, RTDG, Stream].

Expected behavior
For CDH deployments, PegaMKTSMS and PegaMKTEmail should be included as valid NodeTypes.

Additional context
As of today there is no way to pass an ApplicableNodeTypes environment value to Tomcat. The Tomcat images used in the existing cloud (Cuttyhunk) deployment have an option to pass the applicable NodeTypes; a similar parameter is needed in this repository's Tomcat image so that CDH deployments can pass applicableNodeTypes.

PR builds are failing with docker pull rate limit issue

Describe the bug
PR builds for this repository are failing due to the Docker pull rate limit and cannot authenticate with the same credentials used on master.

To Reproduce
Check the Travis builds for PRs.

Expected behavior
The build runs successfully and scans the images for vulnerabilities.

Screenshots
NA

Desktop (if applicable, please complete the following information):
NA

Server (if applicable, please complete the following information):
https://travis-ci.org/github/pegasystems/docker-pega-web-ready/builds/772285429

Add Travis-CI pipeline

This project should build continuously using Travis-CI and publish the resulting pega-ready image to DockerHub.

Running image as non-root user fails to copy provided conf files like prconfig.xml

Describe the bug
If I provide a volume mapping for the /opt/pega/config directory and run the image as a non-root user, the entrypoint script fails to copy over the various config files.

To Reproduce

  1. Create a local directory containing a prconfig.xml file (/var/customconf/prconfig.xml)
  2. Run the image as a non-root user mapping the volume $ docker run -v /var/customconf:/opt/pega/config --user 1000 pega-tomcat:latest

Expected behavior
The mapped files are copied correctly

Screenshots

2019/12/11 13:50:59 unable to chmod temp file: chmod /usr/local/tomcat/conf/Catalina/localhost/prweb.xml: operation not permitted
Loading prlog4j2 from /opt/pega/config/prlog4j2.xml...
cp: cannot create regular file '/usr/local/tomcat/webapps/prweb/WEB-INF/classes/prlog4j2.xml': Permission denied
Loading prconfig from /opt/pega/config/prconfig.xml...
cp: cannot create regular file '/usr/local/tomcat/webapps/prweb/WEB-INF/classes/prconfig.xml': Permission denied
Loading context.xml from /opt/pega/config/context.xml...
2019/12/11 13:50:59 unable to chmod temp file: chmod /usr/local/tomcat/conf/tomcat-users.xml: operation not permitted

build fails on gpg errors

Current build fails with gpg errors

gpg: directory '/root/.gnupg' created
gpg: keybox '/root/.gnupg/pubring.kbx' created
gpg: cannot open '/dev/tty': No such device or address
gpg: cannot open '/dev/tty': No such device or address
gpg: cannot open '/dev/tty': No such device or address
gpg: cannot open '/dev/tty': No such device or address
gpg: /root/.gnupg/trustdb.gpg: trustdb created

An easy fix is to add --no-tty to the gpg parameters.
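A sketch of the fix applied to a verification step (file names are illustrative):

  gpg --no-tty --verify catalina-jmx-remote.jar.asc catalina-jmx-remote.jar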

kafka data is not persistent

Describe the bug
The stream nodes are defined as StatefulSets with a PVC mounted at /opt/pega/streamvol. Nothing gets written to this location, although the intention appears to be to write Kafka data there. However, there is a setting in prweb.xml that makes the stream nodes write Kafka data to /opt/pega/kafkadata, which is not persistent. Any attempt to override this setting using prconfig.xml, DSS, or even context.xml does not work, as prweb.xml is "the" context file for the prweb WAR.

To Reproduce
Start the stream nodes and observe that Kafka data gets written to /opt/pega/kafkadata.

Expected behavior
The Kafka data location should be overridable to point to /opt/pega/streamvol using this article: https://community.pega.com/knowledgebase/articles/decision-management-overview/advanced-configurations-stream-service#4. In practice this does not work.

Chart version
What version of the charts are you using? Have you made any customizations?

Server (if applicable, please complete the following information):

  • OS: [Ubuntu 18.04]
  • Environment: [AKS]
  • Database: [PostgreSQL]
  • Pega version [8.4.0]

Additional context
NA

Pega8 support

Please add support for the native Pega 8.x version
(remove SMA, upgrade Tomcat to version 8.5 or higher).

Pega images based on Red Hat Universal Base Image (UBI)?

Hello Pegasystems,

A requirement from our organisation is that all container images must be built on a Red Hat Universal Base Image. Could you provide any information on which base image the Pega images are built on?

If the Pega images are built on another base image (which I expect), would it be possible to rebuild these images on a Red Hat Universal Base Image, either by you or by us?

Thank you in advance.

Support for authenticated JDBC driver URIs

Authenticated JDBC Driver URI
Currently, we are unable to point JDBC_DRIVER_URI at a web server that requires authentication. Support for authenticated JDBC driver URIs would be very helpful for launching Pega web containers within unsecured networks and when driver downloads require authentication.

Describe the solution you'd like
It would be great if the following variables were exposed so that JDBC_DRIVER_URI can reside on an authenticated server.

  • JDBC_DRIVER_URI_USERNAME
  • JDBC_DRIVER_URI_PASSWORD

Describe alternatives you've considered
Building our own image with the drivers baked in would add complexity; hence the first choice would be support for authenticated JDBC driver URIs.
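A sketch of how the proposed variables might be consumed by the entrypoint's driver download (the *_USERNAME/*_PASSWORD variables are the proposal, not existing settings; the target path is illustrative):

  # pass credentials to curl only when a username is provided
  if [ -n "$JDBC_DRIVER_URI_USERNAME" ]; then
    curl -ksSL -u "$JDBC_DRIVER_URI_USERNAME:$JDBC_DRIVER_URI_PASSWORD" \
      -o /usr/local/tomcat/lib/jdbc-driver.jar "$JDBC_DRIVER_URI"
  else
    curl -ksSL -o /usr/local/tomcat/lib/jdbc-driver.jar "$JDBC_DRIVER_URI"
  fi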

Removal of pega7-tomcat-ready from DockerHub

The pega7-tomcat-ready image needs to be removed from DockerHub to avoid confusion and the possibility of users deploying an unsupported version of the Docker image.

Removal is scheduled for Sept 1, 2019.

PostgreSQL driver 42.2.11 Exception: Problem committing a transaction

Describe the bug
The recently released PostgreSQL driver version 42.2.11 contains a change in behavior which is resulting in an exception that stops Platform startup. 42.2.10 does not exhibit this.

Looks like this is related to the following change in 42.2.11: pgjdbc/pgjdbc#1729

This is reverted for the next patch (42.2.12) but will be re-introduced in 42.3.0: “This reverts commit adcb194. we still want to do this but it is a breaking change and we will introduce this change in 42.3.0”. Therefore the issue will still need to be fixed.

To Reproduce
Start up a system using PostgreSQL driver version 42.2.11.

Logs

20-03-20 13:34:26,294 [os-app1.bounceme.net] [          ] [                    ] [                    ] (ernal.store.ManagedTransaction) ERROR   - There was a problem committing a transaction on database pegarules
com.pega.pegarules.pub.database.DatabaseException: Database-General    There was a problem committing a transaction on database pegarules    0    25P02    The database returned ROLLBACK, so the transaction cannot be committed. Transaction failure cause is <<ERROR: duplicate key value violates unique constraint "pr_sys_ruleset_index_pk"
  Detail: Key (pzinskey)=(SYSTEM-RULESET-INDEX C5F8D11E1C6712B822BE592273FD678F!PEGA-PROCESSCOMMANDER) already exists.>>
DatabaseException caused by prior exception: org.postgresql.util.PSQLException: The database returned ROLLBACK, so the transaction cannot be committed. Transaction failure cause is <<ERROR: duplicate key value violates unique constraint "pr_sys_ruleset_index_pk"
  Detail: Key (pzinskey)=(SYSTEM-RULESET-INDEX C5F8D11E1C6712B822BE592273FD678F!PEGA-PROCESSCOMMANDER) already exists.>>
 | SQL Code: 0 | SQL State: 25P02

    at com.pega.pegarules.data.internal.access.ExceptionInformation.createAppropriateExceptionDueToDBFailure(ExceptionInformation.java:384) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.access.ExceptionInformation.createExceptionDueToDBFailure(ExceptionInformation.java:363) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.store.ManagedTransaction.commit(ManagedTransaction.java:422) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.store.DataStoreManager.commit(DataStoreManager.java:344) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.store.DataStoreManager.doInTransaction(DataStoreManager.java:217) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.access.SaverImpl.save(SaverImpl.java:234) ~[prprivate-data.jar:?]
    at com.pega.pegarules.data.internal.access.SaverImpl.saveAllOrNone(SaverImpl.java:128) ~[prprivate-data.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.RulesetIndexWriter.persistRSLRows(RulesetIndexWriter.java:425) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.RulesetIndexWriter.writeIndexRows(RulesetIndexWriter.java:297) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.RulesetIndexWriter.writeRuleSetListHashToDB(RulesetIndexWriter.java:211) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.LocalizedApplicationContextImmutableImpl.<init>(LocalizedApplicationContextImmutableImpl.java:155) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.ApplicationContextFactory.createLocalizedApplicationContext(ApplicationContextFactory.java:92) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.GlobalContextCache.getLocalizedApplicationContext(GlobalContextCache.java:456) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.context.AuthorizationContextManagerImpl.getContext(AuthorizationContextManagerImpl.java:109) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.SessionAuthorization.getContext(SessionAuthorization.java:497) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.SessionAuthorization.getContext(SessionAuthorization.java:441) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.SessionAuthorization.reset(SessionAuthorization.java:376) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.SessionAuthorization.<init>(SessionAuthorization.java:147) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRRequestorBase.allocateSessionAuthorization(PRRequestorBase.java:548) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRRequestorImpl.allocateSessionAuthorization(PRRequestorImpl.java:2289) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.Authorization.reset(Authorization.java:258) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.Authorization.reset(Authorization.java:196) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.authorization.Authorization.onBeforeThreadUse(Authorization.java:1779) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.ThreadPassivation.configureThreadImpl(ThreadPassivation.java:344) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRThreadBase.configureThread(PRThreadBase.java:184) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRThreadImpl.<init>(PRThreadImpl.java:157) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRThreadImpl.acquire(PRThreadImpl.java:182) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.RequestorThreadSync.getOrCreateThread(RequestorThreadSync.java:195) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.RequestorThreadSync.getOrCreateThread(RequestorThreadSync.java:171) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.RequestorThreadSync.getOrCreateThread(RequestorThreadSync.java:167) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.RequestorPassivation.configureRequestorImpl(RequestorPassivation.java:510) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRRequestorBase.configureRequestor(PRRequestorBase.java:491) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRRequestorImpl.<init>(PRRequestorImpl.java:332) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRRequestorImpl.acquire(PRRequestorImpl.java:353) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.MultiThreadRequestorFactory.acquire(MultiThreadRequestorFactory.java:76) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.NodeRequestorMgt.createRequestorImpl(NodeRequestorMgt.java:1671) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.base.NodeRequestorMgt.createRequestorImpl(NodeRequestorMgt.java:1650) ~[prprivate-session.jar:?]
    at com.pega.pegarules.session.internal.mgmt.PRNodeImpl.initializeSystem(PRNodeImpl.java:878) ~[prprivate-session.jar:?]

Pega internal ref BUG-549667

Build failing due to clair-scanner connection problem

The current builds are failing due to a clair scanner problem.

$ clair-scanner -w tests/cve-scan-whitelist.yaml -c "http://127.0.0.1:6060" --threshold="High" --ip "$(ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')" $IMAGE_NAME:latest
Device "eth0" does not exist.
2019/10/08 20:14:11 [INFO] ▶ Start clair-scanner
2019/10/08 20:14:16 [INFO] ▶ Server listening on port 9279
2019/10/08 20:14:16 [INFO] ▶ Analyzing 0695c281e5f4d28f2363f55bebfdc752a9e4679eed17a055005113f0635ff613
2019/10/08 20:14:16 [CRIT] ▶ Could not analyze layer: Clair responded with a failure: Got response 400 with message {"Error":{"Message":"could not find layer"}}
The command "clair-scanner -w tests/cve-scan-whitelist.yaml -c "http://127.0.0.1:6060" --threshold="High" --ip "$(ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')" $IMAGE_NAME:latest" exited with 1.

Done. Your build exited with 1.

The cause seems to be the error Device "eth0" does not exist. Since there were no changes in this repo, this may be caused by a change in the Travis VM.

[Proposal] Image Versioning

Is your feature request related to a problem? Please describe.
As a consumer of the docker-pega-web-ready image, I want more control over the adoption of incoming changes in my pipeline.

Current image versioning and tagging policy

Branch       Tags
master       2.1.0, latest
v2.0.0       2.0.0
v0.1-Pega7   none

Describe the solution you'd like

Branch       Tags
master       2.1.0-%BUILD%, 2.1.0, latest
v2.0.0       2.0.0-%BUILD%, 2.0.0
v0.1-Pega7   none

The %BUILD% value must be unique for every build and sequential within each branch. Unfortunately, using something as simple as the Travis build number would cause gaps between image version numbers and might result in confusion. If a build number is not feasible, perhaps a build date would be a good option.

JDBC URL will not work for Oracle

The Oracle JDBC URL format is:
jdbc:oracle:thin:@host:port/service-name

The host prefix @ is needed instead of the // currently used in docker-entrypoint.sh:

JDBC_URL="jdbc:${JDBC_DB_TYPE}://${DB_HOST}:${DB_PORT}/${DB_NAME}${JDBC_URL_SUFFIX}"

I would suggest adding a new variable such as JDBC_URL_PREFIX, which would default to '//' and could be overridden with '@'.

JDBC_URL would then be built like this in docker-entrypoint.sh:

JDBC_URL="jdbc:${JDBC_DB_TYPE}:${JDBC_URL_PREFIX}${DB_HOST}:${DB_PORT}/${DB_NAME}${JDBC_URL_SUFFIX}"

Error page exposes Tomcat version information

Is your feature request related to a problem? Please describe.
The error page displayed by the appserver in certain cases (such as when it fails to parse a URL) exposes the exact version information for the appserver.

Describe the solution you'd like
These error pages should not display the appserver version or the stacktrace.

Additional context
This is a security requirement
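One common way to achieve this in Tomcat, shown as a sketch (placement and any existing valve configuration in the image's server.xml may differ), is to configure the ErrorReportValve inside the Host element so that neither the stack trace nor the server version is rendered:

  <Valve className="org.apache.catalina.valves.ErrorReportValve"
         showReport="false"
         showServerInfo="false" />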

The default initial JDBC connection count is too high for large clusters

Describe the bug
The default initial JDBC connection count (JDBC_INITIAL_SIZE) is currently 10, which is higher than needed and too high for large clusters. When multiple PRPC instances start at the same time, this can lead to large connection spikes, potentially exhausting the maximum number of connections configured at the database. A value of 4 should be sufficient.

To Reproduce
This is non-trivial to reproduce, since it requires a large cluster. But outages due to connection spikes at startup have been observed in large clusters. Those were resolved by lowering this value to 4.

Expected behavior
When multiple PRPC instances start concurrently, this should not lead to exhaustion of DB connections.

Screenshots
n/a

Desktop (if applicable, please complete the following information):
n/a

Server (if applicable, please complete the following information):
n/a

Additional context
For internal reference, also see US-400591
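Until the default changes, the value can be overridden per container; a minimal sketch using the JDBC_INITIAL_SIZE variable mentioned above (all other required options omitted):

  docker run -e JDBC_INITIAL_SIZE=4 pega-tomcat:latest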

Intermittent Travis Build failures when connecting to clair database

Describe the bug
Travis builds are failing while connecting to the Clair database for the vulnerability scan.

To Reproduce
Any branch, PR or master build

Expected behavior
Build should run without any socket connection refused errors.

Server (if applicable, please complete the following information):
https://travis-ci.org/github/pegasystems/docker-pega-web-ready/builds/773888390

Additional context
This seems to be a known issue with clair. Use HOSTIP while starting the server.
arminc/clair-scanner#63

PostgreSQL Running?

Hi all,

Does this Docker image already have PostgreSQL running? I couldn't manage to get it up and running. Sorry for such a newbie question.

Regards,
Zul

Clair Scanner CVE Threshold

Per agreement with the Pega security team, the default build-failure threshold for the image scanner in this OSS project should be High, not Medium.

All CVEs, regardless of severity, will continue to be logged and auditable by viewing the Travis build. The build should only fail for non-whitelisted CVEs of severity 'High' or higher.

Expose CATALINA_OPTS as environment variable

Integration with Pinpoint and other APM tools.
To integrate Pinpoint or another APM tool, the agent JAR path and its configuration have to be passed in CATALINA_OPTS. But CATALINA_OPTS is not exposed as an environment variable, so we are unable to set APM tool configuration in CATALINA_OPTS.

Describe the solution you'd like
In https://github.com/pegasystems/docker-pega-web-ready/blob/master/tomcat-bin/setenv.sh, CATALINA_OPTS is initialized with an empty string. Removing that initialization would help.
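A sketch of how setenv.sh could preserve an externally supplied value instead of resetting it (the exact handling in the real setenv.sh may differ):

  # keep whatever CATALINA_OPTS the container was started with, defaulting to empty
  CATALINA_OPTS="${CATALINA_OPTS:-}"

The container could then be started with, for example, -e CATALINA_OPTS="-javaagent:/agents/pinpoint-bootstrap.jar" (path illustrative).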

gpg: Can't check signature: public key not found

Hi,
I'm trying to create a few Docker images using this source code. The goal is to have a local image per version of Tomcat, but first I need to be able to build this image, and I can't.

I cloned this repository and ran the following in the repo directory:
docker build -t pegaready .

The instructions in the Dockerfile run until they stop at step 29/48:
RUN curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL} -o catalina-jmx-remote.jar && curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL}.asc -o catalina-jmx-remote.jar.asc && gpg --verify catalina-jmx-remote.jar.asc && mv catalina-jmx-remote.jar /usr/local/tomcat/lib/catalina-jmx-remote.jar && rm catalina-jmx-remote.jar.asc
Here is the error:

Step 29/48 : RUN curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL} -o catalina-jmx-remote.jar && curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL}.asc -o catalina-jmx-remote.jar.asc && gpg --verify catalina-jmx-remote.jar.asc && mv catalina-jmx-remote.jar /usr/local/tomcat/lib/catalina-jmx-remote.jar && rm catalina-jmx-remote.jar.asc
 ---> Running in 9c26dac3c27e
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13082  100 13082    0     0  17446      0 --:--:-- --:--:-- --:--:-- 17442
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   836  100   836    0     0   2683      0 --:--:-- --:--:-- --:--:--  2688
gpg: directory `/root/.gnupg' created
gpg: new configuration file `/root/.gnupg/gpg.conf' created
gpg: WARNING: options in `/root/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring `/root/.gnupg/pubring.gpg' created
gpg: assuming signed data in `catalina-jmx-remote.jar'
gpg: Signature made Fri 29 Sep 2017 12:26:00 PM UTC using RSA key ID D63011C7
gpg: Can't check signature: public key not found
The command '/bin/sh -c curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL} -o catalina-jmx-remote.jar && curl -kSL ${TOMCAT_JMX_JAR_TGZ_URL}.asc -o catalina-jmx-remote.jar.asc && gpg --verify catalina-jmx-remote.jar.asc && mv catalina-jmx-remote.jar /usr/local/tomcat/lib/catalina-jmx-remote.jar && rm catalina-jmx-remote.jar.asc' returned a non-zero code: 2

It seems to be a problem with the public key. Could somebody try to build this image to confirm whether it's a general problem?

Thank you in advance.
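A sketch of a likely fix: import the signing key before running gpg --verify (the key ID is taken from the log above; the keyserver choice is an assumption):

  gpg --no-tty --keyserver hkps://keyserver.ubuntu.com --recv-keys D63011C7
  gpg --no-tty --verify catalina-jmx-remote.jar.asc catalina-jmx-remote.jar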

Not able to access /opt/pega/prweb permission denied.

When I run the fully baked Pega image built with docker-pega-web-ready, I get the errors below:
cp: cannot create directory '/opt/pega/prweb/META-INF': Permission denied
cp: cannot create directory '/opt/pega/prweb/WEB-INF': Permission denied
cp: cannot create regular file '/opt/pega/prweb/archive.info': Permission denied
cp: cannot create directory '/opt/pega/prweb/diagnostic': Permission denied
cp: cannot create directory '/opt/pega/prweb/images': Permission denied
Loading prlog4j2 from /opt/pega/config/prlog4j2.xml...
cp: cannot create regular file '/opt/pega/prweb/WEB-INF/classes/': No such file or directory
Loading prconfig from /opt/pega/config/prconfig.xml...
cp: cannot create regular file '/opt/pega/prweb/WEB-INF/classes/': No such file or directory

It looks like the image does not grant pegauser the permissions needed to access /opt/pega/prweb.

Pega image doesn't have the Pega Installer

Hi Team,

I've tried to use the same Dockerfile, and it looks like the pegasystems/tomcat:9-jdk11 image as released doesn't have the Pega installer. When I run it, Tomcat is able to launch but throws the error: Message The requested resource [/prweb] is not available.

I've also checked inside the Tomcat container and cannot see the prweb installer artifacts. Please let us know whether we need to download and apply it manually, or whether it can be configured in the Dockerfile itself.

NODE_TIER Not exposed as an Env Var in Dockerfile or README

Describe the bug
The NODE_TIER env var is passed to Pega as -DNodeTier=${NODE_TIER} in setenv.sh. However, this env var is not exposed in the Dockerfile, and it is not documented in the README.

To Reproduce
N/A

Expected behavior

  • NODE_TIER should be a declared environment variable in the Dockerfile (see the sketch at the end of this issue)
  • NODE_TIER should be documented in the README

Screenshots
N/A

Desktop (if applicable, please complete the following information):
N/A

Server (if applicable, please complete the following information):
N/A

Additional context
N/A
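A minimal sketch of the expected Dockerfile declaration (the empty default is an assumption):

  # tier name consumed by setenv.sh as -DNodeTier=${NODE_TIER}
  ENV NODE_TIER=""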

Impact of losing the /kafkadata mount point

We are transitioning to containerized Pega applications (our current platform is Mesos/DC/OS, where there is no StatefulSet-like concept). What is the actual impact of losing the /kafkadata mount point (e.g. during an update, due to a container failure, or if the VM that runs the container goes down)? Assuming one new Pega container (re)starts and it doesn't have the /kafkadata folder, will that result in data loss? The same question applies when all the Pega container instances (of the same application) are missing the /kafkadata mount (after an update, or in case of node failures).

Reverse DNS lookups should be disabled

Describe the bug
The Tomcat connector will automatically do reverse DNS lookups for the source IPs of requests. It uses this, for example, to log human-readable hostnames instead of IP addresses in the access log files. However, the Tomcat container is typically deployed in a cluster, where external traffic is routed via a load balancer (e.g. AWS ALB), Envoy, kube-proxy, etc. So the source IP arriving at Tomcat will not be the original source IP, but the IP of the internal routing component (the original source IP is then often found in the 'x-forwarded-for' header). That is: all reverse DNS lookups resolve to the hostname of the routing component. The DNS lookup adds cost without providing any value. What's more, the DNS lookups have been identified as a source of spikes in response times, at least in some environments. Disabling reverse DNS lookups resolves this.

To Reproduce
Check the Tomcat access logs - they contain the host names of the internal routing components.

Expected behavior
Reverse DNS lookups should be prevented. As a result the access logs will show IP addresses instead.

Screenshots
N/a

Desktop (if applicable, please complete the following information):
N/a

Server (if applicable, please complete the following information):
N/a

Additional context
Also see the related Issue #240 of the 'pega-helm-charts' repository. For internal reference, see US-400592
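One way this is commonly controlled in Tomcat, shown as a sketch (the connector in the image's server.xml will carry other attributes), is to set enableLookups to false on the HTTP connector so that access logs record IP addresses rather than resolved host names:

  <Connector port="8080" protocol="HTTP/1.1"
             enableLookups="false" />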

Readme for /kafkadata mount point is incorrect

Describe the bug
The README indicates that the Kafka mount point is /kafkadata, but I can see clearly in the files that it is actually /opt/pega/kafkadata.

To Reproduce
N/A

Expected behavior
The README should be accurate.

Screenshots
(screenshots attached in the original issue)

Desktop (if applicable, please complete the following information):
N/A

Server (if applicable, please complete the following information):
N/A

Additional context
N/A

Support for custom certificates to be injectable into Pega Infinity

#146
Is your feature request related to a problem? Please describe.
Support for custom certificates to be injectable into Pega Infinity.

Describe the solution you'd like
Mount the certificates into the /opt/pega/certs folder and then import the certs into the cacerts file in the lib/security folder of the Java installation.

Additional context
Currently only .cer files are supported; this will need to be enhanced in the future.
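A sketch of the import step the entrypoint could perform for each mounted certificate (paths and the default cacerts password are assumptions based on a standard JDK layout):

  # import every .cer mounted under /opt/pega/certs into the JVM trust store
  for cert in /opt/pega/certs/*.cer; do
    [ -f "$cert" ] || continue
    keytool -importcert -noprompt -trustcacerts \
      -alias "$(basename "$cert" .cer)" \
      -file "$cert" \
      -keystore "$JAVA_HOME/lib/security/cacerts" \
      -storepass changeit
  done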

Undocumented build instructions and dependency on container-structure-test for running 'make test'

Describe the bug
The README.md doesn't include the (trivial) instructions on how to build the image ('make all') or how to execute the tests ('make test'). Further, to run the tests the Google Container Structure Tests tool (https://github.com/GoogleContainerTools/container-structure-test) needs to be installed, which has not been documented.

To Reproduce
After building the image ('make all') execute the tests ('make test'). This will fail with the following message:

# Execute test cases
container-structure-test test --image qualitytest --config tests/pega-web-ready-testcases.yaml
make: container-structure-test: Command not found
make: *** [Makefile:14: test] Error 127

After installing the Container Structure Test tool, the tests pass.

Expected behavior
The build and test instructions should be documented, including the required dependency on Container Structure Test.

Screenshots
n/a

Desktop (if applicable, please complete the following information):

  • OS: Ubuntu 20.04.1
  • Browser: n/a
  • Version: n/a

Server (if applicable, please complete the following information):
n/a

Additional context
none

Support for external Kafka service as Stream node

Since 8.7.0, running a Pega Stream node is a deprecated configuration. It would be neat to have the possibility to configure an external Kafka service using environment variables instead of having to update prconfig/DSS settings manually, the same way it was done for Hazelcast and Cassandra.

Allow custom application root context for Pega tomcat nodes

In a multi-tenant environment, application root context may be used to differentiate pega deployments.

If the Helm charts define the PEGA_APP_CONTEXT_ROOT env var to be something other than prweb, the extracted prweb.war and associated config should be moved to a webapps directory with the specified name. This enables application access using the custom context.

Although the custom context can be baked into the image, we want customers to have the ability to change the context without building a new image.
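A sketch of what the entrypoint could do when PEGA_APP_CONTEXT_ROOT is set (directory names assume the image's standard Tomcat layout):

  # deploy the exploded prweb app under the custom context root instead of /prweb
  PEGA_APP_CONTEXT_ROOT="${PEGA_APP_CONTEXT_ROOT:-prweb}"
  if [ "$PEGA_APP_CONTEXT_ROOT" != "prweb" ]; then
    mv /usr/local/tomcat/webapps/prweb "/usr/local/tomcat/webapps/${PEGA_APP_CONTEXT_ROOT}"
    mv /usr/local/tomcat/conf/Catalina/localhost/prweb.xml \
       "/usr/local/tomcat/conf/Catalina/localhost/${PEGA_APP_CONTEXT_ROOT}.xml"
  fi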

prconfig/dsm/services setting doesn't get set for Stream nodes

Describe the bug
Currently the intention is for the 'prconfig/dsm/services' prconfig setting to be set via a JNDI Environment entry contained within the /usr/local/tomcat/conf/Catalina/localhost/prweb.xml file. This is not happening (and, for that matter, seems never to have happened) because the prconfig keys are, for lack of a better term, not JNDI-friendly.

The prweb.xml file for Stream nodes contains the following:

  <Environment name="prconfig/dsm/services" value="StreamServer" type="java.lang.String" />
  <Environment name="prconfig/dsm/services/stream/pyUnpackBasePath" value="/tmp/kafka" type="java.lang.String" />
  <Environment name="prconfig/dsm/services/stream/server_properties/unclean.leader.election.enable" value="false" type="java.lang.String" />

These environment settings all start with the common root 'prconfig/dsm/services', and this is problematic. The name 'prconfig/dsm/services' is bound to a string; however, when Tomcat builds out the JNDI environment on startup, it processes the other config settings (like 'prconfig/dsm/services/stream/pyUnpackBasePath') and in doing so builds out the intervening JNDI contexts ('prconfig', 'prconfig/dsm', 'prconfig/dsm/services', ...). Depending on the order in which this file is processed, you either see this message during startup:

20-Dec-2019 12:01:31.658 SEVERE [main] org.apache.catalina.core.NamingContextListener.addEnvironment naming.invalidEnvEntryValue

catalina_behavior1.log

or, in other environments, you see a ClassCastException while processing the longer prconfig keys, because Tomcat attempts to cast the object bound to 'prconfig/dsm/services' to a javax.naming.Context but the name is already bound to a java.lang.String. Both behaviors have been seen. It is not clear why there is a behavioral difference, but my suspicion is that a dependency (the OS, perhaps) being pulled into more recent Docker images is responsible.

To Reproduce
Use the pega-helm-charts to deploy Pega into a Kubernetes environment. Examine /usr/local/tomcat/logs/catalina*.log for log messages related to the processing of prweb.xml.

Expected behavior
The setting ('prconfig/dsm/services') would be set (with value 'StreamServer') for the Pega application. Additionally, no error messages would be reported while tomcat builds out the JNDI environment.

Screenshots
n/a

Desktop (if applicable, please complete the following information):
n/a

Server (if applicable, please complete the following information):
n/a

Additional context
n/a
