
kylinolap / kylin

563 stars · 91 watchers · 225 forks · 107.72 MB

This code base is retained for historical interest only; please visit the Apache Incubator repo for the latest version.

Home Page: https://github.com/apache/incubator-kylin

License: Apache License 2.0

Java 85.17% Protocol Buffer 0.02% HTML 5.44% Shell 0.52% JavaScript 5.54% CSS 3.29% PLSQL 0.02%

kylin's Introduction

Apache Kylin

Apache Kylin is an open source Distributed Analytics Engine that provides a SQL interface and multi-dimensional analysis (OLAP) on Hadoop, supporting extremely large datasets. Initially contributed by eBay Inc.

This code base is retained for historical interest only; please visit the Apache Incubator repo for the latest code: https://github.com/apache/incubator-kylin

This GitHub repository is no longer maintained; please visit kylin.apache.org instead. If you are seeking help, please use the Apache Kylin mailing list: http://kylin.apache.org/community/

kylin's People

Contributors

abansal, devth, fengrui129, janzhongi, jiangxuchina, kejia-wang, liyang-kylin, luffy-xiao, lukehan, mustangore, oguennec, qhzhou, rongcui, shaofengshi, songyi10011001, tdunning, vipinkumar7, wteo, xduo, xiaowangyu


kylin's Issues

Can't add extra columns to existing cube

When adding a new dimension to an existing cube (the cube has to be in "disabled" status before editing), an error message pops up when saving the cube.
The root cause is that the "rowkey" section of the cube metadata is mismatched with the dimensions: each dimension should also be present in the rowkey.


docs are missing

The documentation link in the website menu goes nowhere. I guess the docs will come later.

Group Hive tables at the Project level

Currently, all Hive tables are system-wide, which means any project can access and build cubes from any table. This also causes a lot of confusion in the web interface and makes ACLs hard to isolate.
To better organize tables and support future features, here are the proposed changes to the Table and Project relationship:

  1. A table has to belong to at least one project (1:M)
  2. A project can contain many tables; the backend tables of all cubes in a project have to be included in that project
  3. The cube modeler can sync Hive table metadata at the project level
  4. Different projects may contain different versions of a Hive table (TBD)

Increase HDFS block size to 1 GB

Increase the HDFS block size to 1 GB (or close to it), and use "mapred.max.split.size" to control each mapper's input size; a sketch follows. Verify that MapReduce performance is retained after the change.
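A minimal sketch of the two settings, assuming the Hadoop 2.x property names; the 256 MB split cap is only an illustrative figure, not from the original report:

import org.apache.hadoop.conf.Configuration;

public class SplitSizeSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Raise the HDFS block size to 1 GB for newly written cube files.
        conf.setLong("dfs.blocksize", 1024L * 1024 * 1024);
        // Cap each mapper's input split so mapper parallelism is controlled
        // by the split size rather than the (now much larger) block size.
        conf.setLong("mapred.max.split.size", 256L * 1024 * 1024);
    }
}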

MapReduce job information can't be displayed on sandbox deployment

The MapReduce job link in the Cube Build Job page uses "sandbox.hortonworks.com" as the default hostname, which fails to load the job information, as below:

[screenshots]

Replace "sandbox.hortonworks.com" with "127.0.0.1" will redirect to Job History page but still failed with same problem, replace again with "127.0.0.1" will show exactly information:

[screenshot]

To fix this issue, add the following entry to your /etc/hosts file:

127.0.0.1 localhost sandbox.hortonworks.com

Add a validation step as the first step of the cube build job

Add one job step, "Validation", that runs a series of checks to ensure the cube build job can start without obvious issues (a sketch follows the list):

  1. The fact table has to contain at least one record
  2. No duplicate rows in lookup tables
  3. At least one record remains after the Hive join step (especially to check the "inner join" case)

Raise an exception and stop the entire cube build job with a "Validation Failed" message.
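A minimal sketch of the three checks, assuming a Hive JDBC connection is available; the table names (fact_table, lookup_table, intermediate_flat_table) and the lookup_key column are placeholders:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CubeBuildValidation {
    static long queryLong(Connection conn, String sql) throws SQLException {
        try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void validate(Connection conn) throws SQLException {
        // 1. The fact table must contain at least one record.
        if (queryLong(conn, "SELECT COUNT(*) FROM fact_table") == 0)
            throw new RuntimeException("Validation Failed: fact table is empty");
        // 2. No duplicate rows in the lookup table.
        long dups = queryLong(conn, "SELECT COUNT(*) FROM (SELECT lookup_key"
                + " FROM lookup_table GROUP BY lookup_key HAVING COUNT(*) > 1) t");
        if (dups > 0)
            throw new RuntimeException("Validation Failed: duplicate rows in lookup table");
        // 3. At least one record must survive the Hive join step.
        if (queryLong(conn, "SELECT COUNT(*) FROM intermediate_flat_table") == 0)
            throw new RuntimeException("Validation Failed: join step produced no rows");
    }
}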

Sort job list by last modified timestamp

The job list on the Job List page is currently not sorted by last-modified timestamp, which hides the latest jobs. Users have to filter by status or project to find the latest job; the intended ordering is sketched below.
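A one-line sketch of the intended ordering; JobInstance and getLastModified() are assumed names, not the actual classes:

import java.util.Comparator;
import java.util.List;

class JobListSorting {
    // Assumed minimal shape of a job record, for illustration only.
    interface JobInstance { long getLastModified(); }

    static void sortNewestFirst(List<JobInstance> jobs) {
        // Largest last-modified timestamp first, so the latest jobs appear on top.
        jobs.sort(Comparator.comparingLong(JobInstance::getLastModified).reversed());
    }
}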

Merge metadata tables

There are too many HBase tables for metadata now. It would be great to merge them (see the sketch after this list):

  1. One table for all metadata and instances, using different column families for metadata and instances.
  2. One table for job information, including cube build jobs and others coming soon, such as cardinality jobs.
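A sketch of the merged layout using the HBase 0.98 client API; the table and column family names are assumptions for illustration:

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class MergedMetadataTables {
    public static void create(HBaseAdmin admin) throws java.io.IOException {
        // One table for all metadata and instances, split by column family.
        HTableDescriptor meta = new HTableDescriptor(TableName.valueOf("kylin_metadata"));
        meta.addFamily(new HColumnDescriptor("meta"));
        meta.addFamily(new HColumnDescriptor("inst"));
        admin.createTable(meta);

        // One table for job information (cube build jobs, cardinality jobs, ...).
        HTableDescriptor job = new HTableDescriptor(TableName.valueOf("kylin_job"));
        job.addFamily(new HColumnDescriptor("info"));
        admin.createTable(job);
    }
}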

Automatically create local cluster for running tests

Phoenix automatically creates a mini-cluster when you run a test. For example, download the latest Phoenix, run 'mvn -DskipTests install', open UpsertBigValuesIT.java in IntelliJ, and press Control-Shift-F10. (It works in Eclipse too.) The test creates a mini-cluster, runs 4 test methods, and tears down the cluster. It takes 1 minute to run. No configuration whatsoever is required.

I know that this kind of automation is hard to achieve, but it allows casual committers (like me) to get involved in the project without spending hours or days setting up the environment.

Since Phoenix and Kylin are both based on HBase, you could probably copy-paste their Maven XML and the base classes for their tests; a sketch of what such a test could look like follows.
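A sketch of a self-provisioning test in the spirit of Phoenix's IT tests, using HBase's own HBaseTestingUtility (shipped with the hbase-server test jar):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MiniClusterTest {
    private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

    @BeforeClass
    public static void setUp() throws Exception {
        // Spins up in-process ZooKeeper, HDFS and HBase; no configuration needed.
        UTIL.startMiniCluster();
    }

    @AfterClass
    public static void tearDown() throws Exception {
        UTIL.shutdownMiniCluster();
    }
}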

Add client information to Job

As an administrator and cube owner, I would like to know which user triggered a job, when, and from which client: cron tab, web, a third-party client, and so on.

styles.css forces font HelveticaNeueLight, which does not display well.

Viewing the site in Chrome 37.0.2062.124 m (64-bit), MSIE 9.08, and Firefox 17.04 on Windows 7.

In /assets/css/styles.css, this section @ line 294

.main {
    position: relative;
    margin: 0;
    font-family: 'HelveticaNeueLight';
}


overrides the default font-family @ line 52:

body {
    font-size: 14px;
    font-size: 0.9375rem;
    font-family: 'open_sansregular', 'HelveticaNeueRegular', Arial, sans-serif;
    color: #666666;
    padding: 0;
}

Only a few characters are visible. By removing the font-family line under the .main definition, the browsers can select a proper replacement font.

HBase schema enhancement

  1. Since the data tables are written once and never updated, set VERSIONS to 1
  2. Shorten the column family name, e.g., from "Fn" to "n"
  3. For the data tables, remove the IN_MEMORY setting (a sketch of these settings follows)
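A sketch of the proposed column-family settings with the HBase 0.98 API:

import org.apache.hadoop.hbase.HColumnDescriptor;

public class DataTableSchema {
    static HColumnDescriptor dataFamily() {
        HColumnDescriptor cf = new HColumnDescriptor("n"); // shortened from "Fn"
        cf.setMaxVersions(1);   // rows are written once and never updated
        cf.setInMemory(false);  // drop the IN_MEMORY setting for data tables
        return cf;
    }
}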

Check for duplicate rows in lookup tables

When there are duplicate rows in a lookup table, the cube build job will fail.
Add this check to the "Validation" step of the cube build job:
#37

Also make this single step callable from the shell or REST, which will help cube modelers check data quality before triggering a cube build job.

Can't get cube segment size

I built test_kylin_cube_with_slr_empty and it failed; it shows that the cube segment size can't be obtained. The error log is as follows:
#13 Step Name: Load HFile to HBase Table

Start to execute command:
-input /tmp/kylin-e3f73338-edb8-4a1c-9013-e14d61f58f0c/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN_QA_CUBE_HYIZYTA36N -cubename test_kylin_cube_with_slr_empty
Command execute return code 0

Failed with Exception:java.lang.RuntimeException: Can't get cube segment size.
at com.kylinolap.job.flow.JobFlowListener.updateCubeSegmentInfoOnSucceed(JobFlowListener.java:245)
at com.kylinolap.job.flow.JobFlowListener.jobWasExecuted(JobFlowListener.java:99)
at org.quartz.core.QuartzScheduler.notifyJobListenersWasExecuted(QuartzScheduler.java:1985)
at org.quartz.core.JobRunShell.notifyJobListenersComplete(JobRunShell.java:340)
at org.quartz.core.JobRunShell.run(JobRunShell.java:224)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)

The other steps are successful. Which module sets the segment size?

Query execution fails and cubes always show as loading

After installing Kylin, the kylin.log file shows this exception:
[Thread-5]:[2014-10-29 16:00:27,374][ERROR][com.kylinolap.common.persistence.ResourceStore.getStore(ResourceStore.java:79)] - Create new store instance failed
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at com.kylinolap.common.persistence.ResourceStore.getStore(ResourceStore.java:66)
at com.kylinolap.metadata.MetadataManager.getStore(MetadataManager.java:118)
at com.kylinolap.metadata.MetadataManager.reloadAllSourceTable(MetadataManager.java:246)
at com.kylinolap.metadata.MetadataManager.init(MetadataManager.java:224)
at com.kylinolap.metadata.MetadataManager.<init>(MetadataManager.java:110)
at com.kylinolap.metadata.MetadataManager.getInstance(MetadataManager.java:80)
at com.kylinolap.job.JobDAO.<init>(JobDAO.java:61)
at com.kylinolap.job.JobDAO.getInstance(JobDAO.java:53)
at com.kylinolap.job.JobManager.<init>(JobManager.java:67)
at com.kylinolap.rest.service.BasicService.getJobManager(BasicService.java:169)
at com.kylinolap.rest.service.BasicService$$FastClassByCGLIB$$55364bce.invoke()
at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:618)
at com.kylinolap.rest.service.JobService$$EnhancerByCGLIB$$1508084c.getJobManager()
at com.kylinolap.rest.controller.JobController$1.run(JobController.java:81)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: java.io.IOException: Failed to specify server's Kerberos principal name
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1630)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1656)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1863)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHTableDescriptor(HConnectionManager.java:2651)
at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:397)
at com.kylinolap.common.persistence.HBaseResourceStore.tableExist(HBaseResourceStore.java:126)
at com.kylinolap.common.persistence.HBaseResourceStore.createHTableIfNeeded(HBaseResourceStore.java:110)
at com.kylinolap.common.persistence.HBaseResourceStore.<init>(HBaseResourceStore.java:102)
... 20 more
Caused by: com.google.protobuf.ServiceException: java.io.IOException: java.io.IOException: Failed to specify server's Kerberos principal name
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1678)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(HConnectionManager.java:1667)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1576)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1602)
... 27 more
Caused by: java.io.IOException: java.io.IOException: Failed to specify server's Kerberos principal name
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$1.run(RpcClient.java:837)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleSaslConnectionFailure(RpcClient.java:796)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:898)
at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1543)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1442)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
... 32 more
Caused by: java.io.IOException: Failed to specify server's Kerberos principal name
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.<init>(HBaseSaslRpcClient.java:117)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupSaslConnection(RpcClient.java:767)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.access$600(RpcClient.java:357)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:891)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:888)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:888)
... 35 more
[Thread-5]:[2014-10-29 16:00:27,382][ERROR][com.kylinolap.common.persistence.ResourceStore.getStore(ResourceStore.java:79)] - Create new store instance failed
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at com.kylinolap.common.persistence.ResourceStore.getStore(ResourceStore.java:66)
at com.kylinolap.metadata.MetadataManager.getStore(MetadataManager.java:118)
at com.kylinolap.metadata.MetadataManager.reloadAllSourceTable(MetadataManager.java:246)
at com.kylinolap.metadata.MetadataManager.init(MetadataManager.java:224)
at com.kylinolap.metadata.MetadataManager.<init>(MetadataManager.java:110)
at com.kylinolap.metadata.MetadataManager.getInstance(MetadataManager.java:80)
at com.kylinolap.job.JobDAO.<init>(JobDAO.java:61)
at com.kylinolap.job.JobDAO.getInstance(JobDAO.java:53)
at com.kylinolap.job.JobManager.<init>(JobManager.java:67)
at com.kylinolap.rest.service.BasicService.getJobManager(BasicService.java:169)
at com.kylinolap.rest.service.BasicService$$FastClassByCGLIB$$55364bce.invoke()
at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:618)
at com.kylinolap.rest.service.JobService$$EnhancerByCGLIB$$1508084c.getJobManager()
at com.kylinolap.rest.controller.JobController$1.run(JobController.java:81)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IllegalArgumentException: File not exist by 'kylin_metadata_qa@hbase:localhost:2181:/hbase': /root/kylin_metadata_qa@hbase:localhost:2181:/hbase
at com.kylinolap.common.persistence.FileResourceStore.<init>(FileResourceStore.java:38)
... 20 more

When I execute a query, it shows the same exception.

Ambari plugin to manage Kylin service

Ambari is a great tool for managing most Hadoop components in one single place:

[screenshot]

It makes sense to use Ambari to easily manage the Kylin service in the same place.
Requirements:

  1. Deploy and install Kylin service via Ambari
  2. Start and Stop Kylin service through Ambari
  3. Display Kylin service status on Ambari Web

[Call for Volunteers!]

We are not Ambari experts; please let us know if you are interested in contributing to this!

Enhance cuboid generation to have tail-zero only

Currently we only support the hierarchy aggregation group, which generates both tail-zero and tail-one partial cuboids.

In many cases, users may want independent cuboids with only the tail-zero variants; this can greatly reduce the cube size.

Auto-merge after incremental refresh on cubes that contain a "Distinct Count" measure

When a cube contains a "Distinct Count" measure, query results will not be correct until the old cube segments are merged with the new one (after an incremental refresh). And the current "Merge" operation brings downtime for such a cube, which cannot serve queries before the merge finishes.
To avoid this, enhance the job engine to auto-merge such segments after an incremental build succeeds, then swap the new HTable with the old one, which will not impact queries much (seconds).

Display job status with an exact percentage number

The current MR job step can't show an exact percentage number for its progress, as below:

Job job_1415330509679_11957 get status check result.
2014-11-07 20:28:59.654 - State of Hadoop job: job_1415330509679_11957:RUNNING - UNDEFINED

Fetch the exact progress number and display it in the Cube Build Job steps; a sketch of reading the real progress from the Hadoop API follows.
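A sketch of fetching the real numbers with the Hadoop 2.x client API (the job ID is taken from the log above; the null check for a missing job is omitted for brevity):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobID;

public class JobProgress {
    public static void main(String[] args) throws Exception {
        Cluster cluster = new Cluster(new Configuration());
        Job job = cluster.getJob(JobID.forName("job_1415330509679_11957"));
        // Weight the map and reduce phases equally for a single percentage.
        float pct = 50f * (job.mapProgress() + job.reduceProgress());
        System.out.printf("State: %s, progress: %.1f%%%n", job.getJobState(), pct);
    }
}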

Display Duration with current timestamp

The current "Duration" will only show exactly number after the step finished.
Display Duration with current timestamp for well monitor:
Duration = Current Timestamp - Start Timestamp

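A minimal sketch of the computation; treating a zero end timestamp as "still running" is an assumption:

public class StepDuration {
    static long durationMillis(long startTimestamp, long endTimestamp) {
        // While the step is running, measure against the current time instead.
        long end = (endTimestamp > 0) ? endTimestamp : System.currentTimeMillis();
        return end - startTimestamp;
    }
}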

Derive meaningful cost in OLAP relational operator

Currently the subclasses of OLAPRel implement computeSelfCost() in a naive way: they multiply super.computeSelfCost() by 0.05, so the OLAP cost is very small and is always favored by the optimization engine (sketched below).

The cost should be derived in a more meaningful way so that it is comparable with the Calcite cost.
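A reconstructed sketch of the naive costing described above; the class shape and Calcite package names are assumptions (the code base may still use the older Optiq names):

import org.apache.calcite.plan.RelOptCluster;
import org.apache.calcite.plan.RelOptCost;
import org.apache.calcite.plan.RelOptPlanner;
import org.apache.calcite.plan.RelTraitSet;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.SingleRel;

abstract class OLAPRelSketch extends SingleRel {
    OLAPRelSketch(RelOptCluster cluster, RelTraitSet traits, RelNode input) {
        super(cluster, traits, input);
    }

    @Override
    public RelOptCost computeSelfCost(RelOptPlanner planner) {
        // A flat 0.05 factor makes every OLAP rel look almost free, so the
        // optimizer favors it regardless of the work actually involved.
        return super.computeSelfCost(planner).multiplyBy(0.05);
    }
}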

Merge "Build" and "Refresh" in one button

There are two buttons for cube build:

  1. Build: builds the cube from the beginning or loads incrementally. The user can specify start and end dates.
  2. Refresh: rebuilds the cube with the current date range.

[screenshot]

Use one button to simplify the flow and reduce confusion for users building cubes:

  1. Use "Build Cube" as menu
  2. Popup dialog for user to specify start (if appliable) and end date,
  3. Fulfill that values from existing cube metadata and instance

Remove hardcode hostname

The current deployment script hardcodes the hostname; this can fail deployments on Hadoop clusters other than the Hortonworks and Cloudera sandboxes.

Can't get master address from ZooKeeper when installing Kylin on Hortonworks Sandbox

I am running into the following errors while trying to build Kylin on my Hortonworks Hadoop Sandbox.
Let me know the next steps.
-Medha

jdbc.driverClassName=com.mysql.jdbc.Driver
jdbc.url=
jdbc.username=kylin
jdbc.password=
ganglia.group=
ganglia.port=8664
===================================================================

please ensure the CLI address/username/password is correct, and press y to proceed: y
[INFO] Scanning for projects...
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.kylinolap:kylin-server:war:0.6.1-SNAPSHOT
[WARNING] 'build.plugins.plugin.version' for org.springframework.boot:spring-boot-maven-plugin is missing. @ com.kylinolap:kylin-server:[unknown-version], /root/Kylin/server/pom.xml, line 376, column 21
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:


T E S T S

Running com.kylinolap.job.SampleCubeSetupTest
The conf paths is :/etc/hbase/conf:/etc/hadoop/conf:/etc/tez/conf:/etc/hadoop/conf
Adding path /etc/hbase/conf to class path
Adding path /etc/hadoop/conf to class path
Adding path /etc/tez/conf to class path
Adding path /etc/hadoop/conf to class path
/etc/kylin
L4J [2014-10-22 17:01:54,502][INFO][com.kylinolap.common.KylinConfig] - kylin.properties found in /etc/kylin
L4J [2014-10-22 17:01:54,869][DEBUG][com.kylinolap.common.KylinConfig] - Loading property file /etc/kylin/kylin.properties
[[email protected]] Execute command: rm -rf /tmp/kylin
[[email protected]] Command exit-status: 0
[[email protected]] Execute command: mkdir -p /tmp/kylin/logs
[[email protected]] Command exit-status: 0
L4J [2014-10-22 17:01:57,482][INFO][com.kylinolap.common.persistence.ResourceStore] - Using metadata url kylin_metadata_qa@hbase:sandbox.hortonworks.com:2181:/hbase-unsecure for resource store
L4J [2014-10-22 17:01:57,695][DEBUG][com.kylinolap.common.util.HadoopUtil] - Creating hbase conf by parsing -- sandbox.hortonworks.com:2181:/hbase-unsecure
L4J [2014-10-22 17:01:58,158][WARN][org.apache.hadoop.util.NativeCodeLoader] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
L4J [2014-10-22 17:01:58,344][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
L4J [2014-10-22 17:01:58,344][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:host.name=sandbox.hortonworks.com
L4J [2014-10-22 17:01:58,344][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:java.version=1.7.0_45
L4J [2014-10-22 17:01:58,344][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:java.vendor=Oracle Corporation
L4J [2014-10-22 17:01:58,345][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:java.home=/usr/jdk64/jdk1.7.0_45/jre
L4J [2014-10-22 17:01:58,345][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:java.class.path=/root/Kylin/job/target/test-classes:/root/Kylin/job/target/classes:/root/Kylin/cube/target/classes:/root/Kylin/metadata/target/classes:/root/Kylin/common/target/classes:/root/.m2/repository/net/sf/trove4j/trove4j/3.0.3/trove4j-3.0.3.jar:/root/.m2/repository/org/apache/commons/commons-lang3/3.1/commons-lang3-3.1.jar:/root/Kylin/dictionary/target/classes:/root/.m2/repository/com/ning/compress-lzf/0.8.4/compress-lzf-0.8.4.jar:/root/.m2/repository/com/n3twork/druid/extendedset/1.3.4/extendedset-1.3.4.jar:/root/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/root/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/root/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/root/.m2/repository/commons-configuration/commons-configuration/1.9/commons-configuration-1.9.jar:/root/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/root/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.2.3/jackson-databind-2.2.3.jar:/root/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.2.3/jackson-annotations-2.2.3.jar:/root/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.2.3/jackson-core-2.2.3.jar:/root/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/root/.m2/repository/commons-codec/commons-codec/1.2/commons-codec-1.2.jar:/root/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar:/root/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar:/root/.m2/repository/org/quartz-scheduler/quartz/2.2.1/quartz-2.2.1.jar:/root/.m2/repository/c3p0/c3p0/0.9.1.1/c3p0-0.9.1.1.jar:/root/.m2/repository/org/slf4j/slf4j-api/1.6.4/slf4j-api-1.6.4.jar:/root/.m2/repository/org/quartz-scheduler/quartz-jobs/2.2.1/quartz-jobs-2.2.1.jar:/root/.m2/repository/commons-daemon/commons-daemon/1.0.15/commons-daemon-1.0.15.jar:/root/.m2/repository/org/apache/curator/curator-framework/2.6.0/curator-framework-2.6.0.jar:/root/.m2/repository/org/apache/curator/curator-client/2.6.0/curator-client-2.6.0.jar:/root/.m2/repository/org/apache/curator/curator-recipes/2.6.0/curator-recipes-2.6.0.jar:/root/.m2/repository/org/anarres/lzo/lzo-hadoop/1.0.0/lzo-hadoop-1.0.0.jar:/root/.m2/repository/org/anarres/lzo/lzo-core/1.0.0/lzo-core-1.0.0.jar:/root/.m2/repository/org/apache/commons/commons-email/1.1/commons-email-1.1.jar:/root/.m2/repository/javax/mail/mail/1.4/mail-1.4.jar:/root/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar:/root/.m2/repository/org/apache/hbase/hbase-common/0.98.0-hadoop2/hbase-common-0.98.0-hadoop2.jar:/root/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/root/.m2/repository/com/github/stephenc/findbugs/findbugs-annotations/1.3.9-1/findbugs-annotations-1.3.9-1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-common/2.4.1/hadoop-common-2.4.1.jar:/root/.m2/repository/org/apache/commons/commons-math3/3.1.1/commons-math3-3.1.1.jar:/root/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/root/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/root/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/root/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/root/.m2/repository/com/sun/jersey/jersey-core/1.9/jersey-core-1.9.jar:/root/.m2/repository/com/sun/jersey/jersey-json/1.9/jersey-json-1.9.jar:/root/.m2/repository/com/
sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/root/.m2/repository/org/codehaus/jackson/jackson-xc/1.8.3/jackson-xc-1.8.3.jar:/root/.m2/repository/com/sun/jersey/jersey-server/1.9/jersey-server-1.9.jar:/root/.m2/repository/asm/asm/3.1/asm-3.1.jar:/root/.m2/repository/tomcat/jasper-compiler/5.5.23/jasper-compiler-5.5.23.jar:/root/.m2/repository/tomcat/jasper-runtime/5.5.23/jasper-runtime-5.5.23.jar:/root/.m2/repository/javax/servlet/jsp/jsp-api/2.1/jsp-api-2.1.jar:/root/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/root/.m2/repository/net/java/dev/jets3t/jets3t/0.9.0/jets3t-0.9.0.jar:/root/.m2/repository/org/apache/httpcomponents/httpclient/4.3.3/httpclient-4.3.3.jar:/root/.m2/repository/org/apache/httpcomponents/httpcore/4.1.2/httpcore-4.1.2.jar:/root/.m2/repository/com/jamesmurty/utils/java-xmlbuilder/0.4/java-xmlbuilder-0.4.jar:/root/.m2/repository/org/slf4j/slf4j-log4j12/1.6.4/slf4j-log4j12-1.6.4.jar:/root/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.8.8/jackson-core-asl-1.8.8.jar:/root/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.8.8/jackson-mapper-asl-1.8.8.jar:/root/.m2/repository/org/apache/avro/avro/1.7.4/avro-1.7.4.jar:/root/.m2/repository/com/thoughtworks/paranamer/paranamer/2.3/paranamer-2.3.jar:/root/.m2/repository/org/xerial/snappy/snappy-java/1.0.4.1/snappy-java-1.0.4.1.jar:/root/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/root/.m2/repository/org/apache/hadoop/hadoop-auth/2.4.1/hadoop-auth-2.4.1.jar:/root/.m2/repository/com/jcraft/jsch/0.1.51/jsch-0.1.51.jar:/root/.m2/repository/org/apache/zookeeper/zookeeper/3.4.5/zookeeper-3.4.5.jar:/root/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/root/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/root/.m2/repository/org/apache/hadoop/hadoop-annotations/2.4.1/hadoop-annotations-2.4.1.jar:/usr/jdk64/jdk1.7.0_45/jre/../lib/tools.jar:/root/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.4.1/hadoop-mapreduce-client-core-2.4.1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.4.1/hadoop-yarn-common-2.4.1.jar:/root/.m2/repository/com/google/inject/extensions/guice-servlet/3.0/guice-servlet-3.0.jar:/root/.m2/repository/io/netty/netty/3.6.2.Final/netty-3.6.2.Final.jar:/root/.m2/repository/org/apache/hadoop/hadoop-minicluster/2.4.1/hadoop-minicluster-2.4.1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-common/2.4.1/hadoop-common-2.4.1-tests.jar:/root/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.4.1/hadoop-hdfs-2.4.1-tests.jar:/root/.m2/repository/org/apache/hadoop/hadoop-yarn-server-tests/2.4.1/hadoop-yarn-server-tests-2.4.1-tests.jar:/root/.m2/repository/org/apache/hadoop/hadoop-yarn-server-nodemanager/2.4.1/hadoop-yarn-server-nodemanager-2.4.1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-jobclient/2.4.1/hadoop-mapreduce-client-jobclient-2.4.1-tests.jar:/root/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-common/2.4.1/hadoop-mapreduce-client-common-2.4.1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-yarn-client/2.4.1/hadoop-yarn-client-2.4.1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-shuffle/2.4.1/hadoop-mapreduce-client-shuffle-2.4.1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-app/2.4.1/hadoop-mapreduce-client-app-2.4.1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-jobclient/2.4.1/hadoop-mapreduce-client-jobclient-2.4.1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-hs/2.4
.1/hadoop-mapreduce-client-hs-2.4.1.jar:/root/.m2/repository/org/apache/mrunit/mrunit/1.0.0/mrunit-1.0.0-hadoop2.jar:/root/.m2/repository/org/mockito/mockito-all/1.8.5/mockito-all-1.8.5.jar:/root/.m2/repository/org/apache/hbase/hbase-hadoop2-compat/0.98.0-hadoop2/hbase-hadoop2-compat-0.98.0-hadoop2.jar:/root/.m2/repository/org/apache/hbase/hbase-hadoop-compat/0.98.0-hadoop2/hbase-hadoop-compat-0.98.0-hadoop2.jar:/root/.m2/repository/com/yammer/metrics/metrics-core/2.1.2/metrics-core-2.1.2.jar:/root/.m2/repository/org/apache/hbase/hbase-client/0.98.0-hadoop2/hbase-client-0.98.0-hadoop2.jar:/root/.m2/repository/org/apache/hbase/hbase-protocol/0.98.0-hadoop2/hbase-protocol-0.98.0-hadoop2.jar:/root/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar:/root/.m2/repository/org/apache/hbase/hbase-server/0.98.0-hadoop2/hbase-server-0.98.0-hadoop2.jar:/root/.m2/repository/org/apache/hbase/hbase-prefix-tree/0.98.0-hadoop2/hbase-prefix-tree-0.98.0-hadoop2.jar:/root/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/root/.m2/repository/org/apache/commons/commons-math/2.2/commons-math-2.2.jar:/root/.m2/repository/org/mortbay/jetty/jetty-sslengine/6.1.26/jetty-sslengine-6.1.26.jar:/root/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/root/.m2/repository/org/eclipse/jdt/core/3.1.1/core-3.1.1.jar:/root/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/root/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/root/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.8.8/jackson-jaxrs-1.8.8.jar:/root/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/root/.m2/repository/javax/xml/bind/jaxb-api/2.2.2/jaxb-api-2.2.2.jar:/root/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-client/2.2.0/hadoop-client-2.2.0.jar:/root/.m2/repository/org/apache/hadoop/hadoop-yarn-server-resourcemanager/2.4.1/hadoop-yarn-server-resourcemanager-2.4.1.jar:/root/.m2/repository/javax/servlet/servlet-api/2.5/servlet-api-2.5.jar:/root/.m2/repository/com/google/inject/guice/3.0/guice-3.0.jar:/root/.m2/repository/javax/inject/javax.inject/1/javax.inject-1.jar:/root/.m2/repository/aopalliance/aopalliance/1.0/aopalliance-1.0.jar:/root/.m2/repository/com/sun/jersey/contribs/jersey-guice/1.9/jersey-guice-1.9.jar:/root/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/root/.m2/repository/com/sun/jersey/jersey-client/1.9/jersey-client-1.9.jar:/root/.m2/repository/org/apache/hadoop/hadoop-yarn-server-common/2.4.1/hadoop-yarn-server-common-2.4.1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-yarn-server-applicationhistoryservice/2.4.1/hadoop-yarn-server-applicationhistoryservice-2.4.1.jar:/root/.m2/repository/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/root/.m2/repository/org/apache/hadoop/hadoop-yarn-server-web-proxy/2.4.1/hadoop-yarn-server-web-proxy-2.4.1.jar:/root/.m2/repository/junit/junit/4.11/junit-4.11.jar:/root/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/root/.m2/repository/org/apache/maven/maven-model/3.1.1/maven-model-3.1.1.jar:/root/.m2/repository/org/codehaus/plexus/plexus-utils/3.0.15/plexus-utils-3.0.15.jar:/root/.m2/repository/org/apache/hadoop/hadoop-yarn-api/2.4.1/hadoop-yarn-api-2.4.1.jar:/root/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.4.1/hadoop-hdfs-2.4.1.jar:
L4J [2014-10-22 17:01:58,350][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
L4J [2014-10-22 17:01:58,351][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:java.io.tmpdir=/tmp
L4J [2014-10-22 17:01:58,351][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:java.compiler=
L4J [2014-10-22 17:01:58,351][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:os.name=Linux
L4J [2014-10-22 17:01:58,351][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:os.arch=amd64
L4J [2014-10-22 17:01:58,351][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:os.version=2.6.32-431.11.2.el6.x86_64
L4J [2014-10-22 17:01:58,351][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:user.name=root
L4J [2014-10-22 17:01:58,352][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:user.home=/root
L4J [2014-10-22 17:01:58,352][INFO][org.apache.zookeeper.ZooKeeper] - Client environment:user.dir=/root/Kylin/job
L4J [2014-10-22 17:01:58,353][INFO][org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=sandbox.hortonworks.com:2181 sessionTimeout=30000 watcher=hconnection-0x17fe6f4a, quorum=sandbox.hortonworks.com:2181, baseZNode=/hbase-unsecure
L4J [2014-10-22 17:01:58,408][INFO][org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x17fe6f4a connecting to ZooKeeper ensemble=sandbox.hortonworks.com:2181
L4J [2014-10-22 17:01:58,409][INFO][org.apache.zookeeper.ClientCnxn] - Opening socket connection to server sandbox.hortonworks.com/10.225.81.47:2181. Will not attempt to authenticate using SASL (unknown error)
L4J [2014-10-22 17:01:58,423][INFO][org.apache.zookeeper.ClientCnxn] - Socket connection established to sandbox.hortonworks.com/10.225.81.47:2181, initiating session
L4J [2014-10-22 17:01:58,458][INFO][org.apache.zookeeper.ClientCnxn] - Session establishment complete on server sandbox.hortonworks.com/10.225.81.47:2181, sessionid = 0x149380f2ef30003, negotiated timeout = 30000
L4J [2014-10-22 17:01:59,138][WARN][org.apache.hadoop.hdfs.BlockReaderLocal] - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
L4J [2014-10-22 17:01:59,220][INFO][org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation] - getMaster attempt 1 of 5 failed; retrying after sleep of 3026, exception=java.io.IOException: Can't get master address from ZooKeeper; znode data == null
L4J [2014-10-22 17:02:02,249][INFO][org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation] - getMaster attempt 2 of 5 failed; retrying after sleep of 6013, exception=java.io.IOException: Can't get master address from ZooKeeper; znode data == null
L4J [2014-10-22 17:02:08,264][INFO][org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation] - getMaster attempt 3 of 5 failed; retrying after sleep of 9035, exception=java.io.IOException: Can't get master address from ZooKeeper; znode data == null
L4J [2014-10-22 17:02:17,302][INFO][org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation] - getMaster attempt 4 of 5 failed; retrying after sleep of 15027, exception=java.io.IOException: Can't get master address from ZooKeeper; znode data == null
L4J [2014-10-22 17:02:32,332][INFO][org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation] - getMaster attempt 5 of 5 failed; no more retrying.
java.io.IOException: Can't get master address from ZooKeeper; znode data == null
at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:108)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1577)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1622)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1676)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1884)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHTableDescriptor(HConnectionManager.java:2655)
at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:388)
at com.kylinolap.common.persistence.HBaseResourceStore.tableExist(HBaseResourceStore.java:126)
at com.kylinolap.common.persistence.HBaseResourceStore.createHTableIfNeeded(HBaseResourceStore.java:110)

Results :

Tests in error:
SampleCubeSetupTest.before:48->CubeDevelopTestCase.initEnv:296->HBaseMetadataTestCase.installMetadataToHBase:67 » IllegalArgument
SampleCubeSetupTest.after:55 » IllegalArgument Failed to find metadata store b...

Tests run: 2, Failures: 0, Errors: 2, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Kylin:HadoopOLAPEngine ............................. SUCCESS [ 0.002 s]
[INFO] Kylin:Common ....................................... SUCCESS [ 1.767 s]
[INFO] Kylin:Metadata ..................................... SUCCESS [ 0.151 s]
[INFO] Kylin:Dictionary ................................... SUCCESS [ 0.120 s]
[INFO] Kylin:Cube ......................................... SUCCESS [ 0.390 s]
[INFO] Kylin:Job .......................................... FAILURE [01:16 min]
[INFO] Kylin:Storage ...................................... SKIPPED
[INFO] Kylin:Query ........................................ SKIPPED
[INFO] Kylin:RESTServer ................................... SKIPPED
[INFO] Kylin:Jdbc ......................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:19 min
[INFO] Finished at: 2014-10-22T17:03:05-07:00
[INFO] Final Memory: 32M/332M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project kylin-job: There are test failures.
[ERROR]
[ERROR] Please refer to /root/Kylin/job/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :kylin-job
ERROR exit from ./deploy.sh : line 175 with exit code 1

Can't log in to the web server

I installed Kylin on my Linux machine, but I have an issue logging in.
The error is: "Unable to login, please check your username/password and make sure you have L2 access."
I don't know the reason. Please help me.

Tune HDFS & HBase parameters

  • Set the replication factor to 3 for cube files in HDFS
  • Lower the block size of the F1 column family (which has no distinct count) to 64 KB
  • Disable the bloom filter on the HBase tables (a sketch of these settings follows)
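A sketch of these settings, assuming the HBase 0.98 and Hadoop FileSystem APIs; the family name "F1" comes from the list above, and the file path is a placeholder:

import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.regionserver.BloomType;

public class CubeTableTuning {
    static HColumnDescriptor tunedF1() {
        HColumnDescriptor f1 = new HColumnDescriptor("F1");
        f1.setBlocksize(64 * 1024);            // 64 KB blocks for the F1 family
        f1.setBloomFilterType(BloomType.NONE); // no bloom filter on cube tables
        return f1;
    }

    static void setCubeFileReplication(FileSystem fs, Path cubeFile) throws IOException {
        fs.setReplication(cubeFile, (short) 3); // replication factor 3 for cube files
    }
}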

"ERROR Could not instantiate appender" from log4j when tomcat startup

[localhost-startStop-1]:[2014-11-09 02:30:25,243][DEBUG][com.kylinolap.common.KylinConfig.getKylinPropertiesAsInputSteam(KylinConfig.java:510)] - Loading property file /etc/kylin/kylin.properties
log4j:ERROR A "org.apache.log4j.ConsoleAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@4d97507c] whereas object of type
log4j:ERROR "org.apache.log4j.ConsoleAppender" was loaded by [WebappClassLoader
context: /kylin
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader:
org.apache.catalina.loader.StandardClassLoader@6de40a47
].
log4j:ERROR Could not instantiate appender named "stdout".
log4j:ERROR A "org.apache.log4j.DailyRollingFileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@4d97507c] whereas object of type
log4j:ERROR "org.apache.log4j.DailyRollingFileAppender" was loaded by [WebappClassLoader
context: /kylin
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader:
org.apache.catalina.loader.StandardClassLoader@6de40a47
].
log4j:ERROR Could not instantiate appender named "file".
log4j:ERROR A "org.apache.log4j.DailyRollingFileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@4d97507c] whereas object of type
log4j:ERROR "org.apache.log4j.DailyRollingFileAppender" was loaded by [WebappClassLoader
context: /kylin
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader:
org.apache.catalina.loader.StandardClassLoader@6de40a47
].
log4j:ERROR Could not instantiate appender named "query".
log4j:ERROR A "org.apache.log4j.DailyRollingFileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@4d97507c] whereas object of type
log4j:ERROR "org.apache.log4j.DailyRollingFileAppender" was loaded by [WebappClassLoader
context: /kylin
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader:
org.apache.catalina.loader.StandardClassLoader@6de40a47
].
log4j:ERROR Could not instantiate appender named "job".
log4j:ERROR A "org.apache.log4j.DailyRollingFileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@4d97507c] whereas object of type
log4j:ERROR "org.apache.log4j.DailyRollingFileAppender" was loaded by [WebappClassLoader
context: /kylin
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader:
org.apache.catalina.loader.StandardClassLoader@6de40a47
].
log4j:ERROR Could not instantiate appender named "query".
log4j:ERROR A "org.apache.log4j.DailyRollingFileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@4d97507c] whereas object of type
log4j:ERROR "org.apache.log4j.DailyRollingFileAppender" was loaded by [WebappClassLoader
context: /kylin
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader:
org.apache.catalina.loader.StandardClassLoader@6de40a47
].
log4j:ERROR Could not instantiate appender named "job".
log4j:ERROR A "org.apache.log4j.DailyRollingFileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@4d97507c] whereas object of type
log4j:ERROR "org.apache.log4j.DailyRollingFileAppender" was loaded by [WebappClassLoader
context: /kylin
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader:
org.apache.catalina.loader.StandardClassLoader@6de40a47
].
log4j:ERROR Could not instantiate appender named "job".
log4j:ERROR A "org.apache.log4j.DailyRollingFileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@4d97507c] whereas object of type
log4j:ERROR "org.apache.log4j.DailyRollingFileAppender" was loaded by [WebappClassLoader
context: /kylin
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader:
org.apache.catalina.loader.StandardClassLoader@6de40a47
].
log4j:ERROR Could not instantiate appender named "query".
log4j:ERROR A "org.apache.log4j.DailyRollingFileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by
log4j:ERROR [sun.misc.Launcher$AppClassLoader@4d97507c] whereas object of type
log4j:ERROR "org.apache.log4j.DailyRollingFileAppender" was loaded by [WebappClassLoader
context: /kylin
delegate: false
repositories:
/WEB-INF/classes/
----------> Parent Classloader:
org.apache.catalina.loader.StandardClassLoader@6de40a47
].
log4j:ERROR Could not instantiate appender named "query".
