openinx / huker

An easy Hadoop deploy system

Makefile 0.41% Go 85.74% Shell 0.35% HTML 8.27% JavaScript 3.61% CSS 0.69% Ruby 0.19% Python 0.74%
hadoop auto-deployment java-deployment hadoop-deployment


huker's Issues

Let a config file in the `config` section be read from a local static file

The `config` section currently looks like the following:

    config:
      zoo.cfg:
        - dataDir={{.PkgDataDir}}
        - dataLogDir={{.PkgLogDir}}
        - clientPort=2182
        - tickTime=2000
        - maxClientCnxns=60
        - initLimit=30
        - syncLimit=20
        - maxSessionTimeout=40000
      {{.PkgDataDir}}/myid:

Right now all the properties have to be put in the xx.yaml file. For log configuration, a better way is to put it into a static file and let the `config` section reference that static file.
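
One possible syntax sketch (hypothetical; the `@file:` prefix below is an assumption, not an existing huker feature):

    config:
      log4j.properties: @file:conf/zookeeper/log4j.properties   # read verbatim from a local static file
      zoo.cfg:
        - dataDir={{.PkgDataDir}}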

Change cluster.java_home to main_process

Cluster.javaHome is actually the main process of the program. After fixing #32, we basically support bootstrapping any program, but the name is confusing.
There are two places to change: in the configuration, `java_home` will be renamed to `main_process`; in the code, `javaHome` will become `mainProcess`.
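
For illustration, a sketch of the rename in a cluster.yaml (the field layout is an assumption based on this issue, not the actual schema):

    # before
    cluster:
      java_home: /opt/jdk/bin/java
    # after
    cluster:
      main_process: /opt/jdk/bin/java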

Supervisor: Abstract a common method to handle http response

In the methods hBootstrapProgram, hCleanupProgram, and handleProgram, we can find the following code:

if err := handleFunc(prog); err != nil {
	w.Write(renderResp(err))
	return
}

We can abstract a common method to handle the HTTP response, covering both the error case and the success case.
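
A minimal sketch of such a helper (the helper name is hypothetical, and it assumes renderResp renders a nil error as a success body, which is inferred from the snippet above):

    // writeResp runs the per-program action and writes the rendered
    // response exactly once, for both the error and success paths.
    func writeResp(w http.ResponseWriter, prog *Program, handleFunc func(*Program) error) {
        err := handleFunc(prog)
        w.Write(renderResp(err)) // assumed: renderResp(nil) renders a success message
    }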

Render all fields in cluster.yaml, not just the config files

Currently, we only render the config files, via the following method:

// If skipHostRender is true, the host-level rendering pass will be skipped.
func (c *Cluster) RenderConfigFiles(job *Job, taskId int, skipHostRender bool) (map[string]string, error) {
	//....
}

We need to render all fields in cluster.yaml, such as `java_class` and `extra_args`, because the following job config does not work when I try it:

  sqlline_thin:
    super_job: job_common
    main_entry:
      java_class: org.apache.phoenix.queryserver.client.SqllineWrapper
      extra_args:
        -d org.apache.phoenix.queryserver.client.Driver
        -u jdbc:phoenix:thin:url=http://%{queryserver.0.host:queryserver.0.base_port};serialization=PROTOBUF
        -n none
        -p none
        --color=true
        --fastConnect=false
        --verbose=true
        --incremental=false
        --isolation=TRANSACTION_READ_COMMITTED
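
One possible direction (a sketch only; renderString and the context argument are assumptions, not the project's actual API) is to pass every string field of a job through the same Go template machinery already used for the config files:

    // renderString applies the cluster's template context to an arbitrary
    // string field such as java_class or extra_args.
    // (imports: "bytes", "text/template")
    func renderString(field string, ctx map[string]interface{}) (string, error) {
        t, err := template.New("field").Parse(field)
        if err != nil {
            return "", err
        }
        var buf bytes.Buffer
        if err := t.Execute(&buf, ctx); err != nil {
            return "", err
        }
        return buf.String(), nil
    }

The %{...} cross-job references would presumably still go through the existing host-render pass.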

MRD

Aim at deploying Hadoop components efficiently, with a built-in monitoring and alerting service and a page view.
First, we should abstract the deployment procedure. A complete request flow looks like a DAG (directed acyclic graph): all the components are vertices in the graph, and the backend IPs are assumed to be the edges.
Second, we need to abstract the specification of a single component, such as an HDFS node or a ZooKeeper server. Docker is the first thing we could use. For monitoring, we install the metrics-collecting agent and the configuration-updating agent in the Docker container.

Agent: different root dir for each agent process.

Now we use the same root directory to store a project's lib (data/conf, etc.). A better way: each agent can be assigned its own root dir, as follows:

./bin/huker start-agent  --dir /tmp/rootdir-01 

and its lib/data/conf will be stored under a different directory. We can then start multiple agent processes on a single standalone machine, which is useful for development and testing.
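
For example (a sketch; each agent would also need to listen on a distinct port, whose flag is omitted here since it is not part of this issue):

    ./bin/huker start-agent --dir /tmp/rootdir-01
    ./bin/huker start-agent --dir /tmp/rootdir-02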

Support starting a cluster on a single node

Currently we can't start a ZooKeeper cluster on a single node, because the current config design cannot specify different ports for different processes sharing the same job name.

We need to change the hosts section from:

hosts:
  - 127.0.0.1

to

hosts:
  127.0.0.1/port=9001/id=1:  # host.1
    config:
      zoo.cfg:
        - clientPort=2181
        - server.1=zoo1:2188:3188
  127.0.0.1/port=9001/id=2:  # host.2
    config:
      zoo.cfg:
        - clientPort=2281
        - server.2=zoo1:2288:3288
  127.0.0.1/port=9001/id=3:  # host.3
    config:
      zoo.cfg:
        - clientPort=2381
        - server.3=zoo1:2388:3388

BTW, the base_port part can be removed now.

Try to build an HDFS cluster on top of a ZooKeeper cluster.

Currently, we can start a distributed ZooKeeper cluster on a single host. Now we need to test whether we can successfully build an HDFS cluster on top of the existing ZooKeeper cluster.

BTW, we need a common HDFS template file under the conf/hdfs/ directory so users can build their own HDFS clusters.

Cannot clean up a job if bootstrapping it failed while starting the process

See the following:

➜  huker git:(alpha-1.0) ./bin/huker bootstrap  --project zookeeper --cluster test-cluster --job zookeeper 
2018/01/15 07:39:27 [ERROR][github.com/openinx/huker] huker_cli.go:55: Bootstrap job zookeeper at 127.0.0.1 failed, err: {"message":"error: symlink /tmp/huker/agent12/test-cluster/zookeeper/stdout/zookeeper-3.4.11 /tmp/huker/agent12/test-cluster/zookeeper/pkg: file exists","status":1}
➜  huker git:(alpha-1.0) ./bin/huker cleanup  --project zookeeper --cluster test-cluster --job zookeeper 
2018/01/15 07:39:36 [ERROR][github.com/openinx/huker] huker_cli.go:112: Cleanup job zookeeper at 127.0.0.1 failed, err: {"message":"error: name: test-cluster, job: zookeeper not found.","status":1}

The bootstrap fails because the pkg symlink already exists, yet the half-bootstrapped job apparently never gets registered, so the subsequent cleanup cannot find it.

Supervisor: implement rolling_update

Currently, our supervisor does nothing when rolling_update is called; this needs to be implemented.

func (p *Program) rollingUpdate(s *Supervisor) error {
	// TODO
	return nil
}
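
One possible shape for the implementation (a sketch only; stop, updatePackage, and start are hypothetical helper names, not the project's actual API):

    // rollingUpdate replaces the program's package and restarts the process:
    // stop it, re-link the new package version, then start it again.
    func (p *Program) rollingUpdate(s *Supervisor) error {
        if err := p.stop(s); err != nil { // hypothetical helper
            return err
        }
        if err := p.updatePackage(s); err != nil { // hypothetical helper
            return err
        }
        return p.start(s) // hypothetical helper
    }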

Some features

  1. We should abstract zookeeper.znode.parent & hbase.rootdir for different HBase clusters; otherwise we cannot deploy multiple distinct HBase clusters.
  2. We should update the pkg.yaml configuration now, because the official Apache site has updated its latest stable binary packages.

Render template improvement.

Currently, we cannot render a dependency's job information into the configuration files. For example, HBase depends on HDFS, but for now we have to configure the namenode address explicitly. This needs to be fixed by introducing a variable, such as:

fs.defaultfs:=hdfs://%{dependencies.0.namenode.host}:%{dependencies.0.namenode.port}/hbase/rootdir

Need to add variable %{dependencies.0.cluster_name} for phoenix

Now we have the following in phoenix/common/common.yaml:

      hbase-site.xml:
        - hbase.cluster.distributed=true
        - hbase.zookeeper.quorum=%{dependencies.0.zkServer.1.host}:%{dependencies.0.zkServer.1.base_port}
        - zookeeper.znode.parent=/hbase/test-hbase # TODO need to change this to be %{dependencies.0.cluster_name}
      log4j.properties:

Need to fix this.

Try to build a Hive cluster on top of an HDFS/YARN cluster.

Problems encountered:

  1. Render issue: #31
  2. Hive depends on Hadoop's libs, which are not included in Hive's release package, so currently I have to hard-code constant paths in the classpath. This needs fixing:
classpath:
  - {{.PkgConfDir}}
  - {{.PkgRootDir}}/lib/*
  - {{.PkgRootDir}}/jdbc/*
  - /tmp/huker/.packages/967c24f3c15fcdd058f34923e92ce8ac/hadoop-2.6.5/lib/native/*
  - /tmp/huker/.packages/967c24f3c15fcdd058f34923e92ce8ac/hadoop-2.6.5/share/hadoop/common/*
  - /tmp/huker/.packages/967c24f3c15fcdd058f34923e92ce8ac/hadoop-2.6.5/share/hadoop/common/lib/*
  - /tmp/huker/.packages/967c24f3c15fcdd058f34923e92ce8ac/hadoop-2.6.5/share/hadoop/mapreduce/*
  - /tmp/huker/.packages/967c24f3c15fcdd058f34923e92ce8ac/hadoop-2.6.5/share/hadoop/mapreduce/lib/*
  - /tmp/huker/.packages/967c24f3c15fcdd058f34923e92ce8ac/hadoop-2.6.5/share/hadoop/yarn/*
  - /tmp/huker/.packages/967c24f3c15fcdd058f34923e92ce8ac/hadoop-2.6.5/share/hadoop/yarn/lib/*
  - /tmp/huker/.packages/967c24f3c15fcdd058f34923e92ce8ac/hadoop-2.6.5/share/hadoop/hdfs/*
  - /tmp/huker/.packages/967c24f3c15fcdd058f34923e92ce8ac/hadoop-2.6.5/share/hadoop/hdfs/lib/*
  3. In cfg.go, we parse key-value pairs with
strings.Split(c.keyValues[i], "=") 

So for the Hive case, hive-site.xml may contain the following config:

javax.jdo.option.ConnectionURL=jdbc:derby:;databaseName={{.PkgDataDir}}/metastore_db;create=true

which leads to:

if len(parts) != 2 {
	panic(fmt.Sprintf("Invalid key value pair, key or value not found. %s", c.keyValues[i]))
}
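
A likely fix (a sketch): split only on the first '=' with strings.SplitN, so that the value itself may contain '=' characters:

    // Split on the first '=' only; everything after it belongs to the value.
    parts := strings.SplitN(c.keyValues[i], "=", 2)
    if len(parts) != 2 {
        panic(fmt.Sprintf("Invalid key value pair, key or value not found. %s", c.keyValues[i]))
    }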
  4. Another problem: after we start hiveserver2, connect to its port with beeline, and create a table, it throws:
0: jdbc:hive2://localhost:50100> CREATE TABLE u_data (
. . . . . . . . . . . . . . . .>   userid INT,
. . . . . . . . . . . . . . . .>   movieid INT,
. . . . . . . . . . . . . . . .>   rating INT,
. . . . . . . . . . . . . . . .>   unixtime STRING)
. . . . . . . . . . . . . . . .> ROW FORMAT DELIMITED
. . . . . . . . . . . . . . . .> FIELDS TERMINATED BY '\t'
. . . . . . . . . . . . . . . .> STORED AS TEXTFILE;
Error: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.ipc.RemoteException User: openinx is not allowed to impersonate openinx)
	at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380)
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257)
	at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
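
The impersonation failure above is the standard Hadoop proxy-user restriction. The usual remedy (a sketch, assuming a default Hadoop setup) is to allow the HiveServer2 user in HDFS's core-site.xml:

    hadoop.proxyuser.openinx.hosts=*
    hadoop.proxyuser.openinx.groups=*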
