blog's Issues

Native Popup

<!DOCTYPE html>
<html lang="en">
	<head>
		<title>Popup</title>
		<style type="text/css">
			#popupEle {
				display: none;
			}
			.popup-container {
				position: absolute;
				top: 0;
				left: 0;
				bottom: 0;
				right: 0;
				display: flex;
				justify-content: center;
				align-items: center;
			}
			.popup-container .popup-mask {
				position: absolute;
				top: 0;
				left: 0;
				bottom: 0;
				right: 0;
				background-color: rgba(100, 100, 100, 0.5);
			}
			.popup-container .popup-content {
				height: 300px;
				width: 500px;
				background-color: white;
				position: relative;
			}
		</style>
	</head>
	<body>
		<a onclick="togglePopup()" href="javascript:void(0)">show popup</a>
		<div id="popupEle" class="popup-container">
			<div class="popup-mask" onclick="togglePopup()"></div>
			<div class="popup-content">popup dialog</div>
		</div>
		<script type="text/javascript">
			var isPopupShown = false;
			function togglePopup() {
				isPopupShown = !isPopupShown;
				document.getElementById('popupEle').style.display = isPopupShown ? 'flex' : 'none';
			}
		</script>
	</body>
</html>

Reset Page Scroll Position to Top when Route Changes

For Angular 5

import { Component, OnInit } from '@angular/core';
import { Router, NavigationEnd } from '@angular/router';

@Component({
    selector: 'my-app',
    template: '<ng-content></ng-content>',
})
export class MyAppComponent implements OnInit {
    constructor(private router: Router) { }

    ngOnInit() {
        this.router.events.subscribe((evt) => {
            if (!(evt instanceof NavigationEnd)) {
                return;
            }
            window.scrollTo(0, 0)
        });
    }
}

For Angular 6.1+

const routes: Routes = [
  {
    path: '...',
    component: ...
  },
  ...
];

@NgModule({
  imports: [
    RouterModule.forRoot(routes, {
      scrollPositionRestoration: 'enabled', // Add options right here
    })
  ],
  exports: [RouterModule]
})
export class AppRoutingModule { }


What does react eject do

Curious about what react eject actually does? Here's the execution log after running npm run eject (which is equivalent to react-scripts eject).

What Does eject Do:

  1. Copies the scripts and config from $PROJECT_HOME/node_modules/react-scripts into $PROJECT_HOME
  2. Updates the package.json in the project home:
  • copies the dependencies, scripts, jest, babel, and eslintConfig entries from react-scripts into the project
  • removes the react-scripts dependency


xuyuzhus-mbp:test_eject xuyuzhu$ npm run eject

> [email protected] eject /Users/xuyuzhu/developworks/workspace/FED/react/test_eject
> react-scripts eject

? Are you sure you want to eject? This action is permanent. Yes
Ejecting...

Copying files into /Users/xuyuzhu/developworks/IBM/workspace/FED/react/test_eject
  Adding /config/env.js to the project
  Adding /config/paths.js to the project
  Adding /config/polyfills.js to the project
  Adding /config/webpack.config.dev.js to the project
  Adding /config/webpack.config.prod.js to the project
  Adding /config/webpackDevServer.config.js to the project
  Adding /config/jest/cssTransform.js to the project
  Adding /config/jest/fileTransform.js to the project
  Adding /scripts/build.js to the project
  Adding /scripts/start.js to the project
  Adding /scripts/test.js to the project

Updating the dependencies
  Removing react-scripts from dependencies
  Adding autoprefixer to dependencies
  Adding babel-core to dependencies
  Adding babel-eslint to dependencies
  Adding babel-jest to dependencies
  Adding babel-loader to dependencies
  Adding babel-preset-react-app to dependencies
  Adding babel-runtime to dependencies
  Adding case-sensitive-paths-webpack-plugin to dependencies
  Adding chalk to dependencies
  Adding css-loader to dependencies
  Adding dotenv to dependencies
  Adding dotenv-expand to dependencies
  Adding eslint to dependencies
  Adding eslint-config-react-app to dependencies
  Adding eslint-loader to dependencies
  Adding eslint-plugin-flowtype to dependencies
  Adding eslint-plugin-import to dependencies
  Adding eslint-plugin-jsx-a11y to dependencies
  Adding eslint-plugin-react to dependencies
  Adding extract-text-webpack-plugin to dependencies
  Adding file-loader to dependencies
  Adding fs-extra to dependencies
  Adding html-webpack-plugin to dependencies
  Adding jest to dependencies
  Adding object-assign to dependencies
  Adding postcss-flexbugs-fixes to dependencies
  Adding postcss-loader to dependencies
  Adding promise to dependencies
  Adding raf to dependencies
  Adding react-dev-utils to dependencies
  Adding resolve to dependencies
  Adding style-loader to dependencies
  Adding sw-precache-webpack-plugin to dependencies
  Adding url-loader to dependencies
  Adding webpack to dependencies
  Adding webpack-dev-server to dependencies
  Adding webpack-manifest-plugin to dependencies
  Adding whatwg-fetch to dependencies

Updating the scripts
  Replacing "react-scripts start" with "node scripts/start.js"
  Replacing "react-scripts build" with "node scripts/build.js"
  Replacing "react-scripts test" with "node scripts/test.js"

Configuring package.json
  Adding Jest configuration
  Adding Babel preset
  Adding ESLint configuration

Ejected successfully!

Please consider sharing why you ejected in this survey:
  http://goo.gl/forms/Bi6CZjk1EqsdelXk1

All Kubernetes pods become Evicted

Error
Almost all pods were duplicated and their status was Evicted, and the number of pods kept increasing because of the deployment self-heal mechanism. The pod description showed 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

Debug Actions
Check the description of each pod with kubectl describe pod [pod name] -n [namespace]; the event log there pointed to DiskPressure. Here are the debug commands.

# Describe the pod status and detailed action trace log
kubectl describe pod [pod name] -n [namespace]

# Describe the node status and detailed action trace log
kubectl get nodes
kubectl describe node [node name]

# Get the running log of kubelet.
systemctl status kubelet

# Get the detailed log of kubelet
journalctl -xefu kubelet

# Get the taints status
kubectl get no -o yaml | grep taint -A 5 
---
    taints:
    - effect: PreferNoSchedule
      key: node-role.kubernetes.io/master
    - effect: NoSchedule
      key: node.kubernetes.io/disk-pressure
      timeAdded: "2019-11-14T11:20:54Z"
----

# Allow scheduling on the master node. BTW, the default master taint effect is NoSchedule; the command below removes that rule.
kubectl taint nodes --all node-role.kubernetes.io/master-

# Check the disk usage
df -h

# Check the detailed size of the specified dir
du -h --max-depth=1 [current directory]

# Generate and run force-delete commands for all Evicted pods across namespaces
kubectl get pods --all-namespaces -owide | grep Evicted | awk '{ printf "kubectl delete pods -n %s %s --force --grace-period 0\n", $1, $2}' | sh

# Delete all evicted pods
kubectl get pods | grep Evicted | awk '{print $1}' | xargs kubectl delete pod
kubectl get pods -n [your namespace] | grep Evicted | awk '{print $1}'| xargs kubectl delete pod -n [your namespace]

# Delete all pods under specified namespace
kubectl delete --all pods --namespace=[your namespace]

# Check the kubelet resource threshold
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

# Restart docker
systemctl daemon-reload
systemctl restart docker

# Restart kubelet
systemctl restart kubelet.service

Cause
This is usually caused by insufficient resources. Check the disk usage with df -h.
Pay attention to the first line /dev/mapper/rhel-root 50G 43G 7.5G 86% /. This is the mount directory that Docker uses. Only 14% of the disk is left, which triggers the Kubernetes disk-pressure warning.

[root@xyz ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   50G   43G  7.5G  86% /
devtmpfs                16G     0   16G   0% /dev
tmpfs                   16G   12M   16G   1% /dev/shm
tmpfs                   16G  1.6G   15G  11% /run
tmpfs                   16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/rhel-home  148G   19G  129G  13% /home
/dev/sda1              497M   68M  430M  14% /boot
tmpfs                  3.2G   28K  3.2G   1% /run/user/0
tmpfs                  3.2G   12K  3.2G   1% /run/user/42
172.16.2.155:/nfsdata   50G   43G  7.2G  86% /testdata

Solution
Free up disk space or change the mount directory. Please refer to #41 for more details.

Multiple AuthGuard

Angular CLI: 1.7.4
Node: 8.10.0
Angular: 5.2.10
OS: darwin x64

Issue Description

In our project, several router guards exist for either authentication or authorization. Some guards return a boolean, while others return an Observable or a Promise. The problem is that we can NOT control the execution order of these guards.

Here are some code snippets of my customized guards.

// auth-guard.service.ts
@Injectable()
export class AuthGuard implements CanActivate {
  canActivate(route: ActivatedRouteSnapshot,  state: RouterStateSnapshot): Observable<boolean> {
    return Observable.of(true);
  }
}

// access-guard.service.ts
@Injectable()
export class AccessGuard implements CanActivate {
  canActivate(route: ActivatedRouteSnapshot,  state: RouterStateSnapshot): boolean {
    return false;
  }
}

// admin-routing.module.ts
const routes: Routes = [
  {
    path: 'admin',
    component: AdminComponent,
    canActivate: [ AuthGuard, AccessGuard ],
...
}...]

Root Cause

Here's the source code of the CanActivate interface. The declared return type of canActivate is Observable<boolean> | Promise<boolean> | boolean.

export interface CanActivate {
    canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean> | Promise<boolean> | boolean;
}

But for now, the execution order of the guards in the list is out of our control, due to an Angular bug or a feature that has not been implemented yet.

Solution

Inject AuthGuard into AccessGuard and chain the checks explicitly:

@Injectable()
export class AccessGuard implements CanActivate {
  constructor(private _authGuard: AuthGuard) {}
  canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Promise<boolean> {
    // AuthGuard.canActivate returns an Observable, so convert it before chaining
    return this._authGuard.canActivate(route, state).toPromise().then((auth: boolean) => {
      if (!auth) {
        return Promise.resolve(false);
      }
      //... your role guard check code goes here
    });
  }
}


Zuul Routes WebSocket

Background:
Failed to route WebSocket services behind Zuul.

Ref (Netflix/zuul#551 (comment)):
The Zuul server doesn't yet support proxying WebSocket connections. It allows a client to maintain an open connection with the server and then pushes messages to that client over the WebSocket by sending them to the zuul.server.port.http.push port (7008 by default). It won't proxy inbound WebSocket messages to the backend.

Distributed Transaction | 分布式事务

Problems

  1. Microservices favor splitting databases and tables. When a piece of business logic involves write operations across multiple microservices, we have to consider how to roll back data that other services have already modified if one of the services crashes.
  2. TODO: collect and organize more cases

Existing solutions

  1. Avoid splitting the database whenever possible
  2. Database two-phase commit (2PC); a minimal sketch follows this list
  3. Database three-phase commit (3PC)
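
For reference, here is a minimal sketch of an application-level two-phase-commit coordinator (the interface and names are hypothetical, not an existing library; real 2PC also needs durable logging and crash recovery, which are omitted here):

// Phase 1: every participant votes; Phase 2: commit only if all voted yes, otherwise roll back.
interface Participant {
  prepare(txId: string): Promise<boolean>;   // vote: can this service commit?
  commit(txId: string): Promise<void>;
  rollback(txId: string): Promise<void>;
}

async function twoPhaseCommit(txId: string, participants: Participant[]): Promise<boolean> {
  let allPrepared: boolean;
  try {
    const votes = await Promise.all(participants.map(p => p.prepare(txId)));
    allPrepared = votes.every(v => v);
  } catch (err) {
    allPrepared = false; // any failure during the vote aborts the transaction
  }

  if (allPrepared) {
    await Promise.all(participants.map(p => p.commit(txId)));
    return true;
  }
  await Promise.all(participants.map(p => p.rollback(txId)));
  return false;
}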

Converting Certificate Formats with openssl | openssl 命令转化证书格式

Background

The most commonly used certificate formats:

  • Binary DER certificate: contains an X.509 certificate in its raw form, using DER ASN.1 encoding.
  • ASCII PEM certificate: contains a base64-encoded DER certificate, starting with -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----.
  • PKCS#7 certificate: a more complex format designed for the transport of signed or encrypted data, defined in RFC 2315. It usually carries the .p7b or .p7c suffix and can include the entire certificate chain. This format is supported by Java's keytool utility.
  • PKCS#12 (PFX) certificate and private key: a complex format that can store and protect a server's private key together with a complete certificate chain. It usually carries the .p12 or .pfx suffix. This format is commonly used in Microsoft products, but it can also be used for client certificates.

The corresponding private key formats:

  • Binary DER private key: contains a private key in its raw form, using DER ASN.1 encoding. OpenSSL creates private keys in its traditional SSLeay format, but it can also use another, less widely used format called PKCS#8 (defined in RFC 5208); the pkcs8 command in OpenSSL handles PKCS#8-format keys.
  • ASCII PEM private key: contains a base64-encoded DER private key, sometimes with extra metadata such as the algorithm used for password protection. For example, an RSA private key starts with -----BEGIN RSA PRIVATE KEY----- and ends with -----END RSA PRIVATE KEY-----.

With all that said, converting private keys is the simple part: they can only be converted between DER and PEM. Converting certificates, by comparison, is a little more involved.
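
For example (standard openssl commands, shown here as a quick sketch; the file names are placeholders), a private key can be converted between PEM and DER like this:

# PEM -> DER
openssl rsa -in key.pem -outform DER -out key.der
# DER -> PEM
openssl rsa -inform DER -in key.der -out key-converted.pem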

Conversion

Basic syntax

openssl [original cert format] -in [source file] -out [target file, you can identify your expected format] [...args]
openssl pkcs12 -in domain.p12 -out key.pem -nodes -nocerts
openssl pkcs12 -in domain.p12 -out cert.pem -nodes -nokeys
openssl rsa -in key.pem -out rsakey.pem

# The same approach applies to .key/.crt
openssl pkcs12 -in filename.pfx -nocerts -out filename.key
openssl pkcs12 -in filename.pfx -clcerts -nokeys -out filename.crt

## Some other openssl options
# -clcerts: output client certificates only, no CA certificates
# -cacerts: output CA certificates only, no client certificates
# -nodes: do not encrypt the private key
# -nocerts: do not output any certificates
# -nokeys: do not output any private keys

What the -nodes option changes

## 1. with -nodes

-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----



## 2. without -nodes

-----BEGIN ENCRYPTED PRIVATE KEY-----
...
-----END ENCRYPTED PRIVATE KEY-----

References

https://segmentfault.com/a/1190000006808275
https://blog.csdn.net/as3luyuan123/article/details/16105475
https://blog.csdn.net/zxh2075/article/details/79967227

Ops design for Angular project

Background
We have four environments: dev, qa, staging, and production. Each environment has different configurations, such as the API entrypoint, web server settings (host, port, SSL, domain), compile parameters, and deploy parameters. The goal is to quickly build and deploy our web app to any of these environments through a simple command.

Design

  1. create different environment files
|-environments
	|-environment.dev.ts
	|-environment.prod.ts
	|-environment.qa.ts
	|-environment.stg.ts
	|-environment.ts
  2. configure .angular-cli.json to map the corresponding environment files
      "environmentSource": "environments/environment.ts",
      "environments": {
        "local": "environments/environment.ts",
        "dev": "environments/environment.dev.ts",
        "qa": "environments/environment.qa.ts",
        "stg": "environments/environment.stg.ts",
        "prod": "environments/environment.prod.ts"
      }
  3. create different proxy configurations
|-proxy.conf.dev.json
|-proxy.conf.qa.json
|-proxy.conf.stg.json
|-proxy.conf.prod.json
  4. add start-up commands for the various environments
"scripts": {
    "ng": "ng",
    "start": "ng serve --ssl --proxy-config proxy.conf.json --environment=local --deploy-url=/",
    "dev": "ng serve --ssl --proxy-config proxy.conf.dev.json --prod --environment=dev --sourcemaps --disable-host-check --deploy-url=/",
    "qa": "ng serve --ssl --proxy-config proxy.conf.qa.json --prod --environment=qa --sourcemaps --disable-host-check --deploy-url=/",
    "stg": "ng serve --ssl --proxy-config proxy.conf.stg.json --prod --environment=stg --sourcemaps --disable-host-check --deploy-url=/app/xyz/",
    "prod": "ng serve --ssl --proxy-config proxy.conf.prod.json --prod --sourcemaps --disable-host-check --deploy-url=/app/xyz/",
...

Docker Commands Collection | Docker 命令集合

docker-compose start: start the current deployment
docker-compose stop: stop the current deployment
docker-compose down: tear down (remove) the current deployment
docker-compose ps: show the status of the current deployment
docker-compose up -d: start docker-compose in detached (background) mode

docker network ls: list all Docker networks
docker network create webnet: create a virtual bridge network


Large size of Angular prod build outputs

Hash: cf3f49f4e32ba3245150
Time: 212552ms
chunk {scripts} scripts.84800f8e08ba46ef92c9.bundle.js, scripts.84800f8e08ba46ef92c9.bundle.js.map (scripts) 712 kB [initial] [rendered]
chunk {0} 0.fc9075eacc1ed85f83bb.chunk.js, 0.fc9075eacc1ed85f83bb.chunk.js.map () 1.44 MB  [rendered]
chunk {1} 1.cc75c2547d32d0fcc728.chunk.js, 1.cc75c2547d32d0fcc728.chunk.js.map () 67.5 kB  [rendered]
chunk {2} polyfills.39dce4f37bbf9159855d.bundle.js, polyfills.39dce4f37bbf9159855d.bundle.js.map (polyfills) 59.9 kB [initial] [rendered]
chunk {3} main.a6f5d38b822358a3d76a.bundle.js, main.a6f5d38b822358a3d76a.bundle.js.map (main) 364 kB [initial][rendered]
chunk {4} styles.fd8a8bdd1420259e6edb.bundle.css, styles.fd8a8bdd1420259e6edb.bundle.css.map (styles) 586 kB [initial] [rendered]
chunk {5} vendor.dec7adeabc35d2cdf344.bundle.js, vendor.dec7adeabc35d2cdf344.bundle.js.map (vendor) 1.88 MB [initial] [rendered]
chunk {6} inline.864842f6ff5a0ad569ef.bundle.js, inline.864842f6ff5a0ad569ef.bundle.js.map (inline) 1.48 kB [entry] [rendered]

Solution

  1. Enable Angular build with production mode, which includes --aot --build-optimizer by default.
ng build --sourcemaps --prod --environment ${env} --vendor-chunk true --deploy-url / --base-href /
  2. Enable compression on the web server. In my case we use Nginx, so just add the following configuration.
gzip  on;
gzip_vary on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types  text/plain application/javascript application/x-javascript text/javascript text/xml text/css;

Docker Multistage Build | Docker 多阶段 build

Background (a tragedy story...)
Our webapp project (React based) was initialized inside a Java Dynamic Web framework several years ago. We need THREE main tools:

  • Node.js & NPM: compile and build the frontend source code
  • JDK: package the project into a WAR
  • Tomcat: provide the runtime web server

My goal is to automate the above process in a single Dockerfile based on the Node and Tomcat images. Previously I started from the Tomcat image and installed the Node binaries via shell commands in the Dockerfile; later I found that multi-stage builds have been supported since Docker 17 (sad & lol). Here are the two versions of the Dockerfile.

Multistage build

############ Stage 1 ############
FROM node:8.11.2 as NODE_BUILDER
LABEL maintainer="Yu Zhu Xu <[email protected]>"

# System setup
RUN apt-get update && apt-get install -y

ENV PKG_NAME="webapp" \
  HOME=/home/node
  
# Development workspace setup
WORKDIR $HOME/$PKG_NAME/
RUN rm -rf $HOME/$PKG_NAME \
  && mkdir -p $HOME/$PKG_NAME

COPY . $HOME/$PKG_NAME/

# Build source code
RUN rm -rf node_modules \
  && npm install \
  && npm rebuild node-sass \
  && npm run build-cvms \
  && echo "JSX built successfully!"

############ Stage 2 ############
FROM tomcat:jdk8-openjdk AS JAVA_BUILDER

COPY --from=NODE_BUILDER /home/node /home/node
ENV APP_NAME="webapp"
WORKDIR /home/node/$APP_NAME/WebContent

# Package source code to WAR
RUN jar -cvf $APP_NAME.war ./* \
  && echo "WAR built successfully!"

# Deploy package to Tomcat
RUN rm -rf /usr/local/tomcat/webapps/* \
  && cp $APP_NAME.war /usr/local/tomcat/webapps/ \
  && echo "Tomcat webapp updated successfully!"

Tomcat seed

FROM tomcat:jdk8-openjdk
LABEL maintainer="Yu Zhu Xu <[email protected]>"

# System setup
RUN apt-get update \
  && apt-get install -y \
  && wget https://nodejs.org/download/release/v8.11.2/node-v8.11.2-linux-x64.tar.gz \
  && tar -xvf node-v8.11.2-linux-x64.tar.gz \
  && mv node-v8.11.2-linux-x64 /usr/local/node-v8.11.2
  
  
# Identify project related environments

RUN echo $PATH
ENV NODE_HOME=/usr/local/node-v8.11.2
ENV PATH=$NODE_HOME/bin:$PATH
RUN ls -al $NODE_HOME/bin

ENV PKG_NAME="webapp" \
  HOME=/home/node

RUN rm -rf $HOME/$PKG_NAME \
  && mkdir -p $HOME/$PKG_NAME
ADD . $HOME/$PKG_NAME/
WORKDIR $HOME/$PKG_NAME/

# Build source code of builder webapp
RUN rm -rf node_modules \
  && npm install \
  && npm rebuild node-sass \
  && bower --allow-root install \
  && npm run build-cvms \
  && cd WebContent \
  && jar -cvf $PKG_NAME.war ./*
  
# Deploy package to Tomcat
RUN rm -rf /usr/local/tomcat/webapps/* \
  && cp $PKG_NAME.war /usr/local/tomcat/webapps/ \
  && echo "Tomcat webapp updated successfully!"

Some tips:

  1. ENV variables are not shared between stages
  2. Set the correct WORKDIR in each stage
  3. Consider trimming what you copy into the Tomcat stage (ideally only the built artifacts) so the final image stays small and intermediate layers can be pruned
  4. Multi-stage builds are only supported by Docker 17.05 and above

Ref: https://docs.docker.com/develop/develop-images/multistage-build/

JavaScript - call / apply / bind / curry

call & apply

obj.call(thisObj, arg1, arg2, ...);
obj.apply(thisObj, [arg1, arg2, ...]);

Both do the same thing: they bind obj (i.e. this) to thisObj, so that thisObj can make use of obj's properties and methods; you could say thisObj "inherits" obj's behavior for that call.
The only difference is that apply takes its arguments as an array, while call takes them as a comma-separated list.

bind

obj.bind(thisObj, arg1, arg2, ...);

bind also binds obj to thisObj, so thisObj gains access to obj's properties and methods. Unlike call and apply, though, bind does not invoke the function immediately; it returns a new bound function.
How bind can be implemented (a simplified polyfill)

Function.prototype.bind = Function.prototype.bind || function(context){
  var self = this; // the function being bound

  // Note: this simplified version only fixes `this`; it does not support preset (partial) arguments.
  return function(){
    return self.apply(context, arguments);
  };
}
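
A small usage sketch showing that bind, unlike call and apply, does not invoke the function immediately (the names here are purely illustrative):

var user = { name: 'Alice' };
function greet(greeting) {
  console.log(greeting + ', ' + this.name);
}
var greetUser = greet.bind(user); // nothing is printed yet
greetUser('Hello');               // "Hello, Alice"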

Currying
Definition: turn a function that accepts multiple arguments into one that accepts only the first argument(s) and returns a new function that accepts the remaining arguments and returns the result.

var currying = function(fn) {
    var args = [].slice.call(arguments, 1); // everything after fn becomes the preset arguments for fn
    return function() {
        var newArgs = args.concat([].slice.call(arguments)); // merge the preset arguments with the new ones
        return fn.apply(null, newArgs); // apply the combined arguments to fn
    };
};
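
A usage sketch of the currying helper above (add is a hypothetical example function, not from the original post):

function add(a, b, c) {
    return a + b + c;
}
var add10 = currying(add, 10); // preset the first argument
add10(2, 3);                   // 15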


Screen Rotation of Web App on Mobile Device

  1. Use the ScreenOrientation Web API. However, browser compatibility is poor; in practice it only works reliably in Chrome...
  // Prefer the standard screen.orientation.lock(); fall back to the legacy prefixed functions.
  var lockFunction = window.screen.lockOrientation || window.screen.mozLockOrientation || window.screen.msLockOrientation;
  if (window.screen.orientation && window.screen.orientation.lock) {
    window.screen.orientation.lock('landscape');
  } else if (lockFunction) {
    lockFunction.call(window.screen, 'landscape');
  } else {
    console.log('Screen orientation lock is not supported');
  }
  2. Rotate the page -90deg when the screen is in portrait mode. It does not play well with RTL locales such as Arabic... Anyway, this covers most scenarios. Recommended.
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=0">
@media (max-device-width: $small) and (orientation: portrait) {
  body {
    transform: rotate(-90deg);
    transform-origin: left top;
    width: 100vh;
    overflow-x: hidden;
    position: absolute;
    top: 100%;
    left: 0;
  }
}


Nginx error page

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

Slack integration with Jenkins Pipeline

Pipeline snippets

pipeline{
    agent any

    stages{
        stage('demo'){
            steps{
                echo "hello kitty"
            }
        }
       
    }
    
    post {
      success {
        slackSend channel: '#my-channel', color: '#5FA269', iconEmoji: '', message: "[cvms-${profile}] Build successfully ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>)", teamDomain: "$MY_DOMAIN", token: "$MY_TOKEN", username: ''
      }

      failure {
        slackSend channel: '#my-channel', color: '#C92B2A', iconEmoji: '', message: "[cvms-${profile}] Failed to build ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>)", teamDomain:"$MY_DOMAIN", token: "$MY_TOKEN", username: ''
      }

      aborted {
        slackSend channel: "#my-channel", color: '#CCCCCC', iconEmoji: '', message: "[cvms-${profile}] Aborted to build ${env.JOB_NAME} ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>)", teamDomain: "$MY_DOMAIN", token: "$MY_TOKEN", username: ''
      }
    }
}
 

Out of Memory When Compiling Angular with `aot`

Angular CLI: 1.7.4
Node: 8.10.0
Angular: 5.2.10
OS: darwin x64

Issue Description
The build/serve with aot would probably fail, while it works if I remove the aot/prod parameter.
Here's my error log.

92% chunk asset optimization
<--- Last few GCs --->

[93700:0x102801e00]   145215 ms: Mark-sweep 1342.7 (1467.7) -> 1342.7 (1469.7) MB, 855.7 / 0.0 ms  allocation failure GC in old space requested
[93700:0x102801e00]   146103 ms: Mark-sweep 1342.7 (1469.7) -> 1342.5 (1437.2) MB, 887.1 / 0.0 ms  last resort GC in old space requested
[93700:0x102801e00]   146951 ms: Mark-sweep 1342.5 (1437.2) -> 1342.5 (1435.7) MB, 847.6 / 0.0 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x7459c0257c1 <JSObject>
    1: /* anonymous */ [/Users/xxx/web-app/node_modules/webpack-sources/node_modules/source-map/lib/source-node.js:~342] [pc=0x2284d747912e](this=0x7453720c211 <JSGlobal Object>,chunk=0x745500bede9 <String[6]: $event>,original=0x745616556a1 <Object map = 0x74532790011>)
    2: SourceNode_walk [/Users/xxx/web-app/n...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [/usr/local/bin/node]
 2: node::FatalException(v8::Isolate*, v8::Local<v8::Value>, v8::Local<v8::Message>) [/usr/local/bin/node]
 3: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/local/bin/node]
 4: v8::internal::Factory::NewUninitializedFixedArray(int) [/usr/local/bin/node]
 5: v8::internal::(anonymous namespace)::ElementsAccessorBase<v8::internal::(anonymous namespace)::FastPackedObjectElementsAccessor, v8::internal::(anonymous namespace)::ElementsKindTraits<(v8::internal::ElementsKind)2> >::GrowCapacity(v8::internal::Handle<v8::internal::JSObject>, unsigned int)[/usr/local/bin/node]
 6: v8::internal::Runtime_GrowArrayElements(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/local/bin/node]
 7: 0x2284d5f842fd

Reason
Hmm, it seems to be a known issue of Node.js which has already been fixed in later versions like 8.x.

Solution
Add max_old_space_size to raise the maximum heap size: just replace the build command ng build with node --max_old_space_size=8192 ./node_modules/@angular/cli/bin/ng build.
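
For convenience, the longer command can live in the package.json scripts (a sketch; the script name build:prod and the extra --prod flag are my own choice, while the CLI path is the one used in the command above):

"scripts": {
  "build:prod": "node --max_old_space_size=8192 ./node_modules/@angular/cli/bin/ng build --prod"
}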

Reference

angular/angular-cli#5618
angular/angular-cli#1652

Customize input/textarea placeholder style

$color: gray;
$fontSize: .875rem;
textarea, input {
	&::-webkit-input-placeholder {
		font-style: italic;
		color: $color;
		font-size: $fontSize;
		line-height: $fontSize;
	}
	&::-moz-placeholder {
		font-style: italic;
		color: $color;
		font-size: $fontSize;
		line-height: $fontSize;
	}
	&::-ms-input-placeholder {
		font-style: italic;
		color: $color;
		font-size: $fontSize;
		line-height: $fontSize;
	}
}

Note that these vendor-prefixed placeholder pseudo-selectors cannot be combined into one rule: if a browser does not recognize one selector in a selector list, it drops the entire rule. The following is therefore incorrect.

&::-webkit-input-placeholder, &::-ms-input-placeholder {
		font-style: italic;
		color: $color;
		font-size: $fontSize;
		line-height: $fontSize;
	}


Configure Multiple Local Repositories in Maven | Maven 配置本地多仓库

Memo: keep the id and name of each repository the same in both settings.xml and pom.xml.

Here's the settings.xml

  <mirrors>
  	<mirror>
  		<id>central</id>
  		<name>Human Readable Name for this Mirror.</name>
  		<url>http://repo2.maven.org/maven2/</url>
      <mirrorOf>central</mirrorOf>
  	</mirror>
  </mirrors>
  <profiles>
    <profile>
      <repositories>
        <repository>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
          <id>my_repo</id>
          <name>[self owned repository name]</name>
          <url>[self owned repository url]</url>
        </repository>
        <repository>
          <id>central</id>
          <name>Central Repository</name>
          <url>https://repo.maven.apache.org/maven2</url>
          <layout>default</layout>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </repository>
      </repositories>
    </profile>
  </profiles>

And here's the pom.xml in project

	<repositories>
	    <repository>
          <id>my_repo</id>
          <name>[self owned repository name]</name>
          <url>[self owned repository url]</url>
		</repository>
		<repository>
	      <id>central</id>
	      <name>Central Repository</name>
	      <url>https://repo.maven.apache.org/maven2</url>
	      <layout>default</layout>
	      <snapshots>
	        <enabled>false</enabled>
	      </snapshots>
	    </repository>
	</repositories>

Differences between a few mvn commands

  • mvn clean package runs, in order, the clean, resources, compile, testResources, testCompile, test, and jar (package) phases, 7 in total.
  • mvn clean install runs, in order, the clean, resources, compile, testResources, testCompile, test, jar (package), and install phases, 8 in total.
  • mvn clean deploy runs, in order, the clean, resources, compile, testResources, testCompile, test, jar (package), install, and deploy phases, 9 in total.


Deep Clone

  1. A real deep clone

    function deepClone(source) {
    	var dest = {};
    	var keys = Object.keys(source);
    	for(var idx in keys) {
    		var key = keys[idx];
    		if (typeof source[key] == "object") {
    			dest[key] = deepClone(source[key]); // recurse; note that typeof null is also "object"
    		} else {
    			dest[key] = source[key];
    		}
    	}
    	return dest;
    }
  2. A one-level "deep" clone, which is really a shallow clone (effectively the same as Object.assign(target, ...sources))

    function fakeDeepClone(source) {
    	var dest = {};
    	var keys = Object.keys(source);
    	for(var idx in keys) {
    		var key = keys[idx];
    		dest[key] = source[key];
    	}
    	return dest;
    }
  3. Verification

    let foo = {
    	a: 1,
    	b: {
    		c: 1
    	}
    };
    let deepFollower = deepClone(foo);
    let faker = fakeDeepClone(foo);
    let sir = {self: 1};
    Object.assign(sir, foo);
    
    foo.a ++;
    foo.b.c ++;
    
    foo.a; // 2
    foo.b.c; // 2
    
    deepFollower.a; // 1
    deepFollower.b.c; //1
    
    faker.a; // 1
    faker.b.c; // 2
    
    sir.a; // 1
    sir.b.c; // 2


JS prototype chain & inherit

function Rectangle(length, width) {
	this.length = length;
	this.width = width;
}

Rectangle.prototype.toString = function() {
	return "[Rectangle] length=" + this.length + ", width=" + this.width;
}

Rectangle.prototype.getArea = function() {
	return this.length * this.width;
}

function Square(size) {
	// constructor stealing: inherit Rectangle's own properties (length, width)
	Rectangle.call(this, size, size);
}

// link the prototype chain so Square instances also inherit Rectangle.prototype methods (e.g. getArea)
Square.prototype = Object.create(Rectangle.prototype);
Square.prototype.constructor = Square;

Square.prototype.toString = function() {
	return "[Square] length=" + this.length + ", width=" + this.width;
}

var square = new Square(5);
var rect = new Rectangle(1, 2);
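
A quick check of the resulting objects (a sketch; the expected values are noted in comments and assume the prototype linking above):

square.toString();                         // "[Square] length=5, width=5"
square.getArea();                          // 25, inherited from Rectangle.prototype
rect.toString();                           // "[Rectangle] length=1, width=2"
console.log(square instanceof Square);     // true
console.log(square instanceof Rectangle);  // true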

Considerations for a Distributed Search Engine | 分布式搜索引擎相关的考量点

Background: just getting started with search engines; recording some notes here.

  • Hotspots: two or more big customers would land on the same indexer, causing search performance to be non-deterministic.
  • No Replication/HA/Failover: each indexer stored the Lucene index but there was no copy, so if a machine went down we had to reindex while taking downtime.
  • SPOF: if an indexer went down due to write load, all customers on that shard were affected, even for read queries.
  • Manual rebalancing: hotspots were eliminated manually by resharding and copying data from one machine to another.
  • Inferior scaling: adding new nodes required downtime for rebalancing and index moves.

Checkpoints

  • Replication support
  • Elastic scaling
  • Auto rebalancing
  • No SPOF
  • REST API interface
  • Good documentation
  • Active community support
  • Cluster monitoring tools

Challenges
...TBD

Ref: https://www.elastic.co/blog/scaling-file-system-search-with-elasticsearch-at-egnyte

Render-Blocking CSS | 阻塞渲染的 CSS

When building the render tree, we saw that the critical rendering path requires both the DOM and the CSSOM. This has a significant performance implication: both HTML and CSS are render-blocking resources. HTML is obviously required (without the DOM there is nothing to render), but the necessity of CSS may be less obvious.

What if we have some CSS that is only used under certain conditions, for example when the page is printed or projected onto a large display? It would be nice if those resources did not block rendering.

CSS "media types" and "media queries" allow us to address exactly these use cases:

<link href="style.css" rel="stylesheet">
<link href="style.css" rel="stylesheet" media="all">
<link href="print.css" rel="stylesheet" media="print">
<link href="other.css" rel="stylesheet" media="(min-width: 40em)">
<link href="portrait.css" rel="stylesheet" media="orientation:portrait">
<link href="print.css" rel="stylesheet" media="print">

Note that "render-blocking" only refers to whether the browser has to hold the first render of the page until the resource is ready. In either case the browser still downloads the CSS asset; non-blocking resources simply get a lower download priority.


JS Constructor

function Person(name) {
	Object.defineProperty(this, "name", {
		get: function() { return name; },
		set: function(newName) { name = newName; },
		enumerable: true,
		configurable: true
	});
	this.sayName = function() {
		console.log(this.name);
    }
}
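
A brief usage sketch of the accessor property defined above (expected output noted in comments):

var person = new Person("Nicholas");
person.sayName();                  // "Nicholas"
person.name = "Greg";              // goes through the setter, updating the captured name variable
person.sayName();                  // "Greg"
console.log(Object.keys(person));  // ["name", "sayName"] because name is enumerable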

Some basic rules based on scss

margin, padding

@for $i from 1 through 10 {
  .ml#{$i * 5} {
    margin-left: calc(5px * #{$i});
  }
  .mr#{$i * 5} {
    margin-right: calc(5px * #{$i});
  }
  .mt#{$i * 5} {
    margin-top: calc(5px * #{$i});
  }
  .mb#{$i * 5} {
    margin-bottom: calc(5px * #{$i});
  }
  .pl#{$i * 5} {
    padding-left: calc(5px * #{$i});
  }
  .pr#{$i * 5} {
    padding-right: calc(5px * #{$i});
  }
  .pt#{$i * 5} {
    padding-top: calc(5px * #{$i});
  }
  .pb#{$i * 5} {
    padding-bottom: calc(5px * #{$i});
  }
}

Cross-geo System Design | 跨节点数据中心/跨地域系统设计

Our pain points

  1. Users are distributed across different geos, e.g. NA, AP, EU, etc. Considering network latency, we decided to deploy a full copy of the solution in each geo.
  2. Most cloud vendors currently do not offer a cross-geo persistent replication solution, e.g. Alibaba Cloud and IBM Cloud; AWS has not been investigated, so that is uncertain.
  3. The built-in cluster replication strategies of PostgreSQL, MongoDB, and Redis all differ.
  4. NoSQL databases natively provide a cluster URL for connections; RDBMSs do not support this out of the box.

Key checkpoints of an ideal solution

  1. High replication accuracy
  2. Low replication latency
  3. Automatic failover
  4. A small switchover window
  5. Load balancing support
  6. License friendliness

Ideal solution

  • For an RDBMS the ideal setup is a multi-master cluster. The biggest problem with multi-master is write-conflict handling, and most pgsql middleware currently has poor (or incomplete) multi-master support. FYI - the official paid multi-master offering for pgsql is the most complete one.
  • For NoSQL, use the cluster's built-in replication mechanism.

The solutions investigated here are:

  • PostgreSQL cluster with read/write splitting, built on streaming replication + pgpool-II
  • MongoDB replica set replication
  • Redis cluster replication (TODO: investigate the cluster replication strategy)


Kubernetes issue - 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

Background: all pods became Evicted. Please refer to #40.

Error log

[root@xyz ~]# kubectl describe node my.xyz.com
Name:               my.xyz.com
Roles:              edge,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=cv-ms-dev.austin.ibm.com
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/edge=
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"96:e5:1e:31:ff:fb"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.16.2.154
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 20 Oct 2019 10:44:01 -0400
Taints:             node.kubernetes.io/disk-pressure:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 14 Nov 2019 06:38:32 -0500   Sun, 20 Oct 2019 10:43:58 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     True    Thu, 14 Nov 2019 06:38:32 -0500   Thu, 14 Nov 2019 06:20:31 -0500   KubeletHasDiskPressure       kubelet has disk pressure
  PIDPressure      False   Thu, 14 Nov 2019 06:38:32 -0500   Sun, 20 Oct 2019 10:43:58 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 14 Nov 2019 06:38:32 -0500   Thu, 14 Nov 2019 06:18:27 -0500   KubeletReady                 kubelet is posting ready status
.....
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From                               Message
  ----     ------            ----                 ----                               -------
  Warning  FailedScheduling  18m (x17 over 34m)   default-scheduler                  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

Cause 1
By default, the master node is not allowed to run workload pods for security reasons, so its default taint effect is "NoSchedule". However, my Kubernetes cluster has only one node, which takes the role of both master and edge. I need to change the master node's default scheduling rule or remove the master taint, and then apply the change by restarting the kubelet service.

[root@xyz ~]# kubectl get no -o yaml | grep taint -A 5
    taints:
    - effect: PreferNoSchedule
      key: node-role.kubernetes.io/master
    - effect: NoSchedule
      key: node.kubernetes.io/disk-pressure
      timeAdded: "2019-11-14T11:20:54Z"

Solution

[root@xyz ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/mynodename untainted
[root@xyz ~]# systemctl restart kubelet.service
[root@xyz ~]# kubectl get pods --all-namespaces | grep Evicted | awk '{print $1}'| xargs kubectl delete pod --all-namespaces

Cause 2
The DiskPressure condition is True. I checked the disk usage of /; it was already 86% used... So I need to either expand the root directory or raise the kubelet DiskPressure threshold. I chose the latter by adding the eviction flag --eviction-hard=nodefs.available<5%.

Solution

[root@xyz docker]# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --eviction-hard=nodefs.available<5%"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS


Nginx Configuration per IP Address | Nginx 针对IP的配置

Background
The proxy server needs to treat certain IP ranges differently, for example:
1.2.3.10/19 ----> https://www.google.com
1.2.3.20/29 ----> 502 error page

Solution 1
Use a regular expression against $remote_addr

location / {
  if ( $remote_addr ~* ^(.*)\.(.*)\.(.*)\.*[026]$){
       proxy_pass http://test-01.com;
       break;
      }
      proxy_pass http://test-02.com;
  }
  ...
}

Solution 2
Use the geo module

geo $bad_user {
  default 0;
  1.2.3.4/32 1;
  4.3.2.1/32 1;
}

server {
  if ($bad_user) {
    rewrite ^ https://www.google.com;
  }
}


iframe issue on iOS Safari

Angular CLI: 1.7.4
Node: 8.10.0
Angular: 5.2.10
OS: darwin x64

Issue Description
iframe issues on Safari on iOS 10/11:

  1. The height attribute of the iframe does NOT work
  2. The iframe fails to scroll its content

Solution

<div class="iframe-wrapper">
  <iframe [src]="xyz.html"></iframe>
</div>
.iframe-wrapper {
  flex: 1 0 0;
  width: 100%;
  -webkit-overflow-scrolling: touch;
  overflow-y: auto;

  iframe {
    border: 0;
    width: 100%;
    height: 100%;
    display: block;
  }
}


Configure Cors Requests

Two ways to allow cross-domain requests.

  1. Implement the WebMvcConfigurer interface
  • Pros: it allows strict customization of the interception rules, including CORS, URLs, and static resources.
  • Cons: it intercepts access to every resource, so we need to add whitelist rules one by one ourselves. For example, it is hard to integrate with the Hystrix dashboard.
  2. Use a CorsFilter
  • Pros: the configuration is separated into a standalone class and is easy to maintain.
  • Cons: TBD...

Code snippets for the above solutions.
Dependency:

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.1.6.RELEASE</version>
		<relativePath />
	</parent>

Solution 1: implement WebMvcConfigurer (InteceptorConfig.java)

package com.xyz.config;

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
@EnableWebMvc
public class InteceptorConfig implements WebMvcConfigurer {
	
    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/**")
            .allowedOrigins("http://localhost:8000", "https://xxx.sso.com")
	        .allowedMethods("GET", "POST", "PUT", "DELETE", "OPTIONS")
	        .allowCredentials(true);
    }
 
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("swagger-ui.html").addResourceLocations("classpath:/META-INF/resources/");
        registry.addResourceHandler("/webjars/**").addResourceLocations("classpath:/META-INF/resources/webjars/");
    }
}

Solution 2: use CorsFilter (GlobalCorsConfig.java)

package com.xyz.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;
import org.springframework.web.filter.CorsFilter;

@Configuration
public class GlobalCorsConfig {
	@Bean
	public CorsFilter corsFilter() {
		UrlBasedCorsConfigurationSource configSource = new UrlBasedCorsConfigurationSource();
		CorsConfiguration config = new CorsConfiguration();
		config.addAllowedOrigin("http://localhost:8000");
		config.addAllowedOrigin("http://localhost:8081");
		config.addAllowedOrigin("https://xxx.sso.xxx.com");
		config.setAllowCredentials(true);
		config.addAllowedMethod("*");
		config.addAllowedHeader("*");
		configSource.registerCorsConfiguration("/**", config);

		return new CorsFilter(configSource);
	}
}

Dynamic `base` of Angular2+ app

Background

  1. Our project uses a shared domain name, so our frontend server sits behind a third-party proxy server. The proxy server has assigned a specific URL prefix to our application, for example /app/xyz, which means our app URL becomes https://host:port/app/xyz.

  2. SVG references using <use xlink:href="#search_16">-style syntax do not work when the Angular app sets a <base> tag in index.html; the browser cannot resolve the SVG fragment. T_T

  3. Resource links embedded in template files are still the source URLs without any base path like /app/xyz. The browser can display the resource correctly (since the HTTP request for the resource includes the base path), but previewing/downloading the resource by its URL fails (unless you manually add the base path).

Solution

  1. index.html: Remove base tag
<!-- <base href="/"> -->
  2. app.module.ts: Add the APP_BASE_HREF provider
  providers: [
    {
      provide: APP_BASE_HREF, 
      useValue: '/app/xyz/'
    }, ....
  3. package.json: Add the deploy URL to the start command
"prod": "ng serve --ssl --proxy-config proxy.conf.json --build-optimizer --prod --sourcemaps --disable-host-check --deploy-url=/app/xyz/",
  4. *.html: reference static resources by their root URL in template files, or move those URLs into CSS styles.
<img src="/assets/images/logo.svg" alt="logo" />

Local Development Environments
Angular CLI: 1.7.4
Node: 8.10.0
OS: darwin x64
Angular: 5.2.10

Notes on "Kafka: The Definitive Guide" | 《Kafka权威指南》

The snappy compression algorithm was invented by Google. It uses relatively little CPU while providing good performance and a decent compression ratio; use it if you care about performance and network bandwidth. The gzip algorithm generally uses more CPU but achieves a higher compression ratio, so it is a good choice when network bandwidth is limited. Compression reduces network transfer and storage overhead, which is often the bottleneck when sending messages to Kafka.

By default the producer waits 100 ms between retries, and this interval can be changed via the retry.backoff.ms parameter. Before choosing the retry count and interval, it is recommended to test how long it takes to recover from a crashed node (for example, how long it takes for all partitions to elect new leaders) and make the total retry time longer than the time the Kafka cluster needs to recover from the crash; otherwise the producer gives up too early. Some errors are not transient and cannot be resolved by retrying (for example, "message too large"). In general, since the producer retries automatically, there is no need to handle retriable errors in your own code; you only need to handle non-retriable errors or the case where the retry limit has been exceeded.
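
As a quick illustration (my own sketch, not from the book), the compression and retry settings discussed above map to producer configuration properties like these; the values are placeholders:

compression.type=snappy
retries=5
retry.backoff.ms=500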


Why custom serializers/deserializers are not recommended
If we have several types of consumers, we may need to change the customerID field to a long, or add a startDate field to Customer, and then old and new messages become incompatible. Debugging compatibility issues between different versions of serializers and deserializers is a real challenge: you end up comparing raw byte arrays. Even worse, if different teams in the same company all need to write Customer data to Kafka, they all have to use the same serializer, and whenever it changes they have to update their code at almost the same time.


Avro serialization

Definition
Apache Avro (hereafter Avro) is a serialization format that is independent of any programming language.
Avro data is defined by a language-independent schema, which is described in JSON. The data is serialized into a binary file or a JSON file, binary being the usual choice. Avro needs the schema when reading and writing files, and the schema is generally embedded in the data file itself.

Some interesting meta tags for web apps

  <meta http-equiv="X-UA-Compatible" content="IE=edge">
<!-- viewport -->
  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=0">
<!-- iOS mobile status bar -->
  <meta name="apple-mobile-web-app-capable" content="yes" />
  <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent" />
<!-- Windows 8.1+ app tile, tap highlight, and theme color -->
  <meta name="msapplication-TileColor" content="#da532c">
  <meta name="msapplication-TileImage" content="path/to/tileicon.png">
  <meta name="msapplication-tap-highlight" content="no" />
  <meta name="theme-color" content="#ffffff">


WebApp CacheManager Design

Tech Checkpoints

  1. Allow choice of memory and localStorage/sessionStorage
  2. Cache update mechanism
  3. Storage auto expiration
  4. Storage size limitation
  5. Getter/Setter with KV
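
A minimal TypeScript sketch of such a cache manager (my own illustration, assuming a browser environment; it covers checkpoints 1, 3, and 5, while the update policy and size limit are left out):

type Backend = 'memory' | 'local' | 'session';

interface Entry<T> {
  value: T;
  expiresAt: number; // epoch millis; 0 means "never expires"
}

class CacheManager {
  private memory = new Map<string, Entry<any>>();

  constructor(private backend: Backend = 'memory') {}

  set<T>(key: string, value: T, ttlMs = 0): void {
    const entry: Entry<T> = { value: value, expiresAt: ttlMs > 0 ? Date.now() + ttlMs : 0 };
    if (this.backend === 'memory') {
      this.memory.set(key, entry);
    } else {
      this.storage().setItem(key, JSON.stringify(entry));
    }
  }

  get<T>(key: string): T | undefined {
    const entry = this.backend === 'memory'
      ? this.memory.get(key)
      : JSON.parse(this.storage().getItem(key) || 'null');
    if (!entry) {
      return undefined;
    }
    if (entry.expiresAt > 0 && Date.now() > entry.expiresAt) {
      this.remove(key); // lazy expiration: purge the stale entry on read
      return undefined;
    }
    return entry.value;
  }

  remove(key: string): void {
    if (this.backend === 'memory') {
      this.memory.delete(key);
    } else {
      this.storage().removeItem(key);
    }
  }

  private storage(): Storage {
    return this.backend === 'local' ? localStorage : sessionStorage;
  }
}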

Override Isolated Component Style

Use the :host pseudo-class selector to target styles in the element that hosts the component (as opposed to targeting elements inside the component's template).

:host /deep/ .wrap-page {...}

Use ::ng-deep (deprecated, but it worked for me)

::ng-deep .dropdown-menu {...}



Local Development Environments
Angular CLI: 1.7.4
Node: 8.10.0
OS: darwin x64
Angular: 5.2.10

Configure a Background Image in VS Code | vs code 配置background image

Configuration snippet in settings.json

    ...,
    "background.customImages": [
        "/Users/xuyuzhu/Documents/kkk.jpg"
    ],
    "background.useDefault": false,
    "background.style": {
        "opacity": 0.1
    }

Linux - Commands for Checking Hardware Specs | 硬件参数查阅命令

# Check directory sizes
du -h --max-depth=1
du -sh

# Find containers that use a given volume
docker ps --filter volume=<name of volume>

# Check RAM
top

# Total cores = number of physical CPUs x cores per physical CPU
# Total logical CPUs = number of physical CPUs x cores per physical CPU x threads per core (hyper-threading)

# Number of physical CPUs
cat /proc/cpuinfo| grep "physical id"| sort| uniq| wc -l

# Number of cores per physical CPU
cat /proc/cpuinfo| grep "cpu cores"| uniq

# Number of logical CPUs
cat /proc/cpuinfo| grep "processor"| wc -l

# CPU model information
cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c

# CPU utilization
mpstat

# RAM usage
free -h 
free -m

## RAM, CPU
top

## Check the OS (Red Hat) release
cat /etc/redhat-release

## Check the Docker storage driver / filesystem type
docker info | egrep -i 'storage|pool|space|filesystem'

## Check the OS architecture (32/64-bit)
$ uname -a
Linux timely.xyz.com 3.10.0-862.14.4.el7.x86_64 #1 SMP Fri Sep 21 09:07:21 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
