
aylei / aliyun-exporter

273 stars · 15 watchers · 151 forks · 3.58 MB

Prometheus exporter for Alibaba Cloud Monitor

License: Apache License 2.0

Dockerfile 0.88% Python 91.52% CSS 0.42% HTML 6.97% Shell 0.22%
metrics prometheus-exporter monitoring-stack grafana aliyun-exporter

aliyun-exporter's People

Contributors

aylei


aliyun-exporter's Issues

Add basic Grafana alerting rules

For now, we bundle some basic alerting rules in the Prometheus rules file. Prometheus alerting has some strengths over Grafana, but it is not as user-friendly.

It would be helpful to add the same basic alerting rules in Grafana as an example for users.

Use a RAM role instead of access keys

Storing credentials in a configuration file is bad practice.
Could we use a RAM role instead of access_key_id and access_key_secret?
https://www.alibabacloud.com/help/zh/doc-detail/93689.htm
https://www.alibabacloud.com/help/doc-detail/93689.htm

wget -qO- http://100.100.100.200/latest/meta-data/ram/security-credentials/PrometheusRole
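
As an illustration of the request above, the sketch below reads temporary STS credentials from the ECS instance metadata service (the same endpoint as the wget command) and hands them to the Alibaba Cloud SDK instead of a static key pair. This is a minimal sketch, not the exporter's actual implementation: the role name PrometheusRole comes from the command above, and the use of requests plus aliyunsdkcore's StsTokenCredential/AcsClient is an assumption about how the credentials could be consumed.

```python
# Sketch only: fetch temporary credentials from the ECS metadata service and
# build an SDK client with them, instead of putting a static AK/SK in the config.
import requests
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.auth.credentials import StsTokenCredential

METADATA_URL = "http://100.100.100.200/latest/meta-data/ram/security-credentials/PrometheusRole"

def client_from_ram_role(region_id: str) -> AcsClient:
    # The metadata service returns JSON with AccessKeyId, AccessKeySecret,
    # SecurityToken and Expiration for the role bound to this ECS instance.
    creds = requests.get(METADATA_URL, timeout=2).json()
    sts = StsTokenCredential(
        creds["AccessKeyId"],
        creds["AccessKeySecret"],
        creds["SecurityToken"],
    )
    # The credentials expire (see creds["Expiration"]); a real implementation
    # would refresh them before that point.
    return AcsClient(region_id=region_id, credential=sts)
```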

measure variable

vim ../aliyun_exporter/collector.py

if 'measures' in metric:
    measure = metric['measure']

# Line 97: the variable name `measures` is wrong. Changing it to `measure` still raises an error; this spot needs a look.
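
The same symptom shows up in several reports below as KeyError: 'Average'. A defensive lookup could fall back to whatever statistic the datapoint actually carries instead of assuming 'Average'. The following is an illustrative sketch only, assuming `metric` is one entry from the YAML config and `point` is one CloudMonitor datapoint dict; it is not the project's code.

```python
# Hypothetical defensive variant of the measure lookup (not the actual collector.py).
# Assumes `metric` is one config entry and `point` is one CloudMonitor datapoint,
# e.g. {"instanceId": "...", "Maximum": 3.0}.
def pick_value(metric: dict, point: dict, default: str = "Average"):
    measure = metric.get("measure", default)  # honor an explicit `measure:` key if present
    if measure in point:
        return point[measure]
    # Different CloudMonitor metrics expose different statistics; fall back to
    # whichever one is present rather than raising KeyError.
    for candidate in ("Average", "Maximum", "Minimum", "Value", "Sum"):
        if candidate in point:
            return point[candidate]
    raise KeyError(f"no known statistic in datapoint, keys: {sorted(point)}")
```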

Error after configuring NAT gateway monitoring

Error message:

[zhaojiedi@vpc prometheus]$ kubectl -n monitoring logs aliyun-exporter-699f69944c-wckt4
Traceback (most recent call last):
  File "/usr/local/bin/aliyun-exporter", line 11, in <module>
    load_entry_point('aliyun-exporter', 'console_scripts', 'aliyun-exporter')()
  File "/usr/src/app/aliyun_exporter/__init__.py", line 39, in main
    REGISTRY.register(collector)
  File "/usr/local/lib/python3.7/site-packages/prometheus_client/registry.py", line 24, in register
    names = self._get_names(collector)
  File "/usr/local/lib/python3.7/site-packages/prometheus_client/registry.py", line 64, in _get_names
    for metric in desc_func():
  File "/usr/src/app/aliyun_exporter/collector.py", line 134, in collect
    yield from self.metric_generator(project, metric)
  File "/usr/src/app/aliyun_exporter/collector.py", line 125, in metric_generator
    gauge.add_metric([try_or_else(lambda: str(point[k]), '') for k in label_keys], point[measure])
KeyError: 'Average'

My configuration:

[zhaojiedi@vpc prometheus]$ cat aliyun-exporter-config.yml.2019.07.03.17
credential:
  access_key_id: LTAITveITrbiHm6x
  access_key_secret: AKLksmbAULSdWhcbWRJ4sFJkNZUTWh
  region_id: cn-beijing

metrics:
  acs_cdn:
  - name: QPS
  acs_mongodb:
  - name: CPUUtilization
    period: 300
  acs_nat_gateway:
  - name: SnatConnection
    period: 60
  #- name: net_rx.rate
  #  period: 60
  #- name: net_tx.rate
  #  period: 60
  #- name: net_rx.Pkgs
  #  period: 60
  #- name: net_tx.Pkgs
  #  period: 60
  #- name: net_tx.ratePercent
  #  period: 60
  - name: SnatConnectionDrop_ConcurrentConnectionLimit
    period: 30
  - name: SnatConnectionDrop_ConnectionRateLimit
    period: 60

Other notes:
The sample configuration works fine; the error only appears after adding the SNAT gateway metrics. Please take a look.

info_metrics: - ecs: with more than ten ECS instances, the following error is raised (using a RAM sub-account with the AliyunCloudMonitorReadOnlyAccess permission)

Traceback (most recent call last):
  File "C:/Users/AS/Desktop/app/app/aliyun-exporter.py", line 10, in <module>
    load_entry_point('aliyun-exporter', 'console_scripts', 'aliyun-exporter')()
  File "C:\Users\AS\Desktop\app\app\aliyun_exporter\__init__.py", line 39, in main
    REGISTRY.register(collector)
  File "C:\Users\AS\Desktop\app\app\venv\lib\site-packages\prometheus_client\registry.py", line 24, in register
    names = self._get_names(collector)
  File "C:\Users\AS\Desktop\app\app\venv\lib\site-packages\prometheus_client\registry.py", line 64, in _get_names
    for metric in desc_func():
  File "C:\Users\AS\Desktop\app\app\aliyun_exporter\collector.py", line 136, in collect
    yield self.info_provider.get_metrics(resource)
  File "C:\Users\AS\Desktop\app\app\venv\lib\site-packages\cachetools\__init__.py", line 46, in wrapper
    v = func(*args, **kwargs)
  File "C:\Users\AS\Desktop\app\app\aliyun_exporter\info_provider.py", line 41, in get_metrics
    }[resource]()
  File "C:\Users\AS\Desktop\app\app\aliyun_exporter\info_provider.py", line 37, in <lambda>
    'ecs': lambda : self.ecs_info(),
  File "C:\Users\AS\Desktop\app\app\aliyun_exporter\info_provider.py", line 50, in ecs_info
    return self.info_template(req, 'aliyun_meta_ecs_info', nested_handler=nested_handler)
  File "C:\Users\AS\Desktop\app\app\aliyun_exporter\info_provider.py", line 81, in info_template
    gauge.add_metric(labels=self.label_values(instance, label_keys, nested_handler), value=1.0)
  File "C:\Users\AS\Desktop\app\app\venv\lib\site-packages\prometheus_client\metrics_core.py", line 145, in add_metric
    self.samples.append(Sample(self.name, dict(zip(self._labelnames, labels)), value, timestamp))
  File "C:\Users\AS\Desktop\app\app\aliyun_exporter\info_provider.py", line 106, in <lambda>
    return map(lambda k: str(nested_handler[k](instance[k])) if k in nested_handler else str(instance[k]),
KeyError: 'KeyPairName'

Process finished with exit code 1

Is SLB not supported in the Docker image?

In the Docker image, get_metrics in info_provider.py has no key for slb. After copying in the latest code to replace it, startup still reports the following error:

Traceback (most recent call last):
File "/usr/local/lib/python3.7/socketserver.py", line 313, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/local/lib/python3.7/socketserver.py", line 344, in process_request
self.finish_request(request, client_address)
File "/usr/local/lib/python3.7/socketserver.py", line 357, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/local/lib/python3.7/socketserver.py", line 717, in init
self.handle()
File "/usr/local/lib/python3.7/wsgiref/simple_server.py", line 133, in handle
handler.run(self.server.get_app())
File "/usr/local/lib/python3.7/wsgiref/handlers.py", line 144, in run
self.close()
File "/usr/local/lib/python3.7/wsgiref/simple_server.py", line 35, in close
self.status.split(' ',1)[0], self.bytes_sent
AttributeError: 'NoneType' object has no attribute 'split'

INFO:root:Start exporter, listen on 9525

Could you confirm whether SLB is unsupported? If it is supported, could the Docker image be rebuilt? Thanks.

The exporter errors out after adding redis or slb metrics

I copied the YAML shown at ip:9525 directly and started the exporter; Prometheus shows the target as down, and stopping the exporter with Ctrl-C produces the error below...

Traceback (most recent call last):
File "/opt/python368/lib/python3.6/wsgiref/handlers.py", line 138, in run
self.finish_response()
File "/opt/python368/lib/python3.6/wsgiref/handlers.py", line 180, in finish_response
self.write(data)
File "/opt/python368/lib/python3.6/wsgiref/handlers.py", line 279, in write
self._write(data)
File "/opt/python368/lib/python3.6/wsgiref/handlers.py", line 453, in _write
result = self.stdout.write(data)
File "/opt/python368/lib/python3.6/socketserver.py", line 803, in write
self._sock.sendall(b)
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/python368/lib/python3.6/wsgiref/handlers.py", line 141, in run
self.handle_error()
File "/opt/python368/lib/python3.6/wsgiref/handlers.py", line 368, in handle_error
self.finish_response()
File "/opt/python368/lib/python3.6/wsgiref/handlers.py", line 180, in finish_response
self.write(data)
File "/opt/python368/lib/python3.6/wsgiref/handlers.py", line 274, in write
self.send_headers()
File "/opt/python368/lib/python3.6/wsgiref/handlers.py", line 331, in send_headers
if not self.origin_server or self.client_is_modern():
File "/opt/python368/lib/python3.6/wsgiref/handlers.py", line 344, in client_is_modern
return self.environ['SERVER_PROTOCOL'].upper() != 'HTTP/0.9'
TypeError: 'NoneType' object is not subscriptable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/python368/lib/python3.6/socketserver.py", line 320, in _handle_request_noblock
self.process_request(request, client_address)
File "/opt/python368/lib/python3.6/socketserver.py", line 351, in process_request
self.finish_request(request, client_address)
File "/opt/python368/lib/python3.6/socketserver.py", line 364, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/python368/lib/python3.6/socketserver.py", line 724, in init
self.handle()
File "/opt/python368/lib/python3.6/wsgiref/simple_server.py", line 133, in handle
handler.run(self.server.get_app())
File "/opt/python368/lib/python3.6/wsgiref/handlers.py", line 144, in run
self.close()
File "/opt/python368/lib/python3.6/wsgiref/simple_server.py", line 35, in close
self.status.split(' ',1)[0], self.bytes_sent
AttributeError: 'NoneType' object has no attribute 'split'

Grafana Dashboard

Hi @aylei,

I have set up the aliyun exporter and all of its packages, but the detail dashboard shows nothing. The overview dashboard only shows the running instance count, and that number is correct.

I want to ask: will this query return all of the ecs/rds/redis instances from my Aliyun console?

"targets": [
{
"expr": "sum(aliyun_meta_redis_info)",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}

Any suggestions, or did I maybe skip a step?

Thank you,

What causes this error to be printed over and over after startup?

Traceback (most recent call last):
  File "/usr/local/python3/lib/python3.7/logging/__init__.py", line 983, in emit
    msg = self.format(record)
  File "/usr/local/python3/lib/python3.7/logging/__init__.py", line 829, in format
    return fmt.format(record)
  File "/usr/local/python3/lib/python3.7/logging/__init__.py", line 569, in format
    record.message = record.getMessage()
  File "/usr/local/python3/lib/python3.7/logging/__init__.py", line 331, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/local/python3/bin/aliyun-exporter", line 11, in <module>
    sys.exit(main())
  File "/usr/local/python3/lib/python3.7/site-packages/aliyun_exporter/__init__.py", line 45, in main
    httpd.serve_forever()
  File "/usr/local/python3/lib/python3.7/socketserver.py", line 234, in serve_forever
    self._handle_request_noblock()
  File "/usr/local/python3/lib/python3.7/socketserver.py", line 313, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/local/python3/lib/python3.7/socketserver.py", line 344, in process_request
    self.finish_request(request, client_address)
  File "/usr/local/python3/lib/python3.7/socketserver.py", line 357, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/local/python3/lib/python3.7/socketserver.py", line 717, in __init__
    self.handle()
  File "/usr/local/python3/lib/python3.7/wsgiref/simple_server.py", line 133, in handle
    handler.run(self.server.get_app())
  File "/usr/local/python3/lib/python3.7/wsgiref/handlers.py", line 137, in run
    self.result = application(self.environ, self.start_response)
  File "/usr/local/python3/lib/python3.7/site-packages/werkzeug/wsgi.py", line 826, in __call__
    return app(environ, start_response)
  File "/usr/local/python3/lib/python3.7/site-packages/prometheus_client/exposition.py", line 45, in prometheus_app
    output = encoder(r)
  File "/usr/local/python3/lib/python3.7/site-packages/prometheus_client/openmetrics/exposition.py", line 14, in generate_latest
    for metric in registry.collect():
  File "/usr/local/python3/lib/python3.7/site-packages/prometheus_client/registry.py", line 75, in collect
    for metric in collector.collect():
  File "/usr/local/python3/lib/python3.7/site-packages/aliyun_exporter/collector.py", line 133, in collect
    yield from self.metric_generator(project, metric)
  File "/usr/local/python3/lib/python3.7/site-packages/aliyun_exporter/collector.py", line 115, in metric_generator
    logging.error('Error query metrics for {}_{}'.format(project, metric_name), e)
Message: 'Error query metrics for acs_ecs_dashboard_memory_usedutilization'
Arguments: (KeyError('Datapoints'),)
--- Logging error ---
Traceback (most recent call last):
  File "/usr/local/python3/lib/python3.7/site-packages/aliyun_exporter/collector.py", line 113, in metric_generator
    points = self.query_metric(project, metric_name, period)
  File "/usr/local/python3/lib/python3.7/site-packages/aliyun_exporter/collector.py", line 89, in query_metric
    points = json.loads(data['Datapoints'])
KeyError: 'Datapoints'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/python3/lib/python3.7/logging/__init__.py", line 983, in emit
    msg = self.format(record)
  File "/usr/local/python3/lib/python3.7/logging/__init__.py", line 829, in format
    return fmt.format(record)
  File "/usr/local/python3/lib/python3.7/logging/__init__.py", line 569, in format
    record.message = record.getMessage()
  File "/usr/local/python3/lib/python3.7/logging/__init__.py", line 331, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/local/python3/bin/aliyun-exporter", line 11, in <module>
    sys.exit(main())
  File "/usr/local/python3/lib/python3.7/site-packages/aliyun_exporter/__init__.py", line 45, in main
    httpd.serve_forever()
  File "/usr/local/python3/lib/python3.7/socketserver.py", line 234, in serve_forever
    self._handle_request_noblock()
  File "/usr/local/python3/lib/python3.7/socketserver.py", line 313, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/local/python3/lib/python3.7/socketserver.py", line 344, in process_request
    self.finish_request(request, client_address)
  File "/usr/local/python3/lib/python3.7/socketserver.py", line 357, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/local/python3/lib/python3.7/socketserver.py", line 717, in __init__
    self.handle()
  File "/usr/local/python3/lib/python3.7/wsgiref/simple_server.py", line 133, in handle
    handler.run(self.server.get_app())
  File "/usr/local/python3/lib/python3.7/wsgiref/handlers.py", line 137, in run
    self.result = application(self.environ, self.start_response)
  File "/usr/local/python3/lib/python3.7/site-packages/werkzeug/wsgi.py", line 826, in __call__
    return app(environ, start_response)
  File "/usr/local/python3/lib/python3.7/site-packages/prometheus_client/exposition.py", line 45, in prometheus_app
    output = encoder(r)
  File "/usr/local/python3/lib/python3.7/site-packages/prometheus_client/openmetrics/exposition.py", line 14, in generate_latest
    for metric in registry.collect():
  File "/usr/local/python3/lib/python3.7/site-packages/prometheus_client/registry.py", line 75, in collect
    for metric in collector.collect():
  File "/usr/local/python3/lib/python3.7/site-packages/aliyun_exporter/collector.py", line 133, in collect
    yield from self.metric_generator(project, metric)
  File "/usr/local/python3/lib/python3.7/site-packages/aliyun_exporter/collector.py", line 115, in metric_generator
    logging.error('Error query metrics for {}_{}'.format(project, metric_name), e)
Message: 'Error query metrics for acs_ecs_dashboard_net_tcpconnection'
Arguments: (KeyError('Datapoints'),)
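
Two separate problems are visible in the log above: logging.error() is handed the exception as an extra positional argument, which the %-formatter cannot consume (hence the TypeError), and query_metric() indexes data['Datapoints'] without checking whether the API response contains it. The sketch below shows a possible shape of both fixes; it is based only on the names visible in the traceback, not on the project's actual code, and the *_fragment function names are illustrative.

```python
# Sketch of the two fixes, based only on the lines shown in the traceback above.
import json
import logging

def metric_generator_fragment(self, project, metric_name, period):
    try:
        points = self.query_metric(project, metric_name, period)
    except KeyError:
        # logging.exception attaches the current exception properly instead of
        # passing it as a stray %-formatting argument.
        logging.exception('Error query metrics for %s_%s', project, metric_name)
        return []
    return points

def query_metric_fragment(self, data):
    # The CloudMonitor response may omit 'Datapoints' (e.g. on an error response);
    # treat that as "no data" rather than raising KeyError.
    raw = data.get('Datapoints')
    return json.loads(raw) if raw else []
```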

KeyError: 'Average' after configuring OSS

Configuration file content:

acs_oss_dashboard:
- name: AppendObjectCount
  period: 60

Error message:

Traceback (most recent call last):
  File "/home/opsuser/pyenv/versions/3.6.5/bin/aliyun-exporter", line 10, in <module>
    sys.exit(main())
  File "/home/opsuser/pyenv/versions/3.6.5/lib/python3.6/site-packages/aliyun_exporter/__init__.py", line 39, in main
    REGISTRY.register(collector)
  File "/home/opsuser/pyenv/versions/3.6.5/lib/python3.6/site-packages/prometheus_client/registry.py", line 24, in register
    names = self._get_names(collector)
  File "/home/opsuser/pyenv/versions/3.6.5/lib/python3.6/site-packages/prometheus_client/registry.py", line 64, in _get_names
    for metric in desc_func():
  File "/home/opsuser/pyenv/versions/3.6.5/lib/python3.6/site-packages/aliyun_exporter/collector.py", line 134, in collect
    yield from self.metric_generator(project, metric)
  File "/home/opsuser/pyenv/versions/3.6.5/lib/python3.6/site-packages/aliyun_exporter/collector.py", line 125, in metric_generator
    gauge.add_metric([try_or_else(lambda: str(point[k]), '') for k in label_keys], point[measure])
KeyError: 'Average'

How can this be resolved? The same problem occurs when configuring nas and slb metrics.
Thanks.

More dashboards and alerting rules

The docker-compose stack bundles some dashboards and alerting rules for ECS, RDS, Redis, and SLB. Everyone is welcome to contribute dashboards and general alerting rules for Alibaba Cloud resources.

Some RDS instances return no data during collection

When collecting RDS metrics, some instances frequently return no data points, which leaves gaps in the graphs. This happens on primary instances under load: of six instances, two busy primaries frequently miss data, while the other four only miss a point once in a long while. Is there a way to resolve this?

need help

Configuration:

[zhaojiedi@vpc prometheus]$ cat aliyun-exporter-config.yml
credential:
  access_key_id: xxxx
  access_key_secret: xxxx
  region_id: cn-beijing

metrics:
  acs_cdn:
  - name: QPS
  acs_mongodb:
  - name: CPUUtilization
    period: 300
  acs_nat_gateway:
  - name: SnatConnection
    period: 60
  #- name: net_rx.rate
  #  period: 60
  #- name: net_tx.rate
  #  period: 60
  #- name: net_rx.Pkgs
  #  period: 60
  #- name: net_tx.Pkgs
  #  period: 60
  #- name: net_tx.ratePercent
  #  period: 60
  - name: SnatConnectionDrop_ConcurrentConnectionLimit
    period: 30
  - name: SnatConnectionDrop_ConnectionRateLimit
    period: 60

Error message:

[zhaojiedi@vpc prometheus]$ kubectl -n monitoring logs aliyun-exporter-699f69944c-wckt4
Traceback (most recent call last):
  File "/usr/local/bin/aliyun-exporter", line 11, in <module>
    load_entry_point('aliyun-exporter', 'console_scripts', 'aliyun-exporter')()
  File "/usr/src/app/aliyun_exporter/__init__.py", line 39, in main
    REGISTRY.register(collector)
  File "/usr/local/lib/python3.7/site-packages/prometheus_client/registry.py", line 24, in register
    names = self._get_names(collector)
  File "/usr/local/lib/python3.7/site-packages/prometheus_client/registry.py", line 64, in _get_names
    for metric in desc_func():
  File "/usr/src/app/aliyun_exporter/collector.py", line 134, in collect
    yield from self.metric_generator(project, metric)
  File "/usr/src/app/aliyun_exporter/collector.py", line 125, in metric_generator
    gauge.add_metric([try_or_else(lambda: str(point[k]), '') for k in label_keys], point[measure])
KeyError: 'Average'

Additional information:
Without the gateway metrics, i.e. using only the default QPS and CPU entries, everything works fine.

Still getting KeyError: 'mongodb'

Using Python 3.6, installed with pip3 install aliyun-exporter:

pip3 search aliyun-exporter
aliyun-exporter (0.3.1) - Alibaba Cloud CloudMonitor Prometheus exporter
  INSTALLED: 0.3.1 (latest)

Version 0.3.1 is already the latest. With

info_metrics:
- ecs
- rds
- redis

the info metrics are fetched correctly. If mongodb is added:

info_metrics:
- ecs
- rds
- redis
- mongodb

it raises KeyError: 'mongodb'. Hoping this can be fixed, thanks!
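
The traceback pattern in the other info_metrics reports suggests get_metrics dispatches through a dict keyed by resource type, so a 'mongodb' entry may simply not exist in the installed 0.3.1 code. Below is a hedged sketch of that dispatch pattern with a clearer error message; the method names are assumptions based on the tracebacks above, not the released code.

```python
# Hypothetical sketch of the resource dispatch seen in info_provider.py tracebacks.
class InfoProvider:
    def get_metrics(self, resource: str):
        providers = {
            'ecs': self.ecs_info,
            'rds': self.rds_info,      # assumed to exist by analogy with ecs_info
            'redis': self.redis_info,  # assumed
            # 'mongodb' would have to be added here for `info_metrics: [mongodb]`
        }
        try:
            return providers[resource]()
        except KeyError:
            raise ValueError(
                f"unsupported info_metrics resource {resource!r}; "
                f"supported: {sorted(providers)}"
            ) from None

    def ecs_info(self): ...
    def rds_info(self): ...
    def redis_info(self): ...
```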

The exporter always gives AttributeError

When following the steps in the README, whether using the pip-installed package or the Docker image / docker-compose, the exporter always errors out:

  File "/usr/local/bin/aliyun-exporter", line 11, in <module>
    load_entry_point('aliyun-exporter', 'console_scripts', 'aliyun-exporter')()
  File "/usr/src/app/aliyun_exporter/__init__.py", line 39, in main
    REGISTRY.register(collector)
  File "/usr/local/lib/python3.7/site-packages/prometheus_client/registry.py", line 24, in register
    names = self._get_names(collector)
  File "/usr/local/lib/python3.7/site-packages/prometheus_client/registry.py", line 65, in _get_names
    for suffix in type_suffixes.get(metric.type, ['']):
AttributeError: 'NoneType' object has no attribute 'type'
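
One possible cause, and it is only an assumption from the traceback, is that the custom collector's collect() yields a None where prometheus_client expects a Metric object, so _get_names falls over on metric.type. A defensive guard is easy to sketch; whether it matches the real root cause here is unconfirmed.

```python
# Hypothetical guard: skip anything that is not a real metric family before
# handing the results to prometheus_client.
def safe_collect(collector):
    for metric in collector.collect():
        if metric is None:  # e.g. an info provider that returned nothing
            continue
        yield metric
```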

ecs info metrics fetch error

Traceback (most recent call last):
  File "/usr/local/bin/aliyun-exporter", line 11, in <module>
    load_entry_point('aliyun-exporter', 'console_scripts', 'aliyun-exporter')()
  File "/usr/src/app/aliyun_exporter/__init__.py", line 39, in main
    REGISTRY.register(collector)
  File "/usr/local/lib/python3.7/site-packages/prometheus_client/registry.py", line 24, in register
    names = self._get_names(collector)
  File "/usr/local/lib/python3.7/site-packages/prometheus_client/registry.py", line 64, in _get_names
    for metric in desc_func():
  File "/usr/src/app/aliyun_exporter/collector.py", line 136, in collect
    yield self.info_provider.get_metrics(resource)
  File "/usr/local/lib/python3.7/site-packages/cachetools/__init__.py", line 46, in wrapper
    v = func(*args, **kwargs)
  File "/usr/src/app/aliyun_exporter/info_provider.py", line 38, in get_metrics
    }[resource]()
  File "/usr/src/app/aliyun_exporter/info_provider.py", line 35, in <lambda>
    'ecs': lambda : self.ecs_info(),
  File "/usr/src/app/aliyun_exporter/info_provider.py", line 47, in ecs_info
    return self.info_template(req, 'aliyun_meta_ecs_info', nested_handler=nested_handler)
  File "/usr/src/app/aliyun_exporter/info_provider.py", line 74, in info_template
    gauge.add_metric(labels=self.label_values(instance, label_keys, nested_handler), value=1.0)
  File "/usr/local/lib/python3.7/site-packages/prometheus_client/metrics_core.py", line 145, in add_metric
    self.samples.append(Sample(self.name, dict(zip(self._labelnames, labels)), value, timestamp))
  File "/usr/src/app/aliyun_exporter/info_provider.py", line 99, in <lambda>
    return map(lambda k: str(nested_handler[k](instance[k])) if k in nested_handler else str(instance[k]),

The response JSON has changed, so nested_handler should be updated accordingly; moreover, nested_handler should be tolerant of unexpected structure. A possible defensive shape is sketched below.
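
The following sketch assumes, based on the traceback, that `instance` is the raw API item and `nested_handler` maps a key to a flattening function; the helper name and the empty-string defaults are illustrative only, not the project's implementation.

```python
# Illustrative only: tolerate missing keys and handler failures when building labels.
def safe_label_values(instance: dict, label_keys, nested_handler: dict):
    def one(k):
        try:
            if k in nested_handler and k in instance:
                return str(nested_handler[k](instance[k]))
            return str(instance.get(k, ''))  # missing keys (e.g. KeyPairName) become ''
        except Exception:
            # An unexpected nested structure should not break the whole scrape.
            return ''
    return [one(k) for k in label_keys]
```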

OS-level metrics under acs_ecs_dashboard return no data

# HELP cloudmonitor_failed_request_latency_seconds CloudMonitor failed request latency
# TYPE cloudmonitor_failed_request_latency_seconds summary
# HELP aliyun_acs_ecs_dashboard_diskusage_utilization_up Did the aliyun_acs_ecs_dashboard_diskusage_utilization fetch succeed.
# TYPE aliyun_acs_ecs_dashboard_diskusage_utilization_up gauge
aliyun_acs_ecs_dashboard_diskusage_utilization_up 0.0
