
parca-dev / docs


The parca project website and documentation.

Home Page: https://parca.dev/docs/

License: Creative Commons Attribution Share Alike 4.0 International

Languages: JavaScript 64.92%, CSS 16.53%, TypeScript 15.82%, Makefile 2.73%
Topics: documentation, docusaurus, hacktoberfest

docs's People

Contributors

brancz, containerpope, dependabot[bot], importhuman, javierhonduco, jnsgruk, jpkrohling, juanig1, kakkoyun, lnikell, manojvivek, maxbrunet, mayankgupta804, metalmatze, mhausenblas, monicawoj, mping, mrueg, orgads, raffo, renovate[bot], ricoberger, rira12621, sylfrena, sylon23, teleivo, thorfour, v-thakkar, v-zhuravlev, yomete


docs's Issues

storage: Document different encodings in storage

Currently, the documentation only covers XOR encoding, but we also use run-length and double-delta encodings. These should be documented, along with an explanation of why each is better suited to the places where it is used.
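As a purely illustrative sketch (not Parca's actual implementation), double-delta encoding stores the first value, the first delta, and then only deltas-of-deltas, which stay near zero for evenly spaced timestamps and therefore compress well:

```shell
# Compute the double-delta representation of a sample timestamp series.
echo "1000 1010 1020 1031 1040" | awk '{
  for (i = 2; i <= NF; i++) d[i-1] = $i - $(i-1);       # first-order deltas
  printf "%s %s", $1, d[1];                              # initial value + first delta
  for (i = 2; i < NF; i++) printf " %d", d[i] - d[i-1];  # deltas of deltas
  printf "\n";
}'
# prints: 1000 10 0 1 -2
```

The small residuals (0, 1, -2) are what make this encoding attractive for regularly scraped series, compared to storing the raw timestamps.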

Kubernetes example not working on apple silicon

Following the example on https://github.com/parca-dev/parca.dev/blob/main/docs/kubernetes.mdx isn't working as expected.

Expected behaviour

$ kubectl apply -f https://github.com/parca-dev/parca/releases/download/v0.7.1/kubernetes-manifest.yaml

Should create all required resources and allow me to see the UI via

$ kubectl -n parca port-forward service/parca 7070

Actual behaviour

$ kubectl apply -f https://github.com/parca-dev/parca/releases/download/v0.7.1/kubernetes-manifest.yaml
Warning: resource namespaces/parca is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/parca configured
configmap/parca-config created
deployment.apps/parca created
namespace/parca unchanged
podsecuritypolicy.policy/parca created
role.rbac.authorization.k8s.io/parca created
rolebinding.rbac.authorization.k8s.io/parca created
service/parca created
$ kubectl -n parca port-forward service/parca 7070
error: unable to forward port because pod is not running. Current status=Pending
$ kubens parca
Context "minikube" modified.
Active namespace is "parca".
$ oc get pods
NAME                     READY   STATUS    RESTARTS   AGE
parca-58c8487fcf-tk8zd   0/1     Pending   0          114s
$ oc describe pod parca-58c8487fcf-tk8zd
Name:           parca-58c8487fcf-tk8zd
Namespace:      parca
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/component=observability
                app.kubernetes.io/instance=parca
                app.kubernetes.io/name=parca
                app.kubernetes.io/version=v0.7.1
                pod-template-hash=58c8487fcf
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/parca-58c8487fcf
Containers:
  parca:
    Image:      ghcr.io/parca-dev/parca:v0.7.1
    Port:       7070/TCP
    Host Port:  0/TCP
    Args:
      /parca
      --config-path=/var/parca/parca.yaml
      --log-level=info
      --cors-allowed-origins=*
    Liveness:     exec [/grpc-health-probe -v -addr=:7070] delay=5s timeout=1s period=10s #success=1 #failure=3
    Readiness:    exec [/grpc-health-probe -v -addr=:7070] delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/parca from parca-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from parca-token-d4cx2 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  parca-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      parca-config
    Optional:  false
  parca-token-d4cx2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  parca-token-d4cx2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/arch=amd64
                 kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  65s (x3 over 2m14s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.

Environment

Macbook Pro M1

$ uname -a
Darwin MAC-FVFGH12JQ05P 21.3.0 Darwin Kernel Version 21.3.0: Wed Jan  5 21:37:58 PST 2022; root:xnu-8019.80.24~20/RELEASE_ARM64_T8101 arm64

Analysis

The problematic part is:

Node-Selectors:  kubernetes.io/arch=amd64
                 kubernetes.io/os=linux
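A hypothetical workaround (not verified on Apple Silicon, and only viable if the v0.7.1 image actually ships an arm64 variant) is to rewrite the pinned node selector before applying the manifest. Demonstrated here on an inline fragment of the manifest:

```shell
# The manifest pins kubernetes.io/arch=amd64, so on an M1 minikube node the pod
# never schedules. Retarget the selector to arm64 with sed.
cat > /tmp/parca-nodeselector.yaml <<'EOF'
nodeSelector:
  kubernetes.io/arch: amd64
  kubernetes.io/os: linux
EOF
sed -i 's|kubernetes.io/arch: amd64|kubernetes.io/arch: arm64|' /tmp/parca-nodeselector.yaml
cat /tmp/parca-nodeselector.yaml
```

The same sed transform could be applied to the full downloaded kubernetes-manifest.yaml before `kubectl apply`.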

Parca from Binary tutorial uses config file from main

The parca.yaml that the "Parca from Binary" tutorial downloads via cURL comes from main, which currently does not work with the binary version downloaded in the preceding cURL calls (v0.12.0). The parca.yaml from the matching tag should be used instead.
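A pinned download might look like the sketch below; the raw.githubusercontent.com path to parca.yaml is an assumption about the repository layout, not something verified here:

```shell
# Pin the config download to the same release tag as the binary, instead of main.
VERSION="v0.12.0"
CONFIG_URL="https://raw.githubusercontent.com/parca-dev/parca/${VERSION}/parca.yaml"
echo "${CONFIG_URL}"
# then: curl -sL "${CONFIG_URL}" -o parca.yaml
```

Deriving the URL from a single VERSION variable keeps the binary and its config in lockstep when the tutorial is updated.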

FAQ

We already have plenty of frequently asked questions. We should probably have a dedicated docs page as well as an FAQ on the landing page.

What languages are currently supported?

Infrastructure-wide profiling with Parca Agent currently supports all compiled languages, e.g. C, C++, Rust, and Go (with extended support for Go). Support for further languages is coming in the upcoming weeks and months.

Parca itself supports any pprof formatted profile. Any library or implementation that outputs valid pprof profiles is supported by Parca.

What overhead does running always-on profiling have with Parca Agent?

We have observed <1% CPU overhead, and more elaborate, reproducible reports are coming soon.

Parca Agent has to run as root for eBPF, what are the security considerations?

  • The profiler source code is open source, so anyone can inspect the code that would be running as root on their servers.
  • It is written in Go, a memory-safe language.
  • Binaries and container images are fully reproducible, so users can ensure that the artifacts they are running are exactly the same as the ones we are distributing.

Read the docs for more in-depth explanations of the security considerations.

Does Parca upload binaries or our code?

No. Profiling data consists of statistics representing, for example, how much time the CPU has spent in a particular function; the function metadata is decoupled from the actual executable and source code.

Read the docs on symbolization to understand further why.

If all data were to leak, what would be the worst-case scenario?

Function, package, and file names would leak, but no executable code.

Read the docs on symbolization to understand further why.

Example yaml file syntax is invalid since 0.14

I followed https://www.parca.dev/docs/systemd which contains a yaml file example like:

debug_info:
  bucket:
    type: "FILESYSTEM"
    config:
      directory: "/tmp"
  cache:
    type: "FILESYSTEM"
    config:
      directory: "/tmp"

scrape_configs:
  - job_name: "default"
    scrape_interval: "2s"
    static_configs:
      - targets: ["127.0.0.1:7070"]

However, this fails to start, with an error like:

level=error name=parca ts=2022-11-22T10:51:43.582639041Z caller=main.go:59 msg="Program exited with error" err="parsing YAML file /etc/parca/parca.yaml: yaml: unmarshal errors:\n  line 1: field debug_info not found in type config.Config"

As far as I can see, this is because debug_info is no longer valid since parca-dev/parca#1403; instead, I used a config like this:

object_storage:
  bucket:
    type: "FILESYSTEM"
    config:
      directory: "/tmp"

scrape_configs:
  - job_name: "default"
    scrape_interval: "2s"
    static_configs:
      - targets: ["127.0.0.1:7070"]
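A one-liner sketch to migrate an old config in place (back it up first); it assumes debug_info appears only as the top-level key being renamed. Demonstrated on a stand-in file rather than /etc/parca/parca.yaml:

```shell
# Stand-in for an old-style /etc/parca/parca.yaml with the pre-0.14 key.
printf 'debug_info:\n  bucket:\n    type: "FILESYSTEM"\n' > /tmp/parca.yaml
# Rename the top-level key to the post-#1403 name.
sed -i 's/^debug_info:/object_storage:/' /tmp/parca.yaml
head -n 1 /tmp/parca.yaml
# prints: object_storage:
```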

OpenShift docs: Warnings after following docs

After deploying the server I got this:

Warning: resource namespaces/parca is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
....
....
W1014 14:33:28.076094   60726 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1014 14:33:28.204159   60726 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+

I got the following warning after applying agent on OpenShift v4.10:

W1014 14:35:06.649758   60757 warnings.go:70] would violate "latest" version of "baseline" PodSecurity profile: non-default capabilities (container "parca-agent" must not include "SYS_ADMIN" in securityContext.capabilities.add), host namespaces (hostPID=true), hostPath volumes (volumes "root", "proc", "run", "cgroup", "modules", "bpffs", "debugfs", "localtime"), hostPort (container "parca-agent" uses hostPort 7071), privileged (container "parca-agent" must not set securityContext.privileged=true)
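One way to acknowledge these PodSecurity warnings is to label the agent's namespace with the upstream Pod Security Admission labels, explicitly opting the namespace into privileged workloads. This is a sketch of the upstream Kubernetes mechanism; whether it is the right approach on OpenShift v4.10 (which also has its own SecurityContextConstraints) is an open question, not something the docs currently confirm:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: parca
  labels:
    # Allow privileged pods (parca-agent needs hostPID, hostPath, SYS_ADMIN, etc.)
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/warn: privileged
```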

tutorial: Using parca-agent with compiled binaries

  • How to build binaries with debug info

  • How to split debug info and upload

  • How to discover linked shared libraries and provide debug information for those using package managers

  • Examples for language runtimes

cc @Sylfrena

@parca-dev/parca-demo could be used as an example

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Pending Approval

These branches will be created by Renovate only once you click their checkbox below.

  • chore(deps): update dependency node to v20
  • fix(deps): update dependency @rive-app/react-canvas to v4
  • fix(deps): update dependency @svgr/webpack to v8
  • fix(deps): update dependency clsx to v2
  • fix(deps): update dependency prism-react-renderer to v2
  • fix(deps): update docusaurus monorepo to v3 (major) (@docusaurus/core, @docusaurus/preset-classic, @docusaurus/theme-search-algolia)
  • 🔐 Create all pending approval PRs at once 🔐

Awaiting Schedule

These updates are awaiting their schedule. Click on a checkbox to get an update now.

  • chore(deps): npm Lock file maintenance

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

asdf
.tool-versions
  • node 18.20.2
github-actions
.github/workflows/spellcheck.yaml
  • actions/checkout v4.1.1@b4ffde65f46336ab88eb53be808477a3936bae11
npm
docusaurus-github-releases-plugin/package.json
  • node-fetch 3.3.2
  • node >=12.13.0
package.json
  • @docusaurus/core 2.4.3
  • @docusaurus/preset-classic 2.4.3
  • @docusaurus/theme-search-algolia 2.4.3
  • @mdx-js/react 1.6.22
  • @rive-app/react-canvas 3.0.57
  • @svgr/webpack 6.5.1
  • clsx 1.2.1
  • file-loader 6.2.0
  • prism-react-renderer 1.3.5
  • raw-loader 4.0.2
  • react 18.2.0
  • react-dom 18.2.0
  • url-loader 4.1.1
  • node >=16.0.0

  • Check this box to trigger a request for Renovate to run again on this repository

Nice!

Nice!
Seems worth automating so it's always up to date. It's not too bad to do manually, but I did not release the versions between v0.8 and v0.12 because I simply forgot. 🤷‍♂️

Originally posted by @metalmatze in parca-dev/parca#1491 (comment)

Docs refer to removed systemd-units/kubernetes flags

The agent release linked in the docs (e.g. here) resolves to v0.10.0-rc.1, but that release drops support for the --systemd-units flag, which parca-dev/parca-agent#627 removed.

$ grep -R systemd-units
docs/agent-binary.mdx:sudo parca-agent --node=systemd-test --systemd-units=docker.service --log-level=debug --kubernetes=false --store-address=localhost:7070 --insecure
docs/parca-agent-systemd.mdx:      --systemd-units=SYSTEMD-UNITS,...
docs/parca-agent-systemd.mdx:To profile units, you just need to specify the name of the service in `--systemd-units` flag.
docs/parca-agent-systemd.mdx:+  --systemd-units=docker.service,my-app.service \
docs/systemd.mdx:ExecStart=/usr/bin/parca-agent --http-address=":7071" --node=systemd-test --systemd-units=docker.service,parca.service,parca-agent.service --kubernetes=false --store-address=localhost:7070 --insecure
src/components/HomepageQuickstart.js:./parca-agent --node=systemd-test --systemd-units=parca-agent.service --kubernetes=false`

It also looks like --kubernetes=false is no longer valid. Should both of these simply be removed?
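A small sketch for cleaning this up in one pass: list every file that still mentions a removed flag (the flag list is taken from the grep above; DOCS_DIR is a hypothetical variable, defaulting to the current directory):

```shell
# Scan a docs tree for flags that parca-agent no longer accepts.
DOCS_DIR="${DOCS_DIR:-.}"
for flag in "systemd-units" "kubernetes=false"; do
  echo "files mentioning --${flag}:"
  grep -Rl -- "$flag" "$DOCS_DIR" || true
done
```

Running this in CI would catch stale flag references whenever the agent removes another option.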
