
buildbuddy-io / buildbuddy


BuildBuddy is an open source Bazel build event viewer, result store, remote cache, and remote build execution platform.

Home Page: https://buildbuddy.io

License: Other

Languages: Starlark 8.63%, TypeScript 25.75%, Go 59.52%, Shell 0.41%, HTML 0.02%, CSS 3.42%, Python 0.49%, Dockerfile 0.16%, JavaScript 0.18%, HCL 0.10%, MDX 1.33%
Topics: bazel, golang, typescript, react, grpc, protobuf, kubernetes, cache, remote-caching, results-viewer

buildbuddy's Introduction


BuildBuddy is an open source Bazel build event viewer, result store, and remote cache.

Intro

BuildBuddy is an open source Bazel build event viewer, result store, and remote cache. It helps you collect, view, share and debug build events in a user-friendly web UI.

It's written in Go and React and can be deployed as a Docker image. It runs as a cloud-hosted service and can also be deployed to your own cloud provider or run on-prem. BuildBuddy's core is open sourced in this repo under the MIT License.

Get started

Getting started with BuildBuddy is simple. Just add these two lines to your .bazelrc file.

.bazelrc

build --bes_results_url=https://app.buildbuddy.io/invocation/
build --bes_backend=grpcs://remote.buildbuddy.io

This will print a BuildBuddy URL containing your build results at the beginning and end of every Bazel invocation. You can command-click / double-click these to open the results in a browser.
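If you'd also like to use BuildBuddy's remote cache, you can point Bazel at the cache endpoint as well. A minimal sketch, assuming the BuildBuddy Cloud cache endpoint; check the setup docs for the exact URL and for attaching an API key:

build --remote_cache=grpcs://remote.buildbuddy.io
build --remote_upload_local_results=true  # let local builds populate the cache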

Want more? Get up and running quickly with our fully managed BuildBuddy Cloud service. It's free for individuals, open source projects, and teams of up to 3.

If you'd like to host your own instance on-prem or in the cloud, check out our documentation.

Documentation

Our documentation gives you a full look at how to set up and configure BuildBuddy.

Questions?

If you have any questions, join the BuildBuddy Slack channel or e-mail us at [email protected]. We’d love to chat!

Features

  • Build summary & logs - a high-level overview of the build, including who initiated it, how long it took, how many targets were affected, and so on. The build log makes it easy to share stack traces and errors with teammates, which makes collaborative debugging easier.

  • Target overview - quickly see which targets and tests passed or failed and dig into more details about them.

  • Detailed timing information - BuildBuddy invocations include a "Timing" tab, which pulls the Bazel profile logs from your build cache and displays them in a human-readable format.

  • Invocation details - see all of the explicit flags, implicit options, and environment variables that affect your build. This is particularly useful when a build works on one machine but not another - you can compare the two and see what's different.

  • Build artifacts - get a quick view of all of the build artifacts generated by an invocation so you can easily access them. Clicking a build artifact downloads it when using either the built-in BuildBuddy cache or a third-party cache running in gRPC mode that supports the ByteStream API, like bazel-remote.

  • Raw logs - you can really dig into the details here. This is a complete view of all of the events sent up via Bazel's build event protocol. If you find yourself digging in here too much, let us know and we'll surface that info in a nicer UI.

  • Remote cache support - BuildBuddy comes with an optional built-in Bazel remote cache that implements the gRPC remote caching APIs. This allows BuildBuddy to optionally collect build artifacts, timing profile information, test logs, and more. Alternatively, BuildBuddy supports third-party caches running in gRPC mode that support the ByteStream API, like bazel-remote.

  • Viewable test logs - BuildBuddy surfaces test logs directly in the UI when you click on a test target (gRPC remote cache required).

  • Dense UI mode - if you want more information density, BuildBuddy has a "Dense mode" that packs more information into every square inch.

  • BES backend multiplexing - if you're already pointing your bes_backend flag at another service, BuildBuddy has a build_event_proxy configuration option that lets you specify other backends your build events should be forwarded to. See the configuration docs for more information, and the config sketch after this list.

  • Slack webhook support - BuildBuddy can message a Slack channel when builds finish. It's a nice way to get a quick notification when a long-running build completes or a CI build fails. See the configuration docs for more information, and the config sketch after this list.
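As a rough sketch, the last two features might be wired up in the on-prem YAML config like this. The key names below are assumptions based on the feature names and should be verified against the configuration docs:

build_event_proxy:
  hosts:
    - "grpcs://your-other-bes-backend:1985"   # assumed key: extra backends to forward BES events to

integrations:
  slack:
    webhook_url: "https://hooks.slack.com/services/..."   # assumed key: channel webhook to notify on build completion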

buildbuddy's People

Contributors

aleksandergondek, bduffany, brentleyjones, dependabot[bot], fmeum, geeker-smallwhite, glukasiknuro, gravypod, gtli7, iain-macdonald, jaredneil, jdhollen, johnamican, jonshea, kernald, layus, luluz66, maggie-lou, marcussorealheis, minor-fixes, pariparajuli, purkhusid, rogerhu, siggisim, sluongng, srodriguezo, tempoz, thii, tylerwilliams, vadimberezniker

buildbuddy's Issues

Remote cache issues: `IOException: Could not parse Deps.Dependencies message from proto`

Hey beloved Buildbuddy authors,

When we are building in CI (Linux -- Ubuntu LTS, 20.x), we get the following error, but only when remote caching is turned on with Buildbuddy:

java.io.IOException: error reading deps artifact: bazel-out/k8-opt-exec-2B5CBBC6/bin/external/io_grpc_grpc_java/context/libcontext-hjar.jdeps
	at com.google.devtools.build.buildjar.javac.plugins.dependency.DependencyModule.collectDependenciesFromArtifact(DependencyModule.java:290)
	at com.google.devtools.build.buildjar.javac.plugins.dependency.DependencyModule.computeStrictClasspath(DependencyModule.java:256)
	at com.google.devtools.build.buildjar.ReducedClasspathJavaLibraryBuilder.compileSources(ReducedClasspathJavaLibraryBuilder.java:52)
	at com.google.devtools.build.buildjar.SimpleJavaLibraryBuilder.compileJavaLibrary(SimpleJavaLibraryBuilder.java:110)
	at com.google.devtools.build.buildjar.SimpleJavaLibraryBuilder.run(SimpleJavaLibraryBuilder.java:118)
	at com.google.devtools.build.buildjar.BazelJavaBuilder.build(BazelJavaBuilder.java:101)
	at com.google.devtools.build.buildjar.BazelJavaBuilder.parseAndBuild(BazelJavaBuilder.java:81)
	at com.google.devtools.build.buildjar.BazelJavaBuilder.main(BazelJavaBuilder.java:65)
Caused by: java.io.IOException: Could not parse Deps.Dependencies message from proto.
	at com.google.devtools.build.buildjar.javac.plugins.dependency.DependencyModule.collectDependenciesFromArtifact(DependencyModule.java:280)
	... 7 more

This particular set of jdeps doesn't seem special, and we've seen this error before in our build logs. We're running on the latest Bazel (5.1.1 at the time of writing).

Is it possible to use buildbuddy to build go apps using the go toolchain?

Hi,

I've been playing around with buildbuddy, but unfortunately I'm not super experienced with Bazel yet.

I'm using the rules_go go toolchain to build my application. When trying to use Buildbuddy's RBE feature, I can't quite seem to get the builds to work because the go toolchain cannot be resolved.

Is it possible to configure it to use the Go toolchain? According to this document, it seems like only C and Java are supported. Is this correct? I've tried launching with a Go Docker container, but no success.

Thanks.

Official Helm chart

Thanks for writing BuildBuddy! In addition to a pure Kubernetes deployment, do you have any plans to create an official Helm chart, so variables like the port number or number of replicas can be programmatically set via a standardized variables API?

Reporting a vulnerability

Hello!

I hope you are doing well!

We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called Private vulnerability reporting, which enables security researchers to privately disclose a vulnerability. Unfortunately, it is not enabled for this repository.

Can you enable it, so that we can report it?

Thanks in advance!

PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository

Timing tab encounters a stack overflow

RangeError: Maximum call stack size exceeded occurs, apparently from flame_chart.tsx:66.

I imagine this happens due to the number of actions, but it would be nice for this view to at least fail gracefully, or possibly segment the view so it works for any number of actions.

Cache misses compared to other cache implementations

When using BuildBuddy as a remote cache for my Bazel project, I was debugging remote cache misses. I followed all the guides in the Bazel docs to determine which actions were causing issues. The weird thing is that the execution log came out completely consistent, but I was not getting a 100% cache hit rate. Moreover, the actions that weren't cached were marked in the logs as "cacheable: true".

At a loss, I tried bazel-remote as the backing cache instead, and everything started working. So I can't give very detailed information about what was causing the issue, but I have since stopped using BuildBuddy as a backing cache - partially because of the cache hit rate, but also partially due to some lost trust in the backing implementation, since it's a critical piece that handles and stores cached binaries. I'll come back in a year to check on the status again, as I do like the UI of BuildBuddy and the other enterprise offerings it provides.
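For anyone else chasing similar misses, the usual Bazel-level approach (independent of which cache backend is in use) is to capture an execution log from two runs and diff them to find actions whose keys changed. These are stock Bazel flags, not BuildBuddy-specific:

bazel build //... --execution_log_json_file=/tmp/exec1.json
bazel clean && bazel build //... --execution_log_json_file=/tmp/exec2.json
# diff the two JSON logs to find actions whose inputs or environment differ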

Artifacts/test logs aren't sharable

Currently, clicking on an artifact or test log gives the path to the file on the local machine that ran the test - which doesn't help when the invocation ran on a different machine (e.g. coworker or CI).

As far as I understand, having those files reachable would require either implementing CAS storage (maybe outside the scope of this project?) or plugging into an existing one (e.g. https://github.com/buildbarn/bb-storage).

Getting started doc for on prem misses vital information

The SETUP.md document basically lists only one command to run BuildBuddy on-prem: bazel run -c opt server:buildbuddy. While this works, a few vital pieces of information are missing and very much needed:

  1. How do you configure non-default ports for gRPC and HTTP?
  2. How do you stop BuildBuddy in order to restart it with different parameters?
  3. If it requires a non-trivial amount of disk space to store build data, how do you specify the folder where that data is kept?
  4. How do you specify the data retention period?
  5. What other options are available? (a partial config sketch for items 3 and 4 follows below)
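For items 3 and 4, here's a partial sketch using the storage and cache options that appear in the on-prem config example further down this page; ports and the full option list would still need to be documented:

storage:
  ttl_seconds: 604800          # data retention period (one week)
  disk:
    root_directory: /storage   # where build data is kept on disk
cache:
  max_size_bytes: 10000000000  # 10 GB
  disk:
    root_directory: /cache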

Feature Request: When showing suggestions, show the .bazelrc command

For suggestions like:

Consider setting the Bazel flag --experimental_remote_build_event_upload=minimal to reduce the number of unnecessary cache uploads that Bazel might perform during a build

Consider showing the appropriate command, like build or common. It's not straightforward to figure out the command by looking at the Bazel flag page, since it's massive and you can't tell which command a flag applies to.

Y'all know better than I do, but I'd guess the desired outcome is for the user to modify their bazelrc, so it might be worth showing the .bazelrc line directly.
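For example, a suggestion could render as a ready-to-paste .bazelrc line; common applies a flag to every command that supports it, while build scopes it to build (and commands that inherit from it). The flag spelling here is just taken from the suggestion quoted above:

common --experimental_remote_build_event_upload=minimal
# or, scoped to build:
build --experimental_remote_build_event_upload=minimal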

BuildBuddy onprem to support cache compression?

Running bazel build with --experimental_remote_cache_compression against an on-prem Docker BuildBuddy v2.12.4 (41467653eedb096fe4c53341d8b4dcd36812283e) compiled with go1.18.1,

I got this error:

ERROR: --experimental_remote_cache_compression requested but remote does not support compression

Does this feature exist in the current version (am I missing a config option?), or will it come in a future version?
Note: it works on BuildBuddy Cloud.
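For what it's worth, a sketch of what enabling server-side compression support might look like in the on-prem YAML config; the option name below is an assumption on my part and should be verified against the configuration docs for your version:

cache:
  zstd_transcoding_enabled: true   # assumed option name: lets the cache serve/accept zstd-compressed blobs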

Read-Only API Key Still Capable of Pushing to Remote Cache

Hey 👋🏻,

When trying to fix some CI issue, tweag/rules_nixpkgs#243, I noticed that I was still capable of pushing to the remote cache when build --remote_upload_local_results is set in my .bazelrc, using a read-only API key.

Is it possible that the read-only API key doesn't enforce read-only on the remote cache and that only the Bazel flag enforces it, or am I doing something wrong?
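For reference, the client-side settings involved look roughly like this; the API-key header is the usual way BuildBuddy keys are attached, though treat the exact header name as an assumption and check the docs:

build --remote_cache=grpcs://your.buildbuddy.instance
build --remote_header=x-buildbuddy-api-key=<read-only key>   # assumed header name
build --remote_upload_local_results=false   # client-side opt-out; the question is whether the server enforces read-only too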

Remote execution uses over ~50x more cache transfers than local upstreaming

rules_ll replicates remote execution environments locally in a way that lets users build a project locally and upstream results to a remote cache, such as the buildbuddy remote cache. This way we can reuse artifacts on different machines without remote execution. Building LLVM like this on a machine locally causes ~3.5GB artifact upload:

c484ad18-a047-442a-bf04-ecc81166b525_raw.json.txt

Running roughly the same build (at a later LLVM commit) with a buildbuddy remote executor still has the same ~3.5GB cache upload, but causes ~200GB of cache download:

be9102f9-6977-47b0-8ab9-25ddb38c6265_raw.json.txt

So at 100 GB of cache transfer on an open source account, this is the difference between ~1 build per day and ~0.5 builds per month 😅

It seems that the remote executor refetches artifacts even if it has just built them. This doesn't look like intended behavior to me. If it is working as intended because of some sandboxing policy, it might be a good idea to add a suggestion on how to relax the executor-local cache reuse policy.

What is the scope of the '100 GB of cache transfer' in the personal plan?

I'm exploring using BuildBuddy for a couple of OSS projects, for which the caches are in the range of 1-2.5 GB per job.

Could you clarify the following:

  • Does the 100 GB cover storage (e.g. GitHub Actions has 10GB storage for free) or purely transfer?
  • Is the 100 GB of transfer a one-time limit, or does it renew (does it apply per invocation, per fixed duration of time, or something else)?

Thanks! (Also, I like buildbuddy.io website's design, it feels modern but it's still fast.)

gcr.io/flame-public/buildbuddy-app-onprem:v2.12.31, version `GLIBC_2.33' not found

From https://www.buildbuddy.io/docs/on-prem/#docker-image,
when I run the command:

docker pull gcr.io/flame-public/buildbuddy-app-onprem:latest && docker run -p 1985:1985 -p 8080:8080 gcr.io/flame-public/buildbuddy-app-onprem:latest

I get the following error:

/app/server/cmd/buildbuddy/buildbuddy: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /app/server/cmd/buildbuddy/buildbuddy)
/app/server/cmd/buildbuddy/buildbuddy: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /app/server/cmd/buildbuddy/buildbuddy)
/app/server/cmd/buildbuddy/buildbuddy: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /app/server/cmd/buildbuddy/buildbuddy)

The error is present in v2.12.31, but not in v2.12.30

Environment

Distributor ID: Ubuntu
Description: Ubuntu-Server 20.04.3 2022.02.25 (Cubic 2022-02-25 21:58)
Release: 20.04
Codename: focal

Docker version 20.10.12, build 20.10.12-0ubuntu2~20.04.1

BuildBuddy onprem (Docker) >2.3.3 fails to start

Using the Docker image with a quite simple config file:

app:
  build_buddy_url: "https://redacted.com"
    #cache_api_url: "grpc://redacted.com:9092"

database:
  data_source: "sqlite3:///buildbuddy.db"

storage:
  ttl_seconds: 604800  # One week
  chunk_file_size_bytes: 3000000  # 3 MB
  disk:
    root_directory: /storage

cache:
  max_size_bytes: 10000000000  # 10 GB
  disk:
    root_directory: /cache

It crashes on start-up:

2021/08/12 04:50:31.730 INF BuildBuddy v2.3.4 (d85aa1d0d447d980c4c483a7fbe9fe71fc292fd6) compiled with go1.16.2
2021/08/12 04:50:31.730 INF Reading buildbuddy config from '/config.yaml'
2021/08/12 04:50:31.732 INF Auto-migrating DB
2021/08/12 04:50:31.744 INF Cache: BuildBuddy cache API enabled!
2021/08/12 04:50:31.765 INF Finished initializing disk cache partition "default" at "/cache". Current size: 626605297 (max: 10000000000) bytes
2021/08/12 04:50:32.437 FTL Error reading app bundle hash: open sha.sum: file does not exist

Every version above 2.3.3 that I tried showed the same issue. I tried clearing all the storage/db/cache data, with the same result.

BuildBuddy nodejs support

More of a question than an issue, but does BuildBuddy support the Node.js rules/toolchain? We're mostly using an Angular monorepo with Bazel, so I'm curious about this.

Remote asset API - CacheNotFoundException

When using the --experimental_remote_downloader flag, after a while of running I seem to be getting

(14:33:22) ERROR: An error occurred during the fetch of repository 'org_apache_spark_spark_core_2_12':
   Traceback (most recent call last):
	File "/workspace/source/third_party/workspace.bzl", line 4, column 17, in _jar_artifact_impl
		ctx.download(
Error in download: com.google.devtools.build.lib.remote.common.CacheNotFoundException: Missing digest: 9382b8ec58373a133416468ee2f8c50bfa262f76e72a737646d739ea9560ad1c/10167565

In the logs, this seems to be repeating:

2021/05/28 14:33:22.191 INF gRPC 22d52547-a08e-421a-972b-2eaa7ca9c199 f6e42943-826c-4e3a-b86f-ffbc48c52a7d /FetchBlob OK [40 ms]
2021/05/28 14:33:22.308 INF gRPC 4af58b7d-42ae-42f3-bacc-c83c02678081 f6e42943-826c-4e3a-b86f-ffbc48c52a7d /Read OK [2.14 s]
2021/05/28 14:33:22.364 INF gRPC 3b596b15-4dd4-40ec-a74c-aff1097fe99d f6e42943-826c-4e3a-b86f-ffbc48c52a7d /FetchBlob OK [27 ms]
2021/05/28 14:33:22.374 INF gRPC b624f3b5-b8f3-464b-817e-7be372108412 f6e42943-826c-4e3a-b86f-ffbc48c52a7d /Read OK [161 ms]
2021/05/28 14:33:22.401 INF gRPC c5b7fe7d-bc50-4d35-a42d-aa123112f202 f6e42943-826c-4e3a-b86f-ffbc48c52a7d /Read NotFound [24 ms]
2021/05/28 14:33:22.473 WRN Error inserting object into cache: context canceled
2021/05/28 14:33:22.871 WRN Error inserting object into cache: context canceled

After rerunning the fetch it fails on a different dependency. Could there be an issue with read-after-write, e.g. the digest is sent back before it's available in the cache? Or, based on the logs, maybe the cache save operation fails but FetchBlob returns OK?

Buildbuddy does not work on Firefox

I was testing BuildBuddy locally from the master branch and I can't seem to get it to work on Firefox.

It seems related to the recent gzip change. BuildBuddy works fine on Firefox if I check out the commit before gzip was introduced.

Support recording TargetStatus info for more/all invocations

I'm trying to gather data to see what the heaviest targets are in our repository (by execution count and time) to assist in figuring out to what extent a (rather complex) set of targets is well-cached. The SQL database backing BuildBuddy seems to have all the pieces in place for recording what I need (runtime per target per invocation, by joining invocations <-> TargetStatuses <-> Targets), but the resulting aggregation only contains results for builds tagged role=CI. In my case, this is set on automated postsubmit builds only (always from master, reflects merged code health). As a result, presubmit builds and one-off interactive builds are not reflected in this data.

The code appears to make this choice explicitly. I could set role=CI on all builds, but that would reduce the usefulness of the test grid page.

Some questions:

  • What problems should I expect if I remove this check? (I already expect ~10x DB data usage, and needing to beef up the instance so that queries can go through the data). Do semantics elsewhere break? (maybe some queries assume they don't need a WHERE role = "CI" clause when they now would?)
  • The BuildBuddy UI still seems to know target times for non-CI builds - how does it know this if it's not stored in the DB? Is it re-fetching and parsing a BES stream object?
  • Would you all be amenable to making this a configurable setting? If so, what kind of setting would you prefer? (I may be able to make that contribution)

Failed to build in Mac OS

Running the command bazelisk run -c opt server:buildbuddy --verbose_failures failed on macOS.

Log:

ERROR: /private/var/tmp/_bazel_bytedance/dec3efcbf31172670b2193c1360342f6/external/com_github_mattn_go_sqlite3/BUILD.bazel:3:11: GoCompilePkg external/com_github_mattn_go_sqlite3/go-sqlite3.a failed: (Exit 1): builder failed: error executing command 
  (cd /private/var/tmp/_bazel_bytedance/dec3efcbf31172670b2193c1360342f6/sandbox/darwin-sandbox/901/execroot/buildbuddy && \
  exec env - \
    APPLE_SDK_PLATFORM=MacOSX \
    APPLE_SDK_VERSION_OVERRIDE=11.1 \
    CC=external/local_config_cc/wrapped_clang \
    CGO_ENABLED=1 \
    GOARCH=amd64 \
    GOOS=darwin \
    GOPATH='' \
    GOROOT=external/go_sdk_darwin \
    GOROOT_FINAL=GOROOT \
    PATH=external/local_config_cc:/bin:/usr/bin \
    XCODE_VERSION_OVERRIDE=12.4.0.12D4e \
  bazel-out/host/bin/external/go_sdk_darwin/builder compilepkg -sdk external/go_sdk_darwin -installsuffix darwin_amd64 -src external/com_github_mattn_go_sqlite3/backup.go -src external/com_github_mattn_go_sqlite3/callback.go -src external/com_github_mattn_go_sqlite3/convert.go -src external/com_github_mattn_go_sqlite3/doc.go -src external/com_github_mattn_go_sqlite3/error.go -src external/com_github_mattn_go_sqlite3/sqlite3.go -src external/com_github_mattn_go_sqlite3/sqlite3_context.go -src external/com_github_mattn_go_sqlite3/sqlite3_func_crypt.go -src external/com_github_mattn_go_sqlite3/sqlite3_go18.go -src external/com_github_mattn_go_sqlite3/sqlite3_load_extension.go -src external/com_github_mattn_go_sqlite3/sqlite3_opt_preupdate.go -src external/com_github_mattn_go_sqlite3/sqlite3_opt_preupdate_omit.go -src external/com_github_mattn_go_sqlite3/sqlite3_opt_userauth_omit.go -src external/com_github_mattn_go_sqlite3/sqlite3_other.go -src external/com_github_mattn_go_sqlite3/sqlite3_solaris.go -src external/com_github_mattn_go_sqlite3/sqlite3_type.go -src external/com_github_mattn_go_sqlite3/sqlite3_usleep_windows.go -src external/com_github_mattn_go_sqlite3/sqlite3_windows.go -src external/com_github_mattn_go_sqlite3/static_mock.go -src external/com_github_mattn_go_sqlite3/sqlite3-binding.c -src external/com_github_mattn_go_sqlite3/sqlite3_opt_unlock_notify.c -src external/com_github_mattn_go_sqlite3/sqlite3-binding.h -src external/com_github_mattn_go_sqlite3/sqlite3ext.h -importpath github.com/mattn/go-sqlite3 -p github.com/mattn/go-sqlite3 -package_list bazel-out/host/bin/external/go_sdk_darwin/packages.txt -o bazel-out/darwin-opt/bin/external/com_github_mattn_go_sqlite3/go-sqlite3.a -x bazel-out/darwin-opt/bin/external/com_github_mattn_go_sqlite3/go-sqlite3.x -nogo bazel-out/darwin-opt-exec-2B5CBBC6/bin/vet_/vet -gcflags '' -asmflags '' -cppflags '-I external/com_github_mattn_go_sqlite3 -iquote .' -cflags '-D_FORTIFY_SOURCE=1 -fstack-protector -Wthread-safety -Wself-assign -fno-omit-frame-pointer -O2 -DNDEBUG -DNS_BLOCK_ASSERTIONS=1 DEBUG_PREFIX_MAP_PWD=. -isysroot __BAZEL_XCODE_SDKROOT__ -F__BAZEL_XCODE_SDKROOT__/System/Library/Frameworks -F__BAZEL_XCODE_DEVELOPER_DIR__/Platforms/MacOSX.platform/Developer/Library/Frameworks -mmacosx-version-min=11.1 -no-canonical-prefixes -Wno-builtin-macro-redefined -D__DATE__="redacted" -D__TIMESTAMP__="redacted" -D__TIME__="redacted" -target x86_64-apple-macosx -DHAVE_USLEEP=1 -DSQLITE_DEFAULT_WAL_SYNCHRONOUS=1 -DSQLITE_DISABLE_INTRINSIC -DSQLITE_ENABLE_FTS3 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_FTS4_UNICODE61 -DSQLITE_ENABLE_RTREE -DSQLITE_ENABLE_UPDATE_DELETE_LIMIT -DSQLITE_OMIT_DEPRECATED -DSQLITE_THREADSAFE=1 -DSQLITE_TRACE_SIZE_LIMIT=15 -Wno-deprecated-declarations -std=gnu99 -I. -fPIC' -cxxflags '-D_FORTIFY_SOURCE=1 -fstack-protector -Wthread-safety -Wself-assign -fno-omit-frame-pointer -O2 -DNDEBUG -DNS_BLOCK_ASSERTIONS=1 -std=c++11 DEBUG_PREFIX_MAP_PWD=. -isysroot __BAZEL_XCODE_SDKROOT__ -F__BAZEL_XCODE_SDKROOT__/System/Library/Frameworks -F__BAZEL_XCODE_DEVELOPER_DIR__/Platforms/MacOSX.platform/Developer/Library/Frameworks -mmacosx-version-min=11.1 -no-canonical-prefixes -Wno-builtin-macro-redefined -D__DATE__="redacted" -D__TIMESTAMP__="redacted" -D__TIME__="redacted" -target x86_64-apple-macosx -fPIC' -objcflags '-D_FORTIFY_SOURCE=1 -fstack-protector -Wthread-safety -Wself-assign -fno-omit-frame-pointer -O2 -DNDEBUG -DNS_BLOCK_ASSERTIONS=1 DEBUG_PREFIX_MAP_PWD=. 
-isysroot __BAZEL_XCODE_SDKROOT__ -F__BAZEL_XCODE_SDKROOT__/System/Library/Frameworks -F__BAZEL_XCODE_DEVELOPER_DIR__/Platforms/MacOSX.platform/Developer/Library/Frameworks -mmacosx-version-min=11.1 -D_FORTIFY_SOURCE=1 -fstack-protector -Wthread-safety -Wself-assign -fno-omit-frame-pointer -O2 -DNDEBUG -DNS_BLOCK_ASSERTIONS=1 DEBUG_PREFIX_MAP_PWD=. -isysroot __BAZEL_XCODE_SDKROOT__ -F__BAZEL_XCODE_SDKROOT__/System/Library/Frameworks -F__BAZEL_XCODE_DEVELOPER_DIR__/Platforms/MacOSX.platform/Developer/Library/Frameworks -mmacosx-version-min=11.1 -no-canonical-prefixes -Wno-builtin-macro-redefined -D__DATE__="redacted" -D__TIMESTAMP__="redacted" -D__TIME__="redacted" -target x86_64-apple-macosx -DHAVE_USLEEP=1 -DSQLITE_DEFAULT_WAL_SYNCHRONOUS=1 -DSQLITE_DISABLE_INTRINSIC -DSQLITE_ENABLE_FTS3 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_FTS4_UNICODE61 -DSQLITE_ENABLE_RTREE -DSQLITE_ENABLE_UPDATE_DELETE_LIMIT -DSQLITE_OMIT_DEPRECATED -DSQLITE_THREADSAFE=1 -DSQLITE_TRACE_SIZE_LIMIT=15 -Wno-deprecated-declarations -std=gnu99 -I. -fPIC' -objcxxflags '-D_FORTIFY_SOURCE=1 -fstack-protector -Wthread-safety -Wself-assign -fno-omit-frame-pointer -O2 -DNDEBUG -DNS_BLOCK_ASSERTIONS=1 DEBUG_PREFIX_MAP_PWD=. -isysroot __BAZEL_XCODE_SDKROOT__ -F__BAZEL_XCODE_SDKROOT__/System/Library/Frameworks -F__BAZEL_XCODE_DEVELOPER_DIR__/Platforms/MacOSX.platform/Developer/Library/Frameworks -mmacosx-version-min=11.1 -D_FORTIFY_SOURCE=1 -fstack-protector -Wthread-safety -Wself-assign -fno-omit-frame-pointer -O2 -DNDEBUG -DNS_BLOCK_ASSERTIONS=1 -std=c++11 DEBUG_PREFIX_MAP_PWD=. -isysroot __BAZEL_XCODE_SDKROOT__ -F__BAZEL_XCODE_SDKROOT__/System/Library/Frameworks -F__BAZEL_XCODE_DEVELOPER_DIR__/Platforms/MacOSX.platform/Developer/Library/Frameworks -mmacosx-version-min=11.1 -no-canonical-prefixes -Wno-builtin-macro-redefined -D__DATE__="redacted" -D__TIMESTAMP__="redacted" -D__TIME__="redacted" -target x86_64-apple-macosx -fPIC' -ldflags '-lc++ -fobjc-link-runtime -headerpad_max_install_names -no-canonical-prefixes -target x86_64-apple-macosx -mmacosx-version-min=11.1')
Execution platform: @local_config_platform//:host

Use --sandbox_debug to see verbose messages from the sandbox
compilepkg: nogo: errors found by nogo during build-time code analysis:
/var/folders/7c/xbkf48h10g1f8qfvhjjlrjf00000gp/T/rules_go_work-066840393/_cgo_gotypes.go:1362:8: tautological condition: nil == nil (nilness)
Target //server/cmd/buildbuddy:buildbuddy failed to build
INFO: Elapsed time: 32.825s, Critical Path: 32.35s
INFO: 2 processes: 2 internal.
ERROR: Build failed. Not running target
INFO: Streaming build results to: https://app.buildbuddy.io/invocation/42be2f2c-0783-4f70-8bc1-eaba08c43848
FAILED: Build did NOT complete successfully

The buildbuddy repo is incompatible with OSS tooling

The BuildBuddy repo uses the Google convention of declaring a go_library for each package. Unfortunately, this is incompatible with the open-source Go ecosystem, which makes importing it into a standard Go tree impossible.

Would you be open to a PR that moves to a more standard OSS Go import path?

UX: Make it easy to copy a build target presented in the buildbuddy UI

The most common flow I have when using BuildBuddy for CI is to identify tests that failed, click in to see the details of a particular target, and eventually manually copy and paste the target so that I can test it locally while I fix the errors.

This flow could be improved in a number of ways

  1. Buildbuddy could supply me with a copy pastable command to run all failed tests locally
  2. When I click into a particular target, there could be a button to click which would copy the target name to my clipboard

I would be a very happy camper if one or both of these UX tweaks are implemented :)

Which Tabs are Build Buddy

In dark mode on Firefox, it's difficult to identify which tabs are BuildBuddy tabs because of the favicon. Can we make that image context-aware (theme-aware)?

Timeout when building java targets w/ BB RBE + java toolchain (bazel_tools)

I am integrating remote cache + execution in a local Bazel repo and am running into timeouts when building any Java targets. We're currently set up to use the bazel_tools JDK 11 toolchains and base image. I am able to build everything locally but not through RBE.

Bazel version: 4.1.0

Build stalls when executing this bazel_tools action: https://github.com/bazelbuild/bazel/blob/master/tools/jdk/DumpPlatformClassPath.java

Example failed build : https://app.buildbuddy.io/invocation/94246fde-8d83-4f46-bfee-2ea575b614cb

.bazelrc

build:remote --javabase=@bazel_tools//tools/jdk:remote_jdk11
build:remote --host_javabase=@bazel_tools//tools/jdk:remote_jdk11
build:remote --host_java_toolchain=@bazel_tools//tools/jdk:toolchain_java11
build:remote --java_toolchain=@bazel_tools//tools/jdk:toolchain_java11

WORKSPACE

# java - jdk 
http_archive(
    name = "rules_java",
    sha256 = "220b87d8cfabd22d1c6d8e3cdb4249abd4c93dcc152e0667db061fb1b957ee68",
    url = "https://github.com/bazelbuild/rules_java/releases/download/0.1.1/rules_java-0.1.1.tar.gz",
)

load("@rules_java//java:repositories.bzl", "rules_java_dependencies")
rules_java_dependencies()

...

#build buddy
http_archive(
    name = "io_buildbuddy_buildbuddy_toolchain",
    sha256 = "a2a5cccec251211e2221b1587af2ce43c36d32a42f5d881737db3b546a536510",
    strip_prefix = "buildbuddy-toolchain-829c8a574f706de5c96c54ca310f139f4acda7dd",
    urls = ["https://github.com/buildbuddy-io/buildbuddy-toolchain/archive/829c8a574f706de5c96c54ca310f139f4acda7dd.tar.gz"],
)

load("@io_buildbuddy_buildbuddy_toolchain//:deps.bzl", "buildbuddy_deps")

buildbuddy_deps()

load("@io_buildbuddy_buildbuddy_toolchain//:rules.bzl", "buildbuddy")

buildbuddy(name = "buildbuddy_toolchain")

java dep : https://github.com/bazelbuild/rules_java/blob/ca8f85f30491148dc86865e1e799d96410f31500/java/repositories.bzl#L505

Support regex matching on global advanced filters

The hosts page is super helpful for identifying issues across Bazel builds in a set of CI host machines. Whilst the global filter is great, it would be incredibly useful to be able to filter these down with more precision to see only the set of hosts which you might be looking for.

Allow to set prefix for S3 storage

We use the same S3 bucket for BuildBuddy and bazel-remote. The latter allows specifying a prefix via config; the former doesn't. This makes it hard to navigate the bucket, since the root is polluted, and also to specify different lifecycle policies for the cache and BuildBuddy reports.

Basically, we want this kind of structure:

cache/
   ac/
   cas.v/
buildbuddy/
   24971223-462c-4a5d-85e7-3c6f8c50f0a6/
   24989c17-ab53-4744-ace6-d76ddda9d720/
   ...
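A hypothetical sketch of what such an option could look like in the YAML config; the surrounding S3 keys are from memory and should be double-checked against the docs, and path_prefix is precisely the new option this issue is requesting:

storage:
  aws_s3:
    bucket: "shared-bucket"
    path_prefix: "buildbuddy/"   # hypothetical option requested by this issue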

[feature-request] fine-grained `exclusive`

I'm running a few tests with exclusive. With remote execution at BuildBuddy, I'd like to have a way (e.g., via exec_properties) to express the constraint that these tests are mutually exclusive with each other, but not with other tests or builds.

So that these mutually exclusive tests can still run in parallel if there are enough remote executors.

Otherwise, these tests are forced to run locally and in serial, greatly slowing down everything.
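A hypothetical sketch of what this could look like on a test target: the exclusive tag is standard Bazel, while the exec_properties entry below is made up purely to illustrate the request and does not exist today:

load("@io_bazel_rules_go//go:def.bzl", "go_test")

go_test(
    name = "integration_test",
    srcs = ["integration_test.go"],
    # Today: "exclusive" forces local, serial execution.
    tags = ["exclusive"],
    # Hypothetical: serialize only within this group on remote executors.
    exec_properties = {
        "exclusive-group": "integration-db",
    },
)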

Feature: Browse/Query support

Having the ability to browse builds and query for builds (look for all my builds, look for all builds from a user, look for all builds containing target X) is pretty crucial to this being useful at any sort of scale.

There is a database created for this service - it seems that with some appropriate indexing, some of these queries should be reasonably achievable.

No registered executors in pool "" with os "darwin" with arch "amd64"

Hello, BuildBuddy!
When getting started with BuildBuddy on Apple Silicon, the very first build failed with the following error:

Building md4c failed: (Exit 34): UNAVAILABLE: Error scheduling execution task "/uploads/7288e302-fd27-495c-939f-ec94b5f27ffe/blobs/251686baefe82c2e0ab2b7f1f1617925cd1f955bbe4fa25f7f7d5f9f3f4dddee/189": rpc error: code = Unavailable desc = No registered executors in pool "" with os "darwin" with arch "amd64".
java.io.IOException: io.grpc.StatusRuntimeException: UNAVAILABLE: Error scheduling execution task "/uploads/7288e302-fd27-495c-939f-ec94b5f27ffe/blobs/251686baefe82c2e0ab2b7f1f1617925cd1f955bbe4fa25f7f7d5f9f3f4dddee/189": rpc error: code = Unavailable desc = No registered executors in pool "" with os "darwin" with arch "amd64".

Looks like you have no toolchain for that.
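For context, the pool/OS/arch in that error come from the action's platform properties, which the scheduler matches against registered executors. A rough sketch of how these are commonly pinned so actions match the executors that actually exist; --remote_default_exec_properties is a stock Bazel flag, and the property names follow the error message but should be verified against the RBE docs:

build --remote_default_exec_properties=OSFamily=linux
build --remote_default_exec_properties=Arch=amd64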

An error occurred during the fetch of repository 'go_googleapis'

I am trying to follow the Getting Started guide for on-prem and the build fails with these errors:

INFO: Streaming build results to: https://app.buildbuddy.io/invocation/2b5c8107-852d-4f7c-a720-45615d2b3210
INFO: Call stack for the definition of repository 'go_googleapis' which is a http_archive (rule definition at /home/ubuntu/.cache/bazel/_bazel_ubuntu/d66ea702277a82b20058abe99a1364a3/external/bazel_tools/tools/build_defs/repo/http.bzl:296:16):
 - <builtin>
 - /home/ubuntu/.cache/bazel/_bazel_ubuntu/d66ea702277a82b20058abe99a1364a3/external/io_bazel_rules_go/go/private/repositories.bzl:241:9
 - /home/ubuntu/.cache/bazel/_bazel_ubuntu/d66ea702277a82b20058abe99a1364a3/external/io_bazel_rules_go/go/private/repositories.bzl:209:5
 - /home/ubuntu/Desktop/buildbuddy/WORKSPACE:29:1
ERROR: An error occurred during the fetch of repository 'go_googleapis':
   Traceback (most recent call last):
	File "/home/ubuntu/.cache/bazel/_bazel_ubuntu/d66ea702277a82b20058abe99a1364a3/external/bazel_tools/tools/build_defs/repo/http.bzl", line 84
		patch(ctx)
	File "/home/ubuntu/.cache/bazel/_bazel_ubuntu/d66ea702277a82b20058abe99a1364a3/external/bazel_tools/tools/build_defs/repo/utils.bzl", line 148, in patch
		fail(<1 more arguments>)
Error applying patch @io_bazel_rules_go//third_party:go_googleapis-directives.patch:
patching file BUILD.bazel
patching file WORKSPACE
Hunk #1 FAILED at 1.
1 out of 1 hunk FAILED -- saving rejects to file WORKSPACE.rej
patching file google/BUILD.bazel
ERROR: /home/ubuntu/.cache/bazel/_bazel_ubuntu/d66ea702277a82b20058abe99a1364a3/external/org_golang_google_grpc/status/BUILD.bazel:3:1: @org_golang_google_grpc//status:go_default_library depends on @go_googleapis//google/rpc:status_go_proto in repository @go_googleapis which failed to fetch. no such package '@go_googleapis//google/rpc': Traceback (most recent call last):
	File "/home/ubuntu/.cache/bazel/_bazel_ubuntu/d66ea702277a82b20058abe99a1364a3/external/bazel_tools/tools/build_defs/repo/http.bzl", line 84
		patch(ctx)
	File "/home/ubuntu/.cache/bazel/_bazel_ubuntu/d66ea702277a82b20058abe99a1364a3/external/bazel_tools/tools/build_defs/repo/utils.bzl", line 148, in patch
		fail(<1 more arguments>)
Error applying patch @io_bazel_rules_go//third_party:go_googleapis-directives.patch:
patching file BUILD.bazel
patching file WORKSPACE
Hunk #1 FAILED at 1.
1 out of 1 hunk FAILED -- saving rejects to file WORKSPACE.rej
patching file google/BUILD.bazel
ERROR: Analysis of target '//server:buildbuddy' failed; build aborted: no such package '@go_googleapis//google/rpc': Traceback (most recent call last):
	File "/home/ubuntu/.cache/bazel/_bazel_ubuntu/d66ea702277a82b20058abe99a1364a3/external/bazel_tools/tools/build_defs/repo/http.bzl", line 84
		patch(ctx)
	File "/home/ubuntu/.cache/bazel/_bazel_ubuntu/d66ea702277a82b20058abe99a1364a3/external/bazel_tools/tools/build_defs/repo/utils.bzl", line 148, in patch
		fail(<1 more arguments>)
Error applying patch @io_bazel_rules_go//third_party:go_googleapis-directives.patch:
patching file BUILD.bazel
patching file WORKSPACE
Hunk #1 FAILED at 1.
1 out of 1 hunk FAILED -- saving rejects to file WORKSPACE.rej
patching file google/BUILD.bazel
INFO: Elapsed time: 67.262s
INFO: 0 processes.
ERROR: Build failed. Not running target
INFO: Streaming build results to: https://app.buildbuddy.io/invocation/2b5c8107-852d-4f7c-a720-45615d2b3210
FAILED: Build did NOT complete successfully (220 packages loaded, 14517 targets configured)
    Fetching @in_gopkg_yaml_v2; fetching 20s
    Fetching @org_golang_google_api; fetching 20s
    Fetching @com_github_mattn_go_sqlite3; fetching
ubuntu@bazel-server:~/Desktop/buildbuddy$ ^C

Bazel is crashing while uploading BES events to BuildBuddy

We host the BuildBuddy on-prem version on Kubernetes, but while running some Bazel targets, we are seeing that Bazel crashes even after the build/tests succeeded.

Bazel: v4.1.0
BuildBuddy: v2.5.3 (and v2.3.3)

Stack trace:

INFO: Build completed successfully, 98 total actions
WARNING: BES was not properly closed
FATAL: bazel crashed due to an internal error. Printing stack trace:
java.util.concurrent.RejectedExecutionException: Task com.google.common.util.concurrent.TrustedListenableFutureTask@34c86160[status=PENDING, info=[task=[running=[NOT STARTED YET], com.google.devtools.build.lib.remote.ByteStreamBuildEventArtifactUploader$$Lambda$755/0x000000080073c440@1a8c36e8]]] rejected from java.util.concurrent.ThreadPoolExecutor@66d323f6[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 648]
	at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source)
	at com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:586)
	at java.base/java.util.concurrent.AbstractExecutorService.submit(Unknown Source)
	at com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:66)
	at com.google.devtools.build.lib.remote.ByteStreamBuildEventArtifactUploader.upload(ByteStreamBuildEventArtifactUploader.java:220)
	at com.google.devtools.build.lib.buildeventstream.BuildEventArtifactUploader.uploadReferencedLocalFiles(BuildEventArtifactUploader.java:100)
	at com.google.devtools.build.lib.buildeventservice.BuildEventServiceUploader.enqueueEvent(BuildEventServiceUploader.java:196)
	at com.google.devtools.build.lib.buildeventservice.BuildEventServiceTransport.sendBuildEvent(BuildEventServiceTransport.java:95)
	at com.google.devtools.build.lib.runtime.BuildEventStreamer.post(BuildEventStreamer.java:268)
	at com.google.devtools.build.lib.runtime.BuildEventStreamer.buildEvent(BuildEventStreamer.java:472)
	at com.google.devtools.build.lib.runtime.BuildEventStreamer.buildEvent(BuildEventStreamer.java:481)
	at com.google.devtools.build.lib.runtime.BuildEventStreamer.clearPendingEvents(BuildEventStreamer.java:307)
	at com.google.devtools.build.lib.runtime.BuildEventStreamer.clearEventsAndPostFinalProgress(BuildEventStreamer.java:634)
	at com.google.devtools.build.lib.runtime.BuildEventStreamer.close(BuildEventStreamer.java:354)
	at com.google.devtools.build.lib.runtime.BuildEventStreamer.closeOnAbort(BuildEventStreamer.java:336)
	at com.google.devtools.build.lib.buildeventservice.BuildEventServiceModule.forceShutdownBuildEventStreamer(BuildEventServiceModule.java:409)
	at com.google.devtools.build.lib.buildeventservice.BuildEventServiceModule.afterCommand(BuildEventServiceModule.java:578)
	at com.google.devtools.build.lib.runtime.BlazeRuntime.afterCommand(BlazeRuntime.java:626)
	at com.google.devtools.build.lib.runtime.BlazeCommandDispatcher.execExclusively(BlazeCommandDispatcher.java:603)
	at com.google.devtools.build.lib.runtime.BlazeCommandDispatcher.exec(BlazeCommandDispatcher.java:231)
	at com.google.devtools.build.lib.server.GrpcServerImpl.executeCommand(GrpcServerImpl.java:543)
	at com.google.devtools.build.lib.server.GrpcServerImpl.lambda$run$1(GrpcServerImpl.java:606)
	at io.grpc.Context$1.run(Context.java:579)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
Bazel exited with status code 37

And on the next bazel command, we see the message below, which means Bazel had crashed.

$ bazel info
Starting local Bazel server and connecting to it...

Related error I am seeing in the BuildBuddy server logs.

WRN Error receiving build event stream build_id:"e39b864b-e734-4791-b11f-dc01fdd1256e" invocation_id:"556bdbef-692a-450a-bc2a-ab77292cb5e5" component:TOOL: rpc error: code = Canceled desc = context canceled
WRN Marking invocation "556bdbef-692a-450a-bc2a-ab77292cb5e5" as disconnected: rpc error: code = Canceled desc = context canceled

Relevant settings in our .bazelrc

build --remote_cache=grpcs://...
build --remote_upload_local_results=false # Developers don't populate the remote cache
build --bes_results_url=https://.../invocation/
build --bes_backend=grpcs://....
build --bes_timeout=60s
build --bes_upload_mode=nowait_for_upload_complete

Please let me know if you need any other details to debug the issue.

bytestream_url override

I was wondering if it was possible to override the bytestream_url that buildbuddy uses in the UI to retrieve things like the profile, artifacts, and the timing info.

Right now, we have some clients that use a proxy service to connect to our Bazel cache via a localhost URL. BuildBuddy, however, accesses it via a different URL through a Kubernetes service.

Is there a way to tell buildbuddy that even though the client provided a URL for accessing the cache, it should use a different URL?

Thanks!

ssl.upgrade_insecure uses wrong port

The ssl.upgrade_insecure: true setting issues a redirect to https:// but uses the http port.

E.g.

curl -i  http://example.com:1980/   
HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=utf-8
Location: https://example.com:1980/
Date: Sun, 26 Mar 2023 20:36:07 GMT
Content-Length: 62

<a href="https://example.com:1980/">Moved Permanently</a>.

ssl.enable_ssl is true and --ssl_port is specified as 1981.

app.build_buddy_url is specified as https://example.com:1981.

Allow expanding/hiding groups in the timing tab

On larger builds the timing tab can get quite full of irrelevant information; it would be nice to maybe show the top 4-5 entries per field and allow them to be expanded.

This would be especially useful for the grpc-command-<N> groups, which end up full of information that isn't necessarily relevant to most users.


Also, the option of merging all groups and just keeping the sorting would be incredibly useful for keeping an eye on larger actions without having to scan through each of the skyframe evaluators.

Cannot start buildbuddy with v1.0.4

2020/05/17 18:55:36 Reading buildbuddy config from 'config/buildbuddy.onprem.yaml'
2020/05/17 18:55:36 Error loading config from file: Config file config/buildbuddy.onprem.yaml not found

Args:

  • --config_file=config/buildbuddy.onprem.yaml

This worked with the v1.0.0 image.

Confusing log message on macOS during startup

Noticed the following during start of BuildBuddy locally:

2023/01/23 07:19:42.540 INF BuildBuddy v2.12.16 (ef1312bec990efa6d9d46da55361e96eae7c1467) compiled with go1.19.3
...
2023/01/23 07:19:43.788 INF Increasing open files limit 65535 => 10240

I've increased the macOS file limit on my system myself. Thus, this looks odd: 65535 => 10240. Is BuildBuddy decreasing the open files limit?

Provide action integration with Buildbarn Browser

This would be incredibly useful and would definitely open up the project to interest from users of the Buildbarn project. bb-browser (Buildbarn Browser) is an incredible utility for viewing information stored in the CAS and AC through a web page, and the links to its pages can be assembled from a few key components: ${bb-browser-url}/action/${instance}/${hash}/${size}/.

I propose adding some configuration options for the browser URL and instance (remote_instance_name from Bazel), and allowing these actions to be accessed via links from targets. Currently targets cannot be broken down into their various actions, but this should be straightforward from the BES data.

Problem: the main blocker here is a limitation of the current BES implementation, which doesn't seem to provide ANY route to action digests / action sizes through the BEP.

Add configuration for janitor cleanup batch size

The janitor cleans a max of 10 invocations per cycle, so to keep up, we had to set the cleanup_interval down to 10s. It seems a bit wasteful to query the database so frequently for such small batches.

Increasing the number of cleanup_workers doesn't make it go any faster because the Ticker still only sends one signal per interval. More workers just means they take turns. If cleaning one batch takes longer than the cleanup_interval, then more workers would allow them to run in parallel, but I think they'll just try to delete the same invocations because the database query is going to return the same set of invocations to both workers.

Should I create a PR to add a cleanup_batch_size configuration value instead of the hard-coded 10?
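To illustrate why extra workers don't help: with a single time.Ticker, all workers block on the same channel and each tick is delivered to exactly one of them, so throughput stays at one batch per interval regardless of worker count. A minimal standalone sketch, not the actual janitor code:

package main

import (
	"fmt"
	"time"
)

func main() {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	// Three "cleanup workers" sharing one ticker channel: every tick goes
	// to exactly one worker, so the total rate is one batch per interval.
	for i := 0; i < 3; i++ {
		go func(id int) {
			for range ticker.C {
				fmt.Printf("worker %d: cleaning one batch of 10 invocations\n", id)
			}
		}(i)
	}

	time.Sleep(5 * time.Second) // expect ~5 lines total, not ~15
}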

Use GitHub App instead of user token

GitHub Apps are the improved way to authenticate as a service to GitHub, and allow the installation to get an auth token instead of relying on a single user's token.

This is important because using that token will bypass various restrictions; for example, organizations that have an SSH CA set up and required see this error when using Workflows:

Cloning target repo...
Cloning into '.'...
remote: This repository requires SSH certificate authentication. Contact the owner to receive a certificate.
fatal: unable to access 'https://github.com/REDACTED/REDACTED.git/': The requested URL returned error: 403

It also ensures that there is no single load-bearing user account for the Workflows setup.

Tag / release missing for 0.0.4

It'd be useful if you could tag / release the code at the point the Docker container is built, so we have a reference for which version of the code is in use.

Allow more flexibility with TargetSelector in Build Results API

I've been looking through some of the API documentation this afternoon for the Build Results API and noticed that the TargetSelector is missing any option to query by label.

TargetSelector requires an invocation ID; would it make sense to extend this to allow querying by label and remove the invocation_id requirement? This would allow querying of potentially troublesome targets across many invocations.

Keep up the great work!

No results when using BuildFarm remote execution locally

I'm trying to run bazelbuild/buildfarm by spinning up a server and a worker locally.
I am compiling mihaigalos/osi_stack and have remote execution up and running, including caching:

➜  osi_stack git:(master) bazel clean --async --expunge && bazel test --bes_results_url=http://localhost:8080/invocation/ --bes_backend=grpc://localhost:1985 --remote_executor=grpc://localhost:8980 //...
...
INFO: Analyzed 27 targets (22 packages loaded, 373 targets configured).
INFO: Found 2 targets and 25 test targets...
INFO: Elapsed time: 19.066s, Critical Path: 0.77s
INFO: 100 processes: 100 remote cache hit.

There is currently no way to get remote execution statistics from Buildfarm back into BuildBuddy, is there?

Janitor fails to delete blobs because directory is not empty

WRN Error deleting blob (03516230-c019-4f85-8283-1059f6512a4a): remove /buildbuddy/03516230-c019-4f85-8283-1059f6512a4a: directory not empty

I think this has something to do with how Chunkstore and DiskBlobStore interact, but it comes down to this: DiskBlobStore.DeleteBlob calls disk.DeleteFile, which calls os.Remove, which always fails because the blob is actually a directory, and not empty.
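A minimal standalone illustration of the failure mode (not the BuildBuddy code itself): os.Remove refuses to delete a non-empty directory, while a recursive os.RemoveAll would handle a directory-backed blob layout:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// A chunked blob is laid out as a directory containing chunk files.
	dir, err := os.MkdirTemp("", "blob-")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "chunk_0"), []byte("data"), 0o644); err != nil {
		panic(err)
	}

	// This mirrors the reported failure: os.Remove on a non-empty directory.
	fmt.Println("os.Remove:   ", os.Remove(dir)) // "directory not empty"

	// A recursive delete succeeds on the same layout.
	fmt.Println("os.RemoveAll:", os.RemoveAll(dir)) // nil
}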
