buildbot / buildbot

Python-based continuous integration testing framework; your pull requests are more than welcome!

Home Page: https://www.buildbot.net

License: GNU General Public License v2.0

Makefile 0.15% Python 88.90% Shell 0.12% HTML 0.06% JavaScript 0.15% RAML 0.91% Roff 0.13% TypeScript 8.98% Dockerfile 0.02% Pug 0.01% Less 0.01% Jinja 0.06% Mako 0.01% CSS 0.01% SCSS 0.46% Gherkin 0.03%
continuous-integration ci python ci-framework

buildbot's Introduction

Buildbot

The Continuous Integration Framework

Buildbot is based on original work by Brian Warner <mailto:warner-buildbot @ lothar . com> and is currently maintained by the Botherders.

Visit us at http://buildbot.net!


Buildbot consists of several components:

  • master
  • worker
  • www/base
  • www/console_view
  • www/waterfall_view

and so on.

See the README in each subdirectory for more information.

Related repositories:

buildbot's People

Contributors

bdbaddog, bhearsum, cmouse, dependabot[bot], djmitche, ewongbb, gward, hborkhuis, jaredgrubb, jpommerening, krajaratnam, macdems, marcus-sonestedt, maruel, mokibit, p12tic, perilousapricot, pmisik, protatremy, pyup-bot, rajgoesout, rjarry, rodrigc, rutsky, seankelly, shanzi, tardyp, tomprince, tothandras, warner


buildbot's Issues

gerritStartCB doesn't work

This ticket is a migrated Trac ticket 2755

People contributed to the original ticket: @djmitche, @unknown_contributor
Ticket created on: Apr 13 2014
Ticket last modified on: Jun 18 2014


Hi,

I tried the code at http://docs.buildbot.net/latest/manual/cfg-statustargets.html about updating the Gerrit status.

gerritReviewCB and gerritSummaryCB work fine with the latest trunk,
but gerritStartCB does not.

Looking at the code in master/buildbot/status/status_gerrit.py:

        # Gerrit + Git
        if build.getProperty("gerrit_branch") is not None:  # used only to verify Gerrit source

build.getProperty("gerrit_branch") is always None with gerritStartCB.
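
For reference, a minimal sketch of the configuration being discussed; the parameter and callback signatures follow the linked 0.8.x docs as best I recall and should be treated as assumptions, the host and username are placeholders:

    # Sketch only: a start callback handed to GerritStatusPush. The ticket is
    # that the "gerrit_branch" property check above never passes for this
    # callback, so it is never invoked.
    from buildbot.status.status_gerrit import GerritStatusPush

    def gerritStartCB(builderName, build, arg):
        return "Buildbot started build on %s" % builderName

    c['status'].append(GerritStatusPush("gerrit.example.com", "buildbot",
                                        startCB=gerritStartCB))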


Git(mode="copy", ..) doesn't work with multiple codebases

This ticket is a migrated Trac ticket 2821

People contributed to the original ticket: @djmitche, @unknown_contributor
Ticket created on: Jun 11 2014
Ticket last modified on: Jun 18 2014


Hello,

I am using multiple git repositories, some of which derive data from another. For instance, for a release, the changelog of the software is split into posts for the blog.

Another use case is to make performance measurements, and to commit the result in a performance web site git.

I would like to have:

Git(repo=repo1, workdir=w1, mode="copy")
Git(repo=repo2, workdir=w2, mode="copy")

and then the ability to run

ShellCommand(workdir=w2, command="xxx %s" % w1)

so that build step scripts can work across repository boundaries.

Currently, I work around these limitations by using git submodules, which my scripts update automatically, since git has no means of tracking branches of other repositories as submodules.

To me, it would be cleaner if these cross-repo scripts and builders were possible. Right now I don't have an urgent need, as I believe submodules are workable, but with the noise that the automatic submodule updates bring it's not an optimal solution, and it offers no flexibility: I cannot, for example, override the commit ids of these submodules from the outside.


Comment from: @dustin
Date: Jun 11 2014

Buildbot's source steps are intended for checkouts, not for making new commits.

That said, you can use multiple Git steps, with codebases allowing you to have those steps use different sourcestamps.

The ticket title says "Cannot", but doesn't give any evidence of why you cannot. Please re-open if that's not working for you for some reason.
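
A minimal sketch of what is suggested above, with two Git steps kept apart via codebases so each gets its own sourcestamp (repository URLs, codebase names, and workdirs here are placeholders; the schedulers also need matching codebases configured):

    # Sketch: two checkouts in one build, distinguished by codebase.
    from buildbot.process.factory import BuildFactory
    from buildbot.steps.source.git import Git

    f = BuildFactory()
    f.addStep(Git(repourl='git://example.com/repo1.git', codebase='repo1',
                  workdir='w1', mode='full', method='copy'))
    f.addStep(Git(repourl='git://example.com/repo2.git', codebase='repo2',
                  workdir='w2', mode='full', method='copy'))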


Comment from: @dustin
Date: Jun 12 2014

Ah! I just read the corresponding mailing-list thread (http://article.gmane.org/gmane.comp.python.buildbot.devel/10311)

The issue here is that Git's "copy" mode is not codebase-compatible, per http://article.gmane.org/gmane.comp.python.buildbot.devel/10314.


Comment from: @dustin
Date: Jun 12 2014

Apologies for my confusion.

buildbot.test.unit.test_scripts_start.TestStart.test_start is flaky

This ticket is a migrated Trac ticket 2760

People contributed to the original ticket: @djmitche
Ticket created on: Apr 20 2014
Ticket last modified on: Aug 03 2014


2014-04-17 18:41:54-0400 [-] --> buildbot.test.unit.test_scripts_start.TestStart.test_start <--
2014-04-17 18:42:04-0400 [-] Main loop terminated.

and

[FAIL]
Traceback (most recent call last):
  File "/var/lib/buildslave/bbot/py27-tw1110/build/master/buildbot/test/unit/test_scripts_start.py", line 109, in cb
    self.assertSubstring('BuildMaster is running', out)
  File "/var/lib/buildslave/bbot/py27-tw1110/sandbox-python2.7-Twisted==11.1.0samigr-0.7.1/lib/python2.7/site-packages/twisted/trial/unittest.py", line 421, in failUnlessSubstring
    return self.failUnlessIn(substring, astring, msg)
twisted.trial.unittest.FailTest: 'BuildMaster is running' not in "Following twistd.log until startup finished..\n\nThe buildmaster took more than 10 seconds to start, so we were unable to\nconfirm that it started correctly. Please 'tail twistd.log' and look for a\nline that says 'BuildMaster is Running' to verify correct startup.\n\n"

The test is disabled in #2748.


Make GitHub hook listen to other kinds of events as well

This ticket is a migrated Trac ticket 2887

People contributed to the original ticket: @djmitche, @sa2ajj
Ticket created on: Sep 07 2014
Ticket last modified on: Jun 22 2015


That would allow building systems around Buildbot that are aware of a particular workflow.

Cf. Bors (https://github.com/graydon/bors) and Homu (https://github.com/barosl/homu).


Comment from: @dustin
Date: Sep 07 2014

This would be a powerful improvement, if the events and reactions to them were easily configurable.
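
For context, a minimal sketch of how the GitHub change hook can be enabled (nine-style www configuration; the port and secret are placeholders):

    # Sketch: enabling the GitHub change hook in master.cfg. Values are
    # placeholders; consult the change-hooks documentation for your version.
    c['www'] = dict(
        port=8010,
        change_hook_dialects={
            'github': {
                'secret': 'XXX-webhook-secret-XXX',  # validates the webhook signature
                'strict': True,
            },
        },
    )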

GitHubStatus and HTTP Proxy Support?

Hi,

Is there a known way to make use of the GitHubStatus module if my server is sitting behind an HTTP proxy? I've also opened a ticket with the txgithub project asking the same question. So far I haven't figured out how to set up the Twisted networking used by txgithub to go through my proxy. Any help is appreciated.

Thanks

Support a single config across the cluster, or something like that

This ticket is a migrated Trac ticket 2959

People contributed to the original ticket: @djmitche, @sa2ajj, benoit.allard@...
Ticket created on: Oct 22 2014
Ticket last modified on: Feb 08 2015


The www interface shows all builders that have ever existed. This differs from eight, where only the 'active' builders were displayed on the web interface. I believe the latter is what's wanted.


Comment from: @dustin
Date: Oct 23 2014

This is a little tricky -- we don't want builders to "disappear" if the master that happens to be hosting them goes down, although we could do that now with a rather complex set of queries.

I suspect that the better plan will be to have builders disappear using the same kind of "horizon" cleanup we do for logs and builds.


Comment from: @ben
Date: Oct 23 2014

Provided this 'horizon' cleanup were implemented in 0.9.x, it would then, if I understand correctly, prevent me from looking at builders where no build has happened for some time, even if the builders are still 'active' (defined in my config).

Anyway, the trouble I'm having now is that after each reconfig where I renamed a builder, the old one stays there. Those were earlier experiments of mine, and they have no value anymore once the renamed builder is there.

I need a way to clean up my interface, and so does anyone who experiments a bit with their configuration.


Comment from: @dustin
Date: Oct 24 2014

I wonder what would be the best interface for that. The options I see are:

  • horizons: delete builders which have no active master and no builds
  • immediate disappearance: if there are no active masters with a builder configured, it disappears
  • command-line cleanup: buildbot delete-builder XXX
  • web UI: administrators can delete builders from the web UI

I'm not sure which of these options is best..


Comment from: @sa2ajj
Date: Oct 24 2014

FWIW, the last two options (the CLI and web UI ones) look the most appealing to me:

  • people do make mistakes -> give them a chance to see that something is wrong
  • in a multi-master setup it's more likely to make a mistake

Comment from: @ben
Date: Oct 25 2014

During a reconfig, we should be able to flag the dead builders as dead, shouldn't we?


Comment from: @ben
Date: Oct 25 2014

The same is valid for the slaves, by the way. Even worse, the web interface still shows the slaves as connected to the builders they were connected to (when they existed), i.e. configured_on: [{"builderid":1,"masterid":1},{"builderid":2,"masterid":1},{"builderid":3,"masterid":1}], but that slave is absolutely not configured there...

Those slaves got new names / passwords, hence disappeared, and have been replaced by other ones...


Comment from: @dustin
Date: Oct 26 2014

Reconfig happens on one master at a time, so it's still tricky at that point. Probably the worst outcome would be to have a builder deleted while its builderid was still cached in RAM in another master -- any new inserts with that ID would fail.

So, yes, I think a way to delete builders (and slaves) manually would be good.


Comment from: @ben
Date: Nov 07 2014

I believe we need to move to a next-generation multi-master configuration.

The more we move stuff into the db, the more we run into this kind of trouble, which boils down to:

We have a decentralized system without a coordinating entity (well, the db, but it's only passive).

What if we required master.cfg to always contain the whole definition, and then added a parameter to buildbot start to tell the master which one it actually is?


Comment from: @dustin
Date: Nov 07 2014

What are some good models for this, among other clustered applications?

I think your suggestion is to use the same master.cfg for all masters, with runtime flags that cause it to be interpreted differently. But, how would you enforce that? And how would you accomplish a rolling configuration update?

LatentSlaveBuilder status does not get reset to LATENT when detached

This ticket is a migrated Trac ticket 2770

People contributed to the original ticket: @djmitche, @unknown_contributor, @sa2ajj
Ticket created on: Apr 30 2014
Ticket last modified on: Sep 04 2014


When insubstantiating a latent slave, its slavebuilders' status doesn't get reset to LATENT. On subsequent builds, the SlaveBuilder list passed to nextSlave will incorrectly note the slave as IDLE instead of LATENT.


Comment from: @sa2ajj
Date: Sep 04 2014

I am not sure I understand the desired behaviour: when the build is finished, but the buildslave is not shut down, it'd show IDLE. As soon as it's down, it'd become LATENT.


Comment from: @unknown_contributor
Date: Sep 04 2014

Clarifying the description a bit more:

In the list of SlaveBuilder objects passed to nextSlave, the slave's status after finishing a build is IDLE. When a slave is insubstantiated, however, the slave's status remains IDLE. The status attribute in the SlaveBuilder object is not reset to LATENT, the state it had before the first substantiation.

Improve support for Gerrit

This ticket is a migrated Trac ticket 2890

People contributed to the original ticket: @sa2ajj
Ticket created on: Sep 09 2014
Ticket last modified on: Sep 09 2014


There are a number of things that could be improved for Gerrit support.

(And I'd need to create a ticket for this particular problem: force the version; if an installation is prepared incorrectly, Gerrit fails to report its version, which might make certain actions fail.)

List of relevant tickets

[[TicketQuery(status=new&status=assigned&status=reopened&keywords=~gerrit,group=type)]]
#2851
(missing other tickets)


MailNotifier 'all' mode does not work

When I set mode=['all'] on MailNotifier, I get an error in the log file:

2014-03-24 10:50:46-0400 [-] Configuration Errors:
2014-03-24 10:50:46-0400 [-]   mode all is not a valid mode
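
A hedged sketch of the two spellings involved; the report above suggests 'all' was accepted as a plain string but not as a list element in that version, so listing the concrete modes explicitly is a safe workaround (addresses and exact accepted values depend on the Buildbot version; see the MailNotifier docs):

    # Sketch only: an explicit list of modes instead of mode=['all'].
    from buildbot.status.mail import MailNotifier

    mn = MailNotifier(fromaddr="buildbot@example.com",
                      mode=("failing", "passing", "warnings"),
                      extraRecipients=["dev@example.com"])
    c['status'].append(mn)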

Failed FileUpload step generates a 0-sized file in the destination.

This ticket is a migrated Trac ticket 2826

People contributed to the original ticket: @djmitche, @unknown_contributor
Ticket created on: Jun 18 2014
Ticket last modified on: Jun 20 2014


I have a build step like this:

factory.addStep(FileUpload(slavesrc="file.png", workdir=dir_on_the_source, mode=0777, name="Upload", masterdest=WithProperties(...), url=WithProperties(...), haltOnFailure=False, flunkOnFailure=False, hideStepIf=lambda results, s: results == FAILURE))

Observed:
If the source file 'file.png' does not exist on the build slave, the build master will still create a new file at the destination, but the size of that file will be zero.

Also, if I remove the flunkOnFailure, haltOnFailure and hideStepIf directives, the log does not indicate a reason for the failure (e.g. "File upload failed, because the source file was not found").

Expected:

No zero-sized files should be generated in the destination and a descriptive error message should be displayed in the output.


move github_buildbot out of contrib

This ticket is a migrated Trac ticket 2828

People contributed to the original ticket: @djmitche, @sa2ajj, @ewongbb
Ticket created on: Jun 22 2014
Ticket last modified on: Jan 14 2017


This really should be a part of core Buildbot, as a web hook..


Comment from: @dustin
Date: Jun 22 2014

I especially like the addition of support for --secret, which allows Buildbot to at least trust that the request is from GitHub.


Comment from: @sa2ajj
Date: Sep 08 2014

See also #2889.


Comment from: @unknown_contributor
Date: Jan 14 2017

With master/buildbot/www/hooks/github.py, is this still necessary?

New web plugin idea: a health indicator

This ticket is a migrated Trac ticket 2966

People contributed to the original ticket: benoit.allard@..., @tardyp, @rutsky, @unknown_contributor, @sa2ajj, @rutsky
Ticket created on: Oct 24 2014
Ticket last modified on: Mar 19 2016


I like to extract as many useful indicators from my builds as possible (time, but also the number of warnings, and such).

It would be cool to have a web plugin that could plot the evolution of my indicators over time! (Of course, I would have to configure which indicators I want to see plotted, maybe the kind of plot, and so on.)


Comment from: @sa2ajj
Date: Oct 24 2014

Could you please elaborate or provide a more specific example?

I think it's related to the metrics support (http://docs.buildbot.net/latest/developer/metrics.html), but without an example I can easily be wrong :)


Comment from: @ben
Date: Oct 24 2014

I was more aiming at statistics (http://docs.buildbot.net/latest/developer/cls-buildsteps.html?highlight=statistics#buildbot.process.buildstep.BuildStep.hasStatistic), but I just realized that

Note that statistics are not preserved after a build is complete.

So metrics is probably where we want to interface with the master.

I used to abuse Properties for that purpose ...


Comment from: @tardyp
Date: Oct 24 2014

Buildbot plugin system is really made for enabling such dashboards.

A web UI plugin is not technically restricted to creating a bunch of js files; it could also create a twisted service.

To me, having the JS query the data only through the existing data api would be very inefficient. I think we could easily create a service, like a status service, that registers for a bunch of mq events and creates statistics based on them.

I also had in mind that plugins could have some tables in the DB they could use to store the data, or maybe use another db configuration with all the schema + migration stuff kept separate.


Comment from: @tardyp
Date: Oct 26 2014

on IRC, sa2ajj talked about http://graphite.readthedocs.org/

He told us that he actually uses it at work and has integration working with eight.

Looking at the documentation, the first question is how to integrate this with multi-master, as graphite has its own db called whisper.
I haven't looked too deeply, but I think this is still feasible as an external tool. It would probably be much cheaper than building our own metrics system inside buildbot.

An external graphite server could be set up to watch the (TBD) mq server. As there are messages for nearly every kind of activity that happens in buildbot, this is a good means of doing solid analysis of what is going on. Of course, this solution would not be fully integrated, as the UI would probably be external, but I think this is a possible cheap path.

@sa2ajj do you think it is possible? How would you estimate the cost of integration?
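
As a rough illustration of the "external tool" path discussed above, pushing one data point to a graphite/carbon instance only needs its plaintext protocol; the host, port, and metric name below are assumptions, and the mq-to-metric wiring is out of scope:

    # Sketch: send a single data point to carbon's plaintext listener.
    import socket
    import time

    def send_metric(name, value, host="graphite.example.com", port=2003):
        payload = "%s %s %d\n" % (name, value, int(time.time()))
        sock = socket.create_connection((host, port))
        try:
            sock.sendall(payload.encode("ascii"))
        finally:
            sock.close()

    send_metric("buildbot.builds.finished", 1)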


Comment from: @ben
Date: Oct 26 2014

There are a lot of interfaces (http://graphite.readthedocs.org/en/latest/tools.html#visualization), as well as data collectors / forwarders (same page, a bit higher up), available for graphite. It looks like some of them are js-only! Still a way to go for a www plugin!


Comment from: @sa2ajj
Date: Oct 27 2014

What I said was that we indeed use graphite, but I did not say we use it with Buildbot.

I have an oldish branch where I tried to publish metrics to graphite. However, I stopped working on it when I realised that it's not very straightforward to implement in a way that supports the multi-master case.

If there's interest, I can revive the branch or, at least, publish what I have (after rebasing onto the latest master) so others can comment on the direction my thinking took.


Comment from: @unknown_contributor
Date: Mar 19 2016

+1 for this feature. I suggest merging the statistics API with the stats module (https://github.com/buildbot/buildbot/tree/master/master/buildbot/statistics) developed last year by my GSoC student, and adding a default backend that stores a subset of these stats in the main database; that would enable the development of health/stats visualization modules installed by default (using the Highcharts js lib, for example).

[SOLVED] [question/problem] It looks like buildslave is not using http_proxy on FreeBSD.

I have a problem making buildslave use a proxy connection. Are there any tips or recommendations for running buildslave through a proxy? Using FreeBSD 10...

The error the log gives is: "failed: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.error.ConnectError'>: An error occurred while connecting: 65: No route to host."

Other programs like pkg/lynx work without problems...

buildslave --version
Buildslave version: 0.8.8
Twisted version: 13.2.0

Documentation fails to build with Sphinx >=1.2.2

Documentation fails to build with Sphinx >=1.2.2 due to some fixes in handling of paths.
(See Sphinx issue 1352, Sphinx pull request 211 and these commits: c2917a7a5fe917031c8aebee35c2cbfecc4e3e74, 542b090c2fed1f98d78d2a9764707b7e44ce7384.)

conf.py of Buildbot sets html_favicon = 'buildbot.ico', but buildbot.ico is actually in the _static subdirectory.

$ make html
sphinx-build -b html -d _build/doctrees  -q -W . _build/html

Warning, treated as error:
WARNING: favicon file 'buildbot.ico' does not exist

make: *** [html] Error 1

Fix:

--- docs/conf.py
+++ docs/conf.py
@@ -134,7 +134,7 @@
 # The name of an image file (within the static path) to use as favicon of the
 # docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
 # pixels large.
-html_favicon = 'buildbot.ico'
+html_favicon = os.path.join('_static', 'buildbot.ico')

 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,

Gitlab hook is receiving commits in the wrong order

This ticket is a migrated Trac ticket 2886

People contributed to the original ticket: @djmitche, @unknown_contributor, @sa2ajj
Ticket created on: Sep 05 2014
Ticket last modified on: Feb 03 2015


When I receive a push on the Gitlab hook with multiple commits, they are processed in the wrong order and buildbot ends up building a revision that is not the latest.

In the JSON Gitlab is sending, the last commit is the first in the array; it looks like Buildbot expects the reverse.
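
To illustrate the ordering problem, a small stand-alone sketch of normalizing the commit order from a push payload before creating changes from it (field names follow the GitLab push payload; this is an illustration, not the hook's actual code):

    # Sketch: sort the payload's commits oldest-first so the change source
    # ends up building the latest revision.
    import json

    def commits_oldest_first(payload_json):
        payload = json.loads(payload_json)
        return sorted(payload["commits"], key=lambda c: c["timestamp"])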

Thanks!


re-implement progress

This ticket is a migrated Trac ticket 2950

People contributed to the original ticket: @djmitche
Ticket created on: Oct 19 2014
Ticket last modified on: Oct 19 2014


In #2818, I'm removing support for progress as implemented via the status API. It should be re-added!

ETA needed a lot of work, anyway -- see

[[TicketQuery(order=priority,status!=closed,keywords~=eta,format=table,col=summary|owner)]]


BuildSlave keepalive_interval arg is ignored

This ticket is a migrated Trac ticket 2812

People contributed to the original ticket: @djmitche, @sa2ajj, @tomprince
Ticket created on: May 26 2014
Ticket last modified on: Feb 08 2015


In nine, we seem to be ignoring this argument -- at least, a quick grep suggests we are. Perhaps this was part of the landing of the slave-proto stuff?


Comment from: @sa2ajj
Date: Nov 25 2014

Are we talking about the BuildSlave class, or about the build slave itself?


Comment from: @dustin
Date: Feb 08 2015

master/buildbot/buildslave/protocols/pb.py:    # TODO: configure keepalive_interval in c['protocols']['pb']['keepalive_interval']
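
For reference, the configuration block that TODO points at looks roughly like this (a sketch; keepalive_interval is exactly the key the TODO says is not yet honoured):

    # Sketch: nine-style protocol configuration in master.cfg.
    c['protocols'] = {
        'pb': {
            'port': 9989,
            # 'keepalive_interval': 3600,  # not read yet, per this ticket
        },
    }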

Handling test results

This ticket is a migrated Trac ticket 2706

People contributed to the original ticket: @djmitche, jonas.pommerening+foss@...
Ticket created on: Feb 26 2014
Ticket last modified on: Jun 18 2014


Me again

I'm currently working on integrating our test results with the BuildBot web frontend and mail notifications. I discovered the "halfway there" implementation consisting of BuildStatus.addTestResult, some unfinished web views, etc., and thought now would be a good opportunity to tackle that problem.

I already whipped up a little proof-of-concept based on 0.8.8 and am now looking to clean up some rough edges and learn how to make the right extensions to nine's data api (?).
First: What's there already?

ITestResult (https://github.com/buildbot/buildbot/blob/master/master/buildbot/interfaces.py#L595): everything that is known about a single test case: its name (a tuple of strings), result, a string/text further describing the result, a dictionary (string->string) of logs.

IBuildStatus.getTestResults (https://github.com/buildbot/buildbot/blob/master/master/buildbot/interfaces.py#L577): returns all test results for the given build, keeping one ITestResult per name (where name is ITestResult.getName()).

StatusResourceBuildTest (https://github.com/buildbot/buildbot/blob/master/master/buildbot/status/web/tests.py): a web view that can display the result and logs of a single test case.
What do I want to contribute?

  1. ILogObserver implementations for incrementally parsing test result logs
    • SubunitLogObserver: already there
    • JUnitLogObserver: parse incrementally with defusedxml (I'll port my lxml-based implementation)
    • TAPLogObserver: can someone point me to a python library for that?
    • are there any other widespread formats?
  2. I want to replace the dictionary with a list (possible upgrade hazard?)
    • currently the test results are in whatever order the dictionary deems right; a list would ensure chronological order
    • it allows duplicate names; I'll have to see how to handle that in the ui. I actually think this is needed for data-driven tests (a single method, multiple datasets that are tested within the same test, multiple results).
  3. A "click-through/drill-down" web interface to navigate test results on 0.8.x (#1794)
    • linked on the build-status page in the "Result" section with the number of passed and failed tests (#536)
    • test results grouped by common prefix of the name tuple (one page per "level")
    • maybe a "proportionally split bar" style visualization of failed vs. passed tests
  4. hook it up in nine
    • I have no idea what that means, can someone point me in the right direction?

(each of these steps may come as a separate pull request on github)
Why is this important?

This may not yet be the final solution, but I think it is a step in the right direction. Buildbot is all about CI, and proper integration of test tools should be a big part of that. Right now the whole test result thing is kind of in stealth mode: I didn't know about it until last week, despite having worked with Buildbot for about 2 years. The changes I want to make would show that there is a feature like this and maybe inspire discussion and ideas for improving it.
What feedback do I need?

Can I go ahead and change getTestResults to return a list?
What changes are needed for nine?
Do you have any suggestions, things I should consider, etc?


Comment from: @dustin
Date: Feb 26 2014

This is a great idea!

The existing code for handling test results is, as far as I know, completely inoperative. So you should feel free to throw that out.

The big danger here is an explosion of data. If you have thousands of builds (as most users do) and each build has thousands of tests (as most test runs do), that's millions of test-result rows. It's a lot more for a larger-than-average installation. If we're sending a message for each new test result, that's also a lot of messages. We've done some work with log handling in nine to try to reduce this (by collapsing multiple log lines into a single database record, and compressing it).

I don't think there's a lot of sense in implementing this in 0.8.x, unless you're planning to simultaneously re-implement it in nine. At this point the branches have diverged significantly, and I can't promise to re-implement any new 0.8.x functionality in nine. This would be a substantial rewrite in nine: it will need DB connector plus tables (and new DB migrations in 0.8.x are forbidden..), plus a data API connector, methods for tests to add new test results (which will need to be asynchronous to support DB inserts), and an AngularJS implementation of the frontend.

A good place to start looking at developing for nine is in the developer section of the nine docs - see http://docs.buildbot.net/nine/


Comment from: @unknown_contributor
Date: Feb 26 2014

Some good points!

If you have thousands of builds (as most users do)
check.
and each build has thousands of tests (as most test runs do),
check
that's millions of test-result rows.
Ouch. So what to do about it? If anything is good at handling that amount of equally structured data, it should be a database, right?

It's a lot more for a larger-than-average installation. If we're sending a message for each new test result, that's also a lot of messages.
We've done some work with log handling in nine to try to reduce this (by collapsing multiple log lines into a single database record, and compressing it).

This, indeed, sounds problematic. But wait, you're storing the logs in the database? And you were doing it line-by-line before? Also, "messages": is that related to nine's MQ stuff?

My impression was that it worked something like this:

  • Slave watches logfile
  • Slave sends chunks to the master
  • LogObservers get updated with received chunks
  • Master pipes chunks to a file
  • Master compresses logfile, when step is done
  • file is read from disk and decompressed when it's needed

But before I bother you with any more misinformed assumptions, I'll have a look at the nine docs ;)


Comment from: @dustin
Date: Feb 26 2014

In 0.8.x, logs are stored as netstrings in the filesystem, with one file per log. The naive way to port that to the DB, and still allow per-line access, would be to put each line in a DB row. And yes, messages are related to nine's MQ stuff.

If this test support is optional, meaning that users won't find themselves surprised by a 20GB sqlite DB and a slow master, then it might be OK to go ahead and implement this in a fairly naive fashion, maybe a table with columns (buildid, testname, result, logid, logline). If that proves inefficient, we can revisit and optimize.
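
A sketch of the naive table described above, written out with SQLAlchemy; the column types and constraints are guesses, not an agreed schema:

    # Sketch only: the (buildid, testname, result, logid, logline) table.
    import sqlalchemy as sa

    metadata = sa.MetaData()

    test_results = sa.Table(
        'test_results', metadata,
        sa.Column('buildid', sa.Integer, nullable=False),
        sa.Column('testname', sa.Text, nullable=False),
        sa.Column('result', sa.Integer, nullable=False),  # SUCCESS/FAILURE/...
        sa.Column('logid', sa.Integer),
        sa.Column('logline', sa.Integer),
    )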

master does not reconnect to PostgreSQL when database server is restarted

This ticket is a migrated Trac ticket 2753

People contributed to the original ticket: michael+buildbot@..., @djmitche
Ticket created on: Apr 09 2014
Ticket last modified on: Jun 18 2014


On every PostgreSQL upgrade, I need to restart buildbot. I don't think this should be necessary; instead, buildbot should simply reconnect :).

Even worse, the failure mode is pretty horrible. The git_buildbot.py script doesn't output anything and you simply don't get any new builds.

Here is a stack trace of the problem:

2014-04-09 22:19:48+0200 [Broker,7,79.140.39.201] perspective_addChange called
2014-04-09 22:19:48+0200 [-] Peer will receive following PB traceback:
2014-04-09 22:19:48+0200 [-] Unhandled Error
        Traceback (most recent call last):
          File "/usr/lib/python2.7/threading.py", line 783, in __bootstrap
            self.__bootstrap_inner()
          File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
            self.run()
          File "/usr/lib/python2.7/threading.py", line 763, in run
            self.__target(*self.__args, **self.__kwargs)
        --- <exception caught here> ---
          File "/usr/lib/python2.7/dist-packages/twisted/python/threadpool.py", line 191, in _worker
            result = context.call(ctx, function, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 118, in call[[WithContext]]
            return self.currentContext().call[[WithContext]](ctx, func, *args, **kw)
          File "/usr/lib/python2.7/dist-packages/twisted/python/context.py", line 81, in call[[WithContext]]
            return func(*args,**kw)
          File "/usr/lib/python2.7/dist-packages/buildbot/db/pool.py", line 184, in __thd
            rv = callable(arg, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/buildbot/db/users.py", line 40, in thd
            rows = conn.execute(q).fetchall()
          File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 720, in execute
            return meth(self, multiparams, params)
          File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 317, in _execute_on_connection
            return connection._execute_clauseelement(self, multiparams, params)
          File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 817, in _execute_clauseelement
            compiled_sql, distilled_params
          File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 947, in _execute_context
            context)
          File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1108, in _handle_dbapi_exception
            exc_info
          File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 185, in raise_from_cause
            reraise(type(exception), exception, tb=exc_tb)
          File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 940, in _execute_context
            context)
          File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 435, in do_execute
            cursor.execute(statement, parameters)
        sqlalchemy.exc.OperationalError: (OperationalError) SSL connection has been closed unexpectedly
         'SELECT users_info.uid \nFROM users_info \nWHERE users_info.attr_type = %(attr_type_1)s AND users_info.attr_data = %(attr_data_1)s' {'attr_data_1': u'Michael Stapelberg <[email protected]>', 'attr_type_1': u'git'}

Let me know if you need any other information. Thanks.


Comment from: @dustin
Date: Apr 17 2014

It looks like the postgres dialect needs automatic-reconnection functionality similar to that already in place for MySQL.

http://docs.sqlalchemy.org/en/rel_0_8/core/pooling.html#disconnect-handling-pessimistic
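
The "pessimistic" recipe from that page boils down to pinging each connection as it is checked out of the pool and discarding dead ones, roughly:

    # Sketch of SQLAlchemy's pessimistic disconnect handling: a restarted
    # PostgreSQL server then only costs a retried checkout.
    from sqlalchemy import event, exc
    from sqlalchemy.pool import Pool

    @event.listens_for(Pool, "checkout")
    def ping_connection(dbapi_connection, connection_record, connection_proxy):
        cursor = dbapi_connection.cursor()
        try:
            cursor.execute("SELECT 1")
        except Exception:
            # tells the pool to drop this connection and try another
            raise exc.DisconnectionError()
        finally:
            cursor.close()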

buildmaster doesn't resume when slave unpauses

This ticket is a migrated Trac ticket 2931

People contributed to the original ticket: vlovich@..., @djmitche
Ticket created on: Oct 08 2014
Ticket last modified on: Oct 30 2014


If a slave unpauses, the buildmaster doesn't resume:

  1. Pause the slave. Wait for no jobs being queued on the builder/buildslave.
  2. Submit a job.
  3. Unpause the slave

Expected: job continues running.
Actual: job remains paused until another job is submitted or the buildmaster is reconfig'ed.


Comment from: @dustin
Date: Oct 30 2014

It sounds like the fix here is for the unpausing to call maybeStartBuildsForSlave.

Git: fetch all branches before switching to the given revision, if branch=None

This ticket is a migrated Trac ticket 2750

People contributed to the original ticket: @djmitche, @sa2ajj
Ticket created on: Apr 07 2014
Ticket last modified on: Sep 09 2014


See #1074

branch=None is a bit of a degenerate case, referring vacuously to the "default branch", which in Git really means master. However, it's also the default, and there's no reason to make that default fail in the interest of an optimization (not fetching every commit).


Comment from: @sa2ajj
Date: Sep 08 2014

Maybe the meaning of branch=None should be reconsidered:

  • if revision is not specified -> default branch
  • if a revision is specified -> fetch that particular revision and set it as the one to work on

Comment from: @dustin
Date: Sep 09 2014

Maybe I misunderstand, but you can't instruct git to fetch a specific revision from a remote repository -- only to fetch some named revision (and its parents).


Comment from: @sa2ajj
Date: Sep 09 2014

Maybe, I misremembered something.

I'll investigate and update the ticket.


Comment from: @sa2ajj
Date: Sep 09 2014

It looks like

git fetch <remote> <SHA-1>:FETCH_HEAD

works just fine.

$ git --version
git version 2.1.0

Comment from: @dustin
Date: Sep 09 2014

Hah, well, I stand corrected!

buildbot.test.integration.test_try_client.Schedulers.test_userpass_wait is flaky

This ticket is a migrated Trac ticket 2762

People contributed to the original ticket: @djmitche, @sa2ajj, @unknown_contributor
Ticket created on: Apr 20 2014
Ticket last modified on: Aug 27 2015


===============================================================================
[ERROR]
Traceback (most recent call last):
Failure: twisted.trial.util.DirtyReactorAggregateError: Reactor was unclean.
DelayedCalls: (set twisted.internet.base.DelayedCall.debug = True to debug)
<DelayedCall 0x80c0d3b90 [3599.75786614s] called=0 cancelled=0 BuildSlave.doKeepalive()>

buildbot.test.integration.test_try_client.Schedulers.test_userpass_wait
===============================================================================
[ERROR]
Traceback (most recent call last):
Failure: twisted.trial.util.DirtyReactorAggregateError: Reactor was unclean.
Selectables:
<Broker #0 on 10145>

buildbot.test.integration.test_try_client.Schedulers.test_userpass_wait

Disabled in #2748.


Comment from: @dustin
Date: Apr 11 2015

Fixed in #3246.


Comment from: @unknown_contributor
Date: Aug 15 2015

http://blackjml.livejournal.com/23029.html may describe a better fix.


Comment from: @sa2ajj
Date: Aug 26 2015

Should we port this to nine?


Comment from: @dustin
Date: Aug 27 2015

For the most part, we already do wait for connections to stop.

But looking at the code here, the client's cleanup is just:

    def cleanup(self, res=None):
        if self.buildsetStatus:
            self.buildsetStatus.broker.transport.loseConnection()

with no waiting for that to complete.


Comment from: @dustin
Date: Aug 27 2015

The test seems to fail (hang) repeatably now, actually:

612 2015-08-27 09:36:09-0400 [-] Unhandled error in Deferred:
613 2015-08-27 09:36:09-0400 [-] Unhandled Error
614 Traceback (most recent call last):
615   File "/home/dustin/code/buildbot/t/buildbot/sandbox/lib/python2.7/site-packages/twisted/internet/base.py", line 1192, in run
616     self.mainLoop()
617   File "/home/dustin/code/buildbot/t/buildbot/sandbox/lib/python2.7/site-packages/twisted/internet/base.py", line 1201, in mainLoop
618     self.runUntilCurrent()
619   File "/home/dustin/code/buildbot/t/buildbot/sandbox/lib/python2.7/site-packages/twisted/internet/base.py", line 824, in runUntilCurrent
620     call.func(*call.args, **call.kw)
621   File "/home/dustin/code/buildbot/t/buildbot/sandbox/lib/python2.7/site-packages/twisted/internet/task.py", line 218, in __call__
622     d = defer.maybeDeferred(self.f, *self.a, **self.kw)
623 --- <exception caught here> ---
624   File "/home/dustin/code/buildbot/t/buildbot/sandbox/lib/python2.7/site-packages/twisted/internet/defer.py", line 141, in maybeDeferred
625     result = f(*args, **kw)
626 exceptions.TypeError: <lambda>() takes no arguments (1 given)

Comment from: @dustin
Date: Aug 27 2015

That was a bug in the tests.

The RemoteBuildRequest.remote_subscribe method tries to subscribe to builders.<builderid>.builds which doesn't exist. In other words, this code is pretty badly broken.


Comment from: @dustin
Date: Aug 27 2015

OK, I added a subscription for that pattern, and I'm trying to reproduce the flake.


Comment from: @dustin
Date: Aug 27 2015

#1828

Automatic link from log file to source code

This ticket is a migrated Trac ticket 2926

People contributed to the original ticket: @unknown_contributor, @djmitche
Ticket created on: Oct 06 2014
Ticket last modified on: Oct 07 2014


Log files often contain references to source code.
(E.g. compiler errors and warnings start with file:line:)

It would be a great help if those could be formatted as HTML links to a source code viewer.

Often the source code is already available online, so just the link has to be generated.
The WarningCountingShellCommand already tracks the working directory and extracts the file name and line number from warnings, so we already have all the information for a proper link.

I propose to add a callback function, similar to the 'revlink' config, to WarningCountingShellCommand, which returns a URL for a given warning. The generated HTML log file should then include this link.
Extra points when we can support the same for error messages! :-)
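
A sketch of what such a callback could look like; the function signature and the warningURL keyword are hypothetical, only the revlink-style idea comes from the ticket:

    # Hypothetical sketch: map a (path, line) pair extracted from a compiler
    # warning to a source viewer URL. Neither 'warningURL' nor this signature
    # exists in Buildbot; they illustrate the proposal above.
    def warning_url(workdir, path, lineno):
        return "https://cgit.example.com/project/tree/%s#n%d" % (path, lineno)

    # e.g. WarningCountingShellCommand(..., warningURL=warning_url)  # proposed, not real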


Comment from: @dustin
Date: Oct 06 2014

Cool idea! Does "I propose" mean you're working on this?


Comment from: @unknown_contributor
Date: Oct 07 2014

Well, I'm interested in this, but I don't know when I can dedicate some time to it.

Flexibility to Stop or Terminate LatentSlave as load drops

This ticket is a migrated Trac ticket 2732

People contributed to the original ticket: @djmitche, @unknown_contributor
Ticket created on: Mar 21 2014
Ticket last modified on: Jun 18 2014


In some scenarios we should stop the latent slave instead of always terminating it when the timeout expires or load drops. Having the instances stopped instead of terminated is useful when you are using EBS volumes and want to keep them mounted for the life of the instance and reuse the instance for long periods of time. This can greatly reduce the startup time of the instance, since it does not have to build the volume from the snapshot. We can ask users (as a parameter) whether they want the instance terminated or stopped when load goes down.


Comment from: @dustin
Date: Apr 17 2014

This should be an option for the EC2LatentSlave support.

Add forward-compatibility with Python 3.3+

This ticket is a migrated Trac ticket 2860

People contributed to the original ticket: @unknown_contributor, @djmitche, northlomax@...
Ticket created on: Aug 05 2014
Ticket last modified on: Feb 13 2016


Buildbot is Python-2.x only because Twisted is, but Twisted is making great progress toward 3.x.

When that happens, we should be ready to run on Python-3.x.

We can start now by running the code through 2to3 and fixing the easy, backward-compatible stuff -- print -> print(), xrange -> range, etc.

We can also run tests with the -3 flag, which will warn about non-forward-portable stuff (unfortunately, it warns about a lot of non-Buildbot code, too!)

We can use the six package for a lot of this, and factor out some utility functions for other problematic bits.
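
A small example of the six-based style this would converge on (print_function plus six helpers):

    # Sketch: code written so it runs unchanged on Python 2 and 3.
    from __future__ import print_function

    from six import text_type
    from six.moves import range

    def count_to(n):
        for i in range(n):            # xrange semantics on Python 2
            print(text_type(i))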


Comment from: @dustin
Date: Feb 13 2016

We made great progress on this last year, but as somewhat expected, got caught up in porting Twisted. I don't think there's enough left here for a GSoC project.

Modernize Dockerfile

This ticket is a migrated Trac ticket 2937

People contributed to the original ticket: @unknown_contributor, @sa2ajj
Ticket created on: Oct 10 2014
Ticket last modified on: Nov 19 2014


Docker evolved quite quickly before reaching 1.0, and the Dockerfile in master/contrib/Dockerfile no longer follows best practices. According to GitHub discussions, it:

  • does not publish master's web status port (8010)
  • creates an ssh service (omg this is so 2013!)
  • does not expose the config in a separated volume
  • uses echo > /path/to/config instead of ADD command
  • hardcodes the password

Someone should read the current Docker docs and come up with a better solution. It looks like tardyp will be able to review it.


TLS support for BuildMaster <--> BuildSlave

This ticket is a migrated Trac ticket 2880

People contributed to the original ticket: @djmitche, @sa2ajj, @unknown_contributor
Ticket created on: Aug 28 2014
Ticket last modified on: Aug 30 2016


Currently BuildMaster and BuildSlave communicate over plain TCP/IP.

Some people find it rather risky to use Buildbot.

A few links to the relevant Twisted Python documentation:

  • Getting Connected With Endpoints: http://twistedmatrix.com/documents/current/core/howto/endpoints.html
  • the serverFromString function: https://twistedmatrix.com/documents/14.0.0/api/twisted.internet.endpoints.serverFromString.html
  • the clientFromString function: https://twistedmatrix.com/documents/14.0.0/api/twisted.internet.endpoints.clientFromString.html

Comment from: @sa2ajj
Date: Aug 28 2014

Related ticket #2129


Comment from: @dustin
Date: Aug 30 2014

I expect this is possible with correct configuration, but I haven't tried that, nor do I know offhand what that configuration is.

So this may be as simple as a documentation fix.


Comment from: @dustin
Date: Sep 13 2014

A brief bit of experimentation shows that on the master,

c['slavePortnum'] = 'ssl:9999:privateKey=/tmp/key.pem:certKey=/tmp/crt.pem'

works.

However, the slave is not so lucky -- it instantiates an internet.TCPClient directly, using separate host and port parameters.

So this will need some code change on the slave in order to support TLS connections.
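
The slave-side change could, for example, build its connection from an endpoint description instead of a bare TCPClient; a sketch using the clientFromString function linked earlier (host, port, and factory wiring are placeholders, not the actual buildslave code):

    # Sketch: connect a PB client over TLS using a Twisted endpoint string.
    from twisted.internet import reactor
    from twisted.internet.endpoints import clientFromString
    from twisted.spread import pb

    factory = pb.PBClientFactory()
    endpoint = clientFromString(reactor, "ssl:host=master.example.com:port=9989")
    d = endpoint.connect(factory)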


Comment from: @unknown_contributor
Date: Aug 30 2016

Working on it this week !

Dependent scheduler gets wrong sourcestamp if builds merged?

This ticket is a migrated Trac ticket 2917

People contributed to the original ticket: @djmitche, @unknown_contributor
Ticket created on: Sep 25 2014
Ticket last modified on: Oct 30 2014


As I was considering using the Dependent scheduler, I ran across a problem which
caused one team to stop using it,
https://code.google.com/p/chromium/issues/detail?id=53226

It's an old bug, probably in 0.7.12, but since Dependent is outdated, maybe it hasn't changed.

The writeup is pretty good; it says in part

"8:16:46 Chromium Builder (dbg) finishes build 24647. A build is created
which merges the 2 pending requests (or the master calls maybeStartBuild
which checks to see if the 2 requests can be merged and does so). A new
source stamp is now used for Chromium Builder (dbg) which starts build 24648.
The dependent scheduler is not updated.

8:49:44 XP Tests (dbg)(1) starts build 28227 based on the original source
stamp created at 7:47:01. This uses revision r59371, though the dependent
scheduler was fired because Chromium Builder (dbg) finished build 24647
(for revision r59373). The extract build step tries to grab r59371, fails,
grabs the unnumbered .zip. At this point, the console will give wrong
results.

The key is that the tester's source stamp is not actually corrupted or
incorrectly changed anywhere, it's just not updated to reflect its
dependency's source stamp."


Comment from: @dustin
Date: Oct 30 2014

Can you reproduce this with 0.8.9? Codebases (and a lot besides) have been added since 0.7.12, and I think those may cover this particular issue -- although I don't entirely understand the description of the issue. So a reproduction recipe would help.

Inform the user if wrong SQLAlchemy is installed

Steps to reproduce:

  1. Install buildbot master
  2. Install another thing that updates sqlalchemy
  3. Restart buildbot master

There is no error message, just
The buildmaster took more than 10 seconds to start, so we were unable to
confirm that it started correctly. Please 'tail twistd.log' and look for a
line that says 'configuration update complete' to verify correct startup.

Installing the correct SQLAlchemy version (0.6) fixes the issue, but the log was empty. The output or the log should provide some clue about what's happening.
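
One way to give the user a clue, sketched here as a startup check; the version bound and where such a check would live are assumptions:

    # Sketch: fail fast with a readable message when an incompatible
    # SQLAlchemy is installed, instead of hanging silently at startup.
    import sys

    import sqlalchemy

    def check_sqlalchemy(maximum=(0, 7)):
        version = tuple(int(p) for p in sqlalchemy.__version__.split(".")[:2])
        if version > maximum:
            sys.stderr.write(
                "buildbot requires SQLAlchemy <= %d.%d, found %s\n"
                % (maximum[0], maximum[1], sqlalchemy.__version__))
            sys.exit(1)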

Buildbot does not consistently do logging

This ticket is a migrated Trac ticket 2839

People contributed to the original ticket: @djmitche, @sa2ajj, @unknown_contributor
Ticket created on: Jul 20 2014
Ticket last modified on: Feb 03 2015


In many places a log.{msg,err} (log is imported from twisted.python) is used to log various information that might be useful later.

However, there's a significant chunk of code that uses print to produce all kinds of debug/error output.

I'm not sure this is a major ticket; I just wonder if there are any guidelines about how logging should be performed, and whether there's a need to convert those prints to either log.msg or log.err.


Comment from: @dustin
Date: Jul 20 2014

print in a twistd environment ends up in the logfile.

However, in production code we should always be logging, not printing. Based on a quick grep, I see that we almost always adhere to this standard. Legitimate exceptions are:

  • scripts and clients like 'create-master' or 'debug' that actually want to produce stdout
  • contrib (it can do whatever it wants)

Cleaning up any other occurrences would be a good fix.
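
For reference, the pattern being asked for (twisted's log module instead of print; the function body is illustrative):

    # Sketch: preferred logging style in production code, per the comment above.
    from twisted.python import log

    def process_change(change):
        log.msg("processing change %r" % (change,))
        try:
            if not change:
                raise ValueError("empty change")
        except Exception:
            log.err(None, "while processing change")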

Forced build never completes

This ticket is a migrated Trac ticket 2813

People contributed to the original ticket: buildbot@..., @djmitche
Ticket created on: May 27 2014
Ticket last modified on: Jun 18 2014


It appears that a forced build will never complete if a build is forced on a build slave with max_builds=1 and that slave is already running a build. In the most recent case where I finally realized what might be happening, I was forcing a run of another build that runs on the same slave.

The build master appears to actually trigger the secondary build (which it shouldn't do because max_builds=1) but then ignores all output from that slave. The slave finishes but the master ignores its results and, eventually, all builds will stall once that slave is scheduled for another build. I can't guarantee that the next scheduled build is what triggers this state but the master will definitely get locked up at some point.

Once the system is in this state, the only way to fix it is to stop and restart the master.


Comment from: @dustin
Date: May 27 2014

Can you include some twistd.log output from the master to indicate what you're seeing? I'm not sure how it could trigger a build and ignore the output..

support retryOnFailure

This ticket is a migrated Trac ticket 2875

People contributed to the original ticket: @djmitche, @unknown_contributor, @sa2ajj
Ticket created on: Aug 18 2014
Ticket last modified on: Feb 08 2015


The metabuildbot's virtualenv step often fails on certain buildslaves. In fact, it's never useful information when that step fails.

It would be great if failures in that step were automatically retried.

But there's no way to indicate that a step's failure should be turned into a RETRY status for the build (flunkOnFailure turns it into FAILURE, but not RETRY).


Comment from: @sa2ajj
Date: Aug 18 2014

What is the reason for the virtualenv step to fail in the first place?


Comment from: @dustin
Date: Aug 20 2014

Often because there's something wrong with the Python binary, or because npm seems to be unreliable.


Comment from: @unknown_contributor
Date: Oct 26 2014

How many retries should we do? One may not be enough. And yes, what is the result status in case of failure after the retries are exhausted?


Comment from: @dustin
Date: Oct 28 2014

The number of retries might well be configurable (and perhaps tracked in a property). And, when that runs out, I think that FAILURE is the appropriate status.
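
A small, buildbot-agnostic sketch of the semantics described above (the result codes follow buildbot's numeric constants; the retry wrapper itself is hypothetical):

    # Sketch of the proposed behaviour: retry a flaky action a configurable
    # number of times, then settle on FAILURE once retries are exhausted.
    SUCCESS, FAILURE = 0, 2  # buildbot's numeric result codes

    def run_with_retries(run_once, retries=3):
        for attempt in range(retries + 1):
            if run_once() == SUCCESS:
                return SUCCESS
        return FAILURE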

Use chardet on incoming bytestrings

This ticket is a migrated Trac ticket 2757

People contributed to the original ticket: @djmitche
Ticket created on: Apr 15 2014
Ticket last modified on: Apr 15 2014


There's a library, chardet, which can do a reasonable job of guessing the charset of a bytestring.

There are a number of places in Buildbot where incoming data is a bytestring. Most of those allow the user to specify an encoding, and default to UTF-8. For example, change sources generally get bytestrings for commit comments, authors, and so on.

In the default case, it may be more convenient for users if we dynamically detect the character encoding of these strings. This would amount to "doing the right thing" when possible, with the fallback option for users to supply an explicit encoding.

Chardet would also be useful in the ascii2unicode method, which currently only allows ascii bytestrings. Then a little mojibake is the unlikely worst case, rather than an exception.
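
The kind of fallback being proposed, as a short sketch (chardet's detect() returns a dict with 'encoding' and 'confidence' keys):

    # Sketch: decode an incoming bytestring using an explicit encoding when
    # given, otherwise guess with chardet and fall back to UTF-8.
    import chardet

    def to_unicode(data, encoding=None):
        if encoding is None:
            guess = chardet.detect(data)
            encoding = guess.get("encoding") or "utf-8"
        return data.decode(encoding, "replace")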


Source steps should be made codebase-aware (in copy mode)

This ticket is a migrated Trac ticket 2980

People contributed to the original ticket: @djmitche, benoit.allard@...
Ticket created on: Oct 28 2014
Ticket last modified on: Sep 06 2015


When in copy mode, the source is first checked-out under 'source', and then copied to the right directory.

When multiple codebases are used this way, they fight for the 'source' directory. It is already fixed in SVN by using the source/' directory. It would be good to unify this for all of the steps.


document BuildStep renderables variable

This ticket is a migrated Trac ticket 2929

People contributed to the original ticket: vlovich@..., @djmitche, @sa2ajj
Ticket created on: Oct 07 2014
Ticket last modified on: Aug 16 2016


Documentation for the 'renderables' property, which is needed for writing custom steps, is missing. Perhaps add a description in the buildbot customization section?


Comment from: @dustin
Date: Oct 08 2014

Good idea


Comment from: @sa2ajj
Date: Aug 16 2016

I'm curious why http://docs.buildbot.net/latest/manual/cfg-properties.html#using-properties-in-steps and http://docs.buildbot.net/latest/manual/cfg-properties.html#custom-renderables are not good enough documentation.

Is it not easy to find it? Or something else?


Comment from: @unknown_contributor
Date: Aug 16 2016

Something else. This refers to the "renderables" property on the BuildStep class that you extend for custom steps. BuildBot reads this variable to render those properties on the custom step before invoking run. There was no documentation describing this process (or there wasn't when I wrote the bug).
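
The mechanism being described, as a short sketch of a custom step (the attribute name 'greeting' is illustrative):

    # Sketch: declaring an attribute in `renderables` makes Buildbot render it
    # (resolving Interpolate/Property placeholders) before the step runs.
    from buildbot.process.buildstep import BuildStep

    class AnnounceStep(BuildStep):
        renderables = ['greeting']  # rendered before the step is started

        def __init__(self, greeting=None, **kwargs):
            self.greeting = greeting
            BuildStep.__init__(self, **kwargs)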

handle step interruption properly

This ticket is a migrated Trac ticket 2703

People contributed to the original ticket: @djmitche, @tardyp
Ticket created on: Feb 20 2014
Ticket last modified on: Feb 08 2015


There are some subtleties in handling step interrupts.

For shell commands this works, but for master-side steps it's more complex;
Trigger especially would need special attention.

<tardyp> djmitche, in the case of a new style step, how are you supposed to interrupt a step?
 tardyp converting Trigger to new style..
<+djmitche> tardyp: has that changed?
<tardyp> well if I run interrupt(), we are in the middle of run(), which would be yielding someting
<tardyp> old style step would just call self.finished() so can skip waiting
<+djmitche> interrupt will also interrupt a running command
<+djmitche> so this is only a problem when run() isn't executing a command
<tardyp> I realize that trigger's interrupt is badly written
<tardyp> it should cancel the requests
<tardyp> and stop the builds
<+djmitche> yes
<+djmitche> there should be some instructions for step authors who are not just executing commands to poll self.stopped or something like that
<tardyp> the first is easy, end the second need the stop rpc
<+djmitche> also, initiating a new command after an interrupt should immediately fail
<+djmitche> I don't think it necessarily should stop the triggered builds
<tardyp> well, we have hacks to do so, as my user were requesting this loud
<+djmitche> what if those requests got merged?
<tardyp> it make sense to release the buildfarm, when like use a single build can trigger douzens of parralel builds
<tardyp> the algo is to claim_cancel the buildrequests first
<tardyp> then if the buildrequests are claimed, get to the build status and stop it
<+djmitche> I suppose if we're collapsing requests instead, then if the request was collapsed, you just don't cancel a build
<tardyp> so I believe this works with the collapsing
<+djmitche> yeah
<tardyp> if it is callapsed then its claimed
<+djmitche> as long as it's via the data API I think that's OK :)
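
For step authors not running a shell command, the polling mentioned above could look roughly like this (a new-style step sketch; the work items and loop body are illustrative):

    # Sketch: a new-style step that does master-side work in chunks and checks
    # self.stopped between chunks so interrupt() can take effect.
    from twisted.internet import defer
    from buildbot.process.buildstep import BuildStep
    from buildbot.process.results import CANCELLED, SUCCESS

    class PoliteMasterStep(BuildStep):
        @defer.inlineCallbacks
        def run(self):
            for item in self.getProperty('work_items') or []:
                if self.stopped:          # set by interrupt()
                    defer.returnValue(CANCELLED)
                yield self.do_one(item)   # placeholder for the real work
            defer.returnValue(SUCCESS)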

Documentation fails to build

Documentation of BuildBot fails to build.
I use Sphinx 1.2.1.

$ make html
sphinx-build -b html -d _build/doctrees  -q -W . _build/html

Warning, treated as error:
/tmp/buildbot/master/docs/manual/installation.rst:657: WARNING: Malformed option description u'Can also be passed directly to the BuildSlave constructor in buildbot.tac.  If', should look like "-opt args", "--opt args" or "/opt args"

Makefile:60: recipe for target 'html' failed
make: *** [html] Error 1

Fix for first error:

--- master/docs/manual/installation.rst
+++ master/docs/manual/installation.rst
@@ -655,6 +655,7 @@
     around. The default is 10.

 .. option:: --allow-shutdown
+
     Can also be passed directly to the BuildSlave constructor in buildbot.tac.  If
     set, it allows the buildslave to initiate a graceful shutdown, meaning that it
     will ask the master to shut down the slave when the current build, if any, is

Provide latent Docker build slave

This ticket is a migrated Trac ticket 2956

People contributed to the original ticket: @sa2ajj, benoit.allard@...
Ticket created on: Oct 22 2014
Ticket last modified on: Nov 26 2014


There are a number of options already implemented:

(There's one more that I can't find atm)


Comment from: @ben
Date: Oct 22 2014

I'm slowly wrapping my head around that concept, was wondering what advantage it would bring, or how could we integrate Docker as a slave or such ... What would help me most would be a blog post to lay down some basis I guess, then, once I got a clear idea of the possible links between Docker and Buildbot, I could try to think about binding them as a Latent slave ...

In that direction, the drone project ([[drone.io|https://drone.io/]]) is based around Docker slaves: they provide some base images, and you just tell your config which one you want to use ... Something like that?


Comment from: @sa2ajj
Date: Oct 23 2014

Latent build slaves have a special feature: a reproducible environment, which is given as a parameter:

  • AMI specification for [[AWS|http://docs.buildbot.net/latest/manual/cfg-buildslaves-ec2.html]]
  • base_image for [[libvirt|http://docs.buildbot.net/latest/manual/cfg-buildslaves-libvirt.html]]
  • image for [[OpenStack|http://docs.buildbot.net/latest/manual/cfg-buildslaves-openstack.html]]

The "normal" build slaves would have to create such an environment (e.g. [[Buildbot's builder configuration|https://github.com/buildbot/metabbotcfg/blob/master/builders.py#L27]]).

I agree though that either a blog post or an addition to the documentation would be helpful for more people to start using this kind of build slaves.
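
For reference, this is roughly how the image parameter appears in a master.cfg for the EC2 case (the AMI ID and credentials below are placeholders):

from buildbot.buildslave.ec2 import EC2LatentBuildSlave

c['slaves'] = [
    EC2LatentBuildSlave(
        'ec2-slave-1', 'slavepass', 'm1.large',
        ami='ami-12345678',                  # the reproducible environment
        identifier='AWS_ACCESS_KEY_ID',      # placeholder credentials
        secret_identifier='AWS_SECRET_ACCESS_KEY',
    ),
]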


Comment from: @ben
Date: Oct 23 2014

So you mean something like taking that class into core buildbot: https://github.com/elbaschid/dockbot/blob/make_single_command_executable/dockbot/slaves/dockerslave.py, or do you mean something more there ? (i.e. being able to install the minimal slave stuff into any base image ?)


Comment from: @sa2ajj
Date: Oct 23 2014

Yes, I am talking about taking that particular class (with amendments since it seems to do a bit too much with respect to parameters) and adding a test case.


Comment from: @ben
Date: Oct 24 2014

Looking forward to it !

I'm struggling with nine for the moment, but as soon as I have it working, I could have a look at that!


Comment from: @ben
Date: Oct 28 2014

I started looking into it; unfortunately, I hit a few blockers with master (#2970).

A question I had is about the completeness of the Docker image. Should we run create-slave + start at each substantiation? Or should the image have a pre-created slave that is only started during substantiation?

In other words: should the Docker image be buildbot-aware, or should any (well, with limitations) Docker image be made buildbot-aware?


Comment from: @sa2ajj
Date: Oct 28 2014

Replying to [[Ben|comment:6]]:

I started looking into it; unfortunately, I hit a few blockers with master (#2970).
Let's see how it's resolved.

A question I had is about the completeness of the Docker image.
Should we run create-slave + start at each substantiation?
Or should the image have a pre-created slave that is only started during substantiation?
I think the best is to make all latent build slaves behave in a similar way; this means that it is the responsibility of the user to create an image with a build slave in it, as well as to provide a way for the build slave to authenticate against the master:

  • some latent build slaves have this information "hardcoded", that is: one image can only provide one build slave
  • the majority of images I saw use environment variables or a bootstrap script to configure the authentication

In other words: should the Docker image be buildbot-aware, or should any (well, with limitations) Docker image be made buildbot-aware?
I think the former: that's in line with, for example, AWS images.
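
For illustration, the declaration being discussed could end up looking much like the other latent slaves; the class name and parameters below are hypothetical at this point, not a released API:

from buildbot.buildslave.docker import DockerLatentBuildSlave  # hypothetical module

c['slaves'] = [
    DockerLatentBuildSlave(
        'docker-slave-1', 'slavepass',
        docker_host='tcp://localhost:2375',   # where the Docker daemon listens
        image='example/buildslave:latest',    # image with a build slave baked in
    ),
]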


Comment from: @ben
Date: Nov 10 2014

Been working on that over the weekend. With nice results !

One trouble: we need to hardcode the master hostname, the slave name and the password in the Dockerfile. For the master name it would be good to use the --add-host CLI parameter; unfortunately I found no way to pass it through the Docker API :( Probably worth a bug against Docker, or the doc I found is not up to date.

I would also love to add the possibility of building an image if the requested one is not found, probably by giving a Dockerfile as a parameter to the slave. This would mean no preliminary setup is required on the Docker side.

I still need to write the docs and add some tests, and then we're ready to go!


Comment from: @sa2ajj
Date: Nov 10 2014

I do not think we need to hardcode anything.

A common Docker approach is to pass parameters via environment variables. In the case of a latent Docker slave, we'd have to pass four parameters (names are examples):

  • BUILDBOT_MASTER_HOST
  • BUILDBOT_MASTER_PORT
  • BUILDBOT_SLAVE_NAME
  • BUILDBOT_SLAVE_PASSWORD

and modify buildbot.tac to use these parameters when they are present.
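
A minimal sketch of what such a buildbot.tac could look like (the variable names follow the list above; the fallback values are placeholders):

import os

from twisted.application import service
from buildslave.bot import BuildSlave

basedir = '.'
application = service.Application('buildslave')

# Values written by `buildslave create-slave` act as the defaults; the
# environment variables override them when present (names are examples).
buildmaster_host = os.environ.get('BUILDBOT_MASTER_HOST', 'localhost')
port = int(os.environ.get('BUILDBOT_MASTER_PORT', '9989'))
slavename = os.environ.get('BUILDBOT_SLAVE_NAME', 'docker-slave')
passwd = os.environ.get('BUILDBOT_SLAVE_PASSWORD', 'changeme')
keepalive = 600

s = BuildSlave(buildmaster_host, port, slavename, passwd, basedir, keepalive)
s.setServiceParent(application)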


Comment from: @sa2ajj
Date: Nov 10 2014

And, by the way, this modification can go to master: buildslave create-slave can write the provided values so that they are used as defaults, while the above-mentioned environment variables would override these defaults.


Comment from: @ben
Date: Nov 10 2014

See GH:1344.


Comment from: @sa2ajj
Date: Nov 10 2014

As agreed, the environment variables change is going to be in a separate PR.


Comment from: @ben
Date: Nov 26 2014

Can this now be closed ?


Comment from: @sa2ajj
Date: Nov 26 2014

See my comment above about environment variables. I have a branch; I just need to polish it and submit it, and then this can be closed when the change is merged.


Comment from: @sa2ajj
Date: Nov 26 2014

But first I want to take care of #2851.


GitPoller Not Triggering Builds on New Branches

This ticket is a migrated Trac ticket 2953

People contributed to the original ticket: @djmitche, @sa2ajj, @unknown_contributor
Ticket created on: Oct 20 2014
Ticket last modified on: Oct 23 2014


I'm not sure if this was covered by the original purpose of the pollers, but it'd be really helpful if it were.

My company implements a process where we create and commit to branches in a very small time frame. This has become problematic with automatic builds through Buildbot because the GitPoller will not trigger an automatic build of a new branch. It is only when a branch that's already been polled gets a new change that the scheduler will consider that commit.

For example, let's say a user creates a local-only branch from master called master+POC44, makes a commit to the master+POC44 branch, and then pushes the master+POC44 branch to the remote. Buildbot will not pick up that commit and trigger a build through a scheduler configured to look for branches starting with master+POC.

The only way around this problem that I know is for a user to create the branch locally AND on the remote, wait till Buildbot has polled the branch, and THEN make and push a commit. In that case, once Buildbot picks up the commit, a build would be triggered.

This is a rare problem for users at my company because the GitPoller is set to run every two minutes, meaning they'd have to create the branch locally AND on the remote, and make and push a commit in under two minutes for the scheduler to ignore their change.
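
For context, a configuration of the kind described above looks roughly like this (the repository URL and branch prefix are invented; passing a callable for branches depends on the GitPoller version in use):

from buildbot.changes.gitpoller import GitPoller
from buildbot.changes.filter import ChangeFilter
from buildbot.schedulers.basic import AnyBranchScheduler

c['change_source'] = [
    GitPoller(
        repourl='git://example.com/project.git',
        branches=lambda branch: branch.startswith('master+POC'),
        pollinterval=120,   # two minutes, as described above
    ),
]

c['schedulers'] = [
    AnyBranchScheduler(
        name='poc-builds',
        change_filter=ChangeFilter(branch_fn=lambda b: b.startswith('master+POC')),
        builderNames=['poc-builder'],
    ),
]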


Buildbot gives up on EC2 spot instance requests before EC2 does

This ticket is a migrated Trac ticket 2935

People contributed to the original ticket: @unknown_contributor, @djmitche
Ticket created on: Oct 09 2014
Ticket last modified on: Dec 19 2015


When Eight receives a spot request status code other than pending-evaluation, pending-fulfillment, or fulfilled, it concludes that the spot request has failed and gives up on it. However, [[several status codes|http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances-bid-status.html#spot-instances-bid-status-lifecycle]] are non-terminal, and EC2 may still fulfill the request at a later time. Nine knows that price-too-low is non-terminal, and so cancels the request when giving up on it, but does not do this for other non-terminal status codes.

As a result, EC2 may launch instances that are not tracked by Buildbot. These will remain running and costing money until the spot price exceeds the bid price, at which point EC2 will automatically terminate the instance. To avoid this, Buildbot needs to cancel spot requests when it gives up on them.

2014-10-09 01:17:00-0400 [-] EC2LatentBuildSlave el6-amd64 requesting spot instance
2014-10-09 01:18:06-0400 [-] EC2LatentBuildSlave el6-amd64 has waited 1 minutes for spot request sir-022rlcrg
2014-10-09 01:18:37-0400 [-] EC2LatentBuildSlave el6-amd64 failed to fulfill spot request sir-022rlcrg with status capacity-oversubscribed
2014-10-09 01:18:37-0400 [-] Buildslave el6-amd64 detached from testsuite-el6-amd64
2014-10-09 01:18:37-0400 [-] while preparing slavebuilder:
        Traceback (most recent call last):
          File "/home/buildbot/env/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 577, in _runCallbacks
            current.result = callback(current.result, *args, **kw)
          File "/home/buildbot/env/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1155, in gotResult
            _inlineCallbacks(r, g, deferred)
          File "/home/buildbot/env/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1097, in _inlineCallbacks
            result = result.throwExceptionIntoGenerator(g)
          File "/home/buildbot/env/local/lib/python2.7/site-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
            return g.throw(self.type, self.value, self.tb)
        --- <exception caught here> ---
          File "/home/buildbot/env/local/lib/python2.7/site-packages/buildbot/process/builder.py", line 335, in _startBuildFor
            ready = yield slavebuilder.prepare(self.builder_status, build)
          File "/home/buildbot/env/local/lib/python2.7/site-packages/twisted/python/threadpool.py", line 196, in _worker
            result = context.call(ctx, function, *args, **kwargs)
          File "/home/buildbot/env/local/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
            return self.currentContext().callWithContext(ctx, func, *args, **kw)
          File "/home/buildbot/env/local/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
            return func(*args,**kw)
          File "/home/buildbot/env/local/lib/python2.7/site-packages/buildbot/buildslave/ec2.py", line 364, in _request_spot_instance
            request = self._wait_for_request(reservations[0])
          File "/home/buildbot/env/local/lib/python2.7/site-packages/buildbot/buildslave/ec2.py", line 443, in _wait_for_request
            request.id, request.status)
        buildbot.interfaces.LatentBuildSlaveFailedToSubstantiate: (u'sir-022rlcrg', <Status: capacity-oversubscribed>)

2014-10-09 01:18:37-0400 [-] slave <Build testsuite-el6-amd64> can't build <LatentSlaveBuilder builder='testsuite-el6-amd64'> after all; re-queueing the request

[...]

2014-10-09 01:32:37-0400 [Broker,32,127.0.0.1] slave 'el6-amd64' attaching from IPv4Address(TCP, '127.0.0.1', 59401)
2014-10-09 01:32:37-0400 [Broker,32,127.0.0.1] Slave el6-amd64 received connection while not trying to substantiate.  Disconnecting.
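
A sketch of the kind of fix described above, using the boto EC2 API that ec2.py is built on; the list of terminal status codes and the function name are illustrative, not the actual Buildbot code:

import boto.ec2

# Status codes after which EC2 will definitely not launch an instance;
# anything else may still be fulfilled later (illustrative list, see the
# AWS documentation linked above).
TERMINAL_STATUS_CODES = frozenset([
    'schedule-expired',
    'canceled-before-fulfillment',
    'bad-parameters',
    'system-error',
])

def give_up_on_spot_request(conn, request):
    """conn is a boto.ec2 connection, request a boto SpotInstanceRequest."""
    if request.status.code not in TERMINAL_STATUS_CODES:
        # e.g. capacity-oversubscribed or price-too-low: EC2 could still
        # fulfill the request and start an instance Buildbot no longer
        # tracks, so cancel it explicitly before giving up.
        conn.cancel_spot_instance_requests([request.id])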

Windows escaping seems to produce problems

This ticket is a migrated Trac ticket 2878

People contributed to the original ticket: @djmitche, caktux@..., @tardyp, @unknown_contributor, @unknown_contributor, @unknown_contributor, @unknown_contributor, @sa2ajj
Ticket created on: Aug 22 2014
Ticket last modified on: Apr 22 2016


I am not sure how long those pastes will be available:

Error message:

'%VCENV_BAT%' is not recognized as an internal or external command,
operable program or batch file.

Discussion on IRC starts at http://irclogs.jackgrigg.com/irc.freenode.net/buildbot/2014-08-22#i_3429967

BAT file that produces the error: http://paste.debian.net/116918

The problem seems to be

... ^&^& ...

The question is why the & characters were escaped this way.


Comment from: @sa2ajj
Date: Aug 22 2014

It seems that it needs to be escaped when passed as a parameter, which does not seem to be our case.


Comment from: @dustin
Date: Aug 24 2014

The pastes are gone already :(


Comment from: @unknown_contributor
Date: Sep 09 2014

If you run the command as a parameter to cmd.exe, then it becomes a parameter and the escaping behaviour is correct.

Try prefixing the list of command line arguments with "cmd", "/c" (buildbot-0.8.9/buildbot/steps/vstudio.py line 418). I use this in my configuration.
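
For illustration only (the paths and project names are invented), the workaround amounts to building the command list like this:

# The rest of the list becomes parameters of cmd.exe, for which the
# escaping behaviour described above is correct.
command = ["cmd", "/c", "%VCENV_BAT%", "x86", "&&",
           "devenv.com", "project.sln", "/Build", "Release"]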


Comment from: @sa2ajj
Date: Sep 09 2014

@otterdam, do you have the code you mentioned modified, or do you use some external parameters?

I've been looking at this problem (unfortunately, no more information came my way) and it seems that the buildslave on Windows always runs commands using "cmd /c"...


Comment from: @unknown_contributor
Date: Dec 14 2014

otterdam's solution works here(tm); it seems "cmd", "/c" should be added to vstudio.py.


Comment from: @sa2ajj
Date: Dec 14 2014

Looks like [[logic|http://trac.buildbot.net/browser/slave/buildslave/runprocess.py?rev=7b9fb2d23e9ea07e1757fb31b5c309ab4e51bbf7#L437]] needs to be reviewed.


Comment from: @unknown_contributor
Date: Mar 17 2015

Can anybody explain why buildbot doesn't simply use the subprocess module for spawning processes?


Comment from: @tardyp
Date: Mar 17 2015

Buildbot is based on an asynchronous library called Twisted. subprocess is synchronous, which means we could only spawn one process at a time. You can however use threads with Twisted. If it is proven more stable, I think a patch that goes that way for Windows would be gladly accepted.

We are lacking good Windows expertise contributions.


Comment from: @unknown_contributor
Date: Mar 18 2015

That is completely untrue :) subprocess can spawn as many processes at a time as you need - just don't call .call() or .wait() unconditionally after subprocess.Popen().

Why I'm asking this is that subprocess handles quoting and such internally and is widely tested, given it's included in the Python standard library, so IMHO it's much better to rely on this functionality rather than write code that you cannot even test well enough since you lack Windows resources...

I see some more bugs that I think could all be resolved by switching to subprocess (at least on Windows).

P.S. Moving to Job Objects is a bad choice since there can be no nested Job Objects, and you cannot guarantee that a program you launch isn't using Job Objects inside.


Comment from: @tardyp
Date: Mar 18 2015

just don't do .call() or .wait() unconditionally after subprocess.Popen().
JustAMan, I would advise you to make some experiments on the buildslave code to convince yourself that this is not as simple as you say.

  • How would you watch the subprocess's stdio and stream the results in real time to the master?
  • How would you wait for the processes to finish?

The parallelism here is completely independent. The same slave runs an arbitrary number of builders at the same time, with arbitrary timing of subprocess spawn and termination.

For the rest, I agree that subprocess is probably much more reliable on Windows than Twisted's process management:
http://twistedmatrix.com/documents/13.2.0/core/howto/process.html

The question is how we can reuse the Windows workaround methods of subprocess inside Twisted.


Comment from: @unknown_contributor
Date: Mar 19 2015

I suggest we move Buildbot's process-spawning code to subprocess.
Both issues (watching stdio and waiting for the process to finish) are almost easy.

We have an SCons-based build system where I worked on refactoring how processes are spawned, and this system "tees" process output to a separate file and to the console without much problem. The same goes for waiting for a process to finish or killing the whole tree.

I can talk to our system architect, who is right now working on open-sourcing it, if we can move this process-handling code to a subpackage. Or maybe I can just copy-paste it into Buildbot, as it's about 15K overall and is regularly tested on Windows, Linux and macOS (and now it works on FreeBSD as well).

If that won't suit the Buildbot community, I suggest we at least move to the subprocess.list2cmdline() function instead of ugly manual quoting.
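
A quick illustration of that suggestion; subprocess.list2cmdline() implements the MS C runtime quoting rules, so a list of arguments becomes a single Windows command line without hand-rolled escaping (the arguments are invented):

import subprocess

args = ["devenv.com", "my project.sln", "/Build", "Release|Win32"]
print(subprocess.list2cmdline(args))
# prints: devenv.com "my project.sln" /Build Release|Win32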


Comment from: @sa2ajj
Date: Mar 19 2015

Any solution to the problem is very welcome.

15K as in "15K lines"?

If you think you could provide a PR, we could take it from there.


Comment from: @unknown_contributor
Date: Mar 20 2015

15K is "15 Kilobytes" of code.

I'll talk to our open-sourcing guy then.


Comment from: @unknown_contributor
Date: Mar 25 2015

I have a general "let's publish this code" approval and will work on that.

On a related note, does anyone have the original "pastes" that could be used to reproduce the issue?


Comment from: @unknown_contributor
Date: Mar 09 2016

This is still a problem in Buildbot 0.8.12. I worked around it by modifying the "win32_batch_quote" function in buildslave\runprocess.py. I made two changes:

  1. Added a special case so "&&" is not escaped.
  2. Allowed environment variable expansion by not changing "%" characters to "%%".

I'm not saying this is the right way to do it but it's working for me. Here is the complete function I'm using now:

import re

from twisted.python.win32 import quoteArguments

def win32_batch_quote(cmd_list):
    def escape_arg(arg):
        if arg == '|':
            return arg

        # Change 1: pass "&&" through unescaped so commands can be chained.
        if arg == '&&':
            return arg

        arg = quoteArguments([arg])
        # Escape shell special characters.
        # Change 2: '%' is deliberately not doubled to '%%', so environment
        # variables such as %VCENV_BAT% are still expanded by cmd.exe.
        arg = re.sub(r'[@()^"<>&|]', r'^\g<0>', arg)
        return arg

    return ' '.join(map(escape_arg, cmd_list))
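
With these changes, a command list such as ['%VCENV_BAT%', 'x86', '&&', 'nmake'] ends up in the batch file as %VCENV_BAT% x86 && nmake, so cmd.exe expands the variable and chains the two commands as intended.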

Comment from: @unknown_contributor
Date: Apr 19 2016

Since I'm the one who implemented the current escaping logic, I think I need to weigh in.

Unfortunately, the pastes in the description are no longer available, so I don't know for sure, but I'm going to guess that the string %VCENV_BAT% was used as the first element of a list passed as the command to ShellCommand. When the command is a list, Buildbot runs the process in such a way that the program's argv are exactly the elements of the list. On Windows, that involves escaping all characters that are meaningful to the shell, including % and &.

This is nothing new. As far as I understand, on Unix-likes it has always been that way. On Windows, the escaping has been rather lax (only handling the pipe character) until I tightened it around 0.8.9. I did this to ensure consistency between Windows and non-Windows, and to solve issues like [[these|https://lists.buildbot.net/pipermail/devel/2013-August/009978.html]] (I had my own problem similar to that one).

To summarize, I don't think there is a problem here. If you want shell processing, you need to tell Buildbot to use the shell by passing the command as a single string. No escaping happens then.
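
A minimal illustration of the distinction described above (the commands themselves are invented):

from buildbot.steps.shell import ShellCommand

# Command as a list: each element becomes one argv entry, so on Windows the
# shell metacharacters are escaped and %VCENV_BAT% / && are taken literally.
as_list = ShellCommand(command=["%VCENV_BAT%", "x86", "&&", "nmake"])

# Command as a single string: the whole line is handed to the shell
# unmodified, so the variable is expanded and && chains the commands.
as_string = ShellCommand(command="%VCENV_BAT% x86 && nmake")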


Comment from: @unknown_contributor
Date: Apr 20 2016

Whether dpb is right or wrong, all Visual C++ steps are broken as-is.

http://docs.buildbot.net/current/manual/cfg-buildsteps.html#visual-c

They all use a shell command with a list and expect %VCENV_BAT% to be expanded to a path. So either win32_batch_quote needs to change or the Visual C++ steps need an update to pass the command as a single string.


Comment from: @unknown_contributor
Date: Apr 20 2016

Ah, now I get it. Yeah, that is a problem. I'll see if I can do anything about that.


Comment from: @unknown_contributor
Date: Apr 22 2016

#2159

Add an example of using codebases

This ticket is a migrated Trac ticket 2945

People contributed to the original ticket: @sa2ajj, benoit.allard@...
Ticket created on: Oct 15 2014
Ticket last modified on: Jan 15 2015


While the notion is introduced and all relevant options are documented, it seems to be a good idea to add a separate example where codebases are used.


Comment from: @ben
Date: Oct 20 2014

I could help there. I just need an example where it makes sense to use codebases, i.e. where it is necessary to check out multiple repositories in order to run a command that needs the outcome of each of them.


Comment from: @ben
Date: Oct 20 2014

It's been suggested on IRC to add a picture about the conversion from changes to sourcestamps to buildsets in the introduction chapter. I'll give it a shot.


Comment from: @sa2ajj
Date: Nov 07 2014

Based on the [[IRC discussion|http://irclogs.jackgrigg.com/irc.freenode.net/buildbot/2014-11-07#i_3490587]], a couple of different codebase mappings need to be shown and how they are handled by different schedulers should be explained.


Comment from: @sa2ajj
Date: Jan 07 2015

Things to cover:

  • codebase generator
  • scheduler configuration
  • change filter
  • use of codebase in build steps

Ben, an example where it makes sense is a two repo setup (e.g. lib + app) where we want to build both regardless of whether the change is coming to the lib repo or to the app repo. And in this scenario we'd like to build the revision for the change + master/stable of the other repo.
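
A minimal sketch of that lib + app scenario (repository URLs, codebase names and builder/slave names are invented for illustration):

all_repositories = {
    'https://example.com/git/lib.git': 'lib',
    'https://example.com/git/app.git': 'app',
}

def codebaseGenerator(chdict):
    return all_repositories[chdict['repository']]

c['codebaseGenerator'] = codebaseGenerator

from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory
from buildbot.schedulers.basic import AnyBranchScheduler
from buildbot.steps.source.git import Git

f = BuildFactory()
# Check out both codebases into separate workdirs; the codebase that did not
# change is checked out at the branch configured in the scheduler below.
f.addStep(Git(repourl='https://example.com/git/lib.git', codebase='lib', workdir='lib'))
f.addStep(Git(repourl='https://example.com/git/app.git', codebase='app', workdir='app'))

c['schedulers'] = [
    AnyBranchScheduler(
        name='lib-and-app',
        builderNames=['full-build'],
        codebases={
            'lib': {'repository': 'https://example.com/git/lib.git',
                    'branch': 'master', 'revision': None},
            'app': {'repository': 'https://example.com/git/app.git',
                    'branch': 'master', 'revision': None},
        },
    ),
]

c['builders'] = [BuilderConfig(name='full-build', slavenames=['slave1'], factory=f)]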


Comment from: @sa2ajj
Date: Jan 15 2015

One seemingly common use case: check out two branches (of the same or different repositories) -- one according to the change, one according to the configuration. With the "default" configuration, both are checked out at the revision in the change, which causes errors.

[nine] Add support for status PNGs

This ticket is a migrated Trac ticket 2738

People contributed to the original ticket: @djmitche
Ticket created on: Apr 05 2014
Ticket last modified on: Apr 05 2014


WebStatus has a way to serve PNGs that are based on the status of a build.

That's a good thing to support, but not with a REST API (since the links are directly to the images). We should bring that back.

