Age | Commit message | Author |
|
...that come with installing just. This ensures control over the
subdirectories available to the runner, avoiding any potentially
conflicting paths.
|
|
Compared to the command line, the environment is usually quite
short, so including it in messages reporting about commands does
not introduce a lot of additional noise. However, knowing the
environment can help in understanding an error message. Therefore,
it seems a good trade-off to include it. Do so.
|
|
The gsl-lite implementation is slightly more picky in terms of
type conversions and constness resolution in initializers;
therefore, small changes were needed.
|
|
...instead of using absolute values.
This was the desired outcome all along, and now it can be done
properly thanks to the recently added multiplication expression.
|
|
... in particular that of the absent pragma, which is addressed
both in imports and in deduplication.
|
|
The runners used in tests that rely on the existence of execution
or serve endpoints can get stuck waiting for those endpoints to
come online if, for some reason, they cannot be set up. This commit
fixes the issue by setting a reasonable timeout, after which we
fail gracefully.
|
|
Also updates the tests and all relevant documentation accordingly.
|
|
|
|
The result of the analysis is a JSON object containing the keys
`"artifacts"`, `"runfiles"`, and `"provides"`. By default, this
JSON object is logged. However, it might be useful to process the
data contained in it, for example, while developing new rules.
This patch adds a new command-line option (`--dump-result`), reserved
to the subcommand `analyse`, to dump the analysis result to the given
file or to stdout (if `-` is given).
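A minimal Python sketch of processing such a dumped result, assuming
`just` is in PATH; the target name and the output file are placeholders,
while the `--dump-result` option and the three keys are as described above:

    import json
    import subprocess

    # Hypothetical invocation: dump the analysis result of some target to a
    # file ("-" would write it to stdout instead) and read it back.
    subprocess.run(
        ["just", "analyse", "--dump-result", "result.json", "some-target"],
        check=True,
    )
    with open("result.json") as f:
        result = json.load(f)

    # Per the message above, the result contains these three keys.
    for key in ("artifacts", "runfiles", "provides"):
        print(key, "->", sorted(result.get(key, {})))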
|
|
While this can already be expressed by an "if" statement, having
a dedicated function for logical negation makes some expressions
more readable.
|
|
... using, also for the "then" branch, the empty list as default.
In this way, the statement not only becomes more symmetric, but also
allows shorter representations of some typical expressions.
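A hedged illustration of such a shorter representation, written as Python
dicts standing in for the JSON expressions; the variable names are made up,
and the field names "cond"/"then"/"else" are assumed to be those of the
"if" expression:

    # With the empty list also being the default for "then", an expression
    # that only contributes something when the condition is false can drop
    # the explicit "then" branch (illustration only).
    verbose_form = {
        "type": "if",
        "cond": {"type": "var", "name": "DROP_EXTRAS"},
        "then": [],
        "else": [{"type": "var", "name": "extras"}],
    }
    short_form = {
        "type": "if",
        "cond": {"type": "var", "name": "DROP_EXTRAS"},
        "else": [{"type": "var", "name": "extras"}],
    }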
|
|
Lists are sometimes used in configurations as a replacement for tuples.
Providing a length function gives an easy way to detect usage errors.
|
|
|
|
|
|
During compactification, invalid entries must be deleted.
|
|
|
|
... and nothing reconstructed for simple (i.e., non-export) targets.
|
|
During garbage collection, split every entry that is larger than a threshold and remove it from the storage.
|
|
During garbage collection, remove from the storage every entry that has a corresponding large-object entry.
|
|
|
|
|
|
... and trees.
|
|
... executable files during splitting.
|
|
|
|
Configured targets, by design, cannot distinguish between a value
not occurring in the configuration at all and occurring there with
value null. Therefore, to understand the conflict, we may just as
well drop all the null values of the target configuration when
reporting it.
|
|
As we use chunking also for reducing storage, we have to consider
the overhead of block devices, which is on the order of kB per file.
So our target chunk size should be at least two orders of magnitude
above this. This suggests minimally aiming for a chunk size of
128kB, a target size that also has the advantage that the maximal
chunk size associated with it is 1MB, which is still well below
the maximal transmission size of grpc, allowing us to avoid the
streaming API.
As we're scaling everything up by a factor of 16, we also have
to increase the number of bits in the involved masks by 4. We use
this to also extend the window size by using the 2 most significant
octets. Following the advice of the paper proposing FastCDC to
spread out the ones roughly equally suggests 0x4444 as a suitable
value for the two most significant octets.
We also change the suggested extension of the remote-execution API
accordingly. As the precise parameters for FastCDC when announced
over the remote-execution APIs are still under discussion upstream,
we simplify the name to not mention the target size.
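A small Python sketch of the size arithmetic described above; the previous
target size of 8kB is inferred from the factor of 16, the 8x ratio between
target and maximal chunk size is taken from the numbers in the message, and
the 4MB grpc limit is the usual default message-size limit:

    # Chunk-size arithmetic (sketch; see assumptions stated above).
    previous_target = 8 * 1024         # assumed previous target chunk size
    factor = 16
    target = previous_target * factor  # 128kB, two orders of magnitude
                                       # above the ~kB block-device overhead
    max_chunk = 8 * target             # 1MB maximal chunk size for this target
    grpc_limit = 4 * 1024 * 1024       # usual default grpc message-size limit
    assert max_chunk < grpc_limit      # no need for the streaming API

    # Scaling by a factor of 16 == 2**4 requires 4 more one-bits in the
    # masks; spread over the two most significant octets this gives 0x4444.
    assert factor == 2 ** 4
    assert bin(0x4444).count("1") == 4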
|
|
... as this is the only thing the user cares about when trying
to investigate why that action failed.
|
|
|
|
|
|
Also improves and extends the Git operations tests accordingly.
|
|
Also adds an appropriate test for this method.
|
|
Also extends the tests accordingly.
|
|
|
|
Often outputs are only referenced as blobs but not downloaded to the
working directory of the test. This can make it hard to understand
errors, as the respective artifacts are not available for inspection.
This is even more important in the case of tests with a provided serve
endpoint, as then even the error message of a failed serve build is
only referenced as a blob. Solve this by keeping the local build root
of the remote-execution service, using the fact that all objects
transferred between the serve endpoint and the client go through
the remote-execution endpoint.
|
|
|
|
|
|
This test, among others, verifies the archive functionality by
creating an archive with our library, extracting it with the
system command-line tools, and comparing the result. In order not
to depend on the host system having tools installed for all possible
compression algorithms, it tacitly skips the extraction test if the
respective tool cannot be found under /usr/bin. This, however,
assumes that /usr/bin is in PATH; ensure this by extending PATH
accordingly.
|
|
|
|
... operations
This test creates a "file" repository with pragma "to_git". Move to a
subdirectory to avoid including all the tools in that created root.
|
|
For historic reasons (as quite some tests date back to before the
public name of the build tools was decided), the end-to-end tests
assume generic names for the tools. This used to be done by simply
staging the artifacts. As soon as we started to support dynamic
linking, we also had to include the runtime dependencies, as provided
by our install-with-deps rule. ae2e515ab84ea3ab08764685f84441c0741f8039
attempted to add those dependencies by replacing the staging with
a generic action doing a copy. This, however, made the "lib" dir
containing the dependencies an opaque tree
- defined by different actions, and, more importantly,
- containing only the run-time dependencies of one of the tools.
This causes staging conflicts between those two lib dirs (currently
hidden by a bug in the computation of the disjoint union), and things
only worked because, in the canonical configuration used for testing,
both "lib" dirs are empty anyway.
The correct way of adding dependencies while renaming the tool is
still staging; fix this.
|
|
For splicing of large objects from external sources, additional checks are performed (see the sketch below):
* The digest of the spliced result must be equal to the expected digest;
* The parts of a spliced tree must be in the storage.
Tested:
* Regular splicing of large objects;
* If the result is unexpected, splicing fails;
* If some parts of a tree are missing, splicing fails.
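A minimal Python sketch of splicing with these two checks; the flat
chunk-concatenation format, the helper name, and the use of SHA-256 are
illustrative assumptions, only the checks themselves come from the message
above:

    import hashlib

    def splice(chunk_digests, expected_digest, storage):
        # Reassemble an object from its parts, failing on the two
        # conditions described above.
        parts = []
        for digest in chunk_digests:
            # Every referenced part must already be present in the storage.
            if digest not in storage:
                raise RuntimeError("missing part: " + digest)
            parts.append(storage[digest])
        result = b"".join(parts)
        # The digest of the spliced result must equal the expected digest.
        if hashlib.sha256(result).hexdigest() != expected_digest:
            raise RuntimeError("spliced result has unexpected digest")
        return result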
|
|
* Uplink the parts of a large entry before the entry itself;
* Uplink large entries in LargeObjectCAS::GetEntryPath to avoid splitting things twice;
* Promote the spliced tree during uplinking of a large tree entry to properly promote the parts of the tree;
* Uplink large entries in LocalUplink{Blob, Tree} to support proper uplinking in the Action Cache and Target Cache;
Tested:
* Uplink large blobs and trees;
* Uplink a large object that depends on other large objects.
|
|
Implicitly reconstruct objects during regular uplinking of Blobs/Trees.
|
|
* Add LargeObjectCAS fields for files and trees to LocalCAS;
* Add logic for splitting objects located in the main storage.
Tested:
Splitting of large, small and empty objects.
|
|
|
|
|
|
|
|
Now the curl URL API always fails to parse the empty string, so
our test was changed to reflect this.
|
|
Also updates the test-mixed-bootstrap script which must use the
explicit library version.
|
|
Numerical values are used in some places in justbuild: as the value
for timeout scaling, as well as by the "range" expression that is
used, e.g., to define repeated test runs. Therefore, improve support
for numerical values by adding basic operations.
|