...to reduce the "price" of copying.
...instead of BazelBlobContainer to not bring bazel_re::Digest to IExecutionApi.
...where the template parameter is the type of a digest.
...instead of various iterators.
...fixing potentially dangerous code (evaluation order is unspecified).
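
As a minimal illustration of this class of bug (the snippet is hypothetical, not the code touched here): in C++ the evaluation order of function arguments is unspecified, so two side-effecting calls in one argument list may run in either order.

    #include <iostream>
    #include <sstream>
    #include <string>

    static std::string ReadToken(std::istream& in) {
        std::string token;
        in >> token;
        return token;
    }

    static void PrintKeyValue(std::string const& key, std::string const& value) {
        std::cout << key << " = " << value << "\n";
    }

    int main() {
        std::istringstream input{"key value"};
        // Dangerous: it is unspecified which ReadToken(input) call is evaluated
        // first, so "key" and "value" may end up swapped.
        PrintKeyValue(ReadToken(input), ReadToken(input));

        // Safe: name the intermediate results to force a deterministic order.
        std::istringstream input2{"key value"};
        auto key = ReadToken(input2);
        auto value = ReadToken(input2);
        PrintKeyValue(key, value);
    }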
...in LocalApi and BazelApi.
...in LocalApi and BazelApi.
This reduces the code duplication between the local and bazel APIs
and improves code maintainability.
We can avoid doing extra work in converting between bazel digests
and artifact digests by actually using the API interface.
Once a RepositoryConfig instance gets populated, it must never be
changed again. Therefore, all functions accepting these instances
should only take them as pointers to const.
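
A minimal sketch of the resulting calling convention (the function name below is made up for illustration):

    class RepositoryConfig;  // populated once during setup, read-only afterwards

    // Taking a pointer to const documents that the callee may only read the
    // configuration and can never modify it.
    void AnalyseTargets(RepositoryConfig const* repo_config);

    // By contrast, a signature taking a pointer to non-const would suggest the
    // callee is allowed to change the already populated instance:
    // void AnalyseTargets(RepositoryConfig* repo_config);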
The Emit method of the Logger class, when called with a string as
second argument, expects it to be a format string. It should be
considered a programming error to pass a string variable as that
argument without knowing for certain that it does not contain any
format escape character ('{', '}'); instead, one should be
conservative and use the blind format string "{}" as second
argument and pass the unknown string variable as third argument.
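
As a sketch (the surrounding helper function is made up; the Emit call shape follows the description above):

    void ReportExternalMessage(Logger const& logger, std::string const& message) {
        // Wrong: if message contains '{' or '}', it is interpreted as a format
        // string and formatting fails at run time.
        // logger.Emit(LogLevel::Error, message);

        // Right: use the blind format string "{}" and pass the unknown string
        // as the formatting argument.
        logger.Emit(LogLevel::Error, "{}", message);
    }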
As we use chunking also for reducing storage, we have to consider
the overhead of block devices, which is on the order of kB per file.
So our target chunk size should be at least 2 orders of magnitude
above this. This suggests aiming for a chunk size of at least
128kB, a target size that also has the advantage that the maximal
chunk size associated with it is 1MB, which is still well below the
maximal transmission size of grpc, allowing us to avoid the
streaming API.
As we're scaling everything up by a factor of 16, we also have
to increase the number of bits in the involved masks by 4. We use
this to also extend the window size by using the 2 most significant
octets. Following the advice of the paper proposing FastCDC to
spread out the ones roughly equally suggests 0x4444 as a suitable
value for the two most significant octets.
We also change the suggested extension of the remote-execution API
accordingly. As the precise parameters for FastCDC when announced
over the remote-execution APIs are still under discussion upstream,
we simplify the name to not mention the target size.
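
In numbers, as a sketch (the constant names and the exact placement of the mask bits in the chunker are assumptions; the values themselves are the ones stated above):

    #include <cstdint>

    constexpr std::uint64_t kTargetChunkSize = 128 * 1024;  // 128 kB average
    constexpr std::uint64_t kMaxChunkSize = 1024 * 1024;    // 1 MB, below the
                                                            // grpc message limit
    // Scaling the previous target by 16 = 2^4 adds 4 one-bits to the FastCDC
    // masks; spreading the ones roughly equally over the two most significant
    // octets yields 0x4444 there.
    constexpr std::uint64_t kMaskHighOctets = std::uint64_t{0x4444} << 48U;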
Main culprits:
- std::size_t, std::nullptr_t, and NULL require <cstddef>
- std::move and std::forward require <utility>
- unordered maps and sets require their respective includes
- std::for_each and std::all_of require <algorithm>
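
The fix in each case is to spell out the corresponding standard header, for instance:

    #include <algorithm>      // std::for_each, std::all_of
    #include <cstddef>        // std::size_t, std::nullptr_t, NULL
    #include <unordered_map>  // std::unordered_map
    #include <unordered_set>  // std::unordered_set
    #include <utility>        // std::move, std::forward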
Some of the more specific issues addressed:
- missing log_level target/include
- header-only libs wrongly marking deps as private
- missing/misplaced gsl includes
... so that any updates of the local-build-root layout are correctly
taken into account. In particular, this change also moves the temporary
directory under the ephemeral root, allowing quicker clean-up.
Co-authored-by: Paul Cristian Sarbu <paul.cristian.sarbu@huawei.com>
... as the fs_utils have a lot more dependencies, making them usable
in fewer places. Moreover, this function also serves to shape the
layout of the local build root and hence is more appropriately
placed in the config anyway.
We deliberately have many functions that do not abort the process
on failure and instead simply return a corresponding value. It
is then up to the caller to decide how to handle this failure;
in particular, such a failure can be expected, e.g., if we try to
fetch a file from remote execution first, before fetching it from
the upstream location.
To have a consistent user experience, nothing that can occur in
a successful build should be reported at error level; moreover,
messages that routinely occur during successful builds should not
be reported at progress or above, except for the (stage) result
messages and the progress reporter.
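
As a sketch of the resulting pattern (all names and signatures below are made up for illustration; the Logger/LogLevel usage follows the description above):

    #include <optional>
    #include <string>

    // Returns std::nullopt instead of aborting; the caller decides what a
    // failure means in its context.
    [[nodiscard]] auto TryFetchFromRemote(std::string const& id) noexcept
        -> std::optional<std::string>;
    [[nodiscard]] auto FetchFromUpstream(std::string const& id) noexcept
        -> std::optional<std::string>;

    auto Fetch(Logger const& logger, std::string const& id) noexcept
        -> std::optional<std::string> {
        if (auto content = TryFetchFromRemote(id)) {
            return content;
        }
        // This failure is expected during a successful build, so it is reported
        // below progress level before falling back to the upstream location.
        logger.Emit(LogLevel::Debug, "{} not known to remote execution", id);
        return FetchFromUpstream(id);
    }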
Currently, the implementations of the split and splice operation are both
hidden behind the Bazel API implementation. This was sufficient to implement
splitting at the server and splicing at the client. In order to support the
other direction of splitting at the client and splicing at the server while
reusing their implementations, the code needs to be refactored. First, the
functionality of split and splice is explicitly exposed at the general
execution API interface and implemented in the sub APIs. Second, the
implementations of split and splice are factored into a separate utils class.
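
Roughly, the resulting shape of the interface, as a sketch; the method names, signatures, and the use of the project's ArtifactDigest type are illustrative assumptions rather than the exact code:

    #include <optional>
    #include <vector>

    class IExecutionApi {
      public:
        virtual ~IExecutionApi() noexcept = default;
        // Chunk a blob and return the digests of the resulting chunks.
        [[nodiscard]] virtual auto SplitBlob(ArtifactDigest const& blob_digest)
            const noexcept -> std::optional<std::vector<ArtifactDigest>> = 0;
        // Reassemble a blob from previously transferred chunks.
        [[nodiscard]] virtual auto SpliceBlob(
            ArtifactDigest const& blob_digest,
            std::vector<ArtifactDigest> const& chunk_digests) const noexcept
            -> std::optional<ArtifactDigest> = 0;
    };
    // LocalApi and BazelApi both implement these entry points; the shared
    // split/splice logic lives in a common utility class.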
... glibc provides synchronization stubs for single-threaded
environments as weak symbols. When linking pthreads, these
weak symbols must be replaced by the strong symbols provided
by the pthread library. For dynamically linking pthreads,
this is done automatically. However, to support this for
static linking, we must ensure to link the whole archive.
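
With the GNU toolchain, this typically amounts to wrapping the static pthread library in whole-archive flags, for example:

    -Wl,--whole-archive -lpthread -Wl,--no-whole-archive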
...in accordance with our coding style.
This was a source of occasional std::bad_variant_access exceptions.