...that is fully replaced by hash_info
|
|
|
|
|
|
|
|
|
|
1. Mark local variables const if needed;
2. Remove redundant fmt::format calls.
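To illustrate the second point, a small sketch of the kind of
redundant call meant here (illustrative only, not a call site from
the actual change): fmt::format invocations that merely pass an
existing value through.

    #include <fmt/format.h>
    #include <string>

    // Redundant: fmt::format("{}", s) only copies the string through
    // the formatter.
    std::string Describe(std::string const& name) {
        return fmt::format("{}", name);
    }

    // Sufficient: return the string directly.
    std::string DescribeFixed(std::string const& name) {
        return name;
    }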
|
|
|
|
1. Mark local variables constant if needed;
2. Remove redundant fmt::format calls;
3. Acquire storage's lock after conversion of data.
|
|
|
|
1. Mark local variables constant if needed;
2. Remove redundant fmt::format calls;
3. Return bazel_re::Digest from resource name parsing.
|
|
|
|
...with ArtifactDigestFactory::HashDataAs
|
|
...with ArtifactDigestFactory::HashFileAs
|
|
...that provides ways to create valid ArtifactDigests.
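A minimal usage sketch of the factory named above; the template
parameters and signatures are assumptions based on the method names
from the neighbouring commits, not the actual declarations:

    // Hedged sketch: create an ArtifactDigest from in-memory content
    // and from a file on disk (signatures assumed).
    auto const blob_digest =
        ArtifactDigestFactory::HashDataAs<ObjectType::File>(hash_function,
                                                            content);
    auto const file_digest =
        ArtifactDigestFactory::HashFileAs<ObjectType::File>(hash_function,
                                                            file_path);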
|
|
|
|
...that validates hashes and stores some additional information about them.
|
|
...from ObjectInfo and ArtifactDigest
|
|
|
|
...to simplify further refactoring.
|
|
...bypassing ArtifactDigest functionality.
|
|
...with ArtifactDigest.
|
|
...with ArtifactDigest.
|
|
|
|
...with ArtifactDigest.
|
|
...with ArtifactDigest.
|
|
...with ArtifactDigest.
|
|
...with ArtifactDigest
|
|
...with ArtifactDigest
|
|
...with ArtifactDigest.
|
|
...with ArtifactDigest.
|
|
|
|
...with ArtifactDigest.
|
|
...with ArtifactDigest.
|
|
...with ArtifactDigest.
|
|
|
|
...and replace obvious redundant conversions to bazel_re::Digest, which were done to ensure that the digest represents a tree.
|
|
Remote execution of actions is handled via long-running operations.
Here we have to be careful with the involved status codes: there
is the status code of the operation, and the response contains a
field that also happens to be a status code. The protocol states:
"Errors discovered during creation of the `Operation` will be
reported as gRPC Status errors, while errors that occurred while
running the action will be reported in the `status` field of
the `ExecuteResponse`."
So we have to distinguish between two kinds of DEADLINE_EXCEEDED.
- If reported by the rpc, it means we failed to obtain the status
  of the ongoing action in a reasonable amount of time; here we
  can do nothing but retry.
- If we obtain an answer and that answer has status DEADLINE_EXCEEDED,
  this means "The execution timed out."; hence we must not retry,
  and we must report the result properly to the user.
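A minimal sketch of the resulting case distinction, assuming the
generated REv2 proto types under the bazel_re namespace alias and
hypothetical outcome values; this is not the actual justbuild code:

    #include <grpcpp/grpcpp.h>

    enum class Outcome { kRetryPolling, kExecutionTimedOut, kOther };

    // rpc_status: status of the rpc polling the long-running operation.
    // response: the ExecuteResponse of the finished operation.
    auto Classify(grpc::Status const& rpc_status,
                  bazel_re::ExecuteResponse const& response) -> Outcome {
        if (rpc_status.error_code() == grpc::StatusCode::DEADLINE_EXCEEDED) {
            // We failed to obtain the status of the ongoing action in
            // time; the action may still be running, so just retry.
            return Outcome::kRetryPolling;
        }
        if (response.status().code() ==
            static_cast<int>(grpc::StatusCode::DEADLINE_EXCEEDED)) {
            // The execution itself timed out: a final result; report
            // it to the user instead of retrying.
            return Outcome::kExecutionTimedOut;
        }
        return Outcome::kOther;
    }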
|
|
The default options of std::filesystem::copy include following
symlinks, resulting in file repositories creating wrong trees if
they contain unresolved symlinks, or failing unexpectedly early if
symlink cycles exist.
This is fixed by ensuring the copy_symlinks option is always used.
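For illustration, a minimal sketch of such a copy call (the wrapper
function is hypothetical; the std::filesystem API is standard):

    #include <filesystem>

    namespace fs = std::filesystem;

    // Copy a directory tree, keeping symlinks as symlinks; without
    // fs::copy_options::copy_symlinks, an unresolved symlink would be
    // followed (yielding a wrong tree) and a symlink cycle would make
    // the copy fail.
    void CopyDirTree(fs::path const& from, fs::path const& to) {
        fs::copy(from, to,
                 fs::copy_options::recursive |
                     fs::copy_options::copy_symlinks);
    }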
|
|
The root async map in a chain of calls should always be checked
for a missing value, which can happen if, e.g., a cycle occurs or
a thread gets killed by the system.
Properly handle this by checking explicitly whether a value has
been posted. If not, check for cycles where it makes sense (for
example, in the resolving of symlinks); otherwise, report any
pending map keys not yet processed.
This is done for all just-mr commands working with async maps.
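A hedged sketch of the described pattern; the map interface and
helper names are illustrative assumptions, not the actual just-mr
code:

    // Detect that the root map never posted a value, then diagnose.
    bool value_posted = false;
    root_map.ConsumeAfterKeysReady(
        &task_system, {root_key},
        [&value_posted](auto const& /*values*/) { value_posted = true; });
    task_system.Finish();  // hypothetical: run all tasks down
    if (not value_posted) {
        // Where a cycle can occur (e.g., symlink resolution), check for
        // one; otherwise report the keys that were never processed.
        if (auto cycle = DetectCycle(root_map)) {
            ReportCycle(*cycle);
        }
        else {
            ReportPendingKeys(root_map);
        }
    }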
|
|
...to be used when reporting pending keys on failure to post a value.
|
|
...in async map instances, same as for reporting cycles.
This removes the restriction that the key object has to possess the
ToString method, allowing it to be used, e.g., with just-mr maps.
The now obsolete HasToString concept is removed.
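For reference, the removed restriction was of roughly this shape (a
reconstruction for illustration, not the exact original):

    #include <concepts>
    #include <string>

    // A key type had to satisfy a concept like this to be reportable.
    template <typename T>
    concept HasToString = requires(T const& t) {
        { t.ToString() } -> std::convertible_to<std::string>;
    };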
|
|
|
|
Fixes the false assumption that the result of resolving the tree
will always be set if the map doesn't log fatal; in fact, the map
might fail to set a value if, e.g., a thread is killed by the
system or there is a symlink cycle.
|
|
The separation of cache-key handling and CAS lookup in
e6a91bb733b0738cee0b3ae06ee640f70c1e787f unified the log level of
two messages to warning: the absence of a cache entry (originally
debug) and a report on a malformed entry in the cache (originally
warning). As we routinely expect non-cached actions in a build,
demote those messages to debug level in order to keep the log
readable and not confuse the user with warnings about expected
behaviour.
|
|
... as those are typically of a transient nature as well.
|
|
|
|
We already accept short writes in batch uploads, but when no
progress is made, we cannot simply retry, as this might lead to
an infinite loop. Instead, we give up on batching and upload the
blobs one by one.
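A sketch of that fallback under hypothetical helper names
(TryBatchUpload is assumed to return the blobs not yet stored):

    // Retry batching only while each round makes progress; once a round
    // stores nothing, upload the remaining blobs one by one instead of
    // looping forever.
    while (not remaining.empty()) {
        auto const count_before = remaining.size();
        remaining = TryBatchUpload(remaining);
        if (remaining.size() == count_before) {
            for (auto const& blob : remaining) {
                UploadSingleBlob(blob);
            }
            break;
        }
    }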
|
|
... and correctly report the error.
- If we cannot store the bytes we received, this is an internal error.
- If the bytes received have a different hash than announced, report
  this user error as INVALID_ARGUMENT.
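A minimal sketch of that mapping in a ByteStream Write handler (the
surrounding variables and the cas helper are hypothetical):

    // Verify before storing: a digest mismatch is the client's fault,
    // a failed store afterwards is ours.
    if (computed_digest != announced_digest) {
        return grpc::Status{grpc::StatusCode::INVALID_ARGUMENT,
                            "received data does not match announced digest"};
    }
    if (not cas.StoreBlob(data)) {
        return grpc::Status{grpc::StatusCode::INTERNAL,
                            "failed to store received data"};
    }
    return grpc::Status::OK;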
|