Age | Commit message | Author |
|
If fetching via the primary API fails and there is no fallback,
we should fail rather than silently continuing with the next object
to fetch.
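A minimal sketch of that fail-fast behaviour (the function name, the
callback parameters, and the object identifiers are made up for
illustration, not the actual fetch interface):

    #include <functional>
    #include <string>
    #include <vector>

    // Fetch every object; on a primary-API failure without a fallback,
    // stop and report failure instead of moving on to the next object.
    auto FetchAll(
        std::vector<std::string> const& object_ids,
        std::function<bool(std::string const&)> const& fetch_primary,
        std::function<bool(std::string const&)> const* fallback) -> bool {
        for (auto const& id : object_ids) {
            if (fetch_primary(id)) {
                continue;
            }
            if (fallback == nullptr or not (*fallback)(id)) {
                return false;  // fail instead of silently continuing
            }
        }
        return true;
    }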
|
|
... to allow a more specific signature when passing around
the rehash function.
|
|
The specification for this status code is as follows.
One or more errors occurred in setting up the action requested,
such as a missing input or command or no worker being available.
The client may be able to fix the errors and retry.
We routinely ensure that all inputs are available to the remote
execution before we start an action, so on a compliant server all
prerequisites will be present; they might, however, not be present
on a server whose CAS only has eventual consistency, or whose answer
to FindMissingBlobs is incorrect (due to stale cache entries after a
CAS purge). While we have no guarantee that a retry will help, we
still retry; at least in the case of an unavailable worker, or of CAS
entries not yet visible due to eventual consistency, it will. Also,
we log the full response at debug level, including the repeated Any
messages. In this way, we can find out what useful information (if
any) is sent by popular remote-execution services and implement more
specific mitigations in the future.
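A rough sketch of that retry decision; the helper name and the way
the debug log is collected are made up, only the status code
(9 == FAILED_PRECONDITION) and the repeated Any details field come
from the protocol:

    #include <string>
    #include <google/protobuf/any.pb.h>
    #include <google/rpc/status.pb.h>

    // Retry on FAILED_PRECONDITION (code 9) and collect the full status,
    // including the repeated Any details, for logging at debug level.
    auto ShouldRetry(google::rpc::Status const& status,
                     std::string* debug_log) -> bool {
        if (status.code() != 9) {  // 9 == FAILED_PRECONDITION
            return false;
        }
        *debug_log = status.DebugString();
        for (auto const& detail : status.details()) {
            debug_log->append("detail type: " + detail.type_url() + "\n");
        }
        return true;
    }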
|
|
In BatchUploadBlobs we accept short writes and, in case of no
progress, fall back to single-blob upload. Failure to upload blobs in
a batch is therefore not fatal and should not be reported at error
level. Decrease the log levels accordingly: a protocol-level failure
to upload is a performance-related event (as the retry needs
additional time), whereas catching an internal exception is something
that should not really happen, so there we warn the user.
|
|
... instead of relying on those dependencies being pulled in
indirectly.
|
|
...irrespective of the protocol used.
This API is useful in enabling just-mr and the SourceTree service
of just serve to interact seamlessly with any remote-execution
endpoint.
|
|
...irrespective of the protocol used.
This API is useful in enabling just-mr and the SourceTree service
of just serve to interact seamlessly with any remote-execution
endpoint.
|
|
These allow reading and writing file associations between known
digests in different CAS instances.
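For illustration only, such an association could be a small file
keyed by the source digest and containing the target digest; the real
storage layout differs:

    #include <filesystem>
    #include <fstream>
    #include <optional>
    #include <string>

    // Record that from_digest in one CAS corresponds to to_digest in
    // another, as a file named after the source digest.
    auto WriteAssociation(std::filesystem::path const& dir,
                          std::string const& from_digest,
                          std::string const& to_digest) -> bool {
        std::ofstream file{dir / from_digest};
        file << to_digest;
        return file.good();
    }

    auto ReadAssociation(std::filesystem::path const& dir,
                         std::string const& from_digest)
        -> std::optional<std::string> {
        std::ifstream file{dir / from_digest};
        std::string to_digest{};
        if (not (file >> to_digest)) {
            return std::nullopt;
        }
        return to_digest;
    }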
|
|
The rpc Execution::Execute returns a stream of
google.longrunning.Operation messages. When the client reads the
stream, the server can report that the operation is still in progress
and that the client has to wait. Before this patch, we were not
checking for this condition; as a result, an ongoing action was
interpreted as an execution failure.
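A simplified sketch of reading that stream; error handling and the
surrounding client code are reduced to the essentials:

    #include <optional>
    #include <google/longrunning/operations.pb.h>
    #include <grpcpp/grpcpp.h>

    // Read the stream returned by Execution::Execute and return the last
    // Operation seen. The caller has to check done(): a not-yet-done
    // operation means "still in progress, keep waiting", not a failure.
    auto ReadExecuteStream(
        grpc::ClientReader<google::longrunning::Operation>* reader)
        -> std::optional<google::longrunning::Operation> {
        google::longrunning::Operation last{};
        google::longrunning::Operation current{};
        bool received = false;
        while (reader->Read(&current)) {
            last = current;
            received = true;
        }
        if (not reader->Finish().ok() or not received) {
            return std::nullopt;  // transport error: a real failure
        }
        return last;  // may have done() == false: action still running
    }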
|
|
...and private members using lower_case_
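For illustration, a (made-up) class following this convention:

    #include <string>

    class ExampleConfig {
      public:
        [[nodiscard]] auto Name() const noexcept -> std::string const& {
            return name_;  // private members use lower_case_
        }

      private:
        std::string name_{};
        int use_count_{};
    };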
|
|
...since we use recursion for trees a lot, but skip this check manually.
|
|
Enable performance-enum-size check.
|
|
...proposed by clang-tidy.
Enable bugprone-optional-value-conversion check.
|
|
Enable performance-no-automatic-move check.
|
|
...proposed by clang-tidy.
Enable bugprone-assignment-in-if-condition check.
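For illustration, the kind of pattern this check flags, together with
the preferred form (example code, not taken from the repository):

    #include <optional>

    auto Example(std::optional<int> value) -> int {
        int result = 0;
        // Flagged by bugprone-assignment-in-if-condition:
        //   if (result = value.value_or(0)) { ... }
        // Preferred: perform the assignment as a separate statement.
        result = value.value_or(0);
        if (result != 0) {
            return result;
        }
        return -1;
    }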
|
|
Even though HashFunction is a small type, it still makes sense to
store it by reference to reflect the ownership: StorageConfig becomes
the main holder. Reference holders store HashFunction by const ref
and are not allowed to change it. However, they are free to return
HashFunction by value, since returning by reference would not improve
readability anyhow.
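A minimal sketch of this ownership pattern (the types below are made
up; the real HashFunction and StorageConfig interfaces are richer):

    #include <cstdint>

    class HashFunction {
      public:
        enum class Type : std::uint8_t { GitSHA1, PlainSHA256 };
        explicit HashFunction(Type type) noexcept : type_{type} {}
        [[nodiscard]] auto GetType() const noexcept -> Type { return type_; }

      private:
        Type type_;
    };

    // Main holder: owns the HashFunction by value.
    struct StorageConfig {
        HashFunction hash_function;
    };

    // Reference holder: stores a const reference to reflect that it does
    // not own (and cannot change) the hash function, but may hand it out
    // by value.
    class CASReader {
      public:
        explicit CASReader(StorageConfig const& config) noexcept
            : hash_function_{config.hash_function} {}
        [[nodiscard]] auto GetHashFunction() const noexcept -> HashFunction {
            return hash_function_;  // returned by value
        }

      private:
        HashFunction const& hash_function_;
    };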
|
|
...by calling the generalized implementation in CASUtils.
|
|
...that both use the same templated class CASContentValidator.
|
|
Although this change does not improve performance (protobuf's
mutable_*() methods allocate memory lazily anyway), it is better to
let protobuf do this on its own.
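A small illustration of that lazy allocation, using
google.longrunning.Operation as an arbitrary message with a
submessage field:

    #include <cassert>
    #include <google/longrunning/operations.pb.h>

    // A singular message field is only allocated on the first call to
    // its mutable_*() accessor; there is no need to pre-allocate it.
    auto Example() -> void {
        google::longrunning::Operation operation{};
        assert(not operation.has_error());       // not allocated yet
        operation.mutable_error()->set_code(0);  // allocated here
        assert(operation.has_error());
    }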
|
|
...passing constructed Artifact::ObjectInfo by rvalue, to avoid additional copies.
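A made-up stand-in for the pattern: passing an already-constructed
object by rvalue reference so it can be moved rather than copied:

    #include <string>
    #include <utility>
    #include <vector>

    struct ObjectInfo {
        std::string digest;
        bool is_executable{};
    };

    auto StoreInfo(std::vector<ObjectInfo>* infos, ObjectInfo&& info)
        -> void {
        // The rvalue reference lets us move the constructed object into
        // the container instead of copying it.
        infos->emplace_back(std::move(info));
    }

    auto Example() -> void {
        std::vector<ObjectInfo> infos{};
        StoreInfo(&infos, ObjectInfo{"abc123", false});
    }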
|
|
...and remove split serialization/deserialization logic.
|
|
...and remove split serialization/deserialization implementations.
|
|
...and use the qualified name ByteStreamUtils::kChunkSize
|
|
...since they were used only in tests.
|