Age | Commit message | Author |
|
|
... and rename appropriately to reflect contents more precisely
than the generic "common". This separation also disentangles
dependencies a bit.
|
|
... instead of relying on those dependencies being pulled in
indirectly.
|
|
Invalid entries, currently all upward symlinks (pending the
implementation of a better way of handling them), are now
identified and handled similarly to remote execution: in compatible
mode on the client side, during the handling of the local response,
and in native mode during the population of the local action result.
|
|
As populating the containers from the remote response only takes
place once, it should not be assumed that this cannot fail (for
example, if wrong or invalid entries were produced). Instead, return
error messages on failure to callers, which can log accordingly.
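
A minimal sketch of such a fallible populate step (names and types are
illustrative, not the project's actual interface):

    #include <iostream>
    #include <optional>
    #include <string>
    #include <vector>

    struct Entry {
        std::string path;
        bool valid;
    };

    // Hypothetical helper: returns std::nullopt on success, an error message
    // otherwise, so the caller decides how to log the failure.
    auto PopulateContainers(std::vector<Entry> const& response)
        -> std::optional<std::string> {
        for (auto const& entry : response) {
            if (not entry.valid) {
                return "invalid entry produced for path " + entry.path;
            }
            // ... otherwise add the entry to the respective container ...
        }
        return std::nullopt;  // success
    }

    int main() {
        auto error = PopulateContainers({{"out/a", true}, {"../escape", false}});
        if (error) {
            std::cerr << "populating containers failed: " << *error << "\n";
        }
    }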
|
|
...and move it to the common stage.
|
|
...with ArtifactDigestFactory::HashDataAs
|
|
The context is passed as a not_null const pointer to avoid binding to
temporaries. The LocalApi also stores the context as a const reference
for further access and for passing it on to LocalAction.
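
For illustration, a small sketch of this pattern, assuming MS GSL's
not_null and a heavily simplified context type (the real members are
omitted):

    #include <gsl/gsl>

    struct Context { /* storage, repo config, auth, ... (simplified) */ };

    class LocalApi {
      public:
        // A gsl::not_null<Context const*> parameter rules out null and, unlike
        // a plain const reference, cannot bind to a temporary (taking the
        // address of a temporary does not compile at the call site).
        explicit LocalApi(gsl::not_null<Context const*> const& context) noexcept
            : context_{*context} {}

        // ... methods pass context_ on, e.g., to the created LocalAction ...

      private:
        Context const& context_;  // stored as const ref for further access
    };

    int main() {
        Context const context{};
        LocalApi const api{&context};
        static_cast<void>(api);
    }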
|
|
...and replace it with instances created early via a builder pattern
and passed around explicitly.
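
A rough sketch of the builder idea with made-up names (not the project's
actual classes): configuration is gathered first, then an immutable
instance is created early, validated once, and only passed around
afterwards.

    #include <optional>
    #include <stdexcept>
    #include <string>
    #include <utility>

    // Made-up configuration type; the constructor is private, so instances
    // can only be obtained through the builder.
    class ExecutionConfig {
      public:
        class Builder;
        [[nodiscard]] auto RemoteAddress() const -> std::string const& {
            return remote_address_;
        }
        [[nodiscard]] auto NumJobs() const -> int { return num_jobs_; }

      private:
        ExecutionConfig(std::string address, int jobs)
            : remote_address_{std::move(address)}, num_jobs_{jobs} {}
        std::string remote_address_;
        int num_jobs_;
    };

    class ExecutionConfig::Builder {
      public:
        auto SetRemoteAddress(std::string address) -> Builder& {
            address_ = std::move(address);
            return *this;
        }
        auto SetNumJobs(int jobs) -> Builder& {
            jobs_ = jobs;
            return *this;
        }
        // Validate and create the instance once, early (e.g., in main).
        [[nodiscard]] auto Build() const -> ExecutionConfig {
            if (not address_) {
                throw std::runtime_error{"remote address not set"};
            }
            return ExecutionConfig{*address_, jobs_};
        }

      private:
        std::optional<std::string> address_{};
        int jobs_{1};
    };

    int main() {
        auto const config = ExecutionConfig::Builder{}
                                .SetRemoteAddress("localhost:8980")
                                .SetNumJobs(4)
                                .Build();
        return config.NumJobs() == 4 ? 0 : 1;
    }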
|
|
... and continue with the newly created copy as the target for
subsequent hard links. In this way, we get rid of the former
restriction that the number of identical inputs must not be greater
than the hard-link limit.
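
The idea, sketched with std::filesystem (a hypothetical helper under
simplified assumptions; real code also has to handle other errors and
ownership):

    #include <filesystem>
    #include <system_error>

    namespace fs = std::filesystem;

    // Stage one more identical input at 'dst', linking against
    // 'current_target'; on hitting the hard-link limit, fall back to a copy
    // and make that copy the target for subsequent links.
    inline void StageIdenticalInput(fs::path& current_target,
                                    fs::path const& dst) {
        std::error_code ec{};
        fs::create_hard_link(current_target, dst, ec);
        if (ec == std::errc::too_many_links) {
            fs::copy_file(current_target, dst);
            current_target = dst;  // continue linking against the new copy
        }
        else if (ec) {
            throw fs::filesystem_error{
                "staging input failed", current_target, dst, ec};
        }
    }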
|
|
Update the logic populating containers to use the new method, which
is aware of the maximum transfer limit.
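
A rough sketch of such a limit-aware collection step (illustrative only;
the actual method and its signature differ):

    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Blob {  // simplified
        std::string data;
    };

    // Flush the collected batch whenever adding the next blob would exceed
    // the maximum transfer size, so no single request grows beyond the limit.
    void CollectAndUpload(
        std::vector<Blob> const& blobs,
        std::size_t max_transfer_size,
        std::function<void(std::vector<Blob> const&)> const& upload) {
        std::vector<Blob> batch{};
        std::size_t batch_size = 0;
        for (auto const& blob : blobs) {
            if (not batch.empty() and
                batch_size + blob.data.size() > max_transfer_size) {
                upload(batch);
                batch.clear();
                batch_size = 0;
            }
            batch.emplace_back(blob);
            batch_size += blob.data.size();
        }
        if (not batch.empty()) {
            upload(batch);  // remainder
        }
    }

    int main() {
        std::vector<Blob> const blobs{
            {std::string(10, 'a')}, {std::string(20, 'b')}, {std::string(15, 'c')}};
        CollectAndUpload(blobs, /*max_transfer_size=*/25,
                         [](std::vector<Blob> const& batch) {
                             std::cout << "uploading " << batch.size()
                                       << " blob(s)\n";
                         });
    }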
|
|
...in LocalApi and BazelApi.
|
|
This reduces the code duplication between the local and bazel APIs
and improves code maintainability.
|
|
Some of the more specific issues addressed:
- missing log_level target/include
- header-only libs wrongly marking deps as private
- missing/misplaced gsl includes
|
|
Currently, the implementations of the split and splice operation are both
hidden behind the Bazel API implementation. This was sufficient to implement
splitting at the server and splicing at the client. In order to support the
other direction of splitting at the client and splicing at the server while
reusing their implementations, the code needs to be refactored. First, the
split and splice functionality is explicitly exposed in the general
execution API interface and implemented in the sub APIs. Second, the
implementations of split and splice are factored into a separate utils class.
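
A simplified, declaration-only sketch of the resulting structure (names
and signatures are illustrative; the project's actual interface differs):

    #include <optional>
    #include <string>
    #include <vector>

    using Digest = std::string;  // stand-in for a content digest type

    // Split and splice become part of the general execution API interface ...
    class ExecutionApi {
      public:
        virtual ~ExecutionApi() = default;
        [[nodiscard]] virtual auto SplitBlob(Digest const& blob_digest) const
            -> std::optional<std::vector<Digest>> = 0;
        [[nodiscard]] virtual auto SpliceBlob(
            Digest const& blob_digest,
            std::vector<Digest> const& chunk_digests) const
            -> std::optional<Digest> = 0;
    };

    // ... while the shared algorithmic part (e.g., content-defined chunking
    // and re-assembly) lives in a separate utils class that both the local
    // and the remote implementation can reuse.
    class ChunkingUtils {
      public:
        [[nodiscard]] static auto Split(std::string const& content)
            -> std::vector<std::string>;
        [[nodiscard]] static auto Splice(std::vector<std::string> const& chunks)
            -> std::string;
    };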
|
|
With the introduction of 'just serve', export targets can now also be
built independently of one another, based on their corresponding
minimal repository configuration as stored in the target cache key.
In this context, this commit changes the RepositoryConfig usage
from one global (static) instance to pointers passed as necessary
throughout the code.
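
For illustration, a minimal sketch of the change in style, with heavily
simplified types and a made-up consumer class:

    #include <gsl/gsl>
    #include <map>
    #include <string>

    struct RepositoryConfig {  // heavily simplified
        std::map<std::string, std::string> workspace_roots;
    };

    // Hypothetical consumer: instead of looking up some global
    // RepositoryConfig instance, it receives the configuration to use.
    class Traverser {
      public:
        explicit Traverser(
            gsl::not_null<RepositoryConfig const*> const& repo_config) noexcept
            : repo_config_{repo_config} {}

      private:
        gsl::not_null<RepositoryConfig const*> repo_config_;
    };

    int main() {
        RepositoryConfig repo_config{};            // created once, early
        Traverser const traverser{&repo_config};   // passed where needed
        static_cast<void>(traverser);
    }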
|
|
... which are only those actions that, besides giving exit code 0, also
created all the outputs they promised to.
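
Sketched as a predicate, with simplified, hypothetical types:

    #include <algorithm>
    #include <filesystem>
    #include <string>
    #include <vector>

    struct ActionResult {  // simplified
        int exit_code{};
        std::filesystem::path work_dir{};
        std::vector<std::string> declared_outputs{};
    };

    // Exit code zero alone is not enough: all declared outputs must exist.
    [[nodiscard]] inline auto WasSuccessful(ActionResult const& result) -> bool {
        return result.exit_code == 0 and
               std::all_of(result.declared_outputs.begin(),
                           result.declared_outputs.end(),
                           [&result](std::string const& output) {
                               return std::filesystem::exists(
                                   result.work_dir / output);
                           });
    }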
|
|
... with two minor code base changes compared to previous
use of gsl-lite:
- dag.hpp: ActionNode::Ptr and ArtifactNode::Ptr are not
wrapped in gsl::not_null<> anymore, due to lack of support
for wrapping std::unique_ptr<>. More specifically, the
move constructor is missing, rendering it impossible to
use std::vector<>::emplace_back().
- utils/cpp/gsl.hpp: New header file added to implement the
macros ExpectsAudit() and EnsureAudit(), asserts running
only in debug builds, which were available in gsl-lite but
are missing in MS GSL.
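
One possible shape of such audit macros (a sketch based on plain assert;
the actual utils/cpp/gsl.hpp may differ, and Divide is only a usage
example):

    #include <cassert>

    // Audit-level contract checks: compiled to no-ops in release (NDEBUG)
    // builds, active in debug builds only.
    #ifdef NDEBUG
    #define ExpectsAudit(x) static_cast<void>(0)
    #define EnsureAudit(x) static_cast<void>(0)
    #else
    #define ExpectsAudit(x) assert(x)
    #define EnsureAudit(x) assert(x)
    #endif

    int Divide(int num, int den) {
        ExpectsAudit(den != 0);  // precondition, checked in debug builds only
        int const result = num / den;
        EnsureAudit(result * den + num % den == num);  // postcondition
        return result;
    }

    int main() { return Divide(6, 3) == 2 ? 0 : 1; }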
|
|
- deduplicate dependencies
- remove unused dependency
|
|
The improved GC implementation uses refactored storage
classes instead of directly accessing "unknown" file paths.
The required storage class refactoring is quite substantial
and outlined in the following paragraphs.
The module `buildtool/file_system` was extended by:
- `ObjectCAS`: a plain CAS implementation for
reading/writing blobs and computing digests for a given
`ObjectType`. Depending on that type, files written to the
file system may have different properties (e.g., the x-bit
set) or the digest may be computed differently (e.g., tree
digests in non-compatible mode).
A new module `buildtool/storage` was introduced containing:
- `LocalCAS`: provides a common interface for the "logical
CAS", which internally combines three `ObjectCAS`s, one
for each `ObjectType` (file, executable, tree).
- `LocalAC`: implements the action cache, which needs the
`LocalCAS` for storing cache values.
- `TargetCache`: implements the high-level target cache,
which also needs the `LocalCAS` for storing cache values.
- `LocalStorage`: combines the storage classes `LocalCAS`,
`LocalAC`, and `TargetCache`. Those are initialized with
settings from `StorageConfig`, such as the build root base
path or number of generations for the garbage collector.
`LocalStorage` is templated with a Boolean parameter
`kDoGlobalUplink`, which indicates that, on every
read/write access, the garbage collector should be used
for uplinking across all generations (global).
- `GarbageCollector`: responsible for garbage collection and
the global uplinking across all generations. To do so, it
employs instances of `LocalStorage` with `kDoGlobalUplink`
set to false, in order to avoid endless recursion. The
actual (local) uplinking within two single generations is
performed by the corresponding storage class (e.g.,
`TargetCache` implements uplinking of target cache entries
between two target cache generations, etc.). Thereby, the actual
knowledge of how data should be uplinked is implemented by the
instance that is responsible for creating the data in the first place.
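
A very reduced sketch of the described structure (the real classes have
many more members and operations); the point is the Boolean template
parameter controlling whether accesses trigger global uplinking:

    #include <cstddef>

    struct StorageConfig {  // simplified
        std::size_t num_generations{2};
    };

    template <bool kDoGlobalUplink>
    class LocalStorage {
      public:
        explicit LocalStorage(StorageConfig const& config) noexcept
            : config_{config} {}

        void ReadBlob(/* digest */) const {
            if constexpr (kDoGlobalUplink) {
                // uplink the entry from older generations into the youngest
            }
            // ... actual read from the youngest generation ...
        }

      private:
        StorageConfig const& config_;
    };

    class GarbageCollector {
      public:
        static void TriggerGarbageCollection(StorageConfig const& config) {
            // Use non-uplinking storage instances while rotating generations,
            // to avoid endless recursion.
            LocalStorage</*kDoGlobalUplink=*/false> const storage{config};
            static_cast<void>(storage);
            // ... rotate generations; local uplinking between two generations
            // is done by the respective storage class ...
        }
    };

    int main() {
        StorageConfig const config{};
        GarbageCollector::TriggerGarbageCollection(config);
    }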
|
|
This code movement is required to break a cyclic dependency introduced
with the garbage collector: target_cache depends on garbage_collector,
and garbage_collector would in turn depend on target_cache to determine
the target-level cache directory. Moving this calculation to a more
general location breaks the cycle.
|
|
While there, also add all direct dependencies explicitly; directly using
dependencies that are pulled in only indirectly causes problems from a
maintainability point of view.
|
|
- LocalStorage: Add tree CAS and support reading Git trees
- LocalAction: Create Git tree for output directory
- LocalApi: Support availability and upload of Git trees
- LocalStorage: Support dumping tree to stream in native mode
|
|