| Commit message (Collapse) | Author | Age | Files | Lines |
| |
This new test verifies that environment variables are preserved in generated
artifacts, and that artifact data is consulted rather than irrelevant
local state when integrating an artifact checked out by its artifact name.
|
| |
When instantiating an ArtifactElement, use an ArtifactProject to ensure
that the Element does not accidentally gain access to any incidentally
existing project loaded from the current working directory.
Also pass the Artifact along to the Element's initializer directly, and
conditionally instantiate the element based on its artifact instead of
based on loading YAML configuration.
Fixes #1410
Summary of changes:
* _artifactelement.py:
- Now load the Artifact and pass it along to the Element constructor
- Now use an ArtifactProject for the element's project
- Remove overrides of Element methods; instead of behaving differently,
we now fill in all the blanks so that an Element behaves more
naturally when loaded from an artifact.
- Avoid redundantly loading the same artifact twice; if the artifact
is cached, we load it only once.
* element.py:
- Conditionally instantiate from the passed Artifact instead of
considering any YAML loading.
- Error out early in _prepare_sandbox() when trying to instantiate
a sandbox for an uncached artifact, since in that case we don't
have any SandboxConfig at hand to do so.
* _stream.py:
- Clear the ArtifactProject cache after loading artifacts
- Ensure we load a list of unique artifacts without any duplicates
* tests/frontend/buildcheckout.py: Expect a different error when trying
to checkout an uncached artifact
* tests/frontend/push.py, tests/frontend/artifact_show.py: No longer expect
duplicates to show up with wildcard statements which would capture multiple
versions of the same artifact (this changes because #1410 is fixed)
|
| |
This commit enriches the metadata we store on artifacts in the
new detached low/high diversity metadata files:
* The SandboxConfig is now stored in the artifact, allowing
one to perform activities such as launching sandboxes on
artifacts downloaded via artifact name (without backing
project data).
* The environment variables are now stored in the artifact,
similarly allowing one to shell into a downloaded artifact
which is unrelated to a loaded project.
* The element variables are now stored in the artifact, allowing
more flexibility in what the core can do with a downloaded
ArtifactElement.
* The element's strict key is now stored as well.
All of these can additionally enhance traceability
in the UI with commands such as `bst artifact show`.
Summary of changes:
* _artifact.py:
- Store new data in the new proto digests.
- Added new accessors to extract these new aspects from loaded artifacts.
- Bump the proto version number for compatibility
* _artifactcache.py: Adjusted to push and pull the new blobs and digests.
* element.py:
- Call Artifact.cache() with new parameters
- Expect the strict key from Artifact.get_meta_keys()
- Always specify the strict key when constructing an Artifact
instance which will later be used to cache the artifact
(i.e. the self.__artifact Artifact).
* _versions.py: Bump the global artifact version number, as this breaks
the artifact format.
* tests/cachekey: Updated cache key test for new keys.
|
| |
This commit changes SandboxConfig such that it now has a simple constructor
and a new SandboxConfig.new_from_node() classmethod to load it from a YAML
configuration node. The new version of SandboxConfig uses type annotations.
SandboxConfig also now sports a to_dict() method to help with serialization
in artifacts. This replaces SandboxConfig.get_unique_key(), which did exactly
the same thing, but uses the same names as expected in the YAML configuration
to achieve it.
The element.py code has been updated to use the classmethod, and to
use the to_dict() method when constructing cache keys.
This refactor is meant to allow instantiating a SandboxConfig without
any MappingNode, such that we can later load a SandboxConfig from an
Artifact instead of from a parsed Element.
This commit also updates the cache keys in the cache key test, as
the cache key format is slightly changed by the to_dict() method.
|
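The shape of this refactor can be sketched roughly as follows; the field names and the dict-based stand-in for a YAML node are assumptions made for illustration, not BuildStream's actual API:

```python
# Sketch of the SandboxConfig refactor: a plain constructor, a classmethod
# for YAML loading, and to_dict() for serialization and cache keys.
# Field names and the dict node stand-in are hypothetical.
from typing import Any, Dict


class SandboxConfig:
    def __init__(self, build_os: str, build_arch: str) -> None:
        self.build_os = build_os
        self.build_arch = build_arch

    @classmethod
    def new_from_node(cls, node: Dict[str, Any]) -> "SandboxConfig":
        # In BuildStream this would take a MappingNode; a dict models it here.
        return cls(build_os=node["build-os"], build_arch=node["build-arch"])

    def to_dict(self) -> Dict[str, str]:
        # Uses the same key names as the YAML configuration, so one
        # representation serves both artifact serialization and cache keys.
        return {"build-os": self.build_os, "build-arch": self.build_arch}
```

Because to_dict() mirrors the YAML key names, a config loaded from a node round-trips cleanly, which is what lets an Artifact later reconstruct a SandboxConfig without any parsed Element.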
| |
This changes how the scheduler works and adapts all the code that needs
adapting in order to be able to run in threads instead of in
subprocesses, which helps with Windows support and will allow some
simplifications in the main pipeline.
This addresses the following issues:
* Fix #810: All CAS calls are now made in the master process, and thus
share the same connection to the CAS server.
* Fix #93: We don't start as many child processes anymore, so the risk
of starving the machine is much lower.
* Fix #911: We now use `forkserver` for starting processes. We also
don't use subprocesses for jobs, so we should be starting fewer
subprocesses.
And the following high-level changes were made:
* cascache.py: Run the CasCacheUsageMonitor in a thread instead of a
subprocess.
* casdprocessmanager.py: Ensure start and stop of the process are thread
safe.
* job.py: Run the child in a thread instead of a process, and adapt how
we stop a thread, since we can't use signals anymore.
* _multiprocessing.py: Not needed anymore, since we are not using `fork()`.
* scheduler.py: Run the scheduler with a threadpool to run the child
jobs in. Also adapt how our signal handling is done, since we are not
receiving signals from our children anymore and can't kill them the
same way.
* sandbox: Stop using blocking signals to wait on the process, and use
timeouts all the time.
* messenger.py: Use a thread-local context for the handler, to allow for
multiple parameters in the same process.
* _remote.py: Ensure the start of the connection is thread safe.
* _signal.py: Allow blocking entry into the signal's context managers
by setting an event. This ensures no thread runs long-running
code after we have asked the scheduler to pause, and that all the
signal handlers are thread safe.
* source.py: Change the check around saving the source's ref. We are now
running in the same process, and thus the ref will already have been
changed.
|
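The central idea above can be sketched in miniature (this assumes nothing about BuildStream's real job API; names and the cancellation protocol are illustrative): jobs run as threads in one pool inside the master process, and a stop is requested with an Event that jobs poll, rather than delivered with a signal:

```python
# Minimal sketch: a thread pool replaces per-job subprocesses, and an
# Event replaces signal-based cancellation, since threads can't be
# signalled or killed the way child processes can.
import threading
from concurrent.futures import ThreadPoolExecutor

stop_requested = threading.Event()

def run_job(name: str) -> str:
    # A real job would check stop_requested at safe points during its work.
    if stop_requested.is_set():
        return f"{name}: cancelled"
    return f"{name}: done"

with ThreadPoolExecutor(max_workers=4) as pool:
    # map() preserves submission order in its results.
    results = list(pool.map(run_job, ["job-a", "job-b"]))
```

Running everything in one process is also what lets all CAS calls share a single connection, as noted for #810.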
| |
min-version
This test was broken, as it was failing for the wrong reason, even though
in both cases it was a missing YAML key. Fix the test so that it fails
due to the required cert specified in the cache config being missing.
|
| |
This test was broken, as it was failing for the wrong reason, even though
in both cases it was a missing YAML key. Fix the test so that it fails
due to the required cert specified in the cache config being missing.
|
| |
This test was broken, as it was failing for the wrong reason, even though
in both cases it was a missing YAML key. Fix the test so that it fails
due to the required cert specified in the cache config being missing.
|
| |
Skip an artifact expiry test when we don't have subsecond mtime
precision.
|
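One possible way to detect this condition is to set an mtime with a fractional second and see whether it survives a round trip; this is a sketch of the idea, not necessarily the helper used in the test suite:

```python
# Probe whether the filesystem at `directory` records subsecond mtimes,
# so that tests depending on mtimes differing within a second can be
# skipped on filesystems with whole-second precision.
import os
import tempfile

def have_subsecond_mtime(directory: str) -> bool:
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        # Set an mtime with a 0.5s fractional component...
        os.utime(path, ns=(1_000_000_500, 1_000_000_500))
        # ...and check whether the fraction survived the round trip.
        return os.stat(path).st_mtime_ns % 1_000_000_000 != 0
    finally:
        os.close(fd)
        os.remove(path)
```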
| |
| |
This tests a few glob patterns through `bst artifact show` and also
asserts that globs which match both elements and artifacts will produce
an error.
|
| |
We should not have different globbing behavior on the command line
than we have for split rules.
This should also make artifact globbing slightly more performant, as
the regular expression under the hood need not be recompiled for each
file being checked.
This commit also updates tests/frontend/artifact_list_contents.py to
use a double star `**` (globstar syntax) in order to match path
separators as well as all other characters in the list contents command.
|
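The performance point can be illustrated with a sketch (an illustration of the idea, not the split-rules implementation itself): translate the glob once into a regular expression, compile it, and reuse the compiled pattern for every file, with `**` crossing path separators while a single `*` does not:

```python
# Compile a glob to a regex once; `**` (globstar) matches across '/',
# a plain `*` stops at '/'. Reusing the compiled pattern avoids
# recompiling for every file checked.
import re

def compile_glob(pattern: str) -> "re.Pattern[str]":
    out = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")        # globstar: crosses '/' boundaries
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")     # plain star: stops at '/'
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(out) + "$")

matcher = compile_glob("usr/**")
```

With this behavior, matching `usr/**` in a list-contents command captures nested paths like `usr/lib/libfoo.so`, which a single-star pattern would not.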
| |
The patch does the following things:
* Ensure that we only ever try to match artifacts to user-provided
glob patterns if we are performing a command which tries to load
artifacts.
* Stop being selective about glob patterns: if the user provides
a pattern which does not end in ".bst", we still try to match it
against elements.
* Provide a warning when the provided globs did not match anything;
previously this code only provided the warning if artifacts were
not matched by globs, but not elements.
* tests/frontend/artifact_delete.py, tests/frontend/push.py,
tests/frontend/buildcheckout.py:
Fixed tests to not try to determine success by examining the
wording of a user-facing message; use the machine-readable errors
instead.
Fixes #959
|
| |
* Test that we can override an element in a subproject with a local element,
where the local element depends on another element in the subproject
through the same junction.
* Test that we can override the dependency in the subproject, proving
that reverse dependencies in that subproject are built against the
overridden element.
* Test that we can override a subproject element using a local link
to another element in the same subproject.
* Test that we can declare an override of a subproject element using
a link in that subproject, and that it is effective even if that
link is not traversed by the actual dependency chain.
* Check that when the same element is overridden multiple times in
a subproject, the override from the highest-level project wins,
as it should have the highest priority.
|
| |
Starting from Python 3.9, it seems that the `_replace()` method no
longer works on `platform.uname_result` objects, which are returned by
`platform.uname()`. This causes some of our tests to fail on Python 3.9.
See https://bugs.python.org/issue42163 for upstream issue.
Fix it by slightly changing the way we override the values of the
`platform.uname()` function, such that it works on both Python 3.9 and
3.8 (and below).
|
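The workaround can be sketched like this (an illustration, not the test suite's exact fixture; the field values are examples): instead of calling `_replace()` on the object returned by `platform.uname()`, replace `platform.uname` itself with a function returning a plain namedtuple carrying the overridden values, which works the same on 3.8 and 3.9:

```python
# Override platform.uname() without using uname_result._replace(),
# which broke on Python 3.9 (see bpo-42163).
import platform
from collections import namedtuple

_Uname = namedtuple(
    "_Uname", ["system", "node", "release", "version", "machine", "processor"]
)

def override_uname(machine: str) -> None:
    real = platform.uname()
    fake = _Uname(
        system=real.system, node=real.node, release=real.release,
        version=real.version, machine=machine, processor=machine,
    )
    # A real test fixture would restore the original afterwards.
    platform.uname = lambda: fake

override_uname("aarch64")
```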
| |
When printing log lines to the master log, we ensure that log lines are
printed with the element name and cache key related to the task
from which the messages are being issued.
When printing log lines to task-specific logs, we prefer to print
the element names and cache keys of the element from which
the log line was actually issued.
This new test asserts this behavior.
|
| |
This behavior regressed a while back when introducing the messenger
object in 0026e379 from merge request !1500.
Main behavior change:
- Messages in the master log always appear with the task element's
element name and cache key, even if the element or plugin issuing
the log line is not the primary task element.
- Messages logged in the task-specific log retain the context of the
element names and cache keys which are issuing the log lines.
Changes include:
* _message.py: Added the task element name & key members
* _messenger.py: Log the element key as well, if it is provided
* _widget.py: Prefer the task name & key when logging; we fall back
to the element name & key in case messages are being logged outside
of any ongoing task (main process/context)
* job.py: Unconditionally stamp messages with the task name & key.
Also removed some unused parameters here, clearing up an XXX comment
* plugin.py: Add a new `_message_kwargs` instance property; it is the
responsibility of the core base class to maintain the base keyword
arguments which are to be used as kwargs for Message() instances
created on behalf of the issuing plugin.
Use this property to construct messages in Plugin.__message() and to
pass kwargs along to Messenger.timed_activity().
* element.py: Update the `_message_kwargs` when the cache key is updated
* tests/frontend/logging.py: Fix test to expect the cache key in the logline
* tests/frontend/artifact_log.py: Fix test to expect the cache key in the logline
Fixes #1393
|
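The distinction between the two logs can be sketched like this (class and field names are hypothetical, chosen for illustration rather than taken from _message.py): each message carries both the issuing element's identity and the task element's identity, and each log picks the pair it prefers:

```python
# A message carries two identities: the element that issued the line,
# and the element of the ongoing task. The master log stamps lines with
# the task identity; a task-specific log keeps the issuer's identity.
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    element_name: str       # element issuing the log line
    element_key: str
    task_element_name: str  # primary element of the ongoing task
    task_element_key: str

def format_for_master_log(msg: Message) -> str:
    return f"[{msg.task_element_key}][{msg.task_element_name}] {msg.text}"

def format_for_task_log(msg: Message) -> str:
    return f"[{msg.element_key}][{msg.element_name}] {msg.text}"

msg = Message("Staging sources", "dep.bst", "aaaa", "app.bst", "bbbb")
```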
| |
`--use-buildtree` is now a boolean option for `bst shell`. If no
buildtree is available, an error is raised.
|
| |
Use single scheduler run for pulling dependencies and buildtree.
Deduplicate cache checks and corresponding warning/error messages
with and without pulling enabled.
|
| |
| |
Skip a test which relies on mtimes differing within a short timespan;
this will fail if it happens fast enough (which it should) on systems
which do not support subsecond precision mtimes.
|
| |
Skip the mtime related test if the underlying filesystem does not
support subsecond precision mtime.
|
| |
Skip some of the artifact expiry tests when we don't have
subsecond mtime precision.
|
| |
Conditionally skip tests which require subsecond mtime precision.
|
| |
| |
projects
|
| |
subproject to override
|
| |
As per #819, BuildStream should pull missing artifacts by default. The
previous behavior was to only pull missing buildtrees. A top-level
`--no-pull` option can easily be supported in the future.
This change makes it possible to use a single scheduler session (with
concurrent pull and push jobs). This commit also simplifies the code as
it removes the `sched_error_action` emulation, using the regular
frontend code path instead.
|
| |
Test passing a dictionary instead of a string in the filename list.
|
| |
| |
Adding some clarifications about an existing test
|
| |
This tests that built filter artifacts don't depend on build
dependencies for integration.
|
| |
This test checks that:
* We get SKIP messages for tracking local sources
* We get SKIP messages for tracking workspaced elements
* We get no messages at all for elements which have no sources
|
| |
| |
| |
This was actually dead code, since node.validate_keys() was called
on the configure dictionary without the legacy command steps. If any
element was using the legacy commands, it would have been met with
a load-time error anyway.
This commit also updates the cache key test, since removing these
legacy commands affects BuildElement internally in such a way as
to affect the cache keys.
|
| |
Instead of relying on Element.search(), use Element.configure_dependencies() to
configure the layout.
Summary of changes:
* scriptelement.py:
Change the ScriptElement.layout_add() API to take an Element instead of an
element name; this is now not optional (one cannot specify a `None` element).
This is an API-breaking change.
* plugins/elements/script.py:
Implement Element.configure_dependencies() in order to call ScriptElement.layout_add().
This is a breaking YAML format change.
* tests/integration: Script integration tests updated to use the new YAML format
* tests/cachekey: Updated for `script` element changes
|
| |
Element.stage_dependency_artifacts()
This patch adds a new test plugin which implements
Element.configure_dependencies() in order to offer better flexibility for
testing overlaps.
Newly added tests:
* Test overlap warnings and errors when staging elsewhere than
in the sandbox root.
* Test unstaged files failure modes when staging elsewhere than
in the sandbox root.
* Test various overlap behaviors of OverlapAction, when different
calls to Element.stage_dependency_artifacts() cause overlaps to
occur after staging files into separate directories.
|
| |
| |
Test that when the same dependency is added as a build and runtime
dependency separately, they end up being the same dependency which
is both a build & runtime dependency in the loaded build graph.
|
| |
LoadErrorReason.DUPLICATE_DEPENDENCY
|
| |
| |
This commit:
* Removes testing of the deprecated `fail-on-overlap` project configuration
option; this is going away soon and is unneeded.
* Tests that warnings are issued whenever they should be (some tests
were happy to see a successful run but failed to check for an
expected warning).
* Tests error/warning modes more evenly across tests; some were missing
the warning mode.
* Uses `bst show` instead of `bst build` for the undefined_variable
test; it should fail without needing a build.
|
| |
This test element was also staging artifacts in Element.assemble(), which is
now illegal.
|
| |
This tests that, in non-strict mode, a cached artifact matching the
strict cache key is preferred to a more recent artifact matching only
the weak cache key.
|
| |
| |
| |
This is a regression test for the previously broken dependency cache key
check in non-strict mode.
|
| |
This is a large breaking change; a summary of the changes:
* The Scope type is now private, since Element plugins no longer have
the choice of viewing any other scopes.
* Element.dependencies() API change.
It now accepts a "selection" (sequence) of dependency elements, so
that Element.dependencies() can iterate over a collection of dependencies,
ensuring that we visit every element only once even when we
need to iterate over multiple elements' dependencies.
The old API is moved to Element._dependencies() and still used internally.
* Element.stage_dependency_artifacts() API change.
This gets the same treatment as Element.dependencies(), and the old
API is also preserved as Element._stage_dependency_artifacts(), so
that the CLI can stage things for `bst artifact checkout` and such.
* Element.search() API change.
The Scope argument is removed, and the old API is preserved as
Element._search() temporarily, until we can remove it completely.
|
| |
This prepares the ground for policing the dependencies which are visible
to an Element plugin, such that plugins are only allowed to see the
elements in their Scope.BUILD scope, even if they call Element.dependencies()
on a dependency.
This commit does the following:
* Element.dependencies() is now a user-facing frontend which yields
ElementProxy objects instead of Elements.
* Various core codepaths have been updated to call the internal
Element._dependencies() codepath which still returns Elements.
|