| Commit message (Collapse) | Author | Age | Files | Lines |
|
| |
This removes the stateful Pipeline object and leaves behind only a toolbox
of functions for constructing element lists, such as _pipeline.get_selection()
and _pipeline.except_elements(), and some helpers for asserting element states
on lists of elements.
This makes it easier for Stream to manage its own internal state, so that
Stream can more easily decide to operate without requiring that a Project
instance be available.
This also adds type annotations to the new version of _pipeline.py.
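A rough sketch of the "toolbox" shape this leaves behind, using stand-in types (the real _pipeline.py functions operate on BuildStream Elements; the Element model and the consistency assertion here are simplified illustrations):

```python
# Free functions over element lists instead of a stateful Pipeline object.
# get_selection()/except_elements() are named after the commit message;
# everything else is an illustrative stand-in.
from typing import List, Sequence


class Element:
    def __init__(self, name: str, consistent: bool = True) -> None:
        self.name = name
        self.consistent = consistent


def except_elements(elements: List[Element], excepted: Sequence[Element]) -> List[Element]:
    # Filter the excepted elements out of a previously computed selection
    excluded = {e.name for e in excepted}
    return [e for e in elements if e.name not in excluded]


def get_selection(targets: List[Element], except_targets: Sequence[Element] = ()) -> List[Element]:
    # Build the element list for a session from the targets,
    # honouring any excepted elements
    return except_elements(list(targets), except_targets)


def assert_consistent(elements: List[Element]) -> None:
    # Helper for asserting element states on a list of elements
    inconsistent = [e.name for e in elements if not e.consistent]
    if inconsistent:
        raise AssertionError("Inconsistent elements: {}".format(", ".join(inconsistent)))
```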
|
|
|
|
| |
This function is used only once and is quite unnecessary.
|
| |
Replaces Pipeline method `track_cross_junction_filter()`.
This changes the error domain for invalid cross junction tracking, so
the following two test cases are updated:
* testing/_sourcetests/track.py
* tests/frontend/track.py
|
|
|
|
| |
Replaces Pipeline `resolve_elements()`.
|
|
|
|
|
|
| |
This replaces the pipeline `load()` method.
This also does some rewording in `Stream._load_elements_from_targets()`.
|
|
|
|
| |
This replaces the corresponding Pipeline method `check_remotes()`.
|
|
|
|
| |
Use Messenger convenience functions instead.
|
| |
In errors pertaining to failing to launch a shell with a buildtree.
Other related updates:
- _frontend/cli.py: Propagate machine readable error codes in `bst shell`
This command prefixes a reported error, so it rewraps the error into
an AppError; this wrapper needs to propagate the originating machine
readable error.
- tests/integration/shell.py, tests/integration/shellbuildtrees.py:
Updated to use new machine readable errors
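A minimal sketch of the rewrapping described above, with stand-in exception classes (not the real BuildStream error hierarchy): the wrapper keeps the originating machine readable `reason` so tests can assert on it instead of on message wording.

```python
# Illustrative stand-ins for BuildStream's error classes
class ShellError(Exception):
    def __init__(self, message: str, reason: str) -> None:
        super().__init__(message)
        self.reason = reason


class AppError(Exception):
    def __init__(self, message: str, reason: str = None) -> None:
        super().__init__(message)
        self.reason = reason


def launch_shell(have_buildtree: bool) -> None:
    if not have_buildtree:
        # A hypothetical machine readable reason code
        raise ShellError("No buildtree is available", reason="shell-missing-buildtree")


def shell_command() -> None:
    try:
        launch_shell(have_buildtree=False)
    except ShellError as e:
        # Rewrap with a user facing prefix, but propagate the originating
        # machine readable reason rather than discarding it
        raise AppError("Error launching shell: {}".format(e), reason=e.reason) from e
```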
|
| |
When Stream is asked for a list of artifacts to show for
the purpose of `bst artifact show`, it squashed the element
name together with the artifact name before it was displayed in the
frontend.
Instead, do the special casing in the frontend.
|
| |
We created the State object in the core for the purpose of advertising
state to the frontend; the frontend can register callbacks and get
updates on state changes (implicit invocation in the frontend). State
always belongs to the core, and the frontend can only read it.
When the frontend asks the core to do something, this should always
be done with an explicit function call, and preferably not via the
State object, as that confuses the role of State, which is only a read-only
state advertising desk.
This was broken (implemented backwards) for job retries: instead we had
the frontend telling State "It has been requested that this job be retried!",
and the core registering callbacks for that frontend request. This
direction of implicit invocation should not happen (in fact the core should
never have to register callbacks on the State object at all).
Summary of changes:
* _stream.py: Change _failure_retry(), which was for some reason
private despite being called from the frontend, to an explicit function
call named "retry_job()".
Instead of calling into the State object and causing core-side
callbacks to be triggered, later to be handled by the Scheduler,
implement the retry directly in the Stream: this implementation
deals only with Queues and State, which already belong directly to
the Stream object, so there is no reason to trouble the Scheduler
with this.
* _scheduler.py: Remove the callback handling the State "task retry"
event.
* _state.py: Remove the task retry callback chain completely.
* _frontend/app.py: Call stream.retry_job() instead of
stream.failure_retry(), now passing along the task's action name
rather than the task's ID.
This API now assumes that Stream.retry_job() can only be called for
a task which originates from a scheduler Queue, and expects to be
given the action name of the queue in which the given element has
failed and should be retried.
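A sketch of the direction of invocation described above, with illustrative names only: the core publishes state changes through a read-only State object that the frontend observes, while the frontend requests actions via an explicit call such as retry_job(), handled directly by the Stream.

```python
from typing import Callable, List, Tuple


class State:
    # Read-only advertising desk: the core writes, the frontend registers
    # callbacks and reads; the core never registers callbacks here
    def __init__(self) -> None:
        self._callbacks: List[Callable[[str, str], None]] = []

    def register_task_failed_callback(self, cb: Callable[[str, str], None]) -> None:
        self._callbacks.append(cb)

    def fail_task(self, action_name: str, element_name: str) -> None:
        for cb in self._callbacks:
            cb(action_name, element_name)


class Stream:
    def __init__(self) -> None:
        self.state = State()
        self.retried: List[Tuple[str, str]] = []

    def retry_job(self, action_name: str, element_name: str) -> None:
        # Explicit frontend -> core request: handled directly by the Stream,
        # which owns the queues and the state, not routed through State
        self.retried.append((action_name, element_name))
```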
|
| |
When instantiating an ArtifactElement, use an ArtifactProject to ensure
that the Element does not accidentally have access to any incidentally
existing project loaded from the current working directory.
Also pass the Artifact along to the Element's initializer directly, and
conditionally instantiate the element based on its artifact instead of
based on loading YAML configuration.
Fixes #1410
Summary of changes:
* _artifactelement.py:
- Now load the Artifact and pass it along to the Element constructor
- Now use an ArtifactProject for the element's project
- Remove overrides of Element methods; instead of behaving differently,
we now just fill in all the blanks so that an Element behaves more
naturally when loaded from an artifact.
- Avoid redundantly loading the same artifact twice; if the artifact
was cached then we load only one artifact.
* element.py:
- Conditionally instantiate from the passed Artifact instead of
considering any YAML loading.
- Error out early in _prepare_sandbox() in case that we are trying
to instantiate a sandbox for an uncached artifact, in which case
we don't have any SandboxConfig at hand to do so.
* _stream.py:
- Clear the ArtifactProject cache after loading artifacts
- Ensure we load a list of unique artifacts without any duplicates
* tests/frontend/buildcheckout.py: Expect a different error when trying
to checkout an uncached artifact
* tests/frontend/push.py, tests/frontend/artifact_show.py: No longer expect
duplicates to show up with wild card statements which would capture multiple
versions of the same artifact (this changes because of #1410 being fixed)
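A sketch of the conditional construction described above, with simplified stand-in classes (not the real Element/Artifact API): an element either loads YAML configuration or fills in its state from the Artifact passed to the constructor, and _prepare_sandbox() errors out early for uncached artifacts.

```python
class Artifact:
    # Illustrative stand-in for BuildStream's Artifact
    def __init__(self, ref: str, cached: bool) -> None:
        self.ref = ref
        self.cached = cached


class Element:
    def __init__(self, name: str, artifact: Artifact = None) -> None:
        self.name = name
        self.artifact = artifact
        if artifact is None:
            # Regular elements load their configuration from YAML
            self._load_from_yaml()

    def _load_from_yaml(self) -> None:
        self.config = {}  # a real implementation would parse the .bst file

    def _prepare_sandbox(self) -> None:
        # Error out early for uncached artifacts: without the cached
        # artifact we have no SandboxConfig to construct a sandbox from
        if self.artifact is not None and not self.artifact.cached:
            raise RuntimeError("Artifact {} is not cached".format(self.artifact.ref))
```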
|
|
| |
Instead of delegating this job to a Project instance, implement
the application state cleanup directly on the Stream object.
This commit also:
* Removes Project.cleanup()
* Rewords incorrect API documentation for Element._reset_load_state()
|
|
|
|
|
|
|
|
|
| |
Instead of having _pipeline.py implement load_artifacts() by calling
_project.py's other implementation of load_artifacts(), just
implement _load_artifacts() directly in _stream.py.
This of course removes the load_artifacts() implementations from
_pipeline.py and _project.py.
|
| |
Don't use fnmatch(), as this has a different behavior from utils.glob(),
which is a bit closer to what we expect from a shell (* matches everything
except path separators, while ** matches path separators), and also
consistent with other places where BuildStream handles globbing, like
when considering split rules.
We should not have a different globbing behavior than split rules for
the command line.
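The difference can be illustrated as follows; the translator below is a minimal stand-in (the real utils.glob() syntax may differ in detail), but it shows the shell-style semantics: `*` stays within one path component while `**` crosses separators, whereas fnmatch's `*` happily matches `/`.

```python
import fnmatch
import re


def shell_glob_match(pattern: str, path: str) -> bool:
    # Translate a shell-style glob to a regex:
    #   "**" -> ".*"     (matches across path separators)
    #   "*"  -> "[^/]*"  (stays within one path component)
    #   "?"  -> "[^/]"
    out = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if c == "*":
            if pattern[i : i + 2] == "**":
                out.append(".*")
                i += 2
                continue
            out.append("[^/]*")
        elif c == "?":
            out.append("[^/]")
        else:
            out.append(re.escape(c))
        i += 1
    return re.fullmatch("".join(out), path) is not None
```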
|
| |
The patch does the following things:
* Ensure that we only ever try to match artifacts to user provided
glob patterns if we are performing a command which tries to load
artifacts.
* Stop being selective about glob patterns: if the user provides
a pattern which does not end in ".bst", we still try to match it
against elements.
* Provide a warning when the provided globs did not match anything;
previously this warning was only issued when artifacts were not
matched by globs, but not elements.
* tests/frontend/artifact_delete.py, tests/frontend/push.py,
tests/frontend/buildcheckout.py:
Fixed tests to not determine success by examining the wording
of a user facing message; use the machine readable errors instead.
Fixes #959
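The matching policy above can be sketched like this (illustrative only; fnmatch is used here purely as a stand-in matcher, and the warning policy rather than the glob dialect is the point): every user glob is tried against both element names and artifact refs, and globs that matched nothing at all are collected for a warning.

```python
import fnmatch
from typing import List, Sequence, Tuple


def match_globs(
    globs: Sequence[str],
    element_names: Sequence[str],
    artifact_refs: Sequence[str],
) -> Tuple[List[str], List[str]]:
    matched: List[str] = []
    unmatched: List[str] = []
    for pattern in globs:
        # Try the pattern against elements regardless of whether it
        # ends in ".bst", and against artifact refs as well
        hits = [n for n in element_names if fnmatch.fnmatch(n, pattern)]
        hits += [r for r in artifact_refs if fnmatch.fnmatch(r, pattern)]
        if hits:
            matched.extend(hits)
        else:
            # Warn about this pattern: it matched neither elements
            # nor artifacts
            unmatched.append(pattern)
    return matched, unmatched
```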
|
|
|
|
|
|
| |
This enqueue_plan can take a long time, as in some cases it triggers
verification of the 'cached' state for sources, which is expensive.
|
|
|
|
|
| |
`--use-buildtree` is now a boolean option for `bst shell`. If no
buildtree is available, an error is raised.
|
|
|
|
|
|
| |
Use single scheduler run for pulling dependencies and buildtree.
Deduplicate cache checks and corresponding warning/error messages
with and without pulling enabled.
|
| |
|
| |
| |
As per #819, BuildStream should pull missing artifacts by default. The
previous behavior was to only pull missing buildtrees. A top-level
`--no-pull` option can easily be supported in the future.
This change makes it possible to use a single scheduler session (with
concurrent pull and push jobs). This commit also simplifies the code as
it removes the `sched_error_action` emulation, using the regular
frontend code path instead.
|
|
|
|
|
|
|
|
| |
`State.add_task()` required the job name to be unique in the session.
However, the tuple `(action_name, full_name)` is not guaranteed to be
unique. E.g., multiple `ArtifactElement` objects with the same element
name may participate in a single session. Use a separate task identifier
to fix this.
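A sketch of the fix with illustrative names (not the real State API): tasks are keyed by a separately generated unique identifier, since `(action_name, full_name)` can repeat when e.g. several ArtifactElements share an element name.

```python
import itertools
from typing import Dict


class Task:
    def __init__(self, task_id: int, action_name: str, full_name: str) -> None:
        self.id = task_id
        self.action_name = action_name
        self.full_name = full_name


class State:
    def __init__(self) -> None:
        # Monotonically increasing counter for unique task identifiers
        self._task_ids = itertools.count()
        self.tasks: Dict[int, Task] = {}

    def add_task(self, action_name: str, full_name: str) -> Task:
        # (action_name, full_name) need not be unique in a session;
        # the generated id is, so use it as the key
        task = Task(next(self._task_ids), action_name, full_name)
        self.tasks[task.id] = task
        return task
```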
|
| |
This is a large breaking change; a summary of the changes:
* The Scope type is now private, since Element plugins do not have
the choice to view any other scopes.
* Element.dependencies() API change
Now it accepts a "selection" (sequence) of dependency elements, so
that Element.dependencies() can iterate over a collection of dependencies,
ensuring that we iterate over every element only once even when we
need to iterate over multiple element's dependencies.
The old API is moved to Element._dependencies() and still used internally.
* Element.stage_dependency_artifacts() API change
This gets the same treatment as Element.dependencies(), and the old
API is also preserved as Element._stage_dependency_artifacts(), so
that the CLI can stage things for `bst artifact checkout` and such.
* Element.search() API change
The Scope argument is removed, and the old API is preserved as
Element._search() temporarily, until we can remove this completely.
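The selection-based iteration can be sketched as follows (illustrative shape only, not the real signature): iterating each element separately would visit shared dependencies more than once, while iterating a selection visits each element exactly once.

```python
from typing import Iterator, Sequence


class Element:
    def __init__(self, name: str, deps: Sequence["Element"] = ()) -> None:
        self.name = name
        self.deps = list(deps)

    @staticmethod
    def dependencies(selection: Sequence["Element"]) -> Iterator["Element"]:
        # Depth-first traversal over the whole selection, deduplicated
        # so that each element is yielded only once
        visited = set()

        def visit(element: "Element") -> Iterator["Element"]:
            if id(element) in visited:
                return
            visited.add(id(element))
            for dep in element.deps:
                yield from visit(dep)
            yield element

        for element in selection:
            yield from visit(element)
```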
|
| |
This prepares the ground for policing the dependencies which are visible
to an Element plugin, such that plugins are only allowed to see the
elements in their Scope.BUILD scope, even if they call Element.dependencies()
on a dependency.
This commit does the following:
* Element.dependencies() is now a user facing frontend which yields
ElementProxy elements instead of Elements.
* Various core codepaths have been updated to call the internal
Element._dependencies() codepath which still returns Elements.
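A minimal sketch of the proxy idea, with stand-in classes: plugins receive a proxy that forwards a restricted view of an Element, so a plugin only sees build-scope dependencies, while the internal `_dependencies()` codepath keeps full visibility.

```python
from typing import Iterator, Sequence


class Element:
    def __init__(
        self,
        name: str,
        build_deps: Sequence["Element"] = (),
        runtime_deps: Sequence["Element"] = (),
    ) -> None:
        self.name = name
        self.build_deps = list(build_deps)
        self.runtime_deps = list(runtime_deps)

    def _dependencies(self) -> Iterator["Element"]:
        # Internal codepath: full visibility, returns real Elements
        yield from self.build_deps
        yield from self.runtime_deps


class ElementProxy:
    def __init__(self, element: Element) -> None:
        self._element = element

    @property
    def name(self) -> str:
        return self._element.name

    def dependencies(self) -> Iterator["ElementProxy"]:
        # Public codepath: only build-scope deps, wrapped in proxies
        # so further traversal is policed the same way
        for dep in self._element.build_deps:
            yield ElementProxy(dep)
```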
|
| |
Sources have been cached in CAS individually, except for sources that
transform other sources, which have been cached combined with all
previous sources of the element. This caching structure may be confusing,
as sources are specified in the element as a list, and it is not a good
fit for #1274, where we want to support caching individual sources in a
Remote Asset server with a BuildStream-independent URI (the
`directory` configuration would be especially problematic).
This replaces the combined caching of 'previous' sources with an
element-level source cache, which caches all sources of an element
staged together. Sources that don't depend on previous sources are still
cached individually.
This also makes it possible to add a list of all element sources to the
source proto used by the element-level source cache.
|
|
|
|
|
|
| |
`skip_cached` skips elements with a cached artifact. However, for
`source_push()` we need the sources of an element and having a cached
artifact does not guarantee that the sources are cached, too.
|
|
|
|
|
|
|
| |
Avoid redundantly announcing the session heading in the frontend
and only selectively announce it once for the main session.
Fixes #1369
|
|
|
|
| |
Call the relevant scheduler methods directly from the stream.
|
|
|
|
|
| |
This removes all remaining notifications coming from the scheduler,
and replaces them with callbacks.
|
|
|
|
|
| |
The stream itself calls the `run` method on the scheduler; we don't
need another indirection.
|
|
|
|
|
| |
We are calling the scheduler, and its returning correctly already tells
us this.
|
|
|
|
|
| |
We are calling the scheduler, and its returning correctly already tells
us this.
|
|
|
|
|
| |
Stop using 'Notifications' for retries; the State is what handles
the callbacks required for every change in element status.
|
|
|
|
|
| |
This moves all implementations of 'start_time' into a single place
for easier handling, removing round trips of notifications.
|
|
|
|
|
| |
The State is the interface between the two; there is no need to do
multiple round trips to handle such notifications.
|
|
|
|
|
| |
The messenger should be the one receiving messages directly; we don't
need this indirection.
|
| |
| |
Instead of passing around many details through calling signatures
throughout the loader code, create a single LoadContext object
which holds any overall loading state along with any values which
are constant across a full load process.
Overall this patch does:
* _frontend/app.py: No need to pass Stream.fetch_subprojects() along anymore
* _loader/loadelement.pyx: collect_element_no_deps() no longer takes a task argument
* _loader/loader.py: Now the Loader has a `load_context` member, and no more
`_fetch_subprojects` member or `_context` members
Further, `rewritable` and `ticker` are no longer passed along through all
of the recursive calling signatures, and `ticker` itself is finally removed,
since it was replaced long ago by the `Task` API from `State`.
* _pipeline.py: The load() function no longer has a `rewritable` parameter
* _project.py: The Project() is responsible for creating the toplevel
LoadContext() if one doesn't exist yet, and this is passed through
to the Loader() (and also passed to the Project() constructor by the
Loader() when instantiating subprojects).
* _stream.py: The `Stream._fetch_subprojects()` is now private and set
on the project when giving the Project to the Stream in `Stream.set_project()`,
also the Stream() sets the `rewritable` state on the `LoadContext` at the
earliest opportunity, as the `Stream()` is the one who decides this detail.
Further, some double underscore private functions are now regular single
underscore ones; there was no reason for this inconsistency.
* tests/internals/loader.py: Updated for API change
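The pattern can be sketched like this, with illustrative names only (the real LoadContext carries more state): one object holds the values constant across a full load, and subproject loaders share the toplevel instance instead of receiving each value through their call signatures.

```python
from dataclasses import dataclass


@dataclass
class LoadContext:
    # Values constant across one full load process; `rewritable` is set
    # once by the Stream at the earliest opportunity
    rewritable: bool = False


class Loader:
    def __init__(self, project_name: str, load_context: LoadContext) -> None:
        self.project_name = project_name
        self.load_context = load_context

    def load_subproject(self, name: str) -> "Loader":
        # Subproject loaders share the same toplevel LoadContext
        # rather than being handed rewritable/ticker/etc. individually
        return Loader(name, self.load_context)
```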
|
| |
| |
This API was only used to issue a not very useful message in the CLI
when removing a workspace the user happened to have invoked BuildStream
from.
The API also didn't really make any sense, being documented as:
"Checks whether the workspace belonging to element_name is
required to load the project"
This in fact meant that having the workspace open was required in order
to invoke BuildStream at all, due to the metadata encoded into the
workspace directory.
|
|
|
|
| |
This reverts commit aa25f6fcf49f0015fae34dfd79b4626a816bf886.
|
|
|
|
| |
This reverts commit 14e32a34f67df754d9146efafe9686bfe6c91e50.
|
|
|
|
|
|
|
| |
Part of https://gitlab.com/BuildStream/buildstream/-/issues/1068.
Make behavior of `shell` command similar to other commands that need
sources like `build`, `workspace open`, `source checkout` etc.
|
| |
|
|
|
|
| |
This is no longer needed now that we support caching buildtrees in CAS.
|
|
|
|
|
|
| |
The new incremental build approach uses the buildtree from the last
build (successful or not) and no longer needs to know any information
about the last successful build.
|
|
|
|
|
|
| |
This will no longer be used in incremental builds. Successful configure
commands will be recorded with a marker file in the buildtree of the
last build artifact.
|
|
|
|
|
|
| |
'_source_cached' is not explicit enough, as it doesn't distinguish
between sources in their respective caches and sources in the global
sourcecache.
|
| |
PipelineSelection is one of the few stringy types that weren't
converted to FastEnum, presumably because we lacked a mechanism for
only allowing a sub-set of options as CLI arguments.
We've since re-designed this, and as part of the UI/UX refactor we'd
like to clean this up generally, but that is probably still a while out.
Since that hasn't happened, for now, this adds a feature to the
FastEnumType that allows specifying only a subset of values is allowed
for a specific command, so that we can use the type as a proper
enum.
We also get rid of a number of accidental uses of strings, and move
PipelineSelection to buildstream.types so that we don't have a
significant import overhead for it.
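The feature can be sketched as follows (illustrative only; the real FastEnumType is a click parameter type, and the names here are stand-ins): a converter built on an enum that only accepts a declared subset of the enum's values for a given command.

```python
import enum
from typing import Sequence


class PipelineSelection(enum.Enum):
    NONE = "none"
    RUN = "run"
    BUILD = "build"
    ALL = "all"


class EnumSubsetType:
    # Hypothetical stand-in for a CLI parameter type restricting an
    # enum to a per-command subset of its values
    def __init__(self, enum_cls, allowed: Sequence[enum.Enum]) -> None:
        self.enum_cls = enum_cls
        self.allowed = set(allowed)

    def convert(self, value: str) -> enum.Enum:
        try:
            member = self.enum_cls(value)
        except ValueError:
            raise ValueError(
                "not a valid {}: {!r}".format(self.enum_cls.__name__, value)
            )
        if member not in self.allowed:
            # Valid enum value, but not permitted for this command
            raise ValueError("{!r} is not allowed for this command".format(value))
        return member
```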
|
|
|
|
|
| |
Newer pylint versions detect and complain about unnecessary elif/else
after a continue/break/return clause. Let's remove them.
|