Change-Id: I04ed11272c7ddfb22829c71607772b86f216d457
Co-authored-by: Sam Thursfield <sam.thursfield@codethink.co.uk>
Since upstream pip does not want to merge
https://github.com/pypa/pip/pull/2371, we should avoid depending on
that pull request.
To find runtime dependencies we now run pip install inside a virtual
env, then run pip freeze to obtain the dependency set. This has the
advantage that nearly all the work is done by pip.
Originally the python extensions were designed to look for upstream git
repos. In practice this is unreliable, and it won't be compatible with
obtaining dependencies using pip install. The downside of this approach
is that all lorries will be tarballs; the upshot is that we can now
automatically import many packages that we couldn't import before.
Another upshot is that we may be able to remove a lot of the spec
processing and validation code, if we're willing to worry less about
build dependencies; we're not sure whether we should be, though.
We've had encouraging results using this patch so far: we are now able
to import, without user intervention, packages that failed previously,
such as boto, persistent-pineapple, jmespath and coverage. requests
also almost imported successfully, but appears to require a release of
pytest that is uploaded as a zip.
Change-Id: I705c6f6bd722df041d17630287382f851008e97a
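The virtualenv-and-freeze approach described above can be sketched roughly as follows. This is a modern Python 3 illustration only: the function names, the return format, and the use of the venv module are assumptions, not the tool's real interface.

```python
import os
import subprocess
import sys
import tempfile

def parse_freeze_output(text):
    # 'pip freeze' prints one 'name==version' line per installed package.
    return dict(line.split('==', 1)
                for line in text.splitlines() if '==' in line)

def find_runtime_deps(source_dir):
    # Install the package into a throwaway virtualenv, then read the
    # full dependency set back with 'pip freeze'. pip does nearly all
    # the work: resolving, fetching and building the dependencies.
    with tempfile.TemporaryDirectory() as venv_dir:
        subprocess.check_call([sys.executable, '-m', 'venv', venv_dir])
        pip = os.path.join(venv_dir, 'bin', 'pip')  # POSIX layout assumed
        subprocess.check_call([pip, 'install', source_dir])
        frozen = subprocess.check_output([pip, 'freeze']).decode()
        return parse_freeze_output(frozen)
```

Note that the freeze output includes the package itself as well as its dependencies; the real extension would need to filter that out.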
Change-Id: I4719d7a15ef133d0dcd72061ac7e612723b828df
Change-Id: I3e8077d1e91a28ac0ed30cb0e8102622c866a8e0
Change-Id: Ieb1a0d2047746e68db0e494d9dc73fe5aae782a3
Change-Id: I89ea4bbb26cff27a6a2dcb71e3101e52b8f91002
It's nice to have some idea right away of what went wrong.
A common error is running the import tool on a system without the
patched version of Pip. This now produces the following sort of message:

    baserockimport/exts/python.find_deps: Failed to get runtime dependencies
    for numpy at checkouts/python_numpy-tarball/. Output from Pip:
    Usage:
      pip install [options] <requirement specifier> [package-index-options] ...
      pip install [options] -r <requirements file> [package-index-options] ...
      pip install [options] [-e] <vcs project url> ...
      pip install [options] [-e] <local project path> ...
      pip install [options] <archive url/path> ...
    no such option: --list-dependencies
Change-Id: I902920b46b29b1e2b18736b2ff2b5f4f4cdb42df
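Surfacing the subprocess output in the error can be sketched like this (illustrative names; not the tool's actual code):

```python
import subprocess

def get_runtime_deps_or_explain(pip_args, package, checkout):
    # Run a pip subcommand and, on failure, include pip's own output
    # in the raised error, so the user sees right away what went wrong
    # (for example, 'no such option: --list-dependencies' from an
    # unpatched pip).
    proc = subprocess.Popen(pip_args, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    output = proc.communicate()[0].decode()
    if proc.returncode != 0:
        raise RuntimeError(
            'Failed to get runtime dependencies for %s at %s. '
            'Output from Pip:\n%s' % (package, checkout, output))
    return output
```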
Some people now put urls in the registry with '+git' at the start.
The tool doesn't recognise this, but these urls still end in '.git',
so this change just checks for such urls and cuts out the '+git'.
The tool finds the repos once more. Panic over.
Change-Id: I4e807797ed914fa1dbe96cc7e05263228785c86d
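The fix can be illustrated like this (hypothetical function name; the prefix format is assumed to look like 'git+https://...', based on the '.git' suffix mentioned above):

```python
def normalise_repo_url(url):
    # Registry entries sometimes carry a prefixed scheme, as in
    # 'git+https://github.com/foo/bar.git'. Such urls still end in
    # '.git', so detect them and cut the 'git+' out before handing
    # the url on.
    if url.endswith('.git') and url.startswith('git+'):
        return url[len('git+'):]
    return url
```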
This fixes a version bug in npm.to_lorry that meant the most recent db entry
was used for repo location, regardless of the version specified.
The npm registry is a database of all versions of all packages, listing names,
source locations, etc. There was a fun bug in npm.to_lorry that meant that the
import tool would try to access the repo listed in the database entry for the
latest version of the package, rather than for the version specified. This
meant that if version 1.0.1 of a package had the right url listed, but
something was wrong with the url listed for version 1.4.6, the tool would
error out, even if version 1.0.1 was requested.
Change-Id: If40c8b4c85f5e27fee07ee88daa1e5d2d347944f
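The corrected lookup can be sketched as follows. The function name is illustrative; the entry layout follows the npm registry's per-package document, which keeps a 'versions' map keyed by version string, each with its own 'repository' field.

```python
def repo_for_version(registry_entry, version):
    # Read the repository url from the db entry for the requested
    # version, not from whichever version happens to be latest.
    try:
        version_info = registry_entry['versions'][version]
    except KeyError:
        raise KeyError('version %s not found in registry entry' % version)
    return version_info['repository']['url']
```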
Change-Id: Id7b0713263f3d68773f69bc4c2dadec414e1d902
python.find_deps uses warn and error, which have moved into the utils module.
Change-Id: I7ce58c034cb83b7fc486487d6234282b650edfc1
This doesn't provide much performance benefit,
since most of our queries go through xmlrpclib,
but caching here does no harm.
Change-Id: Id740a7ffab56defeddb3a6f3f481d81498a4411a
To allow all extensions to use it,
this also modifies the client so that the cache
gets expired after 5 minutes by default.
The error message returned by the rubygems extension on failure to
get gem data will also be slightly more detailed.
Old message:

    ERROR: Request to http://rubygems.org/api/v1/gems/kittens.json failed: Not Found

New message:

    ERROR: Request to http://rubygems.org/api/v1/gems/kittens.json failed: 404 Client Error: Not Found
Change-Id: I63354fc7682bb01b1122007c1435bf35975db1aa
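A cache with a five-minute default expiry can be sketched like this (class and method names are illustrative, not the client's real API):

```python
import time

class ExpiringCache(object):
    # Cache entries expire after max_age seconds (5 minutes by
    # default, matching the commit message). The clock is injectable
    # so the behaviour can be tested without sleeping.
    def __init__(self, max_age=300, clock=time.time):
        self.max_age = max_age
        self.clock = clock
        self._entries = {}

    def put(self, key, value):
        self._entries[key] = (self.clock(), value)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if self.clock() - stored_at > self.max_age:
            # Entry is stale: drop it and report a miss.
            del self._entries[key]
            return None
        return value
```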
These functions will also be useful for our forthcoming cpan extension.
Change-Id: I9df87dee09bbcf43dd0868f062fb873632f1f5ae
We switch to unittest.main() here so that the exit code gets set by
unittest, which in turn sets check's exit code.
Change-Id: If9ace26ba5373cb78192b30922ee1bd0ea91f36f
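The exit-code point can be illustrated with a stand-in test case (the real suite tests requirement-spec validation; this one is just a placeholder). unittest.main() turns the runner's result into the process exit status, which a calling 'check' script then inherits; the programmatic runner below shows the same result object that decision is based on.

```python
import unittest

class TestRequirementSpecs(unittest.TestCase):
    # Illustrative stand-in for the real requirement-spec tests.
    def test_name_is_not_empty(self):
        self.assertTrue(len('coverage') > 0)

def run_suite():
    # unittest.main() would exit nonzero if wasSuccessful() is False;
    # running the suite by hand lets us inspect that result directly.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(
        TestRequirementSpecs)
    return unittest.TextTestRunner(verbosity=0).run(suite)
```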
Most svn repos use a standard layout, and we assume that any svn repos
we want to lorry do too, so make str_repo_lorry add 'layout': 'standard'
for svn repos.
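A sketch of what str_repo_lorry might produce; the exact set of lorry fields here is illustrative, only the 'layout': 'standard' addition for svn is the point of the change.

```python
def str_repo_lorry(name, kind, url):
    # Build a minimal lorry entry. For svn repos, record that the
    # standard trunk/branches/tags layout is assumed.
    lorry = {name: {'type': kind, 'url': url}}
    if kind == 'svn':
        lorry[name]['layout'] = 'standard'
    return lorry
```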
We also call package_releases with True, so that we also get versions
of releases that have been hidden. pip is willing to install from
hidden releases, so we should too; the concept of hidden releases will
eventually disappear from pypi anyway.
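Illustrative call: PyPI's XML-RPC package_releases method takes a show_hidden flag as its second argument. The wrapper name is made up, and 'client' stands for an xmlrpclib ServerProxy pointed at the pypi endpoint.

```python
def all_releases(client, package_name):
    # Pass show_hidden=True so hidden releases are included in the
    # version list, matching what pip is willing to install from.
    return client.package_releases(package_name, True)
```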
It's sometimes useful to see the output of the vcs, but having this
enabled all the time clutters the log.
The two extensions have diverged so this generic class is no longer useful.
This is redundant.
This adds a PythonLorryExtension class to python.to_lorry so that the
extension runs in a more conventional way. Previously the
PythonExtension class was used to execute any of the extensions (it
would call the extension's main() function). We move away from this so
that the extension can access useful methods, such as local_data_path(),
that are provided by the ImportExtension class.
This also removes the use of pkg_resources.parse_requirement, which is
redundant, and the unused import of select.
There isn't yet an official spec for distribution names in python,
however there is a draft at http://legacy.python.org/dev/peps/pep-0426/#name
In particular,
"All comparisons of distribution names MUST be case insensitive,
and MUST consider hyphens and underscores to be equivalent."
pkg_resources.parse_requirements will replace any underscores in the
package name with hyphens, so when we search pypi we need to look for
the package name with underscores as well as with hyphens.
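The search strategy can be sketched with a hypothetical helper that yields every spelling to try against pypi:

```python
def name_variants(requirement_name):
    # Per the PEP 426 draft, hyphens and underscores are equivalent in
    # distribution names, so search for both spellings of the name.
    return {requirement_name,
            requirement_name.replace('-', '_'),
            requirement_name.replace('_', '-')}
```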
Morph doesn't need a chunk morph to build/install setuptools packages,
but the import tool needs to grow the ability to have 'to_chunk' as an
optional stage before we can remove this part of the python extension.
python.find_deps does some pretty basic validation of the requirement
specs; this commit adds the tests for that validation.
This takes source_dir, name and version, and outputs the dependencies
for the package as json on stdout.
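An illustrative sketch of emitting dependencies as json on stdout; the schema shown ({'python': {name: version}}) is an assumption, not the extension's documented output format.

```python
import json
import sys

def write_deps_json(deps, stream=None):
    # Print the dependency set as a json document so the import tool
    # can parse it from the extension's stdout.
    if stream is None:
        stream = sys.stdout
    json.dump({'python': deps}, stream, sort_keys=True)
    stream.write('\n')
```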
This takes source, name and version, and produces a lorry.
The multi_json Gem wasn't being detected as signed, because the
'signing_key' field is an expression that can evaluate to 'nil' in some
cases. In this Gem the 'cert_chain' field was still a standard string.
Hopefully checking for the presence of either will catch all cases
(and false positives should be harmless anyway).
This was motivated by <https://github.com/mislav/will_paginate>, which
links to <https://github.com/mislav/will_paginate/wiki> as its homepage.
The rubygems.to_chunk tool was assuming the .gemspec file always lived
at the top of the chunk repo, but this isn't the case for
<https://github.com/rails/rails>. Now it is smarter.
This is used to ignore Gems which either come built into Ruby (like
Rake) or are supplied with the 'ruby' stratum. As noted in the comment
in the .yaml file it is not an ideal solution, but it should work well
enough for the time being.
This is cool because now you can import Ruby on Rails.
By calling Bundler::Dsl.gemspec without specifying :path, this tool
was causing the Bundler::Source::Path instance for the target .gemspec
to be for path '.', which is a relative path that is only valid inside
one Dir.chdir block. It seems that all the Bundler resolution code runs
inside that block and so there should be no problem, but unless we
specify an absolute path for the gemspec then errors like this sometimes
appear:
/usr/lib/ruby/gems/2.0.0/gems/bundler-1.7.6/lib/bundler/resolver.rb:357:in
`resolve': Could not find gem 'ffi-yajl (>= 0) ruby' in source at ..
(Bundler::GemNotFound)
Source does not contain any versions of 'ffi-yajl (>= 0) ruby'
This normally happens when an Omnibus import chains to rubygems.to_chunk
for a RubyGem component.
The path is clearly valid at the time Bundler::Dsl.gemspec is called
(otherwise it'd raise a Bundler::InvalidOption exception at the time)
but not valid later (hence the error). Note that the second '.' in that
error message is a full stop and not part of the path!
It's slightly annoying during development, but the exts/ directory must
be inside the package, or it would be installed somewhere silly like
/usr/lib/python2.7/site-packages/exts.