Change-Id: I7214aeecc3c568d7b0be5db441d62ca7901ec855
This is useful when repacking libraries for python wheels, for example.
Change-Id: Ie7b36584de5054c14a9b77d87a5c5fa5cc7a3719
Change-Id: I733c4bcf28d845aa0413ef4af06cdab6bc25cc7b
Change-Id: Iaa6cc5bb06e715aafb3ecab86ae7cde6ef30413d
Change-Id: I97b85f1b37952aaede168e274d2f4a74d3b9aaa8
... and vice-versa. We'll fix up frag header values for our output
parameter from liberasurecode_get_fragment_metadata but otherwise
avoid manipulating the in-memory fragment much.
Change-Id: Idd6833bdea60e27c9a0148ee28b4a2c1070be148
Each was only really used in one place, they had some strange return
types, and recent versions of clang on OS X would refuse to compile with:

    erasurecode_helpers.c:531:26: error: taking address of packed member
    'metadata_chksum' of class or structure 'fragment_header_s' may result
    in an unaligned pointer value [-Werror,-Waddress-of-packed-member]
        return (uint32_t *) &header->metadata_chksum;
                            ^~~~~~~~~~~~~~~~~~~~~~~

We don't really *care* about the pointer; we just want the value!
Change-Id: I8a5e42312948a75f5dd8b23b6f5ccfa7bd22eb1d
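
A minimal sketch of the value-returning pattern (the struct here is an
illustrative stand-in for fragment_header_s):

    #include <stdint.h>
    #include <string.h>

    struct __attribute__((__packed__)) frag_header {
        uint32_t metadata_chksum;
        /* ... other fields ... */
    };

    /* Copy the value out instead of taking the address of the packed
     * member, which avoids -Waddress-of-packed-member entirely. */
    static uint32_t get_metadata_chksum(const struct frag_header *header)
    {
        uint32_t chksum;
        memcpy(&chksum, &header->metadata_chksum, sizeof(chksum));
        return chksum;
    }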
Previously, we had our own CRC that was almost but not quite like
zlib's implementation. However,
* it hasn't been subjected to the same rigor with regard to error-detection
properties and
* it may not even get used, depending upon whether zlib happens to get
loaded before or after liberasurecode.
Now, we'll use zlib's CRC-32 when writing new frags, while still
tolerating frags that were created with the old implementation.
Change-Id: Ib5ea2a830c7c23d66bf2ca404a3eb84ad00c5bc5
Closes-Bug: 1666320
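
A minimal sketch of the zlib call this switches to (zlib's documented
crc32() API; the wrapper name is illustrative):

    #include <zlib.h>

    static uint32_t frag_checksum(const unsigned char *buf, size_t len)
    {
        uLong crc = crc32(0L, Z_NULL, 0);   /* zlib's initial CRC value */
        return (uint32_t)crc32(crc, buf, (uInt)len);
    }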
Change-Id: I6903e11a24f548a07f924cef8f0bc8ba3c456ef0
The well-known idiom to compute a required number of data blocks
of size B to contain data of length d is:

    (d + (B-1))/B

The code we use, with ceill(), computes the same value, but does
it in an unorthodox way. This makes a reviewer doubt himself
and even run tests to make sure we're really computing the
obvious thing.
Apropos the reviewer confusion, the code in Phazr.IO looks weird.
It uses (word_size - hamming_distance) to compute the necessary
number of blocks... but then returns the amount of memory needed
to store blocks of a different size (word_size). We left all of it
alone and return exactly the same values that the old computation
returned.
All these computations were the only thing in the code that used
-lm, so drop that too.
Coincidentally, this patch solves the crash of distro-built
packages of liberasurecode (see Red Hat bug #1454543). But it's
a side effect. Expect a proper patch soon.
Change-Id: Ib297f6df304abf5ca8c27d3392b1107a525e0be0
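
For reference, the idiom in C, with a tiny worked case:

    /* Blocks of size B needed to hold d bytes: (d + (B-1)) / B.
     * e.g. d = 10, B = 4  ->  (10 + 3) / 4 = 3 blocks. */
    static int num_blocks(int d, int B)
    {
        return (d + (B - 1)) / B;
    }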
Currently, the Galois Field multiplication tables are recalculated
every time an encode is done. This is wasteful, as they are fixed
by k and m, which is set on init.
Calculate the tables only once, on init.
This trades off a little bit of per-context memory and creation
time for measurably faster encodes when using the same context.
On powerpc64le, when repeatedly encoding a 4kB file with pyeclib,
this increases the measured speed by over 10%.
Change-Id: I2f025aaee2d13cb1717a331e443e179ad5a13302
Signed-off-by: Daniel Axtens <dja@axtens.net>
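
A sketch of the caching pattern under illustrative names (the real table
generation lives in the backend; make_gf_tables() is a placeholder):

    #include <stdlib.h>

    struct ec_context {
        int k, m;
        unsigned char *gf_tables;   /* fixed by (k, m) */
    };

    static unsigned char *make_gf_tables(int k, int m)
    {
        /* placeholder for the backend's Galois Field table setup */
        return calloc((size_t)k * (size_t)m, 32);
    }

    static int context_init(struct ec_context *ctx, int k, int m)
    {
        ctx->k = k;
        ctx->m = m;
        ctx->gf_tables = make_gf_tables(k, m);  /* once, at init */
        return ctx->gf_tables ? 0 : -1;
    }

    /* Encode paths then read ctx->gf_tables instead of rebuilding
     * the tables on every call. */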
Currently, there are several implementations of erasure codes that are
available within OpenStack Swift. Most, if not all, of them are based
on the Reed-Solomon coding algorithm.
Phazr.IO’s Erasure Coding technology uses a patented algorithm which is
significantly more efficient and improves the speed of coding, decoding
and reconstruction. In addition, Phazr.IO Erasure Code uses a
non-systematic algorithm which provides data protection at rest and in
transport without the need to use encryption.
Please contact support@phazr.io for more info on our technology.
Change-Id: I4e40d02a8951e38409ad3c604c5dd6f050fa7ea0
Change-Id: I1d8d6b5711a503eaa7c57c70b4c20a329f572af2
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Currently, we have liberasurecode version info in the header and pyeclib
is using that info to detect the version. However, it's a bit painful
because it requires rebuilding the pyeclib C code to see the actual
installed version.
Adding liberasurecode_get_version enables callers to get the version
integer from the compiled shared library file (.so), which spares
pyeclib the recompile step.
Change-Id: I8161ea7da3b069e83c93e11cb41ce12fa60c6f32
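
A sketch of what this enables for callers (assuming a
uint32_t liberasurecode_get_version(void) shape, as described above;
the .so name is illustrative):

    #include <dlfcn.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        void *handle = dlopen("liberasurecode.so", RTLD_LAZY);
        if (!handle)
            return 1;
        uint32_t (*get_version)(void) =
            (uint32_t (*)(void))dlsym(handle, "liberasurecode_get_version");
        if (get_version)
            printf("liberasurecode version: 0x%06x\n", get_version());
        dlclose(handle);
        return 0;
    }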
This is for supporting an ISA-L Cauchy-based matrix. The only difference
from isa_l_rs_vand is the matrix used in the encode/decode calculation.
As a known issue, the isa_l_rs_vand backend has a constraint on the
combinations of available fragments that can be decoded/reconstructed.
(See the related change for details.)
To avoid the constraint, this patch adds another ISA-L backend that uses
a Cauchy matrix while keeping backward compatibility; it lives in a
separate isa_l_rs_cauchy namespace.
As an implementation consideration, the code is almost the same except
for the matrix generation function, so this patch creates an
isa_l_common.c file gathering common functions like
init/encode/decode/reconstruct. The common init function then takes an
extra argument "gen_matrix_func_name" as the entry point, used to load
the function via dlsym from the ISA-L .so file.
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Related-Change: Icee788a0931fe692fe0de31fabc4ba450e338a87
Change-Id: I6eb150d9d0c3febf233570fa7729f9f72df2e9be
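
A sketch of the shared-init idea, with illustrative names (the
backend-specific generator symbol is passed in as
gen_matrix_func_name):

    #include <dlfcn.h>

    /* The common init resolves the backend-specific matrix generator
     * from the ISA-L shared object by its symbol name. */
    static void *load_gen_matrix_func(void *isal_so_handle,
                                      const char *gen_matrix_func_name)
    {
        return dlsym(isal_so_handle, gen_matrix_func_name);
    }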
Change-Id: Ia45c7b46ea45dee6f306afe291fe6a908eb41d70
As with any other caller, liberasurecode_get_fragment_size should
handle the return value of liberasurecode_get_backend_instance_by_desc.
Otherwise, get_by_desc can return NULL and cause an invalid memory
access in liberasurecode_get_fragment_size.
Change-Id: I489f8b5d049610863b5e0b477b6ff70ead245b55
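
The guard this adds, sketched with the names from the message above
(the error value is illustrative):

    ec_backend_t instance = liberasurecode_get_backend_instance_by_desc(desc);
    if (instance == NULL)
        return -EBACKENDNOTAVAIL;   /* fail instead of dereferencing NULL */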
Uses dlopen to check if a backend is present. This may be used by
consumers who need to check which backends are present on a system.
Issue #23
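
A minimal sketch of the probe (the .so name is illustrative):

    #include <dlfcn.h>
    #include <stdbool.h>

    static bool backend_available(const char *soname)
    {
        void *handle = dlopen(soname, RTLD_LAZY);
        if (!handle)
            return false;        /* library not present / not loadable */
        dlclose(handle);
        return true;
    }

    /* e.g. backend_available("libisal.so") */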
There are systems, for example Hurd, which don't define this constant
because there is no such limit. See [1] for an explanation.
[1] http://www.gnu.org/software/hurd/community/gsoc/project_ideas/maxpath.html
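
The usual guard for such systems (4096 is a conventional fallback
value, not one mandated by POSIX):

    #include <limits.h>

    #ifndef PATH_MAX
    #define PATH_MAX 4096
    #endif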
Users of liberasurecode <= 1.0.7 used alloc/free helpers
(which they shouldn't have). This change is to make sure
we still support those older revs of programs and that they
work with newer liberasurecode.
... to LIBERASURECODE_RS_VAND
This is meant to be used in cases where ISA-L and Jerasure cannot be used.
https://bitbucket.org/tsg-/liberasurecode/issue/12/make-valgrind-test-fails
Also added an additional test to test_xor_code to do an exhaustive
decode test (all possible 1 and 2 disk failures) and changed the
default liberasurecode test to test (3, 3, 3).
This patch renames the "metadata_adder" variable to "backend_metadata_size"
This patch renames following variables and functions:
- frag_adder_size -> frag_backend_metadata_size
- set_fragment_adder_size() -> set_fragment_backend_metadata_size()
- get_fragment_adder_size() -> get_fragment_backend_metadata_size()
For the get_segment_info function of PyECLib, liberasurecode should
support a get_fragment_size function, because if pyeclib and
liberasurecode each have their own calculation of fragment size, it
might cause a size mismatch (i.e. a bug) in future development work.
This patch introduces the liberasurecode_get_fragment_size function to
return the fragment_size calculated by liberasurecode according to the
specified backend descriptor.
It is really useful for helping the caller know how large a size to
expect, and all pyeclib has to do to retrieve fragment_size is call
liberasurecode_get_fragment_size in get_segment_info.
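
The intended call pattern from the caller's side, sketched (signature
as described above):

    /* One source of truth for sizing: ask liberasurecode rather than
     * recomputing the fragment size in pyeclib. */
    int fragment_size = liberasurecode_get_fragment_size(desc, data_len);
    if (fragment_size < 0) {
        /* backend descriptor was invalid; handle the error */
    }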
This patch allows getting the correct fragment size, including
metadata_adder. The current implementation automatically allocates
extra bytes for the metadata_adder in alloc_buffer, and then no
information about the extra bytes is returned to the API caller side.
That is quite confusing, because callers can't know what size to assume
as the fragment size.
To make the size information easy to find, this patch adds a
"frag_adder_size" variable to the fragment metadata and also some
functions to get fragment sizes.
The definitions of these sizes are:

fragment_meta:
- size            -> raw data size used to encode/fragment_to_string
- frag_adder_size -> metadata_adder of the backend specification

And the definitions of the functions are:

- get_fragment_size:
  -> returns sizeof(fragment_header) + size + frag_adder_size
- get_fragment_buffer_size:
  -> returns size + frag_adder_size
- get_fragment_payload_size:
  -> returns size

By using the functions above, users can get the size information
directly from fragments. As a result, fragment_len can easily be
returned to the caller side.
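
The three definitions above, sketched in C (types and field names
follow the commit text and are assumed):

    uint64_t get_fragment_size(fragment_metadata_t *meta)
    {
        return sizeof(fragment_header_t) + meta->size + meta->frag_adder_size;
    }

    uint64_t get_fragment_buffer_size(fragment_metadata_t *meta)
    {
        return meta->size + meta->frag_adder_size;
    }

    uint64_t get_fragment_payload_size(fragment_metadata_t *meta)
    {
        return meta->size;
    }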
In the first consideration [1], metadata_adder is defined as an extra
byte size for "each" fragment. However, the current implementation has
it affect data_len (i.e. the aligned_data_size for the original segment
data).
We should make metadata_adder a fixed value for each fragment;
otherwise the extra bytes for the fragments will have a variable length
depending on "K". It would be quite complex for a backend
implementation to know "how large the raw data size is" and "how many
bytes the backend can use as extra bytes for each fragment".
1: https://bitbucket.org/tsg-/liberasurecode/commits/032b57d9b1c7aadc547fccbacf88af786c9067e7?at=master
This patch achieves a couple of things as follows:
- Undoing the liberasurecode_encode_cleanup specification to
  expect "fragment" pointers as its arguments.
- Ensuring liberasurecode_encode passes "fragment" pointers to
  liberasurecode_encode_cleanup.
liberasurecode_encode_cleanup is also used in pyeclib, so the argument
pointers (i.e. encoded_data and encoded_parity) are expected to be the
collection of the heads of "fragment" pointers.
However, when the backend encode fails, liberasurecode keeps "data"
pointers behind the fragment_header and then goes to the "out:"
statement to clean up its memory. That causes an invalid pointer
failure.
This patch adds a translation function from "data" pointers to
"fragment" pointers and ensures liberasurecode_encode passes the
correct pointers to liberasurecode_encode_cleanup.
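
A sketch of the translation (header type per the codebase; the function
name is illustrative):

    /* Step back from the payload ("data") pointer to the head of the
     * enclosing fragment, so cleanup frees the address that was
     * actually allocated. */
    static char *fragment_ptr_from_data_ptr(char *data_ptr)
    {
        return data_ptr - sizeof(fragment_header_t);
    }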
This introduces a new pluggable backend called "shss", made by
Nippon Telegraph and Telephone Corporation (NTT).
Note that this provides just a plug-in to the shss erasure coding
binary, so users have to install the shss binary (i.e. libshss.so)
alongside liberasurecode when using shss.
Please contact us if you are interested in the NTT backend (welcome!):
Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Co-Author: Ryuta Kon <kon.ryuta@po.ntts.co.jp>