| Commit message | Author | Age | Files | Lines |
Use the actual soname rather than the fully unversioned name,
ensuring that systems that don't have -dev packages actually work.
Signed-off-by: James Page <james.page@ubuntu.com>
Users of liberasurecode <= 1.0.7 used alloc/free helpers
(which they shouldn't have). This change makes sure
we still support those older revs of programs, and that they
work with newer liberasurecode.
If the underlying jerasure implementation is old (pre-jerasure.org),
then it will not contain an uninit function for the underlying GF
object. Since this is only used in alg_sig, which is not used by
anything else at the moment, we stub it out if it does not exist.
Once we make the change to have alg_sig use the internal GF functions,
this whole problem goes away.
... to LIBERASURECODE_RS_VAND
up through Python.
The code that preprocesses decoded fragments to see if it can simply concatenate
the data fragments instead of decoding was not properly deduping fragments,
which led to a failed assertion.
This change properly dedups fragments in the fragments_to_string function.
specified as "available" by the caller. I feel that only buggy code would do
this...
NOTE: In the future, we should return an error when this happens.
length passed up is incorrect.
This is meant to be used in cases where ISA-L and Jerasure cannot be used.
https://bitbucket.org/tsg-/liberasurecode/issue/12/make-valgrind-test-fails
https://bitbucket.org/tsg-/liberasurecode/issue/13/decode-fails-for-many-cases-when-m-k
This fix includes:
1.) Proper buffer allocation for the 'missing_idxs' structure, which was not allocating enough space when k > m.
2.) Checks to use header fields of parity fragments during decode when *no* data fragments are available.
3.) Fixed the unit tests to properly handle the case where k <= m.
4.) Extended the unit test framework to support multiple tests per backend.
5.) Added tests for all RS implementations: (4,8), (4,4), (10,10).
Also added an additional test to test_xor_code to do an exhaustive decode test
(all possible 1 and 2 disk failures) and changed the default liberasurecode
test to test (3, 3, 3).
fix-metadata-check from Kota
In the current code, get_fragment_partition might touch an invalid
memory area via a negative index (which indicates an invalid header),
causing a segmentation fault.
This fixes it to treat the negative index as an EBADHEADER error, so
no segmentation fault occurs in that case.
when both data and parity were missing. The fix is to just call decode
when reconstructing parity, since it will have to do extra work anyway
when data is missing. We did a little extra work in ISA-L to do better,
but we can save that for later, since 99% of the time decode will perform
just fine.
Addresses issue#10
header in the xor-encoder.
FWIW, we did conditional compilation in the body of the code, but missed the header include.
shss always needs to decode, but fragments_to_string
will allocate internal_payload as the decoded data. This causes
a duplicated memory allocation and a memory leak.
This patch renames the "metadata_adder" variable to "backend_metadata_size"
This patch renames following variables and functions:
- frag_adder_size -> frag_backend_metadata_size
- set_fragment_adder_size() -> set_fragment_backend_metadata_size()
- get_fragment_adder_size() -> get_fragment_backend_metadata_size()
For the get_segment_info function of PyECLib, liberasurecode should
provide a get_fragment_size function, because if pyeclib and liberasurecode
each carry their own calculation of the fragment size, the results might
diverge (i.e. it would be a bug) in future development work.
This patch introduces the liberasurecode_get_fragment_size function to return
the fragment_size calculated by liberasurecode according to the specified
backend descriptor.
It is really useful for helping callers know how large a size to expect;
all pyeclib has to do to retrieve the fragment_size is call
liberasurecode_get_fragment_size in get_segment_info.
Small fixes as follows:
- Add an is_compatible_with function to the shss backend
- Remove the encoded data check against shss in liberasurecode_test.c
- Decrease the metadata_adder size in the shss backend to the correct fixed value
This patch allows getting the correct fragment size, including metadata_adder.
The current implementation automatically allocates extra bytes for the metadata_adder
in alloc_buffer, and then no information about the extra bytes is returned
to the API caller. That is confusing, because callers can't know what size
to assume as the fragment size.
To make the size information easy to find, this patch adds a "frag_adder_size"
variable to the fragment metadata, along with some functions to get fragment sizes.
The definitions of these sizes are:
fragment_meta:
- size -> raw data size passed to encode/fragment_to_string
- frag_adder_size -> metadata_adder from the backend specification
And the functions are defined as:
- get_fragment_size:
-> returns sizeof(fragment_header) + size + frag_adder_size
- get_fragment_buffer_size:
-> returns size + frag_adder_size
- get_fragment_payload_size:
-> returns size
Using the functions above, users can get the size information
directly from fragments. As a result, fragment_len can easily be returned
to the caller.
In the first consideration[1], metadata_adder is defined as an extra byte
size for "each" fragment. However, the current implementation treats it as
an element that affects data_len (i.e. aligned_data_size for the original
segment data).
We should make metadata_adder a fixed value for each fragment; otherwise
the extra bytes for the fragments will have a variable length depending
on "K". That makes it quite complex for a backend implementation to know
how many bytes the raw data is, and how many bytes the backend can use as
extra bytes for each fragment.
1: https://bitbucket.org/tsg-/liberasurecode/commits/032b57d9b1c7aadc547fccbacf88af786c9067e7?at=master
This patch achieves a couple of things:
- Undoing the liberasurecode_encode_cleanup specification to
expect "fragment" pointers as its arguments.
- Ensuring liberasurecode_encode passes "fragment" pointers to
liberasurecode_encode_cleanup.
liberasurecode_encode_cleanup is also used in pyeclib, so the
argument pointers (i.e. encoded_data and encoded_parity) are
expected to be the collection of the heads of "fragment" pointers.
However, when the backend encode fails, liberasurecode keeps "data"
pointers behind the fragment_header and then jumps to the "out:" statement
to clean up its memory. This causes an invalid pointer failure.
This patch adds a translation function from "data" pointers to "fragment"
pointers and ensures liberasurecode_encode passes correct pointers to
liberasurecode_encode_cleanup.
When num_missing is larger than the number of parities (i.e. > m),
get_fragment_partition should return -1 as an error code.
This patch fixes that and adds a test called "test_get_fragment_partition"
to liberasurecode_test.c.
Signed-off-by: Tushar Gohad <tushar.gohad@intel.com>