| author | rguenth <rguenth@138bc75d-0d04-0410-961f-82ee72b054a4> | 2010-07-01 08:49:19 +0000 |
|---|---|---|
| committer | rguenth <rguenth@138bc75d-0d04-0410-961f-82ee72b054a4> | 2010-07-01 08:49:19 +0000 |
| commit | 182cf5a9a415f31df0f9a10e46faed1221484a35 (patch) | |
| tree | 4cc8d9c35ed3127dbf885a1f08a83776819bed41 /gcc/matrix-reorg.c | |
| parent | c681e76ffbc527d325e07622ddf4a905aba5324d (diff) | |
| download | gcc-182cf5a9a415f31df0f9a10e46faed1221484a35.tar.gz | |
2010-07-01 Richard Guenther <rguenther@suse.de>
PR middle-end/42834
PR middle-end/44468
* doc/gimple.texi (is_gimple_mem_ref_addr): Document.
* doc/generic.texi (References to storage): Document MEM_REF.
* tree-pretty-print.c (dump_generic_node): Handle MEM_REF.
(print_call_name): Likewise.
* tree.c (recompute_tree_invariant_for_addr_expr): Handle MEM_REF.
(build_simple_mem_ref_loc): New function.
(mem_ref_offset): Likewise.
* tree.h (build_simple_mem_ref_loc): Declare.
(build_simple_mem_ref): Define.
(mem_ref_offset): Declare.
* fold-const.c: Include tree-flow.h.
(operand_equal_p): Handle MEM_REF.
(build_fold_addr_expr_with_type_loc): Likewise.
(fold_comparison): Likewise.
(fold_unary_loc): Fold
VIEW_CONVERT_EXPR <T1, MEM_REF <T2, ...>> to MEM_REF <T1, ...>.
(fold_binary_loc): Fold MEM[&MEM[p, CST1], CST2] to MEM[p, CST1 + CST2],
fold MEM[&a.b, CST2] to MEM[&a, offsetof (a, b) + CST2].
* tree-ssa-alias.c (ptr_deref_may_alias_decl_p): Handle MEM_REF.
(ptr_deref_may_alias_ref_p_1): Likewise.
(ao_ref_base_alias_set): Properly differentiate base object for
offset and TBAA.
(ao_ref_init_from_ptr_and_size): Use MEM_REF.
(indirect_ref_may_alias_decl_p): Handle MEM_REFs properly.
(indirect_refs_may_alias_p): Likewise.
(refs_may_alias_p_1): Likewise. Remove pointer SSA name def
chasing code.
(ref_maybe_used_by_call_p_1): Handle MEM_REF.
(call_may_clobber_ref_p_1): Likewise.
* dwarf2out.c (loc_list_from_tree): Handle MEM_REF.
* expr.c (expand_assignment): Handle MEM_REF.
(store_expr): Handle MEM_REFs from STRING_CSTs.
(store_field): If expanding a MEM_REF of a non-addressable
decl use bitfield operations.
(get_inner_reference): Handle MEM_REF.
(expand_expr_addr_expr_1): Likewise.
(expand_expr_real_1): Likewise.
* tree-eh.c (tree_could_trap_p): Handle MEM_REF.
* alias.c (ao_ref_from_mem): Handle MEM_REF.
(get_alias_set): Likewise. Properly handle VIEW_CONVERT_EXPRs.
* tree-data-ref.c (dr_analyze_innermost): Handle MEM_REF.
(dr_analyze_indices): Likewise.
(dr_analyze_alias): Likewise.
(object_address_invariant_in_loop_p): Likewise.
* gimplify.c (mark_addressable): Handle MEM_REF.
(gimplify_cond_expr): Build MEM_REFs.
(gimplify_modify_expr_to_memcpy): Likewise.
(gimplify_init_ctor_preeval_1): Handle MEM_REF.
(gimple_fold_indirect_ref): Adjust.
(gimplify_expr): Handle MEM_REF. Gimplify INDIRECT_REF to MEM_REF.
* tree.def (MEM_REF): New tree code.
* tree-dfa.c: Include toplev.h.
(get_ref_base_and_extent): Handle MEM_REF.
(get_addr_base_and_unit_offset): New function.
* emit-rtl.c (set_mem_attributes_minus_bitpos): Handle MEM_REF.
* gimple-fold.c (may_propagate_address_into_dereference): Handle
MEM_REF.
(maybe_fold_offset_to_array_ref): Allow possibly out-of-bounds
accesses if the array has just one dimension. Remove always true
parameter. Do not require type compatibility here.
(maybe_fold_offset_to_component_ref): Remove.
(maybe_fold_stmt_indirect): Remove.
(maybe_fold_reference): Remove INDIRECT_REF handling.
Fold back to non-MEM_REF.
(maybe_fold_offset_to_address): Simplify. Deal with type
mismatches here.
(maybe_fold_reference): Likewise.
(maybe_fold_stmt_addition): Likewise. Also handle
&ARRAY + I in addition to &ARRAY[0] + I.
(fold_gimple_assign): Handle ADDR_EXPR of MEM_REFs.
(gimple_get_relevant_ref_binfo): Handle MEM_REF.
* cfgexpand.c (expand_debug_expr): Handle MEM_REF.
* tree-ssa.c (useless_type_conversion_p): Make most pointer
conversions useless.
(warn_uninitialized_var): Handle MEM_REF.
(maybe_rewrite_mem_ref_base): New function.
(execute_update_addresses_taken): Implement re-writing of MEM_REFs
to SSA form.
* tree-inline.c (remap_gimple_op_r): Handle MEM_REF, remove
INDIRECT_REF handling.
(copy_tree_body_r): Handle MEM_REF.
* gimple.c (is_gimple_addressable): Adjust.
(is_gimple_address): Likewise.
(is_gimple_invariant_address): ADDR_EXPRs of MEM_REFs with
invariant base are invariant.
(is_gimple_min_lval): Adjust.
(is_gimple_mem_ref_addr): New function.
(get_base_address): Handle MEM_REF.
(count_ptr_derefs): Likewise.
(get_base_loadstore): Likewise.
* gimple.h (is_gimple_mem_ref_addr): Declare.
(gimple_call_fndecl): Handle invariant MEM_REF addresses.
* tree-cfg.c (verify_address): New function, split out from ...
(verify_expr): ... here. Use for verifying ADDR_EXPRs and
the address operand of MEM_REFs. Verify MEM_REFs. Reject
INDIRECT_REFs.
(verify_types_in_gimple_min_lval): Handle MEM_REF. Disallow
INDIRECT_REF. Allow conversions.
(verify_types_in_gimple_reference): Verify VIEW_CONVERT_EXPR of
a register does not change its size.
(verify_types_in_gimple_reference): Verify MEM_REF.
(verify_gimple_assign_single): Disallow INDIRECT_REF.
Handle MEM_REF.
* tree-ssa-operands.c (opf_non_addressable, opf_not_non_addressable):
New.
(mark_address_taken): Handle MEM_REF.
(get_indirect_ref_operands): Pass through opf_not_non_addressable.
(get_asm_expr_operands): Pass opf_not_non_addressable.
(get_expr_operands): Handle opf_[not_]non_addressable.
Handle MEM_REF. Remove INDIRECT_REF handling.
* tree-vrp.c (check_array_ref): Handle MEM_REF.
(search_for_addr_array): Likewise.
(check_array_bounds): Likewise.
(vrp_stmt_computes_nonzero): Adjust for MEM_REF.
* tree-ssa-loop-im.c (for_each_index): Handle MEM_REF.
(ref_always_accessed_p): Likewise.
(gen_lsm_tmp_name): Likewise. Handle ADDR_EXPR.
* tree-complex.c (extract_component): Do not handle INDIRECT_REF.
Handle MEM_REF.
* cgraphbuild.c (mark_load): Properly check for NULL result
from get_base_address.
(mark_store): Likewise.
* tree-ssa-loop-niter.c (array_at_struct_end_p): Handle MEM_REF.
* tree-loop-distribution.c (generate_builtin): Exchange INDIRECT_REF
handling for MEM_REF.
* tree-scalar-evolution.c (follow_ssa_edge_expr): Handle
&MEM[ptr + CST] similar to POINTER_PLUS_EXPR.
* builtins.c (stabilize_va_list_loc): Use the function ABI
valist type if we couldn't canonicalize the argument type.
Always dereference with the canonical va-list type.
(maybe_emit_free_warning): Handle MEM_REF.
(fold_builtin_memory_op): Simplify and handle MEM_REFs in folding
memmove to memcpy.
* builtins.c (fold_builtin_memory_op): Use ref-all types
for all memcpy foldings.
* omp-low.c (build_receiver_ref): Adjust for MEM_REF.
(build_outer_var_ref): Likewise.
(scan_omp_1_op): Likewise.
(lower_rec_input_clauses): Likewise.
(lower_lastprivate_clauses): Likewise.
(lower_reduction_clauses): Likewise.
(lower_copyprivate_clauses): Likewise.
(expand_omp_atomic_pipeline): Likewise.
(expand_omp_atomic_mutex): Likewise.
(create_task_copyfn): Likewise.
* tree-ssa-sccvn.c (copy_reference_ops_from_ref): Handle MEM_REF.
Remove old union trick. Initialize constant offsets.
(ao_ref_init_from_vn_reference): Likewise. Do not handle
INDIRECT_REF. Init base_alias_set properly.
(vn_reference_lookup_3): Replace INDIRECT_REF handling with
MEM_REF.
(vn_reference_fold_indirect): Adjust for MEM_REFs.
(valueize_refs): Fold MEM_REFs. Re-evaluate constant offset
for ARRAY_REFs.
(may_insert): Remove.
(visit_reference_op_load): Do not test may_insert.
(run_scc_vn): Remove parameter, do not fiddle with may_insert.
* tree-ssa-sccvn.h (struct vn_reference_op_struct): Add
a field to store the constant offset this op applies.
(run_scc_vn): Adjust prototype.
* cgraphunit.c (thunk_adjust): Adjust for MEM_REF.
* tree-ssa-ccp.c (ccp_fold): Replace INDIRECT_REF folding with
MEM_REF. Propagate &foo + CST as &MEM[&foo, CST]. Do not
bother about volatile qualifiers on pointers.
(fold_const_aggregate_ref): Handle MEM_REF, do not handle INDIRECT_REF.
* tree-ssa-loop-ivopts.c (determine_base_object): Adjust
for MEM_REF.
(strip_offset_1): Likewise.
(find_interesting_uses_address): Replace INDIRECT_REF handling with
MEM_REF handling.
(get_computation_cost_at): Likewise.
* ipa-pure-const.c (check_op): Handle MEM_REF.
* tree-stdarg.c (check_all_va_list_escapes): Adjust for MEM_REF.
* tree-ssa-sink.c (is_hidden_global_store): Handle MEM_REF
and constants.
* ipa-inline.c (likely_eliminated_by_inlining_p): Handle MEM_REF.
* tree-parloops.c (take_address_of): Adjust for MEM_REF.
(eliminate_local_variables_1): Likewise.
(create_call_for_reduction_1): Likewise.
(create_loads_for_reductions): Likewise.
(create_loads_and_stores_for_name): Likewise.
* matrix-reorg.c (may_flatten_matrices_1): Sanitize.
(ssa_accessed_in_tree): Handle MEM_REF.
(ssa_accessed_in_assign_rhs): Likewise.
(update_type_size): Likewise.
(analyze_accesses_for_call_stmt): Likewise.
(analyze_accesses_for_assign_stmt): Likewise.
(transform_access_sites): Likewise.
(transform_allocation_sites): Likewise.
* tree-affine.c (tree_to_aff_combination): Handle MEM_REF.
* tree-vect-data-refs.c (vect_create_addr_base_for_vector_ref): Do
not handle INDIRECT_REF.
* tree-ssa-phiopt.c (add_or_mark_expr): Handle MEM_REF.
(cond_store_replacement): Likewise.
* tree-ssa-pre.c (create_component_ref_by_pieces_1): Handle
MEM_REF, do not handle INDIRECT_REFs.
(insert_into_preds_of_block): Properly initialize avail.
(phi_translate_1): Fold MEM_REFs. Re-evaluate constant offset
for ARRAY_REFs. Properly handle reference lookups that
require a bit re-interpretation.
(can_PRE_operation): Do not handle INDIRECT_REF. Handle MEM_REF.
* tree-sra.c (build_access_from_expr_1): Handle MEM_REF.
(build_ref_for_offset_1): Remove.
(build_ref_for_offset): Build MEM_REFs.
(gate_intra_sra): Disable for now.
(sra_ipa_modify_expr): Handle MEM_REF.
(ipa_early_sra_gate): Disable for now.
* tree-sra.c (create_access): Swap INDIRECT_REF handling for
MEM_REF handling.
(disqualify_base_of_expr): Likewise.
(ptr_parm_has_direct_uses): Swap INDIRECT_REF handling for
MEM_REF handling.
(sra_ipa_modify_expr): Remove INDIRECT_REF handling.
Use mem_ref_offset. Remove bogus folding.
(build_access_from_expr_1): Properly handle MEM_REF for
non IPA-SRA.
(make_fancy_name_1): Add support for MEM_REF.
* tree-predcom.c (ref_at_iteration): Handle MEM_REFs.
* tree-mudflap.c (mf_xform_derefs_1): Adjust for MEM_REF.
* ipa-prop.c (compute_complex_assign_jump_func): Handle MEM_REF.
(compute_complex_ancestor_jump_func): Likewise.
(ipa_analyze_virtual_call_uses): Likewise.
* tree-ssa-forwprop.c (forward_propagate_addr_expr_1): Replace
INDIRECT_REF folding with more generalized MEM_REF folding.
(tree_ssa_forward_propagate_single_use_vars): Adjust accordingly.
(forward_propagate_addr_into_variable_array_index): Also handle
&ARRAY + I in addition to &ARRAY[0] + I.
* tree-ssa-dce.c (ref_may_be_aliased): Handle MEM_REF.
* tree-ssa-ter.c (find_replaceable_in_bb): Avoid TER if that
creates assignments with overlap.
* tree-nested.c (get_static_chain): Adjust for MEM_REF.
(get_frame_field): Likewise.
(get_nonlocal_debug_decl): Likewise.
(convert_nonlocal_reference_op): Likewise.
(struct nesting_info): Add mem_refs pointer-set.
(create_nesting_tree): Allocate it.
(convert_local_reference_op): Insert to-be-folded mem-refs.
(fold_mem_refs): New function.
(finalize_nesting_tree_1): Perform deferred folding of mem-refs.
(free_nesting_tree): Free the pointer-set.
* tree-vect-stmts.c (vectorizable_store): Adjust for MEM_REF.
(vectorizable_load): Likewise.
* tree-ssa-phiprop.c (phiprop_insert_phi): Adjust for MEM_REF.
(propagate_with_phi): Likewise.
* tree-object-size.c (addr_object_size): Handle MEM_REFs
instead of INDIRECT_REFs.
(compute_object_offset): Handle MEM_REF.
(plus_stmt_object_size): Handle MEM_REF.
(collect_object_sizes_for): Dispatch to plus_stmt_object_size
for &MEM_REF.
* tree-flow.h (get_addr_base_and_unit_offset): Declare.
(symbol_marked_for_renaming): Likewise.
* Makefile.in (tree-dfa.o): Add $(TOPLEV_H).
(fold-const.o): Add $(TREE_FLOW_H).
* tree-ssa-structalias.c (get_constraint_for_1): Handle MEM_REF.
(find_func_clobbers): Likewise.
* ipa-struct-reorg.c (decompose_indirect_ref_acc): Handle MEM_REF.
(decompose_access): Likewise.
(replace_field_acc): Likewise.
(replace_field_access_stmt): Likewise.
(insert_new_var_in_stmt): Likewise.
(get_stmt_accesses): Likewise.
(reorg_structs_drive): Disable.
* config/i386/i386.c (ix86_va_start): Adjust for MEM_REF.
(ix86_canonical_va_list_type): Likewise.
cp/
* cp-gimplify.c (cp_gimplify_expr): Open-code the rhs
predicate we are looking for, allow non-gimplified
INDIRECT_REFs.
testsuite/
* gcc.c-torture/execute/20100316-1.c: New testcase.
* gcc.c-torture/execute/pr44468.c: Likewise.
* gcc.c-torture/compile/20100609-1.c: Likewise.
* gcc.dg/volatile2.c: Adjust.
* gcc.dg/plugin/selfassign.c: Likewise.
* gcc.dg/pr36902.c: Likewise.
* gcc.dg/tree-ssa/foldaddr-2.c: Remove.
* gcc.dg/tree-ssa/foldaddr-3.c: Likewise.
* gcc.dg/tree-ssa/forwprop-8.c: Adjust.
* gcc.dg/tree-ssa/pr17141-1.c: Likewise.
* gcc.dg/tree-ssa/ssa-fre-13.c: Likewise.
* gcc.dg/tree-ssa/ssa-fre-14.c: Likewise.
* gcc.dg/tree-ssa/ssa-ccp-21.c: Likewise.
* gcc.dg/tree-ssa/pta-ptrarith-1.c: Likewise.
* gcc.dg/tree-ssa/20030807-7.c: Likewise.
* gcc.dg/tree-ssa/forwprop-10.c: Likewise.
* gcc.dg/tree-ssa/ssa-fre-1.c: Likewise.
* gcc.dg/tree-ssa/pta-ptrarith-2.c: Likewise.
* gcc.dg/tree-ssa/ssa-ccp-23.c: Likewise.
* gcc.dg/tree-ssa/forwprop-1.c: Likewise.
* gcc.dg/tree-ssa/forwprop-2.c: Likewise.
* gcc.dg/tree-ssa/struct-aliasing-1.c: Likewise.
* gcc.dg/tree-ssa/ssa-ccp-25.c: Likewise.
* gcc.dg/tree-ssa/ssa-pre-26.c: Likewise.
* gcc.dg/tree-ssa/struct-aliasing-2.c: Likewise.
* gcc.dg/tree-ssa/ssa-ccp-26.c: Likewise.
* gcc.dg/tree-ssa/ssa-sccvn-4.c: Likewise.
* gcc.dg/tree-ssa/ssa-pre-7.c: Likewise.
* gcc.dg/tree-ssa/forwprop-5.c: Likewise.
* gcc.dg/struct/w_prof_two_strs.c: XFAIL.
* gcc.dg/struct/wo_prof_escape_arg_to_local.c: Likewise.
* gcc.dg/struct/wo_prof_global_var.c: Likewise.
* gcc.dg/struct/wo_prof_malloc_size_var.c: Likewise.
* gcc.dg/struct/w_prof_local_array.c: Likewise.
* gcc.dg/struct/w_prof_single_str_global.c: Likewise.
* gcc.dg/struct/wo_prof_escape_str_init.c: Likewise.
* gcc.dg/struct/wo_prof_array_through_pointer.c: Likewise.
* gcc.dg/struct/w_prof_global_array.c: Likewise.
* gcc.dg/struct/wo_prof_array_field.c: Likewise.
* gcc.dg/struct/wo_prof_single_str_local.c: Likewise.
* gcc.dg/struct/w_prof_local_var.c: Likewise.
* gcc.dg/struct/wo_prof_two_strs.c: Likewise.
* gcc.dg/struct/wo_prof_empty_str.c: Likewise.
* gcc.dg/struct/wo_prof_local_array.c: Likewise.
* gcc.dg/struct/w_prof_global_var.c: Likewise.
* gcc.dg/struct/wo_prof_single_str_global.c: Likewise.
* gcc.dg/struct/wo_prof_escape_substr_value.c: Likewise.
* gcc.dg/struct/wo_prof_global_array.c: Likewise.
* gcc.dg/struct/wo_prof_escape_return.c: Likewise.
* gcc.dg/struct/wo_prof_escape_substr_array.c: Likewise.
* gcc.dg/struct/wo_prof_double_malloc.c: Likewise.
* gcc.dg/struct/w_ratio_cold_str.c: Likewise.
* gcc.dg/struct/wo_prof_escape_substr_pointer.c: Likewise.
* gcc.dg/struct/wo_prof_local_var.c: Likewise.
* gcc.dg/tree-prof/stringop-1.c: Adjust.
* g++.dg/tree-ssa/pr31146.C: Likewise.
* g++.dg/tree-ssa/copyprop-1.C: Likewise.
* g++.dg/tree-ssa/pr33604.C: Likewise.
* g++.dg/plugin/selfassign.c: Likewise.
* gfortran.dg/array_memcpy_3.f90: Likewise.
* gfortran.dg/array_memcpy_4.f90: Likewise.
* c-c++-common/torture/pr42834.c: New testcase.
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@161655 138bc75d-0d04-0410-961f-82ee72b054a4
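The fold-const.c, gimple-fold.c, and tree-ssa-ccp.c entries above describe how constant address arithmetic is now carried in the MEM_REF operand itself: MEM[&MEM[p, CST1], CST2] folds to MEM[p, CST1 + CST2], and MEM[&a.b, CST2] folds to MEM[&a, offsetof (a, b) + CST2]. The compilable C fragment below is only an illustration of source that exercises those rules; the GIMPLE in its comments is a hand-written sketch of the intended representation, not dump output from this revision, and the concrete offsets assume a 4-byte int with no padding.

```c
/* Illustration only: source-level accesses whose gimplified form the new
   MEM_REF folding is meant to simplify.  Offsets assume a 4-byte int and
   no struct padding; the GIMPLE in the comments is a sketch, not a dump.  */

struct pair { int first; int second; };

int
read_second (struct pair *p)
{
  /* Previously: a POINTER_PLUS_EXPR feeding an INDIRECT_REF, e.g.
       D.1 = p + 4;
       _2 = *D.1;
     With MEM_REF the constant offset lives in the reference itself:
       _2 = MEM[(struct pair *)p + 4B];
     and a nested MEM[&MEM[p, 4], 4] folds to MEM[p, 8].  */
  return p->second;
}

int
read_field_of_global (void)
{
  static struct pair g = { 1, 2 };
  /* MEM[&g.second, 0] folds to MEM[&g, offsetof (struct pair, second)],
     i.e. MEM[&g, 4B] under the layout assumed above.  */
  return *&g.second;
}

int
main (void)
{
  struct pair p = { 10, 20 };
  return read_second (&p) - 20 + read_field_of_global () - 2;
}
```

Keeping the offset inside the reference is what lets the alias and value-numbering changes above (tree-ssa-alias.c, tree-ssa-sccvn.c) treat a base pointer plus constant displacement as a single memory reference instead of chasing pointer SSA definitions.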
Diffstat (limited to 'gcc/matrix-reorg.c')
-rw-r--r-- | gcc/matrix-reorg.c | 80 |
1 file changed, 28 insertions, 52 deletions
```diff
diff --git a/gcc/matrix-reorg.c b/gcc/matrix-reorg.c
index 4958762169f..84c6e6ea3c2 100644
--- a/gcc/matrix-reorg.c
+++ b/gcc/matrix-reorg.c
@@ -218,7 +218,7 @@ collect_data_for_malloc_call (gimple stmt, struct malloc_call_data *m_data)
    initial address and index of each dimension.  */
 struct access_site_info
 {
-  /* The statement (INDIRECT_REF or POINTER_PLUS_EXPR).  */
+  /* The statement (MEM_REF or POINTER_PLUS_EXPR).  */
   gimple stmt;
 
   /* In case of POINTER_PLUS_EXPR, what is the offset.  */
@@ -334,7 +334,7 @@ struct ssa_acc_in_tree
   /* The variable whose accesses in the tree we are looking for.  */
   tree ssa_var;
   /* The tree and code inside it the ssa_var is accessed, currently
-     it could be an INDIRECT_REF or CALL_EXPR.  */
+     it could be an MEM_REF or CALL_EXPR.  */
   enum tree_code t_code;
   tree t_tree;
   /* The place in the containing tree.  */
@@ -413,33 +413,18 @@ mtt_info_eq (const void *mtt1, const void *mtt2)
 static bool
 may_flatten_matrices_1 (gimple stmt)
 {
-  tree t;
-
   switch (gimple_code (stmt))
     {
     case GIMPLE_ASSIGN:
-      if (!gimple_assign_cast_p (stmt))
+    case GIMPLE_CALL:
+      if (!gimple_has_lhs (stmt))
         return true;
-
-      t = gimple_assign_rhs1 (stmt);
-      while (CONVERT_EXPR_P (t))
+      if (TREE_CODE (TREE_TYPE (gimple_get_lhs (stmt))) == VECTOR_TYPE)
         {
-          if (TREE_TYPE (t) && POINTER_TYPE_P (TREE_TYPE (t)))
-            {
-              tree pointee;
-
-              pointee = TREE_TYPE (t);
-              while (POINTER_TYPE_P (pointee))
-                pointee = TREE_TYPE (pointee);
-              if (TREE_CODE (pointee) == VECTOR_TYPE)
-                {
-                  if (dump_file)
-                    fprintf (dump_file,
-                             "Found vector type, don't flatten matrix\n");
-                  return false;
-                }
-            }
-          t = TREE_OPERAND (t, 0);
+          if (dump_file)
+            fprintf (dump_file,
+                     "Found vector type, don't flatten matrix\n");
+          return false;
         }
       break;
     case GIMPLE_ASM:
@@ -602,7 +587,7 @@ mark_min_matrix_escape_level (struct matrix_info *mi, int l, gimple s)
 /* Find if the SSA variable is accessed inside the tree and record
    the tree containing it.
    The only relevant uses are the case of SSA_NAME, or SSA inside
-   INDIRECT_REF, PLUS_EXPR, POINTER_PLUS_EXPR, MULT_EXPR.  */
+   MEM_REF, PLUS_EXPR, POINTER_PLUS_EXPR, MULT_EXPR.  */
 static void
 ssa_accessed_in_tree (tree t, struct ssa_acc_in_tree *a)
 {
@@ -613,7 +598,7 @@ ssa_accessed_in_tree (tree t, struct ssa_acc_in_tree *a)
       if (t == a->ssa_var)
         a->var_found = true;
       break;
-    case INDIRECT_REF:
+    case MEM_REF:
       if (SSA_VAR_P (TREE_OPERAND (t, 0))
           && TREE_OPERAND (t, 0) == a->ssa_var)
         a->var_found = true;
@@ -660,7 +645,7 @@ ssa_accessed_in_assign_rhs (gimple stmt, struct ssa_acc_in_tree *a)
       tree op1, op2;
 
     case SSA_NAME:
-    case INDIRECT_REF:
+    case MEM_REF:
     CASE_CONVERT:
     case VIEW_CONVERT_EXPR:
       ssa_accessed_in_tree (gimple_assign_rhs1 (stmt), a);
@@ -984,7 +969,7 @@ get_index_from_offset (tree offset, gimple def_stmt)
 
 /* update MI->dimension_type_size[CURRENT_INDIRECT_LEVEL] with the size
    of the type related to the SSA_VAR, or the type related to the
-   lhs of STMT, in the case that it is an INDIRECT_REF.  */
+   lhs of STMT, in the case that it is an MEM_REF.  */
 static void
 update_type_size (struct matrix_info *mi, gimple stmt, tree ssa_var,
                   int current_indirect_level)
@@ -992,9 +977,9 @@ update_type_size (struct matrix_info *mi, gimple stmt, tree ssa_var,
   tree lhs;
   HOST_WIDE_INT type_size;
 
-  /* Update type according to the type of the INDIRECT_REF expr.  */
+  /* Update type according to the type of the MEM_REF expr.  */
   if (is_gimple_assign (stmt)
-      && TREE_CODE (gimple_assign_lhs (stmt)) == INDIRECT_REF)
+      && TREE_CODE (gimple_assign_lhs (stmt)) == MEM_REF)
     {
       lhs = gimple_assign_lhs (stmt);
       gcc_assert (POINTER_TYPE_P
@@ -1073,7 +1058,7 @@ analyze_accesses_for_call_stmt (struct matrix_info *mi, tree ssa_var,
      at this level because in this case we cannot calculate the
      address correctly.  */
   if ((lhs_acc.var_found && rhs_acc.var_found
-       && lhs_acc.t_code == INDIRECT_REF)
+       && lhs_acc.t_code == MEM_REF)
       || (!rhs_acc.var_found && !lhs_acc.var_found))
     {
       mark_min_matrix_escape_level (mi, current_indirect_level, use_stmt);
@@ -1087,7 +1072,7 @@ analyze_accesses_for_call_stmt (struct matrix_info *mi, tree ssa_var,
     {
       int l = current_indirect_level + 1;
 
-      gcc_assert (lhs_acc.t_code == INDIRECT_REF);
+      gcc_assert (lhs_acc.t_code == MEM_REF);
       mark_min_matrix_escape_level (mi, l, use_stmt);
       return current_indirect_level;
     }
@@ -1213,7 +1198,7 @@ analyze_accesses_for_assign_stmt (struct matrix_info *mi, tree ssa_var,
      at this level because in this case we cannot calculate the
      address correctly.  */
   if ((lhs_acc.var_found && rhs_acc.var_found
-       && lhs_acc.t_code == INDIRECT_REF)
+       && lhs_acc.t_code == MEM_REF)
       || (!rhs_acc.var_found && !lhs_acc.var_found))
     {
       mark_min_matrix_escape_level (mi, current_indirect_level, use_stmt);
@@ -1227,7 +1212,7 @@ analyze_accesses_for_assign_stmt (struct matrix_info *mi, tree ssa_var,
     {
       int l = current_indirect_level + 1;
 
-      gcc_assert (lhs_acc.t_code == INDIRECT_REF);
+      gcc_assert (lhs_acc.t_code == MEM_REF);
 
       if (!(gimple_assign_copy_p (use_stmt)
             || gimple_assign_cast_p (use_stmt))
@@ -1248,7 +1233,7 @@ analyze_accesses_for_assign_stmt (struct matrix_info *mi, tree ssa_var,
      is used.  */
   if (rhs_acc.var_found)
     {
-      if (rhs_acc.t_code != INDIRECT_REF
+      if (rhs_acc.t_code != MEM_REF
           && rhs_acc.t_code != POINTER_PLUS_EXPR && rhs_acc.t_code != SSA_NAME)
         {
           mark_min_matrix_escape_level (mi, current_indirect_level, use_stmt);
@@ -1256,7 +1241,7 @@ analyze_accesses_for_assign_stmt (struct matrix_info *mi, tree ssa_var,
         }
       /* If the access in the RHS has an indirection increase the
          indirection level.  */
-      if (rhs_acc.t_code == INDIRECT_REF)
+      if (rhs_acc.t_code == MEM_REF)
         {
           if (record_accesses)
             record_access_alloc_site_info (mi, use_stmt, NULL_TREE,
@@ -1309,7 +1294,7 @@ analyze_accesses_for_assign_stmt (struct matrix_info *mi, tree ssa_var,
     }
 
   /* If we are storing this level of indirection mark it as escaping.  */
-  if (lhs_acc.t_code == INDIRECT_REF || TREE_CODE (lhs) != SSA_NAME)
+  if (lhs_acc.t_code == MEM_REF || TREE_CODE (lhs) != SSA_NAME)
     {
       int l = current_indirect_level;
 
@@ -1369,8 +1354,8 @@ analyze_matrix_accesses (struct matrix_info *mi, tree ssa_var,
     return;
 
   /* Now go over the uses of the SSA_NAME and check how it is used in
-     each one of them.  We are mainly looking for the pattern INDIRECT_REF,
-     then a POINTER_PLUS_EXPR, then INDIRECT_REF etc.  while in between there could
+     each one of them.  We are mainly looking for the pattern MEM_REF,
+     then a POINTER_PLUS_EXPR, then MEM_REF etc.  while in between there could
      be any number of copies and casts.  */
   gcc_assert (TREE_CODE (ssa_var) == SSA_NAME);
 
@@ -1856,7 +1841,7 @@ transform_access_sites (void **slot, void *data ATTRIBUTE_UNUSED)
               gimple new_stmt;
 
               gcc_assert (gimple_assign_rhs_code (acc_info->stmt)
-                          == INDIRECT_REF);
+                          == MEM_REF);
               /* Emit convert statement to convert to type of use.  */
               tmp = create_tmp_var (TREE_TYPE (lhs), "new");
               add_referenced_var (tmp);
@@ -1878,10 +1863,10 @@ transform_access_sites (void **slot, void *data ATTRIBUTE_UNUSED)
           continue;
         }
       code = gimple_assign_rhs_code (acc_info->stmt);
-      if (code == INDIRECT_REF
+      if (code == MEM_REF
           && acc_info->level < min_escape_l - 1)
         {
-          /* Replace the INDIRECT_REF with NOP (cast) usually we are casting
+          /* Replace the MEM_REF with NOP (cast) usually we are casting
              from "pointer to type" to "type".  */
           tree t =
             build1 (NOP_EXPR, TREE_TYPE (gimple_assign_rhs1 (acc_info->stmt)),
@@ -2206,7 +2191,6 @@ transform_allocation_sites (void **slot, void *data ATTRIBUTE_UNUSED)
   for (i = 1; i < mi->min_indirect_level_escape; i++)
     {
       gimple_stmt_iterator gsi;
-      gimple use_stmt1 = NULL;
 
       gimple call_stmt = mi->malloc_for_level[i];
       gcc_assert (is_gimple_call (call_stmt));
@@ -2216,17 +2200,9 @@ transform_allocation_sites (void **slot, void *data ATTRIBUTE_UNUSED)
       gsi = gsi_for_stmt (call_stmt);
       /* Remove the call stmt.  */
       gsi_remove (&gsi, true);
-      /* remove the type cast stmt.  */
-      FOR_EACH_IMM_USE_STMT (use_stmt, imm_iter,
-                             gimple_call_lhs (call_stmt))
-        {
-          use_stmt1 = use_stmt;
-          gsi = gsi_for_stmt (use_stmt);
-          gsi_remove (&gsi, true);
-        }
       /* Remove the assignment of the allocated area.  */
       FOR_EACH_IMM_USE_STMT (use_stmt, imm_iter,
-                             gimple_get_lhs (use_stmt1))
+                             gimple_call_lhs (call_stmt))
        {
          gsi = gsi_for_stmt (use_stmt);
          gsi_remove (&gsi, true);
```
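The matrix-reorg.c hunks above swap INDIRECT_REF for MEM_REF throughout the access analysis while keeping its shape: analyze_matrix_accesses still follows chains of MEM_REF, then POINTER_PLUS_EXPR, then MEM_REF, with copies and casts in between. A minimal, self-contained C example of the kind of dynamically allocated matrix that analysis is aimed at is sketched below; it is written for illustration here, is not one of the testcases added by this commit, and whether the pass actually flattens it depends on the usual -fipa-matrix-reorg whole-program requirements.

```c
/* Illustration only: a two-level matrix of the shape matrix-reorg analyzes.
   Each m[i][j] access gimplifies to a load through the outer pointer
   (a MEM_REF), a POINTER_PLUS_EXPR for the row offset, and another MEM_REF
   for the element -- the MEM_REF / POINTER_PLUS_EXPR chain the pass tracks.  */

#include <stdlib.h>

#define N 8

static int **m;   /* candidate for flattening into a single allocation */

int
main (void)
{
  int i, j, sum = 0;

  m = (int **) malloc (N * sizeof (int *));
  for (i = 0; i < N; i++)
    m[i] = (int *) malloc (N * sizeof (int));

  for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
      m[i][j] = i + j;

  for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
      sum += m[i][j];

  /* Sum of (i + j) over an N x N grid is N * N * (N - 1).  */
  return sum == N * N * (N - 1) ? 0 : 1;
}
```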