author     aoliva <aoliva@138bc75d-0d04-0410-961f-82ee72b054a4>   2009-09-02 02:42:21 +0000
committer  aoliva <aoliva@138bc75d-0d04-0410-961f-82ee72b054a4>   2009-09-02 02:42:21 +0000
commit     9845d1202fec65574ca05d780859eb8c25489566 (patch)
tree       dee43173429d96027577fbb1e51160bec6fa91a6 /gcc
parent     14151756c821a0ae77fa6a8f90fd83dbf9a99f4e (diff)
download   gcc-9845d1202fec65574ca05d780859eb8c25489566.tar.gz
gcc/ChangeLog:
* doc/invoke.texi (-fvar-tracking-assignments): New.
(-fvar-tracking-assignments-toggle): New.
(-fdump-final-insns=file): Mark filename as optional.
(--param min-nondebug-insn-uid): New.
(-gdwarf-@var{version}): Mention version 4.
* opts.c (common_handle_option): Accept it.
* tree-vrp.c (find_assert_locations_1): Skip debug stmts.
* regrename.c (regrename_optimize): Drop last. Don't count debug
insns as uses. Don't reject change because of debug insn.
(do_replace): Reject DEBUG_INSN as chain starter. Take base_regno
from the chain starter, and check for inexact matches in
DEBUG_INSNs.
(scan_rtx_reg): Accept inexact matches in DEBUG_INSNs.
(build_def_use): Simplify and fix the marking of DEBUG_INSNs.
* sched-ebb.c (schedule_ebbs): Skip boundary debug insns.
* fwprop.c (forward_propagate_and_simplify): ...into debug insns.
* doc/gimple.texi (is_gimple_debug): New.
(gimple_debug_bind_p): New.
(is_gimple_call, gimple_assign_cast_p): End sentence with period.
* doc/install.texi (bootstrap-debug): More details.
(bootstrap-debug-big, bootstrap-debug-lean): Document.
(bootstrap-debug-lib): More details.
(bootstrap-debug-ckovw): Update.
(bootstrap-time): New.
* tree-into-ssa.c (mark_def_sites): Skip debug stmts.
(insert_phi_nodes_for): Insert debug stmts.
(rewrite_stmt): Take iterator. Insert debug stmts.
(rewrite_enter_block): Adjust.
(maybe_replace_use_in_debug_stmt): New.
(rewrite_update_stmt): Use it.
(mark_use_interesting): Return early for debug stmts.
* tree-ssa-loop-im.c (rewrite_bittest): Propagate DEFs into debug
stmts before replacing stmt.
(move_computations_stmt): Likewise.
* ira-conflicts.c (add_copies): Skip debug insns.
* regstat.c (regstat_init_n_sets_and_refs): Discount debug insns.
(regstat_bb_compute_ri): Skip debug insns.
* tree-ssa-threadupdate.c (redirection_block_p): Skip debug stmts.
* tree-ssa-loop-manip.c (find_uses_to_rename_stmt,
check_loop_closed_ssa_stmt): Skip debug stmts.
* tree-tailcall.c (find_tail_calls): Likewise.
* tree-ssa-loop-ch.c (should_duplicate_loop_header_p): Likewise.
* tree.h (MAY_HAVE_DEBUG_STMTS): New.
(build_var_debug_value_stat): Declare.
(build_var_debug_value): Define.
(target_for_debug_bind): Declare.
* reload.c (find_equiv_reg): Skip debug insns.
* rtlanal.c (reg_used_between_p): Skip debug insns.
(side_effects_p): Likewise.
(canonicalize_condition): Likewise.
* ddg.c (create_ddg_dep_from_intra_loop_link): Check that non-debug
insns never depend on debug insns.
(create_ddg_dep_no_link): Likewise.
(add_cross_iteration_register_deps): Use ANTI_DEP for debug insns.
Don't add inter-loop dependencies for debug insns.
(build_intra_loop_deps): Likewise.
(create_ddg): Count debug insns.
* ddg.h (struct ddg::num_debug): New.
(num_backargs): Pair up with previous int field.
* diagnostic.c (diagnostic_report_diagnostic): Skip notes on
-fcompare-debug-second.
* final.c (get_attr_length_1): Skip debug insns.
(rest_of_clean_state): Don't dump CFA_RESTORE_STATE.
* gcc.c (invoke_as): Call compare-debug-dump-opt.
(driver_self_specs): Map -fdump-final-insns to
-fdump-final-insns=..
(get_local_tick): New.
(compare_debug_dump_opt_spec_function): Test for . argument and
compute output name. Compute temp output spec without flag name.
Compute -frandom-seed.
(OPT): Undef after use.
* cfgloopanal.c (num_loop_insns): Skip debug insns.
(average_num_loop_insns): Likewise.
* params.h (MIN_NONDEBUG_INSN_UID): New.
* gimple.def (GIMPLE_DEBUG): New.
* ipa-reference.c (scan_stmt_for_static_refs): Skip debug stmts.
* auto-inc-dec.c (merge_in_block): Skip debug insns.
(merge_in_block): Fix whitespace.
* toplev.c (flag_var_tracking): Update comment.
(flag_var_tracking_assignments): New.
(flag_var_tracking_assignments_toggle): New.
(process_options): Don't open final insns dump file if we're not
going to write to it. Compute defaults for var_tracking.
* df-scan.c (df_insn_rescan_debug_internal): New.
(df_uses_record): Handle debug insns.
* haifa-sched.c (ready): Initialize n_debug.
(contributes_to_priority): Skip debug insns.
(dep_list_size): New.
(priority): Use it.
(rank_for_schedule): Likewise. Schedule debug insns as soon as
they're ready. Disregard previous debug insns to make decisions.
(queue_insn): Never queue debug insns.
(ready_add, ready_remove_first, ready_remove): Count debug insns.
(schedule_insn): Don't reject debug insns because of issue rate.
(get_ebb_head_tail, no_real_insns_p): Skip boundary debug insns.
(queue_to_ready): Skip and discount debug insns.
(choose_ready): Let debug insns through.
(schedule_block): Check boundary debug insns. Discount debug
insns, schedule them early. Adjust whitespace.
(set_priorities): Check for boundary debug insns.
(add_jump_dependencies): Use dep_list_size.
(prev_non_location_insn): New.
(check_cfg): Use it.
* tree-ssa-loop-ivopts.c (find_interesting_uses): Skip debug
stmts.
(remove_unused_ivs): Reset debug stmts.
* modulo-sched.c (const_iteration_count): Skip debug insns.
(res_MII): Discount debug insns.
(loop_single_full_bb_p): Skip debug insns.
(sms_schedule): Likewise.
(sms_schedule_by_order): Likewise.
(ps_has_conflicts): Likewise.
* caller-save.c (refmarker_fn): New.
(save_call_clobbered_regs): Replace regs with saved mem in
debug insns.
(mark_referenced_regs): Take pointer, mark and arg. Adjust.
Call the refmarker_fn mark for hardregnos.
(mark_reg_as_referenced): New.
(replace_reg_with_saved_mem): New.
* ipa-pure-const.c (check_stmt): Skip debug stmts.
* cse.c (cse_insn): Canonicalize debug insns. Skip them when
searching back.
(cse_extended_basic_block): Skip debug insns.
(count_reg_usage): Likewise.
(is_dead_reg): New, split out of...
(set_live_p): ... here.
(insn_live_p): Use it for debug insns.
* tree-stdarg.c (check_all_va_list_escapes): Skip debug stmts.
(execute_optimize_stdarg): Likewise.
* tree-ssa-dom.c (propagate_rhs_into_lhs): Likewise.
* tree-ssa-propagate.c (substitute_and_fold): Don't regard
changes in debug stmts as changes.
* sel-sched.c (moving_insn_creates_bookkeeping_block_p): New.
(moveup_expr): Don't move across debug insns. Don't move
debug insn if it would create a bookkeeping block.
(moveup_expr_cached): Don't use cache for debug insns that
are heads of blocks.
(compute_av_set_inside_bb): Skip debug insns.
(sel_rank_for_schedule): Schedule debug insns first. Remove
dead code.
(block_valid_for_bookkeeping_p): Support lax searches.
(create_block_for_bookkeeping): Adjust block numbers when
encountering debug-only blocks.
(find_place_for_bookkeeping): Deal with debug-only blocks.
(generate_bookkeeping_insn): Accept no place to insert.
(remove_temp_moveop_nops): New argument full_tidying.
(prepare_place_to_insert): Deal with debug insns.
(advance_state_on_fence): Debug insns don't start cycles.
(update_boundaries): Take fence as argument. Deal with
debug insns.
(schedule_expr_on_boundary): No full_tidying on debug insns.
(fill_insns): Deal with debug insns.
(track_scheduled_insns_and_blocks): Don't count debug insns.
(need_nop_to_preserve_insn_bb): New, split out of...
(remove_insn_from_stream): ... this.
(fur_orig_expr_not_found): Skip debug insns.
* rtl.def (VALUE): Move up.
(DEBUG_INSN): New.
* tree-ssa-sink.c (all_immediate_uses_same_place): Skip debug
stmts.
(nearest_common_dominator_of_uses): Take debug_stmts argument.
Set it if debug stmts are found.
(statement_sink_location): Skip debug stmts. Propagate
moving defs into debug stmts.
* ifcvt.c (first_active_insn): Skip debug insns.
(last_active_insns): Likewise.
(cond_exec_process_insns): Likewise.
(noce_process_if_block): Likewise.
(check_cond_move_block): Likewise.
(cond_move_convert_if_block): Likewise.
(block_jumps_and_fallthru_p): Likewise.
(dead_or_predicable): Likewise.
* dwarf2out.c (debug_str_hash_forced): New.
(find_AT_string): Add comment.
(gen_label_for_indirect_string): New.
(get_debug_string_label): New.
(AT_string_form): Use it.
(mem_loc_descriptor): Handle non-TLS symbols. Handle MINUS, DIV,
MOD, AND, IOR, XOR, NOT, ABS, NEG, and CONST_STRING. Accept but
discard COMPARE, IF_THEN_ELSE, ROTATE, ROTATERT, TRUNCATE and
several operations that cannot be represented with DWARF opcodes.
(loc_descriptor): Ignore SIGN_EXTEND and ZERO_EXTEND. Require
dwarf_version 4 for DW_OP_implicit_value and DW_OP_stack_value.
(dwarf2out_var_location): Take during-call mark into account.
(output_indirect_string): Update comment. Output if there are
label and references.
(prune_indirect_string): New.
(prune_unused_types): Call it if debug_str_hash_forced.
More in dwarf2out.c, from Jakub Jelinek <jakub@redhat.com>:
(dw_long_long_const): Remove.
(struct dw_val_struct): Change val_long_long type to rtx.
(print_die, attr_checksum, same_dw_val_p, loc_descriptor): Adjust for
val_long_long change to CONST_DOUBLE rtx from a long hi/lo pair.
(output_die): Likewise. Use HOST_BITS_PER_WIDE_INT size of each
component instead of HOST_BITS_PER_LONG.
(output_loc_operands): Likewise. For const8* assert
HOST_BITS_PER_WIDE_INT rather than HOST_BITS_PER_LONG is >= 64.
(output_loc_operands_raw): For const8* assert HOST_BITS_PER_WIDE_INT
rather than HOST_BITS_PER_LONG is >= 64.
(add_AT_long_long): Remove val_hi and val_lo arguments, add
val_const_double.
(size_of_die): Use HOST_BITS_PER_WIDE_INT size multiplier instead of
HOST_BITS_PER_LONG for dw_val_class_long_long.
(add_const_value_attribute): Adjust add_AT_long_long caller. Don't
handle TLS SYMBOL_REFs. If CONST wraps a constant, tail recurse.
(dwarf_stack_op_name): Handle DW_OP_implicit_value and
DW_OP_stack_value.
(size_of_loc_descr, output_loc_operands, output_loc_operands_raw):
Handle DW_OP_implicit_value.
(extract_int): Move prototype earlier.
(mem_loc_descriptor): For SUBREG punt if inner
mode size is wider than DWARF2_ADDR_SIZE. Handle SIGN_EXTEND
and ZERO_EXTEND by DW_OP_shl and DW_OP_shr{a,}. Handle
EQ, NE, GT, GE, LT, LE, GTU, GEU, LTU, LEU, SMIN, SMAX, UMIN,
UMAX, SIGN_EXTRACT, ZERO_EXTRACT.
(loc_descriptor): Compare mode size with DWARF2_ADDR_SIZE
instead of Pmode size.
(loc_descriptor): Add MODE argument. Handle CONST_INT, CONST_DOUBLE,
CONST_VECTOR, CONST, LABEL_REF and SYMBOL_REF if mode != VOIDmode,
attempt to handle other expressions. Don't handle TLS SYMBOL_REFs.
(concat_loc_descriptor, concatn_loc_descriptor,
loc_descriptor_from_tree_1): Adjust loc_descriptor callers.
(add_location_or_const_value_attribute): Likewise. For single
location loc_lists attempt to use add_const_value_attribute
for constant decls. Add DW_AT_const_value even if
NOTE_VAR_LOCATION is VAR_LOCATION with CONSTANT_P or CONST_STRING
in its expression.
* cfgbuild.c (inside_basic_block_p): Handle debug insns.
(control_flow_insn_p): Likewise.
* tree-parloops.c (eliminate_local_variables_stmt): Handle debug
stmt.
(separate_decls_in_region_debug_bind): New.
(separate_decls_in_region): Process debug bind stmts afterwards.
* recog.c (verify_changes): Handle debug insns.
(extract_insn): Likewise.
(peephole2_optimize): Skip debug insns.
* dse.c (scan_insn): Skip debug insns.
* sel-sched-ir.c (return_nop_to_pool): Take full_tidying argument.
Pass it on.
(setup_id_for_insn): Handle debug insns.
(maybe_tidy_empty_bb): Adjust whitespace.
(tidy_control_flow): Skip debug insns.
(sel_remove_insn): Adjust for debug insns.
(sel_estimate_number_of_insns): Skip debug insns.
(create_insn_rtx_from_pattern): Handle debug insns.
(create_copy_of_insn_rtx): Likewise.
* sel-sched-ir.h (sel_bb_end): Declare.
(sel_bb_empty_or_nop_p): New.
(get_all_loop_exits): Use it.
(_eligible_successor_edge_p): Likewise.
(return_nop_to_pool): Adjust.
* tree-eh.c (tree_empty_eh_handler_p): Skip debug stmts.
* ira-lives.c (process_bb_node_lives): Skip debug insns.
* gimple-pretty-print.c (dump_gimple_debug): New.
(dump_gimple_stmt): Use it.
(dump_bb_header): Skip gimple debug stmts.
* regmove.c (optimize_reg_copy_1): Discount debug insns.
(fixup_match_2): Likewise.
(regmove_backward_pass): Likewise. Simplify combined
replacement. Handle debug insns.
* function.c (instantiate_virtual_regs): Handle debug insns.
* function.h (struct emit_status): Add x_cur_debug_insn_uid.
* print-rtl.h: Include cselib.h.
(print_rtx): Print VALUEs. Split out and recurse for
VAR_LOCATIONs.
* df.h (df_insn_rescan_debug_internal): Declare.
* gcse.c (alloc_hash_table): Estimate n_insns.
(cprop_insn): Don't regard debug insns as changes.
(bypass_conditional_jumps): Skip debug insns.
(one_pre_gcse_pass): Adjust.
(one_code_hoisting_pass): Likewise.
(compute_ld_motion_mems): Skip debug insns.
(one_cprop_pass): Adjust.
* tree-if-conv.c (tree_if_convert_stmt): Reset debug stmts.
(if_convertible_stmt_p): Handle debug stmts.
* init-regs.c (initialize_uninitialized_regs): Skip debug insns.
* tree-vect-loop.c (vect_is_simple_reduction): Skip debug stmts.
* ira-build.c (create_bb_allocnos): Skip debug insns.
* tree-flow-inline.h (has_zero_uses): Discount debug stmts.
(has_single_use): Likewise.
(single_imm_use): Likewise.
(num_imm_uses): Likewise.
* tree-ssa-phiopt.c (empty_block_p): Skip debug stmts.
* tree-ssa-coalesce.c (build_ssa_conflict_graph): Skip debug stmts.
(create_outofssa_var_map): Likewise.
* lower-subreg.c (adjust_decomposed_uses): New.
(resolve_debug): New.
(decompose_multiword_subregs): Use it.
* tree-dfa.c (find_referenced_vars): Skip debug stmts.
* emit-rtl.c: Include params.h.
(cur_debug_insn_uid): Define.
(set_new_first_and_last_insn): Set cur_debug_insn_uid too.
(copy_rtx_if_shared_1): Handle debug insns.
(reset_used_flags): Likewise.
(set_used_flags): Likewise.
(get_max_insn_count): New.
(next_nondebug_insn): New.
(prev_nondebug_insn): New.
(make_debug_insn_raw): New.
(emit_insn_before_noloc): Handle debug insns.
(emit_jump_insn_before_noloc): Likewise.
(emit_call_insn_before_noloc): Likewise.
(emit_debug_insn_before_noloc): New.
(emit_insn_after_noloc): Handle debug insns.
(emit_jump_insn_after_noloc): Likewise.
(emit_call_insn_after_noloc): Likewise.
(emit_debug_insn_after_noloc): Likewise.
(emit_insn_after): Take loc from earlier non-debug insn.
(emit_jump_insn_after): Likewise.
(emit_call_insn_after): Likewise.
(emit_debug_insn_after_setloc): New.
(emit_debug_insn_after): New.
(emit_insn_before): Take loc from later non-debug insn.
(emit_jump_insn_before): Likewise.
(emit_call_insn_before): Likewise.
(emit_debug_insn_before_setloc): New.
(emit_debug_insn_before): New.
(emit_insn): Handle debug insns.
(emit_debug_insn): New.
(emit_jump_insn): Handle debug insns.
(emit_call_insn): Likewise.
(emit): Likewise.
(init_emit): Take min-nondebug-insn-uid into account.
Initialize cur_debug_insn_uid.
(emit_copy_of_insn_after): Handle debug insns.
* cfgexpand.c (gimple_assign_rhs_to_tree): Do not overwrite
location of single rhs in place.
(maybe_dump_rtl_for_gimple_stmt): Dump lineno.
(floor_sdiv_adjust): New.
(cell_sdiv_adjust): New.
(cell_udiv_adjust): New.
(round_sdiv_adjust): New.
(round_udiv_adjust): New.
(wrap_constant): Moved from cselib.
(unwrap_constant): New.
(expand_debug_expr): New.
(expand_debug_locations): New.
(expand_gimple_basic_block): Drop hiding redeclaration. Expand
debug bind stmts.
(gimple_expand_cfg): Expand debug locations.
* cselib.c: Include tree-pass.h.
(struct expand_value_data): New.
(cselib_record_sets_hook): New.
(PRESERVED_VALUE_P, LONG_TERM_PRESERVED_VALUE_P): New.
(cselib_clear_table): Move, and implement in terms of...
(cselib_reset_table_with_next_value): ... this.
(cselib_get_next_unknown_value): New.
(discard_useless_locs): Don't discard preserved values.
(cselib_preserve_value): New.
(cselib_preserved_value_p): New.
(cselib_preserve_definitely): New.
(cselib_clear_preserve): New.
(cselib_preserve_only_values): New.
(new_cselib_val): Take rtx argument. Dump it in details.
(cselib_lookup_mem): Adjust.
(expand_loc): Take regs_active in struct. Adjust. Silence
dumps unless details are requested.
(cselib_expand_value_rtx_cb): New.
(cselib_expand_value_rtx): Rename and reimplement in terms of...
(cselib_expand_value_rtx_1): ... this. Adjust. Silence dumps
without details. Copy more subregs. Try to resolve values
using a callback. Wrap constants.
(cselib_subst_to_values): Adjust.
(cselib_log_lookup): New.
(cselib_lookup): Call it.
(cselib_invalidate_regno): Don't count preserved values as
useless.
(cselib_invalidate_mem): Likewise.
(cselib_record_set): Likewise.
(struct set): Renamed to cselib_set, moved to cselib.h.
(cselib_record_sets): Adjust. Call hook.
(cselib_process_insn): Reset table when it would be cleared.
(dump_cselib_val): New.
(dump_cselib_table): New.
* tree-cfgcleanup.c (tree_forwarded_block_p): Skip debug stmts.
(remove_forwarder_block): Support moving debug stmts.
* cselib.h (cselib_record_sets_hook): Declare.
(cselib_expand_callback): New type.
(cselib_expand_value_rtx_cb): Declare.
(cselib_reset_table_with_next_value): Declare.
(cselib_get_next_unknown_value): Declare.
(cselib_preserve_value): Declare.
(cselib_preserved_value_p): Declare.
(cselib_preserve_only_values): Declare.
(dump_cselib_table): Declare.
* cfgcleanup.c (flow_find_cross_jump): Skip debug insns.
(try_crossjump_to_edge): Likewise.
(delete_unreachable_blocks): Remove dominant GIMPLE blocks after
dominated blocks when debug stmts are present.
* simplify-rtx.c (delegitimize_mem_from_attrs): New.
* tree-ssa-live.c (remove_unused_locals): Skip debug stmts.
(set_var_live_on_entry): Likewise.
* loop-invariant.c (find_invariants_bb): Skip debug insns.
* cfglayout.c (curr_location, last_location): Make static.
(set_curr_insn_source_location): Don't avoid bouncing.
(get_curr_insn_source_location): New.
(get_curr_insn_block): New.
(duplicate_insn_chain): Handle debug insns.
* tree-ssa-forwprop.c (forward_propagate_addr_expr): Propagate
into debug stmts.
* common.opt (fcompare-debug): Move to sort order.
(fdump-unnumbered-links): Likewise.
(fvar-tracking-assignments): New.
(fvar-tracking-assignments-toggle): New.
* tree-ssa-dce.c (mark_stmt_necessary): Don't mark blocks
because of debug stmts.
(mark_stmt_if_obviously_necessary): Mark debug stmts.
(eliminate_unnecessary_stmts): Walk dominated blocks before
dominators.
* tree-ssa-ter.c (find_replaceable_in_bb): Skip debug stmts.
* ira.c (memref_used_between_p): Skip debug insns.
(update_equiv_regs): Likewise.
* sched-deps.c (sd_lists_size): Accept empty list.
(sd_init_insn): Mark debug insns.
(sd_finish_insn): Unmark them.
(sd_add_dep): Reject non-debug deps on debug insns.
(fixup_sched_groups): Give debug insns group treatment.
Skip debug insns.
(sched_analyze_reg): Don't mark debug insns for sched before call.
(sched_analyze_2): Handle debug insns.
(sched_analyze_insn): Compute next non-debug insn. Handle debug
insns.
(deps_analyze_insn): Handle debug insns.
(deps_start_bb): Skip debug insns.
(init_deps): Initialize last_debug_insn.
* tree-ssa.c (target_for_debug_bind): New.
(find_released_ssa_name): New.
(propagate_var_def_into_debug_stmts): New.
(propagate_defs_into_debug_stmts): New.
(verify_ssa): Skip debug bind stmts without values.
(warn_uninitialized_vars): Skip debug stmts.
* target-def.h (TARGET_DELEGITIMIZE_ADDRESS): Set default.
* rtl.c (rtx_equal_p_cb): Handle VALUEs.
(rtx_equal_p): Likewise.
* ira-costs.c (scan_one_insn): Skip debug insns.
(process_bb_node_for_hard_reg_moves): Likewise.
* rtl.h (DEBUG_INSN_P): New.
(NONDEBUG_INSN_P): New.
(MAY_HAVE_DEBUG_INSNS): New.
(INSN_P): Accept debug insns.
(RTX_FRAME_RELATED_P): Likewise.
(INSN_DELETED_P): Likewise.
(PAT_VAR_LOCATION_DECL): New.
(PAT_VAR_LOCATION_LOC): New.
(PAT_VAR_LOCATION_STATUS): New.
(NOTE_VAR_LOCATION_DECL): Reimplement.
(NOTE_VAR_LOCATION_LOC): Likewise.
(NOTE_VAR_LOCATION_STATUS): Likewise.
(INSN_VAR_LOCATION): New.
(INSN_VAR_LOCATION_DECL): New.
(INSN_VAR_LOCATION_LOC): New.
(INSN_VAR_LOCATION_STATUS): New.
(gen_rtx_UNKNOWN_VAR_LOC): New.
(VAR_LOC_UNKNOWN_P): New.
(NOTE_DURING_CALL_P): New.
(SCHED_GROUP_P): Accept debug insns.
(emit_debug_insn_before): Declare.
(emit_debug_insn_before_noloc): Declare.
(emit_debug_insn_before_setloc): Declare.
(emit_debug_insn_after): Declare.
(emit_debug_insn_after_noloc): Declare.
(emit_debug_insn_after_setloc): Declare.
(emit_debug_insn): Declare.
(make_debug_insn_raw): Declare.
(prev_nondebug_insn): Declare.
(next_nondebug_insn): Declare.
(delegitimize_mem_from_attrs): Declare.
(get_max_insn_count): Declare.
(wrap_constant): Declare.
(unwrap_constant): Declare.
(get_curr_insn_source_location): Declare.
(get_curr_insn_block): Declare.
* tree-inline.c (insert_debug_decl_map): New.
(processing_debug_stmt): New.
(remap_decl): Don't create new mappings in debug stmts.
(remap_gimple_op_r): Don't add references in debug stmts.
(copy_tree_body_r): Likewise.
(remap_gimple_stmt): Handle debug bind stmts.
(copy_bb): Skip debug stmts.
(copy_edges_for_bb): Likewise.
(copy_debug_stmt): New.
(copy_debug_stmts): New.
(copy_body): Copy debug stmts at the end.
(insert_init_debug_bind): New.
(insert_init_stmt): Take id. Skip and emit debug stmts.
(setup_one_parameter): Remap variable earlier, register debug
mapping.
(estimate_num_insns): Skip debug stmts.
(expand_call_inline): Preserve debug_map.
(optimize_inline_calls): Check for no debug_stmts left-overs.
(unsave_expr_now): Preserve debug_map.
(copy_gimple_seq_and_replace_locals): Likewise.
(tree_function_versioning): Check for no debug_stmts left-overs.
Init and destroy debug_map as needed. Split edges unconditionally.
(build_duplicate_type): Init and destroy debug_map as needed.
* tree-inline.h: Include gimple.h instead of pointer-set.h.
(struct copy_body_data): Add debug_stmts and debug_map.
* sched-int.h (struct ready_list): Add n_debug.
(struct deps): Add last_debug_insn.
(DEBUG_INSN_SCHED_P): New.
(BOUNDARY_DEBUG_INSN_P): New.
(SCHEDULE_DEBUG_INSN_P): New.
(sd_iterator_cond): Accept empty list.
* combine.c (create_log_links): Skip debug insns.
(combine_instructions): Likewise.
(cleanup_auto_inc_dec): New. From Jakub Jelinek: Make sure the
return value is always unshared.
(struct rtx_subst_pair): New.
(auto_adjust_pair): New.
(propagate_for_debug_subst): New.
(propagate_for_debug): New.
(try_combine): Skip debug insns. Propagate removed defs into
debug insns.
(next_nonnote_nondebug_insn): New.
(distribute_notes): Use it. Skip debug insns.
(distribute_links): Skip debug insns.
* tree-outof-ssa.c (set_location_for_edge): Likewise.
* resource.c (mark_target_live_regs): Likewise.
* var-tracking.c: Include cselib.h and target.h.
(enum micro_operation_type): Add MO_VAL_USE, MO_VAL_LOC, and
MO_VAL_SET.
(micro_operation_type_name): New.
(enum emit_note_where): Add EMIT_NOTE_AFTER_CALL_INSN.
(struct micro_operation_def): Update comments.
(decl_or_value): New type. Use instead of decls.
(struct emit_note_data_def): Add vars.
(struct attrs_def): Use decl_or_value.
(struct variable_tracking_info_def): Add permp, flooded.
(struct location_chain_def): Update comment.
(struct variable_part_def): Use decl_or_value.
(struct variable_def): Make var_part a variable length array.
(valvar_pool): New.
(scratch_regs): New.
(cselib_hook_called): New.
(dv_is_decl_p): New.
(dv_is_value_p): New.
(dv_as_decl): New.
(dv_as_value): New.
(dv_as_opaque): New.
(dv_onepart_p): New.
(dv_pool): New.
(IS_DECL_CODE): New.
(check_value_is_not_decl): New.
(dv_from_decl): New.
(dv_from_value): New.
(dv_htab_hash): New.
(variable_htab_hash): Use it.
(variable_htab_eq): Support values.
(variable_htab_free): Free from the right pool.
(attrs_list_member, attrs_list_insert): Use decl_or_value.
(attrs_list_union): Adjust.
(attrs_list_mpdv_union): New.
(tie_break_pointers): New.
(canon_value_cmp): New.
(unshare_variable): Return possibly-modified slot.
(vars_copy_1): Adjust.
(var_reg_decl_set): Adjust. Split out of...
(var_reg_set): ... this.
(get_init_value): Adjust.
(var_reg_delete_and_set): Adjust.
(var_reg_delete): Adjust.
(var_regno_delete): Adjust.
(var_mem_decl_set): Split out of...
(var_mem_set): ... this.
(var_mem_delete_and_set): Adjust.
(var_mem_delete): Adjust.
(val_store): New.
(val_reset): New.
(val_resolve): New.
(variable_union): Adjust. Speed up merge of 1-part vars.
(variable_canonicalize): Use unshared slot.
(VALUED_RECURSED_INTO): New.
(find_loc_in_1pdv): New.
(struct dfset_merge): New.
(insert_into_intersection): New.
(intersect_loc_chains): New.
(loc_cmp): New.
(canonicalize_loc_order_check): New.
(canonicalize_values_mark): New.
(canonicalize_values_star): New.
(variable_merge_over_cur): New.
(variable_merge_over_src): New.
(dataflow_set_merge): New.
(dataflow_set_equiv_regs): New.
(remove_duplicate_values): New.
(struct dfset_post_merge): New.
(variable_post_merge_new_vals): New.
(variable_post_merge_perm_vals): New.
(dataflow_post_merge_adjust): New.
(find_mem_expr_in_1pdv): New.
(dataflow_set_preserve_mem_locs): New.
(dataflow_set_remove_mem_locs): New.
(dataflow_set_clear_at_call): New.
(onepart_variable_different_p): New.
(variable_different_p): Use it.
(dataflow_set_different_1): Adjust. Make detailed dump
more verbose.
(track_expr_p): Add need_rtl parameter. Don't generate rtl
if not needed.
(track_loc_p): Pass it true.
(struct count_use_info): New.
(find_use_val): New.
(replace_expr_with_values): New.
(log_op_type): New.
(use_type): New, partially split out of...
(count_uses): ... this. Count new micro-ops.
(count_uses_1): Adjust.
(count_stores): Adjust.
(count_with_sets): New.
(VAL_NEEDS_RESOLUTION): New.
(VAL_HOLDS_TRACK_EXPR): New.
(VAL_EXPR_IS_COPIED): New.
(VAL_EXPR_IS_CLOBBERED): New.
(add_uses): Adjust. Generate new micro-ops.
(add_uses_1): Adjust.
(add_stores): Generate new micro-ops.
(add_with_sets): New.
(find_src_status): Adjust.
(find_src_set_src): Adjust.
(compute_bb_dataflow): Use dataflow_set_clear_at_call.
Handle new micro-ops. Canonicalize value equivalences.
(vt_find_locations): Compute total size of hash tables for
dumping. Perform merge for var-tracking-assignments. Don't
disregard single-block loops.
(dump_attrs_list): Handle decl_or_value.
(dump_variable): Take variable. Deal with decl_or_value.
(dump_variable_slot): New.
(dump_vars): Use it.
(dump_dataflow_sets): Adjust.
(set_slot_part): New, extended to support one-part variables
after splitting out of...
(set_variable_part): ... this.
(clobber_slot_part): New, split out of...
(clobber_variable_part): ... this.
(delete_slot_part): New, split out of...
(delete_variable_part): ... this.
(check_wrap_constant): New.
(vt_expand_loc_callback): New.
(vt_expand_loc): New.
(emit_note_insn_var_location): Adjust. Handle values. Handle
EMIT_NOTE_AFTER_CALL_INSN.
(emit_notes_for_differences_1): Adjust. Handle values.
(emit_notes_for_differences_2): Likewise.
(emit_notes_for_differences): Adjust.
(emit_notes_in_bb): Take pointer to set. Emit AFTER_CALL_INSN
notes. Adjust. Handle new micro-ops.
(vt_add_function_parameters): Adjust. Create and bind values.
(vt_initialize): Adjust. Initialize scratch_regs and
valvar_pool, flooded and permp. Initialize and use cselib. Log
operations. Move some code to count_with_sets and add_with_sets.
(delete_debug_insns): New.
(vt_debug_insns_local): New.
(vt_finalize): Release permp, valvar_pool, scratch_regs. Finish
cselib.
(var_tracking_main): If var-tracking-assignments is enabled
but var-tracking isn't, delete debug insns and leave. Likewise
if we exceed limits or fail the stack adjustments tests, and
after all var-tracking processing.
More in var-tracking, from Jakub Jelinek <jakub@redhat.com>:
(dataflow_set): Add traversed_vars.
(value_chain, const_value_chain): New typedefs.
(value_chain_pool, value_chains): New variables.
(value_chain_htab_hash, value_chain_htab_eq, add_value_chain,
add_value_chains, add_cselib_value_chains, remove_value_chain,
remove_value_chains, remove_cselib_value_chains): New functions.
(shared_hash_find_slot_unshare_1, shared_hash_find_slot_1,
shared_hash_find_slot_noinsert_1, shared_hash_find_1): New
static inlines.
(shared_hash_find_slot_unshare, shared_hash_find_slot,
shared_hash_find_slot_noinsert, shared_hash_find): Update.
(dst_can_be_shared): New variable.
(unshare_variable): Unshare set->vars if shared, use shared_hash_*.
Clear dst_can_be_shared. If set->traversed_vars is non-NULL and
different from set->vars, look up slot again instead of using the
passed in slot.
(dataflow_set_init): Initialize traversed_vars.
(variable_union): Use shared_hash_*. Use initially NO_INSERT
lookup if set->vars is shared. Don't keep slot cleared before
calling unshare_variable. Unshare set->vars if needed. Adjust
unshare_variable callers. Clear dst_can_be_shared if needed.
Even ->refcount == 1 vars must be unshared if set->vars is shared
and var needs to be modified.
(dataflow_set_union): Set traversed_vars during canonicalization.
(VALUE_CHANGED, DECL_CHANGED): Define.
(set_dv_changed, dv_changed_p): New static inlines.
(track_expr_p): Clear DECL_CHANGED.
(dump_dataflow_sets): Set it.
(variable_was_changed): Call set_dv_changed.
(emit_note_insn_var_location): Likewise.
(changed_variables_stack): New variable.
(check_changed_vars_1, check_changed_vars_2): New functions.
(emit_notes_for_changes): Do nothing if changed_variables is
empty. Traverse changed_variables with check_changed_vars_1,
call check_changed_vars_2 on each changed_variables_stack entry.
(emit_notes_in_bb): Add SET argument. Just clear it at the
beginning, use it instead of local &set, don't destroy it at the
end.
(vt_emit_notes): Call dataflow_set_clear early on all
VTI(bb)->out sets, never use them, instead use emit_notes_in_bb
computed set, dataflow_set_clear also VTI(bb)->in when we are
done with the basic block. Initialize changed_variables_stack,
free it afterwards. If ENABLE_CHECKING verify that after noting
differences to an empty set value_chains hash table is empty.
(vt_initialize): Initialize value_chains and value_chain_pool.
(vt_finalize): Delete value_chains htab, free value_chain_pool.
(variable_tracking_main): Call dump_dataflow_sets before calling
vt_emit_notes, not after it.
* tree-flow.h (propagate_defs_into_debug_stmts): Declare.
(propagate_var_def_into_debug_stmts): Declare.
* df-problems.c (df_lr_bb_local_compute): Skip debug insns.
(df_set_note): Reject debug insns.
(df_whole_mw_reg_dead_p): Take added_notes_p argument. Don't
add notes to debug insns.
(df_note_bb_compute): Adjust. Likewise.
(df_simulate_uses): Skip debug insns.
(df_simulate_initialize_backwards): Likewise.
* reg-stack.c (subst_stack_regs_in_debug_insn): New.
(subst_stack_regs_pat): Reject debug insns.
(convert_regs_1): Handle debug insns.
* Makefile.in (TREE_INLINE_H): Take pointer-set.h from GIMPLE_H.
(print-rtl.o): Depend on cselib.h.
(cselib.o): Depend on TREE_PASS_H.
(var-tracking.o): Depend on cselib.h and TARGET_H.
* sched-rgn.c (rgn_estimate_number_of_insns): Discount
debug insns.
(init_ready_list): Skip boundary debug insns.
(add_branch_dependences): Skip debug insns.
(free_block_dependencies): Check for blocks with only debug
insns.
(compute_priorities): Likewise.
* gimple.c (gss_for_code): Handle GIMPLE_DEBUG.
(gimple_build_with_ops_stat): Take subcode as unsigned. Adjust
all callers.
(gimple_build_debug_bind_stat): New.
(empty_body_p): Skip debug stmts.
(gimple_has_side_effects): Likewise.
(gimple_rhs_has_side_effects): Likewise.
* gimple.h (enum gimple_debug_subcode, GIMPLE_DEBUG_BIND): New.
(gimple_build_debug_bind_stat): Declare.
(gimple_build_debug_bind): Define.
(is_gimple_debug): New.
(gimple_debug_bind_p): New.
(gimple_debug_bind_get_var): New.
(gimple_debug_bind_get_value): New.
(gimple_debug_bind_get_value_ptr): New.
(gimple_debug_bind_set_var): New.
(gimple_debug_bind_set_value): New.
(GIMPLE_DEBUG_BIND_NOVALUE): New internal temporary macro.
(gimple_debug_bind_reset_value): New.
(gimple_debug_bind_has_value_p): New.
(gsi_next_nondebug): New.
(gsi_prev_nondebug): New.
(gsi_start_nondebug_bb): New.
(gsi_last_nondebug_bb): New.
* sched-vis.c (print_pattern): Handle VAR_LOCATION.
(print_insn): Handle DEBUG_INSN.
* tree-cfg.c (remove_bb): Walk stmts backwards. Let loc
of first insn prevail.
(first_stmt): Skip debug stmts.
(first_non_label_stmt): Likewise.
(last_stmt): Likewise.
(has_zero_uses_1): New.
(single_imm_use_1): New.
(verify_gimple_debug): New.
(verify_types_in_gimple_stmt): Handle debug stmts.
(verify_stmt): Likewise.
(debug_loop_num): Skip debug stmts.
(remove_edge_and_dominated_blocks): Remove dominators last.
* tree-ssa-reassoc.c (rewrite_expr_tree): Propagate into
debug stmts.
(linearize_expr): Likewise.
* config/i386/i386.c (ix86_delegitimize_address): Call
default implementation.
* config/ia64/ia64.c (ia64_safe_itanium_class): Handle debug
insns.
(group_barrier_needed): Skip debug insns.
(emit_insn_group_barriers): Likewise.
(emit_all_insn_group_barriers): Likewise.
(ia64_variable_issue): Handle debug insns.
(ia64_dfa_new_cycle): Likewise.
(final_emit_insn_group_barriers): Skip debug insns.
(ia64_dwarf2out_def_steady_cfa): Take frame argument. Don't
def cfa without frame.
(process_set): Likewise.
(process_for_unwind_directive): Pass frame on.
* config/rs6000/rs6000.c (TARGET_DELEGITIMIZE_ADDRESS): Define.
(rs6000_delegitimize_address): New.
(rs6000_debug_adjust_cost): Handle debug insns.
(is_microcoded_insn): Likewise.
(is_cracked_insn): Likewise.
(is_nonpipeline_insn): Likewise.
(insn_must_be_first_in_group): Likewise.
(insn_must_be_last_in_group): Likewise.
(force_new_group): Likewise.
* cfgrtl.c (rtl_split_block): Emit INSN_DELETED note if block
contains only debug insns.
(rtl_merge_blocks): Skip debug insns.
(purge_dead_edges): Likewise.
(rtl_block_ends_with_call_p): Skip debug insns.
* dce.c (deletable_insn_p): Handle VAR_LOCATION.
(mark_reg_dependencies): Skip debug insns.
* params.def (PARAM_MIN_NONDEBUG_INSN_UID): New.
* tree-ssanames.c (release_ssa_name): Propagate def into
debug stmts.
* tree-ssa-threadedge.c
(record_temporary_equivalences_from_stmts): Skip debug stmts.
* regcprop.c (replace_oldest_value_addr): Skip debug insns.
(replace_oldest_value_mem): Use ALL_REGS for debug insns.
(copyprop_hardreg_forward_1): Handle debug insns.
* reload1.c (reload): Skip debug insns. Replace unassigned
pseudos in debug insns with their equivalences.
(eliminate_regs_in_insn): Skip debug insns.
(emit_input_reload_insns): Skip debug insns at first, adjust
them later.
* tree-ssa-operands.c (add_virtual_operand): Reject debug stmts.
(get_indirect_ref_operands): Pass opf_no_vops on.
(get_expr_operands): Likewise. Skip debug stmts.
(parse_ssa_operands): Scan debug insns with opf_no_vops.
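
Note: most entries above reduce to one recurring idiom: passes that previously tested INSN_P now test NONDEBUG_INSN_P (or skip via DEBUG_INSN_P), so that the new debug insns never influence code-generation decisions. A minimal sketch of that idiom, assuming only the predicate semantics described above (the function itself is illustrative, not part of the patch):

/* Illustrative only: the shape shared by the "skip debug insns"
   changes above.  Debug insns sit in the insn stream but must not
   affect optimization or code generation.  */
static void
scan_bb_skipping_debug (basic_block bb)
{
  rtx insn;

  FOR_BB_INSNS (bb, insn)
    {
      /* NONDEBUG_INSN_P is INSN_P minus debug insns; passes that
	 previously tested INSN_P now test this instead.  */
      if (!NONDEBUG_INSN_P (insn))
	continue;

      /* ... analyze or transform INSN as before ...  */
    }
}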
gcc/testsuite/ChangeLog:
* gcc.dg/guality/guality.c: New.
* gcc.dg/guality/guality.h: New.
* gcc.dg/guality/guality.exp: New.
* gcc.dg/guality/example.c: New.
* lib/gcc-dg.exp (cleanup-dump): Remove .gk files.
(cleanup-saved-temps): Likewise, .gkd files too.
gcc/cp/ChangeLog:
* cp-tree.h (TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS): New.
* cp-lang.c (cxx_dwarf_name): Pass it.
* error.c (count_non_default_template_args): Take flags as
argument. Adjust all callers. Skip counting of default
arguments if the new flag is given.
ChangeLog:
* Makefile.tpl (BUILD_CONFIG): Default to bootstrap-debug.
* Makefile.in: Rebuilt.
contrib/ChangeLog:
* compare-debug: Look for .gkd files and compare them.
config/ChangeLog:
* bootstrap-debug.mk: Add comments.
* bootstrap-debug-big.mk: New.
* bootstrap-debug-lean.mk: New.
* bootstrap-debug-ckovw.mk: Add comments.
* bootstrap-debug-lib.mk: Drop CFLAGS for stages. Use -g0
for TFLAGS in stage1. Drop -fvar-tracking-assignments-toggle.
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@151312 138bc75d-0d04-0410-961f-82ee72b054a4
Diffstat (limited to 'gcc')
126 files changed, 11016 insertions, 1186 deletions
diff --git a/gcc/Makefile.in b/gcc/Makefile.in index 3893f59b477..6bed3390f48 100644 --- a/gcc/Makefile.in +++ b/gcc/Makefile.in @@ -911,7 +911,7 @@ SCEV_H = tree-scalar-evolution.h $(GGC_H) tree-chrec.h $(PARAMS_H) LAMBDA_H = lambda.h $(TREE_H) vec.h $(GGC_H) TREE_DATA_REF_H = tree-data-ref.h $(LAMBDA_H) omega.h graphds.h $(SCEV_H) VARRAY_H = varray.h $(MACHMODE_H) $(SYSTEM_H) coretypes.h $(TM_H) -TREE_INLINE_H = tree-inline.h pointer-set.h +TREE_INLINE_H = tree-inline.h $(GIMPLE_H) REAL_H = real.h $(MACHMODE_H) IRA_INT_H = ira.h ira-int.h $(CFGLOOP_H) alloc-pool.h DBGCNT_H = dbgcnt.h dbgcnt.def @@ -2653,7 +2653,7 @@ rtl.o : rtl.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \ print-rtl.o : print-rtl.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \ $(RTL_H) $(TREE_H) hard-reg-set.h $(BASIC_BLOCK_H) $(FLAGS_H) \ - $(BCONFIG_H) $(REAL_H) $(DIAGNOSTIC_H) + $(BCONFIG_H) $(REAL_H) $(DIAGNOSTIC_H) cselib.h rtlanal.o : rtlanal.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TOPLEV_H) \ $(RTL_H) hard-reg-set.h $(TM_P_H) insn-config.h $(RECOG_H) $(REAL_H) \ $(FLAGS_H) $(REGS_H) output.h $(TARGET_H) $(FUNCTION_H) $(TREE_H) \ @@ -2832,8 +2832,9 @@ coverage.o : coverage.c $(GCOV_IO_H) $(CONFIG_H) $(SYSTEM_H) coretypes.h \ $(HASHTAB_H) tree-iterator.h $(CGRAPH_H) $(TREE_PASS_H) gcov-io.c $(TM_P_H) cselib.o : cselib.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \ $(REGS_H) hard-reg-set.h $(FLAGS_H) $(REAL_H) insn-config.h $(RECOG_H) \ - $(EMIT_RTL_H) $(TOPLEV_H) output.h $(FUNCTION_H) cselib.h $(GGC_H) $(TM_P_H) \ - gt-cselib.h $(PARAMS_H) alloc-pool.h $(HASHTAB_H) $(TARGET_H) + $(EMIT_RTL_H) $(TOPLEV_H) output.h $(FUNCTION_H) $(TREE_PASS_H) \ + cselib.h gt-cselib.h $(GGC_H) $(TM_P_H) $(PARAMS_H) alloc-pool.h \ + $(HASHTAB_H) $(TARGET_H) cse.o : cse.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) $(REGS_H) \ hard-reg-set.h $(FLAGS_H) insn-config.h $(RECOG_H) $(EXPR_H) $(TOPLEV_H) \ output.h $(FUNCTION_H) $(BASIC_BLOCK_H) $(GGC_H) $(TM_P_H) $(TIMEVAR_H) \ @@ -2923,7 +2924,7 @@ regstat.o : regstat.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \ var-tracking.o : var-tracking.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \ $(RTL_H) $(TREE_H) hard-reg-set.h insn-config.h reload.h $(FLAGS_H) \ $(BASIC_BLOCK_H) output.h sbitmap.h alloc-pool.h $(FIBHEAP_H) $(HASHTAB_H) \ - $(REGS_H) $(EXPR_H) $(TIMEVAR_H) $(TREE_PASS_H) + $(REGS_H) $(EXPR_H) $(TIMEVAR_H) $(TREE_PASS_H) cselib.h $(TARGET_H) profile.o : profile.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \ $(TREE_H) $(FLAGS_H) output.h $(REGS_H) $(EXPR_H) $(FUNCTION_H) \ $(TOPLEV_H) $(COVERAGE_H) $(TREE_FLOW_H) value-prof.h cfghooks.h \ diff --git a/gcc/auto-inc-dec.c b/gcc/auto-inc-dec.c index 1e6c564d0ab..929a2dcade8 100644 --- a/gcc/auto-inc-dec.c +++ b/gcc/auto-inc-dec.c @@ -1341,7 +1341,7 @@ merge_in_block (int max_reg, basic_block bb) unsigned int uid = INSN_UID (insn); bool insn_is_add_or_inc = true; - if (!INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) continue; /* This continue is deliberate. We do not want the uses of the @@ -1414,7 +1414,7 @@ merge_in_block (int max_reg, basic_block bb) /* If the inc insn was merged with a mem, the inc insn is gone and there is noting to update. 
*/ - if (DF_INSN_UID_GET(uid)) + if (DF_INSN_UID_GET (uid)) { df_ref *def_rec; df_ref *use_rec; diff --git a/gcc/caller-save.c b/gcc/caller-save.c index e610329fba9..778a3edeec4 100644 --- a/gcc/caller-save.c +++ b/gcc/caller-save.c @@ -98,6 +98,9 @@ static int n_regs_saved; static HARD_REG_SET referenced_regs; +typedef void refmarker_fn (rtx *loc, enum machine_mode mode, int hardregno, + void *mark_arg); + static int reg_save_code (int, enum machine_mode); static int reg_restore_code (int, enum machine_mode); @@ -108,8 +111,9 @@ static void finish_saved_hard_regs (void); static int saved_hard_reg_compare_func (const void *, const void *); static void mark_set_regs (rtx, const_rtx, void *); -static void add_stored_regs (rtx, const_rtx, void *); -static void mark_referenced_regs (rtx); +static void mark_referenced_regs (rtx *, refmarker_fn *mark, void *mark_arg); +static refmarker_fn mark_reg_as_referenced; +static refmarker_fn replace_reg_with_saved_mem; static int insert_save (struct insn_chain *, int, int, HARD_REG_SET *, enum machine_mode *); static int insert_restore (struct insn_chain *, int, int, int, @@ -770,7 +774,7 @@ save_call_clobbered_regs (void) gcc_assert (!chain->is_caller_save_insn); - if (INSN_P (insn)) + if (NONDEBUG_INSN_P (insn)) { /* If some registers have been saved, see if INSN references any of them. We must restore them before the insn if so. */ @@ -785,7 +789,8 @@ save_call_clobbered_regs (void) else { CLEAR_HARD_REG_SET (referenced_regs); - mark_referenced_regs (PATTERN (insn)); + mark_referenced_regs (&PATTERN (insn), + mark_reg_as_referenced, NULL); AND_HARD_REG_SET (referenced_regs, hard_regs_saved); } @@ -858,6 +863,10 @@ save_call_clobbered_regs (void) n_regs_saved++; } } + else if (DEBUG_INSN_P (insn) && n_regs_saved) + mark_referenced_regs (&PATTERN (insn), + replace_reg_with_saved_mem, + save_mode); if (chain->next == 0 || chain->next->block != chain->block) { @@ -947,52 +956,57 @@ add_stored_regs (rtx reg, const_rtx setter, void *data) /* Walk X and record all referenced registers in REFERENCED_REGS. */ static void -mark_referenced_regs (rtx x) +mark_referenced_regs (rtx *loc, refmarker_fn *mark, void *arg) { - enum rtx_code code = GET_CODE (x); + enum rtx_code code = GET_CODE (*loc); const char *fmt; int i, j; if (code == SET) - mark_referenced_regs (SET_SRC (x)); + mark_referenced_regs (&SET_SRC (*loc), mark, arg); if (code == SET || code == CLOBBER) { - x = SET_DEST (x); - code = GET_CODE (x); - if ((code == REG && REGNO (x) < FIRST_PSEUDO_REGISTER) + loc = &SET_DEST (*loc); + code = GET_CODE (*loc); + if ((code == REG && REGNO (*loc) < FIRST_PSEUDO_REGISTER) || code == PC || code == CC0 - || (code == SUBREG && REG_P (SUBREG_REG (x)) - && REGNO (SUBREG_REG (x)) < FIRST_PSEUDO_REGISTER + || (code == SUBREG && REG_P (SUBREG_REG (*loc)) + && REGNO (SUBREG_REG (*loc)) < FIRST_PSEUDO_REGISTER /* If we're setting only part of a multi-word register, we shall mark it as referenced, because the words that are not being set should be restored. 
*/ - && ((GET_MODE_SIZE (GET_MODE (x)) - >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))) - || (GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))) + && ((GET_MODE_SIZE (GET_MODE (*loc)) + >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (*loc)))) + || (GET_MODE_SIZE (GET_MODE (SUBREG_REG (*loc))) <= UNITS_PER_WORD)))) return; } if (code == MEM || code == SUBREG) { - x = XEXP (x, 0); - code = GET_CODE (x); + loc = &XEXP (*loc, 0); + code = GET_CODE (*loc); } if (code == REG) { - int regno = REGNO (x); + int regno = REGNO (*loc); int hardregno = (regno < FIRST_PSEUDO_REGISTER ? regno : reg_renumber[regno]); if (hardregno >= 0) - add_to_hard_reg_set (&referenced_regs, GET_MODE (x), hardregno); + mark (loc, GET_MODE (*loc), hardregno, arg); + else if (arg) + /* ??? Will we ever end up with an equiv expression in a debug + insn, that would have required restoring a reg, or will + reload take care of it for us? */ + return; /* If this is a pseudo that did not get a hard register, scan its memory location, since it might involve the use of another register, which might be saved. */ else if (reg_equiv_mem[regno] != 0) - mark_referenced_regs (XEXP (reg_equiv_mem[regno], 0)); + mark_referenced_regs (&XEXP (reg_equiv_mem[regno], 0), mark, arg); else if (reg_equiv_address[regno] != 0) - mark_referenced_regs (reg_equiv_address[regno]); + mark_referenced_regs (®_equiv_address[regno], mark, arg); return; } @@ -1000,12 +1014,100 @@ mark_referenced_regs (rtx x) for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--) { if (fmt[i] == 'e') - mark_referenced_regs (XEXP (x, i)); + mark_referenced_regs (&XEXP (*loc, i), mark, arg); else if (fmt[i] == 'E') - for (j = XVECLEN (x, i) - 1; j >= 0; j--) - mark_referenced_regs (XVECEXP (x, i, j)); + for (j = XVECLEN (*loc, i) - 1; j >= 0; j--) + mark_referenced_regs (&XVECEXP (*loc, i, j), mark, arg); + } +} + +/* Parameter function for mark_referenced_regs() that adds registers + present in the insn and in equivalent mems and addresses to + referenced_regs. */ + +static void +mark_reg_as_referenced (rtx *loc ATTRIBUTE_UNUSED, + enum machine_mode mode, + int hardregno, + void *arg ATTRIBUTE_UNUSED) +{ + add_to_hard_reg_set (&referenced_regs, mode, hardregno); +} + +/* Parameter function for mark_referenced_regs() that replaces + registers referenced in a debug_insn that would have been restored, + should it be a non-debug_insn, with their save locations. */ + +static void +replace_reg_with_saved_mem (rtx *loc, + enum machine_mode mode, + int regno, + void *arg) +{ + unsigned int i, nregs = hard_regno_nregs [regno][mode]; + rtx mem; + enum machine_mode *save_mode = (enum machine_mode *)arg; + + for (i = 0; i < nregs; i++) + if (TEST_HARD_REG_BIT (hard_regs_saved, regno + i)) + break; + + /* If none of the registers in the range would need restoring, we're + all set. */ + if (i == nregs) + return; + + while (++i < nregs) + if (!TEST_HARD_REG_BIT (hard_regs_saved, regno + i)) + break; + + if (i == nregs + && regno_save_mem[regno][nregs]) + { + mem = copy_rtx (regno_save_mem[regno][nregs]); + + if (nregs == (unsigned int) hard_regno_nregs[regno][save_mode[regno]]) + mem = adjust_address_nv (mem, save_mode[regno], 0); + + if (GET_MODE (mem) != mode) + { + /* This is gen_lowpart_if_possible(), but without validating + the newly-formed address. */ + int offset = 0; + + if (WORDS_BIG_ENDIAN) + offset = (MAX (GET_MODE_SIZE (GET_MODE (mem)), UNITS_PER_WORD) + - MAX (GET_MODE_SIZE (mode), UNITS_PER_WORD)); + if (BYTES_BIG_ENDIAN) + /* Adjust the address so that the address-after-the-data is + unchanged. 
*/ + offset -= (MIN (UNITS_PER_WORD, GET_MODE_SIZE (mode)) + - MIN (UNITS_PER_WORD, GET_MODE_SIZE (GET_MODE (mem)))); + + mem = adjust_address_nv (mem, mode, offset); + } } + else + { + mem = gen_rtx_CONCATN (mode, rtvec_alloc (nregs)); + for (i = 0; i < nregs; i++) + if (TEST_HARD_REG_BIT (hard_regs_saved, regno + i)) + { + gcc_assert (regno_save_mem[regno + i][1]); + XVECEXP (mem, 0, i) = copy_rtx (regno_save_mem[regno + i][1]); + } + else + { + gcc_assert (save_mode[regno] != VOIDmode); + XVECEXP (mem, 0, i) = gen_rtx_REG (save_mode [regno], + regno + i); + } + } + + gcc_assert (GET_MODE (mem) == mode); + *loc = mem; } + /* Insert a sequence of insns to restore. Place these insns in front of CHAIN if BEFORE_P is nonzero, behind the insn otherwise. MAXRESTORE is diff --git a/gcc/cfgbuild.c b/gcc/cfgbuild.c index 1c91ddbf32d..012bd0b6be7 100644 --- a/gcc/cfgbuild.c +++ b/gcc/cfgbuild.c @@ -62,6 +62,7 @@ inside_basic_block_p (const_rtx insn) case CALL_INSN: case INSN: + case DEBUG_INSN: return true; case BARRIER: @@ -85,6 +86,7 @@ control_flow_insn_p (const_rtx insn) { case NOTE: case CODE_LABEL: + case DEBUG_INSN: return false; case JUMP_INSN: diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c index c631907799e..cfb19b60275 100644 --- a/gcc/cfgcleanup.c +++ b/gcc/cfgcleanup.c @@ -1057,10 +1057,10 @@ flow_find_cross_jump (int mode ATTRIBUTE_UNUSED, basic_block bb1, while (true) { /* Ignore notes. */ - while (!INSN_P (i1) && i1 != BB_HEAD (bb1)) + while (!NONDEBUG_INSN_P (i1) && i1 != BB_HEAD (bb1)) i1 = PREV_INSN (i1); - while (!INSN_P (i2) && i2 != BB_HEAD (bb2)) + while (!NONDEBUG_INSN_P (i2) && i2 != BB_HEAD (bb2)) i2 = PREV_INSN (i2); if (i1 == BB_HEAD (bb1) || i2 == BB_HEAD (bb2)) @@ -1111,13 +1111,13 @@ flow_find_cross_jump (int mode ATTRIBUTE_UNUSED, basic_block bb1, Two, it keeps line number notes as matched as may be. */ if (ninsns) { - while (last1 != BB_HEAD (bb1) && !INSN_P (PREV_INSN (last1))) + while (last1 != BB_HEAD (bb1) && !NONDEBUG_INSN_P (PREV_INSN (last1))) last1 = PREV_INSN (last1); if (last1 != BB_HEAD (bb1) && LABEL_P (PREV_INSN (last1))) last1 = PREV_INSN (last1); - while (last2 != BB_HEAD (bb2) && !INSN_P (PREV_INSN (last2))) + while (last2 != BB_HEAD (bb2) && !NONDEBUG_INSN_P (PREV_INSN (last2))) last2 = PREV_INSN (last2); if (last2 != BB_HEAD (bb2) && LABEL_P (PREV_INSN (last2))) @@ -1557,8 +1557,12 @@ try_crossjump_to_edge (int mode, edge e1, edge e2) /* Skip possible basic block header. */ if (LABEL_P (newpos2)) newpos2 = NEXT_INSN (newpos2); + while (DEBUG_INSN_P (newpos2)) + newpos2 = NEXT_INSN (newpos2); if (NOTE_P (newpos2)) newpos2 = NEXT_INSN (newpos2); + while (DEBUG_INSN_P (newpos2)) + newpos2 = NEXT_INSN (newpos2); } if (dump_file) @@ -1643,9 +1647,16 @@ try_crossjump_to_edge (int mode, edge e1, edge e2) /* Skip possible basic block header. */ if (LABEL_P (newpos1)) newpos1 = NEXT_INSN (newpos1); + + while (DEBUG_INSN_P (newpos1)) + newpos1 = NEXT_INSN (newpos1); + if (NOTE_INSN_BASIC_BLOCK_P (newpos1)) newpos1 = NEXT_INSN (newpos1); + while (DEBUG_INSN_P (newpos1)) + newpos1 = NEXT_INSN (newpos1); + redirect_from = split_block (src1, PREV_INSN (newpos1))->src; to_remove = single_succ (redirect_from); @@ -2032,20 +2043,64 @@ bool delete_unreachable_blocks (void) { bool changed = false; - basic_block b, next_bb; + basic_block b, prev_bb; find_unreachable_blocks (); - /* Delete all unreachable basic blocks. 
*/ - - for (b = ENTRY_BLOCK_PTR->next_bb; b != EXIT_BLOCK_PTR; b = next_bb) + /* When we're in GIMPLE mode and there may be debug insns, we should + delete blocks in reverse dominator order, so as to get a chance + to substitute all released DEFs into debug stmts. If we don't + have dominators information, walking blocks backward gets us a + better chance of retaining most debug information than + otherwise. */ + if (MAY_HAVE_DEBUG_STMTS && current_ir_type () == IR_GIMPLE + && dom_info_available_p (CDI_DOMINATORS)) { - next_bb = b->next_bb; + for (b = EXIT_BLOCK_PTR->prev_bb; b != ENTRY_BLOCK_PTR; b = prev_bb) + { + prev_bb = b->prev_bb; + + if (!(b->flags & BB_REACHABLE)) + { + /* Speed up the removal of blocks that don't dominate + others. Walking backwards, this should be the common + case. */ + if (!first_dom_son (CDI_DOMINATORS, b)) + delete_basic_block (b); + else + { + VEC (basic_block, heap) *h + = get_all_dominated_blocks (CDI_DOMINATORS, b); + + while (VEC_length (basic_block, h)) + { + b = VEC_pop (basic_block, h); + + prev_bb = b->prev_bb; - if (!(b->flags & BB_REACHABLE)) + gcc_assert (!(b->flags & BB_REACHABLE)); + + delete_basic_block (b); + } + + VEC_free (basic_block, heap, h); + } + + changed = true; + } + } + } + else + { + for (b = EXIT_BLOCK_PTR->prev_bb; b != ENTRY_BLOCK_PTR; b = prev_bb) { - delete_basic_block (b); - changed = true; + prev_bb = b->prev_bb; + + if (!(b->flags & BB_REACHABLE)) + { + delete_basic_block (b); + changed = true; + } } } diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c index 07d6dd30974..8bd58b08832 100644 --- a/gcc/cfgexpand.c +++ b/gcc/cfgexpand.c @@ -70,7 +70,13 @@ gimple_assign_rhs_to_tree (gimple stmt) TREE_TYPE (gimple_assign_lhs (stmt)), gimple_assign_rhs1 (stmt)); else if (grhs_class == GIMPLE_SINGLE_RHS) - t = gimple_assign_rhs1 (stmt); + { + t = gimple_assign_rhs1 (stmt); + /* Avoid modifying this tree in place below. */ + if (gimple_has_location (stmt) && CAN_HAVE_LOCATION_P (t) + && gimple_location (stmt) != EXPR_LOCATION (t)) + t = copy_node (t); + } else gcc_unreachable (); @@ -1834,7 +1840,8 @@ maybe_dump_rtl_for_gimple_stmt (gimple stmt, rtx since) if (dump_file && (dump_flags & TDF_DETAILS)) { fprintf (dump_file, "\n;; "); - print_gimple_stmt (dump_file, stmt, 0, TDF_SLIM); + print_gimple_stmt (dump_file, stmt, 0, + TDF_SLIM | (dump_flags & TDF_LINENO)); fprintf (dump_file, "\n"); print_rtl (dump_file, since ? NEXT_INSN (since) : since); @@ -2147,6 +2154,808 @@ expand_gimple_tailcall (basic_block bb, gimple stmt, bool *can_fallthru) return bb; } +/* Return the difference between the floor and the truncated result of + a signed division by OP1 with remainder MOD. */ +static rtx +floor_sdiv_adjust (enum machine_mode mode, rtx mod, rtx op1) +{ + /* (mod != 0 ? (op1 / mod < 0 ? -1 : 0) : 0) */ + return gen_rtx_IF_THEN_ELSE + (mode, gen_rtx_NE (BImode, mod, const0_rtx), + gen_rtx_IF_THEN_ELSE + (mode, gen_rtx_LT (BImode, + gen_rtx_DIV (mode, op1, mod), + const0_rtx), + constm1_rtx, const0_rtx), + const0_rtx); +} + +/* Return the difference between the ceil and the truncated result of + a signed division by OP1 with remainder MOD. */ +static rtx +ceil_sdiv_adjust (enum machine_mode mode, rtx mod, rtx op1) +{ + /* (mod != 0 ? (op1 / mod > 0 ? 
1 : 0) : 0) */ + return gen_rtx_IF_THEN_ELSE + (mode, gen_rtx_NE (BImode, mod, const0_rtx), + gen_rtx_IF_THEN_ELSE + (mode, gen_rtx_GT (BImode, + gen_rtx_DIV (mode, op1, mod), + const0_rtx), + const1_rtx, const0_rtx), + const0_rtx); +} + +/* Return the difference between the ceil and the truncated result of + an unsigned division by OP1 with remainder MOD. */ +static rtx +ceil_udiv_adjust (enum machine_mode mode, rtx mod, rtx op1 ATTRIBUTE_UNUSED) +{ + /* (mod != 0 ? 1 : 0) */ + return gen_rtx_IF_THEN_ELSE + (mode, gen_rtx_NE (BImode, mod, const0_rtx), + const1_rtx, const0_rtx); +} + +/* Return the difference between the rounded and the truncated result + of a signed division by OP1 with remainder MOD. Halfway cases are + rounded away from zero, rather than to the nearest even number. */ +static rtx +round_sdiv_adjust (enum machine_mode mode, rtx mod, rtx op1) +{ + /* (abs (mod) >= abs (op1) - abs (mod) + ? (op1 / mod > 0 ? 1 : -1) + : 0) */ + return gen_rtx_IF_THEN_ELSE + (mode, gen_rtx_GE (BImode, gen_rtx_ABS (mode, mod), + gen_rtx_MINUS (mode, + gen_rtx_ABS (mode, op1), + gen_rtx_ABS (mode, mod))), + gen_rtx_IF_THEN_ELSE + (mode, gen_rtx_GT (BImode, + gen_rtx_DIV (mode, op1, mod), + const0_rtx), + const1_rtx, constm1_rtx), + const0_rtx); +} + +/* Return the difference between the rounded and the truncated result + of a unsigned division by OP1 with remainder MOD. Halfway cases + are rounded away from zero, rather than to the nearest even + number. */ +static rtx +round_udiv_adjust (enum machine_mode mode, rtx mod, rtx op1) +{ + /* (mod >= op1 - mod ? 1 : 0) */ + return gen_rtx_IF_THEN_ELSE + (mode, gen_rtx_GE (BImode, mod, + gen_rtx_MINUS (mode, op1, mod)), + const1_rtx, const0_rtx); +} + +/* Wrap modeless constants in CONST:MODE. */ +rtx +wrap_constant (enum machine_mode mode, rtx x) +{ + if (GET_MODE (x) != VOIDmode) + return x; + + if (CONST_INT_P (x) + || GET_CODE (x) == CONST_FIXED + || GET_CODE (x) == CONST_DOUBLE + || GET_CODE (x) == LABEL_REF) + { + gcc_assert (mode != VOIDmode); + + x = gen_rtx_CONST (mode, x); + } + + return x; +} + +/* Remove CONST wrapper added by wrap_constant(). */ +rtx +unwrap_constant (rtx x) +{ + rtx ret = x; + + if (GET_CODE (x) != CONST) + return x; + + x = XEXP (x, 0); + + if (CONST_INT_P (x) + || GET_CODE (x) == CONST_FIXED + || GET_CODE (x) == CONST_DOUBLE + || GET_CODE (x) == LABEL_REF) + ret = x; + + return ret; +} + +/* Return an RTX equivalent to the value of the tree expression + EXP. */ + +static rtx +expand_debug_expr (tree exp) +{ + rtx op0 = NULL_RTX, op1 = NULL_RTX, op2 = NULL_RTX; + enum machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + int unsignedp = TYPE_UNSIGNED (TREE_TYPE (exp)); + + switch (TREE_CODE_CLASS (TREE_CODE (exp))) + { + case tcc_expression: + switch (TREE_CODE (exp)) + { + case COND_EXPR: + goto ternary; + + case TRUTH_ANDIF_EXPR: + case TRUTH_ORIF_EXPR: + case TRUTH_AND_EXPR: + case TRUTH_OR_EXPR: + case TRUTH_XOR_EXPR: + goto binary; + + case TRUTH_NOT_EXPR: + goto unary; + + default: + break; + } + break; + + ternary: + op2 = expand_debug_expr (TREE_OPERAND (exp, 2)); + if (!op2) + return NULL_RTX; + /* Fall through. */ + + binary: + case tcc_binary: + case tcc_comparison: + op1 = expand_debug_expr (TREE_OPERAND (exp, 1)); + if (!op1) + return NULL_RTX; + /* Fall through. 
*/ + + unary: + case tcc_unary: + op0 = expand_debug_expr (TREE_OPERAND (exp, 0)); + if (!op0) + return NULL_RTX; + break; + + case tcc_type: + case tcc_statement: + gcc_unreachable (); + + case tcc_constant: + case tcc_exceptional: + case tcc_declaration: + case tcc_reference: + case tcc_vl_exp: + break; + } + + switch (TREE_CODE (exp)) + { + case STRING_CST: + if (!lookup_constant_def (exp)) + { + op0 = gen_rtx_CONST_STRING (Pmode, TREE_STRING_POINTER (exp)); + op0 = gen_rtx_MEM (BLKmode, op0); + set_mem_attributes (op0, exp, 0); + return op0; + } + /* Fall through... */ + + case INTEGER_CST: + case REAL_CST: + case FIXED_CST: + op0 = expand_expr (exp, NULL_RTX, mode, EXPAND_INITIALIZER); + return op0; + + case COMPLEX_CST: + gcc_assert (COMPLEX_MODE_P (mode)); + op0 = expand_debug_expr (TREE_REALPART (exp)); + op0 = wrap_constant (GET_MODE_INNER (mode), op0); + op1 = expand_debug_expr (TREE_IMAGPART (exp)); + op1 = wrap_constant (GET_MODE_INNER (mode), op1); + return gen_rtx_CONCAT (mode, op0, op1); + + case VAR_DECL: + case PARM_DECL: + case FUNCTION_DECL: + case LABEL_DECL: + case CONST_DECL: + case RESULT_DECL: + op0 = DECL_RTL_IF_SET (exp); + + /* This decl was probably optimized away. */ + if (!op0) + return NULL; + + op0 = copy_rtx (op0); + + if (GET_MODE (op0) == BLKmode) + { + gcc_assert (MEM_P (op0)); + op0 = adjust_address_nv (op0, mode, 0); + return op0; + } + + /* Fall through. */ + + adjust_mode: + case PAREN_EXPR: + case NOP_EXPR: + case CONVERT_EXPR: + { + enum machine_mode inner_mode = GET_MODE (op0); + + if (mode == inner_mode) + return op0; + + if (inner_mode == VOIDmode) + { + inner_mode = TYPE_MODE (TREE_TYPE (TREE_OPERAND (exp, 0))); + if (mode == inner_mode) + return op0; + } + + if (FLOAT_MODE_P (mode) && FLOAT_MODE_P (inner_mode)) + { + if (GET_MODE_BITSIZE (mode) == GET_MODE_BITSIZE (inner_mode)) + op0 = simplify_gen_subreg (mode, op0, inner_mode, 0); + else if (GET_MODE_BITSIZE (mode) < GET_MODE_BITSIZE (inner_mode)) + op0 = simplify_gen_unary (FLOAT_TRUNCATE, mode, op0, inner_mode); + else + op0 = simplify_gen_unary (FLOAT_EXTEND, mode, op0, inner_mode); + } + else if (FLOAT_MODE_P (mode)) + { + if (TYPE_UNSIGNED (TREE_TYPE (TREE_OPERAND (exp, 0)))) + op0 = simplify_gen_unary (UNSIGNED_FLOAT, mode, op0, inner_mode); + else + op0 = simplify_gen_unary (FLOAT, mode, op0, inner_mode); + } + else if (FLOAT_MODE_P (inner_mode)) + { + if (unsignedp) + op0 = simplify_gen_unary (UNSIGNED_FIX, mode, op0, inner_mode); + else + op0 = simplify_gen_unary (FIX, mode, op0, inner_mode); + } + else if (CONSTANT_P (op0) + || GET_MODE_BITSIZE (mode) <= GET_MODE_BITSIZE (inner_mode)) + op0 = simplify_gen_subreg (mode, op0, inner_mode, + subreg_lowpart_offset (mode, + inner_mode)); + else if (unsignedp) + op0 = gen_rtx_ZERO_EXTEND (mode, op0); + else + op0 = gen_rtx_SIGN_EXTEND (mode, op0); + + return op0; + } + + case INDIRECT_REF: + case ALIGN_INDIRECT_REF: + case MISALIGNED_INDIRECT_REF: + op0 = expand_debug_expr (TREE_OPERAND (exp, 0)); + if (!op0) + return NULL; + + gcc_assert (GET_MODE (op0) == Pmode + || GET_CODE (op0) == CONST_INT + || GET_CODE (op0) == CONST_DOUBLE); + + if (TREE_CODE (exp) == ALIGN_INDIRECT_REF) + { + int align = TYPE_ALIGN_UNIT (TREE_TYPE (exp)); + op0 = gen_rtx_AND (Pmode, op0, GEN_INT (-align)); + } + + op0 = gen_rtx_MEM (mode, op0); + + set_mem_attributes (op0, exp, 0); + + return op0; + + case TARGET_MEM_REF: + if (TMR_SYMBOL (exp) && !DECL_RTL_SET_P (TMR_SYMBOL (exp))) + return NULL; + + op0 = expand_debug_expr + (tree_mem_ref_addr 
(build_pointer_type (TREE_TYPE (exp)), + exp)); + if (!op0) + return NULL; + + gcc_assert (GET_MODE (op0) == Pmode + || GET_CODE (op0) == CONST_INT + || GET_CODE (op0) == CONST_DOUBLE); + + op0 = gen_rtx_MEM (mode, op0); + + set_mem_attributes (op0, exp, 0); + + return op0; + + case ARRAY_REF: + case ARRAY_RANGE_REF: + case COMPONENT_REF: + case BIT_FIELD_REF: + case REALPART_EXPR: + case IMAGPART_EXPR: + case VIEW_CONVERT_EXPR: + { + enum machine_mode mode1; + HOST_WIDE_INT bitsize, bitpos; + tree offset; + int volatilep = 0; + tree tem = get_inner_reference (exp, &bitsize, &bitpos, &offset, + &mode1, &unsignedp, &volatilep, false); + rtx orig_op0; + + orig_op0 = op0 = expand_debug_expr (tem); + + if (!op0) + return NULL; + + if (offset) + { + gcc_assert (MEM_P (op0)); + + op1 = expand_debug_expr (offset); + if (!op1) + return NULL; + + op0 = gen_rtx_MEM (mode, gen_rtx_PLUS (Pmode, XEXP (op0, 0), op1)); + } + + if (MEM_P (op0)) + { + if (bitpos >= BITS_PER_UNIT) + { + op0 = adjust_address_nv (op0, mode1, bitpos / BITS_PER_UNIT); + bitpos %= BITS_PER_UNIT; + } + else if (bitpos < 0) + { + int units = (-bitpos + BITS_PER_UNIT - 1) / BITS_PER_UNIT; + op0 = adjust_address_nv (op0, mode1, units); + bitpos += units * BITS_PER_UNIT; + } + else if (bitpos == 0 && bitsize == GET_MODE_BITSIZE (mode)) + op0 = adjust_address_nv (op0, mode, 0); + else if (GET_MODE (op0) != mode1) + op0 = adjust_address_nv (op0, mode1, 0); + else + op0 = copy_rtx (op0); + if (op0 == orig_op0) + op0 = shallow_copy_rtx (op0); + set_mem_attributes (op0, exp, 0); + } + + if (bitpos == 0 && mode == GET_MODE (op0)) + return op0; + + if ((bitpos % BITS_PER_UNIT) == 0 + && bitsize == GET_MODE_BITSIZE (mode1)) + { + enum machine_mode opmode = GET_MODE (op0); + + gcc_assert (opmode != BLKmode); + + if (opmode == VOIDmode) + opmode = mode1; + + /* This condition may hold if we're expanding the address + right past the end of an array that turned out not to + be addressable (i.e., the address was only computed in + debug stmts). The gen_subreg below would rightfully + crash, and the address doesn't really exist, so just + drop it. */ + if (bitpos >= GET_MODE_BITSIZE (opmode)) + return NULL; + + return simplify_gen_subreg (mode, op0, opmode, + bitpos / BITS_PER_UNIT); + } + + return simplify_gen_ternary (SCALAR_INT_MODE_P (GET_MODE (op0)) + && TYPE_UNSIGNED (TREE_TYPE (exp)) + ? SIGN_EXTRACT + : ZERO_EXTRACT, mode, + GET_MODE (op0) != VOIDmode + ? GET_MODE (op0) : mode1, + op0, GEN_INT (bitsize), GEN_INT (bitpos)); + } + + case EXC_PTR_EXPR: + /* ??? Do not call get_exception_pointer(), we don't want to gen + it if it hasn't been created yet. */ + return get_exception_pointer (); + + case FILTER_EXPR: + /* Likewise get_exception_filter(). 
*/ + return get_exception_filter (); + + case ABS_EXPR: + return gen_rtx_ABS (mode, op0); + + case NEGATE_EXPR: + return gen_rtx_NEG (mode, op0); + + case BIT_NOT_EXPR: + return gen_rtx_NOT (mode, op0); + + case FLOAT_EXPR: + if (unsignedp) + return gen_rtx_UNSIGNED_FLOAT (mode, op0); + else + return gen_rtx_FLOAT (mode, op0); + + case FIX_TRUNC_EXPR: + if (unsignedp) + return gen_rtx_UNSIGNED_FIX (mode, op0); + else + return gen_rtx_FIX (mode, op0); + + case POINTER_PLUS_EXPR: + case PLUS_EXPR: + return gen_rtx_PLUS (mode, op0, op1); + + case MINUS_EXPR: + return gen_rtx_MINUS (mode, op0, op1); + + case MULT_EXPR: + return gen_rtx_MULT (mode, op0, op1); + + case RDIV_EXPR: + case TRUNC_DIV_EXPR: + case EXACT_DIV_EXPR: + if (unsignedp) + return gen_rtx_UDIV (mode, op0, op1); + else + return gen_rtx_DIV (mode, op0, op1); + + case TRUNC_MOD_EXPR: + if (unsignedp) + return gen_rtx_UMOD (mode, op0, op1); + else + return gen_rtx_MOD (mode, op0, op1); + + case FLOOR_DIV_EXPR: + if (unsignedp) + return gen_rtx_UDIV (mode, op0, op1); + else + { + rtx div = gen_rtx_DIV (mode, op0, op1); + rtx mod = gen_rtx_MOD (mode, op0, op1); + rtx adj = floor_sdiv_adjust (mode, mod, op1); + return gen_rtx_PLUS (mode, div, adj); + } + + case FLOOR_MOD_EXPR: + if (unsignedp) + return gen_rtx_UMOD (mode, op0, op1); + else + { + rtx mod = gen_rtx_MOD (mode, op0, op1); + rtx adj = floor_sdiv_adjust (mode, mod, op1); + adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1)); + return gen_rtx_PLUS (mode, mod, adj); + } + + case CEIL_DIV_EXPR: + if (unsignedp) + { + rtx div = gen_rtx_UDIV (mode, op0, op1); + rtx mod = gen_rtx_UMOD (mode, op0, op1); + rtx adj = ceil_udiv_adjust (mode, mod, op1); + return gen_rtx_PLUS (mode, div, adj); + } + else + { + rtx div = gen_rtx_DIV (mode, op0, op1); + rtx mod = gen_rtx_MOD (mode, op0, op1); + rtx adj = ceil_sdiv_adjust (mode, mod, op1); + return gen_rtx_PLUS (mode, div, adj); + } + + case CEIL_MOD_EXPR: + if (unsignedp) + { + rtx mod = gen_rtx_UMOD (mode, op0, op1); + rtx adj = ceil_udiv_adjust (mode, mod, op1); + adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1)); + return gen_rtx_PLUS (mode, mod, adj); + } + else + { + rtx mod = gen_rtx_MOD (mode, op0, op1); + rtx adj = ceil_sdiv_adjust (mode, mod, op1); + adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1)); + return gen_rtx_PLUS (mode, mod, adj); + } + + case ROUND_DIV_EXPR: + if (unsignedp) + { + rtx div = gen_rtx_UDIV (mode, op0, op1); + rtx mod = gen_rtx_UMOD (mode, op0, op1); + rtx adj = round_udiv_adjust (mode, mod, op1); + return gen_rtx_PLUS (mode, div, adj); + } + else + { + rtx div = gen_rtx_DIV (mode, op0, op1); + rtx mod = gen_rtx_MOD (mode, op0, op1); + rtx adj = round_sdiv_adjust (mode, mod, op1); + return gen_rtx_PLUS (mode, div, adj); + } + + case ROUND_MOD_EXPR: + if (unsignedp) + { + rtx mod = gen_rtx_UMOD (mode, op0, op1); + rtx adj = round_udiv_adjust (mode, mod, op1); + adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1)); + return gen_rtx_PLUS (mode, mod, adj); + } + else + { + rtx mod = gen_rtx_MOD (mode, op0, op1); + rtx adj = round_sdiv_adjust (mode, mod, op1); + adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1)); + return gen_rtx_PLUS (mode, mod, adj); + } + + case LSHIFT_EXPR: + return gen_rtx_ASHIFT (mode, op0, op1); + + case RSHIFT_EXPR: + if (unsignedp) + return gen_rtx_LSHIFTRT (mode, op0, op1); + else + return gen_rtx_ASHIFTRT (mode, op0, op1); + + case LROTATE_EXPR: + return gen_rtx_ROTATE (mode, op0, op1); + + case RROTATE_EXPR: + return gen_rtx_ROTATERT (mode, op0, op1); + + 
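/* Worked example (editorial sketch, not part of the patch): in plain
   C terms, the signed CEIL_DIV_EXPR expansion above computes

     int div = x / y, mod = x % y;
     int ceil = div + ((mod != 0 && y / mod > 0) ? 1 : 0);

   For 7 / 2 this gives div 3, mod 1, y / mod == 2 > 0, so the
   adjustment is 1 and the result is 4, i.e. ceil(3.5).  For -7 / 2
   it gives div -3, mod -1, y / mod == -2, so the adjustment is 0
   and the result stays -3, i.e. ceil(-3.5).  */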
case MIN_EXPR: + if (unsignedp) + return gen_rtx_UMIN (mode, op0, op1); + else + return gen_rtx_SMIN (mode, op0, op1); + + case MAX_EXPR: + if (unsignedp) + return gen_rtx_UMAX (mode, op0, op1); + else + return gen_rtx_SMAX (mode, op0, op1); + + case BIT_AND_EXPR: + case TRUTH_AND_EXPR: + return gen_rtx_AND (mode, op0, op1); + + case BIT_IOR_EXPR: + case TRUTH_OR_EXPR: + return gen_rtx_IOR (mode, op0, op1); + + case BIT_XOR_EXPR: + case TRUTH_XOR_EXPR: + return gen_rtx_XOR (mode, op0, op1); + + case TRUTH_ANDIF_EXPR: + return gen_rtx_IF_THEN_ELSE (mode, op0, op1, const0_rtx); + + case TRUTH_ORIF_EXPR: + return gen_rtx_IF_THEN_ELSE (mode, op0, const_true_rtx, op1); + + case TRUTH_NOT_EXPR: + return gen_rtx_EQ (mode, op0, const0_rtx); + + case LT_EXPR: + if (unsignedp) + return gen_rtx_LTU (mode, op0, op1); + else + return gen_rtx_LT (mode, op0, op1); + + case LE_EXPR: + if (unsignedp) + return gen_rtx_LEU (mode, op0, op1); + else + return gen_rtx_LE (mode, op0, op1); + + case GT_EXPR: + if (unsignedp) + return gen_rtx_GTU (mode, op0, op1); + else + return gen_rtx_GT (mode, op0, op1); + + case GE_EXPR: + if (unsignedp) + return gen_rtx_GEU (mode, op0, op1); + else + return gen_rtx_GE (mode, op0, op1); + + case EQ_EXPR: + return gen_rtx_EQ (mode, op0, op1); + + case NE_EXPR: + return gen_rtx_NE (mode, op0, op1); + + case UNORDERED_EXPR: + return gen_rtx_UNORDERED (mode, op0, op1); + + case ORDERED_EXPR: + return gen_rtx_ORDERED (mode, op0, op1); + + case UNLT_EXPR: + return gen_rtx_UNLT (mode, op0, op1); + + case UNLE_EXPR: + return gen_rtx_UNLE (mode, op0, op1); + + case UNGT_EXPR: + return gen_rtx_UNGT (mode, op0, op1); + + case UNGE_EXPR: + return gen_rtx_UNGE (mode, op0, op1); + + case UNEQ_EXPR: + return gen_rtx_UNEQ (mode, op0, op1); + + case LTGT_EXPR: + return gen_rtx_LTGT (mode, op0, op1); + + case COND_EXPR: + return gen_rtx_IF_THEN_ELSE (mode, op0, op1, op2); + + case COMPLEX_EXPR: + gcc_assert (COMPLEX_MODE_P (mode)); + if (GET_MODE (op0) == VOIDmode) + op0 = gen_rtx_CONST (GET_MODE_INNER (mode), op0); + if (GET_MODE (op1) == VOIDmode) + op1 = gen_rtx_CONST (GET_MODE_INNER (mode), op1); + return gen_rtx_CONCAT (mode, op0, op1); + + case ADDR_EXPR: + op0 = expand_debug_expr (TREE_OPERAND (exp, 0)); + if (!op0 || !MEM_P (op0)) + return NULL; + + return XEXP (op0, 0); + + case VECTOR_CST: + exp = build_constructor_from_list (TREE_TYPE (exp), + TREE_VECTOR_CST_ELTS (exp)); + /* Fall through. */ + + case CONSTRUCTOR: + if (TREE_CODE (TREE_TYPE (exp)) == VECTOR_TYPE) + { + unsigned i; + tree val; + + op0 = gen_rtx_CONCATN + (mode, rtvec_alloc (TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)))); + + FOR_EACH_CONSTRUCTOR_VALUE (CONSTRUCTOR_ELTS (exp), i, val) + { + op1 = expand_debug_expr (val); + if (!op1) + return NULL; + XVECEXP (op0, 0, i) = op1; + } + + if (i < TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp))) + { + op1 = expand_debug_expr + (fold_convert (TREE_TYPE (TREE_TYPE (exp)), integer_zero_node)); + + if (!op1) + return NULL; + + for (; i < TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)); i++) + XVECEXP (op0, 0, i) = op1; + } + + return op0; + } + else + goto flag_unsupported; + + case CALL_EXPR: + /* ??? Maybe handle some builtins? 
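   Returning NULL for an unhandled CALL_EXPR is safe, though:
   expand_debug_locations below replaces any expression that fails
   to expand with gen_rtx_UNKNOWN_VAR_LOC, so the variable's
   location merely becomes unknown rather than wrong.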
*/ + return NULL; + + case SSA_NAME: + { + int part = var_to_partition (SA.map, exp); + + if (part == NO_PARTITION) + return NULL; + + gcc_assert (part >= 0 && (unsigned)part < SA.map->num_partitions); + + op0 = SA.partition_to_pseudo[part]; + goto adjust_mode; + } + + case ERROR_MARK: + return NULL; + + default: + flag_unsupported: +#ifdef ENABLE_CHECKING + debug_tree (exp); + gcc_unreachable (); +#else + return NULL; +#endif + } +} + +/* Expand the _LOCs in debug insns. We run this after expanding all + regular insns, so that any variables referenced in the function + will have their DECL_RTLs set. */ + +static void +expand_debug_locations (void) +{ + rtx insn; + rtx last = get_last_insn (); + int save_strict_alias = flag_strict_aliasing; + + /* New alias sets while setting up memory attributes cause + -fcompare-debug failures, even though it doesn't bring about any + codegen changes. */ + flag_strict_aliasing = 0; + + for (insn = get_insns (); insn; insn = NEXT_INSN (insn)) + if (DEBUG_INSN_P (insn)) + { + tree value = (tree)INSN_VAR_LOCATION_LOC (insn); + rtx val; + enum machine_mode mode; + + if (value == NULL_TREE) + val = NULL_RTX; + else + { + val = expand_debug_expr (value); + gcc_assert (last == get_last_insn ()); + } + + if (!val) + val = gen_rtx_UNKNOWN_VAR_LOC (); + else + { + mode = GET_MODE (INSN_VAR_LOCATION (insn)); + + gcc_assert (mode == GET_MODE (val) + || (GET_MODE (val) == VOIDmode + && (CONST_INT_P (val) + || GET_CODE (val) == CONST_FIXED + || GET_CODE (val) == CONST_DOUBLE + || GET_CODE (val) == LABEL_REF))); + } + + INSN_VAR_LOCATION_LOC (insn) = val; + } + + flag_strict_aliasing = save_strict_alias; +} + /* Expand basic block BB from GIMPLE trees to RTL. */ static basic_block @@ -2234,9 +3043,10 @@ expand_gimple_basic_block (basic_block bb) for (; !gsi_end_p (gsi); gsi_next (&gsi)) { - gimple stmt = gsi_stmt (gsi); basic_block new_bb; + stmt = gsi_stmt (gsi); + /* Expand this statement, then evaluate the resulting RTL and fixup the CFG accordingly. */ if (gimple_code (stmt) == GIMPLE_COND) @@ -2245,6 +3055,60 @@ expand_gimple_basic_block (basic_block bb) if (new_bb) return new_bb; } + else if (gimple_debug_bind_p (stmt)) + { + location_t sloc = get_curr_insn_source_location (); + tree sblock = get_curr_insn_block (); + gimple_stmt_iterator nsi = gsi; + + for (;;) + { + tree var = gimple_debug_bind_get_var (stmt); + tree value; + rtx val; + enum machine_mode mode; + + if (gimple_debug_bind_has_value_p (stmt)) + value = gimple_debug_bind_get_value (stmt); + else + value = NULL_TREE; + + last = get_last_insn (); + + set_curr_insn_source_location (gimple_location (stmt)); + set_curr_insn_block (gimple_block (stmt)); + + if (DECL_P (var)) + mode = DECL_MODE (var); + else + mode = TYPE_MODE (TREE_TYPE (var)); + + val = gen_rtx_VAR_LOCATION + (mode, var, (rtx)value, VAR_INIT_STATUS_INITIALIZED); + + val = emit_debug_insn (val); + + if (dump_file && (dump_flags & TDF_DETAILS)) + { + /* We can't dump the insn with a TREE where an RTX + is expected. 
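   At this point INSN_VAR_LOCATION_LOC still holds the tree
   expression; it is only turned into an RTX by
   expand_debug_locations once all regular insns have been expanded.
   So const0_rtx is swapped in just for the dump, and the tree is
   restored immediately afterwards.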
*/ + INSN_VAR_LOCATION_LOC (val) = const0_rtx; + maybe_dump_rtl_for_gimple_stmt (stmt, last); + INSN_VAR_LOCATION_LOC (val) = (rtx)value; + } + + gsi = nsi; + gsi_next (&nsi); + if (gsi_end_p (nsi)) + break; + stmt = gsi_stmt (nsi); + if (!gimple_debug_bind_p (stmt)) + break; + } + + set_curr_insn_source_location (sloc); + set_curr_insn_block (sblock); + } else { if (is_gimple_call (stmt) && gimple_call_tail_p (stmt)) @@ -2718,6 +3582,9 @@ gimple_expand_cfg (void) FOR_BB_BETWEEN (bb, init_block->next_bb, EXIT_BLOCK_PTR, next_bb) bb = expand_gimple_basic_block (bb); + if (MAY_HAVE_DEBUG_INSNS) + expand_debug_locations (); + execute_free_datastructures (); finish_out_of_ssa (&SA); diff --git a/gcc/cfglayout.c b/gcc/cfglayout.c index f718f1e10dd..ca400a8c503 100644 --- a/gcc/cfglayout.c +++ b/gcc/cfglayout.c @@ -238,7 +238,7 @@ int epilogue_locator; /* Hold current location information and last location information, so the datastructures are built lazily only when some instructions in given place are needed. */ -location_t curr_location, last_location; +static location_t curr_location, last_location; static tree curr_block, last_block; static int curr_rtl_loc = -1; @@ -290,12 +290,17 @@ set_curr_insn_source_location (location_t location) time locators are not initialized. */ if (curr_rtl_loc == -1) return; - if (location == last_location) - return; curr_location = location; } -/* Set current scope block. */ +/* Get current location. */ +location_t +get_curr_insn_source_location (void) +{ + return curr_location; +} + +/* Set current scope block. */ void set_curr_insn_block (tree b) { @@ -307,6 +312,13 @@ set_curr_insn_block (tree b) curr_block = b; } +/* Get current scope block. */ +tree +get_curr_insn_block (void) +{ + return curr_block; +} + /* Return current insn locator. */ int curr_insn_locator (void) @@ -1120,6 +1132,7 @@ duplicate_insn_chain (rtx from, rtx to) { switch (GET_CODE (insn)) { + case DEBUG_INSN: case INSN: case CALL_INSN: case JUMP_INSN: diff --git a/gcc/cfgloopanal.c b/gcc/cfgloopanal.c index 36e0d152265..33aff6dbceb 100644 --- a/gcc/cfgloopanal.c +++ b/gcc/cfgloopanal.c @@ -176,8 +176,8 @@ num_loop_insns (const struct loop *loop) { bb = bbs[i]; ninsns++; - for (insn = BB_HEAD (bb); insn != BB_END (bb); insn = NEXT_INSN (insn)) - if (INSN_P (insn)) + FOR_BB_INSNS (bb, insn) + if (NONDEBUG_INSN_P (insn)) ninsns++; } free(bbs); @@ -199,9 +199,9 @@ average_num_loop_insns (const struct loop *loop) { bb = bbs[i]; - binsns = 1; - for (insn = BB_HEAD (bb); insn != BB_END (bb); insn = NEXT_INSN (insn)) - if (INSN_P (insn)) + binsns = 0; + FOR_BB_INSNS (bb, insn) + if (NONDEBUG_INSN_P (insn)) binsns++; ratio = loop->header->frequency == 0 diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c index 3c877c2e1e3..4c4b3b72cc7 100644 --- a/gcc/cfgrtl.c +++ b/gcc/cfgrtl.c @@ -531,7 +531,26 @@ rtl_split_block (basic_block bb, void *insnp) insn = first_insn_after_basic_block_note (bb); if (insn) - insn = PREV_INSN (insn); + { + rtx next = insn; + + insn = PREV_INSN (insn); + + /* If the block contains only debug insns, insn would have + been NULL in a non-debug compilation, and then we'd end + up emitting a DELETED note. For -fcompare-debug + stability, emit the note too. 
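   (-fcompare-debug compiles the unit twice, with and without debug
   options, and compares the final-insns dumps, so even the
   placement of notes must match between the two compilations.)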
*/ + if (insn != BB_END (bb) + && DEBUG_INSN_P (next) + && DEBUG_INSN_P (BB_END (bb))) + { + while (next != BB_END (bb) && DEBUG_INSN_P (next)) + next = NEXT_INSN (next); + + if (next == BB_END (bb)) + emit_note_after (NOTE_INSN_DELETED, next); + } + } else insn = get_last_insn (); } @@ -566,11 +585,15 @@ rtl_merge_blocks (basic_block a, basic_block b) { rtx b_head = BB_HEAD (b), b_end = BB_END (b), a_end = BB_END (a); rtx del_first = NULL_RTX, del_last = NULL_RTX; + rtx b_debug_start = b_end, b_debug_end = b_end; int b_empty = 0; if (dump_file) fprintf (dump_file, "merging block %d into block %d\n", b->index, a->index); + while (DEBUG_INSN_P (b_end)) + b_end = PREV_INSN (b_debug_start = b_end); + /* If there was a CODE_LABEL beginning B, delete it. */ if (LABEL_P (b_head)) { @@ -636,9 +659,21 @@ rtl_merge_blocks (basic_block a, basic_block b) /* Reassociate the insns of B with A. */ if (!b_empty) { - update_bb_for_insn_chain (a_end, b_end, a); + update_bb_for_insn_chain (a_end, b_debug_end, a); - a_end = b_end; + a_end = b_debug_end; + } + else if (b_end != b_debug_end) + { + /* Move any deleted labels and other notes between the end of A + and the debug insns that make up B after the debug insns, + bringing the debug insns into A while keeping the notes after + the end of A. */ + if (NEXT_INSN (a_end) != b_debug_start) + reorder_insns_nobb (NEXT_INSN (a_end), PREV_INSN (b_debug_start), + b_debug_end); + update_bb_for_insn_chain (b_debug_start, b_debug_end, a); + a_end = b_debug_end; } df_bb_delete (b->index); @@ -2162,6 +2197,11 @@ purge_dead_edges (basic_block bb) bool found; edge_iterator ei; + if (DEBUG_INSN_P (insn) && insn != BB_HEAD (bb)) + do + insn = PREV_INSN (insn); + while ((DEBUG_INSN_P (insn) || NOTE_P (insn)) && insn != BB_HEAD (bb)); + /* If this instruction cannot trap, remove REG_EH_REGION notes. */ if (NONJUMP_INSN_P (insn) && (note = find_reg_note (insn, REG_EH_REGION, NULL))) @@ -2182,10 +2222,10 @@ purge_dead_edges (basic_block bb) latter can appear when nonlocal gotos are used. */ if (e->flags & EDGE_EH) { - if (can_throw_internal (BB_END (bb)) + if (can_throw_internal (insn) /* If this is a call edge, verify that this is a call insn. */ && (! (e->flags & EDGE_ABNORMAL_CALL) - || CALL_P (BB_END (bb)))) + || CALL_P (insn))) { ei_next (&ei); continue; @@ -2193,7 +2233,7 @@ purge_dead_edges (basic_block bb) } else if (e->flags & EDGE_ABNORMAL_CALL) { - if (CALL_P (BB_END (bb)) + if (CALL_P (insn) && (! (note = find_reg_note (insn, REG_EH_REGION, NULL)) || INTVAL (XEXP (note, 0)) >= 0)) { @@ -2771,7 +2811,8 @@ rtl_block_ends_with_call_p (basic_block bb) while (!CALL_P (insn) && insn != BB_HEAD (bb) && (keep_with_call_p (insn) - || NOTE_P (insn))) + || NOTE_P (insn) + || DEBUG_INSN_P (insn))) insn = PREV_INSN (insn); return (CALL_P (insn)); } diff --git a/gcc/combine.c b/gcc/combine.c index faa7e0dc038..bc61fbedcf4 100644 --- a/gcc/combine.c +++ b/gcc/combine.c @@ -921,7 +921,7 @@ create_log_links (void) { FOR_BB_INSNS_REVERSE (bb, insn) { - if (!INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) continue; /* Log links are created only once. */ @@ -1129,7 +1129,7 @@ combine_instructions (rtx f, unsigned int nregs) insn = next ? next : NEXT_INSN (insn)) { next = 0; - if (INSN_P (insn)) + if (NONDEBUG_INSN_P (insn)) { /* See if we know about function return values before this insn based upon SUBREG flags. 
*/ @@ -2161,6 +2161,209 @@ reg_subword_p (rtx x, rtx reg) && GET_MODE_CLASS (GET_MODE (x)) == MODE_INT; } +#ifdef AUTO_INC_DEC +/* Replace auto-increment addressing modes with explicit operations to + access the same addresses without modifying the corresponding + registers. If AFTER holds, SRC is meant to be reused after the + side effect, otherwise it is to be reused before that. */ + +static rtx +cleanup_auto_inc_dec (rtx src, bool after, enum machine_mode mem_mode) +{ + rtx x = src; + const RTX_CODE code = GET_CODE (x); + int i; + const char *fmt; + + switch (code) + { + case REG: + case CONST_INT: + case CONST_DOUBLE: + case CONST_FIXED: + case CONST_VECTOR: + case SYMBOL_REF: + case CODE_LABEL: + case PC: + case CC0: + case SCRATCH: + /* SCRATCH must be shared because they represent distinct values. */ + return x; + case CLOBBER: + if (REG_P (XEXP (x, 0)) && REGNO (XEXP (x, 0)) < FIRST_PSEUDO_REGISTER) + return x; + break; + + case CONST: + if (shared_const_p (x)) + return x; + break; + + case MEM: + mem_mode = GET_MODE (x); + break; + + case PRE_INC: + case PRE_DEC: + case POST_INC: + case POST_DEC: + gcc_assert (mem_mode != VOIDmode && mem_mode != BLKmode); + if (after == (code == PRE_INC || code == PRE_DEC)) + x = cleanup_auto_inc_dec (XEXP (x, 0), after, mem_mode); + else + x = gen_rtx_PLUS (GET_MODE (x), + cleanup_auto_inc_dec (XEXP (x, 0), after, mem_mode), + GEN_INT ((code == PRE_INC || code == POST_INC) + ? GET_MODE_SIZE (mem_mode) + : -GET_MODE_SIZE (mem_mode))); + return x; + + case PRE_MODIFY: + case POST_MODIFY: + if (after == (code == PRE_MODIFY)) + x = XEXP (x, 0); + else + x = XEXP (x, 1); + return cleanup_auto_inc_dec (x, after, mem_mode); + + default: + break; + } + + /* Copy the various flags, fields, and other information. We assume + that all fields need copying, and then clear the fields that should + not be copied. That is the sensible default behavior, and forces + us to explicitly document why we are *not* copying a flag. */ + x = shallow_copy_rtx (x); + + /* We do not copy the USED flag, which is used as a mark bit during + walks over the RTL. */ + RTX_FLAG (x, used) = 0; + + /* We do not copy FRAME_RELATED for INSNs. */ + if (INSN_P (x)) + RTX_FLAG (x, frame_related) = 0; + + fmt = GET_RTX_FORMAT (code); + for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--) + if (fmt[i] == 'e') + XEXP (x, i) = cleanup_auto_inc_dec (XEXP (x, i), after, mem_mode); + else if (fmt[i] == 'E' || fmt[i] == 'V') + { + int j; + XVEC (x, i) = rtvec_alloc (XVECLEN (x, i)); + for (j = 0; j < XVECLEN (x, i); j++) + XVECEXP (x, i, j) + = cleanup_auto_inc_dec (XVECEXP (src, i, j), after, mem_mode); + } + + return x; +} +#endif + +/* Auxiliary data structure for propagate_for_debug_stmt. */ + +struct rtx_subst_pair +{ + rtx from, to; + bool changed; +#ifdef AUTO_INC_DEC + bool adjusted; + bool after; +#endif +}; + +/* Clean up any auto-updates in PAIR->to the first time it is called + for a PAIR. PAIR->adjusted is used to tell whether we've cleaned + up before. */ + +static void +auto_adjust_pair (struct rtx_subst_pair *pair ATTRIBUTE_UNUSED) +{ +#ifdef AUTO_INC_DEC + if (!pair->adjusted) + { + pair->adjusted = true; + pair->to = cleanup_auto_inc_dec (pair->to, pair->after, VOIDmode); + } +#endif +} + +/* If *LOC is the same as FROM in the struct rtx_subst_pair passed as + DATA, replace it with a copy of TO. Handle SUBREGs of *LOC as + well. 
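   For example (with illustrative register numbers), when combine
   deletes i2: (set (reg 70) (plus (reg 69) (const_int 1))), a later
   (var_location x (reg 70)) debug insn is rewritten to bind x to
   (plus (reg 69) (const_int 1)), so the debugger can still recover
   the variable's value.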
*/ + +static int +propagate_for_debug_subst (rtx *loc, void *data) +{ + struct rtx_subst_pair *pair = (struct rtx_subst_pair *)data; + rtx from = pair->from, to = pair->to; + rtx x = *loc, s = x; + + if (rtx_equal_p (x, from) + || (GET_CODE (x) == SUBREG && rtx_equal_p ((s = SUBREG_REG (x)), from))) + { + auto_adjust_pair (pair); + if (pair->to != to) + to = pair->to; + else + to = copy_rtx (to); + if (s != x) + { + gcc_assert (GET_CODE (x) == SUBREG && SUBREG_REG (x) == s); + to = simplify_gen_subreg (GET_MODE (x), to, + GET_MODE (from), SUBREG_BYTE (x)); + } + *loc = to; + pair->changed = true; + return -1; + } + + return 0; +} + +/* Replace occurrences of DEST with SRC in DEBUG_INSNs between INSN + and LAST. If MOVE holds, debug insns must also be moved past + LAST. */ + +static void +propagate_for_debug (rtx insn, rtx last, rtx dest, rtx src, bool move) +{ + struct rtx_subst_pair p; + rtx next, move_pos = move ? last : NULL_RTX; + + p.from = dest; + p.to = src; + p.changed = false; + +#ifdef AUTO_INC_DEC + p.adjusted = false; + p.after = move; +#endif + + next = NEXT_INSN (insn); + while (next != last) + { + insn = next; + next = NEXT_INSN (insn); + if (DEBUG_INSN_P (insn)) + { + for_each_rtx (&INSN_VAR_LOCATION_LOC (insn), + propagate_for_debug_subst, &p); + if (!p.changed) + continue; + p.changed = false; + if (move_pos) + { + remove_insn (insn); + PREV_INSN (insn) = NEXT_INSN (insn) = NULL_RTX; + move_pos = emit_debug_insn_after (insn, move_pos); + } + else + df_insn_rescan (insn); + } + } +} /* Delete the conditional jump INSN and adjust the CFG correspondingly. Note that the INSN should be deleted *after* removing dead edges, so @@ -2217,7 +2420,9 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p) I2 and not in I3, a REG_DEAD note must be made. */ rtx i3dest_killed = 0; /* SET_DEST and SET_SRC of I2 and I1. */ - rtx i2dest, i2src, i1dest = 0, i1src = 0; + rtx i2dest = 0, i2src = 0, i1dest = 0, i1src = 0; + /* Set if I2DEST was reused as a scratch register. */ + bool i2scratch = false; /* PATTERN (I1) and PATTERN (I2), or a copy of it in certain cases. */ rtx i1pat = 0, i2pat = 0; /* Indicates if I2DEST or I1DEST is in I2SRC or I1_SRC. */ @@ -2301,7 +2506,7 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p) && GET_CODE (SET_DEST (PATTERN (i3))) != STRICT_LOW_PART && ! reg_overlap_mentioned_p (SET_SRC (PATTERN (i3)), SET_DEST (PATTERN (i3))) - && next_real_insn (i2) == i3) + && next_active_insn (i2) == i3) { rtx p2 = PATTERN (i2); @@ -2334,6 +2539,7 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p) subst_low_luid = DF_INSN_LUID (i2); added_sets_2 = added_sets_1 = 0; + i2src = SET_DEST (PATTERN (i3)); i2dest = SET_SRC (PATTERN (i3)); i2dest_killed = dead_or_set_p (i2, i2dest); @@ -3006,6 +3212,8 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p) undobuf.frees = buf; } } + + i2scratch = m_split != 0; } /* If recog_for_combine has discarded clobbers, try to use them @@ -3100,6 +3308,8 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p) bool subst_done = false; newi2pat = NULL_RTX; + i2scratch = true; + /* Get NEWDEST as a register in the proper mode. We have already validated that we can do this. 
*/ if (GET_MODE (i2dest) != split_mode && split_mode != VOIDmode) @@ -3402,6 +3612,67 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p) return 0; } + if (MAY_HAVE_DEBUG_INSNS) + { + struct undo *undo; + + for (undo = undobuf.undos; undo; undo = undo->next) + if (undo->kind == UNDO_MODE) + { + rtx reg = *undo->where.r; + enum machine_mode new_mode = GET_MODE (reg); + enum machine_mode old_mode = undo->old_contents.m; + + /* Temporarily revert mode back. */ + adjust_reg_mode (reg, old_mode); + + if (reg == i2dest && i2scratch) + { + /* If we used i2dest as a scratch register with a + different mode, substitute it for the original + i2src while its original mode is temporarily + restored, and then clear i2scratch so that we don't + do it again later. */ + propagate_for_debug (i2, i3, reg, i2src, false); + i2scratch = false; + /* Put back the new mode. */ + adjust_reg_mode (reg, new_mode); + } + else + { + rtx tempreg = gen_raw_REG (old_mode, REGNO (reg)); + rtx first, last; + + if (reg == i2dest) + { + first = i2; + last = i3; + } + else + { + first = i3; + last = undobuf.other_insn; + gcc_assert (last); + } + + /* We're dealing with a reg that changed mode but not + meaning, so we want to turn it into a subreg for + the new mode. However, because of REG sharing and + because its mode had already changed, we have to do + it in two steps. First, replace any debug uses of + reg, with its original mode temporarily restored, + with this copy we have created; then, replace the + copy with the SUBREG of the original shared reg, + once again changed to the new mode. */ + propagate_for_debug (first, last, reg, tempreg, false); + adjust_reg_mode (reg, new_mode); + propagate_for_debug (first, last, tempreg, + lowpart_subreg (old_mode, reg, new_mode), + false); + } + } + } + /* If we will be able to accept this, we have made a change to the destination of I3. This requires us to do a few adjustments. */ @@ -3592,16 +3863,24 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p) if (newi2pat) { + if (MAY_HAVE_DEBUG_INSNS && i2scratch) + propagate_for_debug (i2, i3, i2dest, i2src, false); INSN_CODE (i2) = i2_code_number; PATTERN (i2) = newi2pat; } else - SET_INSN_DELETED (i2); + { + if (MAY_HAVE_DEBUG_INSNS && i2src) + propagate_for_debug (i2, i3, i2dest, i2src, i3_subst_into_i2); + SET_INSN_DELETED (i2); + } if (i1) { LOG_LINKS (i1) = 0; REG_NOTES (i1) = 0; + if (MAY_HAVE_DEBUG_INSNS) + propagate_for_debug (i1, i3, i1dest, i1src, false); SET_INSN_DELETED (i1); } @@ -12396,6 +12675,29 @@ reg_bitfield_target_p (rtx x, rtx body) return 0; } + +/* Return the next insn after INSN that is neither a NOTE nor a + DEBUG_INSN. This routine does not look inside SEQUENCEs. */ + +static rtx +next_nonnote_nondebug_insn (rtx insn) +{ + while (insn) + { + insn = NEXT_INSN (insn); + if (insn == 0) + break; + if (NOTE_P (insn)) + continue; + if (DEBUG_INSN_P (insn)) + continue; + break; + } + + return insn; +} + + /* Given a chain of REG_NOTES originally from FROM_INSN, try to place them as appropriate. 
I3 and I2 are the insns resulting from the combination @@ -12649,7 +12951,7 @@ distribute_notes (rtx notes, rtx from_insn, rtx i3, rtx i2, rtx elim_i2, place = from_insn; else if (reg_referenced_p (XEXP (note, 0), PATTERN (i3))) place = i3; - else if (i2 != 0 && next_nonnote_insn (i2) == i3 + else if (i2 != 0 && next_nonnote_nondebug_insn (i2) == i3 && reg_referenced_p (XEXP (note, 0), PATTERN (i2))) place = i2; else if ((rtx_equal_p (XEXP (note, 0), elim_i2) @@ -12667,7 +12969,7 @@ distribute_notes (rtx notes, rtx from_insn, rtx i3, rtx i2, rtx elim_i2, for (tem = PREV_INSN (tem); place == 0; tem = PREV_INSN (tem)) { - if (! INSN_P (tem)) + if (!NONDEBUG_INSN_P (tem)) { if (tem == BB_HEAD (bb)) break; @@ -12868,7 +13170,7 @@ distribute_notes (rtx notes, rtx from_insn, rtx i3, rtx i2, rtx elim_i2, for (tem = PREV_INSN (place); ; tem = PREV_INSN (tem)) { - if (! INSN_P (tem)) + if (!NONDEBUG_INSN_P (tem)) { if (tem == BB_HEAD (bb)) break; @@ -12958,7 +13260,9 @@ distribute_links (rtx links) (insn && (this_basic_block->next_bb == EXIT_BLOCK_PTR || BB_HEAD (this_basic_block->next_bb) != insn)); insn = NEXT_INSN (insn)) - if (INSN_P (insn) && reg_overlap_mentioned_p (reg, PATTERN (insn))) + if (DEBUG_INSN_P (insn)) + continue; + else if (INSN_P (insn) && reg_overlap_mentioned_p (reg, PATTERN (insn))) { if (reg_referenced_p (reg, PATTERN (insn))) place = insn; diff --git a/gcc/common.opt b/gcc/common.opt index 133bc524d7f..8fe512a76e2 100644 --- a/gcc/common.opt +++ b/gcc/common.opt @@ -380,10 +380,6 @@ fcommon Common Report Var(flag_no_common,0) Optimization Do not put uninitialized globals in the common section -fconserve-stack -Common Var(flag_conserve_stack) Optimization -Do not perform optimizations increasing noticeably stack usage - fcompare-debug= Common JoinedOrMissing RejectNegative Var(flag_compare_debug_opt) -fcompare-debug[=<opts>] Compile with and without e.g. -gtoggle, and compare the final-insns dump @@ -392,6 +388,10 @@ fcompare-debug-second Common RejectNegative Var(flag_compare_debug) Run only the second compilation of -fcompare-debug +fconserve-stack +Common Var(flag_conserve_stack) Optimization +Do not perform optimizations increasing noticeably stack usage + fcprop-registers Common Report Var(flag_cprop_registers) Optimization Perform a register copy-propagation optimization pass @@ -470,14 +470,14 @@ fdump-unnumbered Common Report Var(flag_dump_unnumbered) VarExists Suppress output of instruction numbers, line number notes and addresses in debugging dumps -fdwarf2-cfi-asm -Common Report Var(flag_dwarf2_cfi_asm) Init(HAVE_GAS_CFI_DIRECTIVE) -Enable CFI tables via GAS assembler directives. - fdump-unnumbered-links Common Report Var(flag_dump_unnumbered_links) VarExists Suppress output of previous and next insn numbers in debugging dumps +fdwarf2-cfi-asm +Common Report Var(flag_dwarf2_cfi_asm) Init(HAVE_GAS_CFI_DIRECTIVE) +Enable CFI tables via GAS assembler directives. 
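(Each record in common.opt follows the three-line shape visible here: option name, properties, --help text.  A minimal hypothetical record, for illustration only:

fexample-flag
Common Report Var(flag_example_flag) Optimization
Describe -fexample-flag in --help output

Common makes the option language-independent, Report lets -fverbose-asm report its state, Var() names the flag variable generated for it, and Optimization allows the setting to differ per function.)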
+ fearly-inlining Common Report Var(flag_early_inlining) Init(1) Optimization Perform early inlining @@ -1369,6 +1369,14 @@ fvar-tracking Common Report Var(flag_var_tracking) VarExists Optimization Perform variable tracking +fvar-tracking-assignments +Common Report Var(flag_var_tracking_assignments) VarExists Optimization +Perform variable tracking by annotating assignments + +fvar-tracking-assignments-toggle +Common Report Var(flag_var_tracking_assignments_toggle) VarExists Optimization +Toggle -fvar-tracking-assignments + fvar-tracking-uninit Common Report Var(flag_var_tracking_uninit) Optimization Perform variable tracking and also tag variables that are uninitialized diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c index dfa35f12616..ccb5e2f1459 100644 --- a/gcc/config/i386/i386.c +++ b/gcc/config/i386/i386.c @@ -10752,9 +10752,9 @@ ix86_pic_register_p (rtx x) the DWARF output code. */ static rtx -ix86_delegitimize_address (rtx orig_x) +ix86_delegitimize_address (rtx x) { - rtx x = orig_x; + rtx orig_x = delegitimize_mem_from_attrs (x); /* reg_addend is NULL or a multiple of some register. */ rtx reg_addend = NULL_RTX; /* const_addend is NULL or a const_int. */ @@ -10762,6 +10762,8 @@ ix86_delegitimize_address (rtx orig_x) /* This is the result, or NULL. */ rtx result = NULL_RTX; + x = orig_x; + if (MEM_P (x)) x = XEXP (x, 0); diff --git a/gcc/config/ia64/ia64.c b/gcc/config/ia64/ia64.c index dbc33cbc892..7e757523381 100644 --- a/gcc/config/ia64/ia64.c +++ b/gcc/config/ia64/ia64.c @@ -5521,6 +5521,8 @@ ia64_safe_itanium_class (rtx insn) { if (recog_memoized (insn) >= 0) return get_attr_itanium_class (insn); + else if (DEBUG_INSN_P (insn)) + return ITANIUM_CLASS_IGNORE; else return ITANIUM_CLASS_UNKNOWN; } @@ -6277,6 +6279,7 @@ group_barrier_needed (rtx insn) switch (GET_CODE (insn)) { case NOTE: + case DEBUG_INSN: break; case BARRIER: @@ -6434,7 +6437,7 @@ emit_insn_group_barriers (FILE *dump) init_insn_group_barriers (); last_label = 0; } - else if (INSN_P (insn)) + else if (NONDEBUG_INSN_P (insn)) { insns_since_last_label = 1; @@ -6482,7 +6485,7 @@ emit_all_insn_group_barriers (FILE *dump ATTRIBUTE_UNUSED) init_insn_group_barriers (); } - else if (INSN_P (insn)) + else if (NONDEBUG_INSN_P (insn)) { if (recog_memoized (insn) == CODE_FOR_insn_group_barrier) init_insn_group_barriers (); @@ -6975,6 +6978,9 @@ ia64_variable_issue (FILE *dump ATTRIBUTE_UNUSED, pending_data_specs--; } + if (DEBUG_INSN_P (insn)) + return 1; + last_scheduled_insn = insn; memcpy (prev_cycle_state, curr_state, dfa_state_size); if (reload_completed) @@ -7057,6 +7063,10 @@ ia64_dfa_new_cycle (FILE *dump, int verbose, rtx insn, int last_clock, int setup_clocks_p = FALSE; gcc_assert (insn && INSN_P (insn)); + + if (DEBUG_INSN_P (insn)) + return 0; + /* When a group barrier is needed for insn, last_scheduled_insn should be set. */ gcc_assert (!(reload_completed && safe_group_barrier_needed (insn)) @@ -9043,7 +9053,7 @@ final_emit_insn_group_barriers (FILE *dump ATTRIBUTE_UNUSED) need_barrier_p = 0; prev_insn = NULL_RTX; } - else if (INSN_P (insn)) + else if (NONDEBUG_INSN_P (insn)) { if (recog_memoized (insn) == CODE_FOR_insn_group_barrier) { @@ -9605,15 +9615,18 @@ ia64_emit_deleted_label_after_insn (rtx insn) /* Define the CFA after INSN with the steady-state definition. */ static void -ia64_dwarf2out_def_steady_cfa (rtx insn) +ia64_dwarf2out_def_steady_cfa (rtx insn, bool frame) { rtx fp = frame_pointer_needed ? 
hard_frame_pointer_rtx : stack_pointer_rtx; + const char *label = ia64_emit_deleted_label_after_insn (insn); + + if (!frame) + return; dwarf2out_def_cfa - (ia64_emit_deleted_label_after_insn (insn), - REGNO (fp), + (label, REGNO (fp), ia64_initial_elimination_offset (REGNO (arg_pointer_rtx), REGNO (fp)) + ARG_POINTER_CFA_OFFSET (current_function_decl)); @@ -9706,8 +9719,7 @@ process_set (FILE *asm_out_file, rtx pat, rtx insn, bool unwind, bool frame) if (unwind) fprintf (asm_out_file, "\t.fframe "HOST_WIDE_INT_PRINT_DEC"\n", -INTVAL (op1)); - if (frame) - ia64_dwarf2out_def_steady_cfa (insn); + ia64_dwarf2out_def_steady_cfa (insn, frame); } else process_epilogue (asm_out_file, insn, unwind, frame); @@ -9765,8 +9777,7 @@ process_set (FILE *asm_out_file, rtx pat, rtx insn, bool unwind, bool frame) if (unwind) fprintf (asm_out_file, "\t.vframe r%d\n", ia64_dbx_register_number (dest_regno)); - if (frame) - ia64_dwarf2out_def_steady_cfa (insn); + ia64_dwarf2out_def_steady_cfa (insn, frame); return 1; default: @@ -9911,8 +9922,8 @@ process_for_unwind_directive (FILE *asm_out_file, rtx insn) fprintf (asm_out_file, "\t.copy_state %d\n", cfun->machine->state_num); } - if (IA64_CHANGE_CFA_IN_EPILOGUE && frame) - ia64_dwarf2out_def_steady_cfa (insn); + if (IA64_CHANGE_CFA_IN_EPILOGUE) + ia64_dwarf2out_def_steady_cfa (insn, frame); need_copy_state = false; } } diff --git a/gcc/config/rs6000/rs6000.c b/gcc/config/rs6000/rs6000.c index 8e31cbbe7a6..40d83900bc9 100644 --- a/gcc/config/rs6000/rs6000.c +++ b/gcc/config/rs6000/rs6000.c @@ -979,6 +979,7 @@ static void rs6000_init_dwarf_reg_sizes_extra (tree); static rtx rs6000_legitimize_address (rtx, rtx, enum machine_mode); static rtx rs6000_debug_legitimize_address (rtx, rtx, enum machine_mode); static rtx rs6000_legitimize_tls_address (rtx, enum tls_model); +static rtx rs6000_delegitimize_address (rtx); static void rs6000_output_dwarf_dtprel (FILE *, int, rtx) ATTRIBUTE_UNUSED; static rtx rs6000_tls_get_addr (void); static rtx rs6000_got_sym (void); @@ -1436,6 +1437,9 @@ static const struct attribute_spec rs6000_attribute_table[] = #undef TARGET_USE_BLOCKS_FOR_CONSTANT_P #define TARGET_USE_BLOCKS_FOR_CONSTANT_P rs6000_use_blocks_for_constant_p +#undef TARGET_DELEGITIMIZE_ADDRESS +#define TARGET_DELEGITIMIZE_ADDRESS rs6000_delegitimize_address + #undef TARGET_BUILTIN_RECIPROCAL #define TARGET_BUILTIN_RECIPROCAL rs6000_builtin_reciprocal @@ -5080,6 +5084,33 @@ rs6000_debug_legitimize_address (rtx x, rtx oldx, enum machine_mode mode) return ret; } +/* If ORIG_X is a constant pool reference, return its known value, + otherwise ORIG_X. */ + +static rtx +rs6000_delegitimize_address (rtx x) +{ + rtx orig_x = delegitimize_mem_from_attrs (x); + + x = orig_x; + + if (!MEM_P (x)) + return orig_x; + + x = XEXP (x, 0); + + if (legitimate_constant_pool_address_p (x) + && GET_CODE (XEXP (x, 1)) == CONST + && GET_CODE (XEXP (XEXP (x, 1), 0)) == MINUS + && GET_CODE (XEXP (XEXP (XEXP (x, 1), 0), 0)) == SYMBOL_REF + && constant_pool_expr_p (XEXP (XEXP (XEXP (x, 1), 0), 0)) + && GET_CODE (XEXP (XEXP (XEXP (x, 1), 0), 1)) == SYMBOL_REF + && toc_relative_expr_p (XEXP (XEXP (XEXP (x, 1), 0), 1))) + return get_pool_constant (XEXP (XEXP (XEXP (x, 1), 0), 0)); + + return orig_x; +} + /* This is called from dwarf2out.c via TARGET_ASM_OUTPUT_DWARF_DTPREL. We need to emit DTP-relative relocations. 
*/ @@ -21304,7 +21335,7 @@ rs6000_debug_adjust_cost (rtx insn, rtx link, rtx dep_insn, int cost) static bool is_microcoded_insn (rtx insn) { - if (!insn || !INSN_P (insn) + if (!insn || !NONDEBUG_INSN_P (insn) || GET_CODE (PATTERN (insn)) == USE || GET_CODE (PATTERN (insn)) == CLOBBER) return false; @@ -21332,7 +21363,7 @@ is_microcoded_insn (rtx insn) static bool is_cracked_insn (rtx insn) { - if (!insn || !INSN_P (insn) + if (!insn || !NONDEBUG_INSN_P (insn) || GET_CODE (PATTERN (insn)) == USE || GET_CODE (PATTERN (insn)) == CLOBBER) return false; @@ -21360,7 +21391,7 @@ is_cracked_insn (rtx insn) static bool is_branch_slot_insn (rtx insn) { - if (!insn || !INSN_P (insn) + if (!insn || !NONDEBUG_INSN_P (insn) || GET_CODE (PATTERN (insn)) == USE || GET_CODE (PATTERN (insn)) == CLOBBER) return false; @@ -21519,7 +21550,7 @@ static bool is_nonpipeline_insn (rtx insn) { enum attr_type type; - if (!insn || !INSN_P (insn) + if (!insn || !NONDEBUG_INSN_P (insn) || GET_CODE (PATTERN (insn)) == USE || GET_CODE (PATTERN (insn)) == CLOBBER) return false; @@ -22098,8 +22129,8 @@ insn_must_be_first_in_group (rtx insn) enum attr_type type; if (!insn - || insn == NULL_RTX || GET_CODE (insn) == NOTE + || DEBUG_INSN_P (insn) || GET_CODE (PATTERN (insn)) == USE || GET_CODE (PATTERN (insn)) == CLOBBER) return false; @@ -22229,8 +22260,8 @@ insn_must_be_last_in_group (rtx insn) enum attr_type type; if (!insn - || insn == NULL_RTX || GET_CODE (insn) == NOTE + || DEBUG_INSN_P (insn) || GET_CODE (PATTERN (insn)) == USE || GET_CODE (PATTERN (insn)) == CLOBBER) return false; @@ -22356,7 +22387,7 @@ force_new_group (int sched_verbose, FILE *dump, rtx *group_insns, bool end = *group_end; int i; - if (next_insn == NULL_RTX) + if (next_insn == NULL_RTX || DEBUG_INSN_P (next_insn)) return can_issue_more; if (rs6000_sched_insert_nops > sched_finish_regroup_exact) diff --git a/gcc/cp/cp-lang.c b/gcc/cp/cp-lang.c index bd35a65c031..0278028a8cb 100644 --- a/gcc/cp/cp-lang.c +++ b/gcc/cp/cp-lang.c @@ -1,5 +1,5 @@ /* Language-dependent hooks for C++. - Copyright 2001, 2002, 2004, 2007, 2008 Free Software Foundation, Inc. + Copyright 2001, 2002, 2004, 2007, 2008, 2009 Free Software Foundation, Inc. Contributed by Alexandre Oliva <aoliva@redhat.com> This file is part of GCC. @@ -124,7 +124,9 @@ cxx_dwarf_name (tree t, int verbosity) gcc_assert (DECL_P (t)); if (verbosity >= 2) - return decl_as_string (t, TFF_DECL_SPECIFIERS | TFF_UNQUALIFIED_NAME); + return decl_as_string (t, + TFF_DECL_SPECIFIERS | TFF_UNQUALIFIED_NAME + | TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS); return cxx_printable_name (t, verbosity); } diff --git a/gcc/cp/cp-tree.h b/gcc/cp/cp-tree.h index 44801032dcf..a76b2d2e8ed 100644 --- a/gcc/cp/cp-tree.h +++ b/gcc/cp/cp-tree.h @@ -3987,7 +3987,9 @@ enum overload_flags { NO_SPECIAL = 0, DTOR_FLAG, OP_FLAG, TYPENAME_FLAG }; TFF_EXPR_IN_PARENS: parenthesize expressions. TFF_NO_FUNCTION_ARGUMENTS: don't show function arguments. TFF_UNQUALIFIED_NAME: do not print the qualifying scope of the - top-level entity. */ + top-level entity. + TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS: do not omit template arguments + identical to their defaults. 
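   For example, with this flag set a specialization such as
   std::vector<int> is printed with its defaulted arguments spelled
   out, as vector<int, allocator<int> >; comparing arguments against
   their defaults could expand templates and thus create new decls,
   which debug-info generation must not do.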
*/ #define TFF_PLAIN_IDENTIFIER (0) #define TFF_SCOPE (1) @@ -4002,6 +4004,7 @@ enum overload_flags { NO_SPECIAL = 0, DTOR_FLAG, OP_FLAG, TYPENAME_FLAG }; #define TFF_EXPR_IN_PARENS (1 << 9) #define TFF_NO_FUNCTION_ARGUMENTS (1 << 10) #define TFF_UNQUALIFIED_NAME (1 << 11) +#define TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS (1 << 12) /* Returns the TEMPLATE_DECL associated to a TEMPLATE_TEMPLATE_PARM node. */ diff --git a/gcc/cp/error.c b/gcc/cp/error.c index 239ff9ac07c..19649292627 100644 --- a/gcc/cp/error.c +++ b/gcc/cp/error.c @@ -84,7 +84,7 @@ static void dump_template_bindings (tree, tree, VEC(tree,gc) *); static void dump_scope (tree, int); static void dump_template_parms (tree, int, int); -static int count_non_default_template_args (tree, tree); +static int count_non_default_template_args (tree, tree, int); static const char *function_category (tree); static void maybe_print_instantiation_context (diagnostic_context *); @@ -163,13 +163,20 @@ dump_template_argument (tree arg, int flags) match the (optional) default template parameter in PARAMS */ static int -count_non_default_template_args (tree args, tree params) +count_non_default_template_args (tree args, tree params, int flags) { tree inner_args = INNERMOST_TEMPLATE_ARGS (args); int n = TREE_VEC_LENGTH (inner_args); int last; - if (params == NULL_TREE || !flag_pretty_templates) + if (params == NULL_TREE + /* We use this flag when generating debug information. We don't + want to expand templates at this point, for this may generate + new decls, which gets decl counts out of sync, which may in + turn cause codegen differences between compilations with and + without -g. */ + || (flags & TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS) != 0 + || !flag_pretty_templates) return n; for (last = n - 1; last >= 0; --last) @@ -201,7 +208,7 @@ count_non_default_template_args (tree args, tree params) static void dump_template_argument_list (tree args, tree parms, int flags) { - int n = count_non_default_template_args (args, parms); + int n = count_non_default_template_args (args, parms, flags); int need_comma = 0; int i; @@ -1448,7 +1455,7 @@ dump_template_parms (tree info, int primary, int flags) ? DECL_INNERMOST_TEMPLATE_PARMS (TI_TEMPLATE (info)) : NULL_TREE); - len = count_non_default_template_args (args, params); + len = count_non_default_template_args (args, params, flags); args = INNERMOST_TEMPLATE_ARGS (args); for (ix = 0; ix != len; ix++) diff --git a/gcc/cse.c b/gcc/cse.c index 5f83892c79f..3f3b863794f 100644 --- a/gcc/cse.c +++ b/gcc/cse.c @@ -4358,6 +4358,8 @@ cse_insn (rtx insn) apply_change_group (); fold_rtx (x, insn); } + else if (DEBUG_INSN_P (insn)) + canon_reg (PATTERN (insn), insn); /* Store the equivalent value in SRC_EQV, if different, or if the DEST is a STRICT_LOW_PART. The latter condition is necessary because SRC_EQV @@ -5788,7 +5790,7 @@ cse_insn (rtx insn) { prev = PREV_INSN (prev); } - while (prev != bb_head && NOTE_P (prev)); + while (prev != bb_head && (NOTE_P (prev) || DEBUG_INSN_P (prev))); /* Do not swap the registers around if the previous instruction attaches a REG_EQUIV note to REG1. @@ -6244,7 +6246,7 @@ cse_extended_basic_block (struct cse_basic_block_data *ebb_data) FIXME: This is a real kludge and needs to be done some other way. 
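   Debug insns are deliberately left out of this count (note the
   NONDEBUG_INSN_P test below), so the flush is triggered at the
   same point whether or not -g added debug insns to the stream.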
*/ - if (INSN_P (insn) + if (NONDEBUG_INSN_P (insn) && num_insns++ > PARAM_VALUE (PARAM_MAX_CSE_INSNS)) { flush_hash_table (); @@ -6536,6 +6538,9 @@ count_reg_usage (rtx x, int *counts, rtx dest, int incr) incr); return; + case DEBUG_INSN: + return; + case CALL_INSN: case INSN: case JUMP_INSN: @@ -6608,6 +6613,19 @@ count_reg_usage (rtx x, int *counts, rtx dest, int incr) } } +/* Return true if a register is dead. Can be used in for_each_rtx. */ + +static int +is_dead_reg (rtx *loc, void *data) +{ + rtx x = *loc; + int *counts = (int *)data; + + return (REG_P (x) + && REGNO (x) >= FIRST_PSEUDO_REGISTER + && counts[REGNO (x)] == 0); +} + /* Return true if set is live. */ static bool set_live_p (rtx set, rtx insn ATTRIBUTE_UNUSED, /* Only used with HAVE_cc0. */ @@ -6628,9 +6646,7 @@ set_live_p (rtx set, rtx insn ATTRIBUTE_UNUSED, /* Only used with HAVE_cc0. */ || !reg_referenced_p (cc0_rtx, PATTERN (tem)))) return false; #endif - else if (!REG_P (SET_DEST (set)) - || REGNO (SET_DEST (set)) < FIRST_PSEUDO_REGISTER - || counts[REGNO (SET_DEST (set))] != 0 + else if (!is_dead_reg (&SET_DEST (set), counts) || side_effects_p (SET_SRC (set))) return true; return false; @@ -6662,6 +6678,29 @@ insn_live_p (rtx insn, int *counts) } return false; } + else if (DEBUG_INSN_P (insn)) + { + rtx next; + + for (next = NEXT_INSN (insn); next; next = NEXT_INSN (next)) + if (NOTE_P (next)) + continue; + else if (!DEBUG_INSN_P (next)) + return true; + else if (INSN_VAR_LOCATION_DECL (insn) == INSN_VAR_LOCATION_DECL (next)) + return false; + + /* If this debug insn references a dead register, drop the + location expression for now. ??? We could try to find the + def and see if propagation is possible. */ + if (for_each_rtx (&INSN_VAR_LOCATION_LOC (insn), is_dead_reg, counts)) + { + INSN_VAR_LOCATION_LOC (insn) = gen_rtx_UNKNOWN_VAR_LOC (); + df_insn_rescan (insn); + } + + return true; + } else return true; } diff --git a/gcc/cselib.c b/gcc/cselib.c index 456f1359b6f..8d52c519ff3 100644 --- a/gcc/cselib.c +++ b/gcc/cselib.c @@ -38,6 +38,7 @@ along with GCC; see the file COPYING3. 
If not see #include "output.h" #include "ggc.h" #include "hashtab.h" +#include "tree-pass.h" #include "cselib.h" #include "params.h" #include "alloc-pool.h" @@ -54,9 +55,8 @@ static void unchain_one_elt_loc_list (struct elt_loc_list **); static int discard_useless_locs (void **, void *); static int discard_useless_values (void **, void *); static void remove_useless_values (void); -static rtx wrap_constant (enum machine_mode, rtx); static unsigned int cselib_hash_rtx (rtx, int); -static cselib_val *new_cselib_val (unsigned int, enum machine_mode); +static cselib_val *new_cselib_val (unsigned int, enum machine_mode, rtx); static void add_mem_for_addr (cselib_val *, cselib_val *, rtx); static cselib_val *cselib_lookup_mem (rtx, int); static void cselib_invalidate_regno (unsigned int, enum machine_mode); @@ -64,6 +64,15 @@ static void cselib_invalidate_mem (rtx); static void cselib_record_set (rtx, cselib_val *, cselib_val *); static void cselib_record_sets (rtx); +struct expand_value_data +{ + bitmap regs_active; + cselib_expand_callback callback; + void *callback_arg; +}; + +static rtx cselib_expand_value_rtx_1 (rtx, struct expand_value_data *, int); + /* There are three ways in which cselib can look up an rtx: - for a REG, the reg_values table (which is indexed by regno) is used - for a MEM, we recursively look up its address and then follow the @@ -134,6 +143,20 @@ static alloc_pool elt_loc_list_pool, elt_list_pool, cselib_val_pool, value_pool; /* If nonnull, cselib will call this function before freeing useless VALUEs. A VALUE is deemed useless if its "locs" field is null. */ void (*cselib_discard_hook) (cselib_val *); + +/* If nonnull, cselib will call this function before recording sets or + even clobbering outputs of INSN. All the recorded sets will be + represented in the array sets[n_sets]. new_val_min can be used to + tell whether values present in sets are introduced by this + instruction. */ +void (*cselib_record_sets_hook) (rtx insn, struct cselib_set *sets, + int n_sets); + +#define PRESERVED_VALUE_P(RTX) \ + (RTL_FLAG_CHECK1("PRESERVED_VALUE_P", (RTX), VALUE)->unchanging) +#define LONG_TERM_PRESERVED_VALUE_P(RTX) \ + (RTL_FLAG_CHECK1("LONG_TERM_PRESERVED_VALUE_P", (RTX), VALUE)->in_struct) + /* Allocate a struct elt_list and fill in its two elements with the @@ -199,12 +222,20 @@ unchain_one_value (cselib_val *v) } /* Remove all entries from the hash table. Also used during - initialization. If CLEAR_ALL isn't set, then only clear the entries - which are known to have been used. */ + initialization. */ void cselib_clear_table (void) { + cselib_reset_table_with_next_value (0); +} + +/* Remove all entries from the hash table, arranging for the next + value to be numbered NUM. */ + +void +cselib_reset_table_with_next_value (unsigned int num) +{ unsigned int i; for (i = 0; i < n_used_regs; i++) @@ -214,15 +245,24 @@ cselib_clear_table (void) n_used_regs = 0; + /* ??? Preserve constants? */ htab_empty (cselib_hash_table); n_useless_values = 0; - next_unknown_value = 0; + next_unknown_value = num; first_containing_mem = &dummy_val; } +/* Return the number of the next value that will be generated. */ + +unsigned int +cselib_get_next_unknown_value (void) +{ + return next_unknown_value; +} + /* The equality test for our hash table. The first argument ENTRY is a table element (i.e. a cselib_val), while the second arg X is an rtx. 
We know that all callers of htab_find_slot_with_hash will wrap CONST_INTs into a @@ -317,7 +357,7 @@ discard_useless_locs (void **x, void *info ATTRIBUTE_UNUSED) p = &(*p)->next; } - if (had_locs && v->locs == 0) + if (had_locs && v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx)) { n_useless_values++; values_became_useless = 1; @@ -332,7 +372,7 @@ discard_useless_values (void **x, void *info ATTRIBUTE_UNUSED) { cselib_val *v = (cselib_val *)*x; - if (v->locs == 0) + if (v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx)) { if (cselib_discard_hook) cselib_discard_hook (v); @@ -378,6 +418,78 @@ remove_useless_values (void) gcc_assert (!n_useless_values); } +/* Arrange for a value to not be removed from the hash table even if + it becomes useless. */ + +void +cselib_preserve_value (cselib_val *v) +{ + PRESERVED_VALUE_P (v->val_rtx) = 1; +} + +/* Test whether a value is preserved. */ + +bool +cselib_preserved_value_p (cselib_val *v) +{ + return PRESERVED_VALUE_P (v->val_rtx); +} + +/* Mark preserved values as preserved for the long term. */ + +static int +cselib_preserve_definitely (void **slot, void *info ATTRIBUTE_UNUSED) +{ + cselib_val *v = (cselib_val *)*slot; + + if (PRESERVED_VALUE_P (v->val_rtx) + && !LONG_TERM_PRESERVED_VALUE_P (v->val_rtx)) + LONG_TERM_PRESERVED_VALUE_P (v->val_rtx) = true; + + return 1; +} + +/* Clear the preserve marks for values not preserved for the long + term. */ + +static int +cselib_clear_preserve (void **slot, void *info ATTRIBUTE_UNUSED) +{ + cselib_val *v = (cselib_val *)*slot; + + if (PRESERVED_VALUE_P (v->val_rtx) + && !LONG_TERM_PRESERVED_VALUE_P (v->val_rtx)) + { + PRESERVED_VALUE_P (v->val_rtx) = false; + if (!v->locs) + n_useless_values++; + } + + return 1; +} + +/* Clean all non-constant expressions in the hash table, but retain + their values. */ + +void +cselib_preserve_only_values (bool retain) +{ + int i; + + htab_traverse (cselib_hash_table, + retain ? cselib_preserve_definitely : cselib_clear_preserve, + NULL); + + for (i = 0; i < FIRST_PSEUDO_REGISTER; i++) + cselib_invalidate_regno (i, reg_raw_mode[i]); + + cselib_invalidate_mem (callmem); + + remove_useless_values (); + + gcc_assert (first_containing_mem == &dummy_val); +} + /* Return the mode in which a register was last set. If X is not a register, return its mode. If the mode in which the register was set is not known, or the value was already clobbered, return @@ -549,19 +661,6 @@ rtx_equal_for_cselib_p (rtx x, rtx y) return 1; } -/* We need to pass down the mode of constants through the hash table - functions. For that purpose, wrap them in a CONST of the appropriate - mode. */ -static rtx -wrap_constant (enum machine_mode mode, rtx x) -{ - if (!CONST_INT_P (x) && GET_CODE (x) != CONST_FIXED - && (GET_CODE (x) != CONST_DOUBLE || GET_MODE (x) != VOIDmode)) - return x; - gcc_assert (mode != VOIDmode); - return gen_rtx_CONST (mode, x); -} - /* Hash an rtx. Return 0 if we couldn't hash the rtx. For registers and memory locations, we look up their cselib_val structure and return its VALUE element. @@ -748,7 +847,7 @@ cselib_hash_rtx (rtx x, int create) value is MODE. 
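   The new X argument below is the rtx for which the value is being
   created; it is used only for the detailed dump output, so that
   the dump can show which expression each value stands for.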
*/ static inline cselib_val * -new_cselib_val (unsigned int value, enum machine_mode mode) +new_cselib_val (unsigned int value, enum machine_mode mode, rtx x) { cselib_val *e = (cselib_val *) pool_alloc (cselib_val_pool); @@ -768,6 +867,18 @@ new_cselib_val (unsigned int value, enum machine_mode mode) e->addr_list = 0; e->locs = 0; e->next_containing_mem = 0; + + if (dump_file && (dump_flags & TDF_DETAILS)) + { + fprintf (dump_file, "cselib value %u ", value); + if (flag_dump_noaddr || flag_dump_unnumbered) + fputs ("# ", dump_file); + else + fprintf (dump_file, "%p ", (void*)e); + print_rtl_single (dump_file, x); + fputc ('\n', dump_file); + } + return e; } @@ -827,7 +938,7 @@ cselib_lookup_mem (rtx x, int create) if (! create) return 0; - mem_elt = new_cselib_val (++next_unknown_value, mode); + mem_elt = new_cselib_val (++next_unknown_value, mode, x); add_mem_for_addr (addr, mem_elt, x); slot = htab_find_slot_with_hash (cselib_hash_table, wrap_constant (mode, x), mem_elt->value, INSERT); @@ -842,7 +953,8 @@ cselib_lookup_mem (rtx x, int create) expand to the same place. */ static rtx -expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth) +expand_loc (struct elt_loc_list *p, struct expand_value_data *evd, + int max_depth) { rtx reg_result = NULL; unsigned int regno = UINT_MAX; @@ -854,7 +966,7 @@ expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth) the same reg. */ if ((REG_P (p->loc)) && (REGNO (p->loc) < regno) - && !bitmap_bit_p (regs_active, REGNO (p->loc))) + && !bitmap_bit_p (evd->regs_active, REGNO (p->loc))) { reg_result = p->loc; regno = REGNO (p->loc); @@ -867,7 +979,7 @@ expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth) else if (!REG_P (p->loc)) { rtx result, note; - if (dump_file) + if (dump_file && (dump_flags & TDF_DETAILS)) { print_inline_rtx (dump_file, p->loc, 0); fprintf (dump_file, "\n"); @@ -878,7 +990,7 @@ expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth) && (note = find_reg_note (p->setting_insn, REG_EQUAL, NULL_RTX)) && XEXP (note, 0) == XEXP (p->loc, 1)) return XEXP (p->loc, 1); - result = cselib_expand_value_rtx (p->loc, regs_active, max_depth - 1); + result = cselib_expand_value_rtx_1 (p->loc, evd, max_depth - 1); if (result) return result; } @@ -888,15 +1000,15 @@ expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth) if (regno != UINT_MAX) { rtx result; - if (dump_file) + if (dump_file && (dump_flags & TDF_DETAILS)) fprintf (dump_file, "r%d\n", regno); - result = cselib_expand_value_rtx (reg_result, regs_active, max_depth - 1); + result = cselib_expand_value_rtx_1 (reg_result, evd, max_depth - 1); if (result) return result; } - if (dump_file) + if (dump_file && (dump_flags & TDF_DETAILS)) { if (reg_result) { @@ -931,6 +1043,35 @@ expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth) rtx cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth) { + struct expand_value_data evd; + + evd.regs_active = regs_active; + evd.callback = NULL; + evd.callback_arg = NULL; + + return cselib_expand_value_rtx_1 (orig, &evd, max_depth); +} + +/* Same as cselib_expand_value_rtx, but using a callback to try to + resolve VALUEs that expand to nothing. 
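   The callback may return NULL or ORIG itself to decline, in which
   case the VALUE's own location list is tried as usual; any other
   rtx it returns is in turn recursively expanded.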
*/ + +rtx +cselib_expand_value_rtx_cb (rtx orig, bitmap regs_active, int max_depth, + cselib_expand_callback cb, void *data) +{ + struct expand_value_data evd; + + evd.regs_active = regs_active; + evd.callback = cb; + evd.callback_arg = data; + + return cselib_expand_value_rtx_1 (orig, &evd, max_depth); +} + +static rtx +cselib_expand_value_rtx_1 (rtx orig, struct expand_value_data *evd, + int max_depth) +{ rtx copy, scopy; int i, j; RTX_CODE code; @@ -980,13 +1121,13 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth) || regno == HARD_FRAME_POINTER_REGNUM) return orig; - bitmap_set_bit (regs_active, regno); + bitmap_set_bit (evd->regs_active, regno); - if (dump_file) + if (dump_file && (dump_flags & TDF_DETAILS)) fprintf (dump_file, "expanding: r%d into: ", regno); - result = expand_loc (l->elt->locs, regs_active, max_depth); - bitmap_clear_bit (regs_active, regno); + result = expand_loc (l->elt->locs, evd, max_depth); + bitmap_clear_bit (evd->regs_active, regno); if (result) return result; @@ -1017,8 +1158,8 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth) case SUBREG: { - rtx subreg = cselib_expand_value_rtx (SUBREG_REG (orig), regs_active, - max_depth - 1); + rtx subreg = cselib_expand_value_rtx_1 (SUBREG_REG (orig), evd, + max_depth - 1); if (!subreg) return NULL; scopy = simplify_gen_subreg (GET_MODE (orig), subreg, @@ -1027,18 +1168,39 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth) if (scopy == NULL || (GET_CODE (scopy) == SUBREG && !REG_P (SUBREG_REG (scopy)) - && !MEM_P (SUBREG_REG (scopy)))) + && !MEM_P (SUBREG_REG (scopy)) + && (REG_P (SUBREG_REG (orig)) + || MEM_P (SUBREG_REG (orig))))) return shallow_copy_rtx (orig); return scopy; } case VALUE: - if (dump_file) - fprintf (dump_file, "expanding value %s into: ", - GET_MODE_NAME (GET_MODE (orig))); + { + rtx result; + if (dump_file && (dump_flags & TDF_DETAILS)) + { + fputs ("\nexpanding ", dump_file); + print_rtl_single (dump_file, orig); + fputs (" into...", dump_file); + } - return expand_loc (CSELIB_VAL_PTR (orig)->locs, regs_active, max_depth); + if (!evd->callback) + result = NULL; + else + { + result = evd->callback (orig, evd->regs_active, max_depth, + evd->callback_arg); + if (result == orig) + result = NULL; + else if (result) + result = cselib_expand_value_rtx_1 (result, evd, max_depth); + } + if (!result) + result = expand_loc (CSELIB_VAL_PTR (orig)->locs, evd, max_depth); + return result; + } default: break; } @@ -1057,7 +1219,8 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth) case 'e': if (XEXP (orig, i) != NULL) { - rtx result = cselib_expand_value_rtx (XEXP (orig, i), regs_active, max_depth - 1); + rtx result = cselib_expand_value_rtx_1 (XEXP (orig, i), evd, + max_depth - 1); if (!result) return NULL; XEXP (copy, i) = result; @@ -1071,7 +1234,8 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth) XVEC (copy, i) = rtvec_alloc (XVECLEN (orig, i)); for (j = 0; j < XVECLEN (copy, i); j++) { - rtx result = cselib_expand_value_rtx (XVECEXP (orig, i, j), regs_active, max_depth - 1); + rtx result = cselib_expand_value_rtx_1 (XVECEXP (orig, i, j), + evd, max_depth - 1); if (!result) return NULL; XVECEXP (copy, i, j) = result; @@ -1155,13 +1319,17 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth) { XEXP (copy, 0) = gen_rtx_CONST (GET_MODE (XEXP (orig, 0)), XEXP (copy, 0)); - if (dump_file) + if (dump_file && (dump_flags & TDF_DETAILS)) fprintf (dump_file, " wrapping const_int result in 
const to preserve mode %s\n", GET_MODE_NAME (GET_MODE (XEXP (copy, 0)))); } scopy = simplify_rtx (copy); if (scopy) - return scopy; + { + if (GET_MODE (copy) != GET_MODE (scopy)) + scopy = wrap_constant (GET_MODE (copy), scopy); + return scopy; + } return copy; } @@ -1199,7 +1367,7 @@ cselib_subst_to_values (rtx x) { /* This happens for autoincrements. Assign a value that doesn't match any other. */ - e = new_cselib_val (++next_unknown_value, GET_MODE (x)); + e = new_cselib_val (++next_unknown_value, GET_MODE (x), x); } return e->val_rtx; @@ -1215,7 +1383,7 @@ cselib_subst_to_values (rtx x) case PRE_DEC: case POST_MODIFY: case PRE_MODIFY: - e = new_cselib_val (++next_unknown_value, GET_MODE (x)); + e = new_cselib_val (++next_unknown_value, GET_MODE (x), x); return e->val_rtx; default: @@ -1259,6 +1427,21 @@ cselib_subst_to_values (rtx x) return copy; } +/* Log a lookup of X to the cselib table along with the result RET. */ + +static cselib_val * +cselib_log_lookup (rtx x, cselib_val *ret) +{ + if (dump_file && (dump_flags & TDF_DETAILS)) + { + fputs ("cselib lookup ", dump_file); + print_inline_rtx (dump_file, x, 2); + fprintf (dump_file, " => %u\n", ret ? ret->value : 0); + } + + return ret; +} + /* Look up the rtl expression X in our tables and return the value it has. If CREATE is zero, we return NULL if we don't know the value. Otherwise, we create a new one if possible, using mode MODE if X doesn't have a mode @@ -1287,10 +1470,10 @@ cselib_lookup (rtx x, enum machine_mode mode, int create) l = l->next; for (; l; l = l->next) if (mode == GET_MODE (l->elt->val_rtx)) - return l->elt; + return cselib_log_lookup (x, l->elt); if (! create) - return 0; + return cselib_log_lookup (x, 0); if (i < FIRST_PSEUDO_REGISTER) { @@ -1300,7 +1483,7 @@ cselib_lookup (rtx x, enum machine_mode mode, int create) max_value_regs = n; } - e = new_cselib_val (++next_unknown_value, GET_MODE (x)); + e = new_cselib_val (++next_unknown_value, GET_MODE (x), x); e->locs = new_elt_loc_list (e->locs, x); if (REG_VALUES (i) == 0) { @@ -1313,34 +1496,34 @@ cselib_lookup (rtx x, enum machine_mode mode, int create) REG_VALUES (i)->next = new_elt_list (REG_VALUES (i)->next, e); slot = htab_find_slot_with_hash (cselib_hash_table, x, e->value, INSERT); *slot = e; - return e; + return cselib_log_lookup (x, e); } if (MEM_P (x)) - return cselib_lookup_mem (x, create); + return cselib_log_lookup (x, cselib_lookup_mem (x, create)); hashval = cselib_hash_rtx (x, create); /* Can't even create if hashing is not possible. */ if (! hashval) - return 0; + return cselib_log_lookup (x, 0); slot = htab_find_slot_with_hash (cselib_hash_table, wrap_constant (mode, x), hashval, create ? INSERT : NO_INSERT); if (slot == 0) - return 0; + return cselib_log_lookup (x, 0); e = (cselib_val *) *slot; if (e) - return e; + return cselib_log_lookup (x, e); - e = new_cselib_val (hashval, mode); + e = new_cselib_val (hashval, mode, x); /* We have to fill the slot before calling cselib_subst_to_values: the hash table is inconsistent until we do so, and cselib_subst_to_values will need to do lookups. */ *slot = (void *) e; e->locs = new_elt_loc_list (e->locs, cselib_subst_to_values (x)); - return e; + return cselib_log_lookup (x, e); } /* Invalidate any entries in reg_values that overlap REGNO. 
This is called @@ -1427,7 +1610,7 @@ cselib_invalidate_regno (unsigned int regno, enum machine_mode mode) break; } } - if (v->locs == 0) + if (v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx)) n_useless_values++; } } @@ -1510,7 +1693,7 @@ cselib_invalidate_mem (rtx mem_rtx) unchain_one_elt_loc_list (p); } - if (had_locs && v->locs == 0) + if (had_locs && v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx)) n_useless_values++; next = v->next_containing_mem; @@ -1591,28 +1774,19 @@ cselib_record_set (rtx dest, cselib_val *src_elt, cselib_val *dest_addr_elt) REG_VALUES (dreg)->elt = src_elt; } - if (src_elt->locs == 0) + if (src_elt->locs == 0 && !PRESERVED_VALUE_P (src_elt->val_rtx)) n_useless_values--; src_elt->locs = new_elt_loc_list (src_elt->locs, dest); } else if (MEM_P (dest) && dest_addr_elt != 0 && cselib_record_memory) { - if (src_elt->locs == 0) + if (src_elt->locs == 0 && !PRESERVED_VALUE_P (src_elt->val_rtx)) n_useless_values--; add_mem_for_addr (dest_addr_elt, src_elt, dest); } } -/* Describe a single set that is part of an insn. */ -struct set -{ - rtx src; - rtx dest; - cselib_val *src_elt; - cselib_val *dest_addr_elt; -}; - /* There is no good way to determine how many elements there can be in a PARALLEL. Since it's fairly cheap, use a really large number. */ #define MAX_SETS (FIRST_PSEUDO_REGISTER * 2) @@ -1623,7 +1797,7 @@ cselib_record_sets (rtx insn) { int n_sets = 0; int i; - struct set sets[MAX_SETS]; + struct cselib_set sets[MAX_SETS]; rtx body = PATTERN (insn); rtx cond = 0; @@ -1695,6 +1869,9 @@ cselib_record_sets (rtx insn) } } + if (cselib_record_sets_hook) + cselib_record_sets_hook (insn, sets, n_sets); + /* Invalidate all locations written by this insn. Note that the elts we looked up in the previous loop aren't affected, just some of their locations may go away. */ @@ -1751,7 +1928,7 @@ cselib_process_insn (rtx insn) && GET_CODE (PATTERN (insn)) == ASM_OPERANDS && MEM_VOLATILE_P (PATTERN (insn)))) { - cselib_clear_table (); + cselib_reset_table_with_next_value (next_unknown_value); return; } @@ -1868,4 +2045,92 @@ cselib_finish (void) next_unknown_value = 0; } +/* Dump the cselib_val *X to FILE *info. */ + +static int +dump_cselib_val (void **x, void *info) +{ + cselib_val *v = (cselib_val *)*x; + FILE *out = (FILE *)info; + bool need_lf = true; + + print_inline_rtx (out, v->val_rtx, 0); + + if (v->locs) + { + struct elt_loc_list *l = v->locs; + if (need_lf) + { + fputc ('\n', out); + need_lf = false; + } + fputs (" locs:", out); + do + { + fprintf (out, "\n from insn %i ", + INSN_UID (l->setting_insn)); + print_inline_rtx (out, l->loc, 4); + } + while ((l = l->next)); + fputc ('\n', out); + } + else + { + fputs (" no locs", out); + need_lf = true; + } + + if (v->addr_list) + { + struct elt_list *e = v->addr_list; + if (need_lf) + { + fputc ('\n', out); + need_lf = false; + } + fputs (" addr list:", out); + do + { + fputs ("\n ", out); + print_inline_rtx (out, e->elt->val_rtx, 2); + } + while ((e = e->next)); + fputc ('\n', out); + } + else + { + fputs (" no addrs", out); + need_lf = true; + } + + if (v->next_containing_mem == &dummy_val) + fputs (" last mem\n", out); + else if (v->next_containing_mem) + { + fputs (" next mem ", out); + print_inline_rtx (out, v->next_containing_mem->val_rtx, 2); + fputc ('\n', out); + } + else if (need_lf) + fputc ('\n', out); + + return 1; +} + +/* Dump to OUT everything in the CSELIB table. 
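   A hedged usage sketch (dump_file is just the conventional stream a
   pass owns; any FILE * works):

     if (dump_file && (dump_flags & TDF_DETAILS))
       dump_cselib_table (dump_file);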
*/ + +void +dump_cselib_table (FILE *out) +{ + fprintf (out, "cselib hash table:\n"); + htab_traverse (cselib_hash_table, dump_cselib_val, out); + if (first_containing_mem != &dummy_val) + { + fputs ("first mem ", out); + print_inline_rtx (out, first_containing_mem->val_rtx, 2); + fputc ('\n', out); + } + fprintf (out, "last unknown value %i\n", next_unknown_value); +} + #include "gt-cselib.h" diff --git a/gcc/cselib.h b/gcc/cselib.h index fccc6a2cd32..433f4deb87d 100644 --- a/gcc/cselib.h +++ b/gcc/cselib.h @@ -53,7 +53,18 @@ struct GTY(()) elt_list { cselib_val *elt; }; +/* Describe a single set that is part of an insn. */ +struct cselib_set +{ + rtx src; + rtx dest; + cselib_val *src_elt; + cselib_val *dest_addr_elt; +}; + extern void (*cselib_discard_hook) (cselib_val *); +extern void (*cselib_record_sets_hook) (rtx insn, struct cselib_set *sets, + int n_sets); extern cselib_val *cselib_lookup (rtx, enum machine_mode, int); extern void cselib_init (bool record_memory); @@ -64,5 +75,16 @@ extern enum machine_mode cselib_reg_set_mode (const_rtx); extern int rtx_equal_for_cselib_p (rtx, rtx); extern int references_value_p (const_rtx, int); extern rtx cselib_expand_value_rtx (rtx, bitmap, int); +typedef rtx (*cselib_expand_callback)(rtx, bitmap, int, void *); +extern rtx cselib_expand_value_rtx_cb (rtx, bitmap, int, + cselib_expand_callback, void*); extern rtx cselib_subst_to_values (rtx); extern void cselib_invalidate_rtx (rtx); + +extern void cselib_reset_table_with_next_value (unsigned int); +extern unsigned int cselib_get_next_unknown_value (void); +extern void cselib_preserve_value (cselib_val *); +extern bool cselib_preserved_value_p (cselib_val *); +extern void cselib_preserve_only_values (bool); + +extern void dump_cselib_table (FILE *); diff --git a/gcc/dce.c b/gcc/dce.c index 2d1bd7ada29..3e1dd47f3a4 100644 --- a/gcc/dce.c +++ b/gcc/dce.c @@ -124,6 +124,7 @@ deletable_insn_p (rtx insn, bool fast, bitmap arg_stores) switch (GET_CODE (body)) { case USE: + case VAR_LOCATION: return false; case CLOBBER: @@ -643,6 +644,9 @@ mark_reg_dependencies (rtx insn) struct df_link *defs; df_ref *use_rec; + if (DEBUG_INSN_P (insn)) + return; + for (use_rec = DF_INSN_USES (insn); *use_rec; use_rec++) { df_ref use = *use_rec; diff --git a/gcc/ddg.c b/gcc/ddg.c index adecb707749..439acd1f434 100644 --- a/gcc/ddg.c +++ b/gcc/ddg.c @@ -166,6 +166,9 @@ create_ddg_dep_from_intra_loop_link (ddg_ptr g, ddg_node_ptr src_node, else if (DEP_TYPE (link) == REG_DEP_OUTPUT) t = OUTPUT_DEP; + gcc_assert (!DEBUG_INSN_P (dest_node->insn) || t == ANTI_DEP); + gcc_assert (!DEBUG_INSN_P (src_node->insn) || DEBUG_INSN_P (dest_node->insn)); + /* We currently choose not to create certain anti-deps edges and compensate for that by generating reg-moves based on the life-range analysis. The anti-deps that will be deleted are the ones which @@ -209,6 +212,9 @@ create_ddg_dep_no_link (ddg_ptr g, ddg_node_ptr from, ddg_node_ptr to, enum reg_note dep_kind; struct _dep _dep, *dep = &_dep; + gcc_assert (!DEBUG_INSN_P (to->insn) || d_t == ANTI_DEP); + gcc_assert (!DEBUG_INSN_P (from->insn) || DEBUG_INSN_P (to->insn)); + if (d_t == ANTI_DEP) dep_kind = REG_DEP_ANTI; else if (d_t == OUTPUT_DEP) @@ -277,10 +283,11 @@ add_cross_iteration_register_deps (ddg_ptr g, df_ref last_def) /* Add true deps from last_def to it's uses in the next iteration. Any such upwards exposed use appears before the last_def def. 
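   As an illustrative sketch, with

     (insn 10 ... (set (reg:SI 5) ...))                ;; last_def
     (debug_insn 3 ... (var_location: x (reg:SI 5)))   ;; exposed use

   the use of r5 early in iteration i+1 reads the value set late in
   iteration i; when that user is a debug insn, the change below records
   only an ANTI_DEP so scheduling freedom is unaffected.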
*/ - create_ddg_dep_no_link (g, last_def_node, use_node, TRUE_DEP, + create_ddg_dep_no_link (g, last_def_node, use_node, + DEBUG_INSN_P (use_insn) ? ANTI_DEP : TRUE_DEP, REG_DEP, 1); } - else + else if (!DEBUG_INSN_P (use_insn)) { /* Add anti deps from last_def's uses in the current iteration to the first def in the next iteration. We do not add ANTI @@ -417,6 +424,8 @@ build_intra_loop_deps (ddg_ptr g) for (j = 0; j <= i; j++) { ddg_node_ptr j_node = &g->nodes[j]; + if (DEBUG_INSN_P (j_node->insn)) + continue; if (mem_access_insn_p (j_node->insn)) /* Don't bother calculating inter-loop dep if an intra-loop dep already exists. */ @@ -458,10 +467,15 @@ create_ddg (basic_block bb, int closing_branch_deps) if (! INSN_P (insn) || GET_CODE (PATTERN (insn)) == USE) continue; - if (mem_read_insn_p (insn)) - g->num_loads++; - if (mem_write_insn_p (insn)) - g->num_stores++; + if (DEBUG_INSN_P (insn)) + g->num_debug++; + else + { + if (mem_read_insn_p (insn)) + g->num_loads++; + if (mem_write_insn_p (insn)) + g->num_stores++; + } num_nodes++; } diff --git a/gcc/ddg.h b/gcc/ddg.h index 9eaeea5bd29..fbe2988606c 100644 --- a/gcc/ddg.h +++ b/gcc/ddg.h @@ -1,5 +1,5 @@ /* DDG - Data Dependence Graph - interface. - Copyright (C) 2004, 2005, 2006, 2007 + Copyright (C) 2004, 2005, 2006, 2007, 2008 Free Software Foundation, Inc. Contributed by Ayal Zaks and Mustafa Hagog <zaks,mustafa@il.ibm.com> @@ -121,6 +121,9 @@ struct ddg int num_loads; int num_stores; + /* Number of debug instructions in the BB. */ + int num_debug; + /* This array holds the nodes in the graph; it is indexed by the node cuid, which follows the order of the instructions in the BB. */ ddg_node_ptr nodes; @@ -134,8 +137,8 @@ struct ddg int closing_branch_deps; /* Array and number of backarcs (edges with distance > 0) in the DDG. */ - ddg_edge_ptr *backarcs; int num_backarcs; + ddg_edge_ptr *backarcs; }; diff --git a/gcc/df-problems.c b/gcc/df-problems.c index e19a51ee4b9..cdbc8a19d9d 100644 --- a/gcc/df-problems.c +++ b/gcc/df-problems.c @@ -858,7 +858,7 @@ df_lr_bb_local_compute (unsigned int bb_index) { unsigned int uid = INSN_UID (insn); - if (!INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) continue; for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++) @@ -3182,6 +3182,8 @@ df_set_note (enum reg_note note_type, rtx insn, rtx old, rtx reg) rtx curr = old; rtx prev = NULL; + gcc_assert (!DEBUG_INSN_P (insn)); + while (curr) if (XEXP (curr, 0) == reg) { @@ -3314,9 +3316,12 @@ df_whole_mw_reg_dead_p (struct df_mw_hardreg *mws, static rtx df_set_dead_notes_for_mw (rtx insn, rtx old, struct df_mw_hardreg *mws, bitmap live, bitmap do_not_gen, - bitmap artificial_uses) + bitmap artificial_uses, bool *added_notes_p) { unsigned int r; + bool is_debug = *added_notes_p; + + *added_notes_p = false; #ifdef REG_DEAD_DEBUGGING if (dump_file) @@ -3334,6 +3339,11 @@ df_set_dead_notes_for_mw (rtx insn, rtx old, struct df_mw_hardreg *mws, if (df_whole_mw_reg_dead_p (mws, live, artificial_uses, do_not_gen)) { /* Add a dead note for the entire multi word register. 
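	     For example (illustrative only), if a DImode value lives in
	     the hard register pair r0/r1 and both words die here, one

	       (expr_list:REG_DEAD (reg:DI 0) ...)

	     note covering the whole multi-word register is added rather
	     than one note per word.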
*/ + if (is_debug) + { + *added_notes_p = true; + return old; + } old = df_set_note (REG_DEAD, insn, old, mws->mw_reg); #ifdef REG_DEAD_DEBUGGING df_print_note ("adding 1: ", insn, REG_NOTES (insn)); @@ -3346,6 +3356,11 @@ df_set_dead_notes_for_mw (rtx insn, rtx old, struct df_mw_hardreg *mws, && !bitmap_bit_p (artificial_uses, r) && !bitmap_bit_p (do_not_gen, r)) { + if (is_debug) + { + *added_notes_p = true; + return old; + } old = df_set_note (REG_DEAD, insn, old, regno_reg_rtx[r]); #ifdef REG_DEAD_DEBUGGING df_print_note ("adding 2: ", insn, REG_NOTES (insn)); @@ -3456,10 +3471,13 @@ df_note_bb_compute (unsigned int bb_index, struct df_mw_hardreg **mws_rec; rtx old_dead_notes; rtx old_unused_notes; + int debug_insn; if (!INSN_P (insn)) continue; + debug_insn = DEBUG_INSN_P (insn); + bitmap_clear (do_not_gen); df_kill_notes (insn, &old_dead_notes, &old_unused_notes); @@ -3544,10 +3562,18 @@ df_note_bb_compute (unsigned int bb_index, struct df_mw_hardreg *mws = *mws_rec; if ((DF_MWS_REG_DEF_P (mws)) && !df_ignore_stack_reg (mws->start_regno)) - old_dead_notes - = df_set_dead_notes_for_mw (insn, old_dead_notes, - mws, live, do_not_gen, - artificial_uses); + { + bool really_add_notes = debug_insn != 0; + + old_dead_notes + = df_set_dead_notes_for_mw (insn, old_dead_notes, + mws, live, do_not_gen, + artificial_uses, + &really_add_notes); + + if (really_add_notes) + debug_insn = -1; + } mws_rec++; } @@ -3557,7 +3583,7 @@ df_note_bb_compute (unsigned int bb_index, unsigned int uregno = DF_REF_REGNO (use); #ifdef REG_DEAD_DEBUGGING - if (dump_file) + if (dump_file && !debug_insn) { fprintf (dump_file, " regular looking at use "); df_ref_debug (use, dump_file); @@ -3565,6 +3591,12 @@ df_note_bb_compute (unsigned int bb_index, #endif if (!bitmap_bit_p (live, uregno)) { + if (debug_insn) + { + debug_insn = -1; + break; + } + if ( (!(DF_REF_FLAGS (use) & DF_REF_MW_HARDREG)) && (!bitmap_bit_p (do_not_gen, uregno)) && (!bitmap_bit_p (artificial_uses, uregno)) @@ -3596,6 +3628,14 @@ df_note_bb_compute (unsigned int bb_index, free_EXPR_LIST_node (old_dead_notes); old_dead_notes = next; } + + if (debug_insn == -1) + { + /* ??? We could probably do better here, replacing dead + registers with their definitions. */ + INSN_VAR_LOCATION_LOC (insn) = gen_rtx_UNKNOWN_VAR_LOC (); + df_insn_rescan_debug_internal (insn); + } } } @@ -3741,6 +3781,9 @@ df_simulate_uses (rtx insn, bitmap live) df_ref *use_rec; unsigned int uid = INSN_UID (insn); + if (DEBUG_INSN_P (insn)) + return; + for (use_rec = DF_INSN_UID_USES (uid); *use_rec; use_rec++) { df_ref use = *use_rec; @@ -3807,7 +3850,7 @@ df_simulate_initialize_backwards (basic_block bb, bitmap live) void df_simulate_one_insn_backwards (basic_block bb, rtx insn, bitmap live) { - if (! INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) return; df_simulate_defs (insn, live); diff --git a/gcc/df-scan.c b/gcc/df-scan.c index 393c74ca64b..35be03c7629 100644 --- a/gcc/df-scan.c +++ b/gcc/df-scan.c @@ -1310,6 +1310,62 @@ df_insn_rescan (rtx insn) return true; } +/* Same as df_insn_rescan, but don't mark the basic block as + dirty. 
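   The intended call sequence, as added to df_note_bb_compute in
   df-problems.c above, first resets the location and then rescans:

     INSN_VAR_LOCATION_LOC (insn) = gen_rtx_UNKNOWN_VAR_LOC ();
     df_insn_rescan_debug_internal (insn);

   which is why this function asserts VAR_LOC_UNKNOWN_P on entry.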
*/ + +bool +df_insn_rescan_debug_internal (rtx insn) +{ + unsigned int uid = INSN_UID (insn); + struct df_insn_info *insn_info; + + gcc_assert (DEBUG_INSN_P (insn)); + gcc_assert (VAR_LOC_UNKNOWN_P (INSN_VAR_LOCATION_LOC (insn))); + + if (!df) + return false; + + insn_info = DF_INSN_UID_SAFE_GET (INSN_UID (insn)); + if (!insn_info) + return false; + + if (dump_file) + fprintf (dump_file, "deleting debug_insn with uid = %d.\n", uid); + + bitmap_clear_bit (df->insns_to_delete, uid); + bitmap_clear_bit (df->insns_to_rescan, uid); + bitmap_clear_bit (df->insns_to_notes_rescan, uid); + + if (!insn_info->defs) + return false; + + if (insn_info->defs == df_null_ref_rec + && insn_info->uses == df_null_ref_rec + && insn_info->eq_uses == df_null_ref_rec + && insn_info->mw_hardregs == df_null_mw_rec) + return false; + + df_mw_hardreg_chain_delete (insn_info->mw_hardregs); + + if (df_chain) + { + df_ref_chain_delete_du_chain (insn_info->defs); + df_ref_chain_delete_du_chain (insn_info->uses); + df_ref_chain_delete_du_chain (insn_info->eq_uses); + } + + df_ref_chain_delete (insn_info->defs); + df_ref_chain_delete (insn_info->uses); + df_ref_chain_delete (insn_info->eq_uses); + + insn_info->defs = df_null_ref_rec; + insn_info->uses = df_null_ref_rec; + insn_info->eq_uses = df_null_ref_rec; + insn_info->mw_hardregs = df_null_mw_rec; + + return true; +} + /* Rescan all of the insns in the function. Note that the artificial uses and defs are not touched. This function will destroy def-se @@ -3267,12 +3323,20 @@ df_uses_record (enum df_ref_class cl, struct df_collection_rec *collection_rec, break; } + case VAR_LOCATION: + df_uses_record (cl, collection_rec, + &PAT_VAR_LOCATION_LOC (x), + DF_REF_REG_USE, bb, insn_info, + flags, width, offset, mode); + return; + case PRE_DEC: case POST_DEC: case PRE_INC: case POST_INC: case PRE_MODIFY: case POST_MODIFY: + gcc_assert (!DEBUG_INSN_P (insn_info->insn)); /* Catch the def of the register being modified. */ df_ref_record (cl, collection_rec, XEXP (x, 0), &XEXP (x, 0), bb, insn_info, @@ -1002,6 +1002,7 @@ extern struct df_insn_info * df_insn_create_insn_record (rtx); extern void df_insn_delete (basic_block, unsigned int); extern void df_bb_refs_record (int, bool); extern bool df_insn_rescan (rtx); +extern bool df_insn_rescan_debug_internal (rtx); extern void df_insn_rescan_all (void); extern void df_process_deferred_rescans (void); extern void df_recompute_luids (basic_block); diff --git a/gcc/diagnostic.c b/gcc/diagnostic.c index 3f7bab19d8b..b8e025ac992 100644 --- a/gcc/diagnostic.c +++ b/gcc/diagnostic.c @@ -322,6 +322,9 @@ diagnostic_report_diagnostic (diagnostic_context *context, && !diagnostic_report_warnings_p (location)) return false; + if (diagnostic->kind == DK_NOTE && flag_compare_debug) + return false; + if (diagnostic->kind == DK_PEDWARN) diagnostic->kind = pedantic_warning_kind (); diff --git a/gcc/doc/gimple.texi b/gcc/doc/gimple.texi index a78c52dcafc..76cc269aefe 100644 --- a/gcc/doc/gimple.texi +++ b/gcc/doc/gimple.texi @@ -691,12 +691,21 @@ Return true if the code of g is @code{GIMPLE_ASSIGN}. @end deftypefn @deftypefn {GIMPLE function} is_gimple_call (gimple g) -Return true if the code of g is @code{GIMPLE_CALL} +Return true if the code of g is @code{GIMPLE_CALL}. @end deftypefn +@deftypefn {GIMPLE function} is_gimple_debug (gimple g) +Return true if the code of g is @code{GIMPLE_DEBUG}. 
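A typical use, sketched here for illustration rather than quoted from
any particular pass, is to skip debug statements when walking a
statement sequence:

@smallexample
if (is_gimple_debug (gsi_stmt (gsi)))
  @{
    gsi_next (&gsi);
    continue;
  @}
@end smallexample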
+@end deftypefn
+
@deftypefn {GIMPLE function} gimple_assign_cast_p (gimple g)
Return true if g is a @code{GIMPLE_ASSIGN} that performs a type cast
-operation
+operation.
+@end deftypefn
+
+@deftypefn {GIMPLE function} gimple_debug_bind_p (gimple g)
+Return true if g is a @code{GIMPLE_DEBUG} that binds the value of an
+expression to a variable.
@end deftypefn

@node Manipulating GIMPLE statements
diff --git a/gcc/doc/install.texi b/gcc/doc/install.texi
index 7284020838c..d0f9839f705 100644
--- a/gcc/doc/install.texi
+++ b/gcc/doc/install.texi
@@ -2096,8 +2096,53 @@ Removes any @option{-O}-started option from @code{BOOT_CFLAGS}, and adds
Analogous to @code{bootstrap-O1}.

@item @samp{bootstrap-debug}
-Builds stage2 without debug information, and uses
-@file{contrib/compare-debug} to compare object files.
+Verifies that the compiler generates the same executable code, whether
+or not it is asked to emit debug information.  To this end, this option
+builds stage2 host programs without debug information, and uses
+@file{contrib/compare-debug} to compare them with the stripped stage3
+object files.  If @code{BOOT_CFLAGS} is overridden so as to not enable
+debug information, stage2 will have it, and stage3 won't.  This option
+is enabled by default when GCC bootstrapping is enabled: in addition to
+better test coverage, it makes default bootstraps faster and leaner.
+
+@item @samp{bootstrap-debug-big}
+In addition to the checking performed by @code{bootstrap-debug}, this
+option saves internal compiler dumps during stage2 and stage3 and
+compares them as well, which helps catch additional potential problems,
+but at a great cost in terms of disk space.
+
+@item @samp{bootstrap-debug-lean}
+This option saves disk space compared with @code{bootstrap-debug-big},
+but at the expense of some recompilation.  Instead of saving the dumps
+of stage2 and stage3 until the final compare, it uses
+@option{-fcompare-debug} to generate, compare and remove the dumps
+during stage3, repeating the compilation that already took place in
+stage2, whose dumps were not saved.
+
+@item @samp{bootstrap-debug-lib}
+This option tests executable code invariance over debug information
+generation on target libraries, just like @code{bootstrap-debug-lean}
+tests it on host programs.  It builds stage3 libraries with
+@option{-fcompare-debug}, and it can be used along with any of the
+@code{bootstrap-debug} options above.
+
+There aren't @code{-lean} or @code{-big} counterparts to this option
+because most libraries are only built in stage3, so bootstrap compares
+would not get significant coverage.  Moreover, the few libraries built
+in stage2 are used in stage3 host programs, so we wouldn't want to
+compile stage2 libraries with different options for comparison purposes.
+
+@item @samp{bootstrap-debug-ckovw}
+Arranges for error messages to be issued if the compiler built on any
+stage is run without the option @option{-fcompare-debug}.  This is
+useful to verify the full @option{-fcompare-debug} testing coverage.  It
+must be used along with @code{bootstrap-debug-lean} and
+@code{bootstrap-debug-lib}.
+
+@item @samp{bootstrap-time}
+Arranges for the run time of each program started by the GCC driver,
+built in any stage, to be logged to @file{time.log}, in the top level of
+the build tree.

@end table
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 8c1db6d7ed3..4aa4f52f0d6 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -311,6 +311,7 @@ Objective-C and Objective-C++ Dialects}.
-frandom-seed=@var{string} -fsched-verbose=@var{n} @gol
-fsel-sched-verbose -fsel-sched-dump-cfg -fsel-sched-pipelining-verbose @gol
-ftest-coverage -ftime-report -fvar-tracking @gol
+-fvar-tracking-assignments -fvar-tracking-assignments-toggle @gol
-g -g@var{level} -gtoggle -gcoff -gdwarf-@var{version} @gol
-ggdb -gstabs -gstabs+ -gvms -gxcoff -gxcoff+ @gol
-fno-merge-debug-strings -fno-dwarf2-cfi-asm @gol
@@ -4397,11 +4398,14 @@ assembler (GAS) to fail with an error.
@opindex gdwarf-@var{version}
Produce debugging information in DWARF format (if that is
supported).  This is the format used by DBX on IRIX 6.  The value
-of @var{version} may be either 2 or 3; the default version is 2.
+of @var{version} may be either 2, 3 or 4; the default version is 2.

Note that with DWARF version 2 some ports require, and will always
use, some non-conflicting DWARF 3 extensions in the unwind tables.

+Version 4 may require GDB 7.0 and @option{-fvar-tracking-assignments}
+for maximum benefit.
+
@item -gvms
@opindex gvms
Produce debugging information in VMS debug format (if that is
@@ -4445,9 +4449,12 @@ other options are processed, and it does so only once, no matter how
many times it is given.  This is mainly intended to be used with
@option{-fcompare-debug}.

-@item -fdump-final-insns=@var{file}
-@opindex fdump-final-insns=
-Dump the final internal representation (RTL) to @var{file}.
+@item -fdump-final-insns@r{[}=@var{file}@r{]}
+@opindex fdump-final-insns
+Dump the final internal representation (RTL) to @var{file}.  If the
+optional argument is omitted (or if @var{file} is @code{.}), the name
+of the dump file will be determined by appending @code{.gkd} to the
+compilation output file name.

@item -fcompare-debug@r{[}=@var{opts}@r{]}
@opindex fcompare-debug
@@ -5446,6 +5453,23 @@ It is enabled by default when compiling with optimization (@option{-Os},
@option{-O}, @option{-O2}, @dots{}), debugging information
(@option{-g}) and the debug info format supports it.

+@item -fvar-tracking-assignments
+@opindex fvar-tracking-assignments
+@opindex fno-var-tracking-assignments
+Annotate assignments to user variables early in the compilation and
+attempt to carry the annotations over throughout the compilation all the
+way to the end, in an attempt to improve debug information while
+optimizing.  Use of @option{-gdwarf-4} is recommended along with it.
+
+It can be enabled even if var-tracking is disabled, in which case
+annotations will be created and maintained, but discarded at the end.
+
+@item -fvar-tracking-assignments-toggle
+@opindex fvar-tracking-assignments-toggle
+@opindex fno-var-tracking-assignments-toggle
+Toggle @option{-fvar-tracking-assignments}, in the same way that
+@option{-gtoggle} toggles @option{-g}.
+
@item -print-file-name=@var{library}
@opindex print-file-name
Print the full absolute name of the library file @var{library} that
@@ -8094,6 +8118,12 @@ with more basic blocks than this parameter won't have loop invariant
motion optimization performed on them.  The default value of the
parameter is 1000 for -O1 and 10000 for -O2 and above.

+@item min-nondebug-insn-uid
+Use uids starting at this parameter for nondebug insns.  The range below
+the parameter is reserved exclusively for debug insns created by
+@option{-fvar-tracking-assignments}, but debug insns may get
+(non-overlapping) uids above it if the reserved range is exhausted.
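As a purely illustrative invocation combining the options and the
parameter documented above:

@smallexample
gcc -O2 -gdwarf-4 -fvar-tracking-assignments \
    --param min-nondebug-insn-uid=10000 -c foo.c
@end smallexample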
+ @end table @end table diff --git a/gcc/dse.c b/gcc/dse.c index 2338d3240ba..3e6b57d6ca1 100644 --- a/gcc/dse.c +++ b/gcc/dse.c @@ -2387,6 +2387,11 @@ scan_insn (bb_info_t bb_info, rtx insn) insn_info->insn = insn; bb_info->last_insn = insn_info; + if (DEBUG_INSN_P (insn)) + { + insn_info->cannot_delete = true; + return; + } /* Cselib clears the table for this case, so we have to essentially do the same. */ diff --git a/gcc/dwarf2out.c b/gcc/dwarf2out.c index 0dfe4f67066..fd386dd99d1 100644 --- a/gcc/dwarf2out.c +++ b/gcc/dwarf2out.c @@ -407,6 +407,10 @@ struct GTY(()) indirect_string_node { static GTY ((param_is (struct indirect_string_node))) htab_t debug_str_hash; +/* True if the compilation unit has location entries that reference + debug strings. */ +static GTY(()) bool debug_str_hash_forced = false; + static GTY(()) int dw2_string_counter; static GTY(()) unsigned long dwarf2out_cfi_label_num; @@ -4142,15 +4146,6 @@ enum dw_val_class dw_val_class_file }; -/* Describe a double word constant value. */ -/* ??? Every instance of long_long in the code really means CONST_DOUBLE. */ - -typedef struct GTY(()) dw_long_long_struct { - unsigned long hi; - unsigned long low; -} -dw_long_long_const; - /* Describe a floating point constant value, or a vector constant value. */ typedef struct GTY(()) dw_vec_struct { @@ -4173,7 +4168,7 @@ typedef struct GTY(()) dw_val_struct { dw_loc_descr_ref GTY ((tag ("dw_val_class_loc"))) val_loc; HOST_WIDE_INT GTY ((default)) val_int; unsigned HOST_WIDE_INT GTY ((tag ("dw_val_class_unsigned_const"))) val_unsigned; - dw_long_long_const GTY ((tag ("dw_val_class_long_long"))) val_long_long; + rtx GTY ((tag ("dw_val_class_long_long"))) val_long_long; dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec; struct dw_val_die_union { @@ -4528,6 +4523,10 @@ dwarf_stack_op_name (unsigned int op) return "DW_OP_call4"; case DW_OP_call_ref: return "DW_OP_call_ref"; + case DW_OP_implicit_value: + return "DW_OP_implicit_value"; + case DW_OP_stack_value: + return "DW_OP_stack_value"; case DW_OP_form_tls_address: return "DW_OP_form_tls_address"; case DW_OP_call_frame_cfa: @@ -4738,6 +4737,10 @@ size_of_loc_descr (dw_loc_descr_ref loc) case DW_OP_call_ref: size += DWARF2_ADDR_SIZE; break; + case DW_OP_implicit_value: + size += size_of_uleb128 (loc->dw_loc_oprnd1.v.val_unsigned) + + loc->dw_loc_oprnd1.v.val_unsigned; + break; default: break; } @@ -4773,6 +4776,10 @@ size_of_locs (dw_loc_descr_ref loc) return size; } +#ifdef DWARF2_DEBUGGING_INFO +static HOST_WIDE_INT extract_int (const unsigned char *, unsigned); +#endif + /* Output location description stack opcode's operands (if any). 
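   As a concrete (illustrative) case, a 4-byte implicit SImode constant
   42 is emitted on a little-endian target as

     DW_OP_implicit_value  uleb128 4  0x2a 0x00 0x00 0x00

   i.e. the opcode, the byte length taken from val1->v.val_unsigned,
   then the raw data bytes written by the new cases below.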
*/ static void @@ -4794,7 +4801,7 @@ output_loc_operands (dw_loc_descr_ref loc) break; case DW_OP_const8u: case DW_OP_const8s: - gcc_assert (HOST_BITS_PER_LONG >= 64); + gcc_assert (HOST_BITS_PER_WIDE_INT >= 64); dw2_asm_output_data (8, val1->v.val_int, NULL); break; case DW_OP_skip: @@ -4808,6 +4815,60 @@ output_loc_operands (dw_loc_descr_ref loc) dw2_asm_output_data (2, offset, NULL); } break; + case DW_OP_implicit_value: + dw2_asm_output_data_uleb128 (val1->v.val_unsigned, NULL); + switch (val2->val_class) + { + case dw_val_class_const: + dw2_asm_output_data (val1->v.val_unsigned, val2->v.val_int, NULL); + break; + case dw_val_class_vec: + { + unsigned int elt_size = val2->v.val_vec.elt_size; + unsigned int len = val2->v.val_vec.length; + unsigned int i; + unsigned char *p; + + if (elt_size > sizeof (HOST_WIDE_INT)) + { + elt_size /= 2; + len *= 2; + } + for (i = 0, p = val2->v.val_vec.array; + i < len; + i++, p += elt_size) + dw2_asm_output_data (elt_size, extract_int (p, elt_size), + "fp or vector constant word %u", i); + } + break; + case dw_val_class_long_long: + { + unsigned HOST_WIDE_INT first, second; + + if (WORDS_BIG_ENDIAN) + { + first = CONST_DOUBLE_HIGH (val2->v.val_long_long); + second = CONST_DOUBLE_LOW (val2->v.val_long_long); + } + else + { + first = CONST_DOUBLE_LOW (val2->v.val_long_long); + second = CONST_DOUBLE_HIGH (val2->v.val_long_long); + } + dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR, + first, "long long constant"); + dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR, + second, NULL); + } + break; + case dw_val_class_addr: + gcc_assert (val1->v.val_unsigned == DWARF2_ADDR_SIZE); + dw2_asm_output_addr_rtx (DWARF2_ADDR_SIZE, val2->v.val_addr, NULL); + break; + default: + gcc_unreachable (); + } + break; #else case DW_OP_const2u: case DW_OP_const2s: @@ -4817,6 +4878,7 @@ output_loc_operands (dw_loc_descr_ref loc) case DW_OP_const8s: case DW_OP_skip: case DW_OP_bra: + case DW_OP_implicit_value: /* We currently don't make any attempt to make sure these are aligned properly like we do for the main unwind info, so don't support emitting things larger than a byte if we're @@ -4948,6 +5010,7 @@ output_loc_operands_raw (dw_loc_descr_ref loc) switch (loc->dw_loc_opc) { case DW_OP_addr: + case DW_OP_implicit_value: /* We cannot output addresses in .cfi_escape, only bytes. 
*/ gcc_unreachable (); @@ -4974,7 +5037,7 @@ output_loc_operands_raw (dw_loc_descr_ref loc) case DW_OP_const8u: case DW_OP_const8s: - gcc_assert (HOST_BITS_PER_LONG >= 64); + gcc_assert (HOST_BITS_PER_WIDE_INT >= 64); fputc (',', asm_out_file); dw2_asm_output_data_raw (8, val1->v.val_int); break; @@ -5719,8 +5782,7 @@ static void add_AT_int (dw_die_ref, enum dwarf_attribute, HOST_WIDE_INT); static inline HOST_WIDE_INT AT_int (dw_attr_ref); static void add_AT_unsigned (dw_die_ref, enum dwarf_attribute, unsigned HOST_WIDE_INT); static inline unsigned HOST_WIDE_INT AT_unsigned (dw_attr_ref); -static void add_AT_long_long (dw_die_ref, enum dwarf_attribute, unsigned long, - unsigned long); +static void add_AT_long_long (dw_die_ref, enum dwarf_attribute, rtx); static inline void add_AT_vec (dw_die_ref, enum dwarf_attribute, unsigned int, unsigned int, unsigned char *); static hashval_t debug_str_do_hash (const void *); @@ -5852,7 +5914,8 @@ static dw_loc_descr_ref mem_loc_descriptor (rtx, enum machine_mode mode, enum var_init_status); static dw_loc_descr_ref concat_loc_descriptor (rtx, rtx, enum var_init_status); -static dw_loc_descr_ref loc_descriptor (rtx, enum var_init_status); +static dw_loc_descr_ref loc_descriptor (rtx, enum machine_mode mode, + enum var_init_status); static dw_loc_descr_ref loc_descriptor_from_tree_1 (tree, int); static dw_loc_descr_ref loc_descriptor_from_tree (tree); static HOST_WIDE_INT ceiling (HOST_WIDE_INT, unsigned int); @@ -5866,7 +5929,6 @@ static void add_AT_location_description (dw_die_ref, enum dwarf_attribute, static void add_data_member_location_attribute (dw_die_ref, tree); static void add_const_value_attribute (dw_die_ref, rtx); static void insert_int (HOST_WIDE_INT, unsigned, unsigned char *); -static HOST_WIDE_INT extract_int (const unsigned char *, unsigned); static void insert_float (const_rtx, unsigned char *); static rtx rtl_for_decl_location (tree); static void add_location_or_const_value_attribute (dw_die_ref, tree, @@ -6652,14 +6714,13 @@ AT_unsigned (dw_attr_ref a) static inline void add_AT_long_long (dw_die_ref die, enum dwarf_attribute attr_kind, - long unsigned int val_hi, long unsigned int val_low) + rtx val_const_double) { dw_attr_node attr; attr.dw_attr = attr_kind; attr.dw_attr_val.val_class = dw_val_class_long_long; - attr.dw_attr_val.v.val_long_long.hi = val_hi; - attr.dw_attr_val.v.val_long_long.low = val_low; + attr.dw_attr_val.v.val_long_long = val_const_double; add_dwarf_attr (die, &attr); } @@ -6694,6 +6755,8 @@ debug_str_eq (const void *x1, const void *x2) (const char *)x2) == 0; } +/* Add STR to the indirect string hash table. */ + static struct indirect_string_node * find_AT_string (const char *str) { @@ -6736,6 +6799,37 @@ add_AT_string (dw_die_ref die, enum dwarf_attribute attr_kind, const char *str) add_dwarf_attr (die, &attr); } +/* Create a label for an indirect string node, ensuring it is going to + be output, unless its reference count goes down to zero. */ + +static inline void +gen_label_for_indirect_string (struct indirect_string_node *node) +{ + char label[32]; + + if (node->label) + return; + + ASM_GENERATE_INTERNAL_LABEL (label, "LASF", dw2_string_counter); + ++dw2_string_counter; + node->label = xstrdup (label); +} + +/* Create a SYMBOL_REF rtx whose value is the initial address of a + debug string STR. 
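   Its one use in this patch is in mem_loc_descriptor, further down,
   where a CONST_STRING in a debug location becomes an address into
   .debug_str:

     case CONST_STRING:
       rtl = get_debug_string_label (XSTR (rtl, 0));
       goto symref;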
*/ + +static inline rtx +get_debug_string_label (const char *str) +{ + struct indirect_string_node *node = find_AT_string (str); + + debug_str_hash_forced = true; + + gen_label_for_indirect_string (node); + + return gen_rtx_SYMBOL_REF (Pmode, node->label); +} + static inline const char * AT_string (dw_attr_ref a) { @@ -6751,7 +6845,6 @@ AT_string_form (dw_attr_ref a) { struct indirect_string_node *node; unsigned int len; - char label[32]; gcc_assert (a && AT_class (a) == dw_val_class_str); @@ -6774,9 +6867,7 @@ AT_string_form (dw_attr_ref a) && (len - DWARF_OFFSET_SIZE) * node->refcount <= len)) return node->form = DW_FORM_string; - ASM_GENERATE_INTERNAL_LABEL (label, "LASF", dw2_string_counter); - ++dw2_string_counter; - node->label = xstrdup (label); + gen_label_for_indirect_string (node); return node->form = DW_FORM_strp; } @@ -7489,9 +7580,10 @@ print_die (dw_die_ref die, FILE *outfile) fprintf (outfile, HOST_WIDE_INT_PRINT_UNSIGNED, AT_unsigned (a)); break; case dw_val_class_long_long: - fprintf (outfile, "constant (%lu,%lu)", - a->dw_attr_val.v.val_long_long.hi, - a->dw_attr_val.v.val_long_long.low); + fprintf (outfile, "constant (" HOST_WIDE_INT_PRINT_UNSIGNED + "," HOST_WIDE_INT_PRINT_UNSIGNED ")", + CONST_DOUBLE_HIGH (a->dw_attr_val.v.val_long_long), + CONST_DOUBLE_LOW (a->dw_attr_val.v.val_long_long)); break; case dw_val_class_vec: fprintf (outfile, "floating-point or vector constant"); @@ -7648,7 +7740,8 @@ attr_checksum (dw_attr_ref at, struct md5_ctx *ctx, int *mark) CHECKSUM (at->dw_attr_val.v.val_unsigned); break; case dw_val_class_long_long: - CHECKSUM (at->dw_attr_val.v.val_long_long); + CHECKSUM (CONST_DOUBLE_HIGH (at->dw_attr_val.v.val_long_long)); + CHECKSUM (CONST_DOUBLE_LOW (at->dw_attr_val.v.val_long_long)); break; case dw_val_class_vec: CHECKSUM (at->dw_attr_val.v.val_vec); @@ -7748,8 +7841,10 @@ same_dw_val_p (const dw_val_node *v1, const dw_val_node *v2, int *mark) case dw_val_class_unsigned_const: return v1->v.val_unsigned == v2->v.val_unsigned; case dw_val_class_long_long: - return v1->v.val_long_long.hi == v2->v.val_long_long.hi - && v1->v.val_long_long.low == v2->v.val_long_long.low; + return CONST_DOUBLE_HIGH (v1->v.val_long_long) + == CONST_DOUBLE_HIGH (v2->v.val_long_long) + && CONST_DOUBLE_LOW (v1->v.val_long_long) + == CONST_DOUBLE_LOW (v2->v.val_long_long); case dw_val_class_vec: if (v1->v.val_vec.length != v2->v.val_vec.length || v1->v.val_vec.elt_size != v2->v.val_vec.elt_size) @@ -8358,7 +8453,7 @@ size_of_die (dw_die_ref die) size += constant_size (AT_unsigned (a)); break; case dw_val_class_long_long: - size += 1 + 2*HOST_BITS_PER_LONG/HOST_BITS_PER_CHAR; /* block */ + size += 1 + 2*HOST_BITS_PER_WIDE_INT/HOST_BITS_PER_CHAR; /* block */ break; case dw_val_class_vec: size += constant_size (a->dw_attr_val.v.val_vec.length @@ -8840,23 +8935,24 @@ output_die (dw_die_ref die) unsigned HOST_WIDE_INT first, second; dw2_asm_output_data (1, - 2 * HOST_BITS_PER_LONG / HOST_BITS_PER_CHAR, + 2 * HOST_BITS_PER_WIDE_INT + / HOST_BITS_PER_CHAR, "%s", name); if (WORDS_BIG_ENDIAN) { - first = a->dw_attr_val.v.val_long_long.hi; - second = a->dw_attr_val.v.val_long_long.low; + first = CONST_DOUBLE_HIGH (a->dw_attr_val.v.val_long_long); + second = CONST_DOUBLE_LOW (a->dw_attr_val.v.val_long_long); } else { - first = a->dw_attr_val.v.val_long_long.low; - second = a->dw_attr_val.v.val_long_long.hi; + first = CONST_DOUBLE_LOW (a->dw_attr_val.v.val_long_long); + second = CONST_DOUBLE_HIGH (a->dw_attr_val.v.val_long_long); } - dw2_asm_output_data (HOST_BITS_PER_LONG / 
HOST_BITS_PER_CHAR, + dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR, first, "long long constant"); - dw2_asm_output_data (HOST_BITS_PER_LONG / HOST_BITS_PER_CHAR, + dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR, second, NULL); } break; @@ -10922,6 +11018,7 @@ mem_loc_descriptor (rtx rtl, enum machine_mode mode, { dw_loc_descr_ref mem_loc_result = NULL; enum dwarf_location_atom op; + dw_loc_descr_ref op0, op1; /* Note that for a dynamically sized array, the location we will generate a description of here will be the lowest numbered location which is @@ -10947,6 +11044,8 @@ mem_loc_descriptor (rtx rtl, enum machine_mode mode, legitimate to make the Dwarf info refer to the whole register which contains the given subreg. */ rtl = XEXP (rtl, 0); + if (GET_MODE_SIZE (GET_MODE (rtl)) > DWARF2_ADDR_SIZE) + break; /* ... fall through ... */ @@ -10978,6 +11077,29 @@ mem_loc_descriptor (rtx rtl, enum machine_mode mode, } break; + case SIGN_EXTEND: + case ZERO_EXTEND: + op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, + VAR_INIT_STATUS_INITIALIZED); + if (op0 == 0) + break; + else + { + int shift = DWARF2_ADDR_SIZE + - GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))); + shift *= BITS_PER_UNIT; + if (GET_CODE (rtl) == SIGN_EXTEND) + op = DW_OP_shra; + else + op = DW_OP_shr; + mem_loc_result = op0; + add_loc_descr (&mem_loc_result, int_loc_descriptor (shift)); + add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_shl, 0, 0)); + add_loc_descr (&mem_loc_result, int_loc_descriptor (shift)); + add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0)); + } + break; + case MEM: mem_loc_result = mem_loc_descriptor (XEXP (rtl, 0), GET_MODE (rtl), VAR_INIT_STATUS_INITIALIZED); @@ -11022,6 +11144,27 @@ mem_loc_descriptor (rtx rtl, enum machine_mode mode, return 0; } + if (GET_CODE (rtl) == SYMBOL_REF + && SYMBOL_REF_TLS_MODEL (rtl) != TLS_MODEL_NONE) + { + dw_loc_descr_ref temp; + + /* If this is not defined, we have no way to emit the data. */ + if (!targetm.have_tls || !targetm.asm_out.output_dwarf_dtprel) + break; + + temp = new_loc_descr (DW_OP_addr, 0, 0); + temp->dw_loc_oprnd1.val_class = dw_val_class_addr; + temp->dw_loc_oprnd1.v.val_addr = rtl; + temp->dtprel = true; + + mem_loc_result = new_loc_descr (DW_OP_GNU_push_tls_address, 0, 0); + add_loc_descr (&mem_loc_result, temp); + + break; + } + + symref: mem_loc_result = new_loc_descr (DW_OP_addr, 0, 0); mem_loc_result->dw_loc_oprnd1.val_class = dw_val_class_addr; mem_loc_result->dw_loc_oprnd1.v.val_addr = rtl; @@ -11076,10 +11219,22 @@ mem_loc_descriptor (rtx rtl, enum machine_mode mode, /* If a pseudo-reg is optimized away, it is possible for it to be replaced with a MEM containing a multiply or shift. 
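   As an illustrative sketch, (mult:SI (reg:SI 1) (const_int 4)) is
   translated by the do_binop path below into

     <location of r1>  DW_OP_lit4  DW_OP_mul

   i.e. both operand descriptors concatenated, then the operator.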
*/ + case MINUS: + op = DW_OP_minus; + goto do_binop; + case MULT: op = DW_OP_mul; goto do_binop; + case DIV: + op = DW_OP_div; + goto do_binop; + + case MOD: + op = DW_OP_mod; + goto do_binop; + case ASHIFT: op = DW_OP_shl; goto do_binop; @@ -11092,21 +11247,54 @@ mem_loc_descriptor (rtx rtl, enum machine_mode mode, op = DW_OP_shr; goto do_binop; + case AND: + op = DW_OP_and; + goto do_binop; + + case IOR: + op = DW_OP_or; + goto do_binop; + + case XOR: + op = DW_OP_xor; + goto do_binop; + do_binop: - { - dw_loc_descr_ref op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, - VAR_INIT_STATUS_INITIALIZED); - dw_loc_descr_ref op1 = mem_loc_descriptor (XEXP (rtl, 1), mode, - VAR_INIT_STATUS_INITIALIZED); + op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, + VAR_INIT_STATUS_INITIALIZED); + op1 = mem_loc_descriptor (XEXP (rtl, 1), mode, + VAR_INIT_STATUS_INITIALIZED); - if (op0 == 0 || op1 == 0) - break; + if (op0 == 0 || op1 == 0) + break; + + mem_loc_result = op0; + add_loc_descr (&mem_loc_result, op1); + add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0)); + break; - mem_loc_result = op0; - add_loc_descr (&mem_loc_result, op1); - add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0)); + case NOT: + op = DW_OP_not; + goto do_unop; + + case ABS: + op = DW_OP_abs; + goto do_unop; + + case NEG: + op = DW_OP_neg; + goto do_unop; + + do_unop: + op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, + VAR_INIT_STATUS_INITIALIZED); + + if (op0 == 0) break; - } + + mem_loc_result = op0; + add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0)); + break; case CONST_INT: mem_loc_result = int_loc_descriptor (INTVAL (rtl)); @@ -11117,14 +11305,287 @@ mem_loc_descriptor (rtx rtl, enum machine_mode mode, VAR_INIT_STATUS_INITIALIZED); break; + case EQ: + op = DW_OP_eq; + goto do_scompare; + + case GE: + op = DW_OP_ge; + goto do_scompare; + + case GT: + op = DW_OP_gt; + goto do_scompare; + + case LE: + op = DW_OP_le; + goto do_scompare; + + case LT: + op = DW_OP_lt; + goto do_scompare; + + case NE: + op = DW_OP_ne; + goto do_scompare; + + do_scompare: + if (GET_MODE_CLASS (GET_MODE (XEXP (rtl, 0))) != MODE_INT + || GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) > DWARF2_ADDR_SIZE + || GET_MODE (XEXP (rtl, 0)) != GET_MODE (XEXP (rtl, 1))) + break; + + op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, + VAR_INIT_STATUS_INITIALIZED); + op1 = mem_loc_descriptor (XEXP (rtl, 1), mode, + VAR_INIT_STATUS_INITIALIZED); + + if (op0 == 0 || op1 == 0) + break; + + if (GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) < DWARF2_ADDR_SIZE) + { + int shift = DWARF2_ADDR_SIZE + - GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))); + shift *= BITS_PER_UNIT; + add_loc_descr (&op0, int_loc_descriptor (shift)); + add_loc_descr (&op0, new_loc_descr (DW_OP_shl, 0, 0)); + if (CONST_INT_P (XEXP (rtl, 1))) + op1 = int_loc_descriptor (INTVAL (XEXP (rtl, 1)) << shift); + else + { + add_loc_descr (&op1, int_loc_descriptor (shift)); + add_loc_descr (&op1, new_loc_descr (DW_OP_shl, 0, 0)); + } + } + + do_compare: + mem_loc_result = op0; + add_loc_descr (&mem_loc_result, op1); + add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0)); + if (STORE_FLAG_VALUE != 1) + { + add_loc_descr (&mem_loc_result, + int_loc_descriptor (STORE_FLAG_VALUE)); + add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_mul, 0, 0)); + } + break; + + case GEU: + op = DW_OP_ge; + goto do_ucompare; + + case GTU: + op = DW_OP_gt; + goto do_ucompare; + + case LEU: + op = DW_OP_le; + goto do_ucompare; + + case LTU: + op = DW_OP_lt; + goto do_ucompare; + + do_ucompare: + if (GET_MODE_CLASS (GET_MODE 
(XEXP (rtl, 0))) != MODE_INT + || GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) > DWARF2_ADDR_SIZE + || GET_MODE (XEXP (rtl, 0)) != GET_MODE (XEXP (rtl, 1))) + break; + + op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, + VAR_INIT_STATUS_INITIALIZED); + op1 = mem_loc_descriptor (XEXP (rtl, 1), mode, + VAR_INIT_STATUS_INITIALIZED); + + if (op0 == 0 || op1 == 0) + break; + + if (GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) < DWARF2_ADDR_SIZE) + { + HOST_WIDE_INT mask = GET_MODE_MASK (GET_MODE (XEXP (rtl, 0))); + add_loc_descr (&op0, int_loc_descriptor (mask)); + add_loc_descr (&op0, new_loc_descr (DW_OP_and, 0, 0)); + if (CONST_INT_P (XEXP (rtl, 1))) + op1 = int_loc_descriptor (INTVAL (XEXP (rtl, 1)) & mask); + else + { + add_loc_descr (&op1, int_loc_descriptor (mask)); + add_loc_descr (&op1, new_loc_descr (DW_OP_and, 0, 0)); + } + } + else + { + HOST_WIDE_INT bias = 1; + bias <<= (DWARF2_ADDR_SIZE * BITS_PER_UNIT - 1); + add_loc_descr (&op0, new_loc_descr (DW_OP_plus_uconst, bias, 0)); + if (CONST_INT_P (XEXP (rtl, 1))) + op1 = int_loc_descriptor ((unsigned HOST_WIDE_INT) bias + + INTVAL (XEXP (rtl, 1))); + else + add_loc_descr (&op1, new_loc_descr (DW_OP_plus_uconst, bias, 0)); + } + goto do_compare; + + case SMIN: + case SMAX: + case UMIN: + case UMAX: + if (GET_MODE_CLASS (GET_MODE (XEXP (rtl, 0))) != MODE_INT + || GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) > DWARF2_ADDR_SIZE + || GET_MODE (XEXP (rtl, 0)) != GET_MODE (XEXP (rtl, 1))) + break; + + op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, + VAR_INIT_STATUS_INITIALIZED); + op1 = mem_loc_descriptor (XEXP (rtl, 1), mode, + VAR_INIT_STATUS_INITIALIZED); + + if (op0 == 0 || op1 == 0) + break; + + add_loc_descr (&op0, new_loc_descr (DW_OP_dup, 0, 0)); + add_loc_descr (&op1, new_loc_descr (DW_OP_swap, 0, 0)); + add_loc_descr (&op1, new_loc_descr (DW_OP_over, 0, 0)); + if (GET_CODE (rtl) == UMIN || GET_CODE (rtl) == UMAX) + { + if (GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) < DWARF2_ADDR_SIZE) + { + HOST_WIDE_INT mask = GET_MODE_MASK (GET_MODE (XEXP (rtl, 0))); + add_loc_descr (&op0, int_loc_descriptor (mask)); + add_loc_descr (&op0, new_loc_descr (DW_OP_and, 0, 0)); + add_loc_descr (&op1, int_loc_descriptor (mask)); + add_loc_descr (&op1, new_loc_descr (DW_OP_and, 0, 0)); + } + else + { + HOST_WIDE_INT bias = 1; + bias <<= (DWARF2_ADDR_SIZE * BITS_PER_UNIT - 1); + add_loc_descr (&op0, new_loc_descr (DW_OP_plus_uconst, bias, 0)); + add_loc_descr (&op1, new_loc_descr (DW_OP_plus_uconst, bias, 0)); + } + } + else if (GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))) < DWARF2_ADDR_SIZE) + { + int shift = DWARF2_ADDR_SIZE + - GET_MODE_SIZE (GET_MODE (XEXP (rtl, 0))); + shift *= BITS_PER_UNIT; + add_loc_descr (&op0, int_loc_descriptor (shift)); + add_loc_descr (&op0, new_loc_descr (DW_OP_shl, 0, 0)); + add_loc_descr (&op1, int_loc_descriptor (shift)); + add_loc_descr (&op1, new_loc_descr (DW_OP_shl, 0, 0)); + } + + if (GET_CODE (rtl) == SMIN || GET_CODE (rtl) == UMIN) + op = DW_OP_lt; + else + op = DW_OP_gt; + mem_loc_result = op0; + add_loc_descr (&mem_loc_result, op1); + add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0)); + { + dw_loc_descr_ref bra_node, drop_node; + + bra_node = new_loc_descr (DW_OP_bra, 0, 0); + add_loc_descr (&mem_loc_result, bra_node); + add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_swap, 0, 0)); + drop_node = new_loc_descr (DW_OP_drop, 0, 0); + add_loc_descr (&mem_loc_result, drop_node); + bra_node->dw_loc_oprnd1.val_class = dw_val_class_loc; + bra_node->dw_loc_oprnd1.v.val_loc = drop_node; + } + break; + + case ZERO_EXTRACT: + case 
SIGN_EXTRACT: + if (CONST_INT_P (XEXP (rtl, 1)) + && CONST_INT_P (XEXP (rtl, 2)) + && ((unsigned) INTVAL (XEXP (rtl, 1)) + + (unsigned) INTVAL (XEXP (rtl, 2)) + <= GET_MODE_BITSIZE (GET_MODE (rtl))) + && GET_MODE_BITSIZE (GET_MODE (rtl)) <= DWARF2_ADDR_SIZE + && GET_MODE_BITSIZE (GET_MODE (XEXP (rtl, 0))) <= DWARF2_ADDR_SIZE) + { + int shift, size; + op0 = mem_loc_descriptor (XEXP (rtl, 0), mode, + VAR_INIT_STATUS_INITIALIZED); + if (op0 == 0) + break; + if (GET_CODE (rtl) == SIGN_EXTRACT) + op = DW_OP_shra; + else + op = DW_OP_shr; + mem_loc_result = op0; + size = INTVAL (XEXP (rtl, 1)); + shift = INTVAL (XEXP (rtl, 2)); + if (BITS_BIG_ENDIAN) + shift = GET_MODE_BITSIZE (GET_MODE (XEXP (rtl, 0))) + - shift - size; + add_loc_descr (&mem_loc_result, + int_loc_descriptor (DWARF2_ADDR_SIZE - shift - size)); + add_loc_descr (&mem_loc_result, new_loc_descr (DW_OP_shl, 0, 0)); + add_loc_descr (&mem_loc_result, + int_loc_descriptor (DWARF2_ADDR_SIZE - size)); + add_loc_descr (&mem_loc_result, new_loc_descr (op, 0, 0)); + } + break; + + case COMPARE: + case IF_THEN_ELSE: + case ROTATE: + case ROTATERT: + case TRUNCATE: + /* In theory, we could implement the above. */ + /* DWARF cannot represent the unsigned compare operations + natively. */ + case SS_MULT: + case US_MULT: + case SS_DIV: + case US_DIV: + case UDIV: + case UMOD: + case UNORDERED: + case ORDERED: + case UNEQ: + case UNGE: + case UNLE: + case UNLT: + case LTGT: + case FLOAT_EXTEND: + case FLOAT_TRUNCATE: + case FLOAT: + case UNSIGNED_FLOAT: + case FIX: + case UNSIGNED_FIX: + case FRACT_CONVERT: + case UNSIGNED_FRACT_CONVERT: + case SAT_FRACT: + case UNSIGNED_SAT_FRACT: + case SQRT: + case BSWAP: + case FFS: + case CLZ: + case CTZ: + case POPCOUNT: + case PARITY: + case ASM_OPERANDS: case UNSPEC: /* If delegitimize_address couldn't do anything with the UNSPEC, we can't express it in the debug info. This can happen e.g. with some TLS UNSPECs. */ break; + case CONST_STRING: + rtl = get_debug_string_label (XSTR (rtl, 0)); + goto symref; + default: +#ifdef ENABLE_CHECKING + print_rtl (stderr, rtl); gcc_unreachable (); +#else + break; +#endif } if (mem_loc_result && initialized == VAR_INIT_STATUS_UNINITIALIZED) @@ -11140,8 +11601,10 @@ static dw_loc_descr_ref concat_loc_descriptor (rtx x0, rtx x1, enum var_init_status initialized) { dw_loc_descr_ref cc_loc_result = NULL; - dw_loc_descr_ref x0_ref = loc_descriptor (x0, VAR_INIT_STATUS_INITIALIZED); - dw_loc_descr_ref x1_ref = loc_descriptor (x1, VAR_INIT_STATUS_INITIALIZED); + dw_loc_descr_ref x0_ref + = loc_descriptor (x0, VOIDmode, VAR_INIT_STATUS_INITIALIZED); + dw_loc_descr_ref x1_ref + = loc_descriptor (x1, VOIDmode, VAR_INIT_STATUS_INITIALIZED); if (x0_ref == 0 || x1_ref == 0) return 0; @@ -11173,7 +11636,7 @@ concatn_loc_descriptor (rtx concatn, enum var_init_status initialized) dw_loc_descr_ref ref; rtx x = XVECEXP (concatn, 0, i); - ref = loc_descriptor (x, VAR_INIT_STATUS_INITIALIZED); + ref = loc_descriptor (x, VOIDmode, VAR_INIT_STATUS_INITIALIZED); if (ref == NULL) return NULL; @@ -11193,16 +11656,23 @@ concatn_loc_descriptor (rtx concatn, enum var_init_status initialized) memory location we provide a Dwarf postfix expression describing how to generate the (dynamic) address of the object onto the address stack. + MODE is mode of the decl if this loc_descriptor is going to be used in + .debug_loc section where DW_OP_stack_value and DW_OP_implicit_value are + allowed, VOIDmode otherwise. + If we don't know how to describe it, return 0. 
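   Two illustrative outcomes: (reg:SI 0) still yields the single opcode
   DW_OP_reg0, while with a nonzero MODE and DWARF 4 a (const_int 42)
   can now be described as DW_OP_implicit_value, or as a literal
   followed by DW_OP_stack_value, with no run-time location at all, per
   the new cases below.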
*/ static dw_loc_descr_ref -loc_descriptor (rtx rtl, enum var_init_status initialized) +loc_descriptor (rtx rtl, enum machine_mode mode, + enum var_init_status initialized) { dw_loc_descr_ref loc_result = NULL; switch (GET_CODE (rtl)) { case SUBREG: + case SIGN_EXTEND: + case ZERO_EXTEND: /* The case of a subreg may arise when we have a local (register) variable or a formal (register) parameter which doesn't quite fill up an entire register. For now, just assume that it is @@ -11236,7 +11706,8 @@ loc_descriptor (rtx rtl, enum var_init_status initialized) /* Single part. */ if (GET_CODE (XEXP (rtl, 1)) != PARALLEL) { - loc_result = loc_descriptor (XEXP (XEXP (rtl, 1), 0), initialized); + loc_result = loc_descriptor (XEXP (XEXP (rtl, 1), 0), mode, + initialized); break; } @@ -11252,7 +11723,7 @@ loc_descriptor (rtx rtl, enum var_init_status initialized) /* Create the first one, so we have something to add to. */ loc_result = loc_descriptor (XEXP (RTVEC_ELT (par_elems, 0), 0), - initialized); + VOIDmode, initialized); if (loc_result == NULL) return NULL; mode = GET_MODE (XEXP (RTVEC_ELT (par_elems, 0), 0)); @@ -11262,7 +11733,7 @@ loc_descriptor (rtx rtl, enum var_init_status initialized) dw_loc_descr_ref temp; temp = loc_descriptor (XEXP (RTVEC_ELT (par_elems, i), 0), - initialized); + VOIDmode, initialized); if (temp == NULL) return NULL; add_loc_descr (&loc_result, temp); @@ -11272,8 +11743,206 @@ loc_descriptor (rtx rtl, enum var_init_status initialized) } break; + case CONST_INT: + if (mode != VOIDmode && mode != BLKmode && dwarf_version >= 4) + { + HOST_WIDE_INT i = INTVAL (rtl); + int litsize; + if (i >= 0) + { + if (i <= 31) + litsize = 1; + else if (i <= 0xff) + litsize = 2; + else if (i <= 0xffff) + litsize = 3; + else if (HOST_BITS_PER_WIDE_INT == 32 + || i <= 0xffffffff) + litsize = 5; + else + litsize = 1 + size_of_uleb128 ((unsigned HOST_WIDE_INT) i); + } + else + { + if (i >= -0x80) + litsize = 2; + else if (i >= -0x8000) + litsize = 3; + else if (HOST_BITS_PER_WIDE_INT == 32 + || i >= -0x80000000) + litsize = 5; + else + litsize = 1 + size_of_sleb128 (i); + } + /* Determine if DW_OP_stack_value or DW_OP_implicit_value + is more compact. For DW_OP_stack_value we need: + litsize + 1 (DW_OP_stack_value) + 1 (DW_OP_bit_size) + + 1 (mode size) + and for DW_OP_implicit_value: + 1 (DW_OP_implicit_value) + 1 (length) + mode_size. */ + if (DWARF2_ADDR_SIZE >= GET_MODE_SIZE (mode) + && litsize + 1 + 1 + 1 < 1 + 1 + GET_MODE_SIZE (mode)) + { + loc_result = int_loc_descriptor (i); + add_loc_descr (&loc_result, + new_loc_descr (DW_OP_stack_value, 0, 0)); + add_loc_descr_op_piece (&loc_result, GET_MODE_SIZE (mode)); + return loc_result; + } + + loc_result = new_loc_descr (DW_OP_implicit_value, + GET_MODE_SIZE (mode), 0); + loc_result->dw_loc_oprnd2.val_class = dw_val_class_const; + loc_result->dw_loc_oprnd2.v.val_int = i; + } + break; + + case CONST_DOUBLE: + if (mode != VOIDmode && dwarf_version >= 4) + { + /* Note that a CONST_DOUBLE rtx could represent either an integer + or a floating-point constant. A CONST_DOUBLE is used whenever + the constant requires more than one word in order to be + adequately represented. We output CONST_DOUBLEs as blocks. 
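	   E.g. (illustrative) a DFmode constant yields
	   DW_OP_implicit_value with an 8-byte block filled in by
	   insert_float, while a wide integer CONST_DOUBLE simply keeps
	   the rtx in dw_loc_oprnd2 as a dw_val_class_long_long.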
*/ + if (GET_MODE (rtl) != VOIDmode) + mode = GET_MODE (rtl); + + loc_result = new_loc_descr (DW_OP_implicit_value, + GET_MODE_SIZE (mode), 0); + if (SCALAR_FLOAT_MODE_P (mode)) + { + unsigned int length = GET_MODE_SIZE (mode); + unsigned char *array = GGC_NEWVEC (unsigned char, length); + + insert_float (rtl, array); + loc_result->dw_loc_oprnd2.val_class = dw_val_class_vec; + loc_result->dw_loc_oprnd2.v.val_vec.length = length / 4; + loc_result->dw_loc_oprnd2.v.val_vec.elt_size = 4; + loc_result->dw_loc_oprnd2.v.val_vec.array = array; + } + else + { + loc_result->dw_loc_oprnd2.val_class = dw_val_class_long_long; + loc_result->dw_loc_oprnd2.v.val_long_long = rtl; + } + } + break; + + case CONST_VECTOR: + if (mode != VOIDmode && dwarf_version >= 4) + { + unsigned int elt_size = GET_MODE_UNIT_SIZE (GET_MODE (rtl)); + unsigned int length = CONST_VECTOR_NUNITS (rtl); + unsigned char *array = GGC_NEWVEC (unsigned char, length * elt_size); + unsigned int i; + unsigned char *p; + + mode = GET_MODE (rtl); + switch (GET_MODE_CLASS (mode)) + { + case MODE_VECTOR_INT: + for (i = 0, p = array; i < length; i++, p += elt_size) + { + rtx elt = CONST_VECTOR_ELT (rtl, i); + HOST_WIDE_INT lo, hi; + + switch (GET_CODE (elt)) + { + case CONST_INT: + lo = INTVAL (elt); + hi = -(lo < 0); + break; + + case CONST_DOUBLE: + lo = CONST_DOUBLE_LOW (elt); + hi = CONST_DOUBLE_HIGH (elt); + break; + + default: + gcc_unreachable (); + } + + if (elt_size <= sizeof (HOST_WIDE_INT)) + insert_int (lo, elt_size, p); + else + { + unsigned char *p0 = p; + unsigned char *p1 = p + sizeof (HOST_WIDE_INT); + + gcc_assert (elt_size == 2 * sizeof (HOST_WIDE_INT)); + if (WORDS_BIG_ENDIAN) + { + p0 = p1; + p1 = p; + } + insert_int (lo, sizeof (HOST_WIDE_INT), p0); + insert_int (hi, sizeof (HOST_WIDE_INT), p1); + } + } + break; + + case MODE_VECTOR_FLOAT: + for (i = 0, p = array; i < length; i++, p += elt_size) + { + rtx elt = CONST_VECTOR_ELT (rtl, i); + insert_float (elt, p); + } + break; + + default: + gcc_unreachable (); + } + + loc_result = new_loc_descr (DW_OP_implicit_value, + length * elt_size, 0); + loc_result->dw_loc_oprnd2.val_class = dw_val_class_vec; + loc_result->dw_loc_oprnd2.v.val_vec.length = length; + loc_result->dw_loc_oprnd2.v.val_vec.elt_size = elt_size; + loc_result->dw_loc_oprnd2.v.val_vec.array = array; + } + break; + + case CONST: + if (mode == VOIDmode + || GET_CODE (XEXP (rtl, 0)) == CONST_INT + || GET_CODE (XEXP (rtl, 0)) == CONST_DOUBLE + || GET_CODE (XEXP (rtl, 0)) == CONST_VECTOR) + { + loc_result = loc_descriptor (XEXP (rtl, 0), mode, initialized); + break; + } + /* FALLTHROUGH */ + case SYMBOL_REF: + if (GET_CODE (rtl) == SYMBOL_REF + && SYMBOL_REF_TLS_MODEL (rtl) != TLS_MODEL_NONE) + break; + case LABEL_REF: + if (mode != VOIDmode && GET_MODE_SIZE (mode) == DWARF2_ADDR_SIZE + && dwarf_version >= 4) + { + loc_result = new_loc_descr (DW_OP_implicit_value, + DWARF2_ADDR_SIZE, 0); + loc_result->dw_loc_oprnd2.val_class = dw_val_class_addr; + loc_result->dw_loc_oprnd2.v.val_addr = rtl; + VEC_safe_push (rtx, gc, used_rtx_array, rtl); + } + break; + default: - gcc_unreachable (); + if (GET_MODE_CLASS (mode) == MODE_INT && GET_MODE (rtl) == mode + && GET_MODE_SIZE (GET_MODE (rtl)) <= DWARF2_ADDR_SIZE + && dwarf_version >= 4) + { + /* Value expression. 
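	   For example (sketch), a variable optimized into
	   (plus:SI (reg:SI 1) (const_int 16)) can be described as

	     <location of r1>  DW_OP_plus_uconst 16  DW_OP_stack_value

	   telling the consumer that the top of its expression stack is
	   the variable's value itself, not an address to load from.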
*/ + loc_result = mem_loc_descriptor (rtl, VOIDmode, initialized); + if (loc_result) + { + add_loc_descr (&loc_result, + new_loc_descr (DW_OP_stack_value, 0, 0)); + add_loc_descr_op_piece (&loc_result, GET_MODE_SIZE (mode)); + } + } + break; } return loc_result; @@ -11416,7 +12085,8 @@ loc_descriptor_from_tree_1 (tree loc, int want_address) /* Certain constructs can only be represented at top-level. */ if (want_address == 2) - return loc_descriptor (rtl, VAR_INIT_STATUS_INITIALIZED); + return loc_descriptor (rtl, VOIDmode, + VAR_INIT_STATUS_INITIALIZED); mode = GET_MODE (rtl); if (MEM_P (rtl)) @@ -12131,13 +12801,7 @@ add_const_value_attribute (dw_die_ref die, rtx rtl) add_AT_vec (die, DW_AT_const_value, length / 4, 4, array); } else - { - /* ??? We really should be using HOST_WIDE_INT throughout. */ - gcc_assert (HOST_BITS_PER_LONG == HOST_BITS_PER_WIDE_INT); - - add_AT_long_long (die, DW_AT_const_value, - CONST_DOUBLE_HIGH (rtl), CONST_DOUBLE_LOW (rtl)); - } + add_AT_long_long (die, DW_AT_const_value, rtl); } break; @@ -12213,9 +12877,18 @@ add_const_value_attribute (dw_die_ref die, rtx rtl) add_AT_string (die, DW_AT_const_value, XSTR (rtl, 0)); break; + case CONST: + if (CONSTANT_P (XEXP (rtl, 0))) + { + add_const_value_attribute (die, XEXP (rtl, 0)); + return; + } + /* FALLTHROUGH */ case SYMBOL_REF: + if (GET_CODE (rtl) == SYMBOL_REF + && SYMBOL_REF_TLS_MODEL (rtl) != TLS_MODEL_NONE) + break; case LABEL_REF: - case CONST: add_AT_addr (die, DW_AT_const_value, rtl); VEC_safe_push (rtx, gc, used_rtx_array, rtl); break; @@ -12761,7 +13434,8 @@ add_location_or_const_value_attribute (dw_die_ref die, tree decl, else initialized = VAR_INIT_STATUS_INITIALIZED; - descr = loc_by_reference (loc_descriptor (varloc, initialized), decl); + descr = loc_by_reference (loc_descriptor (varloc, DECL_MODE (decl), + initialized), decl); list = new_loc_list (descr, node->label, node->next->label, secname, 1); node = node->next; @@ -12773,8 +13447,8 @@ add_location_or_const_value_attribute (dw_die_ref die, tree decl, enum var_init_status initialized = NOTE_VAR_LOCATION_STATUS (node->var_loc_note); varloc = NOTE_VAR_LOCATION (node->var_loc_note); - descr = loc_by_reference (loc_descriptor (varloc, initialized), - decl); + descr = loc_by_reference (loc_descriptor (varloc, DECL_MODE (decl), + initialized), decl); add_loc_descr_to_loc_list (&list, descr, node->label, node->next->label, secname); } @@ -12796,7 +13470,9 @@ add_location_or_const_value_attribute (dw_die_ref die, tree decl, current_function_funcdef_no); endname = ggc_strdup (label_id); } - descr = loc_by_reference (loc_descriptor (varloc, initialized), + descr = loc_by_reference (loc_descriptor (varloc, + DECL_MODE (decl), + initialized), decl); add_loc_descr_to_loc_list (&list, descr, node->label, endname, secname); @@ -12825,7 +13501,17 @@ add_location_or_const_value_attribute (dw_die_ref die, tree decl, enum var_init_status status; node = loc_list->first; status = NOTE_VAR_LOCATION_STATUS (node->var_loc_note); - descr = loc_descriptor (NOTE_VAR_LOCATION (node->var_loc_note), status); + rtl = NOTE_VAR_LOCATION (node->var_loc_note); + if (GET_CODE (rtl) == VAR_LOCATION + && GET_CODE (XEXP (rtl, 1)) != PARALLEL) + rtl = XEXP (XEXP (rtl, 1), 0); + if (CONSTANT_P (rtl) || GET_CODE (rtl) == CONST_STRING) + { + add_const_value_attribute (die, rtl); + return; + } + descr = loc_descriptor (NOTE_VAR_LOCATION (node->var_loc_note), + DECL_MODE (decl), status); if (descr) { descr = loc_by_reference (descr, decl); @@ -16898,10 +17584,11 @@ dwarf2out_set_name 
(tree decl, tree name) static void dwarf2out_var_location (rtx loc_note) { - char loclabel[MAX_ARTIFICIAL_LABEL_BYTES]; + char loclabel[MAX_ARTIFICIAL_LABEL_BYTES + 2]; struct var_loc_node *newloc; rtx next_real; static const char *last_label; + static const char *last_postcall_label; static bool last_in_cold_section_p; tree decl; @@ -16917,27 +17604,38 @@ dwarf2out_var_location (rtx loc_note) newloc = GGC_CNEW (struct var_loc_node); /* If there were no real insns between note we processed last time and this note, use the label we emitted last time. */ - if (last_var_location_insn != NULL_RTX - && last_var_location_insn == next_real - && last_in_cold_section_p == in_cold_section_p) - newloc->label = last_label; - else + if (last_var_location_insn == NULL_RTX + || last_var_location_insn != next_real + || last_in_cold_section_p != in_cold_section_p) { ASM_GENERATE_INTERNAL_LABEL (loclabel, "LVL", loclabel_num); ASM_OUTPUT_DEBUG_LABEL (asm_out_file, "LVL", loclabel_num); loclabel_num++; - newloc->label = ggc_strdup (loclabel); + last_label = ggc_strdup (loclabel); + if (!NOTE_DURING_CALL_P (loc_note)) + last_postcall_label = NULL; } newloc->var_loc_note = loc_note; newloc->next = NULL; + if (!NOTE_DURING_CALL_P (loc_note)) + newloc->label = last_label; + else + { + if (!last_postcall_label) + { + sprintf (loclabel, "%s-1", last_label); + last_postcall_label = ggc_strdup (loclabel); + } + newloc->label = last_postcall_label; + } + if (cfun && in_cold_section_p) newloc->section_label = crtl->subsections.cold_section_label; else newloc->section_label = text_section_label; last_var_location_insn = next_real; - last_label = newloc->label; last_in_cold_section_p = in_cold_section_p; decl = NOTE_VAR_LOCATION_DECL (loc_note); add_var_loc_to_decl (decl, newloc); @@ -17242,14 +17940,14 @@ dwarf2out_init (const char *filename ATTRIBUTE_UNUSED) } /* A helper function for dwarf2out_finish called through - ht_forall. Emit one queued .debug_str string. */ + htab_traverse. Emit one queued .debug_str string. */ static int output_indirect_string (void **h, void *v ATTRIBUTE_UNUSED) { struct indirect_string_node *node = (struct indirect_string_node *) *h; - if (node->form == DW_FORM_strp) + if (node->label && node->refcount) { switch_to_section (debug_str_section); ASM_OUTPUT_LABEL (asm_out_file, node->label); @@ -17527,6 +18225,20 @@ prune_unused_types_prune (dw_die_ref die) } while (c != die->die_child); } +/* A helper function for dwarf2out_finish called through + htab_traverse. Clear .debug_str strings that we haven't already + decided to emit. */ + +static int +prune_indirect_string (void **h, void *v ATTRIBUTE_UNUSED) +{ + struct indirect_string_node *node = (struct indirect_string_node *) *h; + + if (!node->label || !node->refcount) + htab_clear_slot (debug_str_hash, h); + + return 1; +} /* Remove dies representing declarations that we never use. */ @@ -17557,7 +18269,9 @@ prune_unused_types (void) prune_unused_types_mark (arange_table[i], 1); /* Get rid of nodes that aren't marked; and update the string counts. */ - if (debug_str_hash) + if (debug_str_hash && debug_str_hash_forced) + htab_traverse (debug_str_hash, prune_indirect_string, NULL); + else if (debug_str_hash) htab_empty (debug_str_hash); prune_unused_types_prune (comp_unit_die); for (node = limbo_die_list; node; node = node->next) diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c index 21d8434d457..9096a62dcbf 100644 --- a/gcc/emit-rtl.c +++ b/gcc/emit-rtl.c @@ -58,6 +58,7 @@ along with GCC; see the file COPYING3. 
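output_indirect_string and prune_indirect_string above split one decision between emission and cleanup: a queued .debug_str entry survives only if it acquired both a label and a nonzero refcount. A small standalone model of that filter, with invented entries:

    #include <stdio.h>

    struct str_node { const char *str; const char *label; int refcount; };

    /* Model of the new .debug_str policy: entries that never got a
       label or a reference are dropped instead of emptying the whole
       table when forced strings are present.  */
    static int keep_p (const struct str_node *node)
    {
      return node->label != NULL && node->refcount > 0;
    }

    int main (void)
    {
      struct str_node tab[] = {
        { "main", "LASF0", 2 },
        { "unused_fn", NULL, 0 },
      };
      unsigned i;

      for (i = 0; i < sizeof tab / sizeof tab[0]; i++)
        printf ("%s: %s\n", tab[i].str, keep_p (&tab[i]) ? "emit" : "prune");
      return 0;
    }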
If not see #include "langhooks.h" #include "tree-pass.h" #include "df.h" +#include "params.h" /* Commonly used modes. */ @@ -175,6 +176,7 @@ static GTY ((if_marked ("ggc_marked_p"), param_is (struct rtx_def))) #define first_insn (crtl->emit.x_first_insn) #define last_insn (crtl->emit.x_last_insn) #define cur_insn_uid (crtl->emit.x_cur_insn_uid) +#define cur_debug_insn_uid (crtl->emit.x_cur_debug_insn_uid) #define last_location (crtl->emit.x_last_location) #define first_label_num (crtl->emit.x_first_label_num) @@ -2268,8 +2270,31 @@ set_new_first_and_last_insn (rtx first, rtx last) last_insn = last; cur_insn_uid = 0; - for (insn = first; insn; insn = NEXT_INSN (insn)) - cur_insn_uid = MAX (cur_insn_uid, INSN_UID (insn)); + if (MIN_NONDEBUG_INSN_UID || MAY_HAVE_DEBUG_INSNS) + { + int debug_count = 0; + + cur_insn_uid = MIN_NONDEBUG_INSN_UID - 1; + cur_debug_insn_uid = 0; + + for (insn = first; insn; insn = NEXT_INSN (insn)) + if (INSN_UID (insn) < MIN_NONDEBUG_INSN_UID) + cur_debug_insn_uid = MAX (cur_debug_insn_uid, INSN_UID (insn)); + else + { + cur_insn_uid = MAX (cur_insn_uid, INSN_UID (insn)); + if (DEBUG_INSN_P (insn)) + debug_count++; + } + + if (debug_count) + cur_debug_insn_uid = MIN_NONDEBUG_INSN_UID + debug_count; + else + cur_debug_insn_uid++; + } + else + for (insn = first; insn; insn = NEXT_INSN (insn)) + cur_insn_uid = MAX (cur_insn_uid, INSN_UID (insn)); cur_insn_uid++; } @@ -2592,6 +2617,7 @@ repeat: return; break; + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -2698,6 +2724,7 @@ repeat: case CC0: return; + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -2768,6 +2795,7 @@ set_used_flags (rtx x) case CC0: return; + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -2947,6 +2975,27 @@ get_max_uid (void) { return cur_insn_uid; } + +/* Return the number of actual (non-debug) insns emitted in this + function. */ + +int +get_max_insn_count (void) +{ + int n = cur_insn_uid; + + /* The table size must be stable across -g, to avoid codegen + differences due to debug insns, and not be affected by + -fmin-insn-uid, to avoid excessive table size and to simplify + debugging of -fcompare-debug failures. */ + if (cur_debug_insn_uid > MIN_NONDEBUG_INSN_UID) + n -= cur_debug_insn_uid; + else + n -= MIN_NONDEBUG_INSN_UID; + + return n; +} + /* Return the next insn. If it is a SEQUENCE, return the first insn of the sequence. */ @@ -3033,6 +3082,38 @@ prev_nonnote_insn (rtx insn) return insn; } +/* Return the next insn after INSN that is not a DEBUG_INSN. This + routine does not look inside SEQUENCEs. */ + +rtx +next_nondebug_insn (rtx insn) +{ + while (insn) + { + insn = NEXT_INSN (insn); + if (insn == 0 || !DEBUG_INSN_P (insn)) + break; + } + + return insn; +} + +/* Return the previous insn before INSN that is not a DEBUG_INSN. + This routine does not look inside SEQUENCEs. */ + +rtx +prev_nondebug_insn (rtx insn) +{ + while (insn) + { + insn = PREV_INSN (insn); + if (insn == 0 || !DEBUG_INSN_P (insn)) + break; + } + + return insn; +} + /* Return the next INSN, CALL_INSN or JUMP_INSN after INSN; or 0, if there is none. This routine does not look inside SEQUENCEs. */ @@ -3504,6 +3585,27 @@ make_insn_raw (rtx pattern) return insn; } +/* Like `make_insn_raw' but make a DEBUG_INSN instead of an insn. 
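set_new_first_and_last_insn above now maintains two counters over a partitioned UID space: with --param min-nondebug-insn-uid=N, debug insns normally draw UIDs below N and everything else at or above it, plus a spill count for debug insns that overflowed into the shared range. A standalone model of the recount, using a small stand-in value for the parameter:

    #include <stdio.h>

    #define MIN_NONDEBUG_INSN_UID 16   /* stand-in for the --param value */

    struct insn { int uid; int debug_p; };

    /* Recompute both UID counters the way the code above does.  */
    static void recount_uids (const struct insn *insns, int n,
                              int *cur_insn_uid, int *cur_debug_insn_uid)
    {
      int i, debug_count = 0;

      *cur_insn_uid = MIN_NONDEBUG_INSN_UID - 1;
      *cur_debug_insn_uid = 0;

      for (i = 0; i < n; i++)
        if (insns[i].uid < MIN_NONDEBUG_INSN_UID)
          {
            if (insns[i].uid > *cur_debug_insn_uid)
              *cur_debug_insn_uid = insns[i].uid;
          }
        else
          {
            if (insns[i].uid > *cur_insn_uid)
              *cur_insn_uid = insns[i].uid;
            if (insns[i].debug_p)
              debug_count++;
          }

      if (debug_count)
        *cur_debug_insn_uid = MIN_NONDEBUG_INSN_UID + debug_count;
      else
        ++*cur_debug_insn_uid;
      ++*cur_insn_uid;
    }

    int main (void)
    {
      struct insn stream[] = { { 3, 1 }, { 16, 0 }, { 17, 0 }, { 18, 1 } };
      int cur, curdbg;

      recount_uids (stream, 4, &cur, &curdbg);
      /* prints: cur_insn_uid=19 cur_debug_insn_uid=17 */
      printf ("cur_insn_uid=%d cur_debug_insn_uid=%d\n", cur, curdbg);
      return 0;
    }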
*/ + +rtx +make_debug_insn_raw (rtx pattern) +{ + rtx insn; + + insn = rtx_alloc (DEBUG_INSN); + INSN_UID (insn) = cur_debug_insn_uid++; + if (cur_debug_insn_uid > MIN_NONDEBUG_INSN_UID) + INSN_UID (insn) = cur_insn_uid++; + + PATTERN (insn) = pattern; + INSN_CODE (insn) = -1; + REG_NOTES (insn) = NULL; + INSN_LOCATOR (insn) = curr_insn_locator (); + BLOCK_FOR_INSN (insn) = NULL; + + return insn; +} + /* Like `make_insn_raw' but make a JUMP_INSN instead of an insn. */ rtx @@ -3917,6 +4019,7 @@ emit_insn_before_noloc (rtx x, rtx before, basic_block bb) switch (GET_CODE (x)) { + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -3960,6 +4063,7 @@ emit_jump_insn_before_noloc (rtx x, rtx before) switch (GET_CODE (x)) { + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -4003,6 +4107,7 @@ emit_call_insn_before_noloc (rtx x, rtx before) switch (GET_CODE (x)) { + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -4034,6 +4139,50 @@ emit_call_insn_before_noloc (rtx x, rtx before) return last; } +/* Make an instruction with body X and code DEBUG_INSN + and output it before the instruction BEFORE. */ + +rtx +emit_debug_insn_before_noloc (rtx x, rtx before) +{ + rtx last = NULL_RTX, insn; + + gcc_assert (before); + + switch (GET_CODE (x)) + { + case DEBUG_INSN: + case INSN: + case JUMP_INSN: + case CALL_INSN: + case CODE_LABEL: + case BARRIER: + case NOTE: + insn = x; + while (insn) + { + rtx next = NEXT_INSN (insn); + add_insn_before (insn, before, NULL); + last = insn; + insn = next; + } + break; + +#ifdef ENABLE_RTL_CHECKING + case SEQUENCE: + gcc_unreachable (); + break; +#endif + + default: + last = make_debug_insn_raw (x); + add_insn_before (last, before, NULL); + break; + } + + return last; +} + /* Make an insn of code BARRIER and output it before the insn BEFORE. */ @@ -4140,6 +4289,7 @@ emit_insn_after_noloc (rtx x, rtx after, basic_block bb) switch (GET_CODE (x)) { + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -4177,6 +4327,7 @@ emit_jump_insn_after_noloc (rtx x, rtx after) switch (GET_CODE (x)) { + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -4213,6 +4364,7 @@ emit_call_insn_after_noloc (rtx x, rtx after) switch (GET_CODE (x)) { + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -4237,6 +4389,43 @@ emit_call_insn_after_noloc (rtx x, rtx after) return last; } +/* Make an instruction with body X and code CALL_INSN + and output it after the instruction AFTER. */ + +rtx +emit_debug_insn_after_noloc (rtx x, rtx after) +{ + rtx last; + + gcc_assert (after); + + switch (GET_CODE (x)) + { + case DEBUG_INSN: + case INSN: + case JUMP_INSN: + case CALL_INSN: + case CODE_LABEL: + case BARRIER: + case NOTE: + last = emit_insn_after_1 (x, after, NULL); + break; + +#ifdef ENABLE_RTL_CHECKING + case SEQUENCE: + gcc_unreachable (); + break; +#endif + + default: + last = make_debug_insn_raw (x); + add_insn_after (last, after, NULL); + break; + } + + return last; +} + /* Make an insn of code BARRIER and output it after the insn AFTER. 
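make_debug_insn_raw allocates from the debug UID pool until it collides with the nondebug range, then falls back to the shared counter. A standalone sketch with a deliberately tiny pool so the fallback is visible:

    #include <stdio.h>

    #define MIN_NONDEBUG_INSN_UID 4    /* tiny value, to show the fallback */

    static int cur_insn_uid = MIN_NONDEBUG_INSN_UID;  /* as in init_emit */
    static int cur_debug_insn_uid = 1;

    /* UID policy of make_debug_insn_raw: draw from the debug pool until
       it is exhausted, then spill into the shared counter.  */
    static int alloc_debug_uid (void)
    {
      int uid = cur_debug_insn_uid++;
      if (cur_debug_insn_uid > MIN_NONDEBUG_INSN_UID)
        uid = cur_insn_uid++;
      return uid;
    }

    int main (void)
    {
      int i;
      for (i = 0; i < 6; i++)
        printf ("%d ", alloc_debug_uid ());
      printf ("\n");   /* prints: 1 2 3 4 5 6; the last three spilled */
      return 0;
    }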
*/ @@ -4307,8 +4496,13 @@ emit_insn_after_setloc (rtx pattern, rtx after, int loc) rtx emit_insn_after (rtx pattern, rtx after) { - if (INSN_P (after)) - return emit_insn_after_setloc (pattern, after, INSN_LOCATOR (after)); + rtx prev = after; + + while (DEBUG_INSN_P (prev)) + prev = PREV_INSN (prev); + + if (INSN_P (prev)) + return emit_insn_after_setloc (pattern, after, INSN_LOCATOR (prev)); else return emit_insn_after_noloc (pattern, after, NULL); } @@ -4338,8 +4532,13 @@ emit_jump_insn_after_setloc (rtx pattern, rtx after, int loc) rtx emit_jump_insn_after (rtx pattern, rtx after) { - if (INSN_P (after)) - return emit_jump_insn_after_setloc (pattern, after, INSN_LOCATOR (after)); + rtx prev = after; + + while (DEBUG_INSN_P (prev)) + prev = PREV_INSN (prev); + + if (INSN_P (prev)) + return emit_jump_insn_after_setloc (pattern, after, INSN_LOCATOR (prev)); else return emit_jump_insn_after_noloc (pattern, after); } @@ -4369,12 +4568,48 @@ emit_call_insn_after_setloc (rtx pattern, rtx after, int loc) rtx emit_call_insn_after (rtx pattern, rtx after) { - if (INSN_P (after)) - return emit_call_insn_after_setloc (pattern, after, INSN_LOCATOR (after)); + rtx prev = after; + + while (DEBUG_INSN_P (prev)) + prev = PREV_INSN (prev); + + if (INSN_P (prev)) + return emit_call_insn_after_setloc (pattern, after, INSN_LOCATOR (prev)); else return emit_call_insn_after_noloc (pattern, after); } +/* Like emit_debug_insn_after_noloc, but set INSN_LOCATOR according to SCOPE. */ +rtx +emit_debug_insn_after_setloc (rtx pattern, rtx after, int loc) +{ + rtx last = emit_debug_insn_after_noloc (pattern, after); + + if (pattern == NULL_RTX || !loc) + return last; + + after = NEXT_INSN (after); + while (1) + { + if (active_insn_p (after) && !INSN_LOCATOR (after)) + INSN_LOCATOR (after) = loc; + if (after == last) + break; + after = NEXT_INSN (after); + } + return last; +} + +/* Like emit_debug_insn_after_noloc, but set INSN_LOCATOR according to AFTER. */ +rtx +emit_debug_insn_after (rtx pattern, rtx after) +{ + if (INSN_P (after)) + return emit_debug_insn_after_setloc (pattern, after, INSN_LOCATOR (after)); + else + return emit_debug_insn_after_noloc (pattern, after); +} + /* Like emit_insn_before_noloc, but set INSN_LOCATOR according to SCOPE. 
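The emit_*_after wrappers above no longer take the locator from a debug insn; they walk back to the nearest nondebug insn first, so the presence of debug insns cannot change which locator a newly emitted insn inherits. A standalone model of that walk over a toy insn chain:

    #include <stdio.h>
    #include <stddef.h>

    /* Toy stand-ins for the rtl insn chain; only what the walk needs.  */
    struct insn { struct insn *prev, *next; int debug_p; const char *name; };

    /* Mirror of prev_nondebug_insn earlier in this file: step past
       DEBUG_INSNs only (notes are not modeled here).  */
    static struct insn *prev_nondebug (struct insn *insn)
    {
      while (insn)
        {
          insn = insn->prev;
          if (insn == NULL || !insn->debug_p)
            break;
        }
      return insn;
    }

    int main (void)
    {
      struct insn a = { NULL, NULL, 0, "add" };
      struct insn d = { &a, NULL, 1, "debug bind" };
      struct insn b = { &d, NULL, 0, "mul" };
      a.next = &d; d.next = &b;

      printf ("before mul: %s\n", prev_nondebug (&b)->name);  /* add */
      return 0;
    }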
*/ rtx emit_insn_before_setloc (rtx pattern, rtx before, int loc) @@ -4404,8 +4639,13 @@ emit_insn_before_setloc (rtx pattern, rtx before, int loc) rtx emit_insn_before (rtx pattern, rtx before) { - if (INSN_P (before)) - return emit_insn_before_setloc (pattern, before, INSN_LOCATOR (before)); + rtx next = before; + + while (DEBUG_INSN_P (next)) + next = PREV_INSN (next); + + if (INSN_P (next)) + return emit_insn_before_setloc (pattern, before, INSN_LOCATOR (next)); else return emit_insn_before_noloc (pattern, before, NULL); } @@ -4436,8 +4676,13 @@ emit_jump_insn_before_setloc (rtx pattern, rtx before, int loc) rtx emit_jump_insn_before (rtx pattern, rtx before) { - if (INSN_P (before)) - return emit_jump_insn_before_setloc (pattern, before, INSN_LOCATOR (before)); + rtx next = before; + + while (DEBUG_INSN_P (next)) + next = PREV_INSN (next); + + if (INSN_P (next)) + return emit_jump_insn_before_setloc (pattern, before, INSN_LOCATOR (next)); else return emit_jump_insn_before_noloc (pattern, before); } @@ -4469,11 +4714,49 @@ emit_call_insn_before_setloc (rtx pattern, rtx before, int loc) rtx emit_call_insn_before (rtx pattern, rtx before) { - if (INSN_P (before)) - return emit_call_insn_before_setloc (pattern, before, INSN_LOCATOR (before)); + rtx next = before; + + while (DEBUG_INSN_P (next)) + next = PREV_INSN (next); + + if (INSN_P (next)) + return emit_call_insn_before_setloc (pattern, before, INSN_LOCATOR (next)); else return emit_call_insn_before_noloc (pattern, before); } + +/* like emit_insn_before_noloc, but set insn_locator according to scope. */ +rtx +emit_debug_insn_before_setloc (rtx pattern, rtx before, int loc) +{ + rtx first = PREV_INSN (before); + rtx last = emit_debug_insn_before_noloc (pattern, before); + + if (pattern == NULL_RTX) + return last; + + first = NEXT_INSN (first); + while (1) + { + if (active_insn_p (first) && !INSN_LOCATOR (first)) + INSN_LOCATOR (first) = loc; + if (first == last) + break; + first = NEXT_INSN (first); + } + return last; +} + +/* like emit_debug_insn_before_noloc, + but set insn_locator according to before. */ +rtx +emit_debug_insn_before (rtx pattern, rtx before) +{ + if (INSN_P (before)) + return emit_debug_insn_before_setloc (pattern, before, INSN_LOCATOR (before)); + else + return emit_debug_insn_before_noloc (pattern, before); +} /* Take X and emit it at the end of the doubly-linked INSN list. @@ -4491,6 +4774,7 @@ emit_insn (rtx x) switch (GET_CODE (x)) { + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -4522,6 +4806,52 @@ emit_insn (rtx x) return last; } +/* Make an insn of code DEBUG_INSN with pattern X + and add it to the end of the doubly-linked list. */ + +rtx +emit_debug_insn (rtx x) +{ + rtx last = last_insn; + rtx insn; + + if (x == NULL_RTX) + return last; + + switch (GET_CODE (x)) + { + case DEBUG_INSN: + case INSN: + case JUMP_INSN: + case CALL_INSN: + case CODE_LABEL: + case BARRIER: + case NOTE: + insn = x; + while (insn) + { + rtx next = NEXT_INSN (insn); + add_insn (insn); + last = insn; + insn = next; + } + break; + +#ifdef ENABLE_RTL_CHECKING + case SEQUENCE: + gcc_unreachable (); + break; +#endif + + default: + last = make_debug_insn_raw (x); + add_insn (last); + break; + } + + return last; +} + /* Make an insn of code JUMP_INSN with pattern X and add it to the end of the doubly-linked list. 
*/ @@ -4532,6 +4862,7 @@ emit_jump_insn (rtx x) switch (GET_CODE (x)) { + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -4573,6 +4904,7 @@ emit_call_insn (rtx x) switch (GET_CODE (x)) { + case DEBUG_INSN: case INSN: case JUMP_INSN: case CALL_INSN: @@ -4844,6 +5176,8 @@ emit (rtx x) } case CALL_INSN: return emit_call_insn (x); + case DEBUG_INSN: + return emit_debug_insn (x); default: gcc_unreachable (); } @@ -5168,7 +5502,11 @@ init_emit (void) { first_insn = NULL; last_insn = NULL; - cur_insn_uid = 1; + if (MIN_NONDEBUG_INSN_UID) + cur_insn_uid = MIN_NONDEBUG_INSN_UID; + else + cur_insn_uid = 1; + cur_debug_insn_uid = 1; reg_rtx_no = LAST_VIRTUAL_REGISTER + 1; last_location = UNKNOWN_LOCATION; first_label_num = label_num; @@ -5625,6 +5963,10 @@ emit_copy_of_insn_after (rtx insn, rtx after) new_rtx = emit_jump_insn_after (copy_insn (PATTERN (insn)), after); break; + case DEBUG_INSN: + new_rtx = emit_debug_insn_after (copy_insn (PATTERN (insn)), after); + break; + case CALL_INSN: new_rtx = emit_call_insn_after (copy_insn (PATTERN (insn)), after); if (CALL_INSN_FUNCTION_USAGE (insn)) diff --git a/gcc/final.c b/gcc/final.c index cca1883039d..76c52ca100e 100644 --- a/gcc/final.c +++ b/gcc/final.c @@ -391,6 +391,7 @@ get_attr_length_1 (rtx insn ATTRIBUTE_UNUSED, case NOTE: case BARRIER: case CODE_LABEL: + case DEBUG_INSN: return 0; case CALL_INSN: @@ -4381,7 +4382,8 @@ rest_of_clean_state (void) && (!NOTE_P (insn) || (NOTE_KIND (insn) != NOTE_INSN_VAR_LOCATION && NOTE_KIND (insn) != NOTE_INSN_BLOCK_BEG - && NOTE_KIND (insn) != NOTE_INSN_BLOCK_END))) + && NOTE_KIND (insn) != NOTE_INSN_BLOCK_END + && NOTE_KIND (insn) != NOTE_INSN_CFA_RESTORE_STATE))) print_rtl_single (final_output, insn); } diff --git a/gcc/function.c b/gcc/function.c index b1d467c5787..32572544298 100644 --- a/gcc/function.c +++ b/gcc/function.c @@ -1775,8 +1775,11 @@ instantiate_virtual_regs (void) || GET_CODE (PATTERN (insn)) == ADDR_DIFF_VEC || GET_CODE (PATTERN (insn)) == ASM_INPUT) continue; - - instantiate_virtual_regs_in_insn (insn); + else if (DEBUG_INSN_P (insn)) + for_each_rtx (&INSN_VAR_LOCATION (insn), + instantiate_virtual_regs_in_rtx, NULL); + else + instantiate_virtual_regs_in_insn (insn); if (INSN_DELETED_P (insn)) continue; diff --git a/gcc/function.h b/gcc/function.h index ac3a1749549..446bc9d82e1 100644 --- a/gcc/function.h +++ b/gcc/function.h @@ -64,6 +64,10 @@ struct GTY(()) emit_status { Reset to 1 for each function compiled. */ int x_cur_insn_uid; + /* INSN_UID for next debug insn emitted. Only used if + --param min-nondebug-insn-uid=<value> is given with nonzero value. */ + int x_cur_debug_insn_uid; + /* Location the last line-number NOTE emitted. This is used to avoid generating duplicates. */ location_t x_last_location; diff --git a/gcc/fwprop.c b/gcc/fwprop.c index df8c45d3b6b..d3ed74298c0 100644 --- a/gcc/fwprop.c +++ b/gcc/fwprop.c @@ -1208,7 +1208,7 @@ forward_propagate_and_simplify (df_ref use, rtx def_insn, rtx def_set) if (INSN_CODE (use_insn) < 0) asm_use = asm_noperands (PATTERN (use_insn)); - if (!use_set && asm_use < 0) + if (!use_set && asm_use < 0 && !DEBUG_INSN_P (use_insn)) return false; /* Do not propagate into PC, CC0, etc. 
*/ @@ -1265,6 +1265,11 @@ forward_propagate_and_simplify (df_ref use, rtx def_insn, rtx def_set) loc = &SET_DEST (use_set); set_reg_equal = false; } + else if (!use_set) + { + loc = &INSN_VAR_LOCATION_LOC (use_insn); + set_reg_equal = false; + } else { rtx note = find_reg_note (use_insn, REG_EQUAL, NULL_RTX); diff --git a/gcc/gcc.c b/gcc/gcc.c index 5f857fa029d..25deb494a8d 100644 --- a/gcc/gcc.c +++ b/gcc/gcc.c @@ -891,10 +891,10 @@ static const char *asm_options = static const char *invoke_as = #ifdef AS_NEEDS_DASH_FOR_PIPED_INPUT -"%{fcompare-debug=*:%:compare-debug-dump-opt()}\ +"%{fcompare-debug=*|fdump-final-insns=*:%:compare-debug-dump-opt()}\ %{!S:-o %|.s |\n as %(asm_options) %|.s %A }"; #else -"%{fcompare-debug=*:%:compare-debug-dump-opt()}\ +"%{fcompare-debug=*|fdump-final-insns=*:%:compare-debug-dump-opt()}\ %{!S:-o %|.s |\n as %(asm_options) %m.s %A }"; #endif @@ -926,6 +926,7 @@ static const char *const multilib_defaults_raw[] = MULTILIB_DEFAULTS; #endif static const char *const driver_self_specs[] = { + "%{fdump-final-insns:-fdump-final-insns=.} %<fdump-final-insns", DRIVER_SELF_SPECS, GOMP_SELF_SPECS }; @@ -8672,6 +8673,33 @@ print_asm_header_spec_function (int arg ATTRIBUTE_UNUSED, return NULL; } +/* Compute a timestamp to initialize flag_random_seed. */ + +static unsigned +get_local_tick (void) +{ + unsigned ret = 0; + + /* Get some more or less random data. */ +#ifdef HAVE_GETTIMEOFDAY + { + struct timeval tv; + + gettimeofday (&tv, NULL); + ret = tv.tv_sec * 1000 + tv.tv_usec / 1000; + } +#else + { + time_t now = time (NULL); + + if (now != (time_t)-1) + ret = (unsigned) now; + } +#endif + + return ret; +} + /* %:compare-debug-dump-opt spec function. Save the last argument, expected to be the last -fdump-final-insns option, or generate a temporary. 
*/ @@ -8683,41 +8711,61 @@ compare_debug_dump_opt_spec_function (int arg, const char *ret; char *name; int which; + static char random_seed[HOST_BITS_PER_WIDE_INT / 4 + 3]; if (arg != 0) fatal ("too many arguments to %%:compare-debug-dump-opt"); - if (!compare_debug) - return NULL; - do_spec_2 ("%{fdump-final-insns=*:%*}"); do_spec_1 (" ", 0, NULL); - if (argbuf_index > 0) + if (argbuf_index > 0 && strcmp (argv[argbuf_index - 1], ".")) { + if (!compare_debug) + return NULL; + name = xstrdup (argv[argbuf_index - 1]); ret = NULL; } else { -#define OPT "-fdump-final-insns=" - ret = "-fdump-final-insns=%g.gkd"; + const char *ext = NULL; + + if (argbuf_index > 0) + { + do_spec_2 ("%{o*:%*}%{!o:%{!S:%b%O}%{S:%b.s}}"); + ext = ".gkd"; + } + else if (!compare_debug) + return NULL; + else + do_spec_2 ("%g.gkd"); - do_spec_2 (ret + sizeof (OPT) - 1); do_spec_1 (" ", 0, NULL); -#undef OPT gcc_assert (argbuf_index > 0); - name = xstrdup (argbuf[argbuf_index - 1]); + name = concat (argbuf[argbuf_index - 1], ext, NULL); + + ret = concat ("-fdump-final-insns=", name, NULL); } which = compare_debug < 0; debug_check_temp_file[which] = name; -#if 0 - error ("compare-debug: [%i]=\"%s\", ret %s", which, name, ret); -#endif + if (!which) + { + unsigned HOST_WIDE_INT value = get_local_tick () ^ getpid (); + + sprintf (random_seed, HOST_WIDE_INT_PRINT_HEX, value); + } + + if (*random_seed) + ret = concat ("%{!frandom-seed=*:-frandom-seed=", random_seed, "} ", + ret, NULL); + + if (which) + *random_seed = 0; return ret; } @@ -8791,5 +8839,7 @@ compare_debug_auxbase_opt_spec_function (int arg, memcpy (name + sizeof (OPT) - 1, argv[0], len); name[sizeof (OPT) - 1 + len] = '\0'; +#undef OPT + return name; } diff --git a/gcc/gcse.c b/gcc/gcse.c index 5c427ce5719..dc4aa8b9a96 100644 --- a/gcc/gcse.c +++ b/gcc/gcse.c @@ -465,7 +465,7 @@ static void record_last_reg_set_info (rtx, int); static void record_last_mem_set_info (rtx); static void record_last_set_info (rtx, const_rtx, void *); static void compute_hash_table (struct hash_table_d *); -static void alloc_hash_table (int, struct hash_table_d *, int); +static void alloc_hash_table (struct hash_table_d *, int); static void free_hash_table (struct hash_table_d *); static void compute_hash_table_work (struct hash_table_d *); static void dump_hash_table (FILE *, const char *, struct hash_table_d *); @@ -1716,17 +1716,18 @@ compute_hash_table_work (struct hash_table_d *table) } /* Allocate space for the set/expr hash TABLE. - N_INSNS is the number of instructions in the function. It is used to determine the number of buckets to use. SET_P determines whether set or expression table will be created. 
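compare_debug_dump_opt_spec_function above mixes get_local_tick with the driver's pid and hands the result back as -frandom-seed, so both -fcompare-debug compilations of a unit see the same seed and produce comparable dumps. A standalone sketch of that derivation, assuming the POSIX gettimeofday path:

    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main (void)
    {
      struct timeval tv;
      unsigned tick;
      unsigned long long seed;

      gettimeofday (&tv, NULL);
      tick = tv.tv_sec * 1000 + tv.tv_usec / 1000;  /* get_local_tick */
      seed = (unsigned long long) (tick ^ (unsigned) getpid ());
      printf ("-frandom-seed=0x%llx\n", seed);
      return 0;
    }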
*/ static void -alloc_hash_table (int n_insns, struct hash_table_d *table, int set_p) +alloc_hash_table (struct hash_table_d *table, int set_p) { int n; - table->size = n_insns / 4; + n = get_max_insn_count (); + + table->size = n / 4; if (table->size < 11) table->size = 11; @@ -2610,6 +2611,9 @@ cprop_insn (rtx insn) } } + if (changed && DEBUG_INSN_P (insn)) + return 0; + return changed; } @@ -3137,7 +3141,9 @@ bypass_conditional_jumps (void) { setcc = NULL_RTX; FOR_BB_INSNS (bb, insn) - if (NONJUMP_INSN_P (insn)) + if (DEBUG_INSN_P (insn)) + continue; + else if (NONJUMP_INSN_P (insn)) { if (setcc) break; @@ -3967,7 +3973,7 @@ one_pre_gcse_pass (void) gcc_obstack_init (&gcse_obstack); alloc_gcse_mem (); - alloc_hash_table (get_max_uid (), &expr_hash_table, 0); + alloc_hash_table (&expr_hash_table, 0); add_noreturn_fake_exit_edges (); if (flag_gcse_lm) compute_ld_motion_mems (); @@ -4448,7 +4454,7 @@ one_code_hoisting_pass (void) gcc_obstack_init (&gcse_obstack); alloc_gcse_mem (); - alloc_hash_table (get_max_uid (), &expr_hash_table, 0); + alloc_hash_table (&expr_hash_table, 0); compute_hash_table (&expr_hash_table); if (dump_file) dump_hash_table (dump_file, "Code Hosting Expressions", &expr_hash_table); @@ -4752,7 +4758,7 @@ compute_ld_motion_mems (void) { FOR_BB_INSNS (bb, insn) { - if (INSN_P (insn)) + if (NONDEBUG_INSN_P (insn)) { if (GET_CODE (PATTERN (insn)) == SET) { @@ -4988,7 +4994,7 @@ one_cprop_pass (void) implicit_sets = XCNEWVEC (rtx, last_basic_block); find_implicit_sets (); - alloc_hash_table (get_max_uid (), &set_hash_table, 1); + alloc_hash_table (&set_hash_table, 1); compute_hash_table (&set_hash_table); /* Free implicit_sets before peak usage. */ diff --git a/gcc/gimple-pretty-print.c b/gcc/gimple-pretty-print.c index 70ab4e1b800..50180203e2d 100644 --- a/gcc/gimple-pretty-print.c +++ b/gcc/gimple-pretty-print.c @@ -780,6 +780,31 @@ dump_gimple_resx (pretty_printer *buffer, gimple gs, int spc, int flags) dump_gimple_fmt (buffer, spc, flags, "resx %d", gimple_resx_region (gs)); } +/* Dump a GIMPLE_DEBUG tuple on the pretty_printer BUFFER, SPC spaces + of indent. FLAGS specifies details to show in the dump (see TDF_* + in tree-pass.h). */ + +static void +dump_gimple_debug (pretty_printer *buffer, gimple gs, int spc, int flags) +{ + switch (gs->gsbase.subcode) + { + case GIMPLE_DEBUG_BIND: + if (flags & TDF_RAW) + dump_gimple_fmt (buffer, spc, flags, "%G BIND <%T, %T>", gs, + gimple_debug_bind_get_var (gs), + gimple_debug_bind_get_value (gs)); + else + dump_gimple_fmt (buffer, spc, flags, "# DEBUG %T => %T", + gimple_debug_bind_get_var (gs), + gimple_debug_bind_get_value (gs)); + break; + + default: + gcc_unreachable (); + } +} + /* Dump a GIMPLE_OMP_FOR tuple on the pretty_printer BUFFER. 
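alloc_hash_table above switches from get_max_uid to get_max_insn_count so that debug insns (and a large --param min-nondebug-insn-uid) cannot change the bucket count, and with it hashing behavior, between -g and non-g builds. The sizing rule, made explicit in an illustrative helper:

    #include <stdio.h>

    /* A quarter of the nondebug insn count, clamped below at 11.  */
    static int gcse_table_size (int nondebug_insns)
    {
      int size = nondebug_insns / 4;
      return size < 11 ? 11 : size;
    }

    int main (void)
    {
      printf ("%d %d\n", gcse_table_size (8), gcse_table_size (1000));
      return 0;   /* prints: 11 250 */
    }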
*/ static void dump_gimple_omp_for (pretty_printer *buffer, gimple gs, int spc, int flags) @@ -1524,6 +1549,10 @@ dump_gimple_stmt (pretty_printer *buffer, gimple gs, int spc, int flags) dump_gimple_resx (buffer, gs, spc, flags); break; + case GIMPLE_DEBUG: + dump_gimple_debug (buffer, gs, spc, flags); + break; + case GIMPLE_PREDICT: pp_string (buffer, "// predicted "); if (gimple_predict_outcome (gs)) @@ -1577,7 +1606,8 @@ dump_bb_header (pretty_printer *buffer, basic_block bb, int indent, int flags) gimple_stmt_iterator gsi; for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi)) - if (get_lineno (gsi_stmt (gsi)) != -1) + if (!is_gimple_debug (gsi_stmt (gsi)) + && get_lineno (gsi_stmt (gsi)) != UNKNOWN_LOCATION) { pp_string (buffer, ", starting at line "); pp_decimal_int (buffer, get_lineno (gsi_stmt (gsi))); diff --git a/gcc/gimple.c b/gcc/gimple.c index aa30a2e31c0..7db34106e67 100644 --- a/gcc/gimple.c +++ b/gcc/gimple.c @@ -102,6 +102,7 @@ gss_for_code (enum gimple_code code) case GIMPLE_COND: case GIMPLE_GOTO: case GIMPLE_LABEL: + case GIMPLE_DEBUG: case GIMPLE_SWITCH: return GSS_WITH_OPS; case GIMPLE_ASM: return GSS_ASM; case GIMPLE_BIND: return GSS_BIND; @@ -253,7 +254,7 @@ gimple_set_subcode (gimple g, unsigned subcode) gimple_build_with_ops_stat (c, s, n MEM_STAT_INFO) static gimple -gimple_build_with_ops_stat (enum gimple_code code, enum tree_code subcode, +gimple_build_with_ops_stat (enum gimple_code code, unsigned subcode, unsigned num_ops MEM_STAT_DECL) { gimple s = gimple_alloc_stat (code, num_ops PASS_MEM_STAT); @@ -427,7 +428,7 @@ gimple_build_assign_with_ops_stat (enum tree_code subcode, tree lhs, tree op1, code). */ num_ops = get_gimple_rhs_num_ops (subcode) + 1; - p = gimple_build_with_ops_stat (GIMPLE_ASSIGN, subcode, num_ops + p = gimple_build_with_ops_stat (GIMPLE_ASSIGN, (unsigned)subcode, num_ops PASS_MEM_STAT); gimple_assign_set_lhs (p, lhs); gimple_assign_set_rhs1 (p, op1); @@ -831,6 +832,29 @@ gimple_build_switch_vec (tree index, tree default_label, VEC(tree, heap) *args) } +/* Build a new GIMPLE_DEBUG_BIND statement. + + VAR is bound to VALUE; block and location are taken from STMT. */ + +gimple +gimple_build_debug_bind_stat (tree var, tree value, gimple stmt MEM_STAT_DECL) +{ + gimple p = gimple_build_with_ops_stat (GIMPLE_DEBUG, + (unsigned)GIMPLE_DEBUG_BIND, 2 + PASS_MEM_STAT); + + gimple_debug_bind_set_var (p, var); + gimple_debug_bind_set_value (p, value); + if (stmt) + { + gimple_set_block (p, gimple_block (stmt)); + gimple_set_location (p, gimple_location (stmt)); + } + + return p; +} + + /* Build a GIMPLE_OMP_CRITICAL statement. BODY is the sequence of statements for which only one thread can execute. @@ -1213,11 +1237,11 @@ empty_body_p (gimple_seq body) { gimple_stmt_iterator i; - if (gimple_seq_empty_p (body)) return true; for (i = gsi_start (body); !gsi_end_p (i); gsi_next (&i)) - if (!empty_stmt_p (gsi_stmt (i))) + if (!empty_stmt_p (gsi_stmt (i)) + && !is_gimple_debug (gsi_stmt (i))) return false; return true; @@ -2224,6 +2248,9 @@ gimple_has_side_effects (const_gimple s) { unsigned i; + if (is_gimple_debug (s)) + return false; + /* We don't have to scan the arguments to check for volatile arguments, though, at present, we still do a scan to check for TREE_SIDE_EFFECTS. */ @@ -2317,6 +2344,8 @@ gimple_rhs_has_side_effects (const_gimple s) return true; } } + else if (is_gimple_debug (s)) + return false; else { /* For statements without an LHS, examine all arguments. 
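For reference, the non-raw dump_gimple_debug format shown earlier renders a bind with hypothetical operands as:

    # DEBUG x => y_1 + 1

which is how debug binds appear in tree dumps once -fvar-tracking-assignments is in effect.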
*/ diff --git a/gcc/gimple.def b/gcc/gimple.def index cee6753d200..716f6e2acbb 100644 --- a/gcc/gimple.def +++ b/gcc/gimple.def @@ -53,6 +53,9 @@ DEFGSCODE(GIMPLE_ERROR_MARK, "gimple_error_mark", NULL) jump target for the comparison. */ DEFGSCODE(GIMPLE_COND, "gimple_cond", struct gimple_statement_with_ops) +/* GIMPLE_DEBUG represents a debug statement. */ +DEFGSCODE(GIMPLE_DEBUG, "gimple_debug", struct gimple_statement_with_ops) + /* GIMPLE_GOTO <TARGET> represents unconditional jumps. TARGET is a LABEL_DECL or an expression node for computed GOTOs. */ DEFGSCODE(GIMPLE_GOTO, "gimple_goto", struct gimple_statement_with_ops) diff --git a/gcc/gimple.h b/gcc/gimple.h index 2f16c60538a..7a034a1f391 100644 --- a/gcc/gimple.h +++ b/gcc/gimple.h @@ -117,6 +117,14 @@ enum gf_mask { GF_PREDICT_TAKEN = 1 << 15 }; +/* Currently, there's only one type of gimple debug stmt. Others are + envisioned, for example, to enable the generation of is_stmt notes + in line number information, to mark sequence points, etc. This + subcode is to be used to tell them apart. */ +enum gimple_debug_subcode { + GIMPLE_DEBUG_BIND = 0 +}; + /* Masks for selecting a pass local flag (PLF) to work on. These masks are used by gimple_set_plf and gimple_plf. */ enum plf_mask { @@ -754,6 +762,10 @@ gimple gimple_build_assign_with_ops_stat (enum tree_code, tree, tree, #define gimple_build_assign_with_ops(c,o1,o2,o3) \ gimple_build_assign_with_ops_stat (c, o1, o2, o3 MEM_STAT_INFO) +gimple gimple_build_debug_bind_stat (tree, tree, gimple MEM_STAT_DECL); +#define gimple_build_debug_bind(var,val,stmt) \ + gimple_build_debug_bind_stat ((var), (val), (stmt) MEM_STAT_INFO) + gimple gimple_build_call_vec (tree, VEC(tree, heap) *); gimple gimple_build_call (tree, unsigned, ...); gimple gimple_build_call_from_tree (tree); @@ -3158,6 +3170,105 @@ gimple_switch_set_default_label (gimple gs, tree label) gimple_switch_set_label (gs, 0, label); } +/* Return true if GS is a GIMPLE_DEBUG statement. */ + +static inline bool +is_gimple_debug (const_gimple gs) +{ + return gimple_code (gs) == GIMPLE_DEBUG; +} + +/* Return true if S is a GIMPLE_DEBUG BIND statement. */ + +static inline bool +gimple_debug_bind_p (const_gimple s) +{ + if (is_gimple_debug (s)) + return s->gsbase.subcode == GIMPLE_DEBUG_BIND; + + return false; +} + +/* Return the variable bound in a GIMPLE_DEBUG bind statement. */ + +static inline tree +gimple_debug_bind_get_var (gimple dbg) +{ + GIMPLE_CHECK (dbg, GIMPLE_DEBUG); + gcc_assert (gimple_debug_bind_p (dbg)); + return gimple_op (dbg, 0); +} + +/* Return the value bound to the variable in a GIMPLE_DEBUG bind + statement. */ + +static inline tree +gimple_debug_bind_get_value (gimple dbg) +{ + GIMPLE_CHECK (dbg, GIMPLE_DEBUG); + gcc_assert (gimple_debug_bind_p (dbg)); + return gimple_op (dbg, 1); +} + +/* Return a pointer to the value bound to the variable in a + GIMPLE_DEBUG bind statement. */ + +static inline tree * +gimple_debug_bind_get_value_ptr (gimple dbg) +{ + GIMPLE_CHECK (dbg, GIMPLE_DEBUG); + gcc_assert (gimple_debug_bind_p (dbg)); + return gimple_op_ptr (dbg, 1); +} + +/* Set the variable bound in a GIMPLE_DEBUG bind statement. */ + +static inline void +gimple_debug_bind_set_var (gimple dbg, tree var) +{ + GIMPLE_CHECK (dbg, GIMPLE_DEBUG); + gcc_assert (gimple_debug_bind_p (dbg)); + gimple_set_op (dbg, 0, var); +} + +/* Set the value bound to the variable in a GIMPLE_DEBUG bind + statement. 
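A hedged usage fragment for the gimple_build_debug_bind wrapper declared above (GCC-internal API, not standalone; guards and error paths elided): before removing a dead single assignment, record the binding so the debugger can still evaluate the left-hand side.

    gimple stmt = gsi_stmt (gsi);
    if (gimple_assign_single_p (stmt))
      {
        gimple bind = gimple_build_debug_bind (gimple_assign_lhs (stmt),
                                               gimple_assign_rhs1 (stmt),
                                               stmt);
        gsi_insert_before (&gsi, bind, GSI_SAME_STMT);
      }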
*/ + +static inline void +gimple_debug_bind_set_value (gimple dbg, tree value) +{ + GIMPLE_CHECK (dbg, GIMPLE_DEBUG); + gcc_assert (gimple_debug_bind_p (dbg)); + gimple_set_op (dbg, 1, value); +} + +/* The second operand of a GIMPLE_DEBUG_BIND, when the value was + optimized away. */ +#define GIMPLE_DEBUG_BIND_NOVALUE NULL_TREE /* error_mark_node */ + +/* Remove the value bound to the variable in a GIMPLE_DEBUG bind + statement. */ + +static inline void +gimple_debug_bind_reset_value (gimple dbg) +{ + GIMPLE_CHECK (dbg, GIMPLE_DEBUG); + gcc_assert (gimple_debug_bind_p (dbg)); + gimple_set_op (dbg, 1, GIMPLE_DEBUG_BIND_NOVALUE); +} + +/* Return true if the GIMPLE_DEBUG bind statement is bound to a + value. */ + +static inline bool +gimple_debug_bind_has_value_p (gimple dbg) +{ + GIMPLE_CHECK (dbg, GIMPLE_DEBUG); + gcc_assert (gimple_debug_bind_p (dbg)); + return gimple_op (dbg, 1) != GIMPLE_DEBUG_BIND_NOVALUE; +} + +#undef GIMPLE_DEBUG_BIND_NOVALUE /* Return the body for the OMP statement GS. */ @@ -4308,6 +4419,58 @@ gsi_after_labels (basic_block bb) return gsi; } +/* Advance the iterator to the next non-debug gimple statement. */ + +static inline void +gsi_next_nondebug (gimple_stmt_iterator *i) +{ + do + { + gsi_next (i); + } + while (!gsi_end_p (*i) && is_gimple_debug (gsi_stmt (*i))); +} + +/* Advance the iterator to the next non-debug gimple statement. */ + +static inline void +gsi_prev_nondebug (gimple_stmt_iterator *i) +{ + do + { + gsi_prev (i); + } + while (!gsi_end_p (*i) && is_gimple_debug (gsi_stmt (*i))); +} + +/* Return a new iterator pointing to the first non-debug statement in + basic block BB. */ + +static inline gimple_stmt_iterator +gsi_start_nondebug_bb (basic_block bb) +{ + gimple_stmt_iterator i = gsi_start_bb (bb); + + if (!gsi_end_p (i) && is_gimple_debug (gsi_stmt (i))) + gsi_next_nondebug (&i); + + return i; +} + +/* Return a new iterator pointing to the last non-debug statement in + basic block BB. */ + +static inline gimple_stmt_iterator +gsi_last_nondebug_bb (basic_block bb) +{ + gimple_stmt_iterator i = gsi_last_bb (bb); + + if (!gsi_end_p (i) && is_gimple_debug (gsi_stmt (i))) + gsi_prev_nondebug (&i); + + return i; +} + /* Return a pointer to the current stmt. NOTE: You may want to use gsi_replace on the iterator itself, diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c index 95cbfc1b1a8..d5072385d22 100644 --- a/gcc/haifa-sched.c +++ b/gcc/haifa-sched.c @@ -310,7 +310,7 @@ size_t dfa_state_size; char *ready_try = NULL; /* The ready list. */ -struct ready_list ready = {NULL, 0, 0, 0}; +struct ready_list ready = {NULL, 0, 0, 0, 0}; /* The pointer to the ready list (to be removed). */ static struct ready_list *readyp = &ready; @@ -748,6 +748,10 @@ increase_insn_priority (rtx insn, int amount) static bool contributes_to_priority_p (dep_t dep) { + if (DEBUG_INSN_P (DEP_CON (dep)) + || DEBUG_INSN_P (DEP_PRO (dep))) + return false; + /* Critical path is meaningful in block boundaries only. */ if (!current_sched_info->contributes_to_priority (DEP_CON (dep), DEP_PRO (dep))) @@ -767,6 +771,31 @@ contributes_to_priority_p (dep_t dep) return true; } +/* Compute the number of nondebug forward deps of an insn. 
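The nondebug iterator helpers above compose into the usual traversal idiom. A hedged fragment, where process_stmt is a placeholder:

    gimple_stmt_iterator gsi;

    for (gsi = gsi_start_nondebug_bb (bb);
         !gsi_end_p (gsi);
         gsi_next_nondebug (&gsi))
      process_stmt (gsi_stmt (gsi));

This visits only the real statements of bb, skipping any GIMPLE_DEBUG binds interleaved by -fvar-tracking-assignments.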
*/ + +static int +dep_list_size (rtx insn) +{ + sd_iterator_def sd_it; + dep_t dep; + int dbgcount = 0, nodbgcount = 0; + + if (!MAY_HAVE_DEBUG_INSNS) + return sd_lists_size (insn, SD_LIST_FORW); + + FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep) + { + if (DEBUG_INSN_P (DEP_CON (dep))) + dbgcount++; + else + nodbgcount++; + } + + gcc_assert (dbgcount + nodbgcount == sd_lists_size (insn, SD_LIST_FORW)); + + return nodbgcount; +} + /* Compute the priority number for INSN. */ static int priority (rtx insn) @@ -781,7 +810,7 @@ priority (rtx insn) { int this_priority = -1; - if (sd_lists_empty_p (insn, SD_LIST_FORW)) + if (dep_list_size (insn) == 0) /* ??? We should set INSN_PRIORITY to insn_cost when and insn has some forward deps but all of them are ignored by contributes_to_priority hook. At the moment we set priority of @@ -886,9 +915,19 @@ rank_for_schedule (const void *x, const void *y) { rtx tmp = *(const rtx *) y; rtx tmp2 = *(const rtx *) x; + rtx last; int tmp_class, tmp2_class; int val, priority_val, weight_val, info_val; + if (MAY_HAVE_DEBUG_INSNS) + { + /* Schedule debug insns as early as possible. */ + if (DEBUG_INSN_P (tmp) && !DEBUG_INSN_P (tmp2)) + return -1; + else if (DEBUG_INSN_P (tmp2)) + return 1; + } + /* The insn in a schedule group should be issued the first. */ if (flag_sched_group_heuristic && SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2)) @@ -936,8 +975,20 @@ rank_for_schedule (const void *x, const void *y) if(flag_sched_rank_heuristic && info_val) return info_val; - /* Compare insns based on their relation to the last-scheduled-insn. */ - if (flag_sched_last_insn_heuristic && INSN_P (last_scheduled_insn)) + if (flag_sched_last_insn_heuristic) + { + last = last_scheduled_insn; + + if (DEBUG_INSN_P (last) && last != current_sched_info->prev_head) + do + last = PREV_INSN (last); + while (!NONDEBUG_INSN_P (last) + && last != current_sched_info->prev_head); + } + + /* Compare insns based on their relation to the last scheduled + non-debug insn. */ + if (flag_sched_last_insn_heuristic && NONDEBUG_INSN_P (last)) { dep_t dep1; dep_t dep2; @@ -947,7 +998,7 @@ rank_for_schedule (const void *x, const void *y) 2) Anti/Output dependent on last scheduled insn. 3) Independent of last scheduled insn, or has latency of one. Choose the insn from the highest numbered class if different. */ - dep1 = sd_find_dep_between (last_scheduled_insn, tmp, true); + dep1 = sd_find_dep_between (last, tmp, true); if (dep1 == NULL || dep_cost (dep1) == 1) tmp_class = 3; @@ -957,7 +1008,7 @@ rank_for_schedule (const void *x, const void *y) else tmp_class = 2; - dep2 = sd_find_dep_between (last_scheduled_insn, tmp2, true); + dep2 = sd_find_dep_between (last, tmp2, true); if (dep2 == NULL || dep_cost (dep2) == 1) tmp2_class = 3; @@ -975,8 +1026,7 @@ rank_for_schedule (const void *x, const void *y) This gives the scheduler more freedom when scheduling later instructions at the expense of added register pressure. 
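rank_for_schedule above makes every debug insn compare ahead of every nondebug insn, so debug insns drain from the ready list immediately and never constrain real scheduling. A simplified standalone model of that ordering (the real comparator has many more tie-breakers):

    #include <stdio.h>
    #include <stdlib.h>

    struct item { int debug_p; int priority; const char *name; };

    static int rank (const void *xp, const void *yp)
    {
      const struct item *x = (const struct item *) xp;
      const struct item *y = (const struct item *) yp;

      if (x->debug_p != y->debug_p)
        return y->debug_p - x->debug_p;    /* debug insns first */
      return y->priority - x->priority;    /* then higher priority */
    }

    int main (void)
    {
      struct item ready[] = {
        { 0, 5, "mul" }, { 1, 0, "debug bind" }, { 0, 9, "load" }
      };
      int i;

      qsort (ready, 3, sizeof *ready, rank);
      for (i = 0; i < 3; i++)
        printf ("%s\n", ready[i].name);  /* debug bind, load, mul */
      return 0;
    }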
*/ - val = (sd_lists_size (tmp2, SD_LIST_FORW) - - sd_lists_size (tmp, SD_LIST_FORW)); + val = (dep_list_size (tmp2) - dep_list_size (tmp)); if (flag_sched_dep_count_heuristic && val != 0) return val; @@ -1014,6 +1064,7 @@ queue_insn (rtx insn, int n_cycles) rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]); gcc_assert (n_cycles <= max_insn_queue_index); + gcc_assert (!DEBUG_INSN_P (insn)); insn_queue[next_q] = link; q_size += 1; @@ -1081,6 +1132,8 @@ ready_add (struct ready_list *ready, rtx insn, bool first_p) } ready->n_ready++; + if (DEBUG_INSN_P (insn)) + ready->n_debug++; gcc_assert (QUEUE_INDEX (insn) != QUEUE_READY); QUEUE_INDEX (insn) = QUEUE_READY; @@ -1097,6 +1150,8 @@ ready_remove_first (struct ready_list *ready) gcc_assert (ready->n_ready); t = ready->vec[ready->first--]; ready->n_ready--; + if (DEBUG_INSN_P (t)) + ready->n_debug--; /* If the queue becomes empty, reset it. */ if (ready->n_ready == 0) ready->first = ready->veclen - 1; @@ -1138,6 +1193,8 @@ ready_remove (struct ready_list *ready, int index) gcc_assert (ready->n_ready && index < ready->n_ready); t = ready->vec[ready->first - index]; ready->n_ready--; + if (DEBUG_INSN_P (t)) + ready->n_debug--; for (i = index; i < ready->n_ready; i++) ready->vec[ready->first - i] = ready->vec[ready->first - i - 1]; QUEUE_INDEX (t) = QUEUE_NOWHERE; @@ -1316,7 +1373,8 @@ schedule_insn (rtx insn) be aligned. */ if (issue_rate > 1 && GET_CODE (PATTERN (insn)) != USE - && GET_CODE (PATTERN (insn)) != CLOBBER) + && GET_CODE (PATTERN (insn)) != CLOBBER + && !DEBUG_INSN_P (insn)) { if (reload_completed) PUT_MODE (insn, clock_var > last_clock_var ? TImode : VOIDmode); @@ -1428,7 +1486,7 @@ get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp) beg_head = NEXT_INSN (beg_head); while (beg_head != beg_tail) - if (NOTE_P (beg_head)) + if (NOTE_P (beg_head) || BOUNDARY_DEBUG_INSN_P (beg_head)) beg_head = NEXT_INSN (beg_head); else break; @@ -1441,7 +1499,7 @@ get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp) end_head = NEXT_INSN (end_head); while (end_head != end_tail) - if (NOTE_P (end_tail)) + if (NOTE_P (end_tail) || BOUNDARY_DEBUG_INSN_P (end_tail)) end_tail = PREV_INSN (end_tail); else break; @@ -1456,7 +1514,8 @@ no_real_insns_p (const_rtx head, const_rtx tail) { while (head != NEXT_INSN (tail)) { - if (!NOTE_P (head) && !LABEL_P (head)) + if (!NOTE_P (head) && !LABEL_P (head) + && !BOUNDARY_DEBUG_INSN_P (head)) return 0; head = NEXT_INSN (head); } @@ -1627,9 +1686,13 @@ queue_to_ready (struct ready_list *ready) q_ptr = NEXT_Q (q_ptr); if (dbg_cnt (sched_insn) == false) - /* If debug counter is activated do not requeue insn next after - last_scheduled_insn. */ - skip_insn = next_nonnote_insn (last_scheduled_insn); + { + /* If debug counter is activated do not requeue insn next after + last_scheduled_insn. */ + skip_insn = next_nonnote_insn (last_scheduled_insn); + while (skip_insn && DEBUG_INSN_P (skip_insn)) + skip_insn = next_nonnote_insn (skip_insn); + } else skip_insn = NULL_RTX; @@ -1647,7 +1710,7 @@ queue_to_ready (struct ready_list *ready) /* If the ready list is full, delay the insn for 1 cycle. See the comment in schedule_block for the rationale. 
*/ if (!reload_completed - && ready->n_ready > MAX_SCHED_READY_INSNS + && ready->n_ready - ready->n_debug > MAX_SCHED_READY_INSNS && !SCHED_GROUP_P (insn) && insn != skip_insn) { @@ -2255,7 +2318,8 @@ choose_ready (struct ready_list *ready, rtx *insn_ptr) if (targetm.sched.first_cycle_multipass_dfa_lookahead) lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead (); - if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0))) + if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0)) + || DEBUG_INSN_P (ready_element (ready, 0))) { *insn_ptr = ready_remove_first (ready); return 0; @@ -2414,6 +2478,7 @@ schedule_block (basic_block *target_bb) /* Clear the ready list. */ ready.first = ready.veclen - 1; ready.n_ready = 0; + ready.n_debug = 0; /* It is used for first cycle multipass scheduling. */ temp_state = alloca (dfa_state_size); @@ -2424,7 +2489,8 @@ schedule_block (basic_block *target_bb) /* We start inserting insns after PREV_HEAD. */ last_scheduled_insn = prev_head; - gcc_assert (NOTE_P (last_scheduled_insn) + gcc_assert ((NOTE_P (last_scheduled_insn) + || BOUNDARY_DEBUG_INSN_P (last_scheduled_insn)) && BLOCK_FOR_INSN (last_scheduled_insn) == *target_bb); /* Initialize INSN_QUEUE. Q_SIZE is the total number of insns in the @@ -2445,12 +2511,14 @@ schedule_block (basic_block *target_bb) /* The algorithm is O(n^2) in the number of ready insns at any given time in the worst case. Before reload we are more likely to have big lists so truncate them to a reasonable size. */ - if (!reload_completed && ready.n_ready > MAX_SCHED_READY_INSNS) + if (!reload_completed + && ready.n_ready - ready.n_debug > MAX_SCHED_READY_INSNS) { ready_sort (&ready); - /* Find first free-standing insn past MAX_SCHED_READY_INSNS. */ - for (i = MAX_SCHED_READY_INSNS; i < ready.n_ready; i++) + /* Find first free-standing insn past MAX_SCHED_READY_INSNS. + If there are debug insns, we know they're first. */ + for (i = MAX_SCHED_READY_INSNS + ready.n_debug; i < ready.n_ready; i++) if (!SCHED_GROUP_P (ready_element (&ready, i))) break; @@ -2533,6 +2601,46 @@ schedule_block (basic_block *target_bb) } } + /* We don't want md sched reorder to even see debug isns, so put + them out right away. */ + if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))) + { + if (control_flow_insn_p (last_scheduled_insn)) + { + *target_bb = current_sched_info->advance_target_bb + (*target_bb, 0); + + if (sched_verbose) + { + rtx x; + + x = next_real_insn (last_scheduled_insn); + gcc_assert (x); + dump_new_block_header (1, *target_bb, x, tail); + } + + last_scheduled_insn = bb_note (*target_bb); + } + + while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))) + { + rtx insn = ready_remove_first (&ready); + gcc_assert (DEBUG_INSN_P (insn)); + (*current_sched_info->begin_schedule_ready) (insn, + last_scheduled_insn); + move_insn (insn, last_scheduled_insn, + current_sched_info->next_tail); + last_scheduled_insn = insn; + advance = schedule_insn (insn); + gcc_assert (advance == 0); + if (ready.n_ready > 0) + ready_sort (&ready); + } + + if (!ready.n_ready) + continue; + } + /* Allow the target to reorder the list, typically for better instruction bundling. 
*/ if (sort_p && targetm.sched.reorder @@ -2574,7 +2682,8 @@ schedule_block (basic_block *target_bb) ready_sort (&ready); } - if (ready.n_ready == 0 || !can_issue_more + if (ready.n_ready == 0 + || !can_issue_more || state_dead_lock_p (curr_state) || !(*current_sched_info->schedule_more_p) ()) break; @@ -2711,7 +2820,7 @@ schedule_block (basic_block *target_bb) if (targetm.sched.variable_issue) can_issue_more = targetm.sched.variable_issue (sched_dump, sched_verbose, - insn, can_issue_more); + insn, can_issue_more); /* A naked CLOBBER or USE generates no instruction, so do not count them against the issue rate. */ else if (GET_CODE (PATTERN (insn)) != USE @@ -2734,6 +2843,44 @@ schedule_block (basic_block *target_bb) if (ready.n_ready > 0) ready_sort (&ready); + /* Quickly go through debug insns such that md sched + reorder2 doesn't have to deal with debug insns. */ + if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)) + && (*current_sched_info->schedule_more_p) ()) + { + if (control_flow_insn_p (last_scheduled_insn)) + { + *target_bb = current_sched_info->advance_target_bb + (*target_bb, 0); + + if (sched_verbose) + { + rtx x; + + x = next_real_insn (last_scheduled_insn); + gcc_assert (x); + dump_new_block_header (1, *target_bb, x, tail); + } + + last_scheduled_insn = bb_note (*target_bb); + } + + while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))) + { + insn = ready_remove_first (&ready); + gcc_assert (DEBUG_INSN_P (insn)); + (*current_sched_info->begin_schedule_ready) + (insn, last_scheduled_insn); + move_insn (insn, last_scheduled_insn, + current_sched_info->next_tail); + advance = schedule_insn (insn); + last_scheduled_insn = insn; + gcc_assert (advance == 0); + if (ready.n_ready > 0) + ready_sort (&ready); + } + } + if (targetm.sched.reorder2 && (ready.n_ready == 0 || !SCHED_GROUP_P (ready_element (&ready, 0)))) @@ -2757,7 +2904,7 @@ schedule_block (basic_block *target_bb) if (current_sched_info->queue_must_finish_empty) /* Sanity check -- queue must be empty now. Meaningless if region has multiple bbs. */ - gcc_assert (!q_size && !ready.n_ready); + gcc_assert (!q_size && !ready.n_ready && !ready.n_debug); else { /* We must maintain QUEUE_INDEX between blocks in region. */ @@ -2836,8 +2983,8 @@ set_priorities (rtx head, rtx tail) current_sched_info->sched_max_insns_priority; rtx prev_head; - if (head == tail && (! INSN_P (head))) - return 0; + if (head == tail && (! INSN_P (head) || BOUNDARY_DEBUG_INSN_P (head))) + gcc_unreachable (); n_insn = 0; @@ -4605,7 +4752,7 @@ add_jump_dependencies (rtx insn, rtx jump) if (insn == jump) break; - if (sd_lists_empty_p (insn, SD_LIST_FORW)) + if (dep_list_size (insn) == 0) { dep_def _new_dep, *new_dep = &_new_dep; @@ -4648,6 +4795,19 @@ has_edge_p (VEC(edge,gc) *el, int type) return 0; } +/* Search back, starting at INSN, for an insn that is not a + NOTE_INSN_VAR_LOCATION. Don't search beyond HEAD, and return it if + no such insn can be found. */ +static inline rtx +prev_non_location_insn (rtx insn, rtx head) +{ + while (insn != head && NOTE_P (insn) + && NOTE_KIND (insn) == NOTE_INSN_VAR_LOCATION) + insn = PREV_INSN (insn); + + return insn; +} + /* Check few properties of CFG between HEAD and TAIL. If HEAD (TAIL) is NULL check from the beginning (till the end) of the instruction stream. 
*/ @@ -4707,8 +4867,9 @@ check_cfg (rtx head, rtx tail) { if (control_flow_insn_p (head)) { - gcc_assert (BB_END (bb) == head); - + gcc_assert (prev_non_location_insn (BB_END (bb), head) + == head); + if (any_uncondjump_p (head)) gcc_assert (EDGE_COUNT (bb->succs) == 1 && BARRIER_P (NEXT_INSN (head))); @@ -4724,11 +4885,12 @@ check_cfg (rtx head, rtx tail) if (BB_END (bb) == head) { if (EDGE_COUNT (bb->succs) > 1) - gcc_assert (control_flow_insn_p (head) + gcc_assert (control_flow_insn_p (prev_non_location_insn + (head, BB_HEAD (bb))) || has_edge_p (bb->succs, EDGE_COMPLEX)); bb = 0; } - + head = NEXT_INSN (head); } } diff --git a/gcc/ifcvt.c b/gcc/ifcvt.c index c47dfab7a70..1cf2608a177 100644 --- a/gcc/ifcvt.c +++ b/gcc/ifcvt.c @@ -194,7 +194,7 @@ first_active_insn (basic_block bb) insn = NEXT_INSN (insn); } - while (NOTE_P (insn)) + while (NOTE_P (insn) || DEBUG_INSN_P (insn)) { if (insn == BB_END (bb)) return NULL_RTX; @@ -217,6 +217,7 @@ last_active_insn (basic_block bb, int skip_use_p) while (NOTE_P (insn) || JUMP_P (insn) + || DEBUG_INSN_P (insn) || (skip_use_p && NONJUMP_INSN_P (insn) && GET_CODE (PATTERN (insn)) == USE)) @@ -269,7 +270,7 @@ cond_exec_process_insns (ce_if_block_t *ce_info ATTRIBUTE_UNUSED, for (insn = start; ; insn = NEXT_INSN (insn)) { - if (NOTE_P (insn)) + if (NOTE_P (insn) || DEBUG_INSN_P (insn)) goto insn_done; gcc_assert(NONJUMP_INSN_P (insn) || CALL_P (insn)); @@ -2256,6 +2257,8 @@ noce_process_if_block (struct noce_if_info *if_info) else { insn_b = prev_nonnote_insn (if_info->cond_earliest); + while (insn_b && DEBUG_INSN_P (insn_b)) + insn_b = prev_nonnote_insn (insn_b); /* We're going to be moving the evaluation of B down from above COND_EARLIEST to JUMP. Make sure the relevant data is still intact. */ @@ -2266,14 +2269,13 @@ noce_process_if_block (struct noce_if_info *if_info) || ! rtx_equal_p (x, SET_DEST (set_b)) || ! noce_operand_ok (SET_SRC (set_b)) || reg_overlap_mentioned_p (x, SET_SRC (set_b)) - || modified_between_p (SET_SRC (set_b), - PREV_INSN (if_info->cond_earliest), jump) + || modified_between_p (SET_SRC (set_b), insn_b, jump) /* Likewise with X. In particular this can happen when noce_get_condition looks farther back in the instruction stream than one might expect. */ || reg_overlap_mentioned_p (x, cond) || reg_overlap_mentioned_p (x, a) - || modified_between_p (x, PREV_INSN (if_info->cond_earliest), jump)) + || modified_between_p (x, insn_b, jump)) insn_b = set_b = NULL_RTX; } @@ -2481,7 +2483,7 @@ check_cond_move_block (basic_block bb, rtx *vals, VEC (int, heap) **regs, rtx co { rtx set, dest, src; - if (!INSN_P (insn) || JUMP_P (insn)) + if (!NONDEBUG_INSN_P (insn) || JUMP_P (insn)) continue; set = single_set (insn); if (!set) @@ -2559,7 +2561,8 @@ cond_move_convert_if_block (struct noce_if_info *if_infop, rtx set, target, dest, t, e; unsigned int regno; - if (!INSN_P (insn) || JUMP_P (insn)) + /* ??? Maybe emit conditional debug insn? 
*/ + if (!NONDEBUG_INSN_P (insn) || JUMP_P (insn)) continue; set = single_set (insn); gcc_assert (set && REG_P (SET_DEST (set))); @@ -3120,6 +3123,7 @@ block_jumps_and_fallthru_p (basic_block cur_bb, basic_block target_bb) if (INSN_P (insn) && !JUMP_P (insn) + && !DEBUG_INSN_P (insn) && GET_CODE (PATTERN (insn)) != USE && GET_CODE (PATTERN (insn)) != CLOBBER) n_insns++; @@ -3789,6 +3793,9 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb, head = BB_HEAD (merge_bb); end = BB_END (merge_bb); + while (DEBUG_INSN_P (end) && end != head) + end = PREV_INSN (end); + /* If merge_bb ends with a tablejump, predicating/moving insn's into test_bb and then deleting merge_bb will result in the jumptable that follows merge_bb being removed along with merge_bb and then we @@ -3798,6 +3805,8 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb, if (LABEL_P (head)) head = NEXT_INSN (head); + while (DEBUG_INSN_P (head) && head != end) + head = NEXT_INSN (head); if (NOTE_P (head)) { if (head == end) @@ -3806,6 +3815,8 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb, goto no_body; } head = NEXT_INSN (head); + while (DEBUG_INSN_P (head) && head != end) + head = NEXT_INSN (head); } if (JUMP_P (end)) @@ -3816,6 +3827,8 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb, goto no_body; } end = PREV_INSN (end); + while (DEBUG_INSN_P (end) && end != head) + end = PREV_INSN (end); } /* Disable handling dead code by conditional execution if the machine needs @@ -3876,7 +3889,7 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb, { if (CALL_P (insn)) return FALSE; - if (INSN_P (insn)) + if (NONDEBUG_INSN_P (insn)) { if (may_trap_p (PATTERN (insn))) return FALSE; @@ -3922,7 +3935,7 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb, FOR_BB_INSNS (merge_bb, insn) { - if (INSN_P (insn)) + if (NONDEBUG_INSN_P (insn)) { unsigned int uid = INSN_UID (insn); df_ref *def_rec; diff --git a/gcc/init-regs.c b/gcc/init-regs.c index 273ab97b77f..f667797b8df 100644 --- a/gcc/init-regs.c +++ b/gcc/init-regs.c @@ -70,7 +70,7 @@ initialize_uninitialized_regs (void) { unsigned int uid = INSN_UID (insn); df_ref *use_rec; - if (!INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) continue; for (use_rec = DF_INSN_UID_USES (uid); *use_rec; use_rec++) diff --git a/gcc/ipa-pure-const.c b/gcc/ipa-pure-const.c index 4e62eb187a4..201dc5996c1 100644 --- a/gcc/ipa-pure-const.c +++ b/gcc/ipa-pure-const.c @@ -411,6 +411,9 @@ check_stmt (gimple_stmt_iterator *gsip, funct_state local, bool ipa) gimple stmt = gsi_stmt (*gsip); unsigned int i = 0; + if (is_gimple_debug (stmt)) + return; + if (dump_file) { fprintf (dump_file, " scanning: "); diff --git a/gcc/ipa-reference.c b/gcc/ipa-reference.c index db63d02554f..10daf56eab6 100644 --- a/gcc/ipa-reference.c +++ b/gcc/ipa-reference.c @@ -442,6 +442,9 @@ scan_stmt_for_static_refs (gimple_stmt_iterator *gsip, gimple stmt = gsi_stmt (*gsip); ipa_reference_local_vars_info_t local = NULL; + if (is_gimple_debug (stmt)) + return NULL; + if (fn) local = get_reference_vars_info (fn)->local; diff --git a/gcc/ira-build.c b/gcc/ira-build.c index 4af927a041f..edb761b0d71 100644 --- a/gcc/ira-build.c +++ b/gcc/ira-build.c @@ -1491,7 +1491,7 @@ create_bb_allocnos (ira_loop_tree_node_t bb_node) curr_bb = bb = bb_node->bb; ira_assert (bb != NULL); FOR_BB_INSNS_REVERSE (bb, insn) - if (INSN_P (insn)) + if (NONDEBUG_INSN_P (insn)) create_insn_allocnos (PATTERN (insn), false); /* It might be a allocno living through from one subloop to another. 
*/ diff --git a/gcc/ira-conflicts.c b/gcc/ira-conflicts.c index bc0c0ac11c8..bce5c7f6294 100644 --- a/gcc/ira-conflicts.c +++ b/gcc/ira-conflicts.c @@ -522,7 +522,7 @@ add_copies (ira_loop_tree_node_t loop_tree_node) if (bb == NULL) return; FOR_BB_INSNS (bb, insn) - if (INSN_P (insn)) + if (NONDEBUG_INSN_P (insn)) add_insn_allocno_copies (insn); } diff --git a/gcc/ira-costs.c b/gcc/ira-costs.c index 4dd7003adeb..bb51c55bc65 100644 --- a/gcc/ira-costs.c +++ b/gcc/ira-costs.c @@ -995,7 +995,7 @@ scan_one_insn (rtx insn) rtx set, note; int i, k; - if (!INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) return insn; pat_code = GET_CODE (PATTERN (insn)); @@ -1384,7 +1384,7 @@ process_bb_node_for_hard_reg_moves (ira_loop_tree_node_t loop_tree_node) freq = 1; FOR_BB_INSNS (bb, insn) { - if (! INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) continue; set = single_set (insn); if (set == NULL_RTX) diff --git a/gcc/ira-lives.c b/gcc/ira-lives.c index c010f679d37..aa1904092ad 100644 --- a/gcc/ira-lives.c +++ b/gcc/ira-lives.c @@ -910,7 +910,7 @@ process_bb_node_lives (ira_loop_tree_node_t loop_tree_node) df_ref *def_rec, *use_rec; bool call_p; - if (! INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) continue; if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL) diff --git a/gcc/ira.c b/gcc/ira.c index c6a87237621..b6524895efe 100644 --- a/gcc/ira.c +++ b/gcc/ira.c @@ -2234,7 +2234,7 @@ memref_used_between_p (rtx memref, rtx start, rtx end) for (insn = NEXT_INSN (start); insn != NEXT_INSN (end); insn = NEXT_INSN (insn)) { - if (!INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) continue; if (memref_referenced_p (memref, PATTERN (insn))) @@ -2678,7 +2678,7 @@ update_equiv_regs (void) } /* Move the initialization of the register to just before INSN. Update the flow information. */ - else if (PREV_INSN (insn) != equiv_insn) + else if (prev_nondebug_insn (insn) != equiv_insn) { rtx new_insn; diff --git a/gcc/loop-invariant.c b/gcc/loop-invariant.c index 8c1e0e6202d..11a8310f33e 100644 --- a/gcc/loop-invariant.c +++ b/gcc/loop-invariant.c @@ -912,7 +912,7 @@ find_invariants_bb (basic_block bb, bool always_reached, bool always_executed) FOR_BB_INSNS (bb, insn) { - if (!INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) continue; find_invariants_insn (insn, always_reached, always_executed); diff --git a/gcc/lower-subreg.c b/gcc/lower-subreg.c index c8947f9d36a..3ff20eb3de5 100644 --- a/gcc/lower-subreg.c +++ b/gcc/lower-subreg.c @@ -531,6 +531,34 @@ resolve_subreg_use (rtx *px, void *data) return 0; } +/* This is called via for_each_rtx. Look for SUBREGs which can be + decomposed and decomposed REGs that need copying. */ + +static int +adjust_decomposed_uses (rtx *px, void *data ATTRIBUTE_UNUSED) +{ + rtx x = *px; + + if (x == NULL_RTX) + return 0; + + if (resolve_subreg_p (x)) + { + x = simplify_subreg_concatn (GET_MODE (x), SUBREG_REG (x), + SUBREG_BYTE (x)); + + if (x) + *px = x; + else + x = copy_rtx (*px); + } + + if (resolve_reg_p (x)) + *px = copy_rtx (x); + + return 0; +} + /* We are deleting INSN. Move any EH_REGION notes to INSNS. */ static void @@ -886,6 +914,18 @@ resolve_use (rtx pat, rtx insn) return false; } +/* A VAR_LOCATION can be simplified. */ + +static void +resolve_debug (rtx insn) +{ + for_each_rtx (&PATTERN (insn), adjust_decomposed_uses, NULL_RTX); + + df_insn_rescan (insn); + + resolve_reg_notes (insn); +} + /* Checks if INSN is a decomposable multiword-shift or zero-extend and sets the decomposable_context bitmap accordingly. A non-zero value is returned if a decomposable insn has been found. 
 */
@@ -1170,6 +1210,8 @@ decompose_multiword_subregs (void)
	    resolve_clobber (pat, insn);
	  else if (GET_CODE (pat) == USE)
	    resolve_use (pat, insn);
+	  else if (DEBUG_INSN_P (insn))
+	    resolve_debug (insn);
	  else
	    {
	      rtx set;
diff --git a/gcc/modulo-sched.c b/gcc/modulo-sched.c
index 5176880a016..fb6f548b0c9 100644
--- a/gcc/modulo-sched.c
+++ b/gcc/modulo-sched.c
@@ -349,7 +349,7 @@ const_iteration_count (rtx count_reg, basic_block pre_header,
   get_ebb_head_tail (pre_header, pre_header, &head, &tail);
 
   for (insn = tail; insn != PREV_INSN (head); insn = PREV_INSN (insn))
-    if (INSN_P (insn) && single_set (insn) &&
+    if (NONDEBUG_INSN_P (insn) && single_set (insn) &&
	rtx_equal_p (count_reg, SET_DEST (single_set (insn))))
      {
	rtx pat = single_set (insn);
@@ -375,7 +375,7 @@ res_MII (ddg_ptr g)
   if (targetm.sched.sms_res_mii)
     return targetm.sched.sms_res_mii (g);
 
-  return (g->num_nodes / issue_rate);
+  return ((g->num_nodes - g->num_debug) / issue_rate);
 }
 
@@ -769,7 +769,7 @@ loop_single_full_bb_p (struct loop *loop)
       for (; head != NEXT_INSN (tail); head = NEXT_INSN (head))
	{
	  if (NOTE_P (head) || LABEL_P (head)
-	      || (INSN_P (head) && JUMP_P (head)))
+	      || (INSN_P (head) && (DEBUG_INSN_P (head) || JUMP_P (head))))
	    continue;
	  empty_bb = false;
	  break;
	}
@@ -1020,7 +1020,7 @@ sms_schedule (void)
 
	  if (CALL_P (insn)
	      || BARRIER_P (insn)
-	      || (INSN_P (insn) && !JUMP_P (insn)
+	      || (NONDEBUG_INSN_P (insn) && !JUMP_P (insn)
		  && !single_set (insn) && GET_CODE (PATTERN (insn)) != USE)
	      || (FIND_REG_INC_NOTE (insn, NULL_RTX) != 0)
	      || (INSN_P (insn) && (set = single_set (insn))
@@ -1038,7 +1038,7 @@ sms_schedule (void)
		fprintf (dump_file, "SMS loop-with-barrier\n");
	      else if (FIND_REG_INC_NOTE (insn, NULL_RTX) != 0)
		fprintf (dump_file, "SMS reg inc\n");
-	      else if ((INSN_P (insn) && !JUMP_P (insn)
+	      else if ((NONDEBUG_INSN_P (insn) && !JUMP_P (insn)
		       && !single_set (insn) && GET_CODE (PATTERN (insn)) != USE))
		fprintf (dump_file, "SMS loop-with-not-single-set\n");
	      else
@@ -1754,7 +1754,7 @@ sms_schedule_by_order (ddg_ptr g, int mii, int maxii, int *nodes_order)
       ddg_node_ptr u_node = &ps->g->nodes[u];
       rtx insn = u_node->insn;
 
-      if (!INSN_P (insn))
+      if (!NONDEBUG_INSN_P (insn))
	{
	  RESET_BIT (tobe_scheduled, u);
	  continue;
	}
@@ -2743,7 +2743,7 @@ ps_has_conflicts (partial_schedule_ptr ps, int from, int to)
	{
	  rtx insn = crr_insn->node->insn;
 
-	  if (!INSN_P (insn))
+	  if (!NONDEBUG_INSN_P (insn))
	    continue;
 
	  /* Check if there is room for the current insn.  */
diff --git a/gcc/opts.c b/gcc/opts.c
index 59c24b67db1..601132e14a8 100644
--- a/gcc/opts.c
+++ b/gcc/opts.c
@@ -2054,7 +2054,7 @@ common_handle_option (size_t scode, const char *arg, int value,
       break;
 
     case OPT_gdwarf_:
-      if (value < 2 || value > 3)
+      if (value < 2 || value > 4)
	error ("dwarf version %d is not supported", value);
      else
	dwarf_version = value;
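The params.def hunk just below introduces --param min-nondebug-insn-uid. Its purpose, sketched here under stated assumptions (the counter names are illustrative; the real allocation code lives in emit-rtl.c, which is not part of this chunk), is to reserve a low UID range for debug insns so that nondebug insns receive identical UIDs whether or not debug insns are generated:

static int cur_insn_uid;	/* Assumed to start at MIN_NONDEBUG_INSN_UID.  */
static int cur_debug_insn_uid;	/* Assumed to start below it.  */

static int
next_uid_sketch (bool is_debug)
{
  /* Debug insns draw from the reserved range while it lasts;
     everything else is numbered independently of them.  */
  if (is_debug && cur_debug_insn_uid < MIN_NONDEBUG_INSN_UID)
    return cur_debug_insn_uid++;
  return cur_insn_uid++;
}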
diff --git a/gcc/params.def b/gcc/params.def
index e3a6470120a..dc5ceaf573e 100644
--- a/gcc/params.def
+++ b/gcc/params.def
@@ -752,6 +752,13 @@ DEFPARAM (PARAM_PREFETCH_MIN_INSN_TO_MEM_RATIO,
	  "min. ratio of insns to mem ops to enable prefetching in a loop",
	  3, 0, 0)
 
+/* Set minimum insn uid for non-debug insns.  */
+
+DEFPARAM (PARAM_MIN_NONDEBUG_INSN_UID,
+	  "min-nondebug-insn-uid",
+	  "The minimum UID to be used for a nondebug insn",
+	  0, 1, 0)
+
 /*
 Local variables:
 mode:c
diff --git a/gcc/params.h b/gcc/params.h
index a098fedc6e5..67a7a05c3de 100644
--- a/gcc/params.h
+++ b/gcc/params.h
@@ -170,4 +170,6 @@ typedef enum compiler_param
   PARAM_VALUE (PARAM_MIN_INSN_TO_PREFETCH_RATIO)
 #define PREFETCH_MIN_INSN_TO_MEM_RATIO \
   PARAM_VALUE (PARAM_PREFETCH_MIN_INSN_TO_MEM_RATIO)
+#define MIN_NONDEBUG_INSN_UID \
+  PARAM_VALUE (PARAM_MIN_NONDEBUG_INSN_UID)
 #endif /* ! GCC_PARAMS_H */
diff --git a/gcc/print-rtl.c b/gcc/print-rtl.c
index fa02699707e..cb09597b579 100644
--- a/gcc/print-rtl.c
+++ b/gcc/print-rtl.c
@@ -41,6 +41,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "hard-reg-set.h"
 #include "basic-block.h"
 #include "diagnostic.h"
+#include "cselib.h"
 #endif
 
 static FILE *outfile;
@@ -165,6 +166,23 @@ print_rtx (const_rtx in_rtx)
	/* For other rtl, print the mode if it's not VOID.  */
	else if (GET_MODE (in_rtx) != VOIDmode)
	  fprintf (outfile, ":%s", GET_MODE_NAME (GET_MODE (in_rtx)));
+
+#ifndef GENERATOR_FILE
+	if (GET_CODE (in_rtx) == VAR_LOCATION)
+	  {
+	    if (TREE_CODE (PAT_VAR_LOCATION_DECL (in_rtx)) == STRING_CST)
+	      fputs (" <debug string placeholder>", outfile);
+	    else
+	      print_mem_expr (outfile, PAT_VAR_LOCATION_DECL (in_rtx));
+	    fputc (' ', outfile);
+	    print_rtx (PAT_VAR_LOCATION_LOC (in_rtx));
+	    if (PAT_VAR_LOCATION_STATUS (in_rtx)
+		== VAR_INIT_STATUS_UNINITIALIZED)
+	      fprintf (outfile, " [uninit]");
+	    sawclose = 1;
+	    i = GET_RTX_LENGTH (VAR_LOCATION);
+	  }
+#endif
      }
  }
@@ -278,14 +296,8 @@ print_rtx (const_rtx in_rtx)
 
       case NOTE_INSN_VAR_LOCATION:
 #ifndef GENERATOR_FILE
-	fprintf (outfile, " (");
-	print_mem_expr (outfile, NOTE_VAR_LOCATION_DECL (in_rtx));
-	fprintf (outfile, " ");
-	print_rtx (NOTE_VAR_LOCATION_LOC (in_rtx));
-	if (NOTE_VAR_LOCATION_STATUS (in_rtx) ==
-	    VAR_INIT_STATUS_UNINITIALIZED)
-	  fprintf (outfile, " [uninit]");
-	fprintf (outfile, ")");
+	fputc (' ', outfile);
+	print_rtx (NOTE_VAR_LOCATION (in_rtx));
 #endif
	break;
 
@@ -296,6 +308,16 @@ print_rtx (const_rtx in_rtx)
       else if (i == 9 && JUMP_P (in_rtx) && XEXP (in_rtx, i) != NULL)
	/* Output the JUMP_LABEL reference.  */
	fprintf (outfile, "\n -> %d", INSN_UID (XEXP (in_rtx, i)));
+      else if (i == 0 && GET_CODE (in_rtx) == VALUE)
+	{
+#ifndef GENERATOR_FILE
+	  cselib_val *val = CSELIB_VAL_PTR (in_rtx);
+
+	  fprintf (outfile, " %i", val->value);
+	  dump_addr (outfile, " @", in_rtx);
+	  dump_addr (outfile, "/", (void*)val);
+#endif
+	}
       break;
 
     case 'e':
diff --git a/gcc/recog.c b/gcc/recog.c
index 138b03bcd19..c1e25d746a1 100644
--- a/gcc/recog.c
+++ b/gcc/recog.c
@@ -389,6 +389,8 @@ verify_changes (int num)
	     assemblies if they have been defined as register asm ("x").  */
	  break;
	}
+      else if (DEBUG_INSN_P (object))
+	continue;
      else if (insn_invalid_p (object))
	{
	  rtx pat = PATTERN (object);
@@ -429,7 +431,8 @@ verify_changes (int num)
	      validate_change (object, &PATTERN (object), newpat, 1);
	      continue;
	    }
-	  else if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
+	  else if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER
+		   || GET_CODE (pat) == VAR_LOCATION)
	    /* If this insn is a CLOBBER or USE, it is always valid, but is never recognized.
*/ continue; @@ -2039,6 +2042,7 @@ extract_insn (rtx insn) case ASM_INPUT: case ADDR_VEC: case ADDR_DIFF_VEC: + case VAR_LOCATION: return; case SET: @@ -3119,7 +3123,7 @@ peephole2_optimize (void) for (insn = BB_END (bb); ; insn = prev) { prev = PREV_INSN (insn); - if (INSN_P (insn)) + if (NONDEBUG_INSN_P (insn)) { rtx attempt, before_try, x; int match_len; diff --git a/gcc/reg-stack.c b/gcc/reg-stack.c index 2f94958a2e2..ff09ad224d9 100644 --- a/gcc/reg-stack.c +++ b/gcc/reg-stack.c @@ -1327,6 +1327,30 @@ compare_for_stack_reg (rtx insn, stack regstack, rtx pat_src) } } +/* Substitute new registers in LOC, which is part of a debug insn. + REGSTACK is the current register layout. */ + +static int +subst_stack_regs_in_debug_insn (rtx *loc, void *data) +{ + rtx *tloc = get_true_reg (loc); + stack regstack = (stack)data; + int hard_regno; + + if (!STACK_REG_P (*tloc)) + return 0; + + if (tloc != loc) + return 0; + + hard_regno = get_hard_regnum (regstack, *loc); + gcc_assert (hard_regno >= FIRST_STACK_REG); + + replace_reg (loc, hard_regno); + + return -1; +} + /* Substitute new registers in PAT, which is part of INSN. REGSTACK is the current register layout. Return whether a control flow insn was deleted in the process. */ @@ -1360,6 +1384,9 @@ subst_stack_regs_pat (rtx insn, stack regstack, rtx pat) since the REG_DEAD notes are not issued.) */ break; + case VAR_LOCATION: + gcc_unreachable (); + case CLOBBER: { rtx note; @@ -2871,6 +2898,7 @@ convert_regs_1 (basic_block block) int reg; rtx insn, next; bool control_flow_insn_deleted = false; + int debug_insns_with_starting_stack = 0; any_malformed_asm = false; @@ -2923,8 +2951,25 @@ convert_regs_1 (basic_block block) /* Don't bother processing unless there is a stack reg mentioned or if it's a CALL_INSN. */ - if (stack_regs_mentioned (insn) - || CALL_P (insn)) + if (DEBUG_INSN_P (insn)) + { + if (starting_stack_p) + debug_insns_with_starting_stack++; + else + { + for_each_rtx (&PATTERN (insn), subst_stack_regs_in_debug_insn, + ®stack); + + /* Nothing must ever die at a debug insn. If something + is referenced in it that becomes dead, it should have + died before and the reference in the debug insn + should have been removed so as to avoid changing code + generation. */ + gcc_assert (!find_reg_note (insn, REG_DEAD, NULL)); + } + } + else if (stack_regs_mentioned (insn) + || CALL_P (insn)) { if (dump_file) { @@ -2938,6 +2983,24 @@ convert_regs_1 (basic_block block) } while (next); + if (debug_insns_with_starting_stack) + { + /* Since it's the first non-debug instruction that determines + the stack requirements of the current basic block, we refrain + from updating debug insns before it in the loop above, and + fix them up here. 
*/ + for (insn = BB_HEAD (block); debug_insns_with_starting_stack; + insn = NEXT_INSN (insn)) + { + if (!DEBUG_INSN_P (insn)) + continue; + + debug_insns_with_starting_stack--; + for_each_rtx (&PATTERN (insn), subst_stack_regs_in_debug_insn, + &bi->stack_in); + } + } + if (dump_file) { fprintf (dump_file, "Expected live registers ["); diff --git a/gcc/regcprop.c b/gcc/regcprop.c index 87aaf02c409..893751886eb 100644 --- a/gcc/regcprop.c +++ b/gcc/regcprop.c @@ -474,6 +474,9 @@ replace_oldest_value_addr (rtx *loc, enum reg_class cl, switch (code) { case PLUS: + if (DEBUG_INSN_P (insn)) + break; + { rtx orig_op0 = XEXP (x, 0); rtx orig_op1 = XEXP (x, 1); @@ -608,9 +611,14 @@ replace_oldest_value_addr (rtx *loc, enum reg_class cl, static bool replace_oldest_value_mem (rtx x, rtx insn, struct value_data *vd) { - return replace_oldest_value_addr (&XEXP (x, 0), - base_reg_class (GET_MODE (x), MEM, - SCRATCH), + enum reg_class cl; + + if (DEBUG_INSN_P (insn)) + cl = ALL_REGS; + else + cl = base_reg_class (GET_MODE (x), MEM, SCRATCH); + + return replace_oldest_value_addr (&XEXP (x, 0), cl, GET_MODE (x), insn, vd); } @@ -619,7 +627,7 @@ replace_oldest_value_mem (rtx x, rtx insn, struct value_data *vd) static bool copyprop_hardreg_forward_1 (basic_block bb, struct value_data *vd) { - bool changed = false; + bool anything_changed = false; rtx insn; for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn)) @@ -628,9 +636,25 @@ copyprop_hardreg_forward_1 (basic_block bb, struct value_data *vd) bool is_asm, any_replacements; rtx set; bool replaced[MAX_RECOG_OPERANDS]; + bool changed = false; - if (! INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) { + if (DEBUG_INSN_P (insn)) + { + rtx loc = INSN_VAR_LOCATION_LOC (insn); + if (!VAR_LOC_UNKNOWN_P (loc) + && replace_oldest_value_addr (&INSN_VAR_LOCATION_LOC (insn), + ALL_REGS, GET_MODE (loc), + insn, vd)) + { + changed = apply_change_group (); + gcc_assert (changed); + df_insn_rescan (insn); + anything_changed = true; + } + } + if (insn == BB_END (bb)) break; else @@ -817,6 +841,12 @@ copyprop_hardreg_forward_1 (basic_block bb, struct value_data *vd) } did_replacement: + if (changed) + { + df_insn_rescan (insn); + anything_changed = true; + } + /* Clobber call-clobbered registers. */ if (CALL_P (insn)) for (i = 0; i < FIRST_PSEUDO_REGISTER; i++) @@ -834,7 +864,7 @@ copyprop_hardreg_forward_1 (basic_block bb, struct value_data *vd) break; } - return changed; + return anything_changed; } /* Main entry point for the forward copy propagation optimization. */ diff --git a/gcc/regmove.c b/gcc/regmove.c index 3341d3f159d..ab1a4696d36 100644 --- a/gcc/regmove.c +++ b/gcc/regmove.c @@ -321,9 +321,12 @@ optimize_reg_copy_1 (rtx insn, rtx dest, rtx src) /* For SREGNO, count the total number of insns scanned. For DREGNO, count the total number of insns scanned after passing the death note for DREGNO. */ - s_length++; - if (dest_death) - d_length++; + if (!DEBUG_INSN_P (p)) + { + s_length++; + if (dest_death) + d_length++; + } /* If the insn in which SRC dies is a CALL_INSN, don't count it as a call that has been crossed. Otherwise, count it. */ @@ -767,7 +770,7 @@ fixup_match_2 (rtx insn, rtx dst, rtx src, rtx offset) if (find_regno_note (p, REG_DEAD, REGNO (dst))) dst_death = p; - if (! dst_death) + if (! dst_death && !DEBUG_INSN_P (p)) length++; pset = single_set (p); @@ -1095,7 +1098,8 @@ regmove_backward_pass (void) if (BLOCK_FOR_INSN (p) != bb) break; - length++; + if (!DEBUG_INSN_P (p)) + length++; /* ??? See if all of SRC is set in P. 
This test is much more conservative than it needs to be. */ @@ -1103,24 +1107,13 @@ regmove_backward_pass (void) if (pset && SET_DEST (pset) == src) { /* We use validate_replace_rtx, in case there - are multiple identical source operands. All of - them have to be changed at the same time. */ + are multiple identical source operands. All + of them have to be changed at the same time: + when validate_replace_rtx() calls + apply_change_group(). */ + validate_change (p, &SET_DEST (pset), dst, 1); if (validate_replace_rtx (src, dst, insn)) - { - if (validate_change (p, &SET_DEST (pset), - dst, 0)) - success = 1; - else - { - /* Change all source operands back. - This modifies the dst as a side-effect. */ - validate_replace_rtx (dst, src, insn); - /* Now make sure the dst is right. */ - validate_change (insn, - recog_data.operand_loc[match_no], - dst, 0); - } - } + success = 1; break; } @@ -1129,9 +1122,21 @@ regmove_backward_pass (void) eliminate SRC. We can't make this change if DST is mentioned at all in P, since we are going to change its value. */ - if (reg_overlap_mentioned_p (src, PATTERN (p)) - || reg_mentioned_p (dst, PATTERN (p))) - break; + if (reg_overlap_mentioned_p (src, PATTERN (p))) + { + if (DEBUG_INSN_P (p)) + validate_replace_rtx_group (dst, src, insn); + else + break; + } + if (reg_mentioned_p (dst, PATTERN (p))) + { + if (DEBUG_INSN_P (p)) + validate_change (p, &INSN_VAR_LOCATION_LOC (p), + gen_rtx_UNKNOWN_VAR_LOC (), 1); + else + break; + } /* If we have passed a call instruction, and the pseudo-reg DST is not already live across a call, @@ -1193,6 +1198,8 @@ regmove_backward_pass (void) break; } + else if (num_changes_pending () > 0) + cancel_changes (0); } /* If we weren't able to replace any of the alternatives, try an diff --git a/gcc/regrename.c b/gcc/regrename.c index fcdaaf79e81..03aba8073a2 100644 --- a/gcc/regrename.c +++ b/gcc/regrename.c @@ -230,7 +230,7 @@ regrename_optimize (void) int new_reg, best_new_reg; int n_uses; struct du_chain *this_du = all_chains; - struct du_chain *tmp, *last; + struct du_chain *tmp; HARD_REG_SET this_unavailable; int reg = REGNO (*this_du->loc); int i; @@ -259,21 +259,20 @@ regrename_optimize (void) COPY_HARD_REG_SET (this_unavailable, unavailable); - /* Find last entry on chain (which has the need_caller_save bit), - count number of uses, and narrow the set of registers we can + /* Count number of uses, and narrow the set of registers we can use for renaming. */ n_uses = 0; - for (last = this_du; last->next_use; last = last->next_use) + for (tmp = this_du; tmp; tmp = tmp->next_use) { + if (DEBUG_INSN_P (tmp->insn)) + continue; n_uses++; IOR_COMPL_HARD_REG_SET (this_unavailable, - reg_class_contents[last->cl]); + reg_class_contents[tmp->cl]); } - if (n_uses < 1) - continue; - IOR_COMPL_HARD_REG_SET (this_unavailable, - reg_class_contents[last->cl]); + if (n_uses < 2) + continue; if (this_du->need_caller_save_reg) IOR_HARD_REG_SET (this_unavailable, call_used_reg_set); @@ -310,7 +309,8 @@ regrename_optimize (void) /* See whether it accepts all modes that occur in definition and uses. */ for (tmp = this_du; tmp; tmp = tmp->next_use) - if (! HARD_REGNO_MODE_OK (new_reg, GET_MODE (*tmp->loc)) + if ((! HARD_REGNO_MODE_OK (new_reg, GET_MODE (*tmp->loc)) + && ! DEBUG_INSN_P (tmp->insn)) || (tmp->need_caller_save_reg && ! 
(HARD_REGNO_CALL_PART_CLOBBERED (reg, GET_MODE (*tmp->loc))) @@ -327,8 +327,8 @@ regrename_optimize (void) if (dump_file) { fprintf (dump_file, "Register %s in insn %d", - reg_names[reg], INSN_UID (last->insn)); - if (last->need_caller_save_reg) + reg_names[reg], INSN_UID (this_du->insn)); + if (this_du->need_caller_save_reg) fprintf (dump_file, " crosses a call"); } @@ -362,17 +362,27 @@ regrename_optimize (void) static void do_replace (struct du_chain *chain, int reg) { + unsigned int base_regno = REGNO (*chain->loc); + + gcc_assert (! DEBUG_INSN_P (chain->insn)); + while (chain) { unsigned int regno = ORIGINAL_REGNO (*chain->loc); struct reg_attrs * attr = REG_ATTRS (*chain->loc); int reg_ptr = REG_POINTER (*chain->loc); - *chain->loc = gen_raw_REG (GET_MODE (*chain->loc), reg); - if (regno >= FIRST_PSEUDO_REGISTER) - ORIGINAL_REGNO (*chain->loc) = regno; - REG_ATTRS (*chain->loc) = attr; - REG_POINTER (*chain->loc) = reg_ptr; + if (DEBUG_INSN_P (chain->insn) && REGNO (*chain->loc) != base_regno) + INSN_VAR_LOCATION_LOC (chain->insn) = gen_rtx_UNKNOWN_VAR_LOC (); + else + { + *chain->loc = gen_raw_REG (GET_MODE (*chain->loc), reg); + if (regno >= FIRST_PSEUDO_REGISTER) + ORIGINAL_REGNO (*chain->loc) = regno; + REG_ATTRS (*chain->loc) = attr; + REG_POINTER (*chain->loc) = reg_ptr; + } + df_insn_rescan (chain->insn); chain = chain->next_use; } @@ -440,7 +450,7 @@ scan_rtx_reg (rtx insn, rtx *loc, enum reg_class cl, if (action == mark_read || action == mark_access) { - gcc_assert (exact_match); + gcc_assert (exact_match || DEBUG_INSN_P (insn)); /* ??? Class NO_REGS can happen if the md file makes use of EXTRA_CONSTRAINTS to match registers. Which is arguably @@ -744,7 +754,7 @@ build_def_use (basic_block bb) for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn)) { - if (INSN_P (insn)) + if (NONDEBUG_INSN_P (insn)) { int n_ops; rtx note; @@ -970,6 +980,12 @@ build_def_use (basic_block bb) scan_rtx (insn, &XEXP (note, 0), NO_REGS, terminate_dead, OP_IN, 0); } + else if (DEBUG_INSN_P (insn) + && !VAR_LOC_UNKNOWN_P (INSN_VAR_LOCATION_LOC (insn))) + { + scan_rtx (insn, &INSN_VAR_LOCATION_LOC (insn), + ALL_REGS, mark_read, OP_IN, 0); + } if (insn == BB_END (bb)) break; } diff --git a/gcc/regstat.c b/gcc/regstat.c index 097d0fa8ea8..70ddfa4d84f 100644 --- a/gcc/regstat.c +++ b/gcc/regstat.c @@ -61,11 +61,27 @@ regstat_init_n_sets_and_refs (void) regstat_n_sets_and_refs = XNEWVEC (struct regstat_n_sets_and_refs_t, max_regno); - for (i = 0; i < max_regno; i++) - { - SET_REG_N_SETS (i, DF_REG_DEF_COUNT (i)); - SET_REG_N_REFS (i, DF_REG_USE_COUNT (i) + REG_N_SETS (i)); - } + if (MAY_HAVE_DEBUG_INSNS) + for (i = 0; i < max_regno; i++) + { + int use_count; + df_ref use; + + use_count = DF_REG_USE_COUNT (i); + for (use = DF_REG_USE_CHAIN (i); use; use = DF_REF_NEXT_REG (use)) + if (DF_REF_INSN_INFO (use) && DEBUG_INSN_P (DF_REF_INSN (use))) + use_count--; + + + SET_REG_N_SETS (i, DF_REG_DEF_COUNT (i)); + SET_REG_N_REFS (i, use_count + REG_N_SETS (i)); + } + else + for (i = 0; i < max_regno; i++) + { + SET_REG_N_SETS (i, DF_REG_DEF_COUNT (i)); + SET_REG_N_REFS (i, DF_REG_USE_COUNT (i) + REG_N_SETS (i)); + } timevar_pop (TV_REG_STATS); } @@ -149,7 +165,7 @@ regstat_bb_compute_ri (unsigned int bb_index, struct df_mw_hardreg **mws_rec; rtx link; - if (!INSN_P (insn)) + if (!NONDEBUG_INSN_P (insn)) continue; /* Increment the live_length for all of the registers that diff --git a/gcc/reload.c b/gcc/reload.c index 257acd0a509..87bdfde32ba 100644 --- a/gcc/reload.c +++ b/gcc/reload.c @@ -6736,6 +6736,8 @@ 
find_equiv_reg (rtx goal, rtx insn, enum reg_class rclass, int other, while (1) { p = PREV_INSN (p); + if (p && DEBUG_INSN_P (p)) + continue; num++; if (p == 0 || LABEL_P (p) || num > PARAM_VALUE (PARAM_MAX_RELOAD_SEARCH_INSNS)) diff --git a/gcc/reload1.c b/gcc/reload1.c index 25af8404e10..d5cd37ce0bd 100644 --- a/gcc/reload1.c +++ b/gcc/reload1.c @@ -801,7 +801,7 @@ reload (rtx first, int global) && GET_MODE (insn) != VOIDmode) PUT_MODE (insn, VOIDmode); - if (INSN_P (insn)) + if (NONDEBUG_INSN_P (insn)) scan_paradoxical_subregs (PATTERN (insn)); if (set != 0 && REG_P (SET_DEST (set))) @@ -1234,6 +1234,48 @@ reload (rtx first, int global) else if (reg_equiv_mem[i]) XEXP (reg_equiv_mem[i], 0) = addr; } + + /* We don't want complex addressing modes in debug insns + if simpler ones will do, so delegitimize equivalences + in debug insns. */ + if (MAY_HAVE_DEBUG_INSNS && reg_renumber[i] < 0) + { + rtx reg = regno_reg_rtx[i]; + rtx equiv = 0; + df_ref use; + + if (reg_equiv_constant[i]) + equiv = reg_equiv_constant[i]; + else if (reg_equiv_invariant[i]) + equiv = reg_equiv_invariant[i]; + else if (reg && MEM_P (reg)) + { + equiv = targetm.delegitimize_address (reg); + if (equiv == reg) + equiv = 0; + } + else if (reg && REG_P (reg) && (int)REGNO (reg) != i) + equiv = reg; + + if (equiv) + for (use = DF_REG_USE_CHAIN (i); use; + use = DF_REF_NEXT_REG (use)) + if (DEBUG_INSN_P (DF_REF_INSN (use))) + { + rtx *loc = DF_REF_LOC (use); + rtx x = *loc; + + if (x == reg) + *loc = copy_rtx (equiv); + else if (GET_CODE (x) == SUBREG + && SUBREG_REG (x) == reg) + *loc = simplify_gen_subreg (GET_MODE (x), equiv, + GET_MODE (reg), + SUBREG_BYTE (x)); + else + gcc_unreachable (); + } + } } /* We must set reload_completed now since the cleanup_subreg_operands call @@ -3151,7 +3193,8 @@ eliminate_regs_in_insn (rtx insn, int replace) || GET_CODE (PATTERN (insn)) == CLOBBER || GET_CODE (PATTERN (insn)) == ADDR_VEC || GET_CODE (PATTERN (insn)) == ADDR_DIFF_VEC - || GET_CODE (PATTERN (insn)) == ASM_INPUT); + || GET_CODE (PATTERN (insn)) == ASM_INPUT + || DEBUG_INSN_P (insn)); return 0; } @@ -6941,7 +6984,7 @@ emit_input_reload_insns (struct insn_chain *chain, struct reload *rl, rl->when_needed, old, rl->out, j, 0)) { rtx temp = PREV_INSN (insn); - while (temp && NOTE_P (temp)) + while (temp && (NOTE_P (temp) || DEBUG_INSN_P (temp))) temp = PREV_INSN (temp); if (temp && NONJUMP_INSN_P (temp) @@ -6984,6 +7027,13 @@ emit_input_reload_insns (struct insn_chain *chain, struct reload *rl, alter_reg (REGNO (old), -1, false); } special = 1; + + /* Adjust any debug insns between temp and insn. */ + while ((temp = NEXT_INSN (temp)) != insn) + if (DEBUG_INSN_P (temp)) + replace_rtx (PATTERN (temp), old, reloadreg); + else + gcc_assert (NOTE_P (temp)); } else { diff --git a/gcc/resource.c b/gcc/resource.c index 08a805519cd..2bb3a1ad1e8 100644 --- a/gcc/resource.c +++ b/gcc/resource.c @@ -976,6 +976,9 @@ mark_target_live_regs (rtx insns, rtx target, struct resources *res) rtx real_insn = insn; enum rtx_code code = GET_CODE (insn); + if (DEBUG_INSN_P (insn)) + continue; + /* If this insn is from the target of a branch, it isn't going to be used in the sequel. If it is used in both cases, this test will not be true. */ diff --git a/gcc/rtl.c b/gcc/rtl.c index 6d68fe87561..feeb40bf61b 100644 --- a/gcc/rtl.c +++ b/gcc/rtl.c @@ -1,6 +1,6 @@ /* RTL utility routines. Copyright (C) 1987, 1988, 1991, 1994, 1997, 1998, 1999, 2000, 2001, 2002, - 2003, 2004, 2005, 2006, 2007, 2008 Free Software Foundation, Inc. 
+   2003, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
 
 This file is part of GCC.
 
@@ -381,6 +381,7 @@ rtx_equal_p_cb (const_rtx x, const_rtx y, rtx_equal_p_callback_function cb)
     case SYMBOL_REF:
       return XSTR (x, 0) == XSTR (y, 0);
 
+    case VALUE:
     case SCRATCH:
     case CONST_DOUBLE:
     case CONST_INT:
@@ -495,6 +496,7 @@ rtx_equal_p (const_rtx x, const_rtx y)
     case SYMBOL_REF:
       return XSTR (x, 0) == XSTR (y, 0);
 
+    case VALUE:
     case SCRATCH:
     case CONST_DOUBLE:
     case CONST_INT:
diff --git a/gcc/rtl.def b/gcc/rtl.def
index 090546b3ebe..bcb5cbcd9b0 100644
--- a/gcc/rtl.def
+++ b/gcc/rtl.def
@@ -2,7 +2,7 @@
    Register Transfer Expressions (rtx's) that make up the Register Transfer
    Language (rtl) used in the Back End of the GNU compiler.
    Copyright (C) 1987, 1988, 1992, 1994, 1995, 1997, 1998, 1999, 2000, 2004,
-   2005, 2006, 2007, 2008
+   2005, 2006, 2007, 2008, 2009
    Free Software Foundation, Inc.
 
 This file is part of GCC.
@@ -81,6 +81,13 @@ along with GCC; see the file COPYING3.  If not see
    value zero.  */
 DEF_RTL_EXPR(UNKNOWN, "UnKnown", "*", RTX_EXTRA)
 
+/* Used in the cselib routines to describe a value.  Objects of this
+   kind are only allocated in cselib.c, in an alloc pool instead of in
+   GC memory.  The only operand of a VALUE is a cselib_val_struct.
+   var-tracking requires this to have a distinct integral value from
+   DECL codes in trees.  */
+DEF_RTL_EXPR(VALUE, "value", "0", RTX_OBJ)
+
 /* ---------------------------------------------------------------------
			    Expressions used in constructing lists.
   --------------------------------------------------------------------- */
@@ -111,6 +118,9 @@ DEF_RTL_EXPR(ADDRESS, "address", "e", RTX_MATCH)
   ---------------------------------------------------------------------- */
 
+/* An annotation for variable assignment tracking.  */
+DEF_RTL_EXPR(DEBUG_INSN, "debug_insn", "iuuBieie", RTX_INSN)
+
 /* An instruction that cannot jump.  */
 DEF_RTL_EXPR(INSN, "insn", "iuuBieie", RTX_INSN)
 
@@ -329,11 +339,6 @@ DEF_RTL_EXPR(CONST, "const", "e", RTX_CONST_OBJ)
    by a SET whose first operand is (PC).  */
 DEF_RTL_EXPR(PC, "pc", "", RTX_OBJ)
 
-/* Used in the cselib routines to describe a value.  Objects of this
-   kind are only allocated in cselib.c, in an alloc pool instead of
-   in GC memory.  The only operand of a VALUE is a cselib_val_struct.  */
-DEF_RTL_EXPR(VALUE, "value", "0", RTX_OBJ)
-
 /* A register.  The "operand" is the register number, accessed with
    the REGNO macro.  If this number is less than FIRST_PSEUDO_REGISTER
    then a hardware register is being referred to.  The second operand
diff --git a/gcc/rtl.h b/gcc/rtl.h
index cf07348a3cb..64dba7aff08 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -385,9 +385,18 @@ struct GTY(()) rtvec_def {
 /* Predicate yielding nonzero iff X is an insn that cannot jump.  */
 #define NONJUMP_INSN_P(X) (GET_CODE (X) == INSN)
 
+/* Predicate yielding nonzero iff X is a debug note/insn.  */
+#define DEBUG_INSN_P(X) (GET_CODE (X) == DEBUG_INSN)
+
+/* Predicate yielding nonzero iff X is an insn that is not a debug insn.  */
+#define NONDEBUG_INSN_P(X) (INSN_P (X) && !DEBUG_INSN_P (X))
+
+/* Nonzero if DEBUG_INSN_P may possibly hold.  */
+#define MAY_HAVE_DEBUG_INSNS MAY_HAVE_DEBUG_STMTS
+
 /* Predicate yielding nonzero iff X is a real insn.  */
 #define INSN_P(X) \
-  (NONJUMP_INSN_P (X) || JUMP_P (X) || CALL_P (X))
+  (NONJUMP_INSN_P (X) || DEBUG_INSN_P (X) || JUMP_P (X) || CALL_P (X))
 
 /* Predicate yielding nonzero iff X is a note insn.  */
 #define NOTE_P(X) (GET_CODE (X) == NOTE)
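These predicates carry the whole patch: INSN_P now also accepts DEBUG_INSNs, so any pass whose output must not depend on debug insns switches to NONDEBUG_INSN_P, as the many mechanical hunks in this commit do. An illustrative sketch (the function is ours, not part of the patch):

static int
count_real_insns_sketch (basic_block bb)
{
  rtx insn;
  int n = 0;

  /* NONDEBUG_INSN_P is INSN_P && !DEBUG_INSN_P.  Counting with it
     yields the same result with and without -fvar-tracking-assignments,
     which plain INSN_P no longer guarantees.  */
  FOR_BB_INSNS (bb, insn)
    if (NONDEBUG_INSN_P (insn))
      n++;

  return n;
}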
@@ -764,12 +773,13 @@ extern void rtl_check_failed_flag (const char *, const_rtx, const char *,
 #define INSN_CODE(INSN) XINT (INSN, 6)
 
 #define RTX_FRAME_RELATED_P(RTX)					\
-  (RTL_FLAG_CHECK5("RTX_FRAME_RELATED_P", (RTX), INSN, CALL_INSN,	\
-		   JUMP_INSN, BARRIER, SET)->frame_related)
+  (RTL_FLAG_CHECK6("RTX_FRAME_RELATED_P", (RTX), DEBUG_INSN, INSN,	\
+		   CALL_INSN, JUMP_INSN, BARRIER, SET)->frame_related)
 
 /* 1 if RTX is an insn that has been deleted.  */
 #define INSN_DELETED_P(RTX)						\
-  (RTL_FLAG_CHECK6("INSN_DELETED_P", (RTX), INSN, CALL_INSN, JUMP_INSN,	\
+  (RTL_FLAG_CHECK7("INSN_DELETED_P", (RTX), DEBUG_INSN, INSN,		\
+		   CALL_INSN, JUMP_INSN,				\
		   CODE_LABEL, BARRIER, NOTE)->volatil)
 
 /* 1 if RTX is a call to a const function.  Built from ECF_CONST and
@@ -878,16 +888,46 @@ extern const char * const reg_note_name[];
	&& NOTE_KIND (INSN) == NOTE_INSN_BASIC_BLOCK)
 
 /* Variable declaration and the location of a variable.  */
-#define NOTE_VAR_LOCATION_DECL(INSN)	(XCTREE (XCEXP (INSN, 4, NOTE), \
-						 0, VAR_LOCATION))
-#define NOTE_VAR_LOCATION_LOC(INSN)	(XCEXP (XCEXP (INSN, 4, NOTE),  \
-						1, VAR_LOCATION))
+#define PAT_VAR_LOCATION_DECL(PAT) (XCTREE ((PAT), 0, VAR_LOCATION))
+#define PAT_VAR_LOCATION_LOC(PAT) (XCEXP ((PAT), 1, VAR_LOCATION))
 
 /* Initialization status of the variable in the location.  Status
    can be unknown, uninitialized or initialized.  See enumeration
    type below.  */
-#define NOTE_VAR_LOCATION_STATUS(INSN) \
-  ((enum var_init_status) (XCINT (XCEXP (INSN, 4, NOTE), 2, VAR_LOCATION)))
+#define PAT_VAR_LOCATION_STATUS(PAT) \
+  ((enum var_init_status) (XCINT ((PAT), 2, VAR_LOCATION)))
+
+/* Accessors for a NOTE_INSN_VAR_LOCATION.  */
+#define NOTE_VAR_LOCATION_DECL(NOTE) \
+  PAT_VAR_LOCATION_DECL (NOTE_VAR_LOCATION (NOTE))
+#define NOTE_VAR_LOCATION_LOC(NOTE) \
+  PAT_VAR_LOCATION_LOC (NOTE_VAR_LOCATION (NOTE))
+#define NOTE_VAR_LOCATION_STATUS(NOTE) \
+  PAT_VAR_LOCATION_STATUS (NOTE_VAR_LOCATION (NOTE))
+
+/* The VAR_LOCATION rtx in a DEBUG_INSN.  */
+#define INSN_VAR_LOCATION(INSN) PATTERN (INSN)
+
+/* Accessors for a tree-expanded var location debug insn.  */
+#define INSN_VAR_LOCATION_DECL(INSN) \
+  PAT_VAR_LOCATION_DECL (INSN_VAR_LOCATION (INSN))
+#define INSN_VAR_LOCATION_LOC(INSN) \
+  PAT_VAR_LOCATION_LOC (INSN_VAR_LOCATION (INSN))
+#define INSN_VAR_LOCATION_STATUS(INSN) \
+  PAT_VAR_LOCATION_STATUS (INSN_VAR_LOCATION (INSN))
+
+/* Expand to the RTL that denotes an unknown variable location in a
+   DEBUG_INSN.  */
+#define gen_rtx_UNKNOWN_VAR_LOC() (gen_rtx_CLOBBER (VOIDmode, const0_rtx))
+
+/* Determine whether X is such an unknown location.  */
+#define VAR_LOC_UNKNOWN_P(X) \
+  (GET_CODE (X) == CLOBBER && XEXP ((X), 0) == const0_rtx)
+
+/* 1 if RTX is emitted after a call, but it should take effect before
+   the call returns.  */
+#define NOTE_DURING_CALL_P(RTX) \
+  (RTL_FLAG_CHECK1("NOTE_VAR_LOCATION_DURING_CALL_P", (RTX), NOTE)->call)
 
 /* Possible initialization status of a variable.  When requested
    by the user, this information is tracked and recorded in the DWARF
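When a transformation invalidates the value a debug insn describes, the insn is not deleted: its location is reset to the unknown marker defined above, so variable coverage degrades instead of code generation changing. The regrename.c and regmove.c hunks elsewhere in this chunk use exactly this idiom; a self-contained sketch of it (helper name ours):

static void
invalidate_debug_loc_sketch (rtx insn)
{
  gcc_assert (DEBUG_INSN_P (insn));

  if (!VAR_LOC_UNKNOWN_P (INSN_VAR_LOCATION_LOC (insn)))
    {
      /* Keep the variable binding, but mark its location unknown.  */
      INSN_VAR_LOCATION_LOC (insn) = gen_rtx_UNKNOWN_VAR_LOC ();
      df_insn_rescan (insn);	/* Keep dataflow information in sync.  */
    }
}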
@@ -1259,8 +1299,9 @@ do {				\
 /* During sched, 1 if RTX is an insn that must be scheduled together
    with the preceding insn.  */
 #define SCHED_GROUP_P(RTX)						\
-  (RTL_FLAG_CHECK3("SCHED_GROUP_P", (RTX), INSN, JUMP_INSN, CALL_INSN	\
-		   )->in_struct)
+  (RTL_FLAG_CHECK4("SCHED_GROUP_P", (RTX), DEBUG_INSN, INSN,		\
+		   JUMP_INSN, CALL_INSN					\
+		   )->in_struct)
 
 /* For a SET rtx, SET_DEST is the place that is set
    and SET_SRC is the value it is set to.  */
@@ -1593,6 +1634,9 @@ extern rtx emit_jump_insn_before_setloc (rtx, rtx, int);
 extern rtx emit_call_insn_before (rtx, rtx);
 extern rtx emit_call_insn_before_noloc (rtx, rtx);
 extern rtx emit_call_insn_before_setloc (rtx, rtx, int);
+extern rtx emit_debug_insn_before (rtx, rtx);
+extern rtx emit_debug_insn_before_noloc (rtx, rtx);
+extern rtx emit_debug_insn_before_setloc (rtx, rtx, int);
 extern rtx emit_barrier_before (rtx);
 extern rtx emit_label_before (rtx, rtx);
 extern rtx emit_note_before (enum insn_note, rtx);
@@ -1605,10 +1649,14 @@ extern rtx emit_jump_insn_after_setloc (rtx, rtx, int);
 extern rtx emit_call_insn_after (rtx, rtx);
 extern rtx emit_call_insn_after_noloc (rtx, rtx);
 extern rtx emit_call_insn_after_setloc (rtx, rtx, int);
+extern rtx emit_debug_insn_after (rtx, rtx);
+extern rtx emit_debug_insn_after_noloc (rtx, rtx);
+extern rtx emit_debug_insn_after_setloc (rtx, rtx, int);
 extern rtx emit_barrier_after (rtx);
 extern rtx emit_label_after (rtx, rtx);
 extern rtx emit_note_after (enum insn_note, rtx);
 extern rtx emit_insn (rtx);
+extern rtx emit_debug_insn (rtx);
 extern rtx emit_jump_insn (rtx);
 extern rtx emit_call_insn (rtx);
 extern rtx emit_label (rtx);
@@ -1620,6 +1668,7 @@ extern rtx emit_clobber (rtx);
 extern rtx gen_use (rtx);
 extern rtx emit_use (rtx);
 extern rtx make_insn_raw (rtx);
+extern rtx make_debug_insn_raw (rtx);
 extern rtx make_jump_insn_raw (rtx);
 extern void add_function_usage_to (rtx, rtx);
 extern rtx last_call_insn (void);
@@ -1628,6 +1677,8 @@ extern rtx next_insn (rtx);
 extern rtx prev_nonnote_insn (rtx);
 extern rtx next_nonnote_insn (rtx);
 extern rtx next_nonnote_insn_bb (rtx);
+extern rtx prev_nondebug_insn (rtx);
+extern rtx next_nondebug_insn (rtx);
 extern rtx prev_real_insn (rtx);
 extern rtx next_real_insn (rtx);
 extern rtx prev_active_insn (rtx);
@@ -1699,6 +1750,7 @@ extern rtx simplify_gen_subreg (enum machine_mode, rtx, enum machine_mode,
 extern rtx simplify_replace_rtx (rtx, const_rtx, rtx);
 extern rtx simplify_rtx (const_rtx);
 extern rtx avoid_constant_pool_reference (rtx);
+extern rtx delegitimize_mem_from_attrs (rtx);
 extern bool mode_signbit_p (enum machine_mode, const_rtx);
 
 /* In reginfo.c */
@@ -2127,6 +2179,7 @@ extern void set_used_flags (rtx);
 extern void reorder_insns (rtx, rtx, rtx);
 extern void reorder_insns_nobb (rtx, rtx, rtx);
 extern int get_max_uid (void);
+extern int get_max_insn_count (void);
 extern int in_sequence_p (void);
 extern void force_next_line_note (void);
 extern void init_emit (void);
@@ -2327,6 +2380,8 @@ extern void invert_br_probabilities (rtx);
 extern bool expensive_function_p (int);
 /* In cfgexpand.c */
 extern void add_reg_br_prob_note (rtx last, int probability);
+extern rtx wrap_constant (enum machine_mode, rtx);
+extern rtx unwrap_constant (rtx);
 
 /* In var-tracking.c */
 extern unsigned int variable_tracking_main (void);
@@ -2372,7 +2427,9 @@ extern void insn_locators_alloc (void);
 extern void insn_locators_free (void);
 extern void insn_locators_finalize (void);
 extern void set_curr_insn_source_location (location_t);
+extern location_t get_curr_insn_source_location (void);
 extern void set_curr_insn_block (tree);
+extern tree get_curr_insn_block (void);
 extern int curr_insn_locator (void);
 extern bool optimize_insn_for_size_p (void);
 extern bool optimize_insn_for_speed_p (void);
diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
index 27a46d9e73d..7a734eb66e5 100644
--- a/gcc/rtlanal.c
+++ b/gcc/rtlanal.c
@@ -741,7 +741,7 @@ reg_used_between_p (const_rtx reg, const_rtx from_insn, const_rtx to_insn)
     return 0;
 
for (insn = NEXT_INSN (from_insn); insn != to_insn; insn = NEXT_INSN (insn)) - if (INSN_P (insn) + if (NONDEBUG_INSN_P (insn) && (reg_overlap_mentioned_p (reg, PATTERN (insn)) || (CALL_P (insn) && find_reg_fusage (insn, USE, reg)))) return 1; @@ -2148,6 +2148,7 @@ side_effects_p (const_rtx x) case SCRATCH: case ADDR_VEC: case ADDR_DIFF_VEC: + case VAR_LOCATION: return 0; case CLOBBER: @@ -4725,7 +4726,11 @@ canonicalize_condition (rtx insn, rtx cond, int reverse, rtx *earliest, stop if it isn't a single set or if it has a REG_INC note because we don't want to bother dealing with it. */ - if ((prev = prev_nonnote_insn (prev)) == 0 + do + prev = prev_nonnote_insn (prev); + while (prev && DEBUG_INSN_P (prev)); + + if (prev == 0 || !NONJUMP_INSN_P (prev) || FIND_REG_INC_NOTE (prev, NULL_RTX) /* In cfglayout mode, there do not have to be labels at the diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c index 8175ac06fea..17df6a5d1cd 100644 --- a/gcc/sched-deps.c +++ b/gcc/sched-deps.c @@ -1,7 +1,7 @@ /* Instruction scheduling pass. This file computes dependencies between instructions. Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, - 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 + 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation, Inc. Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by, and currently maintained by, Jim Wilson (wilson@cygnus.com) @@ -650,7 +650,8 @@ sd_lists_size (const_rtx insn, sd_list_types_def list_types) bool resolved_p; sd_next_list (insn, &list_types, &list, &resolved_p); - size += DEPS_LIST_N_LINKS (list); + if (list) + size += DEPS_LIST_N_LINKS (list); } return size; @@ -673,6 +674,9 @@ sd_init_insn (rtx insn) INSN_FORW_DEPS (insn) = create_deps_list (); INSN_RESOLVED_FORW_DEPS (insn) = create_deps_list (); + if (DEBUG_INSN_P (insn)) + DEBUG_INSN_SCHED_P (insn) = TRUE; + /* ??? It would be nice to allocate dependency caches here. */ } @@ -682,6 +686,12 @@ sd_finish_insn (rtx insn) { /* ??? It would be nice to deallocate dependency caches here. */ + if (DEBUG_INSN_P (insn)) + { + gcc_assert (DEBUG_INSN_SCHED_P (insn)); + DEBUG_INSN_SCHED_P (insn) = FALSE; + } + free_deps_list (INSN_HARD_BACK_DEPS (insn)); INSN_HARD_BACK_DEPS (insn) = NULL; @@ -1181,6 +1191,7 @@ sd_add_dep (dep_t dep, bool resolved_p) rtx insn = DEP_CON (dep); gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem); + gcc_assert (!DEBUG_INSN_P (elem) || DEBUG_INSN_P (insn)); if ((current_sched_info->flags & DO_SPECULATION) && !sched_insn_is_legitimate_for_speculation_p (insn, DEP_STATUS (dep))) @@ -1462,7 +1473,7 @@ fixup_sched_groups (rtx insn) if (pro == i) goto next_link; - } while (SCHED_GROUP_P (i)); + } while (SCHED_GROUP_P (i) || DEBUG_INSN_P (i)); if (! sched_insns_conditions_mutex_p (i, pro)) add_dependence (i, pro, DEP_TYPE (dep)); @@ -1472,6 +1483,8 @@ fixup_sched_groups (rtx insn) delete_all_dependences (insn); prev_nonnote = prev_nonnote_insn (insn); + while (DEBUG_INSN_P (prev_nonnote)) + prev_nonnote = prev_nonnote_insn (prev_nonnote); if (BLOCK_FOR_INSN (insn) == BLOCK_FOR_INSN (prev_nonnote) && ! sched_insns_conditions_mutex_p (insn, prev_nonnote)) add_dependence (insn, prev_nonnote, REG_DEP_ANTI); @@ -1801,8 +1814,7 @@ sched_analyze_reg (struct deps *deps, int regno, enum machine_mode mode, already cross one. 
*/ if (REG_N_CALLS_CROSSED (regno) == 0) { - if (!deps->readonly - && ref == USE) + if (!deps->readonly && ref == USE && !DEBUG_INSN_P (insn)) deps->sched_before_next_call = alloc_INSN_LIST (insn, deps->sched_before_next_call); else @@ -2059,6 +2071,12 @@ sched_analyze_2 (struct deps *deps, rtx x, rtx insn) rtx pending, pending_mem; rtx t = x; + if (DEBUG_INSN_P (insn)) + { + sched_analyze_2 (deps, XEXP (x, 0), insn); + return; + } + if (sched_deps_info->use_cselib) { t = shallow_copy_rtx (t); @@ -2287,6 +2305,8 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn) { rtx next; next = next_nonnote_insn (insn); + while (next && DEBUG_INSN_P (next)) + next = next_nonnote_insn (next); if (next && BARRIER_P (next)) reg_pending_barrier = MOVE_BARRIER; else @@ -2361,9 +2381,49 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn) || (NONJUMP_INSN_P (insn) && control_flow_insn_p (insn))) reg_pending_barrier = MOVE_BARRIER; + /* Add register dependencies for insn. */ + if (DEBUG_INSN_P (insn)) + { + rtx prev = deps->last_debug_insn; + rtx u; + + if (!deps->readonly) + deps->last_debug_insn = insn; + + if (prev) + add_dependence (insn, prev, REG_DEP_ANTI); + + add_dependence_list (insn, deps->last_function_call, 1, + REG_DEP_ANTI); + + for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1)) + if (! JUMP_P (XEXP (u, 0)) + || !sel_sched_p ()) + add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI); + + EXECUTE_IF_SET_IN_REG_SET (reg_pending_uses, 0, i, rsi) + { + struct deps_reg *reg_last = &deps->reg_last[i]; + add_dependence_list (insn, reg_last->sets, 1, REG_DEP_ANTI); + add_dependence_list (insn, reg_last->clobbers, 1, REG_DEP_ANTI); + } + CLEAR_REG_SET (reg_pending_uses); + + /* Quite often, a debug insn will refer to stuff in the + previous instruction, but the reason we want this + dependency here is to make sure the scheduler doesn't + gratuitously move a debug insn ahead. This could dirty + DF flags and cause additional analysis that wouldn't have + occurred in compilation without debug insns, and such + additional analysis can modify the generated code. */ + prev = PREV_INSN (insn); + + if (prev && NONDEBUG_INSN_P (prev)) + add_dependence (insn, prev, REG_DEP_ANTI); + } /* If the current insn is conditional, we can't free any of the lists. */ - if (sched_has_condition_p (insn)) + else if (sched_has_condition_p (insn)) { EXECUTE_IF_SET_IN_REG_SET (reg_pending_uses, 0, i, rsi) { @@ -2557,7 +2617,30 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn) int src_regno, dest_regno; if (set == NULL) - goto end_call_group; + { + if (DEBUG_INSN_P (insn)) + /* We don't want to mark debug insns as part of the same + sched group. We know they really aren't, but if we use + debug insns to tell that a call group is over, we'll + get different code if debug insns are not there and + instructions that follow seem like they should be part + of the call group. + + Also, if we did, fixup_sched_groups() would move the + deps of the debug insn to the call insn, modifying + non-debug post-dependency counts of the debug insn + dependencies and otherwise messing with the scheduling + order. + + Instead, let such debug insns be scheduled freely, but + keep the call group open in case there are insns that + should be part of it afterwards. Since we grant debug + insns higher priority than even sched group insns, it + will all turn out all right. 
 */
+	    goto debug_dont_end_call_group;
+	  else
+	    goto end_call_group;
+	}
 
       tmp = SET_DEST (set);
       if (GET_CODE (tmp) == SUBREG)
@@ -2602,6 +2685,7 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
	}
     }
 
+ debug_dont_end_call_group:
   if ((current_sched_info->flags & DO_SPECULATION)
       && !sched_insn_is_legitimate_for_speculation_p (insn, 0))
     /* INSN has an internal dependency (e.g. r14 = [r14]) and thus cannot
@@ -2628,7 +2712,7 @@ deps_analyze_insn (struct deps *deps, rtx insn)
   if (sched_deps_info->start_insn)
     sched_deps_info->start_insn (insn);
 
-  if (NONJUMP_INSN_P (insn) || JUMP_P (insn))
+  if (NONJUMP_INSN_P (insn) || DEBUG_INSN_P (insn) || JUMP_P (insn))
     {
       /* Make each JUMP_INSN (but not a speculative check)
	 a scheduling barrier for memory references.  */
@@ -2758,6 +2842,8 @@ deps_start_bb (struct deps *deps, rtx head)
     {
       rtx insn = prev_nonnote_insn (head);
 
+      while (insn && DEBUG_INSN_P (insn))
+	insn = prev_nonnote_insn (insn);
       if (insn && CALL_P (insn))
	deps->in_post_call_group_p = post_call_initial;
     }
@@ -2873,6 +2959,7 @@ init_deps (struct deps *deps)
   deps->last_function_call = 0;
   deps->sched_before_next_call = 0;
   deps->in_post_call_group_p = not_post_call;
+  deps->last_debug_insn = 0;
   deps->last_reg_pending_barrier = NOT_A_BARRIER;
   deps->readonly = 0;
 }
diff --git a/gcc/sched-ebb.c b/gcc/sched-ebb.c
index b3e6c7a7265..c6dc55330d9 100644
--- a/gcc/sched-ebb.c
+++ b/gcc/sched-ebb.c
@@ -607,9 +607,9 @@ schedule_ebbs (void)
	 a note or two.  */
       while (head != tail)
	{
-	  if (NOTE_P (head))
+	  if (NOTE_P (head) || BOUNDARY_DEBUG_INSN_P (head))
	    head = NEXT_INSN (head);
-	  else if (NOTE_P (tail))
+	  else if (NOTE_P (tail) || BOUNDARY_DEBUG_INSN_P (tail))
	    tail = PREV_INSN (tail);
	  else if (LABEL_P (head))
	    head = NEXT_INSN (head);
diff --git a/gcc/sched-int.h b/gcc/sched-int.h
index aa5007ba863..518fcb53e28 100644
--- a/gcc/sched-int.h
+++ b/gcc/sched-int.h
@@ -181,13 +181,15 @@ extern bool sel_insn_is_speculation_check (rtx);
    FIRST is the index of the element with the highest priority; i.e. the
    last one in the ready list, since elements are ordered by ascending
    priority.
-   N_READY determines how many insns are on the ready list.  */
+   N_READY determines how many insns are on the ready list.
+   N_DEBUG determines how many debug insns are on the ready list.  */
 struct ready_list
 {
   rtx *vec;
   int veclen;
   int first;
   int n_ready;
+  int n_debug;
 };
 
 extern char *ready_try;
@@ -509,6 +511,9 @@ struct deps
      the call.  */
   enum post_call_group in_post_call_group_p;
 
+  /* The last debug insn we've seen.  */
+  rtx last_debug_insn;
+
   /* The maximum register number for the following arrays.  Before reload
      this is max_reg_num; after reload it is FIRST_PSEUDO_REGISTER.  */
   int max_reg;
@@ -800,6 +805,23 @@ extern VEC(haifa_deps_insn_data_def, heap) *h_d_i_d;
 #define IS_SPECULATION_BRANCHY_CHECK_P(INSN) \
   (RECOVERY_BLOCK (INSN) != NULL && RECOVERY_BLOCK (INSN) != EXIT_BLOCK_PTR)
 
+/* The unchanging bit tracks whether a debug insn is to be handled
+   like an insn (i.e., schedule it) or like a note (e.g., it is next
+   to a basic block boundary).  */
+#define DEBUG_INSN_SCHED_P(insn) \
+  (RTL_FLAG_CHECK1("DEBUG_INSN_SCHED_P", (insn), DEBUG_INSN)->unchanging)
+
+/* True if INSN is a debug insn that is next to a basic block
+   boundary, i.e., it is to be handled by the scheduler like a
+   note.  */
+#define BOUNDARY_DEBUG_INSN_P(insn) \
+  (DEBUG_INSN_P (insn) && !DEBUG_INSN_SCHED_P (insn))
+/* True if INSN is a debug insn that is not next to a basic block
+   boundary, i.e., it is to be handled by the scheduler like an
+   insn.  */
+#define SCHEDULE_DEBUG_INSN_P(insn) \
+  (DEBUG_INSN_P (insn) && DEBUG_INSN_SCHED_P (insn))
+
 /* Dep status (aka ds_t) of the link encapsulates information, that is needed
    for speculative scheduling.  Namely, it is 4 integers in the range
    [0, MAX_DEP_WEAK] and 3 bits.
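Reading DEBUG_INSN_SCHED_P together with sd_init_insn in the sched-deps.c hunks above: the bit is set when dependency analysis initializes a debug insn, so a debug insn the scheduler never initializes, one trimmed off at a region boundary, tests as BOUNDARY_DEBUG_INSN_P and is handled like a note. A sketch of that trimming idiom, modeled on the schedule_ebbs change above (helper name ours, not from the patch):

static void
trim_region_sketch (rtx *headp, rtx *tailp)
{
  /* Debug insns glued to a block boundary stay outside the scheduling
     region; the remaining debug insns participate like ordinary insns.  */
  while (*headp != *tailp
	 && (NOTE_P (*headp) || BOUNDARY_DEBUG_INSN_P (*headp)))
    *headp = NEXT_INSN (*headp);
  while (*headp != *tailp
	 && (NOTE_P (*tailp) || BOUNDARY_DEBUG_INSN_P (*tailp)))
    *tailp = PREV_INSN (*tailp);
}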
@@ -1342,7 +1364,8 @@ sd_iterator_cond (sd_iterator_def *it_ptr, dep_t *dep_ptr)
 
	  it_ptr->linkp = &DEPS_LIST_FIRST (list);
 
-	  return sd_iterator_cond (it_ptr, dep_ptr);
+	  if (list)
+	    return sd_iterator_cond (it_ptr, dep_ptr);
	}
 
       *dep_ptr = NULL;
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index a913faa217a..91ac01050ff 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -530,7 +530,20 @@ find_single_block_region (bool ebbs_p)
 static int
 rgn_estimate_number_of_insns (basic_block bb)
 {
-  return INSN_LUID (BB_END (bb)) - INSN_LUID (BB_HEAD (bb));
+  int count;
+
+  count = INSN_LUID (BB_END (bb)) - INSN_LUID (BB_HEAD (bb));
+
+  if (MAY_HAVE_DEBUG_INSNS)
+    {
+      rtx insn;
+
+      FOR_BB_INSNS (bb, insn)
+	if (DEBUG_INSN_P (insn))
+	  count--;
+    }
+
+  return count;
 }
 
 /* Update number of blocks and the estimate for number of insns
@@ -2129,7 +2142,7 @@ init_ready_list (void)
	  src_head = head;
 
	  for (insn = src_head; insn != src_next_tail; insn = NEXT_INSN (insn))
-	    if (INSN_P (insn))
+	    if (INSN_P (insn) && !BOUNDARY_DEBUG_INSN_P (insn))
	      try_ready (insn);
	}
     }
@@ -2438,6 +2451,9 @@ add_branch_dependences (rtx head, rtx tail)
      are not moved before reload because we can wind up with register
      allocation failures.  */
 
+  while (tail != head && DEBUG_INSN_P (tail))
+    tail = PREV_INSN (tail);
+
   insn = tail;
   last = 0;
   while (CALL_P (insn)
@@ -2472,7 +2488,9 @@ add_branch_dependences (rtx head, rtx tail)
       if (insn == head)
	break;
 
-      insn = PREV_INSN (insn);
+      do
+	insn = PREV_INSN (insn);
+      while (insn != head && DEBUG_INSN_P (insn));
     }
 
   /* Make sure these insns are scheduled last in their block.  */
@@ -2482,7 +2500,8 @@ add_branch_dependences (rtx head, rtx tail)
     {
       insn = prev_nonnote_insn (insn);
 
-      if (TEST_BIT (insn_referenced, INSN_LUID (insn)))
+      if (TEST_BIT (insn_referenced, INSN_LUID (insn))
+	  || DEBUG_INSN_P (insn))
	continue;
 
       if (!
sched_insns_conditions_mutex_p (last, insn)) @@ -2719,6 +2738,9 @@ free_block_dependencies (int bb) get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail); + if (no_real_insns_p (head, tail)) + return; + sched_free_deps (head, tail, true); } @@ -2876,6 +2898,9 @@ compute_priorities (void) gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb)); get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail); + if (no_real_insns_p (head, tail)) + continue; + rgn_n_insns += set_priorities (head, tail); } current_sched_info->sched_max_insns_priority++; diff --git a/gcc/sched-vis.c b/gcc/sched-vis.c index b54f47e4337..89230efa34c 100644 --- a/gcc/sched-vis.c +++ b/gcc/sched-vis.c @@ -556,6 +556,10 @@ print_pattern (char *buf, const_rtx x, int verbose) print_value (t1, XEXP (x, 0), verbose); sprintf (buf, "use %s", t1); break; + case VAR_LOCATION: + print_value (t1, PAT_VAR_LOCATION_LOC (x), verbose); + sprintf (buf, "loc %s", t1); + break; case COND_EXEC: if (GET_CODE (COND_EXEC_TEST (x)) == NE && XEXP (COND_EXEC_TEST (x), 1) == const0_rtx) @@ -658,6 +662,34 @@ print_insn (char *buf, const_rtx x, int verbose) #endif sprintf (buf, " %4d %s", INSN_UID (x), t); break; + + case DEBUG_INSN: + { + const char *name = "?"; + + if (DECL_P (INSN_VAR_LOCATION_DECL (insn))) + { + tree id = DECL_NAME (INSN_VAR_LOCATION_DECL (insn)); + if (id) + name = IDENTIFIER_POINTER (id); + else + { + char idbuf[32]; + sprintf (idbuf, "D.%i", + DECL_UID (INSN_VAR_LOCATION_DECL (insn))); + name = idbuf; + } + } + if (VAR_LOC_UNKNOWN_P (INSN_VAR_LOCATION_LOC (insn))) + sprintf (buf, " %4d: debug %s optimized away", INSN_UID (insn), name); + else + { + print_pattern (t, INSN_VAR_LOCATION_LOC (insn), verbose); + sprintf (buf, " %4d: debug %s => %s", INSN_UID (insn), name, t); + } + } + break; + case JUMP_INSN: print_pattern (t, PATTERN (x), verbose); #ifdef INSN_SCHEDULING diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c index 2932800f569..9a61ed84dca 100644 --- a/gcc/sel-sched-ir.c +++ b/gcc/sel-sched-ir.c @@ -157,6 +157,7 @@ static void sel_remove_loop_preheader (void); static bool insn_is_the_only_one_in_bb_p (insn_t); static void create_initial_data_sets (basic_block); +static void free_av_set (basic_block); static void invalidate_av_set (basic_block); static void extend_insn_data (void); static void sel_init_new_insn (insn_t, int); @@ -1044,10 +1045,10 @@ get_nop_from_pool (insn_t insn) /* Remove NOP from the instruction stream and return it to the pool. */ void -return_nop_to_pool (insn_t nop) +return_nop_to_pool (insn_t nop, bool full_tidying) { gcc_assert (INSN_IN_STREAM_P (nop)); - sel_remove_insn (nop, false, true); + sel_remove_insn (nop, false, full_tidying); if (nop_pool.n == nop_pool.s) nop_pool.v = XRESIZEVEC (rtx, nop_pool.v, @@ -2362,6 +2363,8 @@ setup_id_for_insn (idata_t id, insn_t insn, bool force_unique_p) type = SET; else if (type == JUMP_INSN && simplejump_p (insn)) type = PC; + else if (type == DEBUG_INSN) + type = !force_unique_p ? USE : INSN; IDATA_TYPE (id) = type; IDATA_REG_SETS (id) = get_clear_regset_from_pool (); @@ -3487,7 +3490,7 @@ maybe_tidy_empty_bb (basic_block bb) /* Keep empty bb only if this block immediately precedes EXIT and has incoming non-fallthrough edge. Otherwise remove it. 
*/ - if (!sel_bb_empty_p (bb) + if (!sel_bb_empty_p (bb) || (single_succ_p (bb) && single_succ (bb) == EXIT_BLOCK_PTR && (!single_pred_p (bb) @@ -3559,6 +3562,7 @@ bool tidy_control_flow (basic_block xbb, bool full_tidying) { bool changed = true; + insn_t first, last; /* First check whether XBB is empty. */ changed = maybe_tidy_empty_bb (xbb); @@ -3575,6 +3579,20 @@ tidy_control_flow (basic_block xbb, bool full_tidying) tidy_fallthru_edge (EDGE_SUCC (xbb, 0)); } + first = sel_bb_head (xbb); + last = sel_bb_end (xbb); + if (MAY_HAVE_DEBUG_INSNS) + { + if (first != last && DEBUG_INSN_P (first)) + do + first = NEXT_INSN (first); + while (first != last && (DEBUG_INSN_P (first) || NOTE_P (first))); + + if (first != last && DEBUG_INSN_P (last)) + do + last = PREV_INSN (last); + while (first != last && (DEBUG_INSN_P (last) || NOTE_P (last))); + } /* Check if there is an unnecessary jump in previous basic block leading to next basic block left after removing INSN from stream. If it is so, remove that jump and redirect edge to current @@ -3582,9 +3600,9 @@ tidy_control_flow (basic_block xbb, bool full_tidying) when NOP will be deleted several instructions later with its basic block we will not get a jump to next instruction, which can be harmful. */ - if (sel_bb_head (xbb) == sel_bb_end (xbb) + if (first == last && !sel_bb_empty_p (xbb) - && INSN_NOP_P (sel_bb_end (xbb)) + && INSN_NOP_P (last) /* Flow goes fallthru from current block to the next. */ && EDGE_COUNT (xbb->succs) == 1 && (EDGE_SUCC (xbb, 0)->flags & EDGE_FALLTHRU) @@ -3624,6 +3642,21 @@ sel_remove_insn (insn_t insn, bool only_disconnect, bool full_tidying) gcc_assert (INSN_IN_STREAM_P (insn)); + if (DEBUG_INSN_P (insn) && BB_AV_SET_VALID_P (bb)) + { + expr_t expr; + av_set_iterator i; + + /* When we remove a debug insn that is head of a BB, it remains + in the AV_SET of the block, but it shouldn't. */ + FOR_EACH_EXPR_1 (expr, i, &BB_AV_SET (bb)) + if (EXPR_INSN_RTX (expr) == insn) + { + av_set_iter_remove (&i); + break; + } + } + if (only_disconnect) { insn_t prev = PREV_INSN (insn); @@ -3662,7 +3695,7 @@ sel_estimate_number_of_insns (basic_block bb) insn_t insn = NEXT_INSN (BB_HEAD (bb)), next_tail = NEXT_INSN (BB_END (bb)); for (; insn != next_tail; insn = NEXT_INSN (insn)) - if (INSN_P (insn)) + if (NONDEBUG_INSN_P (insn)) res++; return res; @@ -5363,6 +5396,8 @@ create_insn_rtx_from_pattern (rtx pattern, rtx label) if (label == NULL_RTX) insn_rtx = emit_insn (pattern); + else if (DEBUG_INSN_P (label)) + insn_rtx = emit_debug_insn (pattern); else { insn_rtx = emit_jump_insn (pattern); @@ -5398,6 +5433,10 @@ create_copy_of_insn_rtx (rtx insn_rtx) { rtx res; + if (DEBUG_INSN_P (insn_rtx)) + return create_insn_rtx_from_pattern (copy_rtx (PATTERN (insn_rtx)), + insn_rtx); + gcc_assert (NONJUMP_INSN_P (insn_rtx)); res = create_insn_rtx_from_pattern (copy_rtx (PATTERN (insn_rtx)), diff --git a/gcc/sel-sched-ir.h b/gcc/sel-sched-ir.h index 3d219e1568a..9bc90bda559 100644 --- a/gcc/sel-sched-ir.h +++ b/gcc/sel-sched-ir.h @@ -1,6 +1,6 @@ /* Instruction scheduling pass. This file contains definitions used internally in the scheduler. - Copyright (C) 2006, 2007, 2008 Free Software Foundation, Inc. + Copyright (C) 2006, 2007, 2008, 2009 Free Software Foundation, Inc. This file is part of GCC. 
@@ -1019,6 +1019,7 @@ struct succs_info extern basic_block after_recovery; extern insn_t sel_bb_head (basic_block); +extern insn_t sel_bb_end (basic_block); extern bool sel_bb_empty_p (basic_block); extern bool in_current_region_p (basic_block); @@ -1079,6 +1080,27 @@ get_loop_exit_edges_unique_dests (const struct loop *loop) return edges; } +static bool +sel_bb_empty_or_nop_p (basic_block bb) +{ + insn_t first = sel_bb_head (bb), last; + + if (first == NULL_RTX) + return true; + + if (!INSN_NOP_P (first)) + return false; + + if (bb == EXIT_BLOCK_PTR) + return false; + + last = sel_bb_end (bb); + if (first != last) + return false; + + return true; +} + /* Collect all loop exits recursively, skipping empty BBs between them. E.g. if BB is a loop header which has several loop exits, traverse all of them and if any of them turns out to be another loop header @@ -1091,7 +1113,7 @@ get_all_loop_exits (basic_block bb) /* If bb is empty, and we're skipping to loop exits, then consider bb as a possible gate to the inner loop now. */ - while (sel_bb_empty_p (bb) + while (sel_bb_empty_or_nop_p (bb) && in_current_region_p (bb)) { bb = single_succ (bb); @@ -1350,7 +1372,24 @@ _eligible_successor_edge_p (edge e1, succ_iterator *ip) while (1) { if (!sel_bb_empty_p (bb)) - break; + { + edge ne; + basic_block nbb; + + if (!sel_bb_empty_or_nop_p (bb)) + break; + + ne = EDGE_SUCC (bb, 0); + nbb = ne->dest; + + if (!in_current_region_p (nbb) + && !(flags & SUCCS_OUT)) + break; + + e2 = ne; + bb = nbb; + continue; + } if (!in_current_region_p (bb) && !(flags & SUCCS_OUT)) @@ -1470,7 +1509,7 @@ extern void return_regset_to_pool (regset); extern void free_regset_pool (void); extern insn_t get_nop_from_pool (insn_t); -extern void return_nop_to_pool (insn_t); +extern void return_nop_to_pool (insn_t, bool); extern void free_nop_pool (void); /* Vinsns functions. */ diff --git a/gcc/sel-sched.c b/gcc/sel-sched.c index 7ec31f18c3f..102dc19187f 100644 --- a/gcc/sel-sched.c +++ b/gcc/sel-sched.c @@ -557,6 +557,7 @@ static int stat_substitutions_total; static bool rtx_ok_for_substitution_p (rtx, rtx); static int sel_rank_for_schedule (const void *, const void *); static av_set_t find_sequential_best_exprs (bnd_t, expr_t, bool); +static basic_block find_block_for_bookkeeping (edge e1, edge e2, bool lax); static rtx get_dest_from_orig_ops (av_set_t); static basic_block generate_bookkeeping_insn (expr_t, edge, edge); @@ -2059,6 +2060,56 @@ moveup_expr_inside_insn_group (expr_t expr, insn_t through_insn) /* True when a conflict on a target register was found during moveup_expr. */ static bool was_target_conflict = false; +/* Return true when moving a debug INSN across THROUGH_INSN will + create a bookkeeping block. We don't want to create such blocks, + for they would cause codegen differences between compilations with + and without debug info. 
*/ + +static bool +moving_insn_creates_bookkeeping_block_p (insn_t insn, + insn_t through_insn) +{ + basic_block bbi, bbt; + edge e1, e2; + edge_iterator ei1, ei2; + + if (!bookkeeping_can_be_created_if_moved_through_p (through_insn)) + { + if (sched_verbose >= 9) + sel_print ("no bookkeeping required: "); + return FALSE; + } + + bbi = BLOCK_FOR_INSN (insn); + + if (EDGE_COUNT (bbi->preds) == 1) + { + if (sched_verbose >= 9) + sel_print ("only one pred edge: "); + return TRUE; + } + + bbt = BLOCK_FOR_INSN (through_insn); + + FOR_EACH_EDGE (e1, ei1, bbt->succs) + { + FOR_EACH_EDGE (e2, ei2, bbi->preds) + { + if (find_block_for_bookkeeping (e1, e2, TRUE)) + { + if (sched_verbose >= 9) + sel_print ("found existing block: "); + return FALSE; + } + } + } + + if (sched_verbose >= 9) + sel_print ("would create bookkeeping block: "); + + return TRUE; +} + /* Modifies EXPR so it can be moved through the THROUGH_INSN, performing necessary transformations. Record the type of transformation made in PTRANS_TYPE, when it is not NULL. When INSIDE_INSN_GROUP, @@ -2110,7 +2161,8 @@ moveup_expr (expr_t expr, insn_t through_insn, bool inside_insn_group, /* And it should be mutually exclusive with through_insn, or be an unconditional jump. */ if (! any_uncondjump_p (insn) - && ! sched_insns_conditions_mutex_p (insn, through_insn)) + && ! sched_insns_conditions_mutex_p (insn, through_insn) + && ! DEBUG_INSN_P (through_insn)) return MOVEUP_EXPR_NULL; } @@ -2131,6 +2183,12 @@ moveup_expr (expr_t expr, insn_t through_insn, bool inside_insn_group, else gcc_assert (!control_flow_insn_p (insn)); + /* Don't move debug insns if this would require bookkeeping. */ + if (DEBUG_INSN_P (insn) + && BLOCK_FOR_INSN (through_insn) != BLOCK_FOR_INSN (insn) + && moving_insn_creates_bookkeeping_block_p (insn, through_insn)) + return MOVEUP_EXPR_NULL; + /* Deal with data dependencies. */ was_target_conflict = false; full_ds = has_dependence_p (expr, through_insn, &has_dep_p); @@ -2440,7 +2498,12 @@ moveup_expr_cached (expr_t expr, insn_t insn, bool inside_insn_group) sel_print (" through %d: ", INSN_UID (insn)); } - if (try_bitmap_cache (expr, insn, inside_insn_group, &res)) + if (DEBUG_INSN_P (EXPR_INSN_RTX (expr)) + && (sel_bb_head (BLOCK_FOR_INSN (EXPR_INSN_RTX (expr))) + == EXPR_INSN_RTX (expr))) + /* Don't use cached information for debug insns that are heads of + basic blocks. */; + else if (try_bitmap_cache (expr, insn, inside_insn_group, &res)) /* When inside insn group, we do not want to remove stores conflicting with previously issued loads. */ got_answer = ! inside_insn_group || res != MOVEUP_EXPR_NULL; @@ -2852,6 +2915,9 @@ compute_av_set_inside_bb (insn_t first_insn, ilist_t p, int ws, break; } + if (DEBUG_INSN_P (last_insn)) + continue; + if (end_ws > max_ws) { /* We can reach max lookahead size at bb_header, so clean av_set @@ -3261,6 +3327,12 @@ sel_rank_for_schedule (const void *x, const void *y) tmp_insn = EXPR_INSN_RTX (tmp); tmp2_insn = EXPR_INSN_RTX (tmp2); + /* Schedule debug insns as early as possible. */ + if (DEBUG_INSN_P (tmp_insn) && !DEBUG_INSN_P (tmp2_insn)) + return -1; + else if (DEBUG_INSN_P (tmp2_insn)) + return 1; + /* Prefer SCHED_GROUP_P insns to any others. */ if (SCHED_GROUP_P (tmp_insn) != SCHED_GROUP_P (tmp2_insn)) { @@ -3332,9 +3404,6 @@ sel_rank_for_schedule (const void *x, const void *y) return dw; } - tmp_insn = EXPR_INSN_RTX (tmp); - tmp2_insn = EXPR_INSN_RTX (tmp2); - /* Prefer an old insn to a bookkeeping insn. 
*/ if (INSN_UID (tmp_insn) < first_emitted_uid && INSN_UID (tmp2_insn) >= first_emitted_uid) @@ -4412,15 +4481,16 @@ block_valid_for_bookkeeping_p (basic_block bb) /* Attempt to find a block that can hold bookkeeping code for path(s) incoming into E2->dest, except from E1->src (there may be a sequence of empty basic blocks between E1->src and E2->dest). Return found block, or NULL if new - one must be created. */ + one must be created. If LAX holds, don't assume there is a simple path + from E1->src to E2->dest. */ static basic_block -find_block_for_bookkeeping (edge e1, edge e2) +find_block_for_bookkeeping (edge e1, edge e2, bool lax) { basic_block candidate_block = NULL; edge e; /* Loop over edges from E1 to E2, inclusive. */ - for (e = e1; ; e = EDGE_SUCC (e->dest, 0)) + for (e = e1; !lax || e->dest != EXIT_BLOCK_PTR; e = EDGE_SUCC (e->dest, 0)) { if (EDGE_COUNT (e->dest->preds) == 2) { @@ -4438,10 +4508,18 @@ find_block_for_bookkeeping (edge e1, edge e2) return NULL; if (e == e2) - return (block_valid_for_bookkeeping_p (candidate_block) + return ((!lax || candidate_block) + && block_valid_for_bookkeeping_p (candidate_block) ? candidate_block : NULL); + + if (lax && EDGE_COUNT (e->dest->succs) != 1) + return NULL; } + + if (lax) + return NULL; + gcc_unreachable (); } @@ -4485,6 +4563,101 @@ create_block_for_bookkeeping (edge e1, edge e2) gcc_assert (e1->dest == new_bb); gcc_assert (sel_bb_empty_p (bb)); + /* To keep basic block numbers in sync between debug and non-debug + compilations, we have to rotate blocks here. Consider that we + started from (a,b)->d, (c,d)->e, and d contained only debug + insns. It would have been removed before if the debug insns + weren't there, so we'd have split e rather than d. So what we do + now is to swap the block numbers of new_bb and + single_succ(new_bb) == e, so that the insns that were in e before + get the new block number. 
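   The core of the swap in isolation (the code below additionally swaps
   the sel_global/sel_region bb info, the BLOCK_TO_BB, BB_TO_BLOCK and
   CONTAINING_RGN maps, EXPR_ORIG_BB_INDEX of every insn, and the
   code-label numbers):

     int i = new_bb->index;
     new_bb->index = succ->index;
     succ->index = i;
     SET_BASIC_BLOCK (new_bb->index, new_bb);  // keep index->bb map in sync
     SET_BASIC_BLOCK (succ->index, succ);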
*/ + + if (MAY_HAVE_DEBUG_INSNS) + { + basic_block succ; + insn_t insn = sel_bb_head (new_bb); + insn_t last; + + if (DEBUG_INSN_P (insn) + && single_succ_p (new_bb) + && (succ = single_succ (new_bb)) + && succ != EXIT_BLOCK_PTR + && DEBUG_INSN_P ((last = sel_bb_end (new_bb)))) + { + while (insn != last && (DEBUG_INSN_P (insn) || NOTE_P (insn))) + insn = NEXT_INSN (insn); + + if (insn == last) + { + sel_global_bb_info_def gbi; + sel_region_bb_info_def rbi; + int i; + + if (sched_verbose >= 2) + sel_print ("Swapping block ids %i and %i\n", + new_bb->index, succ->index); + + i = new_bb->index; + new_bb->index = succ->index; + succ->index = i; + + SET_BASIC_BLOCK (new_bb->index, new_bb); + SET_BASIC_BLOCK (succ->index, succ); + + memcpy (&gbi, SEL_GLOBAL_BB_INFO (new_bb), sizeof (gbi)); + memcpy (SEL_GLOBAL_BB_INFO (new_bb), SEL_GLOBAL_BB_INFO (succ), + sizeof (gbi)); + memcpy (SEL_GLOBAL_BB_INFO (succ), &gbi, sizeof (gbi)); + + memcpy (&rbi, SEL_REGION_BB_INFO (new_bb), sizeof (rbi)); + memcpy (SEL_REGION_BB_INFO (new_bb), SEL_REGION_BB_INFO (succ), + sizeof (rbi)); + memcpy (SEL_REGION_BB_INFO (succ), &rbi, sizeof (rbi)); + + i = BLOCK_TO_BB (new_bb->index); + BLOCK_TO_BB (new_bb->index) = BLOCK_TO_BB (succ->index); + BLOCK_TO_BB (succ->index) = i; + + i = CONTAINING_RGN (new_bb->index); + CONTAINING_RGN (new_bb->index) = CONTAINING_RGN (succ->index); + CONTAINING_RGN (succ->index) = i; + + for (i = 0; i < current_nr_blocks; i++) + if (BB_TO_BLOCK (i) == succ->index) + BB_TO_BLOCK (i) = new_bb->index; + else if (BB_TO_BLOCK (i) == new_bb->index) + BB_TO_BLOCK (i) = succ->index; + + FOR_BB_INSNS (new_bb, insn) + if (INSN_P (insn)) + EXPR_ORIG_BB_INDEX (INSN_EXPR (insn)) = new_bb->index; + + FOR_BB_INSNS (succ, insn) + if (INSN_P (insn)) + EXPR_ORIG_BB_INDEX (INSN_EXPR (insn)) = succ->index; + + if (bitmap_bit_p (code_motion_visited_blocks, new_bb->index)) + { + bitmap_set_bit (code_motion_visited_blocks, succ->index); + bitmap_clear_bit (code_motion_visited_blocks, new_bb->index); + } + + gcc_assert (LABEL_P (BB_HEAD (new_bb)) + && LABEL_P (BB_HEAD (succ))); + + if (sched_verbose >= 4) + sel_print ("Swapping code labels %i and %i\n", + CODE_LABEL_NUMBER (BB_HEAD (new_bb)), + CODE_LABEL_NUMBER (BB_HEAD (succ))); + + i = CODE_LABEL_NUMBER (BB_HEAD (new_bb)); + CODE_LABEL_NUMBER (BB_HEAD (new_bb)) + = CODE_LABEL_NUMBER (BB_HEAD (succ)); + CODE_LABEL_NUMBER (BB_HEAD (succ)) = i; + } + } + } + return bb; } @@ -4496,12 +4669,42 @@ find_place_for_bookkeeping (edge e1, edge e2) insn_t place_to_insert; /* Find a basic block that can hold bookkeeping. If it can be found, do not create new basic block, but insert bookkeeping there. */ - basic_block book_block = find_block_for_bookkeeping (e1, e2); + basic_block book_block = find_block_for_bookkeeping (e1, e2, FALSE); - if (!book_block) - book_block = create_block_for_bookkeeping (e1, e2); + if (book_block) + { + place_to_insert = BB_END (book_block); + + /* Don't use a block containing only debug insns for + bookkeeping, this causes scheduling differences between debug + and non-debug compilations, for the block would have been + removed already. 
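     Spelled out, the test performed below is roughly: walk from the
     block head, and if nothing but debug insns and notes lead up to the
     final debug insn, the block is debug-only and a fresh block must be
     created instead:

       rtx insn = sel_bb_head (book_block);
       while (insn != place_to_insert
              && (DEBUG_INSN_P (insn) || NOTE_P (insn)))
         insn = NEXT_INSN (insn);
       if (insn == place_to_insert)   // nothing "real" in the block
         book_block = NULL;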
*/ + if (DEBUG_INSN_P (place_to_insert)) + { + rtx insn = sel_bb_head (book_block); - place_to_insert = BB_END (book_block); + while (insn != place_to_insert && + (DEBUG_INSN_P (insn) || NOTE_P (insn))) + insn = NEXT_INSN (insn); + + if (insn == place_to_insert) + book_block = NULL; + } + } + + if (!book_block) + { + book_block = create_block_for_bookkeeping (e1, e2); + place_to_insert = BB_END (book_block); + if (sched_verbose >= 9) + sel_print ("New block is %i, split from bookkeeping block %i\n", + EDGE_SUCC (book_block, 0)->dest->index, book_block->index); + } + else + { + if (sched_verbose >= 9) + sel_print ("Pre-existing bookkeeping block is %i\n", book_block->index); + } /* If basic block ends with a jump, insert bookkeeping code right before it. */ if (INSN_P (place_to_insert) && control_flow_insn_p (place_to_insert)) @@ -4587,6 +4790,8 @@ generate_bookkeeping_insn (expr_t c_expr, edge e1, edge e2) join_point = sel_bb_head (e2->dest); place_to_insert = find_place_for_bookkeeping (e1, e2); + if (!place_to_insert) + return NULL; new_seqno = find_seqno_for_bookkeeping (place_to_insert, join_point); need_to_exchange_data_sets = sel_bb_empty_p (BLOCK_FOR_INSN (place_to_insert)); @@ -4748,7 +4953,7 @@ move_cond_jump (rtx insn, bnd_t bnd) /* Remove nops generated during move_op for preventing removal of empty basic blocks. */ static void -remove_temp_moveop_nops (void) +remove_temp_moveop_nops (bool full_tidying) { int i; insn_t insn; @@ -4756,7 +4961,7 @@ remove_temp_moveop_nops (void) for (i = 0; VEC_iterate (insn_t, vec_temp_moveop_nops, i, insn); i++) { gcc_assert (INSN_NOP_P (insn)); - return_nop_to_pool (insn); + return_nop_to_pool (insn, full_tidying); } /* Empty the vector. */ @@ -4949,8 +5154,20 @@ prepare_place_to_insert (bnd_t bnd) { /* Add it after last scheduled. */ place_to_insert = ILIST_INSN (BND_PTR (bnd)); + if (DEBUG_INSN_P (place_to_insert)) + { + ilist_t l = BND_PTR (bnd); + while ((l = ILIST_NEXT (l)) && + DEBUG_INSN_P (ILIST_INSN (l))) + ; + if (!l) + place_to_insert = NULL; + } } else + place_to_insert = NULL; + + if (!place_to_insert) { /* Add it before BND_TO. The difference is in the basic block, where INSN will be added. */ @@ -5058,7 +5275,8 @@ advance_state_on_fence (fence_t fence, insn_t insn) if (sched_verbose >= 2) debug_state (FENCE_STATE (fence)); - FENCE_STARTS_CYCLE_P (fence) = 0; + if (!DEBUG_INSN_P (insn)) + FENCE_STARTS_CYCLE_P (fence) = 0; return asm_p; } @@ -5117,10 +5335,11 @@ update_fence_and_insn (fence_t fence, insn_t insn, int need_stall) } } -/* Update boundary BND with INSN, remove the old boundary from - BNDSP, add new boundaries to BNDS_TAIL_P and return it. */ +/* Update boundary BND (and, if needed, FENCE) with INSN, remove the + old boundary from BNDSP, add new boundaries to BNDS_TAIL_P and + return it. 
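   The fence maintenance added below amounts to this sketch: once the
   scheduled insn turns out to be the fence insn but not the end of its
   block (possible now that debug insns are scheduled), the fence is
   advanced to the successor:

     if (FENCE_INSN (fence) == insn && !sel_bb_end_p (insn))
       FENCE_INSN (fence) = succ;   // keep the fence on a schedulable point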
*/ static blist_t * -update_boundaries (bnd_t bnd, insn_t insn, blist_t *bndsp, +update_boundaries (fence_t fence, bnd_t bnd, insn_t insn, blist_t *bndsp, blist_t *bnds_tailp) { succ_iterator si; @@ -5133,6 +5352,21 @@ update_boundaries (bnd_t bnd, insn_t insn, blist_t *bndsp, ilist_t ptr = ilist_copy (BND_PTR (bnd)); ilist_add (&ptr, insn); + + if (DEBUG_INSN_P (insn) && sel_bb_end_p (insn) + && is_ineligible_successor (succ, ptr)) + { + ilist_clear (&ptr); + continue; + } + + if (FENCE_INSN (fence) == insn && !sel_bb_end_p (insn)) + { + if (sched_verbose >= 9) + sel_print ("Updating fence insn from %i to %i\n", + INSN_UID (insn), INSN_UID (succ)); + FENCE_INSN (fence) = succ; + } blist_add (bnds_tailp, succ, ptr, BND_DC (bnd)); bnds_tailp = &BLIST_NEXT (*bnds_tailp); } @@ -5192,8 +5426,8 @@ schedule_expr_on_boundary (bnd_t bnd, expr_t expr_vliw, int seqno) /* Return the nops generated for preserving of data sets back into pool. */ if (INSN_NOP_P (place_to_insert)) - return_nop_to_pool (place_to_insert); - remove_temp_moveop_nops (); + return_nop_to_pool (place_to_insert, !DEBUG_INSN_P (insn)); + remove_temp_moveop_nops (!DEBUG_INSN_P (insn)); av_set_clear (&expr_seq); @@ -5251,7 +5485,9 @@ fill_insns (fence_t fence, int seqno, ilist_t **scheduled_insns_tailpp) int was_stall = 0, scheduled_insns = 0, stall_iterations = 0; int max_insns = pipelining_p ? issue_rate : 2 * issue_rate; int max_stall = pipelining_p ? 1 : 3; - + bool last_insn_was_debug = false; + bool was_debug_bb_end_p = false; + compute_av_set_on_boundaries (fence, bnds, &av_vliw); remove_insns_that_need_bookkeeping (fence, &av_vliw); remove_insns_for_debug (bnds, &av_vliw); @@ -5309,8 +5545,11 @@ fill_insns (fence_t fence, int seqno, ilist_t **scheduled_insns_tailpp) } insn = schedule_expr_on_boundary (bnd, expr_vliw, seqno); + last_insn_was_debug = DEBUG_INSN_P (insn); + if (last_insn_was_debug) + was_debug_bb_end_p = (insn == BND_TO (bnd) && sel_bb_end_p (insn)); update_fence_and_insn (fence, insn, need_stall); - bnds_tailp = update_boundaries (bnd, insn, bndsp, bnds_tailp); + bnds_tailp = update_boundaries (fence, bnd, insn, bndsp, bnds_tailp); /* Add insn to the list of scheduled on this cycle instructions. */ ilist_add (*scheduled_insns_tailpp, insn); @@ -5319,13 +5558,14 @@ fill_insns (fence_t fence, int seqno, ilist_t **scheduled_insns_tailpp) while (*bndsp != *bnds_tailp1); av_set_clear (&av_vliw); - scheduled_insns++; + if (!last_insn_was_debug) + scheduled_insns++; /* We currently support information about candidate blocks only for one 'target_bb' block. Hence we can't schedule after jump insn, as this will bring two boundaries and, hence, necessity to handle information for two or more blocks concurrently. */ - if (sel_bb_end_p (insn) + if ((last_insn_was_debug ? was_debug_bb_end_p : sel_bb_end_p (insn)) || (was_stall && (was_stall >= max_stall || scheduled_insns >= max_insns))) @@ -5544,7 +5784,7 @@ track_scheduled_insns_and_blocks (rtx insn) instruction out of it. */ if (INSN_SCHED_TIMES (insn) > 0) bitmap_set_bit (blocks_to_reschedule, BLOCK_FOR_INSN (insn)->index); - else if (INSN_UID (insn) < first_emitted_uid) + else if (INSN_UID (insn) < first_emitted_uid && !DEBUG_INSN_P (insn)) num_insns_scheduled++; } else @@ -5636,32 +5876,63 @@ handle_emitting_transformations (rtx insn, expr_t expr, return insn_emitted; } -/* Remove INSN from stream. When ONLY_DISCONNECT is true, its data - is not removed but reused when INSN is re-emitted. 
*/ -static void -remove_insn_from_stream (rtx insn, bool only_disconnect) +/* If INSN is the only insn in the basic block (not counting JUMP, + which may be a jump to next insn, and DEBUG_INSNs), we want to + leave a NOP there till the return to fill_insns. */ + +static bool +need_nop_to_preserve_insn_bb (rtx insn) { - insn_t nop, bb_head, bb_end; - bool need_nop_to_preserve_bb; + insn_t bb_head, bb_end, bb_next, in_next; basic_block bb = BLOCK_FOR_INSN (insn); - /* If INSN is the only insn in the basic block (not counting JUMP, - which may be a jump to next insn), leave NOP there till the - return to fill_insns. */ bb_head = sel_bb_head (bb); bb_end = sel_bb_end (bb); - need_nop_to_preserve_bb = ((bb_head == bb_end) - || (NEXT_INSN (bb_head) == bb_end - && JUMP_P (bb_end)) - || IN_CURRENT_FENCE_P (NEXT_INSN (insn))); + if (bb_head == bb_end) + return true; + + while (bb_head != bb_end && DEBUG_INSN_P (bb_head)) + bb_head = NEXT_INSN (bb_head); + + if (bb_head == bb_end) + return true; + + while (bb_head != bb_end && DEBUG_INSN_P (bb_end)) + bb_end = PREV_INSN (bb_end); + + if (bb_head == bb_end) + return true; + + bb_next = NEXT_INSN (bb_head); + while (bb_next != bb_end && DEBUG_INSN_P (bb_next)) + bb_next = NEXT_INSN (bb_next); + + if (bb_next == bb_end && JUMP_P (bb_end)) + return true; + + in_next = NEXT_INSN (insn); + while (DEBUG_INSN_P (in_next)) + in_next = NEXT_INSN (in_next); + + if (IN_CURRENT_FENCE_P (in_next)) + return true; + + return false; +} + +/* Remove INSN from stream. When ONLY_DISCONNECT is true, its data + is not removed but reused when INSN is re-emitted. */ +static void +remove_insn_from_stream (rtx insn, bool only_disconnect) +{ /* If there's only one insn in the BB, make sure that a nop is inserted into it, so the basic block won't disappear when we'll delete INSN below with sel_remove_insn. It should also survive till the return to fill_insns. */ - if (need_nop_to_preserve_bb) + if (need_nop_to_preserve_insn_bb (insn)) { - nop = get_nop_from_pool (insn); + insn_t nop = get_nop_from_pool (insn); gcc_assert (INSN_NOP_P (nop)); VEC_safe_push (insn_t, heap, vec_temp_moveop_nops, nop); } @@ -5925,6 +6196,8 @@ fur_orig_expr_not_found (insn_t insn, av_set_t orig_ops, void *static_params) if (CALL_P (insn)) sparams->crosses_call = true; + else if (DEBUG_INSN_P (insn)) + return true; /* If current insn we are looking at cannot be executed together with original insn, then we can skip it safely. diff --git a/gcc/simplify-rtx.c b/gcc/simplify-rtx.c index 0cf1dd0ce94..0450ea083f4 100644 --- a/gcc/simplify-rtx.c +++ b/gcc/simplify-rtx.c @@ -202,6 +202,106 @@ avoid_constant_pool_reference (rtx x) return x; } +/* Simplify a MEM based on its attributes. This is the default + delegitimize_address target hook, and it's recommended that every + overrider call it. 
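   A hypothetical overrider, sketched here only to show the recommended
   chaining (mytarget_delegitimize_address is an invented name, not part
   of this patch):

     static rtx
     mytarget_delegitimize_address (rtx x)
     {
       // Recover decl addresses from MEM_EXPR/MEM_OFFSET first.
       x = delegitimize_mem_from_attrs (x);
       // ... then undo target-specific PIC wrapping of X here ...
       return x;
     }

     #undef TARGET_DELEGITIMIZE_ADDRESS
     #define TARGET_DELEGITIMIZE_ADDRESS mytarget_delegitimize_address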
*/ + +rtx +delegitimize_mem_from_attrs (rtx x) +{ + if (MEM_P (x) + && MEM_EXPR (x) + && (!MEM_OFFSET (x) + || GET_CODE (MEM_OFFSET (x)) == CONST_INT)) + { + tree decl = MEM_EXPR (x); + enum machine_mode mode = GET_MODE (x); + HOST_WIDE_INT offset = 0; + + switch (TREE_CODE (decl)) + { + default: + decl = NULL; + break; + + case VAR_DECL: + break; + + case ARRAY_REF: + case ARRAY_RANGE_REF: + case COMPONENT_REF: + case BIT_FIELD_REF: + case REALPART_EXPR: + case IMAGPART_EXPR: + case VIEW_CONVERT_EXPR: + { + HOST_WIDE_INT bitsize, bitpos; + tree toffset; + int unsignedp = 0, volatilep = 0; + + decl = get_inner_reference (decl, &bitsize, &bitpos, &toffset, + &mode, &unsignedp, &volatilep, false); + if (bitsize != GET_MODE_BITSIZE (mode) + || (bitpos % BITS_PER_UNIT) + || (toffset && !host_integerp (toffset, 0))) + decl = NULL; + else + { + offset += bitpos / BITS_PER_UNIT; + if (toffset) + offset += TREE_INT_CST_LOW (toffset); + } + break; + } + } + + if (decl + && mode == GET_MODE (x) + && TREE_CODE (decl) == VAR_DECL + && (TREE_STATIC (decl) + || DECL_THREAD_LOCAL_P (decl)) + && DECL_RTL_SET_P (decl) + && MEM_P (DECL_RTL (decl))) + { + rtx newx; + + if (MEM_OFFSET (x)) + offset += INTVAL (MEM_OFFSET (x)); + + newx = DECL_RTL (decl); + + if (MEM_P (newx)) + { + rtx n = XEXP (newx, 0), o = XEXP (x, 0); + + /* Avoid creating a new MEM needlessly if we already had + the same address. We do if there's no OFFSET and the + old address X is identical to NEWX, or if X is of the + form (plus NEWX OFFSET), or the NEWX is of the form + (plus Y (const_int Z)) and X is that with the offset + added: (plus Y (const_int Z+OFFSET)). */ + if (!((offset == 0 + || (GET_CODE (o) == PLUS + && GET_CODE (XEXP (o, 1)) == CONST_INT + && (offset == INTVAL (XEXP (o, 1)) + || (GET_CODE (n) == PLUS + && GET_CODE (XEXP (n, 1)) == CONST_INT + && (INTVAL (XEXP (n, 1)) + offset + == INTVAL (XEXP (o, 1))) + && (n = XEXP (n, 0)))) + && (o = XEXP (o, 0)))) + && rtx_equal_p (o, n))) + x = adjust_address_nv (newx, mode, offset); + } + else if (GET_MODE (x) == GET_MODE (newx) + && offset == 0) + x = newx; + } + } + + return x; +} + /* Make a unary operation by first seeing if it folds and otherwise making the specified operation. */ diff --git a/gcc/target-def.h b/gcc/target-def.h index 3cef10a55cc..286e1e68cd7 100644 --- a/gcc/target-def.h +++ b/gcc/target-def.h @@ -488,7 +488,7 @@ #define TARGET_CANNOT_COPY_INSN_P NULL #define TARGET_COMMUTATIVE_P hook_bool_const_rtx_commutative_p #define TARGET_LEGITIMIZE_ADDRESS default_legitimize_address -#define TARGET_DELEGITIMIZE_ADDRESS hook_rtx_rtx_identity +#define TARGET_DELEGITIMIZE_ADDRESS delegitimize_mem_from_attrs #define TARGET_LEGITIMATE_ADDRESS_P default_legitimate_address_p #define TARGET_USE_BLOCKS_FOR_CONSTANT_P hook_bool_mode_const_rtx_false #define TARGET_MIN_ANCHOR_OFFSET 0 diff --git a/gcc/testsuite/gcc.dg/guality/example.c b/gcc/testsuite/gcc.dg/guality/example.c new file mode 100644 index 00000000000..e02066ee339 --- /dev/null +++ b/gcc/testsuite/gcc.dg/guality/example.c @@ -0,0 +1,138 @@ +/* { dg-do run } */ +/* { dg-options "-g" } */ + +#define GUALITY_DONT_FORCE_LIVE_AFTER -1 + +#ifndef STATIC_INLINE +#define STATIC_INLINE /*static*/ +#endif + +#include "guality.h" + +#include <assert.h> + +/* Test the debug info for the functions used in the VTA + presentation at the GCC Summit 2008. 
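   Each GUALCHK below expands, via guality.h, into a guality_check call
   that asks the attached gdb to evaluate the expression in this frame
   and compare it with the run-time value. A minimal hypothetical use:

     int sum = a + b;
     GUALCHK (sum);   // PASS iff gdb also evaluates sum to a + b here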
*/ + +typedef struct list { + struct list *n; + int v; +} elt, *node; + +STATIC_INLINE node +find_val (node c, int v, node e) +{ + while (c < e) + { + GUALCHK (c); + GUALCHK (v); + GUALCHK (e); + if (c->v == v) + return c; + GUALCHK (c); + GUALCHK (v); + GUALCHK (e); + c++; + } + return NULL; +} + +STATIC_INLINE node +find_prev (node c, node w) +{ + while (c) + { + node o = c; + c = c->n; + GUALCHK (c); + GUALCHK (o); + GUALCHK (w); + if (c == w) + return o; + GUALCHK (c); + GUALCHK (o); + GUALCHK (w); + } + return NULL; +} + +STATIC_INLINE node +check_arr (node c, node e) +{ + if (c == e) + return NULL; + e--; + while (c < e) + { + GUALCHK (c); + GUALCHK (e); + if (c->v > (c+1)->v) + return c; + GUALCHK (c); + GUALCHK (e); + c++; + } + return NULL; +} + +STATIC_INLINE node +check_list (node c, node t) +{ + while (c != t) + { + node n = c->n; + GUALCHK (c); + GUALCHK (n); + GUALCHK (t); + if (c->v > n->v) + return c; + GUALCHK (c); + GUALCHK (n); + GUALCHK (t); + c = n; + } + return NULL; +} + +struct list testme[] = { + { &testme[1], 2 }, + { &testme[2], 3 }, + { &testme[3], 5 }, + { &testme[4], 7 }, + { &testme[5], 11 }, + { NULL, 13 }, +}; + +int +main (int argc, char *argv[]) +{ + int n = sizeof (testme) / sizeof (*testme); + node first, last, begin, end, ret; + + GUALCHKXPR (n); + + begin = first = &testme[0]; + last = &testme[n-1]; + end = &testme[n]; + + GUALCHKXPR (first); + GUALCHKXPR (last); + GUALCHKXPR (begin); + GUALCHKXPR (end); + + ret = find_val (begin, 13, end); + GUALCHK (ret); + assert (ret == last); + + ret = find_prev (first, last); + GUALCHK (ret); + assert (ret == &testme[n-2]); + + ret = check_arr (begin, end); + GUALCHK (ret); + assert (!ret); + + ret = check_list (first, last); + GUALCHK (ret); + assert (!ret); +} diff --git a/gcc/testsuite/gcc.dg/guality/guality.c b/gcc/testsuite/gcc.dg/guality/guality.c new file mode 100644 index 00000000000..0e47d0155ae --- /dev/null +++ b/gcc/testsuite/gcc.dg/guality/guality.c @@ -0,0 +1,28 @@ +/* { dg-do run } */ +/* { dg-options "-g" } */ + +#include "guality.h" + +/* Some silly sanity checking. */ + +int +main (int argc, char *argv[]) +{ + int i = argc+1; + int j = argc-2; + int k = 5; + + GUALCHKXPR (argc); + GUALCHKXPR (i); + GUALCHKXPR (j); + GUALCHKXPR (k); + GUALCHKXPR (&i); + GUALCHKFLA (argc); + GUALCHKFLA (i); + GUALCHKFLA (j); + GUALCHKXPR (i); + GUALCHKXPR (j); + GUALCHKXPRVAL ("k", 5, 1); + GUALCHKXPRVAL ("0x40", 64, 0); + /* GUALCHKXPRVAL ("0", 0, 0); *//* XFAIL */ +} diff --git a/gcc/testsuite/gcc.dg/guality/guality.exp b/gcc/testsuite/gcc.dg/guality/guality.exp new file mode 100644 index 00000000000..b151c2e0772 --- /dev/null +++ b/gcc/testsuite/gcc.dg/guality/guality.exp @@ -0,0 +1,7 @@ +# This harness is for tests that should be run at all optimisation levels. + +load_lib gcc-dg.exp + +dg-init +gcc-dg-runtest [lsort [glob $srcdir/$subdir/*.c]] "" +dg-finish diff --git a/gcc/testsuite/gcc.dg/guality/guality.h b/gcc/testsuite/gcc.dg/guality/guality.h new file mode 100644 index 00000000000..6025da8b028 --- /dev/null +++ b/gcc/testsuite/gcc.dg/guality/guality.h @@ -0,0 +1,330 @@ +/* Infrastructure to test the quality of debug information. + Copyright (C) 2008, 2009 Free Software Foundation, Inc. + Contributed by Alexandre Oliva <aoliva@redhat.com>. + +This file is part of GCC. + +GCC is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 3, or (at your option) +any later version. 
+ +GCC is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with GCC; see the file COPYING3. If not see +<http://www.gnu.org/licenses/>. */ + +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <unistd.h> +#include <sys/types.h> +#include <sys/wait.h> + +/* This is a first cut at checking that debug information matches + run-time. The idea is to annotate programs with GUALCHK* macros + that guide the tests. + + In the current implementation, all of the macros expand to function + calls. On the one hand, this interferes with optimizations; on the + other hand, it establishes an optimization barrier and a clear + inspection point, where previous operations (as in the abstract + machine) should have been completed and have their effects visible, + and future operations shouldn't have started yet. + + In the current implementation of guality_check(), we fork a child + process that runs gdb, attaches to the parent process (the one that + called guality_check), moves up one stack frame (to the caller of + guality_check) and then examines the given expression. + + If it matches the expected value, we have a PASS. If it differs, + we have a FAILure. If it is missing, we'll have a FAIL or an + UNRESOLVED depending on whether the variable or expression might be + unavailable at that point, as indicated by the third argument. + + We envision a future alternate implementation with two compilation + and execution cycles, say one that runs the program and uses the + macros to log expressions and expected values, another in which the + macros expand to nothing and the logs are used to guide a debug + session that tests the values. How to identify the inspection + points in the second case is yet to be determined. It is + recommended that GUALCHK* macros be by themselves in source lines, + so that __FILE__ and __LINE__ will be usable to identify them. +*/ + +/* Attach a debugger to the current process and verify that the string + EXPR, evaluated by the debugger, yields the long long number VAL. + If the debugger cannot compute the expression, say because the + variable is unavailable, this will count as an error, unless unkok + is nonzero. */ + +#define GUALCHKXPRVAL(expr, val, unkok) \ + guality_check ((expr), (val), (unkok)) + +/* Check that a debugger knows that EXPR evaluates to the run-time + value of EXPR. Unknown values are marked as acceptable, + considering that EXPR may die right after this call. This will + affect the generated code in that EXPR will be evaluated and forced + to remain live at least until right before the call to + guality_check, although not necessarily after the call. */ + +#define GUALCHKXPR(expr) \ + GUALCHKXPRVAL (#expr, (long long)(expr), 1) + +/* Same as GUALCHKXPR, but issue an error if the variable is optimized + away. */ + +#define GUALCHKVAL(expr) \ + GUALCHKXPRVAL (#expr, (long long)(expr), 0) + +/* Check that a debugger knows that EXPR evaluates to the run-time + value of EXPR. Unknown values are marked as errors, because the + value of EXPR is forced to be available right after the call, for a + range of at least one instruction. This will affect the generated + code, in that EXPR *will* be evaluated before and preserved until + after the call to guality_check. 
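   For instance, with an int variable i, GUALCHKFLA (i) expands (per the
   definition just below) to roughly:

     int volatile __preserve_after;
     int __preserve_before = (i);
     GUALCHKXPRVAL ("i", (long long) (__preserve_before), 0);
     __preserve_after = __preserve_before;
     asm ("" : : "m" (__preserve_after));   // forces the value to stay live

   so the checked value is anchored in memory across the inspection
   point.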
*/ + +#define GUALCHKFLA(expr) do { \ + __typeof(expr) volatile __preserve_after; \ + __typeof(expr) __preserve_before = (expr); \ + GUALCHKXPRVAL (#expr, (long long)(__preserve_before), 0); \ + __preserve_after = __preserve_before; \ + asm ("" : : "m" (__preserve_after)); \ + } while (0) + +/* GUALCHK is the simplest way to assert that debug information for an + expression matches its run-time value. Whether to force the + expression live after the call, so as to flag incompleteness + errors, can be disabled by defining GUALITY_DONT_FORCE_LIVE_AFTER. + Setting it to -1, an error is issued for optimized out variables, + even though they are not forced live. */ + +#if ! GUALITY_DONT_FORCE_LIVE_AFTER +#define GUALCHK(var) GUALCHKFLA(var) +#elif GUALITY_DONT_FORCE_LIVE_AFTER < 0 +#define GUALCHK(var) GUALCHKVAL(var) +#else +#define GUALCHK(var) GUALCHKXPR(var) +#endif + +/* The name of the GDB program, with arguments to make it quiet. This + is GUALITY_GDB_DEFAULT GUALITY_GDB_ARGS by default, but it can be + overridden by setting the GUALITY_GDB environment variable, whereas + GUALITY_GDB_DEFAULT can be overridden by setting the + GUALITY_GDB_NAME environment variable. */ + +static const char *guality_gdb_command; +#define GUALITY_GDB_DEFAULT "gdb" +#define GUALITY_GDB_ARGS " -nx -nw --quiet > /dev/null 2>&1" + +/* Kinds of results communicated as exit status from child process + that runs gdb to the parent process that's being monitored. */ + +enum guality_counter { PASS, INCORRECT, INCOMPLETE }; + +/* Count of passes and errors. */ + +static int guality_count[INCOMPLETE+1]; + +/* If --guality-skip is given in the command line, all the monitoring, + forking and debugger-attaching action will be disabled. This is + useful to run the monitor program within a debugger. */ + +static int guality_skip; + +/* This is a file descriptor to which we'll issue gdb commands to + probe and test. */ +FILE *guality_gdb_input; + +/* This holds the line number where we're supposed to set a + breakpoint. */ +int guality_breakpoint_line; + +/* GDB should set this to true once it's connected. */ +int volatile guality_attached; + +/* This function is the main guality program. It may actually be + defined as main, because we #define main to it afterwards. Because + of this wrapping, guality_main may not have an empty argument + list. */ + +extern int guality_main (int argc, char *argv[]); + +static void __attribute__((noinline)) +guality_check (const char *name, long long value, int unknown_ok); + +/* Set things up, run guality_main, then print a summary and quit. */ + +int +main (int argc, char *argv[]) +{ + int i; + char *argv0 = argv[0]; + + guality_gdb_command = getenv ("GUALITY_GDB"); + if (!guality_gdb_command) + { + guality_gdb_command = getenv ("GUALITY_GDB_NAME"); + if (!guality_gdb_command) + guality_gdb_command = GUALITY_GDB_DEFAULT GUALITY_GDB_ARGS; + else + { + int len = strlen (guality_gdb_command) + sizeof (GUALITY_GDB_ARGS); + char *buf = __builtin_alloca (len); + strcpy (buf, guality_gdb_command); + strcat (buf, GUALITY_GDB_ARGS); + guality_gdb_command = buf; + } + } + + for (i = 1; i < argc; i++) + if (strcmp (argv[i], "--guality-skip") == 0) + guality_skip = 1; + else + break; + + if (!guality_skip) + { + guality_gdb_input = popen (guality_gdb_command, "w"); + /* This call sets guality_breakpoint_line. 
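      For illustration, with a parent pid of 1234 and a breakpoint line
      of 250 (both values invented), the script written to the gdb pipe
      below reads:

        set height 0
        attach 1234
        set guality_attached = 1
        b 250
        continue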
*/ + guality_check (NULL, 0, 0); + if (!guality_gdb_input + || fprintf (guality_gdb_input, "\ +set height 0\n\ +attach %i\n\ +set guality_attached = 1\n\ +b %i\n\ +continue\n\ +", (int)getpid (), guality_breakpoint_line) <= 0 + || fflush (guality_gdb_input)) + { + perror ("gdb"); + abort (); + } + } + + argv[--i] = argv0; + + guality_main (argc - i, argv + i); + + i = guality_count[INCORRECT]; + + fprintf (stderr, "%s: %i PASS, %i FAIL, %i UNRESOLVED\n", + i ? "FAIL" : "PASS", + guality_count[PASS], guality_count[INCORRECT], + guality_count[INCOMPLETE]); + + return i; +} + +#define main guality_main + +/* Tell the GDB child process to evaluate NAME in the caller. If it + matches VALUE, we have a PASS; if it's unknown and UNKNOWN_OK, we + have an UNRESOLVED. Otherwise, it's a FAIL. */ + +static void __attribute__((noinline)) +guality_check (const char *name, long long value, int unknown_ok) +{ + int result; + + if (guality_skip) + return; + + { + volatile long long xvalue = -1; + volatile int unavailable = 0; + if (name) + { + /* The sequence below cannot distinguish an optimized away + variable from one mapped to a non-lvalue zero. */ + if (fprintf (guality_gdb_input, "\ +up\n\ +set $value1 = 0\n\ +set $value1 = (%s)\n\ +set $value2 = -1\n\ +set $value2 = (%s)\n\ +set $value3 = $value1 - 1\n\ +set $value4 = $value1 + 1\n\ +set $value3 = (%s)++\n\ +set $value4 = --(%s)\n\ +down\n\ +set xvalue = $value1\n\ +set unavailable = $value1 != $value2 ? -1 : $value3 != $value4 ? 1 : 0\n\ +continue\n\ +", name, name, name, name) <= 0 + || fflush (guality_gdb_input)) + { + perror ("gdb"); + abort (); + } + else if (!guality_attached) + { + unsigned int timeout = 0; + + /* Give GDB some more time to attach. Wrapping around a + 32-bit counter takes some seconds, it should be plenty + of time for GDB to get a chance to start up and attach, + but not long enough that, if GDB is unavailable or + broken, we'll take far too long to give up. */ + while (--timeout && !guality_attached) + ; + if (!timeout && !guality_attached) + { + fprintf (stderr, "gdb: took too long to attach\n"); + abort (); + } + } + } + else + { + guality_breakpoint_line = __LINE__ + 5; + return; + } + /* Do NOT add lines between the __LINE__ above and the line below, + without also adjusting the added constant to match. */ + if (!unavailable || (unavailable > 0 && xvalue)) + { + if (xvalue == value) + result = PASS; + else + result = INCORRECT; + } + else + result = INCOMPLETE; + asm ("" : : "X" (name), "X" (value), "X" (unknown_ok), "m" (xvalue)); + switch (result) + { + case PASS: + fprintf (stderr, "PASS: %s is %lli\n", name, value); + break; + case INCORRECT: + fprintf (stderr, "FAIL: %s is %lli, not %lli\n", name, xvalue, value); + break; + case INCOMPLETE: + fprintf (stderr, "%s: %s is %s, expected %lli\n", + unknown_ok ? "UNRESOLVED" : "FAIL", name, + unavailable < 0 ? "not computable" : "optimized away", value); + result = unknown_ok ? INCOMPLETE : INCORRECT; + break; + default: + abort (); + } + } + + switch (result) + { + case PASS: + case INCORRECT: + case INCOMPLETE: + ++guality_count[result]; + break; + + default: + abort (); + } +} diff --git a/gcc/testsuite/lib/gcc-dg.exp b/gcc/testsuite/lib/gcc-dg.exp index 7e684171be9..feec5058214 100644 --- a/gcc/testsuite/lib/gcc-dg.exp +++ b/gcc/testsuite/lib/gcc-dg.exp @@ -449,11 +449,15 @@ proc cleanup-dump { suffix } { # The name might include a list of options; extract the file name. 
set src [file tail [lindex $testcase 0]] remove-build-file "[file tail $src].$suffix" + # -fcompare-debug dumps + remove-build-file "[file tail $src].gk.$suffix" # Clean up dump files for additional source files. if [info exists additional_sources] { foreach srcfile $additional_sources { remove-build-file "[file tail $srcfile].$suffix" + # -fcompare-debug dumps + remove-build-file "[file tail $srcfile].gk.$suffix" } } } @@ -468,7 +472,7 @@ proc cleanup-saved-temps { args } { set suffixes {} # add the to-be-kept suffixes - foreach suffix {".ii" ".i" ".s" ".o"} { + foreach suffix {".ii" ".i" ".s" ".o" ".gkd"} { if {[lsearch $args $suffix] < 0} { lappend suffixes $suffix } @@ -480,6 +484,8 @@ proc cleanup-saved-temps { args } { upvar 2 name testcase foreach suffix $suffixes { remove-build-file "[file rootname [file tail $testcase]]$suffix" + # -fcompare-debug dumps + remove-build-file "[file rootname [file tail $testcase]].gk$suffix" } # Clean up saved temp files for additional source files. @@ -487,6 +493,8 @@ proc cleanup-saved-temps { args } { foreach srcfile $additional_sources { foreach suffix $suffixes { remove-build-file "[file rootname [file tail $srcfile]]$suffix" + # -fcompare-debug dumps + remove-build-file "[file rootname [file tail $srcfile]].gk$suffix" } } } diff --git a/gcc/toplev.c b/gcc/toplev.c index bb7633f09fc..b0e7039ca1d 100644 --- a/gcc/toplev.c +++ b/gcc/toplev.c @@ -319,11 +319,23 @@ int flag_dump_rtl_in_asm = 0; the support provided depends on the backend. */ rtx stack_limit_rtx; -/* Nonzero if we should track variables. When - flag_var_tracking == AUTODETECT_VALUE it will be set according - to optimize, debug_info_level and debug_hooks in process_options (). */ +/* Positive if we should track variables, negative if we should run + the var-tracking pass only to discard debug annotations, zero if + we're not to run it. When flag_var_tracking == AUTODETECT_VALUE it + will be set according to optimize, debug_info_level and debug_hooks + in process_options (). */ int flag_var_tracking = AUTODETECT_VALUE; +/* Positive if we should track variables at assignments, negative if + we should run the var-tracking pass only to discard debug + annotations. When flag_var_tracking_assignments == + AUTODETECT_VALUE it will be set according to flag_var_tracking. */ +int flag_var_tracking_assignments = AUTODETECT_VALUE; + +/* Nonzero if we should toggle flag_var_tracking_assignments after + processing options and computing its default. */ +int flag_var_tracking_assignments_toggle = 0; + /* Type of stack check. 
*/ enum stack_check_type flag_stack_check = NO_STACK_CHECK; @@ -1876,7 +1888,7 @@ process_options (void) debug_info_level = DINFO_LEVEL_NONE; } - if (flag_dump_final_insns) + if (flag_dump_final_insns && !flag_syntax_only && !no_backend) { FILE *final_output = fopen (flag_dump_final_insns, "w"); if (!final_output) @@ -1977,6 +1989,15 @@ process_options (void) if (flag_var_tracking == AUTODETECT_VALUE) flag_var_tracking = optimize >= 1; + if (flag_var_tracking_assignments == AUTODETECT_VALUE) + flag_var_tracking_assignments = 0; + + if (flag_var_tracking_assignments_toggle) + flag_var_tracking_assignments = !flag_var_tracking_assignments; + + if (flag_var_tracking_assignments && !flag_var_tracking) + flag_var_tracking = flag_var_tracking_assignments = -1; + if (flag_tree_cselim == AUTODETECT_VALUE) #ifdef HAVE_conditional_move flag_tree_cselim = 1; diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c index 639c4ec710c..81d95d75e6e 100644 --- a/gcc/tree-cfg.c +++ b/gcc/tree-cfg.c @@ -1395,6 +1395,49 @@ gimple_can_merge_blocks_p (basic_block a, basic_block b) return true; } +/* Return true if the var whose chain of uses starts at PTR has no + nondebug uses. */ +bool +has_zero_uses_1 (const ssa_use_operand_t *head) +{ + const ssa_use_operand_t *ptr; + + for (ptr = head->next; ptr != head; ptr = ptr->next) + if (!is_gimple_debug (USE_STMT (ptr))) + return false; + + return true; +} + +/* Return true if the var whose chain of uses starts at PTR has a + single nondebug use. Set USE_P and STMT to that single nondebug + use, if so, or to NULL otherwise. */ +bool +single_imm_use_1 (const ssa_use_operand_t *head, + use_operand_p *use_p, gimple *stmt) +{ + ssa_use_operand_t *ptr, *single_use = 0; + + for (ptr = head->next; ptr != head; ptr = ptr->next) + if (!is_gimple_debug (USE_STMT (ptr))) + { + if (single_use) + { + single_use = NULL; + break; + } + single_use = ptr; + } + + if (use_p) + *use_p = single_use; + + if (stmt) + *stmt = single_use ? single_use->loc.stmt : NULL; + + return !!single_use; +} + /* Replaces all uses of NAME by VAL. */ void @@ -2263,7 +2306,11 @@ remove_bb (basic_block bb) /* Remove all the instructions in the block. */ if (bb_seq (bb) != NULL) { - for (i = gsi_start_bb (bb); !gsi_end_p (i);) + /* Walk backwards so as to get a chance to substitute all + released DEFs into debug stmts. See + eliminate_unnecessary_stmts() in tree-ssa-dce.c for more + details. */ + for (i = gsi_last_bb (bb); !gsi_end_p (i);) { gimple stmt = gsi_stmt (i); if (gimple_code (stmt) == GIMPLE_LABEL @@ -2299,13 +2346,17 @@ remove_bb (basic_block bb) gsi_remove (&i, true); } + if (gsi_end_p (i)) + i = gsi_last_bb (bb); + else + gsi_prev (&i); + /* Don't warn for removed gotos. Gotos are often removed due to jump threading, thus resulting in bogus warnings. Not great, since this way we lose warnings for gotos in the original program that are indeed unreachable. */ if (gimple_code (stmt) != GIMPLE_GOTO - && gimple_has_location (stmt) - && !loc) + && gimple_has_location (stmt)) loc = gimple_location (stmt); } } @@ -2807,7 +2858,14 @@ gimple first_stmt (basic_block bb) { gimple_stmt_iterator i = gsi_start_bb (bb); - return !gsi_end_p (i) ? gsi_stmt (i) : NULL; + gimple stmt = NULL; + + while (!gsi_end_p (i) && is_gimple_debug ((stmt = gsi_stmt (i)))) + { + gsi_next (&i); + stmt = NULL; + } + return stmt; } /* Return the first non-label statement in basic block BB. 
*/ @@ -2826,8 +2884,15 @@ first_non_label_stmt (basic_block bb) gimple last_stmt (basic_block bb) { - gimple_stmt_iterator b = gsi_last_bb (bb); - return !gsi_end_p (b) ? gsi_stmt (b) : NULL; + gimple_stmt_iterator i = gsi_last_bb (bb); + gimple stmt = NULL; + + while (!gsi_end_p (i) && is_gimple_debug ((stmt = gsi_stmt (i)))) + { + gsi_prev (&i); + stmt = NULL; + } + return stmt; } /* Return the last statement of an otherwise empty block. Return NULL @@ -2837,14 +2902,14 @@ last_stmt (basic_block bb) gimple last_and_only_stmt (basic_block bb) { - gimple_stmt_iterator i = gsi_last_bb (bb); + gimple_stmt_iterator i = gsi_last_nondebug_bb (bb); gimple last, prev; if (gsi_end_p (i)) return NULL; last = gsi_stmt (i); - gsi_prev (&i); + gsi_prev_nondebug (&i); if (gsi_end_p (i)) return last; @@ -4109,6 +4174,22 @@ verify_gimple_phi (gimple stmt) } +/* Verify a gimple debug statement STMT. + Returns true if anything is wrong. */ + +static bool +verify_gimple_debug (gimple stmt ATTRIBUTE_UNUSED) +{ + /* There isn't much that could be wrong in a gimple debug stmt. A + gimple debug bind stmt, for example, maps a tree, that's usually + a VAR_DECL or a PARM_DECL, but that could also be some scalarized + component or member of an aggregate type, to another tree, that + can be an arbitrary expression. These stmts expand into debug + insns, and are converted to debug notes by var-tracking.c. */ + return false; +} + + /* Verify the GIMPLE statement STMT. Returns true if there is an error, otherwise false. */ @@ -4163,6 +4244,9 @@ verify_types_in_gimple_stmt (gimple stmt) case GIMPLE_PREDICT: return false; + case GIMPLE_DEBUG: + return verify_gimple_debug (stmt); + default: gcc_unreachable (); } @@ -4269,6 +4353,9 @@ verify_stmt (gimple_stmt_iterator *gsi) } } + if (is_gimple_debug (stmt)) + return false; + memset (&wi, 0, sizeof (wi)); addr = walk_gimple_op (gsi_stmt (*gsi), verify_expr, &wi); if (addr) @@ -6618,7 +6705,7 @@ debug_loop_num (unsigned num, int verbosity) static bool gimple_block_ends_with_call_p (basic_block bb) { - gimple_stmt_iterator gsi = gsi_last_bb (bb); + gimple_stmt_iterator gsi = gsi_last_nondebug_bb (bb); return is_gimple_call (gsi_stmt (gsi)); } @@ -6924,8 +7011,12 @@ remove_edge_and_dominated_blocks (edge e) remove_edge (e); else { - for (i = 0; VEC_iterate (basic_block, bbs_to_remove, i, bb); i++) - delete_basic_block (bb); + /* Walk backwards so as to get a chance to substitute all + released DEFs into debug stmts. See + eliminate_unnecessary_stmts() in tree-ssa-dce.c for more + details. */ + for (i = VEC_length (basic_block, bbs_to_remove); i-- > 0; ) + delete_basic_block (VEC_index (basic_block, bbs_to_remove, i)); } /* Update the dominance information. The immediate dominator may change only diff --git a/gcc/tree-cfgcleanup.c b/gcc/tree-cfgcleanup.c index 34cfc80bbee..5cce1b6eec7 100644 --- a/gcc/tree-cfgcleanup.c +++ b/gcc/tree-cfgcleanup.c @@ -252,6 +252,11 @@ tree_forwarder_block_p (basic_block bb, bool phi_wanted) return false; break; + /* ??? For now, hope there's a corresponding debug + assignment at the destination. 
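	   For example (hypothetical GIMPLE dump), a block such as

	     <bb 5>:
	     # DEBUG x => y_1 + 1
	     goto <bb 7>;

	   still qualifies as a forwarder; the debug bind it carries is
	   tolerated rather than blocking the cleanup.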
*/ + case GIMPLE_DEBUG: + break; + default: return false; } @@ -415,9 +420,10 @@ remove_forwarder_block (basic_block bb) for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); ) { label = gsi_stmt (gsi); - gcc_assert (gimple_code (label) == GIMPLE_LABEL); + gcc_assert (gimple_code (label) == GIMPLE_LABEL + || is_gimple_debug (label)); gsi_remove (&gsi, false); - gsi_insert_before (&gsi_to, label, GSI_CONTINUE_LINKING); + gsi_insert_before (&gsi_to, label, GSI_SAME_STMT); } } diff --git a/gcc/tree-dfa.c b/gcc/tree-dfa.c index 4147a286669..b6eff5ea8f1 100644 --- a/gcc/tree-dfa.c +++ b/gcc/tree-dfa.c @@ -88,7 +88,12 @@ find_referenced_vars (void) FOR_EACH_BB (bb) { for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si)) - find_referenced_vars_in (gsi_stmt (si)); + { + gimple stmt = gsi_stmt (si); + if (is_gimple_debug (stmt)) + continue; + find_referenced_vars_in (gsi_stmt (si)); + } for (si = gsi_start_phis (bb); !gsi_end_p (si); gsi_next (&si)) find_referenced_vars_in (gsi_stmt (si)); diff --git a/gcc/tree-eh.c b/gcc/tree-eh.c index b8972063c9d..d9baf711379 100644 --- a/gcc/tree-eh.c +++ b/gcc/tree-eh.c @@ -2823,7 +2823,7 @@ tree_empty_eh_handler_p (basic_block bb) region = gimple_resx_region (gsi_stmt (gsi)); /* filter_object set. */ - gsi_prev (&gsi); + gsi_prev_nondebug (&gsi); if (gsi_end_p (gsi)) return 0; if (gimple_code (gsi_stmt (gsi)) == GIMPLE_ASSIGN) @@ -2836,7 +2836,7 @@ tree_empty_eh_handler_p (basic_block bb) filter_tmp = gimple_assign_rhs1 (gsi_stmt (gsi)); /* filter_object set. */ - gsi_prev (&gsi); + gsi_prev_nondebug (&gsi); if (gsi_end_p (gsi)) return 0; if (gimple_code (gsi_stmt (gsi)) != GIMPLE_ASSIGN) @@ -2848,7 +2848,7 @@ tree_empty_eh_handler_p (basic_block bb) /* exc_ptr get. */ if (TREE_CODE (exc_ptr_tmp) != EXC_PTR_EXPR) { - gsi_prev (&gsi); + gsi_prev_nondebug (&gsi); if (gsi_end_p (gsi)) return 0; if (gimple_code (gsi_stmt (gsi)) != GIMPLE_ASSIGN) @@ -2864,7 +2864,7 @@ tree_empty_eh_handler_p (basic_block bb) /* filter_object get. */ if (TREE_CODE (filter_tmp) != FILTER_EXPR) { - gsi_prev (&gsi); + gsi_prev_nondebug (&gsi); if (gsi_end_p (gsi)) return 0; if (gimple_code (gsi_stmt (gsi)) != GIMPLE_ASSIGN) @@ -2878,7 +2878,7 @@ tree_empty_eh_handler_p (basic_block bb) } /* label. */ - gsi_prev (&gsi); + gsi_prev_nondebug (&gsi); if (gsi_end_p (gsi)) return 0; } diff --git a/gcc/tree-flow-inline.h b/gcc/tree-flow-inline.h index f56ecea7db7..eef2f4324a9 100644 --- a/gcc/tree-flow-inline.h +++ b/gcc/tree-flow-inline.h @@ -358,43 +358,88 @@ next_readonly_imm_use (imm_use_iterator *imm) return imm->imm_use; } -/* Return true if VAR has no uses. */ +/* tree-cfg.c */ +extern bool has_zero_uses_1 (const ssa_use_operand_t *head); +extern bool single_imm_use_1 (const ssa_use_operand_t *head, + use_operand_p *use_p, gimple *stmt); + +/* Return true if VAR has no nondebug uses. */ static inline bool has_zero_uses (const_tree var) { const ssa_use_operand_t *const ptr = &(SSA_NAME_IMM_USE_NODE (var)); - /* A single use means there is no items in the list. */ - return (ptr == ptr->next); + + /* A single use_operand means there are no items in the list. */ + if (ptr == ptr->next) + return true; + + /* If there are debug stmts, we have to look at each use and see + whether there are any nondebug uses. */ + if (!MAY_HAVE_DEBUG_STMTS) + return false; + + return has_zero_uses_1 (ptr); } -/* Return true if VAR has a single use. */ +/* Return true if VAR has a single nondebug use. 
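   A naive reference version of the predicate (a sketch only; the real
   implementation below short-circuits via the circular use list before
   falling back to single_imm_use_1):

     static inline bool
     has_single_use_naive (const_tree var)
     {
       const ssa_use_operand_t *head = &(SSA_NAME_IMM_USE_NODE (var));
       const ssa_use_operand_t *p;
       unsigned n = 0;
       for (p = head->next; p != head; p = p->next)
         if (!is_gimple_debug (USE_STMT (p)))   // debug uses don't count
           n++;
       return n == 1;
     }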
*/ static inline bool has_single_use (const_tree var) { const ssa_use_operand_t *const ptr = &(SSA_NAME_IMM_USE_NODE (var)); - /* A single use means there is one item in the list. */ - return (ptr != ptr->next && ptr == ptr->next->next); + + /* If there aren't any uses whatsoever, we're done. */ + if (ptr == ptr->next) + return false; + + /* If there's a single use, check that it's not a debug stmt. */ + if (ptr == ptr->next->next) + return !is_gimple_debug (USE_STMT (ptr->next)); + + /* If there are debug stmts, we have to look at each of them. */ + if (!MAY_HAVE_DEBUG_STMTS) + return false; + + return single_imm_use_1 (ptr, NULL, NULL); } -/* If VAR has only a single immediate use, return true, and set USE_P and STMT - to the use pointer and stmt of occurrence. */ +/* If VAR has only a single immediate nondebug use, return true, and + set USE_P and STMT to the use pointer and stmt of occurrence. */ static inline bool single_imm_use (const_tree var, use_operand_p *use_p, gimple *stmt) { const ssa_use_operand_t *const ptr = &(SSA_NAME_IMM_USE_NODE (var)); - if (ptr != ptr->next && ptr == ptr->next->next) + + /* If there aren't any uses whatsoever, we're done. */ + if (ptr == ptr->next) { - *use_p = ptr->next; - *stmt = ptr->next->loc.stmt; - return true; + return_false: + *use_p = NULL_USE_OPERAND_P; + *stmt = NULL; + return false; } - *use_p = NULL_USE_OPERAND_P; - *stmt = NULL; - return false; + + /* If there's a single use, check that it's not a debug stmt. */ + if (ptr == ptr->next->next) + { + if (!is_gimple_debug (USE_STMT (ptr->next))) + { + *use_p = ptr->next; + *stmt = ptr->next->loc.stmt; + return true; + } + else + goto return_false; + } + + /* If there are debug stmts, we have to look at each of them. */ + if (!MAY_HAVE_DEBUG_STMTS) + goto return_false; + + return single_imm_use_1 (ptr, use_p, stmt); } -/* Return the number of immediate uses of VAR. */ +/* Return the number of nondebug immediate uses of VAR. */ static inline unsigned int num_imm_uses (const_tree var) { @@ -402,8 +447,13 @@ num_imm_uses (const_tree var) const ssa_use_operand_t *ptr; unsigned int num = 0; - for (ptr = start->next; ptr != start; ptr = ptr->next) - num++; + if (!MAY_HAVE_DEBUG_STMTS) + for (ptr = start->next; ptr != start; ptr = ptr->next) + num++; + else + for (ptr = start->next; ptr != start; ptr = ptr->next) + if (!is_gimple_debug (USE_STMT (ptr))) + num++; return num; } diff --git a/gcc/tree-flow.h b/gcc/tree-flow.h index 8de4675cf20..11b67120e39 100644 --- a/gcc/tree-flow.h +++ b/gcc/tree-flow.h @@ -636,6 +636,10 @@ typedef bool (*walk_use_def_chains_fn) (tree, gimple, void *); extern void walk_use_def_chains (tree, walk_use_def_chains_fn, void *, bool); +void propagate_defs_into_debug_stmts (gimple, basic_block, + const gimple_stmt_iterator *); +void propagate_var_def_into_debug_stmts (tree, basic_block, + const gimple_stmt_iterator *); /* In tree-into-ssa.c */ void update_ssa (unsigned); diff --git a/gcc/tree-if-conv.c b/gcc/tree-if-conv.c index bfd0c293156..7f00a63453f 100644 --- a/gcc/tree-if-conv.c +++ b/gcc/tree-if-conv.c @@ -239,6 +239,15 @@ tree_if_convert_stmt (struct loop * loop, gimple t, tree cond, case GIMPLE_LABEL: break; + case GIMPLE_DEBUG: + /* ??? Should there be conditional GIMPLE_DEBUG_BINDs? */ + if (gimple_debug_bind_p (gsi_stmt (*gsi))) + { + gimple_debug_bind_reset_value (gsi_stmt (*gsi)); + update_stmt (gsi_stmt (*gsi)); + } + break; + case GIMPLE_ASSIGN: /* This GIMPLE_ASSIGN is killing previous value of LHS. 
Appropriate value will be selected by PHI node based on condition. It is possible @@ -423,8 +432,10 @@ if_convertible_stmt_p (struct loop *loop, basic_block bb, gimple stmt) case GIMPLE_LABEL: break; - case GIMPLE_ASSIGN: + case GIMPLE_DEBUG: + break; + case GIMPLE_ASSIGN: if (!if_convertible_gimple_assign_stmt_p (loop, bb, stmt)) return false; break; diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c index 97c9261b469..fbd973b4281 100644 --- a/gcc/tree-inline.c +++ b/gcc/tree-inline.c @@ -147,6 +147,30 @@ insert_decl_map (copy_body_data *id, tree key, tree value) *pointer_map_insert (id->decl_map, value) = value; } +/* Insert a tree->tree mapping for ID. This is only used for + variables. */ + +static void +insert_debug_decl_map (copy_body_data *id, tree key, tree value) +{ + if (!gimple_in_ssa_p (id->src_cfun)) + return; + + if (!MAY_HAVE_DEBUG_STMTS) + return; + + if (!target_for_debug_bind (key)) + return; + + gcc_assert (TREE_CODE (key) == PARM_DECL); + gcc_assert (TREE_CODE (value) == VAR_DECL); + + if (!id->debug_map) + id->debug_map = pointer_map_create (); + + *pointer_map_insert (id->debug_map, key) = value; +} + /* Construct new SSA name for old NAME. ID is the inline context. */ static tree @@ -220,6 +244,12 @@ remap_ssa_name (tree name, copy_body_data *id) return new_tree; } +/* If nonzero, we're remapping the contents of inlined debug + statements. If negative, an error has occurred, such as a + reference to a variable that isn't available in the inlined + context. */ +int processing_debug_stmt = 0; + /* Remap DECL during the copying of the BLOCK tree for the function. */ tree @@ -235,6 +265,12 @@ remap_decl (tree decl, copy_body_data *id) n = (tree *) pointer_map_contains (id->decl_map, decl); + if (!n && processing_debug_stmt) + { + processing_debug_stmt = -1; + return decl; + } + /* If we didn't already have an equivalent for this declaration, create one now. */ if (!n) @@ -812,7 +848,8 @@ remap_gimple_op_r (tree *tp, int *walk_subtrees, void *data) vars. If not referenced from types only. */ if (gimple_in_ssa_p (cfun) && TREE_CODE (*tp) == VAR_DECL - && id->remapping_type_depth == 0) + && id->remapping_type_depth == 0 + && !processing_debug_stmt) add_referenced_var (*tp); /* We should never have TREE_BLOCK set on non-statements. */ @@ -1043,10 +1080,11 @@ copy_tree_body_r (tree *tp, int *walk_subtrees, void *data) copy_tree_r (tp, walk_subtrees, NULL); /* Global variables we haven't seen yet needs to go into referenced - vars. If not referenced from types only. */ + vars. If not referenced from types or debug stmts only. */ if (gimple_in_ssa_p (cfun) && TREE_CODE (*tp) == VAR_DECL - && id->remapping_type_depth == 0) + && id->remapping_type_depth == 0 + && !processing_debug_stmt) add_referenced_var (*tp); /* If EXPR has block defined, map it to newly constructed block. @@ -1292,8 +1330,17 @@ remap_gimple_stmt (gimple stmt, copy_body_data *id) } } - /* Create a new deep copy of the statement. */ - copy = gimple_copy (stmt); + if (gimple_debug_bind_p (stmt)) + { + copy = gimple_build_debug_bind (gimple_debug_bind_get_var (stmt), + gimple_debug_bind_get_value (stmt), + stmt); + VEC_safe_push (gimple, heap, id->debug_stmts, copy); + return copy; + } + else + /* Create a new deep copy of the statement. 
*/ + copy = gimple_copy (stmt); } /* If STMT has a block defined, map it to the newly constructed @@ -1310,6 +1357,9 @@ remap_gimple_stmt (gimple stmt, copy_body_data *id) gimple_set_block (copy, new_block); + if (gimple_debug_bind_p (copy)) + return copy; + /* Remap all the operands in COPY. */ memset (&wi, 0, sizeof (wi)); wi.info = id; @@ -1604,7 +1654,7 @@ copy_bb (copy_body_data *id, basic_block bb, int frequency_scale, add_stmt_to_eh_region (stmt, id->eh_region); } - if (gimple_in_ssa_p (cfun)) + if (gimple_in_ssa_p (cfun) && !is_gimple_debug (stmt)) { ssa_op_iter i; tree def; @@ -1733,9 +1783,12 @@ copy_edges_for_bb (basic_block bb, gcov_type count_scale, basic_block ret_bb) bool can_throw, nonlocal_goto; copy_stmt = gsi_stmt (si); - update_stmt (copy_stmt); - if (gimple_in_ssa_p (cfun)) - mark_symbols_for_renaming (copy_stmt); + if (!is_gimple_debug (copy_stmt)) + { + update_stmt (copy_stmt); + if (gimple_in_ssa_p (cfun)) + mark_symbols_for_renaming (copy_stmt); + } /* Do this before the possible split_block. */ gsi_next (&si); @@ -2011,6 +2064,82 @@ copy_cfg_body (copy_body_data * id, gcov_type count, int frequency, return new_fndecl; } +/* Copy the debug STMT using ID. We deal with these statements in a + special way: if any variable in their VALUE expression wasn't + remapped yet, we won't remap it, because that would get decl uids + out of sync, causing codegen differences between -g and -g0. If + this arises, we drop the VALUE expression altogether. */ + +static void +copy_debug_stmt (gimple stmt, copy_body_data *id) +{ + tree t, *n; + struct walk_stmt_info wi; + + t = id->block; + if (gimple_block (stmt)) + { + tree *n; + n = (tree *) pointer_map_contains (id->decl_map, gimple_block (stmt)); + if (n) + t = *n; + } + gimple_set_block (stmt, t); + + /* Remap all the operands in COPY. */ + memset (&wi, 0, sizeof (wi)); + wi.info = id; + + processing_debug_stmt = 1; + + t = gimple_debug_bind_get_var (stmt); + + if (TREE_CODE (t) == PARM_DECL && id->debug_map + && (n = (tree *) pointer_map_contains (id->debug_map, t))) + { + gcc_assert (TREE_CODE (*n) == VAR_DECL); + t = *n; + } + else + walk_tree (&t, remap_gimple_op_r, &wi, NULL); + + gimple_debug_bind_set_var (stmt, t); + + if (gimple_debug_bind_has_value_p (stmt)) + walk_tree (gimple_debug_bind_get_value_ptr (stmt), + remap_gimple_op_r, &wi, NULL); + + /* Punt if any decl couldn't be remapped. */ + if (processing_debug_stmt < 0) + gimple_debug_bind_reset_value (stmt); + + processing_debug_stmt = 0; + + update_stmt (stmt); + if (gimple_in_ssa_p (cfun)) + mark_symbols_for_renaming (stmt); +} + +/* Process deferred debug stmts. In order to give values better odds + of being successfully remapped, we delay the processing of debug + stmts until all other stmts that might require remapping are + processed. */ + +static void +copy_debug_stmts (copy_body_data *id) +{ + size_t i; + gimple stmt; + + if (!id->debug_stmts) + return; + + for (i = 0; VEC_iterate (gimple, id->debug_stmts, i, stmt); i++) + copy_debug_stmt (stmt, id); + + VEC_free (gimple, heap, id->debug_stmts); +} + /* Make a copy of the body of SRC_FN so that it can be inserted inline in another function. */ @@ -2025,6 +2154,9 @@ copy_tree_body (copy_body_data *id) return body; } +/* Make a copy of the body of FN so that it can be inserted inline in + another function. 
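   Before copy_body returns, copy_debug_stmts () above runs the deferred
   binds through copy_debug_stmt; its punt logic condenses to this
   sketch:

     processing_debug_stmt = 1;
     walk_tree (gimple_debug_bind_get_value_ptr (stmt),
                remap_gimple_op_r, &wi, NULL);
     if (processing_debug_stmt < 0)           // some decl had no mapping
       gimple_debug_bind_reset_value (stmt);  // drop the value, keep the bind
     processing_debug_stmt = 0;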
*/ + static tree copy_body (copy_body_data *id, gcov_type count, int frequency, basic_block entry_block_map, basic_block exit_block_map) @@ -2035,6 +2167,7 @@ copy_body (copy_body_data *id, gcov_type count, int frequency, /* If this body has a CFG, walk CFG and copy. */ gcc_assert (ENTRY_BLOCK_PTR_FOR_FUNCTION (DECL_STRUCT_FUNCTION (fndecl))); body = copy_cfg_body (id, count, frequency, entry_block_map, exit_block_map); + copy_debug_stmts (id); return body; } @@ -2055,8 +2188,51 @@ self_inlining_addr_expr (tree value, tree fn) return var && auto_var_in_fn_p (var, fn); } +/* Append to BB a debug annotation that binds VAR to VALUE, inheriting + lexical block and line number information from base_stmt, if given, + or from the last stmt of the block otherwise. */ + +static gimple +insert_init_debug_bind (copy_body_data *id, + basic_block bb, tree var, tree value, + gimple base_stmt) +{ + gimple note; + gimple_stmt_iterator gsi; + tree tracked_var; + + if (!gimple_in_ssa_p (id->src_cfun)) + return NULL; + + if (!MAY_HAVE_DEBUG_STMTS) + return NULL; + + tracked_var = target_for_debug_bind (var); + if (!tracked_var) + return NULL; + + if (bb) + { + gsi = gsi_last_bb (bb); + if (!base_stmt && !gsi_end_p (gsi)) + base_stmt = gsi_stmt (gsi); + } + + note = gimple_build_debug_bind (tracked_var, value, base_stmt); + + if (bb) + { + if (!gsi_end_p (gsi)) + gsi_insert_after (&gsi, note, GSI_SAME_STMT); + else + gsi_insert_before (&gsi, note, GSI_SAME_STMT); + } + + return note; +} + static void -insert_init_stmt (basic_block bb, gimple init_stmt) +insert_init_stmt (copy_body_data *id, basic_block bb, gimple init_stmt) { /* If VAR represents a zero-sized variable, it's possible that the assignment statement may result in no gimple statements. */ @@ -2068,7 +2244,8 @@ insert_init_stmt (basic_block bb, gimple init_stmt) from a rhs with a conversion. Handle that here by forcing the rhs into a temporary. gimple_regimplify_operands is not prepared to do this for us. */ - if (!is_gimple_reg (gimple_assign_lhs (init_stmt)) + if (!is_gimple_debug (init_stmt) + && !is_gimple_reg (gimple_assign_lhs (init_stmt)) && is_gimple_reg_type (TREE_TYPE (gimple_assign_lhs (init_stmt))) && gimple_assign_rhs_class (init_stmt) == GIMPLE_UNARY_RHS) { @@ -2083,6 +2260,18 @@ insert_init_stmt (basic_block bb, gimple init_stmt) gsi_insert_after (&si, init_stmt, GSI_NEW_STMT); gimple_regimplify_operands (init_stmt, &si); mark_symbols_for_renaming (init_stmt); + + if (!is_gimple_debug (init_stmt) && MAY_HAVE_DEBUG_STMTS) + { + tree var, def = gimple_assign_lhs (init_stmt); + + if (TREE_CODE (def) == SSA_NAME) + var = SSA_NAME_VAR (def); + else + var = def; + + insert_init_debug_bind (id, bb, var, def, init_stmt); + } } } @@ -2113,9 +2302,29 @@ setup_one_parameter (copy_body_data *id, tree p, tree value, tree fn, rhs = fold_build1 (VIEW_CONVERT_EXPR, TREE_TYPE (p), value); } + /* Make an equivalent VAR_DECL. Note that we must NOT remap the type + here since the type of this decl must be visible to the calling + function. */ + var = copy_decl_to_var (p, id); + + /* We're actually using the newly-created var. */ + if (gimple_in_ssa_p (cfun) && TREE_CODE (var) == VAR_DECL) + { + get_var_ann (var); + add_referenced_var (var); + } + + /* Declare this new variable. */ + TREE_CHAIN (var) = *vars; + *vars = var; + + /* Make gimplifier happy about this variable. */ + DECL_SEEN_IN_BIND_EXPR_P (var) = 1; + /* If the parameter is never assigned to, has no SSA_NAMEs created, - we may not need to create a new variable here at all. 
Instead, we may - be able to just use the argument value. */ + we would not need to create a new variable here at all, if it + weren't for debug info. Still, we can just use the argument + value. */ if (TREE_READONLY (p) && !TREE_ADDRESSABLE (p) && value && !TREE_SIDE_EFFECTS (value) @@ -2136,32 +2345,16 @@ setup_one_parameter (copy_body_data *id, tree p, tree value, tree fn, && ! self_inlining_addr_expr (value, fn)) { insert_decl_map (id, p, value); - return NULL; + insert_debug_decl_map (id, p, var); + return insert_init_debug_bind (id, bb, var, value, NULL); } } - /* Make an equivalent VAR_DECL. Note that we must NOT remap the type - here since the type of this decl must be visible to the calling - function. */ - var = copy_decl_to_var (p, id); - if (gimple_in_ssa_p (cfun) && TREE_CODE (var) == VAR_DECL) - { - get_var_ann (var); - add_referenced_var (var); - } - /* Register the VAR_DECL as the equivalent for the PARM_DECL; that way, when the PARM_DECL is encountered, it will be automatically replaced by the VAR_DECL. */ insert_decl_map (id, p, var); - /* Declare this new variable. */ - TREE_CHAIN (var) = *vars; - *vars = var; - - /* Make gimplifier happy about this variable. */ - DECL_SEEN_IN_BIND_EXPR_P (var) = 1; - /* Even if P was TREE_READONLY, the new VAR should not be. In the original code, we would have constructed a temporary, and then the function body would have never @@ -2183,15 +2376,7 @@ setup_one_parameter (copy_body_data *id, tree p, tree value, tree fn, Do replacement at -O0 for const arguments replaced by constant. This is important for builtin_constant_p and other construct requiring - constant argument to be visible in inlined function body. - - FIXME: This usually kills the last connection in between inlined - function parameter and the actual value in debug info. Can we do - better here? If we just inserted the statement, copy propagation - would kill it anyway as it always did in older versions of GCC. - - We might want to introduce a notion that single SSA_NAME might - represent multiple variables for purposes of debugging. */ + constant argument to be visible in inlined function body. */ if (gimple_in_ssa_p (cfun) && rhs && def && is_gimple_reg (p) && (optimize || (TREE_READONLY (p) @@ -2201,7 +2386,7 @@ setup_one_parameter (copy_body_data *id, tree p, tree value, tree fn, && !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (def)) { insert_decl_map (id, def, rhs); - return NULL; + return insert_init_debug_bind (id, bb, var, rhs, NULL); } /* If the value of argument is never used, don't care about initializing @@ -2209,7 +2394,7 @@ setup_one_parameter (copy_body_data *id, tree p, tree value, tree fn, if (optimize && gimple_in_ssa_p (cfun) && !def && is_gimple_reg (p)) { gcc_assert (!value || !TREE_SIDE_EFFECTS (value)); - return NULL; + return insert_init_debug_bind (id, bb, var, rhs, NULL); } /* Initialize this VAR_DECL from the equivalent argument. 
Convert @@ -2219,7 +2404,7 @@ setup_one_parameter (copy_body_data *id, tree p, tree value, tree fn, if (rhs == error_mark_node) { insert_decl_map (id, p, var); - return NULL; + return insert_init_debug_bind (id, bb, var, rhs, NULL); } STRIP_USELESS_TYPE_CONVERSION (rhs); @@ -2237,7 +2422,7 @@ setup_one_parameter (copy_body_data *id, tree p, tree value, tree fn, init_stmt = gimple_build_assign (var, rhs); if (bb && init_stmt) - insert_init_stmt (bb, init_stmt); + insert_init_stmt (id, bb, init_stmt); } return init_stmt; } @@ -3118,6 +3303,7 @@ estimate_num_insns (gimple stmt, eni_weights *weights) case GIMPLE_PHI: case GIMPLE_RETURN: case GIMPLE_PREDICT: + case GIMPLE_DEBUG: return 0; case GIMPLE_ASM: @@ -3262,7 +3448,7 @@ expand_call_inline (basic_block bb, gimple stmt, copy_body_data *id) { tree retvar, use_retvar; tree fn; - struct pointer_map_t *st; + struct pointer_map_t *st, *dst; tree return_slot; tree modify_dest; location_t saved_location; @@ -3402,6 +3588,8 @@ expand_call_inline (basic_block bb, gimple stmt, copy_body_data *id) map. */ st = id->decl_map; id->decl_map = pointer_map_create (); + dst = id->debug_map; + id->debug_map = NULL; /* Record the function we are about to inline. */ id->src_fn = fn; @@ -3498,6 +3686,11 @@ expand_call_inline (basic_block bb, gimple stmt, copy_body_data *id) } /* Clean up. */ + if (id->debug_map) + { + pointer_map_destroy (id->debug_map); + id->debug_map = dst; + } pointer_map_destroy (id->decl_map); id->decl_map = st; @@ -3726,6 +3919,8 @@ optimize_inline_calls (tree fn) fold_marked_statements (last, id.statements_to_fold); pointer_set_destroy (id.statements_to_fold); + gcc_assert (!id.debug_stmts); + /* Renumber the (code) basic_blocks consecutively. */ compact_blocks (); /* Renumber the lexical scoping (non-code) blocks consecutively. */ @@ -3961,6 +4156,7 @@ unsave_expr_now (tree expr) id.src_fn = current_function_decl; id.dst_fn = current_function_decl; id.decl_map = pointer_map_create (); + id.debug_map = NULL; id.copy_decl = copy_decl_no_change; id.transform_call_graph_edges = CB_CGE_DUPLICATE; @@ -3976,6 +4172,8 @@ unsave_expr_now (tree expr) /* Clean up. */ pointer_map_destroy (id.decl_map); + if (id.debug_map) + pointer_map_destroy (id.debug_map); return expr; } @@ -4107,6 +4305,7 @@ copy_gimple_seq_and_replace_locals (gimple_seq seq) id.src_fn = current_function_decl; id.dst_fn = current_function_decl; id.decl_map = pointer_map_create (); + id.debug_map = NULL; id.copy_decl = copy_decl_no_change; id.transform_call_graph_edges = CB_CGE_DUPLICATE; @@ -4131,6 +4330,8 @@ copy_gimple_seq_and_replace_locals (gimple_seq seq) /* Clean up. */ pointer_map_destroy (id.decl_map); + if (id.debug_map) + pointer_map_destroy (id.debug_map); return copy; } @@ -4506,7 +4707,7 @@ tree_function_versioning (tree old_decl, tree new_decl, tree p; unsigned i; struct ipa_replace_map *replace_info; - basic_block old_entry_block; + basic_block old_entry_block, bb; VEC (gimple, heap) *init_stmts = VEC_alloc (gimple, heap, 10); tree t_step; @@ -4534,8 +4735,9 @@ tree_function_versioning (tree old_decl, tree new_decl, /* Generate a new name for the new version. */ id.statements_to_fold = pointer_set_create (); - + id.decl_map = pointer_map_create (); + id.debug_map = NULL; id.src_fn = old_decl; id.dst_fn = new_decl; id.src_node = old_version_node; @@ -4637,12 +4839,12 @@ tree_function_versioning (tree old_decl, tree new_decl, /* Renumber the lexical scoping (non-code) blocks consecutively. 
*/ number_blocks (new_decl); - if (VEC_length (gimple, init_stmts)) - { - basic_block bb = split_edge (single_succ_edge (ENTRY_BLOCK_PTR)); - while (VEC_length (gimple, init_stmts)) - insert_init_stmt (bb, VEC_pop (gimple, init_stmts)); - } + /* We want to create the BB unconditionally, so that the addition of + debug stmts doesn't affect BB count, which may in the end cause + codegen differences. */ + bb = split_edge (single_succ_edge (ENTRY_BLOCK_PTR)); + while (VEC_length (gimple, init_stmts)) + insert_init_stmt (&id, bb, VEC_pop (gimple, init_stmts)); update_clone_info (&id); /* Remap the nonlocal_goto_save_area, if any. */ @@ -4657,6 +4859,8 @@ tree_function_versioning (tree old_decl, tree new_decl, /* Clean up. */ pointer_map_destroy (id.decl_map); + if (id.debug_map) + pointer_map_destroy (id.debug_map); free_dominance_info (CDI_DOMINATORS); free_dominance_info (CDI_POST_DOMINATORS); @@ -4668,6 +4872,7 @@ tree_function_versioning (tree old_decl, tree new_decl, free_dominance_info (CDI_DOMINATORS); free_dominance_info (CDI_POST_DOMINATORS); + gcc_assert (!id.debug_stmts); VEC_free (gimple, heap, init_stmts); pop_cfun (); current_function_decl = old_current_function_decl; @@ -4742,11 +4947,14 @@ build_duplicate_type (tree type) id.dst_fn = current_function_decl; id.src_cfun = cfun; id.decl_map = pointer_map_create (); + id.debug_map = NULL; id.copy_decl = copy_decl_no_change; type = remap_type_1 (type, &id); pointer_map_destroy (id.decl_map); + if (id.debug_map) + pointer_map_destroy (id.debug_map); TYPE_CANONICAL (type) = type; diff --git a/gcc/tree-inline.h b/gcc/tree-inline.h index 76ac17adcee..f04a3f0a843 100644 --- a/gcc/tree-inline.h +++ b/gcc/tree-inline.h @@ -22,7 +22,7 @@ along with GCC; see the file COPYING3. If not see #ifndef GCC_TREE_INLINE_H #define GCC_TREE_INLINE_H -#include "pointer-set.h" +#include "gimple.h" struct cgraph_edge; @@ -117,6 +117,15 @@ typedef struct copy_body_data /* Entry basic block to currently copied body. */ struct basic_block_def *entry_bb; + + /* Debug statements that need processing. */ + VEC(gimple,heap) *debug_stmts; + + /* A map from local declarations in the inlined function to + equivalents in the function into which it is being inlined, where + the originals have been mapped to a value rather than to a + variable. */ + struct pointer_map_t *debug_map; } copy_body_data; /* Weights of constructions for estimate_num_insns. */ diff --git a/gcc/tree-into-ssa.c b/gcc/tree-into-ssa.c index bdec08063e4..9f06e8c5d04 100644 --- a/gcc/tree-into-ssa.c +++ b/gcc/tree-into-ssa.c @@ -749,6 +749,9 @@ mark_def_sites (basic_block bb, gimple stmt, bitmap kills) set_register_defs (stmt, false); set_rewrite_uses (stmt, false); + if (is_gimple_debug (stmt)) + return; + /* If a variable is used before being set, then the variable is live across a block boundary, so mark it live-on-entry to BB. */ FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_USE) @@ -1051,7 +1054,6 @@ mark_phi_for_rewrite (basic_block bb, gimple phi) VEC_replace (gimple_vec, phis_to_rewrite, idx, phis); } - /* Insert PHI nodes for variable VAR using the iterated dominance frontier given in PHI_INSERTION_POINTS. 
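When VAR is tracked for debug info (and UPDATE_P is false), a bind such as

   # DEBUG var => phi_result

is also inserted right after the labels of the block, as the code below shows.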
If UPDATE_P is true, this function assumes that the caller is incrementally updating the @@ -1118,8 +1120,17 @@ insert_phi_nodes_for (tree var, bitmap phi_insertion_points, bool update_p) } else { + tree tracked_var; gcc_assert (DECL_P (var)); phi = create_phi_node (var, bb); + if (!update_p && (tracked_var = target_for_debug_bind (var))) + { + gimple note = gimple_build_debug_bind (tracked_var, + PHI_RESULT (phi), + phi); + gimple_stmt_iterator si = gsi_after_labels (bb); + gsi_insert_before (&si, note, GSI_SAME_STMT); + } } /* Mark this PHI node as interesting for update_ssa. */ @@ -1260,11 +1271,12 @@ get_reaching_def (tree var) definition of a variable when a new real or virtual definition is found. */ static void -rewrite_stmt (gimple stmt) +rewrite_stmt (gimple_stmt_iterator si) { use_operand_p use_p; def_operand_p def_p; ssa_op_iter iter; + gimple stmt = gsi_stmt (si); /* If mark_def_sites decided that we don't need to rewrite this statement, ignore it. */ @@ -1293,9 +1305,18 @@ rewrite_stmt (gimple stmt) FOR_EACH_SSA_DEF_OPERAND (def_p, stmt, iter, SSA_OP_DEF) { tree var = DEF_FROM_PTR (def_p); + tree name = make_ssa_name (var, stmt); + tree tracked_var; gcc_assert (DECL_P (var)); - SET_DEF (def_p, make_ssa_name (var, stmt)); + SET_DEF (def_p, name); register_new_def (DEF_FROM_PTR (def_p), var); + + tracked_var = target_for_debug_bind (var); + if (tracked_var) + { + gimple note = gimple_build_debug_bind (tracked_var, name, stmt); + gsi_insert_after (&si, note, GSI_SAME_STMT); + } } } @@ -1366,7 +1387,7 @@ rewrite_enter_block (struct dom_walk_data *walk_data ATTRIBUTE_UNUSED, of a variable when a new real or virtual definition is found. */ if (TEST_BIT (interesting_blocks, bb->index)) for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi)) - rewrite_stmt (gsi_stmt (gsi)); + rewrite_stmt (gsi); /* Step 3. Visit all the successor blocks of BB looking for PHI nodes. For every PHI node found, add a new argument containing the current @@ -1759,6 +1780,38 @@ maybe_replace_use (use_operand_p use_p) } +/* Same as maybe_replace_use, but without introducing default stmts, + returning false to indicate a need to do so. */ + +static inline bool +maybe_replace_use_in_debug_stmt (use_operand_p use_p) +{ + tree rdef = NULL_TREE; + tree use = USE_FROM_PTR (use_p); + tree sym = DECL_P (use) ? use : SSA_NAME_VAR (use); + + if (symbol_marked_for_renaming (sym)) + rdef = get_current_def (sym); + else if (is_old_name (use)) + { + rdef = get_current_def (use); + /* We can't assume that, if there's no current definition, the + default one should be used. It could be the case that we've + rearranged blocks so that the earlier definition no longer + dominates the use. */ + if (!rdef && SSA_NAME_IS_DEFAULT_DEF (use)) + rdef = use; + } + else + rdef = use; + + if (rdef && rdef != use) + SET_USE (use_p, rdef); + + return rdef != NULL_TREE; +} + + /* If the operand pointed to by DEF_P is an SSA name in NEW_SSA_NAMES or OLD_SSA_NAMES, or if it is a symbol marked for renaming, register it as the current definition for the names replaced by @@ -1825,8 +1878,42 @@ rewrite_update_stmt (gimple stmt) /* Rewrite USES included in OLD_SSA_NAMES and USES whose underlying symbol is marked for renaming. 
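Uses in debug stmts go through the more conservative maybe_replace_use_in_debug_stmt above: a bind like # DEBUG x => x_3 is rewritten only if a replacement definition is actually available, and is reset otherwise, so that -g never forces the creation of otherwise-unused default definitions.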
*/ if (rewrite_uses_p (stmt)) - FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_ALL_USES) - maybe_replace_use (use_p); + { + if (is_gimple_debug (stmt)) + { + bool failed = false; + + FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_USE) + if (!maybe_replace_use_in_debug_stmt (use_p)) + { + failed = true; + break; + } + + if (failed) + { + /* DOM sometimes threads jumps in such a way that a + debug stmt ends up referencing an SSA name whose definition + no longer dominates the debug stmt, but such that all + incoming definitions refer to the same definition in + an earlier dominator. We could try to recover that + definition somehow, but this will have to do for now. + + Introducing a default definition, which is what + maybe_replace_use() would do in such cases, may + modify code generation, for the otherwise-unused + default definition would never go away, modifying SSA + version numbers all over. */ + gimple_debug_bind_reset_value (stmt); + update_stmt (stmt); + } + } + else + { + FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_ALL_USES) + maybe_replace_use (use_p); + } + } /* Register definitions of names in NEW_SSA_NAMES and OLD_SSA_NAMES. Also register definitions for names whose underlying symbol is @@ -2325,7 +2412,12 @@ mark_use_interesting (tree var, gimple stmt, basic_block bb, bool insert_phi_p) if (gimple_code (stmt) == GIMPLE_PHI) mark_phi_for_rewrite (def_bb, stmt); else - set_rewrite_uses (stmt, true); + { + set_rewrite_uses (stmt, true); + + if (is_gimple_debug (stmt)) + return; + } /* If VAR has not been defined in BB, then it is live-on-entry to BB. Note that we cannot just use the block holding VAR's diff --git a/gcc/tree-outof-ssa.c b/gcc/tree-outof-ssa.c index 220171ca7f9..d3901c34f0e 100644 --- a/gcc/tree-outof-ssa.c +++ b/gcc/tree-outof-ssa.c @@ -1,5 +1,6 @@ /* Convert a program in SSA form into Normal form. - Copyright (C) 2004, 2005, 2006, 2007, 2008 Free Software Foundation, Inc. + Copyright (C) 2004, 2005, 2006, 2007, 2008, 2009 + Free Software Foundation, Inc. Contributed by Andrew Macleod <amacleod@redhat.com> This file is part of GCC. @@ -119,6 +120,8 @@ set_location_for_edge (edge e) for (gsi = gsi_last_bb (bb); !gsi_end_p (gsi); gsi_prev (&gsi)) { gimple stmt = gsi_stmt (gsi); + if (is_gimple_debug (stmt)) + continue; if (gimple_has_location (stmt) || gimple_block (stmt)) { set_curr_insn_source_location (gimple_location (stmt)); diff --git a/gcc/tree-parloops.c b/gcc/tree-parloops.c index 9b9ac758dc2..a6d8f215914 100644 --- a/gcc/tree-parloops.c +++ b/gcc/tree-parloops.c @@ -508,7 +508,11 @@ eliminate_local_variables_stmt (edge entry, gimple stmt, dta.decl_address = decl_address; dta.changed = false; - walk_gimple_op (stmt, eliminate_local_variables_1, &dta.info); + if (gimple_debug_bind_p (stmt)) + walk_tree (gimple_debug_bind_get_value_ptr (stmt), + eliminate_local_variables_1, &dta.info, NULL); + else + walk_gimple_op (stmt, eliminate_local_variables_1, &dta.info); if (dta.changed) update_stmt (stmt); @@ -692,6 +696,53 @@ separate_decls_in_region_stmt (edge entry, edge exit, gimple stmt, } } +/* Finds the ssa names used in STMT that are defined outside the + region between ENTRY and EXIT and replaces such ssa names with + their duplicates. The duplicates are stored in NAME_COPIES. Base + decls of all ssa names used in STMT (including those defined in + LOOP) are replaced with the new temporary variables; the + replacement decls are stored in DECL_COPIES.
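Returns true if the variable of the bind has no copied decl, in which case the caller is expected to remove the debug stmt altogether.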
*/ + +static bool +separate_decls_in_region_debug_bind (gimple stmt, + htab_t name_copies, htab_t decl_copies) +{ + use_operand_p use; + ssa_op_iter oi; + tree var, name; + struct int_tree_map ielt; + struct name_to_copy_elt elt; + void **slot, **dslot; + + var = gimple_debug_bind_get_var (stmt); + gcc_assert (DECL_P (var) && SSA_VAR_P (var)); + ielt.uid = DECL_UID (var); + dslot = htab_find_slot_with_hash (decl_copies, &ielt, ielt.uid, NO_INSERT); + if (!dslot) + return true; + gimple_debug_bind_set_var (stmt, ((struct int_tree_map *) *dslot)->to); + + FOR_EACH_PHI_OR_STMT_USE (use, stmt, oi, SSA_OP_USE) + { + name = USE_FROM_PTR (use); + if (TREE_CODE (name) != SSA_NAME) + continue; + + elt.version = SSA_NAME_VERSION (name); + slot = htab_find_slot_with_hash (name_copies, &elt, elt.version, NO_INSERT); + if (!slot) + { + gimple_debug_bind_reset_value (stmt); + update_stmt (stmt); + break; + } + + SET_USE (use, ((struct name_to_copy_elt *) *slot)->new_name); + } + + return false; +} + /* Callback for htab_traverse. Adds a field corresponding to the reduction specified in SLOT. The type is passed in DATA. */ @@ -1027,6 +1078,7 @@ separate_decls_in_region (edge entry, edge exit, htab_t reduction_list, basic_block bb; basic_block entry_bb = bb1; basic_block exit_bb = exit->dest; + bool has_debug_stmt = false; entry = single_succ_edge (entry_bb); gather_blocks_in_sese_region (entry_bb, exit_bb, &body); @@ -1040,11 +1092,47 @@ separate_decls_in_region (edge entry, edge exit, htab_t reduction_list, name_copies, decl_copies); for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi)) - separate_decls_in_region_stmt (entry, exit, gsi_stmt (gsi), - name_copies, decl_copies); + { + gimple stmt = gsi_stmt (gsi); + + if (is_gimple_debug (stmt)) + has_debug_stmt = true; + else + separate_decls_in_region_stmt (entry, exit, stmt, + name_copies, decl_copies); + } } } + /* Now process debug bind stmts. We must not create decls while + processing debug stmts, so we defer their processing so as to + make sure we will have debug info for as many variables as + possible (all of those that were dealt with in the loop above), + and discard those for which we know there's nothing we can + do. */ + if (has_debug_stmt) + for (i = 0; VEC_iterate (basic_block, body, i, bb); i++) + if (bb != entry_bb && bb != exit_bb) + { + for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi);) + { + gimple stmt = gsi_stmt (gsi); + + if (gimple_debug_bind_p (stmt)) + { + if (separate_decls_in_region_debug_bind (stmt, + name_copies, + decl_copies)) + { + gsi_remove (&gsi, true); + continue; + } + } + + gsi_next (&gsi); + } + } + VEC_free (basic_block, heap, body); if (htab_elements (name_copies) == 0 && reduction_list == 0) diff --git a/gcc/tree-ssa-coalesce.c b/gcc/tree-ssa-coalesce.c index ec26a5dc959..0164eca7ddb 100644 --- a/gcc/tree-ssa-coalesce.c +++ b/gcc/tree-ssa-coalesce.c @@ -884,6 +884,8 @@ build_ssa_conflict_graph (tree_live_info_p liveinfo) && TREE_CODE (rhs1) == SSA_NAME) live_track_clear_var (live, rhs1); } + else if (is_gimple_debug (stmt)) + continue; FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF) live_track_process_def (live, var, graph); @@ -1048,6 +1050,9 @@ create_outofssa_var_map (coalesce_list_p cl, bitmap used_in_copy) { stmt = gsi_stmt (gsi); + if (is_gimple_debug (stmt)) + continue; + /* Register USE and DEF operands in each statement. 
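Debug stmts were skipped just above, so their uses neither create nor extend partitions; otherwise coalescing, and with it code generation, could depend on -g.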
*/ FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, (SSA_OP_DEF|SSA_OP_USE)) register_ssa_partition (map, var); diff --git a/gcc/tree-ssa-dce.c b/gcc/tree-ssa-dce.c index 2eec3147886..99a039fffee 100644 --- a/gcc/tree-ssa-dce.c +++ b/gcc/tree-ssa-dce.c @@ -1,5 +1,5 @@ /* Dead code elimination pass for the GNU compiler. - Copyright (C) 2002, 2003, 2004, 2005, 2006, 2007, 2008 + Copyright (C) 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation, Inc. Contributed by Ben Elliston <bje@redhat.com> and Andrew MacLeod <amacleod@redhat.com> @@ -221,7 +221,7 @@ mark_stmt_necessary (gimple stmt, bool add_to_worklist) gimple_set_plf (stmt, STMT_NECESSARY, true); if (add_to_worklist) VEC_safe_push (gimple, heap, worklist, stmt); - if (bb_contains_live_stmts) + if (bb_contains_live_stmts && !is_gimple_debug (stmt)) SET_BIT (bb_contains_live_stmts, gimple_bb (stmt)->index); } @@ -333,6 +333,10 @@ mark_stmt_if_obviously_necessary (gimple stmt, bool aggressive) } break; + case GIMPLE_DEBUG: + mark_stmt_necessary (stmt, false); + return; + case GIMPLE_GOTO: gcc_assert (!simple_goto_p (stmt)); mark_stmt_necessary (stmt, true); @@ -1063,7 +1067,6 @@ remove_dead_stmt (gimple_stmt_iterator *i, basic_block bb) release_defs (stmt); } - /* Eliminate unnecessary statements. Any instruction not marked as necessary contributes nothing to the program, and can be deleted. */ @@ -1075,16 +1078,44 @@ eliminate_unnecessary_stmts (void) gimple_stmt_iterator gsi; gimple stmt; tree call; + VEC (basic_block, heap) *h; if (dump_file && (dump_flags & TDF_DETAILS)) fprintf (dump_file, "\nEliminating unnecessary statements:\n"); clear_special_calls (); - FOR_EACH_BB (bb) + /* Walking basic blocks and statements in reverse order avoids + releasing SSA names before any other DEFs that refer to them are + released. This helps avoid loss of debug information, as we get + a chance to propagate all RHSs of removed SSAs into debug uses, + rather than only the latest ones. E.g., consider: + + x_3 = y_1 + z_2; + a_5 = x_3 - b_4; + # DEBUG a => a_5 + + If we were to release x_3 before a_5, when we reached a_5 and + tried to substitute it into the debug stmt, we'd see x_3 there, + but x_3's DEF, type, etc would have already been disconnected. + By going backwards, the debug stmt first changes to: + + # DEBUG a => x_3 - b_4 + + and then to: + + # DEBUG a => y_1 + z_2 - b_4 + + as desired. */ + gcc_assert (dom_info_available_p (CDI_DOMINATORS)); + h = get_all_dominated_blocks (CDI_DOMINATORS, single_succ (ENTRY_BLOCK_PTR)); + + while (VEC_length (basic_block, h)) { + bb = VEC_pop (basic_block, h); + /* Remove dead statements. */ - for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi);) + for (gsi = gsi_last_bb (bb); !gsi_end_p (gsi);) { stmt = gsi_stmt (gsi); @@ -1095,6 +1126,14 @@ eliminate_unnecessary_stmts (void) { remove_dead_stmt (&gsi, bb); something_changed = true; + + /* If stmt was the last stmt in the block, we want to + move gsi to the stmt that became the last stmt, but + gsi_prev would crash. */ + if (gsi_end_p (gsi)) + gsi = gsi_last_bb (bb); + else + gsi_prev (&gsi); } else if (is_gimple_call (stmt)) { @@ -1124,24 +1163,29 @@ eliminate_unnecessary_stmts (void) } notice_special_calls (stmt); } - gsi_next (&gsi); + gsi_prev (&gsi); } else - { - gsi_next (&gsi); - } + gsi_prev (&gsi); } } + + VEC_free (basic_block, heap, h); + /* Since we don't track liveness of virtual PHI nodes, it is possible that we rendered some PHI nodes unreachable while they are still in use. Mark them for renaming. 
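Unreachable blocks are likewise deleted in reverse dominator order below, for the same reason the statement walk ran backwards: each removed DEF then gets a chance to be propagated into debug stmts before it is released.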
*/ if (cfg_altered) { - basic_block next_bb; + basic_block prev_bb; + find_unreachable_blocks (); - for (bb = ENTRY_BLOCK_PTR->next_bb; bb != EXIT_BLOCK_PTR; bb = next_bb) + + /* Delete all unreachable basic blocks in reverse dominator order. */ + for (bb = EXIT_BLOCK_PTR->prev_bb; bb != ENTRY_BLOCK_PTR; bb = prev_bb) { - next_bb = bb->next_bb; + prev_bb = bb->prev_bb; + if (!TEST_BIT (bb_contains_live_stmts, bb->index) || !(bb->flags & BB_REACHABLE)) { @@ -1165,8 +1209,36 @@ eliminate_unnecessary_stmts (void) if (found) mark_virtual_phi_result_for_renaming (gsi_stmt (gsi)); } + if (!(bb->flags & BB_REACHABLE)) - delete_basic_block (bb); + { + /* Speed up the removal of blocks that don't + dominate others. Walking backwards, this should + be the common case. ??? Do we need to recompute + dominators because of cfg_altered? */ + if (!MAY_HAVE_DEBUG_STMTS + || !first_dom_son (CDI_DOMINATORS, bb)) + delete_basic_block (bb); + else + { + h = get_all_dominated_blocks (CDI_DOMINATORS, bb); + + while (VEC_length (basic_block, h)) + { + bb = VEC_pop (basic_block, h); + prev_bb = bb->prev_bb; + /* Rearrangements to the CFG may have failed + to update the dominators tree, so that + formerly-dominated blocks are now + otherwise reachable. */ + if (!!(bb->flags & BB_REACHABLE)) + continue; + delete_basic_block (bb); + } + + VEC_free (basic_block, heap, h); + } + } } } } diff --git a/gcc/tree-ssa-dom.c b/gcc/tree-ssa-dom.c index 2fa8da25eb6..3f7cbfe4874 100644 --- a/gcc/tree-ssa-dom.c +++ b/gcc/tree-ssa-dom.c @@ -1,5 +1,5 @@ /* SSA Dominator optimizations for trees - Copyright (C) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 + Copyright (C) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation, Inc. Contributed by Diego Novillo <dnovillo@redhat.com> @@ -2526,6 +2526,11 @@ propagate_rhs_into_lhs (gimple stmt, tree lhs, tree rhs, bitmap interesting_name be successful would be if the use occurs in an ASM_EXPR. */ FOR_EACH_IMM_USE_STMT (use_stmt, iter, lhs) { + /* Leave debug stmts alone. If we succeed in propagating + all non-debug uses, we'll drop the DEF, and propagation + into debug stmts will occur then. */ + if (gimple_debug_bind_p (use_stmt)) + continue; /* It's not always safe to propagate into an ASM_EXPR. */ if (gimple_code (use_stmt) == GIMPLE_ASM diff --git a/gcc/tree-ssa-forwprop.c b/gcc/tree-ssa-forwprop.c index 26a82461600..5aec33415d4 100644 --- a/gcc/tree-ssa-forwprop.c +++ b/gcc/tree-ssa-forwprop.c @@ -937,6 +937,7 @@ forward_propagate_addr_expr (tree name, tree rhs) gimple use_stmt; bool all = true; bool single_use_p = has_single_use (name); + bool debug = false; FOR_EACH_IMM_USE_STMT (use_stmt, iter, name) { @@ -947,7 +948,10 @@ forward_propagate_addr_expr (tree name, tree rhs) there is nothing we can do. 
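Debug uses are the one exception: they are merely noted here, and if every real use does get propagated, the binds are rewritten afterwards via propagate_var_def_into_debug_stmts.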
*/ if (gimple_code (use_stmt) != GIMPLE_ASSIGN) { - all = false; + if (is_gimple_debug (use_stmt)) + debug = true; + else + all = false; continue; } @@ -989,6 +993,9 @@ forward_propagate_addr_expr (tree name, tree rhs) } } + if (all && debug) + propagate_var_def_into_debug_stmts (name, NULL, NULL); + return all; } diff --git a/gcc/tree-ssa-live.c b/gcc/tree-ssa-live.c index 6de001c8ac3..6d2fb32e585 100644 --- a/gcc/tree-ssa-live.c +++ b/gcc/tree-ssa-live.c @@ -703,6 +703,9 @@ remove_unused_locals (void) gimple stmt = gsi_stmt (gsi); tree b = gimple_block (stmt); + if (is_gimple_debug (stmt)) + continue; + if (b) TREE_USED (b) = true; @@ -988,6 +991,8 @@ set_var_live_on_entry (tree ssa_name, tree_live_info_p live) add_block = e->src; } } + else if (is_gimple_debug (use_stmt)) + continue; else { /* If it's not defined in this block, it's live on entry. */ diff --git a/gcc/tree-ssa-loop-ch.c b/gcc/tree-ssa-loop-ch.c index 9f1f4c3d040..dffaf49ba06 100644 --- a/gcc/tree-ssa-loop-ch.c +++ b/gcc/tree-ssa-loop-ch.c @@ -90,6 +90,9 @@ should_duplicate_loop_header_p (basic_block header, struct loop *loop, if (gimple_code (last) == GIMPLE_LABEL) continue; + if (is_gimple_debug (last)) + continue; + if (is_gimple_call (last)) return false; diff --git a/gcc/tree-ssa-loop-im.c b/gcc/tree-ssa-loop-im.c index d8ee787cc47..738249445b0 100644 --- a/gcc/tree-ssa-loop-im.c +++ b/gcc/tree-ssa-loop-im.c @@ -879,6 +879,7 @@ rewrite_bittest (gimple_stmt_iterator *bsi) gimple_cond_set_rhs (use_stmt, build_int_cst_type (TREE_TYPE (name), 0)); gsi_insert_before (bsi, stmt1, GSI_SAME_STMT); + propagate_defs_into_debug_stmts (gsi_stmt (*bsi), NULL, NULL); gsi_replace (bsi, stmt2, true); return stmt1; @@ -1059,6 +1060,7 @@ move_computations_stmt (struct dom_walk_data *dw_data ATTRIBUTE_UNUSED, mark_virtual_ops_for_renaming (stmt); gsi_insert_on_edge (loop_preheader_edge (level), stmt); + propagate_defs_into_debug_stmts (gsi_stmt (bsi), NULL, NULL); gsi_remove (&bsi, false); } } diff --git a/gcc/tree-ssa-loop-ivopts.c b/gcc/tree-ssa-loop-ivopts.c index 71d4e17064d..05988636489 100644 --- a/gcc/tree-ssa-loop-ivopts.c +++ b/gcc/tree-ssa-loop-ivopts.c @@ -1849,7 +1849,8 @@ find_interesting_uses (struct ivopts_data *data) for (bsi = gsi_start_phis (bb); !gsi_end_p (bsi); gsi_next (&bsi)) find_interesting_uses_stmt (data, gsi_stmt (bsi)); for (bsi = gsi_start_bb (bb); !gsi_end_p (bsi); gsi_next (&bsi)) - find_interesting_uses_stmt (data, gsi_stmt (bsi)); + if (!is_gimple_debug (gsi_stmt (bsi))) + find_interesting_uses_stmt (data, gsi_stmt (bsi)); } if (dump_file && (dump_flags & TDF_DETAILS)) @@ -5621,7 +5622,24 @@ remove_unused_ivs (struct ivopts_data *data) && !info->inv_id && !info->iv->have_use_for && !info->preserve_biv) - remove_statement (SSA_NAME_DEF_STMT (info->iv->ssa_name), true); + { + if (MAY_HAVE_DEBUG_STMTS) + { + gimple stmt; + imm_use_iterator iter; + + FOR_EACH_IMM_USE_STMT (stmt, iter, info->iv->ssa_name) + { + if (!gimple_debug_bind_p (stmt)) + continue; + + /* ??? We can probably do better than this.
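Presumably the bind could be rewritten in terms of a surviving induction variable instead of being reset below; resetting is simply the conservative choice.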
*/ + gimple_debug_bind_reset_value (stmt); + update_stmt (stmt); + } + } + remove_statement (SSA_NAME_DEF_STMT (info->iv->ssa_name), true); + } } } diff --git a/gcc/tree-ssa-loop-manip.c b/gcc/tree-ssa-loop-manip.c index e43c0bc404a..bc5c3392a0f 100644 --- a/gcc/tree-ssa-loop-manip.c +++ b/gcc/tree-ssa-loop-manip.c @@ -279,6 +279,9 @@ find_uses_to_rename_stmt (gimple stmt, bitmap *use_blocks, bitmap need_phis) tree var; basic_block bb = gimple_bb (stmt); + if (is_gimple_debug (stmt)) + return; + FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_ALL_USES) find_uses_to_rename_use (bb, var, use_blocks, need_phis); } @@ -429,6 +432,9 @@ check_loop_closed_ssa_stmt (basic_block bb, gimple stmt) ssa_op_iter iter; tree var; + if (is_gimple_debug (stmt)) + return; + FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_ALL_USES) check_loop_closed_ssa_use (bb, var); } diff --git a/gcc/tree-ssa-operands.c b/gcc/tree-ssa-operands.c index b12f5c17826..ac84fb978b0 100644 --- a/gcc/tree-ssa-operands.c +++ b/gcc/tree-ssa-operands.c @@ -1,5 +1,5 @@ /* SSA operands management for trees. - Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008 + Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation, Inc. This file is part of GCC. @@ -635,6 +635,8 @@ add_virtual_operand (gimple stmt ATTRIBUTE_UNUSED, int flags) if (flags & opf_no_vops) return; + gcc_assert (!is_gimple_debug (stmt)); + if (flags & opf_def) append_vdef (gimple_vop (cfun)); else @@ -722,7 +724,8 @@ get_indirect_ref_operands (gimple stmt, tree expr, int flags, /* If requested, add a USE operand for the base pointer. */ if (recurse_on_base) - get_expr_operands (stmt, pptr, opf_use); + get_expr_operands (stmt, pptr, + opf_use | (flags & opf_no_vops)); } @@ -846,10 +849,14 @@ get_expr_operands (gimple stmt, tree *expr_p, int flags) enum tree_code code; enum tree_code_class codeclass; tree expr = *expr_p; + int uflags = opf_use; if (expr == NULL) return; + if (is_gimple_debug (stmt)) + uflags |= (flags & opf_no_vops); + code = TREE_CODE (expr); codeclass = TREE_CODE_CLASS (code); @@ -860,7 +867,8 @@ get_expr_operands (gimple stmt, tree *expr_p, int flags) reference to it, but the fact that the statement takes its address will be of interest to some passes (e.g. alias resolution). */ - mark_address_taken (TREE_OPERAND (expr, 0)); + if (!is_gimple_debug (stmt)) + mark_address_taken (TREE_OPERAND (expr, 0)); /* If the address is invariant, there may be no interesting variable references inside. */ @@ -914,13 +922,13 @@ get_expr_operands (gimple stmt, tree *expr_p, int flags) { if (TREE_THIS_VOLATILE (TREE_OPERAND (expr, 1))) gimple_set_has_volatile_ops (stmt, true); - get_expr_operands (stmt, &TREE_OPERAND (expr, 2), opf_use); + get_expr_operands (stmt, &TREE_OPERAND (expr, 2), uflags); } else if (code == ARRAY_REF || code == ARRAY_RANGE_REF) { - get_expr_operands (stmt, &TREE_OPERAND (expr, 1), opf_use); - get_expr_operands (stmt, &TREE_OPERAND (expr, 2), opf_use); - get_expr_operands (stmt, &TREE_OPERAND (expr, 3), opf_use); + get_expr_operands (stmt, &TREE_OPERAND (expr, 1), uflags); + get_expr_operands (stmt, &TREE_OPERAND (expr, 2), uflags); + get_expr_operands (stmt, &TREE_OPERAND (expr, 3), uflags); } return; @@ -929,15 +937,15 @@ get_expr_operands (gimple stmt, tree *expr_p, int flags) case WITH_SIZE_EXPR: /* WITH_SIZE_EXPR is a pass-through reference to its first argument, and an rvalue reference to its second argument. 
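(For debug stmts, UFLAGS carries opf_no_vops in addition to opf_use, so none of the rvalue operands handled here create virtual operands.)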
*/ - get_expr_operands (stmt, &TREE_OPERAND (expr, 1), opf_use); + get_expr_operands (stmt, &TREE_OPERAND (expr, 1), uflags); get_expr_operands (stmt, &TREE_OPERAND (expr, 0), flags); return; case COND_EXPR: case VEC_COND_EXPR: - get_expr_operands (stmt, &TREE_OPERAND (expr, 0), opf_use); - get_expr_operands (stmt, &TREE_OPERAND (expr, 1), opf_use); - get_expr_operands (stmt, &TREE_OPERAND (expr, 2), opf_use); + get_expr_operands (stmt, &TREE_OPERAND (expr, 0), uflags); + get_expr_operands (stmt, &TREE_OPERAND (expr, 1), uflags); + get_expr_operands (stmt, &TREE_OPERAND (expr, 2), uflags); return; case CONSTRUCTOR: @@ -950,7 +958,7 @@ get_expr_operands (gimple stmt, tree *expr_p, int flags) for (idx = 0; VEC_iterate (constructor_elt, CONSTRUCTOR_ELTS (expr), idx, ce); idx++) - get_expr_operands (stmt, &ce->value, opf_use); + get_expr_operands (stmt, &ce->value, uflags); return; } @@ -1026,6 +1034,13 @@ parse_ssa_operands (gimple stmt) if (code == GIMPLE_ASM) get_asm_expr_operands (stmt); + else if (is_gimple_debug (stmt)) + { + if (gimple_debug_bind_p (stmt) + && gimple_debug_bind_has_value_p (stmt)) + get_expr_operands (stmt, gimple_debug_bind_get_value_ptr (stmt), + opf_use | opf_no_vops); + } else { size_t i, start = 0; diff --git a/gcc/tree-ssa-phiopt.c b/gcc/tree-ssa-phiopt.c index 97847f4c888..b809ab30f8d 100644 --- a/gcc/tree-ssa-phiopt.c +++ b/gcc/tree-ssa-phiopt.c @@ -384,7 +384,12 @@ bool empty_block_p (basic_block bb) { /* BB must have no executable statements. */ - return gsi_end_p (gsi_after_labels (bb)); + gimple_stmt_iterator gsi = gsi_after_labels (bb); + if (gsi_end_p (gsi)) + return true; + if (is_gimple_debug (gsi_stmt (gsi))) + gsi_next_nondebug (&gsi); + return gsi_end_p (gsi); } /* Replace PHI node element whose edge is E in block BB with variable NEW. diff --git a/gcc/tree-ssa-propagate.c b/gcc/tree-ssa-propagate.c index a3a87cbf7c3..ab9cee34a21 100644 --- a/gcc/tree-ssa-propagate.c +++ b/gcc/tree-ssa-propagate.c @@ -1170,7 +1170,8 @@ substitute_and_fold (prop_value_t *prop_value, bool use_ranges_p) /* Determine what needs to be done to update the SSA form. */ update_stmt (stmt); - something_changed = true; + if (!is_gimple_debug (stmt)) + something_changed = true; } if (dump_file && (dump_flags & TDF_DETAILS)) diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c index 416409f1305..d97f51367e2 100644 --- a/gcc/tree-ssa-reassoc.c +++ b/gcc/tree-ssa-reassoc.c @@ -1405,6 +1405,7 @@ rewrite_expr_tree (gimple stmt, unsigned int opindex, { stmt2 = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (stmt1)); gsirhs1 = gsi_for_stmt (stmt2); + propagate_defs_into_debug_stmts (stmt2, gimple_bb (stmt), &gsinow); gsi_move_before (&gsirhs1, &gsinow); gsi_prev (&gsinow); stmt1 = stmt2; @@ -1451,6 +1452,7 @@ linearize_expr (gimple stmt) gsinow = gsi_for_stmt (stmt); gsirhs = gsi_for_stmt (binrhs); + propagate_defs_into_debug_stmts (binrhs, gimple_bb (stmt), &gsinow); gsi_move_before (&gsirhs, &gsinow); gimple_assign_set_rhs2 (stmt, gimple_assign_rhs1 (binrhs)); diff --git a/gcc/tree-ssa-sink.c b/gcc/tree-ssa-sink.c index 4f16addb323..5b9b4be3090 100644 --- a/gcc/tree-ssa-sink.c +++ b/gcc/tree-ssa-sink.c @@ -120,6 +120,8 @@ all_immediate_uses_same_place (gimple stmt) { FOR_EACH_IMM_USE_FAST (use_p, imm_iter, var) { + if (is_gimple_debug (USE_STMT (use_p))) + continue; if (firstuse == NULL) firstuse = USE_STMT (use_p); else @@ -202,7 +204,7 @@ is_hidden_global_store (gimple stmt) /* Find the nearest common dominator of all of the immediate uses in IMM. 
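Uses in debug stmts are ignored for the dominator computation; they only set *DEBUG_STMTS so the caller knows to adjust the binds after sinking.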
*/ static basic_block -nearest_common_dominator_of_uses (gimple stmt) +nearest_common_dominator_of_uses (gimple stmt, bool *debug_stmts) { bitmap blocks = BITMAP_ALLOC (NULL); basic_block commondom; @@ -227,6 +229,11 @@ nearest_common_dominator_of_uses (gimple stmt) useblock = gimple_phi_arg_edge (usestmt, idx)->src; } + else if (is_gimple_debug (usestmt)) + { + *debug_stmts = true; + continue; + } else { useblock = gimple_bb (usestmt); @@ -272,6 +279,9 @@ statement_sink_location (gimple stmt, basic_block frombb, { FOR_EACH_IMM_USE_FAST (one_use, imm_iter, def) { + if (is_gimple_debug (USE_STMT (one_use))) + continue; + break; } if (one_use != NULL_USE_OPERAND_P) @@ -343,7 +353,9 @@ statement_sink_location (gimple stmt, basic_block frombb, that is where insertion would have to take place. */ if (!all_immediate_uses_same_place (stmt)) { - basic_block commondom = nearest_common_dominator_of_uses (stmt); + bool debug_stmts = false; + basic_block commondom = nearest_common_dominator_of_uses (stmt, + &debug_stmts); if (commondom == frombb) return false; @@ -372,7 +384,12 @@ statement_sink_location (gimple stmt, basic_block frombb, fprintf (dump_file, "Common dominator of all uses is %d\n", commondom->index); } + *togsi = gsi_after_labels (commondom); + + if (debug_stmts) + propagate_defs_into_debug_stmts (stmt, commondom, togsi); + return true; } @@ -390,6 +407,9 @@ statement_sink_location (gimple stmt, basic_block frombb, return false; *togsi = gsi_for_stmt (use); + + propagate_defs_into_debug_stmts (stmt, sinkbb, togsi); + return true; } @@ -423,6 +443,8 @@ statement_sink_location (gimple stmt, basic_block frombb, *togsi = gsi_after_labels (sinkbb); + propagate_defs_into_debug_stmts (stmt, sinkbb, togsi); + return true; } diff --git a/gcc/tree-ssa-ter.c b/gcc/tree-ssa-ter.c index 3bbc8b9f866..c35d6336beb 100644 --- a/gcc/tree-ssa-ter.c +++ b/gcc/tree-ssa-ter.c @@ -585,6 +585,9 @@ find_replaceable_in_bb (temp_expr_table_p tab, basic_block bb) { stmt = gsi_stmt (bsi); + if (is_gimple_debug (stmt)) + continue; + stmt_replaceable = is_replaceable_p (stmt); /* Determine if this stmt finishes an existing expression. */ diff --git a/gcc/tree-ssa-threadedge.c b/gcc/tree-ssa-threadedge.c index f503ffc9271..1bcf2bf1804 100644 --- a/gcc/tree-ssa-threadedge.c +++ b/gcc/tree-ssa-threadedge.c @@ -308,7 +308,9 @@ record_temporary_equivalences_from_stmts_at_dest (edge e, stmt = gsi_stmt (gsi); /* Ignore empty statements and labels. 
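Debug stmts are now skipped as well, so the temporary equivalences recorded here, and hence the threading decisions, cannot differ between -g and -g0.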
*/ - if (gimple_code (stmt) == GIMPLE_NOP || gimple_code (stmt) == GIMPLE_LABEL) + if (gimple_code (stmt) == GIMPLE_NOP + || gimple_code (stmt) == GIMPLE_LABEL + || is_gimple_debug (stmt)) continue; /* If the statement has volatile operands, then we assume we diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c index 71a34957bdf..62524bb1460 100644 --- a/gcc/tree-ssa-threadupdate.c +++ b/gcc/tree-ssa-threadupdate.c @@ -478,6 +478,7 @@ redirection_block_p (basic_block bb) gsi = gsi_start_bb (bb); while (!gsi_end_p (gsi) && (gimple_code (gsi_stmt (gsi)) == GIMPLE_LABEL + || is_gimple_debug (gsi_stmt (gsi)) || gimple_nop_p (gsi_stmt (gsi)))) gsi_next (&gsi); diff --git a/gcc/tree-ssa.c b/gcc/tree-ssa.c index 51b16899121..db707fb35a9 100644 --- a/gcc/tree-ssa.c +++ b/gcc/tree-ssa.c @@ -243,6 +243,207 @@ flush_pending_stmts (edge e) redirect_edge_var_map_clear (e); } +/* Given a tree for an expression for which we might want to emit + locations or values in debug information (generally a variable, but + we might deal with other kinds of trees in the future), return the + tree that should be used as the variable of a DEBUG_BIND STMT or + VAR_LOCATION INSN or NOTE. Return NULL if VAR is not to be tracked. */ + +tree +target_for_debug_bind (tree var) +{ + if (!MAY_HAVE_DEBUG_STMTS) + return NULL_TREE; + + if (TREE_CODE (var) != VAR_DECL + && TREE_CODE (var) != PARM_DECL) + return NULL_TREE; + + if (DECL_HAS_VALUE_EXPR_P (var)) + return target_for_debug_bind (DECL_VALUE_EXPR (var)); + + if (DECL_IGNORED_P (var)) + return NULL_TREE; + + if (!is_gimple_reg (var)) + return NULL_TREE; + + return var; +} + +/* Called via walk_tree, look for SSA_NAMEs that have already been + released. */ + +static tree +find_released_ssa_name (tree *tp, int *walk_subtrees, void *data_) +{ + struct walk_stmt_info *wi = (struct walk_stmt_info *) data_; + + if (wi->is_lhs) + return NULL_TREE; + + if (TREE_CODE (*tp) == SSA_NAME) + { + if (SSA_NAME_IN_FREE_LIST (*tp)) + return *tp; + + *walk_subtrees = 0; + } + else if (IS_TYPE_OR_DECL_P (*tp)) + *walk_subtrees = 0; + + return NULL_TREE; +} + +/* Given a VAR whose definition STMT is to be moved to the iterator + position TOGSIP in the TOBB basic block, verify whether we're + moving it across any of the debug statements that use it, and + adjust them as needed. If TOBB is NULL, then the definition is + understood as being removed, and TOGSIP is unused. */ +void +propagate_var_def_into_debug_stmts (tree var, + basic_block tobb, + const gimple_stmt_iterator *togsip) +{ + imm_use_iterator imm_iter; + gimple stmt; + use_operand_p use_p; + tree value = NULL; + bool no_value = false; + + if (!MAY_HAVE_DEBUG_STMTS) + return; + + FOR_EACH_IMM_USE_STMT (stmt, imm_iter, var) + { + basic_block bb; + gimple_stmt_iterator si; + + if (!is_gimple_debug (stmt)) + continue; + + if (tobb) + { + bb = gimple_bb (stmt); + + if (bb != tobb) + { + gcc_assert (dom_info_available_p (CDI_DOMINATORS)); + if (dominated_by_p (CDI_DOMINATORS, bb, tobb)) + continue; + } + else + { + si = *togsip; + + if (gsi_end_p (si)) + continue; + + do + { + gsi_prev (&si); + if (gsi_end_p (si)) + break; + } + while (gsi_stmt (si) != stmt); + + if (gsi_end_p (si)) + continue; + } + } + + /* Here we compute (lazily) the value assigned to VAR, but we + remember if we tried before and failed, so that we don't try + again. 
*/ + if (!value && !no_value) + { + gimple def_stmt = SSA_NAME_DEF_STMT (var); + + if (is_gimple_assign (def_stmt)) + { + if (!dom_info_available_p (CDI_DOMINATORS)) + { + struct walk_stmt_info wi; + + memset (&wi, 0, sizeof (wi)); + + /* When removing blocks without following reverse + dominance order, we may sometimes encounter SSA_NAMEs + that have already been released, referenced in other + SSA_DEFs that we're about to release. Consider: + + <bb X>: + v_1 = foo; + + <bb Y>: + w_2 = v_1 + bar; + # DEBUG w => w_2 + + If we deleted BB X first, propagating the value of + w_2 won't do us any good. It's too late to recover + the original definition of v_1: when it was + deleted, it was only referenced in other DEFs, it + couldn't possibly know it should have been retained, + and propagating every single DEF just in case it + might have to be propagated into a DEBUG STMT would + probably be too wasteful. + + When dominator information is not readily + available, we check for and accept some loss of + debug information. But if it is available, + there's no excuse for us to remove blocks in the + wrong order, so we don't even check for dead SSA + NAMEs. SSA verification shall catch any + errors. */ + if (!walk_gimple_op (def_stmt, find_released_ssa_name, &wi)) + no_value = true; + } + + if (!no_value) + value = gimple_assign_rhs_to_tree (def_stmt); + } + + if (!value) + no_value = true; + } + + if (no_value) + gimple_debug_bind_reset_value (stmt); + else + FOR_EACH_IMM_USE_ON_STMT (use_p, imm_iter) + SET_USE (use_p, unshare_expr (value)); + + update_stmt (stmt); + } +} + + +/* Given a STMT to be moved to the iterator position TOGSIP in the + TOBB basic block, verify whether we're moving it across any of the + debug statements that use it. If TOBB is NULL, then the definition + is understood as being removed, and TOGSIP is unused. */ + +void +propagate_defs_into_debug_stmts (gimple def, basic_block tobb, + const gimple_stmt_iterator *togsip) +{ + ssa_op_iter op_iter; + def_operand_p def_p; + + if (!MAY_HAVE_DEBUG_STMTS) + return; + + FOR_EACH_SSA_DEF_OPERAND (def_p, def, op_iter, SSA_OP_DEF) + { + tree var = DEF_FROM_PTR (def_p); + + if (TREE_CODE (var) != SSA_NAME) + continue; + + propagate_var_def_into_debug_stmts (var, tobb, togsip); + } +} + /* Return true if SSA_NAME is malformed and mark it visited. IS_VIRTUAL is true if this SSA_NAME was found inside a virtual @@ -636,6 +837,9 @@ verify_ssa (bool check_modified_stmt) goto err; } } + else if (gimple_debug_bind_p (stmt) + && !gimple_debug_bind_has_value_p (stmt)) + continue; /* Verify the single virtual operand and its constraints.
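(Value-less debug binds were skipped just above: once a bind has been reset, it carries no operands, so there is nothing left to check.)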
*/ has_err = false; @@ -1480,6 +1684,8 @@ warn_uninitialized_vars (bool warn_possibly_uninitialized) { struct walk_stmt_info wi; data.stmt = gsi_stmt (gsi); + if (is_gimple_debug (data.stmt)) + continue; memset (&wi, 0, sizeof (wi)); wi.info = &data; walk_gimple_op (gsi_stmt (gsi), warn_uninitialized_var, &wi); diff --git a/gcc/tree-ssanames.c b/gcc/tree-ssanames.c index c019074cfae..45183218a2c 100644 --- a/gcc/tree-ssanames.c +++ b/gcc/tree-ssanames.c @@ -205,6 +205,9 @@ release_ssa_name (tree var) int saved_ssa_name_version = SSA_NAME_VERSION (var); use_operand_p imm = &(SSA_NAME_IMM_USE_NODE (var)); + if (MAY_HAVE_DEBUG_STMTS) + propagate_var_def_into_debug_stmts (var, NULL, NULL); + #ifdef ENABLE_CHECKING verify_imm_links (stderr, var); #endif diff --git a/gcc/tree-stdarg.c b/gcc/tree-stdarg.c index 4e030b12fa9..9e7369fc3c1 100644 --- a/gcc/tree-stdarg.c +++ b/gcc/tree-stdarg.c @@ -496,6 +496,9 @@ check_all_va_list_escapes (struct stdarg_info *si) tree use; ssa_op_iter iter; + if (is_gimple_debug (stmt)) + continue; + FOR_EACH_SSA_TREE_OPERAND (use, stmt, iter, SSA_OP_ALL_USES) { if (! bitmap_bit_p (si->va_list_escape_vars, @@ -837,6 +840,8 @@ execute_optimize_stdarg (void) continue; } } + else if (is_gimple_debug (stmt)) + continue; /* All other uses of va_list are either va_copy (that is not handled in this optimization), taking address of va_list variable or diff --git a/gcc/tree-tailcall.c b/gcc/tree-tailcall.c index efd6bc2c029..d1f6dc1488a 100644 --- a/gcc/tree-tailcall.c +++ b/gcc/tree-tailcall.c @@ -395,7 +395,7 @@ find_tail_calls (basic_block bb, struct tailcall **ret) stmt = gsi_stmt (gsi); /* Ignore labels. */ - if (gimple_code (stmt) == GIMPLE_LABEL) + if (gimple_code (stmt) == GIMPLE_LABEL || is_gimple_debug (stmt)) continue; /* Check for a call. */ @@ -501,6 +501,9 @@ find_tail_calls (basic_block bb, struct tailcall **ret) if (gimple_code (stmt) == GIMPLE_RETURN) break; + if (is_gimple_debug (stmt)) + continue; + if (gimple_code (stmt) != GIMPLE_ASSIGN) return; diff --git a/gcc/tree-vect-loop.c b/gcc/tree-vect-loop.c index 83833b137fa..c23577034b1 100644 --- a/gcc/tree-vect-loop.c +++ b/gcc/tree-vect-loop.c @@ -1590,6 +1590,8 @@ vect_is_simple_reduction (loop_vec_info loop_info, gimple phi, FOR_EACH_IMM_USE_FAST (use_p, imm_iter, name) { gimple use_stmt = USE_STMT (use_p); + if (is_gimple_debug (use_stmt)) + continue; if (flow_bb_inside_loop_p (loop, gimple_bb (use_stmt)) && vinfo_for_stmt (use_stmt) && !is_pattern_stmt_p (vinfo_for_stmt (use_stmt))) @@ -1642,6 +1644,8 @@ vect_is_simple_reduction (loop_vec_info loop_info, gimple phi, FOR_EACH_IMM_USE_FAST (use_p, imm_iter, name) { gimple use_stmt = USE_STMT (use_p); + if (is_gimple_debug (use_stmt)) + continue; if (flow_bb_inside_loop_p (loop, gimple_bb (use_stmt)) && vinfo_for_stmt (use_stmt) && !is_pattern_stmt_p (vinfo_for_stmt (use_stmt))) diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c index 237245e761c..514a3ec661e 100644 --- a/gcc/tree-vrp.c +++ b/gcc/tree-vrp.c @@ -4651,6 +4651,9 @@ find_assert_locations_1 (basic_block bb, sbitmap live) stmt = gsi_stmt (si); + if (is_gimple_debug (stmt)) + continue; + /* See if we can derive an assertion for any of STMT's operands. 
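Debug stmts were skipped at the top of the loop: deriving asserts from their operands would let the presence of -g change where ASSERT_EXPRs are inserted.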
*/ FOR_EACH_SSA_TREE_OPERAND (op, stmt, i, SSA_OP_USE) { diff --git a/gcc/tree.h b/gcc/tree.h index 110beb10099..4121af74ccc 100644 --- a/gcc/tree.h +++ b/gcc/tree.h @@ -1532,6 +1532,9 @@ struct GTY(()) tree_constructor { #define VL_EXP_OPERAND_LENGTH(NODE) \ ((int)TREE_INT_CST_LOW (VL_EXP_CHECK (NODE)->exp.operands[0])) +/* Nonzero if is_gimple_debug() may possibly hold. */ +#define MAY_HAVE_DEBUG_STMTS (flag_var_tracking_assignments) + /* In a LOOP_EXPR node. */ #define LOOP_EXPR_BODY(NODE) TREE_OPERAND_CHECK_CODE (NODE, LOOP_EXPR, 0) @@ -3807,6 +3810,10 @@ extern tree build6_stat (enum tree_code, tree, tree, tree, tree, tree, #define build6(c,t1,t2,t3,t4,t5,t6,t7) \ build6_stat (c,t1,t2,t3,t4,t5,t6,t7 MEM_STAT_INFO) +extern tree build_var_debug_value_stat (tree, tree MEM_STAT_DECL); +#define build_var_debug_value(t1,t2) \ + build_var_debug_value_stat (t1,t2 MEM_STAT_INFO) + extern tree build_int_cst (tree, HOST_WIDE_INT); extern tree build_int_cst_type (tree, HOST_WIDE_INT); extern tree build_int_cstu (tree, unsigned HOST_WIDE_INT); @@ -5209,6 +5216,10 @@ struct GTY(()) tree_priority_map { #define tree_priority_map_hash tree_map_base_hash #define tree_priority_map_marked_p tree_map_base_marked_p +/* In tree-ssa.c */ + +tree target_for_debug_bind (tree); + /* In tree-ssa-ccp.c */ extern tree maybe_fold_offset_to_reference (location_t, tree, tree, tree); extern tree maybe_fold_offset_to_address (location_t, tree, tree, tree); diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c index 76354d79100..a24755fe07e 100644 --- a/gcc/var-tracking.c +++ b/gcc/var-tracking.c @@ -106,6 +106,8 @@ #include "expr.h" #include "timevar.h" #include "tree-pass.h" +#include "cselib.h" +#include "target.h" /* Type of micro operation. */ enum micro_operation_type @@ -113,19 +115,40 @@ enum micro_operation_type MO_USE, /* Use location (REG or MEM). */ MO_USE_NO_VAR,/* Use location which is not associated with a variable or the variable is not trackable. */ + MO_VAL_USE, /* Use location which is associated with a value. */ + MO_VAL_LOC, /* Use location which appears in a debug insn. */ + MO_VAL_SET, /* Set location associated with a value. */ MO_SET, /* Set location. */ MO_COPY, /* Copy the same portion of a variable from one location to another. */ MO_CLOBBER, /* Clobber location. */ MO_CALL, /* Call insn. */ MO_ADJUST /* Adjust stack pointer. */ + +}; + +static const char * const ATTRIBUTE_UNUSED +micro_operation_type_name[] = { + "MO_USE", + "MO_USE_NO_VAR", + "MO_VAL_USE", + "MO_VAL_LOC", + "MO_VAL_SET", + "MO_SET", + "MO_COPY", + "MO_CLOBBER", + "MO_CALL", + "MO_ADJUST" }; -/* Where shall the note be emitted? BEFORE or AFTER the instruction. */ +/* Where shall the note be emitted? BEFORE or AFTER the instruction. + Notes emitted as AFTER_CALL are to take effect during the call, + rather than after the call. */ enum emit_note_where { EMIT_NOTE_BEFORE_INSN, - EMIT_NOTE_AFTER_INSN + EMIT_NOTE_AFTER_INSN, + EMIT_NOTE_AFTER_CALL_INSN }; /* Structure holding information about micro operation. */ @@ -135,9 +158,12 @@ typedef struct micro_operation_def enum micro_operation_type type; union { - /* Location. For MO_SET and MO_COPY, this is the SET that performs - the assignment, if known, otherwise it is the target of the - assignment. */ + /* Location. For MO_SET and MO_COPY, this is the SET that + performs the assignment, if known, otherwise it is the target + of the assignment. For MO_VAL_USE and MO_VAL_SET, it is a + CONCAT of the VALUE and the LOC associated with it. 
For + MO_VAL_LOC, it is a CONCAT of the VALUE and the VAR_LOCATION + associated with it. */ rtx loc; /* Stack adjustment. */ HOST_WIDE_INT adjust; } u; /* The instruction which the micro operation is in, for MO_USE, MO_USE_NO_VAR, MO_CALL and MO_ADJUST, or the subsequent instruction otherwise. */ rtx insn; } micro_operation; +/* A declaration of a variable, or an RTL value being handled like a + declaration. */ +typedef void *decl_or_value; + /* Structure for passing some other parameters to function emit_note_insn_var_location. */ typedef struct emit_note_data_def @@ -161,6 +191,9 @@ typedef struct emit_note_data_def /* Where the note will be emitted (before/after insn)? */ enum emit_note_where where; + + /* The variables and values active at this point. */ + htab_t vars; } emit_note_data; /* Description of location of a part of a variable. The content of a physical @@ -176,7 +209,7 @@ typedef struct attrs_def rtx loc; /* The declaration corresponding to LOC. */ - tree decl; + decl_or_value dv; /* Offset from start of DECL. */ HOST_WIDE_INT offset; @@ -204,6 +237,9 @@ typedef struct dataflow_set_def /* Variable locations. */ shared_hash vars; + + /* Vars that are being traversed. */ + shared_hash traversed_vars; } dataflow_set; /* The structure (one for each basic block) containing the information @@ -220,8 +256,18 @@ typedef struct variable_tracking_info_def dataflow_set in; dataflow_set out; + /* The permanent-in dataflow set for this block. This is used to + hold values for which we had to compute entry values. ??? This + should probably be dynamically allocated, to avoid using more + memory in non-debug builds. */ + dataflow_set *permp; + /* Has the block been visited in DFS? */ bool visited; + + /* Has the block been flooded in VTA? */ + bool flooded; + } *variable_tracking_info; /* Structure for chaining the locations. */ @@ -230,7 +276,7 @@ typedef struct location_chain_def /* Next element in the chain. */ struct location_chain_def *next; - /* The location (REG or MEM). */ + /* The location (REG, MEM or VALUE). */ rtx loc; /* The "value" stored in this location. */ @@ -259,8 +305,9 @@ typedef struct variable_part_def /* Structure describing where the variable is located. */ typedef struct variable_def { - /* The declaration of the variable. */ - tree decl; + /* The declaration of the variable, or an RTL value being handled + like a declaration. */ + decl_or_value dv; /* Reference count. */ int refcount; @@ -269,10 +316,27 @@ typedef struct variable_def int n_var_parts; /* The variable parts. */ - variable_part var_part[MAX_VAR_PARTS]; + variable_part var_part[1]; } *variable; typedef const struct variable_def *const_variable; +/* Structure for chaining backlinks from referenced VALUEs to + DVs that are referencing them. */ +typedef struct value_chain_def +{ + /* Next value_chain entry. */ + struct value_chain_def *next; + + /* The declaration of the variable, or an RTL value + being handled like a declaration, whose var_parts[0].loc_chain + references the VALUE owning this value_chain. */ + decl_or_value dv; + + /* Reference count. */ + int refcount; +} *value_chain; +typedef const struct value_chain_def *const_value_chain; + /* Hash function for DECL for VARIABLE_HTAB. */ #define VARIABLE_HASH_VAL(decl) (DECL_UID (decl)) @@ -285,24 +349,39 @@ typedef const struct variable_def *const_variable; /* Alloc pool for struct attrs_def. */ static alloc_pool attrs_pool; -/* Alloc pool for struct variable_def. */ +/* Alloc pool for struct variable_def with MAX_VAR_PARTS entries. */ static alloc_pool var_pool; +/* Alloc pool for struct variable_def with a single var_part entry.
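Presumably for VALUEs and other one-part entities, which never need MAX_VAR_PARTS parts: var_part is now a one-element trailing array above, so this pool and var_pool can size the structure differently.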
*/ +static alloc_pool valvar_pool; + /* Alloc pool for struct location_chain_def. */ static alloc_pool loc_chain_pool; /* Alloc pool for struct shared_hash_def. */ static alloc_pool shared_hash_pool; +/* Alloc pool for struct value_chain_def. */ +static alloc_pool value_chain_pool; + /* Changed variables, notes will be emitted for them. */ static htab_t changed_variables; +/* Links from VALUEs to DVs referencing them in their current loc_chains. */ +static htab_t value_chains; + /* Shall notes be emitted? */ static bool emit_notes; /* Empty shared hashtable. */ static shared_hash empty_shared_hash; +/* Scratch register bitmap used by cselib_expand_value_rtx. */ +static bitmap scratch_regs = NULL; + +/* Variable used to tell whether cselib_process_insn called our hook. */ +static bool cselib_hook_called; + /* Local function prototypes. */ static void stack_adjust_offset_pre_post (rtx, HOST_WIDE_INT *, HOST_WIDE_INT *); @@ -317,13 +396,13 @@ static void variable_htab_free (void *); static void init_attrs_list_set (attrs *); static void attrs_list_clear (attrs *); -static attrs attrs_list_member (attrs, tree, HOST_WIDE_INT); -static void attrs_list_insert (attrs *, tree, HOST_WIDE_INT, rtx); +static attrs attrs_list_member (attrs, decl_or_value, HOST_WIDE_INT); +static void attrs_list_insert (attrs *, decl_or_value, HOST_WIDE_INT, rtx); static void attrs_list_copy (attrs *, attrs); static void attrs_list_union (attrs *, attrs); -static variable unshare_variable (dataflow_set *set, variable var, - enum var_init_status); +static void **unshare_variable (dataflow_set *set, void **slot, variable var, + enum var_init_status); static int vars_copy_1 (void **, void *); static void vars_copy (htab_t, htab_t); static tree var_debug_decl (tree); @@ -344,14 +423,18 @@ static int variable_union_info_cmp_pos (const void *, const void *); static int variable_union (void **, void *); static int variable_canonicalize (void **, void *); static void dataflow_set_union (dataflow_set *, dataflow_set *); +static location_chain find_loc_in_1pdv (rtx, variable, htab_t); +static bool canon_value_cmp (rtx, rtx); +static int loc_cmp (rtx, rtx); static bool variable_part_different_p (variable_part *, variable_part *); +static bool onepart_variable_different_p (variable, variable); static bool variable_different_p (variable, variable, bool); static int dataflow_set_different_1 (void **, void *); static bool dataflow_set_different (dataflow_set *, dataflow_set *); static void dataflow_set_destroy (dataflow_set *); static bool contains_symbol_ref (rtx); -static bool track_expr_p (tree); +static bool track_expr_p (tree, bool); static bool same_variable_part_p (rtx, tree, HOST_WIDE_INT); static int count_uses (rtx *, void *); static void count_uses_1 (rtx *, void *); @@ -363,23 +446,32 @@ static bool compute_bb_dataflow (basic_block); static void vt_find_locations (void); static void dump_attrs_list (attrs); -static int dump_variable (void **, void *); +static int dump_variable_slot (void **, void *); +static void dump_variable (variable); static void dump_vars (htab_t); static void dump_dataflow_set (dataflow_set *); static void dump_dataflow_sets (void); static void variable_was_changed (variable, dataflow_set *); -static void set_variable_part (dataflow_set *, rtx, tree, HOST_WIDE_INT, - enum var_init_status, rtx); -static void clobber_variable_part (dataflow_set *, rtx, tree, HOST_WIDE_INT, - rtx); -static void delete_variable_part (dataflow_set *, rtx, tree, HOST_WIDE_INT); +static void **set_slot_part (dataflow_set *, 
rtx, void **, + decl_or_value, HOST_WIDE_INT, + enum var_init_status, rtx); +static void set_variable_part (dataflow_set *, rtx, + decl_or_value, HOST_WIDE_INT, + enum var_init_status, rtx, enum insert_option); +static void **clobber_slot_part (dataflow_set *, rtx, + void **, HOST_WIDE_INT, rtx); +static void clobber_variable_part (dataflow_set *, rtx, + decl_or_value, HOST_WIDE_INT, rtx); +static void **delete_slot_part (dataflow_set *, rtx, void **, HOST_WIDE_INT); +static void delete_variable_part (dataflow_set *, rtx, + decl_or_value, HOST_WIDE_INT); static int emit_note_insn_var_location (void **, void *); -static void emit_notes_for_changes (rtx, enum emit_note_where); +static void emit_notes_for_changes (rtx, enum emit_note_where, shared_hash); static int emit_notes_for_differences_1 (void **, void *); static int emit_notes_for_differences_2 (void **, void *); static void emit_notes_for_differences (rtx, dataflow_set *, dataflow_set *); -static void emit_notes_in_bb (basic_block); +static void emit_notes_in_bb (basic_block, dataflow_set *); static void vt_emit_notes (void); static bool vt_get_decl_and_offset (rtx, tree *, HOST_WIDE_INT *); @@ -626,6 +718,115 @@ adjust_stack_reference (rtx mem, HOST_WIDE_INT adjustment) return replace_equiv_address_nv (mem, addr); } +/* Return true if a decl_or_value DV is a DECL or NULL. */ +static inline bool +dv_is_decl_p (decl_or_value dv) +{ + if (!dv) + return false; + + if (GET_CODE ((rtx)dv) == VALUE) + return false; + + return true; +} + +/* Return true if a decl_or_value is a VALUE rtl. */ +static inline bool +dv_is_value_p (decl_or_value dv) +{ + return dv && !dv_is_decl_p (dv); +} + +/* Return the decl in the decl_or_value. */ +static inline tree +dv_as_decl (decl_or_value dv) +{ + gcc_assert (dv_is_decl_p (dv)); + return (tree) dv; +} + +/* Return the value in the decl_or_value. */ +static inline rtx +dv_as_value (decl_or_value dv) +{ + gcc_assert (dv_is_value_p (dv)); + return (rtx)dv; +} + +/* Return the opaque pointer in the decl_or_value. */ +static inline void * +dv_as_opaque (decl_or_value dv) +{ + return dv; +} + +/* Return true if a decl_or_value must not have more than one variable + part. */ +static inline bool +dv_onepart_p (decl_or_value dv) +{ + tree decl; + + if (!MAY_HAVE_DEBUG_INSNS) + return false; + + if (dv_is_value_p (dv)) + return true; + + decl = dv_as_decl (dv); + + if (!decl) + return true; + + return (target_for_debug_bind (decl) != NULL_TREE); +} + +/* Return the variable pool to be used for dv, depending on whether it + can have multiple parts or not. */ +static inline alloc_pool +dv_pool (decl_or_value dv) +{ + return dv_onepart_p (dv) ? valvar_pool : var_pool; +} + +#define IS_DECL_CODE(C) ((C) == VAR_DECL || (C) == PARM_DECL \ + || (C) == RESULT_DECL || (C) == COMPONENT_REF) + +/* Check that VALUE won't ever look like a DECL. */ +static char check_value_is_not_decl [(!IS_DECL_CODE ((enum tree_code)VALUE)) + ? 1 : -1] ATTRIBUTE_UNUSED; + + +/* Build a decl_or_value out of a decl. */ +static inline decl_or_value +dv_from_decl (tree decl) +{ + decl_or_value dv; + gcc_assert (!decl || IS_DECL_CODE (TREE_CODE (decl))); + dv = decl; + return dv; +} + +/* Build a decl_or_value out of a value. 
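
The dv_* accessors above rely on tree and rtx objects both starting with a code field, so an opaque pointer can be classified by reading that shared field; the negative-array-size declaration check_value_is_not_decl turns the no-collision assumption into a build-time error. A reduced sketch of the same tagging trick, with made-up types (decl, value, DECL_CODE and VALUE_CODE are illustrative, not GCC's):

#include <assert.h>

/* Two unrelated object kinds sharing a leading code field, as tree
   and rtx do.  All names here are made up for the sketch.  */
enum code { DECL_CODE = 1, VALUE_CODE = 2 };

struct decl { enum code code; const char *name; };
struct value { enum code code; int num; };

typedef void *decl_or_value;

/* Build-time collision check in the spirit of
   check_value_is_not_decl: a negative array size is an error.  */
static char codes_distinct[(DECL_CODE != VALUE_CODE) ? 1 : -1];

/* Classify the opaque pointer by reading the shared first field.  */
static int
dv_is_decl_p (decl_or_value dv)
{
  return *(enum code *) dv == DECL_CODE;
}

int
main (void)
{
  struct decl d = { DECL_CODE, "x" };
  struct value v = { VALUE_CODE, 7 };
  (void) codes_distinct;
  assert (dv_is_decl_p (&d));
  assert (!dv_is_decl_p (&v));
  return 0;
}
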
*/ +static inline decl_or_value +dv_from_value (rtx value) +{ + decl_or_value dv; + gcc_assert (value); + dv = value; + return dv; +} + +static inline hashval_t +dv_htab_hash (decl_or_value dv) +{ + if (dv_is_value_p (dv)) + return -(hashval_t)(CSELIB_VAL_PTR (dv_as_value (dv))->value); + else + return (VARIABLE_HASH_VAL (dv_as_decl (dv))); +} + /* The hash function for variable_htab, computes the hash value from the declaration of variable X. */ @@ -634,7 +835,7 @@ variable_htab_hash (const void *x) { const_variable const v = (const_variable) x; - return (VARIABLE_HASH_VAL (v->decl)); + return dv_htab_hash (v->dv); } /* Compare the declaration of variable X with declaration Y. */ @@ -643,9 +844,31 @@ static int variable_htab_eq (const void *x, const void *y) { const_variable const v = (const_variable) x; - const_tree const decl = (const_tree) y; + decl_or_value dv = CONST_CAST2 (decl_or_value, const void *, y); + + if (dv_as_opaque (v->dv) == dv_as_opaque (dv)) + return true; + +#if ENABLE_CHECKING + { + bool visv, dvisv; + + visv = dv_is_value_p (v->dv); + dvisv = dv_is_value_p (dv); + + if (visv != dvisv) + return false; - return (VARIABLE_HASH_VAL (v->decl) == VARIABLE_HASH_VAL (decl)); + if (visv) + gcc_assert (CSELIB_VAL_PTR (dv_as_value (v->dv)) + != CSELIB_VAL_PTR (dv_as_value (dv))); + else + gcc_assert (VARIABLE_HASH_VAL (dv_as_decl (v->dv)) + != VARIABLE_HASH_VAL (dv_as_decl (dv))); + } +#endif + + return false; } /* Free the element of VARIABLE_HTAB (its type is struct variable_def). */ @@ -672,7 +895,29 @@ variable_htab_free (void *elem) } var->var_part[i].loc_chain = NULL; } - pool_free (var_pool, var); + pool_free (dv_pool (var->dv), var); +} + +/* The hash function for value_chains htab, computes the hash value + from the VALUE. */ + +static hashval_t +value_chain_htab_hash (const void *x) +{ + const_value_chain const v = (const_value_chain) x; + + return dv_htab_hash (v->dv); +} + +/* Compare the VALUE X with VALUE Y. */ + +static int +value_chain_htab_eq (const void *x, const void *y) +{ + const_value_chain const v = (const_value_chain) x; + decl_or_value dv = CONST_CAST2 (decl_or_value, const void *, y); + + return dv_as_opaque (v->dv) == dv_as_opaque (dv); } /* Initialize the set (array) SET of attrs to empty lists. */ @@ -704,10 +949,10 @@ attrs_list_clear (attrs *listp) /* Return true if the pair of DECL and OFFSET is the member of the LIST. */ static attrs -attrs_list_member (attrs list, tree decl, HOST_WIDE_INT offset) +attrs_list_member (attrs list, decl_or_value dv, HOST_WIDE_INT offset) { for (; list; list = list->next) - if (list->decl == decl && list->offset == offset) + if (dv_as_opaque (list->dv) == dv_as_opaque (dv) && list->offset == offset) return list; return NULL; } @@ -715,13 +960,14 @@ attrs_list_member (attrs list, tree decl, HOST_WIDE_INT offset) /* Insert the triplet DECL, OFFSET, LOC to the list *LISTP. 
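
dv_htab_hash above lets decls and VALUEs share a single hash table: decls hash to their DECL_UID while values hash to the negation of their cselib number, pushing the two key spaces toward opposite ends of the hash range. A toy version of that mixing, assuming plain integer ids in place of the real objects:

#include <stdio.h>

typedef unsigned int hashval_t;

/* One table, two key spaces: decl uids hash as-is, value numbers
   hash negated, echoing dv_htab_hash.  IS_VALUE and ID are plain
   ints standing in for the real objects.  */
static hashval_t
dv_hash (int is_value, unsigned int id)
{
  return is_value ? -(hashval_t) id : (hashval_t) id;
}

int
main (void)
{
  printf ("decl 5  -> %u\n", dv_hash (0, 5));
  printf ("value 5 -> %u\n", dv_hash (1, 5));	/* near UINT_MAX */
  return 0;
}
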
*/ static void -attrs_list_insert (attrs *listp, tree decl, HOST_WIDE_INT offset, rtx loc) +attrs_list_insert (attrs *listp, decl_or_value dv, + HOST_WIDE_INT offset, rtx loc) { attrs list; list = (attrs) pool_alloc (attrs_pool); list->loc = loc; - list->decl = decl; + list->dv = dv; list->offset = offset; list->next = *listp; *listp = list; @@ -739,7 +985,7 @@ attrs_list_copy (attrs *dstp, attrs src) { n = (attrs) pool_alloc (attrs_pool); n->loc = src->loc; - n->decl = src->decl; + n->dv = src->dv; n->offset = src->offset; n->next = *dstp; *dstp = n; @@ -753,8 +999,28 @@ attrs_list_union (attrs *dstp, attrs src) { for (; src; src = src->next) { - if (!attrs_list_member (*dstp, src->decl, src->offset)) - attrs_list_insert (dstp, src->decl, src->offset, src->loc); + if (!attrs_list_member (*dstp, src->dv, src->offset)) + attrs_list_insert (dstp, src->dv, src->offset, src->loc); + } +} + +/* Combine nodes that are not onepart nodes from SRC and SRC2 into + *DSTP. */ + +static void +attrs_list_mpdv_union (attrs *dstp, attrs src, attrs src2) +{ + gcc_assert (!*dstp); + for (; src; src = src->next) + { + if (!dv_onepart_p (src->dv)) + attrs_list_insert (dstp, src->dv, src->offset, src->loc); + } + for (src = src2; src; src = src->next) + { + if (!dv_onepart_p (src->dv) + && !attrs_list_member (*dstp, src->dv, src->offset)) + attrs_list_insert (dstp, src->dv, src->offset, src->loc); } } @@ -815,64 +1081,122 @@ shared_hash_destroy (shared_hash vars) } } -/* Unshare *PVARS if shared and return slot for DECL. If INS is +/* Unshare *PVARS if shared and return slot for DV. If INS is INSERT, insert it if not already present. */ static inline void ** -shared_hash_find_slot_unshare (shared_hash *pvars, tree decl, - enum insert_option ins) +shared_hash_find_slot_unshare_1 (shared_hash *pvars, decl_or_value dv, + hashval_t dvhash, enum insert_option ins) { if (shared_hash_shared (*pvars)) *pvars = shared_hash_unshare (*pvars); - return htab_find_slot_with_hash (shared_hash_htab (*pvars), decl, - VARIABLE_HASH_VAL (decl), ins); + return htab_find_slot_with_hash (shared_hash_htab (*pvars), dv, dvhash, ins); } -/* Return slot for DECL, if it is already present in the hash table. +static inline void ** +shared_hash_find_slot_unshare (shared_hash *pvars, decl_or_value dv, + enum insert_option ins) +{ + return shared_hash_find_slot_unshare_1 (pvars, dv, dv_htab_hash (dv), ins); +} + +/* Return slot for DV, if it is already present in the hash table. If it is not present, insert it only VARS is not shared, otherwise return NULL. */ static inline void ** -shared_hash_find_slot (shared_hash vars, tree decl) +shared_hash_find_slot_1 (shared_hash vars, decl_or_value dv, hashval_t dvhash) { - return htab_find_slot_with_hash (shared_hash_htab (vars), decl, - VARIABLE_HASH_VAL (decl), + return htab_find_slot_with_hash (shared_hash_htab (vars), dv, dvhash, shared_hash_shared (vars) ? NO_INSERT : INSERT); } -/* Return slot for DECL only if it is already present in the hash table. */ +static inline void ** +shared_hash_find_slot (shared_hash vars, decl_or_value dv) +{ + return shared_hash_find_slot_1 (vars, dv, dv_htab_hash (dv)); +} + +/* Return slot for DV only if it is already present in the hash table. 
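
The shared_hash_find_slot* family above implements copy-on-write: a shared table is only read (NO_INSERT), and the first mutation unshares it so sibling dataflow sets keep their own view. A reduced sketch of the unshare-before-write policy, using a refcounted array in place of the hash table (all names here are illustrative):

#include <stdlib.h>
#include <string.h>

/* A refcounted array standing in for the refcounted hash table.  */
struct shared_tab
{
  int refcount;
  int vals[8];
};

static struct shared_tab *
tab_copy (struct shared_tab *t)
{
  struct shared_tab *n = (struct shared_tab *) malloc (sizeof *n);
  memcpy (n, t, sizeof *n);
  n->refcount = 1;
  t->refcount--;
  return n;
}

/* Write through PT, unsharing first if anyone else holds the table,
   like shared_hash_find_slot_unshare before an INSERT.  */
static void
tab_set (struct shared_tab **pt, int idx, int val)
{
  if ((*pt)->refcount > 1)
    *pt = tab_copy (*pt);
  (*pt)->vals[idx] = val;
}

int
main (void)
{
  struct shared_tab *a = (struct shared_tab *) calloc (1, sizeof *a);
  struct shared_tab *b;
  int ok;

  a->refcount = 2;	/* two dataflow sets share one table */
  b = a;
  tab_set (&b, 0, 42);	/* B unshares; A's view is untouched */
  ok = a->vals[0] == 0 && b->vals[0] == 42;
  free (a);
  free (b);
  return ok ? 0 : 1;
}
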
 */

 static inline void **
-shared_hash_find_slot_noinsert (shared_hash vars, tree decl)
+shared_hash_find_slot_noinsert_1 (shared_hash vars, decl_or_value dv,
+				  hashval_t dvhash)
 {
-  return htab_find_slot_with_hash (shared_hash_htab (vars), decl,
-				   VARIABLE_HASH_VAL (decl), NO_INSERT);
+  return htab_find_slot_with_hash (shared_hash_htab (vars), dv, dvhash,
+				   NO_INSERT);
 }
 
-/* Return variable for DECL or NULL if not already present in the hash
+static inline void **
+shared_hash_find_slot_noinsert (shared_hash vars, decl_or_value dv)
+{
+  return shared_hash_find_slot_noinsert_1 (vars, dv, dv_htab_hash (dv));
+}
+
+/* Return variable for DV or NULL if not already present in the hash
    table.  */
 
 static inline variable
-shared_hash_find (shared_hash vars, tree decl)
+shared_hash_find_1 (shared_hash vars, decl_or_value dv, hashval_t dvhash)
 {
-  return (variable)
-	 htab_find_with_hash (shared_hash_htab (vars), decl,
-			      VARIABLE_HASH_VAL (decl));
+  return (variable) htab_find_with_hash (shared_hash_htab (vars), dv, dvhash);
 }
 
+static inline variable
+shared_hash_find (shared_hash vars, decl_or_value dv)
+{
+  return shared_hash_find_1 (vars, dv, dv_htab_hash (dv));
+}
+
+/* Determine a total order between two distinct pointers.  Compare the
+   pointers as integral types if size_t is wide enough, otherwise
+   resort to bitwise memory compare.  The actual order does not
+   matter; we just need to be consistent, so endianness is
+   irrelevant.  */
+
+static int
+tie_break_pointers (const void *p1, const void *p2)
+{
+  gcc_assert (p1 != p2);
+
+  if (sizeof (size_t) >= sizeof (void*))
+    return (size_t)p1 < (size_t)p2 ? -1 : 1;
+  else
+    return memcmp (&p1, &p2, sizeof (p1));
+}
+
+/* Return true if TVAL is better than CVAL as a canonical value.  We
+   choose lowest-numbered VALUEs, using the RTX address as a
+   tie-breaker.  The idea is to arrange them into a star topology,
+   such that all of them are at most one step away from the canonical
+   value, and the canonical value has backlinks to all of them, in
+   addition to all the actual locations.  We don't enforce this
+   topology throughout the entire dataflow analysis, though.  */
+
+static inline bool
+canon_value_cmp (rtx tval, rtx cval)
+{
+  return !cval
+	 || CSELIB_VAL_PTR (tval)->value < CSELIB_VAL_PTR (cval)->value
+	 || (CSELIB_VAL_PTR (tval)->value == CSELIB_VAL_PTR (cval)->value
+	     && tie_break_pointers (tval, cval) < 0);
+}
+
+static bool dst_can_be_shared;
+
 /* Return a copy of a variable VAR and insert it to dataflow set SET.
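
tie_break_pointers and canon_value_cmp above define a deterministic total order: the lowest cselib number wins, and only equal numbers fall back to comparing the pointers themselves, so a "most canonical" member of every equivalence set is well defined. A standalone comparator in the same shape (struct val and its number field are stand-ins for a cselib VALUE):

#include <assert.h>
#include <stddef.h>
#include <string.h>

struct val { int number; };	/* stand-in for a cselib VALUE */

/* Total order on distinct pointers, as in tie_break_pointers.  */
static int
tie_break (const void *p1, const void *p2)
{
  assert (p1 != p2);
  if (sizeof (size_t) >= sizeof (void *))
    return (size_t) p1 < (size_t) p2 ? -1 : 1;
  else
    return memcmp (&p1, &p2, sizeof p1);
}

/* True if T beats C as canonical; C may be null, and lower numbers
   always win, so the choice is stable for a given numbering.  */
static int
canon_cmp (struct val *t, struct val *c)
{
  return !c
	 || t->number < c->number
	 || (t->number == c->number && tie_break (t, c) < 0);
}

int
main (void)
{
  struct val a = { 1 }, b = { 2 };
  assert (canon_cmp (&a, 0));	/* anything beats "no canonical yet" */
  assert (canon_cmp (&a, &b));	/* lower number wins */
  assert (!canon_cmp (&b, &a));
  return 0;
}
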
*/ -static variable -unshare_variable (dataflow_set *set, variable var, +static void ** +unshare_variable (dataflow_set *set, void **slot, variable var, enum var_init_status initialized) { - void **slot; variable new_var; int i; - new_var = (variable) pool_alloc (var_pool); - new_var->decl = var->decl; + new_var = (variable) pool_alloc (dv_pool (var->dv)); + new_var->dv = var->dv; new_var->refcount = 1; var->refcount--; new_var->n_var_parts = var->n_var_parts; @@ -915,9 +1239,13 @@ unshare_variable (dataflow_set *set, variable var, new_var->var_part[i].cur_loc = NULL; } - slot = shared_hash_find_slot_unshare (&set->vars, new_var->decl, INSERT); + dst_can_be_shared = false; + if (shared_hash_shared (set->vars)) + slot = shared_hash_find_slot_unshare (&set->vars, var->dv, NO_INSERT); + else if (set->traversed_vars && set->vars != set->traversed_vars) + slot = shared_hash_find_slot_noinsert (set->vars, var->dv); *slot = new_var; - return new_var; + return slot; } /* Add a variable from *SLOT to hash table DATA and increase its reference @@ -927,14 +1255,15 @@ static int vars_copy_1 (void **slot, void *data) { htab_t dst = (htab_t) data; - variable src, *dstp; + variable src; + void **dstp; - src = *(variable *) slot; + src = (variable) *slot; src->refcount++; - dstp = (variable *) htab_find_slot_with_hash (dst, src->decl, - VARIABLE_HASH_VAL (src->decl), - INSERT); + dstp = htab_find_slot_with_hash (dst, src->dv, + dv_htab_hash (src->dv), + INSERT); *dstp = src; /* Continue traversing the hash table. */ @@ -962,28 +1291,43 @@ var_debug_decl (tree decl) return decl; } -/* Set the register to contain REG_EXPR (LOC), REG_OFFSET (LOC). */ +/* Set the register LOC to contain DV, OFFSET. */ static void -var_reg_set (dataflow_set *set, rtx loc, enum var_init_status initialized, - rtx set_src) +var_reg_decl_set (dataflow_set *set, rtx loc, enum var_init_status initialized, + decl_or_value dv, HOST_WIDE_INT offset, rtx set_src, + enum insert_option iopt) { - tree decl = REG_EXPR (loc); - HOST_WIDE_INT offset = REG_OFFSET (loc); attrs node; + bool decl_p = dv_is_decl_p (dv); - decl = var_debug_decl (decl); + if (decl_p) + dv = dv_from_decl (var_debug_decl (dv_as_decl (dv))); for (node = set->regs[REGNO (loc)]; node; node = node->next) - if (node->decl == decl && node->offset == offset) + if (dv_as_opaque (node->dv) == dv_as_opaque (dv) + && node->offset == offset) break; if (!node) - attrs_list_insert (&set->regs[REGNO (loc)], decl, offset, loc); - set_variable_part (set, loc, decl, offset, initialized, set_src); + attrs_list_insert (&set->regs[REGNO (loc)], dv, offset, loc); + set_variable_part (set, loc, dv, offset, initialized, set_src, iopt); +} + +/* Set the register to contain REG_EXPR (LOC), REG_OFFSET (LOC). */ + +static void +var_reg_set (dataflow_set *set, rtx loc, enum var_init_status initialized, + rtx set_src) +{ + tree decl = REG_EXPR (loc); + HOST_WIDE_INT offset = REG_OFFSET (loc); + + var_reg_decl_set (set, loc, initialized, + dv_from_decl (decl), offset, set_src, INSERT); } static enum var_init_status -get_init_value (dataflow_set *set, rtx loc, tree decl) +get_init_value (dataflow_set *set, rtx loc, decl_or_value dv) { variable var; int i; @@ -992,7 +1336,7 @@ get_init_value (dataflow_set *set, rtx loc, tree decl) if (! 
flag_var_tracking_uninit) return VAR_INIT_STATUS_INITIALIZED; - var = shared_hash_find (set->vars, decl); + var = shared_hash_find (set->vars, dv); if (var) { for (i = 0; i < var->n_var_parts && ret_val == VAR_INIT_STATUS_UNKNOWN; i++) @@ -1029,15 +1373,15 @@ var_reg_delete_and_set (dataflow_set *set, rtx loc, bool modify, decl = var_debug_decl (decl); if (initialized == VAR_INIT_STATUS_UNKNOWN) - initialized = get_init_value (set, loc, decl); + initialized = get_init_value (set, loc, dv_from_decl (decl)); nextp = &set->regs[REGNO (loc)]; for (node = *nextp; node; node = next) { next = node->next; - if (node->decl != decl || node->offset != offset) + if (dv_as_opaque (node->dv) != decl || node->offset != offset) { - delete_variable_part (set, node->loc, node->decl, node->offset); + delete_variable_part (set, node->loc, node->dv, node->offset); pool_free (attrs_pool, node); *nextp = next; } @@ -1048,7 +1392,7 @@ var_reg_delete_and_set (dataflow_set *set, rtx loc, bool modify, } } if (modify) - clobber_variable_part (set, loc, decl, offset, set_src); + clobber_variable_part (set, loc, dv_from_decl (decl), offset, set_src); var_reg_set (set, loc, initialized, set_src); } @@ -1069,13 +1413,13 @@ var_reg_delete (dataflow_set *set, rtx loc, bool clobber) decl = var_debug_decl (decl); - clobber_variable_part (set, NULL, decl, offset, NULL); + clobber_variable_part (set, NULL, dv_from_decl (decl), offset, NULL); } for (node = *reg; node; node = next) { next = node->next; - delete_variable_part (set, node->loc, node->decl, node->offset); + delete_variable_part (set, node->loc, node->dv, node->offset); pool_free (attrs_pool, node); } *reg = NULL; @@ -1092,12 +1436,25 @@ var_regno_delete (dataflow_set *set, int regno) for (node = *reg; node; node = next) { next = node->next; - delete_variable_part (set, node->loc, node->decl, node->offset); + delete_variable_part (set, node->loc, node->dv, node->offset); pool_free (attrs_pool, node); } *reg = NULL; } +/* Set the location of DV, OFFSET as the MEM LOC. */ + +static void +var_mem_decl_set (dataflow_set *set, rtx loc, enum var_init_status initialized, + decl_or_value dv, HOST_WIDE_INT offset, rtx set_src, + enum insert_option iopt) +{ + if (dv_is_decl_p (dv)) + dv = dv_from_decl (var_debug_decl (dv_as_decl (dv))); + + set_variable_part (set, loc, dv, offset, initialized, set_src, iopt); +} + /* Set the location part of variable MEM_EXPR (LOC) in dataflow set SET to LOC. Adjust the address first if it is stack pointer based. 
*/ @@ -1109,9 +1466,8 @@ var_mem_set (dataflow_set *set, rtx loc, enum var_init_status initialized, tree decl = MEM_EXPR (loc); HOST_WIDE_INT offset = INT_MEM_OFFSET (loc); - decl = var_debug_decl (decl); - - set_variable_part (set, loc, decl, offset, initialized, set_src); + var_mem_decl_set (set, loc, initialized, + dv_from_decl (decl), offset, set_src, INSERT); } /* Delete and set the location part of variable MEM_EXPR (LOC) in @@ -1131,10 +1487,10 @@ var_mem_delete_and_set (dataflow_set *set, rtx loc, bool modify, decl = var_debug_decl (decl); if (initialized == VAR_INIT_STATUS_UNKNOWN) - initialized = get_init_value (set, loc, decl); + initialized = get_init_value (set, loc, dv_from_decl (decl)); if (modify) - clobber_variable_part (set, NULL, decl, offset, set_src); + clobber_variable_part (set, NULL, dv_from_decl (decl), offset, set_src); var_mem_set (set, loc, initialized, set_src); } @@ -1150,8 +1506,180 @@ var_mem_delete (dataflow_set *set, rtx loc, bool clobber) decl = var_debug_decl (decl); if (clobber) - clobber_variable_part (set, NULL, decl, offset, NULL); - delete_variable_part (set, loc, decl, offset); + clobber_variable_part (set, NULL, dv_from_decl (decl), offset, NULL); + delete_variable_part (set, loc, dv_from_decl (decl), offset); +} + +/* Map a value to a location it was just stored in. */ + +static void +val_store (dataflow_set *set, rtx val, rtx loc, rtx insn) +{ + cselib_val *v = CSELIB_VAL_PTR (val); + + gcc_assert (cselib_preserved_value_p (v)); + + if (dump_file) + { + fprintf (dump_file, "%i: ", INSN_UID (insn)); + print_inline_rtx (dump_file, val, 0); + fprintf (dump_file, " stored in "); + print_inline_rtx (dump_file, loc, 0); + if (v->locs) + { + struct elt_loc_list *l; + for (l = v->locs; l; l = l->next) + { + fprintf (dump_file, "\n%i: ", INSN_UID (l->setting_insn)); + print_inline_rtx (dump_file, l->loc, 0); + } + } + fprintf (dump_file, "\n"); + } + + if (REG_P (loc)) + { + var_regno_delete (set, REGNO (loc)); + var_reg_decl_set (set, loc, VAR_INIT_STATUS_INITIALIZED, + dv_from_value (val), 0, NULL_RTX, INSERT); + } + else if (MEM_P (loc)) + var_mem_decl_set (set, loc, VAR_INIT_STATUS_INITIALIZED, + dv_from_value (val), 0, NULL_RTX, INSERT); + else + set_variable_part (set, loc, dv_from_value (val), 0, + VAR_INIT_STATUS_INITIALIZED, NULL_RTX, INSERT); +} + +/* Reset this node, detaching all its equivalences. Return the slot + in the variable hash table that holds dv, if there is one. */ + +static void +val_reset (dataflow_set *set, decl_or_value dv) +{ + variable var = shared_hash_find (set->vars, dv) ; + location_chain node; + rtx cval; + + if (!var || !var->n_var_parts) + return; + + gcc_assert (var->n_var_parts == 1); + + cval = NULL; + for (node = var->var_part[0].loc_chain; node; node = node->next) + if (GET_CODE (node->loc) == VALUE + && canon_value_cmp (node->loc, cval)) + cval = node->loc; + + for (node = var->var_part[0].loc_chain; node; node = node->next) + if (GET_CODE (node->loc) == VALUE && cval != node->loc) + { + /* Redirect the equivalence link to the new canonical + value, or simply remove it if it would point at + itself. */ + if (cval) + set_variable_part (set, cval, dv_from_value (node->loc), + 0, node->init, node->set_src, NO_INSERT); + delete_variable_part (set, dv_as_value (dv), + dv_from_value (node->loc), 0); + } + + if (cval) + { + decl_or_value cdv = dv_from_value (cval); + + /* Keep the remaining values connected, accummulating links + in the canonical value. 
*/ + for (node = var->var_part[0].loc_chain; node; node = node->next) + { + if (node->loc == cval) + continue; + else if (GET_CODE (node->loc) == REG) + var_reg_decl_set (set, node->loc, node->init, cdv, 0, + node->set_src, NO_INSERT); + else if (GET_CODE (node->loc) == MEM) + var_mem_decl_set (set, node->loc, node->init, cdv, 0, + node->set_src, NO_INSERT); + else + set_variable_part (set, node->loc, cdv, 0, + node->init, node->set_src, NO_INSERT); + } + } + + /* We remove this last, to make sure that the canonical value is not + removed to the point of requiring reinsertion. */ + if (cval) + delete_variable_part (set, dv_as_value (dv), dv_from_value (cval), 0); + + clobber_variable_part (set, NULL, dv, 0, NULL); + + /* ??? Should we make sure there aren't other available values or + variables whose values involve this one other than by + equivalence? E.g., at the very least we should reset MEMs, those + shouldn't be too hard to find cselib-looking up the value as an + address, then locating the resulting value in our own hash + table. */ +} + +/* Find the values in a given location and map the val to another + value, if it is unique, or add the location as one holding the + value. */ + +static void +val_resolve (dataflow_set *set, rtx val, rtx loc, rtx insn) +{ + decl_or_value dv = dv_from_value (val); + + if (dump_file && (dump_flags & TDF_DETAILS)) + { + if (insn) + fprintf (dump_file, "%i: ", INSN_UID (insn)); + else + fprintf (dump_file, "head: "); + print_inline_rtx (dump_file, val, 0); + fputs (" is at ", dump_file); + print_inline_rtx (dump_file, loc, 0); + fputc ('\n', dump_file); + } + + val_reset (set, dv); + + if (REG_P (loc)) + { + attrs node, found = NULL; + + for (node = set->regs[REGNO (loc)]; node; node = node->next) + if (dv_is_value_p (node->dv) + && GET_MODE (dv_as_value (node->dv)) == GET_MODE (loc)) + { + found = node; + + /* Map incoming equivalences. ??? Wouldn't it be nice if + we just started sharing the location lists? Maybe a + circular list ending at the value itself or some + such. */ + set_variable_part (set, dv_as_value (node->dv), + dv_from_value (val), node->offset, + VAR_INIT_STATUS_INITIALIZED, NULL_RTX, INSERT); + set_variable_part (set, val, node->dv, node->offset, + VAR_INIT_STATUS_INITIALIZED, NULL_RTX, INSERT); + } + + /* If we didn't find any equivalence, we need to remember that + this value is held in the named register. */ + if (!found) + var_reg_decl_set (set, loc, VAR_INIT_STATUS_INITIALIZED, + dv_from_value (val), 0, NULL_RTX, INSERT); + } + else if (MEM_P (loc)) + /* ??? Merge equivalent MEMs. */ + var_mem_decl_set (set, loc, VAR_INIT_STATUS_INITIALIZED, + dv_from_value (val), 0, NULL_RTX, INSERT); + else + /* ??? Merge equivalent expressions. */ + set_variable_part (set, loc, dv_from_value (val), 0, + VAR_INIT_STATUS_INITIALIZED, NULL_RTX, INSERT); } /* Initialize dataflow set SET to be empty. @@ -1163,6 +1691,7 @@ dataflow_set_init (dataflow_set *set) init_attrs_list_set (set->regs); set->vars = shared_hash_copy (empty_shared_hash); set->stack_adjust = 0; + set->traversed_vars = NULL; } /* Delete the contents of dataflow set SET. 
*/ @@ -1246,12 +1775,18 @@ variable_union (void **slot, void *data) dataflow_set *set = (dataflow_set *) data; int i, j, k; - src = *(variable *) slot; - dstp = shared_hash_find_slot (set->vars, src->decl); + src = (variable) *slot; + dstp = shared_hash_find_slot (set->vars, src->dv); if (!dstp || !*dstp) { src->refcount++; + dst_can_be_shared = false; + if (!dstp) + dstp = shared_hash_find_slot_unshare (&set->vars, src->dv, INSERT); + + *dstp = src; + /* If CUR_LOC of some variable part is not the first element of the location chain we are going to change it so we have to make a copy of the variable. */ @@ -1267,18 +1802,7 @@ variable_union (void **slot, void *data) } } if (k < src->n_var_parts) - { - if (dstp) - *dstp = (void *) src; - unshare_variable (set, src, VAR_INIT_STATUS_UNKNOWN); - } - else - { - if (!dstp) - dstp = shared_hash_find_slot_unshare (&set->vars, src->decl, - INSERT); - *dstp = (void *) src; - } + dstp = unshare_variable (set, dstp, src, VAR_INIT_STATUS_UNKNOWN); /* Continue traversing the hash table. */ return 1; @@ -1288,6 +1812,66 @@ variable_union (void **slot, void *data) gcc_assert (src->n_var_parts); + /* We can combine one-part variables very efficiently, because their + entries are in canonical order. */ + if (dv_onepart_p (src->dv)) + { + location_chain *nodep, dnode, snode; + + gcc_assert (src->n_var_parts == 1); + gcc_assert (dst->n_var_parts == 1); + + snode = src->var_part[0].loc_chain; + gcc_assert (snode); + + restart_onepart_unshared: + nodep = &dst->var_part[0].loc_chain; + dnode = *nodep; + gcc_assert (dnode); + + while (snode) + { + int r = dnode ? loc_cmp (dnode->loc, snode->loc) : 1; + + if (r > 0) + { + location_chain nnode; + + if (dst->refcount != 1 || shared_hash_shared (set->vars)) + { + dstp = unshare_variable (set, dstp, dst, + VAR_INIT_STATUS_INITIALIZED); + dst = (variable)*dstp; + goto restart_onepart_unshared; + } + + *nodep = nnode = (location_chain) pool_alloc (loc_chain_pool); + nnode->loc = snode->loc; + nnode->init = snode->init; + if (!snode->set_src || MEM_P (snode->set_src)) + nnode->set_src = NULL; + else + nnode->set_src = snode->set_src; + nnode->next = dnode; + dnode = nnode; + } +#ifdef ENABLE_CHECKING + else if (r == 0) + gcc_assert (rtx_equal_p (dnode->loc, snode->loc)); +#endif + + if (r >= 0) + snode = snode->next; + + nodep = &dnode->next; + dnode = *nodep; + } + + dst->var_part[0].cur_loc = dst->var_part[0].loc_chain->loc; + + return 1; + } + /* Count the number of location parts, result is K. */ for (i = 0, j = 0, k = 0; i < src->n_var_parts && j < dst->n_var_parts; k++) @@ -1307,11 +1891,14 @@ variable_union (void **slot, void *data) /* We track only variables whose size is <= MAX_VAR_PARTS bytes thus there are at most MAX_VAR_PARTS different offsets. */ - gcc_assert (k <= MAX_VAR_PARTS); + gcc_assert (dv_onepart_p (dst->dv) ? 
k == 1 : k <= MAX_VAR_PARTS); if ((dst->refcount > 1 || shared_hash_shared (set->vars)) && dst->n_var_parts != k) - dst = unshare_variable (set, dst, VAR_INIT_STATUS_UNKNOWN); + { + dstp = unshare_variable (set, dstp, dst, VAR_INIT_STATUS_UNKNOWN); + dst = (variable)*dstp; + } i = src->n_var_parts - 1; j = dst->n_var_parts - 1; @@ -1351,7 +1938,11 @@ variable_union (void **slot, void *data) } } if (node || node2) - dst = unshare_variable (set, dst, VAR_INIT_STATUS_UNKNOWN); + { + dstp = unshare_variable (set, dstp, dst, + VAR_INIT_STATUS_UNKNOWN); + dst = (variable)*dstp; + } } src_l = 0; @@ -1599,7 +2190,7 @@ variable_canonicalize (void **slot, void *data) } } if (k < src->n_var_parts) - unshare_variable (set, src, VAR_INIT_STATUS_UNKNOWN); + slot = unshare_variable (set, slot, src, VAR_INIT_STATUS_UNKNOWN); return 1; } @@ -1617,12 +2208,1678 @@ dataflow_set_union (dataflow_set *dst, dataflow_set *src) { shared_hash_destroy (dst->vars); dst->vars = shared_hash_copy (src->vars); - htab_traverse (shared_hash_htab (src->vars), variable_canonicalize, dst); + dst->traversed_vars = dst->vars; + htab_traverse (shared_hash_htab (dst->vars), variable_canonicalize, dst); + dst->traversed_vars = NULL; } else htab_traverse (shared_hash_htab (src->vars), variable_union, dst); } +/* Whether the value is currently being expanded. */ +#define VALUE_RECURSED_INTO(x) \ + (RTL_FLAG_CHECK1 ("VALUE_RECURSED_INTO", (x), VALUE)->used) +/* Whether the value is in changed_variables hash table. */ +#define VALUE_CHANGED(x) \ + (RTL_FLAG_CHECK1 ("VALUE_CHANGED", (x), VALUE)->frame_related) +/* Whether the decl is in changed_variables hash table. */ +#define DECL_CHANGED(x) TREE_VISITED (x) + +/* Record that DV has been added into resp. removed from changed_variables + hashtable. */ + +static inline void +set_dv_changed (decl_or_value dv, bool newv) +{ + if (dv_is_value_p (dv)) + VALUE_CHANGED (dv_as_value (dv)) = newv; + else + DECL_CHANGED (dv_as_decl (dv)) = newv; +} + +/* Return true if DV is present in changed_variables hash table. */ + +static inline bool +dv_changed_p (decl_or_value dv) +{ + return (dv_is_value_p (dv) + ? VALUE_CHANGED (dv_as_value (dv)) + : DECL_CHANGED (dv_as_decl (dv))); +} + +/* Return a location list node whose loc is rtx_equal to LOC, in the + location list of a one-part variable or value VAR, or in that of + any values recursively mentioned in the location lists. */ + +static location_chain +find_loc_in_1pdv (rtx loc, variable var, htab_t vars) +{ + location_chain node; + + if (!var) + return NULL; + + gcc_assert (dv_onepart_p (var->dv)); + + if (!var->n_var_parts) + return NULL; + + gcc_assert (var->var_part[0].offset == 0); + + for (node = var->var_part[0].loc_chain; node; node = node->next) + if (rtx_equal_p (loc, node->loc)) + return node; + else if (GET_CODE (node->loc) == VALUE + && !VALUE_RECURSED_INTO (node->loc)) + { + decl_or_value dv = dv_from_value (node->loc); + variable var = (variable) + htab_find_with_hash (vars, dv, dv_htab_hash (dv)); + + if (var) + { + location_chain where; + VALUE_RECURSED_INTO (node->loc) = true; + if ((where = find_loc_in_1pdv (loc, var, vars))) + { + VALUE_RECURSED_INTO (node->loc) = false; + return where; + } + VALUE_RECURSED_INTO (node->loc) = false; + } + } + + return NULL; +} + +/* Hash table iteration argument passed to variable_merge. */ +struct dfset_merge +{ + /* The set in which the merge is to be inserted. */ + dataflow_set *dst; + /* The set that we're iterating in. 
*/ + dataflow_set *cur; + /* The set that may contain the other dv we are to merge with. */ + dataflow_set *src; + /* Number of onepart dvs in src. */ + int src_onepart_cnt; +}; + +/* Insert LOC in *DNODE, if it's not there yet. The list must be in + loc_cmp order, and it is maintained as such. */ + +static void +insert_into_intersection (location_chain *nodep, rtx loc, + enum var_init_status status) +{ + location_chain node; + int r; + + for (node = *nodep; node; nodep = &node->next, node = *nodep) + if ((r = loc_cmp (node->loc, loc)) == 0) + { + node->init = MIN (node->init, status); + return; + } + else if (r > 0) + break; + + node = (location_chain) pool_alloc (loc_chain_pool); + + node->loc = loc; + node->set_src = NULL; + node->init = status; + node->next = *nodep; + *nodep = node; +} + +/* Insert in DEST the intersection the locations present in both + S1NODE and S2VAR, directly or indirectly. S1NODE is from a + variable in DSM->cur, whereas S2VAR is from DSM->src. dvar is in + DSM->dst. */ + +static void +intersect_loc_chains (rtx val, location_chain *dest, struct dfset_merge *dsm, + location_chain s1node, variable s2var) +{ + dataflow_set *s1set = dsm->cur; + dataflow_set *s2set = dsm->src; + location_chain found; + + for (; s1node; s1node = s1node->next) + { + if (s1node->loc == val) + continue; + + if ((found = find_loc_in_1pdv (s1node->loc, s2var, + shared_hash_htab (s2set->vars)))) + { + insert_into_intersection (dest, s1node->loc, + MIN (s1node->init, found->init)); + continue; + } + + if (GET_CODE (s1node->loc) == VALUE + && !VALUE_RECURSED_INTO (s1node->loc)) + { + decl_or_value dv = dv_from_value (s1node->loc); + variable svar = shared_hash_find (s1set->vars, dv); + if (svar) + { + if (svar->n_var_parts == 1) + { + VALUE_RECURSED_INTO (s1node->loc) = true; + intersect_loc_chains (val, dest, dsm, + svar->var_part[0].loc_chain, + s2var); + VALUE_RECURSED_INTO (s1node->loc) = false; + } + } + } + + /* ??? if the location is equivalent to any location in src, + searched recursively + + add to dst the values needed to represent the equivalence + + telling whether locations S is equivalent to another dv's + location list: + + for each location D in the list + + if S and D satisfy rtx_equal_p, then it is present + + else if D is a value, recurse without cycles + + else if S and D have the same CODE and MODE + + for each operand oS and the corresponding oD + + if oS and oD are not equivalent, then S an D are not equivalent + + else if they are RTX vectors + + if any vector oS element is not equivalent to its respective oD, + then S and D are not equivalent + + */ + + + } +} + +/* Return -1 if X should be before Y in a location list for a 1-part + variable, 1 if Y should be before X, and 0 if they're equivalent + and should not appear in the list. 
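
find_loc_in_1pdv above chases VALUE equivalences recursively and uses VALUE_RECURSED_INTO as an on-stack visited bit, clearing it on every exit path so cyclic equivalence chains terminate. The same guard, reduced to a toy node graph (struct node and its fields are illustrative):

#include <stdio.h>

struct node
{
  int id;
  int recursed;		/* the VALUE_RECURSED_INTO analogue */
  struct node *next[2];	/* equivalence edges */
};

/* Depth-first search tolerating cycles: nodes are flagged while on
   the recursion stack and unflagged on every exit path.  */
static struct node *
find (struct node *n, int target)
{
  struct node *hit = 0;
  int i;

  if (!n || n->recursed)
    return 0;
  if (n->id == target)
    return n;

  n->recursed = 1;
  for (i = 0; !hit && i < 2; i++)
    hit = find (n->next[i], target);
  n->recursed = 0;

  return hit;
}

int
main (void)
{
  struct node a = { 1, 0, { 0, 0 } };
  struct node b = { 2, 0, { 0, 0 } };
  a.next[0] = &b;
  b.next[0] = &a;	/* equivalences may form cycles */
  printf ("found: %d\n", find (&a, 2) ? 2 : -1);
  return 0;
}
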
*/ + +static int +loc_cmp (rtx x, rtx y) +{ + int i, j, r; + RTX_CODE code = GET_CODE (x); + const char *fmt; + + if (x == y) + return 0; + + if (REG_P (x)) + { + if (!REG_P (y)) + return -1; + gcc_assert (GET_MODE (x) == GET_MODE (y)); + if (REGNO (x) == REGNO (y)) + return 0; + else if (REGNO (x) < REGNO (y)) + return -1; + else + return 1; + } + + if (REG_P (y)) + return 1; + + if (MEM_P (x)) + { + if (!MEM_P (y)) + return -1; + gcc_assert (GET_MODE (x) == GET_MODE (y)); + return loc_cmp (XEXP (x, 0), XEXP (y, 0)); + } + + if (MEM_P (y)) + return 1; + + if (GET_CODE (x) == VALUE) + { + if (GET_CODE (y) != VALUE) + return -1; + gcc_assert (GET_MODE (x) == GET_MODE (y)); + if (canon_value_cmp (x, y)) + return -1; + else + return 1; + } + + if (GET_CODE (y) == VALUE) + return 1; + + if (GET_CODE (x) == GET_CODE (y)) + /* Compare operands below. */; + else if (GET_CODE (x) < GET_CODE (y)) + return -1; + else + return 1; + + gcc_assert (GET_MODE (x) == GET_MODE (y)); + + fmt = GET_RTX_FORMAT (code); + for (i = 0; i < GET_RTX_LENGTH (code); i++) + switch (fmt[i]) + { + case 'w': + if (XWINT (x, i) == XWINT (y, i)) + break; + else if (XWINT (x, i) < XWINT (y, i)) + return -1; + else + return 1; + + case 'n': + case 'i': + if (XINT (x, i) == XINT (y, i)) + break; + else if (XINT (x, i) < XINT (y, i)) + return -1; + else + return 1; + + case 'V': + case 'E': + /* Compare the vector length first. */ + if (XVECLEN (x, i) == XVECLEN (y, i)) + /* Compare the vectors elements. */; + else if (XVECLEN (x, i) < XVECLEN (y, i)) + return -1; + else + return 1; + + for (j = 0; j < XVECLEN (x, i); j++) + if ((r = loc_cmp (XVECEXP (x, i, j), + XVECEXP (y, i, j)))) + return r; + break; + + case 'e': + if ((r = loc_cmp (XEXP (x, i), XEXP (y, i)))) + return r; + break; + + case 'S': + case 's': + if (XSTR (x, i) == XSTR (y, i)) + break; + if (!XSTR (x, i)) + return -1; + if (!XSTR (y, i)) + return 1; + if ((r = strcmp (XSTR (x, i), XSTR (y, i))) == 0) + break; + else if (r < 0) + return -1; + else + return 1; + + case 'u': + /* These are just backpointers, so they don't matter. */ + break; + + case '0': + case 't': + break; + + /* It is believed that rtx's at this level will never + contain anything but integers and other rtx's, + except for within LABEL_REFs and SYMBOL_REFs. */ + default: + gcc_unreachable (); + } + + return 0; +} + +/* If decl or value DVP refers to VALUE from *LOC, add backlinks + from VALUE to DVP. */ + +static int +add_value_chain (rtx *loc, void *dvp) +{ + if (GET_CODE (*loc) == VALUE && (void *) *loc != dvp) + { + decl_or_value dv = (decl_or_value) dvp; + decl_or_value ldv = dv_from_value (*loc); + value_chain vc, nvc; + void **slot = htab_find_slot_with_hash (value_chains, ldv, + dv_htab_hash (ldv), INSERT); + if (!*slot) + { + vc = (value_chain) pool_alloc (value_chain_pool); + vc->dv = ldv; + vc->next = NULL; + vc->refcount = 0; + *slot = (void *) vc; + } + else + { + for (vc = ((value_chain) *slot)->next; vc; vc = vc->next) + if (dv_as_opaque (vc->dv) == dv_as_opaque (dv)) + break; + if (vc) + { + vc->refcount++; + return 0; + } + } + vc = (value_chain) *slot; + nvc = (value_chain) pool_alloc (value_chain_pool); + nvc->dv = dv; + nvc->next = vc->next; + nvc->refcount = 1; + vc->next = nvc; + } + return 0; +} + +/* If decl or value DVP refers to VALUEs from within LOC, add backlinks + from those VALUEs to DVP. 
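
insert_into_intersection above keeps each chain in loc_cmp order: scan while the three-way comparator says the new element sorts later, merge on equality (keeping the weaker init status via MIN), otherwise splice in front of the first larger node. The same loop over an int-keyed list:

#include <stdio.h>
#include <stdlib.h>

struct chain { int key; int init; struct chain *next; };

#define MIN(A, B) ((A) < (B) ? (A) : (B))

/* Keep the list sorted by KEY; merge on equality, keeping the weaker
   init status, exactly the shape of insert_into_intersection.  */
static void
sorted_insert (struct chain **nodep, int key, int init)
{
  struct chain *node;

  for (node = *nodep; node; nodep = &node->next, node = *nodep)
    if (node->key == key)
      {
	node->init = MIN (node->init, init);
	return;
      }
    else if (node->key > key)
      break;

  node = (struct chain *) malloc (sizeof *node);
  node->key = key;
  node->init = init;
  node->next = *nodep;
  *nodep = node;
}

int
main (void)
{
  struct chain *head = 0, *n;
  sorted_insert (&head, 3, 1);
  sorted_insert (&head, 1, 2);
  sorted_insert (&head, 3, 0);	/* merges; init drops to 0 */
  for (n = head; n; n = n->next)
    printf ("%d/%d ", n->key, n->init);
  printf ("\n");		/* prints: 1/2 3/0 */
  return 0;
}
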
*/ + +static void +add_value_chains (decl_or_value dv, rtx loc) +{ + if (GET_CODE (loc) == VALUE) + { + add_value_chain (&loc, dv_as_opaque (dv)); + return; + } + if (REG_P (loc)) + return; + if (MEM_P (loc)) + loc = XEXP (loc, 0); + for_each_rtx (&loc, add_value_chain, dv_as_opaque (dv)); +} + +/* If CSELIB_VAL_PTR of value DV refer to VALUEs, add backlinks from those + VALUEs to DV. */ + +static void +add_cselib_value_chains (decl_or_value dv) +{ + struct elt_loc_list *l; + + for (l = CSELIB_VAL_PTR (dv_as_value (dv))->locs; l; l = l->next) + for_each_rtx (&l->loc, add_value_chain, dv_as_opaque (dv)); +} + +/* If decl or value DVP refers to VALUE from *LOC, remove backlinks + from VALUE to DVP. */ + +static int +remove_value_chain (rtx *loc, void *dvp) +{ + if (GET_CODE (*loc) == VALUE && (void *) *loc != dvp) + { + decl_or_value dv = (decl_or_value) dvp; + decl_or_value ldv = dv_from_value (*loc); + value_chain vc, dvc = NULL; + void **slot = htab_find_slot_with_hash (value_chains, ldv, + dv_htab_hash (ldv), NO_INSERT); + for (vc = (value_chain) *slot; vc->next; vc = vc->next) + if (dv_as_opaque (vc->next->dv) == dv_as_opaque (dv)) + { + dvc = vc->next; + gcc_assert (dvc->refcount > 0); + if (--dvc->refcount == 0) + { + vc->next = dvc->next; + pool_free (value_chain_pool, dvc); + if (vc->next == NULL && vc == (value_chain) *slot) + { + pool_free (value_chain_pool, vc); + htab_clear_slot (value_chains, slot); + } + } + return 0; + } + gcc_unreachable (); + } + return 0; +} + +/* If decl or value DVP refers to VALUEs from within LOC, remove backlinks + from those VALUEs to DVP. */ + +static void +remove_value_chains (decl_or_value dv, rtx loc) +{ + if (GET_CODE (loc) == VALUE) + { + remove_value_chain (&loc, dv_as_opaque (dv)); + return; + } + if (REG_P (loc)) + return; + if (MEM_P (loc)) + loc = XEXP (loc, 0); + for_each_rtx (&loc, remove_value_chain, dv_as_opaque (dv)); +} + +/* If CSELIB_VAL_PTR of value DV refer to VALUEs, remove backlinks from those + VALUEs to DV. */ + +static void +remove_cselib_value_chains (decl_or_value dv) +{ + struct elt_loc_list *l; + + for (l = CSELIB_VAL_PTR (dv_as_value (dv))->locs; l; l = l->next) + for_each_rtx (&l->loc, remove_value_chain, dv_as_opaque (dv)); +} + +#if ENABLE_CHECKING +/* Check the order of entries in one-part variables. */ + +static int +canonicalize_loc_order_check (void **slot, void *data ATTRIBUTE_UNUSED) +{ + variable var = (variable) *slot; + decl_or_value dv = var->dv; + location_chain node, next; + + if (!dv_onepart_p (dv)) + return 1; + + gcc_assert (var->n_var_parts == 1); + node = var->var_part[0].loc_chain; + gcc_assert (node); + + while ((next = node->next)) + { + gcc_assert (loc_cmp (node->loc, next->loc) < 0); + node = next; + } + + return 1; +} +#endif + +/* Mark with VALUE_RECURSED_INTO values that have neighbors that are + more likely to be chosen as canonical for an equivalence set. + Ensure less likely values can reach more likely neighbors, making + the connections bidirectional. 
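
add_value_chain and remove_value_chain above maintain refcounted backlinks: each VALUE keeps a chain of the dvs that mention it, repeat references only bump a count, and removal unlinks a node only when its count reaches zero. A reduced int-keyed version of the add/remove pair:

#include <assert.h>
#include <stdlib.h>

struct backlink { int user; int refcount; struct backlink *next; };

/* Add one reference from USER, creating the chain node on first
   sighting, as add_value_chain does per (VALUE, dv) pair.  */
static void
link_add (struct backlink **chain, int user)
{
  struct backlink *p;

  for (p = *chain; p; p = p->next)
    if (p->user == user)
      {
	p->refcount++;
	return;
      }

  p = (struct backlink *) malloc (sizeof *p);
  p->user = user;
  p->refcount = 1;
  p->next = *chain;
  *chain = p;
}

/* Drop one reference; unlink and free only at refcount zero, as
   remove_value_chain does.  */
static void
link_remove (struct backlink **chain, int user)
{
  for (; *chain; chain = &(*chain)->next)
    if ((*chain)->user == user)
      {
	struct backlink *p = *chain;
	assert (p->refcount > 0);
	if (--p->refcount == 0)
	  {
	    *chain = p->next;
	    free (p);
	  }
	return;
      }
  assert (0);	/* removing a reference that was never added */
}

int
main (void)
{
  struct backlink *chain = 0;
  link_add (&chain, 7);
  link_add (&chain, 7);		/* second reference, refcount 2 */
  link_remove (&chain, 7);	/* still linked */
  assert (chain && chain->refcount == 1);
  link_remove (&chain, 7);	/* now unlinked and freed */
  assert (!chain);
  return 0;
}
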
*/ + +static int +canonicalize_values_mark (void **slot, void *data) +{ + dataflow_set *set = (dataflow_set *)data; + variable var = (variable) *slot; + decl_or_value dv = var->dv; + rtx val; + location_chain node; + + if (!dv_is_value_p (dv)) + return 1; + + gcc_assert (var->n_var_parts == 1); + + val = dv_as_value (dv); + + for (node = var->var_part[0].loc_chain; node; node = node->next) + if (GET_CODE (node->loc) == VALUE) + { + if (canon_value_cmp (node->loc, val)) + VALUE_RECURSED_INTO (val) = true; + else + { + decl_or_value odv = dv_from_value (node->loc); + void **oslot = shared_hash_find_slot_noinsert (set->vars, odv); + + oslot = set_slot_part (set, val, oslot, odv, 0, + node->init, NULL_RTX); + + VALUE_RECURSED_INTO (node->loc) = true; + } + } + + return 1; +} + +/* Remove redundant entries from equivalence lists in onepart + variables, canonicalizing equivalence sets into star shapes. */ + +static int +canonicalize_values_star (void **slot, void *data) +{ + dataflow_set *set = (dataflow_set *)data; + variable var = (variable) *slot; + decl_or_value dv = var->dv; + location_chain node; + decl_or_value cdv; + rtx val, cval; + void **cslot; + bool has_value; + bool has_marks; + + if (!dv_onepart_p (dv)) + return 1; + + gcc_assert (var->n_var_parts == 1); + + if (dv_is_value_p (dv)) + { + cval = dv_as_value (dv); + if (!VALUE_RECURSED_INTO (cval)) + return 1; + VALUE_RECURSED_INTO (cval) = false; + } + else + cval = NULL_RTX; + + restart: + val = cval; + has_value = false; + has_marks = false; + + gcc_assert (var->n_var_parts == 1); + + for (node = var->var_part[0].loc_chain; node; node = node->next) + if (GET_CODE (node->loc) == VALUE) + { + has_value = true; + if (VALUE_RECURSED_INTO (node->loc)) + has_marks = true; + if (canon_value_cmp (node->loc, cval)) + cval = node->loc; + } + + if (!has_value) + return 1; + + if (cval == val) + { + if (!has_marks || dv_is_decl_p (dv)) + return 1; + + /* Keep it marked so that we revisit it, either after visiting a + child node, or after visiting a new parent that might be + found out. */ + VALUE_RECURSED_INTO (val) = true; + + for (node = var->var_part[0].loc_chain; node; node = node->next) + if (GET_CODE (node->loc) == VALUE + && VALUE_RECURSED_INTO (node->loc)) + { + cval = node->loc; + restart_with_cval: + VALUE_RECURSED_INTO (cval) = false; + dv = dv_from_value (cval); + slot = shared_hash_find_slot_noinsert (set->vars, dv); + if (!slot) + { + gcc_assert (dv_is_decl_p (var->dv)); + /* The canonical value was reset and dropped. + Remove it. */ + clobber_variable_part (set, NULL, var->dv, 0, NULL); + return 1; + } + var = (variable)*slot; + gcc_assert (dv_is_value_p (var->dv)); + if (var->n_var_parts == 0) + return 1; + gcc_assert (var->n_var_parts == 1); + goto restart; + } + + VALUE_RECURSED_INTO (val) = false; + + return 1; + } + + /* Push values to the canonical one. */ + cdv = dv_from_value (cval); + cslot = shared_hash_find_slot_noinsert (set->vars, cdv); + + for (node = var->var_part[0].loc_chain; node; node = node->next) + if (node->loc != cval) + { + cslot = set_slot_part (set, node->loc, cslot, cdv, 0, + node->init, NULL_RTX); + if (GET_CODE (node->loc) == VALUE) + { + decl_or_value ndv = dv_from_value (node->loc); + + set_variable_part (set, cval, ndv, 0, node->init, NULL_RTX, + NO_INSERT); + + if (canon_value_cmp (node->loc, val)) + { + /* If it could have been a local minimum, it's not any more, + since it's now neighbor to cval, so it may have to push + to it. 
Conversely, if it wouldn't have prevailed over + val, then whatever mark it has is fine: if it was to + push, it will now push to a more canonical node, but if + it wasn't, then it has already pushed any values it might + have to. */ + VALUE_RECURSED_INTO (node->loc) = true; + /* Make sure we visit node->loc by ensuring we cval is + visited too. */ + VALUE_RECURSED_INTO (cval) = true; + } + else if (!VALUE_RECURSED_INTO (node->loc)) + /* If we have no need to "recurse" into this node, it's + already "canonicalized", so drop the link to the old + parent. */ + clobber_variable_part (set, cval, ndv, 0, NULL); + } + else if (GET_CODE (node->loc) == REG) + { + attrs list = set->regs[REGNO (node->loc)], *listp; + + /* Change an existing attribute referring to dv so that it + refers to cdv, removing any duplicate this might + introduce, and checking that no previous duplicates + existed, all in a single pass. */ + + while (list) + { + if (list->offset == 0 + && (dv_as_opaque (list->dv) == dv_as_opaque (dv) + || dv_as_opaque (list->dv) == dv_as_opaque (cdv))) + break; + + list = list->next; + } + + gcc_assert (list); + if (dv_as_opaque (list->dv) == dv_as_opaque (dv)) + { + list->dv = cdv; + for (listp = &list->next; (list = *listp); listp = &list->next) + { + if (list->offset) + continue; + + if (dv_as_opaque (list->dv) == dv_as_opaque (cdv)) + { + *listp = list->next; + pool_free (attrs_pool, list); + list = *listp; + break; + } + + gcc_assert (dv_as_opaque (list->dv) != dv_as_opaque (dv)); + } + } + else if (dv_as_opaque (list->dv) == dv_as_opaque (cdv)) + { + for (listp = &list->next; (list = *listp); listp = &list->next) + { + if (list->offset) + continue; + + if (dv_as_opaque (list->dv) == dv_as_opaque (dv)) + { + *listp = list->next; + pool_free (attrs_pool, list); + list = *listp; + break; + } + + gcc_assert (dv_as_opaque (list->dv) != dv_as_opaque (cdv)); + } + } + else + gcc_unreachable (); + +#if ENABLE_CHECKING + while (list) + { + if (list->offset == 0 + && (dv_as_opaque (list->dv) == dv_as_opaque (dv) + || dv_as_opaque (list->dv) == dv_as_opaque (cdv))) + gcc_unreachable (); + + list = list->next; + } +#endif + } + } + + if (val) + cslot = set_slot_part (set, val, cslot, cdv, 0, + VAR_INIT_STATUS_INITIALIZED, NULL_RTX); + + slot = clobber_slot_part (set, cval, slot, 0, NULL); + + /* Variable may have been unshared. */ + var = (variable)*slot; + gcc_assert (var->n_var_parts && var->var_part[0].loc_chain->loc == cval + && var->var_part[0].loc_chain->next == NULL); + + if (VALUE_RECURSED_INTO (cval)) + goto restart_with_cval; + + return 1; +} + +/* Combine variable or value in *S1SLOT (in DSM->cur) with the + corresponding entry in DSM->src. Multi-part variables are combined + with variable_union, whereas onepart dvs are combined with + intersection. */ + +static int +variable_merge_over_cur (void **s1slot, void *data) +{ + struct dfset_merge *dsm = (struct dfset_merge *)data; + dataflow_set *dst = dsm->dst; + void **dstslot; + variable s1var = (variable) *s1slot; + variable s2var, dvar = NULL; + decl_or_value dv = s1var->dv; + bool onepart = dv_onepart_p (dv); + rtx val; + hashval_t dvhash; + location_chain node, *nodep; + + /* If the incoming onepart variable has an empty location list, then + the intersection will be just as empty. For other variables, + it's always union. 
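
canonicalize_values_mark and canonicalize_values_star above reshape each equivalence class into a star: the best member under canon_value_cmp becomes the hub and every other member keeps a single edge to it, so any lookup is at most one hop from canonical. A union-find-flavoured reduction of that invariant (indices stand in for VALUEs, with smaller index meaning more canonical):

#include <stdio.h>

#define N 6

/* PARENT[i] is the canonical element of i's class; the star shape
   keeps every member exactly one hop from the hub.  */
static int parent[N];

static int
find_hub (int i)
{
  while (parent[i] != i)
    i = parent[i];
  return i;
}

/* Merge the classes of A and B, electing the smaller index as hub
   (the analogue of canon_value_cmp), then re-point every member of
   the losing class so the star is restored.  */
static void
unite (int a, int b)
{
  int ha = find_hub (a), hb = find_hub (b);
  int hub = ha < hb ? ha : hb;
  int other = ha < hb ? hb : ha;
  int i;

  for (i = 0; i < N; i++)
    if (find_hub (i) == other)
      parent[i] = hub;
}

int
main (void)
{
  int i;

  for (i = 0; i < N; i++)
    parent[i] = i;
  unite (4, 2);
  unite (5, 4);
  for (i = 0; i < N; i++)
    printf ("%d->%d ", i, parent[i]);
  printf ("\n");	/* 4 and 5 both point straight at 2 */
  return 0;
}
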
*/ + gcc_assert (s1var->n_var_parts); + gcc_assert (s1var->var_part[0].loc_chain); + + if (!onepart) + return variable_union (s1slot, dst); + + gcc_assert (s1var->n_var_parts == 1); + gcc_assert (s1var->var_part[0].offset == 0); + + dvhash = dv_htab_hash (dv); + if (dv_is_value_p (dv)) + val = dv_as_value (dv); + else + val = NULL; + + s2var = shared_hash_find_1 (dsm->src->vars, dv, dvhash); + if (!s2var) + { + dst_can_be_shared = false; + return 1; + } + + dsm->src_onepart_cnt--; + gcc_assert (s2var->var_part[0].loc_chain); + gcc_assert (s2var->n_var_parts == 1); + gcc_assert (s2var->var_part[0].offset == 0); + + dstslot = shared_hash_find_slot_noinsert_1 (dst->vars, dv, dvhash); + if (dstslot) + { + dvar = (variable)*dstslot; + gcc_assert (dvar->refcount == 1); + gcc_assert (dvar->n_var_parts == 1); + gcc_assert (dvar->var_part[0].offset == 0); + nodep = &dvar->var_part[0].loc_chain; + } + else + { + nodep = &node; + node = NULL; + } + + if (!dstslot && !onepart_variable_different_p (s1var, s2var)) + { + dstslot = shared_hash_find_slot_unshare_1 (&dst->vars, dv, + dvhash, INSERT); + *dstslot = dvar = s2var; + dvar->refcount++; + } + else + { + dst_can_be_shared = false; + + intersect_loc_chains (val, nodep, dsm, + s1var->var_part[0].loc_chain, s2var); + + if (!dstslot) + { + if (node) + { + dvar = (variable) pool_alloc (dv_pool (dv)); + dvar->dv = dv; + dvar->refcount = 1; + dvar->n_var_parts = 1; + dvar->var_part[0].offset = 0; + dvar->var_part[0].loc_chain = node; + dvar->var_part[0].cur_loc = node->loc; + + dstslot + = shared_hash_find_slot_unshare_1 (&dst->vars, dv, dvhash, + INSERT); + gcc_assert (!*dstslot); + *dstslot = dvar; + } + else + return 1; + } + } + + nodep = &dvar->var_part[0].loc_chain; + while ((node = *nodep)) + { + location_chain *nextp = &node->next; + + if (GET_CODE (node->loc) == REG) + { + attrs list; + + for (list = dst->regs[REGNO (node->loc)]; list; list = list->next) + if (GET_MODE (node->loc) == GET_MODE (list->loc) + && dv_is_value_p (list->dv)) + break; + + if (!list) + attrs_list_insert (&dst->regs[REGNO (node->loc)], + dv, 0, node->loc); + /* If this value became canonical for another value that had + this register, we want to leave it alone. */ + else if (dv_as_value (list->dv) != val) + { + dstslot = set_slot_part (dst, dv_as_value (list->dv), + dstslot, dv, 0, + node->init, NULL_RTX); + dstslot = delete_slot_part (dst, node->loc, dstslot, 0); + + /* Since nextp points into the removed node, we can't + use it. The pointer to the next node moved to nodep. + However, if the variable we're walking is unshared + during our walk, we'll keep walking the location list + of the previously-shared variable, in which case the + node won't have been removed, and we'll want to skip + it. That's why we test *nodep here. */ + if (*nodep != node) + nextp = nodep; + } + } + else + /* Canonicalization puts registers first, so we don't have to + walk it all. */ + break; + nodep = nextp; + } + + if (dvar != (variable)*dstslot) + dvar = (variable)*dstslot; + nodep = &dvar->var_part[0].loc_chain; + + if (val) + { + /* Mark all referenced nodes for canonicalization, and make sure + we have mutual equivalence links. 
*/ + VALUE_RECURSED_INTO (val) = true; + for (node = *nodep; node; node = node->next) + if (GET_CODE (node->loc) == VALUE) + { + VALUE_RECURSED_INTO (node->loc) = true; + set_variable_part (dst, val, dv_from_value (node->loc), 0, + node->init, NULL, INSERT); + } + + dstslot = shared_hash_find_slot_noinsert_1 (dst->vars, dv, dvhash); + gcc_assert (*dstslot == dvar); + canonicalize_values_star (dstslot, dst); +#ifdef ENABLE_CHECKING + gcc_assert (dstslot + == shared_hash_find_slot_noinsert_1 (dst->vars, dv, dvhash)); +#endif + dvar = (variable)*dstslot; + } + else + { + bool has_value = false, has_other = false; + + /* If we have one value and anything else, we're going to + canonicalize this, so make sure all values have an entry in + the table and are marked for canonicalization. */ + for (node = *nodep; node; node = node->next) + { + if (GET_CODE (node->loc) == VALUE) + { + /* If this was marked during register canonicalization, + we know we have to canonicalize values. */ + if (has_value) + has_other = true; + has_value = true; + if (has_other) + break; + } + else + { + has_other = true; + if (has_value) + break; + } + } + + if (has_value && has_other) + { + for (node = *nodep; node; node = node->next) + { + if (GET_CODE (node->loc) == VALUE) + { + decl_or_value dv = dv_from_value (node->loc); + void **slot = NULL; + + if (shared_hash_shared (dst->vars)) + slot = shared_hash_find_slot_noinsert (dst->vars, dv); + if (!slot) + slot = shared_hash_find_slot_unshare (&dst->vars, dv, + INSERT); + if (!*slot) + { + variable var = (variable) pool_alloc (dv_pool (dv)); + var->dv = dv; + var->refcount = 1; + var->n_var_parts = 1; + var->var_part[0].offset = 0; + var->var_part[0].loc_chain = NULL; + var->var_part[0].cur_loc = NULL; + *slot = var; + } + + VALUE_RECURSED_INTO (node->loc) = true; + } + } + + dstslot = shared_hash_find_slot_noinsert_1 (dst->vars, dv, dvhash); + gcc_assert (*dstslot == dvar); + canonicalize_values_star (dstslot, dst); +#ifdef ENABLE_CHECKING + gcc_assert (dstslot + == shared_hash_find_slot_noinsert_1 (dst->vars, + dv, dvhash)); +#endif + dvar = (variable)*dstslot; + } + } + + if (!onepart_variable_different_p (dvar, s2var)) + { + variable_htab_free (dvar); + *dstslot = dvar = s2var; + dvar->refcount++; + } + else if (s2var != s1var && !onepart_variable_different_p (dvar, s1var)) + { + variable_htab_free (dvar); + *dstslot = dvar = s1var; + dvar->refcount++; + dst_can_be_shared = false; + } + else + { + if (dvar->refcount == 1) + dvar->var_part[0].cur_loc = dvar->var_part[0].loc_chain->loc; + dst_can_be_shared = false; + } + + return 1; +} + +/* Combine variable in *S1SLOT (in DSM->src) with the corresponding + entry in DSM->src. Only multi-part variables are combined, using + variable_union. onepart dvs were already combined with + intersection in variable_merge_over_cur(). */ + +static int +variable_merge_over_src (void **s2slot, void *data) +{ + struct dfset_merge *dsm = (struct dfset_merge *)data; + dataflow_set *dst = dsm->dst; + variable s2var = (variable) *s2slot; + decl_or_value dv = s2var->dv; + bool onepart = dv_onepart_p (dv); + + if (!onepart) + { + void **dstp = shared_hash_find_slot (dst->vars, dv); + *dstp = s2var; + s2var->refcount++; + return variable_canonicalize (dstp, dst); + } + + dsm->src_onepart_cnt++; + return 1; +} + +/* Combine dataflow set information from SRC into DST, using PDST + to carry over information across passes. 
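
The merge machinery above applies two confluence rules: multi-part user variables take the union of the locations on the incoming edges (variable_union), while one-part dvs take the intersection (intersect_loc_chains), since a value location is only usable if it holds on every incoming path. The rule in miniature, with location sets shrunk to bitmasks:

#include <stdio.h>

/* Locations shrunk to bitmasks: one-part entries intersect at a
   confluence point, multi-part entries union, echoing
   intersect_loc_chains vs. variable_union.  */
static unsigned int
merge_locs (unsigned int a, unsigned int b, int onepart)
{
  return onepart ? (a & b) : (a | b);
}

int
main (void)
{
  unsigned int e1 = 0x3, e2 = 0x6;	/* two incoming edges */
  printf ("one-part:   %#x\n", merge_locs (e1, e2, 1));	/* 0x2 */
  printf ("multi-part: %#x\n", merge_locs (e1, e2, 0));	/* 0x7 */
  return 0;
}
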
*/ + +static void +dataflow_set_merge (dataflow_set *dst, dataflow_set *src) +{ + dataflow_set src2 = *dst; + struct dfset_merge dsm; + int i; + size_t src_elems, dst_elems; + + src_elems = htab_elements (shared_hash_htab (src->vars)); + dst_elems = htab_elements (shared_hash_htab (src2.vars)); + dataflow_set_init (dst); + dst->stack_adjust = src2.stack_adjust; + shared_hash_destroy (dst->vars); + dst->vars = (shared_hash) pool_alloc (shared_hash_pool); + dst->vars->refcount = 1; + dst->vars->htab + = htab_create (MAX (src_elems, dst_elems), variable_htab_hash, + variable_htab_eq, variable_htab_free); + + for (i = 0; i < FIRST_PSEUDO_REGISTER; i++) + attrs_list_mpdv_union (&dst->regs[i], src->regs[i], src2.regs[i]); + + dsm.dst = dst; + dsm.src = &src2; + dsm.cur = src; + dsm.src_onepart_cnt = 0; + + htab_traverse (shared_hash_htab (dsm.src->vars), variable_merge_over_src, + &dsm); + htab_traverse (shared_hash_htab (dsm.cur->vars), variable_merge_over_cur, + &dsm); + + if (dsm.src_onepart_cnt) + dst_can_be_shared = false; + + dataflow_set_destroy (&src2); +} + +/* Mark register equivalences. */ + +static void +dataflow_set_equiv_regs (dataflow_set *set) +{ + int i; + attrs list, *listp; + + for (i = 0; i < FIRST_PSEUDO_REGISTER; i++) + { + rtx canon[NUM_MACHINE_MODES]; + + memset (canon, 0, sizeof (canon)); + + for (list = set->regs[i]; list; list = list->next) + if (list->offset == 0 && dv_is_value_p (list->dv)) + { + rtx val = dv_as_value (list->dv); + rtx *cvalp = &canon[(int)GET_MODE (val)]; + rtx cval = *cvalp; + + if (canon_value_cmp (val, cval)) + *cvalp = val; + } + + for (list = set->regs[i]; list; list = list->next) + if (list->offset == 0 && dv_onepart_p (list->dv)) + { + rtx cval = canon[(int)GET_MODE (list->loc)]; + + if (!cval) + continue; + + if (dv_is_value_p (list->dv)) + { + rtx val = dv_as_value (list->dv); + + if (val == cval) + continue; + + VALUE_RECURSED_INTO (val) = true; + set_variable_part (set, val, dv_from_value (cval), 0, + VAR_INIT_STATUS_INITIALIZED, + NULL, NO_INSERT); + } + + VALUE_RECURSED_INTO (cval) = true; + set_variable_part (set, cval, list->dv, 0, + VAR_INIT_STATUS_INITIALIZED, NULL, NO_INSERT); + } + + for (listp = &set->regs[i]; (list = *listp); + listp = list ? &list->next : listp) + if (list->offset == 0 && dv_onepart_p (list->dv)) + { + rtx cval = canon[(int)GET_MODE (list->loc)]; + void **slot; + + if (!cval) + continue; + + if (dv_is_value_p (list->dv)) + { + rtx val = dv_as_value (list->dv); + if (!VALUE_RECURSED_INTO (val)) + continue; + } + + slot = shared_hash_find_slot_noinsert (set->vars, list->dv); + canonicalize_values_star (slot, set); + if (*listp != list) + list = NULL; + } + } +} + +/* Remove any redundant values in the location list of VAR, which must + be unshared and 1-part. */ + +static void +remove_duplicate_values (variable var) +{ + location_chain node, *nodep; + + gcc_assert (dv_onepart_p (var->dv)); + gcc_assert (var->n_var_parts == 1); + gcc_assert (var->refcount == 1); + + for (nodep = &var->var_part[0].loc_chain; (node = *nodep); ) + { + if (GET_CODE (node->loc) == VALUE) + { + if (VALUE_RECURSED_INTO (node->loc)) + { + /* Remove duplicate value node. 
*/ + *nodep = node->next; + pool_free (loc_chain_pool, node); + continue; + } + else + VALUE_RECURSED_INTO (node->loc) = true; + } + nodep = &node->next; + } + + for (node = var->var_part[0].loc_chain; node; node = node->next) + if (GET_CODE (node->loc) == VALUE) + { + gcc_assert (VALUE_RECURSED_INTO (node->loc)); + VALUE_RECURSED_INTO (node->loc) = false; + } +} + + +/* Hash table iteration argument passed to variable_post_merge. */ +struct dfset_post_merge +{ + /* The new input set for the current block. */ + dataflow_set *set; + /* Pointer to the permanent input set for the current block, or + NULL. */ + dataflow_set **permp; +}; + +/* Create values for incoming expressions associated with one-part + variables that don't have value numbers for them. */ + +static int +variable_post_merge_new_vals (void **slot, void *info) +{ + struct dfset_post_merge *dfpm = (struct dfset_post_merge *)info; + dataflow_set *set = dfpm->set; + variable var = (variable)*slot; + location_chain node; + + if (!dv_onepart_p (var->dv) || !var->n_var_parts) + return 1; + + gcc_assert (var->n_var_parts == 1); + + if (dv_is_decl_p (var->dv)) + { + bool check_dupes = false; + + restart: + for (node = var->var_part[0].loc_chain; node; node = node->next) + { + if (GET_CODE (node->loc) == VALUE) + gcc_assert (!VALUE_RECURSED_INTO (node->loc)); + else if (GET_CODE (node->loc) == REG) + { + attrs att, *attp, *curp = NULL; + + if (var->refcount != 1) + { + slot = unshare_variable (set, slot, var, + VAR_INIT_STATUS_INITIALIZED); + var = (variable)*slot; + goto restart; + } + + for (attp = &set->regs[REGNO (node->loc)]; (att = *attp); + attp = &att->next) + if (att->offset == 0 + && GET_MODE (att->loc) == GET_MODE (node->loc)) + { + if (dv_is_value_p (att->dv)) + { + rtx cval = dv_as_value (att->dv); + node->loc = cval; + check_dupes = true; + break; + } + else if (dv_as_opaque (att->dv) == dv_as_opaque (var->dv)) + curp = attp; + } + + if (!curp) + { + curp = attp; + while (*curp) + if ((*curp)->offset == 0 + && GET_MODE ((*curp)->loc) == GET_MODE (node->loc) + && dv_as_opaque ((*curp)->dv) == dv_as_opaque (var->dv)) + break; + else + curp = &(*curp)->next; + gcc_assert (*curp); + } + + if (!att) + { + decl_or_value cdv; + rtx cval; + + if (!*dfpm->permp) + { + *dfpm->permp = XNEW (dataflow_set); + dataflow_set_init (*dfpm->permp); + } + + for (att = (*dfpm->permp)->regs[REGNO (node->loc)]; + att; att = att->next) + if (GET_MODE (att->loc) == GET_MODE (node->loc)) + { + gcc_assert (att->offset == 0); + gcc_assert (dv_is_value_p (att->dv)); + val_reset (set, att->dv); + break; + } + + if (att) + { + cdv = att->dv; + cval = dv_as_value (cdv); + } + else + { + /* Create a unique value to hold this register, + that ought to be found and reused in + subsequent rounds. */ + cselib_val *v; + gcc_assert (!cselib_lookup (node->loc, + GET_MODE (node->loc), 0)); + v = cselib_lookup (node->loc, GET_MODE (node->loc), 1); + cselib_preserve_value (v); + cselib_invalidate_rtx (node->loc); + cval = v->val_rtx; + cdv = dv_from_value (cval); + if (dump_file) + fprintf (dump_file, + "Created new value %i for reg %i\n", + v->value, REGNO (node->loc)); + } + + var_reg_decl_set (*dfpm->permp, node->loc, + VAR_INIT_STATUS_INITIALIZED, + cdv, 0, NULL, INSERT); + + node->loc = cval; + check_dupes = true; + } + + /* Remove attribute referring to the decl, which now + uses the value for the register, already existing or + to be added when we bring perm in. 
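+ From here on, the register's attrs list carries only the
+ VALUE-based entry for this decl's location.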
*/ + att = *curp; + *curp = att->next; + pool_free (attrs_pool, att); + } + } + + if (check_dupes) + remove_duplicate_values (var); + } + + return 1; +} + +/* Reset values in the permanent set that are not associated with the + chosen expression. */ + +static int +variable_post_merge_perm_vals (void **pslot, void *info) +{ + struct dfset_post_merge *dfpm = (struct dfset_post_merge *)info; + dataflow_set *set = dfpm->set; + variable pvar = (variable)*pslot, var; + location_chain pnode; + decl_or_value dv; + attrs att; + + gcc_assert (dv_is_value_p (pvar->dv)); + gcc_assert (pvar->n_var_parts == 1); + pnode = pvar->var_part[0].loc_chain; + gcc_assert (pnode); + gcc_assert (!pnode->next); + gcc_assert (REG_P (pnode->loc)); + + dv = pvar->dv; + + var = shared_hash_find (set->vars, dv); + if (var) + { + if (find_loc_in_1pdv (pnode->loc, var, shared_hash_htab (set->vars))) + return 1; + val_reset (set, dv); + } + + for (att = set->regs[REGNO (pnode->loc)]; att; att = att->next) + if (att->offset == 0 + && GET_MODE (att->loc) == GET_MODE (pnode->loc) + && dv_is_value_p (att->dv)) + break; + + /* If there is a value associated with this register already, create + an equivalence. */ + if (att && dv_as_value (att->dv) != dv_as_value (dv)) + { + rtx cval = dv_as_value (att->dv); + set_variable_part (set, cval, dv, 0, pnode->init, NULL, INSERT); + set_variable_part (set, dv_as_value (dv), att->dv, 0, pnode->init, + NULL, INSERT); + } + else if (!att) + { + attrs_list_insert (&set->regs[REGNO (pnode->loc)], + dv, 0, pnode->loc); + variable_union (pslot, set); + } + + return 1; +} + +/* Just checking stuff and registering register attributes for + now. */ + +static void +dataflow_post_merge_adjust (dataflow_set *set, dataflow_set **permp) +{ + struct dfset_post_merge dfpm; + + dfpm.set = set; + dfpm.permp = permp; + + htab_traverse (shared_hash_htab (set->vars), variable_post_merge_new_vals, + &dfpm); + if (*permp) + htab_traverse (shared_hash_htab ((*permp)->vars), + variable_post_merge_perm_vals, &dfpm); + htab_traverse (shared_hash_htab (set->vars), canonicalize_values_star, set); +} + +/* Return a node whose loc is a MEM that refers to EXPR in the + location list of a one-part variable or value VAR, or in that of + any values recursively mentioned in the location lists. */ + +static location_chain +find_mem_expr_in_1pdv (tree expr, rtx val, htab_t vars) +{ + location_chain node; + decl_or_value dv; + variable var; + location_chain where = NULL; + + if (!val) + return NULL; + + gcc_assert (GET_CODE (val) == VALUE); + + gcc_assert (!VALUE_RECURSED_INTO (val)); + + dv = dv_from_value (val); + var = (variable) htab_find_with_hash (vars, dv, dv_htab_hash (dv)); + + if (!var) + return NULL; + + gcc_assert (dv_onepart_p (var->dv)); + + if (!var->n_var_parts) + return NULL; + + gcc_assert (var->var_part[0].offset == 0); + + VALUE_RECURSED_INTO (val) = true; + + for (node = var->var_part[0].loc_chain; node; node = node->next) + if (MEM_P (node->loc) && MEM_EXPR (node->loc) == expr + && MEM_OFFSET (node->loc) == 0) + { + where = node; + break; + } + else if (GET_CODE (node->loc) == VALUE + && !VALUE_RECURSED_INTO (node->loc) + && (where = find_mem_expr_in_1pdv (expr, node->loc, vars))) + break; + + VALUE_RECURSED_INTO (val) = false; + + return where; +} + +/* Remove all MEMs from the location list of a hash table entry for a + one-part variable, except those whose MEM attributes map back to + the variable itself, directly or within a VALUE. + + ??? 
We could also preserve MEMs that reference stack slots that are + annotated as not addressable. This is arguably even more reliable + than the current heuristic. */ + +static int +dataflow_set_preserve_mem_locs (void **slot, void *data) +{ + dataflow_set *set = (dataflow_set *) data; + variable var = (variable) *slot; + + if (dv_is_decl_p (var->dv) && dv_onepart_p (var->dv)) + { + tree decl = dv_as_decl (var->dv); + location_chain loc, *locp; + + if (!var->n_var_parts) + return 1; + + gcc_assert (var->n_var_parts == 1); + + if (var->refcount > 1 || shared_hash_shared (set->vars)) + { + for (loc = var->var_part[0].loc_chain; loc; loc = loc->next) + { + /* We want to remove a MEM that doesn't refer to DECL. */ + if (GET_CODE (loc->loc) == MEM + && (MEM_EXPR (loc->loc) != decl + || MEM_OFFSET (loc->loc))) + break; + /* We want to move here a MEM that does refer to DECL. */ + else if (GET_CODE (loc->loc) == VALUE + && find_mem_expr_in_1pdv (decl, loc->loc, + shared_hash_htab (set->vars))) + break; + } + + if (!loc) + return 1; + + slot = unshare_variable (set, slot, var, VAR_INIT_STATUS_UNKNOWN); + var = (variable)*slot; + gcc_assert (var->n_var_parts == 1); + } + + for (locp = &var->var_part[0].loc_chain, loc = *locp; + loc; loc = *locp) + { + rtx old_loc = loc->loc; + if (GET_CODE (old_loc) == VALUE) + { + location_chain mem_node + = find_mem_expr_in_1pdv (decl, loc->loc, + shared_hash_htab (set->vars)); + + /* ??? This picks up only one out of multiple MEMs that + refer to the same variable. Do we ever need to be + concerned about dealing with more than one, or, given + that they should all map to the same variable + location, their addresses will have been merged and + they will be regarded as equivalent? */ + if (mem_node) + { + loc->loc = mem_node->loc; + loc->set_src = mem_node->set_src; + loc->init = MIN (loc->init, mem_node->init); + } + } + + if (GET_CODE (loc->loc) != MEM + || (MEM_EXPR (loc->loc) == decl + && MEM_OFFSET (loc->loc) == 0)) + { + if (old_loc != loc->loc && emit_notes) + { + add_value_chains (var->dv, loc->loc); + remove_value_chains (var->dv, old_loc); + } + locp = &loc->next; + continue; + } + + if (emit_notes) + remove_value_chains (var->dv, old_loc); + *locp = loc->next; + pool_free (loc_chain_pool, loc); + } + + if (!var->var_part[0].loc_chain) + { + var->n_var_parts--; + if (emit_notes && dv_is_value_p (var->dv)) + remove_cselib_value_chains (var->dv); + variable_was_changed (var, set); + } + } + + return 1; +} + +/* Remove all MEMs from the location list of a hash table entry for a + value. 
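+ Unlike dataflow_set_preserve_mem_locs above, nothing is spared:
+ a call may clobber any addressable memory, so a VALUE must not
+ remain bound to a MEM location across it.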
*/ + +static int +dataflow_set_remove_mem_locs (void **slot, void *data) +{ + dataflow_set *set = (dataflow_set *) data; + variable var = (variable) *slot; + + if (dv_is_value_p (var->dv)) + { + location_chain loc, *locp; + bool changed = false; + + gcc_assert (var->n_var_parts == 1); + + if (var->refcount > 1 || shared_hash_shared (set->vars)) + { + for (loc = var->var_part[0].loc_chain; loc; loc = loc->next) + if (GET_CODE (loc->loc) == MEM) + break; + + if (!loc) + return 1; + + slot = unshare_variable (set, slot, var, VAR_INIT_STATUS_UNKNOWN); + var = (variable)*slot; + gcc_assert (var->n_var_parts == 1); + } + + for (locp = &var->var_part[0].loc_chain, loc = *locp; + loc; loc = *locp) + { + if (GET_CODE (loc->loc) != MEM) + { + locp = &loc->next; + continue; + } + + if (emit_notes) + remove_value_chains (var->dv, loc->loc); + *locp = loc->next; + /* If we have deleted the location which was last emitted + we have to emit new location so add the variable to set + of changed variables. */ + if (var->var_part[0].cur_loc + && rtx_equal_p (loc->loc, var->var_part[0].cur_loc)) + changed = true; + pool_free (loc_chain_pool, loc); + } + + if (!var->var_part[0].loc_chain) + { + var->n_var_parts--; + if (emit_notes && dv_is_value_p (var->dv)) + remove_cselib_value_chains (var->dv); + gcc_assert (changed); + } + if (changed) + { + if (var->n_var_parts && var->var_part[0].loc_chain) + var->var_part[0].cur_loc = var->var_part[0].loc_chain->loc; + variable_was_changed (var, set); + } + } + + return 1; +} + +/* Remove all variable-location information about call-clobbered + registers, as well as associations between MEMs and VALUEs. */ + +static void +dataflow_set_clear_at_call (dataflow_set *set) +{ + int r; + + for (r = 0; r < FIRST_PSEUDO_REGISTER; r++) + if (TEST_HARD_REG_BIT (call_used_reg_set, r)) + var_regno_delete (set, r); + + if (MAY_HAVE_DEBUG_INSNS) + { + set->traversed_vars = set->vars; + htab_traverse (shared_hash_htab (set->vars), + dataflow_set_preserve_mem_locs, set); + set->traversed_vars = set->vars; + htab_traverse (shared_hash_htab (set->vars), dataflow_set_remove_mem_locs, + set); + set->traversed_vars = NULL; + } +} + /* Flag whether two dataflow sets being compared contain different data. */ static bool dataflow_set_different_value; @@ -1650,6 +3907,37 @@ variable_part_different_p (variable_part *vp1, variable_part *vp2) return false; } +/* Return true if one-part variables VAR1 and VAR2 are different. + They must be in canonical order. */ + +static bool +onepart_variable_different_p (variable var1, variable var2) +{ + location_chain lc1, lc2; + + if (var1 == var2) + return false; + + gcc_assert (var1->n_var_parts == 1); + gcc_assert (var2->n_var_parts == 1); + + lc1 = var1->var_part[0].loc_chain; + lc2 = var2->var_part[0].loc_chain; + + gcc_assert (lc1); + gcc_assert (lc2); + + while (lc1 && lc2) + { + if (loc_cmp (lc1->loc, lc2->loc)) + return true; + lc1 = lc1->next; + lc2 = lc2->next; + } + + return lc1 != lc2; +} + /* Return true if variables VAR1 and VAR2 are different. If COMPARE_CURRENT_LOCATION is true compare also the cur_loc of each variable part. */ @@ -1680,6 +3968,13 @@ variable_different_p (variable var1, variable var2, var2->var_part[i].cur_loc))) return true; } + /* One-part values have locations in a canonical order. 
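+ Since both chains are sorted the same way, the single linear
+ walk in onepart_variable_different_p above suffices to detect
+ any difference.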
*/ + if (i == 0 && var1->var_part[i].offset == 0 && dv_onepart_p (var1->dv)) + { + gcc_assert (var1->n_var_parts == 1); + gcc_assert (dv_as_opaque (var1->dv) == dv_as_opaque (var2->dv)); + return onepart_variable_different_p (var1, var2); + } if (variable_part_different_p (&var1->var_part[i], &var2->var_part[i])) return true; if (variable_part_different_p (&var2->var_part[i], &var1->var_part[i])) @@ -1697,13 +3992,19 @@ dataflow_set_different_1 (void **slot, void *data) htab_t htab = (htab_t) data; variable var1, var2; - var1 = *(variable *) slot; - var2 = (variable) htab_find_with_hash (htab, var1->decl, - VARIABLE_HASH_VAL (var1->decl)); + var1 = (variable) *slot; + var2 = (variable) htab_find_with_hash (htab, var1->dv, + dv_htab_hash (var1->dv)); if (!var2) { dataflow_set_different_value = true; + if (dump_file && (dump_flags & TDF_DETAILS)) + { + fprintf (dump_file, "dataflow difference found: removal of:\n"); + dump_variable (var1); + } + /* Stop traversing the hash table. */ return 0; } @@ -1712,6 +4013,13 @@ dataflow_set_different_1 (void **slot, void *data) { dataflow_set_different_value = true; + if (dump_file && (dump_flags & TDF_DETAILS)) + { + fprintf (dump_file, "dataflow difference found: old and new follow:\n"); + dump_variable (var1); + dump_variable (var2); + } + /* Stop traversing the hash table. */ return 0; } @@ -1795,7 +4103,7 @@ contains_symbol_ref (rtx x) /* Shall EXPR be tracked? */ static bool -track_expr_p (tree expr) +track_expr_p (tree expr, bool need_rtl) { rtx decl_rtl; tree realdecl; @@ -1810,7 +4118,7 @@ track_expr_p (tree expr) /* ... and a RTL assigned to it. */ decl_rtl = DECL_RTL_IF_SET (expr); - if (!decl_rtl) + if (!decl_rtl && need_rtl) return 0; /* If this expression is really a debug alias of some other declaration, we @@ -1844,13 +4152,13 @@ track_expr_p (tree expr) extern char **_dl_argv_internal __attribute__ ((alias ("_dl_argv"))); char **_dl_argv; */ - if (MEM_P (decl_rtl) + if (decl_rtl && MEM_P (decl_rtl) && contains_symbol_ref (XEXP (decl_rtl, 0))) return 0; /* If RTX is a memory it should not be very large (because it would be an array or struct). */ - if (MEM_P (decl_rtl)) + if (decl_rtl && MEM_P (decl_rtl)) { /* Do not track structures and arrays. */ if (GET_MODE (decl_rtl) == BLKmode @@ -1861,6 +4169,8 @@ track_expr_p (tree expr) return 0; } + DECL_CHANGED (expr) = 0; + DECL_CHANGED (realdecl) = 0; return 1; } @@ -1914,7 +4224,7 @@ track_loc_p (rtx loc, tree expr, HOST_WIDE_INT offset, bool store_reg_p, { enum machine_mode mode; - if (expr == NULL || !track_expr_p (expr)) + if (expr == NULL || !track_expr_p (expr, true)) return false; /* If REG was a paradoxical subreg, its REG_ATTRS will describe the @@ -1985,82 +4295,447 @@ var_lowpart (enum machine_mode mode, rtx loc) return gen_rtx_REG_offset (loc, mode, regno, offset); } -/* Count uses (register and memory references) LOC which will be tracked. - INSN is instruction which the LOC is part of. */ +/* Carry information about uses and stores while walking rtx. */ -static int -count_uses (rtx *loc, void *insn) +struct count_use_info +{ + /* The insn where the RTX is. */ + rtx insn; + + /* The basic block where insn is. */ + basic_block bb; + + /* The array of n_sets sets in the insn, as determined by cselib. */ + struct cselib_set *sets; + int n_sets; + + /* True if we're counting stores, false otherwise. */ + bool store_p; +}; + +/* Find a VALUE corresponding to X. 
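+
+ A hypothetical use when building micro operations:
+
+ cselib_val *val = find_use_val (*loc, GET_MODE (*loc), cui);
+ if (val)
+ ... track val->val_rtx in place of *loc ...
+
+ Stores must be looked up in CUI->sets, plain uses directly in
+ cselib; see the comment in the body below.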
*/ + +static inline cselib_val * +find_use_val (rtx x, enum machine_mode mode, struct count_use_info *cui) +{ + int i; + + if (cui->sets) + { + /* This is called after uses are set up and before stores are + processed bycselib, so it's safe to look up srcs, but not + dsts. So we look up expressions that appear in srcs or in + dest expressions, but we search the sets array for dests of + stores. */ + if (cui->store_p) + { + for (i = 0; i < cui->n_sets; i++) + if (cui->sets[i].dest == x) + return cui->sets[i].src_elt; + } + else + return cselib_lookup (x, mode, 0); + } + + return NULL; +} + +/* Replace all registers and addresses in an expression with VALUE + expressions that map back to them, unless the expression is a + register. If no mapping is or can be performed, returns NULL. */ + +static rtx +replace_expr_with_values (rtx loc) +{ + if (REG_P (loc)) + return NULL; + else if (MEM_P (loc)) + { + cselib_val *addr = cselib_lookup (XEXP (loc, 0), Pmode, 0); + if (addr) + return replace_equiv_address_nv (loc, addr->val_rtx); + else + return NULL; + } + else + return cselib_subst_to_values (loc); +} + +/* Determine what kind of micro operation to choose for a USE. Return + MO_CLOBBER if no micro operation is to be generated. */ + +static enum micro_operation_type +use_type (rtx *loc, struct count_use_info *cui, enum machine_mode *modep) { - basic_block bb = BLOCK_FOR_INSN ((rtx) insn); + tree expr; + cselib_val *val; + + if (cui && cui->sets) + { + if (GET_CODE (*loc) == VAR_LOCATION) + { + if (track_expr_p (PAT_VAR_LOCATION_DECL (*loc), false)) + { + rtx ploc = PAT_VAR_LOCATION_LOC (*loc); + cselib_val *val = cselib_lookup (ploc, GET_MODE (*loc), 1); + + /* ??? flag_float_store and volatile mems are never + given values, but we could in theory use them for + locations. */ + gcc_assert (val || 1); + return MO_VAL_LOC; + } + else + return MO_CLOBBER; + } + + if ((REG_P (*loc) || MEM_P (*loc)) + && (val = find_use_val (*loc, GET_MODE (*loc), cui))) + { + if (modep) + *modep = GET_MODE (*loc); + if (cui->store_p) + { + if (REG_P (*loc) + || cselib_lookup (XEXP (*loc, 0), GET_MODE (*loc), 0)) + return MO_VAL_SET; + } + else if (!cselib_preserved_value_p (val)) + return MO_VAL_USE; + } + } if (REG_P (*loc)) { gcc_assert (REGNO (*loc) < FIRST_PSEUDO_REGISTER); - VTI (bb)->n_mos++; + + expr = REG_EXPR (*loc); + + if (!expr) + return MO_USE_NO_VAR; + else if (target_for_debug_bind (var_debug_decl (expr))) + return MO_CLOBBER; + else if (track_loc_p (*loc, expr, REG_OFFSET (*loc), + false, modep, NULL)) + return MO_USE; + else + return MO_USE_NO_VAR; } - else if (MEM_P (*loc) - && track_loc_p (*loc, MEM_EXPR (*loc), INT_MEM_OFFSET (*loc), - false, NULL, NULL)) + else if (MEM_P (*loc)) { - VTI (bb)->n_mos++; + expr = MEM_EXPR (*loc); + + if (!expr) + return MO_CLOBBER; + else if (target_for_debug_bind (var_debug_decl (expr))) + return MO_CLOBBER; + else if (track_loc_p (*loc, expr, INT_MEM_OFFSET (*loc), + false, modep, NULL)) + return MO_USE; + else + return MO_CLOBBER; + } + + return MO_CLOBBER; +} + +/* Log to OUT information about micro-operation MOPT involving X in + INSN of BB. */ + +static inline void +log_op_type (rtx x, basic_block bb, rtx insn, + enum micro_operation_type mopt, FILE *out) +{ + fprintf (out, "bb %i op %i insn %i %s ", + bb->index, VTI (bb)->n_mos - 1, + INSN_UID (insn), micro_operation_type_name[mopt]); + print_inline_rtx (out, x, 2); + fputc ('\n', out); +} + +/* Count uses (register and memory references) LOC which will be tracked. 
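+ A micro operation is counted for each use that use_type classifies
+ as anything but MO_CLOBBER, and VALUEs worth keeping across the
+ insn are preserved in cselib. CUIP points to a count_use_info whose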
+ INSN is instruction which the LOC is part of. */ + +static int +count_uses (rtx *loc, void *cuip) +{ + struct count_use_info *cui = (struct count_use_info *) cuip; + enum micro_operation_type mopt = use_type (loc, cui, NULL); + + if (mopt != MO_CLOBBER) + { + cselib_val *val; + enum machine_mode mode = GET_MODE (*loc); + + VTI (cui->bb)->n_mos++; + + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (*loc, cui->bb, cui->insn, mopt, dump_file); + + switch (mopt) + { + case MO_VAL_LOC: + loc = &PAT_VAR_LOCATION_LOC (*loc); + if (VAR_LOC_UNKNOWN_P (*loc)) + break; + /* Fall through. */ + + case MO_VAL_USE: + case MO_VAL_SET: + if (MEM_P (*loc) + && !REG_P (XEXP (*loc, 0)) && !MEM_P (XEXP (*loc, 0))) + { + val = cselib_lookup (XEXP (*loc, 0), Pmode, false); + + if (val && !cselib_preserved_value_p (val)) + { + VTI (cui->bb)->n_mos++; + cselib_preserve_value (val); + } + } + + val = find_use_val (*loc, mode, cui); + if (val) + cselib_preserve_value (val); + else + gcc_assert (mopt == MO_VAL_LOC); + + break; + + default: + break; + } } return 0; } -/* Helper function for finding all uses of REG/MEM in X in insn INSN. */ +/* Helper function for finding all uses of REG/MEM in X in CUI's + insn. */ static void -count_uses_1 (rtx *x, void *insn) +count_uses_1 (rtx *x, void *cui) { - for_each_rtx (x, count_uses, insn); + for_each_rtx (x, count_uses, cui); } -/* Count stores (register and memory references) LOC which will be tracked. - INSN is instruction which the LOC is part of. */ +/* Count stores (register and memory references) LOC which will be + tracked. CUI is a count_use_info object containing the instruction + which the LOC is part of. */ static void -count_stores (rtx loc, const_rtx expr ATTRIBUTE_UNUSED, void *insn) +count_stores (rtx loc, const_rtx expr ATTRIBUTE_UNUSED, void *cui) { - count_uses (&loc, insn); + count_uses (&loc, cui); +} + +/* Callback for cselib_record_sets_hook, that counts how many micro + operations it takes for uses and stores in an insn after + cselib_record_sets has analyzed the sets in an insn, but before it + modifies the stored values in the internal tables, unless + cselib_record_sets doesn't call it directly (perhaps because we're + not doing cselib in the first place, in which case sets and n_sets + will be 0). */ + +static void +count_with_sets (rtx insn, struct cselib_set *sets, int n_sets) +{ + basic_block bb = BLOCK_FOR_INSN (insn); + struct count_use_info cui; + + cselib_hook_called = true; + + cui.insn = insn; + cui.bb = bb; + cui.sets = sets; + cui.n_sets = n_sets; + + cui.store_p = false; + note_uses (&PATTERN (insn), count_uses_1, &cui); + cui.store_p = true; + note_stores (PATTERN (insn), count_stores, &cui); } +/* Tell whether the CONCAT used to holds a VALUE and its location + needs value resolution, i.e., an attempt of mapping the location + back to other incoming values. */ +#define VAL_NEEDS_RESOLUTION(x) \ + (RTL_FLAG_CHECK1 ("VAL_NEEDS_RESOLUTION", (x), CONCAT)->volatil) +/* Whether the location in the CONCAT is a tracked expression, that + should also be handled like a MO_USE. */ +#define VAL_HOLDS_TRACK_EXPR(x) \ + (RTL_FLAG_CHECK1 ("VAL_HOLDS_TRACK_EXPR", (x), CONCAT)->used) +/* Whether the location in the CONCAT should be handled like a MO_COPY + as well. */ +#define VAL_EXPR_IS_COPIED(x) \ + (RTL_FLAG_CHECK1 ("VAL_EXPR_IS_COPIED", (x), CONCAT)->jump) +/* Whether the location in the CONCAT should be handled like a + MO_CLOBBER as well. 
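+ These four flags reuse existing rtl flag bits (volatil, used,
+ jump and unchanging) on the CONCAT, so no extra storage is
+ needed.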
*/ +#define VAL_EXPR_IS_CLOBBERED(x) \ + (RTL_FLAG_CHECK1 ("VAL_EXPR_IS_CLOBBERED", (x), CONCAT)->unchanging) + /* Add uses (register and memory references) LOC which will be tracked to VTI (bb)->mos. INSN is instruction which the LOC is part of. */ static int -add_uses (rtx *loc, void *insn) +add_uses (rtx *loc, void *data) { - enum machine_mode mode; + enum machine_mode mode = VOIDmode; + struct count_use_info *cui = (struct count_use_info *)data; + enum micro_operation_type type = use_type (loc, cui, &mode); - if (REG_P (*loc)) + if (type != MO_CLOBBER) { - basic_block bb = BLOCK_FOR_INSN ((rtx) insn); + basic_block bb = cui->bb; micro_operation *mo = VTI (bb)->mos + VTI (bb)->n_mos++; - if (track_loc_p (*loc, REG_EXPR (*loc), REG_OFFSET (*loc), - false, &mode, NULL)) + mo->type = type; + mo->u.loc = type == MO_USE ? var_lowpart (mode, *loc) : *loc; + mo->insn = cui->insn; + + if (type == MO_VAL_LOC) { - mo->type = MO_USE; - mo->u.loc = var_lowpart (mode, *loc); + rtx oloc = *loc; + rtx vloc = PAT_VAR_LOCATION_LOC (oloc); + cselib_val *val; + + gcc_assert (cui->sets); + + if (MEM_P (vloc) + && !REG_P (XEXP (vloc, 0)) && !MEM_P (XEXP (vloc, 0))) + { + rtx mloc = vloc; + cselib_val *val = cselib_lookup (XEXP (mloc, 0), Pmode, 0); + + if (val && !cselib_preserved_value_p (val)) + { + micro_operation *mon = VTI (bb)->mos + VTI (bb)->n_mos++; + mon->type = mo->type; + mon->u.loc = mo->u.loc; + mon->insn = mo->insn; + cselib_preserve_value (val); + mo->type = MO_VAL_USE; + mloc = cselib_subst_to_values (XEXP (mloc, 0)); + mo->u.loc = gen_rtx_CONCAT (Pmode, val->val_rtx, mloc); + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (mo->u.loc, cui->bb, cui->insn, + mo->type, dump_file); + mo = mon; + } + } + + if (!VAR_LOC_UNKNOWN_P (vloc) + && (val = find_use_val (vloc, GET_MODE (oloc), cui))) + { + enum machine_mode mode2; + enum micro_operation_type type2; + rtx nloc = replace_expr_with_values (vloc); + + if (nloc) + { + oloc = shallow_copy_rtx (oloc); + PAT_VAR_LOCATION_LOC (oloc) = nloc; + } + + oloc = gen_rtx_CONCAT (mode, val->val_rtx, oloc); + + type2 = use_type (&vloc, 0, &mode2); + + gcc_assert (type2 == MO_USE || type2 == MO_USE_NO_VAR + || type2 == MO_CLOBBER); + + if (type2 == MO_CLOBBER + && !cselib_preserved_value_p (val)) + { + VAL_NEEDS_RESOLUTION (oloc) = 1; + cselib_preserve_value (val); + } + } + else if (!VAR_LOC_UNKNOWN_P (vloc)) + { + oloc = shallow_copy_rtx (oloc); + PAT_VAR_LOCATION_LOC (oloc) = gen_rtx_UNKNOWN_VAR_LOC (); + } + + mo->u.loc = oloc; } - else + else if (type == MO_VAL_USE) { - mo->type = MO_USE_NO_VAR; - mo->u.loc = *loc; + enum machine_mode mode2 = VOIDmode; + enum micro_operation_type type2; + cselib_val *val = find_use_val (*loc, GET_MODE (*loc), cui); + rtx vloc, oloc = *loc, nloc; + + gcc_assert (cui->sets); + + if (MEM_P (oloc) + && !REG_P (XEXP (oloc, 0)) && !MEM_P (XEXP (oloc, 0))) + { + rtx mloc = oloc; + cselib_val *val = cselib_lookup (XEXP (mloc, 0), Pmode, 0); + + if (val && !cselib_preserved_value_p (val)) + { + micro_operation *mon = VTI (bb)->mos + VTI (bb)->n_mos++; + mon->type = mo->type; + mon->u.loc = mo->u.loc; + mon->insn = mo->insn; + cselib_preserve_value (val); + mo->type = MO_VAL_USE; + mloc = cselib_subst_to_values (XEXP (mloc, 0)); + mo->u.loc = gen_rtx_CONCAT (Pmode, val->val_rtx, mloc); + mo->insn = cui->insn; + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (mo->u.loc, cui->bb, cui->insn, + mo->type, dump_file); + mo = mon; + } + } + + type2 = use_type (loc, 0, &mode2); + + gcc_assert (type2 == MO_USE || 
type2 == MO_USE_NO_VAR + || type2 == MO_CLOBBER); + + if (type2 == MO_USE) + vloc = var_lowpart (mode2, *loc); + else + vloc = oloc; + + /* The loc of a MO_VAL_USE may have two forms: + + (concat val src): val is at src, a value-based + representation. + + (concat (concat val use) src): same as above, with use as + the MO_USE tracked value, if it differs from src. + + */ + + nloc = replace_expr_with_values (*loc); + if (!nloc) + nloc = oloc; + + if (vloc != nloc) + oloc = gen_rtx_CONCAT (mode2, val->val_rtx, vloc); + else + oloc = val->val_rtx; + + mo->u.loc = gen_rtx_CONCAT (mode, oloc, nloc); + + if (type2 == MO_USE) + VAL_HOLDS_TRACK_EXPR (mo->u.loc) = 1; + if (!cselib_preserved_value_p (val)) + { + VAL_NEEDS_RESOLUTION (mo->u.loc) = 1; + cselib_preserve_value (val); + } } - mo->insn = (rtx) insn; - } - else if (MEM_P (*loc) - && track_loc_p (*loc, MEM_EXPR (*loc), INT_MEM_OFFSET (*loc), - false, &mode, NULL)) - { - basic_block bb = BLOCK_FOR_INSN ((rtx) insn); - micro_operation *mo = VTI (bb)->mos + VTI (bb)->n_mos++; + else + gcc_assert (type == MO_USE || type == MO_USE_NO_VAR); - mo->type = MO_USE; - mo->u.loc = var_lowpart (mode, *loc); - mo->insn = (rtx) insn; + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (mo->u.loc, cui->bb, cui->insn, mo->type, dump_file); } return 0; @@ -2069,39 +4744,49 @@ add_uses (rtx *loc, void *insn) /* Helper function for finding all uses of REG/MEM in X in insn INSN. */ static void -add_uses_1 (rtx *x, void *insn) +add_uses_1 (rtx *x, void *cui) { - for_each_rtx (x, add_uses, insn); + for_each_rtx (x, add_uses, cui); } /* Add stores (register and memory references) LOC which will be tracked - to VTI (bb)->mos. EXPR is the RTL expression containing the store. - INSN is instruction which the LOC is part of. */ + to VTI (bb)->mos. EXPR is the RTL expression containing the store. + CUIP->insn is instruction which the LOC is part of. 
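+
+ A single store may expand to two micro operations: when the
+ destination is a MEM whose address has a VALUE worth preserving,
+ an MO_VAL_USE for that address is emitted first, followed by the
+ MO_CLOBBER, MO_SET, MO_COPY or MO_VAL_SET for the store proper.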
*/ static void -add_stores (rtx loc, const_rtx expr, void *insn) +add_stores (rtx loc, const_rtx expr, void *cuip) { - enum machine_mode mode; + enum machine_mode mode = VOIDmode, mode2; + struct count_use_info *cui = (struct count_use_info *)cuip; + basic_block bb = cui->bb; + micro_operation *mo; + rtx oloc = loc, nloc, src = NULL; + enum micro_operation_type type = use_type (&loc, cui, &mode); + bool track_p = false; + cselib_val *v; + bool resolve, preserve; + + if (type == MO_CLOBBER) + return; + + mode2 = mode; if (REG_P (loc)) { - basic_block bb = BLOCK_FOR_INSN ((rtx) insn); - micro_operation *mo = VTI (bb)->mos + VTI (bb)->n_mos++; + mo = VTI (bb)->mos + VTI (bb)->n_mos++; - if (GET_CODE (expr) == CLOBBER - || !track_loc_p (loc, REG_EXPR (loc), REG_OFFSET (loc), - true, &mode, NULL)) + if ((GET_CODE (expr) == CLOBBER && type != MO_VAL_SET) + || !(track_p = use_type (&loc, NULL, &mode2) == MO_USE) + || GET_CODE (expr) == CLOBBER) { mo->type = MO_CLOBBER; mo->u.loc = loc; } else { - rtx src = NULL; - if (GET_CODE (expr) == SET && SET_DEST (expr) == loc) - src = var_lowpart (mode, SET_SRC (expr)); - loc = var_lowpart (mode, loc); + src = var_lowpart (mode2, SET_SRC (expr)); + loc = var_lowpart (mode2, loc); if (src == NULL) { @@ -2119,27 +4804,44 @@ add_stores (rtx loc, const_rtx expr, void *insn) mo->u.loc = CONST_CAST_RTX (expr); } } - mo->insn = (rtx) insn; + mo->insn = cui->insn; } else if (MEM_P (loc) - && track_loc_p (loc, MEM_EXPR (loc), INT_MEM_OFFSET (loc), - false, &mode, NULL)) + && ((track_p = use_type (&loc, NULL, &mode2) == MO_USE) + || cui->sets)) { - basic_block bb = BLOCK_FOR_INSN ((rtx) insn); - micro_operation *mo = VTI (bb)->mos + VTI (bb)->n_mos++; + mo = VTI (bb)->mos + VTI (bb)->n_mos++; - if (GET_CODE (expr) == CLOBBER) + if (MEM_P (loc) && type == MO_VAL_SET + && !REG_P (XEXP (loc, 0)) && !MEM_P (XEXP (loc, 0))) + { + rtx mloc = loc; + cselib_val *val = cselib_lookup (XEXP (mloc, 0), Pmode, 0); + + if (val && !cselib_preserved_value_p (val)) + { + cselib_preserve_value (val); + mo->type = MO_VAL_USE; + mloc = cselib_subst_to_values (XEXP (mloc, 0)); + mo->u.loc = gen_rtx_CONCAT (Pmode, val->val_rtx, mloc); + mo->insn = cui->insn; + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (mo->u.loc, cui->bb, cui->insn, + mo->type, dump_file); + mo = VTI (bb)->mos + VTI (bb)->n_mos++; + } + } + + if (GET_CODE (expr) == CLOBBER || !track_p) { mo->type = MO_CLOBBER; - mo->u.loc = var_lowpart (mode, loc); + mo->u.loc = track_p ? var_lowpart (mode2, loc) : loc; } else { - rtx src = NULL; - if (GET_CODE (expr) == SET && SET_DEST (expr) == loc) - src = var_lowpart (mode, SET_SRC (expr)); - loc = var_lowpart (mode, loc); + src = var_lowpart (mode2, SET_SRC (expr)); + loc = var_lowpart (mode2, loc); if (src == NULL) { @@ -2159,7 +4861,170 @@ add_stores (rtx loc, const_rtx expr, void *insn) mo->u.loc = CONST_CAST_RTX (expr); } } - mo->insn = (rtx) insn; + mo->insn = cui->insn; + } + else + return; + + if (type != MO_VAL_SET) + goto log_and_return; + + v = find_use_val (oloc, mode, cui); + + resolve = preserve = !cselib_preserved_value_p (v); + + nloc = replace_expr_with_values (oloc); + if (nloc) + oloc = nloc; + + if (resolve && GET_CODE (mo->u.loc) == SET) + { + nloc = replace_expr_with_values (SET_SRC (mo->u.loc)); + + if (nloc) + oloc = gen_rtx_SET (GET_MODE (mo->u.loc), oloc, nloc); + else + { + if (oloc == SET_DEST (mo->u.loc)) + /* No point in duplicating. 
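+ OLOC already matches the SET destination, so reuse the
+ whole SET recorded in mo->u.loc.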
*/ + oloc = mo->u.loc; + if (!REG_P (SET_SRC (mo->u.loc))) + resolve = false; + } + } + else if (!resolve) + { + if (GET_CODE (mo->u.loc) == SET + && oloc == SET_DEST (mo->u.loc)) + /* No point in duplicating. */ + oloc = mo->u.loc; + } + else + resolve = false; + + loc = gen_rtx_CONCAT (mode, v->val_rtx, oloc); + + if (mo->u.loc != oloc) + loc = gen_rtx_CONCAT (GET_MODE (mo->u.loc), loc, mo->u.loc); + + /* The loc of a MO_VAL_SET may have various forms: + + (concat val dst): dst now holds val + + (concat val (set dst src)): dst now holds val, copied from src + + (concat (concat val dstv) dst): dst now holds val; dstv is dst + after replacing mems and non-top-level regs with values. + + (concat (concat val dstv) (set dst src)): dst now holds val, + copied from src. dstv is a value-based representation of dst, if + it differs from dst. If resolution is needed, src is a REG. + + (concat (concat val (set dstv srcv)) (set dst src)): src + copied to dst, holding val. dstv and srcv are value-based + representations of dst and src, respectively. + + */ + + mo->u.loc = loc; + + if (track_p) + VAL_HOLDS_TRACK_EXPR (loc) = 1; + if (preserve) + { + VAL_NEEDS_RESOLUTION (loc) = resolve; + cselib_preserve_value (v); + } + if (mo->type == MO_CLOBBER) + VAL_EXPR_IS_CLOBBERED (loc) = 1; + if (mo->type == MO_COPY) + VAL_EXPR_IS_COPIED (loc) = 1; + + mo->type = MO_VAL_SET; + + log_and_return: + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (mo->u.loc, cui->bb, cui->insn, mo->type, dump_file); +} + +/* Callback for cselib_record_sets_hook, that records as micro + operations uses and stores in an insn after cselib_record_sets has + analyzed the sets in an insn, but before it modifies the stored + values in the internal tables, unless cselib_record_sets doesn't + call it directly (perhaps because we're not doing cselib in the + first place, in which case sets and n_sets will be 0). */ + +static void +add_with_sets (rtx insn, struct cselib_set *sets, int n_sets) +{ + basic_block bb = BLOCK_FOR_INSN (insn); + int n1, n2; + struct count_use_info cui; + + cselib_hook_called = true; + + cui.insn = insn; + cui.bb = bb; + cui.sets = sets; + cui.n_sets = n_sets; + + n1 = VTI (bb)->n_mos; + cui.store_p = false; + note_uses (&PATTERN (insn), add_uses_1, &cui); + n2 = VTI (bb)->n_mos - 1; + + /* Order the MO_USEs to be before MO_USE_NO_VARs, + MO_VAL_LOC and MO_VAL_USE. */ + while (n1 < n2) + { + while (n1 < n2 && VTI (bb)->mos[n1].type == MO_USE) + n1++; + while (n1 < n2 && VTI (bb)->mos[n2].type != MO_USE) + n2--; + if (n1 < n2) + { + micro_operation sw; + + sw = VTI (bb)->mos[n1]; + VTI (bb)->mos[n1] = VTI (bb)->mos[n2]; + VTI (bb)->mos[n2] = sw; + } + } + + if (CALL_P (insn)) + { + micro_operation *mo = VTI (bb)->mos + VTI (bb)->n_mos++; + + mo->type = MO_CALL; + mo->insn = insn; + + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (PATTERN (insn), bb, insn, mo->type, dump_file); + } + + n1 = VTI (bb)->n_mos; + /* This will record NEXT_INSN (insn), such that we can + insert notes before it without worrying about any + notes that MO_USEs might emit after the insn. */ + cui.store_p = true; + note_stores (PATTERN (insn), add_stores, &cui); + n2 = VTI (bb)->n_mos - 1; + + /* Order the MO_CLOBBERs to be before MO_SETs. 
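+ The same two-finger swap as for the MO_USEs above: N1 advances
+ over entries already in place, N2 retreats over the rest, and
+ out-of-place pairs are exchanged.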
*/ + while (n1 < n2) + { + while (n1 < n2 && VTI (bb)->mos[n1].type == MO_CLOBBER) + n1++; + while (n1 < n2 && VTI (bb)->mos[n2].type != MO_CLOBBER) + n2--; + if (n1 < n2) + { + micro_operation sw; + + sw = VTI (bb)->mos[n1]; + VTI (bb)->mos[n1] = VTI (bb)->mos[n2]; + VTI (bb)->mos[n2] = sw; + } } } @@ -2178,7 +5043,7 @@ find_src_status (dataflow_set *in, rtx src) decl = var_debug_decl (MEM_EXPR (src)); if (src && decl) - status = get_init_value (in, src, decl); + status = get_init_value (in, src, dv_from_decl (decl)); return status; } @@ -2204,7 +5069,9 @@ find_src_set_src (dataflow_set *set, rtx src) if (src && decl) { - var = shared_hash_find (set->vars, decl); + decl_or_value dv = dv_from_decl (decl); + + var = shared_hash_find (set->vars, dv); if (var) { found = false; @@ -2228,7 +5095,7 @@ find_src_set_src (dataflow_set *set, rtx src) static bool compute_bb_dataflow (basic_block bb) { - int i, n, r; + int i, n; bool changed; dataflow_set old_out; dataflow_set *in = &VTI (bb)->in; @@ -2241,12 +5108,12 @@ compute_bb_dataflow (basic_block bb) n = VTI (bb)->n_mos; for (i = 0; i < n; i++) { + rtx insn = VTI (bb)->mos[i].insn; + switch (VTI (bb)->mos[i].type) { case MO_CALL: - for (r = 0; r < FIRST_PSEUDO_REGISTER; r++) - if (TEST_HARD_REG_BIT (call_used_reg_set, r)) - var_regno_delete (out, r); + dataflow_set_clear_at_call (out); break; case MO_USE: @@ -2260,6 +5127,149 @@ compute_bb_dataflow (basic_block bb) } break; + case MO_VAL_LOC: + { + rtx loc = VTI (bb)->mos[i].u.loc; + rtx val, vloc; + tree var; + + if (GET_CODE (loc) == CONCAT) + { + val = XEXP (loc, 0); + vloc = XEXP (loc, 1); + } + else + { + val = NULL_RTX; + vloc = loc; + } + + var = PAT_VAR_LOCATION_DECL (vloc); + + clobber_variable_part (out, NULL_RTX, + dv_from_decl (var), 0, NULL_RTX); + if (val) + { + if (VAL_NEEDS_RESOLUTION (loc)) + val_resolve (out, val, PAT_VAR_LOCATION_LOC (vloc), insn); + set_variable_part (out, val, dv_from_decl (var), 0, + VAR_INIT_STATUS_INITIALIZED, NULL_RTX, + INSERT); + } + } + break; + + case MO_VAL_USE: + { + rtx loc = VTI (bb)->mos[i].u.loc; + rtx val, vloc, uloc; + + vloc = uloc = XEXP (loc, 1); + val = XEXP (loc, 0); + + if (GET_CODE (val) == CONCAT) + { + uloc = XEXP (val, 1); + val = XEXP (val, 0); + } + + if (VAL_NEEDS_RESOLUTION (loc)) + val_resolve (out, val, vloc, insn); + + if (VAL_HOLDS_TRACK_EXPR (loc)) + { + if (GET_CODE (uloc) == REG) + var_reg_set (out, uloc, VAR_INIT_STATUS_UNINITIALIZED, + NULL); + else if (GET_CODE (uloc) == MEM) + var_mem_set (out, uloc, VAR_INIT_STATUS_UNINITIALIZED, + NULL); + } + } + break; + + case MO_VAL_SET: + { + rtx loc = VTI (bb)->mos[i].u.loc; + rtx val, vloc, uloc; + + vloc = uloc = XEXP (loc, 1); + val = XEXP (loc, 0); + + if (GET_CODE (val) == CONCAT) + { + vloc = XEXP (val, 1); + val = XEXP (val, 0); + } + + if (GET_CODE (vloc) == SET) + { + rtx vsrc = SET_SRC (vloc); + + gcc_assert (val != vsrc); + gcc_assert (vloc == uloc || VAL_NEEDS_RESOLUTION (loc)); + + vloc = SET_DEST (vloc); + + if (VAL_NEEDS_RESOLUTION (loc)) + val_resolve (out, val, vsrc, insn); + } + else if (VAL_NEEDS_RESOLUTION (loc)) + { + gcc_assert (GET_CODE (uloc) == SET + && GET_CODE (SET_SRC (uloc)) == REG); + val_resolve (out, val, SET_SRC (uloc), insn); + } + + if (VAL_HOLDS_TRACK_EXPR (loc)) + { + if (VAL_EXPR_IS_CLOBBERED (loc)) + { + if (REG_P (uloc)) + var_reg_delete (out, uloc, true); + else if (MEM_P (uloc)) + var_mem_delete (out, uloc, true); + } + else + { + bool copied_p = VAL_EXPR_IS_COPIED (loc); + rtx set_src = NULL; + enum var_init_status status = 
VAR_INIT_STATUS_INITIALIZED; + + if (GET_CODE (uloc) == SET) + { + set_src = SET_SRC (uloc); + uloc = SET_DEST (uloc); + } + + if (copied_p) + { + if (flag_var_tracking_uninit) + { + status = find_src_status (in, set_src); + + if (status == VAR_INIT_STATUS_UNKNOWN) + status = find_src_status (out, set_src); + } + + set_src = find_src_set_src (in, set_src); + } + + if (REG_P (uloc)) + var_reg_delete_and_set (out, uloc, !copied_p, + status, set_src); + else if (MEM_P (uloc)) + var_mem_delete_and_set (out, uloc, !copied_p, + status, set_src); + } + } + else if (REG_P (uloc)) + var_regno_delete (out, REGNO (uloc)); + + val_store (out, val, vloc, insn); + } + break; + case MO_SET: { rtx loc = VTI (bb)->mos[i].u.loc; @@ -2339,6 +5349,18 @@ compute_bb_dataflow (basic_block bb) } } + if (MAY_HAVE_DEBUG_INSNS) + { + dataflow_set_equiv_regs (out); + htab_traverse (shared_hash_htab (out->vars), canonicalize_values_mark, + out); + htab_traverse (shared_hash_htab (out->vars), canonicalize_values_star, + out); +#if ENABLE_CHECKING + htab_traverse (shared_hash_htab (out->vars), + canonicalize_loc_order_check, out); +#endif + } changed = dataflow_set_different (&old_out, out); dataflow_set_destroy (&old_out); return changed; @@ -2356,6 +5378,7 @@ vt_find_locations (void) int *bb_order; int *rc_order; int i; + int htabsz = 0; /* Compute reverse completion order of depth first search of the CFG so that the data-flow runs faster. */ @@ -2396,17 +5419,82 @@ vt_find_locations (void) { bool changed; edge_iterator ei; + int oldinsz, oldoutsz; SET_BIT (visited, bb->index); - /* Calculate the IN set as union of predecessor OUT sets. */ - dataflow_set_clear (&VTI (bb)->in); - FOR_EACH_EDGE (e, ei, bb->preds) + if (dump_file && VTI (bb)->in.vars) + { + htabsz + -= htab_size (shared_hash_htab (VTI (bb)->in.vars)) + + htab_size (shared_hash_htab (VTI (bb)->out.vars)); + oldinsz + = htab_elements (shared_hash_htab (VTI (bb)->in.vars)); + oldoutsz + = htab_elements (shared_hash_htab (VTI (bb)->out.vars)); + } + else + oldinsz = oldoutsz = 0; + + if (MAY_HAVE_DEBUG_INSNS) + { + dataflow_set *in = &VTI (bb)->in, *first_out = NULL; + bool first = true, adjust = false; + + /* Calculate the IN set as the intersection of + predecessor OUT sets. */ + + dataflow_set_clear (in); + dst_can_be_shared = true; + + FOR_EACH_EDGE (e, ei, bb->preds) + if (!VTI (e->src)->flooded) + gcc_assert (bb_order[bb->index] + <= bb_order[e->src->index]); + else if (first) + { + dataflow_set_copy (in, &VTI (e->src)->out); + first_out = &VTI (e->src)->out; + first = false; + } + else + { + dataflow_set_merge (in, &VTI (e->src)->out); + adjust = true; + } + + if (adjust) + { + dataflow_post_merge_adjust (in, &VTI (bb)->permp); +#if ENABLE_CHECKING + /* Merge and merge_adjust should keep entries in + canonical order. */ + htab_traverse (shared_hash_htab (in->vars), + canonicalize_loc_order_check, + in); +#endif + if (dst_can_be_shared) + { + shared_hash_destroy (in->vars); + in->vars = shared_hash_copy (first_out->vars); + } + } + + VTI (bb)->flooded = true; + } + else { - dataflow_set_union (&VTI (bb)->in, &VTI (e->src)->out); + /* Calculate the IN set as union of predecessor OUT sets. 
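+ Without debug insns no VALUEs need reconciling, so the
+ cheaper union suffices; the intersection-based merge above
+ is only required in the MAY_HAVE_DEBUG_INSNS case.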
*/ + dataflow_set_clear (&VTI (bb)->in); + FOR_EACH_EDGE (e, ei, bb->preds) + dataflow_set_union (&VTI (bb)->in, &VTI (e->src)->out); } changed = compute_bb_dataflow (bb); + if (dump_file) + htabsz += htab_size (shared_hash_htab (VTI (bb)->in.vars)) + + htab_size (shared_hash_htab (VTI (bb)->out.vars)); + if (changed) { FOR_EACH_EDGE (e, ei, bb->succs) @@ -2414,9 +5502,6 @@ vt_find_locations (void) if (e->dest == EXIT_BLOCK_PTR) continue; - if (e->dest == bb) - continue; - if (TEST_BIT (visited, e->dest->index)) { if (!TEST_BIT (in_pending, e->dest->index)) @@ -2437,10 +5522,32 @@ vt_find_locations (void) } } } + + if (dump_file) + fprintf (dump_file, + "BB %i: in %i (was %i), out %i (was %i), rem %i + %i, tsz %i\n", + bb->index, + (int)htab_elements (shared_hash_htab (VTI (bb)->in.vars)), + oldinsz, + (int)htab_elements (shared_hash_htab (VTI (bb)->out.vars)), + oldoutsz, + (int)worklist->nodes, (int)pending->nodes, htabsz); + + if (dump_file && (dump_flags & TDF_DETAILS)) + { + fprintf (dump_file, "BB %i IN:\n", bb->index); + dump_dataflow_set (&VTI (bb)->in); + fprintf (dump_file, "BB %i OUT:\n", bb->index); + dump_dataflow_set (&VTI (bb)->out); + } } } } + if (MAY_HAVE_DEBUG_INSNS) + FOR_EACH_BB (bb) + gcc_assert (VTI (bb)->flooded); + free (bb_order); fibheap_delete (worklist); fibheap_delete (pending); @@ -2456,7 +5563,10 @@ dump_attrs_list (attrs list) { for (; list; list = list->next) { - print_mem_expr (dump_file, list->decl); + if (dv_is_decl_p (list->dv)) + print_mem_expr (dump_file, dv_as_decl (list->dv)); + else + print_rtl_single (dump_file, dv_as_value (list->dv)); fprintf (dump_file, "+" HOST_WIDE_INT_PRINT_DEC, list->offset); } fprintf (dump_file, "\n"); @@ -2465,18 +5575,43 @@ dump_attrs_list (attrs list) /* Print the information about variable *SLOT to dump file. */ static int -dump_variable (void **slot, void *data ATTRIBUTE_UNUSED) +dump_variable_slot (void **slot, void *data ATTRIBUTE_UNUSED) +{ + variable var = (variable) *slot; + + dump_variable (var); + + /* Continue traversing the hash table. */ + return 1; +} + +/* Print the information about variable VAR to dump file. */ + +static void +dump_variable (variable var) { - variable var = *(variable *) slot; int i; location_chain node; - fprintf (dump_file, " name: %s", - IDENTIFIER_POINTER (DECL_NAME (var->decl))); - if (dump_flags & TDF_UID) - fprintf (dump_file, " D.%u\n", DECL_UID (var->decl)); + if (dv_is_decl_p (var->dv)) + { + const_tree decl = dv_as_decl (var->dv); + + if (DECL_NAME (decl)) + fprintf (dump_file, " name: %s", + IDENTIFIER_POINTER (DECL_NAME (decl))); + else + fprintf (dump_file, " name: D.%u", DECL_UID (decl)); + if (dump_flags & TDF_UID) + fprintf (dump_file, " D.%u\n", DECL_UID (decl)); + else + fprintf (dump_file, "\n"); + } else - fprintf (dump_file, "\n"); + { + fputc (' ', dump_file); + print_rtl_single (dump_file, dv_as_value (var->dv)); + } for (i = 0; i < var->n_var_parts; i++) { @@ -2490,9 +5625,6 @@ dump_variable (void **slot, void *data ATTRIBUTE_UNUSED) print_rtl_single (dump_file, node->loc); } } - - /* Continue traversing the hash table. */ - return 1; } /* Print the information about variables from hash table VARS to dump file. 
*/ @@ -2503,7 +5635,7 @@ dump_vars (htab_t vars) if (htab_elements (vars) > 0) { fprintf (dump_file, "Variables:\n"); - htab_traverse (vars, dump_variable, NULL); + htab_traverse (vars, dump_variable_slot, NULL); } } @@ -2551,21 +5683,25 @@ dump_dataflow_sets (void) static void variable_was_changed (variable var, dataflow_set *set) { - hashval_t hash = VARIABLE_HASH_VAL (var->decl); + hashval_t hash = dv_htab_hash (var->dv); if (emit_notes) { - variable *slot; + void **slot; - slot = (variable *) htab_find_slot_with_hash (changed_variables, - var->decl, hash, INSERT); + /* Remember this decl or VALUE has been added to changed_variables. */ + set_dv_changed (var->dv, true); + + slot = htab_find_slot_with_hash (changed_variables, + var->dv, + hash, INSERT); if (set && var->n_var_parts == 0) { variable empty_var; - empty_var = (variable) pool_alloc (var_pool); - empty_var->decl = var->decl; + empty_var = (variable) pool_alloc (dv_pool (var->dv)); + empty_var->dv = var->dv; empty_var->refcount = 1; empty_var->n_var_parts = 0; *slot = empty_var; @@ -2585,11 +5721,11 @@ variable_was_changed (variable var, dataflow_set *set) void **slot; drop_var: - slot = shared_hash_find_slot_noinsert (set->vars, var->decl); + slot = shared_hash_find_slot_noinsert (set->vars, var->dv); if (slot) { if (shared_hash_shared (set->vars)) - slot = shared_hash_find_slot_unshare (&set->vars, var->decl, + slot = shared_hash_find_slot_unshare (&set->vars, var->dv, NO_INSERT); htab_clear_slot (shared_hash_htab (set->vars), slot); } @@ -2630,30 +5766,30 @@ find_variable_location_part (variable var, HOST_WIDE_INT offset, return -1; } -/* Set the part of variable's location in the dataflow set SET. The variable - part is specified by variable's declaration DECL and offset OFFSET and the - part's location by LOC. */ - -static void -set_variable_part (dataflow_set *set, rtx loc, tree decl, HOST_WIDE_INT offset, - enum var_init_status initialized, rtx set_src) +static void ** +set_slot_part (dataflow_set *set, rtx loc, void **slot, + decl_or_value dv, HOST_WIDE_INT offset, + enum var_init_status initialized, rtx set_src) { int pos; location_chain node, next; location_chain *nextp; variable var; - void **slot = shared_hash_find_slot (set->vars, decl); + bool onepart = dv_onepart_p (dv); + + gcc_assert (offset == 0 || !onepart); + gcc_assert (loc != dv_as_opaque (dv)); + + var = (variable) *slot; if (! flag_var_tracking_uninit) initialized = VAR_INIT_STATUS_INITIALIZED; - if (!slot || !*slot) + if (!var) { - if (!slot) - slot = shared_hash_find_slot_unshare (&set->vars, decl, INSERT); /* Create new variable information. 
*/ - var = (variable) pool_alloc (var_pool); - var->decl = decl; + var = (variable) pool_alloc (dv_pool (dv)); + var->dv = dv; var->refcount = 1; var->n_var_parts = 1; var->var_part[0].offset = offset; @@ -2661,12 +5797,113 @@ set_variable_part (dataflow_set *set, rtx loc, tree decl, HOST_WIDE_INT offset, var->var_part[0].cur_loc = NULL; *slot = var; pos = 0; + nextp = &var->var_part[0].loc_chain; + if (emit_notes && dv_is_value_p (dv)) + add_cselib_value_chains (dv); + } + else if (onepart) + { + int r = -1, c = 0; + + gcc_assert (dv_as_opaque (var->dv) == dv_as_opaque (dv)); + + pos = 0; + + if (GET_CODE (loc) == VALUE) + { + for (nextp = &var->var_part[0].loc_chain; (node = *nextp); + nextp = &node->next) + if (GET_CODE (node->loc) == VALUE) + { + if (node->loc == loc) + { + r = 0; + break; + } + if (canon_value_cmp (node->loc, loc)) + c++; + else + { + r = 1; + break; + } + } + else if (REG_P (node->loc) || MEM_P (node->loc)) + c++; + else + { + r = 1; + break; + } + } + else if (REG_P (loc)) + { + for (nextp = &var->var_part[0].loc_chain; (node = *nextp); + nextp = &node->next) + if (REG_P (node->loc)) + { + if (REGNO (node->loc) < REGNO (loc)) + c++; + else + { + if (REGNO (node->loc) == REGNO (loc)) + r = 0; + else + r = 1; + break; + } + } + else + { + r = 1; + break; + } + } + else if (MEM_P (loc)) + { + for (nextp = &var->var_part[0].loc_chain; (node = *nextp); + nextp = &node->next) + if (REG_P (node->loc)) + c++; + else if (MEM_P (node->loc)) + { + if ((r = loc_cmp (XEXP (node->loc, 0), XEXP (loc, 0))) >= 0) + break; + else + c++; + } + else + { + r = 1; + break; + } + } + else + for (nextp = &var->var_part[0].loc_chain; (node = *nextp); + nextp = &node->next) + if ((r = loc_cmp (node->loc, loc)) >= 0) + break; + else + c++; + + if (r == 0) + return slot; + + if (var->refcount > 1 || shared_hash_shared (set->vars)) + { + slot = unshare_variable (set, slot, var, initialized); + var = (variable)*slot; + for (nextp = &var->var_part[0].loc_chain; c; + nextp = &(*nextp)->next) + c--; + gcc_assert ((!node && !*nextp) || node->loc == (*nextp)->loc); + } } else { int inspos = 0; - var = (variable) *slot; + gcc_assert (dv_as_decl (var->dv) == dv_as_decl (dv)); pos = find_variable_location_part (var, offset, &inspos); @@ -2686,13 +5923,16 @@ set_variable_part (dataflow_set *set, rtx loc, tree decl, HOST_WIDE_INT offset, if (set_src != NULL) node->set_src = set_src; - return; + return slot; } else { /* We have to make a copy of a shared variable. */ if (var->refcount > 1 || shared_hash_shared (set->vars)) - var = unshare_variable (set, var, initialized); + { + slot = unshare_variable (set, slot, var, initialized); + var = (variable)*slot; + } } } else @@ -2701,11 +5941,15 @@ set_variable_part (dataflow_set *set, rtx loc, tree decl, HOST_WIDE_INT offset, /* We have to make a copy of the shared variable. */ if (var->refcount > 1 || shared_hash_shared (set->vars)) - var = unshare_variable (set, var, initialized); + { + slot = unshare_variable (set, slot, var, initialized); + var = (variable)*slot; + } /* We track only variables whose size is <= MAX_VAR_PARTS bytes thus there are at most MAX_VAR_PARTS different offsets. */ - gcc_assert (var->n_var_parts < MAX_VAR_PARTS); + gcc_assert (var->n_var_parts < MAX_VAR_PARTS + && (!var->n_var_parts || !dv_onepart_p (var->dv))); /* We have to move the elements of array starting at index inspos to the next position. 
*/ @@ -2717,29 +5961,31 @@ set_variable_part (dataflow_set *set, rtx loc, tree decl, HOST_WIDE_INT offset, var->var_part[pos].loc_chain = NULL; var->var_part[pos].cur_loc = NULL; } - } - /* Delete the location from the list. */ - nextp = &var->var_part[pos].loc_chain; - for (node = var->var_part[pos].loc_chain; node; node = next) - { - next = node->next; - if ((REG_P (node->loc) && REG_P (loc) - && REGNO (node->loc) == REGNO (loc)) - || rtx_equal_p (node->loc, loc)) + /* Delete the location from the list. */ + nextp = &var->var_part[pos].loc_chain; + for (node = var->var_part[pos].loc_chain; node; node = next) { - /* Save these values, to assign to the new node, before - deleting this one. */ - if (node->init > initialized) - initialized = node->init; - if (node->set_src != NULL && set_src == NULL) - set_src = node->set_src; - pool_free (loc_chain_pool, node); - *nextp = next; - break; + next = node->next; + if ((REG_P (node->loc) && REG_P (loc) + && REGNO (node->loc) == REGNO (loc)) + || rtx_equal_p (node->loc, loc)) + { + /* Save these values, to assign to the new node, before + deleting this one. */ + if (node->init > initialized) + initialized = node->init; + if (node->set_src != NULL && set_src == NULL) + set_src = node->set_src; + pool_free (loc_chain_pool, node); + *nextp = next; + break; + } + else + nextp = &node->next; } - else - nextp = &node->next; + + nextp = &var->var_part[pos].loc_chain; } /* Add the location to the beginning. */ @@ -2747,8 +5993,11 @@ set_variable_part (dataflow_set *set, rtx loc, tree decl, HOST_WIDE_INT offset, node->loc = loc; node->init = initialized; node->set_src = set_src; - node->next = var->var_part[pos].loc_chain; - var->var_part[pos].loc_chain = node; + node->next = *nextp; + *nextp = node; + + if (onepart && emit_notes) + add_value_chains (var->dv, loc); /* If no location was emitted do so. */ if (var->var_part[pos].cur_loc == NULL) @@ -2756,168 +6005,315 @@ set_variable_part (dataflow_set *set, rtx loc, tree decl, HOST_WIDE_INT offset, var->var_part[pos].cur_loc = loc; variable_was_changed (var, set); } + + return slot; } -/* Remove all recorded register locations for the given variable part - from dataflow set SET, except for those that are identical to loc. - The variable part is specified by variable's declaration DECL and - offset OFFSET. */ +/* Set the part of variable's location in the dataflow set SET. The + variable part is specified by variable's declaration in DV and + offset OFFSET and the part's location by LOC. IOPT should be + NO_INSERT if the variable is known to be in SET already and the + variable hash table must not be resized, and INSERT otherwise. */ static void -clobber_variable_part (dataflow_set *set, rtx loc, tree decl, - HOST_WIDE_INT offset, rtx set_src) +set_variable_part (dataflow_set *set, rtx loc, + decl_or_value dv, HOST_WIDE_INT offset, + enum var_init_status initialized, rtx set_src, + enum insert_option iopt) { - variable var; + void **slot; - if (! decl || ! DECL_P (decl)) - return; + if (iopt == NO_INSERT) + slot = shared_hash_find_slot_noinsert (set->vars, dv); + else + { + slot = shared_hash_find_slot (set->vars, dv); + if (!slot) + slot = shared_hash_find_slot_unshare (&set->vars, dv, iopt); + } + slot = set_slot_part (set, loc, slot, dv, offset, initialized, set_src); +} - var = shared_hash_find (set->vars, decl); - if (var) +/* Remove all recorded register locations for the given variable part + from dataflow set SET, except for those that are identical to loc. 
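+ (When uninitialized-use tracking is enabled, a location whose
+ recorded set_src matches the incoming SET_SRC is spared as well.)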
+ The variable part is specified by variable's declaration or value + DV and offset OFFSET. */ + +static void ** +clobber_slot_part (dataflow_set *set, rtx loc, void **slot, + HOST_WIDE_INT offset, rtx set_src) +{ + variable var = (variable) *slot; + int pos = find_variable_location_part (var, offset, NULL); + + if (pos >= 0) { - int pos = find_variable_location_part (var, offset, NULL); + location_chain node, next; - if (pos >= 0) + /* Remove the register locations from the dataflow set. */ + next = var->var_part[pos].loc_chain; + for (node = next; node; node = next) { - location_chain node, next; - - /* Remove the register locations from the dataflow set. */ - next = var->var_part[pos].loc_chain; - for (node = next; node; node = next) + next = node->next; + if (node->loc != loc + && (!flag_var_tracking_uninit + || !set_src + || MEM_P (set_src) + || !rtx_equal_p (set_src, node->set_src))) { - next = node->next; - if (node->loc != loc - && (!flag_var_tracking_uninit - || !set_src - || MEM_P (set_src) - || !rtx_equal_p (set_src, node->set_src))) + if (REG_P (node->loc)) { - if (REG_P (node->loc)) + attrs anode, anext; + attrs *anextp; + + /* Remove the variable part from the register's + list, but preserve any other variable parts + that might be regarded as live in that same + register. */ + anextp = &set->regs[REGNO (node->loc)]; + for (anode = *anextp; anode; anode = anext) { - attrs anode, anext; - attrs *anextp; - - /* Remove the variable part from the register's - list, but preserve any other variable parts - that might be regarded as live in that same - register. */ - anextp = &set->regs[REGNO (node->loc)]; - for (anode = *anextp; anode; anode = anext) + anext = anode->next; + if (dv_as_opaque (anode->dv) == dv_as_opaque (var->dv) + && anode->offset == offset) { - anext = anode->next; - if (anode->decl == decl - && anode->offset == offset) - { - pool_free (attrs_pool, anode); - *anextp = anext; - } - else - anextp = &anode->next; + pool_free (attrs_pool, anode); + *anextp = anext; } + else + anextp = &anode->next; } - - delete_variable_part (set, node->loc, decl, offset); } + + slot = delete_slot_part (set, node->loc, slot, offset); } } } + + return slot; } -/* Delete the part of variable's location from dataflow set SET. The variable - part is specified by variable's declaration DECL and offset OFFSET and the - part's location by LOC. */ +/* Remove all recorded register locations for the given variable part + from dataflow set SET, except for those that are identical to loc. + The variable part is specified by variable's declaration or value + DV and offset OFFSET. */ static void -delete_variable_part (dataflow_set *set, rtx loc, tree decl, - HOST_WIDE_INT offset) +clobber_variable_part (dataflow_set *set, rtx loc, decl_or_value dv, + HOST_WIDE_INT offset, rtx set_src) { - variable var = shared_hash_find (set->vars, decl);; - if (var) - { - int pos = find_variable_location_part (var, offset, NULL); + void **slot; - if (pos >= 0) - { - location_chain node, next; - location_chain *nextp; - bool changed; + if (!dv_as_opaque (dv) + || (!dv_is_value_p (dv) && ! DECL_P (dv_as_decl (dv)))) + return; - if (var->refcount > 1 || shared_hash_shared (set->vars)) - { - /* If the variable contains the location part we have to - make a copy of the variable. 
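+ The chain is about to be modified, so shared structure
+ must be unshared before any node is freed.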
*/ - for (node = var->var_part[pos].loc_chain; node; - node = node->next) - { - if ((REG_P (node->loc) && REG_P (loc) - && REGNO (node->loc) == REGNO (loc)) - || rtx_equal_p (node->loc, loc)) - { - var = unshare_variable (set, var, - VAR_INIT_STATUS_UNKNOWN); - break; - } - } - } + slot = shared_hash_find_slot_noinsert (set->vars, dv); + if (!slot) + return; - /* Delete the location part. */ - nextp = &var->var_part[pos].loc_chain; - for (node = *nextp; node; node = next) + slot = clobber_slot_part (set, loc, slot, offset, set_src); +} + +/* Delete the part of variable's location from dataflow set SET. The + variable part is specified by its SET->vars slot SLOT and offset + OFFSET and the part's location by LOC. */ + +static void ** +delete_slot_part (dataflow_set *set, rtx loc, void **slot, + HOST_WIDE_INT offset) +{ + variable var = (variable) *slot; + int pos = find_variable_location_part (var, offset, NULL); + + if (pos >= 0) + { + location_chain node, next; + location_chain *nextp; + bool changed; + + if (var->refcount > 1 || shared_hash_shared (set->vars)) + { + /* If the variable contains the location part we have to + make a copy of the variable. */ + for (node = var->var_part[pos].loc_chain; node; + node = node->next) { - next = node->next; if ((REG_P (node->loc) && REG_P (loc) && REGNO (node->loc) == REGNO (loc)) || rtx_equal_p (node->loc, loc)) { - pool_free (loc_chain_pool, node); - *nextp = next; + slot = unshare_variable (set, slot, var, + VAR_INIT_STATUS_UNKNOWN); + var = (variable)*slot; break; } - else - nextp = &node->next; } + } - /* If we have deleted the location which was last emitted - we have to emit new location so add the variable to set - of changed variables. */ - if (var->var_part[pos].cur_loc - && ((REG_P (loc) - && REG_P (var->var_part[pos].cur_loc) - && REGNO (loc) == REGNO (var->var_part[pos].cur_loc)) - || rtx_equal_p (loc, var->var_part[pos].cur_loc))) + /* Delete the location part. */ + nextp = &var->var_part[pos].loc_chain; + for (node = *nextp; node; node = next) + { + next = node->next; + if ((REG_P (node->loc) && REG_P (loc) + && REGNO (node->loc) == REGNO (loc)) + || rtx_equal_p (node->loc, loc)) { - changed = true; - if (var->var_part[pos].loc_chain) - var->var_part[pos].cur_loc = var->var_part[pos].loc_chain->loc; + if (emit_notes && pos == 0 && dv_onepart_p (var->dv)) + remove_value_chains (var->dv, node->loc); + pool_free (loc_chain_pool, node); + *nextp = next; + break; } else - changed = false; + nextp = &node->next; + } + + /* If we have deleted the location which was last emitted + we have to emit new location so add the variable to set + of changed variables. 
*/ + if (var->var_part[pos].cur_loc + && ((REG_P (loc) + && REG_P (var->var_part[pos].cur_loc) + && REGNO (loc) == REGNO (var->var_part[pos].cur_loc)) + || rtx_equal_p (loc, var->var_part[pos].cur_loc))) + { + changed = true; + if (var->var_part[pos].loc_chain) + var->var_part[pos].cur_loc = var->var_part[pos].loc_chain->loc; + } + else + changed = false; - if (var->var_part[pos].loc_chain == NULL) + if (var->var_part[pos].loc_chain == NULL) + { + gcc_assert (changed); + var->n_var_parts--; + if (emit_notes && var->n_var_parts == 0 && dv_is_value_p (var->dv)) + remove_cselib_value_chains (var->dv); + while (pos < var->n_var_parts) { - var->n_var_parts--; - while (pos < var->n_var_parts) - { - var->var_part[pos] = var->var_part[pos + 1]; - pos++; - } + var->var_part[pos] = var->var_part[pos + 1]; + pos++; } - if (changed) - variable_was_changed (var, set); } + if (changed) + variable_was_changed (var, set); } + + return slot; +} + +/* Delete the part of variable's location from dataflow set SET. The + variable part is specified by variable's declaration or value DV + and offset OFFSET and the part's location by LOC. */ + +static void +delete_variable_part (dataflow_set *set, rtx loc, decl_or_value dv, + HOST_WIDE_INT offset) +{ + void **slot = shared_hash_find_slot_noinsert (set->vars, dv); + if (!slot) + return; + + slot = delete_slot_part (set, loc, slot, offset); +} + +/* Wrap result in CONST:MODE if needed to preserve the mode. */ + +static rtx +check_wrap_constant (enum machine_mode mode, rtx result) +{ + if (!result || GET_MODE (result) == mode) + return result; + + if (dump_file && (dump_flags & TDF_DETAILS)) + fprintf (dump_file, " wrapping result in const to preserve mode %s\n", + GET_MODE_NAME (mode)); + + result = wrap_constant (mode, result); + gcc_assert (GET_MODE (result) == mode); + + return result; +} + +/* Callback for cselib_expand_value, that looks for expressions + holding the value in the var-tracking hash tables. */ + +static rtx +vt_expand_loc_callback (rtx x, bitmap regs, int max_depth, void *data) +{ + htab_t vars = (htab_t)data; + decl_or_value dv; + variable var; + location_chain loc; + rtx result; + + gcc_assert (GET_CODE (x) == VALUE); + + if (VALUE_RECURSED_INTO (x)) + return NULL; + + dv = dv_from_value (x); + var = (variable) htab_find_with_hash (vars, dv, dv_htab_hash (dv)); + + if (!var) + return NULL; + + if (var->n_var_parts == 0) + return NULL; + + gcc_assert (var->n_var_parts == 1); + + VALUE_RECURSED_INTO (x) = true; + result = NULL; + + for (loc = var->var_part[0].loc_chain; loc; loc = loc->next) + { + result = cselib_expand_value_rtx_cb (loc->loc, regs, max_depth, + vt_expand_loc_callback, vars); + result = check_wrap_constant (GET_MODE (loc->loc), result); + if (result) + break; + } + + VALUE_RECURSED_INTO (x) = false; + return result; +} + +/* Expand VALUEs in LOC, using VARS as well as cselib's equivalence + tables. */ + +static rtx +vt_expand_loc (rtx loc, htab_t vars) +{ + rtx newloc; + + if (!MAY_HAVE_DEBUG_INSNS) + return loc; + + newloc = cselib_expand_value_rtx_cb (loc, scratch_regs, 5, + vt_expand_loc_callback, vars); + loc = check_wrap_constant (GET_MODE (loc), newloc); + + if (loc && MEM_P (loc)) + loc = targetm.delegitimize_address (loc); + + return loc; } /* Emit the NOTE_INSN_VAR_LOCATION for variable *VARP. DATA contains additional parameters: WHERE specifies whether the note shall be emitted - before of after instruction INSN. */ + before or after instruction INSN. 
*/ static int emit_note_insn_var_location (void **varp, void *data) { - variable var = *(variable *) varp; + variable var = (variable) *varp; rtx insn = ((emit_note_data *)data)->insn; enum emit_note_where where = ((emit_note_data *)data)->where; + htab_t vars = ((emit_note_data *)data)->vars; rtx note; int i, j, n_var_parts; bool complete; @@ -2926,8 +6322,14 @@ emit_note_insn_var_location (void **varp, void *data) tree type_size_unit; HOST_WIDE_INT offsets[MAX_VAR_PARTS]; rtx loc[MAX_VAR_PARTS]; + tree decl; + + if (dv_is_value_p (var->dv)) + goto clear; + + decl = dv_as_decl (var->dv); - gcc_assert (var->decl); + gcc_assert (decl); complete = true; last_limit = 0; @@ -2935,6 +6337,7 @@ emit_note_insn_var_location (void **varp, void *data) for (i = 0; i < var->n_var_parts; i++) { enum machine_mode mode, wider_mode; + rtx loc2; if (last_limit < var->var_part[i].offset) { @@ -2944,7 +6347,13 @@ emit_note_insn_var_location (void **varp, void *data) else if (last_limit > var->var_part[i].offset) continue; offsets[n_var_parts] = var->var_part[i].offset; - loc[n_var_parts] = var->var_part[i].loc_chain->loc; + loc2 = vt_expand_loc (var->var_part[i].loc_chain->loc, vars); + if (!loc2) + { + complete = false; + continue; + } + loc[n_var_parts] = loc2; mode = GET_MODE (loc[n_var_parts]); initialized = var->var_part[i].loc_chain->init; last_limit = offsets[n_var_parts] + GET_MODE_SIZE (mode); @@ -2956,13 +6365,12 @@ emit_note_insn_var_location (void **varp, void *data) break; if (j < var->n_var_parts && wider_mode != VOIDmode - && GET_CODE (loc[n_var_parts]) - == GET_CODE (var->var_part[j].loc_chain->loc) - && mode == GET_MODE (var->var_part[j].loc_chain->loc) + && (loc2 = vt_expand_loc (var->var_part[j].loc_chain->loc, vars)) + && GET_CODE (loc[n_var_parts]) == GET_CODE (loc2) + && mode == GET_MODE (loc2) && last_limit == var->var_part[j].offset) { rtx new_loc = NULL; - rtx loc2 = var->var_part[j].loc_chain->loc; if (REG_P (loc[n_var_parts]) && hard_regno_nregs[REGNO (loc[n_var_parts])][mode] * 2 @@ -3015,12 +6423,16 @@ emit_note_insn_var_location (void **varp, void *data) } ++n_var_parts; } - type_size_unit = TYPE_SIZE_UNIT (TREE_TYPE (var->decl)); + type_size_unit = TYPE_SIZE_UNIT (TREE_TYPE (decl)); if ((unsigned HOST_WIDE_INT) last_limit < TREE_INT_CST_LOW (type_size_unit)) complete = false; - if (where == EMIT_NOTE_AFTER_INSN) - note = emit_note_after (NOTE_INSN_VAR_LOCATION, insn); + if (where != EMIT_NOTE_BEFORE_INSN) + { + note = emit_note_after (NOTE_INSN_VAR_LOCATION, insn); + if (where == EMIT_NOTE_AFTER_CALL_INSN) + NOTE_DURING_CALL_P (note) = true; + } else note = emit_note_before (NOTE_INSN_VAR_LOCATION, insn); @@ -3029,7 +6441,7 @@ emit_note_insn_var_location (void **varp, void *data) if (!complete) { - NOTE_VAR_LOCATION (note) = gen_rtx_VAR_LOCATION (VOIDmode, var->decl, + NOTE_VAR_LOCATION (note) = gen_rtx_VAR_LOCATION (VOIDmode, decl, NULL_RTX, (int) initialized); } else if (n_var_parts == 1) @@ -3037,7 +6449,7 @@ emit_note_insn_var_location (void **varp, void *data) rtx expr_list = gen_rtx_EXPR_LIST (VOIDmode, loc[0], GEN_INT (offsets[0])); - NOTE_VAR_LOCATION (note) = gen_rtx_VAR_LOCATION (VOIDmode, var->decl, + NOTE_VAR_LOCATION (note) = gen_rtx_VAR_LOCATION (VOIDmode, decl, expr_list, (int) initialized); } @@ -3051,28 +6463,115 @@ emit_note_insn_var_location (void **varp, void *data) parallel = gen_rtx_PARALLEL (VOIDmode, gen_rtvec_v (n_var_parts, loc)); - NOTE_VAR_LOCATION (note) = gen_rtx_VAR_LOCATION (VOIDmode, var->decl, + NOTE_VAR_LOCATION (note) = gen_rtx_VAR_LOCATION 
(VOIDmode, decl, parallel, (int) initialized); } + clear: + set_dv_changed (var->dv, false); htab_clear_slot (changed_variables, varp); /* Continue traversing the hash table. */ return 1; } +DEF_VEC_P (variable); +DEF_VEC_ALLOC_P (variable, heap); + +/* Stack of variable_def pointers that need processing with + check_changed_vars_2. */ + +static VEC (variable, heap) *changed_variables_stack; + +/* Populate changed_variables_stack with variable_def pointers + that need variable_was_changed called on them. */ + +static int +check_changed_vars_1 (void **slot, void *data) +{ + variable var = (variable) *slot; + htab_t htab = (htab_t) data; + + if (dv_is_value_p (var->dv)) + { + value_chain vc + = (value_chain) htab_find_with_hash (value_chains, var->dv, + dv_htab_hash (var->dv)); + + if (vc == NULL) + return 1; + for (vc = vc->next; vc; vc = vc->next) + if (!dv_changed_p (vc->dv)) + { + variable vcvar + = (variable) htab_find_with_hash (htab, vc->dv, + dv_htab_hash (vc->dv)); + if (vcvar) + VEC_safe_push (variable, heap, changed_variables_stack, + vcvar); + } + } + return 1; +} + +/* Add VAR to changed_variables and also for VALUEs add recursively + all DVs that aren't in changed_variables yet but reference the + VALUE from its loc_chain. */ + +static void +check_changed_vars_2 (variable var, htab_t htab) +{ + variable_was_changed (var, NULL); + if (dv_is_value_p (var->dv)) + { + value_chain vc + = (value_chain) htab_find_with_hash (value_chains, var->dv, + dv_htab_hash (var->dv)); + + if (vc == NULL) + return; + for (vc = vc->next; vc; vc = vc->next) + if (!dv_changed_p (vc->dv)) + { + variable vcvar + = (variable) htab_find_with_hash (htab, vc->dv, + dv_htab_hash (vc->dv)); + if (vcvar) + check_changed_vars_2 (vcvar, htab); + } + } +} + /* Emit NOTE_INSN_VAR_LOCATION note for each variable from a chain CHANGED_VARIABLES and delete this chain. WHERE specifies whether the notes shall be emitted before or after instruction INSN. */ static void -emit_notes_for_changes (rtx insn, enum emit_note_where where) +emit_notes_for_changes (rtx insn, enum emit_note_where where, + shared_hash vars) { emit_note_data data; + htab_t htab = shared_hash_htab (vars); + + if (!htab_elements (changed_variables)) + return; + + if (MAY_HAVE_DEBUG_INSNS) + { + /* Unfortunately this has to be done in two steps, because + we can't traverse a hashtab into which we are inserting + through variable_was_changed. */ + htab_traverse (changed_variables, check_changed_vars_1, htab); + while (VEC_length (variable, changed_variables_stack) > 0) + check_changed_vars_2 (VEC_pop (variable, changed_variables_stack), + htab); + } data.insn = insn; data.where = where; + data.vars = htab; + htab_traverse (changed_variables, emit_note_insn_var_location, &data); } @@ -3085,23 +6584,54 @@ emit_notes_for_differences_1 (void **slot, void *data) htab_t new_vars = (htab_t) data; variable old_var, new_var; - old_var = *(variable *) slot; - new_var = (variable) htab_find_with_hash (new_vars, old_var->decl, - VARIABLE_HASH_VAL (old_var->decl)); + old_var = (variable) *slot; + new_var = (variable) htab_find_with_hash (new_vars, old_var->dv, + dv_htab_hash (old_var->dv)); if (!new_var) { /* Variable has disappeared. 
*/ variable empty_var; - empty_var = (variable) pool_alloc (var_pool); - empty_var->decl = old_var->decl; + empty_var = (variable) pool_alloc (dv_pool (old_var->dv)); + empty_var->dv = old_var->dv; empty_var->refcount = 0; empty_var->n_var_parts = 0; + if (dv_onepart_p (old_var->dv)) + { + location_chain lc; + + gcc_assert (old_var->n_var_parts == 1); + for (lc = old_var->var_part[0].loc_chain; lc; lc = lc->next) + remove_value_chains (old_var->dv, lc->loc); + if (dv_is_value_p (old_var->dv)) + remove_cselib_value_chains (old_var->dv); + } variable_was_changed (empty_var, NULL); } else if (variable_different_p (old_var, new_var, true)) { + if (dv_onepart_p (old_var->dv)) + { + location_chain lc1, lc2; + + gcc_assert (old_var->n_var_parts == 1); + gcc_assert (new_var->n_var_parts == 1); + lc1 = old_var->var_part[0].loc_chain; + lc2 = new_var->var_part[0].loc_chain; + while (lc1 + && lc2 + && ((REG_P (lc1->loc) && REG_P (lc2->loc)) + || rtx_equal_p (lc1->loc, lc2->loc))) + { + lc1 = lc1->next; + lc2 = lc2->next; + } + for (; lc2; lc2 = lc2->next) + add_value_chains (old_var->dv, lc2->loc); + for (; lc1; lc1 = lc1->next) + remove_value_chains (old_var->dv, lc1->loc); + } variable_was_changed (new_var, NULL); } @@ -3118,12 +6648,22 @@ emit_notes_for_differences_2 (void **slot, void *data) htab_t old_vars = (htab_t) data; variable old_var, new_var; - new_var = *(variable *) slot; - old_var = (variable) htab_find_with_hash (old_vars, new_var->decl, - VARIABLE_HASH_VAL (new_var->decl)); + new_var = (variable) *slot; + old_var = (variable) htab_find_with_hash (old_vars, new_var->dv, + dv_htab_hash (new_var->dv)); if (!old_var) { /* Variable has appeared. */ + if (dv_onepart_p (new_var->dv)) + { + location_chain lc; + + gcc_assert (new_var->n_var_parts == 1); + for (lc = new_var->var_part[0].loc_chain; lc; lc = lc->next) + add_value_chains (new_var->dv, lc->loc); + if (dv_is_value_p (new_var->dv)) + add_cselib_value_chains (new_var->dv); + } variable_was_changed (new_var, NULL); } @@ -3144,19 +6684,18 @@ emit_notes_for_differences (rtx insn, dataflow_set *old_set, htab_traverse (shared_hash_htab (new_set->vars), emit_notes_for_differences_2, shared_hash_htab (old_set->vars)); - emit_notes_for_changes (insn, EMIT_NOTE_BEFORE_INSN); + emit_notes_for_changes (insn, EMIT_NOTE_BEFORE_INSN, new_set->vars); } /* Emit the notes for changes of location parts in the basic block BB. 
*/ static void -emit_notes_in_bb (basic_block bb) +emit_notes_in_bb (basic_block bb, dataflow_set *set) { int i; - dataflow_set set; - dataflow_set_init (&set); - dataflow_set_copy (&set, &VTI (bb)->in); + dataflow_set_clear (set); + dataflow_set_copy (set, &VTI (bb)->in); for (i = 0; i < VTI (bb)->n_mos; i++) { @@ -3165,28 +6704,164 @@ emit_notes_in_bb (basic_block bb) switch (VTI (bb)->mos[i].type) { case MO_CALL: + dataflow_set_clear_at_call (set); + emit_notes_for_changes (insn, EMIT_NOTE_AFTER_CALL_INSN, set->vars); + break; + + case MO_USE: { - int r; + rtx loc = VTI (bb)->mos[i].u.loc; - for (r = 0; r < FIRST_PSEUDO_REGISTER; r++) - if (TEST_HARD_REG_BIT (call_used_reg_set, r)) - { - var_regno_delete (&set, r); - } - emit_notes_for_changes (insn, EMIT_NOTE_AFTER_INSN); + if (REG_P (loc)) + var_reg_set (set, loc, VAR_INIT_STATUS_UNINITIALIZED, NULL); + else + var_mem_set (set, loc, VAR_INIT_STATUS_UNINITIALIZED, NULL); + + emit_notes_for_changes (insn, EMIT_NOTE_AFTER_INSN, set->vars); } break; - case MO_USE: + case MO_VAL_LOC: { rtx loc = VTI (bb)->mos[i].u.loc; + rtx val, vloc; + tree var; - if (REG_P (loc)) - var_reg_set (&set, loc, VAR_INIT_STATUS_UNINITIALIZED, NULL); + if (GET_CODE (loc) == CONCAT) + { + val = XEXP (loc, 0); + vloc = XEXP (loc, 1); + } else - var_mem_set (&set, loc, VAR_INIT_STATUS_UNINITIALIZED, NULL); + { + val = NULL_RTX; + vloc = loc; + } + + var = PAT_VAR_LOCATION_DECL (vloc); - emit_notes_for_changes (insn, EMIT_NOTE_AFTER_INSN); + clobber_variable_part (set, NULL_RTX, + dv_from_decl (var), 0, NULL_RTX); + if (val) + { + if (VAL_NEEDS_RESOLUTION (loc)) + val_resolve (set, val, PAT_VAR_LOCATION_LOC (vloc), insn); + set_variable_part (set, val, dv_from_decl (var), 0, + VAR_INIT_STATUS_INITIALIZED, NULL_RTX, + INSERT); + } + + emit_notes_for_changes (insn, EMIT_NOTE_AFTER_INSN, set->vars); + } + break; + + case MO_VAL_USE: + { + rtx loc = VTI (bb)->mos[i].u.loc; + rtx val, vloc, uloc; + + vloc = uloc = XEXP (loc, 1); + val = XEXP (loc, 0); + + if (GET_CODE (val) == CONCAT) + { + uloc = XEXP (val, 1); + val = XEXP (val, 0); + } + + if (VAL_NEEDS_RESOLUTION (loc)) + val_resolve (set, val, vloc, insn); + + if (VAL_HOLDS_TRACK_EXPR (loc)) + { + if (GET_CODE (uloc) == REG) + var_reg_set (set, uloc, VAR_INIT_STATUS_UNINITIALIZED, + NULL); + else if (GET_CODE (uloc) == MEM) + var_mem_set (set, uloc, VAR_INIT_STATUS_UNINITIALIZED, + NULL); + } + + emit_notes_for_changes (insn, EMIT_NOTE_BEFORE_INSN, set->vars); + } + break; + + case MO_VAL_SET: + { + rtx loc = VTI (bb)->mos[i].u.loc; + rtx val, vloc, uloc; + + vloc = uloc = XEXP (loc, 1); + val = XEXP (loc, 0); + + if (GET_CODE (val) == CONCAT) + { + vloc = XEXP (val, 1); + val = XEXP (val, 0); + } + + if (GET_CODE (vloc) == SET) + { + rtx vsrc = SET_SRC (vloc); + + gcc_assert (val != vsrc); + gcc_assert (vloc == uloc || VAL_NEEDS_RESOLUTION (loc)); + + vloc = SET_DEST (vloc); + + if (VAL_NEEDS_RESOLUTION (loc)) + val_resolve (set, val, vsrc, insn); + } + else if (VAL_NEEDS_RESOLUTION (loc)) + { + gcc_assert (GET_CODE (uloc) == SET + && GET_CODE (SET_SRC (uloc)) == REG); + val_resolve (set, val, SET_SRC (uloc), insn); + } + + if (VAL_HOLDS_TRACK_EXPR (loc)) + { + if (VAL_EXPR_IS_CLOBBERED (loc)) + { + if (REG_P (uloc)) + var_reg_delete (set, uloc, true); + else if (MEM_P (uloc)) + var_mem_delete (set, uloc, true); + } + else + { + bool copied_p = VAL_EXPR_IS_COPIED (loc); + rtx set_src = NULL; + enum var_init_status status = VAR_INIT_STATUS_INITIALIZED; + + if (GET_CODE (uloc) == SET) + { + set_src = SET_SRC (uloc); 
+ uloc = SET_DEST (uloc); + } + + if (copied_p) + { + status = find_src_status (set, set_src); + + set_src = find_src_set_src (set, set_src); + } + + if (REG_P (uloc)) + var_reg_delete_and_set (set, uloc, !copied_p, + status, set_src); + else if (MEM_P (uloc)) + var_mem_delete_and_set (set, uloc, !copied_p, + status, set_src); + } + } + else if (REG_P (uloc)) + var_regno_delete (set, REGNO (uloc)); + + val_store (set, val, vloc, insn); + + emit_notes_for_changes (NEXT_INSN (insn), EMIT_NOTE_BEFORE_INSN, + set->vars); } break; @@ -3202,13 +6877,14 @@ emit_notes_in_bb (basic_block bb) } if (REG_P (loc)) - var_reg_delete_and_set (&set, loc, true, VAR_INIT_STATUS_INITIALIZED, + var_reg_delete_and_set (set, loc, true, VAR_INIT_STATUS_INITIALIZED, set_src); else - var_mem_delete_and_set (&set, loc, true, VAR_INIT_STATUS_INITIALIZED, + var_mem_delete_and_set (set, loc, true, VAR_INIT_STATUS_INITIALIZED, set_src); - emit_notes_for_changes (NEXT_INSN (insn), EMIT_NOTE_BEFORE_INSN); + emit_notes_for_changes (NEXT_INSN (insn), EMIT_NOTE_BEFORE_INSN, + set->vars); } break; @@ -3224,15 +6900,16 @@ emit_notes_in_bb (basic_block bb) loc = SET_DEST (loc); } - src_status = find_src_status (&set, set_src); - set_src = find_src_set_src (&set, set_src); + src_status = find_src_status (set, set_src); + set_src = find_src_set_src (set, set_src); if (REG_P (loc)) - var_reg_delete_and_set (&set, loc, false, src_status, set_src); + var_reg_delete_and_set (set, loc, false, src_status, set_src); else - var_mem_delete_and_set (&set, loc, false, src_status, set_src); + var_mem_delete_and_set (set, loc, false, src_status, set_src); - emit_notes_for_changes (NEXT_INSN (insn), EMIT_NOTE_BEFORE_INSN); + emit_notes_for_changes (NEXT_INSN (insn), EMIT_NOTE_BEFORE_INSN, + set->vars); } break; @@ -3241,11 +6918,11 @@ emit_notes_in_bb (basic_block bb) rtx loc = VTI (bb)->mos[i].u.loc; if (REG_P (loc)) - var_reg_delete (&set, loc, false); + var_reg_delete (set, loc, false); else - var_mem_delete (&set, loc, false); + var_mem_delete (set, loc, false); - emit_notes_for_changes (insn, EMIT_NOTE_AFTER_INSN); + emit_notes_for_changes (insn, EMIT_NOTE_AFTER_INSN, set->vars); } break; @@ -3254,20 +6931,20 @@ emit_notes_in_bb (basic_block bb) { rtx loc = VTI (bb)->mos[i].u.loc; if (REG_P (loc)) - var_reg_delete (&set, loc, true); + var_reg_delete (set, loc, true); else - var_mem_delete (&set, loc, true); + var_mem_delete (set, loc, true); - emit_notes_for_changes (NEXT_INSN (insn), EMIT_NOTE_BEFORE_INSN); + emit_notes_for_changes (NEXT_INSN (insn), EMIT_NOTE_BEFORE_INSN, + set->vars); } break; case MO_ADJUST: - set.stack_adjust += VTI (bb)->mos[i].u.adjust; + set->stack_adjust += VTI (bb)->mos[i].u.adjust; break; } } - dataflow_set_destroy (&set); } /* Emit notes for the whole function. */ @@ -3276,30 +6953,49 @@ static void vt_emit_notes (void) { basic_block bb; - dataflow_set *last_out; - dataflow_set empty; + dataflow_set cur; gcc_assert (!htab_elements (changed_variables)); + /* Free memory occupied by the out hash tables, as they aren't used + anymore. */ + FOR_EACH_BB (bb) + dataflow_set_clear (&VTI (bb)->out); + /* Enable emitting notes by functions (mainly by set_variable_part and delete_variable_part). */ emit_notes = true; - dataflow_set_init (&empty); - last_out = &empty; + if (MAY_HAVE_DEBUG_INSNS) + changed_variables_stack = VEC_alloc (variable, heap, 40); + + dataflow_set_init (&cur); FOR_EACH_BB (bb) { /* Emit the notes for changes of variable locations between two subsequent basic blocks. 
*/ - emit_notes_for_differences (BB_HEAD (bb), last_out, &VTI (bb)->in); + emit_notes_for_differences (BB_HEAD (bb), &cur, &VTI (bb)->in); /* Emit the notes for the changes in the basic block itself. */ - emit_notes_in_bb (bb); + emit_notes_in_bb (bb, &cur); - last_out = &VTI (bb)->out; + /* Free memory occupied by the in hash table, we won't need it + again. */ + dataflow_set_clear (&VTI (bb)->in); } - dataflow_set_destroy (&empty); +#ifdef ENABLE_CHECKING + htab_traverse (shared_hash_htab (cur.vars), + emit_notes_for_differences_1, + shared_hash_htab (empty_shared_hash)); + if (MAY_HAVE_DEBUG_INSNS) + gcc_assert (htab_elements (value_chains) == 0); +#endif + dataflow_set_destroy (&cur); + + if (MAY_HAVE_DEBUG_INSNS) + VEC_free (variable, heap, changed_variables_stack); + emit_notes = false; } @@ -3346,6 +7042,7 @@ vt_add_function_parameters (void) enum machine_mode mode; HOST_WIDE_INT offset; dataflow_set *out; + decl_or_value dv; if (TREE_CODE (parm) != PARM_DECL) continue; @@ -3386,22 +7083,60 @@ vt_add_function_parameters (void) out = &VTI (ENTRY_BLOCK_PTR)->out; + dv = dv_from_decl (parm); + + if (target_for_debug_bind (parm) + /* We can't deal with these right now, because this kind of + variable is single-part. ??? We could handle parallels + that describe multiple locations for the same single + value, but ATM we don't. */ + && GET_CODE (incoming) != PARALLEL) + { + cselib_val *val; + + /* ??? We shouldn't ever hit this, but it may happen because + arguments passed by invisible reference aren't dealt with + above: incoming-rtl will have Pmode rather than the + expected mode for the type. */ + if (offset) + continue; + + val = cselib_lookup (var_lowpart (mode, incoming), mode, true); + + /* ??? Float-typed values in memory are not handled by + cselib. 
*/ + if (val) + { + cselib_preserve_value (val); + set_variable_part (out, val->val_rtx, dv, offset, + VAR_INIT_STATUS_INITIALIZED, NULL, INSERT); + dv = dv_from_value (val->val_rtx); + } + } + if (REG_P (incoming)) { incoming = var_lowpart (mode, incoming); gcc_assert (REGNO (incoming) < FIRST_PSEUDO_REGISTER); - attrs_list_insert (&out->regs[REGNO (incoming)], - parm, offset, incoming); - set_variable_part (out, incoming, parm, offset, VAR_INIT_STATUS_INITIALIZED, - NULL); + attrs_list_insert (&out->regs[REGNO (incoming)], dv, offset, + incoming); + set_variable_part (out, incoming, dv, offset, + VAR_INIT_STATUS_INITIALIZED, NULL, INSERT); } else if (MEM_P (incoming)) { incoming = var_lowpart (mode, incoming); - set_variable_part (out, incoming, parm, offset, - VAR_INIT_STATUS_INITIALIZED, NULL); + set_variable_part (out, incoming, dv, offset, + VAR_INIT_STATUS_INITIALIZED, NULL, INSERT); } } + + if (MAY_HAVE_DEBUG_INSNS) + { + cselib_preserve_only_values (true); + cselib_reset_table_with_next_value (cselib_get_next_unknown_value ()); + } + } /* Allocate and initialize the data structures for variable tracking @@ -3414,10 +7149,34 @@ vt_initialize (void) alloc_aux_for_blocks (sizeof (struct variable_tracking_info_def)); + if (MAY_HAVE_DEBUG_INSNS) + { + cselib_init (true); + scratch_regs = BITMAP_ALLOC (NULL); + valvar_pool = create_alloc_pool ("small variable_def pool", + sizeof (struct variable_def), 256); + } + else + { + scratch_regs = NULL; + valvar_pool = NULL; + } + FOR_EACH_BB (bb) { rtx insn; HOST_WIDE_INT pre, post = 0; + int count; + unsigned int next_value_before = cselib_get_next_unknown_value (); + unsigned int next_value_after = next_value_before; + + if (MAY_HAVE_DEBUG_INSNS) + { + cselib_record_sets_hook = count_with_sets; + if (dump_file && (dump_flags & TDF_DETAILS)) + fprintf (dump_file, "first value: %i\n", + cselib_get_next_unknown_value ()); + } /* Count the number of micro operations. */ VTI (bb)->n_mos = 0; @@ -3430,17 +7189,55 @@ vt_initialize (void) { insn_stack_adjust_offset_pre_post (insn, &pre, &post); if (pre) - VTI (bb)->n_mos++; + { + VTI (bb)->n_mos++; + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (GEN_INT (pre), bb, insn, + MO_ADJUST, dump_file); + } if (post) - VTI (bb)->n_mos++; + { + VTI (bb)->n_mos++; + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (GEN_INT (post), bb, insn, + MO_ADJUST, dump_file); + } } - note_uses (&PATTERN (insn), count_uses_1, insn); - note_stores (PATTERN (insn), count_stores, insn); + cselib_hook_called = false; + if (MAY_HAVE_DEBUG_INSNS) + { + cselib_process_insn (insn); + if (dump_file && (dump_flags & TDF_DETAILS)) + { + print_rtl_single (dump_file, insn); + dump_cselib_table (dump_file); + } + } + if (!cselib_hook_called) + count_with_sets (insn, 0, 0); if (CALL_P (insn)) - VTI (bb)->n_mos++; + { + VTI (bb)->n_mos++; + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (PATTERN (insn), bb, insn, + MO_CALL, dump_file); + } } } + count = VTI (bb)->n_mos; + + if (MAY_HAVE_DEBUG_INSNS) + { + cselib_preserve_only_values (false); + next_value_after = cselib_get_next_unknown_value (); + cselib_reset_table_with_next_value (next_value_before); + cselib_record_sets_hook = add_with_sets; + if (dump_file && (dump_flags & TDF_DETAILS)) + fprintf (dump_file, "first value: %i\n", + cselib_get_next_unknown_value ()); + } + /* Add the micro-operations to the array. 
*/ VTI (bb)->mos = XNEWVEC (micro_operation, VTI (bb)->n_mos); VTI (bb)->n_mos = 0; @@ -3449,8 +7246,6 @@ vt_initialize (void) { if (INSN_P (insn)) { - int n1, n2; - if (!frame_pointer_needed) { insn_stack_adjust_offset_pre_post (insn, &pre, &post); @@ -3461,62 +7256,25 @@ vt_initialize (void) mo->type = MO_ADJUST; mo->u.adjust = pre; mo->insn = insn; - } - } - - n1 = VTI (bb)->n_mos; - note_uses (&PATTERN (insn), add_uses_1, insn); - n2 = VTI (bb)->n_mos - 1; - - /* Order the MO_USEs to be before MO_USE_NO_VARs. */ - while (n1 < n2) - { - while (n1 < n2 && VTI (bb)->mos[n1].type == MO_USE) - n1++; - while (n1 < n2 && VTI (bb)->mos[n2].type == MO_USE_NO_VAR) - n2--; - if (n1 < n2) - { - micro_operation sw; - sw = VTI (bb)->mos[n1]; - VTI (bb)->mos[n1] = VTI (bb)->mos[n2]; - VTI (bb)->mos[n2] = sw; + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (PATTERN (insn), bb, insn, + MO_ADJUST, dump_file); } } - if (CALL_P (insn)) + cselib_hook_called = false; + if (MAY_HAVE_DEBUG_INSNS) { - micro_operation *mo = VTI (bb)->mos + VTI (bb)->n_mos++; - - mo->type = MO_CALL; - mo->insn = insn; - } - - n1 = VTI (bb)->n_mos; - /* This will record NEXT_INSN (insn), such that we can - insert notes before it without worrying about any - notes that MO_USEs might emit after the insn. */ - note_stores (PATTERN (insn), add_stores, insn); - n2 = VTI (bb)->n_mos - 1; - - /* Order the MO_CLOBBERs to be before MO_SETs. */ - while (n1 < n2) - { - while (n1 < n2 && VTI (bb)->mos[n1].type == MO_CLOBBER) - n1++; - while (n1 < n2 && (VTI (bb)->mos[n2].type == MO_SET - || VTI (bb)->mos[n2].type == MO_COPY)) - n2--; - if (n1 < n2) + cselib_process_insn (insn); + if (dump_file && (dump_flags & TDF_DETAILS)) { - micro_operation sw; - - sw = VTI (bb)->mos[n1]; - VTI (bb)->mos[n1] = VTI (bb)->mos[n2]; - VTI (bb)->mos[n2] = sw; + print_rtl_single (dump_file, insn); + dump_cselib_table (dump_file); } } + if (!cselib_hook_called) + add_with_sets (insn, 0, 0); if (!frame_pointer_needed && post) { @@ -3525,15 +7283,29 @@ vt_initialize (void) mo->type = MO_ADJUST; mo->u.adjust = post; mo->insn = insn; + + if (dump_file && (dump_flags & TDF_DETAILS)) + log_op_type (PATTERN (insn), bb, insn, + MO_ADJUST, dump_file); } } } + gcc_assert (count == VTI (bb)->n_mos); + if (MAY_HAVE_DEBUG_INSNS) + { + cselib_preserve_only_values (true); + gcc_assert (next_value_after == cselib_get_next_unknown_value ()); + cselib_reset_table_with_next_value (next_value_after); + cselib_record_sets_hook = NULL; + } } attrs_pool = create_alloc_pool ("attrs_def pool", sizeof (struct attrs_def), 1024); var_pool = create_alloc_pool ("variable_def pool", - sizeof (struct variable_def), 64); + sizeof (struct variable_def) + + (MAX_VAR_PARTS - 1) + * sizeof (((variable)NULL)->var_part[0]), 64); loc_chain_pool = create_alloc_pool ("location_chain_def pool", sizeof (struct location_chain_def), 1024); @@ -3546,18 +7318,61 @@ vt_initialize (void) variable_htab_free); changed_variables = htab_create (10, variable_htab_hash, variable_htab_eq, variable_htab_free); + if (MAY_HAVE_DEBUG_INSNS) + { + value_chain_pool = create_alloc_pool ("value_chain_def pool", + sizeof (struct value_chain_def), + 1024); + value_chains = htab_create (32, value_chain_htab_hash, + value_chain_htab_eq, NULL); + } /* Init the IN and OUT sets. 
*/ FOR_ALL_BB (bb) { VTI (bb)->visited = false; + VTI (bb)->flooded = false; dataflow_set_init (&VTI (bb)->in); dataflow_set_init (&VTI (bb)->out); + VTI (bb)->permp = NULL; } + VTI (ENTRY_BLOCK_PTR)->flooded = true; vt_add_function_parameters (); } +/* Get rid of all debug insns from the insn stream. */ + +static void +delete_debug_insns (void) +{ + basic_block bb; + rtx insn, next; + + if (!MAY_HAVE_DEBUG_INSNS) + return; + + FOR_EACH_BB (bb) + { + FOR_BB_INSNS_SAFE (bb, insn, next) + if (DEBUG_INSN_P (insn)) + delete_insn (insn); + } +} + +/* Run a fast, BB-local only version of var tracking, to take care of + information that we don't do global analysis on, such that not all + information is lost. If SKIPPED holds, we're skipping the global + pass entirely, so we should try to use information it would have + handled as well. */ + +static void +vt_debug_insns_local (bool skipped ATTRIBUTE_UNUSED) +{ + /* ??? Just skip it all for now. */ + delete_debug_insns (); +} + /* Free the data structures needed for variable tracking. */ static void @@ -3574,6 +7389,11 @@ vt_finalize (void) { dataflow_set_destroy (&VTI (bb)->in); dataflow_set_destroy (&VTI (bb)->out); + if (VTI (bb)->permp) + { + dataflow_set_destroy (VTI (bb)->permp); + XDELETE (VTI (bb)->permp); + } } free_aux_for_blocks (); htab_delete (empty_shared_hash->htab); @@ -3582,8 +7402,19 @@ vt_finalize (void) free_alloc_pool (var_pool); free_alloc_pool (loc_chain_pool); free_alloc_pool (shared_hash_pool); + + if (MAY_HAVE_DEBUG_INSNS) + { + htab_delete (value_chains); + free_alloc_pool (value_chain_pool); + free_alloc_pool (valvar_pool); + cselib_finish (); + BITMAP_FREE (scratch_regs); + scratch_regs = NULL; + } + if (vui_vec) - free (vui_vec); + XDELETEVEC (vui_vec); vui_vec = NULL; vui_allocated = 0; } @@ -3593,8 +7424,17 @@ unsigned int variable_tracking_main (void) { + if (flag_var_tracking_assignments < 0) + { + delete_debug_insns (); + return 0; + } + if (n_basic_blocks > 500 && n_edges / n_basic_blocks >= 20) - return 0; + { + vt_debug_insns_local (true); + return 0; + } mark_dfs_back_edges (); vt_initialize (); @@ -3603,12 +7443,12 @@ variable_tracking_main (void) if (!vt_stack_adjustments ()) { vt_finalize (); + vt_debug_insns_local (true); return 0; } } vt_find_locations (); - vt_emit_notes (); if (dump_file && (dump_flags & TDF_DETAILS)) { @@ -3616,7 +7456,10 @@ variable_tracking_main (void) dump_flow_info (dump_file, dump_flags); } + vt_emit_notes (); + vt_finalize (); + vt_debug_insns_local (false); return 0; }
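A pattern that recurs throughout the hunks above (set_slot_part, clobber_slot_part and delete_slot_part all use it) is unlinking a node from a singly linked location chain with a pointer-to-pointer walk: nextp always designates the link that points at the node under inspection, so a match can be spliced out by rewriting that link in place, with no trailing "previous node" variable. Below is a minimal, self-contained sketch of that idiom, not the pass's code; loc_node and delete_loc are hypothetical names, a plain int stands in for the rtx location, and free () stands in for pool_free (loc_chain_pool, node).

#include <stdlib.h>

struct loc_node
{
  int loc;			/* stand-in for the rtx location */
  struct loc_node *next;
};

/* Unlink and free the first node whose loc equals LOC, walking the
   chain through NEXTP so the predecessor's link can be rewritten in
   place, the same shape as the loops over
   var->var_part[pos].loc_chain in the patch.  */

static void
delete_loc (struct loc_node **nextp, int loc)
{
  struct loc_node *node, *next;

  for (node = *nextp; node; node = next)
    {
      next = node->next;
      if (node->loc == loc)
	{
	  *nextp = next;	/* Splice the node out...  */
	  free (node);		/* ...and release it.  */
	  return;
	}
      else
	nextp = &node->next;	/* Advance the link, not just the node.  */
    }
}

Called as delete_loc (&chain, 3), this removes the first node carrying 3 while keeping the rest of the chain, including the head pointer, consistent. The patched set_slot_part leans on the same variable for insertion as well: after its delete loop it resets nextp to &var->var_part[pos].loc_chain and adds the fresh node with node->next = *nextp; *nextp = node;, so a single link pointer serves both the unlink and the subsequent insert at the front of the chain.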