From mm@mail2.vx.sk  Sun Mar  6 10:48:15 2011
Return-Path: <mm@mail2.vx.sk>
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id 0B5DD106566C
	for <FreeBSD-gnats-submit@freebsd.org>; Sun,  6 Mar 2011 10:48:15 +0000 (UTC)
	(envelope-from mm@mail2.vx.sk)
Received: from mail2.vx.sk (mail2.vx.sk [IPv6:2a01:4f8:100:84a3::3])
	by mx1.freebsd.org (Postfix) with ESMTP id 484538FC15
	for <FreeBSD-gnats-submit@freebsd.org>; Sun,  6 Mar 2011 10:48:13 +0000 (UTC)
Received: from neo.vx.sk (localhost [127.0.0.1])
	by mail2.vx.sk (Postfix) with ESMTP id 58860F1DEA
	for <FreeBSD-gnats-submit@freebsd.org>; Sun,  6 Mar 2011 11:48:12 +0100 (CET)
Received: from mail2.vx.sk ([127.0.0.1])
	by neo.vx.sk (mail.vx.sk [127.0.0.1]) (amavisd-new, port 10024)
	with LMTP id vU0ij4I571ee for <FreeBSD-gnats-submit@freebsd.org>;
	Sun,  6 Mar 2011 11:43:09 +0100 (CET)
Received: by mail2.vx.sk (Postfix, from userid 1001)
	id BD757F1DD6; Sun,  6 Mar 2011 11:43:09 +0100 (CET)
Message-Id: <20110306104309.BD757F1DD6@mail2.vx.sk>
Date: Sun,  6 Mar 2011 11:43:09 +0100 (CET)
From: Martin Matuska <mm@FreeBSD.org>
Reply-To: Martin Matuska <mm@FreeBSD.org>
To: FreeBSD-gnats-submit@freebsd.org
Cc:
Subject: [PATCH] gcc: backport geode and core2 march options
X-Send-Pr-Version: 3.113
X-GNATS-Notify:

>Number:         155308
>Category:       gnu
>Synopsis:       [PATCH] gcc: backport geode and core2 march options
>Confidential:   no
>Severity:       non-critical
>Priority:       low
>Responsible:    mm
>State:          closed
>Quarter:        
>Keywords:       
>Date-Required:  
>Class:          change-request
>Submitter-Id:   current-users
>Arrival-Date:   Sun Mar 06 10:50:08 UTC 2011
>Closed-Date:    Sat Apr 30 22:19:46 UTC 2011
>Last-Modified:  Mon May  2 08:40:17 UTC 2011
>Originator:     Martin Matuska
>Release:        FreeBSD 9-CURRENT amd64
>Organization:
>Environment:
>Description:
Backport the AMD Geode and Intel Core 2 CPU types to gcc.

These options are supported in all later GCC releases, and the change is
GPLv2-licensed. I recommend backporting these options to contrib/gcc
and adding the new flags to share/mk/bsd.cpu.mk.

The gcc-4.3 branch SVN revisions 118090, 118973 and 120846 are fully
compatible with our code and apply almost completely cleanly (the only
reject is one line where bswap32/64 have been added).

The SSSE3 instruction set is not supported by this backport, but the same
was true of gcc-4.3 at r118973 (SSSE3 support was added to gcc-4.3 after
the core2/geode support). The latest binutils import in the base system of
head does support SSSE3, so SSSE3 support may be imported at a later point.

References:
http://gcc.gnu.org/viewcvs?view=revision&revision=118090
http://gcc.gnu.org/viewcvs?view=revision&revision=118973
http://gcc.gnu.org/viewcvs?view=revision&revision=120846
>How-To-Repeat:
>Fix:
Index: contrib/gcc/doc/invoke.texi
===================================================================
--- contrib/gcc/doc/invoke.texi	(revision 219330)
+++ contrib/gcc/doc/invoke.texi	(working copy)
@@ -9369,6 +9369,9 @@
 @item nocona
 Improved version of Intel Pentium4 CPU with 64-bit extensions, MMX, SSE,
 SSE2 and SSE3 instruction set support.
+@item core2
+Intel Core2 CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3 and SSSE3
+instruction set support.
 @item k6
 AMD K6 CPU with MMX instruction set support.
 @item k6-2, k6-3
@@ -9396,6 +9399,8 @@
 @item c3-2
 Via C3-2 CPU with MMX and SSE instruction set support.  (No scheduling is
 implemented for this chip.)
+@item geode
+Embedded AMD CPU with MMX and 3dNOW! instruction set support.
 @end table
 
 While picking a specific @var{cpu-type} will schedule things appropriately
Index: contrib/gcc/config.gcc
===================================================================
--- contrib/gcc/config.gcc	(revision 219330)
+++ contrib/gcc/config.gcc	(working copy)
@@ -1207,14 +1207,14 @@
 		# FIXME: -m64 for i[34567]86-*-* should be allowed just
 		# like -m32 for x86_64-*-*.
 		case X"${with_cpu}" in
-		Xgeneric|Xnocona|Xx86-64|Xk8|Xopteron|Xathlon64|Xathlon-fx)
+		Xgeneric|Xcore2|Xnocona|Xx86-64|Xk8|Xopteron|Xathlon64|Xathlon-fx)
 			;;
 		X)
 			with_cpu=generic
 			;;
 		*)
 			echo "Unsupported CPU used in --with-cpu=$with_cpu, supported values:" 1>&2
-			echo "generic nocona x86-64 k8 opteron athlon64 athlon-fx" 1>&2
+			echo "generic core2 nocona x86-64 k8 opteron athlon64 athlon-fx" 1>&2
 			exit 1
 			;;
 		esac
@@ -2537,6 +2537,9 @@
         nocona-*)
           with_cpu=nocona
           ;;
+	core2-*)
+	  with_cpu=core2
+	  ;;
         pentium_m-*)
           with_cpu=pentium-m
           ;;
@@ -2556,6 +2559,9 @@
         nocona-*)
           with_cpu=nocona
           ;;
+	core2-*)
+	  with_cpu=core2
+	  ;;
         *)
           with_cpu=generic
           ;;
@@ -2787,7 +2793,7 @@
 				esac
 				# OK
 				;;
-			"" | k8 | opteron | athlon64 | athlon-fx | nocona | generic)
+			"" | k8 | opteron | athlon64 | athlon-fx | nocona | core2 | generic)
 				# OK
 				;;
 			*)
Index: contrib/gcc/config/i386/i386.h
===================================================================
--- contrib/gcc/config/i386/i386.h	(revision 219330)
+++ contrib/gcc/config/i386/i386.h	(working copy)
@@ -130,12 +130,14 @@
 #define TARGET_486 (ix86_tune == PROCESSOR_I486)
 #define TARGET_PENTIUM (ix86_tune == PROCESSOR_PENTIUM)
 #define TARGET_PENTIUMPRO (ix86_tune == PROCESSOR_PENTIUMPRO)
+#define TARGET_GEODE (ix86_tune == PROCESSOR_GEODE)
 #define TARGET_K6 (ix86_tune == PROCESSOR_K6)
 #define TARGET_ATHLON (ix86_tune == PROCESSOR_ATHLON)
 #define TARGET_PENTIUM4 (ix86_tune == PROCESSOR_PENTIUM4)
 #define TARGET_K8 (ix86_tune == PROCESSOR_K8)
 #define TARGET_ATHLON_K8 (TARGET_K8 || TARGET_ATHLON)
 #define TARGET_NOCONA (ix86_tune == PROCESSOR_NOCONA)
+#define TARGET_CORE2 (ix86_tune == PROCESSOR_CORE2)
 #define TARGET_GENERIC32 (ix86_tune == PROCESSOR_GENERIC32)
 #define TARGET_GENERIC64 (ix86_tune == PROCESSOR_GENERIC64)
 #define TARGET_GENERIC (TARGET_GENERIC32 || TARGET_GENERIC64)
@@ -376,6 +378,10 @@
 	      break;						\
 	    }							\
 	}							\
+      else if (TARGET_GEODE)					\
+	{							\
+	  builtin_define ("__tune_geode__");			\
+	}							\
       else if (TARGET_K6)					\
 	{							\
 	  builtin_define ("__tune_k6__");			\
@@ -397,6 +403,8 @@
 	builtin_define ("__tune_pentium4__");			\
       else if (TARGET_NOCONA)					\
 	builtin_define ("__tune_nocona__");			\
+      else if (TARGET_CORE2)					\
+	builtin_define ("__tune_core2__");			\
 								\
       if (TARGET_MMX)						\
 	builtin_define ("__MMX__");				\
@@ -437,6 +445,11 @@
 	  builtin_define ("__pentiumpro");			\
 	  builtin_define ("__pentiumpro__");			\
 	}							\
+      else if (ix86_arch == PROCESSOR_GEODE)			\
+	{							\
+	  builtin_define ("__geode");				\
+	  builtin_define ("__geode__");				\
+	}							\
       else if (ix86_arch == PROCESSOR_K6)			\
 	{							\
 								\
@@ -470,6 +483,11 @@
 	  builtin_define ("__nocona");				\
 	  builtin_define ("__nocona__");			\
 	}							\
+      else if (ix86_arch == PROCESSOR_CORE2)			\
+	{							\
+	  builtin_define ("__core2");				\
+	  builtin_define ("__core2__");				\
+	}							\
     }								\
   while (0)
 
@@ -481,23 +499,25 @@
 #define TARGET_CPU_DEFAULT_pentium2 5
 #define TARGET_CPU_DEFAULT_pentium3 6
 #define TARGET_CPU_DEFAULT_pentium4 7
-#define TARGET_CPU_DEFAULT_k6 8
-#define TARGET_CPU_DEFAULT_k6_2 9
-#define TARGET_CPU_DEFAULT_k6_3 10
-#define TARGET_CPU_DEFAULT_athlon 11
-#define TARGET_CPU_DEFAULT_athlon_sse 12
-#define TARGET_CPU_DEFAULT_k8 13
-#define TARGET_CPU_DEFAULT_pentium_m 14
-#define TARGET_CPU_DEFAULT_prescott 15
-#define TARGET_CPU_DEFAULT_nocona 16
-#define TARGET_CPU_DEFAULT_generic 17
+#define TARGET_CPU_DEFAULT_geode 8
+#define TARGET_CPU_DEFAULT_k6 9
+#define TARGET_CPU_DEFAULT_k6_2 10
+#define TARGET_CPU_DEFAULT_k6_3 11
+#define TARGET_CPU_DEFAULT_athlon 12
+#define TARGET_CPU_DEFAULT_athlon_sse 13
+#define TARGET_CPU_DEFAULT_k8 14
+#define TARGET_CPU_DEFAULT_pentium_m 15
+#define TARGET_CPU_DEFAULT_prescott 16
+#define TARGET_CPU_DEFAULT_nocona 17
+#define TARGET_CPU_DEFAULT_core2 18
+#define TARGET_CPU_DEFAULT_generic 19
 
 #define TARGET_CPU_DEFAULT_NAMES {"i386", "i486", "pentium", "pentium-mmx",\
 				  "pentiumpro", "pentium2", "pentium3", \
-				  "pentium4", "k6", "k6-2", "k6-3",\
+                                  "pentium4", "geode", "k6", "k6-2", "k6-3", \
 				  "athlon", "athlon-4", "k8", \
 				  "pentium-m", "prescott", "nocona", \
-				  "generic"}
+				  "core2", "generic"}
 
 #ifndef CC1_SPEC
 #define CC1_SPEC "%(cc1_cpu) "
@@ -2077,11 +2097,13 @@
   PROCESSOR_I486,			/* 80486DX, 80486SX, 80486DX[24] */
   PROCESSOR_PENTIUM,
   PROCESSOR_PENTIUMPRO,
+  PROCESSOR_GEODE,
   PROCESSOR_K6,
   PROCESSOR_ATHLON,
   PROCESSOR_PENTIUM4,
   PROCESSOR_K8,
   PROCESSOR_NOCONA,
+  PROCESSOR_CORE2,
   PROCESSOR_GENERIC32,
   PROCESSOR_GENERIC64,
   PROCESSOR_max
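A note on the TARGET_CPU_DEFAULT_* renumbering in the i386.h hunk above: those
values index the TARGET_CPU_DEFAULT_NAMES table, so inserting "geode" shifts
every later entry by one. A small sketch (not part of the patch) of that
invariant in Python:

```python
# Names exactly as in the patched TARGET_CPU_DEFAULT_NAMES table.
names = ["i386", "i486", "pentium", "pentium-mmx",
         "pentiumpro", "pentium2", "pentium3",
         "pentium4", "geode", "k6", "k6-2", "k6-3",
         "athlon", "athlon-4", "k8",
         "pentium-m", "prescott", "nocona",
         "core2", "generic"]

# The renumbered constants from the hunk above.
TARGET_CPU_DEFAULT_geode = 8
TARGET_CPU_DEFAULT_core2 = 18
TARGET_CPU_DEFAULT_generic = 19

# Each constant must index its own name, which is why every value
# after the inserted "geode" had to be bumped by one.
assert names[TARGET_CPU_DEFAULT_geode] == "geode"
assert names[TARGET_CPU_DEFAULT_core2] == "core2"
assert names[TARGET_CPU_DEFAULT_generic] == "generic"
```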
Index: contrib/gcc/config/i386/i386.md
===================================================================
--- contrib/gcc/config/i386/i386.md	(revision 219330)
+++ contrib/gcc/config/i386/i386.md	(working copy)
@@ -187,7 +187,7 @@
 
 ;; Processor type.  This attribute must exactly match the processor_type
 ;; enumeration in i386.h.
-(define_attr "cpu" "i386,i486,pentium,pentiumpro,k6,athlon,pentium4,k8,nocona,generic32,generic64"
+(define_attr "cpu" "i386,i486,pentium,pentiumpro,geode,k6,athlon,pentium4,k8,nocona,core2,generic32,generic64"
   (const (symbol_ref "ix86_tune")))
 
 ;; A basic instruction type.  Refinements due to arguments to be
@@ -473,6 +473,7 @@
 (include "ppro.md")
 (include "k6.md")
 (include "athlon.md")
+(include "geode.md")
 
 
 ;; Operand and operator predicates and constraints
Index: contrib/gcc/config/i386/geode.md
===================================================================
--- contrib/gcc/config/i386/geode.md	(revision 0)
+++ contrib/gcc/config/i386/geode.md	(revision 0)
@@ -0,0 +1,153 @@
+;; Geode Scheduling
+;; Copyright (C) 2006
+;; Free Software Foundation, Inc.
+;;
+;; This file is part of GCC.
+;;
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 2, or (at your option)
+;; any later version.
+;;
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;; GNU General Public License for more details.
+;;
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING.  If not, write to
+;; the Free Software Foundation, 51 Franklin Street, Fifth Floor,
+;; Boston, MA 02110-1301, USA.
+;;
+;; The Geode architecture is one insn issue processor.
+;;
+;; This description is based on data from the following documents:
+;;
+;;    "AMD Geode GX Processor Data Book"
+;;    Advanced Micro Devices, Inc., Aug 2005.
+;;
+;;    "AMD Geode LX Processor Data Book"
+;;    Advanced Micro Devices, Inc., Jan 2006.
+;;
+;;
+;; CPU execution units of the Geode:
+;;
+;; issue	describes the issue pipeline.
+;; alu		describes the Integer unit
+;; fpu		describes the FP unit
+;;
+;; The fp unit is out of order execution unit with register renaming.
+;; There is also memory management unit and execution pipeline for
+;; load/store operations.  We ignore it and difference between insns
+;; using memory and registers.
+
+(define_automaton "geode")
+
+(define_cpu_unit "geode_issue,geode_alu,geode_fpu" "geode")
+
+(define_insn_reservation "alu" 1
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "alu,alu1,negnot,icmp,lea,test,imov,imovx,icmov,incdec,setcc"))
+			 "geode_issue,geode_alu")
+
+(define_insn_reservation "shift" 2
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "ishift,ishift1,rotate,rotate1,cld"))
+			 "geode_issue,geode_alu*2")
+
+(define_insn_reservation "imul" 7
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "imul"))
+			 "geode_issue,geode_alu*7")
+
+(define_insn_reservation "idiv" 40
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "idiv"))
+			 "geode_issue,geode_alu*40")
+
+;; The branch unit.
+(define_insn_reservation "call" 2
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "call,callv"))
+			 "geode_issue,geode_alu*2")
+
+(define_insn_reservation "geode_branch" 1
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "ibr"))
+			 "geode_issue,geode_alu")
+
+(define_insn_reservation "geode_pop_push" 1
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "pop,push"))
+			 "geode_issue,geode_alu")
+
+(define_insn_reservation "geode_leave" 2
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "leave"))
+			 "geode_issue,geode_alu*2")
+
+(define_insn_reservation "geode_load_str" 4
+			 (and (eq_attr "cpu" "geode")
+			      (and (eq_attr "type" "str")
+				   (eq_attr "memory" "load,both")))
+			 "geode_issue,geode_alu*4")
+
+(define_insn_reservation "geode_store_str" 2
+			 (and (eq_attr "cpu" "geode")
+			      (and (eq_attr "type" "str")
+				   (eq_attr "memory" "store")))
+			 "geode_issue,geode_alu*2")
+
+;; Be optimistic
+(define_insn_reservation "geode_unknown" 1
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "multi,other"))
+			 "geode_issue,geode_alu")
+
+;; FPU
+
+(define_insn_reservation "geode_fop" 6
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "fop,fcmp"))
+			 "geode_issue,geode_fpu*6")
+
+(define_insn_reservation "geode_fsimple" 1
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "fmov,fcmov,fsgn,fxch"))
+			 "geode_issue,geode_fpu")
+
+(define_insn_reservation "geode_fist" 4
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "fistp,fisttp"))
+			 "geode_issue,geode_fpu*4")
+
+(define_insn_reservation "geode_fmul" 10
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "fmul"))
+			 "geode_issue,geode_fpu*10")
+
+(define_insn_reservation "geode_fdiv" 47
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "fdiv"))
+			 "geode_issue,geode_fpu*47")
+
+;; We use minimal latency (fsin) here
+(define_insn_reservation "geode_fpspc" 54
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "fpspc"))
+			 "geode_issue,geode_fpu*54")
+
+(define_insn_reservation "geode_frndint" 12
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "frndint"))
+			 "geode_issue,geode_fpu*12")
+
+(define_insn_reservation "geode_mmxmov" 1
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "mmxmov"))
+			 "geode_issue,geode_fpu")
+
+(define_insn_reservation "geode_mmx" 2
+			 (and (eq_attr "cpu" "geode")
+			      (eq_attr "type" "mmx,mmxadd,mmxmul,mmxcmp,mmxcvt,mmxshft"))
+			 "geode_issue,geode_fpu*2")
Index: contrib/gcc/config/i386/i386.c
===================================================================
--- contrib/gcc/config/i386/i386.c	(revision 219330)
+++ contrib/gcc/config/i386/i386.c	(working copy)
@@ -336,6 +336,60 @@
 };
 
 static const
+struct processor_costs geode_cost = {
+  COSTS_N_INSNS (1),			/* cost of an add instruction */
+  COSTS_N_INSNS (1),			/* cost of a lea instruction */
+  COSTS_N_INSNS (2),			/* variable shift costs */
+  COSTS_N_INSNS (1),			/* constant shift costs */
+  {COSTS_N_INSNS (3),			/* cost of starting multiply for QI */
+   COSTS_N_INSNS (4),			/*                               HI */
+   COSTS_N_INSNS (7),			/*                               SI */
+   COSTS_N_INSNS (7),			/*                               DI */
+   COSTS_N_INSNS (7)},			/*                               other */
+  0,					/* cost of multiply per each bit set */
+  {COSTS_N_INSNS (15),			/* cost of a divide/mod for QI */
+   COSTS_N_INSNS (23),			/*                          HI */
+   COSTS_N_INSNS (39),			/*                          SI */
+   COSTS_N_INSNS (39),			/*                          DI */
+   COSTS_N_INSNS (39)},			/*                          other */
+  COSTS_N_INSNS (1),			/* cost of movsx */
+  COSTS_N_INSNS (1),			/* cost of movzx */
+  8,					/* "large" insn */
+  4,					/* MOVE_RATIO */
+  1,					/* cost for loading QImode using movzbl */
+  {1, 1, 1},				/* cost of loading integer registers
+					   in QImode, HImode and SImode.
+					   Relative to reg-reg move (2).  */
+  {1, 1, 1},				/* cost of storing integer registers */
+  1,					/* cost of reg,reg fld/fst */
+  {1, 1, 1},				/* cost of loading fp registers
+					   in SFmode, DFmode and XFmode */
+  {4, 6, 6},				/* cost of storing fp registers
+					   in SFmode, DFmode and XFmode */
+
+  1,					/* cost of moving MMX register */
+  {1, 1},				/* cost of loading MMX registers
+					   in SImode and DImode */
+  {1, 1},				/* cost of storing MMX registers
+					   in SImode and DImode */
+  1,					/* cost of moving SSE register */
+  {1, 1, 1},				/* cost of loading SSE registers
+					   in SImode, DImode and TImode */
+  {1, 1, 1},				/* cost of storing SSE registers
+					   in SImode, DImode and TImode */
+  1,					/* MMX or SSE register to integer */
+  32,					/* size of prefetch block */
+  1,					/* number of parallel prefetches */
+  1,					/* Branch cost */
+  COSTS_N_INSNS (6),			/* cost of FADD and FSUB insns.  */
+  COSTS_N_INSNS (11),			/* cost of FMUL instruction.  */
+  COSTS_N_INSNS (47),			/* cost of FDIV instruction.  */
+  COSTS_N_INSNS (1),			/* cost of FABS instruction.  */
+  COSTS_N_INSNS (1),			/* cost of FCHS instruction.  */
+  COSTS_N_INSNS (54),			/* cost of FSQRT instruction.  */
+};
+
+static const
 struct processor_costs k6_cost = {
   COSTS_N_INSNS (1),			/* cost of an add instruction */
   COSTS_N_INSNS (2),			/* cost of a lea instruction */
@@ -600,6 +654,58 @@
   COSTS_N_INSNS (44),			/* cost of FSQRT instruction.  */
 };
 
+static const
+struct processor_costs core2_cost = {
+  COSTS_N_INSNS (1),			/* cost of an add instruction */
+  COSTS_N_INSNS (1) + 1,		/* cost of a lea instruction */
+  COSTS_N_INSNS (1),			/* variable shift costs */
+  COSTS_N_INSNS (1),			/* constant shift costs */
+  {COSTS_N_INSNS (3),			/* cost of starting multiply for QI */
+   COSTS_N_INSNS (3),			/*                               HI */
+   COSTS_N_INSNS (3),			/*                               SI */
+   COSTS_N_INSNS (3),			/*                               DI */
+   COSTS_N_INSNS (3)},			/*                               other */
+  0,					/* cost of multiply per each bit set */
+  {COSTS_N_INSNS (22),			/* cost of a divide/mod for QI */
+   COSTS_N_INSNS (22),			/*                          HI */
+   COSTS_N_INSNS (22),			/*                          SI */
+   COSTS_N_INSNS (22),			/*                          DI */
+   COSTS_N_INSNS (22)},			/*                          other */
+  COSTS_N_INSNS (1),			/* cost of movsx */
+  COSTS_N_INSNS (1),			/* cost of movzx */
+  8,					/* "large" insn */
+  16,					/* MOVE_RATIO */
+  2,					/* cost for loading QImode using movzbl */
+  {6, 6, 6},				/* cost of loading integer registers
+					   in QImode, HImode and SImode.
+					   Relative to reg-reg move (2).  */
+  {4, 4, 4},				/* cost of storing integer registers */
+  2,					/* cost of reg,reg fld/fst */
+  {6, 6, 6},				/* cost of loading fp registers
+					   in SFmode, DFmode and XFmode */
+  {4, 4, 4},				/* cost of loading integer registers */
+  2,					/* cost of moving MMX register */
+  {6, 6},				/* cost of loading MMX registers
+					   in SImode and DImode */
+  {4, 4},				/* cost of storing MMX registers
+					   in SImode and DImode */
+  2,					/* cost of moving SSE register */
+  {6, 6, 6},				/* cost of loading SSE registers
+					   in SImode, DImode and TImode */
+  {4, 4, 4},				/* cost of storing SSE registers
+					   in SImode, DImode and TImode */
+  2,					/* MMX or SSE register to integer */
+  128,					/* size of prefetch block */
+  8,					/* number of parallel prefetches */
+  3,					/* Branch cost */
+  COSTS_N_INSNS (3),			/* cost of FADD and FSUB insns.  */
+  COSTS_N_INSNS (5),			/* cost of FMUL instruction.  */
+  COSTS_N_INSNS (32),			/* cost of FDIV instruction.  */
+  COSTS_N_INSNS (1),			/* cost of FABS instruction.  */
+  COSTS_N_INSNS (1),			/* cost of FCHS instruction.  */
+  COSTS_N_INSNS (58),			/* cost of FSQRT instruction.  */
+};
+
 /* Generic64 should produce code tuned for Nocona and K8.  */
 static const
 struct processor_costs generic64_cost = {
@@ -721,38 +827,41 @@
 #define m_486 (1<<PROCESSOR_I486)
 #define m_PENT (1<<PROCESSOR_PENTIUM)
 #define m_PPRO (1<<PROCESSOR_PENTIUMPRO)
+#define m_GEODE  (1<<PROCESSOR_GEODE)
+#define m_K6_GEODE  (m_K6 | m_GEODE)
 #define m_K6  (1<<PROCESSOR_K6)
 #define m_ATHLON  (1<<PROCESSOR_ATHLON)
 #define m_PENT4  (1<<PROCESSOR_PENTIUM4)
 #define m_K8  (1<<PROCESSOR_K8)
 #define m_ATHLON_K8  (m_K8 | m_ATHLON)
 #define m_NOCONA  (1<<PROCESSOR_NOCONA)
+#define m_CORE2  (1<<PROCESSOR_CORE2)
 #define m_GENERIC32 (1<<PROCESSOR_GENERIC32)
 #define m_GENERIC64 (1<<PROCESSOR_GENERIC64)
 #define m_GENERIC (m_GENERIC32 | m_GENERIC64)
 
 /* Generic instruction choice should be common subset of supported CPUs
-   (PPro/PENT4/NOCONA/Athlon/K8).  */
+   (PPro/PENT4/NOCONA/CORE2/Athlon/K8).  */
 
 /* Leave is not affecting Nocona SPEC2000 results negatively, so enabling for
    Generic64 seems like good code size tradeoff.  We can't enable it for 32bit
    generic because it is not working well with PPro base chips.  */
-const int x86_use_leave = m_386 | m_K6 | m_ATHLON_K8 | m_GENERIC64;
-const int x86_push_memory = m_386 | m_K6 | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
+const int x86_use_leave = m_386 | m_K6_GEODE | m_ATHLON_K8 | m_CORE2 | m_GENERIC64;
+const int x86_push_memory = m_386 | m_K6_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 const int x86_zero_extend_with_and = m_486 | m_PENT;
-const int x86_movx = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_GENERIC /* m_386 | m_K6 */;
+const int x86_movx = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC | m_GEODE /* m_386 | m_K6 */;
 const int x86_double_with_add = ~m_386;
 const int x86_use_bit_test = m_386;
-const int x86_unroll_strlen = m_486 | m_PENT | m_PPRO | m_ATHLON_K8 | m_K6 | m_GENERIC;
-const int x86_cmove = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA;
+const int x86_unroll_strlen = m_486 | m_PENT | m_PPRO | m_ATHLON_K8 | m_K6 | m_CORE2 | m_GENERIC;
+const int x86_cmove = m_PPRO | m_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA;
 const int x86_3dnow_a = m_ATHLON_K8;
-const int x86_deep_branch = m_PPRO | m_K6 | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
+const int x86_deep_branch = m_PPRO | m_K6_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 /* Branch hints were put in P4 based on simulation result. But
    after P4 was made, no performance benefit was observed with
    branch hints. It also increases the code size. As the result,
    icc never generates branch hints.  */
 const int x86_branch_hints = 0;
-const int x86_use_sahf = m_PPRO | m_K6 | m_PENT4 | m_NOCONA | m_GENERIC32; /*m_GENERIC | m_ATHLON_K8 ? */
+const int x86_use_sahf = m_PPRO | m_K6_GEODE | m_PENT4 | m_NOCONA | m_GENERIC32; /*m_GENERIC | m_ATHLON_K8 ? */
 /* We probably ought to watch for partial register stalls on Generic32
    compilation setting as well.  However in current implementation the
    partial register stalls are not eliminated very well - they can
@@ -762,15 +871,15 @@
    with partial reg. dependencies used by Athlon/P4 based chips, it is better
    to leave it off for generic32 for now.  */
 const int x86_partial_reg_stall = m_PPRO;
-const int x86_partial_flag_reg_stall = m_GENERIC;
-const int x86_use_himode_fiop = m_386 | m_486 | m_K6;
-const int x86_use_simode_fiop = ~(m_PPRO | m_ATHLON_K8 | m_PENT | m_GENERIC);
+const int x86_partial_flag_reg_stall =  m_CORE2 | m_GENERIC;
+const int x86_use_himode_fiop = m_386 | m_486 | m_K6_GEODE;
+const int x86_use_simode_fiop = ~(m_PPRO | m_ATHLON_K8 | m_PENT | m_CORE2 | m_GENERIC);
 const int x86_use_mov0 = m_K6;
-const int x86_use_cltd = ~(m_PENT | m_K6 | m_GENERIC);
+const int x86_use_cltd = ~(m_PENT | m_K6 | m_CORE2 | m_GENERIC);
 const int x86_read_modify_write = ~m_PENT;
 const int x86_read_modify = ~(m_PENT | m_PPRO);
 const int x86_split_long_moves = m_PPRO;
-const int x86_promote_QImode = m_K6 | m_PENT | m_386 | m_486 | m_ATHLON_K8 | m_GENERIC; /* m_PENT4 ? */
+const int x86_promote_QImode = m_K6_GEODE | m_PENT | m_386 | m_486 | m_ATHLON_K8 | m_CORE2 | m_GENERIC; /* m_PENT4 ? */
 const int x86_fast_prefix = ~(m_PENT | m_486 | m_386);
 const int x86_single_stringop = m_386 | m_PENT4 | m_NOCONA;
 const int x86_qimode_math = ~(0);
@@ -780,18 +889,18 @@
    if our scheme for avoiding partial stalls was more effective.  */
 const int x86_himode_math = ~(m_PPRO);
 const int x86_promote_hi_regs = m_PPRO;
-const int x86_sub_esp_4 = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_GENERIC;
-const int x86_sub_esp_8 = m_ATHLON_K8 | m_PPRO | m_386 | m_486 | m_PENT4 | m_NOCONA | m_GENERIC;
-const int x86_add_esp_4 = m_ATHLON_K8 | m_K6 | m_PENT4 | m_NOCONA | m_GENERIC;
-const int x86_add_esp_8 = m_ATHLON_K8 | m_PPRO | m_K6 | m_386 | m_486 | m_PENT4 | m_NOCONA | m_GENERIC;
-const int x86_integer_DFmode_moves = ~(m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC);
-const int x86_partial_reg_dependency = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
-const int x86_memory_mismatch_stall = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
-const int x86_accumulate_outgoing_args = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC;
-const int x86_prologue_using_move = m_ATHLON_K8 | m_PPRO | m_GENERIC;
-const int x86_epilogue_using_move = m_ATHLON_K8 | m_PPRO | m_GENERIC;
+const int x86_sub_esp_4 = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
+const int x86_sub_esp_8 = m_ATHLON_K8 | m_PPRO | m_386 | m_486 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
+const int x86_add_esp_4 = m_ATHLON_K8 | m_K6_GEODE | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
+const int x86_add_esp_8 = m_ATHLON_K8 | m_PPRO | m_K6_GEODE | m_386 | m_486 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
+const int x86_integer_DFmode_moves = ~(m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC | m_GEODE);
+const int x86_partial_reg_dependency = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
+const int x86_memory_mismatch_stall = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
+const int x86_accumulate_outgoing_args = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC;
+const int x86_prologue_using_move = m_ATHLON_K8 | m_PPRO | m_CORE2 | m_GENERIC;
+const int x86_epilogue_using_move = m_ATHLON_K8 | m_PPRO | m_CORE2 | m_GENERIC;
 const int x86_shift1 = ~m_486;
-const int x86_arch_always_fancy_math_387 = m_PENT | m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
+const int x86_arch_always_fancy_math_387 = m_PENT | m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 /* In Generic model we have an conflict here in between PPro/Pentium4 based chips
    that thread 128bit SSE registers as single units versus K8 based chips that
    divide SSE registers to two 64bit halves.
@@ -801,7 +910,7 @@
    this option on P4 brings over 20% SPECfp regression, while enabling it on
    K8 brings roughly 2.4% regression that can be partly masked by careful scheduling
    of moves.  */
-const int x86_sse_partial_reg_dependency = m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC;
+const int x86_sse_partial_reg_dependency = m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC;
 /* Set for machines where the type and dependencies are resolved on SSE
    register parts instead of whole registers, so we may maintain just
    lower part of scalar values in proper format leaving the upper part
@@ -810,18 +919,18 @@
 const int x86_sse_typeless_stores = m_ATHLON_K8;
 const int x86_sse_load0_by_pxor = m_PPRO | m_PENT4 | m_NOCONA;
 const int x86_use_ffreep = m_ATHLON_K8;
-const int x86_rep_movl_optimal = m_386 | m_PENT | m_PPRO | m_K6;
-const int x86_use_incdec = ~(m_PENT4 | m_NOCONA | m_GENERIC);
+const int x86_rep_movl_optimal = m_386 | m_PENT | m_PPRO | m_K6_GEODE | m_CORE2;
+const int x86_use_incdec = ~(m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC);
 
 /* ??? Allowing interunit moves makes it all too easy for the compiler to put
    integer data in xmm registers.  Which results in pretty abysmal code.  */
 const int x86_inter_unit_moves = 0 /* ~(m_ATHLON_K8) */;
 
-const int x86_ext_80387_constants = m_K6 | m_ATHLON | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC32;
+const int x86_ext_80387_constants = m_K6_GEODE | m_ATHLON | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC32;
 /* Some CPU cores are not able to predict more than 4 branch instructions in
    the 16 byte window.  */
-const int x86_four_jump_limit = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
-const int x86_schedule = m_PPRO | m_ATHLON_K8 | m_K6 | m_PENT | m_GENERIC;
+const int x86_four_jump_limit = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
+const int x86_schedule = m_PPRO | m_ATHLON_K8 | m_K6_GEODE | m_PENT | m_CORE2 | m_GENERIC;
 const int x86_use_bt = m_ATHLON_K8;
 /* Compare and exchange was added for 80486.  */
 const int x86_cmpxchg = ~m_386;
@@ -831,7 +940,7 @@
 const int x86_cmpxchg16b = m_NOCONA;
 /* Exchange and add was added for 80486.  */
 const int x86_xadd = ~m_386;
-const int x86_pad_returns = m_ATHLON_K8 | m_GENERIC;
+const int x86_pad_returns = m_ATHLON_K8 | m_CORE2 | m_GENERIC;
 
 /* In case the average insn count for single function invocation is
    lower than this constant, emit fast (but longer) prologue and
@@ -1455,11 +1564,13 @@
       {&i486_cost, 0, 0, 16, 15, 16, 15, 16},
       {&pentium_cost, 0, 0, 16, 7, 16, 7, 16},
       {&pentiumpro_cost, 0, 0, 16, 15, 16, 7, 16},
+      {&geode_cost, 0, 0, 0, 0, 0, 0, 0},
       {&k6_cost, 0, 0, 32, 7, 32, 7, 32},
       {&athlon_cost, 0, 0, 16, 7, 16, 7, 16},
       {&pentium4_cost, 0, 0, 0, 0, 0, 0, 0},
       {&k8_cost, 0, 0, 16, 7, 16, 7, 16},
       {&nocona_cost, 0, 0, 0, 0, 0, 0, 0},
+      {&core2_cost, 0, 0, 16, 7, 16, 7, 16},
       {&generic32_cost, 0, 0, 16, 7, 16, 7, 16},
       {&generic64_cost, 0, 0, 16, 7, 16, 7, 16}
     };
@@ -1506,6 +1617,11 @@
 				        | PTA_MMX | PTA_PREFETCH_SSE},
       {"nocona", PROCESSOR_NOCONA, PTA_SSE | PTA_SSE2 | PTA_SSE3 | PTA_64BIT
 				        | PTA_MMX | PTA_PREFETCH_SSE},
+      {"core2", PROCESSOR_CORE2, PTA_SSE | PTA_SSE2 | PTA_SSE3
+                                        | PTA_64BIT | PTA_MMX
+                                        | PTA_PREFETCH_SSE},
+      {"geode", PROCESSOR_GEODE, PTA_MMX | PTA_PREFETCH_SSE | PTA_3DNOW
+				   | PTA_3DNOW_A},
       {"k6", PROCESSOR_K6, PTA_MMX},
       {"k6-2", PROCESSOR_K6, PTA_MMX | PTA_3DNOW},
       {"k6-3", PROCESSOR_K6, PTA_MMX | PTA_3DNOW},
@@ -13706,6 +13822,9 @@
     case PROCESSOR_GENERIC64:
       return 3;
 
+    case PROCESSOR_CORE2:
+      return 4;
+
     default:
       return 1;
     }
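The tuning-flag edits in the i386.c hunks above all follow one scheme: each
m_* macro is a single bit derived from the processor_type enum, and each
x86_* feature flag is a bitwise OR of the CPUs the feature applies to. A
minimal Python sketch of that mechanism (enum order taken from the patched
i386.h hunk, assuming PROCESSOR_I386 precedes the context shown; illustrative
only, not part of the patch):

```python
# Processor order as in the patched processor_type enum.
order = ["i386", "i486", "pentium", "pentiumpro", "geode", "k6",
         "athlon", "pentium4", "k8", "nocona", "core2",
         "generic32", "generic64"]
m = {name: 1 << i for i, name in enumerate(order)}  # one m_* bit per CPU

# Convenience groups, as introduced by the patch.
m_K6_GEODE = m["k6"] | m["geode"]
m_ATHLON_K8 = m["athlon"] | m["k8"]

# A feature flag is an OR of CPU bits, e.g. the patched x86_use_leave:
x86_use_leave = (m["i386"] | m_K6_GEODE | m_ATHLON_K8
                 | m["core2"] | m["generic64"])

def has_feature(flag, cpu):
    """Mirror of the C test `flag & (1 << ix86_tune)`."""
    return bool(flag & m[cpu])

assert has_feature(x86_use_leave, "geode")
assert has_feature(x86_use_leave, "core2")
assert not has_feature(x86_use_leave, "nocona")
```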
Index: share/mk/bsd.cpu.mk
===================================================================
--- share/mk/bsd.cpu.mk	(revision 219330)
+++ share/mk/bsd.cpu.mk	(working copy)
@@ -55,11 +55,11 @@
 .  elif ${CPUTYPE} == "opteron" || ${CPUTYPE} == "athlon64" || \
      ${CPUTYPE} == "k8"
 CPUTYPE = athlon-mp
-.  elif ${CPUTYPE} == "k7"
+.  elif ${CPUTYPE} == "k7" || ${CPUTYPE} == "geode"
 CPUTYPE = athlon
 .  endif
 . elif ${MACHINE_CPUARCH} == "amd64"
-.  if ${CPUTYPE} == "prescott" || ${CPUTYPE} == "core2"
+.  if ${CPUTYPE} == "prescott"
 CPUTYPE = nocona
 .  endif
 . elif ${MACHINE_ARCH} == "sparc64"
@@ -206,7 +206,7 @@
 MACHINE_CPU = k8 3dnow sse3
 .  elif ${CPUTYPE} == "opteron" || ${CPUTYPE} == "athlon64" || ${CPUTYPE} == "k8"
 MACHINE_CPU = k8 3dnow
-.  elif ${CPUTYPE} == "nocona"
+.  elif ${CPUTYPE} == "nocona" || ${CPUTYPE} == "core2"
 MACHINE_CPU = sse3
 .  endif
 MACHINE_CPU += amd64 sse2 sse mmx
Index: share/examples/etc/make.conf
===================================================================
--- share/examples/etc/make.conf	(revision 219330)
+++ share/examples/etc/make.conf	(working copy)
@@ -32,7 +32,7 @@
 #   Intel x86 architecture:
 #       (AMD CPUs)	opteron-sse3 opteron athlon64-sse3 athlon64 athlon-mp
 #			athlon-xp athlon-4 athlon-tbird athlon k8-sse3 k8
-#			k6-3 k6-2 k6 k5
+#			geode k6-3 k6-2 k6 k5
 #       (Intel CPUs)	core2 core nocona pentium4m pentium4 prescott
 #			pentium3m pentium3 pentium-m pentium2
 #			pentiumpro pentium-mmx pentium i486 i386
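For context, a hypothetical usage sketch (not part of the submitted patch): with the change above applied, the new CPU types can be selected in /etc/make.conf the same way as the existing ones, and bsd.cpu.mk turns CPUTYPE into the matching -march= flag and MACHINE_CPU feature list:

```make
# Hypothetical /etc/make.conf fragment -- assumes the patch above is applied.
CPUTYPE?=core2        # Intel Core 2; per the patch, adds sse3 to MACHINE_CPU on amd64
#CPUTYPE?=geode       # AMD Geode (embedded CPU with MMX and 3DNow! support)
```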
>Release-Note:
>Audit-Trail:

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: gnu/155308: commit references a PR
Date: Mon,  7 Mar 2011 14:48:35 +0000 (UTC)

 Author: mm
 Date: Mon Mar  7 14:48:22 2011
 New Revision: 219374
 URL: http://svn.freebsd.org/changeset/base/219374
 
 Log:
   Backport Intel Core 2 and AMD Geode CPU types from gcc-4.3 (GPLv2)
   These options are supported in this shape in all newer GCC versions.
   
   PR:		gnu/155308
   Obtained from:	gcc 4.3 (rev. 118090, 118973, 120846; GPLv2)
   MFC after:	2 weeks
 
 Added:
   head/contrib/gcc/config/i386/geode.md
 Modified:
   head/contrib/gcc/config.gcc
   head/contrib/gcc/config/i386/i386.c
   head/contrib/gcc/config/i386/i386.h
   head/contrib/gcc/config/i386/i386.md
   head/contrib/gcc/doc/gcc.1
   head/contrib/gcc/doc/invoke.texi
 
 Modified: head/contrib/gcc/config.gcc
 ==============================================================================
 --- head/contrib/gcc/config.gcc	Mon Mar  7 10:42:28 2011	(r219373)
 +++ head/contrib/gcc/config.gcc	Mon Mar  7 14:48:22 2011	(r219374)
 @@ -1207,14 +1207,14 @@ i[34567]86-*-solaris2*)
  		# FIXME: -m64 for i[34567]86-*-* should be allowed just
  		# like -m32 for x86_64-*-*.
  		case X"${with_cpu}" in
 -		Xgeneric|Xnocona|Xx86-64|Xk8|Xopteron|Xathlon64|Xathlon-fx)
 +		Xgeneric|Xcore2|Xnocona|Xx86-64|Xk8|Xopteron|Xathlon64|Xathlon-fx)
  			;;
  		X)
  			with_cpu=generic
  			;;
  		*)
  			echo "Unsupported CPU used in --with-cpu=$with_cpu, supported values:" 1>&2
 -			echo "generic nocona x86-64 k8 opteron athlon64 athlon-fx" 1>&2
 +			echo "generic core2 nocona x86-64 k8 opteron athlon64 athlon-fx" 1>&2
  			exit 1
  			;;
  		esac
 @@ -2537,6 +2537,9 @@ if test x$with_cpu = x ; then
          nocona-*)
            with_cpu=nocona
            ;;
 +	core2-*)
 +	  with_cpu=core2
 +	  ;;
          pentium_m-*)
            with_cpu=pentium-m
            ;;
 @@ -2556,6 +2559,9 @@ if test x$with_cpu = x ; then
          nocona-*)
            with_cpu=nocona
            ;;
 +	core2-*)
 +	  with_cpu=core2
 +	  ;;
          *)
            with_cpu=generic
            ;;
 @@ -2787,7 +2793,7 @@ case "${target}" in
  				esac
  				# OK
  				;;
 -			"" | k8 | opteron | athlon64 | athlon-fx | nocona | generic)
 +			"" | k8 | opteron | athlon64 | athlon-fx | nocona | core2 | generic)
  				# OK
  				;;
  			*)
 
 Added: head/contrib/gcc/config/i386/geode.md
 ==============================================================================
 --- /dev/null	00:00:00 1970	(empty, because file is newly added)
 +++ head/contrib/gcc/config/i386/geode.md	Mon Mar  7 14:48:22 2011	(r219374)
 @@ -0,0 +1,153 @@
 +;; Geode Scheduling
 +;; Copyright (C) 2006
 +;; Free Software Foundation, Inc.
 +;;
 +;; This file is part of GCC.
 +;;
 +;; GCC is free software; you can redistribute it and/or modify
 +;; it under the terms of the GNU General Public License as published by
 +;; the Free Software Foundation; either version 2, or (at your option)
 +;; any later version.
 +;;
 +;; GCC is distributed in the hope that it will be useful,
 +;; but WITHOUT ANY WARRANTY; without even the implied warranty of
 +;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 +;; GNU General Public License for more details.
 +;;
 +;; You should have received a copy of the GNU General Public License
 +;; along with GCC; see the file COPYING.  If not, write to
 +;; the Free Software Foundation, 51 Franklin Street, Fifth Floor,
 +;; Boston, MA 02110-1301, USA.
 +;;
 +;; The Geode architecture is one insn issue processor.
 +;;
 +;; This description is based on data from the following documents:
 +;;
 +;;    "AMD Geode GX Processor Data Book"
 +;;    Advanced Micro Devices, Inc., Aug 2005.
 +;;
 +;;    "AMD Geode LX Processor Data Book"
 +;;    Advanced Micro Devices, Inc., Jan 2006.
 +;;
 +;;
 +;; CPU execution units of the Geode:
 +;;
 +;; issue	describes the issue pipeline.
 +;; alu		describes the Integer unit
 +;; fpu		describes the FP unit
 +;;
 +;; The fp unit is out of order execution unit with register renaming.
 +;; There is also memory management unit and execution pipeline for
 +;; load/store operations.  We ignore it and difference between insns
 +;; using memory and registers.
 +
 +(define_automaton "geode")
 +
 +(define_cpu_unit "geode_issue,geode_alu,geode_fpu" "geode")
 +
 +(define_insn_reservation "alu" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "alu,alu1,negnot,icmp,lea,test,imov,imovx,icmov,incdec,setcc"))
 +			 "geode_issue,geode_alu")
 +
 +(define_insn_reservation "shift" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "ishift,ishift1,rotate,rotate1,cld"))
 +			 "geode_issue,geode_alu*2")
 +
 +(define_insn_reservation "imul" 7
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "imul"))
 +			 "geode_issue,geode_alu*7")
 +
 +(define_insn_reservation "idiv" 40
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "idiv"))
 +			 "geode_issue,geode_alu*40")
 +
 +;; The branch unit.
 +(define_insn_reservation "call" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "call,callv"))
 +			 "geode_issue,geode_alu*2")
 +
 +(define_insn_reservation "geode_branch" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "ibr"))
 +			 "geode_issue,geode_alu")
 +
 +(define_insn_reservation "geode_pop_push" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "pop,push"))
 +			 "geode_issue,geode_alu")
 +
 +(define_insn_reservation "geode_leave" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "leave"))
 +			 "geode_issue,geode_alu*2")
 +
 +(define_insn_reservation "geode_load_str" 4
 +			 (and (eq_attr "cpu" "geode")
 +			      (and (eq_attr "type" "str")
 +				   (eq_attr "memory" "load,both")))
 +			 "geode_issue,geode_alu*4")
 +
 +(define_insn_reservation "geode_store_str" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (and (eq_attr "type" "str")
 +				   (eq_attr "memory" "store")))
 +			 "geode_issue,geode_alu*2")
 +
 +;; Be optimistic
 +(define_insn_reservation "geode_unknown" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "multi,other"))
 +			 "geode_issue,geode_alu")
 +
 +;; FPU
 +
 +(define_insn_reservation "geode_fop" 6
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fop,fcmp"))
 +			 "geode_issue,geode_fpu*6")
 +
 +(define_insn_reservation "geode_fsimple" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fmov,fcmov,fsgn,fxch"))
 +			 "geode_issue,geode_fpu")
 +
 +(define_insn_reservation "geode_fist" 4
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fistp,fisttp"))
 +			 "geode_issue,geode_fpu*4")
 +
 +(define_insn_reservation "geode_fmul" 10
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fmul"))
 +			 "geode_issue,geode_fpu*10")
 +
 +(define_insn_reservation "geode_fdiv" 47
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fdiv"))
 +			 "geode_issue,geode_fpu*47")
 +
 +;; We use minimal latency (fsin) here
 +(define_insn_reservation "geode_fpspc" 54
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fpspc"))
 +			 "geode_issue,geode_fpu*54")
 +
 +(define_insn_reservation "geode_frndint" 12
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "frndint"))
 +			 "geode_issue,geode_fpu*12")
 +
 +(define_insn_reservation "geode_mmxmov" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "mmxmov"))
 +			 "geode_issue,geode_fpu")
 +
 +(define_insn_reservation "geode_mmx" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "mmx,mmxadd,mmxmul,mmxcmp,mmxcvt,mmxshft"))
 +			 "geode_issue,geode_fpu*2")
 
 Modified: head/contrib/gcc/config/i386/i386.c
 ==============================================================================
 --- head/contrib/gcc/config/i386/i386.c	Mon Mar  7 10:42:28 2011	(r219373)
 +++ head/contrib/gcc/config/i386/i386.c	Mon Mar  7 14:48:22 2011	(r219374)
 @@ -336,6 +336,60 @@ struct processor_costs pentiumpro_cost =
  };
  
  static const
 +struct processor_costs geode_cost = {
 +  COSTS_N_INSNS (1),			/* cost of an add instruction */
 +  COSTS_N_INSNS (1),			/* cost of a lea instruction */
 +  COSTS_N_INSNS (2),			/* variable shift costs */
 +  COSTS_N_INSNS (1),			/* constant shift costs */
 +  {COSTS_N_INSNS (3),			/* cost of starting multiply for QI */
 +   COSTS_N_INSNS (4),			/*                               HI */
 +   COSTS_N_INSNS (7),			/*                               SI */
 +   COSTS_N_INSNS (7),			/*                               DI */
 +   COSTS_N_INSNS (7)},			/*                               other */
 +  0,					/* cost of multiply per each bit set */
 +  {COSTS_N_INSNS (15),			/* cost of a divide/mod for QI */
 +   COSTS_N_INSNS (23),			/*                          HI */
 +   COSTS_N_INSNS (39),			/*                          SI */
 +   COSTS_N_INSNS (39),			/*                          DI */
 +   COSTS_N_INSNS (39)},			/*                          other */
 +  COSTS_N_INSNS (1),			/* cost of movsx */
 +  COSTS_N_INSNS (1),			/* cost of movzx */
 +  8,					/* "large" insn */
 +  4,					/* MOVE_RATIO */
 +  1,					/* cost for loading QImode using movzbl */
 +  {1, 1, 1},				/* cost of loading integer registers
 +					   in QImode, HImode and SImode.
 +					   Relative to reg-reg move (2).  */
 +  {1, 1, 1},				/* cost of storing integer registers */
 +  1,					/* cost of reg,reg fld/fst */
 +  {1, 1, 1},				/* cost of loading fp registers
 +					   in SFmode, DFmode and XFmode */
 +  {4, 6, 6},				/* cost of storing fp registers
 +					   in SFmode, DFmode and XFmode */
 +
 +  1,					/* cost of moving MMX register */
 +  {1, 1},				/* cost of loading MMX registers
 +					   in SImode and DImode */
 +  {1, 1},				/* cost of storing MMX registers
 +					   in SImode and DImode */
 +  1,					/* cost of moving SSE register */
 +  {1, 1, 1},				/* cost of loading SSE registers
 +					   in SImode, DImode and TImode */
 +  {1, 1, 1},				/* cost of storing SSE registers
 +					   in SImode, DImode and TImode */
 +  1,					/* MMX or SSE register to integer */
 +  32,					/* size of prefetch block */
 +  1,					/* number of parallel prefetches */
 +  1,					/* Branch cost */
 +  COSTS_N_INSNS (6),			/* cost of FADD and FSUB insns.  */
 +  COSTS_N_INSNS (11),			/* cost of FMUL instruction.  */
 +  COSTS_N_INSNS (47),			/* cost of FDIV instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FABS instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FCHS instruction.  */
 +  COSTS_N_INSNS (54),			/* cost of FSQRT instruction.  */
 +};
 +
 +static const
  struct processor_costs k6_cost = {
    COSTS_N_INSNS (1),			/* cost of an add instruction */
    COSTS_N_INSNS (2),			/* cost of a lea instruction */
 @@ -600,6 +654,58 @@ struct processor_costs nocona_cost = {
    COSTS_N_INSNS (44),			/* cost of FSQRT instruction.  */
  };
  
 +static const
 +struct processor_costs core2_cost = {
 +  COSTS_N_INSNS (1),			/* cost of an add instruction */
 +  COSTS_N_INSNS (1) + 1,		/* cost of a lea instruction */
 +  COSTS_N_INSNS (1),			/* variable shift costs */
 +  COSTS_N_INSNS (1),			/* constant shift costs */
 +  {COSTS_N_INSNS (3),			/* cost of starting multiply for QI */
 +   COSTS_N_INSNS (3),			/*                               HI */
 +   COSTS_N_INSNS (3),			/*                               SI */
 +   COSTS_N_INSNS (3),			/*                               DI */
 +   COSTS_N_INSNS (3)},			/*                               other */
 +  0,					/* cost of multiply per each bit set */
 +  {COSTS_N_INSNS (22),			/* cost of a divide/mod for QI */
 +   COSTS_N_INSNS (22),			/*                          HI */
 +   COSTS_N_INSNS (22),			/*                          SI */
 +   COSTS_N_INSNS (22),			/*                          DI */
 +   COSTS_N_INSNS (22)},			/*                          other */
 +  COSTS_N_INSNS (1),			/* cost of movsx */
 +  COSTS_N_INSNS (1),			/* cost of movzx */
 +  8,					/* "large" insn */
 +  16,					/* MOVE_RATIO */
 +  2,					/* cost for loading QImode using movzbl */
 +  {6, 6, 6},				/* cost of loading integer registers
 +					   in QImode, HImode and SImode.
 +					   Relative to reg-reg move (2).  */
 +  {4, 4, 4},				/* cost of storing integer registers */
 +  2,					/* cost of reg,reg fld/fst */
 +  {6, 6, 6},				/* cost of loading fp registers
 +					   in SFmode, DFmode and XFmode */
 +  {4, 4, 4},				/* cost of loading integer registers */
 +  2,					/* cost of moving MMX register */
 +  {6, 6},				/* cost of loading MMX registers
 +					   in SImode and DImode */
 +  {4, 4},				/* cost of storing MMX registers
 +					   in SImode and DImode */
 +  2,					/* cost of moving SSE register */
 +  {6, 6, 6},				/* cost of loading SSE registers
 +					   in SImode, DImode and TImode */
 +  {4, 4, 4},				/* cost of storing SSE registers
 +					   in SImode, DImode and TImode */
 +  2,					/* MMX or SSE register to integer */
 +  128,					/* size of prefetch block */
 +  8,					/* number of parallel prefetches */
 +  3,					/* Branch cost */
 +  COSTS_N_INSNS (3),			/* cost of FADD and FSUB insns.  */
 +  COSTS_N_INSNS (5),			/* cost of FMUL instruction.  */
 +  COSTS_N_INSNS (32),			/* cost of FDIV instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FABS instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FCHS instruction.  */
 +  COSTS_N_INSNS (58),			/* cost of FSQRT instruction.  */
 +};
 +
  /* Generic64 should produce code tuned for Nocona and K8.  */
  static const
  struct processor_costs generic64_cost = {
 @@ -721,38 +827,41 @@ const struct processor_costs *ix86_cost 
  #define m_486 (1<<PROCESSOR_I486)
  #define m_PENT (1<<PROCESSOR_PENTIUM)
  #define m_PPRO (1<<PROCESSOR_PENTIUMPRO)
 +#define m_GEODE  (1<<PROCESSOR_GEODE)
 +#define m_K6_GEODE  (m_K6 | m_GEODE)
  #define m_K6  (1<<PROCESSOR_K6)
  #define m_ATHLON  (1<<PROCESSOR_ATHLON)
  #define m_PENT4  (1<<PROCESSOR_PENTIUM4)
  #define m_K8  (1<<PROCESSOR_K8)
  #define m_ATHLON_K8  (m_K8 | m_ATHLON)
  #define m_NOCONA  (1<<PROCESSOR_NOCONA)
 +#define m_CORE2  (1<<PROCESSOR_CORE2)
  #define m_GENERIC32 (1<<PROCESSOR_GENERIC32)
  #define m_GENERIC64 (1<<PROCESSOR_GENERIC64)
  #define m_GENERIC (m_GENERIC32 | m_GENERIC64)
  
  /* Generic instruction choice should be common subset of supported CPUs
 -   (PPro/PENT4/NOCONA/Athlon/K8).  */
 +   (PPro/PENT4/NOCONA/CORE2/Athlon/K8).  */
  
  /* Leave is not affecting Nocona SPEC2000 results negatively, so enabling for
     Generic64 seems like good code size tradeoff.  We can't enable it for 32bit
     generic because it is not working well with PPro base chips.  */
 -const int x86_use_leave = m_386 | m_K6 | m_ATHLON_K8 | m_GENERIC64;
 -const int x86_push_memory = m_386 | m_K6 | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 +const int x86_use_leave = m_386 | m_K6_GEODE | m_ATHLON_K8 | m_CORE2 | m_GENERIC64;
 +const int x86_push_memory = m_386 | m_K6_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
  const int x86_zero_extend_with_and = m_486 | m_PENT;
 -const int x86_movx = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_GENERIC /* m_386 | m_K6 */;
 +const int x86_movx = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC | m_GEODE /* m_386 | m_K6 */;
  const int x86_double_with_add = ~m_386;
  const int x86_use_bit_test = m_386;
 -const int x86_unroll_strlen = m_486 | m_PENT | m_PPRO | m_ATHLON_K8 | m_K6 | m_GENERIC;
 -const int x86_cmove = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA;
 +const int x86_unroll_strlen = m_486 | m_PENT | m_PPRO | m_ATHLON_K8 | m_K6 | m_CORE2 | m_GENERIC;
 +const int x86_cmove = m_PPRO | m_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA;
  const int x86_3dnow_a = m_ATHLON_K8;
 -const int x86_deep_branch = m_PPRO | m_K6 | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 +const int x86_deep_branch = m_PPRO | m_K6_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
  /* Branch hints were put in P4 based on simulation result. But
     after P4 was made, no performance benefit was observed with
     branch hints. It also increases the code size. As the result,
     icc never generates branch hints.  */
  const int x86_branch_hints = 0;
 -const int x86_use_sahf = m_PPRO | m_K6 | m_PENT4 | m_NOCONA | m_GENERIC32; /*m_GENERIC | m_ATHLON_K8 ? */
 +const int x86_use_sahf = m_PPRO | m_K6_GEODE | m_PENT4 | m_NOCONA | m_GENERIC32; /*m_GENERIC | m_ATHLON_K8 ? */
  /* We probably ought to watch for partial register stalls on Generic32
     compilation setting as well.  However in current implementation the
     partial register stalls are not eliminated very well - they can
 @@ -762,15 +871,15 @@ const int x86_use_sahf = m_PPRO | m_K6 |
     with partial reg. dependencies used by Athlon/P4 based chips, it is better
     to leave it off for generic32 for now.  */
  const int x86_partial_reg_stall = m_PPRO;
 -const int x86_partial_flag_reg_stall = m_GENERIC;
 -const int x86_use_himode_fiop = m_386 | m_486 | m_K6;
 -const int x86_use_simode_fiop = ~(m_PPRO | m_ATHLON_K8 | m_PENT | m_GENERIC);
 +const int x86_partial_flag_reg_stall =  m_CORE2 | m_GENERIC;
 +const int x86_use_himode_fiop = m_386 | m_486 | m_K6_GEODE;
 +const int x86_use_simode_fiop = ~(m_PPRO | m_ATHLON_K8 | m_PENT | m_CORE2 | m_GENERIC);
  const int x86_use_mov0 = m_K6;
 -const int x86_use_cltd = ~(m_PENT | m_K6 | m_GENERIC);
 +const int x86_use_cltd = ~(m_PENT | m_K6 | m_CORE2 | m_GENERIC);
  const int x86_read_modify_write = ~m_PENT;
  const int x86_read_modify = ~(m_PENT | m_PPRO);
  const int x86_split_long_moves = m_PPRO;
 -const int x86_promote_QImode = m_K6 | m_PENT | m_386 | m_486 | m_ATHLON_K8 | m_GENERIC; /* m_PENT4 ? */
 +const int x86_promote_QImode = m_K6_GEODE | m_PENT | m_386 | m_486 | m_ATHLON_K8 | m_CORE2 | m_GENERIC; /* m_PENT4 ? */
  const int x86_fast_prefix = ~(m_PENT | m_486 | m_386);
  const int x86_single_stringop = m_386 | m_PENT4 | m_NOCONA;
  const int x86_qimode_math = ~(0);
 @@ -780,18 +889,18 @@ const int x86_promote_qi_regs = 0;
     if our scheme for avoiding partial stalls was more effective.  */
  const int x86_himode_math = ~(m_PPRO);
  const int x86_promote_hi_regs = m_PPRO;
 -const int x86_sub_esp_4 = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_sub_esp_8 = m_ATHLON_K8 | m_PPRO | m_386 | m_486 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_add_esp_4 = m_ATHLON_K8 | m_K6 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_add_esp_8 = m_ATHLON_K8 | m_PPRO | m_K6 | m_386 | m_486 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_integer_DFmode_moves = ~(m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC);
 -const int x86_partial_reg_dependency = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_memory_mismatch_stall = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_accumulate_outgoing_args = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC;
 -const int x86_prologue_using_move = m_ATHLON_K8 | m_PPRO | m_GENERIC;
 -const int x86_epilogue_using_move = m_ATHLON_K8 | m_PPRO | m_GENERIC;
 +const int x86_sub_esp_4 = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_sub_esp_8 = m_ATHLON_K8 | m_PPRO | m_386 | m_486 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_add_esp_4 = m_ATHLON_K8 | m_K6_GEODE | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_add_esp_8 = m_ATHLON_K8 | m_PPRO | m_K6_GEODE | m_386 | m_486 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_integer_DFmode_moves = ~(m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC | m_GEODE);
 +const int x86_partial_reg_dependency = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_memory_mismatch_stall = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_accumulate_outgoing_args = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC;
 +const int x86_prologue_using_move = m_ATHLON_K8 | m_PPRO | m_CORE2 | m_GENERIC;
 +const int x86_epilogue_using_move = m_ATHLON_K8 | m_PPRO | m_CORE2 | m_GENERIC;
  const int x86_shift1 = ~m_486;
 -const int x86_arch_always_fancy_math_387 = m_PENT | m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 +const int x86_arch_always_fancy_math_387 = m_PENT | m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
  /* In Generic model we have an conflict here in between PPro/Pentium4 based chips
     that thread 128bit SSE registers as single units versus K8 based chips that
     divide SSE registers to two 64bit halves.
 @@ -801,7 +910,7 @@ const int x86_arch_always_fancy_math_387
     this option on P4 brings over 20% SPECfp regression, while enabling it on
     K8 brings roughly 2.4% regression that can be partly masked by careful scheduling
     of moves.  */
 -const int x86_sse_partial_reg_dependency = m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC;
 +const int x86_sse_partial_reg_dependency = m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC;
  /* Set for machines where the type and dependencies are resolved on SSE
     register parts instead of whole registers, so we may maintain just
     lower part of scalar values in proper format leaving the upper part
 @@ -810,18 +919,18 @@ const int x86_sse_split_regs = m_ATHLON_
  const int x86_sse_typeless_stores = m_ATHLON_K8;
  const int x86_sse_load0_by_pxor = m_PPRO | m_PENT4 | m_NOCONA;
  const int x86_use_ffreep = m_ATHLON_K8;
 -const int x86_rep_movl_optimal = m_386 | m_PENT | m_PPRO | m_K6;
 -const int x86_use_incdec = ~(m_PENT4 | m_NOCONA | m_GENERIC);
 +const int x86_rep_movl_optimal = m_386 | m_PENT | m_PPRO | m_K6_GEODE | m_CORE2;
 +const int x86_use_incdec = ~(m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC);
  
  /* ??? Allowing interunit moves makes it all too easy for the compiler to put
     integer data in xmm registers.  Which results in pretty abysmal code.  */
  const int x86_inter_unit_moves = 0 /* ~(m_ATHLON_K8) */;
  
 -const int x86_ext_80387_constants = m_K6 | m_ATHLON | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC32;
 +const int x86_ext_80387_constants = m_K6_GEODE | m_ATHLON | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC32;
  /* Some CPU cores are not able to predict more than 4 branch instructions in
     the 16 byte window.  */
 -const int x86_four_jump_limit = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_schedule = m_PPRO | m_ATHLON_K8 | m_K6 | m_PENT | m_GENERIC;
 +const int x86_four_jump_limit = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_schedule = m_PPRO | m_ATHLON_K8 | m_K6_GEODE | m_PENT | m_CORE2 | m_GENERIC;
  const int x86_use_bt = m_ATHLON_K8;
  /* Compare and exchange was added for 80486.  */
  const int x86_cmpxchg = ~m_386;
 @@ -831,7 +940,7 @@ const int x86_cmpxchg8b = ~(m_386 | m_48
  const int x86_cmpxchg16b = m_NOCONA;
  /* Exchange and add was added for 80486.  */
  const int x86_xadd = ~m_386;
 -const int x86_pad_returns = m_ATHLON_K8 | m_GENERIC;
 +const int x86_pad_returns = m_ATHLON_K8 | m_CORE2 | m_GENERIC;
  
  /* In case the average insn count for single function invocation is
     lower than this constant, emit fast (but longer) prologue and
 @@ -1455,11 +1564,13 @@ override_options (void)
        {&i486_cost, 0, 0, 16, 15, 16, 15, 16},
        {&pentium_cost, 0, 0, 16, 7, 16, 7, 16},
        {&pentiumpro_cost, 0, 0, 16, 15, 16, 7, 16},
 +      {&geode_cost, 0, 0, 0, 0, 0, 0, 0},
        {&k6_cost, 0, 0, 32, 7, 32, 7, 32},
        {&athlon_cost, 0, 0, 16, 7, 16, 7, 16},
        {&pentium4_cost, 0, 0, 0, 0, 0, 0, 0},
        {&k8_cost, 0, 0, 16, 7, 16, 7, 16},
        {&nocona_cost, 0, 0, 0, 0, 0, 0, 0},
 +      {&core2_cost, 0, 0, 16, 7, 16, 7, 16},
        {&generic32_cost, 0, 0, 16, 7, 16, 7, 16},
        {&generic64_cost, 0, 0, 16, 7, 16, 7, 16}
      };
 @@ -1506,6 +1617,11 @@ override_options (void)
  				        | PTA_MMX | PTA_PREFETCH_SSE},
        {"nocona", PROCESSOR_NOCONA, PTA_SSE | PTA_SSE2 | PTA_SSE3 | PTA_64BIT
  				        | PTA_MMX | PTA_PREFETCH_SSE},
 +      {"core2", PROCESSOR_CORE2, PTA_SSE | PTA_SSE2 | PTA_SSE3
 +                                        | PTA_64BIT | PTA_MMX
 +                                        | PTA_PREFETCH_SSE},
 +      {"geode", PROCESSOR_GEODE, PTA_MMX | PTA_PREFETCH_SSE | PTA_3DNOW
 +				   | PTA_3DNOW_A},
        {"k6", PROCESSOR_K6, PTA_MMX},
        {"k6-2", PROCESSOR_K6, PTA_MMX | PTA_3DNOW},
        {"k6-3", PROCESSOR_K6, PTA_MMX | PTA_3DNOW},
 @@ -13706,6 +13822,9 @@ ix86_issue_rate (void)
      case PROCESSOR_GENERIC64:
        return 3;
  
 +    case PROCESSOR_CORE2:
 +      return 4;
 +
      default:
        return 1;
      }
 
 Modified: head/contrib/gcc/config/i386/i386.h
 ==============================================================================
 --- head/contrib/gcc/config/i386/i386.h	Mon Mar  7 10:42:28 2011	(r219373)
 +++ head/contrib/gcc/config/i386/i386.h	Mon Mar  7 14:48:22 2011	(r219374)
 @@ -130,12 +130,14 @@ extern const struct processor_costs *ix8
  #define TARGET_486 (ix86_tune == PROCESSOR_I486)
  #define TARGET_PENTIUM (ix86_tune == PROCESSOR_PENTIUM)
  #define TARGET_PENTIUMPRO (ix86_tune == PROCESSOR_PENTIUMPRO)
 +#define TARGET_GEODE (ix86_tune == PROCESSOR_GEODE)
  #define TARGET_K6 (ix86_tune == PROCESSOR_K6)
  #define TARGET_ATHLON (ix86_tune == PROCESSOR_ATHLON)
  #define TARGET_PENTIUM4 (ix86_tune == PROCESSOR_PENTIUM4)
  #define TARGET_K8 (ix86_tune == PROCESSOR_K8)
  #define TARGET_ATHLON_K8 (TARGET_K8 || TARGET_ATHLON)
  #define TARGET_NOCONA (ix86_tune == PROCESSOR_NOCONA)
 +#define TARGET_CORE2 (ix86_tune == PROCESSOR_CORE2)
  #define TARGET_GENERIC32 (ix86_tune == PROCESSOR_GENERIC32)
  #define TARGET_GENERIC64 (ix86_tune == PROCESSOR_GENERIC64)
  #define TARGET_GENERIC (TARGET_GENERIC32 || TARGET_GENERIC64)
 @@ -376,6 +378,10 @@ extern const char *host_detect_local_cpu
  	      break;						\
  	    }							\
  	}							\
 +      else if (TARGET_GEODE)					\
 +	{							\
 +	  builtin_define ("__tune_geode__");			\
 +	}							\
        else if (TARGET_K6)					\
  	{							\
  	  builtin_define ("__tune_k6__");			\
 @@ -397,6 +403,8 @@ extern const char *host_detect_local_cpu
  	builtin_define ("__tune_pentium4__");			\
        else if (TARGET_NOCONA)					\
  	builtin_define ("__tune_nocona__");			\
 +      else if (TARGET_CORE2)					\
 +	builtin_define ("__tune_core2__");			\
  								\
        if (TARGET_MMX)						\
  	builtin_define ("__MMX__");				\
 @@ -437,6 +445,11 @@ extern const char *host_detect_local_cpu
  	  builtin_define ("__pentiumpro");			\
  	  builtin_define ("__pentiumpro__");			\
  	}							\
 +      else if (ix86_arch == PROCESSOR_GEODE)			\
 +	{							\
 +	  builtin_define ("__geode");				\
 +	  builtin_define ("__geode__");				\
 +	}							\
        else if (ix86_arch == PROCESSOR_K6)			\
  	{							\
  								\
 @@ -470,6 +483,11 @@ extern const char *host_detect_local_cpu
  	  builtin_define ("__nocona");				\
  	  builtin_define ("__nocona__");			\
  	}							\
 +      else if (ix86_arch == PROCESSOR_CORE2)			\
 +	{							\
 +	  builtin_define ("__core2");				\
 +	  builtin_define ("__core2__");				\
 +	}							\
      }								\
    while (0)
  
 @@ -481,23 +499,25 @@ extern const char *host_detect_local_cpu
  #define TARGET_CPU_DEFAULT_pentium2 5
  #define TARGET_CPU_DEFAULT_pentium3 6
  #define TARGET_CPU_DEFAULT_pentium4 7
 -#define TARGET_CPU_DEFAULT_k6 8
 -#define TARGET_CPU_DEFAULT_k6_2 9
 -#define TARGET_CPU_DEFAULT_k6_3 10
 -#define TARGET_CPU_DEFAULT_athlon 11
 -#define TARGET_CPU_DEFAULT_athlon_sse 12
 -#define TARGET_CPU_DEFAULT_k8 13
 -#define TARGET_CPU_DEFAULT_pentium_m 14
 -#define TARGET_CPU_DEFAULT_prescott 15
 -#define TARGET_CPU_DEFAULT_nocona 16
 -#define TARGET_CPU_DEFAULT_generic 17
 +#define TARGET_CPU_DEFAULT_geode 8
 +#define TARGET_CPU_DEFAULT_k6 9
 +#define TARGET_CPU_DEFAULT_k6_2 10
 +#define TARGET_CPU_DEFAULT_k6_3 11
 +#define TARGET_CPU_DEFAULT_athlon 12
 +#define TARGET_CPU_DEFAULT_athlon_sse 13
 +#define TARGET_CPU_DEFAULT_k8 14
 +#define TARGET_CPU_DEFAULT_pentium_m 15
 +#define TARGET_CPU_DEFAULT_prescott 16
 +#define TARGET_CPU_DEFAULT_nocona 17
 +#define TARGET_CPU_DEFAULT_core2 18
 +#define TARGET_CPU_DEFAULT_generic 19
  
  #define TARGET_CPU_DEFAULT_NAMES {"i386", "i486", "pentium", "pentium-mmx",\
  				  "pentiumpro", "pentium2", "pentium3", \
 -				  "pentium4", "k6", "k6-2", "k6-3",\
 +                                  "pentium4", "geode", "k6", "k6-2", "k6-3", \
  				  "athlon", "athlon-4", "k8", \
  				  "pentium-m", "prescott", "nocona", \
 -				  "generic"}
 +				  "core2", "generic"}
  
  #ifndef CC1_SPEC
  #define CC1_SPEC "%(cc1_cpu) "
 @@ -2077,11 +2097,13 @@ enum processor_type
    PROCESSOR_I486,			/* 80486DX, 80486SX, 80486DX[24] */
    PROCESSOR_PENTIUM,
    PROCESSOR_PENTIUMPRO,
 +  PROCESSOR_GEODE,
    PROCESSOR_K6,
    PROCESSOR_ATHLON,
    PROCESSOR_PENTIUM4,
    PROCESSOR_K8,
    PROCESSOR_NOCONA,
 +  PROCESSOR_CORE2,
    PROCESSOR_GENERIC32,
    PROCESSOR_GENERIC64,
    PROCESSOR_max
 
 Modified: head/contrib/gcc/config/i386/i386.md
 ==============================================================================
 --- head/contrib/gcc/config/i386/i386.md	Mon Mar  7 10:42:28 2011	(r219373)
 +++ head/contrib/gcc/config/i386/i386.md	Mon Mar  7 14:48:22 2011	(r219374)
 @@ -187,7 +187,7 @@
  
  ;; Processor type.  This attribute must exactly match the processor_type
  ;; enumeration in i386.h.
 -(define_attr "cpu" "i386,i486,pentium,pentiumpro,k6,athlon,pentium4,k8,nocona,generic32,generic64"
 +(define_attr "cpu" "i386,i486,pentium,pentiumpro,geode,k6,athlon,pentium4,k8,nocona,core2,generic32,generic64"
    (const (symbol_ref "ix86_tune")))
  
  ;; A basic instruction type.  Refinements due to arguments to be
 @@ -473,6 +473,7 @@
  (include "ppro.md")
  (include "k6.md")
  (include "athlon.md")
 +(include "geode.md")
  
  
  ;; Operand and operator predicates and constraints
 
 Modified: head/contrib/gcc/doc/gcc.1
 ==============================================================================
 --- head/contrib/gcc/doc/gcc.1	Mon Mar  7 10:42:28 2011	(r219373)
 +++ head/contrib/gcc/doc/gcc.1	Mon Mar  7 14:48:22 2011	(r219374)
 @@ -129,7 +129,7 @@
  .\" ========================================================================
  .\"
  .IX Title "GCC 1"
 -.TH GCC 1 "2011-02-20" "gcc-4.2.1" "GNU"
 +.TH GCC 1 "2011-03-07" "gcc-4.2.1" "GNU"
  .SH "NAME"
  gcc \- GNU project C and C++ compiler
  .SH "SYNOPSIS"
 @@ -8733,6 +8733,10 @@ set support.
  .IX Item "nocona"
  Improved version of Intel Pentium4 \s-1CPU\s0 with 64\-bit extensions, \s-1MMX\s0, \s-1SSE\s0,
  \&\s-1SSE2\s0 and \s-1SSE3\s0 instruction set support.
 +.IP "\fIcore2\fR" 4
 +.IX Item "core2"
 +Intel Core2 \s-1CPU\s0 with 64\-bit extensions, \s-1MMX\s0, \s-1SSE\s0, \s-1SSE2\s0 and \s-1SSE3\s0
 +instruction set support.
  .IP "\fIk6\fR" 4
  .IX Item "k6"
  \&\s-1AMD\s0 K6 \s-1CPU\s0 with \s-1MMX\s0 instruction set support.
 @@ -8770,6 +8774,9 @@ implemented for this chip.)
  .IX Item "c3-2"
  Via C3\-2 \s-1CPU\s0 with \s-1MMX\s0 and \s-1SSE\s0 instruction set support.  (No scheduling is
  implemented for this chip.)
 +.IP "\fIgeode\fR" 4
 +.IX Item "geode"
 +Embedded AMD \s-1CPU\s0 with \s-1MMX\s0 and 3dNOW! instruction set support.
  .RE
  .RS 4
  .Sp
 
 Modified: head/contrib/gcc/doc/invoke.texi
 ==============================================================================
 --- head/contrib/gcc/doc/invoke.texi	Mon Mar  7 10:42:28 2011	(r219373)
 +++ head/contrib/gcc/doc/invoke.texi	Mon Mar  7 14:48:22 2011	(r219374)
 @@ -9369,6 +9369,9 @@ set support.
  @item nocona
  Improved version of Intel Pentium4 CPU with 64-bit extensions, MMX, SSE,
  SSE2 and SSE3 instruction set support.
 +@item core2
 +Intel Core2 CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3 and SSSE3
 +instruction set support.
  @item k6
  AMD K6 CPU with MMX instruction set support.
  @item k6-2, k6-3
 @@ -9396,6 +9399,8 @@ implemented for this chip.)
  @item c3-2
  Via C3-2 CPU with MMX and SSE instruction set support.  (No scheduling is
  implemented for this chip.)
 +@item geode
 +Embedded AMD CPU with MMX and 3dNOW! instruction set support.
  @end table
  
  While picking a specific @var{cpu-type} will schedule things appropriately
 _______________________________________________
 svn-src-all@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/svn-src-all
 To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org"
 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: gnu/155308: commit references a PR
Date: Mon,  7 Mar 2011 14:58:36 +0000 (UTC)

 Author: mm
 Date: Mon Mar  7 14:58:23 2011
 New Revision: 219376
 URL: http://svn.freebsd.org/changeset/base/219376
 
 Log:
   Add AMD Geode CPU type to bsd.cpu.mk and examples/etc/make.conf
   For CPUTYPE=core2 use -march=core2
   
   PR:		gnu/155308
   MFC after:	2 weeks
 
 Modified:
   head/share/examples/etc/make.conf
   head/share/mk/bsd.cpu.mk
 
 Modified: head/share/examples/etc/make.conf
 ==============================================================================
 --- head/share/examples/etc/make.conf	Mon Mar  7 14:54:45 2011	(r219375)
 +++ head/share/examples/etc/make.conf	Mon Mar  7 14:58:23 2011	(r219376)
 @@ -32,7 +32,7 @@
  #   Intel x86 architecture:
  #       (AMD CPUs)	opteron-sse3 opteron athlon64-sse3 athlon64 athlon-mp
  #			athlon-xp athlon-4 athlon-tbird athlon k8-sse3 k8
 -#			k6-3 k6-2 k6 k5
 +#			geode k6-3 k6-2 k6 k5
  #       (Intel CPUs)	core2 core nocona pentium4m pentium4 prescott
  #			pentium3m pentium3 pentium-m pentium2
  #			pentiumpro pentium-mmx pentium i486 i386
 
 Modified: head/share/mk/bsd.cpu.mk
 ==============================================================================
 --- head/share/mk/bsd.cpu.mk	Mon Mar  7 14:54:45 2011	(r219375)
 +++ head/share/mk/bsd.cpu.mk	Mon Mar  7 14:58:23 2011	(r219376)
 @@ -59,7 +59,7 @@ CPUTYPE = athlon-mp
  CPUTYPE = athlon
  .  endif
  . elif ${MACHINE_CPUARCH} == "amd64"
 -.  if ${CPUTYPE} == "prescott" || ${CPUTYPE} == "core2"
 +.  if ${CPUTYPE} == "prescott"
  CPUTYPE = nocona
  .  endif
  . elif ${MACHINE_ARCH} == "sparc64"
 @@ -170,7 +170,7 @@ MACHINE_CPU = athlon-xp athlon k7 3dnow 
  MACHINE_CPU = athlon-xp athlon k7 3dnow sse mmx k6 k5 i586 i486 i386
  .  elif ${CPUTYPE} == "athlon" || ${CPUTYPE} == "athlon-tbird"
  MACHINE_CPU = athlon k7 3dnow mmx k6 k5 i586 i486 i386
 -.  elif ${CPUTYPE} == "k6-3" || ${CPUTYPE} == "k6-2"
 +.  elif ${CPUTYPE} == "k6-3" || ${CPUTYPE} == "k6-2" || ${CPUTYPE} == "geode"
  MACHINE_CPU = 3dnow mmx k6 k5 i586 i486 i386
  .  elif ${CPUTYPE} == "k6"
  MACHINE_CPU = mmx k6 k5 i586 i486 i386
 @@ -206,7 +206,7 @@ MACHINE_CPU = i386
  MACHINE_CPU = k8 3dnow sse3
  .  elif ${CPUTYPE} == "opteron" || ${CPUTYPE} == "athlon64" || ${CPUTYPE} == "k8"
  MACHINE_CPU = k8 3dnow
 -.  elif ${CPUTYPE} == "nocona"
 +.  elif ${CPUTYPE} == "nocona" || ${CPUTYPE} == "core2"
  MACHINE_CPU = sse3
  .  endif
  MACHINE_CPU += amd64 sse2 sse mmx
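With r219376 applied, the new CPU types can be selected like any other CPUTYPE in /etc/make.conf. A minimal illustrative fragment (the value-to-flag mapping is the patch's; the comments are an interpretation of the hunks above, not part of the commit):

```make
# /etc/make.conf -- set one CPUTYPE; bsd.cpu.mk maps it to -march flags.
# On i386, the new "geode" entry shares the k6-2/k6-3 feature list:
#   MACHINE_CPU = 3dnow mmx k6 k5 i586 i486 i386
CPUTYPE?=geode
# On amd64, "core2" is no longer downgraded to nocona; it is passed
# through as -march=core2 and adds sse3 to MACHINE_CPU:
#CPUTYPE?=core2
```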
 
State-Changed-From-To: open->patched 
State-Changed-By: mm 
State-Changed-When: Tue Mar 29 13:14:16 UTC 2011 
State-Changed-Why:  
My pr. 


Responsible-Changed-From-To: freebsd-bugs->mm 
Responsible-Changed-By: mm 
Responsible-Changed-When: Tue Mar 29 13:14:16 UTC 2011 
Responsible-Changed-Why:  
My pr. 

http://www.freebsd.org/cgi/query-pr.cgi?pr=155308 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: gnu/155308: commit references a PR
Date: Sat, 30 Apr 2011 22:07:31 +0000 (UTC)

 Author: mm
 Date: Sat Apr 30 22:07:04 2011
 New Revision: 221274
 URL: http://svn.freebsd.org/changeset/base/221274
 
 Log:
   MFC 219374, 219376, 219639, 219640, 219697, 219711, 220150:
   
   MFC r219374:
   Backport Intel Core 2 and AMD Geode CPU types from gcc-4.3 (GPLv2)
   These options are supported in this shape in all newer GCC versions.
   
   Source:	gcc-4_3-branch (rev. 118090, 118973, 120846; GPLv2)
   
   MFC r219376:
   Add AMD Geode CPU type to bsd.cpu.mk and examples/etc/make.conf
   For CPUTYPE=core2 use -march=core2
   
   MFC r219639:
   Backport SSSE3 instruction set support to base gcc.
   Enabled by default for -march=core2
   
   Source:	gcc-4_3-branch (rev. 117958, 121687, 121726, 123639; GPLv2)
   
   MFC r219640:
   Add ssse3 capability for CPUTYPE=core2 to MACHINE_CPU in bsd.cpu.mk
   
   MFC r219697:
   Fix -march/-mtune=native autodetection for Intel Core 2 CPUs
   
   Source:	gcc-4_3-branch (partial rev. 119454; GPLv2)
   
   MFC r219711:
   Backport missing tunings for -march=core2:
   - enable extra 80387 mathematical constants (ext_80387_constants)
   - enable compare and exchange 16 bytes (cmpxchg16b)
   
   Verified against llvm-gcc (and apple gcc)
   Source:	gcc-4_3-branch (ref. svn revs. 119260, 121140; GPLv2)
   
   MFC r220150:
   Upgrade of base gcc and libstdc++ to the last GPLv2-licensed revision
   (rev. 127959 of gcc-4_2-branch).
   
   Resolved GCC bugs:
   	c++: 17763, 29365, 30535, 30917, 31337, 31941, 32108, 32112, 32346,
   	     32898, 32992
   	debug: 32610, 32914
   	libstdc++: 33084, 33128
   	middle-end: 32563
   	rtl-optimization: 33148
   	tree-optimization: 25413, 32723
   	target: 32218
   
   Source:	gcc-4_2-branch (up to rev. 127959)
   
   Obtained from:	gcc (var. revs of gcc-4_2-branch and gcc-4_3-branch; GPLv2)
   PR:		gnu/153298, gnu/153959, gnu/154385, gnu/155308
 
 Added:
   stable/8/contrib/gcc/config/i386/geode.md
      - copied unchanged from r219374, head/contrib/gcc/config/i386/geode.md
   stable/8/contrib/gcc/config/i386/tmmintrin.h
      - copied unchanged from r219639, head/contrib/gcc/config/i386/tmmintrin.h
 Modified:
   stable/8/contrib/gcc/BASE-VER
   stable/8/contrib/gcc/ChangeLog
   stable/8/contrib/gcc/DATESTAMP
   stable/8/contrib/gcc/DEV-PHASE
   stable/8/contrib/gcc/config.gcc
   stable/8/contrib/gcc/config/i386/driver-i386.c
   stable/8/contrib/gcc/config/i386/i386.c
   stable/8/contrib/gcc/config/i386/i386.h
   stable/8/contrib/gcc/config/i386/i386.md
   stable/8/contrib/gcc/config/i386/i386.opt
   stable/8/contrib/gcc/config/i386/sse.md
   stable/8/contrib/gcc/config/i386/xmmintrin.h
   stable/8/contrib/gcc/config/mips/predicates.md
   stable/8/contrib/gcc/config/rs6000/rs6000.c
   stable/8/contrib/gcc/config/s390/s390.md
   stable/8/contrib/gcc/cp/ChangeLog
   stable/8/contrib/gcc/cp/call.c
   stable/8/contrib/gcc/cp/cp-tree.h
   stable/8/contrib/gcc/cp/cxx-pretty-print.c
   stable/8/contrib/gcc/cp/decl.c
   stable/8/contrib/gcc/cp/decl2.c
   stable/8/contrib/gcc/cp/error.c
   stable/8/contrib/gcc/cp/lex.c
   stable/8/contrib/gcc/cp/name-lookup.c
   stable/8/contrib/gcc/cp/pt.c
   stable/8/contrib/gcc/cp/semantics.c
   stable/8/contrib/gcc/cp/typeck.c
   stable/8/contrib/gcc/doc/contrib.texi
   stable/8/contrib/gcc/doc/extend.texi
   stable/8/contrib/gcc/doc/gcc.1
   stable/8/contrib/gcc/doc/invoke.texi
   stable/8/contrib/gcc/dwarf2out.c
   stable/8/contrib/gcc/expr.c
   stable/8/contrib/gcc/fold-const.c
   stable/8/contrib/gcc/gimplify.c
   stable/8/contrib/gcc/reload1.c
   stable/8/contrib/gcc/simplify-rtx.c
   stable/8/contrib/gcc/target-def.h
   stable/8/contrib/gcc/target.h
   stable/8/contrib/gcc/targhooks.c
   stable/8/contrib/gcc/targhooks.h
   stable/8/contrib/gcc/tree-if-conv.c
   stable/8/contrib/gcc/tree-ssa-structalias.c
   stable/8/contrib/gcc/tree-vect-analyze.c
   stable/8/contrib/gcc/tree-vect-patterns.c
   stable/8/contrib/gcc/tree.c
   stable/8/contrib/gcc/tree.h
   stable/8/contrib/libstdc++/ChangeLog
   stable/8/contrib/libstdc++/include/std/std_valarray.h
   stable/8/contrib/libstdc++/include/tr1/random
   stable/8/share/examples/etc/make.conf
   stable/8/share/mk/bsd.cpu.mk
 Directory Properties:
   stable/8/contrib/gcc/   (props changed)
   stable/8/contrib/libstdc++/   (props changed)
   stable/8/share/examples/   (props changed)
   stable/8/share/examples/cvsup/   (props changed)
   stable/8/share/mk/   (props changed)
 
 Modified: stable/8/contrib/gcc/BASE-VER
 ==============================================================================
 --- stable/8/contrib/gcc/BASE-VER	Sat Apr 30 21:55:08 2011	(r221273)
 +++ stable/8/contrib/gcc/BASE-VER	Sat Apr 30 22:07:04 2011	(r221274)
 @@ -1 +1 @@
 -4.2.1
 +4.2.2
 
 Modified: stable/8/contrib/gcc/ChangeLog
 ==============================================================================
 --- stable/8/contrib/gcc/ChangeLog	Sat Apr 30 21:55:08 2011	(r221273)
 +++ stable/8/contrib/gcc/ChangeLog	Sat Apr 30 22:07:04 2011	(r221274)
 @@ -1,3 +1,121 @@
 +2007-08-31  Jakub Jelinek  <jakub@redhat.com>
 +
 +	PR rtl-optimization/33148
 +	* simplify-rtx.c (simplify_unary_operation_1): Only optimize
 +	(neg (lt X 0)) if X has scalar int mode.
 +
 +	PR debug/32914
 +	* dwarf2out.c (rtl_for_decl_init): If vector decl has CONSTRUCTOR
 +	initializer, use build_vector_from_ctor if possible to create
 +	VECTOR_CST out of it.  If vector initializer is not VECTOR_CST
 +	even after this, return NULL.
 +
 +2007-08-27  Jason Merrill  <jason@redhat.com>
 +
 +	PR c++/31337
 +	* gimplify.c (gimplify_modify_expr): Discard the assignment of 
 +	zero-sized types after calling gimplify_modify_expr_rhs.
 +
 +2007-08-24  Jakub Jelinek  <jakub@redhat.com>
 +
 +	PR debug/32610
 +	* dwarf2out.c (gen_decl_die): Don't call
 +	gen_tagged_type_instantiation_die if decl doesn't have tagged type.
 +
 +2007-08-24  Richard Guenther  <rguenther@suse.de>
 +
 +	* expr.c (get_inner_reference): Remove unused variable.
 +
 +2007-08-24  Richard Guenther  <rguenther@suse.de>
 +
 +	* expr.c (get_inner_reference): Do computation of bitoffset
 +	from offset in a way we can detect overflow reliably.
 +
 +2007-08-22  Richard Guenther  <rguenther@suse.de>
 +
 +	PR middle-end/32563
 +	* tree.c (host_integerp): Treat sizetype as signed as it is
 +	sign-extended.
 +
 +2007-08-20  Adam Nemet  <anemet@caviumnetworks.com>
 +
 +	* config/mips/predicates.md (const_call_insn_operand): Invoke
 +	SYMBOL_REF_LONG_CALL_P only on SYMBOL_REFs.
 +
 +2007-08-17  Chen liqin  <liqin@sunnorth.com.cn>
 +
 +        * config/score/score.md : Update pattern tablejump.
 +        * config/score/score.c : Update score_initialize_trampoline 
 +        function.
 +        * config/score/score.h (TRAMPOLINE_TEMPLATE): Added macro.
 +        (TRAMPOLINE_INSNS, TRAMPOLINE_SIZE) Update macro.
 +        * doc/contrib.texi: Add my entry.
 +
 +2007-08-02  Andreas Krebbel  <krebbel1@de.ibm.com>
 +
 +	* config/s390/s390.md ("*xordi3_cconly"): Change xr to xg.
 +
 +2007-08-01  Andreas Krebbel  <krebbel1@de.ibm.com>
 +
 +	* config/s390/s390.md (TF in GPR splitter): Change operand_subword
 +	parameter to TFmode.
 +
 +2007-07-30  Mark Mitchell  <mark@codesourcery.com>
 +
 +	* BASE-VER: Bump.
 +	* DEV-PHASE: Mark as prerelease.
 +
 +2007-07-25  Steve Ellcey  <sje@cup.hp.com>
 +
 +	PR target/32218
 +	* tree-vect-patterns.c (vect_pattern_recog_1): Check for valid type.
 +
 +2007-07-25  Dorit Nuzman  <dorit@il.ibm.com>
 +	    Devang Patel  <dpatel@apple.com>
 +
 +	PR tree-optimization/25413
 +	* targhooks.c (default_builtin_vector_alignment_reachable): New.
 +	* targhooks.h (default_builtin_vector_alignment_reachable): New.
 +	* tree.h (contains_packed_reference): New.
 +	* expr.c (contains_packed_reference): New.
 +	* tree-vect-analyze.c (vector_alignment_reachable_p): New.
 +	(vect_enhance_data_refs_alignment): Call
 +	vector_alignment_reachable_p.
 +	* target.h (vector_alignment_reachable): New builtin.
 +	* target-def.h (TARGET_VECTOR_ALIGNMENT_REACHABLE): New.
 +	* config/rs6000/rs6000.c (rs6000_vector_alignment_reachable): New.
 +	(TARGET_VECTOR_ALIGNMENT_REACHABLE): Define.
 +
 +2007-07-24  Richard Guenther  <rguenther@suse.de>
 +
 +	Backport from mainline:
 +	2007-07-16  Richard Guenther  <rguenther@suse.de>
 +		    Uros Bizjak  <ubizjak@gmail.com>
 +
 +	* tree-if-conv.c (find_phi_replacement_condition): Unshare "*cond"
 +	before forcing it to gimple operand.
 +
 +2007-07-24  Richard Guenther  <rguenther@suse.de>
 +
 +	PR tree-optimization/32723
 +	Backport from mainline:
 +	2007-03-09  Daniel Berlin  <dberlin@dberlin.org>
 +
 +        * tree-ssa-structalias.c (shared_bitmap_info_t): New structure.
 +        (shared_bitmap_table): New variable.
 +        (shared_bitmap_hash): New function.
 +        (shared_bitmap_eq): Ditto
 +        (shared_bitmap_lookup): Ditto.
 +        (shared_bitmap_add): Ditto.
 +        (find_what_p_points_to): Rewrite to use shared bitmap hashtable.
 +        (init_alias_vars): Init shared bitmap hashtable.
 +        (delete_points_to_sets): Delete shared bitmap hashtable.
 +
 +2007-07-23  Bernd Schmidt  <bernd.schmidt@analog.com>
 +
 +	* reload1.c (choose_reload_regs): Set reload_spill_index for regs
 +	chosen during find_reloads.
 +
  2007-07-19  Release Manager
  
  	* GCC 4.2.1 released.
 
 Modified: stable/8/contrib/gcc/DATESTAMP
 ==============================================================================
 --- stable/8/contrib/gcc/DATESTAMP	Sat Apr 30 21:55:08 2011	(r221273)
 +++ stable/8/contrib/gcc/DATESTAMP	Sat Apr 30 22:07:04 2011	(r221274)
 @@ -1 +1 @@
 -20070719
 +20070831
 
 Modified: stable/8/contrib/gcc/DEV-PHASE
 ==============================================================================
 --- stable/8/contrib/gcc/DEV-PHASE	Sat Apr 30 21:55:08 2011	(r221273)
 +++ stable/8/contrib/gcc/DEV-PHASE	Sat Apr 30 22:07:04 2011	(r221274)
 @@ -0,0 +1 @@
 +prerelease
 
 Modified: stable/8/contrib/gcc/config.gcc
 ==============================================================================
 --- stable/8/contrib/gcc/config.gcc	Sat Apr 30 21:55:08 2011	(r221273)
 +++ stable/8/contrib/gcc/config.gcc	Sat Apr 30 22:07:04 2011	(r221274)
 @@ -268,11 +268,13 @@ xscale-*-*)
  	;;
  i[34567]86-*-*)
  	cpu_type=i386
 -	extra_headers="mmintrin.h mm3dnow.h xmmintrin.h emmintrin.h pmmintrin.h"
 +	extra_headers="mmintrin.h mm3dnow.h xmmintrin.h emmintrin.h
 +		       pmmintrin.h tmmintrin.h"
  	;;
  x86_64-*-*)
  	cpu_type=i386
 -	extra_headers="mmintrin.h mm3dnow.h xmmintrin.h emmintrin.h pmmintrin.h"
 +	extra_headers="mmintrin.h mm3dnow.h xmmintrin.h emmintrin.h
 +		       pmmintrin.h tmmintrin.h"
  	need_64bit_hwint=yes
  	;;
  ia64-*-*)
 @@ -1207,14 +1209,14 @@ i[34567]86-*-solaris2*)
  		# FIXME: -m64 for i[34567]86-*-* should be allowed just
  		# like -m32 for x86_64-*-*.
  		case X"${with_cpu}" in
 -		Xgeneric|Xnocona|Xx86-64|Xk8|Xopteron|Xathlon64|Xathlon-fx)
 +		Xgeneric|Xcore2|Xnocona|Xx86-64|Xk8|Xopteron|Xathlon64|Xathlon-fx)
  			;;
  		X)
  			with_cpu=generic
  			;;
  		*)
  			echo "Unsupported CPU used in --with-cpu=$with_cpu, supported values:" 1>&2
 -			echo "generic nocona x86-64 k8 opteron athlon64 athlon-fx" 1>&2
 +			echo "generic core2 nocona x86-64 k8 opteron athlon64 athlon-fx" 1>&2
  			exit 1
  			;;
  		esac
 @@ -2537,6 +2539,9 @@ if test x$with_cpu = x ; then
          nocona-*)
            with_cpu=nocona
            ;;
 +	core2-*)
 +	  with_cpu=core2
 +	  ;;
          pentium_m-*)
            with_cpu=pentium-m
            ;;
 @@ -2556,6 +2561,9 @@ if test x$with_cpu = x ; then
          nocona-*)
            with_cpu=nocona
            ;;
 +	core2-*)
 +	  with_cpu=core2
 +	  ;;
          *)
            with_cpu=generic
            ;;
 @@ -2787,7 +2795,7 @@ case "${target}" in
  				esac
  				# OK
  				;;
 -			"" | k8 | opteron | athlon64 | athlon-fx | nocona | generic)
 +			"" | k8 | opteron | athlon64 | athlon-fx | nocona | core2 | generic)
  				# OK
  				;;
  			*)
 
 Modified: stable/8/contrib/gcc/config/i386/driver-i386.c
 ==============================================================================
 --- stable/8/contrib/gcc/config/i386/driver-i386.c	Sat Apr 30 21:55:08 2011	(r221273)
 +++ stable/8/contrib/gcc/config/i386/driver-i386.c	Sat Apr 30 22:07:04 2011	(r221274)
 @@ -39,6 +39,7 @@ const char *host_detect_local_cpu (int a
  #define bit_SSE2 (1 << 26)
  
  #define bit_SSE3 (1 << 0)
 +#define bit_SSSE3 (1 << 9)
  #define bit_CMPXCHG16B (1 << 13)
  
  #define bit_3DNOW (1 << 31)
 @@ -66,7 +67,7 @@ const char *host_detect_local_cpu (int a
    unsigned int vendor;
    unsigned int ext_level;
    unsigned char has_mmx = 0, has_3dnow = 0, has_3dnowp = 0, has_sse = 0;
 -  unsigned char has_sse2 = 0, has_sse3 = 0, has_cmov = 0;
 +  unsigned char has_sse2 = 0, has_sse3 = 0, has_ssse3 = 0, has_cmov = 0;
    unsigned char has_longmode = 0, has_cmpxchg8b = 0;
    unsigned char is_amd = 0;
    unsigned int family = 0;
 @@ -107,6 +108,7 @@ const char *host_detect_local_cpu (int a
    has_sse = !!(edx & bit_SSE);
    has_sse2 = !!(edx & bit_SSE2);
    has_sse3 = !!(ecx & bit_SSE3);
 +  has_ssse3 = !!(ecx & bit_SSSE3);
    /* We don't care for extended family.  */
    family = (eax >> 8) & ~(1 << 4);
  
 @@ -148,7 +150,9 @@ const char *host_detect_local_cpu (int a
  	  /* We have no idea.  Use something reasonable.  */
  	  if (arch)
  	    {
 -	      if (has_sse3)
 +	      if (has_ssse3)
 +		cpu = "core2";
 +	      else if (has_sse3)
  		{
  		  if (has_longmode)
  		    cpu = "nocona";
 @@ -230,6 +234,9 @@ const char *host_detect_local_cpu (int a
  	  cpu = "generic";
  	}
        break;
 +    case PROCESSOR_GEODE:
 +      cpu = "geode";
 +      break;
      case PROCESSOR_K6:
        if (has_3dnow)
          cpu = "k6-3";
 
 Copied: stable/8/contrib/gcc/config/i386/geode.md (from r219374, head/contrib/gcc/config/i386/geode.md)
 ==============================================================================
 --- /dev/null	00:00:00 1970	(empty, because file is newly added)
 +++ stable/8/contrib/gcc/config/i386/geode.md	Sat Apr 30 22:07:04 2011	(r221274, copy of r219374, head/contrib/gcc/config/i386/geode.md)
 @@ -0,0 +1,153 @@
 +;; Geode Scheduling
 +;; Copyright (C) 2006
 +;; Free Software Foundation, Inc.
 +;;
 +;; This file is part of GCC.
 +;;
 +;; GCC is free software; you can redistribute it and/or modify
 +;; it under the terms of the GNU General Public License as published by
 +;; the Free Software Foundation; either version 2, or (at your option)
 +;; any later version.
 +;;
 +;; GCC is distributed in the hope that it will be useful,
 +;; but WITHOUT ANY WARRANTY; without even the implied warranty of
 +;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 +;; GNU General Public License for more details.
 +;;
 +;; You should have received a copy of the GNU General Public License
 +;; along with GCC; see the file COPYING.  If not, write to
 +;; the Free Software Foundation, 51 Franklin Street, Fifth Floor,
 +;; Boston, MA 02110-1301, USA.
 +;;
 +;; The Geode architecture is one insn issue processor.
 +;;
 +;; This description is based on data from the following documents:
 +;;
 +;;    "AMD Geode GX Processor Data Book"
 +;;    Advanced Micro Devices, Inc., Aug 2005.
 +;;
 +;;    "AMD Geode LX Processor Data Book"
 +;;    Advanced Micro Devices, Inc., Jan 2006.
 +;;
 +;;
 +;; CPU execution units of the Geode:
 +;;
 +;; issue	describes the issue pipeline.
 +;; alu		describes the Integer unit
 +;; fpu		describes the FP unit
 +;;
 +;; The fp unit is out of order execution unit with register renaming.
 +;; There is also memory management unit and execution pipeline for
 +;; load/store operations.  We ignore it and difference between insns
 +;; using memory and registers.
 +
 +(define_automaton "geode")
 +
 +(define_cpu_unit "geode_issue,geode_alu,geode_fpu" "geode")
 +
 +(define_insn_reservation "alu" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "alu,alu1,negnot,icmp,lea,test,imov,imovx,icmov,incdec,setcc"))
 +			 "geode_issue,geode_alu")
 +
 +(define_insn_reservation "shift" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "ishift,ishift1,rotate,rotate1,cld"))
 +			 "geode_issue,geode_alu*2")
 +
 +(define_insn_reservation "imul" 7
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "imul"))
 +			 "geode_issue,geode_alu*7")
 +
 +(define_insn_reservation "idiv" 40
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "idiv"))
 +			 "geode_issue,geode_alu*40")
 +
 +;; The branch unit.
 +(define_insn_reservation "call" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "call,callv"))
 +			 "geode_issue,geode_alu*2")
 +
 +(define_insn_reservation "geode_branch" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "ibr"))
 +			 "geode_issue,geode_alu")
 +
 +(define_insn_reservation "geode_pop_push" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "pop,push"))
 +			 "geode_issue,geode_alu")
 +
 +(define_insn_reservation "geode_leave" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "leave"))
 +			 "geode_issue,geode_alu*2")
 +
 +(define_insn_reservation "geode_load_str" 4
 +			 (and (eq_attr "cpu" "geode")
 +			      (and (eq_attr "type" "str")
 +				   (eq_attr "memory" "load,both")))
 +			 "geode_issue,geode_alu*4")
 +
 +(define_insn_reservation "geode_store_str" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (and (eq_attr "type" "str")
 +				   (eq_attr "memory" "store")))
 +			 "geode_issue,geode_alu*2")
 +
 +;; Be optimistic
 +(define_insn_reservation "geode_unknown" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "multi,other"))
 +			 "geode_issue,geode_alu")
 +
 +;; FPU
 +
 +(define_insn_reservation "geode_fop" 6
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fop,fcmp"))
 +			 "geode_issue,geode_fpu*6")
 +
 +(define_insn_reservation "geode_fsimple" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fmov,fcmov,fsgn,fxch"))
 +			 "geode_issue,geode_fpu")
 +
 +(define_insn_reservation "geode_fist" 4
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fistp,fisttp"))
 +			 "geode_issue,geode_fpu*4")
 +
 +(define_insn_reservation "geode_fmul" 10
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fmul"))
 +			 "geode_issue,geode_fpu*10")
 +
 +(define_insn_reservation "geode_fdiv" 47
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fdiv"))
 +			 "geode_issue,geode_fpu*47")
 +
 +;; We use minimal latency (fsin) here
 +(define_insn_reservation "geode_fpspc" 54
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fpspc"))
 +			 "geode_issue,geode_fpu*54")
 +
 +(define_insn_reservation "geode_frndint" 12
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "frndint"))
 +			 "geode_issue,geode_fpu*12")
 +
 +(define_insn_reservation "geode_mmxmov" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "mmxmov"))
 +			 "geode_issue,geode_fpu")
 +
 +(define_insn_reservation "geode_mmx" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "mmx,mmxadd,mmxmul,mmxcmp,mmxcvt,mmxshft"))
 +			 "geode_issue,geode_fpu*2")
 
 Modified: stable/8/contrib/gcc/config/i386/i386.c
 ==============================================================================
 --- stable/8/contrib/gcc/config/i386/i386.c	Sat Apr 30 21:55:08 2011	(r221273)
 +++ stable/8/contrib/gcc/config/i386/i386.c	Sat Apr 30 22:07:04 2011	(r221274)
 @@ -336,6 +336,60 @@ struct processor_costs pentiumpro_cost =
  };
  
  static const
 +struct processor_costs geode_cost = {
 +  COSTS_N_INSNS (1),			/* cost of an add instruction */
 +  COSTS_N_INSNS (1),			/* cost of a lea instruction */
 +  COSTS_N_INSNS (2),			/* variable shift costs */
 +  COSTS_N_INSNS (1),			/* constant shift costs */
 +  {COSTS_N_INSNS (3),			/* cost of starting multiply for QI */
 +   COSTS_N_INSNS (4),			/*                               HI */
 +   COSTS_N_INSNS (7),			/*                               SI */
 +   COSTS_N_INSNS (7),			/*                               DI */
 +   COSTS_N_INSNS (7)},			/*                               other */
 +  0,					/* cost of multiply per each bit set */
 +  {COSTS_N_INSNS (15),			/* cost of a divide/mod for QI */
 +   COSTS_N_INSNS (23),			/*                          HI */
 +   COSTS_N_INSNS (39),			/*                          SI */
 +   COSTS_N_INSNS (39),			/*                          DI */
 +   COSTS_N_INSNS (39)},			/*                          other */
 +  COSTS_N_INSNS (1),			/* cost of movsx */
 +  COSTS_N_INSNS (1),			/* cost of movzx */
 +  8,					/* "large" insn */
 +  4,					/* MOVE_RATIO */
 +  1,					/* cost for loading QImode using movzbl */
 +  {1, 1, 1},				/* cost of loading integer registers
 +					   in QImode, HImode and SImode.
 +					   Relative to reg-reg move (2).  */
 +  {1, 1, 1},				/* cost of storing integer registers */
 +  1,					/* cost of reg,reg fld/fst */
 +  {1, 1, 1},				/* cost of loading fp registers
 +					   in SFmode, DFmode and XFmode */
 +  {4, 6, 6},				/* cost of storing fp registers
 +					   in SFmode, DFmode and XFmode */
 +
 +  1,					/* cost of moving MMX register */
 +  {1, 1},				/* cost of loading MMX registers
 +					   in SImode and DImode */
 +  {1, 1},				/* cost of storing MMX registers
 +					   in SImode and DImode */
 +  1,					/* cost of moving SSE register */
 +  {1, 1, 1},				/* cost of loading SSE registers
 +					   in SImode, DImode and TImode */
 +  {1, 1, 1},				/* cost of storing SSE registers
 +					   in SImode, DImode and TImode */
 +  1,					/* MMX or SSE register to integer */
 +  32,					/* size of prefetch block */
 +  1,					/* number of parallel prefetches */
 +  1,					/* Branch cost */
 +  COSTS_N_INSNS (6),			/* cost of FADD and FSUB insns.  */
 +  COSTS_N_INSNS (11),			/* cost of FMUL instruction.  */
 +  COSTS_N_INSNS (47),			/* cost of FDIV instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FABS instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FCHS instruction.  */
 +  COSTS_N_INSNS (54),			/* cost of FSQRT instruction.  */
 +};
 +
 +static const
  struct processor_costs k6_cost = {
    COSTS_N_INSNS (1),			/* cost of an add instruction */
    COSTS_N_INSNS (2),			/* cost of a lea instruction */
 @@ -600,6 +654,58 @@ struct processor_costs nocona_cost = {
    COSTS_N_INSNS (44),			/* cost of FSQRT instruction.  */
  };
  
 +static const
 +struct processor_costs core2_cost = {
 +  COSTS_N_INSNS (1),			/* cost of an add instruction */
 +  COSTS_N_INSNS (1) + 1,		/* cost of a lea instruction */
 +  COSTS_N_INSNS (1),			/* variable shift costs */
 +  COSTS_N_INSNS (1),			/* constant shift costs */
 +  {COSTS_N_INSNS (3),			/* cost of starting multiply for QI */
 +   COSTS_N_INSNS (3),			/*                               HI */
 +   COSTS_N_INSNS (3),			/*                               SI */
 +   COSTS_N_INSNS (3),			/*                               DI */
 +   COSTS_N_INSNS (3)},			/*                               other */
 +  0,					/* cost of multiply per each bit set */
 +  {COSTS_N_INSNS (22),			/* cost of a divide/mod for QI */
 +   COSTS_N_INSNS (22),			/*                          HI */
 +   COSTS_N_INSNS (22),			/*                          SI */
 +   COSTS_N_INSNS (22),			/*                          DI */
 +   COSTS_N_INSNS (22)},			/*                          other */
 +  COSTS_N_INSNS (1),			/* cost of movsx */
 +  COSTS_N_INSNS (1),			/* cost of movzx */
 +  8,					/* "large" insn */
 +  16,					/* MOVE_RATIO */
 +  2,					/* cost for loading QImode using movzbl */
 +  {6, 6, 6},				/* cost of loading integer registers
 +					   in QImode, HImode and SImode.
 +					   Relative to reg-reg move (2).  */
 +  {4, 4, 4},				/* cost of storing integer registers */
 +  2,					/* cost of reg,reg fld/fst */
 +  {6, 6, 6},				/* cost of loading fp registers
 +					   in SFmode, DFmode and XFmode */
 +  {4, 4, 4},				/* cost of loading integer registers */
 +  2,					/* cost of moving MMX register */
 +  {6, 6},				/* cost of loading MMX registers
 +					   in SImode and DImode */
 +  {4, 4},				/* cost of storing MMX registers
 +					   in SImode and DImode */
 +  2,					/* cost of moving SSE register */
 +  {6, 6, 6},				/* cost of loading SSE registers
 +					   in SImode, DImode and TImode */
 +  {4, 4, 4},				/* cost of storing SSE registers
 +					   in SImode, DImode and TImode */
 +  2,					/* MMX or SSE register to integer */
 +  128,					/* size of prefetch block */
 +  8,					/* number of parallel prefetches */
 +  3,					/* Branch cost */
 +  COSTS_N_INSNS (3),			/* cost of FADD and FSUB insns.  */
 +  COSTS_N_INSNS (5),			/* cost of FMUL instruction.  */
 +  COSTS_N_INSNS (32),			/* cost of FDIV instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FABS instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FCHS instruction.  */
 +  COSTS_N_INSNS (58),			/* cost of FSQRT instruction.  */
 +};
 +
  /* Generic64 should produce code tuned for Nocona and K8.  */
  static const
  struct processor_costs generic64_cost = {
 @@ -721,38 +827,41 @@ const struct processor_costs *ix86_cost 
  #define m_486 (1<<PROCESSOR_I486)
  #define m_PENT (1<<PROCESSOR_PENTIUM)
  #define m_PPRO (1<<PROCESSOR_PENTIUMPRO)
 +#define m_GEODE  (1<<PROCESSOR_GEODE)
 +#define m_K6_GEODE  (m_K6 | m_GEODE)
  #define m_K6  (1<<PROCESSOR_K6)
  #define m_ATHLON  (1<<PROCESSOR_ATHLON)
  #define m_PENT4  (1<<PROCESSOR_PENTIUM4)
  #define m_K8  (1<<PROCESSOR_K8)
  #define m_ATHLON_K8  (m_K8 | m_ATHLON)
  #define m_NOCONA  (1<<PROCESSOR_NOCONA)
 +#define m_CORE2  (1<<PROCESSOR_CORE2)
  #define m_GENERIC32 (1<<PROCESSOR_GENERIC32)
  #define m_GENERIC64 (1<<PROCESSOR_GENERIC64)
  #define m_GENERIC (m_GENERIC32 | m_GENERIC64)
  
  /* Generic instruction choice should be common subset of supported CPUs
 -   (PPro/PENT4/NOCONA/Athlon/K8).  */
 +   (PPro/PENT4/NOCONA/CORE2/Athlon/K8).  */
  
  /* Leave is not affecting Nocona SPEC2000 results negatively, so enabling for
     Generic64 seems like good code size tradeoff.  We can't enable it for 32bit
     generic because it is not working well with PPro base chips.  */
 -const int x86_use_leave = m_386 | m_K6 | m_ATHLON_K8 | m_GENERIC64;
 -const int x86_push_memory = m_386 | m_K6 | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 +const int x86_use_leave = m_386 | m_K6_GEODE | m_ATHLON_K8 | m_CORE2 | m_GENERIC64;
 +const int x86_push_memory = m_386 | m_K6_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
  const int x86_zero_extend_with_and = m_486 | m_PENT;
 -const int x86_movx = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_GENERIC /* m_386 | m_K6 */;
 +const int x86_movx = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC | m_GEODE /* m_386 | m_K6 */;
  const int x86_double_with_add = ~m_386;
  const int x86_use_bit_test = m_386;
 -const int x86_unroll_strlen = m_486 | m_PENT | m_PPRO | m_ATHLON_K8 | m_K6 | m_GENERIC;
 -const int x86_cmove = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA;
 +const int x86_unroll_strlen = m_486 | m_PENT | m_PPRO | m_ATHLON_K8 | m_K6 | m_CORE2 | m_GENERIC;
 +const int x86_cmove = m_PPRO | m_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA;
  const int x86_3dnow_a = m_ATHLON_K8;
 -const int x86_deep_branch = m_PPRO | m_K6 | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 +const int x86_deep_branch = m_PPRO | m_K6_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
  /* Branch hints were put in P4 based on simulation result. But
     after P4 was made, no performance benefit was observed with
     branch hints. It also increases the code size. As the result,
     icc never generates branch hints.  */
  const int x86_branch_hints = 0;
 -const int x86_use_sahf = m_PPRO | m_K6 | m_PENT4 | m_NOCONA | m_GENERIC32; /*m_GENERIC | m_ATHLON_K8 ? */
 +const int x86_use_sahf = m_PPRO | m_K6_GEODE | m_PENT4 | m_NOCONA | m_GENERIC32; /*m_GENERIC | m_ATHLON_K8 ? */
  /* We probably ought to watch for partial register stalls on Generic32
     compilation setting as well.  However in current implementation the
     partial register stalls are not eliminated very well - they can
 @@ -762,15 +871,15 @@ const int x86_use_sahf = m_PPRO | m_K6 |
     with partial reg. dependencies used by Athlon/P4 based chips, it is better
     to leave it off for generic32 for now.  */
  const int x86_partial_reg_stall = m_PPRO;
 -const int x86_partial_flag_reg_stall = m_GENERIC;
 -const int x86_use_himode_fiop = m_386 | m_486 | m_K6;
 -const int x86_use_simode_fiop = ~(m_PPRO | m_ATHLON_K8 | m_PENT | m_GENERIC);
 +const int x86_partial_flag_reg_stall =  m_CORE2 | m_GENERIC;
 +const int x86_use_himode_fiop = m_386 | m_486 | m_K6_GEODE;
 +const int x86_use_simode_fiop = ~(m_PPRO | m_ATHLON_K8 | m_PENT | m_CORE2 | m_GENERIC);
  const int x86_use_mov0 = m_K6;
 -const int x86_use_cltd = ~(m_PENT | m_K6 | m_GENERIC);
 +const int x86_use_cltd = ~(m_PENT | m_K6 | m_CORE2 | m_GENERIC);
  const int x86_read_modify_write = ~m_PENT;
  const int x86_read_modify = ~(m_PENT | m_PPRO);
  const int x86_split_long_moves = m_PPRO;
 -const int x86_promote_QImode = m_K6 | m_PENT | m_386 | m_486 | m_ATHLON_K8 | m_GENERIC; /* m_PENT4 ? */
 +const int x86_promote_QImode = m_K6_GEODE | m_PENT | m_386 | m_486 | m_ATHLON_K8 | m_CORE2 | m_GENERIC; /* m_PENT4 ? */
  const int x86_fast_prefix = ~(m_PENT | m_486 | m_386);
  const int x86_single_stringop = m_386 | m_PENT4 | m_NOCONA;
  const int x86_qimode_math = ~(0);
 @@ -780,18 +889,18 @@ const int x86_promote_qi_regs = 0;
     if our scheme for avoiding partial stalls was more effective.  */
  const int x86_himode_math = ~(m_PPRO);
  const int x86_promote_hi_regs = m_PPRO;
 -const int x86_sub_esp_4 = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_sub_esp_8 = m_ATHLON_K8 | m_PPRO | m_386 | m_486 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_add_esp_4 = m_ATHLON_K8 | m_K6 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_add_esp_8 = m_ATHLON_K8 | m_PPRO | m_K6 | m_386 | m_486 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_integer_DFmode_moves = ~(m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC);
 -const int x86_partial_reg_dependency = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_memory_mismatch_stall = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_accumulate_outgoing_args = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC;
 -const int x86_prologue_using_move = m_ATHLON_K8 | m_PPRO | m_GENERIC;
 -const int x86_epilogue_using_move = m_ATHLON_K8 | m_PPRO | m_GENERIC;
 +const int x86_sub_esp_4 = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_sub_esp_8 = m_ATHLON_K8 | m_PPRO | m_386 | m_486 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_add_esp_4 = m_ATHLON_K8 | m_K6_GEODE | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_add_esp_8 = m_ATHLON_K8 | m_PPRO | m_K6_GEODE | m_386 | m_486 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_integer_DFmode_moves = ~(m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC | m_GEODE);
 +const int x86_partial_reg_dependency = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_memory_mismatch_stall = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_accumulate_outgoing_args = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC;
 +const int x86_prologue_using_move = m_ATHLON_K8 | m_PPRO | m_CORE2 | m_GENERIC;
 +const int x86_epilogue_using_move = m_ATHLON_K8 | m_PPRO | m_CORE2 | m_GENERIC;
  const int x86_shift1 = ~m_486;
 -const int x86_arch_always_fancy_math_387 = m_PENT | m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 +const int x86_arch_always_fancy_math_387 = m_PENT | m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
  /* In Generic model we have an conflict here in between PPro/Pentium4 based chips
     that thread 128bit SSE registers as single units versus K8 based chips that
     divide SSE registers to two 64bit halves.
 @@ -801,7 +910,7 @@ const int x86_arch_always_fancy_math_387
     this option on P4 brings over 20% SPECfp regression, while enabling it on
     K8 brings roughly 2.4% regression that can be partly masked by careful scheduling
     of moves.  */
 -const int x86_sse_partial_reg_dependency = m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC;
 +const int x86_sse_partial_reg_dependency = m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC;
  /* Set for machines where the type and dependencies are resolved on SSE
     register parts instead of whole registers, so we may maintain just
     lower part of scalar values in proper format leaving the upper part
 @@ -810,28 +919,28 @@ const int x86_sse_split_regs = m_ATHLON_
  const int x86_sse_typeless_stores = m_ATHLON_K8;
  const int x86_sse_load0_by_pxor = m_PPRO | m_PENT4 | m_NOCONA;
  const int x86_use_ffreep = m_ATHLON_K8;
 -const int x86_rep_movl_optimal = m_386 | m_PENT | m_PPRO | m_K6;
 -const int x86_use_incdec = ~(m_PENT4 | m_NOCONA | m_GENERIC);
 +const int x86_rep_movl_optimal = m_386 | m_PENT | m_PPRO | m_K6_GEODE | m_CORE2;
 +const int x86_use_incdec = ~(m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC);
  
  /* ??? Allowing interunit moves makes it all too easy for the compiler to put
     integer data in xmm registers.  Which results in pretty abysmal code.  */
  const int x86_inter_unit_moves = 0 /* ~(m_ATHLON_K8) */;
  
 -const int x86_ext_80387_constants = m_K6 | m_ATHLON | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC32;
 +const int x86_ext_80387_constants = m_K6_GEODE | m_ATHLON | m_PENT4 | m_NOCONA | m_CORE2 | m_PPRO | m_GENERIC32;
  /* Some CPU cores are not able to predict more than 4 branch instructions in
     the 16 byte window.  */
 -const int x86_four_jump_limit = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_schedule = m_PPRO | m_ATHLON_K8 | m_K6 | m_PENT | m_GENERIC;
 +const int x86_four_jump_limit = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_schedule = m_PPRO | m_ATHLON_K8 | m_K6_GEODE | m_PENT | m_CORE2 | m_GENERIC;
  const int x86_use_bt = m_ATHLON_K8;
  /* Compare and exchange was added for 80486.  */
  const int x86_cmpxchg = ~m_386;
  /* Compare and exchange 8 bytes was added for pentium.  */
  const int x86_cmpxchg8b = ~(m_386 | m_486);
  /* Compare and exchange 16 bytes was added for nocona.  */
 -const int x86_cmpxchg16b = m_NOCONA;
 +const int x86_cmpxchg16b = m_NOCONA | m_CORE2;
  /* Exchange and add was added for 80486.  */
  const int x86_xadd = ~m_386;
 -const int x86_pad_returns = m_ATHLON_K8 | m_GENERIC;
 +const int x86_pad_returns = m_ATHLON_K8 | m_CORE2 | m_GENERIC;
  
  /* In case the average insn count for single function invocation is
     lower than this constant, emit fast (but longer) prologue and
 @@ -1402,16 +1511,24 @@ ix86_handle_option (size_t code, const c
      case OPT_msse:
        if (!value)
  	{
 -	  target_flags &= ~(MASK_SSE2 | MASK_SSE3);
 -	  target_flags_explicit |= MASK_SSE2 | MASK_SSE3;
 +	  target_flags &= ~(MASK_SSE2 | MASK_SSE3 | MASK_SSSE3);
 +	  target_flags_explicit |= MASK_SSE2 | MASK_SSE3 | MASK_SSSE3;
  	}
        return true;
  
      case OPT_msse2:
        if (!value)
  	{
 -	  target_flags &= ~MASK_SSE3;
 -	  target_flags_explicit |= MASK_SSE3;
 +	  target_flags &= ~(MASK_SSE3 | MASK_SSSE3);
 +	  target_flags_explicit |= MASK_SSE3 | MASK_SSSE3;
 +	}
 +      return true;
 +
 +    case OPT_msse3:
 +      if (!value)
 +	{
 +	  target_flags &= ~MASK_SSSE3;
 +	  target_flags_explicit |= MASK_SSSE3;
  	}
        return true;
  
 @@ -1455,11 +1572,13 @@ override_options (void)
        {&i486_cost, 0, 0, 16, 15, 16, 15, 16},
        {&pentium_cost, 0, 0, 16, 7, 16, 7, 16},
        {&pentiumpro_cost, 0, 0, 16, 15, 16, 7, 16},
 +      {&geode_cost, 0, 0, 0, 0, 0, 0, 0},
        {&k6_cost, 0, 0, 32, 7, 32, 7, 32},
        {&athlon_cost, 0, 0, 16, 7, 16, 7, 16},
        {&pentium4_cost, 0, 0, 0, 0, 0, 0, 0},
        {&k8_cost, 0, 0, 16, 7, 16, 7, 16},
        {&nocona_cost, 0, 0, 0, 0, 0, 0, 0},
 +      {&core2_cost, 0, 0, 16, 7, 16, 7, 16},
        {&generic32_cost, 0, 0, 16, 7, 16, 7, 16},
        {&generic64_cost, 0, 0, 16, 7, 16, 7, 16}
      };
 @@ -1478,7 +1597,8 @@ override_options (void)
  	  PTA_PREFETCH_SSE = 16,
  	  PTA_3DNOW = 32,
  	  PTA_3DNOW_A = 64,
 -	  PTA_64BIT = 128
 +	  PTA_64BIT = 128,
 +	  PTA_SSSE3 = 256
  	} flags;
      }
    const processor_alias_table[] =
 @@ -1506,6 +1626,11 @@ override_options (void)
  				        | PTA_MMX | PTA_PREFETCH_SSE},
        {"nocona", PROCESSOR_NOCONA, PTA_SSE | PTA_SSE2 | PTA_SSE3 | PTA_64BIT
  				        | PTA_MMX | PTA_PREFETCH_SSE},
 +      {"core2", PROCESSOR_CORE2, PTA_SSE | PTA_SSE2 | PTA_SSE3 | PTA_SSSE3
 +                                        | PTA_64BIT | PTA_MMX
 +                                        | PTA_PREFETCH_SSE},
 +      {"geode", PROCESSOR_GEODE, PTA_MMX | PTA_PREFETCH_SSE | PTA_3DNOW
 +				   | PTA_3DNOW_A},
        {"k6", PROCESSOR_K6, PTA_MMX},
        {"k6-2", PROCESSOR_K6, PTA_MMX | PTA_3DNOW},
        {"k6-3", PROCESSOR_K6, PTA_MMX | PTA_3DNOW},
 @@ -1695,6 +1820,9 @@ override_options (void)
  	if (processor_alias_table[i].flags & PTA_SSE3
  	    && !(target_flags_explicit & MASK_SSE3))
  	  target_flags |= MASK_SSE3;
 +	if (processor_alias_table[i].flags & PTA_SSSE3
 +	    && !(target_flags_explicit & MASK_SSSE3))
 +	  target_flags |= MASK_SSSE3;
  	if (processor_alias_table[i].flags & PTA_PREFETCH_SSE)
  	  x86_prefetch_sse = true;
  	if (TARGET_64BIT && !(processor_alias_table[i].flags & PTA_64BIT))
 @@ -1871,6 +1999,10 @@ override_options (void)
    if (!TARGET_80387)
      target_flags |= MASK_NO_FANCY_MATH_387;
  
 +  /* Turn on SSE3 builtins for -mssse3.  */
 +  if (TARGET_SSSE3)
 +    target_flags |= MASK_SSE3;
 +
    /* Turn on SSE2 builtins for -msse3.  */
    if (TARGET_SSE3)
      target_flags |= MASK_SSE2;
 @@ -13706,6 +13838,9 @@ ix86_issue_rate (void)
      case PROCESSOR_GENERIC64:
        return 3;
  
 +    case PROCESSOR_CORE2:
 +      return 4;
 +
      default:
        return 1;
      }
 @@ -14574,6 +14709,41 @@ enum ix86_builtins
    IX86_BUILTIN_MONITOR,
    IX86_BUILTIN_MWAIT,
  
 +  /* SSSE3.  */
 +  IX86_BUILTIN_PHADDW,
 +  IX86_BUILTIN_PHADDD,
 +  IX86_BUILTIN_PHADDSW,
 +  IX86_BUILTIN_PHSUBW,
 +  IX86_BUILTIN_PHSUBD,
 +  IX86_BUILTIN_PHSUBSW,
 +  IX86_BUILTIN_PMADDUBSW,
 +  IX86_BUILTIN_PMULHRSW,
 +  IX86_BUILTIN_PSHUFB,
 +  IX86_BUILTIN_PSIGNB,
 +  IX86_BUILTIN_PSIGNW,
 +  IX86_BUILTIN_PSIGND,
 +  IX86_BUILTIN_PALIGNR,
 +  IX86_BUILTIN_PABSB,
 +  IX86_BUILTIN_PABSW,
 +  IX86_BUILTIN_PABSD,
 +
 +  IX86_BUILTIN_PHADDW128,
 +  IX86_BUILTIN_PHADDD128,
 +  IX86_BUILTIN_PHADDSW128,
 +  IX86_BUILTIN_PHSUBW128,
 +  IX86_BUILTIN_PHSUBD128,
 +  IX86_BUILTIN_PHSUBSW128,
 +  IX86_BUILTIN_PMADDUBSW128,
 +  IX86_BUILTIN_PMULHRSW128,
 +  IX86_BUILTIN_PSHUFB128,
 +  IX86_BUILTIN_PSIGNB128,
 +  IX86_BUILTIN_PSIGNW128,
 +  IX86_BUILTIN_PSIGND128,
 +  IX86_BUILTIN_PALIGNR128,
 +  IX86_BUILTIN_PABSB128,
 +  IX86_BUILTIN_PABSW128,
 +  IX86_BUILTIN_PABSD128,
 +
    IX86_BUILTIN_VEC_INIT_V2SI,
    IX86_BUILTIN_VEC_INIT_V4HI,
    IX86_BUILTIN_VEC_INIT_V8QI,
 @@ -14915,7 +15085,33 @@ static const struct builtin_description 
    { MASK_SSE3, CODE_FOR_sse3_haddv4sf3, "__builtin_ia32_haddps", IX86_BUILTIN_HADDPS, 0, 0 },
    { MASK_SSE3, CODE_FOR_sse3_haddv2df3, "__builtin_ia32_haddpd", IX86_BUILTIN_HADDPD, 0, 0 },
    { MASK_SSE3, CODE_FOR_sse3_hsubv4sf3, "__builtin_ia32_hsubps", IX86_BUILTIN_HSUBPS, 0, 0 },
 -  { MASK_SSE3, CODE_FOR_sse3_hsubv2df3, "__builtin_ia32_hsubpd", IX86_BUILTIN_HSUBPD, 0, 0 }
 +  { MASK_SSE3, CODE_FOR_sse3_hsubv2df3, "__builtin_ia32_hsubpd", IX86_BUILTIN_HSUBPD, 0, 0 },
 +
 +  /* SSSE3 */
 +  { MASK_SSSE3, CODE_FOR_ssse3_phaddwv8hi3, "__builtin_ia32_phaddw128", IX86_BUILTIN_PHADDW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phaddwv4hi3, "__builtin_ia32_phaddw", IX86_BUILTIN_PHADDW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phadddv4si3, "__builtin_ia32_phaddd128", IX86_BUILTIN_PHADDD128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phadddv2si3, "__builtin_ia32_phaddd", IX86_BUILTIN_PHADDD, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phaddswv8hi3, "__builtin_ia32_phaddsw128", IX86_BUILTIN_PHADDSW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phaddswv4hi3, "__builtin_ia32_phaddsw", IX86_BUILTIN_PHADDSW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubwv8hi3, "__builtin_ia32_phsubw128", IX86_BUILTIN_PHSUBW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubwv4hi3, "__builtin_ia32_phsubw", IX86_BUILTIN_PHSUBW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubdv4si3, "__builtin_ia32_phsubd128", IX86_BUILTIN_PHSUBD128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubdv2si3, "__builtin_ia32_phsubd", IX86_BUILTIN_PHSUBD, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubswv8hi3, "__builtin_ia32_phsubsw128", IX86_BUILTIN_PHSUBSW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubswv4hi3, "__builtin_ia32_phsubsw", IX86_BUILTIN_PHSUBSW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pmaddubswv8hi3, "__builtin_ia32_pmaddubsw128", IX86_BUILTIN_PMADDUBSW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pmaddubswv4hi3, "__builtin_ia32_pmaddubsw", IX86_BUILTIN_PMADDUBSW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pmulhrswv8hi3, "__builtin_ia32_pmulhrsw128", IX86_BUILTIN_PMULHRSW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pmulhrswv4hi3, "__builtin_ia32_pmulhrsw", IX86_BUILTIN_PMULHRSW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pshufbv16qi3, "__builtin_ia32_pshufb128", IX86_BUILTIN_PSHUFB128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pshufbv8qi3, "__builtin_ia32_pshufb", IX86_BUILTIN_PSHUFB, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv16qi3, "__builtin_ia32_psignb128", IX86_BUILTIN_PSIGNB128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv8qi3, "__builtin_ia32_psignb", IX86_BUILTIN_PSIGNB, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv8hi3, "__builtin_ia32_psignw128", IX86_BUILTIN_PSIGNW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv4hi3, "__builtin_ia32_psignw", IX86_BUILTIN_PSIGNW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv4si3, "__builtin_ia32_psignd128", IX86_BUILTIN_PSIGND128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv2si3, "__builtin_ia32_psignd", IX86_BUILTIN_PSIGND, 0, 0 }
  };
  
  static const struct builtin_description bdesc_1arg[] =
 @@ -14962,6 +15158,14 @@ static const struct builtin_description 
    /* SSE3 */
    { MASK_SSE3, CODE_FOR_sse3_movshdup, 0, IX86_BUILTIN_MOVSHDUP, 0, 0 },
    { MASK_SSE3, CODE_FOR_sse3_movsldup, 0, IX86_BUILTIN_MOVSLDUP, 0, 0 },
 +
 +  /* SSSE3 */
 +  { MASK_SSSE3, CODE_FOR_absv16qi2, "__builtin_ia32_pabsb128", IX86_BUILTIN_PABSB128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_absv8qi2, "__builtin_ia32_pabsb", IX86_BUILTIN_PABSB, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_absv8hi2, "__builtin_ia32_pabsw128", IX86_BUILTIN_PABSW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_absv4hi2, "__builtin_ia32_pabsw", IX86_BUILTIN_PABSW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_absv4si2, "__builtin_ia32_pabsd128", IX86_BUILTIN_PABSD128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_absv2si2, "__builtin_ia32_pabsd", IX86_BUILTIN_PABSD, 0, 0 },
  };
  
  static void
 @@ -15096,6 +15300,16 @@ ix86_init_mmx_sse_builtins (void)
    /* Normal vector unops.  */
    tree v4sf_ftype_v4sf
      = build_function_type_list (V4SF_type_node, V4SF_type_node, NULL_TREE);
 +  tree v16qi_ftype_v16qi
 +    = build_function_type_list (V16QI_type_node, V16QI_type_node, NULL_TREE);
 +  tree v8hi_ftype_v8hi
 +    = build_function_type_list (V8HI_type_node, V8HI_type_node, NULL_TREE);
 +  tree v4si_ftype_v4si
 +    = build_function_type_list (V4SI_type_node, V4SI_type_node, NULL_TREE);
 +  tree v8qi_ftype_v8qi
 +    = build_function_type_list (V8QI_type_node, V8QI_type_node, NULL_TREE);
 +  tree v4hi_ftype_v4hi
 +    = build_function_type_list (V4HI_type_node, V4HI_type_node, NULL_TREE);
  
    /* Normal vector binops.  */
    tree v4sf_ftype_v4sf_v4sf
 @@ -15115,6 +15329,12 @@ ix86_init_mmx_sse_builtins (void)
  				long_long_unsigned_type_node,
  				long_long_unsigned_type_node, NULL_TREE);
  
 +  tree di_ftype_di_di_int
 +    = build_function_type_list (long_long_unsigned_type_node,
 +				long_long_unsigned_type_node,
 +				long_long_unsigned_type_node,
 +				integer_type_node, NULL_TREE);
 +
    tree v2si_ftype_v2sf
      = build_function_type_list (V2SI_type_node, V2SF_type_node, NULL_TREE);
    tree v2sf_ftype_v2si
 @@ -15216,6 +15436,9 @@ ix86_init_mmx_sse_builtins (void)
    tree v2di_ftype_v2di_int
      = build_function_type_list (V2DI_type_node,
  				V2DI_type_node, integer_type_node, NULL_TREE);
 +  tree v2di_ftype_v2di_v2di_int
 +    = build_function_type_list (V2DI_type_node, V2DI_type_node,
 +				V2DI_type_node, integer_type_node, NULL_TREE);
    tree v4si_ftype_v4si_int
      = build_function_type_list (V4SI_type_node,
  				V4SI_type_node, integer_type_node, NULL_TREE);
 @@ -15332,6 +15555,50 @@ ix86_init_mmx_sse_builtins (void)
        def_builtin (d->mask, d->name, type, d->code);
      }
  
 +  /* Add all builtins that are more or less simple operations on 1 operand.  */
 +  for (i = 0, d = bdesc_1arg; i < ARRAY_SIZE (bdesc_1arg); i++, d++)
 +    {
 +      enum machine_mode mode;
 +      tree type;
 +
 +      if (d->name == 0)
 +	continue;
 +      mode = insn_data[d->icode].operand[1].mode;
 +
 +      switch (mode)
 +	{
 +	case V16QImode:
 +	  type = v16qi_ftype_v16qi;
 +	  break;
 +	case V8HImode:
 +	  type = v8hi_ftype_v8hi;
 +	  break;
 +	case V4SImode:
 +	  type = v4si_ftype_v4si;
 +	  break;
 +	case V2DFmode:
 +	  type = v2df_ftype_v2df;
 +	  break;
 +	case V4SFmode:
 +	  type = v4sf_ftype_v4sf;
 +	  break;
 +	case V8QImode:
 +	  type = v8qi_ftype_v8qi;
 +	  break;
 +	case V4HImode:
 +	  type = v4hi_ftype_v4hi;
 +	  break;
 +	case V2SImode:
 +	  type = v2si_ftype_v2si;
 +	  break;
 +
 +	default:
 +	  abort ();
 +	}
 +
 +      def_builtin (d->mask, d->name, type, d->code);
 +    }
 +
    /* Add the remaining MMX insns with somewhat more complicated types.  */
    def_builtin (MASK_MMX, "__builtin_ia32_emms", void_ftype_void, IX86_BUILTIN_EMMS);
    def_builtin (MASK_MMX, "__builtin_ia32_psllw", v4hi_ftype_v4hi_di, IX86_BUILTIN_PSLLW);
 @@ -15531,6 +15798,12 @@ ix86_init_mmx_sse_builtins (void)
    def_builtin (MASK_SSE3, "__builtin_ia32_lddqu",
  	       v16qi_ftype_pcchar, IX86_BUILTIN_LDDQU);
  
 +  /* SSSE3.  */
 +  def_builtin (MASK_SSSE3, "__builtin_ia32_palignr128",
 +	       v2di_ftype_v2di_v2di_int, IX86_BUILTIN_PALIGNR128);
 +  def_builtin (MASK_SSSE3, "__builtin_ia32_palignr", di_ftype_di_di_int,
 +	       IX86_BUILTIN_PALIGNR);
 +
    /* Access to the vec_init patterns.  */
    ftype = build_function_type_list (V2SI_type_node, integer_type_node,
  				    integer_type_node, NULL_TREE);
 @@ -16029,7 +16302,7 @@ ix86_expand_builtin (tree exp, rtx targe
    tree arglist = TREE_OPERAND (exp, 1);
    tree arg0, arg1, arg2;
    rtx op0, op1, op2, pat;
 -  enum machine_mode tmode, mode0, mode1, mode2;
 +  enum machine_mode tmode, mode0, mode1, mode2, mode3;
    unsigned int fcode = DECL_FUNCTION_CODE (fndecl);
  
    switch (fcode)
 @@ -16499,6 +16772,52 @@ ix86_expand_builtin (tree exp, rtx targe
        return ix86_expand_unop_builtin (CODE_FOR_sse3_lddqu, arglist,
  				       target, 1);
  
 +    case IX86_BUILTIN_PALIGNR:
 +    case IX86_BUILTIN_PALIGNR128:
 +      if (fcode == IX86_BUILTIN_PALIGNR)
 +	{
 +	  icode = CODE_FOR_ssse3_palignrdi;
 +	  mode = DImode;
 +	}
 +      else
 +	{
 +	  icode = CODE_FOR_ssse3_palignrti;
 +	  mode = V2DImode;
 +	}
 +      arg0 = TREE_VALUE (arglist);
 +      arg1 = TREE_VALUE (TREE_CHAIN (arglist));
 +      arg2 = TREE_VALUE (TREE_CHAIN (TREE_CHAIN (arglist)));
 +      op0 = expand_expr (arg0, NULL_RTX, VOIDmode, 0);
 +      op1 = expand_expr (arg1, NULL_RTX, VOIDmode, 0);
 +      op2 = expand_expr (arg2, NULL_RTX, VOIDmode, 0);
 +      tmode = insn_data[icode].operand[0].mode;
 +      mode1 = insn_data[icode].operand[1].mode;
 +      mode2 = insn_data[icode].operand[2].mode;
 
 *** DIFF OUTPUT TRUNCATED AT 1000 LINES ***
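 The diff is cut off inside the PALIGNR expansion above. For readers following along, the data movement that builtin performs can be sketched in plain C; the helper name `palignr64` is ours (not part of the patch), and the model assumes a little-endian host as on x86:
 
 ```c
 #include <stdint.h>
 #include <string.h>
 #include <stdio.h>
 
 /* Scalar model of the 64-bit PALIGNR operation: concatenate a (high half)
    and b (low half) into a 16-byte buffer, then return the 8 bytes starting
    at byte offset n.  No SIMD, just the byte shuffle the instruction does. */
 static uint64_t palignr64(uint64_t a, uint64_t b, unsigned n)
 {
     uint8_t buf[16] = {0};
     memcpy(buf, &b, 8);       /* low half  (little-endian byte order) */
     memcpy(buf + 8, &a, 8);   /* high half */
     uint64_t r = 0;
     if (n < 16)
         memcpy(&r, buf + n, (n <= 8) ? 8 : 16 - n);
     return r;
 }
 
 int main(void)
 {
     /* Shifting by exactly 8 bytes returns operand a unchanged. */
     printf("%llx\n", (unsigned long long)palignr64(0x1122334455667788ULL,
                                                    0x99aabbccddeeff00ULL, 8));
     return 0;
 }
 ```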
 
State-Changed-From-To: patched->closed 
State-Changed-By: mm 
State-Changed-When: Sat Apr 30 22:19:45 UTC 2011 
State-Changed-Why:  
Committed. Thanks! 

http://www.freebsd.org/cgi/query-pr.cgi?pr=155308 

From: dfilter@FreeBSD.ORG (dfilter service)
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: gnu/155308: commit references a PR
Date: Mon,  2 May 2011 08:32:09 +0000 (UTC)

 Author: mm
 Date: Mon May  2 08:31:53 2011
 New Revision: 221317
 URL: http://svn.freebsd.org/changeset/base/221317
 
 Log:
   MFC 218895, 218896, 219374, 219376, 219639, 219640, 219697, 219711, 220150:
   
   MFC r218895:
   Backport svn r124339 from gcc 4.3 and add opteron-sse3, athlon64-sse3
   and k8-sse3 cpu-types for -march=/-mtune= gcc options.
   These new cpu-types include the SSE3 instruction set that is supported
   by all newer AMD Athlon 64 and Opteron processors.
   All three cpu-types are supported by clang and all gcc versions
   starting with 4.3 SVN rev 124339 (at that time GPLv2 licensed).
   
   Source: gcc-4_3-branch (rev. 124339; GPLv2)
   
   MFC r218896:
   Add opteron-sse3, athlon64-sse3 and k8-sse3 cpu types to bsd.cpu.mk.
   - add "sse3" to MACHINE_CPU for the new cpu types
   - for i386, default to CPUTYPE=prescott for the new cpu types
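   
   On the FreeBSD side these names are consumed through CPUTYPE; a minimal
   make.conf fragment (illustrative — any of the new types works the same way):
   
   ```
   # /etc/make.conf -- use one of the backported CPU types for buildworld
   CPUTYPE?=k8-sse3
   ```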
   
   MFC r219374:
   Backport Intel Core 2 and AMD Geode CPU types from gcc-4.3 (GPLv2)
   These options are supported in this shape in all newer GCC versions.
   
   Source:	gcc-4_3-branch (rev. 118090, 118973, 120846; GPLv2)
   
   MFC r219376:
   Add AMD Geode CPU type to bsd.cpu.mk and examples/etc/make.conf
   For CPUTYPE=core2 use -march=core2
   
   MFC r219639:
   Backport SSSE3 instruction set support to base gcc.
   Enabled by default for -march=core2
   
   Source:	gcc-4_3-branch (rev. 117958, 121687, 121726, 123639; GPLv2)
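   
   One of the instructions this wires up is PMULHRSW (packed multiply high
   with round and scale). A scalar model of a single 16-bit lane — the
   function name is ours, not part of the patch:
   
   ```c
   #include <stdint.h>
   #include <stdio.h>
   
   /* One lane of PMULHRSW: signed 16x16 multiply, round, keep high bits. */
   static int16_t pmulhrsw_lane(int16_t a, int16_t b)
   {
       int32_t t = (int32_t)a * (int32_t)b; /* full 32-bit product        */
       t = (t >> 14) + 1;                   /* drop 14 bits, add round bit */
       return (int16_t)(t >> 1);            /* final scale to 16 bits      */
   }
   
   int main(void)
   {
       /* 0x4000 is 0.5 in Q15 fixed point, so this halves with rounding. */
       printf("%d\n", pmulhrsw_lane(0x4000, 100)); /* 50   */
       printf("%d\n", pmulhrsw_lane(0x4000, 101)); /* 50.5 rounds to 51 */
       return 0;
   }
   ```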
   
   MFC r219640:
   Add ssse3 capability for CPUTYPE=core2 to MACHINE_CPU in bsd.cpu.mk
   
   MFC r219697:
   Fix -march/-mtune=native autodetection for Intel Core 2 CPUs
   
   Source:	gcc-4_3-branch (partial rev. 119454; GPLv2)
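   
   The detection fix hinges on driver-i386.c testing feature bits from
   CPUID leaf 1 (the bit_SSSE3 = 1 << 9 definition appears in the diff
   below). A small demonstration of the mask test against a sample ECX
   word — the value is illustrative, not read from real hardware:
   
   ```c
   #include <stdio.h>
   
   /* Feature-bit masks as defined in driver-i386.c (ECX of CPUID leaf 1). */
   #define bit_SSE3  (1 << 0)
   #define bit_SSSE3 (1 << 9)
   
   int main(void)
   {
       /* Illustrative ECX value resembling a Core 2; both bits are set. */
       unsigned ecx = 0x0000e3bd;
       printf("sse3=%d ssse3=%d\n", !!(ecx & bit_SSE3), !!(ecx & bit_SSSE3));
       return 0;
   }
   ```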
   
   MFC r219711:
   Backport missing tunings for -march=core2:
   - enable extra 80387 mathematical constants (ext_80387_constants)
   - enable compare and exchange 16 bytes (cmpxchg16b)
   
   Verified against llvm-gcc (and apple gcc)
   Source:	gcc-4_3-branch (ref. svn revs. 119260, 121140; GPLv2)
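   
   For the cmpxchg16b tuning above, the instruction's data movement can be
   modeled in plain C — a single-threaded sketch with no atomicity, using
   our own names (`u128`, `cmpxchg16b_model`), just to show the
   compare-and-exchange semantics the flag enables:
   
   ```c
   #include <stdint.h>
   #include <stdio.h>
   
   /* 16-byte value as two 64-bit halves, mirroring RDX:RAX / RCX:RBX. */
   typedef struct { uint64_t lo, hi; } u128;
   
   /* If *dst equals *expected, store desired and return 1; otherwise
      copy the current *dst into *expected and return 0. */
   static int cmpxchg16b_model(u128 *dst, u128 *expected, u128 desired)
   {
       if (dst->lo == expected->lo && dst->hi == expected->hi) {
           *dst = desired;
           return 1;
       }
       *expected = *dst;
       return 0;
   }
   
   int main(void)
   {
       u128 mem = {1, 2}, exp = {1, 2}, want = {3, 4};
       /* Expected matches, so the exchange succeeds and mem becomes {3,4}. */
       printf("%d %llu %llu\n", cmpxchg16b_model(&mem, &exp, want),
              (unsigned long long)mem.lo, (unsigned long long)mem.hi);
       return 0;
   }
   ```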
   
   MFC r220150:
   Upgrade of base gcc and libstdc++ to the last GPLv2-licensed revision
   (rev. 127959 of gcc-4_2-branch).
   
   Resolved GCC bugs:
   	c++: 17763, 29365, 30535, 30917, 31337, 31941, 32108, 32112, 32346,
   	     32898, 32992
   	debug: 32610, 32914
   	libstdc++: 33084, 33128
   	middle-end: 32563
   	rtl-optimization: 33148
   	tree-optimization: 25413, 32723
   	target: 32218
   
   Source:	gcc-4_2-branch (up to rev. 127959)
   
   Obtained from:	gcc (var. revs of gcc-4_2-branch and gcc-4_3-branch; GPLv2)
   PR:		gnu/153298, gnu/153959, gnu/154385, gnu/155308, gnu/154906
 
 Added:
   stable/7/contrib/gcc/config/i386/geode.md
      - copied unchanged from r219374, head/contrib/gcc/config/i386/geode.md
   stable/7/contrib/gcc/config/i386/tmmintrin.h
      - copied unchanged from r219639, head/contrib/gcc/config/i386/tmmintrin.h
 Modified:
   stable/7/contrib/gcc/BASE-VER
   stable/7/contrib/gcc/ChangeLog
   stable/7/contrib/gcc/DATESTAMP
   stable/7/contrib/gcc/DEV-PHASE
   stable/7/contrib/gcc/config.gcc
   stable/7/contrib/gcc/config/i386/driver-i386.c
   stable/7/contrib/gcc/config/i386/i386.c
   stable/7/contrib/gcc/config/i386/i386.h
   stable/7/contrib/gcc/config/i386/i386.md
   stable/7/contrib/gcc/config/i386/i386.opt
   stable/7/contrib/gcc/config/i386/sse.md
   stable/7/contrib/gcc/config/i386/xmmintrin.h
   stable/7/contrib/gcc/config/mips/predicates.md
   stable/7/contrib/gcc/config/rs6000/rs6000.c
   stable/7/contrib/gcc/config/s390/s390.md
   stable/7/contrib/gcc/cp/ChangeLog
   stable/7/contrib/gcc/cp/call.c
   stable/7/contrib/gcc/cp/cp-tree.h
   stable/7/contrib/gcc/cp/cxx-pretty-print.c
   stable/7/contrib/gcc/cp/decl.c
   stable/7/contrib/gcc/cp/decl2.c
   stable/7/contrib/gcc/cp/error.c
   stable/7/contrib/gcc/cp/lex.c
   stable/7/contrib/gcc/cp/name-lookup.c
   stable/7/contrib/gcc/cp/pt.c
   stable/7/contrib/gcc/cp/semantics.c
   stable/7/contrib/gcc/cp/typeck.c
   stable/7/contrib/gcc/doc/contrib.texi
   stable/7/contrib/gcc/doc/extend.texi
   stable/7/contrib/gcc/doc/gcc.1
   stable/7/contrib/gcc/doc/invoke.texi
   stable/7/contrib/gcc/dwarf2out.c
   stable/7/contrib/gcc/expr.c
   stable/7/contrib/gcc/fold-const.c
   stable/7/contrib/gcc/gimplify.c
   stable/7/contrib/gcc/reload1.c
   stable/7/contrib/gcc/simplify-rtx.c
   stable/7/contrib/gcc/target-def.h
   stable/7/contrib/gcc/target.h
   stable/7/contrib/gcc/targhooks.c
   stable/7/contrib/gcc/targhooks.h
   stable/7/contrib/gcc/tree-if-conv.c
   stable/7/contrib/gcc/tree-ssa-structalias.c
   stable/7/contrib/gcc/tree-vect-analyze.c
   stable/7/contrib/gcc/tree-vect-patterns.c
   stable/7/contrib/gcc/tree.c
   stable/7/contrib/gcc/tree.h
   stable/7/contrib/libstdc++/ChangeLog
   stable/7/contrib/libstdc++/include/std/std_valarray.h
   stable/7/contrib/libstdc++/include/tr1/random
   stable/7/share/examples/etc/make.conf
   stable/7/share/mk/bsd.cpu.mk
 Directory Properties:
   stable/7/contrib/gcc/   (props changed)
   stable/7/contrib/libstdc++/   (props changed)
   stable/7/share/examples/   (props changed)
   stable/7/share/mk/   (props changed)
 
 Modified: stable/7/contrib/gcc/BASE-VER
 ==============================================================================
 --- stable/7/contrib/gcc/BASE-VER	Mon May  2 06:59:09 2011	(r221316)
 +++ stable/7/contrib/gcc/BASE-VER	Mon May  2 08:31:53 2011	(r221317)
 @@ -1 +1 @@
 -4.2.1
 +4.2.2
 
 Modified: stable/7/contrib/gcc/ChangeLog
 ==============================================================================
 --- stable/7/contrib/gcc/ChangeLog	Mon May  2 06:59:09 2011	(r221316)
 +++ stable/7/contrib/gcc/ChangeLog	Mon May  2 08:31:53 2011	(r221317)
 @@ -1,3 +1,121 @@
 +2007-08-31  Jakub Jelinek  <jakub@redhat.com>
 +
 +	PR rtl-optimization/33148
 +	* simplify-rtx.c (simplify_unary_operation_1): Only optimize
 +	(neg (lt X 0)) if X has scalar int mode.
 +
 +	PR debug/32914
 +	* dwarf2out.c (rtl_for_decl_init): If vector decl has CONSTRUCTOR
 +	initializer, use build_vector_from_ctor if possible to create
 +	VECTOR_CST out of it.  If vector initializer is not VECTOR_CST
 +	even after this, return NULL.
 +
 +2007-08-27  Jason Merrill  <jason@redhat.com>
 +
 +	PR c++/31337
 +	* gimplify.c (gimplify_modify_expr): Discard the assignment of 
 +	zero-sized types after calling gimplify_modify_expr_rhs.
 +
 +2007-08-24  Jakub Jelinek  <jakub@redhat.com>
 +
 +	PR debug/32610
 +	* dwarf2out.c (gen_decl_die): Don't call
 +	gen_tagged_type_instantiation_die if decl doesn't have tagged type.
 +
 +2007-08-24  Richard Guenther  <rguenther@suse.de>
 +
 +	* expr.c (get_inner_reference): Remove unused variable.
 +
 +2007-08-24  Richard Guenther  <rguenther@suse.de>
 +
 +	* expr.c (get_inner_reference): Do computation of bitoffset
 +	from offset in a way we can detect overflow reliably.
 +
 +2007-08-22  Richard Guenther  <rguenther@suse.de>
 +
 +	PR middle-end/32563
 +	* tree.c (host_integerp): Treat sizetype as signed as it is
 +	sign-extended.
 +
 +2007-08-20  Adam Nemet  <anemet@caviumnetworks.com>
 +
 +	* config/mips/predicates.md (const_call_insn_operand): Invoke
 +	SYMBOL_REF_LONG_CALL_P only on SYMBOL_REFs.
 +
 +2007-08-17  Chen liqin  <liqin@sunnorth.com.cn>
 +
 +        * config/score/score.md : Update pattern tablejump.
 +        * config/score/score.c : Update score_initialize_trampoline 
 +        function.
 +        * config/score/score.h (TRAMPOLINE_TEMPLATE): Added macro.
 +        (TRAMPOLINE_INSNS, TRAMPOLINE_SIZE) Update macro.
 +        * doc/contrib.texi: Add my entry.
 +
 +2007-08-02  Andreas Krebbel  <krebbel1@de.ibm.com>
 +
 +	* config/s390/s390.md ("*xordi3_cconly"): Change xr to xg.
 +
 +2007-08-01  Andreas Krebbel  <krebbel1@de.ibm.com>
 +
 +	* config/s390/s390.md (TF in GPR splitter): Change operand_subword
 +	parameter to TFmode.
 +
 +2007-07-30  Mark Mitchell  <mark@codesourcery.com>
 +
 +	* BASE-VER: Bump.
 +	* DEV-PHASE: Mark as prerelease.
 +
 +2007-07-25  Steve Ellcey  <sje@cup.hp.com>
 +
 +	PR target/32218
 +	* tree-vect-patterns.c (vect_pattern_recog_1): Check for valid type.
 +
 +2007-07-25  Dorit Nuzman  <dorit@il.ibm.com>
 +	    Devang Patel  <dpatel@apple.com>
 +
 +	PR tree-optimization/25413
 +	* targhooks.c (default_builtin_vector_alignment_reachable): New.
 +	* targhooks.h (default_builtin_vector_alignment_reachable): New.
 +	* tree.h (contains_packed_reference): New.
 +	* expr.c (contains_packed_reference): New.
 +	* tree-vect-analyze.c (vector_alignment_reachable_p): New.
 +	(vect_enhance_data_refs_alignment): Call
 +	vector_alignment_reachable_p.
 +	* target.h (vector_alignment_reachable): New builtin.
 +	* target-def.h (TARGET_VECTOR_ALIGNMENT_REACHABLE): New.
 +	* config/rs6000/rs6000.c (rs6000_vector_alignment_reachable): New.
 +	(TARGET_VECTOR_ALIGNMENT_REACHABLE): Define.
 +
 +2007-07-24  Richard Guenther  <rguenther@suse.de>
 +
 +	Backport from mainline:
 +	2007-07-16  Richard Guenther  <rguenther@suse.de>
 +		    Uros Bizjak  <ubizjak@gmail.com>
 +
 +	* tree-if-conv.c (find_phi_replacement_condition): Unshare "*cond"
 +	before forcing it to gimple operand.
 +
 +2007-07-24  Richard Guenther  <rguenther@suse.de>
 +
 +	PR tree-optimization/32723
 +	Backport from mainline:
 +	2007-03-09  Daniel Berlin  <dberlin@dberlin.org>
 +
 +        * tree-ssa-structalias.c (shared_bitmap_info_t): New structure.
 +        (shared_bitmap_table): New variable.
 +        (shared_bitmap_hash): New function.
 +        (shared_bitmap_eq): Ditto
 +        (shared_bitmap_lookup): Ditto.
 +        (shared_bitmap_add): Ditto.
 +        (find_what_p_points_to): Rewrite to use shared bitmap hashtable.
 +        (init_alias_vars): Init shared bitmap hashtable.
 +        (delete_points_to_sets): Delete shared bitmap hashtable.
 +
 +2007-07-23  Bernd Schmidt  <bernd.schmidt@analog.com>
 +
 +	* reload1.c (choose_reload_regs): Set reload_spill_index for regs
 +	chosen during find_reloads.
 +
  2007-07-19  Release Manager
  
  	* GCC 4.2.1 released.
 
 Modified: stable/7/contrib/gcc/DATESTAMP
 ==============================================================================
 --- stable/7/contrib/gcc/DATESTAMP	Mon May  2 06:59:09 2011	(r221316)
 +++ stable/7/contrib/gcc/DATESTAMP	Mon May  2 08:31:53 2011	(r221317)
 @@ -1 +1 @@
 -20070719
 +20070831
 
 Modified: stable/7/contrib/gcc/DEV-PHASE
 ==============================================================================
 --- stable/7/contrib/gcc/DEV-PHASE	Mon May  2 06:59:09 2011	(r221316)
 +++ stable/7/contrib/gcc/DEV-PHASE	Mon May  2 08:31:53 2011	(r221317)
 @@ -0,0 +1 @@
 +prerelease
 
 Modified: stable/7/contrib/gcc/config.gcc
 ==============================================================================
 --- stable/7/contrib/gcc/config.gcc	Mon May  2 06:59:09 2011	(r221316)
 +++ stable/7/contrib/gcc/config.gcc	Mon May  2 08:31:53 2011	(r221317)
 @@ -268,11 +268,13 @@ xscale-*-*)
  	;;
  i[34567]86-*-*)
  	cpu_type=i386
 -	extra_headers="mmintrin.h mm3dnow.h xmmintrin.h emmintrin.h pmmintrin.h"
 +	extra_headers="mmintrin.h mm3dnow.h xmmintrin.h emmintrin.h
 +		       pmmintrin.h tmmintrin.h"
  	;;
  x86_64-*-*)
  	cpu_type=i386
 -	extra_headers="mmintrin.h mm3dnow.h xmmintrin.h emmintrin.h pmmintrin.h"
 +	extra_headers="mmintrin.h mm3dnow.h xmmintrin.h emmintrin.h
 +		       pmmintrin.h tmmintrin.h"
  	need_64bit_hwint=yes
  	;;
  ia64-*-*)
 @@ -1207,14 +1209,14 @@ i[34567]86-*-solaris2*)
  		# FIXME: -m64 for i[34567]86-*-* should be allowed just
  		# like -m32 for x86_64-*-*.
  		case X"${with_cpu}" in
 -		Xgeneric|Xnocona|Xx86-64|Xk8|Xopteron|Xathlon64|Xathlon-fx)
 +		Xgeneric|Xcore2|Xnocona|Xx86-64|Xk8|Xopteron|Xathlon64|Xathlon-fx)
  			;;
  		X)
  			with_cpu=generic
  			;;
  		*)
  			echo "Unsupported CPU used in --with-cpu=$with_cpu, supported values:" 1>&2
 -			echo "generic nocona x86-64 k8 opteron athlon64 athlon-fx" 1>&2
 +			echo "generic core2 nocona x86-64 k8 opteron athlon64 athlon-fx" 1>&2
  			exit 1
  			;;
  		esac
 @@ -2537,6 +2539,9 @@ if test x$with_cpu = x ; then
          nocona-*)
            with_cpu=nocona
            ;;
 +	core2-*)
 +	  with_cpu=core2
 +	  ;;
          pentium_m-*)
            with_cpu=pentium-m
            ;;
 @@ -2556,6 +2561,9 @@ if test x$with_cpu = x ; then
          nocona-*)
            with_cpu=nocona
            ;;
 +	core2-*)
 +	  with_cpu=core2
 +	  ;;
          *)
            with_cpu=generic
            ;;
 @@ -2787,7 +2795,7 @@ case "${target}" in
  				esac
  				# OK
  				;;
 -			"" | k8 | opteron | athlon64 | athlon-fx | nocona | generic)
 +			"" | k8 | opteron | athlon64 | athlon-fx | nocona | core2 | generic)
  				# OK
  				;;
  			*)
 
 Modified: stable/7/contrib/gcc/config/i386/driver-i386.c
 ==============================================================================
 --- stable/7/contrib/gcc/config/i386/driver-i386.c	Mon May  2 06:59:09 2011	(r221316)
 +++ stable/7/contrib/gcc/config/i386/driver-i386.c	Mon May  2 08:31:53 2011	(r221317)
 @@ -39,6 +39,7 @@ const char *host_detect_local_cpu (int a
  #define bit_SSE2 (1 << 26)
  
  #define bit_SSE3 (1 << 0)
 +#define bit_SSSE3 (1 << 9)
  #define bit_CMPXCHG16B (1 << 13)
  
  #define bit_3DNOW (1 << 31)
 @@ -66,7 +67,7 @@ const char *host_detect_local_cpu (int a
    unsigned int vendor;
    unsigned int ext_level;
    unsigned char has_mmx = 0, has_3dnow = 0, has_3dnowp = 0, has_sse = 0;
 -  unsigned char has_sse2 = 0, has_sse3 = 0, has_cmov = 0;
 +  unsigned char has_sse2 = 0, has_sse3 = 0, has_ssse3 = 0, has_cmov = 0;
    unsigned char has_longmode = 0, has_cmpxchg8b = 0;
    unsigned char is_amd = 0;
    unsigned int family = 0;
 @@ -107,6 +108,7 @@ const char *host_detect_local_cpu (int a
    has_sse = !!(edx & bit_SSE);
    has_sse2 = !!(edx & bit_SSE2);
    has_sse3 = !!(ecx & bit_SSE3);
 +  has_ssse3 = !!(ecx & bit_SSSE3);
    /* We don't care for extended family.  */
    family = (eax >> 8) & ~(1 << 4);
  
 @@ -148,7 +150,9 @@ const char *host_detect_local_cpu (int a
  	  /* We have no idea.  Use something reasonable.  */
  	  if (arch)
  	    {
 -	      if (has_sse3)
 +	      if (has_ssse3)
 +		cpu = "core2";
 +	      else if (has_sse3)
  		{
  		  if (has_longmode)
  		    cpu = "nocona";
 @@ -230,6 +234,9 @@ const char *host_detect_local_cpu (int a
  	  cpu = "generic";
  	}
        break;
 +    case PROCESSOR_GEODE:
 +      cpu = "geode";
 +      break;
      case PROCESSOR_K6:
        if (has_3dnow)
          cpu = "k6-3";
 
 Copied: stable/7/contrib/gcc/config/i386/geode.md (from r219374, head/contrib/gcc/config/i386/geode.md)
 ==============================================================================
 --- /dev/null	00:00:00 1970	(empty, because file is newly added)
 +++ stable/7/contrib/gcc/config/i386/geode.md	Mon May  2 08:31:53 2011	(r221317, copy of r219374, head/contrib/gcc/config/i386/geode.md)
 @@ -0,0 +1,153 @@
 +;; Geode Scheduling
 +;; Copyright (C) 2006
 +;; Free Software Foundation, Inc.
 +;;
 +;; This file is part of GCC.
 +;;
 +;; GCC is free software; you can redistribute it and/or modify
 +;; it under the terms of the GNU General Public License as published by
 +;; the Free Software Foundation; either version 2, or (at your option)
 +;; any later version.
 +;;
 +;; GCC is distributed in the hope that it will be useful,
 +;; but WITHOUT ANY WARRANTY; without even the implied warranty of
 +;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 +;; GNU General Public License for more details.
 +;;
 +;; You should have received a copy of the GNU General Public License
 +;; along with GCC; see the file COPYING.  If not, write to
 +;; the Free Software Foundation, 51 Franklin Street, Fifth Floor,
 +;; Boston, MA 02110-1301, USA.
 +;;
 +;; The Geode architecture is one insn issue processor.
 +;;
 +;; This description is based on data from the following documents:
 +;;
 +;;    "AMD Geode GX Processor Data Book"
 +;;    Advanced Micro Devices, Inc., Aug 2005.
 +;;
 +;;    "AMD Geode LX Processor Data Book"
 +;;    Advanced Micro Devices, Inc., Jan 2006.
 +;;
 +;;
 +;; CPU execution units of the Geode:
 +;;
 +;; issue	describes the issue pipeline.
 +;; alu		describes the Integer unit
 +;; fpu		describes the FP unit
 +;;
 +;; The fp unit is out of order execution unit with register renaming.
 +;; There is also memory management unit and execution pipeline for
 +;; load/store operations.  We ignore it and difference between insns
 +;; using memory and registers.
 +
 +(define_automaton "geode")
 +
 +(define_cpu_unit "geode_issue,geode_alu,geode_fpu" "geode")
 +
 +(define_insn_reservation "alu" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "alu,alu1,negnot,icmp,lea,test,imov,imovx,icmov,incdec,setcc"))
 +			 "geode_issue,geode_alu")
 +
 +(define_insn_reservation "shift" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "ishift,ishift1,rotate,rotate1,cld"))
 +			 "geode_issue,geode_alu*2")
 +
 +(define_insn_reservation "imul" 7
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "imul"))
 +			 "geode_issue,geode_alu*7")
 +
 +(define_insn_reservation "idiv" 40
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "idiv"))
 +			 "geode_issue,geode_alu*40")
 +
 +;; The branch unit.
 +(define_insn_reservation "call" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "call,callv"))
 +			 "geode_issue,geode_alu*2")
 +
 +(define_insn_reservation "geode_branch" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "ibr"))
 +			 "geode_issue,geode_alu")
 +
 +(define_insn_reservation "geode_pop_push" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "pop,push"))
 +			 "geode_issue,geode_alu")
 +
 +(define_insn_reservation "geode_leave" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "leave"))
 +			 "geode_issue,geode_alu*2")
 +
 +(define_insn_reservation "geode_load_str" 4
 +			 (and (eq_attr "cpu" "geode")
 +			      (and (eq_attr "type" "str")
 +				   (eq_attr "memory" "load,both")))
 +			 "geode_issue,geode_alu*4")
 +
 +(define_insn_reservation "geode_store_str" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (and (eq_attr "type" "str")
 +				   (eq_attr "memory" "store")))
 +			 "geode_issue,geode_alu*2")
 +
 +;; Be optimistic
 +(define_insn_reservation "geode_unknown" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "multi,other"))
 +			 "geode_issue,geode_alu")
 +
 +;; FPU
 +
 +(define_insn_reservation "geode_fop" 6
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fop,fcmp"))
 +			 "geode_issue,geode_fpu*6")
 +
 +(define_insn_reservation "geode_fsimple" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fmov,fcmov,fsgn,fxch"))
 +			 "geode_issue,geode_fpu")
 +
 +(define_insn_reservation "geode_fist" 4
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fistp,fisttp"))
 +			 "geode_issue,geode_fpu*4")
 +
 +(define_insn_reservation "geode_fmul" 10
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fmul"))
 +			 "geode_issue,geode_fpu*10")
 +
 +(define_insn_reservation "geode_fdiv" 47
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fdiv"))
 +			 "geode_issue,geode_fpu*47")
 +
 +;; We use minimal latency (fsin) here
 +(define_insn_reservation "geode_fpspc" 54
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "fpspc"))
 +			 "geode_issue,geode_fpu*54")
 +
 +(define_insn_reservation "geode_frndint" 12
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "frndint"))
 +			 "geode_issue,geode_fpu*12")
 +
 +(define_insn_reservation "geode_mmxmov" 1
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "mmxmov"))
 +			 "geode_issue,geode_fpu")
 +
 +(define_insn_reservation "geode_mmx" 2
 +			 (and (eq_attr "cpu" "geode")
 +			      (eq_attr "type" "mmx,mmxadd,mmxmul,mmxcmp,mmxcvt,mmxshft"))
 +			 "geode_issue,geode_fpu*2")
 
 Modified: stable/7/contrib/gcc/config/i386/i386.c
 ==============================================================================
 --- stable/7/contrib/gcc/config/i386/i386.c	Mon May  2 06:59:09 2011	(r221316)
 +++ stable/7/contrib/gcc/config/i386/i386.c	Mon May  2 08:31:53 2011	(r221317)
 @@ -336,6 +336,60 @@ struct processor_costs pentiumpro_cost =
  };
  
  static const
 +struct processor_costs geode_cost = {
 +  COSTS_N_INSNS (1),			/* cost of an add instruction */
 +  COSTS_N_INSNS (1),			/* cost of a lea instruction */
 +  COSTS_N_INSNS (2),			/* variable shift costs */
 +  COSTS_N_INSNS (1),			/* constant shift costs */
 +  {COSTS_N_INSNS (3),			/* cost of starting multiply for QI */
 +   COSTS_N_INSNS (4),			/*                               HI */
 +   COSTS_N_INSNS (7),			/*                               SI */
 +   COSTS_N_INSNS (7),			/*                               DI */
 +   COSTS_N_INSNS (7)},			/*                               other */
 +  0,					/* cost of multiply per each bit set */
 +  {COSTS_N_INSNS (15),			/* cost of a divide/mod for QI */
 +   COSTS_N_INSNS (23),			/*                          HI */
 +   COSTS_N_INSNS (39),			/*                          SI */
 +   COSTS_N_INSNS (39),			/*                          DI */
 +   COSTS_N_INSNS (39)},			/*                          other */
 +  COSTS_N_INSNS (1),			/* cost of movsx */
 +  COSTS_N_INSNS (1),			/* cost of movzx */
 +  8,					/* "large" insn */
 +  4,					/* MOVE_RATIO */
 +  1,					/* cost for loading QImode using movzbl */
 +  {1, 1, 1},				/* cost of loading integer registers
 +					   in QImode, HImode and SImode.
 +					   Relative to reg-reg move (2).  */
 +  {1, 1, 1},				/* cost of storing integer registers */
 +  1,					/* cost of reg,reg fld/fst */
 +  {1, 1, 1},				/* cost of loading fp registers
 +					   in SFmode, DFmode and XFmode */
 +  {4, 6, 6},				/* cost of storing fp registers
 +					   in SFmode, DFmode and XFmode */
 +
 +  1,					/* cost of moving MMX register */
 +  {1, 1},				/* cost of loading MMX registers
 +					   in SImode and DImode */
 +  {1, 1},				/* cost of storing MMX registers
 +					   in SImode and DImode */
 +  1,					/* cost of moving SSE register */
 +  {1, 1, 1},				/* cost of loading SSE registers
 +					   in SImode, DImode and TImode */
 +  {1, 1, 1},				/* cost of storing SSE registers
 +					   in SImode, DImode and TImode */
 +  1,					/* MMX or SSE register to integer */
 +  32,					/* size of prefetch block */
 +  1,					/* number of parallel prefetches */
 +  1,					/* Branch cost */
 +  COSTS_N_INSNS (6),			/* cost of FADD and FSUB insns.  */
 +  COSTS_N_INSNS (11),			/* cost of FMUL instruction.  */
 +  COSTS_N_INSNS (47),			/* cost of FDIV instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FABS instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FCHS instruction.  */
 +  COSTS_N_INSNS (54),			/* cost of FSQRT instruction.  */
 +};
 +
 +static const
  struct processor_costs k6_cost = {
    COSTS_N_INSNS (1),			/* cost of an add instruction */
    COSTS_N_INSNS (2),			/* cost of a lea instruction */
 @@ -600,6 +654,58 @@ struct processor_costs nocona_cost = {
    COSTS_N_INSNS (44),			/* cost of FSQRT instruction.  */
  };
  
 +static const
 +struct processor_costs core2_cost = {
 +  COSTS_N_INSNS (1),			/* cost of an add instruction */
 +  COSTS_N_INSNS (1) + 1,		/* cost of a lea instruction */
 +  COSTS_N_INSNS (1),			/* variable shift costs */
 +  COSTS_N_INSNS (1),			/* constant shift costs */
 +  {COSTS_N_INSNS (3),			/* cost of starting multiply for QI */
 +   COSTS_N_INSNS (3),			/*                               HI */
 +   COSTS_N_INSNS (3),			/*                               SI */
 +   COSTS_N_INSNS (3),			/*                               DI */
 +   COSTS_N_INSNS (3)},			/*                               other */
 +  0,					/* cost of multiply per each bit set */
 +  {COSTS_N_INSNS (22),			/* cost of a divide/mod for QI */
 +   COSTS_N_INSNS (22),			/*                          HI */
 +   COSTS_N_INSNS (22),			/*                          SI */
 +   COSTS_N_INSNS (22),			/*                          DI */
 +   COSTS_N_INSNS (22)},			/*                          other */
 +  COSTS_N_INSNS (1),			/* cost of movsx */
 +  COSTS_N_INSNS (1),			/* cost of movzx */
 +  8,					/* "large" insn */
 +  16,					/* MOVE_RATIO */
 +  2,					/* cost for loading QImode using movzbl */
 +  {6, 6, 6},				/* cost of loading integer registers
 +					   in QImode, HImode and SImode.
 +					   Relative to reg-reg move (2).  */
 +  {4, 4, 4},				/* cost of storing integer registers */
 +  2,					/* cost of reg,reg fld/fst */
 +  {6, 6, 6},				/* cost of loading fp registers
 +					   in SFmode, DFmode and XFmode */
 +  {4, 4, 4},				/* cost of loading integer registers */
 +  2,					/* cost of moving MMX register */
 +  {6, 6},				/* cost of loading MMX registers
 +					   in SImode and DImode */
 +  {4, 4},				/* cost of storing MMX registers
 +					   in SImode and DImode */
 +  2,					/* cost of moving SSE register */
 +  {6, 6, 6},				/* cost of loading SSE registers
 +					   in SImode, DImode and TImode */
 +  {4, 4, 4},				/* cost of storing SSE registers
 +					   in SImode, DImode and TImode */
 +  2,					/* MMX or SSE register to integer */
 +  128,					/* size of prefetch block */
 +  8,					/* number of parallel prefetches */
 +  3,					/* Branch cost */
 +  COSTS_N_INSNS (3),			/* cost of FADD and FSUB insns.  */
 +  COSTS_N_INSNS (5),			/* cost of FMUL instruction.  */
 +  COSTS_N_INSNS (32),			/* cost of FDIV instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FABS instruction.  */
 +  COSTS_N_INSNS (1),			/* cost of FCHS instruction.  */
 +  COSTS_N_INSNS (58),			/* cost of FSQRT instruction.  */
 +};
 +
  /* Generic64 should produce code tuned for Nocona and K8.  */
  static const
  struct processor_costs generic64_cost = {
 @@ -721,38 +827,41 @@ const struct processor_costs *ix86_cost 
  #define m_486 (1<<PROCESSOR_I486)
  #define m_PENT (1<<PROCESSOR_PENTIUM)
  #define m_PPRO (1<<PROCESSOR_PENTIUMPRO)
 +#define m_GEODE  (1<<PROCESSOR_GEODE)
 +#define m_K6_GEODE  (m_K6 | m_GEODE)
  #define m_K6  (1<<PROCESSOR_K6)
  #define m_ATHLON  (1<<PROCESSOR_ATHLON)
  #define m_PENT4  (1<<PROCESSOR_PENTIUM4)
  #define m_K8  (1<<PROCESSOR_K8)
  #define m_ATHLON_K8  (m_K8 | m_ATHLON)
  #define m_NOCONA  (1<<PROCESSOR_NOCONA)
 +#define m_CORE2  (1<<PROCESSOR_CORE2)
  #define m_GENERIC32 (1<<PROCESSOR_GENERIC32)
  #define m_GENERIC64 (1<<PROCESSOR_GENERIC64)
  #define m_GENERIC (m_GENERIC32 | m_GENERIC64)
  
  /* Generic instruction choice should be common subset of supported CPUs
 -   (PPro/PENT4/NOCONA/Athlon/K8).  */
 +   (PPro/PENT4/NOCONA/CORE2/Athlon/K8).  */
  
  /* Leave is not affecting Nocona SPEC2000 results negatively, so enabling for
     Generic64 seems like good code size tradeoff.  We can't enable it for 32bit
     generic because it is not working well with PPro base chips.  */
 -const int x86_use_leave = m_386 | m_K6 | m_ATHLON_K8 | m_GENERIC64;
 -const int x86_push_memory = m_386 | m_K6 | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 +const int x86_use_leave = m_386 | m_K6_GEODE | m_ATHLON_K8 | m_CORE2 | m_GENERIC64;
 +const int x86_push_memory = m_386 | m_K6_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
  const int x86_zero_extend_with_and = m_486 | m_PENT;
 -const int x86_movx = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_GENERIC /* m_386 | m_K6 */;
 +const int x86_movx = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC | m_GEODE /* m_386 | m_K6 */;
  const int x86_double_with_add = ~m_386;
  const int x86_use_bit_test = m_386;
 -const int x86_unroll_strlen = m_486 | m_PENT | m_PPRO | m_ATHLON_K8 | m_K6 | m_GENERIC;
 -const int x86_cmove = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA;
 +const int x86_unroll_strlen = m_486 | m_PENT | m_PPRO | m_ATHLON_K8 | m_K6 | m_CORE2 | m_GENERIC;
 +const int x86_cmove = m_PPRO | m_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA;
  const int x86_3dnow_a = m_ATHLON_K8;
 -const int x86_deep_branch = m_PPRO | m_K6 | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 +const int x86_deep_branch = m_PPRO | m_K6_GEODE | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
  /* Branch hints were put in P4 based on simulation result. But
     after P4 was made, no performance benefit was observed with
     branch hints. It also increases the code size. As the result,
     icc never generates branch hints.  */
  const int x86_branch_hints = 0;
 -const int x86_use_sahf = m_PPRO | m_K6 | m_PENT4 | m_NOCONA | m_GENERIC32; /*m_GENERIC | m_ATHLON_K8 ? */
 +const int x86_use_sahf = m_PPRO | m_K6_GEODE | m_PENT4 | m_NOCONA | m_GENERIC32; /*m_GENERIC | m_ATHLON_K8 ? */
  /* We probably ought to watch for partial register stalls on Generic32
     compilation setting as well.  However in current implementation the
     partial register stalls are not eliminated very well - they can
 @@ -762,15 +871,15 @@ const int x86_use_sahf = m_PPRO | m_K6 |
     with partial reg. dependencies used by Athlon/P4 based chips, it is better
     to leave it off for generic32 for now.  */
  const int x86_partial_reg_stall = m_PPRO;
 -const int x86_partial_flag_reg_stall = m_GENERIC;
 -const int x86_use_himode_fiop = m_386 | m_486 | m_K6;
 -const int x86_use_simode_fiop = ~(m_PPRO | m_ATHLON_K8 | m_PENT | m_GENERIC);
 +const int x86_partial_flag_reg_stall =  m_CORE2 | m_GENERIC;
 +const int x86_use_himode_fiop = m_386 | m_486 | m_K6_GEODE;
 +const int x86_use_simode_fiop = ~(m_PPRO | m_ATHLON_K8 | m_PENT | m_CORE2 | m_GENERIC);
  const int x86_use_mov0 = m_K6;
 -const int x86_use_cltd = ~(m_PENT | m_K6 | m_GENERIC);
 +const int x86_use_cltd = ~(m_PENT | m_K6 | m_CORE2 | m_GENERIC);
  const int x86_read_modify_write = ~m_PENT;
  const int x86_read_modify = ~(m_PENT | m_PPRO);
  const int x86_split_long_moves = m_PPRO;
 -const int x86_promote_QImode = m_K6 | m_PENT | m_386 | m_486 | m_ATHLON_K8 | m_GENERIC; /* m_PENT4 ? */
 +const int x86_promote_QImode = m_K6_GEODE | m_PENT | m_386 | m_486 | m_ATHLON_K8 | m_CORE2 | m_GENERIC; /* m_PENT4 ? */
  const int x86_fast_prefix = ~(m_PENT | m_486 | m_386);
  const int x86_single_stringop = m_386 | m_PENT4 | m_NOCONA;
  const int x86_qimode_math = ~(0);
 @@ -780,18 +889,18 @@ const int x86_promote_qi_regs = 0;
     if our scheme for avoiding partial stalls was more effective.  */
  const int x86_himode_math = ~(m_PPRO);
  const int x86_promote_hi_regs = m_PPRO;
 -const int x86_sub_esp_4 = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_sub_esp_8 = m_ATHLON_K8 | m_PPRO | m_386 | m_486 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_add_esp_4 = m_ATHLON_K8 | m_K6 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_add_esp_8 = m_ATHLON_K8 | m_PPRO | m_K6 | m_386 | m_486 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_integer_DFmode_moves = ~(m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC);
 -const int x86_partial_reg_dependency = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_memory_mismatch_stall = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_accumulate_outgoing_args = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC;
 -const int x86_prologue_using_move = m_ATHLON_K8 | m_PPRO | m_GENERIC;
 -const int x86_epilogue_using_move = m_ATHLON_K8 | m_PPRO | m_GENERIC;
 +const int x86_sub_esp_4 = m_ATHLON_K8 | m_PPRO | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_sub_esp_8 = m_ATHLON_K8 | m_PPRO | m_386 | m_486 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_add_esp_4 = m_ATHLON_K8 | m_K6_GEODE | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_add_esp_8 = m_ATHLON_K8 | m_PPRO | m_K6_GEODE | m_386 | m_486 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_integer_DFmode_moves = ~(m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC | m_GEODE);
 +const int x86_partial_reg_dependency = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_memory_mismatch_stall = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_accumulate_outgoing_args = m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC;
 +const int x86_prologue_using_move = m_ATHLON_K8 | m_PPRO | m_CORE2 | m_GENERIC;
 +const int x86_epilogue_using_move = m_ATHLON_K8 | m_PPRO | m_CORE2 | m_GENERIC;
  const int x86_shift1 = ~m_486;
 -const int x86_arch_always_fancy_math_387 = m_PENT | m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 +const int x86_arch_always_fancy_math_387 = m_PENT | m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
  /* In Generic model we have an conflict here in between PPro/Pentium4 based chips
     that thread 128bit SSE registers as single units versus K8 based chips that
     divide SSE registers to two 64bit halves.
 @@ -801,7 +910,7 @@ const int x86_arch_always_fancy_math_387
     this option on P4 brings over 20% SPECfp regression, while enabling it on
     K8 brings roughly 2.4% regression that can be partly masked by careful scheduling
     of moves.  */
 -const int x86_sse_partial_reg_dependency = m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC;
 +const int x86_sse_partial_reg_dependency = m_PENT4 | m_NOCONA | m_PPRO | m_CORE2 | m_GENERIC;
  /* Set for machines where the type and dependencies are resolved on SSE
     register parts instead of whole registers, so we may maintain just
     lower part of scalar values in proper format leaving the upper part
 @@ -810,28 +919,28 @@ const int x86_sse_split_regs = m_ATHLON_
  const int x86_sse_typeless_stores = m_ATHLON_K8;
  const int x86_sse_load0_by_pxor = m_PPRO | m_PENT4 | m_NOCONA;
  const int x86_use_ffreep = m_ATHLON_K8;
 -const int x86_rep_movl_optimal = m_386 | m_PENT | m_PPRO | m_K6;
 -const int x86_use_incdec = ~(m_PENT4 | m_NOCONA | m_GENERIC);
 +const int x86_rep_movl_optimal = m_386 | m_PENT | m_PPRO | m_K6_GEODE | m_CORE2;
 +const int x86_use_incdec = ~(m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC);
  
  /* ??? Allowing interunit moves makes it all too easy for the compiler to put
     integer data in xmm registers.  Which results in pretty abysmal code.  */
  const int x86_inter_unit_moves = 0 /* ~(m_ATHLON_K8) */;
  
 -const int x86_ext_80387_constants = m_K6 | m_ATHLON | m_PENT4 | m_NOCONA | m_PPRO | m_GENERIC32;
 +const int x86_ext_80387_constants = m_K6_GEODE | m_ATHLON | m_PENT4 | m_NOCONA | m_CORE2 | m_PPRO | m_GENERIC32;
  /* Some CPU cores are not able to predict more than 4 branch instructions in
     the 16 byte window.  */
 -const int x86_four_jump_limit = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_GENERIC;
 -const int x86_schedule = m_PPRO | m_ATHLON_K8 | m_K6 | m_PENT | m_GENERIC;
 +const int x86_four_jump_limit = m_PPRO | m_ATHLON_K8 | m_PENT4 | m_NOCONA | m_CORE2 | m_GENERIC;
 +const int x86_schedule = m_PPRO | m_ATHLON_K8 | m_K6_GEODE | m_PENT | m_CORE2 | m_GENERIC;
  const int x86_use_bt = m_ATHLON_K8;
  /* Compare and exchange was added for 80486.  */
  const int x86_cmpxchg = ~m_386;
  /* Compare and exchange 8 bytes was added for pentium.  */
  const int x86_cmpxchg8b = ~(m_386 | m_486);
  /* Compare and exchange 16 bytes was added for nocona.  */
 -const int x86_cmpxchg16b = m_NOCONA;
 +const int x86_cmpxchg16b = m_NOCONA | m_CORE2;
  /* Exchange and add was added for 80486.  */
  const int x86_xadd = ~m_386;
 -const int x86_pad_returns = m_ATHLON_K8 | m_GENERIC;
 +const int x86_pad_returns = m_ATHLON_K8 | m_CORE2 | m_GENERIC;
  
  /* In case the average insn count for single function invocation is
     lower than this constant, emit fast (but longer) prologue and
 @@ -1402,16 +1511,24 @@ ix86_handle_option (size_t code, const c
      case OPT_msse:
        if (!value)
  	{
 -	  target_flags &= ~(MASK_SSE2 | MASK_SSE3);
 -	  target_flags_explicit |= MASK_SSE2 | MASK_SSE3;
 +	  target_flags &= ~(MASK_SSE2 | MASK_SSE3 | MASK_SSSE3);
 +	  target_flags_explicit |= MASK_SSE2 | MASK_SSE3 | MASK_SSSE3;
  	}
        return true;
  
      case OPT_msse2:
        if (!value)
  	{
 -	  target_flags &= ~MASK_SSE3;
 -	  target_flags_explicit |= MASK_SSE3;
 +	  target_flags &= ~(MASK_SSE3 | MASK_SSSE3);
 +	  target_flags_explicit |= MASK_SSE3 | MASK_SSSE3;
 +	}
 +      return true;
 +
 +    case OPT_msse3:
 +      if (!value)
 +	{
 +	  target_flags &= ~MASK_SSSE3;
 +	  target_flags_explicit |= MASK_SSSE3;
  	}
        return true;
  
 @@ -1455,11 +1572,13 @@ override_options (void)
        {&i486_cost, 0, 0, 16, 15, 16, 15, 16},
        {&pentium_cost, 0, 0, 16, 7, 16, 7, 16},
        {&pentiumpro_cost, 0, 0, 16, 15, 16, 7, 16},
 +      {&geode_cost, 0, 0, 0, 0, 0, 0, 0},
        {&k6_cost, 0, 0, 32, 7, 32, 7, 32},
        {&athlon_cost, 0, 0, 16, 7, 16, 7, 16},
        {&pentium4_cost, 0, 0, 0, 0, 0, 0, 0},
        {&k8_cost, 0, 0, 16, 7, 16, 7, 16},
        {&nocona_cost, 0, 0, 0, 0, 0, 0, 0},
 +      {&core2_cost, 0, 0, 16, 7, 16, 7, 16},
        {&generic32_cost, 0, 0, 16, 7, 16, 7, 16},
        {&generic64_cost, 0, 0, 16, 7, 16, 7, 16}
      };
 @@ -1478,7 +1597,8 @@ override_options (void)
  	  PTA_PREFETCH_SSE = 16,
  	  PTA_3DNOW = 32,
  	  PTA_3DNOW_A = 64,
 -	  PTA_64BIT = 128
 +	  PTA_64BIT = 128,
 +	  PTA_SSSE3 = 256
  	} flags;
      }
    const processor_alias_table[] =
 @@ -1506,6 +1626,11 @@ override_options (void)
  				        | PTA_MMX | PTA_PREFETCH_SSE},
        {"nocona", PROCESSOR_NOCONA, PTA_SSE | PTA_SSE2 | PTA_SSE3 | PTA_64BIT
  				        | PTA_MMX | PTA_PREFETCH_SSE},
 +      {"core2", PROCESSOR_CORE2, PTA_SSE | PTA_SSE2 | PTA_SSE3 | PTA_SSSE3
 +                                        | PTA_64BIT | PTA_MMX
 +                                        | PTA_PREFETCH_SSE},
 +      {"geode", PROCESSOR_GEODE, PTA_MMX | PTA_PREFETCH_SSE | PTA_3DNOW
 +				   | PTA_3DNOW_A},
        {"k6", PROCESSOR_K6, PTA_MMX},
        {"k6-2", PROCESSOR_K6, PTA_MMX | PTA_3DNOW},
        {"k6-3", PROCESSOR_K6, PTA_MMX | PTA_3DNOW},
 @@ -1523,10 +1648,19 @@ override_options (void)
  			       | PTA_SSE | PTA_SSE2 },
        {"k8", PROCESSOR_K8, PTA_MMX | PTA_PREFETCH_SSE | PTA_3DNOW | PTA_64BIT
  				      | PTA_3DNOW_A | PTA_SSE | PTA_SSE2},
 +      {"k8-sse3", PROCESSOR_K8, PTA_MMX | PTA_PREFETCH_SSE | PTA_3DNOW | PTA_64BIT
 +				      | PTA_3DNOW_A | PTA_SSE | PTA_SSE2
 +				      | PTA_SSE3 },
        {"opteron", PROCESSOR_K8, PTA_MMX | PTA_PREFETCH_SSE | PTA_3DNOW | PTA_64BIT
  				      | PTA_3DNOW_A | PTA_SSE | PTA_SSE2},
 +      {"opteron-sse3", PROCESSOR_K8, PTA_MMX | PTA_PREFETCH_SSE | PTA_3DNOW | PTA_64BIT
 +				      | PTA_3DNOW_A | PTA_SSE | PTA_SSE2
 +				      | PTA_SSE3 },
        {"athlon64", PROCESSOR_K8, PTA_MMX | PTA_PREFETCH_SSE | PTA_3DNOW | PTA_64BIT
  				      | PTA_3DNOW_A | PTA_SSE | PTA_SSE2},
 +      {"athlon64-sse3", PROCESSOR_K8, PTA_MMX | PTA_PREFETCH_SSE | PTA_3DNOW | PTA_64BIT
 +				      | PTA_3DNOW_A | PTA_SSE | PTA_SSE2
 +				      | PTA_SSE3 },
        {"athlon-fx", PROCESSOR_K8, PTA_MMX | PTA_PREFETCH_SSE | PTA_3DNOW | PTA_64BIT
  				      | PTA_3DNOW_A | PTA_SSE | PTA_SSE2},
        {"generic32", PROCESSOR_GENERIC32, 0 /* flags are only used for -march switch.  */ },
 @@ -1686,6 +1820,9 @@ override_options (void)
  	if (processor_alias_table[i].flags & PTA_SSE3
  	    && !(target_flags_explicit & MASK_SSE3))
  	  target_flags |= MASK_SSE3;
 +	if (processor_alias_table[i].flags & PTA_SSSE3
 +	    && !(target_flags_explicit & MASK_SSSE3))
 +	  target_flags |= MASK_SSSE3;
  	if (processor_alias_table[i].flags & PTA_PREFETCH_SSE)
  	  x86_prefetch_sse = true;
  	if (TARGET_64BIT && !(processor_alias_table[i].flags & PTA_64BIT))
 @@ -1862,6 +1999,10 @@ override_options (void)
    if (!TARGET_80387)
      target_flags |= MASK_NO_FANCY_MATH_387;
  
 +  /* Turn on SSE3 builtins for -mssse3.  */
 +  if (TARGET_SSSE3)
 +    target_flags |= MASK_SSE3;
 +
    /* Turn on SSE2 builtins for -msse3.  */
    if (TARGET_SSE3)
      target_flags |= MASK_SSE2;
 @@ -13697,6 +13838,9 @@ ix86_issue_rate (void)
      case PROCESSOR_GENERIC64:
        return 3;
  
 +    case PROCESSOR_CORE2:
 +      return 4;
 +
      default:
        return 1;
      }
 @@ -14565,6 +14709,41 @@ enum ix86_builtins
    IX86_BUILTIN_MONITOR,
    IX86_BUILTIN_MWAIT,
  
 +  /* SSSE3.  */
 +  IX86_BUILTIN_PHADDW,
 +  IX86_BUILTIN_PHADDD,
 +  IX86_BUILTIN_PHADDSW,
 +  IX86_BUILTIN_PHSUBW,
 +  IX86_BUILTIN_PHSUBD,
 +  IX86_BUILTIN_PHSUBSW,
 +  IX86_BUILTIN_PMADDUBSW,
 +  IX86_BUILTIN_PMULHRSW,
 +  IX86_BUILTIN_PSHUFB,
 +  IX86_BUILTIN_PSIGNB,
 +  IX86_BUILTIN_PSIGNW,
 +  IX86_BUILTIN_PSIGND,
 +  IX86_BUILTIN_PALIGNR,
 +  IX86_BUILTIN_PABSB,
 +  IX86_BUILTIN_PABSW,
 +  IX86_BUILTIN_PABSD,
 +
 +  IX86_BUILTIN_PHADDW128,
 +  IX86_BUILTIN_PHADDD128,
 +  IX86_BUILTIN_PHADDSW128,
 +  IX86_BUILTIN_PHSUBW128,
 +  IX86_BUILTIN_PHSUBD128,
 +  IX86_BUILTIN_PHSUBSW128,
 +  IX86_BUILTIN_PMADDUBSW128,
 +  IX86_BUILTIN_PMULHRSW128,
 +  IX86_BUILTIN_PSHUFB128,
 +  IX86_BUILTIN_PSIGNB128,
 +  IX86_BUILTIN_PSIGNW128,
 +  IX86_BUILTIN_PSIGND128,
 +  IX86_BUILTIN_PALIGNR128,
 +  IX86_BUILTIN_PABSB128,
 +  IX86_BUILTIN_PABSW128,
 +  IX86_BUILTIN_PABSD128,
 +
    IX86_BUILTIN_VEC_INIT_V2SI,
    IX86_BUILTIN_VEC_INIT_V4HI,
    IX86_BUILTIN_VEC_INIT_V8QI,
 @@ -14906,7 +15085,33 @@ static const struct builtin_description 
    { MASK_SSE3, CODE_FOR_sse3_haddv4sf3, "__builtin_ia32_haddps", IX86_BUILTIN_HADDPS, 0, 0 },
    { MASK_SSE3, CODE_FOR_sse3_haddv2df3, "__builtin_ia32_haddpd", IX86_BUILTIN_HADDPD, 0, 0 },
    { MASK_SSE3, CODE_FOR_sse3_hsubv4sf3, "__builtin_ia32_hsubps", IX86_BUILTIN_HSUBPS, 0, 0 },
 -  { MASK_SSE3, CODE_FOR_sse3_hsubv2df3, "__builtin_ia32_hsubpd", IX86_BUILTIN_HSUBPD, 0, 0 }
 +  { MASK_SSE3, CODE_FOR_sse3_hsubv2df3, "__builtin_ia32_hsubpd", IX86_BUILTIN_HSUBPD, 0, 0 },
 +
 +  /* SSSE3 */
 +  { MASK_SSSE3, CODE_FOR_ssse3_phaddwv8hi3, "__builtin_ia32_phaddw128", IX86_BUILTIN_PHADDW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phaddwv4hi3, "__builtin_ia32_phaddw", IX86_BUILTIN_PHADDW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phadddv4si3, "__builtin_ia32_phaddd128", IX86_BUILTIN_PHADDD128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phadddv2si3, "__builtin_ia32_phaddd", IX86_BUILTIN_PHADDD, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phaddswv8hi3, "__builtin_ia32_phaddsw128", IX86_BUILTIN_PHADDSW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phaddswv4hi3, "__builtin_ia32_phaddsw", IX86_BUILTIN_PHADDSW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubwv8hi3, "__builtin_ia32_phsubw128", IX86_BUILTIN_PHSUBW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubwv4hi3, "__builtin_ia32_phsubw", IX86_BUILTIN_PHSUBW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubdv4si3, "__builtin_ia32_phsubd128", IX86_BUILTIN_PHSUBD128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubdv2si3, "__builtin_ia32_phsubd", IX86_BUILTIN_PHSUBD, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubswv8hi3, "__builtin_ia32_phsubsw128", IX86_BUILTIN_PHSUBSW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_phsubswv4hi3, "__builtin_ia32_phsubsw", IX86_BUILTIN_PHSUBSW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pmaddubswv8hi3, "__builtin_ia32_pmaddubsw128", IX86_BUILTIN_PMADDUBSW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pmaddubswv4hi3, "__builtin_ia32_pmaddubsw", IX86_BUILTIN_PMADDUBSW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pmulhrswv8hi3, "__builtin_ia32_pmulhrsw128", IX86_BUILTIN_PMULHRSW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pmulhrswv4hi3, "__builtin_ia32_pmulhrsw", IX86_BUILTIN_PMULHRSW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pshufbv16qi3, "__builtin_ia32_pshufb128", IX86_BUILTIN_PSHUFB128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_pshufbv8qi3, "__builtin_ia32_pshufb", IX86_BUILTIN_PSHUFB, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv16qi3, "__builtin_ia32_psignb128", IX86_BUILTIN_PSIGNB128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv8qi3, "__builtin_ia32_psignb", IX86_BUILTIN_PSIGNB, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv8hi3, "__builtin_ia32_psignw128", IX86_BUILTIN_PSIGNW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv4hi3, "__builtin_ia32_psignw", IX86_BUILTIN_PSIGNW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv4si3, "__builtin_ia32_psignd128", IX86_BUILTIN_PSIGND128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_ssse3_psignv2si3, "__builtin_ia32_psignd", IX86_BUILTIN_PSIGND, 0, 0 }
  };
  
  static const struct builtin_description bdesc_1arg[] =
 @@ -14953,6 +15158,14 @@ static const struct builtin_description 
    /* SSE3 */
    { MASK_SSE3, CODE_FOR_sse3_movshdup, 0, IX86_BUILTIN_MOVSHDUP, 0, 0 },
    { MASK_SSE3, CODE_FOR_sse3_movsldup, 0, IX86_BUILTIN_MOVSLDUP, 0, 0 },
 +
 +  /* SSSE3 */
 +  { MASK_SSSE3, CODE_FOR_absv16qi2, "__builtin_ia32_pabsb128", IX86_BUILTIN_PABSB128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_absv8qi2, "__builtin_ia32_pabsb", IX86_BUILTIN_PABSB, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_absv8hi2, "__builtin_ia32_pabsw128", IX86_BUILTIN_PABSW128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_absv4hi2, "__builtin_ia32_pabsw", IX86_BUILTIN_PABSW, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_absv4si2, "__builtin_ia32_pabsd128", IX86_BUILTIN_PABSD128, 0, 0 },
 +  { MASK_SSSE3, CODE_FOR_absv2si2, "__builtin_ia32_pabsd", IX86_BUILTIN_PABSD, 0, 0 },
  };
  
  static void
 @@ -15087,6 +15300,16 @@ ix86_init_mmx_sse_builtins (void)
    /* Normal vector unops.  */
    tree v4sf_ftype_v4sf
      = build_function_type_list (V4SF_type_node, V4SF_type_node, NULL_TREE);
 +  tree v16qi_ftype_v16qi
 +    = build_function_type_list (V16QI_type_node, V16QI_type_node, NULL_TREE);
 +  tree v8hi_ftype_v8hi
 +    = build_function_type_list (V8HI_type_node, V8HI_type_node, NULL_TREE);
 +  tree v4si_ftype_v4si
 +    = build_function_type_list (V4SI_type_node, V4SI_type_node, NULL_TREE);
 +  tree v8qi_ftype_v8qi
 +    = build_function_type_list (V8QI_type_node, V8QI_type_node, NULL_TREE);
 +  tree v4hi_ftype_v4hi
 +    = build_function_type_list (V4HI_type_node, V4HI_type_node, NULL_TREE);
  
    /* Normal vector binops.  */
    tree v4sf_ftype_v4sf_v4sf
 @@ -15106,6 +15329,12 @@ ix86_init_mmx_sse_builtins (void)
  				long_long_unsigned_type_node,
  				long_long_unsigned_type_node, NULL_TREE);
  
 +  tree di_ftype_di_di_int
 +    = build_function_type_list (long_long_unsigned_type_node,
 +				long_long_unsigned_type_node,
 +				long_long_unsigned_type_node,
 +				integer_type_node, NULL_TREE);
 +
    tree v2si_ftype_v2sf
      = build_function_type_list (V2SI_type_node, V2SF_type_node, NULL_TREE);
    tree v2sf_ftype_v2si
 @@ -15207,6 +15436,9 @@ ix86_init_mmx_sse_builtins (void)
    tree v2di_ftype_v2di_int
      = build_function_type_list (V2DI_type_node,
  				V2DI_type_node, integer_type_node, NULL_TREE);
 +  tree v2di_ftype_v2di_v2di_int
 +    = build_function_type_list (V2DI_type_node, V2DI_type_node,
 +				V2DI_type_node, integer_type_node, NULL_TREE);
    tree v4si_ftype_v4si_int
      = build_function_type_list (V4SI_type_node,
  				V4SI_type_node, integer_type_node, NULL_TREE);
 @@ -15323,6 +15555,50 @@ ix86_init_mmx_sse_builtins (void)
        def_builtin (d->mask, d->name, type, d->code);
      }
  
 +  /* Add all builtins that are more or less simple operations on 1 operand.  */
 +  for (i = 0, d = bdesc_1arg; i < ARRAY_SIZE (bdesc_1arg); i++, d++)
 +    {
 +      enum machine_mode mode;
 +      tree type;
 +
 +      if (d->name == 0)
 +	continue;
 +      mode = insn_data[d->icode].operand[1].mode;
 +
 +      switch (mode)
 +	{
 +	case V16QImode:
 +	  type = v16qi_ftype_v16qi;
 +	  break;
 +	case V8HImode:
 +	  type = v8hi_ftype_v8hi;
 +	  break;
 +	case V4SImode:
 +	  type = v4si_ftype_v4si;
 +	  break;
 +	case V2DFmode:
 +	  type = v2df_ftype_v2df;
 +	  break;
 +	case V4SFmode:
 +	  type = v4sf_ftype_v4sf;
 +	  break;
 +	case V8QImode:
 +	  type = v8qi_ftype_v8qi;
 +	  break;
 +	case V4HImode:
 +	  type = v4hi_ftype_v4hi;
 +	  break;
 +	case V2SImode:
 +	  type = v2si_ftype_v2si;
 +	  break;
 +
 +	default:
 +	  abort ();
 +	}
 +
 +      def_builtin (d->mask, d->name, type, d->code);
 +    }
 +
    /* Add the remaining MMX insns with somewhat more complicated types.  */
    def_builtin (MASK_MMX, "__builtin_ia32_emms", void_ftype_void, IX86_BUILTIN_EMMS);
    def_builtin (MASK_MMX, "__builtin_ia32_psllw", v4hi_ftype_v4hi_di, IX86_BUILTIN_PSLLW);
 @@ -15522,6 +15798,12 @@ ix86_init_mmx_sse_builtins (void)
    def_builtin (MASK_SSE3, "__builtin_ia32_lddqu",
  	       v16qi_ftype_pcchar, IX86_BUILTIN_LDDQU);
  
 +  /* SSSE3.  */
 +  def_builtin (MASK_SSSE3, "__builtin_ia32_palignr128",
 +	       v2di_ftype_v2di_v2di_int, IX86_BUILTIN_PALIGNR128);
 +  def_builtin (MASK_SSSE3, "__builtin_ia32_palignr", di_ftype_di_di_int,
 +	       IX86_BUILTIN_PALIGNR);
 +
    /* Access to the vec_init patterns.  */
    ftype = build_function_type_list (V2SI_type_node, integer_type_node,
  				    integer_type_node, NULL_TREE);
 @@ -16020,7 +16302,7 @@ ix86_expand_builtin (tree exp, rtx targe
    tree arglist = TREE_OPERAND (exp, 1);
    tree arg0, arg1, arg2;
    rtx op0, op1, op2, pat;
 -  enum machine_mode tmode, mode0, mode1, mode2;
 +  enum machine_mode tmode, mode0, mode1, mode2, mode3;
    unsigned int fcode = DECL_FUNCTION_CODE (fndecl);
  
    switch (fcode)
 @@ -16490,6 +16772,52 @@ ix86_expand_builtin (tree exp, rtx targe
        return ix86_expand_unop_builtin (CODE_FOR_sse3_lddqu, arglist,
  				       target, 1);
  
 +    case IX86_BUILTIN_PALIGNR:
 
 *** DIFF OUTPUT TRUNCATED AT 1000 LINES ***
 
>Unformatted:
