\input texinfo @c -*-texinfo-*-
@c %**start of header
@setfilename gmp.info
@include version.texi
@settitle GNU MP @value{VERSION}
@synindex tp fn
@iftex
@afourpaper
@end iftex
@comment %**end of header
@dircategory GNU libraries
@direntry
* gmp: (gmp). GNU Multiple Precision Arithmetic Library.
@end direntry
@c smallbook
@iftex
@finalout
@end iftex
@c Texinfo version 4 or later is needed to process this into .info files.
@c
@c The edition number is in three places and the month/year in one, all taken
@c from version.texi. version.texi is created when you configure with
@c --enable-maintainer-mode, and is included in a distribution made with
@c "make dist".
@c
@c "cindex" entries have been made for function categories and programming
@c topics. Minutiae like particular systems and processors mentioned in
@c various places have been left out so as not to bury important topics under
@c a lot of junk. "mpn" functions aren't in the concept index because a
@c beginner looking for "GCD" or something is only going to be confused by
@c pointers to low level routines.
@c @m{T,N} is $T$ in tex or @math{N} otherwise. This is an easy way to give
@c different forms for math in tex and info. Commas in N or T don't work,
@c but @C{} can be used instead. \, works in info but not in tex.
@iftex
@macro m {T,N}
@tex$\T\$@end tex
@end macro
@end iftex
@ifnottex
@macro m {T,N}
@math{\N\}
@end macro
@end ifnottex
@macro C {}
,
@end macro
@c @ma{E} is $E$ for tex or @math{E} otherwise. This suits expressions which
@c want $$ rather than @math{} in tex, for example @ma{N^2}.
@iftex
@macro ma {E}
@tex$\E\$@end tex
@end macro
@end iftex
@ifnottex
@macro ma {E}
@math{\E\}
@end macro
@end ifnottex
@c @ms{V,N} is $V_N$ in tex or just vn otherwise. This suits simple
@c subscripts like @ms{x,0}.
@iftex
@macro ms {V,N}
@tex$\V\_{\N\}$@end tex
@end macro
@end iftex
@ifnottex
@macro ms {V,N}
\V\\N\
@end macro
@end ifnottex
@c @nicode{S} is plain S in info, or @code{S} elsewhere. This can be used
@c when the quotes that @code{} gives in info aren't wanted, but the
@c fontification in tex or html is wanted. Doesn't work as @nicode{'\\0'}
@c though (gives two backslashes in tex).
@ifinfo
@macro nicode {S}
\S\
@end macro
@end ifinfo
@ifnotinfo
@macro nicode {S}
@code{\S\}
@end macro
@end ifnotinfo
@c Usage: @GMPtimes{}
@c Give either \times or the word "times".
@tex
\gdef\GMPtimes{\times}
@end tex
@ifnottex
@macro GMPtimes
times
@end macro
@end ifnottex
@c Math operators already available in tex, made available in info too.
@c For example @bmod{} can be used in both tex and info.
@ifnottex
@macro bmod
mod
@end macro
@macro gcd
gcd
@end macro
@macro log
log
@end macro
@macro min
min
@end macro
@macro rightarrow
->
@end macro
@end ifnottex
@c New math operators.
@c @abs{} can be used in both tex and info, or just \abs in tex.
@tex
\gdef\abs{\mathop{\rm abs}}
@end tex
@ifnottex
@macro abs
abs
@end macro
@end ifnottex
@c @cross{} is a \times symbol in tex, or an "x" in info. In tex it works
@c inside or outside $ $.
@tex
\gdef\cross{\ifmmode\times\else$\times$\fi}
@end tex
@ifnottex
@macro cross
x
@end macro
@end ifnottex
@c @times{} made available as a "*" in info and html (already works in tex).
@ifnottex
@macro times
*
@end macro
@end ifnottex
@c Usage: @W{text}
@c Like @w{} but working in math mode too.
@tex
\gdef\W#1{\ifmmode{#1}\else\w{#1}\fi}
@end tex
@ifnottex
@macro W {S}
@w{\S\}
@end macro
@end ifnottex
@c Usage: \GMPdisplay{text}
@c Put the given text in an @display style indent, but without turning off
@c paragraph reflow etc.
@tex
\gdef\GMPdisplay#1{%
\noindent
\advance\leftskip by \lispnarrowing
#1\par}
@end tex
@c Usage: \GMPhat
@c A new \hat that will work in math mode, unlike the texinfo redefined
@c version.
@tex
\gdef\GMPhat{\mathaccent"705E}
@end tex
@c Usage: \GMPraise{text}
@c For use in a $ $ math expression as an alternative to "^". This is good
@c for @code{} in an exponent, since there seems to be no superscript font
@c for that.
@tex
\gdef\GMPraise#1{\mskip0.5\thinmuskip\hbox{\raise0.8ex\hbox{#1}}}
@end tex
@ifnottex
This file documents GNU MP, a library for arbitrary-precision arithmetic.
Copyright 1991, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001
Free Software Foundation, Inc.
Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.
@ignore
Permission is granted to process this file through TeX and print the
results, provided the printed document carries copying permission
notice identical to this one except for the removal of this paragraph
(this paragraph not being relevant to the printed manual).
@end ignore
Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided that the entire
resulting derived work is distributed under the terms of a permission
notice identical to this one.
Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions,
except that this permission notice may be stated in a translation approved
by the Foundation.
@end ifnottex
@setchapternewpage on
@titlepage
@c use the new format for titles
@title GNU MP
@subtitle The GNU Multiple Precision Arithmetic Library
@subtitle Edition @value{EDITION}
@subtitle @value{UPDATED}
@author by Torbj@"orn Granlund, Swox AB
@email{tege@@swox.com}
@c Include the Distribution inside the titlepage so
@c that headings are turned off.
@tex
\global\parindent=0pt
\global\parskip=8pt
\global\baselineskip=13pt
@end tex
@page
@vskip 0pt plus 1filll
Copyright @copyright{} 1991, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000,
2001 Free Software Foundation, Inc.
@sp 2
Published by the Free Software Foundation @*
59 Temple Place - Suite 330 @*
Boston, MA 02111-1307, USA @*
Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.
Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided that the entire
resulting derived work is distributed under the terms of a permission
notice identical to this one.
Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions,
except that this permission notice may be stated in a translation approved
by the Foundation.
@end titlepage
@headings double
@ifnottex
@node Top, Copying, (dir), (dir)
@top GNU MP
This manual documents how to install and use the GNU multiple precision
arithmetic library, version @value{VERSION}.
@end ifnottex
@menu
* Copying:: GMP Copying Conditions (LGPL).
* Introduction to GMP:: Brief introduction to GNU MP.
* Installing GMP:: How to configure and compile the GMP library.
* GMP Basics:: What every GMP user should know.
* Reporting Bugs:: How to usefully report bugs.
* Integer Functions:: Functions for arithmetic on signed integers.
* Rational Number Functions:: Functions for arithmetic on rational numbers.
* Floating-point Functions:: Functions for arithmetic on floats.
* Low-level Functions:: Fast functions for natural numbers.
* Random Number Functions:: Functions for generating random numbers.
* BSD Compatible Functions:: All functions found in BSD MP.
* Custom Allocation:: How to customize the internal allocation.
* Algorithms:: What happens behind the scenes.
* Contributors:: Who brings you this library?
* References:: Some useful papers and books to read.
* Concept Index::
* Function Index::
@end menu
@node Copying, Introduction to GMP, Top, Top
@comment node-name, next, previous, up
@unnumbered GNU MP Copying Conditions
@cindex Copying conditions
@cindex Conditions for copying GNU MP
@cindex License conditions
This library is @dfn{free}; this means that everyone is free to use it and
free to redistribute it on a free basis. The library is not in the public
domain; it is copyrighted and there are restrictions on its distribution, but
these restrictions are designed to permit everything that a good cooperating
citizen would want to do. What is not allowed is to try to prevent others
from further sharing any version of this library that they might get from
you.@refill
Specifically, we want to make sure that you have the right to give away copies
of the library, that you receive source code or else can get it if you want
it, that you can change this library or use pieces of it in new free programs,
and that you know you can do these things.@refill
To make sure that everyone has such rights, we have to forbid you to deprive
anyone else of these rights. For example, if you distribute copies of the GNU
MP library, you must give the recipients all the rights that you have. You
must make sure that they, too, receive or can get the source code. And you
must tell them their rights.@refill
Also, for our own protection, we must make certain that everyone finds out
that there is no warranty for the GNU MP library. If it is modified by
someone else and passed on, we want their recipients to know that what they
have is not what we distributed, so that any problems introduced by others
will not reflect on our reputation.@refill
The precise conditions of the license for the GNU MP library are found in the
Lesser General Public License that accompanies the source code.@refill
@node Introduction to GMP, Installing GMP, Copying, Top
@comment node-name, next, previous, up
@chapter Introduction to GNU MP
@cindex Introduction
GNU MP is a portable library written in C for arbitrary precision arithmetic
on integers, rational numbers, and floating-point numbers. It aims to provide
the fastest possible arithmetic for all applications that need higher
precision than is directly supported by the basic C types.
Many applications use just a few hundred bits of precision, but some
applications may need thousands or even millions of bits. GMP is designed to
give good performance for both, by choosing algorithms based on the sizes of
the operands, and by carefully keeping the overhead at a minimum.
The speed of GMP is achieved by using fullwords as the basic arithmetic type,
by using sophisticated algorithms, by including carefully optimized assembly
code for the most common inner loops for many different CPUs, and by a general
emphasis on speed (as opposed to simplicity or elegance).
There is carefully optimized assembly code for these CPUs:
@cindex CPUs supported
ARM,
DEC Alpha 21064, 21164, and 21264,
AMD 29000,
AMD K6, K6-2 and Athlon,
Hitachi SuperH and SH-2,
HPPA 1.0, 1.1 and 2.0,
Intel Pentium, Pentium Pro/II/III, generic x86,
Intel i960,
Motorola MC68000, MC68020, MC88100, and MC88110,
Motorola/IBM PowerPC 32 and 64,
National NS32000,
IBM POWER,
MIPS R3000, R4000,
SPARCv7, SuperSPARC, generic SPARCv8, UltraSPARC,
DEC VAX,
and
Zilog Z8000.
Some optimizations also for
Cray vector systems,
Clipper,
IBM ROMP (RT),
and
Pyramid AP/XP.
@cindex Mailing list
There is a mailing list for GMP users. To join it, send a mail to
@email{gmp-request@@swox.com} with the word @samp{subscribe} in the message
@strong{body} (not in the subject line).
@cindex Home page
@cindex Web page
For up-to-date information on GMP, please see the GMP web pages at
@display
@uref{http://swox.com/gmp/}
@end display
@cindex Latest version of GMP
@cindex Anonymous FTP of latest version
@cindex FTP of latest version
The latest version of the library is available at
@display
@uref{ftp://ftp.gnu.org/pub/gnu/gmp}
@end display
Many sites around the world mirror @samp{ftp.gnu.org}; please use a mirror
near you. See @uref{http://www.gnu.org/order/ftp.html} for a full list.
@section How to use this Manual
@cindex About this manual
Everyone should read @ref{GMP Basics}. If you need to install the library
yourself, you will also need to read @ref{Installing GMP}.
The rest of the manual can be used for later reference, although it is
probably a good idea to glance through it.
@node Installing GMP, GMP Basics, Introduction to GMP, Top
@comment node-name, next, previous, up
@chapter Installing GMP
@cindex Installing GMP
@cindex Configuring GMP
@noindent
GMP has an autoconf/automake/libtool based configuration system. On a
Unix-like system a basic build can be done with
@example
./configure
make
@end example
@noindent
Some self-tests can be run with
@example
make check
@end example
@noindent
And you can install (under @file{/usr/local} by default) with
@example
make install
@end example
@noindent
If you experience problems, please report them to @email{bug-gmp@@gnu.org}.
(@pxref{Reporting Bugs}, for information on what to include in useful bug
reports.)
@menu
* Build Options::
* ABI and ISA::
* Notes for Package Builds::
* Notes for Particular Systems::
* Known Build Problems::
@end menu
@node Build Options, ABI and ISA, Installing GMP, Installing GMP
@section Build Options
@cindex Build options
All the usual autoconf configure options are available, run @samp{./configure
--help} for a summary. The file @file{INSTALL.autoconf} has some generic
installation information too.
@table @asis
@item Non-Unix Systems
@samp{configure} needs various Unix-like tools installed. On MS-DOS or
MS Windows systems, Cygwin or DJGPP should work. See
@display
@uref{http://www.delorie.com/djgpp/}
@uref{http://www.cygnus.com/cygwin/}
@end display
It might be possible to build without the help of @samp{configure}; certainly
all the code is there, but unfortunately you'll be on your own.
@item Object Directory
To compile in a separate object directory, @command{cd} to that directory, and
prefix the configure command with the path to the GMP source directory. For
example @samp{../src/gmp-@value{VERSION}/configure}. Not all @samp{make}
programs have the necessary features (@code{VPATH}) to support this. In
particular, SunOS and Solaris @command{make} have bugs that make them unable
to build from a separate object directory. Use GNU @command{make} instead.
@item @option{--disable-shared}, @option{--disable-static}
By default both shared and static libraries are built (where possible), but
one or other can be disabled. Shared libraries result in smaller executables
and permit code sharing between separate running processes, but on some CPUs
are slightly slower, having a small cost on each function call.
@item Native Compilation, @option{--build=CPU-VENDOR-OS}
For normal native compilation, the system can be specified with
@samp{--build}. By default @samp{./configure} uses the output from running
@samp{./config.guess}. On some systems @samp{./config.guess} can determine
the exact CPU type, on others it will be necessary to give the exact CPU
explicitly. For example,
@example
./configure --build=ultrasparc-sun-solaris2.7
@end example
In all cases the @samp{OS} part is important, since it controls how libtool
generates shared libraries. Running @samp{./config.guess} is the simplest way
to see what it should be, if you don't know already.
@item Cross Compilation, @option{--host=CPU-VENDOR-OS}
When cross-compiling, the system used for compiling is given by @samp{--build}
and the system where the library will run is given by @samp{--host}. For
example when using a FreeBSD Athlon system to build GNU/Linux m68k binaries,
@example
./configure --build=athlon-pc-freebsd3.5 --host=m68k-mac-linux-gnu
@end example
Compiler tools are sought first with the host system type as a prefix. For
example @command{m68k-mac-linux-gnu-ranlib} is checked for, then plain
@command{ranlib}. This makes it possible for a set of cross-compiling tools
to co-exist with native tools. The prefix is the argument to @samp{--host},
and this can be an alias, such as @samp{m68k-linux}. But note that tools
don't have to be set up this way; it's enough to just have a @env{PATH} with a
suitable cross-compiling @command{cc} etc.
Compiling for a different CPU in the same family as the build system is a form
of cross-compilation, though very possibly this would merely be with special
options on a native compiler. In any case @samp{./configure} will avoid
trying to run anything on the build system, and this can be important when
creating binaries for a newer CPU, since they might not run on the build
system.
Currently a warning is given unless an explicit @samp{--build} is used when
cross-compiling, because it may not be possible to correctly guess the build
system type if the @env{PATH} has only a cross-compiling @command{cc}.
Note that the @samp{--target} option is not appropriate for GMP. It's for use
when building compiler tools, with @samp{--host} being where they will run,
and @samp{--target} what they'll produce code for. Ordinary programs or
libraries like GMP are only interested in the @samp{--host} part, being where
they'll run. (Some past versions of GMP used @samp{--target} incorrectly.)
@item CPU types
In general, if you want a library that runs as fast as possible, you should
configure GMP for the exact CPU type your system uses. However, this may mean
the binaries won't run on older members of the family, and might run slower on
other members, older or newer. The best idea is always to build GMP for the
exact machine type you intend to run it on.
The following CPUs have specific support. See @file{configure.in} for details
of which code and what compiler options they select.
@itemize @bullet
@c Keep this formatting, it's easy to read and it can be grepped to
@c automatically test that CPUs listed get through ./config.sub
@item
Alpha:
@samp{alpha},
@samp{alphaev5},
@samp{alphaev56},
@samp{alphapca56},
@samp{alphaev6},
@samp{alphaev67}
@item
Cray:
@samp{c90},
@samp{j90},
@samp{t90},
@samp{sv1}
@item
HPPA:
@samp{hppa1.0},
@samp{hppa1.1},
@samp{hppa2.0},
@samp{hppa2.0n},
@samp{hppa2.0w}
@item
MIPS:
@samp{mips},
@samp{mips3},
@samp{mips64}
@item
Motorola:
@samp{m68k},
@samp{m68000},
@samp{m68020},
@samp{m88k},
@samp{m88110}
@item
POWER:
@samp{power},
@samp{power1},
@samp{power2},
@samp{power2sc},
@samp{powerpc},
@samp{powerpc64}
@item
SPARC:
@samp{sparc},
@samp{sparcv8},
@samp{microsparc},
@samp{supersparc},
@samp{sparcv9},
@samp{ultrasparc},
@samp{sparc64}
@item
80x86 family:
@samp{i386},
@samp{i486},
@samp{i586},
@samp{pentium},
@samp{pentiummmx},
@samp{pentiumpro},
@samp{pentium2},
@samp{pentium3},
@samp{k6},
@samp{k62},
@samp{k63},
@samp{athlon}
@item
Other:
@samp{a29k},
@samp{arm},
@samp{clipper},
@samp{i960},
@samp{ns32k},
@samp{pyramid},
@samp{sh},
@samp{sh2},
@samp{vax},
@samp{z8k}
@end itemize
CPUs not listed will use generic C code.
@item Generic C Build
If some of the assembly code causes problems, or if otherwise desired, the
generic C code can be selected with CPU @samp{none}. For example,
@example
./configure --build=none-unknown-freebsd3.5
@end example
Note that this will run quite slowly, but it should be portable and should at
least make it possible to get something running if all else fails.
@item @option{ABI}
On some systems GMP supports multiple ABIs (application binary interfaces),
meaning data type sizes and calling conventions. By default GMP chooses the
best ABI available, but a particular ABI can be selected. For example
@example
./configure --build=mips64-sgi-irix6 ABI=n32
@end example
See @ref{ABI and ISA}, for the available choices on relevant CPUs, and what
applications need to do.
@item @option{CC}, @option{CFLAGS}
By default the C compiler used is chosen from among some likely candidates,
with @command{gcc} normally preferred if it's present. The usual
@samp{CC=whatever} can be passed to @samp{./configure} to choose something
different.
For some systems, default compiler flags are set based on the CPU and
compiler. The usual @samp{CFLAGS="-whatever"} can be passed to
@samp{./configure} to use something different or to set good flags for systems
GMP doesn't otherwise know.
The @samp{CC} and @samp{CFLAGS} used are printed during @samp{./configure},
and can be found in each generated @file{Makefile}. This is the easiest way
to check the defaults when considering changing or adding something.
Note that when @samp{CC} and @samp{CFLAGS} are specified on a system
supporting multiple ABIs it's important to give an explicit
@samp{ABI=whatever}, since GMP can't determine the ABI just from the flags and
won't be able to select the correct assembler code.
If just @samp{CC} is selected then normal default @samp{CFLAGS} for that
compiler will be used (if GMP recognises it). For example @samp{CC=gcc} can
be used to force the use of GCC, with default flags (and default ABI).
@item @option{CPPFLAGS}
Any flags like @samp{-D} defines or @samp{-I} includes required by the
preprocessor should be set in @samp{CPPFLAGS} rather than @samp{CFLAGS}.
Compiling is done with both @samp{CPPFLAGS} and @samp{CFLAGS}, but
preprocessing uses just @samp{CPPFLAGS}. This distinction is because most
preprocessors won't accept all the flags the compiler does. Preprocessing is
done separately in some configure tests, and in the @samp{ansi2knr} support
for K&R compilers.
@item @option{--enable-alloca=yes/no/detect}, @option{--disable-alloca}
@cindex Stack overflow segfaults
@cindex @code{alloca}
GMP allocates temporary workspace using either @code{alloca} or @code{malloc}.
The default @samp{--enable-alloca=detect} uses @code{alloca} if available, or
@code{malloc} if not. @option{--disable-alloca} or @samp{--enable-alloca=no}
always uses @code{malloc}. @samp{--enable-alloca=yes} always uses
@code{alloca} (generating an error if that function isn't available).
@code{alloca} is fast and is recommended, but when working with large numbers
it can overflow the available stack space. It might be possible to increase
available stack with @command{limit}, @command{ulimit} or @code{setrlimit}, or
under DJGPP with @command{stubedit} or @code{@w{_stklen}}. Note that
depending on the system, the only indication of stack overflow might be a
segmentation violation.
When @code{malloc} is used, it's actually the memory allocation functions
selected by @code{mp_set_memory_functions} that are used, these being
@code{malloc} and friends by default. @xref{Custom Allocation}.
Currently when @code{malloc} is used the library is not re-entrant and not
thread safe, due to the implementation of @file{stack-alloc.c}.
@xref{Reentrancy}.
@item @option{--enable-fft}
By default multiplications are done using Karatsuba and 3-way Toom-Cook
algorithms, but a Fermat FFT can be enabled, for use on large to very large
operands. Currently the FFT is recommended only for knowledgeable users who
check the algorithm thresholds for their system.
@item @option{--enable-mpbsd}
The Berkeley MP compatibility library (@file{libmp}) and header file
(@file{mp.h}) are built and installed if @option{--enable-mpbsd} is used.
@xref{BSD Compatible Functions}.
@item @option{--enable-mpfr}
The optional MPFR functions are built and installed only if
@option{--enable-mpfr} is used. These are in a separate library
@file{libmpfr.a} and are documented separately too (@pxref{Introduction to
MPFR,, Introduction to MPFR, mpfr, MPFR}).
@item @option{--enable-assert}
This option enables some consistency checking within the library. This can be
of use while debugging, @pxref{Debugging}.
@item @option{--enable-profiling=prof/gprof}
Profiling support can be enabled either for @command{prof} or @command{gprof}.
This adds @samp{-p} or @samp{-pg} respectively to @samp{CFLAGS}, and for some
systems adds corresponding @code{mcount} calls to the assembler code.
@xref{Profiling}.
@item @option{MPN_PATH}
Various assembler versions of mpn subroutines are provided, and, for a given
CPU, a search is made through a path to choose a version of each. For example
@samp{sparcv8} has path @samp{sparc32/v8 sparc32 generic}, which means it
looks first for v8 code, then plain sparc32, and finally falls back on generic
C. Knowledgeable users with special requirements can specify a path with
@samp{MPN_PATH="dir list"}. This will normally be unnecessary because all
sensible paths should be available under one or other CPU.
@item Demonstration Programs
@cindex Demonstration programs
@cindex Example programs
The @file{demos} subdirectory has some sample programs using GMP. These
aren't built or installed, but there's a @file{Makefile} with rules for them.
For instance,
@example
make pexpr
./pexpr 68^975+10
@end example
@item Documentation
The document you're now reading is @file{gmp.texi}. The usual automake
targets are available to make @file{gmp.ps} and/or @file{gmp.dvi}. HTML can
be made with @samp{makeinfo --html} (@pxref{makeinfo html,Generating
HTML,Generating HTML,texinfo,Texinfo}) or @samp{texi2html}. PDF can be made
with @samp{pdftex} or @samp{texi2dvi --pdf} (@pxref{PDF
Output,PDF,,texinfo,Texinfo}).
Some supplementary notes can be found in the @file{doc} subdirectory.
@end table
@need 2000
@node ABI and ISA, Notes for Package Builds, Build Options, Installing GMP
@section ABI and ISA
@cindex ABI
@cindex ISA
ABI (Application Binary Interface) refers to the calling conventions between
functions, meaning what registers are used and what sizes the various C data
types are. ISA (Instruction Set Architecture) refers to the instructions and
registers a CPU has available.
Some 64-bit ISA CPUs have both a 64-bit ABI and a 32-bit ABI defined, the
latter for compatibility with older CPUs in the family. GMP supports some
CPUs like this in both ABIs. In fact within GMP @samp{ABI} means a
combination of the chip ABI plus how GMP chooses to use it. For example in
some 32-bit ABIs,
GMP has support for a limb as either a 32-bit @code{long} or a 64-bit
@code{long long}.
By default GMP chooses the best ABI available for a given system, and this
generally gives significantly greater speed. But an ABI can be chosen
explicitly to make GMP compatible with other libraries, or particular
application requirements. In all cases it's vital that all object code used
in a given program is compiled for the same ABI.
Usually a limb is implemented as a @code{long}. When a @code{long long} limb
is used, this is encoded in a generated @file{gmp.h}. This is convenient for
applications, but it does mean @file{gmp.h} will vary from system to system,
and can't be just copied around. Note that whether a limb is a @code{long} or
a @code{long long} is fixed by a particular ABI: it will be the same for all
compilers on that system in that ABI.
Currently no attempt is made to follow whatever conventions a system has for
installing library or header files built for a particular ABI. This will
probably only matter when installing multiple builds of GMP, and it might be
as simple as configuring with a special @samp{libdir}, or it might require
more than that.
@table @asis
@need 1000
@item HPPA 2.0 (@samp{hppa2.0*})
@table @asis
@item @samp{ABI=2.0w}
The 2.0w ABI uses 64-bit limbs and pointers and is available on HP-UX 11 or up
when using @command{cc}. @command{gcc} support for this is in progress.
Applications must be compiled with
@example
cc +DD64
@end example
@item @samp{ABI=2.0n}
The 2.0n ABI means the 32-bit HPPA 1.0 ABI but with a 64-bit limb using
@code{long long}. This is available on HP-UX 10 or up when using
@command{cc}. No @command{gcc} support is planned for this. Applications
must be compiled with
@example
cc +DA2.0 +e
@end example
@item @samp{ABI=1.0}
HPPA 2.0 CPUs can run all HPPA 1.0 and 1.1 code in the 32-bit HPPA 1.0 ABI.
No special compiler options are needed for applications.
@end table
CPU type @samp{hppa2.0w} or @samp{hppa2.0} will use any of the above ABIs;
@samp{hppa2.0n} uses only 2.0n or 1.0.
@need 1000
@item MIPS under IRIX 6 (@samp{mips*-*-irix[6789]})
IRIX 6 and up supports the n32 and 64 ABIs, and always has a MIPS 3 or better
CPU, so a 64-bit limb is used on that system. A new enough @command{gcc} is
required (2.95 for instance). Note that GNU/Linux, as of kernel version 2.2,
doesn't have the necessary support for n32 or 64 and so only uses a 32-bit
limb.
@table @asis
@item @samp{ABI=64}
The 64-bit ABI is 64-bit pointers and integers. Applications must be compiled
with
@example
gcc -mabi=64
cc -64
@end example
@item @samp{ABI=n32}
The n32 ABI is 32-bit pointers and integers, but with a 64-bit limb using a
@code{long long} in the 64-bit registers. Applications must be compiled with
@example
gcc -mabi=n32
cc -n32
@end example
@end table
@need 1000
@item PowerPC 64 (@samp{powerpc64*})
@table @asis
@item @samp{ABI=aix64}
The AIX 64 ABI uses 64-bit limbs and pointers and is available on systems
@samp{powerpc64*-*-aix*}. Applications must be compiled (and linked) with
@example
gcc -maix64
xlc -q64
@end example
@item @samp{ABI=32L}
This uses the 32-bit ABI but a 64-bit limb using GCC @code{long long} in
64-bit registers. Applications must be compiled with
@example
gcc -mpowerpc64
@end example
@item @samp{ABI=32}
This is the basic 32-bit PowerPC ABI. No special compiler options are needed
for applications.
@end table
@need 1000
@item Sparc V9 (@samp{sparcv9} and @samp{ultrasparc*})
@table @asis
@item @samp{ABI=64}
The 64-bit V9 ABI is available on Solaris 2.7 and up, and on GNU/Linux, when
using @command{gcc} 2.95 or up, or Sun @command{cc}. Applications must be
compiled with
@example
gcc -m64 -mptr64 -Wa,-xarch=v9 -mcpu=v9
cc -xarch=v9
@end example
@item @samp{ABI=32}
On Solaris 2.6 and earlier only the plain V8 32-bit ABI can be used, since the
kernel doesn't save all registers. GMP still uses as much of the V9 ISA as it
can in these circumstances. No special compiler options are required for
applications, though using something like the following requesting V9 code
within the V8 ABI is recommended.
@example
gcc -mv8plus
cc -xarch=v8plus
@end example
@command{gcc} 2.8 and earlier only supports @samp{-mv8} though.
@end table
Don't be confused by the names of these SPARC options: they're called
@samp{arch} but they effectively control the ABI.
@end table
@need 2000
@node Notes for Package Builds, Notes for Particular Systems, ABI and ISA, Installing GMP
@section Notes for Package Builds
@cindex Build notes for binary packaging
@cindex Packaged builds
GMP should present no great difficulties for packaging in a binary
distribution.
@cindex Libtool versioning
Libtool is used to build the library and @samp{-version-info} is set
appropriately, having started from @samp{3:0:0} in GMP 3.0. The GMP 3 series
will be upwardly binary compatible in each release, but may be adding
additional function interfaces. On systems where libtool versioning is not
fully checked by the loader, an auxiliary mechanism may be needed to express
that a dynamic linked application depends on a new enough minor version of
GMP.
When building a package for a CPU family, care should be taken to use
@samp{--host} (or @samp{--build}) to choose the least common denominator among
the CPUs which might use the package. For example this might necessitate
@samp{i386} for x86s, or plain @samp{sparc} (meaning V7) for SPARCs.
Users who care about speed will want GMP built for their exact CPU type, to
make use of the available optimizations. Providing a way to suitably rebuild
a package may be useful. This could be as simple as making it possible for a
user to omit @samp{--build} (and @samp{--host}) so @samp{./config.guess} will
detect the CPU. But a way to manually specify a @samp{--build} will be wanted
for systems where @samp{./config.guess} is inexact.
@need 2000
@node Notes for Particular Systems, Known Build Problems, Notes for Package Builds, Installing GMP
@section Notes for Particular Systems
@cindex Build notes for particular systems
@table @asis
@c This section is more or less meant for notes about performance or about
@c build problems that have been worked around but might leave a user
@c scratching their head. Fun with different ABIs on a system belongs in the
@c above section.
@item AIX 3 and 4
On systems @samp{*-*-aix[34]*} shared libraries are disabled by default, since
some versions of the native @command{ar} fail on the convenience libraries
used. A shared build can be attempted with
@example
./configure --enable-shared --disable-static
@end example
Note that the @samp{--disable-static} is necessary because in a shared build
libtool makes @file{libgmp.a} a symlink to @file{libgmp.so}, apparently for
the benefit of old versions of @command{ld} which only recognise @file{.a},
but unfortunately this is done even if a fully functional @command{ld} is
available.
@item OpenBSD 2.6
@command{m4} in this release of OpenBSD has a bug in @code{eval} that makes it
unsuitable for @file{.asm} file processing. @samp{./configure} will detect
the problem and either abort or choose another m4 in the @env{PATH}. The bug
is fixed in OpenBSD 2.7, so either upgrade or use GNU m4.
@item Power Variants
In GMP, CPU types @samp{power} and @samp{powerpc} will each use instructions
not available on the other, so it's important to choose the right one for the
CPU that will be used. Currently GMP has no assembler code support for using
just the common instruction subset. To get executables that run on both, the
current suggestion is to use the generic C code (CPU @samp{none}), possibly
with appropriate compiler options (like @samp{-mcpu=common} for
@command{gcc}). CPU @samp{rs6000} (which is not a CPU but a family of
workstations) is accepted by @file{config.sub}, but is currently equivalent to
@samp{none}.
@item Sparc V8
Using CPU type @samp{sparcv8} or @samp{supersparc} on relevant systems will
give a significant performance increase over the V7 code.
@item SunOS 4
@command{/usr/bin/m4} lacks various features needed to process @file{.asm}
files, and instead @samp{./configure} will automatically use
@command{/usr/5bin/m4}, which we believe is always available (if not then use
GNU m4).
@item x86 Pentium and PentiumPro
The Intel Pentium P5 code is good for its intended P5, but quite slow when run
on Intel P6 class chips (PPro, P-II, P-III)@. @samp{i386} is a better choice
when making binaries that must run on both.
@item x86 MMX Assembler Code
If the CPU selected has MMX code but the assembler doesn't support it, a
warning is given and non-MMX code is used instead. This will be an inferior
build, since the MMX code that's present is there because it's faster than the
corresponding plain integer code.
Old versions of @samp{gas} don't support MMX instructions, in particular
version 1.92.3 that comes with FreeBSD 2.2.8 doesn't (and unfortunately
there's no newer assembler for that system).
Solaris 2.6 and 2.7 @command{as} generate incorrect object code for register
to register @code{movq} instructions, making that assembler unusable. Install
a recent @command{gas} if MMX code is wanted on these systems.
@item x86 GCC @samp{-march=pentiumpro}
GCC 2.95.2 miscompiled some versions of @file{mpz/powm.c} when
@samp{-march=pentiumpro} was used, so for relevant CPUs that option is only in
the default @env{CFLAGS} for GCC 2.96 and up.
@end table
@need 2000
@node Known Build Problems, , Notes for Particular Systems, Installing GMP
@section Known Build Problems
@cindex Build problems known
@c This section is more or less meant for known build problems that are not
@c otherwise worked around and require some sort of manual intervention.
You might find more up-to-date information at @uref{http://swox.com/gmp/}.
@table @asis
@item GNU binutils @command{strip}
@cindex Stripped libraries
GNU binutils @command{strip} should not be used on the static libraries
@file{libgmp.a} and @file{libmp.a}, neither directly nor via @samp{make
install-strip}. It can be used on the shared libraries @file{libgmp.so} and
@file{libmp.so} though.
Currently (binutils 2.10.0), @command{strip} unpacks an archive then operates
on the files, but GMP contains multiple object files of the same name
(e.g.@: three versions of @file{init.o}), and they overwrite each other, leaving
only the one that happens to be last.
If stripped static libraries are wanted, the suggested workaround is to build
normally, strip the separate object files, and do another @samp{make all} to
rebuild. Alternately @samp{CFLAGS} with @samp{-g} omitted can always be used
if it's just debugging which is unwanted.
@item @command{make}, @code{$*} and K&R
When using a K&R compiler, GMP relies on @command{make} setting the special
@code{$*} variable in explicit rules, not just suffix rules. Some versions of
@code{make} don't do this, in particular the HP-UX 10 bundled @code{make}
doesn't, and the bundled @command{cc} on that system only accepts K&R, hence
triggering the problem. GNU @code{make} is recommended instead.
@item NeXT prior to 3.3
The system compiler on old versions of NeXT was a massacred and old GCC, even
if it called itself @file{cc}. This compiler cannot be used to build GMP, you
need to get a real GCC, and install that. (NeXT may have fixed this in
release 3.3 of their system.)
@item POWER and PowerPC
Bugs in GCC 2.7.2 (and 2.6.3) mean it can't be used to compile GMP on POWER or
PowerPC. If you want to use GCC for these machines, get GCC 2.7.2.1 (or
later).
@item Sequent Symmetry
Use the GNU assembler instead of the system assembler, since the latter has
serious bugs.
@item VAX running Ultrix
You need to build and install the GNU assembler before you compile GMP. The
VAX assembly in GMP uses an instruction (@code{jsobgtr}) that cannot be
assembled by the Ultrix assembler.
@end table
@node GMP Basics, Reporting Bugs, Installing GMP, Top
@comment node-name, next, previous, up
@chapter GMP Basics
@cindex Basics
@cindex @file{gmp.h}
All declarations needed to use GMP are collected in the include file
@file{gmp.h}. It is designed to work with both C and C++ compilers.
@strong{Using functions, macros, data types, etc.@: not documented in this
manual is strongly discouraged. If you do so your application is guaranteed
to be incompatible with future versions of GMP.}
@menu
* Nomenclature and Types::
* Function Classes::
* Variable Conventions::
* Parameter Conventions::
* Memory Management::
* Reentrancy::
* Useful Macros and Constants::
* Compatibility with older versions::
* Efficiency::
* Debugging::
* Profiling::
* Autoconf::
@end menu
@node Nomenclature and Types, Function Classes, GMP Basics, GMP Basics
@section Nomenclature and Types
@cindex Nomenclature
@cindex Types
@cindex Integer
@tindex @code{mpz_t}
@noindent
In this manual, @dfn{integer} usually means a multiple precision integer, as
defined by the GMP library. The C data type for such integers is @code{mpz_t}.
Here are some examples of how to declare such integers:
@example
mpz_t sum;
struct foo @{ mpz_t x, y; @};
mpz_t vec[20];
@end example
@cindex Rational number
@tindex @code{mpq_t}
@noindent
@dfn{Rational number} means a multiple precision fraction. The C data type
for these fractions is @code{mpq_t}. For example:
@example
mpq_t quotient;
@end example
@cindex Floating-point number
@tindex @code{mpf_t}
@noindent
@dfn{Floating point number} or @dfn{Float} for short, is an arbitrary precision
mantissa with a limited precision exponent. The C data type for such objects
is @code{mpf_t}.
@cindex Limb
@tindex @code{mp_limb_t}
@noindent
A @dfn{limb} means the part of a multi-precision number that fits in a single
word. (We chose this word because a limb of the human body is analogous to a
digit, only larger, and containing several digits.) Normally a limb contains
32 or 64 bits. The C data type for a limb is @code{mp_limb_t}.
@node Function Classes, Variable Conventions, Nomenclature and Types, GMP Basics
@section Function Classes
@cindex Function classes
There are six classes of functions in the GMP library:
@enumerate
@item
Functions for signed integer arithmetic, with names beginning with
@code{mpz_}. The associated type is @code{mpz_t}. There are about 100
functions in this class.
@item
Functions for rational number arithmetic, with names beginning with
@code{mpq_}. The associated type is @code{mpq_t}. There are about 20
functions in this class, but the functions in the previous class can be used
for performing arithmetic on the numerator and denominator separately.
@item
Functions for floating-point arithmetic, with names beginning with
@code{mpf_}. The associated type is @code{mpf_t}. There are about 50
functions in this class.
@item
Functions compatible with Berkeley MP, such as @code{itom}, @code{madd}, and
@code{mult}. The associated type is @code{MINT}.
@item
Fast low-level functions that operate on natural numbers. These are used by
the functions in the preceding groups, and you can also call them directly
from very time-critical user programs. These functions' names begin with
@code{mpn_}. There are about 30 (hard-to-use) functions in this class.
The associated type is array of @code{mp_limb_t}.
@item
Miscellaneous functions. Functions for setting up custom allocation and
functions for generating random numbers.
@end enumerate
@node Variable Conventions, Parameter Conventions, Function Classes, GMP Basics
@section Variable Conventions
@cindex Variable conventions
@cindex Conventions for variables
GMP functions generally have output arguments before input arguments. This
notation is by analogy with the assignment operator. The BSD MP compatibility
functions are exceptions, having the output arguments last.
GMP lets you use the same variable for both input and output in one call. For
example, the main function for integer multiplication, @code{mpz_mul}, can be
used to square @code{x} and put the result back in @code{x} with
@example
mpz_mul (x, x, x);
@end example
Before you can assign to a GMP variable, you need to initialize it by calling
one of the special initialization functions. When you're done with a
variable, you need to clear it out, using one of the functions for that
purpose. Which function to use depends on the type of variable. See the
chapters on integer functions, rational number functions, and floating-point
functions for details.
A variable should only be initialized once, or at least cleared out between
each initialization. After a variable has been initialized, it may be
assigned to any number of times.
For efficiency reasons, avoid excessive initializing and clearing. In
general, initialize near the start of a function and clear near the end. For
example,
@example
void
foo (void)
@{
mpz_t n;
int i;
mpz_init (n);
for (i = 1; i < 100; i++)
@{
mpz_mul (n, @dots{});
mpz_fdiv_q (n, @dots{});
@dots{}
@}
mpz_clear (n);
@}
@end example
@node Parameter Conventions, Memory Management, Variable Conventions, GMP Basics
@section Parameter Conventions
@cindex Parameter conventions
@cindex Conventions for parameters
When a GMP variable is used as a function parameter, it's effectively a
call-by-reference, meaning if the function stores a value there it will change
the original in the caller.
When a function is going to return a GMP result, it should designate a
parameter that it sets, like the library functions do. More than one value
can be returned by having more than one output parameter, again like the
library functions. A @code{return} of an @code{mpz_t} etc doesn't return the
object, only a pointer, and this is almost certainly not what's wanted.
Here's an example function accepting an @code{mpz_t} parameter, doing a
certain calculation, and storing a result to the indicated parameter.
@example
void
foo (mpz_t result, mpz_t param, unsigned long n)
@{
unsigned long i;
mpz_mul_ui (result, param, n);
for (i = 1; i < n; i++)
mpz_add_ui (result, result, i*7);
@}
int
main (void)
@{
mpz_t r, n;
mpz_init (r);
mpz_init_set_str (n, "123456", 0);
foo (r, n, 20L);
mpz_out_str (stdout, 10, r); printf ("\n");
return 0;
@}
@end example
This example will work if @code{result} and @code{param} are the same
variable, just like the library functions. But sometimes this is tricky to
arrange, and an application might not want to bother for its own subroutines.
For interest, the GMP types @code{mpz_t} etc are implemented as one-element
arrays of certain structure types. This is why declaring a variable creates
an object with the fields GMP needs, but then using it as a parameter passes a
pointer to the object. Note that the actual fields in each @code{mpz_t} etc
are for internal use only and should not be accessed directly by code that
expects to be compatible with future GMP releases.
@need 1000
@node Memory Management, Reentrancy, Parameter Conventions, GMP Basics
@section Memory Management
@cindex Memory
GMP variables are small, containing only a couple of sizes, and pointers to
allocated data. Once a variable is initialized, GMP takes care of all space
allocation. Additional space is allocated whenever a variable doesn't have
enough.
@code{mpz_t} and @code{mpq_t} variables never reduce their allocated space.
Normally this is the best policy, since it avoids frequent reallocation.
Applications that must return memory to the heap at some particular point can
use @code{_mpz_realloc}, or clear variables no longer needed.
@code{mpf_t} variables, in the current implementation, use a fixed amount of
space, determined by the chosen precision and allocated at initialization, so
their size doesn't change.
All memory is allocated using @code{malloc} and friends by default, but this
can be changed, see @ref{Custom Allocation}. Temporary memory on the stack is
also used, but this can be changed at build-time if desired, see @ref{Build
Options}.
@node Reentrancy, Useful Macros and Constants, Memory Management, GMP Basics
@section Reentrancy
@cindex Reentrancy
@cindex Thread safety
@cindex Multi-threading
GMP is reentrant and thread-safe, with some exceptions:
@itemize @bullet
@item
@code{mpf_set_default_prec} and @code{mpf_init} use a global variable for the
selected precision. @code{mpf_init2} can be used instead.
@item
@code{mp_set_memory_functions} uses global variables to store the selected
memory allocation functions.
@item
@code{mpz_random} and the other old random number functions use a random
number generator from the C library, usually @code{mrand48} or @code{random}.
These routines are not reentrant, since they rely on global state. The newer
random number functions that accept a @code{gmp_randstate_t} parameter can be
used instead.
@item
If the memory allocation functions set by a call to
@code{mp_set_memory_functions} (or @code{malloc} and friends by default) are
not reentrant, then GMP will not be reentrant either.
@item
If the standard I/O functions such as @code{fwrite} are not reentrant then the
GMP I/O functions using them will not be reentrant either.
@item
If @code{alloca} is not available, or GMP is configured with
@samp{--disable-alloca}, the library is not reentrant, due to the current
implementation of @file{stack-alloc.c}.
@item
It's safe for two threads to read from the same GMP variable simultaneously,
but it's not safe for one to read while another might be writing, nor for
two threads to write simultaneously. Note that this also applies to the seed
variables when generating random numbers (@pxref{Random Number Functions}),
since the seed is updated when a number is generated.
@item
On SCO systems the default @code{<ctype.h>} macros use per-file static
variables and may not be reentrant, depending on whether the compiler optimizes
away fetches from them. The GMP functions affected are @code{mpz_set_str},
@code{mpz_inp_str}, @code{mpf_set_str} and @code{mpf_inp_str}.
@end itemize
@need 2000
@node Useful Macros and Constants, Compatibility with older versions, Reentrancy, GMP Basics
@section Useful Macros and Constants
@cindex Useful macros and constants
@cindex Constants
@deftypevr {Global Constant} {const int} mp_bits_per_limb
@cindex Bits per limb
@cindex Limb size
The number of bits per limb.
@end deftypevr
@defmac __GNU_MP_VERSION
@defmacx __GNU_MP_VERSION_MINOR
@defmacx __GNU_MP_VERSION_PATCHLEVEL
@cindex Version number
@cindex GMP version number
The major and minor GMP version, and patch level, respectively, as integers.
For GMP i.j, these numbers will be i, j, and 0, respectively.
For GMP i.j.k, these numbers will be i, j, and k, respectively.
@end defmac
@node Compatibility with older versions, Efficiency, Useful Macros and Constants, GMP Basics
@section Compatibility with older versions
@cindex Compatibility with older versions
@cindex Upward compatibility
This version of GMP is upwardly binary compatible with all 3.x versions, and
upwardly compatible at the source level with all 2.x versions, with the
following exceptions.
@itemize @bullet
@item
@code{mpn_gcd} had its source arguments swapped as of GMP 3.0, for consistency
with other @code{mpn} functions.
@item
@code{mpf_get_prec} counted precision slightly differently in GMP 3.0 and
3.0.1, but in 3.1 has reverted to the 2.x style.
@end itemize
There are a number of compatibility issues between GMP 1 and GMP 2 that of
course also apply when porting applications from GMP 1 to GMP 3. Please
see the GMP 2 manual for details.
@c @enumerate
@c @item Integer division functions round the result differently. The obsolete
@c functions (@code{mpz_div}, @code{mpz_divmod}, @code{mpz_mdiv},
@c @code{mpz_mdivmod}, etc) now all use floor rounding (i.e., they round the
@c quotient towards
@c @ifinfo
@c @minus{}infinity).
@c @end ifinfo
@c @iftex
@c @tex
@c $-\infty$).
@c @end tex
@c @end iftex
@c There are a lot of functions for integer division, giving the user better
@c control over the rounding.
@c @item The function @code{mpz_mod} now compute the true @strong{mod} function.
@c @item The functions @code{mpz_powm} and @code{mpz_powm_ui} now use
@c @strong{mod} for reduction.
@c @item The assignment functions for rational numbers do no longer canonicalize
@c their results. In the case a non-canonical result could arise from an
@c assignment, the user need to insert an explicit call to
@c @code{mpq_canonicalize}. This change was made for efficiency.
@c @item Output generated by @code{mpz_out_raw} in this release cannot be read
@c by @code{mpz_inp_raw} in previous releases. This change was made for making
@c the file format truly portable between machines with different word sizes.
@c @item Several @code{mpn} functions have changed. But they were intentionally
@c undocumented in previous releases.
@c @item The functions @code{mpz_cmp_ui}, @code{mpz_cmp_si}, and @code{mpq_cmp_ui}
@c are now implemented as macros, and thereby sometimes evaluate their
@c arguments multiple times.
@c @item The functions @code{mpz_pow_ui} and @code{mpz_ui_pow_ui} now yield 1
@c for 0^0. (In version 1, they yielded 0.)
@c In version 1 of the library, @code{mpq_set_den} handled negative
@c denominators by copying the sign to the numerator. That is no longer done.
@c Pure assignment functions do not canonicalize the assigned variable. It is
@c the responsibility of the user to canonicalize the assigned variable before
@c any arithmetic operations are performed on that variable.
@c Note that this is an incompatible change from version 1 of the library.
@c @end enumerate
@need 1000
@node Efficiency, Debugging, Compatibility with older versions, GMP Basics
@section Efficiency
@table @asis
@item Small operands
On small operands, function call overheads and memory allocation can be
significant in comparison to actual calculating. This is unavoidable in a
general purpose variable precision library, although GMP attempts to be as
efficient as it can on both large and small operands.
@item Initializing and clearing
Avoid excessive initializing and clearing of variables, since this can be
quite time consuming, especially in comparison to otherwise fast operations
like addition.
A language interpreter might want to keep a free list or stack of
initialized variables ready for use. It should be possible to integrate
something like that with a garbage collector.
@item Reallocations
An @code{mpz_t} or @code{mpq_t} variable used to hold successively increasing
values will have its memory repeatedly @code{realloc}ed, which could be quite
slow or could fragment memory, depending on the C library. If an application
can estimate the final size then @code{@w{_mpz}_realloc} can be called to
allocate the necessary space from the beginning (@pxref{Initializing
Integers}).
It doesn't matter if a size set with @code{@w{_mpz}_realloc} is too small,
since all functions will do a further reallocation if necessary. Badly
overestimating memory required will waste space though.
@item Divisibility Testing (Small Integers)
@code{mpz_divisible_ui_p} is the best function for testing whether an
@code{mpz_t} is divisible by an individual small integer. It uses an
algorithm which is faster than @code{mpz_tdiv_ui}, but which gives no useful
information about the actual remainder, only whether it's zero.
However when testing divisibility by several small integers, it's best to take
a remainder modulo their product, in order to save multi-precision operations.
For instance to test whether a number is divisible by any of 23, 29 and 31,
take a remainder modulo @math{23@times{}29@times{}31 = 20677} and then test
that.
The division functions like @code{mpz_tdiv_q_ui} which give a quotient as well
as a remainder are generally a little slower than the remainder-only functions
like @code{mpz_tdiv_ui}. If the quotient is only rarely wanted, perhaps only
on finding a divisor, then it's probably best to just take a remainder and
then go back and calculate the quotient if and when it's wanted.
@item Rational Arithmetic
The @code{mpq} functions operate on @code{mpq_t} values with no common factors
in the numerator and denominator. Common factors are checked for and cast out
as necessary. In general, cancelling factors every time is the best approach
since it minimizes the sizes for subsequent operations.
However, applications that know something about the factorization of the
values they're working with might be able to avoid some of the GCDs used for
canonicalization, or swap them for divisions. For example when multiplying by
a prime it's enough to check for factors of it in the denominator rather than
do a full GCD. Or when forming a big product it might be known that very
little cancellation will be possible, and so canonicalization can be left to
the end.
The @code{mpq_numref} and @code{mpq_denref} macros give access to the
numerator and denominator to do things outside the scope of the @code{mpq}
functions. @xref{Applying Integer Functions}.
@item @code{2exp} functions
It's up to an application to call functions like @code{mpz_mul_2exp} when
appropriate. General purpose functions like @code{mpz_mul} make no attempt to
identify powers of two or other special forms, because such inputs will
usually be very rare and testing every time would be wasteful.
@item @code{si} and @code{ui} functions
The @code{si} and @code{ui} functions exist for convenience and should be used
where applicable. But if for example an @code{mpz_t} contains a value that
fits in an @code{unsigned long} there's no need to extract it and call a
@code{ui} function, just use the regular @code{mpz} function.
@item Number Sequences
Functions like @code{mpz_fac_ui}, @code{mpz_fib_ui} and @code{mpz_bin_uiui}
are designed for calculating isolated values. If a range of values is wanted
it's probably better to call once to get a starting point, then iterate from
there.
@end table
@node Debugging, Profiling, Efficiency, GMP Basics
@section Debugging
@cindex Debugging
@table @asis
@item Stack Overflow
Depending on the system, a segmentation violation or bus error might be the
only indication of stack overflow. See @samp{--disable-alloca} in @ref{Build
Options}, for ways to address this.
@item Heap Problems
The most likely cause of application problems with GMP is heap corruption.
Failing to @code{init} GMP variables will have unpredictable effects, and
corruption arising elsewhere in a program may well affect GMP. Initializing
GMP variables more than once or failing to clear them will cause memory leaks.
In all such cases a malloc debugger is recommended. On a GNU or BSD system
the standard C library @code{malloc} has some diagnostic facilities, see
@ref{Allocation Debugging,,,libc,The GNU C Library Reference Manual}, or
@samp{man 3 malloc}. Other possibilities, in no particular order, include
@display
@uref{http://www.inf.ethz.ch/personal/biere/projects/ccmalloc}
@uref{http://quorum.tamu.edu/jon/gnu} @ (debauch)
@uref{http://dmalloc.com}
@uref{http://www.perens.com/FreeSoftware} @ (electric fence)
@uref{http://packages.debian.org/fda}
@uref{http://people.redhat.com/~otaylor/memprof}
@end display
@item Stack Backtraces
On some systems the compiler options GMP uses by default can interfere with
debugging. In particular on x86 and 68k systems @samp{-fomit-frame-pointer}
is used and this generally inhibits stack backtracing. Recompiling without
such options may help while debugging, though the usual caveats about it
potentially moving a memory problem or hiding a compiler bug will apply.
@item GNU Debugger
A sample @file{.gdbinit} is included in the distribution, showing how to call
some undocumented dump functions to print GMP variables from within GDB. But
note that these functions shouldn't be used in final application code since
they're undocumented and may be subject to incompatible changes in future
versions of GMP.
@item Source File Paths
GMP has multiple source files with the same name, in different directories.
For example @file{mpz}, @file{mpq}, @file{mpf} and @file{mpfr} each have an
@file{init.c}. If the debugger can't already determine the right one it can
help to build with absolute paths on each C file. One way to do that is to
use a separate object directory with an absolute path to the source directory.
@example
cd /my/build/dir
/my/source/dir/gmp-@value{VERSION}/configure
@end example
This works via @code{VPATH}, and might require GNU Make. Alternately it might
be possible to change the @code{.c.lo} rules appropriately.
@item Assertion Checking
The build option @option{--enable-assert} is available to add some consistency
checks to the library (see @ref{Build Options}). These are likely to be of
limited value to most applications. Assertion failures are just as likely to
indicate a memory corruption as a library or compiler bug.
Applications using the low-level @code{mpn} functions, however, will benefit
from @option{--enable-assert} since it adds checks on the parameters of most
such functions, many of which have subtle restrictions on their usage. Note
however that only the generic C code has checks, not the assembler code, so
CPU @samp{none} should be used for maximum checking.
@item Other Problems
Any suspected bug in GMP itself should be isolated to make sure it's not an
application problem, see @ref{Reporting Bugs}.
@end table
@node Profiling, Autoconf, Debugging, GMP Basics
@section Profiling
@cindex Profiling
Running a program under a profiler is a good way to find where it's spending
most time and where improvements can be best sought.
Depending on the system, it may be possible to get a flat profile, meaning
simple timer sampling of the program counter, with no special GMP build
options, just a @samp{-p} when compiling the mainline. This is a good way to
ensure minimum interference with normal operation. The necessary symbol type
and size information exists in most assembler code.
The @samp{--enable-profiling} build option can be used to add suitable
compiler flags, either for @command{prof} (@samp{-p}) or @command{gprof}
(@samp{-pg}), see @ref{Build Options}. Which of the two is available and what
they do will depend on the system, and possibly on support available in
@file{libc}. For some systems appropriate corresponding @code{mcount} calls
are added to the assembler code too.
On x86 systems @command{prof} gives call counting, so that average time spent
in a function can be determined. @command{gprof}, where supported, adds call
graph construction, so for instance calls to @code{mpn_add_n} from
@code{mpz_add} and from @code{mpz_mul} can be differentiated.
On x86 and 68k systems @samp{-pg} and @samp{-fomit-frame-pointer} are
incompatible, so the latter is not used when @command{gprof} profiling is
selected, which may result in poorer code generation. If @command{prof}
profiling is selected instead it should still be possible to use
@command{gprof}, but only the @samp{gprof -p} flat profile and call counts can
be expected to be valid, not the @samp{gprof -q} call graph.
@node Autoconf, , Profiling, GMP Basics
@section Autoconf
@cindex Autoconf detections
It should be easy for autoconf based applications to check whether GMP is
installed. The only thing to be noted is that GMP 3 library symbols have
prefixes like @code{__gmpz}. The following would be a simple test,
@example
AC_CHECK_LIB(gmp, __gmpz_init)
@end example
This just uses the default @code{AC_CHECK_LIB} actions for found or not found,
but an application that must have GMP available would want to generate an
error if not found. For example,
@example
AC_CHECK_LIB(gmp, __gmpz_init, , [AC_MSG_ERROR(
[GNU MP 3 not found, see http://www.swox.com/gmp])])
@end example
If functions added in some particular version of GMP are required, then one of
those can be used when checking. For example @code{mpz_mul_si} was added in
GMP 3.1,
@example
AC_CHECK_LIB(gmp, __gmpz_mul_si, , [AC_MSG_ERROR(
[GNU MP not found, or not 3.1 or up, see http://www.swox.com/gmp])])
@end example
An alternative would be to test the version number in @file{gmp.h} using say
@code{AC_EGREP_CPP}. That would make it possible to test the exact version,
if some particular sub-minor release is known to be necessary.
An application that can use either GMP 2 or 3 will need to test for
@code{__gmpz_init} (GMP 3) or @code{mpz_init} (GMP 2), and it's also worth
checking for @file{libgmp2} since Debian GNU/Linux systems used that name in
the past. For example,
@example
AC_CHECK_LIB(gmp, __gmpz_init, ,
[AC_CHECK_LIB(gmp, mpz_init, ,
[AC_CHECK_LIB(gmp2, mpz_init)])])
@end example
In general it's suggested that applications should simply demand a new enough
GMP rather than trying to provide supplements for features not available in
past versions.
Occasionally an application will need or want to know the size of a type at
configuration or preprocessing time, not just with @code{sizeof} in the code.
This can be done in the normal way with @code{mp_limb_t} etc, but GMP 3.2 or
up is best for this, since prior versions needed certain @samp{-D} defines on
systems using a @code{long long} limb. The following would suit Autoconf 2.50
or up,
@example
AC_CHECK_SIZEOF(mp_limb_t, , [#include <gmp.h>])
@end example
The optional @code{mpfr} functions are provided in a separate
@file{libmpfr.a}, and this might be from GMP @option{--enable-mpfr} or from
MPFR installed separately. Either way @file{libmpfr} depends on
@file{libgmp}; it doesn't stand alone. Currently only a static
@file{libmpfr.a} will be available, not a shared library, since upward binary
compatibility is not guaranteed.
@example
AC_CHECK_LIB(mpfr, mpfr_add, , [AC_MSG_ERROR(
[Need MPFR either from GNU MP 3.2 or separate MPFR package.
See http://www.mpfr.org or http://www.swox.com/gmp])])
@end example
@node Reporting Bugs, Integer Functions, GMP Basics, Top
@comment node-name, next, previous, up
@chapter Reporting Bugs
@cindex Reporting bugs
@cindex Bug reporting
If you think you have found a bug in the GMP library, please investigate it
and report it. We have made this library available to you, and it is not too
much to ask you to report the bugs you find. Before you report a bug, you may
want to check @uref{http://swox.com/gmp/} for patches for this release.
Please include the following in any report,
@itemize @bullet
@item
The GMP version number, and if pre-packaged or patched then say so.
@item
A test program that makes it possible for us to reproduce the bug. Include
instructions on how to run the program.
@item
A description of what is wrong. If the results are incorrect, in what way.
If you get a crash, say so.
@item
If you get a crash, include a stack backtrace from the debugger if it's
informative (@samp{where} in @command{gdb}, or @samp{$C} in @command{adb}).
@item
@strong{Please do not send core dumps, executables or @command{strace}s.}
@item
The configuration options you used when building GMP, if any.
@item
The name of the compiler and its version. For @command{gcc}, get the version
with @samp{gcc -v}, otherwise perhaps @samp{what `which cc`}, or similar.
@item
The output from running @samp{uname -a}.
@item
The output from running @samp{./config.guess}, and from running
@samp{./configfsf.guess} (might be the same).
@item
If the bug is related to @samp{configure}, then the contents of
@file{config.log}.
@item
If the bug is related to an @file{asm} file not assembling, then the contents
of @file{config.m4}.
@end itemize
It is not uncommon that an observed problem is actually due to a bug in the
compiler; the GMP code tends to explore interesting corners in compilers.
If your bug report is good, we will do our best to help you get a corrected
version of the library; if the bug report is poor, we won't do anything about
it (except maybe ask you to send a better report).
Send your report to: @email{bug-gmp@@gnu.org}.
If you think something in this manual is unclear, or downright incorrect, or if
the language needs to be improved, please send a note to the same address.
@node Integer Functions, Rational Number Functions, Reporting Bugs, Top
@comment node-name, next, previous, up
@chapter Integer Functions
@cindex Integer functions
This chapter describes the GMP functions for performing integer arithmetic.
These functions start with the prefix @code{mpz_}.
GMP integers are stored in objects of type @code{mpz_t}.
@menu
* Initializing Integers::
* Assigning Integers::
* Simultaneous Integer Init & Assign::
* Converting Integers::
* Integer Arithmetic::
* Integer Division::
* Integer Exponentiation::
* Integer Roots::
* Number Theoretic Functions::
* Integer Comparisons::
* Integer Logic and Bit Fiddling::
* I/O of Integers::
* Integer Random Numbers::
* Miscellaneous Integer Functions::
@end menu
@node Initializing Integers, Assigning Integers, Integer Functions, Integer Functions
@comment node-name, next, previous, up
@section Initialization Functions
@cindex Integer initialization functions
@cindex Initialization functions
The functions for integer arithmetic assume that all integer objects are
initialized. You do that by calling the function @code{mpz_init}.
@deftypefun void mpz_init (mpz_t @var{integer})
Initialize @var{integer} with limb space and set the initial numeric value to
0. Each variable should normally only be initialized once, or at least cleared
out (using @code{mpz_clear}) between each initialization.
@end deftypefun
Here is an example of using @code{mpz_init}:
@example
@{
mpz_t integ;
mpz_init (integ);
@dots{}
mpz_add (integ, @dots{});
@dots{}
mpz_sub (integ, @dots{});
/* Unless the program is about to exit, do ... */
mpz_clear (integ);
@}
@end example
@noindent
As you can see, once an object is initialized, you can store new values
in it any number of times.
@deftypefun void mpz_clear (mpz_t @var{integer})
Free the limb space occupied by @var{integer}. Make sure to call this
function for all @code{mpz_t} variables when you are done with them.
@end deftypefun
@deftypefun {void *} _mpz_realloc (mpz_t @var{integer}, mp_size_t @var{new_alloc})
Change the limb space allocation to @var{new_alloc} limbs. This function is
not normally called from user code, but it can be used to give memory back to
the heap, or to increase the space of a variable to avoid repeated automatic
re-allocation.
@end deftypefun
@deftypefun void mpz_array_init (mpz_t @var{integer_array}[], size_t @var{array_size}, @w{mp_size_t @var{fixed_num_bits}})
Allocate @strong{fixed} limb space for all @var{array_size} integers in
@var{integer_array}. Each integer in the array will have enough room to store
a value of up to @var{fixed_num_bits} bits.
This function can reduce memory usage in algorithms that need large arrays of
integers, since it can avoid allocating and reallocating lots of small memory
blocks. There is no way to free the storage allocated by this function.
Don't call @code{mpz_clear}!
Since the allocation for each variable is fixed, care must be taken that
values stored are no bigger than that size. The following special rules must
be observed,
@itemize @bullet
@item
@code{mpz_abs}, @code{mpz_neg}, @code{mpz_set}, @code{mpz_set_si} and
@code{mpz_set_ui} need room for the value they copy.
@item
@code{mpz_add}, @code{mpz_add_ui}, @code{mpz_sub} and @code{mpz_sub_ui} need
room for the larger of the two operands, plus an extra
@code{mp_bits_per_limb}.
@item
@code{mpz_mul}, @code{mpz_mul_si} and @code{mpz_mul_ui} need room for the sum
of the number of bits in their operands, but each rounded up to a multiple of
@code{mp_bits_per_limb}.
@item
@code{mpz_swap} can be used between two array variables, but not between an
array and a normal variable.
@end itemize
For other functions, or if in doubt, the suggestion is to calculate in a
regular @code{mpz_init} variable and copy the result to an array variable with
@code{mpz_set}.
@end deftypefun
@node Assigning Integers, Simultaneous Integer Init & Assign, Initializing Integers, Integer Functions
@comment node-name, next, previous, up
@section Assignment Functions
@cindex Integer assignment functions
@cindex Assignment functions
These functions assign new values to already initialized integers
(@pxref{Initializing Integers}).
@deftypefun void mpz_set (mpz_t @var{rop}, mpz_t @var{op})
@deftypefunx void mpz_set_ui (mpz_t @var{rop}, unsigned long int @var{op})
@deftypefunx void mpz_set_si (mpz_t @var{rop}, signed long int @var{op})
@deftypefunx void mpz_set_d (mpz_t @var{rop}, double @var{op})
@deftypefunx void mpz_set_q (mpz_t @var{rop}, mpq_t @var{op})
@deftypefunx void mpz_set_f (mpz_t @var{rop}, mpf_t @var{op})
Set the value of @var{rop} from @var{op}.
@code{mpz_set_d}, @code{mpz_set_q} and @code{mpz_set_f} truncate @var{op} to
make it an integer.
@end deftypefun
@deftypefun int mpz_set_str (mpz_t @var{rop}, char *@var{str}, int @var{base})
Set the value of @var{rop} from @var{str}, a null-terminated C string in base
@var{base}. White space is allowed in the string, and is simply ignored. The
base may vary from 2 to 36. If @var{base} is 0, the actual base is determined
from the leading characters: if the first two characters are ``0x'' or ``0X'',
hexadecimal is assumed, otherwise if the first character is ``0'', octal is
assumed, otherwise decimal is assumed.
This function returns 0 if the entire string is a valid number in base
@var{base}. Otherwise it returns @minus{}1.
[It turns out that it is not entirely true that this function ignores
white-space. It does ignore it between digits, but not after a minus sign or
within or after ``0x''. We are considering changing the definition of this
function, making it fail when there is any white-space in the input, since
that makes a lot of sense. Please tell us your opinion about this change. Do
you really want it to accept @nicode{"3 14"} as meaning 314 as it does now?]
@end deftypefun
@deftypefun void mpz_swap (mpz_t @var{rop1}, mpz_t @var{rop2})
Swap the values @var{rop1} and @var{rop2} efficiently.
@end deftypefun
@node Simultaneous Integer Init & Assign, Converting Integers, Assigning Integers, Integer Functions
@comment node-name, next, previous, up
@section Combined Initialization and Assignment Functions
@cindex Initialization and assignment functions
@cindex Integer init and assign
For convenience, GMP provides a parallel series of initialize-and-set functions
which initialize the output and then store the value there. These functions'
names have the form @code{mpz_init_set@dots{}}
Here is an example of using one:
@example
@{
mpz_t pie;
mpz_init_set_str (pie, "3141592653589793238462643383279502884", 10);
@dots{}
mpz_sub (pie, @dots{});
@dots{}
mpz_clear (pie);
@}
@end example
@noindent
Once the integer has been initialized by any of the @code{mpz_init_set@dots{}}
functions, it can be used as the source or destination operand for the ordinary
integer functions. Don't use an initialize-and-set function on a variable
already initialized!
@deftypefun void mpz_init_set (mpz_t @var{rop}, mpz_t @var{op})
@deftypefunx void mpz_init_set_ui (mpz_t @var{rop}, unsigned long int @var{op})
@deftypefunx void mpz_init_set_si (mpz_t @var{rop}, signed long int @var{op})
@deftypefunx void mpz_init_set_d (mpz_t @var{rop}, double @var{op})
Initialize @var{rop} with limb space and set the initial numeric value from
@var{op}.
@end deftypefun
@deftypefun int mpz_init_set_str (mpz_t @var{rop}, char *@var{str}, int @var{base})
Initialize @var{rop} and set its value like @code{mpz_set_str} (see its
documentation above for details).
If the string is a correct base @var{base} number, the function returns 0;
if an error occurs it returns @minus{}1. @var{rop} is initialized even if
an error occurs. (I.e., you have to call @code{mpz_clear} for it.)
@end deftypefun
@node Converting Integers, Integer Arithmetic, Simultaneous Integer Init & Assign, Integer Functions
@comment node-name, next, previous, up
@section Conversion Functions
@cindex Integer conversion functions
@cindex Conversion functions
This section describes functions for converting GMP integers to standard C
types. Functions for converting @emph{to} GMP integers are described in
@ref{Assigning Integers} and @ref{I/O of Integers}.
@deftypefun mp_limb_t mpz_getlimbn (mpz_t @var{op}, mp_size_t @var{n})
Return limb #@var{n} from @var{op}. This function allows for very efficient
decomposition of a number into its limbs.
The function @code{mpz_size} can be used to determine the useful range for
@var{n}.
@end deftypefun
@deftypefun {unsigned long int} mpz_get_ui (mpz_t @var{op})
Return the least significant part of @var{op}. This function combined with
@* @code{mpz_tdiv_q_2exp(@dots{}, @var{op}, CHAR_BIT*sizeof(unsigned long
int))} can be used to decompose an integer into unsigned longs.
@end deftypefun
@deftypefun {signed long int} mpz_get_si (mpz_t @var{op})
If @var{op} fits into a @code{signed long int} return the value of @var{op}.
Otherwise return the least significant part of @var{op}, with the same sign
as @var{op}.
If @var{op} is too large to fit in a @code{signed long int}, the returned
result is probably not very useful. To find out if the value will fit, use
the function @code{mpz_fits_slong_p}.
@end deftypefun
@deftypefun double mpz_get_d (mpz_t @var{op})
Convert @var{op} to a @code{double}.
@end deftypefun
@deftypefun {char *} mpz_get_str (char *@var{str}, int @var{base}, mpz_t @var{op})
Convert @var{op} to a string of digits in base @var{base}. The base may vary
from 2 to 36.
If @var{str} is @code{NULL}, the result string is allocated using the current
allocation function (@pxref{Custom Allocation}). The block will be
@code{strlen(str)+1} bytes, that being exactly enough for the string and
null-terminator.
If @var{str} is not @code{NULL}, it should point to a block of storage large
enough for the result, that being @code{mpz_sizeinbase (@var{op}, @var{base}) +
2}. The two extra bytes are for a possible minus sign, and the
null-terminator.
A pointer to the result string is returned, being either the allocated block,
or the given @var{str}.
@end deftypefun
@need 2000
@node Integer Arithmetic, Integer Division, Converting Integers, Integer Functions
@comment node-name, next, previous, up
@section Arithmetic Functions
@cindex Integer arithmetic functions
@cindex Arithmetic functions
@deftypefun void mpz_add (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
@deftypefunx void mpz_add_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
Set @var{rop} to @ma{@var{op1} + @var{op2}}.
@end deftypefun
@deftypefun void mpz_sub (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
@deftypefunx void mpz_sub_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
Set @var{rop} to @var{op1} @minus{} @var{op2}.
@end deftypefun
@deftypefun void mpz_mul (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
@deftypefunx void mpz_mul_si (mpz_t @var{rop}, mpz_t @var{op1}, long int @var{op2})
@deftypefunx void mpz_mul_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
Set @var{rop} to @ma{@var{op1} @GMPtimes{} @var{op2}}.
@end deftypefun
@deftypefun void mpz_addmul_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
Set @var{rop} to @ma{@var{rop} + @var{op1} @GMPtimes{} @var{op2}}.
@end deftypefun
@deftypefun void mpz_mul_2exp (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
@cindex Bit shift left
Set @var{rop} to @m{@var{op1} \times 2^{op2}, @var{op1} times 2 raised to
@var{op2}}. This operation can also be defined as a left shift by @var{op2}
bits.
@end deftypefun
@deftypefun void mpz_neg (mpz_t @var{rop}, mpz_t @var{op})
Set @var{rop} to @minus{}@var{op}.
@end deftypefun
@deftypefun void mpz_abs (mpz_t @var{rop}, mpz_t @var{op})
Set @var{rop} to the absolute value of @var{op}.
@end deftypefun
@need 2000
@node Integer Division, Integer Exponentiation, Integer Arithmetic, Integer Functions
@section Division Functions
@cindex Integer division functions
@cindex Division functions
Division is undefined if the divisor is zero. Passing a zero divisor to the
division or modulo functions (including the modular powering functions
@code{mpz_powm} and @code{mpz_powm_ui}), will cause an intentional division by
zero. This lets a program handle arithmetic exceptions in these functions the
same way as for normal C @code{int} arithmetic.
@c Separate deftypefun groups for cdiv, fdiv and tdiv produce a blank line
@c between each, and seem to let tex do a better job of page breaks than an
@c @sp 1 in the middle of one big set.
@deftypefun void mpz_cdiv_q (mpz_t @var{q}, mpz_t @var{n}, mpz_t @var{d})
@deftypefunx void mpz_cdiv_r (mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
@deftypefunx void mpz_cdiv_qr (mpz_t @var{q}, mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
@deftypefunx {unsigned long int} mpz_cdiv_q_ui (mpz_t @var{q}, mpz_t @var{n}, @w{unsigned long int @var{d}})
@deftypefunx {unsigned long int} mpz_cdiv_r_ui (mpz_t @var{r}, mpz_t @var{n}, @w{unsigned long int @var{d}})
@deftypefunx {unsigned long int} mpz_cdiv_qr_ui (mpz_t @var{q}, mpz_t @var{r}, @w{mpz_t @var{n}}, @w{unsigned long int @var{d}})
@deftypefunx {unsigned long int} mpz_cdiv_ui (mpz_t @var{n}, @w{unsigned long int @var{d}})
@deftypefunx void mpz_cdiv_q_2exp (mpz_t @var{q}, mpz_t @var{n}, @w{unsigned long int @var{b}})
@deftypefunx void mpz_cdiv_r_2exp (mpz_t @var{r}, mpz_t @var{n}, @w{unsigned long int @var{b}})
@end deftypefun
@deftypefun void mpz_fdiv_q (mpz_t @var{q}, mpz_t @var{n}, mpz_t @var{d})
@deftypefunx void mpz_fdiv_r (mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
@deftypefunx void mpz_fdiv_qr (mpz_t @var{q}, mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
@deftypefunx {unsigned long int} mpz_fdiv_q_ui (mpz_t @var{q}, mpz_t @var{n}, @w{unsigned long int @var{d}})
@deftypefunx {unsigned long int} mpz_fdiv_r_ui (mpz_t @var{r}, mpz_t @var{n}, @w{unsigned long int @var{d}})
@deftypefunx {unsigned long int} mpz_fdiv_qr_ui (mpz_t @var{q}, mpz_t @var{r}, @w{mpz_t @var{n}}, @w{unsigned long int @var{d}})
@deftypefunx {unsigned long int} mpz_fdiv_ui (mpz_t @var{n}, @w{unsigned long int @var{d}})
@deftypefunx void mpz_fdiv_q_2exp (mpz_t @var{q}, mpz_t @var{n}, @w{unsigned long int @var{b}})
@deftypefunx void mpz_fdiv_r_2exp (mpz_t @var{r}, mpz_t @var{n}, @w{unsigned long int @var{b}})
@end deftypefun
@deftypefun void mpz_tdiv_q (mpz_t @var{q}, mpz_t @var{n}, mpz_t @var{d})
@deftypefunx void mpz_tdiv_r (mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
@deftypefunx void mpz_tdiv_qr (mpz_t @var{q}, mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
@deftypefunx {unsigned long int} mpz_tdiv_q_ui (mpz_t @var{q}, mpz_t @var{n}, @w{unsigned long int @var{d}})
@deftypefunx {unsigned long int} mpz_tdiv_r_ui (mpz_t @var{r}, mpz_t @var{n}, @w{unsigned long int @var{d}})
@deftypefunx {unsigned long int} mpz_tdiv_qr_ui (mpz_t @var{q}, mpz_t @var{r}, @w{mpz_t @var{n}}, @w{unsigned long int @var{d}})
@deftypefunx {unsigned long int} mpz_tdiv_ui (mpz_t @var{n}, @w{unsigned long int @var{d}})
@deftypefunx void mpz_tdiv_q_2exp (mpz_t @var{q}, mpz_t @var{n}, @w{unsigned long int @var{b}})
@deftypefunx void mpz_tdiv_r_2exp (mpz_t @var{r}, mpz_t @var{n}, @w{unsigned long int @var{b}})
@cindex Bit shift right
@sp 1
Divide @var{n} by @var{d}, forming a quotient @var{q} and/or remainder
@var{r}. For the @code{2exp} functions the divisor is @m{2^b, 2^@var{b}}.
The rounding is in three styles, each suiting different applications.
@itemize @bullet
@item
@code{cdiv} rounds @var{q} up towards @m{+\infty, +infinity}, and @var{r} will
have the opposite sign to @var{d}. The @code{c} stands for ``ceil''.
@item
@code{fdiv} rounds @var{q} down towards @m{-\infty, @minus{}infinity}, and
@var{r} will have the same sign as @var{d}. The @code{f} stands for
``floor''.
@item
@code{tdiv} rounds @var{q} towards zero, and @var{r} will have the same sign
as @var{n}. The @code{t} stands for ``truncate''.
@end itemize
In all cases @var{q} and @var{r} will satisfy
@m{@var{n}=@var{q}@var{d}+@var{r}, @var{n}=@var{q}*@var{d}+@var{r}}, and
@var{r} will satisfy @m{0\leq|@var{r}|<|@var{d}|,
0<=abs(@var{r})<abs(@var{d})}.
The @code{q} functions calculate only the quotient, the @code{r} functions
only the remainder, and the @code{qr} functions calculate both. Note that for
@code{qr} the same variable cannot be passed for both @var{q} and @var{r}, or
results will be unpredictable.
For the @code{ui} variants the return value is the remainder, and in fact
returning the remainder is all the @code{div_ui} functions do. For
@code{tdiv} and @code{cdiv} the remainder can be negative, so for those the
return value is the absolute value of the remainder.
The @code{2exp} functions are right shifts and bit masks, but of course
rounding the same as the other functions. For positive @var{n} both
@code{mpz_fdiv_q_2exp} and @code{mpz_tdiv_q_2exp} are simple bitwise right
shifts. For negative @var{n}, @code{mpz_fdiv_q_2exp} is effectively an
arithmetic right shift treating @var{n} as twos complement the same as the
bitwise logical functions do, whereas @code{mpz_tdiv_q_2exp} effectively
treats @var{n} as sign and magnitude.
@end deftypefun
@deftypefun void mpz_mod (mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
@deftypefunx {unsigned long int} mpz_mod_ui (mpz_t @var{r}, mpz_t @var{n}, @w{unsigned long int @var{d}})
Set @var{r} to @var{n} @code{mod} @var{d}. The sign of the divisor is
ignored; the result is always non-negative.
@code{mpz_mod_ui} is identical to @code{mpz_fdiv_r_ui} above, returning the
remainder as well as setting @var{r}. See @code{mpz_fdiv_ui} above if only
the return value is wanted.
@end deftypefun
@deftypefun void mpz_divexact (mpz_t @var{q}, mpz_t @var{n}, mpz_t @var{d})
@cindex Exact division functions
Set @var{q} to @var{n}/@var{d}. This function produces correct results only
when it is known in advance that @var{d} divides @var{n}.
@code{mpz_divexact} is much faster than the other division functions, and is
the best choice when exact division is known to occur, for example reducing a
rational to lowest terms.
@end deftypefun
@deftypefun int mpz_divisible_p (mpz_t @var{n}, mpz_t @var{d})
@deftypefunx int mpz_divisible_ui_p (mpz_t @var{n}, unsigned long int @var{d})
@deftypefunx int mpz_divisible_2exp_p (mpz_t @var{n}, unsigned long int @var{b})
Return non-zero if @var{n} is exactly divisible by @var{d}, or in the case of
@code{mpz_divisible_2exp_p} by @m{2^b,2^@var{b}}.
@end deftypefun
@need 2000
@node Integer Exponentiation, Integer Roots, Integer Division, Integer Functions
@section Exponentiation Functions
@cindex Integer exponentiation functions
@cindex Exponentiation functions
@deftypefun void mpz_powm (mpz_t @var{rop}, mpz_t @var{base}, mpz_t @var{exp}, mpz_t @var{mod})
@deftypefunx void mpz_powm_ui (mpz_t @var{rop}, mpz_t @var{base}, unsigned long int @var{exp}, mpz_t @var{mod})
Set @var{rop} to @m{base^{exp} \bmod mod, (@var{base} raised to @var{exp})
modulo @var{mod}}. If @var{exp} is negative, the result is undefined.
@end deftypefun
@deftypefun void mpz_pow_ui (mpz_t @var{rop}, mpz_t @var{base}, unsigned long int @var{exp})
@deftypefunx void mpz_si_pow_ui (mpz_t @var{rop}, signed long int @var{base}, unsigned long int @var{exp})
@deftypefunx void mpz_ui_pow_ui (mpz_t @var{rop}, unsigned long int @var{base}, unsigned long int @var{exp})
Set @var{rop} to @m{base^{exp}, @var{base} raised to @var{exp}}. The case
@ma{0^0} yields 1.
@end deftypefun
@need 2000
@node Integer Roots, Number Theoretic Functions, Integer Exponentiation, Integer Functions
@section Root Extraction Functions
@cindex Integer root functions
@cindex Root extraction functions
@deftypefun int mpz_root (mpz_t @var{rop}, mpz_t @var{op}, unsigned long int @var{n})
Set @var{rop} to @m{\lfloor\root n \of {op}\rfloor@C{},} the truncated integer
part of the @var{n}th root of @var{op}. Return non-zero if the computation
was exact, i.e., if @var{op} is @var{rop} to the @var{n}th power.
@end deftypefun
@deftypefun void mpz_sqrt (mpz_t @var{rop}, mpz_t @var{op})
Set @var{rop} to @m{\lfloor\sqrt{@var{op}}\rfloor@C{},} the truncated
integer part of the square root of @var{op}.
@end deftypefun
@deftypefun void mpz_sqrtrem (mpz_t @var{rop1}, mpz_t @var{rop2}, mpz_t @var{op})
Set @var{rop1} to @m{\lfloor\sqrt{@var{op}}\rfloor, the truncated integer part
of the square root of @var{op}}, like @code{mpz_sqrt}. Set @var{rop2} to the
remainder @m{(@var{op} - @var{rop1}^2),
@var{op}@minus{}@var{rop1}*@var{rop1}}, which will be zero if @var{op} is a
perfect square.
If @var{rop1} and @var{rop2} are the same variable, the results are
undefined.
@end deftypefun
@deftypefun int mpz_perfect_power_p (mpz_t @var{op})
Return non-zero if @var{op} is a perfect power, i.e., if there exist integers
@m{a,@var{a}} and @m{b,@var{b}}, with @m{b>1, @var{b}>1}, such that
@m{@var{op}=a^b, @var{op} equals @var{a} raised to @var{b}}. Return zero
otherwise.
@end deftypefun
@deftypefun int mpz_perfect_square_p (mpz_t @var{op})
Return non-zero if @var{op} is a perfect square, i.e., if the square root of
@var{op} is an integer. Return zero otherwise.
@end deftypefun
@need 2000
@node Number Theoretic Functions, Integer Comparisons, Integer Roots, Integer Functions
@section Number Theoretic Functions
@cindex Number theoretic functions
@deftypefun int mpz_probab_prime_p (mpz_t @var{n}, int @var{reps})
@cindex Prime testing functions
If this function returns 0, @var{n} is definitely not prime. If it
returns 1, then @var{n} is `probably' prime. If it returns 2, then
@var{n} is surely prime. Reasonable values of @var{reps} vary from 5 to 10; a
higher value lowers the probability that a composite passes as a
`probable' prime.
The function uses the Miller-Rabin probabilistic primality test.
@end deftypefun
@deftypefun void mpz_nextprime (mpz_t @var{rop}, mpz_t @var{op})
Set @var{rop} to the next prime greater than @var{op}.
This function uses a probabilistic algorithm to identify primes, but for
practical purposes it's adequate, since the chance of a composite passing will
be extremely small.
@end deftypefun
@c mpz_prime_p not implemented as of gmp 3.0.
@c @deftypefun int mpz_prime_p (mpz_t @var{n})
@c Return non-zero if @var{n} is prime and zero if @var{n} is a non-prime.
@c This function is far slower than @code{mpz_probab_prime_p}, but then it
@c never returns non-zero for composite numbers.
@c (For practical purposes, using @code{mpz_probab_prime_p} is adequate.
@c The likelihood of a programming error or hardware malfunction is orders
@c of magnitudes greater than the likelihood for a composite to pass as a
@c prime, if the @var{reps} argument is in the suggested range.)
@c @end deftypefun
@deftypefun void mpz_gcd (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
@cindex Greatest common divisor functions
Set @var{rop} to the greatest common divisor of @var{op1} and @var{op2}.
The result is always positive, even if one or both input operands
are negative.
@end deftypefun
@deftypefun {unsigned long int} mpz_gcd_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
Compute the greatest common divisor of @var{op1} and @var{op2}. If
@var{rop} is not @code{NULL}, store the result there.
If the result is small enough to fit in an @code{unsigned long int}, it is
returned. If the result does not fit, 0 is returned, and the result is equal
to the argument @var{op1}. Note that the result will always fit if @var{op2}
is non-zero.
@end deftypefun
@deftypefun void mpz_gcdext (mpz_t @var{g}, mpz_t @var{s}, mpz_t @var{t}, mpz_t @var{a}, mpz_t @var{b})
@cindex Extended GCD
Compute @var{g}, @var{s}, and @var{t}, such that @var{a}@var{s} +
@var{b}@var{t} = @var{g} = @code{gcd}(@var{a}, @var{b}). If @var{t} is
@code{NULL}, that argument is not computed.
@end deftypefun
@deftypefun void mpz_lcm (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
@cindex Least common multiple functions
Set @var{rop} to the least common multiple of @var{op1} and @var{op2}.
@end deftypefun
@deftypefun int mpz_invert (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
@cindex Modular inverse functions
Compute the inverse of @var{op1} modulo @var{op2} and put the result in
@var{rop}. Return non-zero if an inverse exists, zero otherwise. When the
function returns zero, @var{rop} is undefined.
@end deftypefun
@deftypefun int mpz_jacobi (mpz_t @var{a}, mpz_t @var{b})
@deftypefunx int mpz_legendre (mpz_t @var{a}, mpz_t @var{p})
@deftypefunx int mpz_kronecker (mpz_t @var{a}, mpz_t @var{b})
@deftypefunx int mpz_kronecker_si (mpz_t @var{a}, long @var{b})
@deftypefunx int mpz_kronecker_ui (mpz_t @var{a}, unsigned long @var{b})
@deftypefunx int mpz_si_kronecker (long @var{a}, mpz_t @var{b})
@deftypefunx int mpz_ui_kronecker (unsigned long @var{a}, mpz_t @var{b})
@cindex Jacobi symbol functions
@cindex Kronecker symbol functions
@code{mpz_jacobi} calculates the Jacobi symbol @m{\left(a \over b\right),
(@var{a}/@var{b})}. This is undefined if @var{b} is even, but for the
purposes of this implementation any factors of 2 in @var{b} are simply
ignored.
@code{mpz_legendre} calculates the Legendre symbol @m{\left(a \over p\right),
(@var{a}/@var{p})}. This is defined only for @var{p} an odd positive prime,
but currently @code{mpz_legendre} is simply a synonym for @code{mpz_jacobi}.
@code{mpz_kronecker} etc calculates the Jacobi symbol @m{\left(a \over
b\right), (@var{a}/@var{b})} with the Kronecker extension @m{\left(a
\over 2\right) = \left(2 \over a\right), (a/2)=(2/a)} when @ma{a} odd,
or @m{\left(a \over 2\right) = 0, (a/2)=0} when @ma{a} even. Note that
when @var{b} is odd, @code{mpz_jacobi} and @code{mpz_kronecker} are
identical.
For more information see Henri Cohen section 1.4.2 (@pxref{References}),
or any number theory textbook. See also the example program
@file{demos/qcn.c} which uses @code{mpz_kronecker_ui}.
@end deftypefun
@deftypefun {unsigned long int} mpz_remove (mpz_t @var{rop}, mpz_t @var{op}, mpz_t @var{f})
Remove all occurrences of the factor @var{f} from @var{op} and store the
result in @var{rop}. Return the multiplicity of @var{f} in @var{op}.
@end deftypefun
@deftypefun void mpz_fac_ui (mpz_t @var{rop}, unsigned long int @var{op})
@cindex Factorial functions
Set @var{rop} to @var{op}!, the factorial of @var{op}.
@end deftypefun
@deftypefun void mpz_bin_ui (mpz_t @var{rop}, mpz_t @var{n}, unsigned long int @var{k})
@deftypefunx void mpz_bin_uiui (mpz_t @var{rop}, unsigned long int @var{n}, @w{unsigned long int @var{k}})
@cindex Binomial coefficient functions
Compute the binomial coefficient @m{\left({n}\atop{k}\right), @var{n} over
@var{k}} and store the result in @var{rop}. Negative values of @var{n} are
supported by @code{mpz_bin_ui}, using the identity
@m{\left({-n}\atop{k}\right) = (-1)^k \left({n+k-1}\atop{k}\right),
bin(-n@C{}k) = (-1)^k * bin(n+k-1@C{}k)}, see Knuth volume 1 section 1.2.6
part G.
@end deftypefun
@deftypefun void mpz_fib_ui (mpz_t @var{rop}, unsigned long int @var{n})
@cindex Fibonacci sequence functions
Compute the @var{n}th Fibonacci number and store the result in @var{rop}.
@end deftypefun
@node Integer Comparisons, Integer Logic and Bit Fiddling, Number Theoretic Functions, Integer Functions
@comment node-name, next, previous, up
@section Comparison Functions
@cindex Integer comparison functions
@cindex Comparison functions
@deftypefun int mpz_cmp (mpz_t @var{op1}, mpz_t @var{op2})
Compare @var{op1} and @var{op2}. Return a positive value if @ma{@var{op1} >
@var{op2}}, zero if @ma{@var{op1} = @var{op2}}, and a negative value if
@ma{@var{op1} < @var{op2}}.
@end deftypefun
@deftypefn Macro int mpz_cmp_ui (mpz_t @var{op1}, unsigned long int @var{op2})
@deftypefnx Macro int mpz_cmp_si (mpz_t @var{op1}, signed long int @var{op2})
Compare @var{op1} and @var{op2}. Return a positive value if @ma{@var{op1} >
@var{op2}}, zero if @ma{@var{op1} = @var{op2}}, and a negative value if
@ma{@var{op1} < @var{op2}}.
These functions are actually implemented as macros. They evaluate their
arguments multiple times.
@end deftypefn
@deftypefun int mpz_cmpabs (mpz_t @var{op1}, mpz_t @var{op2})
@deftypefunx int mpz_cmpabs_ui (mpz_t @var{op1}, unsigned long int @var{op2})
Compare the absolute values of @var{op1} and @var{op2}. Return a positive
value if @m{|@var{op1}| > |@var{op2}|, @var{op1} > @var{op2}}, zero if
@m{|@var{op1}| = |@var{op2}|, @var{op1} = @var{op2}}, and a negative value if
@m{|@var{op1}| < |@var{op2}|, @var{op1} < @var{op2}}.
@end deftypefun
@deftypefn Macro int mpz_sgn (mpz_t @var{op})
Return @ma{+1} if @ma{@var{op} > 0}, 0 if @ma{@var{op} = 0}, and @ma{-1} if
@ma{@var{op} < 0}.
This function is actually implemented as a macro. It evaluates its argument
multiple times.
@end deftypefn
@node Integer Logic and Bit Fiddling, I/O of Integers, Integer Comparisons, Integer Functions
@comment node-name, next, previous, up
@section Logical and Bit Manipulation Functions
@cindex Logical functions
@cindex Bit manipulation functions
@cindex Integer bit manipulation functions
These functions behave as if twos complement arithmetic were used (although
sign-magnitude is the actual implementation). The least significant bit is
number 0.
@deftypefun void mpz_and (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
Set @var{rop} to @var{op1} logical-and @var{op2}.
@end deftypefun
@deftypefun void mpz_ior (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
Set @var{rop} to @var{op1} inclusive-or @var{op2}.
@end deftypefun
@deftypefun void mpz_xor (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
Set @var{rop} to @var{op1} exclusive-or @var{op2}.
@end deftypefun
@deftypefun void mpz_com (mpz_t @var{rop}, mpz_t @var{op})
Set @var{rop} to the one's complement of @var{op}.
@end deftypefun
@deftypefun {unsigned long int} mpz_popcount (mpz_t @var{op})
For non-negative numbers, return the population count of @var{op}. For
negative numbers, return the largest possible value (@var{MAX_ULONG}).
@end deftypefun
@deftypefun {unsigned long int} mpz_hamdist (mpz_t @var{op1}, mpz_t @var{op2})
If @var{op1} and @var{op2} are both non-negative, return the hamming distance
between the two operands. Otherwise, return the largest possible value
(@var{MAX_ULONG}).
It is possible to extend this function to return a useful value when the
operands are both negative, but the current implementation returns
@var{MAX_ULONG} in this case. @strong{Do not depend on this behavior, since
it will change in a future release.}
@end deftypefun
@deftypefun {unsigned long int} mpz_scan0 (mpz_t @var{op}, unsigned long int @var{starting_bit})
@deftypefunx {unsigned long int} mpz_scan1 (mpz_t @var{op}, unsigned long int @var{starting_bit})
Scan @var{op}, starting from bit @var{starting_bit}, towards more significant
bits, until the first 0 or 1 bit (respectively) is found. Return the index of
the found bit.
If the bit at @var{starting_bit} is already what's sought, then
@var{starting_bit} is returned.
If there's no bit found, then @var{MAX_ULONG} is returned. This will happen
in @code{mpz_scan0} past the end of a positive number, or @code{mpz_scan1}
past the end of a negative.
@end deftypefun
@deftypefun void mpz_setbit (mpz_t @var{rop}, unsigned long int @var{bit_index})
Set bit @var{bit_index} in @var{rop}.
@end deftypefun
@deftypefun void mpz_clrbit (mpz_t @var{rop}, unsigned long int @var{bit_index})
Clear bit @var{bit_index} in @var{rop}.
@end deftypefun
@deftypefun int mpz_tstbit (mpz_t @var{op}, unsigned long int @var{bit_index})
Test bit @var{bit_index} in @var{op} and return 0 or 1 accordingly.
@end deftypefun
@node I/O of Integers, Integer Random Numbers, Integer Logic and Bit Fiddling, Integer Functions
@comment node-name, next, previous, up
@section Input and Output Functions
@cindex Integer input and output functions
@cindex Input functions
@cindex Output functions
@cindex I/O functions
This section describes functions that perform input from a stdio stream, and
functions that output to a stdio stream. Passing a @code{NULL} pointer for a
@var{stream} argument to any of these functions makes them read from
@code{stdin} or write to @code{stdout}, respectively.
When using any of these functions, it is a good idea to include @file{stdio.h}
before @file{gmp.h}, since that will allow @file{gmp.h} to define prototypes
for these functions.
@deftypefun size_t mpz_out_str (FILE *@var{stream}, int @var{base}, mpz_t @var{op})
Output @var{op} on stdio stream @var{stream}, as a string of digits in base
@var{base}. The base may vary from 2 to 36.
Return the number of bytes written, or if an error occurred, return 0.
@end deftypefun
@deftypefun size_t mpz_inp_str (mpz_t @var{rop}, FILE *@var{stream}, int @var{base})
Input a possibly white-space preceded string in base @var{base} from stdio
stream @var{stream}, and put the read integer in @var{rop}. The base may vary
from 2 to 36. If @var{base} is 0, the actual base is determined from the
leading characters: if the first two characters are `0x' or `0X', hexadecimal
is assumed, otherwise if the first character is `0', octal is assumed,
otherwise decimal is assumed.
Return the number of bytes read, or if an error occurred, return 0.
@end deftypefun
@deftypefun size_t mpz_out_raw (FILE *@var{stream}, mpz_t @var{op})
Output @var{op} on stdio stream @var{stream}, in raw binary format. The
integer is written in a portable format, with 4 bytes of size information, and
that many bytes of limbs. Both the size and the limbs are written in
decreasing significance order (i.e., in big-endian).
The output can be read with @code{mpz_inp_raw}.
Return the number of bytes written, or if an error occurred, return 0.
The output of this can not be read by @code{mpz_inp_raw} from GMP 1, because
of changes necessary for compatibility between 32-bit and 64-bit machines.
@end deftypefun
@deftypefun size_t mpz_inp_raw (mpz_t @var{rop}, FILE *@var{stream})
Input from stdio stream @var{stream} in the format written by
@code{mpz_out_raw}, and put the result in @var{rop}. Return the number of
bytes read, or if an error occurred, return 0.
This routine can read the output from @code{mpz_out_raw} also from GMP 1, in
spite of changes necessary for compatibility between 32-bit and 64-bit
machines.
@end deftypefun
@need 2000
@node Integer Random Numbers, Miscellaneous Integer Functions, I/O of Integers, Integer Functions
@comment node-name, next, previous, up
@section Random Number Functions
@cindex Integer random number functions
@cindex Random number functions
The random number functions of GMP come in two groups: older functions
that rely on a global state, and newer functions that accept a state
parameter that is read and modified.  Please see @ref{Random Number
Functions} for more information on how to use and not to use random
number functions.
@deftypefun void mpz_urandomb (mpz_t @var{rop}, gmp_randstate_t @var{state}, unsigned long int @var{n})
Generate a uniformly distributed random integer in the range 0 to @m{2^n-1,
2^@var{n}@minus{}1}, inclusive.
The variable @var{state} must be initialized by calling one of the
@code{gmp_randinit} functions (@ref{Random State Initialization}) before
invoking this function.
@end deftypefun
@deftypefun void mpz_urandomm (mpz_t @var{rop}, gmp_randstate_t @var{state}, mpz_t @var{n})
Generate a uniform random integer in the range 0 to @ma{@var{n}-1}, inclusive.
The variable @var{state} must be initialized by calling one of the
@code{gmp_randinit} functions (@ref{Random State Initialization})
before invoking this function.
@end deftypefun
@deftypefun void mpz_rrandomb (mpz_t @var{rop}, gmp_randstate_t @var{state}, unsigned long int @var{n})
Generate a random integer with long strings of zeros and ones in the
binary representation.  Useful for testing functions and algorithms,
since this kind of random number has proven more likely to
trigger corner-case bugs.  The random number will be in the range
0 to @m{2^n-1, 2^@var{n}@minus{}1}, inclusive.
The variable @var{state} must be initialized by calling one of the
@code{gmp_randinit} functions (@ref{Random State Initialization})
before invoking this function.
@end deftypefun
@deftypefun void mpz_random (mpz_t @var{rop}, mp_size_t @var{max_size})
Generate a random integer of at most @var{max_size} limbs. The generated
random number doesn't satisfy any particular requirements of randomness.
Negative random numbers are generated when @var{max_size} is negative.
This function is obsolete. Use @code{mpz_urandomb} or
@code{mpz_urandomm} instead.
@end deftypefun
@deftypefun void mpz_random2 (mpz_t @var{rop}, mp_size_t @var{max_size})
Generate a random integer of at most @var{max_size} limbs, with long strings
of zeros and ones in the binary representation. Useful for testing functions
and algorithms, since this kind of random number has proven more
likely to trigger corner-case bugs.  Negative random numbers are generated
when @var{max_size} is negative.
This function is obsolete. Use @code{mpz_rrandomb} instead.
@end deftypefun
@need 2000
@node Miscellaneous Integer Functions, , Integer Random Numbers, Integer Functions
@comment node-name, next, previous, up
@section Miscellaneous Functions
@cindex Miscellaneous integer functions
@cindex Integer miscellaneous functions
@deftypefun int mpz_fits_ulong_p (mpz_t @var{op})
@deftypefunx int mpz_fits_slong_p (mpz_t @var{op})
@deftypefunx int mpz_fits_uint_p (mpz_t @var{op})
@deftypefunx int mpz_fits_sint_p (mpz_t @var{op})
@deftypefunx int mpz_fits_ushort_p (mpz_t @var{op})
@deftypefunx int mpz_fits_sshort_p (mpz_t @var{op})
Return non-zero iff the value of @var{op} fits in an @code{unsigned long int},
@code{signed long int}, @code{unsigned int}, @code{signed int}, @code{unsigned
short int}, or @code{signed short int}, respectively. Otherwise, return zero.
@end deftypefun
@deftypefn Macro int mpz_odd_p (mpz_t @var{op})
@deftypefnx Macro int mpz_even_p (mpz_t @var{op})
Determine whether @var{op} is odd or even, respectively. Return non-zero if
yes, zero if no. These macros evaluate their argument more than once.
@end deftypefn
@deftypefun size_t mpz_size (mpz_t @var{op})
Return the size of @var{op} measured in number of limbs. If @var{op} is zero,
the returned value will be zero.
@c (@xref{Nomenclature}, for an explanation of the concept @dfn{limb}.)
@end deftypefun
@deftypefun size_t mpz_sizeinbase (mpz_t @var{op}, int @var{base})
Return the size of @var{op} measured in number of digits in base @var{base}.
The base may vary from 2 to 36. The returned value will be exact or 1 too
big. If @var{base} is a power of 2, the returned value will always be exact.
This function is useful in order to allocate the right amount of space before
converting @var{op} to a string. The right amount of allocation is normally
two more than the value returned by @code{mpz_sizeinbase} (one extra for a
minus sign and one for the null-terminator).
@end deftypefun
@node Rational Number Functions, Floating-point Functions, Integer Functions, Top
@comment node-name, next, previous, up
@chapter Rational Number Functions
@cindex Rational number functions
This chapter describes the GMP functions for performing arithmetic on rational
numbers. These functions start with the prefix @code{mpq_}.
Rational numbers are stored in objects of type @code{mpq_t}.
All rational arithmetic functions assume operands have a canonical form, and
canonicalize their result.  Canonical form means that the denominator and
the numerator have no common factors, and that the denominator is positive.
Zero has the unique representation 0/1.
Pure assignment functions do not canonicalize the assigned variable. It is
the responsibility of the user to canonicalize the assigned variable before
any arithmetic operations are performed on that variable.
@deftypefun void mpq_canonicalize (mpq_t @var{op})
Remove any factors that are common to the numerator and denominator of
@var{op}, and make the denominator positive.
@end deftypefun
@menu
* Initializing Rationals::
* Rational Arithmetic::
* Comparing Rationals::
* Applying Integer Functions::
* I/O of Rationals::
* Miscellaneous Rational Functions::
@end menu
@node Initializing Rationals, Rational Arithmetic, Rational Number Functions, Rational Number Functions
@comment node-name, next, previous, up
@section Initialization and Assignment Functions
@cindex Initialization and assignment functions
@cindex Rational init and assign
@deftypefun void mpq_init (mpq_t @var{dest_rational})
Initialize @var{dest_rational} and set it to 0/1. Each variable should
normally only be initialized once, or at least cleared out (using the function
@code{mpq_clear}) between each initialization.
@end deftypefun
@deftypefun void mpq_clear (mpq_t @var{rational_number})
Free the space occupied by @var{rational_number}. Make sure to call this
function for all @code{mpq_t} variables when you are done with them.
@end deftypefun
@deftypefun void mpq_set (mpq_t @var{rop}, mpq_t @var{op})
@deftypefunx void mpq_set_z (mpq_t @var{rop}, mpz_t @var{op})
Assign @var{rop} from @var{op}.
@end deftypefun
@deftypefun void mpq_set_ui (mpq_t @var{rop}, unsigned long int @var{op1}, unsigned long int @var{op2})
@deftypefunx void mpq_set_si (mpq_t @var{rop}, signed long int @var{op1}, unsigned long int @var{op2})
Set the value of @var{rop} to @var{op1}/@var{op2}. Note that if @var{op1} and
@var{op2} have common factors, @var{rop} has to be passed to
@code{mpq_canonicalize} before any operations are performed on @var{rop}.
@end deftypefun
@deftypefun void mpq_swap (mpq_t @var{rop1}, mpq_t @var{rop2})
Swap the values @var{rop1} and @var{rop2} efficiently.
@end deftypefun
@node Rational Arithmetic, Comparing Rationals, Initializing Rationals, Rational Number Functions
@comment node-name, next, previous, up
@section Arithmetic Functions
@cindex Rational arithmetic functions
@cindex Arithmetic functions
@deftypefun void mpq_add (mpq_t @var{sum}, mpq_t @var{addend1}, mpq_t @var{addend2})
Set @var{sum} to @var{addend1} + @var{addend2}.
@end deftypefun
@deftypefun void mpq_sub (mpq_t @var{difference}, mpq_t @var{minuend}, mpq_t @var{subtrahend})
Set @var{difference} to @var{minuend} @minus{} @var{subtrahend}.
@end deftypefun
@deftypefun void mpq_mul (mpq_t @var{product}, mpq_t @var{multiplier}, mpq_t @var{multiplicand})
Set @var{product} to @ma{@var{multiplier} @GMPtimes{} @var{multiplicand}}.
@end deftypefun
@deftypefun void mpq_mul_2exp (mpq_t @var{rop}, mpq_t @var{op1}, unsigned long int @var{op2})
Set @var{rop} to @m{@var{op1} \times 2^{op2}, @var{op1} times 2 raised to
@var{op2}}.
@end deftypefun
@deftypefun void mpq_div (mpq_t @var{quotient}, mpq_t @var{dividend}, mpq_t @var{divisor})
@cindex Division functions
Set @var{quotient} to @var{dividend}/@var{divisor}.
@end deftypefun
@deftypefun void mpq_div_2exp (mpq_t @var{rop}, mpq_t @var{op1}, unsigned long int @var{op2})
Set @var{rop} to @m{@var{op1}/2^{op2}, @var{op1} divided by 2 raised to
@var{op2}}.
@end deftypefun
@deftypefun void mpq_neg (mpq_t @var{negated_operand}, mpq_t @var{operand})
Set @var{negated_operand} to @minus{}@var{operand}.
@end deftypefun
@deftypefun void mpq_abs (mpq_t @var{rop}, mpq_t @var{op})
Set @var{rop} to the absolute value of @var{op}.
@end deftypefun
@deftypefun void mpq_inv (mpq_t @var{inverted_number}, mpq_t @var{number})
Set @var{inverted_number} to 1/@var{number}. If the new denominator is
zero, this routine will divide by zero.
@end deftypefun
@node Comparing Rationals, Applying Integer Functions, Rational Arithmetic, Rational Number Functions
@comment node-name, next, previous, up
@section Comparison Functions
@cindex Rational comparison functions
@cindex Comparison functions
@deftypefun int mpq_cmp (mpq_t @var{op1}, mpq_t @var{op2})
Compare @var{op1} and @var{op2}. Return a positive value if @ma{@var{op1} >
@var{op2}}, zero if @ma{@var{op1} = @var{op2}}, and a negative value if
@ma{@var{op1} < @var{op2}}.
To determine if two rationals are equal, @code{mpq_equal} is faster than
@code{mpq_cmp}.
@end deftypefun
@deftypefn Macro int mpq_cmp_ui (mpq_t @var{op1}, unsigned long int @var{num2}, unsigned long int @var{den2})
Compare @var{op1} and @var{num2}/@var{den2}. Return a positive value if
@ma{@var{op1} > @var{num2}/@var{den2}}, zero if @ma{@var{op1} =
@var{num2}/@var{den2}}, and a negative value if @ma{@var{op1} <
@var{num2}/@var{den2}}.
This routine allows that @var{num2} and @var{den2} have common factors.
This function is actually implemented as a macro. It evaluates its
arguments multiple times.
@end deftypefn
@deftypefn Macro int mpq_sgn (mpq_t @var{op})
Return @ma{+1} if @ma{@var{op} > 0}, 0 if @ma{@var{op} = 0}, and @ma{-1} if
@ma{@var{op} < 0}.
This function is actually implemented as a macro. It evaluates its
arguments multiple times.
@end deftypefn
@deftypefun int mpq_equal (mpq_t @var{op1}, mpq_t @var{op2})
Return non-zero if @var{op1} and @var{op2} are equal, zero if they are
non-equal. Although @code{mpq_cmp} can be used for the same purpose, this
function is much faster.
@end deftypefun
@node Applying Integer Functions, I/O of Rationals, Comparing Rationals, Rational Number Functions
@comment node-name, next, previous, up
@section Applying Integer Functions to Rationals
@cindex Rational numerator and denominator
@cindex Numerator and denominator
The set of @code{mpq} functions is quite small. In particular, there are few
functions for either input or output. But there are two macros that allow us
to apply any @code{mpz} function on the numerator or denominator of a rational
number.  If these macros are used to assign to the rational number,
@code{mpq_canonicalize} normally needs to be called afterwards.
@deftypefn Macro mpz_t mpq_numref (mpq_t @var{op})
@deftypefnx Macro mpz_t mpq_denref (mpq_t @var{op})
Return a reference to the numerator and denominator of @var{op}, respectively.
The @code{mpz} functions can be used on the result of these macros.
@end deftypefn
@need 2000
@node I/O of Rationals, Miscellaneous Rational Functions, Applying Integer Functions, Rational Number Functions
@comment node-name, next, previous, up
@section Input and Output Functions
@cindex Rational input and output functions
@cindex Input functions
@cindex Output functions
@cindex I/O functions
Functions that perform input from a stdio stream, and functions that output to
a stdio stream. Passing a @code{NULL} pointer for a @var{stream} argument to
any of these functions will make them read from @code{stdin} and write to
@code{stdout}, respectively.
When using any of these functions, it is a good idea to include @file{stdio.h}
before @file{gmp.h}, since that will allow @file{gmp.h} to define prototypes
for these functions.
@deftypefun size_t mpq_out_str (FILE *@var{stream}, int @var{base}, mpq_t @var{op})
Output @var{op} on stdio stream @var{stream}, as a string of digits in base
@var{base}. The base may vary from 2 to 36. Output is in the form
@samp{num/den} or if the denominator is 1 then just @samp{num}.
Return the number of bytes written, or if an error occurred, return 0.
@end deftypefun
@need 2000
@node Miscellaneous Rational Functions, , I/O of Rationals, Rational Number Functions
@comment node-name, next, previous, up
@section Miscellaneous Functions
@cindex Rational miscellaneous functions
@cindex Miscellaneous rational functions
@deftypefun double mpq_get_d (mpq_t @var{op})
Convert @var{op} to a @code{double}.
@end deftypefun
@deftypefun void mpq_set_d (mpq_t @var{rop}, double @var{op})
@deftypefunx void mpq_set_f (mpq_t @var{rop}, mpf_t @var{op})
Set @var{rop} to the value of @var{op}, without rounding.
@end deftypefun
@deftypefun void mpq_get_num (mpz_t @var{numerator}, mpq_t @var{rational})
@deftypefunx void mpq_get_den (mpz_t @var{denominator}, mpq_t @var{rational})
@deftypefunx void mpq_set_num (mpq_t @var{rational}, mpz_t @var{numerator})
@deftypefunx void mpq_set_den (mpq_t @var{rational}, mpz_t @var{denominator})
Get or set the numerator or denominator of a rational. These functions are
equivalent to calling @code{mpz_set} with an appropriate @code{mpq_numref} or
@code{mpq_denref}.
When an assignment to the numerator and/or denominator could introduce common
factors or if the denominator could become negative, the value must be put
into canonical form using @code{mpq_canonicalize} before any other operations
on that rational.
Note that there's no need to copy a numerator or denominator to an
@code{mpz_t} just to operate on it; all the @code{mpz} functions can be used
with an @code{mpq_numref} or @code{mpq_denref}. When modifying a rational
that way the rule about canonicalizing still applies of course.
@end deftypefun
@node Floating-point Functions, Low-level Functions, Rational Number Functions, Top
@comment node-name, next, previous, up
@chapter Floating-point Functions
@cindex Floating-point functions
@cindex Float functions
@cindex User-defined precision
@cindex Precision of floats
GMP floating point numbers are stored in objects of type @code{mpf_t} and
functions operating on them have an @code{mpf_} prefix.
The mantissa of each float has a user-selectable precision, limited only by
available memory. Each variable has its own precision, which can be increased
or decreased at any time.
The exponent of each float is a fixed precision, one machine word on most
systems. In the current implementation the exponent is a count of limbs, so
for example on a 32-bit system this means a range of roughly
@ma{2^@W{-68719476768}} to @ma{2^@W{68719476736}}. On a 64-bit system this
will be greater.
In each variable the current size of the mantissa data is maintained. This
means that if the mantissa is exactly represented in only a few bits then only
those bits will be used in a calculation, even if the selected precision is
high.
All calculations are performed to the precision of the destination variable.
Each function is defined to calculate with ``infinite precision'' followed by
a truncation to the destination precision, but of course the actual work done
is only what's needed to determine a result under that definition.
The precision set for each variable is actually a minimum value, GMP may
increase it a little to facilitate efficient calculation. Currently this
means rounding up to a whole limb, and then sometimes having a further partial
limb, depending on the high limb of the mantissa. But applications shouldn't
be concerned by such details.
@code{mpf} functions and variables have no special notion of infinity or
not-a-number, and applications must take care not to overflow the exponent or
results will be unpredictable. This might change in a future release.
Note that the @code{mpf} functions are @emph{not} intended as a smooth
extension to IEEE P754 arithmetic. In particular results obtained on one
computer often differ from the results on a computer with a different word
size.
@menu
* Initializing Floats::
* Assigning Floats::
* Simultaneous Float Init & Assign::
* Converting Floats::
* Float Arithmetic::
* Float Comparison::
* I/O of Floats::
* Miscellaneous Float Functions::
@end menu
@node Initializing Floats, Assigning Floats, Floating-point Functions, Floating-point Functions
@comment node-name, next, previous, up
@section Initialization Functions
@cindex Float initialization functions
@cindex Initialization functions
@deftypefun void mpf_set_default_prec (unsigned long int @var{prec})
Set the default precision to be @strong{at least} @var{prec} bits. All
subsequent calls to @code{mpf_init} will use this precision, but previously
initialized variables are unaffected.
@end deftypefun
An @code{mpf_t} object must be initialized before storing the first value in
it. The functions @code{mpf_init} and @code{mpf_init2} are used for that
purpose.
@deftypefun void mpf_init (mpf_t @var{x})
Initialize @var{x} to 0. Normally, a variable should be initialized once only
or at least be cleared, using @code{mpf_clear}, between initializations. The
precision of @var{x} is undefined unless a default precision has already been
established by a call to @code{mpf_set_default_prec}.
@end deftypefun
@deftypefun void mpf_init2 (mpf_t @var{x}, unsigned long int @var{prec})
Initialize @var{x} to 0 and set its precision to be @strong{at least}
@var{prec} bits. Normally, a variable should be initialized once only or at
least be cleared, using @code{mpf_clear}, between initializations.
@end deftypefun
@deftypefun void mpf_clear (mpf_t @var{x})
Free the space occupied by @var{x}. Make sure to call this function for all
@code{mpf_t} variables when you are done with them.
@end deftypefun
@need 2000
Here is an example of how to initialize floating-point variables:
@example
@{
mpf_t x, y;
mpf_init (x); /* use default precision */
mpf_init2 (y, 256); /* precision @emph{at least} 256 bits */
@dots{}
/* Unless the program is about to exit, do ... */
mpf_clear (x);
mpf_clear (y);
@}
@end example
The following three functions are useful for changing the precision during a
calculation. A typical use would be for adjusting the precision gradually in
iterative algorithms like Newton-Raphson, making the computation precision
closely match the actual accurate part of the numbers.
@deftypefun void mpf_set_prec (mpf_t @var{rop}, unsigned long int @var{prec})
Set the precision of @var{rop} to be @strong{at least} @var{prec} bits.
Since changing the precision involves calls to @code{realloc}, this routine
should not be called in a tight loop.
@end deftypefun
@deftypefun {unsigned long int} mpf_get_prec (mpf_t @var{op})
Return the precision actually used for assignments of @var{op}.
@end deftypefun
@deftypefun void mpf_set_prec_raw (mpf_t @var{rop}, unsigned long int @var{prec})
Set the precision of @var{rop} to be @strong{at least} @var{prec} bits. This
is a low-level function that does not change the allocation. The @var{prec}
argument must not be larger than the precision previously returned by
@code{mpf_get_prec}.  It is crucial that the precision of @var{rop} is
ultimately reset to exactly the value @code{mpf_get_prec} returned before
the first @code{mpf_set_prec_raw} call.
@end deftypefun
@need 2000
@node Assigning Floats, Simultaneous Float Init & Assign, Initializing Floats, Floating-point Functions
@comment node-name, next, previous, up
@section Assignment Functions
@cindex Float assignment functions
@cindex Assignment functions
These functions assign new values to already initialized floats
(@pxref{Initializing Floats}).
@deftypefun void mpf_set (mpf_t @var{rop}, mpf_t @var{op})
@deftypefunx void mpf_set_ui (mpf_t @var{rop}, unsigned long int @var{op})
@deftypefunx void mpf_set_si (mpf_t @var{rop}, signed long int @var{op})
@deftypefunx void mpf_set_d (mpf_t @var{rop}, double @var{op})
@deftypefunx void mpf_set_z (mpf_t @var{rop}, mpz_t @var{op})
@deftypefunx void mpf_set_q (mpf_t @var{rop}, mpq_t @var{op})
Set the value of @var{rop} from @var{op}.
@end deftypefun
@deftypefun int mpf_set_str (mpf_t @var{rop}, char *@var{str}, int @var{base})
Set the value of @var{rop} from the string in @var{str}. The string is of the
form @samp{M@@N} or, if the base is 10 or less, alternatively @samp{MeN}.
@samp{M} is the mantissa and @samp{N} is the exponent. The mantissa is always
in the specified base. The exponent is either in the specified base or, if
@var{base} is negative, in decimal.
The argument @var{base} may be in the ranges 2 to 36, or @minus{}36 to
@minus{}2. Negative values are used to specify that the exponent is in
decimal.
Unlike the corresponding @code{mpz} function, the base will not be determined
from the leading characters of the string if @var{base} is 0. This is so that
numbers like @samp{0.23} are not interpreted as octal.
White space is allowed in the string, and is simply ignored. [This is not
really true; white-space is ignored at the beginning of the string and within
the mantissa, but not in other places, such as after a minus sign or in the
exponent. We are considering changing the definition of this function, making
it fail when there is any white-space in the input, since that makes a lot of
sense. Please tell us your opinion about this change. Do you really want it
to accept @nicode{"3 14"} as meaning 314 as it does now?]
This function returns 0 if the entire string is a valid number in base
@var{base}. Otherwise it returns @minus{}1.
@end deftypefun
@deftypefun void mpf_swap (mpf_t @var{rop1}, mpf_t @var{rop2})
Swap @var{rop1} and @var{rop2} efficiently. Both the values and the
precisions of the two variables are swapped.
@end deftypefun
@node Simultaneous Float Init & Assign, Converting Floats, Assigning Floats, Floating-point Functions
@comment node-name, next, previous, up
@section Combined Initialization and Assignment Functions
@cindex Initialization and assignment functions
@cindex Float init and assign functions
For convenience, GMP provides a parallel series of initialize-and-set functions
which initialize the output and then store the value there. These functions'
names have the form @code{mpf_init_set@dots{}}.
Once the float has been initialized by any of the @code{mpf_init_set@dots{}}
functions, it can be used as the source or destination operand for the ordinary
float functions. Don't use an initialize-and-set function on a variable
already initialized!
@deftypefun void mpf_init_set (mpf_t @var{rop}, mpf_t @var{op})
@deftypefunx void mpf_init_set_ui (mpf_t @var{rop}, unsigned long int @var{op})
@deftypefunx void mpf_init_set_si (mpf_t @var{rop}, signed long int @var{op})
@deftypefunx void mpf_init_set_d (mpf_t @var{rop}, double @var{op})
Initialize @var{rop} and set its value from @var{op}.
The precision of @var{rop} will be taken from the active default precision, as
set by @code{mpf_set_default_prec}.
@end deftypefun
@deftypefun int mpf_init_set_str (mpf_t @var{rop}, char *@var{str}, int @var{base})
Initialize @var{rop} and set its value from the string in @var{str}. See
@code{mpf_set_str} above for details on the assignment operation.
Note that @var{rop} is initialized even if an error occurs. (I.e., you have to
call @code{mpf_clear} for it.)
The precision of @var{rop} will be taken from the active default precision, as
set by @code{mpf_set_default_prec}.
@end deftypefun
@node Converting Floats, Float Arithmetic, Simultaneous Float Init & Assign, Floating-point Functions
@comment node-name, next, previous, up
@section Conversion Functions
@cindex Float conversion functions
@cindex Conversion functions
@deftypefun double mpf_get_d (mpf_t @var{op})
Convert @var{op} to a @code{double}.
@end deftypefun
@deftypefun long mpf_get_si (mpf_t @var{op})
@deftypefunx {unsigned long} mpf_get_ui (mpf_t @var{op})
Convert @var{op} to a @code{long} or @code{unsigned long}, truncating any
fraction part. If @var{op} is too big for the return type, the result is
undefined.
See also @code{mpf_fits_slong_p} and @code{mpf_fits_ulong_p}
(@pxref{Miscellaneous Float Functions}).
@end deftypefun
@deftypefun {char *} mpf_get_str (char *@var{str}, mp_exp_t *@var{expptr}, int @var{base}, size_t @var{n_digits}, mpf_t @var{op})
Convert @var{op} to a string of digits in base @var{base}. The base may vary
from 2 to 36. Generate at most @var{n_digits} significant digits, or if
@var{n_digits} is 0, the maximum number of digits accurately representable by
@var{op}.
If @var{str} is @code{NULL}, the string is allocated using the current
allocation function (@pxref{Custom Allocation}). The block will be
@code{strlen(str)+1} bytes, that being exactly enough for the string and
null-terminator.
If @var{str} is not @code{NULL}, it should point to a block of storage large
enough for the mantissa, that being @var{n_digits} + 2 bytes.  The two extra
bytes are for a possible minus sign, and for the null-terminator.
When @var{n_digits} is 0, meaning all significant digits, an application
cannot know the space required in advance, so @var{str} should be
@code{NULL} in that case.
The generated string is a fraction, with an implicit radix point immediately
to the left of the first digit. The applicable exponent is written through
the @var{expptr} pointer. For example, the number 3.1416 would be returned as
string @nicode{"31416"} and exponent 1.
When @var{op} is zero, an empty string is produced and the exponent returned
is 0.
A pointer to the result string is returned, being either the allocated block
or the given @var{str}.
@end deftypefun
@node Float Arithmetic, Float Comparison, Converting Floats, Floating-point Functions
@comment node-name, next, previous, up
@section Arithmetic Functions
@cindex Float arithmetic functions
@cindex Arithmetic functions
@deftypefun void mpf_add (mpf_t @var{rop}, mpf_t @var{op1}, mpf_t @var{op2})
@deftypefunx void mpf_add_ui (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
Set @var{rop} to @ma{@var{op1} + @var{op2}}.
@end deftypefun
@deftypefun void mpf_sub (mpf_t @var{rop}, mpf_t @var{op1}, mpf_t @var{op2})
@deftypefunx void mpf_ui_sub (mpf_t @var{rop}, unsigned long int @var{op1}, mpf_t @var{op2})
@deftypefunx void mpf_sub_ui (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
Set @var{rop} to @var{op1} @minus{} @var{op2}.
@end deftypefun
@deftypefun void mpf_mul (mpf_t @var{rop}, mpf_t @var{op1}, mpf_t @var{op2})
@deftypefunx void mpf_mul_ui (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
Set @var{rop} to @ma{@var{op1} @GMPtimes{} @var{op2}}.
@end deftypefun
Division is undefined if the divisor is zero, and passing a zero divisor to the
divide functions will make these functions intentionally divide by zero. This
lets the user handle arithmetic exceptions in these functions in the same
manner as other arithmetic exceptions.
@deftypefun void mpf_div (mpf_t @var{rop}, mpf_t @var{op1}, mpf_t @var{op2})
@deftypefunx void mpf_ui_div (mpf_t @var{rop}, unsigned long int @var{op1}, mpf_t @var{op2})
@deftypefunx void mpf_div_ui (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
@cindex Division functions
Set @var{rop} to @var{op1}/@var{op2}.
@end deftypefun
@deftypefun void mpf_sqrt (mpf_t @var{rop}, mpf_t @var{op})
@deftypefunx void mpf_sqrt_ui (mpf_t @var{rop}, unsigned long int @var{op})
@cindex Root extraction functions
Set @var{rop} to @m{\sqrt{@var{op}}, the square root of @var{op}}.
@end deftypefun
@deftypefun void mpf_pow_ui (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
@cindex Exponentiation functions
Set @var{rop} to @m{@var{op1}^{op2}, @var{op1} raised to the power @var{op2}}.
@end deftypefun
@deftypefun void mpf_neg (mpf_t @var{rop}, mpf_t @var{op})
Set @var{rop} to @minus{}@var{op}.
@end deftypefun
@deftypefun void mpf_abs (mpf_t @var{rop}, mpf_t @var{op})
Set @var{rop} to the absolute value of @var{op}.
@end deftypefun
@deftypefun void mpf_mul_2exp (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
Set @var{rop} to @m{@var{op1} \times 2^{op2}, @var{op1} times 2 raised to
@var{op2}}.
@end deftypefun
@deftypefun void mpf_div_2exp (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
Set @var{rop} to @m{@var{op1}/2^{op2}, @var{op1} divided by 2 raised to
@var{op2}}.
@end deftypefun
@node Float Comparison, I/O of Floats, Float Arithmetic, Floating-point Functions
@comment node-name, next, previous, up
@section Comparison Functions
@cindex Float comparison functions
@cindex Comparison functions
@deftypefun int mpf_cmp (mpf_t @var{op1}, mpf_t @var{op2})
@deftypefunx int mpf_cmp_ui (mpf_t @var{op1}, unsigned long int @var{op2})
@deftypefunx int mpf_cmp_si (mpf_t @var{op1}, signed long int @var{op2})
Compare @var{op1} and @var{op2}. Return a positive value if @ma{@var{op1} >
@var{op2}}, zero if @ma{@var{op1} = @var{op2}}, and a negative value if
@ma{@var{op1} < @var{op2}}.
@end deftypefun
@deftypefun int mpf_eq (mpf_t @var{op1}, mpf_t @var{op2}, unsigned long int @var{op3})
Return non-zero if the first @var{op3} bits of @var{op1} and @var{op2} are
equal, zero otherwise.  I.e., test whether @var{op1} and @var{op2} are
approximately equal.
@end deftypefun
@deftypefun void mpf_reldiff (mpf_t @var{rop}, mpf_t @var{op1}, mpf_t @var{op2})
Compute the relative difference between @var{op1} and @var{op2} and store the
result in @var{rop}. This is
@m{|@var{op1}-@var{op2}| / @var{op1}, abs(@var{op1}-@var{op2})/@var{op1}}.
@end deftypefun
@deftypefn Macro int mpf_sgn (mpf_t @var{op})
Return @ma{+1} if @ma{@var{op} > 0}, 0 if @ma{@var{op} = 0}, and @ma{-1} if
@ma{@var{op} < 0}.
This function is actually implemented as a macro. It evaluates its arguments
multiple times.
@end deftypefn
@node I/O of Floats, Miscellaneous Float Functions, Float Comparison, Floating-point Functions
@comment node-name, next, previous, up
@section Input and Output Functions
@cindex Float input and output functions
@cindex Input functions
@cindex Output functions
@cindex I/O functions
Functions that perform input from a stdio stream, and functions that output to
a stdio stream. Passing a @code{NULL} pointer for a @var{stream} argument to
any of these functions will make them read from @code{stdin} and write to
@code{stdout}, respectively.
When using any of these functions, it is a good idea to include @file{stdio.h}
before @file{gmp.h}, since that will allow @file{gmp.h} to define prototypes
for these functions.
@deftypefun size_t mpf_out_str (FILE *@var{stream}, int @var{base}, size_t @var{n_digits}, mpf_t @var{op})
Output @var{op} on stdio stream @var{stream}, as a string of digits in base
@var{base}. The base may vary from 2 to 36. Print at most @var{n_digits}
significant digits, or if @var{n_digits} is 0, the maximum number of digits
accurately representable by @var{op}.
In addition to the significant digits, a leading @samp{0.} and a trailing
exponent, in the form @samp{eNNN}, are printed. If @var{base} is greater than
10, @samp{@@} will be used instead of @samp{e} as exponent delimiter.
Return the number of bytes written, or if an error occurred, return 0.
@end deftypefun
@deftypefun size_t mpf_inp_str (mpf_t @var{rop}, FILE *@var{stream}, int @var{base})
Input a string in base @var{base} from stdio stream @var{stream}, and put the
read float in @var{rop}. The string is of the form @samp{M@@N} or, if the base
is 10 or less, alternatively @samp{MeN}. @samp{M} is the mantissa and @samp{N}
is the exponent. The mantissa is always in the specified base. The exponent
is either in the specified base or, if @var{base} is negative, in decimal.
The argument @var{base} may be in the ranges 2 to 36, or @minus{}36 to
@minus{}2. Negative values are used to specify that the exponent is in
decimal.
Unlike the corresponding @code{mpz} function, the base will not be determined
from the leading characters of the string if @var{base} is 0. This is so that
numbers like @samp{0.23} are not interpreted as octal.
Return the number of bytes read, or if an error occurred, return 0.
@end deftypefun
@c @deftypefun void mpf_out_raw (FILE *@var{stream}, mpf_t @var{float})
@c Output @var{float} on stdio stream @var{stream}, in raw binary
@c format. The float is written in a portable format, with 4 bytes of
@c size information, and that many bytes of limbs. Both the size and the
@c limbs are written in decreasing significance order.
@c @end deftypefun
@c @deftypefun void mpf_inp_raw (mpf_t @var{float}, FILE *@var{stream})
@c Input from stdio stream @var{stream} in the format written by
@c @code{mpf_out_raw}, and put the result in @var{float}.
@c @end deftypefun
@node Miscellaneous Float Functions, , I/O of Floats, Floating-point Functions
@comment node-name, next, previous, up
@section Miscellaneous Functions
@cindex Miscellaneous float functions
@cindex Float miscellaneous functions
@deftypefun void mpf_ceil (mpf_t @var{rop}, mpf_t @var{op})
@deftypefunx void mpf_floor (mpf_t @var{rop}, mpf_t @var{op})
@deftypefunx void mpf_trunc (mpf_t @var{rop}, mpf_t @var{op})
Set @var{rop} to @var{op} rounded to an integer. @code{mpf_ceil} rounds to the
next higher integer, @code{mpf_floor} to the next lower, and @code{mpf_trunc}
to the integer towards zero.
@end deftypefun
@deftypefun int mpf_integer_p (mpf_t @var{op})
Return non-zero if @var{op} is an integer.
@end deftypefun
@deftypefun int mpf_fits_ulong_p (mpf_t @var{op})
@deftypefunx int mpf_fits_slong_p (mpf_t @var{op})
@deftypefunx int mpf_fits_uint_p (mpf_t @var{op})
@deftypefunx int mpf_fits_sint_p (mpf_t @var{op})
@deftypefunx int mpf_fits_ushort_p (mpf_t @var{op})
@deftypefunx int mpf_fits_sshort_p (mpf_t @var{op})
Return non-zero if @var{op} would fit in the respective C data type, when
truncated to an integer.
@end deftypefun
@deftypefun void mpf_urandomb (mpf_t @var{rop}, gmp_randstate_t @var{state}, unsigned long int @var{nbits})
Generate a uniformly distributed random float in @var{rop}, such that 0 <=
@var{rop} < 1, with @var{nbits} significant bits in the mantissa.
The variable @var{state} must be initialized by calling one of the
@code{gmp_randinit} functions (@ref{Random State Initialization}) before
invoking this function.
@end deftypefun
@deftypefun void mpf_random2 (mpf_t @var{rop}, mp_size_t @var{max_size}, mp_exp_t @var{exp})
Generate a random float of at most @var{max_size} limbs, with long strings of
zeros and ones in the binary representation. The exponent of the number is in
the interval @minus{}@var{exp} to @var{exp}. This function is useful for
testing functions and algorithms, since such random numbers have proven more
likely to trigger corner-case bugs. Negative random numbers are generated
when @var{max_size} is negative.
@end deftypefun
@c @deftypefun size_t mpf_size (mpf_t @var{op})
@c Return the size of @var{op} measured in number of limbs. If @var{op} is
@c zero, the returned value will be zero. (@xref{Nomenclature}, for an
@c explanation of the concept @dfn{limb}.)
@c
@c @strong{This function is obsolete. It will disappear from future GMP
@c releases.}
@c @end deftypefun
@node Low-level Functions, Random Number Functions, Floating-point Functions, Top
@comment node-name, next, previous, up
@chapter Low-level Functions
@cindex Low-level functions
This chapter describes low-level GMP functions, used to implement the
high-level GMP functions, but also intended for time-critical user code.
These functions start with the prefix @code{mpn_}.
@c 1. Some of these functions clobber input operands.
@c
The @code{mpn} functions are designed to be as fast as possible, @strong{not}
to provide a coherent calling interface. The different functions have somewhat
similar interfaces, but there are variations that make them hard to use. These
functions do as little as possible apart from the real multiple precision
computation, so that no time is spent on things that not all callers need.
A source operand is specified by a pointer to the least significant limb and a
limb count. A destination operand is specified by just a pointer. It is the
responsibility of the caller to ensure that the destination has enough space
for storing the result.
With this way of specifying operands, it is possible to perform computations on
subranges of an argument, and store the result into a subrange of a
destination.
A common requirement for all functions is that each source area needs at least
one limb. No size argument may be zero. Unless otherwise stated, in-place
operations are allowed where source and destination are the same, but not where
they only partly overlap.
The @code{mpn} functions are the base for the implementation of the
@code{mpz_}, @code{mpf_}, and @code{mpq_} functions.
This example adds the number beginning at @var{s1p} and the number beginning at
@var{s2p} and writes the sum at @var{destp}. All areas have @var{n} limbs.
@example
cy = mpn_add_n (destp, s1p, s2p, n);
@end example
@noindent
In the notation used here, a source operand is identified by the pointer to
the least significant limb, and the limb count in braces. For example,
@{@var{s1p}, @var{s1n}@}.
@deftypefun mp_limb_t mpn_add_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
Add @{@var{s1p}, @var{n}@} and @{@var{s2p}, @var{n}@}, and write the @var{n}
least significant limbs of the result to @var{rp}. Return carry, either 0 or
1.
This is the lowest-level function for addition. It is the preferred function
for addition, since it is written in assembly for most CPUs. For addition of
a variable to itself (i.e., @var{s1p} equals @var{s2p}), use @code{mpn_lshift}
with a count of 1 for optimal speed.
@end deftypefun
@deftypefun mp_limb_t mpn_add_1 (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n}, mp_limb_t @var{s2limb})
Add @{@var{s1p}, @var{n}@} and @var{s2limb}, and write the @var{n} least
significant limbs of the result to @var{rp}. Return carry, either 0 or 1.
@end deftypefun
@deftypefun mp_limb_t mpn_add (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, const mp_limb_t *@var{s2p}, mp_size_t @var{s2n})
Add @{@var{s1p}, @var{s1n}@} and @{@var{s2p}, @var{s2n}@}, and write the
@var{s1n} least significant limbs of the result to @var{rp}. Return carry,
either 0 or 1.
This function requires that @var{s1n} is greater than or equal to @var{s2n}.
@end deftypefun
@deftypefun mp_limb_t mpn_sub_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
Subtract @{@var{s2p}, @var{n}@} from @{@var{s1p}, @var{n}@}, and write the
@var{n} least significant limbs of the result to @var{rp}. Return borrow,
either 0 or 1.
This is the lowest-level function for subtraction. It is the preferred
function for subtraction, since it is written in assembly for most CPUs.
@end deftypefun
@deftypefun mp_limb_t mpn_sub_1 (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n}, mp_limb_t @var{s2limb})
Subtract @var{s2limb} from @{@var{s1p}, @var{n}@}, and write the @var{n} least
significant limbs of the result to @var{rp}. Return borrow, either 0 or 1.
@end deftypefun
@deftypefun mp_limb_t mpn_sub (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, const mp_limb_t *@var{s2p}, mp_size_t @var{s2n})
Subtract @{@var{s2p}, @var{s2n}@} from @{@var{s1p}, @var{s1n}@}, and write the
@var{s1n} least significant limbs of the result to @var{rp}. Return borrow,
either 0 or 1.
This function requires that @var{s1n} is greater than or equal to
@var{s2n}.
@end deftypefun
@deftypefun void mpn_mul_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
Multiply @{@var{s1p}, @var{n}@} and @{@var{s2p}, @var{n}@}, and write the
2*@var{n}-limb result to @var{rp}.
The destination has to have space for 2*@var{n} limbs, even if the product's
most significant limb is zero.
@end deftypefun
@deftypefun mp_limb_t mpn_mul_1 (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n}, mp_limb_t @var{s2limb})
Multiply @{@var{s1p}, @var{n}@} and @var{s2limb}, and write the @var{n} least
significant limbs of the product to @var{rp}. Return the most significant limb
of the product.
This is a low-level function that is a building block for general
multiplication as well as other operations in GMP. It is written in assembly
for most CPUs.
Don't call this function if @var{s2limb} is a power of 2; use @code{mpn_lshift}
with a count equal to the logarithm of @var{s2limb} instead, for optimal speed.
@end deftypefun
@deftypefun mp_limb_t mpn_addmul_1 (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n}, mp_limb_t @var{s2limb})
Multiply @{@var{s1p}, @var{n}@} and @var{s2limb}, and add the @var{n} least
significant limbs of the product to @{@var{rp}, @var{n}@} and write the result
to @var{rp}. Return the most significant limb of the product, plus carry-out
from the addition.
This is a low-level function that is a building block for general
multiplication as well as other operations in GMP. It is written in assembly
for most CPUs.
@end deftypefun
@deftypefun mp_limb_t mpn_submul_1 (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n}, mp_limb_t @var{s2limb})
Multiply @{@var{s1p}, @var{n}@} and @var{s2limb}, and subtract the @var{n}
least significant limbs of the product from @{@var{rp}, @var{n}@} and write the
result to @var{rp}. Return the most significant limb of the product, minus
borrow-out from the subtraction.
This is a low-level function that is a building block for general
multiplication and division as well as other operations in GMP. It is written
in assembly for most CPUs.
@end deftypefun
@deftypefun mp_limb_t mpn_mul (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, const mp_limb_t *@var{s2p}, mp_size_t @var{s2n})
Multiply @{@var{s1p}, @var{s1n}@} and @{@var{s2p}, @var{s2n}@}, and write the
result to @var{rp}. Return the most significant limb of the result.
The destination has to have space for @var{s1n} + @var{s2n} limbs, even if the
result might be one limb smaller.
This function requires that @var{s1n} is greater than or equal to
@var{s2n}. The destination must be distinct from both input operands.
@end deftypefun
@deftypefun void mpn_tdiv_qr (mp_limb_t *@var{qp}, mp_limb_t *@var{rp}, mp_size_t @var{qxn}, const mp_limb_t *@var{np}, mp_size_t @var{nn}, const mp_limb_t *@var{dp}, mp_size_t @var{dn})
Divide @{@var{np}, @var{nn}@} by @{@var{dp}, @var{dn}@} and put the quotient
at @{@var{qp}, @var{nn}@minus{}@var{dn}+1@} and the remainder at @{@var{rp},
@var{dn}@}. The quotient is rounded towards 0.
No overlap is permitted between arguments. @var{nn} must be greater than or
equal to @var{dn}. The most significant limb of @var{dp} must be non-zero.
The @var{qxn} operand must be zero.
@comment FIXME: Relax overlap requirements!
@end deftypefun
@deftypefun mp_limb_t mpn_divrem (mp_limb_t *@var{r1p}, mp_size_t @var{qxn}, mp_limb_t *@var{rs2p}, mp_size_t @var{rs2n}, const mp_limb_t *@var{s3p}, mp_size_t @var{s3n})
[This function is obsolete. Please call @code{mpn_tdiv_qr} instead for best
performance.]
Divide @{@var{rs2p}, @var{rs2n}@} by @{@var{s3p}, @var{s3n}@}, and write the
quotient at @var{r1p}, with the exception of the most significant limb, which
is returned. The remainder replaces the dividend at @var{rs2p}; it will be
@var{s3n} limbs long (i.e., as many limbs as the divisor).
In addition to an integer quotient, @var{qxn} fraction limbs are developed, and
stored after the integral limbs. For most usages, @var{qxn} will be zero.
It is required that @var{rs2n} is greater than or equal to @var{s3n}. It is
required that the most significant bit of the divisor is set.
If the quotient is not needed, pass @var{rs2p} + @var{s3n} as @var{r1p}. Aside
from that special case, no overlap between arguments is permitted.
Return the most significant limb of the quotient, either 0 or 1.
The area at @var{r1p} needs to be @var{rs2n} @minus{} @var{s3n} + @var{qxn}
limbs large.
@end deftypefun
@deftypefn Function mp_limb_t mpn_divrem_1 (mp_limb_t *@var{r1p}, mp_size_t @var{qxn}, @w{mp_limb_t *@var{s2p}}, mp_size_t @var{s2n}, mp_limb_t @var{s3limb})
@deftypefnx Macro mp_limb_t mpn_divmod_1 (mp_limb_t *@var{r1p}, mp_limb_t *@var{s2p}, @w{mp_size_t @var{s2n}}, @w{mp_limb_t @var{s3limb}})
Divide @{@var{s2p}, @var{s2n}@} by @var{s3limb}, and write the quotient at
@var{r1p}. Return the remainder.
The integer quotient is written to @{@var{r1p}+@var{qxn}, @var{s2n}@} and in
addition @var{qxn} fraction limbs are developed and written to @{@var{r1p},
@var{qxn}@}. Either or both @var{s2n} and @var{qxn} can be zero. For most
usages, @var{qxn} will be zero.
@code{mpn_divmod_1} exists for upward source compatibility and is simply a
macro calling @code{mpn_divrem_1} with a @var{qxn} of 0.
The areas at @var{r1p} and @var{s2p} have to be identical or completely
separate, not partially overlapping.
@end deftypefn
@deftypefun mp_limb_t mpn_divmod (mp_limb_t *@var{r1p}, mp_limb_t *@var{rs2p}, mp_size_t @var{rs2n}, const mp_limb_t *@var{s3p}, mp_size_t @var{s3n})
[This function is obsolete. Please call @code{mpn_tdiv_qr} instead for best
performance.]
@end deftypefun
@deftypefn Macro mp_limb_t mpn_divexact_by3 (mp_limb_t *@var{rp}, mp_limb_t *@var{sp}, @w{mp_size_t @var{n}})
@deftypefnx Function mp_limb_t mpn_divexact_by3c (mp_limb_t *@var{rp}, mp_limb_t *@var{sp}, @w{mp_size_t @var{n}}, mp_limb_t @var{carry})
Divide @{@var{sp}, @var{n}@} by 3, expecting it to divide exactly, and writing
the result to @{@var{rp}, @var{n}@}. If 3 divides exactly, the return value is
zero and the result is the quotient. If not, the return value is non-zero and
the result won't be anything useful.
@code{mpn_divexact_by3c} takes an initial carry parameter, which can be the
return value from a previous call, so a large calculation can be done piece by
piece from low to high. @code{mpn_divexact_by3} is simply a macro calling
@code{mpn_divexact_by3c} with a 0 carry parameter.
These routines use a multiply-by-inverse and will be faster than
@code{mpn_divrem_1} on CPUs with fast multiplication but slow division.
The source @ma{a}, result @ma{q}, size @ma{n}, initial carry @ma{i}, and
return value @ma{c} satisfy @m{cb^n+a-i=3q, c*b^n + a-i = 3*q}, where
@m{b=2\GMPraise{@code{mp\_bits\_per\_limb}}, b=2^mp_bits_per_limb}. The
return @ma{c} is always 0, 1 or 2, and the initial carry @ma{i} must also be
0, 1 or 2 (these are both borrows really). When @ma{c=0} clearly
@ma{q=(a-i)/3}. When @m{c \neq 0, c!=0}, the remainder @ma{(a-i) @bmod{} 3}
is given by @ma{3-c}, because @ma{b @equiv{} 1 @bmod{} 3} (when
@code{mp_bits_per_limb} is even, which is always so currently).
@end deftypefn
@deftypefun mp_limb_t mpn_mod_1 (mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, mp_limb_t @var{s2limb})
Divide @{@var{s1p}, @var{s1n}@} by @var{s2limb}, and return the remainder.
@var{s1n} can be zero.
@end deftypefun
@deftypefun mp_limb_t mpn_bdivmod (mp_limb_t *@var{rp}, mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, const mp_limb_t *@var{s2p}, mp_size_t @var{s2n}, unsigned long int @var{d})
The function puts the low [@var{d}/@var{BITS_PER_MP_LIMB}] limbs of @var{q} =
@{@var{s1p}, @var{s1n}@}/@{@var{s2p}, @var{s2n}@} mod 2^@var{d} at @var{rp},
and returns the high @var{d} mod @var{BITS_PER_MP_LIMB} bits of @var{q}.
@{@var{s1p}, @var{s1n}@} - @var{q} * @{@var{s2p}, @var{s2n}@} mod
2^(@var{s1n}*@var{BITS_PER_MP_LIMB}) is placed at @var{s1p}. Since the low
[@var{d}/@var{BITS_PER_MP_LIMB}] limbs of this difference are zero, it is
possible to overwrite the low limbs at @var{s1p} with this difference, provided
@var{rp} <= @var{s1p}.
This function requires that @var{s1n} * @var{BITS_PER_MP_LIMB} >= @var{d}, and
that @{@var{s2p}, @var{s2n}@} is odd.
@strong{This interface is preliminary. It might change incompatibly in future
revisions.}
@end deftypefun
@deftypefun mp_limb_t mpn_lshift (mp_limb_t *@var{rp}, const mp_limb_t *@var{sp}, mp_size_t @var{n}, unsigned int @var{count})
Shift @{@var{sp}, @var{n}@} @var{count} bits to the left, and write the @var{n}
least significant limbs of the result to @var{rp}. @var{count} must be in the
range 1 to @code{mp_bits_per_limb}@minus{}1. The bits shifted out to the left
are returned.
Overlapping of the destination space and the source space is allowed in this
function, provided @var{rp} >= @var{sp}.
This function is written in assembly for most CPUs.
@end deftypefun
@deftypefun mp_limb_t mpn_rshift (mp_limb_t *@var{rp}, const mp_limb_t *@var{sp}, mp_size_t @var{n}, unsigned int @var{count})
Shift @{@var{sp}, @var{n}@} @var{count} bits to the right, and write the
@var{n} most significant limbs of the result to @var{rp}. @var{count} must be
in the range 1 to @code{mp_bits_per_limb}@minus{}1. The bits shifted out to
the right are returned.
Overlapping of the destination space and the source space is allowed in this
function, provided @var{rp} <= @var{sp}.
This function is written in assembly for most CPUs.
@end deftypefun
@deftypefun int mpn_cmp (const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
Compare @{@var{s1p}, @var{n}@} and @{@var{s2p}, @var{n}@} and return a positive
value if the first operand is greater, 0 if they are equal, and a negative
value if the first operand is smaller.
@end deftypefun
@deftypefun mp_size_t mpn_gcd (mp_limb_t *@var{rp}, mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, mp_limb_t *@var{s2p}, mp_size_t @var{s2n})
Set @{@var{rp}, @var{retval}@} to the greatest common divisor of @{@var{s1p},
@var{s1n}@} and @{@var{s2p}, @var{s2n}@}. The result can be up to @var{s2n}
limbs, the return value is the actual number produced. Both source operands
are destroyed.
@{@var{s1p}, @var{s1n}@} must have at least as many bits as @{@var{s2p},
@var{s2n}@}. @{@var{s2p}, @var{s2n}@} must be odd. Both operands must have
non-zero most significant limbs.
@end deftypefun
@deftypefun mp_limb_t mpn_gcd_1 (const mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, mp_limb_t @var{s2limb})
Return the greatest common divisor of @{@var{s1p}, @var{s1n}@} and
@var{s2limb}. Both operands must be non-zero.
@end deftypefun
@deftypefun mp_size_t mpn_gcdext (mp_limb_t *@var{r1p}, mp_limb_t *@var{r2p}, mp_size_t *@var{r2n}, mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, mp_limb_t *@var{s2p}, mp_size_t @var{s2n})
Compute the greatest common divisor of @{@var{s1p}, @var{s1n}@} and
@{@var{s2p}, @var{s2n}@}. Store the gcd at @{@var{r1p}, @var{retval}@}.
Store the first cofactor at @{@var{r2p}, *@var{r2n}@}. If the cofactor is
negative, *@var{r2n} is negative. @var{r1p} and @var{r2p} should each have
room for @var{s1n} limbs, the return value and value stored through @var{r2n}
indicate the actual number produced.
@{@var{s1p}, @var{s1n}@} must be greater than or equal to @{@var{s2p},
@var{s2n}@}. Both operands must be non-zero. Both source regions are
destroyed, plus one limb past the end of each.
@end deftypefun
@deftypefun mp_size_t mpn_sqrtrem (mp_limb_t *@var{r1p}, mp_limb_t *@var{r2p}, const mp_limb_t *@var{sp}, mp_size_t @var{n})
Compute the square root of @{@var{sp}, @var{n}@} and put the result at
@{@var{r1p}, @m{\lceil@var{n}/2\rceil, ceil(@var{n}/2)}@} and the remainder at
@{@var{r2p}, @var{retval}@}. @var{r2p} needs space for @var{n} limbs, but the
return value indicates how many are produced.
The most significant limb of @var{sp} must be non-zero. The areas
@{@var{r1p}, @m{\lceil@var{n}/2\rceil, ceil(@var{n}/2)}@} and @{@var{sp},
@var{n}@} must be completely separate. The areas @{@var{r2p}, @var{n}@} and
@{@var{sp}, @var{n}@} must be either identical or completely separate.
If the remainder is not wanted then @var{r2p} can be @code{NULL}, and in this
case the return value is zero or non-zero according to whether the remainder
would have been zero or non-zero.
A return value of zero indicates a perfect square. See also
@code{mpz_perfect_square_p}.
@end deftypefun
@deftypefun mp_size_t mpn_get_str (unsigned char *@var{str}, int @var{base}, mp_limb_t *@var{s1p}, mp_size_t @var{s1n})
Convert @{@var{s1p}, @var{s1n}@} to a raw unsigned char array in base
@var{base}. There may be leading zeros in the string. The string is not in
ASCII; to convert it to printable format, add the ASCII codes for @samp{0} or
@samp{A}, depending on the base and range.
The area @{@var{s1p}, @var{s1n}+1@} is clobbered.
Return the number of characters in @var{str}.
The area at @var{str} has to have space for the largest possible number
represented by a @var{s1n} long limb array, plus one extra character.
@end deftypefun
@deftypefun mp_size_t mpn_set_str (mp_limb_t *@var{r1p}, const char *@var{str}, size_t @var{strsize}, int @var{base})
Convert the raw unsigned char array at @var{str} of length @var{strsize} to a
limb array. The base of @var{str} is @var{base}.
Return the number of limbs stored in @var{r1p}.
@end deftypefun
@deftypefun {unsigned long int} mpn_scan0 (const mp_limb_t *@var{s1p}, unsigned long int @var{bit})
Scan @var{s1p} from bit position @var{bit} for the next clear bit.
It is required that there be a clear bit within the area at @var{s1p} at or
beyond bit position @var{bit}, so that the function has something to return.
@end deftypefun
@deftypefun {unsigned long int} mpn_scan1 (const mp_limb_t *@var{s1p}, unsigned long int @var{bit})
Scan @var{s1p} from bit position @var{bit} for the next set bit.
It is required that there be a set bit within the area at @var{s1p} at or
beyond bit position @var{bit}, so that the function has something to return.
@end deftypefun
@deftypefun void mpn_random (mp_limb_t *@var{r1p}, mp_size_t @var{r1n})
@deftypefunx void mpn_random2 (mp_limb_t *@var{r1p}, mp_size_t @var{r1n})
Generate a random number of length @var{r1n} and store it at @var{r1p}. The
most significant limb is always non-zero. @code{mpn_random} generates
uniformly distributed limb data, @code{mpn_random2} generates long strings of
zeros and ones in the binary representation.
@code{mpn_random2} is intended for testing the correctness of the @code{mpn}
routines.
@end deftypefun
@deftypefun {unsigned long int} mpn_popcount (const mp_limb_t *@var{s1p}, mp_size_t @var{n})
Count the number of set bits in @{@var{s1p}, @var{n}@}.
@end deftypefun
@deftypefun {unsigned long int} mpn_hamdist (const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
Compute the hamming distance between @{@var{s1p}, @var{n}@} and @{@var{s2p},
@var{n}@}.
@end deftypefun
@deftypefun int mpn_perfect_square_p (const mp_limb_t *@var{s1p}, mp_size_t @var{n})
Return non-zero iff @{@var{s1p}, @var{n}@} is a perfect square.
@end deftypefun
@node Random Number Functions, BSD Compatible Functions, Low-level Functions, Top
@chapter Random Number Functions
@cindex Random number functions
There are two groups of random number functions in GMP: older functions that
call C library random number generators, rely on global state, and aren't very
random; and newer functions that don't have these problems. The newer
functions are self-contained: they accept a state parameter and generate good
random numbers.
The random state parameter is of type @code{gmp_randstate_t}. It must be
initialized by a call to one of the @code{gmp_randinit} functions (@ref{Random
State Initialization}). The initial seed is set using one of the
@code{gmp_randseed} functions (@ref{Random State Initialization}).
The size of the seed determines the number of different random number
sequences that can be generated. The ``quality'' of the seed is the
randomness of a given seed compared to the previous seed used; this affects
the randomness of separate number sequences.
The method for choosing a seed is critical if the generated numbers are to be
used for important applications, such as generating cryptographic keys.
The traditional method is to use the current system time for seeding, but some
care needs to be taken. If an application seeds the random functions very
often and the resolution of the system clock is low, then the same sequence of
numbers might be generated until the clock ticks over. Furthermore, the
current system time is quite easy to guess, so if unpredictability is required
then the time should definitely not be the only source for seed values.
On some systems there's a special device @file{/dev/random} which provides
random data better suited for use as a seed.
The functions that actually generate random numbers are documented in
@ref{Miscellaneous Integer Functions} and @ref{Miscellaneous Float Functions}.
@menu
* Random State Initialization:: How to initialize a random state.
@end menu
@node Random State Initialization, , Random Number Functions, Random Number Functions
@section Random State Initialization
@cindex Random number state
@deftypefun void gmp_randinit (gmp_randstate_t @var{state}, gmp_randalg_t @var{alg}, ...)
Initialize @var{state}, for the algorithm indicated by @var{alg}. Currently
only one algorithm is supported:
@itemize @minus
@item @code{GMP_RAND_ALG_LC} --- Linear congruential.
A fast generator defined by @ma{X = (aX + c) @bmod m}.
A third argument @var{size} of type @code{unsigned long int} is required.
This is the size of the largest good quality random number to be generated,
expressed in number of bits. If the random generation functions are asked for
a bigger random number then two or more numbers of @var{size} bits will be
generated and concatenated, resulting in a ``bad'' random number. But this
can be used to generate big random numbers relatively cheaply if the quality
of randomness isn't of great importance.
Parameters @ma{a}, @ma{c}, and @ma{m} are chosen from a table where the
modulus @ma{m} is a power of 2 and the multiplier is congruent to 5 (mod 8).
The choice is based on the @var{size} parameter. The maximum @var{size}
supported by the table is 128. If you need bigger random numbers, use your
own scheme and call one of the other @code{gmp_randinit} functions.
@ignore
@item @code{GMP_RAND_ALG_BBS} --- Blum, Blum, and Shub.
@end ignore
@end itemize
If @var{alg} is 0 or @code{GMP_RAND_ALG_DEFAULT}, the default algorithm is
used, this being @code{GMP_RAND_ALG_LC} described above.
@code{gmp_randinit} may set the following bits in @code{gmp_errno}:
@itemize
@item @code{GMP_ERROR_UNSUPPORTED_ARGUMENT} --- @var{alg} is unsupported
@item @code{GMP_ERROR_INVALID_ARGUMENT} --- @var{size} is too big
@end itemize
@end deftypefun
@c Not yet in the library.
@ignore
@deftypefun void gmp_randinit_lc (gmp_randstate_t @var{state}, mpz_t @var{a}, unsigned long int @var{c}, mpz_t @var{m})
Initialize @var{state} for a linear congruential scheme @m{X = (@var{a}X +
@var{c}) @bmod @var{m}, X = (@var{a}*X + @var{c}) mod 2^@var{m}}.
@end deftypefun
@end ignore
@deftypefun void gmp_randinit_lc_2exp (gmp_randstate_t @var{state}, mpz_t @var{a}, @w{unsigned long int @var{c}}, @w{unsigned long int @var{m2exp}})
Initialize @var{state} for a linear congruential scheme @m{X = (@var{a}X +
@var{c}) @bmod 2^{m2exp}, X = (@var{a}*X + @var{c}) mod 2^@var{m2exp}}.
The low bits of random numbers from this scheme are not very random, so the
low half of each number generated is discarded. This should be taken into
account when choosing @var{m2exp}.
@end deftypefun
@deftypefun void gmp_randseed (gmp_randstate_t @var{state}, mpz_t @var{seed})
@deftypefunx void gmp_randseed_ui (gmp_randstate_t @var{state}, @w{unsigned long int @var{seed}})
Set an initial seed value into @var{state}.
@end deftypefun
@deftypefun void gmp_randclear (gmp_randstate_t @var{state})
Free all memory occupied by @var{state}. Make sure to call this function for
all @code{gmp_randstate_t} variables when you are done with them.
@end deftypefun
@node BSD Compatible Functions, Custom Allocation, Random Number Functions, Top
@comment node-name, next, previous, up
@chapter Berkeley MP Compatible Functions
@cindex Berkeley MP compatible functions
@cindex BSD MP compatible functions
These functions are intended to be fully compatible with the Berkeley MP
library which is available on many BSD derived U*ix systems. The
@samp{--enable-mpbsd} option must be used when building GNU MP to make these
available (@pxref{Installing GMP}).
The original Berkeley MP library has a usage restriction: you cannot use the
same variable as both source and destination in a single function call. The
compatible functions in GNU MP do not share this restriction---inputs and
outputs may overlap.
It is not recommended that new programs be written using these functions.
Apart from the incomplete set of functions, the interface for initializing
@code{MINT} objects is more error prone, and the @code{pow} function collides
with @code{pow} in @file{libm.a}.
@cindex @file{mp.h}
Include the header @file{mp.h} to get the definition of the necessary types and
functions. If you are on a BSD derived system, make sure to include GNU
@file{mp.h} if you are going to link the GNU @file{libmp.a} to your program.
This means that you probably need to give the @samp{-I<dir>} option to the
compiler, where @samp{<dir>} is the directory where you have GNU @file{mp.h}.
@deftypefun {MINT *} itom (signed short int @var{initial_value})
Allocate an integer consisting of a @code{MINT} object and dynamic limb space.
Initialize the integer to @var{initial_value}. Return a pointer to the
@code{MINT} object.
@end deftypefun
@deftypefun {MINT *} xtom (char *@var{initial_value})
Allocate an integer consisting of a @code{MINT} object and dynamic limb space.
Initialize the integer from @var{initial_value}, a hexadecimal,
null-terminated C string. Return a pointer to the @code{MINT} object.
@end deftypefun
@deftypefun void move (MINT *@var{src}, MINT *@var{dest})
Set @var{dest} to @var{src} by copying. Both variables must be previously
initialized.
@end deftypefun
@deftypefun void madd (MINT *@var{src_1}, MINT *@var{src_2}, MINT *@var{destination})
Add @var{src_1} and @var{src_2} and put the sum in @var{destination}.
@end deftypefun
@deftypefun void msub (MINT *@var{src_1}, MINT *@var{src_2}, MINT *@var{destination})
Subtract @var{src_2} from @var{src_1} and put the difference in
@var{destination}.
@end deftypefun
@deftypefun void mult (MINT *@var{src_1}, MINT *@var{src_2}, MINT *@var{destination})
Multiply @var{src_1} and @var{src_2} and put the product in @var{destination}.
@end deftypefun
@deftypefun void mdiv (MINT *@var{dividend}, MINT *@var{divisor}, MINT *@var{quotient}, MINT *@var{remainder})
@deftypefunx void sdiv (MINT *@var{dividend}, signed short int @var{divisor}, MINT *@var{quotient}, signed short int *@var{remainder})
Set @var{quotient} to @var{dividend}/@var{divisor}, and @var{remainder} to
@var{dividend} mod @var{divisor}. The quotient is rounded towards zero; the
remainder has the same sign as the dividend unless it is zero.
Some implementations of these functions work differently---or not at all---for
negative arguments.
@end deftypefun
@deftypefun void msqrt (MINT *@var{op}, MINT *@var{root}, MINT *@var{remainder})
Set @var{root} to @m{\lfloor\sqrt{@var{op}}\rfloor, the truncated integer part
of the square root of @var{op}}, like @code{mpz_sqrt}. Set @var{remainder} to
@m{(@var{op} - @var{root}^2), @var{op}@minus{}@var{root}*@var{root}}, i.e.
zero if @var{op} is a perfect square.
If @var{root} and @var{remainder} are the same variable, the results are
undefined.
@end deftypefun
@deftypefun void pow (MINT *@var{base}, MINT *@var{exp}, MINT *@var{mod}, MINT *@var{dest})
Set @var{dest} to (@var{base} raised to @var{exp}) modulo @var{mod}.
@end deftypefun
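What @code{pow} computes can be sketched with native integers using binary
(square-and-multiply) exponentiation; @code{powmod} here is an illustrative
stand-in, not a library function, and assumes the modulus fits in 32 bits so
the intermediate products fit in 64 bits.

```c
#include <stdint.h>

/* A sketch of what pow() computes, on native integers rather than
   MINTs: binary (square-and-multiply) exponentiation modulo `mod'.
   Assumes mod fits in 32 bits so products fit in 64 bits.  */
uint64_t
powmod (uint64_t base, uint64_t exp, uint64_t mod)
{
  uint64_t result = 1 % mod;    /* handles mod == 1 */
  base %= mod;
  while (exp != 0)
    {
      if (exp & 1)
        result = (result * base) % mod;
      base = (base * base) % mod;
      exp >>= 1;
    }
  return result;
}
```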
@deftypefun void rpow (MINT *@var{base}, signed short int @var{exp}, MINT *@var{dest})
Set @var{dest} to @var{base} raised to @var{exp}.
@end deftypefun
@deftypefun void gcd (MINT *@var{op1}, MINT *@var{op2}, MINT *@var{res})
Set @var{res} to the greatest common divisor of @var{op1} and @var{op2}.
@end deftypefun
@deftypefun int mcmp (MINT *@var{op1}, MINT *@var{op2})
Compare @var{op1} and @var{op2}. Return a positive value if @var{op1} >
@var{op2}, zero if @var{op1} = @var{op2}, and a negative value if @var{op1} <
@var{op2}.
@end deftypefun
@deftypefun void min (MINT *@var{dest})
Input a decimal string from @code{stdin}, and put the read integer in
@var{dest}. SPC and TAB are allowed in the number string, and are ignored.
@end deftypefun
@deftypefun void mout (MINT *@var{src})
Output @var{src} to @code{stdout}, as a decimal string. Also output a newline.
@end deftypefun
@deftypefun {char *} mtox (MINT *@var{op})
Convert @var{op} to a hexadecimal string, and return a pointer to the string.
The returned string is allocated using the default memory allocation function,
@code{malloc} by default.
@end deftypefun
@deftypefun void mfree (MINT *@var{op})
De-allocate the space used by @var{op}. @strong{This function should only be
passed a value returned by @code{itom} or @code{xtom}.}
@end deftypefun
@node Custom Allocation, Algorithms, BSD Compatible Functions, Top
@comment node-name, next, previous, up
@chapter Custom Allocation
@cindex Custom allocation
@cindex Memory allocation
@cindex Allocation of memory
By default GMP uses @code{malloc}, @code{realloc} and @code{free} for memory
allocation. If @code{malloc} or @code{realloc} fails, GMP prints a message to
the standard error output and terminates the program.
Some applications might want to allocate memory in other ways, or might not
want a fatal error when no more is available. To accomplish this you can
specify alternative memory allocation functions.
This feature is available in the Berkeley compatibility library as well as the
main GMP library.
@deftypefun void mp_set_memory_functions (@* void *(*@var{alloc_func_ptr}) (size_t), @* void *(*@var{realloc_func_ptr}) (void *, size_t, size_t), @* void (*@var{free_func_ptr}) (void *, size_t))
Replace the current allocation functions with those given by the arguments.
If an argument
is @code{NULL}, the corresponding default function is retained.
@strong{Be sure to call this function only when there are no active GMP
objects allocated using the previous memory functions! Usually that means
calling this before any other GMP function.}
@end deftypefun
The functions you supply should fit the following declarations:
@deftypefun {void *} allocate_function (size_t @var{alloc_size})
This function should return a pointer to newly allocated space with at least
@var{alloc_size} storage units.
@end deftypefun
@deftypefun {void *} reallocate_function (void *@var{ptr}, size_t @var{old_size}, size_t @var{new_size})
This function should return a pointer to newly allocated space of at least
@var{new_size} storage units, after copying at least the smaller of
@var{old_size} and @var{new_size} storage units from @var{ptr}. It should
also de-allocate the space at @var{ptr}.
You can assume that the space at @var{ptr} was formerly returned from
@code{allocate_function} or @code{reallocate_function}, for a request for
@var{old_size} storage units. It's possible @var{new_size} will be smaller
than @var{old_size}. @var{ptr} will never be @code{NULL}.
@end deftypefun
@deftypefun void deallocate_function (void *@var{ptr}, size_t @var{size})
De-allocate the space pointed to by @var{ptr}.
You can assume that the space at @var{ptr} was formerly returned from
@code{allocate_function} or @code{reallocate_function}, from a request for
@var{size} storage units.
@end deftypefun
A @dfn{storage unit} is the unit used by the @code{sizeof} operator, normally
an 8-bit byte.
Note that no error return is allowed from these functions. They must perform
the specified operation, or take their own fatal error action (possibly after
attempting a garbage collect or whatever). Getting a different fatal error
action is one good use for setting new allocation functions.
The @var{old_size} parameters to @code{reallocate_function} and
@code{deallocate_function} are passed for convenience, but of course can be
ignored if not needed. The default functions using @code{malloc} for instance
don't use them.
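As a sketch, the following functions fit the declarations above and
reproduce the default fatal-error behaviour. With GMP they would be
installed by calling @code{mp_set_memory_functions (my_allocate,
my_reallocate, my_deallocate)} before any other GMP function; the @code{my_}
names are illustrative only.

```c
#include <stdio.h>
#include <stdlib.h>

/* Replacement functions fitting the declarations above.  No error
   return is allowed, so on failure they take a fatal action of their
   own choosing.  */

void *
my_allocate (size_t alloc_size)
{
  void *p = malloc (alloc_size);
  if (p == NULL)
    {
      fprintf (stderr, "virtual memory exhausted (%lu bytes)\n",
               (unsigned long) alloc_size);
      abort ();
    }
  return p;
}

void *
my_reallocate (void *ptr, size_t old_size, size_t new_size)
{
  void *p;
  (void) old_size;              /* passed for convenience, unused here */
  p = realloc (ptr, new_size);  /* copies min(old_size,new_size) */
  if (p == NULL)
    {
      fprintf (stderr, "virtual memory exhausted (%lu bytes)\n",
               (unsigned long) new_size);
      abort ();
    }
  return p;
}

void
my_deallocate (void *ptr, size_t size)
{
  (void) size;                  /* unused, like the malloc-based defaults */
  free (ptr);
}
```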
@node Algorithms, Contributors, Custom Allocation, Top
@chapter Algorithms
@cindex Algorithms
This chapter is an introduction to some of the algorithms used for various GMP
operations. The code is likely to be hard to understand without knowing
something about the algorithms.
Some GMP internals are mentioned, but applications that expect to be
compatible with future GMP releases should take care to use only the
documented functions.
@menu
* Multiplication Algorithms::
* Division Algorithms::
* Greatest Common Divisor Algorithms::
* Powering Algorithms::
* Root Extraction Algorithms::
* Radix Conversion Algorithms::
* Other Algorithms::
* Assembler Coding::
@end menu
@node Multiplication Algorithms, Division Algorithms, Algorithms, Algorithms
@section Multiplication
@cindex Multiplication algorithms
N@cross{}N limb multiplications and squares are done using one of four
algorithms, as the size N increases.
@c @display doesn't seem to work around @multitable in tex
@iftex
{@advance@leftskip by @lispnarrowing@noindent
@multitable {KaratsubaMMM} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
@item Algorithm @tab Threshold
@item Basecase @tab (none)
@item Karatsuba @tab @code{KARATSUBA_MUL_THRESHOLD}
@item Toom-3 @tab @code{TOOM3_MUL_THRESHOLD}
@item FFT @tab @code{FFT_MUL_THRESHOLD}
@end multitable
@par}
@end iftex
@ifnottex
@display
@multitable {KaratsubaMMM} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
@item Algorithm @tab Threshold
@item Basecase @tab (none)
@item Karatsuba @tab @nicode{KARATSUBA_MUL_THRESHOLD}
@item Toom-3 @tab @nicode{TOOM3_MUL_THRESHOLD}
@item FFT @tab @nicode{FFT_MUL_THRESHOLD}
@end multitable
@end display
@end ifnottex
Similarly for squaring, with the @code{SQR} thresholds. Note though that the
FFT is only used if GMP is configured with @samp{--enable-fft}, @pxref{Build
Options}.
N@cross{}M multiplications of operands with different sizes above
@code{KARATSUBA_MUL_THRESHOLD} are currently done by splitting into M@cross{}M
pieces. The Karatsuba and Toom-3 routines then operate only on equal size
operands. This is not very efficient, and is slated for improvement in the
future.
@menu
* Basecase Multiplication::
* Karatsuba Multiplication::
* Toom-Cook 3-Way Multiplication::
* FFT Multiplication::
* Other Multiplication::
@end menu
@node Basecase Multiplication, Karatsuba Multiplication, Multiplication Algorithms, Multiplication Algorithms
@subsection Basecase Multiplication
Basecase N@cross{}M multiplication is a straightforward rectangular set of
cross-products, the same as long multiplication done by hand and for that
reason sometimes known as the schoolbook or grammar school method. This is an
@m{O(NM),O(N*M)} algorithm. See Knuth section 4.3.1 algorithm M
(@pxref{References}), and the @file{mpn/generic/mul_basecase.c} code.
Assembler implementations of @code{mpn_mul_basecase} are essentially the same
as the generic C code, but have all the usual assembler tricks and
obscurities introduced for speed.
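The rectangle of cross-products can be sketched in C with 32-bit limbs and
64-bit intermediate products; this is only a sketch of the structure, the
real @code{mpn_mul_basecase} works on @code{mp_limb_t} and is heavily
optimized.

```c
#include <stdint.h>
#include <stddef.h>

/* Basecase NxM limb multiplication, least significant limb first:
   one row of cross products per limb of the second operand,
   accumulated with carry propagation.  rp must have room for
   un+vn limbs.  */
void
mul_basecase (uint32_t *rp, const uint32_t *up, size_t un,
              const uint32_t *vp, size_t vn)
{
  size_t i, j;
  for (i = 0; i < un + vn; i++)
    rp[i] = 0;
  for (j = 0; j < vn; j++)
    {
      uint64_t carry = 0;
      for (i = 0; i < un; i++)
        {
          /* product plus previous limb plus carry never overflows:
             (2^32-1)^2 + 2*(2^32-1) == 2^64 - 1 */
          uint64_t t = (uint64_t) up[i] * vp[j] + rp[i + j] + carry;
          rp[i + j] = (uint32_t) t;
          carry = t >> 32;
        }
      rp[j + un] = (uint32_t) carry;
    }
}
```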
A square can be done in roughly half the time of a multiply, by using the fact
that the cross products above and below the diagonal are the same. A triangle
of products below the diagonal is formed, doubled (left shift by one bit), and
then the products on the diagonal added. This can be seen in
@file{mpn/generic/sqr_basecase.c}. Again the assembler implementations take
essentially the same approach.
@tex
\def\GMPline#1#2#3#4#5#6{%
\hbox {%
\vrule height 2.5ex depth 1ex
\hbox to 2em {\hfil{#2}\hfil}%
\vrule \hbox to 2em {\hfil{#3}\hfil}%
\vrule \hbox to 2em {\hfil{#4}\hfil}%
\vrule \hbox to 2em {\hfil{#5}\hfil}%
\vrule \hbox to 2em {\hfil{#6}\hfil}%
\vrule}}
\GMPdisplay{
\hbox{%
\vbox{%
\hbox to 1.5em {\vrule height 2.5ex depth 1ex width 0pt}%
\hbox {\vrule height 2.5ex depth 1ex width 0pt u0\hfil}%
\hbox {\vrule height 2.5ex depth 1ex width 0pt u1\hfil}%
\hbox {\vrule height 2.5ex depth 1ex width 0pt u2\hfil}%
\hbox {\vrule height 2.5ex depth 1ex width 0pt u3\hfil}%
\hbox {\vrule height 2.5ex depth 1ex width 0pt u4\hfil}%
\vfill}%
\vbox{%
\hbox{%
\hbox to 2em {\hfil u0\hfil}%
\hbox to 2em {\hfil u1\hfil}%
\hbox to 2em {\hfil u2\hfil}%
\hbox to 2em {\hfil u3\hfil}%
\hbox to 2em {\hfil u4\hfil}}%
\vskip 0.7ex
\hrule
\GMPline{u0}{d}{}{}{}{}%
\hrule
\GMPline{u1}{}{d}{}{}{}%
\hrule
\GMPline{u2}{}{}{d}{}{}%
\hrule
\GMPline{u3}{}{}{}{d}{}%
\hrule
\GMPline{u4}{}{}{}{}{d}%
\hrule}}}
@end tex
@ifnottex
@example
@group
     u0  u1  u2  u3  u4
    +---+---+---+---+---+
 u0 | d |   |   |   |   |
    +---+---+---+---+---+
 u1 |   | d |   |   |   |
    +---+---+---+---+---+
 u2 |   |   | d |   |   |
    +---+---+---+---+---+
 u3 |   |   |   | d |   |
    +---+---+---+---+---+
 u4 |   |   |   |   | d |
    +---+---+---+---+---+
@end group
@end example
@end ifnottex
In practice squaring isn't a full 2@cross{} faster than multiplying, it's
usually around 1.5@cross{}. Less than 1.5@cross{} probably indicates
@code{mpn_sqr_basecase} wants improving on that CPU.
On some CPUs @code{mpn_mul_basecase} can be faster than the generic C
@code{mpn_sqr_basecase}. @code{BASECASE_SQR_THRESHOLD} is the size at which
to use @code{mpn_sqr_basecase}; this will be zero if that routine should be
used always.
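The triangle, double, add-diagonal scheme described above can be sketched
with 32-bit limbs as follows; the real @code{mpn_sqr_basecase} interleaves
these passes rather than making three separate sweeps.

```c
#include <stdint.h>
#include <stddef.h>

/* Squaring via the triangle of off-diagonal products: form the
   products below the diagonal, double them with a one-bit left
   shift, then add the squares on the diagonal.  rp needs 2*n limbs.  */
void
sqr_triangle (uint32_t *rp, const uint32_t *up, size_t n)
{
  size_t i, j;
  uint64_t carry, t;

  for (i = 0; i < 2 * n; i++)
    rp[i] = 0;

  /* triangle of cross products up[i]*up[j], i > j */
  for (j = 0; j < n; j++)
    {
      carry = 0;
      for (i = j + 1; i < n; i++)
        {
          t = (uint64_t) up[i] * up[j] + rp[i + j] + carry;
          rp[i + j] = (uint32_t) t;
          carry = t >> 32;
        }
      rp[j + n] = (uint32_t) carry;
    }

  /* double: shift the whole 2*n limb value left one bit */
  {
    uint32_t cy = 0, hi;
    for (i = 0; i < 2 * n; i++)
      {
        hi = rp[i] >> 31;
        rp[i] = (rp[i] << 1) | cy;
        cy = hi;
      }
  }

  /* add the squares up[i]^2 on the diagonal */
  carry = 0;
  for (i = 0; i < n; i++)
    {
      uint64_t sq = (uint64_t) up[i] * up[i];
      t = rp[2 * i] + (sq & 0xFFFFFFFFu) + carry;
      rp[2 * i] = (uint32_t) t;
      carry = t >> 32;
      t = rp[2 * i + 1] + (sq >> 32) + carry;
      rp[2 * i + 1] = (uint32_t) t;
      carry = t >> 32;
    }
}
```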
@node Karatsuba Multiplication, Toom-Cook 3-Way Multiplication, Basecase Multiplication, Multiplication Algorithms
@subsection Karatsuba Multiplication
The Karatsuba multiplication algorithm is described in Knuth section 4.3.3
part A, and various other textbooks. A brief description is given here.
The inputs @ma{x} and @ma{y} are treated as each split into two parts of equal
length (or the most significant part one limb shorter if N is odd).
@tex
\global\newdimen\GMPboxwidth \GMPboxwidth=5em
\global\newdimen\GMPboxheight \GMPboxheight=3ex
\def\GMPbox#1#2{%
\vbox {%
\hrule
\hbox{%
\vrule height 2ex depth 1ex
\hbox to \GMPboxwidth {\hfil\hbox{$#1$}\hfil}%
\vrule
\hbox to \GMPboxwidth {\hfil\hbox{$#2$}\hfil}%
\vrule}
\hrule
}}
\GMPdisplay{%
\vbox{%
\hbox to 2\GMPboxwidth {high \hfil low}
\vskip 0.7ex
\GMPbox{x_1}{x_0}
\vskip 0.5ex
\GMPbox{y_1}{y_0}
}}
%}
%\moveright \lispnarrowing
%\vskip 0.5 ex
%\vskip 0.5 ex
@end tex
@ifnottex
@example
@group
 high              low
+----------+----------+
|    x1    |    x0    |
+----------+----------+
+----------+----------+
|    y1    |    y0    |
+----------+----------+
@end group
@end example
@end ifnottex
Let @ma{b} be the power of 2 where the split occurs, ie.@: if @ms{x,0} is
@ma{k} limbs (@ms{y,0} the same) then
@m{b=2\GMPraise{$k*$@code{mp\_bits\_per\_limb}}, b=2^(k*mp_bits_per_limb)}.
With that @m{x=x_1b+x_0,x=x1*b+x0} and @m{y=y_1b+y_0,y=y1*b+y0}, and the
following holds,
@display
@m{xy = (b^2+b)x_1y_1 - b(x_1-x_0)(y_1-y_0) + (b+1)x_0y_0,
x*y = (b^2+b)*x1*y1 - b*(x1-x0)*(y1-y0) + (b+1)*x0*y0}
@end display
This formula means doing only three multiplies of (N/2)@cross{}(N/2) limbs,
whereas a basecase multiply of N@cross{}N limbs is equivalent to four
multiplies of (N/2)@cross{}(N/2). The factors @ma{(b^2+b)} etc represent the
positions where the three products must be added.
@tex
\global\newdimen\GMPboxwidth \GMPboxwidth=5em
\global\newdimen\GMPboxheight \GMPboxheight=3ex
\def\GMPboxA#1#2{%
\vbox to \GMPboxheight{%
\hrule \vfil
\hbox{%
\strut \vrule
\hbox to 2\GMPboxwidth {\hfil\hbox{$#1$}\hfil}%
\vrule
\hbox to 2\GMPboxwidth {\hfil\hbox{$#2$}\hfil}%
\vrule}
\vfil \hrule}}
\def\GMPboxB#1#2{%
\hbox{%
\vbox to \GMPboxheight{%
\vfil \hbox to \GMPboxwidth {\hfil #1} \vfil }
\vbox to \GMPboxheight{%
\hrule \vfil
\hbox{%
\strut \vrule
\hbox to 2\GMPboxwidth {\hfil\hbox{$#2$}\hfil}
\vrule}
\vfil \hrule}}}
\GMPdisplay{%
\vbox{%
\hbox to 4\GMPboxwidth {high \hfil low}
\vskip 0.7ex
\GMPboxA{x_1y_1}{x_0y_0}
\vskip 0.5ex
\GMPboxB{$+$}{x_1y_1}
\vskip 0.5ex
\GMPboxB{$+$}{x_0y_0}
\vskip 0.5ex
\GMPboxB{$-$}{(x_1-x_0)(y_1-y_0)}
}}
@end tex
@ifnottex
@example
@group
 high                              low
+--------+--------+ +--------+--------+
|      x1*y1      | |      x0*y0      |
+--------+--------+ +--------+--------+
          +--------+--------+
      add |      x1*y1      |
          +--------+--------+
          +--------+--------+
      add |      x0*y0      |
          +--------+--------+
          +--------+--------+
      sub | (x1-x0)*(y1-y0) |
          +--------+--------+
@end group
@end example
@end ifnottex
The term @m{(x_1-x_0)(y_1-y_0),(x1-x0)*(y1-y0)} is best calculated as an
absolute value, and the sign used to choose to add or subtract. Notice the
sum @m{\mathop{\rm high}(x_0y_0)+\mathop{\rm low}(x_1y_1),
high(x0*y0)+low(x1*y1)} occurs twice, so it's possible to do @m{5k,5*k} limb
additions, rather than @m{6k,6*k}, but in GMP extra function call overheads
outweigh the saving.
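One level of the identity can be demonstrated on native integers, splitting
32-bit operands at @ma{b=2^16} and handling the middle term as an absolute
value with a separate sign, as just described. @code{karatsuba32} is purely
illustrative, since the point of the algorithm is multi-limb halves.

```c
#include <stdint.h>

/* One level of Karatsuba: three 16x16 multiplies instead of four,
   combined as x*y = (b^2+b)*x1*y1 - b*(x1-x0)*(y1-y0) + (b+1)*x0*y0
   with b = 2^16.  */
uint64_t
karatsuba32 (uint32_t x, uint32_t y)
{
  uint32_t x1 = x >> 16, x0 = x & 0xFFFF;
  uint32_t y1 = y >> 16, y0 = y & 0xFFFF;

  uint64_t hi = (uint64_t) x1 * y1;     /* x1*y1 */
  uint64_t lo = (uint64_t) x0 * y0;     /* x0*y0 */

  /* |x1-x0| * |y1-y0|, with the sign of (x1-x0)*(y1-y0) kept aside */
  uint32_t dx = x1 >= x0 ? x1 - x0 : x0 - x1;
  uint32_t dy = y1 >= y0 ? y1 - y0 : y0 - y1;
  int      neg = (x1 < x0) != (y1 < y0);
  uint64_t mid = (uint64_t) dx * dy;

  /* (b^2+b)*hi + (b+1)*lo, then subtract or add the middle term */
  uint64_t r = (hi << 32) + ((hi + lo) << 16) + lo;
  return neg ? r + (mid << 16) : r - (mid << 16);
}
```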
Squaring is similar to multiplying, but with @ma{x=y} the formula reduces to
an equivalent with three squares,
@display
@m{x^2 = (b^2+b)x_1^2 - b(x_1-x_0)^2 + (b+1)x_0^2,
x^2 = (b^2+b)*x1^2 - b*(x1-x0)^2 + (b+1)*x0^2}
@end display
The final result is accumulated from those three squares the same way as for
the three multiplies above. The middle term @m{(x_1-x_0)^2,(x1-x0)^2} is now
always positive.
A similar formula for both multiplying and squaring can be constructed with a
middle term @m{(x_1+x_0)(y_1+y_0),(x1+x0)*(y1+y0)}. But those sums can exceed
@ma{k} limbs, leading to more carry handling and additions than the form
above.
Karatsuba multiplication is asymptotically an @ma{O(N^@W{1.585})} algorithm,
the exponent being @m{\log3/\log2,log(3)/log(2)}, representing 3 multiplies
each 1/2 the size of the inputs. This is a big improvement over the basecase
multiply at @ma{O(N^2)} and the advantage soon overcomes the extra additions
Karatsuba performs.
@code{KARATSUBA_MUL_THRESHOLD} can be as little as 10 limbs. The @code{SQR}
threshold is usually about twice the @code{MUL}. A fast @code{mpn_addmul_1}
or @code{mpn_mul_basecase} can make the thresholds higher by staying faster
than the Karatsuba overheads for longer. Unrolling in those functions can
further raise the thresholds, by becoming proportionally more efficient as the
size increases.
@node Toom-Cook 3-Way Multiplication, FFT Multiplication, Karatsuba Multiplication, Multiplication Algorithms
@subsection Toom-Cook 3-Way Multiplication
The Karatsuba formula is the simplest case of a general approach to splitting
inputs that leads to both Toom-Cook and FFT algorithms. A description of
Toom-Cook can be found in Knuth section 4.3.3, with an example 3-way
calculation after Theorem A. The 3-way form used in GMP is described here.
The operands are each considered split into 3 pieces of equal length (or the
most significant part 1 or 2 limbs shorter than the others).
@iftex
@global@newdimen@GMPboxwidth @GMPboxwidth=5em
@global@newdimen@GMPboxheight @GMPboxheight=3ex
@end iftex
@tex
\def\GMPbox#1#2#3{%
\vbox to \GMPboxheight{%
\hrule \vfil
\hbox{%
\strut \vrule
\hbox to \GMPboxwidth {\hfil\hbox{$#1$}\hfil}%
\vrule
\hbox to \GMPboxwidth {\hfil\hbox{$#2$}\hfil}%
\vrule
\hbox to \GMPboxwidth {\hfil\hbox{$#3$}\hfil}%
\vrule}
\vfil \hrule
}}
\GMPdisplay{%
\vbox{%
\hbox to 3\GMPboxwidth {high \hfil low}
\vskip 0.7ex
\GMPbox{x_2}{x_1}{x_0}
\vskip 0.5ex
\GMPbox{y_2}{y_1}{y_0}
\vskip 0.5ex
}}
@end tex
@ifnottex
@example
@group
 high                           low
+----------+----------+----------+
|    x2    |    x1    |    x0    |
+----------+----------+----------+
+----------+----------+----------+
|    y2    |    y1    |    y0    |
+----------+----------+----------+
@end group
@end example
@end ifnottex
@noindent
These parts are treated as the coefficients of two polynomials
@display
@group
@m{X(t) = x_2t^2 + x_1t + x_0,
X(t) = x2*t^2 + x1*t + x0}
@m{Y(t) = y_2t^2 + y_1t + y_0,
Y(t) = y2*t^2 + y1*t + y0}
@end group
@end display
Again let @ma{b} equal the power of 2 which is the size of the @ms{x,0},
@ms{x,1}, @ms{y,0} and @ms{y,1} pieces, ie.@: if they're @ma{k} limbs each
then @m{b=2\GMPraise{$k*$@code{mp\_bits\_per\_limb}},
b=2^(k*mp_bits_per_limb)}. With this @ma{x=X(b)} and @ma{y=Y(b)}.
Let a polynomial @m{W(t)=X(t)Y(t),W(t)=X(t)*Y(t)} and suppose its coefficients
are
@display
@m{W(t) = w_4t^4 + w_3t^3 + w_2t^2 + w_1t + w_0,
W(t) = w4*t^4 + w3*t^3 + w2*t^2 + w1*t + w0}
@end display
@noindent
The @m{w_i,w[i]} are going to be determined, and when they are they'll give
the final result using @ma{w=W(b)}, since @m{xy=X(b)Y(b),x*y=X(b)*Y(b)=W(b)}.
The coefficients will be roughly @ma{b^2} each, and the final @ma{W(b)} will
be an addition like,
@tex
\def\GMPbox#1#2{%
\moveright #1\GMPboxwidth
\vbox to \GMPboxheight{%
\hrule \vfil
\hbox{%
\strut \vrule
\hbox to 2\GMPboxwidth {\hfil\hbox{$#2$}\hfil}%
\vrule}
\vfil \hrule
}}
\GMPdisplay{%
\vbox{%
\hbox to 6\GMPboxwidth {high \hfil low}
\vskip 0.7ex
\GMPbox{0}{w_4}
\vskip 0.5ex
\GMPbox{1}{w_3}
\vskip 0.5ex
\GMPbox{2}{w_2}
\vskip 0.5ex
\GMPbox{3}{w_1}
\vskip 0.5ex
\GMPbox{4}{w_0}
}}
@end tex
@ifnottex
@example
@group
 high                                low
+-------+-------+
|      w4       |
+-------+-------+
        +-------+-------+
        |      w3       |
        +-------+-------+
                +-------+-------+
                |      w2       |
                +-------+-------+
                        +-------+-------+
                        |      w1       |
                        +-------+-------+
                                +-------+-------+
                                |      w0       |
                                +-------+-------+
@end group
@end example
@end ifnottex
The @m{w_i,w[i]} coefficients could be formed by a simple set of cross
products, like @m{w_4=x_2y_2,w4=x2*y2}, @m{w_3=x_2y_1+x_1y_2,w3=x2*y1+x1*y2},
@m{w_2=x_2y_0+x_1y_1+x_0y_2,w2=x2*y0+x1*y1+x0*y2} etc, but this would need all
nine @m{x_iy_j,x[i]*y[j]} for @ma{i,j=0,1,2}, and would be equivalent merely
to a basecase multiply. Instead the following approach is used.
@ma{X(t)} and @ma{Y(t)} are evaluated and multiplied at 5 points, giving
values of @ma{W(t)} at those points. The points used can be chosen in
various ways, but in GMP the following are used
@tex
\GMPdisplay{%
\def\GMPbox#1{\hbox to 4em{$#1$\hfil}}%
\GMPbox{\rm Point} Value\hfil\break
\GMPbox{t=0} $x_0y_0$, which gives $w_0$ immediately\hfil\break
\GMPbox{t=2} $(4x_2+2x_1+x_0)(4y_2+2y_1+y_0)$\hfil\break
\GMPbox{t=1} $(x_2+x_1+x_0)(y_2+y_1+y_0)$\hfil\break
\GMPbox{t={1\over2}} $(x_2+2x_1+4x_0)(y_2+2y_1+4y_0)$\hfil\break
\GMPbox{t=\infty} $x_2y_2$, which gives $w_4$ immediately
}
@end tex
@ifnottex
@display
@multitable {@ma{t=infM}} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
@item Point @tab Value
@item @ma{t=0} @tab @ma{x0*y0}, which gives w0 immediately
@item @ma{t=2} @tab @ma{(4*x2+2*x1+x0)*(4*y2+2*y1+y0)}
@item @ma{t=1} @tab @ma{(x2+x1+x0)*(y2+y1+y0)}
@item @ma{t=1/2} @tab @ma{(x2+2*x1+4*x0)*(y2+2*y1+4*y0)}
@item @ma{t=inf} @tab @ma{x2*y2}, which gives @ma{w4} immediately
@end multitable
@end display
@end ifnottex
At @m{t={1\over2},t=1/2} the value calculated is actually
@m{16X({1\over2})Y({1\over2}), 16*X(1/2)*Y(1/2)}, giving a value for
@m{16W({1\over2}),16*W(1/2)}, and this is always an integer. At
@m{t=\infty,t=inf} the value is actually @m{\lim_{t\to\infty} {X(t)Y(t)\over
t^4}, X(t)*Y(t)/t^4 in the limit as t approaches infinity}, but it's much
easier to think of as simply @m{x_2y_2,x2*y2} giving @ms{w,4} immediately
(much like @m{x_0y_0,x0*y0} at @ma{t=0} gives @ms{w,0} immediately).
Now each of the points substituted into
@m{W(t)=w_4t^4+\cdots+w_0,W(t)=w4*t^4+@dots{}+w0} gives a linear combination
of the @m{w_i,w[i]} coefficients, and the value of those combinations has just
been calculated.
@tex
\GMPdisplay{%
$\matrix{%
W(0) & = & & & & & & & & & w_0 \cr
16W({1\over2}) & = & w_4 & + & 2w_3 & + & 4w_2 & + & 8w_1 & + & 16w_0 \cr
W(1) & = & w_4 & + & w_3 & + & w_2 & + & w_1 & + & w_0 \cr
W(2) & = & 16w_4 & + & 8w_3 & + & 4w_2 & + & 2w_1 & + & w_0 \cr
W(\infty) & = & w_4 \cr
}$}
@end tex
@ifnottex
@example
@group
W(0)      =                                w0
16*W(1/2) =    w4 +  2*w3 +  4*w2 +  8*w1 + 16*w0
W(1)      =    w4 +    w3 +    w2 +    w1 +    w0
W(2)      = 16*w4 +  8*w3 +  4*w2 +  2*w1 +    w0
W(inf)    =    w4
@end group
@end example
@end ifnottex
This is a set of five equations in five unknowns, and some elementary linear
algebra quickly isolates each @m{w_i,w[i]}, by subtracting multiples of one
equation from another.
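The evaluation and interpolation steps can be sketched with native signed
64-bit "pieces" standing in for limb vectors. The elimination order below is
chosen for clarity and does not match the GMP code line for line; note
though the divisions by 2 and 6, which are always exact.

```c
#include <stdint.h>

/* Toom-3 on small integer pieces: evaluate X(t)*Y(t) at the five
   points used by GMP, then isolate w0..w4 by elimination.  x and y
   hold the three pieces, least significant first; w receives the
   five coefficients of W(t).  */
void
toom3_demo (const int64_t x[3], const int64_t y[3], int64_t w[5])
{
  /* evaluate at t = 0, 1/2 (scaled by 16), 1, 2, infinity */
  int64_t W0    = x[0] * y[0];
  int64_t Whalf = (x[2] + 2*x[1] + 4*x[0]) * (y[2] + 2*y[1] + 4*y[0]);
  int64_t W1    = (x[2] + x[1] + x[0]) * (y[2] + y[1] + y[0]);
  int64_t W2    = (4*x[2] + 2*x[1] + x[0]) * (4*y[2] + 2*y[1] + y[0]);
  int64_t Winf  = x[2] * y[2];

  /* interpolate: subtract multiples of one combination from another */
  w[0] = W0;
  w[4] = Winf;
  {
    int64_t A = Whalf - 16*W0 - Winf;   /* 2*w3 + 4*w2 + 8*w1 */
    int64_t B = W1 - W0 - Winf;         /*   w3 +   w2 +   w1 */
    int64_t C = W2 - W0 - 16*Winf;      /* 8*w3 + 4*w2 + 2*w1 */
    w[2] = (10*B - A - C) / 2;          /* exact */
    int64_t D = (A - C) / 6;            /* w1 - w3, exact */
    w[1] = (B - w[2] + D) / 2;          /* exact */
    w[3] = (B - w[2]) - w[1];
  }
}
```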
In the code the set of five values @ma{W(0)},@dots{},@m{W(\infty),W(inf)} will
represent those linear combinations. By adding or subtracting one
from another as necessary, values which are each @m{w_i,w[i]} alone are
arrived at. This involves only a few subtractions of small multiples (some of
which are powers of 2), and so is fast. A couple of divisions remain by
powers of 2 and one division by 3 (or by 6 rather), and that last uses the
special @code{mpn_divexact_by3}.
In the code the values @ms{w,4}, @ms{w,2} and @ms{w,0} are formed in the
destination with pointers @code{E}, @code{C} and @code{A}, and @ms{w,3} and
@ms{w,1} in temporary space @code{D} and @code{B} are added to them. There
are extra limbs @code{tD}, @code{tC} and @code{tB} at the high end of
@ms{w,3}, @ms{w,2} and @ms{w,1} which are handled separately. The final
addition then is as follows.
@tex
\def\GMPboxT#1{%
\vbox to \GMPboxheight{%
\hrule
\hbox {\strut \vrule{} #1 \vrule}%
\hrule
}}
\GMPdisplay{%
\advance\baselineskip by 1ex
\vbox{%
\hbox to 6\GMPboxwidth {high \hfil low}
\vbox to \GMPboxheight{%
\hrule \vfil
\hbox{%
\strut \vrule
\hbox to 2\GMPboxwidth {\hfil@code{E}\hfil}
\vrule
\hbox to 2\GMPboxwidth {\hfil@code{C}\hfil}
\vrule
\hbox to 2\GMPboxwidth {\hfil@code{A}\hfil}
\vrule}
\vfil \hrule
}%
\moveright \GMPboxwidth
\vbox to \GMPboxheight{%
\hrule \vfil
\hbox{%
\strut \vrule
\hbox to 2\GMPboxwidth {\hfil@code{D}\hfil}
\vrule
\hbox to 2\GMPboxwidth {\hfil@code{B}\hfil}
\vrule}
\vfil \hrule
}%
\hbox{%
\hbox to \GMPboxwidth{\hfil \GMPboxT{\code{tD}}}%
\hbox to \GMPboxwidth{\hfil \GMPboxT{\code{tC}}}%
\hbox to \GMPboxwidth{\hfil \GMPboxT{\code{tB}}}}
}}
@end tex
@ifnottex
@example
@group
high low
+-------+-------+-------+-------+-------+-------+
| E | C | A |
+-------+-------+-------+-------+-------+-------+
+------+-------++------+-------+
| D || B |
+------+-------++------+-------+
-- -- --
|tD| |tC| |tB|
-- -- --
@end group
@end example
@end ifnottex
The conversion of @ma{W(t)} values to the coefficients is interpolation. A
polynomial of degree 4 like @ma{W(t)} is uniquely determined by values known
at 5 different points. The points can be chosen to make the linear equations
come out with a convenient set of steps for isolating the @m{w_i,w[i]}.
In @file{mpn/generic/mul_n.c} the @code{interpolate3} routine performs the
interpolation. The open-coded one-pass version may be a bit hard to
understand; the steps performed can be better seen in the @code{USE_MORE_MPN}
version.
Squaring follows the same procedure as multiplication, but there's only one
@ma{X(t)} and it's evaluated at 5 points, and those values squared to give
values of @ma{W(t)}. The interpolation is then identical, and in fact the
same @code{interpolate3} subroutine is used for both squaring and multiplying.
Toom-3 is asymptotically @ma{O(N^@W{1.465})}, the exponent being
@m{\log5/\log3,log(5)/log(3)}, representing 5 recursive multiplies of 1/3 the
original size. This is an improvement over Karatsuba at @ma{O(N^@W{1.585})},
though Toom-Cook does more work in the evaluation and interpolation and so it
only realizes its advantage above a certain size.
Near the crossover between Toom-3 and Karatsuba there's generally a range of
sizes where the difference between the two is small.
@code{TOOM3_MUL_THRESHOLD} is a somewhat arbitrary point in that range and
successive runs of the tune program can give different values due to small
variations in measuring. A graph of time versus size for the two shows the
effect, see @file{tune/README}.
At the fairly small sizes where the Toom-3 thresholds occur it's worth
remembering that the asymptotic behaviour for Karatsuba and Toom-3 can't be
expected to make accurate predictions, due of course to the big influence of
all sorts of overheads, and the fact that only a few recursions of each are
being performed. Even at large sizes there's a good chance machine dependent
effects like cache architecture will mean actual performance deviates from
what might be predicted.
The formula given above for the Karatsuba algorithm has an equivalent for
Toom-3 involving only five multiplies, but this would be complicated and
unenlightening.
An alternate view of Toom-3 can be found in Zuras (@pxref{References}), using
a vector to represent the @ma{x} and @ma{y} splits and a matrix multiplication
for the evaluation and interpolation stages. The matrix inverses are not
meant to be actually used, and they have elements with values much greater
than in fact arise in the interpolation steps. The diagram shown for the
3-way is attractive, but again doesn't have to be implemented that way and for
example with a bit of rearrangement just one division by 6 can be done.
@node FFT Multiplication, Other Multiplication, Toom-Cook 3-Way Multiplication, Multiplication Algorithms
@subsection FFT Multiplication
At large to very large sizes a Fermat style FFT multiplication is used,
following Sch@"onhage and Strassen (@pxref{References}). Descriptions of FFTs
in various forms can be found in many textbooks, for instance Knuth section
4.3.3 part C or Lipson chapter IX. A brief description of the form used in
GMP is given here.
The multiplication done is @m{xy \bmod 2^N+1, x*y mod 2^N+1}, for a given
@ma{N}. A full product @m{xy,x*y} is obtained by choosing @m{N \ge
\mathop{\rm bits}(x)+\mathop{\rm bits}(y), N>=bits(x)+bits(y)} and padding
@ma{x} and @ma{y} with high zero limbs. The modular product is the native
form for the algorithm, so padding to get a full product is unavoidable.
The algorithm follows a split, evaluate, pointwise multiply, interpolate and
combine similar to that described above for Karatsuba and Toom-3. A @ma{k}
parameter controls the split, with an FFT-@ma{k} splitting into @ma{2^k}
pieces of @ma{M=N/2^k} bits each. @ma{N} must be a multiple of
@m{2^k\times@code{mp\_bits\_per\_limb}, (2^k)*@nicode{mp_bits_per_limb}} so
the split falls on limb boundaries, avoiding bit shifts in the split and
combine stages.
The evaluations, pointwise multiplications, and interpolation, are all done
modulo @m{2^{N'}+1, 2^N'+1} where @ma{N'} is @ma{2M+k+3} rounded up to a
multiple of @ma{2^k} and of @code{mp_bits_per_limb}. The results of
interpolation will be the following negacyclic convolution of the input
pieces, and the choice of @ma{N'} ensures these sums aren't truncated.
@tex
$$ w_n = \sum_{{i+j = b2^k+n}\atop{b=0,1}} (-1)^b x_i y_j $$
@end tex
@ifnottex
@example
                         b
w[n] =     sum       (-1)  * x[i] * y[j]
       i+j==b*2^k+n
           b=0,1
@end example
@end ifnottex
The points used for the evaluation are @ma{g^i} for @ma{i=0} to @ma{2^k-1}
where @m{g=2^{2N'/2^k}, g=2^(2N'/2^k)}. @ma{g} is a @m{2^k,2^k}th root of
unity mod @m{2^{N'}+1,2^N'+1}, which produces necessary cancellations at the
interpolation stage, and it's also a power of 2 so the fast fourier transforms
used for the evaluation and interpolation do only shifts, adds and negations.
The pointwise multiplications are done modulo @m{2^{N'}+1, 2^N'+1} and either
recurse into a further FFT or use a plain multiplication (Toom-3, Karatsuba or
basecase), whichever is optimal at the size @ma{N'}. The interpolation is an
inverse fast fourier transform. The resulting set of sums of @m{x_iy_j,
x[i]*y[j]} are added at appropriate offsets to give the final result.
Squaring is the same, but @ma{x} is the only input so it's one transform at
the evaluate stage and the pointwise multiplies are squares. The
interpolation is the same.
For a mod @ma{2^N+1} product, an FFT-@ma{k} is an @m{O(N^{k/(k-1)}),
O(N^(k/(k-1)))} algorithm, the exponent representing @ma{2^k} recursed modular
multiplies each @m{1/2^{k-1},1/2^(k-1)} the size of the original. Each
successive @ma{k} is an asymptotic improvement, but overheads mean each is
only faster at bigger and bigger sizes. In the code, @code{FFT_MUL_TABLE} and
@code{FFT_SQR_TABLE} are the thresholds where each @ma{k} is used. Each new
@ma{k} effectively swaps some multiplying for some shifts, adds and overheads.
A mod @ma{2^N+1} product can be formed with a normal
@ma{N@cross{}N@rightarrow{}2N} bit multiply plus a subtraction, so an FFT and
Toom-3 etc can be compared directly. A @ma{k=4} FFT at @ma{O(N^@W{1.333})}
can be expected to be the first faster than Toom-3 at @ma{O(N^@W{1.465})}. In
practice this is what's found, with @code{FFT_MODF_MUL_THRESHOLD} and
@code{FFT_MODF_SQR_THRESHOLD} being between 300 and 1000 limbs, depending on
the CPU. So far it's been found that only very large FFTs recurse into
pointwise multiplies above these sizes.
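The full-product-plus-subtraction observation is easy to check at a toy
size, here @ma{N=16} with native integers: with @ma{p = hi*2^N + lo} and
@ma{2^N} congruent to @minus{}1 mod @ma{2^N+1}, the modular product is
simply @ma{lo - hi}, wrapped into range. Illustrative only; the real use is
with multi-limb @ma{N} and an @code{mpn} subtraction.

```c
#include <stdint.h>

/* x*y mod 2^16+1 from the full product: split at bit 16 and
   subtract the high part from the low, since 2^16 == -1 mod 2^16+1.  */
uint32_t
mulmod_fermat16 (uint32_t x, uint32_t y)  /* x, y <= 2^16 */
{
  const uint32_t m = (1u << 16) + 1;      /* 2^16 + 1 */
  uint64_t p = (uint64_t) x * y;          /* full product */
  uint32_t lo = (uint32_t) (p & 0xFFFF);  /* low 16 bits */
  uint32_t hi = (uint32_t) (p >> 16);     /* high part, at most 2^16 */
  return lo >= hi ? lo - hi : lo + m - hi;
}
```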
When an FFT is to give a full product, the change of @ma{N} to @ma{2N} doesn't
alter the theoretical complexity for a given @ma{k}, but for the purposes of
considering where an FFT might be first used it can be assumed that the FFT is
recursing into a normal multiply and that on that basis it's doing @ma{2^k}
recursed multiplies each @m{1/2^{k-2},1/2^(k-2)} the size of the inputs,
making it @m{O(N^{k/(k-2)}), O(N^(k/(k-2)))}. This would mean @ma{k=7} at
@ma{O(N^@W{1.4})} would be the first FFT faster than Toom-3. In practice
@code{FFT_MUL_THRESHOLD} and @code{FFT_SQR_THRESHOLD} have been found to be in
the @ma{k=8} range, somewhere between 3000 and 10000 limbs.
The way @ma{N} is split into @ma{2^k} pieces and then @ma{2M+k+3} is rounded
up to a multiple of @ma{2^k} and @code{mp_bits_per_limb} means that when
@m{2^k\ge@code{mp\_bits\_per\_limb}, 2^k>=@nicode{mp_bits_per_limb}} the
effective @ma{N} is a multiple of @m{2^{2k-1},2^(2k-1)} bits. The @ma{+k+3}
means some values of @ma{N} just under such a multiple will be rounded to the
next. The complexity calculations above assume that a favourable size is
used, meaning one which isn't padded through rounding, and it's also assumed
that the extra @ma{+k+3} bits are negligible at typical FFT sizes.
The practical effect of the @m{2^{2k-1},2^(2k-1)} constraint is to introduce a
step-effect into measured speeds. For example @ma{k=8} will round @ma{N} up
to a multiple of 32768 bits, so for a 32-bit limb there'll be 512 limb groups
of sizes for which @code{mpn_mul_n} runs at the same speed. Or for @ma{k=9}
groups of 2048 limbs, @ma{k=10} groups of 8192 limbs, etc. In practice it's
been found each @ma{k} is used at quite small multiples of its size constraint
and so the step effect is quite noticeable in a time versus size graph.
The threshold determinations currently measure at the mid-points of size
steps, but this is sub-optimal since at the start of a new step it can happen
that it's better to go back to the previous @ma{k} for a while. Something
more sophisticated for @code{FFT_MUL_TABLE} and @code{FFT_SQR_TABLE} will be
needed.
@node Other Multiplication, , FFT Multiplication, Multiplication Algorithms
@subsection Other Multiplication
The 3-way Toom-Cook algorithm described above (@pxref{Toom-Cook 3-Way
Multiplication}) generalizes to split into an arbitrary number of pieces, as
per Knuth section 4.3.3 algorithm C. This is not currently used, though it's
possible a Toom-4 might fit in between Toom-3 and the FFTs. The notes here
are merely for interest.
In general a split into @ma{r+1} pieces is made, and evaluations and pointwise
multiplications done at @m{2r+1,2*r+1} points. A 4-way split does 7 pointwise
multiplies, 5-way does 9, etc. Asymptotically an @ma{(r+1)}-way algorithm is
@m{O(N^{\log(2r+1)/\log(r+1)}), O(N^(log(2*r+1)/log(r+1)))}. Only the pointwise
multiplications count towards big-@ma{O} complexity, but the time spent in the
evaluate and interpolate stages grows with @ma{r} and has a significant
practical impact, with the asymptotic advantage of each @ma{r} realized only
at bigger and bigger sizes. The overheads grow as @m{O(Nr),O(N*r)}, whereas
in an @ma{r=2^k} FFT they grow only as @m{O(N \log r), O(N*log(r))}.
Knuth algorithm C evaluates at points 0,1,2,@dots{},@m{2r,2*r}, but exercise 4
uses @ma{-r},@dots{},0,@dots{},@ma{r} and the latter saves some small
multiplies in the evaluate stage (or rather trades them for additions), and
has a further saving of nearly half the interpolate steps. The idea is to
separate odd and even final coefficients and then perform algorithm C steps C7
and C8 on them separately. The divisors at step C7 become @ma{j^2} and the
multipliers at C8 become @m{2tj-j^2,2*t*j-j^2}.
Splitting odd and even parts through positive and negative points can be
thought of as using @ma{-1} as a square root of unity. If a 4th root of unity
were available then a further split and speedup would be possible, but no such
root exists for plain integers. Going to complex integers with
@m{i=\sqrt{-1}, i=sqrt(-1)} doesn't help, essentially because in cartesian
form it takes three real multiplies to do a complex multiply. The existence
of @m{2^k,2^k}th roots of unity in a suitable ring or field lets the fast
Fourier transform keep splitting and get to @m{O(N \log r), O(N*log(r))}.
@node Division Algorithms, Greatest Common Divisor Algorithms, Multiplication Algorithms, Algorithms
@section Division Algorithms
@cindex Division algorithms
@menu
* Single Limb Division::
* Basecase Division::
* Divide and Conquer Division::
* Exact Division::
* Exact Remainder::
* Small Quotient Division::
@end menu
@node Single Limb Division, Basecase Division, Division Algorithms, Division Algorithms
@subsection Single Limb Division
N@cross{}1 division is implemented using repeated 2@cross{}1 divisions from
high to low, either with a hardware divide instruction or a multiplication by
inverse, whichever is best on a given CPU.
The multiply by inverse used follows section 8 of ``Division by Invariant
Integers using Multiplication'' by Granlund and Montgomery
(@pxref{References}) and is implemented as @code{udiv_qrnnd_preinv} in
@file{gmp-impl.h}. The idea is to have a fixed-point approximation to
@ma{1/d} (see @code{invert_limb}) and then multiply by the high limb (plus one
bit) of the dividend to get a quotient @ma{q}. With @ma{d} normalized (high
bit set), @ma{q} is no more than 1 too small. Subtracting @m{qd,q*d} from the
dividend gives a remainder, and reveals whether a correction is necessary.
The result is a division done with two multiplications and four or five
arithmetic operations. On CPUs with low latency multipliers this can be much
faster than a hardware divide, though the cost of calculating the inverse at
the start may mean it's only better on inputs bigger than say 4 or 5 limbs.
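As an illustration (not GMP's actual code), the following C sketch shows a
2@cross{}1 division by a normalized divisor with 32-bit limbs and 64-bit
intermediates. The correction scheme here follows the later M@"oller and
Granlund presentation of the same idea; the names @code{invert_limb32} and
@code{div2by1} are invented for the example.

```c
#include <stdint.h>

typedef uint32_t limb;

/* Sketch only: with B = 2^32 the fixed-point inverse of a normalized
   divisor d (high bit set) is v = floor((B^2 - 1)/d) - B.  The
   numerator B^2 - B*d - 1 is the two-limb value (~d, B-1). */
static limb invert_limb32 (limb d)
{
  return (limb) (((((uint64_t) (limb) ~d) << 32) | 0xFFFFFFFFu) / d);
}

/* Divide the two-limb value (nh,nl) by d, assuming nh < d so the
   quotient fits one limb.  Two multiplies plus a few adds and
   compares replace the hardware divide. */
static limb div2by1 (limb *rem, limb nh, limb nl, limb d, limb v)
{
  uint64_t q = (uint64_t) v * nh;
  q += ((uint64_t) nh << 32) + nl;      /* (q1,q0) += (nh,nl) */
  limb q1 = (limb) (q >> 32) + 1;       /* candidate quotient */
  limb q0 = (limb) q;
  limb r = nl - q1 * d;                 /* candidate remainder, mod B */
  if (r > q0)                           /* q1 one too big (common case) */
    { q1--; r += d; }
  if (r >= d)                           /* rare extra correction */
    { q1++; r -= d; }
  *rem = r;
  return q1;
}
```

A hardware divide would replace both multiplies, but on CPUs where multiply
latency is low the above sequence wins once the inverse has been paid for.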
When a divisor must be normalized, either for the generic C
@code{__udiv_qrnnd_c} or the multiply by inverse, the division performed is
actually @m{a2^k,a*2^k} by @m{d2^k,d*2^k} where @ma{a} is the dividend and
@ma{k} is the power necessary to have the high bit of @m{d2^k,d*2^k} set. The
bit shifts for the dividend are usually accomplished ``on the fly'' meaning by
extracting the appropriate bits at each step. Done this way the quotient
limbs come out aligned ready to store. When only the remainder is wanted, an
alternative is to take the dividend limbs unshifted and calculate @m{r = a
\bmod d2^k, r = a mod d*2^k} followed by an extra final step @m{r2^k \bmod
d2^k, r*2^k mod d*2^k}. This can help on CPUs with poor bit shifts or few
registers.
N@cross{}1 division is used for small divisors (which are fairly common), and
for radix conversion. It's not used to construct N@cross{}M divisions, though
the same 2@cross{}1 divisions find a use.
@node Basecase Division, Divide and Conquer Division, Single Limb Division, Division Algorithms
@subsection Basecase Division
Basecase N@cross{}M division is like long division done by hand, but done in
base @m{2\GMPraise{@code{mp\_bits\_per\_limb}}, 2^mp_bits_per_limb}. See
Knuth section 4.3.1 algorithm D, and @file{mpn/generic/sb_divrem_mn.c}.
Briefly stated, while the dividend remains larger than the divisor, a high
quotient limb is formed and the N@cross{}1 product @m{qd,q*d} subtracted at
the top end of the dividend. As noted in Knuth, with a normalized divisor
(most significant bit set), each quotient limb can be formed using a
2@cross{}1 division and a 1@cross{}1 multiplication plus some subtractions.
The 2@cross{}1 division is by the high limb of the divisor and is done either
with a hardware divide or a multiply by inverse (the same as in @ref{Single
Limb Division}) whichever is faster. Such a quotient is sometimes one too
big, requiring an addback of the divisor, but that happens rarely.
With Q=N@minus{}M being the number of quotient limbs, this is an
@m{O(QM),O(Q*M)} algorithm and will run at a speed similar to a basecase
Q@cross{}M multiplication, differing in fact only in the extra multiply and
divide for each of the Q quotient limbs.
@node Divide and Conquer Division, Exact Division, Basecase Division, Division Algorithms
@subsection Divide and Conquer Division
For divisors larger than @code{DC_THRESHOLD}, division is done by dividing.
Or to be precise by a recursive divide and conquer algorithm based on work by
Moenck and Borodin, Jebelean, and Burnikel and Ziegler (@pxref{References}).
The algorithm consists essentially of recognising that a 2N@cross{}N division
can be done with the basecase division algorithm (@pxref{Basecase Division}),
but using N/2 limbs as a base, not just a single limb. This way the
multiplications that arise are (N/2)@cross{}(N/2) and can take advantage of
Karatsuba and higher multiplication algorithms (@pxref{Multiplication
Algorithms}). The ``digits'' of the quotient are formed by recursive
N@cross{}(N/2) divisions.
If the (N/2)@cross{}(N/2) multiplies are done with a basecase multiplication
then the work is about the same as a basecase division, but with more function
call overheads and with some subtractions separated from the multiplies.
These overheads mean that it's only when N/2 is above
@code{KARATSUBA_MUL_THRESHOLD} that divide and conquer is of use.
@code{DC_THRESHOLD} is based on the divisor size N, so it will be somewhere
above twice @code{KARATSUBA_MUL_THRESHOLD}, but how much above depends on the
CPU. An optimized @code{mpn_mul_basecase} can lower @code{DC_THRESHOLD} a
little by offering a ready-made advantage over repeated @code{mpn_submul_1}
calls.
Divide and conquer is asymptotically @m{O(M(N)\log N),O(M(N)*log(N))} where
@ma{M(N)} is the time for an N@cross{}N multiplication done with FFTs. The
actual time is a sum over multiplications of the recursed sizes, as can be
seen near the end of section 2.2 of Burnikel and Ziegler. For example, within
the Toom-3 range, divide and conquer is @m{2.63M(N), 2.63*M(N)}. With higher
algorithms the @ma{M(N)} term improves and the multiplier tends to @m{\log N,
log(N)}. In practice, at moderate to large sizes, a 2N@cross{}N division is
about 2 to 4 times slower than an N@cross{}N multiplication.
Division via Newton's method is asymptotically @ma{O(M(N))} and should
therefore be superior to divide and conquer, but it's believed this advantage
would appear only at large to very large N.
@node Exact Division, Exact Remainder, Divide and Conquer Division, Division Algorithms
@subsection Exact Division
A so-called exact division is when the dividend is known to be an exact
multiple of the divisor. Jebelean's exact division algorithm uses this
knowledge to make some significant optimizations (@pxref{References}).
The idea can be illustrated in decimal for example with 368154 divided by
543. Because the low digit of the dividend is 4, the low digit of the
quotient must be 8. This is arrived at from @m{4 \mathord{\times} 7 \bmod 10,
4*7 mod 10}, using the fact 7 is the modular inverse of 3 (the low digit of
the divisor), since @m{3 \mathord{\times} 7 \mathop{\equiv} 1 \bmod 10, 3*7
@equiv{} 1 mod 10}. So @m{8\mathord{\times}543 = 4344,8*543=4344} can be
subtracted from the dividend leaving 363810. Notice the low digit has become
zero.
The procedure is repeated at the second digit, with the next quotient digit 7
(@m{1 \mathord{\times} 7 \bmod 10, 7 @equiv{} 1*7 mod 10}), subtracting
@m{7\mathord{\times}543 = 3801,7*543=3801}, leaving 325800. And finally at
the third digit with quotient digit 6 (@m{8 \mathord{\times} 7 \bmod 10, 8*7
mod 10}), subtracting @m{6\mathord{\times}543 = 3258,6*543=3258} leaving 0.
So the quotient is 678.
Notice however that the multiplies and subtractions don't need to extend past
the low three digits of the dividend, since that's enough to determine the
three quotient digits. For the last quotient digit no subtraction is needed
at all. On a 2N@cross{}N division like this one, only about half the work of
a normal basecase division is necessary.
For an N@cross{}M exact division producing Q=N@minus{}M quotient limbs, the
saving over a normal basecase division is in two parts. Firstly, each of the
Q quotient limbs needs only one multiply, not a 2@cross{}1 divide and
multiply. Secondly, the crossproducts are reduced when @ma{Q>M} to
@m{QM-M(M+1)/2,Q*M-M*(M+1)/2}, or when @m{Q\leq M, Q<=M} to @m{Q(Q-1)/2,
Q*(Q-1)/2}. Notice the savings are complementary. If Q is big then many
divisions are saved, or if Q is small then the crossproducts reduce to a small
number.
The modular inverse used is calculated efficiently using @code{modlimb_invert}
in @file{gmp-impl.h}. This does four multiplies for a 32-bit limb, or six for
a 64-bit limb. @file{tune/modlinv.c} has some alternate implementations that
might suit processors better at bit twiddling than multiplying.
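Both pieces can be sketched in C for a single odd 32-bit limb divisor. The
inverse construction below is a generic Newton iteration, not necessarily the
exact sequence @code{modlimb_invert} uses, and @code{binv32} and
@code{divexact_1} are invented names. Note that @code{binv32(3)} comes out as
@code{0xAAAAAAAB}, the inverse mentioned below for @code{mpn_divexact_by3}.

```c
#include <stdint.h>

/* Sketch: inverse of an odd d modulo 2^32 by Newton steps.  The
   starting value (3*d)^2 is correct to 5 low bits, and each step
   v *= 2 - d*v doubles the number of correct low bits. */
static uint32_t binv32 (uint32_t d)
{
  uint32_t v = (3 * d) ^ 2;     /*  5 bits */
  v *= 2 - d * v;               /* 10 bits */
  v *= 2 - d * v;               /* 20 bits */
  v *= 2 - d * v;               /* 40 >= 32 bits */
  return v;
}

/* Sketch of exact N x 1 division, low to high.  Each quotient limb is
   (limb - borrow) * dinv mod 2^32; since q*d then has that same low
   limb, the high half of q*d is the borrow into the next position.
   The dividend must be an exact multiple of d. */
static void divexact_1 (uint32_t *qp, const uint32_t *np, int nn, uint32_t d)
{
  uint32_t dinv = binv32 (d);
  uint32_t c = 0;                       /* borrow */
  for (int i = 0; i < nn; i++)
    {
      uint32_t s = np[i] - c;           /* may wrap, giving a borrow */
      uint32_t b = np[i] < c;
      uint32_t q = s * dinv;            /* q*d has low limb equal to s */
      qp[i] = q;
      c = b + (uint32_t) (((uint64_t) q * d) >> 32);
    }
}
```

Notice there's no correction step and no 2@cross{}1 divide: each limb costs
one low multiply and one high multiply, as described above.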
The sub-quadratic exact division described by Jebelean in ``Exact Division
with Karatsuba Complexity'' is not currently implemented. It uses a
rearrangement similar to the divide and conquer for normal division
(@pxref{Divide and Conquer Division}), but operating from low to high. A
further possibility not currently implemented is ``Bidirectional Exact Integer
Division'' by Krandick and Jebelean which forms quotient limbs from both the
high and low ends of the dividend, and can halve once more the number of
crossproducts needed in a 2N@cross{}N division.
A special case exact division by 3 exists in @code{mpn_divexact_by3},
supporting Toom-3 multiplication and @code{mpq} canonicalizations. It forms
quotient digits with a multiply by the modular inverse of 3 (which is
@code{0xAA..AAB}) and uses two comparisons to determine a borrow for the next
limb. The multiplications don't need to be on the dependent chain, so long as
the effect of the borrows is applied. Only a few optimized assembler
implementations currently exist.
@node Exact Remainder, Small Quotient Division, Exact Division, Division Algorithms
@subsection Exact Remainder
If the exact division algorithm is done with a full subtraction at each stage
and the dividend isn't a multiple of the divisor, then low zero limbs are
produced but with a remainder in the high limbs. For dividend @ma{a}, divisor
@ma{d}, quotient @ma{q}, and @m{b = 2 \GMPraise{@code{mp\_bits\_per\_limb}}, b
= 2^mp_bits_per_limb}, then this remainder @ma{r} is of the form
@tex
$$ a = qd + r b^n $$
@end tex
@ifnottex
@example
a = q*d + r*b^n
@end example
@end ifnottex
@ma{n} represents the number of zero limbs produced by the subtractions, that
being the number of limbs produced for @ma{q}. @ma{r} will be in the range
@m{0 \leq r < d, 0<=r<d} and can be viewed as a remainder, but one shifted up
by a factor of @ma{b^n}.
Carrying out full subtractions at each stage means the same number of cross
products must be done as a normal division, but there's still some single limb
divisions saved. When @ma{d} is a single limb some simplifications arise,
providing good speedups on a number of processors.
@code{mpn_bdivmod}, @code{mpn_divexact_by3}, @code{mpn_modexact_1_odd} and the
@code{redc} function in @code{mpz_powm} differ subtly in how they return
@ma{r}, leading to some negations in the above formula, but all are
essentially the same.
Clearly @ma{r} is zero when @ma{a} is a multiple of @ma{d}, and this leads to
divisibility or congruence tests which are potentially more efficient than a
normal division. Code implementing this is in progress.
The factor of @ma{b^n} on @ma{r} can be ignored in a GCD (with @ma{d} odd),
hence the use of @code{mpn_bdivmod} by @code{mpn_gcd}, and the use of
@code{mpn_modexact_1_odd} by @code{mpn_gcd_1} and @code{mpz_kronecker_ui} etc
(@pxref{Greatest Common Divisor Algorithms}).
Montgomery's REDC method for modular multiplications uses operands of the form
of @m{xb^{-n}, x*b^-n} and @m{yb^{-n}, y*b^-n} and on calculating @m{(xb^{-n})
(yb^{-n}), (x*b^-n)*(y*b^-n)} uses the factor of @ma{b^n} in the exact
remainder to reach a product in the same form @m{(xy)b^{-n},
(x*y)*b^-n} (@pxref{Modular Powering Algorithm}).
Notice that @ma{r} generally gives no useful information about the ordinary
remainder @ma{a @bmod d} since @ma{b^n @bmod d} could be anything. If however
@ma{b^n @equiv{} 1 @bmod d}, then @ma{r} is the negative of the ordinary
remainder. This occurs whenever @ma{d} is a factor of @ma{b^n-1}, as for
example with 3 in @code{mpn_divexact_by3}. Other such factors include 5, 17
and 257, but no particular use has been found for this.
@node Small Quotient Division, , Exact Remainder, Division Algorithms
@subsection Small Quotient Division
An N@cross{}M division where the number of quotient limbs Q=N@minus{}M is
small can be optimized somewhat.
An ordinary basecase division normalizes the divisor by shifting it to make
the high bit set, shifting the dividend accordingly, and shifting the
remainder back down at the end of the calculation. This is wasteful if only a
few quotient limbs are to be formed. Instead a division of just the top
@m{\rm2Q,2*Q} limbs of the dividend by the top Q limbs of the divisor can be
used to form a trial quotient. This requires only those limbs normalized, not
the whole of the divisor and dividend.
A multiply and subtract then applies the trial quotient to the M@minus{}Q
unused limbs of the divisor and N@minus{}Q dividend limbs (which includes Q
limbs remaining from the trial quotient division). The starting trial
quotient can be 1 or 2 too big, but all cases of 2 too big and most cases of 1
too big are detected by first comparing the most significant limbs that will
arise from the subtraction. An addback is done if the quotient still turns
out to be 1 too big.
This whole procedure is essentially the same as one step of the basecase
algorithm done in a Q limb base, though with the trial quotient test done only
with the high limbs, not an entire Q limb ``digit'' product. The correctness
of this weaker test can be established by following the argument of Knuth
section 4.3.1 exercise 20 but with the @m{v_2 \GMPhat q > b \GMPhat r
+ u_2, v2*q>b*r+u2} condition appropriately relaxed.
@need 1000
@node Greatest Common Divisor Algorithms, Powering Algorithms, Division Algorithms, Algorithms
@section Greatest Common Divisor
@cindex Greatest common divisor algorithms
@menu
* Binary GCD::
* Accelerated GCD::
* Extended GCD::
* Jacobi Symbol::
@end menu
@node Binary GCD, Accelerated GCD, Greatest Common Divisor Algorithms, Greatest Common Divisor Algorithms
@subsection Binary GCD
At small sizes GMP uses an @ma{O(N^2)} binary style GCD. This is described in
many textbooks, for example Knuth section 4.5.2 algorithm B. It simply
consists of successively reducing operands @ma{a} and @ma{b} using
@ma{@gcd{}(a,b) = @gcd{}(@min{}(a,b),@abs{}(a-b))}, and also that if @ma{a}
and @ma{b} are first made odd then @ma{@abs{}(a-b)} is even and factors of two
can be discarded.
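On single limbs the reduction can be sketched as follows; this is an
illustration of the algorithm, not GMP's code.

```c
/* Sketch of the binary GCD on single limbs: strip common factors of
   2, keep both operands odd, and reduce with
   gcd(a,b) = gcd(min(a,b),|a-b|). */
static unsigned long gcd_binary (unsigned long a, unsigned long b)
{
  if (a == 0) return b;
  if (b == 0) return a;
  int twos = 0;
  while (((a | b) & 1) == 0)            /* common factors of 2 */
    { a >>= 1; b >>= 1; twos++; }
  while ((a & 1) == 0) a >>= 1;         /* make both odd */
  while ((b & 1) == 0) b >>= 1;
  while (a != b)
    {
      /* a and b both odd, so |a-b| is even and its factors of 2
         can be discarded */
      if (a > b)
        { a -= b; do a >>= 1; while ((a & 1) == 0); }
      else
        { b -= a; do b >>= 1; while ((b & 1) == 0); }
    }
  return a << twos;                     /* restore common factors of 2 */
}
```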
Variants like letting @ma{a-b} become negative and doing a different next step
are of interest only as far as they suit particular CPUs, since on small
operands it's machine dependent factors that determine performance.
The Euclidean GCD algorithm, as per Knuth algorithms E and A, reduces using
@ma{a @bmod b} but this has so far been found to be slower everywhere. One
reason the binary method does well is that the implied quotient at each step
is usually small, so often only one or two subtractions are needed to get the
same effect as a division. Quotients 1, 2 and 3 for example occur 67.7% of
the time, see Knuth section 4.5.3 Theorem E.
When the implied quotient is large, meaning @ma{b} is much smaller than
@ma{a}, then a division is worthwhile. This is the basis for the initial
@ma{a @bmod b} reductions in @code{mpn_gcd} and @code{mpn_gcd_1} (the latter
for both N@cross{}1 and 1@cross{}1 cases). But after that initial reduction,
big quotients occur too rarely to make it worth checking for them.
@node Accelerated GCD, Extended GCD, Binary GCD, Greatest Common Divisor Algorithms
@subsection Accelerated GCD
For sizes above @code{GCD_ACCEL_THRESHOLD}, GMP uses the Accelerated GCD
algorithm described independently by Weber and Jebelean (the latter as the
``Generalized Binary'' algorithm), @pxref{References}. This algorithm is
still @ma{O(N^2)}, but is much faster than the binary algorithm since it does
fewer multi-precision operations. It consists of alternating the @ma{k}-ary
reduction by Sorenson, and a ``dmod'' exact remainder reduction.
For operands @ma{u} and @ma{v} the @ma{k}-ary reduction replaces @ma{u} with
@m{nv-du,n*v-d*u} where @ma{n} and @ma{d} are single limb values chosen to
give two trailing zero limbs on that value, which can be stripped. @ma{n} and
@ma{d} are calculated using an algorithm similar to half of a two limb GCD
(see @code{find_a} in @file{mpn/generic/gcd.c}).
When @ma{u} and @ma{v} differ in size by more than a certain number of bits, a
dmod is performed to zero out bits at the low end of the larger. It consists
of an exact remainder style division applied to an appropriate number of bits
(@pxref{Exact Division}, and @pxref{Exact Remainder}). This is faster than a
@ma{k}-ary reduction but useful only when the operands differ in size.
There's a dmod after each @ma{k}-ary reduction, and if the dmod leaves the
operands still differing in size then it's repeated.
The @ma{k}-ary reduction step can introduce spurious factors into the GCD
calculated, and these are eliminated at the end by taking GCDs with the
original inputs @ma{@gcd{}(u,@gcd{}(v,g))} using the binary algorithm. Since
@ma{g} is almost always small this takes very little time.
At small sizes the algorithm needs a good implementation of @code{find_a}. At
larger sizes it's dominated by @code{mpn_addmul_1} applying @ma{n} and @ma{d}.
@node Extended GCD, Jacobi Symbol, Accelerated GCD, Greatest Common Divisor Algorithms
@subsection Extended GCD
The extended GCD calculates @ma{@gcd{}(a,b)} and also cofactors @ma{x} and
@ma{y} satisfying @m{ax+by=\gcd(a@C{}b), a*x+b*y=gcd(a@C{}b)}. Lehmer's
multi-step improvement of the extended Euclidean algorithm is used. See Knuth
section 4.5.2 algorithm L, and @file{mpn/generic/gcdext.c}. This is an
@ma{O(N^2)} algorithm.
The multipliers at each step are found using single limb calculations for
sizes up to @code{GCDEXT_THRESHOLD}, or double limb calculations above that.
The single limb code is faster but doesn't produce full-limb multipliers.
When a CPU has a data-dependent multiplier, meaning one which is faster on
operands with fewer bits, the extra work in the double-limb calculation might
only save some looping overheads, leading to a large @code{GCDEXT_THRESHOLD}.
Currently the single limb calculation doesn't optimize for the small quotients
that often occur, and this can lead to unusually low values of
@code{GCDEXT_THRESHOLD}, depending on the CPU.
An analysis of double-limb calculations can be found in ``A Double-Digit
Lehmer-Euclid Algorithm'' by Jebelean (@pxref{References}). The code in GMP
was developed independently.
It should be noted that when a double limb calculation is used, it's used for
the whole of that GCD; it doesn't fall back to single limb part way through.
This is because as the algorithm proceeds, the inputs @ma{a} and @ma{b} are
reduced, but the cofactors @ma{x} and @ma{y} grow, so the multipliers at each
step are applied to a roughly constant total number of limbs.
@node Jacobi Symbol, , Extended GCD, Greatest Common Divisor Algorithms
@subsection Jacobi Symbol
@code{mpz_jacobi} and @code{mpz_kronecker} are currently implemented with a
simple binary algorithm similar to that described for the GCDs (@pxref{Binary
GCD}). They're not very fast when both inputs are large. Lehmer's multi-step
improvement or a binary based multi-step algorithm is likely to be better.
When one operand fits a single limb, and that includes @code{mpz_kronecker_ui}
and friends, an initial reduction is done with either @code{mpn_mod_1} or
@code{mpn_modexact_1_odd}, followed by the binary algorithm on a single limb.
The binary algorithm is well suited to a single limb, and the whole
calculation in this case is quite efficient.
In all the routines sign changes for the result are accumulated using some bit
twiddling, avoiding table lookups or conditional jumps.
@node Powering Algorithms, Root Extraction Algorithms, Greatest Common Divisor Algorithms, Algorithms
@section Powering Algorithms
@cindex Powering algorithms
@menu
* Normal Powering Algorithm::
* Modular Powering Algorithm::
@end menu
@node Normal Powering Algorithm, Modular Powering Algorithm, Powering Algorithms, Powering Algorithms
@subsection Normal Powering
Normal @code{mpz} or @code{mpf} powering uses a simple binary algorithm,
successively squaring and then multiplying by the base when a 1 bit is seen in
the exponent, as per Knuth section 4.6.3. The ``left to right''
variant described there is used rather than algorithm A, since it's just as
easy and can be done with somewhat less temporary memory.
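The left to right method can be sketched on single words as follows; this is
an illustration of the bit scan, not the @code{mpz} code.

```c
/* Left to right binary powering sketch: find the highest 1 bit of the
   exponent, then for each lower bit square, and multiply by the base
   when the bit is 1 (Knuth section 4.6.3). */
static unsigned long pow_lr (unsigned long base, unsigned e)
{
  if (e == 0)
    return 1;
  int bit = 0;                  /* position of the highest 1 bit */
  for (unsigned t = e; t > 1; t >>= 1)
    bit++;
  unsigned long r = base;       /* top exponent bit consumed */
  for (bit--; bit >= 0; bit--)
    {
      r *= r;                   /* square at every bit */
      if ((e >> bit) & 1)
        r *= base;              /* multiply on a 1 bit */
    }
  return r;
}
```

Only the running result and the base need be held, which is the memory saving
over the right-to-left algorithm A.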
@node Modular Powering Algorithm, , Normal Powering Algorithm, Powering Algorithms
@subsection Modular Powering
Modular powering is implemented using a @ma{2^k}-ary sliding window algorithm,
as per ``Handbook of Applied Cryptography'' algorithm 14.85
(@pxref{References}). @ma{k} is chosen according to the size of the exponent.
Larger exponents use larger values of @ma{k}, the choice being made to
minimize the average number of multiplications that must supplement the
squaring.
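A single-word sketch of the sliding window follows, with @ma{k} fixed at 4 and
plain modular multiplies standing in for the mpn multiplies or REDC; the
function names are invented and this is illustrative only.

```c
#include <stdint.h>

/* Simplified 2^k-ary sliding window modular powering in the flavour
   of HAC algorithm 14.85, k = 4. */
static uint32_t mulmod (uint32_t a, uint32_t b, uint32_t m)
{
  return (uint32_t) ((uint64_t) a * b % m);
}

static uint32_t powm_sliding (uint32_t b, uint32_t e, uint32_t m)
{
  enum { K = 4 };
  uint32_t odd[1 << (K - 1)];           /* b^1, b^3, ..., b^(2^K-1) */
  uint32_t b2 = mulmod (b % m, b % m, m);
  odd[0] = b % m;
  for (int t = 1; t < 1 << (K - 1); t++)
    odd[t] = mulmod (odd[t - 1], b2, m);

  uint32_t r = 1 % m;
  int i = 31;
  while (i >= 0 && ((e >> i) & 1) == 0)
    i--;                                /* skip leading zero bits */
  while (i >= 0)
    {
      if (((e >> i) & 1) == 0)
        { r = mulmod (r, r, m); i--; continue; }  /* lone 0 bit: square */
      /* take a window of at most K bits, ending on a 1 bit */
      int j = i - K + 1 > 0 ? i - K + 1 : 0;
      while (((e >> j) & 1) == 0)
        j++;
      uint32_t w = (e >> j) & ((1u << (i - j + 1)) - 1);
      for (int s = i - j + 1; s > 0; s--)
        r = mulmod (r, r, m);           /* one square per window bit */
      r = mulmod (r, odd[w >> 1], m);   /* w is odd, so table lookup */
      i = j - 1;
    }
  return r;
}
```

Because windows always end on a 1 bit, only the odd powers need to be
precomputed, halving the table for a given @ma{k}.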
The modular multiplies and squares use either a simple division or the REDC
method by Montgomery (@pxref{References}). REDC is a little faster,
essentially saving N single limb divisions in a fashion similar to an exact
remainder (@pxref{Exact Remainder}). The current REDC has some limitations.
It's only @ma{O(N^2)}, so above @code{POWM_THRESHOLD} division becomes faster
and is used. It doesn't attempt to detect small bases, but rather always uses
a REDC form, which is usually a full size operand. And lastly it's only
applied to odd moduli.
@node Root Extraction Algorithms, Radix Conversion Algorithms, Powering Algorithms, Algorithms
@section Root Extraction Algorithms
@cindex Root extraction algorithms
@menu
* Square Root Algorithm::
* Nth Root Algorithm::
* Perfect Square Algorithm::
* Perfect Power Algorithm::
@end menu
@node Square Root Algorithm, Nth Root Algorithm, Root Extraction Algorithms, Root Extraction Algorithms
@subsection Square Root
Square roots are taken using the ``Karatsuba Square Root'' algorithm by Paul
Zimmermann (@pxref{References}). This is expressed in a divide and conquer
form, but as noted in the paper it can also be viewed as a discrete variant of
Newton's method.
In the Karatsuba multiplication range this is an @m{O({3\over2}
M(N/2)),O(1.5*M(N/2))} algorithm, where @ma{M(n)} is the time to multiply two
numbers of @ma{n} limbs. In the FFT multiplication range this grows to a
bound of @m{O(6 M(N/2)),O(6*M(N/2))}. In practice a factor of about 1.5 to
1.8 is found in the Karatsuba and Toom-3 ranges, growing to 2 or 3 in the FFT
range.
The algorithm does all its calculations in integers and the resulting
@code{mpn_sqrtrem} is used for both @code{mpz_sqrt} and @code{mpf_sqrt}.
The extended precision given by @code{mpf_sqrt_ui} is obtained by
padding with zero limbs.
@node Nth Root Algorithm, Perfect Square Algorithm, Square Root Algorithm, Root Extraction Algorithms
@subsection Nth Root
Integer Nth roots are taken using Newton's method with the following
iteration, where @ma{A} is the input and @ma{n} is the root to be taken.
@tex
$$a_{i+1} = {1\over n} \left({A \over a_i^{n-1}} + (n-1)a_i \right)$$
@end tex
@ifnottex
@example
         1         A
a[i+1] = - * ( ---------- + (n-1)*a[i] )
         n     a[i]^(n-1)
@end example
@end ifnottex
The initial approximation @m{a_1,a[1]} is generated bitwise by successively
powering a trial root with or without new 1 bits, aiming to be just above the
true root. The iteration converges quadratically when started from a good
approximation. When @ma{n} is large more initial bits are needed to get good
convergence. The current implementation is not particularly well optimized.
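The iteration can be sketched on word-size operands as below. The initial
approximation here is just a power of 2 at or above the true root, cruder than
the bitwise construction described above, and a final check guards against the
iteration stopping one too high; the names are invented.

```c
#include <stdint.h>

/* small helper: a^n, assuming no overflow for the sizes used */
static uint64_t ipow (uint64_t a, unsigned n)
{
  uint64_t r = 1;
  while (n-- > 0)
    r *= a;
  return r;
}

/* Sketch of floor(A^(1/n)) by the Newton iteration above, using
   integer divisions throughout. */
static uint64_t iroot (uint64_t A, unsigned n)
{
  if (A < 2)
    return A;
  int bits = 0;
  for (uint64_t t = A; t != 0; t >>= 1)
    bits++;
  /* a = 2^ceil(bits/n) is strictly above the true root */
  uint64_t a = (uint64_t) 1 << ((bits + n - 1) / n);
  for (;;)
    {
      uint64_t next = ((n - 1) * a + A / ipow (a, n - 1)) / n;
      if (next >= a)
        break;              /* stopped decreasing */
      a = next;
    }
  while (ipow (a, n) > A)   /* safety: ensure a is the floor root */
    a--;
  return a;
}
```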
@node Perfect Square Algorithm, Perfect Power Algorithm, Nth Root Algorithm, Root Extraction Algorithms
@subsection Perfect Square
@code{mpz_perfect_square_p} is able to quickly exclude most non-squares by
checking whether the input is a quadratic residue modulo some small integers.
The first test is modulo 256 which means simply examining the least
significant byte. Only 44 different values occur as the low byte of a square,
so 82.8% of non-squares can be immediately excluded. Similar tests modulo
primes from 3 to 29 exclude 99.5% of those remaining, or if a limb is 64 bits
then primes up to 53 are used, excluding 99.99%. A single N@cross{}1
remainder using @code{PP} from @file{gmp-impl.h} quickly gives all these
remainders.
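The mod 256 test can be sketched in C as below; a real implementation would
use a precomputed table or bitmask rather than building one at run time, and
@code{maybe_square} is an invented name.

```c
#include <stdint.h>

/* Mark which of the 256 residues occur as the low byte of a square,
   then test with a single lookup.  (i+128)^2 and (256-i)^2 repeat
   i^2 mod 256, so only 44 entries end up set. */
static int maybe_square (uint64_t n)
{
  static unsigned char seen[256];
  static int init = 0;
  if (!init)
    {
      for (int i = 0; i < 256; i++)
        seen[(i * i) & 0xFF] = 1;
      init = 1;
    }
  return seen[n & 0xFF];   /* 1: possibly a square, 0: certainly not */
}
```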
A square root must still be taken for any value that passes the residue tests,
to verify it's really a square and not one of the 0.086% (or 0.000156% for 64
bits) non-squares that get through. @xref{Square Root Algorithm}.
@node Perfect Power Algorithm, , Perfect Square Algorithm, Root Extraction Algorithms
@subsection Perfect Power
Detecting perfect powers is required by some factorization algorithms.
Currently @code{mpz_perfect_power_p} is implemented using repeated Nth root
extractions, though naturally only prime roots need to be considered.
(@xref{Nth Root Algorithm}.)
If a prime divisor @ma{p} with multiplicity @ma{e} can be found, then only
roots which are divisors of @ma{e} need to be considered, much reducing the
work necessary. To this end divisibility by a set of small primes is checked.
@node Radix Conversion Algorithms, Other Algorithms, Root Extraction Algorithms, Algorithms
@section Radix Conversion
@cindex Radix conversion algorithms
Radix conversions are less important than other algorithms. A program
dominated by conversions should probably use a different data representation.
@menu
* Binary to Radix::
* Radix to Binary::
@end menu
@node Binary to Radix, Radix to Binary, Radix Conversion Algorithms, Radix Conversion Algorithms
@subsection Binary to Radix
Conversions from binary to a power-of-2 radix use a simple and fast @ma{O(N)}
bit extraction algorithm.
Conversions from binary to other radices use repeated divisions, first by the
biggest power of the radix that fits in a single limb, then by the radix on
the remainders. This is an @ma{O(N^2)} algorithm and can be quite
time-consuming on large inputs.
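The repeated division can be sketched in C with 32-bit limbs, using @ma{10^9}
as the biggest power of 10 fitting a limb. Buffer handling is simplified and
the names are invented; this is an illustration, not mpn's code.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* One N x 1 division of the little-endian limb array x by 10^9,
   returning the remainder and stripping high zero limbs. */
static uint32_t divmod_big10 (uint32_t *xp, int *xn)
{
  uint64_t r = 0;
  for (int i = *xn - 1; i >= 0; i--)
    {
      r = (r << 32) | xp[i];
      xp[i] = (uint32_t) (r / 1000000000u);
      r %= 1000000000u;
    }
  while (*xn > 0 && xp[*xn - 1] == 0)
    (*xn)--;
  return (uint32_t) r;
}

/* Convert to decimal: each division peels off 9 digits, low to high;
   the groups are then printed high to low, zero padded except the top.
   Destroys x. */
static void to_decimal (char *buf, uint32_t *xp, int xn)
{
  uint32_t rem[40];
  int g = 0, len;
  if (xn == 0 || (xn == 1 && xp[0] == 0))
    { strcpy (buf, "0"); return; }
  while (xn > 0)
    rem[g++] = divmod_big10 (xp, &xn);
  len = sprintf (buf, "%u", (unsigned) rem[--g]);
  while (g > 0)
    len += sprintf (buf + len, "%09u", (unsigned) rem[--g]);
}
```

Only the remainders of the big divisions are then split into digits with
single-limb arithmetic, which is where the saving over dividing by 10
repeatedly comes from.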
@node Radix to Binary, , Binary to Radix, Radix Conversion Algorithms
@subsection Radix to Binary
Conversions from a power-of-2 radix into binary use a simple and fast
@ma{O(N)} bitwise concatenation algorithm.
Conversions from other radices use repeated multiplications, first
accumulating as many digits as fit in a limb, then doing an N@cross{}1
multi-precision multiplication. This is @ma{O(N^2)} and is certainly
sub-optimal on sizes above the Karatsuba multiply threshold.
@node Other Algorithms, Assembler Coding, Radix Conversion Algorithms, Algorithms
@section Other Algorithms
@menu
* Fibonacci Numbers Algorithm::
@end menu
@node Fibonacci Numbers Algorithm, , Other Algorithms, Other Algorithms
@subsection Fibonacci Numbers
The Fibonacci number function @code{mpz_fib_ui} is designed for calculating an
isolated @m{F_n,F[n]} efficiently. It uses three approaches.
One and two limb numbers are held in a table and simply returned. For a
32-bit limb this means up to @m{F_{93},F[93]}, or for a 64-bit limb up to
@m{F_{186},F[186]}. A 64-bit system can be expected to have more memory
available, so it makes sense to use more table data there.
Values past the tables are generated by starting from the last two entries and
iterating the defining Fibonacci formula,
@tex
$$ F_n = F_{n-1} + F_{n-2} $$
@end tex
@ifnottex
@example
F[n] = F[n-1] + F[n-2]
@end example
@end ifnottex
For @ma{n} above @code{FIB_THRESHOLD}, a binary powering algorithm is used,
calculating @m{F_n,F[n]} and @m{F_{n-1},F[n-1]}. The ``doubling'' formulas
are
@tex
$$\eqalign{
F_{2n} &= F_n (2F_{n-1} + F_{n}) \cr
F_{2n-2} &= F_{n-1} (2F_n - F_{n-1}) \cr
}$$
@end tex
@ifnottex
@example
F[2n] = F[n] * (2*F[n-1] + F[n])
F[2n-2] = F[n-1] * (2*F[n] - F[n-1])
@end example
@end ifnottex
And from these a new pair @m{F_{2n},F[2n]},@m{F_{2n-1},F[2n-1]} or
@m{F_{2n+1},F[2n+1]},@m{F_{2n},F[2n]} is obtained with one of
@tex
$$\eqalign{
F_{2n+1} &= 2F_{2n} - F_{2n-2} \cr
F_{2n-1} &= F_{2n} - F_{2n-2} \cr
}$$
@end tex
@ifnottex
@display
F[2n+1] = 2*F[2n]-F[2n-2]
F[2n-1] = F[2n]-F[2n-2]
@end display
@end ifnottex
Powering this way is much faster than simple addition, and in fact
@code{FIB_THRESHOLD} is usually small, with only a few @ma{n} handled by
additions. For large @ma{n}, Karatsuba and higher multiplication algorithms
get used for the multiplications since the operands are roughly the same size.
Alternate formulas using two squares per bit rather than two multiplies exist,
and will be used in the future.
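The powering can be sketched in C with @code{uint64_t} standing in for
@code{mpz_t} (so only values up to @m{F_{93},F[93]} are representable); the
function names are invented.

```c
#include <stdint.h>

/* Return F[n] and F[n-1] by recursing on floor(n/2) and applying the
   doubling formulas above. */
static void fib2 (unsigned n, uint64_t *fn, uint64_t *fn1)
{
  if (n == 1)
    { *fn = 1; *fn1 = 0; return; }
  uint64_t a, b;                        /* a = F[k], b = F[k-1] */
  fib2 (n >> 1, &a, &b);                /* k = floor(n/2) */
  uint64_t f2k  = a * (2 * b + a);      /* F[2k]   */
  uint64_t f2k2 = b * (2 * a - b);      /* F[2k-2] */
  uint64_t f2k1 = f2k - f2k2;           /* F[2k-1] */
  if (n & 1)
    { *fn = 2 * f2k - f2k2; *fn1 = f2k; }   /* F[2k+1], F[2k] */
  else
    { *fn = f2k; *fn1 = f2k1; }
}

static uint64_t fib (unsigned n)
{
  if (n == 0)
    return 0;
  uint64_t fn, fn1;
  fib2 (n, &fn, &fn1);
  return fn;
}
```

Each halving of @ma{n} costs two multiplications, which is where the advantage
over @m{O(n),O(n)} additions comes from.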
@node Assembler Coding, , Other Algorithms, Algorithms
@section Assembler Coding
The assembler subroutines in GMP are the most significant source of speed at
small to moderate sizes. At larger sizes algorithm selection becomes more
important, but of course speedups in low level routines will still speed up
everything proportionally.
Carry handling and widening multiplies that are important for GMP can't be
easily expressed in C. GCC @code{asm} blocks help a lot and are provided in
@file{longlong.h}, but hand coding low level routines invariably offers a
speedup over generic C by a factor of anything from 2 to 10.
@menu
* Assembler Code Organisation::
* Assembler Basics::
* Assembler Carry Propagation::
* Assembler Cache Handling::
* Assembler Floating Point::
* Assembler SIMD Instructions::
* Assembler Software Pipelining::
* Assembler Loop Unrolling::
@end menu
@node Assembler Code Organisation, Assembler Basics, Assembler Coding, Assembler Coding
@subsection Code Organisation
The various @file{mpn} subdirectories contain machine-dependent code, written
in C or assembler. The @file{mpn/generic} subdirectory contains default code,
used when there's no machine-specific version of a particular file.
Each @file{mpn} subdirectory is for an ISA family. Generally 32-bit and
64-bit variants in a family cannot share code and will have separate
directories. Within a family further subdirectories may exist for CPU
variants.
@node Assembler Basics, Assembler Carry Propagation, Assembler Code Organisation, Assembler Coding
@subsection Assembler Basics
@code{mpn_addmul_1} and @code{mpn_submul_1} are the most important routines
for overall GMP performance. All multiplications and divisions come down to
repeated calls to these. @code{mpn_add_n}, @code{mpn_sub_n},
@code{mpn_lshift} and @code{mpn_rshift} are next most important.
On some CPUs assembler versions of the internal functions
@code{mpn_mul_basecase} and @code{mpn_sqr_basecase} give significant speedups,
mainly through avoiding function call overheads. They can also potentially
make better use of a wide superscalar processor.
The restrictions on overlaps between sources and destinations
(@pxref{Low-level Functions}) are designed to facilitate a variety of
implementations. For example, knowing @code{mpn_add_n} won't have partly
overlapping sources and destination means reading can be done far ahead of
writing on superscalar processors, and loops can be vectorized on a vector
processor, depending on the carry handling.
@node Assembler Carry Propagation, Assembler Cache Handling, Assembler Basics, Assembler Coding
@subsection Carry Propagation
The problem that presents most challenges in GMP is propagating carries from
one limb to the next. In functions like @code{mpn_addmul_1} and
@code{mpn_add_n}, carries are the only dependencies between limb operations.
On processors with carry flags, a straightforward CISC style @code{adc} is
generally best. AMD K6 @code{mpn_addmul_1} however is an example of an
unusual set of circumstances where a branch works out better.
On RISC processors the usual approach is an add followed by a compare to
detect overflow. This sort of thing can be seen in
@file{mpn/generic/aors_n.c} too. Some carry
propagation schemes require 4 instructions, meaning at least 4 cycles per
limb, but other schemes may use just 1 or 2. On wide superscalar processors
performance may be completely determined by the number of dependent
instructions between carry-in and carry-out for each limb.
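The RISC-style carry handling in C can be sketched as follows, in the spirit
of @file{mpn/generic/aors_n.c} (a simplified sketch, not the actual GMP code;
it assumes a 64-bit limb and uses a local stand-in for @code{mp_limb_t}):

```c
#include <stddef.h>

typedef unsigned long long limb_t;   /* stand-in for mp_limb_t, 64-bit assumed */

/* Add {ap,n} and {bp,n}, store to {rp,n}, return the carry out.
   Carries are detected with unsigned compares: a wrapped sum is
   smaller than either addend.  */
limb_t
add_n_sketch (limb_t *rp, const limb_t *ap, const limb_t *bp, size_t n)
{
  limb_t cy = 0;
  for (size_t i = 0; i < n; i++)
    {
      limb_t a = ap[i], b = bp[i];
      limb_t s = a + b;            /* may wrap around */
      limb_t c1 = s < a;           /* carry out of a+b */
      s += cy;                     /* add carry from the previous limb */
      limb_t c2 = s < cy;          /* carry out of that add */
      rp[i] = s;
      cy = c1 | c2;                /* at most one of c1, c2 is set */
    }
  return cy;
}
```

Each limb here needs two adds and two compares, which is where a 4-instruction
(hence 4 cycles per limb) carry scheme comes from; a CPU carry flag collapses
the whole thing to a single @code{adc}.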
On vector processors good use can be made of the fact that a carry bit only
very rarely propagates more than one limb. When adding a single bit to a
limb, there's only a carry out if that limb was @code{0xFF...FF} which on
random data will be only 1 in @m{2\GMPraise{@code{mp\_bits\_per\_limb}},
2^mp_bits_per_limb}. @file{mpn/cray/add_n.c} is an example of this, it adds
all limbs in parallel, adds one set of carry bits in parallel and then only
rarely needs to fall through to a loop propagating further carries.
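The idea can be sketched in C (again a simplified sketch, not the actual Cray
code): the first loop has no dependency between limbs and so vectorizes,
while the second pass only rarely meets a cascading carry.

```c
#include <stddef.h>

typedef unsigned long long limb_t;   /* stand-in for mp_limb_t */

/* Two-pass addition in the spirit of mpn/cray/add_n.c.  Pass 1 does all
   the limb adds independently (vectorizable).  Pass 2 folds in the carry
   bits; a second-level carry arises only when a limb was 0xFF...FF and
   received a carry, which is rare on random data.  Assumes 1 <= n <= 64
   for this sketch.  */
limb_t
add_n_twopass (limb_t *rp, const limb_t *ap, const limb_t *bp, size_t n)
{
  limb_t cy[64];
  for (size_t i = 0; i < n; i++)     /* pass 1: independent adds */
    {
      rp[i] = ap[i] + bp[i];
      cy[i] = rp[i] < ap[i];
    }
  for (size_t i = 1; i < n; i++)     /* pass 2: apply carry bits */
    {
      rp[i] += cy[i - 1];
      cy[i] |= (rp[i] == 0 && cy[i - 1]);   /* rare cascaded carry */
    }
  return cy[n - 1];
}
```

As plain C the second pass is still sequential; the point of the real vector
code is that the carry-bit adds are also done in parallel, falling through to
a propagating loop only when a cascade is actually detected.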
On the x86s, GCC (as of version 2.95.2) doesn't generate particularly good code
for the RISC style idioms that are necessary to handle carry bits in
C. Often conditional jumps are generated where @code{adc} or @code{sbb} forms
would be better. Unfortunately this means almost any loop involving carry
bits needs to be coded in assembler for best results.
@node Assembler Cache Handling, Assembler Floating Point, Assembler Carry Propagation, Assembler Coding
@subsection Cache Handling
GMP aims to perform well both on operands that fit entirely in L1 cache and
those that don't. In the assembler subroutines this means prefetching, either
always or when large enough operands are presented.
Pre-fetching sources combines well with loop unrolling, since a prefetch can
be initiated once per unrolled loop (or more than once if the loop processes
more than one cache line).
Pre-fetching destinations won't be necessary if the CPU has a big enough store
queue. Older processors without a write-allocate L1, however, will want
destination prefetching to avoid repeated write-throughs, unless the write
path can keep up with the rate at which destination limbs are produced.
The distance ahead to prefetch will be determined by the rate data is
processed versus the time it takes to bring a line up to L1. Naturally the
net data rate from L2 or RAM will always limit the rate of data processing.
Prefetch distance may also be limited by the number of prefetches the
processor can have in progress at any one time.
If a special prefetch instruction doesn't exist then a plain load can be used,
so long as the CPU supports out-of-order loads. But this may mean having a
second copy of a loop so that the last few limbs can be processed without
prefetching, since reading past the end of an operand must be avoided.
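The shape this takes can be sketched with a C checksum-style loop using
GCC's @code{__builtin_prefetch} (an illustration only; real GMP inner loops
do this in assembler). The sketch assumes 8-byte limbs, a 64-byte cache
line, and an arbitrarily chosen prefetch distance; a separate tail loop
keeps data loads within the operand.

```c
#include <stddef.h>

#define LINE_LIMBS  8   /* assumed: 64-byte cache line, 8-byte limbs */
#define AHEAD       4   /* cache lines to prefetch ahead; a tuning assumption */

unsigned long long
sum_limbs (const unsigned long long *ap, size_t n)
{
  unsigned long long s = 0;
  size_t i = 0;
  for (; i + LINE_LIMBS <= n; i += LINE_LIMBS)
    {
      /* one prefetch per cache line of data processed; a prefetch past
         the end of the operand is harmless since, unlike a plain load,
         it doesn't fault */
      __builtin_prefetch (&ap[i + LINE_LIMBS * AHEAD], 0, 0);
      for (int j = 0; j < LINE_LIMBS; j++)
        s += ap[i + j];
    }
  for (; i < n; i++)     /* tail: no prefetching, no reads past the end */
    s += ap[i];
  return s;
}
```

When a plain load must substitute for the prefetch instruction, the tail copy
of the loop is what avoids reading past the operand.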
@node Assembler Floating Point, Assembler SIMD Instructions, Assembler Cache Handling, Assembler Coding
@subsection Floating Point
Floating point arithmetic is used in GMP for multiplications on CPUs with poor
integer multipliers. Floating point generally doesn't suit other operations
like additions or shifts, due to difficulties implementing carry handling.
With IEEE 53-bit double precision floats, integer multiplications producing up
to 53 bits will give exact results. Breaking a multiplication into
16@cross{}@ma{32@rightarrow{}48} bit pieces is convenient. With some care
though three 21@cross{}@ma{32@rightarrow{}53} bit products can be used to do a
64@cross{}32 multiply, if one of those 21@cross{}32 parts uses the sign bit.
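A 32@cross{}32 multiply via 16@cross{}@ma{32@rightarrow{}48} bit pieces can
be sketched in C as follows (an illustration, not GMP code): both partial
products stay below @m{2^{48},2^48}, comfortably within the 53-bit exact
range of a double.

```c
#include <stdint.h>

/* Multiply 32-bit a by 32-bit b using two exact double-precision
   products, each a 16-bit piece times a 32-bit piece (<= 48 bits).  */
uint64_t
mul_32x32_via_double (uint32_t a, uint32_t b)
{
  double lo = (double) (b & 0xFFFF);   /* low 16 bits of b */
  double hi = (double) (b >> 16);      /* high 16 bits of b */
  double p0 = (double) a * lo;         /* < 2^48, exact */
  double p1 = (double) a * hi;         /* < 2^48, exact */
  return (uint64_t) p0 + ((uint64_t) p1 << 16);
}
```

The recombination is done in integers; since @ma{a@times{}b} is below
@m{2^{64},2^64} the unsigned addition cannot overflow.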
Generally limbs want to be treated as unsigned, but on some CPUs floating
point conversions only treat integers as signed. Copying through a zero
extended memory region or testing and adjusting for a sign bit may be
necessary.
Currently floating point FFTs aren't used for large multiplications. On some
processors they probably have a good chance of being worthwhile, if great care
is taken with precision control.
@node Assembler SIMD Instructions, Assembler Software Pipelining, Assembler Floating Point, Assembler Coding
@subsection SIMD Instructions
The single-instruction multiple-data support in current microprocessors is
aimed at signal processing algorithms where each data point can be treated
more or less independently. There's generally not much support for
propagating the sort of carries that arise in GMP.
A SIMD multiply doing say four 16@cross{}16 bit products only does as much
work as one 32@cross{}32 from GMP's point of view, and needs some shifts and
adds besides. But of course if the SIMD form is fully pipelined and uses
less instruction decoding then it may still be worthwhile.
On the 80x86 chips, MMX has so far found a use in @code{mpn_rshift} and
@code{mpn_lshift} since it allows 64-bit operations, and is used in a special
case for 16-bit multipliers in the P55 @code{mpn_mul_1}. 3DNow and SSE
haven't found a use so far.
@node Assembler Software Pipelining, Assembler Loop Unrolling, Assembler SIMD Instructions, Assembler Coding
@subsection Software Pipelining
Software pipelining consists of scheduling instructions around the branch
point in a loop. For example a loop taking a checksum of an array of limbs
might have a load and an add, but the load wouldn't be for that add, rather
for the one next time around the loop. Each load then is effectively
scheduled back in the previous iteration, allowing latency to be hidden.
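In C the shape of such a loop might look as follows (a sketch; whether the
load actually overlaps the add is up to the compiler and CPU):

```c
#include <stddef.h>

/* Checksum with the load software-pipelined one iteration ahead: each
   iteration issues the load for the *next* limb, so the add never waits
   on a freshly issued load.  Assumes n >= 1.  */
unsigned long long
checksum (const unsigned long long *p, size_t n)
{
  unsigned long long next = p[0];   /* prologue: first load */
  unsigned long long sum = 0;
  for (size_t i = 1; i < n; i++)
    {
      unsigned long long cur = next;
      next = p[i];                  /* load for the next iteration */
      sum += cur;
    }
  return sum + next;                /* epilogue: final add */
}
```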
Naturally this is wanted only when doing things like loads or multiplies that
take a few cycles to complete, and only where a CPU has multiple functional
units so that other work can be done while waiting.
A pipeline with several stages will have a data value in progress at each
stage and each loop iteration moves them along one stage. This is like
juggling.
Within the loop some moves between registers may be necessary to have the
right values in the right places for each iteration. Loop unrolling can help
this, with each unrolled block able to use different registers for different
values, even if some shuffling is still needed just before going back to the
top of the loop.
@node Assembler Loop Unrolling, , Assembler Software Pipelining, Assembler Coding
@subsection Loop Unrolling
Loop unrolling consists of replicating code so that several limbs are
processed in each loop. At a minimum this reduces loop overheads by a
corresponding factor, but it can also allow better register usage, for example
alternately using one register combination and then another. Judicious use of
@command{m4} macros can help avoid lots of duplication in the source code.
Unrolling is generally best done to a power of 2 multiple. This makes it
possible to calculate the number of unrolled loops and the number of remaining
limbs using a shift and mask.
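For instance, with 8-way unrolling both counts come from one shift and one
mask (a trivial sketch):

```c
/* For 8-way unrolling: full unrolled passes and leftover limbs.  */
void
split_for_unroll (unsigned long n, unsigned long *passes, unsigned long *rest)
{
  *passes = n >> 3;   /* n / 8 full unrolled iterations */
  *rest   = n & 7;    /* n % 8 leftover limbs */
}
```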
Sizes not a multiple of the unrolling can be handled in various ways, for
example
@itemize @bullet
@item
A simple loop at the end (or the start) to process the excess. Care will be
wanted that it isn't too much slower than the unrolled part.
@item
A set of binary tests, for example after an 8-limb unrolling, test for 4 more
limbs to process, then a further 2 more or not, and finally 1 more or not.
This will probably take more code space than a simple loop.
@item
A @code{switch} statement, providing separate code for each possible excess,
for example an 8-limb unrolling would have separate code for 0 remaining, 1
remaining, etc, up to 7 remaining. This might take a lot of code, but may be
the best way to optimize all cases in combination with a deep pipelined loop.
@item
A computed jump into the middle of the loop, thus making the first iteration
handle the excess. This should make times smoothly increase with size, which
is attractive, but setups for the jump and adjustments for pointers can be
tricky and could become quite difficult in combination with deep pipelining.
@end itemize
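The computed jump scheme is what C programmers know as Duff's device. A C
sketch for an 8-way unrolled checksum (an illustration only, assuming at
least one limb):

```c
#include <stddef.h>

/* Computed jump into an 8-way unrolled loop: the switch jumps into the
   middle of the body so the first pass handles the n % 8 excess, and
   every later pass does all 8.  Assumes n >= 1.  */
unsigned long long
sum_unrolled (const unsigned long long *p, size_t n)
{
  unsigned long long s = 0;
  size_t passes = (n + 7) >> 3;   /* total passes, the first one partial */
  switch (n & 7)
    {
    case 0: do { s += *p++;
    case 7:      s += *p++;
    case 6:      s += *p++;
    case 5:      s += *p++;
    case 4:      s += *p++;
    case 3:      s += *p++;
    case 2:      s += *p++;
    case 1:      s += *p++;
               } while (--passes > 0);
    }
  return s;
}
```

In assembler the "switch" becomes an indirect jump, and the pointer
adjustments before the jump are where the trickiness mentioned above comes
in, particularly once the loop body is also software pipelined.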
One way to write the setups and finishups for a pipelined unrolled loop is
simply to duplicate the loop at the start and the end, then delete
instructions at the start which have no valid antecedents, and delete
instructions at the end whose results are unwanted. Sizes not a multiple of
the unrolling can then be handled as desired.
@node Contributors, References, Algorithms, Top
@comment node-name, next, previous, up
@unnumbered Contributors
@cindex Contributors
Torbjorn Granlund wrote the original GMP library and is still developing and
maintaining it. Several other individuals and organizations have contributed
to GMP in various ways. Here is a list in chronological order:
Gunnar Sjoedin and Hans Riesel helped with mathematical problems in early
versions of the library.
Richard Stallman contributed to the interface design and revised the first
version of this manual.
Brian Beuning and Doug Lea helped with testing of early versions of the
library and made creative suggestions.
John Amanatides of York University in Canada contributed the function
@code{mpz_probab_prime_p}.
Paul Zimmermann of Inria sparked the development of GMP 2, with his
comparisons between bignum packages.
Ken Weber (Kent State University, Universidade Federal do Rio Grande do Sul)
contributed @code{mpz_gcd}, @code{mpz_divexact}, @code{mpn_gcd}, and
@code{mpn_bdivmod}, partially supported by CNPq (Brazil) grant 301314194-2.
Per Bothner of Cygnus Support helped to set up GMP to use Cygnus' configure.
He has also made valuable suggestions and tested numerous intermediary
releases.
Joachim Hollman was involved in the design of the @code{mpf} interface, and in
the @code{mpz} design revisions for version 2.
Bennet Yee contributed the functions @code{mpz_jacobi} and @code{mpz_legendre}.
Andreas Schwab contributed the files @file{mpn/m68k/lshift.S} and
@file{mpn/m68k/rshift.S}.
The development of the floating point functions of GNU MP 2 was supported in
part by the ESPRIT-BRA (Basic Research Activities) 6846 project POSSO
(POlynomial System SOlving).
GNU MP 2 was finished and released by SWOX AB, SWEDEN, in cooperation with the
IDA Center for Computing Sciences, USA.
Robert Harley of Inria, France and David Seal of ARM, England, suggested clever
improvements for population count.
Robert Harley also wrote highly optimized Karatsuba and 3-way Toom
multiplication functions for GMP 3. He also contributed the ARM assembly
code.
Torsten Ekedahl of the Mathematical department of Stockholm University provided
significant inspiration during several phases of the GMP development. His
mathematical expertise helped improve several algorithms.
Paul Zimmermann wrote the Divide and Conquer division code, the REDC code, the
REDC-based mpz_powm code, and the FFT multiply code. The ECMNET project Paul
is organizing has been a driving force behind many of the optimizations of
GMP 3.
Linus Nordberg wrote the new configure system based on autoconf and
implemented the new random functions.
Kent Boortz made the Macintosh port.
Kevin Ryde wrote a lot of very high quality x86 code, optimized for most CPU
variants. He also made countless other valuable contributions.
Steve Root helped write the optimized alpha 21264 assembly code.
GNU MP 3.1 was finished and released by Torbjorn Granlund and Kevin Ryde.
Torbjorn's work was partially funded by the IDA Center for Computing Sciences,
USA.
(This list is chronological, not ordered by significance. If you have
contributed to GMP but are not listed above, please tell @email{tege@@swox.com}
about the omission!)
@node References, Concept Index, Contributors, Top
@comment node-name, next, previous, up
@unnumbered References
@cindex References
@section Books
@itemize @bullet
@item
Henri Cohen, ``A Course in Computational Algebraic Number Theory'', Graduate
Texts in Mathematics number 138, Springer-Verlag, 1993.
@* @uref{http://www.math.u-bordeaux.fr/~cohen}
@item
Donald E. Knuth, ``The Art of Computer Programming'', volume 2,
``Seminumerical Algorithms'', 3rd edition, Addison-Wesley, 1998.
@* @uref{http://www-cs-faculty.stanford.edu/~knuth/taocp.html}
@item
John D. Lipson, ``Elements of Algebra and Algebraic Computing'',
The Benjamin Cummings Publishing Company Inc, 1981.
@item
Alfred J. Menezes, Paul C. van Oorschot and Scott A. Vanstone, ``Handbook of
Applied Cryptography'', @* @uref{http://www.cacr.math.uwaterloo.ca/hac/}
@item
Richard M. Stallman, ``Using and Porting GCC'', Free Software Foundation, 1999,
available online @uref{http://www.gnu.org/software/gcc/onlinedocs/}, and in
the GCC package @uref{ftp://ftp.gnu.org/pub/gnu/gcc/}
@end itemize
@section Papers
@itemize @bullet
@item
Christoph Burnikel and Joachim Ziegler, ``Fast Recursive Division'',
Max-Planck-Institut fuer Informatik Research Report MPI-I-98-1-022, @*
@uref{http://www.mpi-sb.mpg.de/~ziegler/TechRep.ps.gz}
@item
Torbjorn Granlund and Peter L. Montgomery, ``Division by Invariant Integers
using Multiplication'', in Proceedings of the SIGPLAN PLDI'94 Conference, June
1994. Also available @uref{ftp://ftp.cwi.nl/pub/pmontgom/divcnst.psa4.gz}
(and .psl.gz).
@item
Peter L. Montgomery, ``Modular Multiplication Without Trial Division'', in
Mathematics of Computation, volume 44, number 170, April 1985.
@item
Tudor Jebelean,
``An algorithm for exact division'',
Journal of Symbolic Computation,
volume 15, 1993, pp. 169-180.
Research report version available @*
@uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1992/92-35.ps.gz}
@item
Tudor Jebelean, ``Exact Division with Karatsuba Complexity - Extended
Abstract'', RISC-Linz technical report 96-31, @*
@uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1996/96-31.ps.gz}
@item
Tudor Jebelean, ``Practical Integer Division with Karatsuba Complexity'',
ISSAC 97, pp. 339-341. Technical report available @*
@uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1996/96-29.ps.gz}
@item
Tudor Jebelean, ``A Generalization of the Binary GCD Algorithm'', ISSAC 93,
pp. 111-116. Technical report version available @*
@uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1993/93-01.ps.gz}
@item
Tudor Jebelean, ``A Double-Digit Lehmer-Euclid Algorithm for Finding the GCD
of Long Integers'', Journal of Symbolic Computation, volume 19, 1995,
pp. 145-157. Technical report version also available @*
@uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1992/92-69.ps.gz}
@item
Werner Krandick and Tudor Jebelean, ``Bidirectional Exact Integer Division'',
Journal of Symbolic Computation, volume 21, 1996, pp. 441-455. Early
technical report version also available
@uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1994/94-50.ps.gz}
@item
R. Moenck and A. Borodin, ``Fast Modular Transforms via Division'',
Proceedings of the 13th Annual IEEE Symposium on Switching and Automata
Theory, October 1972, pp. 90-96. Reprinted as ``Fast Modular Transforms'',
Journal of Computer and System Sciences, volume 8, number 3, June 1974,
pp. 366-386.
@item
Arnold Sch@"onhage and Volker Strassen, ``Schnelle Multiplikation grosser
Zahlen'', Computing 7, 1971, pp. 281-292.
@item
Kenneth Weber, ``The accelerated integer GCD algorithm'',
ACM Transactions on Mathematical Software,
volume 21, number 1, March 1995, pp. 111-122.
@item
Paul Zimmermann, ``Karatsuba Square Root'', INRIA Research Report 3805,
November 1999, @uref{http://www.inria.fr/RRRT/RR-3805.html}
@item
Paul Zimmermann, ``A Proof of GMP Fast Division and Square Root
Implementations'', @*
@uref{http://www.loria.fr/~zimmerma/papers/proof-div-sqrt.ps.gz}
@item
Dan Zuras, ``On Squaring and Multiplying Large Integers'', ARITH-11: IEEE
Symposium on Computer Arithmetic, 1993, pp. 260-271. Reprinted as ``More
on Multiplying and Squaring Large Integers'', IEEE Transactions on Computers,
volume 43, number 8, August 1994, pp. 899-908.
@end itemize
@node Concept Index, Function Index, References, Top
@comment node-name, next, previous, up
@unnumbered Concept Index
@printindex cp
@node Function Index, , Concept Index, Top
@comment node-name, next, previous, up
@unnumbered Function and Type Index
@printindex fn
@contents
@bye
@c Local variables:
@c fill-column: 78
@c End: