Releases: sagemath/sage
4.1.2
4.1.1
Release Tour
Sage 4.1.1 was released on August 14th, 2009 (changelog), 165 tickets (PRs) merged, 56 contributors. A nicely formatted version of this release tour can be found here. The following points are some of the foci of this release:
- Improved data conversion between NumPy and Sage
- Solaris support, Solaris specific bug fixes for NTL, zn_poly, Pari/GP, FLINT, MPFR, PolyBoRI, ATLAS
- Upgrade/updates for about 8 standard packages
- Three new optional packages: openopt, GLPK, p_group_cohomology
Algebra
- Adds method `__nonzero__()` to abelian groups (Taylor Sutton) #6510 --- New method `__nonzero__()` for the class `AbelianGroup_class` in `sage/groups/abelian_gps/abelian_group.py`. This method returns `True` if the abelian group in question is non-trivial:
sage: E = EllipticCurve([0, 82])
sage: T = E.torsion_subgroup()
sage: bool(T)
False
sage: T.__nonzero__()
False
Basic Arithmetic
- Implement `real_part()` and `imag_part()` for `CDF` and `CC` (Alex Ghitza) #6159 --- The name `real_part()` is now an alias for the method `real()`; similarly, `imag_part()` is now an alias for `imag()`.
sage: a = CDF(3, -2)
sage: a.real()
3.0
sage: a.real_part()
3.0
sage: a.imag()
-2.0
sage: a.imag_part()
-2.0
sage: i = ComplexField(100).0
sage: z = 2 + 3*i
sage: z.real()
2.0000000000000000000000000000
sage: z.real_part()
2.0000000000000000000000000000
sage: z.imag()
3.0000000000000000000000000000
sage: z.imag_part()
3.0000000000000000000000000000
- Efficient summing using balanced sum (Jason Grout, Mike Hansen) #2737 --- New function `balanced_sum()` in the module `sage/misc/misc_c.pyx` for summing the elements in a list. In some cases, `balanced_sum()` is considerably more efficient than the built-in Python `sum()`, with speed-ups ranging from 26x up to 1410x; a small sanity check of the result follows the timings. The following timing statistics were obtained using the machine sage.math:
sage: R.<x,y> = QQ["x,y"]
sage: L = [x^i for i in range(1000)]
sage: %time sum(L);
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s
sage: %time balanced_sum(L);
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
sage: %timeit sum(L);
100 loops, best of 3: 8.66 ms per loop
sage: %timeit balanced_sum(L);
1000 loops, best of 3: 324 µs per loop
sage:
sage: L = [[i] for i in range(10e4)]
sage: %time sum(L, []);
CPU times: user 84.61 s, sys: 0.00 s, total: 84.61 s
Wall time: 84.61 s
sage: %time balanced_sum(L, []);
CPU times: user 0.06 s, sys: 0.00 s, total: 0.06 s
Wall time: 0.06 s
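To check the result itself (a minimal sketch; `balanced_sum` is assumed to be available as a global name, exactly as in the timings above):
sage: balanced_sum([1, 2, 3, 4, 5])
15
sage: balanced_sum([[1], [2], [3]], [])
[1, 2, 3]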
Calculus
- Wigner `3j`, `6j`, `9j`, Clebsch-Gordan, Racah and Gaunt coefficients (Jens Rasch) #5996 --- A collection of functions for exactly calculating Wigner `3j`, `6j`, `9j`, Clebsch-Gordan, Racah, and Gaunt coefficients. All of these functions evaluate to a rational number times the square root of a rational number. The new functions are defined in the module `sage/functions/wigner.py`. Here are some examples of calculating the Wigner `3j`, `6j`, and `9j` symbols:
sage: wigner_3j(2, 6, 4, 0, 0, 0)
sqrt(5/143)
sage: wigner_3j(0.5, 0.5, 1, 0.5, -0.5, 0)
sqrt(1/6)
sage: wigner_6j(3,3,3,3,3,3)
-1/14
sage: wigner_6j(8,8,8,8,8,8)
-12219/965770
sage: wigner_9j(1,1,1, 1,1,1, 1,1,0 ,prec=64) # ==1/18
0.0555555555555555555
sage: wigner_9j(15,15,15, 15,3,15, 15,18,10, prec=1000)*1.0
-0.0000778324615309539
The Clebsch-Gordan, Racah and Gaunt coefficients can be computed as follows:
sage: simplify(clebsch_gordan(3/2,1/2,2, 3/2,1/2,2))
1
sage: clebsch_gordan(1.5,0.5,1, 1.5,-0.5,1)
1/2*sqrt(3)
sage: racah(3,3,3,3,3,3)
-1/14
sage: gaunt(1,0,1,1,0,-1)
-1/2/sqrt(pi)
sage: gaunt(12,15,5,2,3,-5)
91/124062*sqrt(36890)/sqrt(pi)
sage: gaunt(1000,1000,1200,9,3,-12).n(64)
0.00689500421922113448
Combinatorics
- Optimize the words library code (Vincent Delecroix, Sébastien Labbé, Franco Saliola) #6519 --- An enhancement of the words library code in `sage/combinat/words` to improve its efficiency and reorganize the code. The efficiency gain for creating small words can be up to 6x:
# BEFORE
sage: %timeit Word()
10000 loops, best of 3: 46.6 µs per loop
sage: %timeit Word("abbabaab")
10000 loops, best of 3: 62 µs per loop
sage: %timeit Word([0,1,1,0,1,0,0,1])
10000 loops, best of 3: 59.4 µs per loop
# AFTER
sage: %timeit Word()
100000 loops, best of 3: 6.85 µs per loop
sage: %timeit Word("abbabaab")
100000 loops, best of 3: 11.8 µs per loop
sage: %timeit Word([0,1,1,0,1,0,0,1])
100000 loops, best of 3: 10.6 µs per loop
For the creation of large words, the improvement ranges from about 8000x up to 39000x:
# BEFORE
sage: t = words.ThueMorseWord()
sage: w = list(t[:1000000])
sage: %timeit Word(w)
10 loops, best of 3: 792 ms per loop
sage: u = "".join(map(str, list(t[:1000000])))
sage: %timeit Word(u)
10 loops, best of 3: 777 ms per loop
sage: %timeit Words("01")(u)
10 loops, best of 3: 748 ms per loop
# AFTER
sage: t = words.ThueMorseWord()
sage: w = list(t[:1000000])
sage: %timeit Word(w)
10000 loops, best of 3: 20.3 µs per loop
sage: u = "".join(map(str, list(t[:1000000])))
sage: %timeit Word(u)
10000 loops, best of 3: 21.9 µs per loop
sage: %timeit Words("01")(u)
10000 loops, best of 3: 84.3 µs per loop
All of the above timing statistics were obtained using the machine sage.math. Further timing comparisons can be found at the Sage wiki page.
- Improve the speed of `Permutation.inverse()` (Anders Claesson) #6621 --- In some cases, the speed gain is up to 11x. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: p = Permutation([6, 7, 8, 9, 4, 2, 3, 1, 5])
sage: %timeit p.inverse()
10000 loops, best of 3: 67.1 µs per loop
sage: p = Permutation([19, 5, 13, 8, 7, 15, 9, 10, 16, 3, 12, 6, 2, 20, 18, 11, 14, 4, 17, 1])
sage: %timeit p.inverse()
1000 loops, best of 3: 240 µs per loop
sage: p = Permutation([14, 17, 1, 24, 16, 34, 19, 9, 20, 18, 36, 5, 22, 2, 27, 40, 37, 15, 3, 35, 10, 25, 21, 8, 13, 26, 12, 32, 23, 38, 11, 4, 6, 39, 31, 28, 29, 7, 30, 33])
sage: %timeit p.inverse()
1000 loops, best of 3: 857 µs per loop
# AFTER
sage: p = Permutation([6, 7, 8, 9, 4, 2, 3, 1, 5])
sage: %timeit p.inverse()
10000 loops, best of 3: 24.6 µs per loop
sage: p = Permutation([19, 5, 13, 8, 7, 15, 9, 10, 16, 3, 12, 6, 2, 20, 18, 11, 14, 4, 17, 1])
sage: %timeit p.inverse()
10000 loops, best of 3: 41.4 µs per loop
sage: p = Permutation([14, 17, 1, 24, 16, 34, 19, 9, 20, 18, 36, 5, 22, 2, 27, 40, 37, 15, 3, 35, 10, 25, 21, 8, 13, 26, 12, 32, 23, 38, 11, 4, 6, 39, 31, 28, 29, 7, 30, 33])
sage: %timeit p.inverse()
10000 loops, best of 3: 72.4 µs per loop
- Updating some quirks in `sage/combinat/partition.py` (Andrew Mathas) #5790 --- The functions `r_core()`, `r_quotient()`, `k_core()`, and `partition_sign()` are now deprecated; they are replaced by `core()`, `quotient()`, and `sign()` respectively. The rewrite of the function `Partition()` deprecates the argument `core_and_quotient`; the core and the quotient can now be passed as keywords to `Partition()`.
sage: Partition(core_and_quotient=([2,1], [[2,1],[3],[1,1,1]]))
/home/mvngu/.sage/temp/sage.math.washington.edu/9221/_home_mvngu__sage_init_sage_0.py:1: DeprecationWarning: "core_and_quotient=(*)" is deprecated. Use "core=[*], quotient=[*]" instead.
# -*- coding: utf-8 -*-
[11, 5, 5, 3, 2, 2, 2]
sage: Partition(core=[2,1], quotient=[[2,1],[3],[1,1,1]])
[11, 5, 5, 3, 2, 2, 2]
sage: Partition([6,3,2,2]).r_quotient(3)
/home/mvngu/.sage/temp/sage.math.washington.edu/9221/_home_mvngu__sage_init_sage_0.py:1: DeprecationWarning: r_quotient is deprecated. Please use quotient instead.
# -*- coding: utf-8 -*-
[[], [], [2, 1]]
sage: Partition([6,3,2,2]).quotient(3)
[[], [], [2, 1]]
sage: partition_sign([5,1,1,1,1,1])
/home/mvngu/.sage/temp/sage.math.washington.edu/9221/_home_mvngu__sage_init_sage_0.py:1: DeprecationWarning: "partition_sign deprecated. Use Partition(pi).sign() instead
# -*- coding: utf-8 -*-
1
sage: Partition([5,1,1,1,1,1]).sign()
1
Cryptography
- Improve S-box linear and difference matrix computation (Yann Laigle-Chapuy) #6454 --- Speed up the methods `difference_distribution_matrix()` and `linear_approximation_matrix()` of the class `SBox` in the module `sage/crypto/mq/sbox.py`. The method `linear_approximation_matrix()` now uses the Walsh transform. The speed-up for `difference_distribution_matrix()` can be up to 277x, while that for `linear_approximation_matrix()` can be up to 132x; a short usage sketch appears after the timings. The following timing statistics were ob...
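As a usage sketch (not the original benchmark), both methods can be called on any `SBox`; the 3-bit S-box below is the one used in the `SBox` documentation and serves only as an illustration:
sage: S = mq.SBox(7, 6, 0, 4, 2, 5, 1, 3)
sage: D = S.difference_distribution_matrix()   # matrix of XOR-difference counts
sage: L = S.linear_approximation_matrix()      # matrix of linear biases
sage: D.nrows(), L.nrows()
(8, 8)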
4.1
Release Tour
Sage 4.1 was released on July 09, 2009 (changelog), 91 tickets (PRs) merged, 44 contributors. A nicely formatted version of this release tour can be found here. The following points are some of the foci of this release:
- Upgrade to the Python 2.6.x series
- Support for building Singular with GCC 4.4
- FreeBSD support for the following packages: FreeType, gd, libgcrypt, libgpg-error, Linbox, NTL, Readline, Tachyon
- Combinatorics: irreducible matrix representations of symmetric groups; and Yang-Baxter Graphs
- Cryptography: Mini Advanced Encryption Standard for educational purposes
- Graph theory: a backend for graph theory using Cython (c_graph); and improve accuracy of graph eigenvalues
- Linear algebra: a general package for finitely generated, not-necessarily free R-modules; and multiplicative order for matrices over finite fields
- Miscellaneous: optimized Sudoku solver; a decorator for declaring abstract methods; support Unicode in LaTeX cells (notebook); and optimized integer division
- Number theory: improved random element generation for number field orders and ideals; support Michael Stoll's ratpoints package; and elliptic exponential
- Numerical: computing numerical values of constants using mpmath
- Update/upgrade 19 packages to latest upstream releases
Algebraic Geometry
- Construct an elliptic curve from a plane curve of genus one (Lloyd Kilford, John Cremona) -- New function `EllipticCurve_from_plane_curve()` in the module `sage/schemes/elliptic_curves/constructor.py` to allow the construction of an elliptic curve from a smooth plane cubic with a rational point. Currently, this function uses Magma, so it will not work on machines that do not have Magma installed. Assuming you have Magma installed, we can use `EllipticCurve_from_plane_curve()` to check that the Fermat cubic is isomorphic to the curve with Cremona label "27a1":
sage: x, y, z = PolynomialRing(QQ, 3, 'xyz').gens() # optional - magma
sage: C = Curve(x^3 + y^3 + z^3) # optional - magma
sage: P = C(1, -1, 0) # optional - magma
sage: E = EllipticCurve_from_plane_curve(C, P) # optional - magma
sage: E # optional - magma
Elliptic Curve defined by y^2 + y = x^3 - 7 over Rational Field
sage: E.label() # optional - magma
'27a1'
Here is a quartic example:
sage: u, v, w = PolynomialRing(QQ, 3, 'uvw').gens() # optional - magma
sage: C = Curve(u^4 + u^2*v^2 - w^4) # optional - magma
sage: P = C(1, 0, 1) # optional - magma
sage: E = EllipticCurve_from_plane_curve(C, P) # optional - magma
sage: E # optional - magma
Elliptic Curve defined by y^2 = x^3 + 4*x over Rational Field
sage: E.label() # optional - magma
'32a1'
Basic Arithmetic
- Speed-up integer division (Robert Bradshaw) -- In some cases, integer division is now up to 31% faster than previously. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: a = next_prime(2**31)
sage: b = Integers(a)(100)
sage: %timeit a % b;
1000000 loops, best of 3: 1.12 µs per loop
sage: %timeit 101 // int(5);
1000000 loops, best of 3: 215 ns per loop
sage: %timeit 100 // int(-3)
1000000 loops, best of 3: 214 ns per loop
sage: a = ZZ.random_element(10**50)
sage: b = ZZ.random_element(10**15)
sage: %timeit a.quo_rem(b)
1000000 loops, best of 3: 454 ns per loop
# AFTER
sage: a = next_prime(2**31)
sage: b = Integers(a)(100)
sage: %timeit a % b;
1000000 loops, best of 3: 1.02 µs per loop
sage: %timeit 101 // int(5);
1000000 loops, best of 3: 201 ns per loop
sage: %timeit 100 // int(-3)
1000000 loops, best of 3: 194 ns per loop
sage: a = ZZ.random_element(10**50)
sage: b = ZZ.random_element(10**15)
sage: %timeit a.quo_rem(b)
1000000 loops, best of 3: 313 ns per loop
Combinatorics
- Irreducible matrix representations of symmetric groups (Franco Saliola) -- Support for constructing irreducible representations of the symmetric group. This is based on Alain Lascoux's article Young representations of the symmetric group. The following types of representations are supported:
* Specht representations -- The matrices have integer entries:
sage: chi = SymmetricGroupRepresentation([3, 2]); chi
Specht representation of the symmetric group corresponding to [3, 2]
sage: chi([5, 4, 3, 2, 1])
[ 1 -1 0 1 0]
[ 0 0 -1 0 1]
[ 0 0 0 -1 1]
[ 0 1 -1 -1 1]
[ 0 1 0 -1 1]
* Young's seminormal representation -- The matrices have rational entries:
sage: snorm = SymmetricGroupRepresentation([2, 1], "seminormal"); snorm
Seminormal representation of the symmetric group corresponding to [2, 1]
sage: snorm([1, 3, 2])
[-1/2 3/2]
[ 1/2 1/2]
* Young's orthogonal representation (the matrices are orthogonal) -- These matrices are defined over Sage's `Symbolic Ring`:
sage: ortho = SymmetricGroupRepresentation([3, 2], "orthogonal"); ortho
Orthogonal representation of the symmetric group corresponding to [3, 2]
sage: ortho([1, 3, 2, 4, 5])
[ 1 0 0 0 0]
[ 0 -1/2 1/2*sqrt(3) 0 0]
[ 0 1/2*sqrt(3) 1/2 0 0]
[ 0 0 0 -1/2 1/2*sqrt(3)]
[ 0 0 0 1/2*sqrt(3) 1/2]
You can also create the `CombinatorialClass` of all irreducible matrix representations of a given symmetric group. Particular representations can then be created by providing partitions. For example:
sage: chi = SymmetricGroupRepresentations(5); chi
Specht representations of the symmetric group of order 5! over Integer Ring
sage: chi([5]) # the trivial representation
Specht representation of the symmetric group corresponding to [5]
sage: chi([5])([2, 1, 3, 4, 5])
[1]
sage: chi([1, 1, 1, 1, 1]) # the sign representation
Specht representation of the symmetric group corresponding to [1, 1, 1, 1, 1]
sage: chi([1, 1, 1, 1, 1])([2, 1, 3, 4, 5])
[-1]
sage: chi([3, 2])
Specht representation of the symmetric group corresponding to [3, 2]
sage: chi([3, 2])([5, 4, 3, 2, 1])
[ 1 -1 0 1 0]
[ 0 0 -1 0 1]
[ 0 0 0 -1 1]
[ 0 1 -1 -1 1]
[ 0 1 0 -1 1]
See the documentation of `SymmetricGroupRepresentation` and `SymmetricGroupRepresentations` for more information and examples.
- Yang-Baxter graphs (Franco Saliola) -- Besides being used for constructing the irreducible matrix representations of the symmetric group, Yang-Baxter graphs can also be used to construct the Cayley graph of a finite group. For example:
sage: def left_multiplication_by(g):
....: return lambda h : h*g
....:
sage: G = AlternatingGroup(4)
sage: operators = [ left_multiplication_by(gen) for gen in G.gens() ]
sage: Y = YangBaxterGraph(root=G.identity(), operators=operators); Y
Yang-Baxter graph with root vertex ()
sage: Y.plot(edge_labels=False)
- Yang-Baxter graphs can also be used to construct the permutahedron:
sage: from sage.combinat.yang_baxter_graph import SwapIncreasingOperator
sage: operators = [SwapIncreasingOperator(i) for i in range(3)]
sage: Y = YangBaxterGraph(root=(1,2,3,4), operators=operators); Y
Yang-Baxter graph with root vertex (1, 2, 3, 4)
sage: Y.plot()
- See the documentation of `YangBaxterGraph` for more information and examples.
Cryptography
- Mini Advanced Encryption Standard for educational purposes (Minh Van Nguyen) -- New module `sage/crypto/block_cipher/miniaes.py` supporting the Mini Advanced Encryption Standard (Mini-AES), which allows students to explore the workings of a block cipher. This is a simplified variant of the Advanced Encryption Standard (AES) intended for cryptography education, described in the paper:
  - A. C.-W. Phan. Mini advanced encryption standard (mini-AES): a testbed for cryptanalysis students. Cryptologia, 26(4):283--306, 2002.
  We can encrypt a plaintext using Mini-AES as follows:
sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: K = FiniteField(16, "x")
sage: MS = MatrixSpace(K, 2, 2)
sage: P = MS([K("x^3 + x"), K("x^2 + 1"), K("x^2 + x"), K("x^3 + x^2")]); P
[ x^3 + x x^2 + 1]
[ x^2 + x x^3 + x^2]
sage: key = MS([K("x^3 + x^2"), K("x^3 + x"), K("x^3 + x^2 + x"), K("x^2 + x + 1")]); key
[ x^3 + x^2 x^3 + x]
[x^3 + x^2 + x x^2 + x + 1]
sage: C = maes.encrypt(P, key); C
[ x x^2 + x]
[x^3 + x^2 + x x^3 + x]
Here is the decryption process:
sage: plaintxt = maes.decrypt(C, key)
sage: plaintxt == P
True
We can also work directly with binary strings:
sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: bin = BinaryStrings()
sage: key = bin.encoding("KE"); key
0100101101000101
sage: P = bin.encoding("Encrypt this secret message!")
sage: C = maes(P, key, algorithm="encrypt")
sage: plaintxt = maes(C, key, algorithm="decrypt")
sage: plaintxt == P
True
Or work with integers `n` such that `0 <= n <= 15`:
sage: from sage.crypto.block_cipher.miniaes import MiniAES
sage: maes = MiniAES()
sage: P = [n for n in xrange(16)]; P
[0, 1, 2, 3, 4, 5, 6, 7, 8, ...
4.0.2
Release Tour
Sage 4.0.2 was released on June 18th, 2009 (changelog), 84 tickets (PRs) merged, 36 contributors. A nicely formatted version of this release tour can be found at Wordpress.com. The following points are some of the foci of this release:
- Upgrade NumPy, SciPy, Singular, and FLINT to latest upstream releases.
- A script to automate the testing and merging of tickets.
- LaTeX output for combinatorial graphs.
- New features for linear algebra include Hermite normal form over principal ideal domains.
- New features for number theory include elliptic curve isogeny, and local and global heights for number fields.
Algebra
- Correct precision bound in `hilbert_class_polynomial()` and miscellaneous new functions (John Cremona) -- The two new functions are `elliptic_j()` in `sage/functions/special.py`, and `is_primitive()` in the class `BinaryQF` of `sage/quadratic_forms/binary_qf.py`. The function `elliptic_j(z)` returns the elliptic modular `j`-function evaluated at `z`. The function `is_primitive()` determines whether the binary quadratic form `ax^2 + bxy + cy^2` satisfies `gcd(a,b,c) = 1`, i.e. whether it is primitive. Here are some examples of using these new functions:
sage: elliptic_j(CC(i))
1728.00000000000
sage: elliptic_j(sqrt(-2.0))
8000.00000000000
sage: Q = BinaryQF([6,3,9])
sage: Q.is_primitive()
False
sage: Q = BinaryQF([1,1,1])
sage: Q.is_primitive()
True
- Efficient Lagrange interpolation polynomial (Yann Laigle-Chapuy) -- Calculating the Lagrange interpolation polynomial of a set of points is now up to 48% faster than previously. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: R = PolynomialRing(QQ, 'x')
sage: %timeit R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
1000 loops, best of 3: 824 µs per loop
sage: R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
-23/84*x^3 - 11/84*x^2 + 13/7*x + 1
sage: R = PolynomialRing(GF(2**3,'a'), 'x')
sage: a = R.base_ring().gen()
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])")
625 loops, best of 3: 111 µs per loop
sage: R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])
a^2*x^2 + a^2*x + a^2
# AFTER
sage: R = PolynomialRing(QQ, 'x')
sage: %timeit R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
1000 loops, best of 3: 425 µs per loop
sage: R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
-23/84*x^3 - 11/84*x^2 + 13/7*x + 1
sage: R = PolynomialRing(GF(2**3,'a'), 'x')
sage: a = R.base_ring().gen()
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])")
625 loops, best of 3: 86.4 µs per loop
sage: R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])
a^2*x^2 + a^2*x + a^2
- Deprecate the method `__len__()` for a matrix group (Nicolas Thiery) -- The method `__len__()` of the class `MatrixGroup_gap` in `sage/groups/matrix_gps/matrix_group.py` is now deprecated and will be removed in a future release. To get the number of elements of a matrix group, use the method `cardinality()` instead. The method `order()` is essentially the same as `cardinality()`, so `order()` will also be deprecated in a future release.
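For example (a minimal sketch of the recommended replacement):
sage: F = GF(3)
sage: G = MatrixGroup([matrix(F, 2, [1,0,-1,1]), matrix(F, 2, [1,1,0,1])])
sage: G.cardinality()
24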
Algebraic Geometry
- Optimize hyperelliptic curve arithmetic (Nick Alexander) -- Arithmetic on hyperelliptic curves can be up to 6x faster than previously. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: F = GF(next_prime(10^30))
sage: x = F['x'].gen()
sage: f = x^7 + x^2 + 1
sage: H = HyperellipticCurve(f, 2*x)
sage: J = H.jacobian()(F)
verbose 0 (902: multi_polynomial_ideal.py, dimension) Warning: falling back to very slow toy implementation.
sage: Q = J(H.lift_x(F(1)))
sage: %time ZZ.random_element(10**10) * Q;
CPU times: user 0.65 s, sys: 0.02 s, total: 0.67 s
Wall time: 0.68 s
sage: %time ZZ.random_element(10**10) * Q;
CPU times: user 1.08 s, sys: 0.00 s, total: 1.08 s
Wall time: 1.08 s
sage: %time ZZ.random_element(10**10) * Q;
CPU times: user 0.72 s, sys: 0.02 s, total: 0.74 s
Wall time: 0.74 s
sage: %time ZZ.random_element(10**10) * Q;
CPU times: user 0.67 s, sys: 0.00 s, total: 0.67 s
Wall time: 0.67 s
sage: %time ZZ.random_element(10**10) * Q;
CPU times: user 0.66 s, sys: 0.00 s, total: 0.66 s
Wall time: 0.66 s
# AFTER
sage: F = GF(next_prime(10^30))
sage: x = F['x'].gen()
sage: f = x^7 + x^2 + 1
sage: H = HyperellipticCurve(f, 2*x)
sage: J = H.jacobian()(F)
verbose 0 (919: multi_polynomial_ideal.py, dimension) Warning: falling back to very slow toy implementation.
sage: Q = J(H.lift_x(F(1)))
sage: %time ZZ.random_element(10**10) * Q;
CPU times: user 0.14 s, sys: 0.01 s, total: 0.15 s
Wall time: 0.15 s
sage: %time ZZ.random_element(10**10) * Q;
CPU times: user 0.10 s, sys: 0.00 s, total: 0.10 s
Wall time: 0.10 s
sage: %time ZZ.random_element(10**10) * Q;
CPU times: user 0.09 s, sys: 0.00 s, total: 0.09 s
Wall time: 0.10 s
sage: %time ZZ.random_element(10**10) * Q;
CPU times: user 0.09 s, sys: 0.01 s, total: 0.10 s
Wall time: 0.10 s
sage: %time ZZ.random_element(10**10) * Q;
CPU times: user 0.10 s, sys: 0.00 s, total: 0.10 s
Wall time: 0.11 s
Coding Theory
- Hexads in S(5,6,12) and mathematical blackjack (David Joyner) -- Implements kittens, hexads and mathematical blackjack as described in the following papers:
  - R. Curtis. The Steiner system S(5,6,12), the Mathieu group M_12, and the kitten. In M. Atkinson (ed.) Computational Group Theory, Academic Press, 1984.
  - J. Conway. Hexacode and tetracode -- MINIMOG and MOG. In M. Atkinson (ed.) Computational Group Theory, Academic Press, 1984.
  - J. Conway and N. Sloane. Lexicographic codes: error-correcting codes from game theory. IEEE Transactions on Information Theory, 32:337-348, 1986.
  - J. Kahane and A. Ryba. The hexad game. Electronic Journal of Combinatorics, 8, 2001. http://www.combinatorics.org/Volume_8/Abstracts/v8i2r11.html
Commutative Algebra
- Enable Singular's coefficient rings which are not fields (Martin Albrecht) -- Singular 3-1-0 supports coefficient rings which are not fields; in particular, it now supports `ZZ` and `ZZ/nZZ`. These are now natively supported in Sage, as sketched below.
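A small sketch of arithmetic over such coefficient rings (illustrative only; whether the libsingular implementation is used under the hood depends on how the ring is constructed):
sage: R.<x,y> = PolynomialRing(Zmod(4))
sage: (2*x + y)^2                 # the cross and square terms vanish modulo 4
y^2
sage: S.<u,v> = PolynomialRing(ZZ)
sage: (2*u + 3*v) * (2*u - 3*v)
4*u^2 - 9*v^2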
Cryptography
- S-box to CNF conversion (Martin Albrecht) -- New method `cnf()` in the class `SBox` of `sage/crypto/mq/sbox.py` for converting an S-box to conjunctive normal form. Here are some examples of S-box to CNF conversion:
sage: S = mq.SBox(1,2,0,3); S
(1, 2, 0, 3)
sage: S.cnf()
[(1, 2, -3),
(1, 2, 4),
(1, -2, 3),
(1, -2, -4),
(-1, 2, -3),
(-1, 2, -4),
(-1, -2, 3),
(-1, -2, 4)]
sage: # convert this representation to the DIMACS format
sage: print S.cnf(format='dimacs')
p cnf 4 8
1 2 -3 0
1 2 4 0
1 -2 3 0
1 -2 -4 0
-1 2 -3 0
-1 2 -4 0
-1 -2 3 0
-1 -2 4 0
sage: # as a truth table
sage: log = SymbolicLogic()
sage: s = log.statement(S.cnf(format='symbolic'))
sage: log.truthtable(s)[1:]
[['False', 'False', 'False', 'False', 'False'],
['False', 'False', 'False', 'True', 'False'],
['False', 'False', 'True', 'False', 'False'],
['False', 'False', 'True', 'True', 'True'],
['False', 'True', 'False', 'False', 'True'],
['False', 'True', 'False', 'True', 'True'],
['False', 'True', 'True', 'False', 'True'],
['False', 'True', 'True', 'True', 'True'],
['True', 'False', 'False', 'False', 'True'],
['True', 'False', 'False', 'True', 'True'],
['True', 'False', 'True', 'False', 'True'],
['True', 'False', 'True', 'True', 'True'],
['True', 'True', 'False', 'False', 'True'],
['True', 'True', 'False', 'True', 'True'],
['True', 'True', 'True', 'False', 'True'],
['True', 'True', 'True', 'True', 'True']]
Graph Theory
- LaTeX output for (combinatorial) graphs (Robert Beezer, Fidel Barrera Cruz) -- Implement the option `tkz_style` to output graphs in LaTeX format so that they can be processed by pgf/tikz. Here's an example of the Petersen graph rendered in this way:
g = graphs.PetersenGraph()
g.set_latex_options(tkz_style='Art')
view(g, pdflatex=True)
Group Theory
- Python interface to partition backtrack functions (Robert Miller) -- The new module `sage/groups/perm_gps/partn_ref/refinement_python.pyx` provides Python frontends to the Cython-based partition backtrack functions. This allows one to write the three input functions (`all_children_are_equivalent`, `refine_and_return_invariant`, and `compare_structures`) in pure Python while still using the Cython algorithms. Experimentation with specific partition backtrack implementations no longer requires compilation, since the input functions can be changed dynamically at runtime. Note that this is not intended for production-quality implementations of partition refinement, but rather for experimentation, learning, and use of the Python debugger.
Linear Algebra
- Hermite normal form over principal ideal domains (David Loeffler) -- This adds echelon form (Hermite normal form) over principal ideal domains. Here is an example:
sage: L.<w> = NumberField(x^2 - x + 2)
sage: OL = L.ring_of_integers()
sage: m = matrix(OL, 2, 2, [1,2,3,4+w])
sage: m.echelon_form()
[ 1 -2]
[ 0 w - 2]
sage: m.echelon_form(transformation=True)
([ 1 -2]
[ 0 w - 2], [-3*w - 2 w + 1]
[ -3 1])
Miscellaneous
- Bypassing jsMath with the view command (John Palmieri) -- This provides a way to not always use jsMath when rendering LaTeX for the `view` command in the notebook. It works...
4.0.1
Release Tour
Sage 4.0.1 was released on June 06, 2009 (changelog), 75 tickets (PRs) merged, 38 contributors. A nicely formatted version of this release tour can be found at Wordpress. The following points are some of the foci of this release:
- Nested lists as nicely formatted HTML tables.
- Update FLINT and MPIR to latest upstream releases.
- Speed optimization for algebra, basic arithmetics, and the GAP interface.
- A tool for understanding pickling.
Algebra
- Factoring rational functions (Soroosh Yazdani) -- New method `factor()` in the class `FractionFieldElement` of `sage/rings/fraction_field_element.pyx` to return the factorization of `self` over the base ring. Here's an example of working with this new method:
sage: K.<x> = QQ["x"]
sage: f = (x^3 + x) / (x-3)
sage: f.factor()
(x - 3)^-1 * x * (x^2 + 1)
- Faster `basis_matrix()` for ambient modules (John Cremona) -- The speed-up can be up to 376x. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: K = FreeModule(ZZ, 2000)
sage: %time I = K.basis_matrix()
CPU times: user 292.74 s, sys: 20.11 s, total: 312.85 s
Wall time: 312.90 s
# AFTER
sage: K = FreeModule(ZZ, 2000)
sage: %time I = K.basis_matrix()
CPU times: user 0.41 s, sys: 0.43 s, total: 0.84 s
Wall time: 0.83 s
- Optimize the construction of Lagrange interpolation polynomials (Minh Van Nguyen) -- Rewrite the method `lagrange_polynomial()` in the class `PolynomialRing_field` of `sage/rings/polynomial/polynomial_ring.py` for generating the n-th Lagrange interpolation polynomial. The method now provides two new options:
  - `algorithm` --- (default: `divided_difference`) If `algorithm="divided_difference"`, use the method of divided differences. If `algorithm="neville"`, use a variant of Neville's method to recursively generate the n-th Lagrange interpolation polynomial. This adaptation of Neville's method is more memory efficient than the original, since it does not generate the full Neville table resulting from Neville's recursive procedure; instead it keeps track of only the current and previous rows of that table.
  - `previous_row` --- (default: `None`) This is only relevant when used together with `algorithm="neville"`. Here "previous row" refers to the last row of the Neville table obtained from a previous computation of an n-th Lagrange interpolation polynomial using Neville's method. If the last row is provided, the memory-efficient variant of Neville's method is used to recursively generate a better interpolation polynomial from the results of the previous computation.
  There is also a new method `divided_difference()` to compute the Newton divided-difference coefficients of the n-th Lagrange interpolation polynomial. The following are some timing statistics obtained using sage.math. When the results of previous computations are fed to `lagrange_polynomial()` to produce better interpolation polynomials, the efficiency gain can be up to 42%.
# BEFORE
# using the definition of Lagrange interpolation polynomial
sage: R = PolynomialRing(QQ, 'x')
sage: %timeit R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
1000 loops, best of 3: 1.71 ms per loop
sage: R = PolynomialRing(GF(2**3,'a'), 'x')
sage: a = R.base_ring().gen()
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])")
625 loops, best of 3: 233 µs per loop
# without using precomputed values to generate successively better interpolation polynomials
sage: R = PolynomialRing(QQ, 'x')
sage: timeit("R.lagrange_polynomial([(0,1),(2,2)])");
625 loops, best of 3: 571 µs per loop
sage: # add two more points
sage: timeit("R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])");
125 loops, best of 3: 2.29 ms per loop
sage:
sage: R = PolynomialRing(GF(2**3,'a'), 'x')
sage: a = R.base_ring().gen()
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1)])")
625 loops, best of 3: 76.1 µs per loop
sage: # add another point
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])")
625 loops, best of 3: 229 µs per loop
sage:
sage: R = PolynomialRing(QQ, 'x')
sage: points = [(random(), random()) for i in range(100)]
sage: time R.lagrange_polynomial(points);
CPU times: user 1.21 s, sys: 0.00 s, total: 1.21 s
Wall time: 1.21 s
sage: # add three more points
sage: for i in range(3): points.append((random(), random()))
....:
sage: time R.lagrange_polynomial(points);
CPU times: user 1.28 s, sys: 0.01 s, total: 1.29 s
Wall time: 1.29 s
sage: # add another 100 points
sage: for i in range(100): points.append((random(), random()))
....:
sage: time R.lagrange_polynomial(points);
CPU times: user 5.87 s, sys: 0.02 s, total: 5.89 s
Wall time: 5.89 s
# AFTER
# using the method of divided-difference
sage: R = PolynomialRing(QQ, 'x')
sage: %timeit R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
1000 loops, best of 3: 827 µs per loop
sage: R = PolynomialRing(GF(2**3,'a'), 'x')
sage: a = R.base_ring().gen()
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])")
625 loops, best of 3: 111 µs per loop
# using precomputed values to generate successively better interpolation polynomials
sage: R = PolynomialRing(QQ, 'x')
sage: timeit("R.lagrange_polynomial([(0,1),(2,2)], neville=True)");
625 loops, best of 3: 332 µs per loop
sage: p = R.lagrange_polynomial([(0,1),(2,2)], neville=True);
sage: # add two more points
sage: timeit("R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)], neville=True, previous_row=p)");
625 loops, best of 3: 1.41 ms per loop
sage:
sage: R = PolynomialRing(GF(2**3,'a'), 'x')
sage: a = R.base_ring().gen()
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1)], neville=True)");
625 loops, best of 3: 36.4 µs per loop
sage: p = R.lagrange_polynomial([(a^2+a,a),(a,1)], neville=True);
sage: # add another point
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)], neville=True, previous_row=p)");
625 loops, best of 3: 131 µs per loop
sage:
sage: R = PolynomialRing(QQ, 'x')
sage: points = [(random(), random()) for i in range(100)]
sage: time R.lagrange_polynomial(points, neville=True);
CPU times: user 1.26 s, sys: 0.00 s, total: 1.26 s
Wall time: 1.26 s
sage: p = R.lagrange_polynomial(points, neville=True);
sage: # add three more points
sage: for i in range(3): points.append((random(), random()))
....:
sage: time R.lagrange_polynomial(points, neville=True, previous_row=p);
CPU times: user 0.09 s, sys: 0.00 s, total: 0.09 s
Wall time: 0.08 s
sage: p = R.lagrange_polynomial(points, neville=True, previous_row=p)
sage: # add another 100 points
sage: for i in range(100): points.append((random(), random()))
....:
sage: time R.lagrange_polynomial(points, neville=True, previous_row=p);
CPU times: user 4.62 s, sys: 0.00 s, total: 4.62 s
Wall time: 4.62 s
Basic Arithmetic
- Speed overhaul for `digits`, `exact_log` and `ndigits` (Joel B. Mohler) -- Speed-up for the cases where `exact_log` can conveniently be computed by a log-2 estimate. In some cases, the new code is up to 927x faster than previously. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: n = 5^1000
sage: m = 2975982357823879528793587928793592
sage: %timeit n.exact_log(m)
1000 loops, best of 3: 205 µs per loop
sage: n = 5^50
sage: m = 33
sage: %timeit n.exact_log(m)
10000 loops, best of 3: 29.6 µs per loop
sage: def zlog(m, n, k):
....: for i in range(0, m/1000):
....: a = ZZ.random_element(n) + 2
....: b = ZZ.random_element(k)
....: c = a^b
....: for i in range(1000):
....: c.exact_log(a)
....:
sage: time zlog(100000, 2^100, 100)
CPU times: user 22.59 s, sys: 0.12 s, total: 22.71 s
Wall time: 22.70 s
sage: time zlog(100000, 100, 100)
CPU times: user 3.45 s, sys: 0.02 s, total: 3.47 s
Wall time: 3.47 s
# AFTER
sage: n = 5^1000
sage: m = 2975982357823879528793587928793592
sage: %timeit n.exact_log(m)
1000000 loops, best of 3: 221 ns per loop
sage: n = 5^50
sage: m = 33
sage: %timeit n.exact_log(m)
1000000 loops, best of 3: 526 ns per loop
sage: def zlog(m, n, k):
....: for i in range(0, m/1000):
....: a = ZZ.random_element(n) + 2
....: b = ZZ.random_element(k)
....: c = a^b
....: for i in range(1000):
....: c.exact_log(a)
....:
sage: time zlog(100000, 2^100, 100)
CPU times: user 1.96 s, sys: 0.02 s, total: 1.98 s
Wall time: 1.99 s
sage: time zlog(100000, 100, 100)
CPU times: user 0.05 s, sys: 0.01 s, total: 0.06 s
Wall time: 0.05 s
Calculus
- Deprecate the function `numerical_sqrt()` (Robert Bradshaw, John H. Palmieri) -- The function `numerical_sqrt()` in `sage/misc/functional.py` is now deprecated. Users are advised to use `sqrt()` instead.
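A minimal illustration of the recommended replacement:
sage: sqrt(2.0)
1.41421356237310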
Combinatorics
- Sets enumerated by exploring a search space with a (lazy) tree or graph structure (Nicolas Thiery) -- Extend the `sage.combinat.backtrack` library with other generic tools for constructing large sets whose elements can be enumerated by exploring a search space with a (lazy) tree or graph structure. The following generic utilities have been added (see the sketch after this list):
  - `SearchForest`: depth-first search through a tree described by a "child" function.
  - `GenericBacktracker`: depth-first search through a tree described by a "child" function, with branch pruning, etc.
  - `TransitiveIdeal`: depth-first search through a graph described by a "neighbours" relation.
  - `TransitiveIdealGraded`: breadth-first search through a graph described by a "nei...
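As a small sketch of one of these utilities (the successor function below is made up purely for illustration):
sage: from sage.combinat.backtrack import TransitiveIdeal
sage: succ = lambda i: [(i + 3) % 10, (i + 5) % 10]   # a hypothetical "neighbours" relation
sage: sorted(TransitiveIdeal(succ, [0]))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]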
4.0
Release Tour
Sage 4.0 was released on May 29, 2009 (changelog), 140 tickets (PRs) merged, 40 contributors. A nicely formatted version of this release tour can be found at this Wordpress blog. The following points are some of the foci of this release:
- New symbolics based on Pynac
- Bring doctest coverage up to at least 75%
- Solaris 10 support (at least for gcc 4.3.x + gmake)
- Switch from Clisp to ECL
- OS X 64-bit support
Algebra
- Deprecate the `order()` method on elements of rings (John Palmieri) -- The method `order()` of the class `sage.structure.element.RingElement` is now deprecated and will be removed in a future release. For additive or multiplicative order, use the `additive_order` or `multiplicative_order` method respectively.
- Partial fraction decomposition for irreducible denominators (Gonzalo Tornaria) -- For example, over the ring `ZZ[x]` you can do
sage: R.<x> = ZZ["x"]
sage: q = x^2 / (x - 1)
sage: q.partial_fraction_decomposition()
(x + 1, [1/(x - 1)])
sage: q = x^10 / (x - 1)^5
sage: whole, parts = q.partial_fraction_decomposition()
sage: whole + sum(parts)
x^10/(x^5 - 5*x^4 + 10*x^3 - 10*x^2 + 5*x - 1)
sage: whole + sum(parts) == q
True
and over the polynomial ring `GF(2)[x]`:
sage: R.<x> = GF(2)["x"]
sage: q = (x + 1) / (x^3 + x + 1)
sage: q.partial_fraction_decomposition()
(0, [(x + 1)/(x^3 + x + 1)])
Algebraic Geometry
- Various invariants for genus 2 hyperelliptic curves (Nick Alexander) -- The following invariants of genus 2 hyperelliptic curves are implemented in the module `sage/schemes/hyperelliptic_curves/hyperelliptic_g2_generic.py` (see the sketch after this list):
  - the Clebsch invariants
  - the Igusa-Clebsch invariants
  - the absolute Igusa invariants
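A rough sketch of how these invariants can be accessed (the method names below follow later Sage versions and are assumptions here; outputs omitted):
sage: R.<x> = QQ[]
sage: C = HyperellipticCurve(x^5 + x + 1)   # a genus 2 curve
sage: C.clebsch_invariants()                # the Clebsch invariants (A, B, C, D); names assumed
sage: C.igusa_clebsch_invariants()          # the Igusa-Clebsch invariants (I2, I4, I6, I10); names assumed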
Basic Arithmetic
- Utility methods for integer arithmetic (Fredrik Johansson) -- New methods `trailing_zero_bits()` and `sqrtrem()` for the class `Integer` in `sage/rings/integer.pyx`:
  - `trailing_zero_bits(self)` -- Returns the number of trailing zero bits in `self`, i.e. the exponent of the largest power of 2 dividing `self`.
  - `sqrtrem(self)` -- Returns a pair `(s, r)` where `s` is the integer square root of `self` and `r` is the remainder such that `self = s^2 + r`.
  Here are some examples of working with these new methods:
sage: 13.trailing_zero_bits()
0
sage: (-13).trailing_zero_bits()
0
sage: (-13 >> 2).trailing_zero_bits()
2
sage: (-13 >> 3).trailing_zero_bits()
1
sage: (-13 << 3).trailing_zero_bits()
3
sage: (-13 << 2).trailing_zero_bits()
2
sage: 29.sqrtrem()
(5, 4)
sage: 25.sqrtrem()
(5, 0)
- Casting from float to rationals (Robert Bradshaw) -- One can now create a rational out of a float. Here's an example:
sage: a = float(1.0)
sage: QQ(a)
1
sage: type(a); type(QQ(a))
<type 'float'>
<type 'sage.rings.rational.Rational'>
- Speedup to Integer creation (Robert Bradshaw) -- Memory for recycled integers is only reclaimed if more than 10 limbs are used, giving a significant speedup for small integers. (Previously all integers were reallocated to a single limb, and often then reallocated to two limbs for arithmetic operations even when the result fit into a single limb.)
Combinatorics
- ASCII art output for Dynkin diagrams (Dan Bump) -- Support for ASCII art representation of Dynkin diagrams of a finite Cartan type. Here are some examples:
sage: DynkinDiagram("E6")
O 2
|
|
O---O---O---O---O
1 3 4 5 6
E6
sage: DynkinDiagram(['E',6,1])
O 0
|
|
O 2
|
|
O---O---O---O---O
1 3 4 5 6
E6~
- Crystal of letters for type E (Brant Jones, Anne Schilling) -- Support the crystal of letters for type E corresponding to the highest-weight crystal `B(\Lambda_1)` and its dual `B(\Lambda_6)` (using the Sage labelling convention for the Dynkin nodes). Here are some examples:
sage: C = CrystalOfLetters(['E',6])
sage: C.list()
[[1],
[-1, 3],
[-3, 4],
[-4, 2, 5],
[-2, 5],
[-5, 2, 6],
[-2, -5, 4, 6],
[-4, 3, 6],
[-3, 1, 6],
[-1, 6],
[-6, 2],
[-2, -6, 4],
[-4, -6, 3, 5],
[-3, -6, 1, 5],
[-1, -6, 5],
[-5, 3],
[-3, -5, 1, 4],
[-1, -5, 4],
[-4, 1, 2],
[-1, -4, 2, 3],
[-3, 2],
[-2, -3, 4],
[-4, 5],
[-5, 6],
[-6],
[-2, 1],
[-1, -2, 3]]
sage: C = CrystalOfLetters(['E',6], element_print_style="compact")
sage: C.list()
[+, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z]
Commutative Algebra
- Improved performance for `SR` (Martin Albrecht) -- The speed-up for `mq.SR` (the small-scale AES polynomial system generator) can be up to 6x. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: sr = mq.SR(4, 4, 4, 8, gf2=True, polybori=True, allow_zero_inversions=True)
sage: %time F,s = sr.polynomial_system()
CPU times: user 21.65 s, sys: 0.03 s, total: 21.68 s
Wall time: 21.83 s
# AFTER
sage: sr = mq.SR(4, 4, 4, 8, gf2=True, polybori=True, allow_zero_inversions=True)
sage: %time F,s = sr.polynomial_system()
CPU times: user 3.61 s, sys: 0.06 s, total: 3.67 s
Wall time: 3.67 s
- Symmetric Groebner bases and infinitely generated polynomial rings (Simon King, Mike Hansen) -- The new modules `sage/rings/polynomial/infinite_polynomial_element.py` and `sage/rings/polynomial/infinite_polynomial_ring.py` support computation in polynomial rings with a countably infinite number of variables. Here are some examples of working with these new modules:
sage: from sage.rings.polynomial.infinite_polynomial_element import InfinitePolynomial
sage: X.<x> = InfinitePolynomialRing(QQ)
sage: a = InfinitePolynomial(X, "(x1 + x2)^2"); a
x2^2 + 2*x2*x1 + x1^2
sage: p = a.polynomial()
sage: b = InfinitePolynomial(X, a.polynomial())
sage: a == b
True
sage: InfinitePolynomial(X, int(1))
1
sage: InfinitePolynomial(X, 1)
1
sage: Y.<x,y> = InfinitePolynomialRing(GF(2), implementation="sparse")
sage: InfinitePolynomial(Y, a)
x2^2 + x1^2
sage: X.<x,y> = InfinitePolynomialRing(QQ, implementation="sparse")
sage: A.<a,b> = InfinitePolynomialRing(QQ, order="deglex")
sage: f = x[5] + 2; f
x5 + 2
sage: g = 3*y[1]; g
3*y1
sage: g._p.parent()
Univariate Polynomial Ring in y1 over Rational Field
sage: f2 = a[5] + 2; f2
a5 + 2
sage: g2 = 3*b[1]; g2
3*b1
sage: A.polynomial_ring()
Multivariate Polynomial Ring in b5, b4, b3, b2, b1, b0, a5, a4, a3, a2, a1, a0 over Rational Field
sage: f + g
3*y1 + x5 + 2
sage: p = x[10]^2 * (f + g); p
3*y1*x10^2 + x10^2*x5 + 2*x10^2
Furthermore, the new module `sage/rings/polynomial/symmetric_ideal.py` supports ideals of polynomial rings in a countably infinite number of variables that are invariant under variable permutation. Symmetric reduction of infinite polynomials is provided by the new module `sage/rings/polynomial/symmetric_reduction.pyx`.
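A brief sketch of the symmetric ideal machinery (the `X * (...)` construction of a symmetric ideal is assumed here; output omitted):
sage: X.<x> = InfinitePolynomialRing(QQ)
sage: I = X * (x[1] + x[2])    # the symmetric ideal generated by x1 + x2 (construction assumed)
sage: I.groebner_basis()       # a symmetric Groebner basis of I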
Geometry
- Simplicial complex method for polytopes (Marshall Hampton) -- New method `simplicial_complex()` in the class `Polyhedron` of `sage/geometry/polyhedra.py` for computing the simplicial complex from a triangulation of the polytope. Here's an example:
sage: p = polytopes.cuboctahedron()
sage: p.simplicial_complex()
Simplicial complex with 13 vertices and 20 facets
- Face lattices and f-vectors for polytopes (Marshall Hampton) -- New methods `face_lattice()` and `f_vector()` in the class `Polyhedron` of `sage/geometry/polyhedra.py`:
  - `face_lattice()` -- Returns the face lattice as a poset. Its elements are tuples of (vertices, facets), which keep track of both the vertices in each face and all the facets containing them. This method implements the results of the following paper:
    - V. Kaibel and M. E. Pfetsch. Computing the face lattice of a polytope from its vertex-facet incidences. Computational Geometry, 23(3):281--290, 2002.
  - `f_vector()` -- Returns the f-vector of a polytope as a list.
  Here are some examples:
sage: c5_10 = Polyhedron(vertices = [[i,i^2,i^3,i^4,i^5] for i in xrange(1,11)])
sage: c5_10_fl = c5_10.face_lattice()
sage: [len(x) for x in c5_10_fl.level_sets()]
[1, 10, 45, 100, 105, 42, 1]
sage: p = Polyhedron(vertices = [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1], [0, 0, 0]])
sage: p.f_vector()
[1, 7, 12, 7, 1]
Graph Theory
- Graph colouring (Robert Miller) -- New method `coloring()` of the class `sage.graphs.graph.Graph` for obtaining the first (optimal) colouring found for a graph. Here are some examples of using this new method:
sage: G = Graph("Fooba")
sage: P = G.coloring()
sage: G.plot(partition=P)
sage: H = G.coloring(hex_colors=True)
sage: G.plot(vertex_colors=H)
- Optimize the construction of large Sage graphs (Radoslav Kirov) -- The construction of large Sage graphs is now up to 19x faster than previously. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: D = {}
sage: for i in xrange(10^3):
....: D[i] = [i+1, i-1]
....:
sage: timeit("g = Graph(D)")
5 loops, best of 3: 1.02 s per loop
# AFTER
sage: D = {}
sage: for i in xrange(10^3):
....: D[i] = [i+1, i-1]
....:
sage: timeit("g = Graph(D)")
5 loops, best of 3: 51.2 ms per loop
- Generate size-`n` trees in linear time (Ryan Dingman) -- The speed-up can be up to 3400x, and the efficiency gain grows as `n` becomes larger....
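For instance (a small sketch; `graphs.trees` is assumed to return an iterator over all unlabelled trees on the given number of vertices):
sage: T = graphs.trees(8)
sage: len(list(T))
23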
3.4.2
Release Tour
Sage 3.4.2 was released on May 05, 2009 (changelog), 74 tickets (PRs) merged, 33 contributors. A nicely formatted version of this release tour can be found here. The following points are some of the foci of this release:
- Improve doctest coverage of the Sage library in anticipation of version 4.0.
- New features for symbolic logic.
Algebra
- Comparison of ring coercion morphisms (Alex Ghitza) -- New comparison method `__cmp__()` for the class `RingHomomorphism_coercion` in `sage/rings/morphism.pyx`. The method `__cmp__(self, other)` compares a ring coercion morphism `self` to `other`. Ring coercion morphisms never compare equal to any other data type; if `other` is also a ring coercion morphism, the parents of `self` and `other` are compared. Here are some examples of comparing ring coercion morphisms:
sage: f = ZZ.hom(QQ)
sage: g = ZZ.hom(ZZ)
sage: f == g
False
sage: f > g
True
sage: f < g
False
sage: h = Zmod(6).lift()
sage: f == h
False
sage: f = ZZ.hom(QQ)
sage: g = loads(dumps(f))
sage: f == g
True
- Coercing factors into a common universe (Alex Ghitza) -- New method `base_change(self, U)` in the module `sage/structure/factorization.py` returning the factorization `self` with all of its factors (including the unit part) coerced into the universe `U`. Here's an example of working with the new method `base_change()`:
sage: F = factor(2006)
sage: F.universe()
Integer Ring
sage: P.<x> = ZZ["x"]
sage: F.base_change(P).universe()
Univariate Polynomial Ring in x over Integer Ring
Basic Arithmetic
- Enhancements to symbolic logic (Chris Gorecki) -- This adds a number of utilities for working with symbolic logic:
  - `sage/logic/booleval.py` -- For evaluating boolean formulas.
  - `sage/logic/boolformula.py` -- For boolean evaluation of boolean formulas.
  - `sage/logic/logicparser.py` -- For creating and modifying parse trees of well-formed boolean formulas.
  - `sage/logic/logictable.py` -- For creating and printing truth tables associated with logical statements.
  - `sage/logic/propcalc.py` -- For propositional calculus.
  Here are some examples of working with the new symbolic logic modules:
sage: import sage.logic.propcalc as propcalc
sage: f = propcalc.formula("a&((b|c)^a->c)<->b")
sage: g = propcalc.formula("boolean<->algebra")
sage: (f&~g).ifthen(f)
((a&((b|c)^a->c)<->b)&(~(boolean<->algebra)))->(a&((b|c)^a->c)<->b)
sage: f.truthtable()
a b c value
False False False True
False False True True
False True False False
False True True False
True False False True
True False True False
True True False True
True True True True
- New function `squarefree_divisors()` (Robert Miller) -- The new function `squarefree_divisors(x)` in the module `sage/rings/arith.py` allows iterating over the squarefree divisors (up to units) of the element `x`. Here, `x` is assumed to be an element of any ring for which the function `prime_divisors()` works. Below are some examples of working with the new function `squarefree_divisors()`:
sage: list(squarefree_divisors(7))
[1, 7]
sage: list(squarefree_divisors(6))
[1, 2, 3, 6]
sage: list(squarefree_divisors(81))
[1, 3]
Combinatorics
- Make `cartan_type` a method rather than an attribute (Dan Bump) -- In the module `sage/combinat/root_system/weyl_characters.py`, `cartan_type` is now a method, not an attribute. For example, one can now invoke `cartan_type()` as a method like so:
sage: A2 = WeylCharacterRing("A2")
sage: A2([1,0,0]).cartan_type()
['A', 2]
Commutative Algebra
- Improved performance in `MPolynomialRing_libsingular` (Simon King) -- This provides some optimization of the method `MPolynomialRing_libsingular.__call__()`. In some cases, the efficiency gain is up to 19%. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: R = PolynomialRing(QQ,5,"x")
sage: S = PolynomialRing(QQ,6,"x")
sage: T = PolynomialRing(QQ,5,"y")
sage: U = PolynomialRing(GF(2),5,"x")
sage: p = R("x0*x1+2*x4+x3*x1^2")^4
sage: timeit("q = S(p)")
625 loops, best of 3: 321 µs per loop
sage: timeit("q = T(p)")
625 loops, best of 3: 348 µs per loop
sage: timeit("q = U(p)")
625 loops, best of 3: 435 µs per loop
# AFTER
sage: R = PolynomialRing(QQ,5,"x")
sage: S = PolynomialRing(QQ,6,"x")
sage: T = PolynomialRing(QQ,5,"y")
sage: U = PolynomialRing(GF(2),5,"x")
sage: p = R("x0*x1+2*x4+x3*x1^2")^4
sage: timeit("q = S(p)")
625 loops, best of 3: 316 µs per loop
sage: timeit("q = T(p)")
625 loops, best of 3: 281 µs per loop
sage: timeit("q = U(p)")
625 loops, best of 3: 392 µs per loop
Graph Theory
- Default edge color is black (Robert Miller) -- If only one edge of a graph is colored red, for example, then the remaining edges should be colored with black by default. Here's an example:
sage: G = graphs.CompleteGraph(5)
sage: G.show(edge_colors={'red':[(0,1)]})
Interfaces
- Split off the FriCAS interface from the Axiom interface (Mike Hansen, Bill Page) -- The FriCAS interface is now split off from the Axiom interface and can be found in the module `sage/interfaces/fricas.py`.
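For example (requires the optional FriCAS package; shown here only as a small sketch):
sage: fricas('2^64')     # optional - fricas
18446744073709551616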
Modular Forms
- Vast speedup in `P1List` construction (John Cremona) -- This provides a huge improvement in the `P1List()` constructor for Manin symbols. The efficiency gain ranges from 27% up to 6x. Here are some timing statistics obtained using the machine sage.math:
# BEFORE
sage: time P1List(100000)
CPU times: user 4.11 s, sys: 0.08 s, total: 4.19 s
Wall time: 4.19 s
The projective line over the integers modulo 100000
sage: time P1List(1000000)
CPU times: user 192.22 s, sys: 0.60 s, total: 192.82 s
Wall time: 192.84 s
The projective line over the integers modulo 1000000
sage: time P1List(1009*1013)
CPU times: user 31.20 s, sys: 0.05 s, total: 31.25 s
Wall time: 31.25 s
The projective line over the integers modulo 1022117
sage: time P1List(1000003)
CPU times: user 35.92 s, sys: 0.05 s, total: 35.97 s
Wall time: 35.97 s
The projective line over the integers modulo 1000003
# AFTER
sage: time P1List(100000)
CPU times: user 0.78 s, sys: 0.02 s, total: 0.80 s
Wall time: 0.80 s
The projective line over the integers modulo 100000
sage: time P1List(1000000)
CPU times: user 27.82 s, sys: 0.21 s, total: 28.03 s
Wall time: 28.02 s
The projective line over the integers modulo 1000000
sage: time P1List(1009*1013)
CPU times: user 21.59 s, sys: 0.04 s, total: 21.63 s
Wall time: 21.63 s
The projective line over the integers modulo 1022117
sage: time P1List(1000003)
CPU times: user 26.19 s, sys: 0.05 s, total: 26.24 s
Wall time: 26.24 s
The projective line over the integers modulo 1000003
Notebook
- Downloading and uploading folders of worksheets (Robert Bradshaw) -- One can now download and upload entire folders of worksheets at once, instead of individual worksheets one at a time. This also allows downloading only selected worksheets in one go.
- Reduce the number of actions that trigger taking a snapshot (William Stein, Rob Beezer) -- Snapshots now need to be explicitly requested by clicking the save button. This greatly reduces the number of unnecessary snapshots.
Number Theory
- Enhanced function `prime_pi()` for counting primes (R. Andrew Ohana) -- The improved function `prime_pi()` in `sage/functions/prime_pi.pyx` implements the prime-counting function `pi(n)`. Essentially, `prime_pi(n)` counts the number of primes less than or equal to `n`. Here are some examples:
sage: prime_pi(10)
4
sage: prime_pi(100)
25
sage: prime_pi(-10)
0
sage: prime_pi(-0.5)
0
sage: prime_pi(10^10)
455052511
- Action of the Galois group on cusps (William Stein) -- New method `galois_action()` in `sage/modular/cusps.py` for computing the action of the Galois group on cusps for congruence subgroups. The relevant algorithm is taken from section 1.3 of the following text:
  - G. Stevens. Arithmetic on Modular Curves. Progress in Mathematics, volume 20, Birkhauser, 1982.
  Here are some examples of working with `galois_action()`:
sage: Cusp(1/10).galois_action(3, 50)
1/170
sage: Cusp(oo).galois_action(3, 50)
Infinity
sage: Cusp(0).galois_action(3, 50)
0
- Finding elliptic curves with prescribed reduction over `QQ` (John Cremona) -- New function `EllipticCurves_with_good_reduction_outside_S()` for constructing elliptic curves with good reduction outside a finite set of primes. This essentially implements the algorithm presented in the following paper, but currently only over `QQ`:
  - J. Cremona and M. Lingham. Finding all elliptic curves with good reduction outside a given set of primes. Experimental Mathematics, 16(3):303--312, 2007.
  Here are some examples of working with this new function:
sage: EllipticCurves_with_good_reduction_outside_S([])
[]
sage: elist = EllipticCurves_with_good_reduction_outside_S([2])
sage: elist
[Elliptic Curve defined by y^2 = x^3 + 4*x over Rational Field,
Elliptic Curve defined by y^2 = x^3 - x over Rational Field,
Elliptic Curve defined by y^2 = x^3 - 11*x - 14 over Rational Field,
Elliptic Curve defined by y^2 = x^3 - 11*x + 14 over Rational Field,
Elliptic Curve defined by y^2 = x^3 - 4*x over Rational Field,
Elliptic Curve defined by y^2 = x^3 - 44*x - 112 over Rational Field,
Elliptic Curve defined by...
3.4.1
Release Tour
Sage 3.4.1 was released on April 22nd, 2009, 226 tickets (PRs) merged. For the official, comprehensive release note, please refer to sage-3.4.1.txt. A nicely formatted version of this release tour can be found at this blog. The following points are some of the foci of this release:
- Upgrade to Cython 0.11.
- Rewrite `fast_float` to support more data types.
- Improved UTF8/Unicode support in the Notebook.
- Latest upstream versions of MPIR and FLINT.
- Pizer's algorithm for computing Brandt Modules and Brandt Matrices.
- Quadratic twists for p-adic L-functions.
- Overconvergent modular forms for genus 0 primes.
- Many improvements for computing with number field.
Algebra
- Optimized `is_primitive()` method (Ryan Hinton) -- The method `is_primitive()` in `sage/rings/polynomial/polynomial_element.pyx` is used to determine whether or not a polynomial is primitive over a finite field. Prime divisors are calculated during the primitivity test; when the degree n is large, calculating those prime divisors can dominate the running time of the test. The `is_primitive()` method now has the optional argument `n_prime_divs` for supplying precomputed prime divisors, which can result in a performance improvement of up to 4x. On the machine sage.math, one has the following timing statistics:
sage: R.<x> = PolynomialRing(GF(2), 'x')
sage: nn = 128
sage: max_order = 2^nn - 1
sage: pdivs = max_order.prime_divisors()
sage: poly = R.random_element(nn)
sage: while not (poly.degree()==nn and poly.is_primitive(max_order, pdivs)):
....: poly = R.random_element(nn)
....:
sage: %timeit poly.is_primitive() # without n_prime_divs optional argument
10 loops, best of 3: 285 ms per loop
sage: %timeit poly.is_primitive(max_order, pdivs) # with n_prime_divs optional argument
10 loops, best of 3: 279 ms per loop
sage:
sage: nn = 256
sage: max_order = 2^nn - 1
sage: pdivs = max_order.prime_divisors()
sage: poly = R.random_element(nn)
sage: while not (poly.degree()==nn and poly.is_primitive(max_order, pdivs)):
....: poly = R.random_element(nn)
....:
sage: %timeit poly.is_primitive() # without n_prime_divs optional argument
10 loops, best of 3: 3.22 s per loop
sage: %timeit poly.is_primitive(max_order, pdivs) # with n_prime_divs optional argument
10 loops, best of 3: 700 ms per loop
- Speed-up the method `order_from_multiple()` (John Cremona) -- For groups of prime order n, every non-identity element has order n. The previous implementation of `order_from_multiple()` computed g^n twice when g is not the identity and n is prime; this double computation is now avoided. In addition, for each prime p dividing the given multiple of the order, the last multiplication/powering by p is now skipped, saving computation time whenever the p-exponent of the order is maximal. The new implementation of `order_from_multiple()` results in a performance improvement of up to 25%. Here are some timing statistics obtained using the machine sage.math:
# BEFORE
sage: F = GF(2^1279, 'a')
sage: n = F.cardinality() - 1 # Mersenne prime
sage: order_from_multiple(F.random_element(), n, [n], operation='*') == n
True
sage: %timeit order_from_multiple(F.random_element(), n, [n], operation='*') == n
10 loops, best of 3: 63.7 ms per loop
# AFTER
sage: F = GF(2^1279, 'a')
sage: n = F.cardinality() - 1 # Mersenne prime
sage: %timeit order_from_multiple(F.random_element(), n, [n], operation='*') == n
10 loops, best of 3: 47.2 ms per loop
- Speed-up in irreducibility test (Ryan Hinton) -- For polynomials over the finite field `GF(2)`, the irreducibility test is now up to 40,000 times faster than previously. On a 64-bit Debian/squeeze machine with a Core 2 Duo running at 2.33 GHz, one has the following timing improvements:
# BEFORE
sage: P.<x> = GF(2)[]
sage: f = P.random_element(1000)
sage: %timeit f.is_irreducible()
10 loops, best of 3: 948 ms per loop
sage:
sage: f = P.random_element(10000)
sage: %time f.is_irreducible()
# gave up because it took minutes!
# AFTER
sage: P.<x> = GF(2)[]
sage: f = P.random_element(1000)
sage: %timeit f.is_irreducible()
10000 loops, best of 3: 22.7 µs per loop
sage:
sage: f = P.random_element(10000)
sage: %timeit f.is_irreducible()
1000 loops, best of 3: 394 µs per loop
sage:
sage: f = P.random_element(100000)
sage: %timeit f.is_irreducible()
100 loops, best of 3: 10.4 ms per loop
Furthermore, on Debian 5.0 Lenny with kernel 2.6.24-1-686, an Intel(R) Celeron(R) CPU running at 2.00GHz with 1.0GB of RAM, one has the following timing statistics:
# BEFORE
sage: P.<x> = GF(2)[]
sage: f = P.random_element(1000)
sage: %timeit f.is_irreducible()
10 loops, best of 3: 1.14 s per loop
sage:
sage: f = P.random_element(10000)
sage: %time f.is_irreducible()
CPU times: user 4972.13 s, sys: 2.83 s, total: 4974.95 s
Wall time: 5043.02 s
False
# AFTER
sage: P.<x> = GF(2)[]
sage: f = P.random_element(1000)
sage: %timeit f.is_irreducible()
10000 loops, best of 3: 40.7 µs per loop
sage:
sage: f = P.random_element(10000)
sage: %timeit f.is_irreducible()
1000 loops, best of 3: 930 µs per loop
sage:
sage: f = P.random_element(100000)
sage: %timeit f.is_irreducible()
10 loops, best of 3: 27.6 ms per loop
Algebraic Geometry
- Refactor dimension() method for schemes (Alex Ghitza) -- Implement methods dimension_absolute() and dimension_relative(), where dimension() is an alias for dimension_absolute(). Here are some examples of using dimension_absolute() and dimension():
sage: A2Q = AffineSpace(2, QQ)
sage: A2Q.dimension_absolute()
2
sage: A2Q.dimension()
2
And here's an example demonstrating the use of dimension_relative():
sage: S = Spec(ZZ)
sage: S.dimension_relative()
0
- Plotting affine and projective curves (Alex Ghitza) -- Plotting usability has been improved, so it is now easier to plot affine and projective curves. For example, we can plot a 5-nodal curve of degree 11:
sage: R.<x, y> = ZZ[]
sage: C = Curve(32*x^2 - 2097152*y^11 + 1441792*y^9 - 360448*y^7 + 39424*y^5 - 1760*y^3 + 22*y - 1)
sage: C.plot((x, -1, 1), (y, -1, 1), plot_points=400)
- Now we plot an elliptic curve:
sage: E = EllipticCurve('101a')
sage: C = Curve(E)
sage: C.plot()
Basic Arithmetic
- Speed-up in dividing a polynomial by an integer (William Stein, Burcin Erocal) -- Dividing a polynomial by an integer is now up to 6x faster than previously. On Debian 5.0 Lenny with kernel 2.6.24-1-686, an Intel(R) Celeron(R) CPU running at 2.00GHz with 1.0GB of RAM, one has the following timing statistics:
# BEFORE
sage: R.<x> = ZZ["x"]
sage: f = 389 * R.random_element(1000)
sage: timeit("f//389")
625 loops, best of 3: 312 µs per loop
# AFTER
sage: R.<x> = ZZ["x"]
sage: f = 389 * R.random_element(1000)
sage: timeit("f//389")
625 loops, best of 3: 48.3 µs per loop
- New fast_float supports more data types with improved performance (Carl Witty) -- A rewrite of fast_float to support multiple types. We get accelerated evaluation over RealField(k) as well as RDF, the real double field. Compared with the previous fast_float, the performance improvement ranges from 2% faster to more than 2x as fast. An extended list of benchmark details is available at #5093. (A small illustrative sketch appears after the solve_mod timings below.)
- Complex double fast callable interpreter (Robert Bradshaw) -- Support for complex double floating point (CDF). The new interpreter is implemented in the class CDFInterpreter of sage/ext/gen_interpreters.py.
- Speed-up the function solve_mod() (Wilfried Huss) -- The function solve_mod() is now up to 5x faster when solving an equation or a list of equations modulo a given integer modulus. On the machine sage.math, we have the following timing statistics:
# BEFORE
sage: x, y = var('x,y')
sage: time solve_mod([x^2 + 2 == x, x^2 + y == y^2], 14)
CPU times: user 0.01 s, sys: 0.02 s, total: 0.03 s
Wall time: 0.18 s
[(4, 2), (4, 6), (4, 9), (4, 13)]
sage:
sage: x,y,z = var('x,y,z')
sage: time solve_mod([x^5 + y^5 == z^5], 3)
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.10 s
[(0, 0, 0),
(0, 1, 1),
(0, 2, 2),
(1, 0, 1),
(1, 1, 2),
(1, 2, 0),
(2, 0, 2),
(2, 1, 0),
(2, 2, 1)]
# AFTER
sage: x, y = var('x,y')
sage: time solve_mod([x^2 + 2 == x, x^2 + y == y^2], 14)
CPU times: user 0.03 s, sys: 0.01 s, total: 0.04 s
Wall time: 0.16 s
[(4, 2), (4, 6), (4, 9), (4, 13)]
sage:
sage: x,y,z = var('x,y,z')
sage: time solve_mod([x^5 + y^5 == z^5], 3)
CPU times: user 0.01 s, sys: 0.01 s, total: 0.02 s
Wall time: 0.02 s
[(0, 0, 0),
(0, 1, 1),
(0, 2, 2),
(1, 0, 1),
(1, 1, 2),
(1, 2, 0),
(2, 0, 2),
(2, 1, 0),
(2, 2, 1)]
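To illustrate the first two items above, here is a minimal sketch (not part of the original benchmarks; the expression and sample inputs are chosen arbitrarily) showing evaluation via fast_float over doubles and via the fast_callable entry point with the CDF domain:
sage: from sage.ext.fast_eval import fast_float
sage: x, y = var('x, y')
sage: f = fast_float(sqrt(x^2 + y^2), 'x', 'y')          # compiled for fast double-precision evaluation
sage: f(3.0, 4.0)
5.0
sage: g = fast_callable(x^2 + 1, vars=[x], domain=CDF)   # driven by the CDF interpreter
sage: g(CDF(0, 1))                                       # x = I, so x^2 + 1 = 0
0.0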
- Optimized binomial function when an input is real or complex floating point (Dan Drake) -- The function binomial() for computing binomial coefficients is now much faster; in some cases, the speed-up can be as much as 4000x. Here are some timing statistics obtained using the machine sage.math:
# BEFORE
sage: x, y = 1140000.78, 420000
sage: %timeit binomial(x, y)
10 loops, best of 3: 1.19 s per loop
sage:
sage: x, y = RR(pi^5), 10
sage: %timeit binomial(x, y)
10000 loops, best of 3: 28.2 µs per loop
sage:
sage...
3.4
Release Tour
Sage 3.4 was released on March 11, 2009, 110 tickets (PRs) merged. For the official, comprehensive release note, please refer to sage-3.4.txt. A nicely formatted version of this release tour can be found here. The following points are some of the foci of this release:
- Merging of Jon Hanke's quadratic forms code
- Sphinxifying the Sage documentation and moving its content to the main Sage development repository
Here's a summary of features in this release, categorized under various headings.
Algebra
- Rewrite the quaternion algebra module (Jonathan Bober, William Stein, Gonzalo Tornaria, John Voight, Michael Abshoff) -- The quaternion algebra module has been completely rewritten to enhance its performance. Note that a number of functions have been removed in the rewritten version.
Graph Theory
- Testing for graph isomorphism (Robert L. Miller) -- Enhanced performance of the graph isomorphism test is_isomorphic(), which can now be up to 3x faster than previously.
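For instance (a routine check, not one of the ticket's benchmarks), two relabelled copies of the Petersen graph are recognised as isomorphic:
sage: G = graphs.PetersenGraph()
sage: H = G.copy()
sage: H.relabel(dict((i, 9 - i) for i in range(10)))   # same graph, relabelled vertices
sage: G.is_isomorphic(H)
True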
Graphics
- Arrowheads in multi-edge digraphs (Emily Kirkman) -- This feature has been in Sage even before this release. However, in version 3.4, Emily worked on enhancing the visualization of multi-edge digraphs. In a multi-edge digraph, the arrowheads pointing to a vertex are now clearly displayed. Here's a plot of a multi-edge digraph, produced using the following code:
sage: S = SupersingularModule(389)
sage: D = DiGraph(S.hecke_matrix(2))
sage: D.plot(vertex_size=50).show(figsize=10)
Linear Algebra
- Optimize transpose and antitranspose for dense matrices (Rob Beezer, Yann Laigle-Chapuy) -- A rewrite of sections of the method transpose() in the class sage.matrix.matrix_dense.Matrix_dense, resulting in an improvement of up to 5x, depending on the computer architecture. For a system with architecture
CPU: Intel(R) Core(TM)2 Duo CPU T5450 @ 1.66GHz
RAM: 2066004 KB
Linux kernel: 2.6.24-23
one would obtain the following timing and memory statistics for a 3000x3000 identity matrix:
# BEFORE
sage: m=identity_matrix(3000)
sage: time m2=m.transpose(); m3=m.antitranspose()
CPU times: user 14.13 s, sys: 1.11 s, total: 15.44 s
Wall time: 15.44 s
sage: get_memory_usage()
1254.28125
# AFTER
sage: m=identity_matrix(3000)
sage: time m2=m.transpose(); m3=m.antitranspose()
CPU times: user 2.92 s, sys: 0.46 s, total: 3.38 s
Wall time: 3.38 s
sage: get_memory_usage()
835.6171875
Furthermore, on KUbuntu 8.10 with architecture
CPU: Intel(R) Core(TM)2 Duo CPU E8500 @ 3.16GHz
RAM: 8 GB
for a 5000x5000 identity matrix, the previous time was 11.94s and the new time is about 2.46s.
- Optimize transpose for integer and rational dense matrices (Yann Laigle-Chapuy) -- New methods transpose() and antitranspose() for the classes sage.matrix.matrix_integer_dense.Matrix_integer_dense and sage.matrix.matrix_rational_dense.Matrix_rational_dense. The new transpose() returns the transpose of an integer (respectively rational) dense matrix without changing the original matrix itself, and the new antitranspose() returns the antitranspose, likewise leaving the original matrix unchanged.
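For example (an illustrative session, not taken from the ticket itself):
sage: A = matrix(ZZ, 2, 3, range(6)); A
[0 1 2]
[3 4 5]
sage: A.transpose()
[0 3]
[1 4]
[2 5]
sage: A.antitranspose()
[5 2]
[4 1]
[3 0]
sage: A        # the original matrix is left unchanged
[0 1 2]
[3 4 5]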
Packages
- Update the libgcrypt spkg to libgcrypt-1.4.3.p0.spkg (Michael Abshoff) -- Originally based on GNU Privacy Guard (GnuPG), libgcrypt is a general-purpose library of cryptographic primitives. As such, it does not provide an implementation of any cryptographic protocols, but rather serves as a collection of cryptographic building blocks.
- Update the Python spkg to python-2.5.2.p9.spkg (Michael Abshoff) -- Python is a general-purpose, object-oriented programming language. Together with various other open source components, Python serves as a fundamental tool that unifies the components of Sage under a common interface.
- Upgrade MPFR to the version 2.4.1 upstream release (Michael Abshoff) -- Version 2.4.1 of MPFR fixes critical security vulnerabilities in mpfr_snprintf() and mpfr_vsnprintf(), in particular buffer overflow and off-by-one vulnerabilities. These vulnerabilities were reported in previous versions of MPFR and can be exploited to compromise an application that uses the MPFR library. Users are advised to upgrade to the new MPFR spkg mpfr-2.4.1.spkg. A Secunia security advisory can be found here, and a SecurityFocus security advisory is also available.
- Upgrade PyCrypto to the version 2.0.1 upstream release (Michael Abshoff) -- Version 2.0.1 fixes a buffer overflow vulnerability in the hash functions SHA256 and RIPEMD, which previously failed to adequately verify user-supplied input. The affected module is ARC2, an implementation of the RC2 block cipher. A successful exploit may allow an attacker to execute arbitrary code in the context of applications that use the previously vulnerable ARC2 module, while a failed attempt may lead to a denial of service. Users are advised to upgrade to the new PyCrypto spkg pycrypto-2.0.1.p3.spkg. A SecurityFocus security advisory can be found here.
Quadratic Forms
- Merge Jon Hanke's quadratic forms code (Gonzalo Tornaria, Michael Abshoff) -- Jon Hanke has written a substantial amount of Sage code for working with quadratic forms. Hanke's code can serve as a base for further enhancements in the case of binary quadratic forms, i.e. quadratic forms in two variables. There are currently a number of patches on the trac server for enhancing the functionality of binary quadratic forms.
- Clifford invariant and Clifford conductor (Gonzalo Tornaria) -- New functions clifford_invariant() and clifford_conductor() for computing Clifford invariants and conductors. The Clifford invariant is the class in the Brauer group of the Clifford algebra (for even dimension) or of the even Clifford algebra (for odd dimension); clifford_invariant() computes this class, and clifford_conductor() computes the Clifford conductor, i.e. the product of all primes where the Clifford invariant is -1. See the following text for the definition of the Clifford invariant, and p. 119 for the formula relating it to the Hasse invariant:
* T.Y. Lam. "Introduction to Quadratic Forms over Fields". Graduate Studies in Mathematics, vol.67. American Mathematical Society, 2005.
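As a small illustration (not from the ticket itself), take the ternary form x^2 + y^2 + z^2: its even Clifford algebra is the rational Hamilton quaternion algebra, which is ramified exactly at 2 (and at infinity), so the Clifford conductor should be 2:
sage: Q = DiagonalQuadraticForm(ZZ, [1, 1, 1])
sage: Q.clifford_invariant(2)      # ramified at 2
-1
sage: Q.clifford_invariant(3)      # split at every odd prime
1
sage: Q.clifford_conductor()       # product of the primes where the invariant is -1
2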
Full Changelog: 3.3...3.4
3.3
Release Tour
Sage 3.3 was released on February 21st, 2009 (changelog), 385 tickets (PRs) merged, 57 contributors. There's also a beautifully formatted version of this release tour. The following points are some of the foci of this release:
- Clean up various doctest failures from 3.2.3
- Fix some build issues from 3.2.3 on the new set of supported images
- Merge small to medium sized patches ready to go in
- Switch to MPIR for multi-precision integers and rationals
- Update the gmp-mpir spkg
- Switch to FLINT for univariate polynomial arithmetic over Z/nZ
- Upgrade cddlib to version 0.94f upstream release
- Some improvements to the lrs spkg
- Pynac interface enhancements
- Update the readline spkg
- Update the ATLAS spkg
- Upgrade ATLAS to version 3.8.3 upstream release
- Update the NTL spkg
- Upgrade M4RI to version 20090105 upstream release
- Upgrade jsMath to version 3.6 upstream release
- Upgrade GAP to version 4.4.12 upstream release
- Upgrade GUAVA to version 3.9 upstream release
- Upgrade Jmol to version 11.6 upstream release
- Upgrade matplotlib to version 0.98.5.3-svn6910 upstream release
- Upgrade libpng to version 1.2.34 upstream release
- Upgrade Sphinx to version 0.5.1 upstream release
- Upgrade bzip2 to version 1.0.5 upstream release
- Upgrade trac to version 0.11 upstream release
All tickets in the 3.3 milestone can be found on the trac server. Here's a summary of features in this release, categorized under various headings.
Algebra
- Transitivity for permutation groups (William Stein) -- In the permutation group module permgroup.py, the query function is_transitive() returns whether or not the group is transitive on [1..G.degree()]. A few surrounding docstrings are fixed, and doctest coverage for the module sage.groups.perm_gps.permgroup.py is now 100%. (See the example after this list.)
- Update the ATLAS spkg (Michael Abshoff).
- New function is_unit() for the symbolic ring (Florent Hivert) -- The new function sage.calculus.calculus.SymbolicExpressionRing.is_unit() returns whether or not an element of the symbolic ring is a unit.
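A quick illustration of the transitivity test (an example chosen for this tour, not taken from the ticket):
sage: G = PermutationGroup([[(1,2,3,4,5)]])      # cyclic group acting on {1,...,5}
sage: G.is_transitive()
True
sage: H = PermutationGroup([[(1,2),(3,4)]])      # orbits are {1,2} and {3,4}
sage: H.is_transitive()
False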
Algebraic Geometry
- Improved precision and performance when calculating analytic rank (William Stein) -- When calculating the analytic rank of an elliptic curve, the default is to use Cremona's gp script, in which the precision is automatically doubled until the computation succeeds. The starting precision is now 16 rather than the previous, higher default; by starting with this smaller precision, the computation is usually about 3 times faster. Here's an example:
# BEFORE
sage: E = EllipticCurve('5077a')
sage: time E.analytic_rank()
CPU times: user 0.01 s, sys: 0.01 s, total: 0.02 s
Wall time: 0.21 s
# AFTER
sage: E = EllipticCurve('5077a')
sage: time E.analytic_rank()
CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s
Wall time: 0.06 s
And another:
# BEFORE
sage: time elliptic_curves.rank(4)[0].analytic_rank()
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.50 s
# AFTER
sage: time elliptic_curves.rank(4)[0].analytic_rank()
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.33 s
- Weil pairing (David Moller Hansen, John Cremona) -- A basic framework for Weil pairing on elliptic curves using Miller's algorithm as contained in Proposition 8 of the following paper:
* Victor S. Miller. "The Weil pairing, and its efficient calculation". Journal of Cryptology, 17(4):235-261, 2004.
Basic Arithmetic
- The ivalue field in integer_mod.pyx is no longer public (Craig Citro) -- The ivalue field for IntegerMod_int is no longer public. This gives about a 1.5x to 2x speed-up when multiplying IntegerMod_int elements. Here's an example:
# BEFORE
sage: R = Integers(100) ; x = R(3) ; y = R(5)
sage: timeit('x*y')
625 loops, best of 3: 403 ns per loop
sage: timeit('x*y')
625 loops, best of 3: 370 ns per loop
sage: timeit('x*y')
625 loops, best of 3: 410 ns per loop
sage: timeit('x*y')
625 loops, best of 3: 405 ns per loop
# AFTER
sage: R = Integers(100) ; x = R(3) ; y = R(5)
sage: timeit('x*y')
625 loops, best of 3: 190 ns per loop
sage: timeit('x*y')
625 loops, best of 3: 213 ns per loop
sage: timeit('x*y')
625 loops, best of 3: 174 ns per loop
sage: timeit('x*y')
625 loops, best of 3: 191 ns per loop
- Some fixes for is_perfect_power() and bessel_J(0,0) (Craig Citro, Robert Bradshaw, Robert L. Miller) -- A temporary workaround for an upstream GMP bug affecting is_perfect_power(), and a fix for a Pari interface bug when using bessel_J(0,0).
- Improved performance for generic polynomial rings and for univariate polynomial arithmetic over Z/nZ[x] (Yann Laigle-Chapuy, Martin Albrecht, Burcin Erocal) -- Improved performance when performing modular arithmetic between elements of a generic polynomial ring. Univariate polynomial arithmetic over Z/nZ[x] is now roughly 20x faster, as the following timings show:
# BEFORE
sage: P.<x> = PolynomialRing(GF(7))
sage: type(x)
<type 'sage.rings.polynomial.polynomial_modn_dense_ntl.Polynomial_dense_mod_p'>
sage: f = P.random_element(100)
sage: g = P.random_element(100)
sage: %timeit f*g
1000 loops, best of 3: 445 µs per loop
# AFTER
sage: P.<x> = PolynomialRing(GF(7))
sage: type(x)
<type 'sage.rings.polynomial.polynomial_zmod_flint.Polynomial_zmod_flint'>
sage: f = P.random_element(100)
sage: g = P.random_element(100)
sage: %timeit f*g
100000 loops, best of 3: 7.92 µs per loop
- Technical preview of David Harvey's zn_poly library exposed to the Sage library (Martin Albrecht). The following benchmark compares the default polynomial multiplication f*g with the zn_poly-based f._mul_zn_poly(g):
def f(p, n):
    # Multiply two random degree-n polynomials over GF(next_prime(p)),
    # once with the default multiplication and once via zn_poly,
    # and return both timings.
    P = PolynomialRing(GF(next_prime(p)), 'x')
    f = P.random_element(n)
    g = P.random_element(n)
    t0 = cputime()
    r0 = f*g                    # default multiplication
    t0 = cputime(t0)
    t1 = cputime()
    r1 = f._mul_zn_poly(g)      # zn_poly-based multiplication
    t1 = cputime(t1)
    assert(r0 == r1)            # both methods must agree
    return p, n, t0, t1

for i in range(21):
    f(2**47, 2**i)
On sage.math, this returns:
# (140737488355328, 1, 0.0, 0.0)
# (140737488355328, 2, 0.0, 0.0)
# (140737488355328, 4, 0.00099999999999766942, 0.0)
# (140737488355328, 8, 0.0, 0.0)
# (140737488355328, 16, 0.0, 0.0)
# (140737488355328, 32, 0.0059990000000027521, 0.0)
# (140737488355328, 64, 0.0, 0.0)
# (140737488355328, 128, 0.0, 0.0)
# (140737488355328, 256, 0.0, 0.0)
# (140737488355328, 512, 0.0, 0.00099999999999766942)
# (140737488355328, 1024, 0.00099999999999766942, 0.0)
# (140737488355328, 2048, 0.0020000000000024443, 0.0019989999999978636)
# (140737488355328, 4096, 0.0049989999999979773, 0.005000000000002558)
# (140737488355328, 8192, 0.010998000000000729, 0.011997999999998399)
# (140737488355328, 16384, 0.023995999999996798, 0.023997000000001378)
# (140737488355328, 32768, 0.050992000000000814, 0.052991999999996153)
# (140737488355328, 65536, 0.1149820000000048, 0.10598499999999689)
# (140737488355328, 131072, 0.29195599999999189, 0.21996599999999944)
# (140737488355328, 262144, 0.6119070000000022, 0.45393199999999467)
# (140737488355328, 524288, 1.5217689999999919, 1.0278430000000043)
# (140737488355328, 1048576, 3.1365230000000111, 2.0966819999999871)
- Deprecate the function sqrt_approx() (David Roe) -- To obtain a numerical approximation of the square root of a ring element (integers, polynomials over GF(2^x), rationals), users are advised to use the function sqrt() with a given number of bits of precision instead.
- Use Pohlig-Hellman for generic discrete logarithm (Yann Laigle-Chapuy) -- This results in a significant performance improvement and a smaller memory footprint. Here's an example with a smooth order:
sage: factor(5^15-1)
2^2 * 11 * 31 * 71 * 181 * 1741
# BEFORE
sage: F.<a>=GF(5^15)
sage: g=F.gen()
sage: u=g^123456789
sage: time log(u,g)
CPU times: user 271.39 s, sys: 4.72 s, total: 276.11 s
Wall time: 276.96 s
123456789
sage: get_memory_usage()
378.21875
# AFTER
sage: F.<a>=GF(5^15)
sage: g=F.gen()
sage: u=g^123456789
sage: time log(u,g)
CPU times: user 0.14 s, sys: 0.00 s, total: 0.14 s
Wall time: 0.16 s
123456789
sage: get_memory_usage()
115.8984375
And here's another example with a not-so-smooth order:
sage: factor(3^13-1)
2 * 797161
# BEFORE
sage: F.<a>=GF(3**13)
sage: g=F.gen()
sage: u=g^1234567
sage: timeit('log(u,g)')
5 loops, best of 3: 1.54 s per loop
sage: get_memory_usage()
155.11328125
...