Yes, computational models can be created only to the accuracy of our ability to code for the measurement of all desired variables. That is the challenge I see with AI systems increasingly able to account for variables we failed to consider.
We have the resources. They should be applied, but with much care and consideration with regard to AI and eventual self-awareness. The rate of technological advancement now surpasses the rate of our enlightenment. Mankind is akin to a teenager failing to consider the long-term negative implications of his actions over near-term rewards. Yes, we are capable of great things, if not greatness, but on the micro level we lack the humility to account for our fallibility. It is a concerning situation, as few make decisions affecting the many.
History has shown Captain Ahab to be at the helm for too many decisions affecting the whole. Is there a default in the human psyche to assume that the way things are today is how they will be tomorrow? History proves otherwise. Black swan events of global proportions attributable to man occur increasingly often, and their possible implications grow exponentially with technological advancement. Until mankind becomes in tune with our collective consciousness as a whole, we need to walk gently into that dark night.
Because the exponentiation method lets you compute an arbitrary element of the Fibonacci sequence without calculating most of the ones before it. Your `benchmark' calculates them in ascending order, so obviously a straightforward memoizing/iterative implementation does well. Instead, it should just calculate the millionth Fibonacci number (say), ab initio.
You're doing too much work in the matrix multiplication: the matrices are always symmetric, but you're not taking advantage of that. There's also a slicker way, which involves some slightly fancier mathematics.
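Concretely, here's a minimal sketch (mine, not the poster's code) of what exploiting that symmetry could look like: every power of M = [[1, 1], [1, 0]] is symmetric, so it's enough to carry the three distinct entries (a, b, c) of [[a, b], [b, c]].

def sym_sqr(a, b, c):
    ## [[a, b], [b, c]]^2 = [[a^2 + b^2, b (a + c)], [b (a + c), b^2 + c^2]]
    b2 = b*b
    return a*a + b2, b*(a + c), b2 + c*c

def fib_matrix(n):
    ## Left-to-right square-and-multiply over the bits of n; returns F_n for n >= 0.
    a, b, c = 1, 0, 1                 # the identity matrix
    for bit in bin(n)[2:]:
        a, b, c = sym_sqr(a, b, c)
        if bit == '1':
            ## Multiply by M on the right.  The matrices that arise here
            ## satisfy a = b + c, so the product [[a + b, a], [a, b]] is
            ## symmetric again.
            a, b, c = a + b, a, b
    return b

It's the same square-and-multiply idea as the polynomial version below, just phrased on matrices.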
Suppose that we know that t^2 = t + 1 for some t. (Equivalently, we'll work in the quotient ring R = Z[t]/(t^2 - t - 1).) Now, let's look at the powers of t. By using the rule that t^2 = t + 1, we will end up with only units and coefficients of t. (That this is the characteristic polynomial of the matrix discussed in the main article is, of course, not a coincidence.)
t^0 = 1
t^1 = t
t^2 = t + 1
t^3 = t^2 + t = (t + 1) + t = 2 t + 1
t^4 = 2 t^2 + t = 2 (t + 1) + t = 3 t + 2
t^5 = 3 t^2 + 2 t = 3 (t + 1) + 2 t = 5 t + 3
Hmm. There's a pattern: t (a t + b) = a t^2 + b t = a (t + 1) + b t = (a + b) t + a. I feel a theorem coming on: t^k = F_k t + F_{k-1} for all integers k. Proof. (a) t^1 = t = F_1 t + F_0. (b) Suppose the claim holds for k; then t^{k+1} = t t^k = t (F_k t + F_{k-1}) = (F_k + F_{k-1}) t + F_k = F_{k+1} t + F_k. (c) Notice that t (t - 1) = t^2 - t = (t + 1) - t = 1, so t^{-1} = t - 1; then, supposing again that the claim holds for k, we have t^{k-1} = (t - 1) (F_k t + F_{k-1}) = F_k t^2 + F_{k-1} t - F_k t - F_{k-1} = F_k (t + 1) + F_{k-1} t - F_k t - F_{k-1} = F_{k-1} t + (F_k - F_{k-1}) = F_{k-1} t + F_{k-2}, as required. []
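As a quick sanity check of that theorem (my own snippet, not part of the comment): represent u t + v as the pair (u, v), repeatedly multiply by t or by t^{-1} = t - 1, and compare against a table of Fibonacci numbers extended to negative indices.

def t_power(k):
    ## t (a t + b) = (a + b) t + a;  t^{-1} (a t + b) = (t - 1)(a t + b) = b t + (a - b).
    u, v = 0, 1                       # t^0 = 1
    for _ in range(abs(k)):
        if k >= 0: u, v = u + v, u    # multiply by t
        else:      u, v = v, u - v    # multiply by t^{-1}
    return u, v

## Fibonacci numbers for comparison, extended to negative indices via
## F_{k-1} = F_{k+1} - F_k.
fib = {0: 0, 1: 1}
for i in range(2, 12):        fib[i] = fib[i-1] + fib[i-2]
for i in range(-1, -13, -1):  fib[i] = fib[i+2] - fib[i+1]

assert all(t_power(k) == (fib[k], fib[k-1]) for k in range(-10, 11))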
Now, we can compute powers of t using the ordinary square-and-multiply method (or fancier methods involving windowing, combs, or whatever). The important pieces are:
(a t + b)^2 = a^2 t^2 + 2 a b t + b^2 = a^2 (t + 1) + 2 a b t + b^2 = (2 a b + a^2) t + (a^2 + b^2)
(a t + b) (c t + d) = a c (t + 1) + (a d + b c) t + b d = (a d + a c + b c) t + (a c + b d)
Here's a simple left-to-right exponentiation algorithm. Suppose we want to compute x^n, and we know that n = 2^k u + v, for 0 <= v < 2^k, and that we've already computed y = x^u. (Initially, choose k so that 2^k > n; then u = 0, v = n, and y = 1.) Now, suppose that v = 2^{k-1} b + v', where b is either zero or one, and v' < 2^{k-1}. Then n = 2^{k-1} (2 u + b) + v', and y^2 x^b = x^{2 u + b}; so we square y, multiply by x if b = 1, and repeat with k - 1 in place of k. After k steps, v = 0 and y = x^n.
Annoyingly, this means we need to count the number of bits in n, i.e., find k = L(n) with 2^{k-1} <= n < 2^k. Suppose, inductively, that we know how to determine L(m) for 0 <= m < 2^i; then we can determine L(m) for 0 <= m < 2^{2i}, since L(m) = i + L(m >> i) whenever m >> i > 0. So the code first doubles i until 2^i > n, and then works back down, halving i and accumulating k as it goes.
Enough theory! Now let's turn this into actual code!
#! /usr/bin/python

def power(x, n, id, sqr, mul):
  """
  Compute and return X^N for nonnegative N.

  This is an abstract operation, assuming only that X is an element of a
  monoid: ID is the multiplicative identity, and the necessary operations are
  provided as functions: SQR(Y) returns Y^2, and MUL(Y, Z) returns the
  product Y Z.
  """

  ## First, find an i of the form 2^j with 2^i > n.
  i = 1
  while (n >> i) > 0: i = 2*i

  ## Refine our estimate, so that 2^{k-1} <= n < 2^k.
  m, k = n, 1
  while i > 1:
    i = i >> 1
    mm = m >> i
    if mm > 0: m, k = mm, k + i

  ## Now do the square-and-multiply thing, scanning the bits of n from the
  ## most significant end.
  y = id
  for i in xrange(k - 1, -1, -1):
    y = sqr(y)
    if (n >> i)%2: y = mul(x, y)

  ## We're done.
  return y

def fibonacci(n):
  """
  Compute and return the Nth Fibonacci number.

  Let F_0 = 0, F_1 = 1, F_{k+1} = F_k + F_{k-1}, for all integers k.  Note
  that this is well-defined even for negative k.  We return the pair
  (F_N, F_{N-1}).
  """

  ## Auxiliary functions for working in our polynomial ring: a pair (a, b)
  ## represents the element a t + b.
  def poly_sqr((a, b)):
    a2 = a*a
    return 2*a*b + a2, a2 + b*b
  def poly_mul((a, b), (c, d)):
    ac = a*c
    return a*d + b*c + ac, ac + b*d

  ## Do the job.  For negative indices, we take powers of t^{-1} = t - 1.
  if n < 0: x, m = (1, -1), -n
  else: x, m = (1, 0), n
  return power(x, m, (0, 1), poly_sqr, poly_mul)

if __name__ == '__main__':
  import sys
  try: n = int(sys.argv[1])
  except (IndexError, ValueError):
    print >>sys.stderr, 'Usage: fib N'
    sys.exit(1)
  print fibonacci(n)[0]
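For reference, here's how a run might look, assuming the script is saved as (say) fib.py and run under Python 2 (it uses xrange, tuple parameters, and the print statement):

$ python fib.py 10
55
$ python fib.py -8
-21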
This is a good write-up, and I'm pretty interested in Rust. However, you immediately lost my interest when you used GCC instead of LLVM (Clang) for the C++ comparison. I think it does matter for some of the examples, especially with regard to error-message quality.
I think your «entangle» function might be misnamed. It seems to be taking the tensor product of the states rather than entangling the qubits; entanglement is a different (though related) quantum phenomenon. Other than that, thanks for a really interesting article!
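To illustrate the distinction with a rough sketch (mine, not the article's code; the state encoding is just an assumption for the example): the tensor (Kronecker) product of two single-qubit states is always separable, whereas a Bell state is not, and a 2x2 determinant test tells the two apart.

import numpy as np

zero = np.array([1.0, 0.0])                    # |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)     # (|0> + |1>)/sqrt(2)

product = np.kron(plus, zero)                              # separable two-qubit state
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)       # (|00> + |11>)/sqrt(2), entangled

def is_product(psi):
    ## a|00> + b|01> + c|10> + d|11> is a product state exactly when a d - b c = 0.
    a, b, c, d = psi
    return abs(a*d - b*c) < 1e-12

print(is_product(product))   # expect True: the Kronecker product is separable
print(is_product(bell))      # expect False: the Bell state is entangled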
Edward Kmett is magnificent. When we discussed the original article in Russian, this question came up as well. Of course, one needs to use a good linear-algebra library to implement a QC framework.
Thinking about it, I realized that even the implementation described here isn't really O(log n). Here's why: it's rather easy to prove that the nth Fibonacci number has Θ(n) digits. So even though you perform only O(log n) multiplications, the last few of them operate on Θ(n)-digit numbers, and multiplying n-digit integers costs at least Ω(n) (and in practice more, since big-integer multiplication is superlinear). So the algorithm is at best around O(n log n), not O(log n).
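A quick way to see that Θ(n)-digit growth (my own snippet, not the commenter's benchmark): the bit length of F_n divided by n settles at log2(φ) ≈ 0.694, so the operands of the last few multiplications really are Θ(n) bits wide.

import math

def fib_iter(n):
    ## Plain iterative Fibonacci, used here only to measure the size of F_n.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

log2_phi = math.log((1 + math.sqrt(5)) / 2, 2)
for n in (1000, 10000, 100000):
    print(n, fib_iter(n).bit_length() / float(n), log2_phi)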
There's a great one on hackage by Edward Kmett called linear :)
hackage.haskell.org/package/linear
I made a benchmark that shows this in practice:
docs.google.com/spreadsheets/d/1mDIuAUrI6lzZxDf2HEqHbaYjddfK0hijd_r9qBOOEoQ