
Let's rewrite your equation as y^{2} = z - 3x^{2}. Then y = sqrt(z - 3x^{2}).
We can easily see that there are no solutions for z < 3. Now we can write the following code (it's C code, but it's easy to port to another language):
#include <math.h>

int f(int z)
{
    int result = 0;
    int x = 0;
    double m = 0;   /* m = 3*x*x */
    double y;
    if (z < 3) return 0;
    while (m <= z)
    {
        y = sqrt(z - m);
        if (y == floor(y)) result++;
        x++;
        m = 3.0 * x * x;
    }
    return result;
}
Note that this function has a complexity of O(sqrt(z)), as it checks all possible integer values of x: the loop runs while 3x^{2} <= z, i.e. for x up to sqrt(z/3).
Also note that this implementation accepts solutions where x and/or y are zero (e.g. f(4) returns 2, and the solutions found are {0, 2} and {1, 1}). If you want solutions with x and y strictly positive, you should initialize the loop with x = 1 and m = 3, and inside the loop ignore solutions with y == 0.





Hi Jeff,
In my experience, formulas giving the number of solutions of some problem are rare in number theory (Diophantine equations). As an example, there are formulas that estimate the number of primes in a given range; however, all they give is an approximation, not an exact value.
So I'm afraid you need some loops, as Sauro suggested. You may organize them differently, though: by calculating z from x and y and storing all the results, you could build a large table much faster than you could analyze a large number of z values one by one; this is similar in concept to the Sieve of Eratosthenes.
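A minimal sketch of that table-building idea, assuming you fix an upper bound for z in advance (the names count_table and MAX_Z are mine):

```c
#include <string.h>

#define MAX_Z 1000  /* arbitrary table bound chosen for this sketch */

/* One pass over all (x, y) pairs fills counts[z] with the number of
   solutions of 3*x*x + y*y == z (x, y >= 1) for every z up to MAX_Z,
   instead of re-searching the whole range for each z separately. */
void count_table(int counts[MAX_Z + 1])
{
    memset(counts, 0, (MAX_Z + 1) * sizeof(int));
    for (int x = 1; 3 * x * x < MAX_Z; x++)
        for (int y = 1; 3 * x * x + y * y <= MAX_Z; y++)
            counts[3 * x * x + y * y]++;
}
```

The total work is proportional to the number of (x, y) pairs with 3x^2 + y^2 <= MAX_Z, rather than one search per z value.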





The code that I have implemented actually does exactly what you suggest. I have an outer loop for y and an inner loop for x, and just flip a boolean value in an array each time I find a solution (since I only need to know the number of solutions mod 2). However, the algorithm I am attempting to implement claims to have a better O() runtime than another algorithm I implemented to do the same thing, and I am just not seeing this claim play out in my implementation of the two algorithms. In fact, I am seeing just the opposite, and determining the number of solutions to this equation is really the only difference between the two. I guess I'll just have to look a bit harder at my loops to see if I can do anything more efficiently. Thanks,
Sounds like somebody's got a case of the Mondays
Jeff





If you want to do a mapping approach such as the Sieve of Eratosthenes over a huge range, applying a banding technique pays off big time. I investigated prime generation more than a year ago (I still should finish the article!) and upped the performance more than tenfold by making sure the output mapped well into the level-two cache. IIRC I am getting 20 million primes per second on a typical machine.





If you will check several values of Z per session, and memory usage is not a problem, you might store the work of the algorithm as you check each value of Z. Errr... this is confusing; it's hard for me to explain in English. Let me give you an example in C#:
using System;
using System.Collections.Generic;

public class Solutions
{
    Dictionary<int, bool> matches = new Dictionary<int, bool>();
    int maxZ = 0;

    public bool? EvenSolutions(int z)
    {
        if (z > maxZ)
        {
            int maxY = (int)Math.Sqrt(z - 3);
            int maxX = (int)Math.Sqrt((z - 1) / 3d);
            int n;
            for (int x = 1; x <= maxX; x++)
            {
                for (int y = 1; y <= maxY && (n = 3 * x * x + y * y) <= z; y++)
                {
                    if (n > maxZ)
                    {
                        if (!matches.ContainsKey(n))
                            matches.Add(n, false);
                        else
                            matches[n] = !matches[n];
                    }
                }
            }
            maxZ = z;
        }
        return matches.ContainsKey(z) ? (bool?)matches[z] : null;
    }
}
The method returns true if there is an even number of solutions, false if the number of solutions is odd, and null if there are no solutions; at the same time, it stores all the values of Z which have solutions and are less than the given Z. So, if you later check a smaller value of Z, it will already be in the matches dictionary and you will not have to repeat the process... and if it is not in the dictionary, it means there were no solutions for that value of Z.





Hi Jeff,
With VC2010 (or gcc 4.5), a C++0x implementation:
#include <utility>
#include <array>
#include <algorithm>

template <unsigned int z>
size_t NumSolutions()
{
    typedef std::pair<unsigned int, unsigned int> Solution;
    std::array<Solution, z * z + 2 * z> All;
    std::generate(All.begin(), All.end(),
        []() -> Solution {
            static unsigned int x = 0, y = 0;
            return y == z ? Solution(++x, y = 0) : Solution(x, ++y); });
    return std::count_if(All.begin(), All.end(),
        [](const Solution& sol) { return z == 3 * sol.first * sol.first + sol.second * sol.second; });
}
With previous C++03 compilers:
#include <utility>
#include <vector>
#include <algorithm>

typedef std::pair<unsigned int, unsigned int> Solution;

template <unsigned int z>
bool IsSolution(const Solution& sol)
{
    return z == 3 * sol.first * sol.first + sol.second * sol.second;
}

template <unsigned int z>
Solution Next()
{
    static unsigned int x = 0, y = 0;
    return y == z ? Solution(++x, y = 0) : Solution(x, ++y);
}

template <unsigned int z>
size_t NumSolutions()
{
    std::vector<Solution> All(z * z + 2 * z);
    std::generate(All.begin(), All.end(), Next<z>);
    return std::count_if(All.begin(), All.end(), IsSolution<z>);
}
You may test both with:
#include <iostream>
int main()
{
    using std::cout;
    using std::endl;
    cout << "NumSolutions<4>(): " << NumSolutions<4>() << endl;
    cout << "NumSolutions<7>(): " << NumSolutions<7>() << endl;
    cout << "NumSolutions<28>(): " << NumSolutions<28>() << endl;
    return 0;
}
cheers,
AR
Edited for correct size of All vector.
Edited for use of adequate std::array in C++0x version.
When the wise (person) points at the moon the fool looks at the finger (Chinese proverb)
modified on Friday, November 26, 2010 4:45 PM





Below is an algorithm I wrote to find the lower bound of a given key in a sorted table; in other words, it finds the largest key that is <= the given one. I wonder whether there is a more compact, or maybe more efficient, algorithm.
#include <stddef.h>

static size_t blsize;                                /* element size */
static const void *skey;                             /* key searched for */
static int (*cmpf)(const void *e1, const void *e2);  /* comparison function */

static void *lbnd(const void *base, size_t size)
{
    const char *p, *p1;
    int c;
    size_t n;

    if (size == 0) return NULL;
    if (size == 1)
    {
        c = (*cmpf)(skey, base);
        if (c < 0)
            return NULL;
        else
            return (void *)base;
    }
    n = size >> 1;
    p = (char *)base + n * blsize;
    c = (*cmpf)(skey, p);
    if (c == 0) return (void *)p;
    if (c < 0)
    {
        return lbnd(base, n);
    }
    else
    {
        p1 = lbnd((char *)p + blsize, size - n - 1);
        if (p1 == NULL)
            return (void *)p;
        else
            return (void *)p1;
    }
}

void *lbound(const void *key, const void *base0, size_t nmemb, size_t size,
             int (*compar)(const void *e1, const void *e2))
{
    blsize = size;
    cmpf = compar;
    skey = key;
    return lbnd(base0, nmemb);
}
The static function lbnd does the search; the values that stay constant throughout the search are kept in static variables because lbnd is called recursively, which minimizes the parameter-passing overhead.
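For comparison, the same lower-bound search can be written iteratively, which avoids both the recursion and the file-scope static state; the names lbound_iter and cmp_int are mine, and on a run of equal keys this version returns the last one:

```c
#include <stddef.h>

/* Iterative lower bound: returns a pointer to the last element <= key
   in an ascending sorted array, or NULL if key is smaller than the
   first element. No recursion, no static variables. */
void *lbound_iter(const void *key, const void *base, size_t nmemb,
                  size_t size, int (*compar)(const void *, const void *))
{
    const char *best = NULL;   /* best candidate seen so far */
    const char *lo = base;
    while (nmemb > 0) {
        size_t half = nmemb / 2;
        const char *mid = lo + half * size;
        if (compar(key, mid) < 0) {
            nmemb = half;      /* answer lies strictly left of mid */
        } else {
            best = mid;        /* mid is <= key; keep looking right */
            lo = mid + size;
            nmemb -= half + 1;
        }
    }
    return (void *)best;
}

/* Example comparator for int arrays. */
static int cmp_int(const void *a, const void *b)
{
    const int *x = a, *y = b;
    return (*x > *y) - (*x < *y);
}
```

The special cases for size==0 and size==1 fold into the single loop condition, and the work per step is one comparison, so it is still O(log n).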





“I think the bubble sort would be the wrong way to go.”





Maybe yes, maybe no. The question is not about sorting, though, but about searching a sorted table.





Oh good grief... did none of you CP geeks get the reference[^] ?





I don't think I understand your requirements correctly, because to me it looks like any key <= x will be fine, and since the table is sorted, you could just take the first one and return an error if it is > x.





Hi,
liquid_ wrote: I wonder whether there is a more compact algorithm or maybe more efficient
As you use size_t you are probably using a C++ compiler here.
Then if Key is any type with operator<(const Key& other) defined:
#include <algorithm>
std::min_element( base, base + nElts );
returns a const Key* pointing to the smallest key in the array.
cheers,
AR
When the wise (person) points at the moon the fool looks at the finger (Chinese proverb)





Thanks for constructive reply.
Firstly, I used size_t only as a shorthand for unsigned int. The rest of the code is written in pure C, and it should stay that way: it is more universal.
Secondly, analyzing the std::min_element function, it is clear that it has linear complexity. What I was aiming for is logarithmic complexity.
However, comparing my implementation of
lbound(const void *key, void *base, unsigned int nmemb, unsigned int size, int (*compare)(const void *e1, const void *e2)) with the bsearch implementation, it seemed to me that my lbound is not as compact and efficient as the standard bsearch implementation (two additional checks, size==0 and size==1, plus one more comparison after the size==1 check).
Maybe it's in the nature of the lower-bound problem: bsearch just returns NULL if it does not find a key, while lbound returns NULL only if the key is lower than the first element of the table.
Regards





Hi,
liquid_ wrote: code is written in pure C and it should be just such, a more universal one. This is really planet-dependent
liquid_ wrote: What I tried to aim is logarithmic complexity.
If your data are unsorted you will have to check each and every element against the current minimum, so you will never get below linear complexity. If your data are sorted, the problem is trivial.
More generally, you should consider that more man-hours have been spent on each part of the Standard C++ Library than you could provide in your whole professional life.
Good luck,
AR
When the wise (person) points at the moon the fool looks at the finger (Chinese proverb)





The assumption is that the data are sorted. I'd just like to know whether I have found that trivial solution.
Concerning the philosophy of using the ready-made STL versus creating your own solutions, it depends on your particular needs. Many programs are written in C rather than C++ for various reasons (e.g. in the UNIX or Linux worlds). I use pure C, or C++ with the STL or other available libraries (e.g. Boost, MFC, .NET and so on), as appropriate. Most of the software I write is made just for fun, to explore interesting solutions, or to solve problems I meet in my job. It is not my core business.
Anyway, thanks for your points.
Regards





...an extraordinary fact, probably known to half of you, but nevertheless I thought I'd share it. It's not an algorithm, but never mind!
The square of any prime number >= 5 can be expressed as 1 + a multiple of 24
or
If p is prime, then p ^ 2 = 1 + 24 * n, for some n
Don't know about you, but I think that's quite amazing. Why is it true at all, and given that it is, where does 24 come from?! (Pity it's not 42, really!)
[edit] When I ask "where does 24 come from?" I am not asking for a mathematical proof; I can look that up myself. It's more just a metaphysical musing...
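The fact is at least easy to check numerically; here is a brute-force sketch (both function names are my own):

```c
#include <stdbool.h>

/* Trial-division primality test; fine for a small sanity check. */
static bool is_prime(unsigned n)
{
    if (n < 2) return false;
    for (unsigned d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}

/* Checks p*p % 24 == 1 for every prime p in [5, limit].
   Returns the number of primes checked, or 0 if any prime fails. */
unsigned check_square_mod_24(unsigned limit)
{
    unsigned checked = 0;
    for (unsigned p = 5; p <= limit; p++) {
        if (!is_prime(p)) continue;
        if (p * p % 24 != 1) return 0;
        checked++;
    }
    return checked;
}
```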
modified on Thursday, November 4, 2010 6:37 AM





What is that "n"? Is it 1 for p = 5?





Yes, for
p=5, n=1
p=7, n=2
p=11, n=5
p=13, n=7
If you can work out a formula that spits out each value of n that produces the "next" prime, you'll be able to retire early





Try
p ^ 2 = 1 + 24 * (p - 6)
Just say 'NO' to evaluated arguments for diadic functions! Ash





OK...
p^2 - 24*p + 143 = 0
So..?





Just say 'NO' to evaluated arguments for diadic functions! Ash





Me too... one of us is being a bit dumb here (most likely me...)
I just rewrote the equation you posted... what does it prove? Is there a typo in it (yours)?





No typo as far as I am aware. I just thought you wanted to know what was the value of 'n' in your original question.
Just say 'NO' to evaluated arguments for diadic functions! Ash





Ah, got you... except that it doesn't work for all values of p:
p=5, n=1 != p-6
p=7, n=2 != p-6
p=11, n=5 ok
p=13, n=7 ok
p=17, n=12 != p-6
...





A quadratic equation like that has at most two solutions, so it is hardly a way to discover lots of primes.




