Hiring hundreds of bogosorts to help me calculate pi

Happy Pi Day! This fine morning I woke up and felt the sudden urge to torture my CPU.
And what better way to do that than run hundreds of instances of one of the most notoriously inefficient sorting algorithms at once?
Of course the algorithm I speak of is…
Bogosort
In the world of sorting algorithms, bogosort is kind of the laughingstock of the town. The village idiot. He’s just…not that bright. How does it work?
Take a list of size n and randomly shuffle it until it’s sorted purely by chance.
Yeah, like I said, not too clever of an algorithm. Though, technically it is a way to sort a list.
Love it or hate it, this bad boy can sort a list in a quite frankly impressive expected n! shuffles.
Recall the factorial of n: n! = n × (n−1) × ⋯ × 2 × 1, a number that grows absurdly fast.
For the nerds: since a shuffle requires O(n) work, the expected running time comes out to O(n · n!).
All this yapping and we’re not any closer to seeing what any of this has to do with pi, are we? Fine, let me cut to the chase and introduce today’s weapon of choice.
Stirling’s Approximation
Here it is:
n! ~ √(2πn) · (n/e)^n
You might be able to guess we can obtain π by rearranging this formula and solving for it.
But before we get to that, we should unpack Stirling’s approximation.
First, let’s look at that squiggly little guy (~). It means the two sides are asymptotically equal: the ratio between them approaches 1 as n grows.
Why on earth does this formula work?
Before we go any further, it might be nice to wave a few hands and run a few sanity checks to convince ourselves this Stirling guy wasn’t some kind of fraud. I’m not going to fumble my way through a full proof, but I want to at least see if I can convince you that the formula is reasonable enough to believe.
First, let’s look at the natural logarithm of n!:
ln(n!) = ln(1) + ln(2) + ⋯ + ln(n)
That should be fairly obvious, but what might not be is the fact that this looks like one of those “introduction to integrals” demonstrations you might have seen in high school. You approximate the area under a curve by slicing it into a bunch of skinny rectangles and summing the areas of those rectangles, right?
If you don’t believe me, witness proof by shoddy p5.js sketch! You can adjust the slider under the curve to see how the integral approaches the actual area under the curve as we add more and more rectangles.
Well, doesn’t our sum of logarithms look exactly like that, with each ln(k) being the area of a width-1 rectangle sitting under the curve ln(x)? In other words:
ln(n!) ≈ ∫₁ⁿ ln(x) dx
Re-familiarising myself with some of the most boring calculus I forgot, we can solve this integral using integration by parts:
∫₁ⁿ ln(x) dx = [x ln(x) − x]₁ⁿ = n ln(n) − n + 1
If we exponentiate both sides to undo our logarithm, we get
n! ≈ e^(n ln(n) − n + 1) = e · (n/e)^n
Look at that! We are already remarkably close to Stirling’s actual formula. The only difference is that we have a factor of e where Stirling has √(2πn).
But if we observe that both e and √(2πn) are utterly dwarfed by the (n/e)^n term doing all the heavy lifting, our crude estimate already captures the dominant behaviour; the √(2πn) is just a more careful correction to the same idea.
So hopefully that’s enough to let us move on with confidence that Stirling’s approximation will give us the horsepower for what we want to do. Speaking of which, let’s get to the good stuff.
Where does the π come from?
Let’s just forget I totally hand-waved the √(2πn) into existence a moment ago.

The boys, from left to right: Abraham de Moivre and James Stirling
To give a very rough idea of how π sneaks in: n! can be written as an integral, n! = ∫₀^∞ xⁿ e^(−x) dx, and near its peak at x = n the integrand looks almost exactly like a Gaussian bell curve, whose area famously involves √(2π).
It turns out, when you make this connection and continue to solve the integral using some clever observations about logarithms and Taylor series, you’ll find
n! ~ √(2πn) · (n/e)^n
All this hand waving, and suddenly π emerges!
Now, let’s just dust off some high school algebra and isolate π:
π ≈ (n!)² / (2n) · (e/n)^(2n)
Here’s what itches the whip-cracking part of my brain: I’m far too lazy to calculate n! myself.

Exercising free will by obtaining n! empirically, for some reason
Here is where the dunce up my sleeve, bogosort, gets to shine.
As previously discussed, the number of shuffles expected for bogosort to sort a list of size n is n!. A single run, though, is at the mercy of the random number generator.
Instead, we will employ hundreds of bogosort algorithms to sort hundreds of lists, until we are satisfied with our average, our approximation for n!.
Why would we do that when instead we could just calculate n! directly? Because it’s Pi Day, and this is more fun.
Anyways, suppose we do this, and obtain an average number of shuffles for a list of size n; let’s call this average S.
Plugging our hard-earned S into the formula in place of n!, we get
π ≈ S² / (2n) · (e/n)^(2n)
The plan is as follows: pick a huge n, set the bogosorts loose, and feed the average they report into the formula above.
And obviously the accuracy of our result relies on choosing a sufficiently large n, since Stirling’s approximation only gets more accurate as n grows.
Nothing could possibly be wrong with this plan. Ever. Nope.
…wait. The larger n gets, the longer our bogosorts will take to finish.
At what turns out to be a worse-than-exponential rate.
Competing objectives

Thanks to that eye-watering n! expected shuffle count, every bump of n makes our waiting time balloon.
To put things into perspective, let’s choose a modest n = 10.
No, better: let’s see what happens for every n from 1 to 10.
Since n! grows so explosively, the bars for the smaller values of n are practically invisible, and you might suspect the chart is broken.
In fact, go ahead and try changing the value of n and see whether the earlier bars ever show up.
But no, they’re just that small compared to the last two bars, representing 9! = 362,880 and 10! = 3,628,800 expected shuffles.
The plan? Compromise.
Anyway, I believe I’m yapping again. Let’s summarise the plan for approximating π:
Set n = 10 and repeat the following K times (where K is as many as you have the patience for):
- run bogosort on a list of size 10, keeping track of the number of shuffles required; let’s call this s;
- increment S by s/K, so as to contribute to our running average.
If you prefer, we can write this in concrete C code. This is exactly how I implemented it, except that my real version spreads the K trials across hundreds of worker threads.
But for the sake of simplicity, here is a single-threaded version of the code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>

#define N 10
#define K 320

// Returns 1 if the array is sorted in ascending order
int is_sorted(const int *arr, int n) {
    for (int i = 1; i < n; i++)
        if (arr[i - 1] > arr[i]) return 0;
    return 1;
}

// Fisher-Yates shuffle: a uniformly random permutation
void shuffle(int *arr, int n) {
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
    }
}

int main() {
    srand((unsigned)time(NULL));
    long long total_shuffles = 0;
    for (int t = 0; t < K; t++) {
        int arr[N] = {4, 2, 9, 1, 5, 3, 8, 6, 10, 7};
        long long s = 0;
        while (!is_sorted(arr, N)) {
            shuffle(arr, N);
            s++;
        }
        total_shuffles += s;
        printf("Trial %d completed. Shuffles: %lld\n", t, s);
    }
    // Calculate our empirical average, our stand-in for 10!
    double S = (double)total_shuffles / K;
    // Calculate pi using our bogosort-derived S
    double e = exp(1.0);
    double pi = (S * S) / (2 * N) * pow(e / N, 2 * N);
    printf("Empirical S (average shuffles): %f\n", S);
    printf("Estimated value of pi: %f\n", pi);
    return 0;
}
The moment of truth
Running the above C code, and depending on your luck…
$ gcc main.c -o bogopi -lm
$ ./bogopi
Trial 0 completed. Shuffles: 1677134
Trial 1 completed. Shuffles: 7093752
Trial 2 completed. Shuffles: 708255
Trial 3 completed. Shuffles: 1584920
...
Trial 318 completed. Shuffles: 3490385
Trial 319 completed. Shuffles: 5376543
Empirical S (average shuffles): 3611139.253125
Estimated value of pi: 3.163356

Hey, that’s pretty good! Especially considering our bogosorts did their job on average in 3,611,139 shuffles, and we know the real 10! is 3,628,800; our empirical average landed within half a percent of it.
So now we can all go home and rest easy knowing that π = 3.163356. Right?
Fun fact, here’s an interpretation of a circle if π really were 3.163356
The ceiling
Wait a minute.
Suppose, just for a moment, that our bogosort was perfect. Suppose it wasn’t subject to the whims of a random number generator, and it gave us the exact, mathematically precise value for 10! = 3,628,800 every single time.
Well, even with a perfect S = 3,628,800, plugging it into our formula spits out roughly 3.1944, not π.
That’s 1.68% above the true value of π.
This 1.68% is the theoretical floor on our error at n = 10; no amount of extra bogosorting can take us below it, because it comes from Stirling’s approximation itself, not from our randomness.
Error propagation, a.k.a. luck does us a solid
To make matters worse, notice that our formula squares the bogosort average (the S² term).
By the standard rules of error propagation, when a measured quantity is squared, the relative error in the result is doubled.
So if bogosort overestimates S by, say, 1%, our π estimate overshoots by roughly 2%.
This could work in our favour, however, if bogosort underestimates. In that case, it would actually be nudging our estimate back down towards the true value of π, partially cancelling Stirling’s built-in overestimate.

It’s time for me to come clean
It’s time for me to confess: the only way to get a "good" estimate of π out of this contraption is for two errors to cancel out.
Because Stirling’s formula has a built-in overestimate for π at n = 10, our bogosorts actually need to slightly underestimate 10!, by about 0.8% as it turns out, for the result to land near the real π.
In my case, I simply kept re-running the experiment with fresh random seeds until a batch came out looking respectable.
The average of many runs will eventually land our S right on top of the true 10!, which, ironically, means that with infinite patience our estimate converges to roughly 3.194, not π.
The verdict
So, is this a good way to calculate π? Absolutely not:
- Stirling’s approximation needs a large n to be accurate, but bogosort needs a small n to actually, you know, complete in time for Pi Day.
- At n = 10, Stirling’s formula has a built-in 1.68% overestimate that no amount of trials can fix.
- The squaring of S doubles any empirical noise.
- The only way to accidentally land near the true π is if our bogosort gets slightly unlucky at estimating 10! (but not too much!) to cancel Stirling’s bias.
We are using one of the worst possible sorting algorithms to fuel an asymptotic approximation at values where it is not yet asymptotic, and the only path to a correct answer is through a lucky coincidence of offsetting errors.
Even so, the fact that 320 instances of an algorithm that sorts by pure random chance can even semi-consistently produce a number that starts with the digit 3 is, frankly, a small miracle.