A Story about Data, Part 2: Abandoning the notion of normality

Continuing on with my work, I was just about to conclude that the distribution of the scores was non-normal. However, I remembered reading about different transformations that can be applied to data to make it more normal. Would any such transformation have an effect on the normality (or the lack thereof) of the score data?
I’d read about the Box-Cox family of transformations: essentially proceeding through powers and their inverses, in the quest to improve normality. I decided to try it, using the Jarque-Bera statistic as a measure of the normality of the data.

The equation representing the continuum of Box-Cox transformations is given below; you may read more about them here, too.

  f(x) = \begin{cases} \dfrac{x^{\lambda} - 1}{\lambda} & \text{if } \lambda \neq 0 \\ \ln(x) & \text{if } \lambda = 0 \end{cases}
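
Here is a minimal Python sketch of that piecewise definition; the function name and the assumption that the scores are strictly positive are mine, not part of the original analysis.

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform of a strictly positive array x for a given lambda."""
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)          # the limiting case as lambda -> 0
    return (x ** lam - 1.0) / lam  # the power branch
```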

I started with the improvement factor. Being the lazy type, I really did not want to spend a lot of time computing the JB statistic for different values of \lambda manually. So I wrote a short script to plot the Jarque-Bera statistic against different values of \lambda for the improvement distribution.
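
A sketch of what such a script might look like, using SciPy's built-in Box-Cox transform and Jarque-Bera test; the file name and the \lambda range swept here are my own placeholders, not the actual values from the analysis.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import boxcox       # elementwise Box-Cox transform (x must be > 0)
from scipy.stats import jarque_bera

# "improvement.csv" is a stand-in for wherever the improvement-factor column lives.
improvement = np.loadtxt("improvement.csv")

# Sweep a range of lambda values and record the JB statistic of the transformed data.
lambdas = np.linspace(0.05, 2.0, 200)
jb_stats = [jarque_bera(boxcox(improvement, lam))[0] for lam in lambdas]

best = int(np.argmin(jb_stats))
print(f"best lambda ~ {lambdas[best]:.2f}, JB ~ {jb_stats[best]:.1f}")

plt.plot(lambdas, jb_stats)
plt.xlabel(r"$\lambda$")
plt.ylabel("Jarque-Bera statistic")
plt.show()
```

(As an aside, scipy.stats.boxcox can also pick a \lambda automatically, but it does so by maximum likelihood rather than by minimizing the JB statistic.)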
This is what came out:

As you can see, the lowest JB statistic the transformation could achieve was about 282, for a \lambda of about 0.45. That’s still too high to be considered anywhere close to normal. I won’t reproduce the graphs for the pre- and the post-intervention totals because they look similar, only with higher minimum values of the JB statistic.

It looks like much of the statistics I've been reading about so far doesn't really apply to this data set, because the normality assumption is so thoroughly violated.