The way everything is done is simple. There are many studies out there that try to determine the average dick size and how common each size is; all the website does is take the data from these studies and process it. On this page you can read more about the studies and the calculations involved.

**The calculations are done using the data from the study you select**, and the dataset each study provides is displayed in a table below the calculator. You can view more information about each study by clicking its name in those tables; the links lead to other websites that describe how the information was gathered.

**The default dataset is the recommended one**, but you can choose others as well; feel free to compare them. Outdated datasets are marked as such, and using them is not recommended. The data from these studies is based on only a small sample of the population, but it's reliable enough for general purposes. **As always, don't get too attached to the statistics.**

If you feel like the studies are wrong, or if you're skeptical about them, read this page. If you want to know more about the default dataset that calcSD uses, go to this page.

The hosting service used does not allow for any server-side processing, which means that everything has to be done in JavaScript (e.g. if an addition is needed to get a value, your browser performs it while the page is open, rather than the server doing it and transmitting the result to you). This restricts what can be done with the website.

The average/mean and standard deviation for erect length, erect girth, flaccid length and flaccid girth are all stored as individual arrays in the JavaScript code, with each dataset corresponding to the same index number in every array (e.g. every array holds the values from dataset 2 at index 2).
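To illustrate the layout (with made-up variable names and placeholder numbers, not calcSD's actual data), the parallel arrays could look something like this:

```javascript
// Hypothetical sketch of the parallel-array storage. Every array is indexed
// by dataset number, so index 2 in each array belongs to the same dataset.
const erectLengthMean = [13.1, 13.2, 13.4]; // cm, one entry per dataset
const erectLengthSD   = [1.6,  1.5,  1.7];
const erectGirthMean  = [11.7, 11.9, 11.8];
const erectGirthSD    = [1.1,  1.0,  1.2];

// All the stats for dataset 2 live at index 2:
const dataset = 2;
console.log(erectLengthMean[dataset], erectLengthSD[dataset]);
```

The upside of this layout is that switching datasets is just a matter of changing one index.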

The website takes the inserted values and calculates the standard score (z-score) of each by subtracting the average from the value and dividing the result by the standard deviation. A z-score is the number of standard deviations a measurement is above or below the average.
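In code, that calculation is a one-liner (a sketch; the real function name on the site may differ):

```javascript
// z-score: how many standard deviations a value sits above/below the mean.
function zScore(value, mean, sd) {
  return (value - mean) / sd;
}

// e.g. a 15 cm measurement against a mean of 13 cm with an SD of 1.6 cm:
zScore(15, 13, 1.6); // 1.25 — i.e. 1.25 SDs above average
```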

An external formula is then used to convert each z-score into a percentile. It does this by assuming that the distribution is a **normal distribution bell curve**: highest at the average and gradually tapering off from there, the same pattern observed in many other natural traits. **This is only an approximation**, meaning that if you're at either extreme of the bell curve you'll likely have a hard time comparing yourself against the data, as there are fewer and fewer people to compare to.
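One common way to do this conversion in plain JavaScript is to approximate the standard normal CDF via the Abramowitz–Stegun approximation of the error function. This is only a sketch of the general technique; calcSD's actual "external formula" may differ.

```javascript
// Approximate standard normal CDF: Φ(z) = (1 + erf(z/√2)) / 2,
// with erf approximated via Abramowitz & Stegun 7.1.26 (error < 1.5e-7).
function normalCdf(z) {
  const t = 1 / (1 + 0.3275911 * Math.abs(z) / Math.SQRT2);
  const erf = 1 - t * (0.254829592 + t * (-0.284496736 +
              t * (1.421413741 + t * (-1.453152027 + t * 1.061405429)))) *
              Math.exp(-(z * z) / 2);
  return z >= 0 ? (1 + erf) / 2 : (1 - erf) / 2;
}

normalCdf(0);    // 0.5 — dead average
normalCdf(1.25); // ≈ 0.894, i.e. roughly the 89th percentile
```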

Once a percentile is obtained, it's not difficult to convert it into a count, which is how the values end up being compared against a room of *n* guys. Fun fact: the 0th and 100th percentiles don't exist. Saying you're in the 99th percentile means you're higher than 99% of the population, but saying you're in the 100th percentile would mean you're higher than everyone, including yourself... which is a contradiction. The same goes for the 0th percentile, but in reverse.
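The percentile-to-count conversion is simple expected-value counting. A sketch (function name and numbers are made up, not calcSD's exact wording):

```javascript
// Expected number of people in a room of `roomSize` guys that someone at
// the given percentile would measure above.
function countBelowInRoom(percentile, roomSize) {
  return Math.round((percentile / 100) * roomSize);
}

countBelowInRoom(89, 20); // 18 — you'd expect to be above 18 of 20 guys
```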

After all that, the only mystery left is the volume and how it's calculated. The volume is calculated from the inserted measurements using the following formula:

Assuming a perfectly cylindrical shape (not the case in real life), the circumference/girth (C) is divided by two times π to get the radius, and the result is squared. After that, multiply it by π again and then by the length. In other words: V = π · (C / 2π)² · L, which simplifies to C²L / 4π.
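As code, the cylinder formula looks like this (a sketch; the function name is made up):

```javascript
// Cylinder approximation: radius = C / (2π), volume = π · r² · L.
// Girth and length in cm give a result in cm³, and 1 cm³ = 1 ml.
function cylinderVolume(girth, length) {
  const r = girth / (2 * Math.PI);
  return Math.PI * r * r * length;
}

cylinderVolume(12, 14); // ≈ 160.4 ml
```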

The standard deviation for volume is a bit more complicated. Since I'm unable to do *multivariate normal distributions* properly in JavaScript as of yet, and no library seems to help in this regard, I had to get creative. I had an Excel file calculate, for each dataset:

- The length/girth value for each 0.1% increment, which generated 999 values each.
- The volume for each combination of length and girth in the increments generated before, making a 999×999 grid with a total of 998,001 values.
- The average/mean of all those values to make sure they were close to the volume of an average-sized member.
- The standard deviation of all those values.
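The grid procedure above could be reproduced in JavaScript along these lines. This is a toy sketch: the mean/SD numbers are placeholders rather than a real dataset, the grid is shrunk from 999 to 99 steps for speed, and the normal CDF is an Abramowitz–Stegun-style approximation, inverted numerically.

```javascript
// Toy recreation of the Excel grid method. Placeholder stats, not real data.
function normalCdf(z) {
  const t = 1 / (1 + 0.3275911 * Math.abs(z) / Math.SQRT2);
  const erf = 1 - t * (0.254829592 + t * (-0.284496736 +
              t * (1.421413741 + t * (-1.453152027 + t * 1.061405429)))) *
              Math.exp(-(z * z) / 2);
  return z >= 0 ? (1 + erf) / 2 : (1 - erf) / 2;
}

// Invert the CDF by bisection to get the measurement at percentile p.
function quantile(p, mean, sd) {
  let lo = -10, hi = 10;
  for (let i = 0; i < 60; i++) {
    const mid = (lo + hi) / 2;
    if (normalCdf(mid) < p) lo = mid; else hi = mid;
  }
  return mean + sd * ((lo + hi) / 2);
}

// Step 1: length/girth quantiles at each increment (99 here, 999 in Excel).
const steps = 99;
const lengths = [], girths = [];
for (let i = 1; i <= steps; i++) {
  const p = i / (steps + 1);
  lengths.push(quantile(p, 13.1, 1.6)); // placeholder length mean/SD, cm
  girths.push(quantile(p, 11.7, 1.1));  // placeholder girth mean/SD, cm
}

// Step 2: cylinder volume of every length × girth pair (result in ml).
const volumes = [];
for (const len of lengths)
  for (const c of girths)
    volumes.push((c * c * len) / (4 * Math.PI));

// Steps 3 and 4: mean (sanity check) and standard deviation of the grid.
const mean = volumes.reduce((a, b) => a + b) / volumes.length;
const sd = Math.sqrt(
  volumes.reduce((a, v) => a + (v - mean) ** 2, 0) / volumes.length);
```

Because the grid truncates the tails at the first and last increments, the resulting mean and SD slightly undershoot the true values, which is one source of the error margin described below.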

Unfortunately, at step 3 I realized that calculating things this way introduces a slight margin of error, **meaning there's an added error margin in the volume percentiles.** This error ranges from 1 to 5 ml depending on the dataset.

This was kind of convoluted, but in the end it worked. To give you an idea, the file (which I still have) occupies 17.5 MB and contains nothing but formulas and text. Curiously, the exact difference seems to be directly correlated with both the average/mean length and the girth's standard deviation, but not with the other two values.

**If anyone has a better method to calculate the volume stats on-the-fly with JavaScript, please contact me.**