Fit data to parametric distribution
I have data with a nice bell-shaped histogram (PDF). However, fitting a Normal distribution (by computing the mean and variance) does not work, as the figure below shows.
My question is whether there are other distributions I should try, in your experience. In other words, which distribution should fit nicely bell-shaped data that is not well fit by the Normal distribution?
My ultimate goal is to have an approximate analytic form of the cumulative distribution function, to analyze the corresponding probabilities. Any advice towards this goal is appreciated.
I include the data (space as delimiter):
Data to fit.
Update: qq plot

Welcome to the site, Anna. What is your ultimate goal? Why are you trying to fit a distribution to these data? What are you hoping to achieve in the end?
– COOLSerdash, Jan 5 at 10:53

@Xi'an it is not. My question is which distribution should fit nicely bell-shaped data that is not well fit by the Normal distribution.
– Anna Noie, Jan 5 at 11:06

A better plot to show us would be a QQ plot.
– kjetil b halvorsen, Jan 5 at 11:24

There is a different global view on the problem: if you don't know the functional form of the real distribution and hope to judge any fit by its agreement with the observed histogram, the ultimate fit will have the precision of the histogram, due to model uncertainty. So I would just compute the empirical cumulative distribution function, a nonparametric estimator, and be done. This is the cumulative histogram when there is no binning of the data.
– Frank Harrell, Jan 5 at 12:32

One thing to try is my online statistical distribution fitter at zunzun.com/StatisticalDistributions/1 to see if it suggests any good candidate distributions. It fits the data to over 80 of the continuous statistical distributions in scipy, and is open source.
– James Phillips, Jan 5 at 16:18
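Frank Harrell's suggestion of the empirical CDF is straightforward to implement. A minimal sketch in Python with numpy (the sample values here are made up, standing in for the real data set):

```python
import numpy as np

def ecdf(sample):
    """Empirical CDF: F(x) = fraction of sample values <= x."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = xs.size
    def F(x):
        # side="right" counts how many sorted values are <= x
        return np.searchsorted(xs, x, side="right") / n
    return F

data = [1.2, -0.5, 3.1, 0.0, 2.2]   # placeholder for the real data set
F = ecdf(data)
print(F(0.0))   # 0.4 -- two of the five values are <= 0.0
```

Because it is nonparametric, this estimator carries no model uncertainty, at the cost of not giving a closed-form expression.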

3 Answers
The histogram, as presented by the OP, gives the impression that the data is symmetrical. Given that the data is noticeably more peaked than Normal, and if the data is roughly symmetrical, then a natural suggestion to try is the Student's t with location parameter $\mu$, scale parameter $\sigma$, and $v$ degrees of freedom, and pdf $f(x)$:
$$f(x) = \frac{1}{\sigma \sqrt{v} \; B\left(\frac{v}{2},\frac{1}{2}\right)} \left(\frac{v}{v+\frac{(x-\mu)^2}{\sigma^2}}\right)^{\frac{v+1}{2}} \quad \text{defined on the real line}$$
Student t fit
The following diagram shows a sample fit using the Student's t, with $\mu = 5.45$, $\sigma = 6.61$, and $v = 2.97$:
In the diagram:

- the dashed red curve is the fitted Student's t pdf
- the squiggly blue curve is the empirical pdf (frequency polygon) of the raw data
On the upside, this appears to be a significantly better fit than the Normal, using the same raw data set provided.
On the possible downside, I am not sure I would fully agree with the OP's opening statement: "I have data with a nice bell-shaped histogram PDF". In particular, if one looks more closely at your data set (which contains 100,000 samples), the maximum is 37.45, while the minimum is −910. Moreover, there is not just one large negative value, but a whole bunch of them. This suggests that your data set is not symmetrical, but negatively skewed ... and that there are other things going on in the tails; if so, other distributions may perhaps be better suited. Zooming out, again with the same Student's t fit, we can see this feature of the data in the right and left tails:
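For readers who want to reproduce this kind of fit, scipy's `t.fit` performs maximum-likelihood estimation of all three parameters. A sketch on synthetic data (the generating parameters below mimic the fitted values reported in this answer; they are not re-derived from the OP's file):

```python
from scipy.stats import t

# Synthetic heavy-tailed sample standing in for the real data (illustrative only)
sample = t.rvs(df=2.97, loc=5.45, scale=6.61, size=20000, random_state=0)

# Maximum-likelihood fit; scipy returns (df, loc, scale)
df_hat, loc_hat, scale_hat = t.fit(sample)

# The fitted CDF gives the analytic form the OP asked for
p = t.cdf(0.0, df_hat, loc=loc_hat, scale=scale_hat)
print(df_hat, loc_hat, scale_hat, p)
```

With 20,000 draws, the recovered parameters land close to the generating values, and `t.cdf` then serves as a closed-form CDF for probability calculations.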

I also found Student's t to work well, per your answer. Additionally, I found the Johnson SU to work well here.
– James Phillips, Jan 5 at 22:06
In short: your two plots show a big discrepancy. The smallest value shown in the histogram is about $-30$, while the QQ plot shows values down to around $-900$. All those long-tailed outliers are only about 0.7% of the sample, but they dominate the QQ plot. So you need to ask yourself what produces those outliers, and that should guide what you do with your data. If I make a QQ plot after eliminating that long tail, it looks much closer to normal, but not perfect. Look at these:
mean(Y)
[1] 3.9657
mean(Y[Y >= -30])
[1] 4.414797
but the effect on standard deviation is larger:
sd(Y)
[1] 10.92237
sd(Y[Y >= -30])
[1] 8.006223
and that explains the strange form of your first plot (the histogram): the fitted normal curve you show is influenced by the long tail you omitted from the plot.
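The point that a small tail inflates the standard deviation far more than it shifts the mean can be sketched on synthetic data (a roughly normal bulk plus a ~0.7% long negative tail, mimicking the situation described above; all numbers are illustrative, not the OP's data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Bulk roughly N(4.4, 8^2), plus a small long negative tail below -30
bulk = rng.normal(loc=4.4, scale=8.0, size=99300)
tail = rng.uniform(-900.0, -30.0, size=700)
Y = np.concatenate([bulk, tail])

trimmed = Y[Y >= -30]
print(Y.mean(), trimmed.mean())   # mean shifts modestly
print(Y.std(), trimmed.std())     # standard deviation shrinks a lot
```

A moment-matched normal fit to `Y` uses the inflated standard deviation, which is why the fitted curve looks far too wide against the histogram of the bulk.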

Thanks, great advice. I understand that perfectly fitting the probability density function is not very feasible. As I just edited my question, my goal is to have an approximate analytic CDF. I am thinking about your advice about the dominating outliers.
– Anna Noie, Jan 5 at 14:16

To give better advice, we really need to know the context. What does your variable measure, and what is the goal of modeling?
– kjetil b halvorsen, Jan 5 at 14:18
You might try a Gaussian mixture, which is easy using Mclust in the mclust library of R.
library(mclust)
mc.fit = Mclust(data$V1)
summary(mc.fit,parameters=TRUE)
This gives a three-component Gaussian mixture (8 parameters total), with components
1: N(−69.269908, 6995.71627), p1 = 0.003970506
2: N(−4.314187, 171.76873), p2 = 0.115329209
3: N(5.380137, 46.26587), p3 = 0.880700285
The log likelihood is −352620.4, which you can use to compare other possible fits such as those suggested.
The long left tail is captured by the first two components, especially the first.
The cumulative distribution estimate at “x” is (in R form)
p1*pnorm(x, -69.269908, sqrt(6995.71627)) + p2*pnorm(x, -4.314187, sqrt(171.76873))
+ p3*pnorm(x, 5.380137, sqrt(46.26587))
I tried various quantiles (x) from .0001 to .9999 and the accuracy of the estimate seems reasonable to me.
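Since the mixture CDF is just a weighted sum of normal CDFs, it is easy to port outside R. A sketch in Python using only the standard library (the component parameters are taken from the Mclust fit above, with the signs of the means as reconstructed there; treat the exact figures as illustrative):

```python
import math

# (mean, variance, weight) for each component, from the Mclust fit above
components = [
    (-69.269908, 6995.71627, 0.003970506),
    (-4.314187,  171.76873,  0.115329209),
    (5.380137,   46.26587,   0.880700285),
]

def norm_cdf(x, mean, var):
    # Normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mean) / math.sqrt(2.0 * var)))

def mixture_cdf(x):
    """CDF of the Gaussian mixture: weighted sum of the component CDFs."""
    return sum(w * norm_cdf(x, m, v) for m, v, w in components)

print(mixture_cdf(5.38))   # near the center of the dominant component
```

The result is a proper CDF (monotone, tending to 0 and 1 in the tails), which is the approximate analytic form the question asks for.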