# Two-Parameter Exponential Distribution: Sufficient Statistics

We first define the complete sequence function and recall some well-known theorems. If X has Exp(μ,σ), then from (7) the proof of necessity is concluded. Let us define two spacings W1 and W2. From (1) and (2), one can easily obtain the probability generating functions (pgf) of K−(n,k,a) and K+(n,k,b) (see Balakrishnan and Stepanov); it also follows that the joint pgf of K−(n,k,a) and K+(n,k,b) factorizes accordingly. From (6), the pmf (or joint p.m.f.) of K+(n,k,a) can be written down directly.

Let $$X_1, X_2, \ldots, X_n$$ denote a random sample from a normal distribution $$N(\theta_1, \theta_2)$$. The other factor, the exponential function, depends on $$y_1, \ldots, y_n$$ only through the given sum, so the corresponding statistics are also joint sufficient statistics for $$\theta_1$$ and $$\theta_2$$. A joint p.d.f. is of the exponential form if

$$f(x;\theta_1,\theta_2)=\text{exp}\left[K_1(x)p_1(\theta_1,\theta_2)+K_2(x)p_2(\theta_1,\theta_2)+S(x) +q(\theta_1,\theta_2) \right].$$

**Theorem.** Let X1,X2,…,Xn be continuous random variables with CDF F. Then F is Exp(μ,σ) if and only if K−(n,k,a) and K+(n,k,b) are independent for a fixed k≥1 and for any a>0 and b>0.

There is a close relationship between the exponential distribution and the Poisson distribution. Since the expression above is of the exponential family form and the support does not depend on θ, the distribution belongs to the exponential family. We also conclude that in all examples of a location family of distributions, the statistics Yi are ancillary for the location parameter θ.
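As a quick numeric sanity check (a sketch in Python, not part of the original derivation), the normal density can be matched term by term against the exponential form above, with the particular choices K1(x)=x², p1=−1/(2θ2), K2(x)=x, p2=θ1/θ2, S(x)=0, and q=−θ1²/(2θ2)−log√(2πθ2):

```python
import math

def normal_pdf(x, t1, t2):
    """Standard N(theta1, theta2) density (theta2 = variance)."""
    return math.exp(-(x - t1) ** 2 / (2 * t2)) / math.sqrt(2 * math.pi * t2)

def exp_form_pdf(x, t1, t2):
    """Same density written in the exponential form
    exp[K1(x)p1 + K2(x)p2 + S(x) + q] with
    K1(x)=x^2, p1=-1/(2 t2), K2(x)=x, p2=t1/t2, S(x)=0,
    q = -t1^2/(2 t2) - log(sqrt(2 pi t2))."""
    p1 = -1.0 / (2 * t2)
    p2 = t1 / t2
    q = -t1 ** 2 / (2 * t2) - math.log(math.sqrt(2 * math.pi * t2))
    return math.exp(x ** 2 * p1 + x * p2 + q)

# The two expressions agree for every x and every (theta1, theta2):
for x in (-2.0, 0.0, 1.5, 3.7):
    assert abs(normal_pdf(x, 1.0, 2.0) - exp_form_pdf(x, 1.0, 2.0)) < 1e-12
```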
So the 100(1−α)% confidence interval for e−σa is given below. Recall that the exponential form requires a support that does not depend on the parameters $$\theta_1$$ and $$\theta_2$$. The results are proved through properties of complete sequence functions.

## CHARACTERIZATION BASED ON DEPENDENCY ASSUMPTIONS

**Theorem.** Let X1,X2,…,Xn be continuous random variables with CDF F. Then F has the exponential distribution Exp(μ,σ) if and only if, for a fixed k≥1 and every a>0, the following identity holds. More characterization results can be found in J. Galambos and S. Kotz, Characterizations of Probability Distributions, Springer-Verlag, New York, NY, USA, 1978, https://books.google.com/books?id=BkcRRgAACAAJ.

Next we present an asymptotic confidence interval for e−σa based on the counting random variable K+(n,k,a), which is stated in the following remark. Some confusion arises from the terms "decay parameter" and "decay rate," which are frequently used in connection with the exponential distribution. Two counting random variables, K−(n,k,a) and K+(n,k,b), have been considered in the literature. In each of the examples we considered so far in this lesson, there is one and only one parameter. Further, it is easy to verify the pmf of K−(n,k,a) for any j=0,1,⋯,k−1; now assume that F(⋅) has the form (3). The statistic

$$T(X_1,\ldots,X_n)=\sum_{i=1}^{n}X_i$$

is characterized as a sufficient statistic in the following result. The one-parameter exponential distribution has been considered by different authors since the work of … By memorylessness, the time t already elapsed cannot affect the distribution of the remaining waiting time between events. Other examples include the length, in minutes, of long-distance business telephone calls, and the amount of time, in months, a car battery lasts.
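The memorylessness just described can be checked by simulation. This is a small illustrative sketch; the rate `lam` and the cutoffs `s` and `t` are arbitrary choices, not values from the text:

```python
import random
import math

random.seed(7)
lam, s, t = 1.3, 0.8, 0.5
N = 200_000
draws = [random.expovariate(lam) for _ in range(N)]

# P(X > t): direct Monte Carlo estimate.
p_t = sum(x > t for x in draws) / N

# P(X > s + t | X > s): restrict attention to draws exceeding s.
exceed_s = [x for x in draws if x > s]
p_cond = sum(x > s + t for x in exceed_s) / len(exceed_s)

# Memorylessness: both should be close to exp(-lam * t).
assert abs(p_t - p_cond) < 0.01
assert abs(p_t - math.exp(-lam * t)) < 0.01
```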
**Definition.** A continuous random variable X with rate parameter λ>0 has an exponential distribution if and only if its probability density function is

$$f(x)=\lambda e^{-\lambda x},\quad x\geq 0,$$

and f(x)=0 otherwise. Here λ is the constant rate, in failures per unit of measurement (e.g., failures per hour, per cycle, etc.).

## Key Definitions: Sufficient, Complete, and Ancillary Statistics

The exponential distribution family is defined by pdfs of the form

$$f(x;\theta)=c(\theta)\,g(x)\,\text{exp}\left[\sum_{j=1}^{l}Q_j(\theta)T_j(x)\right],$$

where θ∈Θ, c(θ)>0, the Q_j(θ) are arbitrary functions of θ, g(x)>0, and the T_j(x) are arbitrary functions of x.

A sequence {Φn}n≥1 of elements of a Hilbert space H is called complete if the only element which is orthogonal to every Φn is the null element.

Let's try the extended theorem out for size on an example. By its distribution, K+(n,k,a) can be regarded as a sum of independent and identically distributed Bernoulli(1−e−σa) random variables. From (20)–(22), suppose that the counting random variables K−(n,k,a) and K+(n,k,b) are independent. From the expectation of K+(n,k,a), an unbiased estimator T2 for e−σa is obtained; T2 is the uniformly minimum-variance unbiased estimator (UMVUE), and its variance, which equals its mean squared error (MSE), follows directly. To see this, consider the joint probability density function. (Journal of Statistical Theory and Applications.)
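To illustrate the unbiased estimator T2 numerically, the sketch below uses a working definition of K+(n,k,a): the number of order statistics above the k-th that exceed X_{k:n} by more than a. This definition is an assumption on my part, since the paper's formal definition is not reproduced in this excerpt. Under it, memorylessness makes K+ ~ Binomial(n−k, e^{−σa}), so K+/(n−k) is unbiased for e^{−σa}:

```python
import random
import math

random.seed(42)

def k_plus(sample, k, a):
    """Working definition (an assumption; the paper's formal definition
    is not reproduced here): the number of order statistics above the
    k-th that exceed X_{k:n} by more than a."""
    xs = sorted(sample)
    pivot = xs[k - 1]
    return sum(x - pivot > a for x in xs[k:])

mu, sigma, n, k, a = 2.0, 1.0, 200, 5, 0.5
trials = 2000
est = 0.0
for _ in range(trials):
    sample = [mu + random.expovariate(sigma) for _ in range(n)]
    est += k_plus(sample, k, a) / (n - k)
est /= trials

# By memorylessness, the n-k excesses over X_{k:n} behave like an
# Exp(sigma) sample, so K+ ~ Binomial(n-k, e^{-sigma*a}) and
# T2 = K+/(n-k) is unbiased for e^{-sigma*a}.
assert abs(est - math.exp(-sigma * a)) < 0.01
```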
So the conditions of the central limit theorem hold for the random variable T2, and therefore, from (33), we can construct an asymptotic confidence interval for e−σa by solving the following inequality. The quantity (26) shows that the spacings W1 and W2 are independent.

**Sufficient statistics.** Intuitively, sufficient statistics are those that in some sense contain all the information about θ. A statistic T(X) is called sufficient for θ if the conditional distribution of the data X given T(X)=t does not depend on θ.

Let $$X_1, X_2, \ldots, X_n$$ denote random variables with a joint p.d.f. We factor the joint p.d.f. into two functions, one ($$\phi$$) being only a function of the statistic $$Y=\bar{X}$$ and the other ($$h$$) not depending on the parameter $$\mu$$; therefore, the Factorization Theorem tells us that $$Y=\bar{X}$$ is a sufficient statistic for $$\mu$$.

Rewriting the first factor, squaring the quantity in parentheses, and distributing the summation in the second factor, we get:

$$f(x_1, x_2, ... , x_n;\theta_1, \theta_2) = \text{exp} \left[\text{log}\left(\dfrac{1}{\sqrt{2\pi\theta_2}}\right)^n\right] \text{exp} \left[-\dfrac{1}{2\theta_2}\left\{ \sum_{i=1}^{n}x_{i}^{2} -2\theta_1\sum_{i=1}^{n}x_{i} +\sum_{i=1}^{n}\theta_{1}^{2} \right\}\right]$$

$$f(x_1, x_2, ... , x_n;\theta_1, \theta_2) = \text{exp} \left[ -\dfrac{1}{2\theta_2}\sum_{i=1}^{n}x_{i}^{2}+\dfrac{\theta_1}{\theta_2}\sum_{i=1}^{n}x_{i} -\dfrac{n\theta_{1}^{2}}{2\theta_2}-n\text{log}\sqrt{2\pi\theta_2} \right]$$
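The factorization shows that the likelihood depends on the data only through the two sums ∑xᵢ and ∑xᵢ². A small sketch makes this concrete: two different samples engineered to share those two sums produce identical likelihoods at every (θ1, θ2). The specific samples are constructed purely for illustration:

```python
import math

def normal_loglik(data, t1, t2):
    """Log-likelihood of N(theta1, theta2) for a sample (theta2 = variance)."""
    n = len(data)
    return (-sum((x - t1) ** 2 for x in data) / (2 * t2)
            - n / 2 * math.log(2 * math.pi * t2))

# Two different samples sharing the same sufficient statistics:
# sum = 2 and sum of squares = 4 in both cases.
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2
a = [0.0, 0.0, 2.0]
b = [1.0, phi, psi]

assert abs(sum(a) - sum(b)) < 1e-12
assert abs(sum(x * x for x in a) - sum(x * x for x in b)) < 1e-12

# Because the likelihood depends on the data only through these sums,
# the two samples give identical likelihoods at every parameter value.
for t1, t2 in [(0.0, 1.0), (1.5, 0.7), (-2.0, 3.0)]:
    assert abs(normal_loglik(a, t1, t2) - normal_loglik(b, t1, t2)) < 1e-9
```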
In the two-parameter Factorization Theorem, the factorization requires that $$\phi$$ is a function that depends on the data $$(x_1, x_2, ... , x_n)$$ only through the functions $$u_1(x_1, x_2, ... , x_n)$$ and $$u_2(x_1, x_2, ... , x_n)$$, and that the remaining factor does not depend on $$\theta_1$$ or $$\theta_2$$.

It is shown that the probability mass function of K+(n,k,a) and its first moment can characterize the exponential distribution. Assume that X has an exponential distribution. More characterization results for the exponential distribution can be seen in Galambos and Kotz and in Ahsanullah and Hamedani.
**Theorem (Exponential Criterion, conclusion).** Then the statistics $$Y_1=\sum_{i=1}^{n}K_1(X_i)$$ and $$Y_2=\sum_{i=1}^{n}K_2(X_i)$$ are jointly sufficient for $$\theta_1$$ and $$\theta_2$$.

Then the cumulative distribution function (CDF) of X is given below. According to (1), the probability mass function (pmf) of K+(n,k,a) for any j=0,1,⋯,n−k has been obtained (see Dembińska et al.).

**Partition interpretation for minimal sufficient statistics.** Any sufficient statistic induces a partition on the sample space.

Here t denotes operating time, life, or age, in hours, cycles, miles, actuations, and so on.

**Theorem 6.2.24 (Basu's theorem).** Let V and T be two statistics of X from a population indexed by θ ∈ Θ. If T is sufficient and complete and V is ancillary, then T and V are independent.

Because the observations are independent, we have just shown that the intuitive estimators of $$\mu$$ and $$\sigma^2$$ are also sufficient estimators. Desu proved that the distribution of the population is exponential if and only if $$nX_{1:n}\stackrel{d}{=}X_1$$ for all n≥1, where $$\stackrel{d}{=}$$ denotes equality in distribution.

## AN ESTIMATOR BASED ON A NEAR-ORDER STATISTIC

https://doi.org/10.2991/jsta.d.200224.001, http://creativecommons.org/licenses/by-nc/4.0/

The probability density function of a normal random variable with mean $$\theta_1$$ and variance $$\theta_2$$ can be written in exponential form; therefore, the statistics $$Y_1=\sum_{i=1}^{n}X^{2}_{i}$$ and $$Y_2=\sum_{i=1}^{n}X_i$$ are joint sufficient statistics for $$\theta_1$$ and $$\theta_2$$.
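Basu's theorem can be illustrated by simulation for N(θ, 1), where the sample mean is complete and sufficient and the sample variance is ancillary. The sketch below (sample size and replication count are arbitrary choices) checks that their empirical correlation is near zero:

```python
import random
import math

random.seed(1)

def sample_mean_and_var(n, theta):
    """Return (X-bar, S^2) for one N(theta, 1) sample of size n."""
    xs = [random.gauss(theta, 1.0) for _ in range(n)]
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, v

# For N(theta, 1), X-bar is complete sufficient and S^2 is ancillary,
# so Basu's theorem says they are independent; their sample correlation
# over many replications should be near zero.
pairs = [sample_mean_and_var(10, 3.0) for _ in range(20000)]
ms = [p[0] for p in pairs]
vs = [p[1] for p in pairs]
mbar = sum(ms) / len(ms)
vbar = sum(vs) / len(vs)
cov = sum((m - mbar) * (v - vbar) for m, v in pairs) / len(pairs)
corr = cov / (math.sqrt(sum((m - mbar) ** 2 for m in ms) / len(ms))
              * math.sqrt(sum((v - vbar) ** 2 for v in vs) / len(vs)))
assert abs(corr) < 0.05
```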
An exact confidence interval for e−σa when a is known can be obtained from the fact that a confidence interval is available for σ in the two-parameter exponential distribution. Let X be a random variable having a two-parameter exponential distribution with parameters μ and σ, denoted by Exp(μ,σ). Substituting its CDF into (27) implies the claim. It is well known that the MLE of the unknown scale parameter σ is

$$\hat{\sigma}=n\left(\sum_{i=1}^{n}(X_i-X_{1:n})\right)^{-1}$$

when the underlying distribution is Exp(μ,σ). Therefore, K+(n,k,a) is a sufficient and complete statistic for e−σa, and the obtained results show that, with an appropriate choice of k, the estimator T2 can be considered a good estimator of the parameter e−σa. The two-parameter exponential distribution is also a very useful component in reliability engineering.

Let $$X_1, X_2, \ldots, X_n$$ be independent and continuous random variables. The corresponding order statistics are the Xi's arranged in non-decreasing order, denoted by $$X_{1:n}\leq X_{2:n}\leq\cdots\leq X_{n:n}$$.

Recall that the Poisson distribution with parameter $$\theta \in (0, \infty)$$ is a discrete distribution on the non-negative integers with probability density function $$g$$ defined by

$$g(x) = e^{-\theta} \dfrac{\theta^x}{x!},\quad x=0,1,2,\ldots$$

A statistic T is minimal sufficient when, whenever x and y are two data values such that the likelihood ratio f(x;θ)/f(y;θ) does not depend on θ, we have T(x)=T(y). This result is stated here without proof. As an exercise, use the Exponential Criterion to find joint sufficient statistics for $$\theta_1$$ and $$\theta_2$$ from a random sample $$X_1, X_2, \ldots, X_n$$ from a normal distribution $$N(\theta_1, \theta_2)$$.
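A quick simulation sketch (parameter values are arbitrary choices, not values from the text) confirms that the MLE n(∑(Xi − X_{1:n}))⁻¹ recovers the true rate σ:

```python
import random

random.seed(3)
mu, sigma, n = 5.0, 2.0, 100_000

# Draw a two-parameter exponential sample: location mu plus Exp(sigma).
xs = [mu + random.expovariate(sigma) for _ in range(n)]

x_min = min(xs)  # X_{1:n}, the MLE of the location mu
sigma_hat = n / sum(x - x_min for x in xs)  # n * (sum of centered obs)^{-1}

# With n = 100000 the MLE should sit close to the true rate sigma = 2.
assert abs(sigma_hat - sigma) < 0.05
```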