# Linear Algebra Online Course Help | Least Squares Method Tutoring | STAT671

linearalgebra.me supports you throughout your studies abroad. We have built a solid reputation for linear algebra assignment help, guaranteeing reliable, high-quality, and original linear algebra services. Our experts are highly experienced in linear algebra, and assignments in related areas pose no difficulty, including:

• Numerical analysis
• Advanced linear algebra
• Matrix theory
• Optimization theory
• Linear programming
• Approximation theory

## Linear Algebra Assignment Help | Uncertainty in the Model Predictions

In Section 2.5 the uncertainties in the model parameters were considered. If the only purpose of the experiment is to determine the parameters of the model, then only these uncertainties are of interest. However, there are many situations in which we are interested in using the model for making predictions. Once the parameters of the model are available, the function $f(\mathbf{X})$ can be used to predict $y$ for any combination of the independent variables (i.e., the vector $\mathbf{X}$). In this section attention turns to the uncertainties $\sigma_{f}$ of these predictions.

Typically, one assumes that the model is “correct” and thus the computed values of $y$ are normally distributed about the true values. For a given set of values for the terms of the $\mathbf{X}$ vector (i.e., a combination of the independent variables $x_{1}, x_{2}, \ldots, x_{m}$), we assume that the uncertainty in the predicted value of $y$ is due to the uncertainties associated with the $a_{k}$'s. The predicted value of $y$ is determined by substituting $\mathbf{X}$ into $f(\mathbf{X})$:
$$y=f\left(\mathbf{X} ; a_{1}, a_{2}, \ldots, a_{p}\right)$$
Defining $\Delta a_{k}$ as the error in $a_{k}$, we can estimate $\Delta y$ (the error in $y$ ) by neglecting higher order terms in a Taylor expansion around the true value of $y$ :
$$\Delta f \cong \frac{\partial f}{\partial a_{1}} \Delta a_{1}+\frac{\partial f}{\partial a_{2}} \Delta a_{2}+\ldots+\frac{\partial f}{\partial a_{p}} \Delta a_{p}$$
To simplify the analysis, let us use the following definition:
$$T_{k}=\frac{\partial f}{\partial a_{k}} \Delta a_{k}$$
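As a numerical illustration of this propagation, if the parameter errors are uncorrelated, the prediction uncertainty reduces to $\sigma_{f}^{2} = \sum_{k} (\partial f / \partial a_{k})^{2} \sigma_{a_{k}}^{2}$. The sketch below assumes that uncorrelated case; the model $f$, the parameter values, and their standard errors are invented for the example:

```python
import numpy as np

# Illustrative model (not from the text): f(x; a1, a2) = a1 * exp(a2 * x)
def f(x, a):
    return a[0] * np.exp(a[1] * x)

def prediction_sigma(x, a, sigma_a, h=1e-6):
    """sigma_f^2 = sum_k (df/da_k)^2 * sigma_{a_k}^2 (uncorrelated errors)."""
    grads = []
    for k in range(len(a)):
        ap = np.array(a, dtype=float)
        am = np.array(a, dtype=float)
        ap[k] += h
        am[k] -= h
        # central-difference estimate of the partial derivative df/da_k
        grads.append((f(x, ap) - f(x, am)) / (2 * h))
    grads = np.array(grads)
    return np.sqrt(np.sum(grads**2 * np.asarray(sigma_a)**2))

a = [2.0, 0.5]          # fitted parameters (made-up values)
sigma_a = [0.1, 0.02]   # their standard errors (made-up values)
print(prediction_sigma(1.0, a, sigma_a))  # ≈ 0.178
```

When the parameter errors are correlated, the cross terms of the covariance matrix must be included as well; the diagonal-only form above is the simplest case.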

## Linear Algebra Assignment Help | Treatment of Prior Estimates

In the previous sections we noted that a basic requirement of the method of least squares is that the number of data points $n$ must exceed $p$ (the number of unknown parameters of the model). The difference between these two numbers, $n-p$, is called the “number of degrees of freedom”. Very early in my career I came across an experiment in which the value of $n-p$ was in fact negative! The modeling effort was related to damage caused by a certain type of event, and data had been obtained based upon only two events. Yet the model included over ten unknown parameters. The independent variables included the power of the event and other variables related to position. To make up the deficit, estimates of the parameters based upon theoretical models were used to supplement the two data points. The prior estimates of the parameters are called Bayesian estimators, and if the number of Bayesian estimators is $n_{b}$ then the number of degrees of freedom is $n+n_{b}-p$. As long as this number is greater than zero, a least squares calculation can be made.
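The bookkeeping in this anecdote is easy to check. A minimal sketch (the function name and the exact parameter count of 11 are invented for illustration):

```python
def degrees_of_freedom(n, p, n_b=0):
    """n data points, p unknown parameters, n_b prior (Bayesian) estimates."""
    return n + n_b - p

# The experiment above: 2 data points, say 11 unknown parameters.
print(degrees_of_freedom(2, 11))       # -9: negative, no fit possible
print(degrees_of_freedom(2, 11, 10))   #  1: positive once 10 priors are added
```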

In Section 2.2, Equation 2.2.6 is the modified form that the objective function takes when prior estimates of the $a_{k}$ parameters are available:
$$S=\sum_{i=1}^{i=n} w_{i}\left(Y_{i}-f\left(\mathbf{X}_{i}\right)\right)^{2}+\sum_{k=1}^{k=p}\left(a_{k}-b_{k}\right)^{2} / \sigma_{b_{k}}^{2}$$
In this equation $b_{k}$ is the prior estimate of $a_{k}$ and $\sigma_{b_{k}}$ is the uncertainty associated with this prior estimate. The parameter $b_{k}$ is typically used as the initial guess $a_{0_{k}}$ for $a_{k}$. We see from this equation that each value of $b_{k}$ is treated as an additional data point. However, if $\sigma_{b_{k}}$ is not specified, then it is assumed to be infinite and no weight is associated with this point. In other words, if $\sigma_{b_{k}}$ is not specified then $b_{k}$ is treated as just an initial guess for $a_{k}$ and not as a prior estimate. The number of values of $b_{k}$ that are specified (i.e., not infinity) is $n_{b}$.
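For a linear model $f(\mathbf{X}; a) = \mathbf{X}a$, treating each specified prior as an extra weighted data point (as the text describes) leads directly to augmented normal equations. The sketch below is an assumed implementation of that idea, not code from the text; all names and the test data are illustrative, and an unspecified $\sigma_{b_{k}}$ is encoded as `np.inf`, which gives that prior zero weight:

```python
import numpy as np

def ls_with_priors(X, Y, w, b, sigma_b):
    """Minimize S = sum_i w_i (Y_i - X_i a)^2 + sum_k (a_k - b_k)^2 / sigma_bk^2.

    sigma_b[k] = np.inf means "no prior": b_k is only an initial guess
    and the corresponding penalty term gets zero weight.
    """
    sigma_b = np.asarray(sigma_b, dtype=float)
    prior_w = np.where(np.isinf(sigma_b), 0.0, 1.0 / sigma_b**2)
    A = np.vstack([X, np.eye(len(b))])    # each prior adds one unit row e_k
    rhs = np.concatenate([Y, b])
    wt = np.concatenate([w, prior_w])
    AtW = A.T * wt                        # weighted normal equations
    return np.linalg.solve(AtW @ A, AtW @ rhs)

# Fit y = a1 + a2*x to three points; put a tight prior on the slope a2.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
Y = np.array([0.0, 1.0, 2.0])
w = np.ones(3)
no_prior = ls_with_priors(X, Y, w, b=np.zeros(2), sigma_b=[np.inf, np.inf])
with_prior = ls_with_priors(X, Y, w, b=np.array([0.0, 0.5]), sigma_b=[np.inf, 0.1])
print(no_prior)    # ordinary weighted least squares: slope 1
print(with_prior)  # slope pulled toward the prior value 0.5
```

Because the prior terms enter $S$ exactly like data terms, $n_{b}$ specified priors raise the effective number of observations from $n$ to $n + n_{b}$, which is how the degrees-of-freedom deficit in the anecdote above was repaired.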

# Econometrics Assignment Help

## Given this, how can you study linear algebra well and secure a high grade?

1.1 Mark up the book

[A common misconception about highlighting] The key points are not the boldface text in the book, much less every definition. Linear algebra has so many concepts that many students compulsively highlight every single definition, until the entire book is "key points"; then how do you review for the final? The points I believe are worth marking are:

A. Parts you do not understand, or that are obscure or unfamiliar. This matters: some definitions are simple, but their proofs are strange, so I mark obscure definitions and proof techniques. When reading, cover the answers to all worked examples and attempt them yourself; if you get stuck, you are not yet familiar with that example's method, so mark it as well.

B. Material the instructor summarizes or emphasizes in class. Not much to explain here: just follow the instructor.

C. Knowledge points you find fuzzy while working through problems on your own.

1.2 Take notes

1.3 Understand the relations between definitions