Commit

Fix
marco-bernardi committed Jan 22, 2024
1 parent a3dfe97 commit e4431ae
Showing 2 changed files with 2 additions and 2 deletions.
Binary file modified question.pdf
4 changes: 2 additions & 2 deletions question.tex
@@ -869,7 +869,7 @@ \section{Uncertainty}

Conditional independence:
\begin{equation}\label{eq:prob_cond_ind}
-P(X|Y,Z) = P(X|Z)P(Y|Z)
+P(X,Y|Z) = P(X|Z)P(Y|Z)
\end{equation}
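The corrected equation states that X and Y are conditionally independent given Z when their joint conditional distribution factorizes. A minimal numeric sketch (toy binary distributions of my own choosing, not from the commit) that builds a joint from that factorization and verifies the identity:

```python
import numpy as np

# Hypothetical toy distributions over binary Z, X, Y.
# P(Z), then P(X|Z) and P(Y|Z) chosen freely (rows indexed by z).
p_z = np.array([0.4, 0.6])
p_x_given_z = np.array([[0.7, 0.3],
                        [0.2, 0.8]])
p_y_given_z = np.array([[0.9, 0.1],
                        [0.5, 0.5]])

# Build the joint P(X, Y, Z) = P(X|Z) P(Y|Z) P(Z), i.e. X ⟂ Y | Z by construction.
joint = np.einsum('zx,zy,z->xyz', p_x_given_z, p_y_given_z, p_z)

# Recover P(X, Y | Z) from the joint and compare it with the product form
# P(X|Z) P(Y|Z) from Eq. (prob_cond_ind).
p_xy_given_z = joint / joint.sum(axis=(0, 1), keepdims=True)
product_form = np.einsum('zx,zy->xyz', p_x_given_z, p_y_given_z)
print(np.allclose(p_xy_given_z, product_form))  # True
```

Note that the original (pre-fix) form P(X|Y,Z) = P(X|Z)P(Y|Z) fails even dimensionally: the left side sums to 1 over X alone, the right side over X and Y jointly.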

Full joint distribution:
@@ -1012,7 +1012,7 @@ \section{Machine Learning}
error_D(h) \equiv \underset{x\in\mathcal{D}}{Pr}[c(x)\neq h(x)]
\end{equation}

-We can say that $h\equiv\mathcal{H}$ overfits $Tr$ if $\exists h'\in\mathcal{H}$ such that $error_{Tr}(h)<error_{Tr}(h')$ and $error_D(h)>error_D(h')$.
+We can say that $h\in\mathcal{H}$ overfits $Tr$ if $\exists h'\in\mathcal{H}$ such that $error_{Tr}(h)<error_{Tr}(h')$ and $error_D(h)>error_D(h')$.

The goal of machine learning is to solve a task with the lowest possible true error, but a classifier learns from training data, so
it yields empirical error rather than true error.
