
Machine learning

B
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, Dr. Zeyd Boukhers, Dr. Thomas Gottron, Dr. Florian Lemmerich, Dr. Christoph Kling, Prof. Dr. Steffen Staab, 2019, 33 p. K-means. Expectation maximization. DBSCAN. Agglomerative hierarchical clustering.
  • №1
  • 984,92 KB
  • added
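As a companion to the clustering topics listed in the entry above, here is a minimal K-means sketch in Python/NumPy. It illustrates the standard assign-then-update loop only; the function name and toy data are illustrative and not taken from the slides.

import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain K-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to the nearest centroid (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each centroid as the mean of its assigned points
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# toy usage: two well-separated 2-D blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels, centroids = kmeans(X, k=2)
print(centroids)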
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, 2019, 50 p. Defining the task. Designing features. Preprocessing: outlier removal, feature scaling, feature correlation measurement, missing data. Class imbalance problem.
  • №2
  • 2,01 MB
  • added
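For the preprocessing topics in the entry above (feature scaling and missing data), the following is a minimal sketch of z-score standardization and mean imputation, assuming NumPy; the function names and toy matrix are illustrative, not taken from the slides.

import numpy as np

def impute_mean(X):
    """Replace missing values (NaN) by the column mean."""
    mu = np.nanmean(X, axis=0)
    idx = np.where(np.isnan(X))
    X = X.copy()
    X[idx] = np.take(mu, idx[1])
    return X

def standardize(X):
    """Z-score feature scaling: zero mean, unit variance per column."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    sigma[sigma == 0] = 1.0          # guard against constant features
    return (X - mu) / sigma, mu, sigma

X = np.array([[1.0, 200.0], [2.0, np.nan], [3.0, 180.0]])
X_scaled, mu, sigma = standardize(impute_mean(X))
print(X_scaled)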
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, Dr. Zeyd Boukhers, 2019, 47 p. Context-dependent classification. Markov chain. Hidden Markov model. Recognition. Decoding. Training. Data dimension reduction. Principal component analysis. Singular value decomposition.
  • №3
  • 1,72 MB
  • added
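The dimensionality-reduction topics in the entry above (PCA via SVD) can be sketched in a few lines of NumPy. This is a generic illustration of centering, decomposing, and projecting, not the lecture's own code; names and data are illustrative.

import numpy as np

def pca_svd(X, n_components):
    """PCA via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]              # principal directions
    scores = Xc @ components.T                  # projected data
    explained_var = (S ** 2) / (len(X) - 1)     # variance along each direction
    return scores, components, explained_var[:n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 1] = 2 * X[:, 0] + 0.1 * X[:, 1]   # make two features strongly correlated
scores, comps, var = pca_svd(X, n_components=2)
print(var)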
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, Dr. Zeyd Boukhers, 2019, 72 p. Data dimension reduction. Principal component analysis. Singular value decomposition. Clustering. Unsupervised learning. Evaluating clusters. Intrinsic and extrinsic evaluation measures. How K-means works. Choosing K. EM algorithm.
  • №4
  • 4,29 MB
  • added
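As a sketch of the EM algorithm mentioned in the entry above, the following fits a two-component one-dimensional Gaussian mixture with NumPy. It is an illustrative toy (crude initialization, fixed iteration count), not code from the slides.

import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture (illustrative only)."""
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = (pi / (sigma * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 1.0, 700)])
print(em_gmm_1d(x))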
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, Dr. Zeyd Boukhers, 2019, 47 p. Defining the task. Designing features. Preprocessing. Outlier removal. Feature scaling. Feature correlation measurement. Missing data. Class imbalance problem. k-nearest neighbors. Classification. Regression. K-D tree. Overfitting. Evaluation. Confusion matrix....
  • №5
  • 1,82 MB
  • added
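The k-nearest-neighbors topics in the entry above can be illustrated with a brute-force classifier (Euclidean distance, majority vote) rather than a K-D tree; a generic NumPy sketch, not the lecture's code, with illustrative toy data.

import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Brute-force k-nearest-neighbor classification by majority vote."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]     # labels of the k closest points
        preds.append(np.bincount(nearest).argmax())  # majority vote
    return np.array(preds)

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.array([[0.5, 0.5], [3.5, 4.0]])
print(knn_predict(X_train, y_train, X_test, k=5))   # expected: [0 1]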
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, Dr. Zeyd Boukhers, 2019, 43 p. Class imbalance problem. k-nearest neighbors. Classification. Regression. K-D tree. Overfitting. How to choose the best K. Evaluation. Confusion matrix. Precision. Recall. F-Score. Overall accuracy. Bayesian classification. Bayes' theorem. Naive Bayes.
  • №6
  • 1,78 MB
  • added
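For the evaluation topics in the entry above (confusion matrix, precision, recall, F-score, overall accuracy), here is a small sketch computing them for a binary classifier; the function name and toy label vectors are illustrative.

import numpy as np

def binary_metrics(y_true, y_pred):
    """Confusion-matrix-based evaluation for a binary classifier."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / len(y_true)
    return {"confusion": [[tn, fp], [fn, tp]],
            "precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
print(binary_metrics(y_true, y_pred))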
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, Dr. Zeyd Boukhers, 2019, 47 p. How to evaluate a classification model. Bayesian classification. Bayes' theorem. Maximum Likelihood Estimation. Maximum a Posteriori Probability Estimation. Naïve Bayes. Decision tree.
  • №7
  • 2,08 MB
  • added
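The Bayesian-classification topics in the entry above (Bayes' theorem, MLE parameters, naive Bayes) can be sketched as a Gaussian naive Bayes classifier in NumPy; this is a generic illustration, not the slides' derivation, and the class/variable names are illustrative.

import numpy as np

class GaussianNaiveBayes:
    """Naive Bayes with per-class Gaussian likelihoods (MLE parameters)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.priors = np.array([np.mean(y == c) for c in self.classes])
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.vars = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        return self

    def predict(self, X):
        # maximise log P(c) + sum_j log N(x_j | mu_cj, var_cj) over classes c
        log_prior = np.log(self.priors)
        log_lik = -0.5 * (np.log(2 * np.pi * self.vars)[None]
                          + (X[:, None, :] - self.means) ** 2 / self.vars).sum(axis=2)
        return self.classes[(log_prior + log_lik).argmax(axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(GaussianNaiveBayes().fit(X, y).predict(np.array([[0, 0], [3, 3]])))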
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, Dr. Zeyd Boukhers, 2019, 36 p. Naïve Bayes. Decision tree. More about decision trees. Pruning. Random forest. Underfitting, overfitting, and generalization.
  • №8
  • 1,96 MB
  • added
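For the decision-tree topics in the entry above, here is a sketch of choosing a split by information gain (entropy reduction), which is the core step of tree construction; a generic illustration with illustrative names and toy data, not the lecture's code.

import numpy as np

def entropy(y):
    """Shannon entropy of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(x, y, threshold):
    """Gain of splitting numeric feature x at the given threshold."""
    left, right = y[x <= threshold], y[x > threshold]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
    return entropy(y) - weighted

def best_split(x, y):
    """Try midpoints between sorted unique feature values, keep the best one."""
    values = np.unique(x)
    thresholds = (values[:-1] + values[1:]) / 2
    gains = [information_gain(x, y, t) for t in thresholds]
    return thresholds[int(np.argmax(gains))], max(gains)

x = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_split(x, y))   # threshold 4.5, gain 1.0 (a perfect split)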
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, Dr. Zeyd Boukhers, 2019, 45 p. Decision tree. Random forest. Linear regression. Least squares function. Optimization. Linear classification. Perceptron classifier. Support vector machines (SVM). Optimization.
  • №9
  • 2,04 MB
  • added
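The linear-regression and least-squares topics in the entry above can be sketched with ordinary least squares solved by np.linalg.lstsq; the function name and toy data are illustrative, not taken from the slides.

import numpy as np

def fit_linear_regression(X, y):
    """Ordinary least squares: minimise ||Xw + b - y||^2."""
    A = np.hstack([X, np.ones((len(X), 1))])        # append a bias column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1], coef[-1]                      # weights, intercept

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 + rng.normal(0, 0.1, 100)
w, b = fit_linear_regression(X, y)
print(w, b)   # close to [3, -2] and 0.5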
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, Dr. Zeyd Boukhers, 2019, 52 p. Linear regression. Perceptron classifier. Support vector machine. Non-separable cases. Non-linearly separable cases. Neural network. From the perceptron to the one-layer perceptron. Multi-layer perceptron. Optimization.
  • №10
  • 2,68 MB
  • added
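For the perceptron topics in the entry above, here is a sketch of the classic perceptron learning rule on linearly separable toy data; a generic illustration, not code from the slides.

import numpy as np

def train_perceptron(X, y, n_epochs=20, lr=1.0):
    """Classic perceptron rule for labels in {-1, +1} on separable data."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:      # misclassified: nudge the hyperplane
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:                      # converged on separable data
            break
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_perceptron(X, y)
print(np.mean(np.sign(X @ w + b) == y))    # 1.0 on this separable toy set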
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, Dr. Zeyd Boukhers, 2019, 56 p. XOR problem. Two-layer perceptron. How the network learns. Gradient descent. Backpropagation. Activation functions. Sigmoid. Softmax. Linear. Mixture density. ReLU. Loss functions. Binary cross-entropy. Discrete cross-entropy. Gaussian cross-entropy....
  • №11
  • 3,32 MB
  • added
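The XOR and backpropagation topics in the entry above can be illustrated by training a two-layer network with sigmoid units and binary cross-entropy via plain gradient descent; the architecture, learning rate, and random seed below are illustrative choices, not taken from the slides.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR is not linearly separable, hence the hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output
lr = 1.0

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass (binary cross-entropy with a sigmoid output)
    d_out = (p - y) / len(X)                 # dL/dz at the output layer
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * h * (1 - h)     # chain rule through the hidden layer
    dW1, db1 = X.T @ d_hid, d_hid.sum(axis=0)
    # gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel(), 2))   # typically approaches [0, 1, 1, 0]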
The lecture, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany, Dr. Zeyd Boukhers, 2019, 42 p. Deep neural networks. Context-dependent classification. Markov chain. Hidden Markov model. Recognition. Decoding. Training. Viterbi algorithm.
  • №12
  • 1,62 MB
  • added
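For the HMM decoding topics in the entry above, here is a log-space Viterbi sketch on a common textbook toy HMM (rainy/sunny states, walk/shop/clean observations); the model numbers are the standard toy example, not taken from the slides.

import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely hidden state path for a discrete HMM (log-space Viterbi)."""
    T, N = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, N))           # best log-probability ending in each state
    psi = np.zeros((T, N), dtype=int)  # backpointers
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA     # N x N: from-state x to-state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    # backtrack from the best final state
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# toy HMM: states {0: rainy, 1: sunny}, observations {0: walk, 1: shop, 2: clean}
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
pi = np.array([0.6, 0.4])
print(viterbi([0, 1, 2], A, B, pi))   # [1, 0, 0]: sunny, rainy, rainy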
S
The lecture, University of Würzburg, Germany, Univ.-Prof. Dr. rer. nat. Ingo Scholtes, 2022, 60 p. Generative Models and Statistical Ensembles. G(n,m) random graph model. G(n,p) random graph model. Degree distribution of the random graph. Random graphs with given degrees. Generative models and likelihood. Statistical inference in networks.
  • №13
  • 4,41 MB
  • added
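The G(n,p) random-graph topics in the entry above can be sketched by sampling an adjacency matrix with independent edge coin flips and comparing the empirical mean degree with its expectation (n-1)p; a generic NumPy illustration, not the lecture's code.

import numpy as np

def gnp_degree_sequence(n, p, seed=0):
    """Sample an undirected G(n,p) graph and return its degree sequence."""
    rng = np.random.default_rng(seed)
    coins = rng.random((n, n)) < p          # independent coin flip per ordered pair
    adj = np.triu(coins, k=1)               # keep each pair once, no self-loops
    adj = adj | adj.T                       # symmetrise
    return adj.sum(axis=1)                  # node degrees

n, p = 1000, 0.01
degrees = gnp_degree_sequence(n, p)
print(degrees.mean(), (n - 1) * p)          # empirical vs expected degree (n-1)p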