Linear Regression vs Maximum Likelihood #machinelearning #statistics #datascience
📚 *RECOMMENDED BOOKS TO START WITH MACHINE LEARNING*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
If you're new to ML, here are the 3 best books I recommend (I've personally read all of these):
1. Hands-On Machine Learning – the go-to practical ML guide: https://amzn.to/3UcGqSS
2. Mathematics for Machine Learning – deep dive into the theoretical aspects of ML: https://amzn.to/3IZgHe7
3. Designing Machine Learning Systems – practical strategies for building scalable ML solutions: https://amzn.to/4ojEqFX
These are affiliate links, so buying through them helps support the channel at no extra cost to you — thanks 🙏
*Summary*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
In this video, we explore why the least squares method is closely related to the Gaussian distribution. In short, least squares is equivalent to maximum likelihood estimation when the errors (residuals) are assumed to follow a normal distribution centered on the regression line.
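The equivalence described above can be checked numerically: fitting a line by least squares and fitting it by maximizing the Gaussian log-likelihood give the same slope and intercept. A minimal sketch (not from the video; uses NumPy/SciPy and synthetic data):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: y = 2x + 1 plus Gaussian noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.5, size=200)

# (1) Ordinary least squares: minimize the sum of squared residuals
slope_ls, intercept_ls = np.polyfit(x, y, deg=1)

# (2) Maximum likelihood under the Gaussian assumption:
# maximize sum_i log N(y_i | a*x_i + b, sigma^2),
# i.e. minimize the negative log-likelihood
def neg_log_likelihood(params):
    a, b, log_sigma = params
    sigma = np.exp(log_sigma)          # parametrize so sigma stays positive
    resid = y - (a * x + b)
    return 0.5 * np.sum(resid ** 2) / sigma ** 2 + len(y) * np.log(sigma)

res = minimize(neg_log_likelihood, x0=[0.0, 0.0, 0.0])
slope_ml, intercept_ml = res.x[:2]

print(slope_ls, intercept_ls)   # least-squares estimates
print(slope_ml, intercept_ml)   # maximum-likelihood estimates agree
```

Note that the residual variance sigma only scales the objective: for any fixed sigma, minimizing the negative log-likelihood in (a, b) reduces to minimizing the sum of squared residuals, which is why the two fits coincide.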
*Related Videos*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Why We Don't Use the Mean Squared Error (MSE) Loss in Classification: https://youtu.be/bNwI3IUOKyg
Bessel's Correction: https://youtu.be/E3_408q1mjo
Gradient Boosting with Regression Trees Explained: https://youtu.be/lOwsMpdjxog
P-Values Explained: https://youtu.be/IZUfbRvsZ9w
Kabsch-Umeyama Algorithm: https://youtu.be/nCs_e6fP7Jo
Eigendecomposition Explained: https://youtu.be/ihUr2LbdYlE
*Follow Me*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
🐦 Twitter: @datamlistic https://twitter.com/datamlistic
📸 Instagram: @datamlistic https://www.instagram.com/datamlistic
📱 TikTok: @datamlistic https://www.tiktok.com/@datamlistic
*Channel Support*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The best way to support the channel is to share the content. ;)
If you'd also like to support the channel financially, donating the price of a coffee is always warmly welcome! (completely optional and voluntary)
► Patreon: https://www.patreon.com/datamlistic
► Bitcoin (BTC): 3C6Pkzyb5CjAUYrJxmpCaaNPVRgRVxxyTq
► Ethereum (ETH): 0x9Ac4eB94386C3e02b96599C05B7a8C71773c9281
► Cardano (ADA): addr1v95rfxlslfzkvd8sr3exkh7st4qmgj4ywf5zcaxgqgdyunsj5juw5
► Tether (USDT): 0xeC261d9b2EE4B6997a6a424067af165BAA4afE1a
*Video Information*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Views: 49.7K
Likes: 1.7K
Duration: 0:54
Published: Aug 6, 2024
Quality: HD
*Tags*
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#deep learning #machine learning #artificial intelligence #ml #dl #ai #data science #ds #ml tutorial #linear regression #gaussian distribution #normal distribution #least squares #least squares vs maximum likelihood #maximum likelihood #maximum a posteriori estimation