Hands-On Machine Learning with Scikit-Learn and TensorFlow

  Hands-On Machine Learning with Scikit-Learn and TensorFlow is a paperback by Aurélien Géron, published by O'Reilly Media, priced at USD 49.99, 450 pages. Below is a selection of readers' reviews gathered from around the web, which we hope you will find helpful.

  Selected short reviews of Hands-On Machine Learning with Scikit-Learn and TensorFlow:

  ●As the first book I read on applied machine learning, it initially didn't strike me as quite as good as everyone says. But after finishing it and moving on to other hands-on machine learning books, I realized this one really is the best. The author also keeps the book's companion code up to date, which is great.

  ●A great book! The deep learning chapters don't go very deep, though; some topics are only touched on. The book balances theory and practice, and the hands-on parts in particular gave me a fresh understanding of some machine learning theory.

  ●Quite good. The opening case study and the concept explanations are excellent, the later introduction to neural networks is solid, and the code makes a very useful reference. The one pity is that there are rather few worked examples; I would have liked more applications, say, walkthroughs of a hundred past Kaggle problems.

  ●Accessible yet thorough; an excellent first book on machine learning. Even with a weak background in programming and math, if you read the explanations carefully and do a bit of searching online, you can follow most of the content. Don't hesitate, just read this one.

  ●All substance, no filler; essential reading for today's engineers.

  ●It's already on safaribooksonline; I gave it a quick skim.

  ●Strongly recommended. The author is highly professional, knows the industry well, and is dedicated: the companion code and exercise answers are continuously updated. For example, the SELU paper came out in June, and his code repository included the algorithm that same month. The book greatly broadens a beginner's horizons, so that while learning the day-to-day craft you also stay aware of recent developments in academia. Arguably the best deep learning textbook of recent years for programmers just starting out.

  ●Brilliant! Theory and practice in one volume; an excellent fusion of know-how, know-what, and know-why. The author's expertise spans industry, academia, and research. The book neither wallows in formulas and parameter tuning like an academic text, nor loses itself in describing and calling APIs like an engineering manual. While introducing the models and libraries, it also explains the ideas behind them: you see what elegant work people did, and what beautiful solutions they proposed, to fix the problems of existing models, even for progress that might look negligible to an outsider. Balancing fitting (bias) against generalization (variance) perfectly can only be called art. "The human brain is a remarkable pattern-detection system, which means it is prone to overfitting." "A model is a simplified version of the observations. Simplifying means discarding superficial details that are unlikely to generalize. But to decide what data to discard and what data to keep, you must make assumptions. If you make no assumptions about the data, there is no reason to prefer one model over another. This is called the No Free Lunch (NFL) theorem."

  ●A genuinely good book that builds from simple to advanced, practical yet grounded in solid theory.

  ●A superb introduction! Even a complete beginner can follow it with ease. I hope the author keeps writing books.

  Hands-On Machine Learning with Scikit-Learn and TensorFlow, Review (1): A great introductory book for TensorFlow

  TensorFlow's official documentation is rather disorganized; this book arrived just in time to rescue those of us who want to get into TF but cannot get through the official docs. The writing is excellent and the examples are plentiful, which helps with engineering practice. The book does cover some theory, simply and vividly, but theory is not the focus of the book, nor should it be. It is very friendly to machine learning beginners; finish it and you are more or less up and running.

  Hands-On Machine Learning with Scikit-Learn and TensorFlow, Review (2): Very good; anyone can learn a lot from it

  Quite good. I recommend everyone working on ML pick it up; you are bound to learn a fair amount, especially if you are fairly new to the field.

  Its weakness is that the examples are still a bit thin; personally I would have preferred walkthroughs of past Kaggle competitions.

  A few of my favorite parts:

  1. Chapters 2-3 suit anyone who has just started with data science.

  Chapter 2's California housing (regional house prices) example is very practical; it teaches many best practices for writing code, analyzing data, and so on.
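  A minimal sketch of how that workflow starts, using scikit-learn's bundled copy of the dataset as a stand-in for the CSV the chapter actually downloads:

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

# Load the data (scikit-learn's bundled copy, standing in for the
# CSV the book's Chapter 2 downloads from its repository).
housing = fetch_california_housing(as_frame=True).frame

print(housing.head())      # first look at the attributes
print(housing.describe())  # summary statistics per attribute

# Set aside a test set right away -- no data snooping.
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
```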

  Chapter 3 introduces the basic concepts very well through MNIST (confusion matrix, ROC, etc.).
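  For instance, here is a minimal sketch of that confusion-matrix/ROC workflow, using scikit-learn's small built-in digits dataset as a stand-in for the full MNIST download:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import cross_val_predict

X, y = load_digits(return_X_y=True)
y_is_5 = (y == 5)  # binarize the task: "is this digit a 5?"

clf = SGDClassifier(random_state=42)

# Out-of-fold decision scores, so the evaluation is honest.
scores = cross_val_predict(clf, X, y_is_5, cv=3, method="decision_function")

print(confusion_matrix(y_is_5, scores > 0))   # rows: actual, cols: predicted
print("ROC AUC:", roc_auc_score(y_is_5, scores))
```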

  2. The basic concepts behind neural networks are nicely introduced:

  - CNN: visual classification (the architecture progressively shrinks and abstracts the input layer by layer, as the book's figures show)

  - RNN: predictions involving time (the architecture is built around change over time, as the book's figures show)

  - autoencoder: automatically generating content similar to the input

  - reinforcement learning: playing games

  Having read the accompanying code, none of this feels as unfathomable as it used to....
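  For example, the CNN idea fits in a few lines. This sketch uses today's tf.keras API as a stand-in; the first edition's code is written in lower-level TensorFlow:

```python
import tensorflow as tf

# Shrink spatially while abstracting layer by layer -- the CNN idea in brief.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g., 10 digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```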

  3. The code is worth working through

  https://github.com/ageron/handson-ml

  Things that seem complicated in the book often turn out to be quite concise in code.

  The code is also continually updated, and some of the book's errata are addressed there.

  4. Several complete examples (with code) are well worth studying:

  - California Housing, Chapter 2

  - Titanic (Kaggle), Chapter 3

  - Spam Filter, Chapter 3

  - MNIST, Chapters 3/13, including a deep learning version

  - Pacman, Chapter 16

  The book's main shortcoming: it is still light on hands-on examples.

  So I also recommend this collection of past Kaggle solutions: http://ndres.me/kaggle-past-solutions/

  Once you are comfortable with basic ML and with writing the code, pick problems that interest you and study how others analyzed the data and solved them; that helps a lot too.

  A closing thought: I don't yet know the industry applications of NNs and ML well, but the data-handling side seems very general; reaching a solid average level already seems respectable (at least for quantitative prediction/regression).

  Hands-On Machine Learning with Scikit-Learn and TensorFlow, Review (3): The most valuable part of the book is Appendix B

  This book is in a different league from the ones copied together out of some package's API tutorial.

  Personally I think the most valuable part of the book is Appendix B, the Machine Learning Project Checklist. For building a machine learning solution in industry, walking through this checklist and asking yourself each question is basically all you need, and when you have to give a presentation, organizing it along this structure works very well too.

  So I am excerpting it here:

Machine Learning Project Checklist

  1. Frame the problem and look at the big picture.
  2. Get the data.
  3. Explore the data to gain insights.
  4. Prepare the data to better expose the underlying data patterns to Machine Learning algorithms.
  5. Explore many different models and short-list the best ones.
  6. Fine-tune your models and combine them into a great solution.
  7. Present your solution.
  8. Launch, monitor, and maintain your system.

Frame the Problem and Look at the Big Picture

  1. Define the objective in business terms.
  2. How will your solution be used?
  3. What are the current solutions/workarounds (if any)?
  4. How should you frame this problem (supervised/unsupervised, online/offline, etc.)?
  5. How should performance be measured?
  6. Is the performance measure aligned with the business objective?
  7. What would be the minimum performance needed to reach the business objective?
  8. What are comparable problems? Can you reuse experience or tools?
  9. Is human expertise available?
  10. How would you solve the problem manually?
  11. List the assumptions you (or others) have made so far.
  12. Verify assumptions if possible.

Get the Data

  Note: automate as much as possible so you can easily get fresh data.

  1. List the data you need and how much you need.
  2. Find and document where you can get that data.
  3. Check how much space it will take.
  4. Check legal obligations, and get authorization if necessary.
  5. Get access authorizations.
  6. Create a workspace (with enough storage space).
  7. Get the data.
  8. Convert the data to a format you can easily manipulate (without changing the data itself).
  9. Ensure sensitive information is deleted or protected (e.g., anonymized).
  10. Check the size and type of data (time series, sample, geographical, etc.).
  11. Sample a test set, put it aside, and never look at it (no data snooping!).
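  As a rough illustration of the last step (not from the book), a minimal sketch in scikit-learn terms; the DataFrame here is a placeholder for whatever data the earlier steps fetched:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Placeholder for the raw data fetched in the earlier steps.
df = pd.DataFrame({"feature": range(100), "target": [0, 1] * 50})

# Sample a test set once, put it aside, and never look at it.
train_set, test_set = train_test_split(df, test_size=0.2, random_state=42)
```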

Explore the Data

  Note: try to get insights from a field expert for these steps.

  1. Create a copy of the data for exploration (sampling it down to a manageable size if necessary).
  2. Create a Jupyter notebook to keep a record of your data exploration.
  3. Study each attribute and its characteristics:
     • Name
     • Type (categorical, int/float, bounded/unbounded, text, structured, etc.)
     • % of missing values
     • Noisiness and type of noise (stochastic, outliers, rounding errors, etc.)
     • Possibly useful for the task?
     • Type of distribution (Gaussian, uniform, logarithmic, etc.)
  4. For supervised learning tasks, identify the target attribute(s).
  5. Visualize the data.
  6. Study the correlations between attributes.
  7. Study how you would solve the problem manually.
  8. Identify the promising transformations you may want to apply.
  9. Identify extra data that would be useful.
  10. Document what you have learned.
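  A few pandas one-liners cover several of these steps. A minimal sketch, with a tiny placeholder frame standing in for a (sampled) copy of the training data:

```python
import pandas as pd

# Placeholder exploration copy; in practice, a sampled copy of the training set.
explore = pd.DataFrame({"feature": [1.0, 2.0, None, 4.0],
                        "target":  [0, 1, 0, 1]})

explore.info()                 # attribute names, types, non-null counts
print(explore.isna().mean())   # fraction of missing values per attribute
print(explore.describe())      # rough distribution of each attribute
print(explore.corr())          # pairwise correlations between attributes
```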

Prepare the Data

  Notes:
  • Work on copies of the data (keep the original dataset intact).
  • Write functions for all data transformations you apply, for five reasons:
    - So you can easily prepare the data the next time you get a fresh dataset
    - So you can apply these transformations in future projects
    - To clean and prepare the test set
    - To clean and prepare new data instances once your solution is live
    - To make it easy to treat your preparation choices as hyperparameters

  1. Data cleaning:
     • Fix or remove outliers (optional).
     • Fill in missing values (e.g., with zero, mean, median…) or drop their rows (or columns).
  2. Feature selection (optional):
     • Drop the attributes that provide no useful information for the task.
  3. Feature engineering, where appropriate:
     • Discretize continuous features.
     • Decompose features (e.g., categorical, date/time, etc.).
     • Add promising transformations of features (e.g., log(x), sqrt(x), x^2, etc.).
     • Aggregate features into promising new features.
  4. Feature scaling: standardize or normalize features.
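  In scikit-learn, "write functions for all data transformations" usually means composing them into a Pipeline, so the identical preparation can be replayed on the test set and on live data. A minimal sketch; the column names are placeholders:

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Numerical columns: fill missing values with the median, then scale.
num_pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
])

# Placeholder column names; substitute the real attributes.
prepare = ColumnTransformer([
    ("num", num_pipeline, ["num_feature_1", "num_feature_2"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["cat_feature"]),
])
```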

Short-List Promising Models

  Notes:
  • If the data is huge, you may want to sample smaller training sets so you can train many different models in a reasonable time (be aware that this penalizes complex models such as large neural nets or Random Forests).
  • Once again, try to automate these steps as much as possible.

  1. Train many quick and dirty models from different categories (e.g., linear, naive Bayes, SVM, Random Forests, neural net, etc.) using standard parameters.
  2. Measure and compare their performance.
     • For each model, use N-fold cross-validation and compute the mean and standard deviation of the performance measure on the N folds.
  3. Analyze the most significant variables for each algorithm.
  4. Analyze the types of errors the models make.
     • What data would a human have used to avoid these errors?
  5. Have a quick round of feature selection and engineering.
  6. Have one or two more quick iterations of the five previous steps.
  7. Short-list the top three to five most promising models, preferring models that make different types of errors.
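  A minimal sketch of the first two steps in scikit-learn, using a small built-in dataset as a stand-in:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

# Quick-and-dirty models from different families, standard parameters.
for model in [LogisticRegression(max_iter=1000), GaussianNB(), SVC(),
              RandomForestClassifier(random_state=42)]:
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{type(model).__name__}: {scores.mean():.3f} +/- {scores.std():.3f}")
```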

Fine-Tune the System

  Notes:
  • You will want to use as much data as possible for this step, especially as you move toward the end of fine-tuning.
  • As always automate what you can.

  1. Fine-tune the hyperparameters using cross-validation.
     • Treat your data transformation choices as hyperparameters, especially when you are not sure about them (e.g., should I replace missing values with zero or with the median value? Or just drop the rows?).
     • Unless there are very few hyperparameter values to explore, prefer random search over grid search. If training is very long, you may prefer a Bayesian optimization approach (e.g., using Gaussian process priors).
  2. Try Ensemble methods. Combining your best models will often perform better than running them individually.
  3. Once you are confident about your final model, measure its performance on the test set to estimate the generalization error.
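  A minimal sketch of the first step in scikit-learn: random search over a pipeline whose imputation strategy is itself a hyperparameter (the dataset is a stand-in):

```python
from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

pipe = Pipeline([
    ("impute", SimpleImputer()),
    ("model", RandomForestClassifier(random_state=42)),
])

param_distributions = {
    "impute__strategy": ["mean", "median"],  # a data-prep choice as a hyperparameter
    "model__n_estimators": randint(50, 300),
    "model__max_depth": randint(3, 20),
}

search = RandomizedSearchCV(pipe, param_distributions, n_iter=20, cv=5,
                            random_state=42)
search.fit(X, y)
print(search.best_params_)
```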

Present Your Solution

  1. Document what you have done.
  2. Create a nice presentation.
     • Make sure you highlight the big picture first.
  3. Explain why your solution achieves the business objective.
  4. Don't forget to present interesting points you noticed along the way.
     • Describe what worked and what did not.
     • List your assumptions and your system's limitations.
  5. Ensure your key findings are communicated through beautiful visualizations or easy-to-remember statements (e.g., "the median income is the number-one predictor of housing prices").

Launch!

  1. Get your solution ready for production (plug into production data inputs, write unit tests, etc.).
  2. Write monitoring code to check your system's live performance at regular intervals and trigger alerts when it drops.
     • Beware of slow degradation too: models tend to "rot" as data evolves.
     • Measuring performance may require a human pipeline (e.g., via a crowdsourcing service).
     • Also monitor your inputs' quality (e.g., a malfunctioning sensor sending random values, or another team's output becoming stale). This is particularly important for online learning systems.
  3. Retrain your models on a regular basis on fresh data (automate as much as possible).
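  What the monitoring step might look like in its barest form; a hypothetical sketch, with illustrative thresholds and no real alerting backend:

```python
def check_live_performance(recent_scores, baseline, tolerance=0.05):
    """Alert when recent mean performance drops below baseline - tolerance."""
    mean_recent = sum(recent_scores) / len(recent_scores)
    if mean_recent < baseline - tolerance:
        # In production, page someone or post to an alerting channel instead.
        print(f"ALERT: live performance {mean_recent:.3f} "
              f"fell below baseline {baseline:.3f}")
    return mean_recent

# Hypothetical scores collected from the live system over the last few days.
check_live_performance([0.81, 0.78, 0.75], baseline=0.85)
```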
