Users’ Choice Survey Results: Books For Learning Java


The market is flooded with books on Java. All are different, few have ever been compared. The better ones combine the knowledge base of a world-class programmer with the compassion of a caring educator — and hopefully, a sense of humor. Which of them best fits this description is a choice, we believe, only users should make. So we let you vote for your favorite teaching guide for the Java language. Here’s what you had to say.

In a tight race, three contestants emerged on top. Thinking in Java by Bruce Eckel (Prentice Hall) ranked number one among tutorials you’d recommend to your peers. It was followed closely by Java in a Nutshell: A Desktop Quick Reference by David Flanagan (O’Reilly) and Core Java 1.1: Vols. 1 and 2 by Cay Horstmann and Gary Cornell (Prentice Hall), copping place and show honors, respectively.

Trailing well behind the leaders but still grabbing sizable tallies were Teach Yourself Java in 21 Days by Laura Lemay and Charles L. Perkins, The Java Programming Language by James Gosling and Ken Arnold (Addison Wesley), The Java Tutorial: Object-Oriented Programming for the Internet by Mary Campione and Kathy Walrath (Addison Wesley), Exploring Java by Pat Niemeyer and Josh Peck (O’Reilly), and Just Java by Peter van der Linden (Sun Microsystems Press).

Of the survey winner, users offered the following:

“Thinking in Java is definitely a winner. Concentrating on the language first with humour and lots of good examples. It’s the book I read from page 1 and follow chapter after chapter.”

“Of the many books I have read, the only one which made me feel that I have more to know in Java is undoubtedly Thinking in Java. He has analysed very interesting issues, particularly in the chapters on Polymorphism, RTTI and Passing Objects.”

“Thinking in Java provides a thorough treatment of what goes on behind the scenes in Java. This is missing from all the other books I have read. Especially the coverage of initialization, garbage collection, reflection, inner classes and polymorphism.”

Others contrasted Eckel’s work with that of the other leading authors.

“Eckel’s book is great, Campione and Walrath’s book is also very good. The Core Java book is good, after you’ve read the first two.”

“Thinking in Java is very well-written. Java in a Nutshell is a ready resource. And although Teach Yourself Java in 21 Days is not the world’s greatest Java book, it has an excellent structure and general approach.”

“In terms of clarity, Eckel’s is the best. I like the Java 1.1 Interactive Course because of the value-added service of answering questions. Van der Linden’s is the third choice; it maintains the usual high standard of the Sun Java series.”

Praise was also heaped on several other candidates.

“Java in a Nutshell is a very valuable book, not only as a desktop reference, but also as a very effective learning means. It provides robust basis for all developers and programmers.”

“Teach Yourself Java is great. Lemay is a very good teacher.”

“Exploring Java by Niemeyer and Peck (O’Reilly): A great book with clear explanations, unlike several other Java books which omit important details. Highly recommended from someone with no vested interest.”

“The Horton book [Beginning Java] is excellent. All major aspects are covered in special detail.”

“Just Java and Beyond (Third Edition) had everything I needed to get started with Java. It discussed applets and applications, which I couldn’t find in any other book in the store!”

For the record, here’s the official order of finish using our 3-point weighting system:

Thinking in Java (Eckel), Prentice Hall

Java in a Nutshell: A Desktop Quick Reference (Flanagan), O’Reilly

Core Java 1.1: Vols. 1 and 2 (Horstmann and Cornell), Prentice Hall

Teach Yourself Java in 21 Days (Lemay and Perkins)

The Java Tutorial: Object-Oriented Programming for the Internet (Campione and Walrath), Addison Wesley

The Java Programming Language (Gosling and Arnold), Addison Wesley

Just Java (van der Linden), Sun Microsystems Press (tie)

Exploring Java (Niemeyer and Peck), O’Reilly (tie)

Beginning Java (Horton), Wrox Press

Mastering Java 1.1 (Vanhelsuwe), Sybex

A Complete Java Training Course: The Ultimate Cyber Classroom (Deitel and Deitel), Prentice Hall

Using Java, Special Edition (Newman et al.), Que

Hooked on Java (van Hoff et al.), Addison-Wesley (tie)

Java 1.1 Interactive Course (Lemay), Waite Group Press (tie)

Web Programming with Java (Girdley et al.)

Java How-To (Siddalingaiah and Lockwood), Waite Group Press

Learn Java Now (Davis), Microsoft Press

Foundations of Java Programming for the World Wide Web (Walsh), IDG Books (tie)

Presenting Java (December) (tie)

The leading write-in offering was Java Examples in a Nutshell by Flanagan (O’Reilly).

We congratulate our survey winner and thank our users for their participation.


12 Best Plugins For Genesis Theme Users

Genesis is a popular theme framework widely used by the WordPress community. The Genesis framework and its child themes are built from the ground up to be clean, customizable, secure, and SEO-optimized. Though Genesis looks bare-bones with its fast and secure approach, there are many great plugins to enhance and customize its themes. Here is a short list of the best Genesis plugins to customize and pimp your Genesis theme.

1. Genesis eNews Extended

Genesis eNews Extended is one of the first plugins that I always install after configuring the Genesis theme. The eNews extended plugin provides a simple way to add mailing lists to your Genesis theme. The plugin supports a variety of mailing services like MailChimp, Feedburner, Aweber, MailPoet, etc.

2. Genesis Simple Edits

Whenever you install a new Genesis theme, you might want to customize the post info, post meta, and the footer area text to suit your site. To modify these areas, you generally need to edit or add code to your theme’s functions file. If you want to make things easier, then install the Genesis Simple Edits plugin, as it will provide you with a simple GUI interface to edit those three parts of your Genesis theme.

3. Genesis Layout Extras

Genesis Layout Extras is a simple plugin that lets you change or add different layouts to different parts of your website like the homepage, archive page, author page, single posts or pages, 404 page, etc. A must-have plugin if you want to quickly customize the default layout options in Genesis.

4. Genesis Title Toggle

5. Genesis Simple Share

With the Genesis Simple Share plugin, you can add simple and beautiful social share icons to all your posts and pages. The plugin works great out of the box, but if you want to you can easily customize the social icons by just dragging and dropping.

6. Genesis Simple Hooks

Genesis Simple Hooks may not look like much or may even confuse the beginner, but the plugin packs quite a punch. Using this simple plugin, you can add or insert PHP, Shortcodes, and HTML and also attach it to any of the available hooks throughout the Genesis Theme Framework.

7. Genesis Responsive Slider

Genesis Responsive Slider is nothing fancy but provides a simple slider on your homepage with featured image, title, and excerpts. As you can tell from the name, the slider is responsive which makes it fit for any screen size.

8. Genesis Simple Sidebars

Genesis Simple Sidebars allows you to create multiple and dynamic sidebar widget areas. These widget areas can be assigned to any available sidebar location and you can make them appear based on a post or page. It’s really helpful when you want to assign specific widgets to specific pages or posts.

9. Genesis Grid

Genesis Grid allows you to easily customize how posts appear in the grid format. Using this plugin you can specify the number of featured and teaser posts, image sizes, post excerpt length, etc. Moreover, you can easily enable or disable the grid loop on specific pages like the homepage, archives, author pages, etc.

10. Genesis Connect for WooCommerce

If you are using Genesis with WooCommerce, then you might want to consider installing Genesis Connect for WooCommerce. This simple plugin replaces the default or built-in shop templates with custom Genesis-ready templates.

11. Genesis Visual Hook Guide

The Genesis Visual Hook Guide plugin may not be for everyone, but it is a handy plugin to visually see all the Genesis hooks and their locations inside your child theme. This plugin goes hand in hand with the Genesis Simple Hooks plugin. Of course, you can also use the web version of the visual hook guide.

12. Genesis Design Palette Pro

Vamsi Krishna

Vamsi is a tech and WordPress geek who enjoys writing how-to guides and messing with his computer and software in general. When not writing for MTE, he shares tips, tricks, and lifehacks on his own blog, Stugon.


Books To Buy For Your Smart Friend

We may earn revenue from the products available on this page and participate in affiliate programs.

Need a last-minute gift for your favorite smartypants? These science and sci-fi books should do the trick.

The Confidence Game

Part science and part self-help, Maria Konnikova’s latest book uses the psychological profiles of real-life con artists to help explain why even the most rational humans fall prey to falsehoods and scams. “For our minds are built for stories. We crave them, and, when there aren’t ready ones available, we create them. Stories about our origins. Our purpose. The reasons the world is the way it is. Human beings don’t like to exist in a state of uncertainty or ambiguity. When something doesn’t make sense, we want to supply the missing link. When we don’t understand what or why or how something happened, we want to find the explanation. A confidence artist is only too happy to comply — and the well-crafted narrative is his absolute forte.” From $17 on Amazon.

Seven Brief Lessons on Physics

Just because he’s the smartest guy you know doesn’t mean he knows a thing about physics. This best-seller will give your friend a whirlwind introduction to the basics. From $11 on Amazon.

Oliver Sacks

Oliver Sacks was a brilliant neurologist. But he also wrote some of the best science nonfiction ever published. For those unfamiliar with his poignant body of work, you can’t go wrong with the classic The Man Who Mistook His Wife For A Hat ($10 on Amazon). If your recipient is a seasoned Sacks fan, they’re bound to appreciate the late author’s recent autobiography. $10 on Amazon.

Black Hole Blues

What’s up with this whole gravitational waves business? Find out more about the science—and the people behind it. $16 on Amazon.

The Handmaid’s Tale

It’s more than 30 years old, but Margaret Atwood’s seminal work of speculative fiction couldn’t be more relevant today. (It’s also about to be turned into a Hulu series, so it’s probably on your smart friend’s re-read list.) From $9 on Amazon.

This underrated space opera is guaranteed to make the recipient think deep, satisfying thoughts about language, love, society, and what it means to be human. It’s a must-read for anyone who raved about the movie Arrival. $14 on Amazon.

Hidden Figures

Get this one under the tree just in time to beat the movie: Hidden Figures, soon to hit the big screen, tells the story of NASA’s first “computers”— the women of color who helped put men on the moon. Whether your friend is a math geek, a space nerd, or a history buff, this one is definitely worth a read. From $11 on Amazon.

All The Birds In The Sky

The former editor-in-chief of sci-fi site io9 made waves with her first foray into fiction this year. The story of witches and tech geniuses working together to save the world made pretty much every best-of list there is, so you can’t go wrong. $18 on Amazon.

I Contain Multitudes

Ed Yong is quite simply one of the best science writers out there, so it’s no surprise that his first book is a delight. With everyone buzzing about the microbiome these days, your smartypants friend will want to dive into this read on the microbial world. The New York Times called it “infectiously enthusiastic”. Get it? $18 on Amazon.


Grunt

If your friend loves to read about science, they probably already love Mary Roach. Any book of hers is an easy recommendation, and her titles explore everything from the “science” of the supernatural to the history of research on sex. Grunt, her latest, covers the science of warfare. $17 on Amazon.

Automated Machine Learning For Supervised Learning (Part 1)

This article was published as a part of the Data Science Blogathon.

This article aims to demonstrate automated Machine Learning, also referred to as AutoML. Specifically, AutoML will be applied to problem statements requiring supervised learning, such as regression and classification on tabular data. This article does not discuss other kinds of Machine Learning problems, such as clustering, dimensionality reduction, time series forecasting, Natural Language Processing, recommender systems, or image analysis.

Understanding the problem statement and dataset

Before jumping to the AutoML, we will cover the basic knowledge of conventional Machine Learning workflow. After getting the dataset and understanding the problem statement, we need to identify the goal of the task. This article, as mentioned above, focuses on regression and classification tasks. So, make sure that the dataset is tabular. Other data formats, such as time series, spatial, image, or text, are not the main focus here.

Next, explore the dataset to understand some basic information, such as:

Descriptive statistics (count, mean, standard deviation, minimum, maximum, and quartile) using .describe();

Data type of each feature using .info() or .dtypes;

Count of values using .value_counts();

Null value existence using .isnull().sum();

Correlation test using .corr();
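As a sketch, the exploration steps listed above might look like this in pandas, using a small hypothetical DataFrame in place of the real dataset:

```python
import pandas as pd

# A tiny hypothetical dataset standing in for the real tabular data
df = pd.DataFrame({
    "age": [22.0, 38.0, 26.0, None, 35.0],
    "fare": [7.25, 71.28, 7.92, 8.05, 53.10],
    "survived": [0, 1, 1, 1, 0],
})

print(df.describe())                   # descriptive statistics
print(df.dtypes)                       # data type of each feature
print(df["survived"].value_counts())   # count of values
print(df.isnull().sum())               # null values per column
print(df.corr())                       # pairwise correlations
```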




After understanding the dataset, do the data pre-processing. This part is very important in that it produces the training dataset used for Machine Learning fitting. Data pre-processing can start with handling missing data. Users should decide whether to remove observations with missing data or apply data imputation, which means filling the missing value with the average, median, a constant, or the most frequent value. Users should also watch for outliers or bad data and remove them so that they do not add noise.
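Both options for handling missing data can be sketched in pandas as follows (the column name here is hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"age": [22.0, None, 26.0, 35.0]})

# Option 1: remove observations with missing data
dropped = df.dropna()

# Option 2: impute the missing value with the median
imputed = df.fillna(df["age"].median())

print(dropped.shape, imputed["age"].tolist())
```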

Feature scaling is a very important step in data pre-processing. It aims to bring the value ranges of the features onto a comparable scale so that features with larger numeric ranges do not dominate features with smaller ranges. Some examples of feature scaling are standardization, normalization, log normalization, etc.

Feature scaling is suitable for gradient descent-based and distance-based Machine Learning algorithms. Tree-based algorithms do not need feature scaling. The following table shows examples of algorithms.

Table 1. Examples of algorithms

Gradient descent-based: Linear Regression, Ridge Regression, Lasso Regression, Elastic-net Regression, Neural Network (Deep Learning)

Distance-based: K Nearest Neighbors, Support Vector Machine, K-means, Hierarchical clustering

Tree-based: Decision Tree, Random Forest, Gradient Boosting Machine, Light GBM, Extreme Gradient Boosting

Notice that there are also clustering algorithms in the table. K-means and hierarchical clustering are unsupervised learning algorithms.
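To make the scaling idea concrete, here is a minimal standardization sketch with scikit-learn on made-up numbers; after scaling, each column has zero mean and unit variance, so the larger-ranged second feature no longer dominates:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features with very different ranges (made-up values)
X = np.array([[1.0, 100.0],
              [2.0, 300.0],
              [3.0, 500.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)  # standardization: (x - mean) / std

print(X_scaled.mean(axis=0), X_scaled.std(axis=0))
```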

Feature engineering covers generation, selection, and extraction: creating new features (expected to help the prediction), removing low-importance or noisy features, and deriving new features from partial information of combined existing features, respectively. This part is very important, because adding or removing features can improve model accuracy, and cutting the number of features can also reduce the running time.
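A toy example of feature generation and selection (the column names are hypothetical illustrations, not from any real dataset):

```python
import pandas as pd

df = pd.DataFrame({"length": [2.0, 3.0, 5.0],
                   "width": [1.0, 4.0, 2.0],
                   "noise": [9, 9, 9]})

# Feature generation: derive a new feature from existing ones
df["area"] = df["length"] * df["width"]

# Feature selection: drop a feature believed to carry no signal
df = df.drop(columns=["noise"])

print(df.columns.tolist())
```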

Creating model, hyperparameter-tuning, and model evaluation

The main part of Machine Learning is choosing an algorithm and building a model with it. The algorithm needs the training dataset features, a target or label feature, and some hyperparameters as arguments. After the model is built, it is used to predict the validation or test dataset to check the score. To improve the score, hyperparameter-tuning is performed: the hyperparameters of a Machine Learning algorithm are changed repeatedly until a satisfactory model, with its set of hyperparameters, is obtained. The model is evaluated using scoring metrics such as Root Mean Squared Error, Mean Squared Error, or R2 for regression problems, and accuracy, Area Under the ROC Curve, or F1-score for classification problems. The model score is evaluated using cross-validation. To read more about hyperparameter-tuning, please see this article.
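A minimal hyperparameter-tuning sketch using scikit-learn's GridSearchCV, with the built-in iris data as a stand-in dataset and arbitrary illustrative grid values:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Try every hyperparameter combination with 3-fold cross-validation
param_grid = {"n_estimators": [10, 50], "max_depth": [2, 5]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=3, scoring="accuracy")
search.fit(X, y)

print(search.best_params_, search.best_score_)
```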

After getting the optimum model with a set of hyperparameters, we may want to try other Machine Learning algorithms, along with hyperparameter-tuning. There are many algorithms for regression and classification problems, each with its pros and cons, and different datasets call for different algorithms to build the best prediction models. I have made notebooks containing a number of commonly used Machine Learning algorithms following the steps mentioned above. Please check them here:

The datasets are provided by Kaggle. The regression task is to predict house prices using the parameters of the houses. The notebook contains the algorithms: Linear Regression, Ridge Regression, Lasso Regression, Elastic-net Regression, K Nearest Neighbors, Support Vector Machine, Decision Tree, Random Forest, Gradient Boosting Machine (GBM), Light GBM, Extreme Gradient Boosting (XGBoost), and Neural Network (Deep Learning).

The binary classification task is to predict whether the Titanic passengers would survive or not. This is a newer dataset published just this April 2023 (not the old Titanic dataset for Kaggle newcomers). The goal is to classify each observation into the class “survived” or “not survived” without probability. If there are more than 2 classes, it is called multi-class classification; however, the techniques are similar. The notebook contains the algorithms: Logistic Regression, Naive Bayes, K Nearest Neighbors, Support Vector Machine, Decision Tree, Random Forest, Gradient Boosting Machine, Light GBM, Extreme Gradient Boosting, and Neural Network (Deep Learning). Notice that some algorithms can perform both regression and classification work.

Another notebook I created predicts binary classification with probability. It predicts whether each observation of location, date, and time was in high traffic or not, with a probability. If the probability of being high traffic is, for example, 0.8, the probability of not being high traffic is 0.2. There is also multi-label classification, which predicts the probabilities of more than two classes.

If you have seen my notebooks from the hyperlinks above, there are many algorithms used to build prediction models for the same dataset. But which model should be used, since the models predict different outputs? The simplest way is just picking the model with the best score (lowest RMSE or highest accuracy). Alternatively, we can apply ensemble methods, which use multiple different Machine Learning algorithms to predict the same dataset. The final output is determined by averaging the predicted outputs in regression, or by majority voting in classification. Actually, Random Forest, GBM, and XGBoost are also ensemble methods, but they develop the same type of base learner, a Decision Tree, from different subsets of the training data.
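A sketch of majority-voting ensembling with scikit-learn's VotingClassifier, again on the iris data as a stand-in:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Combine two different algorithms; 'hard' voting = majority vote
ensemble = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=1000)),
    ("dt", DecisionTreeClassifier(random_state=0)),
], voting="hard")

scores = cross_val_score(ensemble, X, y, cv=3)
print(scores.mean())
```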

Finally, we can save the model if it is satisfying. The saved model can be loaded again in other notebooks to do the same prediction.
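Saving and reloading a fitted model can be sketched with Python's pickle module (joblib is a common alternative; the filename here is just an example):

```python
import pickle
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Save the fitted model to disk...
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# ...and load it back, e.g. in another notebook
with open("model.pkl", "rb") as f:
    loaded = pickle.load(f)
```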

Fig. 1 Machine Learning Workflow. Source: created by the author


Automated Machine Learning

The process of building Machine Learning models and choosing the best one is very long. It takes many lines of code and much time to complete. However, Data Science and Machine Learning are associated with automation, and so we have automated Machine Learning, or AutoML. AutoML needs only a few lines of code to do most of the steps above, but not all of them. Figure 1 shows the workflow of Machine Learning. AutoML covers only data pre-processing, choosing a model, and hyperparameter-tuning. The users still have to understand the goals, explore the dataset, and prepare the data.

There are many AutoML packages for regression and classification tasks on structured tabular data, as well as image, text, and other predictions. Below is the code for one of the AutoML packages, named Auto-Sklearn. The dataset is Titanic Survival, the same as in the previous notebooks. Auto-Sklearn was developed by Matthias Feurer et al. and described in the paper “Efficient and Robust Automated Machine Learning” (2015). Auto-Sklearn is available as an open-source Python package. Yes, Sklearn, or Scikit-learn, is the common package for performing Machine Learning in the Python language, and almost all of the algorithms in the notebooks above come from Sklearn.

```python
# Install and import packages
!apt install -y build-essential swig curl
!pip install auto-sklearn

from autosklearn.classification import AutoSklearnClassifier
from sklearn.metrics import accuracy_score

# Create the AutoSklearnClassifier
sklearn = AutoSklearnClassifier(time_left_for_this_task=3*60,
                                per_run_time_limit=15,
                                n_jobs=-1)

# Fit the training data
sklearn.fit(X_train, y_train)

# Sprint statistics
print(sklearn.sprint_statistics())

# Predict the validation data
pred_sklearn = sklearn.predict(X_val)

# Compute the accuracy
print('Accuracy: ' + str(accuracy_score(y_val, pred_sklearn)))
```


```
Dataset name: da588f6e-c217-11eb-802c-0242ac130202
Metric: accuracy
Best validation score: 0.769936
Number of target algorithm runs: 26
Number of successful target algorithm runs: 7
Number of crashed target algorithm runs: 0
Number of target algorithms that exceeded the time limit: 19
Number of target algorithms that exceeded the memory limit: 0
Accuracy: 0.7710593242331447
```

```python
# Prediction results
print('Confusion Matrix')
print(pd.DataFrame(confusion_matrix(y_val, pred_sklearn)))
print(classification_report(y_val, pred_sklearn))
```


```
Confusion Matrix
         0     1
0     8804  2215
1     2196  6052

              precision    recall  f1-score   support
           0       0.80      0.80      0.80     11019
           1       0.73      0.73      0.73      8248
    accuracy                           0.77     19267
   macro avg       0.77      0.77      0.77     19267
weighted avg       0.77      0.77      0.77     19267
```

The code is set to run for 3 minutes in total, with no single algorithm running for more than 15 seconds (the per_run_time_limit). See, with only a few lines we can create a classification model automatically. We do not even need to think about which algorithm to use or which hyperparameters to set; even a beginner in Machine Learning can do it right away and just get the final result. The code above ran 26 algorithms, but only 7 of them completed; the other 19 exceeded the set time limit. It achieved an accuracy of 0.771. To see how the selected model was found, run this line:


The following code is also Auto-Sklearn, but for regression work. It develops an AutoML model to predict the House Prices dataset. It finds a model with an RMSE of 28,130 from 16 successful algorithms out of 36 in total.

```python
# Install and import packages
!apt install -y build-essential swig curl
!pip install auto-sklearn

from autosklearn.regression import AutoSklearnRegressor
from sklearn.metrics import mean_squared_error as MSE

# Create the AutoSklearnRegressor
sklearn = AutoSklearnRegressor(time_left_for_this_task=3*60,
                               per_run_time_limit=30,
                               n_jobs=-1)

# Fit the training data
sklearn.fit(X_train, y_train)

# Sprint statistics
print(sklearn.sprint_statistics())

# Predict the validation data
pred_sklearn = sklearn.predict(X_val)

# Compute the RMSE
rmse_sklearn = MSE(y_val, pred_sklearn)**0.5
print('RMSE: ' + str(rmse_sklearn))
```


```
Dataset name: 71040d02-c21a-11eb-803f-0242ac130202
Metric: r2
Best validation score: 0.888788
Number of target algorithm runs: 36
Number of successful target algorithm runs: 16
Number of crashed target algorithm runs: 1
Number of target algorithms that exceeded the time limit: 15
Number of target algorithms that exceeded the memory limit: 4
RMSE: 28130.17557050461
```

```python
# Scatter plot of true vs. predicted values
plt.scatter(pred_sklearn, y_val, alpha=0.2)
plt.xlabel('predicted')
plt.ylabel('true value')
plt.text(100000, 400000, 'RMSE: ' + str(round(rmse_sklearn)))
plt.text(100000, 350000, 'MAE: ' + str(round(mean_absolute_error(y_val, pred_sklearn))))
plt.text(100000, 300000, 'R: ' + str(round(np.corrcoef(pred_sklearn, y_val)[0,1], 2)))
```



Fig. 2 Scatter plot from autoSklearnRegressor. Source: created by the author

So, do you think that Machine Learning Scientists/Engineers are still needed?

There are still other AutoML packages to discuss, like Hyperopt-Sklearn, Tree-based Pipeline Optimization Tool (TPOT), AutoKeras, MLJAR, and so on. But we will discuss them in part 2.

About Author

Connect with me here.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.


Google Colab For Machine Learning And Deep Learning

“Memory Error” – that all too familiar dreaded message in Jupyter notebooks when we try to execute a machine learning or deep learning algorithm on a large dataset. Most of us do not have access to unlimited computational power on our machines. And let’s face it, it costs an arm and a leg to get a decent GPU from existing cloud providers. So how do we build large deep learning models without burning a hole in our pockets? Step up – Google Colab!

It’s an incredible online browser-based platform that allows us to train our models on machines for free! Sounds too good to be true, but thanks to Google, we can now work with large datasets, build complex models, and even share our work seamlessly with others. That’s the power of Google Colab.

What is Google Colab?

Google Colaboratory is a free online cloud-based Jupyter notebook environment that allows us to train our machine learning and deep learning models on CPUs, GPUs, and TPUs.

Here’s what I truly love about Colab. It does not matter which computer you have, what its configuration is, or how ancient it might be. You can still use Google Colab! All you need is a Google account and a web browser. And here’s the cherry on top – you get access to GPUs like the Tesla K80 and even a TPU, for free!

TPUs are much more expensive than GPUs, and you can use them for free on Colab. It’s worth repeating again and again – it’s an offering like no other.

Are you still using that same old Jupyter notebook on your system for training models? Trust me, you’re going to love Google Colab.

What is a Notebook in Google Colab?

Google Colab Features

Colab provides users free access to GPUs and TPUs, which can significantly speed up the training and inference of machine learning and deep learning models.

Colab’s interface is web-based, so installing any software on your local machine is unnecessary. The interface is also intuitive and user-friendly, making it easy to get started with coding.

Colab allows multiple users to work on the same notebook simultaneously, making collaborating with team members easy. Colab also integrates with other Google services, such as Google Drive and GitHub, making it easy to share your work.

Colab notebooks support markdown, which allows you to include formatted text, equations, and images alongside your code. This makes it easier to document your work and communicate your ideas.

Colab comes pre-installed with many popular libraries and tools for machine learning and deep learning, such as TensorFlow and PyTorch. This saves time and eliminates the need to manually install and configure these tools.

GPUs and TPUs on Google Colab

Ask anyone who uses Colab why they love it. The answer is unanimous – the availability of free GPUs and TPUs. Training models, especially deep learning ones, takes numerous hours on a CPU. We’ve all faced this issue on our local machines. GPUs and TPUs, on the other hand, can train these models in a matter of minutes or seconds.

If you still need a reason to work with GPUs, check out this excellent explanation by Faizan Shaikh.

It gives you a decent GPU for free, which you can continuously run for 12 hours. For most data science folks, this is sufficient to meet their computation needs. Especially if you are a beginner, then I would highly recommend you start using Google Colab.

Google Colab gives us three types of runtime for our notebooks:

CPUs,

GPUs, and

TPUs.
As I mentioned, Colab gives us 12 hours of continuous execution time. After that, the whole virtual machine is cleared and we have to start again. We can run multiple CPU, GPU, and TPU instances simultaneously, but our resources are shared between these instances.

Let’s take a look at the specifications of different runtimes offered by Google Colab:

It will cost you A LOT to buy a GPU or TPU from the market. Why not save that money and use Google Colab from the comfort of your own machine?

How to Use Google Colab?

You can go to Google Colab using this link. This is the screen you’ll get when you open Colab:

You can also import your notebook from Google Drive or GitHub, but they require an authentication process.

Google Colab Runtimes – Choosing the GPU or TPU Option

The ability to choose different types of runtimes is what makes Colab so popular and powerful. Here are the steps to change the runtime of your notebook:

Step 2: Here you can change the runtime according to your need:

A wise man once said, “With great power comes great responsibility.” I implore you to shut down your notebook after you have completed your work so that others can use these resources because various users share them. You can terminate your notebook like this:

Using Terminal Commands on Google Colab

You can use the Colab cell for running terminal commands. Most of the popular libraries come installed by default on Google Colab. Yes, Python libraries like Pandas, NumPy, scikit-learn are all pre-installed.

If you want to run a different Python library, you can always install it inside your Colab notebook like this:

!pip install <library_name>


Pretty easy, right? Everything is similar to how it works in a regular terminal. We just have to put an exclamation mark (!) before each command, like:




Cloning Repositories in Google Colab

You can also clone a Git repo inside Google Colaboratory. Just go to your GitHub repository and copy the clone link of the repository:

Then, simply run:

And there you go!

Uploading Files and Datasets

Here’s a must-know aspect for any data scientist. The ability to import your dataset into Colab is the first step in your data analysis journey.

The most basic approach is to upload your dataset to Colab directly:
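A minimal sketch of the direct-upload route, using Colab’s google.colab helper module (it is only importable inside Colab, hence the guard):

```python
def upload_files():
    """Open Colab's file picker and return {filename: bytes}, or None outside Colab."""
    try:
        from google.colab import files  # only importable inside a Colab runtime
    except ImportError:
        return None
    return files.upload()  # opens an interactive upload widget in the notebook

uploaded = upload_files()
if uploaded:
    for name, data in uploaded.items():
        print(f"Uploaded {name}: {len(data)} bytes")
```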

You can also upload your dataset to any other platform and access it using its link. I tend to go with the second approach more often than not (when feasible).
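The second approach works because pandas can read straight from a link; the URL in the comment below is a hypothetical placeholder for wherever you host the file:

```python
import pandas as pd

def load_dataset(path_or_url):
    """Load a CSV from a local path, a file-like object, or an HTTP(S) URL."""
    return pd.read_csv(path_or_url)

# Hypothetical hosted file -- substitute your own link:
# df = load_dataset("https://example.com/my_dataset.csv")
```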

Saving Your Notebook

All the notebooks on Colab are stored in your Google Drive. The best thing about Colab is that your notebook is automatically saved at regular intervals, so you don’t lose your progress.

If you want, you can export and save your notebook in both *.py and *.ipynb formats:

Not just that, you can also save a copy of your notebook directly on GitHub, or you can create a GitHub Gist:

I love the variety of options we get.

Exporting Data/Files from Google Colab

You can export your files directly to Google Drive, or you can export them to the VM instance and download them yourself:

Exporting directly to the Drive is a better option when you have bigger files or more than one file. You’ll pick up these nuances as you work on bigger projects in Colab.
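A sketch of the Drive route: mount Drive once with Colab’s drive helper, then copy files over with ordinary file operations (the folder path assumes the default MyDrive mount point):

```python
import shutil

def export_to_drive(local_path, drive_folder="/content/drive/MyDrive"):
    """Copy a file from the Colab VM into Google Drive.

    Returns the destination path, or None when not running inside Colab.
    """
    try:
        from google.colab import drive  # only importable inside a Colab runtime
    except ImportError:
        return None
    drive.mount("/content/drive")  # prompts for authorization the first time
    return shutil.copy(local_path, drive_folder)
```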

Sharing Your Notebook

Google Colab also gives us an easy way of sharing our work with others. This is one of the best things about Colab:

What’s Next?

Google Colab now also provides a paid tier called Google Colab Pro, priced at $9.99 a month. With this plan, you get access to Tesla T4 or Tesla P100 GPUs, along with the option of a high-RAM instance of around 27 GB. Your maximum computation time is also doubled, from 12 hours to 24 hours. How cool is that?

You can consider this plan if you need high computation power because it is still quite cheap when compared to other cloud GPU providers like AWS, Azure, and even GCP.


If you’re new to the world of Deep Learning, I have some excellent resources to help you get started in a comprehensive and structured manner:


Top Books On Python For Beginners And Advanced (2023)


Python is a widespread programming language, very commonly used today. It is an object-oriented and functional programming language. Learning Python benefits students and developers in the fields of data science, machine learning, and software development. This list of Python books will guide students and professionals toward building a strong base in Python; even kids can start learning from them at an early age. These books provide explanations and examples that help readers understand the topics better.


Let us now review these Python books in detail.

Book #1 Mastering Python Networking

Author: Eric Chou


Key Points:

This book covers methods for unlocking the potential of Python libraries to address tough network problems.

It is a great choice for learning how to leverage Python for SDN, DevOps, and network automation.

This book is most useful for a Programmer or a Network Engineer who wants to learn Python for networking.

Book #2 Python Data Science Handbook

Author: Jake VanderPlas



This book is an excellent choice if you want a reference for data science and data analytics tasks. The book uses Jupyter notebooks, which are very easy to read. Every chapter contains illustrations with well-designed examples. The author has a gift for explaining things with clarity.

Key Points:

This book is a great source of various pieces of the data science stack, such as Scikit-Learn, Matplotlib, Pandas, NumPy, and IPython.

This book comes in handy for data crunchers and working scientists who work in Python code.

The book is a ‘must-have’ for readers who are doing scientific computing in Python.

Book #3 Python Tricks

Author: Dan Bader


This book strikes a perfect balance between real-world solutions and in-depth explanations. The content is lucid and thorough but presented in an informal manner. The author dives into sufficient detail without over-explaining concepts to the point that they become slow and frustrating.

Key Points:

After each section, there is a brief recap that explains the rules of thumb to follow. This serves to remove any uncertainty.

If you have experience working in other programming languages or you have worked with legacy versions of Python, this book is most suitable for you to come up to speed with modern features and patterns.

This book reveals the best practices in Python and the potential of the Pythonic code with a step-by-step narrative.

Book #4 Python for Kids

Author: Jason R. Briggs



Children who use this book find programming accessible and enjoyable, and they begin to generate ideas for making games. This book is well-written, consists of excellent topics, and renders a plethora of examples. There is a very good balance between support (help and explanations) and challenge (concepts and coding tasks).

Key Points:

This book is introductory to Python programming and comprises illustrations and kid-friendly examples.

Kids can build a game and create drawings with Turtle, Python’s graphics library, after reading this book.

This book is a ticket for children aged 10 and up to the amazing sphere of computer programming.

Book #5 A Smarter Way to Learn Python

Author: Mark Myers



This is a great guide that makes Python easy to learn, with digestible chapters that build your confidence to work on projects. After each chapter, there are online quizzes to test readers. It has proven to be a great way for a brand-new coder to embark on the Python journey.

Key Points:

This book uses ‘interactive recall practice’ as the key teaching method.

Washington University states that this method augments learning performance by 400 percent.

This book has approximately 1,000 interactive exercises, which are online and free.

Book #6 Python Cookbook

Author: David Beazley



This book is one of the best resources for learning to write lean code in Python 3. Readers learn how to avoid writing unnecessarily long code and falling into common traps. It is a fast-paced Python book.

Key Points:

This book is full of practical examples written in Python 3.3, making it suitable for experienced Python programmers looking for content that covers modern tools.

The recipes in this book cover an extensive range of scope and difficulty, from simple string concatenation to building recursive descent parsers for BNF grammars.

This book is the optimum choice if you want to migrate Python 2 code or need help creating programs in Python 3.

Book #7 Python for Data Analysis

Author: Wes McKinney



McKinney has clear experience with, and vision for, the pandas framework. He nicely explains the main functions and inner workings of NumPy and pandas. This is a very practical book with a plethora of examples, best leveraged by having a keyboard at hand while reading.

Key Points:

This book is a modern introduction to scientific computing in Python, equipping readers to build data-intensive applications.

It consists of complete instructions to manipulate, process, clean, and crunch datasets in Python.

Python programmers who are novices in scientific computing and analysts new to Python can use this book.

Book #8 Python Crash Course

Author: Eric Matthes



The structure of this book is such that the difficulty level increases gradually as you proceed. Each chapter has exercises at the end, which help cement the content. Real Python has chosen this book as one of the best for those who want to learn Python.

Key Points:

This book introduces Python programming in a thorough, quick-paced manner with the result that you can write programs, solve problems, and make things work in no time.

This book is ideal for those who want to learn fundamental programming concepts, such as loops, classes, dictionaries, and lists.

The exercises will ensure that you can write clean and readable code and make programs interactive.
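As a taste of those fundamentals (loops, classes, dictionaries, and lists), here is a minimal sketch, not taken from the book:

```python
class Counter:
    """Count occurrences of items using a dictionary."""

    def __init__(self):
        self.counts = {}  # dictionary mapping item -> count

    def add(self, item):
        self.counts[item] = self.counts.get(item, 0) + 1

counter = Counter()
for word in ["spam", "eggs", "spam"]:  # loop over a list
    counter.add(word)
print(counter.counts)  # {'spam': 2, 'eggs': 1}
```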

Book #9 Hands-on Machine Learning with Scikit-Learn and TensorFlow

Author: Aurélien Géron



This book teaches readers not only the tools but also a framework for deciding which one applies to a given problem, and a way of thinking about what you want to accomplish in each project. The coding exercises cement your learning and keep readers from outpacing themselves.

Key Points:

This book uses two production-ready Python frameworks (TensorFlow and Scikit-Learn) to develop an intuitive understanding of the tools and concepts essential to building intelligent systems.

You can learn a gamut of techniques, from simple linear regression all the way to deep neural networks.

Even if you know nothing about machine learning, this book helps you to leverage simple and efficient tools for program implementation that are capable of learning from data.
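That progression from simple models upward starts with something like this Scikit-Learn sketch (the toy data is illustrative, not from the book):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data following y = 2x, to show the basic fit/predict workflow.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])

model = LinearRegression().fit(X, y)
print(model.predict([[5.0]]))  # close to [10.]
```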

Book #10 Python Programming

Author: John Zelle



This book explains tough concepts at a good pace with pertinent examples. It fulfills two aims: introducing readers to computer science, and then introducing them to Python as their first programming language.

Key Points:

The design of this book is such that one can select it as a primary textbook for a first college-level computing course.

This book teaches the core skills of computer science in the traditional way, with an emphasis on programming, design, and problem solving.

The most crucial modification in this edition is that a majority of the uses of eval were removed and a discussion of its dangers added.
