 

Top Machine Learning Interview Questions

 

Companies are seeking to make information and services more available to people by embracing new-age technologies like artificial intelligence (AI) and machine learning. Data scientists, artificial intelligence engineers, machine learning engineers and data analysts are some of the in-demand roles at companies that are embracing AI.


1. What is machine learning?


Machine learning is a subfield of artificial intelligence, which is broadly described as the ability of a machine to mimic intelligent human behaviour. Machine learning methods are used to perform complex tasks in a way that is similar to how humans solve problems.

 

2. Why was Machine Learning Introduced?


The simplest answer is: to make our lives easier. In the early days of "intelligent" applications, many systems used hardcoded rules of "if" and "else" decisions to process data or respond to user input. Consider a spam filter whose job is to move the appropriate incoming email messages to a spam folder. With machine learning algorithms, we instead provide ample data for the algorithm to learn and recognise patterns in the data. Unlike hand-coded rules, we do not need to write new rules for every situation in machine learning; we simply reuse the same workflow with a different dataset.

 

3. What are the different types of Machine Learning?


Machine Learning is classified into four types:


1. Supervised Learning
2. Unsupervised Learning
3. Semi-Supervised Learning
4. Reinforcement Learning

 

•  Supervised Learning: In this type of machine learning, the model is trained on a labelled dataset for classification and regression problems (see the sketch after this list). Some of the algorithms that are part of supervised learning are Linear Regression, Logistic Regression, Decision Tree, Random Forest, and Naive Bayes.


•  Unsupervised Learning: In this type of machine learning, the model is trained to discover patterns, anomalies and clusters in an unlabeled dataset. Some of the algorithms that are part of unsupervised learning are K-Means, C-Means, and Hierarchical Clustering.


•  Semi-Supervised Learning: In this type of machine learning, the model is trained on a combination of labelled and unlabeled data.


•  Reinforcement Learning: In this type of machine learning, the model learns on its own using the notion of rewards and penalties. In simple words, there is an agent with a task to complete, receiving rewards and penalties along the way.
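As referenced above, here is a minimal sketch of the supervised case using scikit-learn (assumed to be installed); the toy data points and labels are invented for illustration:

```python
# A minimal supervised-learning sketch, assuming scikit-learn is installed.
from sklearn.linear_model import LogisticRegression

# Labelled dataset: each row of X is a sample, y holds the known labels.
X = [[1.0, 2.0], [2.0, 1.0], [8.0, 9.0], [9.0, 8.0]]
y = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X, y)                      # learn from the labelled examples
print(model.predict([[8.5, 8.5]]))   # predict the label of a new sample
```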

 

4. What is Deep Learning?


Deep Learning is a subset of machine learning that involves systems which think and learn like humans using artificial neural networks. The term 'deep' comes from the fact that you can have several layers of neural networks. One of the main differences between machine learning and deep learning is that feature engineering is done manually in machine learning. In the case of deep learning, the model consisting of neural networks automatically decides which features to use (and which not to use).

 

5. What distinguishes machine learning from deep learning?


Machine learning algorithms learn patterns from data and then apply them to make decisions. Deep learning, on the other hand, learns by processing data on its own and is quite similar to the way the human brain identifies something, studies it, and makes a decision.


The following are the main distinctions:


•  How the data is presented to the system
•  Deep learning networks are based on layers of artificial neural networks, while machine learning algorithms generally need structured data.

 

6. What is the Difference Between Supervised and Unsupervised Machine Learning?


•  Supervised learning - This model learns from labelled data and produces predictions for future data as output.
•  Unsupervised learning - This model works on unlabeled input data, allowing the algorithm to act on that information without guidance.

 

7. What is ‘Naive’ in a Naive Bayes?


The Naive Bayes approach is a supervised machine learning algorithm. It is 'naive' because, in applying Bayes' theorem, it assumes that all features are independent of each other. The classifier is called 'naive' because it makes assumptions that may or may not turn out to be accurate.


The algorithm assumes that the presence of one feature of a class is unrelated to the presence of any other feature (absolute independence of features), given the class variable.
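As a rough illustration, here is a minimal Naive Bayes sketch using scikit-learn's GaussianNB; the toy features and labels are invented:

```python
from sklearn.naive_bayes import GaussianNB

# Toy labelled data: two features per sample, treated as independent
# given the class (the "naive" assumption).
X = [[1.0, 20.0], [2.0, 21.0], [8.0, 2.0], [9.0, 1.0]]
y = ["spam", "spam", "ham", "ham"]

clf = GaussianNB()
clf.fit(X, y)
print(clf.predict([[1.5, 19.0]]))  # -> ['spam']
```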


 
8. What are the steps involved in the Machine Learning Model Building Process?


Building a machine learning model involves three steps, which can be described as follows:


•  Model Building: Select a suitable algorithm for the model, then train it to meet the requirements.
•  Model Validation: Use test data to assess the model's accuracy.
•  Applying the Model: After testing, make the required adjustments and use the final model for real-world tasks.


It is essential to keep in mind that the model needs to be tested periodically to make sure it is working properly, and modified regularly to confirm its relevance and keep it up to date.
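Here is a minimal sketch of these three steps, assuming scikit-learn and its bundled Iris dataset; the choice of Random Forest is only an example:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# 1. Model building: choose an algorithm and train it.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# 2. Model validation: check accuracy on held-out test data.
print("test accuracy:", model.score(X_test, y_test))

# 3. Applying the model: predict on new, unseen samples.
print(model.predict(X_test[:3]))
```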

 

9. What is the Bias and Variance Tradeoff?


The bias-variance decomposition breaks down the learning error of any algorithm into bias, variance, and a small amount of irreducible error resulting from noise in the underlying dataset. Naturally, if you make the model more complex and add more variables, you will reduce bias but gain variance. To get the optimal level of error, you must trade off bias against variance. Neither high bias nor high variance is desired. High bias and low variance algorithms train models that are consistent but inaccurate on average. High variance and low bias algorithms train models that are accurate on average but inconsistent.

 

10. Define Cross-Validation.


Cross-validation is a machine learning technique that uses different portions of the dataset to train and test a model across multiple iterations. It is used to estimate a model's predictive power on new data that was not used to train it, and it helps guard against overfitting. The most popular resampling method, K-Fold Cross-Validation, splits the entire dataset into K folds of equal size.
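A minimal K-fold cross-validation sketch, assuming scikit-learn and its bundled Iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: the data is split into K=5 equal folds; each
# fold serves once as the test set while the rest are used for training.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores, scores.mean())
```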

 

11. What is Linear Regression in Machine Learning?


Linear Regression is a supervised machine learning algorithm that is trained on a labelled dataset and is used to model continuous data. Linear Regression models find a linear association between the continuous independent variable (x) and the dependent variable (y). The association between the dependent and independent variable is captured by the straight-line equation y = mx + c, where m is the slope of the line and c is the intercept.
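A minimal sketch, assuming scikit-learn and NumPy; the data is synthesised from y = 3x + 2 plus noise so that the fitted slope and intercept can be checked:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data generated from y = 3x + 2 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(50, 1))
y = 3 * x.ravel() + 2 + rng.normal(scale=0.5, size=50)

model = LinearRegression().fit(x, y)
print("slope m:", model.coef_[0])        # close to 3
print("intercept c:", model.intercept_)  # close to 2
```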

 

12. What is the difference between Lasso and Ridge regression?


Lasso (also known as L1) and Ridge (also known as L2) regression are two well-known regularization methods used to prevent overfitting. These techniques penalize the coefficients to find the optimum solution and reduce complexity. Lasso regression works by penalizing the sum of the absolute values of the coefficients. In Ridge or L2 regression, the penalty term is the sum of the squares of the coefficients.
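A minimal sketch of the difference, assuming scikit-learn and NumPy; the data, the alpha values, and the choice of which features matter are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
# Only the first two features actually matter here.
y = 4 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

# L1 tends to drive irrelevant coefficients exactly to zero;
# L2 shrinks them towards zero without eliminating them.
print("Lasso:", Lasso(alpha=0.1).fit(X, y).coef_)
print("Ridge:", Ridge(alpha=0.1).fit(X, y).coef_)
```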

 


13. How is KNN different from k-means?


KNN, or K-Nearest Neighbours, is a supervised machine learning algorithm used for classification purposes. In KNN, a test sample is assigned the class of the majority of its nearest neighbours. K-means, on the other hand, is an unsupervised algorithm mainly used for clustering. K-means clustering needs only a set of unlabeled points and the number of clusters; the algorithm takes the unlabeled data and learns how to cluster it into groups by computing the mean distance between points.
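A minimal side-by-side sketch, assuming scikit-learn; the toy points are invented:

```python
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

X = [[1, 1], [1, 2], [8, 8], [9, 8]]

# KNN (supervised): needs labels, classifies by majority vote of neighbours.
knn = KNeighborsClassifier(n_neighbors=3).fit(X, [0, 0, 1, 1])
print(knn.predict([[8, 9]]))          # -> [1]

# K-means (unsupervised): needs only the points and the number of clusters.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                     # cluster assignment per point
```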

 

14. What is a Decision Tree Classification?


A decision tree builds classification (or regression) models in a tree structure, splitting the dataset into ever-smaller subsets while incrementally growing the associated tree of branches and nodes. Decision trees can handle both categorical and numerical data.
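A minimal sketch, assuming scikit-learn and its bundled Iris dataset; export_text prints the learned branches and nodes:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the learned branches and nodes as text.
print(export_text(tree))
```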

 

15. What is Principal Component Analysis?


Principal Component Analysis, or PCA, is a multivariate statistical technique used for examining quantitative data. The purpose of PCA is to reduce higher-dimensional data to lower dimensions, eliminate noise, and extract crucial information such as attributes and characteristics from large amounts of data.
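A minimal sketch, assuming scikit-learn and its bundled Iris dataset (four features per sample):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)   # 4 features per sample

# Reduce 4 dimensions to 2 while keeping as much variance as possible.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                 # (150, 2)
print(pca.explained_variance_ratio_)   # variance kept per component
```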

 

16. What is Clustering in unsupervised ML techniques?


Clustering problems are solved by dividing data into subsets. These groups of data, known as clusters, contain data points that are related to each other. Unlike classification or regression, different clusters reveal a variety of information about the entities.

 

17. How to Tackle Overfitting and Underfitting?


Overfitting means the model fits the training data too well; in this case, we need to resample the data and estimate the model's accuracy using methods like k-fold cross-validation. Underfitting means the model is unable to capture the patterns in the data; in this case, we need to change the algorithm or feed more data attributes to the model.

 

18. Explain Correlation and Covariance.


Correlation is used for measuring and estimating the quantitative relationship between two variables, i.e., how strongly two variables are related. Examples include income and expenditure, or demand and supply. Covariance is a simpler way to measure how two variables vary together. The issue with covariance is that it is hard to interpret without normalization.
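A minimal sketch using NumPy; the income and expenditure figures are invented:

```python
import numpy as np

income = np.array([30, 40, 50, 60, 70])
spend  = np.array([20, 27, 33, 41, 49])

# Covariance depends on the units of the variables...
print(np.cov(income, spend)[0, 1])
# ...while correlation is covariance normalised to the range [-1, 1].
print(np.corrcoef(income, spend)[0, 1])
```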

 

19. What are Parametric and Non-Parametric Models?


Parametric models have a limited number of parameters, and to predict new data you only need to know the parameters of the model. Non-parametric models have no restriction on the number of parameters, which allows for more flexibility; to predict new data, you need to know both the parameters of the model and the state of the observed data.

 

20. What is Reinforcement Learning?


Reinforcement learning is distinct from other kinds of learning like supervised and unsupervised learning. In reinforcement learning, we are given neither data nor labels; learning is based on the rewards the environment gives to the agent.
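As a minimal sketch of the idea, here is a toy two-armed bandit in plain Python: the agent sees no labels, only rewards, and the hidden win rates are invented for illustration:

```python
import random

random.seed(0)
true_win_rate = [0.3, 0.7]          # the environment; hidden from the agent
estimates, counts = [0.0, 0.0], [0, 0]

for step in range(1000):
    # Epsilon-greedy: mostly exploit the best-looking arm, sometimes explore.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = estimates.index(max(estimates))
    reward = 1 if random.random() < true_win_rate[arm] else 0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(estimates)   # the agent learns that arm 1 pays off more
```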

 

21. What do you understand by the F1 score?


The F1 score is a metric that combines both Precision and Recall; it is the harmonic mean of precision and recall.
The F1 score can be calculated using the formula below:
F1 = 2 * (P * R) / (P + R)
The F1 score is one when both Precision and Recall are one.
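A minimal sketch, assuming scikit-learn; the true and predicted labels are invented:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]

p = precision_score(y_true, y_pred)
r = recall_score(y_true, y_pred)
print(2 * p * r / (p + r))          # harmonic mean, by hand: 0.75
print(f1_score(y_true, y_pred))     # same value from scikit-learn
```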

 

22. What are Support Vectors in SVM?


Support vectors in SVM are the data points that lie closest to the hyperplane. They influence the position and orientation of the hyperplane; removing the support vectors would change the position of the hyperplane. The support vectors are what allow us to build our support vector machine model.
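A minimal sketch, assuming scikit-learn; the toy points are invented, and the fitted model's support_vectors_ attribute exposes the points closest to the hyperplane:

```python
from sklearn.svm import SVC

X = [[1, 1], [2, 2], [8, 8], [9, 9]]
y = [0, 0, 1, 1]

svm = SVC(kernel="linear").fit(X, y)
# The points closest to the separating hyperplane:
print(svm.support_vectors_)
```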

 

23. How can we relate standard deviation and variance?


Standard deviation describes the spread of your data around the mean. Variance is the average squared deviation of each point from the mean, i.e., the mean of all data points' squared distances from the mean. Standard deviation and variance are directly related because standard deviation is the square root of variance.
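A quick check with NumPy on an invented sample:

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(np.var(data))            # variance: 4.0
print(np.std(data))            # standard deviation: 2.0
print(np.sqrt(np.var(data)))   # equal to the standard deviation
```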

 

24. What is a Time Series?


A time series is a sequence of numerical data points in successive order. It tracks the movement of the chosen data points over a specified period, recording them at regular intervals. A time series does not require any minimum or maximum time input. Analysts usually use time series to analyse data according to their specific requirements.

 

25. Define Precision and Recall.


•  Precision: Precision is the ratio of events you correctly identify to the total number of events you identify (both correct and incorrect identifications). Precision = (True Positive) / (True Positive + False Positive)


•  Recall: Recall is the ratio of events you correctly identify to the total number of events that actually occurred. Recall = (True Positive) / (True Positive + False Negative)
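A minimal sketch, assuming scikit-learn; the labels are the same invented ones used in the F1 example above:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("precision:", tp / (tp + fp))   # 3 / (3 + 1) = 0.75
print("recall:   ", tp / (tp + fn))   # 3 / (3 + 1) = 0.75
```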

 

26. When does regularization become necessary in Machine Learning?


Regularization becomes necessary whenever the model starts to overfit or underfit. It adds a cost term for bringing in more features with the objective function; hence, it tries to push the coefficients of many variables towards zero and so reduce the cost term. This lowers model complexity so that the model becomes better at predicting.

 

27. What is Regularization? What kind of problems does regularization solve?


Regularization is a form of regression that constrains, regularizes, or shrinks the coefficient estimates towards zero. In other words, it discourages learning a more complex or flexible model, to avoid the risk of overfitting. It reduces the variance of the model without a substantial increase in its bias. Regularization is used to address overfitting, as it penalizes the loss function by adding a multiple of the L1 (LASSO) or L2 (Ridge) norm of the weight vector w.

 

28. What do you mean by Genetic Programming?


Genetic Programming (GP) is a type of Evolutionary Algorithm, a subset of machine learning. Genetic programming systems run an algorithm that uses random mutation, crossover, a fitness function, and multiple generations of evolution to solve a user-defined task. The genetic programming model is based on testing and selecting the best option from a set of outcomes.

 

29. How is machine learning used in day-to-day life?


Most people are already using machine learning in their day-to-day life. When you browse the internet, you express your choices, likes, and dislikes through your searches. All of this is picked up by cookies on your computer, and from it a user's behaviour is estimated. This helps improve a user's experience on the internet and is the basis for personalised recommendations.

 

Conclusion


All the questions listed above cover the basics of machine learning. Machine learning is evolving quickly, so new concepts will keep emerging.