
Top Python Machine Learning Interview Questions and Answers

By 王林 · 2024-09-10


Machine learning (ML) is one of the most sought-after fields in the tech industry, and proficiency in Python is often a prerequisite given its rich ecosystem of libraries and ease of use. If you are preparing for an interview in this field, mastery of both theoretical concepts and practical implementation is essential. Here are some common Python ML interview questions and answers to help you prepare.

1. What Preprocessing Techniques in Python Are You Most Familiar With?

Preprocessing techniques are essential for preparing data for machine learning models. Some of the most common include:

  • Normalization: Adjusting the values in a feature vector to a common scale without distorting differences in the ranges of values.
  • Dummy Variables: Using pandas to create indicator variables (0 or 1) that show whether a categorical variable can take a particular value.
  • Checking for Outliers: Several methods can be used, including univariate analysis, multivariate analysis, and the Minkowski error.

Code Example:

from sklearn.preprocessing import MinMaxScaler
import pandas as pd

# Data normalization
scaler = MinMaxScaler()
normalized_data = scaler.fit_transform(data)

# Creating dummy variables
df_with_dummies = pd.get_dummies(data, drop_first=True)

2. What Is a Brute Force Algorithm? Provide an Example.

A brute force algorithm exhaustively tries every possibility to find a solution. A common example is linear search, where the algorithm checks each element of an array for a match.

Code Example:

def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1

# Example usage
arr = [2, 3, 4, 10, 40]
target = 10
result = linear_search(arr, target)

3. What Are Some Methods for Handling Imbalanced Datasets?

An imbalanced dataset has a skewed class distribution. Strategies for handling this include:

  • Collecting More Data: Gather more data for the minority class.
  • Resampling: Oversample the minority class or undersample the majority class.
  • SMOTE (Synthetic Minority Over-sampling Technique): Generate synthetic samples for the minority class.
  • Algorithm Adjustments: Use algorithms that can handle imbalance, such as bagging or boosting methods.

Code Example:

from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

X_resampled, y_resampled = SMOTE().fit_resample(X, y)
X_train, X_test, y_train, y_test = train_test_split(X_resampled, y_resampled, test_size=0.2)

4. What Methods Are Available in Python for Handling Missing Data?

Common strategies for handling missing data include omission and imputation:

  • Omission: Remove rows or columns with missing values.
  • Imputation: Fill in missing values using techniques such as the mean, median, or mode, or advanced methods like SimpleImputer or IterativeImputer.

Code Example:

from sklearn.impute import SimpleImputer

# Imputing missing values
imputer = SimpleImputer(strategy='median')
data_imputed = imputer.fit_transform(data)

5. What Is Regression? How Would You Implement It in Python?

Regression is a supervised learning technique used to find correlations between variables and make predictions about a dependent variable. Common examples include linear regression and logistic regression, both of which can be implemented with Scikit-learn.

Code Example:

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Create and train the model
model = LinearRegression()
model.fit(X_train, y_train)

# Make predictions
predictions = model.predict(X_test)

6. How Do You Split Training and Testing Datasets in Python?

In Python, you can use the train_test_split function from Scikit-learn to split data into training and testing sets.

Code Example:

from sklearn.model_selection import train_test_split

# Split the dataset: 60% training and 40% testing
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.4)

7. Which Parameters Are Most Important for Tree-Based Learners?

Some key parameters for tree-based learners include:

  • max_depth: The maximum depth of each tree.
  • learning_rate: The step size at each iteration.
  • n_estimators: The number of trees in the ensemble or the number of boosting rounds.
  • subsample: The fraction of observations to be sampled for each tree.

Code Example:

from sklearn.ensemble import RandomForestClassifier

# Setting parameters for Random Forest
model = RandomForestClassifier(max_depth=5, n_estimators=100, max_features='sqrt', random_state=42)
model.fit(X_train, y_train)

8. What Are Common Hyperparameter Tuning Methods in Scikit-learn?

Two commonly used hyperparameter tuning methods are:

  • Grid Search: Define a grid of hyperparameter values and search for the best combination.
  • Random Search: Sample hyperparameter combinations at random from a wide range of values.

Code Example:

from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

# Grid Search
param_grid = {'n_estimators': [50, 100, 200], 'max_depth': [5, 10, 15]}
grid_search = GridSearchCV(model, param_grid, cv=5)
grid_search.fit(X_train, y_train)

# Random Search
param_dist = {'n_estimators': [50, 100, 200], 'max_depth': [5, 10, 15]}
random_search = RandomizedSearchCV(model, param_dist, n_iter=10, cv=5, random_state=42)
random_search.fit(X_train, y_train)

9. Write a Function to Find the Median Rainfall on Rainy Days.

You need to remove the days with no rain and then find the median.

Code Example:

def median_rainfall(df_rain):
    # Remove days with no rain
    df_rain_filtered = df_rain[df_rain['rainfall'] > 0]
    # Find the median amount of rainfall
    median_rainfall = df_rain_filtered['rainfall'].median()
    return median_rainfall

10. Write a Function to Impute the Median Price of the Selected California Cheeses in Place of Missing Values.

You can use pandas to compute the median and fill in the missing values.

Code Example:

def impute_median_price(df, column):
    # Compute the column median and fill missing values with it
    median_price = df[column].median()
    df[column] = df[column].fillna(median_price)
    return df

11. Write a Function to Return a New List Where All None Values Are Replaced with the Most Recent Non-None Value in the List.

Code Example:

def fill_none(input_list):
    prev_value = None
    result = []
    for value in input_list:
        if value is None:
            result.append(prev_value)
        else:
            result.append(value)
            prev_value = value
    return result

12. Write a Function Named grades_colors to Select Only the Rows Where the Student’s Favorite Color is Green or Red and Their Grade is Above 90.

Code Example:

def grades_colors(df_students):
    filtered_df = df_students[(df_students["grade"] > 90) & (df_students["favorite_color"].isin(["green", "red"]))]
    return filtered_df

13. Calculate the t-value for the Mean of ‘var’ Against a Null Hypothesis That μ = μ_0.

Code Example:

import pandas as pd
from scipy import stats

def calculate_t_value(df, column, mu_0):
    sample_mean = df[column].mean()
    sample_std = df[column].std()
    n = len(df)

    t_value = (sample_mean - mu_0) / (sample_std / (n ** 0.5))
    return t_value

# Example usage
t_value = calculate_t_value(df, 'var', mu_0)
print(t_value)

14. Build a K-Nearest Neighbors Classification Model from Scratch.

Code Example:

import numpy as np
import pandas as pd

def euclidean_distance(point1, point2):
    return np.sqrt(np.sum((point1 - point2) ** 2))

def kNN(k, data, new_point):
    distances = data.apply(lambda row: euclidean_distance(row[:-1], new_point), axis=1)
    sorted_indices = distances.sort_values().index
    top_k = data.iloc[sorted_indices[:k]]

    return top_k['label'].mode()[0]

# Example usage
data = pd.DataFrame({
    'feature1': [1, 2, 3, 4],
    'feature2': [2, 3, 4, 5],
    'label': [0, 0, 1, 1]
})

new_point = [2.5, 3.5]
k = 3

result = kNN(k, data, new_point)
print(result)

15. Build a Random Forest Model from Scratch.

Note: This example uses simplified assumptions to meet the interview constraints.

Code Example:

import pandas as pd
import numpy as np

def create_tree(dataframe, new_point):
    unique_classes = dataframe['class'].unique()
    for col in dataframe.columns[:-1]:  # Exclude the 'class' column
        if new_point[col] == 1:
            sub_data = dataframe[dataframe[col] == 1]
            if len(sub_data) > 0:
                return sub_data['class'].mode()[0]
    return unique_classes[0]  # Fall back to the first class observed

def random_forest(df, new_point, n_trees):
    results = []
    for _ in range(n_trees):
        tree_result = create_tree(df, new_point)
        results.append(tree_result)
    # Majority vote
    return max(set(results), key=results.count)

# Example usage
df = pd.DataFrame({
    'feature1': [0, 1, 1, 0],
    'feature2': [0, 0, 1, 1],
    'class': [0, 1, 1, 0]
})

new_point = {'feature1': 1, 'feature2': 0}
n_trees = 5

result = random_forest(df, new_point, n_trees)
print(result)

16. Build a Logistic Regression Model from Scratch.

Code Example:

import pandas as pd
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def logistic_regression(X, y, num_iterations, learning_rate):
    weights = np.zeros(X.shape[1])
    for i in range(num_iterations):
        z = np.dot(X, weights)
        predictions = sigmoid(z)
        errors = y - predictions
        gradient = np.dot(X.T, errors)
        weights += learning_rate * gradient
    return weights

# Example usage
df = pd.DataFrame({
    'feature1': [0, 1, 1, 0],
    'feature2': [0, 0, 1, 1],
    'class': [0, 1, 1, 0]
})

X = df[['feature1', 'feature2']].values
y = df['class'].values
num_iterations = 1000
learning_rate = 0.01

weights = logistic_regression(X, y, num_iterations, learning_rate)
print(weights)

17. Build a K-Means Algorithm from Scratch.

Code Example:

import numpy as np

def k_means(data_points, k, initial_centroids):
    centroids = initial_centroids
    while True:
        distances = np.linalg.norm(data_points[:, np.newaxis] - centroids, axis=2)
        clusters = np.argmin(distances, axis=1)
        new_centroids = np.array([data_points[clusters == i].mean(axis=0) for i in range(k)])        

        if np.all(centroids == new_centroids):
            break
        centroids = new_centroids
    return clusters

# Example usage
data_points = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
k = 2
initial_centroids = np.array([[1, 2], [10, 2]])

clusters = k_means(data_points, k, initial_centroids)
print(clusters)

18. What is Machine Learning and How Does it Work?

Machine Learning is a field of artificial intelligence focused on building algorithms that enable computers to learn from data without explicit programming. It uses algorithms to analyze and identify patterns in data and make predictions based on those patterns.

Example Answer:

"Machine learning is a branch of artificial intelligence that involves creating algorithms capable of learning from and making predictions based on data. It works by training a model on a dataset and then using that model to make predictions on new data."

19. What are the Different Types of Machine Learning Algorithms?

There are three main types of machine learning algorithms:

  • Supervised Learning: Uses labeled data and makes predictions based on this information. Examples include linear regression and classification algorithms.

  • Unsupervised Learning: Processes unlabeled data and seeks to find patterns or relationships in it. Examples include clustering algorithms like K-means.

  • Reinforcement Learning: The algorithm learns from interacting with its environment, receiving rewards or punishments for certain actions. Examples include training AI agents in games.

Example Answer:

"There are three main types of machine learning algorithms: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning uses labeled data to make predictions, unsupervised learning finds patterns in unlabeled data, and reinforcement learning learns from interactions with the environment to maximize rewards."

20. What is Cross-Validation and Why is it Important in Machine Learning?

Cross-validation is a technique to evaluate the performance of a machine learning model by dividing the dataset into two parts: a training set and a validation set. The training set trains the model, whereas the validation set evaluates it.

Importance:

  • Prevents overfitting by ensuring the model generalizes well to unseen data.
  • Provides a more accurate measure of model performance.

Example Answer:

"Cross-validation is a technique used to evaluate a machine learning model'sperformance by dividing the dataset into training and validation sets. It helps ensure the model generalizes well to new data, preventing overfitting and providing a more accurate measure of performance."

21. What is an Artificial Neural Network and How Does it Work?

Artificial Neural Networks (ANNs) are models inspired by the human brain's structure. They consist of layers of interconnected nodes (neurons) that process input data and generate output predictions.

Example Answer:

"An artificial neural network is a machine learning model inspired by the structure and function of the human brain. It comprises layers of interconnected neurons that process input data through weighted connections to make predictions."

22. What is a Decision Tree and How to Use it in Machine Learning?

Decision Trees are models for classification and regression tasks that split data into subsets based on the values of input variables to generate prediction rules.

Example Answer:

"A decision tree is a tree-like model used for classification and regression tasks. It works by recursively splitting data into subsets based on input variables, creating rules for making predictions."

23. What is the K-Nearest Neighbors (KNN) Algorithm and How Does it Work?

K-Nearest Neighbors (KNN) is a simple machine learning algorithm used for classification or regression tasks. It determines the k closest data points in the feature space to a given unseen data point and classifies it based on the majority class of its k nearest neighbors.

Example Answer:

"The K-Nearest Neighbors (KNN) algorithm is a machine learning technique used for classification or regression. It works by identifying the k closest data points to a given point in the feature space and classifying it based on the majority class among the k nearest neighbors."

24. What is the Support Vector Machine Algorithm and How Does it Work?

Support Vector Machines (SVM) are linear models used for binary classification and regression tasks. They find the most suitable boundary (hyperplane) that separates data into classes. Data points closest to the hyperplane, called support vectors, play a critical role in defining this boundary.

Example Answer:

"The Support Vector Machine (SVM) algorithm is a linear model used for binary classification and regression tasks. It identifies the best hyperplane that separates data into classes, relying heavily on the data points closest to the hyperplane, known as support vectors."

25. What is Regularization, and How Do You Use it in Machine Learning?

Regularization is a technique to prevent overfitting in machine learning models by adding a penalty term to the loss function. This penalty discourages the model from learning overly complex relationships in the data.

Example Answer:

"Regularization is a technique to prevent overfitting in machine learning models by adding a penalty term to the loss function, which discourages the model from learning overly complex patterns. Common types of regularization include L1 (Lasso) and L2 (Ridge) regularization."

Code Example:

from sklearn.linear_model import Ridge

# Applying L2 Regularization (Ridge Regression)
ridge_model = Ridge(alpha=1.0)
ridge_model.fit(X_train, y_train)

26. Can You Explain How Gradient Descent Works?

Gradient Descent is an optimization algorithm used to minimize a cost function in machine learning. It iteratively adjusts the parameters of the model in the direction of the negative gradient of the cost function until it reaches a minimum.

Example Answer:

"Gradient Descent is an optimization algorithm used to minimize a cost function in machine learning. It iteratively updates the model parameters in the direction of the negative gradient of the cost function, aiming to find the parameters that minimize the cost."

27. Can You Explain the Concept of Ensemble Learning?

Ensemble Learning is a technique where multiple models (often called "weak learners") are combined to solve a prediction task. The combined model is generally more robust and performs better than individual models.

Example Answer:

"Ensemble learning is a machine learning technique where multiple models are combined to solve a prediction task. Common ensemble methods include bagging, boosting, and stacking. Combining the predictions of individual models can improve performance and reduce the risk of overfitting."

Example Code for Random Forest (an ensemble method):

from sklearn.ensemble import RandomForestClassifier

# Ensemble learning using Random Forest
model = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=42)
model.fit(X_train, y_train)
predictions = model.predict(X_test)

Conclusion

Preparing for a Python machine learning interview involves understanding both theoretical concepts and practical implementations. This guide has covered several essential questions and answers that frequently come up in interviews. By familiarizing yourself with these topics and practicing the provided code examples, you'll be well-equipped to handle a wide range of questions in your next machine learning interview. Good luck!

Visit MyExamCloud and see the most recent Python Certification Practice Tests. Begin creating your Study Plan today.
