Python is widely used in scientific computing: computer vision, artificial intelligence, mathematics, astronomy, and more. It is no surprise, then, that it is also widely used in machine learning.
This article lists and describes the most useful machine learning tools and libraries in Python. In this list, we do not require these libraries to be written in Python, as long as they have a Python interface.
Our intention is not to list all machine learning libraries in Python (the Python Package Index (PyPI) returned 139 results when searching for "machine learning"), but to list the ones we know to be useful and well maintained.
In addition, although some modules can be used for a variety of machine learning tasks, we only list libraries whose main focus is machine learning. For example, although Scipy includes some clustering algorithms, its main focus is not machine learning but a comprehensive scientific computing toolset. Therefore we exclude Scipy (although we use it too!).
Another thing to mention is that we will also evaluate these libraries based on their integration with other scientific computing libraries, because machine learning (supervised or unsupervised) is also part of the data processing system. If the library you use does not match the rest of the data processing system, you will spend a lot of time creating an intermediate layer between the different libraries. It's important to have a great library in your toolset, but it's equally important that the library integrates well with other libraries.
If you are proficient in other languages but also want to use these packages, we briefly describe how you can integrate with Python to use the libraries listed in this article.
Scikit-learn is our machine learning tool of choice at CB Insights. We use it for classification, feature selection, feature extraction, and clustering.
What we love most is that it has an easy-to-use, consistent API and provides **many** evaluation, diagnostic, and cross-validation methods out of the box (sound familiar? Python also follows the "batteries included" philosophy). The icing on the cake is that it uses Scipy data structures under the hood, so it fits well with the rest of the Python scientific computing stack built on Scipy, Numpy, Pandas, and Matplotlib.
So, if you want to visualize the performance of your classifier (for example, with a precision-recall chart or a Receiver Operating Characteristic (ROC) curve), Matplotlib helps you produce a quick visualization.
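Here is a minimal sketch of such a plot; the dataset and classifier are arbitrary illustrative choices, and it uses the current scikit-learn API rather than the one from when this article was written:

```python
# Train a classifier on a bundled dataset and plot its ROC curve.
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]  # probability of the positive class

fpr, tpr, _ = roc_curve(y_test, scores)
plt.plot(fpr, tpr, label="ROC curve (AUC = %.2f)" % auc(fpr, tpr))
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```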
Considering the time spent cleaning and structuring data, using this library can be very convenient because it can be tightly integrated with other scientific computing packages.
In addition, it contains limited natural language processing feature-extraction capabilities, such as bag-of-words, tf-idf (term frequency-inverse document frequency), and preprocessing (stop words, custom preprocessing, analyzers).
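A minimal sketch of that feature extraction, using a toy corpus made up for illustration:

```python
# Turn raw documents into a sparse tf-idf matrix.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "machine learning with python",
    "python libraries for machine learning",
    "deep learning is a subfield of machine learning",
]

vectorizer = TfidfVectorizer(stop_words="english")  # built-in stop-word list
X = vectorizer.fit_transform(corpus)                # sparse Scipy matrix

print(vectorizer.get_feature_names_out())
print(X.shape)  # (number of documents, vocabulary size)
```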
In addition, if you want to quickly run different benchmarks on small (toy) datasets, its datasets module provides common and useful datasets. You can also build your own small datasets from these, so you can test whether a model meets your expectations before applying it to the real world. For parameter optimization and tuning, it also provides grid search and random search.
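A minimal sketch combining the two, assuming a bundled toy dataset and an arbitrary parameter grid:

```python
# Grid-search SVM hyperparameters on the bundled iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5)  # 5-fold cross-validation
search.fit(X, y)

print(search.best_params_, search.best_score_)
```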
None of these features would be possible without strong community support and active maintenance. We look forward to its first stable release.
Statsmodels is another powerful library, focused on statistical models and used mainly for predictive and exploratory analysis. If you want to fit linear models, perform statistical analysis, or do predictive modeling, Statsmodels is a great fit. The statistical tests it provides are quite comprehensive and cover most validation tasks.
If you are an R or S user, it also provides R-style formula syntax for certain statistical models. Its models accept Numpy arrays and Pandas data frames, making intermediate data structures a thing of the past!
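A minimal sketch of that workflow, fitting an ordinary least squares model on randomly generated data:

```python
# Fit an OLS model using the R-style formula API and a Pandas data frame.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=100)})
df["y"] = 2.0 * df["x"] + rng.normal(size=100)

model = smf.ols("y ~ x", data=df).fit()  # formula syntax, Pandas input
print(model.summary())
```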
PyMC is a tool for doing **Bayesian inference**. It contains Bayesian statistical models, statistical distributions, and diagnostic tools for model convergence, as well as some hierarchical models. If you want to do Bayesian analysis, you should check it out.
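A minimal sketch of Bayesian inference with PyMC, estimating the mean of randomly generated data. Note that this uses the modern model-context API (PyMC 3 and later), which postdates this article:

```python
import numpy as np
import pymc as pm

data = np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=100)

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)           # prior over the mean
    pm.Normal("obs", mu=mu, sigma=2.0, observed=data)  # likelihood
    trace = pm.sample(1000)                            # draw posterior samples

print(trace.posterior["mu"].mean())  # should be close to 1.0
```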
Shogun is a machine learning toolbox focused on Support Vector Machines (SVMs), written in C++. It is under active development and maintenance, and provides a Python interface, which is also its best-documented interface. However, compared to Scikit-learn, we found its API harder to use. Furthermore, it does not offer many diagnostic or evaluation algorithms out of the box. However, speed is a big advantage.
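A rough sketch of training an SVM through Shogun's classic modular Python interface; module and class names have changed across Shogun releases, so treat this as illustrative only (the data is randomly generated):

```python
import numpy as np
from modshogun import BinaryLabels, GaussianKernel, LibSVM, RealFeatures

X = np.random.randn(2, 100)         # Shogun expects one example per column
y = np.sign(X[0, :] + X[1, :])      # toy binary labels (+1 / -1)

features = RealFeatures(X)
labels = BinaryLabels(y)
kernel = GaussianKernel(features, features, 1.0)  # Gaussian kernel, width 1.0

svm = LibSVM(1.0, kernel, labels)   # regularization constant C = 1.0
svm.train()
predictions = svm.apply(features)   # predicted labels on the training data
```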
Gensim is defined as "topic modeling for humans". As described on its home page, its focus is Latent Dirichlet Allocation (LDA) and its variants. Unlike other packages, it supports natural language processing and can more easily combine NLP and other machine learning algorithms.
If you work in NLP and want to do clustering and basic classification, you may want to take a look. Gensim recently introduced word2vec, Google's neural-network-based text representation. This library is written purely in Python.
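A minimal sketch of Gensim's LDA workflow, using a tiny tokenized corpus made up for illustration:

```python
# Build a dictionary and bag-of-words corpus, then fit an LDA topic model.
from gensim import corpora, models

texts = [
    ["machine", "learning", "python"],
    ["python", "libraries", "machine", "learning"],
    ["deep", "learning", "neural", "networks"],
]

dictionary = corpora.Dictionary(texts)                 # token -> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words vectors

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary)
print(lda.print_topics())
```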
Orange is the only library in this article with a graphical user interface (GUI). It is quite comprehensive in its classification, clustering, and feature selection methods, and offers some cross-validation methods as well. It is better than Scikit-learn in some respects (classification methods, some preprocessing capabilities), but its integration with the rest of the scientific computing stack (Numpy, Scipy, Matplotlib, Pandas) is not as good as Scikit-learn's.
However, the GUI is an important advantage: you can visualize the results of cross-validation, models, and feature selection methods (some features require Graphviz to be installed). For most algorithms, Orange has its own data structures, so you need to wrap your data into an Orange-compatible structure, which makes the learning curve steeper.
PyMVPA is another statistical learning library whose API is very similar to Scikit-learn's. It contains cross-validation and diagnostic tools, but it is not as comprehensive as Scikit-learn.
Deep Learning
Although deep learning is a subfield of machine learning, we created a separate section for it here because it has recently attracted a lot of attention, including from the talent acquisition departments of Google and Facebook.
Theano is the most mature deep learning library. It provides a good data structure (the tensor) to represent the layers of a neural network, which is very efficient for linear algebra and similar to Numpy's arrays. Be aware that its API may not be very intuitive, so the learning curve is steep. Many libraries are built on top of Theano and take advantage of its data structures. It also supports GPU programming out of the box.
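A minimal sketch of Theano's symbolic style: define a symbolic expression, compile it into a function, then call it on Numpy arrays:

```python
import numpy as np
import theano
import theano.tensor as T

x = T.dmatrix("x")                  # symbolic matrix of doubles
sigmoid = 1 / (1 + T.exp(-x))       # elementwise logistic function
f = theano.function([x], sigmoid)   # compile (optionally targeting the GPU)

print(f(np.array([[0.0, 1.0], [-1.0, -2.0]])))
```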
PyLearn2 is another library built on Theano; it introduces modularity and configurability on top of it. You can create neural networks through different configuration files, which makes it easier to experiment with different parameters. Arguably, separating the parameters and properties of a neural network into configuration files makes it even more modular.
Decaf is a deep learning library recently released by UC Berkeley. Tested in the ImageNet classification challenge, its neural network implementation turned out to be state of the art.
If you want to use the excellent Scikit-learn API for deep learning, Nolearn makes Decaf easier to use. It is a wrapper around Decaf that is (mostly) compatible with Scikit-learn, which makes Decaf even more attractive.
OverFeat, a recent winner of the Dogs vs. Cats Kaggle challenge, is written in C++ and includes a Python wrapper (along with Matlab and Lua wrappers). It uses the GPU via the Torch library, so it is fast. It also won the ImageNet classification, detection, and localization challenge. If your field is computer vision, you might want to take a look.
Hebel is another neural network library with GPU support available out of the box. You can define the properties of a neural network through YAML files (similar to PyLearn2), which provides a friendly way to separate the network from the code and lets you get models running quickly. Since it has only been under development for a short time, its documentation is lacking in depth and breadth. It is also limited in terms of network models, because it supports only one kind of model (feed-forward).
However, it is written in pure Python and promises to be a very friendly library, because it contains many practical utilities, such as schedulers and monitors, which we have not found in other libraries.
NeuroLab is another neural network library with a friendly, Matlab-like API. Unlike other libraries, it contains implementations of several variants of Recurrent Neural Networks (RNNs). If you want to use RNNs, this library is one of the best choices among libraries with similar APIs.
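A minimal sketch of training a small network with NeuroLab to approximate a sine curve; the architecture and data are illustrative choices (a plain feed-forward net is used here to keep the example short):

```python
import numpy as np
import neurolab as nl

x = np.linspace(-2, 2, 40).reshape(-1, 1)
y = np.sin(x * np.pi) / 2 + 0.5        # targets scaled to fit the output range

# One input in [-2, 2], a hidden layer of 10 neurons, one output neuron.
net = nl.net.newff([[-2, 2]], [10, 1])
error = net.train(x, y, epochs=500, show=100, goal=0.01)

predictions = net.sim(x)               # run the trained network
```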
You don't know Python but are very good at another language? Don't despair! One of Python's strengths (among many) is that it is a perfect glue language: you can access these libraries through Python from the programming language you normally use. The following packages can be used to bridge various programming languages with Python:
R -> RPython
Matlab -> matpython
Java - > Jython
Lua -> Lunatic Python
Julia -> PyCall.jl
The following libraries have not released any updates for more than a year. We list them because you may still find them useful, but they are unlikely to receive bug fixes, let alone future enhancements:
MDP
MlPy
FFnet
PyBrain