Translator|Li Rui
Reviser|Sun Shujuan
The universe is noisy and chaotic, and complex enough to make prediction difficult. Human intelligence and intuition provide a basic understanding of some of the activity in the world around us, enough for individuals and small groups, from their limited perspective, to make sense of individual events on macroscopic scales of space and time.
Natural philosophers in prehistory and antiquity were mostly limited to common-sense rationalization and guess-and-check. These methods have significant limitations, especially for phenomena that are too large or too complex, which left room for superstitious or magical thinking to flourish.
This is not to disparage guessing and checking, which underlies the modern scientific method, but to recognize that the transformation in humanity's ability to investigate and understand was driven by the desire, and the tools, to distill physical phenomena into mathematical expressions.
This was especially evident after the Enlightenment led by Newton and other scientists, although traces of analytical reductionism appear in antiquity as well. The ability to move from observations to mathematical equations (and to the predictions those equations make) is integral to scientific exploration and progress.
Deep learning, too, is fundamentally about learning transformations between observed inputs and outputs, just as human scientists try to capture input-output relationships in the form of mathematical expressions.
The difference, of course, is that the input-output relationship learned by a deep neural network (a consequence of the universal approximation theorem) consists of an uninterpretable "black box" of numerical parameters: weights, biases, and the nodes they connect.
The universal approximation theorem states that a neural network meeting very lax criteria should be able to approximate any well-behaved function arbitrarily closely. In practice, a neural network is a fragile and leaky abstraction for representing input-output relationships that arise from simple yet precise underlying equations.
Unless special attention is paid to training the model (or an ensemble of models) to predict its own uncertainty, neural networks tend to perform very poorly when making predictions outside the distribution they were trained on.
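As a rough illustration of this kind of uncertainty-aware modeling, the sketch below trains a small ensemble of regressors and uses their disagreement as a crude out-of-distribution signal. It is a minimal example assuming scikit-learn; it is not a method from any of the papers discussed here.

```python
# Illustrative sketch: ensemble disagreement as a crude out-of-distribution
# signal. Each member is trained on a different bootstrap sample.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(500, 1))                 # training range
y_train = np.sin(3 * X_train[:, 0]) + 0.05 * rng.normal(size=500)

ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(X_train), len(X_train))       # bootstrap resample
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                       random_state=seed)
    net.fit(X_train[idx], y_train[idx])
    ensemble.append(net)

X_test = np.linspace(-4, 4, 9).reshape(-1, 1)               # extends beyond training range
preds = np.stack([net.predict(X_test) for net in ensemble])
print("mean:", preds.mean(axis=0))
print("std :", preds.std(axis=0))   # disagreement typically grows for |x| > 2
```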
Deep learning models are also poor at making falsifiable predictions, that is, the testable hypotheses that form the basis of the scientific method. So while deep learning is a well-proven tool that is good at fitting data, its usefulness is limited in one of humanity's most important pursuits: exploring the universe around us through the scientific method.
Despite these shortcomings for scientific work, deep learning's enormous fitting capacity and its many successes in scientific disciplines cannot be ignored.
Modern science produces enormous amounts of data, far more than individuals (or even teams) can inspect directly, and far too noisy to convert intuitively into clear mathematical equations.
For this, we can turn to symbolic regression: an automated or semi-automated method for distilling data into equations.
The Current Gold Standard: Evolutionary Methods
Before getting into exciting recent research on applying modern deep learning to symbolic regression, it is important to first understand the current state of the art for turning data sets into equations. The most commonly mentioned symbolic regression package is Eureqa, which is based on genetic algorithms.
Eureqa was originally developed as a research project by Hod Lipson's team at Cornell University and offered as proprietary software by Nutonian, which was later acquired by DataRobot. Eureqa has since been integrated into the DataRobot platform, under Michael Schmidt, Eureqa's co-creator and DataRobot's CTO.
Eureqa and similar symbolic regression tools use genetic algorithms to optimize a population of candidate equations for accuracy and simplicity simultaneously.
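To make that dual objective concrete, the sketch below shows the kind of fitness score a genetic symbolic-regression search might use, combining fit error with a parsimony penalty on expression size. The expression-tree encoding, the `predict` callable, and the weighting are illustrative assumptions, not Eureqa's actual criterion.

```python
# Hedged sketch of a two-objective fitness score for genetic symbolic
# regression: mean squared error plus a penalty on expression complexity.
import numpy as np

def complexity(expr_tree):
    """Count the nodes of an expression tree given as nested tuples,
    e.g. ('add', ('mul', 'x', 'x'), 1.5) has complexity 5."""
    if not isinstance(expr_tree, tuple):
        return 1                      # leaf: a variable name or constant
    return 1 + sum(complexity(child) for child in expr_tree[1:])

def fitness(expr_tree, predict, X, y, parsimony=0.01):
    """Lower is better. `predict` is a hypothetical evaluator that
    computes the expression tree on the inputs X."""
    mse = np.mean((predict(expr_tree, X) - y) ** 2)
    return mse + parsimony * complexity(expr_tree)
```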
TuringBot is an alternative symbolic regression package based on simulated annealing. Simulated annealing is an optimization algorithm analogous to the annealing used in metallurgy to alter the physical properties of a metal.
In simulated annealing, a "temperature" is gradually lowered as candidate solutions to the optimization problem are selected. Higher temperatures correspond to accepting worse solutions and serve to promote early exploration, enabling the search to find the global optimum and supplying the energy to escape local optima.
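A minimal simulated-annealing loop looks something like the sketch below: the generic textbook algorithm with a geometric cooling schedule, written purely for illustration. TuringBot is closed source, so this is not its implementation.

```python
# Generic simulated annealing: accept a worse candidate with probability
# exp(-delta / T), so a high early temperature promotes exploration and
# escape from local optima, while cooling gradually focuses the search.
import math
import random

def anneal(initial, neighbor, cost, t_start=1.0, t_end=1e-3, steps=10_000):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling
        candidate = neighbor(current)
        delta = cost(candidate) - current_cost
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
        if current_cost < best_cost:
            best, best_cost = current, current_cost
    return best, best_cost
```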
TuringBot offers a free version, but it comes with significant limitations on data set size and equation complexity, and the code cannot be modified.
While commercial symbolic regression software (especially Eureqa) provides an important baseline for comparison when developing new symbolic regression tools, the usefulness of closed-source programs is limited.
Another open-source alternative, PySR, is released under the Apache 2.0 license and led by Princeton University doctoral student Miles Cranmer. PySR shares the optimization goals of accuracy and parsimony (simplicity) with Eureqa and TuringBot, as well as elements of the search methods both use.
In addition to providing a free and freely modifiable library for performing symbolic regression, PySR is also interesting from a software perspective: it is written in Python but uses the Julia programming language as a fast backend.
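A minimal usage sketch, following the interface in PySR's documentation (argument names can differ between versions, and the Julia backend is set up on first use):

```python
# Hedged example of driving PySR's evolutionary search from Python.
import numpy as np
from pysr import PySRRegressor

X = np.random.randn(200, 2)
y = 2.5 * np.cos(X[:, 0]) + X[:, 1] ** 2        # hidden ground-truth equation

model = PySRRegressor(
    niterations=40,                             # search budget
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["cos", "exp", "sin"],
)
model.fit(X, y)                                 # evolves candidate equations
print(model)                                    # accuracy-vs-complexity table
```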
While genetic algorithms are still generally considered the state of the art for symbolic regression, the past few years have seen an exciting explosion of new symbolic regression strategies.
Many of these new developments leverage modern deep learning models, whether as function-approximation components in multi-step pipelines, or end to end using large Transformer models originally developed for natural language processing, or anywhere in between.
In addition to new deep-learning-based symbolic regression tools, there is also a resurgence of probabilistic and statistical methods, especially Bayesian ones.
Combined with modern computing power, this new generation of symbolic regression software is not only an interesting study in its own right, but also offers real utility to scientific disciplines grappling with large data sets and complex experiments.
Symbolic Regression with Deep Neural Networks as Function Approximators
Thanks to the universal approximation theorem, described and studied by Cybenko and Hornik in the late 1980s and early 1990s, we can expect a neural network with at least one hidden layer and a nonlinear activation to approximate any well-behaved mathematical function.
In practice, deeper neural networks tend to achieve better performance on more complex problems; in principle, however, a single hidden layer suffices to approximate a wide variety of functions.
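This is easy to demonstrate on a toy problem. The sketch below, a minimal PyTorch example written for this article rather than taken from any of the papers discussed, fits a single-hidden-layer network to a smooth one-dimensional function:

```python
# One hidden layer plus a nonlinear activation, per Cybenko/Hornik,
# fitting a well-behaved 1-D target function.
import torch

x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(2 * x) * torch.exp(-0.1 * x ** 2)     # smooth target

net = torch.nn.Sequential(
    torch.nn.Linear(1, 128),    # the single hidden layer
    torch.nn.Tanh(),            # nonlinear activation
    torch.nn.Linear(128, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.mean((net(x) - y) ** 2)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.2e}")              # small on the training interval
```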
The physics-inspired AI Feynman algorithm uses the universal approximation theorem as one piece of a larger puzzle.
AI Feynman (and its successor, AI Feynman 2.0) was developed by physicists Silviu-Marian Udrescu and Max Tegmark, along with colleagues. AI Feynman takes advantage of functional properties found in many physics equations, such as smoothness, symmetry, and compositionality, among others.
A neural network serves as the function approximator: it learns the input-output transformation represented in a data set and then facilitates probing these properties by generating synthetic data under the same functional transformation.
The functional properties AI Feynman uses to decompose problems are common in physics equations but cannot be assumed to hold across the space of all possible mathematical functions. They are, however, reasonable properties to look for in functions that describe the real world.
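As one illustration of this style of property test, the sketch below probes a trained surrogate for symmetry under swapping its two arguments by querying it on synthetic points. The `surrogate` callable and the tolerance are stand-in assumptions for illustration; this is not AI Feynman's code.

```python
# Probe a learned surrogate f(x1, x2) for symmetry: if f(a, b) ~= f(b, a)
# everywhere sampled, the search can be restricted to symmetric expressions.
import numpy as np

def is_symmetric(surrogate, n_probes=1000, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.uniform(-1, 1, size=(n_probes, 2))
    gap = np.abs(surrogate(a) - surrogate(a[:, ::-1])).max()
    return gap < tol

# Example with an exactly symmetric stand-in for a trained network:
f = lambda z: z[:, 0] * z[:, 1] + np.cos(z[:, 0] + z[:, 1])
print(is_symmetric(f))    # True
```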
Like the genetic algorithm and simulated annealing methods described previously, AI Feynman fits each new data set from scratch. There is no generalization or pre-training involved, and the deep neural network forms only one orchestrated part of a larger, physics-informed system.
AI Feynman does an excellent job of deciphering the 100 equations (or puzzles) drawn from the Feynman Lectures on Physics, but the lack of generalization means that each new data set (corresponding to a new equation) requires a large computational budget.
A newer set of deep learning strategies for symbolic regression leverages the highly successful family of Transformer models, originally introduced as natural language models by Vaswani et al. These new methods are not perfect, but their use of pre-training can save a great deal of computation at inference time.
First-Generation Symbolic Regression Based on Natural Language Models
Given the success of very large attention-based Transformer models across computer vision, audio, reinforcement learning, recommendation systems, and many other fields, beyond their original role in text-based natural language processing, it is not surprising that Transformers would eventually be applied to symbolic regression as well.
While mapping from numeric input-output pairs to symbolic sequences requires some careful engineering, the sequential nature of mathematical expressions lends itself naturally to Transformer methods.
Crucially, using Transformers to generate mathematical expressions allows these methods to leverage pre-training on the structure and numerical meaning of millions of automatically generated equations.
This also lays the foundation for improvement through scaling. Scaling is one of the main advantages of deep learning: larger models and more data continue to improve performance well beyond the classic statistical-learning limits of overfitting.
Scalability is the main advantage claimed by Biggio et al. in their paper "Neural Symbolic Regression that Scales", abbreviated NSRTS. The NSRTS Transformer uses a dedicated encoder to map each input-output pair in a data set into a latent space. The encoded latent representation has a fixed size, independent of the number of inputs to the encoder.
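The sketch below shows one simple way to obtain such a fixed-size latent: a permutation-invariant set encoder that pools per-pair embeddings. It is a minimal illustration of the idea, not the actual NSRTS architecture.

```python
# Minimal set encoder: any number of (x, y) pairs is condensed into a
# latent vector of fixed size, independent of the data set's length.
import torch

class SetEncoder(torch.nn.Module):
    def __init__(self, in_dim=2, hidden=64, latent=32):
        super().__init__()
        self.phi = torch.nn.Sequential(          # per-pair embedding
            torch.nn.Linear(in_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden),
        )
        self.rho = torch.nn.Linear(hidden, latent)

    def forward(self, pairs):                    # pairs: (n_points, in_dim)
        return self.rho(self.phi(pairs).mean(dim=0))   # pool to fixed size

enc = SetEncoder()
print(enc(torch.randn(100, 2)).shape)            # torch.Size([32])
print(enc(torch.randn(7, 2)).shape)              # torch.Size([32]), same size
```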
The NSRTS decoder constructs a sequence of tokens representing an equation, conditioned on the encoded latent and on the symbols generated so far. Crucially, the decoder outputs only placeholders for numeric constants, but otherwise uses the same vocabulary as the pre-training equation data set.
NSRTS uses PyTorch and PyTorch Lightning and has a permissive open source MIT license.
After generating a constant-free equation (called an equation skeleton), NSRTS uses gradient descent to optimize the constants. This layering of a general-purpose optimization algorithm on top of sequence generation is shared by "SymbolicGPT", developed concurrently by Valipour et al.
Rather than an attention-based encoder as in NSRTS, Valipour et al. use a model based on Stanford's point-cloud architecture PointNet to produce a fixed-dimensional feature set, which the Transformer decoder consumes to generate equations. Like NSRTS, SymbolicGPT uses BFGS to find the numerical constants in the equation skeletons produced by the Transformer decoder.
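The constant-fitting stage common to both methods can be sketched as follows: the decoder emits a skeleton such as c0*sin(c1*x) + c2, and the placeholder constants are then fitted numerically. This example uses SciPy's BFGS purely for illustration; the papers' exact optimization setups differ.

```python
# Fit the placeholder constants of a generated equation skeleton.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0, 4 * np.pi, 200)
y = 1.7 * np.sin(0.9 * x) + 3.0                  # hidden true constants

def skeleton(c, x):                              # c0*sin(c1*x) + c2
    return c[0] * np.sin(c[1] * x) + c[2]

def loss(c):
    return np.mean((skeleton(c, x) - y) ** 2)

result = minimize(loss, x0=np.array([1.0, 1.0, 1.0]), method="BFGS")
print(result.x)   # ~[1.7, 0.9, 3.0] from a nearby start; in practice,
                  # random restarts guard against local minima
```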
Second-Generation Symbolic Regression Based on Natural Language Models
While these recent papers use natural language processing (NLP) Transformers to bring generalization and scalability to symbolic regression, the models above are not truly end to end, because they do not estimate numerical constants.
This can be a serious flaw: imagine a model that generates an equation with 1,000 sinusoidal bases of different frequencies. Optimizing the coefficient of each term with BFGS would probably fit most input data sets well, but in reality it would just be a slow and roundabout way of performing Fourier analysis.
As recently as spring 2022, a second generation of Transformer-based symbolic regression models appeared on arXiv: SymFormer, from Vastl et al., and another end-to-end Transformer from Kamienny and colleagues.
The important difference between these and previous Transformer-based symbolic regression models is that they predict numeric constants as well as symbolic mathematical sequences.
SymFormer uses a double-headed Transformer decoder to perform end-to-end symbolic regression. One head produces mathematical symbols, while the second learns numerical regression, i.e., estimating the numerical constants that appear in the equation.
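The two-head idea can be sketched as a pair of output projections over the decoder's hidden states, as below. This is an illustrative simplification written for this article, not SymFormer's exact architecture.

```python
# Two output heads over shared decoder states: one classifies the next
# symbol token, the other regresses a numeric constant for positions
# where the symbol is a constant placeholder.
import torch

class TwoHeadedOutput(torch.nn.Module):
    def __init__(self, d_model=256, vocab_size=64):
        super().__init__()
        self.symbol_head = torch.nn.Linear(d_model, vocab_size)  # token logits
        self.value_head = torch.nn.Linear(d_model, 1)            # constant estimate

    def forward(self, decoder_states):           # (seq_len, d_model)
        return self.symbol_head(decoder_states), self.value_head(decoder_states)

heads = TwoHeadedOutput()
logits, values = heads(torch.randn(10, 256))
print(logits.shape, values.shape)                # (10, 64) and (10, 1)
```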
The end-to-end models of Kamienny and Vastl differ in details, such as the precision of their numerical estimates, but both groups' solutions still rely on a subsequent optimization step for refinement.
Even so, according to their authors, these models offer faster inference than previous methods and produce more accurate results, better equation skeletons, and good constant estimates to serve as starting points for the refinement step.
The Era of Symbolic Regression is Coming
Symbolic regression has for the most part been an elegant but computationally intensive machine learning method, and over the past decade it has received far less attention than deep learning in general.
This is partly due to the "use it and lose it" nature of genetic and probabilistic approaches, which must start from scratch for each new data set, a trait shared by intermediate applications of deep learning to symbolic regression such as AI Feynman.
Using Transformers as an integral component of symbolic regression allows recent models to take advantage of large-scale pre-training, reducing the energy, time, and computing hardware required at inference time.
This trend has been extended further with new models that can estimate numerical constants and predict mathematical symbols, enabling faster inference and greater accuracy.
The task of generating symbolic expressions, which in turn can be used to generate testable hypotheses, is a very human task and is at the heart of science. Automated methods of symbolic regression have continued to make interesting technical advances over the past two decades, but the real test is whether they are useful to researchers doing real science.
Symbolic regression is starting to produce more and more publishable scientific results beyond technical demonstrations. A Bayesian symbolic regression approach has yielded a new mathematical model for predicting cell division.
Another research team used a sparse regression model to generate plausible equations for ocean turbulence, paving the way for improved multiscale climate models.
A project combining graph neural networks with symbolic regression via Eureqa's genetic algorithm recovered expressions describing many-body gravitation and derived a new equation describing the distribution of dark matter from a conventional simulator.
Future Development of Symbolic Regression Algorithms
Symbolic regression is becoming a powerful tool in the scientist's toolbox. The generalization and scalability of Transformer-based methods are still fresh topics that have not yet penetrated general scientific practice. As more researchers adapt and improve these models, they promise to further advance scientific discovery.
Many of these projects are developed under open-source licenses, so we can expect them to have an impact within a few years, and perhaps a broader one than proprietary software such as Eureqa and TuringBot.
Symbolic regression is a natural complement to the outputs of deep learning models, which are often mysterious and difficult to interpret; output expressed in mathematical language is easier to understand and can help generate new testable hypotheses and drive intuitive leaps.
These characteristics, together with the raw capability of the latest generation of symbolic regression algorithms, promise more opportunities for moments of significant discovery.