


A total of 5 outstanding paper awards and 11 honorable mentions were selected this year.
ICLR stands for the International Conference on Learning Representations. This year's edition, the 12th, was held in Vienna, Austria, from May 7 to 11.
In the machine learning community, ICLR is a relatively "young" top-tier academic conference. Founded by deep learning pioneers and Turing Award winners Yoshua Bengio and Yann LeCun, it held its first edition only in 2013. Nevertheless, ICLR quickly gained wide recognition among researchers and is regarded as a top academic conference on deep learning.
This conference received a total of 7262 submissions and accepted 2260 papers, for an overall acceptance rate of about 31%, roughly the same as last year's 31.8%. In addition, Spotlight papers account for 5% and Oral papers for 1.2%.
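As a quick sanity check on those figures, a few lines of Python reproduce the reported rates from the raw counts. Note one assumption: the Spotlight and Oral percentages are taken here to be shares of submissions, which the article does not state explicitly, so the derived counts are approximations rather than official numbers.

```python
# Quick arithmetic check on the reported ICLR 2024 statistics.
submitted = 7262
accepted = 2260

acceptance_rate = accepted / submitted
print(f"acceptance rate: {acceptance_rate:.1%}")  # -> 31.1%, i.e. "about 31%"

# Assumption: the quoted 5% (Spotlight) and 1.2% (Oral) shares are
# relative to submissions; the counts below are back-solved from the
# rounded percentages, not official figures.
print(f"~{round(submitted * 0.05)} spotlights, ~{round(submitted * 0.012)} orals")
```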
Compared with previous years, whether in the number of participants or the number of paper submissions, ICLR's popularity has grown considerably.
(Chart: paper submission and acceptance numbers from previous ICLR editions.)
Among the awards announced recently, the conference selected 5 Outstanding Paper Awards and 11 Honorable Mention Awards.
## 5 Outstanding Paper Awards
Paper: Generalization in diffusion models arises from geometry-adaptive harmonic representations
- Paper address: https://openreview.net/pdf?id=ANvmVS2Yr0
- Institution: New York University, Collège de France
- Authors: Zahra Kadkhodaie, Florentin Guth, Eero P. Simoncelli, Stéphane Mallat
Paper: Learning Interactive Real-World Simulators
- Paper address: https://openreview.net/forum?id=sFyTZEqmUY
- Institutions: UC Berkeley, Google DeepMind, MIT, University of Alberta
- Authors: Sherry Yang, Yilun Du, Kamyar Ghasemipour, Jonathan Tompson, Leslie Kaelbling, Dale Schuurmans, Pieter Abbeel
As shown in Figure 3 below, UniSim can simulate a series of rich actions, such as washing hands, taking bowls, cutting carrots, and drying hands in a kitchen scene; the upper right of Figure 3 shows pressing different switches; the bottom of Figure 3 are two navigation scenarios.
Paper: Never Train from Scratch: Fair Comparison of Long-sequence Models Requires Data-Driven Priors
- Paper address: https://openreview.net/forum?id=PdaPky8MUn
- Institution: Tel Aviv University, IBM
- Authors: Ido Amos, Jonathan Berant, Ankit Gupta
Paper: Protein Discovery with Discrete Walk-Jump Sampling
- Paper address: https://openreview.net/forum?id=zMPHKOmQNb
- Institution: Genentech, New York University
- Authors: Nathan C. Frey, Dan Berenberg, Karina Zadorozhny, Joseph Kleinhenz, Julien Lafrance-Vanasse, Isidro Hotzel, Yan Wu, Stephen Ra, Richard Bonneau, Kyunghyun Cho, Andreas Loukas, Vladimir Gligorijevic, Saeed Saremi
Paper: Vision Transformers Need Registers
- Paper address: https://openreview.net/forum?id=2dnO3LLiJ1
- Institution: Meta et al
- Author: Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski
- This paper identifies artifacts in the feature maps of vision transformer networks, characterized by high-norm tokens in low-information background regions.
The authors propose key hypotheses about why this phenomenon occurs and provide a simple yet elegant solution, using additional register tokens to absorb these artifacts, thereby enhancing the model's performance on a variety of tasks. Insights gained from this work could also benefit other application areas.
This paper is well written and provides a good example of conducting research: "Identify the problem, understand why it occurs, and then propose a solution."
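To make the register idea concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' code; the class name and dimensions are illustrative assumptions): a few extra learnable tokens are concatenated to the patch sequence before the encoder and simply discarded at the output, giving attention a "scratchpad" so that global computation no longer hijacks patch tokens.

```python
import torch
import torch.nn as nn

class ViTWithRegisters(nn.Module):
    """Minimal sketch: prepend learnable register tokens to the patch
    sequence. A simplification of the paper's idea, not their model."""
    def __init__(self, dim=768, num_registers=4, num_patches=196):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.registers = nn.Parameter(torch.zeros(1, num_registers, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.num_registers = num_registers

    def forward(self, patch_tokens):  # patch_tokens: (B, num_patches, dim)
        B = patch_tokens.shape[0]
        x = patch_tokens + self.pos_embed
        # Registers take part in attention but carry no positional meaning.
        x = torch.cat([self.cls_token.expand(B, -1, -1),
                       self.registers.expand(B, -1, -1), x], dim=1)
        x = self.encoder(x)
        # Discard register outputs: they serve only as a scratchpad, so
        # high-norm global computation stops polluting patch tokens.
        return x[:, 1 + self.num_registers:], x[:, 0]
```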
## 11 Honorable Mentions
In addition to the 5 outstanding papers, ICLR 2024 also selected 11 honorable mentions.
Paper: Amortizing intractable inference in large language models
- Institution: University of Montreal, University of Oxford
- Authors: Edward J Hu, Moksh Jain, Eric Elmoznino, Younesse Kaddar, Guillaume Lajoie, Yoshua Bengio, Nikolay Malkin
- Paper address: https://openreview.net/forum?id=Ouj6p4ca60
- This paper proposes a promising alternative to autoregressive decoding in large language models from a Bayesian inference perspective, which may inspire subsequent research.
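The intractable object here is the posterior over latent reasoning chains, p(z|x) ∝ p(z)·p(x|z), under the language model itself. The paper amortizes this with GFlowNet fine-tuning; the hedged toy below shows only the naive baseline such amortization improves on, self-normalized importance sampling with the prior as proposal (`prior_sample` and `loglik` are assumed callables, not a real API).

```python
import math
import random

def posterior_sample_via_snis(prior_sample, loglik, num_candidates=128):
    """Toy self-normalized importance sampling from p(z|x) ∝ p(z) p(x|z).
    prior_sample(): draw z ~ p(z); loglik(z): return log p(x|z)."""
    zs = [prior_sample() for _ in range(num_candidates)]
    logw = [loglik(z) for z in zs]
    m = max(logw)  # subtract the max for numerical stability
    weights = [math.exp(w - m) for w in logw]
    # Resample one candidate in proportion to its importance weight.
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for z, w in zip(zs, weights):
        acc += w
        if acc >= r:
            return z
    return zs[-1]
```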
Paper: Approximating Nash Equilibria in Normal-Form Games via Stochastic Optimization
- Institution: Google DeepMind
- Authors: Ian Gemp, Luke Marris, Georgios Piliouras
- Paper address: https://openreview.net/forum?id=cc8h3I3V4E
- This is a very clearly written paper that contributes significantly to solving the important problem of developing efficient and scalable Nash solvers.
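For intuition about what a "Nash solver" computes, here is a classic baseline, fictitious play on a two-player zero-sum game, in a few lines of Python. This is emphatically not the paper's stochastic-optimization method, just the textbook point of reference.

```python
import numpy as np

def fictitious_play(payoff, iters=5000):
    """Approximate a Nash equilibrium of a two-player zero-sum game via
    fictitious play. payoff[i, j]: row player's payoff for actions (i, j)."""
    m, n = payoff.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] = col_counts[0] = 1.0
    for _ in range(iters):
        # Each player best-responds to the opponent's empirical strategy.
        row_br = np.argmax(payoff @ (col_counts / col_counts.sum()))
        col_br = np.argmin((row_counts / row_counts.sum()) @ payoff)
        row_counts[row_br] += 1
        col_counts[col_br] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies: the unique equilibrium is (1/2, 1/2) for both players.
x, y = fictitious_play(np.array([[1., -1.], [-1., 1.]]))
print(x, y)  # both converge toward [0.5, 0.5]
```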
Paper: Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness
- Institution: Peking University, Beijing Academy of Artificial Intelligence (BAAI)
- Authors: Bohang Zhang, Jingchu Gai, Yiheng Du, Qiwei Ye, Di He, Liwei Wang
- Paper address: https://openreview.net/forum?id=HSKaGOi7Ar
- The expressiveness of GNNs is an important topic, and current solutions still have significant limitations. The authors propose a new expressivity theory based on homomorphism counting (a toy computation follows below).
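A tiny example of the homomorphism counts in question: for cycles, hom(C_k, G) equals the number of closed k-walks in G, i.e. trace(A^k). The sketch below computes these counts for a toy graph; this is illustrative only, as the paper's framework covers far more general pattern counts.

```python
import numpy as np

def cycle_homomorphism_counts(adj, max_k=6):
    """hom(C_k, G) = trace(A^k): the number of homomorphisms from the
    k-cycle into G equals the number of closed walks of length k."""
    counts, power = {}, np.eye(len(adj))
    for k in range(1, max_k + 1):
        power = power @ adj
        if k >= 3:  # cycles are only well-defined for k >= 3
            counts[k] = int(round(np.trace(power)))
    return counts

# For the triangle K3: hom(C3, K3) = 6 closed 3-walks.
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(cycle_homomorphism_counts(triangle))
```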
Paper: Flow Matching on General Geometries
- Institution: Meta
- Authors: Ricky T. Q. Chen, Yaron Lipman
- Paper address: https://openreview.net/forum?id=g7ohDlTITL
- This paper tackles the challenging but important problem of generative modeling on general geometries and manifolds, and proposes a practical and efficient algorithm. The paper is excellently presented and thoroughly validated on a wide range of tasks.
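As background for readers unfamiliar with flow matching, here is the standard Euclidean conditional flow matching objective in a toy PyTorch script. The paper's contribution is extending this simulation-free style of training to general manifolds, which this sketch does not attempt; the data distribution below is a stand-in.

```python
import torch
import torch.nn as nn

# Toy conditional flow matching in R^2: regress a velocity field onto
# the straight-line path between noise x0 and data x1.
net = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    x0 = torch.randn(256, 2)              # noise samples
    x1 = torch.randn(256, 2) * 0.3 + 2.0  # stand-in "data" samples
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1            # straight-line probability path
    target_v = x1 - x0                    # conditional velocity field
    pred_v = net(torch.cat([xt, t], dim=1))
    loss = ((pred_v - target_v) ** 2).mean()  # flow matching regression loss
    opt.zero_grad(); loss.backward(); opt.step()
```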
Paper: Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video
- Institution: University of Central Florida, Google DeepMind, University of Amsterdam, etc.
- Authors: Shashanka Venkataramanan, Mamshad Nayeem Rizve, Joao Carreira, Yuki M Asano, Yannis Avrithis
- Paper address: https: //openreview.net/forum?id=Yen1lGns2o
- This paper proposes a novel self-supervised image pre-training method that learns from continuous videos, contributing both a new type of data and a method for learning from it.
Paper: Meta-Continual Learning Revisited: Implicitly Enhancing Online Hessian Approximation via Variance Reduction
- Institution: City University of Hong Kong, Tencent AI Lab, Xi'an Jiaotong University, etc.
- Authors: Yichen Wu, Long-Kai Huang, Renzhen Wang, Deyu Meng, Ying Wei
- Paper address: https://openreview.net/forum?id=TpD2aG1h0D
- The authors propose a new variance-reduction method for meta-continual learning. The method performs well: it not only has practical impact but is also supported by a regret analysis.
Paper: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs
- Institution: University of Illinois Urbana-Champaign, Microsoft
- Authors: Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao
- Paper address: https://openreview.net/forum?id=uNrFpDPMyo
- This paper targets the KV cache compression problem, which has a large impact on Transformer-based LLMs, and uses a simple idea to reduce memory that can be deployed without resource-intensive fine-tuning or retraining. The method is very simple and has proven very effective (a toy sketch follows below).
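A hedged toy illustration of KV cache eviction, closer to generic "heavy-hitter" pruning than to the paper's per-head adaptive policies: keep the most recent tokens plus the tokens that have received the most attention, and drop the rest.

```python
import torch

def compress_kv(keys, values, attn_scores, keep_recent=64, keep_heavy=64):
    """Toy KV cache eviction: retain recent tokens plus "heavy hitter"
    tokens with the largest cumulative attention. Inspired by, but not
    identical to, the paper's adaptive per-head compression policies.
    keys/values: (seq, dim); attn_scores: (seq,) cumulative attention mass."""
    seq = keys.shape[0]
    if seq <= keep_recent + keep_heavy:
        return keys, values  # nothing to evict yet
    recent = torch.arange(seq - keep_recent, seq)
    older_scores = attn_scores[: seq - keep_recent]
    heavy = torch.topk(older_scores, keep_heavy).indices
    # Merge both index sets and keep them in positional order.
    idx = torch.sort(torch.cat([heavy, recent])).values
    return keys[idx], values[idx]
```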
Paper: Proving Test Set Contamination in Black-Box Language Models
- Institution: Stanford University, Columbia University
- Authors: Yonatan Oren, Nicole Meister, Niladri S. Chatterji, Faisal Ladhak, Tatsunori Hashimoto
- Paper address: https://openreview.net/forum?id=KS8mIvetg2
- This paper uses a simple and elegant method to test whether a supervised learning dataset has been included in the training data of a large language model.
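The core idea is an exchangeability test: if the model never saw the dataset, no ordering of its examples should be systematically more likely than another, so a suspiciously high likelihood for the canonical order is evidence of contamination. Below is a hedged Python sketch of such a permutation test; `logprob_fn` is an assumed helper returning the model log-probability of examples in a given order, not a real API.

```python
import random

def contamination_pvalue(logprob_fn, examples, num_shuffles=99):
    """Permutation test: compare the likelihood of the dataset's canonical
    ordering against random shufflings. A small p-value suggests the model
    saw the dataset (in that order) during training."""
    canonical = logprob_fn(examples)
    shuffled_scores = []
    for _ in range(num_shuffles):
        perm = examples[:]
        random.shuffle(perm)
        shuffled_scores.append(logprob_fn(perm))
    # Rank of the canonical ordering among all orderings tried.
    rank = sum(s >= canonical for s in shuffled_scores)
    return (1 + rank) / (1 + num_shuffles)
```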
Paper: Robust agents learn causal world models
- Institution: Google DeepMind
- Authors: Jonathan Richens, Tom Everitt
- Paper address: https://openreview.net/forum?id=pOoKI3ouv1
- This paper makes considerable progress in laying the theoretical foundations for understanding the role of causal reasoning in an agent's ability to generalize to new domains, with implications for a range of related fields.
Paper: The mechanistic basis of data dependence and abrupt learning in an in-context classification task
- Institution: Princeton University, Harvard University, etc.
- Author: Gautam Reddy
- Paper address: https://openreview.net/forum?id=aN4Jf6Cx69
- This is a timely and extremely systematic study of the mechanisms underlying in-context learning versus in-weights learning, arriving just as we begin to understand these phenomena.
Paper: Towards a statistical theory of data selection under weak supervision
- Institution: Granica Computing
- Authors: Germain Kolossov, Andrea Montanari, Pulkit Tandon
- Paper address: https://openreview.net/forum?id=HhfcNgQn6p
- This paper establishes statistical foundations for data subset selection and identifies the shortcomings of popular data selection methods.
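To see the kind of question the paper formalizes, the toy experiment below compares random subset selection against a popular margin/uncertainty heuristic, one of the "popular data selection methods" whose behavior the paper analyzes, on synthetic logistic-regression data. Entirely illustrative; it is not the paper's experimental setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification data.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (X @ rng.normal(size=20) + 0.5 * rng.normal(size=5000) > 0).astype(int)
X_pool, y_pool = X[:4000], y[:4000]
X_test, y_test = X[4000:], y[4000:]

# A small "pilot" model scores pool points by margin (small = uncertain).
pilot = LogisticRegression(max_iter=1000).fit(X_pool[:200], y_pool[:200])
margin = np.abs(pilot.decision_function(X_pool))

n = 500
rand_idx = rng.choice(len(X_pool), n, replace=False)   # random subset
unc_idx = np.argsort(margin)[:n]                       # most uncertain points

for name, idx in [("random", rand_idx), ("uncertainty", unc_idx)]:
    clf = LogisticRegression(max_iter=1000).fit(X_pool[idx], y_pool[idx])
    print(name, clf.score(X_test, y_test))
```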
Reference link: https://blog.iclr.cc/2024/05/06/iclr-2024-outstanding-paper-awards/