


This week, the International Conference on Computer Vision (ICCV) opened in Paris, France.
As the world’s top academic conference in the field of computer vision, ICCV is held every two years.
Like CVPR, ICCV’s popularity has hit new highs.
At today's opening ceremony, ICCV officially announced this year's paper statistics: submissions totaled 8,068, of which 2,160 were accepted, for an acceptance rate of 26.8%, slightly higher than the 25.9% acceptance rate of the previous edition, ICCV 2021.
The organizers also released data on paper topics: multi-view and sensor-based 3D is the most popular area.
The highlight of the opening ceremony was the announcement of the awards. Let us go through the Best Paper, Best Paper nomination, and Best Student Paper one by one.
Best Paper (Marr Prize)
Two papers won this year's Best Paper award (the Marr Prize).
The first paper comes from researchers at the University of Toronto.
- Paper address: https://openaccess.thecvf.com/content/ICCV2023/papers/Wei_Passive_Ultra-Wideband_Single-Photon_Imaging_ICCV_2023_paper.pdf
- Authors: Mian Wei, Sotiris Nousias, Rahul Gulve, David B. Lindell, Kiriakos N. Kutulakos
- Institution: University of Toronto
Abstract: This paper considers the problem of imaging dynamic scenes over extreme time scales simultaneously (seconds to picoseconds), and doing so passively, with little light, and without any timing signals from the light sources emitting it. Since existing flux estimation techniques for single-photon cameras fail in this regime, the authors develop a flux probing theory that draws insights from stochastic calculus to reconstruct a pixel's time-varying flux from a stream of photon detection timestamps.
Using this theory, the paper shows that under low-flux conditions, passive free-running SPAD cameras have an attainable frequency bandwidth spanning the entire range from DC to 31 GHz. The paper also derives a novel Fourier-domain flux reconstruction algorithm and shows that its noise model remains valid even at very low photon counts or with non-negligible dead time.
The potential of this asynchronous imaging regime is demonstrated experimentally: (1) imaging scenes illuminated simultaneously by light sources operating at different speeds (such as light bulbs, projectors, and multiple pulsed lasers) without any synchronization; (2) passive non-line-of-sight video acquisition; (3) recording ultra-wideband video that can later be played back at 30 Hz to show everyday motion, or a billion times slower to show the propagation of light itself.
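To make the Fourier-domain idea concrete, here is a minimal, illustrative Python sketch (not the paper's algorithm): for an inhomogeneous Poisson photon stream with flux λ(t), the expectation of Σₖ exp(−2πif·tₖ) over the detection timestamps tₖ equals the Fourier transform of λ at frequency f, so the timestamps alone give an unbiased estimate of the time-varying flux spectrum. All function names and parameters below are hypothetical.

```python
import numpy as np

def simulate_photon_timestamps(flux, t_max, lam_max, rng):
    """Simulate an inhomogeneous Poisson process by thinning: propose
    arrivals at rate lam_max, keep each with probability flux(t)/lam_max."""
    t, stamps = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            return np.array(stamps)
        if rng.random() < flux(t) / lam_max:
            stamps.append(t)

def fourier_flux_estimate(stamps, freqs):
    """Unbiased estimate of the flux's Fourier coefficients:
    E[sum_k exp(-2j*pi*f*t_k)] = integral of flux(t)*exp(-2j*pi*f*t) dt."""
    return np.array([np.sum(np.exp(-2j * np.pi * f * stamps)) for f in freqs])

rng = np.random.default_rng(0)
flux = lambda t: 50.0 * (1.0 + 0.8 * np.sin(2 * np.pi * 3.0 * t))  # 3 Hz modulation
stamps = simulate_photon_timestamps(flux, t_max=10.0, lam_max=100.0, rng=rng)
freqs = np.arange(0.0, 10.0, 0.5)
coeffs = np.abs(fourier_flux_estimate(stamps, freqs)) / 10.0  # per-second amplitude
print(freqs[np.argmax(coeffs[1:]) + 1])  # peaks near the 3 Hz modulation, skipping DC
```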
The second paper is the one we know as ControlNet.
- Paper address: https://arxiv.org/pdf/2302.05543.pdf
- Authors: Lvmin Zhang, Anyi Rao, Maneesh Agrawala
- Institution: Stanford University
Abstract: This study proposes ControlNet, an end-to-end neural network architecture that improves image generation by adding extra conditions to control a diffusion model (such as Stable Diffusion).
The core idea of ControlNet is to supply extra conditions alongside the text prompt, giving finer control over the pose, depth, composition, and other properties of the generated image.
The extra condition is supplied as an image, from which the model can extract Canny edges, depth maps, semantic segmentation, Hough-transform line detection, holistically-nested edge detection (HED), human pose estimation, and so on, and then preserve that information in the generated image. With this model, one can directly convert a line drawing or doodle into a full-color image, generate images that share the same depth structure, and refine the generation of character hands via hand keypoints.
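The mechanism is easy to sketch. Below is a minimal, illustrative PyTorch sketch of ControlNet's core trick as described in the paper: the pretrained block stays frozen, a trainable copy processes the condition, and the two paths are joined through zero-initialized convolutions, so at the start of training the control branch has exactly no effect. The `ControlledBlock` wrapper and the stand-in encoder block are hypothetical simplifications, not the authors' code.

```python
import copy
import torch
import torch.nn as nn

def zero_conv(channels):
    """1x1 convolution initialized to zero, so the control branch
    initially contributes nothing and cannot disturb the frozen model."""
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

class ControlledBlock(nn.Module):
    """Wraps one frozen encoder block of a diffusion U-Net (a hypothetical
    stand-in here) with a trainable copy that injects a spatial condition."""
    def __init__(self, block, channels):
        super().__init__()
        self.locked = block                      # pretrained weights, frozen
        for p in self.locked.parameters():
            p.requires_grad_(False)
        self.copy = copy.deepcopy(block)         # trainable copy
        self.zin = zero_conv(channels)           # condition -> copy input
        self.zout = zero_conv(channels)          # copy output -> residual

    def forward(self, x, cond):
        # The frozen path is untouched; the control path adds a residual
        # that is exactly zero at initialization (both zero-convs output 0).
        return self.locked(x) + self.zout(self.copy(x + self.zin(cond)))

# Toy usage with a stand-in "encoder block" (a real ControlNet wraps
# Stable Diffusion's U-Net encoder blocks instead).
block = nn.Conv2d(64, 64, 3, padding=1)
ctrl = ControlledBlock(block, channels=64)
x, cond = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
assert torch.allclose(ctrl(x, cond), block(x))   # no effect before training
```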
For a more detailed introduction, please refer to Heart of the Machine's report, "AI dimensionality reduction strikes human painters: text-to-image gains ControlNet, with depth and edge information fully reusable."
Best Paper Nomination: SAM
In April of this year, Meta released an AI model called "Segment Anything" (SAM) that can generate masks for objects in any image or video, shocking researchers in the field of computer vision; some even said that "computer vision no longer exists."
Now, this high-profile paper is nominated for the best paper.
- Paper address: https://arxiv.org/abs/2304.02643
- Institution: Meta AI
Before SAM, there were generally two approaches to segmentation. The first, interactive segmentation, can segment objects of any class but requires a person to guide the method by iteratively refining a mask. The second, automatic segmentation, can segment predefined object categories (such as cats or chairs) but requires a large number of manually annotated examples for training (thousands or even tens of thousands of segmented cats, for example). Neither approach offers a universal, fully automatic solution to segmentation.
SAM neatly generalizes these two approaches. It is a single model that can perform both interactive and automatic segmentation. Its promptable interface lets users apply it flexibly: a wide range of segmentation tasks can be accomplished simply by designing the right prompt for the model (clicks, box selections, text, and so on).
In summary, these capabilities enable SAM to adapt to new tasks and domains, a flexibility unique in the field of image segmentation.
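As a concrete illustration of promptable segmentation, here is a short usage sketch based on Meta's open-source `segment-anything` package (`pip install segment-anything`); the checkpoint filename, image path, and prompt coordinates below are placeholders.

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint and wrap it in the promptable predictor.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # one image embedding, reused across many prompts

# Prompt 1: a single foreground click (label 1 = foreground point).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,   # return 3 candidate masks for an ambiguous click
)

# Prompt 2: a bounding box around the same object.
masks, scores, _ = predictor.predict(
    box=np.array([400, 300, 700, 500]),  # XYXY pixel coordinates
    multimask_output=False,
)
print(masks.shape, scores)   # (1, H, W) boolean mask and its quality score
```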
For details, please refer to Heart of the Machine's report, "Does CV still exist? Meta releases 'Segment Anything' AI model; CV may be facing its GPT-3 moment."
Best Student Paper
The Best Student Paper was completed jointly by researchers from Cornell University, Google Research, and UC Berkeley; the first author is Qianqian Wang, a doctoral student at Cornell Tech. Together they propose OmniMotion, a complete and globally consistent motion representation, along with a new test-time optimization method that performs accurate, full-length motion estimation for every pixel in a video.
- Paper address: https://arxiv.org/abs/2306.05422
- Project homepage: https://omnimotion.github.io/
OmniMotion represents a video with a quasi-3D canonical volume and tracks each pixel through bijections between local space and canonical space. This representation guarantees global consistency, tracks motion even when objects are occluded, and models any combination of camera and object motion. Experiments demonstrate that the method significantly outperforms existing SOTA methods.
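To illustrate the bijection idea, here is a minimal, hypothetical PyTorch sketch: an analytically invertible coupling layer, conditioned on a per-frame latent code, maps a local 3D point into the canonical volume and back out into another frame, so the correspondence between frames i and j is T_j⁻¹(T_i(x)). The paper's actual model is far richer (stacked invertible layers plus a NeRF-like canonical representation); everything below is a simplification.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One Real-NVP-style coupling layer: transforms one coordinate of a 3D
    point conditioned on the other two and a per-frame latent code. It is
    analytically invertible, which makes the local<->canonical map a bijection."""
    def __init__(self, latent_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # predicts (log_scale, shift) for 1 coordinate
        )

    def forward(self, x, z):
        keep, change = x[..., :2], x[..., 2:]        # split (x, y) | (z)
        log_s, t = self.net(torch.cat([keep, z], -1)).chunk(2, -1)
        return torch.cat([keep, change * log_s.exp() + t], -1)

    def inverse(self, y, z):
        keep, changed = y[..., :2], y[..., 2:]
        log_s, t = self.net(torch.cat([keep, z], -1)).chunk(2, -1)
        return torch.cat([keep, (changed - t) * (-log_s).exp()], -1)

# Correspondence between frames i and j: lift a pixel to a local 3D point,
# map it into the canonical volume with frame i's code, then map back out
# with frame j's code: x_j = T_j^{-1}(T_i(x_i)).
T = AffineCoupling()
z_i, z_j = torch.randn(1, 32), torch.randn(1, 32)   # per-frame latent codes
x_i = torch.randn(1, 3)                             # point in frame i's space
x_canonical = T(x_i, z_i)
x_j = T.inverse(x_canonical, z_j)
# Round trip with the same frame code recovers the input (bijectivity check).
assert torch.allclose(T.inverse(T(x_i, z_i), z_i), x_i, atol=1e-5)
```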
For a more detailed introduction, please refer to Heart of the Machine's report, "The 'track everything' video algorithm is here: it tracks every pixel anytime, anywhere, unafraid of occlusion."
In addition to these award winners, this year's ICCV features many other outstanding papers worth your attention. The following is an initial list of the 17 award-winning papers.