Microsoft patent sharing for seamless transition between 2D and 3D in Metaverse remote meetings

(Nweon December 26, 2023) The growth of remote meetings is driving adoption of the Metaverse. However, current online meeting applications face a major problem in meta-environments: not all participants use the same type of device. For example, some users join from PCs while others use virtual reality headsets.

Desktop users are sometimes at a disadvantage because they cannot navigate the virtual environment or interact with all of its users. A computer provides only a 2D view of the 3D environment, and it is limited in the input gestures it can receive for navigating or interacting with that environment.

Although the technology is developing rapidly, the experience of VR headset users and PC users is not the same. Moreover, existing systems do not allow seamless transitions from a VR headset to a desktop device, or vice versa, during events such as parties or company meetings.

Microsoft's patent application, titled "Rendering of 2D and 3D transitions in user-engaged communication sessions," details a method for such seamless transitions.


Figures 1A and 1B illustrate the transition of a user interface arrangement from the display of a two-dimensional image of the user to the presentation of a three-dimensional representation of the user while the user is engaged in a communication session.

The communication session may be managed by a system 100 composed of several computers 11, each corresponding to one of several users 10. In this example, the presentation of the third user 10C undergoes a transition from 2D mode to 3D mode.

To initiate a transition, the system can receive an input that causes a display transition for a specific user's 2D image presentation. In this example, the input identifies the third user 10C. The input may also grant permission for the system to access the 3D model defining a position and orientation for the third user 10C. These positions and orientations may include vectors and coordinates in the 3D environment 200, referred to herein as virtual environment 200.

In response to receiving the input, one or more computers of system 100 may modify user interface 101 to remove the rendering of the 2D image 151C of user 10C shown in Figure 1A and to add the rendering of the 3D representation 251C of user 10C shown in Figure 1B. The 3D representation 251C of user 10C may be positioned and oriented in the 3D environment based on the coordinates and/or vectors defined in the 3D model.
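The patent publishes no code, but the transition step described above can be sketched in Python. The `Session` and `Pose` classes and all field names below are hypothetical illustrations, not Microsoft's actual data structures: the idea is simply that the user's 2D tile is dropped and a 3D representation is instantiated at the pose stored in the 3D model.

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    position: tuple      # (x, y, z) coordinates in the virtual environment
    orientation: tuple   # direction vector the avatar faces

@dataclass
class Session:
    tiles_2d: dict = field(default_factory=dict)    # user_id -> 2D image rendering
    avatars_3d: dict = field(default_factory=dict)  # user_id -> Pose of 3D representation
    model: dict = field(default_factory=dict)       # 3D model: user_id -> stored Pose

    def transition_to_3d(self, user_id: str) -> Pose:
        """Remove the user's 2D tile and add a 3D avatar at the pose
        defined in the 3D model, as in the Figure 1A -> 1B transition."""
        self.tiles_2d.pop(user_id, None)    # remove the 2D image rendering (e.g. 151C)
        pose = self.model[user_id]          # coordinates/vectors from the 3D model
        self.avatars_3d[user_id] = pose     # add the 3D representation (e.g. 251C)
        return pose
```

In this sketch the 3D model is the single source of truth for placement, which matches the patent's description of positioning the representation from the model's coordinates and vectors.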

In this example, the rendering of user 10C's 2D image is removed and can then be replaced with another rendering. For example, the 2D image of the third user 10C shown in Figure 1A is replaced in the UI with a 2D image of another user, the fourth user 10D, shown in Figure 1B.

This transition allows users to interact with computing devices in different ways. For example, if user 10C wishes to switch from live video streaming in a communication session to another mode of operation that allows interaction with other users in a 3D environment, the system can switch the user from a mode for interacting with general content such as documents, spreadsheets, and slides to a mode that allows interaction with 3D objects.

A transition during a communication session lets selected users apply editing tools appropriate to the content types of each environment. For example, if a person in a video stream wishes to leave the 2D mode and enter a 3D environment to show other users how to move objects to specific locations or shape particular 3D objects, such a switch lets them do so more easily.

Users can perform this transition from a desktop PC without any type of XR headset. The desktop alone lets the user enter a 3D mode of interaction with the 3D computing environment, which may be better suited to editing or viewing certain types of content.

Microsoft noted that one of the technical benefits is that the system can allow users to switch between 3D mode and 2D mode of the communication session regardless of what hardware they are interacting with.

The technology described in the invention also applies to head-mounted displays. In such embodiments, the user may keep using a single computing device, such as a head-mounted display, while the interaction model transitions from a 3D computing environment to a 2D computing environment. A user can thus start in a 3D computing environment, represented by the 3D representation 251C shown in Figure 1B.

Then, in response to one or more inputs, such as the user starting to edit content of a particular file type, or an input indicating an intent to perform a UI transition, the system may transform the UI to remove the rendering of the 3D representation 251C shown in Figure 1B and generate a rendering of the user's 2D image 151C, such as the one shown in Figure 1A. This allows users to transition to a 2D environment without actually using a desktop device with a flat-screen display and keyboard.


Figures 2A and 2B illustrate another example of a transition of a user interface from a display with a two-dimensional image of the user to a presentation of a three-dimensional representation of the user while the user is participating in a communication session.

In this example, the user interface 201 is a presentation of a 3D environment based on a 3D model. User interface 201 begins with a 3D rendering of representation 251A of first user 10A and a 3D rendering of representation 251B of second user 10B. Each 3D rendering has a position and an orientation determined by the virtual object properties stored in the 3D model.

The 3D environment also includes a virtual object 275 in the form of a virtual flat screen television mounted on the wall of the virtual environment. Virtual object 275 has a display surface that displays a virtual user interface that displays a 2D rendering 151C of the third user 10C and a 2D rendering 151D of the fourth user 10D.


Figures 3A and 3B illustrate another aspect of the third user's transition. In this example, the presentation of the third user 10C undergoes a transition from 2D mode to 3D mode.

As shown in Figure 3A, the user interface 301 first displays two-dimensional images of Jessamine, Lawrence, and Mike, rendered as images 151A, 151B, and 151D respectively. The user interface also includes a presentation of the 3D environment 200 with the 3D representations 251A and 251B of two other users.

In response to the input data described herein, the system performs the transition. In the third user's case, the third computer 11C of the third user 10C transitions from the user interface shown in Figure 3A to the user interface shown in Figure 3B.

After the transition, Charlotte's computer 11C displays the modified user interface 301, as shown in Figure 3B. The system maintains the state of each user: the 3D representations 251A and 251B of the two users shown in Figure 3A are preserved in Figure 3B.

As also shown in Figure 3B, the modified user interface 301 includes a virtual object 275, in this case a virtual display device, that shows 2D renderings of the other users who were originally displayed as 2D images, such as Jessamine and Lawrence in Figure 3A.

The modified UI 301 now displays Charlotte's perspective as if she had teleported from a 2D environment into the 3D environment. As in the other examples, the system can determine the position and orientation of Charlotte's avatar in this teleportation based on one or more factors.

In such an example, Charlotte might be operating a device, such as a PC. Then, in response to one or more inputs described herein, the system can transition from the user interface of Figure 3A to the user interface of Figure 3B while continuing to use the desktop PC. This example transition can be achieved even without using a headset traditionally used for viewing 3D renderings.

In another example, the transition may run in reverse: Charlotte starts with the user interface of Figure 3B and then transitions to the user interface of Figure 3A. In this case, Charlotte might be operating a different device, such as a head-mounted display. She first navigates the 3D environment shown in Figure 3B; then, in response to one or more inputs described herein, the system can transition from the user interface of Figure 3B to that of Figure 3A while she continues to use the headset. This transition is possible even without a computer traditionally used to view 2D images.


Figure 4A illustrates other features of UI transitions. Upon receiving input that causes the UI to transition from a presentation of a 2D image of user 10C to a presentation of a 3D representation of user 10C, the system may determine the position and orientation of that 3D representation.

For example, if a model starts with only two virtual objects 351A and 351B representing users in the virtual environment 200, the system can determine the position and orientation of a newly added virtual object 351C representing a user. In this example, when the input identifies a particular user, such as the third user 10C, the system may determine the position and orientation of virtual object 351C based on the locations of other users in virtual environment 200 and/or the location of shared content.

In one illustrative example, if the system determines that virtual object 351C representing the third user 10C is to be added to virtual environment 200, the system may position virtual object 351C so that the user's avatar appears to be viewing content shared with user 10C.

In another example, if the system determines that virtual object 351C representing the third user 10C is to be added to virtual environment 200, the system may position virtual object 351C so that the avatar appears to be in conversation with another user's avatar.

In one embodiment, the placement of each virtual object 351 may be based on team membership, user groups, and/or policies established by individual users or user groups. For example, if a person belongs to a corporate team and is identified in an input to transition the user interface, their avatar may be positioned within a threshold distance of the other team members.
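As a rough sketch of such a placement policy, a joining teammate could be spawned near the centroid of the existing team members' avatars and checked against the threshold distance. The centroid heuristic and function names are assumptions for illustration, not the patent's algorithm; coordinates are 2D top-down for simplicity.

```python
import math

def place_near_team(team_positions, threshold=2.0):
    """Pick a spawn point for a joining avatar near the centroid of the
    team's existing avatar positions (2D top-down coordinates)."""
    cx = sum(p[0] for p in team_positions) / len(team_positions)
    cz = sum(p[1] for p in team_positions) / len(team_positions)
    # Offset slightly from the centroid so the new avatar overlaps no one.
    return (cx + threshold * 0.5, cz)

def within_threshold(a, b, threshold=2.0):
    """True if two avatars satisfy the threshold-distance policy."""
    return math.dist(a, b) <= threshold
```

A real system would also check the spawn point against scene geometry and other avatars before committing it to the 3D model.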

Microsoft patent sharing for seamless transition between 2D and 3D in Metaverse remote meetings

Figure 4B illustrates two modes of operation of the system and how each changes the permissions of individuals participating in a communication session. In the first mode, shown in the upper part of Figure 4B, when a representation of the user is not included in the 3D model, permissions may allow the system to use an image file to display a 2D image of the user.

In this case, the 3D model data is in a first state 320A, in which the selected user has no virtual object representing them in the 3D environment 200. When the model is in this state, the permission data 315 associated with the user is configured to allow the system and other users to access the user's image data 310. This means the system and each remote user's client can use the image data 310 to generate a representation of that user, and the system can edit the image data 310.

When the system detects that the 3D model data is in a second state, for example when model data 320B contains a virtual object 351C representing the selected user, the system modifies the permissions to limit use of that user's image data. As shown in the figure, the system modifies permission data 315 to restrict it from reading image data 310 to display the user's 2D image. In this mode of operation, permissions are configured to restrict every user's access to the image data, preventing all clients from accessing or displaying the 2D image file.
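The state-dependent permission rule can be sketched as a pure function of the model's contents. This is a minimal illustration with invented key names, not the patent's data format: the point is that the presence of a virtual object for the user (state 320B) flips read access to the 2D image data off, and its absence (state 320A) flips it on.

```python
def image_permissions(model_objects, user_id):
    """Derive the permission state for a user's 2D image data from whether
    the 3D model contains a virtual object representing that user
    (states 320A and 320B in Figure 4B)."""
    has_avatar = user_id in model_objects
    return {
        # Image data 310 is readable only while no avatar exists (state 320A).
        "image_read_allowed": not has_avatar,
        # Once an object like 351C is present (state 320B), the avatar takes over.
        "avatar_active": has_avatar,
    }
```

Deriving permissions from model state, rather than storing them separately, avoids the two ever disagreeing, though the patent describes the system actively modifying permission data 315.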

Figures 5A and 5B illustrate features of a system configured to locate a user's representation in a 3D environment 200 relative to shared content.


Figure 5A shows avatar orientations for the first user 351A and the second user 351B in a scenario where they view shared content in a 3D environment. The shared content can be displayed on a virtual object, such as a virtual display. When the system detects that a number of users are viewing shared content, it generates an orientation for the third user's avatar as it enters the 3D environment.

An example of these features is shown in Figure 5B. In this example, the third user's avatar 351C is added to the virtual environment. The avatar is oriented toward the shared content in response to the system detecting that the other users have that content within their fields of view. The system can also determine the geometry of each person's field of view and position the third user's avatar so that it does not block the other users' views.

Figures 6A and 6B illustrate positioning a user's representation relative to other users in the 3D environment 200. Figure 6A shows a scenario in which the first and second users' avatars are oriented so that the users are looking at each other in the virtual environment.


Within a specific team or predetermined group, when the system determines that a number of people are looking at each other, it can position the avatar of a third user entering the environment so that the avatar can see the other users. Figure 6A shows several avatars, at least three of which have other group members within their fields of view. When the system determines that a threshold number of avatars have other group members in view, as shown in Figure 6B, it can admit a new group member into the virtual environment at a position and orientation that lets that user see the other group members.
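The "threshold number of avatars looking at each other" condition can be sketched with a simple viewing-cone test. The cone half-angle and the 2D simplification are assumptions for illustration; the patent does not specify this math.

```python
import math

def in_field_of_view(pos, heading, target, half_angle_deg=60.0):
    """True if `target` lies within the viewing cone of an avatar at `pos`
    facing along the unit vector `heading` (2D top-down)."""
    dx, dz = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dz)
    if d == 0:
        return False
    cos_angle = (dx * heading[0] + dz * heading[1]) / d
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def group_is_gathered(avatars, threshold=2):
    """Count avatars that have at least one other group member in view,
    and compare against a threshold, as in the Figure 6A/6B condition.
    `avatars` is a list of (position, heading) pairs."""
    count = 0
    for i, (pos, heading) in enumerate(avatars):
        others = [p for j, (p, _) in enumerate(avatars) if j != i]
        if any(in_field_of_view(pos, heading, p) for p in others):
            count += 1
    return count >= threshold
```

Once `group_is_gathered` holds, a system like the one described could run a placement routine for the joining member so that the other group members fall inside the newcomer's own viewing cone.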

Related Patents: Microsoft Patent | 2d and 3d transitions for renderings of users participating in communication sessions

The Microsoft patent application titled "2d and 3d transitions for renderings of users participating in communication sessions" was originally submitted in May 2022 and was recently published by the US Patent and Trademark Office.

Note that, generally, a U.S. patent application is automatically published 18 months after its filing or priority date, or earlier at the applicant's request. Publication of an application does not mean the patent has been granted; after filing, the application undergoes substantive examination by the USPTO, which can take one to three years.


Statement
This article is reproduced from Sohu. In case of infringement, please contact admin@php.cn for deletion.
