
Do you still remember the earlier AI "mind-reading" demos? Recently, this ability to "make all your wishes come true" has evolved again.

Humans can now control robots directly with their own thoughts!

MIT researchers released the Ddog project. They independently developed a brain-computer interface (BCI) device to control Boston Dynamics' robot dog Spot.

Guided by the user's thoughts, the robot dog can move to specific areas, fetch objects, or take photos.

Unlike earlier systems, which required a sensor-laden headset to "read the mind," this brain-computer interface comes in the form of a pair of wireless glasses (AttentivU).

Although the behavior shown in the video is simple, the purpose of this system is to transform Spot into a basic communication tool to help people with diseases such as ALS, cerebral palsy, or spinal cord injury.

All it takes is two iPhones and a pair of glasses to bring practical help and care to people in desperate circumstances.

And, as the accompanying paper shows, this system is built on very complex engineering.


Paper address: https://doi.org/10.3390/s24010080

Usage of the Ddog system

AttentivU is a brain-computer interface system with sensors embedded in the frame of a pair of glasses that measure a person's electroencephalogram (EEG), or brain activity, and electrooculogram (EOG), or eye movements.

The foundation for this research is MIT’s Brain Switch, a real-time, closed-loop BCI that allows users to communicate nonverbally and in real time with caregivers.

The Ddog system has an 83.4% success rate and is the first time a wireless, non-visual BCI system has been integrated with Spot in a personal assistant use case.

In the video, we can see the evolution of brain interface devices and some of the thoughts of developers.

Prior to this, the research team had already connected the brain-computer interface to a smart home; now it has achieved control of a robot that can move and manipulate objects.

These studies offer such groups a glimmer of light, giving them hope of a better life in the future.


Compared to the octopus-like sensor headgear, the glasses below are indeed much cooler.

[Image: the AttentivU glasses]

According to the National Organization for Rare Diseases, there are currently 30,000 ALS patients in the United States, and an estimated 5,000 new cases are diagnosed each year. Additionally, approximately 1 million Americans have cerebral palsy, according to the Cerebral Palsy Guide.

Many of these people have lost or will eventually lose the ability to walk, dress, talk, write, and even breathe.

While communication aids do exist, most are eye-gaze devices that let users communicate through a computer. There aren't many systems that let users interact with the world around them.

This BCI quadruped robotic system serves as an early prototype, paving the way for the future development of modern personal assistant robots.

Hopefully, we can see even more amazing capabilities in future iterations.

Brain-controlled quadruped robot

In this work, researchers explore how a wireless, wearable BCI device can control a quadruped robot: Boston Dynamics' Spot.

The device developed by the researchers measures the user's electroencephalogram (EEG) and electrooculogram (EOG) activity through electrodes embedded in the frame of the glasses.

Users answer a series of questions in their mind ("yes" or "no"), and each question corresponds to a set of preset Spot operations.

For example, one sequence prompts Spot to walk across a room, pick up an object (such as a bottle of water), and then bring it back to the user.
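
To make the mapping concrete, here is a minimal illustrative sketch (not the authors' actual code) of how decoded yes/no answers could be tied to preset Spot action sequences; every action name and function in it is hypothetical:

```python
# Hypothetical sketch: mapping decoded "yes"/"no" answers to preset Spot
# action sequences. Action names are illustrative only; the real Ddog system
# drives Spot through Boston Dynamics' own control stack.

PRESET_ACTIONS = {
    "fetch water": ["walk_to_kitchen", "pick_up_bottle", "return_to_user"],
    "take a photo": ["walk_to_window", "capture_image", "return_to_user"],
}

def run_dialog(decode_answer, execute):
    """Ask each preset question; on a decoded 'yes', run its action sequence."""
    for action_name, steps in PRESET_ACTIONS.items():
        if decode_answer(f"Should Spot {action_name}?") == "yes":
            for step in steps:
                execute(step)  # e.g. send a command to the robot controller
            break
```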

Robots and BCI

To this day, EEG remains one of the most practical and applicable non-invasive brain-computer interface methods.

BCI systems can be controlled using endogenous (spontaneous) or exogenous (evoked) signals.

In exogenous brain-computer interfaces, evoked signals occur when a person pays attention to external stimuli, such as visual or auditory cues.

The advantages of this approach include minimal training and high bit rates of up to 60 bits/min, but it requires the user to constantly focus on the stimulus, which limits its applicability in real-life situations. Furthermore, users tire quickly when using exogenous BCIs.

In endogenous brain-computer interfaces, control signals are generated independently of any external stimulus and can be fully executed by the user on demand. For those users with sensory impairments, this provides a more natural and intuitive way of interacting, allowing users to spontaneously issue commands to the system.

However, this method usually requires longer training time and has a lower bit rate.

Robotic applications of brain-computer interfaces usually target people who need assistance, and they often involve wheelchairs or exoskeletons.

The figure below shows the latest progress in brain-computer interface and robotics technology as of 2023.

[Figure: progress in brain-computer interface and robotics technology as of 2023]

Quadruped robots are often used to support users in complex work environments or defense applications.

One of the most famous quadruped robots is Boston Dynamics’ Spot, which can carry up to 15 kilograms of payload and iteratively map maintenance sites such as tunnels. The real estate and mining industries are also adopting quadruped robots like Spot to help monitor job sites with complex logistics.

This work controls the Spot robot with a mobile BCI solution based on mental arithmetic tasks; the overall architecture is named Ddog.

Ddog architecture

The following figure shows the overall structure of Ddog:

[Figure: overall structure of the Ddog system]

Ddog is an autonomous application that enables users to control the Spot robot through input from the BCI, while the application uses voice to provide feedback to the user and their caregivers.

The system is designed to work completely offline or completely online. The online version has a more advanced set of machine learning models, as well as better fine-tuned models, and is more power efficient for local devices.

The entire system is designed for real-life scenarios and allows for rapid iteration on most parts.


On the client side, the user interacts with the brain-computer interface device (AttentivU) through a mobile application that uses the Bluetooth Low Energy (BLE) protocol to communicate with the device.
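
As an illustration of this client-side link, the sketch below shows how a phone-side script could subscribe to notifications from a BLE wearable using the Python bleak library; the device address and characteristic UUID are placeholders, not AttentivU's real identifiers:

```python
# Minimal sketch of streaming data from a BLE wearable with the "bleak"
# library. Address and UUID below are placeholders, not AttentivU's values.
import asyncio
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"                      # placeholder MAC address
EEG_CHAR_UUID = "0000aaaa-0000-1000-8000-00805f9b34fb"    # placeholder characteristic UUID

def on_eeg_packet(_, data: bytearray):
    # In a real app this would be parsed and forwarded to the classifier.
    print(f"received {len(data)} bytes of EEG data")

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(EEG_CHAR_UUID, on_eeg_packet)
        await asyncio.sleep(10.0)   # stream notifications for 10 seconds
        await client.stop_notify(EEG_CHAR_UUID)

asyncio.run(main())
```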

The user’s mobile device communicates with another phone controlling the Spot robot to enable agency, manipulation, navigation, and ultimately assistance to the user.

Communication between the phones can go over Wi-Fi or a mobile network: the phone controlling Spot establishes a Wi-Fi hotspot, and both Ddog and the user's phone connect to it. In online mode, the system can also connect to models running in the cloud.

Server side

The server side uses Kubernetes (K8s) clusters, each deployed in its own Virtual Private Cloud (VPC).

The cloud runs within a dedicated VPC, typically deployed in the Availability Zone closest to the end user, minimizing the response latency of each service.

Each container in the cluster serves a single purpose (a microservice architecture), and each service runs an AI model. Their tasks include navigation, mapping, computer vision, manipulation, localization, and agency.

Mapping: A service that collects information about the robot's surroundings from different sources. It maps static, immovable data (a tree, a building, a wall) but also collects dynamic data that changes over time (a car, a person).

Navigation: Based on map data collected and augmented in previous services, the navigation service is responsible for constructing a path between point A and point B in space and time. It is also responsible for constructing alternative routes, as well as estimating the time required.

Computer vision: collects visual data from the robot's cameras and augments it with data from the phone to build spatial and temporal representations. This service also attempts to segment each visual point and identify objects.

The cloud is also responsible for training the BCI-related models, which use electroencephalogram (EEG), electrooculogram (EOG), and inertial measurement unit (IMU) data.


The offline models deployed on the phone handle data collection and aggregation, and use TensorFlow's mobile models (optimized for smaller RAM and ARM CPUs) for real-time inference.
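
A minimal sketch of what such on-device inference can look like with a TensorFlow Lite interpreter is shown below; the model file name and input shape are assumptions for illustration, not details from the paper:

```python
# Sketch of on-device inference with a TensorFlow Lite model, as might be
# used for the offline EEG/EOG classifier. Model path is a placeholder.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="eeg_classifier.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One EEG window, shaped to whatever the model expects (zeros as dummy data).
window = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], window)
interpreter.invoke()
probabilities = interpreter.get_tensor(output_details[0]["index"])
print("class probabilities:", probabilities)
```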

Vision and manipulation

The original version used to deploy the segmentation model was a single TensorFlow 3D model leveraging LIDAR data. The authors then extended this to a few-shot model and enhanced it by running complementary models on Neural Radiance Field (NeRF) and RGBD data.

The raw data collected by Ddog is aggregated from five cameras. Each camera can provide grayscale, fisheye, depth and infrared data. There is also a sixth camera inside the arm's gripper, with 4K resolution and LED capabilities, that works with a pre-trained TensorFlow model to detect objects.

The point cloud is generated from the lidar and RGBD data coming from Ddog and the phone. Once acquisition is complete, the data is normalized into a single coordinate system and matched to a global state that brings together all imaging and 3D positioning data.
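
A toy sketch of this kind of normalization, merging per-sensor point clouds into one global frame with 4x4 homogeneous transforms, might look like the following (the actual transforms would come from the robot's calibration and localization, which are not shown):

```python
# Toy sketch: bring per-sensor point clouds into one global frame using
# 4x4 homogeneous transforms. Identity transforms are used as placeholders.
import numpy as np

def to_global(points_xyz: np.ndarray, T_sensor_to_global: np.ndarray) -> np.ndarray:
    """points_xyz: (N, 3) points in the sensor frame; T: 4x4 homogeneous transform."""
    homogeneous = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    return (homogeneous @ T_sensor_to_global.T)[:, :3]

# Example: merge a lidar cloud and a phone RGBD cloud into one global cloud.
lidar_cloud = np.random.rand(100, 3)
phone_cloud = np.random.rand(50, 3)
global_cloud = np.vstack([
    to_global(lidar_cloud, np.eye(4)),
    to_global(phone_cloud, np.eye(4)),
])
```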

Manipulation depends entirely on the quality of the robotic arm gripper mounted on Ddog; the one pictured below is manufactured by Boston Dynamics.

[Figure: the Spot arm gripper manufactured by Boston Dynamics]

The experiments limit the use cases to basic interactions with objects in predefined locations.

The authors mapped out a large laboratory space and set it up as an "apartment" containing a "kitchen" area (a tray with different cups and bottles), a "living room" area (a small sofa with pillows and a small coffee table), and a "window lounge" area.


The number of use cases keeps growing, so the only way to cover most of them is to deploy the system to run continuously for a period of time and use the collected data to optimize these sequences and experiences.

AttentivU

EEG data is collected from the AttentivU device. The electrodes of AttentivU glasses are made of natural silver and are located at TP9 and TP10 according to the international 10-20 electrode placement system. The glasses also include two EOG electrodes located on the nose pads and an EEG reference electrode located at the Fpz position.

These sensors can provide the information needed and enable real-time, closed-loop intervention when needed.


The device has two modes, EEG and EOG, which can be used to capture signals of attention, engagement, fatigue, and cognitive load in real time. EEG has been used as a neurophysiological indicator of the transition between wakefulness and sleep, while EOG is based on measuring the bioelectrical signals induced by eye movements due to the corneal-retinal dipole property. Research shows that eye movements correlate with the type of memory access needed to perform certain tasks and are a good measure of visual engagement, attention, and drowsiness.

Experiment

The EEG data is first divided into windows, each defined as 1 second of EEG data with 75% overlap with the previous window.
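
A minimal sketch of this windowing step, assuming a hypothetical sampling rate of 256 Hz, could look like this:

```python
# Sketch: split a continuous EEG recording into 1-second windows with 75%
# overlap (a hop of 0.25 s). The 256 Hz sampling rate is an assumption.
import numpy as np

def make_windows(eeg: np.ndarray, fs: int = 256) -> np.ndarray:
    """eeg: (n_channels, n_samples). Returns (n_windows, n_channels, fs)."""
    win = fs                 # 1-second window
    hop = fs // 4            # 75% overlap -> advance by 25% of the window
    starts = range(0, eeg.shape[1] - win + 1, hop)
    return np.stack([eeg[:, s:s + win] for s in starts])

windows = make_windows(np.random.randn(2, 256 * 10))  # 10 s of 2-channel EEG
print(windows.shape)                                  # -> (37, 2, 256)
```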

Then comes data preprocessing and cleaning. Data were filtered using a combination of a 50 Hz notch filter and a bandpass filter with a passband of 0.5 Hz to 40 Hz to ensure removal of power line noise and unwanted high frequencies.
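
The same preprocessing can be sketched with SciPy as below; the sampling rate and filter orders are assumptions for illustration rather than values taken from the paper:

```python
# Sketch of the described cleaning: a 50 Hz notch filter plus a 0.5-40 Hz
# band-pass filter. Sampling rate and filter orders are assumptions.
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

def preprocess(eeg: np.ndarray, fs: int = 256) -> np.ndarray:
    """eeg: (n_channels, n_samples); returns the filtered signal."""
    # Remove 50 Hz power-line interference.
    b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)
    eeg = filtfilt(b_notch, a_notch, eeg, axis=-1)
    # Keep the 0.5-40 Hz band.
    b_band, a_band = butter(N=4, Wn=[0.5, 40.0], btype="bandpass", fs=fs)
    return filtfilt(b_band, a_band, eeg, axis=-1)

filtered = preprocess(np.random.randn(2, 256 * 10))
```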

Next, the author created an artifact rejection algorithm. An epoch is rejected if the absolute power difference between two consecutive epochs is greater than a predefined threshold.
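
A simple sketch of this rejection rule follows; the threshold value is arbitrary and purely illustrative:

```python
# Sketch of the epoch-rejection rule: drop an epoch when the absolute
# difference in mean power from the previous epoch exceeds a threshold.
import numpy as np

def reject_artifacts(epochs: np.ndarray, threshold: float = 5.0) -> np.ndarray:
    """epochs: (n_epochs, n_channels, n_samples); returns the retained epochs."""
    power = (epochs ** 2).mean(axis=(1, 2))   # mean power of each epoch
    keep = [0]                                # always keep the first epoch
    for i in range(1, len(epochs)):
        if abs(power[i] - power[i - 1]) <= threshold:
            keep.append(i)
    return epochs[keep]
```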

In the final classification step, the authors combined different spectral band-power ratios to track each subject's task-based mental activity: for MA the ratio is alpha/delta, for WA it is delta/low beta, and for ME it is delta/alpha.
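
These ratios can be sketched with Welch's method as below; the band edges are common EEG conventions, not values quoted in the paper:

```python
# Sketch of the band-power ratios used to track mental activity, computed
# with Welch's method. Band edges are conventional assumptions.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4.0), "alpha": (8.0, 12.0), "low_beta": (12.0, 20.0)}

def band_power(epoch: np.ndarray, band: str, fs: int = 256) -> float:
    """Mean power of `epoch` (channels x samples) within the named band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    lo, hi = BANDS[band]
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[..., mask].mean())

def task_ratios(epoch: np.ndarray, fs: int = 256) -> dict:
    """MA = alpha/delta, WA = delta/low beta, ME = delta/alpha."""
    alpha = band_power(epoch, "alpha", fs)
    delta = band_power(epoch, "delta", fs)
    low_beta = band_power(epoch, "low_beta", fs)
    return {"MA": alpha / delta, "WA": delta / low_beta, "ME": delta / alpha}
```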

Then, change point detection algorithms are used to track changes in these ratios. Sudden increases or decreases in these ratios indicate a change in the user's mental state.
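
The paper's exact detector is not described here, so the sketch below is only a simplistic stand-in: it flags an epoch index whenever the mean ratio over a recent window jumps by more than a threshold relative to the preceding window.

```python
# Simplistic stand-in for a change-point detector on a ratio time series;
# a real system would likely use a proper change-point algorithm instead.
import numpy as np

def detect_changes(ratios: np.ndarray, window: int = 8, threshold: float = 0.5):
    """ratios: 1-D series of band-power ratios over consecutive epochs."""
    change_points = []
    for i in range(window, len(ratios) - window):
        before = ratios[i - window:i].mean()
        after = ratios[i:i + window].mean()
        if abs(after - before) > threshold:   # sudden increase or decrease
            change_points.append(i)
    return change_points
```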


For subjects with ALS, the model achieved an accuracy of 73% on the MA task, 74% on the WA task, and 60% on the ME task.
