The HDFS shell commands are modeled on the Linux file commands, so anyone familiar with those will find them easy to pick up. Note that HDFS has no notion of a current working directory (no pwd), so every path must be given in full. (This article is based on Hadoop 2.5, CDH 5.2.1.)
List the available commands, their usage and help text, and point the client at a namenode other than the one in the configuration file:
hdfs dfs -usage
hadoop dfs -usage ls
hadoop dfs -help
# -fs <namenode URI>: specify a namenode explicitly
hdfs dfs -fs hdfs://test1:9000 -ls /
——————————————————————————–
-df [-h] [path …] :
Shows the capacity, free and used space of the filesystem. If the filesystem has
multiple partitions, and no path to a particular partition is specified, then
the status of the root partitions will be shown.
$ hdfs dfs -df
Filesystem           Size          Used   Available     Use%
hdfs://test1:9000    413544071168  98304  345612906496  0%
——————————————————————————–
-mkdir [-p] path … :
Create a directory in specified location.
-p Do not fail if the directory already exists
-rmdir dir … :
Removes the directory entry specified by each directory argument, provided it is
empty.
hdfs dfs -mkdir /tmp
hdfs dfs -mkdir /tmp/txt
hdfs dfs -rmdir /tmp/txt
hdfs dfs -mkdir -p /tmp/txt/hello
——————————————————————————–
-copyFromLocal [-f] [-p] localsrc … dst :
Identical to the -put command.
-copyToLocal [-p] [-ignoreCrc] [-crc] src … localdst :
Identical to the -get command.
-moveFromLocal localsrc …
Same as -put, except that the source is deleted after it’s copied.
-put [-f] [-p] localsrc …
Copy files from the local file system into fs. Copying fails if the file already
exists, unless the -f flag is given. Passing -p preserves access and
modification times, ownership and the mode. Passing -f overwrites the
destination if it already exists.
-get [-p] [-ignoreCrc] [-crc] src … localdst :
Copy files that match the file pattern src to the local name. src is kept.
When copying multiple files, the destination must be a directory. Passing -p
preserves access and modification times, ownership and the mode.
-getmerge [-nl] src localdst :
Get all the files in the directories that match the source file pattern and
merge and sort them to only one file on local fs. src is kept.
-nl Add a newline character at the end of each file.
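Since -getmerge just concatenates the matched files (in name order) into a single local file, its effect can be sketched with plain shell on local files; the file names below are made up for illustration:

```shell
# Local sketch of what -getmerge does: concatenate files in order.
printf 'one' > a.txt
printf 'two' > b.txt

# Without -nl, file contents run together:
cat a.txt b.txt > merged.txt        # merged.txt now holds "onetwo"

# With -nl, a newline is appended after each file:
for f in a.txt b.txt; do cat "$f"; printf '\n'; done > merged_nl.txt
```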
-cat [-ignoreCrc] src … :
Fetch all files that match the file pattern src and display their content on
stdout.
echo "Hello, Hadoop" > hadoop.txt
echo "Hello, HDFS" > hdfs.txt
hdfs dfs -put *.txt /tmp
dd if=/dev/zero of=/tmp/test.zero bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.93978 s, 1.1 GB/s
hdfs dfs -moveFromLocal /tmp/test.zero /tmp

# wildcards: ? * {} []
hdfs dfs -cat /tmp/*.txt
Hello, Hadoop
Hello, HDFS
hdfs dfs -cat /tmp/h?fs.txt
Hello, HDFS
hdfs dfs -cat /tmp/h{a,d}*.txt
Hello, Hadoop
Hello, HDFS
hdfs dfs -cat /tmp/h[a-d]*.txt
Hello, Hadoop
Hello, HDFS
——————————————————————————–
-ls [-d] [-h] [-R] [path …] :
List the contents that match the specified file pattern. If path is not
specified, the contents of /user/currentUser will be listed. Directory entries
are of the form:
permissions – userId groupId sizeOfDirectory(in bytes)
modificationDate(yyyy-MM-dd HH:mm) directoryName
and file entries are of the form:
permissions numberOfReplicas userId groupId sizeOfFile(in bytes)
modificationDate(yyyy-MM-dd HH:mm) fileName
-d Directories are listed as plain files.
-h Formats the sizes of files in a human-readable fashion rather than a number
of bytes.
-R Recursively list the contents of directories.
hdfs dfs -ls /tmp
hdfs dfs -ls -d /tmp
hdfs dfs -ls -h /tmp
Found 4 items
-rw-r--r--   3 hdfs supergroup   14 2014-12-18 10:00 /tmp/hadoop.txt
-rw-r--r--   3 hdfs supergroup   12 2014-12-18 10:00 /tmp/hdfs.txt
-rw-r--r--   3 hdfs supergroup  1 G 2014-12-18 10:19 /tmp/test.zero
drwxr-xr-x   - hdfs supergroup    0 2014-12-18 10:07 /tmp/txt
hdfs dfs -ls -R -h /tmp
-rw-r--r--   3 hdfs supergroup   14 2014-12-18 10:00 /tmp/hadoop.txt
-rw-r--r--   3 hdfs supergroup   12 2014-12-18 10:00 /tmp/hdfs.txt
-rw-r--r--   3 hdfs supergroup  1 G 2014-12-18 10:19 /tmp/test.zero
drwxr-xr-x   - hdfs supergroup    0 2014-12-18 10:07 /tmp/txt
drwxr-xr-x   - hdfs supergroup    0 2014-12-18 10:07 /tmp/txt/hello
——————————————————————————–
-checksum src … :
Dump checksum information for files that match the file pattern src to stdout.
Note that this requires a round-trip to a datanode storing each block of the
file, and thus is not efficient to run on a large number of files. The checksum
of a file depends on its content, block size and the checksum algorithm and
parameters used for creating the file.
hdfs dfs -checksum /tmp/test.zero
/tmp/test.zero  MD5-of-262144MD5-of-512CRC32C  000002000000000000040000f960570129a4ef3a7e179073adceae97
——————————————————————————–
-appendToFile localsrc … dst :
Appends the contents of all the given local files to the given dst file. The dst
file will be created if it does not exist. If localSrc is -, then the input is
read from stdin.
hdfs dfs -appendToFile *.txt hello.txt
hdfs dfs -cat hello.txt
Hello, Hadoop
Hello, HDFS
——————————————————————————–
-tail [-f] file :
Show the last 1KB of the file.
hdfs dfs -tail -f hello.txt
# waits for output; Ctrl+C to stop
# in another terminal:
hdfs dfs -appendToFile - hello.txt
# then type something
——————————————————————————–
-cp [-f] [-p | -p[topax]] src …
Copy files that match the file pattern src to a destination. When copying
multiple files, the destination must be a directory. Passing -p preserves status
[topax] (timestamps, ownership, permission, ACLs, XAttr). If -p is specified
with no arg, then preserves timestamps, ownership, permission. If -pa is
specified, then it preserves permission also, because ACL is a super-set of
permission. Passing -f overwrites the destination if it already exists. raw
namespace extended attributes are preserved if (1) they are supported (HDFS
only) and, (2) all of the source and target pathnames are in the /.reserved/raw
hierarchy. raw namespace xattr preservation is determined solely by the presence
(or absence) of the /.reserved/raw prefix and not by the -p option.
-mv src … dst :
Move files that match the specified file pattern src to a destination dst.
When moving multiple files, the destination must be a directory.
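Like its Unix counterpart, -mv with several sources requires the destination to be an existing directory; a local shell illustration of the same rule (file names invented):

```shell
mkdir -p dest
: > x.txt
: > y.txt

# With multiple sources the last argument must be a directory:
mv x.txt y.txt dest/
ls dest                              # lists x.txt and y.txt
```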
-rm [-f] [-r|-R] [-skipTrash] src … :
Delete all files that match the specified file pattern. Equivalent to the Unix
command “rm src”
-skipTrash option bypasses trash, if enabled, and immediately deletes src
-f If the file does not exist, do not display a diagnostic message or
modify the exit status to reflect an error.
-[rR] Recursively deletes directories
-stat [format] path … :
Print statistics about the file/directory at path in the specified format.
Format accepts filesize in blocks (%b), group name of owner(%g), filename (%n),
block size (%o), replication (%r), user name of owner(%u), modification date
(%y, %Y)
hdfs dfs -stat /tmp/hadoop.txt
2014-12-18 02:00:08
hdfs dfs -cp -p -f /tmp/hello.txt /tmp/hello.txt.bak
hdfs dfs -stat /tmp/hello.txt.bak
hdfs dfs -rm /tmp/not_exists
rm: `/tmp/not_exists': No such file or directory
echo $?
1
hdfs dfs -rm -f /tmp/123321123123123
echo $?
0
——————————————————————————–
-count [-q] path … :
Count the number of directories, files and bytes under the paths
that match the specified file pattern. The output columns are:
DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME or
QUOTA REMAINING_QUOTA SPACE_QUOTA REMAINING_SPACE_QUOTA
DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME
-du [-s] [-h] path … :
Show the amount of space, in bytes, used by the files that match the specified
file pattern. The following flags are optional:
-s Rather than showing the size of each individual file that matches the
pattern, shows the total (summary) size.
-h Formats the sizes of files in a human-readable fashion rather than a number
of bytes.
Note that, even without the -s option, this only shows size summaries one level
deep into a directory.
The output is in the form
size name(full path)
hdfs dfs -count /tmp
           3            3         1073741850 /tmp
hdfs dfs -du /tmp
14          /tmp/hadoop.txt
12          /tmp/hdfs.txt
1073741824  /tmp/test.zero
0           /tmp/txt
hdfs dfs -du -s /tmp
1073741850  /tmp
hdfs dfs -du -s -h /tmp
1.0 G  /tmp
——————————————————————————–
-chgrp [-R] GROUP PATH… :
This is equivalent to -chown … :GROUP …
-chmod [-R] MODE[,MODE]… | OCTALMODE PATH… :
Changes permissions of a file. This works similar to the shell’s chmod command
with a few exceptions.
-R modifies the files recursively. This is the only option currently
supported.
MODE Mode is the same as mode used for the shell’s command. The only
letters recognized are ‘rwxXt’, e.g. +t,a+r,g-w,+rwx,o=r.
OCTALMODE Mode specified in 3 or 4 digits. If 4 digits, the first may be 1 or
0 to turn the sticky bit on or off, respectively. Unlike the
shell command, it is not possible to specify only part of the
mode, e.g. 754 is same as u=rwx,g=rx,o=r.
If none of ‘augo’ is specified, ‘a’ is assumed and unlike the shell command, no
umask is applied.
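The octal and symbolic forms name the same permission bits; for example, 754 and u=rwx,g=rx,o=r set identical modes, which can be checked locally (GNU stat assumed):

```shell
: > demo.txt
chmod 754 demo.txt
stat -c '%a %A' demo.txt             # 754 -rwxr-xr--

# The symbolic spelling yields exactly the same bits:
chmod u=rwx,g=rx,o=r demo.txt
stat -c '%a' demo.txt                # 754
```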
-chown [-R] [OWNER][:[GROUP]] PATH… :
Changes owner and group of a file. This is similar to the shell’s chown command
with a few exceptions.
-R modifies the files recursively. This is the only option currently
supported.
If only the owner or group is specified, then only the owner or group is
modified. The owner and group names may only consist of digits, alphabet, and
any of [-_./@a-zA-Z0-9]. The names are case sensitive.
WARNING: Avoid using ‘.’ to separate user name and group though Linux allows it.
If user names have dots in them and you are using local file system, you might
see surprising results since the shell command ‘chown’ is used for local files.
-touchz path … :
Creates a file of zero length at path with current time as the timestamp of
that path. An error is returned if the file exists with non-zero length
hdfs dfs -mkdir -p /user/spark/tmp
hdfs dfs -chown -R spark:hadoop /user/spark
hdfs dfs -chmod -R 775 /user/spark/tmp
hdfs dfs -ls -d /user/spark/tmp
drwxrwxr-x   - spark hadoop   0 2014-12-18 14:51 /user/spark/tmp
hdfs dfs -chmod +t /user/spark/tmp
# as user spark:
hdfs dfs -touchz /user/spark/tmp/own_by_spark
# as user hadoop:
useradd -g hadoop hadoop
su - hadoop
id
uid=502(hadoop) gid=492(hadoop) groups=492(hadoop)
hdfs dfs -rm /user/spark/tmp/own_by_spark
rm: Permission denied by sticky bit setting: user=hadoop, inode=own_by_spark
# A member of the superuser group (dfs.permissions.superusergroup = hdfs) can ignore the sticky bit.
——————————————————————————–
-test -[defsz] path :
Answer various questions about path, with result via exit status.
-d return 0 if path is a directory.
-e return 0 if path exists.
-f return 0 if path is a file.
-s return 0 if file path is greater than zero bytes in size.
-z return 0 if file path is zero bytes in size, else return 1.
hdfs dfs -test -d /tmp
echo $?
0
hdfs dfs -test -f /tmp/txt
echo $?
1
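hdfs dfs -test deliberately mirrors the shell's own test builtin, so the same exit-status scripting pattern applies; a local analog with invented paths:

```shell
mkdir -p somedir
: > somefile                          # zero-length file

test -d somedir; echo $?              # 0: it is a directory
test -f somedir; echo $?              # 1: not a regular file
test -s somefile; echo $?             # 1: empty, so -s fails

# The same shape works against HDFS, e.g.:
#   if hdfs dfs -test -e /tmp/flag; then ...; fi
if test -d somedir; then echo "directory exists"; fi
```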
——————————————————————————–
-setrep [-R] [-w] rep path … :
Set the replication level of a file. If path is a directory then the command
recursively changes the replication factor of all files under the directory tree
rooted at path.
-w It requests that the command waits for the replication to complete. This
can potentially take a very long time.
hdfs fsck /tmp/test.zero -blocks -locations
 Average block replication:     3.0
hdfs dfs -setrep -w 4 /tmp/test.zero
Replication 4 set: /tmp/test.zero
Waiting for /tmp/test.zero .... done
hdfs fsck /tmp/test.zero -blocks
 Average block replication:     4.0
Original article: HDFS File Commands. Thanks to the original author for sharing.
