HDFS's shell commands are modeled on the Linux file commands, so anyone familiar with the Linux command line will find them easy to pick up. Note that Hadoop DFS has no notion of a working directory (no pwd), so every path must be given in full. (This article is based on Hadoop 2.5, CDH 5.2.1.)
List the available commands, their syntax and help text, and select a namenode other than the one in the configuration file:
hdfs dfs -usage
hadoop dfs -usage ls
hadoop dfs -help
# -fs <local> : specify a namenode
hdfs dfs -fs hdfs://test1:9000 -ls /
——————————————————————————–
-df [-h] [path …] :
Shows the capacity, free and used space of the filesystem. If the filesystem has
multiple partitions, and no path to a particular partition is specified, then
the status of the root partitions will be shown.
$ hdfs dfs -df
Filesystem           Size          Used   Available     Use%
hdfs://test1:9000    413544071168  98304  345612906496  0%
——————————————————————————–
-mkdir [-p] path … :
Create a directory in specified location.
-p Do not fail if the directory already exists
-rmdir dir … :
Removes the directory entry specified by each directory argument, provided it is
empty.
hdfs dfs -mkdir /tmp
hdfs dfs -mkdir /tmp/txt
hdfs dfs -rmdir /tmp/txt
hdfs dfs -mkdir -p /tmp/txt/hello
——————————————————————————–
-copyFromLocal [-f] [-p] localsrc … dst :
Identical to the -put command.
-copyToLocal [-p] [-ignoreCrc] [-crc] src … localdst :
Identical to the -get command.
-moveFromLocal localsrc …
Same as -put, except that the source is deleted after it’s copied.
-put [-f] [-p] localsrc …
Copy files from the local file system into fs. Copying fails if the file already
exists, unless the -f flag is given. Passing -p preserves access and
modification times, ownership and the mode. Passing -f overwrites the
destination if it already exists.
-get [-p] [-ignoreCrc] [-crc] src … localdst :
Copy files that match the file pattern src to the local name. src is kept.
When copying multiple files, the destination must be a directory. Passing -p
preserves access and modification times, ownership and the mode.
-getmerge [-nl] src localdst :
Get all the files in the directories that match the source file pattern and
merge and sort them to only one file on local fs. src is kept.
-nl Add a newline character at the end of each file.
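The merge semantics of -getmerge can be reproduced with plain local tools: it concatenates every matching file into one local file, and -nl appends a newline after each file's content. The sketch below (the demo directory and file names are made up for illustration) mimics what `hdfs dfs -getmerge -nl` would produce for the same two files stored on HDFS:

```shell
# Local simulation of -getmerge -nl; no HDFS cluster needed.
mkdir -p /tmp/getmerge_demo && cd /tmp/getmerge_demo
printf 'Hello, Hadoop' > a.txt   # note: no trailing newline
printf 'Hello, HDFS'   > b.txt
# Concatenate each file and append a newline after it, like -nl does.
for f in a.txt b.txt; do cat "$f"; echo; done > merged.txt
cat merged.txt
# Hello, Hadoop
# Hello, HDFS
```

Without -nl, the two files would run together as `Hello, HadoopHello, HDFS`, since neither source file ends with a newline.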
-cat [-ignoreCrc] src … :
Fetch all files that match the file pattern src and display their content on
stdout.
echo "Hello, Hadoop" > hadoop.txt
echo "Hello, HDFS" > hdfs.txt
dd if=/dev/zero of=/tmp/test.zero bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.93978 s, 1.1 GB/s
hdfs dfs -put *.txt /tmp
hdfs dfs -moveFromLocal /tmp/test.zero /tmp
# glob patterns: ? * {} []
hdfs dfs -cat /tmp/*.txt
Hello, Hadoop
Hello, HDFS
hdfs dfs -cat /tmp/h?fs.txt
Hello, HDFS
hdfs dfs -cat /tmp/h{a,d}*.txt
Hello, Hadoop
Hello, HDFS
hdfs dfs -cat /tmp/h[a-d]*.txt
Hello, Hadoop
Hello, HDFS
——————————————————————————–
-ls [-d] [-h] [-R] [path …] :
List the contents that match the specified file pattern. If path is not
specified, the contents of /user/currentUser will be listed. Directory entries
are of the form:
permissions – userId groupId sizeOfDirectory(in bytes)
modificationDate(yyyy-MM-dd HH:mm) directoryName
and file entries are of the form:
permissions numberOfReplicas userId groupId sizeOfFile(in bytes)
modificationDate(yyyy-MM-dd HH:mm) fileName
-d Directories are listed as plain files.
-h Formats the sizes of files in a human-readable fashion rather than a number
of bytes.
-R Recursively list the contents of directories.
hdfs dfs -ls /tmp
hdfs dfs -ls -d /tmp
hdfs dfs -ls -h /tmp
Found 4 items
-rw-r--r--   3 hdfs supergroup     14 2014-12-18 10:00 /tmp/hadoop.txt
-rw-r--r--   3 hdfs supergroup     12 2014-12-18 10:00 /tmp/hdfs.txt
-rw-r--r--   3 hdfs supergroup    1 G 2014-12-18 10:19 /tmp/test.zero
drwxr-xr-x   - hdfs supergroup      0 2014-12-18 10:07 /tmp/txt
hdfs dfs -ls -R -h /tmp
-rw-r--r--   3 hdfs supergroup     14 2014-12-18 10:00 /tmp/hadoop.txt
-rw-r--r--   3 hdfs supergroup     12 2014-12-18 10:00 /tmp/hdfs.txt
-rw-r--r--   3 hdfs supergroup    1 G 2014-12-18 10:19 /tmp/test.zero
drwxr-xr-x   - hdfs supergroup      0 2014-12-18 10:07 /tmp/txt
drwxr-xr-x   - hdfs supergroup      0 2014-12-18 10:07 /tmp/txt/hello
——————————————————————————–
-checksum src … :
Dump checksum information for files that match the file pattern src to stdout.
Note that this requires a round-trip to a datanode storing each block of the
file, and thus is not efficient to run on a large number of files. The checksum
of a file depends on its content, block size and the checksum algorithm and
parameters used for creating the file.
hdfs dfs -checksum /tmp/test.zero
/tmp/test.zero  MD5-of-262144MD5-of-512CRC32C  000002000000000000040000f960570129a4ef3a7e179073adceae97
——————————————————————————–
-appendToFile localsrc … dst :
Appends the contents of all the given local files to the given dst file. The dst
file will be created if it does not exist. If localSrc is -, then the input is
read from stdin.
hdfs dfs -appendToFile *.txt hello.txt
hdfs dfs -cat hello.txt
Hello, Hadoop
Hello, HDFS
——————————————————————————–
-tail [-f] file :
Show the last 1KB of the file.
hdfs dfs -tail -f hello.txt
# waits for new output; press Ctrl+C to stop
# in another terminal:
hdfs dfs -appendToFile - hello.txt
# then type something
——————————————————————————–
-cp [-f] [-p | -p[topax]] src …
Copy files that match the file pattern src to a destination. When copying
multiple files, the destination must be a directory. Passing -p preserves status
[topax] (timestamps, ownership, permission, ACLs, XAttr). If -p is specified
with no arg, then preserves timestamps, ownership, permission. If -pa is
specified, then preserves permission also because ACL is a super-set of
permission. Passing -f overwrites the destination if it already exists. raw
namespace extended attributes are preserved if (1) they are supported (HDFS
only) and, (2) all of the source and target pathnames are in the /.reserved/raw
hierarchy. raw namespace xattr preservation is determined solely by the presence
(or absence) of the /.reserved/raw prefix and not by the -p option.
-mv src … dst :
Move files that match the specified file pattern src to a destination dst.
When moving multiple files, the destination must be a directory.
-rm [-f] [-r|-R] [-skipTrash] src … :
Delete all files that match the specified file pattern. Equivalent to the Unix
command “rm src”
-skipTrash option bypasses trash, if enabled, and immediately deletes src
-f If the file does not exist, do not display a diagnostic message or
modify the exit status to reflect an error.
-[rR] Recursively deletes directories
-stat [format] path … :
Print statistics about the file/directory at path in the specified format.
Format accepts filesize in blocks (%b), group name of owner(%g), filename (%n),
block size (%o), replication (%r), user name of owner(%u), modification date
(%y, %Y)
hdfs dfs -stat /tmp/hadoop.txt
2014-12-18 02:00:08
hdfs dfs -cp -p -f /tmp/hello.txt /tmp/hello.txt.bak
hdfs dfs -stat /tmp/hello.txt.bak
hdfs dfs -rm /tmp/not_exists
rm: `/tmp/not_exists': No such file or directory
echo $?
1
hdfs dfs -rm -f /tmp/123321123123123
echo $?
0
——————————————————————————–
-count [-q] path … :
Count the number of directories, files and bytes under the paths
that match the specified file pattern. The output columns are:
DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME or
QUOTA REMAINING_QUOTA SPACE_QUOTA REMAINING_SPACE_QUOTA
DIR_COUNT FILE_COUNT CONTENT_SIZE FILE_NAME
-du [-s] [-h] path … :
Show the amount of space, in bytes, used by the files that match the specified
file pattern. The following flags are optional:
-s Rather than showing the size of each individual file that matches the
pattern, shows the total (summary) size.
-h Formats the sizes of files in a human-readable fashion rather than a number
of bytes.
Note that, even without the -s option, this only shows size summaries one level
deep into a directory.
The output is in the form
size name(full path)
hdfs dfs -count /tmp
           3            3         1073741850 /tmp
hdfs dfs -du /tmp
14          /tmp/hadoop.txt
12          /tmp/hdfs.txt
1073741824  /tmp/test.zero
0           /tmp/txt
hdfs dfs -du -s /tmp
1073741850  /tmp
hdfs dfs -du -s -h /tmp
1.0 G  /tmp
——————————————————————————–
-chgrp [-R] GROUP PATH… :
This is equivalent to -chown … :GROUP …
-chmod [-R] MODE[,MODE]… | OCTALMODE PATH… :
Changes permissions of a file. This works similar to the shell’s chmod command
with a few exceptions.
-R modifies the files recursively. This is the only option currently
supported.
MODE Mode is the same as mode used for the shell’s command. The only
letters recognized are ‘rwxXt’, e.g. +t,a+r,g-w,+rwx,o=r.
OCTALMODE Mode specified in 3 or 4 digits. If 4 digits, the first may be 1 or
0 to turn the sticky bit on or off, respectively. Unlike the
shell command, it is not possible to specify only part of the
mode, e.g. 754 is same as u=rwx,g=rx,o=r.
If none of ‘augo’ is specified, ‘a’ is assumed and unlike the shell command, no
umask is applied.
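hdfs dfs -chmod uses the same symbolic and octal notation as the local Linux chmod, so the octal rules above can be checked against an ordinary local file first. A quick sketch (throwaway file path; GNU coreutils `stat -c` assumed for printing the mode):

```shell
# Demonstrate octal modes locally before applying them on HDFS.
touch /tmp/chmod_demo.txt
chmod 754 /tmp/chmod_demo.txt     # 754 is the same as u=rwx,g=rx,o=r
stat -c '%a %A' /tmp/chmod_demo.txt
# 754 -rwxr-xr--
chmod 1754 /tmp/chmod_demo.txt    # a leading 1 turns the sticky bit on
stat -c '%a' /tmp/chmod_demo.txt
# 1754
```

The same octal strings work verbatim with `hdfs dfs -chmod`; the difference noted above is that HDFS applies no umask and assumes 'a' when none of 'augo' is given.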
-chown [-R] [OWNER][:[GROUP]] PATH… :
Changes owner and group of a file. This is similar to the shell’s chown command
with a few exceptions.
-R modifies the files recursively. This is the only option currently
supported.
If only the owner or group is specified, then only the owner or group is
modified. The owner and group names may only consist of digits, alphabet, and
any of [-_./@a-zA-Z0-9]. The names are case sensitive.
WARNING: Avoid using ‘.’ to separate user name and group though Linux allows it.
If user names have dots in them and you are using local file system, you might
see surprising results since the shell command ‘chown’ is used for local files.
-touchz path … :
Creates a file of zero length at path with current time as the timestamp of
that path. An error is returned if the file exists with non-zero length
hdfs dfs -mkdir -p /user/spark/tmp
hdfs dfs -chown -R spark:hadoop /user/spark
hdfs dfs -chmod -R 775 /user/spark/tmp
hdfs dfs -ls -d /user/spark/tmp
drwxrwxr-x   - spark hadoop   0 2014-12-18 14:51 /user/spark/tmp
hdfs dfs -chmod +t /user/spark/tmp
# as user spark:
hdfs dfs -touchz /user/spark/tmp/own_by_spark
# as user hadoop:
useradd -g hadoop hadoop
su - hadoop
id
uid=502(hadoop) gid=492(hadoop) groups=492(hadoop)
hdfs dfs -rm /user/spark/tmp/own_by_spark
rm: Permission denied by sticky bit setting: user=hadoop, inode=own_by_spark
# A superuser (dfs.permissions.superusergroup = hdfs) can ignore the sticky bit setting
——————————————————————————–
-test -[defsz] path :
Answer various questions about path, with result via exit status.
-d return 0 if path is a directory.
-e return 0 if path exists.
-f return 0 if path is a file.
-s return 0 if file path is greater than zero bytes in size.
-z return 0 if file path is zero bytes in size, else return 1.
hdfs dfs -test -d /tmp
echo $?
0
hdfs dfs -test -f /tmp/txt
echo $?
1
——————————————————————————–
-setrep [-R] [-w] rep path … :
Set the replication level of a file. If path is a directory then the command
recursively changes the replication factor of all files under the directory tree
rooted at path.
-w It requests that the command waits for the replication to complete. This
can potentially take a very long time.
hdfs fsck /tmp/test.zero -blocks -locations
 Average block replication:     3.0
hdfs dfs -setrep -w 4 /tmp/test.zero
Replication 4 set: /tmp/test.zero
Waiting for /tmp/test.zero .... done
hdfs fsck /tmp/test.zero -blocks
 Average block replication:     4.0
Original post: HDFS文件命令. Thanks to the original author for sharing.
